home:zaitor:branches:Essentials
gstreamer-plugins-bad-codecs
Changes of Revision 2
gstreamer-plugins-bad-codecs.spec
Changed
@@ -4,10 +4,10 @@
 %define _name gst-plugins-bad
 %define gst_branch 1.0
-%define _version 1.26.0
+%define _version 1.28.0
 Name: gstreamer-plugins-bad-codecs
-Version: 1.26.10
+Version: 1.28.0
 Release: 0
 Summary: Codecs/plugins for gstreamer-plugins-bad
 License: LGPL-2.1-or-later
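The hunk above brings the `_version` macro and the `Version:` tag back in sync at 1.28.0 (previously they disagreed: 1.26.0 vs 1.26.10). A reviewer could script that consistency check; the spec excerpt below is copied from the diff, while the `field` helper is purely illustrative:

```python
import re

# Minimal excerpt of the updated spec from the diff above (illustrative).
SPEC = """\
%define _version 1.28.0
Name: gstreamer-plugins-bad-codecs
Version: 1.28.0
Release: 0
"""

def field(spec: str, pattern: str) -> str:
    """Return the first capture group matching a spec field."""
    m = re.search(pattern, spec, re.MULTILINE)
    if not m:
        raise ValueError(f"no match for {pattern!r}")
    return m.group(1)

macro = field(SPEC, r"^%define _version (\S+)")
version = field(SPEC, r"^Version: (\S+)")
print(macro, version, macro == version)
```

If the two values ever drift apart again, the final comparison flags it immediately.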
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/AUTHORS
Deleted
@@ -1,21 +0,0 @@
-Erik Walthinsen <omega@temple-baptist.com>
-Matt Howell <mhowell@users.sourceforge.net>
-Brent Bradburn <bbradburn@users.sourceforge.net>
-Wim Taymans <wim.taymans@chello.be>
-Richard Boulton <richard@tartarus.org>
-Zaheer Abbas Merali <zaheerabbas at merali dot org>
-David I. Lehn <dlehn@users.sourceforge.net>
-Chris Emerson <chris@tartarus.org>
-Jens Thiele <karme@unforgettable.com>
-Thomas Nyberg <thomas@codefactory.se>
-Bastien Nocera <hadess@hadess.net>
-Christian Fredrik Kalager Schaller <Uraeus@linuxrising.org>
-Thomas Vander Stichele <thomas@apestaart.org>
-Andy Wingo <wingo@pobox.com>
-Cameron Hutchison <camh@xdna.net>
-David Schleef <ds@schleef.org>
-Benjamin Otte <in7y118@public.uni-hamburg.de>
-Ronald Bultje <rbultje@ronald.bitfreak.net>
-Julien MOUTTE <julien@moutte.net>
-Jan Schmidt <thaytan@mad.scientist.com>
-Arwed v. Merkatz <v.merkatz@gmx.net>
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/REQUIREMENTS
Deleted
@@ -1,59 +0,0 @@
-GStreamer uses a *large* array of tools and libraries, most of which are
-optional. We have attempted to make sure that any code that depends on
-optional libraries doesn't get built unless you have those libraries. If
-you find this not to be the case, please, let us know by filing a bug
-report at http://bugzilla.gnome.org/.
-
-
-Required tools:
-===============
-
-An extra set of tools is required if you wish to build GStreamer out of
-CVS (using autogen.sh):
-
-autoconf 2.52 or better
-automake 1.5
-gettext 0.11.5
-libtool v1.4 or better
-pkgconfig 0.9.0 or better (http://www.freedesktop.org/software/pkgconfig/)
-
-Required libraries:
-===================
-
-The core GStreamer libraries. See the gstreamer/ module in GStreamer cvs, or
-the version that corresponds to this plugin release.
-
-Optional libraries:
-===================
-
-This file lists supporting libraries for which gst-plugins contains plugins,
-as well as their minimum version. You can find the corresponding plugins in
-ext/(library)
-
-libdvdread (for the dvdsrc)
-  http://dvdnav.mplayerhq.hu/
-  (optional: libcss for encrypted DVDs)
-libdvdnav (for the dvdnavsrc)
-  http://dvdnav.mplayerhq.hu/
-  (optional: libcss for encrypted DVDs)
-  >= 0.1.9
-libgsm (for the gsm plugin)
-  http://kbs.cs.tu-berlin.de/~jutta/toast.html
-sdl (for the sdl sink)
-  http://www.libsdl.org
-dtsdec (for DTS audio decoding)
-  http://www.videolan.org/libdca.html
-musepack (for musepack audio codec/format)
-  (http://www.musepack.net/)
-libamrnb (for AMR-NB support)
-  (http://www.penguin.cz/~utx/amr)
-libamrwb (for AMR-WB support)
-  (http://www.penguin.cz/~utx/amr)
-librtmp (for RTMP support)
-  (http://rtmpdump.mplayerhq.hu/)
-
-Optional (debian) packages:
-===========================
-
-gtk-doc-tools 1.6 -- needed to build documentation
-python-xml -- needed to build plugin documentation
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/docs/random
Deleted
-(directory)
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/docs/random/LICENSE
Deleted
@@ -1,18 +0,0 @@
-/* GStreamer
- * Copyright (C) <1999> Erik Walthinsen <omega@cse.ogi.edu>
- *
- * This library is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Library General Public
- * License as published by the Free Software Foundation; either
- * version 2 of the License, or (at your option) any later version.
- *
- * This library is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Library General Public License for more details.
- *
- * You should have received a copy of the GNU Library General Public
- * License along with this library; if not, write to the
- * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
- * Boston, MA 02110-1301, USA.
- */
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/docs/random/PORTED_09
Deleted
@@ -1,27 +0,0 @@
-When porting a plugin start with 0.8 CVS head, not the old code in this module. There are many bugfixes which have gone into 0.8 which you want to keep.
-
-List of ported plugins (update when you commit a ported plugin):
-libmms (alima)
-wavpack (alima)
-musepack (alima)
-ivorbis (alima)
-gsmdec (alima)
-sdl (alima)
-speed (fcarvalho)
-gsmenc (fcarvalho)
-faac (fcarvalho)
-wavenc (fcarvalho)
-effectv (wim)
-mad (wim)
-videofilter (wim)
-aalib (wim)
-libcaca (zeeshan)
-law (wim)
-shout2 (zaheer) - not fully tested
-esdsink (arwed)
-
-osssink is partially done in the threaded branch (wim)
-
-- Remember that some plugins are already ported and now in the gst-plugins-base module.
-
-When you have ported a plugin remember to copy the relevant parts from configure.ac.orig into configure.ac and re-enable it in the Makefile.am files.
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/closedcaption
Deleted
-(directory)
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/closedcaption/gstcea608mux.c
Deleted
@@ -1,501 +0,0 @@ -/* - * GStreamer - * Copyright (C) 2023 Mathieu Duponchelle <mathieu@centricular.com> - * - * This library is free software; you can redistribute it and/or - * modify it under the terms of the GNU Library General Public - * License as published by the Free Software Foundation; either - * version 2 of the License, or (at your option) any later version. - * - * This library is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Library General Public License for more details. - * - * You should have received a copy of the GNU Library General Public - * License along with this library; if not, write to the - * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, - * Boston, MA 02110-1301, USA. - */ - -/** - * SECTION:element-cea608mux - * @title: cea608mux - * @short_description: Combine CC1 and CC3 raw 608 streams - * - * ``` - * gst-launch-1.0 cea608mux name=mux ! fakesink dump=true \ - * filesrc location=one.scc ! sccparse ! closedcaption/x-cea-608 ! ccconverter ! mux. \ - * filesrc location=two.scc ! sccparse ! ccconverter ! closedcaption/x-cea-608, format=raw, field=0 ! \ - * capssetter caps="closedcaption/x-cea-608, format=raw, field=1" ! mux. 
- * ``` - * - * Since: 1.24 - */ - - -#ifdef HAVE_CONFIG_H -# include <config.h> -#endif - -#include <gst/gst.h> -#include <gst/base/base.h> -#include <gst/video/video.h> -#include <string.h> - -#include "ccutils.h" -#include "gstcea608mux.h" - -GST_DEBUG_CATEGORY_STATIC (gst_cea608_mux_debug); -#define GST_CAT_DEFAULT gst_cea608_mux_debug - -enum -{ - PROP_0, - PROP_FORCE_LIVE, -}; - -#define DEFAULT_FORCE_LIVE FALSE - -static GstStaticPadTemplate srctemplate = GST_STATIC_PAD_TEMPLATE ("src", - GST_PAD_SRC, - GST_PAD_ALWAYS, - GST_STATIC_CAPS ("closedcaption/x-cea-608, format=s334-1a, " - "framerate=(fraction){60/1, 60000/1001, 50/1, 30/1, 30000/1001, 25/1, 24/1, 24000/1001}")); - -static GstStaticPadTemplate cc1_template = GST_STATIC_PAD_TEMPLATE ("cc1", - GST_PAD_SINK, - GST_PAD_REQUEST, - GST_STATIC_CAPS ("closedcaption/x-cea-608,format=raw,field=0")); - -static GstStaticPadTemplate cc3_template = GST_STATIC_PAD_TEMPLATE ("cc3", - GST_PAD_SINK, - GST_PAD_REQUEST, - GST_STATIC_CAPS ("closedcaption/x-cea-608,format=raw,field=1")); - -#define parent_class gst_cea608_mux_parent_class -G_DEFINE_TYPE (GstCea608Mux, gst_cea608_mux, GST_TYPE_AGGREGATOR); -GST_ELEMENT_REGISTER_DEFINE (cea608mux, "cea608mux", - GST_RANK_NONE, GST_TYPE_CEA608MUX); - -static void -gst_cea608_mux_finalize (GObject * object) -{ - GstCea608Mux *self = GST_CEA608MUX (object); - - gst_clear_object (&self->cc_buffer); - - G_OBJECT_CLASS (parent_class)->finalize (object); -} - -#define GST_FLOW_NEED_DATA GST_FLOW_CUSTOM_SUCCESS - -static GstAggregatorPad * -find_best_pad (GstAggregator * aggregator, GstClockTime * ts, gboolean timeout) -{ - GstAggregatorPad *best = NULL; - GstClockTime best_ts = GST_CLOCK_TIME_NONE; - GstIterator *pads; - GValue padptr = { 0, }; - gboolean done = FALSE; - - pads = gst_element_iterate_sink_pads (GST_ELEMENT (aggregator)); - - while (!done) { - switch (gst_iterator_next (pads, &padptr)) { - case GST_ITERATOR_OK:{ - GstAggregatorPad *apad = g_value_get_object 
(&padptr); - GstClockTime t = GST_CLOCK_TIME_NONE; - GstBuffer *buffer; - - buffer = gst_aggregator_pad_peek_buffer (apad); - if (!buffer) { - if (!timeout && !gst_aggregator_pad_is_eos (apad)) { - gst_object_replace ((GstObject **) & best, NULL); - best_ts = GST_CLOCK_TIME_NONE; - done = TRUE; - } - break; - } - - if (GST_CLOCK_TIME_IS_VALID (GST_BUFFER_DTS_OR_PTS (buffer))) { - t = gst_segment_to_running_time (&apad->segment, GST_FORMAT_TIME, - GST_BUFFER_PTS (buffer)); - } - - if (!GST_CLOCK_TIME_IS_VALID (best_ts) || - (GST_CLOCK_TIME_IS_VALID (t) && t < best_ts)) { - gst_object_replace ((GstObject **) & best, GST_OBJECT (apad)); - best_ts = t; - } - gst_buffer_unref (buffer); - break; - } - case GST_ITERATOR_DONE: - done = TRUE; - break; - case GST_ITERATOR_RESYNC: - gst_iterator_resync (pads); - /* Clear the best pad and start again. It might have disappeared */ - gst_object_replace ((GstObject **) & best, NULL); - best_ts = GST_CLOCK_TIME_NONE; - break; - case GST_ITERATOR_ERROR: - /* This can't happen if the parameters to gst_iterator_next() are valid */ - g_assert_not_reached (); - break; - } - g_value_reset (&padptr); - } - g_value_unset (&padptr); - gst_iterator_free (pads); - - if (best) { - GST_LOG_OBJECT (aggregator, - "Best pad found with TS %" GST_TIME_FORMAT ": %" GST_PTR_FORMAT, - GST_TIME_ARGS (best_ts), best); - } else { - GST_LOG_OBJECT (aggregator, "Best pad not found"); - } - - if (ts && GST_CLOCK_TIME_IS_VALID (best_ts)) - *ts = best_ts; - - return best; -} - -static gboolean -all_pads_eos (GstAggregator * agg) -{ - GList *l; - gboolean ret = TRUE; - - GST_OBJECT_LOCK (agg); - for (l = GST_ELEMENT_CAST (agg)->sinkpads; l; l = l->next) { - GstAggregatorPad *pad = GST_AGGREGATOR_PAD (l->data); - - if (!gst_aggregator_pad_is_eos (pad)) { - ret = FALSE; - break; - } - } - GST_OBJECT_UNLOCK (agg); - - return ret; -} - -static void -take_s334_both_fields (GstCea608Mux * self, GstBuffer * buffer) -{ - GstMapInfo out = GST_MAP_INFO_INIT; - gint 
s334_len;
-  guint cc_data_len, i;
-
-  gst_buffer_map (buffer, &out, GST_MAP_READWRITE);
-
-  cc_data_len = out.size;
-  cc_buffer_take_cc_data (self->cc_buffer, self->cdp_fps_entry, out.data,
-      &cc_data_len);
-  s334_len = drop_ccp_from_cc_data (out.data, cc_data_len);
-  if (s334_len < 0) {
-    s334_len = 0;
-    goto out;
-  }
-
-  for (i = 0; i < s334_len / 3; i++) {
-    guint byte = out.data[i * 3];
-    /* We have to assume a line offset of 0 */
-    out.data[i * 3] = (byte == 0xfc || byte == 0xf8) ? 0x80 : 0x00;
-  }
-
-out:
-  gst_buffer_unmap (buffer, &out);
-  gst_buffer_set_size (buffer, s334_len);
-}
-
-static GstFlowReturn
-finish_s334_both_fields (GstCea608Mux * self)
-{
-  GstClockTime output_pts = gst_util_uint64_scale_int (GST_SECOND,
-      self->cdp_fps_entry->fps_d * self->n_output_buffers,
-      self->cdp_fps_entry->fps_n);
-  GstClockTime output_duration =
-      gst_util_uint64_scale_int (GST_SECOND, self->cdp_fps_entry->fps_d,
-      self->cdp_fps_entry->fps_n);
-  GstBuffer *output = gst_buffer_new_allocate (NULL, MAX_CDP_PACKET_LEN, NULL);
-  GstSegment *agg_segment =
-      &GST_AGGREGATOR_PAD (GST_AGGREGATOR (self)->srcpad)->segment;
-
-  output_pts += self->start_time;
-
-  take_s334_both_fields (self, output);
-  GST_BUFFER_PTS (output) = output_pts;
-  GST_BUFFER_DURATION (output) = output_duration;
-  GST_DEBUG_OBJECT (self, "Finishing %" GST_PTR_FORMAT, output);
-  self->n_output_buffers += 1;
-  agg_segment->position = output_pts + output_duration;
-
-  return gst_aggregator_finish_buffer (GST_AGGREGATOR (self), output);
-}
-
-static GstFlowReturn
-gst_cea608_mux_aggregate (GstAggregator * aggregator, gboolean timeout)
-{
-  GstCea608Mux *self = GST_CEA608MUX (aggregator);
-  GstFlowReturn flow_ret = GST_FLOW_OK;
-  GstAggregatorPad *best_pad = NULL;
-  GstClockTime output_duration =
-      gst_util_uint64_scale_int (GST_SECOND, self->cdp_fps_entry->fps_d,
-      self->cdp_fps_entry->fps_n);
-  GstSegment *agg_segment = &GST_AGGREGATOR_PAD (aggregator->srcpad)->segment;
-  GstClockTime output_start_time =
agg_segment->position; - GstClockTime output_end_running_time; - - if (agg_segment->position == -1 || agg_segment->position < agg_segment->start) - output_start_time = agg_segment->start; - - if (!GST_CLOCK_TIME_IS_VALID (self->start_time)) { - self->start_time = output_start_time; - GST_DEBUG_OBJECT (self, "Start time %" GST_TIME_FORMAT, - GST_TIME_ARGS (self->start_time)); - } - - best_pad = - find_best_pad (aggregator, &self->earliest_input_running_time, timeout); - - output_end_running_time = - gst_segment_to_running_time (agg_segment, GST_FORMAT_TIME, - output_start_time + output_duration); - - GST_LOG_OBJECT (self, "best-pad: %s, timeout: %d, " - "earliest input running time: %" - GST_TIME_FORMAT ", output running time: %" GST_TIME_FORMAT, - best_pad ? GST_OBJECT_NAME (best_pad) : "NULL", timeout, - GST_TIME_ARGS (self->earliest_input_running_time), - GST_TIME_ARGS (output_end_running_time)); - - if (GST_CLOCK_TIME_IS_VALID (self->earliest_input_running_time) - && self->earliest_input_running_time > output_end_running_time) { - /* Nothing to consume, earliest pad is not ready yet */ - GST_LOG_OBJECT (self, "Nothing to consume"); - } else if (best_pad) { - GstBuffer *buffer; - - buffer = gst_aggregator_pad_pop_buffer (GST_AGGREGATOR_PAD (best_pad)); - - if (buffer) { - GstMapInfo map; - - gst_buffer_map (buffer, &map, GST_MAP_READ); - - if (g_strcmp0 (GST_PAD_NAME (best_pad), "cc1") == 0) { - GST_DEBUG_OBJECT (self, "Consuming CC1 %" GST_PTR_FORMAT, buffer); - cc_buffer_push_separated (self->cc_buffer, map.data, map.size, NULL, 0, - NULL, 0); - } else { - GST_DEBUG_OBJECT (self, "Consuming CC3 %" GST_PTR_FORMAT, buffer); - cc_buffer_push_separated (self->cc_buffer, NULL, 0, map.data, map.size, - NULL, 0); - } - - gst_buffer_unmap (buffer, &map); - gst_buffer_unref (buffer); - } else if (!timeout) { - /* We got flushed and still have time to wait before the deadline */ - flow_ret = GST_AGGREGATOR_FLOW_NEED_DATA; - } - } else if (!gst_aggregator_get_force_live 
(aggregator) - && all_pads_eos (aggregator)) { - GST_INFO_OBJECT (self, "EOS!"); - flow_ret = GST_FLOW_EOS; - } else if (!timeout) { - GST_LOG_OBJECT (self, "Need more data"); - flow_ret = GST_AGGREGATOR_FLOW_NEED_DATA; - } - - if (flow_ret == GST_FLOW_OK) { - if (timeout || output_end_running_time < self->earliest_input_running_time) { - flow_ret = finish_s334_both_fields (self); - } - } else if (flow_ret == GST_FLOW_EOS && !cc_buffer_is_empty (self->cc_buffer)) { - flow_ret = finish_s334_both_fields (self); - } - - g_clear_pointer (&best_pad, gst_object_unref); - - return flow_ret; -} - -static gboolean -gst_cea608_mux_stop (GstAggregator * aggregator) -{ - GstCea608Mux *self = GST_CEA608MUX (aggregator); - - cc_buffer_discard (self->cc_buffer); - self->n_output_buffers = 0; - self->earliest_input_running_time = 0; - self->start_time = GST_CLOCK_TIME_NONE; - - return TRUE; -} - -static GstFlowReturn -gst_cea608_mux_flush (GstAggregator * aggregator) -{ - GstCea608Mux *self = GST_CEA608MUX (aggregator); - GstSegment *agg_segment = &GST_AGGREGATOR_PAD (aggregator->srcpad)->segment; - - GST_DEBUG_OBJECT (self, "Flush"); - - cc_buffer_discard (self->cc_buffer); - self->n_output_buffers = 0; - self->earliest_input_running_time = 0; - self->start_time = GST_CLOCK_TIME_NONE; - agg_segment->position = -1; - - return GST_FLOW_OK; -} - -static gboolean -gst_cea608_mux_negotiated_src_caps (GstAggregator * agg, GstCaps * caps) -{ - GstStructure *s = gst_caps_get_structure (caps, 0); - gint fps_n, fps_d; - GstCea608Mux *self = GST_CEA608MUX (agg); - GstClockTime latency; - gboolean success G_GNUC_UNUSED; /* G_DISABLE_ASSERT */ - - GST_INFO_OBJECT (agg->srcpad, "set src caps: %" GST_PTR_FORMAT, caps); - - success = gst_structure_get_fraction (s, "framerate", &fps_n, &fps_d); - g_assert (success); - self->cdp_fps_entry = cdp_fps_entry_from_fps (fps_n, fps_d); - g_assert (self->cdp_fps_entry != NULL && self->cdp_fps_entry->fps_n != 0); - - latency = - gst_util_uint64_scale 
(GST_SECOND, self->cdp_fps_entry->fps_d, - self->cdp_fps_entry->fps_n); - gst_aggregator_set_latency (agg, latency, latency); - - return TRUE; -} - -static GstBuffer * -gst_cea608_mux_clip (GstAggregator * aggregator, GstAggregatorPad * pad, - GstBuffer * buffer) -{ - GstClockTime time; - - if (!GST_BUFFER_PTS_IS_VALID (buffer)) - return buffer; - - time = gst_segment_to_running_time (&pad->segment, GST_FORMAT_TIME, - GST_BUFFER_PTS (buffer)); - if (!GST_CLOCK_TIME_IS_VALID (time)) { - GST_DEBUG_OBJECT (pad, "Dropping buffer on pad outside segment %" - GST_TIME_FORMAT, GST_TIME_ARGS (GST_BUFFER_PTS (buffer))); - gst_buffer_unref (buffer); - return NULL; - } - - return buffer; -} - -static void -gst_cea608_mux_get_property (GObject * object, - guint prop_id, GValue * value, GParamSpec * pspec) -{ - switch (prop_id) { - case PROP_FORCE_LIVE: - g_value_set_boolean (value, - gst_aggregator_get_force_live (GST_AGGREGATOR (object))); - break; - default: - G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); - break; - } -} - -static void -gst_cea608_mux_set_property (GObject * object, - guint prop_id, const GValue * value, GParamSpec * pspec) -{ - switch (prop_id) { - case PROP_FORCE_LIVE: - gst_aggregator_set_force_live (GST_AGGREGATOR (object), - g_value_get_boolean (value)); - break; - default: - G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); - break; - } -} - -static void -gst_cea608_mux_class_init (GstCea608MuxClass * klass) -{ - GObjectClass *gobject_class; - GstElementClass *gstelement_class; - GstAggregatorClass *aggregator_class; - - gobject_class = (GObjectClass *) klass; - gstelement_class = (GstElementClass *) klass; - aggregator_class = (GstAggregatorClass *) klass; - - gobject_class->finalize = gst_cea608_mux_finalize; - gobject_class->get_property = gst_cea608_mux_get_property; - gobject_class->set_property = gst_cea608_mux_set_property; - - gst_element_class_set_static_metadata (gstelement_class, - "Closed Caption Muxer", - 
"Aggregator", - "Combines raw 608 streams", - "Mathieu Duponchelle <mathieu@centricular.com>"); - - gst_element_class_add_static_pad_template_with_gtype (gstelement_class, - &srctemplate, GST_TYPE_AGGREGATOR_PAD); - gst_element_class_add_static_pad_template_with_gtype (gstelement_class, - &cc1_template, GST_TYPE_AGGREGATOR_PAD); - gst_element_class_add_static_pad_template_with_gtype (gstelement_class, - &cc3_template, GST_TYPE_AGGREGATOR_PAD); - - aggregator_class->aggregate = gst_cea608_mux_aggregate; - aggregator_class->stop = gst_cea608_mux_stop; - aggregator_class->flush = gst_cea608_mux_flush; - aggregator_class->negotiated_src_caps = gst_cea608_mux_negotiated_src_caps; - aggregator_class->get_next_time = gst_aggregator_simple_get_next_time; - aggregator_class->clip = gst_cea608_mux_clip; - - GST_DEBUG_CATEGORY_INIT (gst_cea608_mux_debug, "cea608mux", - 0, "Closed Caption muxer"); - - /** - * cea608mux:force-live: - * - * Causes the element to aggregate on a timeout even when no live source is - * connected to its sinks. See #GstAggregator:min-upstream-latency for a - * companion property: in the vast majority of cases where you plan to plug in - * live sources with a non-zero latency, you should set it to a non-zero value. 
- * - * Since: 1.26 - */ - g_object_class_install_property (gobject_class, PROP_FORCE_LIVE, - g_param_spec_boolean ("force-live", "Force live", - "Always operate in live mode and aggregate on timeout regardless of " - "whether any live sources are linked upstream", - DEFAULT_FORCE_LIVE, - G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS | G_PARAM_CONSTRUCT_ONLY)); -} - -static void -gst_cea608_mux_init (GstCea608Mux * self) -{ - self->cc_buffer = cc_buffer_new (); - cc_buffer_set_max_buffer_time (self->cc_buffer, GST_CLOCK_TIME_NONE); - cc_buffer_set_output_padding (self->cc_buffer, TRUE, FALSE); - cc_buffer_set_cea608_padding_strategy (self->cc_buffer, - CC_BUFFER_CEA608_PADDING_STRATEGY_VALID | - CC_BUFFER_CEA608_PADDING_STRATEGY_INPUT_REMOVE); - self->cdp_fps_entry = &null_fps_entry; - self->start_time = GST_CLOCK_TIME_NONE; -}
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/closedcaption/gstcea708decoder.c
Deleted
@@ -1,1822 +0,0 @@
-/* GStreamer
- * Copyright (C) 2013 CableLabs, Louisville, CO 80027
- * Copyright (C) 2015 Samsung Electronics Co., Ltd.
- * @Author: Chengjun Wang <cjun.wang@samsung.com>
- *
- * This library is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Library General Public
- * License as published by the Free Software Foundation; either
- * version 2 of the License, or (at your option) any later version.
- *
- * This library is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Library General Public License for more details.
- *
- * You should have received a copy of the GNU Library General Public
- * License along with this library; if not, write to the
- * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
- * Boston, MA 02110-1301, USA.
- */
-#include <gst/gst.h>
-#include <pango/pangocairo.h>
-#include <gstcea708decoder.h>
-#include <string.h>
-
-#define GST_CAT_DEFAULT gst_cea708_decoder_debug
-GST_DEBUG_CATEGORY (gst_cea708_decoder_debug);
-
-void
-gst_cea708_decoder_init_debug (void)
-{
-  GST_DEBUG_CATEGORY_INIT (gst_cea708_decoder_debug, "cc708decoder", 0,
-      "CEA708 Closed Caption Decoder");
-}
-
-/* 708 colors are defined by 2 bits each for R,G,&B for a total of 64 color combinations */
-static const gchar *color_names[] = {
-  "black",
-  "white",
-  "red",
-  "green",
-  "blue",
-  "yellow",
-  "magenta",
-  "cyan",
-  NULL
-};
-
-static const gchar *font_names[] = {
-  "serif",
-  "courier",
-  "times new roman",
-  "helvetica",
-  "Arial",
-  "Dom Casual",
-  "Coronet",
-  "Gothic",
-  NULL
-};
-
-static const gchar *pen_size_names[] = {
-  "30",                         /*"small" */
-  "36",                         /*"medium" */
-  "42",                         /*"large" */
-  NULL
-};
-
-/* G2 table defined in EIA/CEA-708 Spec */
-static const gunichar g2_table[CC_MAX_CODE_SET_SIZE] = {
-  ' ', 0xA0, 0, 0, 0, 0x2026, 0, 0,
-  0, 0, 0x160, 0, 0x152, 0, 0, 0,
- 0x2588, 0x2018, 0x2019, 0x201c, 0x201d, 0xB7, 0, 0, - 0, 0x2122, 0x161, 0, 0x153, 0x2120, 0, 0x178, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0x215b, 0x215c, - 0x215d, 0x215e, 0x2502, 0x2510, 0x2514, 0x2500, 0x2518, 0x250c, -}; - -static void gst_cea708dec_print_command_name (Cea708Dec * decoder, guint8 c); -static void gst_cea708dec_render_pangocairo (cea708Window * window); -static void -gst_cea708dec_adjust_values_with_fontdesc (cea708Window * window, - PangoFontDescription * desc); -static gint -gst_cea708dec_text_list_add (GSList ** text_list, - gint len, const gchar * format, ...); -static const PangoAlignment gst_cea708dec_get_align_mode (guint8 justify_mode); -static const gchar *gst_cea708dec_get_color_name (guint8 color); -static guint8 gst_cea708dec_map_minimum_color (guint8 color); -static void -gst_cea708dec_set_pen_color (Cea708Dec * decoder, - guint8 * dtvcc_buffer, int index); -static void -gst_cea708dec_set_window_attributes (Cea708Dec * decoder, - guint8 * dtvcc_buffer, int index); -static void -gst_cea708dec_set_pen_style (Cea708Dec * decoder, guint8 pen_style_id); -static void -gst_cea708dec_set_window_style (Cea708Dec * decoder, guint8 style_id); -static void -gst_cea708dec_define_window (Cea708Dec * decoder, - guint8 * dtvcc_buffer, int index); -static inline void -pango_span_markup_init (cea708PangoSpanControl * span_control); -static inline void -pango_span_markup_start (cea708PangoSpanControl * span_control, - gchar * line_buffer, guint16 * index); -static inline void -pango_span_markup_txt (cea708PangoSpanControl * span_control, - gchar * line_buffer, guint16 * index); -static inline void -pango_span_markup_end (cea708PangoSpanControl * span_control, - gchar * line_buffer, guint16 * index); -static void -gst_cea708dec_show_pango_window (Cea708Dec * decoder, guint window_id); -static void 
-gst_cea708dec_clear_window_text (Cea708Dec * decoder, guint window_id);
-static void
-gst_cea708dec_scroll_window_up (Cea708Dec * decoder, guint window_id);
-static void gst_cea708dec_init_window (Cea708Dec * decoder, guint window_id);
-static void gst_cea708dec_clear_window (Cea708Dec * decoder, cea708Window * w);
-static void
-gst_cea708dec_set_pen_attributes (Cea708Dec * decoder,
-    guint8 * dtvcc_buffer, int index);
-static void
-gst_cea708dec_for_each_window (Cea708Dec * decoder,
-    guint8 window_list, VisibilityControl visibility_control,
-    const gchar * log_message, void (*function) (Cea708Dec * decoder,
-        guint window_id));
-static void
-gst_cea708dec_process_command (Cea708Dec * decoder,
-    guint8 * dtvcc_buffer, int index);
-static void get_cea708dec_bufcat (gpointer data, gpointer whole_buf);
-static gboolean
-gst_cea708dec_render_text (Cea708Dec * decoder, GSList ** text_list,
-    gint length, guint window_id);
-static void gst_cea708dec_window_add_char (Cea708Dec * decoder, gunichar c);
-static void
-gst_cea708dec_process_c2 (Cea708Dec * decoder, guint8 * dtvcc_buffer,
-    int index);
-static void
-gst_cea708dec_process_g2 (Cea708Dec * decoder, guint8 * dtvcc_buffer,
-    int index);
-static void
-gst_cea708dec_process_c3 (Cea708Dec * decoder, guint8 * dtvcc_buffer,
-    int index);
-static void
-gst_cea708dec_process_g3 (Cea708Dec * decoder, guint8 * dtvcc_buffer,
-    int index);
-static void
-gst_cea708dec_process_dtvcc_byte (Cea708Dec * decoder,
-    guint8 * dtvcc_buffer, int index);
-
-/* For debug, print name of 708 command */
-Cea708Dec *
-gst_cea708dec_create (PangoContext * pango_context)
-{
-  int i;
-  Cea708Dec *decoder = g_malloc (sizeof (Cea708Dec));;
-  memset (decoder, 0, sizeof (Cea708Dec));
-
-  /* Initialize 708 variables */
-  for (i = 0; i < MAX_708_WINDOWS; i++) {
-    decoder->cc_windows[i] = g_malloc (sizeof (cea708Window));
-    gst_cea708dec_init_window (decoder, i);
-  }
-  decoder->desired_service = 1;
-  decoder->use_ARGB = FALSE;
-  decoder->pango_context
= pango_context;
-  return decoder;
-}
-
-void
-gst_cea708dec_free (Cea708Dec * dec)
-{
-  int i;
-
-  for (i = 0; i < MAX_708_WINDOWS; i++) {
-    cea708Window *window = dec->cc_windows[i];
-    gst_cea708dec_clear_window (dec, window);
-    g_free (window);
-  }
-  memset (dec, 0, sizeof (Cea708Dec));
-  g_free (dec);
-}
-
-void
-gst_cea708dec_set_service_number (Cea708Dec * decoder, gint8 desired_service)
-{
-  int i = 0;
-  gint8 previous_desired_service;
-  previous_desired_service = decoder->desired_service;
-  decoder->desired_service = desired_service;
-  /* If there has been a change in the desired service number, then clear
-   * the windows for the new service. */
-  if (decoder->desired_service != previous_desired_service) {
-    for (i = 0; i < MAX_708_WINDOWS; i++) {
-      gst_cea708dec_init_window (decoder, i);
-    }
-    decoder->current_window = 0;
-  }
-}
-
-gboolean
-gst_cea708dec_process_dtvcc_packet (Cea708Dec * decoder, guint8 * dtvcc_buffer,
-    gsize dtvcc_size)
-{
-  guint i;
-  gboolean need_render = FALSE;
-  cea708Window *window = NULL;
-  guint window_id;
-
-  /* Service block header (see CEA-708 6.2.1) */
-  guint8 block_size;
-  guint8 service_number;
-
-  guint parse_index = 0;
-#ifndef GST_DISABLE_GST_DEBUG
-  guint8 sequence_number = (dtvcc_buffer[parse_index] & 0xC0) >> 6;
-  guint8 pkt_size = DTVCC_PKT_SIZE (dtvcc_buffer[parse_index] & 0x3F);
-#endif
-
-  parse_index += 1;
-
-  while (parse_index < dtvcc_size) {
-    block_size = dtvcc_buffer[parse_index] & 0x1F;
-    service_number = (dtvcc_buffer[parse_index] & 0xE0) >> 5;
-    parse_index += 1;
-
-    if (service_number == 7) {
-      /* Get extended service number */
-      service_number = dtvcc_buffer[parse_index] & 0x3F;
-      parse_index += 1;
-    }
-
-    GST_LOG ("full_size:%" G_GSIZE_FORMAT
-        " size=%d seq=%d block_size=%d service_num=%d", dtvcc_size, pkt_size,
-        sequence_number, block_size, service_number);
-
-    /* Process desired_service cc data */
-    if (decoder->desired_service == service_number) {
-      for (i = 0; i < block_size; i++) {
-        /* The Dtvcc buffer
contains a stream of commands, command parameters,
-         * and characters which are the actual captions. Process commands and
-         * store captions in simulated 708 windows: */
-        gst_cea708dec_process_dtvcc_byte (decoder, dtvcc_buffer,
-            parse_index + i);
-      }
-
-      for (window_id = 0; window_id < 8; window_id++) {
-        window = decoder->cc_windows[window_id];
-        GST_LOG ("window #%02d deleted:%d visible:%d updated:%d", window_id,
-            window->deleted, window->visible, window->updated);
-        if (!window->updated) {
-          continue;
-        }
-        need_render = TRUE;
-      }
-    }
-
-    parse_index += block_size;
-  }
-
-  return need_render;
-}
-
-static void
-gst_cea708dec_process_dtvcc_byte (Cea708Dec * decoder,
-    guint8 * dtvcc_buffer, int index)
-{
-  guint8 c = dtvcc_buffer[index];
-
-  if (decoder->output_ignore) {
-    /* Ignore characters/parameters after a command. */
-    /* GST_TRACE ("%d ignore %X", decoder->output_ignore, c); */
-    decoder->output_ignore--;
-    return;
-  }
-  GST_DEBUG ("processing 0x%02X", c);
-
-  if (c <= 0x1F) {              /* C0 */
-    if (c == 0x03) {            /* ETX */
-      gst_cea708dec_process_command (decoder, dtvcc_buffer, index);
-    } else if (c == 0x00 || c == 0x08 || c == 0x0C || c == 0x0D || c == 0x0E) {
-      gst_cea708dec_window_add_char (decoder, c);
-    } else if (c == 0x10) {     /* EXT1 */
-      guint8 next_c = dtvcc_buffer[index + 1];
-      if (next_c <= 0x1F) {     /* C2 */
-        gst_cea708dec_process_c2 (decoder, dtvcc_buffer, index + 1);
-      } else if (next_c >= 0x20 && next_c <= 0x7F) {    /* G2 */
-        gst_cea708dec_process_g2 (decoder, dtvcc_buffer, index + 1);
-      } else if (next_c >= 0x80 && next_c <= 0x9F) {    /* C3 */
-        gst_cea708dec_process_c3 (decoder, dtvcc_buffer, index + 1);
-      } else if (next_c >= 0xA0) {      /* G3 */
-        gst_cea708dec_process_g3 (decoder, dtvcc_buffer, index + 1);
-      }
-    } else if (c > 0x10 && c < 0x18) {
-      decoder->output_ignore = 1;
-      GST_INFO ("do not support 0x11-0x17");
-    } else if (c >= 0x18) {     /* P16 */
-      /*P16 do not support now */
-      decoder->output_ignore = 2;
-      GST_INFO ("do not support 0x18-0x1F");
-    }
-  } else if ((c >= 0x20) && (c <= 0x7F)) {      /* G0 */
-    if (c == 0x7F) {
-      gst_cea708dec_window_add_char (decoder, CC_SPECIAL_CODE_MUSIC_NOTE);
-    } else {
-      gst_cea708dec_window_add_char (decoder, c);
-    }
-  } else if ((c >= 0x80) && (c <= 0x9F)) {      /* C1 */
-    gst_cea708dec_process_command (decoder, dtvcc_buffer, index);
-  } else if ((c >= 0xA0)) {     /* G1 */
-    gst_cea708dec_window_add_char (decoder, c);
-  }
-}
-
-/* For debug, print name of 708 command */
-static void
-gst_cea708dec_print_command_name (Cea708Dec * decoder, guint8 c)
-{
-  gchar buffer[32];
-  const gchar *command = NULL;
-
-  switch (c) {
-    case CC_COMMAND_ETX:
-      command = (const gchar *) "End of text";
-      break;
-
-    case CC_COMMAND_CW0:
-    case CC_COMMAND_CW1:
-    case CC_COMMAND_CW2:
-    case CC_COMMAND_CW3:
-    case CC_COMMAND_CW4:
-    case CC_COMMAND_CW5:
-    case CC_COMMAND_CW6:
-    case CC_COMMAND_CW7:
-      /* Set current window, no parameters */
-      g_snprintf (buffer, sizeof (buffer), "Set current window %d", c & 0x3);
-      command = buffer;
-      break;
-
-    case CC_COMMAND_CLW:
-      command = (const gchar *) "Clear windows";
-      break;
-
-    case CC_COMMAND_DSW:
-      command = (const gchar *) "Display windows";
-      break;
-
-    case CC_COMMAND_HDW:
-      command = (const gchar *) "Hide windows";
-      break;
-
-    case CC_COMMAND_TGW:
-      command = (const gchar *) "Toggle windows";
-      break;
-
-    case CC_COMMAND_DLW:
-      command = (const gchar *) "Delete windows";
-      break;
-
-    case CC_COMMAND_DLY:
-      command = (const gchar *) "Delay";
-      break;
-
-    case CC_COMMAND_DLC:
-      command = (const gchar *) "Delay cancel";
-      break;
-
-    case CC_COMMAND_RST:
-      command = (const gchar *) "Reset";
-      break;
-
-    case CC_COMMAND_SPA:
-      command = (const gchar *) "Set pen attributes";
-      break;
-
-    case CC_COMMAND_SPC:
-      command = (const gchar *) "Set pen color";
-      break;
-
-    case CC_COMMAND_SPL:
-      command = (const gchar *) "Set pen location";
-      break;
-
-    case CC_COMMAND_SWA:
-      command = (const gchar *) "Set window attributes";
-      break;
-
-    case CC_COMMAND_DF0:
-    case
CC_COMMAND_DF1: - case CC_COMMAND_DF2: - case CC_COMMAND_DF3: - case CC_COMMAND_DF4: - case CC_COMMAND_DF5: - case CC_COMMAND_DF6: - case CC_COMMAND_DF7: - g_snprintf (buffer, sizeof (buffer), "define window %d", c & 0x3); - command = buffer; - break; - - default: - if ((c > 0x80) && (c < 0x9F)) - command = (const gchar *) "Unknown"; - break; - } /* switch */ - - if (NULL != command) { - GST_LOG ("Process 708 command (%02X): %s", c, command); - } -} - -static void -gst_cea708dec_render_pangocairo (cea708Window * window) -{ - cairo_t *crt; - cairo_surface_t *surf; - cairo_t *shadow; - cairo_surface_t *surf_shadow; - PangoRectangle ink_rec, logical_rec; - gint width, height; - - pango_layout_get_pixel_extents (window->layout, &ink_rec, &logical_rec); - - width = logical_rec.width + window->shadow_offset; - height = logical_rec.height + logical_rec.y + window->shadow_offset; - - surf_shadow = cairo_image_surface_create (CAIRO_FORMAT_A8, width, height); - shadow = cairo_create (surf_shadow); - - /* clear shadow surface */ - cairo_set_operator (shadow, CAIRO_OPERATOR_CLEAR); - cairo_paint (shadow); - cairo_set_operator (shadow, CAIRO_OPERATOR_OVER); - - /* draw shadow text */ - cairo_save (shadow); - cairo_set_source_rgba (shadow, 0.0, 0.0, 0.0, 0.5); - cairo_translate (shadow, window->shadow_offset, window->shadow_offset); - pango_cairo_show_layout (shadow, window->layout); - cairo_restore (shadow); - - /* draw outline text */ - cairo_save (shadow); - cairo_set_source_rgb (shadow, 0.0, 0.0, 0.0); - cairo_set_line_width (shadow, window->outline_offset); - pango_cairo_layout_path (shadow, window->layout); - cairo_stroke (shadow); - cairo_restore (shadow); - - cairo_destroy (shadow); - - window->text_image = g_realloc (window->text_image, 4 * width * height); - - surf = cairo_image_surface_create_for_data (window->text_image, - CAIRO_FORMAT_ARGB32, width, height, width * 4); - crt = cairo_create (surf); - cairo_set_operator (crt, CAIRO_OPERATOR_CLEAR); - cairo_paint 
(crt); - cairo_set_operator (crt, CAIRO_OPERATOR_OVER); - - /* set default color */ - cairo_set_source_rgb (crt, 1.0, 1.0, 1.0); - - cairo_save (crt); - /* draw text */ - pango_cairo_show_layout (crt, window->layout); - cairo_restore (crt); - - /* composite shadow with offset */ - cairo_set_operator (crt, CAIRO_OPERATOR_DEST_OVER); - cairo_set_source_surface (crt, surf_shadow, 0.0, 0.0); - cairo_paint (crt); - - cairo_destroy (crt); - cairo_surface_destroy (surf_shadow); - cairo_surface_destroy (surf); - window->image_width = width; - window->image_height = height; -} - -static void -gst_cea708dec_adjust_values_with_fontdesc (cea708Window * window, - PangoFontDescription * desc) -{ - gint font_size = pango_font_description_get_size (desc) / PANGO_SCALE; - - window->shadow_offset = (double) (font_size) / 13.0; - window->outline_offset = (double) (font_size) / 15.0; - if (window->outline_offset < MINIMUM_OUTLINE_OFFSET) - window->outline_offset = MINIMUM_OUTLINE_OFFSET; -} - -static gint -gst_cea708dec_text_list_add (GSList ** text_list, - gint len, const gchar * format, ...) 
-{
-  va_list args;
-  gchar *str;
-
-  va_start (args, format);
-
-  str = g_malloc0 (len);
-  len = g_vsnprintf (str, len, format, args);
-  *text_list = g_slist_append (*text_list, str);
-  GST_LOG ("added %p str[%d]: %s", str, len, str);
-
-  va_end (args);
-  return len;
-}
-
-static PangoAlignment
-gst_cea708dec_get_align_mode (guint8 justify_mode)
-{
-  guint align_mode = PANGO_ALIGN_LEFT;
-
-  switch (justify_mode) {
-    case JUSTIFY_LEFT:
-      align_mode = PANGO_ALIGN_LEFT;
-      break;
-    case JUSTIFY_RIGHT:
-      align_mode = PANGO_ALIGN_RIGHT;
-      break;
-    case JUSTIFY_CENTER:
-      align_mode = PANGO_ALIGN_CENTER;
-      break;
-    case JUSTIFY_FULL:
-    default:
-      align_mode = PANGO_ALIGN_LEFT;
-  }
-  return align_mode;
-}
-
-static const gchar *
-gst_cea708dec_get_color_name (guint8 color)
-{
-  guint index = 0;
-
-  switch (color) {
-    case CEA708_COLOR_BLACK:
-      index = COLOR_TYPE_BLACK;
-      break;
-    case CEA708_COLOR_WHITE:
-      index = COLOR_TYPE_WHITE;
-      break;
-    case CEA708_COLOR_RED:
-      index = COLOR_TYPE_RED;
-      break;
-    case CEA708_COLOR_GREEN:
-      index = COLOR_TYPE_GREEN;
-      break;
-    case CEA708_COLOR_BLUE:
-      index = COLOR_TYPE_BLUE;
-      break;
-    case CEA708_COLOR_YELLOW:
-      index = COLOR_TYPE_YELLOW;
-      break;
-    case CEA708_COLOR_MAGENTA:
-      index = COLOR_TYPE_MAGENTA;
-      break;
-    case CEA708_COLOR_CYAN:
-      index = COLOR_TYPE_CYAN;
-      break;
-    default:
-      break;
-  }
-
-  return color_names[index];
-}
-
-static guint8
-gst_cea708dec_map_minimum_color (guint8 color)
-{
-  /* According to spec, map to the minimum color list */
-  /* check R */
-  switch ((color & 0x30) >> 4) {
-    case 1:
-      color &= 0xF;
-      break;
-    case 3:
-      color &= 0x2F;
-      break;
-    default:
-      break;
-  }
-  /* check G */
-  switch ((color & 0xC) >> 2) {
-    case 1:
-      color &= 0x33;
-      break;
-    case 3:
-      color &= 0x3B;
-      break;
-    default:
-      break;
-  }
-  /* check B */
-  switch (color & 0x3) {
-    case 1:
-      color &= 0x3C;
-      break;
-    case 3:
-      color &= 0x3E;
-      break;
-    default:
-      break;
-  }
-
-  return color;
-}
-
-static void
-gst_cea708dec_set_pen_color (Cea708Dec * decoder,
-    guint8 * dtvcc_buffer, int index)
-{
-  cea708Window *window = decoder->cc_windows[decoder->current_window];
-
-  /* format
-     fo1 fo0 fr1 fr0 fg1 fg0 fb1 fb0
-     bo1 bo0 br1 br0 bg1 bg0 bb1 bb0
-     0   0   er1 er0 eg1 eg0 eb1 eb0 */
-  window->pen_color.fg_color =
-      gst_cea708dec_map_minimum_color (dtvcc_buffer[index] & 0x3F);
-  window->pen_color.fg_opacity = (dtvcc_buffer[index] & 0xC0) >> 6;
-  window->pen_color.bg_color =
-      gst_cea708dec_map_minimum_color (dtvcc_buffer[index + 1] & 0x3F);
-  window->pen_color.bg_opacity = (dtvcc_buffer[index + 1] & 0xC0) >> 6;
-  window->pen_color.edge_color =
-      gst_cea708dec_map_minimum_color (dtvcc_buffer[index + 2] & 0x3F);
-  GST_LOG ("pen_color fg=0x%x fg_op=0x%x bg=0x%x bg_op=0x%x edge=0x%x",
-      window->pen_color.fg_color, window->pen_color.fg_opacity,
-      window->pen_color.bg_color, window->pen_color.bg_opacity,
-      window->pen_color.edge_color);
-}
-
-static void
-gst_cea708dec_set_window_attributes (Cea708Dec * decoder,
-    guint8 * dtvcc_buffer, int index)
-{
-  cea708Window *window = decoder->cc_windows[decoder->current_window];
-
-  /* format
-     fo1 fo0 fr1 fr0 fg1 fg0 fb1 fb0
-     bt1 bt0 br1 br0 bg1 bg0 bb1 bb0
-     bt2 ww  pd1 pd0 sd1 sd0 j1  j0
-     es3 es2 es1 es0 ed1 ed0 de1 de0 */
-  window->fill_color =
-      gst_cea708dec_map_minimum_color (dtvcc_buffer[index] & 0x3F);
-  window->fill_opacity = (dtvcc_buffer[index] & 0xC0) >> 6;
-  window->border_color =
-      gst_cea708dec_map_minimum_color (dtvcc_buffer[index + 1] & 0x3F);
-  window->border_type =
-      ((dtvcc_buffer[index + 1] & 0xC0) >> 6) | ((dtvcc_buffer[index +
-              2] & 0x80) >> 5);
-  window->word_wrap = (dtvcc_buffer[index + 2] & 0x40) ? TRUE : FALSE;
-  window->justify_mode = dtvcc_buffer[index + 2] & 0x3;
-  window->scroll_direction = (dtvcc_buffer[index + 2] & 0xC) >> 2;
-  window->print_direction = (dtvcc_buffer[index + 2] & 0x30) >> 4;
-  window->display_effect = (dtvcc_buffer[index + 3] & 0x3);
-  window->effect_direction = (dtvcc_buffer[index + 3] & 0xC) >> 2;
-  window->effect_speed = (dtvcc_buffer[index + 3] & 0xF0) >> 4;
-
-  GST_LOG ("Print direction = %d", window->print_direction);
-}
-
-static void
-gst_cea708dec_set_pen_style (Cea708Dec * decoder, guint8 pen_style_id)
-{
-  cea708Window *window = decoder->cc_windows[decoder->current_window];
-
-  window->pen_attributes.pen_size = PEN_SIZE_STANDARD;
-  window->pen_attributes.font_style = FONT_STYLE_DEFAULT;
-  window->pen_attributes.offset = PEN_OFFSET_NORMAL;
-  window->pen_attributes.italics = FALSE;
-  window->pen_attributes.underline = FALSE;
-  window->pen_attributes.edge_type = EDGE_TYPE_NONE;
-  window->pen_color.fg_color = CEA708_COLOR_WHITE;
-  window->pen_color.fg_opacity = SOLID;
-  window->pen_color.bg_color = CEA708_COLOR_BLACK;
-  window->pen_color.bg_opacity = SOLID;
-  window->pen_color.edge_color = CEA708_COLOR_BLACK;
-
-  /* CEA-708 predefined pen style ids */
-  switch (pen_style_id) {
-    default:
-    case PEN_STYLE_DEFAULT:
-      window->pen_attributes.font_style = FONT_STYLE_DEFAULT;
-      break;
-
-    case PEN_STYLE_MONO_SERIF:
-      window->pen_attributes.font_style = FONT_STYLE_MONO_SERIF;
-      break;
-
-    case PEN_STYLE_PROP_SERIF:
-      window->pen_attributes.font_style = FONT_STYLE_PROP_SERIF;
-      break;
-
-    case PEN_STYLE_MONO_SANS:
-      window->pen_attributes.font_style = FONT_STYLE_MONO_SANS;
-      break;
-
-    case PEN_STYLE_PROP_SANS:
-      window->pen_attributes.font_style = FONT_STYLE_PROP_SANS;
-      break;
-
-    case PEN_STYLE_MONO_SANS_TRANSPARENT:
-      window->pen_attributes.font_style = FONT_STYLE_MONO_SANS;
-      window->pen_color.bg_opacity = TRANSPARENT;
-      break;
-
-    case PEN_STYLE_PROP_SANS_TRANSPARENT:
-      window->pen_attributes.font_style = FONT_STYLE_PROP_SANS;
-      window->pen_color.bg_opacity = TRANSPARENT;
-      break;
-  }
-}
-
-static void
-gst_cea708dec_set_window_style (Cea708Dec * decoder, guint8 style_id)
-{
-  cea708Window *window = decoder->cc_windows[decoder->current_window];
-
-  /* set the 'normal' styles first, then deviate in special cases below... */
-  window->justify_mode = JUSTIFY_LEFT;
-  window->print_direction = PRINT_DIR_LEFT_TO_RIGHT;
-  window->scroll_direction = SCROLL_DIR_BOTTOM_TO_TOP;
-  window->word_wrap = FALSE;
-  window->effect_direction = EFFECT_DIR_LEFT_TO_RIGHT;
-  window->display_effect = DISPLAY_EFFECT_SNAP;
-  window->effect_speed = 0;
-  window->fill_color = CEA708_COLOR_BLACK;
-  window->fill_opacity = SOLID;
-
-  /* CEA-708 predefined window style ids */
-  switch (style_id) {
-    default:
-    case WIN_STYLE_NORMAL:
-      break;
-
-    case WIN_STYLE_TRANSPARENT:
-      window->fill_opacity = TRANSPARENT;
-      break;
-
-    case WIN_STYLE_NORMAL_CENTERED:
-      window->justify_mode = JUSTIFY_CENTER;
-      break;
-
-    case WIN_STYLE_NORMAL_WORD_WRAP:
-      window->word_wrap = TRUE;
-      break;
-
-    case WIN_STYLE_TRANSPARENT_WORD_WRAP:
-      window->fill_opacity = TRANSPARENT;
-      window->word_wrap = TRUE;
-      break;
-
-    case WIN_STYLE_TRANSPARENT_CENTERED:
-      window->fill_opacity = TRANSPARENT;
-      window->justify_mode = JUSTIFY_CENTER;
-      break;
-
-    case WIN_STYLE_ROTATED:
-      window->print_direction = PRINT_DIR_TOP_TO_BOTTOM;
-      window->scroll_direction = SCROLL_DIR_RIGHT_TO_LEFT;
-      break;
-  }
-}
-
-/* Define window - window size, window style, pen style, anchor position, etc */
-static void
-gst_cea708dec_define_window (Cea708Dec * decoder,
-    guint8 * dtvcc_buffer, int index)
-{
-  cea708Window *window = decoder->cc_windows[decoder->current_window];
-  guint8 priority = 0;
-  guint8 anchor_point = 0;
-  guint8 relative_position = 0;
-  guint8 anchor_vertical = 0;
-  guint8 anchor_horizontal = 0;
-  guint8 row_count = 0;
-  guint8 column_count = 0;
-  guint8 row_lock = 0;
-  guint8 column_lock = 0;
-  gboolean visible = FALSE;
-  guint8 style_id = 0;
-  guint8 pen_style_id = 0;
-#ifndef GST_DISABLE_GST_DEBUG
-  guint v_anchor = 0;
-  guint h_anchor = 0;
-#endif
-
-  GST_LOG ("current_window=%d", decoder->current_window);
-  GST_LOG ("dtvcc_buffer %02x %02x %02x %02x %02x %02x",
-      dtvcc_buffer[index + 0], dtvcc_buffer[index + 1],
-      dtvcc_buffer[index + 2], dtvcc_buffer[index + 3],
-      dtvcc_buffer[index + 4], dtvcc_buffer[index + 5]);
-
-  /* Initialize window structure */
-  if (NULL != window) {
-    if (window->deleted) {
-      /* Spec says on window create (but not re-definition) the pen position
-       * must be reset to 0
-       * TODO: also set all text positions to the fill color */
-      window->deleted = FALSE;
-      window->pen_row = 0;
-      window->pen_col = 0;
-    }
-    /* format of parameters:
-       0   0   v   rl  cl  p2  p1  p0
-       rp  av6 av5 av4 av3 av2 av1 av0
-       ah7 ah6 ah5 ah4 ah3 ah2 ah1 ah0
-       ap3 ap2 ap1 ap0 rc3 rc2 rc1 rc0
-       0   0   cc5 cc4 cc3 cc2 cc1 cc0
-       0   0   ws2 ws1 ws0 ps2 ps1 ps0 */
-
-    /* parameter byte 0 */
-    priority = dtvcc_buffer[index] & 0x7;
-    column_lock = (dtvcc_buffer[index] & 0x8) ? TRUE : FALSE;
-    row_lock = (dtvcc_buffer[index] & 0x10) ? TRUE : FALSE;
-    visible = (dtvcc_buffer[index] & 0x20) ? TRUE : FALSE;
-
-    /* parameter byte 1 */
-    relative_position = (dtvcc_buffer[index + 1] & 0x80) ? TRUE : FALSE;
-    anchor_vertical = dtvcc_buffer[index + 1] & 0x7F;
-
-    /* parameter byte 2 */
-    anchor_horizontal = dtvcc_buffer[index + 2];
-
-    /* parameter byte 3 */
-    anchor_point = (dtvcc_buffer[index + 3] & 0xF0) >> 4;
-    row_count = (dtvcc_buffer[index + 3] & 0xF) + 1;
-
-    /* parameter byte 4 */
-    column_count = (dtvcc_buffer[index + 4] & 0x3F) + 1;
-
-    /* parameter byte 5 */
-    style_id = (dtvcc_buffer[index + 5] & 0x38) >> 3;
-    pen_style_id = dtvcc_buffer[index + 5] & 0x7;
-
-    window->screen_vertical = anchor_vertical;
-    window->screen_horizontal = anchor_horizontal;
-
-    if (relative_position == FALSE) {
-      /* If position is in absolute coords, convert to percent */
-      if (decoder->width == 0 || decoder->height == 0) {
-        window->screen_vertical /= 100;
-        window->screen_horizontal /= 100;
-      } else if ((decoder->width * 9) % (decoder->height * 16) == 0) {
-        window->screen_vertical /= SCREEN_HEIGHT_16_9;
-        window->screen_horizontal /= SCREEN_WIDTH_16_9;
-      } else if ((decoder->width * 3) % (decoder->height * 4) == 0) {
-        window->screen_vertical /= SCREEN_HEIGHT_4_3;
-        window->screen_horizontal /= SCREEN_WIDTH_4_3;
-      } else {
-        window->screen_vertical /= 100;
-        window->screen_horizontal /= 100;
-      }
-      window->screen_vertical *= 100;
-      window->screen_horizontal *= 100;
-    }
-
-    window->priority = priority;
-    window->anchor_point = anchor_point;
-    window->relative_position = relative_position;
-    window->anchor_vertical = anchor_vertical;
-    window->anchor_horizontal = anchor_horizontal;
-    window->row_count = row_count;
-    window->column_count = column_count;
-    window->row_lock = row_lock;
-    window->column_lock = column_lock;
-    window->visible = visible;
-
-    /* Make sure row/col limits are not too large */
-    if (window->row_count > WINDOW_MAX_ROWS) {
-      GST_WARNING ("window row count %d is too large", window->row_count);
-      window->row_count = WINDOW_MAX_ROWS;
-    }
-
-    if (window->column_count > WINDOW_MAX_COLS) {
-      GST_WARNING ("window column count %d is too large", window->column_count);
-      window->column_count = WINDOW_MAX_COLS;
-    }
-
-    if (style_id != 0) {
-      window->style_id = style_id;
-    }
-
-    if (pen_style_id != 0) {
-      window->pen_style_id = pen_style_id;
-    }
-
-    gst_cea708dec_set_window_style (decoder, window->style_id);
-    gst_cea708dec_set_pen_style (decoder, window->pen_style_id);
-  }
-
-  GST_LOG ("priority=%d anchor=%d relative_pos=%d anchor_v=%d anchor_h=%d",
-      window->priority,
-      window->anchor_point,
-      window->relative_position,
-      window->anchor_vertical, window->anchor_horizontal);
-
-  GST_LOG ("row_count=%d col_count=%d row_lock=%d col_lock=%d visible=%d",
-      window->row_count,
-      window->column_count,
-      window->row_lock, window->column_lock, window->visible);
-
-  GST_LOG ("style_id=%d pen_style_id=%d screenH=%f screenV=%f v_offset=%d "
-      "h_offset=%d v_anchor=%d h_anchor=%d",
-      window->style_id,
-      window->pen_style_id,
-      window->screen_horizontal,
-      window->screen_vertical,
-      window->v_offset, window->h_offset, v_anchor, h_anchor);
-}
-
-static inline void
-pango_span_markup_init (cea708PangoSpanControl * span_control)
-{
-  memset (span_control, 0, sizeof (cea708PangoSpanControl));
-  span_control->size = PEN_SIZE_STANDARD;
-  span_control->fg_color = CEA708_COLOR_WHITE;
-  span_control->bg_color = CEA708_COLOR_INVALID;
-  span_control->font_style = FONT_STYLE_DEFAULT;
-}
-
-static inline void
-pango_span_markup_start (cea708PangoSpanControl * span_control,
-    gchar * line_buffer, guint16 * index)
-{
-  GST_LOG ("span_control start_flag:%d end_flag:%d txt_flag:%d",
-      span_control->span_start_flag, span_control->span_end_flag,
-      span_control->span_txt_flag);
-  if (!span_control->span_start_flag) {
-    g_strlcat (line_buffer, CEA708_PANGO_SPAN_MARKUP_START, LINEBUFFER_SIZE);
-    *index += strlen (CEA708_PANGO_SPAN_MARKUP_START);
-    span_control->span_start_flag = TRUE;
-    span_control->span_end_flag = FALSE;
-  } else {
-    GST_WARNING ("warning span start !!!");
-  }
-}
-
-static inline void
-pango_span_markup_txt (cea708PangoSpanControl * span_control,
-    gchar * line_buffer, guint16 * index)
-{
-  GST_LOG ("span_control start_flag:%d end_flag:%d txt_flag:%d",
-      span_control->span_start_flag, span_control->span_end_flag,
-      span_control->span_txt_flag);
-  if (span_control->span_start_flag && !span_control->span_txt_flag) {
-    line_buffer[*index] = '>';
-    *index = *index + 1;
-    span_control->span_txt_flag = TRUE;
-  } else {
-    GST_WARNING ("warning span txt !!!");
-  }
-}
-
-static inline void
-pango_span_markup_end (cea708PangoSpanControl * span_control,
-    gchar * line_buffer, guint16 * index)
-{
-  GST_LOG ("span_control start_flag:%d end_flag:%d txt_flag:%d",
-      span_control->span_start_flag, span_control->span_end_flag,
-      span_control->span_txt_flag);
-  if (span_control->span_start_flag && !span_control->span_end_flag) {
-    g_strlcat (line_buffer, CEA708_PANGO_SPAN_MARKUP_END, LINEBUFFER_SIZE);
-    *index = *index + strlen (CEA708_PANGO_SPAN_MARKUP_END);
-    span_control->span_start_flag = FALSE;
-    span_control->span_txt_flag = FALSE;
-    span_control->span_end_flag = TRUE;
-  } else {
-    GST_WARNING ("line_buffer=%s", line_buffer);
-    GST_WARNING ("warning span end !!!");
-  }
-}
-
-/* FIXME: Convert to GString ! */
-static void
-gst_cea708dec_show_pango_window (Cea708Dec * decoder, guint window_id)
-{
-  cea708Window *window = decoder->cc_windows[window_id];
-  gint16 row, col;
-  gboolean display = FALSE;     /* = TRUE when text lines should be written */
-  gchar line_buffer[LINEBUFFER_SIZE];
-  gchar outchar_utf8[CC_UTF8_MAX_LENGTH + 1] = { 0 };
-  guint8 utf8_char_length;
-  gint16 i, j;
-  gint16 right_index;           /* within a single line of window text, the
-                                 * index of the rightmost non-blank character */
-  guint16 index;
-  guint len = 0;
-
-  cea708PangoSpanControl span_control;
-  const gchar *fg_color = NULL;
-  const gchar *bg_color = NULL;
-  const gchar *pen_size = NULL;
-  const gchar *font = NULL;
-
-  GST_DEBUG ("window #%02d (visible:%d)", window_id, window->visible);
-
-  window->updated = TRUE;
-
-  if (!window->visible) {
-    GST_DEBUG ("Window is not visible, skipping rendering");
-    return;
-  }
-
-  for (row = 0; row < window->row_count; row++) {
-    for (col = 0; col < window->column_count; col++) {
-      GST_LOG ("window->text[%d][%d].c '%c'", row, col,
-          window->text[row][col].c);
-      if (window->text[row][col].c != ' ') {
-        /* If there is a non-blank line, then display from there */
-        display = TRUE;
-      }
-    }
-  }
-
-  if (!display) {
-    GST_DEBUG ("No visible text, skipping rendering");
-    return;
-  }
-
-  for (row = 0; row < window->row_count; row++) {
-    for (col = 0; col < window->column_count; col++) {
-      if (window->text[row][col].c != ' ') {
-
-        memset (line_buffer, '\0', LINEBUFFER_SIZE);
-        pango_span_markup_init (&span_control);
-        /* Find the rightmost non-blank character on this line: */
-        for (i = right_index = WINDOW_MAX_COLS - 1; i >= col; i--) {
-          if (window->text[row][i].c != ' ') {
-            right_index = i;
-            break;
-          }
-        }
-
-        /* Copy all of the characters in this row, from the current position
-         * to the right edge */
-        for (i = 0, index = 0;
-            (i <= right_index) && (index < LINEBUFFER_SIZE - 15); i++) {
-          cea708char *current = &window->text[row][i];
-          GST_LOG ("Adding row=%d i=%d c=%c %d", row,
-              i, current->c, current->c);
-
-          do {
-            GST_MEMDUMP ("line_buffer", (guint8 *) line_buffer, index);
-            GST_INFO
-                ("text[%d][%d] '%c' underline:%d , italics:%d , font_style:%d , pen_size : %d",
-                row, i, current->c,
-                current->pen_attributes.underline,
-                current->pen_attributes.italics,
-                current->pen_attributes.font_style,
-                current->pen_attributes.pen_size);
-            GST_INFO ("text[%d][%d] '%c' pen_color fg:0x%02X bg:0x%02x", row, i,
-                current->c, current->pen_color.fg_color,
-                current->pen_color.bg_color);
-            GST_INFO
-                ("span_control: span_next_flag = %d, underline = %d, italics = %d, font_style = %d, size = %d, fg_color = 0x%02X, bg_color = 0x%02X",
-                span_control.span_next_flag, span_control.underline,
-                span_control.italics, span_control.font_style,
-                span_control.size, span_control.fg_color,
-                span_control.bg_color);
-
-            if ((current->pen_attributes.underline != span_control.underline)
-                || (current->pen_attributes.italics != span_control.italics)
-                || (current->pen_attributes.font_style !=
-                    span_control.font_style)
-                || (current->pen_attributes.pen_size != span_control.size)
-                || (current->pen_color.fg_color != span_control.fg_color)
-                || (current->pen_color.bg_color != span_control.bg_color)
-                ) {
-              GST_LOG ("Markup changed");
-
-              /* check end span to next span start */
-              if (!span_control.span_next_flag) {
-                pango_span_markup_end (&span_control, line_buffer, &index);
-                if (span_control.span_end_flag) {
-                  pango_span_markup_init (&span_control);
-                  span_control.span_next_flag = TRUE;
-                  GST_INFO ("continue check next span !!!");
-                  continue;
-                }
-              }
-
-              pango_span_markup_start (&span_control, line_buffer, &index);
-
-              /* Check for transitions to/from underline: */
-              if (current->pen_attributes.underline) {
-                g_strlcat (line_buffer,
-                    CEA708_PANGO_SPAN_ATTRIBUTES_UNDERLINE_SINGLE,
-                    sizeof (line_buffer));
-                index += strlen (CEA708_PANGO_SPAN_ATTRIBUTES_UNDERLINE_SINGLE);
-                span_control.underline = TRUE;
-              }
-
-              /* Check for transitions to/from italics: */
-              if (current->pen_attributes.italics) {
-                g_strlcat (line_buffer,
-                    CEA708_PANGO_SPAN_ATTRIBUTES_STYLE_ITALIC,
-                    sizeof (line_buffer));
-                index += strlen (CEA708_PANGO_SPAN_ATTRIBUTES_STYLE_ITALIC);
-                span_control.italics = TRUE;
-              }
-
-              /* FIXME : Something is totally wrong with the way fonts
-               * are being handled. Shouldn't the font description (if
-               * specified by the user) be written for everything ? */
-              if (!decoder->default_font_desc) {
-                font = font_names[current->pen_attributes.font_style];
-
-                if (font) {
-                  g_strlcat (line_buffer, CEA708_PANGO_SPAN_ATTRIBUTES_FONT,
-                      sizeof (line_buffer));
-                  index += strlen (CEA708_PANGO_SPAN_ATTRIBUTES_FONT);
-                  line_buffer[index++] = 0x27;  /* ' */
-                  g_strlcat (line_buffer, font, sizeof (line_buffer));
-                  index += strlen (font);
-                  span_control.font_style = current->pen_attributes.font_style;
-
-                  /* Check for transitions to/from pen size */
-                  pen_size = pen_size_names[current->pen_attributes.pen_size];
-
-                  line_buffer[index++] = ' ';
-                  g_strlcat (line_buffer, pen_size, sizeof (line_buffer));
-                  index += strlen (pen_size);
-                  line_buffer[index++] = 0x27;  /* ' */
-                  span_control.size = current->pen_attributes.pen_size;
-                }
-              }
-              /* Regardless of the above, we want to remember the latest changes */
-              span_control.font_style = current->pen_attributes.font_style;
-              span_control.size = current->pen_attributes.pen_size;
-
-              /* Check for transitions to/from foreground color */
-              fg_color =
-                  gst_cea708dec_get_color_name (current->pen_color.fg_color);
-              if (fg_color && current->pen_color.bg_opacity != TRANSPARENT) {
-                g_strlcat (line_buffer, CEA708_PANGO_SPAN_ATTRIBUTES_FOREGROUND,
-                    sizeof (line_buffer));
-                index += strlen (CEA708_PANGO_SPAN_ATTRIBUTES_FOREGROUND);
-                line_buffer[index++] = 0x27;    /* ' */
-                g_strlcat (line_buffer, fg_color, sizeof (line_buffer));
-                index += strlen (fg_color);
-                line_buffer[index++] = 0x27;    /* ' */
-                span_control.fg_color = current->pen_color.fg_color;
-                GST_DEBUG ("span_control.fg_color updated to 0x%02x",
-                    span_control.fg_color);
-              } else
-                GST_DEBUG
-                    ("span_control.fg_color was NOT updated (still 0x%02x)",
-                    span_control.fg_color);
-
-              /* Check for transitions to/from background color */
-              bg_color =
-                  gst_cea708dec_get_color_name (current->pen_color.bg_color);
-              if (bg_color && current->pen_color.bg_opacity != TRANSPARENT) {
-                g_strlcat (line_buffer, CEA708_PANGO_SPAN_ATTRIBUTES_BACKGROUND,
-                    sizeof (line_buffer));
-                index += strlen (CEA708_PANGO_SPAN_ATTRIBUTES_BACKGROUND);
-                line_buffer[index++] = 0x27;    /* ' */
-                g_strlcat (line_buffer, bg_color, sizeof (line_buffer));
-                index += strlen (bg_color);
-                line_buffer[index++] = 0x27;    /* ' */
-                span_control.bg_color = current->pen_color.bg_color;
-                GST_DEBUG ("span_control.bg_color updated to 0x%02x",
-                    span_control.bg_color);
-              } else
-                GST_DEBUG
-                    ("span_control.bg_color was NOT updated (still 0x%02x)",
-                    span_control.bg_color);
-
-              /* span text start */
-              pango_span_markup_txt (&span_control, line_buffer, &index);
-              GST_INFO ("span_next_flag = %d", span_control.span_next_flag);
-            }
-            span_control.span_next_flag = FALSE;
-          } while (span_control.span_next_flag);
-
-          /* Finally write the character */
-          j = 0;
-
-          switch (current->c) {
-            case '&':
-              g_snprintf (&(line_buffer[index]),
-                  sizeof (line_buffer) - index - 1, "&amp;");
-              index += 5;
-              break;
-
-            case '<':
-              g_snprintf (&(line_buffer[index]),
-                  sizeof (line_buffer) - index - 1, "&lt;");
-              index += 4;
-              break;
-
-            case '>':
-              g_snprintf (&(line_buffer[index]),
-                  sizeof (line_buffer) - index - 1, "&gt;");
-              index += 4;
-              break;
-
-            case '\'':
-              g_snprintf (&(line_buffer[index]),
-                  sizeof (line_buffer) - index - 1, "&apos;");
-              index += 6;
-              break;
-
-            case '"':
-              g_snprintf (&(line_buffer[index]),
-                  sizeof (line_buffer) - index - 1, "&quot;");
-              index += 6;
-              break;
-
-            default:
-              /* FIXME : Use g_string_append_unichar() when switched to GString */
-              utf8_char_length = g_unichar_to_utf8 (current->c, outchar_utf8);
-              while (utf8_char_length > 0) {
-                line_buffer[index++] = outchar_utf8[j++];
-                utf8_char_length--;
-              }
-          }
-        }
-
-        /* pango markup span mode ends */
-        if (span_control.underline || span_control.italics
-            || (span_control.font_style != FONT_STYLE_DEFAULT)
-            || (span_control.size != PEN_SIZE_STANDARD)
-            || (span_control.fg_color != CEA708_COLOR_WHITE)
-            || (span_control.bg_color != CEA708_COLOR_INVALID)
-            ) {
-          pango_span_markup_end (&span_control, line_buffer, &index);
-          pango_span_markup_init (&span_control);
-        }
-
-        GST_LOG ("adding row[%d]: %s\nlength:%d", row, line_buffer, index);
-
-        if (row != window->row_count - 1) {
-          line_buffer[index++] = '\n';
-        }
-
-        len +=
-            gst_cea708dec_text_list_add (&decoder->text_list, index + 1, "%s",
-            line_buffer);
-        break;
-      }
-    }
-
-    if (col == window->column_count && row != window->row_count - 1) {
-      len +=
-          gst_cea708dec_text_list_add (&decoder->text_list, strlen ("\n") + 1,
-          "\n");
-    }
-  }
-
-  if (len == 0) {
-    GST_LOG ("window %d had no text", window_id);
-  } else {
-    /* send text to output pad */
-    gst_cea708dec_render_text (decoder, &decoder->text_list, len, window_id);
-  }
-}
-
-static void
-gst_cea708dec_clear_window_text (Cea708Dec * decoder, guint window_id)
-{
-  cea708Window *window = decoder->cc_windows[window_id];
-  guint row, col;
-
-  for (row = 0; row < WINDOW_MAX_ROWS; row++) {
-    for (col = 0; col < WINDOW_MAX_COLS; col++) {
-      window->text[row][col].c = ' ';
-      window->text[row][col].justify_mode = window->justify_mode;
-      window->text[row][col].pen_attributes = window->pen_attributes;
-      window->text[row][col].pen_color = window->pen_color;
-    }
-  }
-}
-
-static void
-gst_cea708dec_scroll_window_up (Cea708Dec * decoder, guint window_id)
-{
-  cea708Window *window = decoder->cc_windows[window_id];
-  guint row, col;
-
-  /* This function should be called to scroll the window up if bottom-to-top
-   * scrolling is enabled and a carriage return is encountered, or for
-   * word wrapping */
-  GST_LOG_OBJECT (decoder, "called for window: %d", window_id);
-
-  /* start at row 1 to copy into row 0 (scrolling up) */
-  for (row = 1; row < WINDOW_MAX_ROWS; row++) {
-    for (col = 0; col < WINDOW_MAX_COLS; col++) {
-      window->text[row - 1][col] = window->text[row][col];
-    }
-  }
-
-  /* Clear the bottom row: */
-  row = WINDOW_MAX_ROWS - 1;
-  for (col = 0; col < WINDOW_MAX_COLS; col++) {
-    window->text[row][col].c = ' ';
-    window->text[row][col].justify_mode = window->justify_mode;
-    window->text[row][col].pen_attributes = window->pen_attributes;
-    window->text[row][col].pen_color = window->pen_color;
-  }
-}
-
-static void
-gst_cea708dec_clear_window (Cea708Dec * decoder, cea708Window * window)
-{
-  g_free (window->text_image);
-  memset (window, 0, sizeof (cea708Window));
-}
-
-static void
-gst_cea708dec_init_window (Cea708Dec * decoder, guint window_id)
-{
-  cea708Window *window;
-
-  /* Validate the id before indexing into the window array */
-  if (window_id >= MAX_708_WINDOWS) {
-    GST_ERROR ("window_id outside of range %d", window_id);
-    return;
-  }
-  window = decoder->cc_windows[window_id];
-
-  window->priority = 0;
-  window->anchor_point = 0;
-  window->relative_position = 0;
-  window->anchor_vertical = 0;
-  window->anchor_horizontal = 0;
-  window->screen_vertical = 0;
-  window->screen_horizontal = 0;
-
-  window->row_count = WINDOW_MAX_ROWS;
-  window->column_count = WINDOW_MAX_COLS;
-  window->row_lock = 0;
-  window->column_lock = 0;
-  window->visible = FALSE;
-  window->style_id = 0;
-  window->pen_style_id = 0;
-  window->deleted = TRUE;
-  window->pen_color.fg_color = CEA708_COLOR_WHITE;
-  window->pen_color.fg_opacity = SOLID;
-  window->pen_color.bg_color = CEA708_COLOR_BLACK;
-  window->pen_color.bg_opacity = SOLID;
-  window->pen_color.edge_color = CEA708_COLOR_BLACK;
-
-  window->pen_attributes.pen_size = PEN_SIZE_STANDARD;
-  window->pen_attributes.font_style = FONT_STYLE_DEFAULT;
-  window->pen_attributes.offset = PEN_OFFSET_NORMAL;
-  window->pen_attributes.italics = FALSE;
-  window->pen_attributes.text_tag = TAG_DIALOG;
-  window->pen_attributes.underline = FALSE;
-  window->pen_attributes.edge_type = EDGE_TYPE_NONE;
-
-  /* Init pen position */
-  window->pen_row = 0;
-  window->pen_col = 0;
-
-  /* Initialize text array to all spaces. When sending window text, only
-   * send if there are non-blank rows */
-  gst_cea708dec_clear_window_text (decoder, window_id);
-
-  /* window attributes */
-  window->justify_mode = JUSTIFY_LEFT;
-  window->print_direction = PRINT_DIR_LEFT_TO_RIGHT;
-  window->scroll_direction = SCROLL_DIR_BOTTOM_TO_TOP;
-  window->word_wrap = FALSE;
-  window->display_effect = DISPLAY_EFFECT_SNAP;
-  window->effect_direction = EFFECT_DIR_LEFT_TO_RIGHT;
-  window->effect_speed = 0;
-  window->fill_color = CEA708_COLOR_BLACK;
-  window->fill_opacity = TRANSPARENT;
-  window->border_type = BORDER_TYPE_NONE;
-  window->border_color = CEA708_COLOR_BLACK;
-
-  window->v_offset = 0;
-  window->h_offset = 0;
-  window->layout = NULL;
-  window->shadow_offset = 0;
-  window->outline_offset = 0;
-  window->image_width = 0;
-  window->image_height = 0;
-  window->text_image = NULL;
-}
-
-static void
-gst_cea708dec_set_pen_attributes (Cea708Dec * decoder,
-    guint8 * dtvcc_buffer, int index)
-{
-  cea708Window *window = decoder->cc_windows[decoder->current_window];
-
-  /* format
-     tt3 tt2 tt1 tt0 o1  o0  s1  s0
-     i   u   et2 et1 et0 fs2 fs1 fs0 */
-  window->pen_attributes.pen_size = dtvcc_buffer[index] & 0x3;
-  window->pen_attributes.text_tag = (dtvcc_buffer[index] & 0xF0) >> 4;
-  window->pen_attributes.offset = (dtvcc_buffer[index] & 0xC) >> 2;
-  window->pen_attributes.font_style = dtvcc_buffer[index + 1] & 0x7;
-  window->pen_attributes.italics =
-      ((dtvcc_buffer[index + 1] & 0x80) >> 7) ? TRUE : FALSE;
-  window->pen_attributes.underline =
-      ((dtvcc_buffer[index + 1] & 0x40) >> 6) ? TRUE : FALSE;
-  window->pen_attributes.edge_type = (dtvcc_buffer[index + 1] & 0x38) >> 3;
-
-  GST_LOG ("pen_size=%d font=%d text_tag=%d offset=%d",
-      window->pen_attributes.pen_size,
-      window->pen_attributes.font_style,
-      window->pen_attributes.text_tag, window->pen_attributes.offset);
-
-  GST_LOG ("italics=%d underline=%d edge_type=%d",
-      window->pen_attributes.italics,
-      window->pen_attributes.underline, window->pen_attributes.edge_type);
-}
-
-static void
-gst_cea708dec_for_each_window (Cea708Dec * decoder,
-    guint8 window_list, VisibilityControl visibility_control,
-    const gchar * log_message, void (*function) (Cea708Dec * decoder,
-        guint window_id))
-{
-  guint i;
-
-  GST_LOG ("window_list: %02x", window_list);
-
-  for (i = 0; i < MAX_708_WINDOWS; i++) {
-    if (WINDOW_IN_LIST_IS_ACTIVE (window_list)) {
-      GST_LOG ("%s[%d]:%d %s v_offset=%d h_offset=%d",
-          log_message, i, WINDOW_IN_LIST_IS_ACTIVE (window_list),
-          (decoder->cc_windows[i]->visible) ? "visible" : "hidden",
-          decoder->cc_windows[i]->v_offset, decoder->cc_windows[i]->h_offset);
-      switch (visibility_control) {
-        default:
-        case NO_CHANGE:
-          break;
-
-        case SWITCH_TO_HIDE:
-          decoder->cc_windows[i]->visible = FALSE;
-          break;
-
-        case SWITCH_TO_SHOW:
-          decoder->cc_windows[i]->visible = TRUE;
-          break;
-
-        case TOGGLE:
-          decoder->cc_windows[i]->visible = !decoder->cc_windows[i]->visible;
-          break;
-      }
-
-      if (NULL != function) {
-        function (decoder, i);
-      }
-    }
-
-    window_list >>= 1;
-  }
-}
-
-static void
-gst_cea708dec_process_command (Cea708Dec * decoder,
-    guint8 * dtvcc_buffer, int index)
-{
-  cea708Window *window = decoder->cc_windows[decoder->current_window];
-  guint8 c = dtvcc_buffer[index];
-  guint8 window_list = dtvcc_buffer[index + 1];  /* always the first arg (if any) */
-
-  /* Process command codes */
-  gst_cea708dec_print_command_name (decoder, c);
-  switch (c) {
-    case CC_COMMAND_ETX:        /* End of text */
-      window->visible = TRUE;
-      gst_cea708dec_show_pango_window (decoder, decoder->current_window);
-      return;
-
-    case CC_COMMAND_CW0:        /* Set current window */
-    case CC_COMMAND_CW1:
-    case CC_COMMAND_CW2:
-    case CC_COMMAND_CW3:
-    case CC_COMMAND_CW4:
-    case CC_COMMAND_CW5:
-    case CC_COMMAND_CW6:
-    case CC_COMMAND_CW7:
-      decoder->current_window = c & 0x07;
-      GST_LOG ("Current window=%d", decoder->current_window);
-      return;
-
-    case CC_COMMAND_CLW:        /* Clear windows */
-      decoder->output_ignore = 1;       /* 1 byte parameter = windowmap */
-
-      /* Clear window data */
-      gst_cea708dec_for_each_window (decoder, window_list, NO_CHANGE,
-          "clear_window", gst_cea708dec_clear_window_text);
-      return;
-
-    case CC_COMMAND_DSW:        /* Display windows */
-      decoder->output_ignore = 1;       /* 1 byte parameter = windowmap */
-
-      /* Show window */
-      gst_cea708dec_for_each_window (decoder, window_list, NO_CHANGE,
-          "display_window", gst_cea708dec_show_pango_window);
-      return;
-
-    case CC_COMMAND_HDW:        /* Hide windows */
-      decoder->output_ignore = 1;       /* 1 byte parameter = windowmap */
-
-      /* Hide window */
-      gst_cea708dec_for_each_window (decoder, window_list, SWITCH_TO_HIDE,
-          "hide_window", NULL);
-      return;
-
-    case CC_COMMAND_TGW:        /* Toggle windows */
-      decoder->output_ignore = 1;       /* 1 byte parameter = windowmap */
-
-      /* Toggle windows - hide displayed windows, display hidden windows */
-      gst_cea708dec_for_each_window (decoder, window_list, TOGGLE,
-          "toggle_window", gst_cea708dec_show_pango_window);
-      return;
-
-    case CC_COMMAND_DLW:        /* Delete windows */
-      decoder->output_ignore = 1;       /* 1 byte parameter = windowmap */
-
-      /* Delete window */
-      gst_cea708dec_for_each_window (decoder, window_list, NO_CHANGE,
-          "delete_window", gst_cea708dec_init_window);
-      return;
-
-    case CC_COMMAND_DLY:        /* Delay */
-      decoder->output_ignore = 1;       /* 1 byte parameter = delay in 1/10 sec */
-      /* TODO: process this command. */
-      return;
-
-    case CC_COMMAND_DLC:        /* Delay cancel */
-      /* TODO: process this command. */
-      return;
-
-      /* Reset */
-    case CC_COMMAND_RST:
-      /* Reset - cancel any delay, delete all windows */
-      window_list = 0xFF;       /* all windows... */
-
-      /* Delete window */
-      gst_cea708dec_for_each_window (decoder, window_list, NO_CHANGE,
-          "reset_window", gst_cea708dec_init_window);
-      return;
-
-    case CC_COMMAND_SPA:        /* Set pen attributes */
-      decoder->output_ignore = 2;       /* 2 byte parameter = pen attributes */
-      gst_cea708dec_set_pen_attributes (decoder, dtvcc_buffer, index + 1);
-      return;
-
-    case CC_COMMAND_SPC:        /* Set pen color */
-      decoder->output_ignore = 3;       /* 3 byte parameter = color & opacity */
-      gst_cea708dec_set_pen_color (decoder, dtvcc_buffer, index + 1);
-      return;
-
-    case CC_COMMAND_SPL:        /* Set pen location */
-      /* Set pen location - row, column address within the current window */
-      decoder->output_ignore = 2;       /* 2 byte parameter = row, col */
-      window->pen_row = dtvcc_buffer[index + 1] & 0xF;
-      window->pen_col = dtvcc_buffer[index + 2] & 0x3F;
-      GST_LOG ("Pen location: row=%d col=%d", window->pen_row, window->pen_col);
-      return;
-
-    case CC_COMMAND_SWA:        /* Set window attributes */
-      /* Set window attributes - color, word wrap, border, scroll effect, etc */
-      decoder->output_ignore = 4;       /* 4 byte parameter = window attributes */
-      gst_cea708dec_set_window_attributes (decoder, dtvcc_buffer, index + 1);
-      return;
-
-    case CC_COMMAND_DF0:        /* Define window */
-    case CC_COMMAND_DF1:
-    case CC_COMMAND_DF2:
-    case CC_COMMAND_DF3:
-    case CC_COMMAND_DF4:
-    case CC_COMMAND_DF5:
-    case CC_COMMAND_DF6:
-    case CC_COMMAND_DF7:
-    {
-      window_list = 0xFF;       /* all windows... */
-
-      /* set window - size, style, pen style, anchor position, etc.
*/ - decoder->output_ignore = 6; /* 6 byte parameter = window definition */ - decoder->current_window = c & 0x7; - gst_cea708dec_define_window (decoder, dtvcc_buffer, index + 1); - return; - } - } -} - -static void -get_cea708dec_bufcat (gpointer data, gpointer whole_buf) -{ - gchar *buf = whole_buf; - strcat ((gchar *) buf, data); - g_free (data); -} - -static gboolean -gst_cea708dec_render_text (Cea708Dec * decoder, GSList ** text_list, - gint length, guint window_id) -{ - gchar *out_str = NULL; - PangoAlignment align_mode; - PangoFontDescription *desc; - gchar *font_desc; - cea708Window *window = decoder->cc_windowswindow_id; - - if (length > 0) { - out_str = g_malloc0 (length + 1); - memset (out_str, 0, length + 1); - - g_slist_foreach (*text_list, get_cea708dec_bufcat, out_str); - GST_LOG ("rendering '%s'", out_str); - g_slist_free (*text_list); - window->layout = pango_layout_new (decoder->pango_context); - align_mode = gst_cea708dec_get_align_mode (window->justify_mode); - pango_layout_set_alignment (window->layout, (PangoAlignment) align_mode); - pango_layout_set_markup (window->layout, out_str, length); - if (!decoder->default_font_desc) - font_desc = g_strdup_printf ("%s %s", font_names0, pen_size_names1); - else - font_desc = g_strdup (decoder->default_font_desc); - desc = pango_font_description_from_string (font_desc); - if (desc) { - GST_INFO ("font description set: %s", font_desc); - pango_layout_set_font_description (window->layout, desc); - gst_cea708dec_adjust_values_with_fontdesc (window, desc); - pango_font_description_free (desc); - gst_cea708dec_render_pangocairo (window); - } else { - GST_ERROR ("font description parse failed: %s", font_desc); - } - g_free (font_desc); - g_free (out_str); - /* data freed in slist loop! 
- *g_slist_free_full (*text_list, g_free); */ - *text_list = NULL; - return TRUE; - } - - return FALSE; -} - -static void -gst_cea708dec_window_add_char (Cea708Dec * decoder, gunichar c) -{ - cea708Window *window = decoder->cc_windowsdecoder->current_window; - gint16 pen_row; - gint16 pen_col; - - /* Add one character to the current window, using current pen location. - * Wrap pen location if necessary */ - if (c == 0) /* NULL */ - return; - - if (c == 0x0E) { /* HCR,moves the pen location to the beginning of the current line and deletes its contents */ - for (pen_col = window->pen_col; pen_col >= 0; pen_col--) { - window->textwindow->pen_rowpen_col.c = ' '; - } - window->pen_col = 0; - return; - } - - if (c == 0x08) { /* BS */ - switch (window->print_direction) { - case PRINT_DIR_LEFT_TO_RIGHT: - if (window->pen_col) { - window->pen_col--; - } - break; - - case PRINT_DIR_RIGHT_TO_LEFT: - window->pen_col++; - break; - - case PRINT_DIR_TOP_TO_BOTTOM: - if (window->pen_row) { - window->pen_row--; - } - break; - - case PRINT_DIR_BOTTOM_TO_TOP: - window->pen_row++; - break; - } - pen_row = window->pen_row; - pen_col = window->pen_col; - window->textpen_rowpen_col.c = ' '; - return; - } - - if (c == 0x0C) { /* FF clears the screen and moves the pen location to (0,0) */ - window->pen_row = 0; - window->pen_col = 0; - gst_cea708dec_clear_window_text (decoder, decoder->current_window); - return; - } - - if (c == 0x0D) { - GST_DEBUG - ("carriage return, window->word_wrap=%d,window->scroll_direction=%d", - window->word_wrap, window->scroll_direction); - window->pen_col = 0; - window->pen_row++; - } - - if (window->pen_col >= window->column_count) { - window->pen_col = 0; - window->pen_row++; - } - /* Wrap row position if too large */ - if (window->pen_row >= window->row_count) { - if (window->scroll_direction == SCROLL_DIR_BOTTOM_TO_TOP) { - gst_cea708dec_scroll_window_up (decoder, decoder->current_window); - } - window->pen_row = window->row_count - 1; - GST_WARNING ("pen 
row exceed window row count,scroll up"); - } - - if ((c != '\r') && (c != '\n')) { - pen_row = window->pen_row; - pen_col = window->pen_col; - - GST_LOG ("text x=%d y=%d fgcolor=%d win=%d vis=%d '%c' 0x%02X", pen_col, - pen_row, window->pen_color.fg_color, decoder->current_window, - window->visible, c, c); - - /* Each cell in the window should get the current pen color and - * attributes as it is written */ - window->textpen_rowpen_col.c = c; - window->textpen_rowpen_col.justify_mode = window->justify_mode; - window->textpen_rowpen_col.pen_color = window->pen_color; - window->textpen_rowpen_col.pen_attributes = window->pen_attributes; - - switch (window->print_direction) { - case PRINT_DIR_LEFT_TO_RIGHT: - window->pen_col++; - break; - - case PRINT_DIR_RIGHT_TO_LEFT: - if (window->pen_col) { - window->pen_col--; - } - break; - - case PRINT_DIR_TOP_TO_BOTTOM: - window->pen_row++; - break; - - case PRINT_DIR_BOTTOM_TO_TOP: - if (window->pen_row) { - window->pen_row--; - } - break; - } /* switch (print_direction) */ - } -} - -static void -gst_cea708dec_process_c2 (Cea708Dec * decoder, guint8 * dtvcc_buffer, int index) -{ - guint8 c = dtvcc_bufferindex; - if (c <= 0x07) { - decoder->output_ignore = 1; - } else if (c >= 0x08 && c <= 0x0F) { - decoder->output_ignore = 2; - } else if (c >= 0x10 && c <= 0x17) { - decoder->output_ignore = 3; - } else if (c >= 0x18 && c <= 0x1F) { - decoder->output_ignore = 4; - } -} - -static void -gst_cea708dec_process_g2 (Cea708Dec * decoder, guint8 * dtvcc_buffer, int index) -{ - guint8 c = dtvcc_bufferindex; - gst_cea708dec_window_add_char (decoder, g2_tablec - 0x20); - decoder->output_ignore = 1; -} - -static void -gst_cea708dec_process_c3 (Cea708Dec * decoder, guint8 * dtvcc_buffer, int index) -{ - guint8 c = dtvcc_bufferindex; - int command_length = 0; - if (c >= 0x80 && c <= 0x87) { - decoder->output_ignore = 5; - } else if (c >= 0x88 && c <= 0x8F) { - decoder->output_ignore = 6; - } else if (c >= 0x90 && c <= 0x9F) { - 
command_length = dtvcc_bufferindex + 1 & 0x3F; - decoder->output_ignore = command_length + 2; - } -} - -static void -gst_cea708dec_process_g3 (Cea708Dec * decoder, guint8 * dtvcc_buffer, int index) -{ - gst_cea708dec_window_add_char (decoder, 0x5F); - decoder->output_ignore = 1; -} - -void -gst_cea708dec_set_video_width_height (Cea708Dec * decoder, gint width, - gint height) -{ - decoder->width = width; - decoder->height = height; -}
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/closedcaption/gstcea708decoder.h
Deleted
@@ -1,488 +0,0 @@
-/* GStreamer
- * Copyright (C) 2013 CableLabs, Louisville, CO 80027
- * Copyright (C) 2015 Samsung Electronics Co., Ltd.
- * @Author: Chengjun Wang <cjun.wang@samsung.com>
- *
- * This library is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Library General Public
- * License as published by the Free Software Foundation; either
- * version 2 of the License, or (at your option) any later version.
- *
- * This library is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Library General Public License for more details.
- *
- * You should have received a copy of the GNU Library General Public
- * License along with this library; if not, write to the
- * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
- * Boston, MA 02110-1301, USA.
- */
-
-
-#ifndef __GST_CEA708_DEC_H__
-#define __GST_CEA708_DEC_H__
-
-#include <gst/gst.h>
-#include <pango/pangocairo.h>
-
-G_BEGIN_DECLS
-/* from ATSC A/53 Part 4
- * DTVCC packets are 128 bytes MAX, length is only 6 bits, header is 2 bytes,
- * the last byte is flag-fill, that leaves 125 possible bytes of data to be
- * represented in 6 bits, hence the length encoding
- */
-/* should never be more than 128 */
-#define DTVCC_LENGTH 128
-#define DTVCC_PKT_SIZE(sz_byte) (((sz_byte) == 0) ? 127 : ((sz_byte) * 2) - 1)
-#define CCTYPE_VALID_MASK 0x04
-#define CCTYPE_TYPE_MASK 0x03
-#define NUM_608_CCTYPES 2
-/* CEA-708-B commands */
-/* EndOfText */
-#define CC_COMMAND_ETX 0x03
-/* SetCurrentWindow0 */
-#define CC_COMMAND_CW0 0x80
-#define CC_COMMAND_CW1 0x81
-#define CC_COMMAND_CW2 0x82
-#define CC_COMMAND_CW3 0x83
-#define CC_COMMAND_CW4 0x84
-#define CC_COMMAND_CW5 0x85
-#define CC_COMMAND_CW6 0x86
-#define CC_COMMAND_CW7 0x87
-/* ClearWindows */
-#define CC_COMMAND_CLW 0x88
-/* DisplayWindows */
-#define CC_COMMAND_DSW 0x89
-/* HideWindows */
-#define CC_COMMAND_HDW 0x8A
-/* ToggleWindows */
-#define CC_COMMAND_TGW 0x8B
-/* DeleteWindows */
-#define CC_COMMAND_DLW 0x8C
-/* Delay */
-#define CC_COMMAND_DLY 0x8D
-/* DelayCancel */
-#define CC_COMMAND_DLC 0x8E
-/* Reset */
-#define CC_COMMAND_RST 0x8F
-/* SetPenAttributes */
-#define CC_COMMAND_SPA 0x90
-/* SetPenColor */
-#define CC_COMMAND_SPC 0x91
-/* SetPenLocation */
-#define CC_COMMAND_SPL 0x92
-/* SetWindowAttributes */
-#define CC_COMMAND_SWA 0x97
-/* DefineWindow0 */
-#define CC_COMMAND_DF0 0x98
-#define CC_COMMAND_DF1 0x99
-#define CC_COMMAND_DF2 0x9A
-#define CC_COMMAND_DF3 0x9B
-#define CC_COMMAND_DF4 0x9C
-#define CC_COMMAND_DF5 0x9D
-#define CC_COMMAND_DF6 0x9E
-#define CC_COMMAND_DF7 0x9F
-/* music note unicode */
-#define CC_SPECIAL_CODE_MUSIC_NOTE 0x266a
-#define CC_UTF8_MAX_LENGTH 6
-#define CC_MAX_CODE_SET_SIZE 96
-/* Per CEA-708 spec there may be 8 CC windows */
-#define MAX_708_WINDOWS 8
-/* Each 708 window contains a grid of character positions. These are the
- * max limits defined, but each window has a row/col count which is typically
- * smaller than the limits. Note this is just one window, not the entire screen.
- */
-/* max row count */
-#define WINDOW_MAX_ROWS 15
-/* max column width */
-#define WINDOW_MAX_COLS 42
-/* The linebuffer contains text for 1 line pango text corresponding to 1 line of 708 text.
- * The linebuffer could be a lot larger than the window text because of required markup.
- * example <u> </u> for underline.
- * The size given is an estimate, to be changed if determined that a larger
- * buffer is needed
- */
-#define LINEBUFFER_SIZE 1024
-/* The screen width/height defined by 708 - not character units, these are
- * used only to determine the position of the anchor on the screen.
- */
-#define SCREEN_WIDTH_16_9 209
-#define SCREEN_HEIGHT_16_9 74
-#define SCREEN_WIDTH_4_3 159
-#define SCREEN_HEIGHT_4_3 74
-
-/* raw bytes of "define window" command */
-#define WIN_DEF_SIZE 6
-/* The maximum size of a 708 window in character units. This is used to
- * calculate the position of windows based on window anchor positions.
- */
-#define SCREEN_HEIGHT_708 15
-#define SCREEN_WIDTH_708 32
-/* cea708 minimum color list */
-#define CEA708_COLOR_INVALID 0xFF
-#define CEA708_COLOR_BLACK 0x00
-#define CEA708_COLOR_WHITE 0x2A
-#define CEA708_COLOR_RED 0x20
-#define CEA708_COLOR_GREEN 0x08
-#define CEA708_COLOR_BLUE 0x02
-#define CEA708_COLOR_YELLOW 0x28
-#define CEA708_COLOR_MAGENTA 0x22
-#define CEA708_COLOR_CYAN 0x0A
-#define CEA708_PANGO_SPAN_MARKUP_START "<span"
-#define CEA708_PANGO_SPAN_MARKUP_END "</span>"
-#define CEA708_PANGO_SPAN_ATTRIBUTES_UNDERLINE_SINGLE " underline='single'"
-#define CEA708_PANGO_SPAN_ATTRIBUTES_STYLE_ITALIC " style='italic'"
-#define CEA708_PANGO_SPAN_ATTRIBUTES_FONT " font_desc="
-#define CEA708_PANGO_SPAN_ATTRIBUTES_FOREGROUND " foreground="
-#define CEA708_PANGO_SPAN_ATTRIBUTES_BACKGROUND " background="
-#define MINIMUM_OUTLINE_OFFSET 1.0
-#define WINDOW_IN_LIST_IS_ACTIVE(list) (list & 0x1)
-typedef struct _Cea708Dec Cea708Dec;
-
-typedef enum
-{
-  COLOR_TYPE_BLACK = 0,
-  COLOR_TYPE_WHITE,
-  COLOR_TYPE_RED,
-  COLOR_TYPE_GREEN,
-  COLOR_TYPE_BLUE,
-  COLOR_TYPE_YELLOW,
-  COLOR_TYPE_MAGENTA,
-  COLOR_TYPE_CYAN,
-  COLOR_TYPE_RESEVER
-} Cea708ColorType;
-
-typedef enum
-{
-  NO_CHANGE = 0,
-  SWITCH_TO_HIDE,
-  SWITCH_TO_SHOW,
-  TOGGLE
-} VisibilityControl;
-
-typedef enum
-{
-  SOLID = 0,
-  FLASH,
-  TRANSLUCENT,
-  TRANSPARENT
-} Opacity;
-
-typedef enum
-{
-  WIN_STYLE_NORMAL = 1,
-  WIN_STYLE_TRANSPARENT,
-  WIN_STYLE_NORMAL_CENTERED,
-  WIN_STYLE_NORMAL_WORD_WRAP,
-  WIN_STYLE_TRANSPARENT_WORD_WRAP,
-  WIN_STYLE_TRANSPARENT_CENTERED,
-  WIN_STYLE_ROTATED
-} WindowStyle;
-
-typedef enum
-{
-  PEN_STYLE_DEFAULT = 1,
-  PEN_STYLE_MONO_SERIF,
-  PEN_STYLE_PROP_SERIF,
-  PEN_STYLE_MONO_SANS,
-  PEN_STYLE_PROP_SANS,
-  PEN_STYLE_MONO_SANS_TRANSPARENT,
-  PEN_STYLE_PROP_SANS_TRANSPARENT
-} PenStyle;
-
-typedef enum
-{
-  ANCHOR_PT_TOP_LEFT = 0,
-  ANCHOR_PT_TOP_CENTER,
-  ANCHOR_PT_TOP_RIGHT,
-  ANCHOR_PT_MIDDLE_LEFT,
-  ANCHOR_PT_CENTER,
-  ANCHOR_PT_MIDDLE_RIGHT,
-  ANCHOR_PT_BOTTOM_LEFT,
-  ANCHOR_PT_BOTTOM_CENTER,
-  ANCHOR_PT_BOTTOM_RIGHT,
-} AnchorPoint;
-
-typedef enum
-{
-  TAG_DIALOG = 0,
-  TAG_SPEAKER_ID,
-  TAG_ELECTRONIC_VOICE,
-  TAG_ALT_LANGUAGE_DIALOG,
-  TAG_VOICEOVER,
-  TAG_AUDIBLE_TRANSLATION,
-  TAG_SUBTITLE_TRANSLATION,
-  TAG_VOICE_QUALITY_DESCRIPTION,
-  TAG_SONG_LYRICS,
-  TAG_SOUND_EFFECT_DESCRIPTION,
-  TAG_MUSICAL_SCORE_DESCRIPTION,
-  TAG_EXPLETIVE,
-  TAG_UNDEF1,
-  TAG_UNDEF2,
-  TAG_UNDEF3,
-  TAG_NOT_DISPLAYED
-} TagType;
-
-typedef enum
-{
-  JUSTIFY_LEFT = 0,
-  JUSTIFY_RIGHT,
-  JUSTIFY_CENTER,
-  JUSTIFY_FULL
-} JUSTIFY_MODE;
-
-typedef enum
-{
-  PRINT_DIR_LEFT_TO_RIGHT = 0,
-  PRINT_DIR_RIGHT_TO_LEFT,
-  PRINT_DIR_TOP_TO_BOTTOM,
-  PRINT_DIR_BOTTOM_TO_TOP
-} PRINT_DIRECTION;
-
-typedef enum
-{
-  SCROLL_DIR_LEFT_TO_RIGHT = 0,
-  SCROLL_DIR_RIGHT_TO_LEFT,
-  SCROLL_DIR_TOP_TO_BOTTOM,
-  SCROLL_DIR_BOTTOM_TO_TOP
-} SCROLL_DIRECTION;
-
-typedef enum
-{
-  DISPLAY_EFFECT_SNAP = 0,
-  DISPLAY_EFFECT_FADE,
-  DISPLAY_EFFECT_WIPE
-} DisplayEffect;
-
-typedef enum
-{
-  EFFECT_DIR_LEFT_TO_RIGHT = 0,
-  EFFECT_DIR_RIGHT_TO_LEFT,
-  EFFECT_DIR_TOP_TO_BOTTOM,
-  EFFECT_DIR_BOTTOM_TO_TOP
-} EFFECT_DIRECTION;
-
-typedef enum
-{
-  BORDER_TYPE_NONE = 0,
-  BORDER_TYPE_RAISED,
-  BORDER_TYPE_DEPRESSED,
-  BORDER_TYPE_UNIFORM
-} BORDER_TYPE;
-
-typedef enum
-{
-  PEN_SIZE_SMALL = 0,
-  PEN_SIZE_STANDARD,
-  PEN_SIZE_LARGE,
-  PEN_SIZE_INVALID
-} PenSize;
-
-typedef enum
-{
-  PEN_OFFSET_SUBSCRIPT = 0,
-  PEN_OFFSET_NORMAL,
-  PEN_OFFSET_SUPERSCRIPT,
-  PEN_OFFSET_INVALID
-} PenOffset;
-
-typedef enum
-{
-  EDGE_TYPE_NONE = 0,
-  EDGE_TYPE_RAISED,
-  EDGE_TYPE_DEPRESSED,
-  EDGE_TYPE_UNIFORM,
-  EDGE_TYPE_LEFT_DROP_SHADOW,
-  EDGE_TYPE_RIGHT_DROP_SHADOW,
-  EDGE_TYPE_INVALID_1,
-  EDGE_TYPE_INVALID_2
-} EdgeType;
-
-typedef enum
-{
-  FONT_STYLE_DEFAULT = 0,
-  FONT_STYLE_MONO_SERIF,
-  FONT_STYLE_PROP_SERIF,
-  FONT_STYLE_MONO_SANS,
-  FONT_STYLE_PROP_SANS,
-  FONT_STYLE_CASUAL,
-  FONT_STYLE_CURSIVE,
-  FONT_STYLE_SMALLCAPS
-} FontStyle;
-
-typedef struct
-{
-  guint8 fg_color;
-  guint8 fg_opacity;
-  guint8 bg_color;
-  guint8 bg_opacity;
-  guint8 edge_color;
-} cea708PenColor;
-
-typedef struct
-{
-  gboolean span_start_flag;
-  gboolean span_end_flag;
-  gboolean span_txt_flag;
-
-  gboolean span_next_flag;
-
-  gboolean underline;
-  gboolean italics;
-
-  guint8 size;
-  guint8 fg_color;
-  guint8 bg_color;
-  FontStyle font_style;
-} cea708PangoSpanControl;
-
-typedef struct
-{
-  PenSize pen_size;
-  FontStyle font_style;
-  TagType text_tag;
-  PenOffset offset;
-  gboolean italics;
-  gboolean underline;
-  EdgeType edge_type;
-} cea708PenAttributes;
-
-/* The char records one cell location in the window, with the character and all of its attributes */
-typedef struct
-{
-  cea708PenColor pen_color;
-  cea708PenAttributes pen_attributes;
-  guint8 justify_mode;
-  gunichar c;
-} cea708char;
-
-
-/* This struct keeps track of one cea-708 CC window. There are up to 8. As new
- * windows are created, the text they contain is visible on the screen (if the
- * window visible flag is set). When a window is deleted, all text within the
- * window is erased from the screen. Windows may be initialized and made visible
- * then hidden. Each transition should cause new text cues to be emitted as
- * text is displayed and removed from the screen.
- */
-typedef struct
-{
-  /* The current attributes which will be used for the next text string */
-  cea708PenColor pen_color;
-  cea708PenAttributes pen_attributes;
-
-  /* true to indicate the window has not been created.
-   * set to true on delete, false on subsequent define command
-   * if true, reset pen position to 0,0 on window creation
-   */
-  gboolean deleted;
-
-  /* Text position */
-  guint16 pen_row;
-  guint16 pen_col;
-  /* window display priority */
-  guint8 priority;
-  /* window position on screen 0-8 */
-  guint8 anchor_point;
-  /* 1 = anchor vertical/horizontal coordinates, 0 = physical screen coordinate, aka. rp */
-  guint8 relative_position;
-  /* vertical position of windows anchor point, 0-74 or if rp=1 then 0-99 */
-  guint8 anchor_vertical;
-  /* horz position of window anchor point, 0-209(16:9) 0-159(4:3) or if rp=1 then 0-99 */
-  guint8 anchor_horizontal;
-  /* vert position of upper left corner of window */
-  gfloat screen_vertical;
-  /* horz position of upper left corner of window */
-  gfloat screen_horizontal;
-  /* virtual rows of text - 1, (ex. rc=2 means there are 3 rows) */
-  guint8 row_count;
-  /* virtual columns of text, 0-41(16:9) 0-31(4:3) - 1 */
-  guint8 column_count;
-  /* 1 = fixes #rows of caption text, 0 = more rows may be added */
-  guint8 row_lock;
-  /* 1 = fixes #columns of caption text, 0 = more columns may be added */
-  guint8 column_lock;
-  /* TRUE = window is visible, FALSE = window not visible */
-  gboolean visible;
-  /* specifies 1 of 7 static preset window attribute styles, during window create,
-   * 0 = use style #1, during window update, 0 = no window attributes will be changed
-   */
-  guint8 style_id;
-  /* specifies 1 of 7 static preset pen attributes, during window create,
-   * 0 = use pen style #1, during window update, 0 = do not change pen attributes
-   */
-  guint8 pen_style_id;
-  /* timestamp when this window became visible */
-  guint64 start_time;
-
-  /* window attributes */
-  guint8 justify_mode;
-  guint8 print_direction;
-  guint8 scroll_direction;
-  gboolean word_wrap;
-  guint8 display_effect;
-  guint8 effect_direction;
-  guint8 effect_speed;
-  guint8 fill_color;
-  guint8 fill_opacity;
-  guint8 border_type;
-  guint8 border_color;
-
-  /* Character position offsets for the upper left corner of the window */
-  guint v_offset;
-  guint h_offset;
-
-  /* The char array that text is written into, using the current pen position */
-  cea708char text[WINDOW_MAX_ROWS][WINDOW_MAX_COLS];
-
-  PangoLayout *layout;
-  gdouble shadow_offset;
-  gdouble outline_offset;
-  guchar *text_image;
-  gint image_width;
-  gint image_height;
-  gboolean updated;
-} cea708Window;
-
-struct _Cea708Dec
-{
-  /* output data storage */
-  GSList *text_list;
-
-  /* simulation of 708 CC windows */
-  cea708Window *cc_windows[MAX_708_WINDOWS];
-  guint8 current_window;
-  gchar *default_font_desc;
-  PangoContext *pango_context;
-
-  /* a counter used to ignore bytes in CC text stream following commands */
-  gint8 output_ignore;
-  /* most recent timestamp from userdata */
-  guint64 current_time;
-
-  /* desired_service selects the service that will be decoded. If
-   * desired_service = -1 (default) no decoding based on service number will
-   * occur. Service #0 is reserved, and the valid range of service numbers
-   * is 1-7. with 1 being primary caption service and 2 being the secondary
-   * language service. If service_number is 7, then the extended_service_number
-   * is added and used instead of the service_number */
-  gint8 desired_service;
-
-  gboolean use_ARGB;
-  gint width;
-  gint height;
-};
-
-Cea708Dec *gst_cea708dec_create (PangoContext * pango_context);
-
-void gst_cea708dec_free (Cea708Dec *dec);
-
-void
-gst_cea708dec_set_service_number (Cea708Dec * decoder, gint8 desired_service);
-gboolean
-gst_cea708dec_process_dtvcc_packet (Cea708Dec * decoder, guint8 * dtvcc_buffer, gsize dtvcc_size);
-void
-gst_cea708dec_set_video_width_height (Cea708Dec * decoder, gint width, gint height);
-void gst_cea708_decoder_init_debug(void);
-
-G_END_DECLS
-#endif /* __GST_CEA708_DEC_H__ */
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/closedcaption/gstceaccoverlay.c
Deleted
@@ -1,1987 +0,0 @@
-/* GStreamer
- * Copyright (C) 2015 Samsung Electronics Co., Ltd.
- * @Author: Chengjun Wang <cjun.wang@samsung.com>
- *
- * This library is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Library General Public
- * License as published by the Free Software Foundation; either
- * version 2 of the License, or (at your option) any later version.
- *
- * This library is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Library General Public License for more details.
- *
- * You should have received a copy of the GNU Library General Public
- * License along with this library; if not, write to the
- * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
- * Boston, MA 02110-1301, USA.
- */
-
-/**
- * SECTION:element-cc708overlay
- * @title: cc708overlay
- *
- * CEA-708 overlay element.
- *
- * This element is deprecated and will be removed in the future. Use
- * #cea708overlay instead.
- */
-
-#ifdef HAVE_CONFIG_H
-#include <config.h>
-#endif
-
-#include <gst/video/video.h>
-#include <gst/video/gstvideometa.h>
-#include <gst/base/gstbytereader.h>
-
-#include "gstceaccoverlay.h"
-#include <string.h>
-
-
-#define GST_CAT_DEFAULT gst_cea_cc_overlay_debug
-GST_DEBUG_CATEGORY (gst_cea_cc_overlay_debug);
-
-
-#define DEFAULT_PROP_FONT_DESC ""
-#define DEFAULT_PROP_SILENT FALSE
-#define DEFAULT_PROP_SERVICE_NUMBER 1
-#define DEFAULT_PROP_WINDOW_H_POS GST_CEA_CC_OVERLAY_WIN_H_CENTER
-
-enum
-{
-  PROP_0,
-  PROP_FONT_DESC,
-  PROP_SILENT,
-  PROP_SERVICE_NUMBER,
-  PROP_WINDOW_H_POS,
-  PROP_LAST
-};
-
-#if G_BYTE_ORDER == G_LITTLE_ENDIAN
-# define CAIRO_ARGB_A 3
-# define CAIRO_ARGB_R 2
-# define CAIRO_ARGB_G 1
-# define CAIRO_ARGB_B 0
-#else
-# define CAIRO_ARGB_A 0
-# define CAIRO_ARGB_R 1
-# define CAIRO_ARGB_G 2
-# define CAIRO_ARGB_B 3
-#endif
-
-#define CAIRO_UNPREMULTIPLY(a,r,g,b) G_STMT_START { \
-  b = (a > 0) ? MIN ((b * 255 + a / 2) / a, 255) : 0; \
-  g = (a > 0) ? MIN ((g * 255 + a / 2) / a, 255) : 0; \
-  r = (a > 0) ? MIN ((r * 255 + a / 2) / a, 255) : 0; \
-} G_STMT_END
-
-
-#define VIDEO_FORMATS GST_VIDEO_OVERLAY_COMPOSITION_BLEND_FORMATS
-
-#define CC_OVERLAY_CAPS GST_VIDEO_CAPS_MAKE (VIDEO_FORMATS)
-
-#define CC_OVERLAY_ALL_CAPS CC_OVERLAY_CAPS ";" \
-    GST_VIDEO_CAPS_MAKE_WITH_FEATURES ("ANY", GST_VIDEO_FORMATS_ALL)
-
-static GstStaticCaps sw_template_caps = GST_STATIC_CAPS (CC_OVERLAY_CAPS);
-
-static GstStaticPadTemplate src_template_factory =
-GST_STATIC_PAD_TEMPLATE ("src",
-    GST_PAD_SRC,
-    GST_PAD_ALWAYS,
-    GST_STATIC_CAPS (CC_OVERLAY_ALL_CAPS)
-    );
-
-static GstStaticPadTemplate video_sink_template_factory =
-GST_STATIC_PAD_TEMPLATE ("video_sink",
-    GST_PAD_SINK,
-    GST_PAD_ALWAYS,
-    GST_STATIC_CAPS (CC_OVERLAY_ALL_CAPS)
-    );
-
-static GstStaticPadTemplate cc_sink_template_factory =
-GST_STATIC_PAD_TEMPLATE ("cc_sink",
-    GST_PAD_SINK,
-    GST_PAD_ALWAYS,
-    GST_STATIC_CAPS
-    ("closedcaption/x-cea-708, format={ (string) cdp, (string) cc_data }")
-    );
-
-
-#define GST_TYPE_CC_OVERLAY_WIN_H_POS (gst_cea_cc_overlay_h_pos_get_type())
-static GType
-gst_cea_cc_overlay_h_pos_get_type (void)
-{
-  static GType cc_overlay_win_h_pos_type = 0;
-  static const GEnumValue cc_overlay_win_h_pos[] = {
-    {GST_CEA_CC_OVERLAY_WIN_H_LEFT, "left", "left"},
-    {GST_CEA_CC_OVERLAY_WIN_H_CENTER, "center", "center"},
-    {GST_CEA_CC_OVERLAY_WIN_H_RIGHT, "right", "right"},
-    {GST_CEA_CC_OVERLAY_WIN_H_AUTO, "auto", "auto"},
-    {0, NULL, NULL},
-  };
-
-  if (!cc_overlay_win_h_pos_type) {
-    cc_overlay_win_h_pos_type =
-        g_enum_register_static ("GstCeaCcOverlayWinHPos", cc_overlay_win_h_pos);
-  }
-  return cc_overlay_win_h_pos_type;
-}
-
-
-#define GST_CEA_CC_OVERLAY_GET_LOCK(ov) (&GST_CEA_CC_OVERLAY (ov)->lock)
-#define GST_CEA_CC_OVERLAY_GET_COND(ov) (&GST_CEA_CC_OVERLAY (ov)->cond)
-#define GST_CEA_CC_OVERLAY_LOCK(ov) (g_mutex_lock (GST_CEA_CC_OVERLAY_GET_LOCK (ov)))
-#define GST_CEA_CC_OVERLAY_UNLOCK(ov) (g_mutex_unlock (GST_CEA_CC_OVERLAY_GET_LOCK (ov)))
-#define GST_CEA_CC_OVERLAY_WAIT(ov) (g_cond_wait (GST_CEA_CC_OVERLAY_GET_COND (ov), GST_CEA_CC_OVERLAY_GET_LOCK (ov)))
-#define GST_CEA_CC_OVERLAY_SIGNAL(ov) (g_cond_signal (GST_CEA_CC_OVERLAY_GET_COND (ov)))
-#define GST_CEA_CC_OVERLAY_BROADCAST(ov)(g_cond_broadcast (GST_CEA_CC_OVERLAY_GET_COND (ov)))
-
-static GstElementClass *parent_class = NULL;
-static void gst_base_cea_cc_overlay_base_init (gpointer g_class);
-static void gst_base_cea_cc_overlay_class_init (GstCeaCcOverlayClass * klass);
-static void gst_base_cea_cc_overlay_init (GstCeaCcOverlay * overlay,
-    GstCeaCcOverlayClass * klass);
-static GstStateChangeReturn gst_cea_cc_overlay_change_state (GstElement *
-    element, GstStateChange transition);
-static GstCaps *gst_cea_cc_overlay_get_videosink_caps (GstPad * pad,
-    GstCeaCcOverlay * overlay, GstCaps * filter);
-static GstCaps *gst_cea_cc_overlay_get_src_caps (GstPad * pad,
-    GstCeaCcOverlay * overlay, GstCaps * filter);
-static gboolean gst_cea_cc_overlay_setcaps (GstCeaCcOverlay * overlay,
-    GstCaps * caps);
-static gboolean gst_cea_cc_overlay_src_event (GstPad * pad, GstObject * parent,
-    GstEvent * event);
-static gboolean gst_cea_cc_overlay_src_query (GstPad * pad, GstObject * parent,
-    GstQuery * query);
-
-static gboolean gst_cea_cc_overlay_video_event (GstPad * pad,
-    GstObject * parent, GstEvent * event);
-static gboolean gst_cea_cc_overlay_video_query (GstPad * pad,
-    GstObject * parent, GstQuery * query);
-static GstFlowReturn gst_cea_cc_overlay_video_chain (GstPad * pad,
-    GstObject * parent, GstBuffer * buffer);
-
-static gboolean gst_cea_cc_overlay_cc_event (GstPad * pad,
-    GstObject * parent, GstEvent * event);
-static GstFlowReturn gst_cea_cc_overlay_cc_chain (GstPad * pad,
-    GstObject * parent, GstBuffer * buffer);
-static GstPadLinkReturn gst_cea_cc_overlay_cc_pad_link (GstPad * pad,
-    GstObject * parent, GstPad * peer);
-static void gst_cea_cc_overlay_cc_pad_unlink (GstPad * pad, GstObject * parent);
-static void gst_cea_cc_overlay_pop_text (GstCeaCcOverlay * overlay);
-static void gst_cea_cc_overlay_finalize (GObject * object);
-static void gst_cea_cc_overlay_set_property (GObject * object, guint prop_id,
-    const GValue * value, GParamSpec * pspec);
-static void gst_cea_cc_overlay_get_property (GObject * object, guint prop_id,
-    GValue * value, GParamSpec * pspec);
-
-static gboolean gst_cea_cc_overlay_can_handle_caps (GstCaps * incaps);
-
-GType
-gst_cea_cc_overlay_get_type (void)
-{
-  static GType type = 0;
-
-  if (g_once_init_enter ((gsize *) & type)) {
-    static const GTypeInfo info = {
-      sizeof (GstCeaCcOverlayClass),
-      (GBaseInitFunc) gst_base_cea_cc_overlay_base_init,
-      NULL,
-      (GClassInitFunc) gst_base_cea_cc_overlay_class_init,
-      NULL,
-      NULL,
-      sizeof (GstCeaCcOverlay),
-      0,
-      (GInstanceInitFunc) gst_base_cea_cc_overlay_init,
-    };
-
-    g_once_init_leave ((gsize *) & type,
-        g_type_register_static (GST_TYPE_ELEMENT, "GstCeaCcOverlay", &info, 0));
-  }
-
-  return type;
-}
-
-GST_ELEMENT_REGISTER_DEFINE (cc708overlay, "cc708overlay",
-    GST_RANK_PRIMARY, GST_TYPE_CEA_CC_OVERLAY);
-
-static void
-gst_base_cea_cc_overlay_base_init (gpointer g_class)
-{
-  GstCeaCcOverlayClass *klass = GST_CEA_CC_OVERLAY_CLASS (g_class);
-  PangoFontMap *fontmap;
-
-  /* Only lock for the subclasses here, the base class
-   * doesn't have this mutex yet and it's not necessary
-   * here */
-  fontmap = pango_cairo_font_map_get_default ();
-  klass->pango_context =
-      pango_font_map_create_context (PANGO_FONT_MAP (fontmap));
-}
-
-static void
-gst_base_cea_cc_overlay_class_init (GstCeaCcOverlayClass * klass)
-{
-  GObjectClass *gobject_class;
-  GstElementClass *gstelement_class;
-
-  gobject_class = (GObjectClass *) klass;
-  gstelement_class = (GstElementClass *) klass;
-
-  GST_DEBUG_CATEGORY_INIT (gst_cea_cc_overlay_debug, "cc708overlay", 0,
-      "cc708overlay");
-
-  parent_class = g_type_class_peek_parent (klass);
-
-  gobject_class->finalize = gst_cea_cc_overlay_finalize;
-  gobject_class->set_property = gst_cea_cc_overlay_set_property;
-  gobject_class->get_property = gst_cea_cc_overlay_get_property;
-
-  gst_element_class_add_pad_template (gstelement_class,
-      gst_static_pad_template_get (&src_template_factory));
-  gst_element_class_add_pad_template (gstelement_class,
-      gst_static_pad_template_get (&video_sink_template_factory));
-  gst_element_class_add_pad_template (gstelement_class,
-      gst_static_pad_template_get (&cc_sink_template_factory));
-
-  gstelement_class->change_state =
-      GST_DEBUG_FUNCPTR (gst_cea_cc_overlay_change_state);
-
-  g_object_class_install_property (G_OBJECT_CLASS (klass), PROP_SERVICE_NUMBER,
-      g_param_spec_int ("service-number", "service-number",
-          "Service number. Service 1 is designated as the Primary Caption Service,"
-          " Service 2 is the Secondary Language Service.",
-          -1, 63, DEFAULT_PROP_SERVICE_NUMBER,
-          G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));
-
-  g_object_class_install_property (G_OBJECT_CLASS (klass), PROP_WINDOW_H_POS,
-      g_param_spec_enum ("window-h-pos", "window-h-pos",
-          "Window's Horizontal position", GST_TYPE_CC_OVERLAY_WIN_H_POS,
-          DEFAULT_PROP_WINDOW_H_POS,
-          G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));
-
-  g_object_class_install_property (G_OBJECT_CLASS (klass), PROP_FONT_DESC,
-      g_param_spec_string ("font-desc", "font description",
-          "Pango font description of font to be used for rendering.\n"
-          "See documentation of pango_font_description_from_string for syntax.\n"
-          "this will override closed caption stream specified font style/pen size.",
-          DEFAULT_PROP_FONT_DESC, G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));
-
-  /**
-   * GstCeaCcOverlay:silent:
-   *
-   * If set, no text is rendered. Useful to switch off text rendering
-   * temporarily without removing the textoverlay element from the pipeline.
-   */
-  /* FIXME 0.11: rename to "visible" or "text-visible" or "render-text" */
-  g_object_class_install_property (G_OBJECT_CLASS (klass), PROP_SILENT,
-      g_param_spec_boolean ("silent", "silent",
-          "Whether to render the text string",
-          DEFAULT_PROP_SILENT,
-          G_PARAM_READWRITE | GST_PARAM_CONTROLLABLE | G_PARAM_STATIC_STRINGS));
-
-  gst_element_class_set_static_metadata (gstelement_class,
-      "Closed Caption overlay", "Mixer/Video/Overlay/Subtitle",
-      "Decode cea608/cea708 data and overlay on proper position of a video buffer",
-      "Chengjun Wang <cjun.wang@samsung.com>");
-  gst_cea708_decoder_init_debug ();
-
-  gst_type_mark_as_plugin_api (GST_TYPE_CC_OVERLAY_WIN_H_POS, 0);
-
-}
-
-static void
-gst_cea_cc_overlay_finalize (GObject * object)
-{
-  GstCeaCcOverlay *overlay = GST_CEA_CC_OVERLAY (object);
-
-  if (overlay->current_composition) {
-    gst_video_overlay_composition_unref (overlay->current_composition);
-    overlay->current_composition = NULL;
-  }
-  if (overlay->next_composition) {
-    gst_video_overlay_composition_unref (overlay->next_composition);
-    overlay->next_composition = NULL;
-  }
-
-  gst_cea708dec_free (overlay->decoder);
-  overlay->decoder = NULL;
-
-  g_mutex_clear (&overlay->lock);
-  g_cond_clear (&overlay->cond);
-
-  G_OBJECT_CLASS (parent_class)->finalize (object);
-}
-
-static void
-gst_base_cea_cc_overlay_init (GstCeaCcOverlay * overlay,
-    GstCeaCcOverlayClass * klass)
-{
-  GstPadTemplate *template;
-  overlay->decoder = gst_cea708dec_create (GST_CEA_CC_OVERLAY_GET_CLASS
-      (overlay)->pango_context);
-
-  /* video sink */
-  template = gst_static_pad_template_get (&video_sink_template_factory);
-  overlay->video_sinkpad = gst_pad_new_from_template (template, "video_sink");
-  gst_object_unref (template);
-  gst_pad_set_event_function (overlay->video_sinkpad,
-      GST_DEBUG_FUNCPTR (gst_cea_cc_overlay_video_event));
-  gst_pad_set_chain_function (overlay->video_sinkpad,
-      GST_DEBUG_FUNCPTR (gst_cea_cc_overlay_video_chain));
-  gst_pad_set_query_function (overlay->video_sinkpad,
-      GST_DEBUG_FUNCPTR (gst_cea_cc_overlay_video_query));
-  GST_PAD_SET_PROXY_ALLOCATION (overlay->video_sinkpad);
-  gst_element_add_pad (GST_ELEMENT (overlay), overlay->video_sinkpad);
-
-  template =
-      gst_element_class_get_pad_template (GST_ELEMENT_CLASS (klass), "cc_sink");
-  if (template) {
-    /* text sink */
-    overlay->cc_sinkpad = gst_pad_new_from_template (template, "cc_sink");
-
-    gst_pad_set_event_function (overlay->cc_sinkpad,
-        GST_DEBUG_FUNCPTR (gst_cea_cc_overlay_cc_event));
-    gst_pad_set_chain_function (overlay->cc_sinkpad,
-        GST_DEBUG_FUNCPTR (gst_cea_cc_overlay_cc_chain));
-    gst_pad_set_link_function (overlay->cc_sinkpad,
-        GST_DEBUG_FUNCPTR (gst_cea_cc_overlay_cc_pad_link));
-    gst_pad_set_unlink_function (overlay->cc_sinkpad,
-        GST_DEBUG_FUNCPTR (gst_cea_cc_overlay_cc_pad_unlink));
-    gst_element_add_pad (GST_ELEMENT (overlay), overlay->cc_sinkpad);
-  }
-
-  /* (video) source */
-  template = gst_static_pad_template_get (&src_template_factory);
-  overlay->srcpad = gst_pad_new_from_template (template, "src");
-  gst_object_unref (template);
-  gst_pad_set_event_function (overlay->srcpad,
-      GST_DEBUG_FUNCPTR (gst_cea_cc_overlay_src_event));
-  gst_pad_set_query_function (overlay->srcpad,
-      GST_DEBUG_FUNCPTR (gst_cea_cc_overlay_src_query));
-  gst_element_add_pad (GST_ELEMENT (overlay), overlay->srcpad);
-
-
-  overlay->silent = DEFAULT_PROP_SILENT;
-  overlay->need_update = TRUE;
-  overlay->current_composition = NULL;
-  overlay->next_composition = NULL;
-  overlay->cc_pad_linked = FALSE;
-  overlay->current_comp_start_time = GST_CLOCK_TIME_NONE;
-  overlay->next_comp_start_time = GST_CLOCK_TIME_NONE;
-  overlay->cea608_index[0] = 0;
-  overlay->cea608_index[1] = 0;
-  overlay->cea708_index = 0;
-  overlay->default_window_h_pos = DEFAULT_PROP_WINDOW_H_POS;
-
-  g_mutex_init (&overlay->lock);
-  g_cond_init (&overlay->cond);
-  gst_segment_init (&overlay->segment, GST_FORMAT_TIME);
-
-  g_warning
-      ("cc708overlay is deprecated and will be removed in the
future. Use cea708overlay instead."); -} - -/* only negotiate/query video overlay composition support for now */ -static gboolean -gst_cea_cc_overlay_negotiate (GstCeaCcOverlay * overlay, GstCaps * caps) -{ - GstQuery *query; - gboolean attach = FALSE; - gboolean caps_has_meta = TRUE; - gboolean ret; - GstCapsFeatures *f; - GstCaps *original_caps; - gboolean original_has_meta = FALSE; - gboolean allocation_ret = TRUE; - - GST_DEBUG_OBJECT (overlay, "performing negotiation"); - - if (!caps) - caps = gst_pad_get_current_caps (overlay->video_sinkpad); - else - gst_caps_ref (caps); - - if (!caps || gst_caps_is_empty (caps)) - goto no_format; - - original_caps = caps; - - /* Try to use the overlay meta if possible */ - f = gst_caps_get_features (caps, 0); - - /* if the caps doesn't have the overlay meta, we query if downstream - * accepts it before trying the version without the meta - * If upstream already is using the meta then we can only use it */ - if (!f - || !gst_caps_features_contains (f, - GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION)) { - GstCaps *overlay_caps; - - /* In this case we added the meta, but we can work without it - * so preserve the original caps so we can use it as a fallback */ - overlay_caps = gst_caps_copy (caps); - - f = gst_caps_get_features (overlay_caps, 0); - gst_caps_features_add (f, - GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION); - - ret = gst_pad_peer_query_accept_caps (overlay->srcpad, overlay_caps); - GST_DEBUG_OBJECT (overlay, "Downstream accepts the overlay meta: %d", ret); - if (ret) { - gst_caps_unref (caps); - caps = overlay_caps; - - } else { - /* fallback to the original */ - gst_caps_unref (overlay_caps); - caps_has_meta = FALSE; - } - } else { - original_has_meta = TRUE; - } - GST_DEBUG_OBJECT (overlay, "Using caps %" GST_PTR_FORMAT, caps); - ret = gst_pad_set_caps (overlay->srcpad, caps); - - if (ret) { - /* find supported meta */ - query = gst_query_new_allocation (caps, FALSE); - - if (!gst_pad_peer_query 
(overlay->srcpad, query)) { - /* no problem, we use the query defaults */ - GST_DEBUG_OBJECT (overlay, "ALLOCATION query failed"); - allocation_ret = FALSE; - } - - if (caps_has_meta && gst_query_find_allocation_meta (query, - GST_VIDEO_OVERLAY_COMPOSITION_META_API_TYPE, NULL)) - attach = TRUE; - gst_query_unref (query); - } - - overlay->attach_compo_to_buffer = attach; - - if (!allocation_ret && overlay->video_flushing) { - ret = FALSE; - } else if (original_caps && !original_has_meta && !attach) { - if (caps_has_meta) { - /* Some elements (fakesink) claim to accept the meta on caps but won't - put it in the allocation query result, this leads below - check to fail. Prevent this by removing the meta from caps */ - gst_caps_unref (caps); - caps = gst_caps_ref (original_caps); - ret = gst_pad_set_caps (overlay->srcpad, caps); - if (ret && !gst_cea_cc_overlay_can_handle_caps (caps)) - ret = FALSE; - } - } - - if (!ret) { - GST_DEBUG_OBJECT (overlay, "negotiation failed, schedule reconfigure"); - gst_pad_mark_reconfigure (overlay->srcpad); - } - gst_caps_unref (caps); - GST_DEBUG_OBJECT (overlay, "ret=%d", ret); - - return ret; - -no_format: - { - if (caps) - gst_caps_unref (caps); - return FALSE; - } -} - -static gboolean -gst_cea_cc_overlay_can_handle_caps (GstCaps * incaps) -{ - gboolean ret; - GstCaps *caps; - static GstStaticCaps static_caps = GST_STATIC_CAPS (CC_OVERLAY_CAPS); - - caps = gst_static_caps_get (&static_caps); - ret = gst_caps_is_subset (incaps, caps); - gst_caps_unref (caps); - - return ret; -} - -static gboolean -gst_cea_cc_overlay_setcaps (GstCeaCcOverlay * overlay, GstCaps * caps) -{ - GstVideoInfo info; - gboolean ret = FALSE; - - if (!gst_video_info_from_caps (&info, caps)) - goto invalid_caps; - - overlay->info = info; - overlay->format = GST_VIDEO_INFO_FORMAT (&info); - overlay->width = GST_VIDEO_INFO_WIDTH (&info); - overlay->height = GST_VIDEO_INFO_HEIGHT (&info); - gst_cea708dec_set_video_width_height (overlay->decoder, overlay->width, - 
overlay->height); - ret = gst_cea_cc_overlay_negotiate (overlay, caps); - - GST_CEA_CC_OVERLAY_LOCK (overlay); - if (!overlay->attach_compo_to_buffer && - !gst_cea_cc_overlay_can_handle_caps (caps)) { - GST_DEBUG_OBJECT (overlay, "unsupported caps %" GST_PTR_FORMAT, caps); - ret = FALSE; - } - GST_CEA_CC_OVERLAY_UNLOCK (overlay); - - return ret; - - /* ERRORS */ -invalid_caps: - { - GST_DEBUG_OBJECT (overlay, "could not parse caps"); - return FALSE; - } -} - -static void -gst_cea_cc_overlay_set_property (GObject * object, guint prop_id, - const GValue * value, GParamSpec * pspec) -{ - GstCeaCcOverlay *overlay = GST_CEA_CC_OVERLAY (object); - Cea708Dec *decoder = overlay->decoder; - - GST_CEA_CC_OVERLAY_LOCK (overlay); - switch (prop_id) { - case PROP_SERVICE_NUMBER: - { - int desired_service = g_value_get_int (value); - gst_cea708dec_set_service_number (decoder, desired_service); - break; - } - case PROP_FONT_DESC: - { - PangoFontDescription *desc = NULL; - const gchar *fontdesc_str; - fontdesc_str = g_value_get_string (value); - - GST_LOG_OBJECT (overlay, "Got font description '%s'", fontdesc_str); - if (fontdesc_str) - desc = pango_font_description_from_string (fontdesc_str); - /* Only set if NULL or valid description */ - if (desc || !fontdesc_str) { - if (desc) { - GST_INFO_OBJECT (overlay, "Setting font description: '%s'", - fontdesc_str); - pango_font_description_free (desc); - } else - GST_INFO_OBJECT (overlay, "Resetting default font description"); - g_free (decoder->default_font_desc); - decoder->default_font_desc = g_strdup (fontdesc_str); - } - break; - } - case PROP_SILENT: - overlay->silent = g_value_get_boolean (value); - break; - case PROP_WINDOW_H_POS: - overlay->default_window_h_pos = g_value_get_enum (value); - break; - default: - G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); - break; - } - - overlay->need_update = TRUE; - GST_CEA_CC_OVERLAY_UNLOCK (overlay); -} - -static void -gst_cea_cc_overlay_get_property (GObject * object, guint 
prop_id, - GValue * value, GParamSpec * pspec) -{ - GstCeaCcOverlay *overlay = GST_CEA_CC_OVERLAY (object); - Cea708Dec *decoder = overlay->decoder; - - GST_CEA_CC_OVERLAY_LOCK (overlay); - switch (prop_id) { - case PROP_SERVICE_NUMBER: - g_value_set_int (value, decoder->desired_service); - break; - case PROP_SILENT: - g_value_set_boolean (value, overlay->silent); - break; - case PROP_FONT_DESC: - g_value_set_string (value, decoder->default_font_desc); - break; - case PROP_WINDOW_H_POS: - g_value_set_enum (value, overlay->default_window_h_pos); - break; - default: - G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); - break; - } - - GST_CEA_CC_OVERLAY_UNLOCK (overlay); -} - -static gboolean -gst_cea_cc_overlay_src_query (GstPad * pad, GstObject * parent, - GstQuery * query) -{ - gboolean ret = FALSE; - GstCeaCcOverlay *overlay; - - overlay = GST_CEA_CC_OVERLAY (parent); - - switch (GST_QUERY_TYPE (query)) { - case GST_QUERY_CAPS: - { - GstCaps *filter, *caps; - - gst_query_parse_caps (query, &filter); - caps = gst_cea_cc_overlay_get_src_caps (pad, overlay, filter); - gst_query_set_caps_result (query, caps); - gst_caps_unref (caps); - ret = TRUE; - break; - } - default: - ret = gst_pad_query_default (pad, parent, query); - break; - } - - return ret; -} - -static gboolean -gst_cea_cc_overlay_src_event (GstPad * pad, GstObject * parent, - GstEvent * event) -{ - GstCeaCcOverlay *overlay; - gboolean ret; - - overlay = GST_CEA_CC_OVERLAY (parent); - - if (overlay->cc_pad_linked) { - gst_event_ref (event); - ret = gst_pad_push_event (overlay->video_sinkpad, event); - gst_pad_push_event (overlay->cc_sinkpad, event); - } else { - ret = gst_pad_push_event (overlay->video_sinkpad, event); - } - - return ret; -} - -/** - * gst_cea_cc_overlay_add_feature_and_intersect: - * - * Creates a new #GstCaps containing the (given caps + - * given caps feature) + (given caps intersected by the - * given filter). 
- * - * Returns: the new #GstCaps - */ -static GstCaps * -gst_cea_cc_overlay_add_feature_and_intersect (GstCaps * caps, - const gchar * feature, GstCaps * filter) -{ - int i, caps_size; - GstCaps *new_caps; - - new_caps = gst_caps_copy (caps); - - caps_size = gst_caps_get_size (new_caps); - for (i = 0; i < caps_size; i++) { - GstCapsFeatures *features = gst_caps_get_features (new_caps, i); - - if (!gst_caps_features_is_any (features)) { - gst_caps_features_add (features, feature); - } - } - - gst_caps_append (new_caps, gst_caps_intersect_full (caps, - filter, GST_CAPS_INTERSECT_FIRST)); - - return new_caps; -} - -/** - * gst_cea_cc_overlay_intersect_by_feature: - * - * Creates a new #GstCaps based on the following filtering rule. - * - * For each individual caps contained in given caps, if the - * caps uses the given caps feature, keep a version of the caps - * with the feature and an another one without. Otherwise, intersect - * the caps with the given filter. - * - * Returns: the new #GstCaps - */ -static GstCaps * -gst_cea_cc_overlay_intersect_by_feature (GstCaps * caps, - const gchar * feature, GstCaps * filter) -{ - int i, caps_size; - GstCaps *new_caps; - - new_caps = gst_caps_new_empty (); - - caps_size = gst_caps_get_size (caps); - for (i = 0; i < caps_size; i++) { - GstStructure *caps_structure = gst_caps_get_structure (caps, i); - GstCapsFeatures *caps_features = - gst_caps_features_copy (gst_caps_get_features (caps, i)); - GstCaps *filtered_caps; - GstCaps *simple_caps = - gst_caps_new_full (gst_structure_copy (caps_structure), NULL); - gst_caps_set_features (simple_caps, 0, caps_features); - - if (gst_caps_features_contains (caps_features, feature)) { - gst_caps_append (new_caps, gst_caps_copy (simple_caps)); - - gst_caps_features_remove (caps_features, feature); - filtered_caps = gst_caps_ref (simple_caps); - } else { - filtered_caps = gst_caps_intersect_full (simple_caps, filter, - GST_CAPS_INTERSECT_FIRST); - } - gst_caps_unref (simple_caps); - 
gst_caps_append (new_caps, filtered_caps); - } - - return new_caps; -} - -static GstCaps * -gst_cea_cc_overlay_get_videosink_caps (GstPad * pad, - GstCeaCcOverlay * overlay, GstCaps * filter) -{ - GstPad *srcpad = overlay->srcpad; - GstCaps *peer_caps = NULL, *caps = NULL, *overlay_filter = NULL; - - if (G_UNLIKELY (!overlay)) - return gst_pad_get_pad_template_caps (pad); - - if (filter) { - /* filter caps + composition feature + filter caps - * filtered by the software caps. */ - GstCaps *sw_caps = gst_static_caps_get (&sw_template_caps); - overlay_filter = gst_cea_cc_overlay_add_feature_and_intersect (filter, - GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, sw_caps); - gst_caps_unref (sw_caps); - - GST_DEBUG_OBJECT (overlay, "overlay filter %" GST_PTR_FORMAT, - overlay_filter); - } - - peer_caps = gst_pad_peer_query_caps (srcpad, overlay_filter); - if (overlay_filter) - gst_caps_unref (overlay_filter); - if (peer_caps) { - - GST_DEBUG_OBJECT (pad, "peer caps %" GST_PTR_FORMAT, peer_caps); - - if (gst_caps_is_any (peer_caps)) { - /* if peer returns ANY caps, return filtered src pad template caps */ - caps = gst_caps_copy (gst_pad_get_pad_template_caps (srcpad)); - } else { - - /* duplicate caps which contains the composition into one version with - * the meta and one without. 
Filter the other caps by the software caps */ - GstCaps *sw_caps = gst_static_caps_get (&sw_template_caps); - caps = gst_cea_cc_overlay_intersect_by_feature (peer_caps, - GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, sw_caps); - gst_caps_unref (sw_caps); - } - - gst_caps_unref (peer_caps); - - } else { - /* no peer, our padtemplate is enough then */ - caps = gst_pad_get_pad_template_caps (pad); - } - - if (filter) { - GstCaps *intersection = gst_caps_intersect_full (filter, caps, - GST_CAPS_INTERSECT_FIRST); - gst_caps_unref (caps); - caps = intersection; - } - - GST_DEBUG_OBJECT (overlay, "returning %" GST_PTR_FORMAT, caps); - - return caps; -} - -static GstCaps * -gst_cea_cc_overlay_get_src_caps (GstPad * pad, GstCeaCcOverlay * overlay, - GstCaps * filter) -{ - GstPad *sinkpad = overlay->video_sinkpad; - GstCaps *peer_caps = NULL, *caps = NULL, *overlay_filter = NULL; - - if (G_UNLIKELY (!overlay)) - return gst_pad_get_pad_template_caps (pad); - - if (filter) { - /* duplicate filter caps which contains the composition into one version - * with the meta and one without. Filter the other caps by the software - * caps */ - GstCaps *sw_caps = gst_static_caps_get (&sw_template_caps); - overlay_filter = - gst_cea_cc_overlay_intersect_by_feature (filter, - GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, sw_caps); - gst_caps_unref (sw_caps); - } - - peer_caps = gst_pad_peer_query_caps (sinkpad, overlay_filter); - - if (overlay_filter) - gst_caps_unref (overlay_filter); - - if (peer_caps) { - - GST_DEBUG_OBJECT (pad, "peer caps %" GST_PTR_FORMAT, peer_caps); - - if (gst_caps_is_any (peer_caps)) { - - /* if peer returns ANY caps, return filtered sink pad template caps */ - caps = gst_caps_copy (gst_pad_get_pad_template_caps (sinkpad)); - - } else { - - /* return upstream caps + composition feature + upstream caps - * filtered by the software caps. 
*/ - GstCaps *sw_caps = gst_static_caps_get (&sw_template_caps); - caps = gst_cea_cc_overlay_add_feature_and_intersect (peer_caps, - GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, sw_caps); - gst_caps_unref (sw_caps); - } - - gst_caps_unref (peer_caps); - - } else { - /* no peer, our padtemplate is enough then */ - caps = gst_pad_get_pad_template_caps (pad); - } - - if (filter) { - GstCaps *intersection; - - intersection = - gst_caps_intersect_full (filter, caps, GST_CAPS_INTERSECT_FIRST); - gst_caps_unref (caps); - caps = intersection; - } - GST_DEBUG_OBJECT (overlay, "returning %" GST_PTR_FORMAT, caps); - - return caps; -} - -/* FIXME: should probably be relative to width/height (adjusted for PAR) */ -#define BOX_XPAD 6 -#define BOX_YPAD 6 - -static GstFlowReturn -gst_cea_cc_overlay_push_frame (GstCeaCcOverlay * overlay, - GstBuffer * video_frame) -{ - GstVideoFrame frame; - - if (overlay->current_composition == NULL) - goto done; - GST_LOG_OBJECT (overlay, "gst_cea_cc_overlay_push_frame"); - - if (gst_pad_check_reconfigure (overlay->srcpad)) - gst_cea_cc_overlay_negotiate (overlay, NULL); - - video_frame = gst_buffer_make_writable (video_frame); - - if (overlay->attach_compo_to_buffer) { - GST_DEBUG_OBJECT (overlay, "Attaching text overlay image to video buffer"); - gst_buffer_add_video_overlay_composition_meta (video_frame, - overlay->current_composition); - goto done; - } - - if (!gst_video_frame_map (&frame, &overlay->info, video_frame, - GST_MAP_READWRITE)) - goto invalid_frame; - - gst_video_overlay_composition_blend (overlay->current_composition, &frame); - - gst_video_frame_unmap (&frame); - -done: - - return gst_pad_push (overlay->srcpad, video_frame); - - /* ERRORS */ -invalid_frame: - { - gst_buffer_unref (video_frame); - return GST_FLOW_OK; - } -} - -static GstPadLinkReturn -gst_cea_cc_overlay_cc_pad_link (GstPad * pad, GstObject * parent, GstPad * peer) -{ - GstCeaCcOverlay *overlay; - - overlay = GST_CEA_CC_OVERLAY (parent); - if (G_UNLIKELY 
(!overlay)) - return GST_PAD_LINK_REFUSED; - - GST_DEBUG_OBJECT (overlay, "Closed Caption pad linked"); - - overlay->cc_pad_linked = TRUE; - - return GST_PAD_LINK_OK; -} - -static void -gst_cea_cc_overlay_cc_pad_unlink (GstPad * pad, GstObject * parent) -{ - GstCeaCcOverlay *overlay; - - /* don't use gst_pad_get_parent() here, will deadlock */ - overlay = GST_CEA_CC_OVERLAY (parent); - - GST_DEBUG_OBJECT (overlay, "Closed Caption pad unlinked"); - - overlay->cc_pad_linked = FALSE; - - gst_segment_init (&overlay->cc_segment, GST_FORMAT_UNDEFINED); -} - -static gboolean -gst_cea_cc_overlay_cc_event (GstPad * pad, GstObject * parent, GstEvent * event) -{ - gboolean ret = FALSE; - GstCeaCcOverlay *overlay = NULL; - - overlay = GST_CEA_CC_OVERLAY (parent); - - GST_LOG_OBJECT (overlay, "received event %s", GST_EVENT_TYPE_NAME (event)); - - switch (GST_EVENT_TYPE (event)) { - case GST_EVENT_CAPS: - { - GstCaps *caps; - GstStructure *st; - const gchar *cctype; - - gst_event_parse_caps (event, &caps); - st = gst_caps_get_structure (caps, 0); - cctype = gst_structure_get_string (st, "format"); - overlay->is_cdp = !g_strcmp0 (cctype, "cdp"); - ret = TRUE; - break; - } - case GST_EVENT_SEGMENT: - { - const GstSegment *segment; - - overlay->cc_eos = FALSE; - - gst_event_parse_segment (event, &segment); - - if (segment->format == GST_FORMAT_TIME) { - GST_CEA_CC_OVERLAY_LOCK (overlay); - gst_segment_copy_into (segment, &overlay->cc_segment); - GST_DEBUG_OBJECT (overlay, "TEXT SEGMENT now: %" GST_SEGMENT_FORMAT, - &overlay->cc_segment); - GST_CEA_CC_OVERLAY_UNLOCK (overlay); - } else { - GST_ELEMENT_WARNING (overlay, STREAM, MUX, (NULL), - ("received non-TIME newsegment event on text input")); - } - - ret = TRUE; - - /* wake up the video chain, it might be waiting for a text buffer or - * a text segment update */ - GST_CEA_CC_OVERLAY_LOCK (overlay); - GST_CEA_CC_OVERLAY_BROADCAST (overlay); - GST_CEA_CC_OVERLAY_UNLOCK (overlay); - break; - } - case GST_EVENT_GAP: - { - 
GstClockTime start, duration; - - gst_event_parse_gap (event, &start, &duration); - if (GST_CLOCK_TIME_IS_VALID (duration)) - start += duration; - /* we do not expect another buffer until after gap, - * so that is our position now */ - overlay->cc_segment.position = start; - - /* wake up the video chain, it might be waiting for a text buffer or - * a text segment update */ - GST_CEA_CC_OVERLAY_LOCK (overlay); - GST_CEA_CC_OVERLAY_BROADCAST (overlay); - GST_CEA_CC_OVERLAY_UNLOCK (overlay); - - ret = TRUE; - break; - } - case GST_EVENT_FLUSH_STOP: - GST_CEA_CC_OVERLAY_LOCK (overlay); - GST_INFO_OBJECT (overlay, "text flush stop"); - overlay->cc_flushing = FALSE; - overlay->cc_eos = FALSE; - gst_cea_cc_overlay_pop_text (overlay); - gst_segment_init (&overlay->cc_segment, GST_FORMAT_TIME); - GST_CEA_CC_OVERLAY_UNLOCK (overlay); - ret = TRUE; - break; - case GST_EVENT_FLUSH_START: - GST_CEA_CC_OVERLAY_LOCK (overlay); - GST_INFO_OBJECT (overlay, "text flush start"); - overlay->cc_flushing = TRUE; - GST_CEA_CC_OVERLAY_BROADCAST (overlay); - GST_CEA_CC_OVERLAY_UNLOCK (overlay); - ret = TRUE; - break; - case GST_EVENT_EOS: - GST_CEA_CC_OVERLAY_LOCK (overlay); - overlay->cc_eos = TRUE; - GST_INFO_OBJECT (overlay, "closed caption EOS"); - /* wake up the video chain, it might be waiting for a text buffer or - * a text segment update */ - GST_CEA_CC_OVERLAY_BROADCAST (overlay); - GST_CEA_CC_OVERLAY_UNLOCK (overlay); - ret = TRUE; - break; - default: - break; - } - - if (ret) { - gst_event_unref (event); - } else { - ret = gst_pad_event_default (pad, parent, event); - } - - return ret; -} - -static gboolean -gst_cea_cc_overlay_video_event (GstPad * pad, GstObject * parent, - GstEvent * event) -{ - gboolean ret = FALSE; - GstCeaCcOverlay *overlay = NULL; - - overlay = GST_CEA_CC_OVERLAY (parent); - - GST_DEBUG_OBJECT (pad, "received event %s", GST_EVENT_TYPE_NAME (event)); - - switch (GST_EVENT_TYPE (event)) { - case GST_EVENT_CAPS: - { - GstCaps *caps; - - gst_event_parse_caps 
(event, &caps); - ret = gst_cea_cc_overlay_setcaps (overlay, caps); - gst_event_unref (event); - break; - } - case GST_EVENT_SEGMENT: - { - const GstSegment *segment; - - GST_DEBUG_OBJECT (overlay, "received new segment"); - - gst_event_parse_segment (event, &segment); - - if (segment->format == GST_FORMAT_TIME) { - GST_DEBUG_OBJECT (overlay, "VIDEO SEGMENT now: %" GST_SEGMENT_FORMAT, - &overlay->segment); - - gst_segment_copy_into (segment, &overlay->segment); - } else { - GST_ELEMENT_WARNING (overlay, STREAM, MUX, (NULL), - ("received non-TIME newsegment event on video input")); - } - - ret = gst_pad_event_default (pad, parent, event); - break; - } - case GST_EVENT_EOS: - GST_CEA_CC_OVERLAY_LOCK (overlay); - GST_INFO_OBJECT (overlay, "video EOS"); - overlay->video_eos = TRUE; - GST_CEA_CC_OVERLAY_UNLOCK (overlay); - ret = gst_pad_event_default (pad, parent, event); - break; - case GST_EVENT_FLUSH_START: - GST_CEA_CC_OVERLAY_LOCK (overlay); - GST_INFO_OBJECT (overlay, "video flush start"); - overlay->video_flushing = TRUE; - GST_CEA_CC_OVERLAY_BROADCAST (overlay); - GST_CEA_CC_OVERLAY_UNLOCK (overlay); - ret = gst_pad_event_default (pad, parent, event); - break; - case GST_EVENT_FLUSH_STOP: - GST_CEA_CC_OVERLAY_LOCK (overlay); - GST_INFO_OBJECT (overlay, "video flush stop"); - overlay->video_flushing = FALSE; - overlay->video_eos = FALSE; - gst_segment_init (&overlay->segment, GST_FORMAT_TIME); - GST_CEA_CC_OVERLAY_UNLOCK (overlay); - ret = gst_pad_event_default (pad, parent, event); - break; - default: - ret = gst_pad_event_default (pad, parent, event); - break; - } - - return ret; -} - -static gboolean -gst_cea_cc_overlay_video_query (GstPad * pad, GstObject * parent, - GstQuery * query) -{ - gboolean ret = FALSE; - GstCeaCcOverlay *overlay; - - overlay = GST_CEA_CC_OVERLAY (parent); - - switch (GST_QUERY_TYPE (query)) { - case GST_QUERY_CAPS: - { - GstCaps *filter, *caps; - - gst_query_parse_caps (query, &filter); - caps = gst_cea_cc_overlay_get_videosink_caps 
(pad, overlay, filter); - gst_query_set_caps_result (query, caps); - gst_caps_unref (caps); - ret = TRUE; - break; - } - default: - ret = gst_pad_query_default (pad, parent, query); - break; - } - - return ret; -} - -/* Called with lock held */ -static void -gst_cea_cc_overlay_pop_text (GstCeaCcOverlay * overlay) -{ - g_return_if_fail (GST_IS_CEA_CC_OVERLAY (overlay)); - - if (GST_CLOCK_TIME_IS_VALID (overlay->current_comp_start_time) - && overlay->current_composition) { - GST_DEBUG_OBJECT (overlay, "releasing composition %p", - overlay->current_composition); - gst_video_overlay_composition_unref (overlay->current_composition); - overlay->current_composition = NULL; - overlay->current_comp_start_time = GST_CLOCK_TIME_NONE; - } - - /* Let the text task know we used that buffer */ - GST_CEA_CC_OVERLAY_BROADCAST (overlay); -} - -static void -gst_cea_cc_overlay_image_to_argb (guchar * pixbuf, - cea708Window * window, int stride) -{ - int i, j; - guchar *p, *bitp; - int width, height; - - width = window->image_width; - height = window->image_height; - - for (i = 0; i < height; i++) { - p = pixbuf + i * stride; - bitp = window->text_image + i * width * 4; - - for (j = 0; j < width; j++) { - p[0] = bitp[CAIRO_ARGB_A]; - p[1] = bitp[CAIRO_ARGB_R]; - p[2] = bitp[CAIRO_ARGB_G]; - p[3] = bitp[CAIRO_ARGB_B]; - - /* Cairo uses pre-multiplied ARGB, unpremultiply it */ - CAIRO_UNPREMULTIPLY (p[0], p[1], p[2], p[3]); - - bitp += 4; - p += 4; - } - } -} - -static void -gst_cea_cc_overlay_image_to_ayuv (guchar * pixbuf, - cea708Window * window, int stride) -{ - int y; /* text bitmap coordinates */ - guchar *p, *bitp; - guchar a, r, g, b; - int width, height; - - width = window->image_width; - height = window->image_height; - - for (y = 0; y < height; y++) { - int n; - p = pixbuf + y * stride; - bitp = window->text_image + y * width * 4; - - for (n = 0; n < width; n++) { - b = bitp[CAIRO_ARGB_B]; - g = bitp[CAIRO_ARGB_G]; - r = bitp[CAIRO_ARGB_R]; - a = bitp[CAIRO_ARGB_A]; - bitp += 4; - - /* Cairo uses 
pre-multiplied ARGB, unpremultiply it */ - CAIRO_UNPREMULTIPLY (a, r, g, b); - - *p++ = a; - *p++ = CLAMP ((int) (((19595 * r) >> 16) + ((38470 * g) >> 16) + - ((7471 * b) >> 16)), 0, 255); - *p++ = CLAMP ((int) (-((11059 * r) >> 16) - ((21709 * g) >> 16) + - ((32768 * b) >> 16) + 128), 0, 255); - *p++ = CLAMP ((int) (((32768 * r) >> 16) - ((27439 * g) >> 16) - - ((5329 * b) >> 16) + 128), 0, 255); - } - } -} - -static void -gst_cea_cc_overlay_create_and_push_buffer (GstCeaCcOverlay * overlay) -{ - Cea708Dec *decoder = overlay->decoder; - GstBuffer *outbuf; - GstMapInfo map; - guint8 *window_image; - gint n; - guint window_id; - cea708Window *window; - guint v_anchor = 0; - guint h_anchor = 0; - GstVideoOverlayComposition *comp = NULL; - GstVideoOverlayRectangle *rect = NULL; - GST_CEA_CC_OVERLAY_LOCK (overlay); - - for (window_id = 0; window_id < 8; window_id++) { - window = decoder->cc_windows[window_id]; - - if (!window->updated) { - continue; - } - if (!window->deleted && window->visible && window->text_image != NULL) { - GST_DEBUG_OBJECT (overlay, "Allocating buffer"); - outbuf = - gst_buffer_new_and_alloc (window->image_width * - window->image_height * 4); - gst_buffer_map (outbuf, &map, GST_MAP_WRITE); - window_image = map.data; - if (decoder->use_ARGB) { - memset (window_image, 0, - window->image_width * window->image_height * 4); - gst_buffer_add_video_meta (outbuf, GST_VIDEO_FRAME_FLAG_NONE, - GST_VIDEO_OVERLAY_COMPOSITION_FORMAT_RGB, window->image_width, - window->image_height); - } else { - for (n = 0; n < window->image_width * window->image_height; n++) { - window_image[n * 4] = window_image[n * 4 + 1] = 0; - window_image[n * 4 + 2] = window_image[n * 4 + 3] = 128; - } - gst_buffer_add_video_meta (outbuf, GST_VIDEO_FRAME_FLAG_NONE, - GST_VIDEO_OVERLAY_COMPOSITION_FORMAT_YUV, window->image_width, - window->image_height); - } - - v_anchor = window->screen_vertical * overlay->height / 100; - switch (overlay->default_window_h_pos) { - case 
GST_CEA_CC_OVERLAY_WIN_H_LEFT: - window->h_offset = 0; - break; - case GST_CEA_CC_OVERLAY_WIN_H_CENTER: - window->h_offset = (overlay->width - window->image_width) / 2; - break; - case GST_CEA_CC_OVERLAY_WIN_H_RIGHT: - window->h_offset = overlay->width - window->image_width; - break; - case GST_CEA_CC_OVERLAY_WIN_H_AUTO: - default: - switch (window->anchor_point) { - case ANCHOR_PT_TOP_LEFT: - case ANCHOR_PT_MIDDLE_LEFT: - case ANCHOR_PT_BOTTOM_LEFT: - window->h_offset = h_anchor; - break; - - case ANCHOR_PT_TOP_CENTER: - case ANCHOR_PT_CENTER: - case ANCHOR_PT_BOTTOM_CENTER: - window->h_offset = h_anchor - window->image_width / 2; - break; - - case ANCHOR_PT_TOP_RIGHT: - case ANCHOR_PT_MIDDLE_RIGHT: - case ANCHOR_PT_BOTTOM_RIGHT: - window->h_offset = h_anchor - window->image_width; - break; - default: - break; - } - break; - } - - switch (window->anchor_point) { - case ANCHOR_PT_TOP_LEFT: - case ANCHOR_PT_TOP_CENTER: - case ANCHOR_PT_TOP_RIGHT: - window->v_offset = v_anchor; - break; - - case ANCHOR_PT_MIDDLE_LEFT: - case ANCHOR_PT_CENTER: - case ANCHOR_PT_MIDDLE_RIGHT: - window->v_offset = v_anchor - window->image_height / 2; - break; - - case ANCHOR_PT_BOTTOM_LEFT: - case ANCHOR_PT_BOTTOM_CENTER: - case ANCHOR_PT_BOTTOM_RIGHT: - window->v_offset = v_anchor - window->image_height; - break; - default: - break; - } - if (decoder->use_ARGB) { - gst_cea_cc_overlay_image_to_argb (window_image, window, - window->image_width * 4); - } else { - gst_cea_cc_overlay_image_to_ayuv (window_image, window, - window->image_width * 4); - } - gst_buffer_unmap (outbuf, &map); - GST_INFO_OBJECT (overlay, - "window->anchor_point=%d,v_anchor=%d,h_anchor=%d,window->image_height=%d,window->image_width=%d, window->v_offset=%d, window->h_offset=%d,window->justify_mode=%d", - window->anchor_point, v_anchor, h_anchor, window->image_height, - window->image_width, window->v_offset, window->h_offset, - window->justify_mode); - rect = - gst_video_overlay_rectangle_new_raw (outbuf, 
window->h_offset, - window->v_offset, window->image_width, window->image_height, 0); - if (comp == NULL) { - comp = gst_video_overlay_composition_new (rect); - } else { - gst_video_overlay_composition_add_rectangle (comp, rect); - } - gst_video_overlay_rectangle_unref (rect); - gst_buffer_unref (outbuf); - } - } - - /* Wait for the previous buffer to go away */ - if (GST_CLOCK_TIME_IS_VALID (overlay->current_comp_start_time)) { - overlay->next_composition = comp; - overlay->next_comp_start_time = decoder->current_time; - GST_DEBUG_OBJECT (overlay, - "wait for render next %p, current is %p BUFFER: next ts=%" - GST_TIME_FORMAT ",current ts=%" GST_TIME_FORMAT, - overlay->next_composition, overlay->current_composition, - GST_TIME_ARGS (overlay->next_comp_start_time), - GST_TIME_ARGS (overlay->current_comp_start_time)); - - GST_DEBUG_OBJECT (overlay, "has a closed caption buffer queued, waiting"); - GST_CEA_CC_OVERLAY_WAIT (overlay); - GST_DEBUG_OBJECT (overlay, "resuming"); - if (overlay->cc_flushing) { - GST_CEA_CC_OVERLAY_UNLOCK (overlay); - return; - } - } - - overlay->next_composition = NULL; - overlay->next_comp_start_time = GST_CLOCK_TIME_NONE; - overlay->current_composition = comp; - overlay->current_comp_start_time = decoder->current_time; - GST_DEBUG_OBJECT (overlay, "T: %" GST_TIME_FORMAT, - GST_TIME_ARGS (overlay->current_comp_start_time)); - overlay->need_update = FALSE; - - /* in case the video chain is waiting for a text buffer, wake it up */ - GST_CEA_CC_OVERLAY_BROADCAST (overlay); - GST_CEA_CC_OVERLAY_UNLOCK (overlay); -} - -static void -gst_cea_cc_overlay_process_packet (GstCeaCcOverlay * overlay, guint8 cc_type) -{ - gint16 *index = NULL; - guint8 *buffer = NULL; - guint8 *dtvcc_buffer = NULL; - gboolean need_render = FALSE; - - switch (cc_type) { - case CCTYPE_608_CC1: - case CCTYPE_608_CC2: - index = &overlay->cea608_index[cc_type]; - buffer = overlay->cea608_buffer[cc_type]; - break; - - case CCTYPE_708_ADD: - case CCTYPE_708_START: - index = 
&overlay->cea708_index; - buffer = overlay->cea708_buffer; - break; - default: - GST_ERROR_OBJECT (overlay, - "attempted to process packet for unknown cc_type %d", cc_type); - return; - } - - if (*index > 0) { - /*TODO: in future need add 608 decoder, currently only deal with 708 */ - if (cc_type == CCTYPE_708_ADD || cc_type == CCTYPE_708_START) { - GST_LOG_OBJECT (overlay, - "called - buf%" G_GINT16_FORMAT " = %02X:%02X:%02X:%02X", *index, - buffer[0], buffer[1], buffer[2], buffer[3]); - dtvcc_buffer = g_malloc0 (*index + 1); - memcpy (dtvcc_buffer, buffer, *index); - need_render = - gst_cea708dec_process_dtvcc_packet (overlay->decoder, dtvcc_buffer, - *index); - g_free (dtvcc_buffer); - if (need_render) - gst_cea_cc_overlay_create_and_push_buffer (overlay); - } - } - *index = 0; -} - - -/** - * gst_cea_cc_overlay_user_data_decode: - * @overlay: The #GstCeaCcOverlay - * @user_data: The #GstMpegVideoCCData to decode - * - * decode closed caption data and render when necessary - * in struct GstMpegVideoCCData type's user_data's data field, 3 byte's data construct 1 cc_data_pkt - * - * A cc_data_pkt is 3 bytes as follows: - * ------------------------------------------- - * 5 bits (b7-b3) marker_bits (should be all 1's) - * 1 bit (b2) cc_valid - * 2 bits (b1-b0) cc_type (bslbf) - * 8 bits cc_data_1 (bslbf) - * 8 bits cc_data_2 (bslbf) - * - * If cc_valid != 1, then ignore this packet - * - * cc_type has these values: - * 0 NTSC_CC_FIELD_1 - CEA-608 - * 1 NTSC_CC_FIELD_2 - CEA-608 - * 2 DTVCC_PACKET_DATA - CEA-708 - * 3 DTVCC_PACKET_START - CEA-708 - * - * DTVCC packet (aka. 
caption channel packet) - * This is formed by accumulating cc_data_1/cc_data_2 from each cc_data_pkt - * starting with a packet where cc_type = 3, and ending with a packet - * where again cc_type = 3 (start of next buffer), or cc_valid=0 && cc_type=2 - * DTVCC packet's structure is: - * -------------------------------------------------------------------------- - * 2 bits (b6-b7) sequence_number - * 6 bits (b0-b5) packet_size - * ((packet_size*2-1)&0xFF) * 8 bits packet_data (Service Block) - */ -static void -gst_cea_cc_overlay_user_data_decode (GstCeaCcOverlay * overlay, - const guint8 * ccdata, gsize ccsize) -{ - guint8 temp; - guint8 cc_count; - guint i; - guint8 cc_type; - guint8 cc_valid; - guint8 cc_data[2]; - - cc_count = ccsize / 3; - - for (i = 0; i < cc_count; i++) { - temp = *ccdata++; - cc_data[0] = *ccdata++; - cc_data[1] = *ccdata++; - cc_valid = (temp & CCTYPE_VALID_MASK) ? TRUE : FALSE; - cc_type = (temp & CCTYPE_TYPE_MASK); - - GST_LOG_OBJECT (overlay, "cc_data_pkt(%d): cc_valid=%d cc_type=%d " - "cc_data[0]=0x%02X cc_data[1]=0x%02X", - i, cc_valid, cc_type, cc_data[0], cc_data[1]); - - /* accumulate dvtcc packet */ - switch (cc_type) { - case CCTYPE_608_CC1: - case CCTYPE_608_CC2: - if (cc_valid) { - if (overlay->cea608_index[cc_type] <= DTVCC_LENGTH - 2) { - size_t j; - for (j = 0; j < 2; ++j) { - if ((cc_data[j] < ' ') || (cc_data[j] > '~')) { - gst_cea_cc_overlay_process_packet (overlay, cc_type); - } - overlay->cea608_buffer[cc_type][overlay-> - cea608_index[cc_type]++] = cc_data[j]; - } - } else { - GST_ERROR_OBJECT (overlay, "cea608_buffer[%d] overflow!", cc_type); - } - } - break; - - case CCTYPE_708_ADD: - case CCTYPE_708_START: - if (cc_valid) { - if (cc_type == CCTYPE_708_START) { - /* The previous packet is complete */ - gst_cea_cc_overlay_process_packet (overlay, cc_type); - } - /* Add on to the current DTVCC packet */ - if (overlay->cea708_index <= DTVCC_LENGTH - 2) { - overlay->cea708_buffer[overlay->cea708_index++] = cc_data[0]; - 
overlay->cea708_buffer[overlay->cea708_index++] = cc_data[1]; - } else { - GST_ERROR_OBJECT (overlay, "cea708_buffer overflow!"); - } - } else if (cc_type == CCTYPE_708_ADD) { - /* This packet should be ignored, but if there is a current */ - /* DTVCC packet then this is the end. */ - gst_cea_cc_overlay_process_packet (overlay, cc_type); - } - break; - } - } -} - -/* FIXME : Move to GstVideo ANC/CC helper library */ -static gboolean -extract_ccdata_from_cdp (const guint8 * indata, gsize insize, - const guint8 ** ccdata, gsize * ccsize) -{ - GstByteReader br; - guint8 cdp_length; - guint8 flags; -#ifndef GST_DISABLE_GST_DEBUG - guint8 framerate_code; - guint16 seqhdr; -#endif - - GST_MEMDUMP ("CDP", indata, insize); - - gst_byte_reader_init (&br, indata, insize); - - /* The smallest valid CDP we are interested in is 7 (header) + 2 (cc - * section) + 4 (footer) bytes long */ - if (gst_byte_reader_get_remaining (&br) < 13) - return FALSE; - - /* Check header */ - if (gst_byte_reader_get_uint16_be_unchecked (&br) != 0x9669) { - GST_WARNING ("Invalid CDP header"); - return FALSE; - } - cdp_length = gst_byte_reader_get_uint8_unchecked (&br); - if (cdp_length > insize) { - GST_WARNING ("CDP too small (need %d bytes, have %" G_GSIZE_FORMAT ")", - cdp_length, insize); - return FALSE; - } -#ifndef GST_DISABLE_GST_DEBUG - framerate_code = gst_byte_reader_get_uint8_unchecked (&br) >> 4; -#else - gst_byte_reader_skip (&br, 1); -#endif - flags = gst_byte_reader_get_uint8_unchecked (&br); -#ifndef GST_DISABLE_GST_DEBUG - seqhdr = gst_byte_reader_get_uint16_be_unchecked (&br); -#else - gst_byte_reader_skip (&br, 2); -#endif - - GST_DEBUG - ("framerate_code : 0x%02x , flags : 0x%02x , sequencer_counter : %u", - framerate_code, flags, seqhdr); - - /* Skip timecode if present */ - if (flags & 0x80) { - GST_LOG ("Skipping timecode section"); - gst_byte_reader_skip (&br, 5); - } - - /* cc data */ - if (flags & 0x40) { - guint8 ccid, cc_count; - if (!gst_byte_reader_get_uint8 (&br, &ccid) 
|| - !gst_byte_reader_get_uint8 (&br, &cc_count)) - return FALSE; - if (ccid != 0x72) { - GST_WARNING ("Invalid ccdata_id (expected 0x72, got 0x%02x)", ccid); - return FALSE; - } - cc_count &= 0x1f; - if (!gst_byte_reader_get_data (&br, cc_count * 3, ccdata)) { - GST_WARNING ("Not enough ccdata"); - *ccdata = NULL; - *ccsize = 0; - return FALSE; - } - *ccsize = cc_count * 3; - } - - /* FIXME : Parse/validate the rest of the CDP ! */ - - return TRUE; -} - -/* We receive text buffers here. If they are out of segment we just ignore them. - If the buffer is in our segment we keep it internally except if another one - is already waiting here, in that case we wait that it gets kicked out */ -static GstFlowReturn -gst_cea_cc_overlay_cc_chain (GstPad * pad, GstObject * parent, - GstBuffer * buffer) -{ - GstFlowReturn ret = GST_FLOW_OK; - GstCeaCcOverlay *overlay = (GstCeaCcOverlay *) parent; - gboolean in_seg = FALSE; - guint64 clip_start = 0, clip_stop = 0; - - GST_CEA_CC_OVERLAY_LOCK (overlay); - - if (overlay->cc_flushing) { - GST_CEA_CC_OVERLAY_UNLOCK (overlay); - ret = GST_FLOW_FLUSHING; - GST_LOG_OBJECT (overlay, "closed caption flushing"); - goto beach; - } - - if (overlay->cc_eos) { - GST_CEA_CC_OVERLAY_UNLOCK (overlay); - ret = GST_FLOW_EOS; - GST_LOG_OBJECT (overlay, "closed caption EOS"); - goto beach; - } - - GST_LOG_OBJECT (overlay, "%" GST_SEGMENT_FORMAT " BUFFER: ts=%" - GST_TIME_FORMAT ", end=%" GST_TIME_FORMAT, &overlay->segment, - GST_TIME_ARGS (GST_BUFFER_TIMESTAMP (buffer)), - GST_TIME_ARGS (GST_BUFFER_TIMESTAMP (buffer) + - GST_BUFFER_DURATION (buffer))); - - if (G_LIKELY (GST_BUFFER_TIMESTAMP_IS_VALID (buffer))) { - GstClockTime stop; - - if (G_LIKELY (GST_BUFFER_DURATION_IS_VALID (buffer))) - stop = GST_BUFFER_TIMESTAMP (buffer) + GST_BUFFER_DURATION (buffer); - else - stop = GST_CLOCK_TIME_NONE; - - in_seg = gst_segment_clip (&overlay->cc_segment, GST_FORMAT_TIME, - GST_BUFFER_TIMESTAMP (buffer), stop, &clip_start, &clip_stop); - GST_LOG_OBJECT 
(overlay, "stop:%" GST_TIME_FORMAT ", in_seg: %d", - GST_TIME_ARGS (stop), in_seg); - } else { - in_seg = TRUE; - } - - - if (in_seg) { - GstMapInfo buf_map = { 0 }; - const guint8 *ccdata = NULL; - gsize ccsize = 0; - - overlay->cc_segment.position = clip_start; - GST_CEA_CC_OVERLAY_UNLOCK (overlay); - - gst_buffer_map (buffer, &buf_map, GST_MAP_READ); - if (overlay->is_cdp) { - extract_ccdata_from_cdp (buf_map.data, buf_map.size, &ccdata, &ccsize); - } else { - ccdata = buf_map.data; - ccsize = buf_map.size; - } - if (ccsize) { - gst_cea_cc_overlay_user_data_decode (overlay, ccdata, ccsize); - overlay->decoder->current_time = GST_BUFFER_PTS (buffer); - } - gst_buffer_unmap (buffer, &buf_map); - } else { - GST_CEA_CC_OVERLAY_UNLOCK (overlay); - } - -beach: - gst_buffer_unref (buffer); - return ret; -} - -static GstFlowReturn -gst_cea_cc_overlay_video_chain (GstPad * pad, GstObject * parent, - GstBuffer * buffer) -{ - GstCeaCcOverlay *overlay; - GstFlowReturn ret = GST_FLOW_OK; - gboolean in_seg = FALSE; - guint64 start, stop, clip_start = 0, clip_stop = 0; - - overlay = GST_CEA_CC_OVERLAY (parent); - - if (!GST_BUFFER_TIMESTAMP_IS_VALID (buffer)) - goto missing_timestamp; - - /* ignore buffers that are outside of the current segment */ - start = GST_BUFFER_TIMESTAMP (buffer); - - if (!GST_BUFFER_DURATION_IS_VALID (buffer)) { - stop = GST_CLOCK_TIME_NONE; - } else { - stop = start + GST_BUFFER_DURATION (buffer); - } - - GST_LOG_OBJECT (overlay, "%" GST_SEGMENT_FORMAT " BUFFER: ts=%" - GST_TIME_FORMAT ", end=%" GST_TIME_FORMAT, &overlay->segment, - GST_TIME_ARGS (start), GST_TIME_ARGS (stop)); - - /* segment_clip() will adjust start unconditionally to segment_start if - * no stop time is provided, so handle this ourselves */ - if (stop == GST_CLOCK_TIME_NONE && start < overlay->segment.start) - goto out_of_segment; - - in_seg = gst_segment_clip (&overlay->segment, GST_FORMAT_TIME, start, stop, - &clip_start, &clip_stop); - - if (!in_seg) - goto out_of_segment; - - 
/* if the buffer is only partially in the segment, fix up stamps */ - if (clip_start != start || (stop != -1 && clip_stop != stop)) { - GST_DEBUG_OBJECT (overlay, "clipping buffer timestamp/duration to segment"); - buffer = gst_buffer_make_writable (buffer); - GST_BUFFER_TIMESTAMP (buffer) = clip_start; - if (stop != -1) - GST_BUFFER_DURATION (buffer) = clip_stop - clip_start; - } - - /* now, after we've done the clipping, fix up end time if there's no - * duration (we only use those estimated values internally though, we - * don't want to set bogus values on the buffer itself) */ - if (stop == -1) { - if (overlay->info.fps_n && overlay->info.fps_d) { - GST_DEBUG_OBJECT (overlay, "estimating duration based on framerate"); - stop = start + gst_util_uint64_scale_int (GST_SECOND, - overlay->info.fps_d, overlay->info.fps_n); - } else { - GST_LOG_OBJECT (overlay, "no duration, assuming minimal duration"); - stop = start + 1; /* we need to assume some interval */ - } - } - - gst_object_sync_values (GST_OBJECT (overlay), GST_BUFFER_TIMESTAMP (buffer)); - -wait_for_text_buf: - - GST_CEA_CC_OVERLAY_LOCK (overlay); - - if (overlay->video_flushing) - goto flushing; - - if (overlay->video_eos) - goto have_eos; - - if (overlay->silent) { - GST_CEA_CC_OVERLAY_UNLOCK (overlay); - ret = gst_pad_push (overlay->srcpad, buffer); - - /* Update position */ - overlay->segment.position = clip_start; - - return ret; - } - - /* Closed Caption pad not linked, rendering video only */ - if (!overlay->cc_pad_linked) { - GST_CEA_CC_OVERLAY_UNLOCK (overlay); - ret = gst_pad_push (overlay->srcpad, buffer); - } else { - /* Closed Caption pad linked, check if we have a text buffer queued */ - if (GST_CLOCK_TIME_IS_VALID (overlay->current_comp_start_time)) { - gboolean pop_text = FALSE, valid_text_time = TRUE; - - GstClockTime text_running_time = GST_CLOCK_TIME_NONE; - GstClockTime next_buffer_text_running_time = GST_CLOCK_TIME_NONE; -#ifndef GST_DISABLE_GST_DEBUG - GstClockTime vid_running_time; 
-#endif - GstClockTime vid_running_time_end; - -#ifndef GST_DISABLE_GST_DEBUG - vid_running_time = - gst_segment_to_running_time (&overlay->segment, GST_FORMAT_TIME, - start); -#endif - vid_running_time_end = - gst_segment_to_running_time (&overlay->segment, GST_FORMAT_TIME, - stop); - if (GST_CLOCK_TIME_IS_VALID (overlay->next_comp_start_time)) { - next_buffer_text_running_time = - gst_segment_to_running_time (&overlay->cc_segment, GST_FORMAT_TIME, - overlay->next_comp_start_time); - - if (next_buffer_text_running_time < vid_running_time_end) { - /* text buffer should be force updated, popping */ - GST_DEBUG_OBJECT (overlay, - "T: next_buffer_text_running_time: %" GST_TIME_FORMAT - " - overlay->next_comp_start_time: %" GST_TIME_FORMAT, - GST_TIME_ARGS (next_buffer_text_running_time), - GST_TIME_ARGS (overlay->next_comp_start_time)); - GST_DEBUG_OBJECT (overlay, - "V: %" GST_TIME_FORMAT " - %" GST_TIME_FORMAT, - GST_TIME_ARGS (vid_running_time), - GST_TIME_ARGS (vid_running_time_end)); - GST_LOG_OBJECT (overlay, - "text buffer should be force updated, popping"); - pop_text = FALSE; - gst_cea_cc_overlay_pop_text (overlay); - GST_CEA_CC_OVERLAY_WAIT (overlay); - GST_DEBUG_OBJECT (overlay, "resuming"); - GST_CEA_CC_OVERLAY_UNLOCK (overlay); - goto wait_for_text_buf; - } - - } - - /* if the text buffer isn't stamped right, pop it off the - * queue and display it for the current video frame only */ - if (!GST_CLOCK_TIME_IS_VALID (overlay->current_comp_start_time)) { - GST_WARNING_OBJECT (overlay, "Got text buffer with invalid timestamp"); - pop_text = TRUE; - valid_text_time = FALSE; - } - - /* If timestamp and duration are valid */ - if (valid_text_time) { - text_running_time = - gst_segment_to_running_time (&overlay->cc_segment, - GST_FORMAT_TIME, overlay->current_comp_start_time); - } - - GST_DEBUG_OBJECT (overlay, "T: %" GST_TIME_FORMAT, - GST_TIME_ARGS (text_running_time)); - GST_DEBUG_OBJECT (overlay, "V: %" GST_TIME_FORMAT " - %" GST_TIME_FORMAT, - GST_TIME_ARGS 
(vid_running_time), - GST_TIME_ARGS (vid_running_time_end)); - - if (valid_text_time && vid_running_time_end <= text_running_time) { - GST_LOG_OBJECT (overlay, "text in future, pushing video buf"); - GST_CEA_CC_OVERLAY_UNLOCK (overlay); - /* Push the video frame */ - ret = gst_pad_push (overlay->srcpad, buffer); - } else { - GST_CEA_CC_OVERLAY_UNLOCK (overlay); - ret = gst_cea_cc_overlay_push_frame (overlay, buffer); - } - if (pop_text) { - GST_CEA_CC_OVERLAY_LOCK (overlay); - gst_cea_cc_overlay_pop_text (overlay); - GST_CEA_CC_OVERLAY_UNLOCK (overlay); - } - } else { - GST_CEA_CC_OVERLAY_UNLOCK (overlay); - GST_LOG_OBJECT (overlay, "no need to wait for a text buffer"); - ret = gst_pad_push (overlay->srcpad, buffer); - } - } - - /* Update position */ - overlay->segment.position = clip_start; - GST_DEBUG_OBJECT (overlay, "ret=%d", ret); - - return ret; - -missing_timestamp: - { - GST_WARNING_OBJECT (overlay, "buffer without timestamp, discarding"); - gst_buffer_unref (buffer); - return GST_FLOW_OK; - } - -flushing: - { - GST_CEA_CC_OVERLAY_UNLOCK (overlay); - GST_DEBUG_OBJECT (overlay, "flushing, discarding buffer"); - gst_buffer_unref (buffer); - return GST_FLOW_FLUSHING; - } -have_eos: - { - GST_CEA_CC_OVERLAY_UNLOCK (overlay); - GST_DEBUG_OBJECT (overlay, "eos, discarding buffer"); - gst_buffer_unref (buffer); - return GST_FLOW_EOS; - } -out_of_segment: - { - GST_DEBUG_OBJECT (overlay, "buffer out of segment, discarding"); - gst_buffer_unref (buffer); - return GST_FLOW_OK; - } -} - -static GstStateChangeReturn -gst_cea_cc_overlay_change_state (GstElement * element, - GstStateChange transition) -{ - GstStateChangeReturn ret = GST_STATE_CHANGE_SUCCESS; - GstCeaCcOverlay *overlay = GST_CEA_CC_OVERLAY (element); - - switch (transition) { - case GST_STATE_CHANGE_PAUSED_TO_READY: - GST_CEA_CC_OVERLAY_LOCK (overlay); - overlay->cc_flushing = TRUE; - overlay->video_flushing = TRUE; - /* pop_text will broadcast on the GCond and thus also make the video - * chain exit if 
it's waiting for a text buffer */ - gst_cea_cc_overlay_pop_text (overlay); - GST_CEA_CC_OVERLAY_UNLOCK (overlay); - break; - default: - break; - } - - ret = parent_class->change_state (element, transition); - if (ret == GST_STATE_CHANGE_FAILURE) - return ret; - - switch (transition) { - case GST_STATE_CHANGE_READY_TO_PAUSED: - GST_CEA_CC_OVERLAY_LOCK (overlay); - overlay->cc_flushing = FALSE; - overlay->video_flushing = FALSE; - overlay->video_eos = FALSE; - overlay->cc_eos = FALSE; - gst_segment_init (&overlay->segment, GST_FORMAT_TIME); - gst_segment_init (&overlay->cc_segment, GST_FORMAT_TIME); - GST_CEA_CC_OVERLAY_UNLOCK (overlay); - break; - default: - break; - } - - return ret; -}
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/closedcaption/gstceaccoverlay.h
Deleted
@@ -1,137 +0,0 @@ -/* GStreamer - * Copyright (C) 2015 Samsung Electronics Co., Ltd. - * @Author: Chengjun Wang <cjun.wang@samsung.com> - * - * This library is free software; you can redistribute it and/or - * modify it under the terms of the GNU Library General Public - * License as published by the Free Software Foundation; either - * version 2 of the License, or (at your option) any later version. - * - * This library is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Library General Public License for more details. - * - * You should have received a copy of the GNU Library General Public - * License along with this library; if not, write to the - * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, - * Boston, MA 02110-1301, USA. - */ - - -#ifndef __GST_CEA_CC_OVERLAY_H__ -#define __GST_CEA_CC_OVERLAY_H__ - -#include <gst/gst.h> -#include <pango/pangocairo.h> -#include <gstcea708decoder.h> - -G_BEGIN_DECLS -#define GST_TYPE_CEA_CC_OVERLAY \ - (gst_cea_cc_overlay_get_type()) -#define GST_CEA_CC_OVERLAY(obj) \ - (G_TYPE_CHECK_INSTANCE_CAST((obj),GST_TYPE_CEA_CC_OVERLAY,GstCeaCcOverlay)) -#define GST_CEA_CC_OVERLAY_CLASS(klass) \ - (G_TYPE_CHECK_CLASS_CAST((klass),GST_TYPE_CEA_CC_OVERLAY,GstCeaCcOverlayClass)) -#define GST_CEA_CC_OVERLAY_GET_CLASS(obj) (G_TYPE_INSTANCE_GET_CLASS ((obj),\ - GST_TYPE_CEA_CC_OVERLAY, GstCeaCcOverlayClass)) -#define GST_IS_CEA_CC_OVERLAY(obj) \ - (G_TYPE_CHECK_INSTANCE_TYPE((obj),GST_TYPE_CEA_CC_OVERLAY)) -#define GST_IS_CEA_CC_OVERLAY_CLASS(klass) \ - (G_TYPE_CHECK_CLASS_TYPE((klass),GST_TYPE_CEA_CC_OVERLAY)) - -typedef struct _GstCeaCcOverlay GstCeaCcOverlay; -typedef struct _GstCeaCcOverlayClass GstCeaCcOverlayClass; - -typedef enum -{ - CCTYPE_608_CC1 = 0, - CCTYPE_608_CC2, - CCTYPE_708_ADD, - CCTYPE_708_START, -} DtvccType; - -/** - * GstBaseTextOverlayHAlign: - * 
@GST_CEA_CC_OVERLAY_WIN_H_LEFT: closed caption window horizontal anchor left - * @GST_CEA_CC_OVERLAY_WIN_H_CENTER: closed caption window horizontal anchor center - * @GST_CEA_CC_OVERLAY_WIN_H_RIGHT: closed caption window horizontal anchor right - * @GST_CEA_CC_OVERLAY_WIN_H_AUTO: closed caption window horizontal anchor auto - * - * Closed Caption Window Horizontal anchor position. - */ -typedef enum -{ - GST_CEA_CC_OVERLAY_WIN_H_LEFT, - GST_CEA_CC_OVERLAY_WIN_H_CENTER, - GST_CEA_CC_OVERLAY_WIN_H_RIGHT, - GST_CEA_CC_OVERLAY_WIN_H_AUTO -} GstCeaCcOverlayWinHPos; - -/** - * GstCeaCcOverlay: - * - * Opaque ccoverlay data structure. - */ -struct _GstCeaCcOverlay -{ - GstElement parent; - GstPad *video_sinkpad; - GstPad *cc_sinkpad; - GstPad *srcpad; - /* There are two possible 608 streams encapsulated by 708 */ - gint16 cea608_index[NUM_608_CCTYPES]; - gint16 cea708_index; - guint8 cea608_buffer[NUM_608_CCTYPES][DTVCC_LENGTH]; - guint8 cea708_buffer[DTVCC_LENGTH]; - - /* TRUE if input is CDP, FALSE if cc_data triplet */ - gboolean is_cdp; - - GstSegment segment; - GstSegment cc_segment; - GstVideoOverlayComposition *current_composition; - guint64 current_comp_start_time; - GstVideoOverlayComposition *next_composition; - guint64 next_comp_start_time; - GstCeaCcOverlayWinHPos default_window_h_pos; - gboolean cc_pad_linked; - gboolean video_flushing; - gboolean video_eos; - gboolean cc_flushing; - gboolean cc_eos; - - GMutex lock; - GCond cond; /* to signal removal of a queued text - * buffer, arrival of a text buffer, - * a text segment update, or a change - * in status (e.g. 
shutdown, flushing) */ - - GstVideoInfo info; - GstVideoFormat format; - gint width; - gint height; - gboolean silent; - Cea708Dec *decoder; - gint image_width; - gint image_height; - - gboolean need_update; - - gboolean attach_compo_to_buffer; -}; - -/* FIXME : Pango context and MT-safe since 1.32.6 */ -struct _GstCeaCcOverlayClass -{ - GstElementClass parent_class; - - PangoContext *pango_context; -}; - -GType gst_cea_cc_overlay_get_type (void); - -GST_ELEMENT_REGISTER_DECLARE (cc708overlay); - -G_END_DECLS -#endif /* __GST_CEA_CC_OVERLAY_H__ */
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/closedcaption/gstclosedcaption.c
Deleted
@@ -1,68 +0,0 @@ -/* - * GStreamer - * Copyright (C) 2018 Edward Hervey <edward@centricular.com> - * - * This library is free software; you can redistribute it and/or - * modify it under the terms of the GNU Library General Public - * License as published by the Free Software Foundation; either - * version 2 of the License, or (at your option) any later version. - * - * This library is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Library General Public License for more details. - * - * You should have received a copy of the GNU Library General Public - * License along with this library; if not, write to the - * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, - * Boston, MA 02110-1301, USA. - */ - - -#ifdef HAVE_CONFIG_H -# include <config.h> -#endif - -#include <gst/gst.h> - -#include "gstcccombiner.h" -#include "gstccconverter.h" -#include "gstccextractor.h" -#include "gstcea608mux.h" -#include "gstline21dec.h" -#include "gstceaccoverlay.h" -#include "gstline21enc.h" -#include "ccutils.h" -#include "gsth264ccextractor.h" -#include "gsth265ccextractor.h" -#include "gsth264ccinserter.h" -#include "gsth265ccinserter.h" - -static gboolean -closedcaption_init (GstPlugin * plugin) -{ - gboolean ret = FALSE; - - GST_DEBUG_CATEGORY_INIT (ccutils_debug_cat, "ccutils", 0, - "Closed caption utilities"); - - ret |= GST_ELEMENT_REGISTER (cccombiner, plugin); - ret |= GST_ELEMENT_REGISTER (cea608mux, plugin); - ret |= GST_ELEMENT_REGISTER (ccconverter, plugin); - ret |= GST_ELEMENT_REGISTER (ccextractor, plugin); - ret |= GST_ELEMENT_REGISTER (line21decoder, plugin); - ret |= GST_ELEMENT_REGISTER (cc708overlay, plugin); - ret |= GST_ELEMENT_REGISTER (line21encoder, plugin); - ret |= GST_ELEMENT_REGISTER (h264ccextractor, plugin); - ret |= GST_ELEMENT_REGISTER (h265ccextractor, plugin); - ret |= GST_ELEMENT_REGISTER 
(h264ccinserter, plugin); - ret |= GST_ELEMENT_REGISTER (h265ccinserter, plugin); - - return ret; -} - -GST_PLUGIN_DEFINE (GST_VERSION_MAJOR, - GST_VERSION_MINOR, - closedcaption, - "Closed Caption elements", - closedcaption_init, VERSION, "LGPL", GST_PACKAGE_NAME, GST_PACKAGE_ORIGIN)
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/closedcaption/gsth265reorder.c
Deleted
@@ -1,1750 +0,0 @@ -/* GStreamer - * Copyright (C) 2015 Intel Corporation - * Author: Sreerenj Balachandran <sreerenj.balachandran@intel.com> - * Copyright (C) 2019 Seungha Yang <seungha.yang@navercorp.com> - * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> - * - * This library is free software; you can redistribute it and/or - * modify it under the terms of the GNU Library General Public - * License as published by the Free Software Foundation; either - * version 2 of the License, or (at your option) any later version. - * - * This library is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Library General Public License for more details. - * - * You should have received a copy of the GNU Library General Public - * License along with this library; if not, write to the - * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, - * Boston, MA 02110-1301, USA. 
- */ - -#ifdef HAVE_CONFIG_H -#include "config.h" -#endif - -#include "gsth265reorder.h" -#include "gsth264reorder.h" -#include <gst/codecs/gsth265picture.h> -#include <gst/codecparsers/gsth265parser-private.h> -#include <string.h> - -GST_DEBUG_CATEGORY_STATIC (gst_h265_reorder_debug); -#define GST_CAT_DEFAULT gst_h265_reorder_debug - -struct _GstH265Reorder -{ - GstObject parent; - - gboolean need_reorder; - - gint width; - gint height; - - guint8 conformance_window_flag; - gint crop_rect_width; - gint crop_rect_height; - gint crop_rect_x; - gint crop_rect_y; - gint fps_n; - gint fps_d; - - guint nal_length_size; - gboolean is_hevc; - GstH265Parser *parser; - GstH265Parser *preproc_parser; - GstH265Dpb *dpb; - - guint8 field_seq_flag; - guint8 progressive_source_flag; - guint8 interlaced_source_flag; - - GstH265SEIPicStructType cur_pic_struct; - guint8 cur_source_scan_type; - guint8 cur_duplicate_flag; - - gboolean no_output_of_prior_pics_flag; - - /* vps/sps/pps of the current slice */ - const GstH265VPS *active_vps; - const GstH265SPS *active_sps; - const GstH265PPS *active_pps; - - guint32 SpsMaxLatencyPictures; - - GstH265Picture *current_picture; - GstVideoCodecFrame *current_frame; - - /* Slice (slice header + nalu) currently being processed/decoded */ - GstH265Slice current_slice; - GstH265Slice prev_slice; - GstH265Slice prev_independent_slice; - - GstH265Picture *RefPicSetStCurrBefore16; - GstH265Picture *RefPicSetStCurrAfter16; - GstH265Picture *RefPicSetStFoll16; - GstH265Picture *RefPicSetLtCurr16; - GstH265Picture *RefPicSetLtFoll16; - - guint NumPocStCurrBefore; - guint NumPocStCurrAfter; - guint NumPocStFoll; - guint NumPocLtCurr; - guint NumPocLtFoll; - guint NumPicTotalCurr; - - gint32 poc; // PicOrderCntVal - gint32 poc_msb; // PicOrderCntMsb - gint32 poc_lsb; // pic_order_cnt_lsb (from slice_header()) - gint32 prev_poc_msb; // prevPicOrderCntMsb - gint32 prev_poc_lsb; // prevPicOrderCntLsb - gint32 prev_tid0pic_poc_lsb; - gint32 
prev_tid0pic_poc_msb; - gint32 PocStCurrBefore16; - gint32 PocStCurrAfter16; - gint32 PocStFoll16; - gint32 PocLtCurr16; - gint32 PocLtFoll16; - - /* PicOrderCount of the previously outputted frame */ - gint last_output_poc; - - gboolean associated_irap_NoRaslOutputFlag; - gboolean new_bitstream; - gboolean prev_nal_is_eos; - - GArray *nalu; - - /* Split packetized data into actual nal chunks (for malformed stream) */ - GArray *split_nalu; - - GArray *au_nalus; - - GPtrArray *frame_queue; - GPtrArray *output_queue; - guint32 system_num; - guint32 present_num; - - GstClockTime latency; -}; - -typedef struct -{ - union - { - GstH265VPS vps; - GstH265SPS sps; - GstH265PPS pps; - GstH265Slice slice; - } unit; - GstH265NalUnitType nalu_type; - guint pps_id; -} GstH265ReorderNalUnit; - -static void gst_h265_reorder_finalize (GObject * object); - -static gboolean gst_h265_reorder_start_current_picture (GstH265Reorder * self); - -#define gst_h265_reorder_parent_class parent_class -G_DEFINE_TYPE (GstH265Reorder, gst_h265_reorder, GST_TYPE_OBJECT); - -static void -gst_h265_reorder_class_init (GstH265ReorderClass * klass) -{ - GObjectClass *object_class = G_OBJECT_CLASS (klass); - - object_class->finalize = gst_h265_reorder_finalize; - - GST_DEBUG_CATEGORY_INIT (gst_h265_reorder_debug, "h265reorder", 0, - "h265reorder"); -} - -static inline gboolean -is_slice_nalu (GstH265NalUnitType type) -{ - if ((type >= GST_H265_NAL_SLICE_TRAIL_N && - type <= GST_H265_NAL_SLICE_RASL_R) || - (type >= GST_H265_NAL_SLICE_BLA_W_LP && - type <= GST_H265_NAL_SLICE_CRA_NUT)) { - return TRUE; - } - - return FALSE; -} - -static void -gst_h265_reorder_clear_nalu (GstH265ReorderNalUnit * nalu) -{ - if (!nalu) - return; - - if (is_slice_nalu (nalu->nalu_type)) - gst_h265_slice_hdr_free (&nalu->unit.slice.header); - - memset (nalu, 0, sizeof (GstH265ReorderNalUnit)); -} - -static void -gst_h265_reorder_init (GstH265Reorder * self) -{ - self->parser = gst_h265_parser_new (); - self->preproc_parser = 
gst_h265_parser_new (); - self->dpb = gst_h265_dpb_new (); - self->frame_queue = - g_ptr_array_new_with_free_func ( - (GDestroyNotify) gst_video_codec_frame_unref); - self->output_queue = - g_ptr_array_new_with_free_func ( - (GDestroyNotify) gst_video_codec_frame_unref); - - self->nalu = g_array_sized_new (FALSE, TRUE, sizeof (GstH265ReorderNalUnit), - 8); - g_array_set_clear_func (self->nalu, - (GDestroyNotify) gst_h265_reorder_clear_nalu); - self->split_nalu = g_array_new (FALSE, FALSE, sizeof (GstH265NalUnit)); - self->au_nalus = g_array_new (FALSE, FALSE, sizeof (GstH265NalUnit)); - self->fps_n = 25; - self->fps_d = 1; -} - -static void -gst_h265_reorder_clear_ref_pic_sets (GstH265Reorder * self) -{ - guint i; - - for (i = 0; i < 16; i++) { - gst_clear_h265_picture (&self->RefPicSetLtCurri); - gst_clear_h265_picture (&self->RefPicSetLtFolli); - gst_clear_h265_picture (&self->RefPicSetStCurrBeforei); - gst_clear_h265_picture (&self->RefPicSetStCurrAfteri); - gst_clear_h265_picture (&self->RefPicSetStFolli); - } -} - -static void -gst_h265_reorder_finalize (GObject * object) -{ - GstH265Reorder *self = GST_H265_REORDER (object); - - gst_h265_parser_free (self->parser); - gst_h265_parser_free (self->preproc_parser); - g_ptr_array_unref (self->frame_queue); - g_ptr_array_unref (self->output_queue); - g_array_unref (self->nalu); - g_array_unref (self->split_nalu); - g_array_unref (self->au_nalus); - gst_h265_reorder_clear_ref_pic_sets (self); - gst_h265_dpb_free (self->dpb); - - G_OBJECT_CLASS (parent_class)->finalize (object); -} - -static gboolean -gst_h265_reorder_is_crop_rect_changed (GstH265Reorder * self, GstH265SPS * sps) -{ - if (self->conformance_window_flag != sps->conformance_window_flag) - return TRUE; - if (self->crop_rect_width != sps->crop_rect_width) - return TRUE; - if (self->crop_rect_height != sps->crop_rect_height) - return TRUE; - if (self->crop_rect_x != sps->crop_rect_x) - return TRUE; - if (self->crop_rect_y != sps->crop_rect_y) - return 
TRUE; - - return FALSE; -} - -typedef struct -{ - const gchar *level_name; - guint8 level_idc; - guint32 MaxLumaPs; -} GstH265LevelLimits; - -/* *INDENT-OFF* */ -/* Table A.8 - General tier and level limits */ -static const GstH265LevelLimits level_limits = { - /* level idc MaxLumaPs */ - { "1", GST_H265_LEVEL_L1, 36864 }, - { "2", GST_H265_LEVEL_L2, 122880 }, - { "2.1", GST_H265_LEVEL_L2_1, 245760 }, - { "3", GST_H265_LEVEL_L3, 552960 }, - { "3.1", GST_H265_LEVEL_L3_1, 983040 }, - { "4", GST_H265_LEVEL_L4, 2228224 }, - { "4.1", GST_H265_LEVEL_L4_1, 2228224 }, - { "5", GST_H265_LEVEL_L5, 8912896 }, - { "5.1", GST_H265_LEVEL_L5_1, 8912896 }, - { "5.2", GST_H265_LEVEL_L5_2, 8912896 }, - { "6", GST_H265_LEVEL_L6, 35651584 }, - { "6.1", GST_H265_LEVEL_L6_1, 35651584 }, - { "6.2", GST_H265_LEVEL_L6_2, 35651584 }, -}; -/* *INDENT-ON* */ - -static gint -gst_h265_reorder_get_max_dpb_size_from_sps (GstH265Reorder * self, - GstH265SPS * sps) -{ - guint i; - guint PicSizeInSamplesY; - /* Default is the worst case level 6.2 */ - guint32 MaxLumaPS = G_MAXUINT32; - gint MaxDpbPicBuf = 6; - gint max_dpb_size; - - /* A.4.2, maxDpbPicBuf is equal to 6 for all profiles where the value of - * sps_curr_pic_ref_enabled_flag is required to be equal to 0 and 7 for all - * profiles where the value of sps_curr_pic_ref_enabled_flag is not required - * to be equal to 0 */ - if (sps->sps_scc_extension_flag) { - /* sps_curr_pic_ref_enabled_flag could be non-zero only if profile is SCC */ - MaxDpbPicBuf = 7; - } - - /* Unknown level */ - if (sps->profile_tier_level.level_idc == 0) - return 16; - - PicSizeInSamplesY = sps->width * sps->height; - for (i = 0; i < G_N_ELEMENTS (level_limits); i++) { - if (sps->profile_tier_level.level_idc <= level_limitsi.level_idc) { - if (PicSizeInSamplesY <= level_limitsi.MaxLumaPs) { - MaxLumaPS = level_limitsi.MaxLumaPs; - } else { - GST_DEBUG_OBJECT (self, - "%u (%dx%d) exceeds allowed max luma sample for level \"%s\" %u", - PicSizeInSamplesY, sps->width, 
sps->height, - level_limitsi.level_name, level_limitsi.MaxLumaPs); - } - break; - } - } - - /* Unknown level */ - if (MaxLumaPS == G_MAXUINT32) - return 16; - - /* A.4.2 */ - if (PicSizeInSamplesY <= (MaxLumaPS >> 2)) - max_dpb_size = MaxDpbPicBuf * 4; - else if (PicSizeInSamplesY <= (MaxLumaPS >> 1)) - max_dpb_size = MaxDpbPicBuf * 2; - else if (PicSizeInSamplesY <= ((3 * MaxLumaPS) >> 2)) - max_dpb_size = (MaxDpbPicBuf * 4) / 3; - else - max_dpb_size = MaxDpbPicBuf; - - max_dpb_size = MIN (max_dpb_size, 16); - - /* MaxDpbSize is not an actual maximum required buffer size. - * Instead, it indicates upper bound for other syntax elements, such as - * sps_max_dec_pic_buffering_minus1. If this bitstream can satisfy - * the requirement, use this as our dpb size */ - if (sps->max_dec_pic_buffering_minus1sps->max_sub_layers_minus1 + 1 <= - max_dpb_size) { - GST_DEBUG_OBJECT (self, "max_dec_pic_buffering_minus1 %d < MaxDpbSize %d", - sps->max_dec_pic_buffering_minus1sps->max_sub_layers_minus1, - max_dpb_size); - max_dpb_size = - sps->max_dec_pic_buffering_minus1sps->max_sub_layers_minus1 + 1; - } else { - /* not reliable values, use 16 */ - max_dpb_size = 16; - } - - return max_dpb_size; -} - -static gboolean -gst_h265_reorder_process_sps (GstH265Reorder * self, GstH265SPS * sps) -{ - gint max_dpb_size; - gint prev_max_dpb_size; - guint8 field_seq_flag = 0; - guint8 progressive_source_flag = 0; - guint8 interlaced_source_flag = 0; - guint frames_delay; - - max_dpb_size = gst_h265_reorder_get_max_dpb_size_from_sps (self, sps); - - if (sps->vui_parameters_present_flag) - field_seq_flag = sps->vui_params.field_seq_flag; - - progressive_source_flag = sps->profile_tier_level.progressive_source_flag; - interlaced_source_flag = sps->profile_tier_level.interlaced_source_flag; - - prev_max_dpb_size = gst_h265_dpb_get_max_num_pics (self->dpb); - if (self->width != sps->width || self->height != sps->height || - prev_max_dpb_size != max_dpb_size || - self->field_seq_flag != 
      field_seq_flag ||
      self->progressive_source_flag != progressive_source_flag ||
      self->interlaced_source_flag != interlaced_source_flag ||
      gst_h265_reorder_is_crop_rect_changed (self, sps)) {

    GST_DEBUG_OBJECT (self,
        "SPS updated, resolution: %dx%d -> %dx%d, dpb size: %d -> %d, "
        "field_seq_flag: %d -> %d, progressive_source_flag: %d -> %d, "
        "interlaced_source_flag: %d -> %d",
        self->width, self->height, sps->width, sps->height,
        prev_max_dpb_size, max_dpb_size, self->field_seq_flag, field_seq_flag,
        self->progressive_source_flag, progressive_source_flag,
        self->interlaced_source_flag, interlaced_source_flag);

    gst_h265_reorder_drain (self);

    self->width = sps->width;
    self->height = sps->height;
    self->conformance_window_flag = sps->conformance_window_flag;
    self->crop_rect_width = sps->crop_rect_width;
    self->crop_rect_height = sps->crop_rect_height;
    self->crop_rect_x = sps->crop_rect_x;
    self->crop_rect_y = sps->crop_rect_y;
    self->field_seq_flag = field_seq_flag;
    self->progressive_source_flag = progressive_source_flag;
    self->interlaced_source_flag = interlaced_source_flag;

    gst_h265_dpb_set_max_num_pics (self->dpb, max_dpb_size);

    GST_DEBUG_OBJECT (self, "Set DPB max size %d", max_dpb_size);
  }

  if (sps->max_latency_increase_plus1[sps->max_sub_layers_minus1]) {
    self->SpsMaxLatencyPictures =
        sps->max_num_reorder_pics[sps->max_sub_layers_minus1] +
        sps->max_latency_increase_plus1[sps->max_sub_layers_minus1] - 1;
  } else {
    self->SpsMaxLatencyPictures = 0;
  }

  frames_delay = sps->max_num_reorder_pics[sps->max_sub_layers_minus1];
  self->latency = gst_util_uint64_scale_int (frames_delay * GST_SECOND,
      self->fps_d, self->fps_n);

  return TRUE;
}

static GstH265ParserResult
gst_h265_reorder_parse_sei (GstH265Reorder * self, GstH265NalUnit * nalu)
{
  GstH265ParserResult pres;
  GArray *messages = NULL;
  guint i;

  pres = gst_h265_parser_parse_sei (self->preproc_parser, nalu, &messages);
  if (pres !=
      GST_H265_PARSER_OK) {
    GST_WARNING_OBJECT (self, "Failed to parse SEI, result %d", pres);

    /* XXX: Ignore error from SEI parsing, it might be malformed bitstream,
     * or our fault. But shouldn't be critical */
    g_clear_pointer (&messages, g_array_unref);
    return GST_H265_PARSER_OK;
  }

  for (i = 0; i < messages->len; i++) {
    GstH265SEIMessage *sei = &g_array_index (messages, GstH265SEIMessage, i);

    switch (sei->payloadType) {
      case GST_H265_SEI_PIC_TIMING:
        self->cur_pic_struct = sei->payload.pic_timing.pic_struct;
        self->cur_source_scan_type = sei->payload.pic_timing.source_scan_type;
        self->cur_duplicate_flag = sei->payload.pic_timing.duplicate_flag;

        GST_TRACE_OBJECT (self,
            "Picture Timing SEI, pic_struct: %d, source_scan_type: %d, "
            "duplicate_flag: %d", self->cur_pic_struct,
            self->cur_source_scan_type, self->cur_duplicate_flag);
        break;
      default:
        break;
    }
  }

  g_array_free (messages, TRUE);
  GST_LOG_OBJECT (self, "SEI parsed");

  return GST_H265_PARSER_OK;
}

static gboolean
gst_h265_reorder_preprocess_slice (GstH265Reorder * self, GstH265Slice * slice)
{
  const GstH265SliceHdr *slice_hdr = &slice->header;

  if (self->current_picture && slice_hdr->first_slice_segment_in_pic_flag) {
    GST_WARNING_OBJECT (self,
        "Current picture is not finished but slice header has "
        "first_slice_segment_in_pic_flag");
    return FALSE;
  }

  return TRUE;
}

static gboolean
gst_h265_reorder_process_slice (GstH265Reorder * self, GstH265Slice * slice)
{
  self->current_slice = *slice;

  if (self->current_slice.header.dependent_slice_segment_flag) {
    GstH265SliceHdr *slice_hdr = &self->current_slice.header;
    GstH265SliceHdr *indep_slice_hdr = &self->prev_independent_slice.header;

    memcpy (&slice_hdr->type, &indep_slice_hdr->type,
        G_STRUCT_OFFSET (GstH265SliceHdr, num_entry_point_offsets) -
        G_STRUCT_OFFSET (GstH265SliceHdr, type));
  } else {
    self->prev_independent_slice = self->current_slice;
    memset
        (&self->prev_independent_slice.nalu, 0, sizeof (GstH265NalUnit));
  }

  if (!gst_h265_reorder_preprocess_slice (self, &self->current_slice))
    return FALSE;

  /* The used SPS may not be the latest parsed one, make
   * sure we have updated it before decoding the frame */
  if (!gst_h265_reorder_process_sps (self, self->current_slice.header.pps->sps)) {
    GST_WARNING_OBJECT (self, "Failed to process sps");
    return FALSE;
  }

  self->active_pps = self->current_slice.header.pps;
  self->active_sps = self->active_pps->sps;

  if (!self->current_picture) {
    GstH265Picture *picture;

    g_assert (self->current_frame);

    picture = gst_h265_picture_new ();
    /* This allows accessing the frame from the picture. */
    GST_CODEC_PICTURE_FRAME_NUMBER (picture) =
        self->current_frame->system_frame_number;

    self->current_picture = picture;

    if (!gst_h265_reorder_start_current_picture (self)) {
      GST_WARNING_OBJECT (self, "start picture failed");
      return FALSE;
    }
  }

  return TRUE;
}

static GstH265ParserResult
gst_h265_reorder_parse_slice (GstH265Reorder * self, GstH265NalUnit * nalu)
{
  GstH265ParserResult pres;
  GstH265Slice slice;
  GstH265ReorderNalUnit decoder_nalu;

  memset (&slice, 0, sizeof (GstH265Slice));

  pres = gst_h265_parser_parse_slice_hdr (self->preproc_parser,
      nalu, &slice.header);
  if (pres != GST_H265_PARSER_OK)
    return pres;

  slice.nalu = *nalu;

  if (nalu->type >= GST_H265_NAL_SLICE_BLA_W_LP &&
      nalu->type <= GST_H265_NAL_SLICE_CRA_NUT) {
    slice.rap_pic_flag = TRUE;
  }

  /* NoRaslOutputFlag == 1 if the current picture is
   * 1) an IDR picture
   * 2) a BLA picture
   * 3) a CRA picture that is the first access unit in the bitstream
   * 4) first picture that follows an end of sequence NAL unit in decoding order
   * 5) has HandleCraAsBlaFlag == 1 (set by external means, so not considering)
   */
  if (GST_H265_IS_NAL_TYPE_IDR (nalu->type) ||
      GST_H265_IS_NAL_TYPE_BLA (nalu->type) ||
      (GST_H265_IS_NAL_TYPE_CRA
          (nalu->type) && self->new_bitstream) ||
      self->prev_nal_is_eos) {
    slice.no_rasl_output_flag = TRUE;
  }

  if (GST_H265_IS_NAL_TYPE_IRAP (nalu->type)) {
    slice.intra_pic_flag = TRUE;

    if (slice.no_rasl_output_flag && !self->new_bitstream) {
      /* C 3.2 */
      slice.clear_dpb = TRUE;
      if (nalu->type == GST_H265_NAL_SLICE_CRA_NUT) {
        slice.no_output_of_prior_pics_flag = TRUE;
      } else {
        slice.no_output_of_prior_pics_flag =
            slice.header.no_output_of_prior_pics_flag;
      }
    }
  }

  if (slice.no_output_of_prior_pics_flag)
    self->no_output_of_prior_pics_flag = TRUE;

  decoder_nalu.unit.slice = slice;
  decoder_nalu.nalu_type = nalu->type;
  decoder_nalu.pps_id = slice.header.pps->id;

  g_array_append_val (self->nalu, decoder_nalu);

  return GST_H265_PARSER_OK;
}

static GstH265ParserResult
gst_h265_reorder_parse_nalu (GstH265Reorder * self, GstH265NalUnit * nalu)
{
  GstH265VPS vps;
  GstH265SPS sps;
  GstH265PPS pps;
  GstH265ParserResult ret = GST_H265_PARSER_OK;
  GstH265ReorderNalUnit decoder_nalu;

  GST_LOG_OBJECT (self, "Parsed nal type: %d, offset %d, size %d",
      nalu->type, nalu->offset, nalu->size);

  memset (&decoder_nalu, 0, sizeof (GstH265ReorderNalUnit));
  decoder_nalu.nalu_type = nalu->type;

  switch (nalu->type) {
    case GST_H265_NAL_VPS:
      ret = gst_h265_parser_parse_vps (self->preproc_parser, nalu, &vps);
      if (ret != GST_H265_PARSER_OK)
        break;

      decoder_nalu.unit.vps = vps;
      g_array_append_val (self->nalu, decoder_nalu);
      break;
    case GST_H265_NAL_SPS:
      ret = gst_h265_parser_parse_sps (self->preproc_parser, nalu, &sps, TRUE);
      if (ret != GST_H265_PARSER_OK)
        break;

      decoder_nalu.unit.sps = sps;
      g_array_append_val (self->nalu, decoder_nalu);
      break;
    case GST_H265_NAL_PPS:
      ret = gst_h265_parser_parse_pps (self->preproc_parser, nalu, &pps);
      if (ret != GST_H265_PARSER_OK)
        break;

      decoder_nalu.unit.pps = pps;
      g_array_append_val (self->nalu, decoder_nalu);
      break;
    case GST_H265_NAL_PREFIX_SEI:
    case GST_H265_NAL_SUFFIX_SEI:
      ret = gst_h265_reorder_parse_sei (self, nalu);
      break;
    case GST_H265_NAL_SLICE_TRAIL_N:
    case GST_H265_NAL_SLICE_TRAIL_R:
    case GST_H265_NAL_SLICE_TSA_N:
    case GST_H265_NAL_SLICE_TSA_R:
    case GST_H265_NAL_SLICE_STSA_N:
    case GST_H265_NAL_SLICE_STSA_R:
    case GST_H265_NAL_SLICE_RADL_N:
    case GST_H265_NAL_SLICE_RADL_R:
    case GST_H265_NAL_SLICE_RASL_N:
    case GST_H265_NAL_SLICE_RASL_R:
    case GST_H265_NAL_SLICE_BLA_W_LP:
    case GST_H265_NAL_SLICE_BLA_W_RADL:
    case GST_H265_NAL_SLICE_BLA_N_LP:
    case GST_H265_NAL_SLICE_IDR_W_RADL:
    case GST_H265_NAL_SLICE_IDR_N_LP:
    case GST_H265_NAL_SLICE_CRA_NUT:
      ret = gst_h265_reorder_parse_slice (self, nalu);
      self->new_bitstream = FALSE;
      self->prev_nal_is_eos = FALSE;
      break;
    case GST_H265_NAL_EOB:
      self->new_bitstream = TRUE;
      break;
    case GST_H265_NAL_EOS:
      self->prev_nal_is_eos = TRUE;
      break;
    default:
      break;
  }

  return ret;
}

static gboolean
gst_h265_reorder_decode_nalu (GstH265Reorder * self,
    GstH265ReorderNalUnit * nalu)
{
  GstH265ParserResult rst;

  switch (nalu->nalu_type) {
    case GST_H265_NAL_VPS:
      gst_h265_parser_update_vps (self->parser, &nalu->unit.vps);
      return TRUE;
    case GST_H265_NAL_SPS:
      gst_h265_parser_update_sps (self->parser, &nalu->unit.sps);
      return TRUE;
    case GST_H265_NAL_PPS:
      gst_h265_parser_update_pps (self->parser, &nalu->unit.pps);
      return TRUE;
    default:
      if (!is_slice_nalu (nalu->nalu_type)) {
        GST_WARNING_OBJECT (self, "Unexpected nal type %d", nalu->nalu_type);
        return TRUE;
      }
      break;
  }

  rst = gst_h265_parser_link_slice_hdr (self->parser,
      &nalu->unit.slice.header, nalu->pps_id);

  if (rst != GST_H265_PARSER_OK) {
    GST_ERROR_OBJECT (self, "Couldn't update slice header");
    return FALSE;
  }

  return gst_h265_reorder_process_slice (self, &nalu->unit.slice);
}

static gboolean
gst_h265_reorder_parse_codec_data (GstH265Reorder * self, const guint8 * data,
    gsize size)
{
  GstH265Parser *parser
      = self->parser;
  GstH265ParserResult pres;
  gboolean ret = FALSE;
  GstH265VPS vps;
  GstH265SPS sps;
  GstH265PPS pps;
  GstH265DecoderConfigRecord *config = NULL;
  guint i, j;

  pres = gst_h265_parser_parse_decoder_config_record (parser,
      data, size, &config);
  if (pres != GST_H265_PARSER_OK) {
    GST_WARNING_OBJECT (self, "Failed to parse hvcC data");
    return FALSE;
  }

  self->nal_length_size = config->length_size_minus_one + 1;
  GST_DEBUG_OBJECT (self, "nal length size %u", self->nal_length_size);

  for (i = 0; i < config->nalu_array->len; i++) {
    GstH265DecoderConfigRecordNalUnitArray *array =
        &g_array_index (config->nalu_array,
        GstH265DecoderConfigRecordNalUnitArray, i);

    for (j = 0; j < array->nalu->len; j++) {
      GstH265NalUnit *nalu = &g_array_index (array->nalu, GstH265NalUnit, j);

      switch (nalu->type) {
        case GST_H265_NAL_VPS:
          pres = gst_h265_parser_parse_vps (parser, nalu, &vps);
          if (pres != GST_H265_PARSER_OK) {
            GST_WARNING_OBJECT (self, "Failed to parse VPS");
            goto out;
          }
          gst_h265_parser_update_vps (self->preproc_parser, &vps);
          break;
        case GST_H265_NAL_SPS:
          pres = gst_h265_parser_parse_sps (parser, nalu, &sps, TRUE);
          if (pres != GST_H265_PARSER_OK) {
            GST_WARNING_OBJECT (self, "Failed to parse SPS");
            goto out;
          }
          gst_h265_parser_update_sps (self->preproc_parser, &sps);
          break;
        case GST_H265_NAL_PPS:
          pres = gst_h265_parser_parse_pps (parser, nalu, &pps);
          if (pres != GST_H265_PARSER_OK) {
            GST_WARNING_OBJECT (self, "Failed to parse PPS");
            goto out;
          }
          gst_h265_parser_update_pps (self->preproc_parser, &pps);
          break;
        default:
          break;
      }
    }
  }

  ret = TRUE;

out:
  gst_h265_decoder_config_record_free (config);
  return ret;
}

gboolean
gst_h265_reorder_set_caps (GstH265Reorder * self, GstCaps * caps,
    GstClockTime * latency)
{
  GstStructure *s;
  const gchar *str;
  const GValue *codec_data;
  gboolean ret = TRUE;
  gint fps_n, fps_d;

  GST_DEBUG_OBJECT (self, "Set caps %"
      GST_PTR_FORMAT, caps);

  self->nal_length_size = 4;
  self->is_hevc = FALSE;

  s = gst_caps_get_structure (caps, 0);
  str = gst_structure_get_string (s, "stream-format");
  if (str && (g_strcmp0 (str, "hvc1") == 0 || g_strcmp0 (str, "hev1") == 0))
    self->is_hevc = TRUE;

  if (gst_structure_get_fraction (s, "framerate", &fps_n, &fps_d) &&
      fps_n > 0 && fps_d > 0) {
    self->fps_n = fps_n;
    self->fps_d = fps_d;
  } else {
    self->fps_n = 25;
    self->fps_d = 1;
  }

  codec_data = gst_structure_get_value (s, "codec_data");
  if (codec_data && G_VALUE_TYPE (codec_data) == GST_TYPE_BUFFER) {
    GstBuffer *buf = gst_value_get_buffer (codec_data);
    GstMapInfo info;
    if (gst_buffer_map (buf, &info, GST_MAP_READ)) {
      ret = gst_h265_reorder_parse_codec_data (self, info.data, info.size);
      gst_buffer_unmap (buf, &info);
    } else {
      GST_ERROR_OBJECT (self, "Couldn't map codec data");
      ret = FALSE;
    }
  }

  if (self->need_reorder)
    *latency = self->latency;
  else
    *latency = 0;

  return ret;
}

static gboolean
gst_h265_reorder_fill_picture_from_slice (GstH265Reorder * self,
    const GstH265Slice * slice, GstH265Picture * picture)
{
  const GstH265SliceHdr *slice_hdr = &slice->header;
  const GstH265NalUnit *nalu = &slice->nalu;

  picture->RapPicFlag = slice->rap_pic_flag;
  picture->NoRaslOutputFlag = slice->no_rasl_output_flag;
  picture->IntraPicFlag = slice->intra_pic_flag;
  picture->NoOutputOfPriorPicsFlag = slice->no_output_of_prior_pics_flag;
  if (picture->IntraPicFlag) {
    self->associated_irap_NoRaslOutputFlag = picture->NoRaslOutputFlag;
  }

  if (GST_H265_IS_NAL_TYPE_RASL (nalu->type) &&
      self->associated_irap_NoRaslOutputFlag) {
    picture->output_flag = FALSE;
  } else {
    picture->output_flag = slice_hdr->pic_output_flag;
  }

  return TRUE;
}

#define RSV_VCL_N10 10
#define RSV_VCL_N12 12
#define RSV_VCL_N14 14

static gboolean
nal_is_ref (guint8 nal_type)
{
  gboolean ret = FALSE;
  switch (nal_type) {
    case
        GST_H265_NAL_SLICE_TRAIL_N:
    case GST_H265_NAL_SLICE_TSA_N:
    case GST_H265_NAL_SLICE_STSA_N:
    case GST_H265_NAL_SLICE_RADL_N:
    case GST_H265_NAL_SLICE_RASL_N:
    case RSV_VCL_N10:
    case RSV_VCL_N12:
    case RSV_VCL_N14:
      ret = FALSE;
      break;
    default:
      ret = TRUE;
      break;
  }
  return ret;
}

static gboolean
gst_h265_reorder_calculate_poc (GstH265Reorder * self,
    const GstH265Slice * slice, GstH265Picture * picture)
{
  const GstH265SliceHdr *slice_hdr = &slice->header;
  const GstH265NalUnit *nalu = &slice->nalu;
  const GstH265SPS *sps = self->active_sps;
  gint32 MaxPicOrderCntLsb = 1 << (sps->log2_max_pic_order_cnt_lsb_minus4 + 4);
  gboolean is_irap;

  self->prev_poc_lsb = self->poc_lsb;
  self->prev_poc_msb = self->poc_msb;

  is_irap = GST_H265_IS_NAL_TYPE_IRAP (nalu->type);

  if (!(is_irap && picture->NoRaslOutputFlag)) {
    self->prev_poc_lsb = self->prev_tid0pic_poc_lsb;
    self->prev_poc_msb = self->prev_tid0pic_poc_msb;
  }

  /* Finding PicOrderCntMsb */
  if (is_irap && picture->NoRaslOutputFlag) {
    self->poc_msb = 0;
  } else {
    /* (8-1) */
    if ((slice_hdr->pic_order_cnt_lsb < self->prev_poc_lsb) &&
        ((self->prev_poc_lsb - slice_hdr->pic_order_cnt_lsb) >=
            (MaxPicOrderCntLsb / 2)))
      self->poc_msb = self->prev_poc_msb + MaxPicOrderCntLsb;

    else if ((slice_hdr->pic_order_cnt_lsb > self->prev_poc_lsb) &&
        ((slice_hdr->pic_order_cnt_lsb - self->prev_poc_lsb) >
            (MaxPicOrderCntLsb / 2)))
      self->poc_msb = self->prev_poc_msb - MaxPicOrderCntLsb;

    else
      self->poc_msb = self->prev_poc_msb;
  }

  /* (8-2) */
  self->poc = picture->pic_order_cnt =
      self->poc_msb + slice_hdr->pic_order_cnt_lsb;
  self->poc_lsb = picture->pic_order_cnt_lsb = slice_hdr->pic_order_cnt_lsb;

  if (GST_H265_IS_NAL_TYPE_IDR (nalu->type)) {
    picture->pic_order_cnt = 0;
    picture->pic_order_cnt_lsb = 0;
    self->poc_lsb = 0;
    self->poc_msb = 0;
    self->prev_poc_lsb = 0;
    self->prev_poc_msb = 0;
    self->prev_tid0pic_poc_lsb = 0;
    self->prev_tid0pic_poc_msb = 0;
  }

  GST_LOG_OBJECT (self,
      "PicOrderCntVal %d, (lsb %d)", picture->pic_order_cnt,
      picture->pic_order_cnt_lsb);

  if (nalu->temporal_id_plus1 == 1 && !GST_H265_IS_NAL_TYPE_RASL (nalu->type) &&
      !GST_H265_IS_NAL_TYPE_RADL (nalu->type) && nal_is_ref (nalu->type)) {
    self->prev_tid0pic_poc_lsb = slice_hdr->pic_order_cnt_lsb;
    self->prev_tid0pic_poc_msb = self->poc_msb;
  }

  return TRUE;
}

static gboolean
gst_h265_reorder_init_current_picture (GstH265Reorder * self)
{
  if (!gst_h265_reorder_fill_picture_from_slice (self, &self->current_slice,
          self->current_picture)) {
    return FALSE;
  }

  if (!gst_h265_reorder_calculate_poc (self,
          &self->current_slice, self->current_picture))
    return FALSE;

  /* Use picture struct parsed from picture timing SEI */
  self->current_picture->pic_struct = self->cur_pic_struct;
  self->current_picture->source_scan_type = self->cur_source_scan_type;
  self->current_picture->duplicate_flag = self->cur_duplicate_flag;

  return TRUE;
}

static gboolean
has_entry_in_rps (GstH265Picture * dpb_pic,
    GstH265Picture ** rps_list, guint rps_list_length)
{
  guint i;

  if (!dpb_pic || !rps_list || !rps_list_length)
    return FALSE;

  for (i = 0; i < rps_list_length; i++) {
    if (rps_list[i] && rps_list[i]->pic_order_cnt == dpb_pic->pic_order_cnt)
      return TRUE;
  }
  return FALSE;
}

static void
gst_h265_reorder_derive_and_mark_rps (GstH265Reorder * self,
    GstH265Picture * picture, gint32 * CurrDeltaPocMsbPresentFlag,
    gint32 * FollDeltaPocMsbPresentFlag)
{
  guint i;
  GArray *dpb_array;

  gst_h265_reorder_clear_ref_pic_sets (self);

  /* (8-6) */
  for (i = 0; i < self->NumPocLtCurr; i++) {
    if (!CurrDeltaPocMsbPresentFlag[i]) {
      self->RefPicSetLtCurr[i] =
          gst_h265_dpb_get_ref_by_poc_lsb (self->dpb, self->PocLtCurr[i]);
    } else {
      self->RefPicSetLtCurr[i] =
          gst_h265_dpb_get_ref_by_poc (self->dpb, self->PocLtCurr[i]);
    }
  }

  for (i = 0; i < self->NumPocLtFoll; i++) {
    if (!FollDeltaPocMsbPresentFlag[i]) {
      self->RefPicSetLtFoll[i] =
          gst_h265_dpb_get_ref_by_poc_lsb (self->dpb, self->PocLtFoll[i]);
    } else {
      self->RefPicSetLtFoll[i] =
          gst_h265_dpb_get_ref_by_poc (self->dpb, self->PocLtFoll[i]);
    }
  }

  /* Mark all ref pics in RefPicSetLtCurr and RefPicSetLtFoll as long_term_refs */
  for (i = 0; i < self->NumPocLtCurr; i++) {
    if (self->RefPicSetLtCurr[i]) {
      self->RefPicSetLtCurr[i]->ref = TRUE;
      self->RefPicSetLtCurr[i]->long_term = TRUE;
    }
  }

  for (i = 0; i < self->NumPocLtFoll; i++) {
    if (self->RefPicSetLtFoll[i]) {
      self->RefPicSetLtFoll[i]->ref = TRUE;
      self->RefPicSetLtFoll[i]->long_term = TRUE;
    }
  }

  /* (8-7) */
  for (i = 0; i < self->NumPocStCurrBefore; i++) {
    self->RefPicSetStCurrBefore[i] =
        gst_h265_dpb_get_short_ref_by_poc (self->dpb, self->PocStCurrBefore[i]);
  }

  for (i = 0; i < self->NumPocStCurrAfter; i++) {
    self->RefPicSetStCurrAfter[i] =
        gst_h265_dpb_get_short_ref_by_poc (self->dpb, self->PocStCurrAfter[i]);
  }

  for (i = 0; i < self->NumPocStFoll; i++) {
    self->RefPicSetStFoll[i] =
        gst_h265_dpb_get_short_ref_by_poc (self->dpb, self->PocStFoll[i]);
  }

  /* Mark all dpb pics not belonging to RefPicSet* as unused for ref */
  dpb_array = gst_h265_dpb_get_pictures_all (self->dpb);
  for (i = 0; i < dpb_array->len; i++) {
    GstH265Picture *dpb_pic = g_array_index (dpb_array, GstH265Picture *, i);

    if (dpb_pic &&
        !has_entry_in_rps (dpb_pic, self->RefPicSetLtCurr, self->NumPocLtCurr)
        && !has_entry_in_rps (dpb_pic, self->RefPicSetLtFoll,
            self->NumPocLtFoll)
        && !has_entry_in_rps (dpb_pic, self->RefPicSetStCurrAfter,
            self->NumPocStCurrAfter)
        && !has_entry_in_rps (dpb_pic, self->RefPicSetStCurrBefore,
            self->NumPocStCurrBefore)
        && !has_entry_in_rps (dpb_pic, self->RefPicSetStFoll,
            self->NumPocStFoll)) {
      GST_LOG_OBJECT (self, "Mark Picture %p (poc %d) as non-ref", dpb_pic,
          dpb_pic->pic_order_cnt);
      dpb_pic->ref = FALSE;
      dpb_pic->long_term = FALSE;
    }
  }

  g_array_unref
      (dpb_array);
}

static gboolean
gst_h265_reorder_prepare_rps (GstH265Reorder * self, const GstH265Slice * slice,
    GstH265Picture * picture)
{
  gint32 CurrDeltaPocMsbPresentFlag[16] = { 0, };
  gint32 FollDeltaPocMsbPresentFlag[16] = { 0, };
  const GstH265SliceHdr *slice_hdr = &slice->header;
  const GstH265NalUnit *nalu = &slice->nalu;
  const GstH265SPS *sps = self->active_sps;
  guint32 MaxPicOrderCntLsb = 1 << (sps->log2_max_pic_order_cnt_lsb_minus4 + 4);
  gint i, j, k;

  /* if it is an irap pic, set all ref pics in dpb as unused for ref */
  if (GST_H265_IS_NAL_TYPE_IRAP (nalu->type) && picture->NoRaslOutputFlag) {
    GST_DEBUG_OBJECT (self, "Mark all pictures in DPB as non-ref");
    gst_h265_dpb_mark_all_non_ref (self->dpb);
  }

  /* Reset everything for IDR */
  if (GST_H265_IS_NAL_TYPE_IDR (nalu->type)) {
    memset (self->PocStCurrBefore, 0, sizeof (self->PocStCurrBefore));
    memset (self->PocStCurrAfter, 0, sizeof (self->PocStCurrAfter));
    memset (self->PocStFoll, 0, sizeof (self->PocStFoll));
    memset (self->PocLtCurr, 0, sizeof (self->PocLtCurr));
    memset (self->PocLtFoll, 0, sizeof (self->PocLtFoll));
    self->NumPocStCurrBefore = self->NumPocStCurrAfter = self->NumPocStFoll = 0;
    self->NumPocLtCurr = self->NumPocLtFoll = 0;
  } else {
    const GstH265ShortTermRefPicSet *stRefPic = NULL;
    gint32 num_lt_pics, pocLt;
    gint32 PocLsbLt[16] = { 0, };
    gint32 UsedByCurrPicLt[16] = { 0, };
    gint32 DeltaPocMsbCycleLt[16] = { 0, };
    gint numtotalcurr = 0;

    /* this is based on CurrRpsIdx described in spec */
    if (!slice_hdr->short_term_ref_pic_set_sps_flag)
      stRefPic = &slice_hdr->short_term_ref_pic_sets;
    else if (sps->num_short_term_ref_pic_sets)
      stRefPic =
          &sps->short_term_ref_pic_set[slice_hdr->short_term_ref_pic_set_idx];

    if (stRefPic == NULL)
      return FALSE;

    GST_LOG_OBJECT (self,
        "NumDeltaPocs: %d, NumNegativePics: %d, NumPositivePics %d",
        stRefPic->NumDeltaPocs, stRefPic->NumNegativePics,
        stRefPic->NumPositivePics);

    for (i =
        0, j = 0, k = 0; i < stRefPic->NumNegativePics; i++) {
      if (stRefPic->UsedByCurrPicS0[i]) {
        self->PocStCurrBefore[j++] =
            picture->pic_order_cnt + stRefPic->DeltaPocS0[i];
        numtotalcurr++;
      } else
        self->PocStFoll[k++] = picture->pic_order_cnt + stRefPic->DeltaPocS0[i];
    }
    self->NumPocStCurrBefore = j;
    for (i = 0, j = 0; i < stRefPic->NumPositivePics; i++) {
      if (stRefPic->UsedByCurrPicS1[i]) {
        self->PocStCurrAfter[j++] =
            picture->pic_order_cnt + stRefPic->DeltaPocS1[i];
        numtotalcurr++;
      } else
        self->PocStFoll[k++] = picture->pic_order_cnt + stRefPic->DeltaPocS1[i];
    }
    self->NumPocStCurrAfter = j;
    self->NumPocStFoll = k;
    num_lt_pics = slice_hdr->num_long_term_sps + slice_hdr->num_long_term_pics;
    /* The variables PocLsbLt[i] and UsedByCurrPicLt[i] are derived as follows: */
    for (i = 0; i < num_lt_pics; i++) {
      if (i < slice_hdr->num_long_term_sps) {
        PocLsbLt[i] = sps->lt_ref_pic_poc_lsb_sps[slice_hdr->lt_idx_sps[i]];
        UsedByCurrPicLt[i] =
            sps->used_by_curr_pic_lt_sps_flag[slice_hdr->lt_idx_sps[i]];
      } else {
        PocLsbLt[i] = slice_hdr->poc_lsb_lt[i];
        UsedByCurrPicLt[i] = slice_hdr->used_by_curr_pic_lt_flag[i];
      }
      if (UsedByCurrPicLt[i])
        numtotalcurr++;
    }

    self->NumPicTotalCurr = numtotalcurr;

    /* The variable DeltaPocMsbCycleLt[i] is derived as follows: (7-38) */
    for (i = 0; i < num_lt_pics; i++) {
      if (i == 0 || i == slice_hdr->num_long_term_sps)
        DeltaPocMsbCycleLt[i] = slice_hdr->delta_poc_msb_cycle_lt[i];
      else
        DeltaPocMsbCycleLt[i] =
            slice_hdr->delta_poc_msb_cycle_lt[i] + DeltaPocMsbCycleLt[i - 1];
    }

    /* (8-5) */
    for (i = 0, j = 0, k = 0; i < num_lt_pics; i++) {
      pocLt = PocLsbLt[i];
      if (slice_hdr->delta_poc_msb_present_flag[i])
        pocLt +=
            picture->pic_order_cnt - DeltaPocMsbCycleLt[i] * MaxPicOrderCntLsb -
            slice_hdr->pic_order_cnt_lsb;
      if (UsedByCurrPicLt[i]) {
        self->PocLtCurr[j] = pocLt;
        CurrDeltaPocMsbPresentFlag[j++] =
            slice_hdr->delta_poc_msb_present_flag[i];
      } else {
        self->PocLtFoll[k] = pocLt;
        FollDeltaPocMsbPresentFlag[k++] =
            slice_hdr->delta_poc_msb_present_flag[i];
      }
    }
    self->NumPocLtCurr = j;
    self->NumPocLtFoll = k;
  }

  GST_LOG_OBJECT (self, "NumPocStCurrBefore: %d", self->NumPocStCurrBefore);
  GST_LOG_OBJECT (self, "NumPocStCurrAfter: %d", self->NumPocStCurrAfter);
  GST_LOG_OBJECT (self, "NumPocStFoll: %d", self->NumPocStFoll);
  GST_LOG_OBJECT (self, "NumPocLtCurr: %d", self->NumPocLtCurr);
  GST_LOG_OBJECT (self, "NumPocLtFoll: %d", self->NumPocLtFoll);
  GST_LOG_OBJECT (self, "NumPicTotalCurr: %d", self->NumPicTotalCurr);

  /* the derivation process for the RPS and the picture marking */
  gst_h265_reorder_derive_and_mark_rps (self, picture,
      CurrDeltaPocMsbPresentFlag, FollDeltaPocMsbPresentFlag);

  return TRUE;
}

static void
gst_h265_reorder_set_output_buffer (GstH265Reorder * self, guint frame_num)
{
  gsize i, j;

  for (i = 0; i < self->frame_queue->len; i++) {
    GstVideoCodecFrame *frame = g_ptr_array_index (self->frame_queue, i);
    if (frame->system_frame_number != frame_num)
      continue;

    /* Copy frame at present index to */
    if (!frame->output_buffer) {
      GST_LOG_OBJECT (self, "decoding order: %u, display order: %u",
          frame_num, self->present_num);
      frame->presentation_frame_number = self->present_num;
      self->present_num++;
      for (j = 0; j < self->frame_queue->len; j++) {
        GstVideoCodecFrame *other_frame =
            g_ptr_array_index (self->frame_queue, j);
        if (other_frame->system_frame_number ==
            frame->presentation_frame_number) {
          frame->output_buffer = gst_buffer_ref (other_frame->input_buffer);
          return;
        }
      }
    }

    break;
  }
}

static void
gst_h265_reorder_output_picture (GstH265Reorder * self,
    GstH265Picture * picture)
{
  guint frame_num = GST_CODEC_PICTURE_FRAME_NUMBER (picture);

  gst_h265_reorder_set_output_buffer (self, frame_num);
  gst_h265_picture_unref (picture);

  /* Move completed frames to output queue */
  while (self->frame_queue->len > 0) {
    GstVideoCodecFrame *frame = g_ptr_array_index (self->frame_queue,
        0);
    if (!frame->output_buffer)
      break;

    frame = g_ptr_array_steal_index (self->frame_queue, 0);
    g_ptr_array_add (self->output_queue, frame);
  }
}

GstH265Reorder *
gst_h265_reorder_new (gboolean need_reorder)
{
  GstH265Reorder *self = g_object_new (GST_TYPE_H265_REORDER, NULL);
  gst_object_ref_sink (self);

  self->need_reorder = need_reorder;

  return self;
}

void
gst_h265_reorder_drain (GstH265Reorder * reorder)
{
  GstH265Picture *picture;

  while ((picture = gst_h265_dpb_bump (reorder->dpb, TRUE)) != NULL) {
    gst_h265_reorder_output_picture (reorder, picture);
  }

  gst_h265_dpb_clear (reorder->dpb);

  /* Frame queue should be empty or holding only current frame */
  while (reorder->frame_queue->len > 0) {
    GstVideoCodecFrame *frame = g_ptr_array_index (reorder->frame_queue, 0);
    if (frame == reorder->current_frame)
      break;

    GST_WARNING_OBJECT (reorder, "Remaining frame after drain %" GST_PTR_FORMAT,
        frame->input_buffer);

    /* Move to output queue anyway */
    frame->output_buffer = gst_buffer_ref (frame->input_buffer);
    frame = g_ptr_array_steal_index (reorder->frame_queue, 0);
    g_ptr_array_add (reorder->output_queue, frame);
  }

  /* presentation number */
  if (reorder->current_frame)
    reorder->present_num = reorder->current_frame->system_frame_number;
  else
    reorder->present_num = reorder->system_num;
}

/* C.5.2.2 */
static gboolean
gst_h265_reorder_dpb_init (GstH265Reorder * self, const GstH265Slice * slice,
    GstH265Picture * picture)
{
  const GstH265SPS *sps = self->active_sps;
  GstH265Picture *to_output;

  /* C 3.2 */
  if (slice->clear_dpb) {
    /* Ignores NoOutputOfPriorPicsFlag and drain all */
    gst_h265_reorder_drain (self);
  } else {
    /* TODO: According to 7.4.3.3.3, TwoVersionsOfCurrDecPicFlag
     * should be considered.
     *
     * NOTE: (See 8.1.3) if TwoVersionsOfCurrDecPicFlag is 1,
     * current picture requires two picture buffers allocated in DPB storage,
     * one is decoded picture *after* in-loop filter, and the other is
     * decoded picture *before* in-loop filter, so that current picture
     * can be used as a reference of the current picture
     * (e.g., intra block copy method in SCC).
     * Here TwoVersionsOfCurrDecPicFlag takes effect in order to ensure
     * at least two empty DPB buffers before starting current picture decoding.
     *
     * However, two DPB picture allocation is not implemented
     * in current baseclass (which would imply that we are doing reference
     * picture management wrongly in case of SCC).
     * Let's ignore TwoVersionsOfCurrDecPicFlag for now */
    guint max_dec_pic_buffering =
        sps->max_dec_pic_buffering_minus1[sps->max_sub_layers_minus1] + 1;
    gst_h265_dpb_delete_unused (self->dpb);
    while (gst_h265_dpb_needs_bump (self->dpb,
            sps->max_num_reorder_pics[sps->max_sub_layers_minus1],
            self->SpsMaxLatencyPictures, max_dec_pic_buffering)) {
      to_output = gst_h265_dpb_bump (self->dpb, FALSE);

      /* Something wrong...
       */
      if (!to_output) {
        GST_WARNING_OBJECT (self, "Bumping is needed but no picture to output");
        break;
      }

      gst_h265_reorder_output_picture (self, to_output);
    }
  }

  return TRUE;
}

static gboolean
gst_h265_reorder_start_current_picture (GstH265Reorder * self)
{
  g_assert (self->current_picture != NULL);
  g_assert (self->active_sps != NULL);
  g_assert (self->active_pps != NULL);

  if (!gst_h265_reorder_init_current_picture (self))
    return FALSE;

  /* Drop all RASL pictures whose NoRaslOutputFlag is TRUE for the
   * associated IRAP picture */
  if (GST_H265_IS_NAL_TYPE_RASL (self->current_slice.nalu.type) &&
      self->associated_irap_NoRaslOutputFlag) {
    GST_DEBUG_OBJECT (self, "Ignores associated_irap_NoRaslOutputFlag");
  }

  if (!gst_h265_reorder_prepare_rps (self, &self->current_slice,
          self->current_picture)) {
    GST_WARNING_OBJECT (self, "Failed to prepare ref pic set");
    gst_clear_h265_picture (&self->current_picture);
    return FALSE;
  }

  if (!gst_h265_reorder_dpb_init (self,
          &self->current_slice, self->current_picture)) {
    GST_WARNING_OBJECT (self, "Failed to init dpb");
    gst_clear_h265_picture (&self->current_picture);
    return FALSE;
  }

  return TRUE;
}

static void
gst_h265_reorder_finish_picture (GstH265Reorder * self,
    GstH265Picture * picture)
{
  const GstH265SPS *sps = self->active_sps;

  GST_LOG_OBJECT (self,
      "Finishing picture %p (poc %d), entries in DPB %d",
      picture, picture->pic_order_cnt, gst_h265_dpb_get_size (self->dpb));

  gst_h265_dpb_delete_unused (self->dpb);

  /* gst_h265_dpb_add() will take care of pic_latency_cnt increment and
   * reference picture marking for this picture */
  gst_h265_dpb_add (self->dpb, picture);

  /* NOTE: As per C.5.2.2, bumping by sps_max_dec_pic_buffering_minus1 is
   * applied only for the output and removal of pictures from the DPB before
   * the decoding of the current picture.
   * So pass zero here */
  while (gst_h265_dpb_needs_bump (self->dpb,
          sps->max_num_reorder_pics[sps->max_sub_layers_minus1],
          self->SpsMaxLatencyPictures, 0)) {
    GstH265Picture *to_output = gst_h265_dpb_bump (self->dpb, FALSE);

    /* Something wrong... */
    if (!to_output) {
      GST_WARNING_OBJECT (self, "Bumping is needed but no picture to output");
      break;
    }

    gst_h265_reorder_output_picture (self, to_output);
  }
}

static void
gst_h265_reorder_reset_frame_state (GstH265Reorder * self)
{
  /* Clear picture struct information */
  self->cur_pic_struct = GST_H265_SEI_PIC_STRUCT_FRAME;
  self->cur_source_scan_type = 2;
  self->cur_duplicate_flag = 0;
  self->no_output_of_prior_pics_flag = FALSE;
  self->current_frame = NULL;
  g_array_set_size (self->nalu, 0);
}

static GstBuffer *
gst_h265_reorder_remove_caption_sei (GstH265Reorder * self, GstBuffer * buffer)
{
  GstH265ParserResult pres = GST_H265_PARSER_OK;
  GstMapInfo map;
  GstH265NalUnit nalu;
  guint i;
  gboolean have_sei = FALSE;
  GstBuffer *new_buf;

  g_array_set_size (self->au_nalus, 0);

  gst_buffer_map (buffer, &map, GST_MAP_READ);
  if (self->is_hevc) {
    guint offset = 0;
    gsize consumed = 0;
    guint i;

    do {
      pres = gst_h265_parser_identify_and_split_nalu_hevc (self->parser,
          map.data, offset, map.size, self->nal_length_size,
          self->split_nalu, &consumed);
      if (pres != GST_H265_PARSER_OK)
        break;

      for (i = 0; i < self->split_nalu->len; i++) {
        nalu = g_array_index (self->split_nalu, GstH265NalUnit, i);
        g_array_append_val (self->au_nalus, nalu);
      }

      offset += consumed;
    } while (pres == GST_H265_PARSER_OK);
  } else {
    pres = gst_h265_parser_identify_nalu (self->parser,
        map.data, 0, map.size, &nalu);

    if (pres == GST_H265_PARSER_NO_NAL_END)
      pres = GST_H265_PARSER_OK;

    while (pres == GST_H265_PARSER_OK) {
      g_array_append_val (self->au_nalus, nalu);

      pres = gst_h265_parser_identify_nalu (self->parser,
          map.data, nalu.offset + nalu.size, map.size,
&nalu); - - if (pres == GST_H265_PARSER_NO_NAL_END) - pres = GST_H265_PARSER_OK; - } - } - - /* Fast scan without parsing */ - for (i = 0; i < self->au_nalus->len; i++) { - GstH265NalUnit *nl = &g_array_index (self->au_nalus, GstH265NalUnit, i); - switch (nl->type) { - case GST_H265_NAL_VPS: - { - GstH265VPS vps; - gst_h265_parser_parse_vps (self->parser, nl, &vps); - break; - } - case GST_H265_NAL_SPS: - { - GstH265SPS sps; - gst_h265_parser_parse_sps (self->parser, nl, &sps, TRUE); - break; - } - case GST_H265_NAL_PREFIX_SEI: - case GST_H265_NAL_SUFFIX_SEI: - have_sei = TRUE; - break; - default: - break; - } - } - - if (!have_sei) { - GST_LOG_OBJECT (self, "Buffer without SEI, %" GST_PTR_FORMAT, buffer); - gst_buffer_unmap (buffer, &map); - g_array_set_size (self->au_nalus, 0); - return gst_buffer_ref (buffer); - } - - new_buf = gst_buffer_new (); - gst_buffer_copy_into (new_buf, buffer, GST_BUFFER_COPY_METADATA, 0, -1); - - for (i = 0; i < self->au_nalus->len; i++) { - GstH265NalUnit *nl = &g_array_index (self->au_nalus, GstH265NalUnit, i); - GstMemory *mem = NULL; - - if (nl->type == GST_H265_NAL_PREFIX_SEI || - nl->type == GST_H265_NAL_SUFFIX_SEI) { - GArray *msg = NULL; - gint j; - gst_h265_parser_parse_sei (self->parser, nl, &msg); - gboolean have_caption_sei = FALSE; - - for (j = 0; j < (gint) msg->len; j++) { - GstH265SEIMessage *sei = &g_array_index (msg, GstH265SEIMessage, j); - GstH265RegisteredUserData *rud; - if (sei->payloadType != GST_H265_SEI_REGISTERED_USER_DATA) - continue; - - rud = &sei->payload.registered_user_data; - - if (!gst_h264_reorder_is_cea708_sei (rud->country_code, - rud->data, rud->size)) { - continue; - } - - GST_LOG_OBJECT (self, "Found CEA708 caption SEI"); - have_caption_sei = TRUE; - - g_array_remove_index (msg, j); - j--; - } - - if (have_caption_sei) { - if (msg->len > 0) { - /* Creates new SEI memory */ - if (self->is_hevc) { - mem = gst_h265_create_sei_memory_hevc (nl->layer_id, - nl->temporal_id_plus1, 
self->nal_length_size, msg); - } else { - mem = gst_h265_create_sei_memory (nl->layer_id, - nl->temporal_id_plus1, 4, msg); - } - - if (!mem) - GST_ERROR_OBJECT (self, "Couldn't create SEI memory"); - else - gst_buffer_append_memory (new_buf, mem); - } - } else { - gsize size = nl->size + (nl->offset - nl->sc_offset); - gpointer *data = g_memdup2 (nl->data + nl->sc_offset, size); - mem = gst_memory_new_wrapped (0, data, size, 0, size, data, g_free); - gst_buffer_append_memory (new_buf, mem); - } - - g_array_unref (msg); - } else { - gsize size = nl->size + (nl->offset - nl->sc_offset); - gpointer *data = g_memdup2 (nl->data + nl->sc_offset, size); - mem = gst_memory_new_wrapped (0, data, size, 0, size, data, g_free); - gst_buffer_append_memory (new_buf, mem); - } - } - - gst_buffer_unmap (buffer, &map); - g_array_set_size (self->au_nalus, 0); - - return new_buf; -} - -gboolean -gst_h265_reorder_push (GstH265Reorder * reorder, GstVideoCodecFrame * frame, - GstClockTime * latency) -{ - GstBuffer *in_buf; - GstH265NalUnit nalu; - GstH265ParserResult pres = GST_H265_PARSER_OK; - GstMapInfo map; - gboolean decode_ret = TRUE; - guint i; - - gst_h265_reorder_reset_frame_state (reorder); - - frame->system_frame_number = reorder->system_num; - frame->decode_frame_number = reorder->system_num; - - GST_LOG_OBJECT (reorder, - "Push frame %u, frame queue size: %u, output queue size %u", - frame->system_frame_number, reorder->frame_queue->len, - reorder->output_queue->len); - - in_buf = gst_h265_reorder_remove_caption_sei (reorder, frame->input_buffer); - if (in_buf) { - gst_buffer_unref (frame->input_buffer); - frame->input_buffer = in_buf; - } else { - in_buf = frame->input_buffer; - } - - reorder->system_num++; - - if (!reorder->need_reorder) { - g_ptr_array_add (reorder->output_queue, frame); - *latency = 0; - return TRUE; - } - - g_ptr_array_add (reorder->frame_queue, frame); - reorder->current_frame = frame; - - gst_buffer_map (in_buf, &map, GST_MAP_READ); - if 
(reorder->is_hevc) { - guint offset = 0; - gsize consumed = 0; - - do { - pres = gst_h265_parser_identify_and_split_nalu_hevc (reorder->parser, - map.data, offset, map.size, reorder->nal_length_size, - reorder->split_nalu, &consumed); - if (pres != GST_H265_PARSER_OK) - break; - - for (i = 0; i < reorder->split_nalu->len; i++) { - GstH265NalUnit *nl = - &g_array_index (reorder->split_nalu, GstH265NalUnit, i); - pres = gst_h265_reorder_parse_nalu (reorder, nl); - if (pres != GST_H265_PARSER_OK) - break; - } - - if (pres != GST_H265_PARSER_OK) - break; - - offset += consumed; - } while (pres == GST_H265_PARSER_OK); - } else { - pres = gst_h265_parser_identify_nalu (reorder->parser, - map.data, 0, map.size, &nalu); - - if (pres == GST_H265_PARSER_NO_NAL_END) - pres = GST_H265_PARSER_OK; - - while (pres == GST_H265_PARSER_OK) { - pres = gst_h265_reorder_parse_nalu (reorder, &nalu); - if (pres != GST_H265_PARSER_OK) - break; - - pres = gst_h265_parser_identify_nalu (reorder->parser, - map.data, nalu.offset + nalu.size, map.size, &nalu); - if (pres == GST_H265_PARSER_NO_NAL_END) - pres = GST_H265_PARSER_OK; - } - } - - for (i = 0; i < reorder->nalu->len && decode_ret; i++) { - GstH265ReorderNalUnit *decoder_nalu = - &g_array_index (reorder->nalu, GstH265ReorderNalUnit, i); - decode_ret = gst_h265_reorder_decode_nalu (reorder, decoder_nalu); - } - - gst_buffer_unmap (in_buf, &map); - gst_h265_reorder_reset_frame_state (reorder); - - if (!decode_ret) { - GST_ERROR_OBJECT (reorder, "Couldn't decode frame"); - gst_clear_h265_picture (&reorder->current_picture); - reorder->current_frame = NULL; - - g_ptr_array_remove (reorder->frame_queue, frame); - reorder->system_num--; - - return FALSE; - } - - if (!reorder->current_picture) { - GST_DEBUG_OBJECT (reorder, - "AU buffer without slice data, current frame %u", - frame->system_frame_number); - - g_ptr_array_remove (reorder->frame_queue, frame); - reorder->current_frame = NULL; - reorder->system_num--; - - return FALSE; - } - - 
gst_h265_reorder_finish_picture (reorder, reorder->current_picture); - reorder->current_picture = NULL; - reorder->current_frame = NULL; - - *latency = reorder->latency; - - return TRUE; -} - -GstVideoCodecFrame * -gst_h265_reorder_pop (GstH265Reorder * reorder) -{ - if (!reorder->output_queue->len) { - GST_LOG_OBJECT (reorder, "Empty output queue, frames queue size %u", - reorder->frame_queue->len); - return NULL; - } - - return g_ptr_array_steal_index (reorder->output_queue, 0); -} - -guint -gst_h265_reorder_get_num_buffered (GstH265Reorder * reorder) -{ - return reorder->frame_queue->len + reorder->output_queue->len; -} - -GstBuffer * -gst_h265_reorder_insert_sei (GstH265Reorder * reorder, GstBuffer * au, - GArray * sei) -{ - GstMemory *mem; - GstBuffer *new_buf; - - if (reorder->is_hevc) - mem = gst_h265_create_sei_memory_hevc (0, 1, reorder->nal_length_size, sei); - else - mem = gst_h265_create_sei_memory (0, 1, 4, sei); - - if (!mem) { - GST_ERROR_OBJECT (reorder, "Couldn't create SEI memory"); - return NULL; - } - - if (reorder->is_hevc) { - new_buf = gst_h265_parser_insert_sei_hevc (reorder->parser, - reorder->nal_length_size, au, mem); - } else { - new_buf = gst_h265_parser_insert_sei (reorder->parser, au, mem); - } - - gst_memory_unref (mem); - return new_buf; -}
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/closedcaption/meson.build
Deleted
@@ -1,68 +0,0 @@ -closedcaption_dep = dependency('pangocairo', version : '>= 1.32.6', - required : get_option('closedcaption')) - -closedcaption_sources = [ - 'gstcccombiner.c', - 'gstccextractor.c', - 'gstccconverter.c', - 'gstcea608mux.c', - 'gstclosedcaption.c', - 'gstline21dec.c', - 'gstcea708decoder.c', - 'gstceaccoverlay.c', - 'gstline21enc.c', - 'ccutils.c', - 'gsth264ccextractor.c', - 'gsth265ccextractor.c', - 'gsth264reorder.c', - 'gsth265reorder.c', - 'gstcodecccinserter.c', - 'gsth264ccinserter.c', - 'gsth265ccinserter.c', -] - -closedcaption_headers = [ - 'gstline21dec.h', - 'gstcea708decoder.h', - 'gstcccombiner.h', - 'gstcea608mux.h', - 'gstccconverter.h', - 'gstceaccoverlay.h', - 'gstccextractor.h', - 'ccutils.h', - 'gstline21enc.h', -] - -zvbi_sources = [ - 'bit_slicer.c', - 'decoder.c', - 'raw_decoder.c', - 'sampling_par.c', - 'io-sim.c', -] - -extra_args = ['-DGST_USE_UNSTABLE_API'] - -doc_sources = [] -foreach s: closedcaption_sources + closedcaption_headers - doc_sources += meson.current_source_dir() / s -endforeach - -plugin_sources += { - 'closedcaption': pathsep.join(doc_sources) -} - -if closedcaption_dep.found() - gstclosedcaption = library('gstclosedcaption', - closedcaption_sources, - zvbi_sources, - c_args : gst_plugins_bad_args + extra_args, - link_args : noseh_link_args, - include_directories : configinc, - dependencies : [gstvideo_dep, gstbase_dep, gst_dep, closedcaption_dep, libm, - gstcodecs_dep], - install : true, - install_dir : plugins_install_dir, - ) - plugins += gstclosedcaption -endif
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/closedcaption/misc.h
Deleted
@@ -1,558 +0,0 @@ -/* - * libzvbi -- Miscellaneous cows and chickens - * - * Copyright (C) 2000-2003 Iñaki García Etxebarria - * Copyright (C) 2002-2007 Michael H. Schimek - * - * This library is free software; you can redistribute it and/or - * modify it under the terms of the GNU Library General Public - * License as published by the Free Software Foundation; either - * version 2 of the License, or (at your option) any later version. - * - * This library is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Library General Public License for more details. - * - * You should have received a copy of the GNU Library General Public - * License along with this library; if not, write to the - * Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, - * Boston, MA 02110-1301 USA. - */ - -/* $Id: misc.h,v 1.24 2013-07-02 02:32:31 mschimek Exp $ */ - -#ifndef MISC_H -#define MISC_H - -#include <stdio.h> -#include <stdlib.h> -#include <stddef.h> -#include <stdarg.h> -#include <string.h> -#include <inttypes.h> /* (u)intXX_t */ -#include <sys/types.h> /* (s)size_t */ -#include <float.h> /* DBL_MAX */ -#include <limits.h> /* (S)SIZE_MAX */ -#include <assert.h> -#include <glib.h> -#include <gst/gst.h> - -#include "macros.h" - -#define N_ELEMENTS(array) (sizeof (array) / sizeof (*(array))) - -#ifdef __GNUC__ - -#if __GNUC__ < 3 -/* Expect expression usually true/false, schedule accordingly. */ -# define likely(expr) (expr) -# define unlikely(expr) (expr) -#else -# define likely(expr) __builtin_expect(expr, 1) -# define unlikely(expr) __builtin_expect(expr, 0) -#endif - -#undef __i386__ -#undef __i686__ -/* FIXME #cpu is deprecated -#if #cpu (i386) -# define __i386__ 1 -#endif -#if #cpu (i686) -# define __i686__ 1 -#endif -*/ - -/* &x == PARENT (&x.tm_min, struct tm, tm_min), - safer than &x == (struct tm *) &x.tm_min. 
A NULL _ptr is safe and - will return NULL, not -offsetof(_member). */ -#undef PARENT -#define PARENT(_ptr, _type, _member) ({ \ - __typeof__ (&((_type *) 0)->_member) _p = (_ptr); \ - (_p != 0) ? (_type *)(((char *) _p) - offsetof (_type, \ - _member)) : (_type *) 0; \ -}) - -/* Like PARENT(), to be used with const _ptr. */ -#define CONST_PARENT(_ptr, _type, _member) ({ \ - __typeof__ (&((const _type *) 0)->_member) _p = (_ptr); \ - (_p != 0) ? (const _type *)(((const char *) _p) - offsetof \ - (const _type, _member)) : (const _type *) 0; \ -}) - -/* Note the following macros have no side effects only when you - compile with GCC, so don't expect this. */ - -/* Absolute value of int, long or long long without a branch. - Note ABS (INT_MIN) -> INT_MAX + 1. */ -#undef ABS -#define ABS(n) ({ \ - register __typeof__ (n) _n = (n), _t = _n; \ - if (-1 == (-1 >> 1)) { /* do we have signed shifts? */ \ - _t >>= sizeof (_t) * 8 - 1; \ - _n ^= _t; \ - _n -= _t; \ - } else if (_n < 0) { /* also warns if n is unsigned type */ \ - _n = -_n; \ - } \ - /* return */ _n; \ -}) - -#undef MIN -#define MIN(x, y) ({ \ - __typeof__ (x) _x = (x); \ - __typeof__ (y) _y = (y); \ - (void)(&_x == &_y); /* warn if types do not match */ \ - /* return */ (_x < _y) ? _x : _y; \ -}) - -#undef MAX -#define MAX(x, y) ({ \ - __typeof__ (x) _x = (x); \ - __typeof__ (y) _y = (y); \ - (void)(&_x == &_y); /* warn if types do not match */ \ - /* return */ (_x > _y) ? _x : _y; \ -}) - -/* Note other compilers may swap only int, long or pointer. 
*/ -#undef SWAP -#define SWAP(x, y) \ -do { \ - __typeof__ (x) _x = x; \ - x = y; \ - y = _x; \ -} while (0) - -#undef SATURATE -#ifdef __i686__ /* has conditional move */ -#define SATURATE(n, min, max) ({ \ - __typeof__ (n) _n = (n); \ - __typeof__ (n) _min = (min); \ - __typeof__ (n) _max = (max); \ - (void)(&_n == &_min); /* warn if types do not match */ \ - (void)(&_n == &_max); \ - if (_n < _min) \ - _n = _min; \ - if (_n > _max) \ - _n = _max; \ - /* return */ _n; \ -}) -#else -#define SATURATE(n, min, max) ({ \ - __typeof__ (n) _n = (n); \ - __typeof__ (n) _min = (min); \ - __typeof__ (n) _max = (max); \ - (void)(&_n == &_min); /* warn if types do not match */ \ - (void)(&_n == &_max); \ - if (_n < _min) \ - _n = _min; \ - else if (_n > _max) \ - _n = _max; \ - /* return */ _n; \ -}) -#endif - -#else /* !__GNUC__ */ - -#define likely(expr) (expr) -#define unlikely(expr) (expr) -#undef __i386__ -#undef __i686__ - -static char * -PARENT_HELPER (char *p, unsigned int offset) -{ return (0 == p) ? ((char *) 0) : p - offset; } - -static const char * -CONST_PARENT_HELPER (const char *p, unsigned int offset) -{ return (0 == p) ? ((char *) 0) : p - offset; } - -#define PARENT(_ptr, _type, _member) \ - ((0 == offsetof (_type, _member)) ? (_type *)(_ptr) \ - : (_type *) PARENT_HELPER ((char *)(_ptr), offsetof (_type, _member))) -#define CONST_PARENT(_ptr, _type, _member) \ - ((0 == offsetof (const _type, _member)) ? (const _type *)(_ptr) \ - : (const _type *) CONST_PARENT_HELPER ((const char *)(_ptr), \ - offsetof (const _type, _member))) - -#undef ABS -#define ABS(n) (((n) < 0) ? -(n) : (n)) - -#undef MIN -#define MIN(x, y) (((x) < (y)) ? (x) : (y)) - -#undef MAX -#define MAX(x, y) (((x) > (y)) ? (x) : (y)) - -#undef SWAP -#define SWAP(x, y) \ -do { \ - long _x = x; \ - x = y; \ - y = _x; \ -} while (0) - -#undef SATURATE -#define SATURATE(n, min, max) MIN (MAX (min, n), max) - -#endif /* !__GNUC__ */ - -/* 32 bit constant byte reverse, e.g. 0xAABBCCDD -> 0xDDCCBBAA. 
*/ -#define SWAB32(m) \ - (+ (((m) & 0xFF000000) >> 24) \ - + (((m) & 0xFF0000) >> 8) \ - + (((m) & 0xFF00) << 8) \ - + (((m) & 0xFF) << 24)) - -#ifdef HAVE_BUILTIN_POPCOUNT -# define popcnt(x) __builtin_popcount ((uint32_t)(x)) -#else -# define popcnt(x) _vbi_popcnt (x) -#endif - -extern unsigned int -_vbi_popcnt (uint32_t x); - -/* NB GCC inlines and optimizes these functions when size is const. */ -#define SET(var) memset (&(var), ~0, sizeof (var)) - -#define CLEAR(var) memset (&(var), 0, sizeof (var)) - -/* Useful to copy arrays, otherwise use assignment. */ -#define COPY(d, s) \ - (assert (sizeof (d) == sizeof (s)), memcpy (d, s, sizeof (d))) - -/* Copy string const into char array. */ -#define STRACPY(array, s) \ -do { \ - /* Complain if s is no string const or won't fit. */ \ - const char t_[sizeof (array) - 1] _vbi_unused = s; \ - \ - memcpy (array, s, sizeof (s)); \ -} while (0) - -/* Copy bits through mask. */ -#define COPY_SET_MASK(dest, from, mask) \ - (dest ^= (from) ^ (dest & (mask))) - -/* Set bits if cond is TRUE, clear if FALSE. */ -#define COPY_SET_COND(dest, bits, cond) \ - ((cond) ? (dest |= (bits)) : (dest &= ~(bits))) - -/* Set and clear bits. */ -#define COPY_SET_CLEAR(dest, set, clear) \ - (dest = (dest & ~(clear)) | (set)) - -/* For applications, debugging and fault injection during unit tests. */ - -#define vbi_malloc malloc -#define vbi_realloc realloc -#define vbi_strdup strdup -#define vbi_free free - -#define vbi_cache_malloc vbi_malloc -#define vbi_cache_free vbi_free - -/* Helper functions.
*/ - -_vbi_inline int -_vbi_to_ascii (int c) -{ - if (c < 0) - return '?'; - - c &= 0x7F; - - if (c < 0x20 || c >= 0x7F) - return '.'; - - return c; -} - -typedef struct { - const char * key; - int value; -} _vbi_key_value_pair; - -extern vbi_bool -_vbi_keyword_lookup (int * value, - const char ** inout_s, - const _vbi_key_value_pair * table, - unsigned int n_pairs) - _vbi_nonnull ((1, 2, 3)); - -extern void -_vbi_shrink_vector_capacity (void ** vector, - size_t * capacity, - size_t min_capacity, - size_t element_size) - _vbi_nonnull ((1, 2)); -extern vbi_bool -_vbi_grow_vector_capacity (void ** vector, - size_t * capacity, - size_t min_capacity, - size_t element_size) - _vbi_nonnull ((1, 2)); - -GST_DEBUG_CATEGORY_EXTERN (libzvbi_debug); - -#ifndef GST_DISABLE_GST_DEBUG -/* Logging stuff. */ -#ifdef G_HAVE_ISO_VARARGS -#define VBI_CAT_LEVEL_LOG(level,object,...) G_STMT_START{ \ - if (G_UNLIKELY ((level) <= GST_LEVEL_MAX && (level) <= _gst_debug_min)) { \ - gst_debug_log (libzvbi_debug, (level), __FILE__, GST_FUNCTION, __LINE__, \ - (GObject *) (object), __VA_ARGS__); \ - } \ -}G_STMT_END -#else /* G_HAVE_GNUC_VARARGS */ -#ifdef G_HAVE_GNUC_VARARGS -#define VBI_CAT_LEVEL_LOG(level,object,args...) G_STMT_START{ \ - if (G_UNLIKELY ((level) <= GST_LEVEL_MAX && (level) <= _gst_debug_min)) { \ - gst_debug_log (libzvbi_debug, (level), __FILE__, GST_FUNCTION, __LINE__, \ - (GObject *) (object), ##args ); \ - } \ -}G_STMT_END -#else /* no variadic macros, use inline */ -static inline void -VBI_CAT_LEVEL_LOG_valist (GstDebugCategory * cat, - GstDebugLevel level, gpointer object, const char *format, va_list varargs) -{ - if (G_UNLIKELY ((level) <= GST_LEVEL_MAX && (level) <= _gst_debug_min)) { - gst_debug_log_valist (cat, level, "", "", 0, (GObject *) object, format, - varargs); - } -} - -static inline void -VBI_CAT_LEVEL_LOG (GstDebugLevel level, - gpointer object, const char *format, ...) 
-{ - va_list varargs; - - va_start (varargs, format); - GST_CAT_LEVEL_LOG_valist (libzvbi_debug, level, object, format, varargs); - va_end (varargs); -} -#endif -#endif /* G_HAVE_ISO_VARARGS */ -#else -static inline void -VBI_CAT_LEVEL_LOG (GstDebugLevel level, - gpointer object, const char *format, ...) -{ -} -#endif /* GST_DISABLE_GST_DEBUG */ - -#ifdef G_HAVE_GNUC_VARARGS -#define error(hook, templ, args...) \ - VBI_CAT_LEVEL_LOG (GST_LEVEL_ERROR, NULL, templ , ##args) -#define warn(hook, templ, args...) \ - VBI_CAT_LEVEL_LOG (GST_LEVEL_WARNING, NULL, templ , ##args) -#define notice(hook, templ, args...) \ - VBI_CAT_LEVEL_LOG (GST_LEVEL_INFO, NULL, templ , ##args) -#define info(hook, templ, args...) \ - VBI_CAT_LEVEL_LOG (GST_LEVEL_INFO, NULL, templ , ##args) -#define debug1(hook, templ, args...) \ - VBI_CAT_LEVEL_LOG (GST_LEVEL_DEBUG, NULL, templ , ##args) -#define debug2(hook, templ, args...) \ - VBI_CAT_LEVEL_LOG (GST_LEVEL_LOG, NULL, templ , ##args) -#define debug3(hook, templ, args...) \ - VBI_CAT_LEVEL_LOG (GST_LEVEL_TRACE, NULL, templ , ##args) -#elif defined(G_HAVE_ISO_VARARGS) -#define error(hook, templ, ...) \ - VBI_CAT_LEVEL_LOG (GST_LEVEL_ERROR, NULL, templ, __VA_ARGS__) -#define warn(hook, templ, ...) \ - VBI_CAT_LEVEL_LOG (GST_LEVEL_WARNING, NULL, templ, __VA_ARGS__) -#define notice(hook, templ, ...) \ - VBI_CAT_LEVEL_LOG (GST_LEVEL_INFO, NULL, templ, __VA_ARGS__) -#define info(hook, templ, ...) \ - VBI_CAT_LEVEL_LOG (GST_LEVEL_INFO, NULL, templ, __VA_ARGS__) -#define debug1(hook, templ, ...) \ - VBI_CAT_LEVEL_LOG (GST_LEVEL_DEBUG, NULL, templ, __VA_ARGS__) -#define debug2(hook, templ, ...) \ - VBI_CAT_LEVEL_LOG (GST_LEVEL_LOG, NULL, templ, __VA_ARGS__) -#define debug3(hook, templ, ...) 
\ - VBI_CAT_LEVEL_LOG (GST_LEVEL_TRACE, NULL, templ, __VA_ARGS__) -#else -/* if someone needs this, they can implement the inline functions for it */ -#error "variadic macros are required" -#endif - - -#if 0 /* Replaced logging with GStreamer logging system */ -extern _vbi_log_hook _vbi_global_log; - -extern void -_vbi_log_vprintf (vbi_log_fn * log_fn, - void * user_data, - vbi_log_mask level, - const char * source_file, - const char * context, - const char * templ, - va_list ap) - _vbi_nonnull ((1, 4, 5, 6)); -extern void -_vbi_log_printf (vbi_log_fn * log_fn, - void * user_data, - vbi_log_mask level, - const char * source_file, - const char * context, - const char * templ, - ...) - _vbi_nonnull ((1, 4, 5, 6)) _vbi_format ((printf, 6, 7)); - -#define _vbi_log(hook, level, templ, args...) \ -do { \ - _vbi_log_hook *_h = hook; \ - \ - if ((NULL != _h && 0 != (_h->mask & level)) \ - || (_h = &_vbi_global_log, 0 != (_h->mask & level))) \ - _vbi_log_printf (_h->fn, _h->user_data, \ - level, __FILE__, __FUNCTION__, \ - templ , ##args); \ -} while (0) - -#define _vbi_vlog(hook, level, templ, ap) \ -do { \ - _vbi_log_hook *_h = hook; \ - \ - if ((NULL != _h && 0 != (_h->mask & level)) \ - || (_h = &_vbi_global_log, 0 != (_h->mask & level))) \ - _vbi_log_vprintf (_h->fn, _h->user_data, \ - level, __FILE__, __FUNCTION__, \ - templ, ap); \ -} while (0) -#define error(hook, templ, args...) \ - _vbi_log (hook, VBI_LOG_ERROR, templ , ##args) -#define warning(hook, templ, args...) \ - _vbi_log (hook, VBI_LOG_ERROR, templ , ##args) -#define notice(hook, templ, args...) \ - _vbi_log (hook, VBI_LOG_NOTICE, templ , ##args) -#define info(hook, templ, args...) \ - _vbi_log (hook, VBI_LOG_INFO, templ , ##args) -#define debug1(hook, templ, args...) \ - _vbi_log (hook, VBI_LOG_DEBUG, templ , ##args) -#define debug2(hook, templ, args...) \ - _vbi_log (hook, VBI_LOG_DEBUG2, templ , ##args) -#define debug3(hook, templ, args...) 
\ - _vbi_log (hook, VBI_LOG_DEBUG3, templ , ##args) -#endif - -/* Portability stuff. */ - -/* These should be defined in inttypes.h. */ -#ifndef PRId64 -# define PRId64 "lld" -#endif -#ifndef PRIu64 -# define PRIu64 "llu" -#endif -#ifndef PRIx64 -# define PRIx64 "llx" -#endif - -/* Should be defined in C99 limits.h? */ -#ifndef SIZE_MAX -# define SIZE_MAX ((size_t) -1) -#endif - -#ifndef TIME_MIN -# define TIME_MIN (_vbi_time_min ()) -_vbi_inline time_t -_vbi_time_min (void) -{ - const time_t t = (time_t) -1.25; - - if (t < -1) { - return (time_t)((sizeof (time_t) > 4) ? DBL_MIN : FLT_MIN); - } else if (t < 0) { - return ((uint64_t) 1) << (sizeof (time_t) * 8 - 1); - } else { - return 0; - } -} -#endif - -#ifndef TIME_MAX -# define TIME_MAX (_vbi_time_max ()) -_vbi_inline time_t -_vbi_time_max (void) -{ - const time_t t = (time_t) -1.25; - - if (t < -1) { - return (time_t)((sizeof (time_t) > 4) ? DBL_MAX : FLT_MAX); - } else if (t < 0) { - /* Most likely signed 32 or 64 bit. */ - return (((uint64_t) 1) << (sizeof (time_t) * 8 - 1)) - 1; - } else { - return -1; - } -} -#endif - -/* __va_copy is a GNU extension. */ -#ifndef __va_copy -# define __va_copy(ap1, ap2) do { ap1 = ap2; } while (0) -#endif - -#if 0 -/* Use this instead of strncpy(). strlcpy() is a BSD extension. */ -#ifndef HAVE_STRLCPY -# define strlcpy _vbi_strlcpy -#endif -#undef strncpy -#define strncpy use_strlcpy_instead - -extern size_t -_vbi_strlcpy (char * dst, - const char * src, - size_t size) - _vbi_nonnull ((1, 2)); -#endif - -/* /\* strndup() is a BSD/GNU extension. *\/ */ -/* #ifndef HAVE_STRNDUP */ -/* # define strndup _vbi_strndup */ -/* #endif */ - -/* extern char * */ -/* _vbi_strndup (const char * s, */ -/* size_t len) */ -/* _vbi_nonnull ((1)); */ - -/* vasprintf() is a GNU extension. 
*/ -#ifndef HAVE_VASPRINTF -# define vasprintf _vbi_vasprintf -#endif - -extern int -_vbi_vasprintf (char ** dstp, - const char * templ, - va_list ap) - _vbi_nonnull ((1, 2)); - -/* asprintf() is a GNU extension. */ -#ifndef HAVE_ASPRINTF -# define asprintf _vbi_asprintf -#endif - -extern int -_vbi_asprintf (char ** dstp, - const char * templ, - ...) - _vbi_nonnull ((1, 2)) _vbi_format ((printf, 2, 3)); - -#undef sprintf -#define sprintf use_snprintf_or_asprintf_instead - -#endif /* MISC_H */ - -/* -Local variables: -c-set-style: K&R -c-basic-offset: 8 -End: -*/
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/onnx/gstonnxclient.cpp
Deleted
@@ -1,568 +0,0 @@ -/* - * GStreamer gstreamer-onnxclient - * Copyright (C) 2021-2023 Collabora Ltd - * - * gstonnxclient.cpp - * - * This library is free software; you can redistribute it and/or - * modify it under the terms of the GNU Library General Public - * License as published by the Free Software Foundation; either - * version 2 of the License, or (at your option) any later version. - * - * This library is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Library General Public License for more details. - * - * You should have received a copy of the GNU Library General Public - * License along with this library; if not, write to the - * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, - * Boston, MA 02110-1301, USA. - */ - -#include "gstonnxclient.h" -#include <cpu_provider_factory.h> -#include <sstream> - -#define GST_CAT_DEFAULT onnx_inference_debug - -/* FIXME: share this with tensordecoders, somehow? 
*/ -#define GST_MODEL_OBJECT_DETECTOR_BOXES "Gst.Model.ObjectDetector.Boxes" -#define GST_MODEL_OBJECT_DETECTOR_SCORES "Gst.Model.ObjectDetector.Scores" -#define GST_MODEL_OBJECT_DETECTOR_NUM_DETECTIONS "Gst.Model.ObjectDetector.NumDetections" -#define GST_MODEL_OBJECT_DETECTOR_CLASSES "Gst.Model.ObjectDetector.Classes" - -namespace GstOnnxNamespace -{ - template < typename T > - std::ostream & operator<< (std::ostream & os, const std::vector < T > &v) - { - os << "["; - for (size_t i = 0; i < v.size (); ++i) - { - os << v[i]; - if (i != v.size () - 1) - { - os << ", "; - } - } - os << "]"; - - return os; - } - -GstOnnxClient::GstOnnxClient (GstElement *debug_parent):debug_parent(debug_parent), - session (nullptr), - width (0), - height (0), - channels (0), - dest (nullptr), - m_provider (GST_ONNX_EXECUTION_PROVIDER_CPU), - inputImageFormat (GST_ML_INPUT_IMAGE_FORMAT_HWC), - inputDatatype (GST_TENSOR_DATA_TYPE_UINT8), - inputDatatypeSize (sizeof (uint8_t)), - fixedInputImageSize (false), - inputTensorOffset (0.0), - inputTensorScale (1.0) - { - } - - GstOnnxClient::~GstOnnxClient () { - delete session; - delete[] dest; - } - - int32_t GstOnnxClient::getWidth (void) - { - return width; - } - - int32_t GstOnnxClient::getHeight (void) - { - return height; - } - - int32_t GstOnnxClient::getChannels (void) - { - return channels; - } - - bool GstOnnxClient::isFixedInputImageSize (void) - { - return fixedInputImageSize; - } - - void GstOnnxClient::setInputImageFormat (GstMlInputImageFormat format) - { - inputImageFormat = format; - } - - GstMlInputImageFormat GstOnnxClient::getInputImageFormat (void) - { - return inputImageFormat; - } - - void GstOnnxClient::setInputImageDatatype(GstTensorDataType datatype) - { - inputDatatype = datatype; - switch (inputDatatype) { - case GST_TENSOR_DATA_TYPE_UINT8: - inputDatatypeSize = sizeof (uint8_t); - break; - case GST_TENSOR_DATA_TYPE_UINT16: - inputDatatypeSize = sizeof (uint16_t); - break; - case GST_TENSOR_DATA_TYPE_UINT32: -
inputDatatypeSize = sizeof (uint32_t); - break; - case GST_TENSOR_DATA_TYPE_INT32: - inputDatatypeSize = sizeof (int32_t); - break; - case GST_TENSOR_DATA_TYPE_FLOAT16: - inputDatatypeSize = 2; - break; - case GST_TENSOR_DATA_TYPE_FLOAT32: - inputDatatypeSize = sizeof (float); - break; - default: - g_error ("Data type %d not handled", inputDatatype); - break; - }; - } - - void GstOnnxClient::setInputImageOffset (float offset) - { - inputTensorOffset = offset; - } - - float GstOnnxClient::getInputImageOffset () - { - return inputTensorOffset; - } - - void GstOnnxClient::setInputImageScale (float scale) - { - inputTensorScale = scale; - } - - float GstOnnxClient::getInputImageScale () - { - return inputTensorScale; - } - - GstTensorDataType GstOnnxClient::getInputImageDatatype(void) - { - return inputDatatype; - } - - std::vector < const char *>GstOnnxClient::genOutputNamesRaw (void) - { - if (!outputNames.empty () && outputNamesRaw.size () != outputNames.size ()) { - outputNamesRaw.resize (outputNames.size ()); - for (size_t i = 0; i < outputNamesRaw.size (); i++) - outputNamesRaw[i] = outputNames[i].get (); - } - - return outputNamesRaw; - } - - bool GstOnnxClient::hasSession (void) - { - return session != nullptr; - } - - bool GstOnnxClient::createSession (std::string modelFile, - GstOnnxOptimizationLevel optim, GstOnnxExecutionProvider provider) - { - if (session) - return true; - - GraphOptimizationLevel onnx_optim; - switch (optim) { - case GST_ONNX_OPTIMIZATION_LEVEL_DISABLE_ALL: - onnx_optim = GraphOptimizationLevel::ORT_DISABLE_ALL; - break; - case GST_ONNX_OPTIMIZATION_LEVEL_ENABLE_BASIC: - onnx_optim = GraphOptimizationLevel::ORT_ENABLE_BASIC; - break; - case GST_ONNX_OPTIMIZATION_LEVEL_ENABLE_EXTENDED: - onnx_optim = GraphOptimizationLevel::ORT_ENABLE_EXTENDED; - break; - case GST_ONNX_OPTIMIZATION_LEVEL_ENABLE_ALL: - onnx_optim = GraphOptimizationLevel::ORT_ENABLE_ALL; - break; - default: - onnx_optim = GraphOptimizationLevel::ORT_ENABLE_EXTENDED; - break; -
}; - - try { - Ort::SessionOptions sessionOptions; - const auto & api = Ort::GetApi (); - // for debugging - //sessionOptions.SetIntraOpNumThreads (1); - sessionOptions.SetGraphOptimizationLevel (onnx_optim); - m_provider = provider; - switch (m_provider) { - case GST_ONNX_EXECUTION_PROVIDER_CUDA: - try { - OrtCUDAProviderOptionsV2 *cuda_options = nullptr; - Ort::ThrowOnError (api.CreateCUDAProviderOptions (&cuda_options)); - std::unique_ptr < OrtCUDAProviderOptionsV2, - decltype (api.ReleaseCUDAProviderOptions) > - rel_cuda_options (cuda_options, api.ReleaseCUDAProviderOptions); - Ort::ThrowOnError (api.SessionOptionsAppendExecutionProvider_CUDA_V2 - (static_cast < OrtSessionOptions * >(sessionOptions), - rel_cuda_options.get ())); - } - catch (Ort::Exception & ortex) { - GST_WARNING - ("Failed to create CUDA provider - dropping back to CPU"); - Ort::ThrowOnError (OrtSessionOptionsAppendExecutionProvider_CPU - (sessionOptions, 1)); - } - break; - default: - Ort::ThrowOnError (OrtSessionOptionsAppendExecutionProvider_CPU - (sessionOptions, 1)); - break; - }; - env = - Ort::Env (OrtLoggingLevel::ORT_LOGGING_LEVEL_WARNING, - "GstOnnxNamespace"); - env.DisableTelemetryEvents(); - session = new Ort::Session (env, modelFile.c_str (), sessionOptions); - auto inputTypeInfo = session->GetInputTypeInfo (0); - std::vector < int64_t > inputDims = - inputTypeInfo.GetTensorTypeAndShapeInfo ().GetShape (); - if (inputImageFormat == GST_ML_INPUT_IMAGE_FORMAT_HWC) { - height = inputDims[1]; - width = inputDims[2]; - channels = inputDims[3]; - } else { - channels = inputDims[1]; - height = inputDims[2]; - width = inputDims[3]; - } - - fixedInputImageSize = width > 0 && height > 0; - GST_DEBUG_OBJECT (debug_parent, "Number of Output Nodes: %d", - (gint) session->GetOutputCount ()); - - ONNXTensorElementDataType elementType = - inputTypeInfo.GetTensorTypeAndShapeInfo ().GetElementType (); - - switch (elementType) { - case ONNX_TENSOR_ELEMENT_DATA_TYPE_UINT8: -
setInputImageDatatype(GST_TENSOR_DATA_TYPE_UINT8); - break; - case ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT: - setInputImageDatatype(GST_TENSOR_DATA_TYPE_FLOAT32); - break; - default: - GST_ERROR_OBJECT (debug_parent, - "Only input tensors of type int8 and float are supported"); - return false; - } - - Ort::AllocatorWithDefaultOptions allocator; - auto input_name = session->GetInputNameAllocated (0, allocator); - GST_DEBUG_OBJECT (debug_parent, "Input name: %s", input_name.get ()); - - for (size_t i = 0; i < session->GetOutputCount (); ++i) { - auto output_name = session->GetOutputNameAllocated (i, allocator); - GST_DEBUG_OBJECT (debug_parent, "Output name %lu:%s", i, output_name.get ()); - outputNames.push_back (std::move (output_name)); - } - genOutputNamesRaw (); - - // look up tensor ids - auto metaData = session->GetModelMetadata (); - OrtAllocator *ortAllocator; - auto status = - Ort::GetApi ().GetAllocatorWithDefaultOptions (&ortAllocator); - if (status) { - // Handle the error case - const char *errorString = Ort::GetApi ().GetErrorMessage (status); - GST_WARNING_OBJECT (debug_parent, "Failed to get allocator: %s", errorString); - - // Clean up the error status - Ort::GetApi ().ReleaseStatus (status); - - return false; - } - for (auto & name:outputNamesRaw) { - Ort::AllocatedStringPtr res = - metaData.LookupCustomMetadataMapAllocated (name, ortAllocator); - if (res) - { - GQuark quark = g_quark_from_string (res.get ()); - outputIds.push_back (quark); - } else if (g_str_has_prefix (name, "detection_scores")) { - GQuark quark = g_quark_from_static_string (GST_MODEL_OBJECT_DETECTOR_SCORES); - GST_INFO_OBJECT(debug_parent, - "No custom metadata for key '%s', assuming %s", - name, GST_MODEL_OBJECT_DETECTOR_SCORES); - outputIds.push_back (quark); - } else if (g_str_has_prefix(name, "detection_boxes")) { - GQuark quark = g_quark_from_static_string (GST_MODEL_OBJECT_DETECTOR_BOXES); - GST_INFO_OBJECT(debug_parent, - "No custom metadata for key '%s', assuming %s", - name,
GST_MODEL_OBJECT_DETECTOR_BOXES); - outputIds.push_back (quark); - } else if (g_str_has_prefix(name, "detection_classes")) { - GQuark quark = g_quark_from_static_string (GST_MODEL_OBJECT_DETECTOR_CLASSES); - GST_INFO_OBJECT(debug_parent, - "No custom metadata for key '%s', assuming %s", - name, GST_MODEL_OBJECT_DETECTOR_CLASSES); - outputIds.push_back (quark); - } else if (g_str_has_prefix(name, "num_detections")) { - GQuark quark = g_quark_from_static_string (GST_MODEL_OBJECT_DETECTOR_NUM_DETECTIONS); - GST_INFO_OBJECT(debug_parent, - "No custom metadata for key '%s', assuming %s", - name, GST_MODEL_OBJECT_DETECTOR_NUM_DETECTIONS); - outputIds.push_back (quark); - } else { - GST_ERROR_OBJECT (debug_parent, "Failed to look up id for key %s", name); - return false; - } - } - } - catch (Ort::Exception & ortex) { - GST_ERROR_OBJECT (debug_parent, "%s", ortex.what ()); - return false; - } - - return true; - } - - void GstOnnxClient::parseDimensions (GstVideoInfo vinfo) - { - int32_t newWidth = fixedInputImageSize ? width : vinfo.width; - int32_t newHeight = fixedInputImageSize ? 
height : vinfo.height; - - if (!fixedInputImageSize) { - GST_WARNING_OBJECT (debug_parent, "Allocating before knowing model input size"); - } - - if (!dest || width * height < newWidth * newHeight) { - delete [] dest; - dest = new uint8_t[newWidth * newHeight * channels * inputDatatypeSize]; - } - width = newWidth; - height = newHeight; - } - - // copy tensor data to a GstTensorMeta - GstTensorMeta * - GstOnnxClient::copy_tensors_to_meta (std::vector < Ort::Value > - &outputs, GstBuffer * buffer) - { - size_t num_tensors = outputNamesRaw.size (); - GstTensorMeta *tmeta = gst_buffer_add_tensor_meta (buffer); - tmeta->num_tensors = num_tensors; - tmeta->tensors = g_new (GstTensor *, num_tensors); - bool hasIds = outputIds.size () == num_tensors; - for (size_t i = 0; i < num_tensors; i++) { - Ort::Value outputTensor = std::move (outputs[i]); - - ONNXTensorElementDataType tensorType = - outputTensor.GetTensorTypeAndShapeInfo ().GetElementType (); - - auto tensorShape = outputTensor.GetTensorTypeAndShapeInfo ().GetShape (); - GstTensor *tensor = gst_tensor_alloc (tensorShape.size ()); - tmeta->tensors[i] = tensor; - - if (hasIds) - tensor->id = outputIds[i]; - else - tensor->id = 0; - tensor->num_dims = tensorShape.size (); - - for (size_t j = 0; j < tensorShape.size (); ++j) - tensor->dims[j] = tensorShape[j]; - - size_t numElements = - outputTensor.GetTensorTypeAndShapeInfo ().GetElementCount (); - - if (tensorType == ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT) { - size_t buffer_size = 0; - - buffer_size = numElements * sizeof (float); - tensor->data = gst_buffer_new_allocate (NULL, buffer_size, NULL); - gst_buffer_fill (tensor->data, 0, outputTensor.GetTensorData < float >(), - buffer_size); - tensor->data_type = GST_TENSOR_DATA_TYPE_FLOAT32; - } else if (tensorType == ONNX_TENSOR_ELEMENT_DATA_TYPE_INT32) { - size_t buffer_size = 0; - - buffer_size = numElements * sizeof (int); - tensor->data = gst_buffer_new_allocate (NULL, buffer_size, NULL); - gst_buffer_fill (tensor->data, 0, 
outputTensor.GetTensorData < int >(), - buffer_size); - tensor->data_type = GST_TENSOR_DATA_TYPE_INT32; - } else { - GST_ERROR_OBJECT (debug_parent, - "Output tensor is not FLOAT32 or INT32, not supported"); - gst_buffer_remove_meta (buffer, (GstMeta *) tmeta); - return NULL; - } - } - - return tmeta; - - } - - std::vector < Ort::Value > GstOnnxClient::run (uint8_t * img_data, - GstVideoInfo vinfo) - { - std::vector < Ort::Value > modelOutput; - doRun (img_data, vinfo, modelOutput); - - return modelOutput; - } - - bool GstOnnxClient::doRun (uint8_t * img_data, GstVideoInfo vinfo, - std::vector < Ort::Value > &modelOutput) - { - if (!img_data) - return false; - - Ort::AllocatorWithDefaultOptions allocator; - auto inputName = session->GetInputNameAllocated (0, allocator); - auto inputTypeInfo = session->GetInputTypeInfo (0); - std::vector < int64_t > inputDims = - inputTypeInfo.GetTensorTypeAndShapeInfo ().GetShape (); - inputDims[0] = 1; - if (inputImageFormat == GST_ML_INPUT_IMAGE_FORMAT_HWC) { - inputDims[1] = height; - inputDims[2] = width; - } else { - inputDims[2] = height; - inputDims[3] = width; - } - - std::ostringstream buffer; - buffer << inputDims; - GST_DEBUG_OBJECT (debug_parent, "Input dimensions: %s", buffer.str ().c_str ()); - - // copy video frame - uint8_t *srcPtr[3] = { img_data, img_data + 1, img_data + 2 }; - uint32_t srcSamplesPerPixel = 3; - switch (vinfo.finfo->format) { - case GST_VIDEO_FORMAT_RGBA: - srcSamplesPerPixel = 4; - break; - case GST_VIDEO_FORMAT_BGRA: - srcSamplesPerPixel = 4; - srcPtr[0] = img_data + 2; - srcPtr[1] = img_data + 1; - srcPtr[2] = img_data + 0; - break; - case GST_VIDEO_FORMAT_ARGB: - srcSamplesPerPixel = 4; - srcPtr[0] = img_data + 1; - srcPtr[1] = img_data + 2; - srcPtr[2] = img_data + 3; - break; - case GST_VIDEO_FORMAT_ABGR: - srcSamplesPerPixel = 4; - srcPtr[0] = img_data + 3; - srcPtr[1] = img_data + 2; - srcPtr[2] = img_data + 1; - break; - case GST_VIDEO_FORMAT_BGR: - srcPtr[0] = img_data + 2; - srcPtr[1] = img_data + 1; - srcPtr[2] = 
img_data + 0; - break; - default: - break; - } - uint32_t stride = vinfo.stride[0]; - const size_t inputTensorSize = width * height * channels * inputDatatypeSize; - auto memoryInfo = - Ort::MemoryInfo::CreateCpu (OrtAllocatorType::OrtArenaAllocator, - OrtMemType::OrtMemTypeDefault); - - std::vector < Ort::Value > inputTensors; - - switch (inputDatatype) { - case GST_TENSOR_DATA_TYPE_UINT8: - uint8_t *src_data; - if (inputTensorOffset == 0.0 && inputTensorScale == 1.0) { - src_data = img_data; - } else { - convert_image_remove_alpha ( - dest, inputImageFormat, srcPtr, srcSamplesPerPixel, stride, - (uint8_t)inputTensorOffset, (uint8_t)inputTensorScale); - src_data = dest; - } - - inputTensors.push_back (Ort::Value::CreateTensor < uint8_t > ( - memoryInfo, src_data, inputTensorSize, inputDims.data (), - inputDims.size ())); - break; - case GST_TENSOR_DATA_TYPE_FLOAT32: { - convert_image_remove_alpha ((float*)dest, inputImageFormat , srcPtr, - srcSamplesPerPixel, stride, (float)inputTensorOffset, (float) - inputTensorScale); - inputTensors.push_back (Ort::Value::CreateTensor < float > ( - memoryInfo, (float*)dest, inputTensorSize, inputDims.data (), - inputDims.size ())); - } - break; - default: - break; - } - - std::vector < const char *>inputNames { inputName.get () }; - modelOutput = session->Run (Ort::RunOptions {nullptr}, - inputNames.data (), - inputTensors.data (), 1, outputNamesRaw.data (), - outputNamesRaw.size ()); - - return true; - } - - template < typename T> - void GstOnnxClient::convert_image_remove_alpha (T *dst, - GstMlInputImageFormat hwc, uint8_t **srcPtr, uint32_t srcSamplesPerPixel, - uint32_t stride, T offset, T div) { - size_t destIndex = 0; - T tmp; - - if (inputImageFormat == GST_ML_INPUT_IMAGE_FORMAT_HWC) { - for (int32_t j = 0; j < height; ++j) { - for (int32_t i = 0; i < width; ++i) { - for (int32_t k = 0; k < channels; ++k) { - tmp = *srcPtr[k]; - tmp += offset; - dst[destIndex++] = (T)(tmp / div); - srcPtr[k] += srcSamplesPerPixel; - } - } - // 
correct for stride - for (uint32_t k = 0; k < 3; ++k) - srcPtr[k] += stride - srcSamplesPerPixel * width; - } - } else { - size_t frameSize = width * height; - T *destPtr[3] = { dst, dst + frameSize, dst + 2 * frameSize }; - for (int32_t j = 0; j < height; ++j) { - for (int32_t i = 0; i < width; ++i) { - for (int32_t k = 0; k < channels; ++k) { - tmp = *srcPtr[k]; - tmp += offset; - destPtr[k][destIndex] = (T)(tmp / div); - srcPtr[k] += srcSamplesPerPixel; - } - destIndex++; - } - // correct for stride - for (uint32_t k = 0; k < 3; ++k) - srcPtr[k] += stride - srcSamplesPerPixel * width; - } - } - } -}
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/onnx/gstonnxclient.h
Deleted
@@ -1,114 +0,0 @@ -/* - * GStreamer gstreamer-onnxclient - * Copyright (C) 2021 Collabora Ltd - * - * gstonnxclient.h - * - * This library is free software; you can redistribute it and/or - * modify it under the terms of the GNU Library General Public - * License as published by the Free Software Foundation; either - * version 2 of the License, or (at your option) any later version. - * - * This library is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Library General Public License for more details. - * - * You should have received a copy of the GNU Library General Public - * License along with this library; if not, write to the - * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, - * Boston, MA 02110-1301, USA. - */ -#ifndef __GST_ONNX_CLIENT_H__ -#define __GST_ONNX_CLIENT_H__ - -#include <gst/gst.h> -#include <gst/analytics/analytics.h> -#include <onnxruntime_cxx_api.h> -#include <gst/video/video.h> -#include "gstml.h" - -GST_DEBUG_CATEGORY_EXTERN (onnx_inference_debug); - -/** - * GstOnnxOptimizationLevel: - * - * Since: 1.20 - */ - -typedef enum -{ - GST_ONNX_OPTIMIZATION_LEVEL_DISABLE_ALL, - GST_ONNX_OPTIMIZATION_LEVEL_ENABLE_BASIC, - GST_ONNX_OPTIMIZATION_LEVEL_ENABLE_EXTENDED, - GST_ONNX_OPTIMIZATION_LEVEL_ENABLE_ALL, -} GstOnnxOptimizationLevel; - -/** - * GstOnnxExecutionProvider: - * - * Since: 1.20 - */ - -typedef enum -{ - GST_ONNX_EXECUTION_PROVIDER_CPU, - GST_ONNX_EXECUTION_PROVIDER_CUDA, -} GstOnnxExecutionProvider; - - -namespace GstOnnxNamespace { - - class GstOnnxClient { - public: - GstOnnxClient(GstElement *debug_parent); - ~GstOnnxClient(void); - bool createSession(std::string modelFile, GstOnnxOptimizationLevel optim, - GstOnnxExecutionProvider provider); - bool hasSession(void); - void setInputImageFormat(GstMlInputImageFormat format); - GstMlInputImageFormat getInputImageFormat(void); - 
GstTensorDataType getInputImageDatatype(void); - void setInputImageOffset (float offset); - float getInputImageOffset (); - void setInputImageScale (float offset); - float getInputImageScale (); - std::vector < Ort::Value > run (uint8_t * img_data, GstVideoInfo vinfo); - std::vector < const char *> genOutputNamesRaw(void); - bool isFixedInputImageSize(void); - int32_t getWidth(void); - int32_t getHeight(void); - int32_t getChannels (void); - GstTensorMeta *copy_tensors_to_meta (std::vector<Ort::Value> &outputs, - GstBuffer *buffer); - void parseDimensions(GstVideoInfo vinfo); - private: - - GstElement *debug_parent; - void setInputImageDatatype (GstTensorDataType datatype); - template < typename T> - void convert_image_remove_alpha (T *dest, GstMlInputImageFormat hwc, - uint8_t **srcPtr, uint32_t srcSamplesPerPixel, uint32_t stride, T offset, T div); - bool doRun(uint8_t * img_data, GstVideoInfo vinfo, std::vector < Ort::Value > &modelOutput); - Ort::Env env; - Ort::Session * session; - int32_t width; - int32_t height; - int32_t channels; - uint8_t *dest; - GstOnnxExecutionProvider m_provider; - std::vector < Ort::Value > modelOutput; - std::vector < std::string > labels; - std::vector < const char *> outputNamesRaw; - std::vector < Ort::AllocatedStringPtr > outputNames; - std::vector < GQuark > outputIds; - GstMlInputImageFormat inputImageFormat; - GstTensorDataType inputDatatype; - size_t inputDatatypeSize; - bool fixedInputImageSize; - float inputTensorOffset; - float inputTensorScale; - }; -} - -#endif /* __GST_ONNX_CLIENT_H__ */
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/onnx/gstonnxinference.cpp
Deleted
@@ -1,608 +0,0 @@ -/* - * GStreamer gstreamer-onnxinference - * Copyright (C) 2023 Collabora Ltd. - * - * gstonnxinference.cpp - * - * This library is free software; you can redistribute it and/or - * modify it under the terms of the GNU Library General Public - * License as published by the Free Software Foundation; either - * version 2 of the License, or (at your option) any later version. - * - * This library is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Library General Public License for more details. - * - * You should have received a copy of the GNU Library General Public - * License along with this library; if not, write to the - * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, - * Boston, MA 02110-1301, USA. - */ - -/** - * SECTION:element-onnxinference - * @short_description: Run ONNX inference model on video buffers - * - * This element can apply an ONNX model to video buffers. It attaches - * the tensor output to the buffer as a @ref GstTensorMeta. - * - * To install ONNX on your system, follow the instructions in the - * README.md in with this plugin. - * - * ## Example launch command: - * - * Test image file, model file (SSD) and label file can be found here : - * https://gitlab.collabora.com/gstreamer/onnx-models - * - * GST_DEBUG=ssdobjectdetector:5 \ - * gst-launch-1.0 filesrc location=onnx-models/images/bus.jpg ! \ - * jpegdec ! videoconvert ! onnxinference execution-provider=cpu model-file=onnx-models/models/ssd_mobilenet_v1_coco.onnx ! \ - * ssdobjectdetector label-file=onnx-models/labels/COCO_classes.txt ! videoconvert ! imagefreeze ! autovideosink - * - * - * Note: in order for downstream tensor decoders to correctly parse the tensor - * data in the GstTensorMeta, meta data must be attached to the ONNX model - * assigning a unique string id to each output layer. 
These unique string ids - * and corresponding GQuark ids are currently stored in the tensor decoder's - * header file, in this case gstssdobjectdetector.h. If the meta data is absent, - * the pipeline will fail. - * - * As a convenience, there is a python script - * currently stored at - * https://gitlab.collabora.com/gstreamer/onnx-models/-/blob/master/scripts/modify_onnx_metadata.py - * to enable users to easily add and remove meta data from json files. It can also dump - * the names of all output layers, which can then be used to craft the json meta data file. - * - * Since: 1.20 - */ -#ifdef HAVE_CONFIG_H -#include "config.h" -#endif - -#include <gst/gst.h> -#include "gstonnxinference.h" -#include "gstonnxclient.h" - - -/* - * GstOnnxInference: - * - * @model_file model file - * @optimization_level: ONNX session optimization level - * @execution_provider: ONNX execution provider - * @onnx_client opaque pointer to ONNX client - * @onnx_disabled true if inference is disabled - * @video_info @ref GstVideoInfo of sink caps - */ -struct _GstOnnxInference -{ - GstBaseTransform basetransform; - gchar *model_file; - GstOnnxOptimizationLevel optimization_level; - GstOnnxExecutionProvider execution_provider; - gpointer onnx_client; - gboolean onnx_disabled; - GstVideoInfo video_info; -}; - -GST_DEBUG_CATEGORY (onnx_inference_debug); - -#define GST_CAT_DEFAULT onnx_inference_debug -#define GST_ONNX_CLIENT_MEMBER( self ) ((GstOnnxNamespace::GstOnnxClient *) (self->onnx_client)) -GST_ELEMENT_REGISTER_DEFINE (onnx_inference, "onnxinference", - GST_RANK_PRIMARY, GST_TYPE_ONNX_INFERENCE); - -/* GstOnnxInference properties */ -enum -{ - PROP_0, - PROP_MODEL_FILE, - PROP_INPUT_IMAGE_FORMAT, - PROP_OPTIMIZATION_LEVEL, - PROP_EXECUTION_PROVIDER, - PROP_INPUT_OFFSET, - PROP_INPUT_SCALE -}; - -#define GST_ONNX_INFERENCE_DEFAULT_EXECUTION_PROVIDER GST_ONNX_EXECUTION_PROVIDER_CPU -#define GST_ONNX_INFERENCE_DEFAULT_OPTIMIZATION_LEVEL GST_ONNX_OPTIMIZATION_LEVEL_ENABLE_EXTENDED - 
-static GstStaticPadTemplate gst_onnx_inference_src_template = -GST_STATIC_PAD_TEMPLATE ("src", - GST_PAD_SRC, - GST_PAD_ALWAYS, - GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE ("{ RGB,RGBA,BGR,BGRA }")) - ); - -static GstStaticPadTemplate gst_onnx_inference_sink_template = -GST_STATIC_PAD_TEMPLATE ("sink", - GST_PAD_SINK, - GST_PAD_ALWAYS, - GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE ("{ RGB,RGBA,BGR,BGRA }")) - ); - -static void gst_onnx_inference_set_property (GObject * object, - guint prop_id, const GValue * value, GParamSpec * pspec); -static void gst_onnx_inference_get_property (GObject * object, - guint prop_id, GValue * value, GParamSpec * pspec); -static void gst_onnx_inference_finalize (GObject * object); -static GstFlowReturn gst_onnx_inference_transform_ip (GstBaseTransform * - trans, GstBuffer * buf); -static gboolean gst_onnx_inference_process (GstBaseTransform * trans, - GstBuffer * buf); -static gboolean gst_onnx_inference_create_session (GstBaseTransform * trans); -static GstCaps *gst_onnx_inference_transform_caps (GstBaseTransform * - trans, GstPadDirection direction, GstCaps * caps, GstCaps * filter_caps); -static gboolean -gst_onnx_inference_set_caps (GstBaseTransform * trans, GstCaps * incaps, - GstCaps * outcaps); - -G_DEFINE_TYPE (GstOnnxInference, gst_onnx_inference, GST_TYPE_BASE_TRANSFORM); - -GType gst_onnx_optimization_level_get_type (void); -#define GST_TYPE_ONNX_OPTIMIZATION_LEVEL (gst_onnx_optimization_level_get_type ()) - -GType gst_onnx_execution_provider_get_type (void); -#define GST_TYPE_ONNX_EXECUTION_PROVIDER (gst_onnx_execution_provider_get_type ()) - -GType gst_ml_model_input_image_format_get_type (void); -#define GST_TYPE_ML_MODEL_INPUT_IMAGE_FORMAT (gst_ml_model_input_image_format_get_type ()) - -GType -gst_onnx_optimization_level_get_type (void) -{ - static GType onnx_optimization_type = 0; - - if (g_once_init_enter (&onnx_optimization_type)) { - static GEnumValue optimization_level_types[] = { - {GST_ONNX_OPTIMIZATION_LEVEL_DISABLE_ALL, 
"Disable all optimization", - "disable-all"}, - {GST_ONNX_OPTIMIZATION_LEVEL_ENABLE_BASIC, - "Enable basic optimizations (redundant node removals)", - "enable-basic"}, - {GST_ONNX_OPTIMIZATION_LEVEL_ENABLE_EXTENDED, - "Enable extended optimizations (redundant node removals + node fusions)", - "enable-extended"}, - {GST_ONNX_OPTIMIZATION_LEVEL_ENABLE_ALL, - "Enable all possible optimizations", "enable-all"}, - {0, NULL, NULL}, - }; - - GType temp = g_enum_register_static ("GstOnnxOptimizationLevel", - optimization_level_types); - - g_once_init_leave (&onnx_optimization_type, temp); - } - - return onnx_optimization_type; -} - -GType -gst_onnx_execution_provider_get_type (void) -{ - static GType onnx_execution_type = 0; - - if (g_once_init_enter (&onnx_execution_type)) { - static GEnumValue execution_provider_types[] = { - {GST_ONNX_EXECUTION_PROVIDER_CPU, "CPU execution provider", - "cpu"}, - {GST_ONNX_EXECUTION_PROVIDER_CUDA, - "CUDA execution provider", - "cuda"}, - {0, NULL, NULL}, - }; - - GType temp = g_enum_register_static ("GstOnnxExecutionProvider", - execution_provider_types); - - g_once_init_leave (&onnx_execution_type, temp); - } - - return onnx_execution_type; -} - -GType -gst_ml_model_input_image_format_get_type (void) -{ - static GType ml_model_input_image_format = 0; - - if (g_once_init_enter (&ml_model_input_image_format)) { - static GEnumValue ml_model_input_image_format_types[] = { - {GST_ML_INPUT_IMAGE_FORMAT_HWC, - "Height Width Channel (HWC) a.k.a. interleaved image data format", - "hwc"}, - {GST_ML_INPUT_IMAGE_FORMAT_CHW, - "Channel Height Width (CHW) a.k.a. 
planar image data format", - "chw"}, - {0, NULL, NULL}, - }; - - GType temp = g_enum_register_static ("GstMlInputImageFormat", - ml_model_input_image_format_types); - - g_once_init_leave (&ml_model_input_image_format, temp); - } - - return ml_model_input_image_format; -} - -static void -gst_onnx_inference_class_init (GstOnnxInferenceClass * klass) -{ - GObjectClass *gobject_class = (GObjectClass *) klass; - GstElementClass *element_class = (GstElementClass *) klass; - GstBaseTransformClass *basetransform_class = (GstBaseTransformClass *) klass; - - GST_DEBUG_CATEGORY_INIT (onnx_inference_debug, "onnxinference", - 0, "onnx_inference"); - gobject_class->set_property = gst_onnx_inference_set_property; - gobject_class->get_property = gst_onnx_inference_get_property; - gobject_class->finalize = gst_onnx_inference_finalize; - - /** - * GstOnnxInference:model-file - * - * ONNX model file - * - * Since: 1.24 - */ - g_object_class_install_property (G_OBJECT_CLASS (klass), PROP_MODEL_FILE, - g_param_spec_string ("model-file", - "ONNX model file", "ONNX model file", NULL, (GParamFlags) - (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); - - /** - * GstOnnxInference:input-image-format - * - * Model input image format - * - * Since: 1.24 - */ - g_object_class_install_property (G_OBJECT_CLASS (klass), - PROP_INPUT_IMAGE_FORMAT, - g_param_spec_enum ("input-image-format", - "Input image format", - "Input image format", - GST_TYPE_ML_MODEL_INPUT_IMAGE_FORMAT, - GST_ML_INPUT_IMAGE_FORMAT_HWC, (GParamFlags) - (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); - - /** - * GstOnnxInference:optimization-level - * - * ONNX optimization level - * - * Since: 1.24 - */ - g_object_class_install_property (G_OBJECT_CLASS (klass), - PROP_OPTIMIZATION_LEVEL, - g_param_spec_enum ("optimization-level", - "Optimization level", - "ONNX optimization level", - GST_TYPE_ONNX_OPTIMIZATION_LEVEL, - GST_ONNX_OPTIMIZATION_LEVEL_ENABLE_EXTENDED, (GParamFlags) - (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); - - 
/** - * GstOnnxInference:execution-provider - * - * ONNX execution provider - * - * Since: 1.24 - */ - g_object_class_install_property (G_OBJECT_CLASS (klass), - PROP_EXECUTION_PROVIDER, - g_param_spec_enum ("execution-provider", - "Execution provider", - "ONNX execution provider", - GST_TYPE_ONNX_EXECUTION_PROVIDER, - GST_ONNX_EXECUTION_PROVIDER_CPU, (GParamFlags) - (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); - - g_object_class_install_property (G_OBJECT_CLASS (klass), - PROP_INPUT_OFFSET, - g_param_spec_float ("input-tensor-offset", - "Input tensor offset", - "offset each tensor value by this value", - -G_MAXFLOAT, G_MAXFLOAT, 0.0, - (GParamFlags)(G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); - - g_object_class_install_property (G_OBJECT_CLASS (klass), - PROP_INPUT_SCALE, - g_param_spec_float ("input-tensor-scale", - "Input tensor scale", - "Divide each tensor value by this value", - G_MINFLOAT, G_MAXFLOAT, 1.0, - (GParamFlags)(G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); - - - gst_element_class_set_static_metadata (element_class, "onnxinference", - "Filter/Effect/Video", - "Apply neural network to video frames and create tensor output", - "Aaron Boxer <aaron.boxer@collabora.com>"); - gst_element_class_add_pad_template (element_class, - gst_static_pad_template_get (&gst_onnx_inference_sink_template)); - gst_element_class_add_pad_template (element_class, - gst_static_pad_template_get (&gst_onnx_inference_src_template)); - basetransform_class->transform_ip = - GST_DEBUG_FUNCPTR (gst_onnx_inference_transform_ip); - basetransform_class->transform_caps = - GST_DEBUG_FUNCPTR (gst_onnx_inference_transform_caps); - basetransform_class->set_caps = - GST_DEBUG_FUNCPTR (gst_onnx_inference_set_caps); - - gst_type_mark_as_plugin_api (GST_TYPE_ONNX_OPTIMIZATION_LEVEL, - (GstPluginAPIFlags) 0); - gst_type_mark_as_plugin_api (GST_TYPE_ONNX_EXECUTION_PROVIDER, - (GstPluginAPIFlags) 0); - gst_type_mark_as_plugin_api (GST_TYPE_ML_MODEL_INPUT_IMAGE_FORMAT, - 
(GstPluginAPIFlags) 0); -} - -static void -gst_onnx_inference_init (GstOnnxInference * self) -{ - self->onnx_client = new GstOnnxNamespace::GstOnnxClient (GST_ELEMENT(self)); - self->onnx_disabled = TRUE; -} - -static void -gst_onnx_inference_finalize (GObject * object) -{ - GstOnnxInference *self = GST_ONNX_INFERENCE (object); - - g_free (self->model_file); - delete GST_ONNX_CLIENT_MEMBER (self); - G_OBJECT_CLASS (gst_onnx_inference_parent_class)->finalize (object); -} - -static void -gst_onnx_inference_set_property (GObject * object, guint prop_id, - const GValue * value, GParamSpec * pspec) -{ - GstOnnxInference *self = GST_ONNX_INFERENCE (object); - const gchar *filename; - auto onnxClient = GST_ONNX_CLIENT_MEMBER (self); - - switch (prop_id) { - case PROP_MODEL_FILE: - filename = g_value_get_string (value); - if (filename - && g_file_test (filename, - (GFileTest) (G_FILE_TEST_EXISTS | G_FILE_TEST_IS_REGULAR))) { - if (self->model_file) - g_free (self->model_file); - self->model_file = g_strdup (filename); - self->onnx_disabled = FALSE; - } else { - GST_WARNING_OBJECT (self, "Model file '%s' not found!", filename); - } - break; - case PROP_OPTIMIZATION_LEVEL: - self->optimization_level = - (GstOnnxOptimizationLevel) g_value_get_enum (value); - break; - case PROP_EXECUTION_PROVIDER: - self->execution_provider = - (GstOnnxExecutionProvider) g_value_get_enum (value); - break; - case PROP_INPUT_IMAGE_FORMAT: - onnxClient->setInputImageFormat ((GstMlInputImageFormat) - g_value_get_enum (value)); - break; - case PROP_INPUT_OFFSET: - onnxClient->setInputImageOffset (g_value_get_float (value)); - break; - case PROP_INPUT_SCALE: - onnxClient->setInputImageScale (g_value_get_float (value)); - break; - default: - G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); - break; - } -} - -static void -gst_onnx_inference_get_property (GObject * object, guint prop_id, - GValue * value, GParamSpec * pspec) -{ - GstOnnxInference *self = GST_ONNX_INFERENCE (object); - auto 
onnxClient = GST_ONNX_CLIENT_MEMBER (self); - - switch (prop_id) { - case PROP_MODEL_FILE: - g_value_set_string (value, self->model_file); - break; - case PROP_OPTIMIZATION_LEVEL: - g_value_set_enum (value, self->optimization_level); - break; - case PROP_EXECUTION_PROVIDER: - g_value_set_enum (value, self->execution_provider); - break; - case PROP_INPUT_IMAGE_FORMAT: - g_value_set_enum (value, onnxClient->getInputImageFormat ()); - break; - case PROP_INPUT_OFFSET: - g_value_set_float (value, onnxClient->getInputImageOffset ()); - break; - case PROP_INPUT_SCALE: - g_value_set_float (value, onnxClient->getInputImageScale ()); - break; - default: - G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); - break; - } -} - -static gboolean -gst_onnx_inference_create_session (GstBaseTransform * trans) -{ - GstOnnxInference *self = GST_ONNX_INFERENCE (trans); - auto onnxClient = GST_ONNX_CLIENT_MEMBER (self); - - GST_OBJECT_LOCK (self); - if (self->onnx_disabled) { - GST_OBJECT_UNLOCK (self); - - return FALSE; - } - if (onnxClient->hasSession ()) { - GST_OBJECT_UNLOCK (self); - - return TRUE; - } - if (self->model_file) { - gboolean ret = - GST_ONNX_CLIENT_MEMBER (self)->createSession (self->model_file, - self->optimization_level, - self->execution_provider); - if (!ret) { - GST_ERROR_OBJECT (self, - "Unable to create ONNX session. 
Model is disabled."); - self->onnx_disabled = TRUE; - } - } else { - self->onnx_disabled = TRUE; - GST_ELEMENT_ERROR (self, STREAM, FAILED, (NULL), ("Model file not found")); - } - GST_OBJECT_UNLOCK (self); - if (self->onnx_disabled) { - gst_base_transform_set_passthrough (GST_BASE_TRANSFORM (self), TRUE); - } - - return TRUE; -} - -static GstCaps * -gst_onnx_inference_transform_caps (GstBaseTransform * - trans, GstPadDirection direction, GstCaps * caps, GstCaps * filter_caps) -{ - GstOnnxInference *self = GST_ONNX_INFERENCE (trans); - auto onnxClient = GST_ONNX_CLIENT_MEMBER (self); - GstCaps *other_caps; - GstCaps *restrictions; - - if (!gst_onnx_inference_create_session (trans)) - return NULL; - GST_LOG_OBJECT (self, "transforming caps %" GST_PTR_FORMAT, caps); - - if (gst_base_transform_is_passthrough (trans)) - return gst_caps_ref (caps); - - restrictions = gst_caps_new_empty_simple ("video/x-raw"); - if (onnxClient->isFixedInputImageSize ()) - gst_caps_set_simple (restrictions, "width", G_TYPE_INT, - onnxClient->getWidth (), "height", G_TYPE_INT, - onnxClient->getHeight (), NULL); - - if (onnxClient->getInputImageDatatype() == GST_TENSOR_DATA_TYPE_UINT8 && - onnxClient->getInputImageScale() == 1.0 && - onnxClient->getInputImageOffset() == 0.0) { - switch (onnxClient->getChannels()) { - case 1: - gst_caps_set_simple (restrictions, "format", G_TYPE_STRING, "GRAY8", - NULL); - break; - case 3: - switch (onnxClient->getInputImageFormat ()) { - case GST_ML_INPUT_IMAGE_FORMAT_HWC: - gst_caps_set_simple (restrictions, "format", G_TYPE_STRING, "RGB", - NULL); - break; - case GST_ML_INPUT_IMAGE_FORMAT_CHW: - gst_caps_set_simple (restrictions, "format", G_TYPE_STRING, "RGBP", - NULL); - break; - } - break; - case 4: - switch (onnxClient->getInputImageFormat ()) { - case GST_ML_INPUT_IMAGE_FORMAT_HWC: - gst_caps_set_simple (restrictions, "format", G_TYPE_STRING, "RGBA", - NULL); - break; - case GST_ML_INPUT_IMAGE_FORMAT_CHW: - gst_caps_set_simple (restrictions, 
"format", G_TYPE_STRING, "RGBAP", - NULL); - break; - } - break; - default: - GST_ERROR_OBJECT (self, "Invalid number of channels %d", - onnxClient->getChannels()); - return NULL; - } - } - - GST_DEBUG_OBJECT(self, "Applying caps restrictions: %" GST_PTR_FORMAT, - restrictions); - - other_caps = gst_caps_intersect_full (caps, restrictions, - GST_CAPS_INTERSECT_FIRST); - gst_caps_unref (restrictions); - - if (filter_caps) { - GstCaps *tmp = gst_caps_intersect_full ( - other_caps, filter_caps, GST_CAPS_INTERSECT_FIRST); - gst_caps_replace (&other_caps, tmp); - gst_caps_unref (tmp); - } - - return other_caps; -} - -static gboolean -gst_onnx_inference_set_caps (GstBaseTransform * trans, GstCaps * incaps, - GstCaps * outcaps) -{ - GstOnnxInference *self = GST_ONNX_INFERENCE (trans); - auto onnxClient = GST_ONNX_CLIENT_MEMBER (self); - - if (!gst_video_info_from_caps (&self->video_info, incaps)) { - GST_ERROR_OBJECT (self, "Failed to parse caps"); - return FALSE; - } - - onnxClient->parseDimensions (self->video_info); - return TRUE; -} - -static GstFlowReturn -gst_onnx_inference_transform_ip (GstBaseTransform * trans, GstBuffer * buf) -{ - if (!gst_base_transform_is_passthrough (trans) - && !gst_onnx_inference_process (trans, buf)) { - GST_ELEMENT_ERROR (trans, STREAM, FAILED, - (NULL), ("ONNX inference failed")); - return GST_FLOW_ERROR; - } - - return GST_FLOW_OK; -} - -static gboolean -gst_onnx_inference_process (GstBaseTransform * trans, GstBuffer * buf) -{ - GstMapInfo info; - if (gst_buffer_map (buf, &info, GST_MAP_READ)) { - GstOnnxInference *self = GST_ONNX_INFERENCE (trans); - try { - auto client = GST_ONNX_CLIENT_MEMBER (self); - auto outputs = client->run (info.data, self->video_info); - auto meta = client->copy_tensors_to_meta (outputs, buf); - if (!meta) - return FALSE; - GST_TRACE_OBJECT (trans, "Num tensors:%zu", meta->num_tensors); - } - catch (Ort::Exception & ortex) { - GST_ERROR_OBJECT (self, "%s", ortex.what ()); - gst_buffer_unmap (buf, &info); - 
return FALSE; - } - - gst_buffer_unmap (buf, &info); - } - - return TRUE; -}
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/codecparsers/gsth265parser-private.h
Deleted
@@ -1,32 +0,0 @@ -/* GStreamer - * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> - * - * This library is free software; you can redistribute it and/or - * modify it under the terms of the GNU Library General Public - * License as published by the Free Software Foundation; either - * version 2 of the License, or (at your option) any later version. - * - * This library is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Library General Public License for more details. - * - * You should have received a copy of the GNU Library General Public - * License along with this library; if not, write to the - * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, - * Boston, MA 02110-1301, USA. - */ - -#pragma once - -#include <gst/gst.h> -#include <gst/codecparsers/gsth265parser.h> - -G_BEGIN_DECLS - -GST_CODEC_PARSERS_API -GstH265ParserResult gst_h265_parser_link_slice_hdr (GstH265Parser * parser, - GstH265SliceHdr * slice, - guint pps_id); - -G_END_DECLS
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkvideoutils.c
Deleted
@@ -1,474 +0,0 @@ -/* - * GStreamer - * Copyright (C) 2023 Igalia, S.L. - * - * This library is free software; you can redistribute it and/or - * modify it under the terms of the GNU Library General Public - * License as published by the Free Software Foundation; either - * version 2 of the License, or (at your option) any later version. - * - * This library is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Library General Public License for more details. - * - * You should have received a copy of the GNU Library General Public - * License along with this library; if not, write to the - * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, - * Boston, MA 02110-1301, USA. - */ - -#ifdef HAVE_CONFIG_H -#include "config.h" -#endif - -#include "gstvkvideoutils.h" - -#if GST_VULKAN_HAVE_VIDEO_EXTENSIONS -/* *INDENT-OFF* */ -static const struct { - GstVulkanVideoOperation video_operation; - VkVideoCodecOperationFlagBitsKHR codec; - const char *mime; - VkStructureType stype; -} video_codecs_map[] = { - { GST_VULKAN_VIDEO_OPERATION_DECODE, VK_VIDEO_CODEC_OPERATION_DECODE_H264_BIT_KHR, "video/x-h264", - VK_STRUCTURE_TYPE_VIDEO_DECODE_H264_PROFILE_INFO_KHR }, - { GST_VULKAN_VIDEO_OPERATION_DECODE, VK_VIDEO_CODEC_OPERATION_DECODE_H265_BIT_KHR, "video/x-h265", - VK_STRUCTURE_TYPE_VIDEO_DECODE_H265_PROFILE_INFO_KHR }, - { GST_VULKAN_VIDEO_OPERATION_ENCODE, VK_VIDEO_CODEC_OPERATION_ENCODE_H264_BIT_KHR, "video/x-h264", - VK_STRUCTURE_TYPE_VIDEO_ENCODE_H264_PROFILE_INFO_KHR }, - { GST_VULKAN_VIDEO_OPERATION_ENCODE, VK_VIDEO_CODEC_OPERATION_ENCODE_H265_BIT_KHR, "video/x-h265", - VK_STRUCTURE_TYPE_VIDEO_ENCODE_H265_PROFILE_INFO_KHR }, -}; - -static const struct { - VkVideoChromaSubsamplingFlagBitsKHR chroma; - const char *chroma_str; -} video_chroma_map[] = { - { VK_VIDEO_CHROMA_SUBSAMPLING_420_BIT_KHR, "4:2:0" }, - { 
VK_VIDEO_CHROMA_SUBSAMPLING_422_BIT_KHR, "4:2:2" }, - { VK_VIDEO_CHROMA_SUBSAMPLING_444_BIT_KHR, "4:4:4" }, -}; - -static const struct { - VkVideoComponentBitDepthFlagBitsKHR bitdepth; - int bit_depth; -} bit_depth_map = { - {VK_VIDEO_COMPONENT_BIT_DEPTH_8_BIT_KHR, 8}, - {VK_VIDEO_COMPONENT_BIT_DEPTH_10_BIT_KHR, 10}, - {VK_VIDEO_COMPONENT_BIT_DEPTH_12_BIT_KHR, 12}, -}; - -static const struct { - StdVideoH264ProfileIdc vk_profile; - const char *profile_str; -} h264_profile_map = { - { STD_VIDEO_H264_PROFILE_IDC_BASELINE, "baseline" }, - { STD_VIDEO_H264_PROFILE_IDC_MAIN, "main" }, - { STD_VIDEO_H264_PROFILE_IDC_HIGH, "high" }, - { STD_VIDEO_H264_PROFILE_IDC_HIGH_444_PREDICTIVE, "high-4:4:4" }, -}; - -static const struct { - VkVideoDecodeH264PictureLayoutFlagBitsKHR layout; - const char *layout_str; -} h264_layout_map = { - { VK_VIDEO_DECODE_H264_PICTURE_LAYOUT_PROGRESSIVE_KHR, "progressive" }, - { VK_VIDEO_DECODE_H264_PICTURE_LAYOUT_INTERLACED_INTERLEAVED_LINES_BIT_KHR, - "interleaved" }, - { VK_VIDEO_DECODE_H264_PICTURE_LAYOUT_INTERLACED_SEPARATE_PLANES_BIT_KHR, - "fields" }, -}; - -static const struct { - StdVideoH265ProfileIdc vk_profile; - const char *profile_str; -} h265_profile_map = { - { STD_VIDEO_H265_PROFILE_IDC_MAIN, "main" }, - { STD_VIDEO_H265_PROFILE_IDC_MAIN_10, "main-10" }, - { STD_VIDEO_H265_PROFILE_IDC_MAIN_STILL_PICTURE, "main-still-picture" }, - { STD_VIDEO_H265_PROFILE_IDC_FORMAT_RANGE_EXTENSIONS, - "format-range-extensions" }, - { STD_VIDEO_H265_PROFILE_IDC_SCC_EXTENSIONS, "scc-extensions" }, -}; - -/* *INDENT-ON* */ -#endif - -/** - * gst_vulkan_video_profile_to_caps: (skip) - * @profile: #GstVulkanVideoProfile to convert into a #GstCaps - * - * Returns: (transfer full): a #GstCaps from @profile - * - * Since: 1.24 - */ -GstCaps * -gst_vulkan_video_profile_to_caps (const GstVulkanVideoProfile * profile) -{ -#if GST_VULKAN_HAVE_VIDEO_EXTENSIONS - const char *mime = NULL, *chroma_sub = NULL; - const char *profile_str = NULL, *layout = NULL; - 
int i, luma = 0, chroma = 0; - GstCaps *caps; - - g_return_val_if_fail (profile - && profile->profile.sType == VK_STRUCTURE_TYPE_VIDEO_PROFILE_INFO_KHR, - NULL); - - for (i = 0; i < G_N_ELEMENTS (video_codecs_map); i++) { - if (profile->profile.videoCodecOperation == video_codecs_mapi.codec) { - mime = video_codecs_mapi.mime; - - switch (profile->profile.videoCodecOperation) { - case VK_VIDEO_CODEC_OPERATION_DECODE_H264_BIT_KHR: - if (profile->codec.h264dec.sType == video_codecs_mapi.stype) { - int j; - for (j = 0; j < G_N_ELEMENTS (h264_profile_map); j++) { - if (profile->codec.h264dec.stdProfileIdc - == h264_profile_mapj.vk_profile) { - profile_str = h264_profile_mapj.profile_str; - break; - } - } - for (j = 0; j < G_N_ELEMENTS (h264_layout_map); j++) { - if (profile->codec.h264dec.pictureLayout - == h264_layout_mapj.layout) { - layout = h264_layout_mapj.layout_str; - break; - } - } - } - break; - case VK_VIDEO_CODEC_OPERATION_DECODE_H265_BIT_KHR: - if (profile->codec.h265dec.sType == video_codecs_mapi.stype) { - int j; - for (j = 0; j < G_N_ELEMENTS (h265_profile_map); j++) { - if (profile->codec.h265dec.stdProfileIdc - == h265_profile_mapj.vk_profile) - profile_str = h265_profile_mapj.profile_str; - } - } - break; - case VK_VIDEO_CODEC_OPERATION_ENCODE_H264_BIT_KHR: - if (profile->codec.h264enc.sType == video_codecs_mapi.stype) { - int j; - for (j = 0; j < G_N_ELEMENTS (h264_profile_map); j++) { - if (profile->codec.h264enc.stdProfileIdc - == h264_profile_mapj.vk_profile) - profile_str = h264_profile_mapj.profile_str; - } - } - break; - case VK_VIDEO_CODEC_OPERATION_ENCODE_H265_BIT_KHR: - if (profile->codec.h265enc.sType == video_codecs_mapi.stype) { - int j; - for (j = 0; j < G_N_ELEMENTS (h265_profile_map); j++) { - if (profile->codec.h265enc.stdProfileIdc - == h265_profile_mapj.vk_profile) - profile_str = h265_profile_mapj.profile_str; - } - } - break; - default: - break; - } - - break; - } - } - if (i == G_N_ELEMENTS (video_codecs_map)) - return NULL; - - 
for (i = 0; i < G_N_ELEMENTS (video_chroma_map); i++) { - if (profile->profile.chromaSubsampling == video_chroma_mapi.chroma) { - chroma_sub = video_chroma_mapi.chroma_str; - break; - } - } - if (i == G_N_ELEMENTS (video_chroma_map)) - return NULL; - - for (i = 0; i < G_N_ELEMENTS (bit_depth_map); i++) { - if (profile->profile.chromaBitDepth == bit_depth_mapi.bitdepth) { - chroma = bit_depth_mapi.bit_depth; - break; - } - } - if (i == G_N_ELEMENTS (bit_depth_map)) - return NULL; - - for (i = 0; i < G_N_ELEMENTS (bit_depth_map); i++) { - if (profile->profile.lumaBitDepth == bit_depth_mapi.bitdepth) { - luma = bit_depth_mapi.bit_depth; - break; - } - } - if (i == G_N_ELEMENTS (bit_depth_map)) - return NULL; - - caps = gst_caps_new_simple (mime, "chroma-format", G_TYPE_STRING, chroma_sub, - "bit-depth-luma", G_TYPE_UINT, luma, "bit-depth-chroma", G_TYPE_UINT, - chroma, NULL); - - if (profile_str) - gst_caps_set_simple (caps, "profile", G_TYPE_STRING, profile_str, NULL); - if (layout) - gst_caps_set_simple (caps, "interlace-mode", G_TYPE_STRING, layout, NULL); - - return caps; - -#endif - return NULL; -} - -/** - * gst_vulkan_video_profile_from_caps: (skip) - * @profile: (out): the output profile - * @caps: a #GstCaps to parse - * @video_operation: a supported video operation - * - * Returns: %TRUE if @caps was parsed correctly, otherwise %FALSE - * - * Since: 1.24 - */ -gboolean -gst_vulkan_video_profile_from_caps (GstVulkanVideoProfile * profile, - GstCaps * caps, GstVulkanVideoOperation video_operation) -{ -#if GST_VULKAN_HAVE_VIDEO_EXTENSIONS - const GstStructure *structure; - const gchar *mime, *chroma_sub, *profile_str = NULL, *layout = NULL; - gint i, luma, chroma; - - g_return_val_if_fail (GST_IS_CAPS (caps), FALSE); - g_return_val_if_fail (profile, FALSE); - g_return_val_if_fail (video_operation < GST_VULKAN_VIDEO_OPERATION_UNKNOWN, - FALSE); - - structure = gst_caps_get_structure (caps, 0); - - profile->usage.decode.sType = 
VK_STRUCTURE_TYPE_VIDEO_DECODE_USAGE_INFO_KHR; - profile->usage.decode.videoUsageHints = VK_VIDEO_DECODE_USAGE_DEFAULT_KHR; - - profile->profile.sType = VK_STRUCTURE_TYPE_VIDEO_PROFILE_INFO_KHR; - profile->profile.pNext = &profile->usage; - - mime = gst_structure_get_name (structure); - for (i = 0; i < G_N_ELEMENTS (video_codecs_map); i++) { - if ((video_codecs_mapi.video_operation == video_operation) - && (g_strcmp0 (video_codecs_mapi.mime, mime) == 0)) { - profile->profile.videoCodecOperation = video_codecs_mapi.codec; - - switch (profile->profile.videoCodecOperation) { - case VK_VIDEO_CODEC_OPERATION_DECODE_H264_BIT_KHR:{ - int j; - - profile->codec.h264dec.sType = video_codecs_mapi.stype; - profile->codec.h264dec.stdProfileIdc = - STD_VIDEO_H264_PROFILE_IDC_INVALID; - profile->codec.h264dec.pictureLayout = - VK_VIDEO_DECODE_H264_PICTURE_LAYOUT_FLAG_BITS_MAX_ENUM_KHR; - profile->usage.decode.pNext = &profile->codec; - - profile_str = gst_structure_get_string (structure, "profile"); - for (j = 0; profile_str && j < G_N_ELEMENTS (h264_profile_map); j++) { - if (g_strcmp0 (profile_str, h264_profile_mapj.profile_str) == 0) { - profile->codec.h264dec.stdProfileIdc = - h264_profile_mapj.vk_profile; - break; - } - } - layout = gst_structure_get_string (structure, "interlace-mode"); - for (j = 0; layout && j < G_N_ELEMENTS (h264_layout_map); j++) { - if (g_strcmp0 (layout, h264_layout_mapj.layout_str) == 0) { - profile->codec.h264dec.pictureLayout = h264_layout_mapj.layout; - break; - } - } - break; - } - case VK_VIDEO_CODEC_OPERATION_DECODE_H265_BIT_KHR:{ - int j; - - profile->codec.h265dec.sType = video_codecs_mapi.stype; - profile->codec.h265dec.stdProfileIdc = - STD_VIDEO_H265_PROFILE_IDC_INVALID; - profile->usage.decode.pNext = &profile->codec; - - profile_str = gst_structure_get_string (structure, "profile"); - for (j = 0; profile_str && j < G_N_ELEMENTS (h265_profile_map); j++) { - if (g_strcmp0 (profile_str, h265_profile_mapj.profile_str) == 0) { - 
profile->codec.h265dec.stdProfileIdc = - h265_profile_mapj.vk_profile; - break; - } - } - break; - } - case VK_VIDEO_CODEC_OPERATION_ENCODE_H264_BIT_KHR:{ - int j; - - profile->codec.h264enc.sType = video_codecs_mapi.stype; - profile->codec.h264enc.stdProfileIdc = - STD_VIDEO_H264_PROFILE_IDC_INVALID; - profile->profile.pNext = &profile->codec; - - profile_str = gst_structure_get_string (structure, "profile"); - for (j = 0; profile_str && j < G_N_ELEMENTS (h264_profile_map); j++) { - if (g_strcmp0 (profile_str, h264_profile_mapj.profile_str) == 0) { - profile->codec.h264enc.stdProfileIdc = - h264_profile_mapj.vk_profile; - break; - } - } - break; - } - case VK_VIDEO_CODEC_OPERATION_ENCODE_H265_BIT_KHR:{ - int j; - - profile->codec.h265enc.sType = video_codecs_mapi.stype; - profile->codec.h265enc.stdProfileIdc = - STD_VIDEO_H265_PROFILE_IDC_INVALID; - profile->profile.pNext = &profile->codec; - - profile_str = gst_structure_get_string (structure, "profile"); - for (j = 0; profile_str && j < G_N_ELEMENTS (h265_profile_map); j++) { - if (g_strcmp0 (profile_str, h265_profile_mapj.profile_str) == 0) { - profile->codec.h265enc.stdProfileIdc = - h265_profile_mapj.vk_profile; - break; - } - } - break; - } - default: - profile->usage.decode.pNext = NULL; - break; - } - - break; - } - } - if (i == G_N_ELEMENTS (video_codecs_map)) - return FALSE; - chroma_sub = gst_structure_get_string (structure, "chroma-format"); - if (!chroma_sub) - return FALSE; - if (!gst_structure_get (structure, "bit-depth-luma", G_TYPE_UINT, &luma, - "bit-depth-chroma", G_TYPE_UINT, &chroma, NULL)) - return FALSE; - - for (i = 0; i < G_N_ELEMENTS (video_chroma_map); i++) { - if (g_strcmp0 (chroma_sub, video_chroma_mapi.chroma_str) == 0) { - profile->profile.chromaSubsampling = video_chroma_mapi.chroma; - break; - } - } - if (i == G_N_ELEMENTS (video_chroma_map)) - return FALSE; - - for (i = 0; i < G_N_ELEMENTS (bit_depth_map); i++) { - if (luma == bit_depth_mapi.bit_depth) { - 
profile->profile.lumaBitDepth = bit_depth_mapi.bitdepth; - break; - } - } - if (i == G_N_ELEMENTS (bit_depth_map)) - return FALSE; - - for (i = 0; i < G_N_ELEMENTS (bit_depth_map); i++) { - if (chroma == bit_depth_mapi.bit_depth) { - profile->profile.chromaBitDepth = bit_depth_mapi.bitdepth; - break; - } - } - if (i == G_N_ELEMENTS (bit_depth_map)) - return FALSE; -#endif - return TRUE; -} - -/** - * gst_vulkan_video_profile_is_valid: (skip) - * @profile: the output profile - * @codec: VkVideoCodecOperationFlagBitsKHR described by @profile - * - * Returns: %TRUE if @profile is correct and matches with @codec - * - * Since: 1.24 - */ -gboolean -gst_vulkan_video_profile_is_valid (GstVulkanVideoProfile * profile, guint codec) -{ -#if GST_VULKAN_HAVE_VIDEO_EXTENSIONS - int i; - VkVideoCodecOperationFlagBitsKHR op = codec; - VkStructureType stype = VK_STRUCTURE_TYPE_MAX_ENUM; - - if (op == VK_VIDEO_CODEC_OPERATION_NONE_KHR) - return FALSE; - - if (profile->profile.videoCodecOperation != op) - return FALSE; - - for (i = 0; i < G_N_ELEMENTS (video_codecs_map); i++) { - if (op == video_codecs_mapi.codec) { - stype = video_codecs_mapi.stype; - break; - } - } - - if (stype == VK_STRUCTURE_TYPE_MAX_ENUM) - return FALSE; - - if (profile->codec.base.sType != stype) - return FALSE; - - return TRUE; - -#endif - return FALSE; -} - -/** - * gst_vulkan_video_profile_is_equal: - * @a: a #GstVulkanVideoProfile - * @b: another #GstVulkanVideoProfile - * - * Returns: whether @a and @b contains the same information. 
- */ -gboolean -gst_vulkan_video_profile_is_equal (const GstVulkanVideoProfile * a, - const GstVulkanVideoProfile * b) -{ -#if GST_VULKAN_HAVE_VIDEO_EXTENSIONS - gboolean profile; - - g_return_val_if_fail (a && b, FALSE); - - profile = ((a->profile.videoCodecOperation == b->profile.videoCodecOperation) - && (a->profile.chromaSubsampling == b->profile.chromaSubsampling) - && (a->profile.chromaBitDepth == b->profile.chromaBitDepth) - && (a->profile.lumaBitDepth == b->profile.lumaBitDepth) - && (a->codec.base.sType == b->codec.base.sType)); - - if (!profile) - return FALSE; - - switch (a->profile.videoCodecOperation) { - case VK_VIDEO_CODEC_OPERATION_DECODE_H264_BIT_KHR: - return ((a->codec.h264dec.stdProfileIdc == b->codec.h264dec.stdProfileIdc) - && a->codec.h264dec.pictureLayout == b->codec.h264dec.pictureLayout); - case VK_VIDEO_CODEC_OPERATION_DECODE_H265_BIT_KHR: - return (a->codec.h265dec.stdProfileIdc == b->codec.h265dec.stdProfileIdc); - default: - return FALSE; - } - - g_assert_not_reached (); -#else - return FALSE; -#endif -}
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkvideoutils.h
Deleted
@@ -1,140 +0,0 @@
-/*
- * GStreamer
- * Copyright (C) 2023 Igalia, S.L.
- *
- * This library is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Library General Public
- * License as published by the Free Software Foundation; either
- * version 2 of the License, or (at your option) any later version.
- *
- * This library is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Library General Public License for more details.
- *
- * You should have received a copy of the GNU Library General Public
- * License along with this library; if not, write to the
- * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
- * Boston, MA 02110-1301, USA.
- */
-
-#pragma once
-
-#include <gst/gst.h>
-#include <gst/vulkan/gstvkapi.h>
-
-G_BEGIN_DECLS
-
-/**
- * GstVulkanVideoProfile:
- * @profile: the generic vulkan video profile
- * @codec: the specific codec profile
- *
- * Since: 1.24
- */
-struct _GstVulkanVideoProfile
-{
-  /*< private >*/
-#if GST_VULKAN_HAVE_VIDEO_EXTENSIONS
-  VkVideoProfileInfoKHR profile;
-  union {
-    VkVideoDecodeUsageInfoKHR decode;
-    /**
-     * GstVulkanVideoProfile.usage.encode:
-     *
-     * Since: 1.26
-     **/
-    VkVideoEncodeUsageInfoKHR encode;
-  } usage;
-
-  union {
-    VkBaseInStructure base;
-    VkVideoDecodeH264ProfileInfoKHR h264dec;
-    VkVideoDecodeH265ProfileInfoKHR h265dec;
-    /**
-     * GstVulkanVideoProfile.usage.codec.h264enc:
-     *
-     * Since: 1.26
-     **/
-    VkVideoEncodeH264ProfileInfoKHR h264enc;
-    /**
-     * GstVulkanVideoProfile.usage.codec.h265enc:
-     *
-     * Since: 1.26
-     **/
-    VkVideoEncodeH265ProfileInfoKHR h265enc;
-  } codec;
-#endif
-  gpointer _reserved[GST_PADDING];
-};
-
-/**
- * GstVulkanVideoCapabilities:
- *
- * Since: 1.24
- */
-struct _GstVulkanVideoCapabilities
-{
-  /*< private >*/
-#if GST_VULKAN_HAVE_VIDEO_EXTENSIONS
-  VkVideoCapabilitiesKHR caps;
-  union
-  {
-    struct
-    {
-      /*< private >*/
-      VkVideoDecodeCapabilitiesKHR caps;
-      union
-      {
-        /*< private >*/
-        VkVideoDecodeH264CapabilitiesKHR h264;
-        VkVideoDecodeH265CapabilitiesKHR h265;
-      } codec;
-    } decoder;
-    struct
-    {
-      /*< private >*/
-      VkVideoEncodeCapabilitiesKHR caps;
-      union
-      {
-        /*< private >*/
-        VkVideoEncodeH264CapabilitiesKHR h264;
-        VkVideoEncodeH265CapabilitiesKHR h265;
-      } codec;
-    } encoder;
-  };
-#endif
-  /*< private >*/
-  gpointer _reserved[GST_PADDING];
-};
-
-/**
- * GstVulkanVideoOperation:
- * @GST_VULKAN_VIDEO_OPERATION_DECODE: decode operation
- * @GST_VULKAN_VIDEO_OPERATION_ENCODE: encode operation
- * @GST_VULKAN_VIDEO_OPERATION_UNKNOWN: unknown
- *
- * The type of video operation.
- *
- * Since: 1.24
- */
-typedef enum {
-  GST_VULKAN_VIDEO_OPERATION_DECODE = 0,
-  GST_VULKAN_VIDEO_OPERATION_ENCODE,
-  GST_VULKAN_VIDEO_OPERATION_UNKNOWN,
-} GstVulkanVideoOperation;
-
-GST_VULKAN_API
-GstCaps * gst_vulkan_video_profile_to_caps   (const GstVulkanVideoProfile * profile);
-GST_VULKAN_API
-gboolean  gst_vulkan_video_profile_from_caps (GstVulkanVideoProfile * profile,
-                                              GstCaps * caps,
                                              GstVulkanVideoOperation video_operation);
-GST_VULKAN_API
-gboolean  gst_vulkan_video_profile_is_valid  (GstVulkanVideoProfile * profile,
-                                              guint codec);
-GST_VULKAN_API
-gboolean  gst_vulkan_video_profile_is_equal  (const GstVulkanVideoProfile * a,
-                                              const GstVulkanVideoProfile * b);
-
-G_END_DECLS
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/inter/gstintertest.c
Deleted
@@ -1,507 +0,0 @@
-/* GstInterTest
- * Copyright (C) 2011 David Schleef <ds@schleef.org>
- * Copyright (C) 2010 Entropy Wave Inc
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in the
- *    documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
- * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
- * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT,
- * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
- * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
- * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
- * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
- * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
- * POSSIBILITY OF SUCH DAMAGE.
- */
-
-#ifdef HAVE_CONFIG_H
-#include "config.h"
-#endif
-
-#include <gst/gst.h>
-#include <stdlib.h>
-
-//#define GETTEXT_PACKAGE "intertest"
-
-
-typedef struct _GstInterTest GstInterTest;
-struct _GstInterTest
-{
-  GstElement *pipeline;
-  GstBus *bus;
-  GMainLoop *main_loop;
-
-  GstElement *source_element;
-  GstElement *sink_element;
-
-  gboolean paused_for_buffering;
-  guint timer_id;
-};
-
-GstInterTest *gst_inter_test_new (void);
-void gst_inter_test_free (GstInterTest * intertest);
-void gst_inter_test_create_pipeline_server (GstInterTest * intertest);
-void gst_inter_test_create_pipeline_vts (GstInterTest * intertest);
-void gst_inter_test_create_pipeline_playbin (GstInterTest * intertest,
-    const char *uri);
-void gst_inter_test_start (GstInterTest * intertest);
-void gst_inter_test_stop (GstInterTest * intertest);
-
-static gboolean gst_inter_test_handle_message (GstBus * bus,
-    GstMessage * message, gpointer data);
-static gboolean onesecond_timer (gpointer priv);
-
-
-gboolean verbose;
-
-static GOptionEntry entries[] = {
-  {"verbose", 'v', 0, G_OPTION_ARG_NONE, &verbose, "Be verbose", NULL},
-
-  {NULL}
-
-};
-
-int
-main (int argc, char *argv[])
-{
-  GError *error = NULL;
-  GOptionContext *context;
-  GstInterTest *intertest1;
-  GstInterTest *intertest2;
-  GMainLoop *main_loop;
-
-  context = g_option_context_new ("- Internal src/sink test");
-  g_option_context_add_main_entries (context, entries, GETTEXT_PACKAGE);
-  g_option_context_add_group (context, gst_init_get_option_group ());
-  if (!g_option_context_parse (context, &argc, &argv, &error)) {
-    g_print ("option parsing failed: %s\n", error->message);
-    g_option_context_free (context);
-    g_clear_error (&error);
-    exit (1);
-  }
-  g_option_context_free (context);
-
-  intertest1 = gst_inter_test_new ();
-  gst_inter_test_create_pipeline_server (intertest1);
-  gst_inter_test_start (intertest1);
-
-  intertest2 = gst_inter_test_new ();
-  gst_inter_test_create_pipeline_playbin (intertest2, NULL);
gst_inter_test_start (intertest2); - - main_loop = g_main_loop_new (NULL, TRUE); - intertest1->main_loop = main_loop; - intertest2->main_loop = main_loop; - - g_main_loop_run (main_loop); - g_main_loop_unref (main_loop); - - exit (0); -} - - -GstInterTest * -gst_inter_test_new (void) -{ - GstInterTest *intertest; - - intertest = g_new0 (GstInterTest, 1); - - return intertest; -} - -void -gst_inter_test_free (GstInterTest * intertest) -{ - if (intertest->source_element) { - gst_object_unref (intertest->source_element); - intertest->source_element = NULL; - } - if (intertest->sink_element) { - gst_object_unref (intertest->sink_element); - intertest->sink_element = NULL; - } - - if (intertest->bus) { - gst_object_unref (intertest->bus); - intertest->bus = NULL; - } - - if (intertest->pipeline) { - gst_element_set_state (intertest->pipeline, GST_STATE_NULL); - gst_object_unref (intertest->pipeline); - intertest->pipeline = NULL; - } - g_free (intertest); -} - -void -gst_inter_test_create_pipeline_playbin (GstInterTest * intertest, - const char *uri) -{ - GstElement *pipeline; - - if (uri == NULL) { - gst_inter_test_create_pipeline_vts (intertest); - return; - } - - pipeline = gst_pipeline_new (NULL); - gst_bin_add (GST_BIN (pipeline), - gst_element_factory_make ("playbin", "source")); - - intertest->pipeline = pipeline; - - gst_pipeline_set_auto_flush_bus (GST_PIPELINE (pipeline), FALSE); - intertest->bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline)); - gst_bus_add_watch (intertest->bus, gst_inter_test_handle_message, intertest); - - intertest->source_element = - gst_bin_get_by_name (GST_BIN (pipeline), "source"); - g_print ("source_element is %p\n", intertest->source_element); - - g_print ("setting uri to %s\n", uri); - g_object_set (intertest->source_element, "uri", uri, NULL); -} - -void -gst_inter_test_create_pipeline_vts (GstInterTest * intertest) -{ - GString *pipe_desc; - GstElement *pipeline; - GError *error = NULL; - - pipe_desc = g_string_new (""); - - 
g_string_append (pipe_desc, "videotestsrc name=source num-buffers=100 ! "); - g_string_append (pipe_desc, - "video/x-raw,format=(string)I420,width=320,height=240 ! "); - g_string_append (pipe_desc, "timeoverlay ! "); - g_string_append (pipe_desc, "intervideosink name=sink sync=true "); - g_string_append (pipe_desc, - "audiotestsrc samplesperbuffer=1600 num-buffers=100 ! audioconvert ! "); - g_string_append (pipe_desc, "interaudiosink sync=true "); - - if (verbose) - g_print ("pipeline: %s\n", pipe_desc->str); - - pipeline = (GstElement *) gst_parse_launch (pipe_desc->str, &error); - g_string_free (pipe_desc, TRUE); - - if (error) { - g_print ("pipeline parsing error: %s\n", error->message); - gst_object_unref (pipeline); - g_clear_error (&error); - return; - } - - intertest->pipeline = pipeline; - - gst_pipeline_set_auto_flush_bus (GST_PIPELINE (pipeline), FALSE); - intertest->bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline)); - gst_bus_add_watch (intertest->bus, gst_inter_test_handle_message, intertest); - - intertest->source_element = - gst_bin_get_by_name (GST_BIN (pipeline), "source"); - intertest->sink_element = gst_bin_get_by_name (GST_BIN (pipeline), "sink"); -} - -void -gst_inter_test_create_pipeline_server (GstInterTest * intertest) -{ - GString *pipe_desc; - GstElement *pipeline; - GError *error = NULL; - - pipe_desc = g_string_new (""); - - g_string_append (pipe_desc, "intervideosrc ! queue ! "); - g_string_append (pipe_desc, "xvimagesink name=sink "); - g_string_append (pipe_desc, "interaudiosrc ! queue ! 
"); - g_string_append (pipe_desc, "alsasink "); - - if (verbose) - g_print ("pipeline: %s\n", pipe_desc->str); - - pipeline = (GstElement *) gst_parse_launch (pipe_desc->str, &error); - g_string_free (pipe_desc, TRUE); - - if (error) { - g_print ("pipeline parsing error: %s\n", error->message); - gst_object_unref (pipeline); - g_clear_error (&error); - return; - } - - intertest->pipeline = pipeline; - - gst_pipeline_set_auto_flush_bus (GST_PIPELINE (pipeline), FALSE); - intertest->bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline)); - gst_bus_add_watch (intertest->bus, gst_inter_test_handle_message, intertest); - - intertest->source_element = - gst_bin_get_by_name (GST_BIN (pipeline), "source"); - intertest->sink_element = gst_bin_get_by_name (GST_BIN (pipeline), "sink"); -} - -void -gst_inter_test_start (GstInterTest * intertest) -{ - gst_element_set_state (intertest->pipeline, GST_STATE_READY); - - intertest->timer_id = g_timeout_add (1000, onesecond_timer, intertest); -} - -void -gst_inter_test_stop (GstInterTest * intertest) -{ - gst_element_set_state (intertest->pipeline, GST_STATE_NULL); - - g_source_remove (intertest->timer_id); -} - -static void -gst_inter_test_handle_eos (GstInterTest * intertest) -{ - gst_inter_test_stop (intertest); -} - -static void -gst_inter_test_handle_error (GstInterTest * intertest, GError * error, - const char *debug) -{ - g_print ("error: %s\n", error->message); - gst_inter_test_stop (intertest); -} - -static void -gst_inter_test_handle_warning (GstInterTest * intertest, GError * error, - const char *debug) -{ - g_print ("warning: %s\n", error->message); -} - -static void -gst_inter_test_handle_info (GstInterTest * intertest, GError * error, - const char *debug) -{ - g_print ("info: %s\n", error->message); -} - -static void -gst_inter_test_handle_null_to_ready (GstInterTest * intertest) -{ - gst_element_set_state (intertest->pipeline, GST_STATE_PAUSED); - -} - -static void -gst_inter_test_handle_ready_to_paused (GstInterTest * 
intertest) -{ - if (!intertest->paused_for_buffering) { - gst_element_set_state (intertest->pipeline, GST_STATE_PLAYING); - } -} - -static void -gst_inter_test_handle_paused_to_playing (GstInterTest * intertest) -{ - -} - -static void -gst_inter_test_handle_playing_to_paused (GstInterTest * intertest) -{ - -} - -static void -gst_inter_test_handle_paused_to_ready (GstInterTest * intertest) -{ - -} - -static void -gst_inter_test_handle_ready_to_null (GstInterTest * intertest) -{ - //g_main_loop_quit (intertest->main_loop); - -} - - -static gboolean -gst_inter_test_handle_message (GstBus * bus, GstMessage * message, - gpointer data) -{ - GstInterTest *intertest = (GstInterTest *) data; - - switch (GST_MESSAGE_TYPE (message)) { - case GST_MESSAGE_EOS: - gst_inter_test_handle_eos (intertest); - break; - case GST_MESSAGE_ERROR: - { - GError *error = NULL; - gchar *debug; - - gst_message_parse_error (message, &error, &debug); - gst_inter_test_handle_error (intertest, error, debug); - g_clear_error (&error); - g_free (debug); - } - break; - case GST_MESSAGE_WARNING: - { - GError *error = NULL; - gchar *debug; - - gst_message_parse_warning (message, &error, &debug); - gst_inter_test_handle_warning (intertest, error, debug); - g_clear_error (&error); - g_free (debug); - } - break; - case GST_MESSAGE_INFO: - { - GError *error = NULL; - gchar *debug; - - gst_message_parse_info (message, &error, &debug); - gst_inter_test_handle_info (intertest, error, debug); - g_clear_error (&error); - g_free (debug); - } - break; - case GST_MESSAGE_TAG: - { - GstTagList *tag_list; - - gst_message_parse_tag (message, &tag_list); - if (verbose) - g_print ("tag\n"); - } - break; - case GST_MESSAGE_STATE_CHANGED: - { - GstState oldstate, newstate, pending; - - gst_message_parse_state_changed (message, &oldstate, &newstate, &pending); - if (GST_ELEMENT (message->src) == intertest->pipeline) { - if (verbose) - g_print ("state change from %s to %s\n", - gst_element_state_get_name (oldstate), - 
gst_element_state_get_name (newstate)); - switch (GST_STATE_TRANSITION (oldstate, newstate)) { - case GST_STATE_CHANGE_NULL_TO_READY: - gst_inter_test_handle_null_to_ready (intertest); - break; - case GST_STATE_CHANGE_READY_TO_PAUSED: - gst_inter_test_handle_ready_to_paused (intertest); - break; - case GST_STATE_CHANGE_PAUSED_TO_PLAYING: - gst_inter_test_handle_paused_to_playing (intertest); - break; - case GST_STATE_CHANGE_PLAYING_TO_PAUSED: - gst_inter_test_handle_playing_to_paused (intertest); - break; - case GST_STATE_CHANGE_PAUSED_TO_READY: - gst_inter_test_handle_paused_to_ready (intertest); - break; - case GST_STATE_CHANGE_READY_TO_NULL: - gst_inter_test_handle_ready_to_null (intertest); - break; - default: - if (verbose) - g_print ("unknown state change from %s to %s\n", - gst_element_state_get_name (oldstate), - gst_element_state_get_name (newstate)); - } - } - } - break; - case GST_MESSAGE_BUFFERING: - { - int percent; - gst_message_parse_buffering (message, &percent); - //g_print("buffering %d\n", percent); - if (!intertest->paused_for_buffering && percent < 100) { - g_print ("pausing for buffing\n"); - intertest->paused_for_buffering = TRUE; - gst_element_set_state (intertest->pipeline, GST_STATE_PAUSED); - } else if (intertest->paused_for_buffering && percent == 100) { - g_print ("unpausing for buffing\n"); - intertest->paused_for_buffering = FALSE; - gst_element_set_state (intertest->pipeline, GST_STATE_PLAYING); - } - } - break; - case GST_MESSAGE_STATE_DIRTY: - case GST_MESSAGE_CLOCK_PROVIDE: - case GST_MESSAGE_CLOCK_LOST: - case GST_MESSAGE_NEW_CLOCK: - case GST_MESSAGE_STRUCTURE_CHANGE: - case GST_MESSAGE_STREAM_STATUS: - break; - case GST_MESSAGE_STEP_DONE: - case GST_MESSAGE_APPLICATION: - case GST_MESSAGE_ELEMENT: - case GST_MESSAGE_SEGMENT_START: - case GST_MESSAGE_SEGMENT_DONE: - case GST_MESSAGE_LATENCY: - case GST_MESSAGE_ASYNC_START: - case GST_MESSAGE_ASYNC_DONE: - case GST_MESSAGE_REQUEST_STATE: - case GST_MESSAGE_STEP_START: - default: 
- if (verbose) { - g_print ("message: %s\n", GST_MESSAGE_TYPE_NAME (message)); - } - break; - case GST_MESSAGE_QOS: - break; - } - - return TRUE; -} - - - -static gboolean -onesecond_timer (gpointer priv) -{ - //GstInterTest *intertest = (GstInterTest *)priv; - - g_print (".\n"); - - return TRUE; -} - - - -/* helper functions */ - -#if 0 -gboolean -have_element (const gchar * element_name) -{ - GstPluginFeature *feature; - - feature = gst_default_registry_find_feature (element_name, - GST_TYPE_ELEMENT_FACTORY); - if (feature) { - g_object_unref (feature); - return TRUE; - } - return FALSE; -} -#endif
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/tensordecoders/gstssdobjectdetector.c
Deleted
@@ -1,570 +0,0 @@ -/* - * GStreamer gstreamer-ssdobjectdetector - * Copyright (C) 2021 Collabora Ltd. - * - * gstssdobjectdetector.c - * - * This library is free software; you can redistribute it and/or - * modify it under the terms of the GNU Library General Public - * License as published by the Free Software Foundation; either - * version 2 of the License, or (at your option) any later version. - * - * This library is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Library General Public License for more details. - * - * You should have received a copy of the GNU Library General Public - * License along with this library; if not, write to the - * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, - * Boston, MA 02110-1301, USA. - */ - -/** - * SECTION:element-ssdobjectdetector - * @short_description: Detect objects in video buffers using SSD neural network - * - * This element can parse per-buffer inference tensor meta data generated by an upstream - * inference element - * - * - * ## Example launch command: - * - * Test image file, model file (SSD) and label file can be found here : - * https://gitlab.collabora.com/gstreamer/onnx-models - * - * GST_DEBUG=ssdobjectdetector:5 \ - * gst-launch-1.0 multifilesrc location=onnx-models/images/bus.jpg ! \ - * jpegdec ! videoconvert ! onnxinference execution-provider=cpu model-file=onnx-models/models/ssd_mobilenet_v1_coco.onnx ! \ - * ssdobjectdetector label-file=onnx-models/labels/COCO_classes.txt ! videoconvert ! 
autovideosink - * - */ - -#ifdef HAVE_CONFIG_H -#include "config.h" -#endif - -#include "gstssdobjectdetector.h" - -#include <gio/gio.h> - -#include <gst/gst.h> -#include <gst/video/video.h> -#include <gst/analytics/analytics.h> - -/* Object detection tensor id strings */ -#define GST_MODEL_OBJECT_DETECTOR_BOXES "Gst.Model.ObjectDetector.Boxes" -#define GST_MODEL_OBJECT_DETECTOR_SCORES "Gst.Model.ObjectDetector.Scores" -#define GST_MODEL_OBJECT_DETECTOR_NUM_DETECTIONS "Gst.Model.ObjectDetector.NumDetections" -#define GST_MODEL_OBJECT_DETECTOR_CLASSES "Gst.Model.ObjectDetector.Classes" - -GST_DEBUG_CATEGORY_STATIC (ssd_object_detector_debug); -#define GST_CAT_DEFAULT ssd_object_detector_debug -GST_ELEMENT_REGISTER_DEFINE (ssd_object_detector, "ssdobjectdetector", - GST_RANK_PRIMARY, GST_TYPE_SSD_OBJECT_DETECTOR); - -/* GstSsdObjectDetector properties */ -enum -{ - PROP_0, - PROP_LABEL_FILE, - PROP_SCORE_THRESHOLD, - PROP_SIZE_THRESHOLD -}; - -#define GST_SSD_OBJECT_DETECTOR_DEFAULT_SCORE_THRESHOLD 0.3f /* 0 to 1 */ -#define GST_SSD_OBJECT_DETECTOR_DEFAULT_SIZE_THRESHOLD 0.9f /* 0 to 1 */ - -static GstStaticPadTemplate gst_ssd_object_detector_src_template = -GST_STATIC_PAD_TEMPLATE ("src", - GST_PAD_SRC, - GST_PAD_ALWAYS, - GST_STATIC_CAPS ("video/x-raw") - ); - -static GstStaticPadTemplate gst_ssd_object_detector_sink_template = -GST_STATIC_PAD_TEMPLATE ("sink", - GST_PAD_SINK, - GST_PAD_ALWAYS, - GST_STATIC_CAPS ("video/x-raw") - ); - -static void gst_ssd_object_detector_set_property (GObject * object, - guint prop_id, const GValue * value, GParamSpec * pspec); -static void gst_ssd_object_detector_get_property (GObject * object, - guint prop_id, GValue * value, GParamSpec * pspec); -static void gst_ssd_object_detector_finalize (GObject * object); -static GstFlowReturn gst_ssd_object_detector_transform_ip (GstBaseTransform * - trans, GstBuffer * buf); -static gboolean gst_ssd_object_detector_process (GstBaseTransform * trans, - GstBuffer * buf); -static gboolean 
-gst_ssd_object_detector_set_caps (GstBaseTransform * trans, GstCaps * incaps, - GstCaps * outcaps); - -G_DEFINE_TYPE (GstSsdObjectDetector, gst_ssd_object_detector, - GST_TYPE_BASE_TRANSFORM); - -static void -gst_ssd_object_detector_class_init (GstSsdObjectDetectorClass * klass) -{ - GObjectClass *gobject_class = (GObjectClass *) klass; - GstElementClass *element_class = (GstElementClass *) klass; - GstBaseTransformClass *basetransform_class = (GstBaseTransformClass *) klass; - - GST_DEBUG_CATEGORY_INIT (ssd_object_detector_debug, "ssdobjectdetector", - 0, "ssdobjectdetector"); - gobject_class->set_property = gst_ssd_object_detector_set_property; - gobject_class->get_property = gst_ssd_object_detector_get_property; - gobject_class->finalize = gst_ssd_object_detector_finalize; - - /** - * GstSsdObjectDetector:label-file - * - * Label file - * - * Since: 1.24 - */ - g_object_class_install_property (G_OBJECT_CLASS (klass), PROP_LABEL_FILE, - g_param_spec_string ("label-file", - "Label file", "Label file", NULL, (GParamFlags) - (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); - - /** - * GstSsdObjectDetector:score-threshold - * - * Threshold for deciding when to remove boxes based on score - * - * Since: 1.24 - */ - g_object_class_install_property (G_OBJECT_CLASS (klass), PROP_SCORE_THRESHOLD, - g_param_spec_float ("score-threshold", - "Score threshold", - "Threshold for deciding when to remove boxes based on score", - 0.0, 1.0, GST_SSD_OBJECT_DETECTOR_DEFAULT_SCORE_THRESHOLD, - (GParamFlags) - (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); - - /** - * GstSsdObjectDetector:size-threshold - * - * Threshold for deciding when to remove boxes based on proportion of the image - * - * Since: 1.26 - */ - g_object_class_install_property (G_OBJECT_CLASS (klass), PROP_SIZE_THRESHOLD, - g_param_spec_float ("size-threshold", - "Size threshold", - "Threshold for deciding when to remove boxes based on proportion of the image", - 0.0, 1.0, 
GST_SSD_OBJECT_DETECTOR_DEFAULT_SIZE_THRESHOLD, - (GParamFlags) - (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); - - gst_element_class_set_static_metadata (element_class, "objectdetector", - "Tensordecoder/Video", - "Apply tensor output from inference to detect objects in video frames", - "Aaron Boxer <aaron.boxer@collabora.com>, Marcus Edel <marcus.edel@collabora.com>"); - gst_element_class_add_pad_template (element_class, - gst_static_pad_template_get (&gst_ssd_object_detector_sink_template)); - gst_element_class_add_pad_template (element_class, - gst_static_pad_template_get (&gst_ssd_object_detector_src_template)); - basetransform_class->transform_ip = - GST_DEBUG_FUNCPTR (gst_ssd_object_detector_transform_ip); - basetransform_class->set_caps = - GST_DEBUG_FUNCPTR (gst_ssd_object_detector_set_caps); -} - -static void -gst_ssd_object_detector_init (GstSsdObjectDetector * self) -{ - self->size_threshold = GST_SSD_OBJECT_DETECTOR_DEFAULT_SIZE_THRESHOLD; - self->score_threshold = GST_SSD_OBJECT_DETECTOR_DEFAULT_SCORE_THRESHOLD; -} - -static void -gst_ssd_object_detector_finalize (GObject * object) -{ - GstSsdObjectDetector *self = GST_SSD_OBJECT_DETECTOR (object); - - g_free (self->label_file); - g_clear_pointer (&self->labels, g_array_unref); - - G_OBJECT_CLASS (gst_ssd_object_detector_parent_class)->finalize (object); -} - -static GArray * -read_labels (const char *labels_file) -{ - GArray *array; - GFile *file = g_file_new_for_path (labels_file); - GFileInputStream *file_stream; - GDataInputStream *data_stream; - GError *error = NULL; - gchar *line; - - file_stream = g_file_read (file, NULL, &error); - g_object_unref (file); - if (!file_stream) { - GST_WARNING ("Could not open file %s: %s\n", labels_file, error->message); - g_clear_error (&error); - return NULL; - } - - data_stream = g_data_input_stream_new (G_INPUT_STREAM (file_stream)); - g_object_unref (file_stream); - - array = g_array_new (FALSE, FALSE, sizeof (GQuark)); - - while ((line = 
g_data_input_stream_read_line (data_stream, NULL, NULL, - &error))) { - GQuark label = g_quark_from_string (line); - g_array_append_val (array, label); - g_free (line); - } - - g_object_unref (data_stream); - - if (error) { - GST_WARNING ("Could not open file %s: %s", labels_file, error->message); - g_array_free (array, TRUE); - g_clear_error (&error); - return NULL; - } - - if (array->len == 0) { - g_array_free (array, TRUE); - return NULL; - } - - return array; -} - -static void -gst_ssd_object_detector_set_property (GObject * object, guint prop_id, - const GValue * value, GParamSpec * pspec) -{ - GstSsdObjectDetector *self = GST_SSD_OBJECT_DETECTOR (object); - const gchar *filename; - - switch (prop_id) { - case PROP_LABEL_FILE: - { - GArray *labels; - - filename = g_value_get_string (value); - labels = read_labels (filename); - - if (labels) { - g_free (self->label_file); - self->label_file = g_strdup (filename); - g_clear_pointer (&self->labels, g_array_unref); - self->labels = labels; - } else { - GST_WARNING_OBJECT (self, "Label file '%s' not found!", filename); - } - } - break; - case PROP_SCORE_THRESHOLD: - GST_OBJECT_LOCK (self); - self->score_threshold = g_value_get_float (value); - GST_OBJECT_UNLOCK (self); - break; - case PROP_SIZE_THRESHOLD: - GST_OBJECT_LOCK (self); - self->size_threshold = g_value_get_float (value); - GST_OBJECT_UNLOCK (self); - break; - default: - G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); - break; - } -} - -static void -gst_ssd_object_detector_get_property (GObject * object, guint prop_id, - GValue * value, GParamSpec * pspec) -{ - GstSsdObjectDetector *self = GST_SSD_OBJECT_DETECTOR (object); - - switch (prop_id) { - case PROP_LABEL_FILE: - g_value_set_string (value, self->label_file); - break; - case PROP_SCORE_THRESHOLD: - GST_OBJECT_LOCK (self); - g_value_set_float (value, self->score_threshold); - GST_OBJECT_UNLOCK (self); - break; - case PROP_SIZE_THRESHOLD: - GST_OBJECT_LOCK (self); - g_value_set_float 
(value, self->size_threshold); - GST_OBJECT_UNLOCK (self); - break; - default: - G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); - break; - } -} - -static GstTensorMeta * -gst_ssd_object_detector_get_tensor_meta (GstSsdObjectDetector * object_detector, - GstBuffer * buf) -{ - GstMeta *meta = NULL; - gpointer iter_state = NULL; - - if (!gst_buffer_get_meta (buf, GST_TENSOR_META_API_TYPE)) { - GST_DEBUG_OBJECT (object_detector, - "missing tensor meta from buffer %" GST_PTR_FORMAT, buf); - return NULL; - } - - // find object detector meta - - while ((meta = gst_buffer_iterate_meta_filtered (buf, &iter_state, - GST_TENSOR_META_API_TYPE))) { - GstTensorMeta *tensor_meta = (GstTensorMeta *) meta; - /* SSD model must have either 3 or 4 output tensor nodes: 4 if there is a label node, - * and only 3 if there is no label */ - if (tensor_meta->num_tensors != 3 && tensor_meta->num_tensors != 4) - continue; - - gint boxesIndex = gst_tensor_meta_get_index_from_id (tensor_meta, - g_quark_from_static_string (GST_MODEL_OBJECT_DETECTOR_BOXES)); - gint scoresIndex = gst_tensor_meta_get_index_from_id (tensor_meta, - g_quark_from_static_string (GST_MODEL_OBJECT_DETECTOR_SCORES)); - gint numDetectionsIndex = gst_tensor_meta_get_index_from_id (tensor_meta, - g_quark_from_static_string (GST_MODEL_OBJECT_DETECTOR_NUM_DETECTIONS)); - gint clasesIndex = gst_tensor_meta_get_index_from_id (tensor_meta, - g_quark_from_static_string (GST_MODEL_OBJECT_DETECTOR_CLASSES)); - - if (boxesIndex == -1 || scoresIndex == -1 || numDetectionsIndex == -1) - continue; - - if (tensor_meta->num_tensors == 4 && clasesIndex == -1) - continue; - - return tensor_meta; - } - - return NULL; -} - -static gboolean -gst_ssd_object_detector_set_caps (GstBaseTransform * trans, GstCaps * incaps, - GstCaps * outcaps) -{ - GstSsdObjectDetector *self = GST_SSD_OBJECT_DETECTOR (trans); - - if (!gst_video_info_from_caps (&self->video_info, incaps)) { - GST_ERROR_OBJECT (self, "Failed to parse caps"); - return 
FALSE; - } - - return TRUE; -} - -static GstFlowReturn -gst_ssd_object_detector_transform_ip (GstBaseTransform * trans, GstBuffer * buf) -{ - if (!gst_base_transform_is_passthrough (trans)) { - if (!gst_ssd_object_detector_process (trans, buf)) { - GST_ELEMENT_ERROR (trans, STREAM, FAILED, - (NULL), ("ssd object detection failed")); - return GST_FLOW_ERROR; - } - } - - return GST_FLOW_OK; -} - -#define DEFINE_GET_FUNC(TYPE, MAX) \ - static gboolean \ - get_ ## TYPE ## _at_index (GstTensor *tensor, GstMapInfo *map, \ - guint index, TYPE * out) \ - { \ - switch (tensor->data_type) { \ - case GST_TENSOR_DATA_TYPE_FLOAT32: { \ - float *f = (float *) map->data; \ - if (sizeof(*f) * (index + 1) > map->size) \ - return FALSE; \ - *out = f[index]; \ - break; \ - } \ - case GST_TENSOR_DATA_TYPE_UINT32: { \ - guint32 *u = (guint32 *) map->data; \ - if (sizeof(*u) * (index + 1) > map->size) \ - return FALSE; \ - *out = u[index]; \ - break; \ - } \ - default: \ - GST_ERROR ("Only float32 and int32 tensors are understood"); \ - return FALSE; \ - } \ - return TRUE; \ - } - -DEFINE_GET_FUNC (guint32, UINT32_MAX) - DEFINE_GET_FUNC (float, FLOAT_MAX) -#undef DEFINE_GET_FUNC - static void - extract_bounding_boxes (GstSsdObjectDetector * self, gsize w, gsize h, - GstAnalyticsRelationMeta * rmeta, GstTensorMeta * tmeta) -{ - gint classes_index; - gint boxes_index; - gint scores_index; - gint numdetect_index; - - GstMapInfo boxes_map = GST_MAP_INFO_INIT; - GstMapInfo numdetect_map = GST_MAP_INFO_INIT; - GstMapInfo scores_map = GST_MAP_INFO_INIT; - GstMapInfo classes_map = GST_MAP_INFO_INIT; - - guint num_detections = 0; - - classes_index = gst_tensor_meta_get_index_from_id (tmeta, - g_quark_from_static_string (GST_MODEL_OBJECT_DETECTOR_CLASSES)); - numdetect_index = gst_tensor_meta_get_index_from_id (tmeta, - g_quark_from_static_string (GST_MODEL_OBJECT_DETECTOR_NUM_DETECTIONS)); - scores_index = gst_tensor_meta_get_index_from_id (tmeta, - g_quark_from_static_string 
(GST_MODEL_OBJECT_DETECTOR_SCORES)); - boxes_index = gst_tensor_meta_get_index_from_id (tmeta, - g_quark_from_static_string (GST_MODEL_OBJECT_DETECTOR_BOXES)); - - if (numdetect_index == -1 || scores_index == -1 || boxes_index == -1) { - GST_WARNING ("Missing tensor data expected for SSD model"); - return; - } - - if (!gst_buffer_map (tmeta->tensors[numdetect_index]->data, &numdetect_map, - GST_MAP_READ)) { - GST_ERROR_OBJECT (self, "Failed to map tensor memory for index %d", - numdetect_index); - goto cleanup; - } - - if (!gst_buffer_map (tmeta->tensors[boxes_index]->data, &boxes_map, - GST_MAP_READ)) { - GST_ERROR_OBJECT (self, "Failed to map tensor memory for index %d", - boxes_index); - goto cleanup; - } - - if (!gst_buffer_map (tmeta->tensors[scores_index]->data, &scores_map, - GST_MAP_READ)) { - GST_ERROR_OBJECT (self, "Failed to map tensor memory for index %d", - scores_index); - goto cleanup; - } - - if (classes_index != -1 && - !gst_buffer_map (tmeta->tensors[classes_index]->data, &classes_map, - GST_MAP_READ)) { - GST_DEBUG_OBJECT (self, "Failed to map tensor memory for index %d", - classes_index); - } - - - if (!get_guint32_at_index (tmeta->tensors[numdetect_index], &numdetect_map, - 0, &num_detections)) { - GST_ERROR_OBJECT (self, "Failed to get the number of detections"); - goto cleanup; - } - - - GST_LOG_OBJECT (self, "Model claims %d detections", num_detections); - - for (int i = 0; i < num_detections; i++) { - float score; - float x, y, bwidth, bheight; - gint x_i, y_i, bwidth_i, bheight_i; - guint32 bclass; - GQuark label = 0; - GstAnalyticsODMtd odmtd; - - if (!get_float_at_index (tmeta->tensors[scores_index], &scores_map, - i, &score)) - continue; - - GST_LOG_OBJECT (self, "Detection %u score is %f", i, score); - if (score < self->score_threshold) - continue; - - if (!get_float_at_index (tmeta->tensors[boxes_index], &boxes_map, - i * 4, &y)) - continue; - if (!get_float_at_index (tmeta->tensors[boxes_index], &boxes_map, - i * 4 + 1, &x)) - continue; - if 
(!get_float_at_index (tmeta->tensors[boxes_index], &boxes_map, - i * 4 + 2, &bheight)) - continue; - if (!get_float_at_index (tmeta->tensors[boxes_index], &boxes_map, - i * 4 + 3, &bwidth)) - continue; - - if (CLAMP (bwidth, 0, 1) * CLAMP (bheight, 0, 1) > self->size_threshold) { - GST_LOG_OBJECT (self, "Object at (%fx%f)=%f > %f, skipping", - CLAMP (bwidth, 0, 1), CLAMP (bheight, 0, 1), - CLAMP (bwidth, 0, 1) * CLAMP (bheight, 0, 1), self->size_threshold); - continue; - } - - if (self->labels && classes_map.memory && - get_guint32_at_index (tmeta->tensors[classes_index], &classes_map, - i, &bclass)) { - if (bclass < self->labels->len) - label = g_array_index (self->labels, GQuark, bclass); - } - - x_i = x * w; - y_i = y * h; - bheight_i = (bheight * h) - y_i; - bwidth_i = (bwidth * w) - x_i; - - if (gst_analytics_relation_meta_add_od_mtd (rmeta, label, - x_i, y_i, bwidth_i, bheight_i, score, &odmtd)) - GST_DEBUG_OBJECT (self, - "Object detected with label : %s, score: %f, bound box: %dx%d at (%d,%d)", - g_quark_to_string (label), score, bwidth_i, bheight_i, x_i, y_i); - else - GST_WARNING_OBJECT (self, "Could not add detection to meta"); - } - -cleanup: - - if (numdetect_map.memory) - gst_buffer_unmap (tmeta->tensors[numdetect_index]->data, &numdetect_map); - if (classes_map.memory) - gst_buffer_unmap (tmeta->tensors[classes_index]->data, &classes_map); - if (scores_map.memory) - gst_buffer_unmap (tmeta->tensors[scores_index]->data, &scores_map); - if (boxes_map.memory) - gst_buffer_unmap (tmeta->tensors[boxes_index]->data, &boxes_map); -} - - -static gboolean -gst_ssd_object_detector_process (GstBaseTransform * trans, GstBuffer * buf) -{ - GstSsdObjectDetector *self = GST_SSD_OBJECT_DETECTOR (trans); - GstTensorMeta *tmeta; - GstAnalyticsRelationMeta *rmeta; - - // get all tensor metas - tmeta = gst_ssd_object_detector_get_tensor_meta (self, buf); - if (!tmeta) { - GST_WARNING_OBJECT (trans, "missing tensor meta"); - return TRUE; - } else { - rmeta = 
gst_buffer_add_analytics_relation_meta (buf); - g_assert (rmeta); - } - - extract_bounding_boxes (self, self->video_info.width, - self->video_info.height, rmeta, tmeta); - - return TRUE; -}
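For reference, the box-decoding loop in `extract_bounding_boxes()` above can be summarized in a small standalone sketch. The helper below is hypothetical (plain C, no GStreamer types, illustrative names): under the assumption that the SSD box tensor stores normalized `[y1, x1, y2, x2]` corners as above, it applies the same score threshold, the same clamped-corner size threshold, and the same corner-to-pixel-rectangle conversion.

```c
/* Hypothetical standalone sketch of the SSD box-decoding logic above;
 * not the element's API. */
#include <stddef.h>

#define CLAMP01(v) ((v) < 0.0f ? 0.0f : ((v) > 1.0f ? 1.0f : (v)))

typedef struct {
  int x, y, w, h;
  float score;
} DetBox;

/* boxes: n * 4 floats in [y1, x1, y2, x2] order, normalized to 0..1.
 * Returns the number of detections written to out (capacity out_len). */
size_t
decode_ssd_boxes (const float *boxes, const float *scores, size_t n,
    int img_w, int img_h, float score_threshold, float size_threshold,
    DetBox *out, size_t out_len)
{
  size_t count = 0;

  for (size_t i = 0; i < n && count < out_len; i++) {
    float y1 = boxes[i * 4], x1 = boxes[i * 4 + 1];
    float y2 = boxes[i * 4 + 2], x2 = boxes[i * 4 + 3];

    if (scores[i] < score_threshold)
      continue;                 /* drop low-confidence detections */
    if (CLAMP01 (x2) * CLAMP01 (y2) > size_threshold)
      continue;                 /* drop boxes covering too much of the frame */

    /* Scale normalized corners to a pixel rectangle, as the element does. */
    out[count].x = (int) (x1 * img_w);
    out[count].y = (int) (y1 * img_h);
    out[count].w = (int) (x2 * img_w) - out[count].x;
    out[count].h = (int) (y2 * img_h) - out[count].y;
    out[count].score = scores[i];
    count++;
  }
  return count;
}
```

This mirrors why a full-frame box (corners at 1.0) is rejected by the default `size-threshold` of 0.9: its clamped corner product is 1.0.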
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/tensordecoders/gstssdobjectdetector.h
Deleted
@@ -1,74 +0,0 @@ -/* - * GStreamer gstreamer-ssdobjectdetector - * Copyright (C) 2021 Collabora Ltd - * - * gstssdobjectdetector.h - * - * This library is free software; you can redistribute it and/or - * modify it under the terms of the GNU Library General Public - * License as published by the Free Software Foundation; either - * version 2 of the License, or (at your option) any later version. - * - * This library is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Library General Public License for more details. - * - * You should have received a copy of the GNU Library General Public - * License along with this library; if not, write to the - * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, - * Boston, MA 02110-1301, USA. - */ - -#ifndef __GST_SSD_OBJECT_DETECTOR_H__ -#define __GST_SSD_OBJECT_DETECTOR_H__ - -#include <gst/gst.h> -#include <gst/video/video.h> -#include <gst/video/gstvideofilter.h> - -G_BEGIN_DECLS - -#define GST_TYPE_SSD_OBJECT_DETECTOR (gst_ssd_object_detector_get_type()) -G_DECLARE_FINAL_TYPE (GstSsdObjectDetector, gst_ssd_object_detector, GST, SSD_OBJECT_DETECTOR, GstBaseTransform) - -#define GST_SSD_OBJECT_DETECTOR_META_NAME "ssd-object-detector" -#define GST_SSD_OBJECT_DETECTOR_META_PARAM_NAME "extra-data" -#define GST_SSD_OBJECT_DETECTOR_META_FIELD_LABEL "label" -#define GST_SSD_OBJECT_DETECTOR_META_FIELD_SCORE "score" - -/* - * GstSsdObjectDetector: - * - * @label_file label file - * @score_threshold score threshold - * - * Since: 1.20 - */ -struct _GstSsdObjectDetector -{ - GstBaseTransform basetransform; - gchar *label_file; - GArray *labels; - gfloat score_threshold; - gfloat size_threshold; - GstVideoInfo video_info; -}; - -/** - * GstSsdObjectDetectorClass: - * - * @parent_class base transform base class - * - * Since: 1.20 - */ -struct _GstSsdObjectDetectorClass -{ - 
GstBaseTransformClass parent_class; -}; - -GST_ELEMENT_REGISTER_DECLARE (ssd_object_detector) - -G_END_DECLS - -#endif /* __GST_SSD_OBJECT_DETECTOR_H__ */
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/y4m
Deleted
-(directory)
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/y4m/gsty4mdec.c
Deleted
@@ -1,962 +0,0 @@ -/* GStreamer - * Copyright (C) 2010 David Schleef <ds@schleef.org> - * - * This library is free software; you can redistribute it and/or - * modify it under the terms of the GNU Library General Public - * License as published by the Free Software Foundation; either - * version 2 of the License, or (at your option) any later version. - * - * This library is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Library General Public License for more details. - * - * You should have received a copy of the GNU Library General Public - * License along with this library; if not, write to the - * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, - * Boston, MA 02110-1301, USA. - */ -/** - * SECTION:element-y4mdec - * @title: gsty4mdec - * - * The gsty4mdec element decodes uncompressed video in YUV4MPEG format. - * - * ## Example launch line - * | - * gst-launch-1.0 -v filesrc location=file.y4m ! y4mdec ! 
xvimagesink - * | - * - */ - -#ifdef HAVE_CONFIG_H -#include "config.h" -#endif - -#include <gst/gst.h> -#include <gst/video/video.h> -#include "gsty4mdec.h" - -#include <stdlib.h> -#include <string.h> - -#define MAX_SIZE 32768 - -GST_DEBUG_CATEGORY (y4mdec_debug); -#define GST_CAT_DEFAULT y4mdec_debug - -/* prototypes */ - - -static void gst_y4m_dec_set_property (GObject * object, - guint property_id, const GValue * value, GParamSpec * pspec); -static void gst_y4m_dec_get_property (GObject * object, - guint property_id, GValue * value, GParamSpec * pspec); -static void gst_y4m_dec_dispose (GObject * object); -static void gst_y4m_dec_finalize (GObject * object); - -static GstFlowReturn gst_y4m_dec_chain (GstPad * pad, GstObject * parent, - GstBuffer * buffer); -static gboolean gst_y4m_dec_sink_event (GstPad * pad, GstObject * parent, - GstEvent * event); - -static gboolean gst_y4m_dec_src_event (GstPad * pad, GstObject * parent, - GstEvent * event); -static gboolean gst_y4m_dec_src_query (GstPad * pad, GstObject * parent, - GstQuery * query); - -static GstStateChangeReturn -gst_y4m_dec_change_state (GstElement * element, GstStateChange transition); - -enum -{ - PROP_0 -}; - -/* pad templates */ - -static GstStaticPadTemplate gst_y4m_dec_sink_template = -GST_STATIC_PAD_TEMPLATE ("sink", - GST_PAD_SINK, - GST_PAD_ALWAYS, - GST_STATIC_CAPS ("application/x-yuv4mpeg, y4mversion=2") - ); - -static GstStaticPadTemplate gst_y4m_dec_src_template = -GST_STATIC_PAD_TEMPLATE ("src", - GST_PAD_SRC, - GST_PAD_ALWAYS, - GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE ("{ \ - I420,Y41B,Y42B,Y444, \ - I420_10LE,I422_10LE,Y444_10LE, \ - I420_12LE,I422_12LE,Y444_12LE, \ - Y444_16LE,GRAY8,GRAY16_LE \ - }"))); - -/* class initialization */ -#define gst_y4m_dec_parent_class parent_class -G_DEFINE_TYPE (GstY4mDec, gst_y4m_dec, GST_TYPE_ELEMENT); -GST_ELEMENT_REGISTER_DEFINE_WITH_CODE (y4mdec, "y4mdec", GST_RANK_SECONDARY, - gst_y4m_dec_get_type (), GST_DEBUG_CATEGORY_INIT (y4mdec_debug, "y4mdec", 
0, - "y4mdec element")); -static void -gst_y4m_dec_class_init (GstY4mDecClass * klass) -{ - GObjectClass *gobject_class = G_OBJECT_CLASS (klass); - GstElementClass *element_class = GST_ELEMENT_CLASS (klass); - - gobject_class->set_property = gst_y4m_dec_set_property; - gobject_class->get_property = gst_y4m_dec_get_property; - gobject_class->dispose = gst_y4m_dec_dispose; - gobject_class->finalize = gst_y4m_dec_finalize; - - element_class->change_state = GST_DEBUG_FUNCPTR (gst_y4m_dec_change_state); - - gst_element_class_add_static_pad_template (element_class, - &gst_y4m_dec_src_template); - gst_element_class_add_static_pad_template (element_class, - &gst_y4m_dec_sink_template); - - gst_element_class_set_static_metadata (element_class, - "YUV4MPEG demuxer/decoder", "Codec/Demuxer", - "Demuxes/decodes YUV4MPEG streams", "David Schleef <ds@schleef.org>"); -} - -static void -gst_y4m_dec_init (GstY4mDec * y4mdec) -{ - y4mdec->adapter = gst_adapter_new (); - - y4mdec->sinkpad = - gst_pad_new_from_static_template (&gst_y4m_dec_sink_template, "sink"); - gst_pad_set_event_function (y4mdec->sinkpad, - GST_DEBUG_FUNCPTR (gst_y4m_dec_sink_event)); - gst_pad_set_chain_function (y4mdec->sinkpad, - GST_DEBUG_FUNCPTR (gst_y4m_dec_chain)); - gst_element_add_pad (GST_ELEMENT (y4mdec), y4mdec->sinkpad); - - y4mdec->srcpad = gst_pad_new_from_static_template (&gst_y4m_dec_src_template, - "src"); - gst_pad_set_event_function (y4mdec->srcpad, - GST_DEBUG_FUNCPTR (gst_y4m_dec_src_event)); - gst_pad_set_query_function (y4mdec->srcpad, - GST_DEBUG_FUNCPTR (gst_y4m_dec_src_query)); - gst_pad_use_fixed_caps (y4mdec->srcpad); - gst_element_add_pad (GST_ELEMENT (y4mdec), y4mdec->srcpad); - -} - -void -gst_y4m_dec_set_property (GObject * object, guint property_id, - const GValue * value, GParamSpec * pspec) -{ - g_return_if_fail (GST_IS_Y4M_DEC (object)); - - switch (property_id) { - default: - G_OBJECT_WARN_INVALID_PROPERTY_ID (object, property_id, pspec); - break; - } -} - -void 
-gst_y4m_dec_get_property (GObject * object, guint property_id, - GValue * value, GParamSpec * pspec) -{ - g_return_if_fail (GST_IS_Y4M_DEC (object)); - - switch (property_id) { - default: - G_OBJECT_WARN_INVALID_PROPERTY_ID (object, property_id, pspec); - break; - } -} - -void -gst_y4m_dec_dispose (GObject * object) -{ - GstY4mDec *y4mdec; - - g_return_if_fail (GST_IS_Y4M_DEC (object)); - y4mdec = GST_Y4M_DEC (object); - - /* clean up as possible. may be called multiple times */ - if (y4mdec->adapter) { - g_object_unref (y4mdec->adapter); - y4mdec->adapter = NULL; - } - - G_OBJECT_CLASS (parent_class)->dispose (object); -} - -void -gst_y4m_dec_finalize (GObject * object) -{ - g_return_if_fail (GST_IS_Y4M_DEC (object)); - - /* clean up object here */ - - G_OBJECT_CLASS (parent_class)->finalize (object); -} - -static GstStateChangeReturn -gst_y4m_dec_change_state (GstElement * element, GstStateChange transition) -{ - GstY4mDec *y4mdec; - GstStateChangeReturn ret; - - g_return_val_if_fail (GST_IS_Y4M_DEC (element), GST_STATE_CHANGE_FAILURE); - - y4mdec = GST_Y4M_DEC (element); - - switch (transition) { - case GST_STATE_CHANGE_NULL_TO_READY: - break; - case GST_STATE_CHANGE_READY_TO_PAUSED: - break; - case GST_STATE_CHANGE_PAUSED_TO_PLAYING: - break; - default: - break; - } - - ret = GST_ELEMENT_CLASS (parent_class)->change_state (element, transition); - - switch (transition) { - case GST_STATE_CHANGE_PLAYING_TO_PAUSED: - break; - case GST_STATE_CHANGE_PAUSED_TO_READY: - if (y4mdec->pool) { - gst_buffer_pool_set_active (y4mdec->pool, FALSE); - gst_object_unref (y4mdec->pool); - } - y4mdec->pool = NULL; - break; - case GST_STATE_CHANGE_READY_TO_NULL: - break; - default: - break; - } - - return ret; -} - -static GstClockTime -gst_y4m_dec_frames_to_timestamp (GstY4mDec * y4mdec, gint64 frame_index) -{ - if (frame_index == -1) - return -1; - - return gst_util_uint64_scale (frame_index, GST_SECOND * y4mdec->info.fps_d, - y4mdec->info.fps_n); -} - -static gint64 
-gst_y4m_dec_timestamp_to_frames (GstY4mDec * y4mdec, GstClockTime timestamp) -{ - if (timestamp == -1) - return -1; - - return gst_util_uint64_scale (timestamp, y4mdec->info.fps_n, - GST_SECOND * y4mdec->info.fps_d); -} - -static gint64 -gst_y4m_dec_bytes_to_frames (GstY4mDec * y4mdec, gint64 bytes) -{ - if (bytes == -1) - return -1; - - if (bytes < y4mdec->header_size) - return 0; - return (bytes - y4mdec->header_size) / (y4mdec->info.size + 6); -} - -static guint64 -gst_y4m_dec_frames_to_bytes (GstY4mDec * y4mdec, gint64 frame_index) -{ - if (frame_index == -1) - return -1; - - return y4mdec->header_size + (y4mdec->info.size + 6) * frame_index; -} - -static GstClockTime -gst_y4m_dec_bytes_to_timestamp (GstY4mDec * y4mdec, gint64 bytes) -{ - if (bytes == -1) - return -1; - - return gst_y4m_dec_frames_to_timestamp (y4mdec, - gst_y4m_dec_bytes_to_frames (y4mdec, bytes)); -} - -static GstVideoFormat -parse_colorspace (const char *param) -{ - char *end; - guint iformat = g_ascii_strtoull (param, &end, 10); - - if (*end == '\0') { - switch (iformat) { - case 420: - return GST_VIDEO_FORMAT_I420; - case 411: - return GST_VIDEO_FORMAT_Y41B; - case 422: - return GST_VIDEO_FORMAT_Y42B; - case 444: - return GST_VIDEO_FORMAT_Y444; - } - } - - /* - * Parse non-standard (i.e., unknown to mjpegtools) streams that are - * generated by FFmpeg: - * https://wiki.multimedia.cx/index.php/YUV4MPEG2 - * https://github.com/FFmpeg/FFmpeg/blob/eee3b7e2/libavformat/yuv4mpegenc.c#L74-L166 - * Will assume little-endian because this is an on-disk serialization format. 
- */ - - // TODO: Differentiate between: - // * C420jpeg: biaxially-displaced chroma planes - // * C420paldv: coincident R and vertically-displaced B - // * C420mpeg2: vertically-displaced chroma planes - if (iformat == 420 && (g_strcmp0 (end, "jpeg") == 0 || - g_strcmp0 (end, "paldv") == 0 || g_strcmp0 (end, "mpeg2") == 0)) - return GST_VIDEO_FORMAT_I420; - - if (iformat == 0 && strncmp (end, "mono", 4) == 0) { - char *type = end + 4; - if (*type == '\0') - return GST_VIDEO_FORMAT_GRAY8; - if (g_strcmp0 (type, "16") == 0) - return GST_VIDEO_FORMAT_GRAY16_LE; - } - - if (*end == 'p') { - guint depth = g_ascii_strtoull (end + 1, NULL, 10); - if (depth == 10) { - switch (iformat) { - case 420: - return GST_VIDEO_FORMAT_I420_10LE; - case 422: - return GST_VIDEO_FORMAT_I422_10LE; - case 444: - return GST_VIDEO_FORMAT_Y444_10LE; - } - } else if (depth == 12) { - switch (iformat) { - case 420: - return GST_VIDEO_FORMAT_I420_12LE; - case 422: - return GST_VIDEO_FORMAT_I422_12LE; - case 444: - return GST_VIDEO_FORMAT_Y444_12LE; - } - } else if (depth == 16 && iformat == 444) { - return GST_VIDEO_FORMAT_Y444_16LE; - } - } - - GST_WARNING ("%s is not a supported format", param); - return GST_VIDEO_FORMAT_UNKNOWN; -} - -static gboolean -parse_ratio (const char *param, gulong * n, gulong * d) -{ - char *end; - *n = g_ascii_strtoull (param, &end, 10); - if (end == param) - return FALSE; - param = end; - if (param[0] != ':') - return FALSE; - param++; - *d = g_ascii_strtoull (param, &end, 10); - if (end == param) - return FALSE; - return TRUE; -} - -static gboolean -gst_y4m_dec_parse_header (GstY4mDec * y4mdec, char *header) -{ - guint len; - char **params; - guint interlaced_char = 0; - gulong fps_n = 0, fps_d = 0; - gulong par_n = 0, par_d = 0; - gulong width = 0, height = 0; - GstVideoFormat format = GST_VIDEO_FORMAT_I420; - - if (memcmp (header, "YUV4MPEG2 ", 10) != 0) { - GST_ERROR_OBJECT (y4mdec, "y4m start code not found"); - return FALSE; - } - - header += 10; - if 
(!g_str_is_ascii (header)) { - GST_ERROR_OBJECT (y4mdec, "Invalid non-ASCII y4m header: %s", header); - return FALSE; - } - - GST_INFO_OBJECT (y4mdec, "Found header: %s", header); - params = g_strsplit (header, " ", -1); - len = g_strv_length (params); - - for (int i = 0; i < len; i++) { - const char *param = params[i]; - char param_type = *param; - const char *param_value = param + 1; - switch (param_type) { - case 'C': - format = parse_colorspace (param_value); - if (format == GST_VIDEO_FORMAT_UNKNOWN) { - GST_ERROR_OBJECT (y4mdec, "Failed to parse colorspace: %s", param); - return FALSE; - } - GST_INFO_OBJECT (y4mdec, "Parsed format as %s", - gst_video_format_to_string (format)); - continue; - case 'W': - if ((width = g_ascii_strtoull (param_value, NULL, 10)) == 0) { - GST_ERROR_OBJECT (y4mdec, "Failed to parse width: %s", param); - return FALSE; - } - continue; - case 'H': - if ((height = g_ascii_strtoull (param_value, NULL, 10)) == 0) { - GST_ERROR_OBJECT (y4mdec, "Failed to parse height: %s", param); - return FALSE; - } - continue; - case 'I': - if ((interlaced_char = param_value[0]) == 0) { - GST_ERROR_OBJECT (y4mdec, "Expecting interlaced flag: %s", param); - return FALSE; - } - continue; - case 'F': - if (!parse_ratio (param_value, &fps_n, &fps_d)) { - GST_ERROR_OBJECT (y4mdec, "Failed to parse framerate: %s", param); - return FALSE; - } - continue; - case 'A': - if (!parse_ratio (param_value, &par_n, &par_d)) { - GST_ERROR_OBJECT (y4mdec, "Failed to parse PAR: %s", param); - return FALSE; - } - continue; - } - GST_WARNING_OBJECT (y4mdec, "Unknown y4m param field '%s', ignoring", - param); - } - g_strfreev (params); - - if (width > MAX_SIZE || height > MAX_SIZE) { - GST_ERROR_OBJECT (y4mdec, "Dimensions %lux%lu out of range", width, height); - return FALSE; - } - - gst_video_info_init (&y4mdec->info); - gst_video_info_set_format (&y4mdec->out_info, format, width, height); - y4mdec->info = y4mdec->out_info; - - switch (y4mdec->info.finfo->format) { - case 
GST_VIDEO_FORMAT_I420: - y4mdec->info.offset[0] = 0; - y4mdec->info.stride[0] = width; - y4mdec->info.offset[1] = y4mdec->info.stride[0] * height; - y4mdec->info.stride[1] = GST_ROUND_UP_2 (width) / 2; - y4mdec->info.offset[2] = - y4mdec->info.offset[1] + - y4mdec->info.stride[1] * (GST_ROUND_UP_2 (height) / 2); - y4mdec->info.stride[2] = GST_ROUND_UP_2 (width) / 2; - y4mdec->info.size = - y4mdec->info.offset[2] + - y4mdec->info.stride[2] * (GST_ROUND_UP_2 (height) / 2); - break; - case GST_VIDEO_FORMAT_Y42B: - y4mdec->info.offset[0] = 0; - y4mdec->info.stride[0] = width; - y4mdec->info.offset[1] = y4mdec->info.stride[0] * height; - y4mdec->info.stride[1] = GST_ROUND_UP_2 (width) / 2; - y4mdec->info.offset[2] = - y4mdec->info.offset[1] + y4mdec->info.stride[1] * height; - y4mdec->info.stride[2] = GST_ROUND_UP_2 (width) / 2; - y4mdec->info.size = - y4mdec->info.offset[2] + y4mdec->info.stride[2] * height; - break; - case GST_VIDEO_FORMAT_Y444: - y4mdec->info.offset[0] = 0; - y4mdec->info.stride[0] = width; - y4mdec->info.offset[1] = y4mdec->info.stride[0] * height; - y4mdec->info.stride[1] = width; - y4mdec->info.offset[2] = - y4mdec->info.offset[1] + y4mdec->info.stride[1] * height; - y4mdec->info.stride[2] = width; - y4mdec->info.size = - y4mdec->info.offset[2] + y4mdec->info.stride[2] * height; - break; - default: - break; - } - - switch (interlaced_char) { - case 0: - case '?': - case 'p': - y4mdec->info.interlace_mode = GST_VIDEO_INTERLACE_MODE_PROGRESSIVE; - break; - case 't': - case 'b': - y4mdec->info.interlace_mode = GST_VIDEO_INTERLACE_MODE_INTERLEAVED; - break; - default: - GST_ERROR_OBJECT (y4mdec, "Unknown interlaced char '%c'", - interlaced_char); - return FALSE; - break; - } - - if (fps_n == 0) - fps_n = 1; - if (fps_d == 0) - fps_d = 1; - if (par_n == 0) - par_n = 1; - if (par_d == 0) - par_d = 1; - - y4mdec->info.fps_n = fps_n; - y4mdec->info.fps_d = fps_d; - y4mdec->info.par_n = par_n; - y4mdec->info.par_d = par_d; - - return TRUE; -} - -static GstFlowReturn -gst_y4m_dec_chain (GstPad * pad, GstObject * parent, 
GstBuffer * buffer) -{ - GstY4mDec *y4mdec; - int n_avail; - GstFlowReturn flow_ret = GST_FLOW_OK; -#define MAX_HEADER_LENGTH 80 - char header[MAX_HEADER_LENGTH]; - int i; - int len; - - y4mdec = GST_Y4M_DEC (parent); - - GST_DEBUG_OBJECT (y4mdec, "chain"); - - if (GST_BUFFER_IS_DISCONT (buffer)) { - GST_DEBUG ("got discont"); - gst_adapter_clear (y4mdec->adapter); - } - - gst_adapter_push (y4mdec->adapter, buffer); - n_avail = gst_adapter_available (y4mdec->adapter); - - if (!y4mdec->have_header) { - gboolean ret; - GstCaps *caps; - GstQuery *query; - - if (n_avail < MAX_HEADER_LENGTH) - return GST_FLOW_OK; - - gst_adapter_copy (y4mdec->adapter, (guint8 *) header, 0, MAX_HEADER_LENGTH); - - header[MAX_HEADER_LENGTH - 1] = 0; - for (i = 0; i < MAX_HEADER_LENGTH; i++) { - if (header[i] == 0x0a) - header[i] = 0; - } - - ret = gst_y4m_dec_parse_header (y4mdec, header); - if (!ret) { - GST_ELEMENT_ERROR (y4mdec, STREAM, DECODE, - ("Failed to parse YUV4MPEG header"), (NULL)); - return GST_FLOW_ERROR; - } - - y4mdec->header_size = strlen (header) + 1; - gst_adapter_flush (y4mdec->adapter, y4mdec->header_size); - - caps = gst_video_info_to_caps (&y4mdec->info); - ret = gst_pad_set_caps (y4mdec->srcpad, caps); - - query = gst_query_new_allocation (caps, FALSE); - y4mdec->video_meta = FALSE; - - if (y4mdec->pool) { - gst_buffer_pool_set_active (y4mdec->pool, FALSE); - gst_object_unref (y4mdec->pool); - } - y4mdec->pool = NULL; - - if (gst_pad_peer_query (y4mdec->srcpad, query)) { - y4mdec->video_meta = - gst_query_find_allocation_meta (query, GST_VIDEO_META_API_TYPE, NULL); - - /* We only need a pool if we need to do stride conversion for downstream */ - if (!y4mdec->video_meta && memcmp (&y4mdec->info, &y4mdec->out_info, - sizeof (y4mdec->info)) != 0) { - GstBufferPool *pool = NULL; - GstAllocator *allocator = NULL; - GstAllocationParams params; - GstStructure *config; - guint size, min, max; - - if (gst_query_get_n_allocation_params (query) > 0) { - 
gst_query_parse_nth_allocation_param (query, 0, &allocator, &params); - } else { - allocator = NULL; - gst_allocation_params_init (&params); - } - - if (gst_query_get_n_allocation_pools (query) > 0) { - gst_query_parse_nth_allocation_pool (query, 0, &pool, &size, &min, - &max); - size = MAX (size, y4mdec->out_info.size); - } else { - pool = NULL; - size = y4mdec->out_info.size; - min = max = 0; - } - - if (pool == NULL) { - pool = gst_video_buffer_pool_new (); - } - - config = gst_buffer_pool_get_config (pool); - gst_buffer_pool_config_set_params (config, caps, size, min, max); - gst_buffer_pool_config_set_allocator (config, allocator, &params); - gst_buffer_pool_set_config (pool, config); - - if (allocator) - gst_object_unref (allocator); - - y4mdec->pool = pool; - } - } else if (memcmp (&y4mdec->info, &y4mdec->out_info, - sizeof (y4mdec->info)) != 0) { - GstBufferPool *pool; - GstStructure *config; - - /* No pool, create our own if we need to do stride conversion */ - pool = gst_video_buffer_pool_new (); - config = gst_buffer_pool_get_config (pool); - gst_buffer_pool_config_set_params (config, caps, y4mdec->out_info.size, 0, - 0); - gst_buffer_pool_set_config (pool, config); - y4mdec->pool = pool; - } - if (y4mdec->pool) { - gst_buffer_pool_set_active (y4mdec->pool, TRUE); - } - gst_query_unref (query); - gst_caps_unref (caps); - if (!ret) { - GST_DEBUG_OBJECT (y4mdec, "Couldn't set caps on src pad"); - return GST_FLOW_ERROR; - } - - y4mdec->have_header = TRUE; - } - - if (y4mdec->have_new_segment) { - GstEvent *event; - GstClockTime start = gst_y4m_dec_bytes_to_timestamp (y4mdec, - y4mdec->segment.start); - GstClockTime stop = gst_y4m_dec_bytes_to_timestamp (y4mdec, - y4mdec->segment.stop); - GstClockTime time = gst_y4m_dec_bytes_to_timestamp (y4mdec, - y4mdec->segment.time); - GstSegment seg; - - gst_segment_init (&seg, GST_FORMAT_TIME); - seg.start = start; - seg.stop = stop; - seg.time = time; - event = gst_event_new_segment (&seg); - - gst_pad_push_event 
(y4mdec->srcpad, event); - //gst_event_unref (event); - - y4mdec->have_new_segment = FALSE; - y4mdec->frame_index = gst_y4m_dec_bytes_to_frames (y4mdec, - y4mdec->segment.time); - GST_DEBUG ("new frame_index %d", y4mdec->frame_index); - - } - - while (1) { - n_avail = gst_adapter_available (y4mdec->adapter); - if (n_avail < MAX_HEADER_LENGTH) - break; - - gst_adapter_copy (y4mdec->adapter, (guint8 *) header, 0, MAX_HEADER_LENGTH); - header[MAX_HEADER_LENGTH - 1] = 0; - for (i = 0; i < MAX_HEADER_LENGTH; i++) { - if (header[i] == 0x0a) - header[i] = 0; - } - if (memcmp (header, "FRAME", 5) != 0) { - GST_ELEMENT_ERROR (y4mdec, STREAM, DECODE, - ("Failed to parse YUV4MPEG frame"), (NULL)); - flow_ret = GST_FLOW_ERROR; - break; - } - - len = strlen (header); - if (n_avail < y4mdec->info.size + len + 1) { - /* not enough data */ - GST_TRACE ("not enough data for frame %d < %" G_GSIZE_FORMAT, - n_avail, y4mdec->info.size + len + 1); - break; - } - - gst_adapter_flush (y4mdec->adapter, len + 1); - - buffer = gst_adapter_take_buffer (y4mdec->adapter, y4mdec->info.size); - - GST_BUFFER_TIMESTAMP (buffer) = - gst_y4m_dec_frames_to_timestamp (y4mdec, y4mdec->frame_index); - GST_BUFFER_DURATION (buffer) = - gst_y4m_dec_frames_to_timestamp (y4mdec, y4mdec->frame_index + 1) - - GST_BUFFER_TIMESTAMP (buffer); - - y4mdec->frame_index++; - - if (y4mdec->video_meta) { - gst_buffer_add_video_meta_full (buffer, 0, y4mdec->info.finfo->format, - y4mdec->info.width, y4mdec->info.height, y4mdec->info.finfo->n_planes, - y4mdec->info.offset, y4mdec->info.stride); - } else if (memcmp (&y4mdec->info, &y4mdec->out_info, - sizeof (y4mdec->info)) != 0) { - GstBuffer *outbuf; - GstVideoFrame iframe, oframe; - gint i, j; - gint w, h, istride, ostride; - guint8 *src, *dest; - - /* Allocate a new buffer and do stride conversion */ - g_assert (y4mdec->pool != NULL); - - flow_ret = gst_buffer_pool_acquire_buffer (y4mdec->pool, &outbuf, NULL); - if (flow_ret != GST_FLOW_OK) { - gst_buffer_unref (buffer); - 
break; - } - - gst_video_frame_map (&iframe, &y4mdec->info, buffer, GST_MAP_READ); - gst_video_frame_map (&oframe, &y4mdec->out_info, outbuf, GST_MAP_WRITE); - - for (i = 0; i < 3; i++) { - w = GST_VIDEO_FRAME_COMP_WIDTH (&iframe, i); - h = GST_VIDEO_FRAME_COMP_HEIGHT (&iframe, i); - istride = GST_VIDEO_FRAME_COMP_STRIDE (&iframe, i); - ostride = GST_VIDEO_FRAME_COMP_STRIDE (&oframe, i); - src = GST_VIDEO_FRAME_COMP_DATA (&iframe, i); - dest = GST_VIDEO_FRAME_COMP_DATA (&oframe, i); - - for (j = 0; j < h; j++) { - memcpy (dest, src, w); - - dest += ostride; - src += istride; - } - } - - gst_video_frame_unmap (&iframe); - gst_video_frame_unmap (&oframe); - gst_buffer_copy_into (outbuf, buffer, GST_BUFFER_COPY_TIMESTAMPS, 0, -1); - gst_buffer_unref (buffer); - buffer = outbuf; - } - - flow_ret = gst_pad_push (y4mdec->srcpad, buffer); - if (flow_ret != GST_FLOW_OK) - break; - } - - GST_DEBUG ("returning %d", flow_ret); - - return flow_ret; -} - -static gboolean -gst_y4m_dec_sink_event (GstPad * pad, GstObject * parent, GstEvent * event) -{ - gboolean res; - GstY4mDec *y4mdec; - - y4mdec = GST_Y4M_DEC (parent); - - GST_DEBUG_OBJECT (y4mdec, "event"); - - switch (GST_EVENT_TYPE (event)) { - case GST_EVENT_FLUSH_START: - res = gst_pad_push_event (y4mdec->srcpad, event); - break; - case GST_EVENT_FLUSH_STOP: - res = gst_pad_push_event (y4mdec->srcpad, event); - break; - case GST_EVENT_SEGMENT: - { - GstSegment seg; - - gst_event_copy_segment (event, &seg); - - GST_DEBUG ("segment: %" GST_SEGMENT_FORMAT, &seg); - - if (seg.format == GST_FORMAT_BYTES) { - y4mdec->segment = seg; - y4mdec->have_new_segment = TRUE; - } - - res = TRUE; - /* not sure why it's not forwarded, but let's unref it so it - doesn't leak, remove the unref if it gets forwarded again */ - gst_event_unref (event); - //res = gst_pad_push_event (y4mdec->srcpad, event); - } - break; - case GST_EVENT_EOS: - default: - res = gst_pad_event_default (pad, parent, event); - break; - } - - return res; -} - -static 
gboolean -gst_y4m_dec_src_event (GstPad * pad, GstObject * parent, GstEvent * event) -{ - gboolean res; - GstY4mDec *y4mdec; - - y4mdec = GST_Y4M_DEC (parent); - - GST_DEBUG_OBJECT (y4mdec, "event"); - - switch (GST_EVENT_TYPE (event)) { - case GST_EVENT_SEEK: - { - gdouble rate; - GstFormat format; - GstSeekFlags flags; - GstSeekType start_type, stop_type; - gint64 start, stop; - gint64 framenum; - guint64 byte; - - gst_event_parse_seek (event, &rate, &format, &flags, &start_type, - &start, &stop_type, &stop); - - if (format != GST_FORMAT_TIME) { - res = FALSE; - break; - } - - framenum = gst_y4m_dec_timestamp_to_frames (y4mdec, start); - GST_DEBUG ("seeking to frame %" G_GINT64_FORMAT, framenum); - if (framenum == -1) { - res = FALSE; - break; - } - - byte = gst_y4m_dec_frames_to_bytes (y4mdec, framenum); - GST_DEBUG ("offset %" G_GUINT64_FORMAT, (guint64) byte); - if (byte == -1) { - res = FALSE; - break; - } - - gst_event_unref (event); - event = gst_event_new_seek (rate, GST_FORMAT_BYTES, flags, - start_type, byte, stop_type, -1); - - res = gst_pad_push_event (y4mdec->sinkpad, event); - } - break; - default: - res = gst_pad_event_default (pad, parent, event); - break; - } - - return res; -} - -static gboolean -gst_y4m_dec_src_query (GstPad * pad, GstObject * parent, GstQuery * query) -{ - GstY4mDec *y4mdec = GST_Y4M_DEC (parent); - gboolean res = FALSE; - - switch (GST_QUERY_TYPE (query)) { - case GST_QUERY_DURATION: - { - GstFormat format; - GstQuery *peer_query; - - GST_DEBUG ("duration query"); - - gst_query_parse_duration (query, &format, NULL); - - if (format != GST_FORMAT_TIME) { - res = FALSE; - GST_DEBUG_OBJECT (y4mdec, "not handling duration query in format %d", - format); - break; - } - - peer_query = gst_query_new_duration (GST_FORMAT_BYTES); - - res = gst_pad_peer_query (y4mdec->sinkpad, peer_query); - if (res) { - gint64 duration; - int n_frames; - - gst_query_parse_duration (peer_query, &format, &duration); - - n_frames = 
gst_y4m_dec_bytes_to_frames (y4mdec, duration); - GST_DEBUG ("duration in frames %d", n_frames); - - duration = gst_y4m_dec_frames_to_timestamp (y4mdec, n_frames); - GST_DEBUG ("duration in time %" GST_TIME_FORMAT, - GST_TIME_ARGS (duration)); - - gst_query_set_duration (query, GST_FORMAT_TIME, duration); - res = TRUE; - } - gst_query_unref (peer_query); - break; - } - default: - res = gst_pad_query_default (pad, parent, query); - break; - } - - return res; -} - - -static gboolean -plugin_init (GstPlugin * plugin) -{ - return GST_ELEMENT_REGISTER (y4mdec, plugin); -} - -GST_PLUGIN_DEFINE (GST_VERSION_MAJOR, - GST_VERSION_MINOR, - y4mdec, - "Demuxes/decodes YUV4MPEG streams", - plugin_init, VERSION, "LGPL", PACKAGE_NAME, GST_PACKAGE_ORIGIN)
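The plane layout that `gst_y4m_dec_parse_header` computes above for I420 can be sketched in Python (a hypothetical helper mirroring the deleted C logic, not part of the plugin; `GST_ROUND_UP_2` rounds up to the next even number):

```python
def round_up_2(x):
    # Mirrors GST_ROUND_UP_2: round up to the next even number.
    return (x + 1) & ~1

def i420_layout(width, height):
    """Plane offsets, strides and total frame size for I420.

    The Y plane is full resolution; U and V are subsampled 2x2,
    with widths and heights rounded up as in the C code above.
    """
    stride = [width, round_up_2(width) // 2, round_up_2(width) // 2]
    offset = [0, stride[0] * height, 0]
    chroma_rows = round_up_2(height) // 2
    offset[2] = offset[1] + stride[1] * chroma_rows
    size = offset[2] + stride[2] * chroma_rows
    return offset, stride, size
```

For a 4x4 frame this gives a 16-byte Y plane followed by two 4-byte chroma planes, 24 bytes per frame in total.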
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/y4m/gsty4mdec.h
Deleted
@@ -1,69 +0,0 @@ -/* GStreamer - * Copyright (C) 2010 David Schleef <ds@schleef.org> - * - * This library is free software; you can redistribute it and/or - * modify it under the terms of the GNU Library General Public - * License as published by the Free Software Foundation; either - * version 2 of the License, or (at your option) any later version. - * - * This library is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Library General Public License for more details. - * - * You should have received a copy of the GNU Library General Public - * License along with this library; if not, write to the - * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, - * Boston, MA 02110-1301, USA. - */ - -#ifndef _GST_Y4M_DEC_H_ -#define _GST_Y4M_DEC_H_ - -#include <gst/gst.h> -#include <gst/base/gstadapter.h> -#include <gst/video/video.h> - -G_BEGIN_DECLS - -#define GST_TYPE_Y4M_DEC (gst_y4m_dec_get_type()) -#define GST_Y4M_DEC(obj) (G_TYPE_CHECK_INSTANCE_CAST((obj),GST_TYPE_Y4M_DEC,GstY4mDec)) -#define GST_Y4M_DEC_CLASS(klass) (G_TYPE_CHECK_CLASS_CAST((klass),GST_TYPE_Y4M_DEC,GstY4mDecClass)) -#define GST_IS_Y4M_DEC(obj) (G_TYPE_CHECK_INSTANCE_TYPE((obj),GST_TYPE_Y4M_DEC)) -#define GST_IS_Y4M_DEC_CLASS(klass) (G_TYPE_CHECK_CLASS_TYPE((klass),GST_TYPE_Y4M_DEC)) - -typedef struct _GstY4mDec GstY4mDec; -typedef struct _GstY4mDecClass GstY4mDecClass; - -struct _GstY4mDec -{ - GstElement base_y4mdec; - - GstPad *sinkpad; - GstPad *srcpad; - GstAdapter *adapter; - - /* state */ - gboolean have_header; - int frame_index; - int header_size; - - gboolean have_new_segment; - GstSegment segment; - - GstVideoInfo info; - GstVideoInfo out_info; - gboolean video_meta; - GstBufferPool *pool; -}; - -struct _GstY4mDecClass -{ - GstElementClass base_y4mdec_class; -}; - -GType gst_y4m_dec_get_type (void); -GST_ELEMENT_REGISTER_DECLARE (y4mdec); - -G_END_DECLS - -#endif
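The stream layout the decoder above consumes — one `YUV4MPEG2 …` header line, then a `FRAME …` line before each raw frame, each terminated by 0x0a — can be sketched with a small Python splitter (a hypothetical illustration, not part of the plugin; the real element derives the frame size from the parsed header, here the caller supplies it):

```python
def split_y4m(data: bytes, frame_size: int):
    """Split a YUV4MPEG stream into (header_line, [frame_payload, ...])."""
    header, _, rest = data.partition(b"\n")
    if not header.startswith(b"YUV4MPEG2"):
        raise ValueError("not a YUV4MPEG stream")
    frames = []
    while rest:
        # Each frame is preceded by a FRAME marker line, matching the
        # memcmp (header, "FRAME", 5) check in the chain function.
        marker, _, rest = rest.partition(b"\n")
        if not marker.startswith(b"FRAME"):
            raise ValueError("expected FRAME marker")
        frames.append(rest[:frame_size])
        rest = rest[frame_size:]
    return header, frames
```

For a 2x2 I420 stream each frame payload is 6 bytes (4 luma + 1 + 1 chroma), so `split_y4m(data, 6)` yields the header line and one 6-byte payload per `FRAME` marker.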
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/y4m/meson.build
Deleted
@@ -1,30 +0,0 @@ -y4_sources = [ - 'gsty4mdec.c' - ] - -y4_headers = [ - 'gsty4mdec.h', - ] - -doc_sources = [] -foreach s: y4_sources + y4_headers - doc_sources += meson.current_source_dir() / s -endforeach - -plugin_sources += { - 'y4mdec': pathsep.join(doc_sources) -} - -if get_option('y4m').disabled() - subdir_done() -endif - -gsty4mdec = library('gsty4mdec', - y4_sources, - c_args : gst_plugins_bad_args, - include_directories : configinc, - dependencies : [gstbase_dep, gstvideo_dep], - install : true, - install_dir : plugins_install_dir, -) -plugins += [gsty4mdec]
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/meson_options.txt
Deleted
@@ -1,315 +0,0 @@ -option('gst_play_tests', type: 'boolean', value: false, - description: 'Enable GstPlay tests that need network access') - -# Feature options for plugins without external deps -option('accurip', type : 'feature', value : 'auto') -option('adpcmdec', type : 'feature', value : 'auto') -option('adpcmenc', type : 'feature', value : 'auto') -option('aiff', type : 'feature', value : 'auto') -option('asfmux', type : 'feature', value : 'auto') -option('audiobuffersplit', type : 'feature', value : 'auto') -option('audiofxbad', type : 'feature', value : 'auto') -option('audiolatency', type : 'feature', value : 'auto') -option('audiomixmatrix', type : 'feature', value : 'auto') -option('audiovisualizers', type : 'feature', value : 'auto') -option('autoconvert', type : 'feature', value : 'auto') -option('bayer', type : 'feature', value : 'auto') -option('camerabin2', type : 'feature', value : 'auto') -option('codecalpha', type : 'feature', value : 'auto') -option('codectimestamper', type : 'feature', value : 'auto') -option('coloreffects', type : 'feature', value : 'auto') -option('debugutils', type : 'feature', value : 'auto') -option('dvbsubenc', type : 'feature', value : 'auto') -option('dvbsuboverlay', type : 'feature', value : 'auto') -option('dvdspu', type : 'feature', value : 'auto') -option('faceoverlay', type : 'feature', value : 'auto') -option('festival', type : 'feature', value : 'auto') -option('fieldanalysis', type : 'feature', value : 'auto') -option('freeverb', type : 'feature', value : 'auto') -option('frei0r', type : 'feature', value : 'auto') -option('gaudieffects', type : 'feature', value : 'auto') -option('gdp', type : 'feature', value : 'auto') -option('geometrictransform', type : 'feature', value : 'auto') -option('id3tag', type : 'feature', value : 'auto') -option('insertbin', type : 'feature', value : 'auto') -option('inter', type : 'feature', value : 'auto') -option('interlace', type : 'feature', value : 'auto') -option('ivfparse', 
type : 'feature', value : 'auto') -option('ivtc', type : 'feature', value : 'auto') -option('jp2kdecimator', type : 'feature', value : 'auto') -option('jpegformat', type : 'feature', value : 'auto') -option('lcevcdecoder', type : 'feature', value : 'auto') -option('lcevcencoder', type : 'feature', value : 'auto') -option('librfb', type : 'feature', value : 'auto') -option('midi', type : 'feature', value : 'auto') -option('mpegdemux', type : 'feature', value : 'auto') -option('mpegpsmux', type : 'feature', value : 'auto') -option('mpegtsdemux', type : 'feature', value : 'auto') -option('mpegtsmux', type : 'feature', value : 'auto') -option('mse', type : 'feature', value : 'auto') -option('mxf', type : 'feature', value : 'auto') -option('netsim', type : 'feature', value : 'auto') -option('onvif', type : 'feature', value : 'auto') -option('pcapparse', type : 'feature', value : 'auto') -option('pnm', type : 'feature', value : 'auto') -option('proxy', type : 'feature', value : 'auto') -option('rawparse', type : 'feature', value : 'auto') -option('removesilence', type : 'feature', value : 'auto') -option('rist', type : 'feature', value : 'auto') -option('rtmp2', type : 'feature', value : 'auto') -option('rtp', type : 'feature', value : 'auto') -option('sdp', type : 'feature', value : 'auto') -option('segmentclip', type : 'feature', value : 'auto') -option('siren', type : 'feature', value : 'auto') -option('smooth', type : 'feature', value : 'auto') -option('speed', type : 'feature', value : 'auto') -option('subenc', type : 'feature', value : 'auto') -option('switchbin', type : 'feature', value : 'auto') -option('tensordecoders', type : 'feature', value : 'auto') -option('timecode', type : 'feature', value : 'auto') -option('unixfd', type : 'feature', value : 'auto') -option('videofilters', type : 'feature', value : 'auto') -option('videoframe_audiolevel', type : 'feature', value : 'auto') -option('videoparsers', type : 'feature', value : 'auto') -option('videosignal', 
type : 'feature', value : 'auto') -option('vmnc', type : 'feature', value : 'auto') -option('y4m', type : 'feature', value : 'auto') - -# Feature options for libraries that need external deps -option('opencv', type : 'feature', value : 'auto', description : 'OpenCV computer vision library support') - -# Feature options for optional deps in plugins -option('drm', type : 'feature', value : 'auto', description: 'libdrm support in the GstVA library') -option('udev', type : 'feature', value : 'auto', description: 'gudev support in the new VA-API plugin') -option('wayland', type : 'feature', value : 'auto', description : 'Wayland plugin/library, support in the Vulkan plugin') -option('x11', type : 'feature', value : 'auto', description : 'X11 support in Vulkan, GL and rfb plugins') - -# Feature options for plugins that need external deps -option('aes', type : 'feature', value : 'auto', description : 'AES encryption/decryption plugin') -option('aja', type : 'feature', value : 'auto', description : 'AJA audio/video source/sink plugin') -option('aom', type : 'feature', value : 'auto', description : 'AOM AV1 video codec plugin') -option('avtp', type : 'feature', value : 'auto', description : 'Audio/Video Transport Protocol (AVTP) plugin') -option('amfcodec', type : 'feature', value : 'auto', description : 'AMD AMF codec plugin') -option('analyticsoverlay', type: 'feature', value : 'auto') -option('androidmedia', type : 'feature', value : 'auto', description : 'Video capture and codec plugins for Android') -option('applemedia', type : 'feature', value : 'auto', description : 'Video capture and codec access plugins for macOS and iOS') -option('asio', type : 'feature', value : 'auto', description : 'Steinberg Audio Streaming Input Output (ASIO) plugin') -option('assrender', type : 'feature', value : 'auto', description : 'ASS/SSA subtitle renderer plugin') -option('bluez', type : 'feature', value : 'auto', description : 'Bluetooth audio A2DP/AVDTP sink, AVDTP source plugin') 
-option('bs2b', type : 'feature', value : 'auto', description : 'Bauer stereophonic-to-binaural audio plugin') -option('bz2', type : 'feature', value : 'auto', description : 'bz2 stream encoder and decoder plugin') -option('chromaprint', type : 'feature', value : 'auto', description : 'Chromaprint fingerprint audio plugin') -option('closedcaption', type : 'feature', value : 'auto', description : 'Closed caption extractor, decoder, and overlay plugin') -option('codec2json', type : 'feature', value : 'auto') -option('colormanagement', type : 'feature', value : 'auto', description : 'Color management correction plugin') -option('curl', type : 'feature', value : 'auto', description : 'cURL network source and sink plugin') -option('curl-ssh2', type : 'feature', value : 'auto', description : 'cURL network source and sink plugin libssh2 support') -option('d3dvideosink', type : 'feature', value : 'auto', description : 'Direct3D video sink plugin') -option('d3d11', type : 'feature', value : 'auto', description : 'Direct3D11 plugin') -option('d3d12', type : 'feature', value : 'auto', description : 'Direct3D12 plugin') -option('dash', type : 'feature', value : 'auto', description : 'DASH demuxer plugin') -option('dc1394', type : 'feature', value : 'auto', description : 'libdc1394 IIDC camera source plugin') -option('decklink', type : 'feature', value : 'auto', description : 'DeckLink audio/video source/sink plugin') -option('directfb', type : 'feature', value : 'auto', description : 'DirectFB video sink plugin') -option('directsound', type : 'feature', value : 'auto', description : 'Directsound audio source plugin') -option('directshow', type : 'feature', value : 'auto', description : 'Directshow audio/video plugins') -option('dtls', type : 'feature', value : 'auto', description : 'DTLS encoder and decoder plugin') -option('dts', type : 'feature', value : 'auto', description : 'DTS audio decoder plugin (GPL - only built if gpl option is also enabled!)') -option('dvb', type : 
'feature', value : 'auto', description : 'DVB video bin and source plugin') -option('dwrite', type : 'feature', value : 'auto', description : 'DirectWrite plugin') -option('faac', type : 'feature', value : 'auto', description : 'Free AAC audio encoder plugin') -option('faad', type : 'feature', value : 'auto', description : 'Free AAC audio decoder plugin (GPL - only built if gpl option is also enabled!)') -option('fbdev', type : 'feature', value : 'auto', description : 'Framebuffer video sink plugin') -option('fdkaac', type : 'feature', value : 'auto', description : 'Fraunhofer AAC audio codec plugin') -option('flite', type : 'feature', value : 'auto', description : 'Flite speech synthesizer source plugin') -option('fluidsynth', type : 'feature', value : 'auto', description : 'Fluidsynth MIDI decoder plugin') -option('gl', type : 'feature', value : 'auto', description : 'GStreamer OpenGL integration support (used by various plugins)') -option('gme', type : 'feature', value : 'auto', description : 'libgme gaming console music file decoder plugin') -option('gs', type : 'feature', value : 'auto', description : 'Google Cloud Storage source and sink plugin') -option('gsm', type : 'feature', value : 'auto', description : 'GSM encoder/decoder plugin') -option('gtk3', type : 'feature', value : 'auto', description : 'GTK+ video sink plugin') -option('ipcpipeline', type : 'feature', value : 'auto', description : 'Inter-process communication plugin') -option('iqa', type : 'feature', value : 'auto', description : 'Image quality assessment plugin (AGPL - only built if gpl option is also enabled!)') -option('kms', type : 'feature', value : 'auto', description : 'KMS video sink plugin') -option('ladspa', type : 'feature', value : 'auto', description : 'LADSPA plugin bridge') -option('ladspa-rdf', type : 'feature', value : 'auto', description : 'LADSPA plugin bridge RDF support') -option('lc3', type : 'feature', value : 'auto', description : 'LC3 (Bluetooth) LE audio codec plugin') 
-option('ldac', type : 'feature', value : 'auto', description : 'LDAC bluetooth audio codec plugin') -option('libde265', type : 'feature', value : 'auto', description : 'HEVC/H.265 video decoder plugin') -option('openaptx', type : 'feature', value : 'auto', description : 'Open Source implementation of Audio Processing Technology codec (aptX) plugin') -option('lv2', type : 'feature', value : 'auto', description : 'LV2 audio plugin bridge') -option('mediafoundation', type : 'feature', value : 'auto', description : 'Microsoft Media Foundation plugin') -option('microdns', type : 'feature', value : 'auto', description : 'libmicrodns-based device provider') -option('modplug', type : 'feature', value : 'auto', description : 'ModPlug audio decoder plugin') -option('mpeg2enc', type : 'feature', value : 'auto', description : 'mpeg2enc video encoder plugin (GPL - only built if gpl option is also enabled!)') -option('mplex', type : 'feature', value : 'auto', description : 'mplex audio/video multiplexer plugin (GPL - only built if gpl option is also enabled!)') -option('msdk', type : 'feature', value : 'auto', description : 'Intel Media SDK video encoder/decoder plugin') -option('musepack', type : 'feature', value : 'auto', description : 'libmpcdec Musepack decoder plugin') -option('neon', type : 'feature', value : 'auto', description : 'NEON HTTP source plugin') -option('nvcomp', type : 'feature', value : 'auto', description : 'NVIDIA nvCOMP compression/decompression plugin') -option('nvcodec', type : 'feature', value : 'auto', description : 'NVIDIA GPU codec plugin') -option('nvdswrapper', type : 'feature', value : 'auto', description : 'NVIDIA DeepStream SDK wrapper plugin') -option('onnx', type : 'feature', value : 'auto', description : 'ONNX neural network plugin') -option('openal', type : 'feature', value : 'auto', description : 'OpenAL plugin') -option('openexr', type : 'feature', value : 'auto', description : 'OpenEXR plugin') -option('openh264', type : 'feature', value 
: 'auto', description : 'H.264 video codec plugin') -option('openjpeg', type : 'feature', value : 'auto', description : 'JPEG2000 image codec plugin') -option('openmpt', type : 'feature', value : 'auto', description : 'OpenMPT module music library plugin') -option('openni2', type : 'feature', value : 'auto', description : 'OpenNI2 library plugin') -option('opensles', type : 'feature', value : 'auto', description : 'OpenSL ES audio source/sink plugin') -option('opus', type : 'feature', value : 'auto', description : 'OPUS audio parser plugin') -option('qroverlay', type : 'feature', value : 'auto', description : 'Element to set random data on a qroverlay') -option('qsv', type : 'feature', value : 'auto', description : 'Intel Quick Sync Video plugin') -option('resindvd', type : 'feature', value : 'auto', description : 'Resin DVD playback plugin (GPL - only built if gpl option is also enabled!)') -option('rsvg', type : 'feature', value : 'auto', description : 'SVG overlayer and image decoder plugin') -option('rtmp', type : 'feature', value : 'auto', description : 'RTMP video network source and sink plugin') -option('sbc', type : 'feature', value : 'auto', description : 'SBC bluetooth audio codec plugin') -option('sctp', type : 'feature', value : 'auto', description : 'SCTP plugin') -option('shm', type : 'feature', value : 'auto', description : 'Shared memory source/sink plugin') -option('smoothstreaming', type : 'feature', value : 'auto', description : 'Microsoft Smooth Streaming demuxer plugin') -option('sndfile', type : 'feature', value : 'auto', description : 'libsndfile plugin') -option('soundtouch', type : 'feature', value : 'auto', description : 'Audio pitch controller & BPM detection plugin') -option('spandsp', type : 'feature', value : 'auto', description : 'Packet loss concealment audio plugin') -option('srt', type : 'feature', value : 'auto', description : 'Secure, Reliable, Transport client/server network source/sink plugin') -option('srtp', type : 'feature', 
value : 'auto', description : 'Secure RTP codec plugin') -option('svtav1', type : 'feature', value : 'auto', description : 'Scalable Video Technology for AV1 plugin') -option('svthevcenc', type : 'feature', value : 'auto', description : 'Scalable Video Technology for HEVC encoder plugin') -option('svtjpegxs', type : 'feature', value : 'auto', description : 'Scalable Video Technology for JPEG-XS plugin') -option('teletext', type : 'feature', value : 'auto', description : 'Teletext plugin') -option('tinyalsa', type : 'feature', value : 'auto', description : 'TinyALSA plugin') -option('transcode', type : 'feature', value : 'auto', description : 'Transcode plugin') -option('ttml', type : 'feature', value : 'auto', description : 'TTML subtitle parser and renderer plugin') -option('uvch264', type : 'feature', value : 'auto', description : 'UVC compliant H.264 camera source plugin') -option('va', type : 'feature', value : 'auto', description: 'VA-API new plugin') -option('voaacenc', type : 'feature', value : 'auto', description : 'AAC audio encoder plugin') -option('voamrwbenc', type : 'feature', value : 'auto', description : 'AMR-WB audio encoder plugin') -option('wasapi', type : 'feature', value : 'auto', description : 'Windows Audio Session API source/sink plugin') -option('wasapi2', type : 'feature', value : 'auto', description : 'Windows Audio Session API source/sink plugin with WinRT API') -option('webview2', type : 'feature', value : 'auto', description : 'WebView2 plugin') -option('webp', type : 'feature', value : 'auto', description : 'WebP image codec plugin') -option('webrtc', type : 'feature', value : 'auto', yield: true, description : 'WebRTC audio/video network bin plugin') -option('webrtcdsp', type : 'feature', value : 'auto', description : 'Plugin with various audio filters provided by the WebRTC audio processing library') -option('wildmidi', type : 'feature', value : 'auto', description : 'WildMidi midi soft synth plugin') -option('wic', type : 'feature', 
value : 'auto', description : 'Windows Imaging Component plugin') -option('win32ipc', type : 'feature', value : 'auto', description : 'Windows IPC plugin') -option('winks', type : 'feature', value : 'auto', description : 'Windows Kernel Streaming video source plugin') -option('winscreencap', type : 'feature', value : 'auto', description : 'Windows Screen Capture video source plugin') -option('x265', type : 'feature', value : 'auto', description : 'HEVC/H.265 video encoder plugin (GPL - only built if gpl option is also enabled!)') -option('zbar', type : 'feature', value : 'auto', description : 'Barcode image scanner plugin using zbar library') -option('zxing', type : 'feature', value : 'auto', description : 'Barcode image scanner plugin using zxing-cpp library') -option('wpe', type : 'feature', value : 'auto', description : 'WPE Web browser plugin') -option( - 'wpe_api', - type: 'combo', - value: 'auto', - choices: ['auto', '1.0', '1.1', '2.0'], - description: 'WPE WebKit API to target (1.0 = soup2, 1.1/2.0 = soup3)' -) - -option('magicleap', type : 'feature', value : 'auto', description : 'Magic Leap platform support') -option('v4l2codecs', type : 'feature', value : 'auto', description : 'Video4Linux Stateless CODECs support') -option('uvcgadget', type : 'feature', value : 'auto', description : 'uvc video gadget plugin') -option('isac', type : 'feature', value : 'auto', description : 'iSAC plugin') - -# AJA plugin options -option('aja-include-dir', type : 'string', value : '', - description : 'Directory where AJA NTV2 headers are located') -option('aja-lib-dir', type : 'string', value : '', - description : 'Directory where AJA NTV2 library is located') - -# CUDA library options -option('cuda-nvmm', type : 'feature', value : 'auto', description : 'Enable NVMM support in cuda library') -option('cuda-nvmm-include-path', type : 'string', value : '', description : 'Include path for NVMM support in cuda library') - -# D3D11/D3D12 HLSL library options 
-option('d3d-hlsl-precompile', type : 'feature', value : 'auto', description : 'Enable buildtime HLSL compile for d3d11/d3d12 library/plugin') - -# D3D11 plugin options -option('d3d11-math', type : 'feature', value : 'auto', description : 'Enable DirectX SIMD Math support') -option('d3d11-hlsl-precompile', type : 'feature', value : 'auto', description : 'Enable buildtime HLSL compile for d3d11 library/plugin') -option('d3d11-wgc', type : 'feature', value : 'auto', description : 'Windows Graphics Capture API support in d3d11 plugin') - -# D3D12 plugin options -option('d3d12-wgc', type : 'feature', value : 'auto', description : 'Windows Graphics Capture API support in d3d12 plugin') - -# HLS plugin options -option('hls', type : 'feature', value : 'auto', description : 'HTTP Live Streaming plugin') -option('hls-crypto', type : 'combo', value : 'auto', choices : ['auto', 'nettle', 'libgcrypt', 'openssl'], - description: 'Crypto library to use for HLS plugin') - -# SCTP plugin options -option('sctp-internal-usrsctp', type: 'feature', value : 'enabled', - description: 'Whether to use the bundled usrsctp library or the system one') - -# MSDK plugin options -option('mfx_api', type : 'combo', choices : ['MSDK', 'oneVPL', 'auto'], value : 'auto', - description : 'Select MFX API to build against') - -# nvcodec plugin options -option('nvcodec-cuda-precompile', type : 'feature', value : 'disabled', description : 'Enable CUDA kernel precompile') -option('nvcodec-nvcc-arch', type : 'string', value : 'compute_52', description : 'GPU architecture for nvcc -arch option') - -# nvCOMP plugin options -option('nvcomp-sdk-path', type: 'string', value : '', - description : 'nvCOMP SDK root directory') - -# nvdswrapper plugin options -option('nvds-include-path', type: 'string', value : '', - description : 'DeepStream SDK include directory') -option('nvds-lib-path', type: 'string', value : '', - description : 'DeepStream SDK library directory') - -# QSV plugin options -option('mfx-modules-dir', 
type: 'string', value : '', - description : 'libmfx runtime module dir, linux only') - -# Qt6 plugin options -option('qt6d3d11', type : 'feature', value : 'auto', description : 'Qt6 Direct3D11 plugin') -option('qt-method', type: 'combo', value: 'auto', choices: ['auto', 'pkg-config', 'qmake'], - yield: true, description: 'Method to use to find Qt') - -# Vulkan integration library and plugin options -option('vulkan', type: 'feature', value: 'auto', description: 'Vulkan integration library and video sink plugin') -option('vulkan-video', type: 'feature', value: 'auto', description: 'Whether to use Vulkan Video Extensions for encoding/decoding') -option('vulkan-windowing', type : 'array', - choices : ['x11', 'wayland', 'auto'], value : ['auto'], - description : 'A comma separated list of Vulkan windowing systems to enable. Non-Linux platforms are auto-detected.') - -# License-related feature options -option('gpl', type: 'feature', value: 'disabled', yield: true, - description: 'Allow build plugins that have (A)GPL-licensed dependencies') - -# Common feature options -option('examples', type : 'feature', value : 'auto', yield : true) -option('tools', type : 'feature', value : 'auto', yield : true) -option('tests', type : 'feature', value : 'auto', yield : true) -option('introspection', type : 'feature', value : 'auto', yield : true, description : 'Generate gobject-introspection bindings') -option('nls', type : 'feature', value : 'auto', yield: true, description : 'Enable native language support (translations)') -option('orc', type : 'feature', value : 'auto', yield : true) -option('extra-checks', type : 'feature', value : 'enabled', yield : true, description : 'Enable extra runtime checks') - -# Common options -option('package-name', type : 'string', yield : true, - description : 'package name to use in plugins') -option('package-origin', type : 'string', value : 'Unknown package origin', yield : true, - description : 'package origin URL to use in plugins') -option('doc', type 
: 'feature', value : 'auto', yield: true, - description: 'Enable documentation.') -option('glib_debug', type : 'feature', value : 'auto', yield : true, description : 'Enable GLib debug infrastructure (see docs/macros.txt)') -option('glib_assert', type : 'boolean', value : true, yield : true, description : 'Enable GLib assertion (see docs/macros.txt)', - deprecated: {'enabled' : 'true', 'disabled' : 'false', 'auto' : 'false'}, -) -option('glib_checks', type : 'boolean', value : true, yield : true, description : 'Enable GLib checks such as API guards (see docs/macros.txt)', - deprecated: {'enabled' : 'true', 'disabled' : 'false', 'auto' : 'false'}, -) - -# Deprecated, kept for backward compat -option('gobject-cast-checks', type : 'feature', value : 'auto', yield : true, - description: 'Enable run-time GObject cast checks (auto = enabled for development, disabled for stable releases)', - deprecated: 'glib_debug') -option('glib-asserts', type : 'feature', value : 'enabled', yield : true, - description: 'Enable GLib assertion (auto = enabled for development, disabled for stable releases)', - deprecated: 'glib_assert') -option('glib-checks', type : 'feature', value : 'enabled', yield : true, - description: 'Enable GLib checks such as API guards (auto = enabled for development, disabled for stable releases)', - deprecated: 'glib_checks')
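The Meson feature options in the deleted meson_options.txt above are selected at configure time with `-D` flags; a minimal build-configuration sketch (the directory name and option values are illustrative, not taken from this package revision):

```shell
# Configure gst-plugins-bad with a few of the options declared above.
# 'gpl' defaults to 'disabled' and must be enabled explicitly before
# plugins with (A)GPL-licensed dependencies will be built.
meson setup builddir \
  -Dgpl=enabled \
  -Dvulkan=enabled \
  -Dvulkan-windowing=x11,wayland \
  -Ddoc=disabled
meson compile -C builddir
```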
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/d3d12/gstd3d12download.cpp
Deleted
@@ -1,322 +0,0 @@ -/* GStreamer - * Copyright (C) 2023 Seungha Yang <seungha@centricular.com> - * - * This library is free software; you can redistribute it and/or - * modify it under the terms of the GNU Library General Public - * License as published by the Free Software Foundation; either - * version 2 of the License, or (at your option) any later version. - * - * This library is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Library General Public License for more details. - * - * You should have received a copy of the GNU Library General Public - * License along with this library; if not, write to the - * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, - * Boston, MA 02110-1301, USA. - */ - -#ifdef HAVE_CONFIG_H -#include <config.h> -#endif - -#include "gstd3d12download.h" -#include "gstd3d12pluginutils.h" - -GST_DEBUG_CATEGORY_STATIC (gst_d3d12_download_debug); -#define GST_CAT_DEFAULT gst_d3d12_download_debug - -static GstStaticPadTemplate sink_template = - GST_STATIC_PAD_TEMPLATE ("sink", GST_PAD_SINK, GST_PAD_ALWAYS, - GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE_WITH_FEATURES - (GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY, GST_D3D12_ALL_FORMATS) "; " - GST_VIDEO_CAPS_MAKE_WITH_FEATURES - (GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY "," - GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, - GST_D3D12_ALL_FORMATS) "; " - GST_VIDEO_CAPS_MAKE (GST_D3D12_ALL_FORMATS) "; " - GST_VIDEO_CAPS_MAKE_WITH_FEATURES - (GST_CAPS_FEATURE_MEMORY_SYSTEM_MEMORY "," - GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, - GST_D3D12_ALL_FORMATS))); - -static GstStaticPadTemplate src_template = - GST_STATIC_PAD_TEMPLATE ("src", GST_PAD_SRC, GST_PAD_ALWAYS, - GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE_WITH_FEATURES - (GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY, GST_D3D12_ALL_FORMATS) "; " - GST_VIDEO_CAPS_MAKE_WITH_FEATURES - 
(GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY "," - GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, - GST_D3D12_ALL_FORMATS) "; " - GST_VIDEO_CAPS_MAKE (GST_D3D12_ALL_FORMATS) "; " - GST_VIDEO_CAPS_MAKE_WITH_FEATURES - (GST_CAPS_FEATURE_MEMORY_SYSTEM_MEMORY "," - GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, - GST_D3D12_ALL_FORMATS))); - -struct _GstD3D12Download -{ - GstD3D12BaseFilter parent; -}; - -#define gst_d3d12_download_parent_class parent_class -G_DEFINE_TYPE (GstD3D12Download, gst_d3d12_download, - GST_TYPE_D3D12_BASE_FILTER); - -static GstCaps *gst_d3d12_download_transform_caps (GstBaseTransform * trans, - GstPadDirection direction, GstCaps * caps, GstCaps * filter); -static gboolean gst_d3d12_download_propose_allocation (GstBaseTransform * trans, - GstQuery * decide_query, GstQuery * query); -static gboolean gst_d3d12_download_decide_allocation (GstBaseTransform * trans, - GstQuery * query); -static GstFlowReturn gst_d3d12_download_transform (GstBaseTransform * trans, - GstBuffer * inbuf, GstBuffer * outbuf); - -static void -gst_d3d12_download_class_init (GstD3D12DownloadClass * klass) -{ - GstElementClass *element_class = GST_ELEMENT_CLASS (klass); - GstBaseTransformClass *trans_class = GST_BASE_TRANSFORM_CLASS (klass); - - gst_element_class_add_static_pad_template (element_class, &sink_template); - gst_element_class_add_static_pad_template (element_class, &src_template); - - gst_element_class_set_static_metadata (element_class, - "Direct3D12 Downloader", "Filter/Video", - "Downloads Direct3D12 texture memory into system memory", - "Seungha Yang <seungha@centricular.com>"); - - trans_class->passthrough_on_same_caps = TRUE; - - trans_class->transform_caps = - GST_DEBUG_FUNCPTR (gst_d3d12_download_transform_caps); - trans_class->propose_allocation = - GST_DEBUG_FUNCPTR (gst_d3d12_download_propose_allocation); - trans_class->decide_allocation = - GST_DEBUG_FUNCPTR (gst_d3d12_download_decide_allocation); - trans_class->transform = GST_DEBUG_FUNCPTR 
(gst_d3d12_download_transform); - - GST_DEBUG_CATEGORY_INIT (gst_d3d12_download_debug, - "d3d12download", 0, "d3d12download Element"); -} - -static void -gst_d3d12_download_init (GstD3D12Download * self) -{ -} - -static GstCaps * -_set_caps_features (const GstCaps * caps, const gchar * feature_name) -{ - GstCaps *tmp = gst_caps_copy (caps); - guint n = gst_caps_get_size (tmp); - guint i = 0; - - for (i = 0; i < n; i++) { - gst_caps_set_features (tmp, i, - gst_caps_features_new_single_static_str (feature_name)); - } - - return tmp; -} - -static GstCaps * -gst_d3d12_download_transform_caps (GstBaseTransform * trans, - GstPadDirection direction, GstCaps * caps, GstCaps * filter) -{ - GstCaps *result, *tmp; - - GST_DEBUG_OBJECT (trans, - "Transforming caps %" GST_PTR_FORMAT " in direction %s", caps, - (direction == GST_PAD_SINK) ? "sink" : "src"); - - if (direction == GST_PAD_SINK) { - tmp = _set_caps_features (caps, GST_CAPS_FEATURE_MEMORY_SYSTEM_MEMORY); - tmp = gst_caps_merge (gst_caps_ref (caps), tmp); - } else { - GstCaps *newcaps; - tmp = gst_caps_ref (caps); - - newcaps = _set_caps_features (caps, GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY); - tmp = gst_caps_merge (tmp, newcaps); - } - - if (filter) { - result = gst_caps_intersect_full (filter, tmp, GST_CAPS_INTERSECT_FIRST); - gst_caps_unref (tmp); - } else { - result = tmp; - } - - GST_DEBUG_OBJECT (trans, "returning caps: %" GST_PTR_FORMAT, result); - - return result; -} - -static gboolean -gst_d3d12_download_propose_allocation (GstBaseTransform * trans, - GstQuery * decide_query, GstQuery * query) -{ - GstD3D12BaseFilter *filter = GST_D3D12_BASE_FILTER (trans); - GstVideoInfo info; - GstBufferPool *pool; - GstCaps *caps; - guint size; - gboolean is_d3d12 = FALSE; - - if (!GST_BASE_TRANSFORM_CLASS (parent_class)->propose_allocation (trans, - decide_query, query)) - return FALSE; - - /* passthrough, we're done */ - if (!decide_query) - return TRUE; - - gst_query_parse_allocation (query, &caps, nullptr); - - if 
(!caps) { - GST_WARNING_OBJECT (filter, "Allocation query without caps"); - return FALSE; - } - - if (!gst_video_info_from_caps (&info, caps)) - return FALSE; - - if (gst_query_get_n_allocation_pools (query) == 0) { - GstCapsFeatures *features; - GstStructure *config; - - features = gst_caps_get_features (caps, 0); - - if (features && gst_caps_features_contains (features, - GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY)) { - GST_DEBUG_OBJECT (filter, "upstream support d3d12 memory"); - pool = gst_d3d12_buffer_pool_new (filter->device); - is_d3d12 = TRUE; - } else { - pool = gst_video_buffer_pool_new (); - } - - config = gst_buffer_pool_get_config (pool); - - gst_buffer_pool_config_add_option (config, - GST_BUFFER_POOL_OPTION_VIDEO_META); - if (!is_d3d12) { - gst_buffer_pool_config_add_option (config, - GST_BUFFER_POOL_OPTION_VIDEO_ALIGNMENT); - } - - size = GST_VIDEO_INFO_SIZE (&info); - gst_buffer_pool_config_set_params (config, caps, size, 0, 0); - - if (!gst_buffer_pool_set_config (pool, config)) { - GST_ERROR_OBJECT (filter, "Bufferpool config failed"); - gst_object_unref (pool); - return FALSE; - } - - /* d3d12 buffer pool will update buffer size based on allocated texture, - * get size from config again */ - config = gst_buffer_pool_get_config (pool); - gst_buffer_pool_config_get_params (config, - nullptr, &size, nullptr, nullptr); - gst_structure_free (config); - - gst_query_add_allocation_pool (query, pool, size, 0, 0); - gst_object_unref (pool); - } - - gst_query_add_allocation_meta (query, GST_VIDEO_META_API_TYPE, nullptr); - gst_query_add_allocation_meta (query, - GST_VIDEO_OVERLAY_COMPOSITION_META_API_TYPE, nullptr); - - return TRUE; -} - -static gboolean -gst_d3d12_download_decide_allocation (GstBaseTransform * trans, - GstQuery * query) -{ - GstBufferPool *pool = nullptr; - GstStructure *config; - guint min, max, size; - gboolean update_pool; - GstCaps *outcaps = nullptr; - - gst_query_parse_allocation (query, &outcaps, nullptr); - - if (!outcaps) { - 
GST_WARNING_OBJECT (trans, "Allocation query without caps"); - return FALSE; - } - - if (gst_query_get_n_allocation_pools (query) > 0) { - gst_query_parse_nth_allocation_pool (query, 0, &pool, &size, &min, &max); - - update_pool = TRUE; - } else { - GstVideoInfo vinfo; - - gst_video_info_from_caps (&vinfo, outcaps); - size = vinfo.size; - min = max = 0; - update_pool = FALSE; - } - - if (!pool) { - pool = gst_video_buffer_pool_new (); - } - - config = gst_buffer_pool_get_config (pool); - gst_buffer_pool_config_add_option (config, GST_BUFFER_POOL_OPTION_VIDEO_META); - gst_buffer_pool_config_set_params (config, outcaps, size, min, max); - gst_buffer_pool_set_config (pool, config); - - /* d3d12 buffer pool will update buffer size based on allocated texture, - * get size from config again */ - config = gst_buffer_pool_get_config (pool); - gst_buffer_pool_config_get_params (config, nullptr, &size, nullptr, nullptr); - gst_structure_free (config); - - if (update_pool) - gst_query_set_nth_allocation_pool (query, 0, pool, size, min, max); - else - gst_query_add_allocation_pool (query, pool, size, min, max); - - gst_object_unref (pool); - - return GST_BASE_TRANSFORM_CLASS (parent_class)->decide_allocation (trans, - query); -} - -static GstFlowReturn -gst_d3d12_download_transform (GstBaseTransform * trans, GstBuffer * inbuf, - GstBuffer * outbuf) -{ - GstD3D12BaseFilter *filter = GST_D3D12_BASE_FILTER (trans); - GstVideoFrame in_frame, out_frame; - GstFlowReturn ret = GST_FLOW_OK; - - if (!gst_video_frame_map (&in_frame, &filter->in_info, inbuf, GST_MAP_READ)) { - GST_ERROR_OBJECT (filter, "Couldn't map input frame"); - return GST_FLOW_ERROR; - } - - if (!gst_video_frame_map (&out_frame, - &filter->out_info, outbuf, GST_MAP_WRITE)) { - GST_ERROR_OBJECT (filter, "Couldn't map output frame"); - gst_video_frame_unmap (&in_frame); - return GST_FLOW_ERROR; - } - - if (!gst_video_frame_copy (&out_frame, &in_frame)) { - GST_ERROR_OBJECT (filter, "Copy failed"); - ret = 
GST_FLOW_ERROR; - } - - gst_video_frame_unmap (&out_frame); - gst_video_frame_unmap (&in_frame); - - return ret; -}
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/d3d12/gstd3d12download.h
Deleted
@@ -1,31 +0,0 @@
-/* GStreamer
- * Copyright (C) 2023 Seungha Yang <seungha@centricular.com>
- *
- * This library is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Library General Public
- * License as published by the Free Software Foundation; either
- * version 2 of the License, or (at your option) any later version.
- *
- * This library is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Library General Public License for more details.
- *
- * You should have received a copy of the GNU Library General Public
- * License along with this library; if not, write to the
- * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
- * Boston, MA 02110-1301, USA.
- */
-
-#pragma once
-
-#include "gstd3d12basefilter.h"
-
-G_BEGIN_DECLS
-
-#define GST_TYPE_D3D12_DOWNLOAD (gst_d3d12_download_get_type())
-G_DECLARE_FINAL_TYPE (GstD3D12Download,
-    gst_d3d12_download, GST, D3D12_DOWNLOAD, GstD3D12BaseFilter);
-
-G_END_DECLS
-
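For context, the deleted d3d12download element (and its d3d12upload counterpart below) copy video frames between Direct3D12 texture memory and system memory, passing buffers through unchanged when caps already match. An illustrative pipeline, assuming a Windows GStreamer build with the d3d12 plugin available:

```shell
# Upload raw frames into D3D12 texture memory, then download them
# back to system memory before display (Windows, d3d12 plugin only).
gst-launch-1.0 videotestsrc num-buffers=100 ! \
  d3d12upload ! d3d12download ! autovideosink
```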
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/d3d12/gstd3d12upload.cpp
Deleted
@@ -1,320 +0,0 @@ -/* GStreamer - * Copyright (C) 2023 Seungha Yang <seungha@centricular.com> - * - * This library is free software; you can redistribute it and/or - * modify it under the terms of the GNU Library General Public - * License as published by the Free Software Foundation; either - * version 2 of the License, or (at your option) any later version. - * - * This library is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Library General Public License for more details. - * - * You should have received a copy of the GNU Library General Public - * License along with this library; if not, write to the - * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, - * Boston, MA 02110-1301, USA. - */ - -#ifdef HAVE_CONFIG_H -#include "config.h" -#endif - -#include "gstd3d12upload.h" -#include "gstd3d12pluginutils.h" - -GST_DEBUG_CATEGORY_STATIC (gst_d3d12_upload_debug); -#define GST_CAT_DEFAULT gst_d3d12_upload_debug - -static GstStaticPadTemplate sink_template = - GST_STATIC_PAD_TEMPLATE ("sink", GST_PAD_SINK, GST_PAD_ALWAYS, - GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE_WITH_FEATURES - (GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY, GST_D3D12_ALL_FORMATS) "; " - GST_VIDEO_CAPS_MAKE_WITH_FEATURES - (GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY "," - GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, - GST_D3D12_ALL_FORMATS) "; " - GST_VIDEO_CAPS_MAKE (GST_D3D12_ALL_FORMATS) "; " - GST_VIDEO_CAPS_MAKE_WITH_FEATURES - (GST_CAPS_FEATURE_MEMORY_SYSTEM_MEMORY "," - GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, - GST_D3D12_ALL_FORMATS))); - -static GstStaticPadTemplate src_template = - GST_STATIC_PAD_TEMPLATE ("src", GST_PAD_SRC, GST_PAD_ALWAYS, - GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE_WITH_FEATURES - (GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY, GST_D3D12_ALL_FORMATS) "; " - GST_VIDEO_CAPS_MAKE_WITH_FEATURES - (GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY 
"," - GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, - GST_D3D12_ALL_FORMATS) "; " - GST_VIDEO_CAPS_MAKE (GST_D3D12_ALL_FORMATS) "; " - GST_VIDEO_CAPS_MAKE_WITH_FEATURES - (GST_CAPS_FEATURE_MEMORY_SYSTEM_MEMORY "," - GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, - GST_D3D12_ALL_FORMATS))); - -struct _GstD3D12Upload -{ - GstD3D12BaseFilter parent; -}; - -#define gst_d3d12_upload_parent_class parent_class -G_DEFINE_TYPE (GstD3D12Upload, gst_d3d12_upload, GST_TYPE_D3D12_BASE_FILTER); - -static GstCaps *gst_d3d12_upload_transform_caps (GstBaseTransform * trans, - GstPadDirection direction, GstCaps * caps, GstCaps * filter); -static gboolean gst_d3d12_upload_propose_allocation (GstBaseTransform * trans, - GstQuery * decide_query, GstQuery * query); -static gboolean gst_d3d12_upload_decide_allocation (GstBaseTransform * trans, - GstQuery * query); -static GstFlowReturn gst_d3d12_upload_transform (GstBaseTransform * trans, - GstBuffer * inbuf, GstBuffer * outbuf); - -static void -gst_d3d12_upload_class_init (GstD3D12UploadClass * klass) -{ - GstElementClass *element_class = GST_ELEMENT_CLASS (klass); - GstBaseTransformClass *trans_class = GST_BASE_TRANSFORM_CLASS (klass); - - gst_element_class_add_static_pad_template (element_class, &sink_template); - gst_element_class_add_static_pad_template (element_class, &src_template); - - gst_element_class_set_static_metadata (element_class, - "Direct3D12 Uploader", "Filter/Video", - "Uploads system memory into Direct3D12 texture memory", - "Seungha Yang <seungha@centricular.com>"); - - trans_class->passthrough_on_same_caps = TRUE; - - trans_class->transform_caps = - GST_DEBUG_FUNCPTR (gst_d3d12_upload_transform_caps); - trans_class->propose_allocation = - GST_DEBUG_FUNCPTR (gst_d3d12_upload_propose_allocation); - trans_class->decide_allocation = - GST_DEBUG_FUNCPTR (gst_d3d12_upload_decide_allocation); - trans_class->transform = GST_DEBUG_FUNCPTR (gst_d3d12_upload_transform); - - GST_DEBUG_CATEGORY_INIT 
(gst_d3d12_upload_debug, - "d3d12upload", 0, "d3d12upload Element"); -} - -static void -gst_d3d12_upload_init (GstD3D12Upload * self) -{ -} - -static GstCaps * -_set_caps_features (const GstCaps * caps, const gchar * feature_name) -{ - GstCaps *tmp = gst_caps_copy (caps); - guint n = gst_caps_get_size (tmp); - guint i = 0; - - for (i = 0; i < n; i++) { - gst_caps_set_features (tmp, i, - gst_caps_features_new_single_static_str (feature_name)); - } - - return tmp; -} - -static GstCaps * -gst_d3d12_upload_transform_caps (GstBaseTransform * trans, - GstPadDirection direction, GstCaps * caps, GstCaps * filter) -{ - GstCaps *result, *tmp; - - GST_DEBUG_OBJECT (trans, - "Transforming caps %" GST_PTR_FORMAT " in direction %s", caps, - (direction == GST_PAD_SINK) ? "sink" : "src"); - - if (direction == GST_PAD_SINK) { - tmp = _set_caps_features (caps, GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY); - tmp = gst_caps_merge (gst_caps_ref (caps), tmp); - } else { - GstCaps *newcaps; - tmp = gst_caps_ref (caps); - - newcaps = _set_caps_features (caps, GST_CAPS_FEATURE_MEMORY_SYSTEM_MEMORY); - tmp = gst_caps_merge (tmp, newcaps); - } - - if (filter) { - result = gst_caps_intersect_full (filter, tmp, GST_CAPS_INTERSECT_FIRST); - gst_caps_unref (tmp); - } else { - result = tmp; - } - - GST_DEBUG_OBJECT (trans, "returning caps: %" GST_PTR_FORMAT, result); - - return result; -} - -static gboolean -gst_d3d12_upload_propose_allocation (GstBaseTransform * trans, - GstQuery * decide_query, GstQuery * query) -{ - GstD3D12BaseFilter *filter = GST_D3D12_BASE_FILTER (trans); - GstVideoInfo info; - GstBufferPool *pool; - GstCaps *caps; - guint size; - gboolean is_d3d12 = FALSE; - - if (!GST_BASE_TRANSFORM_CLASS (parent_class)->propose_allocation (trans, - decide_query, query)) - return FALSE; - - /* passthrough, we're done */ - if (!decide_query) - return TRUE; - - gst_query_parse_allocation (query, &caps, nullptr); - - if (!caps) { - GST_WARNING_OBJECT (filter, "Allocation query without caps"); - 
return FALSE; - } - - if (!gst_video_info_from_caps (&info, caps)) - return FALSE; - - if (gst_query_get_n_allocation_pools (query) == 0) { - GstCapsFeatures *features; - GstStructure *config; - - features = gst_caps_get_features (caps, 0); - - if (features && gst_caps_features_contains (features, - GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY)) { - GST_DEBUG_OBJECT (filter, "upstream support d3d12 memory"); - pool = gst_d3d12_buffer_pool_new (filter->device); - is_d3d12 = TRUE; - } else { - pool = gst_video_buffer_pool_new (); - } - - config = gst_buffer_pool_get_config (pool); - - gst_buffer_pool_config_add_option (config, - GST_BUFFER_POOL_OPTION_VIDEO_META); - if (!is_d3d12) { - gst_buffer_pool_config_add_option (config, - GST_BUFFER_POOL_OPTION_VIDEO_ALIGNMENT); - } - - size = GST_VIDEO_INFO_SIZE (&info); - gst_buffer_pool_config_set_params (config, caps, size, 0, 0); - - if (!gst_buffer_pool_set_config (pool, config)) { - GST_ERROR_OBJECT (filter, "Bufferpool config failed"); - gst_object_unref (pool); - return FALSE; - } - - /* d3d12 buffer pool will update buffer size based on allocated texture, - * get size from config again */ - config = gst_buffer_pool_get_config (pool); - gst_buffer_pool_config_get_params (config, - nullptr, &size, nullptr, nullptr); - gst_structure_free (config); - - gst_query_add_allocation_pool (query, pool, size, 0, 0); - gst_object_unref (pool); - } - - gst_query_add_allocation_meta (query, GST_VIDEO_META_API_TYPE, nullptr); - gst_query_add_allocation_meta (query, - GST_VIDEO_OVERLAY_COMPOSITION_META_API_TYPE, nullptr); - - return TRUE; -} - -static gboolean -gst_d3d12_upload_decide_allocation (GstBaseTransform * trans, GstQuery * query) -{ - GstBufferPool *pool = nullptr; - GstStructure *config; - guint min, max, size; - gboolean update_pool; - GstCaps *outcaps = nullptr; - - gst_query_parse_allocation (query, &outcaps, nullptr); - - if (!outcaps) { - GST_WARNING_OBJECT (trans, "Allocation query without caps"); - return FALSE; - } - - if 
(gst_query_get_n_allocation_pools (query) > 0) { - gst_query_parse_nth_allocation_pool (query, 0, &pool, &size, &min, &max); - - update_pool = TRUE; - } else { - GstVideoInfo vinfo; - - gst_video_info_from_caps (&vinfo, outcaps); - size = vinfo.size; - min = max = 0; - update_pool = FALSE; - } - - if (!pool) { - pool = gst_video_buffer_pool_new (); - } - - config = gst_buffer_pool_get_config (pool); - gst_buffer_pool_config_add_option (config, GST_BUFFER_POOL_OPTION_VIDEO_META); - gst_buffer_pool_config_set_params (config, outcaps, size, min, max); - gst_buffer_pool_set_config (pool, config); - - /* d3d12 buffer pool will update buffer size based on allocated texture, - * get size from config again */ - config = gst_buffer_pool_get_config (pool); - gst_buffer_pool_config_get_params (config, nullptr, &size, nullptr, nullptr); - gst_structure_free (config); - - if (update_pool) - gst_query_set_nth_allocation_pool (query, 0, pool, size, min, max); - else - gst_query_add_allocation_pool (query, pool, size, min, max); - - gst_object_unref (pool); - - return GST_BASE_TRANSFORM_CLASS (parent_class)->decide_allocation (trans, - query); -} - -static GstFlowReturn -gst_d3d12_upload_transform (GstBaseTransform * trans, GstBuffer * inbuf, - GstBuffer * outbuf) -{ - GstD3D12BaseFilter *filter = GST_D3D12_BASE_FILTER (trans); - GstVideoFrame in_frame, out_frame; - GstFlowReturn ret = GST_FLOW_OK; - - if (!gst_video_frame_map (&in_frame, &filter->in_info, inbuf, GST_MAP_READ)) { - GST_ERROR_OBJECT (filter, "Couldn't map input frame"); - return GST_FLOW_ERROR; - } - - if (!gst_video_frame_map (&out_frame, - &filter->out_info, outbuf, GST_MAP_WRITE)) { - GST_ERROR_OBJECT (filter, "Couldn't map output frame"); - gst_video_frame_unmap (&in_frame); - return GST_FLOW_ERROR; - } - - if (!gst_video_frame_copy (&out_frame, &in_frame)) { - GST_ERROR_OBJECT (filter, "Copy failed"); - ret = GST_FLOW_ERROR; - } - - gst_video_frame_unmap (&out_frame); - gst_video_frame_unmap (&in_frame); - - 
return ret; -}
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/d3d12/gstd3d12upload.h
Deleted
@@ -1,31 +0,0 @@
-/* GStreamer
- * Copyright (C) 2023 Seungha Yang <seungha@centricular.com>
- *
- * This library is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Library General Public
- * License as published by the Free Software Foundation; either
- * version 2 of the License, or (at your option) any later version.
- *
- * This library is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Library General Public License for more details.
- *
- * You should have received a copy of the GNU Library General Public
- * License along with this library; if not, write to the
- * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
- * Boston, MA 02110-1301, USA.
- */
-
-#pragma once
-
-#include "gstd3d12basefilter.h"
-
-G_BEGIN_DECLS
-
-#define GST_TYPE_D3D12_UPLOAD (gst_d3d12_upload_get_type())
-G_DECLARE_FINAL_TYPE (GstD3D12Upload,
-    gst_d3d12_upload, GST, D3D12_UPLOAD, GstD3D12BaseFilter);
-
-G_END_DECLS
-
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/wasapi2/gstwasapi2object.cpp
Deleted
@@ -1,461 +0,0 @@ -/* GStreamer - * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> - * - * This library is free software; you can redistribute it and/or - * modify it under the terms of the GNU Library General Public - * License as published by the Free Software Foundation; either - * version 2 of the License, or (at your option) any later version. - * - * This library is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Library General Public License for more details. - * - * You should have received a copy of the GNU Library General Public - * License along with this library; if not, write to the - * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, - * Boston, MA 02110-1301, USA. - */ - -#ifdef HAVE_CONFIG_H -#include "config.h" -#endif - -#include "gstwasapi2object.h" -#include "gstwasapi2activator.h" -#include <endpointvolume.h> -#include <mutex> -#include <condition_variable> -#include <wrl.h> -#include <string> -#include <atomic> -#include <string.h> - -/* *INDENT-OFF* */ -using namespace Microsoft::WRL; - -GST_DEBUG_CATEGORY_EXTERN (gst_wasapi2_debug); -#define GST_CAT_DEFAULT gst_wasapi2_debug - -static GstStaticCaps template_caps = GST_STATIC_CAPS (GST_WASAPI2_STATIC_CAPS); - -static void gst_wasapi2_object_set_endpoint_muted (GstWasapi2Object * object, - bool muted); - -DEFINE_GUID (IID_Wasapi2EndpointVolumeCallback, 0x21ba991f, 0x4d78, - 0x418c, 0xa1, 0xea, 0x8a, 0xc7, 0xdd, 0xa2, 0xdc, 0x39); -class Wasapi2EndpointVolumeCallback : public IAudioEndpointVolumeCallback -{ -public: - static void CreateInstance (Wasapi2EndpointVolumeCallback ** iface, - GstWasapi2Object * client) - { - auto self = new Wasapi2EndpointVolumeCallback (); - g_weak_ref_set (&self->client_, client); - *iface = self; - } - - STDMETHODIMP_ (ULONG) - AddRef (void) - { - return InterlockedIncrement (&refcount_); - } - - 
STDMETHODIMP_ (ULONG) - Release (void) - { - ULONG ref_count; - - ref_count = InterlockedDecrement (&refcount_); - - if (ref_count == 0) - delete this; - - return ref_count; - } - - STDMETHODIMP - QueryInterface (REFIID riid, void ** object) - { - if (riid == __uuidof(IUnknown) || riid == __uuidof(IAgileObject)) { - *object = static_cast<IUnknown *>( - static_cast<Wasapi2EndpointVolumeCallback*>(this)); - } else if (riid == __uuidof(IAudioEndpointVolumeCallback)) { - *object = static_cast<IAudioEndpointVolumeCallback *>( - static_cast<Wasapi2EndpointVolumeCallback*>(this)); - } else if (riid == IID_Wasapi2EndpointVolumeCallback) { - *object = static_cast<Wasapi2EndpointVolumeCallback *> (this); - } else { - *object = nullptr; - return E_NOINTERFACE; - } - - AddRef (); - - return S_OK; - } - - STDMETHODIMP - OnNotify (AUDIO_VOLUME_NOTIFICATION_DATA * notify) - { - auto client = (GstWasapi2Object *) g_weak_ref_get (&client_); - - if (client) { - gst_wasapi2_object_set_endpoint_muted (client, notify->bMuted); - gst_object_unref (client); - } - - return S_OK; - } - -private: - Wasapi2EndpointVolumeCallback () - { - g_weak_ref_init (&client_, nullptr); - } - - virtual ~Wasapi2EndpointVolumeCallback () - { - g_weak_ref_set (&client_, nullptr); - } - -private: - ULONG refcount_ = 1; - GWeakRef client_; -}; - -struct GstWasapi2ObjectPrivate -{ - ComPtr<IMMDeviceEnumerator> enumerator; - ComPtr<IMMDevice> device; - ComPtr<IAudioClient> client; - ComPtr<IAudioEndpointVolume> endpoint_volume; - std::atomic<bool> endpoint_muted = { false }; - Wasapi2EndpointVolumeCallback *volume_callback = nullptr; - Wasapi2ActivationHandler *activator = nullptr; - std::mutex lock; - std::condition_variable cond; - std::string device_id; - GstWasapi2EndpointClass device_class; - guint target_pid; - gboolean is_default_device = FALSE; - - void ClearCOM () - { - if (volume_callback && endpoint_volume) - endpoint_volume->UnregisterControlChangeNotify (volume_callback); - if (activator) - 
activator->Release (); - client = nullptr; - if (volume_callback) - volume_callback->Release (); - endpoint_volume = nullptr; - device = nullptr; - enumerator = nullptr; - } -}; -/* *INDENT-ON* */ - -struct _GstWasapi2Object -{ - GstObject parent; - - GstWasapi2ObjectPrivate *priv; - - GThread *thread; - GMainContext *context; - GMainLoop *loop; - GstCaps *caps; -}; - -static void gst_wasapi2_object_finalize (GObject * object); - -#define gst_wasapi2_object_parent_class parent_class -G_DEFINE_TYPE (GstWasapi2Object, gst_wasapi2_object, GST_TYPE_OBJECT); - -static void -gst_wasapi2_object_class_init (GstWasapi2ObjectClass * klass) -{ - auto object_class = G_OBJECT_CLASS (klass); - - object_class->finalize = gst_wasapi2_object_finalize; -} - -static void -gst_wasapi2_object_init (GstWasapi2Object * self) -{ - self->priv = new GstWasapi2ObjectPrivate (); - self->context = g_main_context_new (); - self->loop = g_main_loop_new (self->context, FALSE); -} - -static void -gst_wasapi2_object_finalize (GObject * object) -{ - auto self = GST_WASAPI2_OBJECT (object); - - g_main_loop_quit (self->loop); - g_thread_join (self->thread); - g_main_loop_unref (self->loop); - g_main_context_unref (self->context); - gst_clear_caps (&self->caps); - - delete self->priv; - - G_OBJECT_CLASS (parent_class)->finalize (object); -} - -static void -gst_wasapi2_object_set_endpoint_muted (GstWasapi2Object * object, bool muted) -{ - auto priv = object->priv; - priv->endpoint_muted.store (muted, std::memory_order_release); -} - -static gboolean -is_equal_device_id (const gchar * a, const gchar * b) -{ - auto len_a = strlen (a); - auto len_b = strlen (b); - - if (len_a != len_b) - return FALSE; - -#ifdef _MSC_VER - return _strnicmp (a, b, len_a) == 0; -#else - return strncasecmp (a, b, len_a) == 0; -#endif -} - -static gpointer -gst_wasapi2_object_thread_func (GstWasapi2Object * self) -{ - auto priv = self->priv; - - CoInitializeEx (nullptr, COINIT_MULTITHREADED); - - 
g_main_context_push_thread_default (self->context); - - auto idle_source = g_idle_source_new (); - /* *INDENT-OFF* */ - g_source_set_callback (idle_source, - (gpointer user_data) -> gboolean { - auto self = (GstWasapi2Object *) user_data; - auto priv = self->priv; - std::lock_guard < std::mutex > lk (priv->lock); - priv->cond.notify_all (); - return G_SOURCE_REMOVE; - }, - self, nullptr); - /* *INDENT-ON* */ - g_source_attach (idle_source, self->context); - g_source_unref (idle_source); - - auto hr = CoCreateInstance (__uuidof (MMDeviceEnumerator), - nullptr, CLSCTX_ALL, IID_PPV_ARGS (&priv->enumerator)); - if (FAILED (hr)) { - GST_ERROR_OBJECT (self, "Failed to create IMMDeviceEnumerator instance"); - goto run_loop; - } - - switch (priv->device_class) { - case GST_WASAPI2_ENDPOINT_CLASS_CAPTURE: - if (priv->device_id.empty () || - is_equal_device_id (priv->device_id.c_str (), - gst_wasapi2_get_default_device_id (eCapture))) { - if (gst_wasapi2_can_automatic_stream_routing ()) { - Wasapi2ActivationHandler::CreateInstance (&priv->activator, - gst_wasapi2_get_default_device_id_wide (eCapture), nullptr); - GST_DEBUG_OBJECT (self, "Creating default capture device"); - priv->is_default_device = TRUE; - } else { - GST_DEBUG_OBJECT (self, "Creating default capture MMdevice"); - hr = priv->enumerator->GetDefaultAudioEndpoint (eCapture, - eConsole, &priv->device); - } - } else { - auto wstr = g_utf8_to_utf16 (priv->device_id.c_str (), - -1, nullptr, nullptr, nullptr); - hr = priv->enumerator->GetDevice ((LPCWSTR) wstr, &priv->device); - g_free (wstr); - } - break; - case GST_WASAPI2_ENDPOINT_CLASS_RENDER: - case GST_WASAPI2_ENDPOINT_CLASS_LOOPBACK_CAPTURE: - if (priv->device_id.empty () || - is_equal_device_id (priv->device_id.c_str (), - gst_wasapi2_get_default_device_id (eRender))) { - if (gst_wasapi2_can_automatic_stream_routing ()) { - Wasapi2ActivationHandler::CreateInstance (&priv->activator, - gst_wasapi2_get_default_device_id_wide (eRender), nullptr); - 
GST_DEBUG_OBJECT (self, "Creating default render device"); - priv->is_default_device = TRUE; - } else { - GST_DEBUG_OBJECT (self, "Creating default render MMdevice"); - hr = priv->enumerator->GetDefaultAudioEndpoint (eRender, - eConsole, &priv->device); - } - } else { - auto wstr = g_utf8_to_utf16 (priv->device_id.c_str (), - -1, nullptr, nullptr, nullptr); - hr = priv->enumerator->GetDevice ((LPCWSTR) wstr, &priv->device); - g_free (wstr); - } - break; - case GST_WASAPI2_ENDPOINT_CLASS_INCLUDE_PROCESS_LOOPBACK_CAPTURE: - case GST_WASAPI2_ENDPOINT_CLASS_EXCLUDE_PROCESS_LOOPBACK_CAPTURE: - { - AUDIOCLIENT_ACTIVATION_PARAMS params = { }; - params.ActivationType = AUDIOCLIENT_ACTIVATION_TYPE_PROCESS_LOOPBACK; - params.ProcessLoopbackParams.TargetProcessId = priv->target_pid; - if (priv->device_class == - GST_WASAPI2_ENDPOINT_CLASS_INCLUDE_PROCESS_LOOPBACK_CAPTURE) { - params.ProcessLoopbackParams.ProcessLoopbackMode = - PROCESS_LOOPBACK_MODE_INCLUDE_TARGET_PROCESS_TREE; - } else { - params.ProcessLoopbackParams.ProcessLoopbackMode = - PROCESS_LOOPBACK_MODE_EXCLUDE_TARGET_PROCESS_TREE; - } - - GST_DEBUG_OBJECT (self, "Creating process loopback capture device"); - - Wasapi2ActivationHandler::CreateInstance (&priv->activator, - VIRTUAL_AUDIO_DEVICE_PROCESS_LOOPBACK, &params); - break; - } - default: - g_assert_not_reached (); - break; - } - - if (priv->activator || priv->device) { - if (priv->activator) { - hr = priv->activator->ActivateAsync (); - if (gst_wasapi2_result (hr)) - hr = priv->activator->GetClient (&priv->client, INFINITE); - } else { - hr = priv->device->Activate (__uuidof (IAudioClient), CLSCTX_ALL, - nullptr, &priv->client); - } - - if (!gst_wasapi2_result (hr)) { - GST_WARNING_OBJECT (self, "Couldn't activate device"); - } else if (priv->device && - priv->device_class == GST_WASAPI2_ENDPOINT_CLASS_LOOPBACK_CAPTURE) { - hr = priv->device->Activate (__uuidof (IAudioEndpointVolume), - CLSCTX_ALL, nullptr, &priv->endpoint_volume); - if (gst_wasapi2_result (hr)) 
{ - Wasapi2EndpointVolumeCallback::CreateInstance (&priv->volume_callback, - self); - hr = priv->endpoint_volume-> - RegisterControlChangeNotify (priv->volume_callback); - if (!gst_wasapi2_result (hr)) { - priv->volume_callback->Release (); - priv->volume_callback = nullptr; - } else { - BOOL muted = FALSE; - priv->endpoint_volume->GetMute (&muted); - if (gst_wasapi2_result (hr)) - gst_wasapi2_object_set_endpoint_muted (self, muted); - } - } - } - } else { - GST_WARNING_OBJECT (self, "No device created"); - } - - if (priv->client) { - WAVEFORMATEX *mix_format = nullptr; - hr = priv->client->GetMixFormat (&mix_format); - if (!gst_wasapi2_result (hr)) { - if (gst_wasapi2_is_process_loopback_class (priv->device_class)) - mix_format = gst_wasapi2_get_default_mix_format (); - } - - if (mix_format) { - auto scaps = gst_static_caps_get (&template_caps); - gst_wasapi2_util_parse_waveformatex (mix_format, - scaps, &self->caps, nullptr); - gst_caps_unref (scaps); - - CoTaskMemFree (mix_format); - } - } - -run_loop: - GST_INFO_OBJECT (self, "Starting loop"); - g_main_loop_run (self->loop); - GST_INFO_OBJECT (self, "Stopped loop"); - - priv->ClearCOM (); - - g_main_context_pop_thread_default (self->context); - - CoUninitialize (); - - return nullptr; -} - -GstWasapi2Object * -gst_wasapi2_object_new (GstWasapi2EndpointClass device_class, - const gchar * device_id, guint target_pid) -{ - auto self = (GstWasapi2Object *) - g_object_new (GST_TYPE_WASAPI2_OBJECT, nullptr); - gst_object_ref_sink (self); - - auto priv = self->priv; - priv->device_class = device_class; - if (device_id) - priv->device_id = device_id; - priv->target_pid = target_pid; - - if (gst_wasapi2_is_process_loopback_class (device_class) && !target_pid) { - GST_ERROR_OBJECT (self, "Unspecified target PID"); - gst_object_unref (self); - return nullptr; - } - - { - std::unique_lock < std::mutex > lk (priv->lock); - self->thread = g_thread_new ("GstWasapi2Object", - (GThreadFunc) gst_wasapi2_object_thread_func, 
self); - while (!g_main_loop_is_running (self->loop)) - priv->cond.wait (lk); - } - - if (!priv->client) { - gst_object_unref (self); - return nullptr; - } - - return self; -} - -GstCaps * -gst_wasapi2_object_get_caps (GstWasapi2Object * object) -{ - if (object->caps) - return gst_caps_ref (object->caps); - - return nullptr; -} - -IAudioClient * -gst_wasapi2_object_get_handle (GstWasapi2Object * object) -{ - return object->priv->client.Get (); -} - -gboolean -gst_wasapi2_object_is_endpoint_muted (GstWasapi2Object * object) -{ - return object->priv->endpoint_muted.load (std::memory_order_acquire); -} - -gboolean -gst_wasapi2_object_auto_routing_supported (GstWasapi2Object * object) -{ - return object->priv->is_default_device; -}
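The deleted gst_wasapi2_object_new() above blocks its caller until the worker thread's GMainLoop is actually running: the thread attaches a one-shot idle source that fires once the loop spins, and the idle callback notifies a condition variable the constructor is waiting on. A minimal sketch of that startup handshake in portable C++ (the `Worker` type and its members are illustrative stand-ins, not GStreamer API):

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>

// Illustrative stand-in for gst_wasapi2_object_new(): spawn a worker and
// wait until it signals that its event loop is up, before returning.
struct Worker {
  std::mutex lock;
  std::condition_variable cond;
  bool running = false;   // plays the role of g_main_loop_is_running()
  std::thread thread;

  Worker () {
    std::unique_lock<std::mutex> lk (lock);
    thread = std::thread ([this] {
      // Worker side: equivalent of the idle source firing once the
      // GMainLoop runs; publish "running" and wake the constructor.
      {
        std::lock_guard<std::mutex> g (lock);
        running = true;
      }
      cond.notify_all ();
      // ... the real code runs g_main_loop_run() here ...
    });
    // Caller side: mirrors
    //   while (!g_main_loop_is_running (self->loop)) priv->cond.wait (lk);
    cond.wait (lk, [this] { return running; });
  }

  ~Worker () { thread.join (); }
};
```

The predicate form of `wait()` matters for the same reason the original loops on g_main_loop_is_running(): it tolerates both spurious wakeups and a notify that arrives before the wait begins.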
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/wasapi2/gstwasapi2object.h
Deleted
@@ -1,44 +0,0 @@ -/* GStreamer - * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> - * - * This library is free software; you can redistribute it and/or - * modify it under the terms of the GNU Library General Public - * License as published by the Free Software Foundation; either - * version 2 of the License, or (at your option) any later version. - * - * This library is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Library General Public License for more details. - * - * You should have received a copy of the GNU Library General Public - * License along with this library; if not, write to the - * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, - * Boston, MA 02110-1301, USA. - */ - -#pragma once - -#include <gst/gst.h> -#include "gstwasapi2util.h" - -G_BEGIN_DECLS - -#define GST_TYPE_WASAPI2_OBJECT (gst_wasapi2_object_get_type ()) -G_DECLARE_FINAL_TYPE (GstWasapi2Object, gst_wasapi2_object, - GST, WASAPI2_OBJECT, GstObject); - -GstWasapi2Object * gst_wasapi2_object_new (GstWasapi2EndpointClass device_class, - const gchar * device_id, - guint target_pid); - -GstCaps * gst_wasapi2_object_get_caps (GstWasapi2Object * object); - -IAudioClient * gst_wasapi2_object_get_handle (GstWasapi2Object * object); - -gboolean gst_wasapi2_object_is_endpoint_muted (GstWasapi2Object * object); - -gboolean gst_wasapi2_object_auto_routing_supported (GstWasapi2Object * object); - -G_END_DECLS -
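Among the accessors this header declared, gst_wasapi2_object_is_endpoint_muted() is implemented lock-free in the deleted .cpp: the COM volume-change callback publishes the mute state with `memory_order_release` and the audio I/O thread reads it with `memory_order_acquire`. A self-contained sketch of that publish/read pattern (`EndpointState` is an illustrative name, not part of the plugin):

```cpp
#include <atomic>
#include <cassert>

// Illustrative stand-in for the endpoint-mute flag in GstWasapi2ObjectPrivate:
// one thread (the volume callback) writes, another (the I/O thread) reads,
// with release/acquire ordering instead of a mutex.
struct EndpointState {
  std::atomic<bool> muted { false };

  // Mirrors gst_wasapi2_object_set_endpoint_muted(): writer side.
  void set_muted (bool m) {
    muted.store (m, std::memory_order_release);
  }

  // Mirrors gst_wasapi2_object_is_endpoint_muted(): reader side.
  bool is_muted () const {
    return muted.load (std::memory_order_acquire);
  }
};
```

Release/acquire is sufficient here because the flag is a single boolean with no associated data that readers must observe consistently; a full mutex would only add contention on the real-time I/O path.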
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/wasapi2/gstwasapi2ringbuffer.cpp
Deleted
@@ -1,1542 +0,0 @@ -/* GStreamer - * Copyright (C) 2021 Seungha Yang <seungha@centricular.com> - * - * This library is free software; you can redistribute it and/or - * modify it under the terms of the GNU Library General Public - * License as published by the Free Software Foundation; either - * version 2 of the License, or (at your option) any later version. - * - * This library is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Library General Public License for more details. - * - * You should have received a copy of the GNU Library General Public - * License along with this library; if not, write to the - * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, - * Boston, MA 02110-1301, USA. - */ - -#include "gstwasapi2ringbuffer.h" -#include "gstwasapi2object.h" -#include <string.h> -#include <mfapi.h> -#include <wrl.h> -#include <memory> -#include <atomic> -#include <vector> - -GST_DEBUG_CATEGORY_STATIC (gst_wasapi2_ring_buffer_debug); -#define GST_CAT_DEFAULT gst_wasapi2_ring_buffer_debug - -static HRESULT gst_wasapi2_ring_buffer_io_callback (GstWasapi2RingBuffer * buf); -static HRESULT -gst_wasapi2_ring_buffer_loopback_callback (GstWasapi2RingBuffer * buf); - -/* *INDENT-OFF* */ -using namespace Microsoft::WRL; - -struct GstWasapi2RingBufferPtr -{ - GstWasapi2RingBufferPtr (GstWasapi2RingBuffer * ringbuffer) - : obj(ringbuffer) - { - } - - /* Point to ringbuffer without holding ownership */ - GstWasapi2RingBuffer *obj; -}; - -class GstWasapiAsyncCallback : public IMFAsyncCallback -{ -public: - GstWasapiAsyncCallback(std::shared_ptr<GstWasapi2RingBufferPtr> listener, - DWORD queue_id, - gboolean loopback) - : ref_count_(1) - , queue_id_(queue_id) - , listener_(listener) - , loopback_(loopback) - { - } - - virtual ~GstWasapiAsyncCallback () { } - - /* IUnknown */ - STDMETHODIMP_ (ULONG) - AddRef (void) - { - 
GST_TRACE ("%p, %u", this, (guint) ref_count_); - return InterlockedIncrement (&ref_count_); - } - STDMETHODIMP_ (ULONG) - Release (void) - { - ULONG ref_count; - - GST_TRACE ("%p, %u", this, (guint) ref_count_); - ref_count = InterlockedDecrement (&ref_count_); - - if (ref_count == 0) { - GST_TRACE ("Delete instance %p", this); - delete this; - } - - return ref_count; - } - - STDMETHODIMP - QueryInterface (REFIID riid, void ** object) - { - if (!object) - return E_POINTER; - - if (riid == IID_IUnknown) { - GST_TRACE ("query IUnknown interface %p", this); - *object = static_cast<IUnknown *> (static_cast<GstWasapiAsyncCallback *> (this)); - } else if (riid == __uuidof (IMFAsyncCallback)) { - GST_TRACE ("query IUnknown interface %p", this); - *object = static_cast<IUnknown *> (static_cast<GstWasapiAsyncCallback *> (this)); - } else { - *object = nullptr; - return E_NOINTERFACE; - } - - AddRef (); - - return S_OK; - } - - /* IMFAsyncCallback */ - STDMETHODIMP - GetParameters(DWORD * pdwFlags, DWORD * pdwQueue) - { - *pdwFlags = 0; - *pdwQueue = queue_id_; - - return S_OK; - } - - STDMETHODIMP - Invoke(IMFAsyncResult * pAsyncResult) - { - HRESULT hr; - auto ptr = listener_.lock (); - - if (!ptr) { - GST_WARNING ("Listener was removed"); - return S_OK; - } - - if (loopback_) - hr = gst_wasapi2_ring_buffer_loopback_callback (ptr->obj); - else - hr = gst_wasapi2_ring_buffer_io_callback (ptr->obj); - - return hr; - } - -private: - ULONG ref_count_; - DWORD queue_id_; - std::weak_ptr<GstWasapi2RingBufferPtr> listener_; - gboolean loopback_; -}; - -struct GstWasapi2RingBufferPrivate -{ - std::shared_ptr<GstWasapi2RingBufferPtr> obj_ptr; - std::atomic<bool> monitor_device_mute; -}; -/* *INDENT-ON* */ - -struct _GstWasapi2RingBuffer -{ - GstAudioRingBuffer parent; - - GstWasapi2EndpointClass device_class; - gchar *device_id; - gboolean low_latency; - gboolean mute; - gdouble volume; - gpointer dispatcher; - gboolean can_auto_routing; - guint loopback_target_pid; - - 
GstWasapi2Object *client; - GstWasapi2Object *loopback_client; - IAudioCaptureClient *capture_client; - IAudioRenderClient *render_client; - IAudioStreamVolume *volume_object; - - GstWasapiAsyncCallback *callback_object; - IMFAsyncResult *callback_result; - MFWORKITEM_KEY callback_key; - HANDLE event_handle; - - GstWasapiAsyncCallback *loopback_callback_object; - IMFAsyncResult *loopback_callback_result; - MFWORKITEM_KEY loopback_callback_key; - HANDLE loopback_event_handle; - - guint64 expected_position; - gboolean is_first; - gboolean running; - UINT32 buffer_size; - UINT32 loopback_buffer_size; - - gint segoffset; - guint64 write_frame_offset; - - GMutex volume_lock; - gboolean mute_changed; - gboolean volume_changed; - - GstCaps *supported_caps; - - GstWasapi2RingBufferPrivate *priv; -}; - -static void gst_wasapi2_ring_buffer_constructed (GObject * object); -static void gst_wasapi2_ring_buffer_dispose (GObject * object); -static void gst_wasapi2_ring_buffer_finalize (GObject * object); - -static gboolean gst_wasapi2_ring_buffer_open_device (GstAudioRingBuffer * buf); -static gboolean gst_wasapi2_ring_buffer_close_device (GstAudioRingBuffer * buf); -static gboolean gst_wasapi2_ring_buffer_acquire (GstAudioRingBuffer * buf, - GstAudioRingBufferSpec * spec); -static gboolean gst_wasapi2_ring_buffer_release (GstAudioRingBuffer * buf); -static gboolean gst_wasapi2_ring_buffer_start (GstAudioRingBuffer * buf); -static gboolean gst_wasapi2_ring_buffer_resume (GstAudioRingBuffer * buf); -static gboolean gst_wasapi2_ring_buffer_pause (GstAudioRingBuffer * buf); -static gboolean gst_wasapi2_ring_buffer_stop (GstAudioRingBuffer * buf); -static guint gst_wasapi2_ring_buffer_delay (GstAudioRingBuffer * buf); - -#define gst_wasapi2_ring_buffer_parent_class parent_class -G_DEFINE_TYPE (GstWasapi2RingBuffer, gst_wasapi2_ring_buffer, - GST_TYPE_AUDIO_RING_BUFFER); - -static void -gst_wasapi2_ring_buffer_class_init (GstWasapi2RingBufferClass * klass) -{ - GObjectClass 
*gobject_class = G_OBJECT_CLASS (klass); - GstAudioRingBufferClass *ring_buffer_class = - GST_AUDIO_RING_BUFFER_CLASS (klass); - - gobject_class->constructed = gst_wasapi2_ring_buffer_constructed; - gobject_class->dispose = gst_wasapi2_ring_buffer_dispose; - gobject_class->finalize = gst_wasapi2_ring_buffer_finalize; - - ring_buffer_class->open_device = - GST_DEBUG_FUNCPTR (gst_wasapi2_ring_buffer_open_device); - ring_buffer_class->close_device = - GST_DEBUG_FUNCPTR (gst_wasapi2_ring_buffer_close_device); - ring_buffer_class->acquire = - GST_DEBUG_FUNCPTR (gst_wasapi2_ring_buffer_acquire); - ring_buffer_class->release = - GST_DEBUG_FUNCPTR (gst_wasapi2_ring_buffer_release); - ring_buffer_class->start = GST_DEBUG_FUNCPTR (gst_wasapi2_ring_buffer_start); - ring_buffer_class->resume = - GST_DEBUG_FUNCPTR (gst_wasapi2_ring_buffer_resume); - ring_buffer_class->pause = GST_DEBUG_FUNCPTR (gst_wasapi2_ring_buffer_pause); - ring_buffer_class->stop = GST_DEBUG_FUNCPTR (gst_wasapi2_ring_buffer_stop); - ring_buffer_class->delay = GST_DEBUG_FUNCPTR (gst_wasapi2_ring_buffer_delay); - - GST_DEBUG_CATEGORY_INIT (gst_wasapi2_ring_buffer_debug, - "wasapi2ringbuffer", 0, "wasapi2ringbuffer"); -} - -static void -gst_wasapi2_ring_buffer_init (GstWasapi2RingBuffer * self) -{ - self->volume = 1.0f; - self->mute = FALSE; - - self->event_handle = CreateEvent (nullptr, FALSE, FALSE, nullptr); - self->loopback_event_handle = CreateEvent (nullptr, FALSE, FALSE, nullptr); - g_mutex_init (&self->volume_lock); - - self->priv = new GstWasapi2RingBufferPrivate (); - self->priv->obj_ptr = std::make_shared < GstWasapi2RingBufferPtr > (self); - self->priv->monitor_device_mute.store (false, std::memory_order_release); -} - -static void -gst_wasapi2_ring_buffer_constructed (GObject * object) -{ - GstWasapi2RingBuffer *self = GST_WASAPI2_RING_BUFFER (object); - HRESULT hr; - DWORD task_id = 0; - DWORD queue_id = 0; - - hr = MFLockSharedWorkQueue (L"Pro Audio", 0, &task_id, &queue_id); - if 
(!gst_wasapi2_result (hr)) { - GST_WARNING_OBJECT (self, "Failed to get work queue id"); - goto out; - } - - self->callback_object = new GstWasapiAsyncCallback (self->priv->obj_ptr, - queue_id, FALSE); - hr = MFCreateAsyncResult (nullptr, self->callback_object, nullptr, - &self->callback_result); - if (!gst_wasapi2_result (hr)) { - GST_WARNING_OBJECT (self, "Failed to create IAsyncResult"); - GST_WASAPI2_CLEAR_COM (self->callback_object); - } - - /* Create another callback object for loopback silence feed */ - self->loopback_callback_object = - new GstWasapiAsyncCallback (self->priv->obj_ptr, queue_id, TRUE); - hr = MFCreateAsyncResult (nullptr, self->loopback_callback_object, nullptr, - &self->loopback_callback_result); - if (!gst_wasapi2_result (hr)) { - GST_WARNING_OBJECT (self, "Failed to create IAsyncResult"); - GST_WASAPI2_CLEAR_COM (self->callback_object); - GST_WASAPI2_CLEAR_COM (self->callback_result); - GST_WASAPI2_CLEAR_COM (self->loopback_callback_object); - } - -out: - G_OBJECT_CLASS (parent_class)->constructed (object); -} - -static void -gst_wasapi2_ring_buffer_dispose (GObject * object) -{ - GstWasapi2RingBuffer *self = GST_WASAPI2_RING_BUFFER (object); - - self->priv->obj_ptr = nullptr; - - GST_WASAPI2_CLEAR_COM (self->render_client); - GST_WASAPI2_CLEAR_COM (self->capture_client); - GST_WASAPI2_CLEAR_COM (self->volume_object); - GST_WASAPI2_CLEAR_COM (self->callback_result); - GST_WASAPI2_CLEAR_COM (self->callback_object); - GST_WASAPI2_CLEAR_COM (self->loopback_callback_result); - GST_WASAPI2_CLEAR_COM (self->loopback_callback_object); - - gst_clear_object (&self->client); - gst_clear_object (&self->loopback_client); - gst_clear_caps (&self->supported_caps); - - G_OBJECT_CLASS (parent_class)->dispose (object); -} - -static void -gst_wasapi2_ring_buffer_finalize (GObject * object) -{ - GstWasapi2RingBuffer *self = GST_WASAPI2_RING_BUFFER (object); - - g_free (self->device_id); - CloseHandle (self->event_handle); - CloseHandle 
(self->loopback_event_handle); - g_mutex_clear (&self->volume_lock); - - delete self->priv; - - G_OBJECT_CLASS (parent_class)->finalize (object); -} - -static void -gst_wasapi2_ring_buffer_post_open_error (GstWasapi2RingBuffer * self) -{ - GstElement *parent = (GstElement *) GST_OBJECT_PARENT (self); - - if (!parent) { - GST_WARNING_OBJECT (self, "Cannot find parent"); - return; - } - - if (self->device_class == GST_WASAPI2_ENDPOINT_CLASS_RENDER) { - GST_ELEMENT_ERROR (parent, RESOURCE, OPEN_WRITE, - (nullptr), ("Failed to open device")); - } else { - GST_ELEMENT_ERROR (parent, RESOURCE, OPEN_READ, - (nullptr), ("Failed to open device")); - } -} - -static void -gst_wasapi2_ring_buffer_post_scheduling_error (GstWasapi2RingBuffer * self) -{ - GstElement *parent = (GstElement *) GST_OBJECT_PARENT (self); - - if (!parent) { - GST_WARNING_OBJECT (self, "Cannot find parent"); - return; - } - - GST_ELEMENT_ERROR (parent, RESOURCE, FAILED, - (nullptr), ("Failed to schedule next I/O")); -} - -static void -gst_wasapi2_ring_buffer_post_io_error (GstWasapi2RingBuffer * self, HRESULT hr) -{ - GstElement *parent = (GstElement *) GST_OBJECT_PARENT (self); - gchar *error_msg; - - if (!parent) { - GST_WARNING_OBJECT (self, "Cannot find parent"); - return; - } - - error_msg = gst_wasapi2_util_get_error_message (hr); - - GST_ERROR_OBJECT (self, "Posting I/O error %s (hr: 0x%x)", error_msg, - (guint) hr); - if (self->device_class == GST_WASAPI2_ENDPOINT_CLASS_RENDER) { - GST_ELEMENT_ERROR (parent, RESOURCE, WRITE, - ("Failed to write to device"), ("%s, hr: 0x%x", error_msg, (guint) hr)); - } else { - GST_ELEMENT_ERROR (parent, RESOURCE, READ, - ("Failed to read from device"), ("%s hr: 0x%x", error_msg, (guint) hr)); - } - - g_free (error_msg); -} - -static gboolean -gst_wasapi2_ring_buffer_open_device (GstAudioRingBuffer * buf) -{ - GstWasapi2RingBuffer *self = GST_WASAPI2_RING_BUFFER (buf); - - GST_DEBUG_OBJECT (self, "Open"); - - if (self->client) { - GST_DEBUG_OBJECT (self, 
"Already opened"); - return TRUE; - } - - self->client = gst_wasapi2_object_new (self->device_class, - self->device_id, self->loopback_target_pid); - if (!self->client) { - gst_wasapi2_ring_buffer_post_open_error (self); - return FALSE; - } - - self->can_auto_routing = - gst_wasapi2_object_auto_routing_supported (self->client); - - /* Open another render client to feed silence */ - if (gst_wasapi2_is_loopback_class (self->device_class)) { - self->loopback_client = - gst_wasapi2_object_new (GST_WASAPI2_ENDPOINT_CLASS_RENDER, - self->device_id, 0); - - if (!self->loopback_client) { - gst_wasapi2_ring_buffer_post_open_error (self); - gst_clear_object (&self->client); - - return FALSE; - } - } - - return TRUE; -} - -static gboolean -gst_wasapi2_ring_buffer_close_device_internal (GstAudioRingBuffer * buf) -{ - GstWasapi2RingBuffer *self = GST_WASAPI2_RING_BUFFER (buf); - - GST_DEBUG_OBJECT (self, "Close device"); - - if (self->running) - gst_wasapi2_ring_buffer_stop (buf); - - GST_WASAPI2_CLEAR_COM (self->capture_client); - GST_WASAPI2_CLEAR_COM (self->render_client); - - g_mutex_lock (&self->volume_lock); - GST_WASAPI2_CLEAR_COM (self->volume_object); - g_mutex_unlock (&self->volume_lock); - - gst_clear_object (&self->client); - gst_clear_object (&self->loopback_client); - - return TRUE; -} - -static gboolean -gst_wasapi2_ring_buffer_close_device (GstAudioRingBuffer * buf) -{ - GstWasapi2RingBuffer *self = GST_WASAPI2_RING_BUFFER (buf); - - GST_DEBUG_OBJECT (self, "Close"); - - gst_wasapi2_ring_buffer_close_device_internal (buf); - - gst_clear_caps (&self->supported_caps); - - return TRUE; -} - -static HRESULT -gst_wasapi2_ring_buffer_read (GstWasapi2RingBuffer * self) -{ - GstAudioRingBuffer *ringbuffer = GST_AUDIO_RING_BUFFER_CAST (self); - BYTE *data = nullptr; - UINT32 to_read = 0; - guint32 to_read_bytes; - DWORD flags = 0; - HRESULT hr; - guint64 position; - GstAudioInfo *info = &ringbuffer->spec.info; - IAudioCaptureClient *capture_client = self->capture_client; 
- guint gap_size = 0; - guint offset = 0; - gint segment; - guint8 *readptr; - gint len; - bool is_device_muted; - - if (!capture_client) { - GST_ERROR_OBJECT (self, "IAudioCaptureClient is not available"); - return E_FAIL; - } - - hr = capture_client->GetBuffer (&data, &to_read, &flags, &position, nullptr); - if (hr == AUDCLNT_S_BUFFER_EMPTY || to_read == 0) { - GST_LOG_OBJECT (self, "Empty buffer"); - to_read = 0; - goto out; - } - - is_device_muted = - self->priv->monitor_device_mute.load (std::memory_order_acquire) && - gst_wasapi2_object_is_endpoint_muted (self->client); - - to_read_bytes = to_read * GST_AUDIO_INFO_BPF (info); - - GST_LOG_OBJECT (self, "Reading %d frames offset at %" G_GUINT64_FORMAT - ", expected position %" G_GUINT64_FORMAT, to_read, position, - self->expected_position); - - /* XXX: position might not be increased in case of process loopback */ - if (!gst_wasapi2_is_process_loopback_class (self->device_class)) { - if (self->is_first) { - self->expected_position = position + to_read; - self->is_first = FALSE; - } else { - if (position > self->expected_position) { - guint gap_frames; - - gap_frames = (guint) (position - self->expected_position); - GST_WARNING_OBJECT (self, "Found %u frames gap", gap_frames); - gap_size = gap_frames * GST_AUDIO_INFO_BPF (info); - } - - self->expected_position = position + to_read; - } - } else if (self->mute) { - /* volume clinet might not be available in case of process loopback */ - flags |= AUDCLNT_BUFFERFLAGS_SILENT; - } - - /* Fill gap data if any */ - while (gap_size > 0) { - if (!gst_audio_ring_buffer_prepare_read (ringbuffer, - &segment, &readptr, &len)) { - GST_INFO_OBJECT (self, "No segment available"); - goto out; - } - - g_assert (self->segoffset >= 0); - - len -= self->segoffset; - if (len > (gint) gap_size) - len = gap_size; - - gst_audio_format_info_fill_silence (ringbuffer->spec.info.finfo, - readptr + self->segoffset, len); - - self->segoffset += len; - gap_size -= len; - - if (self->segoffset 
== ringbuffer->spec.segsize) { - gst_audio_ring_buffer_advance (ringbuffer, 1); - self->segoffset = 0; - } - } - - while (to_read_bytes) { - if (!gst_audio_ring_buffer_prepare_read (ringbuffer, - &segment, &readptr, &len)) { - GST_INFO_OBJECT (self, "No segment available"); - goto out; - } - - len -= self->segoffset; - if (len > (gint) to_read_bytes) - len = to_read_bytes; - - if (((flags & AUDCLNT_BUFFERFLAGS_SILENT) == AUDCLNT_BUFFERFLAGS_SILENT) || - is_device_muted) { - gst_audio_format_info_fill_silence (ringbuffer->spec.info.finfo, - readptr + self->segoffset, len); - } else { - memcpy (readptr + self->segoffset, data + offset, len); - } - - self->segoffset += len; - offset += len; - to_read_bytes -= len; - - if (self->segoffset == ringbuffer->spec.segsize) { - gst_audio_ring_buffer_advance (ringbuffer, 1); - self->segoffset = 0; - } - } - -out: - hr = capture_client->ReleaseBuffer (to_read); - /* For debugging */ - gst_wasapi2_result (hr); - - return hr; -} - -static HRESULT -gst_wasapi2_ring_buffer_write (GstWasapi2RingBuffer * self, gboolean preroll) -{ - GstAudioRingBuffer *ringbuffer = GST_AUDIO_RING_BUFFER_CAST (self); - HRESULT hr; - IAudioClient *client_handle; - IAudioRenderClient *render_client; - guint32 padding_frames = 0; - guint32 can_write; - guint32 can_write_bytes; - gint segment; - guint8 *readptr; - gint len; - BYTE *data = nullptr; - - client_handle = gst_wasapi2_object_get_handle (self->client); - if (!client_handle) { - GST_ERROR_OBJECT (self, "IAudioClient is not available"); - return E_FAIL; - } - - render_client = self->render_client; - if (!render_client) { - GST_ERROR_OBJECT (self, "IAudioRenderClient is not available"); - return E_FAIL; - } - - hr = client_handle->GetCurrentPadding (&padding_frames); - if (!gst_wasapi2_result (hr)) - return hr; - - if (padding_frames >= self->buffer_size) { - GST_INFO_OBJECT (self, - "Padding size %d is larger than or equal to buffer size %d", - padding_frames, self->buffer_size); - return S_OK; - 
} - - can_write = self->buffer_size - padding_frames; - can_write_bytes = can_write * GST_AUDIO_INFO_BPF (&ringbuffer->spec.info); - if (preroll) { - GST_INFO_OBJECT (self, "Pre-fill %d frames with silence", can_write); - - hr = render_client->GetBuffer (can_write, &data); - if (!gst_wasapi2_result (hr)) - return hr; - - hr = render_client->ReleaseBuffer (can_write, AUDCLNT_BUFFERFLAGS_SILENT); - return gst_wasapi2_result (hr); - } - - GST_LOG_OBJECT (self, "Writing %d frames offset at %" G_GUINT64_FORMAT, - can_write, self->write_frame_offset); - self->write_frame_offset += can_write; - - while (can_write_bytes > 0) { - if (!gst_audio_ring_buffer_prepare_read (ringbuffer, - &segment, &readptr, &len)) { - GST_INFO_OBJECT (self, "No segment available, fill silence"); - - /* This would be case where in the middle of PAUSED state change. - * Just fill silent buffer to avoid immediate I/O callback after - * we return here */ - hr = render_client->GetBuffer (can_write, &data); - if (!gst_wasapi2_result (hr)) - return hr; - - hr = render_client->ReleaseBuffer (can_write, AUDCLNT_BUFFERFLAGS_SILENT); - /* for debugging */ - gst_wasapi2_result (hr); - return hr; - } - - len -= self->segoffset; - - if (len > (gint) can_write_bytes) - len = can_write_bytes; - - can_write = len / GST_AUDIO_INFO_BPF (&ringbuffer->spec.info); - if (can_write == 0) - break; - - hr = render_client->GetBuffer (can_write, &data); - if (!gst_wasapi2_result (hr)) - return hr; - - memcpy (data, readptr + self->segoffset, len); - hr = render_client->ReleaseBuffer (can_write, 0); - - self->segoffset += len; - can_write_bytes -= len; - - if (self->segoffset == ringbuffer->spec.segsize) { - gst_audio_ring_buffer_clear (ringbuffer, segment); - gst_audio_ring_buffer_advance (ringbuffer, 1); - self->segoffset = 0; - } - - if (!gst_wasapi2_result (hr)) { - GST_WARNING_OBJECT (self, "Failed to release buffer"); - break; - } - } - - return S_OK; -} - -static HRESULT -gst_wasapi2_ring_buffer_io_callback 
(GstWasapi2RingBuffer * self) -{ - HRESULT hr = E_FAIL; - - g_return_val_if_fail (GST_IS_WASAPI2_RING_BUFFER (self), E_FAIL); - - if (!self->running) { - GST_INFO_OBJECT (self, "We are not running now"); - return S_OK; - } - - switch (self->device_class) { - case GST_WASAPI2_ENDPOINT_CLASS_CAPTURE: - case GST_WASAPI2_ENDPOINT_CLASS_LOOPBACK_CAPTURE: - case GST_WASAPI2_ENDPOINT_CLASS_INCLUDE_PROCESS_LOOPBACK_CAPTURE: - case GST_WASAPI2_ENDPOINT_CLASS_EXCLUDE_PROCESS_LOOPBACK_CAPTURE: - hr = gst_wasapi2_ring_buffer_read (self); - break; - case GST_WASAPI2_ENDPOINT_CLASS_RENDER: - hr = gst_wasapi2_ring_buffer_write (self, FALSE); - break; - default: - g_assert_not_reached (); - break; - } - - /* We can ignore errors for device unplugged event if client can support - * automatic stream routing, but except for loopback capture. - * loopback capture client doesn't seem to be able to recover status from this - * situation */ - if (self->can_auto_routing && - !gst_wasapi2_is_loopback_class (self->device_class) && - !gst_wasapi2_is_process_loopback_class (self->device_class) && - (hr == AUDCLNT_E_ENDPOINT_CREATE_FAILED - || hr == AUDCLNT_E_DEVICE_INVALIDATED)) { - GST_WARNING_OBJECT (self, - "Device was unplugged but client can support automatic routing"); - hr = S_OK; - } - - if (self->running) { - if (gst_wasapi2_result (hr) && - /* In case of normal loopback capture, this method is called from - * silence feeding thread. 
Don't schedule again in that case */ - self->device_class != GST_WASAPI2_ENDPOINT_CLASS_LOOPBACK_CAPTURE) { - hr = MFPutWaitingWorkItem (self->event_handle, 0, self->callback_result, - &self->callback_key); - - if (!gst_wasapi2_result (hr)) { - GST_ERROR_OBJECT (self, "Failed to put item"); - gst_wasapi2_ring_buffer_post_scheduling_error (self); - - return hr; - } - } - } else { - GST_INFO_OBJECT (self, "We are not running now"); - return S_OK; - } - - if (FAILED (hr)) - gst_wasapi2_ring_buffer_post_io_error (self, hr); - - return hr; -} - -static HRESULT -gst_wasapi2_ring_buffer_fill_loopback_silence (GstWasapi2RingBuffer * self) -{ - HRESULT hr; - IAudioClient *client_handle; - IAudioRenderClient *render_client; - guint32 padding_frames = 0; - guint32 can_write; - BYTE *data = nullptr; - - client_handle = gst_wasapi2_object_get_handle (self->loopback_client); - if (!client_handle) { - GST_ERROR_OBJECT (self, "IAudioClient is not available"); - return E_FAIL; - } - - render_client = self->render_client; - if (!render_client) { - GST_ERROR_OBJECT (self, "IAudioRenderClient is not available"); - return E_FAIL; - } - - hr = client_handle->GetCurrentPadding (&padding_frames); - if (!gst_wasapi2_result (hr)) - return hr; - - if (padding_frames >= self->loopback_buffer_size) { - GST_INFO_OBJECT (self, - "Padding size %d is larger than or equal to buffer size %d", - padding_frames, self->loopback_buffer_size); - return S_OK; - } - - can_write = self->loopback_buffer_size - padding_frames; - - GST_TRACE_OBJECT (self, "Writing %d silent frames", can_write); - - hr = render_client->GetBuffer (can_write, &data); - if (!gst_wasapi2_result (hr)) - return hr; - - hr = render_client->ReleaseBuffer (can_write, AUDCLNT_BUFFERFLAGS_SILENT); - return gst_wasapi2_result (hr); -} - -static HRESULT -gst_wasapi2_ring_buffer_loopback_callback (GstWasapi2RingBuffer * self) -{ - HRESULT hr = E_FAIL; - - g_return_val_if_fail (GST_IS_WASAPI2_RING_BUFFER (self), E_FAIL); - 
g_return_val_if_fail (gst_wasapi2_is_loopback_class - (self->device_class), E_FAIL); - - if (!self->running) { - GST_INFO_OBJECT (self, "We are not running now"); - return S_OK; - } - - hr = gst_wasapi2_ring_buffer_fill_loopback_silence (self); - - /* On Windows versions prior to Windows 10, a pull-mode capture client will - * not receive any events when a stream is initialized with event-driven - * buffering */ - if (gst_wasapi2_result (hr)) - hr = gst_wasapi2_ring_buffer_io_callback (self); - - if (self->running) { - if (gst_wasapi2_result (hr)) { - hr = MFPutWaitingWorkItem (self->loopback_event_handle, 0, - self->loopback_callback_result, &self->loopback_callback_key); - - if (!gst_wasapi2_result (hr)) { - GST_ERROR_OBJECT (self, "Failed to put item"); - gst_wasapi2_ring_buffer_post_scheduling_error (self); - - return hr; - } - } - } else { - GST_INFO_OBJECT (self, "We are not running now"); - return S_OK; - } - - if (FAILED (hr)) - gst_wasapi2_ring_buffer_post_io_error (self, hr); - - return hr; -} - -static HRESULT -gst_wasapi2_ring_buffer_initialize_audio_client3 (GstWasapi2RingBuffer * self, - IAudioClient * client_handle, WAVEFORMATEX * mix_format, guint * period) -{ - HRESULT hr = S_OK; - UINT32 default_period, fundamental_period, min_period, max_period; - /* AUDCLNT_STREAMFLAGS_NOPERSIST is not allowed for - * InitializeSharedAudioStream */ - DWORD stream_flags = AUDCLNT_STREAMFLAGS_EVENTCALLBACK; - ComPtr < IAudioClient3 > audio_client; - - hr = client_handle->QueryInterface (IID_PPV_ARGS (&audio_client)); - if (!gst_wasapi2_result (hr)) { - GST_INFO_OBJECT (self, "IAudioClient3 interface is unavailable"); - return hr; - } - - hr = audio_client->GetSharedModeEnginePeriod (mix_format, - &default_period, &fundamental_period, &min_period, &max_period); - if (!gst_wasapi2_result (hr)) { - GST_INFO_OBJECT (self, "Couldn't get period"); - return hr; - } - - GST_INFO_OBJECT (self, "Using IAudioClient3, default period %d frames, " - "fundamental period %d 
frames, minimum period %d frames, maximum period " - "%d frames", default_period, fundamental_period, min_period, max_period); - - *period = min_period; - - hr = audio_client->InitializeSharedAudioStream (stream_flags, min_period, - mix_format, nullptr); - - if (!gst_wasapi2_result (hr)) - GST_WARNING_OBJECT (self, "Failed to initialize IAudioClient3"); - - return hr; -} - -static HRESULT -gst_wasapi2_ring_buffer_initialize_audio_client (GstWasapi2RingBuffer * self, - IAudioClient * client_handle, WAVEFORMATEX * mix_format, guint * period, - DWORD extra_flags, GstWasapi2EndpointClass device_class, - GstAudioRingBufferSpec * spec, gboolean low_latency) -{ - GstAudioRingBuffer *ringbuffer = GST_AUDIO_RING_BUFFER_CAST (self); - REFERENCE_TIME default_period, min_period; - DWORD stream_flags = - AUDCLNT_STREAMFLAGS_EVENTCALLBACK | AUDCLNT_STREAMFLAGS_NOPERSIST; - HRESULT hr; - REFERENCE_TIME buf_dur = 0; - - stream_flags |= extra_flags; - - if (!gst_wasapi2_is_process_loopback_class (device_class)) { - hr = client_handle->GetDevicePeriod (&default_period, &min_period); - if (!gst_wasapi2_result (hr)) { - GST_WARNING_OBJECT (self, "Couldn't get device period info"); - return hr; - } - - GST_INFO_OBJECT (self, "wasapi2 default period: %" G_GINT64_FORMAT - ", min period: %" G_GINT64_FORMAT, default_period, min_period); - - /* https://learn.microsoft.com/en-us/windows/win32/api/audioclient/nf-audioclient-iaudioclient-initialize - * For a shared-mode stream that uses event-driven buffering, - * the caller must set both hnsPeriodicity and hnsBufferDuration to 0 - * - * The above MS documentation does not seem to correct. By setting - * zero hnsBufferDuration, we can use audio engine determined buffer size - * but it seems to cause glitch depending on device. Calculate buffer size - * like wasapi plugin does. Note that MS example code uses non-zero - * buffer duration for event-driven shared-mode case as well. 
- */ - if (spec && !low_latency) { - /* Ensure that the period (latency_time) used is an integral multiple of - * either the default period or the minimum period */ - guint64 factor = (spec->latency_time * 10) / default_period; - REFERENCE_TIME period = default_period * MAX (factor, 1); - - buf_dur = spec->buffer_time * 10; - if (buf_dur < 2 * period) - buf_dur = 2 * period; - } - - hr = client_handle->Initialize (AUDCLNT_SHAREMODE_SHARED, stream_flags, - buf_dur, - /* This must always be 0 in shared mode */ - 0, mix_format, nullptr); - } else { - /* XXX: virtual device will not report device period. - * Use hardcoded period 20ms, same as Microsoft sample code - * https://github.com/microsoft/windows-classic-samples/tree/main/Samples/ApplicationLoopback - */ - default_period = (20 * GST_MSECOND) / 100; - hr = client_handle->Initialize (AUDCLNT_SHAREMODE_SHARED, - AUDCLNT_STREAMFLAGS_LOOPBACK | AUDCLNT_STREAMFLAGS_EVENTCALLBACK, - default_period, - AUDCLNT_STREAMFLAGS_AUTOCONVERTPCM, mix_format, nullptr); - } - - if (!gst_wasapi2_result (hr)) { - GST_WARNING_OBJECT (self, "Couldn't initialize audioclient"); - return hr; - } - - *period = gst_util_uint64_scale_round (default_period * 100, - GST_AUDIO_INFO_RATE (&ringbuffer->spec.info), GST_SECOND); - - return S_OK; -} - -static gboolean -gst_wasapi2_ring_buffer_prepare_loopback_client (GstWasapi2RingBuffer * self) -{ - IAudioClient *client_handle; - HRESULT hr; - WAVEFORMATEX *mix_format = nullptr; - guint period = 0; - ComPtr < IAudioRenderClient > render_client; - - if (!self->loopback_client) { - GST_ERROR_OBJECT (self, "No configured client object"); - return FALSE; - } - - client_handle = gst_wasapi2_object_get_handle (self->loopback_client); - if (!client_handle) { - GST_ERROR_OBJECT (self, "IAudioClient handle is not available"); - return FALSE; - } - - hr = client_handle->GetMixFormat (&mix_format); - if (!gst_wasapi2_result (hr)) { - GST_ERROR_OBJECT (self, "Failed to get mix format"); - return FALSE; - } - 
- hr = gst_wasapi2_ring_buffer_initialize_audio_client (self, client_handle, - mix_format, &period, 0, GST_WASAPI2_ENDPOINT_CLASS_RENDER, - nullptr, FALSE); - CoTaskMemFree (mix_format); - - if (!gst_wasapi2_result (hr)) { - GST_ERROR_OBJECT (self, "Failed to initialize audio client"); - return FALSE; - } - - hr = client_handle->SetEventHandle (self->loopback_event_handle); - if (!gst_wasapi2_result (hr)) { - GST_ERROR_OBJECT (self, "Failed to set event handle"); - return FALSE; - } - - hr = client_handle->GetBufferSize (&self->loopback_buffer_size); - if (!gst_wasapi2_result (hr)) { - GST_ERROR_OBJECT (self, "Failed to query buffer size"); - return FALSE; - } - - hr = client_handle->GetService (IID_PPV_ARGS (&render_client)); - if (!gst_wasapi2_result (hr)) { - GST_ERROR_OBJECT (self, "IAudioRenderClient is unavailable"); - return FALSE; - } - - self->render_client = render_client.Detach (); - - return TRUE; -} - -static HRESULT -gst_wasapi2_ring_buffer_set_channel_volumes (IAudioStreamVolume * iface, - float volume) -{ - float target; - HRESULT hr = S_OK; - - if (!iface) - return hr; - - target = CLAMP (volume, 0.0f, 1.0f); - UINT32 channel_count = 0; - hr = iface->GetChannelCount (&channel_count); - if (!gst_wasapi2_result (hr) || channel_count == 0) - return hr; - - std::vector < float >volumes; - for (guint i = 0; i < channel_count; i++) - volumes.push_back (target); - - return iface->SetAllVolumes (channel_count, &volumes[0]); -} - -static gboolean -gst_wasapi2_ring_buffer_acquire (GstAudioRingBuffer * buf, - GstAudioRingBufferSpec * spec) -{ - GstWasapi2RingBuffer *self = GST_WASAPI2_RING_BUFFER (buf); - IAudioClient *client_handle; - HRESULT hr; - WAVEFORMATEX *mix_format = nullptr; - ComPtr < IAudioStreamVolume > audio_volume; - GstAudioChannelPosition *position = nullptr; - guint period = 0; - gint segtotal = 2; - - GST_DEBUG_OBJECT (buf, "Acquire"); - - if (!self->client && !gst_wasapi2_ring_buffer_open_device (buf)) - return FALSE; - - if
(gst_wasapi2_is_loopback_class (self->device_class)) { - if (!gst_wasapi2_ring_buffer_prepare_loopback_client (self)) { - GST_ERROR_OBJECT (self, "Failed to prepare loopback client"); - goto error; - } - } - - client_handle = gst_wasapi2_object_get_handle (self->client); - if (!client_handle) { - GST_ERROR_OBJECT (self, "IAudioClient handle is not available"); - goto error; - } - - /* TODO: convert given caps to mix format */ - hr = client_handle->GetMixFormat (&mix_format); - if (!gst_wasapi2_result (hr)) { - if (gst_wasapi2_is_process_loopback_class (self->device_class)) { - mix_format = gst_wasapi2_get_default_mix_format (); - } else { - GST_ERROR_OBJECT (self, "Failed to get mix format"); - goto error; - } - } - - /* Only use audioclient3 when low-latency is requested because otherwise - * very slow machines and VMs with 1 CPU allocated will get glitches: - * https://bugzilla.gnome.org/show_bug.cgi?id=794497 */ - hr = E_FAIL; - if (self->low_latency && - /* AUDCLNT_STREAMFLAGS_LOOPBACK is not allowed for - * InitializeSharedAudioStream */ - !gst_wasapi2_is_loopback_class (self->device_class) && - !gst_wasapi2_is_process_loopback_class (self->device_class)) { - hr = gst_wasapi2_ring_buffer_initialize_audio_client3 (self, client_handle, - mix_format, &period); - } - - /* Try again if IAudioClinet3 API is unavailable. 
- * NOTE: IAudioClinet3:: methods might not be available for default device - * NOTE: The default device is a special device which is needed for supporting - * automatic stream routing - * https://docs.microsoft.com/en-us/windows/win32/coreaudio/automatic-stream-routing - */ - if (FAILED (hr)) { - DWORD extra_flags = 0; - if (gst_wasapi2_is_loopback_class (self->device_class)) - extra_flags = AUDCLNT_STREAMFLAGS_LOOPBACK; - - hr = gst_wasapi2_ring_buffer_initialize_audio_client (self, client_handle, - mix_format, &period, extra_flags, self->device_class, spec, - self->low_latency); - } - - if (!gst_wasapi2_result (hr)) { - GST_ERROR_OBJECT (self, "Failed to initialize audio client"); - goto error; - } - - hr = client_handle->SetEventHandle (self->event_handle); - if (!gst_wasapi2_result (hr)) { - GST_ERROR_OBJECT (self, "Failed to set event handle"); - goto error; - } - - gst_wasapi2_util_waveformatex_to_channel_mask (mix_format, &position); - if (position) - gst_audio_ring_buffer_set_channel_positions (buf, position); - g_free (position); - - if (!gst_wasapi2_result (hr)) { - GST_ERROR_OBJECT (self, "Failed to init audio client"); - goto error; - } - - hr = client_handle->GetBufferSize (&self->buffer_size); - if (!gst_wasapi2_result (hr)) { - GST_ERROR_OBJECT (self, "Failed to query buffer size"); - goto error; - } - - g_assert (period > 0); - - spec->segsize = period * GST_AUDIO_INFO_BPF (&buf->spec.info); - segtotal = (self->buffer_size / period); - spec->segtotal = MAX (segtotal, 2); - - GST_INFO_OBJECT (self, - "Buffer size: %d frames, period: %d frames, segsize: %d bytes, " - "segtotal: %d", self->buffer_size, period, spec->segsize, spec->segtotal); - - if (self->device_class == GST_WASAPI2_ENDPOINT_CLASS_RENDER) { - ComPtr < IAudioRenderClient > render_client; - - hr = client_handle->GetService (IID_PPV_ARGS (&render_client)); - if (!gst_wasapi2_result (hr)) { - GST_ERROR_OBJECT (self, "IAudioRenderClient is unavailable"); - goto error; - } - - 
self->render_client = render_client.Detach (); - } else { - ComPtr < IAudioCaptureClient > capture_client; - - hr = client_handle->GetService (IID_PPV_ARGS (&capture_client)); - if (!gst_wasapi2_result (hr)) { - GST_ERROR_OBJECT (self, "IAudioCaptureClient is unavailable"); - goto error; - } - - self->capture_client = capture_client.Detach (); - } - - hr = client_handle->GetService (IID_PPV_ARGS (&audio_volume)); - if (!gst_wasapi2_result (hr)) { - GST_WARNING_OBJECT (self, "ISimpleAudioVolume is unavailable"); - } else { - g_mutex_lock (&self->volume_lock); - self->volume_object = audio_volume.Detach (); - float volume = (float) self->volume; - if (self->mute) - volume = 0.0f; - - gst_wasapi2_ring_buffer_set_channel_volumes (self->volume_object, volume); - - self->mute_changed = FALSE; - self->volume_changed = FALSE; - g_mutex_unlock (&self->volume_lock); - } - - buf->size = spec->segtotal * spec->segsize; - buf->memory = (guint8 *) g_malloc (buf->size); - gst_audio_format_info_fill_silence (buf->spec.info.finfo, - buf->memory, buf->size); - - CoTaskMemFree (mix_format); - - return TRUE; - -error: - GST_WASAPI2_CLEAR_COM (self->render_client); - GST_WASAPI2_CLEAR_COM (self->capture_client); - GST_WASAPI2_CLEAR_COM (self->volume_object); - CoTaskMemFree (mix_format); - - gst_wasapi2_ring_buffer_post_open_error (self); - - return FALSE; -} - -static gboolean -gst_wasapi2_ring_buffer_release (GstAudioRingBuffer * buf) -{ - GST_DEBUG_OBJECT (buf, "Release"); - - g_clear_pointer (&buf->memory, g_free); - - /* IAudioClient handle is not reusable once it's initialized */ - gst_wasapi2_ring_buffer_close_device_internal (buf); - - return TRUE; -} - -static gboolean -gst_wasapi2_ring_buffer_start_internal (GstWasapi2RingBuffer * self) -{ - IAudioClient *client_handle; - HRESULT hr; - - if (self->running) { - GST_INFO_OBJECT (self, "We are running already"); - return TRUE; - } - - client_handle = gst_wasapi2_object_get_handle (self->client); - self->is_first = TRUE; - 
self->running = TRUE; - self->segoffset = 0; - self->write_frame_offset = 0; - - switch (self->device_class) { - case GST_WASAPI2_ENDPOINT_CLASS_RENDER: - /* render client might read data from buffer immediately once it's prepared. - * Pre-fill with silence in order to start-up glitch */ - hr = gst_wasapi2_ring_buffer_write (self, TRUE); - if (!gst_wasapi2_result (hr)) { - GST_ERROR_OBJECT (self, "Failed to pre-fill buffer with silence"); - goto error; - } - break; - case GST_WASAPI2_ENDPOINT_CLASS_LOOPBACK_CAPTURE: - { - IAudioClient *loopback_client_handle; - - /* Start silence feed client first */ - loopback_client_handle = - gst_wasapi2_object_get_handle (self->loopback_client); - - hr = loopback_client_handle->Start (); - if (!gst_wasapi2_result (hr)) { - GST_ERROR_OBJECT (self, "Failed to start loopback client"); - self->running = FALSE; - goto error; - } - - hr = MFPutWaitingWorkItem (self->loopback_event_handle, - 0, self->loopback_callback_result, &self->loopback_callback_key); - if (!gst_wasapi2_result (hr)) { - GST_ERROR_OBJECT (self, "Failed to put waiting item"); - loopback_client_handle->Stop (); - self->running = FALSE; - goto error; - } - break; - } - default: - break; - } - - hr = client_handle->Start (); - if (!gst_wasapi2_result (hr)) { - GST_ERROR_OBJECT (self, "Failed to start client"); - self->running = FALSE; - goto error; - } - - if (self->device_class != GST_WASAPI2_ENDPOINT_CLASS_LOOPBACK_CAPTURE) { - hr = MFPutWaitingWorkItem (self->event_handle, 0, self->callback_result, - &self->callback_key); - if (!gst_wasapi2_result (hr)) { - GST_ERROR_OBJECT (self, "Failed to put waiting item"); - client_handle->Stop (); - self->running = FALSE; - goto error; - } - } - - return TRUE; - -error: - gst_wasapi2_ring_buffer_post_open_error (self); - return FALSE; -} - -static gboolean -gst_wasapi2_ring_buffer_start (GstAudioRingBuffer * buf) -{ - GstWasapi2RingBuffer *self = GST_WASAPI2_RING_BUFFER (buf); - - GST_DEBUG_OBJECT (self, "Start"); - - return 
gst_wasapi2_ring_buffer_start_internal (self); -} - -static gboolean -gst_wasapi2_ring_buffer_resume (GstAudioRingBuffer * buf) -{ - GstWasapi2RingBuffer *self = GST_WASAPI2_RING_BUFFER (buf); - - GST_DEBUG_OBJECT (self, "Resume"); - - return gst_wasapi2_ring_buffer_start_internal (self); -} - -static gboolean -gst_wasapi2_ring_buffer_stop_internal (GstWasapi2RingBuffer * self) -{ - IAudioClient *client_handle; - HRESULT hr; - - if (!self->client) { - GST_DEBUG_OBJECT (self, "No configured client"); - return TRUE; - } - - if (!self->running) { - GST_DEBUG_OBJECT (self, "We are not running"); - return TRUE; - } - - client_handle = gst_wasapi2_object_get_handle (self->client); - - self->running = FALSE; - MFCancelWorkItem (self->callback_key); - - hr = client_handle->Stop (); - gst_wasapi2_result (hr); - - /* Call reset for later reuse case */ - hr = client_handle->Reset (); - self->expected_position = 0; - self->write_frame_offset = 0; - - if (self->loopback_client) { - client_handle = gst_wasapi2_object_get_handle (self->loopback_client); - - MFCancelWorkItem (self->loopback_callback_key); - - hr = client_handle->Stop (); - gst_wasapi2_result (hr); - - client_handle->Reset (); - } - - return TRUE; -} - -static gboolean -gst_wasapi2_ring_buffer_stop (GstAudioRingBuffer * buf) -{ - GstWasapi2RingBuffer *self = GST_WASAPI2_RING_BUFFER (buf); - - GST_DEBUG_OBJECT (buf, "Stop"); - - return gst_wasapi2_ring_buffer_stop_internal (self); -} - -static gboolean -gst_wasapi2_ring_buffer_pause (GstAudioRingBuffer * buf) -{ - GstWasapi2RingBuffer *self = GST_WASAPI2_RING_BUFFER (buf); - - GST_DEBUG_OBJECT (buf, "Pause"); - - return gst_wasapi2_ring_buffer_stop_internal (self); -} - -static guint -gst_wasapi2_ring_buffer_delay (GstAudioRingBuffer * buf) -{ - /* NOTE: WASAPI supports GetCurrentPadding() method for querying - * currently unread buffer size, but it doesn't seem to be quite useful - * here because: - * - * In case of capture client, GetCurrentPadding() will return 
the number of - * unread frames which will be identical to pNumFramesToRead value of - * IAudioCaptureClient::GetBuffer()'s return. Since we are running on - * event-driven mode and whenever available, WASAPI will notify signal - * so it's likely zero at this moment. And there is a chance to - * return incorrect value here because our IO callback happens from - * other thread. - * - * And render client's padding size will return the total size of buffer - * which is likely larger than twice of our period. Which doesn't represent - * the amount queued frame size in device correctly - */ - return 0; -} - -GstAudioRingBuffer * -gst_wasapi2_ring_buffer_new (GstWasapi2EndpointClass device_class, - gboolean low_latency, const gchar * device_id, gpointer dispatcher, - const gchar * name, guint loopback_target_pid) -{ - GstWasapi2RingBuffer *self; - - self = (GstWasapi2RingBuffer *) - g_object_new (GST_TYPE_WASAPI2_RING_BUFFER, "name", name, nullptr); - - if (!self->callback_object) { - gst_object_unref (self); - return nullptr; - } - - self->device_class = device_class; - self->low_latency = low_latency; - self->device_id = g_strdup (device_id); - self->dispatcher = dispatcher; - self->loopback_target_pid = loopback_target_pid; - - return GST_AUDIO_RING_BUFFER_CAST (self); -} - -GstCaps * -gst_wasapi2_ring_buffer_get_caps (GstWasapi2RingBuffer * buf) -{ - g_return_val_if_fail (GST_IS_WASAPI2_RING_BUFFER (buf), nullptr); - - if (buf->supported_caps) - return gst_caps_ref (buf->supported_caps); - - if (!buf->client) - return nullptr; - - buf->supported_caps = gst_wasapi2_object_get_caps (buf->client); - if (buf->supported_caps) - return gst_caps_ref (buf->supported_caps); - - return nullptr; -} - -HRESULT -gst_wasapi2_ring_buffer_set_mute (GstWasapi2RingBuffer * buf, gboolean mute) -{ - HRESULT hr = S_OK; - g_return_val_if_fail (GST_IS_WASAPI2_RING_BUFFER (buf), E_INVALIDARG); - - g_mutex_lock (&buf->volume_lock); - buf->mute = mute; - if (buf->volume_object) { - float 
volume = buf->volume; - if (mute) - volume = 0.0f; - hr = gst_wasapi2_ring_buffer_set_channel_volumes (buf->volume_object, - volume); - } else { - buf->mute_changed = TRUE; - } - g_mutex_unlock (&buf->volume_lock); - - return hr; -} - -HRESULT -gst_wasapi2_ring_buffer_get_mute (GstWasapi2RingBuffer * buf, gboolean * mute) -{ - g_return_val_if_fail (GST_IS_WASAPI2_RING_BUFFER (buf), E_INVALIDARG); - g_return_val_if_fail (mute != nullptr, E_INVALIDARG); - - g_mutex_lock (&buf->volume_lock); - *mute = buf->mute; - g_mutex_unlock (&buf->volume_lock); - - return S_OK; -} - -HRESULT -gst_wasapi2_ring_buffer_set_volume (GstWasapi2RingBuffer * buf, gfloat volume) -{ - HRESULT hr = S_OK; - - g_return_val_if_fail (GST_IS_WASAPI2_RING_BUFFER (buf), E_INVALIDARG); - g_return_val_if_fail (volume >= 0 && volume <= 1.0, E_INVALIDARG); - - g_mutex_lock (&buf->volume_lock); - buf->volume = volume; - if (buf->volume_object) { - hr = gst_wasapi2_ring_buffer_set_channel_volumes (buf->volume_object, - volume); - } else { - buf->volume_changed = TRUE; - } - g_mutex_unlock (&buf->volume_lock); - - return hr; -} - -HRESULT -gst_wasapi2_ring_buffer_get_volume (GstWasapi2RingBuffer * buf, gfloat * volume) -{ - g_return_val_if_fail (GST_IS_WASAPI2_RING_BUFFER (buf), E_INVALIDARG); - g_return_val_if_fail (volume != nullptr, E_INVALIDARG); - - g_mutex_lock (&buf->volume_lock); - *volume = buf->volume; - g_mutex_unlock (&buf->volume_lock); - - return S_OK; -} - -void -gst_wasapi2_ring_buffer_set_device_mute_monitoring (GstWasapi2RingBuffer * buf, - gboolean value) -{ - g_return_if_fail (GST_IS_WASAPI2_RING_BUFFER (buf)); - - buf->priv->monitor_device_mute.store (value, std::memory_order_release); -}
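The acquire() path in the ring buffer above converts WASAPI's device period (a REFERENCE_TIME in 100-nanosecond units) to a frame count and sizes each ring-buffer segment to one period, keeping at least two segments. The sketch below is illustrative only (the helper names are ours, not GStreamer API); it mirrors the `gst_util_uint64_scale_round (default_period * 100, rate, GST_SECOND)` conversion and the `segsize`/`segtotal` derivation:

```c
#include <stdint.h>

/* Convert a device period in 100 ns units to frames at the stream rate,
 * rounding to nearest (period_100ns * rate / 10^7, like the scale_round
 * call in the code above but with the units pre-reduced). */
static uint32_t
period_to_frames (int64_t period_100ns, uint32_t rate)
{
  return (uint32_t) ((period_100ns * rate + 5000000) / 10000000);
}

/* Derive the segment size in bytes and the segment count from the device
 * buffer size, clamping segtotal to a minimum of 2 as acquire() does. */
static void
segment_layout (uint32_t buffer_size_frames, uint32_t period_frames,
    uint32_t bpf, uint32_t * segsize, uint32_t * segtotal)
{
  uint32_t total = buffer_size_frames / period_frames;

  *segsize = period_frames * bpf;
  *segtotal = total < 2 ? 2 : total;
}
```

For example, a typical 10 ms shared-mode period (100000 units) at 48 kHz gives 480 frames; with a 1440-frame device buffer and 4 bytes per frame (stereo 16-bit) that yields a 1920-byte segsize and segtotal 3.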
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/wasapi2/gstwasapi2ringbuffer.h
Deleted
@@ -1,59 +0,0 @@ -/* GStreamer - * Copyright (C) 2021 Seungha Yang <seungha@centricular.com> - * - * This library is free software; you can redistribute it and/or - * modify it under the terms of the GNU Library General Public - * License as published by the Free Software Foundation; either - * version 2 of the License, or (at your option) any later version. - * - * This library is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Library General Public License for more details. - * - * You should have received a copy of the GNU Library General Public - * License along with this library; if not, write to the - * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, - * Boston, MA 02110-1301, USA. - */ - -#ifndef __GST_WASAPI2_RING_BUFFER_H__ -#define __GST_WASAPI2_RING_BUFFER_H__ - -#include <gst/gst.h> -#include <gst/audio/audio.h> -#include "gstwasapi2util.h" - -G_BEGIN_DECLS - -#define GST_TYPE_WASAPI2_RING_BUFFER (gst_wasapi2_ring_buffer_get_type()) -G_DECLARE_FINAL_TYPE (GstWasapi2RingBuffer, gst_wasapi2_ring_buffer, - GST, WASAPI2_RING_BUFFER, GstAudioRingBuffer); - -GstAudioRingBuffer * gst_wasapi2_ring_buffer_new (GstWasapi2EndpointClass device_class, - gboolean low_latency, - const gchar *device_id, - gpointer dispatcher, - const gchar * name, - guint loopback_target_pid); - -GstCaps * gst_wasapi2_ring_buffer_get_caps (GstWasapi2RingBuffer * buf); - -HRESULT gst_wasapi2_ring_buffer_set_mute (GstWasapi2RingBuffer * buf, - gboolean mute); - -HRESULT gst_wasapi2_ring_buffer_get_mute (GstWasapi2RingBuffer * buf, - gboolean * mute); - -HRESULT gst_wasapi2_ring_buffer_set_volume (GstWasapi2RingBuffer * buf, - gfloat volume); - -HRESULT gst_wasapi2_ring_buffer_get_volume (GstWasapi2RingBuffer * buf, - gfloat * volume); - -void gst_wasapi2_ring_buffer_set_device_mute_monitoring (GstWasapi2RingBuffer * buf, - gboolean 
value); - -G_END_DECLS - -#endif /* __GST_WASAPI2_RING_BUFFER_H__ */
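The header above exposes paired set_mute/set_volume entry points; internally the ring buffer clamps the stream volume to [0.0, 1.0] and applies 0.0 to every channel while muted, leaving the stored volume untouched for unmute. A minimal sketch of that interaction (the function name is ours, not part of this header):

```c
/* Effective per-channel volume as the ring buffer computes it: clamp the
 * requested stream volume, then force 0.0 while muted. */
static float
effective_channel_volume (float volume, int muted)
{
  float v = volume < 0.0f ? 0.0f : (volume > 1.0f ? 1.0f : volume);

  return muted ? 0.0f : v;
}
```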
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/wasapi2/gstwasapi2sink.c
Deleted
@@ -1,460 +0,0 @@ -/* - * Copyright (C) 2008 Ole André Vadla Ravnås <ole.andre.ravnas@tandberg.com> - * Copyright (C) 2013 Collabora Ltd. - * Author: Sebastian Dröge <sebastian.droege@collabora.co.uk> - * Copyright (C) 2018 Centricular Ltd. - * Author: Nirbheek Chauhan <nirbheek@centricular.com> - * Copyright (C) 2020 Seungha Yang <seungha@centricular.com> - * - * This library is free software; you can redistribute it and/or - * modify it under the terms of the GNU Library General Public - * License as published by the Free Software Foundation; either - * version 2 of the License, or (at your option) any later version. - * - * This library is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Library General Public License for more details. - * - * You should have received a copy of the GNU Library General Public - * License along with this library; if not, write to the - * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, - * Boston, MA 02110-1301, USA. - */ - -/** - * SECTION:element-wasapi2sink - * @title: wasapi2sink - * - * Provides audio playback using the Windows Audio Session API available with - * Windows 10. - * - * ## Example pipelines - * | - * gst-launch-1.0 -v audiotestsrc ! wasapi2sink - * | Generate audio test buffers and render to the default audio device. - * - * | - * gst-launch-1.0 -v audiotestsrc samplesperbuffer=160 !
wasapi2sink low-latency=true - * | Same as above, but with the minimum possible latency - * - */ -#ifdef HAVE_CONFIG_H -#include <config.h> -#endif - -#include "gstwasapi2sink.h" -#include "gstwasapi2util.h" -#include "gstwasapi2ringbuffer.h" - -GST_DEBUG_CATEGORY_STATIC (gst_wasapi2_sink_debug); -#define GST_CAT_DEFAULT gst_wasapi2_sink_debug - -static GstStaticPadTemplate sink_template = GST_STATIC_PAD_TEMPLATE ("sink", - GST_PAD_SINK, - GST_PAD_ALWAYS, - GST_STATIC_CAPS (GST_WASAPI2_STATIC_CAPS)); - -#define DEFAULT_LOW_LATENCY FALSE -#define DEFAULT_MUTE FALSE -#define DEFAULT_VOLUME 1.0 - -enum -{ - PROP_0, - PROP_DEVICE, - PROP_LOW_LATENCY, - PROP_MUTE, - PROP_VOLUME, - PROP_DISPATCHER, -}; - -struct _GstWasapi2Sink -{ - GstAudioBaseSink parent; - - /* properties */ - gchar *device_id; - gboolean low_latency; - gboolean mute; - gdouble volume; - gpointer dispatcher; - - gboolean mute_changed; - gboolean volume_changed; -}; - -static void gst_wasapi2_sink_finalize (GObject * object); -static void gst_wasapi2_sink_set_property (GObject * object, guint prop_id, - const GValue * value, GParamSpec * pspec); -static void gst_wasapi2_sink_get_property (GObject * object, guint prop_id, - GValue * value, GParamSpec * pspec); - -static GstStateChangeReturn gst_wasapi2_sink_change_state (GstElement * - element, GstStateChange transition); - -static GstCaps *gst_wasapi2_sink_get_caps (GstBaseSink * bsink, - GstCaps * filter); -static GstAudioRingBuffer *gst_wasapi2_sink_create_ringbuffer (GstAudioBaseSink - * sink); - -static void gst_wasapi2_sink_set_mute (GstWasapi2Sink * self, gboolean mute); -static gboolean gst_wasapi2_sink_get_mute (GstWasapi2Sink * self); -static void gst_wasapi2_sink_set_volume (GstWasapi2Sink * self, gdouble volume); -static gdouble gst_wasapi2_sink_get_volume (GstWasapi2Sink * self); - -#define gst_wasapi2_sink_parent_class parent_class -G_DEFINE_TYPE_WITH_CODE (GstWasapi2Sink, gst_wasapi2_sink, - GST_TYPE_AUDIO_BASE_SINK, - 
G_IMPLEMENT_INTERFACE (GST_TYPE_STREAM_VOLUME, NULL)); - -static void -gst_wasapi2_sink_class_init (GstWasapi2SinkClass * klass) -{ - GObjectClass *gobject_class = G_OBJECT_CLASS (klass); - GstElementClass *element_class = GST_ELEMENT_CLASS (klass); - GstBaseSinkClass *basesink_class = GST_BASE_SINK_CLASS (klass); - GstAudioBaseSinkClass *audiobasesink_class = - GST_AUDIO_BASE_SINK_CLASS (klass); - - gobject_class->finalize = gst_wasapi2_sink_finalize; - gobject_class->set_property = gst_wasapi2_sink_set_property; - gobject_class->get_property = gst_wasapi2_sink_get_property; - - g_object_class_install_property (gobject_class, PROP_DEVICE, - g_param_spec_string ("device", "Device", - "Audio device ID as provided by " - "WASAPI device endpoint ID as provided by IMMDevice::GetId", - NULL, GST_PARAM_MUTABLE_READY | G_PARAM_READWRITE | - G_PARAM_STATIC_STRINGS)); - - g_object_class_install_property (gobject_class, PROP_LOW_LATENCY, - g_param_spec_boolean ("low-latency", "Low latency", - "Optimize all settings for lowest latency. Always safe to enable.", - DEFAULT_LOW_LATENCY, GST_PARAM_MUTABLE_READY | G_PARAM_READWRITE | - G_PARAM_STATIC_STRINGS)); - - g_object_class_install_property (gobject_class, PROP_MUTE, - g_param_spec_boolean ("mute", "Mute", "Mute state of this stream", - DEFAULT_MUTE, GST_PARAM_MUTABLE_PLAYING | G_PARAM_READWRITE | - G_PARAM_STATIC_STRINGS)); - - g_object_class_install_property (gobject_class, PROP_VOLUME, - g_param_spec_double ("volume", "Volume", "Volume of this stream", - 0.0, 1.0, DEFAULT_VOLUME, - GST_PARAM_MUTABLE_PLAYING | G_PARAM_READWRITE | - G_PARAM_STATIC_STRINGS)); - - /** - * GstWasapi2Sink:dispatcher: - * - * ICoreDispatcher COM object used for activating device from UI thread. - * - * Since: 1.18 - */ - g_object_class_install_property (gobject_class, PROP_DISPATCHER, - g_param_spec_pointer ("dispatcher", "Dispatcher", - "ICoreDispatcher COM object to use. 
In order for application to ask " - "permission of audio device, device activation should be running " - "on UI thread via ICoreDispatcher. This element will increase " - "the reference count of given ICoreDispatcher and release it after " - "use. Therefore, caller does not need to consider additional " - "reference count management", - GST_PARAM_MUTABLE_READY | G_PARAM_WRITABLE | G_PARAM_STATIC_STRINGS)); - - gst_element_class_add_static_pad_template (element_class, &sink_template); - gst_element_class_set_static_metadata (element_class, "Wasapi2Sink", - "Sink/Audio/Hardware", - "Stream audio to an audio capture device through WASAPI", - "Nirbheek Chauhan <nirbheek@centricular.com>, " - "Ole André Vadla Ravnås <ole.andre.ravnas@tandberg.com>, " - "Seungha Yang <seungha@centricular.com>"); - - element_class->change_state = - GST_DEBUG_FUNCPTR (gst_wasapi2_sink_change_state); - - basesink_class->get_caps = GST_DEBUG_FUNCPTR (gst_wasapi2_sink_get_caps); - - audiobasesink_class->create_ringbuffer = - GST_DEBUG_FUNCPTR (gst_wasapi2_sink_create_ringbuffer); - - GST_DEBUG_CATEGORY_INIT (gst_wasapi2_sink_debug, "wasapi2sink", - 0, "Windows audio session API sink"); -} - -static void -gst_wasapi2_sink_init (GstWasapi2Sink * self) -{ - self->low_latency = DEFAULT_LOW_LATENCY; - self->mute = DEFAULT_MUTE; - self->volume = DEFAULT_VOLUME; -} - -static void -gst_wasapi2_sink_finalize (GObject * object) -{ - GstWasapi2Sink *self = GST_WASAPI2_SINK (object); - - g_free (self->device_id); - - G_OBJECT_CLASS (parent_class)->finalize (object); -} - -static void -gst_wasapi2_sink_set_property (GObject * object, guint prop_id, - const GValue * value, GParamSpec * pspec) -{ - GstWasapi2Sink *self = GST_WASAPI2_SINK (object); - - switch (prop_id) { - case PROP_DEVICE: - g_free (self->device_id); - self->device_id = g_value_dup_string (value); - break; - case PROP_LOW_LATENCY: - self->low_latency = g_value_get_boolean (value); - break; - case PROP_MUTE: - gst_wasapi2_sink_set_mute 
(self, g_value_get_boolean (value)); - break; - case PROP_VOLUME: - gst_wasapi2_sink_set_volume (self, g_value_get_double (value)); - break; - case PROP_DISPATCHER: - self->dispatcher = g_value_get_pointer (value); - break; - default: - G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); - break; - } -} - -static void -gst_wasapi2_sink_get_property (GObject * object, guint prop_id, - GValue * value, GParamSpec * pspec) -{ - GstWasapi2Sink *self = GST_WASAPI2_SINK (object); - - switch (prop_id) { - case PROP_DEVICE: - g_value_set_string (value, self->device_id); - break; - case PROP_LOW_LATENCY: - g_value_set_boolean (value, self->low_latency); - break; - case PROP_MUTE: - g_value_set_boolean (value, gst_wasapi2_sink_get_mute (self)); - break; - case PROP_VOLUME: - g_value_set_double (value, gst_wasapi2_sink_get_volume (self)); - break; - default: - G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); - break; - } -} - -static GstStateChangeReturn -gst_wasapi2_sink_change_state (GstElement * element, GstStateChange transition) -{ - GstWasapi2Sink *self = GST_WASAPI2_SINK (element); - GstAudioBaseSink *asink = GST_AUDIO_BASE_SINK_CAST (element); - - switch (transition) { - case GST_STATE_CHANGE_READY_TO_PAUSED: - /* If we have pending volume/mute values to set, do here */ - GST_OBJECT_LOCK (self); - if (asink->ringbuffer) { - GstWasapi2RingBuffer *ringbuffer = - GST_WASAPI2_RING_BUFFER (asink->ringbuffer); - - if (self->volume_changed) { - gst_wasapi2_ring_buffer_set_volume (ringbuffer, self->volume); - self->volume_changed = FALSE; - } - - if (self->mute_changed) { - gst_wasapi2_ring_buffer_set_mute (ringbuffer, self->mute); - self->mute_changed = FALSE; - } - } - GST_OBJECT_UNLOCK (self); - break; - default: - break; - } - - return GST_ELEMENT_CLASS (parent_class)->change_state (element, transition); -} - -static GstCaps * -gst_wasapi2_sink_get_caps (GstBaseSink * bsink, GstCaps * filter) -{ - GstAudioBaseSink *asink = GST_AUDIO_BASE_SINK_CAST 
(bsink); - GstCaps *caps = NULL; - - GST_OBJECT_LOCK (bsink); - if (asink->ringbuffer) { - GstWasapi2RingBuffer *ringbuffer = - GST_WASAPI2_RING_BUFFER (asink->ringbuffer); - - gst_object_ref (ringbuffer); - GST_OBJECT_UNLOCK (bsink); - - /* Get caps might be able to block if device is not activated yet */ - caps = gst_wasapi2_ring_buffer_get_caps (ringbuffer); - gst_object_unref (ringbuffer); - } else { - GST_OBJECT_UNLOCK (bsink); - } - - if (!caps) - caps = gst_pad_get_pad_template_caps (bsink->sinkpad); - - if (filter) { - GstCaps *filtered = - gst_caps_intersect_full (filter, caps, GST_CAPS_INTERSECT_FIRST); - gst_caps_unref (caps); - caps = filtered; - } - - GST_DEBUG_OBJECT (bsink, "returning caps %" GST_PTR_FORMAT, caps); - - return caps; -} - -static GstAudioRingBuffer * -gst_wasapi2_sink_create_ringbuffer (GstAudioBaseSink * sink) -{ - GstWasapi2Sink *self = GST_WASAPI2_SINK (sink); - GstAudioRingBuffer *ringbuffer; - gchar *name; - - name = g_strdup_printf ("%s-ringbuffer", GST_OBJECT_NAME (sink)); - - ringbuffer = - gst_wasapi2_ring_buffer_new (GST_WASAPI2_ENDPOINT_CLASS_RENDER, - self->low_latency, self->device_id, self->dispatcher, name, 0); - - g_free (name); - - return ringbuffer; -} - -static void -gst_wasapi2_sink_set_mute (GstWasapi2Sink * self, gboolean mute) -{ - GstAudioBaseSink *bsink = GST_AUDIO_BASE_SINK_CAST (self); - HRESULT hr; - - GST_OBJECT_LOCK (self); - - self->mute = mute; - self->mute_changed = TRUE; - - if (bsink->ringbuffer) { - GstWasapi2RingBuffer *ringbuffer = - GST_WASAPI2_RING_BUFFER (bsink->ringbuffer); - - hr = gst_wasapi2_ring_buffer_set_mute (ringbuffer, mute); - - if (FAILED (hr)) { - GST_INFO_OBJECT (self, "Couldn't set mute"); - } else { - self->mute_changed = FALSE; - } - } - - GST_OBJECT_UNLOCK (self); -} - -static gboolean -gst_wasapi2_sink_get_mute (GstWasapi2Sink * self) -{ - GstAudioBaseSink *bsink = GST_AUDIO_BASE_SINK_CAST (self); - gboolean mute; - HRESULT hr; - - GST_OBJECT_LOCK (self); - - mute = 
self->mute; - - if (bsink->ringbuffer) { - GstWasapi2RingBuffer *ringbuffer = - GST_WASAPI2_RING_BUFFER (bsink->ringbuffer); - - hr = gst_wasapi2_ring_buffer_get_mute (ringbuffer, &mute); - - if (FAILED (hr)) { - GST_INFO_OBJECT (self, "Couldn't get mute"); - } else { - self->mute = mute; - } - } - - GST_OBJECT_UNLOCK (self); - - return mute; -} - -static void -gst_wasapi2_sink_set_volume (GstWasapi2Sink * self, gdouble volume) -{ - GstAudioBaseSink *bsink = GST_AUDIO_BASE_SINK_CAST (self); - HRESULT hr; - - GST_OBJECT_LOCK (self); - - self->volume = volume; - /* clip volume value */ - self->volume = MAX (0.0, self->volume); - self->volume = MIN (1.0, self->volume); - self->volume_changed = TRUE; - - if (bsink->ringbuffer) { - GstWasapi2RingBuffer *ringbuffer = - GST_WASAPI2_RING_BUFFER (bsink->ringbuffer); - - hr = gst_wasapi2_ring_buffer_set_volume (ringbuffer, (gfloat) self->volume); - - if (FAILED (hr)) { - GST_INFO_OBJECT (self, "Couldn't set volume"); - } else { - self->volume_changed = FALSE; - } - } - - GST_OBJECT_UNLOCK (self); -} - -static gdouble -gst_wasapi2_sink_get_volume (GstWasapi2Sink * self) -{ - GstAudioBaseSink *bsink = GST_AUDIO_BASE_SINK_CAST (self); - gfloat volume; - HRESULT hr; - - GST_OBJECT_LOCK (self); - - volume = (gfloat) self->volume; - - if (bsink->ringbuffer) { - GstWasapi2RingBuffer *ringbuffer = - GST_WASAPI2_RING_BUFFER (bsink->ringbuffer); - - hr = gst_wasapi2_ring_buffer_get_volume (ringbuffer, &volume); - - if (FAILED (hr)) { - GST_INFO_OBJECT (self, "Couldn't set volume"); - } else { - self->volume = volume; - } - } - - GST_OBJECT_UNLOCK (self); - - volume = MAX (0.0, volume); - volume = MIN (1.0, volume); - - return volume; -}
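wasapi2sink above defers volume/mute applied before the ring buffer exists: the property setter records the value plus a `*_changed` flag, and the READY→PAUSED transition flushes any pending value once the ring buffer is available. A minimal sketch of that deferred-property pattern, with hypothetical types standing in for the element and ring buffer:

```c
/* FakeSink is a stand-in for GstWasapi2Sink; "applied" stands in for the
 * value that actually reached the WASAPI session volume. */
typedef struct
{
  double volume;            /* property value requested by the app */
  int volume_changed;       /* pending flag, like self->volume_changed */
  int has_ringbuffer;       /* stands in for asink->ringbuffer != NULL */
  double applied;           /* what actually reached the audio session */
} FakeSink;

static void
fake_sink_set_volume (FakeSink * s, double volume)
{
  s->volume = volume;
  s->volume_changed = 1;
  if (s->has_ringbuffer) {
    /* ring buffer exists: apply immediately and clear the pending flag */
    s->applied = volume;
    s->volume_changed = 0;
  }
}

static void
fake_sink_ready_to_paused (FakeSink * s)
{
  s->has_ringbuffer = 1;
  if (s->volume_changed) {
    /* flush the value that was set while no ring buffer existed */
    s->applied = s->volume;
    s->volume_changed = 0;
  }
}
```

The same flag-and-flush shape is used for mute; it keeps property sets legal in any state without racing device activation.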
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/wasapi2/gstwasapi2src.c
Deleted
@@ -1,663 +0,0 @@ -/* - * Copyright (C) 2008 Ole André Vadla Ravnås <ole.andre.ravnas@tandberg.com> - * Copyright (C) 2018 Centricular Ltd. - * Author: Nirbheek Chauhan <nirbheek@centricular.com> - * Copyright (C) 2020 Seungha Yang <seungha@centricular.com> - * - * This library is free software; you can redistribute it and/or - * modify it under the terms of the GNU Library General Public - * License as published by the Free Software Foundation; either - * version 2 of the License, or (at your option) any later version. - * - * This library is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Library General Public License for more details. - * - * You should have received a copy of the GNU Library General Public - * License along with this library; if not, write to the - * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, - * Boston, MA 02110-1301, USA. - */ - -/** - * SECTION:element-wasapi2src - * @title: wasapi2src - * - * Provides audio capture from the Windows Audio Session API available with - * Windows 10. - * - * ## Example pipelines - * | - * gst-launch-1.0 -v wasapi2src ! fakesink - * | Capture from the default audio device and render to fakesink. - * - * | - * gst-launch-1.0 -v wasapi2src low-latency=true ! fakesink - * | Capture from the default audio device with the minimum possible latency and render to fakesink. 
- * - */ -#ifdef HAVE_CONFIG_H -#include <config.h> -#endif - -#include "gstwasapi2src.h" -#include "gstwasapi2util.h" -#include "gstwasapi2ringbuffer.h" - -GST_DEBUG_CATEGORY_STATIC (gst_wasapi2_src_debug); -#define GST_CAT_DEFAULT gst_wasapi2_src_debug - -static GstStaticPadTemplate src_template = GST_STATIC_PAD_TEMPLATE ("src", - GST_PAD_SRC, - GST_PAD_ALWAYS, - GST_STATIC_CAPS (GST_WASAPI2_STATIC_CAPS)); - -/** - * GstWasapi2SrcLoopbackMode: - * - * Loopback capture mode - * - * Since: 1.22 - */ -typedef enum -{ - /** - * GstWasapi2SrcLoopbackMode::default: - * - * Default loopback mode - * - * Since: 1.22 - */ - GST_WASAPI2_SRC_LOOPBACK_DEFAULT, - - /** - * GstWasapi2SrcLoopbackMode::include-process-tree: - * - * Captures only specified process and its child process - * - * Since: 1.22 - */ - GST_WASAPI2_SRC_LOOPBACK_INCLUDE_PROCESS_TREE, - - /** - * GstWasapi2SrcLoopbackMode::exclude-process-tree: - * - * Excludes specified process and its child process - * - * Since: 1.22 - */ - GST_WASAPI2_SRC_LOOPBACK_EXCLUDE_PROCESS_TREE, -} GstWasapi2SrcLoopbackMode; - -#define GST_TYPE_WASAPI2_SRC_LOOPBACK_MODE (gst_wasapi2_src_loopback_mode_get_type ()) -static GType -gst_wasapi2_src_loopback_mode_get_type (void) -{ - static GType loopback_type = 0; - static const GEnumValue types = { - {GST_WASAPI2_SRC_LOOPBACK_DEFAULT, "Default", "default"}, - {GST_WASAPI2_SRC_LOOPBACK_INCLUDE_PROCESS_TREE, - "Include process and its child processes", - "include-process-tree"}, - {GST_WASAPI2_SRC_LOOPBACK_EXCLUDE_PROCESS_TREE, - "Exclude process and its child processes", - "exclude-process-tree"}, - {0, NULL, NULL} - }; - - if (g_once_init_enter (&loopback_type)) { - GType gtype = g_enum_register_static ("GstWasapi2SrcLoopbackMode", types); - g_once_init_leave (&loopback_type, gtype); - } - - return loopback_type; -} - -#define DEFAULT_LOW_LATENCY FALSE -#define DEFAULT_MUTE FALSE -#define DEFAULT_VOLUME 1.0 -#define DEFAULT_LOOPBACK FALSE -#define DEFAULT_LOOPBACK_MODE 
GST_WASAPI2_SRC_LOOPBACK_DEFAULT -#define DEFAULT_LOOPBACK_SILENCE_ON_DEVICE_MUTE FALSE - -enum -{ - PROP_0, - PROP_DEVICE, - PROP_LOW_LATENCY, - PROP_MUTE, - PROP_VOLUME, - PROP_DISPATCHER, - PROP_LOOPBACK, - PROP_LOOPBACK_MODE, - PROP_LOOPBACK_TARGET_PID, - PROP_LOOPBACK_SILENCE_ON_DEVICE_MUTE, -}; - -struct _GstWasapi2Src -{ - GstAudioBaseSrc parent; - - /* properties */ - gchar *device_id; - gboolean low_latency; - gboolean mute; - gdouble volume; - gpointer dispatcher; - gboolean loopback; - GstWasapi2SrcLoopbackMode loopback_mode; - guint loopback_pid; - gboolean loopback_silence_on_device_mute; - - gboolean mute_changed; - gboolean volume_changed; -}; - -static void gst_wasapi2_src_finalize (GObject * object); -static void gst_wasapi2_src_set_property (GObject * object, guint prop_id, - const GValue * value, GParamSpec * pspec); -static void gst_wasapi2_src_get_property (GObject * object, guint prop_id, - GValue * value, GParamSpec * pspec); - -static GstStateChangeReturn gst_wasapi2_src_change_state (GstElement * - element, GstStateChange transition); - -static GstCaps *gst_wasapi2_src_get_caps (GstBaseSrc * bsrc, GstCaps * filter); -static GstAudioRingBuffer *gst_wasapi2_src_create_ringbuffer (GstAudioBaseSrc * - src); - -static void gst_wasapi2_src_set_mute (GstWasapi2Src * self, gboolean mute); -static gboolean gst_wasapi2_src_get_mute (GstWasapi2Src * self); -static void gst_wasapi2_src_set_volume (GstWasapi2Src * self, gdouble volume); -static gdouble gst_wasapi2_src_get_volume (GstWasapi2Src * self); -static void gst_wasapi2_src_set_silence_on_mute (GstWasapi2Src * self, - gboolean value); - -#define gst_wasapi2_src_parent_class parent_class -G_DEFINE_TYPE_WITH_CODE (GstWasapi2Src, gst_wasapi2_src, - GST_TYPE_AUDIO_BASE_SRC, - G_IMPLEMENT_INTERFACE (GST_TYPE_STREAM_VOLUME, NULL)); - -static void -gst_wasapi2_src_class_init (GstWasapi2SrcClass * klass) -{ - GObjectClass *gobject_class = G_OBJECT_CLASS (klass); - GstElementClass *element_class = 
GST_ELEMENT_CLASS (klass); - GstBaseSrcClass *basesrc_class = GST_BASE_SRC_CLASS (klass); - GstAudioBaseSrcClass *audiobasesrc_class = GST_AUDIO_BASE_SRC_CLASS (klass); - - gobject_class->finalize = gst_wasapi2_src_finalize; - gobject_class->set_property = gst_wasapi2_src_set_property; - gobject_class->get_property = gst_wasapi2_src_get_property; - - g_object_class_install_property (gobject_class, PROP_DEVICE, - g_param_spec_string ("device", "Device", - "Audio device ID as provided by " - "WASAPI device endpoint ID as provided by IMMDevice::GetId", - NULL, GST_PARAM_MUTABLE_READY | G_PARAM_READWRITE | - G_PARAM_STATIC_STRINGS)); - - g_object_class_install_property (gobject_class, PROP_LOW_LATENCY, - g_param_spec_boolean ("low-latency", "Low latency", - "Optimize all settings for lowest latency. Always safe to enable.", - DEFAULT_LOW_LATENCY, GST_PARAM_MUTABLE_READY | G_PARAM_READWRITE | - G_PARAM_STATIC_STRINGS)); - - g_object_class_install_property (gobject_class, PROP_MUTE, - g_param_spec_boolean ("mute", "Mute", "Mute state of this stream", - DEFAULT_MUTE, GST_PARAM_MUTABLE_PLAYING | G_PARAM_READWRITE | - G_PARAM_STATIC_STRINGS)); - - g_object_class_install_property (gobject_class, PROP_VOLUME, - g_param_spec_double ("volume", "Volume", "Volume of this stream", - 0.0, 1.0, DEFAULT_VOLUME, - GST_PARAM_MUTABLE_PLAYING | G_PARAM_READWRITE | - G_PARAM_STATIC_STRINGS)); - - /** - * GstWasapi2Src:dispatcher: - * - * ICoreDispatcher COM object used for activating device from UI thread. - * - * Since: 1.18 - */ - g_object_class_install_property (gobject_class, PROP_DISPATCHER, - g_param_spec_pointer ("dispatcher", "Dispatcher", - "ICoreDispatcher COM object to use. In order for application to ask " - "permission of audio device, device activation should be running " - "on UI thread via ICoreDispatcher. This element will increase " - "the reference count of given ICoreDispatcher and release it after " - "use. 
Therefore, caller does not need to consider additional " - "reference count management", - GST_PARAM_MUTABLE_READY | G_PARAM_WRITABLE | G_PARAM_STATIC_STRINGS)); - - /** - * GstWasapi2Src:loopback: - * - * Open render device for loopback recording - * - * Since: 1.20 - */ - g_object_class_install_property (gobject_class, PROP_LOOPBACK, - g_param_spec_boolean ("loopback", "Loopback recording", - "Open render device for loopback recording", DEFAULT_LOOPBACK, - GST_PARAM_MUTABLE_READY | G_PARAM_READWRITE | - G_PARAM_STATIC_STRINGS)); - - if (gst_wasapi2_can_process_loopback ()) { - /** - * GstWasapi2Src:loopback-mode: - * - * Loopback mode. "target-process-id" must be specified in case of - * process loopback modes. - * - * This feature requires "Windows 10 build 20348" - * - * Since: 1.22 - */ - g_object_class_install_property (gobject_class, PROP_LOOPBACK_MODE, - g_param_spec_enum ("loopback-mode", "Loopback Mode", - "Loopback mode to use", GST_TYPE_WASAPI2_SRC_LOOPBACK_MODE, - DEFAULT_LOOPBACK_MODE, - GST_PARAM_CONDITIONALLY_AVAILABLE | GST_PARAM_MUTABLE_READY | - G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS)); - - /** - * GstWasapi2Src:loopback-target-pid: - * - * Target process id to be recorded or excluded depending on loopback mode - * - * This feature requires "Windows 10 build 20348" - * - * Since: 1.22 - */ - g_object_class_install_property (gobject_class, PROP_LOOPBACK_TARGET_PID, - g_param_spec_uint ("loopback-target-pid", "Loopback Target PID", - "Process ID to be recorded or excluded for process loopback mode", - 0, G_MAXUINT32, 0, - GST_PARAM_CONDITIONALLY_AVAILABLE | GST_PARAM_MUTABLE_READY | - G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS)); - } - - /** - * GstWasapi2Src:loopback-silence-on-device-mute: - * - * When loopback recording, if the device is muted, inject silence in the pipeline - * - * Since: 1.24 - */ - g_object_class_install_property (gobject_class, - PROP_LOOPBACK_SILENCE_ON_DEVICE_MUTE, - g_param_spec_boolean 
("loopback-silence-on-device-mute", - "Loopback Silence On Device Mute", - "When loopback recording, if the device is muted, inject silence in the pipeline", - DEFAULT_LOOPBACK_SILENCE_ON_DEVICE_MUTE, - GST_PARAM_MUTABLE_PLAYING | G_PARAM_READWRITE | - G_PARAM_STATIC_STRINGS)); - - gst_element_class_add_static_pad_template (element_class, &src_template); - gst_element_class_set_static_metadata (element_class, "Wasapi2Src", - "Source/Audio/Hardware", - "Stream audio from an audio capture device through WASAPI", - "Nirbheek Chauhan <nirbheek@centricular.com>, " - "Ole André Vadla Ravnås <ole.andre.ravnas@tandberg.com>, " - "Seungha Yang <seungha@centricular.com>"); - - element_class->change_state = - GST_DEBUG_FUNCPTR (gst_wasapi2_src_change_state); - - basesrc_class->get_caps = GST_DEBUG_FUNCPTR (gst_wasapi2_src_get_caps); - - audiobasesrc_class->create_ringbuffer = - GST_DEBUG_FUNCPTR (gst_wasapi2_src_create_ringbuffer); - - GST_DEBUG_CATEGORY_INIT (gst_wasapi2_src_debug, "wasapi2src", - 0, "Windows audio session API source"); - - if (gst_wasapi2_can_process_loopback ()) - gst_type_mark_as_plugin_api (GST_TYPE_WASAPI2_SRC_LOOPBACK_MODE, 0); -} - -static void -gst_wasapi2_src_init (GstWasapi2Src * self) -{ - self->mute = DEFAULT_MUTE; - self->volume = DEFAULT_VOLUME; - self->low_latency = DEFAULT_LOW_LATENCY; - self->loopback = DEFAULT_LOOPBACK; - self->loopback_silence_on_device_mute = - DEFAULT_LOOPBACK_SILENCE_ON_DEVICE_MUTE; -} - -static void -gst_wasapi2_src_finalize (GObject * object) -{ - GstWasapi2Src *self = GST_WASAPI2_SRC (object); - - g_free (self->device_id); - - G_OBJECT_CLASS (parent_class)->finalize (object); -} - -static void -gst_wasapi2_src_set_property (GObject * object, guint prop_id, - const GValue * value, GParamSpec * pspec) -{ - GstWasapi2Src *self = GST_WASAPI2_SRC (object); - - switch (prop_id) { - case PROP_DEVICE: - g_free (self->device_id); - self->device_id = g_value_dup_string (value); - break; - case PROP_LOW_LATENCY: - 
self->low_latency = g_value_get_boolean (value); - break; - case PROP_MUTE: - gst_wasapi2_src_set_mute (self, g_value_get_boolean (value)); - break; - case PROP_VOLUME: - gst_wasapi2_src_set_volume (self, g_value_get_double (value)); - break; - case PROP_DISPATCHER: - self->dispatcher = g_value_get_pointer (value); - break; - case PROP_LOOPBACK: - self->loopback = g_value_get_boolean (value); - break; - case PROP_LOOPBACK_MODE: - self->loopback_mode = g_value_get_enum (value); - break; - case PROP_LOOPBACK_TARGET_PID: - self->loopback_pid = g_value_get_uint (value); - break; - case PROP_LOOPBACK_SILENCE_ON_DEVICE_MUTE: - gst_wasapi2_src_set_silence_on_mute (self, g_value_get_boolean (value)); - break; - default: - G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); - break; - } -} - -static void -gst_wasapi2_src_get_property (GObject * object, guint prop_id, - GValue * value, GParamSpec * pspec) -{ - GstWasapi2Src *self = GST_WASAPI2_SRC (object); - - switch (prop_id) { - case PROP_DEVICE: - g_value_set_string (value, self->device_id); - break; - case PROP_LOW_LATENCY: - g_value_set_boolean (value, self->low_latency); - break; - case PROP_MUTE: - g_value_set_boolean (value, gst_wasapi2_src_get_mute (self)); - break; - case PROP_VOLUME: - g_value_set_double (value, gst_wasapi2_src_get_volume (self)); - break; - case PROP_LOOPBACK: - g_value_set_boolean (value, self->loopback); - break; - case PROP_LOOPBACK_MODE: - g_value_set_enum (value, self->loopback_mode); - break; - case PROP_LOOPBACK_TARGET_PID: - g_value_set_uint (value, self->loopback_pid); - break; - case PROP_LOOPBACK_SILENCE_ON_DEVICE_MUTE: - g_value_set_boolean (value, self->loopback_silence_on_device_mute); - break; - default: - G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); - break; - } -} - -static GstStateChangeReturn -gst_wasapi2_src_change_state (GstElement * element, GstStateChange transition) -{ - GstWasapi2Src *self = GST_WASAPI2_SRC (element); - GstAudioBaseSrc *asrc = 
GST_AUDIO_BASE_SRC_CAST (element); - - switch (transition) { - case GST_STATE_CHANGE_READY_TO_PAUSED: - /* If we have pending volume/mute values to set, do here */ - GST_OBJECT_LOCK (self); - if (asrc->ringbuffer) { - GstWasapi2RingBuffer *ringbuffer = - GST_WASAPI2_RING_BUFFER (asrc->ringbuffer); - - if (self->volume_changed) { - gst_wasapi2_ring_buffer_set_volume (ringbuffer, self->volume); - self->volume_changed = FALSE; - } - - if (self->mute_changed) { - gst_wasapi2_ring_buffer_set_mute (ringbuffer, self->mute); - self->mute_changed = FALSE; - } - } - GST_OBJECT_UNLOCK (self); - break; - default: - break; - } - - return GST_ELEMENT_CLASS (parent_class)->change_state (element, transition); -} - -static GstCaps * -gst_wasapi2_src_get_caps (GstBaseSrc * bsrc, GstCaps * filter) -{ - GstAudioBaseSrc *asrc = GST_AUDIO_BASE_SRC_CAST (bsrc); - GstCaps *caps = NULL; - - GST_OBJECT_LOCK (bsrc); - if (asrc->ringbuffer) { - GstWasapi2RingBuffer *ringbuffer = - GST_WASAPI2_RING_BUFFER (asrc->ringbuffer); - - gst_object_ref (ringbuffer); - GST_OBJECT_UNLOCK (bsrc); - - /* Get caps might be able to block if device is not activated yet */ - caps = gst_wasapi2_ring_buffer_get_caps (ringbuffer); - gst_object_unref (ringbuffer); - } else { - GST_OBJECT_UNLOCK (bsrc); - } - - if (!caps) - caps = gst_pad_get_pad_template_caps (bsrc->srcpad); - - if (filter) { - GstCaps *filtered = - gst_caps_intersect_full (filter, caps, GST_CAPS_INTERSECT_FIRST); - gst_caps_unref (caps); - caps = filtered; - } - - GST_DEBUG_OBJECT (bsrc, "returning caps %" GST_PTR_FORMAT, caps); - - return caps; -} - -static GstAudioRingBuffer * -gst_wasapi2_src_create_ringbuffer (GstAudioBaseSrc * src) -{ - GstWasapi2Src *self = GST_WASAPI2_SRC (src); - GstAudioRingBuffer *ringbuffer; - gchar *name; - GstWasapi2EndpointClass device_class = GST_WASAPI2_ENDPOINT_CLASS_CAPTURE; - - if (self->loopback_pid) { - if (self->loopback_mode == GST_WASAPI2_SRC_LOOPBACK_INCLUDE_PROCESS_TREE) { - device_class = - 
GST_WASAPI2_ENDPOINT_CLASS_INCLUDE_PROCESS_LOOPBACK_CAPTURE; - } else if (self->loopback_mode == - GST_WASAPI2_SRC_LOOPBACK_EXCLUDE_PROCESS_TREE) { - device_class = - GST_WASAPI2_ENDPOINT_CLASS_EXCLUDE_PROCESS_LOOPBACK_CAPTURE; - } - } else if (self->loopback) { - device_class = GST_WASAPI2_ENDPOINT_CLASS_LOOPBACK_CAPTURE; - } - - GST_DEBUG_OBJECT (self, "Device class %d", device_class); - - name = g_strdup_printf ("%s-ringbuffer", GST_OBJECT_NAME (src)); - - ringbuffer = - gst_wasapi2_ring_buffer_new (device_class, - self->low_latency, self->device_id, self->dispatcher, name, - self->loopback_pid); - g_free (name); - - if (self->loopback) { - gst_wasapi2_ring_buffer_set_device_mute_monitoring (GST_WASAPI2_RING_BUFFER - (ringbuffer), self->loopback_silence_on_device_mute); - } - - return ringbuffer; -} - -static void -gst_wasapi2_src_set_mute (GstWasapi2Src * self, gboolean mute) -{ - GstAudioBaseSrc *bsrc = GST_AUDIO_BASE_SRC_CAST (self); - HRESULT hr; - - GST_OBJECT_LOCK (self); - - self->mute = mute; - self->mute_changed = TRUE; - - if (bsrc->ringbuffer) { - GstWasapi2RingBuffer *ringbuffer = - GST_WASAPI2_RING_BUFFER (bsrc->ringbuffer); - - hr = gst_wasapi2_ring_buffer_set_mute (ringbuffer, mute); - if (FAILED (hr)) { - GST_INFO_OBJECT (self, "Couldn't set mute"); - } else { - self->mute_changed = FALSE; - } - } - - GST_OBJECT_UNLOCK (self); -} - -static gboolean -gst_wasapi2_src_get_mute (GstWasapi2Src * self) -{ - GstAudioBaseSrc *bsrc = GST_AUDIO_BASE_SRC_CAST (self); - gboolean mute; - HRESULT hr; - - GST_OBJECT_LOCK (self); - - mute = self->mute; - - if (bsrc->ringbuffer) { - GstWasapi2RingBuffer *ringbuffer = - GST_WASAPI2_RING_BUFFER (bsrc->ringbuffer); - - hr = gst_wasapi2_ring_buffer_get_mute (ringbuffer, &mute); - - if (FAILED (hr)) { - GST_INFO_OBJECT (self, "Couldn't get mute"); - } else { - self->mute = mute; - } - } - - GST_OBJECT_UNLOCK (self); - - return mute; -} - -static void -gst_wasapi2_src_set_volume (GstWasapi2Src * self, gdouble volume) 
-{ - GstAudioBaseSrc *bsrc = GST_AUDIO_BASE_SRC_CAST (self); - HRESULT hr; - - GST_OBJECT_LOCK (self); - - self->volume = volume; - /* clip volume value */ - self->volume = MAX (0.0, self->volume); - self->volume = MIN (1.0, self->volume); - self->volume_changed = TRUE; - - if (bsrc->ringbuffer) { - GstWasapi2RingBuffer *ringbuffer = - GST_WASAPI2_RING_BUFFER (bsrc->ringbuffer); - - hr = gst_wasapi2_ring_buffer_set_volume (ringbuffer, (gfloat) self->volume); - - if (FAILED (hr)) { - GST_INFO_OBJECT (self, "Couldn't set volume"); - } else { - self->volume_changed = FALSE; - } - } - - GST_OBJECT_UNLOCK (self); -} - -static gdouble -gst_wasapi2_src_get_volume (GstWasapi2Src * self) -{ - GstAudioBaseSrc *bsrc = GST_AUDIO_BASE_SRC_CAST (self); - gfloat volume; - HRESULT hr; - - GST_OBJECT_LOCK (self); - - volume = (gfloat) self->volume; - - if (bsrc->ringbuffer) { - GstWasapi2RingBuffer *ringbuffer = - GST_WASAPI2_RING_BUFFER (bsrc->ringbuffer); - - hr = gst_wasapi2_ring_buffer_get_volume (ringbuffer, &volume); - - if (FAILED (hr)) { - GST_INFO_OBJECT (self, "Couldn't set volume"); - } else { - self->volume = volume; - } - } - - GST_OBJECT_UNLOCK (self); - - volume = MAX (0.0, volume); - volume = MIN (1.0, volume); - - return volume; -} - -static void -gst_wasapi2_src_set_silence_on_mute (GstWasapi2Src * self, gboolean value) -{ - GstAudioBaseSrc *bsrc = GST_AUDIO_BASE_SRC_CAST (self); - - GST_OBJECT_LOCK (self); - - self->loopback_silence_on_device_mute = value; - - if (self->loopback && bsrc->ringbuffer) { - GstWasapi2RingBuffer *ringbuffer = - GST_WASAPI2_RING_BUFFER (bsrc->ringbuffer); - - gst_wasapi2_ring_buffer_set_device_mute_monitoring (ringbuffer, value); - } - - GST_OBJECT_UNLOCK (self); -}
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/win32ipc/gstwin32ipcutils.cpp
Deleted
@@ -1,71 +0,0 @@
-/* GStreamer
- * Copyright (C) 2022 Seungha Yang <seungha@centricular.com>
- *
- * This library is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Library General Public
- * License as published by the Free Software Foundation; either
- * version 2 of the License, or (at your option) any later version.
- *
- * This library is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Library General Public License for more details.
- *
- * You should have received a copy of the GNU Library General Public
- * License along with this library; if not, write to the
- * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
- * Boston, MA 02110-1301, USA.
- */
-
-#include "gstwin32ipcutils.h"
-#include <windows.h>
-#include <string>
-#include <mutex>
-
-static ULONG global_index = 0;
-
-static DWORD
-gst_win32_ipc_get_pid (void)
-{
-  static std::once_flag once_flag;
-  static DWORD pid = 0;
-
-  std::call_once (once_flag, [&] () {
-        pid = GetCurrentProcessId ();
-      });
-
-  return pid;
-}
-
-/* Create unique prefix for named shared memory */
-gchar *
-gst_win32_ipc_get_mmf_prefix (void)
-{
-  std::string prefix = "Local\\gst.win32.ipc." +
-      std::to_string (gst_win32_ipc_get_pid ()) + std::string (".") +
-      std::to_string (InterlockedIncrement (&global_index)) + std::string (".");
-
-  return g_strdup (prefix.c_str ());
-}
-
-gboolean
-gst_win32_ipc_clock_is_qpc (GstClock * clock)
-{
-  GstClockType clock_type = GST_CLOCK_TYPE_MONOTONIC;
-  GstClock *mclock;
-
-  if (G_OBJECT_TYPE (clock) != GST_TYPE_SYSTEM_CLOCK)
-    return FALSE;
-
-  g_object_get (clock, "clock-type", &clock_type, nullptr);
-  if (clock_type != GST_CLOCK_TYPE_MONOTONIC)
-    return FALSE;
-
-  mclock = gst_clock_get_master (clock);
-  if (!mclock)
-    return TRUE;
-
-  gst_object_unref (mclock);
-
-  return FALSE;
-}
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/win32ipc/gstwin32ipcutils.h
Deleted
@@ -1,30 +0,0 @@
-/* GStreamer
- * Copyright (C) 2022 Seungha Yang <seungha@centricular.com>
- *
- * This library is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Library General Public
- * License as published by the Free Software Foundation; either
- * version 2 of the License, or (at your option) any later version.
- *
- * This library is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Library General Public License for more details.
- *
- * You should have received a copy of the GNU Library General Public
- * License along with this library; if not, write to the
- * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
- * Boston, MA 02110-1301, USA.
- */
-
-#pragma once
-
-#include <gst/gst.h>
-
-G_BEGIN_DECLS
-
-gchar * gst_win32_ipc_get_mmf_prefix (void);
-
-gboolean gst_win32_ipc_clock_is_qpc (GstClock * clock);
-
-G_END_DECLS
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/win32ipc/protocol
Deleted
-(directory)
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/win32ipc/protocol/win32ipcmmf.cpp
Deleted
@@ -1,241 +0,0 @@ -/* GStreamer - * Copyright (C) 2022 Seungha Yang <seungha@centricular.com> - * - * Permission is hereby granted, free of charge, to any person obtaining a - * copy of this software and associated documentation files (the "Software"), - * to deal in the Software without restriction, including without limitation - * the rights to use, copy, modify, merge, publish, distribute, sublicense, - * and/or sell copies of the Software, and to permit persons to whom the - * Software is furnished to do so, subject to the following conditions: - * - * The above copyright notice and this permission notice shall be included in - * all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE - * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER - * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING - * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER - * DEALINGS IN THE SOFTWARE. 
- * - * SPDX-License-Identifier: MIT - */ - -#include "win32ipcmmf.h" -#include "win32ipcutils.h" -#include <string> - -GST_DEBUG_CATEGORY_EXTERN (gst_win32_ipc_debug); -#define GST_CAT_DEFAULT gst_win32_ipc_debug - -struct Win32IpcMmf -{ - explicit Win32IpcMmf (HANDLE f, void * b, UINT32 s, const std::string & n) - : file (f), buffer (b), size (s), name (n), ref_count (1) - { - } - - ~Win32IpcMmf () - { - GST_TRACE ("Freeing %p (%s)", this, name.c_str ()); - if (buffer) - UnmapViewOfFile (buffer); - if (file) - CloseHandle (file); - } - - HANDLE file; - void *buffer; - UINT32 size; - std::string name; - ULONG ref_count; -}; - -static Win32IpcMmf * -win32_pic_mmf_new (HANDLE file, UINT32 size, const char * name) -{ - Win32IpcMmf *self; - void *buffer; - std::string msg; - UINT err_code; - - buffer = MapViewOfFile (file, FILE_MAP_ALL_ACCESS, 0, 0, size); - if (!buffer) { - err_code = GetLastError (); - msg = win32_ipc_error_message (err_code); - GST_ERROR ("MapViewOfFile failed with 0x%x (%s)", - err_code, msg.c_str ()); - CloseHandle (file); - return nullptr; - } - - self = new Win32IpcMmf (file, buffer, size, name); - - return self; -} - -/** - * win32_ipc_mmf_alloc: - * @size: Size of memory to allocate - * @name: The name of Memory Mapped File - * - * Creates named shared memory - * - * Returns: a new Win32IpcMmf object - */ -Win32IpcMmf * -win32_ipc_mmf_alloc (UINT32 size, const char * name) -{ - HANDLE file; - std::string msg; - UINT err_code; - - if (!size) { - GST_ERROR ("Zero size is not allowed"); - return nullptr; - } - - if (!name) { - GST_ERROR ("Name must be specified"); - return nullptr; - } - - file = CreateFileMappingA (INVALID_HANDLE_VALUE, nullptr, - PAGE_READWRITE | SEC_COMMIT, 0, size, name); - if (!file) { - err_code = GetLastError (); - msg = win32_ipc_error_message (err_code); - GST_ERROR ("CreateFileMappingA failed with 0x%x (%s)", - err_code, msg.c_str ()); - return nullptr; - } - - /* The name is already occupied, it's caller's fault... 
*/ - if (GetLastError () == ERROR_ALREADY_EXISTS) { - GST_ERROR ("File already exists"); - CloseHandle (file); - return nullptr; - } - - return win32_pic_mmf_new (file, size, name); -} - -/** - * win32_ipc_mmf_open: - * @size: Size of memory to allocate - * @name: The name of Memory Mapped File - * - * Opens named shared memory - * - * Returns: a new Win32IpcMmf object - */ -Win32IpcMmf * -win32_ipc_mmf_open (UINT32 size, const char * name) -{ - HANDLE file; - std::string msg; - UINT err_code; - - if (!size) { - GST_ERROR ("Zero size is not allowed"); - return nullptr; - } - - if (!name) { - GST_ERROR ("Name must be specified"); - return nullptr; - } - - file = OpenFileMappingA (FILE_MAP_ALL_ACCESS, FALSE, name); - if (!file) { - err_code = GetLastError (); - msg = win32_ipc_error_message (err_code); - GST_ERROR ("OpenFileMappingA failed with 0x%x (%s)", - err_code, msg.c_str ()); - return nullptr; - } - - return win32_pic_mmf_new (file, size, name); -} - -/** - * win32_ipc_mmf_get_name: - * @mmf: a Win32IpcMmf object - * - * Returns: the name of @mmf - */ -const char * -win32_ipc_mmf_get_name (Win32IpcMmf * mmf) -{ - if (!mmf) - return nullptr; - - return mmf->name.c_str (); -} - -/** - * win32_ipc_mmf_get_size: - * @mmf: a Win32IpcMmf object - * - * Returns: the size of allocated memory - */ -UINT32 -win32_ipc_mmf_get_size (Win32IpcMmf * mmf) -{ - if (!mmf) - return 0; - - return mmf->size; -} - -/** - * win32_ipc_mmf_get_raw: - * @mmf: a Win32IpcMmf object - * - * Returns: the address of allocated memory - */ -void * -win32_ipc_mmf_get_raw (Win32IpcMmf * mmf) -{ - if (!mmf) - return nullptr; - - return mmf->buffer; -} - -/** - * win32_ipc_mmf_ref: - * @mmf: a Win32IpcMmf object - * - * Increase ref count - */ -Win32IpcMmf * -win32_ipc_mmf_ref (Win32IpcMmf * mmf) -{ - if (!mmf) - return nullptr; - - InterlockedIncrement (&mmf->ref_count); - - return mmf; -} - -/** - * win32_ipc_mmf_unref: - * @mmf: a Win32IpcMmf object - * - * Decrease ref count - */ -void 
-win32_ipc_mmf_unref (Win32IpcMmf * mmf) -{ - ULONG count; - - if (!mmf) - return; - - count = InterlockedDecrement (&mmf->ref_count); - if (count == 0) - delete mmf; -}
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/win32ipc/protocol/win32ipcmmf.h
Deleted
@@ -1,50 +0,0 @@
-/* GStreamer
- * Copyright (C) 2022 Seungha Yang <seungha@centricular.com>
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
- * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
- * DEALINGS IN THE SOFTWARE.
- *
- * SPDX-License-Identifier: MIT
- */
-
-#pragma once
-
-#include <gst/gst.h>
-#include <windows.h>
-
-G_BEGIN_DECLS
-
-struct Win32IpcMmf;
-
-Win32IpcMmf * win32_ipc_mmf_alloc (UINT32 size,
-    const char * name);
-
-Win32IpcMmf * win32_ipc_mmf_open (UINT32 size,
-    const char * name);
-
-const char * win32_ipc_mmf_get_name (Win32IpcMmf * mmf);
-
-UINT32 win32_ipc_mmf_get_size (Win32IpcMmf * mmf);
-
-void * win32_ipc_mmf_get_raw (Win32IpcMmf * mmf);
-
-Win32IpcMmf * win32_ipc_mmf_ref (Win32IpcMmf * mmf);
-
-void win32_ipc_mmf_unref (Win32IpcMmf * mmf);
-
-G_END_DECLS
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/win32ipc/protocol/win32ipcpipeclient.cpp
Deleted
@@ -1,569 +0,0 @@ -/* GStreamer - * Copyright (C) 2022 Seungha Yang <seungha@centricular.com> - * - * Permission is hereby granted, free of charge, to any person obtaining a - * copy of this software and associated documentation files (the "Software"), - * to deal in the Software without restriction, including without limitation - * the rights to use, copy, modify, merge, publish, distribute, sublicense, - * and/or sell copies of the Software, and to permit persons to whom the - * Software is furnished to do so, subject to the following conditions: - * - * The above copyright notice and this permission notice shall be included in - * all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE - * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER - * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING - * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER - * DEALINGS IN THE SOFTWARE. 
- * - * SPDX-License-Identifier: MIT - */ - -#include "win32ipcpipeclient.h" -#include "win32ipcutils.h" -#include <mutex> -#include <condition_variable> -#include <memory> -#include <thread> -#include <queue> -#include <string> - -GST_DEBUG_CATEGORY_EXTERN (gst_win32_ipc_debug); -#define GST_CAT_DEFAULT gst_win32_ipc_debug - -#define CONN_BUFFER_SIZE 1024 - -struct MmfInfo -{ - Win32IpcMmf *mmf; - Win32IpcVideoInfo info; -}; - -struct ClientConnection : public OVERLAPPED -{ - ClientConnection () : pipe (INVALID_HANDLE_VALUE), to_read (0), to_write (0), - seq_num (0) - { - OVERLAPPED *parent = dynamic_cast<OVERLAPPED *> (this); - parent->Internal = 0; - parent->InternalHigh = 0; - parent->Offset = 0; - parent->OffsetHigh = 0; - } - - Win32IpcPipeClient *self; - HANDLE pipe; - UINT8 client_msgCONN_BUFFER_SIZE; - UINT32 to_read; - UINT8 server_msgCONN_BUFFER_SIZE; - UINT32 to_write; - UINT64 seq_num; -}; - -struct Win32IpcPipeClient -{ - explicit Win32IpcPipeClient (const std::string & n) - : name (n), ref_count(1), last_err (ERROR_SUCCESS), flushing (FALSE) - , stopped (FALSE), io_pending (FALSE) - { - release_event = CreateEventA (nullptr, FALSE, FALSE, nullptr); - cancellable = CreateEventA (nullptr, TRUE, FALSE, nullptr); - conn.pipe = INVALID_HANDLE_VALUE; - conn.self = this; - } - - ~Win32IpcPipeClient () - { - GST_DEBUG ("Free client %p", this); - SetEvent (cancellable); - if (thread) { - thread->join (); - thread = nullptr; - } - - last_err = ERROR_OPERATION_ABORTED; - while (!queue.empty ()) { - MmfInfo info = queue.front (); - - queue.pop (); - win32_ipc_mmf_unref (info.mmf); - } - - CloseHandle (release_event); - CloseHandle (cancellable); - } - - std::mutex lock; - std::condition_variable cond; - std::unique_ptr<std::thread> thread; - std::queue<MmfInfo> queue; - std::queue<std::string> unused_mmf; - std::string name; - - ULONG ref_count; - HANDLE release_event; - HANDLE cancellable; - UINT last_err; - BOOL flushing; - BOOL stopped; - BOOL io_pending; - 
ClientConnection conn; -}; - -static DWORD -win32_ipc_pipe_client_send_need_data_async (Win32IpcPipeClient * self); -static DWORD -win32_ipc_pipe_client_send_release_data_async (Win32IpcPipeClient * self, - const char * mmf_name); - -static VOID WINAPI -win32_ipc_pipe_client_send_finish (DWORD error_code, DWORD n_bytes, - LPOVERLAPPED overlapped) -{ - ClientConnection *conn = (ClientConnection *) overlapped; - Win32IpcPipeClient *self = conn->self; - std::string unused_mmf; - - if (error_code != ERROR_SUCCESS) { - std::string msg = win32_ipc_error_message (error_code); - self->last_err = error_code; - GST_WARNING ("Failed with 0x%x (%s)", self->last_err, msg.c_str ()); - goto error; - } - - self->lock.lock (); - if (!self->unused_mmf.empty ()) { - unused_mmf = self->unused_mmf.front (); - self->unused_mmf.pop (); - } - self->lock.unlock (); - - if (unused_mmf.size () > 0) { - self->last_err = win32_ipc_pipe_client_send_release_data_async (self, - unused_mmf.c_str ()); - if (self->last_err != ERROR_SUCCESS) - goto error; - - return; - } - - /* Don't request data anymore if we are stopped, but keep connection - * to send release data message later */ - if (self->stopped) { - GST_DEBUG ("We are stopped"); - self->io_pending = FALSE; - return; - } - - self->last_err = win32_ipc_pipe_client_send_need_data_async (self); - if (self->last_err != ERROR_SUCCESS) - goto error; - - /* All done, back to need-data state */ - return; - -error: - SetEvent (self->cancellable); -} - -static DWORD -win32_ipc_pipe_client_send_release_data_async (Win32IpcPipeClient * self, - const char * mmf_name) -{ - ClientConnection *conn = &self->conn; - - conn->to_write = win32_ipc_pkt_build_release_data (conn->client_msg, - CONN_BUFFER_SIZE, conn->seq_num, mmf_name); - if (conn->to_write == 0) { - GST_ERROR ("Couldn't build RELEASE-DATA pkt"); - return ERROR_BAD_FORMAT; - } - - GST_TRACE ("Sending RELEASE-DATA"); - - if (!WriteFileEx (conn->pipe, conn->client_msg, conn->to_write, - (OVERLAPPED *) 
conn, win32_ipc_pipe_client_send_finish)) { - UINT last_err = GetLastError (); - std::string msg = win32_ipc_error_message (last_err); - GST_WARNING ("WriteFileEx failed with 0x%x (%s)", last_err, msg.c_str ()); - return last_err; - } - - return ERROR_SUCCESS; -} - -static DWORD -win32_ipc_pipe_client_send_read_done_async (Win32IpcPipeClient * self) -{ - ClientConnection *conn = &self->conn; - - conn->to_write = win32_ipc_pkt_build_read_done (conn->client_msg, - CONN_BUFFER_SIZE, conn->seq_num); - if (conn->to_write == 0) { - GST_ERROR ("Couldn't build READ-DONE pkt"); - return ERROR_BAD_FORMAT; - } - - GST_TRACE ("Sending READ-DONE"); - - if (!WriteFileEx (conn->pipe, conn->client_msg, conn->to_write, - (OVERLAPPED *) conn, win32_ipc_pipe_client_send_finish)) { - UINT last_err = GetLastError (); - std::string msg = win32_ipc_error_message (last_err); - - GST_WARNING ("WriteFileEx failed with 0x%x (%s)", last_err, msg.c_str ()); - return last_err; - } - - return ERROR_SUCCESS; -} - -static VOID WINAPI -win32_ipc_pipe_client_receive_have_data_finish (DWORD error_code, DWORD n_bytes, - LPOVERLAPPED overlapped) -{ - ClientConnection *conn = (ClientConnection *) overlapped; - Win32IpcPipeClient *self = conn->self; - char mmf_name[1024] = { '\0', }; - Win32IpcVideoInfo info; - Win32IpcMmf *mmf; - MmfInfo minfo; - - if (error_code != ERROR_SUCCESS) { - std::string msg = win32_ipc_error_message (error_code); - self->last_err = error_code; - GST_WARNING ("HAVE-DATA failed with 0x%x (%s)", - self->last_err, msg.c_str ()); - goto error; - } - - if (!win32_ipc_pkt_parse_have_data (conn->server_msg, n_bytes, - &conn->seq_num, mmf_name, &info)) { - self->last_err = ERROR_BAD_FORMAT; - GST_WARNING ("Couldn't parse HAVE-DATA pkt"); - goto error; - } - - mmf = win32_ipc_mmf_open (info.size, mmf_name); - if (!mmf) { - GST_ERROR ("Couldn't open file %s", mmf_name); - self->last_err = ERROR_BAD_FORMAT; - goto error; - } - - GST_TRACE ("Got HAVE-DATA %s", mmf_name); - - minfo.mmf = mmf; 
- minfo.info = info; - - { - std::lock_guard<std::mutex> lk (self->lock); - /* Drops too old data */ - while (self->queue.size () > 5) { - MmfInfo info = self->queue.front (); - - self->queue.pop (); - win32_ipc_mmf_unref (info.mmf); - } - - self->queue.push (minfo); - self->cond.notify_all (); - } - - self->last_err = win32_ipc_pipe_client_send_read_done_async (self); - if (self->last_err != ERROR_SUCCESS) - goto error; - - return; - -error: - SetEvent (self->cancellable); -} - -static DWORD -win32_ipc_pipe_client_receive_have_data_async (Win32IpcPipeClient * self) -{ - ClientConnection *conn = &self->conn; - - GST_TRACE ("Waiting HAVE-DATA"); - - if (!ReadFileEx (conn->pipe, conn->server_msg, CONN_BUFFER_SIZE, - (OVERLAPPED *) conn, win32_ipc_pipe_client_receive_have_data_finish)) { - UINT last_err = GetLastError (); - std::string msg = win32_ipc_error_message (last_err); - GST_WARNING ("ReadFileEx failed with 0x%x (%s)", last_err, msg.c_str ()); - return last_err; - } - - return ERROR_SUCCESS; -} - -static VOID WINAPI -pipe_clinet_send_need_data_finish (DWORD error_code, DWORD n_bytes, - LPOVERLAPPED overlapped) -{ - ClientConnection *conn = (ClientConnection *) overlapped; - Win32IpcPipeClient *self = conn->self; - - if (error_code != ERROR_SUCCESS) { - std::string msg = win32_ipc_error_message (error_code); - self->last_err = error_code; - GST_WARNING ("NEED-DATA failed with 0x%x (%s)", - self->last_err, msg.c_str ()); - goto error; - } - - self->last_err = win32_ipc_pipe_client_receive_have_data_async (self); - if (self->last_err != ERROR_SUCCESS) - goto error; - - return; - -error: - SetEvent (self->cancellable); -} - -static DWORD -win32_ipc_pipe_client_send_need_data_async (Win32IpcPipeClient * self) -{ - ClientConnection *conn = &self->conn; - - conn->to_write = win32_ipc_pkt_build_need_data (conn->client_msg, - CONN_BUFFER_SIZE, conn->seq_num); - if (conn->to_write == 0) { - GST_ERROR ("Couldn't build NEED-DATA pkt"); - return ERROR_BAD_FORMAT; - } - - 
GST_TRACE ("Sending NEED-DATA"); - - if (!WriteFileEx (conn->pipe, conn->client_msg, conn->to_write, - (OVERLAPPED *) conn, pipe_clinet_send_need_data_finish)) { - UINT last_err = GetLastError (); - std::string msg = win32_ipc_error_message (last_err); - GST_WARNING ("WriteFileEx failed with 0x%x (%s)", last_err, msg.c_str ()); - return last_err; - } - - return ERROR_SUCCESS; -} - -static VOID -win32_ipc_pipe_client_loop (Win32IpcPipeClient * self) -{ - DWORD mode = PIPE_READMODE_MESSAGE; - std::unique_lock<std::mutex> lk (self->lock); - ClientConnection *conn = &self->conn; - HANDLE waitables[2]; - DWORD wait_ret; - - conn->pipe = CreateFileA (self->name.c_str (), - GENERIC_READ | GENERIC_WRITE, 0, nullptr, OPEN_EXISTING, - FILE_FLAG_OVERLAPPED, nullptr); - self->last_err = GetLastError (); - if (conn->pipe == INVALID_HANDLE_VALUE) { - std::string msg = win32_ipc_error_message (self->last_err); - GST_WARNING ("CreateFileA failed with 0x%x (%s)", self->last_err, - msg.c_str ()); - self->cond.notify_all (); - return; - } - - if (!SetNamedPipeHandleState (conn->pipe, &mode, nullptr, nullptr)) { - self->last_err = GetLastError (); - std::string msg = win32_ipc_error_message (self->last_err); - GST_WARNING ("SetNamedPipeHandleState failed with 0x%x (%s)", - self->last_err, msg.c_str ()); - CloseHandle (conn->pipe); - conn->pipe = INVALID_HANDLE_VALUE; - self->cond.notify_all (); - return; - } - - self->last_err = ERROR_SUCCESS; - self->cond.notify_all (); - lk.unlock (); - - /* Once connection is established, send NEED-DATA message to server, - * and then it will loop NEED-DATA -> HAVE-DATA -> READ-DONE */ - self->last_err = win32_ipc_pipe_client_send_need_data_async (self); - if (self->last_err != ERROR_SUCCESS) - goto out; - - self->io_pending = TRUE; - waitables[0] = self->cancellable; - waitables[1] = self->release_event; - - do { - /* Enters alertable thread state and wait for I/O completion event - * or cancellable event */ - wait_ret = WaitForMultipleObjectsEx (2, 
waitables, FALSE, INFINITE, TRUE); - if (wait_ret == WAIT_OBJECT_0) { - GST_DEBUG ("Operation cancelled"); - goto out; - } - - switch (wait_ret) { - case WAIT_OBJECT_0 + 1: - case WAIT_IO_COMPLETION: - { - std::string unused_mmf; - /* If I/O chain is stopped, send release data message here */ - if (!self->io_pending) { - lk.lock (); - if (!self->unused_mmf.empty ()) { - unused_mmf = self->unused_mmf.front (); - self->unused_mmf.pop (); - } - lk.unlock (); - } - - if (unused_mmf.size () > 0) { - GST_DEBUG ("Sending release data for %s", unused_mmf.c_str ()); - self->io_pending = TRUE; - self->last_err = win32_ipc_pipe_client_send_release_data_async (self, - unused_mmf.c_str ()); - if (self->last_err != ERROR_SUCCESS) - goto out; - } - break; - } - default: - GST_WARNING ("Unexpected wait return 0x%x", (UINT) wait_ret); - goto out; - } - } while (true); - -out: - if (conn->pipe != INVALID_HANDLE_VALUE) { - CancelIoEx (conn->pipe, (OVERLAPPED *) conn); - CloseHandle (conn->pipe); - } - - lk.lock (); - self->last_err = ERROR_OPERATION_ABORTED; - conn->pipe = INVALID_HANDLE_VALUE; - self->io_pending = FALSE; - self->cond.notify_all (); -} - -static BOOL -win32_ipc_pipe_client_run (Win32IpcPipeClient * self) -{ - std::unique_lock<std::mutex> lk (self->lock); - - self->thread = std::make_unique<std::thread> - (std::thread (win32_ipc_pipe_client_loop, self)); - self->cond.wait (lk); - - if (self->last_err != ERROR_SUCCESS) { - self->thread->join (); - self->thread = nullptr; - return FALSE; - } - - return TRUE; -} - -Win32IpcPipeClient * -win32_ipc_pipe_client_new (const char * pipe_name) -{ - Win32IpcPipeClient *self; - - if (!pipe_name) { - GST_ERROR ("Pipe name must be specified"); - return nullptr; - } - - self = new Win32IpcPipeClient (pipe_name); - - if (!win32_ipc_pipe_client_run (self)) { - win32_ipc_pipe_client_unref (self); - return nullptr; - } - - return self; -} - -Win32IpcPipeClient * -win32_ipc_pipe_client_ref (Win32IpcPipeClient * client) -{ - 
InterlockedIncrement (&client->ref_count); - - return client; -} - -void -win32_ipc_pipe_client_unref (Win32IpcPipeClient * client) -{ - ULONG ref_count; - - ref_count = InterlockedDecrement (&client->ref_count); - if (ref_count == 0) - delete client; -} - -void -win32_ipc_pipe_client_set_flushing (Win32IpcPipeClient * client, BOOL flushing) -{ - std::lock_guard<std::mutex> lk (client->lock); - client->flushing = flushing; - client->cond.notify_all (); -} - -BOOL -win32_ipc_pipe_client_get_mmf (Win32IpcPipeClient * client, Win32IpcMmf ** mmf, - Win32IpcVideoInfo * info) -{ - std::unique_lock<std::mutex> lk (client->lock); - if (client->last_err != ERROR_SUCCESS) { - GST_WARNING ("Last error code was 0x%x", client->last_err); - return FALSE; - } - - while (client->queue.empty () && client->last_err == ERROR_SUCCESS && - !client->flushing && !client->stopped) { - client->cond.wait (lk); - } - - if (client->queue.empty ()) - return FALSE; - - MmfInfo mmf_info = client->queue.front (); - client->queue.pop (); - - *mmf = mmf_info.mmf; - *info = mmf_info.info; - - return TRUE; -} - -void -win32_ipc_pipe_client_release_mmf (Win32IpcPipeClient * client, - Win32IpcMmf * mmf) -{ - std::string name = win32_ipc_mmf_get_name (mmf); - - win32_ipc_mmf_unref (mmf); - - std::lock_guard<std::mutex> lk (client->lock); - if (client->last_err != ERROR_SUCCESS) - return; - - GST_LOG ("Enqueue release data %s", name.c_str ()); - client->unused_mmf.push (name); - SetEvent (client->release_event); -} - -void -win32_ipc_pipe_client_stop (Win32IpcPipeClient * client) -{ - GST_DEBUG ("Stopping %p", client); - - std::lock_guard<std::mutex> lk (client->lock); - client->stopped = TRUE; - client->cond.notify_all (); -}
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/win32ipc/protocol/win32ipcpipeclient.h
Deleted
@@ -1,55 +0,0 @@ -/* GStreamer - * Copyright (C) 2022 Seungha Yang <seungha@centricular.com> - * - * Permission is hereby granted, free of charge, to any person obtaining a - * copy of this software and associated documentation files (the "Software"), - * to deal in the Software without restriction, including without limitation - * the rights to use, copy, modify, merge, publish, distribute, sublicense, - * and/or sell copies of the Software, and to permit persons to whom the - * Software is furnished to do so, subject to the following conditions: - * - * The above copyright notice and this permission notice shall be included in - * all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE - * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER - * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING - * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER - * DEALINGS IN THE SOFTWARE. 
- * - * SPDX-License-Identifier: MIT - */ - -#pragma once - -#include <windows.h> -#include <string.h> -#include "win32ipcmmf.h" -#include "win32ipcprotocol.h" -#include <gst/gst.h> - -G_BEGIN_DECLS - -struct Win32IpcPipeClient; - -Win32IpcPipeClient * win32_ipc_pipe_client_new (const char * pipe_name); - -Win32IpcPipeClient * win32_ipc_pipe_client_ref (Win32IpcPipeClient * client); - -void win32_ipc_pipe_client_unref (Win32IpcPipeClient * client); - -void win32_ipc_pipe_client_set_flushing (Win32IpcPipeClient * client, - BOOL flushing); - -BOOL win32_ipc_pipe_client_get_mmf (Win32IpcPipeClient * client, - Win32IpcMmf ** mmf, - Win32IpcVideoInfo * info); - -void win32_ipc_pipe_client_release_mmf (Win32IpcPipeClient * client, - Win32IpcMmf * mmf); - -void win32_ipc_pipe_client_stop (Win32IpcPipeClient * client); - -G_END_DECLS
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/win32ipc/protocol/win32ipcpipeserver.cpp
Deleted
@@ -1,569 +0,0 @@ -/* GStreamer - * Copyright (C) 2022 Seungha Yang <seungha@centricular.com> - * - * Permission is hereby granted, free of charge, to any person obtaining a - * copy of this software and associated documentation files (the "Software"), - * to deal in the Software without restriction, including without limitation - * the rights to use, copy, modify, merge, publish, distribute, sublicense, - * and/or sell copies of the Software, and to permit persons to whom the - * Software is furnished to do so, subject to the following conditions: - * - * The above copyright notice and this permission notice shall be included in - * all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE - * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER - * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING - * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER - * DEALINGS IN THE SOFTWARE. 
- * - * SPDX-License-Identifier: MIT */ - -#include "win32ipcpipeserver.h" -#include "win32ipcutils.h" -#include <mutex> -#include <condition_variable> -#include <memory> -#include <thread> -#include <queue> -#include <vector> -#include <string> -#include <algorithm> -#include <assert.h> - -GST_DEBUG_CATEGORY_EXTERN (gst_win32_ipc_debug); -#define GST_CAT_DEFAULT gst_win32_ipc_debug - -#define CONN_BUFFER_SIZE 1024 - -struct MmfInfo -{ - explicit MmfInfo (Win32IpcMmf * m, const Win32IpcVideoInfo * i, UINT64 s, - void * u, Win32IpcMmfDestroy n) - { - mmf = m; - info = *i; - seq_num = s; - user_data = u; - notify = n; - } - - ~MmfInfo() - { - if (mmf) - win32_ipc_mmf_unref (mmf); - - if (notify) - notify (user_data); - } - - Win32IpcMmf *mmf = nullptr; - Win32IpcVideoInfo info; - UINT64 seq_num; - void *user_data; - Win32IpcMmfDestroy notify; -}; - -struct ServerConnection : public OVERLAPPED -{ - ServerConnection(Win32IpcPipeServer * server, HANDLE p) - : self(server), pipe(p) - { - OVERLAPPED *parent = dynamic_cast<OVERLAPPED *> (this); - parent->Internal = 0; - parent->InternalHigh = 0; - parent->Offset = 0; - parent->OffsetHigh = 0; - } - - Win32IpcPipeServer *self; - std::shared_ptr<MmfInfo> minfo; - std::vector<std::shared_ptr<MmfInfo>> used_minfo; - HANDLE pipe = INVALID_HANDLE_VALUE; - UINT8 client_msg[CONN_BUFFER_SIZE]; - UINT32 to_read = 0; - UINT8 server_msg[CONN_BUFFER_SIZE]; - UINT32 to_write = 0; - UINT64 seq_num = 0; - BOOL pending_have_data = FALSE; -}; - -struct Win32IpcPipeServer -{ - explicit Win32IpcPipeServer (const std::string & n) - : name (n), ref_count (1), last_err (ERROR_SUCCESS), seq_num (0) - { - enqueue_event = CreateEventA (nullptr, FALSE, FALSE, nullptr); - cancellable = CreateEventA (nullptr, TRUE, FALSE, nullptr); - } - - ~Win32IpcPipeServer () - { - win32_ipc_pipe_server_shutdown (this); - CloseHandle (cancellable); - CloseHandle (enqueue_event); - } - - std::mutex lock; - std::condition_variable cond; - std::unique_ptr<std::thread> 
thread; - std::shared_ptr<MmfInfo> minfo; - std::string name; - std::vector<ServerConnection *> conn; - - ULONG ref_count; - HANDLE enqueue_event; - HANDLE cancellable; - UINT last_err; - UINT64 seq_num; -}; - -static void -win32_ipc_pipe_server_wait_client_msg_async (ServerConnection * conn); - -static void -win32_ipc_pipe_server_close_connection (ServerConnection * conn, - BOOL remove_from_list) -{ - Win32IpcPipeServer *self = conn->self; - - GST_DEBUG ("Closing connection %p", conn); - - if (remove_from_list) { - self->conn.erase (std::remove (self->conn.begin (), self->conn.end (), - conn), self->conn.end ()); - } - - if (!DisconnectNamedPipe (conn->pipe)) { - UINT last_err = GetLastError (); - std::string msg = win32_ipc_error_message (last_err); - GST_WARNING ("DisconnectNamedPipe failed with 0x%x (%s)", - last_err, msg.c_str ()); - } - - CloseHandle (conn->pipe); - delete conn; -} - -static void WINAPI -win32_ipc_pipe_server_send_have_data_finish (DWORD error_code, DWORD n_bytes, - LPOVERLAPPED overlapped) -{ - ServerConnection *conn = (ServerConnection *) overlapped; - - if (error_code != ERROR_SUCCESS) { - std::string msg = win32_ipc_error_message (error_code); - GST_WARNING ("HAVE-DATA failed with 0x%x (%s)", - (UINT) error_code, msg.c_str ()); - win32_ipc_pipe_server_close_connection (conn, TRUE); - return; - } - - GST_TRACE ("HAVE-DATA done with %s", - win32_ipc_mmf_get_name (conn->minfo->mmf)); - - win32_ipc_pipe_server_wait_client_msg_async (conn); -} - -static void -win32_ipc_pipe_server_send_have_data_async (ServerConnection * conn) -{ - assert (conn->minfo != nullptr); - - conn->pending_have_data = FALSE; - conn->seq_num = conn->minfo->seq_num; - - conn->to_write = win32_ipc_pkt_build_have_data (conn->server_msg, - CONN_BUFFER_SIZE, conn->seq_num, - win32_ipc_mmf_get_name (conn->minfo->mmf), &conn->minfo->info); - if (conn->to_write == 0) { - GST_ERROR ("Couldn't build HAVE-DATA pkt"); - win32_ipc_pipe_server_close_connection (conn, TRUE); - 
return; - } - - conn->seq_num++; - - GST_TRACE ("Sending HAVE-DATA"); - - if (!WriteFileEx (conn->pipe, conn->server_msg, conn->to_write, - (OVERLAPPED *) conn, win32_ipc_pipe_server_send_have_data_finish)) { - UINT last_err = GetLastError (); - std::string msg = win32_ipc_error_message (last_err); - GST_WARNING ("WriteFileEx failed with 0x%x (%s)", last_err, msg.c_str ()); - win32_ipc_pipe_server_close_connection (conn, TRUE); - } -} - -static void WINAPI -win32_ipc_pipe_server_wait_client_msg_finish (DWORD error_code, DWORD n_bytes, - LPOVERLAPPED overlapped) -{ - ServerConnection *conn = (ServerConnection *) overlapped; - UINT64 seq_num; - Win32IpcPktType type; - char mmf_name[1024]; - - if (error_code != ERROR_SUCCESS) { - std::string msg = win32_ipc_error_message (error_code); - GST_WARNING ("NEED-DATA failed with 0x%x (%s)", - (UINT) error_code, msg.c_str ()); - win32_ipc_pipe_server_close_connection (conn, TRUE); - return; - } - - type = win32_ipc_pkt_type_from_raw (conn->client_msg[0]); - switch (type) { - case WIN32_IPC_PKT_NEED_DATA: - GST_TRACE ("Got NEED-DATA %p", conn); - - if (!win32_ipc_pkt_parse_need_data (conn->client_msg, CONN_BUFFER_SIZE, - &seq_num)) { - GST_ERROR ("Couldn't parse NEED-DATA message"); - win32_ipc_pipe_server_close_connection (conn, TRUE); - return; - } - - /* Will respond later once data is available */ - if (!conn->minfo) { - GST_LOG ("No data available, waiting"); - conn->pending_have_data = TRUE; - return; - } - - win32_ipc_pipe_server_send_have_data_async (conn); - break; - case WIN32_IPC_PKT_READ_DONE: - GST_TRACE ("Got READ-DONE %p", conn); - - conn->used_minfo.push_back (conn->minfo); - conn->minfo = nullptr; - - /* All done, wait for need-data again */ - win32_ipc_pipe_server_wait_client_msg_async (conn); - break; - case WIN32_IPC_PKT_RELEASE_DATA: - { - GST_TRACE ("Got RELEASE-DATA %p", conn); - - if (!win32_ipc_pkt_parse_release_data (conn->client_msg, CONN_BUFFER_SIZE, - &seq_num, mmf_name)) { - GST_WARNING ("Couldn't 
parse RELEASE-DATA message"); - return; - } - - auto it = std::find_if (conn->used_minfo.begin (), - conn->used_minfo.end (), [&](const std::shared_ptr<MmfInfo> info) -> bool { - return strcmp (mmf_name, win32_ipc_mmf_get_name (info->mmf)) == 0; - }); - - if (it != conn->used_minfo.end ()) { - conn->used_minfo.erase (it); - } else { - GST_WARNING ("Unknown memory name %s", mmf_name); - } - - win32_ipc_pipe_server_wait_client_msg_async (conn); - break; - } - default: - GST_WARNING ("Unexpected packet type"); - win32_ipc_pipe_server_close_connection (conn, TRUE); - break; - } -} - -static void -win32_ipc_pipe_server_wait_client_msg_async (ServerConnection * conn) -{ - GST_TRACE ("Waiting client message"); - - if (!ReadFileEx (conn->pipe, conn->client_msg, CONN_BUFFER_SIZE, - (OVERLAPPED *) conn, win32_ipc_pipe_server_wait_client_msg_finish)) { - UINT last_err = GetLastError (); - std::string msg = win32_ipc_error_message (last_err); - - GST_WARNING ("ReadFileEx failed with 0x%x (%s)", last_err, msg.c_str ()); - win32_ipc_pipe_server_close_connection (conn, TRUE); - } -} - -static HANDLE -win32_ipc_pipe_server_create_pipe (Win32IpcPipeServer * self, - OVERLAPPED * overlap, BOOL * io_pending) -{ - HANDLE pipe = CreateNamedPipeA (self->name.c_str (), - PIPE_ACCESS_DUPLEX | FILE_FLAG_OVERLAPPED, - PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT, - PIPE_UNLIMITED_INSTANCES, - CONN_BUFFER_SIZE, CONN_BUFFER_SIZE, 5000, nullptr); - if (pipe == INVALID_HANDLE_VALUE) { - self->last_err = GetLastError (); - std::string msg = win32_ipc_error_message (self->last_err); - GST_WARNING ("CreateNamedPipeA failed with 0x%x (%s)", - self->last_err, msg.c_str ()); - return INVALID_HANDLE_VALUE; - } - - /* Async pipe should return FALSE */ - if (ConnectNamedPipe (pipe, overlap)) { - self->last_err = GetLastError (); - std::string msg = win32_ipc_error_message (self->last_err); - GST_WARNING ("ConnectNamedPipe failed with 0x%x (%s)", - self->last_err, msg.c_str ()); - CloseHandle 
(pipe); - return INVALID_HANDLE_VALUE; - } - - *io_pending = FALSE; - self->last_err = GetLastError (); - switch (self->last_err) { - case ERROR_IO_PENDING: - *io_pending = TRUE; - break; - case ERROR_PIPE_CONNECTED: - SetEvent (overlap->hEvent); - break; - default: - { - std::string msg = win32_ipc_error_message (self->last_err); - GST_WARNING ("ConnectNamedPipe failed with 0x%x (%s)", - self->last_err, msg.c_str ()); - CloseHandle (pipe); - return INVALID_HANDLE_VALUE; - } - } - - self->last_err = ERROR_SUCCESS; - - return pipe; -} - -static void -win32_ipc_pipe_server_loop (Win32IpcPipeServer * self) -{ - BOOL io_pending = FALSE; - DWORD n_bytes; - DWORD wait_ret; - HANDLE waitables[3]; - HANDLE pipe; - OVERLAPPED overlap; - std::unique_lock<std::mutex> lk (self->lock); - - overlap.hEvent = CreateEvent (nullptr, TRUE, TRUE, nullptr); - pipe = win32_ipc_pipe_server_create_pipe (self, &overlap, &io_pending); - if (pipe == INVALID_HANDLE_VALUE) { - CloseHandle (overlap.hEvent); - self->cond.notify_all (); - return; - } - - self->last_err = ERROR_SUCCESS; - self->cond.notify_all (); - lk.unlock (); - - waitables[0] = overlap.hEvent; - waitables[1] = self->enqueue_event; - waitables[2] = self->cancellable; - - do { - ServerConnection *conn; - - /* Enters alertable state and wait for - * 1) Client's connection request - * (similar to socket listen/accept in async manner) - * 2) Or, performs completion routines (finish APC) - * 3) Or, terminates if cancellable event was signalled - */ - wait_ret = WaitForMultipleObjectsEx (3, waitables, FALSE, INFINITE, TRUE); - if (wait_ret == WAIT_OBJECT_0 + 2) { - GST_DEBUG ("Operation cancelled"); - goto out; - } - - switch (wait_ret) { - case WAIT_OBJECT_0: - if (io_pending) { - BOOL ret = GetOverlappedResult (pipe, &overlap, &n_bytes, FALSE); - if (!ret) { - UINT last_err = GetLastError (); - std::string msg = win32_ipc_error_message (last_err); - GST_WARNING ("ConnectNamedPipe failed with 0x%x (%s)", - last_err, msg.c_str ()); - 
CloseHandle (pipe); - break; - } - } - - conn = new ServerConnection (self, pipe); - GST_DEBUG ("New connection is established %p", conn); - - /* Stores current buffer if available */ - lk.lock(); - conn->minfo = self->minfo; - lk.unlock (); - - pipe = INVALID_HANDLE_VALUE; - self->conn.push_back (conn); - win32_ipc_pipe_server_wait_client_msg_async (conn); - pipe = win32_ipc_pipe_server_create_pipe (self, &overlap, &io_pending); - if (pipe == INVALID_HANDLE_VALUE) - goto out; - break; - case WAIT_OBJECT_0 + 1: - case WAIT_IO_COMPLETION: - { - std::vector<ServerConnection *> pending_conns; - std::shared_ptr<MmfInfo> minfo; - - lk.lock(); - minfo = self->minfo; - lk.unlock(); - - if (minfo) { - for (auto iter: self->conn) { - if (iter->pending_have_data && iter->seq_num <= minfo->seq_num) { - iter->minfo = minfo; - pending_conns.push_back (iter); - } - } - } - - for (auto iter: pending_conns) { - GST_LOG ("Sending pending have data to %p", iter); - win32_ipc_pipe_server_send_have_data_async (iter); - } - - break; - } - default: - GST_WARNING ("Unexpected WaitForMultipleObjectsEx return 0x%x", - (UINT) wait_ret); - goto out; - } - } while (true); - -out: - /* Cancels all I/O event issued from this thread */ - { - std::vector<HANDLE> pipes; - for (auto iter: self->conn) { - if (iter->pipe != INVALID_HANDLE_VALUE) - pipes.push_back (iter->pipe); - } - - for (auto iter: pipes) - CancelIo (iter); - } - - for (auto iter: self->conn) - win32_ipc_pipe_server_close_connection (iter, FALSE); - - self->conn.clear (); - - if (pipe != INVALID_HANDLE_VALUE) - CloseHandle (pipe); - - lk.lock (); - CloseHandle (overlap.hEvent); - self->last_err = ERROR_OPERATION_ABORTED; - self->cond.notify_all (); -} - -static BOOL -win32_ipc_pipe_server_run (Win32IpcPipeServer * self) -{ - std::unique_lock<std::mutex> lk (self->lock); - - self->thread = std::make_unique<std::thread> - (std::thread (win32_ipc_pipe_server_loop, self)); - self->cond.wait (lk); - - if (self->last_err != 
ERROR_SUCCESS) { - self->thread->join (); - self->thread = nullptr; - return FALSE; - } - - return TRUE; -} - -Win32IpcPipeServer * -win32_ipc_pipe_server_new (const char * pipe_name) -{ - Win32IpcPipeServer *self; - - if (!pipe_name) - return nullptr; - - self = new Win32IpcPipeServer (pipe_name); - - if (!win32_ipc_pipe_server_run (self)) { - win32_ipc_pipe_server_unref (self); - return nullptr; - } - - return self; -} - -Win32IpcPipeServer * -win32_ipc_pipe_server_ref (Win32IpcPipeServer * server) -{ - if (!server) - return nullptr; - - InterlockedIncrement (&server->ref_count); - - return server; -} - -void -win32_ipc_pipe_server_unref (Win32IpcPipeServer * server) -{ - ULONG ref_count; - - if (!server) - return; - - ref_count = InterlockedDecrement (&server->ref_count); - if (ref_count == 0) - delete server; -} - -void -win32_ipc_pipe_server_shutdown (Win32IpcPipeServer * server) -{ - GST_DEBUG ("Shutting down"); - - SetEvent (server->cancellable); - if (server->thread) { - server->thread->join (); - server->thread = nullptr; - } - - std::lock_guard<std::mutex> lk (server->lock); - server->last_err = ERROR_OPERATION_ABORTED; - server->minfo = nullptr; - server->cond.notify_all (); -} - -BOOL -win32_ipc_pipe_server_send_mmf (Win32IpcPipeServer * server, Win32IpcMmf * mmf, - const Win32IpcVideoInfo * info, void * user_data, Win32IpcMmfDestroy notify) -{ - std::lock_guard<std::mutex> lk (server->lock); - server->minfo = std::make_shared<MmfInfo> (mmf, info, server->seq_num, - user_data, notify); - - GST_LOG ("Enqueue mmf %s", win32_ipc_mmf_get_name (mmf)); - - server->seq_num++; - - /* Wakeup event loop */ - SetEvent (server->enqueue_event); - - return TRUE; -}
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/win32ipc/protocol/win32ipcpipeserver.h
Deleted
@@ -1,54 +0,0 @@ -/* GStreamer - * Copyright (C) 2022 Seungha Yang <seungha@centricular.com> - * - * Permission is hereby granted, free of charge, to any person obtaining a - * copy of this software and associated documentation files (the "Software"), - * to deal in the Software without restriction, including without limitation - * the rights to use, copy, modify, merge, publish, distribute, sublicense, - * and/or sell copies of the Software, and to permit persons to whom the - * Software is furnished to do so, subject to the following conditions: - * - * The above copyright notice and this permission notice shall be included in - * all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE - * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER - * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING - * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER - * DEALINGS IN THE SOFTWARE. - * - * SPDX-License-Identifier: MIT - */ - -#pragma once - -#include <windows.h> -#include <string.h> -#include "win32ipcmmf.h" -#include "win32ipcprotocol.h" -#include <gst/gst.h> - -G_BEGIN_DECLS - -struct Win32IpcPipeServer; - -typedef void (*Win32IpcMmfDestroy) (void * user_data); - -Win32IpcPipeServer * win32_ipc_pipe_server_new (const char * pipe_name); - -Win32IpcPipeServer * win32_ipc_pipe_server_ref (Win32IpcPipeServer * server); - -void win32_ipc_pipe_server_unref (Win32IpcPipeServer * server); - -void win32_ipc_pipe_server_shutdown (Win32IpcPipeServer * server); - -BOOL win32_ipc_pipe_server_send_mmf (Win32IpcPipeServer * server, - Win32IpcMmf * mmf, - const Win32IpcVideoInfo * info, - void * user_data, - Win32IpcMmfDestroy notify); - -G_END_DECLS -
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/win32ipc/protocol/win32ipcprotocol.cpp
Deleted
@@ -1,292 +0,0 @@ -/* GStreamer - * Copyright (C) 2022 Seungha Yang <seungha@centricular.com> - * - * Permission is hereby granted, free of charge, to any person obtaining a - * copy of this software and associated documentation files (the "Software"), - * to deal in the Software without restriction, including without limitation - * the rights to use, copy, modify, merge, publish, distribute, sublicense, - * and/or sell copies of the Software, and to permit persons to whom the - * Software is furnished to do so, subject to the following conditions: - * - * The above copyright notice and this permission notice shall be included in - * all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE - * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER - * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING - * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER - * DEALINGS IN THE SOFTWARE. 
- * - * SPDX-License-Identifier: MIT - */ - -#include "win32ipcprotocol.h" -#include <string.h> - -const char * -win32_ipc_pkt_type_to_string (Win32IpcPktType type) -{ - switch (type) { - case WIN32_IPC_PKT_NEED_DATA: - return "NEED-DATA"; - case WIN32_IPC_PKT_HAVE_DATA: - return "HAVE-DATA"; - case WIN32_IPC_PKT_READ_DONE: - return "READ-DONE"; - case WIN32_IPC_PKT_RELEASE_DATA: - return "RELEASE-DATA"; - default: - break; - } - - return "Unknown"; -} - -Win32IpcPktType -win32_ipc_pkt_type_from_raw (UINT8 type) -{ - return (Win32IpcPktType) type; -} - -UINT8 -win32_ipc_pkt_type_to_raw (Win32IpcPktType type) -{ - return (UINT8) type; -} - -#define READ_UINT32(d,v) do { \ - (*((UINT32 *) v)) = *((UINT32 *) d); \ - (d) += sizeof (UINT32); \ -} while (0) - -#define WRITE_UINT32(d,v) do { \ - *((UINT32 *) d) = v; \ - (d) += sizeof (UINT32); \ -} while (0) - -#define READ_UINT64(d,v) do { \ - (*((UINT64 *) v)) = *((UINT64 *) d); \ - (d) += sizeof (UINT64); \ -} while (0) - -#define WRITE_UINT64(d,v) do { \ - *((UINT64 *) d) = v; \ - (d) += sizeof (UINT64); \ -} while (0) - -UINT32 -win32_ipc_pkt_build_need_data (UINT8 * pkt, UINT32 pkt_len, UINT64 seq_num) -{ - UINT8 *data = pkt; - - if (!pkt || pkt_len < WIN32_IPC_PKT_NEED_DATA_SIZE) - return 0; - - data[0] = win32_ipc_pkt_type_to_raw (WIN32_IPC_PKT_NEED_DATA); - data++; - - WRITE_UINT64 (data, seq_num); - - return WIN32_IPC_PKT_NEED_DATA_SIZE; -} - -BOOL -win32_ipc_pkt_parse_need_data (UINT8 * pkt, UINT32 pkt_len, UINT64 * seq_num) -{ - UINT8 *data = pkt; - - if (!pkt || pkt_len < WIN32_IPC_PKT_NEED_DATA_SIZE) - return FALSE; - - if (win32_ipc_pkt_type_from_raw (data[0]) != WIN32_IPC_PKT_NEED_DATA) - return FALSE; - - data++; - - READ_UINT64 (data, seq_num); - - return TRUE; -} - -UINT32 -win32_ipc_pkt_build_have_data (UINT8 * pkt, UINT32 pkt_size, UINT64 seq_num, - const char * mmf_name, const Win32IpcVideoInfo * info) -{ - UINT8 *data = pkt; - size_t len; - - if (!pkt || !mmf_name || !info) - return 0; - - len = strlen
(mmf_name); - if (len == 0) - return 0; - - len++; - if (pkt_size < WIN32_IPC_PKT_HAVE_DATA_SIZE + len) - return 0; - - data[0] = win32_ipc_pkt_type_to_raw (WIN32_IPC_PKT_HAVE_DATA); - data++; - - WRITE_UINT64 (data, seq_num); - - strcpy ((char *) data, mmf_name); - data += len; - - WRITE_UINT32 (data, info->format); - WRITE_UINT32 (data, info->width); - WRITE_UINT32 (data, info->height); - WRITE_UINT32 (data, info->fps_n); - WRITE_UINT32 (data, info->fps_d); - WRITE_UINT32 (data, info->par_n); - WRITE_UINT32 (data, info->par_d); - WRITE_UINT64 (data, info->size); - - for (UINT i = 0; i < 4; i++) - WRITE_UINT64 (data, info->offset[i]); - - for (UINT i = 0; i < 4; i++) - WRITE_UINT32 (data, info->stride[i]); - - WRITE_UINT64 (data, info->qpc); - - return data - pkt; -} - -BOOL -win32_ipc_pkt_parse_have_data (UINT8 * pkt, UINT32 pkt_size, UINT64 * seq_num, - char * mmf_name, Win32IpcVideoInfo * info) -{ - UINT8 *data = pkt; - size_t len; - - if (!pkt || pkt_size < WIN32_IPC_PKT_HAVE_DATA_SIZE) - return FALSE; - - if (win32_ipc_pkt_type_from_raw (pkt[0]) != WIN32_IPC_PKT_HAVE_DATA) - return FALSE; - - data++; - - READ_UINT64 (data, seq_num); - - len = strnlen ((const char *) data, pkt_size - (data - pkt)); - if (len == 0) - return FALSE; - - len++; - if (pkt_size < WIN32_IPC_PKT_HAVE_DATA_SIZE + len) - return FALSE; - - strcpy (mmf_name, (const char *) data); - data += len; - - READ_UINT32 (data, &info->format); - READ_UINT32 (data, &info->width); - READ_UINT32 (data, &info->height); - READ_UINT32 (data, &info->fps_n); - READ_UINT32 (data, &info->fps_d); - READ_UINT32 (data, &info->par_n); - READ_UINT32 (data, &info->par_d); - READ_UINT64 (data, &info->size); - - for (UINT i = 0; i < 4; i++) - READ_UINT64 (data, &info->offset[i]); - - for (UINT i = 0; i < 4; i++) - READ_UINT32 (data, &info->stride[i]); - - READ_UINT64 (data, &info->qpc); - - return TRUE; -} - -UINT32 -win32_ipc_pkt_build_read_done (UINT8 * pkt, UINT32 pkt_len, UINT64 seq_num) -{ - UINT8 *data = pkt; - - if (!pkt
|| pkt_len < WIN32_IPC_PKT_READ_DONE_SIZE) - return 0; - - data[0] = win32_ipc_pkt_type_to_raw (WIN32_IPC_PKT_READ_DONE); - data++; - - WRITE_UINT64 (data, seq_num); - - return WIN32_IPC_PKT_READ_DONE_SIZE; -} - -BOOL -win32_ipc_pkt_parse_read_done (UINT8 * pkt, UINT32 pkt_len, UINT64 * seq_num) -{ - UINT8 *data = pkt; - - if (!pkt || pkt_len < WIN32_IPC_PKT_READ_DONE_SIZE) - return FALSE; - - if (win32_ipc_pkt_type_from_raw (data[0]) != WIN32_IPC_PKT_READ_DONE) - return FALSE; - - data++; - - READ_UINT64 (data, seq_num); - - return TRUE; -} - -UINT32 -win32_ipc_pkt_build_release_data (UINT8 * pkt, UINT32 pkt_size, UINT64 seq_num, - const char * mmf_name) -{ - UINT8 *data = pkt; - size_t len; - - if (!pkt || !mmf_name) - return 0; - - len = strlen (mmf_name); - if (len == 0) - return 0; - - len++; - - data[0] = win32_ipc_pkt_type_to_raw (WIN32_IPC_PKT_RELEASE_DATA); - data++; - - WRITE_UINT64 (data, seq_num); - - strcpy ((char *) data, mmf_name); - data += len; - - return data - pkt; -} - -BOOL -win32_ipc_pkt_parse_release_data (UINT8 * pkt, UINT32 pkt_size, - UINT64 * seq_num, char * mmf_name) -{ - UINT8 *data = pkt; - size_t len; - - if (win32_ipc_pkt_type_from_raw (pkt[0]) != WIN32_IPC_PKT_RELEASE_DATA) - return FALSE; - - data++; - - READ_UINT64 (data, seq_num); - - len = strnlen ((const char *) data, pkt_size - (data - pkt)); - if (len == 0) - return FALSE; - - len++; - - strcpy (mmf_name, (const char *) data); - data += len; - - return TRUE; -}
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/win32ipc/protocol/win32ipcprotocol.h
Deleted
@@ -1,260 +0,0 @@ -/* GStreamer - * Copyright (C) 2022 Seungha Yang <seungha@centricular.com> - * - * Permission is hereby granted, free of charge, to any person obtaining a - * copy of this software and associated documentation files (the "Software"), - * to deal in the Software without restriction, including without limitation - * the rights to use, copy, modify, merge, publish, distribute, sublicense, - * and/or sell copies of the Software, and to permit persons to whom the - * Software is furnished to do so, subject to the following conditions: - * - * The above copyright notice and this permission notice shall be included in - * all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE - * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER - * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING - * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER - * DEALINGS IN THE SOFTWARE. 
- * - * SPDX-License-Identifier: MIT - */ - -#pragma once - -#include <windows.h> - -#include <gst/gst.h> - -G_BEGIN_DECLS - -/* - * Communication Sequence - * - * +--------+ +--------+ - * | client | | server | - * +--------+ +--------+ - * | | - * +--------- NEED-DATA ---------->| - * | +-------+ - * | | prepare named - * | | shared-memory - * | +<------+ - * +<-- HAVE-DATA (w/ shm name) ---| - * +--------+ | - * Open named | | - * shared-memory | | - * +------->+ | - * |--------- READ-DONE ---------->| - * | | - * +--------+ | - * release | | - * shared-memory | | - * +--------| | - * |------- RELEASE-DATA --------->| - */ - -typedef enum -{ - WIN32_IPC_PKT_UNKNOWN, - WIN32_IPC_PKT_NEED_DATA, - WIN32_IPC_PKT_HAVE_DATA, - WIN32_IPC_PKT_READ_DONE, - WIN32_IPC_PKT_RELEASE_DATA, -} Win32IpcPktType; - -/* Same as GstVideoFormat */ -typedef enum -{ - WIN32_IPC_VIDEO_FORMAT_UNKNOWN, - WIN32_IPC_VIDEO_FORMAT_ENCODED, - WIN32_IPC_VIDEO_FORMAT_I420, - WIN32_IPC_VIDEO_FORMAT_YV12, - WIN32_IPC_VIDEO_FORMAT_YUY2, - WIN32_IPC_VIDEO_FORMAT_UYVY, - WIN32_IPC_VIDEO_FORMAT_AYUV, - WIN32_IPC_VIDEO_FORMAT_RGBx, - WIN32_IPC_VIDEO_FORMAT_BGRx, - WIN32_IPC_VIDEO_FORMAT_xRGB, - WIN32_IPC_VIDEO_FORMAT_xBGR, - WIN32_IPC_VIDEO_FORMAT_RGBA, - WIN32_IPC_VIDEO_FORMAT_BGRA, - WIN32_IPC_VIDEO_FORMAT_ARGB, - WIN32_IPC_VIDEO_FORMAT_ABGR, - WIN32_IPC_VIDEO_FORMAT_RGB, - WIN32_IPC_VIDEO_FORMAT_BGR, - WIN32_IPC_VIDEO_FORMAT_Y41B, - WIN32_IPC_VIDEO_FORMAT_Y42B, - WIN32_IPC_VIDEO_FORMAT_YVYU, - WIN32_IPC_VIDEO_FORMAT_Y444, - WIN32_IPC_VIDEO_FORMAT_v210, - WIN32_IPC_VIDEO_FORMAT_v216, - WIN32_IPC_VIDEO_FORMAT_NV12, - WIN32_IPC_VIDEO_FORMAT_NV21, - WIN32_IPC_VIDEO_FORMAT_GRAY8, - WIN32_IPC_VIDEO_FORMAT_GRAY16_BE, - WIN32_IPC_VIDEO_FORMAT_GRAY16_LE, - WIN32_IPC_VIDEO_FORMAT_v308, - WIN32_IPC_VIDEO_FORMAT_RGB16, - WIN32_IPC_VIDEO_FORMAT_BGR16, - WIN32_IPC_VIDEO_FORMAT_RGB15, - WIN32_IPC_VIDEO_FORMAT_BGR15, - WIN32_IPC_VIDEO_FORMAT_UYVP, - WIN32_IPC_VIDEO_FORMAT_A420, - WIN32_IPC_VIDEO_FORMAT_RGB8P, - 
WIN32_IPC_VIDEO_FORMAT_YUV9, - WIN32_IPC_VIDEO_FORMAT_YVU9, - WIN32_IPC_VIDEO_FORMAT_IYU1, - WIN32_IPC_VIDEO_FORMAT_ARGB64, - WIN32_IPC_VIDEO_FORMAT_AYUV64, - WIN32_IPC_VIDEO_FORMAT_r210, - WIN32_IPC_VIDEO_FORMAT_I420_10BE, - WIN32_IPC_VIDEO_FORMAT_I420_10LE, - WIN32_IPC_VIDEO_FORMAT_I422_10BE, - WIN32_IPC_VIDEO_FORMAT_I422_10LE, - WIN32_IPC_VIDEO_FORMAT_Y444_10BE, - WIN32_IPC_VIDEO_FORMAT_Y444_10LE, - WIN32_IPC_VIDEO_FORMAT_GBR, - WIN32_IPC_VIDEO_FORMAT_GBR_10BE, - WIN32_IPC_VIDEO_FORMAT_GBR_10LE, - WIN32_IPC_VIDEO_FORMAT_NV16, - WIN32_IPC_VIDEO_FORMAT_NV24, - WIN32_IPC_VIDEO_FORMAT_NV12_64Z32, - WIN32_IPC_VIDEO_FORMAT_A420_10BE, - WIN32_IPC_VIDEO_FORMAT_A420_10LE, - WIN32_IPC_VIDEO_FORMAT_A422_10BE, - WIN32_IPC_VIDEO_FORMAT_A422_10LE, - WIN32_IPC_VIDEO_FORMAT_A444_10BE, - WIN32_IPC_VIDEO_FORMAT_A444_10LE, - WIN32_IPC_VIDEO_FORMAT_NV61, - WIN32_IPC_VIDEO_FORMAT_P010_10BE, - WIN32_IPC_VIDEO_FORMAT_P010_10LE, - WIN32_IPC_VIDEO_FORMAT_IYU2, - WIN32_IPC_VIDEO_FORMAT_VYUY, - WIN32_IPC_VIDEO_FORMAT_GBRA, - WIN32_IPC_VIDEO_FORMAT_GBRA_10BE, - WIN32_IPC_VIDEO_FORMAT_GBRA_10LE, - WIN32_IPC_VIDEO_FORMAT_GBR_12BE, - WIN32_IPC_VIDEO_FORMAT_GBR_12LE, - WIN32_IPC_VIDEO_FORMAT_GBRA_12BE, - WIN32_IPC_VIDEO_FORMAT_GBRA_12LE, - WIN32_IPC_VIDEO_FORMAT_I420_12BE, - WIN32_IPC_VIDEO_FORMAT_I420_12LE, - WIN32_IPC_VIDEO_FORMAT_I422_12BE, - WIN32_IPC_VIDEO_FORMAT_I422_12LE, - WIN32_IPC_VIDEO_FORMAT_Y444_12BE, - WIN32_IPC_VIDEO_FORMAT_Y444_12LE, - WIN32_IPC_VIDEO_FORMAT_GRAY10_LE32, - WIN32_IPC_VIDEO_FORMAT_NV12_10LE32, - WIN32_IPC_VIDEO_FORMAT_NV16_10LE32, - WIN32_IPC_VIDEO_FORMAT_NV12_10LE40, - WIN32_IPC_VIDEO_FORMAT_Y210, - WIN32_IPC_VIDEO_FORMAT_Y410, - WIN32_IPC_VIDEO_FORMAT_VUYA, - WIN32_IPC_VIDEO_FORMAT_BGR10A2_LE, - WIN32_IPC_VIDEO_FORMAT_RGB10A2_LE, - WIN32_IPC_VIDEO_FORMAT_Y444_16BE, - WIN32_IPC_VIDEO_FORMAT_Y444_16LE, - WIN32_IPC_VIDEO_FORMAT_P016_BE, - WIN32_IPC_VIDEO_FORMAT_P016_LE, - WIN32_IPC_VIDEO_FORMAT_P012_BE, - WIN32_IPC_VIDEO_FORMAT_P012_LE, - 
WIN32_IPC_VIDEO_FORMAT_Y212_BE, - WIN32_IPC_VIDEO_FORMAT_Y212_LE, - WIN32_IPC_VIDEO_FORMAT_Y412_BE, - WIN32_IPC_VIDEO_FORMAT_Y412_LE, - WIN32_IPC_VIDEO_FORMAT_NV12_4L4, - WIN32_IPC_VIDEO_FORMAT_NV12_32L32, - WIN32_IPC_VIDEO_FORMAT_RGBP, - WIN32_IPC_VIDEO_FORMAT_BGRP, - WIN32_IPC_VIDEO_FORMAT_AV12, - WIN32_IPC_VIDEO_FORMAT_ARGB64_LE, - WIN32_IPC_VIDEO_FORMAT_ARGB64_BE, - WIN32_IPC_VIDEO_FORMAT_RGBA64_LE, - WIN32_IPC_VIDEO_FORMAT_RGBA64_BE, - WIN32_IPC_VIDEO_FORMAT_BGRA64_LE, - WIN32_IPC_VIDEO_FORMAT_BGRA64_BE, - WIN32_IPC_VIDEO_FORMAT_ABGR64_LE, - WIN32_IPC_VIDEO_FORMAT_ABGR64_BE, - WIN32_IPC_VIDEO_FORMAT_NV12_16L32S, - WIN32_IPC_VIDEO_FORMAT_NV12_8L128, - WIN32_IPC_VIDEO_FORMAT_NV12_10BE_8L128, -} Win32IpcVideoFormat; - -typedef struct -{ - Win32IpcVideoFormat format; - UINT32 width; - UINT32 height; - UINT32 fps_n; - UINT32 fps_d; - UINT32 par_n; - UINT32 par_d; - /* the size of memory */ - UINT64 size; - /* plane offsets */ - UINT64 offset[4]; - /* stride of each plane */ - UINT32 stride[4]; - /* QPC time */ - UINT64 qpc; -} Win32IpcVideoInfo; - -/* 1 byte (type) + 8 byte (seq-num) */ -#define WIN32_IPC_PKT_NEED_DATA_SIZE 9 - -/* 1 byte (type) + 8 byte (seq-num) + N bytes (name) + 4 (format) + - * 4 (width) + 4 (height) + 4 (fps_n) + 4 (fps_d) + 4 (par_n) + 4 (par_d) + - * 8 (size) + 8 * 4 (offset) + 4 * 4 (stride) + 8 (timestamp) */ -#define WIN32_IPC_PKT_HAVE_DATA_SIZE 101 - -/* 1 byte (type) + 8 byte (seq-num) */ -#define WIN32_IPC_PKT_READ_DONE_SIZE 9 - -const char * win32_ipc_pkt_type_to_string (Win32IpcPktType type); - -Win32IpcPktType win32_ipc_pkt_type_from_raw (UINT8 type); - -UINT8 win32_ipc_pkt_type_to_raw (Win32IpcPktType type); - -UINT32 win32_ipc_pkt_build_need_data (UINT8 * pkt, - UINT32 pkt_size, - UINT64 seq_num); - -BOOL win32_ipc_pkt_parse_need_data (UINT8 * pkt, - UINT32 pkt_size, - UINT64 * seq_num); - -UINT32 win32_ipc_pkt_build_have_data (UINT8 * pkt, - UINT32 pkt_size, - UINT64 seq_num, - const char * mmf_name, - const Win32IpcVideoInfo * 
info); - -BOOL win32_ipc_pkt_parse_have_data (UINT8 * pkt, - UINT32 pkt_size, - UINT64 * seq_num, - char * mmf_name, - Win32IpcVideoInfo * info); - -UINT32 win32_ipc_pkt_build_read_done (UINT8 * pkt, - UINT32 pkt_size, - UINT64 seq_num); - -BOOL win32_ipc_pkt_parse_read_done (UINT8 * pkt, - UINT32 pkt_size, - UINT64 * seq_num); - -UINT32 win32_ipc_pkt_build_release_data (UINT8 * pkt, - UINT32 pkt_size, - UINT64 seq_num, - const char * mmf_name); - -BOOL win32_ipc_pkt_parse_release_data (UINT8 * pkt, - UINT32 pkt_size, - UINT64 * seq_num, - char * mmf_name); - -G_END_DECLS -
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/win32ipc/protocol/win32ipcutils.cpp
Deleted
@@ -1,55 +0,0 @@ -/* GStreamer - * Copyright (C) 2022 Seungha Yang <seungha@centricular.com> - * - * Permission is hereby granted, free of charge, to any person obtaining a - * copy of this software and associated documentation files (the "Software"), - * to deal in the Software without restriction, including without limitation - * the rights to use, copy, modify, merge, publish, distribute, sublicense, - * and/or sell copies of the Software, and to permit persons to whom the - * Software is furnished to do so, subject to the following conditions: - * - * The above copyright notice and this permission notice shall be included in - * all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE - * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER - * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING - * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER - * DEALINGS IN THE SOFTWARE. - * - * SPDX-License-Identifier: MIT - */ - -#include "win32ipcutils.h" -#include <cctype> -#include <string> -#include <locale> -#include <codecvt> -#include <algorithm> - -static inline void rtrim(std::string &s) { - s.erase (std::find_if (s.rbegin(), s.rend(), - [] (unsigned char ch) { - return !std::isspace (ch); - }).base (), s.end ()); -} - -std::string -win32_ipc_error_message (DWORD error_code) -{ - WCHAR buffer[1024]; - - if (!FormatMessageW (FORMAT_MESSAGE_IGNORE_INSERTS | - FORMAT_MESSAGE_FROM_SYSTEM, nullptr, error_code, 0, buffer, - 1024, nullptr)) { - return std::string (""); - } - - std::wstring_convert<std::codecvt_utf8<wchar_t>, wchar_t> converter; - std::string ret = converter.to_bytes (buffer); - rtrim (ret); - - return ret; -}
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/win32ipc/protocol/win32ipcutils.h
Deleted
@@ -1,30 +0,0 @@ -/* GStreamer - * Copyright (C) 2022 Seungha Yang <seungha@centricular.com> - * - * Permission is hereby granted, free of charge, to any person obtaining a - * copy of this software and associated documentation files (the "Software"), - * to deal in the Software without restriction, including without limitation - * the rights to use, copy, modify, merge, publish, distribute, sublicense, - * and/or sell copies of the Software, and to permit persons to whom the - * Software is furnished to do so, subject to the following conditions: - * - * The above copyright notice and this permission notice shall be included in - * all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE - * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER - * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING - * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER - * DEALINGS IN THE SOFTWARE. - * - * SPDX-License-Identifier: MIT - */ - -#pragma once - -#include <windows.h> -#include <string> - -std::string win32_ipc_error_message (DWORD error_code);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ChangeLog -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ChangeLog
Changed
@@ -1,12 +1,413 @@ -=== release 1.26.10 === +=== release 1.28.0 === -2025-12-25 15:44:26 +0100 Tim-Philipp Müller <tim@centricular.com> +2026-01-27 17:02:33 +0000 Tim-Philipp Müller <tim@centricular.com> + + * NEWS: + * README.md: + * RELEASE: + * gst-plugins-bad.doap: + * meson.build: + Release 1.28.0 + +2026-01-27 15:14:41 +0000 Tim-Philipp Müller <tim@centricular.com> + + * ext/lcevcdecoder/gstlcevch264decodebin.c: + * ext/lcevcdecoder/gstlcevch265decodebin.c: + * ext/lcevcdecoder/gstlcevch266decodebin.c: + lcevch26xdecodebin: don't autoplug for now until issues with non-LCEVC streams are fixed + See https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4870 + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10612> + +2026-01-25 17:18:08 +0000 Tim-Philipp Müller <tim@centricular.com> + + * po/LINGUAS: + * po/ar.po: + * po/hr.po: + * po/ro.po: + gst-plugins-bad: update translations + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10598> + +2026-01-19 18:48:39 +0100 Rafael Caricio <rcaricio@netflix.com> + + * sys/applemedia/vtdec.c: + vtdec: check AV1 support during caps negotiation + Override getcaps vfunc to filter out video/x-av1 from advertised caps + when hardware decoding is not supported, rather than failing later in + set_format. This allows proper fallback to other decoders during + autoplugging. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10562> + +2026-01-20 20:27:40 +0530 Nirbheek Chauhan <nirbheek@centricular.com> + + * ext/rtmp/gstrtmp.c: + * ext/rtmp/gstrtmpsink.c: + * ext/rtmp/gstrtmpsrc.c: + rtmp: Emit a deprecation warning in init + After 1.28.0 is released, we will remove this plugin and have the + rtmp2 plugin register these features. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10574> + +2026-01-18 17:09:10 +0530 Nirbheek Chauhan <nirbheek@centricular.com> + + * meson.build: + * meson.options: + meson: Don't disable orc support when orcc is not available + This was breaking usage of orc when cross-compiling with no orcc + available in PATH. We can use the orc-dist.{c,h} files in that case as + long as the orc library itself is available. Using the subproject, for + example. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10553> + +2025-12-20 19:56:37 +0530 Nirbheek Chauhan <nirbheek@centricular.com> + + * sys/androidmedia/gstamc.c: + * sys/androidmedia/gstamcutils.c: + * sys/androidmedia/gstamcutils.h: + * sys/androidmedia/gstamcvideoenc.c: + * sys/androidmedia/gstjniutils.c: + * sys/androidmedia/gstjniutils.h: + * sys/androidmedia/jni/gstamc-codeclist-jni.c: + * sys/androidmedia/meson.build: + amc: Fix init on Android API < 29 + Not finding isHardwareAccelerated() is an error only for API >= 29. + Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4693 + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10426> + +2025-12-20 19:55:45 +0530 Nirbheek Chauhan <nirbheek@centricular.com> + + * sys/androidmedia/gstjniutils.c: + amc: Fix whitespace around gst_amc_jni_call_*_method macros + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10426> + +2026-01-14 19:07:40 -0600 Christopher Degawa <ccom@randomderp.com> + + * ext/svtav1/gstsvtav1enc.c: + svtav1enc: handle deprecations from SVT-AV1 4.0.0 + `enable_adaptive_quantization` was replaced by `aq_mode` since it's not + a boolean. `target_socket` was removed entirely. If someone wants to + pin the encoder to a specific socket, they will have to use external + means like numactl etc. 
+ Signed-off-by: Christopher Degawa <ccom@randomderp.com> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10543> + +2026-01-12 16:09:38 +0900 Seungha Yang <seungha@centricular.com> + + * docs/plugins/gst_plugins_cache.json: + * sys/win32ipc/gstwin32ipc.cpp: + * sys/win32ipc/gstwin32ipcbasesink.cpp: + * sys/win32ipc/gstwin32ipcbasesrc.cpp: + * sys/win32ipc/gstwin32ipcsink.cpp: + * sys/win32ipc/gstwin32ipcsrc.cpp: + win32ipc: Update plugin docs + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10529> + +2026-01-12 16:07:54 +0900 Seungha Yang <seungha@centricular.com> + + * docs/plugins/gst_plugins_cache.json: + * sys/d3d12/gstd3d12fisheyedewarp.cpp: + * sys/d3d12/gstd3d12memorycopy.cpp: + * sys/d3d12/gstd3d12overlaycompositor.cpp: + * sys/d3d12/gstd3d12remap.cpp: + d3d12: Update plugin docs + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10529> + +2026-01-12 15:23:24 +0900 Seungha Yang <seungha@centricular.com> + + * docs/plugins/gst_plugins_cache.json: + wasapi2: Update plugin docs + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10529> + +2026-01-12 07:34:24 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * sys/va/gstvaencoder.c: + va: encoder: don't fail at close if config isn't created + With the last refactor, the configuration of the encoder is lazy, so it's + possible to close the encoder without even opened it. But still return FALSE if + closing the encoder with no config ID, which generated spurious error logs. + This patch also resets the state of the encoder while closing. + Fixes: #4852 + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10528> + +2026-01-13 09:46:48 +0100 Sjoerd Simons <sjoerd@collabora.com> + + * ext/onnx/gstonnxinference.c: + onnx: Reset memory_info after freeing + For clarity also reset memory_info to NULL after freeing it. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10535> + +2026-01-13 08:33:36 +0100 Sjoerd Simons <sjoerd@collabora.com> + + * ext/onnx/gstonnxinference.c: + onnxinference: Fix invalid free of output names + Ensure the output_names array has been allocated before trying to free + its elements. This situation happens when the model validation fails + before the output names are assigned; I ran into it due to an + incorrect model info file. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10535> + +2026-01-10 21:27:30 +0100 Ruben Gonzalez <rgonzalez@fluendo.com> + + * tests/check/meson.build: + bad: fix indentation + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10527> + +2026-01-10 15:29:50 +0100 Ruben Gonzalez <rgonzalez@fluendo.com> + + * gst/closedcaption/meson.build: + * tests/check/meson.build: + closedcaption: map with meson options + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10527> + +2026-01-07 14:06:03 +0200 Sebastian Dröge <sebastian@centricular.com> + + * gst-libs/gst/play/gstplay.c: + play: Check correct flags for deciding whether a track is enabled + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10503> + +2026-01-07 13:25:55 +0200 Sebastian Dröge <sebastian@centricular.com> + + * gst-libs/gst/play/gstplay.c: + play: Move subtitle enabled condition to the right place + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10503> + +2026-01-06 18:29:30 +0200 Sebastian Dröge <sebastian@centricular.com> + + * gst-libs/gst/play/gstplay.c: + play: Don't access currently selected audio/video/subtitle stream ids without mutex + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10503> + +2026-01-06 18:25:41 +0200 Sebastian Dröge <sebastian@centricular.com> + + * gst-libs/gst/play/gstplay.c: + play: Don't do the same stream selection multiple times + 
playbin3 doesn't respond with a streams-selected message in that case, and apart + from that it's also inefficient. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10503> + +2026-01-06 14:20:30 +0200 Sebastian Dröge <sebastian@centricular.com> + + * gst-libs/gst/play/gstplay.c: + play: Ignore streams-selected messages for old selections + It might happen that we received a new stream-collection and sent a new + select-streams event in the meantime, and reacting to the old streams-selected + message can cause inconsistent states if the streams have changed. + See https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9851 + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10503> + +2025-12-27 18:47:42 +0200 Sebastian Dröge <sebastian@centricular.com> + + * ext/webrtc/gstwebrtcbin.c: + * gst-libs/gst/webrtc/ice.c: + * gst-libs/gst/webrtc/nice/nice.c: + webrtc: Change promise in gst_webrtc_ice_close() to `transfer none` + This is more in line with other API taking a promise and makes it easier for + calling code to keep a reference to the promise around. + Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4819 + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10460> + +2026-01-05 20:20:51 +0000 Tim-Philipp Müller <tim@centricular.com> + + * meson.build: + Back to development after 1.27.90 + +=== release 1.27.90 === + +2026-01-05 20:15:10 +0000 Tim-Philipp Müller <tim@centricular.com> * NEWS: * RELEASE: * gst-plugins-bad.doap: * meson.build: - Release 1.26.10 + Release 1.27.90 + +2026-01-05 18:08:00 +0900 jeongmin kwak <jeongmin.kwak@lge.com> + + * gst-libs/gst/codecs/gstvp8decoder.c: + vp8decoder: Fix incorrect variable in warning message + The warning message for unrecognized copy_buffer_to_golden was + incorrectly printing copy_buffer_to_alternate instead of + copy_buffer_to_golden. 
This was a copy-paste error that would + display the wrong field value when debugging invalid VP8 streams. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10494> + +2026-01-04 21:58:36 +0100 Christian Gräfe <cgraefe83@gmail.com> + + * docs/plugins/gst_plugins_cache.json: + * ext/qroverlay/gstdebugqroverlay.c: + * ext/qroverlay/gstqroverlay.c: + qroverlay: use proper klass + klass was not really keywords but a description + use base type "Video" + and functional type "Overlay" (not defined yet in the docu, but also used by other elements) + also fix the long-name of debugqroverlay + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10493> + +2025-12-30 18:53:26 -0500 Doug Nazar <nazard@nazar.ca> + + * ext/assrender/gstassrender.c: + * ext/sndfile/gstsfsrc.c: + * ext/ttml/gstttmlrender.c: + * gst/dvdspu/gstdvdspu.c: + * gst/segmentclip/gstsegmentclip.c: + * tools/element-templates/sinkpad: + gst: Properly unref pad template caps + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10477> + +2025-12-31 09:08:15 +0200 Sebastian Dröge <sebastian@centricular.com> + + * ext/mpeg2enc/gstmpeg2enc.cc: + mpeg2enc: Fix indentation + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10476> + +2025-12-30 17:53:22 -0500 Doug Nazar <nazard@nazar.ca> + + * tests/check/elements/audiovisualizer.c: + * tests/check/elements/mpegtsmux.c: + tests: Fix several memory leaks + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10476> + +2025-12-30 17:51:50 -0500 Doug Nazar <nazard@nazar.ca> + + * ext/mpeg2enc/gstmpeg2enc.cc: + mpeg2enc: Fix several memory leaks + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10476> + +2025-12-26 15:38:07 +0900 jeongmin kwak <jeongmin.kwak@lge.com> + + * ext/onnx/gstonnxinference.c: + onnxinference: Fix integer overflow in buffer allocation + Use g_size_checked_mul() to safely calculate 
the buffer size + (width * height * channels * element_size) to prevent integer + overflow which could lead to undersized buffer allocation and + heap corruption. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10449> + +2025-12-24 11:28:53 -0500 Olivier Crête <olivier.crete@collabora.com> + + * gst-libs/gst/analytics/gstanalyticsmeta.c: + * gst-libs/gst/analytics/gstanalyticsobjectdetectionmtd.c: + * gst-libs/gst/analytics/gstanalyticssegmentationmtd.c: + analyticsmeta: Export debug category to the Mtd for better debug messages + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10443> + +2025-12-24 11:18:25 -0500 Olivier Crête <olivier.crete@collabora.com> + + * gst-libs/gst/analytics/gstanalyticsmeta.c: + analyticsmeta: Set debug category as default + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10443> + +2025-12-24 11:08:10 -0500 Olivier Crête <olivier.crete@collabora.com> + + * gst-libs/gst/analytics/gstanalyticsobjectdetectionmtd.c: + objectdetectionmtd: Reject transformations that aren't 90deg based or symetries + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10443> + +2025-12-24 10:37:50 -0500 Olivier Crête <olivier.crete@collabora.com> + + * gst-libs/gst/analytics/gstanalyticssegmentationmtd.c: + segmentationmtd: Drop meta on rotations/flips + We don't have any code to do this kind of transformation. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10443> + +2025-12-23 16:20:16 -0500 Olivier Crête <olivier.crete@collabora.com> + + * gst-libs/gst/analytics/gstanalyticsmeta.c: + analyticsrelationmeta: Skip Mtd that can't be transformed + Also do the relationship copying in the end. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10443> + +2025-12-26 14:32:08 +0900 jeongmin kwak <jeongmin.kwak@lge.com> + + * gst/geometrictransform/gstgeometrictransform.c: + geometrictransform: Fix integer overflow in map allocation + Use g_size_checked_mul() to safely calculate the map size + (width * height * 2 * sizeof(gdouble)) to prevent integer + overflow which could lead to undersized buffer allocation + and subsequent heap corruption when the map is populated. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10448> + +2025-12-26 10:26:38 +0900 jeongmin kwak <jeongmin.kwak@lge.com> + + * gst/geometrictransform/gstkaleidoscope.c: + kaleidoscope: Fix potential division by zero in geometric transform + Avoid division by zero when cos(theta) is close to zero. When theta + approaches ±π/2 after the triangle function calculation, cos(theta) + becomes zero, which would cause undefined behavior. Check that + cos(theta) is sufficiently far from zero before performing the + division. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10438> + +2025-12-22 21:56:05 +0100 Philippe Normand <philn@igalia.com> + + * ext/webrtc/gstwebrtcbin.c: + webrtcbin: Check transport before setting state when closing + `webrtc_transceiver_get_dtls_transport()` can return NULL so that needs to be + checked by the caller. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10432> + +2025-12-22 16:23:10 +0200 Sebastian Dröge <sebastian@centricular.com> + + * gst-libs/gst/webrtc/nice/nice.c: + * gst/mpegtsmux/gstbasetsmuxjpegxs.h: + * gst/tensordecoders/gstclassifiertensordecoder.c: + gst: Remove various wrongly added includes + These were most likely added by clangd automatically. + Please use `-header-insertion=never` with clangd! 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8519> + +2025-12-17 11:36:14 +0100 Johan Sternerup <johast@axis.com> + + * gst-libs/gst/webrtc/nice/meson.build: + * gst-libs/gst/webrtc/nice/nice.c: + webrtcnice: Depend on libnice 0.1.23 + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8519> + +2025-04-16 14:48:57 +0200 Johan Sternerup <johast@axis.com> + + * gst-libs/gst/webrtc/nice/meson.build: + * gst-libs/gst/webrtc/nice/nice.c: + webrtcnice: Close agent and do it with force + Call nice_agent_close_async() to make sure all outstanding resolve tasks have + finished, which means we avoid a potential leak of a GMainContext with + an associated file descriptor. We only do this if + NICE_AGENT_OPTION_CLOSE_FORCED is available and can be used, because + otherwise the closing procedure can take quite a long time while waiting + for turn server responses. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8519> + +2025-04-28 13:44:12 +0200 Johan Sternerup <johast@axis.com> + + * gst-libs/gst/webrtc/nice/nice.c: + webrtcnice: Wait for completion of proxy and mdns resolve tasks + This commit ensures all outstanding name resolving for mdns and proxy is + finished before the nice agent thread is shut down. This is important + because the DNS resolving is performed by GThreadedResolver, which uses + a GTask taking a strong reference to the agent main context in order to + be able to finish the task and run the final callback within the agent main + context. Consequently, the main context cannot be disposed until the + GTask has finished. Leaking a GMainContext is particularly bad because + it involves leaking the file descriptor used for polling events. + GThreadedResolver takes a strong reference to the GTask. It hands this + reference over to an internal worker thread that ultimately calls libc's + `getaddrinfo()`.
This function can hang for an undefined amount of time + depending on network or network driver conditions. This makes it + impossible to completely control the lifetime of our GTask/GMainContext, + but what we can do is make sure that in case of a long hang or early + shutdown of the agent, the GTask has already finished so that when + `getaddrinfo()` returns the only work left to do is to drop the + reference to GTask (and possibly GMainContext). + GThreadedResolver already sets up two mechanisms for finishing the task + earlier than `getaddrinfo()`: a timeout source and a cancellation + source. The cancellation source was previously created by + GThreadedResolver, but we're now passing it ourselves so that we can + explicitly invoke cancellation. There is one quirk though. Both the + timeout source and the cancellation source are associated with the gio + global worker context. Thus, cancellation will happen in the thread + driving the worker context and there it will conclude that the final + agent callback must be invoked within the agent main context. This + detour means we cannot simply cancel and expect the GTask to be + finalized without also explicitly waiting for the finalization. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8519> 2025-12-18 18:23:56 -0500 Nicolas Dufresne <nicolas.dufresne@collabora.com> @@ -19,7 +420,7 @@ trying to output. These two decoders simply didn't care, unlike all the others. Use a similar technique as in h264dec, which is to mark the frame as not for output. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10431> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10421> 2025-12-18 15:38:55 -0500 Nicolas Dufresne <nicolas.dufresne@collabora.com> @@ -29,7 +430,7 @@ Some of the decoders would simply turn any flow return into an ERROR, which can be noisy when the original flow return is FLUSHING due to a normal flush condition.
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10431> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10421> 2025-12-18 15:37:38 -0500 Nicolas Dufresne <nicolas.dufresne@collabora.com> @@ -38,7 +439,7 @@ v4l2codecs: av1/vp9: Fix request leak on error The request was not being freed properly if there was an error. Typically a flush / seek operation could cause this. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10431> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10421> 2025-12-18 15:35:07 -0500 Nicolas Dufresne <nicolas.dufresne@collabora.com> @@ -51,7 +452,115 @@ requested or for which the request has been removed by the flush operation. Make sure to catch this kind of programming error with an assert, while protecting against crashes in this case. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10431> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10421> + +2025-12-18 15:46:24 +0900 jeongmin kwak <jeongmin.kwak@lge.com> + + * ext/hls/gsthlsdemux.c: + hlsdemux: Mark discontinuity on seek + When seeking across an EXT-X-DISCONTINUITY tag, set the internal + stream discont flag. This ensures the next buffer is correctly marked + with GST_BUFFER_FLAG_DISCONT, signaling a timeline reset to + downstream elements.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10402> + +2025-12-17 13:17:52 -0600 Brad Reitmeyer <brad.reitmeyer@resi.io> + + * sys/decklink/gstdecklinkvideosink.cpp: + decklinkvideosink: Fix frame duration to be based on the decklink clock + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10401> + +2025-12-21 22:12:43 -0500 Daniel Morin <daniel.morin@collabora.com> + + * ext/onnx/gstonnxinference.c: + onnx: remove invalid properties + - input-tensor-offset and input-tensor-scale have been removed + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10427> + +2025-12-19 15:40:26 +0530 Nirbheek Chauhan <nirbheek@centricular.com> + + * ext/rsvg/meson.build: + meson: Solve some cyclic dependencies caused by test-only deps + gstreamer => gobject-introspection => cairo => fontconfig => freetype2 => harfbuzz => cairo + gst-plugins-base => libdrm => cairo => fontconfig => freetype2 => harfbuzz => cairo + gst-plugins-good => cairo => librsvg => cairo + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10422> + +2025-12-19 11:16:27 +0100 Adrien Plazas <aplazas@gnome.org> + + * gst-libs/gst/play/gstplay.c: + * gst-libs/gst/play/gstplay.h: + gstplay: Add gapless looping + This adds the GstPlayLoop enumeration and the loop configuration + accessor methods. It allows gapless looping over the current track. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10418> + +2025-12-14 11:59:43 -0500 Daniel Morin <daniel.morin@collabora.com> + + * docs/plugins/gst_plugins_cache.json: + doc: update gst-plugins-bad doc cache + - doc cache updated + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10380> + +2025-12-13 11:37:04 -0500 Daniel Morin <daniel.morin@collabora.com> + + * gst/tensordecoders/gstssdtensordec.c: + * gst/tensordecoders/gstssdtensordec.h: + * gst/tensordecoders/gsttensordecoders.c: + tensordecoder: ssdtensordec backward compat with old name + - Also register old name "ssdobjectdetector" for backward compatibility. + - Warn about deprecation in instance_init + - Better annotations related to deprecation + Co-authored-by: Sebastian Dröge <sebastian@centricular.com> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10380> + +2025-12-18 18:07:17 +0200 Sebastian Dröge <sebastian@centricular.com> + + * gst/librfb/gstrfbsrc.h: + video: Include gstvideodmabufpool.h from video.h + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10411> + +2025-12-16 16:41:57 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * gst/jpegformat/gstjpegparse.c: + * gst/jpegformat/gstjpegparse.h: + jpegparse: enable MPF support in the JPEG parser + Parse Multi Picture Format APP2 segments to handle multiple images in a + single JPEG stream. Ignore non-primary images and adjust frame logic for + correct framing. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10301> + +2025-12-04 18:24:24 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * gst/jpegformat/gstjpegparse.c: + jpegparse: add MPF parsing support + Parse APP2 segments containing Multi-Picture Format data according to CIPA DC-x + 007-2009 specification. This enables handling of JPEG files with multiple + embedded images like panoramas or stereoscopic pairs.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10301> + +2025-12-17 14:39:07 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * gst/jpegformat/gstjpegparse.c: + jpegparse: add synthetic header/footer for XMP + Add synthetic xpacket XML header and footer if missing in XMP data in APP1 + segment, since they are needed for the parsing function + gst_tag_list_from_xmp_buffer(). + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10301> + +2025-12-17 14:47:38 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * gst/jpegformat/gstjpegparse.c: + jpegparse: add metadata state flag + This ensures metadata from APP segments and comments is accounted for, + preventing the loss of those segments. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10301> + +2025-12-18 17:47:14 +0200 Sebastian Dröge <sebastian@centricular.com> + + * gst-libs/gst/analytics/modelinfo.c: + modelinfo: Add some missing annotations + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10410> 2025-12-17 14:24:31 +0200 Sebastian Dröge <sebastian@centricular.com> @@ -61,7 +570,7 @@ Apart from keeping less state around, this also calculates more accurate timestamps because of tracking everything in terms of edit units instead of nanoseconds. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10406> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10396> 2025-12-17 11:57:01 +0200 Sebastian Dröge <sebastian@centricular.com> @@ -69,15 +578,7 @@ mxfdemux: Switch edit unit position tracking to unsigned integers These can never become negative and the only negative number in use is -1 for "unset", which maps equally well to G_MAXUINT64.
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10406> - -2025-12-11 21:53:04 -0500 Aaron Boxer <aaron.boxer@collabora.com> - - * gst-libs/gst/play/gstplay.c: - play: do not call gst_pb_utils_get_codec_description if caps are not fixed - this avoids throwing a (harmless) exception when stream selection is - called before pipeline is linked - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10394> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10396> 2025-12-17 18:58:14 +0200 Sebastian Dröge <sebastian@centricular.com> @@ -86,7 +587,7 @@ aiffparse: Remove segment closing on non-flushing seeks That's a 0.10 leftover and not necessary anymore, and can confuse downstream elements unnecessarily. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10400> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10399> 2025-12-17 18:53:26 +0200 Sebastian Dröge <sebastian@centricular.com> @@ -96,7 +597,7 @@ That's a 0.10 leftover and not necessary anymore, and can confuse downstream elements unnecessarily. Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4803 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10400> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10399> 2025-12-17 10:23:25 +0900 jeongmin kwak <jeongmin.kwak@lge.com> @@ -104,7 +605,7 @@ aesdec: use gsize for buffer sizes and fix log format Replace casts with gsize in gst_aes_dec_prepare_output_buffer() and use G_GSIZE_FORMAT for logging to avoid truncation and warnings. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10398> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10239> 2025-11-28 14:40:49 +0900 jeongmin kwak <jeongmin.kwak@lge.com> @@ -112,7 +613,7 @@ aesenc: use gsize for buffer sizes and fix log format Replace casts with gsize in gst_aes_enc_prepare_output_buffer() and use G_GSIZE_FORMAT for logging to avoid truncation and warnings. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10398> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10239> 2025-12-17 11:07:18 +0900 jeongmin kwak <jeongmin.kwak@lge.com> @@ -121,14 +622,56 @@ Prevent NULL dereference by checking gst_message_get_structure() result before accessing fields. Replace strcmp() with gst_structure_has_name() for safe structure name comparison. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10395> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10393> + +2025-12-11 21:53:04 -0500 Aaron Boxer <aaron.boxer@collabora.com> + + * gst-libs/gst/play/gstplay.c: + play: do not call gst_pb_utils_get_codec_description if caps are not fixed + this avoids throwing a (harmless) exception when stream selection is + called before pipeline is linked + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10341> + +2025-12-15 17:57:49 -0500 Daniel Morin <daniel.morin@collabora.com> + + * docs/plugins/gst_plugins_cache.json: + docs: Update gir / plugins docs cache + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10384> + +2025-12-15 15:08:02 -0500 Daniel Morin <daniel.morin@collabora.com> + + * ext/onnx/gstonnxinference.c: + * ext/tflite/gsttfliteinference.c: + * gst/tensordecoders/gstclassifiertensordecoder.c: + * gst/tensordecoders/gstfacedetectortensordecoder.c: + * gst/tensordecoders/gstssdtensordec.c: + * 
gst/tensordecoders/gstyolosegtensordecoder.c: + * gst/tensordecoders/gstyolotensordecoder.c: + gst: Rename GstValueSet to GstValueUniqueList + GObject-Introspection has an issue with GstSet because anything that starts with + 'gst_value_set' becomes something that belongs to 'GstSet' but we have + gst_value_set_bitmask and gst_value_set_SOMETHING (), which all would become + methods of GstSet. + To avoid this, rename GstSet (aka GstValueSet) to GstUniqueList (aka + GstValueUniqueList). + Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4813 + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10384> 2025-12-16 13:05:19 +0100 François Laignel <francois@centricular.com> * gst/mxf/mxfdemux.c: mxfdemux: send event SegmentDone for segment seeks ... instead of sending an EOS event in that case. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10391> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10331> + +2025-12-10 15:54:50 +0100 Jakub Adam <jakub.adam@collabora.com> + + * ext/vmaf/meson.build: + meson: fix building -bad tests with disabled vmaf + Fixes an error from Meson: + subprojects/gst-plugins-bad/tests/check/meson.build:175:85: ERROR: + Unknown variable "libvmaf_dep". + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10353> 2025-12-13 11:29:03 +0200 Sebastian Dröge <sebastian@centricular.com> @@ -138,14 +681,31 @@ a known config and then back to the same unknown config makes it use invalid cached channel positions. 
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4791 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10373> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10370> 2025-11-26 20:09:20 +0100 Jakub Adam <jakub.adam@collabora.com> * ext/dtls/gstdtlsdec.c: dtlsdec: mark generated cert agent with GST_OBJECT_FLAG_MAY_BE_LEAKED So that it is ignored by the leaks tracer. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10371> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10366> + +2025-12-12 15:41:42 +0900 Seungha Yang <seungha@centricular.com> + + * sys/win32ipc/gstwin32ipcbasesink.cpp: + win32ipcbasesink: Serialize metas from uploaded buffer + Use metas attached to the uploaded buffer instead of the original one, + as the uploaded buffer may have different memory-layout-related metas + such as video meta + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10362> + +2025-12-10 16:30:55 +0900 Seungha Yang <seungha@centricular.com> + + * sys/win32ipc/gstwin32ipcsink.cpp: + win32ipcsink: Preserve original buffer flags in raw-video fallback path + Copy all metadata, including buffer flags, from the original buffer so + that clients receive the intended information + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10343> 2025-12-09 11:12:12 +0100 Hyunjun Ko <zzoon@igalia.com> @@ -153,13 +713,170 @@ vkformat: Add VK_FORMAT_G10X6_B10X6R10X6_2PLANE_420_UNORM_3PACK16 format Add support for VK_FORMAT_G10X6_B10X6R10X6_2PLANE_420_UNORM_3PACK16, which is the Vulkan equivalent of P010.
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10360> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10333> + +2025-12-10 20:53:29 -0500 Daniel Morin <daniel.morin@collabora.com> + + * docs/plugins/gst_plugins_cache.json: + doc: gst-plugins-bad doc cache update + - cache update + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10340> + +2025-12-09 15:03:34 -0500 Daniel Morin <daniel.morin@collabora.com> + + * gst/tensordecoders/gstssdtensordec.c: + * gst/tensordecoders/gstssdtensordec.h: + * gst/tensordecoders/gsttensordecoders.c: + * gst/tensordecoders/meson.build: + tensordecoder: rename ssdobjectdetector to ssdtensordec + - renamed ssdobjectdetector to ssdtensordec + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10340> + +2025-12-08 12:58:44 -0500 Daniel Morin <daniel.morin@collabora.com> + + * docs/plugins/gst_plugins_cache.json: + gst-python: make python linter happy with test_analytics + - Changes to conform to PEP8 + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10244> + +2025-12-08 00:09:47 -0500 Daniel Morin <daniel.morin@collabora.com> + + * gst/tensordecoders/gstyolosegtensordecoder.c: + * gst/tensordecoders/gstyolotensordecoder.c: + tensordecoders: yolo tensordecoders + - Change to dims-order from row-major to col-major as they are col-major and + with new modelinfo improvement we can communicate this from inference. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10244> + +2025-11-30 22:42:09 -0500 Daniel Morin <daniel.morin@collabora.com> + + * ext/onnx/gstonnxinference.c: + onnx: refactor to use ModelInfo + - Retrieve model metadata from modelInfo instead of from ONNX.
+ - Validate modelInfo matches ONNX, when available + - Get means and stddev from ModelInfo + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10244> + +2025-12-03 22:24:27 -0500 Daniel Morin <daniel.morin@collabora.com> + + * ext/tflite/gsttfliteinference.c: + * ext/tflite/meson.build: + * ext/tflite/modelinfo.c: + * ext/tflite/modelinfo.h: + tflite: adapt to analytics modelinfo moved + - use modelinfo from analytics + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10244> + +2025-12-03 22:23:32 -0500 Daniel Morin <daniel.morin@collabora.com> + + * gst-libs/gst/analytics/analytics.h: + * gst-libs/gst/analytics/meson.build: + * gst-libs/gst/analytics/modelinfo.c: + * gst-libs/gst/analytics/modelinfo.h: + analytics: move modelinfo to analytics lib + - moved to analytics and added dims-order setting + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10244> + +2025-12-09 22:01:48 -0500 Daniel Morin <daniel.morin@collabora.com> + + * gst/tensordecoders/gstioutracker.h: + tensordecoder: fix typo in header + - gstreamer-ssdobjectdetector -> gstreamer-ioutracker + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10342> + +2025-12-09 17:30:58 +0100 Mathieu Duponchelle <mathieu@centricular.com> + + * gst-libs/gst/webrtc/nice/nice.c: + * gst-libs/gst/webrtc/nice/nicetransport.c: + * gst-libs/gst/webrtc/nice/niceutils.h: + webrtc/nice: fix crashes on gathering stats for relay candidates + libnice does not have TURN information for remote relay candidates, + but `nice_candidate_relay_address` returns void and doesn't + check if the `turn` field is set, unlike + `nice_candidate_stun_server_address`. + As a consequence, we must only call the API for local candidates.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10336> + +2025-12-09 19:13:20 +0000 Tim-Philipp Müller <tim@centricular.com> + + * meson.build: + Back to development after 1.27.50 + +=== release 1.27.50 === + +2025-12-09 19:08:48 +0000 Tim-Philipp Müller <tim@centricular.com> + + * NEWS: + * RELEASE: + * gst-plugins-bad.doap: + * meson.build: + Release 1.27.50 + +2025-12-05 13:15:19 -0500 Nicolas Dufresne <nicolas.dufresne@collabora.com> + + * docs/plugins/gst_plugins_cache.json: + * gst-libs/gst/wayland/gstwlvideoformat.h: + wayland: Add Y444 pixel format support + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10307> + +2025-12-05 13:52:45 +0100 Robert Mader <robert.mader@collabora.com> + + * ext/gtk/gstgtkwaylandsink.c: + * ext/wayland/gstwaylandsink.c: + waylandsink: Propose udmabuf allocator + This change syncs both the GTK and native waylandsink propose_allocation methods and + adds support for the new udmabuf allocator and dmabuf pool. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10307> + +2025-11-26 21:44:56 +1100 Jan Schmidt <jan@centricular.com> + + * ext/webrtc/gstwebrtcbin.c: + webrtcbin: Ensure ice-gathering-state reaches complete + If there are pending ice candidates queued for emission when the + underlying ICE implementation signals that all transports have completed + gathering ICE candidates, then reporting that the 'ice-gathering-state' + has reached 'complete' state is deferred. In that situation, we need + to emit the completion state change after the pending candidates finish + emission. Previously, the state change would sometimes not complete.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10206> + +2025-12-08 16:20:39 -0500 Nicolas Dufresne <nicolas.dufresne@collabora.com> + + * gst-libs/gst/codecparsers/gsth264bitwriter.c: + codecparsers: h264bitwriter: Fix writing of scaling lists + The logic in the writer was comparing the scaling lists differently from + the parser. Where the parser compared the first list to the defaults and + later lists to the earlier ones, the writer compared every list to the default. + This means a PPS received with scaling lists 0,3,6,7 would be transmitted + with 0-11 all filled in. There was also an extra nested loop with the same + iteration criteria that needed to be removed. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9505> + +2025-12-08 12:19:19 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * ext/vulkan/vkav1dec.c: + * ext/vulkan/vkh264dec.c: + * ext/vulkan/vkh265dec.c: + * ext/vulkan/vkvp9dec.c: + vulkan: decoders: fallback to video decoder's decide_allocation() + As in !10297, no Vulkan image buffer pool is allocated but rather a plain video + decoder buffer pool, which is less expensive than the Vulkan one, and it will be + de-allocated shortly afterwards. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10324> + +2025-12-08 12:14:11 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * ext/vulkan/vkav1dec.c: + vulkanav1dec: renegotiate after events + This condition was missed when AV1 decoding was merged, while it was added for + H.264 and H.265 in !9560; VP9 was merged with it from the start.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10324> 2025-12-04 16:40:25 -0600 Brad Reitmeyer <brad.reitmeyer@resi.io> * sys/decklink/gstdecklinkvideosink.cpp: decklinkvideosink: Fix frame completion callbacks for firmware 14.3+ - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10322> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10295> 2025-12-06 10:25:21 +0000 Philippe Normand <philn@igalia.com> @@ -169,7 +886,162 @@ By doing so we avoid potential race conditions. The libnice ICE implementation was then adapted to comply with the transfer-full return value of the `gst_webrtc_ice_add_stream()` vfunc. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10318> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10312> + +2025-12-05 17:16:05 +0900 Seungha Yang <seungha@centricular.com> + + * sys/win32ipc/gstwin32ipcserver.cpp: + win32ipc: Enhance I/O cancel sequence + Waits for pending I/O before releasing overlap struct to avoid + potential use-after-free + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10296> + +2025-12-04 22:16:32 +0900 Seungha Yang <seungha@centricular.com> + + * sys/win32ipc/gstwin32ipcbasesink.cpp: + * sys/win32ipc/gstwin32ipcbasesink.h: + * sys/win32ipc/gstwin32ipcbufferpool.cpp: + * sys/win32ipc/gstwin32ipcclient.cpp: + * sys/win32ipc/gstwin32ipcprotocol.cpp: + * sys/win32ipc/gstwin32ipcprotocol.h: + * sys/win32ipc/gstwin32ipcserver.cpp: + * sys/win32ipc/gstwin32ipcserver.h: + * sys/win32ipc/gstwin32ipcsink.cpp: + * sys/win32ipc/gstwin32ipcsink.h: + * sys/win32ipc/gstwin32ipcsrc.cpp: + * sys/win32ipc/gstwin32ipcsrc.h: + * sys/win32ipc/gstwin32ipcvideosink.cpp: + * sys/win32ipc/meson.build: + * sys/win32ipc/plugin.cpp: + win32ipc: Add generic shared memory src/sink elements + Adding win32ipcsink and win32ipcsrc element which supports any + type of streams in 
addition to raw video + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10296> + +2025-12-04 19:47:29 +0900 Seungha Yang <seungha@centricular.com> + + * sys/win32ipc/gstwin32ipcbasesink.cpp: + * sys/win32ipc/gstwin32ipcbasesink.h: + * sys/win32ipc/gstwin32ipcbasesrc.cpp: + * sys/win32ipc/gstwin32ipcbasesrc.h: + * sys/win32ipc/gstwin32ipcclient.cpp: + * sys/win32ipc/gstwin32ipcprotocol.cpp: + * sys/win32ipc/gstwin32ipcprotocol.h: + * sys/win32ipc/gstwin32ipcserver.cpp: + * sys/win32ipc/gstwin32ipcserver.h: + * sys/win32ipc/gstwin32ipcvideosink.cpp: + * sys/win32ipc/gstwin32ipcvideosink.h: + * sys/win32ipc/gstwin32ipcvideosrc.cpp: + * sys/win32ipc/gstwin32ipcvideosrc.h: + * sys/win32ipc/meson.build: + win32ipc: Add baseclass implementation + Extract common logic from video elements + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10296> + +2025-12-04 17:37:11 +0900 Seungha Yang <seungha@centricular.com> + + * sys/win32ipc/gstwin32ipcclient.cpp: + * sys/win32ipc/gstwin32ipcmemory.cpp: + * sys/win32ipc/gstwin32ipcmemory.h: + * sys/win32ipc/gstwin32ipcmmf.cpp: + * sys/win32ipc/gstwin32ipcmmf.h: + * sys/win32ipc/gstwin32ipcprotocol.cpp: + * sys/win32ipc/gstwin32ipcprotocol.h: + * sys/win32ipc/gstwin32ipcserver.cpp: + * sys/win32ipc/gstwin32ipcvideosink.cpp: + win32ipc: Use SIZE_T for allocation size representation + ... instead of UINT32. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10296> + +2025-12-04 15:51:08 +0900 Seungha Yang <seungha@centricular.com> + + * sys/win32ipc/gstwin32ipcserver.cpp: + * sys/win32ipc/gstwin32ipcserver.h: + * sys/win32ipc/gstwin32ipcvideosink.cpp: + win32ipcvideosink: Enhance EOS sequence + Fully drain queued buffers on EOS + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10296> + +2025-12-04 12:49:28 +0900 Seungha Yang <seungha@centricular.com> + + * sys/win32ipc/gstwin32ipcclient.cpp: + * sys/win32ipc/gstwin32ipcclient.h: + * sys/win32ipc/gstwin32ipcvideosrc.cpp: + win32ipcvideosrc: Add leaky-type and {max, current-level}-buffers properties + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10296> + +2025-12-03 19:28:54 +0900 Seungha Yang <seungha@centricular.com> + + * sys/win32ipc/gstwin32ipc.cpp: + * sys/win32ipc/gstwin32ipc.h: + * sys/win32ipc/gstwin32ipcserver.cpp: + * sys/win32ipc/gstwin32ipcserver.h: + * sys/win32ipc/gstwin32ipcvideosink.cpp: + * sys/win32ipc/meson.build: + win32ipcvideosink: Add leaky-type and {max, current-level}-buffers properties + Allows blocking streaming thread when clients are not consuming + incoming buffers fast enough + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10296> + +2025-12-03 17:32:36 +0900 Seungha Yang <seungha@centricular.com> + + * sys/win32ipc/gstwin32ipcmemory.cpp: + win32ipcmemory: Refactor memory allocator + Use mutex/cond instead of GstPoll + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10296> + +2025-12-02 19:16:06 +0900 Seungha Yang <seungha@centricular.com> + + * sys/win32ipc/gstwin32ipcclient.cpp: + * sys/win32ipc/gstwin32ipcclient.h: + * sys/win32ipc/gstwin32ipcmemory.cpp: + * sys/win32ipc/gstwin32ipcmemory.h: + * sys/win32ipc/gstwin32ipcmmf.cpp: + * sys/win32ipc/gstwin32ipcmmf.h: + * sys/win32ipc/gstwin32ipcprotocol.cpp: + * 
sys/win32ipc/gstwin32ipcprotocol.h: + * sys/win32ipc/gstwin32ipcserver.cpp: + * sys/win32ipc/gstwin32ipcserver.h: + * sys/win32ipc/gstwin32ipcutils.cpp: + * sys/win32ipc/gstwin32ipcvideosink.cpp: + * sys/win32ipc/gstwin32ipcvideosrc.cpp: + * sys/win32ipc/meson.build: + * sys/win32ipc/protocol/win32ipcmmf.cpp: + * sys/win32ipc/protocol/win32ipcmmf.h: + * sys/win32ipc/protocol/win32ipcpipeclient.cpp: + * sys/win32ipc/protocol/win32ipcpipeclient.h: + * sys/win32ipc/protocol/win32ipcpipeserver.cpp: + * sys/win32ipc/protocol/win32ipcpipeserver.h: + * sys/win32ipc/protocol/win32ipcprotocol.cpp: + * sys/win32ipc/protocol/win32ipcprotocol.h: + * sys/win32ipc/protocol/win32ipcutils.cpp: + * sys/win32ipc/protocol/win32ipcutils.h: + win32ipc: Rewrite plugin + Pre-work to support a generic IPC element in addition to the existing + raw-video specific elements. + Summary of changes: + * Use an unnamed MMF handle with DuplicateHandle() instead of a named + handle to prevent unintended access from other processes via + name-based handle opening + * Refactor server/client implementation based on the D3D12 IPC element + design + * Replace the previous custom data struct with GstCaps and GstMeta to + describe video format and memory layout + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10296> + +2025-07-03 14:35:21 +0200 Olivier Crête <olivier.crete@collabora.com> + + * gst-libs/gst/analytics/gstanalyticsobjectdetectionmtd.c: + * gst-libs/gst/analytics/gstanalyticssegmentationmtd.c: + analytics segmentation: Implement video matrix meta transformation + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9497> + +2025-05-18 11:59:08 +0200 Olivier Crête <olivier.crete@collabora.com> + + * gst-libs/gst/analytics/gstanalyticsmeta.c: + * gst-libs/gst/analytics/gstanalyticsobjectdetectionmtd.c: + analytics od mtd: Implement matrix meta transformation + Part-of: 
<https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9497> 2025-12-05 09:19:34 +0200 Sebastian Dröge <sebastian@centricular.com> @@ -184,13 +1056,252 @@ gap event. This pool will be deallocated again shortly afterwards, just to create a new pool with the correct configuration. Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4779 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10302> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10297> + +2025-12-04 20:10:59 +0100 Mathieu Duponchelle <mathieu@centricular.com> + + * docs/plugins/gst_plugins_cache.json: + gst-plugins-bad: update plugins cache + - correction for yolotensordec doc + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9109> + +2025-11-28 14:17:07 -0600 Olivier Crête <olivier.crete@collabora.com> + + * gst/tensordecoders/gstyolosegtensordecoder.c: + * gst/tensordecoders/gstyolotensordecoder.c: + tensordecoder: Add caps to Yolo decoders + - Tensor caps added + - Rename yolotensordec to yolov8tensordec. This reflects that this tensor is different for older versions. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9109>
+
+2025-11-28 12:17:29 -0600 Olivier Crête <olivier.crete@collabora.com>
+
+ * gst/tensordecoders/gstclassifiertensordecoder.c:
+ * gst/tensordecoders/gstioutracker.c:
+ * gst/tensordecoders/gstssdobjectdetector.c:
+ * gst/tensordecoders/gstyolosegtensordecoder.c:
+ * gst/tensordecoders/gstyolotensordecoder.c:
+ tensordecoder: Fix metadata long name to be more explicit
+ - For all tensor decoders
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9109>
+
+2025-11-22 21:35:17 -0500 Daniel Morin <daniel.morin@collabora.com>
+
+ * gst/tensordecoders/gstclassifiertensordecoder.c:
+ * gst/tensordecoders/gstclassifiertensordecoder.h:
+ tensordecoder: refactor classifiertensordecoder to support both softmaxed and non-softmaxed tensors
+ This architecture separates negotiation concerns from processing, improving efficiency and code clarity.
+ - Add constraints on tensor caps
+ - Add validation between the loaded model and the labels file
+ - Make classifier generic (not assuming 1000 classes)
+ - Fix mismatched return type
+ - Improve error handling in classification
+ - Warn if no label found from labels file
+ - Separate tensor groups: Define GROUP_ID_CLASSIFICATION and
+ GROUP_ID_CLASSIFICATION_SOFTMAXED as distinct tensor groups, as they represent
+ different tensor formats that don't belong to the same group.
+ - Enhanced caps template: Updated sink pad caps to support both tensor formats
+ using a LIST, allowing the decoder to negotiate with either type.
+ - Cache negotiation result: Added do_softmax member variable to cache whether
+ softmax processing is needed, determined once during set_caps based on the
+ negotiated tensor-id rather than checking at runtime.
+ - Renamed softmax_res to postproc_result: Better reflects dual usage for both
+ softmax computation and uint8-to-float conversion.
Always allocated since
+ it's needed for uint8 conversion even when tensors are pre-softmaxed.
+ - Optimized tensor retrieval: get_tensor() now only searches for the specific
+ tensor-id that was negotiated, rather than trying both types.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9109>
+
+2025-11-22 21:21:50 -0500 Daniel Morin <daniel.morin@collabora.com>
+
+ * gst/tensordecoders/gstssdobjectdetector.c:
+ tensordecoder: refactor ssd tensordecoder
+ - ssd:Add constraints on tensor caps
+ - ssd:Update ssd tensor-decoder to follow ids from tensor id registry
+ - ssd:Explicit batch of 1: dims=<(int)1, (int)1,max>
+ - ssd:Implicit batch of 1: dims=<(int)1,max>
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9109>
+
+2025-11-22 21:19:01 -0500 Daniel Morin <daniel.morin@collabora.com>
+
+ * gst/tensordecoders/gstfacedetectortensordecoder.c:
+ tensordecoder: refactor facedetector tensordecoder
+ - Add constraints on tensor caps
+ - Downgrade the non-accelerated tensordecoder's rank to secondary
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9109>
+
+2025-11-28 11:42:07 -0500 Daniel Morin <daniel.morin@collabora.com>
+
+ * gst/tensordecoders/gsttensordecodebin.c:
+ * gst/tensordecoders/gsttensordecodebin.h:
+ * gst/tensordecoders/gsttensordecoders.c:
+ * gst/tensordecoders/meson.build:
+ tensordecoder: Add tensordecodebin
+ - Adding a tensordecodebin able to auto-plug the correct tensor-decoder
+ - Add tensordecodebin to tensordecoders plugin
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9109>
+
+2025-10-14 14:09:30 -0400 Daniel Morin <daniel.morin@collabora.com>
+
+ * ext/tflite/gsttfliteinference.c:
+ tflite: adding tensor caps
+ - tflite_inference now generates tensor caps.
+ - name model_incaps and model_outcaps
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9109>
+
+2025-10-14 00:12:53 -0400 Daniel Morin <daniel.morin@collabora.com>
+
+ * ext/tflite/modelinfo.c:
+ * ext/tflite/modelinfo.h:
+ analytics: add method to retrieve group-id from modelinfo
+ - Adding modelinfo_get_quark_group_id
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9109>
+
+2025-11-26 20:53:35 -0500 Daniel Morin <daniel.morin@collabora.com>
+
+ * ext/onnx/gstonnxinference.c:
+ onnx: Add tensor capabilities support to gstonnxinference
+ This commit enhances the gstonnxinference element by introducing comprehensive
+ tensor capabilities support, enabling better negotiation between the inference
+ element and downstream tensor decoders.
+ Key Changes:
+ - Implement tensor capabilities description mechanism
+ - Improve caps negotiation and propagation
+ Detailed Modifications:
+ - Disable passthrough mode to control tensor caps propagation
+ - Extract group-id from ONNX model metadata
+ - Create tensor capabilities structure with:
+ * Dimensions order (row-major)
+ * Tensor dimensions
+ * Tensor identifier
+ * Data type information
+ - Build GstSet of tensor capabilities under the group-id
+ - Utilize gst_tensor_meta_set() for robust meta handling
+ - Update transform_caps() to handle tensor capabilities in SINK and SRC pads
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9109>
+
+2021-10-22 11:09:07 -0500 Diego Nieto <dnieto@fluendo.com>
+
+ * docs/plugins/gst_plugins_cache.json:
+ * ext/meson.build:
+ * ext/vmaf/gstvmafelement.c:
+ * ext/vmaf/gstvmafelement.h:
+ * ext/vmaf/gstvmafplugin.c:
+ * ext/vmaf/meson.build:
+ * meson.options:
+ * tests/check/elements/vmaf.c:
+ * tests/check/gst-plugins-bad.supp:
+ * tests/check/meson.build:
+ vmaf: add new element to calculate VMAF scores
+ Introduces the `vmaf` element, which calculates video quality scores by
+ comparing a
reference stream against a distorted stream.
+ The plugin is coded against the libvmaf 2.0 API. As such it requires that
+ libvmaf >= 2.0 be installed for the plugin to compile.
+ Scores can be retrieved via 2 methods:
+ 1. Message Bus: Emits messages containing "VMAF.type" (distinguished as
+ "MESSAGE_TYPE_FRAME" or "MESSAGE_TYPE_POOLED").
+ 2. File Output: Writes results to disk in CSV, JSON, or XML formats by
+ setting the "results-filename" property.
+ Co-authored-by: Andoni Morales Alastruey <amorales@fluendo.com>
+ Co-authored-by: Casey Bateman <Casey.Bateman@hudl.com>
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9757>
+
+2025-12-04 10:34:03 +0200 Sebastian Dröge <sebastian@centricular.com>
+
+ * ext/closedcaption/gstcea708decoder.c:
+ * ext/closedcaption/gstcea708decoder.h:
+ * ext/closedcaption/gstceaccoverlay.c:
+ * ext/closedcaption/gstceaccoverlay.h:
+ * ext/meson.build:
+ * gst/closedcaption/bcd.h:
+ * gst/closedcaption/bit_slicer.c:
+ * gst/closedcaption/bit_slicer.h:
+ * gst/closedcaption/ccutils.c:
+ * gst/closedcaption/ccutils.h:
+ * gst/closedcaption/decoder.c:
+ * gst/closedcaption/decoder.h:
+ * gst/closedcaption/gstcccombiner.c:
+ * gst/closedcaption/gstcccombiner.h:
+ * gst/closedcaption/gstccconverter.c:
+ * gst/closedcaption/gstccconverter.h:
+ * gst/closedcaption/gstccextractor.c:
+ * gst/closedcaption/gstccextractor.h:
+ * gst/closedcaption/gstcea608mux.c:
+ * gst/closedcaption/gstcea608mux.h:
+ * gst/closedcaption/gstclosedcaption.c:
+ * gst/closedcaption/gstcodecccinserter.c:
+ * gst/closedcaption/gstcodecccinserter.h:
+ * gst/closedcaption/gsth264ccextractor.c:
+ * gst/closedcaption/gsth264ccextractor.h:
+ * gst/closedcaption/gsth264ccinserter.c:
+ * gst/closedcaption/gsth264ccinserter.h:
+ * gst/closedcaption/gsth264reorder.c:
+ * gst/closedcaption/gsth264reorder.h:
+ * gst/closedcaption/gsth265ccextractor.c:
+ * gst/closedcaption/gsth265ccextractor.h:
+ * gst/closedcaption/gsth265ccinserter.c:
+ *
gst/closedcaption/gsth265ccinserter.h: + * gst/closedcaption/gsth265reorder.c: + * gst/closedcaption/gsth265reorder.h: + * gst/closedcaption/gstline21dec.c: + * gst/closedcaption/gstline21dec.h: + * gst/closedcaption/gstline21enc.c: + * gst/closedcaption/gstline21enc.h: + * gst/closedcaption/hamm.h: + * gst/closedcaption/io-sim.c: + * gst/closedcaption/io-sim.h: + * gst/closedcaption/macros.h: + * gst/closedcaption/meson.build: + * gst/closedcaption/misc.h: + * gst/closedcaption/raw_decoder.c: + * gst/closedcaption/raw_decoder.h: + * gst/closedcaption/sampling_par.c: + * gst/closedcaption/sampling_par.h: + * gst/closedcaption/sliced.h: + * gst/meson.build: + * meson.options: + * tests/check/meson.build: + closedcaption: Remove cc708overlay + It was deprecated and cea708overlay from gst-plugins-rs is the replacement. + Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4207 + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10285> 2025-12-04 12:57:26 +0800 Yun Liu <yun.m.liu@intel.com> * gst-libs/gst/analytics/meson.build: analytics: Fix build on MSVC by using libm dependency - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10288> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10283> + +2025-12-02 17:52:34 +0200 Sebastian Dröge <sebastian@centricular.com> + + * gst/midi/midiparse.c: + midiparse: Fix a couple of potential out-of-bounds reads + Also use an unsigned integer for parsing variable length integers as shifting + bits out of the sign bit is undefined behaviour. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10276> + +2022-04-01 05:54:29 +1100 Jan Schmidt <jan@centricular.com> + + * gst/mpegtsmux/gstbasetsmux.c: + mpegtsmux: Fix potential deadlock changing pmt-interval + The object lock is the innermost lock. Don't take + the mux->lock while holding it. 
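The mpegtsmux deadlock fix above hinges on lock ordering: the object lock is the innermost lock, so it must never be held while taking the muxer lock. A minimal pthreads sketch of the corrected ordering (names hypothetical, not the actual mpegtsmux code):

```c
#include <assert.h>
#include <pthread.h>

/* Hypothetical illustration of the lock hierarchy described above:
 * mux_lock is the outer lock, object_lock the innermost one.
 * Correct: take mux_lock first, then object_lock.  The bug pattern was
 * the reverse order in one code path, which can deadlock against a
 * thread locking in the documented order. */
static pthread_mutex_t mux_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t object_lock = PTHREAD_MUTEX_INITIALIZER;
static int pmt_interval = 0;

static void
set_pmt_interval (int val)
{
  pthread_mutex_lock (&mux_lock);       /* outer lock first ... */
  pthread_mutex_lock (&object_lock);    /* ... innermost lock last */
  pmt_interval = val;
  pthread_mutex_unlock (&object_lock);
  pthread_mutex_unlock (&mux_lock);
}

static int
get_pmt_interval (void)
{
  /* readers only need the innermost lock */
  pthread_mutex_lock (&object_lock);
  int val = pmt_interval;
  pthread_mutex_unlock (&object_lock);
  return val;
}
```

The same rule generalizes: pick one documented order for every pair of locks and follow it on all paths.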
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10261> + +2025-11-28 15:12:37 +0200 Vivia Nikolaidou <vivia@ahiru.eu> + + * gst/mxf/mxfmpeg.c: + mxfmpeg: Add custom Sony picture essence coding UL + Sony seems to use 06.0e.2b.34.04.01.01.03.0e.06.41.02 as prefix for + their custom picture essence codings, and specifically the MPEG one + potentially uses the same semantics. + This comes from a file with: + picture essence coding 06.0e.2b.34.04.01.01.03.0e.06.41.02.01.04.03.01 + codec 00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00 + essence container 06.0e.2b.34.04.01.01.02.0d.01.03.01.02.04.60.01 + which seemed to be XDCAM, aka decodable as MPEG-2. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10219> 2025-10-08 02:31:50 -0400 Doug Nazar <nazard@nazar.ca> @@ -199,7 +1310,7 @@ * ext/curl/gstcurlhttpsink.c: * ext/curl/gstcurlhttpsrc.c: curl: Ensure set_opt() is called with a long value - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10274> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5537> 2025-03-16 20:23:48 -0400 Doug Nazar <nazard@nazar.ca> @@ -208,13 +1319,13 @@ We must stop any existing transfer when seeking or we may push buffers from the old request that downstream isn't expecting, causing decode errors. 
Fixes: https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/issues/579
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10274>
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5537>

2025-03-16 20:17:58 -0400 Doug Nazar <nazard@nazar.ca>

 * ext/curl/gstcurlhttpsrc.c:
 curl: Cleanup logging of received headers
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10274>
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5537>

2021-05-11 23:54:49 -0400 Doug Nazar <nazard@nazar.ca>

@@ -224,29 +1335,14 @@
 length transferred. The Content-Range header should report the total
 length if known. The current code would inconsistently use either or
 race and use the wrong length.
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10274>
-
-2025-06-12 11:45:33 -0300 L. E. Segovia <amy@amyspark.me>
-
- * ext/curl/gstcurlhttpsrc.c:
- curl: Recover missing comment
- See https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8974#note_2955585
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10274>
-
-2025-05-13 12:22:08 +0000 L. E.
Segovia <amy@centricular.com> - - * ext/curl/gstcurlhttpsrc.c: - curl: Fix wrong format specifier for macOS - > ../ext/curl/gstcurlhttpsrc.c:1331:11: error: format specifies type - > unsigned long long' but the argument has type 'curl_off_t' (aka 'long') -Werror,-Wformat - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10274> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5537> 2021-05-11 23:58:54 -0400 Doug Nazar <nazard@nazar.ca> * ext/curl/gstcurlhttpsrc.c: * ext/curl/gstcurlhttpsrc.h: curl: remove unused content_type field - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10274> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5537> 2021-05-08 21:23:46 -0400 Doug Nazar <nazard@nazar.ca> @@ -256,52 +1352,117 @@ We need to stop the current transfer and start a new one if the uri was changed. Also fix the 'test_range_get' to do 20 requests, instead of one request and 19 seeks past EOS. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10274> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5537> -2025-12-02 17:52:34 +0200 Sebastian Dröge <sebastian@centricular.com> +2025-12-01 18:04:15 +0200 Sebastian Dröge <sebastian@centricular.com> - * gst/midi/midiparse.c: - midiparse: Fix a couple of potential out-of-bounds reads - Also use an unsigned integer for parsing variable length integers as shifting - bits out of the sign bit is undefined behaviour. 
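The midiparse entry above switches to an unsigned accumulator when parsing variable-length integers, and bounds-checks the input. A standalone sketch of that parsing pattern for MIDI variable-length quantities (simplified, not the actual midiparse code):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Parse a MIDI variable-length quantity: 7 data bits per byte, MSB set
 * on all bytes except the last.  Using an unsigned accumulator avoids
 * the undefined behaviour of shifting bits into or out of the sign bit
 * of a signed integer, and the length check avoids reading past the
 * end of the buffer. */
static int
parse_vlq (const uint8_t * data, size_t len, uint32_t * out,
    size_t * consumed)
{
  uint32_t value = 0;
  size_t i;

  /* a valid VLQ for a 32-bit value is at most 4 bytes long */
  for (i = 0; i < len && i < 4; i++) {
    value = (value << 7) | (data[i] & 0x7f);
    if ((data[i] & 0x80) == 0) {
      *out = value;
      *consumed = i + 1;
      return 1;                 /* success */
    }
  }
  return 0;                     /* truncated or over-long quantity */
}
```

For example, the two-byte sequence `0x81 0x48` decodes to 200, and a lone `0x81` is rejected as truncated instead of being read past the buffer end.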
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10280> + * gst/mxf/mxfvanc.c: + mxfmux: Create empty edit units for VANC packets without content or gap events + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10256> -2025-11-28 15:12:37 +0200 Vivia Nikolaidou <vivia@ahiru.eu> +2025-12-01 13:00:14 +0200 Sebastian Dröge <sebastian@centricular.com> - * gst/mxf/mxfmpeg.c: - mxfmpeg: Add custom Sony picture essence coding UL - Sony seems to use 06.0e.2b.34.04.01.01.03.0e.06.41.02 as prefix for - their custom picture essence codings, and specifically the MPEG one - potentially uses the same semantics. - This comes from a file with: - picture essence coding 06.0e.2b.34.04.01.01.03.0e.06.41.02.01.04.03.01 - codec 00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00 - essence container 06.0e.2b.34.04.01.01.02.0d.01.03.01.02.04.60.01 - which seemed to be XDCAM, aka decodable as MPEG-2. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10271> + * gst/mpegtsdemux/tsdemux.c: + tsdemux: Consider DTS in private streams as audio instead of private + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10249> -2022-04-01 05:54:29 +1100 Jan Schmidt <jan@centricular.com> +2025-12-01 12:53:46 +0200 Sebastian Dröge <sebastian@centricular.com> - * gst/mpegtsmux/gstbasetsmux.c: - mpegtsmux: Fix potential deadlock changing pmt-interval - The object lock is the innermost lock. Don't take - the mux->lock while holding it. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10272> + * gst/mpegtsdemux/tsdemux.c: + streams: Add GST_STREAM_TYPE_METADATA for metadata streams + And handle it inside parsebin and tsdemux. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10249> + +2025-11-26 14:28:33 +0900 jeongmin kwak <jeongmin.kwak@lge.com> + + * ext/vulkan/vkviewconvert.c: + vulkan: zero-initialize ViewUpdate before use + Ensure ViewUpdate struct is fully initialized to avoid copying + uninitialized fields when writing to uniform buffer. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10200> + +2025-12-01 13:57:20 +0900 Seungha Yang <seungha@centricular.com> + + asiodeviceprovider: Fix deadlock on stop + Ensure that main loop is fully running during start() to avoid + below deadlock sequence + * GstDeviceProvider::start() spawns a background thread + * GstDeviceProvider::stop() is called before the background thread + actually starts running the main loop + * g_main_loop_quit() is invoked, but since the main loop has not + started yet, it has no effect + * stop() waits for the background thread to join + * The background thread eventually starts the main loop on the + background thread + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10245> -2025-12-01 17:32:18 +0000 Tim-Philipp Müller <tim@centricular.com> +2025-11-30 19:22:03 +0100 David Maseda Neira <david.maseda@cinfo.es> - * meson.build: - Back to development after 1.26.9 + * sys/nvcodec/gstnvh264encoder.cpp: + * sys/nvcodec/gstnvh265encoder.cpp: + nvcodec: Enable num-slices property on nvh26{4,5}enc to force number of output slices + Adds a new num-slices property (range 0-32, default 0) to both nvh264enc + and nvh265enc encoders to control the number of slices per frame. This + ensures compatibility with hardware decoders that require exactly 1 + slice per NAL unit. 
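The asiodeviceprovider fix above makes start() block until the worker loop is actually running, so a later stop() cannot race it. A minimal pthreads sketch of that handshake (names hypothetical; the condition variable loop stands in for GLib's g_main_loop_run()/g_main_loop_quit()):

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Hypothetical sketch of the start()/stop() handshake described above:
 * start() must not return until the worker has signalled that it is
 * really running, otherwise stop() can request a quit that the
 * not-yet-started loop never sees. */
typedef struct
{
  pthread_mutex_t lock;
  pthread_cond_t cond;
  int running;
  int quit;
  pthread_t thread;
} Provider;

static void *
loop_func (void *user_data)
{
  Provider *p = user_data;

  pthread_mutex_lock (&p->lock);
  p->running = 1;
  pthread_cond_broadcast (&p->cond);    /* unblock provider_start() */
  while (!p->quit)                      /* stand-in for the main loop */
    pthread_cond_wait (&p->cond, &p->lock);
  pthread_mutex_unlock (&p->lock);
  return NULL;
}

static void
provider_start (Provider * p)
{
  pthread_mutex_init (&p->lock, NULL);
  pthread_cond_init (&p->cond, NULL);
  p->running = 0;
  p->quit = 0;
  pthread_create (&p->thread, NULL, loop_func, p);
  pthread_mutex_lock (&p->lock);
  while (!p->running)                   /* wait until the loop is live */
    pthread_cond_wait (&p->cond, &p->lock);
  pthread_mutex_unlock (&p->lock);
}

static void
provider_stop (Provider * p)
{
  pthread_mutex_lock (&p->lock);
  p->quit = 1;                          /* stand-in for the quit request */
  pthread_cond_broadcast (&p->cond);
  pthread_mutex_unlock (&p->lock);
  pthread_join (p->thread, NULL);
}
```

Because start() waits on the same flag the worker sets, stop() can only ever observe a loop that is live and able to honour the quit request.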
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10242>

-=== release 1.26.9 ===

+2025-11-21 15:22:54 +0100 Robert Mader <robert.mader@collabora.com>
-2025-12-01 17:27:07 +0000 Tim-Philipp Müller <tim@centricular.com>

+ * ext/wayland/gstwaylandsink.c:
+ waylandsink: Promote set_caps debug print to info
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9991>
- * NEWS:
- * RELEASE:
- * gst-plugins-bad.doap:
- * meson.build:
- Release 1.26.9
+2025-11-03 16:11:56 -0500 Nicolas Dufresne <nicolas.dufresne@collabora.com>
+
+ * ext/gtk/gstgtkwaylandsink.c:
+ gtkwaylandsink: Add VideoCropMeta support
+ Implement support for the crop meta, this allows offloading the crop to the
+ compositor
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9991>
+
+2025-11-03 16:05:15 -0500 Nicolas Dufresne <nicolas.dufresne@collabora.com>
+
+ * ext/gtk/gstgtkwaylandsink.c:
+ gtkwaylandsink: De-duplicate frame copy code
+ Regardless of whether we have a Dumb or a SHM buffer, the copy code is identical. Factor this
+ code so it can be shared.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9991>
+
+2025-11-03 15:51:39 -0500 Nicolas Dufresne <nicolas.dufresne@collabora.com>
+
+ * ext/wayland/gstwaylandsink.c:
+ * ext/wayland/gstwaylandsink.h:
+ waylandsink: Add VideoCropMeta support
+ Implement support for the crop meta, this allows offloading the crop to the
+ compositor.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9991>
+
+2025-11-03 15:28:29 -0500 Nicolas Dufresne <nicolas.dufresne@collabora.com>
+
+ * ext/wayland/gstwaylandsink.c:
+ waylandsink: De-duplicate frame copy code
+ Regardless of whether we have a Dumb or a SHM buffer, the copy code is identical. Factor this
+ code so it can be shared.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9991>
+
+2025-11-03 15:34:47 -0500 Nicolas Dufresne <nicolas.dufresne@collabora.com>
+
+ * gst-libs/gst/wayland/gstwlwindow.c:
+ wayland: window: Add the ability to offload cropping
+ This adds a new method on the GstWlWindow that allows offloading a crop
+ rectangle.
+ Co-authored-by: Robert Mader <robert.mader@collabora.com>
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9991>
+
+2025-11-17 17:04:18 -0500 Nicolas Dufresne <nicolas.dufresne@collabora.com>
+
+ * gst-libs/gst/wayland/gstwlbuffer.c:
+ * gst-libs/gst/wayland/gstwlbuffer.h:
+ wayland: buffer: Add getters for video and crop meta
+ This will be used by the GstWlWindow to calculate the source rectangle to crop
+ the surface to.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9991>

2025-12-01 17:58:22 +0200 Sebastian Dröge <sebastian@centricular.com>

@@ -309,48 +1470,467 @@
 mxfmux: Fix memset usage
 This was supposed to clear the local, stack-allocated struct and not set
 another pointer to NULL in a complicated way.
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10255>
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10254>

-2025-11-20 12:42:48 -0500 Nicolas Dufresne <nicolas.dufresne@collabora.com>
+2025-11-26 14:17:14 +0900 jeongmin kwak <jeongmin.kwak@lge.com>

- * gst/unixfd/gstunixfdsrc.c:
- unixfdsrc: Keep dmabuf mapped
- The unixfdsrc has as equal use for CPU access and device access. Keeping the CPU
- address mapping is preferable, with not down side for device access.
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10164>
+ * ext/vulkan/vkcolorconvert.c:
+ vulkan: initialize YUVUpdateData before memcpy
+ Ensure YUVUpdateData is fully initialized to avoid copying
+ uninitialized fields when writing to uniform buffer.
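The two Vulkan fixes in this series (ViewUpdate and YUVUpdateData) both zero-initialize a struct before it is memcpy()'d into a uniform buffer, so that padding bytes and unset fields do not carry stack garbage. A minimal sketch of the pattern (struct and field names hypothetical):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical stand-in for the shader uniform data described above.
 * Assigning members one by one leaves struct padding uninitialized, so
 * memcpy()ing such a struct into a uniform buffer can copy stack
 * garbage.  Clearing the whole struct first avoids that. */
typedef struct
{
  float matrix[16];
  int width;                    /* padding may follow on some ABIs */
  long flags;
} UpdateData;

static void
fill_update_data (UpdateData * data)
{
  memset (data, 0, sizeof (*data));     /* the fix: clear everything first */
  data->matrix[0] = 1.0f;
  data->width = 640;
  data->flags = 1;
}
```

In C99 and later, `UpdateData data = { 0 };` at the declaration achieves the same for the named members, though `memset()` is the common choice when the struct is filled in a helper.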
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10199> -2025-11-20 12:41:10 -0500 Nicolas Dufresne <nicolas.dufresne@collabora.com> +2025-11-28 21:08:17 +0530 Nirbheek Chauhan <nirbheek@centricular.com> - * gst-libs/gst/va/gstvaallocator.c: - va: allocator: Keep dmabuf mapped - VA buffers are rarely access from CPU, but if this is someone use case, it is - preferable to keep the CPU address mapping around since this isn't cheap to - create each time we map/unmap the buffer. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10164> + * sys/applemedia/vtdec.c: + vtdec: Add lower resolution limits for h264, h265, av1 caps + In my testing, these are the lowest resolution values that + VideoToolbox can decode for each of these codecs. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9448> -2025-12-01 13:00:14 +0200 Sebastian Dröge <sebastian@centricular.com> +2025-07-24 21:54:59 +0200 Rafael Caricio <rcaricio@netflix.com> - * gst/mpegtsdemux/tsdemux.c: - tsdemux: Consider DTS in private streams as audio instead of private - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10251> + * sys/applemedia/helpers.h: + * sys/applemedia/vtdec.c: + * sys/applemedia/vtdec.h: + vtdec: Support AV1 hardware decoding + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9448> + +2025-10-27 14:19:34 +0000 Dominique Leroux <dominique.p.leroux@gmail.com> + + * sys/applemedia/vtdec.c: + * sys/applemedia/vtdec.h: + vtdec: add VP9 decode support + - Only profile 0 and 2 (10-bits) work. Others allow creating the + decompression session but cause per-frame failures, so I'm + explicitly disabling them in caps. + - 64x64 is the minimum supported resolution. Limiting with caps for + same reason as above. + - API to enable supplemental codecs for iOS appeared in 26.2. vp9 + doesn't work before this. 
+ - Lifting VTRegisterSupplementalVideoDecoderIfAvailableFunc via dlsym
+ on iOS in case GStreamer is built with an older SDK.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10175>

2025-11-21 15:48:05 -0500 Dominique Leroux <dominique.p.leroux@gmail.com>

 * sys/applemedia/vtdec.c:
 vtdec: Fix race condition in decoder draining. Fluster runs were unstable
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10231>
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10229>

-2025-10-15 06:49:03 +0530 Nirbheek Chauhan <nirbheek@centricular.com>
+2024-01-10 09:30:51 -0500 Daniel Morin <daniel.morin@collabora.com>

- * ext/sctp/sctpassociation.c:
- sctp: Fix GMutex leak
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10141>
+ * docs/plugins/gst_plugins_cache.json:
+ * gst/tensordecoders/gsttensordecoders.c:
+ * gst/tensordecoders/gstyolosegtensordecoder.c:
+ * gst/tensordecoders/gstyolosegtensordecoder.h:
+ * gst/tensordecoders/gstyolotensordecoder.c:
+ * gst/tensordecoders/gstyolotensordecoder.h:
+ * gst/tensordecoders/meson.build:
+ tensordecoders: Add tensor decoder element for yolo detection and segmentation models
+ A new set of tensor decoders that handle YOLO v5+ object detection and
+ YOLO v8+ segmentation.
+ `yolotensordecoder`: decodes tensor outputs (masks) from detection-only
+ models, e.g. yolov8s.onnx
+ `yolsegv8tensordecoder`: decodes tensor outputs (masks and logits) from
+ segmentation models, e.g. FastSAM or yolov8s-seg
+ Co-authored-by: Vineet Suryan <vineet.suryan@collabora.com>
+ Co-authored-by: Santosh Mahto <santosh.mahto@collabora.com>
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8652>

-2025-10-15 04:35:27 +0530 Nirbheek Chauhan <nirbheek@centricular.com>
+2025-11-26 13:32:51 -0500 Xavier Claessens <xclaessens@netflix.com>

- * sys/applemedia/vtenc.c:
- vtenc: Fix a leak when setting caps
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10141>
+ * gst/unixfd/gstunixfdsink.c:
+ unixfd: Fix pointer casting on 32bit arch
+ Fixes: #4766
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10212>
+
+2025-11-25 12:55:21 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
+
+ * sys/va/gstvaav1enc.c:
+ * sys/va/gstvah264enc.c:
+ * sys/va/gstvajpegenc.c:
+ * sys/va/gstvavp8enc.c:
+ * sys/va/gstvavp9enc.c:
+ va: simplify encoder's reconfig() virtual method
+ By using the new gst_va_encoder_open(), which closes it if a previous
+ incompatible configuration exists.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10115>
+
+2025-11-13 14:08:33 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
+
+ * sys/va/gstvaencoder.c:
+ * sys/va/gstvaencoder.h:
+ * sys/va/gstvah265enc.c:
+ vah265enc: use new VA encoder setup()/open_2() API
+ This patch also adds a refactored get_surface_alignment() in the
+ GstVaEncoder helper, which before created a synthetic VAConfig to get the
+ surface alignment.
+ Now, with the API split we can call get_surface_alignment() after setup() and
+ before open_2() without the need of a synthetic VAConfig.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10115>
+
+2025-11-13 12:24:54 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
+
+ * sys/va/gstvaencoder.c:
+ * sys/va/gstvaencoder.h:
+ vaencoder: split open() in setup() and open_2()
+ Still keep open() for backward compatibility as a composite of both new
+ functions.
+ setup() -> creates VA config
+ open_2() -> creates VA context
+ The reason to use open_2() is because there's a close() and we should keep the
+ API symmetry.
+ It adds an internal function is_setup() to do a fine check of what is needed by
+ every other method.
+ Fixes #3393
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10115>
+
+2025-11-13 11:55:34 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
+
+ * sys/va/gstvaencoder.c:
+ * sys/va/gstvaencoder.h:
+ vaencoder: add gst_va_encoder_set_coded_buffer_size()
+ So encoding elements can update the coded buffer size after opening the encoder
+ helper object.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10115>
+
+2025-11-12 18:06:18 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
+
+ * sys/va/gstvaencoder.c:
+ * sys/va/gstvaencoder.h:
+ vaencoder: allow lazy initialization for reconstruct buffers pool
+ Right now the open() method still creates the reconstruct buffer pool, but this
+ patch also allows its lazy creation.
+ The patch also adds a namespace for those member variables related only to
+ the reconstruct buffer pool.
+ And adds the function gst_va_encoder_set_reconstruct_pool_config(), that will
+ be used when another alternative to the open() member is added in future
+ commits.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10115>
+
+2025-11-26 14:41:35 -0500 Dominique Leroux <dominique.p.leroux@gmail.com>
+
+ * sys/applemedia/avfassetsrc.m:
+ avfassetsrc: Explain how invalid ts and position should interact
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10197>
+
+2025-11-26 12:18:26 -0500 Dominique Leroux <dominique.p.leroux@gmail.com>
+
+ * sys/applemedia/avfassetsrc.m:
+ avfassetsrc: Better invalid timestamp handling and reporting
+ Now, invalid CMTime coming from AVFoundation will be properly
+ converted to GST_CLOCK_TIME_NONE (for buffer timestamp, buffer
+ duration and overall duration). And we no longer consider an
+ invalid buffer timestamp as an irrecoverable error, but just log the
+ problem instead.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10197>
+
+2025-11-25 14:21:01 -0500 Dominique Leroux <dominique.p.leroux@gmail.com>
+
+ * sys/applemedia/avfassetsrc.m:
+ avfassetsrc: Prevent access to released CMSampleBuffer
+ When the video sample buffers obtained from AVAssetReaderTrackOutput
+ contain data with line padding that differs from what GStreamer expects
+ (checked with GST_VIDEO_INFO_PLANE_STRIDE in
+ gst_core_video_wrap_pixel_buffer in corevideobuffer.c),
+ gst_core_media_buffer_new ends up creating a new GstBuffer with proper
+ layout and copying the original buffer content into the new buffer.
+ This means the original CMSampleBuffer gets released before we get to
+ the point where we access it to extract time information. So the comment
+ saying "cmbuf is now retained by buf (in meta)" (in avfassetsrc.c
+ nextBuffer) was not always right. But it was helpful in making me want
+ to see whether this was always true.
+ The code was reformulated to avoid having to rely on side effects, while
+ preserving a single call to CFRelease.
The problem would have been more
+ obvious if there had been timestamp validation, so this is now
+ done. Frame duration will not systematically be valid (only the timestamp has
+ to be); therefore, invalid duration will not be used but will also not be
+ flagged as an error.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10197>
+
+2025-11-27 01:19:01 +0530 Nirbheek Chauhan <nirbheek@centricular.com>
+
+ * sys/asio/gstasiodeviceprovider.cpp:
+ * sys/asio/meson.build:
+ asio: Add compile-time detection for device monitoring support
+ Older versions of MinGW 32-bit, such as the one that we ship in
+ Cerbero, do not support CM_Register_Notification etc.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9663>
+
+2025-09-08 21:13:14 +0530 Nirbheek Chauhan <nirbheek@centricular.com>
+
+ * sys/wasapi2/gstwasapi2util.cpp:
+ wasapi2: Fix some typos
+ Also convert an informative LOG to INFO
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9663>
+
+2025-09-09 13:49:53 +0530 Nirbheek Chauhan <nirbheek@centricular.com>
+
+ * sys/asio/gstasiodeviceprovider.cpp:
+ asio: Add a separate debug category for the device provider
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9663>
+
+2025-09-09 09:05:44 +0530 Nirbheek Chauhan <nirbheek@centricular.com>
+
+ * sys/asio/gstasiodeviceprovider.cpp:
+ * sys/asio/meson.build:
+ asio: Implement device monitoring using USB events
+ ASIO doesn't provide a way to monitor for device registration
+ / disconnect, so we need to re-probe all the devices every time we get
+ a USB event.
+ We aggregate USB events by delaying the probe till 500ms have passed
+ since the last USB plug/unplug event.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9663> + +2024-07-24 09:11:49 +0800 Shengqi Yu (喻盛琪) <shengqi.yu@mediatek.com> + + * docs/plugins/gst_plugins_cache.json: + * gst/unixfd/gstunixfdsink.c: + unixfdsink: add and notify clients-number + 1. Add the client-number property. + 2. When the number of clients increases or decreases, notify the client-number property. + This way, the app can know the current number of client connections based on the notification information, + allowing it to perform other operations, such as setting the server pipeline to NULL state when + there are no client connections. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7224> + +2025-11-25 19:42:15 -0500 Xavier Claessens <xclaessens@netflix.com> + + * sys/aja/gstajasrc.cpp: + ajasrc: Rename variable to match gst_clock_is_system_monotonic() + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8257> + +2025-01-08 09:29:13 -0500 Xavier Claessens <xclaessens@netflix.com> + + * gst/unixfd/gstunixfdsink.c: + * gst/unixfd/gstunixfdsrc.c: + * sys/aja/gstajasrc.cpp: + * sys/d3d11/gstd3d11ipc.cpp: + * sys/d3d11/gstd3d11ipc.h: + * sys/d3d11/gstd3d11ipcsink.cpp: + * sys/d3d11/gstd3d11ipcsrc.cpp: + * sys/d3d12/gstd3d12ipc.cpp: + * sys/d3d12/gstd3d12ipc.h: + * sys/d3d12/gstd3d12ipcsink.cpp: + * sys/d3d12/gstd3d12ipcsrc.cpp: + * sys/nvcodec/gstcudaipc.cpp: + * sys/nvcodec/gstcudaipc.h: + * sys/nvcodec/gstcudaipcsink.cpp: + * sys/nvcodec/gstcudaipcsrc.cpp: + * sys/win32ipc/gstwin32ipcutils.cpp: + * sys/win32ipc/gstwin32ipcutils.h: + * sys/win32ipc/gstwin32ipcvideosink.cpp: + * sys/win32ipc/gstwin32ipcvideosrc.cpp: + GstClock: Add gst_clock_is_system_monotonic_clock + It was duplicated in many places and can be useful outside of GStreamer + as well. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8257> + +2025-06-20 13:12:06 +0530 santosh mahto <santoshbit2007@gmail.com> + + * docs/plugins/gst_plugins_cache.json: + * gst/tensordecoders/gstioutracker.c: + * gst/tensordecoders/gstioutracker.h: + * gst/tensordecoders/gsttensordecoders.c: + * gst/tensordecoders/meson.build: + * tests/check/elements/ioutracker.c: + * tests/check/meson.build: + tensordecoders: Implement IoU based tracker element + The `ioutracker` element tracks objects in video frames. + It uses object detection mtds to get the position and + detect the same object in the next frame based on + Intersection-over-Union (IoU). Based on this, it attaches a + tracking mtd to the buffer + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9274> + +2025-11-24 10:55:09 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * sys/va/gstvadisplay_priv.c: + vadisplay-priv: re-use log category from va plugin + Reuse this log category because it is created first. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10192> + +2025-11-19 13:47:24 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * sys/va/gstvaencoder.c: + vaencoder: remove extra call to get VA display + And specify the parameter type as VABufferType. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10192> + +2025-11-04 17:36:42 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * gst-libs/gst/codecparsers/gsth266parser.c: + codecparsers: h266parser: fix documentation and parameter check + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10180> + +2024-08-12 11:52:40 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * gst-libs/gst/codecs/gstav1decoder.c: + * gst-libs/gst/codecs/gstcodecpicture.c: + * gst-libs/gst/codecs/gstvp9decoder.c: + * gst-libs/gst/codecs/gstvp9statefulparser.c: + codecs: simple documentation fixes + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10180> + +2025-10-30 13:20:47 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * gst-libs/gst/vulkan/gstvkphysicaldevice.c: + vkphysicaldevice: fix GI documentation + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10180> + +2025-11-11 12:48:59 +0100 Sven Püschel <s.pueschel@pengutronix.de> + + * sys/uvcgadget/gstuvcsink.c: + uvcsink: add VYUY mapping + The mapping was copied from gst-plugins-good [1]. At the time of the copy it + missed the VYUY mapping, therefore add it now. 
+ [1] https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6037#note_2573767 + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10070> + +2025-01-21 13:32:02 -0500 Daniel Morin <daniel.morin@collabora.com> + + * docs/plugins/gst_plugins_cache.json: + * ext/analyticsoverlay/gstanalyticsoverlay.c: + * ext/analyticsoverlay/gstsegmentationoverlay.c: + * ext/analyticsoverlay/gstsegmentationoverlay.h: + * ext/analyticsoverlay/meson.build: + analyticsoverlay: add a segmentation overlay + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7149> + +2025-11-24 18:58:43 +0200 Sebastian Dröge <sebastian@centricular.com> + + * sys/decklink/gstdecklinkvideosink.cpp: + decklinkvideosink: Add some debug output for writing ancillary data + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10182> + +2025-11-19 15:01:40 -0600 Olivier Crête <olivier.crete@collabora.com> + + * gst/tensordecoders/gstssdobjectdetector.c: + ssdobjectdetector: Print class index + Print the class index; this helps when debugging label lists. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10143> + +2025-11-19 15:23:33 -0600 Olivier Crête <olivier.crete@collabora.com> + + * gst/tensordecoders/gstclassifiertensordecoder.c: + classifiertensordecoder: Print clearer message when setting wrong labels file + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10143> + +2025-11-19 15:23:10 -0600 Olivier Crête <olivier.crete@collabora.com> + + * ext/onnx/gstonnxinference.c: + onnxinference: Remove unused mean/stddev default values + The arrays are always set + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10143> + +2025-11-19 15:03:52 -0600 Olivier Crête <olivier.crete@collabora.com> + + * ext/onnx/gstonnxinference.c: + onnxinference: Read mean/stddev from Image.NominalPixelRange metadata + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10143> + +2025-11-19 15:02:56 -0600 Olivier Crête <olivier.crete@collabora.com> + + * ext/onnx/gstonnxinference.c: + * ext/tflite/gsttfliteinference.c: + tflite & onnxinference: Subtract means + This makes them work like everyone else. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10143> + +2025-11-18 18:13:07 -0600 Olivier Crête <olivier.crete@collabora.com> + + * ext/onnx/gstonnxinference.c: + onnxinference: Remove pointless arguments to macro + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10143> + +2025-11-18 18:12:12 -0600 Olivier Crête <olivier.crete@collabora.com> + + * ext/onnx/gstonnxinference.c: + onnxinference: Correctly pass planarity of input tensor + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10143> + +2025-11-18 16:11:28 -0600 Olivier Crête <olivier.crete@collabora.com> + + * ext/onnx/gstonnxinference.c: + onnxinference: Mark dimension meaning explicitly + This makes the code more correct, as it previously made some incorrect assumptions + in some places. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10143> + +2025-11-17 11:24:06 -0600 Olivier Crête <olivier.crete@collabora.com> + + * ext/onnx/gstonnxinference.c: + onnxinference: Put more error details from the ONNX Runtime + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10143> + +2025-11-15 13:28:50 -0600 Olivier Crête <olivier.crete@collabora.com> + + * ext/onnx/gstonnxinference.c: + onnxinference: Create environment before using more APIs + This call has the side effect of enabling the onnxrt logging system, + so we must do it earlier + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10143> + +2025-11-13 14:55:52 -0500 Olivier Crête <olivier.crete@collabora.com> + + * ext/onnx/gstonnxinference.c: + onnxinference: Split the ONNX runtime debug from the element's + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10143> + +2025-10-31 18:51:14 +0000 Olivier Crête <olivier.crete@collabora.com> + + * ext/onnx/gstonnxinference.c: + onnxinference: Avoid deadlocking on startup error + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10143> + +2025-10-31 18:51:02 +0000 Olivier Crête <olivier.crete@collabora.com> + + * ext/onnx/gstonnxinference.c: + onnxinference: Hard fail when selected provider is not available + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10143> + +2025-08-08 15:16:02 -0400 Olivier Crête <olivier.crete@collabora.com> + + * ext/onnx/gstonnxinference.c: + * ext/onnx/meson.build: + onnxinference: Remove explicit CPU execution provider setting + It's the default, and this avoids having to load the header file, + which isn't always installed in the same place. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10143> + +2025-02-15 12:45:05 +0000 Philippe Normand <philn@igalia.com> + + * ext/webrtc/gstwebrtcstats.c: + * gst-libs/gst/webrtc/ice.c: + * gst-libs/gst/webrtc/ice.h: + * gst-libs/gst/webrtc/icetransport.c: + * gst-libs/gst/webrtc/icetransport.h: + * gst-libs/gst/webrtc/nice/nice.c: + * gst-libs/gst/webrtc/nice/nicetransport.c: + * gst-libs/gst/webrtc/nice/niceutils.h: + * gst-libs/gst/webrtc/webrtc_fwd.h: + webrtc: ice: Add support for getting the selected candidate pair + Expose a `gst_webrtc_ice_transport_get_selected_candidate_pair()` function + corresponding to the RTCIceTransport spec's `getSelectedCandidatePair()`. See + also + https://w3c.github.io/webrtc-pc/#dom-rtcicetransport-getselectedcandidatepair + This new function should be used instead of `gst_webrtc_ice_get_selected_pair()` + which is now deprecated. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8484> + +2025-07-09 10:08:38 +0200 Rinat Zeh <rinat.zeh@i-rz.de> + + * ext/meson.build: + * ext/mpeghdec/gstmpeghdec.c: + * ext/mpeghdec/gstmpeghdec.h: + * ext/mpeghdec/meson.build: + * meson.options: + mpeghdec: MPEG-H Audio decoder plugin + MPEG-H Audio decoder plugin based on Fraunhofer GitHub MPEG-H + decoder implementation (https://github.com/Fraunhofer-IIS/mpeghdec) + Co-authored-by: Florian Kolbeck <florian.kolbeck@i-rz.de> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9361> 2025-11-21 10:59:14 +0100 Hyunjun Ko <zzoon@igalia.com> @@ -363,13 +1943,20 @@ on ANV requires over 30 memories. This fixes AV1 decoding on ANV and any other scenarios where video sessions require more than 16 memory allocations. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10185> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10169> 2025-11-20 14:04:59 +0100 Hyunjun Ko <zzoon@igalia.com> * gst-libs/gst/vulkan/gstvkvideo-private.c: vkvideo-private: fix to use correct index for picking a memory type - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10185> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10169> + +2025-11-21 11:43:13 +0200 Sebastian Dröge <sebastian@centricular.com> + + * sys/decklink/gstdecklinkvideosrc.cpp: + decklinkvideosrc: Don't add parity bits to the line number in GstAncillaryMeta + It's an 11 bit value without parity bits. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10167> 2025-11-21 11:41:41 +0200 Sebastian Dröge <sebastian@centricular.com> @@ -377,14 +1964,42 @@ ajasink: Remove parity bits from ancillary meta DID/SDID before passing further The GstAncillaryMeta contains the parity bits but the AJA SDK expects them without. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10171> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10167> 2025-11-20 13:48:16 +0100 Hyunjun Ko <zzoon@igalia.com> * ext/vulkan/vkh265dec.c: vkh265dec: fix a typo This fixes also H265 decoding with LTR on ANV. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10170> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10166> + +2025-11-20 12:42:48 -0500 Nicolas Dufresne <nicolas.dufresne@collabora.com> + + * gst/unixfd/gstunixfdsrc.c: + unixfdsrc: Keep dmabuf mapped + The unixfdsrc is used equally for CPU access and device access. Keeping the CPU + address mapping is preferable, with no downside for device access. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10153> + +2025-11-20 12:41:10 -0500 Nicolas Dufresne <nicolas.dufresne@collabora.com> + + * gst-libs/gst/va/gstvaallocator.c: + va: allocator: Keep dmabuf mapped + VA buffers are rarely accessed from the CPU, but if that is someone's use case, it is + preferable to keep the CPU address mapping around, since it isn't cheap to + create each time we map/unmap the buffer. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10153> + +2025-11-07 19:29:24 -0300 L. E. Segovia <amy@centricular.com> + + * ext/dts/gstdtsdec.c: + * ext/dts/meson.build: + gst: implement Orc-less cpuid routine for selecting asm routines + This commit removes the use of Orc's default target machinery as a way + to do CPUID detection on x86 and Arm. Instead I port xsimd's CPU + detection routine to C, cleaning up the instruction sets we don't use, + and also adding support for GCC/Clang's cpuid and xgetbv builtins. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10150> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6380> + +2025-10-15 06:49:03 +0530 Nirbheek Chauhan <nirbheek@centricular.com> + + * ext/sctp/sctpassociation.c: + sctp: Fix GMutex leak + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9846> + +2025-10-15 04:35:27 +0530 Nirbheek Chauhan <nirbheek@centricular.com> + + * sys/applemedia/vtenc.c: + vtenc: Fix a leak when setting caps + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9846> + +2025-11-04 23:29:40 +0900 Seungha Yang <seungha@centricular.com> + + * gst/audiomixmatrix/gstaudiomixmatrix.c: + * gst/audiomixmatrix/gstaudiomixmatrix.h: + audiomixmatrix: Add sparse matrix LUT optimization + Use precomputed LUTs for non-zero coefficients instead of + blindly traversing all input/output channel combinations + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10002> + +2025-11-17 16:25:06 +0200 Sebastian Dröge <sebastian@centricular.com> + + * docs/plugins/gst_plugins_cache.json: + * sys/decklink/gstdecklinkvideosink.cpp: + * sys/decklink/gstdecklinkvideosink.h: + decklinkvideosink: Add support for outputting GstAncillaryMeta + This adds a new output-vanc boolean property. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10111> + +2025-11-17 16:16:50 +0200 Sebastian Dröge <sebastian@centricular.com> + + * sys/decklink/gstdecklinkvideosink.cpp: + * sys/decklink/gstdecklinkvideosink.h: + decklinkvideosink: Use GstVecDeque instead of GQueue for the pending frames + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10111> + +2025-11-13 15:14:50 +0200 Sebastian Dröge <sebastian@centricular.com> + + * docs/plugins/gst_plugins_cache.json: + * sys/decklink/gstdecklinkvideosrc.cpp: + * sys/decklink/gstdecklinkvideosrc.h: + decklinkvideosrc: Add support for outputting all VANC via GstAncillaryMeta + This adds a new output-vanc boolean property. + As part of this, now all valid VANC lines are always checked for interesting + VANC. Previously we cached where CC or AFD/Bar was found and first checked that + line. Keeping this would complicate the code considerably, and here checking all + lines takes less than 1ms. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10111> 2025-11-05 15:37:25 +0100 Jordan Yelloz <jordan.yelloz@collabora.com> * gst-libs/gst/play/gstplay.c: gstplay: Fixed wrong initial configuration - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10122> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10110> + +2025-11-13 21:12:20 +0530 Sanchayan Maity <sanchayan@centricular.com> + + * docs/plugins/gst_plugins_cache.json: + * gst/mxf/mxfvanc.c: + * gst/mxf/mxfvanc.h: + mxfvanc: Add support for non-closed-caption VANC + - Extends mxfdemux with support for outputting VANC (ST436M) essence + tracks as ST2038 streams instead of extracting closed captions internally. + - Extends mxfmux with support for consuming ST2038 streams for outputting + VANC (ST436M) essence tracks instead of only supporting closed captions. 
+ To support ST2038 instead of the earlier closed captions, we introduce a + breaking change to the caps handling on the pad. We also now support both + 8 and 10-bit VANC data when reading from MXF. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10087> 2025-11-17 10:24:17 +0900 dongjoo.kim <dongjoo.kim@lge.com> @@ -408,7 +2088,147 @@ In this section, the p value cannot be NULL, so it is dead code. However, if you refer to the code in mxf_ffv1_get_track_wrapping, you can see that you should check desc, not p. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10116> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10095> + +2025-11-18 11:03:28 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * sys/va/gstvaencoder.c: + vaencoder: add validation of the format's chroma + gst_va_encoder_open() receives as parameters a GStreamer video format and its + chroma. Nevertheless, the chroma can be derived from the format by using + gst_va_chroma_from_video_format(). + Instead of removing the spurious method parameter, this patch validates that + both coincide. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10086> + +2025-10-06 20:46:10 +0900 Seungha Yang <seungha@centricular.com> + + * gst-libs/gst/hip/gsthip-interop.cpp: + * gst-libs/gst/hip/gsthipdevice.cpp: + * gst-libs/gst/hip/gsthipevent.cpp: + * gst-libs/gst/hip/gsthipmemory.cpp: + * gst-libs/gst/hip/gsthipstream.cpp: + * gst-libs/gst/hip/meson.build: + hip: Generate gir files + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9804> + +2025-11-17 14:50:21 +0900 Seungha Yang <seungha@centricular.com> + + * gst-libs/gst/d3d12/gstd3d12converter.cpp: + d3d12converter: Apply background color even without mipmapping + Fix missing background color update for UV remapping when mipmapping + is disabled + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10096> + +2025-11-03 11:41:36 +0900 jeongmin kwak <jeongmin.kwak@lge.com> + + * ext/smoothstreaming/gstmssdemux.c: + mssdemux: Clarify pad name cleanup in _create_pad() + Refactored _create_pad() to always free the pad name after the pad creation attempt. + No actual leak existed; this change makes cleanup explicit and improves readability + after a static analyzer warning. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9982> + +2025-11-13 09:14:59 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * sys/va/gstvaav1enc.c: + * sys/va/gstvadisplay_priv.c: + * sys/va/gstvadisplay_priv.h: + * sys/va/gstvaencoder.c: + * sys/va/gstvaencoder.h: + * sys/va/gstvah264enc.c: + * sys/va/gstvah265enc.c: + * sys/va/gstvajpegenc.c: + * sys/va/gstvavp8enc.c: + * sys/va/gstvavp9enc.c: + va: move methods from encoder to display-priv + These methods, semantically speaking, don't belong to the encoder helper object, + even if the attributes they are querying are only related to encoding + operations, since, in terms of the required parameters to call them, only the + display is required. 
+ The main problem with adding them to the encoder is that the encoder already has + an entrypoint (and the profile when it's opened), and these methods are required to + be called before the encoder object is opened. + This patch proposes to move these methods to the GstVaDisplay namespace, but + privately, since they are used only by the elements in the plugins, not by the + public API. + Querying this attribute with a non-encoding entrypoint shouldn't be a problem for + any driver; it should return the unimplemented value. + No functional changes are done. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10083> + +2025-11-11 04:27:42 +0900 Seungha Yang <seungha@centricular.com> + + * gst/videoparsers/gsth266parse.c: + * gst/videoparsers/gsth266parse.h: + h266parse: Use VUI framerate when upstream framerate is 0/1 + If upstream framerate is 0/1 (unknown) but VUI has framerate + information, use the VUI framerate + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10039> + +2025-11-11 04:18:04 +0900 Seungha Yang <seungha@centricular.com> + + * gst/videoparsers/gsth264parse.c: + h264parse: Use VUI framerate when upstream framerate is 0/1 + If upstream framerate is 0/1 (unknown) but VUI has framerate + information, use the VUI framerate + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10039> + +2025-11-07 20:39:34 +0900 Seungha Yang <seungha@centricular.com> + + * gst/videoparsers/gsth265parse.c: + * gst/videoparsers/gsth265parse.h: + h265parse: Use VUI framerate when upstream framerate is 0/1 + If upstream framerate is 0/1 (unknown) but VUI has framerate + information, use the VUI framerate + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10039> + +2025-11-12 17:13:06 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * ext/vulkan/vkh264enc.c: + vkh264enc: remove unused member variable + The member variable out_state is not used in the 
code, so there's no need to + keep it as a member variable for the lifespan of the object. + Moreover, it can be fetched with gst_video_encoder_get_output_state(). + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10080> + +2025-10-30 23:25:23 +0900 Seungha Yang <seungha@centricular.com> + + * tests/check/elements/h265parse.c: + tests: h265parse: Update for AUD insertion + Update the tests to handle the AUD inserted by h265parse + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9955> + +2025-10-30 21:40:37 +0900 Seungha Yang <seungha@centricular.com> + + * gst/videoparsers/gsth265parse.c: + * gst/videoparsers/gsth265parse.h: + h265parse: Add support for AUD insertion + Implement AUD insertion similar to h264parse + to work around decoding artifacts in multi-slice HEVC streams. + This fixes an issue with the Intel Media Foundation decoder in + Windows Media Player, where the decoded image becomes corrupted + in non-first slices when the frame lacks an AUD + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9955> + +2025-11-13 11:53:52 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * sys/va/gstvaencoder.c: + vaencoder: add more checks to gst_va_encoder_open() + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10084> + +2025-11-13 11:51:53 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * sys/va/gstvaencoder.c: + vaencoder: IS macros check for NULL + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10084> + +2025-11-13 11:36:42 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * sys/va/gstvaencoder.c: + vaencoder: init() calls reset() + Thus avoiding repeated code. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10084> 2025-11-03 11:23:36 +0900 jeongmin kwak <jeongmin.kwak@lge.com> @@ -416,26 +2236,69 @@ scte-section: fix missing cleanup and clarify event ownership on parse failure Free only the allocated component when _parse_splice_component() fails, leaving event cleanup to the caller to maintain proper ownership. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10093> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9981> + +2025-11-07 14:17:39 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * sys/va/gstvabaseenc.c: + vabaseenc: get usage hint with real entrypoint + Use class' entrypoint rather than a hardcoded one. Currently, in the case of + encoders, it doesn't matter much, but it makes the code robust for future + changes. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10077> + +2025-11-07 14:19:17 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * sys/va/gstvabasedec.c: + * sys/va/gstvabaseenc.c: + * sys/va/gstvabasetransform.c: + * sys/va/gstvacaps.c: + * sys/va/gstvacaps.h: + * sys/va/gstvacompositor.c: + va: use gst_video_is_dma_drm_caps() + Instead of local gst_caps_is_dmabuf() + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10077> + +2025-11-07 14:15:50 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * sys/va/meson.build: + va: build: remove non-documentation headers + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10077> 2025-11-12 16:53:32 -0800 Matthew Semeniuk <megaman22342@live.ca> * gst-libs/gst/vulkan/gstvkqueue.h: vulkan: add G_DECLS to gstvkqueue - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10085> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10078> 2025-11-12 16:53:05 -0800 Matthew Semeniuk <megaman22342@live.ca> * 
gst-libs/gst/vulkan/gstvkcommandpool.h: vulkan: add G_DECLS to gstvkcommandpool - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10085> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10078> + +2025-11-11 22:37:29 -0500 Olivier Crête <olivier.crete@collabora.com> + + * docs/plugins/gst_plugins_cache.json: + audio: Re-order all the formats + The order they were in was tripping a Rust unit test. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10074> + +2025-08-15 15:16:57 -0400 Olivier Crête <olivier.crete@collabora.com> + + * gst-libs/gst/analytics/gstanalyticsbatchmeta.c: + * gst-libs/gst/analytics/gstanalyticsbatchmeta.h: + batchmeta: Merge event/buffer/bufferlist into a single field + Everything serialized should be sent together + This breaks an API introduced since the last release. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9563> 2025-06-18 14:25:17 +0900 dongjoo.kim <dongjoo.kim@lge.com> * gst/id3tag/id3tag.c: id3tag: Fix resource leak When latin1 is not NULL and latin1[0] is '\0' - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10067> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10050> 2025-10-31 16:59:45 +0900 jeongmin kwak <jeongmin.kwak@lge.com> @@ -443,22 +2306,7 @@ analyticsmeta: Initialize span to avoid undefined behavior Fix uninitialized scalar variable by setting 'span' when max_relation_span is non-negative. Prevents potential issues when accessing adjacency matrix. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10060> - -2025-11-10 17:30:54 +0000 Tim-Philipp Müller <tim@centricular.com> - - * meson.build: - Back to development after 1.26.8 - -=== release 1.26.8 === - -2025-11-10 17:22:05 +0000 Tim-Philipp Müller <tim@centricular.com> - - * NEWS: - * RELEASE: - * gst-plugins-bad.doap: - * meson.build: - Release 1.26.8 + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9963> 2025-11-10 13:21:43 +0100 Kevin Scott <kevin.scott@axis.com> @@ -468,14 +2316,14 @@ handshake messages. An MTU of 1200 prevents certificate fragmentation and potential message reordering. Chromium does something similar: https://chromium.googlesource.com/external/webrtc/+/06b8f7e/rtc_base/openssl_stream_adapter.cc#259 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10055> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10054> 2025-06-16 16:40:07 +0900 dongjoo.kim <dongjoo.kim@lge.com> * gst-libs/gst/wayland/gstwllinuxdmabuf.c: wayland: Fix using uninitialized value of data.wbuf There are cases in which it goes to "out" without initializing data.wbuf - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10052> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10051> 2025-11-04 13:48:12 +0200 Sebastian Dröge <sebastian@centricular.com> @@ -484,19 +2332,160 @@ And as part of that fix a couple of mode changes while running, e.g. starting recording in video-first mode, then switching to running time mode and only setting an end running time. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10040> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10001> + +2025-11-05 15:25:33 +0200 Sebastian Dröge <sebastian@centricular.com> + + * gst-libs/gst/vulkan/gstvkdecoder-private.c: + * tests/check/libs/analyticsmeta.c: + memory: Add gst_map_info_clear() and use GST_MAP_REF_MEMORY for gst_buffer_map() + Also deprecate GstMemoryMapInfo and GstBufferMapInfo, and add g_autoptr support + for GstMapInfo directly. + This simplifies usage of the GstMapInfo API and reduces some + inconsistencies. + For consistency, also add gst_map_info_init(). + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10020> + +2025-11-06 18:06:46 +0200 Sebastian Dröge <sebastian@centricular.com> + + * gst/videoparsers/gsth264parse.c: + * gst/videoparsers/gsth265parse.c: + * gst/videoparsers/gsth266parse.c: + * gst/videoparsers/gstvp9parse.c: + videoparsers: Don't unnecessarily copy buffers + gst_base_parse_finish_frame() has not invalidated the buffer's data + for a few years now. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10035> + +2025-11-06 18:03:56 +0200 Sebastian Dröge <sebastian@centricular.com> + + * gst/videoparsers/gsth264parse.c: + * gst/videoparsers/gsth265parse.c: + * gst/videoparsers/gsth266parse.c: + * gst/videoparsers/gstmpeg4videoparse.c: + * gst/videoparsers/gstmpegvideoparse.c: + * gst/videoparsers/gstvp9parse.c: + videoparsers: Don't read GstMapInfo values after unmapping + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10035> + +2025-11-06 14:27:06 +0200 Sebastian Dröge <sebastian@centricular.com> + + * gst/videoparsers/gsth264parse.c: + * gst/videoparsers/gsth265parse.c: + * gst/videoparsers/gsth266parse.c: + h26456parse: Don't unmap and unref buffers twice + And also don't unnecessarily copy them. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10035> + +2025-11-06 14:26:01 +0200 Sebastian Dröge <sebastian@centricular.com> + + * gst/videoparsers/gstav1parse.c: + * gst/videoparsers/gsth264parse.c: + * gst/videoparsers/gsth265parse.c: + * gst/videoparsers/gsth266parse.c: + * gst/videoparsers/gstvp9parse.c: + videoparsers: Call gst_base_parse_frame_free() for custom allocated frames + Despite what the name suggests, for stack-allocated frames it only frees the + contained memory and does not free the frame itself. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10035> + +2025-10-17 12:28:00 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * ext/vulkan/gstvkutils.c: + * ext/vulkan/gstvkutils.h: + * ext/vulkan/vkdownload.c: + * ext/vulkan/vkupload.c: + gstvkutils: Add utility function for plane dimension calculations + Adds gst_vulkan_video_info_get_plane_dimensions() to calculate the plane + dimensions required for Vulkan buffer/image copy operations. This function: + - Handles video metadata, if available, for padding + - Converts byte strides to pixel counts for Vulkan's bufferRowLength + - Shares these calculations with the vulkanupload and vulkandownload elements + Co-authored-by: Stéphane Cerveau <scerveau@igalia.com> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8862> + +2025-10-16 10:26:30 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * ext/vulkan/gstvkutils.c: + * ext/vulkan/gstvkutils.h: + * ext/vulkan/meson.build: + * ext/vulkan/vkdownload.c: + * ext/vulkan/vkupload.c: + gstvkutils: Add utility function for plane memory lookup + Added gst_vulkan_buffer_peek_plane_memory(). It finds the GstMemory associated + with a specific video plane. This replaces duplicated code in the vkdownload and + vkupload elements. + This function properly handles both buffers with and without video metadata. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8862> + +2025-10-16 21:13:18 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * ext/vulkan/vkdownload.c: + vulkandownload: fix indentation + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8862> + +2025-10-16 21:08:43 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * ext/vulkan/vkupload.c: + vulkanupload: fix indentation and missing indent control + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8862> + +2025-10-16 21:10:35 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * tests/check/elements/vkupload.c: + tests: vkupload: add multiple formats and resolutions test + Restructured the test to allow testing multiple formats (NV12 and RGBA) and + resolutions. + Instead of a format-specific validation, the test keeps the input image and + compares it, row by row / pixel-perfect, with the output image. The test also + does memory dumps of each compared row for troubleshooting. 
+ Co-authored-by: Stéphane Cerveau <scerveau@igalia.com> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8862> 2025-11-05 12:37:42 +0200 Rares Branici <rares.branici@senstar.com> * gst-libs/gst/d3d12/gstd3d12converter.cpp: d3d12converter: Initialize video_direction - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10024> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10016> 2025-11-05 12:37:01 +0200 Rares Branici <rares.branici@senstar.com> * gst-libs/gst/d3d11/gstd3d11converter.cpp: d3d11converter: Initialize video_direction - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10024> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10016> + +2025-11-04 17:20:16 +0100 Jan Alexander Steffens (heftig) <jan.steffens@ltnglobal.com> + + * ext/wpe2/meson.build: + wpe2: Check for presence of wpe-platform.h + Just because WPEWebKit is new enough doesn't mean it has been built with + the WPE Platform API enabled. Check that its header is present before + building the plugin. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/10003> + +2025-11-03 18:39:01 +0900 Seungha Yang <seungha@centricular.com> + + * sys/d3d12/gstd3d12basefilter.cpp: + * sys/d3d12/gstd3d12basefilter.h: + * sys/d3d12/gstd3d12convert.cpp: + * sys/d3d12/gstd3d12deinterlace.cpp: + * sys/d3d12/gstd3d12fisheyedewarp.cpp: + * sys/d3d12/gstd3d12interlace.cpp: + * sys/d3d12/gstd3d12mipmapping.cpp: + * sys/d3d12/gstd3d12overlaycompositor.cpp: + * sys/d3d12/gstd3d12remap.cpp: + d3d12basefilter: Make device access thread-safe + Since the device object can be cleared while handling allocation queries + during state changes, protect it with a mutex. Also, move the duplicated + allocation query handler into the base class to eliminate redundant code. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9984> + +2025-09-05 21:07:43 +0800 Nicholas Jin <nicholasdezai@gmail.com> + + * docs/plugins/gst_plugins_cache.json: + audio: add U20_32 and S20_32 audio format + Co-authored-by: Sebastian Dröge <sebastian@centricular.com> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9650> 2025-10-31 10:51:48 +0100 Loïc Le Page <llepage@igalia.com> @@ -511,7 +2500,103 @@ values may be stored on int32. It may introduce a small computation error with odd values but negligible, taking into account the huge initial values > max_int32. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9973> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9967> + +2025-10-31 15:00:35 +0000 Olivier Crête <olivier.crete@collabora.com> + + * ext/onnx/gstonnxinference.c: + * ext/tflite/gsttfliteinference.c: + tflite+onnx: Remove Effect from the klass + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9911> + +2025-10-26 12:48:20 +0000 Olivier Crête <olivier.crete@collabora.com> + + * ext/onnx/gstonnxinference.c: + onnxinference: Guess the planar nature instead of setting it manually + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9911> + +2025-10-26 12:14:00 +0000 Olivier Crête <olivier.crete@collabora.com> + + * ext/onnx/gstonnxinference.c: + onnxinference: Use GstVideoInfo directly + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9911> + +2025-10-26 11:07:00 +0000 Olivier Crête <olivier.crete@collabora.com> + + * ext/onnx/gstonnxinference.c: + onnxinference: Redirect ONNX-Runtime level to GStreamer logs + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9911> + +2025-10-25 16:23:13 +0100 Olivier Crête <olivier.crete@collabora.com> + + * ext/onnx/gstonnxinference.c: + onnxinference: Set ONNX logging from GStreamer 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9911> + +2025-10-25 16:04:24 +0100 Olivier Crête <olivier.crete@collabora.com> + + * ext/onnx/gstonnxinference.c: + onnxinference: Push ERROR on the bus when returning GST_FLOW_ERROR + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9911> + +2025-10-25 15:51:51 +0100 Olivier Crête <olivier.crete@collabora.com> + + * ext/onnx/gstml.h: + * ext/onnx/gstonnxclient.cpp: + * ext/onnx/gstonnxclient.h: + * ext/onnx/gstonnxinference.c: + * ext/onnx/gstonnxinference.cpp: + * ext/onnx/meson.build: + onnxinference: Port to C + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9911> + +2025-10-25 12:11:01 +0100 Olivier Crête <olivier.crete@collabora.com> + + * ext/onnx/gstonnxclient.cpp: + * ext/onnx/gstonnxclient.h: + onnx: Remove template with C macro for conversion + Import the code from TfLiteInference for now. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9911> + +2025-10-22 18:43:43 +0100 Olivier Crête <olivier.crete@collabora.com> + + * ext/tflite/gsttfliteinference.c: + tflite: Factor out function guessing the type from the tensor + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9911> + +2025-10-22 18:33:00 +0100 Olivier Crête <olivier.crete@collabora.com> + + * ext/onnx/gstonnxclient.cpp: + onnx: Simplify structure member setting + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9911> + +2025-10-22 18:18:22 +0100 Olivier Crête <olivier.crete@collabora.com> + + * ext/tflite/gsttfliteinference.c: + tfliteinference: Remove object instance from conversion function + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9911> + +2025-10-22 18:03:15 +0100 Olivier Crête <olivier.crete@collabora.com> + + * ext/onnx/gstonnxclient.cpp: + * ext/onnx/gstonnxclient.h: + onnx: Remove usage of C++ string + Part-of: 
<https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9911> + +2025-10-22 17:52:22 +0100 Olivier Crête <olivier.crete@collabora.com> + + * ext/onnx/gstonnxclient.cpp: + * ext/onnx/gstonnxclient.h: + onnx: Remove usage of C++ vector + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9911> + +2025-10-22 16:48:37 +0100 Olivier Crête <olivier.crete@collabora.com> + + * ext/onnx/gstonnxclient.cpp: + * ext/onnx/gstonnxclient.h: + * ext/onnx/gstonnxinference.cpp: + onnxinference: Use C OnnxRT API instead of C++ API + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9911> 2025-10-31 17:20:31 +0900 jeongmin kwak <jeongmin.kwak@lge.com> @@ -519,7 +2604,194 @@ scte-section: fix resource leak in splice component parsing Free the `component` buffer in the error path of `gst_scte_section_parse()` to prevent a memory leak when an error occurs after allocation. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9971> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9964> + +2025-10-26 16:11:43 +0000 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * ext/vulkan/vkav1dec.c: + vkav1dec: dynamically generated pad templates + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9748> + +2025-10-26 16:11:00 +0000 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * ext/vulkan/vkav1dec.c: + * ext/vulkan/vkav1dec.h: + vkav1dec: clean headers and code-style fixes + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9748> + +2025-10-08 15:37:47 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * ext/vulkan/vkvp9dec.c: + * ext/vulkan/vkvp9dec.h: + vkvp9dec: dynamically generated pad templates + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9748> + +2025-10-06 17:05:59 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * ext/vulkan/vkh265dec.c: + * 
ext/vulkan/vkh265dec.h: + vkh265dec: dynamically generated pad templates + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9748> + +2025-09-25 19:33:31 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * ext/vulkan/vkh264dec.c: + * ext/vulkan/vkh264dec.h: + * ext/vulkan/vkh264enc.c: + * ext/vulkan/vkh264enc.h: + vkh264XXX: dynamically generated pad templates + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9748> + +2025-09-25 19:31:46 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * ext/vulkan/gstvkvideocaps.c: + * ext/vulkan/gstvkvideocaps.h: + * ext/vulkan/meson.build: + gstvkvideocaps: dynamic caps generator for pad templates + Implement gst_vulkan_physical_device_codec_caps() to query Vulkan video + capabilities and generate appropriate GStreamer caps for supported codecs. This + includes support for H.264, H.265 decode and encode operations, and VP9 and AV1 + decode operations. + The implementation builds video profiles for each codec and iterates through + possible chroma subsampling and bit depth combinations to determine supported + configurations. For each valid configuration, it generates both codec-specific + caps (like video/x-h264) and corresponding raw video output caps with the + Vulkan image memory feature. + Also handles special cases and sets appropriate stream formats for different + codecs. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9748> + +2025-09-23 14:09:30 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * gst-libs/gst/vulkan/gstvkdecoder-private.c: + * gst-libs/gst/vulkan/gstvkencoder-private.c: + * gst-libs/gst/vulkan/gstvkvideo-private.c: + * gst-libs/gst/vulkan/gstvkvideo-private.h: + vkvideo-private: add gst_vulkan_video_try_configuration() + This is a refactoring of encoder and decoder helper classes, taking out common + code from both that queries a vulkan physical device given a video operation + profile. 
That query returns the hardware capabilities for that profile. + By moving this common query to a physical device out of the encoder and decoder + helper classes into gst_vulkan_try_configuration(), this function can be + reused for the caps template generation for vulkan video elements. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9748> + +2025-10-26 16:22:31 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * gst-libs/gst/vulkan/gstvkvideoutils-private.c: + vkvideoutils-private: add film grain field in caps + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9748> + +2025-10-07 17:37:44 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * gst-libs/gst/vulkan/gstvkvideoutils-private.c: + vkvideoutils-private: improve H.265 profile handling + In order to determine the H.265 GStreamer profile, it's required to consider the + chroma subsampling and the chroma/luma bit depth and compare them with a list of + possible profile strings. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9748> + +2025-10-08 17:34:13 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * gst-libs/gst/vulkan/gstvkvideoutils-private.c: + vkvideoutils-private: fail _to_caps() if no gstreamer profile + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9748> + +2025-10-07 19:36:50 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * gst-libs/gst/vulkan/gstvkvideoutils-private.c: + vkvideoutils-private: break loops when profile is found + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9748> + +2025-09-25 19:24:55 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * ext/vulkan/gstvulkan.c: + * ext/vulkan/gstvulkanelement.c: + vulkan: export common debug category + So it can be used throughout the whole plugin registry process. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9748> + +2025-10-07 14:42:25 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * gst-libs/gst/vulkan/gstvkformat.c: + vkformat: add missing vulkan video formats + These formats are used by Vulkan video decoders and encoders. + In Vulkan Specification (1.4.330), in section 53.1.6 "Representation and Texel + Block Size", it says: + … The in-memory ordering of bytes within a component is determined by the + host endianness. + Then there's a macro that will do that conversion at compile time. + Those new color formats don't have color conversion logic yet. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9748> + +2025-09-25 19:21:50 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * gst-libs/gst/vulkan/gstvkerror.c: + vkerror: add video profile errors + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9748> + +2025-09-25 19:20:33 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * gst-libs/gst/vulkan/gstvkvideoutils-private.c: + vkvideoutils-private: H.264 profile baseline is constrained-baseline + In the Vulkan Specification (v 1.4.330) in sections 45.12.3 and 45.21.6, "H264 + Decode Profile" and "H.264 Encode Profile" respectively, it says: + … enum constant STD_VIDEO_H264_PROFILE_IDC_BASELINE identifies the + Constrained Baseline profile as defined in A.2.1.1 of the ITU-T H.264 + Specification … + This patch fixes that in our mapping structure, which assumed baseline. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9748> + +2025-10-27 20:46:40 +0000 Jan Schmidt <jan@centricular.com> + + * gst/mpegtsdemux/tsdemux.c: + tsdemux: support demuxing ID3 metadata + Output timed ID3 metadata frames. Mark packets + that start a new ID3 frame as keyframes, using the + PES data_alignment header flag. 
+ Based on a patch by Sebastían Benítez Díaz <sebastianbd95@gmail.com> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7645> + +2025-10-30 11:56:59 +0100 Ruben Sanchez <rsanchez@fluendo.com> + + * tests/check/elements/nvenc.c: + nvenc: Update test resolutions for modern NVIDIA GPUs + Modern NVIDIA GPUs (RTX 20xx, 30xx, 40xx series) have minimum + resolution requirements of 160x64. Update test resolutions from + 64x64/128x128 to 320x320/640x640 to support these GPUs. + The original values were causing caps negotiation failures with + GST_FLOW_NOT_NEGOTIATED on modern hardware. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9952> + +2025-10-30 11:56:34 +0100 Ruben Sanchez <rsanchez@fluendo.com> + + * tests/check/elements/nvenc.c: + nvenc: Fix typo in resolution_change_common using to_width for height + When changing resolution in the test, the height caps field was + incorrectly set using to_width instead of to_height parameter. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9952> + +2025-10-16 20:31:56 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * ext/vulkan/vkupload.c: + vulkanupload: fix return error in raw to buffer method + The return value of _copy_frames() is boolean, while the return value + of _raw_to_buffer_perform() is GstFlowReturn. By casting it, it returns + a wrong value. + This patch returns GST_FLOW_ERROR if _copy_frames() fails. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9957> + +2025-10-29 11:29:46 +0100 Tulio Beloqui <tulio@pexip.com> + + * sys/mediafoundation/gstmfcapturedshow.cpp: + mfcapturedshow: fix for top-down RGB images + The documentation for BITMAPINFOHEADER states that: For uncompressed + RGB bitmaps, if biHeight is positive, the bitmap is a bottom-up DIB + with the origin at the lower left corner. 
If biHeight is negative, the + bitmap is a top-down DIB with the origin at the upper left corner. + Also make sure the height in the caps is always positive. + Tested against the NVIDIA Broadcast virtual camera. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9950> 2025-10-26 16:06:26 +0000 Jan Schmidt <jan@centricular.com> @@ -532,14 +2804,27 @@ allowed to be unbounded When breaking incoming buffers across PES packet payloads, only the first should carry the incoming PTS/DTS - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9954> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9925> 2025-10-26 16:06:26 +0000 Jan Schmidt <jan@centricular.com> * gst/mpegtsmux/tsmux/tsmuxstream.c: mpegtsmux: Use some named constants instead of hard-coded values Make some code a bit more readable - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9954> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9925> + +2025-10-28 11:40:20 +0900 JihoonLee <ejihoon.lee@lge.com> + + * gst/mpegtsdemux/tsdemux.c: + tsdemux: add debug logs for various stream handling cases + Add GST_DEBUG_OBJECT statements in the default cases of switch statements + to improve traceability when handling unexpected GstH264NalUnitType, + MPEG-TS stream types, JPEG2000 color specifications. + - scan_keyframe_h264: Log NAL unit type for non-slice cases + - create_pad_for_stream: Log unsupported stream types + - color specification handling: Log unrecognized color specs + No functional changes introduced, only enhanced debugging capabilities. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9928> 2025-10-29 14:17:28 +0900 jeongmin kwak <jeongmin.kwak@lge.com> @@ -547,23 +2832,100 @@ v4l2codecs: Free sub-request on allocation failure If gst_vec_deque_pop_head() returns NULL, a new GstV4l2Request is allocated. 
Free it on MEDIA_IOC_REQUEST_ALLOC failure to avoid memory leak. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9947> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9944> -2025-08-09 22:51:03 +0900 Seungha Yang <seungha@centricular.com> +2025-10-28 00:47:57 -0400 Doug Nazar <nazard@nazar.ca> - * sys/wasapi2/gstwasapi2activator.cpp: - wasapi2: Tone down activation fail log - If there's no endpoint available, that failure is expected error - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9896> + * ext/openaptx/openaptx-plugin.c: + * gst-libs/gst/codecparsers/gsth266parser.c: + * gst-libs/gst/codecs/gstmpeg2decoder.c: + bad: Annotate unused functions/variables when checks/asserts disabled + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9931> -2025-08-05 00:41:34 +0900 Seungha Yang <seungha@centricular.com> +2025-10-23 15:43:10 +0900 JihoonLee <ejihoon.lee@lge.com> - * sys/wasapi2/gstwasapi2activator.cpp: - * sys/wasapi2/gstwasapi2activator.h: - wasapi2: Handle GetActivateResult failure - Even if GetActivateResult() succeeded, activation result can fail. - Checks output HRESULT code as well - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9896> + * gst/videoparsers/gsth265parse.c: + h265parse: Add debug logging for unknown H.265 values + Add GST_DEBUG logging for unrecognized H.265 parser values to improve + diagnostic capability when processing unsupported or malformed streams: + - gst_h265_parse_process_sei(): log unknown SEI payload types + - gst_h265_parse_update_src_caps(): log unknown chroma format IDs + - gst_h265_parse_pre_push_frame(): log unknown sei_pic_struct values + These debug statements help identify issues with non-standard H.265 stream + configurations during parsing. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9889> + +2025-10-25 14:24:19 +0100 Jan Schmidt <jan@centricular.com> + + * gst/mpegtsdemux/pesparse.c: + mpegtsdemux: Use some named constants instead of hard-coded values + Make some code a bit more readable + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9921> + +2025-05-16 09:26:49 -0400 Olivier Crête <olivier.crete@collabora.com> + + * gst/tensordecoders/gstclassifiertensordecoder.c: + classifierdecoder: Support tensors that don't need softmax + In some models, the softmax function has already been performed, so no + need to do it in the GStreamer code. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8995> + +2025-10-09 14:50:13 +0100 Philippe Normand <philn@igalia.com> + + * ext/webrtc/gstwebrtcbin.c: + webrtcbin: Unlock sctp elements during disposal + Not doing so would lead to critical warnings. It could happen if webrtcbin is + torn down while it has a data-channel waiting to preroll. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9823> + +2025-10-09 14:43:46 +0100 Philippe Normand <philn@igalia.com> + + * ext/webrtc/gstwebrtcbin.c: + * ext/webrtc/gstwebrtcbin.h: + * gst-libs/gst/webrtc/nice/nice.c: + * tests/check/elements/webrtcbin.c: + webrtcbin: Optional support for async tasks + This seems needed mostly for the add-ice-candidate-full signal where the promise + is passed down to the ICE implementation. In this context the promise reply will + usually be notified from the ICE backend which in some situations has to perform + an asynchronous name resolution for the host supplied in the candidate SDP. + Covered by a test that negotiates an Offer/Answer and then attempts to add an + ICE candidate containing an invalid MDNS candidate address. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9823> + +2025-01-17 18:32:13 -0300 Daniel Almeida <daniel.almeida@collabora.com> + + * ext/vulkan/gstvulkan.c: + * ext/vulkan/meson.build: + * ext/vulkan/vkav1dec.c: + * ext/vulkan/vkav1dec.h: + * gst-libs/gst/vulkan/gstvkdecoder-private.c: + * gst-libs/gst/vulkan/gstvkdecoder-private.h: + * gst-libs/gst/vulkan/gstvkdevice.c: + * gst-libs/gst/vulkan/gstvkvideo-private.c: + * gst-libs/gst/vulkan/gstvkvideo-private.h: + * gst-libs/gst/vulkan/gstvkvideoutils-private.c: + * gst-libs/gst/vulkan/gstvkvideoutils-private.h: + vulkan: add av1 decode element + Co-authored-by: Stephane Cerveau <scerveau@igalia.com> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8316> + +2025-10-25 13:10:48 +0100 Seungha Yang <seungha@centricular.com> + + * sys/wasapi2/gstwasapi2rbuf.cpp: + wasapi2: Create background thread with normal priority + The background thread is idle most of the time and does not + perform any time-critical task. Create the thread explicitly + outside of our I/O thread so that the default priority is + assigned + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9895> + +2025-10-25 18:50:16 +0200 Stéphane Cerveau <scerveau@igalia.com> + + * ext/vulkan/vkvp9dec.c: + vkvp9dec: fix dpb_size calculation + Remove useless checks in _find_next_slot_idx + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9904> 2025-10-25 16:14:49 +0100 He Junyan <junyan.he@intel.com> @@ -572,7 +2934,98 @@ This fixes a bug where we append two frames into one buffer despite trying to split the stream into unique frames. 
Fixes #4701 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9906> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9901> + +2025-10-25 14:18:47 +0100 Seungha Yang <seungha@centricular.com> + + * sys/d3d12/gstd3d12swapchainsink.cpp: + d3d12swapchainsink: Fix flickering after resize + Mark that it was resized correctly so that the next render cycle can be + processed as intended + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9897> + +2025-10-14 14:01:52 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * gst-libs/gst/vulkan/gstvkdecoder-private.c: + * gst-libs/gst/vulkan/gstvkencoder-private.c: + * gst-libs/gst/vulkan/gstvkvideo-private.c: + * gst-libs/gst/vulkan/gstvkvideo-private.h: + vkvideo-private: split video device and instance functions + Loading and using device functions is faster, and they are attached to a specific device. + Nevertheless, physical device functions belong to the instance. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9837> + +2025-10-14 13:57:55 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * gst-libs/gst/vulkan/gstvkvideo-private.h: + vkvideo-private: remove unused function pointer + vkCmdPipelineBarrier2KHR isn't used because it was integrated into + GstVulkanOperation. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9837> + +2025-10-13 17:48:37 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * gst-libs/gst/vulkan/gstvkdecoder-private.c: + * gst-libs/gst/vulkan/gstvkencoder-private.c: + * gst-libs/gst/vulkan/gstvkvideo-private.h: + vk{decoder,encoder}-private: use new physical device functions + Rather than having the helper classes load the function pointers themselves. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9837> + +2025-10-13 16:46:41 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * gst-libs/gst/vulkan/gstvkphysicaldevice-private.h: + * gst-libs/gst/vulkan/gstvkphysicaldevice.c: + vkphysicaldevice: add gst_vulkan_physical_device_get_video_capabilities() + The function will return TRUE if vkGetPhysicalDeviceVideoCapabilitiesKHR + function is available and it ran correctly. If so, the pcaps + structure (VkVideoCapabilitiesKHR) is filled. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9837> + +2025-10-13 16:27:33 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * gst-libs/gst/vulkan/gstvkerror.c: + * gst-libs/gst/vulkan/gstvkphysicaldevice-private.h: + * gst-libs/gst/vulkan/gstvkphysicaldevice.c: + vkphysicaldevice: add gst_vulkan_physical_device_get_video_formats() + This function will return a GArray of VkVideoFormatPropertiesKHR elements given + the image usage and a pointer to a VkVideoProfileInfoKHR structure. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9837> + +2025-10-13 14:43:02 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * gst-libs/gst/vulkan/gstvkformat.c: + * gst-libs/gst/vulkan/gstvkphysicaldevice-private.h: + * gst-libs/gst/vulkan/gstvkphysicaldevice.c: + vkphysicaldevice: add gst_vulkan_physical_device_get_format_properties() + This function will try to use the latest way to fetch the format's properties. + It can be called multiple times so it's better to keep it as a function attached + to the object. + The function still uses gst_vulkan_instance_get_proc_address() instead of + gst_vulkan_device_get_proc_address() because, logically, the physical device sits + between the instance and the device. + In order to keep the code compatible with several Vulkan versions, we added a + private result structure: GstVulkanFormatProperties. 
+ In gstvkformat, instead of loading the function every time, this new function + is called from the GstPhysicalDevice object. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9837> + +2025-10-13 14:39:26 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * gst-libs/gst/vulkan/gstvkdevice.c: + vkdevice: enable physical device extensions + These extensions are potentially used by gst_vulkan_format_from_video_info_2() + but since they aren't loaded even when the conditions are met, basic fallbacks are + used. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9837> + +2025-10-13 14:37:27 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * gst-libs/gst/vulkan/gstvkdevice.c: + vkdevice: fix verification function calling + The code only checked if the function was set, but it didn't actually call the + function. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9837> 2025-04-01 14:46:23 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> @@ -582,7 +3035,7 @@ new mechanism only worked if the main pads received flush-stop first. Keep both pad implementation symmetrical and reset once both pads have received the event. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9905> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8759> 2025-10-16 14:34:48 +0200 Jan Alexander Steffens (heftig) <jan.steffens@ltnglobal.com> @@ -590,18 +3043,7 @@ tsmux: Reset PUSI flag after writing stream packet Otherwise we might accidentally set it on a PCR-only packet when we pad the stream on a subsequent write. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9899> - -2025-10-16 11:35:57 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> - - * gst/rist/gstristsink.c: - ristsink: Fix double free regression - The rtpext element was leaked in error path and fixed in MR !9756, but the - rtpext owner ship is later passed to a bin using gst_bin_add(). Ref-sync the - rtpext element so we can unref it in finalize() function without any special - cases. - Fixes #4707 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9859> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9857> 2024-09-21 00:53:11 +0200 Michael Grzeschik <m.grzeschik@pengutronix.de> @@ -612,7 +3054,37 @@ v4l2sink. Using the already linked v4l2sink a second time does currently not work when we restart the stream and therefore switch to streaming again. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9860> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7574> + +2025-08-25 21:39:27 -0400 Daniel Morin <daniel.morin@collabora.com> + + * tests/check/libs/analyticsmeta.c: + test: Test GstAnalyticsTensorMtd + - Verify adding GstAnalyticsTensorMtd to GstAnalyticsRelationMeta and retrieving it. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7959> + +2024-07-17 12:24:11 -0400 Daniel Morin <daniel.morin@collabora.com> + + * gst-libs/gst/analytics/analytics.h: + * gst-libs/gst/analytics/gstanalyticstensormtd.c: + * gst-libs/gst/analytics/gstanalyticstensormtd.h: + * gst-libs/gst/analytics/gsttensor.c: + * gst-libs/gst/analytics/gsttensor.h: + * gst-libs/gst/analytics/meson.build: + analytics: adding tensor mtd + - GstAnalyticsTensorMtd can store a GstTensor and describe relations with other Mtds + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7959> + +2025-10-16 11:35:57 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> + + * gst/rist/gstristsink.c: + ristsink: Fix double free regression + The rtpext element was leaked in the error path and fixed in MR !9756, but the + rtpext ownership is later passed to a bin using gst_bin_add(). Ref-sink the + rtpext element so we can unref it in the finalize() function without any special + cases. + Fixes #4707 + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9858>
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9840> -=== release 1.26.7 === +2025-10-04 15:14:31 -0400 Doug Nazar <nazard@nazar.ca> -2025-10-14 18:25:43 +0100 Tim-Philipp Müller <tim@centricular.com> + * gst-libs/gst/cuda/gstcudabufferpool.h: + * gst-libs/gst/insertbin/gstinsertbin.h: + * gst-libs/gst/transcoder/gsttranscoder.h: + * gst-libs/gst/va/gstvaallocator.h: + * gst-libs/gst/va/gstvapool.h: + * gst-libs/gst/vulkan/gstvkfence.h: + * gst-libs/gst/vulkan/gstvkswapper.h: + gst: Add G_GNUC_WARN_UNUSED_RESULT to constructors + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9796> - * NEWS: - * RELEASE: - * gst-plugins-bad.doap: - * meson.build: - Release 1.26.7 +2025-09-11 14:24:25 -0400 Doug Nazar <nazard@nazar.ca> + + * tests/check/elements/avtpcrfcheck.c: + * tests/check/elements/avtpcrfsync.c: + * tests/check/elements/camerabin.c: + * tests/check/elements/cccombiner.c: + * tests/check/elements/fdkaac.c: + * tests/check/elements/h264timestamper.c: + * tests/check/elements/h266parse.c: + * tests/check/elements/id3mux.c: + * tests/check/elements/pnm.c: + * tests/check/elements/rtponvifparse.c: + * tests/check/elements/webrtcbin.c: + * tests/check/libs/insertbin.c: + * tests/check/libs/mse.c: + * tests/check/libs/play.c: + * tests/check/libs/vkvideoencodeav1.c: + bad: tests: convert g_assert() to g_assert_*() and mark unused items + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9788> + +2025-09-11 14:22:03 -0400 Doug Nazar <nazard@nazar.ca> + + * ext/avtp/gstavtpaafdepay.c: + * ext/avtp/gstavtpaafpay.c: + * ext/avtp/gstavtpbasedepayload.c: + * ext/avtp/gstavtpcrfbase.c: + * ext/avtp/gstavtpcrfsync.c: + * ext/avtp/gstavtpcrfutil.c: + * ext/avtp/gstavtpcvfdepay.c: + * ext/avtp/gstavtpcvfpay.c: + * ext/avtp/gstavtprvfdepay.c: + * ext/avtp/gstavtprvfpay.c: + * ext/closedcaption/gstcea608mux.c: + * ext/dash/gstmpdclient.c: + * ext/dtls/gstdtlssrtpdec.c: + * 
ext/dtls/gstdtlssrtpenc.c: + * ext/webrtc/gstwebrtcbin.c: + * gst-libs/gst/analytics/gstanalyticssegmentationmtd.c: + * gst-libs/gst/codecparsers/gsth265bitwriter.c: + * gst-libs/gst/mse/gstsourcebuffer.c: + * gst-libs/gst/wayland/gstwlbuffer.c: + * gst/debugutils/gsttestsrcbin.c: + * gst/id3tag/id3tag.c: + * gst/netsim/gstnetsim.c: + * gst/rtmp2/rtmp/amf.c: + * gst/rtmp2/rtmp/rtmpconnection.c: + * gst/rtmp2/rtmp/rtmpmessage.c: + * gst/tensordecoders/gstfacedetectortensordecoder.c: + * sys/va/gstvaav1enc.c: + * sys/va/gstvah264enc.c: + * sys/va/gstvah265enc.c: + * sys/va/gstvavp9enc.c: + bad: mark items that are unused when checks or asserts are disabled + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9788> + +2025-03-24 06:36:26 +0100 Carlos Bentzen <cadubentzen@igalia.com> + + * gst-libs/gst/codecs/gsth266decoder.c: + h266decoder: support vvc1 and vvi1 modes + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8662> + +2025-03-24 06:36:07 +0100 Carlos Bentzen <cadubentzen@igalia.com> + + * gst-libs/gst/codecparsers/gsth266parser.c: + * gst-libs/gst/codecparsers/gsth266parser.h: + h266parser: implement identify and split nalu + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8662> + +2025-10-14 14:21:26 +0800 Shengqi Yu (喻盛琪) <shengqi.yu@mediatek.com> + + * gst-libs/gst/wayland/gstwlwindow.c: + waylandsink: increase wait time for configure event + The 100 ms timeout added in merge request + https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/365 to wait for the configure event + may not be enough. If a series of Weston clients are running or CPU load is high (stress test), + the 100 ms timeout can expire sporadically, causing the gstwldisplay thread to go wrong. + So, increase the timeout. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9838> + +2025-10-03 18:43:03 +0200 stevn <3381023+stevn@users.noreply.github.com> + + * sys/applemedia/avsamplevideosink.m: + * sys/applemedia/coremediabuffer.c: + * sys/applemedia/corevideobuffer.c: + * sys/applemedia/helpers.c: + * sys/applemedia/vtdec.c: + * sys/applemedia/vtenc.c: + applemedia: add P010_LE support to e.g. vtenc_hw + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9789> 2025-10-09 08:09:33 +0200 Branko Subasic <branko.subasic@axis.com> @@ -651,7 +3217,7 @@ pad has received EOS, and have no data left in their queues, and the last one has data. The problem is solved by modifying find_best_pad() to use gst_aggregator_pad_is_eos() instead. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9821> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9803> 2025-10-08 14:05:07 +0200 Branko Subasic <branko.subasic@axis.com> @@ -666,7 +3232,7 @@ pad has received EOS, and have no data left in their queues, and the last one has data. The problem is solved by modifying find_best_pad() to use gst_aggregator_pad_is_eos() instead. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9821> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9803> 2025-10-07 12:25:07 +0100 Philippe Normand <philn@igalia.com> @@ -674,7 +3240,270 @@ webrtc: nice: Fix a use-after-free and a mem leak `new_candidate` was freed too early and `new_addr` wasn't freed in case of early return from the on_candidate_resolved function. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9816> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9808> + +2025-10-03 03:13:25 +0000 Abd Razak, Muhammad Azizul Hazim <muhammad.azizul.hazim.abd.razak@intel.com> + + * sys/va/gstvaav1enc.c: + vaav1enc: add upscaledwidth value for SCC encoding + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9783> + +2025-09-29 17:43:06 +0900 Seungha Yang <seungha@centricular.com> + + * sys/d3dvideosink/d3dhelpers.c: + d3dvideosink: Port to gst_call_async + The previous implementation was wrong in that it passed the + element class as if it were the element, and the device reset handling + itself also seems to be wrong; in any case, this removes use of a deprecated API. + The d3dvideosink is not a recommended videosink anyway + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/869> + +2025-09-29 16:57:30 +0900 Seungha Yang <seungha@centricular.com> + + * sys/ipcpipeline/gstipcpipelinesink.c: + * sys/ipcpipeline/gstipcpipelinesrc.c: + ipcpipeline: Port to gst_object_call_async + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/869> + +2025-09-29 16:54:26 +0900 Seungha Yang <seungha@centricular.com> + + * gst/debugutils/gsttestsrcbin.c: + testsrcbin: Port to gst_object_call_async + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/869> + +2025-09-29 16:53:14 +0900 Seungha Yang <seungha@centricular.com> + + * ext/openjpeg/gstopenjpegdec.c: + * ext/openjpeg/gstopenjpegenc.c: + openjpeg: Port to gst_object_call_async + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/869> + +2025-10-05 14:56:35 -0400 Xavier Claessens <xclaessens@netflix.com> + + * data/meson.build: + meson: Add missing devenv values + Those are the differences spotted between: + - meson devenv -C builddir --dump meson.env + - ./gst-env.py --only-environment > gst.env + Part-of: 
<https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9800> + +2025-07-08 01:07:33 +0900 Seungha Yang <seungha@centricular.com> + + * gst-libs/gst/hip/gsthip-enums.cpp: + * gst-libs/gst/hip/gsthip-enums.h: + * gst-libs/gst/hip/gsthip-gl.h: + * gst-libs/gst/hip/gsthip-interop-gl.h: + * gst-libs/gst/hip/gsthip-interop.cpp: + * gst-libs/gst/hip/gsthip-interop.h: + * gst-libs/gst/hip/gsthip-private.h: + * gst-libs/gst/hip/gsthip.h: + * gst-libs/gst/hip/gsthip_fwd.h: + * gst-libs/gst/hip/gsthipbufferpool.cpp: + * gst-libs/gst/hip/gsthipbufferpool.h: + * gst-libs/gst/hip/gsthipdevice.cpp: + * gst-libs/gst/hip/gsthipdevice.h: + * gst-libs/gst/hip/gsthipevent.cpp: + * gst-libs/gst/hip/gsthipevent.h: + * gst-libs/gst/hip/gsthiploader.cpp: + * gst-libs/gst/hip/gsthiploader.h: + * gst-libs/gst/hip/gsthipmemory.cpp: + * gst-libs/gst/hip/gsthipmemory.h: + * gst-libs/gst/hip/gsthiprtc.cpp: + * gst-libs/gst/hip/gsthiprtc.h: + * gst-libs/gst/hip/gsthipstream.cpp: + * gst-libs/gst/hip/gsthipstream.h: + * gst-libs/gst/hip/gsthiputils-private.h: + * gst-libs/gst/hip/gsthiputils.cpp: + * gst-libs/gst/hip/gsthiputils.h: + * gst-libs/gst/hip/hip-gst-gl.h: + * gst-libs/gst/hip/hip-gst.h: + * gst-libs/gst/hip/hip-prelude.h: + * gst-libs/gst/hip/meson.build: + * gst-libs/gst/hip/stub/cuda.h: + * gst-libs/gst/hip/stub/cudaD3D11.h: + * gst-libs/gst/hip/stub/cudaGL.h: + * gst-libs/gst/hip/stub/driver_types.h: + * gst-libs/gst/hip/stub/hip/driver_types.h: + * gst-libs/gst/hip/stub/hip/hip_gl_interop.h: + * gst-libs/gst/hip/stub/hip/hip_runtime.h: + * gst-libs/gst/hip/stub/hip/hip_runtime_api.h: + * gst-libs/gst/hip/stub/hip/hiprtc.h: + * gst-libs/gst/hip/stub/hip/nvidia_hip_runtime_api.h: + * gst-libs/gst/hip/stub/hip/texture_types.h: + * gst-libs/gst/meson.build: + * sys/hip/gsthipbasefilter.h: + * sys/hip/gsthipcompositor.cpp: + * sys/hip/gsthipconverter.cpp: + * sys/hip/gsthipconverter.h: + * sys/hip/gsthipmemorycopy.cpp: + * sys/hip/gsthipmemorycopy.h: + * 
sys/hip/meson.build: + * sys/hip/plugin.cpp: + hip: Move core methods to gst-libs + Make core GstHip methods public so applications can access + GstHip-produced resources directly. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9340> + +2025-10-03 16:56:47 +0200 Stéphane Cerveau <scerveau@igalia.com> + + * ext/vulkan/vkvp9dec.c: + vkvp9dec: fix resolution change NULL pointer dereference + Add early return when output_state is not yet initialized in + _check_resolution_change(). This occurs during early stream + initialization before output_state is set up. + Resolution change detection is safely skipped since coded dimensions + match frame header dimensions during stream init, making renegotiation + unnecessary at this stage. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9786> + +2025-09-22 12:53:31 -0300 Thibault Saunier <tsaunier@igalia.com> + + * gst-libs/gst/analytics/gstanalyticsmeta.h: + doc: python: Document PyGObject overrides for core GStreamer + Add comprehensive documentation for Python-specific functionality + provided by PyGObject overrides in core GStreamer classes including: + - Bin: make_and_add helper method and multi-element add() support + - Buffer/Memory: context manager support for map operations + - Caps: constructor overrides and container protocol support + - Clock: TIME_ARGS utility function + - Element: link_many static method + - ElementFactory: convenience metadata getters and classmethod make() + - Iterator: Python iteration protocol support + - MiniObject: make_writable and flags property + - Structure: dictionary-like access and constructor overrides + - TagList: container protocol support + Documentation is placed in the appropriate C function documentation + where overrides enhance existing functionality, or in class-level + SECTION documentation for new helper methods. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9717> + +2025-09-03 19:07:06 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * sys/va/gstvaencoder.c: + vaencoder: remove unused code + When the encoder is open, codedbuf_size cannot be zero or less. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9747> + +2025-09-03 18:13:34 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * sys/va/gstvaencoder.c: + vaencoder: use gst_clear_object() + Instead of gst_clear_pointer(), even though gst_clear_object() is a macro + that uses gst_clear_pointer() let's keep the semantics. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9747> + +2025-09-03 06:44:02 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * gst-libs/gst/va/gstvaallocator.c: + * gst-libs/gst/va/gstvaallocator.h: + * gst-libs/gst/va/gstvapool.c: + * gst-libs/gst/va/gstvapool.h: + va: remove unusable public macros + Since GstVaDmabufAllocator, GstVaAllocator and GstVaPool structures aren't + public it's impossible to use the glib macros for data type checking. They + were moved to the c code. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9747> + +2025-09-26 22:20:08 +0530 Nirbheek Chauhan <nirbheek@centricular.com> + + * sys/hip/gsthiploader.cpp: + * sys/hip/gsthiprtc.cpp: + * sys/hip/gsthiputils.cpp: + * sys/hip/gsthiputils.h: + hip: Fix loading of HIP libraries on Linux + We shouldn't need the development packages to dlopen the HIP + libraries. Also look for HIP 7.0 in System32 on Windows. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9749> + +2025-09-10 21:30:50 +0300 anonymix007 <48598263+anonymix007@users.noreply.github.com> + + * tests/examples/vulkan/meson.build: + * tests/examples/vulkan/sdl3_vulkandec.c: + examples: Add SDL3 Vulkan Renderer interop example + Co-authored-by: Victor Jaquez <vjaquez@igalia.com> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9677> + +2025-09-19 12:43:01 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * gst-libs/gst/vulkan/gstvkdevice.c: + vkdevice: always enable YCBCR conversion extension + It always needs to be enabled in order to use it. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9677> + +2025-09-25 08:39:47 -0400 Doug Nazar <nazard@nazar.ca> + + * gst-libs/gst/analytics/gstanalyticssegmentationmtd.h: + * gst-libs/gst/analytics/gsttensor.h: + * gst-libs/gst/audio/gstnonstreamaudiodecoder.h: + * gst-libs/gst/audio/gstplanaraudioadapter.h: + * gst-libs/gst/codecs/gstav1picture.h: + * gst-libs/gst/codecs/gsth264decoder.h: + * gst-libs/gst/codecs/gsth264picture.h: + * gst-libs/gst/codecs/gsth265decoder.h: + * gst-libs/gst/codecs/gsth265picture.h: + * gst-libs/gst/codecs/gsth266picture.h: + * gst-libs/gst/codecs/gstmpeg2picture.h: + * gst-libs/gst/codecs/gstvp8picture.h: + * gst-libs/gst/codecs/gstvp9picture.h: + * gst-libs/gst/cuda/gstcudacontext.h: + * gst-libs/gst/cuda/gstcudamemory.h: + * gst-libs/gst/cuda/gstcudamemorypool.h: + * gst-libs/gst/cuda/gstcudastream.h: + * gst-libs/gst/mpegts/gst-atsc-section.h: + * gst-libs/gst/mpegts/gst-dvb-section.h: + * gst-libs/gst/mpegts/gst-scte-section.h: + * gst-libs/gst/mpegts/gstmpegtsdescriptor.h: + * gst-libs/gst/mpegts/gstmpegtssection.h: + * gst-libs/gst/mse/gstmediasource.h: + * gst-libs/gst/mse/gstsourcebuffer.h: + * gst-libs/gst/mse/gstsourcebufferlist.h: + * gst-libs/gst/play/gstplay-signal-adapter.h: + * gst-libs/gst/play/gstplay-visualization.h: + * 
gst-libs/gst/play/gstplay.h: + * gst-libs/gst/player/gstplayer-visualization.h: + * gst-libs/gst/player/gstplayer.h: + * gst-libs/gst/transcoder/gsttranscoder-signal-adapter.h: + * gst-libs/gst/transcoder/gsttranscoder.h: + * gst-libs/gst/va/gstvadisplay_drm.h: + * gst-libs/gst/va/gstvadisplay_wrapped.h: + * gst-libs/gst/vulkan/gstvkbufferpool.h: + * gst-libs/gst/vulkan/gstvkcommandbuffer.h: + * gst-libs/gst/vulkan/gstvkcommandpool.h: + * gst-libs/gst/vulkan/gstvkdescriptorcache.h: + * gst-libs/gst/vulkan/gstvkdescriptorpool.h: + * gst-libs/gst/vulkan/gstvkdescriptorset.h: + * gst-libs/gst/vulkan/gstvkdevice.h: + * gst-libs/gst/vulkan/gstvkdisplay.h: + * gst-libs/gst/vulkan/gstvkfence.h: + * gst-libs/gst/vulkan/gstvkfullscreenquad.h: + * gst-libs/gst/vulkan/gstvkhandle.h: + * gst-libs/gst/vulkan/gstvkimagebufferpool.h: + * gst-libs/gst/vulkan/gstvkimagememory.h: + * gst-libs/gst/vulkan/gstvkimageview.h: + * gst-libs/gst/vulkan/gstvkinstance.h: + * gst-libs/gst/vulkan/gstvkoperation.h: + * gst-libs/gst/vulkan/gstvkphysicaldevice.h: + * gst-libs/gst/vulkan/gstvkqueue.h: + * gst-libs/gst/vulkan/gstvkswapper.h: + * gst-libs/gst/vulkan/gstvktrash.h: + * gst-libs/gst/vulkan/gstvkvideofilter.h: + * gst-libs/gst/vulkan/gstvkwindow.h: + * gst-libs/gst/vulkan/wayland/gstvkdisplay_wayland.h: + * gst-libs/gst/webrtc/ice.h: + * gst-libs/gst/webrtc/icestream.h: + * gst-libs/gst/webrtc/rtcsessiondescription.h: + bad: Add G_GNUC_WARN_UNUSED_RESULT to funcs with transfer full returns + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9771> + +2025-09-28 10:19:26 +0100 Philippe Normand <philn@igalia.com> + + * ext/wpe2/gstwpedisplay.cpp: + * ext/wpe2/gstwpethreadedview.cpp: + * ext/wpe2/meson.build: + wpe2: Require wpewebkit >= 2.50 + The API slightly changed since 2.48 and also the WPEPlatform lib is now part of + libWPEWebKit, so there's no need to check it anymore during the meson setup. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9758> 2025-09-26 16:58:38 +0900 Seungha Yang <seungha@centricular.com> @@ -682,7 +3511,24 @@ cuda: Fix runtime kernel compile with CUDA 13.0 Instead of hardcoded value, checks compute compatibility at runtime Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4655 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9769> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9745> + +2025-09-24 15:33:02 -0400 Doug Nazar <nazard@nazar.ca> + + * gst-libs/gst/audio/gstplanaraudioadapter.c: + gst: fixes + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9766> + +2025-09-29 14:44:32 -0400 Xavier Claessens <xclaessens@netflix.com> + + * ext/svtjpegxs/meson.build: + * sys/aja/meson.build: + meson: Remove "allow_fallback: true" from non-essential deps + It means that if that dependency is not found on the system, and the + corresponding feature option is set to "auto", it won't build the + fallback subproject. + This reduces the size and build time of the default build. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9767> 2025-09-22 12:00:32 +0300 Sebastian Dröge <sebastian@centricular.com> @@ -694,7 +3540,7 @@ the same way as ffmpeg. 
See https://github.com/ZLMediaKit/ZLMediaKit/issues/4461 Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4645 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9765> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9719> 2025-09-25 22:42:12 -0400 Doug Nazar <nazard@nazar.ca> @@ -705,7 +3551,124 @@ * sys/decklink/gstdecklinkvideosink.cpp: * sys/uvcgadget/gstuvcsink.c: gst: Fix a few small leaks - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9761> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9756> + +2025-09-05 11:49:55 +0800 Qian Hu (胡骞) <qian.hu@mediatek.com> + + * ext/wayland/gstwaylandsink.c: + * gst-libs/gst/wayland/gstwlwindow.c: + * gst-libs/gst/wayland/gstwlwindow.h: + waylandsink: handle flush stop event + When it gets a flushing seek, waylandsink will re-preroll. + If next_buffer and staged_buffer are both non-NULL, + gst_wl_window_render will return FALSE and preroll + will fail. + This patch clears staged_buffer on the flush-stop event to make + sure preroll succeeds + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9642> + +2025-09-17 08:38:48 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> + + * gst-libs/gst/wayland/gstwlwindow.c: + wayland: window: Protect against too small window + Fix a protocol error due to a 0-sized destination. When the compositor decides we + should render to a really small window such as 1x1, the inner rectangle for the + video surface may end up with a width or height of 0, which is a protocol error. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9653> + +2025-09-17 08:37:43 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> + + * gst-libs/gst/wayland/gstwlwindow.c: + wayland: window: Move commit outside of the video resize helper + This is a minor cleanup that helps readability by removing a boolean parameter + from the function. No functional change. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9653> + +2025-09-17 08:34:54 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> + + * ext/wayland/gstwaylandsink.c: + * gst-libs/gst/wayland/gstwlwindow.c: + * gst-libs/gst/wayland/gstwlwindow.h: + waylandsink: Open toplevel fullscreen window on the selected output + Adds a new helper to create a toplevel that is configured to fullscreen on the + selected output immediately. This also allows selecting the fullscreen-output + when using the fullscreen shell protocol. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9653> + +2025-09-08 15:55:35 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> + + * gst-libs/gst/wayland/gstwlwindow.c: + wayland: window: Delay the render rectangle once the surface is configured + Updating the surface before sending the configure ack results in bad window + placement. Handle this situation as if the window is being created, by + unsetting the configured flag. + Once we have acked the configuration, always update the geometry, which ensures we + have a commit after that event regardless of whether the dimensions changed or not. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9653> + +2025-09-08 15:53:58 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> + + * gst-libs/gst/wayland/gstwlwindow.c: + wayland: window: Flush the queue after sending fullscreen command + Flushing the queue allows for immediate changes. 
This is needed to ensure the + switch from/to fullscreen, or changing the display, happens in the paused state. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9653> + +2025-09-08 10:44:47 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> + + * gst-libs/gst/wayland/gstwlwindow.c: + wayland: window: Trace the configured top level surface state + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9653> + +2025-09-05 15:04:49 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> + + * docs/plugins/gst_plugins_cache.json: + * ext/wayland/gstwaylandsink.c: + * ext/wayland/gstwaylandsink.h: + waylandsink: Add a fullscreen-output property + New property that lets you specify which output to fullscreen to. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9653> + +2025-09-05 15:02:57 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> + + * gst-libs/gst/wayland/gstwlwindow.c: + * gst-libs/gst/wayland/gstwlwindow.h: + wayland: window: Allow to fullscreen on a specific output + This adds a new method that allows fullscreening the window on a specific + display. This can be useful for simple applications that just want to render + fullscreen and have multiple possible outputs. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9653> + +2025-09-05 15:00:38 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> + + * gst-libs/gst/wayland/gstwldisplay.c: + * gst-libs/gst/wayland/gstwldisplay.h: + wayland: display: Enumerate wl_output + Catch the wl_output global object and, using its listener, gather all the wl_output + information and store it by name into a hashmap. The GstWlOutput can be + obtained by name using a new function in the GstWlDisplay API. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9653> + +2025-09-05 14:58:23 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> + + * gst-libs/gst/wayland/gstwloutput-private.h: + * gst-libs/gst/wayland/gstwloutput.c: + * gst-libs/gst/wayland/gstwloutput.h: + * gst-libs/gst/wayland/meson.build: + * gst-libs/gst/wayland/wayland.h: + wayland: Add object to store wl_output information + This is a GObject that can be used to store all the information about a + wl_output. The setters are private, while the getters are public. Thread safety + should be done by maintaining a ref count on the object during reads. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9653> + +2025-09-24 19:26:51 +0900 Seungha Yang <seungha@centricular.com> + + * sys/nvcodec/gstcudaconvertscale.c: + cudaconvert: Fix crop meta support + When in/output caps are identical, even if downstream didn't propose + pool, always respond to support crop meta + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9734> 2025-09-24 19:37:00 +0900 Seungha Yang <seungha@centricular.com> @@ -713,7 +3676,41 @@ d3d12convert: Fix crop meta support When in/output caps are identical, even if downstream didn't propose pool, always respond to support crop meta - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9743> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9735> + +2025-09-17 11:17:59 +0200 Stéphane Cerveau <scerveau@igalia.com> + + * gst-libs/gst/isoff/gstisoff.c: + isoff: fix fall-through warnings + Since !8229 the compiler checks + fallthroughs; explicitly add + G_GNUC_FALLTHROUGH to signal that + this is expected. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9703> + +2025-09-09 10:12:00 -0400 Olivier Crête <olivier.crete@collabora.com> + + * ext/tflite/gsttflitevsiinference.c: + tflitevsiinference: Replace renamed API + The renamed API wasn't actually the one we wanted, replace it with + the correct one. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9731> + +2025-09-11 15:29:08 -0400 Julian Bouzas <julian.bouzas@collabora.com> + + * ext/lcevcencoder/gstlcevch266enc.c: + * ext/lcevcencoder/gstlcevch266enc.h: + * ext/lcevcencoder/meson.build: + * ext/lcevcencoder/plugin.c: + lcevcencoder: Add lcevch266enc element + This new element allows encoding LCEVC H.266 video. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9705> + +2025-09-23 20:29:11 +0200 David Maseda Neira <david.maseda@cinfo.es> + + * sys/nvcodec/gstnvh264encoder.cpp: + nvcodec: Ensure interlace is used only when required and supported + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9730> 2025-09-23 09:06:09 +0300 Sebastian Dröge <sebastian@centricular.com> @@ -724,13 +3721,35 @@ audiotestsrc is-live=true ! audio/x-raw,channels=8 ! opusenc ! mpegtsmux ! fakesink opusenc selects the Vorbis channel layout family but a channel-mapping that is not one of the ones supported by the short MPEG-TS Opus channel configurations. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9733> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9729> + +2025-08-11 13:56:14 -0400 Olivier Crête <olivier.crete@collabora.com> + + * docs/plugins/gst_plugins_cache.json: + * ext/analyticsoverlay/gstobjectdetectionoverlay.c: + objectdetectionoverlay: In the presence of tracking Mtd, draw different colors + It will create one color per track, trying to use colors which are as different + as possible from each other. 
There is a property to control that behaviour. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9538> + +2025-08-11 13:55:49 -0400 Olivier Crête <olivier.crete@collabora.com> + + * ext/analyticsoverlay/gstobjectdetectionoverlay.c: + objectdetectionoverlay: Remove unused variable + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9538> + +2025-08-11 11:29:11 -0400 Olivier Crête <olivier.crete@collabora.com> + + * docs/plugins/gst_plugins_cache.json: + * ext/analyticsoverlay/gstobjectdetectionoverlay.c: + objectdetectionoverlay: Add option to not draw tracking labels + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9538> 2025-09-22 10:09:48 +0800 Cheah, Vincent Beng Keat <vincent.beng.keat.cheah@intel.com> * sys/va/gstvacompositor.c: vacompositor: Correct scale-method properties - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9726> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9718> 2025-09-08 17:50:32 -0400 Doug Nazar <nazard@nazar.ca> @@ -739,7 +3758,7 @@ * gst/debugutils/gsttestsrcbin.c: gst: Don't use g_assert() around production code If G_DISABLE_ASSERT is defined the code will not be compiled. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9721> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9662> 2025-09-16 19:00:47 +0530 Nirbheek Chauhan <nirbheek@centricular.com> @@ -749,7 +3768,17 @@ Cflags: even though pc files should not contain -W flags. Worse, our plugin is written in C but that's a C++ argument so GCC emits a warning about that. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9715> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9695> + +2025-09-03 17:16:38 +0200 Stéphane Cerveau <scerveau@igalia.com> + + * tests/check/libs/vkvideoencodeav1.c: + vulkan: fix AV1 encode test with TILE_GROUP + Add support for GST_AV1_OBU_TILE_GROUP. + For example, RADV generates frames with two OBUs, + such as FRAME_HEADER + TILE_GROUP, where + nvidia generates only a FRAME OBU. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9631> 2025-09-16 16:38:22 -0700 Xavier Claessens <xclaessens@netflix.com> @@ -761,29 +3790,153 @@ call. Ideally we should serialize metas into a memfd in that case, instead of writing the data on the socket. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9706> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9701> + +2025-08-29 12:13:58 +0200 Stéphane Cerveau <scerveau@igalia.com> + + * ext/vulkan/vkh264dec.c: + * ext/vulkan/vkh265dec.c: + * gst-libs/gst/vulkan/gstvkdecoder-private.c: + * gst-libs/gst/vulkan/gstvkdecoder-private.h: + * gst-libs/gst/vulkan/gstvkvideo-private.c: + * tests/check/libs/vkvideodecode.c: + vkdecoder: enable INLINE_PARAMS in the decoder + GST_VULKAN_DECODER_FEATURE_INLINE_PARAMS should enable + VK_VIDEO_SESSION_CREATE_INLINE_SESSION_PARAMETERS_BIT_KHR + in the decoder session creation. + Add a new API, gst_vulkan_decoder_has_feature, and keep + `features` private. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9616> + +2025-08-29 12:12:24 +0200 Stéphane Cerveau <scerveau@igalia.com> + + * ext/vulkan/vkh264dec.c: + * ext/vulkan/vkh265dec.c: + * gst-libs/gst/vulkan/gstvkdecoder-private.c: + * gst-libs/gst/vulkan/gstvkdecoder-private.h: + * tests/check/libs/vkvideodecode.c: + vkdecoder: fix typo and rename MAINTENANCE2 feat + The feature GST_VULKAN_DECODER_FEATURES_VIDEO_MAINTEINANCE2 + has been renamed to GST_VULKAN_DECODER_FEATURE_INLINE_PARAMS + to clarify the use of this feature in the rest of the code. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9616> + +2025-09-16 18:01:35 +0900 Seungha Yang <seungha@centricular.com> + + * sys/d3d12/gstd3d12overlaycompositor.cpp: + d3d12overlaycompositor: Fix leak and improve passthrough + Allow buffer passthrough when the composition meta + contains no rectangles + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9693> + +2025-09-16 17:35:17 +0900 Seungha Yang <seungha@centricular.com> + + * sys/d3d12/gstd3d12overlayblender.cpp: + d3d12overlayblender: Rectangle upload optimization + Removed GList usage and reworked upload logic + to reduce overhead + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9693> 2025-09-16 15:24:22 +0900 Seungha Yang <seungha@centricular.com> * sys/d3d12/gstd3d12deinterlace.cpp: d3d12deinterlace: Fix passthrough handling Don't try to convert buffer when passthrough is enabled - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9696> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9691> -2025-09-14 13:19:28 +0100 Tim-Philipp Müller <tim@centricular.com> +2025-09-03 11:32:19 -0400 Xavier Claessens <xclaessens@netflix.com> - * meson.build: - Back to development after 1.26.6 + * ext/analyticsoverlay/gstobjectdetectionoverlay.c: + * ext/vulkan/vkoverlaycompositor.c: + * 
sys/d3dvideosink/d3dhelpers.h: + * sys/d3dvideosink/d3dvideosink.c: + * sys/d3dvideosink/gstd3d9overlay.c: + * sys/dwrite/gstdwriteoverlayobject.cpp: + GstVideoOverlayCompositionMeta: Fix multiple composition meta usage + This deprecates gst_buffer_get_video_overlay_composition_meta() and + stops using it. The reason is a buffer could have multiple composition + metas, and each of them can have multiple rectangles. Sinks and + compositor elements must iterate over all metas instead of assuming + there is only one. + Discourage usage of gst_video_overlay_composition_make_writable() and + gst_video_overlay_composition_copy() in documentation. Instead of + modifying upstream's composition meta, overlay elements should add their + own meta. This avoids texture cache invalidation in sinks and compositor + elements that keep a ref of GstVideoOverlayRectangle objects. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7627> -=== release 1.26.6 === +2025-09-12 17:31:01 +0900 Seungha Yang <seungha@centricular.com> -2025-09-14 13:13:58 +0100 Tim-Philipp Müller <tim@centricular.com> + * gst-libs/gst/d3dshader/gstd3dshadercache.cpp: + * gst-libs/gst/d3dshader/gstd3dshadercache.h: + * gst-libs/gst/d3dshader/plugin-hlsl/PSMain_sample_bgra_to_vuya_full.hlsl: + * gst-libs/gst/d3dshader/plugin-hlsl/PSMain_sample_bgra_to_vuya_full_premul.hlsl: + * gst-libs/gst/d3dshader/plugin-hlsl/PSMain_sample_bgra_to_vuya_limited.hlsl: + * gst-libs/gst/d3dshader/plugin-hlsl/PSMain_sample_bgra_to_vuya_limited_premul.hlsl: + * gst-libs/gst/d3dshader/plugin-hlsl/hlsl.h: + * gst-libs/gst/d3dshader/plugin-hlsl/meson.build: + * sys/d3d12/gstd3d12overlayblender.cpp: + * sys/d3d12/gstd3d12overlaycompositor.cpp: + * sys/d3d12/gstd3d12overlaycompositor.h: + * sys/d3d12/meson.build: + * sys/d3d12/plugin.cpp: + d3d12: Add overlay compositor element + Introduce a new d3d12overlaycompositor element + that blends GstVideoOverlayCompositionMeta attached to input buffers + onto output 
D3D12 textures + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9683> - * NEWS: - * RELEASE: - * gst-plugins-bad.doap: - * meson.build: - Release 1.26.6 +2025-09-12 17:48:36 +0900 Seungha Yang <seungha@centricular.com> + + * sys/d3d12/gstd3d12overlayblender.cpp: + * sys/d3d12/gstd3d12overlayblender.h: + * sys/d3d12/gstd3d12overlaycompositor.h: + * sys/d3d12/gstd3d12swapchainsink.cpp: + * sys/d3d12/gstd3d12window-swapchain-resource.h: + * sys/d3d12/gstd3d12window-swapchain.cpp: + * sys/d3d12/gstd3d12window.cpp: + * sys/d3d12/meson.build: + d3d12: Change overlay blending helper object name + Change the name from overlaycompositor to overlayblender + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9683> + +2025-09-11 20:57:34 +0900 Seungha Yang <seungha@centricular.com> + + d3d12: Add interlace element + Add a new interlace element using D3D12 compute shaders, + providing the same behavior as the software interlace element. + Currently supported patterns: + * 1:1 (60p -> 60i), generating half the number of output frames + * 2:2 (30p -> 60i), implemented as passthrough with buffer flag update + only + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9680> + +2025-09-09 21:55:08 +0900 Seungha Yang <seungha@centricular.com> + + * sys/wasapi2/gstwasapi2rbuf.cpp: + wasapi2: Auto-select IAudioClient3 and pick shared-mode period + Use IAudioClient3 in shared mode when the requested latency-time + is below the engine default period even if low-latency is + not explicitly requested.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9666> + +2025-09-09 19:30:36 +0900 Seungha Yang <seungha@centricular.com> + + * sys/wasapi2/gstwasapi2device.cpp: + * sys/wasapi2/gstwasapi2enumerator.cpp: + * sys/wasapi2/gstwasapi2enumerator.h: + wasapi2: Probe device period and report via device provider + Report IAudioClient::GetDevicePeriod() and IAudioClient3::GetSharedModeEnginePeriod() + values + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9666> + +2025-09-09 19:31:39 +0900 Seungha Yang <seungha@centricular.com> + + * sys/wasapi2/gstwasapi2enumerator.cpp: + wasapi2: Fix shared mode caps report in device provider + Build shared mode caps using corresponding IAudioClient, + not default device + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9666> 2025-09-12 21:35:58 +0900 Seungha Yang <seungha@centricular.com> @@ -796,36 +3949,205 @@ As a result, the resource may not be writable if the conversion command has not yet finished. To address this, add a private method that allows setting the fence without a writability check - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9686> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9681> -2025-07-23 11:37:15 -0400 Xavier Claessens <xclaessens@netflix.com> +2025-09-13 20:04:06 +0900 Seungha Yang <seungha@centricular.com> - * tests/check/elements/mpegtsmux.c: - mpegtsmux: Caps event fails with stream type change error - If mpegtsmux receives the same caps again, it wrongly claims the stream - type changed: - error: Stream type change from 06 to 8f not supported - This adds a unit test that demonstrates the issue in a very hacky way. - I have seen this happening with the below pipeline when upstream caps - change. Since the caps filter fixates the caps received by opusenc and - mpegtsmux, the stream type cannot change. - ... - ! audioconvert - !
audio/x-raw,format=S16LE,channels=2,rate=48000 - ! opusenc bitrate=128000 - ! mpegtsmux - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9678> + * sys/d3d12/gstd3d12memorycopy.cpp: + d3d12upload, d3d12download: Use internal staging pool + When copying between system memory and a D3D12 resource, + if the non-D3D12 buffer is not backed by D3D12 staging memory + (e.g. use-staging-memory is disabled or upstream provides + its own buffer pool), fall back to the internal staging + memory pool. The staging pool enables batched copies, which + is more efficient than copying each GstD3D12Memory object + individually in a GstBuffer + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9648> -2025-08-07 14:31:38 +1000 Jan Schmidt <jan@centricular.com> +2025-09-13 20:34:30 +0900 Seungha Yang <seungha@centricular.com> - * gst/mpegtsmux/gstbasetsmux.c: - mpegtsmux: Use 'internal' stream_type to detect codec changes - The TsMuxStream internal_stream_type field stores the original - 'full' stream type (such as Opus), while the stream_type field - stores the value that will actually be written into the MPEG-TS - packets according to the codec mappings. When checking if - input caps are changing stream type, check the original type. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9678> + * sys/d3d12/gstd3d12memorycopy.cpp: + d3d12upload, d3d12download: Add use-staging-memory property + Since the maximum allocatable staging memory size is about half of the + total system memory, we might run out of available staging memory earlier + than system memory. 
This adds a property to allow choosing the preferred + memory target for upload/download + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9648> + +2025-09-05 20:18:43 +0900 Seungha Yang <seungha@centricular.com> + + * sys/d3d12/gstd3d12swapchainsink.cpp: + * sys/d3d12/gstd3d12videosink.cpp: + d3d12videosink, d3d12swapchainsink: Port to gst_d3d12_buffer_copy_into() + Use copy helper function to support both uploading from system + memory and staging memory + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9648> + +2025-09-05 19:52:25 +0900 Seungha Yang <seungha@centricular.com> + + * sys/d3d12/gstd3d12download.cpp: + * sys/d3d12/gstd3d12download.h: + * sys/d3d12/gstd3d12memorycopy.cpp: + * sys/d3d12/gstd3d12upload.cpp: + * sys/d3d12/gstd3d12upload.h: + * sys/d3d12/meson.build: + * sys/d3d12/plugin.cpp: + d3d12: Maintain only single upload/download implementation + Remove the ones that were for d3d11-disabled builds, and use #ifdef in a single source + file + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9648> + +2025-09-05 18:05:24 +0900 Seungha Yang <seungha@centricular.com> + + * sys/d3d12/gstd3d12memorycopy.cpp: + d3d12download, d3d12upload: Add support for staging memory + * Use newly implemented staging memory for upload/download + operations to allow copying from/to resources + on the D3D12 default heap directly without an extra copy + using system memory + * Add 'queue-type' property to let users select the preferred + command queue type for copy command execution + In addition to removing the extra copy via staging memory, + copy commands can also be batched into a single command list + in case of non-DXGI-native multi-plane formats, such as I420. + This can result in up to 3x faster copy performance.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9648> + +2025-09-05 16:31:15 +0900 Seungha Yang <seungha@centricular.com> + + * gst-libs/gst/d3d12/gstd3d12frame.cpp: + * gst-libs/gst/d3d12/gstd3d12frame.h: + * gst-libs/gst/d3d12/gstd3d12utils.cpp: + * gst-libs/gst/d3d12/gstd3d12utils.h: + d3d12: Add new gst_d3d12_frame_copy variant methods + Allow user to specify command queue to use + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9648> + +2025-09-02 21:13:44 +0900 Seungha Yang <seungha@centricular.com> + + * gst-libs/gst/d3d12/gstd3d12.h: + * gst-libs/gst/d3d12/gstd3d12_fwd.h: + * gst-libs/gst/d3d12/gstd3d12stagingbufferpool.cpp: + * gst-libs/gst/d3d12/gstd3d12stagingbufferpool.h: + * gst-libs/gst/d3d12/gstd3d12stagingmemory.cpp: + * gst-libs/gst/d3d12/gstd3d12stagingmemory.h: + * gst-libs/gst/d3d12/meson.build: + d3d12: Add staging memory implementation + Add GstD3D12StagingMemory and GstD3D12StagingBufferPool + that can be used for temporary storage of GPU processed data. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9648> + +2025-08-27 14:24:43 +0100 Philippe Normand <philn@igalia.com> + + * docs/plugins/gst_plugins_cache.json: + * ext/webrtc/gstwebrtcbin.c: + * tests/check/elements/webrtcbin.c: + webrtcbin: Add a close signal + This is a partial implementation of the close procedure defined in + https://www.w3.org/TR/webrtc/#dom-rtcpeerconnection-close + Most notably the transceiver stopping procedure is not supported because it + doesn't fit properly within our transceiver implementation.
+ Fixes #2760 + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9379> + +2025-07-12 13:12:06 +0100 Philippe Normand <philn@igalia.com> + + * gst-libs/gst/webrtc/nice/nice.c: + webrtc: nice: Implement close API + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9379> + +2025-07-12 13:11:32 +0100 Philippe Normand <philn@igalia.com> + + * gst-libs/gst/webrtc/ice.c: + * gst-libs/gst/webrtc/ice.h: + webrtc: ice: Add close API + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9379> + +2025-09-05 22:40:57 +0900 Seungha Yang <seungha@centricular.com> + + * sys/wasapi2/gstwasapi2util.cpp: + wasapi2: Prefer QUAD over 3.1 for 4ch layout + ... and add missing 3, 5, and 7ch layout fallback. + QUAD is more common 4ch configuration than 3.1 + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9649> + +2025-09-05 22:38:26 +0900 Seungha Yang <seungha@centricular.com> + + * sys/wasapi2/gstwasapi2util.cpp: + wasapi2: Preserve channel mask from device/mix format + Ensure that the channel mask from the mix format (shared mode) or + PKEY_AudioEngine_DeviceFormat (exclusive mode) is inherited by + generated format candidates for consistency + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9649> + +2025-09-07 20:39:44 +0100 Tim-Philipp Müller <tim@centricular.com> + + * meson.build: + Back to development after 1.27.2 + +=== release 1.27.2 === + +2025-09-07 20:34:55 +0100 Tim-Philipp Müller <tim@centricular.com> + + * NEWS: + * RELEASE: + * gst-plugins-bad.doap: + * meson.build: + Release 1.27.2 + +2025-09-06 11:08:47 +0300 Sebastian Dröge <sebastian@centricular.com> + + * ext/sndfile/gstsfdec.c: + * ext/vulkan/vkdownload.c: + * ext/vulkan/vksink.c: + * ext/vulkan/vkupload.c: + * ext/webrtc/gstwebrtcbin.c: + * ext/webrtc/transportreceivebin.c: + * ext/webrtc/transportsendbin.c: + * gst-libs/gst/mse/gstappendpipeline.c: + * 
gst-libs/gst/play/gstplay.c: + * gst-libs/gst/transcoder/gsttranscoder.c: + * gst/rtp/gstrtpsink.c: + * gst/rtp/gstrtpsrc.c: + * sys/androidmedia/gstamcvideodec.c: + * sys/decklink/gstdecklinkvideosink.cpp: + * sys/ipcpipeline/gstipcpipelinecomm.c: + * sys/ipcpipeline/gstipcpipelinesink.c: + * sys/ipcpipeline/gstipcpipelinesrc.c: + * sys/uvcgadget/gstuvcsink.c: + * tests/check/elements/adaptive_demux_common.c: + * tests/check/elements/mpegtsmux.c: + * tests/check/elements/webrtcbin.c: + * tests/examples/camerabin2/gst-camerabin2-test.c: + * tests/examples/d3d12/d3d12swapchainsink-win32.cpp: + * tests/examples/inter/gstintertest.c: + * tests/examples/ipcpipeline/ipc-play.c: + * tests/examples/mediafoundation/mfvideoenc-dynamic-reconfigure.c: + * tests/examples/nvcodec/nvcodec.c: + * tests/examples/webrtc/webrtc.c: + * tests/examples/webrtc/webrtcbidirectional.c: + * tests/examples/webrtc/webrtcrenego.c: + * tests/examples/webrtc/webrtcswap.c: + * tests/examples/webrtc/webrtctransceiver.c: + * tests/examples/wpe/wpe.c: + gst: Change usage of gst_element_state_*() to gst_state_*() + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9655> + +2025-09-01 11:30:46 +0900 Seungha Yang <seungha@centricular.com> + + * sys/wasapi2/gstwasapi2rbuf.cpp: + wasapi2: Add support for exclusive mode mute control + In case the IAudioStreamVolume interface is unavailable, such as in + exclusive mode, control the mute state by using AUDCLNT_BUFFERFLAGS_SILENT + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9624> + +2025-09-01 10:49:37 +0900 Seungha Yang <seungha@centricular.com> + + * sys/wasapi2/gstwasapi2rbuf.cpp: + wasapi2: Handle device init error on acquire() + Don't post an error if IAudioClient::Initialize() fails but + continue-on-error is enabled + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9624> 2025-09-05 08:25:22 +0200 Ola Fornander <ola.fornander@axis.com> @@ -837,7 +4159,41 @@
characters in the range of 1 through 127 and interpreted as US-ASCII characters. Hence, when using g_date_time_format, it is necessary to instead write %_e to enforce space padding. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9645> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9639> + +2025-09-04 09:37:18 +0200 Stéphane Cerveau <scerveau@igalia.com> + + * gst-libs/gst/vulkan/gstvkvideoutils-private.c: + vkvideoutils: fix typo in vp9 profile map + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9638> + +2025-08-25 12:39:05 -0400 Julian Bouzas <julian.bouzas@collabora.com> + + * ext/lcevcencoder/gstlcevch265enc.c: + * ext/lcevcencoder/gstlcevch265enc.h: + * ext/lcevcencoder/meson.build: + * ext/lcevcencoder/plugin.c: + lcevcencoder: Add lcevch265enc element + This new element allows encoding video into H265 LCEVC streams. It follows the + same design as lcevch264enc. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9632> + +2024-10-31 15:59:35 -0400 Olivier Crête <olivier.crete@collabora.com> + + * ext/lcevcencoder/README.md: + lcevcencoder: Add ldconfig and install steps to the readme + Without ldconfig, the library isn't found at runtime! + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7805> + +2025-09-02 11:32:18 +0100 Philippe Normand <philn@igalia.com> + + * tests/examples/inter/gstintertest.c: + * tests/examples/inter/meson.build: + * tests/examples/meson.build: + inter: Move intertest example to tests/examples/inter + Also fix a couple leaks, make it use playbin3, add URI command line argument + handling, use gst_print functions and remove dead code. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9627> 2025-09-02 09:06:20 +0530 Nirbheek Chauhan <nirbheek@centricular.com> @@ -845,7 +4201,7 @@ meson: Avoid pulling in gtest for openh264 Emits a big warning about wrapdbv1 and the updated wrap fails to build on Windows. We don't need the tests anyway. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9637> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9626> 2025-09-01 21:02:22 +0530 Nirbheek Chauhan <nirbheek@centricular.com> @@ -865,7 +4221,120 @@ Only commonly-used plugin deps like pango, orc, openh264, libvpx, libnice are enabled by default. Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/1788 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9637> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9626> + +2025-08-09 23:16:09 +0900 Seungha Yang <seungha@centricular.com> + + * sys/nvcodec/gstcudacompositor.cpp: + cudacompositor: Add support for crop meta + GstCudaConverter object can support cropping now + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9523> + +2025-08-09 03:23:38 +0900 Seungha Yang <seungha@centricular.com> + + * sys/nvcodec/gstcudaconverter.cpp: + * sys/nvcodec/gstcudaconvertscale.c: + * sys/nvcodec/kernel/gstcudaconverter.cu: + cudaconvertscale: Add support for crop meta + Performs cropping based on upstream attached crop meta + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9523> + +2025-08-09 03:55:49 +0900 Seungha Yang <seungha@centricular.com> + + * gst-libs/gst/cuda/gstcudamemory.cpp: + * gst-libs/gst/cuda/gstcudamemory.h: + cudamemory: Add gst_cuda_allocator_alloc_stream_ordered() method + Allow stream ordered memory allocation without GstCudaBufferPool + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9523> + +2025-08-09 
03:53:29 +0900 Seungha Yang <seungha@centricular.com> + + * gst-libs/gst/cuda/gstcudamemory.cpp: + cudamemory: Add ARGB64 format support + The format will be used for intermediate data processing for now + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9523> + +2025-08-09 03:50:08 +0900 Seungha Yang <seungha@centricular.com> + + * gst-libs/gst/cuda/gstcudamemory.h: + cudamemory: Add GST_MAP_{READ,WRITE}_CUDA macro + Instead of casting the alias + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9523> + +2025-08-09 02:30:34 +0900 Seungha Yang <seungha@centricular.com> + + * gst-libs/gst/cuda/gstcudamemory.cpp: + cudamemory: Add VUYA texture mapping + Support VUYA texture caching + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9523> + +2025-08-22 14:01:44 -0400 Julian Bouzas <julian.bouzas@collabora.com> + + * ext/lcevcdecoder/gstlcevcdecodebin.c: + lcevcdecodebin: Update the base decoder when setting base-decoder property + Currently, the base-decoder property only works when setting it while + constructing the element, but does not work if we set the property after + constructing the element. This patch fixes this issue so that the property + can be set with gst-launch-1.0. Note that the property can only be set if + the element is in NULL state. + Fixes #4594 + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9599> + +2025-05-28 17:23:27 +0200 Stéphane Cerveau <scerveau@igalia.com> + + * ext/vulkan/vkvp9dec.c: + vkvp9dec: dpb pool created with vulkan caps max coded size + The dpb pool should be created with the max supported size + to avoid a dpb pool recreation on resize event. + When the pool is destroyed during resolution changes, previously decoded + reference frames stored in the DPBs are lost, which can cause decoding + errors or corruption when those reference frames are needed for + inter-frame prediction at different resolutions.
By sizing the pool for + the maximum supported resolution upfront, we ensure reference frame + continuity across resolution changes. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9204> + +2025-07-08 11:44:18 +0200 Stéphane Cerveau <scerveau@igalia.com> + + * ext/vulkan/gstvulkan.c: + * ext/vulkan/meson.build: + * ext/vulkan/vkvp9dec.c: + * ext/vulkan/vkvp9dec.h: + vulkan: add vp9 decode element + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9204> + +2025-03-26 17:29:55 +0100 Stéphane Cerveau <scerveau@igalia.com> + + * tests/check/libs/vkcodecparams_vp9.c: + * tests/check/libs/vkvideodecode.c: + tests: add vp9 vulkan video decode + This test allows decoding one key frame + and one inter frame + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9204> + +2025-03-26 15:20:14 +0100 Stéphane Cerveau <scerveau@igalia.com> + + * gst-libs/gst/vulkan/gstvkdecoder-private.c: + * gst-libs/gst/vulkan/gstvkdevice.c: + * gst-libs/gst/vulkan/gstvkphysicaldevice-private.h: + * gst-libs/gst/vulkan/gstvkphysicaldevice.c: + * gst-libs/gst/vulkan/gstvkvideo-private.c: + * gst-libs/gst/vulkan/gstvkvideo-private.h: + * gst-libs/gst/vulkan/gstvkvideoutils-private.c: + * gst-libs/gst/vulkan/gstvkvideoutils-private.h: + * gst-libs/gst/vulkan/meson.build: + vulkan: add vp9 decode support + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9204> + +2025-05-28 16:12:07 +0200 Stéphane Cerveau <scerveau@igalia.com> + + * gst-libs/gst/vulkan/gstvkdecoder-private.c: + vkdecoder-private: manage existing dpb pool + When the decoder wants to recreate the dpb pool + on resize event for example, an existing dpb pool + might exist, so it should be kept if the caps + are equal or destroyed for new caps.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9614> 2025-08-26 20:17:36 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> @@ -873,7 +4342,65 @@ * ext/vulkan/vkh265dec.c: vulkanh26xdec: fix discont state handling It fixes a couple tests in fluster for H.265 decoding. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9613> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9610> + +2025-07-15 11:15:56 +0200 Stéphane Cerveau <scerveau@igalia.com> + + * tests/check/elements/vkcolorconvert.c: + * tests/check/elements/vkdeviceprovider.c: + * tests/check/elements/vkupload.c: + * tests/check/libs/vkcommandpool.c: + * tests/check/libs/vkdevice.c: + * tests/check/libs/vkformat.c: + * tests/check/libs/vkimage.c: + * tests/check/libs/vkimagebufferpool.c: + * tests/check/libs/vkinstance.c: + * tests/check/libs/vkmemory.c: + * tests/check/libs/vkvideodecode.c: + * tests/check/libs/vkvideoencodeh264.c: + * tests/check/libs/vkvideoencodeh265.c: + * tests/check/libs/vkwindow.c: + vulkan: tests: remove/update ci comments + Since the previous commit, the CI can now run vulkan + tests. Remove or update the comments related to CI. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9416> + +2025-07-11 18:04:04 +0200 Stéphane Cerveau <scerveau@igalia.com> + + * tests/check/gst-plugins-bad.supp: + ci: enable vulkan tests in validate + As CI now supports llvm 18 and mesa 24.1, which + allow lavapipe to be used properly in the CI, the vulkan + ci tests have been removed from the validate blacklist.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9416> + +2025-08-25 09:44:01 +0800 Vivian LEE <vivian.lee@harmonicinc.com> + + * ext/x265/gstx265enc.c: + * ext/x265/gstx265enc.h: + x265: Fix duplicate SEI at startup IDR frame problem + x265 encoder_headers returns headers with SEI after encoding the frame, + while the output frame also contains SEI, so two identical header + blocks appeared. + Cache the headers at init, leaving only a single copy in the stream. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9604> + +2025-08-20 15:32:15 +0300 Sebastian Dröge <sebastian@centricular.com> + + * ext/webrtc/gstwebrtcbin.c: + gst: Convert `is_writable()` / `make_writable()` macros to inline functions + Plus actual functions that are exported from the library. + Apart from improving type-safety, this also makes bindings more happy. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9588> + +2025-08-22 14:20:36 -0400 Daniel Morin <daniel.morin@collabora.com> + + * gst/tensordecoders/gstfacedetectortensordecoder.c: + tensordecoders: fix wrong assumption in ultralightfacedetectortensordec + - UltraLightFaceDetection was assuming only one TensorMeta could be attached to + a buffer. We need to look at all TensorMeta attached to the buffer and check for + the one it supports. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9564> 2025-08-17 15:58:22 -0400 Daniel Morin <daniel.morin@collabora.com> @@ -889,7 +4416,227 @@ multiple inference. I don't see much value in having all tensors data always inside one GstTensorMeta since appending would mean re-allocation of the tensors array anyway.
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9601> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9564> + +2025-08-22 13:09:31 +0900 Seungha Yang <seungha@centricular.com> + + * sys/wasapi2/gstwasapi2rbuf.cpp: + * sys/wasapi2/gstwasapi2util.cpp: + * sys/wasapi2/gstwasapi2util.h: + wasapi2: Add support for format negotiation + Enumerate supported formats during open so that src/sink can + report them via get_caps(). The format is then fixated and + initialized on acquire(), allowing users to select their + preferred format + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9586> + +2025-08-22 08:26:11 +0900 Seungha Yang <seungha@centricular.com> + + * sys/wasapi2/gstwasapi2enumerator.cpp: + * sys/wasapi2/gstwasapi2rbuf.cpp: + * sys/wasapi2/gstwasapi2util.cpp: + * sys/wasapi2/gstwasapi2util.h: + wasapi2: Enumerate supported shared mode formats + ... and report it via device provider property + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9586> + +2025-08-22 09:22:02 +0900 Seungha Yang <seungha@centricular.com> + + * sys/wasapi2/gstwasapi2rbuf.cpp: + wasapi2: Workaround for S24_32LE format mismatch + Since the Windows 24-bit-in-32-bit format is not supported + by GStreamer (the Windows one is MSB-aligned), convert the format + in the ringbuffer using SSE2. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9586> + +2025-08-20 18:14:35 +0900 Seungha Yang <seungha@centricular.com> + + * sys/wasapi2/gstwasapi2util.cpp: + wasapi2: Demote S24_32LE in exclusive-mode format ordering + Some endpoints accept 24-bit in 32-bit PCM (S24_32LE) in exclusive mode + but play back at very low volume.
Until the root cause is identified, + push S24_32LE to the end of the candidate list + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9586> + +2025-08-20 12:02:47 +0900 Seungha Yang <seungha@centricular.com> + + * sys/wasapi2/gstwasapi2rbuf.cpp: + * sys/wasapi2/gstwasapi2util.cpp: + * sys/wasapi2/gstwasapi2util.h: + wasapi2: Add support for exclusive mode device switching + Because of APO/OS mixer bypass in exclusive mode, we should + convert samples if new device has different format. + The conversion with additional buffering is implemented in this patch + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9586> + +2025-08-20 12:03:54 +0900 Seungha Yang <seungha@centricular.com> + + * sys/wasapi2/gstwasapi2rbuf.cpp: + wasapi2: Fix process loopback device init + Fix AUDCLNT_STREAMFLAGS_AUTOCONVERTPCM flag usage + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9586> + +2025-08-20 06:56:27 +0900 Seungha Yang <seungha@centricular.com> + + * sys/wasapi2/gstwasapi2rbuf.cpp: + * sys/wasapi2/gstwasapi2rbuf.h: + * sys/wasapi2/gstwasapi2sink.cpp: + * sys/wasapi2/gstwasapi2src.cpp: + wasapi2: Add support for exclusive mode + Add "exclusive" property and try exclusive mode streaming + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9586> + +2025-08-20 03:47:08 +0900 Seungha Yang <seungha@centricular.com> + + * sys/wasapi2/gstwasapi2device.cpp: + * sys/wasapi2/gstwasapi2enumerator.cpp: + * sys/wasapi2/gstwasapi2enumerator.h: + * sys/wasapi2/gstwasapi2rbuf.cpp: + * sys/wasapi2/gstwasapi2util.cpp: + * sys/wasapi2/gstwasapi2util.h: + wasapi2: Probe exclusive mode formats + ... 
and report it via device provider props + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9586> + +2025-08-08 16:46:48 +0200 Stéphane Cerveau <scerveau@igalia.com> + + * tests/check/libs/vkvideoencodeav1.c: + * tests/check/meson.build: + tests: add vulkan AV1 encode test + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8841> + +2024-12-17 18:49:22 +0100 Stéphane Cerveau <scerveau@igalia.com> + + * gst-libs/gst/vulkan/gstvkdevice.c: + * gst-libs/gst/vulkan/gstvkencoder-private.c: + * gst-libs/gst/vulkan/gstvkencoder-private.h: + * gst-libs/gst/vulkan/gstvkphysicaldevice-private.h: + * gst-libs/gst/vulkan/gstvkphysicaldevice.c: + * gst-libs/gst/vulkan/gstvkvideo-private.c: + * gst-libs/gst/vulkan/gstvkvideo-private.h: + * gst-libs/gst/vulkan/gstvkvideoutils-private.c: + * gst-libs/gst/vulkan/gstvkvideoutils-private.h: + vulkan: add basic AV1 encode support + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8841> + +2025-08-19 07:48:59 -0400 Julian Bouzas <julian.bouzas@collabora.com> + + * ext/lcevcdecoder/gstlcevcdecutils.c: + * ext/lcevcdecoder/gstlcevcdecutils.h: + lcevcdec: Support all available formats + RGB and GRAY formats are only placeholders in LCEVCDec and therefore are not + supported yet. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9463> + +2025-08-18 10:26:55 -0400 Julian Bouzas <julian.bouzas@collabora.com> + + * ext/lcevcdecoder/gstlcevcdecutils.c: + lcevcdec: Remove unneeded LCEVC 2.0.0 workaround + This is not needed anymore as the min version for LCEVCdec is 4.0.1 + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9463> + +2025-08-18 09:50:48 -0400 Julian Bouzas <julian.bouzas@collabora.com> + + * ext/lcevcdecoder/gstlcevcdec.c: + * ext/lcevcdecoder/gstlcevcdec.h: + * ext/lcevcdecoder/gstlcevcdecutils.c: + lcevcdec: Handle pixel aspect ratio and crop size correctly + LCEVCdec supports different pixel aspect ratios other than 1/1. This change + forwards the pixel aspect ratio of the base picture to the LCEVC decoder, + and also updates the output pixel aspect ratio caps base on the one from the + enhanced frame. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9463> + +2025-07-29 15:22:39 -0400 Julian Bouzas <julian.bouzas@collabora.com> + + * ext/lcevcdecoder/gstlcevcdec.c: + * ext/lcevcdecoder/gstlcevcdec.h: + lcevcdec: Peek the decoder for output resolution + The output resolution is not always twice as big as the input resultion divided + by the pixel aspect ratio. This is the case for LCEVC '0D' mode, where the + output resolution is the same as the input resolution, and the only enhancement + is the picture being clearer. + This patch uses LCEVC_PeekDecoder() after sending the LCEVC enhancement data to + know what the output resolution will be before allocating the output picture. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9463> + +2025-07-30 15:19:42 -0400 Julian Bouzas <julian.bouzas@collabora.com> + + * ext/lcevcdecoder/gstlcevcdec.c: + * ext/lcevcdecoder/gstlcevcdecutils.c: + * ext/lcevcdecoder/gstlcevcdecutils.h: + lcevcdec: Fix LCEVC picture access flags + Even though the LCEVC decoder works fine without this, it is recommended to + set read access to base pictures that are sent to the decoder, and write access + to enhanced pictures that are received from the decoder. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9463> + +2025-07-30 15:19:26 -0400 Julian Bouzas <julian.bouzas@collabora.com> + + * ext/lcevcdecoder/gstlcevcdec.c: + lcevcdec: Fix width type typo + This was always meant to be gint instead of gint32. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9463> + +2025-08-20 14:24:17 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * ext/vulkan/vkh264dec.c: + * ext/vulkan/vkh265dec.c: + * gst-libs/gst/vulkan/gstvkdecoder-private.c: + * gst-libs/gst/vulkan/gstvkdecoder-private.h: + * gst-libs/gst/vulkan/gstvkdevice.c: + * gst-libs/gst/vulkan/gstvkencoder-private.c: + * gst-libs/gst/vulkan/gstvkphysicaldevice-private.h: + * gst-libs/gst/vulkan/gstvkphysicaldevice.c: + * gst-libs/gst/vulkan/gstvkvideo-private.c: + * gst-libs/gst/vulkan/meson.build: + * tests/check/libs/vkvideodecode.c: + vulkan: enable video maintenance2 for inline session parameters + .. in decoders. + Inline session parameters allows to not create session parameters handlers for + every new stream parameters (such as SPS and PPS for H.264, for example), but + instead to pass them as a chained structure in the decoding main structure. This + is completely align with GStreamer decoder base classes. + Even that the previous approach is kept, if the devices doesn't support video + maintenance2, it shows a lot of validation errors. 
+ Also it was required to add another parameter when enabling extension to verify + if the extension is linked with a device feature and if it is enabled. + Bump Vulkan API (and driver version for both decoders and encoders) to 1.4.306 + Also bumped the ABI_CHECK_TAG because the CI finally catches up with the vulkan + video symbols that are not exposed by a public header (tough they are binary + public). + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9585> + +2025-08-20 14:29:17 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * gst-libs/gst/vulkan/gstvkvideo-private.c: + vkvideo-private: remove unused guards + Since this file is compiled only if vulkan video support is enabled with the + proper vulkan headers version. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9585> + +2025-08-20 14:12:54 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * gst-libs/gst/vulkan/gstvkdevice.c: + vkdevice: check for features when enabling extensions + Some extensions need to have enabled certain feature in the device. This patch + does that check by adding a new field in the extension list which is a function + that can be mapped to gst_vulkan_physical_device_has_feature_*() functions. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9585> + +2025-08-20 20:01:21 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * gst-libs/gst/vulkan/gstvkimagebufferpool.c: + vkimagesbufferpool: another usage for non-independent profile flag + Fix validation issue VUID-VkImageCreateInfo-flags-08329 on old RADV hardwware. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9591>
+
+2025-08-19 20:37:06 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
+
+ * ext/vulkan/vkh265dec.c:
+ vulkanh265dec: fix validation layer complaint
+ Silence the validation VUID-VkImageMemoryBarrier2-srcAccessMask-03915
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9591>

2025-08-19 11:15:29 +0100 Ian Napier <ian@digitaledgesubsea.com>

@@ -908,7 +4655,24 @@
 used by any other decklinkvideosrc. Now, we log an element error if
 StopStreams() fails but otherwise consider the state change to have succeeded.
 This means that the element can be disposed and the associated hardware
 resource released.
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9592>
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9578>
+
+2025-08-18 14:17:32 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
+
+ * gst-libs/gst/vulkan/gstvkimagebufferpool.c:
+ vkimagebufferpool: don't use independent profile flag for some usage
+ VK_IMAGE_USAGE_VIDEO_ENCODE_DPB_BIT_KHR, among others, needs the video profile
+ it will be used with.
+ The patch clears the validation issue VUID-VkImageCreateInfo-flags-08331
+ This is a continuation of !9550
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9570>
+
+2025-08-18 11:00:34 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
+
+ * tests/check/libs/vkvideoencodeh264.c:
+ * tests/check/libs/vkvideoencodeh265.c:
+ tests: fix queues for vulkan h26x encoders
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9571>

2025-08-15 13:02:57 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com>

@@ -916,7 +4680,7 @@
 * ext/vulkan/vkh265dec.c:
 vulkanh26xdec: fix debug category name
 This is a regression from merge request !78011
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9562>
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9560>

2025-08-15 12:06:45 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com>

@@ -925,21 +4689,30 @@
 vulkanh26xdec: re-negotiate after FLUSH
 Vulkan decoders also have the same issue as VA decoders fixed in !9457, where
 FLUSH event doesn't renegotiate downstream the pipeline.
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9562>
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9560>
+
+2025-08-13 14:54:45 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
+
+ * gst-libs/gst/vulkan/gstvkimagebufferpool.c:
+ vkimagebufferpool: support video profile independent images
+ With the video_maintenance1 extension it is possible to create images
+ independent of the video profile list under which the image will be processed.
+ With that extension it is possible to share the same image for dynamic
+ transcoding.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9550> 2025-08-14 15:55:41 +0200 Jan Alexander Steffens (heftig) <jan.steffens@ltnglobal.com> * tests/check/elements/cccombiner.c: tests: cccombiner: Test durationless buffers Crashes without the previous fix. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9559> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9415> 2025-07-17 14:40:47 +0200 Jan Alexander Steffens (heftig) <jan.steffens@ltnglobal.com> * ext/closedcaption/gstcccombiner.c: cccombiner: Don't crash when first frame has no duration Aggregate again so the code above can determine the end time or EOS. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9559> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9415> 2025-07-17 14:29:07 +0200 Jan Alexander Steffens (heftig) <jan.steffens@ltnglobal.com> @@ -949,7 +4722,39 @@ The `gst_util_uint64_scale` emitted a critical warning and returned `GST_CLOCK_TIME_NONE`, so beyond removing the warning this fix does not change behavior. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9559> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9415> + +2025-08-14 03:03:50 +0900 Seungha Yang <seungha@centricular.com> + + * sys/wasapi2/gstwasapi2rbuf.cpp: + wasapi2: Warm up capture audio client on open + If the endpoint is idle, the first IAudioClient::Start() call + may take a long time to return. Start/stop the capture client + on open to reduce latency of subsequent Start() calls. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9553>
+
+2025-08-13 03:13:47 +0900 Seungha Yang <seungha@centricular.com>
+
+ * sys/wasapi2/gstwasapi2rbuf.cpp:
+ wasapi2sink: Do not push too large a preroll buffer to the endpoint
+ To avoid startup glitches, a silent buffer is pushed to the
+ render endpoint, but pushing a silent buffer that is too large will
+ introduce unnecessary latency. Limit it to a single period's worth of data.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9553>
+
+2025-08-13 16:39:24 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
+
+ * gst-libs/gst/vulkan/gstvkdecoder-private.c:
+ vkdecoder-private: fix mistake from !9531
+ This was a very silent mistake.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9552>
+
+2025-08-13 16:38:23 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
+
+ * gst-libs/gst/vulkan/gstvkdecoder-private.c:
+ * gst-libs/gst/vulkan/gstvkencoder-private.c:
+ vk{decoder,encoder}-private: use API to check device version
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9552>

2025-08-12 21:27:53 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com>

@@ -960,7 +4765,7 @@
 vkCreateImage(): pCreateInfo->usage is zero.
 This patch force to use the internal defaults in vkimagebufferpool if no usage
 is defined.
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9546>
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9544>

2025-08-12 14:24:12 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com>

@@ -977,7 +4782,31 @@
 device driver before adding the enabling of the feature.
 Finally, the getters were adapted to use the version feature structure if the
 device driver version matches.
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9546>
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9544>
+
+2025-08-12 14:24:52 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
+
+ * gst-libs/gst/vulkan/gstvkimagebufferpool.c:
+ vkimagebufferpool: fix regression from !9492
+ On commit 1a7f0f162726f07f5723e0c1f43f2c6725d07c80 a regression was introduced
+ by omitting to initialize the profileCount field.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9545>
+
+2025-08-08 17:29:32 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
+
+ * gst-libs/gst/vulkan/gstvkencoder-private.c:
+ vkencoder-private: remove duplicated definition
+ They are already declared in gstvkvideo-private.h
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9531>
+
+2025-08-08 17:22:46 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
+
+ * gst-libs/gst/vulkan/gstvkdecoder-private.c:
+ * gst-libs/gst/vulkan/gstvkencoder-private.c:
+ vulkan: remove spurious video extension checking
+ They are expected dependencies: if the specific codec extension is loaded,
+ that means that the dependencies are loaded too.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9531>

2025-08-11 22:54:29 +1000 Jan Schmidt <jan@centricular.com>

@@ -990,7 +4819,7 @@
 Fixes problems where the EXT-X-MAP directive has been
 written into the playlist between an EXT-X-BYTERANGE
 and the fragment URI it applies to.
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9541>
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9533>

2025-08-08 17:38:33 +0800 Qian Hu (胡骞) <qian.hu@mediatek.com>

@@ -998,29 +4827,178 @@
 waylandsink: add some error handler for event dispatch
 if wl client got last_error, wl_display_dispatch_queue_pending will return
 -1, may lead to unhandled case, we should quit.
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9537>
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9511>
+
+2025-08-09 22:51:03 +0900 Seungha Yang <seungha@centricular.com>
+
+ * sys/wasapi2/gstwasapi2activator.cpp:
+ wasapi2: Tone down activation fail log
+ If there's no endpoint available, that failure is an expected error
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9521>
+
+2025-08-09 22:46:59 +0900 Seungha Yang <seungha@centricular.com>
+
+ * sys/wasapi2/gstwasapi2enumerator.cpp:
+ wasapi2: Pass correct data flow value to GetDefaultAudioEndpoint()
+ Respect the requested data flow value
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9521>
+
+2025-07-23 11:37:15 -0400 Xavier Claessens <xclaessens@netflix.com>
+
+ * tests/check/elements/mpegtsmux.c:
+ mpegtsmux: Caps event fails with stream type change error
+ If mpegtsmux receives the same caps again, it wrongly claims the stream
+ type changed:
+ error: Stream type change from 06 to 8f not supported
+ This adds a unit test that demonstrates the issue in a very hacky way.
+ I have seen this happening with the below pipeline when upstream caps
+ change. Since the caps filter fixates the caps received by opusenc and
+ mpegtsmux, the stream type cannot change.
+ ...
+ ! audioconvert
+ ! audio/x-raw,format=S16LE,channels=2,rate=48000
+ ! opusenc bitrate=128000
+ ! mpegtsmux
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9430>
+
+2025-08-07 14:31:38 +1000 Jan Schmidt <jan@centricular.com>
+
+ * gst/mpegtsmux/gstbasetsmux.c:
+ mpegtsmux: Use 'internal' stream_type to detect codec changes
+ The TsMuxStream internal_stream_type field stores the original
+ 'full' stream type (such as Opus), while the stream_type field
+ stores the value that will actually be written into the MPEG-TS
+ packets according to the codec mappings.
+ When checking if input caps are changing stream type, check the original type.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9430>
+
+2025-08-08 22:30:52 +0900 Seungha Yang <seungha@centricular.com>
+
+ * sys/wasapi2/gstwasapi2enumerator.cpp:
+ wasapi2: Fix default render device probing
+ Fixing a typo
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9517>
+
+2025-07-31 11:31:43 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
+
+ * gst-libs/gst/vulkan/gstvkphysicaldevice.c:
+ vkphysicaldevice: detect and dump Vulkan 1.4 properties and features
+ In order to link videomaintenance1, and others to come, without knowing if
+ Vulkan 1.4 features are chained in the device properties structure, a static
+ inlined function was added in gstvkphysicaldevice-private.h. It was added in a
+ header file to avoid compiler warnings if it's not used because of old Vulkan
+ headers.
+ Also, the videomaintenance1 value dump was moved to another function that
+ gathers all the future features to query which aren't part of a Vulkan release
+ yet.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9492>
+
+2025-07-31 13:48:10 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
+
+ * ext/vulkan/gstvulkan.c:
+ * gst-libs/gst/vulkan/gstvkdebug.c:
+ * gst-libs/gst/vulkan/gstvkdevice.c:
+ * gst-libs/gst/vulkan/gstvkformat.c:
+ * gst-libs/gst/vulkan/gstvkphysicaldevice.c:
+ vulkan: fine grained access to API
+ This patch is the payment of my technical debt.
+ The symbol GST_VULKAN_HAVE_VIDEO_EXTENSIONS is defined at compile time if
+ the user requests the usage of the Vulkan Video extensions, and we used this
+ symbol for anything related to Vulkan Video. But this is not the smartest
+ approach.
+ The rule should be:
+ - If the code allocates Vulkan Video resources, use
+   GST_VULKAN_HAVE_VIDEO_EXTENSIONS
+ - Otherwise, use Vulkan's guard for the used API
+ In this way, API version bumps will be easier.
+ Also, this commit marks the end of GST_VULKAN_HAVE_VIDEO_EXTENSIONS guarded
+ code, for readability.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9492>
+
+2025-08-04 15:11:17 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
+
+ * gst-libs/gst/vulkan/gstvkoperation.c:
+ * gst-libs/gst/vulkan/gstvkphysicaldevice-private.h:
+ vkoperation: refactor for fine grained guards and clean ups
+ Added a static inlined function in gstvkphysicaldevice-private.h for looking up
+ a specific vulkan structure in a chain.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9492>
+
+2025-07-31 17:56:17 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
+
+ * gst-libs/gst/vulkan/gstvkimagebufferpool.c:
+ vkimagebufferpool: refactor common code
+ Both the gst_vulkan_image_buffer_pool_set_config() and
+ gst_vulkan_image_buffer_pool_alloc() functions share the same code to create
+ Vulkan images for different purposes.
+ This patch refactors them into a new helper function that creates the images
+ and stores them in a buffer, if one is passed, along with output parameters
+ such as the offsets.
+ This patch also adds specific guards for Vulkan symbols for finer-grained
+ API usage; also, for prepare_buffer() the guard is set where the symbol is
+ used.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9492>
+
+2025-07-31 13:19:13 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
+
+ * gst-libs/gst/vulkan/gstvkdevice.c:
+ vkdevice: fine grained vulkan video extensions detection
+ The Vulkan Video extensions can be available, according to the specification,
+ since Vulkan 1.1, but with other extension dependencies.
+ That's why this patch adds a field in the extension structure, which
+ represents the extension dependency that the specified extension requires, as
+ specified by the Vulkan Video extensions.
+ This allows having a single function to check whether an extension can be
+ enabled, both for optional extensions and video extensions.
+ Although the video extensions can be loaded since Vulkan 1.1, they are rather
+ loaded since Vulkan 1.3, when synchronization2 was promoted, so it isn't
+ checked as a video_queue dependency.
+ Finally, this patch checks for each guard symbol.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9492>
+
+2025-08-04 12:11:40 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
+
+ * gst-libs/gst/vulkan/gstvkdecoder-private.c:
+ * gst-libs/gst/vulkan/gstvkoperation.c:
+ * gst-libs/gst/vulkan/gstvkphysicaldevice-private.h:
+ * gst-libs/gst/vulkan/gstvkphysicaldevice.c:
+ * gst-libs/gst/vulkan/gstvkvideo-private.c:
+ * gst-libs/gst/vulkan/gstvkvideo-private.h:
+ vulkan: private functions for physical device features
+ This is a continuation of !9483, but without back-porting.
+ Instead of checking the driver's API version to figure out if a physical device
+ feature is available and enabled, or even checking for enabled extensions in
+ the driver, this patch adds private functions in the physical device to get
+ the availability and enabling of features such as sampler ycbcr conversion,
+ synchronization2, timeline semaphore and video maintenance1.
+ These new functions are used internally in the GstVulkanOperation object
+ and the private object GstVulkanDecoder.
+ This approach is computationally cheaper, simpler and more precise.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9492> 2025-08-08 07:13:24 +0200 Jan Alexander Steffens (heftig) <heftig@archlinux.org> * tests/check/elements/zbar.c: zbar: tests: Handle symbol-bytes as not null-terminated Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4592 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9514> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9510> -2025-08-07 19:15:29 +0100 Tim-Philipp Müller <tim@centricular.com> +2025-07-21 03:41:05 +0100 Nirbheek Chauhan <nirbheek@centricular.com> - * meson.build: - Back to development after 1.26.5 - -=== release 1.26.5 === - -2025-08-07 19:06:46 +0100 Tim-Philipp Müller <tim@centricular.com> - - * NEWS: - * RELEASE: - * gst-plugins-bad.doap: - * meson.build: - Release 1.26.5 + * ext/svthevcenc/gstsvthevcenc.c: + * ext/wayland/gstwaylandsink.c: + * sys/winks/gstksclock.c: + * sys/winks/gstksvideodevice.c: + * sys/winks/gstksvideosrc.c: + * sys/winks/ksdeviceprovider.c: + * sys/winks/ksvideohelpers.c: + * sys/winks/ksvideohelpers.h: + debug: Category init should happen in class_init when possible + plugin_init() will not get called if element/feature registration + happens manually, such as when using linking only specific plugin + features with gstreamer-full. That is possible when plugins contain + static features. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9496> 2025-08-07 13:58:27 +0300 Sebastian Dröge <sebastian@centricular.com> @@ -1030,7 +5008,126 @@ parsing in any way. Specifically, ffmpeg with `av1_nvenc` seems to create `GST_AV1_SEQ_LEVEL_7_3` currently and parsing such streams would fail otherwise. 
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4589
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9504>
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9502>
+
+2025-06-30 13:34:47 -0400 Julian Bouzas <julian.bouzas@collabora.com>
+
+ * ext/lcevcdecoder/gstlcevcdec.c:
+ * ext/lcevcdecoder/meson.build:
+ lcevcdec: Set LCEVCdec min version to 4.0.0 and fix build
+ V-Nova's LCEVCdec SDK 4.0.0 was released with a small API change. This patch
+ fixes the 'lcevcdec' element so that it builds with the new version. For more
+ information see:
+ https://github.com/v-novaltd/LCEVCdec/blob/4.0.0/docs/v4_migration_guide.md
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9414>
+
+2025-08-04 16:56:56 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
+
+ * gst-libs/gst/vulkan/gstvkdecoder-private.h:
+ * gst-libs/gst/vulkan/gstvkencoder-private.h:
+ * gst-libs/gst/vulkan/gstvkimagebufferpool.c:
+ * gst-libs/gst/vulkan/gstvkvideo-private.h:
+ * gst-libs/gst/vulkan/gstvkvideoutils-private.c:
+ * gst-libs/gst/vulkan/gstvkvideoutils-private.h:
+ * gst-libs/gst/vulkan/meson.build:
+ * gst-libs/gst/vulkan/vulkan.h:
+ * gst-libs/gst/vulkan/vulkan_fwd.h:
+ * tests/check/libs/vkimagebufferpool.c:
+ * tests/check/libs/vkvideodecode.c:
+ vkvideoutils-private: make it private
+ Since we moved the GstVulkan generic decoder and encoder to private objects in
+ the library, there was no need to keep vkvideoutils public.
+ This patch turns it private and reduces the public API surface.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9493> + +2025-08-05 01:51:14 +0900 Seungha Yang <seungha@centricular.com> + + * sys/wasapi2/gstwasapi2rbuf.cpp: + wasapi2: Always fallback to MMDevice if default device is unavailable + Automatic stream routing supported virtual device may not be + available for some reason, but can try default MMdevice + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9468> + +2025-08-05 01:41:51 +0900 Seungha Yang <seungha@centricular.com> + + * sys/wasapi2/gstwasapi2enumerator.cpp: + wasapi2: Always probe default audio endpoint info + Regardless of GetActivateResult() return code, fill default + device information to device provider props + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9468> + +2025-08-05 00:41:34 +0900 Seungha Yang <seungha@centricular.com> + + * sys/wasapi2/gstwasapi2activator.cpp: + * sys/wasapi2/gstwasapi2activator.h: + wasapi2: Handle GetActivateResult failure + Even if GetActivateResult() succeeded, activation result can fail. + Checks output HRESULT code as well + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9468> + +2025-08-01 00:00:06 +0900 Seungha Yang <seungha@centricular.com> + + * sys/wasapi2/gstwasapi2device.cpp: + wasapi2deviceprovider: Log device update details + ... and add wasapi2deviceprovider debug category + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9468> + +2025-07-31 22:54:29 +0900 Seungha Yang <seungha@centricular.com> + + * sys/wasapi2/gstwasapi2enumerator.cpp: + wasapi2enumerator: Retry on IMMDevice::Activate failure + Since the sequence of IMMDeviceEnumerator::EnumAudioEndpoints() + followed by IMMDevice::Activate() is not atomic, Activate() may fail + if the enumerated device becomes invalidated before probing. 
+ In such cases, retry device probing + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9468> + +2025-07-31 22:22:47 +0900 Seungha Yang <seungha@centricular.com> + + * sys/wasapi2/gstwasapi2enumerator.cpp: + wasapi2enumerator: Avoid racy device probing + IMMDeviceEnumerator may fire a series of callbacks even for a single + device plug/unplug event. To avoid redundant probing, start device + enumeration only after no further callbacks are received for 100ms. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9468> + +2025-07-31 21:10:27 +0900 Seungha Yang <seungha@centricular.com> + + * sys/wasapi2/gstwasapi2enumerator.cpp: + * sys/wasapi2/gstwasapi2util.cpp: + * sys/wasapi2/gstwasapi2util.h: + wasapi2enumerator: Log IMMNotificationClient callback details + ... and add wasapi2enumerator debug category + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9468> + +2025-08-02 12:19:18 +0100 Philippe Normand <philn@igalia.com> + + * ext/webrtc/webrtcsdp.c: + * tests/check/elements/webrtcbin.c: + webrtc: sdp: Validate ICE SDP attributes + According to https://datatracker.ietf.org/doc/html/rfc5245#section-15.4, + those attributes should contain only alpha-numerical (with / and + allowed), + should be less than 256 characters, the ufrag should be at least 4 characters + and the pwd should be at least 22 characters. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9485> + +2025-08-02 08:55:05 +0100 Philippe Normand <philn@igalia.com> + + * ext/webrtc/webrtcsdp.c: + * tests/check/elements/webrtcbin.c: + webrtc: sdp: Relax ice-ufrag and ice-pwd checks + According to RFC 8839 section 5.4, if two data streams have identical + "ice-ufrag"s, they MUST have identical "ice-pwd"s. + The previous code wasn't allowing different ice-ufrag values in bundled medias. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9485> + +2025-08-02 08:54:37 +0100 Philippe Normand <philn@igalia.com> + + * ext/webrtc/gstwebrtcstats.c: + * gst-libs/gst/webrtc/webrtc_fwd.h: + * tests/check/elements/webrtcbin.c: + webrtc: stats: Set DTLS role and state on transport stats + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9485> 2025-08-01 14:55:12 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> @@ -1061,7 +5158,42 @@ old Vulkan headers. Also the value dump videomaintenance1 was moved to another function to pack there all these queried features. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9489> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9483> + +2025-07-31 17:55:29 -0400 Olivier Crête <olivier.crete@collabora.com> + + * docs/random/PORTED_09: + random: Remove historical doc + This is about porting which happened over 20 years ago. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9476> + +2025-07-31 17:54:32 -0400 Olivier Crête <olivier.crete@collabora.com> + + * docs/random/LICENSE: + random: Remove historical LICENSE header + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9476> + +2025-07-31 17:50:33 -0400 Olivier Crête <olivier.crete@collabora.com> + + * AUTHORS: + AUTHORS: Remove outdated files + They only contained historical contributors, the modern version is + to look at the git logs. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9476> + +2025-07-31 17:44:21 -0400 Olivier Crête <olivier.crete@collabora.com> + + * MAINTAINERS: + MAINTAINERS: Update to reflect current maintainership + Instead of listing everyone, just point to GitLab + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9476> + +2025-07-31 17:39:44 -0400 Olivier Crête <olivier.crete@collabora.com> + + * REQUIREMENTS: + REQUIREMENTS: Remove outdated doc + They contained information which was completely outdated. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9476> 2025-08-01 11:52:17 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> @@ -1076,13 +5208,15 @@ comparison was racy. This patch locks the object to compare the current rate-control with the one set by the user. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9484> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9480> -2025-05-29 13:20:59 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> +2025-02-26 03:41:21 +0000 Jonathan Lui <jonathan.ming.jun.lui@intel.com> - * gst-libs/gst/vulkan/gstvkdevice.c: - vulkan: ycbcr conversion extension got promoted in 1.1.0 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9482> + * sys/va/gstvaav1enc.c: + vaav1enc: Enable intrablock copy and palette mode + This allow screen content coding (SCC) optimization feature. + Co-authored-by: Victor Jaquez <vjaquez@igalia.com> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8560> 2025-08-01 01:03:10 +0900 Seungha Yang <seungha@centricular.com> @@ -1090,7 +5224,95 @@ d3d12screencapturedevice: Avoid false device removal on monitor reconfiguration Post device-changed instead of device-removed/device-added when only HMONITOR or display position changed without actual device change. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9481> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9472> + +2025-02-11 18:59:04 -0500 Olivier Crête <olivier.crete@collabora.com> + + * tests/check/libs/analyticsmeta.c: + analytics: Add unit test for copying GstAnalyticsRelationMeta + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9475> + +2025-03-06 18:08:22 -0500 Olivier Crête <olivier.crete@collabora.com> + + * ext/avtp/gstavtpbasepayload.c: + avtpbasepay: Add debug message for time handling + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9412> + +2025-03-06 18:08:04 -0500 Olivier Crête <olivier.crete@collabora.com> + + * ext/avtp/gstavtpbasepayload.c: + avtpbasepay: Make make constants more readable + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9412> + +2025-03-06 18:01:23 -0500 Olivier Crête <olivier.crete@collabora.com> + + * ext/avtp/gstavtpaafdepay.c: + * ext/avtp/gstavtpbasedepayload.c: + * ext/avtp/gstavtpbasedepayload.h: + * ext/avtp/gstavtpcvfdepay.c: + * ext/avtp/gstavtpsrc.c: + * ext/avtp/gstavtpvfdepaybase.c: + * tests/check/elements/avtpaafdepay.c: + * tests/check/elements/avtpcvfdepay.c: + avtp: Use the DTS as the AVTP base time + Make it work a little more like RTP. Have the source interact with the + clock and set the capture time on each packet. Then the other elements + can use that to do adjustments. Since AVTP is always very low latency, + it can be assumed that the gPTP clock at the packet reception is very + close to the sending time, never more than 2 seconds off, so the + timestamps can be compared directly. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9412> + +2025-03-06 16:14:04 -0500 Olivier Crête <olivier.crete@collabora.com> + + * ext/avtp/gstavtpaafdepay.c: + * ext/avtp/gstavtpbasedepayload.c: + * ext/avtp/gstavtpbasedepayload.h: + * ext/avtp/gstavtpcvfdepay.c: + * ext/avtp/gstavtprvfdepay.c: + avtp: Use nicely abstracted process function in base depayloader class + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9412> + +2025-02-07 16:18:14 -0500 Olivier Crête <olivier.crete@collabora.com> + + * ext/avtp/gstavtpbasepayload.c: + avtp: Intercept changes in the latency + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9412> + +2025-02-07 13:33:48 -0500 Olivier Crête <olivier.crete@collabora.com> + + * ext/avtp/gstavtpvfpaybase.c: + avtpvfpaybase: Don't require a caps handling method + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9412> + +2025-07-30 11:12:24 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> + + * docs/plugins/gst_plugins_cache.json: + docs: Update documentation cache for new RGB 10bit support + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9460> + +2025-07-29 13:58:21 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> + + * gst-libs/gst/wayland/gstwlvideoformat.h: + waylandsink: Enable 10bit RGB for SHM buffer + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9460> + +2025-07-31 15:36:38 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> + + * gst/videoparsers/gstav1parse.c: + av1parse: Set MDI into the final caps + The MDI was being set in the original caps which is not even writable. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9474> + +2025-03-05 11:02:55 -0500 Olivier Crête <olivier.crete@collabora.com> + + * ext/onnx/gstonnxclient.cpp: + * ext/onnx/gstonnxclient.h: + * ext/onnx/gstonnxinference.cpp: + * ext/onnx/meson.build: + onnx: Add Verisilicon provider + Add the option to use the VSI provider for the Verisilicon NPUs. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9408> 2025-07-30 15:50:56 +0400 Marc-André Lureau <marcandre.lureau@redhat.com> @@ -1099,7 +5321,52 @@ meson: d3d12: Add support for MinGW DirectXMath package This is a similar issue that was found for d3d11: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6495 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9473> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9464> + +2025-07-30 16:19:56 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * ext/vulkan/base/gsth264encoder.c: + vulkanh264enc: calculate latency with corrected framerate + Fix for the h264encoder base class in the same spirit of !9437. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9465> + +2025-07-24 14:31:26 +0300 Amotz Terem <amotzte@gmail.com> + + * sys/nvcodec/gstnvencoder.cpp: + nvcodec: Add emit-frame-stats signal + Add emit-frame-stats property to optionally emit frame stats on each frame + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9429> + +2025-07-24 20:42:59 +0100 Nirbheek Chauhan <nirbheek@centricular.com> + + * sys/directsound/gstdirectsoundplugin.c: + * sys/wasapi/gstwasapi.c: + windows: Disable all audio device providers except wasapi2 + We have too many device providers outputting duplicate device entries, + and it's not clear to people what they should be using. Let's only + keep wasapi2 around since it is PRIMARY + 1. 
+ After the device switching work done on WASAPI2, there is no reason to + use directsound anymore. + https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9326 + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9438> + +2025-07-24 20:20:39 +0100 Nirbheek Chauhan <nirbheek@centricular.com> + + * meson.build: + meson: Pass sysprof=disabled to glib + sysprof cannot be built on Windows, and this causes the build to fail + on Windows. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9438> + +2025-07-24 20:19:00 +0100 Nirbheek Chauhan <nirbheek@centricular.com> + + * ext/dash/meson.build: + * ext/smoothstreaming/meson.build: + * ext/ttml/meson.build: + meson: Pass python=false to libxml2 + We don't need this in gstreamer anyway. + Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4510 + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9438> 2025-07-29 11:42:54 +0100 Philippe Normand <philn@igalia.com> @@ -1110,7 +5377,7 @@ situation the input state hasn't changed. By always chaining up we are sure that buffer pool negotiation will always be attempted. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9461> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9457> 2025-07-29 11:42:44 +0100 Philippe Normand <philn@igalia.com> @@ -1121,7 +5388,7 @@ situation the input state hasn't changed. By always chaining up we are sure that buffer pool negotiation will always be attempted. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9461> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9457> 2025-07-29 11:42:33 +0100 Philippe Normand <philn@igalia.com> @@ -1132,13 +5399,13 @@ situation the input state hasn't changed. By always chaining up we are sure that buffer pool negotiation will always be attempted. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9461> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9457> 2025-07-28 12:52:22 +0100 Philippe Normand <philn@igalia.com> * sys/va/gstvabasedec.c: vabasedec: Instrument negotiate function with debug statements - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9461> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9457> 2025-07-28 12:49:36 +0100 Philippe Normand <philn@igalia.com> @@ -1149,19 +5416,142 @@ situation the input state hasn't changed. By always chaining up we are sure that buffer pool negotiation will always be attempted. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9461> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9457> 2025-07-28 12:45:25 +0100 Philippe Normand <philn@igalia.com> * sys/va/gstvah264dec.c: vah264dec: Spelling fix in warning debug statement - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9461> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9457> 2025-07-28 12:44:48 +0100 Philippe Normand <philn@igalia.com> * gst-libs/gst/codecs/gsth264decoder.c: h264decoder: Spelling fix in warning debug statement - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9461> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9457> + +2025-07-24 16:41:35 -0400 Olivier Crête <olivier.crete@collabora.com> + + * ext/onnx/gstonnxclient.cpp: + onnx: Add debug message with tensor id + Also downgrade input dimensions as it's shown on + each buffer. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9439> + +2025-07-24 16:41:23 -0400 Olivier Crête <olivier.crete@collabora.com> + + * ext/onnx/gstonnxclient.cpp: + * gst/tensordecoders/gstssdobjectdetector.c: + ssdtensordecoder: Use tensor ids from the registry + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9439> + +2025-07-10 15:19:02 -0500 Derek Foreman <derek.foreman@collabora.com> + + * gst/videoparsers/gstav1parse.c: + av1parse: Set CLL and MDI caps + We already parse the content-light-level and mastering-display-info data + from the stream, so propagate that into caps. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9445> + +2025-07-25 08:59:31 -0500 Derek Foreman <derek.foreman@collabora.com> + + * gst-libs/gst/wayland/gstwlwindow.c: + wayland: display: Scale whitepoint the same as the primaries + The whitepoint metadata also needs the same scale factor as the + display_primaries. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9444> + +2025-07-25 09:31:08 -0500 Derek Foreman <derek.foreman@collabora.com> + + * gst-libs/gst/wayland/gstwlwindow.c: + wayland: window: Name the color management queue + Wayland debugging is easier if we use queue names. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9446> + +2025-07-22 11:05:08 -0400 Julian Bouzas <julian.bouzas@collabora.com> + + * ext/lcevcdecoder/gstlcevch265decodebin.c: + * ext/lcevcdecoder/gstlcevch265decodebin.h: + * ext/lcevcdecoder/gstlcevch266decodebin.c: + * ext/lcevcdecoder/gstlcevch266decodebin.h: + * ext/lcevcdecoder/meson.build: + * ext/lcevcdecoder/plugin.c: + lcevcdecoder: Add lcevch265decodebin and lcevch266decodebin elements + Similar to lcevch264decodebin, these new elements are needed for LCEVC H265 and + H266 video streams to be decoded properly with autoplugging elements. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9427> + +2025-07-22 11:02:00 -0400 Julian Bouzas <julian.bouzas@collabora.com> + + * gst/videoparsers/gsth266parse.c: + h266parse: Fix typo when finding compatible profiles + This solves some critical errors about not fixed caps with some H266 streams. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9427> + +2025-07-22 10:56:55 -0400 Julian Bouzas <julian.bouzas@collabora.com> + + * gst/videoparsers/gsth266parse.c: + h266parse: Wait for SEI before exposing src caps + Similar to h264parse, this makes sure 'lcevc=false' src caps are not set before + parsing SEI. It is needed for decodebin2 to work properly with the LCEVC decoder. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9427> + +2025-07-22 10:56:30 -0400 Julian Bouzas <julian.bouzas@collabora.com> + + * gst/videoparsers/gsth265parse.c: + * tests/check/elements/h265parse.c: + h265parse: Wait for SEI before exposing src caps + Similar to h264parse, this makes sure 'lcevc=false' src caps are not set before + parsing SEI. It is needed for decodebin2 to work properly with the LCEVC decoder. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9427> + +2025-07-22 10:47:20 -0400 Julian Bouzas <julian.bouzas@collabora.com> + + * gst/videoparsers/gsth264parse.c: + * gst/videoparsers/gsth265parse.c: + * gst/videoparsers/gsth266parse.c: + codecparsersbad: Accept lcevc=false sink caps + This is needed if the LCEVC enhancement data is part of the video stream as SEI + and the demuxer outputs 'lcevc=false' src caps because LCEVC enhancement data is + not stored as a separate stream in the container. + To clarify, 'lcevc=true' just means that the video buffers have LCEVC metadata + attached. Therefore, it is valid to have a stream with LCEVC enhancement data as + SEI with 'lcevc=false' as long as it is not attached as metadata. 
+ This will be needed once we add support for the demuxer to attach LCEVC metadata + to video buffers if it is stored in a separate track. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9427> + +2025-07-25 03:51:13 +0900 Seungha Yang <seungha@centricular.com> + + * tests/examples/d3d12/d3d12fisheyedewarp.cpp: + * tests/examples/d3d12/meson.build: + examples: Add d3d12fisheyedewarp test example + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9441> + +2025-06-20 03:23:25 +0900 Seungha Yang <seungha@centricular.com> + + * gst-libs/gst/d3dshader/gstd3dshadercache.cpp: + * gst-libs/gst/d3dshader/gstd3dshadercache.h: + * gst-libs/gst/d3dshader/plugin-hlsl/CSMain_fisheye_equirect.hlsl: + * gst-libs/gst/d3dshader/plugin-hlsl/CSMain_fisheye_panorama.hlsl: + * gst-libs/gst/d3dshader/plugin-hlsl/CSMain_fisheye_perspective.hlsl: + * gst-libs/gst/d3dshader/plugin-hlsl/hlsl.h: + * gst-libs/gst/d3dshader/plugin-hlsl/meson.build: + * sys/d3d12/gstd3d12fisheyedewarp.cpp: + * sys/d3d12/gstd3d12fisheyedewarp.h: + * sys/d3d12/meson.build: + * sys/d3d12/plugin.cpp: + d3d12: Add support for dewarping fisheye images + Add d3d12fisheyedewarp element that performs fisheye image dewarping + using D3D12. 
A UV remap LUT texture is generated via a compute shader, + and the actual remapping is performed in a pixel shader using this LUT + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9441> + +2025-07-24 17:44:46 -0400 Olivier Crête <olivier.crete@collabora.com> + + * ext/analyticsoverlay/gstobjectdetectionoverlay.c: + objectdetectionoverlay: Print tracking id + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9440> 2025-07-24 18:16:51 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> @@ -1173,89 +5563,347 @@ * sys/va/gstvavp9enc.c: vaXXXenc: calculate latency with corrected framerate Fixes: #4558 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9447> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9437> -2025-06-29 22:52:37 +0900 Seungha Yang <seungha@centricular.com> +2025-07-10 21:56:28 +0900 Seungha Yang <seungha@centricular.com> - * sys/wasapi2/gstwasapi2ringbuffer.cpp: - wasapi2: Fix various MinGW build warnings - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9428> + * sys/wasapi2/gstwasapi2rbuf.cpp: + * sys/wasapi2/gstwasapi2rbuf.h: + * sys/wasapi2/gstwasapi2sink.cpp: + * sys/wasapi2/gstwasapi2src.cpp: + wasapi2: Add continue-on-error property + If enabled, wasapi2src/sink will post a warning message instead of an error, + when device failures occur, such as open failure, I/O error, + or device removal. 
+ The element will continue to produce/consume audio buffers and behave as if + a capture/render device were active, allowing pipeline to keep running even when + no audio endpoint is available + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9326> -2025-06-29 20:45:51 +0900 Seungha Yang <seungha@centricular.com> +2025-07-10 22:29:31 +0900 Seungha Yang <seungha@centricular.com> - * sys/wasapi2/AsyncOperations.h: - * sys/wasapi2/gstwasapi2client.cpp: - * sys/wasapi2/gstwasapi2client.h: * sys/wasapi2/gstwasapi2util.cpp: - * sys/wasapi2/meson.build: - waapi2: Remove unused WinRT deps and implementations - Removing unused WinRT API based implementations - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9428> + wasapi2: Use 48kHz default sample rate + That's most common default value + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9326> -2025-06-29 01:46:44 +0900 Seungha Yang <seungha@centricular.com> +2025-07-04 21:55:13 +0900 Seungha Yang <seungha@centricular.com> + + * sys/wasapi2/gstwasapi2device.cpp: + * sys/wasapi2/gstwasapi2enumerator.cpp: + * sys/wasapi2/gstwasapi2enumerator.h: + wasapi2deviceprovider: Probe device form factor and enumerator name + Adding form factor and enumerator information to device property struct + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9326> + +2025-07-01 00:33:18 +0900 Seungha Yang <seungha@centricular.com> * sys/wasapi2/gstwasapi2object.cpp: * sys/wasapi2/gstwasapi2object.h: + * sys/wasapi2/gstwasapi2rbuf.cpp: + * sys/wasapi2/gstwasapi2rbuf.h: * sys/wasapi2/gstwasapi2ringbuffer.cpp: * sys/wasapi2/gstwasapi2ringbuffer.h: * sys/wasapi2/gstwasapi2sink.c: - * sys/wasapi2/gstwasapi2src.c: - * sys/wasapi2/meson.build: - wasapi2: Port to IMMDevice based device selection - Because of a couple of issues reported related to WinRT device - enumeration, porting to IMMDevice device id based device selection. 
- Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4311 - Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3936 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9428> - -2025-06-27 21:36:53 +0900 Seungha Yang <seungha@centricular.com> - - * sys/wasapi2/gstwasapi2activator.cpp: - * sys/wasapi2/gstwasapi2activator.h: - * sys/wasapi2/gstwasapi2client.cpp: - * sys/wasapi2/gstwasapi2device.c: - * sys/wasapi2/gstwasapi2device.cpp: - * sys/wasapi2/gstwasapi2device.h: - * sys/wasapi2/gstwasapi2enumerator.cpp: - * sys/wasapi2/gstwasapi2enumerator.h: + * sys/wasapi2/gstwasapi2sink.cpp: + * sys/wasapi2/gstwasapi2src.cpp: * sys/wasapi2/gstwasapi2util.cpp: - * sys/wasapi2/gstwasapi2util.h: * sys/wasapi2/meson.build: * sys/wasapi2/plugin.cpp: - wasapi2: Implement IMMDeviceEnumerator based enumerator - ... and merge wasapi2{capture,render}deviceprovider into single - wasapi2deviceprovider since we can enumerate input/output audio - devices at once using IMMDeviceEnumerator - This is a preparation for complete porting to Win32 API - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9428> + wasapi2: Add support for dynamic device switch + Ringbuffer implementation is re-written to support "device" property + change in playing state + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9326> -2025-06-01 00:02:16 +0900 Seungha Yang <seungha@centricular.com> +2025-02-26 03:03:15 +0900 Seungha Yang <seungha@centricular.com> - * sys/d3d12/gstd3d12graphicscapture.cpp: - * sys/d3d12/gstd3d12graphicscapture.h: - * sys/d3d12/plugin.cpp: - d3d12screencapturesrc: Fix OS handle leaks/random crash in WGC mode - Multiple DispatcherQueues per thread seems to be causing OS handle leak - and random crashes were observed. 
Instead of creating - thread/DispatcherQueue per GstD3D12GraphicsCapture object, - reuse only single thread and DispatcherQueue - Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4351 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9424> + * ext/nvdswrapper/gstnvdsdewarp.cpp: + nvdsdewarp: Disallow resizing in case of passthrough + It's not supported yet + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8558> -2025-07-16 15:33:19 +0200 Tim-Philipp Müller <tim@centricular.com> +2025-02-26 02:27:38 +0900 Seungha Yang <seungha@centricular.com> - * meson.build: - Back to development after 1.26.4 + * ext/nvdswrapper/gstnvdsdewarp.cpp: + nvdsdewarp: Avoid synchronization if possible + If input/output memory objects have the same cuda stream, + don't need to synchronize stream + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8558> -=== release 1.26.4 === +2025-02-26 02:22:24 +0900 Seungha Yang <seungha@centricular.com> -2025-07-16 15:26:21 +0200 Tim-Philipp Müller <tim@centricular.com> + * ext/nvdswrapper/gstnvdsdewarp.cpp: + nvdsdewarp: Cache texture object + ... instead of creating texture for every frame + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8558> - * NEWS: - * RELEASE: - * gst-plugins-bad.doap: - * meson.build: - Release 1.26.4 +2025-02-26 01:20:23 +0900 Seungha Yang <seungha@centricular.com> + + * ext/nvdswrapper/gstnvdsdewarp.cpp: + nvdsdewarp: Add support for output resizing + ... 
and adding "add-borders" property + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8558> + +2025-05-05 14:44:54 +0200 Stéphane Cerveau <scerveau@igalia.com> + + * ext/vulkan/vkdownload.c: + * ext/vulkan/vkdownload.h: + vkdownload: implement decide_allocation virtual method + In the case of caps change such as frame size, a new buffer pool should be + created according to this new caps via the decide_allocation() vmethod. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8931> + +2025-06-04 16:10:20 -0400 Olivier Crête <olivier.crete@collabora.com> + + * docs/plugins/gst_plugins_cache.json: + * ext/analyticsoverlay/gstobjectdetectionoverlay.c: + analyticsoverlay: Add expire-overlay property + If there has been no new data for this amount of time, just + expire the overlay and don't send one. Otherwise, it keeps sending + the old one for the following frames. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9409> + +2025-07-04 23:34:13 +0900 Seungha Yang <seungha@centricular.com> + + * sys/nvcodec/gstnvencoder.cpp: + * sys/nvcodec/gstnvencoder.h: + * sys/nvcodec/gstnvh264encoder.cpp: + * sys/nvcodec/gstnvh265encoder.cpp: + nvencoder: Always allow interlaced stream + ... even if hardware does not support interlaced encoding at bitstream level. + Although interlacing information is not written in the bitstream, + that information can be signalled via container, thus allow interlaced + stream. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9328> + +2025-07-19 01:50:03 +0900 Seungha Yang <seungha@centricular.com> + + * sys/d3d12/gstd3d12swapchainsink.cpp: + * tests/examples/d3d12/d3d12swapchainsink-win32.cpp: + d3d12swapchainsink: Add last-rendered-sample action signal + Add a new action signal to allow applications to capture + the most recently rendered frame directly from the swapchain + back buffer. 
+ Unlike the existing "last-sample" property, which exposes + the raw input sample before any sink-side processing, this + signal captures the final displayed image after any internal + image processing (e.g., UV remap, color balance, overlay) has been + applied. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9423> + +2025-05-26 13:47:39 +0200 Stéphane Cerveau <scerveau@igalia.com> + + * tests/check/libs/vkvideoencodebase.c: + * tests/check/libs/vkvideoencodeh264.c: + * tests/check/libs/vkvideoencodeh265.c: + tests: vkh26xenc: use vkvideoencodebase + To avoid duplicating code, use vkvideoencodebase.c + Code cleanup and function clarifications. + Fix leaks in case of multiple device. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9077> + +2025-07-15 20:10:25 -0400 Olivier Crête <olivier.crete@collabora.com> + + * ext/tflite/gsttflite.c: + * ext/tflite/gsttflitevsiinference.c: + * ext/tflite/gsttflitevsiinference.h: + * ext/tflite/meson.build: + * meson.options: + tflite: Add support for VSI delegate + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9410> + +2025-07-17 17:14:06 -0400 Olivier Crête <olivier.crete@collabora.com> + + * gst/tensordecoders/gstclassifiertensordecoder.c: + classifiertensordecoder: Use utility functions to get tensors + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9419> + +2025-07-17 17:12:29 -0400 Olivier Crête <olivier.crete@collabora.com> + + * gst/tensordecoders/gstclassifiertensordecoder.c: + classifiertensordecoder: Handle error cases better with labels file + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9419> + +2025-07-17 16:47:06 -0400 Olivier Crête <olivier.crete@collabora.com> + + * gst/tensordecoders/gstssdobjectdetector.c: + ssdobjectdetector: Validate tensor type and dimensions + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9419> + 
+2025-07-17 16:46:33 -0400 Olivier Crête <olivier.crete@collabora.com> + + * gst-libs/gst/analytics/gsttensor.c: + tensor: Print tensor name in debug name + It makes it easier to understand which one is rejected. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9419> + +2025-07-18 17:52:42 +0100 Tim-Philipp Müller <tim@centricular.com> + + * gst/unixfd/gstunixfdsink.c: + * gst/unixfd/gstunixfdsrc.c: + unixfd: fix and improve the example pipelines in the documentation + - Add a videoconvert element before the videosink so that the output + works no matter what format gets negotiated (A444_16LE for me) + - Specify a reasonable video format and size with a capsfilter, so + we don't default to something silly like A444_16LE @ 240p. + - Add a timeoverlay element, so it's obvious when stoppping/restarting + the pipeline that the input stream is just picked up again from the + moment the consumer pipeline is restarted. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9422> + +2025-07-16 16:46:18 +0100 Philippe Normand <philn@igalia.com> + + * gst-libs/gst/transcoder/gsttranscoder.c: + * gst-libs/gst/transcoder/gsttranscoder.h: + transcoder: Fix warning/error APIs + The GError pointers were actually not out-parameters. :( + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9407> + +2025-07-16 16:44:59 +0100 Philippe Normand <philn@igalia.com> + + * gst-libs/gst/transcoder/gsttranscoder.c: + transcoder: Remove unused priv->bus variable + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9407> + +2025-07-16 16:43:36 +0100 Philippe Normand <philn@igalia.com> + + * gst-libs/gst/transcoder/gsttranscoder-signal-adapter.c: + transcoder: signal-adapter: Fix error/warning details access + The field names were missing in the gst_structure_get() calls... 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9407> + +2025-07-16 09:27:40 +0100 Philippe Normand <philn@igalia.com> + + * tests/check/gst-plugins-bad.supp: + check: Silence some OpenSSL memory leaks + The OpenSSL version shipping in Fedora 40 leaks memory, the issue is fixed in + F42. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9323> + +2025-07-03 12:06:22 +0100 Philippe Normand <philn@igalia.com> + + * docs/plugins/gst_plugins_cache.json: + * ext/dtls/gstdtlscertificate.c: + * ext/dtls/gstdtlsdec.c: + * ext/dtls/gstdtlssrtpdec.c: + dtls: Use ECDSA private key for default certificate + ECDSA is widely used in browsers and SFUs, some servers such as the ones using + BouncyCastle only accept certificates signed with ECDSA. + Based on closed MR https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2458 + Fixes #4516 + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9323> + +2025-07-16 15:58:27 -0400 Olivier Crête <olivier.crete@collabora.com> + + * gst-libs/gst/analytics/gsttensor.c: + * gst-libs/gst/analytics/gsttensor.h: + * gst-libs/gst/analytics/gsttensormeta.c: + * gst-libs/gst/analytics/gsttensormeta.h: + * gst/tensordecoders/gstfacedetectortensordecoder.c: + tensormeta: Check dimensions when retrieving tensor + Modify the API to retrieve the tensor meta to check for the dimensions + as well. + Also fix an API mistake, the buffer whose dimensions should be checheck + is the one inside the GstTensor, not another buffer some outside. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9411> + +2025-07-16 11:18:33 -0400 Olivier Crête <olivier.crete@collabora.com> + + * gst-libs/gst/analytics/gsttensormeta.c: + tensormeta: Don't crash on invalid tensor name + It's a valid case to check for an existing tensor. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9411> + +2025-07-10 18:23:30 +0200 Stéphane Cerveau <scerveau@igalia.com> + + * ext/vulkan/vkupload.c: + vkupload: fix the refactored frame copy + When refactoring the code in 743c425f64, + the wrong GstBuffer has been used to copy to, + leading to a failing frame copy. + The bug has been discovered running + elements_vkcolorconvert. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9373> + +2025-07-14 22:58:25 +0900 Seungha Yang <seungha@centricular.com> + + * gst-libs/gst/d3d12/gstd3d12converter-private.h: + * gst-libs/gst/d3d12/gstd3d12converter.cpp: + * sys/d3d12/gstd3d12swapchainsink.cpp: + * tests/examples/d3d12/d3d12swapchainsink-win32.cpp: + d3d12swapchainsink: Update uv-remap signal to support background color + Allow per viewport background color setting + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9393> + +2025-07-03 19:40:45 +0300 Sebastian Dröge <sebastian@centricular.com> + + * gst-libs/gst/analytics/gsttensor.c: + tensor: Clarify meaning of the dimensions array in the docs + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9325> + +2025-03-06 18:07:08 -0500 Olivier Crête <olivier.crete@collabora.com> + + * gst-libs/gst/analytics/gstanalyticsmeta.c: + analyticsmeta: Remove incorrect check + The value can be NULL which is the wildcard + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9413> + +2025-07-08 11:15:31 +0530 raghu447 <raghavendra.rao@collabora.com> + + * docs/plugins/gst_plugins_cache.json: + * gst/tensordecoders/gstfacedetectortensordecoder.c: + tensordecoder: rename facedetector element + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9344> + +2025-07-09 07:50:43 -0400 Julian Bouzas <julian.bouzas@collabora.com> + + * gst/videoparsers/gsth266parse.c: + * gst/videoparsers/gsth266parse.h: + h266parse: Parse and 
attach LCEVC metadata to buffers if present + Similar to h264parse and h265parse, this patch enhances the element to parse + LCEVC enhancement data from SEI, and attach it to output buffers as GstLcevcMeta. + The 'lcevc' field in the output caps is also set to TRUE or FALSE depending on + whether LCEVC data is present or not. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9365> + +2025-07-09 07:48:32 -0400 Julian Bouzas <julian.bouzas@collabora.com> + + * gst-libs/gst/codecparsers/gsth266parser.c: + * gst-libs/gst/codecparsers/gsth266parser.h: + * gst/videoparsers/gsth266parse.c: + h266parse: Parse and process SEI registered user data + Similar to h264parse and h265parse, this patch improves the element to parse + the SEI registered user data from NAL units. The core structure of H266 SEI for + ITU-T T.35 is the same as the other parsers, so we can re-use the same logic. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9365> + +2025-07-15 01:29:46 +0900 Seungha Yang <seungha@centricular.com> + + * tests/examples/d3d12/d3d12swapchainsink-win32.cpp: + examples: d3d12swapchainsink: Add support for force-aspect-ratio change + Adding keyboard control for "force-aspect-ratio" property change + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9392> + +2025-07-15 01:28:40 +0900 Seungha Yang <seungha@centricular.com> + + * sys/d3d12/gstd3d12swapchainsink.cpp: + d3d12swapchainsink: Fix force-aspect-ratio change in playing state + Set output updated flag so that viewport can be calculated again + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9392> + +2025-07-14 15:07:37 +0300 Sebastian Dröge <sebastian@centricular.com> + + * gst-libs/gst/analytics/analytics.h: + analytics: Include new batch meta in the single-include header + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9391> + +2025-06-19 13:24:32 +0300 Sebastian Dröge 
<sebastian@centricular.com> + + * gst-libs/gst/analytics/gstanalyticsbatchmeta.c: + * gst-libs/gst/analytics/gstanalyticsbatchmeta.h: + * gst-libs/gst/analytics/meson.build: + analytics: Add GstAnalyticsBatchMeta for batches of buffers from one or more streams + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9282> 2025-07-09 02:52:27 -0400 Doug Nazar <nazard@nazar.ca> @@ -1265,14 +5913,14 @@ Previously the socket would be created in the thread, which take some time to start. As the tests were so short they would usually pass as they don't actually use the socket. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9374> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9364> 2025-07-09 02:51:11 -0400 Doug Nazar <nazard@nazar.ca> * tests/check/elements/avtpcrfcheck.c: * tests/check/elements/avtpcrfsync.c: avtp: crf: tests: Only run tests if packet socket is available - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9374> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9364> 2025-07-09 00:47:58 +0200 Mathieu Duponchelle <mathieu@centricular.com> @@ -1282,7 +5930,7 @@ the message flow chart as documented in the "spec" always has the server sending it first, and the client replying to it on reception of the Set Peer Bandwidth, which we do since 286a3829b637. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9372> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9360> 2025-07-09 00:44:36 +0200 Mathieu Duponchelle <mathieu@centricular.com> @@ -1294,7 +5942,7 @@ ``` and does not require a second component to the path, adapt our code to allow using such URLs as `tcUrl`. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9372> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9360> 2025-07-09 12:21:31 +0200 Piotr Brzeziński <piotr@centricular.com> @@ -1306,14 +5954,32 @@ were just uinitialized memory previously, we'd incorrectly end up with main or main-10 when the encoder was in fact giving us 4:2:2 10bit output. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9371> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9348> -2025-07-07 11:59:18 +0200 Hanna Weiß <hweiss@igalia.com> +2025-07-08 10:13:11 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> - * gst-libs/gst/vulkan/gstvkfullscreenquad.c: - vulkan: Fix drawing too many triangles in fullscreenquad - was using a index buffer for triangle list but drawn as strip - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9370> + * gst-libs/gst/wayland/gstwlwindow.c: + wayland: window: Fix next video info leak + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9353> + +2025-07-08 10:09:39 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> + + * ext/gtk/gstgtkwaylandsink.c: + * ext/wayland/gstwaylandsink.c: + * ext/wayland/gstwaylandsink.h: + * gst-libs/gst/wayland/gstwlwindow.c: + * gst-libs/gst/wayland/gstwlwindow.h: + waylandsink: Parse and set the HDR10 metadata + Basically whenever the compositor have support for it, and the caps includes it, + set the mastering display and light content level information. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9353> + +2025-07-08 10:08:57 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> + + * gst-libs/gst/wayland/gstwldisplay.c: + * gst-libs/gst/wayland/gstwldisplay.h: + wayland: display: Detect HDR10 metadata feature + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9353> 2025-06-17 06:38:59 -0400 Doug Nazar <nazard@nazar.ca> @@ -1321,14 +5987,40 @@ openh264: Ensure src_pic is initialized before use valgrind was showing reads of uninitialized memory and the library examples all memset the structure before use. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9362> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9352> + +2025-05-22 14:41:30 -0400 Julian Bouzas <julian.bouzas@collabora.com> + + * gst/videoparsers/gsth265parse.c: + * gst/videoparsers/gsth265parse.h: + h265parse: Parse and attach LCEVC metadata to buffers if present + Similar to h264parse, this patch enhances the element to parse LCEVC enhancement + data from SEI, and attach it to output buffers as GstLcevcMeta. The 'lcevc' + field in the output caps is also set to TRUE or FALSE depending on whether LCEVC + data is present or not. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9335> + +2025-07-08 20:00:07 +0100 Tim-Philipp Müller <tim@centricular.com> + + * meson.build: + Back to development after 1.27.1 + +=== release 1.27.1 === + +2025-07-08 19:55:15 +0100 Tim-Philipp Müller <tim@centricular.com> + + * NEWS: + * RELEASE: + * gst-plugins-bad.doap: + * meson.build: + Release 1.27.1 2025-07-07 10:12:52 +1000 Matthew Waters <matthew@centricular.com> * gst-libs/gst/vulkan/gstvkfullscreenquad.c: vulkanfullscreenquad: add locks for synchronisation Now all API can be accessed from any thread. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9357> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9337> 2025-05-31 03:35:27 -0400 Doug Nazar <nazard@nazar.ca> @@ -1337,50 +6029,171 @@ It's possible that the callback is already scheduled to run on another thread when we unschedule it during dispose and we would then access a freed object. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9347> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9151> 2025-05-30 19:15:56 -0400 Doug Nazar <nazard@nazar.ca> * tests/check/elements/h266parse.c: h266parse: test: Pass correct size argument to va_arg function sizeof(int) != sizeof (gsize) - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9347> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9151> 2025-05-30 15:23:03 -0400 Doug Nazar <nazard@nazar.ca> * tests/check/libs/analyticsmeta.c: analytics: tests: Copy correct size of array to buffer - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9347> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9151> 2025-05-30 15:20:41 -0400 Doug Nazar <nazard@nazar.ca> * sys/decklink/gstdecklink.cpp: decklink: Fix a memory leak - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9347> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9151> 2025-05-30 15:21:58 -0400 Doug Nazar <nazard@nazar.ca> * tests/check/elements/webrtcbin.c: webrtc: tests: Fix a few memory leaks - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9347> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9151> 2025-05-30 15:20:02 -0400 Doug Nazar <nazard@nazar.ca> * gst/camerabin2/gstcamerabin2.c: camerabin: Fix a memory leak - Part-of: 
<https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9347> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9151> + +2025-07-07 11:59:18 +0200 Hanna Weiß <hweiss@igalia.com> + + * gst-libs/gst/vulkan/gstvkfullscreenquad.c: + vulkan: Fix drawing too many triangles in fullscreenquad + was using an index buffer for a triangle list but drawing it as a strip + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9338> + +2025-07-07 15:16:32 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> + + * docs/plugins/gst_plugins_cache.json: + * ext/gtk/gstgtkwaylandsink.c: + gtkwaylandsink: Make the rotate property GST_PARAM_MUTABLE_PLAYING + This matches the change we made to waylandsink. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9283> + +2025-06-25 19:42:59 +0200 Michael Olbrich <m.olbrich@pengutronix.de> + + * ext/wayland/gstwaylandsink.c: + * ext/wayland/gstwaylandsink.h: + gstwaylandsink: add some locking documentation + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9283> + +2025-06-25 16:21:40 +0200 Michael Olbrich <m.olbrich@pengutronix.de> + + * docs/plugins/gst_plugins_cache.json: + * ext/wayland/gstwaylandsink.c: + gstwaylandsink: add GST_PARAM_MUTABLE_PLAYING flag for more properties + The fullscreen state and rotate method can be changed while the element is + playing, so add the GST_PARAM_MUTABLE_PLAYING flag to those properties to + indicate this. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9283> + +2025-06-25 16:17:58 +0200 Michael Olbrich <m.olbrich@pengutronix.de> + + * ext/wayland/gstwaylandsink.c: + waylandsink: make sure self->window is not NULL before using it + self->window is created with the first frame, so it is not available when + properties are set during construction of the element. + Skip calling gst_wl_window_ensure_fullscreen() in this case. 
+ The window is already constructed with the current configured fullscreen state, + nothing else is needed here. + Without this, running e.g. 'gst-launch-1.0 -v videotestsrc ! waylandsink + fullscreen=true' will result in: + GStreamer-Wayland-CRITICAL **: 14:11:19.921: gst_wl_window_ensure_fullscreen: assertion 'self' failed + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9283> + +2025-07-02 18:39:20 +0200 Olivier Crête <olivier.crete@collabora.com> + + * ext/webrtc/gstwebrtcbin.c: + webrtcbin: Add RTX/FEC for each relevant payload type + When sending an answer with multiple codecs, we need to add the RTX and FEC + payload for each codec + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9031> + +2025-06-15 23:17:17 +0900 Seungha Yang <seungha@centricular.com> + + * sys/hip/gsthipmemorycopy.cpp: + hipmemorycopy: Use stream associated with buffer + ... instead of the global device stream. The memory object might hold + a different stream. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9281> + +2025-06-15 23:09:17 +0900 Seungha Yang <seungha@centricular.com> + + * sys/hip/gsthipconverter.cpp: + hipconverter: Avoid unnecessary sync + If input and output buffers are running on the same stream, + record event instead of sync + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9281> + +2025-06-15 21:34:36 +0900 Seungha Yang <seungha@centricular.com> + + * sys/hip/gsthipbufferpool.cpp: + * sys/hip/gsthipmemory.cpp: + * sys/hip/gsthipmemory.h: + hipmemory: Allow lazy sync + Store recorded hip event and wait for sync later if needed + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9281> + +2025-06-15 21:29:19 +0900 Seungha Yang <seungha@centricular.com> + + * sys/hip/gsthip.h: + * sys/hip/gsthip_fwd.h: + * sys/hip/gsthipevent.cpp: + * sys/hip/gsthipevent.h: + * sys/hip/gsthiploader.cpp: + * sys/hip/gsthiploader.h: + * sys/hip/gsthipstream.cpp: + * sys/hip/gsthipstream.h: + * sys/hip/meson.build: + * sys/hip/stub/driver_types.h: + * sys/hip/stub/hip/hip_runtime_api.h: + hip: Add GstHipEvent object + hip event handle wrapper object + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9281> + +2025-06-15 19:48:54 +0900 Seungha Yang <seungha@centricular.com> + + * sys/hip/gsthipcompositor.cpp: + * sys/hip/gsthipconverter.cpp: + * sys/hip/gsthipmemory.cpp: + * sys/hip/gsthipmemory.h: + * sys/hip/gsthipmemorycopy.cpp: + hip: Use non-default stream + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9281> + +2025-06-15 19:06:28 +0900 Seungha Yang <seungha@centricular.com> + + * sys/hip/gsthip.h: + * sys/hip/gsthip_fwd.h: + * sys/hip/gsthipdevice.cpp: + * sys/hip/gsthipdevice.h: + * sys/hip/gsthiploader.cpp: + * sys/hip/gsthiploader.h: + * sys/hip/gsthipstream.cpp: + * sys/hip/gsthipstream.h: + * sys/hip/meson.build: + hip: Add GstHipStream object + Adding hip stream 
abstraction layer + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9281> 2025-05-28 08:45:40 -0400 Doug Nazar <nazard@nazar.ca> * gst/codecalpha/gstalphacombine.c: alphacombine: fix memory leaks - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9343> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9115> 2025-05-28 08:44:01 -0400 Doug Nazar <nazard@nazar.ca> * gst/transcode/gst-cpu-throttling-clock.c: cpu-throttling-clock: free clock when finished - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9343> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9115> 2025-07-05 03:10:51 +0900 Seungha Yang <seungha@centricular.com> @@ -1388,7 +6201,7 @@ d3d12screencapture: Add support for monitor add/remove in device provider Update device list on WM_DISPLAYCHANGE event Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4521 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9333> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9330> 2025-07-04 10:56:27 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> @@ -1399,7 +6212,7 @@ bug in how vp9parse interacts with parsebin, preventing downstream negotiation of alignment from working. This reverts to being stuck using frame alignment always, which fortunately works with libvpx, though less efficiently. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9331> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9329> 2025-05-07 14:02:05 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> @@ -1409,7 +6222,7 @@ converting from super frame to frame, mark all frames as decode only except the last one. This fixes vp90-2-22-svc_1280x720_3.ivf conformance test with stateless decoders such as VA. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9327> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8944> 2025-05-07 13:48:04 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> @@ -1420,19 +6233,184 @@ Fixes vp90-2-22-svc_1280x720_3.ivf conformance test when using libvpx based decoder. Fixes #4371 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9327> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8944> 2025-05-07 10:29:10 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> * gst/videoparsers/gstvp9parse.c: vp9parse: Fix typo Aligment vs Alignment - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9327> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8944> + +2025-06-23 15:28:30 -0400 Julian Bouzas <julian.bouzas@collabora.com> + + * gst/videoparsers/gsth264parse.c: + * gst/videoparsers/gsth264parse.h: + h264parse: Forward LCEVC caps + This makes sure the parser exposes lcevc=true output caps if the demuxer + attached LCEVC data to video frames. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9321> + +2024-11-20 18:46:54 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * tests/examples/meson.build: + * tests/examples/vulkan/meson.build: + * tests/examples/vulkan/vulkanenc.c: + examples: vulkan encoder test + Similar to d3d11 and va. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7197> + +2024-12-10 19:14:12 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * docs/plugins/gst_plugins_cache.json: + * ext/vulkan/gstvulkan.c: + * ext/vulkan/meson.build: + * ext/vulkan/vkh264enc.c: + * ext/vulkan/vkh264enc.h: + vulkanh264enc: add Vulkan H264 encoder + Add an element to encode h264 content using the vulkan API. 
+ Co-authored-by: Stéphane Cerveau <scerveau@igalia.com> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7197> + +2024-12-10 18:53:36 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * docs/plugins/gst_plugins_cache.json: + * ext/vulkan/base/gsth264encoder.c: + * ext/vulkan/base/gsth264encoder.h: + * ext/vulkan/meson.build: + vulkan: h264encoder: introduce base class + H.264 base class oriented for hardware accelerated encoders, such as Vulkan, VA + and others. + 1. It can be parametrized for hardware limits, such as list sizes, b-frames + support, etc. + 2. It produces a GOP structure map (IDR, R/I/B, ...) + 3. It proposes parameter sets and other structures such as bitrate limits. + Subclasses can modify those structures. + 4. It calls the subclass encode virtual method implementation. + It doesn't handle rate control algorithms or other encoding quality mechanisms. + For a deeper introduction to the class there was a lightning talk at the GstConf + 2024: <https://www.youtube.com/watch?v=-fQY54KHH38> + Co-authored-by: He Junyan <junyan.he@intel.com> + Co-authored-by: Michael Grzeschik <m.grzeschik@pengutronix.de> + Co-authored-by: Stéphane Cerveau <scerveau@igalia.com> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7197> + +2025-07-03 11:11:38 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * gst-libs/gst/vulkan/gstvkencoder-private.c: + vkencoder-private: free data on error + Co-authored-by: Stéphane Cerveau <scerveau@igalia.com> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7197> + +2024-12-17 19:15:06 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * gst-libs/gst/vulkan/gstvkencoder-private.c: + * gst-libs/gst/vulkan/gstvkencoder-private.h: + vkencoder-private: add gst_vulkan_encoder_rc_mode() + To get the updated rate control mode. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7197> + +2024-12-09 17:59:30 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * gst-libs/gst/vulkan/gstvkencoder-private.c: + vkencoder-private: fix array layer for layered DPB + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7197> + +2025-06-09 17:27:24 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * gst-libs/gst/vulkan/gstvkencoder-private.c: + vkencoder-private: free err when bailing + And log out the error message from the Vulkan call. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7197> + +2025-05-29 13:20:59 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * gst-libs/gst/vulkan/gstvkdevice.c: + vulkan: ycbcr conversion extension got promoted in 1.1.0 + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7197> 2025-07-02 22:38:41 +0300 Sebastian Dröge <sebastian@centricular.com> * gst/tensordecoders/gstssdobjectdetector.c: ssdobjectdetector: Use correct tensor data index for the scores - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9324> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9322> + +2025-06-29 22:52:37 +0900 Seungha Yang <seungha@centricular.com> + + * sys/wasapi2/gstwasapi2ringbuffer.cpp: + wasapi2: Fix various MinGW build warnings + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9307> + +2025-06-29 20:45:51 +0900 Seungha Yang <seungha@centricular.com> + + * sys/wasapi2/AsyncOperations.h: + * sys/wasapi2/gstwasapi2client.cpp: + * sys/wasapi2/gstwasapi2client.h: + * sys/wasapi2/gstwasapi2util.cpp: + * sys/wasapi2/meson.build: + wasapi2: Remove unused WinRT deps and implementations + Removing unused WinRT API based implementations + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9307> + +2025-06-29 01:46:44 +0900 Seungha Yang 
<seungha@centricular.com> + + * sys/wasapi2/gstwasapi2object.cpp: + * sys/wasapi2/gstwasapi2object.h: + * sys/wasapi2/gstwasapi2ringbuffer.cpp: + * sys/wasapi2/gstwasapi2ringbuffer.h: + * sys/wasapi2/gstwasapi2sink.c: + * sys/wasapi2/gstwasapi2src.c: + * sys/wasapi2/meson.build: + wasapi2: Port to IMMDevice based device selection + Because of a couple of issues reported related to WinRT device + enumeration, porting to IMMDevice device id based device selection. + Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4311 + Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3936 + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9307> + +2025-06-27 21:36:53 +0900 Seungha Yang <seungha@centricular.com> + + * sys/wasapi2/gstwasapi2activator.cpp: + * sys/wasapi2/gstwasapi2activator.h: + * sys/wasapi2/gstwasapi2client.cpp: + * sys/wasapi2/gstwasapi2device.c: + * sys/wasapi2/gstwasapi2device.cpp: + * sys/wasapi2/gstwasapi2device.h: + * sys/wasapi2/gstwasapi2enumerator.cpp: + * sys/wasapi2/gstwasapi2enumerator.h: + * sys/wasapi2/gstwasapi2util.cpp: + * sys/wasapi2/gstwasapi2util.h: + * sys/wasapi2/meson.build: + * sys/wasapi2/plugin.cpp: + wasapi2: Implement IMMDeviceEnumerator based enumerator + ... and merge wasapi2{capture,render}deviceprovider into single + wasapi2deviceprovider since we can enumerate input/output audio + devices at once using IMMDeviceEnumerator + This is a preparation for complete porting to Win32 API + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9307> + +2025-06-05 11:24:34 +0100 James Cowgill <james.cowgill@blaize.com> + + * sys/v4l2codecs/gstv4l2decoder.c: + v4l2codecs: Use prop_offset in gst_v4l2_decoder_install_properties + Install properties at the given offset as intended instead of at 0. + Currently there are no elements with any properties, so this has no + effect. This change is needed if any element adds properties in the + future. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9179> + +2025-06-27 10:51:05 +0200 Edward Hervey <edward@centricular.com> + + mpegtsdemux: Add property to disable skew corrections + This is for cases where: + * We *do* want to refer to the PCR stream to figure out global positioning, gap + detection, wrapover correction. + * But we do not want to apply any skew correction to the output + This is useful for cases where: + * the input stream has already been clock-corrected (for example with + mpegtslivesrc) + * or where the output doesn't require synchronization against a clock (ex: for + storage) + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9301> 2025-06-27 10:06:34 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> @@ -1441,7 +6419,7 @@ It is not possible to do frame cropping when the DMABuf caps feature is negotiated. The VideoInfo size is zero, resulting in empty destination buffers, and the video convert library may not understand what the format actually is. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9315> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9305> 2025-06-27 10:00:37 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> @@ -1450,7 +6428,7 @@ If the conformance window does not require cropping the top or left of the window, we can use GstVideoMeta to crop in a zero-copy fashion. If a copy is needed, the frame copy can also handle it, and is a lot faster. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9315> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9305> 2025-06-27 09:49:00 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> @@ -1463,7 +6441,7 @@ v4l2codecs: dec: Remove has_videometa member Now that the code is properly located, this member is not needed anymore. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9315> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9305> 2025-06-27 09:37:06 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> @@ -1474,25 +6452,41 @@ * sys/v4l2codecs/gstv4l2codecvp8dec.c: * sys/v4l2codecs/gstv4l2codecvp9dec.c: v4l2codecs: dec: Move copy_frames logic inside decide_allocation - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9315> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9305> + +2025-02-06 22:49:35 +0900 Seungha Yang <seungha@centricular.com> + + * ext/webrtcdsp/meson.build: + webrtcdsp: Respect disabled feature option + Don't try to build this plugin if it's explicitly disabled + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8424> + +2025-06-17 15:24:58 +0530 Vineet Suryan <vineet.suryan@collabora.com> + + * ext/onnx/README.md: + onnx: Use system installed Eigen to avoid hash mismatch failure + Eigen’s download for the commit referenced by ONNX Runtime v1.16.3 was + updated upstream, so the SHA-256 embedded in ORT’s CMake scripts no + longer matches and the build aborts with a hash-mismatch error. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9229> 2025-06-22 03:23:59 -0400 Doug Nazar <nazard@nazar.ca> * ext/avtp/gstavtpvfdepaybase.c: avtp: Fix memory leak - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9314> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9273> 2025-06-22 03:22:27 -0400 Doug Nazar <nazard@nazar.ca> * ext/srt/gstsrtsrc.c: srt: Fix warning about uninitialized memory - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9314> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9273> 2025-06-22 03:20:35 -0400 Doug Nazar <nazard@nazar.ca> * gst-libs/gst/codecparsers/gstvc1parser.c: vc1parser: Fix warning about printing uninitialized variables - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9314> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9273> 2025-06-22 03:16:55 -0400 Doug Nazar <nazard@nazar.ca> @@ -1500,14 +6494,27 @@ proxysrc: Fix order freeing pads Free pads from bottom of parent tree first else with GST_DEBUG enabled it would access freed memory printing object info. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9314> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9273> 2025-06-22 03:14:39 -0400 Doug Nazar <nazard@nazar.ca> * tests/check/elements/avtpcvfpay.c: avtpcvfpay: tests: Initialize codec memory If GST_DEBUG was enabled we would print uninitialized memory - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9314> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9273> + +2025-06-30 11:56:49 +0300 Sebastian Dröge <sebastian@centricular.com> + + * gst-libs/gst/analytics/gsttensor.c: + analytics: Fix docs of gst_tensor_check_type() + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9311> + +2025-06-26 18:19:27 +0300 Sebastian Dröge <sebastian@centricular.com> + + * gst-libs/gst/analytics/gsttensor.c: + * gst-libs/gst/analytics/gsttensormeta.c: + analytics: Fix transfer annotations of gst_tensor_check_type() + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9293> 2025-06-25 09:30:15 -0600 David Monge <david.monge@ridgerun.com> @@ -1516,22 +6523,15 @@ The PMT descriptor was owned by the stream object but also added to the descriptors array without copying, leading to a double free and core dump during cleanup - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9303> - -2025-06-26 21:29:34 +0100 Tim-Philipp Müller <tim@centricular.com> - - * meson.build: - Back to development after 1.26.3 - -=== release 1.26.3 === + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9285> -2025-06-26 21:25:24 +0100 Tim-Philipp Müller <tim@centricular.com> +2025-06-17 10:56:03 -0400 Thibault Saunier <tsaunier@igalia.com> - * NEWS: - * RELEASE: - * gst-plugins-bad.doap: - * meson.build: - Release 1.26.3 + * gst-libs/gst/cuda/gstcudanvrtc.cpp: + cuda: Lower debug log level on nvrtc compilation failure + We have a 
fallback to compile with cubin and that compilation failure + might very well not be fatal. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9232> 2025-06-24 21:40:26 +0800 He Junyan <junyan.he@intel.com> @@ -1543,7 +6543,29 @@ 2. Should check max_sublayers_minus1, no more than GST_H266_MAX_SUBLAYERS-1 Fixes ZDI-CAN-27381, CVE-2025-6663 Closes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4503 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9295> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9294> + +2025-06-26 10:22:42 +0200 Johan Sternerup <johast@axis.com> + + * sys/hip/gsthipmemorycopy.cpp: + hip: Add missing #ifdef + So that it compiles without gstreamer-gl. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9289> + +2025-06-12 12:45:57 +0200 Michael Olbrich <m.olbrich@pengutronix.de> + + * docs/plugins/gst_plugins_cache.json: + * ext/wayland/gstwaylandsink.c: + * ext/wayland/gstwaylandsink.h: + * gst-libs/gst/wayland/gstwlwindow.c: + * gst-libs/gst/wayland/gstwlwindow.h: + waylandsink: Add force-aspect-ratio property + Similar to and inspired by glimagesink, xvimagesink and others. + The waylandsink never transforms the buffer in any way but delegates this to the + Wayland compositor with the Wayland buffer transform API. + Rotation and window size are already supported, so this just changes the video + surface geometry that is communicated to the Wayland compositor. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9210> 2025-06-25 16:24:44 +0200 Jan Alexander Steffens (heftig) <jan.steffens@ltnglobal.com> @@ -1556,25 +6578,228 @@ being `NULL`. - The code forgot to unmap the buffer if it decided to ignore it. 
Fixes: 0a562a92d7ee38d8919d1b802add84d3c93b59eb - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9286> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9284> + +2025-06-01 00:02:16 +0900 Seungha Yang <seungha@centricular.com> + + * sys/d3d12/gstd3d12graphicscapture.cpp: + * sys/d3d12/gstd3d12graphicscapture.h: + * sys/d3d12/plugin.cpp: + d3d12screencapturesrc: Fix OS handle leaks/random crash in WGC mode + Multiple DispatcherQueues per thread seems to be causing OS handle leak + and random crashes were observed. Instead of creating + thread/DispatcherQueue per GstD3D12GraphicsCapture object, + reuse only single thread and DispatcherQueue + Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4351 + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9153> + +2025-05-23 07:12:40 -0400 Doug Nazar <nazard@nazar.ca> + + * tests/check/libs/vkvideoencodeh264.c: + * tests/check/libs/vkvideoencodeh265.c: + vkvideoencodeh26x: ensure we call teardown() for each test + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9062> + +2025-06-25 00:45:39 +0900 Seungha Yang <seungha@centricular.com> + + * docs/plugins/gst_plugins_cache.json: + * sys/hip/plugin.cpp: + hip: Add plugin docs + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8923> + +2025-06-09 23:09:46 +0900 Seungha Yang <seungha@centricular.com> + + * sys/hip/gsthipcompositor.cpp: + * sys/hip/gsthipcompositor.h: + * sys/hip/meson.build: + * sys/hip/plugin.cpp: + hip: Add hipcompositor element + Feature-wise it's the same as cudacompositor + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8923> + +2025-06-09 22:11:36 +0900 Seungha Yang <seungha@centricular.com> + + * sys/hip/gsthiploader.cpp: + * sys/hip/gsthiploader.h: + hip: Load memset symbols + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8923> + 
+2025-06-09 21:08:24 +0900 Seungha Yang <seungha@centricular.com> + + * sys/hip/gsthipmemorycopy.cpp: + hipmemorycopy: Add support for GL interop + Enable memory copy between HIP and GL + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8923> + +2025-06-03 19:51:47 +0900 Seungha Yang <seungha@centricular.com> + + * sys/hip/gsthip-interop-gl.h: + * sys/hip/gsthip-interop.cpp: + * sys/hip/gsthip-interop.h: + * sys/hip/gsthip.h: + * sys/hip/gsthip_fwd.h: + * sys/hip/meson.build: + hip: Add GstHipGraphicsResource object + hipGraphicsResource_t wrapper object for graphics api interop + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8923> + +2025-06-03 16:56:09 +0900 Seungha Yang <seungha@centricular.com> + + * sys/hip/gsthiploader-gl.h: + * sys/hip/gsthiploader.cpp: + * sys/hip/gsthiploader.h: + * sys/hip/meson.build: + * sys/hip/stub/cudaGL.h: + * sys/hip/stub/driver_types.h: + * sys/hip/stub/hip/hip_gl_interop.h: + * sys/hip/stub/hip/hip_runtime_api.h: + hip: Load GL interop related symbols + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8923> + +2025-05-14 14:56:52 +0900 Seungha Yang <seungha@centricular.com> + + * sys/hip/gsthipconverter.cpp: + hip: Pass GPU arch to kernel compile option args + Pass current GPU arch to compile option instead of relying on auto + detection + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8923> + +2025-05-14 14:25:40 +0900 Seungha Yang <seungha@centricular.com> + + * meson.options: + * sys/hip/gsthipconverter.cpp: + * sys/hip/kernel/collect_ptx_headers.py: + * sys/hip/kernel/meson.build: + * sys/hip/meson.build: + hip: Add support for NVIDIA kernel precompile + ... 
with "hip-nvidia-precompile" and "hip-nvcc-arch" build options + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8923> + +2025-05-14 13:17:00 +0900 Seungha Yang <seungha@centricular.com> + + * meson.options: + * sys/hip/gsthipconverter.cpp: + * sys/hip/kernel/collect_hsaco_headers.py: + * sys/hip/kernel/converter-unpack.cu: + * sys/hip/kernel/converter.cu: + * sys/hip/kernel/meson.build: + * sys/hip/meson.build: + hip: Add support for AMD kernel precompile + Adding "hip-amd-precompile" build option. If enabled, AMD kernels + will be precompiled at build time. Also "hip-hipcc-arch" build option + (corresponding to --offload-arch hipcc option) is added + so that the user can specify the target GPU arch instead of auto-detection by hipcc + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8923> + +2025-05-12 19:45:55 +0900 Seungha Yang <seungha@centricular.com> + + * sys/hip/gsthipmemorycopy.cpp: + * sys/hip/gsthipmemorycopy.h: + * sys/hip/meson.build: + * sys/hip/stub/cudaD3D11.h: + * sys/hip/stub/cudaGL.h: + hip: Add support for memory copy between GstCuda and GstHip + Handle CUDA <-> HIP memory copy in hipupload and hipdownload elements + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8923> + +2025-05-07 06:32:11 +0900 Seungha Yang <seungha@centricular.com> + + * sys/hip/gsthip-enums.cpp: + * sys/hip/gsthip-enums.h: + * sys/hip/gsthip.h: + * sys/hip/gsthipbasefilter.cpp: + * sys/hip/gsthipconverter.cpp: + * sys/hip/gsthipdevice.cpp: + * sys/hip/gsthipdevice.h: + * sys/hip/gsthiploader.cpp: + * sys/hip/gsthiploader.h: + * sys/hip/gsthipmemory.cpp: + * sys/hip/gsthiprtc.cpp: + * sys/hip/gsthiprtc.h: + * sys/hip/gsthiputils.cpp: + * sys/hip/gsthiputils.h: + * sys/hip/kernel/converter.cu: + * sys/hip/meson.build: + * sys/hip/plugin.cpp: + * sys/hip/stub/cuda.h: + * sys/hip/stub/driver_types.h: + * sys/hip/stub/hip/nvidia_hip_runtime_api.h: + hip: Add support for NVIDIA + Adding HIP <-> 
CUDA translation layer like the HIP SDK does + but uses dlopen() for CUDA as well + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8923> + +2025-05-06 20:16:29 +0900 Seungha Yang <seungha@centricular.com> + + * sys/hip/gsthip.h: + * sys/hip/gsthipconverter.cpp: + * sys/hip/gsthipdevice.cpp: + * sys/hip/gsthiploader.cpp: + * sys/hip/gsthiploader.h: + * sys/hip/gsthipmemory.cpp: + * sys/hip/gsthiprtc.cpp: + * sys/hip/gsthiprtc.h: + * sys/hip/gsthiputils.cpp: + * sys/hip/meson.build: + * sys/hip/plugin.cpp: + * sys/hip/stub/hip/driver_types.h: + * sys/hip/stub/hip/hip_runtime.h: + * sys/hip/stub/hip/hip_runtime_api.h: + * sys/hip/stub/hip/hiprtc.h: + * sys/hip/stub/hip/texture_types.h: + hip: Remove build-time SDK dependency + Use dlopen at runtime + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8923> + +2025-04-26 03:24:37 +0900 Seungha Yang <seungha@centricular.com> + + * meson.options: + * sys/hip/gsthip.h: + * sys/hip/gsthip_fwd.h: + * sys/hip/gsthipbasefilter.cpp: + * sys/hip/gsthipbasefilter.h: + * sys/hip/gsthipbufferpool.cpp: + * sys/hip/gsthipbufferpool.h: + * sys/hip/gsthipconverter.cpp: + * sys/hip/gsthipconverter.h: + * sys/hip/gsthipconvertscale.cpp: + * sys/hip/gsthipconvertscale.h: + * sys/hip/gsthipdevice.cpp: + * sys/hip/gsthipdevice.h: + * sys/hip/gsthipmemory.cpp: + * sys/hip/gsthipmemory.h: + * sys/hip/gsthipmemorycopy.cpp: + * sys/hip/gsthipmemorycopy.h: + * sys/hip/gsthiprtc.cpp: + * sys/hip/gsthiprtc.h: + * sys/hip/gsthiputils.cpp: + * sys/hip/gsthiputils.h: + * sys/hip/kernel/converter-unpack.cu: + * sys/hip/kernel/converter.cu: + * sys/hip/meson.build: + * sys/hip/plugin.cpp: + * sys/meson.build: + hip: Add AMD HIP plugin + Adding hipupload, hipdownload, and hipconvert family elements + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8923> 2025-06-21 07:12:29 -0400 Doug Nazar <nazard@nazar.ca> * ext/analyticsoverlay/gstobjectdetectionoverlay.c: 
analyticsoverlay: Fix memory leak - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9271> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9266> 2025-06-21 07:11:29 -0400 Doug Nazar <nazard@nazar.ca> * tests/check/elements/dashsink.c: dashsink: test: Minor cleanups - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9271> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9266> 2025-06-21 07:10:54 -0400 Doug Nazar <nazard@nazar.ca> * ext/dash/gstdashsink.c: dashsink: Fix memory leak - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9271> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9266> 2025-06-23 10:22:57 +1000 Matthew Waters <matthew@centricular.com> @@ -1583,7 +6808,7 @@ * sys/decklink/gstdecklinkvideosink.cpp: decklink/clock: remove clock_offset It is completely unused and only ever initialized to 0. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9269> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9267> 2025-06-18 14:20:32 +1000 Matthew Waters <matthew@centricular.com> @@ -1607,7 +6832,21 @@ in potentially large differences in the output internal time from gst_clock_unadjust_with_calibration(). 
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4197 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9269> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9267> + +2025-06-13 17:23:21 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * ext/vulkan/vkupload.c: + vulkanupload: refactor frame copy in a single function + Avoiding code duplication + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9222> + +2025-06-13 15:24:27 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * ext/vulkan/vkupload.c: + vulkanupload: use gst_video_frame_copy() for VulkanBuffer + There's no need of a custom copy. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9222> 2025-06-18 14:04:14 +0200 Edward Hervey <edward@centricular.com> @@ -1615,7 +6854,7 @@ tsdemux: Allow access unit parsing failures * Refactor the various Access Unit extraction calls into a single function * Allow the access unit parsing to fail, but emit a warning - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9258> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9246> 2025-06-16 18:38:30 +0530 Nirbheek Chauhan <nirbheek@centricular.com> @@ -1634,14 +6873,14 @@ a fallback. Also rank PRIMARY+1 the c2.android c2.exynos and c2.amlogic audio codecs alongside OMX.google, because they are known-good. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9257> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9225> 2025-06-17 03:32:24 +0530 Nirbheek Chauhan <nirbheek@centricular.com> * sys/androidmedia/gstamc.c: amc: Log under GST_FIXME for audio encoders We don't support audio encoders yet, so log that correctly. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9257> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9225> 2025-06-16 18:36:31 +0530 Nirbheek Chauhan <nirbheek@centricular.com> @@ -1651,13 +6890,13 @@ only printed on plugin registration. Fix printing of codec caps, since GST_PTR_FORMAT truncates the output in almost every case that I saw. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9257> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9225> 2025-06-16 18:34:39 +0530 Nirbheek Chauhan <nirbheek@centricular.com> * sys/androidmedia/gstamc.c: amc: Print error messages when registering plugins - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9257> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9225> 2025-02-20 21:59:31 +0000 Ben Butterworth <24711048+ben-xD@users.noreply.github.com> @@ -1665,17 +6904,7 @@ mpegts: handle MPEG2-TS with KLV metadata safely by preventing out of bounds Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3896 @slomo, as requested on https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3896#note_2780065 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9251> - -2024-07-26 14:23:10 +1000 Matthew Waters <matthew@centricular.com> - - * ext/webrtc/gstwebrtcbin.c: - webrtcbin: disconnect signal ICE handlers on dispose - It is entirely possible that the in progress may still provide some state - updates until the ICE object is destroyed, these state updates should - not really be done when webrtcbin is in the process of destroying itself - and access freed data. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9249> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8529> 2025-06-13 12:35:17 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> @@ -1687,7 +6916,7 @@ video meta either in the input and output buffers, or the default offset given the format and size. This patch also requests the video meta option for the output buffers. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9243> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9219> 2025-06-13 11:28:35 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> @@ -1704,35 +6933,167 @@ Then, the offset is set in the buffer video meta. In the case of single memory Vulkan images, the default offset is set in the video meta. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9243> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9219> 2025-06-13 11:27:49 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> * gst-libs/gst/vulkan/gstvkimagebufferpool.c: vkimagebufferpool: remove unused variable - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9243> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9219> -2024-12-13 19:08:30 +1100 Jan Schmidt <jan@centricular.com> +2024-07-26 14:23:10 +1000 Matthew Waters <matthew@centricular.com> - * gst/mpegtsdemux/tsdemux.c: - tsdemux: Send new-segment before GAP - If adding a sparse stream and sending a gap event to bring it - up to speed, make sure to send the new segment event first - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9235> + * ext/webrtc/gstwebrtcbin.c: + webrtcbin: disconnect signal ICE handlers on dispose + It is entirely possible that the in progress may still provide some state + updates until the ICE object is destroyed, these state updates should + 
not really be done when webrtcbin is in the process of destroying itself + and access freed data. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9216> + +2025-06-12 15:34:53 +0200 Stéphane Cerveau <scerveau@igalia.com> + + * gst-libs/gst/vulkan/gstvkerror.c: + vkerror: add invalid_video_std_parameters message + Add string to handle error related to the + codec standard parameters. + <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9212> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9212> + +2025-06-12 01:07:01 +0900 Seungha Yang <seungha@centricular.com> + + * tests/examples/d3d12/d3d12swapchainsink-win32.cpp: + * tests/examples/d3d12/meson.build: + examples: d3d12swapchainsink: Add uv-remap/redraw example + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9209> + +2025-06-11 22:42:22 +0900 Seungha Yang <seungha@centricular.com> + + * sys/d3d12/gstd3d12swapchainsink.cpp: + d3d12swapchainsink: Add uv-remap and redraw action signal + New uv-remap signal can be used for UV coordinate remap operation + in videosink, and redraw signal can allow updating view even in paused + state + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9209> + +2025-06-12 20:15:15 +0900 Seungha Yang <seungha@centricular.com> + + * gst-libs/gst/d3d12/gstd3d12converter-private.h: + * gst-libs/gst/d3d12/gstd3d12converter.cpp: + d3d12converter: Add support multiple UV remap in a single path + Add private methods for multiple UV remap operation + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9209> + +2025-04-08 16:12:46 +0200 Stéphane Cerveau <scerveau@igalia.com> + + * gst-libs/gst/codecparsers/gstav1parser.c: + * gst-libs/gst/codecparsers/gstav1parser.h: + * sys/nvcodec/gstnvav1dec.cpp: + * sys/v4l2codecs/gstv4l2codecav1dec.c: + parser: fix spelling of GstAV1SegmentationParams + Part-of: 
<https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8795> + +2025-06-12 11:09:42 -0400 Xavier Claessens <xclaessens@netflix.com> + + * ext/svtjpegxs/meson.build: + wraps: Add svtjpegxs from wrapdb + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9214> + +2025-06-12 11:45:33 -0300 L. E. Segovia <amy@amyspark.me> + + * ext/curl/gstcurlhttpsrc.c: + curl: Recover missing comment + See https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8974#note_2955585 + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9213> + +2025-06-09 13:05:47 +0100 Philippe Normand <philn@igalia.com> + + * ext/webrtc/gstwebrtcbin.c: + webrtcbin: Include all accepted media formats in SDP answers + Until this patch only the first format was added. + Fixes #4458 + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9117> 2025-06-10 22:17:46 +1000 Matthew Waters <matthew@centricular.com> * sys/decklink/gstdecklinkvideosink.cpp: decklinkvideosink: show preroll frame correctly Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4254 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9238> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9199> + +2024-11-13 11:03:30 +0100 Robert Mader <robert.mader@collabora.com> + + * gst-libs/gst/wayland/meson.build: + wayland: Add support for local protocols + This proved to be helpful for previous protocol experiments, so let's + upstream it. Inspired by the corresponding code in Weston. + Protocols need to be placed in a `protocols` subdirectory and can be + declared in the following way in `meson.build`: + ``` + 'color-management-v1', 'internal' , + ``` + Note the `v1` being part of the name. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9186> + +2024-05-10 18:24:06 +0200 Robert Mader <robert.mader@collabora.com> + + * gst-libs/gst/wayland/gstwldisplay.c: + * gst-libs/gst/wayland/gstwldisplay.h: + * gst-libs/gst/wayland/gstwlwindow.c: + wayland: Add basic colorimetrie support + Using the Wayland color-management and color-representation protocols. + The implementation queries supported values from the compositors and tries + to convert them into GstVideoColorimetry values. It currently *does not* + pass these upstream to decoders etc. as GstCaps for negotiation. + On the Wayland side it uses named transfer functions, named primaries, + matrices and ranges. The straight alpha mode is also set if supported + by the compositor. + On setting caps it translates the GstVideoColorimetry from the GstVideoInfo + back to into a Wayland parametric image description and color representation + for the video surface if possible. If a colorimetry is not fully + support, we bail out and if wayland objects already exist they get reset or + deleted. + Note that not all GstVideoColorimetry values are implemented yet. + Useful debug options: GST_DEBUG=wlwindow:4,wldisplay:4 + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6830> + +2024-12-11 15:40:24 +0100 Robert Mader <robert.mader@collabora.com> + + * gst-libs/gst/wayland/gstwlwindow.c: + wayland: wlwindow: Use GstWlWindow debug category + As probably intended - and Demote frame_redraw_cb log to debug + to make it less noisy. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6830> + +2025-05-15 13:47:35 +0200 Robert Mader <robert.mader@collabora.com> + + * gst-libs/gst/wayland/gstwlbuffer.h: + * gst-libs/gst/wayland/gstwldisplay.h: + * gst-libs/gst/wayland/gstwlwindow.h: + wayland: Turn wl objects into GstObjects + For better logging and locking support. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6830> + +2024-11-12 20:19:35 +0100 Robert Mader <robert.mader@collabora.com> + + * gst-libs/gst/wayland/meson.build: + wayland: Add color protocols + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6830> + +2024-12-13 19:08:30 +1100 Jan Schmidt <jan@centricular.com> + + * gst/mpegtsdemux/tsdemux.c: + tsdemux: Send new-segment before GAP + If adding a sparse stream and sending a gap event to bring it + up to speed, make sure to send the new segment event first + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8143> 2025-06-16 13:39:55 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> * tests/check/elements/vkupload.c: * tests/check/meson.build: test: vulkanupload unit test - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9228> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9218> 2025-06-13 14:36:25 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> @@ -1746,7 +7107,29 @@ use-case. This patch solve the regression by instantiating a different buffer pool depending on the output cap features, and configuring it accordingly. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9228> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9218> + +2025-06-04 17:52:01 -0400 Olivier Crête <olivier.crete@collabora.com> + + * ext/onnx/gstonnxclient.cpp: + * ext/onnx/gstonnxclient.h: + * ext/onnx/gstonnxinference.cpp: + onnx: Also implement stop to clean up session + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9176> + +2025-03-05 17:47:41 -0500 Olivier Crête <olivier.crete@collabora.com> + + * ext/onnx/gstonnxinference.cpp: + onnxinference: Clean up session creation logic + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9176> + +2025-06-10 14:41:22 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * ext/avtp/gstavtp.c: + * scripts/gen-changelog.py: + gstreamer-vaapi: remove subproject + It's almost superseded by va plugin in gst-plugins-bad. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9200> 2025-06-10 12:53:05 +0100 Philippe Normand <philn@igalia.com> @@ -1754,7 +7137,89 @@ transcoder: Fix uritranscodebin reference handling Make sure the reference is not floating, because the get_pipeline function returns a transfer-full reference. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9203> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9198> + +2025-05-13 12:22:08 +0000 L. E. 
Segovia <amy@centricular.com> + + * ext/curl/gstcurlhttpsrc.c: + curl: Fix wrong format specifier for macOS + > ../ext/curl/gstcurlhttpsrc.c:1331:11: error: format specifies type + > unsigned long long' but the argument has type 'curl_off_t' (aka 'long') -Werror,-Wformat + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8974> + +2025-05-09 16:45:53 +0200 Stefan Andersson <stefana@axis.com> + + * gst/videoparsers/gsth264parse.c: + * gst/videoparsers/gsth265parse.c: + * gst/videoparsers/gsth266parse.c: + * tests/check/elements/h264parse.c: + * tests/check/elements/h265parse.c: + * tests/check/elements/h266parse.c: + h26xparse: Drop NAL units that can't be parsed using AU alignment + Change so that the handling of NAL unit that can't be parsed when using + AU alignment is the same as when using NAL alignment, ie drop the data + if it can't be parsed. + If the AU contains more than one NAL unit any correctly parsed NAL unit + in the AU is kept. + Fixes #4436 + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8976> + +2025-05-28 14:19:46 +0200 Stefan Andersson <stefana@axis.com> + + * gst/videoparsers/gsth264parse.c: + * gst/videoparsers/gsth265parse.c: + * gst/videoparsers/gsth266parse.c: + h26xparse: Bail out if ...finish_frame returns an error + For NAL alignment bail out if gst_base_parse_finish_frame returns a flow + error. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8976> + +2025-05-26 16:55:43 +0200 Stefan Andersson <stefana@axis.com> + + * gst/videoparsers/gsth264parse.c: + * gst/videoparsers/gsth265parse.c: + * gst/videoparsers/gsth266parse.c: + h26xparse: Prevent assert hitting when discarding NAL unit + If using NAL aligment and only dropping part of the AU, the size + argument given to gst_base_parse_finish_frame was wrong and this assert + in gst_base_parse_finish_frame hit + 'gst_adapter_available (parse->priv->adapter) >= size' failed + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8976> + +2025-06-03 23:05:18 -0400 Daniel Morin <daniel.morin@collabora.com> + + * ext/onnx/gstonnxclient.cpp: + * ext/onnx/gstonnxclient.h: + * ext/onnx/gstonnxinference.cpp: + onnx: produce tensor caps + - Add tensor description to srcpads caps + onnx: formatting + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9172> + +2025-06-05 16:47:02 -0400 Daniel Morin <daniel.morin@collabora.com> + + * gst-libs/gst/analytics/gsttensor.h: + gsttensor: adding new datatypes + - Adding datatype for string, boolean, complex numbers and special floating + point numbers. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9172> + +2025-06-02 16:00:36 +0530 raghu447 <raghavendra.rao@collabora.com> + + * gst-libs/gst/analytics/gsttensor.c: + * gst-libs/gst/analytics/gsttensor.h: + * gst-libs/gst/analytics/gsttensormeta.c: + * gst-libs/gst/analytics/gsttensormeta.h: + * gst/tensordecoders/gstfacedetectortensordecoder.c: + analytics: add a convenient API to retrieve tensor + use the API in facedetector tensor decoding + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9162> + +2025-06-06 18:34:02 +0530 raghu447 <raghavendra.rao@collabora.com> + + * ext/tflite/gsttfliteinference.c: + tfliteinference: initialize means and stddevs arrays appropriately + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9189> 2025-06-06 16:24:19 +0200 Jakub Adam <jakub.adam@collabora.com> @@ -1763,13 +7228,13 @@ Fixes an error from Meson: ../subprojects/gst-plugins-bad/tests/validate/meson.build:16:93: ERROR: Unknown variable "soundtouch_dep" - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9192> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9190> 2025-05-12 11:31:32 +0100 Glyn Davies <glyn@solet.io> * gst/mpegtsmux/tsmux/tsmuxstream.c: mpegtsmux: Corrections around Teletext handling - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9183> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8966> 2025-05-22 10:11:36 -0400 Julian Bouzas <julian.bouzas@collabora.com> @@ -1779,21 +7244,21 @@ LCEVCdec SDK can return LCEVC_Error if the enhancement data is wrong. This change improves the lcevcdec element to check for those errors and stop the pipeline when that happens. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9177> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9057> 2025-06-04 17:56:39 +0900 Seungha Yang <seungha@centricular.com> * sys/d3d11/gstd3d11decoder.cpp: d3d11decoder: Use interlace info in input caps ... instead of relying on only parsed values from bitstream. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9174> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9170> 2025-06-04 17:55:51 +0900 Seungha Yang <seungha@centricular.com> * sys/d3d12/gstd3d12decoder.cpp: d3d12decoder: Use interlace info in input caps ... instead of relying on only parsed values from bitstream. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9174> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9170> 2025-06-04 03:49:19 +0900 Seungha Yang <seungha@centricular.com> @@ -1802,14 +7267,24 @@ nvdec: Use interlace info in input caps ... instead of relying on only parsed values from bitstream. 
Also parses HEVC specific interlace information - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9174> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9170> + +2025-03-05 15:51:05 +0530 raghu447 <raghavendra.rao@collabora.com> + + * docs/plugins/gst_plugins_cache.json: + * gst/tensordecoders/gstfacedetectortensordecoder.c: + * gst/tensordecoders/gstfacedetectortensordecoder.h: + * gst/tensordecoders/gsttensordecoders.c: + * gst/tensordecoders/meson.build: + tensordecoder: add facedetector tensor decoding support + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8600> 2025-06-02 00:59:52 +0900 Seungha Yang <seungha@centricular.com> * sys/d3d11/gstd3d11compositor.cpp: d3d11compositor: Fix negative position handling Negative positions should be cropped out - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9161> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9158> 2025-06-01 23:38:27 +0900 Seungha Yang <seungha@centricular.com> @@ -1817,7 +7292,7 @@ d3d12compositor: Fix negative position handling Negative positions should be cropped out Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4249 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9161> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9158> 2025-06-01 22:55:53 +0900 Seungha Yang <seungha@centricular.com> @@ -1828,48 +7303,67 @@ background one is recorded on aggregate thread). And there can be temporary refcount increase (so not writable). Updates fence once all rendering commands have been submitted. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9159> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9157> 2025-06-01 22:50:30 +0900 Seungha Yang <seungha@centricular.com> * gst-libs/gst/d3d12/gstd3d12converter-unpack.cpp: d3d12converter: Fix fallback upload process Fixing typo - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9159> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9157> 2025-05-30 12:41:06 +0300 Sebastian Dröge <sebastian@centricular.com> * gst/bayer/gstrgb2bayer.c: rgb2bayer: Use gst_structure_has_name() instead of g_str_equal() for simplicity - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9150> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9146> 2025-05-30 12:40:42 +0300 Sebastian Dröge <sebastian@centricular.com> * gst/bayer/gstbayer2rgb.c: bayer2rgb: Use gst_structure_has_name() instead of strcmp() for clarity - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9150> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9146> 2025-05-30 12:33:09 +0300 Sebastian Dröge <sebastian@centricular.com> * gst/bayer/gstbayer2rgb.c: bayer2rgb: Fix RGB stride calculation This fixes a regression introduced in 4c92d4096e9. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9150> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9146> -2025-05-30 00:03:12 +0100 Tim-Philipp Müller <tim@centricular.com> +2025-05-16 13:32:08 +0200 Thibault Saunier <tsaunier@igalia.com> - * meson.build: - Back to development after 1.26.2 + * ext/closedcaption/misc.h: + general: Stop checking `G_HAVE_GNUC_VARARGS` now that we depend on c99 + Cleaning up a bit the code now that we can rely on C99 which specifies + varargs for macros. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8990> -=== release 1.26.2 === +2025-05-28 20:59:24 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> -2025-05-29 23:56:43 +0100 Tim-Philipp Müller <tim@centricular.com> + * gst-libs/gst/vulkan/gstvkinstance.c: + vulkan: add best practices validation feature + It can be disabled in run-time via the environment variable + VK_KHRONOS_VALIDATION_VALIDATE_BEST_PRACTICES=false + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9119> - * NEWS: - * RELEASE: - * gst-plugins-bad.doap: - * meson.build: - Release 1.26.2 +2025-05-28 20:39:12 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * gst-libs/gst/vulkan/gstvkinstance.c: + vulkan: remove vkDebugReportMessage() loading + Since it's not used. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9119> + +2025-05-28 20:13:07 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> + + * gst-libs/gst/vulkan/gstvkinstance.c: + vulkan: use VK_EXT_debug_utils if available + Nowadays VK_EXT_debug_report is considered deprecated and it's recommended to + replace it it VK_EXT_debug_utils, which offer a way to ignore messages + considered false positives. + The approach is to try the extension first, if available at compilation time, if + not or if it fails to load, VK_EXT_debug_report fallbacks. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9119> 2025-05-30 00:07:25 +0900 Seungha Yang <seungha@centricular.com> @@ -1877,36 +7371,27 @@ d3d12screencapturesrc: Fix desktop handle leak Calling CloseDesktop() on a handle that is currently in use will fail. 
Close the handle after current desktop handle change - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9140> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9136> 2025-05-29 14:30:42 +0300 Sebastian Dröge <sebastian@centricular.com> * gst/dvbsuboverlay/gstdvbsuboverlay.c: dvbsuboverlay: Actually make use of subtitle running time instead of using PTS Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4446 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9129> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9123> 2025-05-27 13:47:27 -0400 Daniel Morin <daniel.morin@collabora.com> * gst-libs/gst/webrtc/rtpsender.c: rtpsender: fix 'priority' GValue get/set - 'priority' is declared as enum, we need to use g_value_get|set_enum() - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9110> - -2025-05-26 18:25:58 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/meson.build: - d3d12: Enable GIR for MSVC build as well - cerbero issue should be fixed by - https://gitlab.freedesktop.org/gstreamer/cerbero/-/merge_requests/1824 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9095> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9105> -2025-05-26 18:24:01 +0900 Seungha Yang <seungha@centricular.com> +2025-05-21 10:10:25 -0400 Doug Nazar <nazard@nazar.ca> - * gst-libs/gst/d3d12/gstd3d12memory.h: - d3d12memory: Make D3D12 map flags inspectable - GIR scanner does not seem to be able to infer integer value - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9095> + * gst/mpegtsdemux/tsdemux.c: + tsdemux: Ensure AC3 descriptor is long enough before accessing + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9044> 2025-05-21 10:03:09 -0400 Doug Nazar <nazard@nazar.ca> @@ -1914,13 
+7399,7 @@ * gst/rist/gstristsink.c: * gst/sdp/gstsdpsrc.c: gstreamer: Ensure we free the template - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9101> - -2025-05-21 10:10:25 -0400 Doug Nazar <nazard@nazar.ca> - - * gst/mpegtsdemux/tsdemux.c: - tsdemux: Ensure AC3 descriptor is long enough before accessing - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9101> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9044> 2025-05-26 19:07:22 +0300 Sebastian Dröge <sebastian@centricular.com> @@ -1937,99 +7416,270 @@ id when disabling a track so that when enabling it again later the same one can be enabled again. See https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4344 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9100> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9081> + +2025-05-27 00:03:05 +0900 Seungha Yang <seungha@centricular.com> + + * sys/nvcodec/gstnvencoder.cpp: + nvencoder: Fix GstNvEncTask leak on non-flow-ok return + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9078> + +2025-05-26 23:17:15 +0900 Seungha Yang <seungha@centricular.com> + + * sys/nvcodec/gstnvencoder.cpp: + nvencoder: Fix GstVideoCodecFrame leak on non-flow-ok return + ... and use gst_video_encoder_release_frame() to drop frame + instead of gst_video_encoder_finish_frame() + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9078> + +2025-05-24 14:42:32 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> + + * gst/codecalpha/gstalphacombine.c: + alphacombine: Fix seeking after EOS + The alpha_eos state was not being reset on flush-stop, as a side effect + flushing seek after EOS did not work. 
+ Fixes #4442 + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9069> + +2025-05-26 17:20:05 +0900 Seungha Yang <seungha@centricular.com> + + * gst-libs/gst/d3d12/gstd3d12memory.cpp: + d3d12memory: Allow set_fence() only against writable memory + Setting a fence to memory should only be allowed on the side + that modified that memory or has the right to modify it + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9075> + +2025-05-26 18:25:58 +0900 Seungha Yang <seungha@centricular.com> + + * gst-libs/gst/d3d12/meson.build: + d3d12: Enable GIR for MSVC build as well + cerbero issue should be fixed by + https://gitlab.freedesktop.org/gstreamer/cerbero/-/merge_requests/1824 + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9076> + +2025-05-26 18:24:01 +0900 Seungha Yang <seungha@centricular.com> + + * gst-libs/gst/d3d12/gstd3d12memory.h: + d3d12memory: Make D3D12 map flags inspectable + GIR scanner does not seem to be able to infer integer value + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9076> + +2025-05-23 16:02:43 -0300 L. E. Segovia <amy@centricular.com> + + * gst/bayer/gstbayerorc-dist.c: + * gst/bayer/gstbayerorc-dist.h: + * gst/fieldanalysis/gstfieldanalysisorc-dist.c: + * gst/fieldanalysis/gstfieldanalysisorc-dist.h: + * gst/gaudieffects/gstgaudieffectsorc-dist.c: + * gst/gaudieffects/gstgaudieffectsorc-dist.h: + * gst/videofilters/gstscenechangeorc-dist.c: + * gst/videofilters/gstscenechangeorc-dist.h: + * meson.build: + orc: Update pregenerated files + Fixes -Wtype-limits on gstbayer.orc when emulating convuuslw. + Regenerated Orc files use OrcOnce, which increases the minimum version to 0.4.34. 
+ See https://gitlab.freedesktop.org/gstreamer/orc/-/merge_requests/212 (ORC_MIN) + See https://gitlab.freedesktop.org/gstreamer/orc/-/merge_requests/238 (AVX2 convussql) + See https://gitlab.freedesktop.org/gstreamer/orc/-/commit/8a86d517530ce79c0ae47e37d768107c57ab31c4 (OrcOnce) + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9067> + +2025-05-23 13:04:43 -0300 L. E. Segovia <amy@centricular.com> + + * scripts/update-orc-dist-files.py: + orc: Remove references to gst-indent-1.0 + These are automatically handled by pre-commit now. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9067> + +2025-03-30 01:43:33 -0400 Doug Nazar <nazard@nazar.ca> + + * tests/check/elements/dash_mpd.c: + dash: mpdclient: Re-enable test now that mpdclient is fixed + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8734> + +2025-03-30 01:41:10 -0400 Doug Nazar <nazard@nazar.ca> + + * ext/dash/gstmpdclient.c: + dash: mpdclient: Don't pass terminating NUL to adapter + libxml2 will complain if it detects any characters after the valid + XML, including a NUL byte. 
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8734>
+
+2025-05-23 09:16:00 +0200 Jan Schmidt <jan@centricular.com>
+
+	* sys/applemedia/vtenc.c:
+	vtenc: Use strlcpy instead of strncpy
+	Silences a compiler warning, and there's no cross-platform
+	consideration as this plugin is apple-only
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9058>
+
+2025-05-21 20:29:06 +0530 Nirbheek Chauhan <nirbheek@centricular.com>
+
+	* meson.options:
+	meson: Add a monorepo-wide qt-method option and yield to it
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9046>
+
+2025-05-21 20:23:01 +0530 Nirbheek Chauhan <nirbheek@centricular.com>
+
+	* ext/qt6d3d11/meson.build:
+	* meson.options:
+	* tests/examples/qt6d3d11/meson.build:
+	meson: Fix qt detection for qt6d3d11 plugin
+	This now matches the code for the qml6gl plugin.
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9046>
 
 2025-05-20 22:32:36 +0900 Seungha Yang <seungha@centricular.com>
 
 	* gst-libs/gst/d3d12/meson.build:
 	d3d12: Generate gir file
 	Prerequisite for rust binding
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9096>
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9053>
 
 2025-05-22 19:12:03 +0900 Seungha Yang <seungha@centricular.com>
 
 	* gst-libs/gst/d3d12/gstd3d12device.cpp:
 	* gst-libs/gst/d3d12/gstd3d12memory.cpp:
 	d3d12: Fix docs annotations
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9096>
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9053>
 
-2025-05-26 17:20:05 +0900 Seungha Yang <seungha@centricular.com>
+2025-05-14 20:07:52 +0200 Robert Mader <robert.mader@collabora.com>
 
-	* gst-libs/gst/d3d12/gstd3d12memory.cpp:
-	d3d12memory: Allow set_fence() only against writable memory
-	Setting a fence to memory should only be allowed on the side
-	that modified that memory or has the right to modify it
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9094>
+	* gst-libs/gst/wayland/gstwlvideoformat.c:
+	wayland: Remove custom format mapping
+	As of Gst >= 1.24 we can just use GstVideoInfoDmaDrm APIs. Note
+	that SHM formats match DRM ones with only two exceptions.
+	No functional changes intended (for backporting) apart from
+	supporting a few more formats - those present in video-info-dma.c
+	but missing in the removed mapping.
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8983>
+
+2025-05-16 05:02:37 -0400 Olivier Crête <olivier.crete@collabora.com>
+
+	* ext/tflite/meson.build:
+	tflite: Also look for C symbols in libtensorflow-lite
+	For some builds, there isn't a separate C library such as
+	some Yocto builds of tflite.
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8523>
+
+2025-05-09 20:19:27 -0400 Olivier Crête <olivier.crete@collabora.com>
+
+	* ext/tflite/VX/vsi_npu_custom_op.h:
+	tflite: Make VSI header build in C code
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8523>
+
+2025-04-06 12:05:48 -0400 Olivier Crête <olivier.crete@collabora.com>
+
+	* ext/tflite/gsttflite.c:
+	* ext/tflite/gsttfliteedgetpuinference.c:
+	* ext/tflite/gsttfliteedgetpuinference.h:
+	* ext/tflite/gsttfliteinference.c:
+	* ext/tflite/gsttfliteinference.h:
+	* ext/tflite/meson.build:
+	* meson.options:
+	tflite: Add Coral EdgeTPU inference element
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8523>
 
-2025-05-14 14:36:49 -0400 Doug Nazar <nazard@nazar.ca>
+2024-03-09 13:42:22 -0300 Denis Shimizu <denis.shimizu@collabora.com>
 
-	* gst/videoframe_audiolevel/gstvideoframe-audiolevel.c:
-	videoframe-audiolevel: Switch to GST_AUDIO_NE()
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9089>
+	* ext/meson.build:
+	* ext/tflite/README.md:
+	* ext/tflite/VX/vsi_npu_custom_op.cc:
+	* ext/tflite/VX/vsi_npu_custom_op.h:
+	* ext/tflite/gstml.h:
+	* ext/tflite/gsttflite.c:
+	* ext/tflite/gsttfliteinference.c:
+	* ext/tflite/gsttfliteinference.h:
+	* ext/tflite/meson.build:
+	* ext/tflite/modelinfo.c:
+	* ext/tflite/modelinfo.h:
+	* meson.options:
+	tflite: Add TensorFlow Lite element
+	A new element wrapping the LiteRT (aka TensorFlow Lite) inference engine.
+	It currently supports only CPU.
+	Co-authored-by: Daniel Morin <daniel.morin@collabora.com>
+	Co-authored-by: Denis Shimizu <denis.shimizu@collabora.com>
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8523>
 
-2025-05-14 14:36:01 -0400 Doug Nazar <nazard@nazar.ca>
+2024-11-22 21:32:18 -0500 Olivier Crête <olivier.crete@collabora.com>
 
-	* ext/musepack/gstmusepackdec.c:
-	musepack: Switch to GST_AUDIO_NE()
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9089>
+	* gst-libs/gst/analytics/gsttensor.c:
+	* gst-libs/gst/analytics/gsttensor.h:
+	tensor: Add helper function to stringify a tensor data type
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8523>
 
-2025-05-12 18:27:06 -0400 Olivier Crête <olivier.crete@collabora.com>
+2025-05-19 20:38:56 +0900 Seungha Yang <seungha@centricular.com>
 
-	* tests/check/elements/h264parse.c:
-	h264parse test: Ensure avc3 caps include a codec_data
-	The avc3 caps without a codec_data are just totally invalid
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9087>
+	* gst-libs/gst/d3d12/meson.build:
+	d3d12: Fix gstreamer-full subproject build with gcc
+	Since default option "cpp_std=c++14" is not applied automatically
+	in case that gstreamer is used as a meson subproject, specify
+	cpp_std option explicitly
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9030>
 
-2025-05-08 19:20:13 -0400 Olivier Crête <olivier.crete@collabora.com>
+2025-05-13 08:20:53 -0400 Doug Nazar <nazard@nazar.ca>
 
-	* gst/videoparsers/gsth264parse.c:
-	h264parse: Require codec_data when receiving stream-format=avc or avc3
-	It's not really possible to safely interpret the content afterwards if
-	it's missing.
-	Even for AVC3, the codec_data doesn't need to contain a SPS/PPS, but
-	it still needs to be present to tell downstream elements about the size
-	of the nal unit length field.
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9087>
+	* tests/check/elements/audiovisualizer.c:
+	audiovisualizer: Change test to use native endian audio format
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8975>
 
-2025-05-08 19:16:13 -0400 Olivier Crête <olivier.crete@collabora.com>
+2025-05-13 17:35:14 +0300 Jordan Petridis <jordan@centricular.com>
 
-	* gst/videoparsers/gsth264parse.c:
-	h264parse: Never output stream-format=avc/avc3 caps without codec_data
-	It's not possible to interpret further buffers without knowing the nal_length_size
-	field, so avc1/avc3 caps without the codec_data aren't valid, don't push them out.
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9087>
+	* tests/check/gst-plugins-bad.supp:
+	bad: Add more variants for an srt suppression
+	Followup to 087cb87d27e268d55a8d152690870ac4a2b3e166
+	These are some more variants of the same issue we
+	already suppressed in the commit above.
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8979>
 
-2025-05-12 16:40:05 -0400 Olivier Crête <olivier.crete@collabora.com>
+2025-05-12 16:30:10 +0300 Jordan Petridis <jordan@centricular.com>
 
-	* tests/check/elements/h264parse.c:
-	h264parse test: Send PPS in SPS parsing test
-	Without the PPS, the codec_data can not be created
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9087>
+	* gst-libs/gst/opencv/meson.build:
+	opencv: import as system dep
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8979>
+
+2025-05-12 16:29:48 +0300 Jordan Petridis <jordan@centricular.com>
+
+	* gst/bayer/gstbayer2rgb.c:
+	bad: Avoid gcc false positive about variable initialization
+	In gstbayer2rgb the dtmp always gets initialized when
+	we check for bayersrc16.
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8979>
+
+2025-05-14 14:36:49 -0400 Doug Nazar <nazard@nazar.ca>
+
+	* gst/videoframe_audiolevel/gstvideoframe-audiolevel.c:
+	videoframe-audiolevel: Switch to GST_AUDIO_NE()
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8984>
+
+2025-05-14 14:36:01 -0400 Doug Nazar <nazard@nazar.ca>
+
+	* ext/musepack/gstmusepackdec.c:
+	musepack: Switch to GST_AUDIO_NE()
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8984>
 
 2025-05-13 19:37:59 -0400 Doug Nazar <nazard@nazar.ca>
 
 	* gst/transcode/gsturitranscodebin.c:
 	uritranscodebin: Free various props before being set
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9088>
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8978>
 
 2025-05-13 19:35:58 -0400 Doug Nazar <nazard@nazar.ca>
 
 	* gst/transcode/gsttranscodebin.c:
 	transcodebin: Free various props before being set
 	Also disable setting filters more than once.
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9088>
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8978>
 
 2025-05-13 19:34:59 -0400 Doug Nazar <nazard@nazar.ca>
 
 	* gst-libs/gst/vulkan/gstvkwindow.c:
 	vulkan: Free various props before being set
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9088>
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8978>
 
 2025-05-13 19:23:53 -0400 Doug Nazar <nazard@nazar.ca>
 
 	* gst-libs/gst/transcoder/gsttranscoder.c:
 	transcoder: Free various props before during cleanup
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9088>
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8978>
 
 2025-05-13 19:15:21 -0400 Doug Nazar <nazard@nazar.ca>
 
@@ -2052,101 +7702,105 @@
 	all: Annotate *_set_property() contructor only props without free
 	Properties that are marked constructor only aren't required to be freed
 	before g_value_dup_*() as they can only be called once during construction.
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9088>
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8978>
 
-2025-03-30 19:44:22 -0400 Doug Nazar <nazard@nazar.ca>
-
-	* gst-libs/gst/vulkan/wayland/gstvkdisplay_wayland.c:
-	vulkan/wayland: Init debug category before usage
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9086>
-
-2025-05-27 00:03:05 +0900 Seungha Yang <seungha@centricular.com>
+2025-05-13 01:40:57 +0900 Seungha Yang <seungha@centricular.com>
 
-	* sys/nvcodec/gstnvencoder.cpp:
-	nvencoder: Fix GstNvEncTask leak on non-flow-ok return
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9084>
+	* sys/d3d12/gstd3d12decoder.cpp:
+	d3d12decoder: Workaround for NVIDIA crash on resolution change
+	Recent NVIDIA driver seems to crash on resolution change
+	if ID3D12VideoDecoder and ID3D12VideoDecodeCommandList are reused.
+	Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4415
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8968>
 
-2025-05-26 23:17:15 +0900 Seungha Yang <seungha@centricular.com>
+2025-05-12 18:27:06 -0400 Olivier Crête <olivier.crete@collabora.com>
 
-	* sys/nvcodec/gstnvencoder.cpp:
-	nvencoder: Fix GstVideoCodecFrame leak on non-flow-ok return
-	... and use gst_video_encoder_release_frame() to drop frame
-	instead of gst_video_encoder_finish_frame()
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9084>
+	* tests/check/elements/h264parse.c:
+	h264parse test: Ensure avc3 caps include a codec_data
+	The avc3 caps without a codec_data are just totally invalid
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8955>
 
-2025-05-24 14:42:32 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com>
+2025-05-08 19:20:13 -0400 Olivier Crête <olivier.crete@collabora.com>
 
-	* gst/codecalpha/gstalphacombine.c:
-	alphacombine: Fix seeking after EOS
-	The alpha_eos state was not being reset on flush-stop, as a side effect
-	flushing seek after EOS did not work.
-	Fixes #4442
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9079>
+	* gst/videoparsers/gsth264parse.c:
+	h264parse: Require codec_data when receiving stream-format=avc or avc3
+	It's not really possible to safely interpret the content afterwards if
+	it's missing.
+	Even for AVC3, the codec_data doesn't need to contain a SPS/PPS, but
+	it still needs to be present to tell downstream elements about the size
+	of the nal unit length field.
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8955>
 
-2025-03-30 01:43:33 -0400 Doug Nazar <nazard@nazar.ca>
+2025-05-08 19:16:13 -0400 Olivier Crête <olivier.crete@collabora.com>
 
-	* tests/check/elements/dash_mpd.c:
-	dash: mpdclient: Re-enable test now that mpdclient is fixed
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9068>
+	* gst/videoparsers/gsth264parse.c:
+	h264parse: Never output stream-format=avc/avc3 caps without codec_data
+	It's not possible to interpret further buffers without knowing the nal_length_size
+	field, so avc1/avc3 caps without the codec_data aren't valid, don't push them out.
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8955>
 
-2025-05-12 16:40:05 -0400 Olivier Crête <olivier.crete@collabora.com>
+2025-02-25 22:53:30 +0900 Seungha Yang <seungha@centricular.com>
 
-	* tests/check/elements/h264parse.c:
-	h264parse test: Send PPS in SPS parsing test
-	Without the PPS, the codec_data can not be created
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9087>
+	* sys/nvcodec/gstnvjpegenc.cpp:
+	* sys/nvcodec/gstnvjpegenc.h:
+	* sys/nvcodec/plugin.c:
+	nvjpegenc: Add autogpu mode element
+	Similar to nvautogpu{h264,h265,av1}enc, adding auto gpu select mode
+	element
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8555>
 
-2025-05-21 20:29:06 +0530 Nirbheek Chauhan <nirbheek@centricular.com>
+2025-02-25 21:22:46 +0900 Seungha Yang <seungha@centricular.com>
 
-	* meson_options.txt:
-	meson: Add a monorepo-wide qt-method option and yield to it
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9063>
+	* sys/nvcodec/gstnvjpegenc.cpp:
+	nvjpegenc: Use stream-ordered alloc if requested
+	If user requested stream-ordered allocation, use async alloc/free
+	methods
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8555>
 
-2025-05-21 20:23:01 +0530 Nirbheek Chauhan <nirbheek@centricular.com>
+2025-04-10 11:22:29 +0530 Santosh Mahto <santosh.mahto@collabora.com>
 
-	* ext/qt6d3d11/meson.build:
-	* meson_options.txt:
-	* tests/examples/qt6d3d11/meson.build:
-	meson: Fix qt detection for qt6d3d11 plugin
-	This now matches the code for the qml6gl plugin.
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9063>
+	* gst-libs/gst/analytics/gsttensormeta.c:
+	gstanalytics: Add transform function to copy the tensor meta
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8825>
 
-2025-05-19 20:38:56 +0900 Seungha Yang <seungha@centricular.com>
+2025-05-05 17:38:08 -0400 Daniel Morin <daniel.morin@collabora.com>
 
-	* gst-libs/gst/d3d12/meson.build:
-	d3d12: Fix gstreamer-full subproject build with gcc
-	Since default option "cpp_std=c++14" is not applied automatically
-	in case that gstreamer is used as a meson subproject, specify
-	cpp_std option explicitly
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9033>
+	* gst/tensordecoders/gstclassifiertensordecoder.c:
+	analytics: change tensor-id and use new API
+	- tensor-id changed to match tensor-id-registry at https://github.com/collabora/tensor-id-registry
+	- Use new GstTensorMeta API to get tensor.
 
-2025-05-13 08:20:53 -0400 Doug Nazar <nazard@nazar.ca>
-
-	* tests/check/elements/audiovisualizer.c:
-	audiovisualizer: Change test to use native endian audio format
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/9025>
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8548>
 
-2025-05-13 01:40:57 +0900 Seungha Yang <seungha@centricular.com>
+2025-02-24 11:15:29 -0500 Daniel Morin <daniel.morin@collabora.com>
 
-	* sys/d3d12/gstd3d12decoder.cpp:
-	d3d12decoder: Workaround for NVIDIA crash on resolution change
-	Recent NVIDIA driver seems to crash on resolution change
-	if ID3D12VideoDecoder and ID3D12VideoDecodeCommandList are reused.
-	Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4415
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8977>
+	* docs/plugins/gst_plugins_cache.json:
+	* gst/tensordecoders/gstclassifiertensordecoder.c:
+	* gst/tensordecoders/gstclassifiertensordecoder.h:
+	* gst/tensordecoders/gsttensordecoders.c:
+	* gst/tensordecoders/meson.build:
+	tensordecoder: add general classifier tensor-decoder
+	- Classification output is more standard compare to other tensor-decoder.
+	- This tensor-decoder implement a standard classification tensor-decoder.
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8548>
 
 2025-05-02 15:46:26 +0200 Alexander Slobodeniuk <aslobodeniuk@fluendo.com>
 
 	* gst/removesilence/gstremovesilence.c:
 	removesilence: canonicalize property names
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8962>
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8915>
 
 2025-05-02 10:08:31 +0200 Alexander Slobodeniuk <aslobodeniuk@fluendo.com>
 
 	* ext/opencv/gsthanddetect.cpp:
 	handdetect: canonicalize property names
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8962>
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8915>
 
 2025-05-02 08:56:19 +0200 Alexander Slobodeniuk <aslobodeniuk@fluendo.com>
 
@@ -2181,7 +7835,7 @@
 	properties: add G_PARAM_STATIC_STRINGS where missing
 	"Hold on, I know you need to generate the registry, but let me just
 	create copies of all those strings first", Framework whispered
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8962>
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8915>
 
 2025-05-08 19:23:54 +0900 Seungha Yang <seungha@centricular.com>
 
@@ -2190,14 +7844,24 @@
 	examples: cuda: Fix build with old CUDA SDK
 	Some symbols are not available in old cuda headers.
 	Use our stub headers instead
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8961>
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8950>
 
 2025-05-08 19:18:32 +0900 Seungha Yang <seungha@centricular.com>
 
 	* gst-libs/gst/cuda/gstcudanvrtc.cpp:
 	cuda: Fix runtime PTX compile
 	Handle extra option args
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8961>
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8950>
+
+2025-03-11 15:02:03 +0100 Pablo García <pgarcia@fluendo.com>
+
+	* ext/curl/gstcurlbasesink.c:
+	curl: use CURL_SOCKET_BAD to ensure cross-platform
+	Solves this error in Windows build:
+	../ext/curl/gstcurlbasesink.c:1154:14: error: comparison of unsigned
+	expression in '< 0' is always false -Werror=type-limits
+	1154 | if (curlfd < 0) {
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8615>
 
 2025-05-05 17:51:55 +0000 L. E. Segovia <amy@centricular.com>
 
@@ -2206,31 +7870,114 @@
 	* ext/curl/gstcurlsmtpsink.c:
 	curl: Fix build with MSVC
 	See https://gitlab.freedesktop.org/gstreamer/cerbero/-/merge_requests/1740#note_2895537
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8946>
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8932>
 
-2025-03-11 15:02:03 +0100 Pablo García <pgarcia@fluendo.com>
+2025-04-17 15:41:05 -0400 Xavier Claessens <xclaessens@netflix.com>
 
-	* ext/curl/gstcurlbasesink.c:
-	curl: use CURL_SOCKET_BAD to ensure cross-platform
-	Solves this error in Windows build:
-	../ext/curl/gstcurlbasesink.c:1154:14: error: comparison of unsigned
-	expression in '< 0' is always false -Werror=type-limits
-	1154 | if (curlfd < 0) {
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8958>
+	* docs/plugins/gst_plugins_cache.json:
+	* gst/unixfd/gstunixfdallocator.c:
+	* gst/unixfd/gstunixfdallocator.h:
+	* gst/unixfd/gstunixfdsink.c:
+	* gst/unixfd/meson.build:
+	* tests/check/elements/unixfd.c:
+	unifxfdsink: Add an property to allow copying
+	By design, unixfd is meant to be used for zero-copy and failing when the data is
+	not FD based memory is wanted to help debug pipelines. Though, there exists
+	cases, notably with RTP payloader and demuxers, where its not possible
+	to get all the data into FD memory through allocation queries.
+	To allow using unixfd for these cases, introduce a property on the unixfdsink
+	that enable copying the non FD data into freshly allocated memfd.
+	Co-authored-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8861>
+
+2025-03-27 16:48:36 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
+
+	* docs/plugins/gst_plugins_cache.json:
+	* gst/meson.build:
+	* gst/y4m/gsty4mdec.c:
+	* gst/y4m/gsty4mdec.h:
+	* gst/y4m/meson.build:
+	* meson.options:
+	y4m: move y4mdec to good to have a single y4m plugin
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8719>
+
+2025-04-26 03:20:42 +0200 Alexander Slobodeniuk <aslobodeniuk@fluendo.com>
+
+	* gst-libs/gst/webrtc/datachannel.c:
+	webrtc: fix build with -DGST_REMOVE_DEPRECATED
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8901>
+
+2025-03-15 20:56:17 +0100 Tim-Philipp Müller <tim@centricular.com>
+
+	* meson.options:
+	meson: rename meson_options.txt to meson.options
+	Which is supported since Meson 1.1:
+	https://mesonbuild.com/Release-notes-for-1-1-0.html#support-for-reading-options-from-mesonoptions
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8651>
 
 2025-05-01 15:16:22 -0400 Olivier Crête <olivier.crete@collabora.com>
 
 	* ext/lcevcdecoder/gstlcevcdec.c:
 	lcevcdec: Use portable printf formatting macros
 	This should fix 32bit builds
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8927>
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8926>
 
 2025-05-01 15:30:28 -0400 Olivier Crête <olivier.crete@collabora.com>
 
 	* ext/lcevcencoder/gstlcevcencoder.c:
 	lcevcenc: Use portable printf formatting macros
 	This should fix 32bit builds
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8927>
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8926>
+
+2025-04-25 00:25:53 +0900 Seungha Yang <seungha@centricular.com>
+
+	* tests/examples/d3d12/d3d12remap-fisheye.cpp:
+	* tests/examples/d3d12/meson.build:
+	examples: Add d3d12remap example
+	Adding a fisheye image transform example using d3d12remap element
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8887>
+
+2025-04-24 00:36:03 +0900 Seungha Yang <seungha@centricular.com>
+
+	* gst-libs/gst/d3d12/gstd3d12compat.h:
+	* sys/d3d12/gstd3d12remap.cpp:
+	* sys/d3d12/gstd3d12remap.h:
+	* sys/d3d12/meson.build:
+	* sys/d3d12/plugin.cpp:
+	d3d12: Add d3d12remap element
+	Adding new element to support pixel remapping operation
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8887>
+
+2025-04-20 23:33:16 +0900 Seungha Yang <seungha@centricular.com>
+
+	* gst-libs/gst/d3d11/gstd3d11converter.cpp:
+	* gst-libs/gst/d3d12/gstd3d12converter-builder.cpp:
+	* gst-libs/gst/d3d12/gstd3d12converter-private.h:
+	* gst-libs/gst/d3d12/gstd3d12converter.cpp:
+	* gst-libs/gst/d3dshader/converter-hlsl/PSMain_converter.hlsl:
+	d3d12converter: Add support UV remap
+	Adding OpenCV's cv::remap() like feature
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8887>
+
+2025-02-08 22:44:47 +0800 Zhao, Gang <gang.zhao.42@gmail.com>
+
+	* gst/midi/midiparse.c:
+	midiparse: Quit parsing if error occurred
+	Invalid midi files will crash gstreamer or let it enter infinite
+	loop. Fixed it by quit parsing if error is encountered.
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8396>
+
+2025-02-03 13:53:09 +0800 Zhao, Gang <gang.zhao.42@gmail.com>
+
+	* gst/midi/midiparse.c:
+	* gst/midi/midiparse.h:
+	midiparse: Consider tempo change when calculating duration
+	Midi meta event set tempo would change tempo. Should consider tempo
+	change when calculating buffer PTS / duration.
+	Save tempo change to a list and calculate duration according to the
+	list.
+	Fixed #4158
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8396>
 
 2025-04-03 02:24:13 +0530 Nirbheek Chauhan <nirbheek@centricular.com>
 
@@ -2241,7 +7988,7 @@
 	We need to disable libsoup 3.0 tests because they fail to build on
 	Windows.
 	Closes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/1115
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8919>
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8753>
 
 2025-04-01 17:58:14 +0530 Nirbheek Chauhan <nirbheek@centricular.com>
 
@@ -2252,51 +7999,94 @@
 	a part of the gstreamer project, and should be treated as system deps.
 	libsoup needs some porting work for the bump, and vorbis/lame are
 	already at their latest releases.
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8919>
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8753>
 
-2025-02-08 22:44:47 +0800 Zhao, Gang <gang.zhao.42@gmail.com>
+2025-04-28 09:51:23 +0100 Philippe Normand <philn@igalia.com>
 
-	* gst/midi/midiparse.c:
-	midiparse: Quit parsing if error occurred
-	Invalid midi files will crash gstreamer or let it enter infinite
-	loop. Fixed it by quit parsing if error is encountered.
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8918>
+	* ext/meson.build:
+	* ext/wpe2/gstwpe.h:
+	* ext/wpe2/gstwpe2.cpp:
+	* ext/wpe2/gstwpedisplay.cpp:
+	* ext/wpe2/gstwpedisplay.h:
+	* ext/wpe2/gstwpethreadedview.cpp:
+	* ext/wpe2/gstwpethreadedview.h:
+	* ext/wpe2/gstwpetoplevel.cpp:
+	* ext/wpe2/gstwpetoplevel.h:
+	* ext/wpe2/gstwpevideosrc.cpp:
+	* ext/wpe2/gstwpevideosrc.h:
+	* ext/wpe2/gstwpeview.cpp:
+	* ext/wpe2/gstwpeview.h:
+	* ext/wpe2/meson.build:
+	* meson_options.txt:
+	wpe2: New WPE plugin making use of the "WPE Platform API"
+	Currently only a wpevideosrc2 element is exposed. GL and SHM buffer rendering
+	are supported, navigation events too (touch is un-tested). Audio pads handling
+	is not supported yet (that requires new WPE API).
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8789>
 
-2025-02-03 13:53:09 +0800 Zhao, Gang <gang.zhao.42@gmail.com>
+2025-04-25 16:36:37 -0400 Olivier Crête <olivier.crete@collabora.com>
 
-	* gst/midi/midiparse.c:
-	* gst/midi/midiparse.h:
-	midiparse: Consider tempo change when calculating duration
-	Midi meta event set tempo would change tempo. Should consider tempo
-	change when calculating buffer PTS / duration.
-	Save tempo change to a list and calculate duration according to the
-	list.
-	Fixed #4158
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8918>
+	* gst-libs/gst/webrtc/nice/nice.c:
+	nice: Add function to fill in ufrag/pwd of remote candidates
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8900>
 
-2025-04-24 07:48:12 +0200 Jochen Henneberg <jochen@centricular.com>
+2025-04-25 16:33:39 -0400 Olivier Crête <olivier.crete@collabora.com>
 
-	* sys/va/gstvah264enc.c:
-	va: Fix H264 profile decision logic
-	The current logic would choose 'baseline' profiles only in case that
-	these profiles appear in the list first.
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8893>
+	* gst-libs/gst/webrtc/nice/nice.c:
+	nice: Rename local candidate filling function
+	Rename it, and avoid using it on remote candidates, as it will put
+	the wrong value.
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8900>
 
-2025-04-24 20:26:57 +0100 Tim-Philipp Müller <tim@centricular.com>
+2025-04-25 16:32:59 -0400 Olivier Crête <olivier.crete@collabora.com>
 
-	* meson.build:
-	Back to development after 1.26.1
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8890>
+	* gst-libs/gst/webrtc/nice/nice.c:
+	nice: Don't modify struct borrowed by signal
+	The struct is owned by libnice, you can't safely modify it
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8900>
 
-=== release 1.26.1 ===
+2025-04-26 19:28:56 +0200 Alexander Slobodeniuk <aslobodeniuk@fluendo.com>
 
-2025-04-24 20:20:14 +0100 Tim-Philipp Müller <tim@centricular.com>
+	* ext/bs2b/gstbs2b.c:
+	* ext/gtk/gstgtkwaylandsink.c:
+	* ext/srt/gstsrtsink.c:
+	* ext/srt/gstsrtsrc.c:
+	* ext/vulkan/vkcolorconvert.c:
+	* ext/vulkan/vkdownload.c:
+	* ext/vulkan/vkimageidentity.c:
+	* ext/vulkan/vkoverlaycompositor.c:
+	* ext/vulkan/vkshaderspv.c:
+	* ext/vulkan/vkupload.c:
+	* ext/vulkan/vkviewconvert.c:
+	* ext/webrtc/gstwebrtcbin.c:
+	* ext/webrtc/transportreceivebin.c:
+	* ext/webrtc/transportsendbin.c:
+	* gst/accurip/gstaccurip.c:
+	* gst/netsim/gstnetsim.c:
+	* gst/rist/gstristrtpdeext.c:
+	* gst/rist/gstristrtpext.c:
+	* gst/rist/gstristsink.c:
+	* gst/rist/gstristsrc.c:
+	* gst/rist/gstroundrobin.c:
+	* sys/amfcodec/gstamfav1enc.cpp:
+	* sys/amfcodec/gstamfh264enc.cpp:
+	* sys/amfcodec/gstamfh265enc.cpp:
+	* sys/applemedia/avfvideosrc.m:
+	* sys/applemedia/avsamplevideosink.m:
+	* sys/nvcodec/gstnvvp8dec.cpp:
+	* sys/nvcodec/gstnvvp9dec.cpp:
+	* sys/uvcgadget/gstuvcsink.c:
+	* tests/check/elements/test_http_src.c:
+	elements: use set_static_metadata when it's allowed
+	Those strings are nice but CPU doesn't want to copy them
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8905>
 
-	* NEWS:
-	* RELEASE:
-	* gst-plugins-bad.doap:
-	* meson.build:
-	Release 1.26.1
+2025-04-18 15:21:59 -0400 Daniel Morin <daniel.morin@collabora.com>
+
+	* tests/check/libs/analyticsmeta.c:
+	test:analytics: add more test on tracking mtd
+	- Verify we can retrive tracking-mtd and its data
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8868>
 
 2025-03-15 23:48:52 +0900 Seungha Yang <seungha@centricular.com>
 
@@ -2306,26 +8096,34 @@
 	value of num_long_term_pics
 	Fixes ZDI-CAN-26596
 	Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4285
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8885>
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8884>
 
 2025-03-15 22:39:44 +0900 Seungha Yang <seungha@centricular.com>
 
 	* gst-libs/gst/codecparsers/gsth265parser.c:
 	h265parser: Fix max_dec_pic_buffering_minus1 bound check
 	Allowed max value is MaxDpbSize - 1
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8885>
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8884>
+
+2025-04-24 07:48:12 +0200 Jochen Henneberg <jochen@centricular.com>
+
+	* sys/va/gstvah264enc.c:
+	va: Fix H264 profile decision logic
+	The current logic would choose 'baseline' profiles only in case that
+	these profiles appear in the list first.
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8882>
 
 2025-04-17 17:28:17 +0200 Stéphane Cerveau <scerveau@igalia.com>
 
 	* sys/va/gstvaav1enc.c:
 	vaav1enc: fix mem leaks in _av1_decide_profile
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8876>
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8845>
 
 2025-04-15 16:46:23 +0200 Stéphane Cerveau <scerveau@igalia.com>
 
 	* sys/va/gstvavp9enc.c:
 	vavp9enc: fix mem leaks in _vp9_decide_profile
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8876>
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8845>
 
 2025-04-23 09:28:16 +0300 Sebastian Dröge <sebastian@centricular.com>
 
@@ -2333,7 +8131,7 @@
 	aja: Use the correct location of the AJA NTV2 SDK in the docs
 	Also there is no longer a proprietary version of it.
 	Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4381
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8875>
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8874>
 
 2025-04-21 22:59:18 +0200 Jakub Adam <jakub.adam@collabora.com>
 
@@ -2343,20 +8141,112 @@
 	orientation didn't get applied on the new GstVaFilter instance.
 	Resettig prev_direction to default value in update_properties ensures
 	gst_va_filter_set_orientation() isn't inadvertently skipped.
-	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8872>
+	Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8871>
+
+2025-04-14 18:24:52 +0300 Sebastian Dröge <sebastian@centricular.com>
+
+	* ext/x265/gstx265enc.c:
+	x265enc: Add bitrate tags to the output
+	Based on the same code in x264enc.
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8838> + +2020-03-23 13:45:46 +0000 jan vermaete <jan.vermaete@gmail.com> + + * ext/opencv/gstmotioncells.h: + motioncells: fix typo in header comment + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8867> 2025-04-17 04:40:12 -0600 Jordan Yelloz <jordan.yelloz@collabora.com> * gst-libs/gst/mse/gstsourcebuffer.c: gstsourcebuffer: Reverted ownership change for append method - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8864> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8857> + +2025-04-18 00:45:07 +0900 Seungha Yang <seungha@centricular.com> + + * sys/wasapi2/gstwasapi2ringbuffer.cpp: + wasapi2: Log buffer QPC position and status flags + Log all infos of IAudioCaptureClient::GetBuffer + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8859> + +2025-04-09 13:47:54 +0100 Philippe Normand <philn@igalia.com> + + * ext/webrtc/gstwebrtcbin.c: + * ext/webrtc/gstwebrtcstats.c: + * gst-libs/gst/webrtc/ice.c: + * gst-libs/gst/webrtc/ice.h: + * gst-libs/gst/webrtc/nice/nice.c: + * gst-libs/gst/webrtc/webrtc_fwd.h: + * tests/check/elements/webrtcbin.c: + webrtc: stats: Improve spec compliance for ICE candidate stats + We now fill the foundation, related-address, related-port, username-fragment and + tcp-type fields. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8792> + +2025-04-17 11:15:08 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> + + * docs/plugins/gst_plugins_cache.json: + doc: Update cache for plugins automatically picks NV16_10LE40 + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5612> + +2023-11-06 15:19:33 -0500 Nicolas Dufresne <nicolas.dufresne@collabora.com> + + * sys/v4l2codecs/gstv4l2format.c: + * sys/v4l2codecs/gstv4l2format.h: + * sys/v4l2codecs/linux/videodev2.h: + v4l2codecs: Add Rockchip 8bit/10bit 422 formats + This enables NV16 and NV16_10LE40 formats. These formats are + notably produced by the rkvdec driver. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5612> + +2023-11-06 15:16:41 -0500 Nicolas Dufresne <nicolas.dufresne@collabora.com> + + * gst/interlace/gstinterlace.c: + video: Add 10bit 422 NV16_10LE40 format + Similar to NV12_10LE40, this is a 422 variant. This format is also named + NV20 (20 bits per pixel) in other stacks and is produced by the rkvdec + decoder. + Co-authored-by: Sebastian Fricke <sebastian.fricke@collabora.com> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5612> + +2025-02-25 15:50:42 +0100 Jan Alexander Steffens (heftig) <jan.steffens@ltnglobal.com> + + * gst/mpegtsmux/gstbasetsmux.c: + mpegtsmux: Read prog-map[PMT_ORDER_<PID>] for PMT order key + Right now the prog-map's meaning of `PMT_%d` is overloaded: + - PMT_<PGM> is used to look up the PID for the PMT. + - PMT_<PID> is used to look up ordering keys for streams in the PMT. + This is not a problem in practice because program numbers and PES PIDs + shouldn't overlap. Still, it's quite the wart in the API. + Provide "PMT_ORDER_%d" as an unambiguous way of specifying ordering + keys. 
+ See: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1510#note_2790022 + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8584> 2025-04-16 21:17:07 +0900 Seungha Yang <seungha@centricular.com> * gst-libs/gst/d3d12/gstd3d12converter.cpp: d3d12converter: Fix cropping when automatic mipmap is enabled Update vertex buffer and viewport of extra shader pipeline as well - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8853> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8851> + +2025-04-15 16:28:38 -0400 Daniel Morin <daniel.morin@collabora.com> + + * tests/check/libs/analyticsmeta.c: + test: add test for tensor-meta + - Verify we can add a tensor-meta to a buffer + - Verify we can get a tensor from a tensor-meta + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8848> + +2025-04-10 09:58:57 -0400 Daniel Morin <daniel.morin@collabora.com> + + * gst-libs/gst/analytics/gsttensormeta.c: + * gst-libs/gst/analytics/gsttensormeta.h: + analytics: add more convenient API to retrieve tensor + `gst_tensor_meta_get_by_id (meta,id)' is more convenient than + retrieving the tensor index using `gst_tensor_meta_get_index_from_id()` followed + by `gst_tensor_meta_get ()`. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8848> 2025-04-10 15:44:54 -0400 Daniel Morin <daniel.morin@collabora.com> @@ -2365,13 +8255,52 @@ tensordecoders: updating element classification - `TensorDecoder` is clashing with media decoder which causes decodebin to use it. 
Replacing with `Tensordecoder` to avoid clash - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8839> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8819> + +2025-04-09 16:19:54 -0400 Daniel Morin <daniel.morin@collabora.com> + + * tests/check/libs/analyticsmeta.c: + test: add test for gstanalytics utility + - IoU test + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8805> + +2025-04-09 20:36:40 -0400 Daniel Morin <daniel.morin@collabora.com> + + * gst-libs/gst/analytics/analytics.h: + * gst-libs/gst/analytics/gstanalytics_image_util.c: + * gst-libs/gst/analytics/gstanalytics_image_util.h: + * gst-libs/gst/analytics/meson.build: + analytics: Move IoU calculation to gstanalytics lib + Calculating intersection-over-union (IoU) is a very common operation used by + tensor-decoders handling tensors from vision models. Having this in a library + will improve maintainability and ease of writing tensor-decoders. 
+ - Post-fix _uint: We might eventually want to handle different datatypes that we + would post-fix with _type + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8805> + +2025-04-12 15:02:38 +0900 Andrew Yooeun Chun <aychun00@gmail.com> + + * sys/v4l2codecs/plugin.c: + v4l2codecs: fix typos in the documentation + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8829> + +2025-01-10 14:34:54 +0100 Stéphane Cerveau <scerveau@igalia.com> + + * ext/vulkan/vkh265dec.c: + vkh265dec: add main-10 support + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8282> + +2025-01-10 14:30:54 +0100 Stéphane Cerveau <scerveau@igalia.com> + + * gst-libs/gst/vulkan/gstvkformat.c: + vkformat: add NV12 10 bits support + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8282> 2025-04-10 14:23:59 +0200 Carlos Bentzen <cadubentzen@igalia.com> * gst-libs/gst/codecs/gsth266decoder.c: h266decoder: fix leak parsing SEI messages - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8817> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8814> 2025-04-08 15:08:11 -0400 Detlev Casanova <detlev.casanova@collabora.com> @@ -2381,7 +8310,7 @@ v4l2codecs: Unref the frame before leaving on error In h264, h265 and mpeg2, make sure that dec_submit_bitstream() doesn't leak a frame when dec_ensure_output_buffer() fails. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8804> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8794> 2025-04-08 09:30:31 -0400 Detlev Casanova <detlev.casanova@collabora.com> @@ -2391,7 +8320,7 @@ This makes the end_picture() function handle the frame in the same way as in vp8, which also fixes a frame leak when gst_buffer_pool_acquire_buffer() fails. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8804> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8794> 2025-04-09 09:01:22 -0400 Xavier Claessens <xclaessens@netflix.com> @@ -2401,13 +8330,33 @@ 1.22 was the correct pkg-config version. It's only the subproject version that was wrong. Since we bumped libva.wrap to 2.22 version, h266 is now always available when using the subproject. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8802> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8800> 2025-04-07 17:45:28 -0400 Xavier Claessens <xclaessens@netflix.com> * sys/va/meson.build: va: h266 requires libva 2.22.0 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8793> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8788> + +2025-04-07 18:37:01 +0100 Philippe Normand <philn@igalia.com> + + * ext/webrtc/gstwebrtcbin.c: + * ext/webrtc/gstwebrtcstats.c: + * gst-libs/gst/webrtc/ice.c: + * gst-libs/gst/webrtc/ice.h: + * gst-libs/gst/webrtc/nice/nice.c: + * gst-libs/gst/webrtc/webrtc_fwd.h: + * tests/check/elements/webrtcbin.c: + Revert "webrtc: stats: Increase spec compliance for ICE candidate stats" + This reverts commit 4718fc9be72ccbbb9278c9abe7d72106e161aebf. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8787> + +2025-04-07 18:36:39 +0100 Philippe Normand <philn@igalia.com> + + * gst-libs/gst/webrtc/ice.h: + Revert "webrtc: Add missing Since markers to new ICE API" + This reverts commit 601c772447b0bada8e54d097088b8ea51ecba09a. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8787> 2025-04-04 12:18:24 +0200 Guillaume Desmottes <guillaume.desmottes@onestream.live> @@ -2415,7 +8364,7 @@ alphacombine: unblock when alpha sink is eos If the alpha sink receives EOS while the other thread was waiting for a alpha buffer it was stuck waiting forever. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8790> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8775> 2025-04-02 09:58:26 +0200 Guillaume Desmottes <guillaume.desmottes@onestream.live> @@ -2427,7 +8376,71 @@ eos. Co-authored-by: Nicolas Dufresne <nicolas.dufresne@collabora.com> Fix #4165 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8790> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8775> + +2024-11-27 23:12:18 +0100 Jakub Adam <jakub.adam@collabora.com> + + * gst/unixfd/gstunixfdsrc.c: + unixfdsrc: fix allocating FD memory with nonzero offsets + The element should allocate GstFdMemory large enough to fit incoming + memory's size plus its potential offset. + Fixes "gst_memory_resize: assertion 'size + mem->offset + offset <= + mem->maxsize' failed". + Fixes an issue reproducible on Raspberry Pi 4 that results in a garbled + image on the receiver's end: + gst-launch-1.0 libcamerasrc ! unixfdsink socket-path=/tmp/socket + gst-launch-1.0 unixfdsrc socket-path=/tmp/socket ! 
autovideosink + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8025> + +2025-04-03 13:43:55 +1100 Matthew Waters <matthew@centricular.com> + + * gst-libs/gst/webrtc/ice.h: + webrtc: Add missing Since markers to new ICE API + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8698> + +2025-03-26 14:00:33 +0000 Philippe Normand <philn@igalia.com> + + * ext/webrtc/gstwebrtcbin.c: + * ext/webrtc/gstwebrtcstats.c: + * gst-libs/gst/webrtc/ice.c: + * gst-libs/gst/webrtc/ice.h: + * gst-libs/gst/webrtc/nice/nice.c: + * gst-libs/gst/webrtc/webrtc_fwd.h: + * tests/check/elements/webrtcbin.c: + webrtc: stats: Increase spec compliance for ICE candidate stats + We now fill the foundation, related-address, related-port, username-fragment and + tcp-type fields. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8698> + +2025-03-26 10:38:06 +0000 Philippe Normand <philn@igalia.com> + + * ext/webrtc/gstwebrtcstats.c: + * tests/check/elements/webrtcbin.c: + webrtc: stats: Fill data-channel transport stats + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8698> + +2025-02-15 11:41:57 +0000 Philippe Normand <philn@igalia.com> + + * gst-libs/gst/webrtc/nice/nice.c: + * gst-libs/gst/webrtc/nice/niceutils.h: + webrtc: nice: Add niceutils + The gst_webrtc_nice_get_candidate_server_url() function is going to be used for + stats generation purposes and also from the upcoming get_selected_candidate_pair + implementation. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8698> + +2025-02-15 11:15:31 +0000 Philippe Normand <philn@igalia.com> + + * gst-libs/gst/webrtc/nice/nice.c: + webrtc: nice: Make use of nice_candidate_type_to_string + This API was added in libnice 0.1.19 and we currently require 0.1.20. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8698> + +2025-02-15 11:14:30 +0000 Philippe Normand <philn@igalia.com> + + * gst-libs/gst/webrtc/nice/nice.c: + webrtc: nice: Remove unused libnice utilities + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8698> 2025-03-30 13:04:12 +0300 Razvan Grigore <razvan.grigore@vampirebyte.ro> @@ -2436,13 +8449,49 @@ This helps debug cases when the remote is offerer and m-line does not match with already existing transceivers. In this case, it will create new ones with sendrecv direction without any warning. Similar with code from _create_answer_task - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8784> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8735> + +2025-03-26 01:33:57 +0900 Seungha Yang <seungha@centricular.com> + + * ext/closedcaption/gsth265reorder.c: + * gst-libs/gst/codecparsers/gsth265parser-private.h: + * gst-libs/gst/codecparsers/gsth265parser.c: + * gst-libs/gst/codecparsers/gsth265parser.h: + * gst-libs/gst/codecs/gsth265decoder.c: + h265parser: Make gst_h265_parser_link_slice_hdr public + ... 
and updating h265decoder/h265ccinserter to match + the changed gst_h265_parser_link_slice_hdr method + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8679> + +2025-03-26 01:23:45 +0900 Seungha Yang <seungha@centricular.com> + + * gst-libs/gst/codecparsers/gsth264parser.c: + * gst-libs/gst/codecparsers/gsth264parser.h: + h264parser: Store associated parameter set id + Make h264parser and h265parser consistent + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8679> + +2025-03-26 01:13:47 +0900 Seungha Yang <seungha@centricular.com> + + * gst-libs/gst/codecparsers/gsth265parser.c: + * gst-libs/gst/codecparsers/gsth265parser.h: + h265parser: Store PPS id in slice header + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8679> + +2025-03-04 21:56:39 -0500 Daniel Morin <daniel.morin@collabora.com> + + * docs/plugins/gst_plugins_cache.json: + * ext/analyticsoverlay/gstobjectdetectionoverlay.c: + analyticsoverlay: add filled-box mode + - Add filled-box-mode property, when set region where detection is happening is + filled + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8592> 2025-04-03 16:30:52 -0400 Olivier Crête <olivier.crete@collabora.com> * docs/plugins/gst_plugins_cache.json: bad: Update va docs, adding new elements - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8778> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8774> 2025-04-04 12:54:27 -0400 Olivier Crête <olivier.crete@collabora.com> @@ -2451,7 +8500,7 @@ * sys/va/gstvavpp.c: * sys/va/meson.build: va: Add since markers - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8778> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8774> 2025-04-04 13:20:35 -0400 Olivier Crête <olivier.crete@collabora.com> @@ -2463,7 +8512,7 @@ va: Remove GstVaFeature marking as a plugin API 
It's part of the libgstva library and it's documented there, no need to duplicate it as it confuses hotdoc. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8778> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8774> 2025-04-03 17:39:45 -0400 Olivier Crête <olivier.crete@collabora.com> @@ -2472,32 +8521,32 @@ * sys/va/gstvavp8dec.c: * sys/va/gstvavp9dec.c: va: Add doc section for vah26xlpenc and codecalpha element - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8778> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8774> 2025-04-03 16:31:30 -0400 Olivier Crête <olivier.crete@collabora.com> * docs/plugins/gst_plugins_cache.json: bad: Update wpesrc docs - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8778> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8774> 2025-04-03 17:45:57 -0400 Olivier Crête <olivier.crete@collabora.com> * ext/wpe/gstwpevideosrc.cpp: wpevideosrc: Fix typo in doc - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8778> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8774> 2025-04-03 16:22:08 -0400 Olivier Crête <olivier.crete@collabora.com> * docs/plugins/gst_plugins_cache.json: bad: Update qsv docs - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8778> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8774> 2025-04-04 13:20:09 -0400 Olivier Crête <olivier.crete@collabora.com> * sys/qsv/gstqsvdecoder.cpp: * sys/qsv/gstqsvencoder.cpp: qsv: Add since marker to device-path property - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8778> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8774> 2025-04-03 16:07:20 -0400 Olivier Crête <olivier.crete@collabora.com> @@ -2506,7 +8555,7 @@ 
* ext/onnx/gstonnxclient.h: * ext/onnx/gstonnxinference.cpp: bad: Add onnxinference to the docs - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8778> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8774> 2025-04-01 16:10:52 +0530 Nirbheek Chauhan <nirbheek@centricular.com> @@ -2516,14 +8565,7 @@ IceStream is not an actual object, it's GstWebRTCICEStream Some `Returns:` annotations were improperly formatted and not taking effect. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8771> - -2025-04-03 09:54:19 -0400 Xavier Claessens <xclaessens@netflix.com> - - * gst/unixfd/gstunixfdsrc.c: - unixfd: Fix wrong memory size when offset > 0 - This is a backport of !8025 that does not require new API. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8770> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8755> 2024-09-10 01:13:25 +0200 Michael Grzeschik <m.grzeschik@pengutronix.de> @@ -2541,7 +8583,7 @@ This patch is a necessary feature to properly pass the UVC Functionality Test of the USB3CV Compliance Software. Fixes: 69c17461392d ('uvcgadget: Properly implement GET_INFO control responses') - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8760> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7524> 2025-03-31 18:54:28 +0200 Piotr Brzeziński <piotr@centricular.com> @@ -2554,7 +8596,22 @@ setDeviceCaps() will reverse that process and pass the actual supported value back to AVF, as most often the rounding causes us to fall just outside the accepted threshold. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8756> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8745> + +2025-03-07 09:06:18 +0100 Jochen Henneberg <jochen@centricular.com> + + * tests/check/elements/camerabin.c: + camerabin: Ensure that test record pipeline does not see caps change + Depending on the system load the test 'video_capture_with_tags' may + fail or not. Reason is that 'videotestsrc' may emit a buffer before + the final caps negotiation on the recording pipeline has happened + after dynamic linking. + In that case there would be a caps change and because videorate does + no longer drop old buffers and caps on change but pushes duplicates if + required qtmux will notice a caps change and fail to link. + The problem is a synchronization problem in 'camerabin' which became + obvious with the changed behaviour of 'videorate'. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8579> 2025-03-31 21:56:51 +0900 Seungha Yang <seungha@centricular.com> @@ -2563,13 +8620,173 @@ h264ccextractor,h265ccextractor: Handle gap with unknown pts Fixing critical warnings gst_event_new_gap: assertion 'GST_CLOCK_TIME_IS_VALID (timestamp)' failed - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8752> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8741> + +2025-03-27 15:38:42 +0100 Guillaume Desmottes <guillaume.desmottes@onestream.live> + + * gst/codecalpha/gstalphadecodebin.c: + codecalpha: name both queues + Make it easier to debug from logs. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8744> + +2025-03-30 19:44:22 -0400 Doug Nazar <nazard@nazar.ca> + + * gst-libs/gst/vulkan/wayland/gstvkdisplay_wayland.c: + vulkan/wayland: Init debug category before usage + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8738> + +2025-03-28 12:19:20 +0000 Philippe Normand <philn@igalia.com> + + * ext/webrtc/gstwebrtcbin.c: + * ext/webrtc/webrtcsdp.c: + * tests/check/elements/webrtcbin.c: + webrtcbin: Make mid optional in offers and answers + The mid attribute is not strictly required. Two new tests cover this change, + they remove the mid and group attributes from the SDP offers and answers. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8729> + +2025-03-29 19:03:13 +0200 Artem Martus <artemmartus2012@gmail.com> + + * ext/webrtc/gstwebrtcbin.c: + * tests/check/elements/webrtcbin.c: + webrtcbin: ensure RTX entry for all formats + Properly implement RFC 4588 by ensuring each media format + has its own RTX payload type with unique 'apt' parameter, + rather than only mapping the first format. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8663> + +2025-03-30 13:10:42 +0300 Sebastian Dröge <sebastian@centricular.com> + + * sys/va/gstvacaps.c: + va: Skip codecs that report maximum width or height lower than minimum + This happens on F42 with the JPEG decoders for some reason and trying to + actually use them with any resolution simply gives a "resolution not supported" + error. + A minimum of 64 is correctly reported though and trying to create caps with an + int range of 64, 0 gives critical warnings. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8736> + +2025-03-14 22:03:53 -0400 Doug Nazar <nazard@nazar.ca> + + * sys/bluez/gsta2dpsink.c: + a2dpsink: Free various props during cleanup + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8648> + +2025-03-14 22:01:44 -0400 Doug Nazar <nazard@nazar.ca> + + * sys/aja/gstajasink.cpp: + * sys/aja/gstajasrc.cpp: + aja: Free various props during cleanup + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8648> + +2025-03-14 22:01:05 -0400 Doug Nazar <nazard@nazar.ca> + + * gst/librfb/gstrfbsrc.c: + * gst/librfb/rfbdecoder.c: + rfbsrc: Free various props before being set & during cleanup + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8648> + +2025-03-14 21:59:13 -0400 Doug Nazar <nazard@nazar.ca> + + * gst/frei0r/gstfrei0r.c: + frei0r: Free various props before being set + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8648> + +2025-03-14 21:58:14 -0400 Doug Nazar <nazard@nazar.ca> + + * gst/faceoverlay/gstfaceoverlay.c: + faceoverlay: Free various props during cleanup + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8648> + +2025-03-14 19:38:54 -0400 Doug Nazar <nazard@nazar.ca> + + * ext/wayland/gstwaylandsink.c: + waylandsink: Free various props before being set + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8648> + +2025-03-14 19:38:03 -0400 Doug Nazar <nazard@nazar.ca> + + * ext/qroverlay/gstdebugqroverlay.c: + * ext/qroverlay/gstqroverlay.c: + qroverlay: Free various props before set & during cleanup + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8648> + +2025-03-14 19:37:39 -0400 Doug Nazar <nazard@nazar.ca> + + * ext/openal/gstopenalsrc.c: + openalsrc: Free various props before being set + Part-of: 
<https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8648> + +2025-03-14 19:35:49 -0400 Doug Nazar <nazard@nazar.ca> + + * ext/lcevcencoder/gstlcevcencoder.c: + lcevcencoder: Free various props before & during cleanup + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8648> + +2025-03-14 19:25:48 -0400 Doug Nazar <nazard@nazar.ca> + + * ext/dash/gstmpdadaptationsetnode.c: + * ext/dash/gstmpdperiodnode.c: + * ext/dash/gstmpdrepresentationbasenode.c: + * ext/dash/gstmpdrepresentationnode.c: + * ext/dash/gstmpdsegmenttemplatenode.c: + * ext/dash/gstmpdsegmenturlnode.c: + dash: Free various props before set & during cleanup + In addition several members were being freed via xmlFree() even though + being created via g_value_dup_string(). Switch to g_free(). + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8648> + +2025-03-14 19:22:20 -0400 Doug Nazar <nazard@nazar.ca> + + * ext/curl/gstcurlhttpsrc.c: + curlhttpsrc: Free various props before set & during cleanup + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8648> + +2025-03-14 19:14:43 -0400 Doug Nazar <nazard@nazar.ca> + + * ext/gtk/gstgtkwaylandsink.c: + * gst-libs/gst/mse/gstmediasourcetrack.c: + * gst-libs/gst/va/gstvadisplay_drm.c: + * sys/directshow/dshowdeviceprovider.cpp: + * sys/directsound/gstdirectsounddevice.c: + * sys/mediafoundation/gstmfdevice.cpp: + * sys/uvch264/gstuvch264deviceprovider.c: + * sys/wasapi/gstwasapidevice.c: + * sys/wasapi2/gstwasapi2device.c: + * sys/winks/ksdeviceprovider.c: + all: Annotate *_set_property() constructor only props without free + Properties that are marked constructor only aren't required to be freed + before g_value_dup_string() as they can only be called once during construction. 
+ Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8648> + +2025-03-26 15:32:05 +0200 Sebastian Dröge <sebastian@centricular.com> + + * ext/dash/gstdashsink.c: + dashsink: Make sure to use a non-NULL pad name when requesting a pad from splitmuxsink + If the caller passed in "audio_%u" instead of a concrete pad name into + gst_element_request_pad_simple() then the pad name will be NULL. In that case + use the pad template name for requesting the pad from splitmuxsink. + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8697> + +2025-03-23 00:19:50 +0100 Jan Tojnar <jtojnar@gmail.com> + + * gst-libs/gst/analytics/meson.build: + gst-analytics: Add gst-video to Requires in pkg-config + `gst/analytics/analytics.h` includes `gst/analytics/gstanalyticssegmentationmtd.h`, + which in turn `gst/video/video-info.h` but `gst-video-1.0` was only listed + in `Requires.private` field of `gst-analytics-1.0.pc`. + This would cause projects linking against `gst-analytics-1.0.pc` to fail to find + the headers when using alternative interpretation of pkg-config specification + that only considers private dependencies for include path during static builds, + such as the case e.g. on Nix. + https://gitlab.freedesktop.org/pkg-config/pkg-config/-/issues/28 + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8661> 2025-02-18 14:12:49 -0700 Jordan Yelloz <jordan.yelloz@collabora.com> * docs/plugins/gst_plugins_cache.json: mse: Updated documentation cache - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8751> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8512> 2025-02-18 13:08:38 -0700 Jordan Yelloz <jordan.yelloz@collabora.com> @@ -2587,19 +8804,19 @@ the element state is READY or higher. Finally, added proper error reporting when failing to push a buffer and improved debug logging. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8751> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8512> 2025-02-18 13:08:38 -0700 Jordan Yelloz <jordan.yelloz@collabora.com> * gst-libs/gst/mse/gstappendpipeline.c: gstappendpipeline: Added name to background task - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8751> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8512> 2025-02-18 13:08:37 -0700 Jordan Yelloz <jordan.yelloz@collabora.com> * gst-libs/gst/mse/gstsourcebufferlist.c: gstsourcebufferlist: Added locking - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8751> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8512> 2025-02-18 13:08:37 -0700 Jordan Yelloz <jordan.yelloz@collabora.com> @@ -2626,19 +8843,19 @@ Finally, updated to adapt to track buffer API changes. Some functions previously passed in a lower bound for sample timestamps. Now the source buffer is responsible for clipping samples within a desired range of time. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8751> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8512> 2025-02-18 13:08:37 -0700 Jordan Yelloz <jordan.yelloz@collabora.com> * gst-libs/gst/mse/gstsourcebuffer.c: gstsourcebuffer: Added name to track feed task - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8751> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8512> 2025-02-18 13:08:37 -0700 Jordan Yelloz <jordan.yelloz@collabora.com> * gst-libs/gst/mse/gstsourcebuffer.c: gstsourcebuffer: Moved misplaced documentation comment - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8751> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8512> 2025-02-18 13:08:37 -0700 Jordan Yelloz <jordan.yelloz@collabora.com> @@ -2649,7 +8866,7 @@ when there's a small gap between them is MAX(0.1sec, max frame duration * 2). Previously it was hardcoded to 0.01sec. The specification suggests that it could be something like the max frame duration * 2. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8751> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8512> 2025-02-18 13:08:37 -0700 Jordan Yelloz <jordan.yelloz@collabora.com> @@ -2663,7 +8880,7 @@ When the source buffer reaches the end of the track buffer, it should wait for any new data to be processed -- not just an EOS -- then check for cancellation if the deadline expires without new data. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8751> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8512> 2025-02-18 13:08:37 -0700 Jordan Yelloz <jordan.yelloz@collabora.com> @@ -2672,14 +8889,14 @@ gstmediasourcetrackbuffer: Removed start time filtering from sample iterator This adapts to the changes to the sample map since gst_iterator_filter() is a simpler way for callers to clip the returned samples to a desired time range. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8751> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8512> 2025-02-18 13:08:37 -0700 Jordan Yelloz <jordan.yelloz@collabora.com> * gst-libs/gst/mse/gstmediasourcetrackbuffer-private.h: * gst-libs/gst/mse/gstmediasourcetrackbuffer.c: gstmediasourcetrackbuffer: Removed unused code - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8751> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8512> 2025-02-18 13:08:37 -0700 Jordan Yelloz <jordan.yelloz@collabora.com> @@ -2687,7 +8904,7 @@ * gst-libs/gst/mse/gstmediasourcetrack.c: * tests/check/libs/mse.c: gstmediasourcetrack: Removed unused try_push() method - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8751> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8512> 2025-02-18 13:08:37 -0700 Jordan Yelloz <jordan.yelloz@collabora.com> @@ -2697,7 +8914,7 @@ This simplifies cleanup for the caller since the push method already cleans up the sample when it is consumed by playback or if it fails to be added to the queue. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8751> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8512> 2025-02-18 13:08:37 -0700 Jordan Yelloz <jordan.yelloz@collabora.com> @@ -2717,55 +8934,40 @@ into a single sample containing a GstBufferList. Also, start time filtering was removed from the API since gst_iterator_filter() can be used by callers to achieve the same result. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8751> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8512> 2025-02-18 13:08:37 -0700 Jordan Yelloz <jordan.yelloz@collabora.com> * gst-libs/gst/mse/gstmediasource.c: gstmediasource: Added locking - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8751> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8512> 2025-02-18 13:08:37 -0700 Jordan Yelloz <jordan.yelloz@collabora.com> * gst-libs/gst/mse/gstmediasource.c: gstmediasource: Added caller-allocates annotation to get_live_seekable_range() - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8751> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8512> 2025-02-18 13:08:37 -0700 Jordan Yelloz <jordan.yelloz@collabora.com> * gst-libs/gst/mse/gstmselogging-private.h: * gst-libs/gst/mse/gstmselogging.c: gstmselogging: Added helper function to get nicknames of enum values - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8751> - -2025-03-27 15:38:42 +0100 Guillaume Desmottes <guillaume.desmottes@onestream.live> - - * gst/codecalpha/gstalphadecodebin.c: - codecalpha: name both queues - Make it easier to debug from logs. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8750> - -2025-03-21 11:41:11 +0900 Dongyun Seo <dongyun.seo@lge.com> - - * gst-libs/gst/vulkan/gstvkformat.c: - vkformat: fix build error - fix build error when VK_KHR_format_feature_flags2 is not defined. - Co-authored-by: Victor Jaquez vjaquez@igalia.com - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8749> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8512> 2025-03-25 03:04:52 +0900 Seungha Yang <seungha@centricular.com> * ext/closedcaption/gstcodecccinserter.c: codecccinserter: Fix event double free Need to steal GstVideoCodecFrame.events before unref - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8707> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8673> 2025-03-25 03:02:57 +0900 Seungha Yang <seungha@centricular.com> * ext/closedcaption/gsth265reorder.c: h265ccinserter: Fix broken SPS/PPS link Apply the same h265decoder change to h265ccinserter - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8707> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8673> 2025-03-25 02:09:49 +0900 Seungha Yang <seungha@centricular.com> @@ -2775,7 +8977,7 @@ slice header and SPS/PPS can be broken at the second pass if SPS/PPS got updated after slice header in the same AU Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4323 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8707> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8673> 2025-03-25 02:41:29 +0900 Seungha Yang <seungha@centricular.com> @@ -2784,7 +8986,7 @@ h265parser: Add private method to update slice header Adding a method to allow linking already parsed slice header with parser's own sps/pps - Part-of: 
<https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8707> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8673> 2025-03-25 01:58:42 +0900 Seungha Yang <seungha@centricular.com> @@ -2794,144 +8996,36 @@ Thus valid SPS/PPS parsed by h265parser can have no linked parent parameter set. Apply this behavior to gst_h265_parser_update_{sps,pps} too - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8707> - -2025-03-29 19:03:13 +0200 Artem Martus <artemmartus2012@gmail.com> - - * ext/webrtc/gstwebrtcbin.c: - * tests/check/elements/webrtcbin.c: - webrtcbin: ensure RTX entry for all formats - Properly implement RFC 4588 by ensuring each media format - has its own RTX payload type with unique 'apt' parameter, - rather than only mapping the first format. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8742> - -2025-03-30 13:10:42 +0300 Sebastian Dröge <sebastian@centricular.com> - - * sys/va/gstvacaps.c: - va: Skip codecs that report maximum width or height lower than minimum - This happens on F42 with the JPEG decoders for some reason and trying to - actually use them with any resolution simply gives a "resolution not supported" - error. - A minimum of 64 is correctly reported though and trying to create caps with an - int range of 64, 0 gives critical warnings. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8737> - -2025-03-14 22:03:53 -0400 Doug Nazar <nazard@nazar.ca> - - * sys/bluez/gsta2dpsink.c: - a2dpsink: Free various props during cleanup - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8714> - -2025-03-14 22:01:44 -0400 Doug Nazar <nazard@nazar.ca> - - * sys/aja/gstajasink.cpp: - * sys/aja/gstajasrc.cpp: - aja: Free various props during cleanup - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8714> - -2025-03-14 22:01:05 -0400 Doug Nazar <nazard@nazar.ca> - - * gst/librfb/gstrfbsrc.c: - * gst/librfb/rfbdecoder.c: - rfbsrc: Free various props before being set & during cleanup - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8714> - -2025-03-14 21:59:13 -0400 Doug Nazar <nazard@nazar.ca> - - * gst/frei0r/gstfrei0r.c: - frei0r: Free various props before being set - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8714> - -2025-03-14 21:58:14 -0400 Doug Nazar <nazard@nazar.ca> - - * gst/faceoverlay/gstfaceoverlay.c: - faceoverlay: Free various props during cleanup - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8714> - -2025-03-14 19:38:54 -0400 Doug Nazar <nazard@nazar.ca> - - * ext/wayland/gstwaylandsink.c: - waylandsink: Free various props before being set - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8714> - -2025-03-14 19:38:03 -0400 Doug Nazar <nazard@nazar.ca> - - * ext/qroverlay/gstdebugqroverlay.c: - * ext/qroverlay/gstqroverlay.c: - qroverlay: Free various props before set & during cleanup - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8714> - -2025-03-14 19:37:39 -0400 Doug Nazar <nazard@nazar.ca> - - * ext/openal/gstopenalsrc.c: - openalsrc: Free various props before being set - Part-of: 
<https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8714> - -2025-03-14 19:35:49 -0400 Doug Nazar <nazard@nazar.ca> - - * ext/lcevcencoder/gstlcevcencoder.c: - lcevcencoder: Free various props before during cleanup - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8714> - -2025-03-14 19:25:48 -0400 Doug Nazar <nazard@nazar.ca> - - * ext/dash/gstmpdadaptationsetnode.c: - * ext/dash/gstmpdperiodnode.c: - * ext/dash/gstmpdrepresentationbasenode.c: - * ext/dash/gstmpdrepresentationnode.c: - * ext/dash/gstmpdsegmenttemplatenode.c: - * ext/dash/gstmpdsegmenturlnode.c: - dash: Free various props before set & during cleanup - In addition several members were being freed via xmlFree() even though - being created via g_value_dup_string(). Switch to g_free(). - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8714> - -2025-03-14 19:22:20 -0400 Doug Nazar <nazard@nazar.ca> - - * ext/curl/gstcurlhttpsrc.c: - curlhttpsrc: Free various props before set & during cleanup - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8714> - -2025-03-14 19:14:43 -0400 Doug Nazar <nazard@nazar.ca> - - * ext/gtk/gstgtkwaylandsink.c: - * gst-libs/gst/mse/gstmediasourcetrack.c: - * gst-libs/gst/va/gstvadisplay_drm.c: - * sys/directshow/dshowdeviceprovider.cpp: - * sys/directsound/gstdirectsounddevice.c: - * sys/mediafoundation/gstmfdevice.cpp: - * sys/uvch264/gstuvch264deviceprovider.c: - * sys/wasapi/gstwasapidevice.c: - * sys/wasapi2/gstwasapi2device.c: - * sys/winks/ksdeviceprovider.c: - all: Annotate *_set_property() contructor only props without free - Properties that are marked constructor only aren't required to be freed - before g_value_dup_string() as they can only be called once during construction. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8714> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8673> -2025-03-23 00:19:50 +0100 Jan Tojnar <jtojnar@gmail.com> +2025-03-21 11:41:11 +0900 Dongyun Seo <dongyun.seo@lge.com> - * gst-libs/gst/analytics/meson.build: - gst-analytics: Add gst-video to Requires in pkg-config - `gst/analytics/analytics.h` includes `gst/analytics/gstanalyticssegmentationmtd.h`, - which in turn `gst/video/video-info.h` but `gst-video-1.0` was only listed - in `Requires.private` field of `gst-analytics-1.0.pc`. - This would cause projects linking against `gst-analytics-1.0.pc` to fail to find - the headers when using alternative interpretation of pkg-config specification - that only considers private dependencies for include path during static builds, - such as the case e.g. on Nix. - https://gitlab.freedesktop.org/pkg-config/pkg-config/-/issues/28 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8708> + * gst-libs/gst/vulkan/gstvkformat.c: + vkformat: fix build error + fix build error when VK_KHR_format_feature_flags2 is not defined. + Co-authored-by: Victor Jaquez vjaquez@igalia.com + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8655> -2025-03-26 15:32:05 +0200 Sebastian Dröge <sebastian@centricular.com> +2025-03-07 16:05:20 -0500 Eric <ekc4yz@virginia.edu> - * ext/dash/gstdashsink.c: - dashsink: Make sure to use a non-NULL pad name when requesting a pad from splitmuxsink - If the caller passed in "audio_%u" instead of a concrete pad name into - gst_element_request_pad_simple() then the pad name will be NULL. In that case - use the pad template name for requesting the pad from splitmuxsink. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8711> + * docs/plugins/gst_plugins_cache.json: + * ext/sctp/gstsctpdec.c: + * ext/sctp/gstsctpdec.h: + * ext/sctp/gstsctpenc.c: + * ext/sctp/sctpassociation.c: + * ext/sctp/sctpassociation.h: + * ext/webrtc/webrtcsctptransport.c: + webrtc: fix hangup when duplicate sctp association IDs chosen + Fixes an issue where the webrtcbin would hangup when finalizing due + to the sctpenc hanging up when finalizing. This occurred when the + webrtcbin chose to use a sctp association ID already in use. + The sctpenc would fail to reach the paused state, but startup a task + anyways that was never stopped. + This commit modifies the behavior to not choose sctp association IDs + randomly, and instead only choose one that is free. It also prevents the + sctpenc from starting up that task if it fails to reach the paused state. + Fixes: #4188 + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8607> 2025-03-13 16:27:44 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> @@ -2940,13 +9034,22 @@ * ext/vulkan/vksink.c: vulkan: fix memory leak at dynamic registering Also it cleans up a bit the code. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8650> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8639> + +2025-01-22 15:02:03 +0100 Marc Leeman <marc.leeman@gmail.com> + + * gst-libs/gst/cuda/meson.build: + meson.build: test for and link against libatomic if it exists + It's needed on some platforms for some subset (or all) atomic operations and + checking for the cases when it's actually needed is quite complex. 
+ Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4300 + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8637> 2025-03-13 17:22:12 +0100 Piotr Brzeziński <piotr@centricular.com> * sys/applemedia/vtenc.c: vtenc: Reset restart flag when creating session in set_format() - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8644> + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8597> 2025-03-06 17:23:11 +0100 Piotr Brzeziński <piotr@centricular.com> @@ -2964,13030 +9067,52 @@ changing a property when a session is already created will just flag it to be reconfigured upon the next encode call. This is done in similar fashion to how restarting the session upon an error works. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8644> - -2025-01-22 15:02:03 +0100 Marc Leeman <marc.leeman@gmail.com> - - * gst-libs/gst/cuda/meson.build: - meson.build: test for and link against libatomic if it exists - It's needed on some platforms for some subset (or all) atomic operations and - checking for the cases when it's actually needed is quite complex. - Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4300 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8647> - -2025-03-08 12:07:11 +0000 Philippe Normand <philn@igalia.com> - - * gst/codecalpha/gstalphacombine.c: - alphacombine: De-couple flush-start/stop events handling - There is no guarantee that any FLUSH_STOP event is preceded by a FLUSH_START. - The element now stops flushing once it has received a FLUSH_STOP on all its sink - pads. 
- Fixes #4174 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8629> - -2025-03-11 20:23:16 +0000 Tim-Philipp Müller <tim@centricular.com> - - * meson.build: - Back to development after 1.26.0 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8618> - -=== release 1.26.0 === - -2025-03-11 20:14:44 +0000 Tim-Philipp Müller <tim@centricular.com> - - * NEWS: - * README.md: - * RELEASE: - * gst-plugins-bad.doap: - * meson.build: - Release 1.26.0 - -2025-02-21 18:20:06 +0100 Ruben Gonzalez <rgonzalez@fluendo.com> - - * sys/va/gstvacompositor.c: - vacompositor: Add missing GST_VIDEO_CROP_META_API_TYPE - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8537> - -2025-03-07 09:54:35 +0100 Carlo Caione <ccaione@baylibre.com> - - * sys/uvcgadget/uvc.c: - uvcgadget: Properly implement GET_INFO control responses - According to the UVC 1.5 specification, section 4.1.2, the GET_INFO request - must return a bitmap indicating supported operations for the control. - Value 0x00 indicates that neither GET nor SET operations are supported. - This patch fixes control handling in the UVC gadget implementation to properly - respond to GET_INFO requests with the correct bitmap, allowing host systems - to properly detect supported control operations (none in this case). - The pipeline I'm using to test this is: - gst-launch-1.0 videotestsrc ! uvcsink v4l2sink::device=/dev/video0 - This is the equivalent of 0 but the difference is that we are now returning - 0x00 instead of 0x03. - Without this change the host in my case is unable to probe the UVC gadget at - all, automatically disconnecting the device after a few seconds. 
- Following is the log when the gadget is not working (without this fix): - usb 1-1.2: new high-speed USB device number 73 using xhci_hcd - usb 1-1.2: New USB device found, idVendor=0525, idProduct=a4a2, bcdDevice= 5.15 - usb 1-1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=3 - usb 1-1.2: Product: UVC Gadget - usb 1-1.2: Manufacturer: localhost.localdomain - usb 1-1.2: SerialNumber: 0123456789 - usb 1-1.2: Found UVC 1.10 device UVC Gadget (0525:a4a2) - usb 1-1.2: Failed to query (GET_INFO) UVC control 2 on unit 1: -110 (exp. 1). - usb 1-1.2: UVC non compliance - GET_DEF(PROBE) not supported. Enabling workaround. - uvcvideo 1-1.2:1.1: Failed to query (129) UVC probe control : -71 (exp. 34). - uvcvideo 1-1.2:1.1: Failed to initialize the device (-71). - cdc_subset 1-1.2:1.0: probe with driver cdc_subset failed with error -22 - cdc_subset 1-1.2:1.1: probe with driver cdc_subset failed with error -22 - usb 1-1.2: USB disconnect, device number 73 - With the fix the USB device is correctly probed: - usb 1-1.2: new high-speed USB device number 88 using xhci_hcd - usb 1-1.2: New USB device found, idVendor=0525, idProduct=a4a2, bcdDevice= 5.15 - usb 1-1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=3 - usb 1-1.2: Product: UVC Gadget - usb 1-1.2: Manufacturer: localhost.localdomain - usb 1-1.2: SerialNumber: 0123456789 - usb 1-1.2: Found UVC 1.10 device UVC Gadget (0525:a4a2) - 0 camera/uvc-gadget@0df9d3ad - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8572> - -2025-03-06 10:22:00 +0100 Benjamin Gaignard <benjamin.gaignard@collabora.com> - - * sys/v4l2codecs/gstv4l2decoder.c: - v4l2codecs: Fix caps leak after sorting caps - gst_v4l2_format_sort_caps() create a new caps which need to be - release to avoid leak. 
- Co-authored-by: Robert Mader <robert.mader@posteo.de> - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8583> - -2025-03-04 11:04:56 +0100 Benjamin Gaignard <benjamin.gaignard@collabora.com> - - * sys/v4l2codecs/gstv4l2codecav1dec.c: - * sys/v4l2codecs/gstv4l2codech264dec.c: - * sys/v4l2codecs/gstv4l2codech265dec.c: - * sys/v4l2codecs/gstv4l2codecmpeg2dec.c: - * sys/v4l2codecs/gstv4l2codecvp8dec.c: - * sys/v4l2codecs/gstv4l2codecvp9dec.c: - v4l2codecs: Release sink allocator when deciding allocation - All decoders have the same design pattern in decide allocation - and forgot to release sink allocator before allocating a new one. - Fixing the memory leak by clearing sink allocator before creating - the new one. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8583> - -2025-03-04 11:02:16 +0100 Benjamin Gaignard <benjamin.gaignard@collabora.com> - - * sys/v4l2codecs/gstv4l2codecallocator.c: - v4l2codecs: allocator: Fix buffers leak when using remove buffers - When removing buffers from v4l2 queue do not forget to release - the memory on gstreamer side. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8583> - -2025-03-07 01:09:23 +0900 Seungha Yang <seungha@centricular.com> - - * ext/closedcaption/gsth264ccextractor.c: - * ext/closedcaption/gsth265ccextractor.c: - h264ccextractor,h265ccextractor: Do not resend caps per output buffer - Send caps event only when it's required - Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4281 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8596> - -2025-03-06 11:24:28 +0100 Pablo García <pgarcia@fluendo.com> - - * ext/curl/gstcurlbasesink.c: - * ext/curl/gstcurlfilesink.c: - * ext/curl/gstcurlftpsink.c: - * ext/curl/gstcurlhttpsink.c: - * ext/curl/gstcurlsmtpsink.c: - * ext/curl/gstcurltlssink.c: - curl: replace #if with #ifdef (part 2) - Continuation of 47d1262402c81a9054e618052deeff7414b4f75d, that is not enough. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8595> - -2025-03-03 11:30:38 +0100 Stéphane Cerveau <scerveau@igalia.com> - - * gst-libs/gst/vulkan/gstvkimagememory.c: - * gst-libs/gst/vulkan/gstvkoperation.c: - vulkan/operation: fix timeline semaphore extension detection - As for synchronization2, the timeline semaphore has been - been promoted in 1.2 and does not have to be enabled explicitely. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8554> - -2025-03-03 12:59:02 +1100 Matthew Waters <matthew@centricular.com> - - * gst-libs/gst/vulkan/gstvkoperation.c: - vulkan/operation: fix synchronization2 extension detection - The synchronization2 extension is a core part of Vulkan 1.3. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8554> - -2025-02-25 21:22:32 +1100 Matthew Waters <matthew@centricular.com> - - * gst-libs/gst/vulkan/gstvkdevice.c: - vulkan/device: only enable relevant extensions - Extensions can have a minimum set of dependencies (e.g. 
API version) and may - also be promoted to core in a later version. Don't explicitly enable extensions - that fail to meet their requirements or that have been promoted to the core API. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8554> - -2025-02-25 21:17:57 +1100 Matthew Waters <matthew@centricular.com> - - * gst-libs/gst/vulkan/gstvkoperation.c: - vulkan/operation: retrieve function pointers directly from the device - The instance API version supported may not be of the same version supported by - the device. It is possible that the function that is returned may be non-0 - but not functional due to the requested API version of the instance limiting the - availability of calling the returned function. - Can be reproduced by running a pipeline with GST_VULKAN_INSTANCE_API_VERSION=1.1 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8554> - -2025-02-25 21:04:35 +1100 Matthew Waters <matthew@centricular.com> - - * gst-libs/gst/vulkan/gstvkinstance.c: - vulkan/instance: allow the requested api version to be larger than the supported - Since Vulkan 1.1, the requested API version is the maximum API version that the - application is expecting to use. It is also possible for individual devices - (backed by potentially different drivers) may support a higher or lower API - version than the instance. Both cases (higher and lower) should be supported - and as such, it is not an error to request an API version that is larger than - the instance supported API version. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8554> - -2025-02-25 20:57:15 +1100 Matthew Waters <matthew@centricular.com> - - * ext/vulkan/gstvulkan.c: - vulkan: plugin: add debug for why an instance fails to open - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8554> - -2025-02-25 20:55:09 +1100 Matthew Waters <matthew@centricular.com> - - * gst-libs/gst/vulkan/gstvkformat.c: - * gst-libs/gst/vulkan/gstvkformat.h: - * gst-libs/gst/vulkan/gstvkimagebufferpool.c: - * tests/check/libs/vkformat.c: - vkformat: fix format_from_video_info_2 to actually runtime check versions and extensions - If the vulkan plugin was compiled against a newer version than the supported - vulkan runtime instance or device, then it was possible for format retrieval to - fail. Failure was due to unconditionally using newer extensions and features - without runtime checking them. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8554> - -2025-02-25 20:09:48 +1100 Matthew Waters <matthew@centricular.com> - - * gst-libs/gst/vulkan/gstvkdecoder-private.c: - * gst-libs/gst/vulkan/gstvkinstance.c: - * gst-libs/gst/vulkan/gstvkinstance.h: - * gst-libs/gst/vulkan/gstvkphysicaldevice.c: - * tests/check/libs/vkdevice.c: - vulkan: fix device related API version checks - The API version exposed by a particular device can be completely different from - what is exported by the parent instance. Since Vulkan 1.1 it is also possible - to use newer device API than supported by the instance API version (with the - appropriate version checks). 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8554> - -2025-02-25 14:57:33 +1100 Matthew Waters <matthew@centricular.com> - - * gst-libs/gst/vulkan/gstvkphysicaldevice.c: - * gst-libs/gst/vulkan/gstvkphysicaldevice.h: - * tests/check/libs/vkdevice.c: - vulkan/physicaldevice: add methods for retrieving and checking against an API version - Most version checks should actually be done against the device API version and - not the instance API version. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8554> - -2025-03-05 11:07:38 +0100 Pablo García <pgarcia@fluendo.com> - - * ext/curl/gstcurlbasesink.c: - * ext/curl/gstcurlfilesink.c: - * ext/curl/gstcurlftpsink.c: - * ext/curl/gstcurlhttpsink.c: - * ext/curl/gstcurlsmtpsink.c: - * ext/curl/gstcurltlssink.c: - curl: replace #if with #ifdef - Using #if instead of #ifdef was causing some issues when cross-compiling, like: - ../ext/curl/gstcurlsmtpsink.c:54:5: error: "HAVE_SYS_SOCKET_H" is not - defined, evaluates to 0 -Werror=undef - 54 | #if HAVE_SYS_SOCKET_H - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8589> - -2025-03-05 13:29:20 +0100 Pablo García <pgarcia@fluendo.com> - - * ext/curl/gstcurlbasesink.c: - * ext/curl/gstcurlfilesink.c: - * ext/curl/gstcurlftpsink.c: - * ext/curl/gstcurlhttpsink.c: - * ext/curl/gstcurlhttpsrc.h: - * ext/curl/gstcurlsftpsink.c: - * ext/curl/gstcurlsmtpsink.c: - * ext/curl/gstcurlsshsink.c: - * ext/curl/gstcurltlssink.c: - curl: remove unnecesary reference to unistd.h - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8589> - -2025-02-21 16:24:58 -0600 Christopher Degawa <ccom@randomderp.com> - - * docs/plugins/gst_plugins_cache.json: - * ext/svtav1/gstsvtav1enc.c: - svtav1enc: update to use SVT-AV1 3.0.0 API changes - Signed-off-by: Christopher Degawa <ccom@randomderp.com> - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8539> - 
-2025-03-04 14:33:29 +0100 Branko Subasic <branko@axis.com> - - * ext/voamrwbenc/meson.build: - voamrwbenc: Do not install anything unless dependency found - If the dependency for the plugin is not found then nothing should be - installed, neither the element nor documentation. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8586> - -2025-03-04 22:08:46 +0100 Carlos Bentzen <cadubentzen@igalia.com> - - * gst/videoparsers/gsth266parse.c: - h266parse: clear cache values with memset - Fixes a stack overflow on Windows/MSVC. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8588> - -2025-03-04 15:01:24 +0530 Nirbheek Chauhan <nirbheek@centricular.com> - - * ext/opencv/meson.build: - * gst-libs/gst/opencv/meson.build: - meson: Replace disabler dependencies with not-found dependencies - If a plugin gets disabled due to a `disabler()` dependency, the plugin - docs build itself will get disabled because `all_plugins_paths` will - become a disabler. - This was actually happening with opencv on systems that don't have - opencv available, and could happen with libsoup too if the build files - change in the future. - Let's avoid wasting hours of debugging for people. A not-found - dependency has the same effect. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8582> - -2024-12-17 20:48:46 +0100 Stéphane Cerveau <scerveau@igalia.com> - - * tests/check/elements/dashsink.c: - * tests/check/meson.build: - tests: add dashsink unit test - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7916> - -2024-12-20 14:54:01 +0100 Stéphane Cerveau <scerveau@igalia.com> - - * ext/dash/gstdashsink.c: - dashsink: use gst_dash_sink_reset - To be able to use the properties properly, - the element should be reset by gst_dash_sink_reset - during the state change from READY_PAUSED and PAUSED_READY. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7916> - -2024-12-19 18:22:06 +0100 Stéphane Cerveau <scerveau@igalia.com> - - * ext/dash/gstdashsink.c: - dashsink: send element message on event - On new mpd update and new segment written, send - an element message to signal the event. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7916> - -2024-11-18 12:26:25 +0100 Stéphane Cerveau <scerveau@igalia.com> - - * docs/plugins/gst_plugins_cache.json: - * ext/dash/gstdashsink.c: - dashsink: cleanup the teardown process - The stream was keeping a reference to the sink, preventing - it to be removed properly by the pipleline bin. - Clean up and simplify the code to get the stream from the pad. - Add more mutex protection against add/remove requested pad. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7916> - -2025-02-28 13:06:34 -0300 Denis Yuji Shimizu <denis.shimizu@collabora.com> - - * ext/analyticsoverlay/gstobjectdetectionoverlay.c: - analytics: objectdetectionoverlay: improve event handling - This change ensures that the `GST_EVENT_EOS`, - `GST_EVENT_FLUSH_START` and `GST_EVENT_FLUSH_STOP` - events are forwarded to the sink downstream. - The logging message for `GST_EVENT_FLUSH_START` - has also been fixed. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8576> - -2025-02-28 14:22:07 +0900 Dongyun Seo <dongyun.seo@lge.com> - - * ext/soundtouch/gstpitch.cc: - pitch: fix build error - fix build error due to sound integer sample caps - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8570> - -2025-02-28 11:29:56 +0900 Elliot Chen <elliot.chen@nxp.com> - - * gst-libs/gst/play/gstplay.c: - gstplay: support disabling the selected track at startup - In some cases, need to disable some type tracks at startup before - receiving the stream collection message. And fix printing error log - in this case. 
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8568>

2025-02-26 10:39:06 +0100  Robert Mader <robert.mader@collabora.com>

  * tests/examples/waylandsink/main.c:
    waylandsink/demo: Use playbin3 instead of playbin
    Video looping currently does not work reliably with the latter,
    and playbin3 is generally considered the better choice.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8561>

2025-02-21 19:00:42 -0500  Olivier Crête <olivier.crete@collabora.com>

  * ext/avtp/gstavtpsrc.c:
  * ext/avtp/gstavtpsrc.h:
  * ext/avtp/meson.build:
    avtpsrc: Use GSocket to have cancellable wait
    Otherwise it would block forever when there is no sender.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8550>

2025-02-21 08:37:03 +0100  Roberto Viola <rviola@vicomtech.org>

  * ext/dash/gstmpdperiodnode.c:
    dashsink: fix period duration in dynamic MPD
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8531>

2025-02-21 09:35:37 +0800  Qian Hu (胡骞) <qian.hu@mediatek.com>

  * gst-libs/gst/wayland/gstwldisplay.c:
    wayland: leverage unified object destruction for wl_callback
    This patch refactors gst_wl_display_callback_destroy() to use the
    recently introduced gst_wl_display_object_destroy() helper. Previously,
    the function manually handled wl_callback destruction with explicit
    lock/unlock calls and direct invocation of wl_callback_destroy().
    Switching to gst_wl_display_object_destroy() unifies the destruction
    process across similar objects, reducing code duplication and potential
    errors.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8242>

2025-02-24 09:53:48 +0800  Qian Hu (胡骞) <qian.hu@mediatek.com>

  * gst-libs/gst/wayland/gstwlwindow.c:
    wayland: fix crash issue during stop flow
    When an xdg configure event is received from the Wayland server,
    gst_wl_display_thread_run() calls handle_xdg_surface_configure(),
    which is protected by priv->sync_mutex, and handle_xdg_surface_configure()
    also locks configure_mutex. But if waylandsink changes state from
    PAUSED to READY, it disposes the wlwindow, which tries to clear
    configure_mutex and to destroy the xdg_surface, none of which is
    protected by anything. So the problems are:
    1) clearing configure_mutex while it is still locked aborts
    2) after the xdg_surface is destroyed, handle_xdg_surface_configure()
       may still call ack_configure, which makes the Wayland server go wrong
    This patch updates gst_wl_window_finalize() to use the new destruction
    function for xdg_toplevel and xdg_surface, ensuring all destruction
    operations are properly synchronized.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8242>

2025-02-24 09:43:12 +0800  Qian Hu (胡骞) <qian.hu@mediatek.com>

  * gst-libs/gst/wayland/gstwldisplay.c:
  * gst-libs/gst/wayland/gstwldisplay.h:
    wayland: add synchronized object destruction function
    Introduces a new generic destruction function
    gst_wl_display_object_destroy() that ensures all destruction
    operations are protected by sync_mutex.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8242>

2025-02-23 23:52:57 +0000  Tim-Philipp Müller <tim@centricular.com>

  * meson.build:
    Back to development after 1.25.90

=== release 1.25.90 ===

2025-02-23 23:44:10 +0000  Tim-Philipp Müller <tim@centricular.com>

  * NEWS:
  * RELEASE:
  * gst-plugins-bad.doap:
  * meson.build:
    Release 1.25.90

2025-02-23 16:56:25 +0000  Tim-Philipp Müller <tim@centricular.com>

  * po/hr.po:
  * po/pt_BR.po:
    gst-plugins-bad: update translations

2025-02-21 20:11:09 +0900  Seungha Yang <seungha@centricular.com>

  * sys/nvcodec/plugin.c:
    nvcodec: Register all elements if CUDA kernel is precompiled
    GstCudaConverter-dependent elements can work with a precompiled CUDA
    kernel even if the runtime compiler library is not found.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8536>

2025-02-21 19:18:01 +0900  Seungha Yang <seungha@centricular.com>

  * sys/nvcodec/gstnvjpegenc.cpp:
  * sys/nvcodec/kernel/gstnvjpegenc.cu:
  * sys/nvcodec/kernel/meson.build:
    nvjpegenc: Add support for kernel precompile
    Port to CUDA precompile/cache
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8536>

2025-02-21 18:40:21 +0900  Seungha Yang <seungha@centricular.com>

  * gst-libs/gst/cuda/gstcudanvrtc-private.h:
  * gst-libs/gst/cuda/gstcudanvrtc.cpp:
  * sys/nvcodec/gstcudaconverter.cpp:
  * sys/nvcodec/meson.build:
    cudaconverter: Add support for kernel precompile and cache
    Port to precompile/cache approach
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8536>

2025-02-20 15:26:37 +0900  Seungha Yang <seungha@centricular.com>

  * meson_options.txt:
  * sys/nvcodec/kernel/collect_ptx_headers.py:
  * sys/nvcodec/kernel/gstcudaconverter-unpack.cu:
  * sys/nvcodec/kernel/gstcudaconverter.cu:
  * sys/nvcodec/kernel/meson.build:
  * sys/nvcodec/meson.build:
    nvcodec: Add support for CUDA kernel precompile
    Enable build-time CUDA kernel compilation if nvcc is detected.
    Precompilation is disabled by default and controlled by the
    "nvcodec-cuda-precompile" build option.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8536>

2025-02-19 08:55:44 -0500  Nicolas Dufresne <nicolas.dufresne@collabora.com>

  * docs/plugins/gst_plugins_cache.json:
  * sys/v4l2codecs/gstv4l2decoder.c:
  * sys/v4l2codecs/gstv4l2format.c:
  * sys/v4l2codecs/gstv4l2format.h:
    v4l2codecs: Sort formats to avoid quality loss
    When the driver's preferred format is not picked by downstream, the
    decoder needs to select another format from the list. The selection
    was previously unsorted, resulting in 10-bit data often being
    stripped to 8-bit. To solve this, reorder the formats in a HW
    preference order. This order deviates slightly from the preferred
    order in libgstvideo, preferring bandwidth saving over better CPU
    alignment. As an example, NV15 is preferred over P010. We also
    prefer tiled over linear.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8522>

2025-02-19 08:53:13 -0500  Nicolas Dufresne <nicolas.dufresne@collabora.com>

  * docs/plugins/gst_plugins_cache.json:
  * sys/kms/gstkmsutils.c:
    kmssink: Add NV12_10LE40 / NV15 support
    This is needed until kmssink is ported to use libgstvideo mapping.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8525>

2025-02-20 22:20:48 +0900  Seungha Yang <seungha@centricular.com>

  * docs/plugins/gst_plugins_cache.json:
    closedcaption: Add h264/h265 ccinserter docs
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8496>

2025-02-17 20:47:35 +0900  Seungha Yang <seungha@centricular.com>

  * ext/closedcaption/gstclosedcaption.c:
  * ext/closedcaption/gsth265ccinserter.c:
  * ext/closedcaption/gsth265ccinserter.h:
  * ext/closedcaption/gsth265reorder.c:
  * ext/closedcaption/gsth265reorder.h:
  * ext/closedcaption/meson.build:
    closedcaption: Add h265ccinserter element
    Adds a new element for inserting closed caption SEI into an H.265 stream.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8496>

2025-02-20 22:04:34 +0900  Seungha Yang <seungha@centricular.com>

  * ext/closedcaption/gstclosedcaption.c:
  * ext/closedcaption/gstcodecccinserter.c:
  * ext/closedcaption/gstcodecccinserter.h:
  * ext/closedcaption/gsth264ccinserter.c:
  * ext/closedcaption/gsth264ccinserter.h:
  * ext/closedcaption/gsth264reorder.c:
  * ext/closedcaption/gsth264reorder.h:
  * ext/closedcaption/meson.build:
    closedcaption: Add h264ccinserter element
    Adds a new element for inserting closed caption SEI into an H.264 stream.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8496>

2025-02-15 20:17:59 +0900  Seungha Yang <seungha@centricular.com>

  * gst-libs/gst/codecs/gsth264picture-private.h:
    h264picture: Export private method symbols
    That method will be used by the plugin.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8496>

2025-02-19 18:34:41 +0900  Seungha Yang <seungha@centricular.com>

  * tests/examples/cuda/meson.build:
  * tests/examples/cuda/nvenc-extern-pool.c:
    examples: Add example for nvenc extern-cuda-bufferpool property
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8516>

2025-02-19 17:46:34 +0900  Seungha Yang <seungha@centricular.com>

  * docs/plugins/gst_plugins_cache.json:
  * sys/nvcodec/gstnvencobject.cpp:
  * sys/nvcodec/gstnvencobject.h:
  * sys/nvcodec/gstnvencoder.cpp:
    nvencoder: Add extern-cuda-bufferpool property
    Add a new property to support application-allocated GstCudaMemory.
    CUDA memory alloc/free is a global device synchronization point,
    as if launching a CUDA kernel on the default CUDA stream. To avoid
    the global synchronization, we added stream-ordered allocation
    support, which allocates CUDA memory asynchronously.
    However, NVENC does not allow registering stream-ordered allocated
    memory, so the encoder was allocating normal CUDA memory whenever
    the input CUDA memory was of the stream-ordered type.
    The newly introduced property allows the application to provide the
    encoder with a GstCudaBufferPool. The application can preallocate a
    sufficient amount of CUDA memory in advance to avoid global device
    synchronization while the pipeline is running.
    For now, this pool is used only if the input CUDA memory is
    allocated via stream-ordered allocation.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8516>

2025-02-19 15:38:08 +0900  Seungha Yang <seungha@centricular.com>

  * sys/nvcodec/gstcudaconverter.cpp:
    cudaconverter: Use stream ordered allocation if requested
    ... to avoid global device synchronization.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8516>

2025-02-19 14:47:10 +0900  Seungha Yang <seungha@centricular.com>

  * sys/nvcodec/gstcudaconverter.cpp:
    cudaconverter: Remove unnecessary CUDA memory allocation
    We can pass the struct to the kernel by value.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8516>

2025-02-13 15:30:04 +1100  Matthew Waters <matthew@centricular.com>

  * ext/vulkan/vkupload.c:
    vkupload: don't require that input memory count matches output memory count
    It can very easily not match, e.g.:
    videotestsrc ! video/x-raw,format=NV12 ! identity drop-allocation=true ! \
      vulkanupload ! vulkancolorconvert ! vulkansink
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8530>

2025-02-18 01:33:40 +0900  Seungha Yang <seungha@centricular.com>

  * docs/plugins/gst_plugins_cache.json:
  * ext/closedcaption/gstclosedcaption.c:
  * ext/closedcaption/gsth265ccextractor.c:
  * ext/closedcaption/gsth265ccextractor.h:
  * ext/closedcaption/meson.build:
    closedcaption: Add h265ccextractor element
    This element will collect closed caption meta from an H.265 stream
    and output caption buffers in display order.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8500>

2025-02-18 01:14:38 +0900  Seungha Yang <seungha@centricular.com>

  * ext/closedcaption/gsth264ccextractor.c:
    h264ccextractor: Port to GstVecDeque
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8500>

2025-02-18 10:39:24 +0200  Sebastian Dröge <sebastian@centricular.com>

  * gst-libs/gst/mpegts/gstmpegtsdescriptor.h:
    mpegts: Rename un-namespaced REG_TO_UINT32 macro
    Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4226
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8506>

2025-02-12 10:37:09 +0100  Edward Hervey <edward@centricular.com>

  * gst-libs/gst/mpegts/gst-atsc-section.c:
  * gst-libs/gst/mpegts/gst-dvb-descriptor.c:
  * gst-libs/gst/mpegts/gst-dvb-section.c:
  * gst-libs/gst/mpegts/gst-scte-section.c:
  * gst-libs/gst/mpegts/gstmpegtsdescriptor.c:
  * gst-libs/gst/mpegts/gstmpegtssection.c:
    mpegts: Update annotations
    Specify whether the various functions can return a NULL value.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8466>

2024-12-09 17:42:18 +0100  Benjamin Gaignard <benjamin.gaignard@collabora.com>

  * docs/plugins/gst_plugins_cache.json:
  * gst/debugutils/gstvideocodectestsink.c:
    debugutils: videocodectestsink: Add GBR_10LE as supported pixel format
    Add GBR_10LE to the list of formats supported by the element.
    GBR_10LE is used as an output format in the Fluster ARGON test suite.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8502>

2024-12-09 10:57:30 +0100  Benjamin Gaignard <benjamin.gaignard@collabora.com>

  * gst/videoparsers/gstav1parse.c:
    videoparsers: av1: Fix typo in debug log
    comsumed -> consumed
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8502>

2024-12-06 14:13:22 +0100  Benjamin Gaignard <benjamin.gaignard@collabora.com>

  * gst/videoparsers/gstav1parse.c:
    videoparsers: av1: Allow av1parse to parse annexb streams
    Let the av1 parser do its job even if it receives an annexb stream.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8502>

2025-02-13 21:10:40 +1100  Matthew Waters <matthew@centricular.com>

  * gst-libs/gst/vulkan/gstvkfence.c:
    vkfencecache: call parent release() only after resources have been removed
    The parent class will allow the handle to be reused at the end of the
    function. If we are still modifying the released fence, then another
    thread can acquire the fence while we are still clearing some of its
    data and produce a data race or a leaked fence depending on which
    thread wins.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8491>

2025-02-16 20:08:58 -0500  Olivier Crête <olivier.crete@collabora.com>

  * gst-libs/gst/analytics/gstanalyticsmeta.c:
    analyticsmeta: Make output struct annotation more explicit
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8489>

2025-02-16 20:07:48 -0500  Olivier Crête <olivier.crete@collabora.com>

  * gst-libs/gst/analytics/gstanalyticsmeta.c:
    analyticsmeta: Avoid crash when adding Mtd with NULL Mtd structure
    It's documented that you don't need to get the position of the Mtd
    when adding it.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8489>

2025-02-16 21:30:42 +0100  Stéphane Cerveau <scerveau@igalia.com>

  * ext/vulkan/gstvulkan.c:
    vulkan: always register vulkansink elements
    vulkansink elements were enabled only if the video extensions were
    present, which breaks backward compatibility on platforms such as
    Android or iOS.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8488>

2024-12-23 15:37:17 +0100  Tim-Philipp Müller <tim@centricular.com>

  * ext/srtp/gstsrtp.c:
  * ext/srtp/gstsrtp.h:
  * ext/srtp/gstsrtpdec.c:
  * ext/srtp/gstsrtpdec.h:
  * ext/srtp/gstsrtpenc.c:
  * ext/srtp/meson.build:
    srtp: require libsrtp2, drop support for libsrtp1
    Even old old Debian stable from 2019 ships with a recent-enough
    libsrtp2 version.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8205>

2025-02-16 14:09:49 +0900  Seungha Yang <seungha@centricular.com>

  * ext/closedcaption/gstcccombiner.c:
    cccombiner: Fix critical warnings
    gst_buffer_add_video_caption_meta: assertion 'data != NULL' failed
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8486>

2025-02-16 14:36:08 +0200  Sebastian Dröge <sebastian@centricular.com>

  * gst-libs/gst/play/gstplay.c:
    play: Fix annotations of `parse_missing_plugins()` API
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8487>

2025-02-15 15:44:14 +0000  Tim-Philipp Müller <tim@centricular.com>

  * ext/svtjpegxs/gstsvtjpegxsenc.c:
    svtjpegxsenc: fix copy'n'paste error in property registration
    Doesn't change anything in practice because the default value
    was set correctly in the instance init function.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8485>

2024-12-09 13:39:16 -0500  Arun Raghavan <arun@asymptotic.io>

  * ext/onnx/gstonnxclient.cpp:
    onnx: Allow generic well-known names for tensors
    This allows us to use the upstream version of the ssd_mobilenet
    model [1], and starts setting us up to allow some tensor names by
    convention if we want to add more decoders.
    [1] https://github.com/onnx/models/tree/main/validated/vision/object_detection_segmentation/ssd-mobilenetv1
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8117>

2025-02-14 14:10:25 +0100  Edward Hervey <edward@centricular.com>

  * gst/mpegtsdemux/mpegtspacketizer.c:
    mpegts: Fix PCR Discontinuity handling for HLS
    We can only reliably use the adaptation field discontinuity flag if
    our input is properly timestamped on a regular basis (ex: UDP, DVB,
    RTP, etc...). For HLS and other systems which don't provide that
    information, we should not reset the base observations. Otherwise
    we would potentially end up picking a reference time from a long
    time ago.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8480>

2024-04-09 21:31:07 +0900  Seungha Yang <seungha@centricular.com>

  * docs/plugins/gst_plugins_cache.json:
  * ext/closedcaption/gstclosedcaption.c:
  * ext/closedcaption/gsth264ccextractor.c:
  * ext/closedcaption/gsth264ccextractor.h:
  * ext/closedcaption/meson.build:
    closedcaption: Add closed caption extractor element for H.264 stream
    Adds a new h264ccextractor element. This element will extract closed
    caption meta from an H.264 stream and output it in display order.
    For the frame reordering, this element is implemented as a subclass
    of h264decoder but without actual frame decoding.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6580>

2025-02-14 10:29:08 +0100  Alexander Slobodeniuk <aslobodeniuk@fluendo.com>

  * sys/applemedia/avfassetsrc.m:
    avfassetsrc: fix mutex leak
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8479>

2025-02-14 10:25:14 +0100  Alexander Slobodeniuk <aslobodeniuk@fluendo.com>

  * sys/applemedia/avfassetsrc.m:
    avfassetsrc: fix missing GObject dispose chainup
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8479>

2025-02-10 15:01:14 +0100  Alexander Slobodeniuk <aslobodeniuk@fluendo.com>

  * sys/applemedia/videotexturecache-gl.m:
  * sys/applemedia/videotexturecache-vulkan.mm:
    applemedia: fix chaining up GObject's constructed virtual method
    Fixes #4224
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8446>

2025-02-13 13:03:37 +0900  Seungha Yang <seungha@centricular.com>

  * ext/closedcaption/gstcccombiner.c:
  * ext/closedcaption/gstcccombiner.h:
  * tests/check/elements/cccombiner.c:
    cccombiner: Fix wrong caps and buffer ordering
    If there is a queued video buffer, forward the new caps event once
    the queued video buffer is drained.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8473>

2025-02-13 10:04:16 +0800  James Oliver <james.oliver@icetana.ai>

  * sys/nvcodec/gstcudaipcserver.cpp:
  * sys/nvcodec/gstnvencobject.cpp:
    nvcodec: fix invalidated std::set::iterator usage
    As per the C++ standard, any usage of a std::set::iterator after it
    has been erased from the collection results in undefined behaviour.
    This has resulted in application crashes due to CUDA illegal address
    errors. This commit fixes the issue by copying and incrementing the
    iterator within any for-loops that also invoke std::set::erase.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8472>

2025-02-10 13:43:11 +0100  Alexander Slobodeniuk <aslobodeniuk@fluendo.com>

  * sys/d3d11/gstd3d11window_corewindow.cpp:
  * sys/d3d11/gstd3d11window_swapchainpanel.cpp:
    d3d11: fix chaining up GObject's constructed virtual method
    Fixes #4223
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8448>

2025-02-10 13:41:54 +0100  Alexander Slobodeniuk <aslobodeniuk@fluendo.com>

  * sys/wasapi/gstmmdeviceenumerator.cpp:
    wasapi: fix chaining up GObject's constructed virtual method
    Fixes #4223
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8448>

2025-02-10 13:41:19 +0100  Alexander Slobodeniuk <aslobodeniuk@fluendo.com>

  * sys/asio/gstasioobject.cpp:
    asio: fix chaining up GObject's constructed virtual method
    Fixes #4223
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8448>

2025-02-10 13:40:24 +0100  Alexander Slobodeniuk <aslobodeniuk@fluendo.com>

  * gst-libs/gst/winrt/gstwinrtdevicewatcher.cpp:
    winrt: fix chaining up GObject's constructed virtual method
    Fixes #4223
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8448>

2025-02-12 22:07:41 +0900  Seungha Yang <seungha@centricular.com>

  * gst-libs/gst/d3d12/gstd3d12converter-builder.cpp:
  * gst-libs/gst/d3d12/gstd3d12converter-pack.cpp:
  * gst-libs/gst/d3d12/gstd3d12converter-unpack.cpp:
  * gst-libs/gst/d3d12/gstd3d12device.cpp:
  * gst-libs/gst/d3d12/gstd3d12mipgen.cpp:
  * sys/d3d12/gstd3d12compositor.cpp:
  * sys/d3d12/gstd3d12overlaycompositor.cpp:
  * sys/d3d12/gstd3d12testsrc.cpp:
  * sys/d3d12/gstd3d12yadif.cpp:
    d3d12: Update root signature flags for old Windows 10
    Use root signature flags that were part of the initial Direct3D 12
    release. Old OS versions do not understand the newly introduced flags.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8469>

2025-02-12 16:51:24 +0100  wbartel <wilhelm.bartel@streamonkey.de>

  * gst-libs/gst/webrtc/webrtc-priv.h:
  * gst-libs/gst/webrtc/webrtc.h:
  * gst-libs/gst/webrtc/webrtc_fwd.h:
    webrtc: fix recursive G_BEGIN_DECLS and include missing sctptransport.h in webrtc.h
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8470>

2025-02-12 00:54:25 +0200  Mart Raudsepp <mart.raudsepp@globalm.media>

  * gst/mpegtsmux/gstbasetsmux.c:
    mpegtsmux: Fix error message for PID < 0x40 to be in the claimed base 16
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8463>

2025-02-12 00:53:24 +0200  Mart Raudsepp <mart.raudsepp@globalm.media>

  * gst/mpegtsmux/gstbasetsmux.c:
    mpegtsmux: Fix deadlock when requesting pad for PID < 0x40
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8463>

2025-02-11 00:02:50 +0100  Alexander Slobodeniuk <aslobodeniuk@fluendo.com>

  * sys/directshow/dshowdeviceprovider.cpp:
    dshowdeviceprovider: fix missing GObject vtable chainups
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8452>

2025-02-11 00:02:05 +0100  Alexander Slobodeniuk <aslobodeniuk@fluendo.com>

  * gst/transcode/gst-cpu-throttling-clock.c:
    cpu-throttling-clock: fix missing GObject vtable chainups
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8452>

2025-02-11 00:01:23 +0100  Alexander Slobodeniuk <aslobodeniuk@fluendo.com>

  * ext/wpe/wpe-extension/gstwpebusmsgforwarder.c:
    wpebusmsgforwarder: fix missing GObject vtable chainups
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8452>

2025-02-11 00:00:42 +0100  Alexander Slobodeniuk <aslobodeniuk@fluendo.com>

  * ext/qroverlay/gstbaseqroverlay.c:
    baseqroverlay: fix missing GObject vtable chainups
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8452>

2025-02-11 00:00:23 +0100  Alexander Slobodeniuk <aslobodeniuk@fluendo.com>

  * ext/codec2json/gstav12json.c:
  * ext/codec2json/gsth2642json.c:
  * ext/codec2json/gsth2652json.c:
  * ext/codec2json/gstvp82json.c:
    codec2json: fix missing GObject vtable chainups
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8452>

2025-02-11 16:51:38 +0100  Robert Mader <robert.mader@collabora.com>

  * gst-libs/gst/wayland/gstwldisplay.c:
    wayland: Report correct modifiers
    Fixes: e0e7a11089 ("wayland: De-dupe filling caps format fields")
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8460>

2025-02-10 19:51:57 +0900  Seungha Yang <seungha@centricular.com>

  * sys/d3d12/gstd3d12screencapturesrc.cpp:
    d3d12screencapturesrc: Fix infinite negotiation on resolution change
    Update the crop rect if the previous capture got an error. The error
    might result from a resolution change.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8444>

2025-02-10 19:34:28 +0900  Seungha Yang <seungha@centricular.com>

  * sys/d3d12/gstd3d12dxgicapture.cpp:
    d3d12screencapturesrc: Fix capturing rotated monitor
    Acquired and reconstructed frames will have different resolutions if
    the monitor is rotated. Use the copying logic of the d3d11
    implementation.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8444>

2025-02-07 17:48:32 +0100  Carlos Bentzen <cadubentzen@igalia.com>

  * docs/plugins/gst_plugins_cache.json:
  * gst/mpegtsmux/gstbasetsmux.c:
  * gst/mpegtsmux/gstmpegtsmux.c:
  * gst/mpegtsmux/tsmux/tsmuxstream.c:
  * gst/mpegtsmux/tsmux/tsmuxstream.h:
    mpegtsmux: add support for VVC/H.266 video
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8435>

2025-02-06 09:11:14 +0100  Edward Hervey <edward@centricular.com>

  * gst/mpegtsdemux/mpegtspacketizer.c:
  * gst/mpegtsdemux/tsdemux.c:
    mpegts: Take into account adaptation field discont
    If the flag is set, there is an *expected* discontinuity:
    * For CC, we ignore the fact it's not contiguous
    * For PCR, we acknowledge the values aren't contiguous
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8420>

2025-01-24 13:53:36 -0500  Daniel Morin <daniel.morin@collabora.com>

  * ext/srt/gstsrtsink.c:
  * ext/srt/gstsrtsink.h:
    srtsink: filter stream-config already sent
    Only send a buffer with GST_BUFFER_FLAG_HEADER if this buffer is not
    present in the streamheader.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8373>

2025-02-10 13:16:20 +0100  Alexander Slobodeniuk <aslobodeniuk@fluendo.com>

  * sys/kms/gstkmsallocator.c:
    kms: fix chaining up GObject's constructed virtual method
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8416>

2025-02-10 13:15:34 +0100  Alexander Slobodeniuk <aslobodeniuk@fluendo.com>

  * gst-libs/gst/mse/gstmsesrc.c:
    msesrc: fix chaining up GObject's constructed virtual method
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8416>

2025-02-06 11:40:55 +0100  Alexander Slobodeniuk <aslobodeniuk@fluendo.com>

  * gst-libs/gst/vulkan/gstvkphysicaldevice.c:
    vkphysicaldevice: fix chaining up GObject's constructed virtual method
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8416>

2025-02-06 11:33:50 +0100  Alexander Slobodeniuk <aslobodeniuk@fluendo.com>

  * gst/mpegtsmux/gstbasetsmux.c:
    basetsmux: fix chaining up GObject's constructed virtual method
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8416>

2025-02-06 11:28:16 +0100  Alexander Slobodeniuk <aslobodeniuk@fluendo.com>

  * gst/rtp/gstrtpsrc.c:
    rtpsrc: fix chaining up GObject's constructed virtual method
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8416>

2024-11-04 18:49:32 +0100  Stéphane Cerveau <scerveau@igalia.com>

  * gst-libs/gst/vulkan/gstvkutils.c:
    vkutils: update gst_vulkan_handle_set_context doc
    device is a GstVulkanDevice
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7011>

2024-06-11 17:25:51 +0200  Stéphane Cerveau <scerveau@igalia.com>

  * ext/vulkan/gstvulkan.c:
  * ext/vulkan/vksink.c:
  * ext/vulkan/vksink.h:
    vksink: allow multiple device registration
    As for the decoders, the plugin can register multiple devices
    present on the system.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7011>

2024-06-04 18:49:37 +0200  Stéphane Cerveau <scerveau@igalia.com>

  * ext/vulkan/gstvulkan.c:
  * ext/vulkan/gstvulkanelement.c:
  * ext/vulkan/gstvulkanelements.h:
  * ext/vulkan/vkh264dec.c:
  * ext/vulkan/vkh264dec.h:
  * ext/vulkan/vkh265dec.c:
  * ext/vulkan/vkh265dec.h:
    vkh26xdec: register multiple elements
    Register the multiple devices available on the system as separate
    features in the registry for the vulkan decoders.
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7011> - -2024-10-21 17:05:18 +0200 Benjamin Gaignard <benjamin.gaignard@collabora.com> - - * sys/v4l2codecs/gstv4l2codecav1dec.c: - v4l2codecs: av1: Allow inter-frames resolution change - When the stream resolution change it is needed to negotiate - a new pools and to update the caps. - Resolution change could occurs on a new sequence or a new - picture so move resolution change detection code in a common - function. - Only call streamoff if the resolution occur while decoding a key frame. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8064> - -2024-10-21 17:03:00 +0200 Benjamin Gaignard <benjamin.gaignard@collabora.com> - - * gst-libs/gst/codecs/gstav1decoder.c: - codecs: av1 decoder: Drain output buffers resolution change - We must drain the pending output picture so that subclass can renegotiate - the caps. Not doing so while still renegotiating would mean that the - subclass would have to do an allocation query before pushing the caps. - Pushing the caps now without this would also not work since these caps - won't match the pending buffers format. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8064> - -2025-02-03 12:46:29 +0000 Glyn Davies <glyn@solet.io> - - * gst/videoparsers/gsth264parse.c: - h264parse: Force full timestamp on all timecode updates. 
Was invalid between midnight and 1am - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8401> - -2025-02-09 17:47:32 +0000 Tim-Philipp Müller <tim@centricular.com> - - * meson.build: - Back to development after 1.25.50 - -=== release 1.25.50 === - -2025-02-09 17:35:17 +0000 Tim-Philipp Müller <tim@centricular.com> - - * NEWS: - * RELEASE: - * gst-plugins-bad.doap: - * meson.build: - Release 1.25.50 - -2025-02-08 16:53:57 +0530 Nirbheek Chauhan <nirbheek@centricular.com> - - * gst-libs/gst/opencv/meson.build: - opencv: Fix pkgconfig dependency name and gstopencv_dep - Broke in bbdf8f599633627d4727b4cab6274c6a2b486a81 - Also print the prefix inside which we try to detect opencv's data dir. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8403> - -2025-02-08 01:49:07 +0530 Nirbheek Chauhan <nirbheek@centricular.com> - - * docs/plugins/gst_plugins_cache.json: - * ext/opencv/gsthanddetect.cpp: - * gst-libs/gst/opencv/meson.build: - opencv: Fix hand detect profile paths - This is the same mechanism used by facedetect - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8403> - -2024-12-13 09:07:48 +0000 Cheung Yik Pang <pang.cheung@harmonicinc.com> - - * sys/va/gstvavp8dec.c: - va: Add VP8 alpha decode bin - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8180> - -2024-10-29 12:40:04 +0800 Cheung Yik Pang <pang.cheung@harmonicinc.com> - - * sys/va/gstvavp9dec.c: - va: Add VP9 alpha decode bin - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8180> - -2024-10-29 12:39:39 +0800 Cheung Yik Pang <pang.cheung@harmonicinc.com> - - * sys/va/gstvacodecalphadecodebin.c: - * sys/va/gstvacodecalphadecodebin.h: - * sys/va/meson.build: - va: Add codec alpha decode bin base class - A VA-API decoder bin base class for codecs with alpha channel support. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8180> - -2025-02-07 15:43:05 +0100 wbartel <wilhelm.bartel@streamonkey.de> - - * gst-libs/gst/webrtc/meson.build: - webrtc: fix pkg-config missing sdp dependency - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8433> - -2025-02-07 08:44:53 +0100 Guillaume Desmottes <guillaume.desmottes@onestream.live> - - * gst-libs/gst/codecs/meson.build: - codecs: include gsth266decoder.h when building gir - Will hopefully fix cerbero ci job. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8430> - -2025-02-05 15:27:14 -0500 Nicolas Dufresne <nicolas.dufresne@collabora.com> - - * docs/plugins/gst_plugins_cache.json: - * sys/v4l2codecs/gstv4l2format.c: - * sys/v4l2codecs/gstv4l2format.h: - v4l2codecs: Add NV12_10LE40 / NV15 support - NV15 is common format on RK platform and is that only uncompressed 10bit - format the display controller on RK3588 supports. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8426> - -2023-05-22 16:15:33 +0200 Kévin Commaille <zecakeh@tedomum.fr> - - * ext/zbar/gstzbar.c: - * tests/check/elements/zbar.c: - zbar: allow to get symbol as bytes - It would be possible to get some binary symbols with a string, but if - they contain NUL bytes, the string will be cut off. To fix this, - provide the decoded symbol as a GBytes too. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4688> - -2023-05-22 16:09:28 +0200 Kévin Commaille <zecakeh@tedomum.fr> - - * docs/plugins/gst_plugins_cache.json: - * ext/zbar/gstzbar.c: - * ext/zbar/gstzbar.h: - * ext/zbar/meson.build: - zbar: allow to enable binary mode - Added in zbar 0.23.1, it is a mode that prevents zbar from trying to - convert the binary data of QR codes to text by guessing the encoding. - Add a property that changes the configuration of the zbar image scanner - accordingly. 
- <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4688> - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4688> - -2025-01-09 23:42:14 +0800 He Junyan <junyan.he@intel.com> - - * gst/videoparsers/gstav1parse.c: - av1parse: Handle the padding OBU correctly - The current av1parse can not find the edge of frame correctly if there - is padding OBUs inside the stream. We now use a flag seen_non_padding to - check whether we see some valid data after a data push. Then the padding - OBUs will be the part of the new frame. - We also refine the code logic to make the code more readable. - Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4044 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8269> - -2025-02-05 17:10:16 +0100 Carlos Bentzen <cadubentzen@igalia.com> - - * gst/videoparsers/gsth266parse.c: - * gst/videoparsers/gsth266parse.h: - h266parse: clean up unused APS fields - Since APS is always carried in-band, we don't need to keep the APS - NALs around in the parser anymore. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8359> - -2025-01-31 00:26:38 +0100 Carlos Bentzen <cadubentzen@igalia.com> - - * gst-libs/gst/codecparsers/gsth266parser.c: - h266parser: do not fail when extension flags are set - For VPS, PPS, APS, OPI and DCI, the extension flags are the last syntax - in the structures, and according to the spec, should be ignored if set to 1. - Therefore, we can just ignore them rather than failing. - This fixes a few failures in fluster, like in the PSEXT_A_Nokia_2 stream. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8359> - -2025-01-24 12:11:28 +0100 Carlos Bentzen <cadubentzen@igalia.com> - - * tests/check/elements/h266parse.c: - h266parse: add tests for vvc1 and vvi1 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8359> - -2025-01-24 12:10:05 +0100 Carlos Bentzen <cadubentzen@igalia.com> - - * docs/plugins/gst_plugins_cache.json: - * gst/videoparsers/gsth266parse.c: - h266parse: enable vvc1 and vvi1 stream formats - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8359> - -2025-01-24 12:07:21 +0100 Carlos Bentzen <cadubentzen@igalia.com> - - * gst/videoparsers/gsth266parse.c: - h266parse: handle packetized frames - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8359> - -2025-02-05 14:17:21 +0100 Carlos Bentzen <cadubentzen@igalia.com> - - * gst/videoparsers/gsth266parse.c: - h266parse: don't prepend APS NALs on IDR frames - Instead, APS NALs can just be pushed as in-band NALs like PH and SEI. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8359> - -2025-01-24 12:21:35 +0100 Carlos Bentzen <cadubentzen@igalia.com> - - * gst/videoparsers/gsth266parse.c: - h266parse: negotiate before handling codec_data NALs - If we find VPS/SPS/PPS in codec_data and call gst_h266_parse_process_nal - with them, we need to have negotiated before in order to correctly - process them with flags like h266parse->transform set or not depending - on the negotiation. This is important because in certain vvc1/vvi1 streams we - may have correct codec_data but faulty parameter sets in the stream and - we would want to push the parameter sets from codec_data first. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8359> - -2025-02-05 14:17:45 +0100 Carlos Bentzen <cadubentzen@igalia.com> - - * gst/videoparsers/gsth266parse.c: - h266parse: parse codec_data - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8359> - -2025-01-24 11:46:38 +0100 Carlos Bentzen <cadubentzen@igalia.com> - - * gst/videoparsers/gsth266parse.c: - h266parse: update IDR position in more cases - The IDR position should be updated if we're processing an - IDR frame or pushing codec NALs, not only when picture_header_in_slice_header_flag - is set. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8359> - -2025-01-24 11:37:10 +0100 Carlos Bentzen <cadubentzen@igalia.com> - - * gst/videoparsers/gsth266parse.c: - h266parse: fix typos - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8359> - -2025-01-24 11:30:36 +0100 Carlos Bentzen <cadubentzen@igalia.com> - - * gst/videoparsers/gsth266parse.c: - h266parse: implement make_codec_data - implement serialization of codec_data containing VvcDecoderConfigurationRecord - as defined in ISO/IEC 14496-15. - The VPS/SPS/PPS NALs are added to the codec_data. APS NALs could be - optionally included as well but will be pushed in-band instead, because: - 1. Logic is easier that way. We'd have to filter out for PREFIX_APS only - (SUFFIX_APS aren't allowed in codec_data). - 2. APS NALs can also be sent for every non-keyframe slice, and often are, so just pushing - them in-band makes more sense to have less to keep track of and avoid possible - duplicates. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8359> - -2025-01-24 11:17:50 +0100 Carlos Bentzen <cadubentzen@igalia.com> - - * gst-libs/gst/codecparsers/gsth266parser.c: - * gst-libs/gst/codecparsers/gsth266parser.h: - * tests/check/libs/h266parser.c: - h266parser: add API to parse VVCDecoderConfigurationRecord - VVCDecoderConfigurationRecord is present in ISOBMFF files carrying - VVC/H.266 streams via the vvcC box, as defined in ISO/IEC 14496-15. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8359> - -2025-02-06 08:34:46 +0530 Nirbheek Chauhan <nirbheek@centricular.com> - - * gst-libs/gst/opencv/meson.build: - opencv: imgcodecs.hpp is also needed to build the plugin - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8418> - -2025-02-06 08:08:58 +0530 Nirbheek Chauhan <nirbheek@centricular.com> - - * ext/opencv/meson.build: - * gst-libs/gst/opencv/meson.build: - meson: Modernize opencv build definitions - Use the fs module instead of using `run_command('test')`, simplify - some indentation, fix dependency management - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8418> - -2025-01-03 15:15:57 +0200 Sebastian Dröge <sebastian@centricular.com> - - * gst-libs/gst/play/gstplay.c: - * gst-libs/gst/play/gstplay.h: - play: Distinguish missing plugin errors and include more details in error/warning messages - Include the URI (and, if possible, the stream-id) in the messages. These are provided - by uridecodebin3 / decodebin3 in most cases but there is fallback code to guess - them otherwise. - For missing plugin errors, the installer details are also included. - The URI is included in all message types. 
- Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3547 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8236> - -2025-02-06 23:28:13 +0900 Seungha Yang <seungha@centricular.com> - - * docs/plugins/gst_plugins_cache.json: - * ext/nvdswrapper/gstnvdsdewarp.cpp: - * ext/nvdswrapper/plugin.cpp: - docs: Add nvdswrapper docs - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8425> - -2024-12-23 11:47:26 +0100 Philippe Normand <philn@igalia.com> - - * ext/wpe/gstwpethreadedview.cpp: - * ext/wpe/gstwpethreadedview.h: - wpe: Reduce gpointer usage in ThreadedView - Those gpointers were introduced when we had to support some old WPE API, no need - for them anymore. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8413> - -2025-02-03 12:25:34 +0100 Nicolas Dufresne <nicolas.dufresne@collabora.com> - - * sys/v4l2codecs/plugin.c: - docs: v4l2codecs: Add plugin index documentation - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5555> - -2023-10-25 16:28:43 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> - - * docs/plugins/gst_plugins_cache.json: - doc: Push v4l2codecs documentation cache - These are normally autogenerated for the platform GStreamer runs on, - though it is convenient to have everything listed in the doc. This - was created with the new GST_V4L2_CODEC_GEN_DOC=1 environment variable. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5555> - -2025-01-30 16:27:36 -0500 Nicolas Dufresne <nicolas.dufresne@collabora.com> - - * sys/v4l2codecs/gstv4l2codecalphadecodebin.c: - * sys/v4l2codecs/gstv4l2codecalphadecodebin.h: - * sys/v4l2codecs/gstv4l2codecav1dec.c: - * sys/v4l2codecs/gstv4l2codech264dec.c: - * sys/v4l2codecs/gstv4l2codech265dec.c: - * sys/v4l2codecs/gstv4l2codecmpeg2dec.c: - * sys/v4l2codecs/gstv4l2codecvp8dec.c: - * sys/v4l2codecs/gstv4l2codecvp9dec.c: - doc: v4l2codecs: Document all decoders - Add the documentation blob and since marker for all decoders. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5555> - -2025-01-30 23:05:58 -0500 Nicolas Dufresne <nicolas.dufresne@collabora.com> - - * sys/v4l2codecs/gstv4l2codecalphadecodebin.c: - * sys/v4l2codecs/gstv4l2codecav1dec.c: - * sys/v4l2codecs/gstv4l2codech264dec.c: - * sys/v4l2codecs/gstv4l2codech265dec.c: - * sys/v4l2codecs/gstv4l2codecmpeg2dec.c: - * sys/v4l2codecs/gstv4l2codecvp8dec.c: - * sys/v4l2codecs/gstv4l2codecvp9dec.c: - * sys/v4l2codecs/gstv4l2decoder.c: - * sys/v4l2codecs/gstv4l2decoder.h: - v4l2codecs: Remove unneeded per-codec abstract class - That subclass was not needed and was causing issues with doc generation. - The only downside of removing it is that the decoder cast macro is no - longer type safe. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5555> - -2025-01-30 20:00:55 -0500 Nicolas Dufresne <nicolas.dufresne@collabora.com> - - * sys/v4l2codecs/gstv4l2codecvp8dec.c: - * sys/v4l2codecs/gstv4l2codecvp9dec.c: - v4l2codecs: Add Hardware class to alpha decoders - This was accidentally omitted; this is needed when filtering hardware - codecs. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5555> - -2023-12-14 15:48:43 -0500 Nicolas Dufresne <nicolas.dufresne@collabora.com> - - * sys/v4l2codecs/gstv4l2codecalphadecodebin.c: - * sys/v4l2codecs/gstv4l2codecalphadecodebin.h: - v4l2codecs: Cleanup alpha decodebin class header - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5555> - -2023-12-14 15:26:27 -0500 Nicolas Dufresne <nicolas.dufresne@collabora.com> - - * sys/v4l2codecs/gstv4l2codecav1dec.c: - * sys/v4l2codecs/gstv4l2codecav1dec.h: - * sys/v4l2codecs/gstv4l2codech264dec.c: - * sys/v4l2codecs/gstv4l2codech264dec.h: - * sys/v4l2codecs/gstv4l2codech265dec.c: - * sys/v4l2codecs/gstv4l2codech265dec.h: - * sys/v4l2codecs/gstv4l2codecmpeg2dec.c: - * sys/v4l2codecs/gstv4l2codecmpeg2dec.h: - * sys/v4l2codecs/gstv4l2codecvp8dec.c: - * sys/v4l2codecs/gstv4l2codecvp8dec.h: - * sys/v4l2codecs/gstv4l2codecvp9dec.c: - * sys/v4l2codecs/gstv4l2codecvp9dec.h: - v4l2codecs: Don't needlessly expose decoder types - We have explicit register functions and have no use for these types in - other components. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5555> - -2023-12-14 15:19:56 -0500 Nicolas Dufresne <nicolas.dufresne@collabora.com> - - * sys/v4l2codecs/gstv4l2codecallocator.h: - * sys/v4l2codecs/gstv4l2codecalphadecodebin.h: - * sys/v4l2codecs/gstv4l2codecav1dec.h: - * sys/v4l2codecs/gstv4l2codecdevice.h: - * sys/v4l2codecs/gstv4l2codech264dec.h: - * sys/v4l2codecs/gstv4l2codech265dec.h: - * sys/v4l2codecs/gstv4l2codecmpeg2dec.h: - * sys/v4l2codecs/gstv4l2codecpool.h: - * sys/v4l2codecs/gstv4l2codecvp8dec.h: - * sys/v4l2codecs/gstv4l2codecvp9dec.h: - * sys/v4l2codecs/gstv4l2decoder.h: - * sys/v4l2codecs/gstv4l2format.h: - v4l2codecs: Use pragma once - This is a nice cleanup and notably removes comments referring to D3D. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5555> - -2023-10-24 17:07:54 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> - - * sys/v4l2codecs/gstv4l2codecav1dec.c: - v4l2codecs: Enable AV1 kernel version check - The uAPI finally got merged into 6.5. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5555> - -2023-10-24 17:04:17 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> - - * sys/v4l2codecs/gstv4l2codecav1dec.c: - * sys/v4l2codecs/gstv4l2codecdevice.c: - * sys/v4l2codecs/gstv4l2codech264dec.c: - * sys/v4l2codecs/gstv4l2codech265dec.c: - * sys/v4l2codecs/gstv4l2codecmpeg2dec.c: - * sys/v4l2codecs/gstv4l2codecvp8dec.c: - * sys/v4l2codecs/gstv4l2codecvp9dec.c: - * sys/v4l2codecs/gstv4l2decoder.c: - * sys/v4l2codecs/gstv4l2decoder.h: - v4l2codecs: Add a doc generation mode - This is enabled through an env var; it allows exposing all elements without - the needed driver support. This is useful to fill the documentation cache. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5555> - -2025-02-03 14:09:16 +0100 Nicolas Dufresne <nicolas.dufresne@collabora.com> - - * docs/plugins/gst_plugins_cache.json: - * ext/gtk/gstgtkwaylandsink.c: - * ext/wayland/gstwaylandsink.c: - waylandsink: Prefer DMABuf over system memory - Swap the template and caps query around so that the sink can describe a - preference for DMAbuf over system memory. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8404> - -2025-02-04 17:33:23 +0100 Stéphane Cerveau <scerveau@igalia.com> - - * tests/check/libs/vkvideoencodeh265.c: - vkvideoencodeh265: fix PicOrderCntVal usage - remove `pic_order_cnt` member variable of GstVulkanH265EncodeFrame and - always use `pic_num` instead. - Initialize the first `pic_num` value in test_encoder_h265_i_p. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8408> - -2025-02-04 16:10:38 +0100 Stéphane Cerveau <scerveau@igalia.com> - - * tests/check/libs/vkvideoencodeh264.c: - * tests/check/libs/vkvideoencodeh265.c: - vkvideoencodeh26x: tests: set constant qp - Set constant qp to 26, in between 0 and 51, the qp range - for h264 and h265. - minQp in the case of ANV is 10 for h265 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8407> - -2025-02-04 03:49:00 +0900 Seungha Yang <seungha@centricular.com> - - * sys/mediafoundation/meson.build: - mediafoundation: Enable MinGW build - The updated MinGW toolchain in cerbero can support MediaFoundation - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8406> - -2025-02-04 05:27:40 +0900 Seungha Yang <seungha@centricular.com> - - * sys/mediafoundation/gstmfcapturedshow.cpp: - * sys/mediafoundation/gstmftransform.cpp: - * sys/mediafoundation/gstmfvideobuffer.cpp: - * sys/mediafoundation/gstmfvideobuffer.h: - * sys/mediafoundation/gstmfvideoencoder.cpp: - mediafoundation: Use DEFINE_GUID instead of DECLSPEC_UUID - MinGW will not define IID for custom COM objects - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8406> - -2025-02-04 04:14:40 +0900 Seungha Yang <seungha@centricular.com> - - * sys/mediafoundation/gstmfutils.h: - mediafoundation: Fix GUID_NULL related MinGW build error - Include cguid.h for GUID_NULL - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8406> - -2025-02-04 04:09:02 +0900 Seungha Yang <seungha@centricular.com> - - * sys/mediafoundation/gstmfaacenc.cpp: - * sys/mediafoundation/gstmfcapturedshow.cpp: - * sys/mediafoundation/gstmfmp3enc.cpp: - * sys/mediafoundation/gstmfsourceobject.cpp: - * sys/mediafoundation/gstmfsourcereader.cpp: - * sys/mediafoundation/gstmftransform.cpp: - * sys/mediafoundation/gstmfutils.cpp: - * sys/mediafoundation/gstmfvideobuffer.cpp: - * 
sys/mediafoundation/gstmfvideobuffer.h: - * sys/mediafoundation/gstmfvideoencoder.cpp: - * sys/mediafoundation/gstmfvideosrc.cpp: - * sys/mediafoundation/gstmfvp9enc.cpp: - * sys/mediafoundation/plugin.cpp: - mediafoundation: Fix various GCC warnings - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8406> - -2024-06-28 09:32:20 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvabasedec.h: - * sys/va/gstvah266dec.c: - * sys/va/gstvah266dec.h: - * sys/va/gstvaprofile.c: - * sys/va/gstvaprofile.h: - * sys/va/meson.build: - * sys/va/plugin.c: - va: Implement the VA h266 decoder - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5865> - -2024-12-20 18:13:23 +0800 He Junyan <junyan.he@intel.com> - - * gst-libs/gst/codecs/gsth266decoder.c: - * gst-libs/gst/codecs/gsth266decoder.h: - * gst-libs/gst/codecs/gsth266picture.c: - * gst-libs/gst/codecs/gsth266picture.h: - * gst-libs/gst/codecs/meson.build: - codecs: Add the H266/VVC decoder base class - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5865> - -2025-02-04 05:27:20 +0900 Seungha Yang <seungha@centricular.com> - - * sys/mediafoundation/gstwin32devicewatcher.cpp: - mfdevice: Unregister device notification callback on stop - ... as intended - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8405> - -2025-02-04 03:55:55 +0900 Seungha Yang <seungha@centricular.com> - - * sys/mediafoundation/gstmfvideoencoder.cpp: - mfvideoenc: Fix profile string check - profile_str is not std::string. 
Use strcmp instead - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8405> - -2025-02-04 02:52:51 +0900 Seungha Yang <seungha@centricular.com> - - * sys/mediafoundation/gstmfdevice.cpp: - mfdevice: Fix memory leak - Release resources on dispose() as intended - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8405> - -2025-02-03 20:39:53 +0900 Seungha Yang <seungha@centricular.com> - - * sys/dwrite/meson.build: - * sys/nvcodec/meson.build: - * sys/qsv/meson.build: - meson: Check d3d12video header for MinGW build - Old MinGW toolchain does not ship the header - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8400> - -2025-02-03 09:39:07 +0100 Sebastian Dröge <sebastian@centricular.com> - - * ext/closedcaption/gstceaccoverlay.c: - cc708overlay: Deprecate element in favour of cea708overlay - Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3459 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8397> - -2025-02-02 19:00:26 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12mipgen.cpp: - d3d12mipgen: Respect requested mip levels - Don't waste GPU power by generating more levels than requested - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8394> - -2025-02-02 00:55:07 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12mipgen.cpp: - d3d12mipgen: Serialize root signature only once - ... 
and reuse serialized blob - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8394> - -2024-06-11 17:46:11 +0200 Ruben Gonzalez <rgonzalez@fluendo.com> - - * meson.build: - meson: use nls option to ENABLE_NLS - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7017> - -2023-09-29 18:14:52 +0200 Tim-Philipp Müller <tim@centricular.com> - - * tools/gst-app-maker: - * tools/gst-element-maker: - * tools/gst-project-maker: - bad: tools: update gst-{app,element,project}-maker for new gst-indent - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5192> - -2023-09-29 18:10:09 +0200 Tim-Philipp Müller <tim@centricular.com> - - * scripts/update-orc-dist-files.py: - scripts: update update-orc-dist-files.py scripts for new gst-indent - And fix python indentation with autopep8 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5192> - -2025-01-31 22:06:53 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12converter.cpp: - d3d12converter: Fix SRV descriptor heap size - Converter was allocating a smaller descriptor heap - than the required size when auto-mipgen is enabled - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8391> - -2025-01-22 23:01:40 +0900 Seungha Yang <seungha@centricular.com> - - * tests/examples/codecparsers/meson.build: - * tests/examples/codecparsers/parse-h264-drop-frames.c: - examples: Add h264parser example - An example to show how to detect frame type using h264parser - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8338> - -2025-01-29 19:37:39 +0200 Sebastian Dröge <sebastian@centricular.com> - - * gst-libs/gst/mpegts/gstmpegtsdescriptor.c: - mpegtsdescriptor: Add (transfer none) annotation to out parameter of parse_registration() - Out parameters are (transfer full) by default. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8380> - -2025-01-09 14:27:11 +0000 Colin Kinloch <colin.kinloch@collabora.com> - - * ext/gtk/gstgtkwaylandsink.c: - * ext/wayland/gstwaylandsink.c: - * gst-libs/gst/wayland/gstwldisplay.c: - * gst-libs/gst/wayland/gstwldisplay.h: - wayland: De-dupe filling caps format fields - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8268> - -2025-01-29 09:31:54 -0500 Nicolas Dufresne <nicolas.dufresne@collabora.com> - - * sys/v4l2codecs/gstv4l2format.c: - v4l2codecs: format: Fix tiled stride with modifiers - After a bit of back and forth, we figured out that for backward - compatibility we need to set the tile stride the way GStreamer - defines it. Sinks such as glimagesink/waylandsink translate it - back to the number of bytes representation used by Linux. - The change in !7355 went the other way around, breaking tiled - playback through waylandsink and glimagesink. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7892> - -2025-01-10 11:29:44 +0000 Colin Kinloch <colin.kinloch@collabora.com> - - * ext/gtk/gstgtkwaylandsink.c: - * ext/wayland/gstwaylandsink.c: - * gst-libs/gst/wayland/gstwldisplay.c: - * gst-libs/gst/wayland/gstwlvideoformat.c: - * gst-libs/gst/wayland/gstwlvideoformat.h: - wayland: Don't filter out unrecognised DRM formats - There is no requirement for a base DRM format to be supported by libgstvideo - in order to be uploaded to. - The linux-dmabuf-v1 format events are DRM_FORMAT codes and don't need to - be converted before use with `gst_video_dma_drm_fourcc_to_string`. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8279> - -2025-01-27 18:55:26 +0100 Jan Alexander Steffens (heftig) <jan.steffens@ltnglobal.com> - - * ext/closedcaption/ccutils.c: - * ext/closedcaption/ccutils.h: - * ext/closedcaption/gstcccombiner.c: - cccombiner: Restore QoS messaging - Reimplement the QoS message generation that was lost together with the - caption frame counting. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7996> - -2025-01-23 15:34:14 +0100 Jan Alexander Steffens (heftig) <jan.steffens@ltnglobal.com> - - * docs/plugins/gst_plugins_cache.json: - * ext/closedcaption/gstcccombiner.c: - * ext/closedcaption/gstcccombiner.h: - cccombiner: Clean up property mutability - Most settings are copied from properties on the READY → PAUSED state - change. The recently added properties violate this scheme, and are - probably unsafe to change. - Make these properties consistently MUTABLE_READY. Also remove the unused - `output_padding` field. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7996> - -2024-11-19 17:38:43 +0100 Jan Alexander Steffens (heftig) <jan.steffens@ltnglobal.com> - - * tests/check/elements/cccombiner.c: - tests: cccombiner: Test rescheduling 50fps to 25fps w/o overflow - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7996> - -2024-11-19 17:38:43 +0100 Jan Alexander Steffens (heftig) <jan.steffens@ltnglobal.com> - - * ext/closedcaption/gstcccombiner.c: - * ext/closedcaption/gstcccombiner.h: - cccombiner: Replace caption frame counting with max_buffer_time - The counting is defective when we are combining with a stream that has a - higher max_cea608_count (such as 60p to 30i), as we produce fewer caption - frames than we consume, leading to periodic queue drops. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7996> - -2025-01-25 00:15:04 -0500 Arun Raghavan <arun@asymptotic.io> - - * ext/webrtcdsp/meson.build: - webrtcdsp: Use C++20 with MSVC if needed - The subproject fails on vs2022 builds with: - ...agc2/input_volume_stats_reporter.cc(89): error C7555: use of designated initializers requires at least '/std:c++20' - So let's force C++20 in this case. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8270> - -2025-01-09 11:37:05 -0500 Arun Raghavan <arun@asymptotic.io> - - * ext/webrtcdsp/gstwebrtcdsp.cpp: - * ext/webrtcdsp/meson.build: - webrtcdsp: Bump to WebRTC AudioProcessing 2.1 - Keep 1.0 support around so distros can manage this bump more easily. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8270> - -2025-01-20 18:19:18 +0100 Andoni Morales Alastruey <ylatuya@gmail.com> - - * sys/applemedia/vtdec.c: - vtdec: fix seek hangs due to a race condition when draining - If the drain function of the decoder triggered by FLUSH_START - is run while the output loop is running, once the output loop - finishes vtdec->downstream_ret will be GST_FLOW_FLUSHING instead - of GST_FLOW_OK, which must not be treated as an error since - the queue is cleaned correctly as well. 
- Fix #4179 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8328> - -2025-01-23 11:50:43 +1100 Matthew Waters <matthew@centricular.com> - - * sys/nvcodec/gstcudacompositor.cpp: - cudacompositor: pass correct variable to debug log functions - Fixes spew of: - gst_debug_log_full_valist: assertion 'id != NULL || - object == NULL || G_IS_OBJECT (object)' failed - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8345> - -2025-01-23 13:20:50 +0100 Guillaume Desmottes <guillaume.desmottes@onestream.live> - - * ext/wpe/gstwpevideosrc.cpp: - wpe: remove glFlush() when filling buffer - According to https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4162#note_2739338 - it was introduced as a workaround for tearing issues. - I do not experience any tearing without flushing on both nvidia and AMD - GPUs, so I suppose it's no longer needed. - Slightly improves CPU usage according to my tests. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8348> - -2025-01-22 19:37:02 +0900 Seungha Yang <seungha@centricular.com> - - * sys/nvcodec/gstcudacompositor.cpp: - cudacompositor: Fix memory leak - gst_cuda_compositor_upload_frame() returns buffers with increased - refcount already - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8335> - -2025-01-17 20:46:55 +1100 Brad Hards <bradh@frogmouth.net> - - * gst/videoparsers/gsth264parse.c: - h264parse: add conditional values to AVCConfigurationRecord - This adds the data required in AVCDecoderConfigurationRecord for - higher profile (High variants) configurations - everything in the if(...) {...} part - of ISO/IEC 14496-15:2024 Section 5.3.2.1.2. (or 5.3.3.1.2 in the 2019 version). - Resolves an error flagged by ComplianceWarden when muxing this into ISOBMFF. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8308> - -2025-01-20 03:14:22 +0900 Seungha Yang <seungha@centricular.com> - - * docs/plugins/gst_plugins_cache.json: - * sys/nvcodec/gstcudacompositor.cpp: - * sys/nvcodec/gstnvav1encoder.cpp: - * sys/nvcodec/gstnvh264encoder.cpp: - * sys/nvcodec/gstnvh265encoder.cpp: - docs: Update nvcodec plugin docs - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8325> - -2025-01-20 21:29:52 +0900 Seungha Yang <seungha@centricular.com> - - * sys/nvcodec/gstcudaipcsink.cpp: - * sys/nvcodec/gstcudaipcsrc.cpp: - cudaipc: Use empty string for address property docs - Since Windows and Linux have different default values, - use empty string when generating plugin docs - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8325> - -2025-01-20 04:52:00 +0900 Seungha Yang <seungha@centricular.com> - - * sys/nvcodec/gstnvav1dec.cpp: - * sys/nvcodec/gstnvav1encoder.cpp: - * sys/nvcodec/gstnvdecoder.cpp: - * sys/nvcodec/gstnvh264dec.cpp: - * sys/nvcodec/gstnvh264encoder.cpp: - * sys/nvcodec/gstnvh265dec.cpp: - * sys/nvcodec/gstnvh265encoder.cpp: - * sys/nvcodec/gstnvvp8dec.cpp: - * sys/nvcodec/gstnvvp9dec.cpp: - nvcodec: Specify documentation caps - ... since produced caps will be different depending on OS and GPU model. 
- Also adding Y444_16LE format to decoder's GL template caps - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8325> - -2025-01-20 02:55:03 +0900 Seungha Yang <seungha@centricular.com> - - * docs/plugins/gst_plugins_cache.json: - * ext/nvcomp/gstnvcompvideoenc.cpp: - * ext/nvcomp/plugin.cpp: - docs: Add nvcomp plugin docs - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8325> - -2025-01-20 18:37:23 +0200 Sebastian Dröge <sebastian@centricular.com> - - * gst/mpegtsdemux/mpegtspacketizer.c: - tsdemux: Fix backwards PTS wraparound detection with ignore-pcr=true - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8326> - -2025-01-20 13:23:50 +0100 Mathieu Duponchelle <mathieu@centricular.com> - - * docs/meson.build: - docs: explicitly list gir files as depends for generating configs - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8324> - -2024-12-18 01:45:28 +0900 Seungha Yang <seungha@centricular.com> - - * sys/nvcodec/gstcudacompositor.cpp: - * sys/nvcodec/gstcudacompositor.h: - * sys/nvcodec/meson.build: - * sys/nvcodec/plugin.c: - nvcodec: Add cudacompositor element - Adding CUDA based compositor element - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8170> - -2024-12-17 00:51:47 +0900 Seungha Yang <seungha@centricular.com> - - * sys/nvcodec/gstcudaconverter.cpp: - cudaconverter: Add support for alpha blending - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8170> - -2024-12-16 01:32:36 +0900 Seungha Yang <seungha@centricular.com> - - * sys/nvcodec/gstcudaconverter.cpp: - * sys/nvcodec/gstcudaconverter.h: - * sys/nvcodec/gstcudaconvertscale.c: - * sys/nvcodec/meson.build: - cudaconverter: Add support for configuration update - Allow updating various configuration values via property - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8170> - -2024-12-14 
23:56:35 +0900 Seungha Yang <seungha@centricular.com> - - * sys/nvcodec/gstcudaconverter.c: - cudaconverter: Pass constant values as kernel argument - Make conversion kernel more flexible and reusable - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8170> - -2024-12-14 21:44:55 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/cuda/cuda-gst.h: - * gst-libs/gst/cuda/gstcudaloader.cpp: - * gst-libs/gst/cuda/stub/cuda.h: - cuda: Load 2D memset function symbols - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8170> - -2025-01-15 17:36:00 +0100 Mathieu Duponchelle <mathieu@centricular.com> - - * docs/meson.build: - * gst-libs/gst/adaptivedemux/meson.build: - * gst-libs/gst/analytics/meson.build: - * gst-libs/gst/audio/meson.build: - * gst-libs/gst/basecamerabinsrc/meson.build: - * gst-libs/gst/codecparsers/meson.build: - * gst-libs/gst/codecs/meson.build: - * gst-libs/gst/cuda/meson.build: - * gst-libs/gst/dxva/meson.build: - * gst-libs/gst/insertbin/meson.build: - * gst-libs/gst/mpegts/meson.build: - * gst-libs/gst/mse/meson.build: - * gst-libs/gst/opencv/meson.build: - * gst-libs/gst/play/meson.build: - * gst-libs/gst/player/meson.build: - * gst-libs/gst/transcoder/meson.build: - * gst-libs/gst/va/meson.build: - * gst-libs/gst/vulkan/meson.build: - * gst-libs/gst/webrtc/meson.build: - * gst-libs/meson.build: - docs: generate hotdoc configs for libraries with our helper script - With this patch, configure time is identical no matter whether doc is - enabled or not. - The configuration files also now contain explicitly-listed sources with - no wildcards. - For the four libraries where hotdoc needs to use clang to generate the - documentation (as opposed to the rest of the libraries where hotdoc uses - the gir), the script will call pkg-config to determine the appropriate - C flags. 
- This means a side effect of this patch is that pkg-config files are now - generated for the gstadaptivedemux and gstopencv libraries. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8312> - -2025-01-17 16:51:22 +0100 Mathieu Duponchelle <mathieu@centricular.com> - - * meson.build: - meson: bump minimum version to 1.4 in every subproject - 36c01d05797ad9c7778939c54870f979bdcbba1f bumped to 1.4 for gst-devtools - and the root project, but we usually keep those in sync everywhere. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8312> - -2025-01-04 20:46:37 +0000 Sam James <sam@gentoo.org> - - * ext/lc3/meson.build: - lc3: tweak meson style - While this might seem a bit silly, it aids some of our infra in - packaging. Tweak for consistency with other use. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8239> - -2025-01-06 13:28:40 +0100 Stéphane Cerveau <scerveau@igalia.com> - - * ext/vulkan/vkh264dec.c: - vkh264dec: enhance interlaced content support - - Use frame_num instead of pic_num to set the long_term_pic_num - fixing 10 interlaced tests in fluster test suite: JVT-AVC_V1 - - Send the slice offset only once in case of interlaced content. - Fixing 5 interlaced tests in fluster test suite: JVT-AVC_V1. - - The default value for top and bottom field flag should be 0 in the - case of progressive content. - - Use the short and long term refs helper getter methods to retrieve the - reference frames according to their non-existing and interlaced state - - Reorganize the find_next_slot_idx code to be easier to read. 
- Co-authored-by: Daniel Almeida <daniel.almeida@collabora.com> - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7854> - -2024-06-21 16:55:05 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * ext/vulkan/vkh264dec.c: - vkh264dec: enable h264 interlaced decoding - First the slot_index shall have the same value for the first and second - fields. - Also, the reference frames are only those with both fields. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7854> - -2024-06-21 16:43:52 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * ext/vulkan/vkh264dec.c: - vkh264dec: make GstVulkanH264Picture reference counted - Thus we could re-use the same structure for interlaced fields: a single bitstream, - single output buffer and single vulkan structures. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7854> - -2024-10-25 15:24:49 +0200 Stéphane Cerveau <scerveau@igalia.com> - - * ext/vulkan/vkh264dec.c: - vkh264dec: non-existing pictures are not refs - Non-existing or gap pictures should not be - considered as refs for the vulkan decoder. - Fix fluster tests: - MR3_TANDBERG_B - MR4_TANDBERG_C - MR5_TANDBERG_C - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7854> - -2024-10-25 12:36:43 +0200 Stéphane Cerveau <scerveau@igalia.com> - - * docs/plugins/gst_plugins_cache.json: - * ext/vulkan/vkh264dec.c: - vkh264dec: support h264 extended profile - Extended is identical to main but allows FMO/ASO features to be used, - and prevents using CABAC. - Using similar logic to "baseline", assume that if we support main, - we can also do extended. - This fixes the following fluster vectors, which otherwise would fail when trying to link the parsebin pad.
- BA3_SVA_C - MR6_BT_B - MR7_BT_B - MR8_BT_B - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7854> - -2025-01-15 17:08:21 -0500 Daniel Morin <daniel.morin@collabora.com> - - * gst-libs/gst/analytics/gstanalyticssegmentationmtd.c: - * gst-libs/gst/analytics/gstanalyticssegmentationmtd.h: - gst-analytics: add missing mtd segmentation API - - add gst_analytics_segmentation_mtd_get_mtd_type() which is required to - retrieve the concrete type of a generic mtd (GstAnalyticsMtd). - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8307> - -2025-01-10 14:15:21 +0200 Sebastian Dröge <sebastian@centricular.com> - - * sys/decklink/gstdecklinkvideosink.cpp: - decklinkvideosink: Fix handling of caps framerate in auto mode - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8280> - -2025-01-10 21:18:45 +0100 Carlos Bentzen <cadubentzen@igalia.com> - - * docs/plugins/gst_plugins_cache.json: - * gst-libs/gst/mpegts/gstmpegtssection.h: - * gst/mpegtsdemux/tsdemux.c: - tsdemux: add support for VVC/H.266 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4940> - -2025-01-14 17:22:12 +0000 Tim-Philipp Müller <tim@centricular.com> - - * gst-libs/gst/webrtc/nice/nice.c: - webrtc-nice: fix compiler warning with older versions of libnice - warning: "HAVE_LIBNICE_CONSENT_FIX" is not defined, evaluates to 0 -Wundef - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8305> - -2025-01-13 00:39:43 +0900 Seungha Yang <seungha@centricular.com> - - * docs/plugins/gst_plugins_cache.json: - d3d12: Update docs for max-mip-levels property - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8290> - -2025-01-13 00:08:28 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12convert.cpp: - * sys/d3d12/gstd3d12convert.h: - d3d12convert: Add max-mip-levels property - Add support for automatic mipmap generation depending
on viewport size - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8290> - -2025-01-12 23:37:58 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12swapchainsink.cpp: - d3d12swapchainsink: Add max-mip-level property - Add support for automatic mipmap generation depending on viewport size - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8290> - -2025-01-12 23:22:44 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12videosink.cpp: - * sys/d3d12/gstd3d12window.cpp: - * sys/d3d12/gstd3d12window.h: - d3d12videosink: Add max-mip-level property - Add support for automatic mipmap generation depending on viewport size - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8290> - -2025-01-10 21:57:58 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12converter.cpp: - * gst-libs/gst/d3d12/gstd3d12converter.h: - * gst-libs/gst/d3d12/gstd3d12mipgen-private.h: - * gst-libs/gst/d3d12/gstd3d12mipgen.cpp: - d3d12converter: Add support for mipmap generation - Adding max-mip-levels property so that converter can generate - mipmap textures if render target size is smaller than - input texture resolution. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8290> - -2025-01-10 01:59:14 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12converter.cpp: - d3d12converter: Refactor to support mipmap handling - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8290> - -2025-01-10 21:05:45 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3dshader/gstd3dshadercache.cpp: - * gst-libs/gst/d3dshader/gstd3dshadercache.h: - * gst-libs/gst/d3dshader/plugin-hlsl/CSMain_mipgen_gray.hlsl: - * gst-libs/gst/d3dshader/plugin-hlsl/hlsl.h: - * gst-libs/gst/d3dshader/plugin-hlsl/meson.build: - * sys/d3d12/gstd3d12mipmapping.cpp: - d3d12mipmapping: Add support for GRAY output - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8290> - -2025-01-10 00:38:39 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12mipgen-private.h: - * gst-libs/gst/d3d12/gstd3d12mipgen.cpp: - * gst-libs/gst/d3dshader/gstd3dshadercache.cpp: - * gst-libs/gst/d3dshader/gstd3dshadercache.h: - * gst-libs/gst/d3dshader/plugin-hlsl/CSMain_mipgen_ayuv.hlsl: - * gst-libs/gst/d3dshader/plugin-hlsl/CSMain_mipgen_vuya.hlsl: - * gst-libs/gst/d3dshader/plugin-hlsl/hlsl.h: - * gst-libs/gst/d3dshader/plugin-hlsl/meson.build: - * sys/d3d12/gstd3d12mipmapping.cpp: - d3d12mipmapping: Skip alpha sampling if possible - If input format has no alpha and output format has no alpha, - skip alpha sampling which can reduce the number of instruction slots - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8290> - -2025-01-09 23:12:05 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12-private.h: - * gst-libs/gst/d3d12/gstd3d12mipgen-private.h: - * gst-libs/gst/d3d12/gstd3d12mipgen.cpp: - * gst-libs/gst/d3d12/meson.build: - * sys/d3d12/gstd3d12mipmapping.cpp: - * sys/d3d12/meson.build: - d3d12: Move mipgen to libs - converter object will 
use mipgen object - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8290> - -2025-01-14 15:00:43 +0000 Tim-Philipp Müller <tim@centricular.com> - - * meson.build: - Back to development after 1.25.1 - -=== release 1.25.1 === - -2025-01-14 14:52:48 +0000 Tim-Philipp Müller <tim@centricular.com> - - * NEWS: - * RELEASE: - * gst-plugins-bad.doap: - * meson.build: - Release 1.25.1 - -2024-12-20 13:28:38 -0700 Jordan Yelloz <jordan.yelloz@collabora.com> - - * sys/decklink/gstdecklink.cpp: - decklink: Fixed caps-building for output devices - When iterating through output devices, video_input_caps was being - updated instead of video_output_caps. - As a result, video output devices were being created with an empty caps object - and `gst-device-monitor-1.0 Video/Sink` would produce no decklink devices. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8298> - -2025-01-13 22:09:02 +0100 Ruben Gonzalez <rgonzalez@fluendo.com> - - * gst/videoparsers/gsth264parse.c: - h264parse: drop duplicated call - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8299> - -2025-01-13 12:48:52 +0000 Tim-Philipp Müller <tim@centricular.com> - - * docs/plugins/gst_plugins_cache.json: - docs: update vampeg2dec docs with new rank - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8295> - -2025-01-13 12:46:26 +0000 Tim-Philipp Müller <tim@centricular.com> - - * docs/plugins/gst_plugins_cache.json: - docs: add svtjpegxs plugin - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8295> - -2025-01-13 12:45:38 +0000 Tim-Philipp Müller <tim@centricular.com> - - * docs/plugins/gst_plugins_cache.json: - * ext/openaptx/openaptx-plugin.c: - docs: add openaptx plugin - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8295> - -2025-01-13 18:10:31 +0100 Mathieu Duponchelle <mathieu@centricular.com> - - * docs/meson.build: - * 
ext/aes/meson.build: - * ext/analyticsoverlay/meson.build: - * ext/aom/meson.build: - * ext/assrender/meson.build: - * ext/avtp/meson.build: - * ext/bs2b/meson.build: - * ext/bz2/meson.build: - * ext/chromaprint/meson.build: - * ext/closedcaption/meson.build: - * ext/codec2json/meson.build: - * ext/colormanagement/meson.build: - * ext/curl/meson.build: - * ext/dash/meson.build: - * ext/dc1394/meson.build: - * ext/directfb/meson.build: - * ext/dtls/meson.build: - * ext/dts/meson.build: - * ext/faac/meson.build: - * ext/faad/meson.build: - * ext/fdkaac/meson.build: - * ext/flite/meson.build: - * ext/fluidsynth/meson.build: - * ext/gme/meson.build: - * ext/gs/meson.build: - * ext/gsm/meson.build: - * ext/gtk/meson.build: - * ext/hls/meson.build: - * ext/iqa/meson.build: - * ext/isac/meson.build: - * ext/ladspa/meson.build: - * ext/lc3/meson.build: - * ext/lcevcdecoder/meson.build: - * ext/lcevcencoder/meson.build: - * ext/ldac/meson.build: - * ext/libde265/meson.build: - * ext/lv2/meson.build: - * ext/mdns/meson.build: - * ext/modplug/meson.build: - * ext/mpeg2enc/meson.build: - * ext/mplex/meson.build: - * ext/musepack/meson.build: - * ext/neon/meson.build: - * ext/nvcomp/meson.build: - * ext/nvdswrapper/meson.build: - * ext/onnx/meson.build: - * ext/openal/meson.build: - * ext/openaptx/meson.build: - * ext/opencv/meson.build: - * ext/openexr/meson.build: - * ext/openh264/meson.build: - * ext/openjpeg/meson.build: - * ext/openmpt/meson.build: - * ext/openni2/meson.build: - * ext/opus/meson.build: - * ext/qroverlay/meson.build: - * ext/qt6d3d11/meson.build: - * ext/resindvd/meson.build: - * ext/rsvg/meson.build: - * ext/rtmp/meson.build: - * ext/sbc/meson.build: - * ext/sctp/meson.build: - * ext/smoothstreaming/meson.build: - * ext/sndfile/meson.build: - * ext/soundtouch/meson.build: - * ext/spandsp/meson.build: - * ext/srt/meson.build: - * ext/srtp/meson.build: - * ext/svtav1/meson.build: - * ext/svthevcenc/meson.build: - * ext/svtjpegxs/meson.build: - * 
ext/teletextdec/meson.build: - * ext/ttml/meson.build: - * ext/voaacenc/meson.build: - * ext/voamrwbenc/meson.build: - * ext/vulkan/meson.build: - * ext/wayland/meson.build: - * ext/webp/meson.build: - * ext/webrtc/meson.build: - * ext/webrtcdsp/meson.build: - * ext/wildmidi/meson.build: - * ext/wpe/meson.build: - * ext/x265/meson.build: - * ext/zbar/meson.build: - * ext/zxing/meson.build: - * gst/accurip/meson.build: - * gst/adpcmdec/meson.build: - * gst/adpcmenc/meson.build: - * gst/aiff/meson.build: - * gst/asfmux/meson.build: - * gst/audiobuffersplit/meson.build: - * gst/audiofxbad/meson.build: - * gst/audiolatency/meson.build: - * gst/audiomixmatrix/meson.build: - * gst/audiovisualizers/meson.build: - * gst/autoconvert/meson.build: - * gst/bayer/meson.build: - * gst/camerabin2/meson.build: - * gst/codecalpha/meson.build: - * gst/codectimestamper/meson.build: - * gst/coloreffects/meson.build: - * gst/debugutils/meson.build: - * gst/dvbsubenc/meson.build: - * gst/dvbsuboverlay/meson.build: - * gst/dvdspu/meson.build: - * gst/faceoverlay/meson.build: - * gst/festival/meson.build: - * gst/fieldanalysis/meson.build: - * gst/freeverb/meson.build: - * gst/frei0r/meson.build: - * gst/gaudieffects/meson.build: - * gst/gdp/meson.build: - * gst/geometrictransform/meson.build: - * gst/id3tag/meson.build: - * gst/insertbin/meson.build: - * gst/inter/meson.build: - * gst/interlace/meson.build: - * gst/ivfparse/meson.build: - * gst/ivtc/meson.build: - * gst/jp2kdecimator/meson.build: - * gst/jpegformat/meson.build: - * gst/librfb/meson.build: - * gst/meson.build: - * gst/midi/meson.build: - * gst/mpegdemux/meson.build: - * gst/mpegpsmux/meson.build: - * gst/mpegtsdemux/meson.build: - * gst/mpegtsmux/meson.build: - * gst/mse/meson.build: - * gst/mxf/meson.build: - * gst/netsim/meson.build: - * gst/onvif/meson.build: - * gst/pcapparse/meson.build: - * gst/pnm/meson.build: - * gst/proxy/meson.build: - * gst/rawparse/meson.build: - * gst/removesilence/meson.build: - * 
gst/rist/meson.build: - * gst/rtmp2/meson.build: - * gst/rtp/meson.build: - * gst/sdp/meson.build: - * gst/segmentclip/meson.build: - * gst/siren/meson.build: - * gst/smooth/meson.build: - * gst/speed/meson.build: - * gst/subenc/meson.build: - * gst/switchbin/meson.build: - * gst/tensordecoders/meson.build: - * gst/timecode/meson.build: - * gst/transcode/meson.build: - * gst/unixfd/meson.build: - * gst/videofilters/meson.build: - * gst/videoframe_audiolevel/meson.build: - * gst/videoparsers/meson.build: - * gst/videosignal/meson.build: - * gst/vmnc/meson.build: - * gst/y4m/meson.build: - * meson.build: - * sys/aja/meson.build: - * sys/amfcodec/meson.build: - * sys/androidmedia/meson.build: - * sys/applemedia/meson.build: - * sys/asio/meson.build: - * sys/bluez/meson.build: - * sys/d3d11/meson.build: - * sys/d3d12/meson.build: - * sys/d3dvideosink/meson.build: - * sys/decklink/meson.build: - * sys/directshow/meson.build: - * sys/directsound/meson.build: - * sys/dvb/meson.build: - * sys/dwrite/meson.build: - * sys/fbdev/meson.build: - * sys/ipcpipeline/meson.build: - * sys/kms/meson.build: - * sys/magicleap/meson.build: - * sys/mediafoundation/meson.build: - * sys/msdk/meson.build: - * sys/nvcodec/meson.build: - * sys/opensles/meson.build: - * sys/qsv/meson.build: - * sys/shm/meson.build: - * sys/tinyalsa/meson.build: - * sys/uvcgadget/meson.build: - * sys/uvch264/meson.build: - * sys/v4l2codecs/meson.build: - * sys/va/meson.build: - * sys/wasapi/meson.build: - * sys/wasapi2/meson.build: - * sys/webview2/meson.build: - * sys/wic/meson.build: - * sys/win32ipc/meson.build: - * sys/winks/meson.build: - * sys/winscreencap/meson.build: - * tools/gst-project-maker: - docs: port plugins to explicit sources - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8273> - -2024-02-13 10:21:15 -0500 Xavier Claessens <xavier.claessens@collabora.com> - - * sys/androidmedia/magicleap/gstamc-codec-ml.c: - * sys/androidmedia/magicleap/gstamc-codeclist-ml.c: - 
* sys/androidmedia/magicleap/gstamc-format-ml.c: - * sys/androidmedia/magicleap/gstamc-internal-ml.h: - * sys/androidmedia/magicleap/gstamc-ml.c: - * sys/androidmedia/magicleap/gstamc-surfacetexture-ml.c: - * sys/androidmedia/magicleap/gstamc-surfacetexture-ml.h: - * sys/androidmedia/meson.build: - magicleap: Drop MLSDK support - It was used by ML1 (first gen device) which is deprecated and not - supported anymore. ML2 uses standard Android JNI and NDK. - Note that the mlaudiosink element remains in bad/sys/magiclea because it - allows 3d spatial audio and that API is still supported by the Magicleap - SDK. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6108> - -2025-01-06 09:12:19 +0100 Edward Hervey <edward@centricular.com> - - * ext/iqa/meson.build: - * ext/sctp/usrsctp/meson.build: - * ext/soundtouch/meson.build: - * ext/ttml/meson.build: - * gst-libs/gst/vulkan/meson.build: - * gst/dvbsubenc/meson.build: - * meson.build: - * sys/dwrite/libcaption/meson.build: - * sys/qsv/libmfx/meson.build: - * tests/check/meson.build: - bad: Add extra warning flags - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8229> - -2025-01-11 13:40:39 +0100 Edward Hervey <edward@centricular.com> - - * sys/winscreencap/gstgdiscreencapsrc.c: - * sys/winscreencap/gstwinscreencap.c: - * sys/winscreencap/gstwinscreencap.h: - winscreencap: Don't use aggregate returns - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8229> - -2025-01-11 09:07:27 +0100 Edward Hervey <edward@centricular.com> - - * sys/decklink/meson.build: - decklink: Ignore undef warnings in decklink API - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8229> - -2025-01-10 17:24:40 +0100 Edward Hervey <edward@centricular.com> - - * ext/openni2/meson.build: - openni2: Ignore undef in external header - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8229> - -2025-01-10 17:24:30 +0100
Edward Hervey <edward@centricular.com> - - * ext/x265/meson.build: - x265: Ignore undef in external headers - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8229> - -2025-01-10 16:31:01 +0100 Edward Hervey <edward@centricular.com> - - * sys/amfcodec/meson.build: - amf: Ignore undef warnings in external headers - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8229> - -2025-01-10 16:30:25 +0100 Edward Hervey <edward@centricular.com> - - * ext/spandsp/meson.build: - spandsp: Ignore undef issue in external headers - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8229> - -2025-01-10 16:29:53 +0100 Edward Hervey <edward@centricular.com> - - * gst/transcode/gsturitranscodebin.c: - uritranscodebin: Fix definition usage - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8229> - -2025-01-10 16:17:11 +0100 Edward Hervey <edward@centricular.com> - - * ext/sctp/sctpassociation.c: - sctp: Convert function to avoid aggregate return - It's only used locally and only to fill an existing variable - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8229> - -2025-01-10 11:42:19 +0100 Edward Hervey <edward@centricular.com> - - * sys/applemedia/avfdeviceprovider.m: - * sys/applemedia/avfvideosrc.m: - * sys/applemedia/corevideobuffer.c: - * sys/applemedia/videotexturecache-gl.h: - * sys/applemedia/videotexturecache-gl.m: - * sys/applemedia/videotexturecache-vulkan.mm: - * sys/applemedia/videotexturecache.m: - * sys/applemedia/vtdec.c: - * sys/applemedia/vtenc.c: - applemedia: Fix usage of HAVE_IOS define - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8229> - -2025-01-10 11:10:44 +0100 Edward Hervey <edward@centricular.com> - - * gst/timecode/meson.build: - timecode: Fix definition - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8229> - -2025-01-10 11:10:25 +0100 Edward Hervey 
<edward@centricular.com> - - * ext/resindvd/resindvdbin.c: - resindvd: Fix definition - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8229> - -2025-01-10 11:09:58 +0100 Edward Hervey <edward@centricular.com> - - * ext/curl/gstcurlhttpsrc.h: - * meson.build: - curl: Fix definitions - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8229> - -2025-01-10 10:27:51 +0100 Edward Hervey <edward@centricular.com> - - * gst-libs/gst/vulkan/gstvkdebug.c: - vulkan: Include api header - Needed for GST_VULKAN_HAVE_VIDEO_EXTENSIONS - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8229> - -2025-01-06 11:03:32 +0100 Edward Hervey <edward@centricular.com> - - * ext/wpe/wpe-extension/gstwpeaudiosink.c: - wpeaudiosink: Check error value explicitly - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8229> - -2025-01-10 15:55:42 +0100 Edward Hervey <edward@centricular.com> - - * sys/msdk/meson.build: - msdk: Ignore aggregate return warning - That's how their API is implemented - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8229> - -2025-01-10 15:53:55 +0100 Edward Hervey <edward@centricular.com> - - * gst-libs/gst/d3d11/meson.build: - * sys/d3d11/meson.build: - d3d11: Ignore undef issues with external headers - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8229> - -2025-01-06 15:06:58 +0100 Edward Hervey <edward@centricular.com> - - * gst-libs/gst/d3d12/meson.build: - * sys/d3d12/meson.build: - d3d12: Disable implicit fallthrough checks - There are some missing explicit fallthrough statements in the direct headers - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8229> - -2025-01-06 10:36:04 +0100 Edward Hervey <edward@centricular.com> - - * gst/rtmp2/gstrtmp2locationhandler.c: - * gst/rtmp2/rtmp/rtmpclient.c: - * gst/rtmp2/rtmp/rtmpclient.h: - rtmp2: Explicitly define scheme 
error enum - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8229> - -2025-01-06 10:28:20 +0100 Edward Hervey <edward@centricular.com> - - * ext/directfb/dfbvideosink.c: - dfbvideosink: Rework escape handling - Detected by a fallthrough. - * Just use if/else for clarity - * Remove 2002 fart joke - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8229> - -2025-01-06 09:10:48 +0100 Edward Hervey <edward@centricular.com> - - * gst/mxf/mxfdemux.c: - mxfdemux: Fix segments iteration - `i >= 0` is always true since it's an unsigned integer ... - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8229> - -2025-01-06 09:10:08 +0100 Edward Hervey <edward@centricular.com> - - * gst/siren/encoder.c: - siren: Cast shift mask to unsigned value - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8229> - -2025-01-05 18:42:01 +0100 Edward Hervey <edward@centricular.com> - - * gst-libs/gst/codecparsers/gsth265parser.c: - h265parser: Fix unsigned value reading - Unsigned values are always above 0, use MAX variant for reading - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8229> - -2025-01-05 12:13:23 +0100 Edward Hervey <edward@centricular.com> - - * tests/examples/ipcpipeline/ipc-play.c: - examples/ipcpipeline: Fix ESC handler - Same as for gst-play - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8229> - -2025-01-05 12:12:54 +0100 Edward Hervey <edward@centricular.com> - - * tests/check/libs/play.c: - tests/play: Fix debug statement - The interval is in milliseconds, convert to nanoseconds for debugging statement - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8229> - -2025-01-05 12:12:21 +0100 Edward Hervey <edward@centricular.com> - - * tests/check/elements/webrtcbin.c: - tests/webrtcbin: Remove useless checks with unsigned values - Part-of: 
<https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8229> - -2025-01-05 12:10:59 +0100 Edward Hervey <edward@centricular.com> - - * ext/closedcaption/gstcccombiner.c: - * ext/closedcaption/gstcea608mux.c: - closedcaption: Use proper type for storing result - drop_ccp_from_cc_data() will return a negative value if there was an - error. Storing that in an unsigned value will cause the checks for errors to - never happen. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8229> - -2025-01-05 12:10:21 +0100 Edward Hervey <edward@centricular.com> - - * ext/closedcaption/gstcea708decoder.c: - cea708decoder: Remove useless checks - No need to check for the type limits - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8229> - -2025-01-05 12:09:28 +0100 Edward Hervey <edward@centricular.com> - - * ext/isac/gstisacdec.c: - isacdec: Remove impossible check - WebRtcIsac_DecodePlc() never returns a negative value (confirmed by - documentation and current/historical code) - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8229> - -2025-01-05 12:09:00 +0100 Edward Hervey <edward@centricular.com> - - * ext/sctp/gstsctpdec.c: - sctpdec: Remove useless check - A uint16 will always be below ... 
the maximum value - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8229> - -2025-01-05 11:47:11 +0100 Edward Hervey <edward@centricular.com> - - * gst-libs/gst/codecs/gstmpeg2decoder.c: - mpeg2decoder: Remove useless check - The enum is unsigned - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8229> - -2025-01-05 11:43:19 +0100 Edward Hervey <edward@centricular.com> - - * gst-libs/gst/vulkan/gstvkinstance.c: - vkinstance: Remove useless check - priv->requested_api_major is unsigned - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8229> - -2025-01-05 11:42:22 +0100 Edward Hervey <edward@centricular.com> - - * gst/mpegtsmux/gstbasetsmux.c: - basetsmux: Add explicit macro for GstClockTimeDiff handling - The checks in the other macro were useless for unsigned values - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8229> - -2025-01-05 11:41:54 +0100 Edward Hervey <edward@centricular.com> - - * gst/mxf/mxfdemux.c: - mxfdemux: Remove useless check - values will always be positive - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8229> - -2025-01-05 11:41:18 +0100 Edward Hervey <edward@centricular.com> - - * gst/speed/gstspeed.c: - speed: Refactor event handler - To avoid fallthrough issues which were tricky to fix - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8229> - -2025-01-04 18:00:41 +0100 Edward Hervey <edward@centricular.com> - - * ext/fdkaac/gstfdkaacdec.c: - * gst-libs/gst/audio/gstnonstreamaudiodecoder.c: - * gst-libs/gst/isoff/gstisoff.c: - * gst/aiff/aiffparse.c: - * gst/dvbsuboverlay/gstdvbsuboverlay.c: - * gst/mpegpsmux/psmuxstream.c: - * gst/mpegtsdemux/mpegtsbase.c: - * gst/mpegtsdemux/pesparse.c: - * gst/rtmp2/rtmp/amf.c: - * gst/rtmp2/rtmp/rtmpchunkstream.c: - * gst/videoparsers/gsth266parse.c: - * gst/videoparsers/gstmpeg4videoparse.c: - * 
gst/videoparsers/gstmpegvideoparse.c: - * sys/ipcpipeline/gstipcpipelinesrc.c: - * sys/msdk/gstmsdkenc.c: - * sys/msdk/gstmsdkh265enc.c: - * sys/va/gstvabasedec.c: - * sys/va/gstvabaseenc.c: - * tests/examples/audiomixmatrix/test-audiomixmatrix.c: - * tests/examples/waylandsink/wayland-threads.c: - bad: Clearly specify fallthrough in switch/case - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8229> - -2025-01-09 00:42:48 +0100 Carlos Bentzen <cadubentzen@igalia.com> - - * ext/webrtc/gstwebrtcbin.c: - * tests/check/elements/webrtcbin.c: - webrtc: fix duplicate payload types with RTX and multiple video codecs - Before this patch, there could be duplicate payload types in offers that - have, within a media section, multiple codecs and RTX enabled: - ``` - m=video 9 UDP/TLS/RTP/SAVPF 96 97 97 <-- HAS DUPLICATES - a=sendrecv - a=rtpmap:96 VP8/90000 - a=rtcp-fb:96 nack - a=rtcp-fb:96 nack pli - a=rtcp-fb:96 ccm fir - a=rtcp-fb:96 transport-cc - a=rtpmap:97 H264/90000 - a=rtcp-fb:97 nack - a=rtcp-fb:97 nack pli - a=rtcp-fb:97 ccm fir - a=rtcp-fb:97 transport-cc - a=rtpmap:97 rtx/90000 <--------- PT IS DUPLICATE - a=fmtp:97 apt=96 - ``` - Fix this by populating the media_mapping array with all media formats - rather than only the first one. The added test case reproduces the issue, - which fails without this patch. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8259> - -2025-01-09 11:39:11 +0100 Edward Hervey <edward@centricular.com> - - * sys/wasapi/gstwasapisink.c: - wasapi: Use signed value for can_frames - The function retrieving the available frame count can return negative values (which will be properly handled) - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8286> - -2025-01-09 10:08:23 +0100 Edward Hervey <edward@centricular.com> - - * sys/amfcodec/gstamfav1enc.cpp: - * sys/amfcodec/gstamfh264enc.cpp: - * sys/amfcodec/gstamfh265enc.cpp: - amfcodec: Add missing break statement - Setting frame-sad would also set ltr - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8286> - -2025-01-06 10:39:35 +0100 Edward Hervey <edward@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12device.cpp: - d3d12: Add missing breaks to switch/case - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8286> - -2025-01-06 09:57:33 +0100 Edward Hervey <edward@centricular.com> - - * gst-libs/gst/codecparsers/gsth265bitwriter.c: - h265bitwriter: Don't use a type that is too small - The computed `coef_val` could exceed the maximum range of a gint8. Use a bigger - one; the checks after will ensure it's properly cropped/padded - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8286> - -2025-01-05 12:08:24 +0100 Edward Hervey <edward@centricular.com> - - * gst-libs/gst/mpegts/gstmpegtssection.c: - mpegts: Add missing break - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8286> - -2025-01-05 11:46:21 +0100 Edward Hervey <edward@centricular.com> - - * gst-libs/gst/mse/gstmsemediatype.c: - msemediabuffer: Fix ASCII character detection - Use glib function. The previous check was checking whether a signed int was - lower than its limit (which ... is always TRUE).
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8286> - -2025-01-05 11:43:49 +0100 Edward Hervey <edward@centricular.com> - - * gst-libs/gst/mse/gstsourcebuffer.c: - msesourcebuffer: Fix unsigned value handling - Use the explicit valid clocktime handler instead - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8286> - -2025-01-05 11:41:44 +0100 Edward Hervey <edward@centricular.com> - - * gst/rist/gstristsink.c: - ristsink: Add missing break - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8286> - -2025-01-10 13:27:13 +0100 Ruben Gonzalez <rgonzalez@fluendo.com> - - * gst/videoparsers/gstvideoparseutils.c: - videoparsers: Fix indentation - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8281> - -2025-01-07 12:56:13 +0200 Sebastian Dröge <sebastian@centricular.com> - - * sys/decklink/gstdecklinkaudiosink.cpp: - decklinkaudiosink: Don't crash if started without corresponding video sink - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8253> - -2025-01-09 17:23:41 +0000 Colin Kinloch <colin.kinloch@collabora.com> - - * gst-libs/gst/wayland/gstwldisplay.c: - wayland: Print table split when DMABuf format changes - The `zwp_linux_dmabuf_v1` doesn't specify an order for modifier events - to be sent. - In my case the linear format was sent last resulting in the first item - in each row being the previous format. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8272> - -2024-12-25 15:04:03 +0100 Philippe Normand <philn@igalia.com> - - * ext/wpe/gstwpethreadedview.cpp: - * ext/wpe/gstwpethreadedview.h: - * ext/wpe/gstwpevideosrc.cpp: - wpevideosrc: Clear cached SHM buffers after caps re-negotiation - Otherwise buffers not corresponding to the negotiated caps might be pushed - downstream. 
- Fixes #4094 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8212> - -2024-12-27 13:28:18 +0100 Philippe Normand <philn@igalia.com> - - * ext/wpe/gstwpevideosrc.cpp: - wpevideosrc: Post progress messages on the bus - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8217> - -2024-12-25 14:42:16 +0100 Philippe Normand <philn@igalia.com> - - * ext/wpe/gstwpevideosrc.cpp: - wpevideosrc: Handle latency queries - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8217> - -2025-01-08 00:56:45 +0900 Seungha Yang <seungha@centricular.com> - - * docs/plugins/gst_plugins_cache.json: - * sys/d3d12/gstd3d12mipmapping.cpp: - d3d12mipmapping: Add mip-levels property - Generating full levels would result in waste of GPU resource - depending on rendering usecase. Adding a property to make it - controllable - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8255> - -2025-01-08 00:38:39 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12mipgen.cpp: - * sys/d3d12/gstd3d12mipmapping.cpp: - d3d12mipmapping: Add YUV and 64bits output formats - Add support for YUV and 64bits output formats to avoid - colorspace conversion and bitdepth loss - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8255> - -2025-01-06 15:16:02 -0600 Olivier Crête <olivier.crete@collabora.com> - - * ext/onnx/gstonnxclient.cpp: - * gst-libs/gst/analytics/gsttensor.c: - * gst-libs/gst/analytics/gsttensor.h: - analytics: Tensor dimensions are always row-major or col-major - Simplify by removing the extra fields, as this is what all - frameworks give us. 
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8250>

2024-12-27 20:55:56 +0100 Tim-Philipp Müller <tim@centricular.com>

  * ext/svtjpegxs/gstsvtjpegxsenc.c:
    svtjpegxsenc: add support for interlaced video
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8219>

2024-12-27 18:02:12 +0100 Tim-Philipp Müller <tim@centricular.com>

  * ext/svtjpegxs/gstsvtjpegxsenc.c:
    svtjpegxsenc: factor out encoding of codestream into separate function
    Prepare for interlacing support where an interlaced image is coded as
    two codestreams each representing a field.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8219>

2024-12-26 18:08:14 +0100 Tim-Philipp Müller <tim@centricular.com>

  * docs/plugins/gst_plugins_cache.json:
  * gst/mpegtsmux/gstbasetsmux.c:
  * gst/mpegtsmux/gstmpegtsmux.c:
    mpegtsmux: add support for interlaced JPEG XS
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8219>

2024-12-25 22:54:16 +0100 Tim-Philipp Müller <tim@centricular.com>

  * ext/svtjpegxs/gstsvtjpegxsdec.c:
    svtjpegxsdec: add support for interlaced video
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8219>

2024-12-25 22:40:06 +0100 Tim-Philipp Müller <tim@centricular.com>

  * ext/svtjpegxs/gstsvtjpegxsdec.c:
    svtjpegxsdec: drop frames that had decoding errors
    Follow-up to !8163
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8219>

2024-12-25 18:07:04 +0100 Tim-Philipp Müller <tim@centricular.com>

  * ext/svtjpegxs/gstsvtjpegxsdec.c:
    svtjpegxsdec: factor out decoding of codestream into separate function
    Prepare for interlacing support where an interlaced image is coded as
    two codestreams each representing a field.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8219>

2024-12-20 13:49:47 +0000 Tim-Philipp Müller <tim@centricular.com>

  * gst/mpegtsdemux/tsdemux.c:
    tsdemux: handle interlaced JPEG XS
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8219>

2024-12-10 15:09:24 +0100 Stéphane Cerveau <scerveau@igalia.com>

  * ext/vulkan/vkh265dec.c:
    vkh265dec: update only vps/sps on demand and pass pps always
    As the PPS can change over the stream, it should always be updated to
    avoid missing picture parameter sets.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8124>

2024-12-10 12:48:32 +0100 Stéphane Cerveau <scerveau@igalia.com>

  * ext/vulkan/vkh264dec.c:
    vkh264dec: update only sps on demand and pass pps always
    As the PPS can change over the stream, it should always be updated to
    avoid missing picture parameter sets.
    See CABA3_TOSHIBA_E.264 in fluster resources.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8124>

2024-12-30 21:59:03 +0100 Samuel Thibault <samuel.thibault@ens-lyon.org>

  * ext/gtk/meson.build:
  * tests/examples/gtk/meson.build:
  * tests/examples/waylandsink/meson.build:
    meson: Fix build with gtk3 but not wayland
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8222>

2025-01-04 20:30:01 +0000 Sam James <sam@gentoo.org>

  * meson_options.txt:
    codec2json: move option to right section
    It has an external dependency (json-glib) so should be under that
    heading.
    Fixes: fd588a50e415feb0ab21c4a3386bd426c8c9043b
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8238>

2025-01-04 19:52:48 +0000 Sam James <sam@gentoo.org>

  * meson_options.txt:
    analyticsoverlay: move option to right section
    It has an external dependency (pango/cairo) so should be under that
    heading. Also, fix an inconsistency with the ':' style.
    Fixes: 95464c89772e144088af54c1e8a4c1fecc45f09a
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8238>

2024-12-20 14:51:45 -0500 Daniel Morin <daniel.morin@collabora.com>

  * ext/onnx/gstonnxclient.cpp:
  * gst-libs/gst/analytics/gsttensor.c:
  * gst-libs/gst/analytics/gsttensor.h:
    analytics: remove batch-size
    Batch-size will be the outer-most dimension. Presence of a batch
    dimension can be identified using `dims` and `id`.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8191>

2025-01-01 00:43:41 +0900 Seungha Yang <seungha@centricular.com>

  * docs/plugins/gst_plugins_cache.json:
  * sys/d3d12/gstd3d12dxgicapture.cpp:
  * sys/d3d12/gstd3d12graphicscapture.cpp:
  * sys/d3d12/gstd3d12screencapture.cpp:
  * sys/d3d12/gstd3d12screencapture.h:
  * sys/d3d12/gstd3d12screencapturedevice.cpp:
  * sys/d3d12/gstd3d12screencapturesrc.cpp:
    d3d12screencapturesrc: Add support for HDR capture in DDA mode
    Use IDXGIOutput5::DuplicateOutput1() if HDR is enabled.
    Note that the scRGB color space is not defined in GStreamer; this
    element will output an SDR tonemapped frame with linear or reinhard
    filtering.
    Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3834
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8227>

2025-01-01 22:15:58 +0900 Seungha Yang <seungha@centricular.com>

  * gst-libs/gst/d3dshader/gstd3dshadercache.cpp:
  * gst-libs/gst/d3dshader/gstd3dshadercache.h:
  * gst-libs/gst/d3dshader/plugin-hlsl/PSMain_sample_scrgb.hlsl:
  * gst-libs/gst/d3dshader/plugin-hlsl/PSMain_sample_scrgb_tonemap.hlsl:
  * gst-libs/gst/d3dshader/plugin-hlsl/hlsl.h:
  * gst-libs/gst/d3dshader/plugin-hlsl/meson.build:
    d3dshader: Add sampling pixel shader for scRGB SRV
    Shaders required for HDR capturing.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8227>

2024-11-22 12:46:22 +0100 Albert Sjolund <alberts@axis.com>

  * docs/plugins/gst_plugins_cache.json:
  * ext/webrtc/gstwebrtcbin.c:
    webrtc: add new post-rtp-aux-sender signal
    Adds a new signal to webrtcbin, to allow for placement of an object
    after rtp, before sendbin. This is usable for objects such as
    congestion control elements, that don't want to be burdened by the
    synchronization requirements of rtpsession.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7940>

2024-12-26 01:33:37 +0100 Tim-Philipp Müller <tim@centricular.com>

  * gst/mpegtsdemux/tsdemux.c:
    tsdemux: fix JPEG XS framerate handling for 29.97fps
    .. and other framerate values with a 1.001 denominator.
    The coded framerate denominator value is a code that maps to either
    1 (for 1) or 1.001 (for 2), not a direct value. Before, 29.97fps
    would be announced as 15fps because it would calculate 30/2 instead
    of 30/1.001.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8226>

2024-12-30 22:53:02 +0100 Samuel Thibault <samuel.thibault@ens-lyon.org>

  * meson.build:
    meson: Also disable drm on GNU/Hurd
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8223>

2024-12-28 22:29:23 +0900 Seungha Yang <seungha@centricular.com>

  * docs/plugins/gst_plugins_cache.json:
    docs: Update qsv plugin docs
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8220>

2024-12-28 22:26:48 +0900 Seungha Yang <seungha@centricular.com>

  * docs/plugins/gst_plugins_cache.json:
    docs: Update d3d11 plugin docs
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8220>

2024-12-28 21:43:46 +0900 Seungha Yang <seungha@centricular.com>

  * docs/plugins/gst_plugins_cache.json:
    docs: Update d3d12 plugin docs
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8220>

2024-12-28 21:35:44 +0900 Seungha Yang <seungha@centricular.com>

  * sys/d3d12/gstd3d12compositor.cpp:
  * sys/d3d12/gstd3d12convert.cpp:
  * sys/d3d12/gstd3d12deinterlace.cpp:
  * sys/d3d12/gstd3d12memorycopy.cpp:
  * sys/d3d12/gstd3d12mipmapping.cpp:
  * sys/d3d12/gstd3d12screencapturesrc.cpp:
  * sys/d3d12/gstd3d12swapchainsink.cpp:
  * sys/d3d12/gstd3d12testsrc.cpp:
  * sys/d3d12/gstd3d12videosink.cpp:
  * sys/d3d12/gstd3d12window.cpp:
    d3d12: Add "Since" markers
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8220>

2024-12-28 21:06:24 +0900 Seungha Yang <seungha@centricular.com>

  * docs/plugins/gst_plugins_cache.json:
    docs: Add asio plugin docs
    Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3745
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8220>

2024-12-28 20:42:56 +0900 Seungha Yang <seungha@centricular.com>

  * sys/asio/gstasiosink.cpp:
  * sys/asio/gstasiosrc.cpp:
  * sys/asio/plugin.c:
    asio: Add "Since" markers and fix typos in property description
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8220>

2024-12-28 20:38:13 +0900 Seungha Yang <seungha@centricular.com>

  * docs/plugins/gst_plugins_cache.json:
    docs: Add webview2 plugin docs
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8220>

2024-12-28 13:26:18 +0200 Sebastian Dröge <sebastian@centricular.com>

  * gst-libs/gst/play/gstplay.c:
    play: Fix stream id leaks on initial stream selection
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7650>

2024-12-27 19:01:21 +0200 Sebastian Dröge <sebastian@centricular.com>

  * tests/check/meson.build:
    play: Actually check for valgrind for the tests
    Other tests in gst-plugins-bad also assumed it to be checked.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7650>

2024-12-27 13:31:09 +0200 Sebastian Dröge <sebastian@centricular.com>

  * tests/check/libs/play.c:
    play: Fix tests after the switch to playbin3
    And also fix various memory leaks and other issues that always
    existed in the tests.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7650>

2024-10-10 15:54:04 -0400 Sebastian Dröge <sebastian@centricular.com>

  * tests/check/libs/play.c:
  * tests/check/meson.build:
    play: Port tests to libsoup 3
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7650>

2024-10-10 15:08:33 -0400 Sebastian Dröge <sebastian@centricular.com>

  * gst-libs/gst/play/gstplay-media-info-private.h:
  * gst-libs/gst/play/gstplay-media-info.c:
  * gst-libs/gst/play/gstplay-media-info.h:
  * gst-libs/gst/play/gstplay.c:
  * gst-libs/gst/play/gstplay.h:
    play: Add stream-id based selection of streams to better match
    playbin3's API
    As part of this:
    - Add accessors for the stream ID and a selection API based on the
      stream ID
    - Deprecate the old index-based APIs
    - Remove playbin support
    - Implement the track enable API based on stream selection
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7648>

2024-12-23 15:17:57 +0100 Tim-Philipp Müller <tim@centricular.com>

  * ext/srtp/gstsrtpdec.c:
    srtpdec: fix build when libsrtp1 is being used
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8204>

2024-12-23 14:58:31 +0100 Philippe Normand <philn@igalia.com>

  * ext/wpe/wpe-extension/gstwpeextension.c:
  * ext/wpe/wpe-extension/meson.build:
    wpe: Fix build for version 2.44
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8203>

2024-12-22 15:00:07 +0100 Philippe Normand <philn@igalia.com>

  * ext/srtp/gstsrtpdec.c:
    srtpdec: Fix a use-after-free buffer issue
    The gst_srtp_dec_decode_buffer() function modifies the input buffer
    after making it writable, so the pointer might change as well,
    depending on the refcount of the buffer.
    This issue was detected using a netsim element upstream of the
    decoder in a WebRTC pipeline.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8198>

2020-08-10 14:27:29 +0900 Hosang Lee <hosang10.lee@lge.com>

  * ext/smoothstreaming/gstmssdemux.c:
    mssdemux: Use gsturi structure to form fragment urls
    We can use gst_uri_from_string_with_base () to join the base url and
    the fragment url path.
    The previous method of forming the base url in update_base_url(), by
    looking for the string 'manifest' or 'Manifest', is insufficient. A
    query may include these strings in its path and thus an invalid base
    url string will be kept.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8193>

2024-12-18 13:18:32 -0300 Thibault Saunier <tsaunier@igalia.com>

  * docs/meson.build:
    doc: Handle gst_dep.get_variable('libexecdir') failure
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8178>

2024-12-18 12:27:30 -0300 Thibault Saunier <tsaunier@igalia.com>

  * docs/meson.build:
    doc: Allow updating the plugins cache for all modules even if hotdoc
    is not present
    This was possible for some modules but not all, for no good reason.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8178>

2024-12-20 12:52:31 +0100 Robert Mader <robert.mader@collabora.com>

  * sys/v4l2codecs/gstv4l2format.c:
    v4l2codecs: decoder: Fix drm format query
    A late change that slipped through, as it mainly affects NC12 at the
    moment.
    Fixes: 4b07d54931 ("v4l2codecs: decoder: Translate V4L2 formats into DRM fourcc/mod pairs")
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8188>

2023-04-18 11:37:25 +0200 Edward Hervey <edward@centricular.com>

  * docs/plugins/gst_plugins_cache.json:
  * gst/mpegtsdemux/gstmpegdesc.h:
  * gst/mpegtsdemux/tsdemux.c:
  * gst/mpegtsmux/gstbasetsmux.c:
  * gst/mpegtsmux/gstmpegtsmux.c:
  * gst/mpegtsmux/tsmux/tsmuxstream.c:
  * gst/mpegtsmux/tsmux/tsmuxstream.h:
    mpegts: Add provisional AV1 mapping
    The main difference with the WIP av1-in-mpegts mapping is that the
    payload data is not startcode-escaped. Most of the rest is sensible
    usage of it:
    * Custom AV1G (AV1 GStreamer) registration descriptor instead of AV01
    * AV1CodecConfigurationRecord is stored in the same 0x80 custom
      descriptor and conforms fully to the isobmff spec (i.e. does not
      use the HDR fields from the provisional mpegts specification, which
      conflict with that one)
    * Data is stored as OBU
    * Access Unit is the frame level (same as the provisional mpegts
      mapping)
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4442>

2024-11-18 12:31:21 +0100 Stéphane Cerveau <scerveau@igalia.com>

  * ext/dash/gstxmlhelper.c:
  * tests/check/elements/dash_mpd.c:
    dash: handle 0 duration in gst_xml_helper_set_prop_duration
    Add a dash_mpdparser_check_mpd_client_set_period_to_0 unit test to
    demonstrate it.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8114>

2024-12-02 11:39:11 -0500 Nicolas Dufresne <nicolas.dufresne@collabora.com>

  * gst/videoparsers/gstav1parse.c:
    av1parse: Fix some debug trace and comment typos
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8026>

2024-12-02 11:32:13 -0500 Nicolas Dufresne <nicolas.dufresne@collabora.com>

  * gst/videoparsers/gstav1parse.c:
    av1parse: Don't immediately reset timestamp in presence of TD
    When a TD is being processed, it is not always pushed immediately.
    Resetting the time information leads to loss of timestamps in TU to
    Frame conversion. The TU would be formed by buffers of TD|Frame, and
    the timestamp taken from the TU buffer was lost when the TD was
    handled.
    The handling of TS should be entirely done by the 3 functions:
    - gst_av1_parse_handle_obu_to_obu() (direct input to output)
    - gst_av1_parse_handle_to_big_align(): reset DTS on detected TU or TD
    - gst_av1_parse_handle_to_small_and_equal_align(): PTS on show frame,
      flat DTS
    Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/commit/79312357a6ab8ebc4cfc1ed2243bdbc0660c39d5
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8026>

2024-12-20 08:18:15 +0800 He Junyan <junyan.he@intel.com>

  * gst/videoparsers/gstav1parse.c:
    av1parse: Fix a typo in the comments about its usage
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5710>

2024-12-04 23:27:37 +0800 He Junyan <junyan.he@intel.com>

  * docs/plugins/gst_plugins_cache.json:
    Doc: Update the plugin document for h266parse
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5710>

2024-12-13 00:00:01 +0800 He Junyan <junyan.he@intel.com>

  * tests/check/elements/h266parse.c:
  * tests/check/meson.build:
    test: Add the h266parse element test
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5710>

2023-09-09 01:10:18 +0800 He Junyan <junyan.he@intel.com>

  * gst/videoparsers/gsth266parse.c:
  * gst/videoparsers/gsth266parse.h:
  * gst/videoparsers/gstvideoparserselements.h:
  * gst/videoparsers/meson.build:
  * gst/videoparsers/plugin.c:
    h266parse: Add the new h266parse element
    TODO: Need to refer to the new ISO/IEC 14496-15 for vvc1 and vvi1's
    codec data
    Co-authored-by: Zhong Hongcheng <spartazhc@gmail.com>
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5710>

2024-12-17 00:12:51 +0800 He Junyan <junyan.he@intel.com>

  * gst-libs/gst/codecparsers/gsth266parser.c:
  * gst-libs/gst/codecparsers/gsth266parser.h:
    libs: codecparsers: Add the missing ilrp_idx field in H266's ref list
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5710>

2024-12-17 00:15:07 +0800 He Junyan <junyan.he@intel.com>

  * gst-libs/gst/codecparsers/gsth266parser.h:
    libs: codecparsers: H266 GstH266RefPicListStruct's abs_delta_poc_st
    should be 16 bits
    Its value range is 0~(2^15 − 1) according to the spec.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5710>

2024-12-16 11:39:10 +0200 Sebastian Dröge <sebastian@centricular.com>

  * gst/videoparsers/gstvp9parse.c:
    vp9parse: Add video codec tag to the tag list
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8156>

2024-12-16 11:38:52 +0200 Sebastian Dröge <sebastian@centricular.com>

  * gst/videoparsers/gstav1parse.c:
    av1parse: Add video codec tag to the tag list
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8156>

2024-11-08 12:38:09 +0100 Robert Mader <robert.mader@collabora.com>

  * sys/v4l2codecs/gstv4l2format.c:
    v4l2codecs: format: Add V4L2_PIX_FMT_NC12
    Which is used by the Raspberry Pi 4 and 5 for 8-bit HEVC. Adding it
    here in order to show-case how the V4L2<->DRM translation is supposed
    to work.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7355>

2024-08-14 02:11:06 +0200 Robert Mader <robert.mader@collabora.com>

  * sys/v4l2codecs/gstv4l2codecpool.c:
  * sys/v4l2codecs/gstv4l2decoder.c:
  * sys/v4l2codecs/gstv4l2format.c:
  * sys/v4l2codecs/gstv4l2format.h:
  * sys/v4l2codecs/linux/drm_fourcc.h:
    v4l2codecs: decoder: Translate V4L2 formats into DRM fourcc/mod pairs
    V4L2 and DRM choose different, incompatible ways to represent
    tiled/compressed etc. formats. While the latter uses combinations of
    format fourccs and opaque, vendor/hardware-specific modifiers, for
    the former every such combination is a distinct new format.
    Traditionally Gst implemented each of the V4L2 formats if needed.
    Given the large number of tiling and compression modes, this is quite
    work intensive - and often actually not needed.
    In many situations Gst just needs to pass buffers from V4L2 to DRM in
    the form of EGL, VK, Wayland or KMS.
    Thus implement a direct translation of some V4L2 formats to DRM ones,
    limited to the DMA_DRM API, allowing much quicker enablement of
    formats while requiring peers to use external implementations
    (usually Mesa or KMS) for tiling etc.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7355>

2024-12-12 14:41:08 +0100 Stéphane Cerveau <scerveau@igalia.com>

  * gst-libs/gst/vulkan/gstvkencoder-private.c:
    vkencoder: init debug category earlier
    The encoder has not been created if the codec is not supported by the
    hardware, so the GST_WARNING_OBJECT will fail to find a suitable
    category.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8138>

2024-12-12 14:40:55 +0100 Stéphane Cerveau <scerveau@igalia.com>

  * gst-libs/gst/vulkan/gstvkdecoder-private.c:
    vkdecoder: init debug category earlier
    The decoder has not been created if the codec is not supported by the
    hardware, so the GST_WARNING_OBJECT will fail to find a suitable
    category.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8138>

2024-12-16 19:01:15 +0000 Tim-Philipp Müller <tim@centricular.com>

  * ext/svtjpegxs/gstsvtjpegxsdec.c:
    svtjpegxsdec: handle decode errors more gracefully
    Use GST_VIDEO_DECODER_ERROR instead of just erroring out
    unconditionally, so that the error handling behaviour is determined
    by the "max-errors" property and we'll just continue after decoding
    errors now instead of erroring out.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8163>

2024-12-16 17:32:20 +1100 Matthew Waters <matthew@centricular.com>

  * gst-libs/gst/cuda/gstcudacontext.cpp:
    cuda/context: add gpu stack size property
    Allows reducing the initial stack size of GPU threads. CUDA should
    automatically increase this value if a kernel requires a larger
    stack. Can save roughly 40MB of GPU memory for a single nvh264enc
    instance.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8158>

2024-12-16 17:31:17 +1100 Matthew Waters <matthew@centricular.com>

  * gst-libs/gst/cuda/cuda-gst.h:
  * gst-libs/gst/cuda/gstcudaloader.cpp:
  * gst-libs/gst/cuda/stub/cuda.h:
    cuda: add CuGet/SetCtxLimit()
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8158>

2024-12-18 13:35:53 -0500 Nicolas Dufresne <nicolas.dufresne@collabora.com>

  * gst-libs/gst/va/gstvadisplay.c:
    va: display: Optimize out some property indirection
    Because it was visible during some profiling, I thought it cost
    nothing to optimize out the unneeded property get roundtrip.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8179>

2024-12-17 17:36:19 +0100 Guillaume Desmottes <guillaume.desmottes@onestream.live>

  * ext/wpe/wpe-extension/gstwpeextension.c:
    wpe: enable console message with WPE2
    Looks like the WebKitConsoleMessage API is now available in WPE2 as
    well:
    https://webkitgtk.org/reference/webkitgtk-web-process-extension/stable/signal.WebPage.console-message-sent.html
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8169>

2024-12-16 21:41:55 -0500 Nicolas Dufresne <nicolas.dufresne@collabora.com>

  * sys/va/gstvah264dec.c:
    va: h264dec: Allow "extended" profile decoding
    Extended is identical to main but allows the FMO/ASO features to be
    used, and prevents using CABAC. Using similar logic to "baseline",
    assume that if we support main, we can also do extended.
    This fixes the following fluster vectors, which otherwise would fail
    when trying to link the parsebin pad.
    - BA3_SVA_C
    - MR6_BT_B
    - MR7_BT_B
    - MR8_BT_B
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8164>

2024-11-18 07:54:55 +0100 Emil Ljungdahl <emillj@axis.com>

  * ext/webrtc/gstwebrtcbin.c:
    webrtcbin: Tear down src and sink bins before removing them from webrtc
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7900>

2024-11-15 15:00:00 +0100 Emil Ljungdahl <emillj@axis.com>

  * ext/webrtc/gstwebrtcbin.c:
    webrtcbin: Fix potential deadlock on bin elements cleanup
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7900>

2024-12-10 13:12:18 +0530 Nirbheek Chauhan <nirbheek@centricular.com>

  * docs/plugins/gst_plugins_cache.json:
  * sys/applemedia/vtenc.c:
  * sys/applemedia/vtenc.h:
    vtenc: Fix authors of encoder features
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8120>

2024-12-10 00:05:53 +0530 Nirbheek Chauhan <nirbheek@centricular.com>

  * docs/plugins/gst_plugins_cache.json:
  * sys/applemedia/vtenc.c:
  * sys/applemedia/vtenc.h:
    vtenc: Fix class hierarchy in an attempt to fix property docs
    Also fix some convention-nits in the process.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8120>

2024-12-09 15:12:57 +0530 Nirbheek Chauhan <nirbheek@centricular.com>

  * sys/applemedia/vtenc.c:
    vtenc, osxaudio: Fix missing since markers
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8120>

2024-12-09 15:12:57 +0530 Nirbheek Chauhan <nirbheek@centricular.com>

  * docs/plugins/gst_plugins_cache.json:
  * sys/applemedia/vtenc.c:
    vtenc: Mark rate-control enum as plugin API, and update cache
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8120>

2024-12-09 15:12:06 +0530 Nirbheek Chauhan <nirbheek@centricular.com>

  * sys/applemedia/avfvideosrc.m:
    avfvideosrc: Add missing since markers for screen-crop properties
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8120>

2024-12-05 00:32:37 +0530 Nirbheek Chauhan <nirbheek@centricular.com>

    docs: Update macOS plugin docs again
    Contains the following updates:
    * New properties on avfvideosrc: screen-crop-*
    * H265 and H265 Alpha support in vtdec and vtenc (VideoToolbox)
    * ProRes support in vtenc
    * New properties on vtenc elements: rate-control, data-rate-limits,
      max-frame-delay
    * New plugin atenc (AudioToolbox) with support for encoding AAC
    * Plugin move: atdec moved from -bad to -good
    * New property on osxaudio elements: unique-id
    * OS X -> macOS
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8120>

2024-12-06 19:13:50 +0100 Mathieu Duponchelle <mathieu@centricular.com>

  * docs/plugins/gst_plugins_cache.json:
  * ext/closedcaption/gstcccombiner.c:
  * ext/closedcaption/gstcccombiner.h:
    cccombiner: expose new input-meta-processing type, force
    In force mode, generated captions are discarded even if input video
    buffers do not hold CC meta.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8091>

2024-12-02 17:12:00 +0530 Nirbheek Chauhan <nirbheek@centricular.com>

  * gst-libs/gst/cuda/meson.build:
    meson: Improve NVMM CUDA detection
    1. Add some comments explaining what headers and libs are expected on
       what systems
    2. Only look in default incdirs if no incdir is specified
    3. Require libnvbufsurface.so on Jetson when cuda-nvmm=enabled
    4. Require libatomic on Jetson when cuda-nvmm=enabled
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8021>

2024-12-16 00:22:47 +0900 Seungha Yang <seungha@centricular.com>

  * gst-libs/gst/d3d12/gstd3d12descheappool.cpp:
    d3d12: Suppress misleading leak report
    Set the may-be-leaked flag on child objects if needed, since the
    parent object holding a refcount on the child will be leaked
    intentionally.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8152>

2024-04-07 19:23:52 +0900 Seungha Yang <seungha@centricular.com>

  * gst-libs/gst/d3d12/gstd3d12device-private.h:
  * gst-libs/gst/d3d12/gstd3d12device.cpp:
  * sys/d3d12/gstd3d12deinterlace.cpp:
  * sys/d3d12/gstd3d12deinterlace.h:
  * sys/d3d12/gstd3d12yadif.cpp:
  * sys/d3d12/gstd3d12yadif.h:
  * sys/d3d12/meson.build:
  * sys/d3d12/plugin.cpp:
    d3d12: Add d3d12deinterlace element
    Adding a D3D12 compute shader based deinterlace element with YADIF
    filtering.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8140>

2024-10-09 10:02:09 -0400 Seungha Yang <seungha@centricular.com>

  * gst-libs/gst/d3dshader/gstd3dshadercache.cpp:
  * gst-libs/gst/d3dshader/gstd3dshadercache.h:
  * gst-libs/gst/d3dshader/plugin-hlsl/CSMain_yadif_1.hlsl:
  * gst-libs/gst/d3dshader/plugin-hlsl/CSMain_yadif_1_10.hlsl:
  * gst-libs/gst/d3dshader/plugin-hlsl/CSMain_yadif_1_12.hlsl:
  * gst-libs/gst/d3dshader/plugin-hlsl/CSMain_yadif_2.hlsl:
  * gst-libs/gst/d3dshader/plugin-hlsl/CSMain_yadif_4.hlsl:
  * gst-libs/gst/d3dshader/plugin-hlsl/hlsl.h:
  * gst-libs/gst/d3dshader/plugin-hlsl/meson.build:
    d3dshader: Add YADIF deinterlacing compute shader code
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8140>

2024-12-11 11:53:47 +0100 Oskar Fiedot <oskar.fiedot@intel.com>

  * gst-libs/gst/analytics/gstanalyticsobjectdetectionmtd.c:
  * gst-libs/gst/analytics/gstanalyticsobjectdetectionmtd.h:
  * gst-libs/gst/analytics/meson.build:
  * tests/check/libs/analyticsmeta.c:
    analytics: add rotation to object detection mtd
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7938>

2024-12-10 16:15:09 +0000 Philippe Normand <philn@igalia.com>

  * ext/webrtc/gstwebrtcstats.c:
    webrtc: Simplify fmtp handling in codec stats
    Parsing the whole caps as SDP media only to retrieve the fmtp field
    afterwards seems a bit superfluous. By looking up the a-fmtp
    attribute directly, the number of allocations in this function goes
    down a bit.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8125>

2024-12-10 12:52:33 +0000 Tim-Philipp Müller <tim@centricular.com>

  * meson.build:
    meson: unset GST_TRACERS for g-ir-scanner to avoid warnings
    People might have GST_TRACERS=leaks set in their environment by
    default, which will now trigger criticals during the build when
    calling g-ir-scanner, because we unset GST_PLUGIN_SYSTEM_PATH so that
    the scanner doesn't load any plugins.
    Fixes #4093
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8121>

2024-12-10 13:42:41 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com>

  * gst-libs/gst/vulkan/gstvkoperation.c:
    vkoperation: enable inline query only if it's a video operation
    This commit enables the usage of inline queries if, and only if, the
    provided pNext structure, in gst_vulkan_operation_enable_query(),
    chains a VK_STRUCTURE_TYPE_VIDEO_PROFILE_INFO_KHR typed structure.
    Also it guards the "gstvkvideo-private.h" include.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8112>

2024-12-09 14:45:01 +0100 Stéphane Cerveau <scerveau@igalia.com>

  * gst-libs/gst/vulkan/gstvkoperation.c:
  * gst-libs/gst/vulkan/gstvkvideo-private.c:
  * gst-libs/gst/vulkan/gstvkvideo-private.h:
    vkvideo: add video_maintenance1 check
    Add gst_vulkan_video_maintenance1_supported to check if the video
    session needs VK_VIDEO_SESSION_CREATE_INLINE_QUERIES_BIT_KHR.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8112>

2024-12-06 15:15:45 +0100 Armin Begovic <armin.begovic@hotmail.com>

  * docs/plugins/gst_plugins_cache.json:
  * sys/decklink/gstdecklink.cpp:
    decklink: Add missing video modes to gst_decklink_mode_get_type()
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8110>

2024-12-06 15:15:34 +0100 Armin Begovic <armin.begovic@hotmail.com>

  * sys/decklink/gstdecklink.cpp:
    decklink: Fix copy-paste errors regarding 8K modes
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8110>

2024-12-09 12:02:01 -0300 Thibault Saunier <tsaunier@igalia.com>

  * docs/meson.build:
    docs: Do not try to generate cuda documentation when gir is not
    generated
    On macOS it is not.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8086>

2024-12-06 00:04:45 -0500 Daniel Morin <daniel.morin@collabora.com>

  * gst-libs/gst/analytics/gstanalyticsmeta.c:
  * gst-libs/gst/analytics/gstanalyticsmeta.h:
    analytics: add _N_TO_N relation type
    This relation type defines relations between the components of two
    groups: the first component of the first group relates to the first
    component of the second group, the second component of the first
    group relates to the second component of the second group, and so on.
    It's a denser way to express relations in this context.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8087>

2024-11-19 10:30:09 +0100 Peter Stensson <petest@axis.com>

  * ext/curl/gstcurlhttpsink.c:
    curlhttpsink: Set auth any for http_proxy and https_proxy
    There was different behaviour if the proxy was configured through
    properties or environment. For properties libcurl would be configured
    with any auth, but for environment libcurl would default to using
    basic. Now any auth is set for both configuration methods.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7935>

2024-11-19 07:31:20 +0100 Peter Stensson <petest@axis.com>

  * ext/curl/gstcurlhttpsink.c:
  * ext/curl/gstcurlhttpsink.h:
    curlhttpsink: Don't set Content-Length to 0 for proxy
    The Content-Length header would unconditionally be included when the
    proxy property was set. This would result in requests with both
    Content-Length and Transfer-Encoding headers. Now we rely on the
    use-content-length property in the proxy case as well. This also
    makes sure that Content-Type is set correctly, since before it would
    be skipped if a proxy was used.
    Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7935>

2024-12-07 00:23:33 -0500 Daniel Morin <daniel.morin@collabora.com>

  * ext/onnx/gstonnxclient.cpp:
    onnx: disable onnxruntime telemetry
    Disable the telemetry feature on onnxruntime.
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8092> - -2024-12-05 13:39:38 +0100 Francisco Javier Velázquez-García <francisco.velazquez@appear.net> - - * ext/zxing/gstzxing.cpp: - zxing: Update decode hints usage for compatibility with ZXing >= 2.2 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7879> - -2024-12-04 14:33:07 -0500 Aaron Boxer <aaron.boxer@collabora.com> - - * gst/videoparsers/gsth265parse.c: - h265parse: reset nalparser to NULL after it is freed - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8074> - -2024-08-01 14:54:11 +0000 sachin gadag <sggadag@amazon.com> - - * gst/videoparsers/gsth264parse.c: - h264parse: set nalparser to NULL after it is freed - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8074> - -2024-12-05 06:41:35 -0500 Aaron Boxer <aaron.boxer@collabora.com> - - * gst-libs/gst/codecparsers/gsth265parser.c: - h265parse: remove useless NULL setting in gst_h265_parser_free - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8082> - -2024-12-05 06:39:06 -0500 Aaron Boxer <aaron.boxer@collabora.com> - - * gst-libs/gst/codecparsers/gsth264parser.c: - h264parse: remove useless NULL setting in gst_h264_nal_parser_free - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8082> - -2024-10-16 15:56:40 +0900 Aniket Hande <ahande@ftilab.com> - - * gst/mpegtsdemux/mpegtspacketizer.c: - * gst/mpegtsdemux/mpegtspacketizer.h: - * gst/mpegtsdemux/mpegtsparse.c: - tsparse: Extract and fill m2ts header for each packet - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7625> - -2024-12-02 16:42:06 +0100 Stéphane Cerveau <scerveau@igalia.com> - - * gst-libs/gst/vulkan/gstvkdecoder-private.c: - * gst-libs/gst/vulkan/gstvkencoder-private.c: - * gst-libs/gst/vulkan/gstvkoperation.c: - * gst-libs/gst/vulkan/gstvkoperation.h: - vkoperation: use inline 
query with video maintenance1 - When video_maintenance1 is supported, - gst_vulkan_operation_begin_query will now use - the inline query mechanism instead of the vkCmdBeginQuery - API. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7995> - -2024-11-28 15:49:14 +0100 Stéphane Cerveau <scerveau@igalia.com> - - * gst-libs/gst/vulkan/gstvkdevice.c: - vkdevice: enable VK_KHR_VIDEO_MAINTENANCE_1_EXTENSION_NAME - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7995> - -2024-07-29 13:49:05 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkphysicaldevice.c: - vkphysicaldevice: dump if video maintenance1 is supported - Dump if the VK_KHR_video_maintenance1 feature is supported by the driver. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7995> - -2024-11-28 23:17:40 +0100 Robert Mader <robert.mader@collabora.com> - - * sys/v4l2codecs/gstv4l2codecav1dec.c: - * sys/v4l2codecs/gstv4l2codech264dec.c: - * sys/v4l2codecs/gstv4l2codech265dec.c: - * sys/v4l2codecs/gstv4l2codecmpeg2dec.c: - * sys/v4l2codecs/gstv4l2codecpool.c: - * sys/v4l2codecs/gstv4l2codecpool.h: - * sys/v4l2codecs/gstv4l2codecvp8dec.c: - * sys/v4l2codecs/gstv4l2codecvp9dec.c: - * sys/v4l2codecs/gstv4l2decoder.c: - * sys/v4l2codecs/gstv4l2decoder.h: - v4l2codecs: Use GstVideoInfoDmaDrm more consistently - This avoids some duplication and makes the DRM info available in - more places, which will help with future changes. - Also fix some error messages while at it. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8028> - -2024-12-03 13:14:33 +0200 Sebastian Dröge <sebastian@centricular.com> - - * gst-libs/gst/codecparsers/gsth264parser.c: - h264parse: Free SEI if parsing succeeds but alignment afterwards fails - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8056> - -2024-12-03 13:10:04 +0200 Sebastian Dröge <sebastian@centricular.com> - - * gst-libs/gst/codecparsers/gsth265parser.c: - h265parse: Free SEI if parsing succeeds but alignment afterwards fails - Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4076 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8056> - -2024-11-29 14:41:12 +0100 Jan Alexander Steffens (heftig) <heftig@archlinux.org> - - * ext/neon/meson.build: - meson: Drop max version bound from neon - Neon 0.34.0 broke the build again, but the API+ABI has been stable since - 0.27 and the library is so-versioned. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8008> - -2024-12-03 14:44:30 +0100 Guillaume Desmottes <guillaume.desmottes@onestream.live> - - * gst-libs/gst/codecparsers/gsth265parser.c: - * gst/videoparsers/gsth265parse.c: - h265parse: parse unregistered SEI without user data - Same change as in h264parse. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7931> - -2024-11-20 14:16:23 +0100 Guillaume Desmottes <guillaume.desmottes@onestream.live> - - * gst-libs/gst/codecparsers/gsth264parser.c: - * gst/videoparsers/gsth264parse.c: - * tests/check/elements/h264parse.c: - h264parse: parse unregistered SEI without user data - We get loads of warnings when parsing videos from users: - gsth264parser.c:1115:gst_h264_parser_parse_user_data_unregistered: No more remaining payload data to store - gsth264parse.c:646:gst_h264_parse_process_sei:<h264parse0> failed to parse one or more SEI message - Those are raised because of unregistered SEI without user data. - The spec does not explicitly state that unregistered SEI needs to have - data and I suppose the UUID by itself can carry valuable information. - FFmpeg also parses and exposes such SEI so there is no reason for us not - to as well. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7931> - -2024-11-15 16:44:10 +0200 Sebastian Dröge <sebastian@centricular.com> - - * gst/mpegtsdemux/mpegtsbase.c: - tsdemux: Lower a GST_FIXME to a GST_DEBUG - This is not really a refcounting issue and can happen if a new program is in the - process of being activated that contains streams with the same PIDs. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7898> - -2024-08-20 20:43:42 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - * gst-libs/gst/vulkan/gstvkencoder-private.h: - * tests/check/libs/vkvideoencodeh264.c: - * tests/check/libs/vkvideoencodeh265.c: - vkencoder-private: add gst_vulkan_encoder_is_started() - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007> - -2024-11-15 12:41:15 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - * gst-libs/gst/vulkan/gstvkencoder-private.h: - vkencoder-private: add again GST_TYPE_VULKAN_ENCODER_RATE_CONTROL_MODE - It was already part of the old rate control mechanism but it had the wrong - namespace. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007> - -2024-11-08 18:05:55 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - * gst-libs/gst/vulkan/gstvkencoder-private.h: - * tests/check/libs/vkvideoencodeh264.c: - * tests/check/libs/vkvideoencodeh265.c: - vkencoder-private: implement callback to chain control rate structures - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007> - -2024-11-08 11:44:40 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - * gst-libs/gst/vulkan/gstvkencoder-private.h: - * tests/check/libs/vkvideoencodeh264.c: - * tests/check/libs/vkvideoencodeh265.c: - vkencoder-private: implement callback to chain codec specific structures - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007> - -2024-09-19 16:43:09 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - * gst-libs/gst/vulkan/gstvkencoder-private.h: - * gst-libs/gst/vulkan/gstvkvideo-private.h: - * 
tests/check/libs/vkvideoencodeh264.c: - * tests/check/libs/vkvideoencodeh265.c: - vkencoder-private: handle quality level - It creates a new structure for passing the codec quality structure at _start(), - where it will be filled. The quality level can be set or changed according to - encoder limits. - Later the quality level will be set at _update_session_parameters() and at each - frame encoding. That's why it has to be set at _start(). - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007> - -2024-09-19 15:04:09 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - vkencoder-private: rename first_encode_cmd to session_reset - Since it better reflects when it needs to be used: to reset the current - session. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007> - -2024-09-19 12:47:17 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - vkencoder-private: shuffle down VkVideoBeginCodingInfoKHR initialization - to make it more cohesive - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007> - -2024-09-05 14:52:31 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - * gst-libs/gst/vulkan/gstvkencoder-private.h: - * tests/check/libs/vkvideoencodeh264.c: - * tests/check/libs/vkvideoencodeh265.c: - vkencoder-private: remove rate control handling - It will be reintroduced later with a different approach. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007> - -2024-09-19 13:28:15 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - * gst-libs/gst/vulkan/gstvkencoder-private.h: - * tests/check/libs/vkvideoencodeh264.c: - * tests/check/libs/vkvideoencodeh265.c: - vkencoder-private: enhance algorithm to get the slot index - The algorithm for generating the current slot index is a simple round robin; - nonetheless, it's not guaranteed that the next slot index isn't still used by a - still-living encode picture. - This new way holds an array with the still-living encode pictures, and the next - slot index is found by looking for a released index in the array. - Its downside is that a deallocated picture needs to be removed from the array, so the - helper has to be passed to the uninit() function. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007> - -2024-09-19 12:21:04 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - * gst-libs/gst/vulkan/gstvkencoder-private.h: - * tests/check/libs/vkvideoencodeh264.c: - * tests/check/libs/vkvideoencodeh265.c: - vkencoder-private: add VkVideoReferenceSlotInfoKHR in GstVulkanEncoderPicture - And remove slotIndex since it's part of VkVideoReferenceSlotInfoKHR, simplifying - the reference slots array creation, and changing the tests accordingly. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007> - -2024-12-03 15:39:47 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - vkencoder-private: check for codec buffer - gst_vulkan_video_codec_buffer_new() can return NULL, so it's required to check - the returned value and bail out if needed. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007> - -2024-09-06 11:23:40 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - * gst-libs/gst/vulkan/gstvkencoder-private.h: - * tests/check/libs/vkvideoencodeh264.c: - * tests/check/libs/vkvideoencodeh265.c: - vkencoder-private: avoid GstVulkanEncoderPicture allocation - By using it as part of the encoder picture structure, which has to be initialized - and uninitialized. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007> - -2024-09-04 14:14:04 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - * gst-libs/gst/vulkan/gstvkencoder-private.h: - * tests/check/libs/vkvideoencodeh264.c: - * tests/check/libs/vkvideoencodeh265.c: - vkencoder-private: remove width, height and fps from GstVulkanEncoderPicture - In GStreamer that buffer information is decoupled, holding other structures to - describe the stream: GstCaps. So, to keep the GStreamer design, this patch - removes this information from GstVulkanEncoderPicture and passes to - gst_vulkan_encoder_encode() a pointer to GstVideoInfo. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007> - -2024-11-26 20:10:15 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - * gst-libs/gst/vulkan/gstvkencoder-private.h: - vkencoder-private: replace packed headers with offset handling - Instead of holding all headers in an external array and adding them into the - bitstream buffer before the encoding operation, which costs extra memory and extra - copy operations, the encoder picture should specify the offset where Vulkan - will start to add the bitstream slices/frame, because the element has already written - the headers up to that offset. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007> - -2024-09-04 13:17:01 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - * gst-libs/gst/vulkan/gstvkencoder-private.h: - * tests/check/libs/vkvideoencodeh264.c: - * tests/check/libs/vkvideoencodeh265.c: - vkencoder-private: remove nb_refs from GstVulkanEncoderPicture - That's the number of references that gst_vulkan_encoder_encode() receives to - process, so it has to go as a parameter, because it's part of the reference - list, not of the picture. - This commit also modified unit tests accordingly. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007> - -2024-09-04 12:31:25 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - * gst-libs/gst/vulkan/gstvkencoder-private.h: - * tests/check/libs/vkvideoencodeh264.c: - * tests/check/libs/vkvideoencodeh265.c: - vkencoder-private: remove pic_num and pic_order_cnt from GstVulkanEncoderPicture - Since they aren't semantically part of the codec-independent encoding operation. - And modify unit tests accordingly. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007> - -2024-09-03 21:36:56 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - * gst-libs/gst/vulkan/gstvkencoder-private.h: - * tests/check/libs/vkvideoencodeh264.c: - * tests/check/libs/vkvideoencodeh265.c: - vkencoder-private: remove is_ref member from GstVulkanEncoderPicture - It's not used. Modified the unit test accordingly. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007> - -2024-08-12 17:31:14 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - * gst-libs/gst/vulkan/gstvkencoder-private.h: - * tests/check/libs/vkvideoencodeh264.c: - * tests/check/libs/vkvideoencodeh265.c: - vkencoder-private: don't store output data size - There's no need to store the output data size in the encoder helper; that's the - responsibility of the caller when an output buffer is allocated. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007> - -2024-08-22 10:51:52 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - vkencoder-private: enhance capabilities logging - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007> - -2024-08-19 17:52:10 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkdecoder-private.c: - * gst-libs/gst/vulkan/gstvkencoder-private.c: - * gst-libs/gst/vulkan/gstvkvideoutils.h: - * tests/check/libs/vkvideoencodeh264.c: - * tests/check/libs/vkvideoencodeh265.c: - vulkan: store in GstVulkanVideoCapabilities encoder and decoder caps - The structure already stored the generic video capabilities and the specific - codec capabilities both for encoding and decoding. The generic decoder - capabilities weren't stored because they were only used internally in the decoder - helper object. Nonetheless, for the encoder, the elements will need the generic - encoder capabilities to configure the encoding. That's why it's required to - expose them as part of GstVulkanVideoCapabilities. And the generic decoder - capabilities are included for the sake of symmetry. - While updating the API, the vkvideoencodeh265 test got some code-style fixes. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007> - -2024-11-27 10:51:38 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - * gst-libs/gst/vulkan/gstvkencoder-private.h: - * tests/check/libs/vkvideoencodeh264.c: - * tests/check/libs/vkvideoencodeh265.c: - vkencoder-private: rename GstVulkanEncoderPicture - GstVulkanEncodePicture breaks the namespace. This commit fixes it by renaming it - to GstVulkanEncoderPicture, along with the new() and free() function signatures. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007> - -2024-08-13 01:03:56 +0900 Seungha Yang <seungha@centricular.com> - - * ext/meson.build: - * ext/nvdswrapper/gstnvdsdewarp.cpp: - * ext/nvdswrapper/gstnvdsdewarp.h: - * ext/nvdswrapper/meson.build: - * ext/nvdswrapper/plugin.cpp: - * ext/nvdswrapper/stub/cuda_runtime.h: - * meson_options.txt: - nvdswrapper: Add NVIDIA DeepStream wrapper plugin - Adding an NVIDIA DeepStream SDK based plugin with a dewarp element - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7357> - -2024-12-02 19:34:14 +0800 Pablo Sun <pablo.sun@mediatek.com> - - * sys/kms/gstkmssink.c: - kmssink: Add mediatek auto-detection - Add the MediaTek display controller to the list of - auto-detected modules. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8023> - -2024-10-03 22:42:36 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - vkencoder-private: keep only one DPB view for layered DPB - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7993> - -2024-09-19 12:29:23 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - vkencoder-private: increase reference slots array - H264 has the maximum number of refs (36) of any supported codec. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7993> - -2024-09-18 16:28:41 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - vkencoder_private: move view creation to picture init - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7993> - -2024-09-17 22:14:46 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - vkencoder-private: shuffle up operation and query creation - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7993> - -2024-09-17 13:44:53 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - vkencoder-private: shuffle up get format to bail out better - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7993> - -2024-09-17 13:43:06 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - vkencoder-private: define encoded feedback flags by removing override bit - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7993> - -2024-08-27 18:32:42 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - vkencoder-private: fix parameters overrides - First, remove validations since they will fail if there isn't a write operation. - It's valid to pass data without write operations. - Finally, it should check for hasOverride in the feedback info. Nonetheless, there's - an NVIDIA bug that always returns FALSE for hasOverride, which is why we currently - force it to TRUE. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7993> - -2024-11-28 12:24:11 +0530 Nirbheek Chauhan <nirbheek@centricular.com> - - * sys/kms/gstkmssink.c: - * sys/kms/gstkmssink.h: - * sys/kms/meson.build: - kms: Bump libdrm requirement to 2.4.108 - DRM modifier support requires drmModeFormatModifierBlobIterNext() - which was added in 2.4.108. See: - https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5174#note_2673883 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7991> - -2022-02-18 17:19:57 +0100 Jan Alexander Steffens (heftig) <jan.steffens@ltnglobal.com> - - * ext/closedcaption/gstcccombiner.c: - cccombiner: Improve get_next_time to avoid spinning - Avoid aggregate getting called in a loop when timed out but we're not in - a state where we can produce a buffer. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1746> - -2022-02-18 17:06:44 +0100 Jan Alexander Steffens (heftig) <jan.steffens@ltnglobal.com> - - * ext/closedcaption/gstcccombiner.c: - * ext/closedcaption/gstcccombiner.h: - cccombiner: Add caption_pad field to avoid get_static_pad - Save a reference to the caption pad as well. This will make the - `get_next_time` implementation cheaper. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1746> - -2024-11-26 16:27:19 +0100 Jan Alexander Steffens (heftig) <jan.steffens@ltnglobal.com> - - * ext/closedcaption/gstcccombiner.c: - cccombiner: Pass caption_pad to schedule_caption - Avoid having to find this pad again. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1746> - -2022-02-18 17:06:44 +0100 Jan Alexander Steffens (heftig) <jan.steffens@ltnglobal.com> - - * ext/closedcaption/gstcccombiner.c: - * ext/closedcaption/gstcccombiner.h: - cccombiner: Add video_pad field to avoid pad get/ref/unref - Saving a reference to this always-present pad simplifies the code and - avoids a lot of pad list scans and refcounting. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1746> - -2024-10-03 21:04:28 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkdecoder-private.c: - * gst-libs/gst/vulkan/gstvkencoder-private.c: - * gst-libs/gst/vulkan/gstvkvideo-private.c: - * gst-libs/gst/vulkan/gstvkvideo-private.h: - vulkan: add gst_vulkan_video_image_create_view() - This function is moved from gstvkdecoder-private so it can be used by - gstvkencoder-private too, removing what would otherwise be duplicated code. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7984> - -2024-11-26 21:22:25 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - vkencoder-private: fix documentation - The function doesn't take the reference from the caller; it keeps its own - reference, so the transfer is none. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7974> - -2024-09-06 11:22:54 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.h: - vkencoder-private: fix code style and use gpointer - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7974> - -2024-11-26 14:46:40 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - vkencoder-private: fix and complete public function prechecks - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7974> - -2024-11-26 16:22:47 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - vkencoder-private: check for layered buffer when new picture - And balance `if` curly brackets. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7974> - -2024-10-03 22:31:54 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - vkencoder-private: early return if dpb pool or dpb buffer already exist - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7974> - -2024-08-15 17:51:23 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - * tests/check/libs/vkvideoencodeh265.c: - vkencoder-private: usage structure is provided by caller - Like the rest of the profile structure, it's not intended to be filled in - the gst_vulkan_encoder_start() function, but by the caller. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7974> - -2024-09-19 12:31:33 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - vkencoder-private: fix how to get bitstream buffer size - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7974> - -2024-09-04 14:21:33 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - vkencoder-private: doc: fix function name - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7974> - -2024-10-03 22:22:17 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - vkencoder-private: use gst_clear_object() - Instead of g_clear_object(), for the sake of coherence. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7974> - -2024-09-04 14:21:04 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - vkencoder-private: fix return value of gst_vulkan_encoder_encode() - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7974> - -2024-11-25 17:51:31 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - vkencoder-private: doc: remove (in) annotation - because it's the default one - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7974> - -2024-09-05 14:53:07 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - vkencoder-private: move out indent macros - Move them outside of the structures whenever possible, given indent limitations. This way - the code is more readable. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7974> - -2024-11-26 21:34:25 +0800 Xi Ruoyao <xry111@xry111.site> - - * ext/x265/gstx265enc.c: - x265: Allow building with x265-4.1 - In x265-4.1 masteringDisplayColorVolume is changed from a pointer to a - character array embedded in struct x265_param. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7968> - -2024-11-26 16:52:05 +0530 Nirbheek Chauhan <nirbheek@centricular.com> - - * tests/check/meson.build: - meson: Don't unconditionally invoke the libsoup subproject - fallback: kwarg will invoke the specified subproject even if required: - false, which is not what we want here. - Reported at https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4045#note_2674340 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7965> - -2024-02-01 18:45:01 +0100 Mathieu Duponchelle <mathieu@centricular.com> - - * docs/plugins/gst_plugins_cache.json: - * ext/closedcaption/gstcccombiner.c: - * ext/closedcaption/gstcccombiner.h: - cccombiner: expose input-meta-processing property - It can be used to discard closed captions from the input pad if the - matching video buffer already held closed captions. - It is useful in a scenario where captions are generated for an AV - stream, but the incoming stream already has embedded captions for - some intervals, and those original captions should be preferred. 
- It can also be used to make sure input CC meta is always dropped, - the default behavior remains to append aggregated CC to whatever - CC meta was already present on the input video buffer - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6072> - -2024-11-23 22:08:56 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12converter.cpp: - * gst-libs/gst/d3d12/gstd3d12device-private.h: - * gst-libs/gst/d3d12/gstd3d12device.cpp: - d3d12converter: Gamma LUT related enhancements - * Build gamma LUT using shader, instead of CPU side math then uploading - * Make gamma LUT sharable across multiple converters - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7951> - -2024-11-23 11:47:00 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3dshader/converter-hlsl/PSMain_gamma_lut.hlsl: - * gst-libs/gst/d3dshader/converter-hlsl/hlsl.h: - * gst-libs/gst/d3dshader/converter-hlsl/meson.build: - * gst-libs/gst/d3dshader/gstd3dshadercache.cpp: - * gst-libs/gst/d3dshader/gstd3dshadercache.h: - d3dshader: Add shader for building gamma LUT - Newly added shader will be used by converter to construct - gamma encode/decode LUT texture - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7951> - -2024-11-19 16:52:29 +0100 Jan Alexander Steffens (heftig) <jan.steffens@ltnglobal.com> - - * ext/closedcaption/ccutils.c: - ccutils: Rename wrote_first to write_field1 - This better describes what we're doing. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7967> - -2024-11-19 17:21:16 +0100 Jan Alexander Steffens (heftig) <jan.steffens@ltnglobal.com> - - * ext/closedcaption/ccutils.c: - ccutils: Fix a typo in cc_buffer_take_cea608_field2 - There are no users of cc_buffer_take_cea608_field2, so this never was a - problem. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7967> - -2024-11-19 16:42:11 +0100 Jan Alexander Steffens (heftig) <jan.steffens@ltnglobal.com> - - * ext/closedcaption/ccutils.c: - ccutils: Fix a typo in max_buffer_time handling - All users set max_buffer_time to GST_CLOCK_TIME_NONE, effectively - infinite, so this never was a problem. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7967> - -2024-11-19 13:47:55 +0100 Jan Alexander Steffens (heftig) <jan.steffens@ltnglobal.com> - - * ext/closedcaption/ccutils.c: - ccutils: Remove broken branch - This branch was added in dd00dab5e9e8650f3f00660c2e611f81f1e8cd5b but is - never actually taken, as it requires `cc_data` to be null but - `cc_data_len` to be non-null. It would then dereference the null - `cc_data`. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7967> - -2024-11-26 09:23:51 +0100 Albert Sjolund <alberts@axis.com> - - * ext/webrtc/gstwebrtcbin.c: - webrtc: don't crash on invalid bundle id - If the bundle id forwarded to connect_rtpfunnel is not valid, - the assertion fails and crashes the program. This is now - an error instead. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7963> - -2024-11-22 11:31:18 -0700 Sebastien Cote <sebas642@gmail.com> - - * sys/applemedia/vtenc.c: - vtenc: add support for the HLG color transfer - Fixes #4047 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7947> - -2024-11-13 16:04:44 +0100 Stéphane Cerveau <scerveau@igalia.com> - - * gst-libs/gst/vulkan/gstvkoperation.c: - vkoperation: update doc to skip barriers array methods - Some methods use arrays of elements whose type can - vary at compile time. These methods should not - be introspectable, as it's not possible to determine - the final type. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7880> - -2023-10-20 23:05:01 +0800 Zhong Hongcheng <spartazhc@gmail.com> - - * tests/check/libs/h266parser.c: - * tests/check/meson.build: - tests: Add the VVC(H266) parser test cases - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5003> - -2024-11-21 01:17:27 +0800 He Junyan <junyan.he@intel.com> - - * gst-libs/gst/codecparsers/gsth266parser.c: - libs: codecparsers: Implement the VVC(H266) parser part II - Implement the picture header, slice header and SEI parsing functions. - Co-authored-by: spartazhc <spartazhc@gmail.com> - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5003> - -2024-11-13 15:42:03 +0800 He Junyan <junyan.he@intel.com> - - * gst-libs/gst/codecparsers/gsth266parser.c: - * gst-libs/gst/codecparsers/meson.build: - libs: codecparsers: Implement the VVC(H266) parser part I - Implement all the VPS, SPS and APS parsing functions. - Co-authored-by: spartazhc <spartazhc@gmail.com> - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5003> - -2024-11-13 15:39:49 +0800 He Junyan <junyan.he@intel.com> - - * gst-libs/gst/codecparsers/gsth266parser.h: - libs: codecparsers: Add the VVC(H266) parser header file - Co-authored-by: spartazhc <spartazhc@gmail.com> - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5003> - -2024-11-20 20:32:09 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12device.cpp: - * gst-libs/gst/d3d12/gstd3d12utils.h: - * sys/d3d12/plugin.cpp: - d3d12: Add gst_d3d12_flush_all_devices() method - ... 
and removing implicit flushing behavior on GstD3D12Device::finalize - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7929> - -2024-11-20 10:42:13 +0200 Sebastian Dröge <sebastian@centricular.com> - - * sys/aja/gstajadeviceprovider.cpp: - aja: Fix infinite loop in device provider - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7928> - -2024-11-14 10:59:35 -0500 Nicolas Dufresne <nicolas.dufresne@collabora.com> - - * gst-libs/gst/wayland/gstwllinuxdmabuf.c: - wayland: dmabuf: Translate tiled strides - GStreamer uses a different representation of tiled strides that needs - to be translated before being sent to wayland. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7849> - -2024-11-14 10:59:05 -0500 Nicolas Dufresne <nicolas.dufresne@collabora.com> - - * sys/v4l2codecs/gstv4l2decoder.c: - v4l2codecs: Use new helpers for DRM handling - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7849> - -2024-11-14 09:46:28 -0500 Nicolas Dufresne <nicolas.dufresne@collabora.com> - - * sys/v4l2codecs/gstv4l2decoder.c: - v4l2codecs: Fix caps string leak in v4l2codecs - Unlike gst_video_format_to_string(), gst_video_dma_drm_fourcc_to_string() - returns a freshly allocated string which needs to be freed. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7849> - -2024-11-08 16:22:16 -0500 Nicolas Dufresne <nicolas.dufresne@collabora.com> - - * sys/v4l2codecs/gstv4l2decoder.c: - v4l2codecs: decoder: Fixed unset dimension in caps - When the driver does not implement ENUM_FRAMESIZES for some specific - formats, the caps limiting the sizes may end up empty, which results in - assuming the driver can scale to any size. - Ensure that the original size is in the caps to prevent this assumption. - This happens with the Hantro driver, since it only replies to that call if the - format is postprocessed.
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7849> - -2024-11-12 12:09:46 +0100 Benjamin Gaignard <benjamin.gaignard@collabora.com> - - * sys/v4l2codecs/gstv4l2decoder.c: - v4l2codecs: Use drm modifier to build caps - Do not only use drm fourcc to build drm-format but also - include the format modifier. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7849> - -2024-11-07 13:06:03 -0500 Nicolas Dufresne <nicolas.dufresne@collabora.com> - - * ext/gtk/gstgtkwaylandsink.c: - * ext/wayland/gstwaylandsink.c: - waylandsink: Properly handle unrecoverable errors - Allocation failures cannot be recovered from and should lead to an error - being posted on the bus. Otherwise the pipeline will just stall. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7903> - -2024-11-07 12:49:10 -0500 Nicolas Dufresne <nicolas.dufresne@collabora.com> - - * ext/gtk/gstgtkwaylandsink.c: - * ext/wayland/gstwaylandsink.c: - waylandsink: Do not offer SHM pool when DMABuf is negotiated - Pools are expected to produce DMABuf when the caps are negotiated with - the associated caps feature. For that reason, avoid sharing the SHM pool - in this case. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7903> - -2024-11-18 11:00:36 +0100 Stéphane Cerveau <scerveau@igalia.com> - - * ext/zbar/gstzbar.c: - zbar: fix documentation - Fix some typos and markdown cleanup. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7914> - -2024-11-18 10:59:51 +0100 Stéphane Cerveau <scerveau@igalia.com> - - * ext/zxing/gstzxing.cpp: - zxing: update documentation - Fix some typos and markdown cleanup.
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7914> - -2024-11-13 17:00:17 +1100 Matthew Waters <matthew@centricular.com> - - * sys/uvcgadget/configfs.c: - uvcgadget: silence a maybe-uninitialized warning - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7875> - -2024-11-13 16:59:46 +1100 Matthew Waters <matthew@centricular.com> - - * ext/fdkaac/gstfdkaacenc.c: - fdkaacenc: silence a maybe-uninitialized warning - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7875> - -2024-11-13 16:58:41 +1100 Matthew Waters <matthew@centricular.com> - - * gst/rist/gstristrtxsend.c: - ristrtxsend: silence a maybe-uninitialized warning - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7875> - -2024-11-13 16:58:15 +1100 Matthew Waters <matthew@centricular.com> - - * ext/codec2json/gstav12json.c: - av12json: silence a maybe-uninitialized warning - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7875> - -2024-11-13 16:12:41 +1100 Matthew Waters <matthew@centricular.com> - - * gst-libs/gst/vulkan/gstvkvideoutils.c: - vulkan/videoutils: silence some maybe-uninitialized warnings - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7875> - -2024-11-16 13:20:16 +0800 He Junyan <junyan.he@intel.com> - - * docs/plugins/gst_plugins_cache.json: - Doc: Update the kmssink caps after adding DMA support - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5174> - -2024-11-08 13:43:53 +0100 Jakub Adam <jakub.adam@collabora.com> - - * sys/kms/gstkmsbufferpool.c: - kmsbufferpool: Accept DMA_DRM caps in the config - Only linear modifier is supported due to the dumb allocator's - limitation.
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5174> - -2024-09-07 00:43:37 +0200 Jakub Adam <jakub.adam@collabora.com> - - * sys/kms/gstkmssink.c: - kmssink: ensure we have a valid vinfo_drm after set_caps - Consequently drop the check in import_dmabuf - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5174> - -2024-08-29 20:58:04 +0200 Jakub Adam <jakub.adam@collabora.com> - - * sys/kms/gstkmssink.c: - kmssink: enumerate drm formats when IN_FORMATS not present - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5174> - -2024-05-24 12:26:20 +0800 He Junyan <junyan.he@intel.com> - - * sys/kms/gstkmssink.c: - kmssink: Do not provide DMA buffer pool for non-linear caps - The dumb allocator does not support modifiers, so we cannot allocate - non-linear buffers ourselves. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5174> - -2023-08-11 18:43:57 +0800 He Junyan <junyan.he@intel.com> - - * sys/kms/gstkmssink.c: - * sys/kms/gstkmssink.h: - kmssink: Handle the DMA buffer importing correctly - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5174> - -2023-08-11 18:37:18 +0800 He Junyan <junyan.he@intel.com> - - * sys/kms/gstkmssink.c: - * sys/kms/gstkmsutils.c: - kmssink: Add DMA kind caps into sink caps - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5174> - -2023-08-11 18:32:45 +0800 He Junyan <junyan.he@intel.com> - - * sys/kms/gstkmssink.c: - kmssink: Add helper functions to create DMA and raw caps - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5174> - -2023-08-11 18:28:49 +0800 He Junyan <junyan.he@intel.com> - - * sys/kms/gstkmssink.c: - kmssink: Add a helper function to collect formats and modifiers - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5174> - -2023-08-11 18:02:42 +0800 He Junyan <junyan.he@intel.com> - - *
sys/kms/gstkmsallocator.c: - * sys/kms/gstkmsallocator.h: - * sys/kms/gstkmssink.c: - kmssink: Add modifier to gst_kms_allocator_dmabuf_import - Use the new drmModeAddFB2WithModifiers() API for binding the - non-linear BO. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5174> - -2024-11-15 11:46:14 -0300 Thibault Saunier <tsaunier@igalia.com> - - * meson.build: - meson: Bump minimum version to 1.3 - Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4025 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7899> - -2024-11-12 12:12:17 +0100 Benjamin Gaignard <benjamin.gaignard@collabora.com> - - * sys/v4l2codecs/gstv4l2decoder.c: - v4l2codecs: Test ioctl return value and errno - Fix error testing when using V4L2_FMTDESC_FLAG_ENUM_ALL by using - both errno and the return value. - Fixes !7686 (merged) - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7873> - -2024-11-06 12:47:32 +0100 Stéphane Cerveau <scerveau@igalia.com> - - * ext/vulkan/vkdownload.c: - * ext/vulkan/vkh264dec.c: - * ext/vulkan/vkh265dec.c: - * ext/vulkan/vksink.c: - * ext/vulkan/vkupload.c: - * gst-libs/gst/vulkan/gstvkutils.c: - * gst-libs/gst/vulkan/gstvkutils.h: - * gst-libs/gst/vulkan/gstvkvideofilter.c: - * sys/applemedia/vtdec.c: - vkutils: add gst_vulkan_ensure_element_device - In order to keep the same device across - the elements in the pipeline, use either the device id - to create the device or get the device from the context - set by the peer elements. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7843> - -2024-11-08 10:21:19 +0100 Stéphane Cerveau <scerveau@igalia.com> - - * gst-libs/gst/vulkan/gstvkinstance.c: - * gst-libs/gst/vulkan/gstvkinstance.h: - vkinstance: add gst_vulkan_instance_create_device_with_index - This method will allow creating a device with its device_index, - preparing support for multiple devices.
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7843> - -2024-11-11 18:24:37 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/codecparsers/gsth264parser.c: - * gst-libs/gst/codecparsers/gsth265parser.c: - * gst-libs/gst/codecparsers/gstjpegparser.c: - * gst-libs/gst/codecparsers/nalutils.c: - * gst-libs/gst/codecparsers/nalutils.h: - codecparser: remove unused headers - Mainly <string.h> but also <stdlib.h> in jpegparse - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7869> - -2024-11-11 17:47:48 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/codecparsers/gsth265bitwriter.c: - * gst-libs/gst/codecparsers/gsth265parser.c: - * gst-libs/gst/codecparsers/meson.build: - * gst-libs/gst/codecparsers/nalutils.h: - codecparsers: avoid libc math library - Instead of the libc ceil() and pow() machinery for double types, since the - library uses them on unsigned integers, use a simple math function for ceil - division and a bit left shift for integer powers of two. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7869> - -2024-10-09 13:47:41 -0400 Stéphane Cerveau <scerveau@igalia.com> - - * gst-libs/gst/codecparsers/gsth265bitwriter.c: - * gst-libs/gst/codecparsers/gsth265parser.c: - h265parser/bitwriter: add some comments for ceil_log2 use - Validate that the length of the field must be calculated with - ceil_log2 and not bit storage.
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7429> - -2024-10-09 13:46:17 -0400 Stéphane Cerveau <scerveau@igalia.com> - - * gst-libs/gst/codecparsers/gsth264parser.c: - h264parse: use of ceil_log2 instead of bit_storage - According to the specification: - The length of the slice_group_id i syntax element is Ceil( Log2( - num_slice_groups_minus1 + 1 ) ) bits - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7429> - -2021-07-01 13:09:04 +0200 Jan Alexander Steffens (heftig) <jan.steffens@ltnglobal.com> - - * gst/mpegtsmux/gstbasetsmux.c: - mpegtsmux: Mux timestampless buffers immediately - Instead of leaving them queued indefinitely, or until we're timing out - and it's the only buffer queued. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7870> - -2024-11-12 11:01:03 +0200 Sebastian Dröge <sebastian@centricular.com> - - * gst/mpegtsmux/gstbasetsmux.c: - mpegtsmux: Don't time out in live mode if no timestamped next buffer is available - The muxer can only advance the time if it has a timestamped buffer that can be - output, otherwise it will just busy-wait and use up a lot of CPU. 
- Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3912 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7870> - -2024-11-14 10:37:05 +0100 Stéphane Cerveau <scerveau@igalia.com> - - * docs/plugins/gst_plugins_cache.json: - * ext/zxing/gstzxing.cpp: - * ext/zxing/gstzxingplugin.c: - gst_plugins-cache: add zxing plugin - update documentation - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7887> - -2024-11-12 09:01:49 +0100 Edward Hervey <edward@centricular.com> - - * gst/mpegtsmux/gstbasetsmux.c: - mpegtsmux: Post error on the bus if no mapping is found - This is more useful/visible - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7707> - -2024-10-22 08:42:17 +0200 Edward Hervey <edward@centricular.com> - - * docs/plugins/gst_plugins_cache.json: - * gst/mpegtsdemux/gstmpegdesc.h: - * gst/mpegtsdemux/tsdemux.c: - * gst/mpegtsmux/gstbasetsmux.c: - * gst/mpegtsmux/gstbasetsmux.h: - * gst/mpegtsmux/gstmpegtsmux.c: - * gst/mpegtsmux/tsmux/tsmuxstream.c: - * gst/mpegtsmux/tsmux/tsmuxstream.h: - mpegts: Add custom mapping for vp9 - This is a custom mapping. There isn't much needed apart from that to store vp9 - in mpeg-ts since the bitstream is self contained. - Since there is no official specification, we don't want people to be mistaken into - believing there is one. Therefore that mapping is only used in the muxer if the (new) - property `enable-custom-mappings` is set to TRUE. - * The MPEG-TS Stream Type is Private Data (0x6) with the registration descriptor - set to `VP09`. - * The Access Units are VP9 frames stored in PES packets - * As there is no emulation prevention byte in VP9 elementary streams, there can be - misdetection of PES start code.
To avoid this, the start of a PES packet must - be signalled using the Payload Unit Start Indicator in the transport packet - header - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7707> - -2024-11-08 10:18:09 -0300 Thibault Saunier <tsaunier@igalia.com> - - * sys/nvcodec/gstnvdecoder.cpp: - nvcodec: gl now supports Y444_16LE - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7858> - -2024-11-12 02:06:39 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12swapchainsink.cpp: - d3d12swapchainsink: Fix error when the sink is reused - Release backbuffer just before releasing swapchain - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7867> - -2024-11-05 14:34:03 +0100 Théo Maillart <tmaillart@freebox.fr> - - * gst/videoparsers/gstmpegvideoparse.c: - mpegvideoparse: do not set delta unit flag on unknown frame type - When encoding an image to mpeg2 video, with something like: - gst-launch-1.0 encodebin name=e profile=mpegpsmux:video/mpeg,mpegversion=2,systemstream=false ! \ - filesink location=sample.mpg filesrc num-buffers=1 blocksize=$(stat -c%s sample.png) \ - location=sample/dts.png ! pngdec ! e. - the only frame's type is set to an invalid value 0. - The consequence is that mpegvideoparse sets the delta unit flag on the buffer because - it is not an I frame, then decodebin3 drops this only frame because the delta - unit flag is set and the decoder receives EOS before it was able to receive any - encoded data - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7832> - -2024-11-11 17:44:22 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/codecparsers/gsth264bitwriter.c: - * gst-libs/gst/codecparsers/gsth265bitwriter.c: - h26Xbitwriter: use quote form directive for internal header - Since nalutils.h is not installed, it should be included via the local path.
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7868> - -2024-10-11 11:57:15 -0400 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/codecparsers/gsth264bitwriter.c: - * gst-libs/gst/codecparsers/gsth264bitwriter.h: - * tests/check/libs/h264bitwriter.c: - h264bitwriter: implement gst_h264_bit_writer_filler() - This is required for the vulkan encoder since it can only write slices after aligned - offsets. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7785> - -2024-11-08 19:05:41 -0500 Sid Sethupathi <sid.sethupathi@gmail.com> - - * ext/gs/README.md: - gs: update building README - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7860> - -2023-10-28 13:01:58 +0200 Diego Nieto <diego.nieto.m@outlook.com> - - * tests/check/elements/jifmux.c: - exiftag: handle GST_TAG_CAPTURING_LIGHT_SOURCE tag - This exif tag allows specifying the different light conditions - when taking a picture. This tag is defined in: - https://exiftool.org/TagNames/EXIF.html#LightSource - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5571> - -2024-09-17 11:47:47 +0300 Sebastian Dröge <sebastian@centricular.com> - - * ext/closedcaption/gstccconverter.c: - * tests/check/elements/ccconverter.c: - ccconverter: Don't override in_fps_entry when trying to take output - This allows handling CDP streams where the framerate is not provided by the - caps and generally gives preference to the framerate inside the CDP packets over - the one in the caps.
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7532> - -2024-11-04 19:30:02 -0500 Olivier Crête <olivier.crete@collabora.com> - - * gst-libs/gst/analytics/gsttensor.h: - * gst-libs/gst/analytics/gsttensormeta.c: - * gst-libs/gst/analytics/gsttensormeta.h: - * gst/tensordecoders/gstssdobjectdetector.c: - tensormeta: Add APIs to create and access GstTensorMeta contents - Also document those APIs better. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6000> - -2024-11-04 19:29:43 -0500 Olivier Crête <olivier.crete@collabora.com> - - * gst-libs/gst/analytics/gsttensor.c: - * gst-libs/gst/analytics/gsttensor.h: - * gst-libs/gst/analytics/gsttensormeta.c: - tensor: Add APIs to create and access GstTensor contents - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6000> - -2024-11-04 17:04:28 -0500 Olivier Crête <olivier.crete@collabora.com> - - * ext/onnx/gstonnxclient.cpp: - * ext/onnx/gstonnxinference.cpp: - * gst-libs/gst/analytics/gsttensor.h: - * gst/tensordecoders/gstssdobjectdetector.c: - tensors: Use full GstTensorDataType type name in type members - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6000> - -2024-10-31 16:03:31 -0400 Olivier Crête <olivier.crete@collabora.com> - - * ext/onnx/gstonnxclient.cpp: - * gst-libs/gst/analytics/gsttensormeta.c: - * gst-libs/gst/analytics/gsttensormeta.h: - analytics: Add APIs to add or get a GstTensorMeta - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6000> - -2024-09-24 10:53:05 -0400 Daniel Morin <daniel.morin@collabora.com> - - * ext/onnx/gstonnxclient.cpp: - * gst-libs/gst/analytics/gsttensor.c: - * gst-libs/gst/analytics/gsttensor.h: - analytics: Adding abstraction on tensor dims - A tensor can be row or col major, but it's also possible that the order in which we need - to read a tensor with more than two dimensions needs to be described.
The - reserved field in GstTensorDim is there for this purpose. If we need this we - can add GST_TENSOR_DIM_ORDER_INDEXED, and follow an index defining order for - each dimension. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6000> - -2024-10-17 17:28:24 -0400 Daniel Morin <daniel.morin@collabora.com> - - * ext/onnx/gstonnxclient.cpp: - * gst-libs/gst/analytics/gsttensor.c: - * gst-libs/gst/analytics/gsttensor.h: - * gst-libs/gst/analytics/gsttensormeta.c: - * gst-libs/gst/analytics/gsttensormeta.h: - * gst-libs/gst/analytics/meson.build: - * gst/tensordecoders/gstssdobjectdetector.c: - analytics: Make GstTensor more suitable for inline allocation - GstTensor contained two fields (data, dims) that were dynamically allocated. For - data it's a GstBuffer and we have pools for efficient memory management. For - dims it's a small array to store the dimensions of the tensor. The dims field - can be allocated in place by moving it to the end of the structure. This will - allow better memory management when GstTensor is stored in an analytics meta - which will take advantage of the _clear interface for re-use. - - New API to allocate and free GstTensor - To continue to support use-cases where GstTensor is not stored in an - analytics-meta we provide gst_tensor_alloc, gst_tensor_alloc_n and - gst_tensor_free that will facilitate memory management. - - Make GstTensor a boxed type - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6000> - -2024-07-17 14:39:42 -0400 Daniel Morin <daniel.morin@collabora.com> - - * ext/onnx/gstonnxclient.cpp: - * ext/onnx/gstonnxinference.cpp: - * gst-libs/gst/analytics/gsttensor.h: - * gst-libs/gst/analytics/gsttensormeta.h: - analytics: Move batch to GstTensor - - batch_size is required to interpret the tensor depending on the tensor format - the batches are not necessarily memory planes, therefore it's preferable to keep it - inside GstTensor.
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6000> - -2024-10-17 17:27:37 -0400 Daniel Morin <daniel.morin@collabora.com> - - * ext/onnx/gstonnxinference.cpp: - * gst-libs/gst/analytics/gsttensor.h: - * gst-libs/gst/analytics/gsttensormeta.h: - * gst-libs/gst/analytics/meson.build: - analytics: Decouple GstTensor from GstTensorMeta - - To support transporting tensors as GstMeta, Analytics-Meta and Media we need to - decouple GstTensor from GstTensorMeta. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6000> - -2024-01-25 01:09:13 -0500 Olivier Crête <olivier.crete@collabora.com> - - * docs/plugins/gst_plugins_cache.json: - * ext/onnx/gstonnx.c: - * ext/onnx/meson.build: - * gst/meson.build: - * gst/tensordecoders/gstssdobjectdetector.c: - * gst/tensordecoders/gstssdobjectdetector.h: - * gst/tensordecoders/gsttensordecoders.c: - * gst/tensordecoders/meson.build: - * meson_options.txt: - tensordecoders: Move decoder out of the ONNX plugin - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6000> - -2024-01-24 21:12:17 -0500 Olivier Crête <olivier.crete@collabora.com> - - * ext/onnx/decoders/gstssdobjectdetector.c: - * ext/onnx/gstonnx.c: - * ext/onnx/gstonnxclient.h: - * ext/onnx/meson.build: - * gst-libs/gst/analytics/analytics.h: - * gst-libs/gst/analytics/gsttensormeta.c: - * gst-libs/gst/analytics/gsttensormeta.h: - * gst-libs/gst/analytics/meson.build: - analytics: Move tensor meta to the analytics library - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6000> - -2024-11-06 16:35:10 +0100 wbartel <wilhelm.bartel@streamonkey.de> - - * ext/webrtc/gstwebrtcbin.c: - webrtcbin: fix malformed docs - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7847> - -2024-11-06 12:05:25 +0100 Adrien De Coninck <a.deconinck@intopix.com> - - * gst/mpegtsdemux/tsdemux.c: - tsdemux: validate frat before setting framerate in caps -
From JPEG-XS part 3: "If the frame rate is unknown, the frat parameter is 0." - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7836> - -2024-11-05 18:04:44 +0100 Adrien De Coninck <a.deconinck@intopix.com> - - * gst/mpegtsdemux/tsdemux.c: - tsdemux: use JXS_video_descriptor "frat" to set caps "framerate" - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7836> - -2024-11-05 14:23:05 +0200 Sebastian Dröge <sebastian@centricular.com> - - * gst/timecode/gsttimecodestamper.c: - timecodestamper: Don't fail the latency query in LTC mode if we have no framerate - Only in LTC mode do we introduce additional latency, which depends only on a - property and not on the framerate, so waiting for the framerate is not necessary. - In all other modes no latency is introduced at all and the latency query can - simply be proxied. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7831> - -2024-03-16 00:38:58 +0200 Mart Raudsepp <mart.raudsepp@globalm.media> - - * docs/plugins/gst_plugins_cache.json: - * gst/mpegtsmux/gstbasetsmux.c: - * gst/mpegtsmux/gstmpegtsmux.c: - * gst/mpegtsmux/tsmux/tsmuxstream.c: - * gst/mpegtsmux/tsmux/tsmuxstream.h: - mpegtsmux: Add support for SMPTE 302M (audio/x-smpte-302m) - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6530> - -2024-11-03 17:30:40 +0000 Tim-Philipp Müller <tim@centricular.com> - - * docs/meson.build: - meson: bail out earlier in docs subdir if docs are disabled - The gst_dep.get_variable('libexecdir') may fail in some scenarios - (e.g. building a module alone inside an uninstalled devenv) and - it shouldn't really be reached in the first place if docs are - disabled via options. - This also avoids confusing meson messages when cross-compiling or - doing a static build.
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7818> - -2024-11-03 10:42:33 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12swapchainsink.cpp: - d3d12swapchainsink: Add support for GstColorBalance interface - ... and adding hue, saturation, brightness, and contrast properties - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7817> - -2024-11-03 09:20:24 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12convert.cpp: - * sys/d3d12/gstd3d12convert.h: - d3d12convert: Add support for GstColorBalance interface - ... and adding hue, saturation, brightness, and contrast properties - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7817> - -2024-11-03 06:36:32 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12convert.cpp: - * sys/d3d12/gstd3d12videosink.cpp: - * sys/d3d12/gstd3d12window.cpp: - * sys/d3d12/gstd3d12window.h: - d3d12videosink: Add support for GstColorBalance interface - ... 
and adding hue, saturation, brightness, and contrast properties - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7817> - -2024-11-03 04:00:25 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d11/gstd3d11converter.cpp: - * gst-libs/gst/d3d12/gstd3d12converter-builder.cpp: - * gst-libs/gst/d3d12/gstd3d12converter-builder.h: - * gst-libs/gst/d3d12/gstd3d12converter-private.h: - * gst-libs/gst/d3d12/gstd3d12converter.cpp: - * gst-libs/gst/d3d12/gstd3d12converter.h: - * gst-libs/gst/d3d12/gstd3d12device.cpp: - * gst-libs/gst/d3dshader/converter-hlsl/PSMain_converter.hlsl: - * gst-libs/gst/d3dshader/gstd3dshadercache.cpp: - * gst-libs/gst/d3dshader/gstd3dshadercache.h: - d3d12converter: Add support for colorbalance - Adding support for hue, saturation, brightness, and contrast adjustment - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7817> - -2024-11-03 17:37:03 +0000 Tim-Philipp Müller <tim@centricular.com> - - * po/de.po: - * po/es.po: - * po/hr.po: - * po/ro.po: - * po/sl.po: - * po/sv.po: - gst-plugins-bad: update translations - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7819> - -2024-11-02 03:18:26 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d11/gstd3d11converter.cpp: - d3d11converter: Fix constant buffer update - Fixing regression of - https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6434 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7814> - -2024-08-06 18:09:58 +0200 Jan Alexander Steffens (heftig) <jan.steffens@ltnglobal.com> - - * gst/videoparsers/gsth264parse.c: - * gst/videoparsers/gsth264parse.h: - h264parse: Fix pic_timing SEI replacement - The calculated position was off. I'm not sure of the exact cause; - possibly because we're in AU-aligned byte-stream mode, which means - `transform` is true. 
- Replacing the math that calculates the NALU positions with code more - similar to what is already in use for `idr_pos` seems to have fixed it. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7318> - -2024-07-30 14:31:45 +0200 Jan Alexander Steffens (heftig) <jan.steffens@ltnglobal.com> - - * gst/videoparsers/gsth264parse.c: - * gst/videoparsers/gsth265parse.c: - h264parse, h265parse: Support drop frame codes with counting_type 6 - Tested with an Ateme Kyrion CM5000, which uses 6 when it drops 4 frames - from the code for 1080p@59.94. - Apply the same change to h265parse, with reference to the spec. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7316> - -2024-10-27 04:26:46 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12videosink.cpp: - * sys/d3d12/gstd3d12window-win32.cpp: - * sys/d3d12/gstd3d12window.cpp: - * sys/d3d12/gstd3d12window.h: - d3d12videosink: Add support for mouse scroll events - Handle WM_MOUSEHWHEEL and WM_MOUSEWHEEL events - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7774> - -2024-10-29 09:49:50 +0100 Edward Hervey <edward@centricular.com> - - * gst/mpegtsdemux/tsdemux.c: - * gst/mpegtsmux/gstbasetsmux.c: - mpegts: Fix bit-depth storage for jpeg-xs - As per ISO/IEC 21122-3 2019: - > Sample_Bitdepth code shall specify directly the bitdepth of the components - minus 1 - Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3945 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7775> - -2024-10-29 09:43:11 +0100 Edward Hervey <edward@centricular.com> - - * gst-libs/gst/mpegts/gstmpegtsdescriptor.c: - mpegts: Fix JPEG-XS Extension Descriptor handling - The initial specification for the descriptor (from H.222.0 06/21) was wrong and - introduced duplicate descriptor_tag/descriptor_length field. 
- This was later corrected in H.222.0 (2021) Amendment 1 (12/22) - Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3945 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7775> - -2024-10-29 11:29:05 +0100 Edward Hervey <edward@centricular.com> - - * ext/srt/gstsrtobject.c: - srt: Don't attempt to reconnect on authentication failures - This is a fatal issue which can't be recovered from - Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/1550 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7776> - -2024-10-22 22:29:51 +1100 Jan Schmidt <jan@centricular.com> - - * sys/androidmedia/gstamc-constants.h: - * sys/androidmedia/gstamc.c: - androidmedia: Add more pixel format mappings - Add missing pixel format constants, and mappings for - P010, packed variants of 420 and RGBA layouts to GStreamer - buffer formats. This fixes problems with android decoders that - announce support for these common pixel formats but refuse to output - raw video frames, only allowing the 'hardware surfaces output' path.
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7743> - -2024-10-22 21:28:04 +1100 Jan Schmidt <jan@centricular.com> - - * sys/androidmedia/gstamc-constants.h: - * sys/androidmedia/gstamc.c: - androidmedia: Add extra H.2645 profile mappings - Update the android headers and add missing mappings for H.264/H.265 - profiles that have been added in newer android releases - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7743> - -2024-10-28 14:37:04 +0100 Mathieu Duponchelle <mathieu@centricular.com> - - * docs/plugins/gst_plugins_cache.json: - * ext/closedcaption/gstcea608mux.c: - cea608mux: expose force-live property - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7765> - -2024-10-22 18:08:19 +1100 Matthew Waters <matthew@centricular.com> - - * docs/plugins/gst_plugins_cache.json: - * sys/decklink/gstdecklink.cpp: - * sys/decklink/gstdecklink.h: - * sys/decklink/gstdecklinkvideosink.cpp: - * sys/decklink/gstdecklinkvideosrc.cpp: - decklink: reinstate some hardcoded colorimetry handling - Needed when we don't yet have an open device and are doing negotiation. - colorimetry=bt601 is only actually supported by decklink for PAL and NTSC - formats. All other formats use bt709 or above. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7742> - -2024-10-22 18:06:46 +1100 Matthew Waters <matthew@centricular.com> - - * sys/decklink/gstdecklink.cpp: - decklink: only expose HDR colorimetry if 2020 colorspace is supported - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7742> - -2024-10-22 13:23:06 +1100 Matthew Waters <matthew@centricular.com> - - * sys/decklink/gstdecklinkvideosrc.cpp: - decklinkvideosrc: ignore HDR metadata consisting of all zeros - In some cases decklinkvideosink may produce such stream when the - information is unknown. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7742> - -2024-10-22 13:20:30 +1100 Matthew Waters <matthew@centricular.com> - - * sys/decklink/gstdecklinkvideosink.cpp: - decklinkvideosink: provide default values when HDR metadata is not available - Some file format standards don't require mastering-display-info - and content-light-level values to be provided. - Decklink however requires the static HDR metadata for the PQ transfer - function which we may not have. - CTA-861-G mentions that in this case, 0 may be provided as an 'unknown' - value which is what we use here. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7742> - -2024-10-22 13:18:58 +1100 Matthew Waters <matthew@centricular.com> - - * sys/decklink/gstdecklinkvideosink.cpp: - decklinkvideosink: fix incorrect EOTF value - Checking for mastering-display-info twice is incorrect. One of the - checks should be for the content-light-level. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7742> - -2024-09-24 13:55:39 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> - - * docs/plugins/gst_plugins_cache.json: - * gst-libs/gst/wayland/gstwlvideoformat.c: - * gst-libs/gst/wayland/gstwlvideoformat.h: - wayland: Add NV15 support - This format, which maps to NV12_10LE40 in GStreamer, is produced by Rockchip - video decoders when decoding 4:2:0 10 bit content. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7569> - -2024-10-22 23:41:13 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/codecs/gstvp8decoder.c: - vp8decoder: Fix resolution change handling - Do not store resolution in set_format() so that resolution change - can be detected on keyframe as intended. 
- Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3928 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7710> - -2024-10-25 16:37:15 +0300 Sebastian Dröge <sebastian@centricular.com> - - * gst/rtmp2/gstrtmp2sink.c: - rtmp2sink: Initialize base_ts / last_ts with the actual first observed timestamp - Initializing it with zero can falsely trigger the overflow / underflow detection - code if the first observed timestamp is a big integer. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7745> - -2024-10-28 18:58:48 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * sys/va/gstvaav1enc.c: - * sys/va/gstvavp9enc.c: - va{av1,vp9}enc: fix return value - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7773> - -2024-05-27 09:45:00 -0400 Xavier Claessens <xavier.claessens@collabora.com> - - * tests/check/elements/unixfd.c: - Revert "unixfd: disable flaky test_unixfd_segment for now" - This reverts commit 06cd4e24578caf1e16e364eb56edbbb065b8533e. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6765> - -2024-04-29 14:30:49 -0400 Xavier Claessens <xavier.claessens@collabora.com> - - * docs/plugins/gst_plugins_cache.json: - * gst/unixfd/gstunixfdsink.c: - * tests/check/elements/unixfd.c: - unixfd: Fix racy unit test by adding wait-for-connection property - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6765> - -2024-10-26 11:42:48 +0300 Sebastian Dröge <sebastian@centricular.com> - - * gst-libs/gst/play/gstplay.c: - * gst-libs/gst/play/gstplay.h: - play: Improve play message API inconsistencies - * Consistently name parse functions according to their message type and - deprecate the misnamed ones, - * Add missing parse functions, - * Check for the correct message type when parsing - * Use correct field name for warning message details - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7754> - -2024-10-25 11:10:38 -0400 Julian Bouzas <julian.bouzas@collabora.com> - - * ext/lcevcencoder/gstlcevch264enc.c: - lcevch264enc: Set 'byte-stream' format and 'au' alignment in output caps - This is because the LCEVC EIL SDK from V-Nova always outputs encoded video in - that format. This also avoids using the parser in some scenarios. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7750> - -2024-10-21 13:32:03 +0200 Pablo García <pgarcia@fluendo.com> - - * sys/d3d11/gstd3d11videosink.cpp: - * sys/d3d11/gstd3d11window.cpp: - * sys/d3d11/gstd3d11window.h: - * sys/d3d11/gstd3d11window_win32.cpp: - d3d11: implement mouse wheel events - Addition of d3d11 support for WM_MOUSEWHEEL and WM_MOUSEHWHEEL events, - which are triggered when the mouse is scrolled vertically or horizontally - respectively. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7705> - -2024-09-21 19:16:29 +0300 Jordan Petridis <jordan@centricular.com> - - * tests/check/gst-plugins-bad.supp: - ci: add suppressions for OpenSSL false positives - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7455> - -2024-09-21 19:01:55 +0300 Jordan Petridis <jordan@centricular.com> - - * tests/check/gst-plugins-bad.supp: - gst-plugins-bad.supp: Remove gssdp leaks that have been fixed - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7455> - -2024-09-20 11:02:42 +0200 Edward Hervey <edward@centricular.com> - - * sys/va/gstvacompositor.c: - vacompositor: Add since marker - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7455> - -2024-09-04 17:57:08 +0200 Edward Hervey <edward@centricular.com> - - * gst-libs/gst/codecparsers/gstav1parser.c: - * gst-libs/gst/codecparsers/gstav1parser.h: - * gst-libs/gst/codecparsers/gsth264parser.c: - * gst-libs/gst/codecparsers/gsth265parser.c: - codecparsers: Fix gtk-doc - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7455> - -2024-09-04 10:11:40 +0200 Edward Hervey <edward@centricular.com> - - * docs/plugins/gst_plugins_cache.json: - * ext/isac/gstisacenc.c: - * ext/ldac/gstldacenc.c: - * ext/svtav1/gstsvtav1enc.c: - * ext/svthevcenc/gstsvthevcenc.c: - bad: Mark more types as plugin API - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7455> - -2024-09-03 15:00:39 +0200 Edward Hervey <edward@centricular.com> - - * gst-libs/gst/codecparsers/gstav1parser.h: - docs: Fix av1parser symbols - Don't use un-named structures - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7455> - -2024-06-28 14:24:54 +0200 Edward Hervey <edward@centricular.com> - - * docs/plugins/gst_plugins_cache.json: - plugins_cache: Update for fedora 40 build - Part-of: 
<https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7455> - -2024-09-21 18:11:20 +0300 Jordan Petridis <jordan@centricular.com> - - * docs/plugins/gst_plugins_cache.json: - * sys/va/gstvafilter.c: - gstvafilter: Add back missing property comments - In b1cda4439bc9170b4af60ab464471f58ea770f58 the property comments - were removed, even though these are marked as public api. - Add back the comments, and a Since version for interpolation-method. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7455> - -2024-10-24 09:17:54 +0200 Edward Hervey <edward@centricular.com> - - * gst-libs/gst/vulkan/gstvkdevice.c: - * gst-libs/gst/vulkan/gstvkdisplay.c: - * gst-libs/gst/vulkan/gstvktrash.c: - all: Fix closure annotations - This was misused almost everywhere. - See - https://gi.readthedocs.io/en/latest/annotations/giannotations.html#support-for-gobject-closures - and: https://www.bassi.io/articles/2023/02/20/bindable-api-2023/ - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7725> - -2024-10-15 16:07:42 +0200 Diego Nieto <diego.nieto.m@outlook.com> - - * gst/debugutils/gstvideocodectestsink.c: - debugutils: videocodectestsink: support GRAY8 and GRAY10_LE{16,32} - Add support for: - * GST_VIDEO_FORMAT_GRAY8 - * GST_VIDEO_FORMAT_GRAY10_LE16 - * GST_VIDEO_FORMAT_GRAY10_LE32 - These formats are used by Fraunhofer VVC encoder and decoder. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7706> - -2024-10-21 11:50:23 +0200 Carlos Falgueras García <cfalgueras@fluendo.com> - - * docs/plugins/gst_plugins_cache.json: - video: Add GRAY10_LE16 support - This adds a 10-bit variant of grayscale packed into 16 bits little-endian - words. The MSB 6 bits are padding and should be ignored. This format is - used by Fraunhofer VVC encoder and decoder libraries. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7706> - -2024-10-23 14:28:30 +0200 Peter Stensson <petest@axis.com> - - * gst/codectimestamper/gstcodectimestamper.c: - * tests/check/elements/h264timestamper.c: - codectimestamper: Fix gint wraparound in pts_compare_func - The diff between compared timestamps might be outside the gint range - resulting in wrong sorting results. This patch corrects that by - comparing the timestamps and then returning -1, 0 or 1 depending on the - result. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7726> - -2024-10-24 14:40:23 +0200 Andoni Morales Alastruey <ylatuya@gmail.com> - - * sys/applemedia/vtdec.c: - vtdec: add support for level 6 6.1 and 6.2 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7729> - -2024-10-22 09:13:06 -0600 Jordan Yelloz <jordan.yelloz@collabora.com> - - * gst/mpegtsmux/tsmux/tsmux.c: - mpegtsmux: Schedule PMT update on stream removal - Following the behavior of tsmux_program_add_stream(), this ensures that a PMT - update will also be caused by removal of a stream from a program. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7719> - -2024-09-30 15:51:04 +0300 Sebastian Dröge <sebastian@centricular.com> - - * gst/mpegtsdemux/mpegtspacketizer.c: - mpegtsdemux: Handle PTS/DTS wraparound with ignore-pcr=true - The wraparound handling code assumes that the PCR gets updated regularly for - being able to detect wraparounds. With ignore-pcr=true that was not the case and - it stayed initialized at 1h forever. - To avoid this problem, update the fake PCR whenever the PTS advanced by more - than 5s, and also detect wraparounds in these fake PCRs. - Problem can be reproduced with - $ gst-launch-1.0 videotestsrc pattern=black ! video/x-raw,framerate=1/5 ! \ - x264enc speed-preset=ultrafast tune=zerolatency ! mpegtsmux ! \ - tsdemux ignore-pcr=true ! 
fakesink - which restarts timestamps at 0 after around 26h30m. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7588> - -2024-10-24 06:10:13 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12device.cpp: - * gst-libs/gst/d3d12/meson.build: - * sys/d3d12/gstd3d12decoder.cpp: - * sys/d3d12/gstd3d12decodercpbpool.cpp: - * sys/d3d12/gstd3d12mipgen.cpp: - * sys/d3d12/gstd3d12pluginutils.cpp: - * sys/d3d12/gstd3d12testsrc.cpp: - * sys/d3d12/gstd3d12window-win32.cpp: - * sys/d3d12/meson.build: - * sys/nvcodec/gstcudainterop_d3d12.cpp: - d3d12: Additional fixes for MinGW build - Various fixes for GCC build, including actual bug fixes - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7722> - -2024-10-23 04:41:23 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/meson.build: - * sys/d3d12/meson.build: - d3d12: Fix MinGW build with installed DirectX-Headers - Required for cerbero MinGW build - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7154> - -2024-10-18 18:09:01 -0400 Daniel Morin <daniel.morin@collabora.com> - - * ext/onnx/decoders/gstssdobjectdetector.c: - tensordecoder: Correct Klass, for ssd TD - Tensor decoders need a specific klass to be able to auto-plug them - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7698> - -2024-10-10 17:24:34 +0200 Jochen Henneberg <jochen@centricular.com> - - * tests/examples/va/vaenc-dynamic-reconfigure.c: - examples: va: Added VP8 encoder to dynamic reconfigure - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6959> - -2024-10-10 17:22:34 +0200 Jochen Henneberg <jochen@centricular.com> - - * sys/va/gstvavp8enc.c: - * sys/va/gstvavp8enc.h: - * sys/va/meson.build: - * sys/va/plugin.c: - va: Added VP8 encoder - Fixes #3430 - Fixes #3576 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6959> - -2024-10-10 
17:21:25 +0200 Jochen Henneberg <jochen@centricular.com> - - * gst-libs/gst/codecparsers/gstvp8parser.h: - codecparsers: vp8parser: Added frame type enums - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6959> - -2024-10-21 00:23:41 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12mipmapping.cpp: - d3d12mipmapping: Fix debug category - Fixing copy-and-paste mistake - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7701> - -2024-06-20 16:52:46 +0200 Benjamin Gaignard <benjamin.gaignard@collabora.com> - - * sys/v4l2codecs/gstv4l2codecav1dec.c: - * sys/v4l2codecs/gstv4l2codecdevice.h: - * sys/v4l2codecs/gstv4l2codech264dec.c: - * sys/v4l2codecs/gstv4l2codech265dec.c: - * sys/v4l2codecs/gstv4l2codecmpeg2dec.c: - * sys/v4l2codecs/gstv4l2codecvp8dec.c: - * sys/v4l2codecs/gstv4l2codecvp9dec.c: - * sys/v4l2codecs/gstv4l2decoder.c: - * sys/v4l2codecs/gstv4l2decoder.h: - v4l2codecs: dynamically discover supported pixel formats - If the driver allows it, for each stateless decoder, - enumerate all the pixel formats and use this list for the source - pad instead of a static template. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7686> - -2024-06-20 16:51:07 +0200 Benjamin Gaignard <benjamin.gaignard@collabora.com> - - * sys/v4l2codecs/linux/videodev2.h: - v4l2codecs: Update videodev2.h with V4L2_FMTDESC_FLAG_ENUM_ALL flag - Add V4L2_FMTDESC_FLAG_ENUM_ALL flag to support discovering all - possible pixel formats. - Add V4L2_FMT_FLAG_META_LINE_BASED to not create a hole in the flag - definitions. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7686> - -2024-10-11 11:58:37 -0400 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/codecparsers/gsth264bitwriter.c: - * gst-libs/gst/codecparsers/gsth265bitwriter.c: - h26xbitwriter: false have_space if aligning fails on aud - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7655> - -2024-10-18 15:10:56 +0200 Edward Hervey <edward@centricular.com> - - * ext/qroverlay/gstbaseqroverlay.c: - qrbaseoverlay: Add doc/since - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7692> - -2024-10-15 16:44:27 +0800 He Junyan <junyan.he@intel.com> +2025-03-12 19:20:58 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> * sys/va/gstvaav1enc.c: - vaav1enc: Do not enable palette mode by default - Palette mode should only be enabled when we know that the content - of the picture is simple. For example, only white letters on a black - screen in SCC mode. So, by default, we need to disable it. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7668> - -2024-09-27 18:01:53 +0200 Benjamin Gaignard <benjamin.gaignard@collabora.com> - - * sys/kms/gstkmssink.c: - kmssink: Add IMX-DCSS auto-detection - Add the IMX DCSS display controller to the list of - auto-detected modules. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7685> - -2024-09-20 13:34:34 +0200 Benjamin Gaignard <benjamin.gaignard@collabora.com> - - * sys/v4l2codecs/gstv4l2codecvp9dec.c: - v4l2codecs: vp9: Allow inter-frame resolution change - When the stream resolution changes it is needed to negotiate - new pools and to update the caps. - Resolution change could occur on a new sequence or a new - picture so move the resolution change detection code into a common - function. - For memory allocation reasons, only allow resolution change - on non-keyframes if the driver supports the remove buffer feature. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7684> - -2024-09-20 10:48:39 +0200 Benjamin Gaignard <benjamin.gaignard@collabora.com> - - * sys/v4l2codecs/gstv4l2decoder.c: - codecs:vp9 decoder: Remove unused info field - Video info field is never used so remove it. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7684> - -2024-09-20 10:30:01 +0200 Benjamin Gaignard <benjamin.gaignard@collabora.com> - - * gst-libs/gst/codecs/gstvp9decoder.c: - codecs: vp9 decoder: Drain output buffers before resolution change - We must drain the pending output pictures so that the subclass can renegotiate - the caps. Not doing so while still renegotiating would mean that the - subclass would have to do an allocation query before pushing the caps. - Pushing the caps now without this would also not work since these caps - won't match the pending buffers' format. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7684> - -2024-05-27 14:28:18 +0200 Benjamin Gaignard <benjamin.gaignard@collabora.com> - - * sys/v4l2codecs/gstv4l2codecallocator.c: - * sys/v4l2codecs/gstv4l2decoder.c: - * sys/v4l2codecs/gstv4l2decoder.h: - v4l2codecs: Add remove buffers helpers - Add helper functions to call the VIDIOC_REMOVE_BUFS ioctl. - If the driver supports this feature, buffers are removed from the queue when: - - the pool is detached from the decoder. - - the pool is released. - - allocation failed. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7684> - -2024-05-27 13:52:28 +0200 Benjamin Gaignard <benjamin.gaignard@collabora.com> - - * sys/v4l2codecs/gstv4l2decoder.c: - v4l2codecs: Do not register stateless decoder if the driver doesn't support VIDIOC_CREATE_BUFS - If the driver can't allocate buffers with VIDIOC_CREATE_BUFS do not - register it as a stateless decoder. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7684> - -2023-06-19 11:09:22 +0200 Benjamin Gaignard <benjamin.gaignard@collabora.com> - - * sys/v4l2codecs/gstv4l2codecallocator.c: - * sys/v4l2codecs/gstv4l2decoder.c: - * sys/v4l2codecs/gstv4l2decoder.h: - v4l2codecs: Replace VIDIOC_REQBUFS calls by VIDIOC_CREATE_BUFS - Use the VIDIOC_CREATE_BUFS ioctl to create buffers instead of VIDIOC_REQBUFS - because it allows creating buffers while streaming. - To prepare the introduction of VIDIOC_REMOVE_BUFFERS, create - the buffers one by one instead of a range of them. This way - it can, in the future, fill the holes. - gst_v4l2_decoder_request_buffers() is still used to remove all - the buffers of the queue. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7684> - -2024-05-07 10:48:05 +0200 Benjamin Gaignard <benjamin.gaignard@collabora.com> - - * sys/v4l2codecs/linux/videodev2.h: - v4l2codecs: update videodev2.h - Update videodev2.h to be aligned with kernel version v6.10 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7684> - -2024-10-14 11:26:20 +0200 Emil Ljungdahl <emillj@axis.com> - - * ext/webrtc/gstwebrtcbin.c: - webrtcbin: Clean up bin elements when datachannel is removed - When a datachannel within a session is removed after a proper close, - references to the error_ignore_bin elements of the datachannel - appsrc/appsink were left in webrtcbin. - This caused the bin objects to be left and not freed until the whole - webrtc session was terminated. Among other things that includes a thread - from the appsrc. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7675> - -2024-10-09 12:32:34 -0400 Francisco Javier Velázquez-García <francisco.velazquez@appear.net> - - * ext/srt/gstsrtsink.c: - srtsink: Add guard for null error when SRT open fails - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7628> - -2024-10-09 12:08:10 -0400 Francisco Javier Velázquez-García <francisco.velazquez@appear.net> - - * ext/srt/gstsrtobject.c: - srtsink: Register SRT listen callback before binding socket - This change https://github.com/Haivision/srt/pull/2683 forces us to - call `srt_listen_callback` before `srt_listen`. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7628> - -2024-07-08 17:54:03 -0400 Daniel Morin <daniel.morin@collabora.com> - - * tests/check/libs/analyticsmeta.c: - test: Adding a test for segmentation analytics-meta - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6026> - -2024-07-08 17:52:24 -0400 Daniel Morin <daniel.morin@collabora.com> - - * gst-libs/gst/analytics/analytics.h: - * gst-libs/gst/analytics/gstanalyticssegmentationmtd.c: - * gst-libs/gst/analytics/gstanalyticssegmentationmtd.h: - * gst-libs/gst/analytics/meson.build: - analytics: add segmentation analytics-meta - - Add a new analytics-meta to store segmentation analysis result. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6026> - -2024-07-08 17:47:13 -0400 Daniel Morin <daniel.morin@collabora.com> - - * gst-libs/gst/analytics/gstanalyticsmeta.c: - * gst-libs/gst/analytics/gstanalyticsmeta.h: - * gst-libs/gst/analytics/gstanalyticssegmentationmtd.c: - analytics: Allow specific analytics-meta (Mtd) to handle their clear - - Add mtd_meta_clear to allow specific analytics-meta to handle their clear - operation specific to their type. - - Clear mtd's attached when analytic-meta is freed. 
When the buffer where - analytics-meta is attached is not from a buffer pool, - gst_analytics_relation_meta_clear will not be called unless we explicitly call - it in _free. This is important, otherwise _mtd_clear is not called, leading to - a leak of any memory allocated by embedded mtd's. - - Un-ref in transform if it's a copy - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6026> - -2024-10-12 21:38:08 +0300 Jordan Petridis <jordan@centricular.com> - - * tests/check/elements/lc3.c: - lc3: tests: Zero out the buffer we allocate for the tests - Otherwise liblc3 will try to access the uninitialized memory - and it makes valgrind very sad. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7657> - -2024-10-14 15:31:54 +0200 Stéphane Cerveau <scerveau@igalia.com> - - * gst/videofilters/gstscenechange.c: - scenechange: fix memory leak - A reference to the last buffer (oldbuf) was kept, - leading to a memory leak on stop. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7662> - -2024-10-11 12:07:27 -0400 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/codecparsers/gstav1bitwriter.c: - * gst-libs/gst/codecparsers/gsth264bitwriter.c: - * gst-libs/gst/codecparsers/gsth265bitwriter.c: - * gst-libs/gst/codecparsers/gstvp9bitwriter.c: - codecparsers: add debug categories to bitwriters - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7653> - -2024-10-13 23:04:58 +0530 Nirbheek Chauhan <nirbheek@centricular.com> - - * gst-libs/gst/dxva/meson.build: - meson: Explicitly use cpp_std=c++11 for dxva - dxva is built unconditionally on all platforms where introspection is - enabled, so let's fix the build on macOS so that introspection can be - enabled there: https://gitlab.freedesktop.org/nirbheek/cerbero/-/jobs/65009118 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7659> - -2024-10-12 19:10:46 -0300 L. E. 
Segovia <amy@centricular.com> - - * gst-libs/gst/winrt/meson.build: - * meson.build: - * sys/dwrite/meson.build: - * sys/wasapi2/meson.build: - * sys/webview2/meson.build: - * sys/wic/meson.build: - meson: Undefine any WINVER and _WIN32_WINNT entries before redefining them - Fixes Cerbero build with MinGW GCC 14, where specifying -DWINVER=0x0601 -DWINVER=0x0A00 is a hard -Werror. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7658> - -2024-10-12 19:05:17 -0300 L. E. Segovia <amy@centricular.com> - - * sys/amfcodec/meson.build: - meson: amfcodec: fix build with MinGW GCC 14 - > ../sys/amfcodec/include/core/PropertyStorage.h:87:50: error: 'virtual void - > amf::AMFPropertyStorage::RemoveObserver(amf::AMFPropertyStorageObserver*)' was hidden -Werror=overloaded-virtual= - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7658> - -2024-10-12 19:01:46 -0300 L. E. Segovia <amy@centricular.com> - - * gst-libs/gst/d3d12/meson.build: - * sys/d3d12/meson.build: - meson: d3d12: fix build with MinGW GCC 14 - Also apply the d3d11 fix since both use the same header. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7658> - -2024-10-12 19:01:13 -0300 L. E. 
Segovia <amy@centricular.com> - - * gst-libs/gst/d3d11/meson.build: - * sys/d3d11/meson.build: - meson: d3d11: fix build with MinGW GCC 14 - In my tests with the new GCC 14 compiler for Cerbero, I got the - following error: - > In file included from include/directxmath/DirectXMath.h:2275, - > from ../gst-libs/gst/d3d11/gstd3d11converter.cpp:46: - > include/directxmath/DirectXMathMatrix.inl: In function 'bool - > DirectX::XMMatrixDecompose(XMVECTOR*, XMVECTOR*, XMVECTOR*, FXMMATRIX)': - > include/directxmath/DirectXMathMatrix.inl:1161:16: - > error: variable 'aa' set but not used -Werror=unused-but-set-variable - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7658> - -2024-08-29 20:50:59 +1000 Jan Schmidt <jan@centricular.com> - - * ext/webrtc/gstwebrtcstats.c: - webrtcbin: Retrieve RR stats from internal sources - Check and generate remote reception statistics from the info stored on - internal sources, as they are stored there when running against newer rtpbin - since MR !7424 - This fixes cases where statistics are incomplete when - peers send RR reports from a single remote ssrc, which GStreamer does - when bundling is enabled and other RTP stacks may too. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7425> - -2024-10-04 23:37:35 +0530 Nirbheek Chauhan <nirbheek@centricular.com> - - * sys/applemedia/vtdec.c: - vtdec: Set input formats when we get incomplete caps - In some cases, decodebin3 will send us incomplete caps (not containing - codec_data), and then a GAP event, which will force a negotiation. - This segfaults due to a null pointer deref because self->input_state - is NULL. - The only possible fix is to avoid negotiating when we get incomplete - caps (to avoid re-negotiationg immediately afterwards, which isn't - supported by some muxers), but also set as much input state as - possible so that a renegotiation triggered by a GAP event can complete - successfully. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7634> - -2024-10-09 17:19:42 -0400 Xavier Claessens <xclaessens@netflix.com> - - * ext/qroverlay/gstbaseqroverlay.c: - qroverlay: Change pixel-size to percent of width or height - The size is now expressed in percent of the smallest dimension. 100 - means the biggest square that fits the render area. - Fixes: #3695 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7638> - -2024-10-09 17:16:46 -0400 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * tests/check/elements/vapostproc.c: - tests: va: fix vapostproc test for DMABuf - Now it picks the first format in the template srcpad list and does - the conversion. Also the format size is reduced because not all - drivers support 4K as DMABuf (radeonsi). - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7636> - -2024-10-09 16:48:18 -0400 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12-private.h: - * gst-libs/gst/d3d12/gstd3d12.h: - * gst-libs/gst/d3d12/gstd3d12_fwd.h: - * gst-libs/gst/d3d12/gstd3d12cmdallocpool.cpp: - * gst-libs/gst/d3d12/gstd3d12cmdallocpool.h: - * gst-libs/gst/d3d12/gstd3d12cmdlistpool.cpp: - * gst-libs/gst/d3d12/gstd3d12cmdlistpool.h: - * gst-libs/gst/d3d12/gstd3d12cmdqueue-private.h: - * gst-libs/gst/d3d12/gstd3d12cmdqueue.cpp: - * gst-libs/gst/d3d12/gstd3d12cmdqueue.h: - * gst-libs/gst/d3d12/gstd3d12commandallocatorpool.cpp: - * gst-libs/gst/d3d12/gstd3d12commandallocatorpool.h: - * gst-libs/gst/d3d12/gstd3d12commandlistpool.cpp: - * gst-libs/gst/d3d12/gstd3d12commandlistpool.h: - * gst-libs/gst/d3d12/gstd3d12commandqueue.h: - * gst-libs/gst/d3d12/gstd3d12converter-pack.cpp: - * gst-libs/gst/d3d12/gstd3d12converter-unpack.cpp: - * gst-libs/gst/d3d12/gstd3d12converter.cpp: - * gst-libs/gst/d3d12/gstd3d12converter.h: - * gst-libs/gst/d3d12/gstd3d12descheappool.cpp: - * gst-libs/gst/d3d12/gstd3d12descheappool.h: - * 
gst-libs/gst/d3d12/gstd3d12descriptorpool.cpp: - * gst-libs/gst/d3d12/gstd3d12descriptorpool.h: - * gst-libs/gst/d3d12/gstd3d12device-private.h: - * gst-libs/gst/d3d12/gstd3d12device.cpp: - * gst-libs/gst/d3d12/gstd3d12device.h: - * gst-libs/gst/d3d12/gstd3d12frame.cpp: - * gst-libs/gst/d3d12/gstd3d12frame.h: - * gst-libs/gst/d3d12/meson.build: - * sys/d3d12/gstd3d12compositor.cpp: - * sys/d3d12/gstd3d12convert.cpp: - * sys/d3d12/gstd3d12decoder.cpp: - * sys/d3d12/gstd3d12dxgicapture.cpp: - * sys/d3d12/gstd3d12encoder.cpp: - * sys/d3d12/gstd3d12ipcclient.cpp: - * sys/d3d12/gstd3d12mipgen.cpp: - * sys/d3d12/gstd3d12mipmapping.cpp: - * sys/d3d12/gstd3d12overlaycompositor.cpp: - * sys/d3d12/gstd3d12swapchainsink.cpp: - * sys/d3d12/gstd3d12testsrc.cpp: - * sys/d3d12/gstd3d12window-swapchain-resource.h: - * sys/d3d12/gstd3d12window-swapchain.cpp: - * sys/d3d12/gstd3d12window.cpp: - * sys/dwrite/gstdwriterender_d3d12.cpp: - d3d12: Shorten various names - Update names of various objects and method to be shorter, for instance - GstD3D12CommandAllocator is changed to GstD3D12CmdAlloc. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7642> - -2024-10-09 15:46:15 -0400 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12fencedatapool.cpp: - d3d12: Fix typo in docs - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7642> - -2024-10-09 15:19:52 -0400 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12commandqueue.cpp: - d3d12: Early error out on Signal() fail - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7642> - -2024-10-09 20:37:10 +0300 Jordan Petridis <jordan@centricular.com> - - * tests/check/elements/lc3.c: - tests/lc3: Allocate the same size for the buffer and the data - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7631> - -2024-10-09 16:40:05 +0300 Vivia Nikolaidou <vivia@ahiru.eu> - - * gst/mxf/mxftypes.c: - * gst/mxf/mxftypes.h: - mxftypes: Add support for a few additional fields - According to SMPTE ST 377-1:2019 - Currently still unused. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7626> - -2024-10-09 16:25:05 +0300 Vivia Nikolaidou <vivia@ahiru.eu> - - * gst/mxf/mxftypes.c: - mxftypes: Check for the existence of all required fields - According to SMPTE ST 377-1:2019 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7626> - -2024-10-09 16:23:47 +0300 Vivia Nikolaidou <vivia@ahiru.eu> - - * gst/mxf/mxfdemux.c: - mxfdemux: Keep tracking the offsets even when an index table was found - Some files may contain a partial index table, leading into a crash when - you try seeking in them - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7626> - -2024-09-09 15:53:25 +0200 Piotr Brzeziński <piotr@centricular.com> - - * sys/applemedia/vtenc.c: - vtenc: Don't call drop_frame() when flushing - Slipped through with earlier changes to use drop/release_frame() explicitly. 
	  We should only drop when something goes wrong in the encoder, and just
	  release otherwise.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7475>

2024-08-08 10:50:23 -0400 Julian Bouzas <julian.bouzas@collabora.com>

	* ext/lcevcencoder/README.md:
	  lcevcencoder: Add README.md
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7330>

2024-08-08 10:01:24 -0400 Julian Bouzas <julian.bouzas@collabora.com>

	* ext/lcevcdecoder/README.md:
	  lcevcdecoder: Add README.md
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7330>

2023-08-25 13:30:48 -0400 Julian Bouzas <julian.bouzas@collabora.com>

	* ext/lcevcencoder/gstlcevcencoder.c:
	* ext/lcevcencoder/gstlcevcencoder.h:
	* ext/lcevcencoder/gstlcevcencoderutils.c:
	* ext/lcevcencoder/gstlcevcencoderutils.h:
	* ext/lcevcencoder/gstlcevch264enc.c:
	* ext/lcevcencoder/gstlcevch264enc.h:
	* ext/lcevcencoder/meson.build:
	* ext/lcevcencoder/plugin.c:
	* ext/meson.build:
	* meson_options.txt:
	  lcevcencoder: Add new LCEVC Encoder plugin
	  This new LCEVC encoder plugin is meant to implement all LCEVC encoder elements.
	  For now, it only implements the LCEVC H264 encoder (lcevch264enc) element. This
	  element essentially encodes raw video frames using a specific EIL plugin, and
	  outputs H264 frames with LCEVC data. Depending on the encoder properties, the
	  LCEVC data can either be part of the video stream as SEI NAL units, or attached
	  to buffers as GstMeta.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7330>

2023-08-01 11:15:54 -0400 Julian Bouzas <julian.bouzas@collabora.com>

	* ext/lcevcdecoder/gstlcevcdecodebin.c:
	* ext/lcevcdecoder/gstlcevcdecodebin.h:
	* ext/lcevcdecoder/gstlcevch264decodebin.c:
	* ext/lcevcdecoder/gstlcevch264decodebin.h:
	* ext/lcevcdecoder/plugin.c:
	  lcevcdecoder: Add new lcevch264decodebin element
	  This new element wraps both the base H264 decoder and lcevcdec elements into a
	  bin so that LCEVC decoding works with auto-plugging elements such as decodebin.
	  By default, the H264 decoder element with the highest rank is used as base
	  decoder, but any particular H264 decoder can be used by manually setting the
	  base-decoder property.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7330>

2023-08-01 11:11:18 -0400 Julian Bouzas <julian.bouzas@collabora.com>

	* ext/lcevcdecoder/gstlcevcdec.c:
	* ext/lcevcdecoder/gstlcevcdec.h:
	* ext/lcevcdecoder/gstlcevcdecutils.c:
	* ext/lcevcdecoder/gstlcevcdecutils.h:
	* ext/lcevcdecoder/meson.build:
	* ext/lcevcdecoder/plugin.c:
	* ext/meson.build:
	* meson_options.txt:
	  lcevcdecoder: Add new LCEVC Decoder plugin
	  This new LCEVC decoder plugin is meant to implement all LCEVC decoder elements.
	  For now, it only implements the LCEVC enhancement decoder (lcevcdec) element.
	  This element essentially enhances raw video frames into a higher resolution
	  frame using the LCEVC metadata attached to input buffers. The element is only
	  meant to be used after any base decoder (e.g. avdec_h264).
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7330>

2024-09-03 17:09:40 -0400 Julian Bouzas <julian.bouzas@collabora.com>

	* gst/videoparsers/gsth264parse.c:
	  h264parse: Wait for SEI before exposing src caps
	  This makes sure 'lcevc=false' src caps are not set before parsing SEI. It is
	  needed for decodebin2 to work properly with the LCEVC decoder.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7330>

2024-10-01 16:40:13 -0400 Olivier Crête <olivier.crete@collabora.com>

	* gst/videoparsers/gsth264parse.c:
	  h264parse: Don't fake IDR without at least an i-slice
	  There was an override to fake an IDR as soon as an SPS/PPS is encountered,
	  but that's not valid; at least an i-slice is needed.
	  Amend the visl result, as the output is slightly more correct, not
	  duplicating frame_num.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7330>

2024-08-19 12:16:49 -0400 Julian Bouzas <julian.bouzas@collabora.com>

	* gst/videoparsers/gsth264parse.c:
	* gst/videoparsers/gsth265parse.c:
	* gst/videoparsers/gstmpegvideoparse.c:
	* gst/videoparsers/gstvideoparseutils.c:
	* gst/videoparsers/gstvideoparseutils.h:
	  h264parse: attach LCEVC meta to buffers if it is present in SEI
	  This improves the h264parse element to attach LCEVC enhancement data to buffers
	  using the new GstLcevcMeta API. This metadata will eventually be used downstream
	  by LCEVC decoders to enhance the RAW video frame.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7330>

2023-07-27 16:27:29 -0400 Julian Bouzas <julian.bouzas@collabora.com>

	* gst-libs/gst/codecparsers/gstlcevcmeta.c:
	* gst-libs/gst/codecparsers/gstlcevcmeta.h:
	* gst-libs/gst/codecparsers/meson.build:
	  codecparsers: Add LCEVC metadata API
	  This new metadata API allows elements to attach LCEVC enhancement data to video
	  buffers. Usually, the video parser elements are in charge of parsing the LCEVC
	  enhancement data from SEI NAL units (Supplemental Enhancement Information).
	  However, other elements such as demuxers can also use this API if the LCEVC
	  enhancement data of the video is stored in a separate stream in the container.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7330>

2024-09-16 23:34:15 +0900 Seungha Yang <seungha@centricular.com>

	* sys/nvcodec/gstnvdecoder.cpp:
	* sys/nvcodec/gstnvdecoder.h:
	* sys/nvcodec/plugin.c:
	  nvdecoder: Add support for D3D12 output
	  Enable D3D12 output if the device can support D3D12 interop.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7529>

2024-09-16 23:24:30 +0900 Seungha Yang <seungha@centricular.com>

	* sys/nvcodec/gstcudainterop_d3d12.cpp:
	* sys/nvcodec/gstcudainterop_d3d12.h:
	* sys/nvcodec/gstnvencoder.cpp:
	  nvcodec: Add support for CUDA to D3D12 memory copy
	  Adding a CUDA -> D3D12 memory copy method to GstCudaD3D12Interop.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7529>

2024-09-24 23:43:07 -0700 Mengkejiergeli Ba <mengkejiergeli.ba@intel.com>

	* sys/msdk/gstmsdkenc.c:
	  msdkenc: Guard the read of thiz->initialized with the modification of this value
	  This avoids wrongly reading/writing thiz->initialized when multiple threads
	  invoke the encoder init function, which is possible when user apps use
	  multiple threads to dynamically change the encoder's settings.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7578>

2024-09-24 17:32:54 +0300 Sebastian Dröge <sebastian@centricular.com>

	* sys/aja/gstajadeviceprovider.cpp:
	  ntv2: Update to AJA NTV2 SDK 17.1.0
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7568>

2024-09-03 12:16:19 +0300 Sebastian Dröge <sebastian@centricular.com>

	* ext/nvcomp/gstnvcompvideodec.cpp:
	* ext/ttml/gstttmlparse.c:
	* ext/vulkan/vkdownload.c:
	* ext/vulkan/vkh264dec.c:
	* ext/vulkan/vkh265dec.c:
	* ext/vulkan/vkupload.c:
	* gst-libs/gst/vulkan/gstvkswapper.c:
	* gst/interlace/gstinterlace.c:
	* sys/amfcodec/gstamfav1enc.cpp:
	* sys/amfcodec/gstamfh264enc.cpp:
	* sys/amfcodec/gstamfh265enc.cpp:
	* sys/androidmedia/gstamcvideodec.c:
	* sys/applemedia/avfvideosrc.m:
	* sys/d3d11/gstd3d11convert.cpp:
	* sys/d3d11/gstd3d11decoder.cpp:
	* sys/d3d11/gstd3d11deinterlace.cpp:
	* sys/d3d11/gstd3d11download.cpp:
	* sys/d3d11/gstd3d11h265dec.cpp:
	* sys/d3d11/gstd3d11ipcsink.cpp:
	* sys/d3d11/gstd3d11screencapturesrc.cpp:
	* sys/d3d11/gstd3d11upload.cpp:
	* sys/d3d11/gstd3d11vp9dec.cpp:
	* sys/d3d12/gstd3d12convert.cpp:
	* sys/d3d12/gstd3d12decoder.cpp:
	* sys/d3d12/gstd3d12download.cpp:
	* sys/d3d12/gstd3d12h264enc.cpp:
	* sys/d3d12/gstd3d12ipcsink.cpp:
	* sys/d3d12/gstd3d12memorycopy.cpp:
	* sys/d3d12/gstd3d12screencapturesrc.cpp:
	* sys/d3d12/gstd3d12upload.cpp:
	* sys/mediafoundation/gstmfvideoencoder.cpp:
	* sys/msdk/gstmsdkcaps.c:
	* sys/msdk/gstmsdkdec.c:
	* sys/msdk/gstmsdkenc.c:
	* sys/msdk/gstmsdkvpp.c:
	* sys/msdk/gstmsdkvpputil.c:
	* sys/nvcodec/gstcudaconvertscale.c:
	* sys/nvcodec/gstcudaipcsink.cpp:
	* sys/nvcodec/gstcudamemorycopy.c:
	* sys/nvcodec/gstnvav1encoder.cpp:
	* sys/nvcodec/gstnvdec.c:
	* sys/nvcodec/gstnvdecoder.cpp:
	* sys/nvcodec/gstnvh264encoder.cpp:
	* sys/nvcodec/gstnvh265encoder.cpp:
	* sys/qsv/gstqsvav1enc.cpp:
	* sys/qsv/gstqsvdecoder.cpp:
	* sys/qsv/gstqsvh264dec.cpp:
	* sys/qsv/gstqsvh264enc.cpp:
	* sys/qsv/gstqsvh265dec.cpp:
	* sys/qsv/gstqsvh265enc.cpp:
	* sys/qsv/gstqsvjpegdec.cpp:
	* sys/qsv/gstqsvjpegenc.cpp:
	* sys/qsv/gstqsvvp9dec.cpp:
	* sys/qsv/gstqsvvp9enc.cpp:
	* sys/v4l2codecs/gstv4l2decoder.c:
	* sys/va/gstvaav1dec.c:
	* sys/va/gstvabase.c:
	* sys/va/gstvabasedec.c:
	* sys/va/gstvacaps.c:
	* sys/va/gstvacompositor.c:
	* sys/va/gstvadeinterlace.c:
	* sys/va/gstvaencoder.c:
	* sys/va/gstvafilter.c:
	* sys/va/gstvavpp.c:
	* tests/check/libs/vkimagebufferpool.c:
	* tests/check/libs/vkvideodecode.c:
	* tests/check/libs/vkvideoencodeh264.c:
	* tests/check/libs/vkvideoencodeh265.c:
	* tests/examples/d3d11/d3d11converter.cpp:
	* tests/examples/nvcodec/nvcodec.c:
	  common: Use more efficient versions of GstCapsFeatures API where possible
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7432>

2024-08-30 18:57:03 +0300 Sebastian Dröge <sebastian@centricular.com>

	* sys/va/gstvabasedec.c:
	* sys/va/gstvavpp.c:
	  common: Stop using GQuark-based GstCapsFeatures API
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7432>

2024-08-29 20:09:52 +0300 Sebastian Dröge <sebastian@centricular.com>

	* ext/curl/gstcurlhttpsrc.c:
	* ext/webrtc/gstwebrtcbin.c:
	* gst-libs/gst/player/gstplayer.c:
	* gst/debugutils/gsttestsrcbin.c:
	* sys/ipcpipeline/gstipcpipelinecomm.c:
	* sys/kms/gstkmssink.c:
	* sys/nvcodec/gstcudaconverter.c:
	* tests/check/elements/webrtcbin.c:
	* tests/examples/mxf/mxfdemux-structure.c:
	  common: Stop using GQuark-based GstStructure field name API
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7432>

2024-08-09 10:41:57 +0300 Sebastian Dröge <sebastian@centricular.com>

	* ext/svthevcenc/gstsvthevcenc.c:
	* gst-libs/gst/mpegts/gstmpegtssection.c:
	* gst-libs/gst/play/gstplay.c:
	* gst/mpegtsdemux/tsdemux.c:
	* gst/mxf/mxfaes-bwf.c:
	* gst/mxf/mxfffv1.c:
	* gst/mxf/mxfmetadata.c:
	* gst/mxf/mxfmetadata.h:
	* gst/mxf/mxfmpeg.c:
	  common: Stop using GQuark-based GstStructure name API
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7432>

2024-09-26 02:03:19 +0900 Seungha Yang <seungha@centricular.com>

	* gst-libs/gst/d3d12/gstd3d12converter.cpp:
	* gst-libs/gst/d3d12/gstd3d12device-private.h:
	* gst-libs/gst/d3d12/gstd3d12device.cpp:
	* gst-libs/gst/d3d12/gstd3d12memory.cpp:
	* sys/d3d12/gstd3d12compositor.cpp:
	* sys/d3d12/gstd3d12decoder.cpp:
	* sys/d3d12/gstd3d12decodercpbpool.cpp:
	* sys/d3d12/gstd3d12decodercpbpool.h:
	* sys/d3d12/gstd3d12overlaycompositor.cpp:
	* sys/d3d12/gstd3d12testsrc.cpp:
	  d3d12: Fix resource allocation on old Windows versions
	  The D3D12_HEAP_FLAG_CREATE_NOT_ZEROED flag was introduced as of the
	  Windows 10 May 2020 Update, and older versions don't understand the
	  heap flag. Check the feature support and enable
	  D3D12_HEAP_FLAG_CREATE_NOT_ZEROED only if it's supported by the OS.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7573>

2024-05-30 07:34:22 +0000 Weijian Pan <pwjworks@gmail.com>

	* sys/applemedia/avfdeviceprovider.m:
	  avfdeviceprovider: Fix caps leak
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6966>

2024-09-24 13:31:34 +0200 Guillaume Desmottes <guillaume.desmottes@onestream.live>

	* ext/wpe/gstwpethreadedview.cpp:
	  wpe: initialize threading.ready before reading it
	  Fix Valgrind warning.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7377>

2023-12-10 23:31:32 +0100 Michael Grzeschik <m.grzeschik@pengutronix.de>

	* sys/uvcgadget/gstuvcsink.c:
	  uvcsink: make gst_v4l2uvc_fourcc_to_bare_struct work with more raw formats
	  The uvcsink was limited to only transfer YUY2 and MJPEG. For the
	  uncompressed formats there is no technical reason not to support them.
	  Since gst_video_format_to_string already supports more fourccs than
	  only YUY2, we use the default path in gst_v4l2uvc_fourcc_to_bare_struct
	  to create structures for more formats and bail out if the returned
	  format is not of the uncompressed type.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6037>

2024-09-24 17:01:10 +0200 Hugues Fruchet <hugues.fruchet@foss.st.com>

	* sys/kms/gstkmsallocator.c:
	  kmsallocator: fix stride with planar formats
	  This fixes a regression introduced by the merge request
	  https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3801
	  The extrapolated stride was computed but not used, resulting in the same
	  stride being applied to all planes.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7566>

2024-09-24 01:07:13 +0900 Seungha Yang <seungha@centricular.com>

	* sys/d3d12/gstd3d12convert.cpp:
	* sys/d3d12/gstd3d12convert.h:
	* sys/d3d12/plugin.cpp:
	  d3d12: Add colorconvert and scale elements
	  In addition to the existing d3d12convert element, which supports
	  color conversion and rescaling at once, add separate
	  color-conversion-only and scale-only elements.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7560>

2024-09-20 23:46:32 +0900 Seungha Yang <seungha@centricular.com>

	* gst-libs/gst/d3dshader/gstd3dshadercache.cpp:
	* gst-libs/gst/d3dshader/gstd3dshadercache.h:
	* gst-libs/gst/d3dshader/plugin-hlsl/CSMain_mipgen.hlsl:
	* gst-libs/gst/d3dshader/plugin-hlsl/hlsl.h:
	* gst-libs/gst/d3dshader/plugin-hlsl/meson.build:
	* sys/d3d12/gstd3d12mipgen.cpp:
	* sys/d3d12/gstd3d12mipgen.h:
	* sys/d3d12/gstd3d12mipmapping.cpp:
	* sys/d3d12/gstd3d12mipmapping.h:
	* sys/d3d12/meson.build:
	* sys/d3d12/plugin.cpp:
	  d3d12: Add d3d12mipmapping element
	  Adding a new element for texture conversion from a single mip level
	  texture to a mipmapping-enabled RGBA texture.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7555>

2024-09-21 04:33:02 +0900 Seungha Yang <seungha@centricular.com>

	* gst-libs/gst/d3d12/gstd3d12converter.cpp:
	* gst-libs/gst/d3d12/gstd3d12converter.h:
	* gst-libs/gst/d3d12/gstd3d12device.cpp:
	* sys/d3d12/gstd3d12pluginutils.cpp:
	  d3d12: Use D3D12_FILTER_MIN_MAG_MIP_LINEAR filter by default
	  ... instead of D3D12_FILTER_MIN_MAG_LINEAR_MIP_POINT, since we support
	  mipmap textures now.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7555>

2024-09-20 22:56:08 +0900 Seungha Yang <seungha@centricular.com>

	* gst-libs/gst/d3d12/gstd3d12bufferpool.cpp:
	* gst-libs/gst/d3d12/gstd3d12memory-private.h:
	* gst-libs/gst/d3d12/gstd3d12memory.cpp:
	* gst-libs/gst/d3d12/gstd3d12memory.h:
	  d3d12: Add support for mipmap texture
	  Consider the case where D3D12_RESOURCE_DESC.MipLevels is greater than 1
	  or zero.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7555>

2024-09-19 21:29:18 +0900 Seungha Yang <seungha@centricular.com>

	* gst-libs/gst/d3d12/gstd3d12converter.cpp:
	  d3d12converter: Fix crash on pso update
	  Allocate D3D12_INPUT_ELEMENT_DESC memory on the heap instead of using
	  stack memory for later reuse.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7550>

2024-09-19 19:31:20 +0900 Seungha Yang <seungha@centricular.com>

	* sys/d3d12/gstd3d12swapchainsink.cpp:
	  d3d12swapchainsink: Add auto-resize mode
	  Automatically resize the swapchain backbuffer to be identical to the
	  stream resolution if the user calls the resize() signal with zero
	  resolution.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7550>

2024-09-19 01:23:50 +0900 Seungha Yang <seungha@centricular.com>

	* sys/d3d12/gstd3d12swapchainsink.cpp:
	  d3d12swapchainsink: Add support for MSAA
	  Adding an "msaa" property and enabling MSAA if supported by the device.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7550>

2024-09-19 00:21:21 +0900 Seungha Yang <seungha@centricular.com>

	* sys/d3d12/gstd3d12pluginutils.cpp:
	* sys/d3d12/gstd3d12pluginutils.h:
	* sys/d3d12/gstd3d12window-swapchain.cpp:
	  d3d12videosink: Use converter config for initial MSAA setup
	  Avoid redundant pso creation.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7550>

2024-09-18 23:53:23 +0900 Seungha Yang <seungha@centricular.com>

	* gst-libs/gst/d3d12/gstd3d12converter.cpp:
	* gst-libs/gst/d3d12/gstd3d12converter.h:
	  d3d12converter: Add support for initial pso DXGI_SAMPLE_DESC setting
	  Add more options for pso, in order to avoid redundant pso creation
	  when MSAA is used.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7550>

2024-09-18 19:59:13 +0900 Seungha Yang <seungha@centricular.com>

	* sys/d3d12/gstd3d12swapchainsink.cpp:
	  d3d12swapchainsink: Add sampling-method property
	  Allow setting the sampler filter method to use.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7550>

2024-09-18 23:24:55 +0900 Seungha Yang <seungha@centricular.com>

	* sys/d3d12/gstd3d12convert.cpp:
	  d3d12convert: Use new sampler filter update method
	  ... instead of creating a new converter.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7550>

2024-09-18 20:21:22 +0900 Seungha Yang <seungha@centricular.com>

	* gst-libs/gst/d3d12/gstd3d12converter.cpp:
	  d3d12converter: Add support for sampler filter update
	  Create a new root signature and pipeline state object if the sampler
	  filter method is updated.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7550>

2024-09-18 23:01:57 +0900 Seungha Yang <seungha@centricular.com>

	* gst-libs/gst/d3d12/gstd3d12converter-builder.cpp:
	* gst-libs/gst/d3d12/gstd3d12converter-builder.h:
	* gst-libs/gst/d3d12/gstd3d12converter.cpp:
	* gst-libs/gst/d3d12/gstd3d12device-private.h:
	* gst-libs/gst/d3d12/gstd3d12device.cpp:
	  d3d12converter: Use generated sampler
	  ... instead of static ones, in order to support sampler state updates.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7550>

2024-09-07 11:06:12 +0800 He Junyan <junyan.he@intel.com>

	* sys/va/gstvadecoder.c:
	  va: decoder: Delete all the internal locks
	  In fact, the va decoder is just an internal helper class and its access
	  is under the control of all dec elements. So far, there is no parallel
	  operation on it.
	  On the other side, some code scan tools report race condition issues.
	  For example, the "context" field is protected with a lock at _open()
	  but is not protected at _add_param_buffer().
	  So we just delete all its lock usage.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7547>

2024-09-07 10:45:09 +0800 He Junyan <junyan.he@intel.com>

	* gst-libs/gst/codecparsers/gsth264bitwriter.c:
	  h264bitwriter: Add check for data size to avoid overflow
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7547>

2024-09-07 10:26:22 +0800 He Junyan <junyan.he@intel.com>

	* gst-libs/gst/codecparsers/gsth265bitwriter.c:
	  h265bitwriter: Add check for data size to avoid overflow
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7547>

2024-09-06 23:44:53 +0800 He Junyan <junyan.he@intel.com>

	* sys/va/gstvajpegenc.c:
	  va: jpegenc: Fix a memory leak when filtering sink caps
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7547>

2024-09-06 23:35:59 +0800 He Junyan <junyan.he@intel.com>

	* sys/va/gstvabasetransform.c:
	  va: vpp: Use gst_caps_replace to operate on the filter_caps
	  No need to use a lock when we assign a value to priv->filter_caps.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7547>

2024-09-18 16:15:49 +0300 Sebastian Dröge <sebastian@centricular.com>

	* gst/mxf/mxfmux.c:
	  mxfmux: Use gst_aggregator_update_segment() instead of randomly pushing a segment event
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7542>

2024-09-18 16:15:11 +0300 Sebastian Dröge <sebastian@centricular.com>

	* gst/mpegtsmux/gstbasetsmux.c:
	  mpegtsmux: Use gst_aggregator_push_src_event() for pushing downstream events
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7542>

2024-09-17 14:48:03 +0200 Benjamin Gaignard <benjamin.gaignard@collabora.com>

	* sys/v4l2codecs/gstv4l2codech265dec.c:
	  v4l2codecs: h265: Minimize memory allocation
	  Be smarter when allocating sink and source memory pools to reduce the
	  memory footprint.
	  Use gst_v4l2_decoder_get_render_delay() to know the needed number of
	  buffers for the downstream element.
	  Handle errors in case of memory allocation failures.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7544>

2024-02-12 14:02:44 +0800 Tim Blechmann <tim@klingt.org>

	* ext/mdns/gstmicrodnsdevice.c:
	  mdns: fix thread names
	  Linux thread names are limited to 15 chars. Providing longer thread
	  names causes the thread name not to be applied at all.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6094>

2024-09-17 23:03:14 +0900 Seungha Yang <seungha@centricular.com>

	* sys/d3d12/gstd3d12decodercpbpool.cpp:
	  d3d12decoder: Disable sub-allocated bitstream buffer
	  This sub-allocation causes decoding artifacts for some reason
	  on Intel platforms.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7535>

2024-09-17 18:31:30 +0300 Sebastian Dröge <sebastian@centricular.com>

	* gst/mpegtsmux/gstbasetsmux.c:
	  mpegtsmux: Fix refcounting issue when selecting the best pad
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7538>

2024-09-06 10:44:46 +0200 Edward Hervey <edward@centricular.com>

	* docs/plugins/gst_plugins_cache.json:
	* gst/mpegtsdemux/gstmpegdesc.h:
	* gst/mpegtsdemux/tsdemux.c:
	* gst/mpegtsmux/gstbasetsmux.c:
	* gst/mpegtsmux/gstmpegtsmux.c:
	* gst/mpegtsmux/tsmux/tsmuxstream.c:
	* gst/mpegtsmux/tsmux/tsmuxstream.h:
	  mpegts: Add support for SMPTE ST-2038 ANC
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7461>

2024-09-17 10:36:58 -0400 Xavier Claessens <xclaessens@netflix.com>

	* sys/aja/gstajasinkcombiner.cpp:
	  aja: there is no need to take the object lock
	  Both the _sink_event() and _aggregate() vfuncs are called from the
	  source pad streaming thread. There is thus no need to protect the caps
	  fields.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7536>

2024-07-15 16:10:10 +0200 Edward Hervey <edward@centricular.com>

	* docs/plugins/gst_plugins_cache.json:
	* gst-libs/gst/mpegts/gstmpegtsdescriptor.c:
	* gst-libs/gst/mpegts/gstmpegtsdescriptor.h:
	* gst-libs/gst/mpegts/gstmpegtssection.h:
	* gst/mpegtsdemux/tsdemux.c:
	* gst/mpegtsmux/gstbasetsmux.c:
	* gst/mpegtsmux/gstbasetsmuxjpegxs.c:
	* gst/mpegtsmux/gstbasetsmuxjpegxs.h:
	* gst/mpegtsmux/gstmpegtsmux.c:
	* gst/mpegtsmux/meson.build:
	* gst/mpegtsmux/tsmux/tsmuxstream.c:
	* gst/mpegtsmux/tsmux/tsmuxstream.h:
	  mpegts: Add support for JPEG-XS
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7172>

2024-08-29 14:49:59 +0200 Edward Hervey <edward@centricular.com>

	* gst/mpegtsmux/tsmux/tsmuxstream.c:
	  tsmux: Split off j2k descriptor code into a separate function
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7172>

2024-08-28 11:07:32 +0200 Edward Hervey <edward@centricular.com>

	* gst-libs/gst/mpegts/gstmpegts-private.h:
	* gst-libs/gst/mpegts/gstmpegtsdescriptor.c:
	* gst-libs/gst/mpegts/gstmpegtsdescriptor.h:
	* tests/examples/mpegts/ts-parser.c:
	  mpegts: Handle ISO 13818 / ITU H.222.0 base extension descriptor
	  Previously this was hardcoded to the DVB extension descriptors (0x7f),
	  but it should also be applied for the base specification extension
	  descriptors (0x3f).
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7172>

2024-09-11 19:07:14 +0100 Tim-Philipp Müller <tim@centricular.com>

	* docs/plugins/gst_plugins_cache.json:
	* ext/svtjpegxs/gstsvtjpegxs.c:
	* ext/svtjpegxs/gstsvtjpegxsdec.c:
	* ext/svtjpegxs/gstsvtjpegxsenc.c:
	  svtjpegxs: add to documentation
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7430>

2024-08-30 11:19:06 +0100 Tim-Philipp Müller <tim@centricular.com>

	* ext/svtjpegxs/gstsvtjpegxsenc.c:
	  svtjpegxsenc: put "codestream-length" into caps
	  So consumers can calculate the maximum bitrate (brat)
	  from that for various descriptors, in combination with
	  the framerate.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7430>

2024-08-22 14:15:35 +0100 Tim-Philipp Müller <tim@centricular.com>

	* ext/svtjpegxs/gstsvtjpegxs.c:
	* ext/svtjpegxs/gstsvtjpegxsdec.c:
	* ext/svtjpegxs/gstsvtjpegxsdec.h:
	* ext/svtjpegxs/meson.build:
	  svtjpegxs: add SVT JPEG XS decoder
	  Based on: https://github.com/OpenVisualCloud/SVT-JPEG-XS/
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7430>

2024-07-13 17:45:02 +0200 Tim-Philipp Müller <tim@centricular.com>

	* ext/meson.build:
	* ext/svtjpegxs/gstsvtjpegxs.c:
	* ext/svtjpegxs/gstsvtjpegxsenc.c:
	* ext/svtjpegxs/gstsvtjpegxsenc.h:
	* ext/svtjpegxs/meson.build:
	* meson_options.txt:
	  svtjpegxs: add SVT JPEG XS encoder
	  Based on: https://github.com/OpenVisualCloud/SVT-JPEG-XS/
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7430>

2024-09-09 00:31:21 +0900 Seungha Yang <seungha@centricular.com>

	* sys/nvcodec/gstnvav1encoder.cpp:
	* sys/nvcodec/gstnvcodecutils.cpp:
	* sys/nvcodec/gstnvcodecutils.h:
	* sys/nvcodec/gstnvencoder.cpp:
	* sys/nvcodec/gstnvencoder.h:
	* sys/nvcodec/gstnvh264encoder.cpp:
	* sys/nvcodec/gstnvh265encoder.cpp:
	* sys/nvcodec/meson.build:
	  nvencoder: Add support for d3d12 memory
	  Use the d3d12 -> cuda memory copy helper object in the cuda mode encoder.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7480>

2024-09-08 01:00:12 +0900 Seungha Yang <seungha@centricular.com>

	* sys/nvcodec/gstcudainterop_d3d12.cpp:
	* sys/nvcodec/gstcudainterop_d3d12.h:
	* sys/nvcodec/meson.build:
	  nvcodec: Add a helper object for d3d12 interop
	  Adding a new helper object for d3d12 -> cuda memory copy.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7480>

2024-09-08 21:01:47 +0900 Seungha Yang <seungha@centricular.com>

	* gst-libs/gst/d3d12/gstd3d12utils.cpp:
	* gst-libs/gst/d3d12/gstd3d12utils.h:
	  d3d12: Add gst_d3d12_get_copyable_footprints() method
	  This helper method will calculate the buffer resource size and layout
	  required for (multiple) texture resources to be stored in a single
	  buffer resource.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7480>

2024-09-08 00:06:58 +0900 Seungha Yang <seungha@centricular.com>

	* gst-libs/gst/d3d12/gstd3d12commandlistpool.cpp:
	* gst-libs/gst/d3d12/gstd3d12device.cpp:
	  d3d12device: Hold compute queue
	  The compute queue will be used for async compute tasks or
	  device-to-device memory copies.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7480>

2024-09-14 03:12:46 +0900 Seungha Yang <seungha@centricular.com>

	* gst-libs/gst/cuda/cuda-gst.h:
	* gst-libs/gst/cuda/gstcudacontext.cpp:
	* gst-libs/gst/cuda/gstcudaloader-private.h:
	* gst-libs/gst/cuda/gstcudaloader.cpp:
	* gst-libs/gst/cuda/stub/cuda.h:
	  cuda: Load external resource interop symbols
	  Required for d3d12 interop.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7480>

2024-09-05 22:07:24 +0300 Sebastian Dröge <sebastian@centricular.com>

	* gst-libs/gst/adaptivedemux/gstadaptivedemux.c:
	  video: Don't overshoot QoS earliest time by a factor of 2
	  By setting the earliest time to timestamp + 2 * diff there would be a
	  difference of 1 * diff between the current clock time and the earliest
	  time the element would let through in the future. If e.g. a frame is
	  arriving 30s late at the sink, then not just all frames up to that
	  point would be dropped but also 30s of frames after the current clock
	  time.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7459>

2024-09-11 08:40:42 +0300 Sebastian Dröge <sebastian@centricular.com>

	* gst/mpegtsmux/gstbasetsmux.c:
	  mpegtsmux: Wait for data on all pads before deciding on a best pad unless timing out
	  This makes sure that if upstream has different latencies, we're still
	  outputting buffers with increasing timestamps across the different
	  streams unless buffers are arriving after the latency deadline.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7500>

2024-08-30 01:38:23 +0900 Seungha Yang <seungha@centricular.com>

	* tests/examples/cuda/cudamemorypool.c:
	* tests/examples/cuda/meson.build:
	  examples: Add application CUDA memory pool example
	  An example to show application-managed CUDA memory pool usage.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7427>

2024-08-29 23:52:08 +0900 Seungha Yang <seungha@centricular.com>

	* gst-libs/gst/cuda/gstcudamemory.cpp:
	* gst-libs/gst/cuda/gstcudamemory.h:
	  cuda: Add support for application cuda memory pool
	  Adding a gst_cuda_register_allocator_need_pool_callback() method
	  to support memory allocation from an application's CUmemoryPool.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7427>

2024-08-29 22:18:48 +0900 Seungha Yang <seungha@centricular.com>

	* gst-libs/gst/cuda/gstcuda.h:
	* gst-libs/gst/cuda/gstcudamemorypool.cpp:
	* gst-libs/gst/cuda/gstcudamemorypool.h:
	* gst-libs/gst/cuda/meson.build:
	  cuda: Add CUDA memory pool object
	  Adding a wrapper object for the CUmemoryPool handle to use the native
	  handle in a refcounted way.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7427>

2024-09-10 19:29:44 +0900 Seungha Yang <seungha@centricular.com>

	* docs/libs/cuda/index.md:
	* gst-libs/gst/cuda/gstcudabufferpool.cpp:
	* gst-libs/gst/cuda/gstcudabufferpool.h:
	* gst-libs/gst/cuda/gstcudacontext.cpp:
	  cuda: Add methods to enable stream ordered allocation
	  Adding a prefer-stream-ordered-alloc property to GstCudaContext.
	  If the stream ordered allocation buffer pool option is not configured
	  and this property is enabled, the buffer pool will enable stream
	  ordered allocation. Otherwise it will follow the default behavior.
	  If the GST_CUDA_ENABLE_STREAM_ORDERED_ALLOC env is set, the default
	  behavior is to enable stream ordered allocation. Otherwise the sync
	  alloc/free method will be used.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7427>

2024-08-30 00:39:06 +0900 Seungha Yang <seungha@centricular.com>

	* sys/nvcodec/gstnvencoder.cpp:
	  nvencoder: Disable stream ordered allocation
	  Stream ordered allocation is not supported by the encoder.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7427>

2024-08-29 20:24:56 +0900 Seungha Yang <seungha@centricular.com>

	* sys/nvcodec/gstcudaipcsink.cpp:
	  cudaipcsink: Disable stream ordered allocation
	  Legacy CUDA IPC does not support the default CUDA memory pool.
	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7427>

2024-08-29 20:15:20 +0900 Seungha Yang <seungha@centricular.com>

	* gst-libs/gst/cuda/gstcuda-private.h:
	* gst-libs/gst/cuda/gstcudabufferpool.cpp:
	* gst-libs/gst/cuda/gstcudabufferpool.h:
	* gst-libs/gst/cuda/gstcudamemory.cpp:
	* gst-libs/gst/cuda/gstcudamemory.h:
	  cuda: Add support for stream ordered allocation
	  Default CUDA memory allocation will cause implicit global
	  synchronization.
This stream ordered allocation can avoid it - since memory allocation and free operations are asynchronous - and executed in the associated cuda stream context - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7427> - -2024-08-29 18:23:37 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/cuda/cuda-gst.h: - * gst-libs/gst/cuda/gstcuda-private.h: - * gst-libs/gst/cuda/gstcudacontext.cpp: - * gst-libs/gst/cuda/gstcudaloader-private.h: - * gst-libs/gst/cuda/gstcudaloader.cpp: - * gst-libs/gst/cuda/stub/cuda.h: - cuda: Load stream ordered allocation related symbols - Required to support async memory allocation - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7427> - -2024-08-30 14:59:14 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> - - * gst-libs/gst/wayland/gstwldisplay.c: - * gst-libs/gst/wayland/gstwlshmallocator.c: - * gst-libs/gst/wayland/gstwlshmallocator.h: - wayland: Set a debug category for the shm allocator - None was set, which meant the debug was associated with default. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7482> - -2024-09-09 16:27:43 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> - - * ext/wayland/gstwaylandsink.c: - * gst-libs/gst/wayland/gstwlcontext.c: - * gst-libs/gst/wayland/gstwlcontext.h: - wayland: Fix ABI break in WL context type name - While transforming the internals of waylandsink into a library, the - context type name was accidentally changed, causing an ABI break. Change - it back to its original (as used by the libgstgl), and add support for - the misnamed version as a backward compatibility measure. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7482> - -2024-09-10 00:10:21 +0530 Nirbheek Chauhan <nirbheek@centricular.com> - - * gst-libs/gst/vulkan/gstvkfullscreenquad.c: - * gst-libs/gst/vulkan/gstvkimagebufferpool.c: - * gst-libs/gst/vulkan/gstvkoperation.c: - vulkan: Fix some doc strings and also some g-i warnings - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7481> - -2023-07-18 17:34:54 +0200 Michael Tretter <m.tretter@pengutronix.de> - - * sys/uvcgadget/gstuvcsink.c: - uvcsink: set cur_caps to upstream selected caps - If the UVC gadget announces multiple formats in the descriptors the uvcsink - doesn't select the actual format but lets the UVC host select the format. - If the GStreamer pipeline is started before a UVC host selected the format, - upstream decides on a format until the UVC host has decided. In this case, the - current format needs to be set based on the caps from the caps event to be able - to detect if the format selection by the UVC host requires a format change on - the GStreamer pipeline. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7473> - -2024-09-05 15:04:33 +0200 Michael Tretter <m.tretter@pengutronix.de> - - * sys/uvcgadget/gstuvcsink.c: - uvcsink: skip comparison with prev_caps if they are not set - The uvcsink may be put into the READY state to start listening for UVC requests. - Therefore, the UVC host may set a streaming format before the GStreamer pipeline - is started and the uvcsink received a caps event. In this case, prev_caps will - be NULL. - If the EVENT_CAPS has not been received, skip the check if the format needs to - be changed, since the sink will be started with the format selected by the UVC - host, anyway. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7473> - -2024-08-29 12:16:16 +0200 Edward Hervey <edward@centricular.com> - - * gst/mpegtsmux/gstatscmux.c: - * gst/mpegtsmux/gstbasetsmux.c: - * gst/mpegtsmux/tsmux/tsmux.c: - * gst/mpegtsmux/tsmux/tsmuxstream.c: - * gst/mpegtsmux/tsmux/tsmuxstream.h: - mpegtsmux: Cleanup TsMuxStream fields - Instead of using plenty of case-specific booleans: - * Store type as GstStreamType - * Store unique stream type - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7476> - -2024-09-06 10:51:01 +0200 Edward Hervey <edward@centricular.com> - - * gst-libs/gst/play/gstplay.c: - gstplay: Name the different bus - Makes it clearer when reading logs which one is which - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7462> - -2024-09-06 01:07:43 +0900 Seungha Yang <seungha@centricular.com> - - * sys/nvcodec/gstnvh264encoder.cpp: - * sys/nvcodec/gstnvh265encoder.cpp: - nvencoder: Prefer byte-stream format over packetized - Since the old encoder implementation supported only byte-stream, - prefer the byte-stream format for backward compatibility. - Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3787 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7457> - -2024-09-02 12:15:41 +0200 Michael Scherle <michael.scherle@rz.uni-freiburg.de> - - * sys/va/gstvacompositor.c: - * sys/va/gstvafilter.c: - * sys/va/gstvafilter.h: - * sys/va/gstvavpp.c: - va: restrict interpolation & scaling property to iHD driver - interpolation & scaling is supported for all hardware on - the iHD driver, but not in the mesa driver. 
see: - <https://github.com/intel/media-driver/issues/1843> - <https://gitlab.freedesktop.org/mesa/mesa/-/issues/11803> - improvement of: - <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7301> - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7443> - -2024-09-05 01:14:17 +0800 He Junyan <junyan.he@intel.com> - - * gst-libs/gst/va/gstvavideoformat.c: - va: videoformat: Correct NV21's BPP - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6022> - -2024-09-02 13:18:13 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * sys/va/gstvajpegenc.c: - vajpegenc: set interlace-mode, colorspace and sampling in output caps - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6022> - -2024-09-02 13:17:01 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/va/vasurfaceimage.c: - vasurfaceimage: log surface status string - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6022> - -2024-01-30 23:46:36 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvajpegenc.c: - * sys/va/gstvajpegenc.h: - * sys/va/meson.build: - * sys/va/plugin.c: - va: Implement the vajpegenc plugin - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6022> - -2024-08-30 23:00:48 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvabaseenc.c: - va: baseenc: Check the bitrate property before getting its value - Not all the encoders have the bitrate property, such as the jpeg enc. - We need to check that property before getting its value, or glib - will print warnings. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6022> - -2024-01-30 23:24:18 +0800 He Junyan <junyan.he@intel.com> - - * tests/check/libs/jpegbitwriter.c: - * tests/check/meson.build: - tests: Add the jpeg bit code writer test case - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6022> - -2024-01-30 23:14:39 +0800 He Junyan <junyan.he@intel.com> - - * gst-libs/gst/codecparsers/gstjpegbitwriter.c: - * gst-libs/gst/codecparsers/gstjpegbitwriter.h: - * gst-libs/gst/codecparsers/meson.build: - codecparsers: Implement the jpeg bit code writer - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6022> - -2024-09-05 10:08:17 +0200 Edward Hervey <edward@centricular.com> - - * tests/check/elements/dash_mpd.c: - check: Disable failing test - Test hasn't been properly fixed for several years with modern libsoup, and it is - only for the legacy adaptive demuxer. - Fixes #3783 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7454> - -2024-08-26 14:46:59 +1000 Matthew Waters <matthew@centricular.com> - - * ext/webrtc/gstwebrtcbin.c: - webrtcbin: enable forward-unknown-ssrc on rtpfunnel - See also: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7405 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7409> - -2024-09-03 20:10:42 +0900 Seungha Yang <seungha@centricular.com> - - * sys/dwrite/gstdwriterender_d3d12.cpp: - dwrite: Allow unlimited number of in-flight d3d12 commands - ... so that it can be controlled by global direct command queue. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7444> - -2024-09-03 19:33:41 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12compositor.cpp: - * sys/d3d12/gstd3d12convert.cpp: - * sys/d3d12/gstd3d12testsrc.cpp: - d3d12: Add async-depth property - Adding a property to control the number of in-flight GPU commands - (default is unlimited). Note that actual maximum number is defined - in d3d12device's direct command queue object which is 32 now, - thus total number of scheduled GPU commands cannot exceed 32. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7444> - -2024-09-03 17:04:49 +0200 Piotr Brzeziński <piotr@centricular.com> - - * sys/applemedia/vtenc.c: - vtenc: Use new release_frame/drop_frame encoder API - Replaces usage of gst_video_codec_frame_unref everywhere. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7173> - -2024-09-03 17:00:09 +0200 Piotr Brzeziński <piotr@centricular.com> - - * sys/applemedia/vtenc.c: - * sys/applemedia/vtenc.h: - vtenc: Restart encoding session when certain errors are detected - Sometimes under certain loads, VT can error out with kVTVideoEncoderMalfunctionErr or kVTVideoEncoderNotAvailableNowErr. - These have been reported to happen more often than usual if CopyProperty/SetProperty() is used close to the encode call. - Both can be worked around by restarting the encoding session. - These errors can be returned either directly from VTCompressionSessionEncodeFrame() or later in the encoding callback. - This patch handles both scenarios the same way - a session restart is attempted on the next encode_frame() call. - If the error is returned immediately by the encode call, it's possible that some correct frames will still be given to - the output callback, but for simplicity (+ because I wasn't able to verify this scenario) let's just discard those. 
- In addition, this commit also simplifies the beach/drop logic in enqueue_buffer. - Related bug reports in other projects: - http://www.openradar.me/45889262 - https://github.com/aws/amazon-chime-sdk-ios/issues/170#issuecomment-741908622 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7173> - -2024-09-02 18:25:56 +0900 Hou Qi <qi.hou@nxp.com> - - * gst-libs/gst/play/gstplay.c: - gstplay: check whether stream is seekable before seeking when state changes - If the state is changing from playing to paused and the rate is reset to 1, - which makes the seek position valid, the current code will seek even for - streams that are not seekable. So we need to check whether the stream is - seekable before seeking. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7441> - -2024-08-20 02:01:34 +0100 Tim-Philipp Müller <tim@centricular.com> - - * gst-libs/gst/codecs/gsth264decoder.c: - * gst-libs/gst/glib-compat-private.h: - * sys/va/gstvaav1enc.c: * sys/va/gstvah264enc.c: * sys/va/gstvah265enc.c: + * sys/va/gstvajpegenc.c: * sys/va/gstvavp9enc.c: - * sys/va/meson.build: - gst-plugins-bad: use g_sort_array() instead of deprecated g_qsort_with_data() - Fixes compiler warnings with the latest GLib versions. 
- See https://gitlab.gnome.org/GNOME/glib/-/merge_requests/4127 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7384> - -2024-08-30 09:52:55 +0200 Oskar Fiedot <oskar.fiedot@intel.com> - - * gst-libs/gst/analytics/gstanalyticsclassificationmtd.c: - * gst-libs/gst/analytics/gstanalyticsclassificationmtd.h: - * gst-libs/gst/analytics/gstanalyticsmeta.c: - * gst-libs/gst/analytics/gstanalyticsmeta.h: - * gst-libs/gst/analytics/gstanalyticsobjectdetectionmtd.c: - * gst-libs/gst/analytics/gstanalyticsobjectdetectionmtd.h: - * gst-libs/gst/analytics/gstanalyticsobjecttrackingmtd.c: - * gst-libs/gst/analytics/gstanalyticsobjecttrackingmtd.h: - analytics: Change pointers in getters to const - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7403> - -2024-08-29 12:01:30 +0100 Philippe Normand <philn@igalia.com> - - * ext/webrtc/gstwebrtcbin.c: - * tests/check/elements/webrtcbin.c: - webrtcbin: Prevent crash when attempting to set answer on invalid SDP - If the pending remote description has an invalid BUNDLE group _parse_bundle() - triggers early return from _create_answer_task(), before ret has been - initialized, so it needs to be checked before attempting to call - gst_sdp_message_copy(). 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7423> - -2024-07-27 08:53:47 +0200 Edward Hervey <edward@centricular.com> - - * tests/check/gst-plugins-bad.supp: - bad: Add suppression for libsrt issues - This is not code we control - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7415> - -2024-07-27 08:29:53 +0200 Edward Hervey <edward@centricular.com> - - * tests/check/elements/lc3.c: - check: Fix leak in lc3 test - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7415> - -2024-08-27 11:52:08 +0200 Carlos Bentzen <cadubentzen@igalia.com> - - * ext/webrtc/gstwebrtcbin.c: - * tests/check/elements/webrtcbin.c: - webrtcbin: fix regression with missing RTP header extensions in Answer SDP - webrtcsrc first creates recvonly transceivers with codec-preferences - and expects that after applying a remote description, the - previously created transceivers are used rather than having new - transceivers created. - When pairing webrtcsink + webrtcsrc, the offer sdp from webrtcsink has a media - section with sendonly direction. In !7156, which was implemented following - RFC9429 Section 5.10, we only reuse an unassociated transceiver when applying a - remote description if the media is sendrecv or recvonly, and that caused creation - of new transceivers when applying a remote offer in webrtcsrc, thus losing - information from codec preferences like the RTP extension headers in the - previously created transceivers. - Since the change in !7156 broke existing code from webrtcsrc, relax the condition - for reusing unassociated transceivers and add a test to document this behavior which - wasn't covered by any tests before. - Fixes #3753. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7417> - -2024-08-21 13:23:36 +0100 Francis Quiers <fquiers@cisco.com> - - * docs/plugins/gst_plugins_cache.json: - * ext/voamrwbenc/gstvoamrwbenc.c: - voamrwbenc: fix list of bitrates - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7396> - -2024-08-09 09:41:07 +0000 Daniel Pendse <daniel.pendse@spiideo.com> - - * docs/plugins/gst_plugins_cache.json: - * gst/rtmp2/rtmp/rtmpclient.c: - * gst/rtmp2/rtmp/rtmpclient.h: - rtmp2: Add llnw auth support to rtmp client - Add support for Limelight CDN (llnw) authentication. Inspired - by the ffmpeg implementation of llnw auth. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7410> - -2024-07-25 17:50:26 +0200 Jan Alexander Steffens (heftig) <jan.steffens@ltnglobal.com> - - * gst/videoparsers/gsth264parse.c: - * gst/videoparsers/gsth265parse.c: - h264parse, h265parse: Fix time code calculation - We need to multiply by the nuit_field_based_flag before scaling, or - we'll lose precision and end up only adding even timecodes. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7241> - -2024-08-23 16:21:43 +0200 RSWilli <bartel.wilhelm@gmail.com> - - * gst-libs/gst/webrtc/webrtc_fwd.h: - webrtc: fix documentation error in GstWebRTCKind - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7407> - -2024-08-08 06:23:47 +0900 Seungha Yang <seungha@centricular.com> - - * docs/plugins/gst_plugins_cache.json: - * gst/timecode/gsttimecodestamper.c: - * gst/timecode/gsttimecodestamper.h: - timecodestamper: Add running-time source mode - Add a new source mode "running-time". 
This mode will convert buffer - running time into timecode - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7322> - -2024-08-21 09:24:58 -0400 Thibault Saunier <tsaunier@igalia.com> - - * tests/validate/autovideoconvert/renegotiate/flow-expectations/log-^convert-src$-expected: - ci: Fail tests if we forget to checkout expectation files - And add missing expectation files - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7400> - -2024-08-20 22:09:13 +1000 Jan Schmidt <jan@centricular.com> - - * gst-libs/gst/player/gstplayer.c: - gstplayer: Check GstPlayerSignalDispatcher type - Before trying to retrieve a GMainContext from a provided - GstPlayerSignalDispatcher, check that it is actually - GstPlayerGMainContextSignalDispatcher. If not, use the - default GMainContext for dispatching signals via the adapter - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7392> - -2024-08-21 09:19:39 +0200 Guillaume Desmottes <guillaume.desmottes@onestream.live> - - * ext/wpe/gstwpesrcbin.cpp: - wpe: fix gst-launch example - wpesrc does not have num-buffers property but wpevideosrc does. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7389> - -2024-06-07 00:01:10 +0900 Seungha Yang <seungha@centricular.com> - - * tests/examples/cuda/cuda-template.c: - * tests/examples/cuda/meson.build: - * tests/examples/cuda/template-plugin/cuda-transform-ip-template.c: - * tests/examples/cuda/template-plugin/cuda-transform-ip-template.h: - * tests/examples/cuda/template-plugin/plugin.c: - examples: Add CUDA based in-place transform element example - Adding a CUDA example element for plugin developers - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7004> - -2024-08-20 19:20:34 +1000 Jan Schmidt <jan@centricular.com> - - * tests/check/elements/webrtcbin.c: - webrtc: Fix racy unit test - Don't reuse the same stats state structure across multiple - get-stats calls. Make each callback take a copy of the - non-changing fields it needs and use a local working copy - to avoid crashing. - Fixes occasional crashes of the - unit test introduced in MR !7338 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7387> - -2024-08-20 18:57:50 +1000 Jan Schmidt <jan@centricular.com> - - * ext/webrtc/gstwebrtcstats.c: - webrtcbin: Always populate rtp-inbound stats fields - Even if there's no jitterbuffer yet for an incoming stream, - make sure to populate the mandatory statistics with 0 entries. 
- Fixes occasional failures of the - unit test introduced in MR !7338 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7387> - -2024-08-05 11:46:28 +0200 Michael Scherle <michael.scherle@rz.uni-freiburg.de> - - * sys/va/gstvacompositor.c: - * sys/va/gstvafilter.c: - * sys/va/gstvafilter.h: * sys/va/gstvavpp.c: - va: add interpolation method for scaling - For description of interpolation methods, see: - <https://intel.github.io/libva/structVAProcPipelineParameterBuffer.html#abb95e119ed7f841f71b2afbec2104784> - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7301> - -2024-08-19 14:34:28 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * sys/va/gstvabasedec.c: - * sys/va/gstvabaseenc.c: - * sys/va/gstvabasetransform.c: - * sys/va/gstvacompositor.c: - va: don't use GST_ELEMENT_WARNING in set_context() vmethod - Since bins can set the context of their children elements, the set_context() - vmethod shouldn't call bus messages post methods, since it locks the parent - object, the bin, which might be already locked, leading to a deadlock. - Fixes: #3706 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7378> - -2024-08-16 22:33:03 +1000 Jan Schmidt <jan@centricular.com> - - * ext/webrtc/gstwebrtcbin.c: - * tests/check/elements/webrtcbin.c: - webrtcbin: Fix uint64 -> uint confusion for ice-candidate priority - ICE candidate priority is a 32-bit field and reported as such in the - webrtcbin statistics, but the documentation was incorrect, and the - unit test was looking for a uint64. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7338> - -2024-08-12 22:17:14 +1000 Jan Schmidt <jan@centricular.com> - - * ext/webrtc/gstwebrtcstats.c: - * tests/check/elements/webrtcbin.c: - webrtcbin: Fixes for bundled statistics generation - When multiple streams are bundled on the same transport, - the statistics would end up incorrectly generated, - as each pad would regenerate stats for every ssrc on the - transport, overwriting previous iterations and assigning - bogus media kind and other values to the wrong ssrc. - Fix by making sure each pad only loops and generates - statistics for the one ssrc that pad is receiving / sending. - Add a unit test that the codec kind field in RTP statistics - are now generated correctly. - Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/2555 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7338> - -2024-07-30 21:59:53 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12swapchainsink.cpp: - * sys/d3d12/gstd3d12swapchainsink.h: - * sys/d3d12/meson.build: - * sys/d3d12/plugin.cpp: - * tests/examples/d3d12/d3d12swapchainsink-win32.cpp: - * tests/examples/d3d12/d3d12swapchainsink-winrt.cpp: - * tests/examples/d3d12/meson.build: - d3d12: Add d3d12swapchainsink element - Adding a new videosink element for Windows composition API based - applications. Unlike d3d12videosink, this element will create only - DXGI swapchain by using IDXGIFactory2::CreateSwapChainForComposition() - without actual window handle, so that video scene can be composed - via Windows native composition API, such as DirectComposition. - Note that this videosink does not support GstVideoOverlay interface - because of the design. 
- The swapchain created by this element can be used with - * DirectComposition's IDCompositionVisual in Win32 app - * WinRT and WinUI3's UI.Composition in Win32/UWP app - * UWP and WinUI3 XAML's SwapChainPanel - See also examples in this commit which show usage of the videosink - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7287> - -2024-08-08 14:09:20 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * sys/va/gstvah264enc.c: - vah264enc: fix typo - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7337> - -2024-08-06 10:59:32 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * sys/va/gstvaav1dec.c: - * sys/va/gstvaav1enc.c: - * sys/va/gstvabaseenc.c: - * sys/va/gstvah264enc.c: - * sys/va/gstvah265enc.c: - * sys/va/gstvavp8dec.c: - * sys/va/gstvavp9enc.c: - va: replace %d with %u format for system_frame_number guint32 variable - And also fixed the format for other less frequently printed variables. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7337> - -2024-08-06 10:58:29 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * sys/va/gstvah264enc.c: - vah264enc: update b_pyramid property if it changes - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7337> - -2024-08-06 10:57:56 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * sys/va/gstvah264enc.c: - * sys/va/gstvah265enc.c: - vah26xenc: use gst_h26x_slice_type_to_string() - Rather than custom function. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7337> - -2024-08-16 14:47:52 +1000 Jan Schmidt <jan@centricular.com> - - * tests/check/elements/webrtcbin.c: - tests/webrtcbin: Add a lock around the stats test - Prevent any race if both webrtcbin end up generating their - statistics simultaneously, however unlikely. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7365> - -2024-08-16 14:46:19 +1000 Jan Schmidt <jan@centricular.com> - - * tests/check/elements/webrtcbin.c: - tests/webrtcbin: Fix racy rollback test - Prevent the default webrtc test machinery from attempting to - create and set an answer when we're just testing rollback - of the offers. Add some locking / waiting to ensure the test - is complete before exiting. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7365> - -2024-08-16 08:58:47 +1000 Jan Schmidt <jan@centricular.com> - - * tests/check/elements/webrtcbin.c: - tests/webrtcbin: Use fail_unless_matches_string() - Use pattern matching against expected error strings that - might include internal element names, where the names - are default assigned with incrementing integers. When running - with CK_FORK=no, there may have been previous tests that - ran in the same process and incremented the counters more - than when running in the default fork-per-test mode. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7365> - -2024-08-13 23:55:47 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvadeinterlace.c: - va: deinterlace: Do not use the backward reference - num_backward_references > 0 means we need to cache several frames - after the current frame. But the basetransform class does not - provide any _drain() kind function, so we do not have the chance - to push out our cached frames when EOS or set caps event comes. - Rather than losing the last several frames, we should just give up - the backward reference here. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7348> - -2024-08-13 22:41:00 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvadeinterlace.c: - va: deinterlace: Push the forgotten leading frames if forward reference > 0 - The current code forgets to push the first several frames if the forward - reference > 0. 
They are just cached in the history array and will never be - deinterlaced and pushed. - For the first several frames, even if the forward reference frames are not - enough, we still need to deinterlace them as normal and push them after that. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7348> - -2024-08-14 19:44:40 +0800 Qian Hu (胡骞) <qian.hu@mediatek.com> - - * gst/jpegformat/gstjpegparse.c: - jpegparse: fix incorrect reading of transform in app14 marker - "adobe" in the app14 marker seems not to be a null-terminated string, so when - we use gst_byte_reader_get_string_utf8, more bytes will be read until a - null, and "gst_byte_reader_get_uint8 (&reader, &transform)" will almost always fail - to read transform - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7356> - -2024-08-14 10:45:43 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - vkencoder-private: remove duplicated structure definition - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7354> - -2024-08-14 10:30:35 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - vkencoder-private: don't override error on get_format() call - If gst_vulkan_video_encoder_get_format() fails it fills the error structure, so - it shouldn't be filled again. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7354> - -2024-08-12 17:29:18 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - vkencoder-private: There's no need to store the aligned offset of 0 - Since it's 0 too. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7354> - -2024-08-12 17:27:35 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - vkencoder-private: use g_clear_pointer to unref packed headers - And use g_ptr_array_unref() instead of the unrecommended g_ptr_array_free(). - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7354> - -2024-08-12 16:58:27 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - vkencoder-private: don't check twice for encoder parameter - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7354> - -2024-08-12 16:57:59 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkencoder-private.c: - * gst-libs/gst/vulkan/gstvkencoder-private.h: - vkencoder-private: fix code style - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7354> - -2024-07-22 21:29:38 +0800 Qian Hu (胡骞) <qian.hu@mediatek.com> - - * gst-libs/gst/codecparsers/gsth264parser.c: - * gst-libs/gst/codecparsers/gsth265parser.c: - * tests/check/libs/h264parser.c: - h26xparse: bypass check for length_size_minus_one - fix playback failure for files with length_size_minus_one == 2 - According to the spec 2 cannot be a valid value, so that stream has a - bad config record, but breaking the decoding because of that is perhaps too much. 
and ffmpeg does not seem to check this - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7213> - -2024-05-21 22:28:05 +0300 Jordan Petridis <jordan@centricular.com> - - * docs/plugins/gst_plugins_cache.json: - * gst/rtmp2/gstrtmp2sink.c: - * gst/rtmp2/gstrtmp2src.c: - * gst/rtmp2/rtmp/amf.c: - * gst/rtmp2/rtmp/amf.h: - * gst/rtmp2/rtmp/rtmpclient.c: - * gst/rtmp2/rtmp/rtmpclient.h: - * gst/rtmp2/rtmp/rtmpconnection.c: - * gst/rtmp2/rtmp/rtmpconnection.h: - rtmp2: reimplement librtmp's connection parameters for the connect packet - librtmp allows for attaching arbitrary AMF objects to the end of the - connect packet, and this is commonly used for authenticating with - servers. - Add a new property, extra-connect-args, that mimics librtmp's behavior. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7054> - -2024-08-13 10:42:31 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * sys/msdk/gstmsdkav1enc.c: - * sys/msdk/gstmsdkcontext.c: - * sys/msdk/gstmsdkh264enc.c: - * sys/msdk/gstmsdkh265enc.c: - * sys/msdk/gstmsdkmpeg2enc.c: - * sys/msdk/gstmsdkvc1dec.c: - * sys/msdk/gstmsdkvp9enc.c: - msdk: replace strcmp with g_strcmp0 - Because strcmp doesn't handle NULL. - Fixes: #3721 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7347> - -2024-06-23 23:09:00 +0200 Marijn Suijten <marijns95@gmail.com> - - * gst-libs/gst/vulkan/gstvkutils.c: - vulkan: Replace open-coded precondition checks with g_return_val_if_fail - While analyzing gst_vulkan_get_or_create_image_view_with_info() it - seems obvious that this function returns NULL, and that this should be - covered in the return annotations. However, closer inspection indicates - that this is only a precondition check when the incoming arguments are - incompatible with each other, and should not be considered as a function - that optionally returns a pointer. 
- Signify this by using precondition checks instead of an open-coded - if-return-NULL. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5736> - -2023-11-29 23:23:46 +0100 Marijn Suijten <marijns95@gmail.com> - - * gst-libs/gst/vulkan/gstvkdevice.c: - vulkan: Annotate queue getter as nullable - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5736> - -2023-11-29 20:05:18 +0100 Marijn Suijten <marijns95@gmail.com> - - * gst-libs/gst/vulkan/gstvkbuffermemory.c: - * gst-libs/gst/vulkan/gstvkbuffermemory.h: - * gst-libs/gst/vulkan/gstvkmemory.c: - * gst-libs/gst/vulkan/gstvkmemory.h: - * gst-libs/gst/vulkan/gstvkutils.c: - * gst-libs/gst/vulkan/gstvkutils.h: - vulkan: Mark some pointers to Vulkan info structures as const - These pointers are only used as read-only arguments, and should not be - treated as mutable. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5736> - -2023-11-29 19:46:49 +0100 Marijn Suijten <marijns95@gmail.com> - - * gst-libs/gst/vulkan/gstvkdecoder-private.c: - vulkan: Add missing `out` annotation to `decoder_out_format()` - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5736> - -2023-11-28 10:54:27 +0100 Marijn Suijten <marijns95@gmail.com> - - * gst-libs/gst/vulkan/gstvkdevice.c: - * gst-libs/gst/vulkan/gstvkdisplay.c: - * gst-libs/gst/vulkan/gstvkinstance.c: - * gst-libs/gst/vulkan/gstvkqueue.c: - vulkan: Fix context get/set annotations - Most notably the out annotations for gst_context_get_* were missing, - causing us to generate the wrong bindings for Rust. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5736> - -2024-08-01 13:42:52 +1000 Jan Schmidt <jan@centricular.com> - - * ext/webrtc/gstwebrtcbin.c: - * tests/check/elements/webrtcbin.c: - webrtcbin: Fix renegotiation checks - When checking for renegotiation against a local offer, - reverse the remote direction in the corresponding answer - to fix falsely not triggering on-negotiation-needed when - switching (for example) from local sendrecv -> recvonly - against a peer that answered 'recvonly'. - In the other direction, when the local was the answerer, - renegotiation might trigger when it didn't need to - - whenever the local transceiver direction differs from - the intersected direction we chose. Instead what we want - is to check if the intersected direction we would now - choose differs from what was previously chosen. - This makes the behaviour in both cases match the - behaviour described in - https://www.w3.org/TR/webrtc/#dfn-check-if-negotiation-is-needed - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7303> - -2024-08-08 14:36:19 +0200 Benjamin Gräf <benjamin.graef@zuehlke.com> - - * docs/plugins/gst_plugins_cache.json: - * sys/decklink/gstdecklink.cpp: - * sys/decklink/gstdecklink.h: - decklink: Add support for all modes of Quad HDMI recorder - By extending the GstDecklinkModeEnum with the additional modes supported by the Quad HDMI recorder, - we avoid using mode = 0 in case any of these resolutions is returned by the card. 
- Fixes #3713 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7302> - -2024-08-08 13:18:42 +0100 Tim-Philipp Müller <tim@centricular.com> - - * gst-libs/gst/mpegts/gstmpegtsdescriptor.h: - mpegts: fix stray gtk-doc chunk - Trips up g-ir-scanner it seems: - gstmpegtsdescriptor.h:614: Error: GstMpegts: Skipping invalid GTK-Doc comment block - https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6793#note_2517855 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7324> + va: remove unused headers + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8626> -2024-08-08 16:37:35 +0800 Shengqi Yu <shengqi.yu@mediatek.com> - - * gst/autoconvert/gstbaseautoconvert.c: - baseautoconvert: correct mistake in printing log - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7323> - -2024-08-07 19:14:26 +0100 Tim-Philipp Müller <tim@centricular.com> - - * docs/plugins/gst_plugins_cache.json: - * ext/aom/gstav1enc.c: - aom: av1enc: restrict allowed input width and height - Restrict allowed input resolution to something sensible - in light of libaom CVE-2024-5171. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7320> - -2024-08-05 22:10:28 +1000 Jan Schmidt <jan@centricular.com> - - * ext/webrtc/gstwebrtcbin.c: - * gst-libs/gst/webrtc/rtcsessiondescription.c: - * tests/check/elements/webrtcbin.c: - webrtcbin: Make basic rollbacks work - Fixes for basic rollback (from have-local-offer or have-remote-offer to - stable). 
Allow having no SDP attached to the webrtc session description - in that case, and avoid all the transceiver and ICE update logic - normally applied when entering the stable signalling state - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7304> - -2024-08-06 22:48:16 +1000 Jan Schmidt <jan@centricular.com> - - * gst-libs/gst/webrtc/webrtc_fwd.h: - webrtc: Add missing G_BEGIN/END_DECLS in header - Fix using webrtc.h from C++ by adding the GLib begin/end - decls markers around the header contents in webrtc_fwd.h - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7312> - -2024-08-06 10:03:55 +1000 Matthew Waters <matthew@centricular.com> - - * sys/decklink/gstdecklinkvideosrc.cpp: - decklink: fix win32 build error - This was not caught by the CI in the MR. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7307> - -2024-07-22 23:55:48 +1000 Matthew Waters <matthew@centricular.com> - - * docs/plugins/gst_plugins_cache.json: - * sys/decklink/gstdecklink.cpp: - * sys/decklink/gstdecklink.h: - * sys/decklink/gstdecklinkvideosink.cpp: - * sys/decklink/gstdecklinkvideosink.h: - * sys/decklink/gstdecklinkvideosrc.cpp: - * sys/decklink/gstdecklinkvideosrc.h: - decklink: add support for HDR output and input - Supports PQ and HLG static metadata. - Support for HDR is queried from the device and selectively enabled when - supported. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7214> - -2024-07-30 12:49:04 +1000 Jan Schmidt <jan@centricular.com> - - * docs/plugins/gst_plugins_cache.json: - * ext/webrtc/gstwebrtcbin.c: - * ext/webrtc/gstwebrtcbin.h: - webrtc: Add reuse-source-pads property - Add a property to avoid sending EOS on source pads when the - associated transceiver becomes inactive during renegotiation. - This allows the pads to become active again in a later - renegotiation. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7237> - -2024-07-25 21:25:58 +1000 Jan Schmidt <jan@centricular.com> - - * gst-libs/gst/webrtc/rtptransceiver.c: - webrtc: Fix transceiver `current-direction` property - Fix a typo registering the `current-direction` property - that made it just be a proxy for `direction` instead. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7237> - -2024-07-24 20:59:51 +1000 Jan Schmidt <jan@centricular.com> - - * ext/webrtc/gstwebrtcbin.c: - webrtc: Fixes for matching pads to unassociated transceivers - Fix an inverted condition when checking if sink pad caps match - the codec-preference of an unassociated transceiver, and - fix a condition check for transceiver media kind to - avoid matching sinkpad requests where caps aren't provided - against unassociated transceivers where the caps might - not match later. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7237> - -2024-07-24 20:58:01 +1000 Jan Schmidt <jan@centricular.com> - - * ext/webrtc/gstwebrtcbin.c: - webrtcbin: track maximum pad serial better - If a sink pad with a specific index is requested, also - increase the maximum pad serial number if necessary, so - that mixing fixed sink_X requests with unspecific sink_%u - requests works. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7237> - -2024-08-02 11:21:13 +0200 Carlos Bentzen <cadubentzen@igalia.com> - - * ext/webrtc/gstwebrtcbin.c: - * tests/check/elements/webrtcbin.c: - webrtcbin: connect output stream on recv transceivers - With MR 7156, transceivers and transports are created earlier, - but for sendrecv media we could get `not-linked` errors due to - transportreceivebin not being connected to rtpbin yet when incoming - data arrives. - This condition wasn't being tested in elements_webrtcbin, but could be - reproduced in the webrtcbidirectional example. 
This commit now also - adds a test for this, so that this doesn't regress anymore. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7294> - -2024-08-02 11:19:56 +0200 Carlos Bentzen <cadubentzen@igalia.com> - - * ext/webrtc/gstwebrtcbin.c: - webrtcbin: reverse direction from remote media - This had been overlooked in the spec. We need to reverse - the remote media direction when setting the transceiver direction. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7294> - -2024-04-08 21:38:19 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvah264enc.c: - vah264enc: Fix intra only stream bug - When we set "ref-frames=0" to generate an intra only stream, the current - encoder just generates an assert and exits with an error. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6577> - -2024-04-01 16:56:23 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvah264enc.c: - vah264enc: Improve B pyramid mode in H264 - If the reference frame number is bigger than 2, we can enable the - pyramid B mode. We do not need to assign a reference frame to each - pyramid level. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6577> - -2024-04-01 23:54:04 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvah264enc.c: - va: h264enc: Make the level table aligned - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6577> - -2024-08-02 05:21:34 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12dxgicapture.cpp: - d3d12screencapturesrc: Always release acquired frame - AcquireNextFrame() call should be paired with ReleaseFrame(). 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7293> - -2024-08-02 04:07:18 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12dxgicapture.cpp: - d3d12screencapturesrc: Do not recreate d3d11 device on capture error - Already opened d3d11 device including shader pipeline can be reused - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7293> - -2024-08-02 03:02:08 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12dxgicapture.cpp: - d3d12screencapturesrc: Fix deadlock on error - Don't try to wait for non-signalled fence - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7293> - -2024-07-30 09:27:49 +0000 Michael Scherle <michael.scherle@rz.uni-freiburg.de> - - * sys/msdk/gstmsdkvpp.c: - * sys/msdk/gstmsdkvpp.h: - * sys/msdk/msdk-enums.c: - * sys/msdk/msdk-enums.h: - msdkvpp: add interpolation method - For description of interpolation modes, see: - <https://intel.github.io/libvpl/latest/API_ref/VPL_enums.html#interpolationmode>. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7278> - -2024-07-03 07:58:58 -0600 Jordan Yelloz <jordan.yelloz@collabora.com> - - * gst/videoparsers/gsth265parse.c: - h265parse: Reject FD received before SPS - A previous fix, a275e1e029e9b5d88be26b8304c9a162e4567346, is correct but was too - permissive since it treats all un-matched NAL units the same as AU delimiters - even though some other NAL unit types can be encountered in the processing loop. - The problem this can cause is that some hardware decoders experience bad - performance when handling FD units that precede the SPS. - This change restores the original behavior for FDs so that they're ignored until - the SPS is received and it preserves the codec conformance test gains that the - fix has achieved. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7166> - -2024-07-29 22:49:03 +1000 Matthew Waters <matthew@centricular.com> - - * gst-libs/gst/cuda/cuda-gst.h: - * gst-libs/gst/cuda/gstcuda-private.h: - * gst-libs/gst/cuda/gstcudaloader.cpp: - * gst-libs/gst/cuda/gstcudautils.cpp: - * gst-libs/gst/cuda/gstcudautils.h: - * gst-libs/gst/cuda/meson.build: - * gst-libs/gst/cuda/stub/cudaEGL.h: - * sys/nvcodec/meson.build: - * sys/nvcodec/plugin.c: - cuda/nvcodec: Add support for importing and producing embedded NVMM memory - As produced on the Nvidia Jetson series of devices. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7274> - -2024-08-01 11:12:56 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkimagebufferpool.c: - vkimagebufferpool: fix documentation grammar - Original-patch-by: Matthew Waters <matthew@centricular.com> - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7288> - -2024-07-10 10:34:19 +0200 Carlos Bentzen <cadubentzen@igalia.com> - - * ext/webrtc/gstwebrtcbin.c: - * ext/webrtc/webrtcsdp.c: - * ext/webrtc/webrtcsdp.h: - * tests/check/elements/webrtcbin.c: - webrtcbin: create and associate transceivers earlier in negotiation - According to https://w3c.github.io/webrtc-pc/#set-the-session-description - (steps in 4.6.10.), we should be creating and associating transceivers when - setting session descriptions. - Before this commit, webrtcbin deviated from the spec: - 1. Transceivers from sink pads were created when the sink pad was - requested, but not associated after setting local description, only - when signaling is STABLE. - 2. Transceivers from remote offers were not created after applying - the remote description, only when the answer is created, and were then - only associated once signaling is STABLE. 
- This commit makes webrtcbin follow the spec more closely with regards to - timing of transceivers creation and association. - A unit test is added, checking that the transceivers are created and - associated after every session description is set. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7156> - -2024-07-29 20:59:58 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * ext/vulkan/vkupload.c: - vulkanupload: honor downstream pool allocation parameters - If a downstream buffer pool is offered, vulkanupload checks its allocation - parameters to honor them. Only adds to usage the TRANSFER bits, which are - required to upload buffers. - Also, fail if the buffer pool cannot be configured with the current parameters. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7219> - -2024-07-29 19:06:34 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkimagebufferpool.c: - * gst-libs/gst/vulkan/gstvkimagebufferpool.h: - vkimagebufferpool: expose config_get_allocation_params() - Also enhanced the documentation and added a config parameter check for - gst_vulkan_image_buffer_pool_config_set_allocation_params() - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7219> - -2024-07-26 17:13:10 +0200 Guillaume Desmottes <guillaume.desmottes@onestream.live> - - * ext/rsvg/gstrsvgoverlay.c: - rsvgoverlay: add debug category - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7246> - -2024-07-19 14:00:45 -0400 Daniel Morin <daniel.morin@collabora.com> - - * ext/onnx/gstonnxclient.cpp: - onnx: fix formatting - Code alignment was not always consistent - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7205> - -2024-07-25 17:06:39 +0200 Edward Hervey <edward@centricular.com> +2025-03-08 12:07:11 +0000 Philippe Normand <philn@igalia.com> - * gst-libs/gst/vulkan/gstvkvideoutils.h: - vulkan: Add missing 
since markers - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7277> + * gst/codecalpha/gstalphacombine.c: + alphacombine: De-couple flush-start/stop events handling + There is no guarantee that any FLUSH_STOP event is preceded by a FLUSH_START. + The element now stops flushing once it has received a FLUSH_STOP on all its sink + pads. + Fixes #4174 + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8604> -2024-07-25 16:28:04 +0200 Edward Hervey <edward@centricular.com> +2025-03-10 13:14:07 -0300 Thibault Saunier <tsaunier@igalia.com> - * ext/ldac/ldac-plugin.c: - * ext/svtav1/gstsvtav1enc.c: * ext/svthevcenc/gstsvthevcenc.c: - bad: Add missing plugin since - These predate the current stable release. This was never spotted since they weren't built - on the CI - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7277> - -2024-07-27 06:52:49 +0530 Nirbheek Chauhan <nirbheek@centricular.com> - - * sys/applemedia/vtenc.c: - vtenc: Do not set the CRF/Quality parameter with ProRes - It's not supported with ProRes; setting the property will fail. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7232> - -2024-07-27 06:52:25 +0530 Nirbheek Chauhan <nirbheek@centricular.com> - - * sys/applemedia/vtenc.c: - vtenc: Log warnings when setting a property fails - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7232> - -2024-07-27 06:27:14 +0530 Nirbheek Chauhan <nirbheek@centricular.com> - - * sys/applemedia/vtenc.c: - * sys/applemedia/vtenc.h: - vtenc: Add max-frame-delay property - This controls the number of frames allowed in the compression window. - Not all encoders and implementations support it; I've only managed to - successfully use it with ProRes. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7232> - -2024-07-27 05:47:34 +0530 Nirbheek Chauhan <nirbheek@centricular.com> - - * sys/applemedia/vtenc.c: - vtenc: Support emulating CBR mode with data rate limits - CBR is only supported on Apple Silicon, and this "emulation" works - surprisingly well. We set the window size to a single frame, and don't - set ABR at all. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7232> - -2024-07-27 05:39:53 +0530 Nirbheek Chauhan <nirbheek@centricular.com> - - * sys/applemedia/vtenc.c: - * sys/applemedia/vtenc.h: - vtenc: Add new property for setting data rate limits - This proxies kVTCompressionPropertyKey_DataRateLimits, except it - only supports a single limit for now. - https://developer.apple.com/documentation/videotoolbox/kvtcompressionpropertykey_dataratelimits - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7232> - -2024-07-25 04:36:09 +0530 Nirbheek Chauhan <nirbheek@centricular.com> - - * sys/applemedia/vtenc.c: - * sys/applemedia/vtenc.h: - vtenc: Add support for constant bitrate encoding - Only supported on macOS 13.0+ and iOS 16.0+ on Apple Silicon. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7232> - -2024-07-25 03:04:43 +0530 Nirbheek Chauhan <nirbheek@centricular.com> - - * sys/applemedia/vtenc.c: - vtenc: Eliminate some needless complex code - We do not need a helper that takes a lock to fetch the values of these - properties. There is no race being prevented. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7232> - -2024-07-25 03:03:41 +0530 Nirbheek Chauhan <nirbheek@centricular.com> - - * sys/applemedia/vtenc.c: - vtenc: Fix setting of quality property - gst_vtenc_set_quality() will never actually set the VT compression - property, because it tries to set it on self->session which is not - initialized at this point. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7232> - -2024-07-23 14:12:07 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * ext/vulkan/vkupload.c: - vulkanupload: comment zero value usage with VK_ACCESS_NONE - Zero is used only for Vulkan versions prior to 1.3, because it wasn't defined - before. - Just for readability. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7247> - -2024-07-26 17:26:09 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkimagebufferpool.c: - vkimagebufferpool: refactor how image usage is set - Now that the driver version is expected to be equal to or newer than 1.3.275, the bug - in NVIDIA and RADV regarding usage is solved, and we can revert commit b7ded81f7b. - Also this patch sets the internal usage variable after all the validations are - run, so the state doesn't keep an invalid usage. - Finally, the now unused supported_usage variable is dropped. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7247> - -2024-07-23 14:11:30 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkimagebufferpool.c: - vkimagebufferpool: add encoding usage as video usage - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7247> - -2024-07-23 14:07:26 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkimagebufferpool.c: - vkimagebufferpool: reset the number of profiles at set_config() - Virtual method set_config() can be called several times, and if the number of - profiles counter isn't reset the pool will reach an error state. - The purpose of number of profiles is to check the number of valid vulkan video - profiles (two in the case of transcoding use-case, for example) so it's local to - set_config() virtual method. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7247> - -2024-07-29 10:29:11 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/va/gstvaallocator.c: - va: refactor dmabuf handle close - Moved the close loop into a function guarded for non-win32 platforms. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7254> - -2024-07-28 02:01:24 +0900 Seungha Yang <seungha@centricular.com> - - * sys/qsv/gstqsvav1enc.cpp: - * sys/qsv/gstqsvh264enc.cpp: - * sys/qsv/gstqsvh265enc.cpp: - * sys/qsv/gstqsvjpegenc.cpp: - * sys/qsv/gstqsvvp9enc.cpp: - qsv: Fix critical warnings - Fixing warnings - GStreamer-CRITICAL **: 01:21:25.862: gst_value_set_int_range_step: - assertion 'start < end' failed - Although the QSV runtime reports a codec as supported, the resolution query - sometimes fails, especially in the VP9 encoder case on Windows. - Don't try to register an element if the resolution query returned an error - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7250> - -2024-07-27 02:18:45 +0530 Nirbheek Chauhan <nirbheek@centricular.com> - - * ext/svtav1/gstsvtav1enc.c: - svtav1enc: Fix segfault when flushing - gst_video_encoder_get_oldest_frame() is nullable, and will signal that - all frames are handled by returning NULL. 
- Fixes #3650 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7249> - -2024-07-27 04:16:16 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12frame.cpp: - d3d12frame: Fix frame copy method - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7243> - -2024-07-27 03:50:19 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12memory.cpp: - d3d12memory: Check heap flag before trying to create NT handle - CreateSharedHandle() will fail eventually if the resource was created - with a non-shared heap. Instead of trying to create the handle blindly, - validate the resource first. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7243> - -2024-07-27 03:39:22 +0900 Seungha Yang <seungha@centricular.com> - - * sys/dwrite/gstdwriterender_d3d12.cpp: - dwrite: Prefer d3d12 resource allocated with shared heap - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7243> - -2024-07-26 02:46:46 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12fencedatapool.cpp: - d3d12: Suppress fence data object leak report - We don't release GstD3D12Device intentionally, thus - a GstD3D12FenceDataPool owned by a device will not be released - but that's an expected leak. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7243> - -2024-07-26 02:37:20 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12utils.cpp: - d3d12: Fix debug category name - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7243> - -2024-07-26 02:17:07 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12memorycopy.cpp: - d3d12download: Do not overwrite fence of non-writable memory - A fence configured in GstD3D12Memory should be used only for - write access to be completed. 
And because d3d12 -> d3d11 copy path - is read access to d3d12 resource, we should not set fence to - memory. Otherwise another read access to the d3d12 resource - will wait for d3d11 device context's copy operation although - simultaneous read access is allowed. - Use background thread to keep d3d12 resource and wait for d3d11 device's - copy operation instead. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7243> - -2024-07-25 22:44:51 +1000 Jan Schmidt <jan@centricular.com> - - * gst-libs/gst/va/gstvaallocator.c: - va: Fix dmabuf handle leaks - Close dmabuf handles manually when they're not going to - be passed into GStreamer FD memory, to avoid fd handle - leaks. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7240> - -2024-07-08 16:42:58 -0600 Jordan Yelloz <jordan.yelloz@collabora.com> - - * gst/mpegtsmux/tsmux/tsmux.c: - tsmux: Adjust byte counter when adjusting bitrate - When configured in constant bitrate mode, the muxer computes timing information - using the configured bitrate and the byte counter (now = bytes sent / byterate). - When an application changes the bitrate in CBR mode during playback, the - relationship between bytes sent and bitrate is no longer valid so new timing - values will be off by the ratio of the old bitrate to the new bitrate. - Furthermore, it will upset the way that padding is generated. - pad_stream() works by trying to fit the byte counter to now * byterate. - The result is that when decreasing bitrate, the muxer stalls, waiting until the - byte counter is in agreement with now * byterate. Also, when increasing - bitrate, the padding will spike in volume until the byte counter fits with - now * byterate. - If the byte counter is scaled by the ratio of new bitrate / old bitrate when - adjusting bitrate, then padding is generated in a way that applications would - more likely expect. 
- One detail this change doesn't yet address is whether the next PCR will match up - optimally with the previous PCR right after the byte counter is scaled. In that - case, some correction may be necessary. Also, perhaps the user should be - prevented from changing from bitrate=0 to bitrate=nonzero during playback since - it's not straightforward how to scale the byte counter in that case. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7158> - -2024-07-24 22:22:03 +0200 Guillaume Desmottes <guillaume.desmottes@onestream.live> - - * ext/qroverlay/gstbaseqroverlay.c: - qroverlay: redraw overlay when caps change - The position needs to be updated as it depends on the video size. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7230> - -2024-07-24 22:21:41 +0200 Guillaume Desmottes <guillaume.desmottes@onestream.live> - - * ext/qroverlay/gstbaseqroverlay.c: - qroverlay: add some debug logs - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7230> - -2024-07-24 09:16:03 +0200 tomaszmi <257184-tomaszmi@users.noreply.gitlab.freedesktop.org> - - * ext/avtp/gstavtpsink.c: - avtp: Fixed Linux/Alpine 3.20 build - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7226> - -2024-07-24 02:33:50 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/cuda/gstcudanvrtc.cpp: - cuda: Fix runtime compiler loading with old CUDA toolkit - Fall back to PTX if CUBIN symbol is unavailable - Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3685 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7220> - -2024-07-19 17:06:03 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkimagebufferpool.c: - vkimagebufferpool: no aliased images for video decoding - This fixes a regression in the validation layer introduced by commit 3a2e8d2d19 - Part-of: 
<https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7211> - -2024-07-19 16:56:23 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkdecoder-private.c: - vkdecoder: handle barrier internally for coincident references - This is to avoid a regression in the validation layer (introduced by commit - 916c4e70cd) when using vulkandownload - VUID-VkImageMemoryBarrier2-srcAccessMask-03914 .. vkCmdPipelineBarrier2(): - pDependencyInfo->pImageMemoryBarriers1.srcAccessMask (VK_ACCESS_TRANSFER_READ_BIT) - is not supported by stage mask (VK_PIPELINE_STAGE_2_VIDEO_DECODE_BIT_KHR) - since vulkandownload sets DPB memories' access mask to - VK_ACCESS_TRANSFER_READ_BIT, while they are retained by the DPB queue, so when - they are used as DPB after being shown, this validation error is raised. - Most of the barrier values are set ignoring the previous state of the vulkan - images. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7211> - -2023-12-21 09:32:25 +0100 Stéphane Cerveau <scerveau@igalia.com> - - * ext/vulkan/vkdownload.c: - * ext/vulkan/vkupload.c: - vulkan{up,down}load: check for a graphics family queue - Vulkan queue retrieved from peer elements should be a graphics family one. - Otherwise, get a compatible queue from the given device. 
- Co-Authored-By: Víctor Jáquez <vjaquez@igalia.com> - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7162> - -2024-07-19 01:14:20 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12av1dec.cpp: - * sys/d3d12/gstd3d12av1dec.h: - * sys/d3d12/gstd3d12decoder.cpp: - * sys/d3d12/gstd3d12decoder.h: - * sys/d3d12/gstd3d12h264dec.cpp: - * sys/d3d12/gstd3d12h264dec.h: - * sys/d3d12/gstd3d12h265dec.cpp: - * sys/d3d12/gstd3d12h265dec.h: - * sys/d3d12/gstd3d12mpeg2dec.cpp: - * sys/d3d12/gstd3d12mpeg2dec.h: - * sys/d3d12/gstd3d12vp8dec.cpp: - * sys/d3d12/gstd3d12vp8dec.h: - * sys/d3d12/gstd3d12vp9dec.cpp: - * sys/d3d12/gstd3d12vp9dec.h: - * sys/d3d12/plugin.cpp: - d3d12decoder: Add support for d3d11 output again - Although d3d12download supports d3d12 to d3d11 texture copy, - this feature might be useful if an application is not ready for d3d12 - support and expects the output type of decodebin(3) to be d3d11. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7208> - -2024-07-18 23:51:23 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12screencapturesrc.cpp: - * sys/d3d12/meson.build: - * sys/d3d12/plugin.cpp: - meson: d3d12: Use configuration file - Move defines to config header - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7208> - -2023-11-14 14:39:29 +0800 Mengkejiergeli Ba <mengkejiergeli.ba@intel.com> - - * sys/msdk/gstmsdkcontext.c: - msdk: Add new driver name "xe" - Intel has released a new graphics driver named "xe" for - newer Gen12/Xe graphics (i.e. from Lunar Lake). - This patch adds the "xe" name when getting the device in gst-msdk plugins. 
- See the public xe driver in - https://github.com/torvalds/linux/tree/master/drivers/gpu/drm/xe - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7178> - -2024-07-19 17:05:13 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkoperation.c: - vkoperation: fix documentation - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7203> - -2024-03-19 20:04:15 -0300 L. E. Segovia <amy@centricular.com> - - * ext/isac/meson.build: - isac: Work around upstream having no shared library support for MSVC - None of the symbols in webrtc-audio-coding-1 are marked with - `__declspec(dllexport)`, rendering the library usable only if - it was built with GCC/Clang. - The only fix available (as the pulseaudio copy has not been updated - with Google's upstream) is to ensure the fallback builds statically. - Although this change will also affect webrtcdsp's dependency on - webrtc-audio-processing-1, it does not break its compilation. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6407> - -2024-07-12 10:34:56 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkimagebufferpool.c: - vkimagebufferpool: reset buffer's access flags - The access flags are kept around the operations, but when the buffer is - released, the access flag should be reset to its original value, since queue - transfers can be done along the pipeline and, when reusing the buffer, the new - queue might not support the latest access flag. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7165> - -2024-07-12 10:06:03 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * ext/vulkan/vkh264dec.c: - vulkanh264dec: set access NONE at buffer pool allocation parameters - Since the decoding queue might not have transfer capabilities. - This change also applies to the unit test. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7165> - -2024-07-11 13:05:28 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * ext/vulkan/vkdownload.c: - * ext/vulkan/vkupload.c: - * gst-libs/gst/vulkan/gstvkdecoder-private.c: - * gst-libs/gst/vulkan/gstvkencoder-private.c: - * gst-libs/gst/vulkan/gstvkimagebufferpool.c: - * gst-libs/gst/vulkan/gstvkoperation.c: - * gst-libs/gst/vulkan/gstvkoperation.h: - * tests/check/libs/vkvideodecode.c: - * tests/check/libs/vkvideoencodeh264.c: - * tests/check/libs/vkvideoencodeh265.c: - vulkan: add source pipeline stage to _operation_add_frame_barrier() - Instead of carrying over the last destination pipeline stage as the current - barrier's source pipeline stage (which isn't valid semantics), this patch adds a parameter - to gst_vulkan_operation_add_frame_barrier() to set the source pipeline stage that - defines the barrier. - The previous logic brought problems particularly with queue transfers, when the - new queue doesn't support the stage set during a previous operation in a - different queue. - Now the operation API is closer to Vulkan semantics. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7165> - -2024-07-12 18:10:12 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * ext/vulkan/vkupload.c: - * tests/check/libs/vkvideodecode.c: - * tests/check/libs/vkvideoencodeh264.c: - * tests/check/libs/vkvideoencodeh265.c: - vulkan: fix wrong stages or access in barriers - While working on !7165 we found out that some parameters for barriers were wrong - or the destination pipeline stage was too coarse. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7200> - -2024-06-27 22:25:42 +1000 Matthew Waters <matthew@centricular.com> - - * sys/decklink/gstdecklinkvideosink.cpp: - * sys/decklink/gstdecklinkvideosink.h: - decklinkvideosink: schedule frames before they need to be displayed - This removes most occurrences of 'late' frame notifications produced by - decklink. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7107> - -2024-07-18 23:00:16 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12converter-pack.cpp: - * gst-libs/gst/d3d12/gstd3d12converter-unpack.cpp: - d3d12converter: Update internal method names - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7193> - -2024-07-18 03:30:23 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12compositor.cpp: - * sys/d3d12/gstd3d12convert.cpp: - * sys/d3d12/gstd3d12decoder.cpp: - * sys/d3d12/gstd3d12ipcclient.cpp: - * sys/d3d12/gstd3d12testsrc.cpp: - d3d12: Always allocate output texture using shared heap - ... if the downstream preference is unknown (e.g., no buffer pool - proposed by downstream), so that produced textures can be - shared with other APIs such as d3d11 or vulkan, or with other processes - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7193> - -2024-07-16 23:08:39 +0200 Robert Mader <robert.mader@posteo.de> - - * sys/va/gstvabase.c: - vabase: Stop aligning VideoInfo during DMABUF import - Doing so resets the stride from the VideoMeta and it wasn't done before - the commit below. While at it, drop the plane size check as we can't - reliably predict the correct size when using DRM modifiers. 
- Fixes: 89b0a6fa23 ("va: refactor buffer import") - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7187> - -2024-07-17 12:45:31 +0200 Robert Mader <robert.mader@posteo.de> - - * sys/va/gstvabase.c: - vabase: Use correct VideoInfo during DMABUF import - The changes to the VideoInfo, notably the stride from the VideoMeta, - were lost. Avoid such mistakes by explicitly using the VideoInfo from - drm_info. - Fixes: 9f5b2c4e25 ("va: use GstVideoInfoDmaDrm when importing buffers") - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7187> - -2024-07-17 23:44:09 +0530 Nirbheek Chauhan <nirbheek@centricular.com> - - * sys/applemedia/avfdeviceprovider.m: - * sys/applemedia/avfvideosrc.h: - * sys/applemedia/avfvideosrc.m: - avfdeviceprovider: Fix debug category initialization - The device monitor calls into avfvideosrc functions without - initializing the debug category, which causes multiple criticals. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7192> - -2024-07-16 03:31:33 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12commandqueue-private.h: - * gst-libs/gst/d3d12/gstd3d12commandqueue.cpp: - * gst-libs/gst/d3d12/gstd3d12commandqueue.h: - * gst-libs/gst/d3d12/gstd3d12converter.cpp: - * gst-libs/gst/d3d12/gstd3d12device.cpp: - * gst-libs/gst/d3d12/gstd3d12device.h: - * gst-libs/gst/d3d12/gstd3d12memory.cpp: - * sys/d3d12/gstd3d12compositor.cpp: - * sys/d3d12/gstd3d12convert.cpp: - * sys/d3d12/gstd3d12decoder.cpp: - * sys/d3d12/gstd3d12dxgicapture.cpp: - * sys/d3d12/gstd3d12encoder.cpp: - * sys/d3d12/gstd3d12testsrc.cpp: - * sys/d3d12/gstd3d12window-swapchain-resource.h: - * sys/d3d12/gstd3d12window-swapchain.cpp: - * sys/dwrite/gstdwriterender_d3d12.cpp: - * sys/webview2/gstwebview2src.cpp: - d3d12: Remove unnecessary event handles - Passing a null event NT handle to ID3D12Fence::SetEventOnCompletion() - will already block the calling CPU thread, so there is no point in - creating an event NT handle just to wait for the fence immediately on the CPU side. - Note that passing a valid event NT handle to the fence API might be useful - when we need to wait for the fence value later (or a timeout is required), - or want to wait for multiple fences at once via WaitForMultipleObjects(). - But that is not a use case we need to consider for now. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7176> - -2024-07-16 04:21:09 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12window.cpp: - d3d12videosink: Fix mouse event handling - GstD3D12Window.priv.input_info is referenced by the mouse event handler - in order to calculate the corresponding original position - if the scene is rotated/flipped by the videosink. - This fixes a regression introduced by the recent d3d12videosink refactoring - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7177> - -2024-07-15 12:44:52 +0200 Robert Mader <robert.mader@posteo.de> - - * sys/va/plugin.c: - va: Blocklist i965 driver for encoding - The driver - AKA intel-vaapi-driver - has been unmaintained for four years - now and encoding appears to be broken in various cases. As it's unlikely - that the situation will improve, blocklist the driver for encoding. - Decoding appears to be stable enough to keep it enabled. - The driver can still be used by setting the `GST_VA_ALL_DRIVERS` env - variable. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7170> - -2024-07-05 11:36:04 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkdebug.c: - * gst-libs/gst/vulkan/gstvkformat.c: - vulkan: remove beta extensions guard for encode operations - This is not needed anymore since encode operations are no longer beta. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7179> - -2024-07-16 23:07:50 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12compositor.cpp: - d3d12compositor: Fix transparent background mode with YUV output - In case of a YUV format without an alpha channel, a zero clear value - for each channel will result in a green color. Use a calculated black - background color with alpha=0 for transparent background mode instead. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7181> - -2024-07-16 20:38:41 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d11/gstd3d11compositor.cpp: - d3d11compositor: Fix transparent background mode with YUV output - In case of a YUV format without an alpha channel, a zero clear value - for each channel will result in a green color. Use a calculated black - background color with alpha=0 for transparent background mode instead. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7181> - -2024-07-11 17:23:43 +0300 Sebastian Dröge <sebastian@centricular.com> - - * sys/aja/gstajasrc.cpp: - ajasrc: Fix handling of timestamps and don't rely on driver frame counters - The driver frame counters (processed, dropped, buffer level) are not - always correct apparently, and don't allow reliably assigning a frame - number to captured frames. - Instead of relying on them, count the number of frames directly here and - detect dropped frames based on the capture times of the frames: if more - than 1.75 frame durations elapse between two frames, then there must've - been a dropped frame. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7163> - -2024-07-03 22:57:58 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/codecs/gsth264decoder.c: - h264decoder: Update output frame duration when second field frame is discarded - In case of an interlaced stream, if each field picture belongs to a - different GstVideoCodecFrame, update the output frame's duration - based on the discarded second field picture's timestamp information. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7132> - -2024-07-13 00:04:10 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12window-swapchain.cpp: - d3d12videosink: Clear cached buffer on format change - Otherwise the converter will try to read memory whose layout/format - might be different from the configured converter pipeline - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7167> - -2024-07-12 12:34:52 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/va/gstvadisplay.c: - vadisplay: fix minor version check - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7164> - -2024-04-17 12:21:43 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkimagebufferpool.c: - vkimagebufferpool: set image creation flags if needed - set ALIAS bit if the usage is for both sampled and storage. - set MUTABLE_FORMAT and EXTENDED_USAGE bits if the image is a multiplane YUV and - uses multiple memories. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6798> - -2024-04-17 12:15:07 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkformat.c: - * gst-libs/gst/vulkan/gstvkformat.h: - vkformat: add gst_vulkan_format_get_map function - This will be used later to compare the format selected by - gst_vulkan_format_from_video_info_2(), to verify whether it's a multiple-memory buffer - or not. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6798> - -2024-01-25 11:14:23 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * ext/vulkan/vkupload.c: - vulkanupload: request storage usage for bufferpool - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6798> - -2024-01-22 17:28:06 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkimagebufferpool.c: - vkimagebufferpool: define a default usage - Define a default usage and use it instead of repeating the same bitwise - addition. - Therefore, when usage is defined as zero, the usage is defined with the - format's supported usage and the default usage, now without the storage - bit, but with color and input attachment bits. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6798> - -2024-04-17 12:17:45 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkformat.c: - vkformat: unguard G8_B8R8_2PLANE_420_UNORM - It has existed since VK_VERSION_1_1. It should be ignored via the usage flags or the - no_multiplane parameter. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6798> - -2024-07-04 02:02:42 +0200 Robert Mader <robert.mader@posteo.de> - - * gst-libs/gst/wayland/gstwlwindow.c: - waylandsink: Fix surface cropping for rotated streams - The wp_viewport source rectangle is applied in surface-local coordinates - after buffer_transform and buffer_scale. Therefore we need to swap width - and height for 90/270 deg. rotations. - This fixes playback of rotated videos such as portrait videos from - mobile devices. 
- See also: https://wayland.app/protocols/viewporter#wp_viewport - Fixes: 0b648f9a2d ("waylandsink: Crop surfaces to their display width height") - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7136> - -2024-07-08 15:30:45 +0200 Ruben Gonzalez <rgonzalez@fluendo.com> - - * ext/vulkan/vkh265dec.c: - vkh265dec: Fix H.264 ref in logs - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7146> - -2024-07-05 00:29:05 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12memorycopy.cpp: - d3d12memorycopy: Enhance d3d12 to d3d11 copy - If a d3d12 memory holds non-direct-queue fence but the fence was - created with D3D12_FENCE_FLAG_SHARED flag, use the fence instead of - waiting for fence at CPU side. Note that d3d12ipcsrc or - d3d12screencapture elements will hold such sharable fence. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7139> - -2024-07-01 16:59:23 +0530 Nirbheek Chauhan <nirbheek@centricular.com> - - * sys/applemedia/videotexturecache-vulkan.mm: - applemedia: Disable some deprecation errors - This needs significant work to use the new Metal→Vulkan integration - extension `VK_EXT_metal_objects` - ``` - MoltenVK/mvk_deprecated_api.h:132:1: note: 'vkGetMTLDeviceMVK' has been explicitly marked deprecated here - MVK_DEPRECATED_USE_MTL_OBJS - ^ - MoltenVK/mvk_deprecated_api.h:74:52: note: expanded from macro 'MVK_DEPRECATED_USE_MTL_OBJS' - #define MVK_DEPRECATED_USE_MTL_OBJS VKAPI_ATTR deprecated("Use the VK_EXT_metal_objects extension instead.") - ^ - ../sys/applemedia/videotexturecache-vulkan.mm:303:20: error: 'vkSetMTLTextureMVK' is deprecated: - Use the VK_EXT_metal_objects extension instead. 
- VkResult err = vkSetMTLTextureMVK (memory->vulkan_mem.image, texture); - ^ - MoltenVK/mvk_deprecated_api.h:151:1: note: 'vkSetMTLTextureMVK' has been explicitly marked deprecated here - MVK_DEPRECATED_USE_MTL_OBJS - ^ - MoltenVK/mvk_deprecated_api.h:74:52: note: expanded from macro 'MVK_DEPRECATED_USE_MTL_OBJS' - #define MVK_DEPRECATED_USE_MTL_OBJS VKAPI_ATTR deprecated("Use the VK_EXT_metal_objects extension instead.") - ^ - 2 errors generated. - ``` - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7091> - -2024-06-28 17:19:46 +0530 Nirbheek Chauhan <nirbheek@centricular.com> - - * ext/vulkan/vkh264dec.c: - * ext/vulkan/vkh265dec.c: - * gst-libs/gst/vulkan/gstvkencoder-private.c: - vk-video: Fix uint64_t string format errors - With clang on macOS: - ``` - error: format specifies type 'long' but the argument has type 'uint64_t' (aka 'unsigned long long') - ... - error: format specifies type 'unsigned long' but the argument has type 'VkImageView' (aka 'unsigned long long') - ``` - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7091> - -2024-06-23 04:27:42 +0530 Nirbheek Chauhan <nirbheek@centricular.com> - - * gst-libs/gst/vulkan/meson.build: - * sys/applemedia/meson.build: - meson: Find MoltenVK with the objc++ compiler everywhere - When building for iOS in Cerbero, as of MoltenVK SDK 1.3.283, we have - to statically link to libMoltenVK since it no longer ships a dylib. - This requires linking to libc++, so we find the dep with the objc++ - compiler to ensure that meson uses the right linker. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7091> - -2024-06-23 04:25:01 +0530 Nirbheek Chauhan <nirbheek@centricular.com> - - * sys/applemedia/meson.build: - meson: Fix some confusing code in applemedia's build file - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7091> - -2024-06-23 04:20:59 +0530 Nirbheek Chauhan <nirbheek@centricular.com> - - * sys/applemedia/meson.build: - meson: Fix vulkan automagic in applemedia plugin - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7091> - -2024-06-23 04:13:31 +0530 Nirbheek Chauhan <nirbheek@centricular.com> - - * gst-libs/gst/vulkan/meson.build: - meson: Fix vulkan library build on iOS - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7091> - -2024-06-23 04:11:48 +0530 Nirbheek Chauhan <nirbheek@centricular.com> - - * gst-libs/gst/vulkan/meson.build: - meson: Use / instead of join_paths for vulkan - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7091> - -2024-06-23 03:46:39 +0530 Nirbheek Chauhan <nirbheek@centricular.com> - - * gst-libs/gst/vulkan/meson.build: - * meson_options.txt: - meson: Fix automagic dependency checks in gstvulkan - Windowing, in particular, was getting silently disabled. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7091> - -2024-07-05 18:17:38 +0530 Taruntej Kanakamalla <taruntej@asymptotic.io> - - * tests/check/elements/lc3.c: - lc3: remove bitstream comparison in the tests - Since the encoded output changes based on the version, - it does not make sense to check the output bitstream against a fixed - bytearray, as the version on the target might vary. 
So we stick - to checking the number of output buffers and the encoded frame size, - similar to the other tests - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7141> - -2024-07-02 13:00:14 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * sys/va/gstvavpp.c: - vavpp: simplify gst_va_vpp_transform_caps() - The code is simplified by using GQuarks to look for caps features, and by - removing inner loops. - Also, the pad template caps are used to compare with the incoming caps because - that is cheaper at the beginning of negotiation, where the pad template caps are used. - And, since the ANY caps were removed, there's no need to check for an initial - intersection. - Finally, the completion of caps features is done through a loop. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6698> - -2024-06-26 22:19:52 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvadeinterlace.c: - vadeinterlace: Do not append ANY caps into pad template - Just like the vapostproc, we delete the ANY caps in the pad template to - avoid unexpected negotiation. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6698> - -2024-04-20 16:40:21 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvavpp.c: - vapostproc: Do not append ANY caps into pad template - The ANY caps in the pad template caps seem to mess up the DMA negotiation. - The command: - GST_GL_API=opengl gst-launch-1.0 -vf videotestsrc ! video/x-raw,format=NV12 ! - vapostproc ! "video/x-raw(memory:DMABuf)" ! glimagesink - fails to negotiate, but in fact the vapostproc can convert the input NV12 - format into the RGBA format to render. - The ANY caps may help the passthrough mode, but we should make the negotiation correct - first. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6698> - -2024-06-27 14:05:46 +0800 Lim, Siew Hoon <siew.hoon.lim@intel.com> - - * sys/va/gstvacompositor.c: - vacompositor: Initialize the allocation related variables in decide_allocation() - Prevent garbage values from being passed through and causing - the pipeline to fail to run later on. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7097> - -2024-06-27 13:59:40 +0800 Lim, Siew Hoon <siew.hoon.lim@intel.com> - - * sys/va/gstvabasedec.c: - vabasedec: Initialize the allocation related variables in decide_allocation() - Prevent garbage values from being passed through and causing - the pipeline to fail to run. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7097> - -2024-06-25 14:38:12 +0800 Lim, Siew Hoon <siew.hoon.lim@intel.com> - - * sys/va/gstvabasetransform.c: - vabasetransform: Initialize the allocation related variables in decide_allocation() - Prevent garbage values from being passed through and causing - the pipeline to fail to run. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7097> - -2024-04-16 23:59:58 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvah265enc.c: - vah265enc: Set the trellis only when HW supports it - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6663> - -2024-04-17 00:03:48 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvah264enc.c: - vah264enc: Init missing fields in reset_state() - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6663> - -2024-04-16 23:50:58 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvah264enc.c: - vah264enc: Set the trellis only when HW supports it - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6663> - -2024-04-16 18:13:06 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvabaseenc.c: - va: baseenc: Set the trellis parameter anyway - The driver may enable trellis by default. So we should also set the - trellis info to the driver even when the trellis option is turned off. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6663> - -2024-07-02 15:26:55 -0500 jadarve <juanda0718@gmail.com> - - * gst-libs/gst/mpegts/gstmpegtsdescriptor.h: - doc: fix Since marker - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6793> - -2024-07-02 14:55:25 -0500 jadarve <juanda0718@gmail.com> - - * gst-libs/gst/mpegts/gstmpegtsdescriptor.h: - mpegts: Added missing function prototype - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6793> - -2024-07-02 14:25:59 -0500 jadarve <juanda0718@gmail.com> - - * gst-libs/gst/mpegts/gstmpegtsdescriptor.h: - doc: fix docstrings - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6793> - -2024-07-02 10:14:38 -0500 jadarve <juanda0718@gmail.com> - - * gst-libs/gst/mpegts/gstmpegtsdescriptor.h: - doc: fix single line Since comments - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6793> - -2024-07-02 09:10:27 -0500 jadarve <juanda0718@gmail.com> - - * gst-libs/gst/mpegts/gstmpegtsdescriptor.h: - doc: fix Since marker for gst_mpegts_descriptor_from_metadata_pointer - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6793> - -2024-06-27 15:44:54 -0500 jadarve <juanda0718@gmail.com> - - * gst-libs/gst/mpegts/gstmpegtsdescriptor.h: - doc: update docstrings - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6793> - -2024-06-27 15:20:47 -0500 jadarve <juanda0718@gmail.com> - - * docs/plugins/gst_plugins_cache.json: - doc: update plugin cache - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6793> - -2024-06-04 15:39:12 -0500 jadarve <juanda0718@gmail.com> - - * gst/mpegtsdemux/gstmpegdesc.h: - * gst/mpegtsmux/gstbasetsmux.c: - * gst/mpegtsmux/gstmpegtsmux.c: - * gst/mpegtsmux/tsmux/tsmux.c: - * gst/mpegtsmux/tsmux/tsmuxstream.c: - * gst/mpegtsmux/tsmux/tsmuxstream.h: - mpegtsmux: mux meta/x-id3 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6793> - -2024-06-04 14:39:05 -0500 jadarve <juanda0718@gmail.com> - - * gst-libs/gst/mpegts/gstmpegtsdescriptor.c: - * gst-libs/gst/mpegts/gstmpegtsdescriptor.h: - mpegts: use GstMpegtsMetadataApplicationFormat in metadata descriptor - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6793> - -2024-06-04 14:31:48 -0500 jadarve <juanda0718@gmail.com> - - * gst-libs/gst/mpegts/gstmpegtsdescriptor.c: - * gst-libs/gst/mpegts/gstmpegtsdescriptor.h: - mpegts: added metadata pointer descriptor - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6793> - -2024-06-04 13:53:26 -0500 jadarve <juanda0718@gmail.com> - - * gst-libs/gst/mpegts/gst-metadata-descriptor.c: - * gst-libs/gst/mpegts/gst-metadata-descriptor.h: - * gst-libs/gst/mpegts/gst-mpegtspesmetadatameta.c: - * gst-libs/gst/mpegts/gst-mpegtspesmetadatameta.h: - * gst-libs/gst/mpegts/gstmpegtsdescriptor.c: - * gst-libs/gst/mpegts/gstmpegtsdescriptor.h: - * gst-libs/gst/mpegts/meson.build: - * gst-libs/gst/mpegts/mpegts.h: - mpegts: moved metadata descriptors to gstmpegtsdescriptor - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6793> - -2023-04-07 14:40:58 -0400 Chris Spoelstra <cs.spoelstra@gmail.com> - - * docs/plugins/gst_plugins_cache.json: - * ext/srt/gstsrtobject.c: - srtsrc: fix case fallthrough of authentication param - Add missing breaks to two case statements. - Also adds a missing lock of srtobject->element when getting the value - of PROP_AUTHENTICATION. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4367> - -2024-06-29 23:02:21 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12memorycopy.cpp: - * sys/d3d12/gstd3d12memorycopy.h: - * sys/d3d12/meson.build: - * sys/d3d12/plugin.cpp: - d3d12: Add support for resource copy between d3d11 and d3d12 - If driver can support cross-api resource sharing, use device-to-device - resource copy in d3d12upload/download elements. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7119> - -2024-06-29 21:37:57 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12bufferpool.cpp: - d3d12bufferpool: Use shared heap by default - ... to make cross-api resource sharing possible - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7119> - -2024-06-27 09:55:41 +0300 Sebastian Dröge <sebastian@centricular.com> - - * sys/aja/gstajasrc.cpp: - ajasrc: Drop some frames after signal recovery - After signal recovery the capture times for the next frames are simply - wrong. Experimentally this affected 2-3 frames and seemed to be related - to the buffer fill level after signal recovery, so drop at least 5 - frames and up to fill level + 1 frames in this situation. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7106> - -2024-06-27 09:30:27 +0300 Sebastian Dröge <sebastian@centricular.com> - - * sys/aja/gstajasrc.cpp: - ajasrc: Reset clock after signal loss or signal change - Otherwise timestamps would continue as if there was no gap, and the next - frames until the clock has compensated would be all too late. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7106> - -2024-06-27 15:32:01 -0400 Thibault Saunier <tsaunier@igalia.com> - - * gst-libs/gst/analytics/meson.build: - * gst-libs/gst/mse/meson.build: - meson: Remove duplicated library definitions for gstmse and gstanalytics - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7109> - -2024-06-28 02:33:03 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12commandallocatorpool.cpp: - * gst-libs/gst/d3d12/gstd3d12commandallocatorpool.h: - d3d12commandallocatorpool: Remove unused methods - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7108> - -2024-06-27 22:58:12 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12decoder.cpp: - * sys/d3d12/gstd3d12decodercpbpool.cpp: - * sys/d3d12/gstd3d12decodercpbpool.h: - * sys/d3d12/meson.build: - d3d12decoder: Use sub-allocated bitstream buffer - Since a buffer resource will occupy at least 64KB, - allocating an upload resource per decoding command might not be - an optimal approach. Instead, use sub-regions of an upload resource - for multiple decoding commands if the sub-regions do not overlap - each other. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7108> - -2024-06-26 16:09:26 +0200 Guillaume Desmottes <guillaume.desmottes@onestream.live> - - * gst/rtmp2/rtmp/amf.c: - rtmp2: guard against calling gst_amf_node_get_type() with NULL - gst_amf_node_get_type() raises a CRITICAL if called with a NULL node. - All callers were checking for this except these ones. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7103> - -2020-04-08 10:40:42 -0400 Xavier Claessens <xavier.claessens@collabora.com> - - * meson.build: - * meson_options.txt: - build: Add missing common options that are yielding in subprojects - - Align `glib_debug`, `glib_assert` and `glib_checks` options with GLib, - otherwise glib subproject won't inherit their value. Previous names - and values are preserved using Meson's deprecation mechanism. - - Add `extra-checks` and `benchmarks` options in the main project so it - can be inherited in GStreamer subprojects. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1165> - -2024-05-13 18:52:28 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvaav1enc.c: - vaav1enc: Move repeat header data to a dedicated buffer - When parallel encoding is enabled, it is possible that the unshown frame - is not output but is already marked as a repeated frame header. - So we need to use a dedicated buffer to hold the repeat frame header and - not mix it with the original frame data. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6867> - -2024-06-25 20:08:54 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d11/gstd3d11device-private.h: - * gst-libs/gst/d3d11/gstd3d11device.cpp: - * gst-libs/gst/d3d11/meson.build: - * sys/qsv/plugin.cpp: - qsv: Check d3d shared resource tier using D3D11 API - We can check the tier using the d3d11 API. 
Thus, there's no need to - create a d3d12 device - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7099> - -2024-06-02 11:40:04 +0300 Jan Schmidt <jan@centricular.com> - - * ext/hls/m3u8.c: - adaptivedemux: Fix handling closed caption streams - Fix a typo "CLOSED_CAPTION" -> "CLOSED-CAPTION" and - a broken if statement that always bailed out for - closed captions - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6985> - -2024-06-25 22:19:26 +1000 Jan Schmidt <jan@centricular.com> - - * ext/webrtcdsp/gstwebrtcdsp.cpp: - webrtcdsp: Enable multi_channel processing - Enable multi_channel processing in webrtc-audio-processing when the - input or output has multiple channels. - Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3220 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7100> - -2024-06-24 16:00:45 +0200 Piotr Brzeziński <piotr@centricular.com> - - * sys/applemedia/vtenc.c: - vtenc: Fix redistribute latency spam - Just a quick fix to only report the maximum observed delay (measured in frames inside the encoder) instead of changing - the reported latency every time that number changes, which is way too often. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7094> - -2024-06-24 20:49:19 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12converter-unpack.cpp: - d3d12converter: Make sure data upload before executing compute shader - Use read d3d12 map, so that upload can happen if needed - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7093> - -2024-06-24 20:07:37 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12-private.h: - * gst-libs/gst/d3d12/gstd3d12converter.cpp: - * gst-libs/gst/d3d12/gstd3d12format.cpp: - * gst-libs/gst/d3dshader/gstd3dshadercache.cpp: - d3d12: Add ARGB64_LE format support - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7093> - -2024-06-24 17:53:24 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12-private.h: - * gst-libs/gst/d3d12/gstd3d12bufferpool.cpp: - * gst-libs/gst/d3d12/gstd3d12converter.cpp: - * gst-libs/gst/d3d12/gstd3d12format.cpp: - * gst-libs/gst/d3dshader/converter-hlsl/PSMain_converter.hlsl: - * gst-libs/gst/d3dshader/gstd3dshadercache.cpp: - d3d12: Add AV12 format support - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7093> - -2024-06-24 01:41:03 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12-private.h: - * gst-libs/gst/d3d12/gstd3d12converter.cpp: - * gst-libs/gst/d3d12/gstd3d12format.cpp: - * gst-libs/gst/d3dshader/gstd3dshadercache.cpp: - d3d12: Add NV16, NV61, and NV24 format support - Can reuse NV12 shader for the formats - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7093> - -2024-06-24 00:29:23 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12-private.h: - * gst-libs/gst/d3d12/gstd3d12converter.cpp: - * gst-libs/gst/d3dshader/converter-hlsl/PSMain_converter.hlsl: - * gst-libs/gst/d3dshader/gstd3dshadercache.cpp: - d3d12: Add 
A420, A422 and A444 format support - Adding A420/A422/A444 and their 10/12/16 bit formats - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7093> - -2024-06-23 23:05:20 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12-private.h: - * gst-libs/gst/d3d12/gstd3d12bufferpool.cpp: - * gst-libs/gst/d3d12/gstd3d12converter.cpp: - * gst-libs/gst/d3dshader/gstd3dshadercache.cpp: - d3d12: Add YUV 4:1:0 and 4:1:1 format support - Adding Y41B, YUV9, and YVU9 format support - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7093> - -2024-06-21 18:38:04 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12videosink.cpp: - * sys/d3d12/gstd3d12window-swapchain.cpp: - * sys/d3d12/gstd3d12window-swapchain.h: - * sys/d3d12/gstd3d12window-win32.cpp: - * sys/d3d12/gstd3d12window-win32.h: - * sys/d3d12/gstd3d12window.cpp: - * sys/d3d12/gstd3d12window.h: - d3d12videosink: Present on GstVideoOverlay::expose() - ... 
so that updated backbuffer can be swapped and presented - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7079> - -2024-06-23 22:16:36 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12-private.h: - * gst-libs/gst/d3d12/gstd3d12converter-pack.cpp: - * gst-libs/gst/d3d12/gstd3d12converter-unpack.cpp: - * gst-libs/gst/d3d12/gstd3d12device.cpp: - * gst-libs/gst/d3d12/gstd3d12format.cpp: - d3d12: Add v216, v210, r210, v308, IYU2, RGB, BGR format support - Reuse the compute shader implemented for d3d11 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7092> - -2024-06-23 22:14:23 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12-private.h: - * gst-libs/gst/d3d12/gstd3d12converter-pack.cpp: - * gst-libs/gst/d3d12/gstd3d12converter-unpack.cpp: - d3d12: Add support for UYVY, VYUY, and YVYU - Use already implemented compute shaders - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7092> - -2024-06-23 22:13:32 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12-private.h: - * gst-libs/gst/d3d12/gstd3d12converter-pack.cpp: - * gst-libs/gst/d3d12/gstd3d12converter-unpack.cpp: - * gst-libs/gst/d3d12/gstd3d12device.cpp: - * gst-libs/gst/d3dshader/converter-hlsl/PSMain_converter.hlsl: - * gst-libs/gst/d3dshader/gstd3dshadercache.cpp: - d3d12: Add RGB{16,15} and BGR{16,15} format support - d3d12 device can support B5G6R5_UNORM and B5G5R5A1_UNORM formats - in pixel shader. 
If the format is not supported by the device, - the U16_UINT format with a compute shader will be used, like in d3d11converter - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7092> - -2024-06-23 22:00:40 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12-private.h: - * gst-libs/gst/d3d12/gstd3d12converter-pack.cpp: - * gst-libs/gst/d3d12/gstd3d12format.cpp: - d3d12: Add BGRA64 and BGR10A2 format support - Map BGRA64 and BGR10A2 to Y416 and Y410, respectively, - since they are possible RGB-space decoder outputs - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7092> - -2024-06-23 02:01:50 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12-private.h: - * gst-libs/gst/d3d12/gstd3d12converter-pack.cpp: - * gst-libs/gst/d3d12/gstd3d12converter-unpack.cpp: - * gst-libs/gst/d3d12/gstd3d12converter-unpack.h: - * gst-libs/gst/d3d12/gstd3d12converter.cpp: - * gst-libs/gst/d3d12/meson.build: - d3d12: Add support for DXGI native packed YUV formats - Adding YUY2, Y210, Y216, and Y416 format support - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7092> - -2024-06-23 00:34:53 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12converter-pack.cpp: - * gst-libs/gst/d3d12/gstd3d12converter-pack.h: - * gst-libs/gst/d3d12/gstd3d12converter.cpp: - * gst-libs/gst/d3d12/meson.build: - * sys/d3d12/gstd3d12convert.cpp: - * sys/d3d12/gstd3d12testsrc.cpp: - d3d12converter: Fix Y410 conversion - Add a format conversion helper and use a compute shader in case the - output format does not support RTV. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7088> - -2024-06-23 01:18:54 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12frame.cpp: - * gst-libs/gst/d3d12/gstd3d12frame.h: - * gst-libs/gst/d3d12/gstd3d12memory.cpp: - * gst-libs/gst/d3d12/gstd3d12memory.h: - d3d12memory: Add support for UAV descriptor cache - Cache shader invisible UAV descriptor in memory - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7088> - -2024-06-22 01:36:43 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12device.cpp: - * gst-libs/gst/d3d12/gstd3d12format-private.h: - * gst-libs/gst/d3d12/gstd3d12format.cpp: - * gst-libs/gst/d3d12/gstd3d12format.h: - d3d12: Format table refactoring - Hide format table from header. This is a preparation for compute - shader based format support - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7088> - -2024-06-21 00:06:12 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12converter.cpp: - d3d12converter: Upload shader buffer resources earlier - Schedule (semi-)static resource upload at converter creation time. - And use single resource for all vertex, index, and constant - buffers, since separate resources will waste GPU memory. - Note that size and address of a committed resource are 64K aligned - even if requested buffer size is small. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7081> - -2024-06-20 22:18:02 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12converter.cpp: - d3d12converter: Make gamma remap work as intended - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7073> - -2024-06-20 20:44:56 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12device.cpp: - d3d12device: Don't warn for out-of-range device index - It can happen during enumeration as well, and it's an expected error - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7070> - -2024-06-20 20:34:33 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12device.cpp: - d3d12device: Dump device feature support - ... and use the CD3DX12FeatureSupport helper class in d3dx12.h - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7070> - -2024-06-20 00:09:16 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12device.cpp: - d3d12device: Prevent too many in-flight GPU commands - Even if each element is checking its own in-flight commands, - the total number of commands can get large in a complex pipeline. 
- Limit the total number of in-flight commands at the command queue level - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7066> - -2024-06-20 00:07:41 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12commandqueue.cpp: - d3d12commandqueue: Detect device removed event - Return early if a device removed event is detected - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7066> - -2024-06-19 23:29:11 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12basefilter.cpp: - * sys/d3d12/gstd3d12basefilter.h: - d3d12basefilter: Add adapter property - Allows initial GPU adapter selection - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7066> - -2024-06-19 22:12:15 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12utils.cpp: - * gst-libs/gst/d3d12/gstd3d12utils.h: - * sys/d3d12/gstd3d12compositor.cpp: - * sys/d3d12/gstd3d12convert.cpp: - * sys/d3d12/gstd3d12decoder.cpp: - * sys/d3d12/gstd3d12dxgicapture.cpp: - * sys/d3d12/gstd3d12encoder.cpp: - * sys/d3d12/gstd3d12ipcclient.cpp: - * sys/d3d12/gstd3d12pluginutils.cpp: - * sys/d3d12/gstd3d12pluginutils.h: - * sys/d3d12/gstd3d12testsrc.cpp: - d3d12: Move fence setter helper method to gst-libs - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7057> - -2024-06-19 19:06:42 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12converter.cpp: - * gst-libs/gst/d3d12/gstd3d12converter.h: - * sys/d3d12/gstd3d12compositor.cpp: - * sys/d3d12/gstd3d12convert.cpp: - * sys/d3d12/gstd3d12dxgicapture.cpp: - * sys/d3d12/gstd3d12testsrc.cpp: - * sys/d3d12/gstd3d12window-swapchain.cpp: - * sys/d3d12/gstd3d12window.cpp: - * sys/dwrite/gstdwriterender_d3d12.cpp: - d3d12converter: Update API signature - Always use the device's main direct queue, and control GPU waiting - behavior using a boolean value - Part-of: 
<https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7057> - -2024-06-19 01:00:28 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12converter.cpp: - * gst-libs/gst/d3d12/gstd3d12converter.h: - * gst-libs/gst/d3d12/gstd3d12device.cpp: - * gst-libs/gst/d3d12/gstd3d12frame.cpp: - * gst-libs/gst/d3d12/gstd3d12frame.h: - * gst-libs/gst/d3d12/gstd3d12memory.cpp: - * gst-libs/gst/d3d12/gstd3d12memory.h: - * gst-libs/gst/d3d12/gstd3d12utils.cpp: - * sys/d3d12/gstd3d12compositor.cpp: - * sys/d3d12/gstd3d12convert.cpp: - * sys/d3d12/gstd3d12decoder.cpp: - * sys/d3d12/gstd3d12dxgicapture.cpp: - * sys/d3d12/gstd3d12encoder.cpp: - * sys/d3d12/gstd3d12graphicscapture.cpp: - * sys/d3d12/gstd3d12ipcclient.cpp: - * sys/d3d12/gstd3d12ipcserver.cpp: - * sys/d3d12/gstd3d12pluginutils.cpp: - * sys/d3d12/gstd3d12pluginutils.h: - * sys/d3d12/gstd3d12testsrc.cpp: - * sys/d3d12/gstd3d12window.cpp: - * sys/dwrite/gstdwriterender_d3d12.cpp: - * sys/webview2/gstwebview2src.cpp: - d3d12memory: Hide fence value from header - Instead of exposing the fence value to wait on in the header, use setter/getter - methods. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7057> - -2024-06-19 00:57:11 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12device.cpp: - * gst-libs/gst/d3d12/gstd3d12device.h: - * gst-libs/gst/d3d12/gstd3d12memory.cpp: - * sys/d3d12/gstd3d12encoder.cpp: - * sys/d3d12/gstd3d12ipcsink.cpp: - d3d12device: Add helper method for getting fence handle - Add get_fence_handle() method so that caller can get command queue's - dedicated fence handle from device - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7057> - -2024-06-18 22:59:17 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12-private.h: - * gst-libs/gst/d3d12/gstd3d12converter.cpp: - * gst-libs/gst/d3d12/gstd3d12device.cpp: - * gst-libs/gst/d3d12/gstd3d12device.h: - * gst-libs/gst/d3d12/gstd3d12fencedatapool.cpp: - * gst-libs/gst/d3d12/gstd3d12fencedatapool.h: - * gst-libs/gst/d3d12/gstd3d12frame.cpp: - * gst-libs/gst/d3d12/gstd3d12memory.cpp: - * sys/d3d12/gstd3d12compositor.cpp: - * sys/d3d12/gstd3d12convert.cpp: - * sys/d3d12/gstd3d12decoder.cpp: - * sys/d3d12/gstd3d12dxgicapture.cpp: - * sys/d3d12/gstd3d12encoder.cpp: - * sys/d3d12/gstd3d12overlaycompositor.cpp: - * sys/d3d12/gstd3d12testsrc.cpp: - * sys/d3d12/gstd3d12window.cpp: - * sys/dwrite/gstdwriterender_d3d12.cpp: - d3d12: Remove notify_com and notify_mini_object helper methods - Use private macros instead of exposing multiple APIs for the same thing - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7057> - -2024-06-18 22:04:23 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12commandqueue.cpp: - * gst-libs/gst/d3d12/gstd3d12commandqueue.h: - * gst-libs/gst/d3d12/gstd3d12device-private.h: - * gst-libs/gst/d3d12/gstd3d12device.cpp: - * gst-libs/gst/d3d12/gstd3d12frame.cpp: - * gst-libs/gst/d3d12/gstd3d12memory.cpp: - * sys/d3d12/gstd3d12decoder.cpp: - * 
sys/d3d12/gstd3d12encoder.cpp: - * sys/d3d12/gstd3d12ipcclient.cpp: - * sys/dwrite/gstdwriterender_d3d12.cpp: - d3d12commandqueue: Update API name and arguments - Accept multiple fences since a single command list may have - multiple dependent resources which are associated with - different GPU engines - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7057> - -2024-06-18 21:48:11 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12device.cpp: - * gst-libs/gst/d3d12/gstd3d12device.h: - * sys/d3d12/gstd3d12compositor.cpp: - * sys/d3d12/gstd3d12convert.cpp: - * sys/d3d12/gstd3d12testsrc.cpp: - * sys/dwrite/gstdwriterender_d3d12.cpp: - d3d12device: Use HRESULT return code if possible - Make the function signature consistent with that of the command queue - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7057> - -2024-05-28 09:55:05 +0200 Stéphane Cerveau <scerveau@igalia.com> - - * ext/vulkan/vkh264dec.c: - * ext/vulkan/vkh265dec.c: - * gst-libs/gst/vulkan/gstvkdecoder-private.c: - vkdecoder: support layered and non-dedicated DPB - As on NVIDIA Ampere. In this case each output buffer is also a DPB, - but using a different view layer. - Still pending a validation layer issue: - VUID-VkVideoBeginCodingInfoKHR-flags-07244 - Co-authored-by: Victor Jaquez <vjaquez@igalia.com> - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6954> - -2024-06-17 15:38:05 -0400 Daniel Morin <daniel.morin@collabora.com> - - * gst-libs/gst/analytics/gstanalyticsclassificationmtd.c: - analytics: Add validation on classification analytics-meta - - Add validation on parameters passed to gst_analytics_cls_add_cls_mtd. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7046> - -2024-06-18 09:10:16 +0200 Edward Hervey <edward@centricular.com> - - * gst/mpegtsdemux/mpegtspacketizer.c: - tsdemux: Fix maximum PCR/DTS values - * PTS/DTS are stored as 33 bits - * PCR is 33 bits multiplied by 300 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7050> - -2024-06-18 00:33:37 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12pluginutils.cpp: - * sys/d3d12/gstd3d12pluginutils.h: - * sys/d3d12/plugin.cpp: - d3d12: Promote decoder and videosink rank to primary - It's proven that d3d12 performs better than d3d11 while - consuming fewer resources in various cases. - Assign primary+ rank to decoder and videosink on Windows 10/11, - so that it can be tested widely - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7045> - -2024-03-22 12:32:22 +0100 Stéphane Cerveau <scerveau@igalia.com> - - * gst-libs/gst/vulkan/gstvkoperation.c: - vkoperation: support for query_result_status - query_result_status can be optional so we should not create - the query pool if the queue does not support it, - i.e., AMD does not support VK_QUERY_TYPE_RESULT_STATUS_ONLY_KHR - In other use cases such as VK_QUERY_TYPE_VIDEO_ENCODE_FEEDBACK_KHR, the - query pool must be created. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7043> - -2024-06-17 14:55:03 +0200 Stéphane Cerveau <scerveau@igalia.com> - - * gst-libs/gst/vulkan/gstvkoperation.c: - * gst-libs/gst/vulkan/gstvkphysicaldevice.c: - * gst-libs/gst/vulkan/gstvkphysicaldevice.h: - vkphysicaldevice: rename query to query_result_status - As only queryResultStatusSupport can be optional, - the variable name should be more specific. - queryResultStatusSupport reports VK_TRUE if query type - VK_QUERY_TYPE_RESULT_STATUS_ONLY_KHR and use of - VK_QUERY_RESULT_WITH_STATUS_BIT_KHR are supported. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7043> - -2024-06-18 05:53:19 +0800 He Junyan <junyan.he@intel.com> - - * gst-libs/gst/codecparsers/gstav1parser.c: - av1parse: Do not return an error on expectedFrameId mismatch - According to the SPEC: - The frame id numbers (represented in display_frame_id, current_frame_id, - and RefFrameId[ i ]) are not needed by the decoding process, but allow - decoders to spot when frames have been missed and take an appropriate action. - So we should just print a warning and not return an error in the parser on - mismatch. The decoder itself is already robust enough to handle missing references. - Fixes #3622 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7047> - -2024-06-16 21:21:44 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12videosink.cpp: - * sys/d3d12/gstd3d12window-swapchain.cpp: - * sys/d3d12/gstd3d12window-win32.cpp: - * sys/d3d12/gstd3d12window-win32.h: - * sys/d3d12/gstd3d12window.cpp: - * sys/d3d12/gstd3d12window.h: - d3d12videosink: Add direct-swapchain property - Because a DXGI flip mode swapchain will disallow GDI operations - on a HWND once the swapchain is configured, the videosink has been creating - a child window of the application's window. However, since window creation - can take a few milliseconds, it can cause performance issues such as - UI freezing. Adding a property so that the videosink can attach - the DXGI swapchain directly to the application's window in order to improve - performance. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7013> - -2024-06-13 01:34:08 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12videosink.cpp: - d3d12videosink: Add external-window-only property - Adding a new property in order to avoid unintended internal window - creation. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7013> - -2024-06-10 23:40:55 +0900 Seungha Yang <seungha@centricular.com> - - d3d12videosink: Add support for window handle update - A large refactoring commit for adding features and improving performance - * Reuse internal converter and overlay compositor: - The converter can be reused as long as input and display formats are not - changed. Also, overlay compositor reconstruction is required only if - the display format is changed - * Don't wait for full GPU flush on resize or close: - A D3D12 swapchain requires GPU idle in order to resize the backbuffer. - Thus CPU-side waiting is required for swapchain-related commands - to be finished. However, there's no need to wait for a full GPU flush. - * Support multiple sinks on a single external window - Keep the installed subclass window procedure even if there's no associated - internal HWND. This will make window procedure hooking less racy. - Then the parent HWND's messages will be transferred to our internal HWNDs - if needed. - * Adding support for window handle update - The application can change the target HWND even when the videosink is in - playing or paused state. So, users can call gst_video_overlay_set_window_handle() - against d3d12videosink anytime. The videosink will be able to update - internal state and set up resources upon request. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7013> - -2024-06-10 23:38:39 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12overlaycompositor.cpp: - * sys/d3d12/gstd3d12overlaycompositor.h: - * sys/d3d12/gstd3d12window.cpp: - d3d12overlaycompositor: Remove unused parameter - Don't need to check fence value of overlay buffer since - window uses global direct command queue - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7013> - -2024-06-03 21:53:40 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12videosink.cpp: - d3d12videosink: Calculate display resolution only per caps change - Don't need to calculate it per window property update - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7013> - -2024-06-05 00:20:05 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12commandqueue.cpp: - d3d12commandqueue: Fix deadlock on drain() - Don't take lock if the drain() is called from the GC thread - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7013> - -2024-06-12 01:02:39 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12-private.h: - * gst-libs/gst/d3d12/gstd3d12commandqueue.cpp: - * gst-libs/gst/d3d12/gstd3d12device-private.h: - * gst-libs/gst/d3d12/gstd3d12device.cpp: - * sys/d3d12/gstd3d12decoder.cpp: - d3d12: Workaround for Intel iGPU decoder crash - Observed Intel GPU driver crash when multiple decoders are - configured in a process. It might be because of frequent - command queue alloc/free or too many in-flight decoding commands. - In order to make command queue persistent and limit the number of - in-flight command lists, holds global decoding command queue. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7019> - -2024-06-12 18:28:54 +0200 Piotr Brzeziński <piotr@centricular.com> - - * sys/applemedia/vtdec.c: - vtdec: Use GST_VIDEO_DECODER_ERROR instead of aborting when frame has an ERROR flag - This was already being used in handle_frame() for errors that happen when queueing a frame for decoding, - let's do the same when a frame is flagged with an error in the output callback. - From quick testing, this makes seeking more reliable (previously, it would sometimes cause a decoding error - and shut the whole decoder down due to GST_FLOW_ERROR). - Also manually sets the max error count to actually stop processing if too many errors occur. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6446> - -2024-03-26 15:24:31 +0100 Piotr Brzeziński <piotr@centricular.com> - - * sys/applemedia/vtdec.c: - vtdec: Handle some errors without stopping the decoder - ReferenceMissingErr is not critical and the simplest solution is to just ignore it. The frame has - the FrameDropped flag set when it occurs, so we can just drop it as usual. - BadDataErr is also not immediately critical, but in its case let's set the ERROR flag, - so the output loop can use GST_VIDEO_DECODER_ERROR to count and error out if it happens too many times. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6446> - -2024-06-17 11:15:22 +0300 Sebastian Dröge <sebastian@centricular.com> - - * ext/aom/gstav1dec.c: - av1dec: Don't treat decoding errors as fatal and print more error details - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7040> - -2024-06-11 23:33:49 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12device.cpp: - * gst-libs/gst/d3d12/meson.build: - d3d12: Add support for DXGI debug layer - Will be enabled if GST_ENABLE_D3D12_DXGI_DEBUG env is set - and dxgidebug.dll is available. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7016> - -2024-06-13 09:11:30 -0500 Zach van Rijn <me@zv.io> - - * gst/pcapparse/gstpcapparse.c: - pcapparse: Avoid unaligned memory access - Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3602 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7030> - -2024-05-21 01:20:59 +0900 Seungha Yang <seungha@centricular.com> - - * ext/meson.build: - * ext/nvcomp/gstnvcomp.cpp: - * ext/nvcomp/gstnvcomp.h: - * ext/nvcomp/gstnvcompvideodec.cpp: - * ext/nvcomp/gstnvcompvideodec.h: - * ext/nvcomp/gstnvcompvideoenc.cpp: - * ext/nvcomp/gstnvcompvideoenc.h: - * ext/nvcomp/meson.build: - * ext/nvcomp/plugin.cpp: - * ext/nvcomp/stub/cuda_runtime.h: - * meson_options.txt: - nvcomp: Add nvCOMP library based GPU lossless compression plugin - Adding NVIDIA nvCOMP library based plugin for lossless raw video - compression/decompression. To build this plugin, user should - install nvCOMP SDK first and specify the SDK path via - "nvcomp-sdk-path" build option or NVCOMP_SDK_PATH env. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6912> - -2024-05-21 18:09:12 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/cuda/cuda-gst.h: - * gst-libs/gst/cuda/gstcudaloader.cpp: - * gst-libs/gst/cuda/stub/cuda.h: - cuda: Load 1D memcpy method symbols - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6912> - -2024-05-31 13:07:51 +0200 Mathieu Duponchelle <mathieu@centricular.com> - - * gst/codectimestamper/gstcodectimestamper.c: - codectimestamper: never set DTS to NONE - If we want to avoid the DTS going backward, then we can set DTS to - last_dts as a last resort. 
- Log a warning in this case - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6977> - -2024-06-07 23:09:54 -0700 Khem Raj <raj.khem@gmail.com> - - * sys/uvcgadget/configfs.c: - uvcgadget: Use g_path_get_basename instead of libc basename - Musl does not implement GNU basename and has fixed a bug where the - prototype was leaked into string.h [1], which results in compile errors - with GCC-14 and Clang-17+ - | sys/uvcgadget/configfs.c:262:21: error: call to undeclared function 'basename'; - ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration] - | 262 | const char *v = basename (globbuf.gl_pathv[i]); - | | ^ - Using the glib function instead makes it portable across musl and glibc on - Linux - [1] https://git.musl-libc.org/cgit/musl/commit/?id=725e17ed6dff4d0cd22487bb64470881e86a92e7a - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7006> - -2024-06-12 23:15:29 +1000 Matthew Waters <matthew@centricular.com> - - * gst-libs/gst/vulkan/gstvkswapper.c: - vulkan/swapper: expose choose_queue() in docs - It was missing a doc trigraph. - Also mark the input queue argument as nullable. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7023> - -2024-06-10 13:11:19 +0300 Sebastian Dröge <sebastian@centricular.com> - - * ext/aom/gstav1enc.c: - av1enc: Handle force-keyunit events properly by requesting keyframes - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7008> - -2024-06-05 22:09:56 +0900 Seungha Yang <seungha@centricular.com> - - * sys/wasapi2/gstwasapi2client.cpp: - wasapi2: Adjust log level in device enumeration path - The audio device at the requested index might not be available, but that's - an expected case when enumerating devices. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6996> - -2024-05-29 11:07:23 +0800 Mengkejiergeli Ba <mengkejiergeli.ba@intel.com> - - * sys/msdk/gstmsdkvpp.c: - msdkvpp: Add a huge value to inbuf pts and set mfx surface timestamp - It can be seen as a workaround in the case of multi-channel transcoding (like - decoder output to two channels, one for encoder and one for vpp). - Normally, the encoder sets a min pts with a huge value to avoid negative dts, - while vpp sets pts without this additional huge value, which is likely to - cause the input surface pts to not match the encoder's (since both encoder - and vpp accept the same buffer from the decoder, meaning they modify the timestamp - of one mfx surface). So we add this huge value to vpp to ensure enc and - vpp set the same value on the input mfx surface, while not breaking the - encoder's setting of min pts for dts protection. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6971> - -2024-06-10 23:25:46 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12videosink.cpp: - d3d12videosink: Disconnect window signal handler on dispose as intended - Fixing a typo - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7012> - -2024-05-28 19:23:33 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12videosink.cpp: - * sys/d3d12/gstd3d12window.cpp: - d3d12videosink: Add error-on-closed property - Adding a property to control error reporting behavior when the output - window is closed in playing or paused state. This can be useful - for apps that want to close the window even while playing - a stream, where the closed window is expected. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6939> - -2024-04-23 21:57:57 +0200 Stéphane Cerveau <scerveau@gmail.com> - - * gst-libs/gst/vulkan/meson.build: - vulkan: fix macOS build - The VulkanSDK can be downloaded from the LunarG website and can - be installed properly in /usr/local following: - https://vulkan.lunarg.com/doc/view/latest/mac/getting_started.html - Partly fixes #2372 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6669> - -2024-06-06 19:34:03 +1000 Matthew Waters <matthew@centricular.com> - - * gst-libs/gst/vulkan/gstvkfullscreenquad.c: - vulkan/fullscreenquad: add check for unset video info - So we don't crash when set_info() is not called. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7000> - -2024-06-06 17:16:30 +1000 Matthew Waters <matthew@centricular.com> - - * gst-libs/gst/vulkan/gstvkfullscreenquad.c: - vulkan/fullscreenquad: allow setting NULL input/output buffer to unset - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7000> - -2024-06-01 02:32:22 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/cuda/gstcudanvmm-private.h: - * gst-libs/gst/cuda/gstcudanvmm.cpp: - * gst-libs/gst/cuda/gstcudautils.cpp: - * gst-libs/gst/cuda/meson.build: - * meson_options.txt: - * sys/nvcodec/gstcudamemorycopy.c: - * sys/nvcodec/meson.build: - * sys/nvcodec/plugin.c: - cuda: Enable x86 NVMM support again - It was broken since the memory copy helper function was moved to gst-libs. 
- Also, adding "cuda-nvmm" and "cuda-nvmm-include-path" build options - to en/disable NVMM support in the gstcuda library - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6978> - -2024-05-15 02:29:12 +0900 Seungha Yang <seungha@centricular.com> - - * tests/examples/cuda/cudamemory-sync.c: - * tests/examples/cuda/meson.build: - * tests/examples/meson.build: - examples: cuda: Add CUDA memory synchronization example - Add example code for external CUDA context sharing and - gst_cuda_memory_sync() - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6864> - -2024-06-06 12:13:05 +0200 Stéphane Cerveau <scerveau@igalia.com> - - * gst-libs/gst/vulkan/gstvkdevice.c: - * gst-libs/gst/vulkan/meson.build: - vulkan: remove remaining GST_VULKAN_HAVE_VIDEO_ENCODERS - Some uses of the define were forgotten in - https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6992 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7001> - -2024-05-17 14:12:23 +0300 Sebastian Dröge <sebastian@centricular.com> - - * gst/mxf/mxftypes.c: - mxf: Use GDateTime instead of gmtime() - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6872> - -2024-06-04 10:10:07 +0200 Stéphane Cerveau <scerveau@igalia.com> - - * tests/check/libs/vkvideoencodeh264.c: - * tests/check/libs/vkvideoencodeh265.c: - * tests/check/meson.build: - gst-plugins-bad: tests: rename vkvideoencode tests - Rename the vulkan encode tests to be able to use the namespace - libs_vkvideoencode*. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6992> - -2024-06-04 09:55:26 +0200 Stéphane Cerveau <scerveau@igalia.com> - - * gst-libs/gst/vulkan/gstvkconfig.h.meson: - * gst-libs/gst/vulkan/gstvkdevice.c: - * gst-libs/gst/vulkan/gstvkimagebufferpool.c: - * gst-libs/gst/vulkan/gstvkoperation.c: - * gst-libs/gst/vulkan/gstvkvideo-private.c: - * gst-libs/gst/vulkan/gstvkvideo-private.h: - * gst-libs/gst/vulkan/gstvkvideoutils.c: - * gst-libs/gst/vulkan/gstvkvideoutils.h: - * gst-libs/gst/vulkan/meson.build: - * tests/check/libs/vkvideoh264encode.c: - * tests/check/libs/vkvideoh265encode.c: - * tests/check/meson.build: - vulkan: remove GST_VULKAN_HAVE_VIDEO_ENCODERS - Use 2.3.275 as first supported SDK version - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6992> - -2024-06-04 09:34:42 +0200 Stéphane Cerveau <scerveau@igalia.com> - - * gst-libs/gst/vulkan/gstvkqueue.c: - vkqueue: remove useless decoder include - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6992> - -2024-02-14 09:43:35 -0300 Thibault Saunier <tsaunier@igalia.com> - - * gst/autoconvert/gstbaseautoconvert.c: - autoconvert: Fix race condition when creating sub elements - There was a case where the element would get destroyed while being - added to the hash table of elements - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6989> - -2024-06-02 10:26:19 +0300 Sebastian Dröge <sebastian@centricular.com> - - * ext/dtls/gstdtlssrtpenc.c: - dtlssrtpenc: Don't crash if no pad name is provided when requesting a new pad - It is mandatory to provide a valid pad name for dtlssrtpenc. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6984> - -2024-06-02 23:36:28 +1000 Matthew Waters <matthew@centricular.com> - - * gst-libs/gst/vulkan/gstvkvideofilter.c: - * gst-libs/gst/vulkan/gstvkvideofilter.h: - vulkan/videofilter: add getters for queue/device/instance - Allows bindings to not poke at structs for this information. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6987> - -2024-06-02 23:34:39 +1000 Matthew Waters <matthew@centricular.com> - - * gst-libs/gst/vulkan/gstvkfullscreenquad.c: - * gst-libs/gst/vulkan/gstvkfullscreenquad.h: - vulkan/fullscreenquad: add get_queue() - Allows bindings to not poke at the instance struct. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6987> - -2024-06-02 23:33:13 +1000 Matthew Waters <matthew@centricular.com> - - * gst-libs/gst/vulkan/gstvkfullscreenquad.c: - * gst-libs/gst/vulkan/gstvkfullscreenquad.h: - vulkan/fullscreenquad: mark set_info GstVideoInfo as const - It's not modified by the function. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6987> - -2024-06-01 22:38:11 +1000 Matthew Waters <matthew@centricular.com> - - * ext/vulkan/meson.build: - * ext/vulkan/shaders/ayuv_to_rgb.frag: - * ext/vulkan/shaders/meson.build: - * ext/vulkan/shaders/nv12_to_rgb.frag: - * ext/vulkan/shaders/rgb_to_ayuv.frag: - * ext/vulkan/shaders/rgb_to_nv12.frag: - * ext/vulkan/shaders/rgb_to_yuy2.frag: - * ext/vulkan/shaders/swizzle.frag: - * ext/vulkan/shaders/swizzle_and_clobber_alpha.frag: - * ext/vulkan/shaders/view_convert.frag: - * ext/vulkan/shaders/yuy2_to_rgb.frag: - vulkan: also support glslang as a shader compiler - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6980> - -2024-06-01 21:35:26 +1000 Matthew Waters <matthew@centricular.com> - - * ext/vulkan/gstvulkan.c: - * ext/vulkan/meson.build: - * gst-libs/gst/vulkan/meson.build: - vulkan: support not having glslc available for building vulkan plugin - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6980> - -2024-05-31 12:28:40 +0200 Stéphane Cerveau <scerveau@igalia.com> - - * gst-libs/gst/vulkan/gstvkutils.c: - vkutils: do not forget to clear context in case of error - The context is leaking in case of a failing instance open. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6975> - -2024-05-31 12:27:30 +0200 Stéphane Cerveau <scerveau@igalia.com> - - * gst-libs/gst/vulkan/gstvkerror.c: - vkerror: free the error string after usage - g_set_error has already used the string variable, so it can be cleared now. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6975> - -2024-05-30 01:30:58 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12bufferpool.cpp: - * gst-libs/gst/d3d12/gstd3d12memory.cpp: - d3d12memory: Fix staging buffer alignment - Not all GPUs can support arbitrary offset of - D3D12_PLACED_SUBRESOURCE_FOOTPRINT when copying GPU memory between - texture and buffer. Instead of calculating size/offset per plane, - calculate the entire size and offsets at once. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6967> - -2024-05-28 04:14:15 +0530 Nirbheek Chauhan <nirbheek@centricular.com> - - * gst-libs/gst/va/meson.build: - * meson.build: - * sys/msdk/gstmsdkallocator_libva.c: - * sys/msdk/gstmsdkcaps.c: - * sys/msdk/gstmsdkdec.c: - * sys/msdk/gstmsdkenc.c: - * sys/msdk/gstmsdkvpp.c: - * sys/msdk/gstmsdkvpputil.c: - * sys/msdk/meson.build: - msdk: Fix libdrm dependency detection and usage - drm_fourcc.h should be picked up via the pkgconfig include, not the - system includedir directly. - Also consolidate the libdrm usage in va and msdk. - All this allows it to be picked up consistently (via the subproject, - for example). - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6932> - -2024-05-27 18:50:23 +0530 Nirbheek Chauhan <nirbheek@centricular.com> - - * gst-libs/gst/va/meson.build: - meson: Don't use fallback: kwarg for libva deps - This will cause a fallback even when the `va` option is `auto`, not - giving the user a chance to provide the dependency via the system, and - likely building this feature unnecessarily. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6932> - -2024-05-27 18:43:33 +0530 Nirbheek Chauhan <nirbheek@centricular.com> - - * gst-libs/gst/va/gstvavideoformat.h: - va: Fix libdrm include - The libdrm/ prefix should not be used, it will be provided by the - pkgconfig file. 
Also HAVE_LIBDRM is necessary. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6932> - -2024-05-15 12:48:43 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * sys/msdk/gstmsdkcaps.c: - msdkcaps: fix ill-formatted string - This patch fixes this critical warning when registering MSDK: - _dma_fmt_to_dma_drm_fmts: assertion 'fmt != GST_VIDEO_FORMAT_UNKNOWN' failed - It was because the HEVC string with possible output formats had an extra space - that could not be parsed correctly. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6853> - -2024-05-29 18:54:18 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12h264enc.cpp: - d3d12encoder: Do not print error log for not-supported feature - gst_d3d12_result() will print a message at ERROR level on failure. - Use FAILED/SUCCEEDED macros instead, since a not-supported feature - is not a critical error - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6955> - -2024-04-22 17:04:09 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12memory.cpp: - d3d12memory: Allow null allocator in alloc() - Update code as documented - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6940> - -2024-05-21 17:25:10 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> - - * gst/videoparsers/gstav1parse.c: - av1parse: Properly transfer TU timestamp - When transforming from unknown alignment to frame or obu, the TU timestamp - was not properly transferred. Fix this by saving the TU DTS as the first - DTS seen within the TU data, and the PTS as the last PTS seen in that - TU data. Finally, reset the TU timestamp after each TU has completed. 
- Fixes #1496 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6895> - -2024-05-21 17:22:47 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> - - * gst/videoparsers/gstav1parse.c: - av1parse: Only place a marker on the last frame of a TU - Markers are meant to indicate the buffer that ends a frame, which implies - something can be displayed. The dependent decode-only frames should not - have markers. This should also fix last subframe detection. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6895> - -2024-05-25 16:58:17 +0900 Seungha Yang <seungha@centricular.com> - - * sys/webview2/gstwebview2object.cpp: - * sys/webview2/gstwebview2object.h: - * sys/webview2/gstwebview2src.cpp: - webview2: Add user-data-folder property - Adding a property to specify the location of WebView2's user data - folder. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6921> - -2024-04-22 01:15:51 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12device.cpp: - d3d12: Add support for Device Removed Extended Data (DRED) - Enable DRED if "d3d12dred > GST_LEVEL_ERROR", and print - DRED debug information on device removed. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6718> - -2024-04-16 20:37:23 +0200 Piotr Brzeziński <piotr@centricular.com> - - * sys/applemedia/vtenc.c: - * sys/applemedia/vtenc.h: - vtenc: Enable HEVC with alpha encoding - Adds a separate vtenc_h265a element (with a _hw variant as usual) for the HEVCWithAlpha codec type. - Decided to go with a separate element to not break existing uses of the normal HEVC encoder. - The preserve_alpha property is still only used for ProRes, no need for it here because we explicitly say we want alpha - when using the new element. - For now, the HEVCWithAlpha has an issue where it does not throttle the amount of input frames queued internally. 
- I added a quick workaround where encode_frame() will block until enqueue_frame() callback notifies it that some space - has been freed up in the internal queue. The limit was set to 5, which should be enough I guess? Hopefully this is not - too prone to race conditions. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6664> - -2024-03-26 18:48:17 +0100 Piotr Brzeziński <piotr@centricular.com> - - * sys/applemedia/vtenc.c: - vtenc: Add missing vtenc_h265 docs - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6664> - -2024-05-27 15:41:23 +0900 Elliot Chen <elliot.chen@nxp.com> - - * gst/autoconvert/gstbaseautoconvert.c: - autovideoconvert: should not forward the allocation query if no element is selected - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6927> - -2024-05-27 12:28:44 +0100 Philippe Normand <philn@igalia.com> - - * ext/webrtc/gstwebrtcbin.c: - * ext/webrtc/webrtcsdp.c: - * ext/webrtc/webrtcsdp.h: - * tests/check/elements/webrtcbin.c: - webrtcbin: Allow session level setup attribute in SDP - An SDP answer can declare its setup attribute at the session level or at the - media level. Until this patch we were validating only the latter case and an - assert was raised in the former case. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6930> - -2024-05-22 14:54:56 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * ext/vulkan/vkh264dec.c: - vulkanh264dec: code style fix - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6901> - -2024-05-22 14:50:11 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * ext/vulkan/vkh265dec.c: - vulkanh265dec: fix reference set - `StdVideoDecodeH265PictureInfo.flags.IsReference` refers to section 3.132 ITU-T - H.265 specification: - reference picture: A picture that is a short-term reference picture or a - long-term reference picture. 
- `GstH265Picture.ref` doesn't reflect this, but we need to query the NAL type of - the processed slice. - This patch fixes the validation layer error - `VUID-vkCmdBeginVideoCodingKHR-slotIndex-07239` while using the NVIDIA driver. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6901> - -2024-05-10 22:59:15 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12videosink.cpp: - * sys/d3d12/gstd3d12window.cpp: - * sys/d3d12/gstd3d12window.h: - * sys/d3d12/meson.build: - * tests/examples/d3d12/d3d12videosink-overlay.cpp: - * tests/examples/d3d12/meson.build: - d3d12videosink: Add overlay signal to support d3d12/d3d11/d2d overlay - Conceptually identical to the present signal of d3d11videosink. - This signal will be emitted with current render target - (i.e., swapchain backbuffer) and command queue. Signal handler - can record GPU commands for an overlay image or to blend - an image to the render target. - In addition to d3d12 resources, videosink will send - d3d11 and d2d resources depending on "overlay-mode" - property, so that signal handler can render by using - preferred/required DirectX API. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6838> - -2024-05-10 20:08:49 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12videosink.cpp: - * sys/d3d12/gstd3d12window.cpp: - * sys/d3d12/gstd3d12window.h: - d3d12videosink: Use device's main direct queue - The idea of using separate command queue per videosink was that - swapchain is bound to a command queue and we need to flush the - command queue when window size is changed. But the separate - queue does not seem to improve performance a lot. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6838> - -2024-05-17 11:13:19 +0300 Sebastian Dröge <sebastian@centricular.com> - - * ext/dtls/gstdtlsconnection.c: - dtlsconnection: Fix overflow in timeout calculation on systems with 32 bit time_t - If a timeout of more than 4295s was scheduled, the calculation would - overflow and a too short timeout would be used instead. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6870> - -2023-08-11 17:50:23 +0800 He Junyan <junyan.he@intel.com> - - * sys/kms/gstkmsallocator.c: - kmssink: Do not close the DRM prime handle twice - The prime_fds for multi planes may be the same. For example, on Intel's - platform, the NV12 surface may have the same FD for the plane0 and the - plane1. Then, the DRM_IOCTL_GEM_CLOSE will close the same handle twice - and get an "Invalid argument 22" error the second time. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6914> - -2024-04-17 12:19:03 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkformat.c: - * tests/check/libs/vkformat.c: - vkformat: try UNORM format first and decouple them from colorimetry - From the spec (chapter 34, v1.3.283): - ``` - UNORM: the components are unsigned normalized values in the range [0, 1] - SRGB: the R, G and B components are unsigned normalized values that represent - values using sRGB nonlinear encoding, while the A component (if one - exists) is a regular unsigned normalized value - ``` - The difference is the storage encoding, the first one is aimed at image - transfers, while the second is for shaders, mostly in the swapchain stage in the - pipeline, and it's done automatically if needed [1]. - As far as I have checked, other frameworks (FFmpeg, GTK+), when importing or exporting - images from/to Vulkan, use exclusively UNORM formats, while SRGB formats are - ignored. 
- My conclusion is that Vulkan formats are related to how bits are stored in - memory rather than to their transfer functions (colorimetry). - This patch does two interrelated changes: - 1. It swaps certain color format maps to try first, in both - gst_vulkan_format_from_video_info() and gst_vulkan_format_from_video_info_2(), - the UNORM formats, when comparing its usage, and later checks for SRGB. - 2. It removes the code that checks for colorimetry in - gst_vulkan_format_from_video_info_2(), since it is not storage related. - 1. https://community.khronos.org/t/noob-difference-between-unorm-and-srgb/106132/7 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6797> - -2024-05-23 19:10:10 +0900 Seungha Yang <seungha@centricular.com> - - * tests/check/libs/d3d11device.cpp: - * tests/check/meson.build: - Revert "tests/d3d11: add concurrency test for gstd3d11device" - This reverts commit 8e0046a738070ca3c5441222da241a0582103fe7. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6904> - -2024-05-23 17:29:54 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d11/gstd3d11device.cpp: - * tests/check/libs/d3d11device.cpp: - Revert "d3d11device: protect device_lock vs device_new" - This reverts commit 926d5366b99b3498632a45147cfa329dbbf2cc30. - AcquireSRWLockExclusive seems to be acquiring the lock in exclusive mode - when the same lock is combined with write lock access. - Reverting the commit because this is unexpected behavior - and an unavoidable OS bug. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6904> - -2024-05-22 12:28:39 +0100 Daniel Stone <daniels@collabora.com> - - * gst-libs/gst/wayland/gstwldisplay.c: - * gst-libs/gst/wayland/meson.build: - wayland: Use wl_display_create_queue_with_name - Wayland 1.23 and above allow us to attach names to an event queue, which - are printed out when debugging. Do this to make the logs easier to read. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6900> - -2024-05-23 00:48:11 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/cuda/gstcudamemory.cpp: - cudamemory: Fix offset of subsampled planar formats - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6903> - -2024-05-21 16:59:10 +0300 Sebastian Dröge <sebastian@centricular.com> - - * docs/plugins/gst_plugins_cache.json: - * ext/aom/gstav1enc.c: - * ext/aom/gstav1enc.h: - av1enc: Add timebase property to allow configuring a specific timebase - This mirrors the same property in vp8enc / vp9enc. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6891> - -2024-05-21 16:58:26 +0300 Sebastian Dröge <sebastian@centricular.com> - - * ext/aom/gstav1enc.c: - av1enc: Use 1/90000 as timebase and don't use the framerate at all - This mirrors the behaviour in vp8enc / vp9enc and is generally more - useful than using any framerate from the caps as it provides some degree - of accuracy if the stream doesn't have timestamps perfectly according to - the framerate. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6891> - -2024-05-21 16:46:40 +0300 Sebastian Dröge <sebastian@centricular.com> - - * ext/aom/gstav1enc.c: - * ext/aom/gstav1enc.h: - av1enc: Fix last timestamp tracking so it actually works - This behaves exactly the same as in vp8enc / vp9enc now. 
- Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3546 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6891> - -2024-05-16 20:02:25 +0900 Elliot Chen <elliot.chen@nxp.com> - - * gst/autoconvert/gstbaseautoconvert.c: - autovideoconvert: fix double unref - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6865> - -2024-05-03 19:33:08 -0400 Olivier Crête <olivier.crete@collabora.com> - - * ext/onnx/decoders/gstssdobjectdetector.c: - * ext/onnx/decoders/gstssdobjectdetector.h: - ssdobjectdetector: Add size threshold to drop too big detections - There is a known "failure" mode where the SSD detector finds an object - which is the whole frame. So skip objects which are "too big" to avoid - this. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6810> - -2024-05-17 14:40:52 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvah264enc.c: - * sys/va/gstvah265enc.c: - vah26{4,5}enc: No need to assert i>=0 in frame_setup_from_gop() - The value is an uint here and never be negative. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6868> - -2024-05-15 15:32:43 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * sys/va/gstvah264enc.c: - * sys/va/gstvah265enc.c: - vah26xenc: factorize the encoder frame setup - A simple removal of duplicated code. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6854> - -2024-05-15 15:56:37 -0500 Brad Reitmeyer <brad.reitmeyer@resi.io> - - * docs/plugins/gst_plugins_cache.json: - * sys/nvcodec/gstnvh264dec.cpp: - nvcodec: Accept progressive-high profiles for h264 - Videos using progressive-high used to work on 1.16 before the parser added progressive-high. 
It looks like partial - support was added to nvcodec in https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1634 - but accidentally omitted gstnvh264dec - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6862> - -2024-05-16 14:51:46 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * tests/examples/va/vaenc-dynamic-reconfigure.c: - examples: va: add option for enabling a live stream - This is useful to test va encoding for live streams which should enable output - delay. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4359> - -2024-05-16 08:35:30 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvaav1enc.c: - * sys/va/gstvabaseenc.c: - * sys/va/gstvabaseenc.h: - * sys/va/gstvah264enc.c: - * sys/va/gstvah265enc.c: - * sys/va/gstvavp9enc.c: - vabaseenc: Set the correct min_buffers for propose_allocation() - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4359> - -2024-04-04 22:52:23 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvaav1enc.c: - va: av1enc: Set preferred_output_delay value to increase performance - Also calculate the correct latency. - In live mode, preferred_output_delay is disabled. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4359> - -2024-04-04 22:43:05 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvavp9enc.c: - va: vp9enc: Set preferred_output_delay value to increase performance - Also calculate the correct latency. - In live mode, preferred_output_delay is disabled. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4359> - -2024-04-04 22:33:44 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvah265enc.c: - va: h265enc: Set preferred_output_delay value to increase performance - Also calculate the correct latency. - In live mode, preferred_output_delay is disabled. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4359> - -2024-04-04 22:22:04 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvah264enc.c: - va: h264enc: Set preferred_output_delay value to increase performance - Also calculate the correct latency. - In live mode, preferred_output_delay is disabled. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4359> - -2023-04-24 16:56:16 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvabaseenc.c: - * sys/va/gstvabaseenc.h: - va: baseenc: Add is_live field to check the live stream - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4359> - -2024-04-04 21:25:51 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvabaseenc.c: - * sys/va/gstvabaseenc.h: - va: baseenc: Add a preferred_output_delay field for GPU parallel processing - The encoder can specify a preferred_output_delay value to get better throughput - performance. A higher delay may get better HW performance, but it may increase - the encoder and pipeline latency. - When the output queue length is smaller than preferred_output_delay, the encoder - will not block waiting for the encoding output. It will continue to prepare and - send more commands to the GPU, which may improve the encoder throughput performance. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4359> - -2023-04-06 19:57:29 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvabaseenc.c: - va: encoder: Do not continue when push_buffer gets error - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4359> - -2024-04-02 22:47:58 +0800 He Junyan <junyan.he@intel.com> - - * gst-libs/gst/va/vasurfaceimage.c: - va: libs: Use va_check_surface_has_status() to implement va_check_surface() - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4359> - -2023-04-06 19:39:04 +0800 He Junyan <junyan.he@intel.com> - - * gst-libs/gst/va/vasurfaceimage.c: - * gst-libs/gst/va/vasurfaceimage.h: - va: libs: Add va_check_surface_has_status() helper function - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4359> - -2023-04-06 19:33:02 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvaav1enc.c: - * sys/va/gstvabaseenc.h: - * sys/va/gstvah264enc.c: - * sys/va/gstvah265enc.c: - * sys/va/gstvavp9enc.c: - va: encoder: Use GstVaEncFrame as the base object for all Enc Frame - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4359> - -2024-05-14 14:44:45 +0300 Sebastian Dröge <sebastian@centricular.com> - - * gst/mpegtsmux/gstbasetsmux.c: - mpegtsmux: Allow pads to have no caps until they receive their first buffer - If the muxer times out because of the latency deadline it can happen - that some pads have no caps yet. In that case skip creation of streams - for these pads and create updated section tables once the first buffer - arrives later. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6823> - -2024-05-09 17:11:59 +0300 Sebastian Dröge <sebastian@centricular.com> - - * gst/mpegtsmux/gstbasetsmux.c: - mpegtsmux: Correctly time out and mux anyway in live pipelines - This makes sure that for sparse streams (KLV, DVB subtitles, ...) 
the - muxer does not wait until the next buffer is available for them but - times out on the latency deadline and outputs data. - For non-live pipelines it will still be necessary for upstream to - correctly produce gap events for sparse streams. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6823> - -2024-04-28 18:26:43 +0900 Seungha Yang <seungha@centricular.com> - - * sys/nvcodec/gstnvav1encoder.cpp: - * sys/nvcodec/gstnvav1encoder.h: - * sys/nvcodec/gstnvencobject.cpp: - * sys/nvcodec/gstnvencobject.h: - * sys/nvcodec/meson.build: - * sys/nvcodec/plugin.c: - nvcodec: Add AV1 encoder - Adding CUDA mode "nvav1enc", D3D11 mode "nvd3d11av1enc" and auto GPU - mode "nvautogpuav1enc" elements - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6754> - -2024-04-28 18:35:56 +0900 Seungha Yang <seungha@centricular.com> + * sys/va/gstvapluginutils.c: + video: Give better names to buffer pools + Making debugging simpler + Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8617> - * sys/nvcodec/gstnvh264encoder.cpp: - * sys/nvcodec/gstnvh265encoder.cpp: - * sys/nvcodec/plugin.c: - nvcodec: Rename nvcuda{h264,h265}enc to nv{h264,h265}enc - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6754> - -2024-05-12 18:49:09 +0900 Seungha Yang <seungha@centricular.com> - - * sys/nvcodec/gstnvbaseenc.c: - * sys/nvcodec/gstnvbaseenc.h: - * sys/nvcodec/gstnvenc.c: - * sys/nvcodec/gstnvenc.h: - * sys/nvcodec/gstnvh264enc.c: - * sys/nvcodec/gstnvh264enc.h: - * sys/nvcodec/gstnvh265enc.c: - * sys/nvcodec/gstnvh265enc.h: - * sys/nvcodec/meson.build: - * sys/nvcodec/plugin.c: - nvcodec: Remove old nvenc implementation - Stop shipping deprecated implementation - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6754> - -2024-04-28 17:39:39 +0900 Seungha Yang <seungha@centricular.com> - - * sys/nvcodec/gstnvenc.c: - nvcodec: Bump minimum supported SDK 
version to 10.0 - New preset (i.e., P1 ~ P7) requires SDK 10.0 or newer - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6754> - -2024-04-24 01:47:51 +0900 Seungha Yang <seungha@centricular.com> - - * sys/nvcodec/cuviddec.h: - * sys/nvcodec/gstnvenc.c: - * sys/nvcodec/gstnvh264encoder.cpp: - * sys/nvcodec/gstnvh265encoder.cpp: - * sys/nvcodec/nvEncodeAPI.h: - * sys/nvcodec/nvcuvid.h: - nvcodec: Update SDK header to 12.0.16 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6754> - -2024-05-12 21:56:23 +0900 Seungha Yang <seungha@centricular.com> - - * sys/nvcodec/gstnvencoder.cpp: - * sys/nvcodec/gstnvencoder.h: - * sys/nvcodec/gstnvh264encoder.cpp: - * sys/nvcodec/gstnvh265encoder.cpp: - nvencoder: Enhance legacy encoding profile mapping - Updated based on the NVENC Preset Migration Guide - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6754> - -2024-05-12 18:21:27 +0900 Seungha Yang <seungha@centricular.com> - - * sys/nvcodec/gstnvencoder.cpp: - * sys/nvcodec/gstnvencoder.h: - * sys/nvcodec/gstnvh264encoder.cpp: - * sys/nvcodec/gstnvh265encoder.cpp: - nvencoder: Update property names and default value - ... to be the same as old NVENC elements - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6754> - -2024-04-12 21:48:13 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvah265enc.c: - vah265enc: Let FORCE_KEYFRAME be IDR frame rather than just I frame - The FORCE_KEYFRAME frame which has GST_VIDEO_CODEC_FRAME_FLAG_FORCE_KEYFRAME - bit set should be the sync point. So we should let it be an IDR frame to begin - a new GOP, rather than just promote it to an I frame. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6619> - -2024-04-09 23:40:41 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvah264enc.c: - vah264enc: Let FORCE_KEYFRAME be IDR frame rather than just I frame - The FORCE_KEYFRAME frame which has GST_VIDEO_CODEC_FRAME_FLAG_FORCE_KEYFRAME - bit set should be the sync point. So we should let it be an IDR frame to begin - a new GOP, rather than just promote it to an I frame. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6619> - -2024-04-12 16:09:26 +0800 He Junyan <junyan.he@intel.com> - - * tests/examples/va/vaenc-dynamic-reconfigure.c: - examples: vaenc-dynamic: support force key frame setting - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6619> - -2024-05-14 10:54:03 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvaav1enc.c: - * sys/va/gstvabaseenc.c: - * sys/va/gstvah264enc.c: - * sys/va/gstvah265enc.c: - * sys/va/gstvavp9enc.c: - vaenc: Allow to set the max-qp and min-qp for QVBR and ICQ modes - In fact, these settings can work well. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6841> - -2024-05-14 10:31:05 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvah264enc.c: - * sys/va/gstvah265enc.c: - vah26{4,5}enc: Set the qp_p and qp_b to qp_i value in ICQ and QVBR - Set the P and B frame qp to I frame value to avoid generating delta - QP between different frame types. For ICQ and QVBR modes, we can - only set the qpi value, so the qpp and qpb values should be set to - the same value as the qpi. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6841> - -2024-05-13 21:27:05 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12decoder.cpp: - d3d12decoder: Fix SDK debug layer warning - Address below message reported by SDK debug layer. - ID3D12Device::CheckFeatureSupport: Unsupported Decode Profile Specified. 
- Use ID3D12VideoDevice::CheckFeatureSupport with D3D12_FEATURE_VIDEO_DECODE_PROFILES - to retrieve a list of supported profiles - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6839> - -2024-05-11 13:29:36 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvavp9enc.c: - vavp9enc: Do not use base class video info to calculate coded size - We should use our in_info which is an adjusted value to calculate - that coded size. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6826> - -2024-03-16 19:37:35 +0100 Mark Nauwelaerts <mnauw@users.sourceforge.net> - - * gst/dvdspu/gstdvdspu.c: - * gst/dvdspu/gstspu-pgs.c: - * gst/dvdspu/gstspu-pgs.h: - dvdspu: use multiple minimal sized PGS overlay rectangles - ... rather than possibly 1 large at full video size - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6413> - -2024-05-12 18:15:05 +0900 Seungha Yang <seungha@centricular.com> - - * sys/nvcodec/gstnvh264encoder.cpp: - * sys/nvcodec/gstnvh265encoder.cpp: - nvencoder: Fix maximum QP value setting - Fixing typo - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6827> - -2024-05-06 14:55:32 +0300 Sebastian Dröge <sebastian@centricular.com> - - * meson_options.txt: - * sys/aja/gstajasrc.cpp: - * sys/aja/meson.build: - aja: Update to AJA NTV2 17.0.1 - Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3289 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6808> - -2024-04-15 13:38:15 +0200 Robert Mader <robert.mader@collabora.com> - - * sys/v4l2codecs/gstv4l2decoder.c: - v4l2codecs: decoder: Reorder caps to prefer DMA_DRM ones - Certain V4L2 fourccs don't (yet) have DRM counterparts, in which case - we can't create DMA_DRM caps for them. This is usually the case for - specific tilings, which are represented as modifiers for DMA formats. - While using these tilings is generally preferable - because of e.g. 
- lower memory usage - it can result in additional conversion steps when - interacting with DMA based APIs such as GL, Vulkan or KMS. In such cases - using a DMA compatible format usually ends up being the better option. - Before the addition of DMA_DRM caps, this was what playbin3 ended up - requesting in various cases - e.g. preferring NV12 over NV12_4L4 - but - the addition of DMA_DRM caps seems to confuse the selection logic. - As a simple and quite robust solution, assume that peers supporting - DMA_DRM caps always prefer these and reorder the caps accordingly. - In the future we plan to have a translation layer for cases where - there is a matching fourcc+modifier pair for a V4L2 fourcc, ensuring - optimal results. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6645> - -2024-05-04 11:56:05 +0300 Sebastian Dröge <sebastian@centricular.com> - - * gst-libs/gst/play/gstplay.c: - * gst-libs/gst/player/gstplayer.c: - play: Mention that gst_play_new() also initializes GStreamer - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6801> - -2024-05-04 11:54:16 +0300 Sebastian Dröge <sebastian@centricular.com> - - * gst-libs/gst/play/gstplay.c: - * gst-libs/gst/player/gstplayer.c: - play: Initialize debug category and error quark in class_init - Doing it in gst_play_new() means that bindings that directly call - g_object_new() with the GType wouldn't end up initializing both. - This affects at least the Python and GJS bindings. - gst_init() is nonetheless only called from gst_play_new() once because - calling it from class_init would likely lead to problems as that's - called from somewhere in the middle of GObject. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6801> - -2024-05-07 10:35:26 +0200 Emil Pettersson <khwaaj@gmail.com> - - * sys/applemedia/vtdec.c: - vtdec: Fix deadlock when negotiating format change - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6811> - -2024-03-12 14:25:31 +1100 Matthew Waters <matthew@centricular.com> - - * docs/plugins/gst_plugins_cache.json: - * ext/closedcaption/gstcccombiner.c: - * ext/closedcaption/gstcccombiner.h: - cccombiner: add support for timing out captions without EOS - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6335> - -2024-05-07 11:18:10 +0200 Piotr Brzeziński <piotr@centricular.com> - - * tests/check/elements/audiovisualizer.c: - * tests/check/meson.build: - audiovisualizer: Add simple pipeline unit test - Creates pipelines with each of our visualizer elements and runs them with 20 buffers from audiotestsrc. - Added after a completely broken (segfaulting) synaescope went unnoticed for a while. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6800> - -2024-04-29 18:24:36 +0100 Tim-Philipp Müller <tim@centricular.com> - - * tests/check/elements/unixfd.c: - unixfd: disable flaky test_unixfd_segment for now - It's a problem with the test, and a proper fix might - require new API, so just disable it for now. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6813> - -2024-04-18 17:07:25 +0300 Sebastian Dröge <sebastian@centricular.com> - - * gst-libs/gst/codecs/gstav1decoder.c: - * gst-libs/gst/codecs/gsth264decoder.c: - * gst-libs/gst/codecs/gsth265decoder.c: - * gst-libs/gst/codecs/gstmpeg2decoder.c: - * gst-libs/gst/codecs/gstvp8decoder.c: - * gst-libs/gst/codecs/gstvp9decoder.c: - * gst-libs/gst/d3d12/gstd3d12fencedatapool.cpp: - * gst/codectimestamper/gstcodectimestamper.c: - * gst/mpegpsmux/psmuxstream.c: - * sys/aja/gstajacommon.cpp: - * sys/aja/gstajacommon.h: - * sys/aja/gstajasink.cpp: - * sys/aja/gstajasink.h: - * sys/aja/gstajasrc.cpp: - * sys/aja/gstajasrc.h: - * sys/applemedia/vtdec.c: - * sys/applemedia/vtdec.h: - * sys/applemedia/vtenc.c: - * sys/applemedia/vtenc.h: - * sys/d3d12/gstd3d12decoder.cpp: - * sys/d3d12/gstd3d12encoder.cpp: - * sys/decklink/gstdecklink.cpp: - * sys/decklink/gstdecklinkaudiosrc.cpp: - * sys/decklink/gstdecklinkaudiosrc.h: - * sys/decklink/gstdecklinkvideosrc.cpp: - * sys/decklink/gstdecklinkvideosrc.h: - * sys/mediafoundation/gstmfcapturewinrt.cpp: - * sys/mediafoundation/gstmfsourcereader.cpp: - * sys/v4l2codecs/gstv4l2decoder.c: - * sys/va/gstvabaseenc.c: - * sys/va/gstvabaseenc.h: - gst: Move GstQueueArray as GstVecDeque to core - And change lengths and indices from guint to gsize for a more correct type. - Also deprecate GstQueueArray and implement it in terms of GstVecDeque. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6779> - -2024-05-06 20:50:21 +1000 Matthew Waters <matthew@centricular.com> - - * ext/webrtc/gstwebrtcbin.c: - webrtc: request-aux-sender, only sink floating refs - Don't add an extra ref if non-floating as that ref will never be - unreffed. - gst_bin_add() is transfer floating (alias to transfer none). - Fixes a leak when a non-floating ref was provided as a return value in - the request-aux-sender signal. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6807> - -2024-05-04 19:52:59 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3dshader/converter-hlsl/PSMain_converter.hlsl: - d3dshader: Fix gamma and primaries conversion pixel shader - Fixing regression introduced by commit f52ecb960792257b7394a6dc3182b6747c902b5b - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6802> - -2024-04-22 09:48:14 +0200 Stéphane Cerveau <scerveau@igalia.com> - - * gst-libs/gst/vulkan/gstvkvideo-private.c: - * gst-libs/gst/vulkan/gstvkvideo-private.h: - * gst-libs/gst/vulkan/gstvkvideoutils.c: - * gst-libs/gst/vulkan/gstvkvideoutils.h: - * tests/check/libs/vkvideoh265encode.c: - * tests/check/meson.build: - tests: add Vulkan H.265 encode - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6676> - -2023-07-10 14:44:05 +0200 Stéphane Cerveau <scerveau@igalia.com> - - * gst-libs/gst/vulkan/gstvkvideoutils.c: - * gst-libs/gst/vulkan/gstvkvideoutils.h: - * tests/check/libs/vkcodecparams_h264.c: - * tests/check/libs/vkcodecparams_h265.c: - * tests/check/libs/vkvideodecode.c: - * tests/check/libs/vkvideoh264encode.c: - * tests/check/meson.build: - tests: add Vulkan H.264 encode - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6676> - -2024-02-01 20:43:04 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkconfig.h.meson: - * gst-libs/gst/vulkan/gstvkdevice.c: - * gst-libs/gst/vulkan/gstvkencoder-private.c: - * gst-libs/gst/vulkan/gstvkencoder-private.h: - * gst-libs/gst/vulkan/gstvkimagebufferpool.c: - * gst-libs/gst/vulkan/gstvkimagebufferpool.h: - * gst-libs/gst/vulkan/gstvkoperation.c: - * gst-libs/gst/vulkan/gstvkoperation.h: - * gst-libs/gst/vulkan/gstvkvideo-private.c: - * gst-libs/gst/vulkan/gstvkvideo-private.h: - * gst-libs/gst/vulkan/gstvkvideoutils.h: - * gst-libs/gst/vulkan/meson.build: - * 
gst-libs/gst/vulkan/vulkan_fwd.h: - vkencoder: add gstvkencoder helper object - Add a gstvkencoder class to support Vulkan encoders such as H26x - formats. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6676> - -2024-04-27 01:13:18 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d11/gstd3d11-private.h: - * gst-libs/gst/d3d11/gstd3d11converter.cpp: - * gst-libs/gst/d3d11/gstd3d11device.cpp: - * gst-libs/gst/d3dshader/gstd3dshadercache.cpp: - d3d11: Add support for Y216 and Y416 formats - We were mapping Y212 and Y412 formats to DXGI_FORMAT_{Y216,Y416}. - Reuse already implemented shaders for the new formats - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6745> - -2024-04-27 00:37:52 +0900 Seungha Yang <seungha@centricular.com> - - * docs/plugins/gst_plugins_cache.json: - video: Add Y216 and Y416 formats - The same memory layout as Y212 and Y412 formats, respectively - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6745> - -2024-05-03 22:57:57 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvah265enc.c: - vah265enc: Fix a memory leak when destroying the object - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6791> - -2024-05-03 12:08:19 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvah265enc.c: - vah265enc: Use a FIFO queue to generate DTS - The base parse element will infer the DTS by itself, so we need to offset the DTS - before the PTS in order to avoid the DTS being bigger than the PTS. We now use - a FIFO queue to store all PTS values and assign them to the DTS with an offset. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6791> - -2024-05-02 14:18:16 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvabaseenc.c: - * sys/va/gstvabaseenc.h: - * sys/va/gstvah264enc.c: - vah264enc: Use a FIFO queue to generate DTS - The base parse element will infer the DTS by itself, so we need to offset the DTS - before the PTS in order to avoid the DTS being bigger than the PTS. We now use - a FIFO queue to store all PTS values and assign them to the DTS with an offset. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6791> - -2024-04-30 16:55:05 +0200 Stéphane Cerveau <scerveau@igalia.com> - - * ext/vulkan/vkh264dec.c: - * ext/vulkan/vkh265dec.c: - * gst-libs/gst/vulkan/gstvkdecoder-private.c: - vkdecoder: change dstmask in decoder frame barrier - Use VK_PIPELINE_STAGE_2_ALL_COMMANDS_BIT instead of - the specific VK_PIPELINE_STAGE_2_VIDEO_DECODE_BIT_KHR - Fix for VUID-vkCmdPipelineBarrier2-srcStageMask-03849 - pDependencyInfo->pImageMemoryBarriers[0].srcStageMask - (VK_PIPELINE_STAGE_2_VIDEO_DECODE_BIT_KHR) is not compatible with - the queue family properties - (VK_QUEUE_GRAPHICS_BIT|VK_QUEUE_COMPUTE_BIT|VK_QUEUE_TRANSFER_BIT| - VK_QUEUE_SPARSE_BINDING_BIT|VK_QUEUE_PROTECTED_BIT) of this - command buffer. The Vulkan spec states: The srcStageMask member - of any element of the pMemoryBarriers, pBufferMemoryBarriers, or - pImageMemoryBarriers members of pDependencyInfo must only - include pipeline stages valid for the queue family that was - used to create the command pool that commandBuffer was allocated - from ( - https://www.khronos.org/registry/vulkan/specs/1.3-extensions/ - html/vkspec.html#VUID-vkCmdPipelineBarrier2-srcStageMask-03849) - The frame barrier should use a compatible srcStageMask for all - the queues. 
- Remove reset_pipeline_stage_mask as it is redundant - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6780> - -2024-05-02 11:51:03 +0200 Rafael Caricio <rcaricio@netflix.com> - - * gst/videoparsers/gstav1parse.c: - av1parse: No default will trigger warning at compile time - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6778> - -2024-05-01 13:40:06 +0200 Rafael Caricio <rcaricio@netflix.com> - - * gst/videoparsers/gstav1parse.c: - av1parse: Add max-level and max-tier to caps - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6778> - -2024-04-30 13:24:42 +0200 Rafael Caricio <rcaricio@netflix.com> - - * gst/videoparsers/gstav1parse.c: - av1parse: Add level and tier to caps - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6778> - -2024-02-29 12:31:47 +0100 Loïc Le Page <llepage@igalia.com> - - * ext/soundtouch/gstbpmdetect.cc: - * ext/soundtouch/gstpitch.cc: - * ext/soundtouch/meson.build: - * tests/validate/meson.build: - * tests/validate/pitch/change_pitch_properties.validatetest: - * tests/validate/pitch/change_pitch_properties/flow-expectations/log-pitch-src-expected: - * tests/validate/pitch/maintain_pitch_with_variable_playback_rates.validatetest: - * tests/validate/pitch/maintain_pitch_with_variable_playback_rates/flow-expectations/log-pitch-src-expected: - * tests/validate/pitch/pitch-test.meta: - * tests/validate/pitch/reverse.change_pitch_properties.validatetest: - * tests/validate/pitch/reverse.change_pitch_properties/flow-expectations/log-pitch-src-expected: - * tests/validate/pitch/reverse.maintain_pitch_with_variable_playback_rates.validatetest: - * tests/validate/pitch/reverse.maintain_pitch_with_variable_playback_rates/flow-expectations/log-pitch-src-expected: - pitch: add validate tests - Add pitch tests with different forward and backward playback rates. 
- Those tests depend on the libSoundTouch version to validate the buffer - checksums. The current version uses libSoundTouch 2.3.2; use the - `--force-fallback-for=soundtouch` meson option to build using the same - version. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6247> - -2024-02-29 12:27:23 +0100 Loïc Le Page <llepage@igalia.com> - - * tests/files/audio-8s-then-reverse.ogg: - * tests/interactive/meson.build: - * tests/interactive/pitch-playback-test.c: - pitch: add interactive test - Test pitch with different forward and backward playback rates. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6247> - -2024-02-28 19:34:15 +0100 Loïc Le Page <llepage@igalia.com> - - * ext/soundtouch/gstpitch.cc: - * ext/soundtouch/gstpitch.hh: - pitch: make it work with reverse playback - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6247> - -2024-02-28 18:47:58 +0100 Loïc Le Page <llepage@igalia.com> - - * ext/soundtouch/gstpitch.cc: - pitch: fix multithread accesses - - fully protect accesses to the libsoundtouch API that is not - thread-safe. - - fully protect accesses to GstPitch members that could be read by a - downstream query thread while written by an upstream streaming thread - or a user thread. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6247> - -2024-02-28 14:05:11 +0100 Loïc Le Page <llepage@igalia.com> - - * ext/soundtouch/gstpitch.cc: - * ext/soundtouch/gstpitch.hh: - pitch: refactor some variable names - - use the `GST_PITCH_GET_PRIVATE` accessor when needed - - rename `out_seg_rate` to `output_rate` to use the same name as the parameter - - rename `seg_arate` to `segment_applied_rate` to improve readability - - apply gst-indent to gstpitch.hh/cc - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6247> - -2024-02-28 13:21:51 +0100 Loïc Le Page <llepage@igalia.com> - - * ext/soundtouch/gstpitch.cc: - pitch: fix time ratio computation - When changing the playback rate, the output segment was not correctly - calculated because the stream time ratio was computed using the previous - input segment rate instead of using the actual rate. This was producing - wrong results for the output segment start and end values. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6247> - -2024-05-01 00:12:42 +0900 Seungha Yang <seungha@centricular.com> - - * sys/qsv/gstqsvh264dec.cpp: - * sys/qsv/gstqsvh265dec.cpp: - qsvh264dec,qsvh265dec: Fix nalu leaks - Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3514 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6781> - -2024-04-30 18:15:56 +0200 Piotr Brzeziński <piotr@centricular.com> - - * sys/applemedia/vtdec.c: - vtdec: Fix PAUSED->READY deadlock when output loop is running - Makes sure the GST_PAD_STREAM_LOCK is not taken when the pad is being deactivated. - The lack of this was causing deadlocks when stopping the pipeline right after producing first buffers. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6783> - -2024-04-30 18:08:27 +0200 Stéphane Cerveau <scerveau@igalia.com> - - * ext/vulkan/vkh264dec.c: - * ext/vulkan/vkh265dec.c: - vkh26xdec: Fix stop memory leak - The h26xdecoder 'stop' method was not called - as the vulkan h26x class rewires the video decoder - 'stop' base method to its own one. - It was causing some memory leaks such as a dangling parser - and dpb in the h26xdecoder base class. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6782> - -2024-04-30 11:20:54 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvaav1enc.c: - * sys/va/gstvabaseenc.c: - * sys/va/gstvabaseenc.h: - * sys/va/gstvavp9enc.c: - vabaseenc: delete the useless frame counter fields - They were used to calculate the PTS and DTS before, but have no usage now. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6773> - -2024-04-30 11:12:05 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvabaseenc.c: - vabaseenc: Do not set the min_pts - Because all the va encoders improved their PTS/DTS algorithm, now - it is impossible to generate a negative DTS. So no underflow will happen - and we do not need to set a 1000 hour offset now. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6773> - -2024-04-26 17:12:03 +0800 Mengkejiergeli Ba <mengkejiergeli.ba@intel.com> - - * sys/msdk/gstmsdkcaps.c: - msdk: Add Y212 format to hevc encoder static raw caps - Note that static caps are used for the old MSDK dispatch. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6750> - -2024-04-22 15:03:56 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvah265enc.c: - vah265enc: Set the correct buffer flag for output - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6703> - -2024-04-22 14:44:53 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvah264enc.c: - vah264enc: Set the correct buffer flag for output - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6703> - -2024-04-21 14:55:31 +0800 Seungha Yang <seungha@centricular.com> - - * sys/va/gstvabaseenc.c: - vabaseenc: Fix frame leak on error path - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6703> - -2024-04-21 14:48:02 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvah265enc.c: - vah265enc: Do not touch the PTS of output frame - 1. The PTS of all frames should not be changed. - 2. Just update the DTS based on the PTS. For a frame which is not - reordered, the DTS is equal to the PTS. For a frame which is reordered, - the DTS is equal to the previous DTS. For example: - Input: F0D0, P0 -- F1D1, P1 -- F2D2, P2 -- F3D3, P3 - Output: F0I, D0, P0 -- F3P, D0, P3 -- F1B, D1, P1 -- F2B, D2, P2 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6703> - -2024-04-21 12:51:31 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvah264enc.c: - vah264enc: Do not touch the PTS of output frame - 1. The PTS of all frames should not be changed. - 2. Just update the DTS based on the PTS. For a frame which is not - reordered, the DTS is equal to the PTS. For a frame which is reordered, - the DTS is equal to the previous DTS. 
For example: - Input: F0D0, P0 -- F1D1, P1 -- F2D2, P2 -- F3D3, P3 - Output: F0I, D0, P0 -- F3P, D0, P3 -- F1B, D1, P1 -- F2B, D2, P2 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6703> - -2024-04-28 23:37:55 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12decoder.cpp: - d3d12decoder: Fix d3d12 resource copy - It was copying the resource onto itself - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6753> - -2024-04-28 23:34:37 +0900 Seungha Yang <seungha@centricular.com> - - * sys/nvcodec/gstnvh265encoder.cpp: - nvh265encoder: Fix crash with RGBx and BGRx - Both formats need to be handled in switch - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6752> - -2024-04-27 22:54:14 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12ipcclient.cpp: - d3d12ipcclient: Fix deadlock when copying texture - Fixing deadlock in the case below - * GC lock is taken by background thread, and the background thread calls - gst_d3d12_ipc_client_release_imported_data() which takes ipc lock - * ipc lock is already taken in ipc thread and trying to push GC data - via gst_d3d12_command_queue_set_notify() - * gst_d3d12_command_queue_set_notify() is trying to take GC lock - but it's already taken by background thread - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6749> - -2024-04-27 22:02:59 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12ipcsink.cpp: - d3d12ipcsink: Handle external fence - Waits for the external fence before sending the frame to the peer. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6749> - -2024-04-27 23:34:35 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12decoder.cpp: - d3d12decoder: Remove CPU-side waiting - Sets decoder command queue's fence to memory instead of waiting - from decoder's output thread. 
CPU-side waiting will happen - only if download is required. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6749> - -2024-04-27 21:32:23 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12dxgicapture.cpp: - d3d12screencapturesrc: Fix output to non-d3d12 element - Configures upload/download flags to memory after write - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6749> - -2024-04-27 20:23:32 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12dxgicapture.cpp: - d3d12screencapturesrc: Release and flush d3d11 objects before d3d12 - Fixing device-removed error when closing pipeline - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6749> - -2024-04-27 20:10:53 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12memory.cpp: - d3d12memory: Do not wait external fence on map() - Only wait for external fence if upload or download is required. - Waiting for external fence in case of d3d12 mapping is caller's - responsibility - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6749> - -2024-04-27 23:30:40 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12encoder.cpp: - d3d12encoder: Handle external fence explicitly - Waits for external fence if any - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6749> - -2024-04-27 19:46:51 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12converter.cpp: - * gst-libs/gst/d3d12/gstd3d12converter.h: - * sys/d3d12/gstd3d12compositor.cpp: - * sys/d3d12/gstd3d12convert.cpp: - * sys/d3d12/gstd3d12dxgicapture.cpp: - * sys/d3d12/gstd3d12testsrc.cpp: - * sys/d3d12/gstd3d12window.cpp: - * sys/dwrite/gstdwriterender_d3d12.cpp: - d3d12converter: Add support for GPU-side external fence waiting - Ideally, GPU waiting should be scheduled just before executing command list. 
- But handling the case outside of the converter is a bit complicated. - Under the assumption that the constructed command list will be executed - immediately, schedule GPU-side waiting inside of the conversion method - to simplify the flow. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6749> - -2024-04-27 18:44:26 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12memory.h: - d3d12memory: Use explicit type for GST_MAP_D3D12 define - The C++ compiler will complain about the type mismatch between int and GstMapFlags - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6749> - -2024-04-27 23:29:40 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12frame.cpp: - * gst-libs/gst/d3d12/gstd3d12frame.h: - d3d12frame: Extract external fence from memory and wait helper function - Adding gst_d3d12_frame_fence_{gpu,cpu}_wait() methods - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6749> - -2024-04-27 17:54:38 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12device-private.h: - * gst-libs/gst/d3d12/gstd3d12device.cpp: - * gst-libs/gst/d3d12/gstd3d12frame.cpp: - * gst-libs/gst/d3d12/gstd3d12memory.cpp: - * sys/d3d12/gstd3d12decoder.cpp: - * sys/d3d12/gstd3d12encoder.cpp: - * sys/d3d12/gstd3d12ipcclient.cpp: - * sys/dwrite/gstdwriterender_d3d12.cpp: - d3d12: Update copy_texture_region() method - Pass external fence value if any and allow passing fence - data so that dependent resources can be released - once copy is done - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6749> - -2024-04-27 17:44:36 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12commandqueue.cpp: - * gst-libs/gst/d3d12/gstd3d12commandqueue.h: - d3d12commandqueue: Add execute_wait_and_command_lists() method - ... 
so that GPU-side waiting and executing can be scheduled at once - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6749> - -2024-04-27 17:28:47 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12memory.cpp: - * gst-libs/gst/d3d12/gstd3d12memory.h: - d3d12memory: Add get_external_fence() method - Required for callers to wait on the external fence without the map() method - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6749> - -2024-04-27 00:07:53 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12bufferpool.cpp: - d3d12bufferpool: Sync all memory objects on acquire_buffer() - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6749> - -2024-04-19 00:30:47 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvabaseenc.c: - vabaseenc: No need to call _finish_subframe() - After vaav1enc is aligned to TU, there is no case that generates - multiple outputs for one input. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6688> - -2024-04-19 00:25:25 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvavp9enc.c: - vavp9enc: Set the correct buffer flag for output - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6688> - -2024-04-19 00:22:50 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvaav1enc.c: - vaav1enc: Set the correct buffer flag for output - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6688> - -2024-04-19 00:14:15 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvaav1enc.c: - vaav1enc: Do not change the PTS/DTS of output frames - The AV1 encoder does not reorder the frames, so there is no need - to change the timestamp-related metadata of output frames; just - inherit it from the input frames. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6688> - -2024-04-18 22:30:20 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvaav1enc.c: - vaav1enc: Change the alignment of output to "tu" - The current output alignment is "frame", which may cause some issues - for PTS and DTS calculation. We now change the alignment to "tu", - and this is also the alignment mode for av1enc and svtav1enc. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6688> - -2024-04-15 09:51:53 -0400 Xavier Claessens <xavier.claessens@collabora.com> - - * gst/unixfd/gstunixfdsrc.c: - unixfd: Close file descriptors on error - After calling g_unix_fd_list_steal_fds() and before calling - gst_fd_allocator_alloc(), we are responsible for closing those fds. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6532> - -2024-04-03 10:28:28 -0400 Xavier Claessens <xavier.claessens@collabora.com> - - * gst/unixfd/gstunixfdsink.c: - * gst/unixfd/gstunixfdsrc.c: - * tests/check/elements/unixfd.c: - unixfdsink: Take segment into account when converting timestamps - Also rename `calculate_timestamp()` to `to_monotonic()` and - `from_monotonic()` which better describe what they do. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6532> - -2024-04-03 13:17:01 -0400 Xavier Claessens <xavier.claessens@collabora.com> - - * gst/unixfd/gstunixfdsrc.c: - unixfd: Allow sending buffers with no memories - There is no reason to not allow it, and it is useful for simple unit - tests. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6532> - -2024-04-25 14:13:30 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * ext/vulkan/vkh264dec.c: - * ext/vulkan/vkh265dec.c: - * gst-libs/gst/vulkan/gstvkdecoder-private.c: - * gst-libs/gst/vulkan/gstvkdecoder-private.h: - * gst-libs/gst/vulkan/gstvkqueue.c: - * gst-libs/gst/vulkan/gstvkqueue.h: - * gst-libs/gst/vulkan/vulkan_fwd.h: - * tests/check/libs/vkvideodecode.c: - vulkan: replace gst_vulkan_queue_create_decoder() with gst_vulkan_decoder_new_from_queue() - The purpose of this refactor is to hide decoding code from the public API. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6723> - -2024-04-23 14:51:27 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkdecoder-private.c: - * gst-libs/gst/vulkan/gstvkdecoder-private.h: - * gst-libs/gst/vulkan/vulkan_fwd.h: - vulkan: conceal unused decoder symbols - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6723> - -2024-04-23 14:48:30 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * ext/vulkan/vkh264dec.c: - * ext/vulkan/vkh265dec.c: - * gst-libs/gst/vulkan/gstvkdecoder-private.c: - * gst-libs/gst/vulkan/gstvkdecoder-private.h: - * gst-libs/gst/vulkan/gstvkqueue.c: - * gst-libs/gst/vulkan/gstvkqueue.h: - * gst-libs/gst/vulkan/meson.build: - * tests/check/libs/vkvideodecode.c: - vulkan: conceal decoder from public API - Since we don't want to expose video decoding API outside of GStreamer, the - header is removed from installation and both source files are renamed as - -private. - The header must remain in gst-libs because it is referred to by GstVulkanQueue, - which is the decoder factory. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6723> - -2024-04-24 15:44:41 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/va/gstvaallocator.c: - vaallocator: disable derived altogether for Mesa <23.3 - First, derived mapping was disabled for P010 formats, but also there's an - issue with interlaced frames. - It would be possible to disable derived mapping only for interlaced (H.264 - decoder and vadeinterlace) but it would spread the hacks across the code. It's - simpler and more contained to disable derived mapping completely for Mesa <23.3 - Fixes: #3450 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6729> - -2024-04-25 11:50:03 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/va/gstvavideoformat.c: - va: videoformat: use video library to get DRM fourcc - Instead of duplicating the GStreamer format to DRM fourcc mapping, this patch - uses the GstVideo library helpers. This doubles the cost of the lookup, - since the two lists are traversed, but it's less error prone. - Partially reverts commit 547f3e8622a39ce971c272f2c31eab8f1fdfbb45. - Fixes: #3354 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6731> - -2024-04-17 18:37:30 +0900 Hou Qi <qi.hou@nxp.com> - - * gst-libs/gst/wayland/gstwlwindow.c: - wlwindow: free staged buffer in gst_wl_window_finalize - If waylandsink's received buffer rate is high enough to cause frame - drops, the cached staged buffer will be replaced when the next buffer - needs to be rendered and freed after redraw. But there is a - chance of a memory leak if playback ends without a redraw. So the - staged buffer needs to be freed in gst_wl_window_finalize(). 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6670> - -2024-04-26 00:35:54 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12dxgicapture.cpp: - * sys/d3d12/gstd3d12screencapturesrc.cpp: - d3d12screencapturesrc: Performance improvement - Process captured frame using d3d11 instead of d3d12, and use shared - fence when copying processed d3d11 texture to d3d12 resource. - In this way, capture CPU thread does not need to wait for fence signal. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6733> - -2024-04-24 00:52:18 +0530 Nirbheek Chauhan <nirbheek@centricular.com> - - * ext/rsvg/gstrsvgdec.c: - * ext/rsvg/gstrsvgoverlay.c: - rsvg: Disable deprecations instead of porting to new librsvg API - `rsvg_handle_get_dimensions()` and `rsvg_handle_render_cairo()` are - deprecated, and the replacement librsvg functions as specified in the - migration guide are `rsvg_handle_get_intrinsic_size_in_pixels()` and - `rsvg_handle_render_document()`. - However, those are not drop-in replacements, and actually have - breaking semantics for our use-case: - 1. `intrinsic_size_in_pixels()` requires SVGs to have width+height or - the viewBox attribute, but `get_dimensions()` does not. It will - calculate the geometry based on element extents recursively. - 2. `render_cairo()` simply renders the SVG at its intrinsic size on - the specified surface starting at the top-left, maintaining - whatever transformations have been applied to the cairo surface, - including distorted aspect ratio. - However, `render_document()` does not do that, it is specifically - for rendering at the specified aspect ratio inside the specified - viewport, and if you specify a viewPort that does not match the - aspect ratio of the SVG, librsvg will center it. - Matching the old behaviour with the new APIs is a lot of work for no - benefit. 
We'd be duplicating code that is already there in librsvg in - one case and undoing work that librsvg is doing in the other case. - The aspect ratio handling in this element is also kinda atrocious. - There is no option to scale the SVG while maintaining the aspect - ratio. Overall, the element needs a rewrite. - Let's just disable deprecations. The API is not going anywhere. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6726> - -2024-04-24 00:51:23 +0530 Nirbheek Chauhan <nirbheek@centricular.com> - - * ext/rsvg/gstrsvgdec.c: - * ext/rsvg/gstrsvgdec.h: - Revert "rsvgdec: Fix uses of librsvg functions deprecated since 2.52" - This reverts commit b8db4739551401c653f2ae55f39d1ab77e3a5ef5. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6726> - -2024-04-17 18:45:34 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkswapper.c: - vkswapper: choose color space according to format - The swapper surface contains the color space for each supported format. Instead - of hard-coding the color space, return the value associated with the - negotiated Vulkan format. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6725> - -2024-03-06 12:59:25 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * tests/check/libs/vkcodecparams.c: - * tests/check/libs/vkvideodecode.c: - tests: vulkan: split decoder test and parameters - Thus they can be reused for the encoder test. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6721> - -2024-04-24 14:42:31 +0900 Elliot Chen <elliot.chen@nxp.com> - - * gst-libs/gst/play/gstplay.c: - gstplay: query seek information again in playing state for live stream - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6681> - -2024-04-24 01:02:15 +0900 Haihua Hu <jared.hu@nxp.com> - - * gst-libs/gst/wayland/gstwlwindow.c: - wlwindow: clear configure mutex and cond when finalize - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6722> - -2024-04-23 11:00:21 +0200 Edward Hervey <edward@centricular.com> - - * tools/utils.c: - bad/utils: Simplify get_file_extension - By using g_strrstr - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6715> - -2024-04-23 10:53:54 +0200 Edward Hervey <edward@centricular.com> - - * gst/mpegtsdemux/mpegtsbase.c: - mpegtsbase: Fix Program equality check - There was an issue with this equality check, which was to figure out what to do - with PCR pids (whether they were part of the streams present or not) and whether - we ignore PCR or not. - Turns out ... we already took care of that further up in the function. - The length check can be simplified by just checking whether the length of - the *original* PMT and the new PMT are identical. Since we don't store "magic" - PCR streams in those, we can just use them as-is. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6713> - -2024-04-23 01:40:44 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12decoder.cpp: - d3d12decoder: Lock DPB while building command - Since DPB resource can be modified in output thread, protect - it when building command list. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6709> - -2024-04-22 19:32:22 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12decoder.cpp: - d3d12decoder: Hold reference pictures in fence data - Keep reference pictures alive while executing decoding commands - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6709> - -2024-04-22 21:52:53 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12decoder.cpp: - * sys/d3d12/gstd3d12decoder.h: - * sys/d3d12/gstd3d12vp9dec.cpp: - d3d12vp9dec: Disallow resolution change to larger size on non-keyframe - Intel GPUs seem to crash if this case happens. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6709> - -2024-04-21 22:38:50 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12decoder.cpp: - d3d12decoder: Fix potential use after free - A DPB buffer held by a codec picture object may not be writable - at that moment, in which case gst_buffer_make_writable() will unref the passed buffer. - Specifically, the use after free or double free can happen if: - * Crop meta of buffer copy is required because of a non-zero - top-left crop position - * zero-copy is possible with crop meta - * A picture was duplicated, an interlaced h264 stream for example - An interlaced h264 stream with a non-zero top-left crop position - is not very common, but it's a possible configuration in theory. - Thus gst_buffer_make_writable() should be called with - GstVideoCodecFrame.output_buffer directly. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6706> - -2024-04-21 22:07:36 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d11/gstd3d11decoder.cpp: - d3d11decoder: Fix potential use after free - A DPB buffer held by a codec picture object may not be writable - at that moment, in which case gst_buffer_make_writable() will unref the passed buffer. 
- Specifically, the use after free or double free can happen if: - * Crop meta of buffer copy is required because of a non-zero - top-left crop position - * zero-copy is possible with crop meta - * A picture was duplicated, an interlaced h264 stream for example - An interlaced h264 stream with a non-zero top-left crop position - is not very common, but it's a possible configuration in theory. - Thus gst_buffer_make_writable() should be called with - GstVideoCodecFrame.output_buffer directly. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6706> - -2024-04-16 09:50:52 +0200 Edward Hervey <edward@centricular.com> - - * gst/mpegtsdemux/mpegtsbase.c: - tsdemux: Disable smart program update - The goal of this code was, for programs which were updated (i.e. streams - added/removed but not completely changed), to allow dynamic addition/removal of - streams without completely removing everything. - But this wasn't 100% tested and there are a bunch of issues which make it fail - in plenty of ways. - For now disable that feature and force the legacy "add all pads again and then - remove old ones" behaviour to make it switch. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6651> - -2024-04-20 21:37:39 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d11/gstd3d11device-private.h: - * gst-libs/gst/d3d11/gstd3d11device.cpp: - * gst-libs/gst/d3d11/gstd3d11utils.cpp: - d3d11device: Add device-removed-reason property - In addition to device removed status monitoring in the gst_d3d11_result() - method, if the ID3D11Device4 interface is available, - an event handle will be used for device removed status updates. 
- And the "device-removed" signal is removed, since applications can monitor - the device removed status via GObject notify - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6699> - -2024-04-20 23:13:20 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12utils.cpp: - d3d12utils: Fix documentation - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6699> - -2024-04-20 20:03:46 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12device-private.h: - * gst-libs/gst/d3d12/gstd3d12device.cpp: - * gst-libs/gst/d3d12/gstd3d12utils.cpp: - * tests/check/libs/d3d12device.cpp: - * tests/check/meson.build: - d3d12device: Add device-removed-reason property - Adding a new property in order to notify users of device removed status. - Once device removed status is detected, the application should release - all ID3D12Device objects corresponding to the adapter, including - the GstD3D12Device object. Otherwise the D3D12CreateDevice() call for the - adapter will fail. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6699> - -2024-04-21 19:17:53 +0900 Seungha Yang <seungha@centricular.com> - - * sys/mediafoundation/gstmfsourceobject.cpp: - mediafoundation: Fix infinite loop in device provider - Initialize source state with GST_MF_DEVICE_NOT_FOUND to terminate - the loop immediately if no capture device is available - Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3492 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6704> - -2024-04-18 10:18:05 +0200 Alexander Slobodeniuk <aslobodeniuk@fluendo.com> - - * gst-libs/gst/d3d11/gstd3d11device.cpp: - * tests/check/libs/d3d11device.cpp: - d3d11device: protect device_lock vs device_new - It seems that when D3D11CreateDevice collides in time - with other D3D11 calls, in particular the process of - creating a shader, it can corrupt memory in the driver. 
- The D3D11 spec doesn't seem to require any thread safety from - D3D11CreateDevice. Following MSDN, it is supposed to be called - at the beginning of the process, while GStreamer calls it with each - new pipeline. - Such driver crashes were frequently reproduced on an - Intel UHD 630 machine. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6686> - -2024-04-16 23:08:51 +0200 Alexander Slobodeniuk <aslobodeniuk@fluendo.com> - - * tests/check/libs/d3d11device.cpp: - * tests/check/meson.build: - tests/d3d11: add concurrency test for gstd3d11device - We suspect that it's not thread safe to just create and - destroy the device from any thread, particularly because - of D3D11CreateDevice, which is not documented as thread-safe. - While D3D11CreateDevice is usually protected from outside - by gst_d3d11_ensure_element_data, it can still race - with the Release() method of another device. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6686> - -2024-04-19 17:17:08 +0900 Elliot Chen <elliot.chen@nxp.com> - - * gst-libs/gst/play/gstplay.c: - gstplay: query duration again if previous query failed - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6668> - -2024-04-19 22:40:12 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12testsrc.cpp: - d3d12testsrc: Use shared 11on12 device - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6697> - -2024-04-19 22:26:35 +0900 Seungha Yang <seungha@centricular.com> - - * sys/dwrite/gstdwriterender_d3d12.cpp: - dwrite: Use shared 11on12 device - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6697> - -2024-04-19 22:16:42 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12-private.h: - * gst-libs/gst/d3d12/gstd3d12device-private.h: - * gst-libs/gst/d3d12/gstd3d12device.cpp: - * gst-libs/gst/d3d12/meson.build: - * sys/d3d12/meson.build: - 
d3d12device: Hold d3d11on12 device to be shared - d3d11on12 device seems to be occupying a bit of GPU memory - Hold the instance in GstD3D12Device so that it can be shared - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6697> - -2024-04-19 21:13:25 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12videosink.cpp: - * sys/d3d12/gstd3d12window.cpp: - d3d12videosink: Handle mouse double click and modifier - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6693> - -2024-04-19 20:44:44 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12videosink.cpp: - d3d12videosink: Disconnect window's signal on dispose - Same as the commit of 7b69d1758f77331c2801746cd91b1b6b0db9ecfb - but for d3d12videosink. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6692> - -2024-04-19 21:17:17 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12window.cpp: - d3d12videosink: Handle external HWND's mouse/keyboard events - OS will not propagate the event to child HWND if it's handled by - the parent. Thus, navigation event should be handled by parent HWND's - event handler. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6692> - -2024-04-18 09:20:13 +0300 Sebastian Dröge <sebastian@centricular.com> - - * sys/va/gstvavp9enc.c: - vavp9enc: Preserve PTS and other frame metadata - See also https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4150 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6680> - -2024-03-31 00:23:31 +0900 Seungha Yang <seungha@centricular.com> - - * sys/webview2/gstwebview2object.cpp: - webview2: Handle double click and modifier - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6491> - -2024-03-30 23:57:27 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d11/gstd3d11videosink.cpp: - * sys/d3d11/gstd3d11window.cpp: - * sys/d3d11/gstd3d11window.h: - * sys/d3d11/gstd3d11window_win32.cpp: - d3d11videosink: Handle double click and modifier - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6491> - -2024-04-17 10:58:00 +0900 Hou Qi <qi.hou@nxp.com> - - * ext/wayland/gstwaylandsink.c: - waylandsink: configure buffer pool with query size in propose_allocation - If propose_allocation comes before set_caps, self->video_info - has not been extracted from the caps and self->video_info.size is 0. - This causes the buffer pool to fail to set its config, so the - size obtained from the query needs to be used instead in propose_allocation. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6666> - -2021-03-19 18:33:09 +0200 Sebastian Dröge <sebastian@centricular.com> - - * ext/rsvg/gstrsvgdec.c: - rsvgdec: Remove unused GObject::finalize implementation - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6613> - -2024-04-11 19:54:45 -0300 L. E. 
Segovia <amy@centricular.com> - - * ext/rsvg/gstrsvgdec.c: - * ext/rsvg/gstrsvgdec.h: - rsvgdec: Fix uses of librsvg functions deprecated since 2.52 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6613> - -2021-03-11 20:18:24 +0200 Sebastian Dröge <sebastian@centricular.com> - - * ext/rsvg/gstrsvgdec.c: - * ext/rsvg/gstrsvgdec.h: - rsvgdec: Negotiate resolution with downstream and scale accordingly - Prefer the resolution given by the input, but if downstream requests a - specific resolution then scale to it without regard to the aspect - ratio. - Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/1538 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6613> - -2024-04-17 16:55:31 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com> - - * sys/v4l2codecs/gstv4l2codecav1dec.c: - * sys/v4l2codecs/gstv4l2codech264dec.c: - * sys/v4l2codecs/gstv4l2codech265dec.c: - * sys/v4l2codecs/gstv4l2codecmpeg2dec.c: - * sys/v4l2codecs/gstv4l2codecvp8dec.c: - * sys/v4l2codecs/gstv4l2codecvp9dec.c: - v4l2codecs: Don't unref allocation query caps - The caps obtained from parsing the allocation query are borrowed and - should not be unreffed. This fixes critical assertions introduced in - 1.24.1. 
- (gst-launch-1.0:242): GStreamer-CRITICAL **: 19:48:02.667: - gst_mini_object_unref: assertion 'GST_MINI_OBJECT_REFCOUNT_VALUE (mini_object) > 0' failed - Fixes: 5189e8b95630 ("v4l2codecs: decoders: Add DMA_DRM caps support") - Closes #3462 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6679> - -2024-04-09 17:10:20 +0800 Mengkejiergeli Ba <mengkejiergeli.ba@intel.com> - - * sys/msdk/gstmsdkcaps.c: - * sys/msdk/gstmsdkenc.c: - * sys/msdk/gstmsdkh265enc.c: - msdk: Add main-422-12 profile to hevc - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6591> - -2024-04-16 22:29:15 +1000 Jan Schmidt <jan@centricular.com> - - * gst/dvbsubenc/gstdvbsubenc.c: - dvbsubenc: fixed some memory leaks and a crash - Fix leaks of internal GstBuffers, and a crash if subtitle segments end - up empty. - Based on a patch by Jurijs Satcs <jurijs.satcs@veset.tv> - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6661> - -2024-04-16 23:29:26 +0200 Alexander Slobodeniuk <aslobodeniuk@fluendo.com> - - * gst-libs/gst/d3d11/gstd3d11converter.cpp: - d3d11converter: fix documentation for converter_new () - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6665> - -2024-04-10 20:57:16 +0900 Seungha Yang <seungha@centricular.com> - - * sys/mediafoundation/gstmfcapturedshow.cpp: - * sys/mediafoundation/gstmfcapturedshow.h: - * sys/mediafoundation/gstmfcapturewinrt.cpp: - * sys/mediafoundation/gstmfcapturewinrt.h: - * sys/mediafoundation/gstmfdevice.cpp: - * sys/mediafoundation/gstmfsourceobject.cpp: - * sys/mediafoundation/gstmfsourceobject.h: - * sys/mediafoundation/gstmfsourcereader.cpp: - * sys/mediafoundation/gstmfsourcereader.h: - mediafoundation: Fix device enumeration - Do not stop device enumeration even if a device could not be opened. 
- Otherwise the other devices listed after the failed device will not be - reported by the device provider - Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3460 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6598> - -2024-04-15 10:51:03 +0100 Tim-Philipp Müller <tim@centricular.com> - - * tests/check/meson.build: - tests: fix possible libscpp build failure in gst-plugins-bad - ../subprojects/gst-plugins-bad/tests/check/libs/gstlibscpp.cc:41: - fatal error: gst/mpegts/gstmpegts-enumtypes.h: No such file or directory - We could pass only the needed deps to the libscpp test, but that gets - messier to maintain, so let's add them for consistency. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6643> - -2024-04-12 09:32:13 +0100 Philippe Normand <philn@igalia.com> - - * tests/check/elements/webrtcbin.c: - tests: webrtcbin: Fix repaired-stream-id handling in simulcast test - The test was attempting to add the same stream-id extension twice, probably some - unfinished copy/paste. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6615> - -2024-04-10 01:26:38 +0900 Seungha Yang <seungha@centricular.com> - - * sys/dwrite/gstdwritebaseoverlay.cpp: - * sys/dwrite/gstdwriteoverlayobject.cpp: - * sys/dwrite/gstdwriteoverlayobject.h: - * sys/dwrite/gstdwriterender.cpp: - * sys/dwrite/gstdwriterender.h: - * sys/dwrite/gstdwriterender_bitmap.cpp: - * sys/dwrite/gstdwriterender_bitmap.h: - * sys/dwrite/gstdwriterender_d3d11.cpp: - * sys/dwrite/gstdwriterender_d3d11.h: - * sys/dwrite/gstdwriterender_d3d12.cpp: - * sys/dwrite/gstdwriterender_d3d12.h: - * sys/dwrite/meson.build: - dwrite: D3D12 integration - Adding d3d12 backend text renderer/blender by using d3d11on12 interop. 
- And subclassing renderer object per backend (i.e., d3d11, d3d12, and bitmap) - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6620> - -2024-04-10 00:57:40 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12overlaycompositor.cpp: - * sys/d3d12/gstd3d12overlaycompositor.h: - * sys/d3d12/gstd3d12window.cpp: - d3d12overlaycompositor: Add support for d3d12 memory - Don't allocate d3d12 texture if overlay is d3d12 memory already - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6620> - -2024-04-13 22:47:47 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12utils.cpp: - * gst-libs/gst/d3d12/gstd3d12utils.h: - * sys/d3d12/gstd3d12pluginutils.cpp: - * sys/d3d12/gstd3d12pluginutils.h: - d3d12: Move gst_d3d12_buffer_copy_into method to library - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6620> - -2024-04-13 22:28:31 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12converter.cpp: - d3d12converter: Port to GstD3D12Frame - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6620> - -2024-04-13 21:46:32 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12.h: - * gst-libs/gst/d3d12/gstd3d12_fwd.h: - * gst-libs/gst/d3d12/gstd3d12frame.cpp: - * gst-libs/gst/d3d12/gstd3d12frame.h: - * gst-libs/gst/d3d12/meson.build: - d3d12: Add GstD3D12Frame struct and helper method - Adding GstD3D12Frame struct with map, unmap, and copy methods. - This new struct is equivalent to GstVideoFrame but gst_d3d12_frame_map() - method will extract D3D12 specific resource handles from memory. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6620> - -2024-04-12 18:18:13 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12memory.cpp: - d3d12memory: Implement copy method - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6620> - -2024-04-13 23:53:00 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12encoder.cpp: - d3d12encoder: Fix buffer pool leak - Add missing buffer pool release - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6628> - -2024-04-10 22:01:18 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d11/gstd3d11videosink.cpp: - * sys/d3d11/gstd3d11window.h: - * sys/d3d11/gstd3d11window_dummy.cpp: - d3d11videosink: Fix rendering on keyed mutex enabled handle - As of the commit 69b2e1565c5d0e8b2313d52042d73c721fed7edb, - keyed mutex will be handled by the memory object. - Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3468 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6600> - -2024-04-11 18:10:40 +0300 Jordan Petridis <jordan@centricular.com> - - * ext/fdkaac/meson.build: - fdkaac: Mark the dependency include_type as 'system' - When using v2.0.2 of the subproject, it triggers werror for - unused functions that come from the fdkaac headers. - This avoids errors like the following when werror is set. 
- ``` - subprojects/fdk-aac-2.0.2/fdk-aac/FDK_audio.h:757:29: error: ‘FDKlibInfo_lookup’ - defined but not used -Werror=unused-function - 757 | static FDK_AUDIO_INLINE INT FDKlibInfo_lookup(const LIB_INFO* info, - ``` - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6611> - -2024-04-09 18:36:12 +0100 Tim-Philipp Müller <tim@centricular.com> - - * gst-libs/gst/analytics/gstanalyticsmeta.c: - analyticsmeta: fix g-ir-scanner warnings - Fix - gstanalyticsmeta.c:134: Warning: GstAnalytics: "@instance" - parameter unexpected at this location - warning (caused by the extraneous empty line in the doc chunk) - and align function arguments with documentation and header file - (handle -> instance). - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6585> - -2024-04-06 00:41:29 +0900 Seungha Yang <seungha@centricular.com> - - * ext/closedcaption/gstccconverter.c: - ccconverter: Fix caps leak and remove unnecessary code - The removed code does exactly the same thing as the code below, - except that it leaks caps - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6548> - -2024-04-09 23:35:13 +0900 Seungha Yang <seungha@centricular.com> - - * sys/qsv/gstqsvdecoder.cpp: - qsvdecoder: Release too old frames - Release too old frames manually. 
- Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3163 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6583> - -2024-04-07 19:34:43 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12converter-builder.cpp: - d3d12converter: Simplify root signature build - D3DX12SerializeVersionedRootSignature() helper method will translate - RS 1.1 into 1.0 version if needed - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6557> - -2024-04-05 21:58:51 -0400 Daniel Morin <daniel.morin@collabora.com> - - * gst-libs/gst/codecparsers/gsth264parser.h: - h264parser: maintain API changes - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6540> - -2024-04-04 09:38:36 -0400 Daniel Morin <daniel.morin@collabora.com> - - * tests/check/elements/h264parse.c: - Revert "h264parse: test - AU align with SEI between frame slices" - This reverts commit 533f814fd9a0eff341bb8f400fff82e5f0c4c313. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6540> - -2024-04-04 09:38:16 -0400 Daniel Morin <daniel.morin@collabora.com> - - * gst-libs/gst/codecparsers/gsth264parser.h: - * gst/videoparsers/gsth264parse.c: - * gst/videoparsers/gsth264parse.h: - Revert "h264parse: Improved AU boundary detection" - This reverts commit 49f200cb549d43067e7c6eee332cdf757a38d82a. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6540> - -2024-04-04 09:38:13 -0400 Daniel Morin <daniel.morin@collabora.com> - - * gst/videoparsers/gsth264parse.c: - Revert "h264parse: Remove dead code" - This reverts commit 141cd3871592292a8a6c81c1e018610a82ecaa88. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6540> - -2024-04-04 09:38:08 -0400 Daniel Morin <daniel.morin@collabora.com> - - * gst/videoparsers/gsth264parse.c: - Revert "h264parse: Fix AU collection" - This reverts commit 495390f63a710559b149e476d3289dc2f37be7f8. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6540> - -2024-04-04 09:37:47 -0400 Daniel Morin <daniel.morin@collabora.com> - - * gst/videoparsers/gsth264parse.c: - Revert "h264parse: Remove un-needed check on SPS state" - This reverts commit 73dedf9a51e70868f6aa029b968f8c7ef6af530e. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6540> - -2024-04-04 09:37:40 -0400 Daniel Morin <daniel.morin@collabora.com> - - * gst/videoparsers/gsth264parse.c: - * gst/videoparsers/gsth264parse.h: - Revert "h264parse: use AUD to detect first VCL NAL" - This reverts commit 90a3b63eed22d2737dbe8e33ee931e897ccfd128. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6540> - -2024-04-04 09:36:02 -0400 Daniel Morin <daniel.morin@collabora.com> - - * gst/videoparsers/gsth264parse.c: - Revert "h264parse: correct NAL mode backlog processing" - This reverts commit b2098849dc21c3615cb15b1e26bbbe77feb76476. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6540> - -2024-03-29 15:37:55 +0100 Edward Hervey <edward@centricular.com> - - * gst/videoparsers/gstvideoparseutils.c: - videoparsers: Demote CC warning message - Another warning message which isn't fatal and therefore should just be a DEBUG - line. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6550> - -2024-04-06 01:14:56 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12device.cpp: - d3d12device: Fix typo in object name - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6549> - -2024-03-18 19:32:33 +0100 Mathieu Duponchelle <mathieu@centricular.com> - - * sys/aja/gstajasrc.cpp: - ajasrc: always post details about detected format - .. instead of only when there is a mismatch. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6438> - -2024-03-30 15:57:36 +0100 Robert Mader <robert.mader@posteo.de> - - * gst/jpegformat/gstjpegparse.c: - jpegparse: turn some bus warnings into object ones - For some cameras `gst_jpeg_parse_app0()` fails on an invalid segment. - While this is likely a driver or firmware bug that should be addressed - accordingly, it's not fatal and likely does not deserve a bus message on - every frame, flooding journals. - Turn down the volume of the warnings by turning them into object - warnings. If we conclude that in some cases we'd still want bus - warnings, they can be done more fine-grained in the - `gst_jpeg_parse_appX()` functions. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6490> - -2024-03-18 20:50:56 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * ext/vulkan/vkh265dec.c: - vkh265dec: add missing VPS parameter - and fix coded size - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6400> - -2024-03-18 20:00:11 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * ext/vulkan/vkh264dec.c: - * ext/vulkan/vkh265dec.c: - vkh26xdec: implement close() vmethod - A validation layer error is signaled at EOS because it's required to wait - for the last frame to be processed. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6400> - -2024-04-03 16:44:18 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * ext/vulkan/vkh264dec.c: - * ext/vulkan/vkh265dec.c: - vkh26xdec: remove unused variables - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6400> - -2024-03-18 19:42:50 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * ext/vulkan/vkh265dec.c: - vkh265dec: fix resource info structure when layered DPB - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6400> - -2024-03-27 19:45:02 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * tests/examples/va/vaenc-dynamic-reconfigure.c: - examples: vaenc-dynamic: support target percentage change in QVBR - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6465> - -2024-03-27 19:43:28 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * tests/examples/va/vaenc-dynamic-reconfigure.c: - examples: vaenc-dynamic: ignore bitrate change with ICQ too - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6465> - -2024-03-27 19:41:30 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * sys/va/gstvaav1enc.c: - * sys/va/gstvah264enc.c: - * sys/va/gstvah265enc.c: - * sys/va/gstvavp9enc.c: - va: encoders: don't assert at target percentage when QVBR - Instead of asserting, just get the max value between the current value and 10, - which is the minimum required by QVBR. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6465> - -2024-03-27 19:37:58 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * tests/examples/va/vaenc-dynamic-reconfigure.c: - examples: vaenc-dynamic: add vp9, av1 and low power tests - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6465> - -2024-04-02 16:23:31 +0100 Chris Spencer <spencercw@gmail.com> - - * gst-libs/gst/vulkan/gstvkbufferpool.c: - * gst-libs/gst/vulkan/gstvkbufferpool.h: - vkbufferpool: correct usage flags type - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6514> - -2024-04-02 18:18:14 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * sys/msdk/gstmsdkcontext.c: - msdk: sink context reference - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6398> - -2024-04-02 18:02:26 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * ext/gtk/gstgtkwaylandsink.c: - gtk: sink reference of internal wayland pool - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6398> - -2024-04-02 18:00:40 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * ext/wayland/gstwaylandsink.c: - wayland: sink reference to internal pool - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6398> - -2024-04-02 14:46:32 +0200 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * ext/dash/gstmpdadaptationsetnode.c: - * ext/dash/gstmpdbaseurlnode.c: - * ext/dash/gstmpdclient.c: - * ext/dash/gstmpdcontentcomponentnode.c: - * ext/dash/gstmpddescriptortypenode.c: - * ext/dash/gstmpdlocationnode.c: - * ext/dash/gstmpdmetricsnode.c: - * ext/dash/gstmpdmetricsrangenode.c: - * ext/dash/gstmpdperiodnode.c: - * ext/dash/gstmpdprograminformationnode.c: - * ext/dash/gstmpdreportingnode.c: - * ext/dash/gstmpdrepresentationnode.c: - * ext/dash/gstmpdrootnode.c: - * ext/dash/gstmpdsegmentbasenode.c: - * ext/dash/gstmpdsegmentlistnode.c: - * 
ext/dash/gstmpdsegmenttemplatenode.c: - * ext/dash/gstmpdsegmenttimelinenode.c: - * ext/dash/gstmpdsegmenturlnode.c: - * ext/dash/gstmpdsnode.c: - * ext/dash/gstmpdsubrepresentationnode.c: - * ext/dash/gstmpdsubsetnode.c: - * ext/dash/gstmpdurltypenode.c: - * ext/dash/gstmpdutctimingnode.c: - dash: sink references of all MPD objects - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6398> - -2024-03-15 19:03:58 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * sys/va/gstvadecoder.c: - * sys/va/gstvaencoder.c: - * sys/va/gstvafilter.c: - va: sink reference at instantiation - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6398> - -2023-12-05 12:24:01 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * gst-libs/gst/vulkan/gstvkoperation.c: - * gst-libs/gst/vulkan/gstvktrash.c: - vulkan: sink references at instantiation - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6398> - -2024-04-02 19:00:35 +0200 eri <eri@inventati.org> - - * gst-libs/gst/play/gstplay.c: - play: Update `video_snapshot` to support playbin3 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6516> - -2024-04-02 22:56:00 +0900 Seungha Yang <seungha@centricular.com> - - * sys/qsv/gstqsvencoder.cpp: - qsvencoder: Handle d3d12 context - GstD3D12Device object's internal resources are already singletons per - adapter, though the object itself is not a singleton. - Due to the singleton design (unlike other APIs such as d3d11), - d3d12 device context sharing is not a strict requirement - for zero-copy, but handle context queries anyway to make things less noisy. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6513> - -2024-04-02 22:09:57 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12decoder.cpp: - d3d12decoder: Always output sharable texture - Because shared heap's additional costs is not significant, - use D3D12_HEAP_FLAG_SHARED for resource can be shared over process - boundary. And enables render target for d3d11 interop in the process. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6513> - -2024-04-02 15:57:58 +0200 Guillaume Desmottes <guillaume.desmottes@onestream.live> - - * tests/examples/webrtc/webrtcswap.c: - examples: set perfect-timestamp=true on opusenc - Fix audio streaming on Chrome, see https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1524 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6512> - -2024-03-28 21:54:21 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvaav1enc.c: - va: av1enc: Change the set_property to make it atomic - The inside encoder may be set in other threads, so we should make - its accessing atomic. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6471> - -2024-03-28 21:52:25 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvavp9enc.c: - va: vp9enc: Change the set_property to make it atomic - The inside encoder may be set in other threads, so we should make - its accessing atomic. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6471> - -2024-03-27 19:50:19 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com> - - * sys/va/gstvaav1enc.c: - * sys/va/gstvavp9enc.c: - va{vp9,av1}enc: reconfigure when properties change - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6471> - -2024-03-28 21:35:07 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvah265enc.c: - va: h265enc: Change the set_property to make it atomic - The inside encoder may be set in other threads, so we should make - its accessing atomic. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6471> - -2024-03-28 21:27:54 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvah265enc.c: - va: h265enc: set the reconf flag when cpb_size updated - This feature can be changed dynamically in playing state, so we - need to set reconf flag to trigger reconfig. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6471> - -2024-03-28 00:00:42 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvah264enc.c: - va: h264enc: Change the set_property to make it atomic - The inside encoder may be set in other threads, so we should make - its accessing atomic. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6471> - -2024-03-27 23:09:08 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvah264enc.c: - va: h264enc: set the reconf flag when cpb_size updated - This feature can be changed dynamically in playing state, so we - need to set reconf flag to trigger reconfig. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6471> - -2024-04-02 18:20:43 +0900 Seungha Yang <seungha@centricular.com> - - * sys/dwrite/gstdwriteoverlayobject.cpp: - dwrite: Fix crash on device update - Selected blend mode should not be cleared on device update - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6508> - -2024-03-22 01:03:11 +0900 Seungha Yang <seungha@centricular.com> - - * sys/nvcodec/gstnvencobject.cpp: - * sys/nvcodec/gstnvh264encoder.cpp: - * sys/nvcodec/gstnvh265encoder.cpp: - nvencoder: Add support for RGB formats - Adding RGBA, RGBx, BGRA, BGRx, VUYA and RGB10A2_LE format support for performance. - However, these formats are not still recommended if upstream can support - native YUV formats (e.g., NV12, P010) since NVENC does not expose - conversion related optiones. Note that VUYA format is 4:4:4 YUV format - already but NVENC runtime will convert it to 4:2:0 format internally - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6417> - -2024-03-22 01:19:53 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/cuda/gstcudamemory.cpp: - * sys/nvcodec/gstcudaconverter.c: - * sys/nvcodec/gstcudaconvertscale.c: - * sys/nvcodec/gstcudaformat.h: - cuda: Add support for VUYA format - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6417> - -2024-04-02 01:36:28 +0900 Seungha Yang <seungha@centricular.com> - - * sys/qsv/gstqsvallocator_d3d11.cpp: - * sys/qsv/gstqsvallocator_d3d11.h: - * sys/qsv/gstqsvav1enc.cpp: - * sys/qsv/gstqsvav1enc.h: - * sys/qsv/gstqsvencoder.cpp: - * sys/qsv/gstqsvencoder.h: - * sys/qsv/gstqsvh264enc.cpp: - * sys/qsv/gstqsvh264enc.h: - * sys/qsv/gstqsvh265enc.cpp: - * sys/qsv/gstqsvh265enc.h: - * sys/qsv/gstqsvjpegenc.cpp: - * sys/qsv/gstqsvjpegenc.h: - * sys/qsv/gstqsvvp9enc.cpp: - * sys/qsv/gstqsvvp9enc.h: - * sys/qsv/meson.build: - * sys/qsv/plugin.cpp: - qsv: Add support for d3d12 interop in 
encoder - Since QSV API does not support D3D12, try to import d3d12 resource - into d3d11 texture. Note that resource sharing requires - D3D12_SHARED_RESOURCE_COMPATIBILITY_TIER_2 for NV12 texure sharing. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6501> - -2024-03-25 23:33:59 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvaav1enc.c: - va: av1enc: Avoid reopen encoder or renegotiate - If parameters remain similar enough to avoid either encoder reopening - or downstream renegotiation, avoid it. - This is going to be useful for dynamic parameters setting. - To check if the stream parameters changed, so the internal encoder has - to be closed and opened again, are required two steps: - 1. If input caps, format, profile, chroma or rate control mode have changed. - 2. If any of the calculated variables and element properties have changed. - Later on, only if the output caps also changed, the pipeline - is renegotiated. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6441> - -2024-03-25 19:02:18 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvavp9enc.c: - va: vp9enc: Avoid reopen encoder or renegotiate - If parameters remain similar enough to avoid either encoder reopening - or downstream renegotiation, avoid it. - This is going to be useful for dynamic parameters setting. - To check if the stream parameters changed, so the internal encoder has - to be closed and opened again, are required two steps: - 1. If input caps, format, profile, chroma or rate control mode have changed. - 2. If any of the calculated variables and element properties have changed. - Later on, only if the output caps also changed, the pipeline - is renegotiated. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6441> - -2024-03-14 23:17:32 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvaav1enc.c: - va: av1enc: Improve the LAST reference assignment - The last frame which has the smallest diff should be consider as - the first choice rather than the golden frame. Especially when only - one reference available, this way can improve the BD rate about 5 - percentage. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6379> - -2024-03-15 15:48:34 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvaav1enc.c: - va: av1enc: Fix the reference number setting bug - The current way will let the total reference number surplus the - reference number set by the "ref-frames" property. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6379> - -2024-04-01 01:00:53 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d11/meson.build: - meson: d3d11: Add support for MinGW DirectXMath package - Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3428 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6495> - -2024-04-01 22:19:21 +0900 Seungha Yang <seungha@centricular.com> - - * sys/webview2/gstwebview2object.cpp: - * sys/webview2/gstwebview2object.h: - * sys/webview2/gstwebview2src.cpp: - * sys/webview2/meson.build: - webview2: Add support for d3d12 interop - Enable shared copy to D3D12 resource - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6499> - -2024-04-02 00:43:20 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12graphicscapture.cpp: - d3d12screencapturesrc: Use gst_d3d12_memory_get_d3d11_texture() - ... 
and use fence to wait for GPU sync - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6499> - -2024-04-02 00:36:45 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/gstd3d12_fwd.h: - * gst-libs/gst/d3d12/gstd3d12memory.cpp: - * gst-libs/gst/d3d12/gstd3d12memory.h: - * gst-libs/gst/d3d12/meson.build: - d3d12memory: Add support for d3d11 texture caching - Would be useful for various D3D11 interop use cases - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6499> - -2024-03-29 19:30:10 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvaav1enc.c: - * sys/va/gstvah264enc.c: - * sys/va/gstvah265enc.c: - * sys/va/gstvavp9enc.c: - va: encoder: Fix the unit of bitrate in debug log message - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6483> - -2024-03-29 18:26:49 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvavp9enc.c: - va: vp9enc: Adjust the coded buffer size to avoid failure - Some extreme case such as "videotestsrc pattern=1" can generate pure - white noise videoes, for which encoder may generate too big output - for current coded buffer size. We now consider the qindex and bitrate - to avoid that. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6483> - -2024-03-29 18:08:54 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvavp9enc.c: - va: vp9enc: Fix the frame size not enough issue for super frame - The current code forgets to add the current last frame size into - the total super frame size. 
- Fixes: #3427 - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6483> - -2024-03-27 14:34:31 +0800 Cheah, Vincent Beng Keat <vincent.beng.keat.cheah@intel.com> - - * sys/msdk/gstmsdkallocator_libva.c: - * sys/msdk/gstmsdkenc.c: - msdk: Fix mjpeg BGRx encode - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6401> - -2024-03-31 21:55:51 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d12/d3d12-prelude.h: - * gst-libs/gst/d3d12/gstd3d12-private.h: - * gst-libs/gst/d3d12/gstd3d12.h: - * gst-libs/gst/d3d12/gstd3d12_fwd.h: - * gst-libs/gst/d3d12/gstd3d12bufferpool.cpp: - * gst-libs/gst/d3d12/gstd3d12bufferpool.h: - * gst-libs/gst/d3d12/gstd3d12commandallocatorpool.cpp: - * gst-libs/gst/d3d12/gstd3d12commandallocatorpool.h: - * gst-libs/gst/d3d12/gstd3d12commandlistpool.cpp: - * gst-libs/gst/d3d12/gstd3d12commandlistpool.h: - * gst-libs/gst/d3d12/gstd3d12commandqueue.cpp: - * gst-libs/gst/d3d12/gstd3d12commandqueue.h: - * gst-libs/gst/d3d12/gstd3d12compat.h: - * gst-libs/gst/d3d12/gstd3d12converter-builder.cpp: - * gst-libs/gst/d3d12/gstd3d12converter-builder.h: - * gst-libs/gst/d3d12/gstd3d12converter-private.h: - * gst-libs/gst/d3d12/gstd3d12converter.cpp: - * gst-libs/gst/d3d12/gstd3d12converter.h: - * gst-libs/gst/d3d12/gstd3d12descriptorpool.cpp: - * gst-libs/gst/d3d12/gstd3d12descriptorpool.h: - * gst-libs/gst/d3d12/gstd3d12device-private.h: - * gst-libs/gst/d3d12/gstd3d12device.cpp: - * gst-libs/gst/d3d12/gstd3d12device.h: - * gst-libs/gst/d3d12/gstd3d12fencedatapool.cpp: - * gst-libs/gst/d3d12/gstd3d12fencedatapool.h: - * gst-libs/gst/d3d12/gstd3d12format-private.h: - * gst-libs/gst/d3d12/gstd3d12format.cpp: - * gst-libs/gst/d3d12/gstd3d12format.h: - * gst-libs/gst/d3d12/gstd3d12memory-private.h: - * gst-libs/gst/d3d12/gstd3d12memory.cpp: - * gst-libs/gst/d3d12/gstd3d12memory.h: - * gst-libs/gst/d3d12/gstd3d12utils.cpp: - * gst-libs/gst/d3d12/gstd3d12utils.h: - * 
gst-libs/gst/d3d12/meson.build: - * gst-libs/gst/meson.build: - * sys/d3d12/gstd3d12av1dec.cpp: - * sys/d3d12/gstd3d12basefilter.h: - * sys/d3d12/gstd3d12compositor.h: - * sys/d3d12/gstd3d12decoder.cpp: - * sys/d3d12/gstd3d12decoder.h: - * sys/d3d12/gstd3d12dpbstorage.h: - * sys/d3d12/gstd3d12encoder.cpp: - * sys/d3d12/gstd3d12encoder.h: - * sys/d3d12/gstd3d12encoderbufferpool.h: - * sys/d3d12/gstd3d12format.h: - * sys/d3d12/gstd3d12h264enc.h: - * sys/d3d12/gstd3d12ipc.h: - * sys/d3d12/gstd3d12ipcsink.h: - * sys/d3d12/gstd3d12ipcsrc.h: - * sys/d3d12/gstd3d12overlaycompositor.cpp: - * sys/d3d12/gstd3d12overlaycompositor.h: - * sys/d3d12/gstd3d12pluginutils.h: - * sys/d3d12/gstd3d12screencapture.h: - * sys/d3d12/gstd3d12screencapturedevice.h: - * sys/d3d12/gstd3d12screencapturesrc.h: - * sys/d3d12/gstd3d12testsrc.h: - * sys/d3d12/gstd3d12videosink.h: - * sys/d3d12/gstd3d12window.h: - * sys/d3d12/meson.build: - * sys/d3d12/plugin.cpp: - d3d12: Move core part to gst-libs - Move buffer pool, converter, and device abstraction layer to - public library - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6494> - -2024-03-31 20:28:27 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12encoder.cpp: - * sys/d3d12/gstd3d12graphicscapture.cpp: - * sys/d3d12/gstd3d12ipcsink.cpp: - * sys/d3d12/gstd3d12memory.h: - * sys/d3d12/gstd3d12pluginutils.cpp: - d3d12memory: Define new D3D12 map flags - Define GST_MAP_READ_D3D12 and GST_MAP_READ_D3D12 flags - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6494> - -2024-03-31 20:13:20 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12commandallocatorpool.cpp: - * sys/d3d12/gstd3d12commandallocatorpool.h: - * sys/d3d12/gstd3d12commandlistpool.cpp: - * sys/d3d12/gstd3d12commandlistpool.h: - * sys/d3d12/gstd3d12commandqueue.cpp: - * sys/d3d12/gstd3d12commandqueue.h: - * sys/d3d12/gstd3d12compositor.cpp: - * sys/d3d12/gstd3d12convert.cpp: - * 
sys/d3d12/gstd3d12converter.cpp: - * sys/d3d12/gstd3d12decoder.cpp: - * sys/d3d12/gstd3d12descriptorpool.cpp: - * sys/d3d12/gstd3d12descriptorpool.h: - * sys/d3d12/gstd3d12device.cpp: - * sys/d3d12/gstd3d12dxgicapture.cpp: - * sys/d3d12/gstd3d12encoder.cpp: - * sys/d3d12/gstd3d12ipcsink.cpp: - * sys/d3d12/gstd3d12memory.cpp: - * sys/d3d12/gstd3d12memory.h: - * sys/d3d12/gstd3d12overlaycompositor.cpp: - * sys/d3d12/gstd3d12testsrc.cpp: - * sys/d3d12/gstd3d12window.cpp: - d3d12: Make resource getter methods consistent - Returns COM pointer directly everywhere - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6494> - -2024-03-31 19:21:47 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d11on12.cpp: - * sys/d3d12/gstd3d11on12.h: - * sys/d3d12/gstd3d12device.cpp: - * sys/d3d12/gstd3d12device.h: - * sys/d3d12/gstd3d12ipcclient.cpp: - * sys/d3d12/gstd3d12testsrc.cpp: - * sys/d3d12/meson.build: - d3d12: Remove device11on12 wrapping layer - It was added to avoid symbol conflict between DirectX-header project - and Windows SDK, but symbol conflict does not happen - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6494> - -2024-03-31 19:06:12 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12bufferpool.cpp: - d3d12bufferpool: Use d3dx12.h format table - The format table in SDK header defines all required information - already. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6494> - -2024-03-31 18:42:41 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12basefilter.cpp: - * sys/d3d12/gstd3d12compositor.cpp: - * sys/d3d12/gstd3d12convert.cpp: - * sys/d3d12/gstd3d12converter.cpp: - * sys/d3d12/gstd3d12decoder.cpp: - * sys/d3d12/gstd3d12device.cpp: - * sys/d3d12/gstd3d12device.h: - * sys/d3d12/gstd3d12encoder.cpp: - * sys/d3d12/gstd3d12ipcsink.cpp: - * sys/d3d12/gstd3d12pluginutils.cpp: - * sys/d3d12/gstd3d12screencapturesrc.cpp: - * sys/d3d12/gstd3d12testsrc.cpp: - * sys/d3d12/gstd3d12videosink.cpp: - * sys/d3d12/gstd3d12window.cpp: - d3d12: Add a helper method for device equality check - GstD3D12Device object itself is not singltons anymore but - underlying private struct is singltons. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6494> - -2024-03-06 11:24:12 +0800 Mengkejiergeli Ba <mengkejiergeli.ba@intel.com> - - * sys/msdk/gstmsdkvpputil.c: - msdkvpp: Set colorimetry for src caps - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6316> - -2024-03-06 11:04:37 +0800 Mengkejiergeli Ba <mengkejiergeli.ba@intel.com> - - * sys/msdk/gstmsdkenc.c: - msdkenc: Set VideoFullRange according to input colorimetry range - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6316> - -2024-04-01 00:52:16 +0300 Mart Raudsepp <leio@gentoo.org> - - * ext/voaacenc/meson.build: - meson: Don't confuse voaacenc plugin with bz2 one in meson variable names - No actual issue was observed from the previous naming duplicating bz2 one, so - just a correctness tweak. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6496> - -2024-03-31 01:21:03 +0900 Seungha Yang <seungha@centricular.com> - - * sys/webview2/gstwebview2object.cpp: - * sys/webview2/gstwebview2object.h: - * sys/webview2/gstwebview2src.cpp: - webview2: Add support for javascript injection - Allow javascript injection for various custom use cases. - For example, scrollbars and scrolling can be disabled via - gst-launch-1.0 webview2src location=https://gstreamer.freedesktop.org \ - javascript="document.querySelector('body').style.overflow='hidden'" ! ... - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6487> - -2024-03-29 18:36:00 +0900 Seungha Yang <seungha@centricular.com> - - * sys/webview2/gstwebview2object.cpp: - * sys/webview2/gstwebview2object.h: - * sys/webview2/gstwebview2src.cpp: - * sys/webview2/meson.build: - webview2: Use IContainerVisual for offscreen rendering - Capturing from hidden HWND fails sometimes for some reason. - Instead of rendering to hidden HWND, render webpage to container - visual and create WGC item from the container visual object. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6487> - -2024-03-28 21:59:02 +0100 Alexander Slobodeniuk <aslobodeniuk@fluendo.com> - - * sys/d3d11/gstd3d11videosink.cpp: - d3d11videosink: disconnect signals before releasing the window - It might happen that the key event arrives when the d3d11videosink - is stopping. In case of GstD3D11WindowWin32 it can raise a - navigation event even when the sink is already freed, because the - window object's refcount may reach 0 in the window thread. In - other words sometimes the GstD3D11WindowWin32 lives few ms more - then the GstD3D11VideoSink, because it's freed asynchronously. 
- Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6476> - -2024-03-29 19:34:32 +0100 Ruben Gonzalez <rgonzalez@fluendo.com> - - * ext/wpe/gstwpe.cpp: - wpe: avoid crash with G_DEBUG=fatal_criticals and static build - No plugin filenames if static build. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6484> - -2024-03-01 16:12:27 +0800 Mengkejiergeli Ba <mengkejiergeli.ba@intel.com> - - * sys/msdk/gstmsdkcontext.c: - msdk: Fix session close failure - In the case of multi-channels transcoding, a context with child - sesseion can be parent for others, so we need to check if the - msdkcontext has any child session in the list to avoid session - leaks. Otherwise, we will see the failure of closing a parent - session because one of its child's child session not released. - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6259> - -2024-03-28 20:02:04 +0900 Seungha Yang <seungha@centricular.com> - - * gst-libs/gst/d3d11/meson.build: - meson: d3d11: Disable library build if DirectXMath header was not found - DirectXMath header library is a hard dependency - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6468> - -2023-05-15 04:56:47 +0900 Seungha Yang <seungha@centricular.com> - - * meson_options.txt: - * sys/meson.build: - * sys/webview2/gstwebview2object.cpp: - * sys/webview2/gstwebview2object.h: - * sys/webview2/gstwebview2src.cpp: - * sys/webview2/gstwebview2src.h: - * sys/webview2/meson.build: - * sys/webview2/plugin.cpp: - webview2: Add Microsoft WebView2 based web browser source - Adding webview2src element - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4631> - -2024-03-28 16:29:50 +0800 He Junyan <junyan.he@intel.com> - - * sys/va/gstvaav1enc.c: - * sys/va/gstvavp9enc.c: - va: {av1, vp9}enc: Use g_free() to free frames - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6466> - 
-2024-03-27 13:53:21 -0400 Nicolas Dufresne <nicolas.dufresne@collabora.com>
-
-	* sys/v4l2codecs/gstv4l2codecalphadecodebin.c:
-	  v4l2codecs: alphadecoder: Explicitly pass 64 bit integers as such through varargs
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6452>
-
-2024-03-27 16:17:44 +0200 Sebastian Dröge <sebastian@centricular.com>
-
-	* gst/codecalpha/gstalphadecodebin.c:
-	  alphadecodebin: Explicitly pass 64 bit integers as such through varargs
-	  Maybe fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3422
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6452>
-
-2024-03-22 16:14:24 +0100 Piotr Brzeziński <piotr@centricular.com>
-
-	* sys/applemedia/vtdec.c:
-	  vtdec: Fix caps criticals during negotiation
-	  Calling gst_pad_peer_query_caps() without a filter can give us EMPTY caps,
-	  whereas all the code below assumes that's not the case. Replacing
-	  query+intersect with a filtered query ensures we always get a subset
-	  of the template caps back.
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6429>
-
-2024-03-25 17:45:24 +0800 He Junyan <junyan.he@intel.com>
-
-	* sys/va/gstvavp9enc.c:
-	  va: vp9enc: Correct the flags for registering properties
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6437>
-
-2024-03-25 16:05:36 +0800 He Junyan <junyan.he@intel.com>
-
-	* sys/va/gstvaav1enc.c:
-	  va: av1enc: Correct the flags for registering properties
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6437>
-
-2024-03-25 15:40:52 +0800 He Junyan <junyan.he@intel.com>
-
-	* sys/va/gstvaav1enc.c:
-	* sys/va/gstvavp9enc.c:
-	  va: {vp9, av1}enc: Do not use g_slice_new() to create frames
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6437>
-
-2024-03-25 15:37:04 +0800 He Junyan <junyan.he@intel.com>
-
-	* tests/check/libs/vp9bitwriter.c:
-	  test: Fix several code style issues in vp9bitwriter test
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6437>
-
-2024-03-25 15:20:27 +0800 He Junyan <junyan.he@intel.com>
-
-	* gst-libs/gst/codecparsers/gstvp9bitwriter.c:
-	* gst-libs/gst/codecparsers/gstvp9bitwriter.h:
-	  vp9bitwriter: Fix several hotdoc-related format issues
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6437>
-
-2024-03-23 19:14:56 +0800 He Junyan <junyan.he@intel.com>
-
-	* sys/va/gstvaav1enc.c:
-	* sys/va/gstvah264enc.c:
-	* sys/va/gstvah265enc.c:
-	* sys/va/gstvavp9enc.c:
-	  va: encoder: update the bitrate change correctly
-	  We should update and notify the bitrate change in a common place,
-	  no matter whether the bitrate is calculated or not.
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6433>
-
-2024-03-23 16:05:05 +0800 He Junyan <junyan.he@intel.com>
-
-	* sys/va/gstvaav1enc.c:
-	  va: av1enc: enable ICQ and QVBR modes
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6433>
-
-2024-03-23 13:28:12 +0800 He Junyan <junyan.he@intel.com>
-
-	* sys/va/gstvavp9enc.c:
-	  va: vp9enc: enable ICQ and QVBR modes
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6433>
-
-2024-03-23 01:05:40 +0800 He Junyan <junyan.he@intel.com>
-
-	* sys/va/gstvah265enc.c:
-	  va: h265enc: enable ICQ and QVBR modes
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6433>
-
-2024-03-21 20:55:25 +0800 He Junyan <junyan.he@intel.com>
-
-	* sys/va/gstvah264enc.c:
-	  va: h264enc: enable ICQ and QVBR modes
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6433>
-
-2024-03-22 23:59:25 +0800 He Junyan <junyan.he@intel.com>
-
-	* sys/va/gstvaencoder.c:
-	  va: encoder: Enable ICQ and QVBR mode in rate control map
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6433>
-
-2024-03-22 23:35:55 +0800 He Junyan <junyan.he@intel.com>
-
-	* sys/va/gstvabaseenc.c:
-	  va: encoder: Set the quality_factor parameter in rate control
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6433>
-
-2024-03-26 15:32:24 +0100 Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
-
-	* gst/jpegformat/gstjpegparse.c:
-	  jpegparse: avi1 tag can be progressive
-	  The AVI1 tag in APP0 is tri-valued: 0 not interleaved, 1 odd, 2 even.
-	  So if avi1 is zero then the frame is progressive.
-	  Also, this patch adds a couple of log messages.
-	  Fixes: #3414
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6445>
-
-2024-03-26 12:46:02 +0000 Tim-Philipp Müller <tim@centricular.com>
-
-	* tests/check/libs/gstlibscpp.cc:
-	* tests/check/meson.build:
-	  tests: add check to make sure -bad lib headers are C++ compiler clean
-	  Only non-internal libs without external deps for now.
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6440>
-
-2024-03-22 16:35:54 +1100 Matthew Waters <matthew@centricular.com>
-
-	* ext/closedcaption/ccutils.c:
-	* ext/closedcaption/gstccconverter.c:
-	* tests/check/elements/ccconverter.c:
-	  ccconverter: fix cdp->cea608-raw field 1 60fps conversion
-	  There was a potential busy loop occurring because when we were taking
-	  data from the internal ccbuffer, we were not resetting which field had
-	  written data. This would mean that the next time data was retrieved
-	  from ccbuffer, it was always from field 0 and never from field 1.
-	  This only affects usage of cc_buffer_take_separated() which is only used
-	  by cdp->raw cea608.
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6423>
-
-2024-03-25 00:01:38 +0900 Seungha Yang <seungha@centricular.com>
-
-	* sys/d3d12/gstd3d12.h:
-	* sys/d3d12/gstd3d12_fwd.h:
-	* sys/d3d12/gstd3d12commandallocatorpool.h:
-	* sys/d3d12/gstd3d12commandlistpool.h:
-	* sys/d3d12/gstd3d12compat.h:
-	* sys/d3d12/gstd3d12compositor.cpp:
-	* sys/d3d12/gstd3d12convert.cpp:
-	* sys/d3d12/gstd3d12converter.cpp:
-	* sys/d3d12/gstd3d12decoder.cpp:
-	* sys/d3d12/gstd3d12descriptorpool.h:
-	* sys/d3d12/gstd3d12device.cpp:
-	* sys/d3d12/gstd3d12dxgicapture.cpp:
-	* sys/d3d12/gstd3d12encoder.cpp:
-	* sys/d3d12/gstd3d12encoderbufferpool.h:
-	* sys/d3d12/gstd3d12fencedatapool.h:
-	* sys/d3d12/gstd3d12h264enc.cpp:
-	* sys/d3d12/gstd3d12ipcsink.cpp:
-	* sys/d3d12/gstd3d12memory.cpp:
-	* sys/d3d12/gstd3d12overlaycompositor.cpp:
-	* sys/d3d12/gstd3d12overlaycompositor.h:
-	* sys/d3d12/gstd3d12pluginutils.cpp:
-	* sys/d3d12/gstd3d12testsrc.cpp:
-	* sys/d3d12/gstd3d12window.cpp:
-	* sys/d3d12/meson.build:
-	  d3d12: Add support for cross-compile
-	  ... and fix a bunch of GCC-reported warnings
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6435>
-
-2024-03-24 22:39:20 +0900 Seungha Yang <seungha@centricular.com>
-
-	* meson_options.txt:
-	* sys/d3d12/gstd3d12screencapturesrc.cpp:
-	* sys/d3d12/meson.build:
-	  d3d12: Allow building without WGC support
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6435>
-
-2024-03-24 21:11:08 +0900 Seungha Yang <seungha@centricular.com>
-
-	* sys/d3d12/gstd3d12compositor.cpp:
-	* sys/d3d12/gstd3d12converter-builder.cpp:
-	* sys/d3d12/gstd3d12dxgicapture.cpp:
-	* sys/d3d12/gstd3d12overlaycompositor.cpp:
-	* sys/d3d12/gstd3d12testsrc.cpp:
-	* sys/d3d12/hlsl/PSMain_checker.hlsl:
-	* sys/d3d12/hlsl/PSMain_checker_luma.hlsl:
-	* sys/d3d12/hlsl/PSMain_checker_rgb.hlsl:
-	* sys/d3d12/hlsl/PSMain_checker_vuya.hlsl:
-	* sys/d3d12/hlsl/PSMain_color.hlsl:
-	* sys/d3d12/hlsl/PSMain_converter.hlsl:
-	* sys/d3d12/hlsl/PSMain_sample.hlsl:
-	* sys/d3d12/hlsl/PSMain_sample_premul.hlsl:
-	* sys/d3d12/hlsl/PSMain_snow.hlsl:
-	* sys/d3d12/hlsl/VSMain_color.hlsl:
-	* sys/d3d12/hlsl/VSMain_converter.hlsl:
-	* sys/d3d12/hlsl/VSMain_coord.hlsl:
-	* sys/d3d12/hlsl/VSMain_pos.hlsl:
-	* sys/d3d12/hlsl/collect_hlsl_header.py:
-	* sys/d3d12/hlsl/meson.build:
-	* sys/d3d12/meson.build:
-	  d3d12: Port to d3dshader library
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6434>
-
-2024-03-24 19:14:16 +0900 Seungha Yang <seungha@centricular.com>
-
-	* gst-libs/gst/d3d11/gstd3d11compile.cpp:
-	* gst-libs/gst/d3d11/gstd3d11converter-builder.cpp:
-	* gst-libs/gst/d3d11/gstd3d11converter-builder.h:
-	* gst-libs/gst/d3d11/gstd3d11converter-helper.cpp:
-	* gst-libs/gst/d3d11/gstd3d11device-private.h:
-	* gst-libs/gst/d3d11/gstd3d11device.cpp:
-	* gst-libs/gst/d3d11/hlsl/CSMain_converter.hlsl:
-	* gst-libs/gst/d3d11/hlsl/PSMain_converter.hlsl:
-	* gst-libs/gst/d3d11/hlsl/VSMain_converter.hlsl:
-	* gst-libs/gst/d3d11/hlsl/collect_hlsl_header.py:
-	* gst-libs/gst/d3d11/hlsl/meson.build:
-	* gst-libs/gst/d3d11/meson.build:
-	* sys/d3d11/gstd3d11pluginutils.cpp:
-	* sys/d3d11/hlsl/PSMain_checker.hlsl:
-	* sys/d3d11/hlsl/PSMain_checker_luma.hlsl:
-	* sys/d3d11/hlsl/PSMain_checker_rgb.hlsl:
-	* sys/d3d11/hlsl/PSMain_checker_vuya.hlsl:
-	* sys/d3d11/hlsl/PSMain_color.hlsl:
-	* sys/d3d11/hlsl/PSMain_sample.hlsl:
-	* sys/d3d11/hlsl/PSMain_sample_premul.hlsl:
-	* sys/d3d11/hlsl/PSMain_snow.hlsl:
-	* sys/d3d11/hlsl/VSMain_color.hlsl:
-	* sys/d3d11/hlsl/VSMain_coord.hlsl:
-	* sys/d3d11/hlsl/VSMain_pos.hlsl:
-	* sys/d3d11/hlsl/gstd3d11plugin-hlsl.h:
-	* sys/d3d11/hlsl/meson.build:
-	* sys/d3d11/meson.build:
-	  d3d11: Port to d3dshader library
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6434>
-
-2024-03-24 01:41:48 +0900 Seungha Yang <seungha@centricular.com>
-
-	* gst-libs/gst/d3dshader/converter-hlsl/CSMain_converter.hlsl:
-	* gst-libs/gst/d3dshader/converter-hlsl/PSMain_converter.hlsl:
-	* gst-libs/gst/d3dshader/converter-hlsl/VSMain_converter.hlsl:
-	* gst-libs/gst/d3dshader/converter-hlsl/collect_hlsl_headers.py:
-	* gst-libs/gst/d3dshader/converter-hlsl/hlsl.h:
-	* gst-libs/gst/d3dshader/converter-hlsl/meson.build:
-	* gst-libs/gst/d3dshader/d3dshader-prelude.h:
-	* gst-libs/gst/d3dshader/gstd3dcompile.cpp:
-	* gst-libs/gst/d3dshader/gstd3dcompile.h:
-	* gst-libs/gst/d3dshader/gstd3dshader.h:
-	* gst-libs/gst/d3dshader/gstd3dshadercache.cpp:
-	* gst-libs/gst/d3dshader/gstd3dshadercache.h:
-	* gst-libs/gst/d3dshader/meson.build:
-	* gst-libs/gst/d3dshader/plugin-hlsl/PSMain_checker.hlsl:
-	* gst-libs/gst/d3dshader/plugin-hlsl/PSMain_checker_luma.hlsl:
-	* gst-libs/gst/d3dshader/plugin-hlsl/PSMain_checker_rgb.hlsl:
-	* gst-libs/gst/d3dshader/plugin-hlsl/PSMain_checker_vuya.hlsl:
-	* gst-libs/gst/d3dshader/plugin-hlsl/PSMain_color.hlsl:
-	* gst-libs/gst/d3dshader/plugin-hlsl/PSMain_sample.hlsl:
-	* gst-libs/gst/d3dshader/plugin-hlsl/PSMain_sample_premul.hlsl:
-	* gst-libs/gst/d3dshader/plugin-hlsl/PSMain_snow.hlsl:
-	* gst-libs/gst/d3dshader/plugin-hlsl/VSMain_color.hlsl:
-	* gst-libs/gst/d3dshader/plugin-hlsl/VSMain_coord.hlsl:
-	* gst-libs/gst/d3dshader/plugin-hlsl/VSMain_pos.hlsl:
-	* gst-libs/gst/d3dshader/plugin-hlsl/collect_hlsl_headers.py:
-	* gst-libs/gst/d3dshader/plugin-hlsl/hlsl.h:
-	* gst-libs/gst/d3dshader/plugin-hlsl/meson.build:
-	* gst-libs/gst/meson.build:
-	* meson_options.txt:
-	  d3dshader: Add HLSL shader library
-	  Adding a new library for HLSL compilation and compiled-bytecode caching.
-	  This library will be used by the d3d11 and d3d12 library/plugin, in order to
-	  reuse a single HLSL source and compiled HLSL bytecode.
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6434>
-
-2024-03-23 20:25:42 +0900 Seungha Yang <seungha@centricular.com>
-
-	* gst-libs/gst/d3d11/gstd3d11converter.cpp:
-	* gst-libs/gst/d3d11/hlsl/PSMain_converter.hlsl:
-	  d3d11: Update shader to be d3d12 compatible
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6434>
-
-2024-03-23 19:47:34 +0900 Seungha Yang <seungha@centricular.com>
-
-	* sys/d3d12/gstd3d12converter-builder.cpp:
-	* sys/d3d12/hlsl/PSMain_converter.hlsl:
-	* sys/d3d12/hlsl/PSMain_sample.hlsl:
-	* sys/d3d12/hlsl/PSMain_sample_premul.hlsl:
-	* sys/d3d12/hlsl/VSMain_converter.hlsl:
-	* sys/d3d12/hlsl/meson.build:
-	* sys/d3d12/meson.build:
-	  d3d12: Update shader to be Shader Model 5.0 compatible
-	  And use the fxc HLSL compiler
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6434>
-
-2024-03-22 12:57:33 +0100 Ruben Gonzalez <rgonzalez@fluendo.com>
-
-	* gst/rist/gstristsrc.c:
-	  ristsrc: Clean caps instead of unref
-	  Fix issue unreffing null caps. Better solution than
-	  ```
-	  if (src->caps)
-	    gst_caps_unref (src->caps);
-	  ```
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6432>
-
-2024-03-22 19:48:50 +0900 Seungha Yang <seungha@centricular.com>
-
-	* sys/d3d12/gstd3d12ipc.cpp:
-	* sys/d3d12/gstd3d12ipc.h:
-	* sys/d3d12/gstd3d12ipcclient.cpp:
-	* sys/d3d12/gstd3d12ipcclient.h:
-	* sys/d3d12/gstd3d12ipcserver.cpp:
-	* sys/d3d12/gstd3d12ipcserver.h:
-	* sys/d3d12/gstd3d12ipcsink.cpp:
-	* sys/d3d12/gstd3d12ipcsink.h:
-	* sys/d3d12/gstd3d12ipcsrc.cpp:
-	* sys/d3d12/gstd3d12ipcsrc.h:
-	* sys/d3d12/meson.build:
-	* sys/d3d12/plugin.cpp:
-	  d3d12: Add IPC elements
-	  Adding d3d12ipcsink and d3d12ipcsrc elements, equivalent to the D3D11 ones.
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6428>
-
-2024-03-22 22:51:54 +0900 Seungha Yang <seungha@centricular.com>
-
-	* sys/d3d12/gstd3d12pluginutils.cpp:
-	* sys/d3d12/gstd3d12pluginutils.h:
-	  d3d12: Add buffer copy helper method
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6428>
-
-2024-03-22 20:45:01 +0900 Seungha Yang <seungha@centricular.com>
-
-	* sys/d3d12/gstd3d12commandqueue.cpp:
-	  d3d12commandqueue: Always invoke notify asynchronously
-	  Otherwise the callback thread is unpredictable
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6428>
-
-2024-03-22 19:05:52 +0900 Seungha Yang <seungha@centricular.com>
-
-	* sys/d3d12/gstd3d12bufferpool.cpp:
-	* sys/d3d12/gstd3d12compositor.cpp:
-	* sys/d3d12/gstd3d12convert.cpp:
-	* sys/d3d12/gstd3d12converter.cpp:
-	* sys/d3d12/gstd3d12decoder.cpp:
-	* sys/d3d12/gstd3d12dxgicapture.cpp:
-	* sys/d3d12/gstd3d12encoder.cpp:
-	* sys/d3d12/gstd3d12graphicscapture.cpp:
-	* sys/d3d12/gstd3d12memory.cpp:
-	* sys/d3d12/gstd3d12memory.h:
-	* sys/d3d12/gstd3d12screencapturesrc.cpp:
-	* sys/d3d12/gstd3d12testsrc.cpp:
-	* sys/d3d12/gstd3d12window.cpp:
-	  d3d12memory: Update for API interop
-	  Add support for destroy
notify in case of wrapped memory, and - allow setting external fence for interop - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6428> - -2024-03-22 18:57:26 +0900 Seungha Yang <seungha@centricular.com> - - * sys/d3d12/gstd3d12commandqueue.cpp: - * sys/d3d12/gstd3d12commandqueue.h: - * sys/d3d12/gstd3d12decoder.cpp: - * sys/d3d12/gstd3d12device.cpp: - * sys/d3d12/gstd3d12encoder.cpp: - * sys/d3d12/gstd3d12window.cpp: - d3d12: Make primary fence sharable - Create primary fence with D3D12_FENCE_FLAG_SHARED flag so that - the fence can be shared with other APIs or processes - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6428> - -2024-03-18 18:46:17 +0100 Piotr Brzeziński <piotr@centricular.com> - - * sys/applemedia/vtdec.c: - vtdec: Ignore output loop errors in drain() if we're flushing - In an early non-linked scenario, this was causing a ton of criticals about the queue array, - because the output callback would still fire for leftover frames that were still being processed by VT - at the time the output loop stopped. This makes sure they're flushed correctly as well. - Also renames gst_vtdec_loop to gst_vtdec_output_loop for consistency with related functions. - wip - Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6397> - -2024-03-18 18:38:41 +0100 Piotr Brzeziński <piotr@centricular.com> - - * sys/applemedia/vtdec.c: - vtdec: Fix a deadlock during ProRes playback - Sometimes a call to negotiate (and thus drain) can happen from the output loop - (via finish_frame()), which will tell VT to output all internal frames, but that won't succeed - if we happen to decide to wait for the queue to empty (because the loop is waiting for draining to finish and - will not make space in the queue!). This commit adds an override for the queue size limit if we're draining/flushing. - This bug could happen for any formats, but was especially obvious for ProRes, which has dpb_size of 0. 
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6397>
-
-2024-03-19 23:37:37 +0900  Seungha Yang <seungha@centricular.com>
-
-	* sys/asio/gstasiodeviceprovider.cpp:
-	* sys/asio/gstasioobject.cpp:
-	* sys/asio/gstasioringbuffer.cpp:
-	* sys/asio/gstasiosink.cpp:
-	* sys/asio/gstasiosrc.cpp:
-	* sys/asio/gstasioutils.cpp:
-	* sys/asio/meson.build:
-	  asio: Add support for MinGW build
-	  Drop MSVC-specific bits and remove an unused dependency
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6404>
-
-2024-03-19 23:12:04 +0900  Seungha Yang <seungha@centricular.com>
-
-	* meson_options.txt:
-	* sys/asio/asio.h:
-	* sys/asio/gstasioobject.cpp:
-	* sys/asio/gstasioutils.h:
-	* sys/asio/meson.build:
-	  asio: Drop external SDK header dependency
-	  Build the ASIO plugin using our tiny SDK header
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6404>
-
-2024-01-30 18:18:31 +0800  He Junyan <junyan.he@intel.com>
-
-	* sys/va/gstvavp9enc.c:
-	* sys/va/gstvavp9enc.h:
-	* sys/va/meson.build:
-	* sys/va/plugin.c:
-	  va: Implement the vavp9enc plugin
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3293>
-
-2024-03-11 23:06:44 +0800  He Junyan <junyan.he@intel.com>
-
-	* tests/check/libs/vp9bitwriter.c:
-	* tests/check/meson.build:
-	  test: add vp9 bitwriter test case
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3293>
-
-2024-01-30 18:10:12 +0800  He Junyan <junyan.he@intel.com>
-
-	* gst-libs/gst/codecparsers/gstvp9bitwriter.c:
-	* gst-libs/gst/codecparsers/gstvp9bitwriter.h:
-	* gst-libs/gst/codecparsers/meson.build:
-	  vp9bitwriter: Add the VP9 bit writer helper functions
-	  In this first version, we only implement "show existing frame"
-	  and super frame writing. Writing of other frame header types can
-	  be added when needed.
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3293>
-
-2024-03-19 19:24:56 +0900  Seungha Yang <seungha@centricular.com>
-
-	* sys/d3d12/gstd3d12device.cpp:
-	  d3d12device: Set debugging-friendly object name
-	  Build the object name with the DXGI adapter index
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6395>
-
-2024-03-16 22:37:46 +0900  Seungha Yang <seungha@centricular.com>
-
-	* sys/d3d12/gstd3d12commandallocatorpool.cpp:
-	* sys/d3d12/gstd3d12commandlistpool.cpp:
-	* sys/d3d12/gstd3d12device.cpp:
-	  d3d12: Suppress expected leak reports
-	  Such leaks are expected and intended ones
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6395>
-
-2024-03-16 20:40:58 +0900  Seungha Yang <seungha@centricular.com>
-
-	* sys/d3d12/gstd3d12-private.h:
-	* sys/d3d12/gstd3d12commandqueue.cpp:
-	* sys/d3d12/gstd3d12device.cpp:
-	* sys/d3d12/plugin.cpp:
-	  d3d12device: Keep device object permanently
-	  Because ID3D12Device objects are singletons per adapter,
-	  GstD3D12Device was following the API design, that is, keeping track
-	  of global GstD3D12Device objects and reusing them.
-	  That means the ID3D12Device object can be released at the time
-	  when the GstD3D12Device is destroyed.
-	  But external APIs such as NVENC do not seem to be happy
-	  with the released ID3D12Device; that could be a driver bug though.
-	  Let's hold an already opened ID3D12Device permanently without releasing
-	  it for now.
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6395>
-
-2024-03-16 21:00:30 +0900  Seungha Yang <seungha@centricular.com>
-
-	* sys/d3d12/gstd3d12commandqueue.cpp:
-	* sys/d3d12/gstd3d12commandqueue.h:
-	  d3d12commandqueue: Add drain method
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6395>
-
-2024-03-16 20:04:43 +0900  Seungha Yang <seungha@centricular.com>
-
-	* sys/d3d12/gstd3d12commandallocatorpool.cpp:
-	* sys/d3d12/gstd3d12commandallocatorpool.h:
-	* sys/d3d12/gstd3d12commandlistpool.cpp:
-	* sys/d3d12/gstd3d12commandlistpool.h:
-	* sys/d3d12/gstd3d12commandqueue.cpp:
-	* sys/d3d12/gstd3d12commandqueue.h:
-	* sys/d3d12/gstd3d12compositor.cpp:
-	* sys/d3d12/gstd3d12convert.cpp:
-	* sys/d3d12/gstd3d12converter.cpp:
-	* sys/d3d12/gstd3d12decoder.cpp:
-	* sys/d3d12/gstd3d12descriptorpool.cpp:
-	* sys/d3d12/gstd3d12descriptorpool.h:
-	* sys/d3d12/gstd3d12device.cpp:
-	* sys/d3d12/gstd3d12dxgicapture.cpp:
-	* sys/d3d12/gstd3d12encoder.cpp:
-	* sys/d3d12/gstd3d12overlaycompositor.cpp:
-	* sys/d3d12/gstd3d12testsrc.cpp:
-	* sys/d3d12/gstd3d12window.cpp:
-	  d3d12: Use native device handle if possible
-	  Various abstraction objects such as command queue/list/allocator
-	  can be constructed without GstD3D12Device
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6395>
-
-2024-03-16 02:00:31 +0900  Seungha Yang <seungha@centricular.com>
-
-	* sys/d3d12/gstd3d12commandqueue.cpp:
-	  d3d12commandqueue: Allow empty command list
-	  Just increase the fence value and signal the queue in that case
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6395>
-
-2024-03-07 22:54:29 +0900  Seungha Yang <seungha@centricular.com>
-
-	* tests/examples/d3d11/d3d11decoder-appsink2.cpp:
-	* tests/examples/d3d11/meson.build:
-	  examples: d3d11: Add inter-device synchronization example
-	  Adding an example to demonstrate resource sharing between
-	  D3D11 devices and GPU synchronization
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6303>
-
-2024-03-06 15:39:33 -0500  Ruijing Dong <ruijing.dong@amd.com>
-
-	* sys/va/gstvaencoder.c:
-	* sys/va/gstvaencoder.h:
-	* sys/va/gstvah265enc.c:
-	  va: enc: check the surface alignment attribute
-	  Apply the surface alignment attribute when available,
-	  and also fix a frame cropping issue for the VA H.265 encoder.
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6282>
-
-2024-03-14 19:51:08 +0000  L. E. Segovia <amy@centricular.com>
-
-	* ext/soundtouch/meson.build:
-	  soundtouch: Fix build failure with Apple Clang caused by missing cpp_std
-	  Apple Clang sets C++98 by default. I'm applying C++14 to account for Meson's
-	  lack of support/fallback for `cpp_std=c++11`.
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6374>
-
-2024-03-16 19:32:19 +0100  Mark Nauwelaerts <mnauw@users.sourceforge.net>
-
-	* gst/dvdspu/gstspu-pgs.c:
-	  dvdspu: avoid null dereference
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6386>
-
-2024-03-17 11:18:37 +0000  Philippe Normand <philn@igalia.com>
-
-	* gst-libs/gst/play/gstplay.c:
-	  play: Fix a critical warning in error callback
-	  `on_error()` can be called with a NULL details structure, so in that situation
-	  the `gst_structure_copy()` would raise a critical warning. Create an empty
-	  structure instead of attempting to copy a NULL one.
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6385>
-
-2024-03-16 21:25:38 +0900  Seungha Yang <seungha@centricular.com>
-
-	* sys/d3d12/gstd3d12device.cpp:
-	  d3d12: Fix SDK debug layer activation
-	  The debug layer must be enabled before creating the device.
-	  Otherwise, devices already opened before the activation will be removed.
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6382>
-
-2024-01-06 13:07:16 +0100  Robert Mader <robert.mader@collabora.com>
-
-	* sys/v4l2codecs/gstv4l2codecav1dec.c:
-	* sys/v4l2codecs/gstv4l2codech264dec.c:
-	* sys/v4l2codecs/gstv4l2codech265dec.c:
-	* sys/v4l2codecs/gstv4l2codecmpeg2dec.c:
-	* sys/v4l2codecs/gstv4l2codecvp8dec.c:
-	* sys/v4l2codecs/gstv4l2codecvp9dec.c:
-	* sys/v4l2codecs/gstv4l2decoder.c:
-	* sys/v4l2codecs/gstv4l2decoder.h:
-	  v4l2codecs: decoders: Add DMA_DRM caps support
-	  In order to simplify caps negotiation for clients and, notably, be more
-	  compatible with va* decoders.
-	  Crucially this allows clients to know ahead of time whether buffers will
-	  actually be DMABufs.
-	  Similar to GstVaBaseDec, we only announce system memory caps if the peer
-	  has ANY caps. Furthermore, and again like the va decoders, we fail in
-	  `decide_allocation()` if DMA_DRM caps are used without VideoMeta.
-	  Apart from buggy peers this can happen e.g. when a peer with ANY caps
-	  is used in combination with caps filters.
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5890>
-
-2024-02-17 06:01:41 +0100  Robert Mader <robert.mader@collabora.com>
-
-	* sys/v4l2codecs/gstv4l2codecav1dec.c:
-	* sys/v4l2codecs/gstv4l2codech264dec.c:
-	* sys/v4l2codecs/gstv4l2codech265dec.c:
-	* sys/v4l2codecs/gstv4l2codecmpeg2dec.c:
-	* sys/v4l2codecs/gstv4l2codecvp8dec.c:
-	* sys/v4l2codecs/gstv4l2codecvp9dec.c:
-	* sys/v4l2codecs/gstv4l2decoder.c:
-	* sys/v4l2codecs/gstv4l2decoder.h:
-	  v4l2codecs: decoders: Introduce and use set_output_state helper class
-	  Allowing us to avoid some code duplication. This will become more
-	  important with upcoming changes to caps generation.
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5890>
-
-2024-02-17 05:43:23 +0100  Robert Mader <robert.mader@collabora.com>
-
-	* sys/v4l2codecs/gstv4l2decoder.c:
-	* sys/v4l2codecs/gstv4l2decoder.h:
-	  v4l2codecs: decoder: Clean up select_src_format()
-	  Most importantly, rely on video info helpers instead of manual parsing
-	  of caps, which will allow us to use additional helpers in the future.
-	  While at it, tighten the check for supported formats - failing that
-	  indicates a bug in caps negotiation - and make some style changes.
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5890>
-
-2024-02-17 04:20:16 +0100  Robert Mader <robert.mader@collabora.com>
-
-	* sys/v4l2codecs/gstv4l2decoder.c:
-	  v4l2codecs: decoder: Generalize size enumeration caps
-	  By reducing the generated caps to the minimal number of fields and
-	  using intersections instead of merges. This will allow us to reuse the
-	  result in the future.
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5890>
-
-2024-02-16 22:48:17 +0100  Robert Mader <robert.mader@collabora.com>
-
-	* sys/v4l2codecs/gstv4l2codecav1dec.c:
-	* sys/v4l2codecs/gstv4l2codech264dec.c:
-	* sys/v4l2codecs/gstv4l2codech265dec.c:
-	* sys/v4l2codecs/gstv4l2codecmpeg2dec.c:
-	* sys/v4l2codecs/gstv4l2codecvp8dec.c:
-	* sys/v4l2codecs/gstv4l2codecvp9dec.c:
-	* sys/v4l2codecs/gstv4l2decoder.c:
-	* sys/v4l2codecs/gstv4l2decoder.h:
-	  v4l2codecs: decoders: Use src template for negotiation filter
-	  This ensures we don't create filter caps that are not supported by the
-	  individual codec implementations, as well as that the resulting caps
-	  have the required fields so they can be turned into a GstVideoFormat.
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5890>
-
-2024-03-14 20:25:52 +0900  Seungha Yang <seungha@centricular.com>
-
-	* sys/asio/gstasiosink.cpp:
-	* sys/asio/gstasiosrc.cpp:
-	  asio: Fix {input,output}-channels property handling
-	  Fixing a regression introduced by commit 06dc931b52fbd858640506616f5a1a928792b27c
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6370>
-
-2024-03-14 00:49:45 +0900  Seungha Yang <seungha@centricular.com>
-
-	* gst-libs/gst/d3d11/gstd3d11device.cpp:
-	  d3d11device: Fix adapter LUID comparison in wrapped device mode
-	  Fix integer type mismatch
-	  Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3382
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6358>
-
-2024-02-23 00:24:14 +0100  Alexander Slobodeniuk <aslobodeniuk@fluendo.com>
-
-	* gst-libs/gst/d3d11/gstd3d11device-private.h:
-	* gst-libs/gst/d3d11/gstd3d11device.cpp:
-	* gst-libs/gst/d3d11/gstd3d11utils.cpp:
-	  d3d11device: raise 'device-removed' signal on DXGI_ERROR_DEVICE_REMOVED
-	  When this error gets caught, the GstD3D11Device object raises the new
-	  "device-removed" signal. This allows handling the error from outside:
-	  stop the playback, re-create the player, replace the caught GstContext with
-	  the new one.
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6193>
-
-2023-11-29 14:44:37 +0100  Michiel Westerbeek <happylinks@gmail.com>
-
-	* sys/nvcodec/gstcudaconvertscale.c:
-	* sys/va/gstvavpp.c:
-	  gstcudaconvertscale, gstvavpp, videoconvertscale: downgrade 'Can't keep DAR' to debug
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5730>
-
-2024-03-13 17:19:26 +0800  He Junyan <junyan.he@intel.com>
-
-	* tests/check/libs/av1bitwriter.c:
-	* tests/check/libs/h264bitwriter.c:
-	* tests/check/libs/h265bitwriter.c:
-	  test: Correct the API return type of {h264,h265,av1}bitwriter
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6354>
-
-2024-03-13 00:42:16 +0900  Seungha Yang <seungha@centricular.com>
-
-	* sys/d3d12/gstd3d12device.cpp:
-	  d3d12device: Fix IDXGIFactory2 leak
-	  The factory passed to the gst_d3d12_device_find_adapter() method is
-	  already a valid handle
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6340>
-
-2024-03-12 13:50:18 +0200  Sebastian Dröge <sebastian@centricular.com>
-
-	* gst/videoparsers/gstvideoparseutils.c:
-	  videoparsers: Don't verbosely warn about CEA_708_PROCESS_EM_DATA_FLAG not being set
-	  And the same for CEA_708_PROCESS_CC_DATA_FLAG. This is not really a
-	  problem and was polluting logs with warnings for every single frame.
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6336>
-
-2024-03-10 12:04:55 -0300  L. E. Segovia <amy@centricular.com>
-
-	* sys/tinyalsa/meson.build:
-	  meson: Require tinyalsa >= 1.1.0 when building its plugin
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6311>
-
-2024-03-09 15:19:20 +0000  L. E. Segovia <amy@centricular.com>
-
-	* sys/tinyalsa/tinyalsasink.c:
-	  tinyalsasink: Fix missing const and deprecations with tinyalsa v2
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6311>
-
-2024-03-12 00:45:15 +0900  Seungha Yang <seungha@centricular.com>
-
-	* gst-libs/gst/cuda/gstcudabufferpool.cpp:
-	* gst-libs/gst/d3d11/gstd3d11bufferpool.cpp:
-	* sys/d3d12/gstd3d12bufferpool.cpp:
-	  cuda,d3d11,d3d12bufferpool: Disable preallocation
-	  Do not chain up to the parent's GstBufferPool::start(), which would do
-	  preallocation. We don't want buffers to be preallocated,
-	  since there are various cases where the negotiated downstream buffer pool is
-	  not used at all (e.g., zero-copy decoding, IPC elements).
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6326>
-
-2024-03-11 12:42:48 +0100  Antonio Larrosa <alarrosa@suse.com>
-
-	* sys/va/gstvaav1enc.c:
-	* sys/va/gstvah264enc.c:
-	* sys/va/gstvah265enc.c:
-	  va{h264,h265,av1}enc: fix potential crash on devices without rate control
-	  This fixes a crash in `gst_va_h264_enc_class_init` and `gst_va_h265_enc_class_init`
-	  (and probably also in gst_va_av1_enc_class_init) when calling
-	  `g_object_class_install_properties (object_class, n_props, properties);`
-	  When rate_control_type is 0, the following code is executed:
-	  ```
-	  } else {
-	    n_props--;
-	    properties[PROP_RATE_CONTROL] = NULL;
-	  }
-	  ```
-	  n_props initially has the value N_PROPERTIES, but PROP_RATE_CONTROL
-	  is not the last element in the array, so it makes
-	  g_object_class_install_properties fail to iterate over the
-	  properties array.
-	  This applies the same fix to gstvah264enc.c, gstvah265enc.c and
-	  gstvaav1enc.c.
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6319>
-
-2024-03-07 12:28:58 +0100  Jurijs Satcs <jurijs.satcs@veset.tv>
-
-	* docs/plugins/gst_plugins_cache.json:
-	* gst/mpegtsmux/gstbasetsmux.c:
-	* gst/mpegtsmux/tsmux/tsmux.c:
-	  mpegtsmux: allow to disable SCTE NULL by setting interval to 0
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6284>
-
-2024-02-26 14:57:32 +0100  Piotr Brzeziński <piotr@centricular.com>
-
-	* sys/applemedia/atdec.c:
-	* sys/applemedia/atdec.h:
-	* sys/applemedia/meson.build:
-	* sys/applemedia/plugin.m:
-	  macos: Move atdec from applemedia (-bad) to osxaudio (-good)
-	  osxaudio has a few helper methods potentially useful in atdec (or a future atenc), like GStreamer -> CoreAudio
-	  channel mapping. It doesn't make sense to duplicate them in applemedia, and atdec is the only audio-oriented
-	  element there anyway.
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6223>
-
-2024-03-08 18:22:53 +1100  Matthew Waters <matthew@centricular.com>
-
-	* docs/plugins/gst_plugins_cache.json:
-	* ext/closedcaption/ccutils.c:
-	* ext/closedcaption/ccutils.h:
-	* ext/closedcaption/gstcccombiner.c:
-	* ext/closedcaption/gstcccombiner.h:
-	* ext/closedcaption/gstccconverter.c:
-	* ext/closedcaption/gstcea608mux.c:
-	* tests/check/elements/cccombiner.c:
-	  closedcaption: produce valid cea608 padding by default
-	  Cea608 (valid) padding removal is available on the input side of ccconverter
-	  or configurable on cccombiner. cccombiner can now configure whether
-	  valid or invalid cea608 padding is used and, for valid padding, how long
-	  after valid non-padding to keep sending valid padding.
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6300>
-
-2024-03-09 20:16:22 +0800  He Junyan <junyan.he@intel.com>
-
-	* sys/va/gstvaav1enc.c:
-	  va: av1enc: Init the output_frame_num when resetting gf group
-	  Fixes: #3359
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6308>
-
-2024-03-06 12:15:37 +0000  Chris Spencer <spencercw@gmail.com>
-
-	* gst-libs/gst/vulkan/gstvkmemory.c:
-	  vkmemory: invalidate non-coherent memory when mapping for read
-	  Mapping non-coherent memory does not implicitly invalidate the host caches.
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6310>
-
-2024-02-22 12:26:33 +0000  Chris Spencer <spencercw@gmail.com>
-
-	* gst-libs/gst/vulkan/gstvkoperation.c:
-	  vulkan/operation: use timeline semaphore fallback if sync2 not supported
-	  gst_vulkan_operation_add_dependency_frame does not fall back to the
-	  timeline semaphore implementation if VK_KHR_synchronization2 is compiled
-	  in, but not supported by the driver.
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6309>
-
-2024-02-22 12:22:34 +0000  Chris Spencer <spencercw@gmail.com>
-
-	* gst-libs/gst/vulkan/gstvkoperation.c:
-	  vulkan/operation: add missing unlock
-	  gst_vulkan_operation_add_dependency_frame does not release its lock if
-	  support for VK_KHR_timeline_semaphore/VK_KHR_synchronization2 is compiled
-	  in, but not supported by the driver.
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6309>
-
-2024-03-08 18:18:08 +0200  Jordan Petridis <jordan@centricular.com>
-
-	* ext/rsvg/meson.build:
-	  rsvg: Add direct dependency on cairo
-	  We include cairo.h in the element, so we should also
-	  declare it in meson.
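The rsvg change above is the classic "declare what you include" rule for Meson: a header pulled in directly must come from a dependency the target itself declares, not one that happens to arrive transitively. A hedged sketch of what such a declaration looks like (variable names, the `rsvg` option, and the source list are illustrative; only `dependency()` and `library()` are real Meson API):

```meson
# Declare cairo explicitly because the plugin includes cairo.h directly,
# instead of relying on it arriving transitively via librsvg.
cairo_dep = dependency('cairo', required: get_option('rsvg'))
rsvg_dep = dependency('librsvg-2.0', required: get_option('rsvg'))

if rsvg_dep.found() and cairo_dep.found()
  gstrsvg = library('gstrsvg', rsvg_sources,
    dependencies: [rsvg_dep, cairo_dep],
    install: true)
endif
```

Relying on the transitive include path tends to work until a distro splits or reorders its pkg-config files, at which point the build breaks only for some users.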
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6306>
-
-2024-03-07 17:36:33 +0100  François Laignel <francois@centricular.com>
-
-	* ext/webrtc/gstwebrtcbin.c:
-	* ext/webrtc/transportstream.c:
-	  webrtc: add all SSRC attributes getting CAPS for a PT
-	  The transport stream only returned the CAPS for the first matching PT entry
-	  from the `ptmap`. Other SSRCs with the same PT were not included. For a stream
-	  which bundled multiple audio streams, for instance, only the first SSRC was
-	  known to the SSRC demux and downstream elements.
-	  This commit adds all the `ssrc-` attributes from the matching PT entries.
-	  The RTP jitter buffer can now find the CNAME corresponding to its SSRC even if it
-	  was not the first to be registered for a particular PT.
-	  The RTP PT demux removes `ssrc-*` attributes corresponding to other SSRCs
-	  before pushing SSRC-specific CAPS to downstream elements.
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6119>
-
-2024-02-23 11:00:20 +0100  François Laignel <francois@centricular.com>
-
-	* ext/webrtc/gstwebrtcbin.c:
-	  webrtcbin: RFC5576 - early CNAME support
-	  See RFC5576: have the CNAME available to the rtpjitterbuffer before the first
-	  RTCP SR is received, for rapid synchronization. Similar to what was done for
-	  RTSP (last 2 commits) of MR 2132.
-	  RFC5576: https://www.rfc-editor.org/rfc/rfc5576
-	  MR 2132: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2132
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6119>
-
-2024-03-02 02:13:41 +0900  Seungha Yang <seungha@centricular.com>
-
-	* sys/d3d12/gstd3d12dxgicapture.cpp:
-	* sys/d3d12/gstd3d12dxgicapture.h:
-	* sys/d3d12/gstd3d12graphicscapture.cpp:
-	* sys/d3d12/gstd3d12graphicscapture.h:
-	* sys/d3d12/gstd3d12screencapture.cpp:
-	* sys/d3d12/gstd3d12screencapture.h:
-	* sys/d3d12/gstd3d12screencapturesrc.cpp:
-	* sys/d3d12/meson.build:
-	  d3d12screencapturesrc: Add support for WGC API
-	  Adding support for window and monitor capturing by using the
-	  Windows Graphics Capture API.
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6256>
-
-2024-03-05 22:49:05 +0900  Seungha Yang <seungha@centricular.com>
-
-	* sys/d3d12/gstd3d12memory.cpp:
-	* sys/d3d12/gstd3d12memory.h:
-	* sys/d3d12/gstd3d12utils.cpp:
-	* sys/d3d12/gstd3d12utils.h:
-	  d3d12memory: Implement NT handle caching and custom user data support
-	  Same as the d3d11 memory implementation.
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6256>
-
-2024-02-16 18:08:36 +0100  Mathieu Duponchelle <mathieu@centricular.com>
-
-	* gst/onvif/gstrtponviftimestamp.c:
-	* gst/onvif/gstrtponviftimestamp.h:
-	  rtponviftimestamp: make sure to set E and T bits on last buffer of lists
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5173>
-
-2024-02-28 09:30:33 +1100  Jan Schmidt <jan@centricular.com>
-
-	* gst/onvif/gstrtponviftimestamp.c:
-	  rtponviftimestamp: Use gst_segment_to_stream_time_full()
-	  In the situation where playback starts from a keyframe before
-	  the target playback segment, the first buffers will be
-	  outside the configured segment and gst_segment_to_stream_time()
-	  will return GST_CLOCK_TIME_NONE unconditionally.
-	  If drop-out-of-segment is false, the RTP buffers will not be
-	  dropped, but will be sent without ONVIF extension timestamps
-	  and given GST_CLOCK_TIME_NONE timestamps on the receiver.
-	  Instead, use gst_segment_to_stream_time_full() to extrapolate
-	  stream time outside the segment so that such buffers still
-	  get assigned their correct timestamps on the receiver.
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6248>
-
-2024-03-01 21:00:33 +1100  Jan Schmidt <jan@centricular.com>
-
-	* gst/dvbsubenc/gstdvbsubenc-util.c:
-	  dvbsubenc: Fix bottom field size calculation
-	  Don't accidentally include the stuffing byte (if present)
-	  in the bottom field size. It should only be included in the
-	  total segment length.
-	  Fixes problems with FFmpeg not rendering subtitles
-	  with a stuffing byte, giving an "Invalid object location!" error.
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6250>
-
-2024-02-28 15:51:31 +0200  Sebastian Dröge <sebastian@centricular.com>
-
-	* sys/aja/gstajasink.cpp:
-	  ajasink: Make logging between ajasrc and ajasink more consistent
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6208>
-
-2024-02-23 12:41:44 +0200  Sebastian Dröge <sebastian@centricular.com>
-
-	* sys/aja/gstajasrc.cpp:
-	* sys/aja/gstajasrc.h:
-	  ajasrc: Improve clock handling
-	  Provide a clock from the source that is a monotonic system clock with
-	  the rate corrected based on the measured and ideal capture rate of the
-	  frames.
-	  If this clock is selected as the pipeline clock, then provide perfect
-	  timestamps to downstream.
-	  Otherwise, if the pipeline clock is the monotonic system clock, use the
-	  internal clock for converting back to the monotonic system clock.
-	  Otherwise, use the monotonic system clock time calculated in the above
-	  case and convert that to the pipeline clock.
-	  In all cases this will give a smoother time than the previous code,
-	  which simply took the difference between the driver-provided capture
-	  time and the current real-time clock time, and applied that to the
-	  current pipeline clock time.
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6208>
-
-2024-02-23 12:21:56 +0200  Sebastian Dröge <sebastian@centricular.com>
-
-	* sys/aja/gstajasrc.cpp:
-	  ajasrc: Move frame drop detection after the frame transfer
-	  Otherwise there's a small window between querying the state and doing
-	  the transfer in which a frame could be dropped, and we would then output
-	  the frame right after the dropped one as if it were the dropped frame.
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6208>
-
-2024-02-23 12:21:04 +0200  Sebastian Dröge <sebastian@centricular.com>
-
-	* sys/aja/gstajasrc.cpp:
-	  ajasrc: Improve debug output related to frame transfers
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6208>
-
-2024-02-26 22:19:57 +0800  He Junyan <junyan.he@intel.com>
-
-	* sys/msdk/gstmsdkdec.c:
-	* sys/msdk/gstmsdkenc.c:
-	* sys/msdk/gstmsdkvpp.c:
-	  MSDK: Set the job type when creating a context from an external handle
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6221>
-
-2024-03-01 00:08:03 +0800  He Junyan <junyan.he@intel.com>
-
-	* sys/va/gstvah265enc.c:
-	  vah265enc: Set backward_num to 1 in low delay mode
-	  In low delay B mode, the P frame is converted to a B frame with forward
-	  references. For example, one P frame may refer to P-1, P-2 and P-3 in
-	  list0 and refer to P-3, P-2 and P-1 in list1.
-	  So the numbers in list0 and list1 do not reflect forward_num and
-	  backward_num. VA-API does not provide a ref num for forward or backward
-	  so far. In this case, we just consider backward_num to be 1 conservatively.
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6249>
-
-2024-01-26 23:50:08 +0800  He Junyan <junyan.he@intel.com>
-
-	* sys/va/gstvah265enc.c:
-	  vah265enc: Improve B pyramid mode in HEVC
-	  If the reference frame number is bigger than 2, we can enable the
-	  pyramid B mode. We do not need to assign a reference frame to each
-	  pyramid level.
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6249>
-
-2024-01-28 23:27:48 +0800  He Junyan <junyan.he@intel.com>
-
-	* sys/va/gstvah265enc.c:
-	  vah265enc: Expand log2_max_pic_order_cnt if needed
-	  In b_pyramid mode, B frames can be refs and prevPicOrderCntLsb can
-	  be the B frame POC, which is smaller than the P frame's. This can cause
-	  a POC diff bigger than MaxPicOrderCntLsb/2 and generate a wrong POC value.
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6249>
-
-2024-03-05 12:58:57 +0000  Tim-Philipp Müller <tim@centricular.com>
+2025-03-12 13:59:45 +0100  Tim-Philipp Müller <tim@centricular.com>
 
 	* README.md:
 	* RELEASE:
 	* meson.build:
-	  Back to development
-	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6261>
+	  Back to development in main branch after 1.26.0
+	  Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8621>
 
-=== release 1.24.0 ===
+=== release 1.26.0 ===
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/MAINTAINERS -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/MAINTAINERS
Changed
@@ -1,11 +1,8 @@
 GStreamer is currently maintained by the consensus of a number
-of people, including, but not limited to:
+of people. The current list of maintainers is the developers, maintainers
+and owners in the GitLab GStreamer project at:
 
-  Jan Schmidt <thaytan@noraisin.net>
-  Wim Taymans <wim.taymans@gmail.com>
-  David Schleef <ds@schleef.org>
-  Tim-Philipp Müller <tim centricular net>
-  Sebastian Dröge <slomo@coaxion.net>
+https://gitlab.freedesktop.org/groups/gstreamer/-/group_members
 
 Maintainer-related issues should be addressed to:
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/NEWS -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/NEWS
Changed
@@ -1,12 +1,10 @@ -GStreamer 1.26 Release Notes +GStreamer 1.28 Release Notes -GStreamer 1.26.0 was originally released on 11 March 2025. +GStreamer 1.28.0 was originally released on 27 January 2026. -The latest bug-fix release in the stable 1.26 series is 1.26.10 and was released on 25 December 2025 +See https://gstreamer.freedesktop.org/releases/1.28/ for the latest version of this document. -See https://gstreamer.freedesktop.org/releases/1.26/ for the latest version of this document. - -Last updated: Thursday 25 December 2025, 15:00 UTC (log) +Last updated: Tuesday 27 January 2026, 17:00 UTC (log) ## Introduction @@ -17,908 +15,740 @@ ## Highlights -- H.266 Versatile Video Coding (VVC) codec support -- Low Complexity Enhancement Video Coding (LCEVC) support -- Closed captions: H.264/H.265 extractor/inserter, cea708overlay, cea708mux, tttocea708 and more -- New hlscmafsink, hlssink3, and hlsmultivariantsink; HLS/DASH client and dashsink improvements -- New AWS and Speechmatics transcription, translation and TTS services elements, plus translationbin -- Splitmux lazy loading and dynamic fragment addition support -- Matroska: H.266 video and rotation tag support, defined latency muxing -- MPEG-TS: support for H.266, JPEG XS, AV1, VP9 codecs and SMPTE ST-2038 and ID3 meta; mpegtslivesrc -- ISO MP4: support for H.266, Hap, Lagarith lossless codecs; raw video support; rotation tags -- SMPTE 2038 ancillary data streams support -- JPEG XS image codec support -- Analytics: New TensorMeta; N-to-N relationships; Mtd to carry segmentation masks -- ONVIF metadata extractor and conversion to/from relation metas -- New originalbuffer element that can restore buffers again after transformation steps for analytics -- Improved Python bindings for analytics API -- Lots of Vulkan integration and Vulkan Video decoder/encoder improvements -- OpenGL integration improvements, esp. 
in glcolorconvert, gldownload, glupload -- Qt5/Qt6 QML GL sinks now support direct DMABuf import from hardware decoders -- CUDA: New compositor, Jetson NVMM memory support, stream-ordered allocator -- NVCODEC AV1 video encoder element, and nvdsdewarp -- New Direct3D12 integration support library -- New d3d12swapchainsink and d3d12deinterlace elements and D3D12 sink/source for zero-copy IPC -- Decklink HDR support (PQ + HLG) and frame scheduling enhancements -- AJA capture source clock handling and signal loss recovery improvements -- RTP and RTSP: New rtpbin sync modes, client-side MIKEY support in rtspsrc -- New Rust rtpbin2, rtprecv, rtpsend, and many new Rust RTP payloaders and depayloaders -- webrtcbin support for basic rollbacks and other improvements -- webrtcsink: support for more encoders, SDP munging, and a built-in web/signalling server -- webrtcsrc/sink: support for uncompressed audio/video and NTP & PTP clock signalling and synchronization -- rtmp2: server authentication improvements incl. 
Limelight CDN (llnw) authentication -- New Microsoft WebView2 based web browser source element -- The GTK3 plugin has gained support for OpenGL/WGL on Windows -- Many GTK4 paintable sink improvements -- GstPlay: id-based stream selection and message API improvements -- Real-time pipeline visualization in a browser using a new dots tracer and viewer -- New tracers for tracking memory usage, pad push timings, and buffer flow as pcap files -- VA hardware-acclerated H.266/VVC decoder, VP8 and JPEG encoders, VP9/VP8 alpha decodebins -- Video4Linux2 elements support DMA_DRM caps negotiation now -- V4L2 stateless decoders implement inter-frame resolution changes for AV1 and VP9 -- Editing services: support for reverse playback and audio channel reordering -- New QUIC-based elements for working with raw QUIC streams, RTP-over-QUIC (RoQ) and WebTransport -- Apple AAC audio encoder and multi-channel support for the Apple audio decoders -- cerbero: Python bindings and introspection support; improved Windows installer based on WiX5 +- AMD HIP plugin and integration helper library +- Vulkan Video AV1 and VP9 decoding, H.264 encoding, and 10-bit support for H.265 decoder +- waylandsink: Parse and set the HDR10 metadata and other color management improvements +- Audio source separation element based on demucs in Rust +- Analytics combiner and splitter elements plus batch meta to batch buffers from one or more streams +- LiteRT inference element; move modelinfo to analytics lib; add script to help with modelinfo generation and upgrade +- Add general classifier tensor-decoder, facedetector, and more analytics convenience API +- New tensordecodebin element to auto-plug compatible tensor decoders based on their caps and many other additions and + improvements +- Add a burn-based YOLOX inference element and a YOLOX tensor decoder in Rust +- applemedia: VideoToolbox VP9 and AV1 hardware-accelerated decoding support, and 10-bit HEVC encoding +- Add new GIF decoder element in Rust with 
looping support +- input-selector: implements a two-phase sinkpad switch now to avoid races when switching input pads +- The inter wormhole sink and source elements gained a way to forward upstream events to the producer as well as new + fine-tuning properties +- webrtcsink: add renegotiation support and support for va hardware encoders +- webrtc WHEP client and server signaller +- New ST-2038 ancillary data combiner and extractor elements +- fallbacksrc gained support for encoded streams +- flv: enhanced rtmp H.265 video support, and support for multitrack audio +- glupload: Implement udmabuf uploader to share buffers between software decoders/sources and GPUs, display engines (wayland), + and other dma devices +- video: Add crop, scale, rotate, flip, shear and more GstMeta transformation +- New task pool GstContext to share a thread pool amongst elements for better resource management and performance, especially + for video conversion and compositing +- New Deepgram speech-to-text transcription plugin and many other translation and transcription improvements +- Speech synthesizers: expose new “compress” overflow mode that can speed up audio while preserving pitch +- ElevenLabs voice cloning element and support for Speechmatics speaker identification API +- textaccumulate: new element for speech synthesis or translation preprocessing +- New vmaf element to calculate perceptual video quality assessment scores using Netflix’s VMAF framework +- decodebin3: expose KLV, ID3 PES and ST-2038 ancillary data streams with new metadata GstStream type +- New MPEG-H audio decoding plugin plus MP4 demuxing support +- LCEVC: Add autoplugging decoding support for LCEVC H265 and H266 video streams and LCEVC H.265 and H.266 encoders +- RTP “robust MPEG audio”, raw audio (L8, L16, L24), and SMPTE ST291 ancillary metadata payloaders/depayloaders in Rust +- Add a Rust-based icecastsink element with AAC support +- The Windows IPC plugin gained support for passing generic data in 
addition to raw audio/video, and various properties +- New D3D12 interlace and overlay compositor elements, plus many other D3D12 improvements +- Blackmagic Decklink elements gained support for capturing and outputting all types of VANC via GstAncillaryMeta +- GstLogContext API to reduce log spam in several components and GST_DEBUG_ONCE (etc) convenience macros to log things only + once +- hlssink3, hlscmafsink: Support the use of a single media file, plus I-frame only playlist support +- Webkit: New wpe2 plugin making use of the “WPE Platform API” +- MPEG-TS demuxer can now disable skew corrections +- New Qt6 QML render source element +- qml6gloverlay: support directly passing a QQuickItem for the QML render tree +- unixfdsink: Add a property to allow copying to make the sink usable with more upstream elements +- dots-viewer: Improve dot file generation and interactivity +- Python bindings: more syntactic sugar, analytics API improvements and type annotations +- cerbero: add support for Python wheel packaging, Windows ARM64, new iOS xcframework, Gtk4 on macOS and Windows, and more + plugins +- Smaller binary sizes of Rust plugins in Windows and Android binary packages +- Peel: New C++ bindings for GStreamer - Lots of new plugins, features, performance improvements and bug fixes +- Countless bug fixes, build fixes, memory leak fixes, and other stability and reliability improvements ## Major new features and changes -### H.266 Versatile Video Coding (VVC) codec support - -- The H.266 / VVC video codec is a successor to H.265 / HEVC and is standardised in ISO/IEC 23090-3. - -- A new h266parse element was added, along with parsing API, typefinding support and some codec utility functions in the - gst-plugins-base utility library. +### AMD HIP plugin and integration library -- An H.266 decoder base class for hardware-accelerated decoders was added and used to implement a VA-API-based - hardware-accelerated H.266 decoder. 
+- HIP (formerly known as Heterogeneous-computing Interface for Portability) is AMD’s GPU programming API that enables + portable, CUDA-like development across both AMD and NVIDIA platforms: -- The FFmpeg H.266 decoder is exposed now (from FFmpeg 7.0 onwards). + - On AMD GPUs, HIP runs natively via the ROCm stack. + - On NVIDIA GPUs, HIP operates as a thin translation layer over the CUDA runtime and driver APIs. -- H.266 / VVC muxing and demuxing support was implemented for MP4, Matroska and MPEG-TS containers. + This allows developers to maintain a single codebase that can target multiple GPU vendors with minimal effort. -- A VVdeC-based H.266 decoder element was added to the Rust plugins, based on the Fraunhofer Versatile Video Decoder library. +- The new HIP plugin provides the following elements: -### JPEG XS image codec support + - hipcompositor: a HIP-based video mixer/compositor + - hipconvert: Converts video from one colorspace to another using HIP + - hipconvertscale: Resizes video and allow color conversion using HIP + - hipscale: Resize video using HIP + - hipdownload: Downloads HIP device memory into system memory + - hipupload: Uploads system memory into HIP device memory -- JPEG XS is a visually lossless, low-latency, intra-only video codec for video production workflows, standardised in ISO/IEC - 21122. +- The GStreamer HIP integration helper library provides HIP integration functionality to applications and other HIP users. -- JPEG XS encoder and decoder elements based on the SVT JPEG XS library were added, including support for muxing JPEG XS into - MPEG-TS and demuxing it. Both interlaced and progressive modes are supported. +- Watch the Bringing AMD HIP into GStreamer talk from last year’s GStreamer Conference for more details or read Seungha’s + devlog post on the subject. 
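To illustrate the new HIP elements listed above, here is a minimal, untested pipeline sketch; only the element names (hipupload, hipconvertscale, hipdownload) come from the release notes, the rest is generic GStreamer and purely illustrative:

```shell
# Untested sketch: upload frames to HIP device memory, convert/scale on the
# GPU with hipconvertscale, then download back to system memory for display.
gst-launch-1.0 videotestsrc num-buffers=300 \
  ! hipupload \
  ! hipconvertscale \
  ! hipdownload \
  ! videoconvert ! autovideosink
```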
-### Low Complexity Enhancement Video Coding (LCEVC) support +### Low Complexity Enhancement Video Coding (LCEVC) support for H.265 and H.266 - LCEVC is a codec that provides an enhancement layer on top of another codec such as H.264 for example. It is standardised as MPEG-5 Part 2. -- LCEVC encoder and decoder elements based on V-Nova’s SDK libraries were added, including support in h264parse for extracting - the enhancement layer from H.264 and decoding it via a lcevch264decodebin element. +- LCEVC H.265 and H.266 encoder and decoder elements based on V-Nova’s SDK libraries were added in this cycle -### Closed captions improvements +- Autoplugging support for LCEVC H265 and H266 video streams, so these can be decoded automatically in a decodebin3 or + playbin3 scenario. -- New H.264 and H.265 closed captions extractor and inserter elements. +### Closed captions and text handling improvements - - These extractor elements don’t actually extract captions from the bitstream, but rely on parser elements to do that and - add them to buffers in form of caption metas. The problem is that streams might contain B-frames, in which case the - captions in the bitstream will not be in presentation order and extracting them requires frame-reordering in the same - way that a decoder would do. These new elements will do exactly that and allow you to extract captions in presentation - order without having to decode the stream. +- cea708overlay: support non-relative positioning for streams with CCs that do not have relative positions. Instead of + displaying them at the top, they are positioned relatively. - - The inserter elements do something similar and insert caption SEIs into the H.264 or H.265 bitstream, taking into - account frame ordering. +- cea708mux: expose “discarded-services” property on sink pads. This can be useful when muxing in an original caption stream + with a newly-created one (e.g. 
transcription / translation), in which case one might wish to discard select services from + the original stream in order to avoid garbled captions. - - This is useful if one wants to extract, process and re-insert captions into an existing video bitstream without decoding - and re-encoding it (in which case the decoder and encoder would handle the caption reordering). +- sccparse: Better handling of streams with more byte tuples in the SCC field. -- cdpserviceinject: New element for injecting a CDP service descriptor into closed caption CDP packets +- tttocea608: expose “speaker-prefix” property -- cea708overlay: New element for overlaying CEA608 / CEA708 closed captions over video streams. - -- The cc708overlay element has been deprecated in favour of the cea708overlay element from the Rust plugins set. - -- cea608mux gained a "force-live" property to make it always in live mode and aggregate on timeout regardless of whether any - live sources are linked upstream. +- Miscellaneous improvements and spec compliance fixes -- cea708mux: New element that allows to mux multiple CEA708 services into a single stream. +- Also see SMPTE ST-2038 metadata section below. -- cccombiner has two new properties: +### Speech to Text, Translation and Speech Synthesis - - "input-meta-processing" controls how input closed caption metas are processed and can be used to e.g. discard closed - captions from the input pad if the matching video buffer already has closed caption on it. +- New audio source separation element based on demucs in Rust. This is useful to separate speech from background audio before + running speech to text transcription, but could also be used to separate vocals from music for karaoke. - - "schedule-timeout" to support timing out captions without EOS +- New Deepgram speech-to-text transcription plugin in Rust. 
-- tttocea708: New element for converting timed-text to CEA708 closed captions +- The Speechmatics transcriber has seen a major refactoring for better timings, gap and discontinuity handling and has gained + support for the new Speechmatics speaker identification API as well as a new property to mask profanities. -- Miscellaneous improvements and spec compliance fixes +- New ElevenLabs voice cloning element. The new element can operate in two modes: -### Speech to Text, Translation and Speech Synthesis + - In single speaker mode, the element will directly clone a single voice from its input, without storing any samples. + - Otherwise, the element will store a backlog of samples, and wait to receive certain events from a transcriber on its + source pad before draining them to create potentially multiple voices. -- awstranscriber2, awstranslate: New elements around the AWS transcription and translation services. +- New “compress” overflow mode for speech synthesizers that can speed up the audio while preserving pitch. This may be needed + to keep or regain audio/video synchronisation if translated speech output has been consistently longer in duration than the + original and there hasn’t been a sufficient amount of silence that could be filled in to make up the difference. -- polly: New element around the AWS text-to-speech polly services +- awstranslate: new “brevity-on” property for turning brevity on. -- speechmatics: New transcriber / speech-to-text and translation element +- The awstranscriber2 has been refactored to match the speechmatics transcriber design and gained a “show-speaker-label” + property that defines whether to partition speakers in the transcription output. -- translationbin: Helper bin around translation elements, similar to the already existing transcriberbin for transcriptions. 
+- New textaccumulate element for speech synthesis or translation preprocessing that can be used to accumulate words and + punctuation into complete sentences (or sentence fragments) for synthesis and / or translation by further elements + downstream. ### HLS DASH adaptive streaming improvements -- The adaptivedemux2 client implementation gained support for file:// URIs and as such the ability to play HLS and DASH from - local files. It also no longer sends spurious flush events when it loses sync in live streams, as that is unexpected and - will be handled poorly in non-playback scenarios. Lastly, support for HTTP request retries was added via the "max-retries" - property, along with some exponential backoff logic which can be fine-tuned via properties. - -- dashsink has received period duration fixes for dynamic MPDs and some memory leak fixes. - -- hlscmafsink, hlssink3: New single-variant HLS sink elements that can output CMAF (fMP4) or MPEG-TS fragments. +- Reverse playback, seeking and stream selection fixes in the HLS/DASH clients. -- hlsmultivariantsink: New sink element that can output an HLS stream with multiple variants +- hlscmafsink can generate I-frame only playlists now -### splitmuxsrc, splitmuxsink: lazy loading and dynamic fragment addition +- Both hlssink3 and hlscmafsink gained support for use of a single media file, in which case the media playlist will use byte + range tags for each chunk whilst always referencing the same single media file. This can be useful for VOD use cases. -- splitmuxsrc and splitmuxsink were originally designed to handle a small number of large file fragments, e.g. for situations - where one doesn’t want to exceed a certain file size when recording to legacy file systems. It was also designed for playing - back a static set of file fragments that have been created by an earlier recording session and no longer changes. 
Over time - people have found more applications and use cases for the splitmux elements and have been deploying them in different - scenarios, exposing the limits of the current implementation. +### decodebin3 and playbin3 improvements -- In this release, splitmuxsink and splitmuxsrc gained new abilities aimed at improving support for recordings with a large - number of files, and for adding fragments on the fly to allow playback of ongoing recordings: +- decodebin3 now has a separate pad template for metadata streams and considers KLV, ID3 PES streams and ST-2038 ancillary + streams as raw formats for meta streams. This comes also with a new dedicated GST_STREAM_TYPE_METADATA stream type in the + stream collection. - - You can now add fragments directly to splitmuxsrc and provide the offset and duration in the stream: +### Enhanced RTMP and multitrack audio/video support in FLV - - Providing offset and duration means splitmuxsrc doesn’t need to scan the file to measure it and calculate it. That - makes for much faster startup. +- The FLV container used for RTMP streaming is fairly old and limited in terms of features: It only supports one audio and one + video track, and also only a very limited number of audio and video codecs, most of which are by now quite long in the + tooth. - - The new "add-fragment" signal can be used to add files to the set dynamically - allowing to be playing an ongoing - recording and adding files to the playback set as they are finished. +- The Enhanced RTMP (V2) specification seeks to remedy this and adds support for modern video codecs such as H.265 and AV1 as + well as support for more than one audio and video track inside the container. - - splitmuxsrc no longer keeps all files open, but instead only keeps 100 files open by default, configurable with the - "num-open-fragments" property. +- Both H.265 video and multiple audio/video tracks are now supported for FLV in GStreamer. 
- - There is a new "num-lookahead" property on splitmuxsrc to trigger (re)opening files a certain distance ahead of the play - position. - - - splitmuxsink will report fragment offset and fragment duration via a message on the bus when closing a file. This - information can then be used to add the new fragment to a splitmuxsrc. +- Support for this comes in form of a new eflvmux muxer element, which is needed to accommodate both the need of backwards + compatibility in the existing FLV muxer and the requirements of the new format. See Tarun’s blog post for more details. ### MPEG-TS container format improvements -- The MPEG-TS muxer and demuxer gained support for - - - H.266 / VVC video muxing and demuxing - - JPEG-XS video muxing and demuxing - - VP9 video muxing and demuxing (using a custom mapping) - - AV1 video muxing and demuxing (using a custom mapping, since the work-in-progress specification effort doesn’t seem to - be going anywhere anytime soon) - - SMPTE ST-2038 ancillary metadata streams (see section above) - -- mpegtsmux gained support for muxing ID3 metadata into the TS stream, as well as SMPTE 302M audio. +- The MPEG-TS demuxer gained a “skew-corrections” property that allows disabling of skew corrections, which are done by + default for live inputs to make sure downstream consumes data at the same rate as it comes in if the local clock and the + sender clock drift apart (as they usually do). Disabling skew corrections is useful if the input stream has already been + clock-corrected (for example with mpegtslivesrc) or where the output doesn’t require synchronisation against a clock, + e.g. when it’s re-encoded and/or remuxed and written to file (incl. HLS/DASH output) where it’s desirable to maintain the + original timestamps and frame spacings. + + It is also useful for cases where we want to refer to the PCR stream to figure out global positioning, gap detection and + wrapover correction. 
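As a minimal, untested sketch of the new "skew-corrections" property described above (only the property and element names come from the notes; the surrounding pipeline is illustrative):

```shell
# Untested sketch: remux an already clock-corrected live MPEG-TS stream to
# file with skew corrections disabled, preserving the original timestamps
# and frame spacing.
gst-launch-1.0 udpsrc port=5004 caps="video/mpegts" \
  ! tsdemux skew-corrections=false \
  ! h264parse ! mpegtsmux ! filesink location=recording.ts
```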
+ +- tsdemux now also supports demuxing of ID3 tags in MPEG-TS as specified in the Apple Timed Metadata for HTTP Live Streaming + specification. These timed ID3 tags have a media type of meta/x-id3 which is different from the one used to tag audio files, + and an id3metaparse element is needed to properly frame the PES data coming out of the demuxer. -- It’s also possible to disable sending SCTE-35 null (heartbeat) packets now in mpegtsmux by setting the - "scte-35-null-interval" to 0. - -- tsparse now handles 192-byte M2TS packets - -- mpegtslivesrc: New source element that can wrap a live MPEG-TS source (e.g. SRT or UDP source) and provides a clock based on - the PCR. +- The MPEG-TS muxer now also reads prog-mapPMT_ORDER_<PID> for PMT order key in addition to prog-mapPMT_%d, which fixes a + wart in the API and provides an unambiguous way to specify ordering keys. ### Matroska container format improvements -- H.266 / VVC video muxing and demuxing support - -- matroskamux - - - was ported to the GstAggregator base class, ensuring defined-latency muxing in live streaming pipelines. - - gained rotation tag support - -- matroskademux now also supports seeks with a stop position in push mode. +- matroskademux now supports relative position cues in the seek table and also had its maximum block size restrictions updated + so that it can support uncompressed video frames also in 4k UHD resolution and higher bit depths. ### ISO MP4 container format improvements -- can mux and demux H.266 / VVC in MP4 now - -- can demux Hap video now, as well as Lagarith lossless video and ISO/IEC 23003-5 raw PCM audio. +- mp4mux now supports E-AC3 muxing -- qtdemux handles keyunit-only trick mode also in push mode now +- qtdemux, the MP4 demuxer, has seen countless fixes for various advanced use cases (with lots more in the pipeline for + 1.28.1). -- support for ISO/IEC 23001-17 raw video in MP4 in qtdemux and isomp4mux. 
+- The isomp4mux from the Rust plugins set now supports caps changes and has also gained support for raw audio as per ISO/IEC + 23003-5. Plus improved brand selection. -- support for rotation tags in the muxers and demuxers was improved to correctly handle per-media and per-track rotations, and - support for flips was added as well. +- The isomp4mux, isofmp4mux and related elements were merged into a single isobmff plugin, which allows sharing more code. As + part of this, codec support was consolidated between the two. -SMPTE 2038 ancillary data streams +### MXF container format improvements -- SMPTE 2038 (pdf) is a generic system to put VBI-style ancillary data into an MPEG-TS container. This could include all kinds - of metadata such as scoreboard data or game clocks, and of course also closed captions, in this case in form of a distinct - stream completely separate from any video bitstream. +- The MXF muxer and demuxer gained support for non-closed-caption VANC ancillary metadata: -- A number of new elements in the GStreamer Rust closedcaption plugin add support for this, along with mappings for it in the - MPEG-TS muxer and demuxer. The new elements are: + - Extends mxfdemux with support for outputting VANC (ST436M) essence tracks as ST2038 streams instead of extracting closed + captions internally. - - st2038ancdemux: splits SMPTE ST-2038 ancillary metadata (as received from tsdemux) into separate streams per DID/SDID - and line/horizontal_offset. Will add a sometimes pad with details for each ancillary stream. Also has an always source - pad that just outputs all ancillary streams for easy forwarding or remuxing, in case none of the ancillary streams need - to be modified or dropped. + - Extends mxfmux with support for consuming ST2038 streams for outputting VANC (ST436M) essence tracks instead of only + supporting closed captions. - - st2038ancmux: muxes SMPTE ST-2038 ancillary metadata streams into a single stream for muxing into MPEG-TS with - mpegtsmux. 
Combines ancillary data on the same line if needed, as is required for MPEG-TS muxing. Can accept individual - ancillary metadata streams as inputs and/or the combined stream from st2038ancdemux. +To support ST2038 instead of the earlier closed captions, we introduce a breaking change to the caps handling on the pad. This +was deemed the cleanest way and should hopefully not cause too much breakage in the real world, as it is likely not something +that was used much in practice in this form. The st2038anctocc element can be used to convert a ST2038 stream to plain closed +captions. - If the video framerate is known, it can be signalled to the ancillary data muxer via the output caps by adding a - capsfilter behind it, with e.g. meta/x-st-2038,framerate=30/1. +We also now support both 8 and 10-bit VANC data when reading from MXF. - This allows the muxer to bundle all packets belonging to the same frame (with the same timestamp), but that is not - required. In case there are multiple streams with the same DID/SDID that have an ST-2038 packet for the same frame, it - will prioritise the one from more recently created request pads over those from earlier created request pads (which - might contain a combined stream for example if that’s fed first). +### MPEG-H audio support - - st2038anctocc: extracts closed captions (CEA-608 and/or CEA-708) from SMPTE ST-2038 ancillary metadata streams and - outputs them on the respective sometimes source pad (src_cea608 or src_cea708). The data is output as a closed caption - stream with caps closedcaption/x-cea-608,format=s334-1a or closedcaption/x-cea-708,format=cdp for further processing by - other GStreamer closed caption processing elements. 
+- New MPEG-H audio decoding plugin based on the Fraunhofer MPEG-H decoder implementation plus MP4 demuxing support - - cctost2038anc: takes closed captions (CEA-608 and/or CEA-708) as produced by other GStreamer closed caption processing - elements and converts them into SMPTE ST-2038 ancillary data that can be fed to st2038ancmux and then to mpegtsmux for - splicing/muxing into an MPEG-TS container. The line-number and horizontal-offset properties should be set to the desired - line number and horizontal offset. - -### Analytics +SMPTE 2038 ancillary data stream handling improvements -- Added a GstTensorMeta: This meta is designed to carry tensors from the inference element to the model-specific tensor - decoder. This also includes a basic GstTensor class containing a single tensor. The actual tensor data is a GstBuffer. +- New ST-2038 ancillary data combiner and extractor elements in the rsclosedcaption Rust plugin that extract ST-2038 metadata + streams from GstAncillaryMetas on video frames or convert ST-2038 metadata streams to GstAncillaryMeta and combine them with + a given video stream. -- Add N_TO_N relationship to GstAnalyticsRelationMeta: This makes it possible to describe N-to-N relationships. For example, - between classes and regions in an instance segmentation. +- The MXF demuxer and muxer gained support for muxing and demuxing generic ancillary metadata in ST-2038 format (see below). -- Add a new analytics Mtd to carry segmentation masks: Being part of the GstAnalyticsMeta, it can be in relationship with the - other Mtd, such as the classification and object detection bounding boxes. +- decodebin3 now treats ST-2038 metadata streams as a “raw metadata format” and exposes those streams as + GST_STREAM_TYPE_METADATA. 
-- onvifmetadataextractor: New element that can extract ONVIF metadata from GstMetas into a separate stream - -- originalbuffer: New plugin with originalbuffersave / originalbufferrestore elements that allow saving an original buffer, - modifying it for analytics, and then restoring the original buffer content while keeping any additional metadata that was - added. +### Analytics -- relationmeta: New plugin with elements converting between GstRelationMeta and ONVIF XML metadata. +This release introduces a major improvement in how analytics pipelines are built, moving away from manual configuration toward a +fully negotiated analytics pipeline. -- Improved Python bindings for a more Pythonic interface when iterating over GstRelationMeta’s mtd +- Robust Tensor Negotiation & Smart Selection: All inference and tensor decoder elements adopt the tensor capability + negotiation mechanism. This provides informative error handling by validating the pipeline during the setup phase and + providing descriptive error messages for configuration mismatches before processing begins. Complementing this, the new + tensordecodebin acts as an intelligent proxy that abstracts decoder selection by auto-plugging the correct tensor decoder. + This simplifies the use of existing tensor decoders and allows new tensor decoders to be utilized instantly without + requiring changes to pipeline definitions. -### Vulkan integration enhancements +- Simplified Model Integration with modelinfo: The modelinfo library, configuration files, and the modelinfo-generator.py + script work together to make using any ML model inside a GStreamer pipeline very simple. The new utility script helps you + quickly generate or upgrade metadata files for existing models. Combined with tensor negotiation and tensordecodebin, these + tools facilitate the seamless utilization of new models within the analytics chain. 
-- Vulkan Integration Improvements: +- analyticsoverlay: New “expire-overlay” property added to objectdetectionoverlay and can also show tracking-id; New + ‘segmentationoverlay’ to visualize segmented regions. - - Memory Management: Non-coherent memory is now invalidated when mapping for read in vkmemory. +- Add LiteRT inference element - - Color Space Selection: The vkswapper component now chooses the correct color space based on the format. +- Analytics: add general classifier tensor-decoder, facedetector, YOLOv8 (detection), YOLOv8segmentation tensor decoders and + more convenience API. - - Vulkan Plugin Compatibility: Support added for cases where glslc is not available for building Vulkan plugins, along - with GLSL compiler support for glslang. +- onnx: Add Verisilicon provider support - - Fullscreen Quad Updates: Improved support for setting NULL input/output buffers and added checks for unset video info. +- New IoU based tracker - - Vulkan Buffer Pool Enhancements: Buffer pool access flags and usage configurations have been refined, offering better - performance for video decoding and encoding. +- Add GstAnalyticsBatchMeta representing a batch of buffers from one or more streams together with the relevant events to be + able to interpret the buffers and to be able to reconstruct the original streams. -- Decoder/Encoder Improvements: +- New analyticscombiner and analyticssplitter elements in the Rust plugin set which batch buffers from one or more streams + into a single stream via the new GstAnalyticsBatchMeta and allow splitting that single stream into the individual ones again + later. - - H264 Decoder: Enhancements to the vkh264dec component for better support of extended profiles and interlaced content - decoding. +- Add a burn-based YOLOX inference element and a YOLOX tensor decoder in Rust. - - H265 Decoder fixes: vkh265dec updated for proper handling of VPS/SPS on demand, along with fixes to PicOrderCntVal. 
+### Vulkan integration enhancements - - Encoder Performance: Various internal optimizations to the Vulkan encoder, including removal of redundant references and - better management of the DPB view. +- The Vulkan Video encoders and decoders now dynamically generate their pad template caps at runtime instead of hardcoding + them, so they more accurately reflect the actual capabilities of the hardware and drivers. -- Vulkan Instance and Device Management: +- New Vulkan AV1 and VP9 video decoding support - - Device Handling: Added new utility functions for managing Vulkan device instances, including - gst_vulkan_instance_create_device_with_index and gst_vulkan_ensure_element_device. +- New Vulkan H.264 encoding support - - Device Context Management: Updates to manage Vulkan context handling more effectively within the application. +- The Vulkan H.265 decoder now also supports 10-bit depth ### OpenGL integration enhancements -- glcolorconvert gained support for more formats and conversions: - - - Planar YUV <-> planar YUV conversions - - Converting to and from v210 in general - - v210 <-> planar YUV - - UYVY and YUY2 <-> planar YUV - - v210 <-> UYVY and YUY2 - - Support for Y444_10, Y444_16, I422_10, I422_12 pixel formats (both little endian and big endian variants) +- Implement keyboard, mouse, and scroll wheel navigation event handling for the OpenGL Cocoa backend. -- gldownload can import DMABufs from a downstream pool +- Added support for the NV24 and Y444_12 pixel formats. The latter is used by certain HEVC decoders for 12-bit non-subsampled + profiles. -- glupload gained a DRM raw uploader +### udmabuf allocator with glupload support -### Qt5 + Qt6 QML integration improvements +- Implement a udmabuf-based memory allocator for user-space mappable dmabufs. -- qmlglsink, qml6glsink now support external-oes textures, which allows direct DMABuf import from hardware decoders. Both also - support NV12 as an input format now. 
- -- qmlglsink gained support for RGB16/BGR16 as input format - -- qmlgl6src can now use a downstream buffer pool when available - -- qmlgloverlay make the depth/stencil buffer optional, which reduces memory bandwidth on Windows. - -### CUDA / NVCODEC integration and feature additions +- glupload: add udmabuf uploader to share buffers between software decoders/sources and GPUs, display engines (wayland), and + other dma devices. This can help reduce memory copies and can massively improve performance in video players like Showtime + or Totem for software-decoded video such as AV1 with dav1ddec. -- Added AV1 video encoder nvav1enc +- gtk4paintablesink: Similar to glupload, this now proposes the udmabuf memory allocator to upstream which can reduce memory + copies and improve performance with certain software decoders. -- CUDA mode nvcuda{CODEC}enc encode elements are renamed to nv{CODEC}enc and old nv{CODEC}enc implementations are removed +### Wayland integration -- Added support for CUDA Stream-Ordered allocator +- Added basic colorimetry support -- Added cudacompositor element which is equivalent to the software compositor element but uses CUDA +- waylandsink: -- Added support for CUDA kernel precompile at plugin build time using nvcc and NVCODEC plugin can cache/reuse compiled CUDA - CUBIN/PTX + - Parse and set the HDR10 metadata and other color management improvements -- cudaupload and cudadownload elements can support Jetson platform’s NVMM memory in addition to already supported desktop NVMM - memory + - udmabuf support (see above) -- Introduced nvdswrapper plugin which uses NVIDIA DeepStream SDK APIs with gst-cuda in an integrated way: + - video crop meta support - - nvdsdewarp: NVIDIA NVWarp360 API based dewarping element + - New “fullscreen-output” and “force-aspect-ratio” properties -### GStreamer Direct3D12 integration - -- New gst-d3d12 public library. 
The following elements are integrated with the gst-d3d12 library:
-  - NVIDIA NVCODEC decoders and encoders can support D3D12 memory
-  - Intel QSV encoders can accept D3D12 memory
-  - All elements in dwrite plugin can support D3D12 memory
-
-- The d3d12 library and plugin can be built with MinGW toolchain now (in addition to MSVC)
+### Qt5 + Qt6 QML integration improvements
-- D3D12 video decoders and d3d12videosink are promoted to have higher rank than D3D11 elements
+- New Qt6 QML (qml6) render source element
-- Added support for multiple mip-levels D3D12 textures:
+- qml6gloverlay: support directly passing a QQuickItem for the QML render tree
-  - Newly added d3d12mipmapping element can generate D3D12 textures with multiple mip-levels
+### GTK4 integration improvements
-  - max-mip-levels property is added to d3d12convert, d3d12videosink, and d3d12swapchainsink element, so that the elements can generate an intermediate texture with multiple mip-levels in order to reduce downscale aliasing artifacts
+- gtk4paintablesink: Added YCbCr memory texture formats and improved color-state fallbacks. The sink will also propose a udmabuf buffer pool and allocator now if upstream asks for sysmem, which would allow direct imports of the memory by GL/Vulkan or the compositor. Plus many other improvements which have also been backported into the 0.14 branch.
-- d3d12convert, d3d12videosink, and d3d12swapchainsink support the GstColorBalanceInterface to offer color balancing functions such as hue, saturation, and brightness adjustment
+### CUDA / NVCODEC integration and feature additions
-- Added d3d12ipcsink and d3d12ipcsrc elements for zero-copy GPU memory sharing between processes
+- cudacompositor, cudaconvert and its variants gained crop meta support
-- d3d12upload and d3d12download support direct GPU memory copy between D3D12 and D3D12 resources
+- nvencoder: interlaced video handling improvements and a new “emit-frame-stats” property which, if enabled, makes the encoder emit the “frame-stats” signal for each encoded frame, allowing applications to monitor things like the average QP per frame.
-- Added d3d12swapchainsink element to support DirectComposition or UWP/WinUI3 SwapChainPanel based applications
+- nvjpegenc: Add an autogpu mode element (nvautogpunvenc) similar to nvautogpu{h264,h265,av1}enc.
-- Added d3d12deinterlace element which performs deinterlacing using a GPU vendor agnostic compute shader.
+- nvh264enc, nvh265enc gained a new “num-slices” property which is conditionally available based on device support for dynamic slice mode
-- d3d12screencapturesrc element can capture HDR enabled desktop correctly in DDA mode (DXGI Desktop Duplication API)
+- nvdsdewarp: performance improvements and support for output resizing, along with a new “add-borders” property.
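The new per-frame stats signal lets applications aggregate encoder statistics themselves. A sketch of the application-side bookkeeping — the callback wiring and the stats field name (`average-qp`) are hypothetical here, only the aggregation pattern is the point:

```python
class FrameStatsMonitor:
    """Illustrative aggregation of per-frame encoder stats, as an application
    might do from a 'frame-stats'-style signal (field name hypothetical)."""

    def __init__(self):
        self.frames = 0
        self.qp_sum = 0.0

    def on_frame_stats(self, stats: dict):
        # Called once per encoded frame with that frame's statistics.
        self.frames += 1
        self.qp_sum += stats["average-qp"]

    @property
    def mean_qp(self):
        return self.qp_sum / self.frames if self.frames else None

mon = FrameStatsMonitor()
for qp in (24.0, 30.0, 27.0):
    mon.on_frame_stats({"average-qp": qp})
print(mon.mean_qp)  # 27.0
```

A rising mean QP at a fixed bitrate is a cheap signal that content complexity is outpacing the configured bitrate.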
### Capture and playout cards support

-- ajasrc: Improve clock handling, frame counting, capture timestamping, and signal loss recovery
-
-- The Blackmagic Decklink plugin gained support
-
-  - for HDR output and input (PQ + HLG static metadata)
-
-  - all modes of Quad HDMI recorder
-
-  - scheduling frames before they need to be displayed in decklinkvideosink
+- Blackmagic Decklink elements gained support for capturing and outputting all types of VANC via GstAncillaryMeta

### RTP and RTSP stack improvements

-- rtspsrc now supports client-managed MIKEY key management. Some RTSP servers (e.g. Axis cameras) expect the client to propose the encryption key(s) to be used for SRTP / SRTCP. This is required to allow re-keying. This mode can be enabled by enabling the "client-managed-mikey-mode" property and comes with a number of new signals ("request-rtp-key" and "request-rtcp-key"), action signals ("set-mikey-parameter" and "remove-key") and properties ("hard-limit" and "soft-limit").
-
-- rtpbin: Add new “never” and “ntp” RTCP sync modes
+- rtspsrc now sends RTSP keepalives also in TCP/interleaved modes. This fixes problems with some cameras that don’t see the RTCP traffic as sufficient proof of liveness when using TCP/HTTP tunnelled modes.
-  - Never is useful for some RTSP servers that report plain garbage both via RTCP SR and RTP-Info, for example.
-  - NTP is useful if synchronization should only ever happen based on RTCP SR or NTP-64 RTP header extensions.
+- New Rust RTP mparobust depayloader for “robust mp3” audio, a more loss-tolerant RTP payload format for MP3 audio (RFC 5219).
- This is part of a bigger refactoring of the synchronization / offsetting code in rtpbin, which also makes it regularly emit the sync signal even if no new synchronisation information is available, controlled by the new "min-sync-interval" property.
+- New Rust RTP L8/L16/L24 raw audio payloader and depayloader, which offer more correct timestamp handling than the old payloader and depayloader and implement multichannel support more correctly.
-- rtpjitterbuffer: add RFC7273 active status to jitterbuffer stats so applications can easily check whether RFC7273 sync is active.
+- New Rust RTP SMPTE ST291 ancillary data payloader and depayloader for sending or receiving ancillary data over RTP. This is also the payload format used by ST2110-40.
-- rtph265depay: Add "wait-for-keyframe" "request-keyframe" properties and improve request keyframe logic
+- Various performance improvements and fixes for rtprecv / rtpsend (“rtpbin2”).
-- rtppassthroughpay gained the ability to regenerate RTP timestamps from buffer timestamps via the new "retimestamp-mode" property. This is useful in a relay RTSP server if one wants to do full drift compensation and ensure that the stream coming out of gst-rtsp-server is not drifting compared to the pipeline clock and also not compared to the RTCP NTP times.
-
-- New Rust RTP payloaders and depayloaders for AC3, AMR, JPEG, KLV, MPEG-TS (MP2T), MPEG-4 (MP4A, MP4G), Opus, PCMU (uLaw), PCMA (aLaw), VP8, VP9.
-
-- New rtpbin2 based on separate rtprecv and rtpsend elements
+- Support for “multi-time aggregation packets” (MTAP) in the H264 RTP depayloader rtph264depay.

### WebRTC improvements

-- webrtcbin improvements
-
-  - Make basic rollbacks work
+- webrtcbin and GstWebRTC library improvements:
-  - Add "reuse-source-pads" property: When set to FALSE, if a transceiver becomes send-only or inactive then pre-existing source pads will receive an EOS event and no further traffic even after further renegotiation. When TRUE, pads will simply not receive any output when the negotiated transceiver state doesn’t have incoming traffic. If renegotiated later, the pad will receive data again.
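"More correct timestamp handling" for raw audio comes down to one rule from RFC 3551: for L8/L16/L24 the RTP clock rate equals the sample rate, so the timestamp advances by the number of sampling instants per packet (independent of channel count), modulo 2^32. A small sketch of that bookkeeping, not the plugin's code:

```python
def rtp_timestamps(base_ts, samples_per_packet, num_packets):
    """RTP timestamps for raw audio: the clock ticks once per sampling
    instant (clock rate == sample rate for L8/L16/L24), wrapping at 2^32."""
    ts, out = base_ts, []
    for _ in range(num_packets):
        out.append(ts)
        ts = (ts + samples_per_packet) & 0xFFFFFFFF
    return out

# 20 ms packets at 48 kHz: 960 sampling instants each, for any channel count
print(rtp_timestamps(1000, 960, 3))  # [1000, 1960, 2920]
```

Deriving timestamps from the running sample count rather than from wall-clock arrival times is what keeps long captures free of cumulative drift.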
+ - Add support for getting the selected ICE candidate pairs - - Early CNAME support (RFC5576): Have CNAME available to the jitterbuffer before the the first RTCP SR is received, for - rapid synchronization. + - Improve spec compliance for ICE candidate stats by filling the foundation, related-address, related-port, + username-fragment and tcp-type fields of stats. - - New "post-rtp-aux-sender" signal to allow for placement of an object after rtpbin, before sendbin. This is useful for - objects such as congestion control elements, that don’t want to be burdened by the synchronization requirements of - rtpsession. + - improve compatibility with LiveKit - - Create and associate transceivers earlier in negotiation, and other spec compliance improvements - - - Statistics generation improvements for bundled streams +- webrtcsink and webrtcsrc enhancements: -- webrtcsink improvements: + - webrtcsink gained renegotiation support, and support for va hardware encoders - - Support for more encoders: nvv4l2av1enc, vpuenc_h264 (for imx8mp), nvav1enc, av1enc, rav1enc and nvh265enc. +- Added a WHEP client signaller and server signaller to the Rust webrtc plugin, including support for server side offers for + the WHEP client. - - The new "define-encoder-bitrates" signal allows applications to fine-tune the bitrate allocation for individual streams - in cases where there are multiple encoders. By default the bitrate is split equally between encoders. +- webrtc-api: Set default bundle policy to max-bundle. - - A generic mechanism was implemented to forward metas over the control channel. +- The dtls plugin now uses a ECDSA private key for the default certificate. ECDSA is widely used in browsers and SFUs, and + some servers such as the ones using BouncyCastle only accept certificates signed with ECDSA. - - Added a mechanism for SDP munging to handle server-specific quirks. 
+### New GStreamer C++ bindings
-  - Can expose a built-in web server and signalling server for prototyping and debugging purposes.
+The old GStreamer C++ bindings (gstreamermm and qt-gstreamer) have been unmaintained for a long time, leaving C++ developers only with the option to use the GStreamer C API.
-- webrtcsink and webrtcsrc enhancements:
+In recent years, a new approach for C++ bindings was developed by the GNOME community: peel. While initially developed for GTK, it is now also usable for GStreamer thanks to various GObject Introspection and API fixes included in GStreamer 1.28.
-  - Support for raw payloads, i.e. uncompressed audio and video
+Compared to gstreamermm this offers a much lower-overhead, headers-only C++ binding that just depends on the C libraries and not even the C++ STL, and provides a modern C++ API on top of the GStreamer C API. Compared to qt-gstreamer there is no dependency on Qt.
-  - NTP & PTP clock signalling and synchronization support (RFC 7273)
+It’s still in active development and various MRs for improving the GStreamer development experience are not merged yet, but it’s already usable and a great improvement over using the plain C API from C++.
-  - Generic data channel control mechanism for sending upstream events back to the sender (webrtcsink)
-
-- webrtcsrc now has support for multiple producers
+Various GStreamer examples can be found in Sebastian’s GStreamer peel examples repository.

## New elements and plugins

- Many exciting new Rust elements, see Rust section below.
-- webview2src: new Microsoft WebView2 based web browser source element - -- h264ccextractor, h264ccinserter: H.264 closed caption extractor / inserter - -- h265ccextractor, h265ccinserter: H.265 closed caption extractor / inserter - -- h266parse - -- lcevch264decodebin +- New D3D12 interlace, overlay compositor, fish eye dewarp and uv coordinate remapping elements -- New VA elements (see below): vah266dec, vavp8enc, vajpegenc, vavp8alphadecodebin, vavp9alphadecodebin +- VMAF: New element to calculate perceptual video quality assessment scores using Netflix’s VMAF framework -- svtjpegxsdec, svtjpegxsenc: SVT JPEG XS decoder/encoder +- Webkit: New wpe2 plugin that makes use of the “WPE Platform API” with support for rendering into GL and SHM buffers and + navigation events (but not audio yet). -- Many other new elements mentioned in other sections (e.g. CUDA, NVCODEC, etc.) +- Many other new elements mentioned in other sections (e.g. CUDA, NVCODEC, D3D12, Speech, AMD HIP, Rust etc.) ## New element features and additions -- audioconvert enhancements: +- The AWS S3 sink and source elements now support S3 compatible URI schemes. - - Add possibility to reorder input channels when audioconvert has unpositionned audio channels as input. It can now use - reordering configurations to automatically position those channels via the new "input-channels-reorder" and - "input-channels-reorder-mode" properties. +- clocksync: new “rate” property and “resync” action signal so that clocksync can synchronise buffer running time against the + pipeline clock with a specified rate factor. This can be useful if one wants to throttle pipeline throughput such as e.g. in + a non-realtime transcoding pipeline where the pipeline’s CPU and/or hardware resource consumption needs to be limited. - - Better handling of setting of the mix matrix at run time +- fallbacksrc is able to support encoded outputs now, not just uncompressed audio/video. 
As part of this it supports stream + selection via the GstStream API now. - - handles new GstRequestMixMatrix custom upstream event +- h265parse now automatically inserts AUDs where needed if it outputs byte-stream format, which helps fix decoding artefacts + for multi-slice HEVC streams with some hardware decoders. -- audiorate: Take the tolerance into account when filling gaps; better accounting of the number of samples added or dropped. +- input-selector now implements a two-phase sinkpad switch to avoid races when switching input pads. Extensive tests have been + added to avoid regressions. -- av1enc: Add "timebase" property to allow configuring a specific time base, in line with what exists for vp8enc and vp9enc - already. +- The inter plugin wormhole sink and source elements for sending data between pipelines within the same application process + gained new properties to fine tune the inner elements. intersrc can now also be configured to forward upstream events to the + producer pipeline via the new “event-types” property. -- av1parse can parse annexb streams now, and typefinding support has been added for annexb streams as well. +- The quinn plugin supports sharing of the QUIC/WebTransport connection/session with an element upstream or downstream. This + is required for supporting Media over QUIC (MoQ) later, for which an MR is already pending. -- The GTK3 plugin has gained support for OpenGL/WGL on Windows +- replaygain will use EBU-R128 gain tags now if available. -- fdsrc has a new "is-live" property to make it act like a live source and timestamp the received data with the clock running - time. +- threadshare: many improvements to the various threadshare elements, plus examples and a new benchmark program. The plugin + was also relicensed to MPL-2.0. -- imagefreeze: Add support for JPEG and PNG +- The unixfdsink element for zero-copy 1:N IPC on Linux can now also copy the input data if needed, which makes it usable with + more upstream elements. 
Before it would only work with elements that made use of the special memory allocator it advertised. This (copy if needed) is enabled by default, but can be disabled by setting the new “min-memory-size” property to -1.
-- kmssink: Extended the functionality to support buffers with DRM formats along with non-linear buffers
+ There’s also a new “num-clients” property that gets notified when the number of clients (unixfdsrc elements tapping the same unixfdsink) changes.
-- pitch now supports reverse playback
+- videorate and imagefreeze now also support JPEG XS.
-- queue can emit the notify signal on queue level changes if the "notify-levels" property has been set.
+- videorate’s formerly defunct “new-pref” property was revived for better control over which frame to prefer for output in case of caps changes.
-- qroverlay: the "pixel-size" property has been removed in favour of a new "size" property with slightly different semantics, where the size of the square is expressed in percent of the smallest of width and height.
+## Plugin and library moves and renames
-- rsvgdec: Negotiate resolution with downstream and scale accordingly
+- The y4mdec plugin moved from gst-plugins-bad into gst-plugins-good and was merged with the existing y4menc there into a single y4m plugin containing both a YUV4MPEG encoder and decoder.
-- rtmp2: server authentication improvements
+- The fmp4 and mp4 plugins in the Rust plugins set were merged into a single isobmff plugin.
-  - Mimic librtmp’s behaviour and support additional connection parameters for the connect packet, which are commonly used for authentication, via the new "extra-connect-args" property.
+## Plugin and element deprecations
-  - Add support for Limelight CDN (llnw) authentication
+- The old librtmp-based rtmpsrc and rtmpsink elements are deprecated and scheduled for removal in the next release cycle.
Use + the rtmp2src and rtmp2sink elements instead (which will likely also be registered under the old names after removal of the + old rtmp plugin). -- scaletempo has gained support for a “fit-down” mode: In fit-down mode only 1.0 rates are supported, and the element will fit - audio data in buffers to their advertised duration. This is useful in speech synthesis cases, where elements such as - awspolly will generate audio data from text, and assign the duration of the input text buffers to their output buffers. - Translated or synthesised audio might be longer than the original inputs, so this makes sure the audio will be sped up to - fit the original duration, so it doesn’t go out of sync. +- Deprecate the webrtchttp plugin in the Rust plugins set along with its whipsink and whepsrc elements, in favour of the + whipclientsink and whepclientsrc elements from the webrtc plugin in the Rust plugins set. -- souphttpsrc: Add the notion of "retry-backoff" and retry on 503 (service unavailable) and 500 (internal server error) http - errors. +- The libmpeg2-based mpeg2dec element is deprecated and scheduled for removal in the next release cycle, as libmpeg2 has been + unmaintained for a very long time. The libavcodec-based decoder has had a higher rank for many years already and is also + more performant. We would recommend that distros that also ship the FFmpeg-based decoder out of the box stop shipping the + mpeg2dec plugin now or reduce its rank to GST_RANK_NONE. -- taginject now modifies existing tag events of the selected scope, with the new "merge-mode" property allowing finer control. - -- timecodestamper gained a new running-time source mode that converts the buffer running time into timecodes. 
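The rank recommendation above works because autopluggers such as decodebin sort the element factories that can handle a stream by rank and try the highest-ranked one first; a rank of 0 (GST_RANK_NONE) excludes a factory from autoplugging entirely. A deliberately simplified model of that selection, illustrative only:

```python
# Simplified model of rank-based decoder selection (illustrative, not
# GStreamer code). Rank constants match GStreamer's: NONE excludes a
# factory from autoplugging; the highest rank wins otherwise.
GST_RANK_NONE, GST_RANK_MARGINAL = 0, 64
GST_RANK_SECONDARY, GST_RANK_PRIMARY = 128, 256

def pick_decoder(factories):
    """Return the highest-ranked factory name, or None if all are excluded."""
    candidates = [(rank, name) for name, rank in factories.items()
                  if rank > GST_RANK_NONE]
    return max(candidates)[0:2][1] if candidates else None

factories = {"avdec_mpeg2video": GST_RANK_PRIMARY,
             "mpeg2dec": GST_RANK_SECONDARY}
print(pick_decoder(factories))         # avdec_mpeg2video wins on rank

factories["mpeg2dec"] = GST_RANK_NONE  # distro demotes the deprecated element
print(pick_decoder(factories))         # mpeg2dec is never considered now
```

Setting the rank via packaging (or `GST_PLUGIN_FEATURE_RANK=mpeg2dec:NONE` at runtime) therefore removes the deprecated decoder from automatic selection without uninstalling it.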
- -- playbin3, uridecodebin3, parsebin - - - lots of stream-collection handling and stability/reliability fixes - - error/warning/info message now include the URI (if known) and stream-id - - missing plugin messages include the stream-id - -- videocodectestsink gained support for GBR_10LE, GRAY8 and GRAY10_LE{16,32} pixel formats - -- videoflip gained support for the Y444_16LE and Y444_16BE pixel formats +## Plugin and element removals -- videoconvertscale: +- The cc708overlay element has been removed. It is replaced by the cea708overlay element from the rsclosedcaption plugin in + the Rust plugins module. - - Handle pixel aspect ratios with large numerators or denominators - - Explicitly handle the overlaycomposition meta caps feature, so it doesn’t get dropped unnecessarily +- Drop registration of rusotos3src and rusotos3sink in the AWS plugin in the Rust plugins set. These were legacy names that + were renamed to awss3src and awss3sink in 2022, but had been kept around for a while so applications had time to move to the + new name space. -- waylandsink prefers DMABuf over system memory now +## Miscellaneous API additions -- x264enc has a new "nal-hrd" property to make the encoder signal HRD information, which is required for Blu-ray streams, - television broadcast and a few other specialist areas. It can also be used to force true CBR, and will cause the encoder to - output null padding packets. +### GStreamer Core -- zbar: add support for binary mode and getting symbols as raw bytes instead of a text string. +- gst_call_async() and gst_object_call_async() are more generic and convenient replacements for gst_element_call_async() -## Plugin and library moves +- gst_check_version() is a new convenience function to check for a minimum GStreamer core version at runtime. -- macOS: atdec was moved from the applemedia plugin in -bad to the osxaudio plugin in -good, in order to be able to share - audio-related helper methods. 
+- GstClock: Add gst_clock_is_system_monotonic() utility function -## Plugin and element removals +- GstController: gst_timed_value_control_source_list_control_points() is a thread-safe method to retrieve the list of control + points, replacing gst_timed_value_control_source_get_all(). -- None in this cycle +- GstCpuId: gst_cpuid_supports_x86_avx() and friends can be used to check which SIMD instruction sets are supported on the + current machine’s CPU without relying on liborc for that. This is useful for plugins that rely on an external library that + wants to be told which SIMD code paths to use. -## Miscellaneous API additions +- gst_object_get_toplevel() can be used to get the toplevel parent of an object, e.g. the pipeline an element is in. -### GStreamer Core +- New API for tensor caps descriptions: -- gst_meta_api_type_set_params_aggregator() allows setting an GstAllocationMetaParamsAggregator function for metas, which has - been implemented for GstVideoMeta and is used to aggregate alignment requirements of multiple tee branches. + - GstUniqueList is a new unordered, unique container value type for GValues similar to GstValueList but guaranteed to have + unique values. Can only be queried and manipulated via the gst_value_* API same as GstValueList and GstValueArray. -- gst_debug_print_object() and gst_debug_print_segment() have been made public API. The can be used to easily get string - representations of various types of (mini)objects in custom log handlers. + - gst_structure_get_caps() gets a GstCaps from a structure -- Added gst_aggregator_push_src_event(), so subclasses don’t just push events directly onto the source pad bypassing the base - class without giving it the chance to send out any pending serialised events that should be sent out before. +- More accessor functions for GstPadProbeInfo fields and the GstMapInfo data field, as well as a generic gst_map_info_clear() + which is useful for language bindings. 
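The intended use of the CPU-feature helpers is a query-once, dispatch-once pattern: a plugin checks the available instruction sets and tells its external library which code path to take. A language-neutral sketch of that pattern in Python (the feature names and the dispatch table are illustrative, not the C API):

```python
# Illustrative dispatch pattern (not the gst_cpuid_supports_x86_*() C API):
# query CPU features once, then pick the best available SIMD code path.

def select_simd_path(features):
    """Return the best available code path from a set of feature flags,
    falling back to a portable scalar implementation."""
    for path in ("avx2", "avx", "sse4.1", "sse2"):  # best first
        if path in features:
            return path
    return "scalar"

print(select_simd_path({"sse2", "sse4.1", "avx"}))  # avx
print(select_simd_path(set()))                      # scalar
```

Doing this once at plugin load time keeps the per-buffer hot path free of feature checks.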
-- GstMessage has gained APIs to generically add “details” to messages: +- New EBU-R128 variants of the replay gain tags: GST_TAG_TRACK_GAIN_R128 and GST_TAG_ALBUM_GAIN_R128 - - gst_message_set_details() - - gst_message_get_details() - - gst_message_writable_details() - - gst_message_parse_error_writable_details() - - gst_message_parse_warning_writable_details() - - gst_message_parse_info_writable_details() This is used in uridecodebin3 to add the triggering URI to any INFO, WARNING - or ERROR messages posted on the bus, and in decodebin3 to add the stream ID to any missing plugin messages posted on the - bus. +- GstReferenceTimestampMeta: additional information about the timestamp can be provided via the new optional info + GstStructure. This should only be used for information about the timestamp and not for information about the clock source. + This is used in an implementation of the TAI timestamp functionality described in ISO/IEC 23001-17 Amendment 1 in the Rust + MP4 muxer. -- gst_util_floor_log2() returns smallest integral value not bigger than log2(v). +- GstValue: add gst_value_hash() and support 0b / 0B prefix for bitmasks when deserialising. -- gst_util_fraction_multiply_int64() is a 64-bit variant of gst_util_fraction_multiply(). +- Add missing _take() and _steal() functions for some mini objects: -#### GstIdStr replaces GQuark in structure and caps APIs + - gst_buffer_take(), gst_buffer_steal() + - gst_buffer_list_steal() + - gst_caps_steal() + - gst_memory_take(), gst_memory_replace(), gst_memory_steal() + - gst_message_steal() + - gst_query_steal() -- GQuarks are integer identifiers for strings that are inserted into a global hash table, allowing in theory for cheap - equality comparisons. In GStreamer they have been used to represent GstStructure names and field names. 
The problem is that - these strings once added to the global string table can never be freed again, which can lead to ever-increasing memory usage - for processes where such name identifiers are created based on external input or on locally-created random identifiers. +- GstElement: Deprecate gst_element_state_*() API and provide gst_state_*() replacements with the right namespace -- GstIdStr is a new data structure made to replace quarks in our APIs. It can hold a short string inline, a static string, or - a reference to a heap-allocated longer string, and allows for cheap storage of short strings and cheap comparisons. It does - not involve access to a global hash table protected by a global lock, and as most strings used in GStreamer structures are - very short, it is actually more performant than quarks in many scenarios. +#### GstMetaFactory to dynamically register metas -- GQuark-using APIs in GstStructure or GstCaps have been deprecated and equivalent APIs using GstIdStr have been added - instead. For more details about this change watch Sebastian’s GStreamer Conference presentation “GQuark in GStreamer - structures - what nonsense!”. +- gst_meta_factory_register() allows to dynamically register metas and store them in the registry by name. This is useful in + combination with the GstMeta serialisation and deserialisation functionality introduced in GStreamer 1.24, for metas that + are not provided by GStreamer core. If an element comes across a meta name that is not registered yet with GStreamer, it can + check the registry and load the right plugin which will in turn register the meta with GStreamer. This is similar to how + flag and enum types can be stored in the registry so that if during caps deserialisation an unknown enum or flag type is + encountered, it can be loaded dynamically and registered with the type system before deserialisation continues. 
-- Most applications and plugins will have been using the plain string-based APIs which are not affected by this change. + The pbtypes plugin in gst-plugins-base registers GstAudioMeta and GstVideoMeta in the registry so that e.g. unixfdsrc and + other elements can make sure they get pulled in and registered with GStreamer before deserialising them. -#### GstVecDeque +### App Sink and Source Library -- Moved GstQueueArray as GstVecDeque into core for use in GstBus, the ringbuffer logger and in GstBufferPool, where an overly - complicated signaling mechanism using GstAtomicQueue and GstPoll was replaced with GstVecDeque and a simple mutex/cond. - -- GstQueueArray in libgstbase was deprecated in favour of GstVecDeque. - -- GstAtomicQueue will be deprecated once all users in GStreamer have been moved over to GstVecDeque. +- appsrc and appsink gained support for a more bindings-friendly “simple callbacks” API that can be used instead of GObject + signals (which have considerable overhead) or the normal callbacks API (which couldn’t be used from most bindings). ### Audio Library -- Added gst_audio_reorder_channels_with_reorder_map() which allows reordering the samples with a pre-calculated reorder map - instead of re-calculating the reorder map every time. - -- Add top-surround-left and top-surround-right channel positions - -- GstAudioConverter now supports more numerical types for the mix matrix, namely double, int, int64, uint, and uint64 in - addition to plain floats. +- added support for 20-bit PCM audio stored in 32-bit containers, both signed (S20_32) and unsigned (U20_32), each in + little-endian and big-endian variants. ### Plugins Base Utils Library -- New AV1 caps utility functions for AV1 Codec Configuration Record codec_data handling - -- The GstEncodingProfile (de)serialization functions are now public - -- GstEncodingProfile gained a way to specify a factory-name when specifying caps. 
In some cases you want to ensure that a - specific element factory is used while requiring some specific caps, but this was not possible so far. You can now do - e.g. qtmux:video/x-prores,variant=standard|factory-name=avenc_prores_ks to ensure that the avenc_prores_ks factory is used - to produce the variant of prores video stream. +- Many minor improvements. ### Tag Library -- EXIF handling now support the CAPTURING_LIGHT_SOURCE tag - -- Vorbis tag handling gained support for the LYRICS tag +- Vorbis comments: parse EBU R128 tags ### Video Library and OpenGL Library -- gst_video_convert_sample(), gst_video_convert_sample_async() gained support for D3D12 conversion. - -- GstVideoEncoder: gst_video_encoder_release_frame() and gst_video_encoder_drop_frame() have been made available as public - API. - -- Navigation: gained mouse double click event support - -- Video element QoS handling was improved so as to not overshoot the QoS earliest time by a factor of 2. This was fixed in the - video decoder, encoder, aggregator and audiovisualizer base classes, as well as in the adaptivedemux, deinterlace, - monoscope, shapewipe, and (old) videomixer elements. - -- GstVideoConverter gained fast paths for v210 to/from I420_10 / I422_10 +- Add DRM equivalents for various 10/12/16 bit SW-decoders formats -- New gst_video_dma_drm_format_from_gst_format() helper function that converts a video format into a dma drm fourcc / modifier - pair, plus gst_video_dma_drm_format_to_gst_format() which will do the reverse. - -- In the same vein gst_gl_dma_buf_transform_gst_formats_to_drm_formats() and - gst_gl_dma_buf_transform_drm_formats_to_gst_formats() have been added to the GStreamer OpenGL support library. - -- GLDisplay/EGL: Add API (gst_gl_display_egl_set_foreign()) for overriding foreign-ness of the EGLDisplay in order to control - whether GStreamer should call eglTerminate() or not. 
- -- Additional DMA DRM format definitions/mappings: - - - NV15, NV20, NV30 - - NV12_16L32S, MT2110T, MT2110R as used on Meditek SoCs - - NV12_10LE40 - - RGB15, GRAY8, GRAY16_LE, GRAY16_BE - - plus support for big endian DRM formats and DRM vendor modifiers +- New GstVideoMetaTransformMatrix that adds crop, scale, rotate, flip, shear and more meta transformations. The current + “scaling” transformation doesn’t work if either the input buffer is cropped or if any kind of borders are added. And it + completely falls down with more complex transformations such as those in compositor. + +- GstVideoOverlayCompositionMeta: handling of multiple video overlay composition metas on a single buffer has been fixed in + lots of places (overlays and sinks). Many elements assumed there would only ever be a single overlay composition meta per + buffer. For that reason gst_buffer_get_video_overlay_composition_meta() has been deprecated, so that elements have to + iterate over the metas and handle multiple occurrences of it. New Raw Video Formats -- Packed 4:2:2 YUV with 16 bits per channel: - - Y216_LE, Y216_BE -- Packed 4:4:4:4 YUV with alpha, with 16 bits per channel: - - Y416_LE, Y416_BE -- 10-bit grayscale, packed into 16-bit words with left padding: - - GRAY10_LE16 +- Add more 10-bit RGB formats commonly used on ARM SoCs in GStreamer Video, OpenGL and Wayland, as well as in deinterlace and + gdkpixbufoverlay: + - BGR10x2_LE: packed 4:4:4 RGB (B-G-R-x), 10 bits per R/G/B channel and 2 MSB padding bits + - RGB10x2_LE: packed 4:4:4 RGB (R-G-B-x), 10 bits per R/G/B channel and 2 MSB padding bits +- Add 10-bit 4:2:2 NV16_10LE40 format, which is a fully-packed variant of NV16_10LE32, also known as NV20, and is produced + by Rockchip rkvdec decoders.
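To illustrate what "fully-packed" means for the LE40 family of formats above: every group of four 10-bit samples occupies exactly five bytes, with no padding bits. The sketch below is an illustration of that bit layout (assuming LSB-first, little-endian packing), not GStreamer's actual implementation:

```python
def pack_10le40(samples):
    """Pack 10-bit samples (groups of 4) into 5-byte groups, LSB-first."""
    assert len(samples) % 4 == 0
    out = bytearray()
    for i in range(0, len(samples), 4):
        word = 0
        for j, s in enumerate(samples[i:i + 4]):
            assert 0 <= s < 1024          # each sample is 10 bits
            word |= s << (10 * j)         # sample j occupies bits 10j..10j+9
        out += word.to_bytes(5, "little") # 40 bits -> 5 bytes, no padding
    return bytes(out)

def unpack_10le40(data):
    """Reverse of pack_10le40: recover the 10-bit samples."""
    samples = []
    for i in range(0, len(data), 5):
        word = int.from_bytes(data[i:i + 5], "little")
        samples += [(word >> (10 * j)) & 0x3FF for j in range(4)]
    return samples
```

Compare this with padded layouts such as NV16_10LE32, where samples carry unused filler bits and a row takes more memory for the same pixel count.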
### GstPlay Library -- Add stream-id based selection of streams to better match playbin3’s API: - - Add accessors for the stream ID and selection API based on the stream ID: - - gst_play_stream_info_get_stream_id() - - gst_play_set_audio_track_id() - - gst_play_set_video_track_id() - - gst_play_set_subtitle_track_id() - - gst_play_set_track_ids() - - Deprecate the old index-based APIs: - - gst_play_stream_info_get_index() - - gst_play_set_audio_track() - - gst_play_set_video_track() - - gst_play_set_subtitle_track() - - Remove old playbin support - - Implement the track enable API based on stream selection -- Distinguish missing plugin errors and include more details (uri, and stream-id if available) in error/warning messages: - - gst_play_message_get_uri() - - gst_play_message_get_stream_id() - - GST_PLAY_ERROR_MISSING_PLUGIN - - gst_play_message_parse_error_missing_plugin() - - gst_play_message_parse_warning_missing_plugin() -- Improve play message API inconsistencies: - - Consistently name parse functions according to their message type: - - gst_play_message_parse_duration_changed() - - gst_play_message_parse_buffering() - - Deprecate the misnamed functions: - - gst_play_message_parse_duration_updated() - - gst_play_message_parse_buffering_percent() - - Add missing parse functions: - - gst_play_message_parse_uri_loaded() - - gst_play_message_parse_seek_done() -- Support disabling the selected track at startup +- GstPlay: Add support for gapless looping ## Miscellaneous performance, latency and memory optimisations -- dvdspu: use multiple minimal sized PGS overlay rectangles instead of a single large one to minimise the total blitting - surface in case of disjointed rectangles. +- New task pool GstContext to share a thread pool amongst elements in a pipeline for better resource management and + performance, especially for video conversion and compositing. This is currently only made use of automatically in the + GStreamer Editing Services library. 
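The idea behind the shared task-pool context above is that all elements in a pipeline draw worker threads from one pool sized to the processor count, instead of each element spawning its own threads. A language-neutral sketch of that concept (names invented for illustration; the actual mechanism is GstSharedTaskPool distributed via a GstContext):

```python
import os
from concurrent.futures import ThreadPoolExecutor

class SharedTaskPool:
    """One pool for the whole pipeline instead of one thread per element."""
    def __init__(self, max_threads=None):
        # Default the pool size to the number of processors, as described above.
        self._pool = ThreadPoolExecutor(max_workers=max_threads or os.cpu_count())

    def push(self, func, *args):
        """Queue a unit of work (e.g. converting one slice of a frame)."""
        return self._pool.submit(func, *args)

    def shutdown(self):
        self._pool.shutdown(wait=True)

# Hypothetical "elements" sharing the same pool for per-frame work:
pool = SharedTaskPool()
futures = [pool.push(lambda n: n * n, i) for i in range(8)]
results = [f.result() for f in futures]
pool.shutdown()
```

Sharing one pool avoids thread oversubscription when many conversion/compositing elements run in the same process.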
+ +- glupload: Implement udmabuf uploader to share buffers between software decoders/sources and GPUs, display engines (wayland), + and other dma devices (see above). + +- GstDeviceMonitor now starts device providers in a separate thread. This avoids blocking the application when + gst_device_monitor_start() is called, which saves each app from having to spawn a separate thread just to start device + monitoring. This is especially important on Windows, where device probing can take several seconds, or on macOS, where device + access can block on user input. A new GST_MESSAGE_DEVICE_MONITOR_STARTED message is posted on the bus to signal to the application + that the device monitor has completed its async startup. -- video-frame: reduce number of memcpy() calls on frame copies if possible +- On Windows, audioresample now has SIMD optimisations enabled also for the MSVC build. -- video-converter: added fast path conversions between v210 and I420_10 / I422_10 +- audiomixmatrix / audioconvert: sparse matrix LUT optimisation which uses precomputed LUTs for non-zero coefficients instead + of blindly traversing all input/output channel combinations. - As always there have been plenty of performance, latency and memory optimisations all over the place. ## Miscellaneous other changes and enhancements -- netclientclock: now also emits the clock synced signal when corrupted to signal that sync has been lost. +- The ALSA device provider now supports enumerating virtual PCM sinks -- GstValue, GstStructure: can now (de)serialize string arrays (G_TYPE_STRV) +- The ASIO device monitor can now detect dynamically added and removed devices by monitoring USB events. ## Tracing framework and debugging improvements -- dot files (pipeline graph dumps) are now written to disk atomically - -- tracing: add hooks for gst_pad_send_event_unchecked() and GstMemory init/free +- There are new hooks to track when buffers are queued or dequeued from buffer pools in the tracing system.
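To illustrate the sparse-matrix LUT optimisation mentioned for audiomixmatrix / audioconvert: rather than multiplying every input channel by every output coefficient on every frame, a lookup table of only the non-zero coefficients is computed once and reused. A simplified sketch of the concept (not the elements' actual code):

```python
def build_sparse_lut(matrix):
    """Precompute (out_ch, in_ch, coeff) entries for non-zero coefficients."""
    return [(o, i, c)
            for o, row in enumerate(matrix)
            for i, c in enumerate(row) if c != 0.0]

def mix_frame(lut, n_out, in_samples):
    """Mix one frame of input samples using only the non-zero entries."""
    out = [0.0] * n_out
    for o, i, c in lut:          # skips all zero coefficients entirely
        out[o] += c * in_samples[i]
    return out

# Mostly-zero 2x4 downmix matrix: each output takes one input channel at unity.
matrix = [[1.0, 0.0, 0.0, 0.0],
          [0.0, 1.0, 0.0, 0.0]]
lut = build_sparse_lut(matrix)   # 2 entries instead of 8 multiply-adds
```

For typical mix matrices, which are mostly zeros, this turns an O(in × out) inner loop into one proportional to the number of non-zero coefficients.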
-- tracers: Simplify params handling using GstStructure and object properties and move tracers over to property-based - configuration (leaks, latency). +- The pad-push-timings tracer gained a new “write-log” action signal -- textoverlay, clockoverlay, timeoverlay: new "response-time-compensation" property that makes the element render the text or - timestamp twice: Once in the normal location and once in a different sequentially predictable location for every frame. This - is useful when measuring video latency by taking a photo with a camera of two screens showing the test video overlayed with - timeoverlay or clockoverlay. In these cases, you will often see ghosting if the display’s pixel response time is not great, - which makes it difficult to discern what the current timestamp being shown is. Rendering in a different location for each - frame makes it easy to tell what the latest timestamp is. In addition, you can also use the fade-time of the previous frame - to measure with sub-framerate accuracy when the photo was taken, not just clamped to the framerate, giving you a higher - precision latency value. +Dot tracer/viewer -New tracers +- Enhanced dots-viewer: Major refactoring with modular JavaScript architecture, bundled dependencies (no more CDN), clickable + pipeline references for navigation between related dot files, download SVG button, and improved UI/UX with better text + handling and zoom fixes. -- memory-tracer: New tracer that can track memory usage over time +Dot file pipeline graphs -- pad-push-timings: New tracer for tracing pad push timings +- Dot file dumps of pipeline graphs now show the list of active tracers at the bottom along with the tracer configuration. 
-- pcap-writer: New tracer that can store the buffers flowing through a pad as PCAP file +Debug logging system improvements -Dot tracer/viewer +GstLogContext to fine-tune logging output and reduce log message spam -- New dots tracer that simplifies the pipeline visualization workflow: - - Automatically configures dot file directory and cleanup - - Integrates with the pipeline-snapshotS tracer to allow dumping pipeline on demand from the gst-dots-viewer web interface - - Uses GST_DEBUG_DUMP_DOT_DIR or falls back to $XDG_CACHE_HOME/gstreamer-dots -- New gst-dots-viewer web tool for real-time pipeline visualization - - Provides interface to browse and visualize pipeline dot files - - Features on-demand pipeline snapshots via “Dump Pipelines” button - - WebSocket integration for live updates - - Uses GST_DEBUG_DUMP_DOT_DIR or falls back to $XDG_CACHE_HOME/gstreamer-dots -- Simple usage: - - gst-dots-viewer (starts server) - - GST_TRACERS=dots gst-launch-1.0 videotestsrc ! autovideosink (runs with tracer) - - View at http://localhost:3000 +- GstLogContext is a new API to control logging behavior, particularly for implementing “log once” functionality and periodic + logging. This helps avoid spamming logs with repetitive messages. This comes with a whole suite of new GST_CTX_* debug log + macros that take a context argument in addition to the usual arguments. -Debug logging system improvements +- A number of GST_{MEMDUMP,TRACE,LOG,DEBUG,INFO,WARNING,ERROR}_ONCE convenience macros for logging something only once. -- Nothing major in this cycle. +- The source code of elements and plugins has to be updated to make use of this new feature, so if there are any particular + log messages in certain elements that you feel are particularly spammy, please feel free to file an issue in GitLab so we + can see if it would make sense to use the new API there. 
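Conceptually, the "log once" behaviour described above boils down to de-duplicating messages by a per-context key, as in this toy Python sketch (the real API is the C-level GstLogContext with the GST_CTX_* macros; all names here are invented for illustration):

```python
class LogContext:
    """Minimal illustration of 'log once' de-duplication by message key."""
    def __init__(self):
        self._seen = set()
        self.lines = []            # stands in for the real log output

    def log_once(self, key, message):
        """Emit the message only the first time this key is seen."""
        if key in self._seen:
            return False           # suppressed: already logged once
        self._seen.add(key)
        self.lines.append(message)
        return True

ctx = LogContext()
for _ in range(1000):              # e.g. a warning hit on every buffer
    ctx.log_once("caps-mismatch", "WARN: caps mismatch on sinkpad")
```

Instead of a thousand identical warnings, the log contains a single entry; periodic logging works the same way but expires keys after an interval.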
## Tools -- gst-inspect-1.0 documents tracer properties now and shows element flags - -- gst-launch-1.0 will show error messages posted during pipeline construction +- gst-inspect-1.0 now shows the type of each field when it prints caps and also pretty-prints tensor caps. ## GStreamer FFmpeg wrapper -- Add support for H.266/VVC decoder - -- Add mappings for the Hap video codec, the Quite OK Image codec (QOI) and the M101 Matrox uncompressed SD video codec. - -- Don’t register elements for which we have no caps and which were non-functional as a result (showing unknown/unknown caps). - -- The S302M audio encoder now supports up to 8 channels. - -- Various tag handling improvements in the avdemux wrappers, especially when there are both upstream tags and additional local - tags. - -- Support for 10-bit grayscale formats +- The avdec video decoders have seen many improvements and fixes for their buffer pool and allocation query handling. ## GStreamer RTSP server -- GstRTSPOnvifMediaFactoryClass gained a ::create_backchannel_stream() vfunc. This allows subclasses to delay creation of the - backchannel to later in the sequence, which is useful in scenarios where the RTSP server acts as a relay and the supported - caps depend on the upstream camera, for example. +- rtsp-client: Add a “pre-closed” signal which provides a way for an application to be notified when a connection is closed, + before the client’s sessions are cleaned up. This is useful when a client terminates its session improperly, for example, by + sending a TCP RST. + +- rtsp-stream-transport: expose new “timed-out” property. Upon RTCP timeout, rtpsession emits a signal that we can catch and + then also expose the timed-out state as a property of the transport in order for users (such as rtspclientsink) to get notified + about it. -- The ONVIF backchannel example now features support for different codecs, including AAC. +- rtspclientsink now errors out on timeout.
## VA-API integration -VA plugin - -- New VA elements: - - - H.266 / VVC video decoder - - VP8 video encoder - - JPEG encoder - - VP9 + VP8 alpha decodebin - - Remember that the availability of these elements depends on your platform and driver. - -- There are a lot of improvements and bug fixes, to hightlight some of them: - - - Improved B pyramid mode for both H264 and HEVC encoding when reference frame count exceeds 2, optimizing pyramid level - handling. - - Enabled ICQ and QVBR modes for several encoders, including H264, H265, VP9 and AV1. - - Updated rate control features by setting the quality factor parameter, while improving bitrate change handling. - - Improved VP9 encoder’s ability to avoid reopening or renegotiating encoder settings when parameters remain stable. - - Added functionality to adjust the trellis parameter in encoders. - - Optimize encoders throughput with the introduction of output delay. - - Added support for new interpolation methods for scaling and improvements for handling interlace modes. - -GStreamer-VAAPI is now deprecated +VA plugin for Hardware-Accelerated Video Encoding and Decoding on Intel/AMD -- gstreamer-vaapi has been deprecated and is no longer actively maintained. Users who rely on gstreamer-vaapi are encouraged - to migrate to the va plugin’s elements at the earliest opportunity. +- vaav1enc: Enable intra block copy and palette mode. -- vaapi*enc encoders have been demoted to a rank of None, so will no longer be autoplugged in encodebin. They have also no - longer advertise dmabuf caps or unusual pixel formats on their input pad template caps. +- Lots of other improvements and bug fixes. -## GStreamer Video4Linux2 support +GStreamer-VAAPI has been removed in favour of the va plugin -- Implemented DMA_DRM caps negotiation +- gstreamer-vaapi has been removed and is no longer updated going forward. Users who relied on gstreamer-vaapi are encouraged + to migrate to the va plugin’s elements at the earliest opportunity.
It should still be possible to build old versions of + gstreamer-vaapi against newer versions of GStreamer. -- Framerate negotiation improvements - -- Support for P010 and BGR10A2_LE pixel formats - -- The V4L2 stateless decoders now support inter-frame resolution changes for AV1 and VP9 +## GStreamer Editing Services and NLE -- The V4L2 stateful encoders can now handle dynamic frame rates (0/1), and colorimetry negotiation was also improved. +- Task Pool Context Support: GESPipeline now supports task pool context handling for better resource management. It + automatically creates and manages a GstSharedTaskPool with threads set to the number of processors, also allowing + applications to provide their own shared task pool via context negotiation. -## GStreamer Editing Services and NLE +- MT-Safe Controller API: New gst_timed_value_control_source_list_control_points() function provides thread-safe access to + control points, addressing use-after-free bugs in the previous API which returned references to internal structures. -- Added support for reverse playback with a new reverse property on nlesource which is exposed child property on GESClip +- OTIO Formatter Migration: The OpenTimelineIO formatter has been moved from embedded GLib resources to a standalone Python + plugin located in gst-python, simplifying the implementation and avoiding duplicated code. -- Input channels reordering for flexible audio channel mapping +- Framepositioner Z-order Enhancements: The z-order property is now controllable and exposed for manipulation, enabling + dynamic adjustment of layer stacking order during timeline editing. -- Added support for transition in the ges-launch-1.0 timeline description format +- Clip Layer Movement Detection: New ges_clip_is_moving_between_layers() API distinguishes actual layer moves from other + operations like split/ungroup, with separate flags for track element freezing and layer movement. 
-- Added support for GstContext sharing in GESTimeline +- GES Error Domain: Added ges_error_quark() function for proper GError domain support, enabling automatic ErrorDomain + implementation generation in language bindings. -- Added basic support for duplicated children property in GESTimelineElement +- Timeline Error Reporting: Added GError parameter to ges_base_bin_set_timeline() for proper error reporting when timeline + setup fails. -- validate: Add an action type to group clips +- Various bug fixes for memory leaks, frame position calculations with non-square pixel aspect ratios, and control binding + handling. -## GStreamer validate +GStreamer validate -- Added new action types: +- check-last-frame-qrcode: New action type (from the Rust validate plugin) to validate QR code content in + video frames. Supports exact string matching for single or multiple QR codes, and JSON field validation. - - start-http-server: Start a new instance of our HTTP test server - - http-requests: Send an HTTP request to a server, designed to work with our test http server +- Override Severity Levels: New overrides parameter in the meta action type allows changing issue severity levels during test + execution. Tests can now pass when encountering known issues by downgrading severity from critical to warning/issue/ignore. -- HTTP server control endpoints to allow scenarios to control the server behavior, allowing simulating server failures from - tests +- Enhanced dots-viewer (see dots-viewer section above) -- Improved the select-streams action type, adding support for selecting the same streams several times +- SSIM Validation Improvements: Changed validation to check all images before reporting errors instead of stopping at the + first error.
-- Added support for forcing monitoring of all pipelines in validatetest files +- Reverse Playback Validation: Changed segment.time mismatch from critical to warning for reverse playback scenarios, + acknowledging the additional complexity demuxers face during reverse playback. -- Enhanced support for expected Error messages on the bus +- Launcher Improvements: Log files for passing tests are now removed by default to reduce storage usage (with option to keep + them), and debug log colors are now supported when redirected to files. -- Added ways to retrieve HTTP server port in .validatetest files +- Python 3.14 Compatibility: Fixed file:/// URI generation for Python 3.14 with proper RFC 8089 compliance. -- Added support for lldb in the gst-validate-launcher +- Various bug fixes for scenario handling, memory leaks, and improved backward compatibility with GLib 2.64. ## GStreamer Python Bindings @@ -927,125 +757,100 @@ pythonic; as well as support for APIs that aren’t available through the regular gobject-introspection based bindings, such as e.g. GStreamer’s fundamental GLib types such as Gst.Fraction, Gst.IntRange etc. -- The python Meson build option has been renamed to python-exe (and will yield to the monorepo build option of the same name - if set, in a monorepo build context). +- More pythonic API for analytics -- Added an iterator for AnalyticsRelationMeta +- Type annotations have been updated in PyGObject-stubs. -- Implement __eq__ for Mtd classes +- Writability of Gst.Structure, Gst.Caps and other objects has been improved. -- Various build fixes and Windows-related fixes. + - caps.writable_structure() now returns a ContextManager inside of which the returned Gst.Structure can be modified. + - obj.make_writable() makes any MiniObject writable. + - Pad probe callbacks now have info.writable_object() and info.set_object() to modify objects inside the callback.
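The writability improvements above follow a copy-on-write pattern: you get a writable view inside a context manager and the changes are committed when the `with` block exits. A toy Python model of that pattern (not the gst-python implementation; the class and field names here are invented):

```python
from contextlib import contextmanager

class Caps:
    """Toy stand-in for an immutable caps-like object (illustration only)."""
    def __init__(self, fields):
        self._fields = dict(fields)

    @contextmanager
    def writable_structure(self):
        s = dict(self._fields)   # hand out a writable copy
        yield s
        self._fields = s         # committed when the 'with' block exits

caps = Caps({"width": 1280, "height": 720})
with caps.writable_structure() as s:
    s["width"] = 1920            # mutate freely inside the context
```

Scoping mutation to the context manager makes it clear when the underlying object is safe to modify, which is the problem the real `caps.writable_structure()` API addresses.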
## GStreamer C# Bindings -- The C# bindings have been updated for the latest GStreamer 1.26 API +- The C# bindings have been updated for the latest GStreamer 1.28 APIs. ## GStreamer Rust Bindings and Rust Plugins The GStreamer Rust bindings and plugins are released separately with a different release cadence that’s tied to the gtk-rs release cycle. -The latest release of the bindings (0.23) has already been updated for the new GStreamer 1.26 APIs, and works with any GStreamer -version starting at 1.14. +The latest release of the bindings (0.24) has already been updated for the new GStreamer 1.28 APIs, and works with any GStreamer +version starting from 1.14. gst-plugins-rs, the module containing GStreamer plugins written in Rust, has also seen lots of activity with many new elements -and plugins. The GStreamer 1.26 binaries will be tracking the main branch of gst-plugins-rs for starters and then track the 0.14 -branch once that has been released (around summer 2025). After that, fixes from newer versions will be backported as needed to -the 0.14 branch for future 1.26.x bugfix releases. - -Rust plugins can be used from any programming language. To applications they look just like a plugin written in C or C++. - -### New Rust elements - -- awstranscriber2, awstranslate: New elements around the AWS transcription and translation services. - -- cea708mux: New element that allows to mux multiple CEA708 services into a single stream. - -- cdpserviceinject: New element for injecting a CDP service descriptor into closed caption CDP packets - -- cea708overlay: New element for overlaying CEA608 / CEA708 closed captions over video streams. +and plugins. -- gopbuffer: New element that can buffer a minimum duration of data delimited by discrete GOPs (Group of Picture) +The GStreamer 1.28 binaries will be tracking the main branch of gst-plugins-rs for starters and then track the 0.15 branch once +that has been released (around the end of February 2026). 
After that, fixes from newer versions will be backported as needed +into the new 0.15 branch for future 1.28.x bugfix releases. -- hlscmafsink, hlssink3: New single-variant HLS sink elements that can output CMAF (fMP4) or MPEG-TS fragments. +Rust plugins can be used from any programming language. To applications, they look just like a plugin written in C or C++. -- hlsmultivariantsink: New sink element that can output an HLS stream with multiple variants - -- mpegtslivesrc: New source element that can wrap a live MPEG-TS source (e.g. SRT or UDP source) and provides a clock based on - the PCR. - -- onvifmetadataextractor: New element that can extract ONVIF metadata from GstMetas into a separate stream - -- originalbuffer: New plugin with originalbuffersave / originalbufferrestore elements that allow saving an original buffer, - modifying it for analytics, and then restoring the original buffer content while keeping any additional metadata that was - added. - -- polly: New element around the AWS text-to-speech polly services +### New Rust elements -- quinn: New plugin that contains various QUIC-based elements for working with raw QUIC streams, RTP-over-QUIC (RoQ) and - WebTransport. +- New icecastsink element with AAC support that is similar in functionality to the existing shout2send element but also + supports AAC, which upstream libshout is not planning to support. -- relationmeta: New plugin with elements converting between GstRelationMeta and ONVIF XML metadata. +- New audio source separation element based on demucs (see above). -- New Rust RTP payloaders and depayloaders for AC3, AMR, JPEG, KLV, MPEG-TS (MP2T), MPEG-4 (MP4A, MP4G), Opus, PCMU (uLaw), - PCMA (aLaw), VP8, VP9. +- New Deepgram speech-to-text transcription plugin, ElevenLabs voice cloning element and textaccumulate element. See Speech to + Text, Translation and Speech Synthesis section above. 
-- New rtpbin2 based on rtprecv / rtpsend elements +- New analytics combiner and splitter elements for batch metas (see above). -- speechmatics: New transcriber / speech-to-text and translation element +- New mpa robust RTP depayloader, L8/L16/L24 raw audio payloaders and depayloaders and SMPTE ST291 ancillary data payloader + and depayloader. -- New spotifylyricssrc element for grabbing lyrics from Spotify. +- New GIF decoder element that supports looping. -- streamgrouper: New element that takes any number of streams as input and adjusts their stream-start events in such a way - that they all belong to the same stream group. +- New ST-2038 ancillary data combiner and extractor elements (see above) -- translationbin: Helper bin around translation elements, similar to the already existing transcriberbin for transcriptions. +- Added a burn-based YOLOX inference element and a YOLOX tensor decoder -- tttocea708: New element for converting timed-text to CEA708 closed captions +- s302mparse: Add new S302M audio parser -- A VVdeC-based H.266 decoder element was added to the Rust plugins, based on the Fraunhofer Versatile Video Decoder library. +- New Rust validate plugin with a check-last-frame-qrcode action. -For a full list of changes in the Rust plugins see the gst-plugins-rs ChangeLog between versions 0.12 (shipped with GStreamer -1.24) and 0.14.x (shipped with GStreamer 1.26). +For a full list of changes in the Rust plugins see the gst-plugins-rs ChangeLog between versions 0.14 (shipped with GStreamer +1.26) and current main (soon 0.15) branch (shipped with GStreamer 1.28). -Note that at the time of GStreamer 1.26.0 gst-plugins-rs 0.14 was not released yet and the git main branch was included instead -(see above). As such, the ChangeLog also did not contain the changes between the latest 0.13 release and 0.14.0. +Note that at the time of GStreamer 1.28.0 gst-plugins-rs 0.15 was not released yet and the git main branch was included instead +(see above). 
## Build and Dependencies - Meson >= 1.4 is now required for all modules -- liborc >= 0.4.41 is strongly recommended - -- libnice >= 0.1.22 is strongly recommended, as it is required for WebRTC ICE consent freshness (RFC 7675). +- liborc >= 0.4.42 is strongly recommended -- The ASIO plugin dropped its external SDK header dependency, so it can always be built and shipped more easily. +- libnice >= 0.1.23 is now required for the WebRTC library. -- Require tinyalsa >= 1.1.0 when building the tinyalsa plugin +- The closedcaption plugin in gst-plugins-bad no longer depends on pangocairo after removal of the cc708overlay element (see + above). -- The srtp plugin now requires libsrtp2, support for libsrtp1 was dropped. +- Please also note plugin removals and deprecations. Monorepo build -- The FFmpeg subproject wrap was updated to 7.1 +- Updated wraps, incl. glib: cairo, directxmath, expat, fdk-aac, ffmpeg, flac, freetype2, gdk-pixbuf, gtest, harfbuzz, + json-glib, lame, libjpeg-turbo, libnice, libopenjp2, libpng, libsrtp2, libxml2, nghttp2, ogg, pango, pcre2, pygobject, + soundtoch, sqlite3, wayland-protocols, zlib. -- Many other wrap updates - -gstreamer-full - -- No major changes +- Added wraps: librsvg, svtjpegxs Development environment - Local pre-commit checks via git hooks have been moved over to pre-commit, including the code indentation check. - Code indentation checking no longer relies on a locally installed copy of GNU indent (which had different outcomes depending - on the exact version installed). Instead pre-commit will automatically install the gst-indent-1.0 indentation tool through + on the exact version installed). Instead, pre-commit will automatically install the gst-indent-1.0 indentation tool through pip, which also works on Windows and macOS. - A pre-commit hook has been added to check documentation cache updates and since tags. 
-- Many meson wrap updates, including to FFmpeg 7.1 +- Many meson wrap updates, including to FFmpeg 7.1 (FFmpeg 8.0 is pending) - The uninstalled development environment should work better on macOS now, also in combination with homebrew (e.g. when libsoup comes from homebrew). @@ -1057,2267 +862,235 @@ ### Android -- The recommended mechanism to build Android apps has changed from Android.mk to CMake-in-Gradle using - FindGStreamerMobile.cmake. Android.mk support has been deprecated and will be removed in the next stable release. For more - information, see below, in the Cerbero section. -- More H.264/H.265 profiles and levels have been added to the androidmedia hardware-accelerated video encoder and decoder - elements, along with mappings for a number of additional pixel formats for P010, packed 4:2:0 variants and RGBA layouts, - which fixes problems with android decoders refusing to output raw video frames with decoders that announce support for these - common pixel formats and only allow the ‘hardware surfaces output’ path. +- Overhaul hw-accelerated video codecs detection: + + - Android 10 (API 29) added support for isHardwareAccelerated() to MediaCodecInfo to detect whether a particular + MediaCodec is backed by hardware or not. We can now use that to ensure that the video hw-codec is rank PRIMARY+1 on + Android, since using a software codec for video is simply not feasible most of the time. + + - If we’re not able to detect isHardwareAccelerated(), perhaps because the Android API version is too old, we try to use + the codec name as a fallback and also rank as PRIMARY+1 the c2.android, c2.exynos and c2.amlogic audio codecs alongside + OMX.google, because they are known-good. 
### Apple macOS and iOS -- atenc: added an Apple AAC audio encoder +- VP9 and AV1 hardware-accelerated video decoding support -- atdec can now decode audio with more than two channels +- Support for 10-bit HEVC encoding -- vtenc has received various bug fixes as well as a number of new features: +- Implement keyboard, mouse, and scroll wheel navigation event handling for the OpenGL Cocoa backend. - - Support for HEVC with alpha encoding via the new vtenc_h265a element - - additional rate control options for constant bitrate encoding (only supported on macOS 13.0+ and iOS 16.0+ on Apple - Silicon), setting data rate limits, and emulating CBR mode via data rate limits where CBR is not supported. - - HLG color transfer support - - new "max-frame-delay" property (for ProRes) +### Windows -- Better macOS support for gst-validate tools which now use gst_macos_main() and support lldb +#### GStreamer Direct3D12 integration -- The osxaudio device provider exposes more properties including a unique id +- New elements: -- osxaudio will automatically set up AVAudioSession on iOS and always expose the maximum number of channels a device supports - with an unpositioned layout. + - d3d12interlace: A Direct3D12 based interlacing element + - d3d12overlaycompositor: A Direct3D12-based overlay composing element + - d3d12fisheyedewarp: A Direct3D12-based fisheye dewarping element + - d3d12remap: A Direct3D12-based UV coordinate remapping element -- The monorepo development environment should work better on macOS now +- Upload/download optimisations via a staging memory implementation -- CMake apps that build macOS and iOS apps can consume GStreamer more easily now, using FindGStreamer.cmake or - FindGStreamerMobile.cmake respectively. +- d3d12swapchainsink improvements: -- In the near future, CMake in Xcode will be the preferred way of building the iOS tutorials. See below, in the Cerbero - section. 
+ - added a “last-rendered-sample” action signal to retrieve the last rendered frame + - added “uv-remap” and “redraw” action signals -### Windows +#### Windows inter process communication -- webview2src: new Microsoft WebView2 based web browser source element +- The Windows IPC plugin gained support for passing generic data in addition to raw audio/video, and various new properties. + It also serialises metas now where that is supported. -- The mediafoundation plugin can also be built with MinGW now. +#### Windows audio -- The GTK3 plugin has gained support for OpenGL/WGL on Windows +- wasapi2: add support for dynamic audio device switching, exclusive mode and format negotiation, in addition to device + provider improvements and latency enhancements. -- qsv: Add support for d3d12 interop in encoder, via D3D11 textures +- Disable all audio device providers except wasapi2 by default (by setting the others’ rank to NONE). We had too many device + providers outputting duplicate device entries, and it wasn’t clear to people what they should be using. After the recent + device switching work done on WASAPI2, there is no reason to use directsound anymore. ### Cerbero Cerbero is a meta build system used to build GStreamer plus dependencies on platforms where dependencies are not readily -available, such as Windows, Android, iOS, and macOS. +available, such as Windows, Android, iOS, and macOS. It is also used to create the GStreamer Python Wheels. General improvements - New features: - - Python bindings support has been re-introduced and now supports Linux, Windows (MSVC) and macOS. Simply downloading the - official binaries and setting PYTHONPATH to the appropriate directory is sufficient. - - - This should make it easier for macOS and Windows users to use Python libraries, frameworks, and projects that use - GStreamer such as Pitivi and gst-python-ml. - - - Introspection support has been re-introduced on Linux, Windows (MSVC), and macOS. 
- - - New variants assert and checks to disable GLib assertions and runtime checks for performance reasons. Please note that - these are not recommended because they have significant behavioural side-effects, make debugging harder, and should only - be enabled when targeting extremely resource-constrained platforms. + - Support for generating Python wheels for macOS and Windows + - These will be uploaded to PyPI, currently blocked on PyPI + - Support for iPhone Simulator on ARM64 macOS, via the new iOS xcframework + - Inno Setup is now used for Windows installers, which also bundle the MSVC runtime + - An installer is now shipped for Windows ARM64, built using MSVC + - GTK4 is now shipped on macOS and Windows (MSVC and MinGW) + - Smaller binary sizes of Rust plugins on all platforms except macOS and iOS + - Linux builds now integrate better with system dependencies + - Debuginfo is now correctly shipped on Windows and macOS - API/ABI changes: - - Libsoup has been upgraded from 2.74 to 3.6, which is an API and ABI breaking change. The soup and adaptivedemux2 plugins - are unchanged, but your applications may need updating since libsoup-2.4 and libsoup-3.0 cannot co-exist in the same - process. - - - OpenSSL has been updated from 1.1.1 to 3.4, which is an ABI and API breaking change. Plugins are unchanged, but your - applications may need updating. + - Android NDK r25 is now used, targeting API level 24 (Android 7.0) + - Merge modules are no longer shipped for Windows + - Windows installers are no longer MSIs + - The legacy iOS framework with iPhone ARM64 and iPhoneSimulator x86_64 binaries is now deprecated. It will be removed in + the next release. Please use the new iOS xcframework which supports iPhone ARM64 and iPhoneSimulator ARM64+x86_64. - Plugins added: - - The svt-av1 plugin is now shipped in the binary releases for all platforms. - - - The svt-jpeg-xs plugin is now shipped in the binary releases for all platforms. 
- - - The x265 plugin is now shipped in the binary releases for all platforms. - - - All gst-plugins-rs elements are now shipped in the binary releases for all platforms, except those that have C/C++ - system-deps like GTK4. For a full list, see the Rust section above. + - pbtypes is now shipped on all platforms + - curl is now shipped on all platforms except iOS and Android + - lcevcdec is now shipped on all platforms except Windows ARM64 and Windows 32-bit x86 + - svtjpegxs is now shipped on Linux and Windows, only on 64-bit + - unixfd is now shipped on all platforms except Windows + - mediafoundation is now shipped additionally on MinGW + - wasapi2 is now shipped additionally on MinGW + - New Rust plugins on all platforms except Windows ARM64: + - analytics + - audioparsers + - burn + - demucs + - elevenlabs + - gopbuffer + - hlsmultivariantsink + - icecastsink + - mpegtslive + - raptorq + - speechmatics + - streamgrouper + - vvdec - Plugins changed: - - The rsvg plugin now uses librsvg written in Rust. The only side-effects of this should be better SVG rendering and - slightly larger plugin size. - - - The webrtc Rust plugin now also supports aws and livekit integrations . - -- Plugins removed: - - - webrtc-audio-processing has been updated to 2.0, which means the isac plugin is no longer shipped. + - mp4 and fmp4 plugins have been merged into isobmff - Development improvements: - - Support for the shell command has been added to cross-macos-universal, since the prefix is executable despite being a - cross-compile target + - Debuginfo is now correctly shipped on Windows and macOS + - Support for iPhone Simulator on ARM64 macOS, via the new iOS xcframework - - More recipes have been ported away from Autotools to Meson and CMake, speeding up the build and increasing platform - support. 
+- Known issues: -#### macOS + - cerbero: Rust plugins fail to link with Xcode 26 on macOS + - cerbero: Rust plugins are not shipped in the Windows ARM64 installer + - cerbero: Android devices with API level >= 30 cannot play tutorials 4 or 5 – Fix aimed for 1.28.1 + - cerbero: Missing pkg-config for macOS in the Android release -- Python bindings support on macOS only supports using the Xcode-provided Python 3 - -- MoltenVK support in the applemedia plugin now also works on arm64 when doing a cross-universal build. - -#### iOS - -- CMake inside Xcode will soon be the recommended way to consume GStreamer when building iOS apps, similar to Android apps. - - - FindGStreamerMobile.cmake is the recommended way to consume GStreamer now - - - Tutorials and examples still use Xcode project files, but CMake support will be the active focus going forward - -#### Windows - -- The minimum supported OS version is now Windows 10. - - - GStreamer itself can still be built for an older Windows, so if your project is majorly impacted by this, please open an - issue with details. - -- The Windows MSI installers are now based on WiX v5, with several improvements including a much faster MSI creation process, - improved naming in Add/Remove Programs, and more. - - - Windows installer packages: Starting with 1.26, due to security reasons, the default installation directory has changed - from C:\gstreamer to the Program Files folder, e.g. C:\Program Files (x86)\gstreamer for the 32-bit package on 64-bit - Windows. If you upgrade from 1.24 or older versions, the 1.26 installers will NOT keep using the existing folder. - Nevertheless if you were using C:\gstreamer we strongly recommend you double-check the install location. - - - Note for MSI packagers: Starting with 1.26, the installers were ported to WiX 5.0. As part of this, the property for - setting the installation directory has been changed to INSTALLDIR, and it now requires a full path to the desired - directory, e.g. 
C:\gstreamer instead of C:\. - - - Cross-MinGW build no longer supports the creation of MSIs. Please use tarballs. - -- MinGW: - - - MinGW toolchain has been updated from GCC 8.2 → 14.2 and MinGW 6.0 → 12.0 - - - The mediafoundation plugin is also shipped in the MinGW packages now. - - - The d3d12 plugin is also shipped in the MinGW packages now. - - - Rust support has been enabled on MinGW 64-bit. Rust support cannot work on 32-bit MinGW due to differences in exception - handling between our 32-bit MinGW toolchain and that used by the Rust project - -- The asio plugin is shipped now, since it no longer has a build-time dependency on the ASIO SDK. - -- The new plugin webview2 is shipped with MSVC. It requires the relevant component shipped with Windows. - -#### Linux - -- Preliminary support for Alma Linux has been added. - -- RHEL distro support has been improved. - -- Cerbero CI now tests the build on Ubuntu 24.04 LTS. - -- curl is used for downloading sources on Fedora instead of wget, since they have moved to wget2 despite show-stopper - regressions such as returning a success error code on download failure. 
- -#### Android - -- CMake inside Gradle is now the recommended way to consume GStreamer when building Android apps - - - FindGStreamerMobile.cmake is the recommended way to consume GStreamer now - - - 1.26 will support both CMake and Make inside Gradle, but the Make support will likely be removed in 1.28 +## Documentation improvements - - Documentation updates are still a work-in-progress, help is appreciated +- Added a Windows section to the “building from source” page -- Android tutorials and examples are now built with gradle + cmake instead of gradle + make on the CI +- New Python tutorials for dynamic pipelines and time handling -## Documentation improvements +- The Android tutorials were updated: the provided projects now use Gradle 8.11 and API level 24 -- Tracer objects information is now included in the documentation +- Updates of the Machine Learning and Analytics design documentation and the GstMeta design docs ## Possibly Breaking Changes -- qroverlay: the "pixel-size" property has been removed in favour of a new "size" property with slightly different semantics, - where the size of the square is expressed in percent of the smallest of width and height. - -- svtav1enc: The SVT-AV1 3.0.0 API exposes a different mechanism to configure the level of parallelism when encoding, which - has been exposed as a new "level-of-parallelism" property. The old "logical-processors" property is no longer functional if - the plugin has been compiled against the new API, which might affect encoder performance if application code setting it is - not updated. - -- udpsrc: now disables allocated port reuse for unicast to avoid unexpected side-effects of SO_REUSEADDR where the kernel - allocates the same listening port for multiple udpsrc.
+- The MXF muxer and demuxer used to have direct support for standalone closed caption streams (closedcaption/x-cea-708) as + ancillary data, but that was removed in favour of more generic ST 2038 ancillary metadata, which is a better fit for how the + data is stored internally and also supports other kinds of ancillary data. Closed captions can still be stored or extracted by + using the ST 2038 elements from the Rust plugins module. Also see the MXF section above. + +- Analytics: Previously it was guaranteed that there was at most one GstTensorMeta per buffer. This is no longer the case, + and code working with GstTensorMeta must now be able to handle multiple GstTensorMeta instances (after this Merge Request, which was + apparently backported into 1.26 as well). -- uridecodebin3: remove the non-functional "source" property, which didn’t make sense and always returned NULL anyway. +- The thread ID reported in debug logs is no longer prefixed with a 0x on Windows, Linux and FreeBSD platforms. This change + can potentially break log parsers. GstDebugViewer was adapted accordingly. ## Known Issues -- GstBuffer now uses C11 atomics for 64 bit atomic operations if available, which may require linking to libatomic on some - systems, but this is not done automatically yet, see issue #4177. +- There are some open issues with the Apple hardware-accelerated AV1 decoding, which we hope will be fixed in due course. + Please let us know if you run into them and can test patches. + +- Autoplugging LCEVC H.264/H.265/H.266 streams is currently disabled until an issue with decodebin3 and non-LCEVC streams has + been resolved. It is still possible to re-enable this locally by overriding the rank of lcevch26*decodebin using the + GST_PLUGIN_FEATURE_RANK environment variable.
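The thread-ID change noted above is easy to absorb in log tooling: match the TID with an optional `0x` prefix so both pre-1.28 and 1.28 logs parse. A minimal sketch — the sample log lines below are fabricated for illustration, not real GStreamer output:

```shell
# Two fabricated debug-log lines: pre-1.28 style (0x-prefixed TID) and
# 1.28 style (bare hex TID). Count how many lines the tolerant pattern hits.
printf '%s\n' \
  '0:00:00.123456789 0x7f2a4c0018a0 WARN  basesrc gstbasesrc.c:42:fn: msg' \
  '0:00:00.123456789 7f2a4c0018a0 WARN  basesrc gstbasesrc.c:42:fn: msg' |
  grep -cE ' (0x)?[0-9a-f]{8,} '
# prints 2: both TID styles match the optional-prefix pattern
```

The same optional-prefix idea carries over to whatever regex engine a given log parser uses.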
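The rank override mentioned in the known-issues entry above needs no code changes: GST_PLUGIN_FEATURE_RANK takes a comma-separated list of feature:rank pairs, where the rank is NONE, MARGINAL, SECONDARY, PRIMARY, MAX or a number. A hedged sketch — the concrete element names below are assumed expansions of the lcevch26*decodebin glob, so verify them with gst-inspect-1.0 first:

```shell
# Re-enable LCEVC autoplugging for this process only by bumping the rank of
# the LCEVC decodebin wrappers. NOTE: the feature names are assumptions
# expanding "lcevch26*decodebin"; check `gst-inspect-1.0 | grep lcevc`.
export GST_PLUGIN_FEATURE_RANK='lcevch264decodebin:MAX,lcevch265decodebin:MAX,lcevch266decodebin:MAX'
# gst-launch-1.0 playbin3 uri=file:///path/to/lcevc-stream.mp4
```

The override only affects processes started with this environment, so it is easy to test without changing the system-wide GStreamer installation.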
## Statistics
-- 4496 commits
+- 3548 commits
-- 2203 Merge requests merged
+- 1765 Merge requests merged
-- 794 Issues closed
+- 476 Issues closed
-- 215+ Contributors
+- 190+ Contributors
-- ~33% of all commits and Merge Requests were in Rust modules/code
+- more than 35% of all commits and Merge Requests were in Rust modules/code
-- 4950 files changed
+- 5430 files changed
-- 515252 lines added
+- 395856 lines added
-- 201503 lines deleted
+- 249844 lines deleted
-- 313749 lines added (net)
+- 146012 lines added (net)
Contributors
-Aaron Boxer, Adrian Perez de Castro, Adrien De Coninck, Alan Coopersmith, Albert Sjolund, Alexander Slobodeniuk, Alex Ashley, -Alicia Boya García, Andoni Morales Alastruey, Andreas Wittmann, Andrew Yooeun Chun, Angelo Verlain, Aniket Hande, Antonio -Larrosa, Antonio Morales, Armin Begovic, Arnaud Vrac, Artem Martus, Arun Raghavan, Benjamin Gaignard, Benjamin Gräf, Bill -Nottingham, Brad Hards, Brad Reitmeyer, Branko Subasic, Carlo Caione, Carlos Bentzen, Carlos Falgueras García, cdelguercio, Chao -Guo, Cheah, Cheung Yik Pang, chitao1234, Chris Bainbridge, Chris Spencer, Chris Spoelstra, Christian Meissl, Christopher Degawa, -Chun-wei Fan, Colin Kinloch, Corentin Damman, Daniel Morin, Daniel Pendse, Daniel Stone, Dan Yeaw, Dave Lucia, David Rosca, Dean -Zhang (张安迪), Denis Yuji Shimizu, Detlev Casanova, Devon Sookhoo, Diego Nieto, Dongyun Seo, dukesook, Edward Hervey, eipachte, -Eli Mallon, Elizabeth Figura, Elliot Chen, Emil Ljungdahl, Emil Pettersson, eri, F. Duncanh, Fotis Xenakis, Francisco Javier -Velázquez-García, Francis Quiers, François Laignel, George Hazan, Glyn Davies, Guillaume Desmottes, Guillermo E.
Martinez, -Haihua Hu, Håvard Graff, He Junyan, Hosang Lee, Hou Qi, Hugues Fruchet, Hyunwoo, iodar, jadarve, Jakub Adam, Jakub Vaněk, James -Cowgill, James Oliver, Jan Alexander Steffens (heftig), Jan Schmidt, Jeffery Wilson, Jendrik Weise, Jerome Colle, Jesper Jensen, -Jimmy Ohn, Jochen Henneberg, Johan Sternerup, Jonas K Danielsson, Jonas Rebmann, Jordan Petridis, Jordan -Yelloz, Jorge Zapata, Joshua Breeden, Julian Bouzas, Jurijs Satcs, Kévin Commaille, Kevin Wang, Khem Raj, kingosticks, Leonardo -Salvatore, L. E. Segovia, Liam, Lim, Loïc Le Page, Loïc Yhuel, Lyra McMillan, Maksym Khomenko, Marc-André Lureau, Marek Olejnik, -Marek Vasut, Marianna Smidth Buschle, Marijn Suijten, Mark-André Schadow, Mark Nauwelaerts, Markus Ebner, Martin Nordholts, Mart -Raudsepp, Mathieu Duponchelle, Matthew Waters, Maxim P. DEMENTYEV, Max Romanov, Mengkejiergeli Ba, Michael Grzeschik, Michael -Olbrich, Michael Scherle, Michael Tretter, Michiel Westerbeek, Mikhail Rudenko, Nacho Garcia, Nick Steel, Nicolas Dufresne, -Niklas Jang, Nirbheek Chauhan, Ognyan Tonchev, Olivier Crête, Oskar Fiedot, Pablo García, Pablo Sun, Patricia Muscalu, Paweł -Kotiuk, Peter Kjellerstedt, Peter Stensson, pgarciasancho, Philippe Normand, Philipp Zabel, Piotr Brzeziński, Qian Hu (胡骞), -Rafael Caricio, Randy Li (ayaka), Rares Branici, Ray Tiley, Robert Ayrapetyan, Robert Guziolowski, Robert Mader, Roberto Viola, -Robert Rosengren, RSWilli, Ruben González, Ruijing Dong, Sachin Gadag, Sam James, Samuel Thibault, Sanchayan Maity, Scott Moreau, -Sebastian Dröge, Sebastian Gross, Sebastien Cote, Sergey Krivohatskiy, Sergey Radionov, Seungha Yang, Seungmin Kim, Shengqi Yu, -Sid Sethupathi, Silvio Lazzeretti, Simonas Kazlauskas, Stefan Riedmüller, Stéphane Cerveau, Tamas Levai, Taruntej Kanakamalla, -Théo Maillart, Thibault Saunier, Thomas Goodwin, Thomas Klausner, Tihran Katolikian, Tim Blechmann, Tim-Philipp Müller, Tjitte -de Wert, Tomas Granath, Tomáš Polomský, tomaszmi, Tom Schuring, U.
Artie Eoff, valadaptive, Víctor Manuel Jáquez Leal, Vivia -Nikolaidou, W. Bartel, Weijian Pan, William Wedler, Will Miller, Wim Taymans, Wojciech Kapsa, Xavier Claessens, Xi Ruoyao, -Xizhen, Yaakov Selkowitz, Yacine Bandou, Zach van Rijn, Zeno Endemann, Zhao, Zhong Hongcheng, - -… and many others who have contributed bug reports, translations, sent suggestions or helped testing. Thank you all! - -Stable 1.26 branch - -After the 1.26.0 release there will be several 1.26.x bug-fix releases which will contain bug fixes which have been deemed -suitable for a stable branch, but no new features or intrusive changes will be added to a bug-fix release usually. The 1.26.x -bug-fix releases will be made from the git 1.26 branch, which is a stable release series branch. - -1.26.1 - -The first 1.26 bug-fix release (1.26.1) was released on 24 April 2025. - -This release only contains bugfixes and security fixes and it should be safe to update from 1.26.0. - -Highlighted bugfixes in 1.26.1 - -- awstranslate and speechmatics plugin improvements -- decodebin3 fixes and urisourcebin/playbin3 stability improvements -- Closed captions: CEA-708 generation and muxing fixes, and H.264/H.265 caption extractor fixes -- dav1d AV1 decoder: RGB support, plus colorimetry, renegotiation and buffer pool handling fixes -- Fix regression when rendering VP9 with alpha -- H.265 decoder base class and caption inserter SPS/PPS handling fixes -- hlssink3 and hlsmultivariantsink feature enhancements -- Matroska v4 support in muxer, seeking fixes in demuxer -- macOS: framerate guessing for cameras or capture devices where the OS reports silly framerates -- MP4 demuxer uncompressed video handling improvements and sample table handling fixes -- oggdemux: seeking improvements in streaming mode -- unixfdsrc: fix gst_memory_resize warnings -- Plugin loader fixes, especially for Windows -- QML6 GL source renegotiation fixes -- RTP and RTSP stability fixes -- Thread-safety improvements for the Media Source 
Extension (MSE) library -- v4l2videodec: fix A/V sync issues after decoding errors -- Various improvements and fixes for the fragmented and non-fragmented MP4 muxers -- Video encoder base class segment and buffer timestamp handling fixes -- Video time code support for 119.88 fps and drop-frames-related conversion fixes -- WebRTC: Retransmission entry creation fixes and better audio level header extension compatibility -- YUV4MPEG encoder improvements -- dots-viewer: make it work locally without network access -- gst-python: fix compatibility with PyGObject >= 3.52.0 -- Cerbero: recipe updates, compatibility fixes for Python < 3.10; Windows Android cross-build improvements -- Various bug fixes, build fixes, memory leak fixes, and other stability and reliability improvements - -gstreamer - -- Correctly handle whitespace paths when executing gst-plugin-scanner -- Ensure properties are freed before (re)setting with g_value_dup_string() and during cleanup -- cmake: Fix PKG_CONFIG_PATH formatting for Windows cross-builds -- macos: Move macos function documentation to the .h so the introspection has the information -- meson.build: test for and link against libatomic if it exists -- pluginloader-win32: Fix helper executable path under devenv -- pluginloader: fix pending_plugins Glist use-after-free issue -- unixfdsrc: Complains about resize of memory area -- tracers: dots: fix debug log - -gst-plugins-base - -- Ensure properties are freed before (re)setting with g_value_dup_string() and during cleanup -- alsadeviceprovider: Fix leak of Alsa longname -- audioaggregator: fix error added in !8416 when chaining up -- audiobasesink: Fix custom slaving driftsamples calculation and add custom audio clock slaving callback example -- decodebin3: Don’t avoid parsebin even if we have a matching decoder -- decodebin3: Doesn’t plug parsebin for AAC from tsdemux -- gl: eglimage: warn the reason of export failure -- glcolorconvert: fix YUVA<->RGBA conversions -- glcolorconvert: regression
when rendering alpha vp9 -- gldownload: Unref glcontext after usage -- meson.build: test for and link against libatomic if it exists -- oggdemux: Don’t push new packets if there is a pending seek -- urisourcebin: Make parsebin activation more reliable -- urisourcebin: deadlock between parsebin and typefind -- videoencoder: Use the correct segment and buffer timestamp in the chain function -- videotimecode: Fix conversion of timecode to datetime with drop-frame timecodes and handle 119.88 fps correctly in all - places - -gst-plugins-good - -- Ensure properties are freed before (re)setting with g_value_dup_string() and during cleanup -- gst-plugins-good: Matroska mux v4 support -- matroska-demux: Prevent corrupt cluster duplication -- qml6glsrc: update buffer pool on renegotiation -- qt6: Add a missing newline in unsupported platform message -- qtdemux: Fix stsc size check in qtdemux_merge_sample_table() -- qtdemux: Next Iteration Of Uncompressed MP4 Decoder -- qtdemux: unref simple caps after use -- rtspsrc: Do not emit signal ‘no-more-pads’ too early -- rtspsrc: Don’t error out on not-linked too early -- rtpsession: Do not push events while holding SESSION_LOCK -- rtpsession: deadlock when gst_rtp_session_send_rtcp () is forwarding eos -- v4l2: drop frame for frames that cannot be decoded -- v4l2videodec: AV unsync for streams with many frames that cannot be decoded -- v4l2object: fix memory leak -- v4l2object: fix type mismatch when ioctl takes int -- y4menc: fix Y41B format -- y4menc: handle frames with GstVideoMeta - -gst-plugins-bad - -- Add missing Requires in pkg-config -- Ensure properties are freed before (re)setting with g_value_dup_string() and during cleanup -- Update docs -- aja: Use the correct location of the AJA NTV2 SDK in the docs -- alphacombine: De-couple flush-start/stop events handling -- alphadecodebin: use a multiqueue instead of a couple of queues -- avfvideosrc: Guess reasonable framerate values for some 3rd party devices -- codecalpha: 
name both queues -- d3d12converter: Fix cropping when automatic mipmap is enabled -- dashsink: Make sure to use a non-NULL pad name when requesting a pad from splitmuxsink -- docs: Fix GstWebRTCICE* class documentation -- h264ccextractor, h265ccextractor: Handle gap with unknown pts -- h265decoder, h265ccinserter: Fix broken SPS/PPS link -- h265parser: Fix num_long_term_pics bound check -- Segmentation fault in H265 decoder -- h266decoder: fix leak parsing SEI messages -- meson.build: test for and link against libatomic if it exists -- mse: Improved Thread Safety of API -- mse: Revert ownership transfer API change in gst_source_buffer_append_buffer() -- tensordecoders: updating element classification -- unixfd: Fix wrong memory size when offset > 0 -- uvcsink: Respond to control requests with proper error handling -- v4l2codecs: unref frame in all error paths of end_picture -- va: Skip codecs that report maximum width or height lower than minimum -- vapostproc: fix wrong video orientation after restarting the element -- vavp9enc: fix mem leaks in _vp9_decide_profile -- vkformat: fix build error -- vtenc: Avoid deadlocking when changing properties on the fly -- vulkan: fix memory leak at dynamic registering -- webrtc: enhance rtx entry creation -- webrtcbin: add missing warning for caps mismatch -- ZDI-CAN-26596: New Vulnerability Report (Security) - -gst-plugins-ugly - -- No changes - -GStreamer Rust plugins - -- Bump MSRV to 1.83 -- Allow any windows-sys version >= 0.52 and <= 0.59 -- aws/polly: add GstScaletempoTargetDurationMeta to output buffers -- awstranslate: improve message posted on bus -- cdg: typefind: Division by zero fix -- cea708mux: Improve support for overflowing input captions -- colordetect: Change to videofilter base class -- dav1ddec: Drain decoder on caps changes if necessary -- dav1ddec: Only update unknown parts of the upstream colorimetry and not all of it -- dav1ddec: Support RGB encoded AV1 -- dav1ddec: Use downstream buffer pool for
copying if video meta is not supported -- dav1ddec: Use max-frame-delay value from the decoder instead of calculating it -- doc: Update to latest way of generating hotdoc config files -- Fix gtk4 compile -- Fix various clippy 1.86 warnings and update gstreamer-rs / gtk-rs dependencies -- fmp4mux: Add a couple of minor new features -- fmp4mux: Add manual-split mode that is triggered by serialized downstream events -- fmp4mux: Add send-force-keyunit property -- fmp4mux: Fix latency configuration for properties set during construction -- fmp4mux: Improve split-at-running-time handling -- fmp4mux/mp4mux: Handle the case of multiple tags per taglist correctly -- fmp4mux: Write a v0 tfdt box if the decode time is small enough -- gstwebrtc-api: Add TypeScript type definitions, build ESM for broader compatibility, improve JSDocs -- hlsmultivariantsink: Allow users to specify playlist and segment location -- hlssink3 - Add Support for NTP timestamp from buffer -- livesync: Notify in/out/drop/duplicate properties on change -- livesync: Only notify drop/duplicate properties -- meson: Require gst 1.18 features for dav1d -- mp4mux: Don’t write composition time offsets if they’re all zero -- mp4mux, fmp4mux: Use correct timescales for edit lists -- mpegtslivesrc: increase threshold for PCR <-> PTS DISCONT -- mpegtslivesrc: Use a separate mutex for the properties -- mux: use smaller number of samples for testing -- net/aws: punctuation-related improvements to our span_tokenize_items function -- pcap_writer: Mark target-factory and pad-path props as construct-only -- speechmatics: Handle multiple stream-start event -- tracers: buffer-lateness: don’t panic on add overflow + reduce graph legend entry font size a bit -- tracers: Update to etherparse 0.17 -- transcriberbin: make auto passthrough work when transcriber is a bin -- ts-jitterbuffer: improve scheduling of lost events -- tttocea708: fix
origin-row handling for roll-up in CEA-708 -- Update Cargo.lock to remove old windows-targets 0.48.5 -- Update dependencies -- Update gtk-rs / gstreamer-rs dependencies and update for API changes -- Update to bitstream-io 3 -- uriplaylistbin: skip cache test when offline -- webrtc: Port to reqwest 0.12 -- webrtcsink: Fix compatibility with audio level header extension - -gst-libav - -- No changes - -gst-rtsp-server - -- Ensure properties are freed before (re)setting with g_value_dup_string() and during cleanup - -gstreamer-vaapi - -- No changes - -gstreamer-sharp - -- No changes - -gst-python - -- gst-python: fix compatibility with PyGObject >= 3.52.0 -- gst-python: Segmentation Fault since PyGObject >= 3.52.0 due to missing _introspection_module attribute - -gst-editing-services - -- Ensure properties are freed before (re)setting with g_value_dup_string() and during cleanup - -gst-devtools, gst-validate + gst-integration-testsuites - -- Add missing Requires in pkg-config -- devtools: dots-viewer: Bundle js dependencies using webpack -- devtools: dots-viewer: Update dependencies and make windows dependencies conditional - -gst-examples - -- examples: Update Rust dependencies -- examples: webrtc: rust: Move from async-std to tokio - -gstreamer-docs - -- Update docs - -Development build environment - -- No changes - -Cerbero build tool and packaging changes in 1.26.1 - -- FindGStreamerMobile: Override pkg-config on Windows -> Android cross builds -- Fix BuildTools not using recipes_remotes and recipes_commits -- bootstrap, meson: Use pathlib.Path.glob to allow Python < 3.10 -- Use of ‘glob(…, root_dir)’ requires Python >=3.10, cerbero enforces >= 3.7 -- harfbuzz: update to 10.4.0 -- Update fontconfig to 2.16.1, pango to 1.56.2 - -Contributors to 1.26.1 - -Alexander Slobodeniuk, Alyssa Ross, Artem Martus, Arun Raghavan, Brad Hards, Carlos Bentzen, Carlos Rafael Giani, Daniel Morin, -David Smitmanis, Detlev Casanova, Dongyun Seo, Doug Nazar, dukesook, Edward Hervey, 
eipachte, Eli Mallon, François Laignel, -Guillaume Desmottes, Gustav Fahlen, Hou Qi, Jakub Adam, Jan Schmidt, Jan Tojnar, Jordan Petridis, Jordan Yelloz, L. E. Segovia, -Marc Leeman, Marek Olejnik, Mathieu Duponchelle, Matteo Bruni, Matthew Waters, Michael Grzeschik, Nirbheek Chauhan, Ognyan -Tonchev, Olivier Blin, Olivier Crête, Philippe Normand, Piotr Brzeziński, Razvan Grigore, Robert Mader, Sanchayan Maity, -Sebastian Dröge, Seungha Yang, Shengqi Yu (喻盛琪), Stefan Andersson, Stéphane Cerveau, Thibault Saunier, Tim-Philipp Müller, -tomaszmi, Víctor Manuel Jáquez Leal, Xavier Claessens, +Aaron Boxer, Abd Razak, Adrian Perez de Castro, Adrien Plazas, Albert Sjolund, Aleix Pol, Alexander Slobodeniuk, Alicia Boya +García, Alyssa Ross, Amotz Terem, Amy Ko, Andoni Morales Alastruey, Andrew Yooeun Chun, Andrey Khamukhin, anonymix007, Arnout +Engelen, Artem Martus, Arun Raghavan, Ben Butterworth, Biswapriyo Nath, Brad Hards, Brad Reitmeyer, Branko Subasic, Camilo Celis +Guzman, Carlos Bentzen, Carlos Falgueras García, Carlos Rafael Giani, César Alejandro Torrealba Vázquez, Changyong Ahn, Chengfa +Wang, Christian Gräfe, Christo Joseph, Christopher Degawa, Christoph Reiter, Daniel Almeida, Daniel Morin, David Maseda Neira, +David Monge, David Smitmanis, Denis Shimizu, Derek Foreman, Detlev Casanova, Devon Sookhoo, Diego Nieto, Dominique Leroux, +DongJoo Kim, Dongyun Seo, Doug Nazar, Edward Hervey, Ekwang Lee, eipachte, Eli Mallon, Elliot Chen, Enock Gomes Neto, Enrique +Ocaña González, Eric, Eva Pace, F. 
Duncanh, François Laignel, Gang Zhao, Glyn Davies, Guillaume Desmottes, Gustav Fahlen, +Haejung Hwang, Haihua Hu, Havard Graff, Hanna Weiß, He Junyan, Hou Qi, Hyunjun Ko, Ian Napier, Inbok Kim, Jaehoon Lee, Jakub +Adam, James Cowgill, Jan Alexander Steffens (heftig), Jan Schmidt, Jan Tojnar, Jan Vermaete, Jaslo Ziska, Jeehyun Lee, Jeffery +Wilson, Jeongmin Kwak, Jerome Colle, Jiayin Zhang, Jihoon Lee, Jochen Henneberg, Johan Sternerup, Jonathan Lui, +Jordan Petridis, Jordan Yelloz, Jorge Zapata, Julian Bouzas, Kevin Scott, Kevin Wolf, L. E. Segovia, Linus Svensson, Loïc Le +Page, Manuel Torres, Marc-André Lureau, Marc Leeman, Marek Olejnik, Mark Nauwelaerts, Marko Kohtala, Markus Hofstaetter, Mathieu +Duponchelle, Matteo Bruni, Matthew Semeniuk, Matthew Waters, Max Goltzsche, Mazdak Farzone, Michael Grzeschik, Michael Olbrich, +Michiel Westerbeek, Monty C, Muhammad Azizul Hazim, Nicholas Jin, Nicolas Dufresne, Nirbheek Chauhan, Norbert Hańderek, Ognyan +Tonchev, Ola Fornander, Olivier Blin, Olivier Crête, Oz Donner, Pablo García, Patricia Muscalu, Patrick Fischer, Paul Fee, Paweł +Kotiuk, Paxton Hare, Peter Stensson, pfee, Philippe Normand, Piotr Brzeziński, Pratik Pachange, Qian Hu +(胡骞), r4v3n6101, Rafael Caricio, Raghavendra Rao, Rares Branici, Ratchanan Srirattanamet, Razvan Grigore, Rick Ye, Rinat Zeh, +Robert Ayrapetyan, Robert Mader, Ross Burton, Ruben Gonzalez, Ruben Sanchez, Samuel Thibault, Sanchayan Maity, Santiago +Carot-Nemesio, Santosh Mahto, Sebastian Dröge, Seungha Yang, Shengqi Yu (喻盛琪), Sjoerd Simons, Slava Sokolovsky, Stefan +Andersson, Stefan Dangl, Stéphane Cerveau, stevn, Sven Püschel, Sylvain Garrigues, Taruntej Kanakamalla, Teus Groenewoud, Théo +Maillart, Thibault Saunier, Tim-Philipp Müller, Tjitte de Wert, Tobias Schlager, Tobias Koenig, Tomasz Mikolajczyk, Tulio +Beloqui, Val Packett, Vasiliy Doylov, Víctor Manuel Jáquez Leal, Vincent Beng Keat Cheah, Vineet Suryan, Vivia Nikolaidou, +Vivian Lee, Vivienne
Watermeier, Wilhelm Bartel, William Wedler, Wim Taymans, Xavier Claessens, Yun Liu, … and many others who have contributed bug reports, translations, sent suggestions or helped testing. Thank you all! -List of merge requests and issues fixed in 1.26.1 +Stable 1.28 branch -- List of Merge Requests applied in 1.26.1 -- List of Issues fixed in 1.26.1 - -1.26.2 - -The second 1.26 bug-fix release (1.26.2) was released on 29 May 2025. - -This release only contains bugfixes as well as a number of security fixes and important playback fixes, and it should be safe to -update from 1.26.0. - -Highlighted bugfixes in 1.26.2 - -- Various security fixes and playback fixes -- aggregator base class fixes to not produce buffers too early in live mode -- AWS translate element improvements -- D3D12 video decoder workarounds for crashes on NVIDIA cards on resolution changes -- dav1d AV1-decoder performance improvements -- fmp4mux: tfdt and composition time offset fixes, plus AC-3 / EAC-3 audio support -- GStreamer editing services fixes for sources with non-1:1 aspect ratios -- MIDI parser improvements for tempo changes -- MP4 demuxer atom parsing improvements and security fixes -- New skia-based video compositor element -- Subtitle parser security fixes -- Subtitle rendering and seeking fixes -- Playbin3 and uridecodebin3 stability fixes -- GstPlay stream selection improvements -- WAV playback regression fix -- GTK4 paintable sink colorimetry support and other improvements -- WebRTC: allow webrtcsrc to wait for a webrtcsink producer to initiate the connection -- WebRTC: new Janus Video Room WebRTC source element -- vah264enc profile decision making logic fixes -- Python bindings gained support for handling mini object writability (buffers, caps, etc.) 
-- Various bug fixes, build fixes, memory leak fixes, and other stability and reliability improvements - -gstreamer - -- aggregator: Various state related fixes -- element: ref-sink the correct pad template when replacing an existing one -- pipeline: Store the actual latency even if no static latency was configured -- structure: Add gst_structure_is_writable() API to allow Python bindings to handle writability of MiniObjects -- tracerutils: Do not warn on empty string as tracername -- tracerutils: Fix leak in gst_tracer_utils_create_tracer() -- Ensure properties are freed before (re)setting with g_value_dup_object() or g_value_dup_boxed() and during cleanup -- Fix new warnings on Fedora 42, various meson warnings, and other small meson build/wrap fixes - -gst-plugins-base - -- alsa: Avoid infinite loop in DSD rate detection -- gl: Implement basetransform meta transform function -- glshader: free shader on stop -- glupload: Only add texture-target field to GL caps -- gstaudioutilsprivate: Fix gcc 15 compiler error with function pointer -- mikey: Avoid infinite loop while parsing MIKEY payload with unhandled payload types -- properties: add G_PARAM_STATIC_STRINGS where missing -- riff-media: fix MS and DVI ADPCM av_bps calculations -- subtitleoverlay: Remove 0.10 hardware caps handling -- subtitleoverlay: Missing support for DMABuf(?)
-- tests: opus: Update channel support and add to meson -- textoverlay: fix shading for RGBx / RGBA pixel format variants -- textoverlay background is wrong while cropping -- uridecodebin3: Don’t hold play items lock while releasing pads -- uridecodebin3: deadlock on PLAY_ITEMS_LOCK -- Fix new warnings on Fedora 42, various meson warnings, and other small meson build/wrap fixes -- Fix Qt detection in various places - -gst-plugins-good - -- adaptivedemux2: Fixes for collection handling -- adaptivedemux2: Fix several races -- dash: mpdclient: Don’t pass terminating NUL to adapter -- gl: Implement basetransform meta transform function -- imagefreeze: Set seqnum from segment too -- interleave: Don’t hold object lock while querying caps downstream -- matroskamux: Write stream headers before finishing file, so that a correct file with headers is written if we finish without - any data -- meson: Add build_rpath for qt6 plugin on macOS -- meson: Fix qt detection in various places -- properties: add G_PARAM_STATIC_STRINGS where missing -- qtdemux: Check length of JPEG2000 colr box before parsing it -- qtdemux: Parse chan box and improve raw audio channel layout handling -- qtdemux: Improve track parsing -- qtdemux: Use byte reader to parse mvhd box -- qtdemux: cmpd box is only mandatory for uncompressed video with uncC version 0 -- rtph264pay: Reject stream-format=avc without codec_data -- rtputils: Add debug category -- v4l2: pool: Send drop frame signal after dqbuf success -- v4l2: pool: fix assert when mapping video frame with DMA_DRM caps -- v4l2videoenc: report error only when buffer pool parameters are invalid -- wavparse: Ignore EOS when parsing the headers -- wavparse: Regression leading to unplayable wav files that were working before -- Ensure properties are freed before (re)setting with g_value_dup_object() or g_value_dup_boxed() and during cleanup -- Fix new warnings on Fedora 42, various meson warnings, and other small meson build/wrap fixes -- Fixes for big
endian -- Switch to GST_AUDIO_NE() -- Valgrind fixes - -gst-plugins-bad - -- alphacombine: Fix seeking after EOS -- cuda: Fix runtime PTX compile, fix example code build with old CUDA SDK -- curl: Fix build with MSVC -- curl: small fixups p3 -- d3d12: Fix gstreamer-full subproject build with gcc -- d3d12: Generate gir file -- d3d12decoder: Workaround for NVIDIA crash on resolution change -- d3d12memory: Allow set_fence() only against writable memory -- d3d12memory: Make D3D12 map flags inspectable -- d3d12screencapturesrc: Fix desktop handle leak -- dash: mpdclient: Don’t pass terminating NUL to adapter -- dvbsuboverlay: Actually make use of subtitle running time instead of using PTS -- dvbsuboverlay: No subtitles after seek -- h264parse: Never output stream-format=avc/avc3 caps without codec_data -- lcevc: Use portable printf formatting macros -- midiparse: Consider tempo changes when calculating duration -- nvencoder: Fix GstVideoCodecFrame leak on non-flow-ok return -- play: Improve stream selection -- properties: add G_PARAM_STATIC_STRINGS where missing -- rtpsender: fix ‘priority’ GValue get/set -- va: Fix H264 profile decision logic -- vulkan/wayland: Init debug category before usage -- Ensure properties are freed before (re)setting with g_value_dup_object() or g_value_dup_boxed() and during cleanup -- Fix new warnings on Fedora 42, various meson warnings, and other small meson build/wrap fixes -- Fixes for big endian -- Fix Qt detection in various places -- Switch to GST_AUDIO_NE() -- Valgrind fixes - -gst-plugins-ugly - -- No changes - -GStreamer Rust plugins - -- awstranslate: improve control over accumulator behavior -- awstranslate: output buffer lists -- cea608tott: make test text less shocking by having more cues as context -- dav1ddec: Directly decode into downstream allocated buffers if possible -- deny: Allow webpki-root-certs license -- fmp4mux: Add support for AC-3 / EAC-3 -- fmp4mux: Use earliest PTS for the base media decode time (tfdt) -- 
fmp4mux: Fix handling of negative DTS in composition time offset -- fmp4mux: Write lmsg as compatible brand into the last fragment -- mp4mux: add extra brands -- mp4: avoid dumping test output into build directory -- mp4: migrate to mp4-atom to check muxing -- mp4: test the trak structure -- gtk4: Update and adapt to texture builder API changes -- gtk4: Initial colorimetry support -- gtk4: Update default GTK4 target version to 4.10 -- rtp: Update to bitstream-io 4.0 -- skia: Implement a video compositor using skia -- webrtc: addressing a few deadlocks -- webrtc: Support for producer sessions targeted at a given consumer -- webrtc: add new JanusVR source element -- webrtc: janus: clean up and refactoring -- webrtcsink: Use seq number instead of Uuid for discovery -- webrtc: Make older peers less likely to crash when webrtcsrc is used -- Fix or silence various new clippy warnings -- Update Cargo.lock to fix duplicated target-lexicon - -gst-libav - -- Valgrind fixes -- libav: Only allocate extradata while decoding - -gst-rtsp-server - -- properties: add G_PARAM_STATIC_STRINGS where missing -- properties: ensure properties are freed before (re)setting with g_value_dup_object() or g_value_dup_boxed() and during - cleanup -- tests: Valgrind fixes - -gstreamer-vaapi - -- Ensure properties are freed before (re)setting with g_value_dup_object() or g_value_dup_boxed() and during cleanup - -gstreamer-sharp - -- No changes - -gst-python - -This release includes important fixes for the GStreamer Python bindings. - -Since pygobject 3.13 around 10 years ago, it wasn’t possible anymore to modify GStreamer miniobjects, e.g. modify caps or set -buffer timestamps, as an implicit copy of the original would always be made. This should finally work again now. 
- -- Fix new warnings on Fedora 42, various meson warnings, and other small meson build/wrap fixes -- python: Add overrides to be able to handle writability of MiniObjects -- python: Convert buffer metadata API to use @property decorators -- REGRESSION: pygobject 3.13 now copies the GstStructure when getting them from a GstCaps, making it impossible to properly - modify structures from caps in place - -gst-editing-services - -- Fix frame position for sources with par < 1 -- Fix video position for sources with pixel-aspect-ratio > 1 -- Valgrind fixes -- properties: add G_PARAM_STATIC_STRINGS where missing -- Switch to GST_AUDIO_NE() to make things work properly on Big Endian systems - -gst-devtools, gst-validate + gst-integration-testsuites - -- Fix new warnings on Fedora 42, various meson warnings, and other small meson build/wrap fixes -- validate: baseclasses: Reset Test timeouts between iterations -- validate: scenario: Fix race condition when ignoring EOS - -gst-examples - -- Fix new warnings on Fedora 42, various meson warnings, and other small meson build/wrap fixes -- webrtc examples: Fix running against self-signed certs -- webrtc/signalling: fix compatibility with python 3.13 - -gstreamer-docs - -- No changes - -Development build environment - -- Various wrap updates -- Add qt-method meson options to fix Qt detection in various places -- pre-commit: Workaround broken shebang on Windows - -Cerbero build tool and packaging changes in 1.26.2 - -- directx-headers: Fix g-ir-scanner expecting MSVC naming convention for gst-plugins-bad introspection -- m4: update recipe to fix hang in configure -- pango: Fix introspection missing since 1.56.2 update - -Contributors to 1.26.2 - -Adrian Perez de Castro, Alexander Slobodeniuk, Alicia Boya García, Andoni Morales Alastruey, Biswapriyo Nath, Brad Hards, Branko -Subasic, Christoph Reiter, Daniel Morin, Doug Nazar, Devon Sookhoo, Eva Pace, Guillaume Desmottes, Hou Qi, Jakub Adam, Jan -Schmidt, Jochen Henneberg, Jordan 
Petridis, L. E. Segovia, Mathieu Duponchelle, Matthew Waters, Nicolas Dufresne, Nirbheek -Chauhan, Olivier Crête, Pablo García, Piotr Brzeziński, Robert Mader, Sebastian Dröge, Seungha Yang, Thibault Saunier, -Tim-Philipp Müller, Vasiliy Doylov, Wim Taymans, Xavier Claessens, Zhao, Gang, - -… and many others who have contributed bug reports, translations, sent suggestions or helped testing. Thank you all! - -List of merge requests and issues fixed in 1.26.2 - -- List of Merge Requests applied in 1.26.2 -- List of Issues fixed in 1.26.2 - -1.26.3 - -The third 1.26 bug-fix release (1.26.3) was released on 26 June 2025. - -This release only contains bugfixes including some important playback fixes, and it should be safe to update from 1.26.x. - -Highlighted bugfixes in 1.26.3 - -- Security fix for the H.266 video parser -- Fix regression for WAV files with acid chunks -- Fix high memory consumption caused by a text handling regression in uridecodebin3 and playbin3 -- Fix panic on late GOP in fragmented MP4 muxer -- Closed caption conversion, rendering and muxing improvements -- Decklink video sink preroll frame rendering and clock drift handling fixes -- MPEG-TS demuxing and muxing fixes -- MP4 muxer fixes for creating very large files with faststart support -- New thread-sharing 1:N inter source and sink elements, and a ts-rtpdtmfsrc -- New speech synthesis element around ElevenLabs API -- RTP H.265 depayloader fixes and improvements, as well as TWCC and GCC congestion control fixes -- Seeking improvements in DASH client for streams with gaps -- WebRTC sink and source fixes and enhancements, including to LiveKit and WHIP signallers -- The macOS osxvideosink now posts navigation messages -- QtQML6GL video sink input event handling improvements -- Overhaul detection of hardware-accelerated video codecs on Android -- Video4Linux capture source fixes and support for BT.2100 PQ and 1:4:5:3 colorimetry -- Vulkan buffer upload and memory handling regression fixes -- 
gst-python: fix various regressions introduced in 1.26.2 -- cerbero: fix text relocation issues on 32-bit Android and fix broken VisualStudio VC templates -- packages: ship pbtypes plugin and update openssl to 3.5.0 LTS -- Various bug fixes, build fixes, memory leak fixes, and other stability and reliability improvements - -gstreamer - -- aggregator: Do not set event seqnum to INVALID -- baseparse: test: Fix race on test start -- pad: Only remove TAG events on STREAM_START if the stream-id actually changes -- utils: Mark times array as static to avoid symbol conflict with the POSIX function -- vecdeque: Use correct index type gst_vec_deque_drop_struct() - -gst-plugins-base - -- GstAudioAggregator: fix structure unref in peek_next_sample() -- audioconvert: Fix setting mix-matrix when input caps changes -- encodebasebin: Duplicate encoding profile in property setter -- gl: simplify private gst_gl_gst_meta_api_type_tags_contain_only() -- osxvideosink: Use gst_pad_push_event() and post navigation messages -- playsink: Fix race condition in stream synchronizer pad cleanup during state changes -- python: Fix pulling events from appsink -- streamsynchronizer: Consider streams having received stream-start as waiting -- urisourcebin: Text tracks are no longer set as sparse stream in urisourcebin’s multiqueue - -gst-plugins-good - -- aacparse: Fix counting audio channels in program_config_element -- adaptivedemux2: free cancellable when freeing transfer task -- dashdemux2: Fix seeking in a stream with gaps -- decodebin wavparse cannot pull header -- imagefreeze: fix not negotiate log when stop -- osxvideosink: Use gst_pad_push_event() and post navigation messages -- qml6glsink: Allow configuring if the item will consume input events -- qtmux: Update chunk offsets when converting stco to co64 with faststart -- splitmuxsink: Only send closed message once per open fragment -- rtph265depay: CRA_NUT can also start an (open) GOP -- rtph265depay: fix codec_data generation -- 
rtspsrc: Don’t emit error during close if server is EOF -- twcc: Fix reference timestamp wrapping (again) -- v4l2: Fix possible internal pool leak -- v4l2object: Add support for colorimetry bt2100-pq and 1:4:5:3 -- wavparse: Don’t error out always when parsing acid chunks - -gst-plugins-bad - -- amc: Overhaul hw-accelerated video codecs detection -- bayer2rgb: Fix RGB stride calculation -- d3d12compositor: Fix critical warnings -- dashsink: Fix failing test -- decklink: calculate internal using values closer to the current clock times -- decklinkvideosink: show preroll frame correctly -- decklink: clock synchronization after pause -- h266parser: Fix overflow when parsing subpic_level_info -- lcevcdec: Check for errors after receiving all enhanced and base pictures -- meson: fix building -bad tests with disabled soundtouch -- mpegts: handle MPEG2-TS with KLV metadata safely by preventing out of bounds -- mpegtsmux: Corrections around Teletext handling -- srtsink: Fix header buffer filtering -- transcoder: Fix uritranscodebin reference handling -- tsdemux: Allow access unit parsing failures -- tsdemux: Send new-segment before GAP -- vulkanupload: fix regression for uploading VulkanBuffer -- vulkanupload: fix regression when uploading to single memory multiplaned memory images. 
-- webrtcbin: disconnect signal ICE handlers on dispose -- {d3d12,d3d11}compositor: Fix negative position handling -- {nv,d3d12,d3d11}decoder: Use interlace info in input caps - -gst-plugins-ugly - -- No changes - -GStreamer Rust plugins - -- Add new speech synthesis element around ElevenLabs API -- cea708mux: fix another WouldOverflow case -- cea708mux: support configuring a limit to how much data will be pending -- cea708overlay: also reset the output size on flush stop -- gcc: handle out of order packets -- fmp4mux: Fix panic on late GOP -- livekit: expose a connection state property -- mp4mux: add taic box -- mp4mux: test the trak structure -- pcap_writer: Make target-property and pad-path properties writable again -- skia: Don’t build skia plugin by default for now -- threadshare: cleanups & usability improvements -- threadshare: sync runtime with latest async-io -- threadshare: fix kqueue reactor -- threadshare: Update to getifaddrs 0.2 -- threadshare: add new thread-sharing inter elements -- threadshare: add a ts-rtpdtmfsrc element -- transcriberbin: fix naming of subtitle pads -- tttocea708: don’t panic if a new service would overflow -- webrtc: android: Update Gradle and migrate to FindGStreamerMobile -- webrtc: add new examples for stream selection over data channel -- webrtcsrc: the webrtcbin get-transceiver index is not mlineindex -- webrtcsrc: send CustomUpstream events over control channel .. 
-- webrtcsink: Don’t require encoder element for pre-encoded streams -- webrtcsink: Don’t reject caps events if the codec_data changes -- whip: server: pick session-id from the endpoint if specified -- cargo: add config file to force CARGO_NET_GIT_FETCH_WITH_CLI=true -- Cargo.lock, deny: Update dependencies and log duplicated target-lexicon -- Update windows-sys dependency from “>=0.52, <=0.59” to “>=0.52, <=0.60” -- deny: Add override for windows-sys 0.59 -- deny: Update lints -- cargo_wrapper: Fix backslashes being parsed as escape codes on Windows -- Fixes for Clock: non-optional return types -- Rename relationmeta plugin to analytics - -gst-libav - -- No changes - -gst-rtsp-server - -- rtsp-server: tests: Fix a few memory leaks - -gstreamer-vaapi - -- No changes - -gstreamer-sharp - -- No changes - -gst-python - -This release includes important fixes for GStreamer Python bindings regressions introduced in 1.26.2. - -- gst-python/tests: don’t depend on webrtc and rtsp-server -- python: Fix pulling events from appsink and other fixes - -gst-editing-services - -- No changes - -gst-devtools, gst-validate + gst-integration-testsuites - -- validate: More memory leaks -- validate: Valgrind fixes - -gst-examples - -- No changes - -gstreamer-docs - -- No changes - -Development build environment - -- gst-env: Emit a warning about DYLD_LIBRARY_PATH on macOS - -Cerbero build tool and packaging changes in 1.26.3 - -- WiX: fix broken VC templates -- android: Don’t ignore text relocation errors on 32-bit, and error out if any are found -- build: source: handle existing .cargo/config.toml as in plugins-rs -- ci: Detect text relocations when building android examples -- gst-plugins-base: Ship pbtypes -- gst-plugins-base: Fix category of pbtypes -- gst-plugins-rs: Update for relationmeta -> analytics plugin rename -- libsoup.recipe: XML-RPC support was removed before the 3.0 release -- openssl: Update to 3.5.0 LTS - -Contributors to 1.26.3 - -Albert
Sjolund, Aleix Pol, Ben Butterworth, Brad Hards, César Alejandro Torrealba Vázquez, Changyong Ahn, Doug Nazar, Edward -Hervey, Elliot Chen, Enrique Ocaña González, François Laignel, Glyn Davies, He Junyan, Jakub Adam, James Cowgill, Jan Alexander -Steffens (heftig), Jan Schmidt, Jochen Henneberg, Johan Sternerup, Julian Bouzas, L. E. Segovia, Loïc Le Page, Mathieu -Duponchelle, Matthew Waters, Nicolas Dufresne, Nirbheek Chauhan, Philippe Normand, Pratik Pachange, Qian Hu (胡骞), Sebastian -Dröge, Seungha Yang, Taruntej Kanakamalla, Théo Maillart, Thibault Saunier, Tim-Philipp Müller, Víctor Manuel Jáquez Leal, -Xavier Claessens, - -… and many others who have contributed bug reports, translations, sent suggestions or helped testing. Thank you all! - -List of merge requests and issues fixed in 1.26.3 - -- List of Merge Requests applied in 1.26.3 -- List of Issues fixed in 1.26.3 - -1.26.4 - -The fourth 1.26 bug-fix release (1.26.4) was released on 16 July 2025. - -This release only contains bugfixes including some important playback fixes, and it should be safe to update from 1.26.x. 
- -Highlighted bugfixes in 1.26.4 - -- adaptivedemux2: Fixed reverse playback -- d3d12screencapture: Add support for monitor add/remove in device provider -- rtmp2src: various fixes to make it play back AWS medialive streams -- rtph265pay: add profile-id, tier-flag, and level-id to output rtp caps -- vp9parse: Fix handling of spatial SVC decoding -- vtenc: Fix negotiation failure with profile=main-422-10 -- gtk4paintablesink: Add YCbCr memory texture formats and other improvements -- livekit: add room-timeout -- mp4mux: add TAI timestamp muxing support -- rtpbin2: fix various race conditions, plus other bug fixes and performance improvements -- threadshare: add a ts-rtpdtmfsrc element, implement run-time input switching in ts-intersrc -- webrtcsink: fix deadlock on error setting remote description and other fixes -- cerbero: WiX installer: fix missing props files in the MSI packages -- smaller macOS/iOS package sizes -- Various bug fixes, build fixes, memory leak fixes, and other stability and reliability improvements - -gstreamer - -- tracers: Fix deadlock in latency tracer -- Fix various valgrind/test errors when GST_DEBUG is enabled -- More valgrind and test fixes -- Various ASAN fixes - -gst-plugins-base - -- Revert “streamsynchronizer: Consider streams having received stream-start as waiting” -- alsa: free conf cache under valgrind -- gst-device-monitor: Fix caps filter splitting -- Fix various valgrind/test errors when GST_DEBUG is enabled -- More valgrind and test fixes -- Various ASAN fixes - -gst-plugins-good - -- adaptivedemux2: Fixed reverse playback -- matroskademux: Send tags after seeking -- qtdemux: Fix incorrect FourCC used when iterating over sbgp atoms -- qtdemux: Incorrect sibling type used in sbgp iteration loop -- rtph265pay: add profile-id, tier-flag, and level-id to output rtp caps -- rtpjpeg: fix copying of quant data if it spans memory segments -- soup: Disable range requests when talking to Python’s http.server -- v4l2videodec: need 
replace acquired_caps on set_format success -- Fix various valgrind/test errors when GST_DEBUG is enabled -- More valgrind and test fixes -- Various ASAN fixes - -gst-plugins-bad - -- avtp: crf: Setup socket during state change to ensure we handle failure -- d3d12screencapture: Add support for monitor add/remove in device provider -- mpegtsmux: fix double free caused by shared PMT descriptor -- openh264: Ensure src_pic is initialized before use -- rtmp2src: various fixes to make it play back AWS medialive streams -- ssdobjectdetector: Use correct tensor data index for the scores -- v4l2codecs: h265dec: Fix zero-copy of cropped window located at position 0,0 -- vp9parse: Fix handling of spatial SVC decoding -- vp9parse: Revert “Always default to super-frame” -- vtenc: Fix negotiation failure with profile=main-422-10 -- vulkan: Fix drawing too many triangles in fullscreenquad -- vulkanfullscreenquad: add locks for synchronisation -- Fix various valgrind/test errors when GST_DEBUG is enabled -- More valgrind and test fixes -- Various ASAN fixes - -gst-plugins-ugly - -- No changes - -GStreamer Rust plugins - -- aws: s3hlssink: Write to S3 on OutputStream flush -- cea708mux: fix clipping function -- dav1ddec: Use video decoder base class latency reporting API -- elevenlabssynthesizer: fix running time checks -- gopbuffer: Push GOPs in order of time on EOS -- gtk4: Improve color-state fallbacks for unknown values -- gtk4: Add YCbCr memory texture formats -- gtk4: Promote set_caps debug log to info -- hlssink3: Fix a comment typo -- hlssink3: Use closed fragment location in playlist generation -- livekit: add room-timeout -- mccparse: Convert “U” to the correct byte representation -- mp4mux: add TAI timestamp element and muxing -- threadshare: add a ts-rtpdtmfsrc element -- rtp: Update to rtcp-types 0.2 -- rtpsend: Don’t configure a zero min RTCP interval for senders -- rtpbin2: Fix handling of unknown PTs and don’t warn about incomplete RTP caps to allow for bundling -- 
rtpbin2: Improve rtcp-mux support -- rtpbin2: fix race condition on serialized Queries -- rtpbin2: sync: fix race condition -- rtprecv: optimize src pad scheduling -- rtprecv: fix SSRC collision event sent in wrong direction -- skia: Add harfbuzz, freetype and fontconfig as dependencies in the meson build -- tttocea{6,7}08: Disallow pango markup from input caps -- ts-intersrc: handle dynamic inter-ctx changes -- threadshare: src elements: don’t pause the task in downward state transitions -- webrtc: sink: avoid recursive locking of the session -- webrtcsink: fix deadlock on error setting remote description -- webrtcsink: add mitigation modes parameter and signal -- webrtc: fix Safari addIceCandidate crash -- webrtc-api: Set default bundle policy to max-bundle -- WHIP client: emit shutdown after DELETE request -- Fix various new clippy 1.88 warnings -- Update dependencies - -gst-libav - -- Various ASAN fixes - -gst-rtsp-server - -- No changes - -gstreamer-vaapi - -- No changes - -gstreamer-sharp - -- No changes - -gst-python - -- No changes - -gst-editing-services - -- Fix various valgrind/test errors when GST_DEBUG is enabled - -gst-devtools, gst-validate + gst-integration-testsuites - -- Update various Rust dependencies - -gst-examples - -- Update various Rust dependencies - -gstreamer-docs - -- No changes - -Development build environment - -- No changes - -Cerbero build tool and packaging changes in 1.26.4 - -- WiX: fix missing props files in the MSI -- cmake: Do not rely on the CERBERO_PREFIX environment variable -- osx: Update pkgbuild compression algorithms resulting in much smaller packages - -Contributors to 1.26.4 - -Adrian Perez de Castro, Alicia Boya García, Arun Raghavan, Brad Hards, David Maseda Neira, David Monge, Doug Nazar, Enock Gomes -Neto, François Laignel, Haihua Hu, Hanna Weiß, Jerome Colle, Jochen Henneberg, L. E.
Segovia, Mathieu Duponchelle, Matthew -Waters, Nicolas Dufresne, Nirbheek Chauhan, Philippe Normand, Piotr Brzeziński, Robert Ayrapetyan, Robert Mader, Sebastian -Dröge, Seungha Yang, Taruntej Kanakamalla, Thibault Saunier, Tim-Philipp Müller, Vivia Nikolaidou, - -… and many others who have contributed bug reports, translations, sent suggestions or helped testing. Thank you all! - -List of merge requests and issues fixed in 1.26.4 - -- List of Merge Requests applied in 1.26.4 -- List of Issues fixed in 1.26.4 - -1.26.5 - -The fifth 1.26 bug-fix release (1.26.5) was released on 7 August 2025. - -This release only contains bugfixes including some important playback fixes, and it should be safe to update from 1.26.x. - -Highlighted bugfixes in 1.26.5 - -- audioconvert: Fix caps negotiation regression when using a mix matrix - -- aws: Add support for brevity in awstranslate and add option to partition speakers in the transcription output of - awstranscriber2 - -- speechmatics speech-to-text: Expose mask-profanities property - -- cea708mux: Add support for discarding select services on each input - -- cea608overlay, cea708overlay: Accept GPU memory buffers if downstream supports the overlay composition meta - -- d3d12screencapture source element and device provider fixes - -- decodebin3: Don’t error on an incoming ONVIF metadata stream - -- uridecodebin3: Fix potential crash when adding URIs to messages, e.g. 
if no decoder is available - -- v4l2: Fix memory leak for dynamic resolution change - -- VA encoder fixes - -- videorate, imagefreeze: Add support for JPEG XS - -- Vulkan integration fixes - -- wasapi2 audio device monitor improvements - -- webrtc: Add WHEP client signaller and add whepclientsrc element on top of webrtcsrc using that - -- threadshare: Many improvements and fixes to the generic threadshare and RTP threadshare elements - -- rtpbin2 improvements and fixes - -- gst-device-monitor-1.0 command line tool improvements - -- Various bug fixes, build fixes, memory leak fixes, and other stability and reliability improvements - -gstreamer - -- aggregator: add sub_latency_min to pad queue size -- build: Disable C5287 warning on MSVC - -gst-plugins-base - -- audioconvert: Fix regression when using a mix matrix -- audioconvert: mix-matrix causes caps negotiation failure -- decodebin3: Don’t error on an incoming ONVIF metadata stream -- gloverlay: Recompute geometry when caps change, and load texture after stopping and starting again -- uridecodebin3: Add missing locking and NULL checks when adding URIs to messages -- uridecodebin3: segfault in update_message_with_uri() if no decoder available -- videorate, imagefreeze: add support for JPEG XS -- gst-device-monitor-1.0: Add shell quoting for launch lines -- gst-device-monitor-1.0: Fix criticals, and also accept utf8 in launch lines -- gst-device-monitor-1.0: Use gst_print instead of g_print - -gst-plugins-good - -- v4l2: fix memory leak for dynamic resolution change -- videorate, imagefreeze: add support for JPEG XS - -gst-plugins-bad - -- av1parse: Don’t error out on “currently” undefined seq-level indices -- av1parse: fails to parse AV1 bitstreams generated by FFmpeg using the av1_nvenc hardware encoder -- d3d12screencapturedevice: Avoid false device removal on monitor reconfiguration -- d3d12screencapturesrc: Fix OS handle leaks/random crash in WGC mode -- meson: d3d12: Add support for MinGW DirectXMath package 
-- va: Re-negotiate after FLUSH -- vaXXXenc: calculate latency with corrected framerate -- vaXXXenc: fix potential race condition -- vkphysicaldevice: enable sampler ycbcr conversion, synchronization2 and timeline semaphore features -- vulkan: ycbcr conversion extension got promoted in 1.1.0 -- wasapi2: Port to IMMDevice based device selection - -gst-plugins-ugly - -- No changes - -GStreamer Rust plugins - -- Note: This list has been updated, since it originally accidentally included some Merge Requests that only landed in the main - branch, not in the 0.14 branch that ships with our GStreamer 1.26.5 packages. - -- awstranscriber2, awstranslate: Handle multiple stream-start events - -- awstranslate: expose property for turning brevity on - -- awstranscriber2: add property for setting show_speaker_labels - -- cea708mux: expose “discarded-services” property on sink pads - -- ceaX08overlay: support ANY caps features, allowing e.g. memory:GLMemory if downstream supports the overlay composition meta - -- hlsmultivariantsink: Fix master playlist version - -- rtprecv: Drop state lock before chaining RTCP packets from the RTP chain function - -- Add rtpbin2 examples - -- rtpmp4apay2: fix payload size prefix - -- rtp: threadshare: fix some property ranges - -- mpegtslivesrc: Remove leftover debug message - -- speechmatics: expose mask-profanities property - -- ts-audiotestsrc fixes - -- threadshare: fix flush for ts-queue ts-proxy & ts-intersrc - -- threadshare: fix regression in ts-proxysrc - -- threadshare: improvements to some elements - -- threadshare: udp: avoid getifaddrs in android - -- threadshare: Enable windows Win32_Networking feature - -- threadshare: queue & proxy: fix race condition stopping - -- threadshare: Also enable windows Win32_Networking_WinSock feature - -- tracers: pipeline-snapshot: reduce WebSocket connection log level - -- tracers: queue-levels: add support for threadshare DataQueue related elements - -- tracers: Update to etherparse 0.19 - --
transcriberbin: Fix handling of upstream latency query - -- webrtc: android example: fix media handling initialization sequence - -- webrtcsink: Move videorate before videoconvert and videoscale to avoid processing frames that would be dropped - -- whep: add WHEP client signaller - -- Fix various new clippy 1.89 warnings - -gst-libav - -- No changes - -gst-rtsp-server - -- No changes - -gstreamer-vaapi - -- No changes - -gstreamer-sharp - -- No changes - -gst-python - -- No changes - -gst-editing-services - -- No changes - -gst-devtools, gst-validate + gst-integration-testsuites - -- No changes - -gst-examples - -- No changes - -gstreamer-docs - -- No changes - -Development build environment - -- gst-env: only-environment: only dump added and updated vars -- gst-full: Fix detection of duplicate plugin entries -- ci: Fix gst-full breakage due to a typo -- build: Disable C5287 warning on MSVC - -Cerbero build tool and packaging changes in 1.26.5 - -- a52dec: update to 0.8.0 and port to Meson -- build: Fix passing multiple steps -- expat: update to 2.7.1 -- tar: Refactor in preparation for xcframework support - -Contributors to 1.26.5 - -François Laignel, Jan Schmidt, Jaslo Ziska, L. E. Segovia, Marc-André Lureau, Mathieu Duponchelle, Matthew Waters, Nirbheek -Chauhan, Philippe Normand, Qian Hu (胡骞), Sanchayan Maity, Sebastian Dröge, Seungha Yang, Thibault Saunier, Tim-Philipp Müller, -Víctor Manuel Jáquez Leal, Xavier Claessens, - -… and many others who have contributed bug reports, translations, sent suggestions or helped testing. Thank you all! - -List of merge requests and issues fixed in 1.26.5 - -- List of Merge Requests applied in 1.26.5 -- List of Issues fixed in 1.26.5 - -1.26.6 - -The sixth 1.26 bug-fix release (1.26.6) was released on 14 September 2025. - -This release only contains bugfixes including some important playback fixes, and it should be safe to update from 1.26.x.
- -Highlighted bugfixes in 1.26.6 - -- analytics GstTensorMeta handling changes (see note below) -- closed caption combiner and transcriberbin stability fixes -- decklinkvideosrc: fix unrecoverable state after failing to start streaming because device is busy -- decodebin3 tag handling improvements -- fallbacksrc: Fix sources only being restarted once, as well as some deadlocks and race conditions on shutdown -- gtk4paintablesink: Try importing dmabufs without DMA_DRM caps -- hlsdemux2: Fix parsing of byterange and init map directives -- rtpmp4gdepay2: allow only constantduration with neither constantsize nor sizelength set -- spotifysrc: update to librespot 0.7 to make it work after recent Spotify changes -- threadshare: new blocking adapter element for use in front of blocking elements such as sinks that sync to the clock -- threadshare: various other threadshare element fixes and improvements -- v4l2: Add support for WVC1 and WMV3 -- videorate: possible performance improvements when operating in drop-only mode -- GstBaseParse fixes -- Vulkan video decoder fixes -- Fix gst-device-monitor-1.0 tool device-path regression on Windows -- Monorepo development environment builds fewer plugins using subprojects by default; those require explicit enablement now -- Python bindings: Handle buffer PTS, DTS, duration, offset, and offset-end as unsigned long long (regression fix) -- Cerbero: Reduce recipe parallelism in various cases and dump cerbero and recipe versions into datadir during packaging -- Various bug fixes, build fixes, memory leak fixes, and other stability and reliability improvements - -Possibly breaking behavioural changes - -- Previously it was guaranteed that there is only ever up to one GstTensorMeta per buffer. This is no longer true and code - working with GstTensorMeta must be able to handle multiple GstTensorMeta now (after this Merge Request).
- -gstreamer - -- baseparse: Try harder to fixate caps based on upstream in default negotiation -- gst-discoverer reports 1x1 dimensions for “valid” MP4 files -- baseparse: don’t clear most sticky events after a FLUSH_STOP event -- gstreamer: Disable miniobject inline functions for gobject-introspection for non-subprojects too -- gstreamer: Make sure to zero-initialize the GValue before G_VALUE_COLLECT_INIT -- ptp: Fix a new Rust 1.89 compiler warning on Windows -- ptp: Fix new compiler warning with Rust 1.89 -- Segmentation fault when compiled with “-ftrivial-auto-var-init=pattern”. Use of uninitialized GValue. - -gst-plugins-base - -- decodebin3: Update stream tags -- rtpbasedepayload: Avoid potential use-after-free -- rtspconnection: Add get_url and get_ip return value annotation -- gst_rtsp_connection_get_url return value transfer annotation missing -- videometa: Fix valgrind warning when deserializing video meta -- videorate: don’t hold the reference to the buffer in drop-only mode -- gst-device-monitor-1.0: Fix device-path regression on Windows -- gst-device-monitor-1.0: Add quoting for powershell and cmd -- Monorepo: opengl, vorbis plugins require explicit enablement now for a build using the Meson subproject fallback - -gst-plugins-good - -- adaptivedemux2: fix crash due to log -- adaptivedemux2: Crash in logging when “Dropping EOS before next period” -- hlsdemux2: Fix parsing of byterange and init map directives -- mpg123audiodec: Always break the decoding loop and relay downstream flow errors upstream -- v4l2: Add support for WVC1 and WMV3 -- Monorepo: dv plugin requires explicit enablement now for a build using the Meson subproject fallback - -gst-plugins-bad - -- analytics: always add GstTensorMeta -- cccombiner: Crash fixes -- curlsmtpsink: adapt to date formatting issue -- decklinkvideosrc: fix decklinkvideosrc becomes unrecoverable if it fails to start streaming -- decklinkvideosrc gets into unrecoverable state if device is busy -- dwrite: Fix D3D12 
critical warning -- hlsdemux: Fix parsing of byterange and init map directives -- mpegtsmux: Caps event fails with stream type change error -- vulkanh26xdec: couple of fixes -- vulkanh26xdec: fix discont state handling -- waylandsink: add error handling for event dispatch -- zbar: tests: Handle symbol-bytes as not null-terminated -- Monorepo: avtp, codec2json, iqa, microdns, openjpeg, qroverlay, soundtouch, tinyalsa plugins require explicit enablement now - for a build using the Meson subproject fallback - -gst-plugins-ugly - -- No changes - -GStreamer Rust plugins - -- analyticscombiner: Use NULL caps instead of EMPTY caps in the array for streams with no caps -- aws: Ensure task stopping on paused-to-ready state change -- fallbacksrc: Don’t panic during retries if the element was shut down in parallel -- fallbacksrc: Don’t restart source if the element is just being shut down -- fallbacksrc: Fix some custom source deadlocks -- fallbacksrc: Fix sources only being restarted once -- gtk4: Try importing dmabufs without DMA_DRM caps -- inter: Give the appsrc/appsink a name that has the parent element as prefix -- mp4: Skip tests using x264enc if it does not exist -- rtpgccbwe: avoid clamp() panic when min_bitrate > max_bitrate -- rtpmp4gdepay2: allow only constantduration with neither constantsize nor sizelength set -- rtprecv: fix race condition on first buffer -- speechmatics: Specify rustls as an explicit dependency -- spotify: update to librespot 0.7 -- threadshare: add a blocking adapter element -- threadshare: always use block_on_or_add_subtask -- threadshare: audiotestsrc: fix setting samples-per-buffer… -- threadshare: blocking_adapter: fix Since marker in docs -- threadshare: fix resources not available when preparing asynchronously -- threadshare: fix ts-inter test one_to_one_up_first -- threadshare: have Task log its obj -- threadshare: intersink: return from blocking tasks when stopping -- threadshare: inter: update doc example -- threadshare: 
runtime/pad: lower log level pushing Buffer to flushing pad -- threadshare: separate blocking & throttling schedulers -- threadshare: update examples -- threadshare: Update to getifaddrs 0.5 -- threadshare: Fix macOS build post getifaddrs 0.5 update -- threadshare: Bump up getifaddrs to 0.1.5 and revert “udp: avoid getifaddrs in android” -- threadshare: Reapply “udp: avoid getifaddrs in android” -- transcriberbin: Fix some deadlocks -- Update dependencies -- webrtc: Migrate to warp 0.4 and switch to tokio-rustls -- webrtc/signalling: Fix setting of host address -- ci: add script to check readme against plugins list -- Fix various new clippy 1.89 warnings -- Don’t suggest running cargo cinstall after cargo cbuild -- meson: Isolate built plugins from cargo target directory - -gst-libav - -- No changes - -gst-rtsp-server - -- rtsp-server: tests: Switch to fixtures to ensure pool shutdown - -gstreamer-vaapi - -- No changes - -gstreamer-sharp - -- No changes - -gst-python - -- python: Handle buffer PTS/DTS/duration/offset/offset-end as unsigned long long - -gst-editing-services - -- gstreamer: Make sure to zero-initialize the GValue before G_VALUE_COLLECT_INIT -- Fix various memory leaks - -gst-devtools, gst-validate + gst-integration-testsuites - -- validate: http-actions: Replace GUri with GstURI for GLib 2.64 compatibility -- Fix memory leak and use of incorrect context - -gst-examples - -- No changes - -gstreamer-docs - -- No changes - -Development build environment - -- gobject-introspection: Fix introspection failing on Linux with subproject GLib -- glib: update to 2.82.5 and backport shebangs patch -- ci: Work around PowerShell broken argument parsing -- Disable more plugins on Windows by default by not pulling in fallback subprojects automatically, only if plugins are enabled - explicitly -- Fix build on Windows due to proxy-libintl not being invoked -- python: Reapply fixes to enable running Python bindings on Windows - -Cerbero build tool and packaging 
changes in 1.26.6 - -- ffmpeg: enable filters needed by avvideocompare -- cache: Fix detection of build tools prefix when using a cache -- cerbero/package: fix --tarball --compress-method=none -- cerbero: Reduce recipe parallelism in various cases -- ci: remove unpacked apk dir on completion -- package: Dump cerbero and recipe versions into datadir - -Contributors to 1.26.6 - -Andrey Khamukhin, Daniel Morin, Doug Nazar, François Laignel, Guillaume Desmottes, Hou Qi, Ian Napier, Jan Alexander Steffens -(heftig), Jan Schmidt, Jordan Petridis, L. E. Segovia, Marko Kohtala, Matthew Waters, Monty C, Nirbheek Chauhan, Ola Fornander, -Olivier Crête, Piotr Brzeziński, Qian Hu (胡骞), r4v3n6101, Robert Mader, Ruben Gonzalez, Sanchayan Maity, Sebastian Dröge, -Seungha Yang, Taruntej Kanakamalla, Thibault Saunier, Tim-Philipp Müller, Víctor Manuel Jáquez Leal, Vivian LEE, Vivienne -Watermeier, Xavier Claessens, - -… and many others who have contributed bug reports, translations, sent suggestions or helped testing. Thank you all! - -List of merge requests and issues fixed in 1.26.6 - -- List of Merge Requests applied in 1.26.6 -- List of Issues fixed in 1.26.6 - -1.26.7 - -The seventh 1.26 bug-fix release (1.26.7) was released on 14 October 2025. - -This release only contains bugfixes including some important playback fixes, and it should be safe to update from 1.26.x. 
- -Highlighted bugfixes in 1.26.7 - -- cea608overlay: improve handling of non-system memory -- cuda: Fix runtime kernel compile with CUDA 13.0 -- d3d12: Fix crop meta support in converter and passthrough handling in deinterlacer -- fallbacksrc: source handling improvements; no-more-pads signal for streams-unaware parents -- inter: add properties to fine tune the inner elements -- qtdemux: surround sound channel layout handling fixes and performance improvements for GoPro videos -- rtp: Add linear audio (L8, L16, L24) RTP payloaders / depayloaders -- rtspsrc: Send RTSP keepalives in TCP/interleaved modes -- rtpamrpay2: frame quality indicator flag related fixes -- rtpbasepay2: reuse last PTS when possible, to work around problems with NVIDIA Jetson AV1 encoder -- mpegtsmux, tsdemux: Opus audio handling fixes -- threadshare: latency related improvements and many other fixes -- matroskamux, tsmux, flvmux, cea608mux: Best pad determination fixes at EOS -- unixfd: support buffers with a big payload -- videorate unknown buffer duration assertion failure with variable framerates -- editing services: Make GESTimeline respect SELECT_ELEMENT_TRACK signal discard decision; memory leak fixes -- gobject-introspection annotation fixes -- cerbero: Update meson to 1.9.0 to enable Xcode 26 compatibility -- Various bug fixes, build fixes, memory leak fixes, and other stability and reliability improvements - -gstreamer - -- controller: Fix get_all() return type annotation -- gst-launch: Do not assume error messages have a src element -- multiqueue: Fix object reference handling in signal callbacks -- netclientclock: Fix memory leak in error paths - -gst-plugins-base - -- discoverer: Mark gst_discoverer_stream_info_list_free() as transfer full -- qt: Fix building examples on macOS -- riff: Add channel reorder maps for 3 and 7 channel audio -- sdp: proper usage of gst_buffer_append -- videorate: fix assert fail due to invalid buffer duration -- Fix build error with glib < 2.68 - 
-gst-plugins-good - -- matroskamux: Properly check if pads are EOS in find_best_pad -- qt: Fix building examples on macOS -- qtdemux: bad performance with GoPro videos containing FDSC metadata tracks -- qtdemux: fix open/seek perf for GoPro files with SOS track -- qtdemux: handle unsupported channel layout tags gracefully -- qtdemux: set channel-mask to 0 for unknown layout tags -- rtspsrc: Send RTSP keepalives in TCP/interleaved modes -- v4l2: Add GstV4l2Error handling in gst_v4l2_get_capabilities -- v4l2: Fix memory leak for DRM caps negotiation -- v4l2transform: reconfigure v4l2object only if respective caps changed -- Fix issues with G_DISABLE_CHECKS & G_DISABLE_ASSERT - -gst-plugins-bad - -- cuda: Fix runtime kernel compile with CUDA 13.0 -- d3d12convert: Fix crop meta support -- d3d12deinterlace: Fix passthrough handling -- gst: Fix a few small leaks -- matroskamux: Properly check if pads are EOS in find_best_pad -- tsdemux: Directly forward Opus AUs without opus_control_header -- tsmux: Write a full Opus channel configuration if no matching Vorbis one is found -- unixfd: Fix case of buffer with big payload -- vacompositor: Correct scale-method properties -- webrtc: nice: Fix a use-after-free and a mem leak -- Fix all compiler warnings on Fedora -- Fix issues with G_DISABLE_CHECKS & G_DISABLE_ASSERT - -gst-plugins-ugly - -- No changes - -GStreamer Rust plugins - -- cea608overlay: Support non-system memory correctly -- fallbacksrc: Fix custom source reuse case -- fallbacksrc: Fix sources only being restarted once -- fallbacksrc: Post no-more-pads signal for streams-unaware parent -- inter: add properties to fine tune the inner elements -- onvifmetadatapay: copy metadata from source buffer -- rtp: Add linear audio (L8, L16, L24) RTP payloaders / depayloaders -- rtpamrpay2: Actually forward the frame quality indicator -- rtpamrpay2: Set frame quality indicator flag -- rtp: basedepay: reuse last PTS when possible, to work around problems with NVIDIA Jetson AV1 
encoder -- rtpsend/recv: fix property type for stats -- threadshare: audiotestsrc: support more Audio formats -- threadshare: backpressure: abort pending items on flush start -- threadshare: fixes & improvements -- threadshare: latency related improvements and fixes -- threadshare: runtime task: execute action in downward transition -- threadshare: udpsink: fix panic recalculating latency from certain executors -- uriplaylistbin: Ignore all tests for now -- webrtc: livekit: Drop connection lock after take() -- Update dependencies -- Update dependencies -- ci: use image and GST_RS_MSRV / GST_RS_STABLE variables from gstreamer-rs 0.24 in gst-plugins-rs 0.14 branch -- Add rust-tls-native-roots feature to the reqwest dep -- Fix some new clippy 1.90 warnings -- meson: Fix .pc files installation and simplify build output handling - -gst-libav - -- Fix all compiler warnings on Fedora - -gst-rtsp-server - -- Fix issues with G_DISABLE_CHECKS & G_DISABLE_ASSERT - -gstreamer-vaapi - -- No changes - -gstreamer-sharp - -- No changes - -gst-python - -- No changes - -gst-editing-services - -- ges: timeline: Respect SELECT_ELEMENT_TRACK signal discard decision -- gst: Fix a few small leaks - -gst-devtools, gst-validate + gst-integration-testsuites - -- Fix issues with G_DISABLE_CHECKS & G_DISABLE_ASSERT - -gst-examples - -- No changes - -gstreamer-docs - -- No changes - -Development build environment - -- libsoup.wrap: Apply upstream fix for GModule deadlock - -Cerbero build tool and packaging changes in 1.26.7 - -- meson: Update to 1.9.0 to enable Xcode 26 compatibility -- osxrelocator: Add .so to the allowed dylib extensions -- ci: update macos image to 26-tahoe -- EndeavourOS support - -Contributors to 1.26.7 - -Andoni Morales Alastruey, Branko Subasic, Vincent Beng Keat Cheah, Doug Nazar, Ekwang Lee, François Laignel, Inbok Kim, Jakub -Adam, Jan Schmidt, Jochen Henneberg, L. E. 
Segovia, Mark Nauwelaerts, Markus Hofstaetter, Matthew Waters, Nirbheek Chauhan, -Norbert Hańderek, Philippe Normand, Razvan Grigore, Sebastian Dröge, Seungha Yang, Taruntej Kanakamalla, Thibault Saunier, -Tim-Philipp Müller, Xavier Claessens, - -… and many others who have contributed bug reports, translations, sent suggestions or helped testing. Thank you all! - -List of merge requests and issues fixed in 1.26.7 - -- List of Merge Requests applied in 1.26.7 -- List of Issues fixed in 1.26.7 - -1.26.8 - -The eighth 1.26 bug-fix release (1.26.8) was released on 10 November 2025. - -This release only contains bugfixes including some important playback fixes, and it should be safe to update from 1.26.x. - -Highlighted bugfixes in 1.26.8 - -- Fix showtime video player showing washed-out colours for HDR videos when subtitles are active -- core: performance improvements for elements with many source pads -- aacparse: support streams which do not have frequent LOAS config -- av1parse: Fix duplicated frames issue in frame splitting -- fmp4mux: Fix EAC3 datarate calculation and substream writing -- gtk4paintablesink: fixes glitches with padded buffers such as for sub-sampled video formats with odd sizes -- mpegtsmux: PUSI flag and ID3 tag handling fixes -- rtpbaseaudiopay2: Fix marker bit handling for DISCONT and RESYNC buffer flags -- rtpvp9pay: Fix parsing of show-existing-frame flag, fixes compatibility with vavp9lpenc -- splitmuxsink: accept pads named ‘sink_%u’ on the muxer for fmp4 muxer support -- webrtcsink: Correct lock ordering to prevent deadlock -- gst-plugins-rs meson build gained an auto_plugin_features option and no longer requires all gstreamer libraries to be - available -- v4l2 device monitor fixes -- x265enc: advertise latency based on encoder parameters instead of hard-coding it to 5 frames -- cerbero package builder: Add Rust support for 32-bit Linux x86 -- Various bug fixes, build fixes, memory leak fixes, and other stability and reliability 
improvements - -gstreamer - -- info: Added parentheses to ensure proper evaluation of conditions in logging level checks. -- info: Fix test pattern to check for an expected debug log line -- pad: make gst_pad_forward not O(n²) -- parse: Move g_strfreev() a bit later to avoid use-after-free -- structure: Don’t crash if GArray has NULL value -- utils: Fix leak in gst_util_filename_compare -- vasnprintf: free dynamic tmp buffer on error to prevent memory leak -- gst-launch-1.0: Print details of error message - -gst-plugins-base - -- encoding-target: Fix memory leak in gst_encoding_target_save -- gl: Support DMABuf passthrough with meta:GstVideoOverlayComposition -- gl: egl: fix memory leak in _print_all_dma_formats() -- gltestsrc: Fix memory leaks on shader creation failure -- id3: fix csets memory leak in string_utf8_dup -- opusdec: Unref intersected caps when empty to avoid leaks -- parsebin: Free missing plugin details and return failure when plugin is not found -- pbutils: Don’t throw critical for unknown mime codec -- rtsp: fix memory leaks in gst_rtsp_connection_connect_with_response_usec - -gst-plugins-good - -- aacparse: support streams which do not have frequent loas config -- multifile: verify format identifiers in filename template strings -- rtp: Fix usage of uninitialized variable -- rtph263pay: Fix Out-of-bounds access (OVERRUN) -- rtpvp9depay: fix wrong event referencing, use same packet loss logic from neighboring rtpvp8depay -- rtpvp9pay: Fix parsing of show-existing-frame -- rtpvp9pay: vavp9lpenc does not work with rtpvp9pay but does with rtpvp9pay2 -- splitmuxsink: accept pads named ‘sink_%u’ on the muxer -- v4l2: Fix NULL pointer dereference in probe error path -- v4l2videoenc: fix memory leak of output state and caps - -gst-plugins-bad - -- alphacombine: Only reset once both pads are done flushing -- av1parse: Fix duplicated frames issue in frame splitting -- avwait: Unify conditions between the different modes -- d3d11converter & 
d3d12converter: Initialize video_direction -- dtlsconnection: Increase DTLS MTU to 1200 -- h264parser: fix uint32 to int32 truncation -- mpegtsmux: ID3 tag handling fixes and cleanup -- ristsink: Fix double free regression -- scte-section: fix resource leak in splice component parsing -- tsmux: Reset PUSI flag after writing stream packet -- uvcgadget: always ensure to switch to fakesink -- v4l2codecs: Free sub-request on allocation failure -- wasapi2: Handle GetActivateResult failure -- wayland: Fix using uninitialized value of data.wbuf -- gstwasapi2.dll error on machines with no audio devices -- x265enc: Calculate latency based on encoder parameters - -gst-plugins-ugly - -- No changes - -GStreamer Rust plugins - -- aws, webrtc, cargo: Remove all constraints on AWS SDK and tune optimizations -- closedcaption: Return FlowError from scan_duration -- fmp4mux: Fix EAC3 datarate calculation -- fmp4mux: Fix EAC3 substream writing in EC3SpecificBox -- fmp4mux: Update to dash-mpd 0.19 -- gtk4: Implement cropped imports without viewport -- json: Return FlowError from scan_duration -- rtp: baseaudiopay: Fix marker bit handling -- threadshare: fix Pad mod diagram -- threadshare: Update to getifaddrs 0.6 -- tracers: Fix inability to create new log files (regression) -- tracers: Fix inverted append logic when writing log files -- uriplaylistbin: Propagate error message source -- webrtc: document grant requirement for livekitwebrtcsink auth token -- webrtcsink: Correct lock ordering to prevent Lock (A), Lock (B) + Lock(B), Lock(A) deadlock between - on_remote_description_set() and handle_ice() -- webrtcsrc: Clean up EOS and session handling -- meson: Add auto_plugin_features option -- meson: Don’t require all gstreamer libraries -- Document the tags and branches in this repository -- Fix a couple of new 1.91 clippy warnings -- Update dependencies - -gst-libav - -- No changes - -gst-rtsp-server - -- No changes - -gstreamer-vaapi - -- No changes - -gstreamer-sharp - -- No changes 
- -gst-python - -- python: Fix GDir leak in gst_python_load_directory - -gst-editing-services - -- ges: add error reporting to base bin timeline setup - -gst-devtools, gst-validate + gst-integration-testsuites - -- validate: add missing GST_VALIDATE_API annotation -- validate: use meson compile instead of ninja directly -- dots-viewer: Update Rust dependencies - -gst-examples - -- Fix signal lookup in GTK player example -- Update Rust dependencies - -gstreamer-docs - -- No changes - -Development build environment - -- libnice.wrap: add upstream patch from libnice to fix parsing of incomplete TCP ICE candidates - -Cerbero build tool and packaging changes in 1.26.8 - -- Add Rust support for Linux x86 -- Open log files as utf-8 and with error resilience -- harfbuzz: disable documentation - -Contributors to 1.26.8 - -Amy Ko, Artem Martus, Carlos Bentzen, Christo Joseph, David Maseda Neira, DongJoo Kim, Doug Nazar, François Laignel, Havard -Graff, He Junyan, Inbok Kim, Jan Alexander Steffens (heftig), Jan Schmidt, Jeehyun Lee, Jeongmin Kwak, Jihoon Lee, Kevin Wolf, -L. E. Segovia, Loïc Le Page, Manuel Torres, Marek Olejnik, Matthew Waters, Mazdak Farzone, Michael Grzeschik, Nicolas Dufresne, -Nirbheek Chauhan, Oz Donner, Pablo García, Piotr Brzeziński, Qian Hu (胡骞), Rares Branici, Robert Mader, Ross Burton, Ruben -Gonzalez, Sebastian Dröge, Seungha Yang, Thibault Saunier, Tim-Philipp Müller, Xavier Claessens, - -… and many others who have contributed bug reports, translations, sent suggestions or helped testing. Thank you all! - -List of merge requests and issues fixed in 1.26.8 - -- List of Merge Requests applied in 1.26.8 -- List of Issues fixed in 1.26.8 - -1.26.9 - -The ninth 1.26 bug-fix release (1.26.9) was released on 01 December 2025. - -This release only contains bugfixes and it should be safe to update from 1.26.x. 
- -Highlighted bugfixes in 1.26.9 - -- playback: playbin3 and decodebin3 stability fixes -- Ancillary metadata handling fixes for AJA playout and Blackmagic Decklink capture cards -- HLS and DASH adaptive streaming clients stability improvements -- gst-play-1.0 will now print details of any missing plugins again -- gtk4paintablesink: Add property to fine-tune reconfiguration behaviour on window-resize -- macOS device monitoring: fix potential crash when probing for audio devices -- macOS video decoder stability improvements -- NDI source: fix audio corruption for non-interleaved audio with stride padding -- Add SMPTE ST291-1 ancillary metadata RTP payloader and depayloader -- Add ST-2038 metadata combiner and extractor -- webrtcsink: support hardware-accelerated encoders from the va VA-API plugin -- spotifysrc: fix the Spotify integration by using Spotify’s extended metadata endpoint -- Python bindings cross compilation fixes -- cerbero: add Visual Studio 2026 support, fix building on drives other than C:, and ship svtjpegxs plugin on Windows -- Various bug fixes, build fixes, memory leak fixes, and other stability and reliability improvements - -gstreamer - -- info: Force comparison to same types -- queue: Use GST_PTR_FORMAT everywhere -- streamcollection: Fix race condition between disconnecting notify proxy and notifications -- value: Fix GstAllocationParams string serialisation on 32-bit architectures - -gst-plugins-base - -- allocators: drmdumb: Keep dmabuf mapped -- alsadeviceprovider: Fix device name leak -- audiovisualizer: Use break instead of goto for escape logic -- decodebin3: Clear previous collection on input -- decodebin3: Consider certain meta caps in decodebin3 as raw format to avoid warnings -- decodebin3: Protect against NULL dereference if input slot can’t be mapped -- glbasesrc: Add unlock handling for non-negotiated cases -- glcolorconvert: Fix memory leak in _create_shader -- gldownload: Keep dmabuf mapped -- glfiltershader: Add missing unlock 
-- glstereosplit: Add missing unlock for exceptional case -- pbutils: Fix bit shifting when generating the HEVC mime codec string -- rtpbaseaudiopay: Consider RESYNC flag as discontinuity too -- rtpbasedepayload: Add missing unlock in error code path -- uridecodebin3: Add null check of play items in purge -- urisourcebin: Add missing unlock -- urisourcebin: Fix initial values of min_byte_level and min_time_level variables -- videoencoder: fix warning of uninitialized buffer - -Tools: - -- gst-play-1.0: fix printing of missing plugin details -- gst-play-1.0: Add missing unlock for invalid track type - -gst-plugins-good - -- adaptivedemux2: Fix a crash on rapid state changes, and startup busy waiting -- hlsdemux2: Keep streams with different names -- hlsdemux2: error out instead of asserting on negative stream time -- hlsdemux2: Not all subtitles are present in track/collection. Usage of FORCE EXT-X-MEDIA field -- osxaudio: Remove unnecessary if, add comment about GstDevice lifetime -- osxaudio: Various fixes, including a potential crash when probing -- v4l2allocator: Add KEEP_MAPPED flag to the allocated buffers -- v4l2videoenc: Fix codec frame leak on error - -gst-plugins-bad - -- Add missing G_DECLS symbols to gstvkqueue and gstvkcommandqueue -- ajasink, decklinkvideosrc: Fix some GstAncillaryMeta handling bugs -- analyticsmeta: Initialize span to avoid undefined behavior -- GstPlay: Fixed wrong initial position update interval configuration -- id3tag: Fix resource leak -- mpegtsmux: Avoid infinite recursion writing PCR packets -- mxfdemux: Fix typo on mxf_ffv1_create_caps -- mxfmux: Fix memset usage -- mpegtsmux: segfaults when bitrate is configured lower than bitrate that’s coming in -- osxaudio: Various fixes, including a potential crash when probing -- scte-section: fix missing cleanup on splice component parse failure -- tsdemux: expose audio GstStream for DTS -- va, unixfdsrc: keep dmabufs mapped -- vkh265dec: Fix a typo -- vkvideo-private: Replace GstBuffer with GstMemory 
array for video sessions -- vtdec: Fix race condition in decoder draining. Fluster runs were unstable - -gst-plugins-ugly - -- rmdemux: Remove unnecessary condition - -GStreamer Rust plugins - -- analytics splitter/combiner: Remove the separate fields to events and buffer -- audiornnoise: copy input metadata to output buffer -- closedcaption: cctost2038anc: Support alignment -- closedcaption: st2038ancdemux: Support alignment -- closedcaption: st2038ancmux: Support frame alignment -- closedcaption: st2038: Forward frame rate in caps where available -- closedcaption: Add ST-2038 combiner and extractor element -- closedcaption: st2038extractor: Some fixes -- closedcaption: st2038combiner: Some fixes -- gif: Update to gif 0.14 -- gtk4: Add property to control reconfigure on window-resize behavior -- gtk4: Fix compile warning -- fmp4, mp4: Implement GstChildProxy for MP4Mux and FMP4Mux -- fmp4: Update to dash-mpd 0.19 -- ndisrcdemux: fix audio corruption with non-interleaved stride padding -- net/quinn: Update web-transport-quinn and fix flaky QUIC test -- rtp: Add SMPTE ST291-1 (ANC) RTP payloader and depayloader -- spotify: bump librespot 0.8.0 -- webrtcsink: Don’t let recalculate_latency block tokio worker thread -- webrtcsink: support va encoders -- Update dependencies -- meson: fix build when GTK is not present - -gst-libav - -- No changes - -gst-rtsp-server - -- No changes - -gstreamer-vaapi - -- No changes - -gstreamer-sharp - -- No changes - -gst-python - -- gst-python: fix cross-compiling -- python: Add some typing annotations to overrides - -gst-editing-services - -- No changes - -gst-devtools, gst-validate + gst-integration-testsuites - -- debug-viewer: Make 0x prefix optional in thread ID regexes - -gst-examples - -- No changes - -gstreamer-docs - -- No changes - -Development build environment - -- libsoup wrap: remove fallback gio-unix on Windows build -- webrtc-audio-processing wrap: Fix build with abseil-cpp 202508 - -Cerbero build tool and packaging changes 
in 1.26.9 - -- Add support for Visual Studio 2026 (Insiders) -- Fix extraction on Windows when building on a different drive than C:, bump pixman and pygobject -- cookbook: List all the dependencies when listed in reverse -- gst-plugins-bad: actually build svtjpegxs plugin on 64-bit Windows - -Contributors to 1.26.9 - -Artem Martus, Chengfa Wang, Dominique Leroux, Dongjoo Kim, Doug Nazar, Edward Hervey, Gang Zhao, Hyunjun Ko, Jaehoon Lee, Jakub -Adam, Jan Schmidt, Jeongmin Kwak, Jerome Colle, Jihoon Lee, Jordan Yelloz, L. E. Segovia, Matthew Semeniuk, Max Goltzsche, -Michael Olbrich, Monty C, Nicolas Dufresne, Nirbheek Chauhan, Olivier Crête, Philippe Normand, Pratik Pachange, Qian Hu (胡骞), -Robert Mader, Ruben Gonzalez, Sanchayan Maity, Santiago Carot-Nemesio, Sebastian Dröge, Seungha Yang, Stéphane Cerveau, -Tim-Philipp Müller, Xavier Claessens, - -… and many others who have contributed bug reports, translations, sent suggestions or helped testing. Thank you all! +After the 1.28.0 release there will be several 1.28.x bug-fix releases which will contain bug fixes which have been deemed +suitable for a stable branch, but no new features or intrusive changes will usually be added to a bug-fix release. The 1.28.x +bug-fix releases will be made from the git 1.28 branch, which is a stable release series branch. -List of merge requests and issues fixed in 1.26.9 +1.28.1 -- List of Merge Requests applied in 1.26.9 -- List of Issues fixed in 1.26.9 +The first 1.28 bug-fix release (1.28.1) is expected to be released in February 2026. -1.26.10 +Schedule for 1.30 -The tenth 1.26 bug-fix release (1.26.10) was released on 25 December 2025. +Our next major feature release will be 1.30, and 1.29 will be the unstable development version leading up to the stable 1.30 +release. The development of 1.29/1.30 will happen in the git main branch of the GStreamer mono repository. -This release only contains bugfixes and it should be safe to update from 1.26.x. 
- -Highlighted bugfixes in 1.26.10 - -- curlhttpsrc fixes and improvements -- decklinkvideosink: Fix frame completion callbacks for firmware 14.3+ -- flac: Fix 6.1 and 7.1 channel layouts and support encoding and decoding of 32-bit audio -- glimagesink: Fix handling of odd height buffers -- matroskademux: make maximum allowed block size large enough to support 4k uncompressed video -- mxf: Add support for custom Sony XDCAM video variant -- opusenc: multichannel and surround sound handling improvements -- playbin3: HLS/DASH stream selection handling improvements to fix disabling and re-enabling of audio/video streams with - adaptivedemux2 -- qtmux: robust recording mode space left estimation fixes for streams that start with a timestamp offset -- splitmuxsrc seeking improvements -- Support FLAC audio in DASH manifests -- Python bindings: fix regression where buffers were no longer writable in pad probe callbacks -- cerbero: add python bindings for GstApp; Windows installer improvements -- Various bug fixes, build fixes, memory leak fixes, and other stability and reliability improvements - -gstreamer - -- pipeline: Improve resource cleanup logic for clock objects -- filesink: fix the build with recent mingw-w64 -- basetransform, basesrc: Fix handling of buffer pool configuration failures - -gst-plugins-base - -- basetextoverlay: Don’t negotiate if caps haven’t changed -- codec-utils: Update mime codec strings -- fdmemory: Fix size calculation when sharing -- gl elements add a yellow bar on JPEGs with non-even heights -- glimagesink: Fix handling of odd height buffers -- glwindow_cocoa: fix window not closing (w/o user window handle) -- opusenc: Simplify Vorbis channel layout mapping code and fix 7.1 layout & use surround multistream encoder -- parsebin: Improve debug logging -- playbin3: ensure GST_EVENT_SELECT_STREAMS event is sent to collection source -- tagdemux: propagate seek event seqnum to upstream -- videodecoder: Don’t assume the ALLOCATION query contains a 
pool -- videodecoder, videoaggregator: Fix handling of buffer pool configuration failures - -gst-plugins-good - -- adaptivedemux2: Initialize start bitrate for dashdemux2 and hlsdemux2 -- dashdemux2: Unknown codec ‘flac’ when streaming a DASH MPD manifest with a mp4 FLAC file -- deinterlace: Improve pool configuration -- flac: Fix 6.1 / 7.1 channel layouts -- flacdec: Don’t forbid S32 sample size (0x07) unnecessarily -- flacenc: Support S32 samples -- flacdec: Decode 32-bit FLAC files -- level: fix crash if no caps have been sent -- level: Floating point exception (core dumped) when sending buffers without caps -- matroskademux: Bump maximum block size from 15MB to 32MB to allow 4k raw video -- matroskamux: Fix some more thread-safety issues -- matroskamux: Fix thread-safety issues when requesting new pads -- matroskamux: pad->track handling results in segmentation fault -- mxfdemux / aiffparse / matroskaparse: Remove segment closing on non-flushing seeks -- qtdemux: Use gst_util_uint64_scale to scale guint64 -- qtmux: Fix robust recording estimates -- splitmuxsrc - fix for seeking / flushing deadlock -- v4l2object: Add support for colorimetry 1:4:16:3 -- wavenc: Fix downstream negotiation -- wavparse: prevent setting empty strings as title tag - -gst-plugins-bad - -- aesenc / aesdec: use correct format specifier for buffer size in debug log -- analytics: Fix build on MSVC by using libm dependency -- curlhttpsrc: Various fixes -- decklinkvideosink: Fix frame completion callbacks for firmware 14.3+ -- dtlsdec: mark generated cert agent with GST_OBJECT_FLAG_MAY_BE_LEAKED -- fdkaacdec: Assertion on handling unsupported channel layouts -- fdkaacdec: Invalidate channel_types/indices when setting a known config -- hlssink: Guard NULL structure and use gst_structure_has_name() -- midiparse: Fix a couple of potential out-of-bounds reads -- mpegtsmux: Fix potential deadlock changing pmt-interval -- mxfdemux: reconsider “closing running segment” for non flushing seeks -- 
mxfdemux / aiffparse / matroskaparse: Remove segment closing on non-flushing seeks -- mxfdemux: Simplify timestamp tracking -- mxfdemux: send event SegmentDone for segment seeks -- mxfmpeg: Add custom Sony picture essence coding UL -- playbin3: ensure GST_EVENT_SELECT_STREAMS event is sent to collection source -- vabasedec: Don’t assert when negotiating based on a gap event before the first buffer -- vkformat: Add VK_FORMAT_G10X6_B10X6R10X6_2PLANE_420_UNORM_3PACK16 format -- webrtc: Keep a ref of the ICEStream in the TransportStream -- GstPlay: set_audio_track_enabled / set_video_track_enabled not functioning for adaptivedemux2 sources -- video: decoders: Fix possible crash when flushing H265/H266 decoder - -gst-plugins-ugly - -- No changes - -GStreamer Rust plugins - -- cctost2038anc: Fix typo with c_not_y_channel property documentation -- dav1d: Stop iteration after finding first working pool -- dav1d: Various fixes to allocation query handling -- gtk4paintablesink: Propose a udmabuf pool / allocator if upstream asks for sysmem -- gtk4: Fix typo in odd-size subsample workaround -- rtp: Update to rtcp-types 0.3 -- st2038combiner: Some fixes -- st2038extractor: Add always-add-st2038-pad property -- threadshare: allow disabling the IPv4 or IPv6 socket in ts-udpsink -- threadshare: Update to flume 0.12 -- tracers: add function and signal for writing logs to PadPushTimings -- version-helper: Update to toml_edit 0.24 -- webrtc: mark static caps with GST_MINI_OBJECT_FLAG_MAY_BE_LEAKED -- webrtcsink: don’t upscale when mitigating low bitrate -- Fix new clippy 1.92 warnings -- Update dependencies - -gst-libav - -- avviddec: Various fixes to allocation query handling -- avviddec: Aggregate GstVideoAlignment on top of the meta api params, instead of overriding them -- avviddec: Set video alignment to internal pool - -gst-rtsp-server - -- No changes - -gstreamer-vaapi - -- No changes - -gstreamer-sharp - -- No changes - -gst-python - -- Override GstPadProbeInfo to get 
writable objects -- Misc improvements -- More typing fixes -- 1.26.2 breaks Python bindings: No longer able to modify Gst.Buffer metadata in pad probe callbacks - -gst-editing-services - -- python: More typing fixes - -gst-devtools,gst-validate + gst-integration-testsuites - -- dotsviewer: Update Rust dependencies - -gst-examples - -- webrtc: Update Rust dependencies - -gstreamer-docs - -- No changes - -Development build environment - -- No changes - -Cerbero build tool and packaging changes in 1.26.10 - -- pkg-config: Ship it in the devel package -- recipe: Update License enums to SPDX expressions -- recipes: Fix GPL categorization of some plugins -- recipes: Fix stray devel files making it into runtime -- recipes: add GstApp python binding -- Modernize MSI license.rtf formatting -- Use ninja for all cmake recipes by default instead of GNU make -- ci: Mark a racy xcode toolchain bug for retrying - -Contributors to 1.26.10 - -Aaron Boxer, Brad Reitmeyer, Christoph Reiter, Doug Nazar, F. Duncanh, François Laignel, Haejung Hwang, Hou Qi, Hyunjun Ko, -Jakub Adam, Jan Schmidt, Jeongmin Kwak, Jerome Colle, L. E. Segovia, Mathieu Duponchelle, Nicolas Dufresne, Nirbheek Chauhan, -Philippe Normand, Piotr Brzeziński, Pratik Pachange, Robert Mader, Sanchayan Maity, Sebastian Dröge, Stéphane Cerveau, Thibault -Saunier, Tim-Philipp Müller, Tobias Schlager, Vivia Nikolaidou, Wilhelm Bartel, Xavier Claessens, Yun Liu, - -… and many others who have contributed bug reports,translations,sent suggestions or helped testing. Thank you all! - -List of merge requests and issues fixed in 1.26.10 - -- List of Merge Requests applied in 1.26.10 -- List of Issues fixed in 1.26.10 - -Schedule for 1.28 +The schedule for 1.30 is still to be determined, but it will likely be in Q4/2026. -Our next major feature release will be 1.28, and 1.27 will be the unstable development version leading up to the stable 1.28 -release. 
The development of 1.27/1.28 will happen in the git main branch of the GStreamer mono repository. +1.30 will be backwards-compatible to the stable 1.28, 1.26, 1.24, 1.22, 1.20, 1.18, 1.16, 1.14, 1.12, 1.10, 1.8, 1.6, 1.4, 1.2 +and 1.0 release series. -For 1.28 we’re aiming for feature freeze in December 2025 and then the new stable 1.28.0 release in January 2026. +## 1.27 pre-releases (superseded by 1.28) -1.28 will be backwards-compatible to the stable 1.26, 1.24, 1.22, 1.20, 1.18, 1.16, 1.14, 1.12, 1.10, 1.8, 1.6, 1.4, 1.2 and 1.0 -release series. +- 1.27.1 development snapshot release notes +- 1.27.2 development snapshot release notes +- 1.27.50 development snapshot release notes +- 1.27.90 pre-release release notes -------------------------------------------------------------------------------------------------------------------------------- -These release notes have been prepared by Tim-Philipp Müller with contributions from Arun Raghavan, Daniel Morin, Nirbheek -Chauhan, Olivier Crête, Philippe Normand, Sebastian Dröge, Seungha Yang, Thibault Saunier, and Víctor Manuel Jáquez Leal. +These release notes have been prepared by Tim-Philipp Müller with contributions from Daniel Morin, Nirbheek Chauhan, Philippe +Normand, Sebastian Dröge, Thibault Saunier, Víctor Manuel Jáquez Leal, and Xavier Claessens License: CC BY-SA 4.0
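One entry above, "qtdemux: Use gst_util_uint64_scale to scale guint64", is about avoiding 64-bit overflow when rescaling timestamps: a naive `val * num / denom` in C wraps once the intermediate product exceeds 2^64. A minimal Python sketch of the problem and of the split-multiply idea (the helper names are illustrative, and this is a simplified version of what `gst_util_uint64_scale()` does with wider intermediates, not its actual implementation):

```python
U64_MASK = (1 << 64) - 1  # emulate C guint64 wrap-around

def naive_scale(val, num, denom):
    # In C, val * num wraps modulo 2^64 before the division,
    # so large timestamps come out wrong.
    return ((val * num) & U64_MASK) // denom

def safe_scale(val, num, denom):
    # Split val into 32-bit halves so each partial product stays small.
    # val*num = (hi*num)*2^32 + lo*num; divide the high part first and
    # carry its remainder down.  (Python ints are arbitrary precision,
    # so this shows the math; C needs care with the intermediates too.)
    hi, lo = val >> 32, val & 0xFFFFFFFF
    q_hi, r_hi = divmod(hi * num, denom)
    return (q_hi << 32) + ((r_hi << 32) + lo * num) // denom

# Rescale a ~11.5-day nanosecond timestamp to a 90 kHz clock:
ns = 10**15
print(safe_scale(ns, 90000, 10**9))   # 90000000000 == ns * 90000 // 10**9
print(naive_scale(ns, 90000, 10**9))  # wrapped, far smaller
```

The product `10**15 * 90000` is about 9×10^19, well past 2^64 (~1.8×10^19), which is exactly the kind of value the qtdemux fix guards against.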
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/README.md -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/README.md
Changed
@@ -1,4 +1,4 @@
-GStreamer 1.26.x stable series
+GStreamer 1.28.x stable series

 WHAT IT IS
 ----------
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/RELEASE -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/RELEASE
Changed
@@ -1,4 +1,4 @@
-This is GStreamer gst-plugins-bad 1.26.10.
+This is GStreamer gst-plugins-bad 1.28.0.

 The GStreamer team is thrilled to announce a new major feature release
 of your favourite cross-platform multimedia framework!
@@ -6,12 +6,12 @@
 As always, this release is again packed with new features, bug fixes and
 other improvements.

-The 1.26 release series adds new features on top of the 1.24 series and is
+The 1.28 release series adds new features on top of the 1.26 series and is
 part of the API and ABI-stable 1.x release series.

 Full release notes can be found at:

-  https://gstreamer.freedesktop.org/releases/1.26/
+  https://gstreamer.freedesktop.org/releases/1.28/

 Binaries for Android, iOS, Mac OS X and Windows will usually be provided
 shortly after the release.
@@ -42,10 +42,6 @@
 where you can find audio and video decoders and encoders for a wide
 variety of formats including H.264, AAC, etc.

-  - gstreamer-vaapi: hardware-accelerated video decoding and encoding using
-    VA-API on Linux. Primarily for Intel graphics hardware.
-    (Deprecated, use the new "va" plugin instead)
-
   - gst-rtsp-server: library to serve files or streaming pipelines via RTSP

   - gst-editing-services: library and plugins for non-linear editing
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/data/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/data/meson.build
Changed
@@ -16,15 +16,12 @@
   ['device', ['targets/device/dvd.gep']],
 ]

-srcdirs = []
-
 foreach path_targets : encoding_targets
   dir = join_paths(encoding_targetsdir, path_targets.get(0))
   etargets = path_targets.get(1)
   install_data(sources: etargets, install_dir: dir)
-  srcdirs += meson.current_source_dir() / 'targets' / path_targets.get(0)
 endforeach

 env = environment()
-env.prepend('GST_ENCODING_TARGET_PATH', srcdirs)
+env.prepend('GST_ENCODING_TARGET_PATH', meson.current_source_dir() / 'targets')
 meson.add_devenv(env)
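The hunk above simplifies the meson devenv so that `GST_ENCODING_TARGET_PATH` points at the single `targets/` parent directory instead of a list of per-category subdirectories. Outside of `meson devenv`, the same variable can be set by hand to make GStreamer pick up in-tree `.gep` encoding targets (the checkout path below is only an example, not anything the build mandates):

```shell
# Point GStreamer's encoding-target lookup at an in-tree targets directory.
# The path is an illustrative checkout location.
export GST_ENCODING_TARGET_PATH="$HOME/src/gst-plugins-bad/data/targets"
echo "$GST_ENCODING_TARGET_PATH"
```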
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/docs/plugins/gst_plugins_cache.json -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/docs/plugins/gst_plugins_cache.json
Changed
@@ -3501,12 +3501,12 @@ "klass": "Analyzer/Visualization/Video", "pad-templates": { "sink": { - "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, 
GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", "direction": "sink", "presence": "always" }, "src": { - "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, 
NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", "direction": "src", "presence": "always" } @@ -3524,6 +3524,44 @@ "type": "gboolean", "writable": true }, + "draw-tracking-labels": { + "blurb": "Draw object tracking labels", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "true", + "mutable": "null", + "readable": true, + "type": 
"gboolean", + "writable": true + }, + "expire-overlay": { + "blurb": "Re-uses the last overlay for the specified amount of time before expiring it (in ns), MAX for never", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "1000000000", + "max": "18446744073709551615", + "min": "0", + "mutable": "null", + "readable": true, + "type": "guint64", + "writable": true + }, + "filled-box": { + "blurb": "Draw a filled box", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "false", + "mutable": "null", + "readable": true, + "type": "gboolean", + "writable": true + }, "labels-color": { "blurb": "Color (ARGB) to use for object labels", "conditionally-available": false, @@ -3551,6 +3589,73 @@ "readable": true, "type": "guint", "writable": true + }, + "tracking-outline-colors": { + "blurb": "In the presence of tracking information, each object will get its own color, ignores object-detection-outline-color", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "true", + "mutable": "null", + "readable": true, + "type": "gboolean", + "writable": true + } + }, + "rank": "none" + }, + "segmentationoverlay": { + "author": "Daniel Morin", + "description": "Overlay a visual representation of segmentation metadata on the video", + "hierarchy": + "GstSegmentationOverlay", + "GstVideoFilter", + "GstBaseTransform", + "GstElement", + "GstObject", + "GInitiallyUnowned", + "GObject" + , + "klass": "Visualization/Video", + "pad-templates": { + "sink": { + "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, 
A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "direction": "sink", + "presence": "always" + }, + "src": { + "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, 
I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "direction": "src", + "presence": "always" + } + }, + "properties": { + "hint-maximum-segment-type": { + "blurb": "By providing the expected maximum segment type the overlay can optimize color differentiation between segment", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "10", + "max": "-1", + "min": "1", + "mutable": "null", + "readable": true, + "type": "guint", + "writable": true + }, + "selected-types": { + "blurb": "List of segment types to overlay separated by ';'", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "NULL", + "mutable": "null", + "readable": true, + "type": "gchararray", + "writable": true } }, "rank": "none" @@ -5968,7 +6073,7 @@ "long-name": "ASS/SSA Render", "pad-templates": { "src": { - "caps": "video/x-raw:\n format: { BGRx, RGBx, xRGB, xBGR, RGBA, BGRA, ARGB, ABGR, RGB, BGR, I420, YV12, AYUV, YUY2, UYVY, v308, Y41B, Y42B, Y444, NV12, NV21, A420, YUV9, YVU9, IYU1, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(ANY):\n format: { DMA_DRM, A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, 
A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "caps": "video/x-raw:\n format: { BGRx, RGBx, xRGB, xBGR, RGBA, BGRA, ARGB, ABGR, RGB, BGR, I420, YV12, AYUV, YUY2, UYVY, v308, Y41B, Y42B, Y444, NV12, NV21, A420, YUV9, YVU9, IYU1, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(ANY):\n format: { DMA_DRM, A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, 
I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", "direction": "src", "presence": "always" }, @@ -5978,7 +6083,7 @@ "presence": "always" }, "video_sink": { - "caps": "video/x-raw:\n format: { BGRx, RGBx, xRGB, xBGR, RGBA, BGRA, ARGB, ABGR, RGB, BGR, I420, YV12, AYUV, YUY2, UYVY, v308, Y41B, Y42B, Y444, NV12, NV21, A420, YUV9, YVU9, IYU1, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(ANY):\n format: { DMA_DRM, A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, 
NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "caps": "video/x-raw:\n format: { BGRx, RGBx, xRGB, xBGR, RGBA, BGRA, ARGB, ABGR, RGB, BGR, I420, YV12, AYUV, YUY2, UYVY, v308, Y41B, Y42B, Y444, NV12, NV21, A420, YUV9, YVU9, IYU1, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(ANY):\n format: { DMA_DRM, A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, 
GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", "direction": "sink", "presence": "always" } @@ -6049,12 +6154,12 @@ "long-name": "Audio Buffer Split", "pad-templates": { "sink": { - "caps": "audio/x-raw:\n format: { F64LE, F64BE, F32LE, F32BE, S32LE, S32BE, U32LE, U32BE, S24_32LE, S24_32BE, U24_32LE, U24_32BE, S24LE, S24BE, U24LE, U24BE, S20LE, S20BE, U20LE, U20BE, S18LE, S18BE, U18LE, U18BE, S16LE, S16BE, U16LE, U16BE, S8, U8 }\n rate: 1, 2147483647 \n channels: 1, 2147483647 \n layout: interleaved\n", + "caps": "audio/x-raw:\n format: { F64LE, F64BE, F32LE, F32BE, S32LE, S32BE, U32LE, U32BE, S24_32LE, S24_32BE, U24_32LE, U24_32BE, S24LE, S24BE, U24LE, U24BE, S20_32LE, S20_32BE, U20_32LE, U20_32BE, S20LE, S20BE, U20LE, U20BE, S18LE, S18BE, U18LE, U18BE, S16LE, S16BE, U16LE, U16BE, S8, U8 }\n rate: 1, 2147483647 \n channels: 1, 2147483647 \n layout: interleaved\n", "direction": "sink", "presence": "always" }, "src": { - "caps": "audio/x-raw:\n format: { F64LE, F64BE, F32LE, F32BE, S32LE, S32BE, U32LE, U32BE, S24_32LE, S24_32BE, U24_32LE, U24_32BE, S24LE, S24BE, U24LE, U24BE, S20LE, S20BE, U20LE, U20BE, S18LE, S18BE, U18LE, U18BE, S16LE, S16BE, U16LE, U16BE, S8, U8 }\n rate: 1, 2147483647 \n channels: 1, 2147483647 \n layout: interleaved\n", + "caps": "audio/x-raw:\n format: { F64LE, F64BE, F32LE, F32BE, S32LE, S32BE, U32LE, U32BE, S24_32LE, S24_32BE, U24_32LE, U24_32BE, S24LE, S24BE, U24LE, U24BE, S20_32LE, S20_32BE, U20_32LE, U20_32BE, S20LE, S20BE, U20LE, U20BE, S18LE, S18BE, U18LE, U18BE, S16LE, S16BE, U16LE, U16BE, S8, U8 }\n rate: 1, 2147483647 \n channels: 1, 2147483647 \n layout: interleaved\n", "direction": "src", "presence": "always" } @@ -8589,12 +8694,12 @@ "presence": "always" }, "src": { - "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, 
A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(ANY):\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, 
NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(ANY):\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, 
Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", "direction": "src", "presence": "always" }, "video_sink": { - "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, 
Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(ANY):\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 
2147483647/1 \n", + "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(ANY):\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, 
P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", "direction": "sink", "presence": "always" } @@ -13865,12 +13970,12 @@ "klass": "Codec/Decoder/Video/Hardware", "pad-templates": { "sink": { - "caps": "video/x-av1:\n alignment: frame\n profile: main\n width: 1, 16384 \n height: 1, 16384 \n", + "caps": "video/x-av1:\n alignment: frame\n profile: main\n width: 1, 8192 \n height: 1, 8192 \n", "direction": "sink", "presence": "always" }, "src": { - "caps": "video/x-raw(memory:D3D12Memory):\n format: { NV12, P010_10LE }\n width: 1, 16384 \n height: 1, 16384 \n\nvideo/x-raw(memory:D3D11Memory):\n format: { NV12, P010_10LE }\n width: 1, 16384 \n height: 1, 16384 \nvideo/x-raw:\n format: { NV12, P010_10LE }\n width: 1, 16384 \n height: 1, 16384 \n", + "caps": "video/x-raw(memory:D3D12Memory):\n format: { NV12, P010_10LE }\n width: 1, 8192 \n height: 1, 8192 \n\nvideo/x-raw(memory:D3D11Memory):\n format: { NV12, P010_10LE }\n width: 1, 8192 \n height: 1, 8192 \nvideo/x-raw:\n format: { NV12, P010_10LE }\n width: 1, 8192 \n height: 1, 8192 \n", "direction": "src", "presence": "always" } @@ -14366,6 +14471,337 @@ }, "rank": "none" }, + "d3d12fisheyedewarp": { + "author": "Seungha Yang <seungha@centricular.com>", + "description": "Dewarping fisheye 
image", + "hierarchy": + "GstD3D12FisheyeDewarp", + "GstD3D12BaseFilter", + "GstBaseTransform", + "GstElement", + "GstObject", + "GInitiallyUnowned", + "GObject" + , + "klass": "Filter/Converter/Video/Hardware", + "pad-templates": { + "sink": { + "caps": "video/x-raw(memory:D3D12Memory):\n format: { RGBA64_LE, BGRA64_LE, Y416_LE, Y412_LE, RGB10A2_LE, Y410, BGR10A2_LE, Y216_LE, Y212_LE, Y210, VUYA, RGBA, BGRA, RBGA, P016_LE, P012_LE, P010_10LE, RGBx, BGRx, YUY2, NV12, ARGB64_LE, AYUV64, GBRA_12LE, GBRA_10LE, AYUV, ABGR, ARGB, GBRA, Y444_16LE, A444_16LE, A444_12LE, A444_10LE, A444, A422_16LE, A422_12LE, A422_10LE, A422, A420_16LE, A420_12LE, A420_10LE, A420, AV12, GBR_16LE, Y444_12LE, GBR_12LE, I422_12LE, I420_12LE, Y444_10LE, GBR_10LE, I422_10LE, I420_10LE, Y444, BGRP, GBR, RGBP, xBGR, xRGB, Y42B, NV24, NV16, NV61, NV21, I420, YV12, Y41B, YUV9, YVU9, GRAY16_LE, GRAY8, v216, v210, r210, v308, IYU2, RGB, BGR, UYVY, VYUY, YVYU, RGB16, BGR16, RGB15, BGR15 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(memory:D3D12Memory, meta:GstVideoOverlayComposition):\n format: { RGBA64_LE, BGRA64_LE, Y416_LE, Y412_LE, RGB10A2_LE, Y410, BGR10A2_LE, Y216_LE, Y212_LE, Y210, VUYA, RGBA, BGRA, RBGA, P016_LE, P012_LE, P010_10LE, RGBx, BGRx, YUY2, NV12, ARGB64_LE, AYUV64, GBRA_12LE, GBRA_10LE, AYUV, ABGR, ARGB, GBRA, Y444_16LE, A444_16LE, A444_12LE, A444_10LE, A444, A422_16LE, A422_12LE, A422_10LE, A422, A420_16LE, A420_12LE, A420_10LE, A420, AV12, GBR_16LE, Y444_12LE, GBR_12LE, I422_12LE, I420_12LE, Y444_10LE, GBR_10LE, I422_10LE, I420_10LE, Y444, BGRP, GBR, RGBP, xBGR, xRGB, Y42B, NV24, NV16, NV61, NV21, I420, YV12, Y41B, YUV9, YVU9, GRAY16_LE, GRAY8, v216, v210, r210, v308, IYU2, RGB, BGR, UYVY, VYUY, YVYU, RGB16, BGR16, RGB15, BGR15 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "direction": "sink", + "presence": "always" + }, + "src": { + "caps": "video/x-raw(memory:D3D12Memory):\n format: { 
RGBA64_LE, BGRA64_LE, Y416_LE, Y412_LE, RGB10A2_LE, Y410, BGR10A2_LE, Y216_LE, Y212_LE, Y210, VUYA, RGBA, BGRA, RBGA, P016_LE, P012_LE, P010_10LE, RGBx, BGRx, YUY2, NV12, ARGB64_LE, AYUV64, GBRA_12LE, GBRA_10LE, AYUV, ABGR, ARGB, GBRA, Y444_16LE, A444_16LE, A444_12LE, A444_10LE, A444, A422_16LE, A422_12LE, A422_10LE, A422, A420_16LE, A420_12LE, A420_10LE, A420, AV12, GBR_16LE, Y444_12LE, GBR_12LE, I422_12LE, I420_12LE, Y444_10LE, GBR_10LE, I422_10LE, I420_10LE, Y444, BGRP, GBR, RGBP, xBGR, xRGB, Y42B, NV24, NV16, NV61, NV21, I420, YV12, Y41B, YUV9, YVU9, GRAY16_LE, GRAY8, v216, v210, r210, v308, IYU2, RGB, BGR, UYVY, VYUY, YVYU, RGB16, BGR16, RGB15, BGR15 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(memory:D3D12Memory, meta:GstVideoOverlayComposition):\n format: { RGBA64_LE, BGRA64_LE, Y416_LE, Y412_LE, RGB10A2_LE, Y410, BGR10A2_LE, Y216_LE, Y212_LE, Y210, VUYA, RGBA, BGRA, RBGA, P016_LE, P012_LE, P010_10LE, RGBx, BGRx, YUY2, NV12, ARGB64_LE, AYUV64, GBRA_12LE, GBRA_10LE, AYUV, ABGR, ARGB, GBRA, Y444_16LE, A444_16LE, A444_12LE, A444_10LE, A444, A422_16LE, A422_12LE, A422_10LE, A422, A420_16LE, A420_12LE, A420_10LE, A420, AV12, GBR_16LE, Y444_12LE, GBR_12LE, I422_12LE, I420_12LE, Y444_10LE, GBR_10LE, I422_10LE, I420_10LE, Y444, BGRP, GBR, RGBP, xBGR, xRGB, Y42B, NV24, NV16, NV61, NV21, I420, YV12, Y41B, YUV9, YVU9, GRAY16_LE, GRAY8, v216, v210, r210, v308, IYU2, RGB, BGR, UYVY, VYUY, YVYU, RGB16, BGR16, RGB15, BGR15 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "direction": "src", + "presence": "always" + } + }, + "properties": { + "center-x": { + "blurb": "Normalized X position of fisheye circle", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "0.5", + "max": "1", + "min": "0", + "mutable": "null", + "readable": true, + "type": "gdouble", + "writable": true + }, + "center-y": { + "blurb": 
"Normalized Y position of fisheye circle", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "0.5", + "max": "1", + "min": "0", + "mutable": "null", + "readable": true, + "type": "gdouble", + "writable": true + }, + "fisheye-fov": { + "blurb": "Fisheye image field-of-view angle, in degrees", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "180", + "max": "1.79769e+308", + "min": "-1.79769e+308", + "mutable": "null", + "readable": true, + "type": "gdouble", + "writable": true + }, + "horizontal-fov": { + "blurb": "Horizontal field-of-view angle of output, in degrees; ignored in 'panorama' projection", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "90", + "max": "1.79769e+308", + "min": "-1.79769e+308", + "mutable": "null", + "readable": true, + "type": "gdouble", + "writable": true + }, + "inner-radius": { + "blurb": "Normalized inner radius for cropping central area (0.0 = center, 1.0 = full crop). 
Only used in 'panorama' projection", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "0.3", + "max": "1", + "min": "0", + "mutable": "null", + "readable": true, + "type": "gdouble", + "writable": true + }, + "projection-type": { + "blurb": "Projection type to use", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "equirect (1)", + "mutable": "null", + "readable": true, + "type": "GstD3D12FisheyeDewarpProjectionType", + "writable": true + }, + "radius-x": { + "blurb": "Normalized horizontal radius of fisheye circle", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "0.5", + "max": "1", + "min": "0", + "mutable": "null", + "readable": true, + "type": "gdouble", + "writable": true + }, + "radius-y": { + "blurb": "Normalized vertical radius of fisheye circle", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "0.5", + "max": "1", + "min": "0", + "mutable": "null", + "readable": true, + "type": "gdouble", + "writable": true + }, + "roi-height": { + "blurb": "Normalized ROI height, in output image space", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "1", + "max": "1", + "min": "0", + "mutable": "null", + "readable": true, + "type": "gdouble", + "writable": true + }, + "roi-width": { + "blurb": "Normalized ROI width, in output image space", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "1", + "max": "1", + "min": "0", + "mutable": "null", + "readable": true, + "type": "gdouble", + "writable": true + }, + "roi-x": { + "blurb": "Normalized horizontal ROI offset (top-left), in output image space", + "conditionally-available": false, 
+ "construct": false, + "construct-only": false, + "controllable": false, + "default": "0", + "max": "1", + "min": "0", + "mutable": "null", + "readable": true, + "type": "gdouble", + "writable": true + }, + "roi-y": { + "blurb": "Normalized vertical ROI offset (top-left), in output image space", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "0", + "max": "1", + "min": "0", + "mutable": "null", + "readable": true, + "type": "gdouble", + "writable": true + }, + "rotation-order": { + "blurb": "Rotation axis order to apply, ignored in 'panorama' projection", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "zxy (4)", + "mutable": "null", + "readable": true, + "type": "GstD3D12FisheyeDewarpRotationOrder", + "writable": true + }, + "rotation-space": { + "blurb": "Controls whether rotations are applied in local (intrinsic, camera-relative) or world (extrinsic, fixed-axis) space", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "local (0)", + "mutable": "null", + "readable": true, + "type": "GstD3D12FisheyeDewarpRotationSpace", + "writable": true + }, + "rotation-x": { + "blurb": "Pitch (X-axis rotation) angle, in degrees; ignored in 'panorama' projection", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "0", + "max": "1.79769e+308", + "min": "-1.79769e+308", + "mutable": "null", + "readable": true, + "type": "gdouble", + "writable": true + }, + "rotation-y": { + "blurb": "Yaw (Y-axis rotation) angle, in degrees; ignored in 'panorama' projection", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "0", + "max": "1.79769e+308", + "min": "-1.79769e+308", + "mutable": "null", + "readable": true, + "type": 
"gdouble", + "writable": true + }, + "rotation-z": { + "blurb": "Roll (Z-axis rotation) angle, in degrees", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "0", + "max": "1.79769e+308", + "min": "-1.79769e+308", + "mutable": "null", + "readable": true, + "type": "gdouble", + "writable": true + }, + "vertical-fov": { + "blurb": "Vertical field-of-view angle of output, in degrees; ignored in 'panorama' projection", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "90", + "max": "1.79769e+308", + "min": "-1.79769e+308", + "mutable": "null", + "readable": true, + "type": "gdouble", + "writable": true + }, + "viewport-height": { + "blurb": "Normalized viewport height", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "1", + "max": "1", + "min": "0", + "mutable": "null", + "readable": true, + "type": "gdouble", + "writable": true + }, + "viewport-width": { + "blurb": "Normalized viewport width", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "1", + "max": "1", + "min": "0", + "mutable": "null", + "readable": true, + "type": "gdouble", + "writable": true + }, + "viewport-x": { + "blurb": "Normalized top-left viewport X position", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "0", + "max": "1", + "min": "0", + "mutable": "null", + "readable": true, + "type": "gdouble", + "writable": true + }, + "viewport-y": { + "blurb": "Normalized top-left viewport Y position", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "0", + "max": "1", + "min": "0", + "mutable": "null", + "readable": true, + "type": "gdouble", + "writable": true + } + }, 
+ "rank": "none" + }, "d3d12h264dec": { "author": "Seungha Yang <seungha@centricualr.com>", "description": "Direct3D12/DXVA based H.264 video decoder", @@ -14456,12 +14892,12 @@ "klass": "Codec/Encoder/Video/Hardware", "pad-templates": { "sink": { - "caps": "video/x-raw(memory:D3D12Memory):\n format: NV12\n width: 16, 4096 \n height: 16, 4096 \n interlace-mode: progressive\nvideo/x-raw:\n format: NV12\n width: 16, 4096 \n height: 16, 4096 \n interlace-mode: progressive\n", + "caps": "video/x-raw(memory:D3D12Memory):\n format: NV12\n width: 128, 4096 \n height: 128, 4096 \n interlace-mode: progressive\nvideo/x-raw:\n format: NV12\n width: 128, 4096 \n height: 128, 4096 \n interlace-mode: progressive\n", "direction": "sink", "presence": "always" }, "src": { - "caps": "video/x-h264:\n width: 16, 4096 \n height: 16, 4096 \n stream-format: byte-stream\n alignment: au\n profile: { (string)high, (string)main, (string)constrained-baseline }\n", + "caps": "video/x-h264:\n width: 128, 4096 \n height: 128, 4096 \n stream-format: byte-stream\n alignment: au\n profile: { (string)high, (string)main, (string)constrained-baseline }\n", "direction": "src", "presence": "always" } @@ -14738,12 +15174,12 @@ "klass": "Codec/Decoder/Video/Hardware", "pad-templates": { "sink": { - "caps": "video/x-h265:\n stream-format: { (string)hev1, (string)hvc1, (string)byte-stream }\n alignment: au\n profile: { (string)main, (string)main-10 }\n width: 1, 16384 \n height: 1, 16384 \n", + "caps": "video/x-h265:\n stream-format: { (string)hev1, (string)hvc1, (string)byte-stream }\n alignment: au\n profile: { (string)main, (string)main-10 }\n width: 1, 8192 \n height: 1, 8192 \n", "direction": "sink", "presence": "always" }, "src": { - "caps": "video/x-raw(memory:D3D12Memory):\n format: { NV12, P010_10LE }\n width: 1, 16384 \n height: 1, 16384 \n\nvideo/x-raw(memory:D3D11Memory):\n format: { NV12, P010_10LE }\n width: 1, 16384 \n height: 1, 16384 \nvideo/x-raw:\n format: { NV12, P010_10LE }\n width: 1, 
16384 \n height: 1, 16384 \n", + "caps": "video/x-raw(memory:D3D12Memory):\n format: { NV12, P010_10LE }\n width: 1, 8192 \n height: 1, 8192 \n\nvideo/x-raw(memory:D3D11Memory):\n format: { NV12, P010_10LE }\n width: 1, 8192 \n height: 1, 8192 \nvideo/x-raw:\n format: { NV12, P010_10LE }\n width: 1, 8192 \n height: 1, 8192 \n", "direction": "src", "presence": "always" } @@ -14794,6 +15230,71 @@ }, "rank": "primary + 2" }, + "d3d12interlace": { + "author": "Seungha Yang <seungha@centricular.com>", + "description": "A Direct3D12 interlacer element", + "hierarchy": + "GstD3D12Interlace", + "GstD3D12BaseFilter", + "GstBaseTransform", + "GstElement", + "GstObject", + "GInitiallyUnowned", + "GObject" + , + "klass": "Filter/Interlace/Effect/Video/Hardware", + "pad-templates": { + "sink": { + "caps": "video/x-raw(memory:D3D12Memory):\n format: { RGBA64_LE, BGRA64_LE, Y416_LE, Y412_LE, RGB10A2_LE, Y410, BGR10A2_LE, Y216_LE, Y212_LE, Y210, VUYA, RGBA, BGRA, RBGA, P016_LE, P012_LE, P010_10LE, RGBx, BGRx, YUY2, NV12, ARGB64_LE, AYUV64, GBRA_12LE, GBRA_10LE, AYUV, ABGR, ARGB, GBRA, Y444_16LE, A444_16LE, A444_12LE, A444_10LE, A444, A422_16LE, A422_12LE, A422_10LE, A422, A420_16LE, A420_12LE, A420_10LE, A420, AV12, GBR_16LE, Y444_12LE, GBR_12LE, I422_12LE, I420_12LE, Y444_10LE, GBR_10LE, I422_10LE, I420_10LE, Y444, BGRP, GBR, RGBP, xBGR, xRGB, Y42B, NV24, NV16, NV61, NV21, I420, YV12, Y41B, YUV9, YVU9, GRAY16_LE, GRAY8, v216, v210, r210, v308, IYU2, RGB, BGR, UYVY, VYUY, YVYU, RGB16, BGR16, RGB15, BGR15 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n interlace-mode: progressive\n\nvideo/x-raw(memory:D3D12Memory, meta:GstVideoOverlayComposition):\n format: { RGBA64_LE, BGRA64_LE, Y416_LE, Y412_LE, RGB10A2_LE, Y410, BGR10A2_LE, Y216_LE, Y212_LE, Y210, VUYA, RGBA, BGRA, RBGA, P016_LE, P012_LE, P010_10LE, RGBx, BGRx, YUY2, NV12, ARGB64_LE, AYUV64, GBRA_12LE, GBRA_10LE, AYUV, ABGR, ARGB, GBRA, Y444_16LE, A444_16LE, A444_12LE, A444_10LE, A444, 
A422_16LE, A422_12LE, A422_10LE, A422, A420_16LE, A420_12LE, A420_10LE, A420, AV12, GBR_16LE, Y444_12LE, GBR_12LE, I422_12LE, I420_12LE, Y444_10LE, GBR_10LE, I422_10LE, I420_10LE, Y444, BGRP, GBR, RGBP, xBGR, xRGB, Y42B, NV24, NV16, NV61, NV21, I420, YV12, Y41B, YUV9, YVU9, GRAY16_LE, GRAY8, v216, v210, r210, v308, IYU2, RGB, BGR, UYVY, VYUY, YVYU, RGB16, BGR16, RGB15, BGR15 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n interlace-mode: progressive\n", + "direction": "sink", + "presence": "always" + }, + "src": { + "caps": "video/x-raw(memory:D3D12Memory):\n format: { RGBA64_LE, BGRA64_LE, Y416_LE, Y412_LE, RGB10A2_LE, Y410, BGR10A2_LE, Y216_LE, Y212_LE, Y210, VUYA, RGBA, BGRA, RBGA, P016_LE, P012_LE, P010_10LE, RGBx, BGRx, YUY2, NV12, ARGB64_LE, AYUV64, GBRA_12LE, GBRA_10LE, AYUV, ABGR, ARGB, GBRA, Y444_16LE, A444_16LE, A444_12LE, A444_10LE, A444, A422_16LE, A422_12LE, A422_10LE, A422, A420_16LE, A420_12LE, A420_10LE, A420, AV12, GBR_16LE, Y444_12LE, GBR_12LE, I422_12LE, I420_12LE, Y444_10LE, GBR_10LE, I422_10LE, I420_10LE, Y444, BGRP, GBR, RGBP, xBGR, xRGB, Y42B, NV24, NV16, NV61, NV21, I420, YV12, Y41B, YUV9, YVU9, GRAY16_LE, GRAY8, v216, v210, r210, v308, IYU2, RGB, BGR, UYVY, VYUY, YVYU, RGB16, BGR16, RGB15, BGR15 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n interlace-mode: interleaved\n\nvideo/x-raw(memory:D3D12Memory, meta:GstVideoOverlayComposition):\n format: { RGBA64_LE, BGRA64_LE, Y416_LE, Y412_LE, RGB10A2_LE, Y410, BGR10A2_LE, Y216_LE, Y212_LE, Y210, VUYA, RGBA, BGRA, RBGA, P016_LE, P012_LE, P010_10LE, RGBx, BGRx, YUY2, NV12, ARGB64_LE, AYUV64, GBRA_12LE, GBRA_10LE, AYUV, ABGR, ARGB, GBRA, Y444_16LE, A444_16LE, A444_12LE, A444_10LE, A444, A422_16LE, A422_12LE, A422_10LE, A422, A420_16LE, A420_12LE, A420_10LE, A420, AV12, GBR_16LE, Y444_12LE, GBR_12LE, I422_12LE, I420_12LE, Y444_10LE, GBR_10LE, I422_10LE, I420_10LE, Y444, BGRP, GBR, RGBP, xBGR, xRGB, Y42B, NV24, NV16, NV61, 
NV21, I420, YV12, Y41B, YUV9, YVU9, GRAY16_LE, GRAY8, v216, v210, r210, v308, IYU2, RGB, BGR, UYVY, VYUY, YVYU, RGB16, BGR16, RGB15, BGR15 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n interlace-mode: interleaved\n", + "direction": "src", + "presence": "always" + } + }, + "properties": { + "engine": { + "blurb": "Engine to use", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "auto (0)", + "mutable": "null", + "readable": true, + "type": "GstD3D12InterlaceEngine", + "writable": true + }, + "field-pattern": { + "blurb": "The output field pattern", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "1:1 (0)", + "mutable": "null", + "readable": true, + "type": "GstD3D12InterlacePattern", + "writable": true + }, + "top-field-first": { + "blurb": "Interlaced stream should be top field first", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "false", + "mutable": "null", + "readable": true, + "type": "gboolean", + "writable": true + } + }, + "rank": "none" + }, "d3d12ipcsink": { "author": "Seungha Yang <seungha@centricular.com>", "description": "Sends Direct3D12 shared handle to peer d3d12ipcsrc elements", @@ -15047,6 +15548,73 @@ }, "rank": "primary + 2" }, + "d3d12overlaycompositor": { + "author": "Seungha Yang <seungha@centricular.com>", + "description": "Blend overlay into stream", + "hierarchy": + "GstD3D12OverlayCompositor", + "GstD3D12BaseFilter", + "GstBaseTransform", + "GstElement", + "GstObject", + "GInitiallyUnowned", + "GObject" + , + "klass": "Filter/Effect/Video/Hardware", + "pad-templates": { + "sink": { + "caps": "video/x-raw(memory:D3D12Memory):\n format: { RGBA64_LE, BGRA64_LE, Y416_LE, Y412_LE, RGB10A2_LE, Y410, BGR10A2_LE, Y216_LE, Y212_LE, Y210, VUYA, RGBA, BGRA, RBGA, P016_LE, P012_LE, 
P010_10LE, RGBx, BGRx, YUY2, NV12, ARGB64_LE, AYUV64, GBRA_12LE, GBRA_10LE, AYUV, ABGR, ARGB, GBRA, Y444_16LE, A444_16LE, A444_12LE, A444_10LE, A444, A422_16LE, A422_12LE, A422_10LE, A422, A420_16LE, A420_12LE, A420_10LE, A420, AV12, GBR_16LE, Y444_12LE, GBR_12LE, I422_12LE, I420_12LE, Y444_10LE, GBR_10LE, I422_10LE, I420_10LE, Y444, BGRP, GBR, RGBP, xBGR, xRGB, Y42B, NV24, NV16, NV61, NV21, I420, YV12, Y41B, YUV9, YVU9, GRAY16_LE, GRAY8, v216, v210, r210, v308, IYU2, RGB, BGR, UYVY, VYUY, YVYU, RGB16, BGR16, RGB15, BGR15 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(memory:D3D12Memory, meta:GstVideoOverlayComposition):\n format: { RGBA64_LE, BGRA64_LE, Y416_LE, Y412_LE, RGB10A2_LE, Y410, BGR10A2_LE, Y216_LE, Y212_LE, Y210, VUYA, RGBA, BGRA, RBGA, P016_LE, P012_LE, P010_10LE, RGBx, BGRx, YUY2, NV12, ARGB64_LE, AYUV64, GBRA_12LE, GBRA_10LE, AYUV, ABGR, ARGB, GBRA, Y444_16LE, A444_16LE, A444_12LE, A444_10LE, A444, A422_16LE, A422_12LE, A422_10LE, A422, A420_16LE, A420_12LE, A420_10LE, A420, AV12, GBR_16LE, Y444_12LE, GBR_12LE, I422_12LE, I420_12LE, Y444_10LE, GBR_10LE, I422_10LE, I420_10LE, Y444, BGRP, GBR, RGBP, xBGR, xRGB, Y42B, NV24, NV16, NV61, NV21, I420, YV12, Y41B, YUV9, YVU9, GRAY16_LE, GRAY8, v216, v210, r210, v308, IYU2, RGB, BGR, UYVY, VYUY, YVYU, RGB16, BGR16, RGB15, BGR15 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "direction": "sink", + "presence": "always" + }, + "src": { + "caps": "video/x-raw(memory:D3D12Memory):\n format: { RGBA64_LE, BGRA64_LE, Y416_LE, Y412_LE, RGB10A2_LE, Y410, BGR10A2_LE, Y216_LE, Y212_LE, Y210, VUYA, RGBA, BGRA, RBGA, P016_LE, P012_LE, P010_10LE, RGBx, BGRx, YUY2, NV12, ARGB64_LE, AYUV64, GBRA_12LE, GBRA_10LE, AYUV, ABGR, ARGB, GBRA, Y444_16LE, A444_16LE, A444_12LE, A444_10LE, A444, A422_16LE, A422_12LE, A422_10LE, A422, A420_16LE, A420_12LE, A420_10LE, A420, AV12, GBR_16LE, Y444_12LE, GBR_12LE, I422_12LE, I420_12LE, Y444_10LE, 
GBR_10LE, I422_10LE, I420_10LE, Y444, BGRP, GBR, RGBP, xBGR, xRGB, Y42B, NV24, NV16, NV61, NV21, I420, YV12, Y41B, YUV9, YVU9, GRAY16_LE, GRAY8, v216, v210, r210, v308, IYU2, RGB, BGR, UYVY, VYUY, YVYU, RGB16, BGR16, RGB15, BGR15 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(memory:D3D12Memory, meta:GstVideoOverlayComposition):\n format: { RGBA64_LE, BGRA64_LE, Y416_LE, Y412_LE, RGB10A2_LE, Y410, BGR10A2_LE, Y216_LE, Y212_LE, Y210, VUYA, RGBA, BGRA, RBGA, P016_LE, P012_LE, P010_10LE, RGBx, BGRx, YUY2, NV12, ARGB64_LE, AYUV64, GBRA_12LE, GBRA_10LE, AYUV, ABGR, ARGB, GBRA, Y444_16LE, A444_16LE, A444_12LE, A444_10LE, A444, A422_16LE, A422_12LE, A422_10LE, A422, A420_16LE, A420_12LE, A420_10LE, A420, AV12, GBR_16LE, Y444_12LE, GBR_12LE, I422_12LE, I420_12LE, Y444_10LE, GBR_10LE, I422_10LE, I420_10LE, Y444, BGRP, GBR, RGBP, xBGR, xRGB, Y42B, NV24, NV16, NV61, NV21, I420, YV12, Y41B, YUV9, YVU9, GRAY16_LE, GRAY8, v216, v210, r210, v308, IYU2, RGB, BGR, UYVY, VYUY, YVYU, RGB16, BGR16, RGB15, BGR15 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "direction": "src", + "presence": "always" + } + }, + "rank": "none" + }, + "d3d12remap": { + "author": "Seungha Yang <seungha@centricular.com>", + "description": "Remap pixels", + "hierarchy": + "GstD3D12Remap", + "GstD3D12BaseFilter", + "GstBaseTransform", + "GstElement", + "GstObject", + "GInitiallyUnowned", + "GObject" + , + "klass": "Filter/Converter/Video/Hardware", + "pad-templates": { + "sink": { + "caps": "video/x-raw(memory:D3D12Memory):\n format: { RGBA64_LE, BGRA64_LE, Y416_LE, Y412_LE, RGB10A2_LE, Y410, BGR10A2_LE, Y216_LE, Y212_LE, Y210, VUYA, RGBA, BGRA, RBGA, P016_LE, P012_LE, P010_10LE, RGBx, BGRx, YUY2, NV12, ARGB64_LE, AYUV64, GBRA_12LE, GBRA_10LE, AYUV, ABGR, ARGB, GBRA, Y444_16LE, A444_16LE, A444_12LE, A444_10LE, A444, A422_16LE, A422_12LE, A422_10LE, A422, A420_16LE, A420_12LE, A420_10LE, A420, AV12, GBR_16LE, 
Y444_12LE, GBR_12LE, I422_12LE, I420_12LE, Y444_10LE, GBR_10LE, I422_10LE, I420_10LE, Y444, BGRP, GBR, RGBP, xBGR, xRGB, Y42B, NV24, NV16, NV61, NV21, I420, YV12, Y41B, YUV9, YVU9, GRAY16_LE, GRAY8, v216, v210, r210, v308, IYU2, RGB, BGR, UYVY, VYUY, YVYU, RGB16, BGR16, RGB15, BGR15 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(memory:D3D12Memory, meta:GstVideoOverlayComposition):\n format: { RGBA64_LE, BGRA64_LE, Y416_LE, Y412_LE, RGB10A2_LE, Y410, BGR10A2_LE, Y216_LE, Y212_LE, Y210, VUYA, RGBA, BGRA, RBGA, P016_LE, P012_LE, P010_10LE, RGBx, BGRx, YUY2, NV12, ARGB64_LE, AYUV64, GBRA_12LE, GBRA_10LE, AYUV, ABGR, ARGB, GBRA, Y444_16LE, A444_16LE, A444_12LE, A444_10LE, A444, A422_16LE, A422_12LE, A422_10LE, A422, A420_16LE, A420_12LE, A420_10LE, A420, AV12, GBR_16LE, Y444_12LE, GBR_12LE, I422_12LE, I420_12LE, Y444_10LE, GBR_10LE, I422_10LE, I420_10LE, Y444, BGRP, GBR, RGBP, xBGR, xRGB, Y42B, NV24, NV16, NV61, NV21, I420, YV12, Y41B, YUV9, YVU9, GRAY16_LE, GRAY8, v216, v210, r210, v308, IYU2, RGB, BGR, UYVY, VYUY, YVYU, RGB16, BGR16, RGB15, BGR15 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "direction": "sink", + "presence": "always" + }, + "src": { + "caps": "video/x-raw(memory:D3D12Memory):\n format: { RGBA64_LE, BGRA64_LE, Y416_LE, Y412_LE, RGB10A2_LE, Y410, BGR10A2_LE, Y216_LE, Y212_LE, Y210, VUYA, RGBA, BGRA, RBGA, P016_LE, P012_LE, P010_10LE, RGBx, BGRx, YUY2, NV12, ARGB64_LE, AYUV64, GBRA_12LE, GBRA_10LE, AYUV, ABGR, ARGB, GBRA, Y444_16LE, A444_16LE, A444_12LE, A444_10LE, A444, A422_16LE, A422_12LE, A422_10LE, A422, A420_16LE, A420_12LE, A420_10LE, A420, AV12, GBR_16LE, Y444_12LE, GBR_12LE, I422_12LE, I420_12LE, Y444_10LE, GBR_10LE, I422_10LE, I420_10LE, Y444, BGRP, GBR, RGBP, xBGR, xRGB, Y42B, NV24, NV16, NV61, NV21, I420, YV12, Y41B, YUV9, YVU9, GRAY16_LE, GRAY8, v216, v210, r210, v308, IYU2, RGB, BGR, UYVY, VYUY, YVYU, RGB16, BGR16, RGB15, BGR15 }\n width: 1, 
2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(memory:D3D12Memory, meta:GstVideoOverlayComposition):\n format: { RGBA64_LE, BGRA64_LE, Y416_LE, Y412_LE, RGB10A2_LE, Y410, BGR10A2_LE, Y216_LE, Y212_LE, Y210, VUYA, RGBA, BGRA, RBGA, P016_LE, P012_LE, P010_10LE, RGBx, BGRx, YUY2, NV12, ARGB64_LE, AYUV64, GBRA_12LE, GBRA_10LE, AYUV, ABGR, ARGB, GBRA, Y444_16LE, A444_16LE, A444_12LE, A444_10LE, A444, A422_16LE, A422_12LE, A422_10LE, A422, A420_16LE, A420_12LE, A420_10LE, A420, AV12, GBR_16LE, Y444_12LE, GBR_12LE, I422_12LE, I420_12LE, Y444_10LE, GBR_10LE, I422_10LE, I420_10LE, Y444, BGRP, GBR, RGBP, xBGR, xRGB, Y42B, NV24, NV16, NV61, NV21, I420, YV12, Y41B, YUV9, YVU9, GRAY16_LE, GRAY8, v216, v210, r210, v308, IYU2, RGB, BGR, UYVY, VYUY, YVYU, RGB16, BGR16, RGB15, BGR15 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "direction": "src", + "presence": "always" + } + }, + "properties": { + "uv-remap": { + "blurb": "ID3D12Resource for UV coordinates remapping. Valid formats are R8G8B8A8_UNORM and R16G16B16A16_UNORM. 
R -> U, G -> V, B -> unused, and A -> mask where A >= 0.5 applies remapping, otherwise fill background color", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "mutable": "playing", + "readable": true, + "type": "gpointer", + "writable": true + } + }, + "rank": "none" + }, "d3d12scale": { "author": "Seungha Yang <seungha@centricular.com>", "description": "Resizes video using Direct3D12", @@ -15512,6 +16080,23 @@ }, "rank": "none", "signals": { + "last-rendered-sample": { + "action": true, + "args": + { + "name": "arg0", + "type": "gboolean" + } + , + "return-type": "GstSample", + "when": "last" + }, + "redraw": { + "action": true, + "args": , + "return-type": "void", + "when": "last" + }, "resize": { "action": true, "args": @@ -15526,6 +16111,29 @@ , "return-type": "void", "when": "last" + }, + "uv-remap": { + "action": true, + "args": + { + "name": "arg0", + "type": "guint" + }, + { + "name": "arg1", + "type": "gpointer" + }, + { + "name": "arg2", + "type": "gpointer" + }, + { + "name": "arg3", + "type": "gpointer" + } + , + "return-type": "void", + "when": "last" + } } }, @@ -16154,12 +16762,12 @@ "klass": "Codec/Decoder/Video/Hardware", "pad-templates": { "sink": { - "caps": "video/x-vp9:\n alignment: frame\n profile: 0\n width: 1, 16384 \n height: 1, 16384 \nvideo/x-vp9:\n alignment: frame\n profile: 2\n bit-depth-luma: 10\nbit-depth-chroma: 10\n width: 1, 16384 \n height: 1, 16384 \n", + "caps": "video/x-vp9:\n alignment: frame\n profile: 0\n width: 1, 8192 \n height: 1, 8192 \nvideo/x-vp9:\n alignment: frame\n profile: 2\n bit-depth-luma: 10\nbit-depth-chroma: 10\n width: 1, 8192 \n height: 1, 8192 \n", "direction": "sink", "presence": "always" }, "src": { - "caps": "video/x-raw(memory:D3D12Memory):\n format: { NV12, P010_10LE }\n width: 1, 16384 \n height: 1, 16384 \n\nvideo/x-raw(memory:D3D11Memory):\n format: { NV12, P010_10LE }\n width: 1, 16384 \n height: 1, 16384 \nvideo/x-raw:\n format: { 
NV12, P010_10LE }\n width: 1, 16384 \n height: 1, 16384 \n", + "caps": "video/x-raw(memory:D3D12Memory):\n format: { NV12, P010_10LE }\n width: 1, 8192 \n height: 1, 8192 \n\nvideo/x-raw(memory:D3D11Memory):\n format: { NV12, P010_10LE }\n width: 1, 8192 \n height: 1, 8192 \nvideo/x-raw:\n format: { NV12, P010_10LE }\n width: 1, 8192 \n height: 1, 8192 \n", "direction": "src", "presence": "always" } @@ -16672,6 +17280,116 @@ } }, + "GstD3D12FisheyeDewarpProjectionType": { + "kind": "enum", + "values": + { + "desc": "Passthrough", + "name": "passthrough", + "value": "0" + }, + { + "desc": "Equirectangular", + "name": "equirect", + "value": "1" + }, + { + "desc": "Panorama", + "name": "panorama", + "value": "2" + }, + { + "desc": "Perspective", + "name": "perspective", + "value": "3" + } + + }, + "GstD3D12FisheyeDewarpRotationOrder": { + "kind": "enum", + "values": + { + "desc": "XYZ", + "name": "xyz", + "value": "0" + }, + { + "desc": "XZY", + "name": "xzy", + "value": "1" + }, + { + "desc": "YXZ", + "name": "yxz", + "value": "2" + }, + { + "desc": "YZX", + "name": "yzx", + "value": "3" + }, + { + "desc": "ZXY", + "name": "zxy", + "value": "4" + }, + { + "desc": "ZYX", + "name": "zyx", + "value": "5" + } + + }, + "GstD3D12FisheyeDewarpRotationSpace": { + "kind": "enum", + "values": + { + "desc": "Local", + "name": "local", + "value": "0" + }, + { + "desc": "World", + "name": "world", + "value": "1" + } + + }, + "GstD3D12InterlaceEngine": { + "kind": "enum", + "values": + { + "desc": "iGPU uses 3D engine, dGPU uses compute engine", + "name": "auto", + "value": "0" + }, + { + "desc": "3D", + "name": "3d", + "value": "1" + }, + { + "desc": "Compute", + "name": "compute", + "value": "2" + } + + }, + "GstD3D12InterlacePattern": { + "kind": "enum", + "values": + { + "desc": "1:1 (e.g. 60p -> 60i)", + "name": "1:1", + "value": "0" + }, + { + "desc": "2:2 (e.g. 
30p -> 60i)", + "name": "2:2", + "value": "1" + } + + }, "GstD3D12IpcIOMode": { "kind": "enum", "values": @@ -16712,6 +17430,31 @@ } }, + "GstD3D12MemcpyCmdQueueType": { + "kind": "enum", + "values": + { + "desc": "Auto", + "name": "auto", + "value": "0" + }, + { + "desc": "3D", + "name": "3d", + "value": "1" + }, + { + "desc": "Compute", + "name": "compute", + "value": "2" + }, + { + "desc": "Copy", + "name": "copy", + "value": "3" + } + + }, "GstD3D12MemoryCopy": { "hierarchy": "GstD3D12MemoryCopy", @@ -16736,6 +17479,30 @@ "readable": true, "type": "gint", "writable": true + }, + "queue-type": { + "blurb": "Command queue type to use for copy operation", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "auto (0)", + "mutable": "ready", + "readable": true, + "type": "GstD3D12MemcpyCmdQueueType", + "writable": true + }, + "use-staging-memory": { + "blurb": "If FALSE, system memory pool will be used instead of GPU-visible staging memory", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "true", + "mutable": "null", + "readable": true, + "type": "gboolean", + "writable": true } } }, @@ -17969,7 +18736,7 @@ "long-name": "Fake Audio Sink", "pad-templates": { "sink": { - "caps": "audio/x-raw:\n format: { F64LE, F64BE, F32LE, F32BE, S32LE, S32BE, U32LE, U32BE, S24_32LE, S24_32BE, U24_32LE, U24_32BE, S24LE, S24BE, U24LE, U24BE, S20LE, S20BE, U20LE, U20BE, S18LE, S18BE, U18LE, U18BE, S16LE, S16BE, U16LE, U16BE, S8, U8 }\n rate: 1, 2147483647 \n channels: 1, 2147483647 \n", + "caps": "audio/x-raw:\n format: { F64LE, F64BE, F32LE, F32BE, S32LE, S32BE, U32LE, U32BE, S24_32LE, S24_32BE, U24_32LE, U24_32BE, S24LE, S24BE, U24LE, U24BE, S20_32LE, S20_32BE, U20_32LE, U20_32BE, S20LE, S20BE, U20LE, U20BE, S18LE, S18BE, U18LE, U18BE, S16LE, S16BE, U16LE, U16BE, S8, U8 }\n rate: 1, 2147483647 \n channels: 1, 2147483647 \n", "direction": 
"sink", "presence": "always" } @@ -18331,7 +19098,7 @@ "long-name": "Fake Video Sink", "pad-templates": { "sink": { - "caps": "video/x-raw(ANY):\n format: { DMA_DRM, A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "caps": "video/x-raw(ANY):\n format: { DMA_DRM, A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, 
A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", "direction": "sink", "presence": "always" } @@ -19531,6 +20298,18 @@ "type": "GstDecklinkModes", "writable": true }, + "output-vanc": { + "blurb": "Output ancillary data from input buffers", + "conditionally-available": false, + "construct": true, + "construct-only": false, + "controllable": false, + "default": "false", + "mutable": "null", + "readable": true, + "type": "gboolean", + "writable": true + }, "persistent-id": { "blurb": "Output device instance to use. Higher priority than \"device-number\".", "conditionally-available": false, @@ -19718,6 +20497,18 @@ "type": "gboolean", "writable": true }, + "output-vanc": { + "blurb": "Extract and output VANC as GstMeta (if present)", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "false", + "mutable": "null", + "readable": true, + "type": "gboolean", + "writable": true + }, "persistent-id": { "blurb": "Output device instance to use. 
Higher priority than \"device-number\".", "conditionally-available": false, @@ -21798,7 +22589,7 @@ "writable": false }, "pem": { - "blurb": "A string containing a X509 certificate and RSA private key in PEM format", + "blurb": "A string containing a X509 certificate and private key in PEM format", "conditionally-available": false, "construct": false, "construct-only": false, @@ -22022,7 +22813,7 @@ "writable": false }, "pem": { - "blurb": "A string containing a X509 certificate and RSA private key in PEM format", + "blurb": "A string containing a X509 certificate and private key in PEM format", "conditionally-available": false, "construct": false, "construct-only": false, @@ -24241,7 +25032,7 @@ "long-name": "DVB Subtitles Overlay", "pad-templates": { "src": { - "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, 
GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(ANY):\n format: { DMA_DRM, A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, 
AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(ANY):\n format: { DMA_DRM, A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, 
NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", "direction": "src", "presence": "always" }, @@ -24251,7 +25042,7 @@ "presence": "always" }, "video_sink": { - "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(ANY):\n format: { DMA_DRM, A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, 
Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, 
NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(ANY):\n format: { DMA_DRM, A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", "direction": "sink", "presence": "always" } @@ -24324,7 +25115,7 @@ "long-name": "Sub-picture Overlay", "pad-templates": { "src": { - 
"caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(ANY):\n format: { DMA_DRM, A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, 
I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, 
GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(ANY):\n format: { DMA_DRM, A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", "direction": "src", "presence": "always" }, @@ -24334,7 +25125,7 @@ "presence": "always" }, "video": { - "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, 
A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(ANY):\n format: { DMA_DRM, A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, 
UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(ANY):\n format: { DMA_DRM, A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, 
A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", "direction": "sink", "presence": "always" } @@ -39856,7 +40647,7 @@ "long-name": "Gtk Wayland Video Sink", "pad-templates": { "sink": { - "caps": "video/x-raw(memory:DMABuf):\n format: DMA_DRM\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \nvideo/x-raw:\n format: { AYUV, RGBA, ARGB, BGRA, ABGR, P010_10LE, NV12_10LE40, v308, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, I420, YV12, NV12, NV21, Y41B, YUV9, YVU9, BGR16, RGB16 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "caps": "video/x-raw(memory:DMABuf):\n format: DMA_DRM\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \nvideo/x-raw:\n format: { BGR10A2_LE, RGB10A2_LE, AYUV, RGBA, ARGB, BGRA, ABGR, BGR10x2_LE, RGB10x2_LE, P010_10LE, NV12_10LE40, Y444, v308, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, 
I420, YV12, NV12, NV21, Y41B, YUV9, YVU9, BGR16, RGB16 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", "direction": "sink", "presence": "always" } @@ -39881,7 +40672,7 @@ "construct-only": false, "controllable": false, "default": "identity (0)", - "mutable": "null", + "mutable": "playing", "readable": true, "type": "GstVideoOrientationMethod", "writable": true @@ -39909,6 +40700,508 @@ "tracers": {}, "url": "Unknown package origin" }, + "hip": { + "description": "HIP plugin", + "elements": { + "hipcompositor": { + "author": "Seungha Yang <seungha@centricular.com>", + "description": "A HIP compositor", + "hierarchy": + "GstHipCompositor", + "GstVideoAggregator", + "GstAggregator", + "GstElement", + "GstObject", + "GInitiallyUnowned", + "GObject" + , + "interfaces": + "GstChildProxy" + , + "klass": "Filter/Editor/Video/Compositor/Hardware", + "pad-templates": { + "sink_%%u": { + "caps": "video/x-raw(memory:HIPMemory):\n format: { I420, YV12, NV12, NV21, P010_10LE, P012_LE, P016_LE, I420_10LE, I420_12LE, Y444, Y444_10LE, Y444_12LE, Y444_16LE, BGRA, RGBA, RGBx, BGRx, ARGB, ABGR, RGB, BGR, BGR10A2_LE, RGB10A2_LE, Y42B, I422_10LE, I422_12LE, RGBP, BGRP, GBR, GBRA, GBR_10LE, GBR_12LE, GBR_16LE, VUYA }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "direction": "sink", + "presence": "request", + "type": "GstHipCompositorPad" + }, + "src": { + "caps": "video/x-raw(memory:HIPMemory):\n format: { I420, YV12, NV12, NV21, P010_10LE, P012_LE, P016_LE, I420_10LE, I420_12LE, Y444, Y444_10LE, Y444_12LE, Y444_16LE, BGRA, RGBA, RGBx, BGRx, ARGB, ABGR, RGB, BGR, BGR10A2_LE, RGB10A2_LE, Y42B, I422_10LE, I422_12LE, RGBP, BGRP, GBR, GBRA, GBR_10LE, GBR_12LE, GBR_16LE, VUYA }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "direction": "src", + "presence": "always", + "type": "GstAggregatorPad" + } + }, + "properties": { + "device-id": { + "blurb": "HIP device ID to use 
(-1 = auto)", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "-1", + "max": "2147483647", + "min": "-1", + "mutable": "ready", + "readable": true, + "type": "gint", + "writable": true + }, + "ignore-inactive-pads": { + "blurb": "Avoid timing out waiting for inactive pads", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "false", + "mutable": "null", + "readable": true, + "type": "gboolean", + "writable": true + }, + "vendor": { + "blurb": "Vendor type", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "unknown (0)", + "mutable": "ready", + "readable": true, + "type": "GstHipVendor", + "writable": true + } + }, + "rank": "none" + }, + "hipconvert": { + "author": "Seungha Yang <seungha@centricular.com>", + "description": "Converts video from one colorspace to another using HIP", + "hierarchy": + "GstHipConvert", + "GstHipBaseConvert", + "GstHipBaseFilter", + "GstBaseTransform", + "GstElement", + "GstObject", + "GInitiallyUnowned", + "GObject" + , + "klass": "Filter/Converter/Video/Hardware", + "pad-templates": { + "sink": { + "caps": "video/x-raw(memory:HIPMemory):\n format: { I420, YV12, NV12, NV21, P010_10LE, P012_LE, P016_LE, I420_10LE, I420_12LE, Y444, Y444_10LE, Y444_12LE, Y444_16LE, BGRA, RGBA, RGBx, BGRx, ARGB, ABGR, RGB, BGR, BGR10A2_LE, RGB10A2_LE, Y42B, I422_10LE, I422_12LE, RGBP, BGRP, GBR, GBRA, GBR_10LE, GBR_12LE, GBR_16LE, VUYA }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "direction": "sink", + "presence": "always" + }, + "src": { + "caps": "video/x-raw(memory:HIPMemory):\n format: { I420, YV12, NV12, NV21, P010_10LE, P012_LE, P016_LE, I420_10LE, I420_12LE, Y444, Y444_10LE, Y444_12LE, Y444_16LE, BGRA, RGBA, RGBx, BGRx, ARGB, ABGR, RGB, BGR, BGR10A2_LE, RGB10A2_LE, Y42B, I422_10LE, 
I422_12LE, RGBP, BGRP, GBR, GBRA, GBR_10LE, GBR_12LE, GBR_16LE, VUYA }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "direction": "src", + "presence": "always" + } + }, + "rank": "none" + }, + "hipconvertscale": { + "author": "Seungha Yang <seungha@centricular.com>", + "description": "Resizes video and allow color conversion using HIP", + "hierarchy": + "GstHipConvertScale", + "GstHipBaseConvert", + "GstHipBaseFilter", + "GstBaseTransform", + "GstElement", + "GstObject", + "GInitiallyUnowned", + "GObject" + , + "interfaces": + "GstVideoDirection" + , + "klass": "Filter/Converter/Video/Scaler/Colorspace/Effect/Hardware", + "pad-templates": { + "sink": { + "caps": "video/x-raw(memory:HIPMemory):\n format: { I420, YV12, NV12, NV21, P010_10LE, P012_LE, P016_LE, I420_10LE, I420_12LE, Y444, Y444_10LE, Y444_12LE, Y444_16LE, BGRA, RGBA, RGBx, BGRx, ARGB, ABGR, RGB, BGR, BGR10A2_LE, RGB10A2_LE, Y42B, I422_10LE, I422_12LE, RGBP, BGRP, GBR, GBRA, GBR_10LE, GBR_12LE, GBR_16LE, VUYA }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "direction": "sink", + "presence": "always" + }, + "src": { + "caps": "video/x-raw(memory:HIPMemory):\n format: { I420, YV12, NV12, NV21, P010_10LE, P012_LE, P016_LE, I420_10LE, I420_12LE, Y444, Y444_10LE, Y444_12LE, Y444_16LE, BGRA, RGBA, RGBx, BGRx, ARGB, ABGR, RGB, BGR, BGR10A2_LE, RGB10A2_LE, Y42B, I422_10LE, I422_12LE, RGBP, BGRP, GBR, GBRA, GBR_10LE, GBR_12LE, GBR_16LE, VUYA }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "direction": "src", + "presence": "always" + } + }, + "properties": { + "add-borders": { + "blurb": "Add borders if necessary to keep the display aspect ratio", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "true", + "mutable": "playing", + "readable": true, + "type": "gboolean", + "writable": true + } + }, + "rank": "none" + }, + 
"hipdownload": { + "author": "Seungha Yang <seungha@centricular.com>", + "description": "Downloads HIP device memory into system memory", + "hierarchy": + "GstHipDownload", + "GstHipMemoryCopy", + "GstBaseTransform", + "GstElement", + "GstObject", + "GInitiallyUnowned", + "GObject" + , + "klass": "Filter/Video", + "pad-templates": { + "sink": { + "caps": "video/x-raw(memory:HIPMemory):\n format: { I420, YV12, NV12, NV21, P010_10LE, P012_LE, P016_LE, I420_10LE, I420_12LE, Y444, Y444_10LE, Y444_12LE, Y444_16LE, BGRA, RGBA, RGBx, BGRx, ARGB, ABGR, RGB, BGR, BGR10A2_LE, RGB10A2_LE, Y42B, I422_10LE, I422_12LE, YUY2, UYVY, RGBP, BGRP, GBR, GBR_10LE, GBR_12LE, GBR_16LE, GBRA, VUYA }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \nvideo/x-raw:\n format: { I420, YV12, NV12, NV21, P010_10LE, P012_LE, P016_LE, I420_10LE, I420_12LE, Y444, Y444_10LE, Y444_12LE, Y444_16LE, BGRA, RGBA, RGBx, BGRx, ARGB, ABGR, RGB, BGR, BGR10A2_LE, RGB10A2_LE, Y42B, I422_10LE, I422_12LE, YUY2, UYVY, RGBP, BGRP, GBR, GBR_10LE, GBR_12LE, GBR_16LE, GBRA, VUYA }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "direction": "sink", + "presence": "always" + }, + "src": { + "caps": "video/x-raw:\n format: { I420, YV12, NV12, NV21, P010_10LE, P012_LE, P016_LE, I420_10LE, I420_12LE, Y444, Y444_10LE, Y444_12LE, Y444_16LE, BGRA, RGBA, RGBx, BGRx, ARGB, ABGR, RGB, BGR, BGR10A2_LE, RGB10A2_LE, Y42B, I422_10LE, I422_12LE, YUY2, UYVY, RGBP, BGRP, GBR, GBR_10LE, GBR_12LE, GBR_16LE, GBRA, VUYA }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(memory:GLMemory):\n format: { I420, YV12, NV12, NV21, P010_10LE, P012_LE, P016_LE, I420_10LE, I420_12LE, Y444, Y444_10LE, Y444_12LE, Y444_16LE, BGRA, RGBA, RGBx, BGRx, ARGB, ABGR, RGB, BGR, BGR10A2_LE, RGB10A2_LE, Y42B, I422_10LE, I422_12LE, YUY2, UYVY, RGBP, BGRP, GBR, GBR_10LE, GBR_12LE, GBR_16LE, GBRA, VUYA }\n width: 1, 2147483647 \n height: 1, 
2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(memory:CUDAMemory):\n format: { I420, YV12, NV12, NV21, P010_10LE, P012_LE, P016_LE, I420_10LE, I420_12LE, Y444, Y444_10LE, Y444_12LE, Y444_16LE, BGRA, RGBA, RGBx, BGRx, ARGB, ABGR, RGB, BGR, BGR10A2_LE, RGB10A2_LE, Y42B, I422_10LE, I422_12LE, YUY2, UYVY, RGBP, BGRP, GBR, GBR_10LE, GBR_12LE, GBR_16LE, GBRA, VUYA }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(memory:HIPMemory):\n format: { I420, YV12, NV12, NV21, P010_10LE, P012_LE, P016_LE, I420_10LE, I420_12LE, Y444, Y444_10LE, Y444_12LE, Y444_16LE, BGRA, RGBA, RGBx, BGRx, ARGB, ABGR, RGB, BGR, BGR10A2_LE, RGB10A2_LE, Y42B, I422_10LE, I422_12LE, YUY2, UYVY, RGBP, BGRP, GBR, GBR_10LE, GBR_12LE, GBR_16LE, GBRA, VUYA }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "direction": "src", + "presence": "always" + } + }, + "rank": "none" + }, + "hipscale": { + "author": "Seungha Yang <seungha@centricular.com>", + "description": "Resize video using HIP", + "hierarchy": + "GstHipScale", + "GstHipBaseConvert", + "GstHipBaseFilter", + "GstBaseTransform", + "GstElement", + "GstObject", + "GInitiallyUnowned", + "GObject" + , + "klass": "Filter/Converter/Video/Scaler/Hardware", + "pad-templates": { + "sink": { + "caps": "video/x-raw(memory:HIPMemory):\n format: { I420, YV12, NV12, NV21, P010_10LE, P012_LE, P016_LE, I420_10LE, I420_12LE, Y444, Y444_10LE, Y444_12LE, Y444_16LE, BGRA, RGBA, RGBx, BGRx, ARGB, ABGR, RGB, BGR, BGR10A2_LE, RGB10A2_LE, Y42B, I422_10LE, I422_12LE, RGBP, BGRP, GBR, GBRA, GBR_10LE, GBR_12LE, GBR_16LE, VUYA }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "direction": "sink", + "presence": "always" + }, + "src": { + "caps": "video/x-raw(memory:HIPMemory):\n format: { I420, YV12, NV12, NV21, P010_10LE, P012_LE, P016_LE, I420_10LE, I420_12LE, Y444, Y444_10LE, Y444_12LE, Y444_16LE, BGRA, RGBA, RGBx, BGRx, ARGB, ABGR, RGB, 
BGR, BGR10A2_LE, RGB10A2_LE, Y42B, I422_10LE, I422_12LE, RGBP, BGRP, GBR, GBRA, GBR_10LE, GBR_12LE, GBR_16LE, VUYA }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "direction": "src", + "presence": "always" + } + }, + "properties": { + "add-borders": { + "blurb": "Add borders if necessary to keep the display aspect ratio", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "true", + "mutable": "playing", + "readable": true, + "type": "gboolean", + "writable": true + } + }, + "rank": "none" + }, + "hipupload": { + "author": "Seungha Yang <seungha@centricular.com>", + "description": "Uploads system memory into HIP device memory", + "hierarchy": + "GstHipUpload", + "GstHipMemoryCopy", + "GstBaseTransform", + "GstElement", + "GstObject", + "GInitiallyUnowned", + "GObject" + , + "klass": "Filter/Video", + "pad-templates": { + "sink": { + "caps": "video/x-raw:\n format: { I420, YV12, NV12, NV21, P010_10LE, P012_LE, P016_LE, I420_10LE, I420_12LE, Y444, Y444_10LE, Y444_12LE, Y444_16LE, BGRA, RGBA, RGBx, BGRx, ARGB, ABGR, RGB, BGR, BGR10A2_LE, RGB10A2_LE, Y42B, I422_10LE, I422_12LE, YUY2, UYVY, RGBP, BGRP, GBR, GBR_10LE, GBR_12LE, GBR_16LE, GBRA, VUYA }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(memory:GLMemory):\n format: { I420, YV12, NV12, NV21, P010_10LE, P012_LE, P016_LE, I420_10LE, I420_12LE, Y444, Y444_10LE, Y444_12LE, Y444_16LE, BGRA, RGBA, RGBx, BGRx, ARGB, ABGR, RGB, BGR, BGR10A2_LE, RGB10A2_LE, Y42B, I422_10LE, I422_12LE, YUY2, UYVY, RGBP, BGRP, GBR, GBR_10LE, GBR_12LE, GBR_16LE, GBRA, VUYA }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(memory:CUDAMemory):\n format: { I420, YV12, NV12, NV21, P010_10LE, P012_LE, P016_LE, I420_10LE, I420_12LE, Y444, Y444_10LE, Y444_12LE, Y444_16LE, BGRA, RGBA, RGBx, BGRx, ARGB, ABGR, RGB, BGR, BGR10A2_LE, RGB10A2_LE, 
Y42B, I422_10LE, I422_12LE, YUY2, UYVY, RGBP, BGRP, GBR, GBR_10LE, GBR_12LE, GBR_16LE, GBRA, VUYA }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(memory:HIPMemory):\n format: { I420, YV12, NV12, NV21, P010_10LE, P012_LE, P016_LE, I420_10LE, I420_12LE, Y444, Y444_10LE, Y444_12LE, Y444_16LE, BGRA, RGBA, RGBx, BGRx, ARGB, ABGR, RGB, BGR, BGR10A2_LE, RGB10A2_LE, Y42B, I422_10LE, I422_12LE, YUY2, UYVY, RGBP, BGRP, GBR, GBR_10LE, GBR_12LE, GBR_16LE, GBRA, VUYA }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "direction": "sink", + "presence": "always" + }, + "src": { + "caps": "video/x-raw(memory:HIPMemory):\n format: { I420, YV12, NV12, NV21, P010_10LE, P012_LE, P016_LE, I420_10LE, I420_12LE, Y444, Y444_10LE, Y444_12LE, Y444_16LE, BGRA, RGBA, RGBx, BGRx, ARGB, ABGR, RGB, BGR, BGR10A2_LE, RGB10A2_LE, Y42B, I422_10LE, I422_12LE, YUY2, UYVY, RGBP, BGRP, GBR, GBR_10LE, GBR_12LE, GBR_16LE, GBRA, VUYA }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \nvideo/x-raw:\n format: { I420, YV12, NV12, NV21, P010_10LE, P012_LE, P016_LE, I420_10LE, I420_12LE, Y444, Y444_10LE, Y444_12LE, Y444_16LE, BGRA, RGBA, RGBx, BGRx, ARGB, ABGR, RGB, BGR, BGR10A2_LE, RGB10A2_LE, Y42B, I422_10LE, I422_12LE, YUY2, UYVY, RGBP, BGRP, GBR, GBR_10LE, GBR_12LE, GBR_16LE, GBRA, VUYA }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "direction": "src", + "presence": "always" + } + }, + "rank": "none" + } + }, + "filename": "gsthip", + "license": "LGPL", + "other-types": { + "GstHipBaseConvert": { + "hierarchy": + "GstHipBaseConvert", + "GstHipBaseFilter", + "GstBaseTransform", + "GstElement", + "GstObject", + "GInitiallyUnowned", + "GObject" + , + "kind": "object" + }, + "GstHipBaseFilter": { + "hierarchy": + "GstHipBaseFilter", + "GstBaseTransform", + "GstElement", + "GstObject", + "GInitiallyUnowned", + "GObject" + , + "kind": "object", + 
"properties": { + "device-id": { + "blurb": "HIP device ID to use (-1 = auto)", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "-1", + "max": "2147483647", + "min": "-1", + "mutable": "ready", + "readable": true, + "type": "gint", + "writable": true + }, + "vendor": { + "blurb": "Vendor type", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "unknown (0)", + "mutable": "null", + "readable": true, + "type": "GstHipVendor", + "writable": true + } + } + }, + "GstHipCompositorOperator": { + "kind": "enum", + "values": + { + "desc": "Source", + "name": "source", + "value": "0" + }, + { + "desc": "Over", + "name": "over", + "value": "1" + } + + }, + "GstHipCompositorPad": { + "hierarchy": + "GstHipCompositorPad", + "GstVideoAggregatorPad", + "GstAggregatorPad", + "GstPad", + "GstObject", + "GInitiallyUnowned", + "GObject" + , + "kind": "object", + "properties": { + "alpha": { + "blurb": "Alpha of the picture", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": true, + "default": "1", + "max": "1", + "min": "0", + "mutable": "null", + "readable": true, + "type": "gdouble", + "writable": true + }, + "height": { + "blurb": "Height of the picture", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": true, + "default": "0", + "max": "2147483647", + "min": "-2147483648", + "mutable": "null", + "readable": true, + "type": "gint", + "writable": true + }, + "operator": { + "blurb": "Blending operator to use for blending this pad over the previous ones", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": true, + "default": "over (1)", + "mutable": "null", + "readable": true, + "type": "GstHipCompositorOperator", + "writable": true + }, + "sizing-policy": { + "blurb": "Sizing policy 
to use for image scaling", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": true, + "default": "none (0)", + "mutable": "null", + "readable": true, + "type": "GstHipCompositorSizingPolicy", + "writable": true + }, + "width": { + "blurb": "Width of the picture", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": true, + "default": "0", + "max": "2147483647", + "min": "-2147483648", + "mutable": "null", + "readable": true, + "type": "gint", + "writable": true + }, + "xpos": { + "blurb": "X position of the picture", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": true, + "default": "0", + "max": "2147483647", + "min": "-2147483648", + "mutable": "null", + "readable": true, + "type": "gint", + "writable": true + }, + "ypos": { + "blurb": "Y position of the picture", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": true, + "default": "0", + "max": "2147483647", + "min": "-2147483648", + "mutable": "null", + "readable": true, + "type": "gint", + "writable": true + } + } + }, + "GstHipCompositorSizingPolicy": { + "kind": "enum", + "values": + { + "desc": "None: Image is scaled to fill configured destination rectangle without padding or keeping the aspect ratio", + "name": "none", + "value": "0" + }, + { + "desc": "Keep Aspect Ratio: Image is scaled to fit destination rectangle specified by GstHipCompositorPad:{xpos, ypos, width, height} with preserved aspect ratio. 
Resulting image will be centered in the destination rectangle with padding if necessary", + "name": "keep-aspect-ratio", + "value": "1" + } + + }, + "GstHipMemoryCopy": { + "hierarchy": + "GstHipMemoryCopy", + "GstBaseTransform", + "GstElement", + "GstObject", + "GInitiallyUnowned", + "GObject" + , + "kind": "object", + "properties": { + "device-id": { + "blurb": "HIP device ID to use (-1 = auto)", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "-1", + "max": "2147483647", + "min": "-1", + "mutable": "ready", + "readable": true, + "type": "gint", + "writable": true + }, + "vendor": { + "blurb": "Vendor type", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "unknown (0)", + "mutable": "null", + "readable": true, + "type": "GstHipVendor", + "writable": true + } + } + }, + "GstHipVendor": { + "kind": "enum", + "values": + { + "desc": "Unknown", + "name": "unknown", + "value": "0" + }, + { + "desc": "AMD", + "name": "amd", + "value": "1" + }, + { + "desc": "NVIDIA", + "name": "nvidia", + "value": "2" + } + + } + }, + "package": "GStreamer Bad Plug-ins", + "source": "gst-plugins-bad", + "tracers": {}, + "url": "Unknown package origin" + }, "hls": { "description": "HTTP Live Streaming (HLS)", "elements": { @@ -40459,7 +41752,7 @@ "long-name": "Internal audio sink", "pad-templates": { "sink": { - "caps": "audio/x-raw:\n format: { F64LE, F64BE, F32LE, F32BE, S32LE, S32BE, U32LE, U32BE, S24_32LE, S24_32BE, U24_32LE, U24_32BE, S24LE, S24BE, U24LE, U24BE, S20LE, S20BE, U20LE, U20BE, S18LE, S18BE, U18LE, U18BE, S16LE, S16BE, U16LE, U16BE, S8, U8 }\n rate: 1, 2147483647 \n channels: 1, 2147483647 \n", + "caps": "audio/x-raw:\n format: { F64LE, F64BE, F32LE, F32BE, S32LE, S32BE, U32LE, U32BE, S24_32LE, S24_32BE, U24_32LE, U24_32BE, S24LE, S24BE, U24LE, U24BE, S20_32LE, S20_32BE, U20_32LE, U20_32BE, S20LE, S20BE, U20LE, U20BE, 
S18LE, S18BE, U18LE, U18BE, S16LE, S16BE, U16LE, U16BE, S8, U8 }\n rate: 1, 2147483647 \n channels: 1, 2147483647 \n", "direction": "sink", "presence": "always" } @@ -40495,7 +41788,7 @@ "long-name": "Internal audio source", "pad-templates": { "src": { - "caps": "audio/x-raw:\n format: { F64LE, F64BE, F32LE, F32BE, S32LE, S32BE, U32LE, U32BE, S24_32LE, S24_32BE, U24_32LE, U24_32BE, S24LE, S24BE, U24LE, U24BE, S20LE, S20BE, U20LE, U20BE, S18LE, S18BE, U18LE, U18BE, S16LE, S16BE, U16LE, U16BE, S8, U8 }\n rate: 1, 2147483647 \n channels: 1, 2147483647 \n layout: interleaved\n", + "caps": "audio/x-raw:\n format: { F64LE, F64BE, F32LE, F32BE, S32LE, S32BE, U32LE, U32BE, S24_32LE, S24_32BE, U24_32LE, U24_32BE, S24LE, S24BE, U24LE, U24BE, S20_32LE, S20_32BE, U20_32LE, U20_32BE, S20LE, S20BE, U20LE, U20BE, S18LE, S18BE, U18LE, U18BE, S16LE, S16BE, U16LE, U16BE, S8, U8 }\n rate: 1, 2147483647 \n channels: 1, 2147483647 \n layout: interleaved\n", "direction": "src", "presence": "always" } @@ -40646,7 +41939,7 @@ "long-name": "Internal video sink", "pad-templates": { "sink": { - "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, 
NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", "direction": "sink", "presence": "always" } @@ -40682,7 +41975,7 @@ "long-name": "Internal video source", "pad-templates": { "src": { - "caps": "video/x-raw:\n format: { 
A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, 
I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", "direction": "src", "presence": "always" } @@ -40743,12 +42036,12 @@ "long-name": "Interlace filter", "pad-templates": { "sink": { - "caps": "video/x-raw:\n format: { AYUV64, Y412_BE, Y412_LE, A444_10BE, A444_10LE, AYUV, VUYA, A422_10BE, A422_10LE, A420_10BE, A420_10LE, A420, Y444_16BE, Y444_16LE, Y444_12BE, Y444_12LE, Y410, Y444_10BE, Y444_10LE, v308, IYU2, Y444, NV24, v216, I422_12BE, I422_12LE, Y212_BE, Y212_LE, UYVP, Y210, NV16_10LE32, v210, I422_10BE, I422_10LE, YUY2, UYVY, VYUY, YVYU, Y42B, NV16, NV61, P016_BE, P016_LE, I420_12BE, I420_12LE, P012_BE, P012_LE, NV12_10LE40, NV12_10LE32, I420_10BE, I420_10LE, P010_10BE, P010_10LE, I420, YV12, NV12, NV21, IYU1, Y41B, YUV9, YVU9 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n interlace-mode: progressive\nvideo/x-raw:\n format: { AYUV64, Y412_BE, Y412_LE, A444_10BE, A444_10LE, AYUV, VUYA, A422_10BE, A422_10LE, A420_10BE, A420_10LE, A420, Y444_16BE, Y444_16LE, Y444_12BE, Y444_12LE, Y410, Y444_10BE, Y444_10LE, v308, IYU2, Y444, NV24, v216, I422_12BE, I422_12LE, Y212_BE, Y212_LE, UYVP, Y210, NV16_10LE32, v210, I422_10BE, I422_10LE, YUY2, UYVY, VYUY, YVYU, Y42B, NV16, NV61, P016_BE, P016_LE, I420_12BE, I420_12LE, P012_BE, P012_LE, NV12_10LE40, NV12_10LE32, I420_10BE, I420_10LE, P010_10BE, 
P010_10LE, I420, YV12, NV12, NV21, IYU1, Y41B, YUV9, YVU9 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n interlace-mode: interleaved\n field-order: { (string)top-field-first, (string)bottom-field-first }\nvideo/x-raw:\n format: { AYUV64, Y412_BE, Y412_LE, A444_10BE, A444_10LE, AYUV, VUYA, A422_10BE, A422_10LE, A420_10BE, A420_10LE, A420, Y444_16BE, Y444_16LE, Y444_12BE, Y444_12LE, Y410, Y444_10BE, Y444_10LE, v308, IYU2, Y444, NV24, v216, I422_12BE, I422_12LE, Y212_BE, Y212_LE, UYVP, Y210, NV16_10LE32, v210, I422_10BE, I422_10LE, YUY2, UYVY, VYUY, YVYU, Y42B, NV16, NV61, P016_BE, P016_LE, I420_12BE, I420_12LE, P012_BE, P012_LE, NV12_10LE40, NV12_10LE32, I420_10BE, I420_10LE, P010_10BE, P010_10LE, I420, YV12, NV12, NV21, IYU1, Y41B, YUV9, YVU9 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n interlace-mode: mixed\n\nvideo/x-raw(format:Interlaced):\n format: { AYUV64, Y412_BE, Y412_LE, A444_10BE, A444_10LE, AYUV, VUYA, A422_10BE, A422_10LE, A420_10BE, A420_10LE, A420, Y444_16BE, Y444_16LE, Y444_12BE, Y444_12LE, Y410, Y444_10BE, Y444_10LE, v308, IYU2, Y444, NV24, v216, I422_12BE, I422_12LE, Y212_BE, Y212_LE, UYVP, Y210, NV16_10LE32, v210, I422_10BE, I422_10LE, YUY2, UYVY, VYUY, YVYU, Y42B, NV16, NV61, P016_BE, P016_LE, I420_12BE, I420_12LE, P012_BE, P012_LE, NV12_10LE40, NV12_10LE32, I420_10BE, I420_10LE, P010_10BE, P010_10LE, I420, YV12, NV12, NV21, IYU1, Y41B, YUV9, YVU9 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n interlace-mode: alternate\n", + "caps": "video/x-raw:\n format: { AYUV64, Y412_BE, Y412_LE, A444_10BE, A444_10LE, AYUV, VUYA, A422_10BE, A422_10LE, A420_10BE, A420_10LE, A420, Y444_16BE, Y444_16LE, Y444_12BE, Y444_12LE, Y410, Y444_10BE, Y444_10LE, v308, IYU2, Y444, NV24, v216, I422_12BE, I422_12LE, Y212_BE, Y212_LE, UYVP, Y210, NV16_10LE40, NV16_10LE32, v210, I422_10BE, I422_10LE, YUY2, UYVY, VYUY, YVYU, Y42B, NV16, NV61, P016_BE, P016_LE, 
I420_12BE, I420_12LE, P012_BE, P012_LE, NV12_10LE40, NV12_10LE32, I420_10BE, I420_10LE, P010_10BE, P010_10LE, I420, YV12, NV12, NV21, IYU1, Y41B, YUV9, YVU9 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n interlace-mode: progressive\nvideo/x-raw:\n format: { AYUV64, Y412_BE, Y412_LE, A444_10BE, A444_10LE, AYUV, VUYA, A422_10BE, A422_10LE, A420_10BE, A420_10LE, A420, Y444_16BE, Y444_16LE, Y444_12BE, Y444_12LE, Y410, Y444_10BE, Y444_10LE, v308, IYU2, Y444, NV24, v216, I422_12BE, I422_12LE, Y212_BE, Y212_LE, UYVP, Y210, NV16_10LE40, NV16_10LE32, v210, I422_10BE, I422_10LE, YUY2, UYVY, VYUY, YVYU, Y42B, NV16, NV61, P016_BE, P016_LE, I420_12BE, I420_12LE, P012_BE, P012_LE, NV12_10LE40, NV12_10LE32, I420_10BE, I420_10LE, P010_10BE, P010_10LE, I420, YV12, NV12, NV21, IYU1, Y41B, YUV9, YVU9 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n interlace-mode: interleaved\n field-order: { (string)top-field-first, (string)bottom-field-first }\nvideo/x-raw:\n format: { AYUV64, Y412_BE, Y412_LE, A444_10BE, A444_10LE, AYUV, VUYA, A422_10BE, A422_10LE, A420_10BE, A420_10LE, A420, Y444_16BE, Y444_16LE, Y444_12BE, Y444_12LE, Y410, Y444_10BE, Y444_10LE, v308, IYU2, Y444, NV24, v216, I422_12BE, I422_12LE, Y212_BE, Y212_LE, UYVP, Y210, NV16_10LE40, NV16_10LE32, v210, I422_10BE, I422_10LE, YUY2, UYVY, VYUY, YVYU, Y42B, NV16, NV61, P016_BE, P016_LE, I420_12BE, I420_12LE, P012_BE, P012_LE, NV12_10LE40, NV12_10LE32, I420_10BE, I420_10LE, P010_10BE, P010_10LE, I420, YV12, NV12, NV21, IYU1, Y41B, YUV9, YVU9 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n interlace-mode: mixed\n\nvideo/x-raw(format:Interlaced):\n format: { AYUV64, Y412_BE, Y412_LE, A444_10BE, A444_10LE, AYUV, VUYA, A422_10BE, A422_10LE, A420_10BE, A420_10LE, A420, Y444_16BE, Y444_16LE, Y444_12BE, Y444_12LE, Y410, Y444_10BE, Y444_10LE, v308, IYU2, Y444, NV24, v216, I422_12BE, I422_12LE, Y212_BE, Y212_LE, UYVP, Y210, 
NV16_10LE40, NV16_10LE32, v210, I422_10BE, I422_10LE, YUY2, UYVY, VYUY, YVYU, Y42B, NV16, NV61, P016_BE, P016_LE, I420_12BE, I420_12LE, P012_BE, P012_LE, NV12_10LE40, NV12_10LE32, I420_10BE, I420_10LE, P010_10BE, P010_10LE, I420, YV12, NV12, NV21, IYU1, Y41B, YUV9, YVU9 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n interlace-mode: alternate\n", "direction": "sink", "presence": "always" }, "src": { - "caps": "video/x-raw:\n format: { AYUV64, Y412_BE, Y412_LE, A444_10BE, A444_10LE, AYUV, VUYA, A422_10BE, A422_10LE, A420_10BE, A420_10LE, A420, Y444_16BE, Y444_16LE, Y444_12BE, Y444_12LE, Y410, Y444_10BE, Y444_10LE, v308, IYU2, Y444, NV24, v216, I422_12BE, I422_12LE, Y212_BE, Y212_LE, UYVP, Y210, NV16_10LE32, v210, I422_10BE, I422_10LE, YUY2, UYVY, VYUY, YVYU, Y42B, NV16, NV61, P016_BE, P016_LE, I420_12BE, I420_12LE, P012_BE, P012_LE, NV12_10LE40, NV12_10LE32, I420_10BE, I420_10LE, P010_10BE, P010_10LE, I420, YV12, NV12, NV21, IYU1, Y41B, YUV9, YVU9 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n interlace-mode: { (string)interleaved, (string)mixed }\n\nvideo/x-raw(format:Interlaced):\n format: { AYUV64, Y412_BE, Y412_LE, A444_10BE, A444_10LE, AYUV, VUYA, A422_10BE, A422_10LE, A420_10BE, A420_10LE, A420, Y444_16BE, Y444_16LE, Y444_12BE, Y444_12LE, Y410, Y444_10BE, Y444_10LE, v308, IYU2, Y444, NV24, v216, I422_12BE, I422_12LE, Y212_BE, Y212_LE, UYVP, Y210, NV16_10LE32, v210, I422_10BE, I422_10LE, YUY2, UYVY, VYUY, YVYU, Y42B, NV16, NV61, P016_BE, P016_LE, I420_12BE, I420_12LE, P012_BE, P012_LE, NV12_10LE40, NV12_10LE32, I420_10BE, I420_10LE, P010_10BE, P010_10LE, I420, YV12, NV12, NV21, IYU1, Y41B, YUV9, YVU9 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n interlace-mode: alternate\n", + "caps": "video/x-raw:\n format: { AYUV64, Y412_BE, Y412_LE, A444_10BE, A444_10LE, AYUV, VUYA, A422_10BE, A422_10LE, A420_10BE, A420_10LE, A420, Y444_16BE, Y444_16LE, 
Y444_12BE, Y444_12LE, Y410, Y444_10BE, Y444_10LE, v308, IYU2, Y444, NV24, v216, I422_12BE, I422_12LE, Y212_BE, Y212_LE, UYVP, Y210, NV16_10LE40, NV16_10LE32, v210, I422_10BE, I422_10LE, YUY2, UYVY, VYUY, YVYU, Y42B, NV16, NV61, P016_BE, P016_LE, I420_12BE, I420_12LE, P012_BE, P012_LE, NV12_10LE40, NV12_10LE32, I420_10BE, I420_10LE, P010_10BE, P010_10LE, I420, YV12, NV12, NV21, IYU1, Y41B, YUV9, YVU9 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n interlace-mode: { (string)interleaved, (string)mixed }\n\nvideo/x-raw(format:Interlaced):\n format: { AYUV64, Y412_BE, Y412_LE, A444_10BE, A444_10LE, AYUV, VUYA, A422_10BE, A422_10LE, A420_10BE, A420_10LE, A420, Y444_16BE, Y444_16LE, Y444_12BE, Y444_12LE, Y410, Y444_10BE, Y444_10LE, v308, IYU2, Y444, NV24, v216, I422_12BE, I422_12LE, Y212_BE, Y212_LE, UYVP, Y210, NV16_10LE40, NV16_10LE32, v210, I422_10BE, I422_10LE, YUY2, UYVY, VYUY, YVYU, Y42B, NV16, NV61, P016_BE, P016_LE, I420_12BE, I420_12LE, P012_BE, P012_LE, NV12_10LE40, NV12_10LE32, I420_10BE, I420_10LE, P010_10BE, P010_10LE, I420, YV12, NV12, NV21, IYU1, Y41B, YUV9, YVU9 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n interlace-mode: alternate\n", "direction": "src", "presence": "always" } @@ -43051,7 +44344,7 @@ "presence": "always" }, "src": { - "caps": "audio/x-raw:\n format: { F64LE, F64BE, F32LE, F32BE, S32LE, S32BE, U32LE, U32BE, S24_32LE, S24_32BE, U24_32LE, U24_32BE, S24LE, S24BE, U24LE, U24BE, S20LE, S20BE, U20LE, U20BE, S18LE, S18BE, U18LE, U18BE, S16LE, S16BE, U16LE, U16BE, S8, U8 }\n rate: 1, 2147483647 \n channels: 1, 2147483647 \n layout: { (string)interleaved, (string)non-interleaved }\naudio/x-unaligned-raw:\n format: { F64LE, F64BE, F32LE, F32BE, S32LE, S32BE, U32LE, U32BE, S24_32LE, S24_32BE, U24_32LE, U24_32BE, S24LE, S24BE, U24LE, U24BE, S20LE, S20BE, U20LE, U20BE, S18LE, S18BE, U18LE, U18BE, S16LE, S16BE, U16LE, U16BE, S8, U8 }\n rate: 1, 2147483647 \n channels: 1, 
2147483647 \n layout: { (string)interleaved, (string)non-interleaved }\naudio/x-alaw:\n rate: 1, 2147483647 \n channels: 1, 2147483647 \naudio/x-mulaw:\n rate: 1, 2147483647 \n channels: 1, 2147483647 \n", + "caps": "audio/x-raw:\n format: { F64LE, F64BE, F32LE, F32BE, S32LE, S32BE, U32LE, U32BE, S24_32LE, S24_32BE, U24_32LE, U24_32BE, S24LE, S24BE, U24LE, U24BE, S20_32LE, S20_32BE, U20_32LE, U20_32BE, S20LE, S20BE, U20LE, U20BE, S18LE, S18BE, U18LE, U18BE, S16LE, S16BE, U16LE, U16BE, S8, U8 }\n rate: 1, 2147483647 \n channels: 1, 2147483647 \n layout: { (string)interleaved, (string)non-interleaved }\naudio/x-unaligned-raw:\n format: { F64LE, F64BE, F32LE, F32BE, S32LE, S32BE, U32LE, U32BE, S24_32LE, S24_32BE, U24_32LE, U24_32BE, S24LE, S24BE, U24LE, U24BE, S20_32LE, S20_32BE, U20_32LE, U20_32BE, S20LE, S20BE, U20LE, U20BE, S18LE, S18BE, U18LE, U18BE, S16LE, S16BE, U16LE, U16BE, S8, U8 }\n rate: 1, 2147483647 \n channels: 1, 2147483647 \n layout: { (string)interleaved, (string)non-interleaved }\naudio/x-alaw:\n rate: 1, 2147483647 \n channels: 1, 2147483647 \naudio/x-mulaw:\n rate: 1, 2147483647 \n channels: 1, 2147483647 \n", "direction": "src", "presence": "always" } @@ -225479,6 +226772,18 @@ "readable": true, "type": "gboolean", "writable": true + }, + "skew-corrections": { + "blurb": "Apply skew corrections", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "true", + "mutable": "null", + "readable": true, + "type": "gboolean", + "writable": true } } } @@ -228397,7 +229702,7 @@ "presence": "request" }, "vanc_sink_%%u": { - "caps": "closedcaption/x-cea-708:\n format: cdp\n framerate: 0/1, 2147483647/1 \n", + "caps": "meta/x-st-2038:\n alignment: frame\n", "direction": "sink", "presence": "request" }, @@ -239556,16 +240861,16 @@ "interfaces": "GstChildProxy" , - "klass": "Qrcode overlay containing buffer information", - "long-name": "qroverlay", + "klass": "Video/Overlay/Debug", + 
"long-name": "debugqroverlay", "pad-templates": { "sink": { - "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(ANY):\n format: { DMA_DRM, A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, 
P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, 
RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(ANY):\n format: { DMA_DRM, A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", "direction": "sink", "presence": "always" }, "src": { - "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, 
A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(ANY):\n format: { DMA_DRM, A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, 
YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(ANY):\n format: { DMA_DRM, A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, 
A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", "direction": "src", "presence": "always" } @@ -239641,16 +240946,16 @@ "interfaces": "GstChildProxy" , - "klass": "Qrcode overlay containing random data", + "klass": "Video/Overlay", "long-name": "qroverlay", "pad-templates": { "sink": { - "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, 
Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(ANY):\n format: { DMA_DRM, A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n 
framerate: 0/1, 2147483647/1 \n", + "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(ANY):\n format: { DMA_DRM, A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, 
Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", "direction": "sink", "presence": "always" }, "src": { - "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, 
NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(ANY):\n format: { DMA_DRM, A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, 
BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(ANY):\n format: { DMA_DRM, A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, 
RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", "direction": "src", "presence": "always" } @@ -241615,7 +242920,7 @@ "long-name": "rsndvdbin", "pad-templates": { "audio": { - "caps": "audio/x-raw:\n format: { F64LE, F64BE, F32LE, F32BE, S32LE, S32BE, U32LE, U32BE, S24_32LE, S24_32BE, U24_32LE, U24_32BE, S24LE, S24BE, U24LE, U24BE, S20LE, S20BE, U20LE, U20BE, S18LE, S18BE, U18LE, U18BE, S16LE, S16BE, U16LE, U16BE, S8, U8 }\n rate: 1, 2147483647 \n channels: 1, 2147483647 \n", + "caps": "audio/x-raw:\n format: { F64LE, F64BE, F32LE, F32BE, S32LE, S32BE, U32LE, U32BE, S24_32LE, S24_32BE, U24_32LE, U24_32BE, S24LE, S24BE, U24LE, U24BE, S20_32LE, S20_32BE, U20_32LE, U20_32BE, S20LE, S20BE, U20LE, U20BE, S18LE, S18BE, U18LE, U18BE, S16LE, S16BE, U16LE, U16BE, S8, U8 }\n rate: 1, 2147483647 \n channels: 1, 2147483647 \n", "direction": "src", "presence": "sometimes" }, @@ -241625,7 +242930,7 @@ "presence": "sometimes" }, "video": { - "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, 
Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", 
"direction": "src", "presence": "sometimes" } @@ -243900,6 +245205,18 @@ } }, "properties": { + "automatic-association-id": { + "blurb": "Whether a SCTP Association ID should be automatically generated.", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "false", + "mutable": "null", + "readable": true, + "type": "gboolean", + "writable": true + }, "local-sctp-port": { "blurb": "Local sctp port for the sctp association. The remote port is configured via the GstSctpEnc element.", "conditionally-available": false, @@ -244274,12 +245591,12 @@ "long-name": "Audio buffer segment clipper", "pad-templates": { "sink": { - "caps": "audio/x-raw:\n format: { F64LE, F64BE, F32LE, F32BE, S32LE, S32BE, U32LE, U32BE, S24_32LE, S24_32BE, U24_32LE, U24_32BE, S24LE, S24BE, U24LE, U24BE, S20LE, S20BE, U20LE, U20BE, S18LE, S18BE, U18LE, U18BE, S16LE, S16BE, U16LE, U16BE, S8, U8 }\n rate: 1, 2147483647 \n channels: 1, 2147483647 \n", + "caps": "audio/x-raw:\n format: { F64LE, F64BE, F32LE, F32BE, S32LE, S32BE, U32LE, U32BE, S24_32LE, S24_32BE, U24_32LE, U24_32BE, S24LE, S24BE, U24LE, U24BE, S20_32LE, S20_32BE, U20_32LE, U20_32BE, S20LE, S20BE, U20LE, U20BE, S18LE, S18BE, U18LE, U18BE, S16LE, S16BE, U16LE, U16BE, S8, U8 }\n rate: 1, 2147483647 \n channels: 1, 2147483647 \n", "direction": "sink", "presence": "always" }, "src": { - "caps": "audio/x-raw:\n format: { F64LE, F64BE, F32LE, F32BE, S32LE, S32BE, U32LE, U32BE, S24_32LE, S24_32BE, U24_32LE, U24_32BE, S24LE, S24BE, U24LE, U24BE, S20LE, S20BE, U20LE, U20BE, S18LE, S18BE, U18LE, U18BE, S16LE, S16BE, U16LE, U16BE, S8, U8 }\n rate: 1, 2147483647 \n channels: 1, 2147483647 \n", + "caps": "audio/x-raw:\n format: { F64LE, F64BE, F32LE, F32BE, S32LE, S32BE, U32LE, U32BE, S24_32LE, S24_32BE, U24_32LE, U24_32BE, S24LE, S24BE, U24LE, U24BE, S20_32LE, S20_32BE, U20_32LE, U20_32BE, S20LE, S20BE, U20LE, U20BE, S18LE, S18BE, U18LE, U18BE, S16LE, S16BE, U16LE, U16BE, S8, U8 
}\n rate: 1, 2147483647 \n channels: 1, 2147483647 \n", "direction": "src", "presence": "always" } @@ -248017,11 +249334,122 @@ "tensordecoders": { "description": "Tensor decoders elements", "elements": { + "classifiertensordecoder": { + "author": "Daniel Morin <daniel.morin@collabora.com>", + "description": "Decode tensors output from classification model using common format.\n\tTensor format must be: \n\t\tDims: batch-size, class_count\n\t\tDatatype: float32 \n\n\t\tTensor M,N\n\t\t\tBatch 0 | Class 0 confidence level | ... | Class N-1 confidence level |\n\t\t\t...\n\t\t\tBatch M-1 | Class 0 confidence level | ... | Class N-1 confidence level |\n\t\t\n\tIn-memory tensor format:\n\n\t\t|Batch 0, Class 0 confidence level |\n\t\t|Batch 0, ... |\n\t\t|Batch 0, Class N-1 confidence level |\n\t\t| ... |\n\t\t|Batch M-1, Class 0 confidence level |\n\t\t|Batch M-1, ... |\n\t\t|Batch M-1, Class N-1 confidence level |\n\n model", + "hierarchy": + "GstClassifierTensorDecoder", + "GstBaseTransform", + "GstElement", + "GstObject", + "GInitiallyUnowned", + "GObject" + , + "klass": "Tensordecoder", + "pad-templates": { + "sink": { + "caps": "video/x-raw:\n tensors: \"tensorgroups\\,\\ classification-generic-out\\=\\(/uniquelist\\)\\{\\ \\(caps\\)\\\"tensor/strided\\\\\\,\\\\\\ tensor-id\\\\\\=\\\\\\(string\\\\\\)classification-generic-out\\\\\\,\\\\\\ dims\\\\\\=\\\\\\(int\\\\\\)\\\\\\<\\\\\\ \\\\\\\\\\\\ 0\\\\\\,\\\\\\ 1\\\\\\ \\\\\\\\\\\\,\\\\\\ \\\\\\\\\\\\ 1\\\\\\,\\\\\\ 2147483647\\\\\\ \\\\\\\\\\\\ \\\\\\>\\\\\\,\\\\\\ dims-order\\\\\\=\\\\\\(string\\\\\\)row-major\\\\\\,\\\\\\ type\\\\\\=\\\\\\(string\\\\\\)\\\\\\{\\\\\\ float32\\\\\\,\\\\\\ uint8\\\\\\ \\\\\\}\\\\\\;\\\\\\ tensor/strided\\\\\\,\\\\\\ tensor-id\\\\\\=\\\\\\(string\\\\\\)classification-generic-out\\\\\\,\\\\\\ dims\\\\\\=\\\\\\(int\\\\\\)\\\\\\<\\\\\\ \\\\\\\\\\\\ 1\\\\\\,\\\\\\ 2147483647\\\\\\ \\\\\\\\\\\\ \\\\\\>\\\\\\,\\\\\\ dims-order\\\\\\=\\\\\\(string\\\\\\)row-major\\\\\\,\\\\\\ 
type\\\\\\=\\\\\\(string\\\\\\)\\\\\\{\\\\\\ float32\\\\\\,\\\\\\ uint8\\\\\\ \\\\\\}\\\"\\ \\}\\;\"\nvideo/x-raw:\n tensors: \"tensorgroups\\,\\ classification-generic-softmaxed-out\\=\\(/uniquelist\\)\\{\\ \\(caps\\)\\\"tensor/strided\\\\\\,\\\\\\ tensor-id\\\\\\=\\\\\\(string\\\\\\)classification-generic-softmaxed-out\\\\\\,\\\\\\ dims\\\\\\=\\\\\\(int\\\\\\)\\\\\\<\\\\\\ \\\\\\\\\\\\ 0\\\\\\,\\\\\\ 1\\\\\\ \\\\\\\\\\\\,\\\\\\ \\\\\\\\\\\\ 1\\\\\\,\\\\\\ 2147483647\\\\\\ \\\\\\\\\\\\ \\\\\\>\\\\\\,\\\\\\ dims-order\\\\\\=\\\\\\(string\\\\\\)row-major\\\\\\,\\\\\\ type\\\\\\=\\\\\\(string\\\\\\)\\\\\\{\\\\\\ float32\\\\\\,\\\\\\ uint8\\\\\\ \\\\\\}\\\\\\;\\\\\\ tensor/strided\\\\\\,\\\\\\ tensor-id\\\\\\=\\\\\\(string\\\\\\)classification-generic-softmaxed-out\\\\\\,\\\\\\ dims\\\\\\=\\\\\\(int\\\\\\)\\\\\\<\\\\\\ \\\\\\\\\\\\ 1\\\\\\,\\\\\\ 2147483647\\\\\\ \\\\\\\\\\\\ \\\\\\>\\\\\\,\\\\\\ dims-order\\\\\\=\\\\\\(string\\\\\\)row-major\\\\\\,\\\\\\ type\\\\\\=\\\\\\(string\\\\\\)\\\\\\{\\\\\\ float32\\\\\\,\\\\\\ uint8\\\\\\ \\\\\\}\\\"\\ \\}\\;\"\n", + "direction": "sink", + "presence": "always" + }, + "src": { + "caps": "ANY", + "direction": "src", + "presence": "always" + } + }, + "properties": { + "class-confidence-threshold": { + "blurb": "Classes with a confidence level inferior to this threshold will be excluded", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "0.7", + "max": "1", + "min": "0", + "mutable": "null", + "readable": true, + "type": "gfloat", + "writable": true + }, + "labels-file": { + "blurb": "Path to a file containing class label. 
COCO format", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "NULL", + "mutable": "null", + "readable": true, + "type": "gchararray", + "writable": true + } + }, + "rank": "secondary" + }, + "ioutracker": { + "author": "Santosh Mahto <santosh.mahto@collabora.com>", + "description": "Track the objects across frames based on Intersection-over-Union (IoU)", + "hierarchy": + "GstIouTracker", + "GstBaseTransform", + "GstElement", + "GstObject", + "GInitiallyUnowned", + "GObject" + , + "klass": "Analyzer/Video", + "pad-templates": { + "sink": { + "caps": "video/x-raw:\n", + "direction": "sink", + "presence": "always" + }, + "src": { + "caps": "video/x-raw:\n", + "direction": "src", + "presence": "always" + } + }, + "properties": { + "iou-score-threshold": { + "blurb": "Threshold for deciding wether the object is same in different frames", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "0.5", + "max": "1", + "min": "0", + "mutable": "null", + "readable": true, + "type": "gfloat", + "writable": true + }, + "min-frame-count-for-lost-track": { + "blurb": "Min number of consecutive frames where object is absent before track is considered lost", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "5", + "max": "-1", + "min": "0", + "mutable": "null", + "readable": true, + "type": "guint", + "writable": true + } + }, + "rank": "primary" + }, "ssdobjectdetector": { "author": "Aaron Boxer <aaron.boxer@collabora.com>, Marcus Edel <marcus.edel@collabora.com>", "description": "Apply tensor output from inference to detect objects in video frames", "hierarchy": "GstSsdObjectDetector", + "GstSsdTensorDec", "GstBaseTransform", "GstElement", "GstObject", @@ -248031,7 +249459,75 @@ "klass": "Tensordecoder/Video", "pad-templates": { "sink": { + "caps": "video/x-raw:\n 
tensors: \"tensorgroups\\,\\ ssd-mobilenet-v1-variant-1-out\\=\\(/uniquelist\\)\\{\\ \\(caps\\)\\\"tensor/strided\\\\\\,\\\\\\ tensor-id\\\\\\=\\\\\\(string\\\\\\)generic-variant-1-out-count\\\\\\,\\\\\\ dims\\\\\\=\\\\\\(int\\\\\\)\\\\\\<\\\\\\ \\\\\\\\\\\\ 0\\\\\\,\\\\\\ 2147483647\\\\\\ \\\\\\\\\\\\ \\\\\\>\\\\\\,\\\\\\ dims-order\\\\\\=\\\\\\(string\\\\\\)row-major\\\\\\,\\\\\\ type\\\\\\=\\\\\\(string\\\\\\)float32\\\"\\,\\ \\(caps\\)\\\"tensor/strided\\\\\\,\\\\\\ tensor-id\\\\\\=\\\\\\(string\\\\\\)ssd-mobilenet-v1-variant-1-out-scores\\\\\\,\\\\\\ dims\\\\\\=\\\\\\(int\\\\\\)\\\\\\<\\\\\\ \\\\\\\\\\\\ 0\\\\\\,\\\\\\ 2147483647\\\\\\ \\\\\\\\\\\\,\\\\\\ 0\\\\\\ \\\\\\>\\\\\\,\\\\\\ dims-order\\\\\\=\\\\\\(string\\\\\\)row-major\\\\\\,\\\\\\ type\\\\\\=\\\\\\(string\\\\\\)float32\\\"\\,\\ \\(caps\\)\\\"tensor/strided\\\\\\,\\\\\\ tensor-id\\\\\\=\\\\\\(string\\\\\\)ssd-mobilenet-v1-variant-1-out-boxes\\\\\\,\\\\\\ dims\\\\\\=\\\\\\(int\\\\\\)\\\\\\<\\\\\\ \\\\\\\\\\\\ 0\\\\\\,\\\\\\ 2147483647\\\\\\ \\\\\\\\\\\\,\\\\\\ 0\\\\\\,\\\\\\ 4\\\\\\ \\\\\\>\\\\\\,\\\\\\ dims-order\\\\\\=\\\\\\(string\\\\\\)row-major\\\\\\,\\\\\\ type\\\\\\=\\\\\\(string\\\\\\)float32\\\"\\,\\ \\(caps\\)\\\"tensor/strided\\\\\\,\\\\\\ tensor-id\\\\\\=\\\\\\(string\\\\\\)ssd-mobilenet-v1-variant-1-out-classes\\\\\\,\\\\\\ dims\\\\\\=\\\\\\(int\\\\\\)\\\\\\<\\\\\\ \\\\\\\\\\\\ 0\\\\\\,\\\\\\ 2147483647\\\\\\ \\\\\\\\\\\\,\\\\\\ 0\\\\\\ \\\\\\>\\\\\\,\\\\\\ dims-order\\\\\\=\\\\\\(string\\\\\\)row-major\\\\\\,\\\\\\ type\\\\\\=\\\\\\(string\\\\\\)float32\\\"\\ \\}\\;\"\n", + "direction": "sink", + "presence": "always" + }, + "src": { "caps": "video/x-raw:\n", + "direction": "src", + "presence": "always" + } + }, + "properties": { + "label-file": { + "blurb": "Label file", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "NULL", + "mutable": "null", + "readable": true, + "type": "gchararray", + "writable": true + }, 
+ "score-threshold": { + "blurb": "Threshold for deciding when to remove boxes based on score", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "0.3", + "max": "1", + "min": "0", + "mutable": "null", + "readable": true, + "type": "gfloat", + "writable": true + }, + "size-threshold": { + "blurb": "Threshold for deciding when to remove boxes based on proportion of the image", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "0.9", + "max": "1", + "min": "0", + "mutable": "null", + "readable": true, + "type": "gfloat", + "writable": true + } + }, + "rank": "none" + }, + "ssdtensordec": { + "author": "Aaron Boxer <aaron.boxer@collabora.com>, Marcus Edel <marcus.edel@collabora.com>", + "description": "Apply tensor output from inference to detect objects in video frames", + "hierarchy": + "GstSsdTensorDec", + "GstBaseTransform", + "GstElement", + "GstObject", + "GInitiallyUnowned", + "GObject" + , + "klass": "Tensordecoder/Video", + "pad-templates": { + "sink": { + "caps": "video/x-raw:\n tensors: \"tensorgroups\\,\\ ssd-mobilenet-v1-variant-1-out\\=\\(/uniquelist\\)\\{\\ \\(caps\\)\\\"tensor/strided\\\\\\,\\\\\\ tensor-id\\\\\\=\\\\\\(string\\\\\\)generic-variant-1-out-count\\\\\\,\\\\\\ dims\\\\\\=\\\\\\(int\\\\\\)\\\\\\<\\\\\\ \\\\\\\\\\\\ 0\\\\\\,\\\\\\ 2147483647\\\\\\ \\\\\\\\\\\\ \\\\\\>\\\\\\,\\\\\\ dims-order\\\\\\=\\\\\\(string\\\\\\)row-major\\\\\\,\\\\\\ type\\\\\\=\\\\\\(string\\\\\\)float32\\\"\\,\\ \\(caps\\)\\\"tensor/strided\\\\\\,\\\\\\ tensor-id\\\\\\=\\\\\\(string\\\\\\)ssd-mobilenet-v1-variant-1-out-scores\\\\\\,\\\\\\ dims\\\\\\=\\\\\\(int\\\\\\)\\\\\\<\\\\\\ \\\\\\\\\\\\ 0\\\\\\,\\\\\\ 2147483647\\\\\\ \\\\\\\\\\\\,\\\\\\ 0\\\\\\ \\\\\\>\\\\\\,\\\\\\ dims-order\\\\\\=\\\\\\(string\\\\\\)row-major\\\\\\,\\\\\\ type\\\\\\=\\\\\\(string\\\\\\)float32\\\"\\,\\ \\(caps\\)\\\"tensor/strided\\\\\\,\\\\\\ 
tensor-id\\\\\\=\\\\\\(string\\\\\\)ssd-mobilenet-v1-variant-1-out-boxes\\\\\\,\\\\\\ dims\\\\\\=\\\\\\(int\\\\\\)\\\\\\<\\\\\\ \\\\\\\\\\\\ 0\\\\\\,\\\\\\ 2147483647\\\\\\ \\\\\\\\\\\\,\\\\\\ 0\\\\\\,\\\\\\ 4\\\\\\ \\\\\\>\\\\\\,\\\\\\ dims-order\\\\\\=\\\\\\(string\\\\\\)row-major\\\\\\,\\\\\\ type\\\\\\=\\\\\\(string\\\\\\)float32\\\"\\,\\ \\(caps\\)\\\"tensor/strided\\\\\\,\\\\\\ tensor-id\\\\\\=\\\\\\(string\\\\\\)ssd-mobilenet-v1-variant-1-out-classes\\\\\\,\\\\\\ dims\\\\\\=\\\\\\(int\\\\\\)\\\\\\<\\\\\\ \\\\\\\\\\\\ 0\\\\\\,\\\\\\ 2147483647\\\\\\ \\\\\\\\\\\\,\\\\\\ 0\\\\\\ \\\\\\>\\\\\\,\\\\\\ dims-order\\\\\\=\\\\\\(string\\\\\\)row-major\\\\\\,\\\\\\ type\\\\\\=\\\\\\(string\\\\\\)float32\\\"\\ \\}\\;\"\n", "direction": "sink", "presence": "always" }, @@ -248083,12 +249579,355 @@ "writable": true } }, + "rank": "secondary" + }, + "tensordecodebin": { + "author": "Daniel Morin <daniel.morin@collabora.com>", + "description": "Tensor Decode Bin", + "hierarchy": + "GstTensorDecodeBin", + "GstBin", + "GstElement", + "GstObject", + "GInitiallyUnowned", + "GObject" + , + "interfaces": + "GstChildProxy" + , + "klass": "Tensor Decoder Bin", + "pad-templates": { + "sink": { + "caps": "ANY", + "direction": "sink", + "presence": "always" + }, + "src": { + "caps": "ANY", + "direction": "src", + "presence": "always" + } + }, + "rank": "none" + }, + "ultralightfacedetectortensordec": { + "author": "Raghavendra Rao <raghavendra.rao@collabora.com>", + "description": "Detect tensor output from the inference of Ultra Light Face Detection to detect the faces in video frames.The original repository of the Ultra Light Face Detection is located at https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB.", + "hierarchy": + "GstFaceDetectorTensorDecoder", + "GstBaseTransform", + "GstElement", + "GstObject", + "GInitiallyUnowned", + "GObject" + , + "klass": "Tensordecoder/Video", + "pad-templates": { + "sink": { + "caps": "video/x-raw:\n tensors: 
\"tensorgroups\\,\\ ultra-lightweight-face-detection-rfb-320-v1-variant-1-out\\=\\(/uniquelist\\)\\{\\ \\(caps\\)\\\"tensor/strided\\\\\\,\\\\\\ tensor-id\\\\\\=\\\\\\(string\\\\\\)ssd-mobilenet-v1-variant-1-out-boxes\\\\\\,\\\\\\ dims\\\\\\=\\\\\\(int\\\\\\)\\\\\\<\\\\\\ 1\\\\\\,\\\\\\ \\\\\\\\\\\\ 1\\\\\\,\\\\\\ 2147483647\\\\\\ \\\\\\\\\\\\,\\\\\\ 4\\\\\\ \\\\\\>\\\\\\,\\\\\\ dims-order\\\\\\=\\\\\\(string\\\\\\)row-major\\\\\\,\\\\\\ type\\\\\\=\\\\\\(string\\\\\\)float32\\\"\\,\\ \\(caps\\)\\\"tensor/strided\\\\\\,\\\\\\ tensor-id\\\\\\=\\\\\\(string\\\\\\)ultra-lightweight-face-detection-rfb-320-v1-variant-1-out-scores\\\\\\,\\\\\\ dims\\\\\\=\\\\\\(int\\\\\\)\\\\\\<\\\\\\ 1\\\\\\,\\\\\\ \\\\\\\\\\\\ 1\\\\\\,\\\\\\ 2147483647\\\\\\ \\\\\\\\\\\\,\\\\\\ 2\\\\\\ \\\\\\>\\\\\\,\\\\\\ dims-order\\\\\\=\\\\\\(string\\\\\\)row-major\\\\\\,\\\\\\ type\\\\\\=\\\\\\(string\\\\\\)float32\\\"\\ \\}\\;\"\n", + "direction": "sink", + "presence": "always" + }, + "src": { + "caps": "video/x-raw:\n", + "direction": "src", + "presence": "always" + } + }, + "properties": { + "iou-threshold": { + "blurb": "Threshold for removing boxes based on proportion of the image", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "0.3", + "max": "1", + "min": "0", + "mutable": "null", + "readable": true, + "type": "gfloat", + "writable": true + }, + "score-threshold": { + "blurb": "Threshold for deciding when to remove boxes based on score", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "0.6", + "max": "1", + "min": "0", + "mutable": "null", + "readable": true, + "type": "gfloat", + "writable": true + } + }, + "rank": "primary" + }, + "yolosegv8tensordec": { + "author": "Daniel Morin <daniel.morin@collabora.com>, Santosh Mahto <santosh.mahto@collabora.com>", + "description": "Decode tensors output from the inference of Yolo or FastSAM model 
(segmentation) on video frames. It works with YOLO version > 8 and FastSAM models.", + "hierarchy": + "GstYoloSegTensorDecoder", + "GstYoloTensorDecoder", + "GstBaseTransform", + "GstElement", + "GstObject", + "GInitiallyUnowned", + "GObject" + , + "klass": "Tensordecoder/Video", + "pad-templates": { + "sink": { + "caps": "video/x-raw:\n tensors: \"tensorgroups\\,\\ yolo-v8-segmentation-out\\=\\(/uniquelist\\)\\{\\ \\(caps\\)\\\"tensor/strided\\\\\\,\\\\\\ tensor-id\\\\\\=\\\\\\(string\\\\\\)yolo-v8-segmentation-out-detections\\\\\\,\\\\\\ dims\\\\\\=\\\\\\(int\\\\\\)\\\\\\<\\\\\\ 1\\\\\\,\\\\\\ \\\\\\\\\\\\ 1\\\\\\,\\\\\\ 2147483647\\\\\\ \\\\\\\\\\\\,\\\\\\ \\\\\\\\\\\\ 1\\\\\\,\\\\\\ 2147483647\\\\\\ \\\\\\\\\\\\ \\\\\\>\\\\\\,\\\\\\ dims-order\\\\\\=\\\\\\(string\\\\\\)col-major\\\\\\,\\\\\\ type\\\\\\=\\\\\\(string\\\\\\)float32\\\"\\,\\ \\(caps\\)\\\"tensor/strided\\\\\\,\\\\\\ tensor-id\\\\\\=\\\\\\(string\\\\\\)yolo-v8-segmentation-out-protos\\\\\\,\\\\\\ dims\\\\\\=\\\\\\(int\\\\\\)\\\\\\<\\\\\\ 1\\\\\\,\\\\\\ \\\\\\\\\\\\ 1\\\\\\,\\\\\\ 2147483647\\\\\\ \\\\\\\\\\\\,\\\\\\ \\\\\\\\\\\\ 1\\\\\\,\\\\\\ 2147483647\\\\\\ \\\\\\\\\\\\,\\\\\\ \\\\\\\\\\\\ 1\\\\\\,\\\\\\ 2147483647\\\\\\ \\\\\\\\\\\\ \\\\\\>\\\\\\,\\\\\\ dims-order\\\\\\=\\\\\\(string\\\\\\)col-major\\\\\\,\\\\\\ type\\\\\\=\\\\\\(string\\\\\\)float32\\\"\\ \\}\\;\"\n", + "direction": "sink", + "presence": "always" + }, + "src": { + "caps": "video/x-raw:\n", + "direction": "src", + "presence": "always" + } + }, + "rank": "secondary" + }, + "yolov8tensordec": { + "author": "Daniel Morin <daniel.morin@collabora.com>, Santosh Mahto <santosh.mahto@collabora.com>", + "description": "Decode tensors output from the inference of YOLO Object Detection or FastSAM model (Detection)on video frames. 
This works on YOLO version 8 and later(v11), and FastSAM models.", + "hierarchy": + "GstYoloTensorDecoder", + "GstBaseTransform", + "GstElement", + "GstObject", + "GInitiallyUnowned", + "GObject" + , + "klass": "Tensordecoder/Video", + "pad-templates": { + "sink": { + "caps": "video/x-raw:\n tensors: \"tensorgroups\\,\\ yolo-v8-out\\=\\(/uniquelist\\)\\{\\ \\(caps\\)\\\"tensor/strided\\\\\\,\\\\\\ tensor-id\\\\\\=\\\\\\(string\\\\\\)yolo-v8-out\\\\\\,\\\\\\ dims\\\\\\=\\\\\\(int\\\\\\)\\\\\\<\\\\\\ 1\\\\\\,\\\\\\ \\\\\\\\\\\\ 1\\\\\\,\\\\\\ 2147483647\\\\\\ \\\\\\\\\\\\,\\\\\\ \\\\\\\\\\\\ 1\\\\\\,\\\\\\ 2147483647\\\\\\ \\\\\\\\\\\\ \\\\\\>\\\\\\,\\\\\\ dims-order\\\\\\=\\\\\\(string\\\\\\)col-major\\\\\\,\\\\\\ type\\\\\\=\\\\\\(string\\\\\\)float32\\\"\\ \\}\\;\"\n", + "direction": "sink", + "presence": "always" + }, + "src": { + "caps": "video/x-raw:\n", + "direction": "src", + "presence": "always" + } + }, + "properties": { + "box-confidence-threshold": { + "blurb": "Boxes with a location confidence level inferior to this threshold will be excluded", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "0.4", + "max": "1", + "min": "0", + "mutable": "null", + "readable": true, + "type": "gfloat", + "writable": true + }, + "class-confidence-threshold": { + "blurb": "Classes with a confidence level inferior to this threshold will be excluded", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "0.4", + "max": "1", + "min": "0", + "mutable": "null", + "readable": true, + "type": "gfloat", + "writable": true + }, + "iou-threshold": { + "blurb": "Maximum intersection-over-union between bounding boxes to consider them distinct.", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "0.7", + "max": "1", + "min": "0", + "mutable": "null", + "readable": true, + 
"type": "gfloat", + "writable": true + }, + "label-file": { + "blurb": "Label file", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "NULL", + "mutable": "null", + "readable": true, + "type": "gchararray", + "writable": true + }, + "max-detections": { + "blurb": "Maximum object/masks detections.", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "100", + "max": "-1", + "min": "1", + "mutable": "null", + "readable": true, + "type": "guint", + "writable": true + } + }, "rank": "primary" } }, "filename": "gsttensordecoders", "license": "LGPL", - "other-types": {}, + "other-types": { + "GstSsdTensorDec": { + "hierarchy": + "GstSsdTensorDec", + "GstBaseTransform", + "GstElement", + "GstObject", + "GInitiallyUnowned", + "GObject" + , + "kind": "object", + "properties": { + "label-file": { + "blurb": "Label file", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "NULL", + "mutable": "null", + "readable": true, + "type": "gchararray", + "writable": true + }, + "score-threshold": { + "blurb": "Threshold for deciding when to remove boxes based on score", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "0.3", + "max": "1", + "min": "0", + "mutable": "null", + "readable": true, + "type": "gfloat", + "writable": true + }, + "size-threshold": { + "blurb": "Threshold for deciding when to remove boxes based on proportion of the image", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "0.9", + "max": "1", + "min": "0", + "mutable": "null", + "readable": true, + "type": "gfloat", + "writable": true + } + } + }, + "GstYoloTensorDecoder": { + "hierarchy": + "GstYoloTensorDecoder", + "GstBaseTransform", + "GstElement", + 
"GstObject", + "GInitiallyUnowned", + "GObject" + , + "kind": "object", + "properties": { + "box-confidence-threshold": { + "blurb": "Boxes with a location confidence level inferior to this threshold will be excluded", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "0.4", + "max": "1", + "min": "0", + "mutable": "null", + "readable": true, + "type": "gfloat", + "writable": true + }, + "class-confidence-threshold": { + "blurb": "Classes with a confidence level inferior to this threshold will be excluded", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "0.4", + "max": "1", + "min": "0", + "mutable": "null", + "readable": true, + "type": "gfloat", + "writable": true + }, + "iou-threshold": { + "blurb": "Maximum intersection-over-union between bounding boxes to consider them distinct.", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "0.7", + "max": "1", + "min": "0", + "mutable": "null", + "readable": true, + "type": "gfloat", + "writable": true + }, + "label-file": { + "blurb": "Label file", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "NULL", + "mutable": "null", + "readable": true, + "type": "gchararray", + "writable": true + }, + "max-detections": { + "blurb": "Maximum object/masks detections.", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "100", + "max": "-1", + "min": "1", + "mutable": "null", + "readable": true, + "type": "guint", + "writable": true + } + } + } + }, "package": "GStreamer Bad Plug-ins", "source": "gst-plugins-bad", "tracers": {}, @@ -249130,7 +250969,7 @@ "long-name": "TTML subtitle renderer", "pad-templates": { "src": { - "caps": "video/x-raw:\n format: { A444_16LE, 
A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(ANY):\n format: { DMA_DRM, A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, 
I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 
1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(ANY):\n format: { DMA_DRM, A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", "direction": "src", "presence": "always" }, @@ -249140,7 +250979,7 @@ "presence": "always" }, "video_sink": { - "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, 
A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(ANY):\n format: { DMA_DRM, A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, 
NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n\nvideo/x-raw(ANY):\n format: { DMA_DRM, A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, 
A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", "direction": "sink", "presence": "always" } @@ -249179,6 +251018,34 @@ } }, "properties": { + "min-memory-size": { + "blurb": "Minimum size to allocate in the case a copy into shared memory is needed.", + "conditionally-available": false, + "construct": true, + "construct-only": false, + "controllable": false, + "default": "0", + "max": "9223372036854775807", + "min": "-1", + "mutable": "null", + "readable": true, + "type": "gint64", + "writable": true + }, + "num-clients": { + "blurb": "The number of clients that are connected currently", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "0", + "max": "-1", + "min": "0", + "mutable": "null", + "readable": true, + "type": "guint", + "writable": false + }, "socket-path": { "blurb": "The path to the control socket used to control the shared memory transport. 
This may be modified during the NULL->READY transition", "conditionally-available": false, @@ -249416,12 +251283,12 @@ "presence": "always" }, "vfsrc": { - "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \nimage/jpeg:\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, 
A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \nimage/jpeg:\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", "direction": "src", "presence": "always" }, "vidsrc": { - "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, 
NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \nimage/jpeg:\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \nvideo/x-h264:\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n stream-format: { (string)byte-stream, (string)avc }\n alignment: au\n profile: { (string)high, (string)main, (string)baseline, (string)constrained-baseline }\n", + "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, 
NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \nimage/jpeg:\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \nvideo/x-h264:\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n stream-format: { (string)byte-stream, (string)avc }\n alignment: au\n profile: { (string)high, (string)main, (string)baseline, (string)constrained-baseline }\n", "direction": "src", "presence": "always" } @@ -250029,7 +251896,7 @@ "presence": "always" }, "src": { - "caps": "video/x-raw(memory:DMABuf):\n format: DMA_DRM\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \nvideo/x-raw:\n format: { MT2110R, MT2110T, NV12_10LE40_4L4, NV12_10LE40, P010_10LE, YUY2, NV12_16L32S, NV12_32L32, NV12_4L4, NV12, I420 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "caps": "video/x-raw(memory:DMABuf):\n format: DMA_DRM\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \nvideo/x-raw:\n format: { NV16_10LE40, NV16, MT2110R, MT2110T, NV12_10LE40_4L4, NV12_10LE40, P010_10LE, YUY2, NV12_16L32S, NV12_32L32, NV12_4L4, NV12, I420 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", "direction": "src", "presence": "always" } @@ -250082,7 +251949,7 @@ "presence": "always" }, "src": { - "caps": "video/x-raw(memory:DMABuf):\n format: DMA_DRM\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \nvideo/x-raw:\n format: { MT2110R, MT2110T, NV12_10LE40_4L4, NV12_10LE40, P010_10LE, YUY2, NV12_16L32S, NV12_32L32, NV12_4L4, NV12, I420 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "caps": "video/x-raw(memory:DMABuf):\n format: DMA_DRM\n width: 1, 2147483647 \n height: 1, 2147483647 
\n framerate: 0/1, 2147483647/1 \nvideo/x-raw:\n format: { NV16_10LE40, NV16, MT2110R, MT2110T, NV12_10LE40_4L4, NV12_10LE40, P010_10LE, YUY2, NV12_16L32S, NV12_32L32, NV12_4L4, NV12, I420 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", "direction": "src", "presence": "always" } @@ -250135,7 +252002,7 @@ "presence": "always" }, "src": { - "caps": "video/x-raw(memory:DMABuf):\n format: DMA_DRM\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \nvideo/x-raw:\n format: { MT2110R, MT2110T, NV12_10LE40_4L4, NV12_10LE40, P010_10LE, YUY2, NV12_16L32S, NV12_32L32, NV12_4L4, NV12, I420 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "caps": "video/x-raw(memory:DMABuf):\n format: DMA_DRM\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \nvideo/x-raw:\n format: { NV16_10LE40, NV16, MT2110R, MT2110T, NV12_10LE40_4L4, NV12_10LE40, P010_10LE, YUY2, NV12_16L32S, NV12_32L32, NV12_4L4, NV12, I420 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", "direction": "src", "presence": "always" } @@ -250188,7 +252055,7 @@ "presence": "always" }, "src": { - "caps": "video/x-raw(memory:DMABuf):\n format: DMA_DRM\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \nvideo/x-raw:\n format: { MT2110R, MT2110T, NV12_10LE40_4L4, NV12_10LE40, P010_10LE, YUY2, NV12_16L32S, NV12_32L32, NV12_4L4, NV12, I420 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "caps": "video/x-raw(memory:DMABuf):\n format: DMA_DRM\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \nvideo/x-raw:\n format: { NV16_10LE40, NV16, MT2110R, MT2110T, NV12_10LE40_4L4, NV12_10LE40, P010_10LE, YUY2, NV12_16L32S, NV12_32L32, NV12_4L4, NV12, I420 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", "direction": "src", "presence": 
"always" } @@ -250271,7 +252138,7 @@ "presence": "always" }, "src": { - "caps": "video/x-raw(memory:DMABuf):\n format: DMA_DRM\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \nvideo/x-raw:\n format: { MT2110R, MT2110T, NV12_10LE40_4L4, NV12_10LE40, P010_10LE, YUY2, NV12_16L32S, NV12_32L32, NV12_4L4, NV12, I420 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "caps": "video/x-raw(memory:DMABuf):\n format: DMA_DRM\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \nvideo/x-raw:\n format: { NV16_10LE40, NV16, MT2110R, MT2110T, NV12_10LE40_4L4, NV12_10LE40, P010_10LE, YUY2, NV12_16L32S, NV12_32L32, NV12_4L4, NV12, I420 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", "direction": "src", "presence": "always" } @@ -250354,7 +252221,7 @@ "presence": "always" }, "src": { - "caps": "video/x-raw(memory:DMABuf):\n format: DMA_DRM\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \nvideo/x-raw:\n format: { MT2110R, MT2110T, NV12_10LE40_4L4, NV12_10LE40, P010_10LE, YUY2, NV12_16L32S, NV12_32L32, NV12_4L4, NV12, I420 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "caps": "video/x-raw(memory:DMABuf):\n format: DMA_DRM\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \nvideo/x-raw:\n format: { NV16_10LE40, NV16, MT2110R, MT2110T, NV12_10LE40_4L4, NV12_10LE40, P010_10LE, YUY2, NV12_16L32S, NV12_32L32, NV12_4L4, NV12, I420 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", "direction": "src", "presence": "always" } @@ -253987,6 +255854,320 @@ "tracers": {}, "url": "Unknown package origin" }, + "vmaf": { + "description": "Netflix VMAF quality metric plugin", + "elements": { + "vmaf": { + "author": "Casey Bateman <casey.bateman@hudl.com>, Andoni Morales <amorales@fluendo.com>, Diego Nieto 
<dnieto@fluendo.com>", + "description": "Provides Video Multi-Method Assessment Fusion metric", + "hierarchy": + "GstVmaf", + "GstVideoAggregator", + "GstAggregator", + "GstElement", + "GstObject", + "GInitiallyUnowned", + "GObject" + , + "klass": "Filter/Analyzer/Video", + "pad-templates": { + "dist_sink": { + "caps": "video/x-raw:\n format: { I420, NV12, YV12, Y42B, Y444, I420_10LE, I422_10LE, Y444_10LE }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "direction": "sink", + "presence": "always", + "type": "GstVideoAggregatorPad" + }, + "ref_sink": { + "caps": "video/x-raw:\n format: { I420, NV12, YV12, Y42B, Y444, I420_10LE, I422_10LE, Y444_10LE }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "direction": "sink", + "presence": "always", + "type": "GstVideoAggregatorPad" + }, + "src": { + "caps": "video/x-raw:\n format: { I420, NV12, YV12, Y42B, Y444, I420_10LE, I422_10LE, Y444_10LE }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "direction": "src", + "presence": "always", + "type": "GstAggregatorPad" + } + }, + "properties": { + "conf-interval": { + "blurb": "Enable confidence intervals", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "false", + "mutable": "null", + "readable": true, + "type": "gboolean", + "writable": true + }, + "disable-clip": { + "blurb": "Disable clipping VMAF values", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "false", + "mutable": "null", + "readable": true, + "type": "gboolean", + "writable": true + }, + "enable-transform": { + "blurb": "Enable transform VMAF scores", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "false", + "mutable": "null", + "readable": true, + "type": "gboolean", + 
"writable": true + }, + "frame-message": { + "blurb": "Enable frame level score messaging", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "false", + "mutable": "null", + "readable": true, + "type": "gboolean", + "writable": true + }, + "log-level": { + "blurb": "VMAF log level", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "none (0)", + "mutable": "null", + "readable": true, + "type": "GstVmafLogLevel", + "writable": true + }, + "model-filename": { + "blurb": "Model *.pkl abs filename, or file version for built in models", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "vmaf_v0.6.1", + "mutable": "null", + "readable": true, + "type": "gchararray", + "writable": true + }, + "ms-ssim": { + "blurb": "Estimate MS-SSIM", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "false", + "mutable": "null", + "readable": true, + "type": "gboolean", + "writable": true + }, + "phone-model": { + "blurb": "Use VMAF phone model", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "false", + "mutable": "null", + "readable": true, + "type": "gboolean", + "writable": true + }, + "pool-method": { + "blurb": "Pool method for mean", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "mean (3)", + "mutable": "null", + "readable": true, + "type": "GstVmafPoolMethod", + "writable": true + }, + "psnr": { + "blurb": "Estimate PSNR", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "false", + "mutable": "null", + "readable": true, + "type": "gboolean", + "writable": true + }, + 
"results-filename": { + "blurb": "VMAF results filename for scores", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "NULL", + "mutable": "null", + "readable": true, + "type": "gchararray", + "writable": true + }, + "results-format": { + "blurb": "VMAF results file format used for scores (csv, xml, json)", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "none (0)", + "mutable": "null", + "readable": true, + "type": "GstVmafResultsFormat", + "writable": true + }, + "ssim": { + "blurb": "Estimate SSIM", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "false", + "mutable": "null", + "readable": true, + "type": "gboolean", + "writable": true + }, + "subsample": { + "blurb": "Computing on one of every N frames", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "1", + "max": "128", + "min": "1", + "mutable": "null", + "readable": true, + "type": "guint", + "writable": true + }, + "threads": { + "blurb": "The number of threads", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "8", + "max": "2147483647", + "min": "0", + "mutable": "null", + "readable": true, + "type": "guint", + "writable": true + } + }, + "rank": "none" + } + }, + "filename": "gstvmaf", + "license": "LGPL", + "other-types": { + "GstVmafLogLevel": { + "kind": "enum", + "values": + { + "desc": "No logging", + "name": "none", + "value": "0" + }, + { + "desc": "Error", + "name": "error", + "value": "1" + }, + { + "desc": "Warning", + "name": "warning", + "value": "2" + }, + { + "desc": "Info", + "name": "info", + "value": "3" + }, + { + "desc": "Debug", + "name": "debug", + "value": "4" + } + + }, + "GstVmafPoolMethod": { + "kind": "enum", + 
"values": + { + "desc": "Minimum value", + "name": "min", + "value": "1" + }, + { + "desc": "Maximum value", + "name": "max", + "value": "2" + }, + { + "desc": "Arithmetic mean", + "name": "mean", + "value": "3" + }, + { + "desc": "Harmonic mean", + "name": "harmonic_mean", + "value": "4" + } + + }, + "GstVmafResultsFormat": { + "kind": "enum", + "values": + { + "desc": "None", + "name": "none", + "value": "0" + }, + { + "desc": "XML", + "name": "xml", + "value": "1" + }, + { + "desc": "Comma Separated File (csv)", + "name": "csv", + "value": "3" + }, + { + "desc": "JSON", + "name": "json", + "value": "2" + } + + } + }, + "package": "GStreamer Bad Plug-ins", + "source": "gst-plugins-bad", + "tracers": {}, + "url": "Unknown package origin" + }, "vmnc": { "description": "VmWare Video Codec plugins", "elements": { @@ -254334,6 +256515,160 @@ }, "rank": "none" }, + "vulkanh264enc": { + "author": "Stéphane Cerveau <scerveau@igalia.com>, Victor Jaquez <vjaquez@igalia.com>", + "description": "A H.264 video encoder based on Vulkan", + "hierarchy": + "GstVulkanH264Encoder", + "GstH264Encoder", + "GstVideoEncoder", + "GstElement", + "GstObject", + "GInitiallyUnowned", + "GObject" + , + "interfaces": + "GstPreset" + , + "klass": "Codec/Encoder/Video/Hardware", + "pad-templates": { + "sink": { + "caps": "video/x-raw(memory:VulkanImage):\n format: NV12\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "direction": "sink", + "presence": "always" + }, + "src": { + "caps": "video/x-h264:\n profile: { (string)main, (string)high, (string)constrained-baseline }\n stream-format: byte-stream\n alignment: au\n", + "direction": "src", + "presence": "always" + } + }, + "properties": { + "aud": { + "blurb": "Insert AU (Access Unit) delimeter for each frame", + "conditionally-available": false, + "construct": true, + "construct-only": false, + "controllable": false, + "default": "true", + "mutable": "playing", + "readable": true, + "type": "gboolean", + 
"writable": true + }, + "bitrate": { + "blurb": "The desired bitrate expressed in kbps (0: auto-calculate)", + "conditionally-available": false, + "construct": true, + "construct-only": false, + "controllable": false, + "default": "0", + "max": "-1", + "min": "0", + "mutable": "playing", + "readable": true, + "type": "guint", + "writable": true + }, + "max-qp": { + "blurb": "Maximum quantization value for each frame (0: disabled)", + "conditionally-available": false, + "construct": true, + "construct-only": false, + "controllable": false, + "default": "0", + "max": "51", + "min": "0", + "mutable": "playing", + "readable": true, + "type": "guint", + "writable": true + }, + "min-qp": { + "blurb": "Minimum quantization value for each frame (0: disabled)", + "conditionally-available": false, + "construct": true, + "construct-only": false, + "controllable": false, + "default": "0", + "max": "51", + "min": "0", + "mutable": "playing", + "readable": true, + "type": "guint", + "writable": true + }, + "qp-b": { + "blurb": "Constant quantization value for each B-frame slice", + "conditionally-available": false, + "construct": true, + "construct-only": false, + "controllable": false, + "default": "26", + "max": "51", + "min": "0", + "mutable": "playing", + "readable": true, + "type": "guint", + "writable": true + }, + "qp-i": { + "blurb": "Constant quantization value for each I-frame slice", + "conditionally-available": false, + "construct": true, + "construct-only": false, + "controllable": false, + "default": "26", + "max": "51", + "min": "0", + "mutable": "playing", + "readable": true, + "type": "guint", + "writable": true + }, + "qp-p": { + "blurb": "Constant quantization value for each P-frame slice", + "conditionally-available": false, + "construct": true, + "construct-only": false, + "controllable": false, + "default": "26", + "max": "51", + "min": "0", + "mutable": "playing", + "readable": true, + "type": "guint", + "writable": true + }, + "quality": { + "blurb": 
"Video encoding quality level", + "conditionally-available": false, + "construct": true, + "construct-only": false, + "controllable": false, + "default": "2", + "max": "10", + "min": "0", + "mutable": "playing", + "readable": true, + "type": "guint", + "writable": true + }, + "rate-control": { + "blurb": "The encoding rate control mode to use", + "conditionally-available": false, + "construct": true, + "construct-only": false, + "controllable": false, + "default": "cqp (1)", + "mutable": "playing", + "readable": true, + "type": "GstVulkanEncoderRateControlMode", + "writable": true + } + }, + "rank": "none" + }, "vulkanh265dec": { "author": "Víctor Jáquez <vjaquez@igalia.com>", "description": "A H.265 video decoder based on Vulkan", @@ -254682,6 +257017,116 @@ "filename": "gstvulkan", "license": "LGPL", "other-types": { + "GstH264Encoder": { + "hierarchy": + "GstH264Encoder", + "GstVideoEncoder", + "GstElement", + "GstObject", + "GInitiallyUnowned", + "GObject" + , + "interfaces": + "GstPreset" + , + "kind": "object", + "properties": { + "b-frames": { + "blurb": "Maximum number of consecutive B frames between I and P reference frames", + "conditionally-available": false, + "construct": true, + "construct-only": false, + "controllable": false, + "default": "0", + "max": "31", + "min": "0", + "mutable": "null", + "readable": true, + "type": "guint", + "writable": true + }, + "b-pyramid": { + "blurb": "Enable the b-pyramid reference structure in the GOP", + "conditionally-available": false, + "construct": true, + "construct-only": false, + "controllable": false, + "default": "false", + "mutable": "null", + "readable": true, + "type": "gboolean", + "writable": true + }, + "i-frames": { + "blurb": "Force the number of I frames insertion within one GOP, not including the first IDR frame", + "conditionally-available": false, + "construct": true, + "construct-only": false, + "controllable": false, + "default": "0", + "max": "2147483647", + "min": "0", + "mutable": "null", + 
"readable": true, + "type": "guint", + "writable": true + }, + "idr-period": { + "blurb": "Maximum number of frames between two IDR frames", + "conditionally-available": false, + "construct": true, + "construct-only": false, + "controllable": false, + "default": "0", + "max": "1073741824", + "min": "0", + "mutable": "null", + "readable": true, + "type": "guint", + "writable": true + }, + "num-ref-frames": { + "blurb": "Number of frames referenced by P and B frames", + "conditionally-available": false, + "construct": true, + "construct-only": false, + "controllable": false, + "default": "3", + "max": "16", + "min": "0", + "mutable": "null", + "readable": true, + "type": "gint", + "writable": true + } + } + }, + "GstVulkanEncoderRateControlMode": { + "ignore-enum-members": true, + "kind": "enum", + "values": + { + "desc": "Driver's default", + "name": "default", + "value": "0" + }, + { + "desc": "Constant quantizer", + "name": "cqp", + "value": "1" + }, + { + "desc": "Constant bitrate", + "name": "cbr", + "value": "2" + }, + { + "desc": "Variable bitrate", + "name": "vbr", + "value": "4" + } + + }, "GstVulkanStereoDownmix": { "kind": "enum", "values": @@ -254942,7 +257387,7 @@ "description": "Windows audio session API plugin", "elements": { "wasapi2sink": { - "author": "Nirbheek Chauhan <nirbheek@centricular.com>, Ole André Vadla Ravnås <ole.andre.ravnas@tandberg.com>, Seungha Yang <seungha@centricular.com>", + "author": "Seungha Yang <seungha@centricular.com>", "description": "Stream audio to an audio capture device through WASAPI", "hierarchy": "GstWasapi2Sink", @@ -254960,14 +257405,26 @@ "long-name": "Wasapi2Sink", "pad-templates": { "sink": { - "caps": "audio/x-raw:\n format: { F64LE, F64BE, F32LE, F32BE, S32LE, S32BE, U32LE, U32BE, S24_32LE, S24_32BE, U24_32LE, U24_32BE, S24LE, S24BE, U24LE, U24BE, S20LE, S20BE, U20LE, U20BE, S18LE, S18BE, U18LE, U18BE, S16LE, S16BE, U16LE, U16BE, S8, U8 }\n layout: interleaved\n rate: 1, 2147483647 \n channels: 1, 2147483647 
\n", + "caps": "audio/x-raw:\n format: { F64LE, F64BE, F32LE, F32BE, S32LE, S32BE, U32LE, U32BE, S24_32LE, S24_32BE, U24_32LE, U24_32BE, S24LE, S24BE, U24LE, U24BE, S20_32LE, S20_32BE, U20_32LE, U20_32BE, S20LE, S20BE, U20LE, U20BE, S18LE, S18BE, U18LE, U18BE, S16LE, S16BE, U16LE, U16BE, S8, U8 }\n layout: interleaved\n rate: 1, 2147483647 \n channels: 1, 2147483647 \n", "direction": "sink", "presence": "always" } }, "properties": { + "continue-on-error": { + "blurb": "Continue running and consume buffers on device failure", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "false", + "mutable": "ready", + "readable": true, + "type": "gboolean", + "writable": true + }, "device": { - "blurb": "Audio device ID as provided by Windows.Devices.Enumeration.DeviceInformation.Id", + "blurb": "Audio device ID as provided by WASAPI device endpoint ID as provided by IMMDevice::GetId", "conditionally-available": false, "construct": false, "construct-only": false, @@ -254989,6 +257446,18 @@ "type": "gpointer", "writable": true }, + "exclusive": { + "blurb": "Open the device in exclusive mode", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "false", + "mutable": "null", + "readable": true, + "type": "gboolean", + "writable": true + }, "low-latency": { "blurb": "Optimize all settings for lowest latency. 
Always safe to enable.", "conditionally-available": false, @@ -255031,7 +257500,7 @@ "rank": "primary + 1" }, "wasapi2src": { - "author": "Nirbheek Chauhan <nirbheek@centricular.com>, Ole André Vadla Ravnås <ole.andre.ravnas@tandberg.com>, Seungha Yang <seungha@centricular.com>", + "author": "Seungha Yang <seungha@centricular.com>", "description": "Stream audio from an audio capture device through WASAPI", "hierarchy": "GstWasapi2Src", @@ -255050,20 +257519,32 @@ "long-name": "Wasapi2Src", "pad-templates": { "src": { - "caps": "audio/x-raw:\n format: { F64LE, F64BE, F32LE, F32BE, S32LE, S32BE, U32LE, U32BE, S24_32LE, S24_32BE, U24_32LE, U24_32BE, S24LE, S24BE, U24LE, U24BE, S20LE, S20BE, U20LE, U20BE, S18LE, S18BE, U18LE, U18BE, S16LE, S16BE, U16LE, U16BE, S8, U8 }\n layout: interleaved\n rate: 1, 2147483647 \n channels: 1, 2147483647 \n", + "caps": "audio/x-raw:\n format: { F64LE, F64BE, F32LE, F32BE, S32LE, S32BE, U32LE, U32BE, S24_32LE, S24_32BE, U24_32LE, U24_32BE, S24LE, S24BE, U24LE, U24BE, S20_32LE, S20_32BE, U20_32LE, U20_32BE, S20LE, S20BE, U20LE, U20BE, S18LE, S18BE, U18LE, U18BE, S16LE, S16BE, U16LE, U16BE, S8, U8 }\n layout: interleaved\n rate: 1, 2147483647 \n channels: 1, 2147483647 \n", "direction": "src", "presence": "always" } }, "properties": { + "continue-on-error": { + "blurb": "Continue running and produce buffers on device failure", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "false", + "mutable": "ready", + "readable": true, + "type": "gboolean", + "writable": true + }, "device": { - "blurb": "Audio device ID as provided by Windows.Devices.Enumeration.DeviceInformation.Id", + "blurb": "Audio device ID as provided by WASAPI device endpoint ID as provided by IMMDevice::GetId", "conditionally-available": false, "construct": false, "construct-only": false, "controllable": false, "default": "NULL", - "mutable": "ready", + "mutable": "playing", "readable": true, "type": 
"gchararray", "writable": true @@ -255079,6 +257560,18 @@ "type": "gpointer", "writable": true }, + "exclusive": { + "blurb": "Open the device in exclusive mode", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "false", + "mutable": "null", + "readable": true, + "type": "gboolean", + "writable": true + }, "loopback": { "blurb": "Open render device for loopback recording", "conditionally-available": false, @@ -255222,7 +257715,7 @@ "long-name": "wayland video sink", "pad-templates": { "sink": { - "caps": "video/x-raw(memory:DMABuf):\n format: DMA_DRM\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \nvideo/x-raw:\n format: { AYUV, RGBA, ARGB, BGRA, ABGR, P010_10LE, NV12_10LE40, v308, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, I420, YV12, NV12, NV21, Y41B, YUV9, YVU9, BGR16, RGB16 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "caps": "video/x-raw(memory:DMABuf):\n format: DMA_DRM\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \nvideo/x-raw:\n format: { BGR10A2_LE, RGB10A2_LE, AYUV, RGBA, ARGB, BGRA, ABGR, BGR10x2_LE, RGB10x2_LE, P010_10LE, NV12_10LE40, Y444, v308, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, I420, YV12, NV12, NV21, Y41B, YUV9, YVU9, BGR16, RGB16 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", "direction": "sink", "presence": "always" } @@ -255252,6 +257745,18 @@ "type": "gchararray", "writable": true }, + "force-aspect-ratio": { + "blurb": "When enabled, scaling will respect original aspect ratio", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "true", + "mutable": "playing", + "readable": true, + "type": "gboolean", + "writable": true + }, "fullscreen": { "blurb": "Whether the surface should be made 
fullscreen ", "conditionally-available": false, @@ -255259,11 +257764,23 @@ "construct-only": false, "controllable": false, "default": "false", - "mutable": "null", + "mutable": "playing", "readable": true, "type": "gboolean", "writable": true }, + "fullscreen-output": { + "blurb": "The name of the wayland output to fullscreen to.", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "NULL", + "mutable": "null", + "readable": true, + "type": "gchararray", + "writable": true + }, "render-rectangle": { "blurb": "The render rectangle ('<x, y, width, height>')", "conditionally-available": false, @@ -255282,7 +257799,7 @@ "construct-only": false, "controllable": false, "default": "identity (0)", - "mutable": "null", + "mutable": "playing", "readable": true, "type": "GstVideoOrientationMethod", "writable": true @@ -255855,6 +258372,17 @@ "return-type": "gboolean", "when": "last" }, + "close": { + "action": true, + "args": + { + "name": "arg0", + "type": "GstPromise" + } + , + "return-type": "void", + "when": "last" + }, "create-answer": { "action": true, "args": @@ -256786,11 +259314,56 @@ "win32ipc": { "description": "Windows IPC plugin", "elements": { + "win32ipcsink": { + "author": "Seungha Yang <seungha@centricular.com>", + "description": "Windows shared memory sink", + "hierarchy": + "GstWin32IpcSink", + "GstWin32IpcBaseSink", + "GstBaseSink", + "GstElement", + "GstObject", + "GInitiallyUnowned", + "GObject" + , + "klass": "Sink/Generic", + "pad-templates": { + "sink": { + "caps": "ANY", + "direction": "sink", + "presence": "always" + } + }, + "rank": "none" + }, + "win32ipcsrc": { + "author": "Seungha Yang <seungha@centricular.com>", + "description": "Windows shared memory source", + "hierarchy": + "GstWin32IpcSrc", + "GstWin32IpcBaseSrc", + "GstBaseSrc", + "GstElement", + "GstObject", + "GInitiallyUnowned", + "GObject" + , + "klass": "Source/Generic", + "pad-templates": { + "src": { + "caps": 
"ANY", + "direction": "src", + "presence": "always" + } + }, + "rank": "none" + }, "win32ipcvideosink": { "author": "Seungha Yang <seungha@centricular.com>", "description": "Send video frames to win32ipcvideosrc elements", "hierarchy": "GstWin32IpcVideoSink", + "GstWin32IpcBaseSink", "GstBaseSink", "GstElement", "GstObject", @@ -256800,7 +259373,7 @@ "klass": "Sink/Video", "pad-templates": { "sink": { - "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, 
A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", "direction": "sink", "presence": "always" } @@ -256826,6 +259399,7 @@ "description": "Receive video frames from the win32ipcvideosink", "hierarchy": "GstWin32IpcVideoSrc", + "GstWin32IpcBaseSrc", "GstBaseSrc", "GstElement", "GstObject", @@ -256835,7 +259409,7 @@ "klass": "Source/Video", "pad-templates": { "src": { - "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, v216, P016_LE, P016_BE, Y444_12LE, 
GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", + "caps": "video/x-raw:\n format: { A444_16LE, A444_16BE, Y416_LE, AYUV64, RGBA64_LE, ARGB64, ARGB64_LE, BGRA64_LE, ABGR64_LE, Y416_BE, RGBA64_BE, ARGB64_BE, BGRA64_BE, ABGR64_BE, A422_16LE, A422_16BE, A420_16LE, A420_16BE, A444_12LE, GBRA_12LE, A444_12BE, GBRA_12BE, Y412_LE, Y412_BE, A422_12LE, A422_12BE, A420_12LE, A420_12BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, BGR10A2_LE, RGB10A2_LE, Y410, A444, GBRA, AYUV, VUYA, RGBA, RBGA, ARGB, BGRA, ABGR, A422, A420, AV12, Y444_16LE, GBR_16LE, Y444_16BE, GBR_16BE, Y216_LE, Y216_BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, BGR10x2_LE, RGB10x2_LE, r210, I422_10LE, I422_10BE, NV16_10LE40, NV16_10LE32, Y210, UYVP, v210, I420_10LE, I420_10BE, P010_10LE, NV12_10LE40, NV12_10LE32, P010_10BE, MT2110R, MT2110T, NV12_10BE_8L128, NV12_10LE40_4L4, Y444, BGRP, GBR, RGBP, NV24, v308, IYU2, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, VYUY, I420, YV12, NV12, NV21, NV12_16L32S, NV12_32L32, NV12_4L4, NV12_64Z32, NV12_8L128, Y41B, IYU1, YUV9, YVU9, BGR16, RGB16, BGR15, RGB15, RGB8P, GRAY16_LE, GRAY16_BE, 
GRAY10_LE16, GRAY10_LE32, GRAY8 }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", "direction": "src", "presence": "always" } @@ -256873,7 +259447,198 @@ }, "filename": "gstwin32ipc", "license": "LGPL", - "other-types": {}, + "other-types": { + "GstWin32IpcBaseSink": { + "hierarchy": + "GstWin32IpcBaseSink", + "GstBaseSink", + "GstElement", + "GstObject", + "GInitiallyUnowned", + "GObject" + , + "kind": "object", + "properties": { + "current-level-buffers": { + "blurb": "The number of currently queued buffers", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "0", + "max": "18446744073709551615", + "min": "0", + "mutable": "null", + "readable": true, + "type": "guint64", + "writable": false + }, + "leaky-type": { + "blurb": "Whether to drop buffers once the internal queue is full", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "downstream (2)", + "mutable": "null", + "readable": true, + "type": "GstWin32IpcLeakyType", + "writable": true + }, + "max-buffers": { + "blurb": "Maximum number of buffers in queue (0=unlimited)", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "2", + "max": "18446744073709551615", + "min": "0", + "mutable": "null", + "readable": true, + "type": "guint64", + "writable": true + }, + "num-clients": { + "blurb": "The number of connected clients", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "0", + "max": "-1", + "min": "0", + "mutable": "null", + "readable": true, + "type": "guint", + "writable": false + }, + "pipe-name": { + "blurb": "The name of Win32 named pipe to communicate with clients. 
Validation of the pipe name is caller's responsibility", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "\\\\.\\pipe\\gst.win32.ipc.video", + "mutable": "ready", + "readable": true, + "type": "gchararray", + "writable": true + }, + "wait-for-connection": { + "blurb": "Blocks the stream until at least one client is connected", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "false", + "mutable": "null", + "readable": true, + "type": "gboolean", + "writable": true + } + } + }, + "GstWin32IpcBaseSrc": { + "hierarchy": + "GstWin32IpcBaseSrc", + "GstBaseSrc", + "GstElement", + "GstObject", + "GInitiallyUnowned", + "GObject" + , + "kind": "object", + "properties": { + "current-level-buffers": { + "blurb": "The number of currently queued buffers", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "0", + "max": "18446744073709551615", + "min": "0", + "mutable": "null", + "readable": true, + "type": "guint64", + "writable": false + }, + "leaky-type": { + "blurb": "Whether to drop buffers once the internal queue is full", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "downstream (2)", + "mutable": "null", + "readable": true, + "type": "GstWin32IpcLeakyType", + "writable": true + }, + "max-buffers": { + "blurb": "Maximum number of buffers in queue (0=unlimited)", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "2", + "max": "18446744073709551615", + "min": "0", + "mutable": "null", + "readable": true, + "type": "guint64", + "writable": true + }, + "pipe-name": { + "blurb": "The name of Win32 named pipe to communicate with server. 
Validation of the client name is caller's responsibility", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "\\\\.\\pipe\\gst.win32.ipc.video", + "mutable": "ready", + "readable": true, + "type": "gchararray", + "writable": true + }, + "processing-deadline": { + "blurb": "Maximum processing time for a buffer in nanoseconds", + "conditionally-available": false, + "construct": false, + "construct-only": false, + "controllable": false, + "default": "20000000", + "max": "18446744073709551615", + "min": "0", + "mutable": "playing", + "readable": true, + "type": "guint64", + "writable": true + } + } + }, + "GstWin32IpcLeakyType": { + "kind": "enum", + "values": + { + "desc": "None", + "name": "none", + "value": "0" + }, + { + "desc": "Upstream", + "name": "upstream", + "value": "1" + }, + { + "desc": "Downstream", + "name": "downstream", + "value": "2" + } + + } + }, "package": "GStreamer Bad Plug-ins", "source": "gst-plugins-bad", "tracers": {}, @@ -257643,44 +260408,6 @@ "package": "GStreamer Bad Plug-ins", "source": "gst-plugins-bad", "tracers": {}, - "url": "Unknown package origin" - }, - "y4mdec": { - "description": "Demuxes/decodes YUV4MPEG streams", - "elements": { - "y4mdec": { - "author": "David Schleef <ds@schleef.org>", - "description": "Demuxes/decodes YUV4MPEG streams", - "hierarchy": - "GstY4mDec", - "GstElement", - "GstObject", - "GInitiallyUnowned", - "GObject" - , - "klass": "Codec/Demuxer", - "long-name": "YUV4MPEG demuxer/decoder", - "pad-templates": { - "sink": { - "caps": "application/x-yuv4mpeg:\n y4mversion: 2\n", - "direction": "sink", - "presence": "always" - }, - "src": { - "caps": "video/x-raw:\n format: { I420, Y41B, Y42B, Y444, I420_10LE, I422_10LE, Y444_10LE, I420_12LE, I422_12LE, Y444_12LE, Y444_16LE, GRAY8, GRAY16_LE }\n width: 1, 2147483647 \n height: 1, 2147483647 \n framerate: 0/1, 2147483647/1 \n", - "direction": "src", - "presence": "always" - } - }, - "rank": 
"secondary" - } - }, - "filename": "gsty4mdec", - "license": "LGPL", - "other-types": {}, - "package": "GStreamer Bad Plug-ins", - "source": "gst-plugins-bad", - "tracers": {}, "url": "Unknown package origin" }, "zbar": {
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/analyticsoverlay/gstanalyticsoverlay.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/analyticsoverlay/gstanalyticsoverlay.c
Changed
@@ -25,6 +25,7 @@ #endif #include "gstobjectdetectionoverlay.h" +#include "gstsegmentationoverlay.h" /** * SECTION:plugin-analyticsoverlay @@ -41,6 +42,7 @@ gboolean ret = FALSE; ret |= GST_ELEMENT_REGISTER (objectdetectionoverlay, plugin); + ret |= GST_ELEMENT_REGISTER (segmentationoverlay, plugin); return ret; }
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/analyticsoverlay/gstobjectdetectionoverlay.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/analyticsoverlay/gstobjectdetectionoverlay.c
Changed
@@ -72,16 +72,20 @@ guint od_outline_color; guint od_outline_stroke_width; gboolean draw_labels; + gboolean draw_tracking_labels; + gboolean filled_box; guint labels_color; gdouble labels_stroke_width; gdouble labels_outline_ofs; + GstClockTime expire_overlay; + gboolean tracking_outline_colors; /* composition */ gboolean attach_compo_to_buffer; GstBuffer *canvas; gint canvas_length; GstVideoOverlayComposition *composition; - GstVideoOverlayComposition *upstream_composition; + GstClockTime last_composition_update; /* Graphic Outline */ PangoContext *pango_context; @@ -99,7 +103,11 @@ { PROP_OD_OUTLINE_COLOR = 1, PROP_DRAW_LABELS, + PROP_DRAW_TRACKING_LABELS, PROP_LABELS_COLOR, + PROP_FILLED_BOX, + PROP_EXPIRE_OVERLAY, + PROP_TRACKING_OUTLINE_COLORS, _PROP_COUNT }; @@ -166,7 +174,7 @@ static void gst_object_detection_overlay_render_boundingbox (GstObjectDetectionOverlay * overlay, GstObjectDetectionOverlayPangoCairoContext * cairo_ctx, - GstAnalyticsODMtd * od_mtd); + GstAnalyticsRelationMeta * rmeta, GstAnalyticsODMtd * od_mtd); static void gst_object_detection_overlay_render_text_annotation (GstObjectDetectionOverlay @@ -174,6 +182,13 @@ GstAnalyticsODMtd * od_mtd, const gchar * annotation); static void + gst_object_detection_overlay_render_tracking_text_annotation + (GstObjectDetectionOverlay * overlay, + GstObjectDetectionOverlayPangoCairoContext * ctx, + GstAnalyticsRelationMeta * rmeta, const GstAnalyticsODMtd * od_mtd); + + +static void gst_object_detection_overlay_class_init (GstObjectDetectionOverlayClass * klass) { GObjectClass *gobject_class; @@ -215,6 +230,19 @@ TRUE, G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS)); /** + * GstObjectDetectionOverlay:draw-tracking-labels + * + * Control tracking labels drawing + * + * Since: 1.28 + */ + g_object_class_install_property (gobject_class, PROP_DRAW_TRACKING_LABELS, + g_param_spec_boolean ("draw-tracking-labels", + "Draw tracking labels", + "Draw object tracking labels", + TRUE, G_PARAM_READWRITE | 
G_PARAM_STATIC_STRINGS)); + + /** * GstObjectDetectionOverlay:labels-color * * Control labels color @@ -228,6 +256,51 @@ "Color (ARGB) to use for object labels", 0, G_MAXUINT, 0xFFFFFF, G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS)); + /** + * GstObjectDetectionOverlay:filled-box + * + * Draw filled-box in the region where the object is detected is masked. + * Filling color will be based on object-detection-outline-color. + * + * Since: 1.28 + */ + g_object_class_install_property (gobject_class, PROP_FILLED_BOX, + g_param_spec_boolean ("filled-box", + "Filled box", + "Draw a filled box", + TRUE, G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS)); + + /** + * GstObjectDetectionOverlay:expire-overlay + * + * Re-uses the last overlay for the specified amount of time before + * expiring it (in ns), NONE for never + * + * Since: 1.28 + */ + g_object_class_install_property (gobject_class, PROP_EXPIRE_OVERLAY, + g_param_spec_uint64 ("expire-overlay", + "Expire overlay", + "Re-uses the last overlay for the specified amount of time before" + " expiring it (in ns), MAX for never", + 0, GST_CLOCK_TIME_NONE, GST_SECOND, + G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS)); + + /** + * GstObjectDetectionOverlay:tracking-outline-colors + * + * In the presence of tracking information, each object will get its + * own color, ignores object-detection-outline-color + * + * Since: 1.28 + */ + g_object_class_install_property (gobject_class, PROP_TRACKING_OUTLINE_COLORS, + g_param_spec_boolean ("tracking-outline-colors", + "Tracking outline colors", + "In the presence of tracking information, each object will get" + " its own color, ignores object-detection-outline-color", + TRUE, G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS)); + element_class = (GstElementClass *) klass; gst_element_class_add_static_pad_template (element_class, &sink_template); @@ -281,15 +354,18 @@ overlay->pango_layout = NULL; overlay->od_outline_color = 0xFFFFFFFF; overlay->draw_labels = TRUE; + overlay->draw_tracking_labels = 
TRUE; overlay->labels_color = 0xFFFFFFFF; + overlay->filled_box = FALSE; overlay->in_info = &GST_VIDEO_FILTER (overlay)->in_info; overlay->attach_compo_to_buffer = TRUE; overlay->canvas = NULL; overlay->labels_stroke_width = 1.0; overlay->od_outline_stroke_width = 2; overlay->composition = NULL; - overlay->upstream_composition = NULL; overlay->flushing = FALSE; + overlay->expire_overlay = GST_SECOND; + overlay->tracking_outline_colors = TRUE; GST_DEBUG_CATEGORY_INIT (objectdetectionoverlay_debug, "analytics_overlay_od", 0, "Object detection overlay"); } @@ -308,9 +384,21 @@ case PROP_DRAW_LABELS: overlay->draw_labels = g_value_get_boolean (value); break; + case PROP_DRAW_TRACKING_LABELS: + overlay->draw_tracking_labels = g_value_get_boolean (value); + break; case PROP_LABELS_COLOR: overlay->labels_color = g_value_get_uint (value); break; + case PROP_FILLED_BOX: + overlay->filled_box = g_value_get_boolean (value); + break; + case PROP_EXPIRE_OVERLAY: + overlay->expire_overlay = g_value_get_uint64 (value); + break; + case PROP_TRACKING_OUTLINE_COLORS: + overlay->tracking_outline_colors = g_value_get_boolean (value); + break; default: G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); break; @@ -330,9 +418,21 @@ case PROP_DRAW_LABELS: g_value_set_boolean (value, od_overlay->draw_labels); break; + case PROP_DRAW_TRACKING_LABELS: + g_value_set_boolean (value, od_overlay->draw_tracking_labels); + break; case PROP_LABELS_COLOR: g_value_set_uint (value, od_overlay->labels_color); break; + case PROP_FILLED_BOX: + g_value_set_boolean (value, od_overlay->filled_box); + break; + case PROP_EXPIRE_OVERLAY: + g_value_set_uint64 (value, od_overlay->expire_overlay); + break; + case PROP_TRACKING_OUTLINE_COLORS: + g_value_set_boolean (value, od_overlay->tracking_outline_colors); + break; default: G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); break; @@ -669,8 +769,8 @@ gst_object_detection_overlay_transform_frame_ip (GstVideoFilter * filter, GstVideoFrame * 
frame) { + GstBaseTransform *baset = GST_BASE_TRANSFORM (filter); GstObjectDetectionOverlay *overlay = GST_OBJECT_DETECTION_OVERLAY (filter); - GstVideoOverlayCompositionMeta *composition_meta; gpointer state = NULL; GstVideoOverlayRectangle *rectangle = NULL; gchar str_buf5; @@ -679,6 +779,7 @@ gint x, y, w, h; gfloat loc_confi_lvl; gboolean success; + GstClockTime rt = GST_CLOCK_TIME_NONE; GST_DEBUG_OBJECT (filter, "buffer writeable=%d", gst_buffer_is_writable (frame->buffer)); @@ -690,16 +791,9 @@ } g_mutex_unlock (&overlay->stream_event_mutex); - composition_meta = - gst_buffer_get_video_overlay_composition_meta (frame->buffer); - if (composition_meta) { - if (overlay->upstream_composition != composition_meta->overlay) { - GST_DEBUG_OBJECT (overlay, "GstVideoOverlayCompositionMeta found."); - overlay->upstream_composition = composition_meta->overlay; - } - } else if (overlay->upstream_composition != NULL) { - overlay->upstream_composition = NULL; - } + if (baset->have_segment) + rt = gst_segment_to_running_time (&baset->segment, GST_FORMAT_TIME, + GST_BUFFER_PTS (frame->buffer)); GstAnalyticsRelationMeta *rmeta = (GstAnalyticsRelationMeta *) gst_buffer_get_meta (GST_BUFFER (frame->buffer), @@ -730,12 +824,9 @@ if (overlay->composition) gst_video_overlay_composition_unref (overlay->composition); - if (overlay->upstream_composition) { - overlay->composition = - gst_video_overlay_composition_copy (overlay->upstream_composition); - } else { - overlay->composition = gst_video_overlay_composition_new (NULL); - } + overlay->composition = gst_video_overlay_composition_new (NULL); + + overlay->last_composition_update = rt; /* Get quark represent object detection metadata type */ GstAnalyticsMtdType rlt_type = gst_analytics_od_mtd_get_mtd_type (); @@ -756,7 +847,7 @@ gst_analytics_cls_mtd_get_mtd_type (), NULL, &cls_rlt_mtd); gst_object_detection_overlay_render_boundingbox - (GST_OBJECT_DETECTION_OVERLAY (filter), &cairo_ctx, od_mtd); + (GST_OBJECT_DETECTION_OVERLAY 
(filter), &cairo_ctx, rmeta, od_mtd); if (overlay->draw_labels) { if (success) { @@ -785,6 +876,10 @@ g_free (text); } + + if (overlay->draw_tracking_labels) + gst_object_detection_overlay_render_tracking_text_annotation + (GST_OBJECT_DETECTION_OVERLAY (filter), &cairo_ctx, rmeta, od_mtd); } rectangle = gst_video_overlay_rectangle_new_raw (overlay->canvas, @@ -798,6 +893,15 @@ gst_object_detection_overlay_destroy_cairo_context (&cairo_ctx); gst_buffer_unmap (buffer, &map); + } else { + if (rt != GST_CLOCK_TIME_NONE && + overlay->expire_overlay != GST_CLOCK_TIME_NONE && + overlay->last_composition_update != GST_CLOCK_TIME_NONE && + overlay->composition && + overlay->last_composition_update + overlay->expire_overlay <= rt) { + gst_video_overlay_composition_unref (overlay->composition); + overlay->composition = NULL; + } } if (overlay->composition) { @@ -816,34 +920,139 @@ return GST_FLOW_OK; } + +/* + * HSV version using golden angle distribution around the color wheel. + * This ensures maximum color separation by dividing the color wheel optimally. + * Sequence: 0°, 180°, 90°, 270°, 45°, 225°, 135°, 315°, etc. + * + * Returns RGB color as uint32_t in format 0x00RRGGBB + */ +static guint32 +generate_track_color_hsv (guint32 track_id) +{ + gfloat h = 0.0f; + gfloat increment = 0.5f; + // Fixed saturation and value for consistent appearance + const gfloat S = 0.85f; // High saturation for vivid colors + const gfloat V = 0.95f; // High value for bright colors + + /* Start from 1 to avoid special case */ + track_id++; + + /* Calculate hue using bit-reversal pattern for optimal distribution */ + /* Gives us the sequence: 0, 0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875, .. 
*/ + while (track_id > 1) { + if (track_id & 1) { + h += increment; + } + track_id >>= 1; + increment *= 0.5f; + } + + /* Keep hue in 0, 1) range */ + while (h >= 1.0f) + h -= 1.0f; + + /* Convert HSV to RGB */ + int hi = (int) (h * 6.0f); + gfloat f = h * 6.0f - hi; + gfloat p = V * (1.0f - S); + gfloat q = V * (1.0f - f * S); + gfloat t = V * (1.0f - (1.0f - f) * S); + + gfloat r, g, b; + switch (hi % 6) { + case 0: + r = V; + g = t; + b = p; + break; + case 1: + r = q; + g = V; + b = p; + break; + case 2: + r = p; + g = V; + b = t; + break; + case 3: + r = p; + g = q; + b = V; + break; + case 4: + r = t; + g = p; + b = V; + break; + case 5: + r = V; + g = p; + b = q; + break; + default: + r = g = b = 0; + break; + } + + guint8 r8 = (guint8) (r * 255.0f); + guint8 g8 = (guint8) (g * 255.0f); + guint8 b8 = (guint8) (b * 255.0f); + + return ((guint32) r8 << 16) | ((guint32) g8 << 8) | (guint32) b8 | 0xFF000000; +} + static void gst_object_detection_overlay_render_boundingbox (GstObjectDetectionOverlay * overlay, GstObjectDetectionOverlayPangoCairoContext * ctx, - GstAnalyticsODMtd * od_mtd) + GstAnalyticsRelationMeta * rmeta, GstAnalyticsODMtd * od_mtd) { gint x, y, w, h; gfloat _dummy; - cairo_save (ctx->cr); - gst_analytics_od_mtd_get_location (od_mtd, &x, &y, &w, &h, &_dummy); gint maxw = GST_VIDEO_INFO_WIDTH (overlay->in_info) - 1; gint maxh = GST_VIDEO_INFO_HEIGHT (overlay->in_info) - 1; + GstAnalyticsTrackingMtd tracking_mtd; + guint32 color; + + cairo_save (ctx->cr); + gst_analytics_od_mtd_get_location (od_mtd, &x, &y, &w, &h, &_dummy); x = CLAMP (x, 0, maxw); y = CLAMP (y, 0, maxh); w = CLAMP (w, 0, maxw - x); h = CLAMP (h, 0, maxh - y); + if (overlay->tracking_outline_colors && + gst_analytics_relation_meta_get_direct_related (rmeta, od_mtd->id, + GST_ANALYTICS_REL_TYPE_RELATE_TO, + gst_analytics_tracking_mtd_get_mtd_type (), NULL, &tracking_mtd)) { + guint64 tid; + + gst_analytics_tracking_mtd_get_info (&tracking_mtd, &tid, NULL, NULL, NULL); + + color = 
generate_track_color_hsv (tid & 0xFFFFFFF); + } else { + color = overlay->od_outline_color; + } + /* Set bounding box stroke color and width */ cairo_set_source_rgba (ctx->cr, - ((overlay->od_outline_color >> 16) & 0xFF) / 255.0, - ((overlay->od_outline_color >> 8) & 0xFF) / 255.0, - ((overlay->od_outline_color) & 0xFF) / 255.0, - ((overlay->od_outline_color >> 24) & 0xFF) / 255.0); + ((color >> 16) & 0xFF) / 255.0, + ((color >> 8) & 0xFF) / 255.0, + (color & 0xFF) / 255.0, ((color >> 24) & 0xFF) / 255.0); cairo_set_line_width (ctx->cr, overlay->od_outline_stroke_width); /* draw bounding box */ cairo_rectangle (ctx->cr, x, y, w, h); - cairo_stroke (ctx->cr); + + if (overlay->filled_box == FALSE) + cairo_stroke (ctx->cr); + else + cairo_fill (ctx->cr); + cairo_restore (ctx->cr); } @@ -889,3 +1098,61 @@ cairo_stroke (ctx->cr); cairo_restore (ctx->cr); } + +static void + gst_object_detection_overlay_render_tracking_text_annotation + (GstObjectDetectionOverlay * overlay, + GstObjectDetectionOverlayPangoCairoContext * ctx, + GstAnalyticsRelationMeta * rmeta, const GstAnalyticsODMtd * od_mtd) +{ + GstAnalyticsMtd tracking_mtd; + guint64 tid; + PangoRectangle ink_rect, logical_rect; + gint x, y, w, h; + gint maxw = GST_VIDEO_INFO_WIDTH (overlay->in_info) - 1; + gint maxh = GST_VIDEO_INFO_HEIGHT (overlay->in_info) - 1; + + gchar *annotation; + + if (!gst_analytics_relation_meta_get_direct_related (rmeta, od_mtd->id, + GST_ANALYTICS_REL_TYPE_RELATE_TO, + gst_analytics_tracking_mtd_get_mtd_type (), NULL, &tracking_mtd)) + return; + + gst_analytics_od_mtd_get_location (od_mtd, &x, &y, &w, &h, NULL); + gst_analytics_tracking_mtd_get_info (&tracking_mtd, &tid, NULL, NULL, NULL); + + cairo_save (ctx->cr); + x = CLAMP (x, 0, maxw); + y = CLAMP (y, 0, maxh); + w = CLAMP (w, 0, maxw - x); + h = CLAMP (h, 0, maxh - y); + + /* Set label strokes color and width */ + cairo_set_source_rgba (ctx->cr, + ((overlay->labels_color >> 16) & 0xFF) / 255.0, + ((overlay->labels_color >> 8) & 
0xFF) / 255.0, + ((overlay->labels_color) & 0xFF) / 255.0, + ((overlay->labels_color >> 24) & 0xFF) / 255.0); + + cairo_set_line_width (ctx->cr, overlay->labels_stroke_width); + + annotation = g_strdup_printf ("Track: %" G_GUINT64_FORMAT, tid); + pango_layout_set_markup (overlay->pango_layout, annotation, + strlen (annotation)); + g_free (annotation); + + pango_layout_get_pixel_extents (overlay->pango_layout, &ink_rect, + &logical_rect); + + GST_LOG_OBJECT (overlay, "logical_rect:(%d,%d),%dx%d", logical_rect.x, + logical_rect.y, logical_rect.width, logical_rect.height); + GST_LOG_OBJECT (overlay, "ink_rect:(%d,%d),%dx%d", ink_rect.x, ink_rect.y, + ink_rect.width, ink_rect.height); + cairo_move_to (ctx->cr, x + overlay->labels_outline_ofs, + y + h - logical_rect.height - overlay->labels_outline_ofs); + + pango_cairo_layout_path (ctx->cr, overlay->pango_layout); + cairo_stroke (ctx->cr); + cairo_restore (ctx->cr); +}
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/analyticsoverlay/gstsegmentationoverlay.c
Added
@@ -0,0 +1,796 @@ +/* GStreamer segmentation overlay + * Copyright (C) <2023> Collabora Ltd. + * @author: Daniel Morin <daniel.morin@collabora.com> + * + * gstsegmentationoverlay.c + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +/** + * SECTION:element-segmentationoverlay + * @title: segmentationoverlay + * @see_also: #GstSegmentationOverlay + * + * This element create a graphical representation of the analytics object + * segmentation metadata attached to video stream and overlay graphics above the + * video. + * + * The object segmentation overlay element monitor video stream for + * @GstAnalyticsRelationMeta and query @GstAnalyticsSegmentationMtd. Retrieved + * @GstAnalyticsSegmentationMtd are then used to generate an overlay + * highlighing objects detected. + * + * ## Example launch line + * | + * gst-launch-1.0 multifilesrc location=/onnx-models/strawberries.jpg ! jpegdec ! videoconvertscale add-borders=1 ! onnxinference model-file=segmentation.onnx ! yolosegv8tensordec class-confidence-threshold=0.3 iou-threshold=0.3 max-detections=100 ! segmentationoverlay ! imagefreeze ! glimagesink + * | This pipeline create an overlay representing results of an object + * segmentation. 
+ * + * Since: 1.28 + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include <gst/gst.h> +#include <gst/video/video.h> +#include <gst/analytics/analytics.h> +#include <math.h> + +#include "gstsegmentationoverlay.h" + +struct _GstSegmentationOverlay +{ + GstVideoFilter parent; + + /* State */ + gboolean active; + gboolean flushing; + + /* properties */ + gsize color_table_size; + gchar *selected_types_str; + GSList *selected_type_filter; + + /* composition */ + gboolean attach_compo_to_buffer; + GstBuffer *canvas; + gint canvas_length; + GstVideoOverlayComposition *composition; + GstVideoOverlayComposition *upstream_composition; + + guint32 *color_table; + gboolean *mask_filter; + gsize mask_filter_len; + gboolean update_mask_filter; + guint32 bg_color; +}; + +#define DEFAULT_MAX_COLORS 10 + +GST_DEBUG_CATEGORY_STATIC (segmentationoverlay_debug); +#define GST_CAT_DEFAULT segmentationoverlay_debug + +enum +{ + PROP_HINT_MAX_SEGMENT_TYPE = 1, + PROP_SELECTED_TYPES, + _PROP_COUNT +}; + +#define VIDEO_FORMATS GST_VIDEO_OVERLAY_COMPOSITION_BLEND_FORMATS +#define SEGMENTATION_OVERLAY_CAPS GST_VIDEO_CAPS_MAKE (VIDEO_FORMATS) + +static GstStaticCaps sw_template_caps = +GST_STATIC_CAPS (SEGMENTATION_OVERLAY_CAPS); + +static GstStaticPadTemplate src_template = GST_STATIC_PAD_TEMPLATE ("src", + GST_PAD_SRC, + GST_PAD_ALWAYS, + GST_STATIC_CAPS (SEGMENTATION_OVERLAY_CAPS) + ); + +static GstStaticPadTemplate sink_template = GST_STATIC_PAD_TEMPLATE ("sink", + GST_PAD_SINK, + GST_PAD_ALWAYS, + GST_STATIC_CAPS (SEGMENTATION_OVERLAY_CAPS) + ); + +G_DEFINE_TYPE (GstSegmentationOverlay, + gst_segmentation_overlay, GST_TYPE_VIDEO_FILTER); + +#define parent_class gst_segmentation_overlay_parent_class + +GST_ELEMENT_REGISTER_DEFINE (segmentationoverlay, "segmentationoverlay", + GST_RANK_NONE, GST_TYPE_SEGMENTATION_OVERLAY); + +static void gst_segmentation_overlay_set_property (GObject * object, + guint prop_id, const GValue * value, GParamSpec * pspec); + +static void 
gst_segmentation_overlay_get_property (GObject * object, + guint prop_id, GValue * value, GParamSpec * pspec); + +static gboolean gst_segmentation_overlay_sink_event (GstBaseTransform * + trans, GstEvent * event); + +static void gst_segmentation_overlay_before_transform (GstBaseTransform * trans, + GstBuffer * buffer); + +static gboolean gst_segmentation_overlay_start (GstBaseTransform * trans); +static gboolean gst_segmentation_overlay_stop (GstBaseTransform * trans); + +static void gst_segmentation_overlay_hue_to_rgb (guint32 * rgb, double hue); + +static GstFlowReturn +gst_segmentation_overlay_transform_frame_ip (GstVideoFilter * filter, + GstVideoFrame * buf); + +static void gst_segmentation_overlay_finalize (GObject * object); + +static void +gst_segmentation_overlay_fill_canvas (GstSegmentationOverlay * overlay, + GstMapInfo * canvas, GstVideoMeta * cvmeta, GstBuffer * mask, + GstAnalyticsClsMtd * cls_mtd); + + +static void +gst_segmentation_overlay_class_init (GstSegmentationOverlayClass * klass) +{ + GObjectClass *gobject_class; + GstElementClass *element_class; + GstBaseTransformClass *basetransform_class; + GstVideoFilterClass *videofilter_class; + + gobject_class = (GObjectClass *) klass; + gobject_class->set_property = gst_segmentation_overlay_set_property; + gobject_class->get_property = gst_segmentation_overlay_get_property; + gobject_class->finalize = gst_segmentation_overlay_finalize; + + + /* To maximum color disparity to represent segment we can set hint-maximum- + * segment-type.*/ + g_object_class_install_property (gobject_class, PROP_HINT_MAX_SEGMENT_TYPE, + g_param_spec_uint ("hint-maximum-segment-type", + "Expected maximum segment type", + "By providing the expected maximum segment type the overlay can optimize" + " color differentiation between segment", + 1, G_MAXUINT, DEFAULT_MAX_COLORS, + G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS)); + + g_object_class_install_property (gobject_class, PROP_SELECTED_TYPES, + g_param_spec_string 
("selected-types", + "Select segment types to overlay", + "List of segment types to overlay separated by ';'", + NULL, G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS)); + + element_class = (GstElementClass *) klass; + + gst_element_class_add_static_pad_template (element_class, &sink_template); + gst_element_class_add_static_pad_template (element_class, &src_template); + gst_element_class_set_static_metadata (element_class, + "Segmentation Overlay", + "Visualization/Video", + "Overlay a visual representation of segmentation metadata on the video", + "Daniel Morin"); + + basetransform_class = (GstBaseTransformClass *) klass; + basetransform_class->passthrough_on_same_caps = FALSE; + basetransform_class->before_transform = + gst_segmentation_overlay_before_transform; + basetransform_class->start = gst_segmentation_overlay_start; + basetransform_class->stop = gst_segmentation_overlay_stop; + basetransform_class->sink_event = gst_segmentation_overlay_sink_event; + + videofilter_class = (GstVideoFilterClass *) klass; + videofilter_class->transform_frame_ip = + gst_segmentation_overlay_transform_frame_ip; +} + +static void +gst_segmentation_overlay_finalize (GObject * object) +{ + GstSegmentationOverlay *self = GST_SEGMENTATION_OVERLAY (object); + + g_free (self->selected_types_str); + g_clear_slist (&self->selected_type_filter, NULL); + + G_OBJECT_CLASS (parent_class)->finalize (object); +} + +static void +gst_segmentation_overlay_init (GstSegmentationOverlay * overlay) +{ + overlay->attach_compo_to_buffer = TRUE; + overlay->canvas = NULL; + overlay->composition = NULL; + overlay->upstream_composition = NULL; + overlay->active = FALSE; + overlay->color_table_size = DEFAULT_MAX_COLORS; + overlay->color_table = NULL; + overlay->mask_filter = NULL; + overlay->mask_filter_len = 0; + overlay->selected_type_filter = NULL; + overlay->update_mask_filter = FALSE; + overlay->selected_types_str = NULL; + overlay->bg_color = 0x00000000; + GST_DEBUG_CATEGORY_INIT 
(segmentationoverlay_debug, "segmentationoverlay", 0, + "Analytics segmentation overlay"); +} + +static void +gst_segmentation_overlay_set_property (GObject * object, guint prop_id, + const GValue * value, GParamSpec * pspec) +{ + GstSegmentationOverlay *overlay = GST_SEGMENTATION_OVERLAY (object); + + if (overlay->active) { + GST_WARNING_OBJECT (overlay, "Can't change properties" + " while element is running"); + return; + } + + switch (prop_id) { + case PROP_HINT_MAX_SEGMENT_TYPE: + overlay->color_table_size = g_value_get_uint (value); + break; + case PROP_SELECTED_TYPES: + { + char *selected_types = g_value_dup_string (value); + g_clear_slist (&overlay->selected_type_filter, NULL); + if (selected_types != NULL) { + overlay->selected_types_str = selected_types; + gchar **tokens = g_strsplit (selected_types, ";", -1); + if (tokens != NULL && tokens[0] != NULL) { + gchar *token = tokens[0]; + for (gsize i = 0; token != NULL; i++, token = tokens[i]) { + overlay->selected_type_filter = + g_slist_prepend (overlay->selected_type_filter, + GUINT_TO_POINTER (g_quark_from_string (token))); + } + overlay->update_mask_filter = TRUE; + } + } + } + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static void +gst_segmentation_overlay_get_property (GObject * object, + guint prop_id, GValue * value, GParamSpec * pspec) +{ + GstSegmentationOverlay *overlay = GST_SEGMENTATION_OVERLAY (object); + + switch (prop_id) { + case PROP_HINT_MAX_SEGMENT_TYPE: + g_value_set_uint (value, overlay->color_table_size); + break; + case PROP_SELECTED_TYPES: + g_value_set_string (value, overlay->selected_types_str); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static gboolean +gst_segmentation_overlay_can_handle_caps (GstCaps * incaps) +{ + gboolean ret; + GstCaps *caps; + + caps = gst_static_caps_get (&sw_template_caps); + ret = gst_caps_is_subset (incaps, caps); + gst_caps_unref (caps); + + return
ret; +} + +static gboolean +gst_segmentation_overlay_negotiate (GstSegmentationOverlay * overlay, + GstCaps * caps) +{ + GstBaseTransform *basetransform = GST_BASE_TRANSFORM (overlay); + gboolean upstream_has_meta = FALSE; + gboolean caps_has_meta = FALSE; + gboolean alloc_has_meta = FALSE; + gboolean attach = FALSE; + gboolean ret = TRUE; + guint width, height; + GstCapsFeatures *f; + GstCaps *overlay_caps; + GstQuery *query; + guint alloc_index; + GstPad *srcpad = basetransform->srcpad; + GstPad *sinkpad = basetransform->sinkpad; + + GST_DEBUG_OBJECT (overlay, "performing negotiation"); + + /* Clear any pending reconfigure to avoid negotiating twice */ + gst_pad_check_reconfigure (sinkpad); + + /* Check if upstream caps have meta */ + if ((f = gst_caps_get_features (caps, 0))) { + GST_DEBUG_OBJECT (overlay, "upstream has caps"); + upstream_has_meta = gst_caps_features_contains (f, + GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION); + } + + /* Initialize dimensions */ + width = GST_VIDEO_INFO_WIDTH (&GST_VIDEO_FILTER (overlay)->in_info); + height = GST_VIDEO_INFO_HEIGHT (&GST_VIDEO_FILTER (overlay)->in_info); + GST_DEBUG_OBJECT (overlay, "initial dims: %ux%u", width, height); + + if (upstream_has_meta) { + overlay_caps = gst_caps_ref (caps); + } else { + GstCaps *peercaps; + + /* BaseTransform requires caps for the allocation query to work */ + overlay_caps = gst_caps_copy (caps); + f = gst_caps_get_features (overlay_caps, 0); + gst_caps_features_add (f, + GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION); + + /* Then check if downstream accept overlay composition in caps */ + /* FIXME: We should probably check if downstream *prefers* the + * overlay meta, and only enforce usage of it if we can't handle + * the format ourselves and thus would have to drop the overlays. + * Otherwise we should prefer what downstream wants here. 
+ */ + peercaps = gst_pad_peer_query_caps (srcpad, overlay_caps); + caps_has_meta = !gst_caps_is_empty (peercaps); + gst_caps_unref (peercaps); + + GST_DEBUG_OBJECT (overlay, "caps have overlay meta %d", caps_has_meta); + } + + if (upstream_has_meta || caps_has_meta) { + /* Send caps immediately, it's needed by GstBaseTransform to get a reply + * from allocation query */ + GST_BASE_TRANSFORM_CLASS (parent_class)->set_caps (basetransform, caps, + overlay_caps); + ret = gst_pad_set_caps (srcpad, overlay_caps); + + /* First check if the allocation meta has composition */ + query = gst_query_new_allocation (overlay_caps, FALSE); + + if (!gst_pad_peer_query (srcpad, query)) { + /* no problem, we use the query defaults */ + GST_DEBUG_OBJECT (overlay, "ALLOCATION query failed"); + + /* In case we were flushing, mark reconfigure and fail this method, + * will make it retry */ + if (GST_PAD_IS_FLUSHING (GST_BASE_TRANSFORM_SRC_PAD (overlay))) { + ret = FALSE; + goto done; + } + } + + alloc_has_meta = gst_query_find_allocation_meta (query, + GST_VIDEO_OVERLAY_COMPOSITION_META_API_TYPE, &alloc_index); + + GST_DEBUG_OBJECT (overlay, "sink alloc has overlay meta %d", + alloc_has_meta); + + if (alloc_has_meta) { + const GstStructure *params; + + gst_query_parse_nth_allocation_meta (query, alloc_index, &params); + if (params) { + if (gst_structure_get (params, "width", G_TYPE_UINT, &width, + "height", G_TYPE_UINT, &height, NULL)) { + GST_DEBUG_OBJECT (overlay, "received window size: %dx%d", width, + height); + g_assert (width != 0 && height != 0); + } + } + } + + gst_query_unref (query); + } + + /* Update render size if needed */ + overlay->canvas_length = width * height; + + /* For backward compatibility, we will prefer blitting if downstream + * allocation does not support the meta. Otherwise we will prefer + * attaching, and will fail the negotiation in the unlikely case we are + * forced to blit but the format isn't supported.
*/ + + if (upstream_has_meta) { + attach = TRUE; + } else if (caps_has_meta) { + if (alloc_has_meta) { + attach = TRUE; + } else { + /* Don't attach unless we cannot handle the format */ + attach = !gst_segmentation_overlay_can_handle_caps (caps); + } + } else { + ret = gst_segmentation_overlay_can_handle_caps (caps); + } + + /* If we attach, then pick the overlay caps */ + if (attach) { + GST_DEBUG_OBJECT (overlay, "Using caps %" GST_PTR_FORMAT, overlay_caps); + /* Caps were already sent */ + } else if (ret) { + GST_DEBUG_OBJECT (overlay, "Using caps %" GST_PTR_FORMAT, caps); + GST_BASE_TRANSFORM_CLASS (parent_class)->set_caps (basetransform, caps, + caps); + ret = gst_pad_set_caps (srcpad, caps); + } + + overlay->attach_compo_to_buffer = attach; + +done: + + if (!ret) { + GST_DEBUG_OBJECT (overlay, "negotiation failed, schedule reconfigure"); + gst_pad_mark_reconfigure (srcpad); + } + + gst_caps_unref (overlay_caps); + + return ret; +} + +static gboolean +gst_segmentation_overlay_setcaps (GstSegmentationOverlay * overlay, + GstCaps * caps) +{ + gboolean ret = FALSE; + + if (!gst_video_info_from_caps (&GST_VIDEO_FILTER (overlay)->in_info, caps)) + goto invalid_caps; + + ret = gst_segmentation_overlay_negotiate (overlay, caps); + GST_VIDEO_FILTER (overlay)->negotiated = ret; + + if (!overlay->attach_compo_to_buffer && + !gst_segmentation_overlay_can_handle_caps (caps)) { + GST_DEBUG_OBJECT (overlay, "unsupported caps %" GST_PTR_FORMAT, caps); + ret = FALSE; + } + + return ret; + + /* ERRORS */ +invalid_caps: + { + GST_DEBUG_OBJECT (overlay, "could not parse caps"); + return FALSE; + } +} + +static gboolean +gst_segmentation_overlay_sink_event (GstBaseTransform * trans, GstEvent * event) +{ + gboolean ret = FALSE; + GST_DEBUG_OBJECT (trans, "received sink event %s", + GST_EVENT_TYPE_NAME (event)); + + GstSegmentationOverlay *overlay = GST_SEGMENTATION_OVERLAY (trans); + switch (GST_EVENT_TYPE (event)) { + case GST_EVENT_CAPS: + { + GstCaps *caps; + 
gst_event_parse_caps (event, &caps); + ret = gst_segmentation_overlay_setcaps (overlay, caps); + gst_event_unref (event); + break; + } + default: + ret = GST_BASE_TRANSFORM_CLASS (parent_class)->sink_event (trans, event); + break; + } + + return ret; +} + +static void +gst_segmentation_overlay_before_transform (GstBaseTransform * trans, + GstBuffer * buffer) +{ + GstSegmentationOverlay *overlay = GST_SEGMENTATION_OVERLAY (trans); + gdouble inc; + + if (overlay->color_table == NULL) { + /* Prepare a color table uniformly distributed to maximize the + * distinctiveness of each segment */ + overlay->color_table = g_malloc_n (overlay->color_table_size, + sizeof (guint32)); + inc = 360.0 / overlay->color_table_size; + for (gsize d = 0; d < overlay->color_table_size; d++) { + gst_segmentation_overlay_hue_to_rgb (&overlay->color_table[d], d * inc); + } + } +} + +static gboolean +gst_segmentation_overlay_start (GstBaseTransform * trans) +{ + GstSegmentationOverlay *overlay = GST_SEGMENTATION_OVERLAY (trans); + overlay->active = TRUE; + return TRUE; +} + +static gboolean +gst_segmentation_overlay_stop (GstBaseTransform * trans) +{ + GstSegmentationOverlay *overlay = GST_SEGMENTATION_OVERLAY (trans); + gst_clear_buffer (&overlay->canvas); + g_free (overlay->color_table); + overlay->color_table = NULL; + g_free (overlay->mask_filter); + overlay->mask_filter = NULL; + overlay->active = FALSE; + return TRUE; +} + +static GstFlowReturn +gst_segmentation_overlay_transform_frame_ip (GstVideoFilter * filter, + GstVideoFrame * frame) +{ + GstSegmentationOverlay *overlay = GST_SEGMENTATION_OVERLAY (filter); + GstVideoOverlayCompositionMeta *composition_meta; + gpointer state = NULL, related_state; + GstVideoOverlayRectangle *rectangle = NULL; + GstAnalyticsMtd rlt_seg_mtd, rlt_cls_mtd; + GstAnalyticsMtd *clsmtd = NULL; + GstAnalyticsSegmentationMtd *seg_mtd; + gint ofx = 0, ofy = 0; + guint canvas_w = 0, canvas_h = 0; + GstMapInfo cmap; + GstVideoMeta *cvmeta; + GstAnalyticsRelationMeta 
*rmeta; + GstAnalyticsMtdType rlt_type; + GstBuffer *canvas, *mask; + GstVideoInfo canvas_info; + + composition_meta = + gst_buffer_get_video_overlay_composition_meta (frame->buffer); + if (composition_meta) { + if (overlay->upstream_composition != composition_meta->overlay) { + GST_DEBUG_OBJECT (overlay, "GstVideoOverlayCompositionMeta found."); + overlay->upstream_composition = composition_meta->overlay; + } + } else if (overlay->upstream_composition != NULL) { + overlay->upstream_composition = NULL; + } + + + /* Retrieve relation-meta attached to this buffer */ + rmeta = (GstAnalyticsRelationMeta *) + gst_buffer_get_meta (GST_BUFFER (frame->buffer), + GST_ANALYTICS_RELATION_META_API_TYPE); + + if (rmeta) { + if (overlay->composition) + gst_video_overlay_composition_unref (overlay->composition); + + if (overlay->upstream_composition) { + overlay->composition = + gst_video_overlay_composition_copy (overlay->upstream_composition); + } else { + overlay->composition = gst_video_overlay_composition_new (NULL); + } + + /* Get the quark representing segmentation metadata type */ + rlt_type = gst_analytics_segmentation_mtd_get_mtd_type (); + + /* Iterate over all relatable-mtd of type segmentation attached to + * rmeta. + */ + while (gst_analytics_relation_meta_iterate (rmeta, &state, rlt_type, + &rlt_seg_mtd)) { + + GST_DEBUG_OBJECT (filter, "buffer contains seg mtd"); + seg_mtd = (GstAnalyticsSegmentationMtd *) & rlt_seg_mtd; + + /* Retrieve classification mtd associated to segmentation-mtd. If + * present, the classification-mtd allows retrieving a label associated + * with a segment id. 
*/ + related_state = NULL; + if (gst_analytics_relation_meta_get_direct_related (rmeta, rlt_seg_mtd.id, + GST_ANALYTICS_REL_TYPE_N_TO_N, + gst_analytics_cls_mtd_get_mtd_type (), &related_state, + &rlt_cls_mtd)) { + clsmtd = &rlt_cls_mtd; + } + + if ((mask = gst_analytics_segmentation_mtd_get_mask (seg_mtd, &ofx, + &ofy, &canvas_w, &canvas_h)) != NULL) { + + ofx = CLAMP (ofx, 0, GST_VIDEO_INFO_WIDTH (&filter->in_info)); + ofy = CLAMP (ofy, 0, GST_VIDEO_INFO_HEIGHT (&filter->in_info)); + canvas_w = + MIN (canvas_w, GST_VIDEO_INFO_WIDTH (&filter->in_info) - ofx); + canvas_h = + MIN (canvas_h, GST_VIDEO_INFO_HEIGHT (&filter->in_info) - ofy); + } else { + GST_TRACE_OBJECT (filter, "Received a segmentation mtd without mask"); + continue; + } + + /* Calculate canvas size required */ + gst_video_info_set_format (&canvas_info, + GST_VIDEO_OVERLAY_COMPOSITION_FORMAT_RGB, canvas_w, canvas_h); + /* Allocate buffer to store canvas */ + canvas = gst_buffer_new_and_alloc (canvas_info.size); + cvmeta = gst_buffer_add_video_meta (canvas, GST_VIDEO_FRAME_FLAG_NONE, + GST_VIDEO_OVERLAY_COMPOSITION_FORMAT_RGB, canvas_w, canvas_h); + + /* Keep a handle on the canvas to free it if required */ + gst_buffer_replace (&overlay->canvas, canvas); + gst_buffer_unref (canvas); + + gst_buffer_map (canvas, &cmap, GST_MAP_READWRITE); + + /* Fill canvas with segmentation mask */ + gst_segmentation_overlay_fill_canvas (overlay, &cmap, cvmeta, mask, + clsmtd); + gst_buffer_unmap (canvas, &cmap); + + /* Specify where the canvas needs to be overlaid */ + rectangle = gst_video_overlay_rectangle_new_raw (overlay->canvas, + ofx, ofy, canvas_w, canvas_h, GST_VIDEO_OVERLAY_FORMAT_FLAG_NONE); + + /* Add rectangle to composition */ + gst_video_overlay_composition_add_rectangle (overlay->composition, + rectangle); + + gst_video_overlay_rectangle_unref (rectangle); + } + } + + if (overlay->composition) { + GST_DEBUG_OBJECT (filter, "have composition"); + + if (overlay->attach_compo_to_buffer) { + GST_DEBUG_OBJECT 
(filter, "attach"); + gst_buffer_add_video_overlay_composition_meta (frame->buffer, + overlay->composition); + } else { + gst_video_overlay_composition_blend (overlay->composition, frame); + } + } + + return GST_FLOW_OK; +} + +/* + * gst_segmentation_overlay_hue_to_rgb: + * @rgb: Fill rgb values corresponding to @hue. + * @hue: hue value from HSV colorspace + * Convert @hue from HSV colorspace to rgb values in RGB colorspace + */ +static void +gst_segmentation_overlay_hue_to_rgb (guint32 * rgb, double hue) +{ + hue = fmod (hue, 360.0); + guint32 x = + (guint32) round ((1.0 - fabs (fmod (hue / 60.0, 2.0) - 1.0)) * 255.0); + + if (hue >= 0 && hue < 60) { + *rgb = 255 << 16 | x << 8; + } else if (hue >= 60 && hue < 120) { + *rgb = x << 16 | 255 << 8; + } else if (hue >= 120 && hue < 180) { + *rgb = 255 << 8 | x; + } else if (hue >= 180 && hue < 240) { + *rgb = x << 8 | 255; + } else if (hue >= 240 && hue < 300) { + *rgb = x << 16 | 255; + } else if (hue >= 300 && hue < 360) { + *rgb = 255 << 16 | x; + } +} + +static void +gst_segmentation_overlay_update_mask_filter (GstSegmentationOverlay * overlay, + GstAnalyticsClsMtd * cls_mtd) +{ + GQuark seg_type; + + g_return_if_fail (cls_mtd != NULL); + + /* If no segment-type filter is set, all masks are shown */ + if (overlay->selected_type_filter != NULL) { + gsize length = gst_analytics_cls_mtd_get_length (cls_mtd); + if (overlay->mask_filter == NULL || overlay->mask_filter_len != length || + overlay->update_mask_filter == TRUE) { + overlay->mask_filter = g_realloc (overlay->mask_filter, length * + sizeof (gboolean)); + overlay->mask_filter_len = length; + for (gsize i = 0; i < length; i++) { + seg_type = gst_analytics_cls_mtd_get_quark (cls_mtd, i); + overlay->mask_filter[i] = g_slist_find (overlay->selected_type_filter, + GUINT_TO_POINTER (seg_type)) != NULL; + } + } + overlay->update_mask_filter = FALSE; + } +} + +static void +gst_segmentation_overlay_resampling (GstSegmentationOverlay * overlay, + gint32 * canvas_data, 
guint8 * mask_data, GstVideoMeta * cvmeta, + GstVideoMeta * mvmeta) +{ + gsize mask_col_idx, mask_line_idx, last_mask_line_idx = -1; + gint32 *cline = canvas_data, *pcline = NULL; + guint8 *mline = mask_data; + gsize color_count = overlay->color_table_size + 1; + guint32 *color_table = overlay->color_table; + gboolean *mask_filter = overlay->mask_filter; + +#define CTBL_IDX(val) (mline[val] % color_count) +#define MASK_FILTER(val) (mask_filter == NULL || mask_filter[mline[val]]) + + for (gint cl = 0; cl < cvmeta->height; cl++) { + mask_line_idx = (cl * mvmeta->height) / cvmeta->height; + if (last_mask_line_idx != mask_line_idx) { + mask_col_idx = 0; + for (gint cc = 0; cc < cvmeta->width; cc++) { + mask_col_idx = (cc * mvmeta->width) / cvmeta->width; + if (CTBL_IDX (mask_col_idx) != 0 && MASK_FILTER (mask_col_idx)) { + cline[cc] = 0x80000000 | color_table[CTBL_IDX (mask_col_idx) - 1]; + } else { + cline[cc] = overlay->bg_color; + } + } + } else { + /* If the current line would be generated from the same mask line + * as the previous canvas line, we can simply copy the previous + * line into the current line */ + memcpy (cline, pcline, sizeof (guint32) * cvmeta->width); + } + last_mask_line_idx = mask_line_idx; + pcline = cline; + cline += cvmeta->width; + mline = (mask_line_idx * mvmeta->width) + mask_data; + } +} + +static void +gst_segmentation_overlay_fill_canvas (GstSegmentationOverlay * overlay, + GstMapInfo * cmap, GstVideoMeta * cvmeta, GstBuffer * mask, + GstAnalyticsClsMtd * cls_mtd) +{ + GstVideoMeta *mvmeta; + GstMapInfo mmap; + + /* Retrieve video-meta describing the mask */ + mvmeta = gst_buffer_get_video_meta (mask); + if (mvmeta != NULL) { + if (cls_mtd != NULL) + gst_segmentation_overlay_update_mask_filter (overlay, cls_mtd); + + gst_buffer_map (mask, &mmap, GST_MAP_READ); + gst_segmentation_overlay_resampling (overlay, + (gint32 *) cmap->data, mmap.data, cvmeta, mvmeta); + gst_buffer_unmap (mask, &mmap); + } + gst_buffer_unref (mask); +}
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/analyticsoverlay/gstsegmentationoverlay.h
Added
@@ -0,0 +1,39 @@ +/* GStreamer segmentation overlay + * Copyright (C) <2024> Collabora Ltd. + * @author: Daniel Morin <daniel.morin@collabora.com> + * + * gstsegmentationoverlay.h + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifndef __GST_SEGMENTATION_OVERLAY_H__ +#define __GST_SEGMENTATION_OVERLAY_H__ + +#include <gst/video/gstvideofilter.h> + +G_BEGIN_DECLS + +#define GST_TYPE_SEGMENTATION_OVERLAY \ + (gst_segmentation_overlay_get_type()) + +G_DECLARE_FINAL_TYPE (GstSegmentationOverlay, gst_segmentation_overlay, + GST, SEGMENTATION_OVERLAY, GstVideoFilter) + +GST_ELEMENT_REGISTER_DECLARE (segmentationoverlay); + +G_END_DECLS +#endif /* __GST_SEGMENTATION_OVERLAY_H__ */
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/analyticsoverlay/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/analyticsoverlay/meson.build
Changed
@@ -1,6 +1,7 @@ analyticsoverlay_sources = [ 'gstanalyticsoverlay.c', 'gstobjectdetectionoverlay.c', + 'gstsegmentationoverlay.c' ] analyticsoverlay_headers = [ @@ -27,7 +28,7 @@ c_args : gst_plugins_bad_args, include_directories : [configinc, libsinc], dependencies : [ - gstbase_dep, gstvideo_dep, gstanalytics_dep, gstanalyticsoverlay_ext_dep + gstbase_dep, gstvideo_dep, gstanalytics_dep, gstanalyticsoverlay_ext_dep, libm ], install : true, install_dir : plugins_install_dir,
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/assrender/gstassrender.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/assrender/gstassrender.c
Changed
@@ -580,9 +580,10 @@ GST_DEBUG_OBJECT (pad, "peer caps %" GST_PTR_FORMAT, peer_caps); if (gst_caps_is_any (peer_caps)) { - /* if peer returns ANY caps, return filtered src pad template caps */ - caps = gst_caps_copy (gst_pad_get_pad_template_caps (srcpad)); + GstCaps *tcaps = gst_pad_get_pad_template_caps (srcpad); + caps = gst_caps_copy (tcaps); + gst_caps_unref (tcaps); } else { /* duplicate caps which contains the composition into one version with @@ -642,7 +643,9 @@ if (gst_caps_is_any (peer_caps)) { /* if peer returns ANY caps, return filtered sink pad template caps */ - caps = gst_caps_copy (gst_pad_get_pad_template_caps (sinkpad)); + GstCaps *tcaps = gst_pad_get_pad_template_caps (sinkpad); + caps = gst_caps_copy (tcaps); + gst_caps_unref (tcaps); } else {
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/avtp/gstavtp.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/avtp/gstavtp.c
Changed
@@ -203,7 +203,7 @@ * * $ gst-launch-1.0 clockselect. \( clockid=ptp avtpsrc ifname=$IFNAME ! \ * avtpcrfcheck ifname=$IFNAME ! avtpcvfdepay ! \ - * vaapih264dec ! videoconvert ! clockoverlay halignment=right ! \ + * vah264dec ! videoconvert ! clockoverlay halignment=right ! \ * queue ! autovideosink \) * * ### Pipeline clock
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/avtp/gstavtpaafdepay.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/avtp/gstavtpaafdepay.c
Changed
@@ -59,8 +59,8 @@ GST_ELEMENT_REGISTER_DEFINE (avtpaafdepay, "avtpaafdepay", GST_RANK_NONE, GST_TYPE_AVTP_AAF_DEPAY); -static GstFlowReturn gst_avtp_aaf_depay_chain (GstPad * pad, GstObject * parent, - GstBuffer * buffer); +static GstFlowReturn gst_avtp_aaf_depay_process (GstAvtpBaseDepayload * + basedepay, GstBuffer * buffer); static void gst_avtp_aaf_depay_class_init (GstAvtpAafDepayClass * klass) @@ -77,7 +77,7 @@ "Extracts raw audio from AAF AVTPDUs", "Andre Guedes <andre.guedes@intel.com>"); - avtpbasedepayload_class->chain = GST_DEBUG_FUNCPTR (gst_avtp_aaf_depay_chain); + avtpbasedepayload_class->process = gst_avtp_aaf_depay_process; GST_DEBUG_CATEGORY_INIT (avtpaafdepay_debug, "avtpaafdepay", 0, "AAF AVTP Depayloader"); @@ -207,9 +207,10 @@ } static GstFlowReturn -gst_avtp_aaf_depay_chain (GstPad * pad, GstObject * parent, GstBuffer * buffer) +gst_avtp_aaf_depay_process (GstAvtpBaseDepayload * avtpbasedepayload, + GstBuffer * buffer) { - int res; + int res GST_UNUSED_ASSERT; GstMapInfo info; guint32 subtype, version; GstClockTime ptime; @@ -217,7 +218,6 @@ struct avtp_stream_pdu *pdu; guint64 channels, depth, rate, format, tstamp, seqnum, streamid, streamid_valid, data_len; - GstAvtpBaseDepayload *avtpbasedepayload = GST_AVTP_BASE_DEPAYLOAD (parent); GstAvtpAafDepay *avtpaafdepay = GST_AVTP_AAF_DEPAY (avtpbasedepayload); if (!gst_buffer_map (buffer, &info, GST_MAP_READ)) { @@ -284,10 +284,6 @@ gst_buffer_unref (buffer); return GST_FLOW_NOT_NEGOTIATED; } - if (!gst_avtp_base_depayload_push_segment_event (avtpbasedepayload, tstamp)) { - gst_buffer_unref (buffer); - return GST_FLOW_ERROR; - } avtpbasedepayload->seqnum = seqnum; } @@ -304,16 +300,16 @@ avtpbasedepayload->seqnum++; ptime = gst_avtp_base_depayload_tstamp_to_ptime (avtpbasedepayload, tstamp, - avtpbasedepayload->prev_ptime); + avtpbasedepayload->last_dts); subbuffer = gst_buffer_copy_region (buffer, GST_BUFFER_COPY_ALL, sizeof (struct avtp_stream_pdu), data_len); GST_BUFFER_PTS (subbuffer) = ptime; 
GST_BUFFER_DTS (subbuffer) = ptime; - avtpbasedepayload->prev_ptime = ptime; gst_buffer_unref (buffer); - return gst_pad_push (avtpbasedepayload->srcpad, subbuffer); + + return gst_avtp_base_depayload_push (avtpbasedepayload, subbuffer); discard: gst_buffer_unref (buffer);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/avtp/gstavtpaafpay.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/avtp/gstavtpaafpay.c
Changed
@@ -211,7 +211,7 @@ break; } case GST_STATE_CHANGE_READY_TO_PAUSED:{ - int res; + int res GST_UNUSED_ASSERT; GstMapInfo info; struct avtp_stream_pdu *pdu; GstMemory *mem = avtpaafpay->header; @@ -263,7 +263,7 @@ static GstFlowReturn gst_avtp_aaf_pay_chain (GstPad * pad, GstObject * parent, GstBuffer * buffer) { - int res; + int res GST_UNUSED_ASSERT; GstMemory *mem; GstMapInfo info; gsize data_len;
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/avtp/gstavtpbasedepayload.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/avtp/gstavtpbasedepayload.c
Changed
@@ -48,9 +48,17 @@ static void gst_avtp_base_depayload_get_property (GObject * object, guint prop_id, GValue * value, GParamSpec * pspec); -static gboolean gst_avtp_base_depayload_sink_event (GstPad * pad, +static gboolean gst_avtp_base_depayload_sink_event (GstAvtpBaseDepayload * self, + GstEvent * event); + +static GstFlowReturn avtp_base_depayload_chain (GstPad * pad, + GstObject * parent, GstBuffer * buffer); +static gboolean avtp_base_depayload_sink_event (GstPad * pad, GstObject * parent, GstEvent * event); +static gboolean gst_avtp_base_depayload_push_segment_event (GstAvtpBaseDepayload + * avtpbasedepayload); + GType gst_avtp_base_depayload_get_type (void) { @@ -92,8 +100,7 @@ DEFAULT_STREAMID, G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS | GST_PARAM_MUTABLE_PAUSED)); - klass->chain = NULL; - klass->sink_event = GST_DEBUG_FUNCPTR (gst_avtp_base_depayload_sink_event); + klass->sink_event = gst_avtp_base_depayload_sink_event; GST_DEBUG_CATEGORY_INIT (avtpbasedepayload_debug, "avtpbasedepayload", 0, "Base class for AVTP depayloaders"); @@ -108,10 +115,10 @@ GstPadTemplate *templ; GstElement *element = GST_ELEMENT (avtpbasedepayload); GstElementClass *element_class = GST_ELEMENT_CLASS (g_class); - GstAvtpBaseDepayloadClass *avtpbasedepayload_class = + GstAvtpBaseDepayloadClass *avtpbasedepayload_class GST_UNUSED_ASSERT = GST_AVTP_BASE_DEPAYLOAD_CLASS (g_class); - g_assert (avtpbasedepayload_class->chain != NULL); + g_assert (avtpbasedepayload_class->process != NULL); templ = gst_element_class_get_pad_template (element_class, "src"); g_assert (templ != NULL); @@ -122,14 +129,13 @@ avtpbasedepayload->sinkpad = gst_pad_new_from_static_template (&sink_template, "sink"); gst_pad_set_chain_function (avtpbasedepayload->sinkpad, - avtpbasedepayload_class->chain); + avtp_base_depayload_chain); gst_pad_set_event_function (avtpbasedepayload->sinkpad, - avtpbasedepayload_class->sink_event); + avtp_base_depayload_sink_event); gst_element_add_pad (element, 
avtpbasedepayload->sinkpad); avtpbasedepayload->streamid = DEFAULT_STREAMID; - avtpbasedepayload->prev_ptime = 0; avtpbasedepayload->seqnum = 0; } @@ -169,14 +175,23 @@ } } -static gboolean -gst_avtp_base_depayload_sink_event (GstPad * pad, GstObject * parent, - GstEvent * event) +static GstFlowReturn +avtp_base_depayload_chain (GstPad * pad, GstObject * parent, GstBuffer * buffer) { GstAvtpBaseDepayload *avtpbasedepayload = GST_AVTP_BASE_DEPAYLOAD (parent); + GstAvtpBaseDepayloadClass *klass = + GST_AVTP_BASE_DEPAYLOAD_GET_CLASS (avtpbasedepayload); + + avtpbasedepayload->last_dts = GST_BUFFER_DTS (buffer); + + return klass->process (avtpbasedepayload, buffer); +} - GST_DEBUG_OBJECT (avtpbasedepayload, "event %s", GST_EVENT_TYPE_NAME (event)); +static gboolean +gst_avtp_base_depayload_sink_event (GstAvtpBaseDepayload * avtpbasedepayload, + GstEvent * event) +{ switch (GST_EVENT_TYPE (event)) { case GST_EVENT_SEGMENT: /* Once the first AVTPDU is received, proper CAPS and SEGMENT events are @@ -190,12 +205,27 @@ * gst_avtp_base_depayload_push_segment_event() for more information. */ gst_event_unref (event); + avtpbasedepayload->segment_sent = FALSE; return TRUE; default: - return gst_pad_event_default (pad, parent, event); + return gst_pad_event_default (avtpbasedepayload->sinkpad, + GST_OBJECT (avtpbasedepayload), event); } } +static gboolean +avtp_base_depayload_sink_event (GstPad * pad, GstObject * parent, + GstEvent * event) +{ + GstAvtpBaseDepayload *avtpbasedepayload = GST_AVTP_BASE_DEPAYLOAD (parent); + GstAvtpBaseDepayloadClass *klass = + GST_AVTP_BASE_DEPAYLOAD_GET_CLASS (avtpbasedepayload); + + GST_DEBUG_OBJECT (avtpbasedepayload, "event %s", GST_EVENT_TYPE_NAME (event)); + + return klass->sink_event (avtpbasedepayload, event); +} + /* Helper function to convert AVTP timestamp to AVTP presentation time. 
Since * AVTP timestamp represents the lower 32-bit part from AVTP presentation time, * the helper requires a reference time ('ref' argument) to convert it properly. @@ -206,41 +236,44 @@ avtpbasedepayload, guint32 tstamp, GstClockTime ref) { GstClockTime ptime; + guint32 ref_low; + + ref += gst_element_get_base_time (GST_ELEMENT (avtpbasedepayload)); + + GST_LOG_OBJECT (avtpbasedepayload, "dts: %" GST_TIME_FORMAT " tstamp: %u", + GST_TIME_ARGS (ref), tstamp); + ref_low = ref & 0xFFFFFFFFULL; ptime = (ref & 0xFFFFFFFF00000000ULL) | tstamp; /* If 'ptime' is less than the our reference time, it means the higher part * from 'ptime' needs to be incremented by 1 in order reflect the correct * presentation time. */ - if (ptime < ref) - ptime += (1ULL << 32); + if (tstamp < G_MAXINT32 && ref_low > G_MAXINT32) + ptime += G_MAXUINT32 + 1; + + if (tstamp < G_MAXINT32 && ref_low > G_MAXINT32 && ptime > G_MAXUINT32) + ptime -= G_MAXUINT32 + 1; GST_LOG_OBJECT (avtpbasedepayload, "AVTP presentation time %" GST_TIME_FORMAT, GST_TIME_ARGS (ptime)); return ptime; } -gboolean +static gboolean gst_avtp_base_depayload_push_segment_event (GstAvtpBaseDepayload * - avtpbasedepayload, guint32 avtp_tstamp) + avtpbasedepayload) { - GstClock *clock; GstEvent *event; GstSegment segment; - GstClockTime now, base_time, avtp_ptime; + GstClockTime base_time; - clock = GST_ELEMENT_CLOCK (avtpbasedepayload); - - now = gst_clock_get_time (clock); - avtp_ptime = - gst_avtp_base_depayload_tstamp_to_ptime (avtpbasedepayload, avtp_tstamp, - now); base_time = gst_element_get_base_time (GST_ELEMENT (avtpbasedepayload)); gst_segment_init (&segment, GST_FORMAT_TIME); - segment.base = avtp_ptime - base_time; - segment.start = avtp_ptime; + segment.base = 0; + segment.start = base_time; segment.stop = -1; event = gst_event_new_segment (&segment); @@ -257,6 +290,16 @@ GST_DEBUG_OBJECT (avtpbasedepayload, "SEGMENT event pushed: %" GST_SEGMENT_FORMAT, &segment); - avtpbasedepayload->prev_ptime = avtp_ptime; + 
avtpbasedepayload->segment_sent = TRUE; return TRUE; } + +GstFlowReturn +gst_avtp_base_depayload_push (GstAvtpBaseDepayload * + avtpbasedepayload, GstBuffer * buffer) +{ + if (!avtpbasedepayload->segment_sent) + gst_avtp_base_depayload_push_segment_event (avtpbasedepayload); + + return gst_pad_push (avtpbasedepayload->srcpad, buffer); +}
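The reworked `gst_avtp_base_depayload_tstamp_to_ptime` above splices the 32-bit AVTP timestamp into the low word of the 64-bit reference time and then corrects for 32-bit wraparound in either direction. A simplified standalone sketch of that reconstruction — illustrative only, using a symmetric half-range test rather than the exact `G_MAXINT32` branches in the diff, with `tstamp_to_ptime_sketch` a hypothetical name:

```c
#include <stdint.h>

/* Rebuild a 64-bit presentation time from a 32-bit AVTP timestamp and a
 * 64-bit reference time, assuming the real time is within 2^31 ns of the
 * reference: pick whichever 2^32-spaced candidate is closest to ref. */
static uint64_t
tstamp_to_ptime_sketch (uint32_t tstamp, uint64_t ref)
{
  uint64_t ptime = (ref & 0xFFFFFFFF00000000ULL) | tstamp;

  if (ptime + 0x80000000ULL < ref)
    ptime += 1ULL << 32;        /* low word wrapped past zero after ref */
  else if (ptime > ref + 0x80000000ULL && ptime >= (1ULL << 32))
    ptime -= 1ULL << 32;        /* timestamp is just before ref's wrap */

  return ptime;
}
```

The half-range test keeps the reconstruction correct whether the 32-bit counter wrapped just before or just after the reference sample, which is the situation the two `G_MAXINT32` comparisons in the patched code are guarding against.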
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/avtp/gstavtpbasedepayload.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/avtp/gstavtpbasedepayload.h
Changed
@@ -35,6 +35,9 @@ (G_TYPE_CHECK_INSTANCE_TYPE((obj),GST_TYPE_AVTP_BASE_DEPAYLOAD)) #define GST_IS_AVTP_BASE_DEPAYLOAD_CLASS(klass) \ (G_TYPE_CHECK_CLASS_TYPE((klass),GST_TYPE_AVTP_BASE_DEPAYLOAD)) +#define GST_AVTP_BASE_DEPAYLOAD_GET_CLASS(obj) \ + (G_TYPE_INSTANCE_GET_CLASS ((obj), GST_AVTP_BASE_DEPAYLOAD, \ + GstAvtpBaseDepayloadClass)) typedef struct _GstAvtpBaseDepayload GstAvtpBaseDepayload; typedef struct _GstAvtpBaseDepayloadClass GstAvtpBaseDepayloadClass; @@ -48,7 +51,9 @@ guint64 streamid; - GstClockTime prev_ptime; + GstClockTime last_dts; + gboolean segment_sent; + guint8 seqnum; gpointer _gst_reserved[GST_PADDING]; @@ -59,9 +64,8 @@ GstElementClass parent_class; /* Pure virtual function. */ - GstPadChainFunction chain; - - GstPadEventFunction sink_event; + GstFlowReturn (*process) (GstAvtpBaseDepayload *base, GstBuffer *buf); + gboolean (*sink_event) (GstAvtpBaseDepayload *base, GstEvent *event); gpointer _gst_reserved[GST_PADDING]; }; @@ -71,8 +75,8 @@ GstClockTime gst_avtp_base_depayload_tstamp_to_ptime (GstAvtpBaseDepayload * avtpbasedepayload, guint32 tstamp, GstClockTime ref); -gboolean gst_avtp_base_depayload_push_segment_event (GstAvtpBaseDepayload * - avtpbasedepayload, guint32 avtp_tstamp); +GstFlowReturn gst_avtp_base_depayload_push (GstAvtpBaseDepayload * + avtpbasedepayload, GstBuffer * buffer); G_END_DECLS
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/avtp/gstavtpbasepayload.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/avtp/gstavtpbasepayload.c
Changed
@@ -25,8 +25,8 @@
 #define GST_CAT_DEFAULT (avtpbasepayload_debug)
 
 #define DEFAULT_STREAMID 0xAABBCCDDEEFF0000
-#define DEFAULT_MTT 50000000
-#define DEFAULT_TU 1000000
+#define DEFAULT_MTT (50 * GST_MSECOND)
+#define DEFAULT_TU (GST_MSECOND)
 #define DEFAULT_PROCESSING_DEADLINE (20 * GST_MSECOND)
 
 enum
@@ -55,6 +55,9 @@
 static gboolean gst_avtp_base_payload_sink_event (GstPad * pad,
     GstObject * parent, GstEvent * event);
 
+static gboolean gst_avtp_base_payload_src_query (GstPad * pad,
+    GstObject * parent, GstQuery * query);
+
 GType
 gst_avtp_base_payload_get_type (void)
@@ -133,6 +136,8 @@
   avtpbasepayload->srcpad =
       gst_pad_new_from_static_template (&src_template, "src");
+  gst_pad_set_query_function (avtpbasepayload->srcpad,
+      gst_avtp_base_payload_src_query);
   gst_element_add_pad (element, avtpbasepayload->srcpad);
 
   templ = gst_element_class_get_pad_template (element_class, "sink");
@@ -225,25 +230,60 @@
   }
 }
 
+static gboolean
+gst_avtp_base_payload_src_query (GstPad * pad, GstObject * parent,
+    GstQuery * query)
+{
+  GstAvtpBasePayload *avtpbasepayload = GST_AVTP_BASE_PAYLOAD (parent);
+  gboolean ret;
+
+  ret = gst_pad_query_default (pad, parent, query);
+
+  if (ret && GST_QUERY_TYPE (query) == GST_QUERY_LATENCY) {
+    gboolean live;
+    GstClockTime min_latency;
+
+    gst_query_parse_latency (query, &live, &min_latency, NULL);
+
+    if (live)
+      avtpbasepayload->latency = min_latency;
+    else
+      avtpbasepayload->latency = 0;
+
+    GST_DEBUG_OBJECT (avtpbasepayload, "live: %d latency %" GST_TIME_FORMAT,
+        live, GST_TIME_ARGS (avtpbasepayload->latency));
+  }
+
+  return ret;
+}
+
 GstClockTime
 gst_avtp_base_payload_calc_ptime (GstAvtpBasePayload * avtpbasepayload,
     GstBuffer * buffer)
 {
   GstClockTime base_time, running_time;
+  GstClockTime avtp_timestamp;
 
   g_assert (GST_BUFFER_PTS (buffer) != GST_CLOCK_TIME_NONE);
 
   if (G_UNLIKELY (avtpbasepayload->latency == GST_CLOCK_TIME_NONE)) {
     GstQuery *query;
+    gboolean live;
+    GstClockTime min_latency;
 
     query = gst_query_new_latency ();
     if (!gst_pad_peer_query (avtpbasepayload->sinkpad, query))
       return GST_CLOCK_TIME_NONE;
 
-    gst_query_parse_latency (query, NULL, &avtpbasepayload->latency, NULL);
+    gst_query_parse_latency (query, &live, &min_latency, NULL);
+    if (live)
+      avtpbasepayload->latency = min_latency;
+    else
+      avtpbasepayload->latency = 0;
     gst_query_unref (query);
 
-    GST_DEBUG_OBJECT (avtpbasepayload, "latency %" GST_TIME_FORMAT,
-        GST_TIME_ARGS (avtpbasepayload->latency));
+    GST_DEBUG_OBJECT (avtpbasepayload, "live: %d latency %" GST_TIME_FORMAT,
+        live, GST_TIME_ARGS (avtpbasepayload->latency));
   }
 
   base_time = gst_element_get_base_time (GST_ELEMENT (avtpbasepayload));
@@ -251,7 +291,22 @@
   running_time = gst_segment_to_running_time (&avtpbasepayload->segment,
       avtpbasepayload->segment.format, GST_BUFFER_PTS (buffer));
 
-  return base_time + running_time + avtpbasepayload->latency +
+  avtp_timestamp = base_time + running_time + avtpbasepayload->latency +
       avtpbasepayload->processing_deadline + avtpbasepayload->mtt +
       avtpbasepayload->tu;
+
+  GST_TRACE_OBJECT (avtpbasepayload,
+      "Converting PTS: %" GST_TIME_FORMAT " into AVTP: %" GST_TIME_FORMAT
+      " using running_time: %" GST_TIME_FORMAT " + latency: %"
+      GST_TIME_FORMAT " + deadline: %" GST_TIME_FORMAT " + mtt: %"
+      GST_TIME_FORMAT " + tu: %" GST_TIME_FORMAT,
+      GST_TIME_ARGS (GST_BUFFER_PTS (buffer)),
+      GST_TIME_ARGS (avtp_timestamp),
+      GST_TIME_ARGS (running_time),
+      GST_TIME_ARGS (avtpbasepayload->latency),
+      GST_TIME_ARGS (avtpbasepayload->processing_deadline),
+      GST_TIME_ARGS (avtpbasepayload->mtt),
+      GST_TIME_ARGS (avtpbasepayload->tu));
+
+  return avtp_timestamp;
 }
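The payloader's AVTP presentation time in the hunk above is a plain sum: base_time + running_time + latency + processing-deadline + MTT + TU, and the new `DEFAULT_MTT`/`DEFAULT_TU` spellings keep the old nanosecond literals. A standalone sketch of that arithmetic, with `GST_MSECOND` expanded to its nanosecond value (the helper name is illustrative, not GStreamer API):

```c
#include <assert.h>
#include <stdint.h>

/* GST_MSECOND is 1000000 ns; spelled out so the sketch is self-contained. */
#define MSECOND UINT64_C(1000000)

#define DEFAULT_MTT (50 * MSECOND)      /* same value as the old 50000000 */
#define DEFAULT_TU  (MSECOND)           /* same value as the old 1000000 */

/* Mirrors the sum computed in gst_avtp_base_payload_calc_ptime (). */
static uint64_t
calc_avtp_time (uint64_t base_time, uint64_t running_time,
    uint64_t latency, uint64_t deadline)
{
  return base_time + running_time + latency + deadline
      + DEFAULT_MTT + DEFAULT_TU;
}
```

With the default 20 ms processing deadline and zero latency, a buffer is therefore stamped 71 ms past its running time.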
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/avtp/gstavtpcrfbase.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/avtp/gstavtpcrfbase.c
Changed
@@ -267,7 +267,7 @@
   guint64 tstamp_interval, base_freq, pull, type;
   guint64 streamid_valid, streamid, data_len;
   guint32 subtype;
-  int res;
+  int res GST_UNUSED_ASSERT;
 
   if (packet_size < sizeof (struct avtp_crf_pdu))
     return FALSE;
@@ -439,7 +439,7 @@
   */
   if (num_pkt_tstamps == 1) {
     guint64 seqnum;
-    int res;
+    int res GST_UNUSED_ASSERT;
 
     res = avtp_crf_pdu_get (crf_pdu, AVTP_CRF_FIELD_SEQ_NUM, &seqnum);
     g_assert (res == 0);
@@ -495,7 +495,7 @@
   GstAvtpCrfThreadData *data = &avtpcrfbase->thread_data;
   struct avtp_crf_pdu *crf_pdu = g_alloca (MAX_AVTPDU_SIZE);
   guint64 media_clk_reset;
-  int n, res;
+  int n, res GST_UNUSED_ASSERT;
 
   g_assert (data->fd > -1);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/avtp/gstavtpcrfsync.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/avtp/gstavtpcrfsync.c
Changed
@@ -87,7 +87,7 @@
 set_avtp_tstamp (GstAvtpCrfSync * avtpcrfsync, struct avtp_stream_pdu *pdu,
     GstClockTime tstamp)
 {
-  int res;
+  int res GST_UNUSED_ASSERT;
   guint32 type;
 
   res =
@@ -113,7 +113,7 @@
 set_avtp_mr_bit (GstAvtpCrfSync * avtpcrfsync, struct avtp_stream_pdu *pdu,
     guint64 mr)
 {
-  int res;
+  int res GST_UNUSED_ASSERT;
   guint32 type;
 
   res =
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/avtp/gstavtpcrfutil.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/avtp/gstavtpcrfutil.c
Changed
@@ -32,7 +32,7 @@
   struct avtp_stream_pdu *pdu;
   guint64 subtype;
   guint32 type;
-  int res;
+  int res GST_UNUSED_ASSERT;
 
   if (info->size < sizeof (struct avtp_stream_pdu))
     return FALSE;
@@ -57,7 +57,7 @@
 {
   guint64 tstamp = GST_CLOCK_TIME_NONE, tstamp_valid;
   guint32 type;
-  int res;
+  int res GST_UNUSED_ASSERT;
 
   res =
       avtp_pdu_get ((struct avtp_common_pdu *) pdu, AVTP_FIELD_SUBTYPE, &type);
@@ -95,7 +95,7 @@
 {
   guint64 subtype, h264_time_valid;
   guint32 type;
-  int res;
+  int res GST_UNUSED_ASSERT;
 
   /*
   * Validate H264 timestamp for H264 format. For more details about the
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/avtp/gstavtpcvfdepay.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/avtp/gstavtpcvfdepay.c
Changed
@@ -49,8 +49,8 @@
 
 /* prototypes */
 
-static GstFlowReturn gst_avtp_cvf_depay_chain (GstPad * pad, GstObject * parent,
-    GstBuffer * buffer);
+static GstFlowReturn gst_avtp_cvf_depay_process (GstAvtpBaseDepayload *
+    avtpbasedepayload, GstBuffer * buffer);
 static gboolean gst_avtp_cvf_depay_push_caps (GstAvtpVfDepayBase * avtpvfdepay);
 
 #define AVTP_CVF_H264_HEADER_SIZE (sizeof(struct avtp_stream_pdu) + sizeof(guint32))
@@ -102,7 +102,7 @@
       "Extracts compressed video from CVF AVTPDUs",
       "Ederson de Souza <ederson.desouza@intel.com>");
 
-  avtpbasedepayload_class->chain = GST_DEBUG_FUNCPTR (gst_avtp_cvf_depay_chain);
+  avtpbasedepayload_class->process = gst_avtp_cvf_depay_process;
 
   avtpvfdepaybase_class->depay_push_caps =
       GST_DEBUG_FUNCPTR (gst_avtp_cvf_depay_push_caps);
@@ -187,7 +187,7 @@
   gboolean result = FALSE;
   guint64 val;
   guint val32;
-  gint r;
+  gint r GST_UNUSED_ASSERT;
 
   if (G_UNLIKELY (map->size < AVTP_CVF_H264_HEADER_SIZE)) {
     GST_DEBUG_OBJECT (avtpcvfdepay,
@@ -301,9 +301,10 @@
 gst_avtp_cvf_depay_get_avtp_timestamps (GstAvtpCvfDepay * avtpcvfdepay,
     GstMapInfo * map, GstClockTime * pts, GstClockTime * dts)
 {
+  GstAvtpBaseDepayload *base = GST_AVTP_BASE_DEPAYLOAD (avtpcvfdepay);
   struct avtp_stream_pdu *pdu;
   guint64 avtp_time, h264_time, tv, ptv;
-  gint res;
+  gint res GST_UNUSED_ASSERT;
 
   *pts = GST_CLOCK_TIME_NONE;
   *dts = GST_CLOCK_TIME_NONE;
@@ -317,7 +318,8 @@
     res = avtp_cvf_pdu_get (pdu, AVTP_CVF_FIELD_TIMESTAMP, &avtp_time);
     g_assert (res == 0);
 
-    *dts = avtp_time;
+    *dts = gst_avtp_base_depayload_tstamp_to_ptime (base, avtp_time,
+        base->last_dts);
   }
 
   res = avtp_cvf_pdu_get (pdu, AVTP_CVF_FIELD_H264_PTV, &ptv);
@@ -327,7 +329,8 @@
     res = avtp_cvf_pdu_get (pdu, AVTP_CVF_FIELD_H264_TIMESTAMP, &h264_time);
     g_assert (res == 0);
 
-    *pts = h264_time;
+    *pts = gst_avtp_base_depayload_tstamp_to_ptime (base, h264_time,
+        base->last_dts);
   }
 }
@@ -364,7 +367,7 @@
 {
   struct avtp_stream_pdu *pdu;
   guint64 val;
-  gint res;
+  gint res GST_UNUSED_ASSERT;
 
   pdu = (struct avtp_stream_pdu *) map->data;
@@ -380,7 +383,7 @@
 {
   struct avtp_stream_pdu *pdu;
   guint64 val;
-  gint res;
+  gint res GST_UNUSED_ASSERT;
 
   pdu = (struct avtp_stream_pdu *) map->data;
@@ -574,9 +577,10 @@
 }
 
 static GstFlowReturn
-gst_avtp_cvf_depay_chain (GstPad * pad, GstObject * parent, GstBuffer * buffer)
+gst_avtp_cvf_depay_process (GstAvtpBaseDepayload * avtpbasedepayload,
+    GstBuffer * buffer)
 {
-  GstAvtpCvfDepay *avtpcvfdepay = GST_AVTP_CVF_DEPAY (parent);
+  GstAvtpCvfDepay *avtpcvfdepay = GST_AVTP_CVF_DEPAY (avtpbasedepayload);
   GstFlowReturn ret = GST_FLOW_OK;
   gboolean lost_packet;
   GstMapInfo map;
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/avtp/gstavtpcvfpay.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/avtp/gstavtpcvfpay.c
Changed
@@ -133,7 +133,7 @@
   if (transition == GST_STATE_CHANGE_NULL_TO_READY) {
     GstMapInfo map;
     struct avtp_stream_pdu *pdu;
-    int res;
+    int res GST_UNUSED_ASSERT;
 
     avtpcvfpay->header = gst_buffer_new_allocate (NULL,
         AVTP_CVF_H264_HEADER_SIZE, NULL);
@@ -376,7 +376,7 @@
           &last_fragment))) {
     GstBuffer *packet;
     struct avtp_stream_pdu *pdu;
-    gint res;
+    gint res GST_UNUSED_ASSERT;
 
     /* Copy header to reuse common fields and change what is needed */
     header = gst_buffer_copy (avtpcvfpay->header);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/avtp/gstavtprvfdepay.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/avtp/gstavtprvfdepay.c
Changed
@@ -50,8 +50,8 @@
 
 /* prototypes */
 
-static GstFlowReturn gst_avtp_rvf_depay_chain (GstPad * pad, GstObject * parent,
-    GstBuffer * buffer);
+static GstFlowReturn gst_avtp_rvf_depay_process (GstAvtpBaseDepayload *
+    basedepay, GstBuffer * buffer);
 static gboolean gst_avtp_rvf_depay_push_caps (GstAvtpVfDepayBase * avtpvfdepay);
 
@@ -91,7 +91,8 @@
       "Extracts raw video from RVF AVTPDUs",
       "Adrian Fiergolski <Adrian.Fiergolski@fastree3d.com>");
 
-  avtpbasedepayload_class->chain = GST_DEBUG_FUNCPTR (gst_avtp_rvf_depay_chain);
+  avtpbasedepayload_class->process =
+      GST_DEBUG_FUNCPTR (gst_avtp_rvf_depay_process);
 
   avtpvfdepaybase_class->depay_push_caps =
       GST_DEBUG_FUNCPTR (gst_avtp_rvf_depay_push_caps);
@@ -247,7 +248,7 @@
   gboolean result = FALSE;
   guint64 val;
   guint val32;
-  gint r;
+  gint r GST_UNUSED_ASSERT;
 
   if (G_UNLIKELY (map->size < AVTP_RVF_HEADER_SIZE)) {
     GST_DEBUG_OBJECT (avtprvfdepay,
@@ -537,7 +538,7 @@
 {
   struct avtp_stream_pdu *pdu;
   guint64 avtp_time, tv;
-  gint res;
+  gint res GST_UNUSED_ASSERT;
 
   pdu = (struct avtp_stream_pdu *) map->data;
@@ -559,7 +560,7 @@
 {
   struct avtp_stream_pdu *pdu;
   guint64 num_lines, line_number, i_seq_num;
-  gint res;
+  gint res GST_UNUSED_ASSERT;
 
   pdu = (struct avtp_stream_pdu *) map->data;
@@ -588,7 +589,7 @@
 {
   struct avtp_stream_pdu *pdu;
   guint64 val;
-  gint res;
+  gint res GST_UNUSED_ASSERT;
 
   pdu = (struct avtp_stream_pdu *) map->data;
@@ -604,7 +605,7 @@
 {
   struct avtp_stream_pdu *pdu;
   guint64 val;
-  gint res;
+  gint res GST_UNUSED_ASSERT;
 
   pdu = (struct avtp_stream_pdu *) map->data;
@@ -682,9 +683,10 @@
 }
 
 static GstFlowReturn
-gst_avtp_rvf_depay_chain (GstPad * pad, GstObject * parent, GstBuffer * buffer)
+gst_avtp_rvf_depay_process (GstAvtpBaseDepayload * basedepay,
+    GstBuffer * buffer)
 {
-  GstAvtpRvfDepay *avtprvfdepay = GST_AVTP_RVF_DEPAY (parent);
+  GstAvtpRvfDepay *avtprvfdepay = GST_AVTP_RVF_DEPAY (basedepay);
   GstFlowReturn ret = GST_FLOW_OK;
   gboolean lost_packet;
   GstMapInfo map;
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/avtp/gstavtprvfpay.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/avtp/gstavtprvfpay.c
Changed
@@ -367,7 +367,7 @@
   if (transition == GST_STATE_CHANGE_NULL_TO_READY) {
     GstMapInfo map;
     struct avtp_stream_pdu *pdu;
-    int res;
+    int res GST_UNUSED_ASSERT;
 
     avtprvfpay->header = gst_buffer_new_allocate (NULL,
         AVTP_RVF_HEADER_SIZE, NULL);
@@ -439,7 +439,7 @@
   while (offset != buffer_size) {
     GstBuffer *packet;
     struct avtp_stream_pdu *pdu;
-    gint res;
+    gint res GST_UNUSED_ASSERT;
     GstBuffer *fragment;
     gsize fragment_size;
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/avtp/gstavtpsrc.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/avtp/gstavtpsrc.c
Changed
@@ -136,6 +136,7 @@
   gst_base_src_set_live (GST_BASE_SRC (avtpsrc), TRUE);
   gst_base_src_set_format (GST_BASE_SRC (avtpsrc), GST_FORMAT_TIME);
   gst_base_src_set_blocksize (GST_BASE_SRC (avtpsrc), MAX_AVTPDU_SIZE);
+  gst_base_src_set_do_timestamp (GST_BASE_SRC (avtpsrc), TRUE);
 
   avtpsrc->ifname = g_strdup (DEFAULT_IFNAME);
   avtpsrc->address = g_strdup (DEFAULT_ADDRESS);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/avtp/gstavtpvfdepaybase.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/avtp/gstavtpvfdepaybase.c
Changed
@@ -84,9 +84,6 @@
   GstFlowReturn ret;
 
   if (G_UNLIKELY (!gst_pad_has_current_caps (avtpbasedepayload->srcpad))) {
-    guint64 pts_m;
-    guint32 dts, pts;
-
     if (gst_debug_category_get_threshold (GST_CAT_DEFAULT) >= GST_LEVEL_DEBUG) {
       GstClock *clock =
           gst_element_get_clock (GST_ELEMENT_CAST (avtpvfdepaybase));
@@ -108,67 +105,10 @@
       GST_ELEMENT_ERROR (avtpvfdepaybase, CORE, CAPS, (NULL), (NULL));
       return GST_FLOW_ERROR;
     }
-
-    if (!gst_avtp_base_depayload_push_segment_event (avtpbasedepayload,
-            GST_BUFFER_PTS (avtpvfdepaybase->out_buffer))) {
-      GST_ELEMENT_ERROR (avtpvfdepaybase, CORE, EVENT,
-          ("Could not send SEGMENT event"), (NULL));
-    }
-
-    /* Now that we sent our segment starting on the first Presentation
-     * time available, `avtpbasedepayload->prev_ptime` saves that value,
-     * to be a reference for calculating future buffer timestamps from
-     * the AVTP timestamps.
-     *
-     * However, decode timestamps can be smaller than presentation
-     * timestamps. So we can't use `avtpbasedepayload->prev_time` as
-     * reference to calculate them. Instead, here, we calculate the
-     * first decode timestamp and save it on `avtpvfdepaybase->prev_ptime`.
-     *
-     * The method used to calculate the "absolute" decode timestamp (DTS)
-     * from presentation timestamp is as follows:
-     *
-     *   DTS = dts > pts ? (PTSm - 1) | dts : PTSm | dts
-     *
-     * Where:
-     *   dts: 32 bits gPTP decode timestamp
-     *   pts: 32 bits gPTP presentation timestamp
-     *   PTSm: 32 most signifactive bits of the "absolute" presentation
-     *   timestamp
-     *
-     * This allow us to handle cases where the pts ends up being smaller
-     * than dts due pts falling after an AVTP timestamp wrapping.
-     */
-
-    pts = GST_BUFFER_PTS (avtpvfdepaybase->out_buffer);
-    dts = GST_BUFFER_DTS (avtpvfdepaybase->out_buffer);
-    pts_m = avtpbasedepayload->prev_ptime & 0xFFFFFFFF00000000ULL;
-
-    avtpbasedepayload->prev_ptime = dts > pts ?
-        (pts_m -= (1ULL << 32)) | dts : pts_m | dts;
-    GST_DEBUG_OBJECT (avtpvfdepaybase, "prev_ptime set to %" GST_TIME_FORMAT,
-        GST_TIME_ARGS (avtpbasedepayload->prev_ptime));
   }
 
-  /* At this point, we're sure segment was sent, so we can properly calc
-   * buffer timestamps */
-  GST_DEBUG_OBJECT (avtpvfdepaybase, "Converting %" GST_TIME_FORMAT " to PTS",
-      GST_TIME_ARGS (GST_BUFFER_PTS (avtpvfdepaybase->out_buffer)));
-  GST_BUFFER_PTS (avtpvfdepaybase->out_buffer) =
-      gst_avtp_base_depayload_tstamp_to_ptime (avtpbasedepayload, GST_BUFFER_PTS
-      (avtpvfdepaybase->out_buffer), avtpbasedepayload->prev_ptime);
-
-  GST_DEBUG_OBJECT (avtpvfdepaybase, "Converting %" GST_TIME_FORMAT " to DTS",
-      GST_TIME_ARGS (GST_BUFFER_DTS (avtpvfdepaybase->out_buffer)));
-  GST_BUFFER_DTS (avtpvfdepaybase->out_buffer) =
-      gst_avtp_base_depayload_tstamp_to_ptime (avtpbasedepayload, GST_BUFFER_DTS
-      (avtpvfdepaybase->out_buffer), avtpbasedepayload->prev_ptime);
-
-  /* Use DTS as prev_ptime as it is smaller or equal to PTS, so that
-   * next calculations of PTS/DTS won't wrap too early */
-  avtpbasedepayload->prev_ptime = GST_BUFFER_DTS (avtpvfdepaybase->out_buffer);
-
-  ret = gst_pad_push (GST_AVTP_BASE_DEPAYLOAD (avtpvfdepaybase)->srcpad,
+  ret =
+      gst_avtp_base_depayload_push (GST_AVTP_BASE_DEPAYLOAD (avtpvfdepaybase),
       avtpvfdepaybase->out_buffer);
   avtpvfdepaybase->out_buffer = NULL;
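The comment block deleted above documented how a 64-bit decode timestamp is rebuilt from the 32-bit AVTP field: take the upper 32 bits of the previous absolute presentation time, and step back one 2^32 ns epoch when dts > pts (i.e. the pts wrapped first). A standalone sketch of that wrap handling (the function name is illustrative; upstream the conversion now goes through gst_avtp_base_depayload_tstamp_to_ptime() with last_dts as reference):

```c
#include <assert.h>
#include <stdint.h>

/* Rebuild a 64-bit DTS from a 32-bit AVTP decode timestamp (dts),
 * given the 32-bit presentation timestamp (pts) of the same packet
 * and the previous absolute presentation time (prev_ptime). When
 * dts > pts, the pts wrapped past a 32-bit boundary that the dts did
 * not, so the DTS belongs to the previous 2^32 ns epoch. */
static uint64_t
reconstruct_dts (uint64_t prev_ptime, uint32_t pts, uint32_t dts)
{
  uint64_t pts_m = prev_ptime & 0xFFFFFFFF00000000ULL;

  if (dts > pts)
    pts_m -= (1ULL << 32);

  return pts_m | dts;
}
```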
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/avtp/gstavtpvfpaybase.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/avtp/gstavtpvfpaybase.c
Changed
@@ -323,24 +323,26 @@
 gst_avtp_vf_pay_base_sink_event (GstPad * pad, GstObject * parent,
     GstEvent * event)
 {
-  GstCaps *caps;
   GstAvtpBasePayload *avtpbasepayload = GST_AVTP_BASE_PAYLOAD (parent);
   GstAvtpVfPayBase *avtpvfpaybase = GST_AVTP_VF_PAY_BASE (avtpbasepayload);
-  gboolean ret;
 
   GST_DEBUG_OBJECT (avtpvfpaybase, "Sink event %s", GST_EVENT_TYPE_NAME (event));
 
   switch (GST_EVENT_TYPE (event)) {
     case GST_EVENT_CAPS:
-      gst_event_parse_caps (event, &caps);
-      g_assert (GST_AVTP_VF_PAY_BASE_GET_CLASS (avtpvfpaybase)->new_caps !=
-          NULL);
-      ret =
-          GST_AVTP_VF_PAY_BASE_GET_CLASS (avtpvfpaybase)->new_caps
-          (avtpvfpaybase, caps);
-      gst_event_unref (event);
-      return ret;
+      if (GST_AVTP_VF_PAY_BASE_GET_CLASS (avtpvfpaybase)->new_caps) {
+        GstCaps *caps;
+        gboolean ret;
+
+        gst_event_parse_caps (event, &caps);
+        ret =
+            GST_AVTP_VF_PAY_BASE_GET_CLASS (avtpvfpaybase)->new_caps
+            (avtpvfpaybase, caps);
+        gst_event_unref (event);
+        return ret;
+      }
+      break;
     case GST_EVENT_FLUSH_STOP:
       if (GST_ELEMENT (avtpvfpaybase)->current_state == GST_STATE_PLAYING) {
         /* After a flush, the sink will reset pipeline base_time, but only
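The caps handler above changes from asserting that the `new_caps` vtable entry exists to treating it as optional: the hook is invoked only when set, otherwise the event falls through to default handling. A plain-C sketch of that optional-hook pattern (stub types and names; not GStreamer API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Stub class with an optional hook, mirroring the new_caps change. */
struct pay_class
{
  bool (*new_caps) (int caps);  /* may be NULL */
};

static bool default_handled = false;

static bool
accept_caps (int caps)
{
  return caps > 0;
}

/* Call the hook when present; otherwise fall back to default
 * handling instead of asserting the hook exists as the old code did. */
static bool
handle_caps (struct pay_class *klass, int caps)
{
  if (klass->new_caps)
    return klass->new_caps (caps);

  default_handled = true;       /* the "break" to default handling */
  return true;
}
```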
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/bs2b/gstbs2b.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/bs2b/gstbs2b.c
Changed
@@ -222,7 +222,7 @@
 
   g_object_class_install_properties (gobject_class, PROP_LAST, properties);
 
-  gst_element_class_set_metadata (element_class,
+  gst_element_class_set_static_metadata (element_class,
       "Crossfeed effect", "Filter/Effect/Audio",
       "Improve headphone listening of stereo audio records using the bs2b "
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/dash/gstmpdclient.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/dash/gstmpdclient.c
Changed
@@ -727,7 +727,7 @@
     gchar ** query)
 {
   GstStreamPeriod *stream_period;
-  static const gchar empty = "";
+  static const gchar empty GST_UNUSED_CHECKS = "";
   gchar *ret = NULL;
   GstUri *abs_url;
 
@@ -2297,14 +2297,15 @@
     guint stream_idx, gint64 * range_start, gint64 * range_end)
 {
   GstActiveStream *stream;
-  GstStreamPeriod *stream_period;
 
   stream = gst_mpd_client_get_active_stream_by_index (client, stream_idx);
   g_return_val_if_fail (stream != NULL, FALSE);
   g_return_val_if_fail (stream->cur_representation != NULL, FALSE);
 
-  stream_period = gst_mpd_client_get_stream_period (client);
+#ifndef G_DISABLE_CHECKS
+  GstStreamPeriod *stream_period = gst_mpd_client_get_stream_period (client);
   g_return_val_if_fail (stream_period != NULL, FALSE);
   g_return_val_if_fail (stream_period->period != NULL, FALSE);
+#endif
 
   *range_start = 0;
   *range_end = -1;
@@ -2343,14 +2344,15 @@
     guint stream_idx, gint64 * range_start, gint64 * range_end)
 {
   GstActiveStream *stream;
-  GstStreamPeriod *stream_period;
 
   stream = gst_mpd_client_get_active_stream_by_index (client, stream_idx);
   g_return_val_if_fail (stream != NULL, FALSE);
   g_return_val_if_fail (stream->cur_representation != NULL, FALSE);
 
-  stream_period = gst_mpd_client_get_stream_period (client);
+#ifndef G_DISABLE_CHECKS
+  GstStreamPeriod *stream_period = gst_mpd_client_get_stream_period (client);
   g_return_val_if_fail (stream_period != NULL, FALSE);
   g_return_val_if_fail (stream_period->period != NULL, FALSE);
+#endif
 
   *range_start = 0;
   *range_end = -1;
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/dash/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/dash/meson.build
Changed
@@ -79,8 +79,8 @@
 
 xml2_dep = dependency('libxml-2.0',
   version : '>= 2.8',
-  fallback : 'libxml2',
-  required : get_option('dash')
+  required : get_option('dash'),
+  default_options: {'python': false},
 )
 
 if xml2_dep.found()
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/dtls/gstdtlscertificate.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/dtls/gstdtlscertificate.c
Changed
@@ -102,7 +102,7 @@
 
   properties[PROP_PEM] =
       g_param_spec_string ("pem",
       "Pem string",
-      "A string containing a X509 certificate and RSA private key in PEM format",
+      "A string containing a X509 certificate and private key in PEM format",
       DEFAULT_PEM,
       G_PARAM_READWRITE | G_PARAM_CONSTRUCT_ONLY | G_PARAM_STATIC_STRINGS);
 
@@ -192,7 +192,8 @@
 init_generated (GstDtlsCertificate * self)
 {
   GstDtlsCertificatePrivate *priv = self->priv;
-  RSA *rsa;
+  EC_KEY *ec;
+  int curve;
   BIGNUM *serial_number;
   ASN1_INTEGER *asn1_serial_number;
   X509_NAME *name = NULL;
@@ -218,57 +219,46 @@
     return;
   }
 
-  /* XXX: RSA_generate_key is actually deprecated in 0.9.8 */
-#if OPENSSL_VERSION_NUMBER < 0x10100001L
-  rsa = RSA_generate_key (2048, RSA_F4, NULL, NULL);
-#else
-  /*
-   * OpenSSL 3.0 deprecated all low-level APIs, so we need to rewrite this code
-   * to get rid of the warnings. The porting guide explicitly recommends
-   * disabling the warnings if this is not feasible, so let's do that for now:
-   * https://wiki.openssl.org/index.php/OpenSSL_3.0#Upgrading_to_OpenSSL_3.0_from_OpenSSL_1.1.1
-   */
+  GST_INFO_OBJECT (self,
+      "Generating a default DTLS certificate with a ECDSA P-256 private key");
+  curve = OBJ_txt2nid ("prime256v1");
   G_GNUC_BEGIN_IGNORE_DEPRECATIONS;
-  rsa = RSA_new ();
+  ec = EC_KEY_new_by_curve_name (curve);
   G_GNUC_END_IGNORE_DEPRECATIONS;
 
-  if (rsa != NULL) {
-    BIGNUM *e = BN_new ();
-    G_GNUC_BEGIN_IGNORE_DEPRECATIONS;
-    if (e == NULL || !BN_set_word (e, RSA_F4)
-        || !RSA_generate_key_ex (rsa, 2048, e, NULL)) {
-      RSA_free (rsa);
-      rsa = NULL;
-    }
-    G_GNUC_END_IGNORE_DEPRECATIONS;
-    if (e)
-      BN_free (e);
+  if (!ec) {
+    GST_WARNING_OBJECT (self, "failed to create EC");
+    EVP_PKEY_free (priv->private_key);
+    priv->private_key = NULL;
+    X509_free (priv->x509);
+    priv->x509 = NULL;
+    return;
   }
-#endif
 
-  if (!rsa) {
-    GST_WARNING_OBJECT (self, "failed to generate RSA");
-    G_GNUC_BEGIN_IGNORE_DEPRECATIONS;
+  G_GNUC_BEGIN_IGNORE_DEPRECATIONS;
+  EC_KEY_set_asn1_flag (ec, OPENSSL_EC_NAMED_CURVE);
+
+  if (!EC_KEY_generate_key (ec)) {
+    GST_WARNING_OBJECT (self, "failed to generate EC");
+    EC_KEY_free (ec);
+    ec = NULL;
     EVP_PKEY_free (priv->private_key);
-    G_GNUC_END_IGNORE_DEPRECATIONS;
     priv->private_key = NULL;
     X509_free (priv->x509);
     priv->x509 = NULL;
     return;
   }
 
-  G_GNUC_BEGIN_IGNORE_DEPRECATIONS;
-  if (!EVP_PKEY_assign_RSA (priv->private_key, rsa)) {
-    GST_WARNING_OBJECT (self, "failed to assign RSA");
-    RSA_free (rsa);
-    G_GNUC_END_IGNORE_DEPRECATIONS;
-    rsa = NULL;
+  if (!EVP_PKEY_assign_EC_KEY (priv->private_key, ec)) {
+    GST_WARNING_OBJECT (self, "failed to assign EC");
+    EC_KEY_free (ec);
+    ec = NULL;
    EVP_PKEY_free (priv->private_key);
     priv->private_key = NULL;
     X509_free (priv->x509);
     priv->x509 = NULL;
     return;
   }
-  rsa = NULL;
+  G_GNUC_END_IGNORE_DEPRECATIONS;
 
   X509_set_version (priv->x509, 2);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/dtls/gstdtlsdec.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/dtls/gstdtlsdec.c
Changed
@@ -32,6 +32,16 @@
 
 #include "gstdtlscertificate.h"
 
+/**
+ * SECTION: element-dtlsdec
+ * @title: dtlsdec
+ *
+ * This element decodes DTLS packets. Before 1.28 the default X509 PEM
+ * certificate was encoded using a RSA 2048 bits private key. Since 1.28 the
+ * default certificate is encoded using a ECDSA P-256 private key.
+ *
+ */
+
 static GstStaticPadTemplate sink_template = GST_STATIC_PAD_TEMPLATE ("sink",
     GST_PAD_SINK,
     GST_PAD_ALWAYS,
@@ -142,7 +152,7 @@
   properties[PROP_PEM] =
       g_param_spec_string ("pem",
       "PEM string",
-      "A string containing a X509 certificate and RSA private key in PEM format",
+      "A string containing a X509 certificate and private key in PEM format",
       DEFAULT_PEM,
       G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS | GST_PARAM_DOC_SHOW_DEFAULT);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/dtls/gstdtlssrtpdec.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/dtls/gstdtlssrtpdec.c
Changed
@@ -31,6 +31,17 @@
 #include "gstdtlssrtpdec.h"
 
 #include "gstdtlsconnection.h"
 
+/**
+ * SECTION: element-dtlssrtpdec
+ * @title: dtlssrtpdec
+ *
+ * This element decodes SRTP packets with a key received from DTLS. Before 1.28
+ * the default X509 PEM certificate was encoded using a RSA 2048 bits private
+ * key. Since 1.28 the default certificate is encoded using a ECDSA P-256
+ * private key.
+ *
+ */
+
 static GstStaticPadTemplate sink_template = GST_STATIC_PAD_TEMPLATE ("sink",
     GST_PAD_SINK,
     GST_PAD_ALWAYS,
@@ -125,7 +136,7 @@
   properties[PROP_PEM] =
       g_param_spec_string ("pem",
       "PEM string",
-      "A string containing a X509 certificate and RSA private key in PEM format",
+      "A string containing a X509 certificate and private key in PEM format",
       DEFAULT_PEM,
       G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS | GST_PARAM_DOC_SHOW_DEFAULT);
@@ -171,7 +182,7 @@
   GstElementClass *klass = GST_ELEMENT_GET_CLASS (GST_ELEMENT (self));
   GstPadTemplate *templ;
   GstPad *target_pad, *ghost_pad;
-  gboolean ret;
+  gboolean ret GST_UNUSED_CHECKS;
 
   /*
                 +-----------+
@@ -311,7 +322,7 @@
   GstDtlsSrtpDec *self = GST_DTLS_SRTP_DEC (element);
   GstElementClass *klass = GST_ELEMENT_GET_CLASS (element);
   GstPad *ghost_pad = NULL;
-  gboolean ret;
+  gboolean ret GST_UNUSED_CHECKS;
 
   GST_DEBUG_OBJECT (element, "pad requested");
@@ -459,7 +470,7 @@
 {
   GstDtlsSrtpDec *self = GST_DTLS_SRTP_DEC (bin);
   GstPad *demux_pad;
-  gulong id;
+  gulong id GST_UNUSED_CHECKS;
 
   if (!bin->dtls_element) {
     return;
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/dtls/gstdtlssrtpenc.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/dtls/gstdtlssrtpenc.c
Changed
@@ -187,7 +187,7 @@
 {
   GstElementClass *klass = GST_ELEMENT_GET_CLASS (GST_ELEMENT (self));
   static GEnumClass *cipher_enum_class, *auth_enum_class;
-  gboolean ret;
+  gboolean ret GST_UNUSED_CHECKS;
 
   /*
   +--------------------+     +--------------+     +-----------------+
@@ -504,7 +504,7 @@
 {
   GstDtlsSrtpEnc *self = GST_DTLS_SRTP_ENC (bin);
   GstPad *dtls_sink_pad, *peer_pad;
-  gulong id;
+  gulong id GST_UNUSED_CHECKS;
   guint rtp_cipher = 1, rtcp_cipher = 1, rtp_auth = 1, rtcp_auth = 1;
 
   if (!bin->dtls_element) {
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/dts/gstdtsdec.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/dts/gstdtsdec.c
Changed
@@ -84,10 +84,6 @@
 
 #include "gstdtsdec.h"
 
-#if HAVE_ORC
-#include <orc/orc.h>
-#endif
-
 #if defined(LIBDTS_FIXED) || defined(LIBDCA_FIXED)
 #define SAMPLE_WIDTH 16
 #define SAMPLE_FORMAT GST_AUDIO_NE(S16)
@@ -154,7 +150,10 @@
   GObjectClass *gobject_class;
   GstElementClass *gstelement_class;
   GstAudioDecoderClass *gstbase_class;
-  guint cpuflags;
+
+  const gboolean cpuid_mmx = gst_cpuid_supports_x86_mmx ();
+  const gboolean cpuid_3dnow = gst_cpuid_supports_x86_3dnow ();
+  const gboolean cpuid_mmxext = gst_cpuid_supports_x86_mmxext ();
 
   gobject_class = (GObjectClass *) klass;
   gstelement_class = (GstElementClass *) klass;
@@ -192,20 +191,15 @@
 
   klass->dts_cpuflags = 0;
 
-#if HAVE_ORC
-  cpuflags = orc_target_get_default_flags (orc_target_get_by_name ("mmx"));
-  if (cpuflags & ORC_TARGET_MMX_MMX)
+  if (cpuid_mmx)
     klass->dts_cpuflags |= MM_ACCEL_X86_MMX;
-  if (cpuflags & ORC_TARGET_MMX_3DNOW)
+  if (cpuid_3dnow)
     klass->dts_cpuflags |= MM_ACCEL_X86_3DNOW;
-  if (cpuflags & ORC_TARGET_MMX_MMXEXT)
+  if (cpuid_mmxext)
     klass->dts_cpuflags |= MM_ACCEL_X86_MMXEXT;
-#else
-  cpuflags = 0;
-  klass->dts_cpuflags = 0;
-#endif
 
-  GST_LOG ("CPU flags: dts=%08x, orc=%08x", klass->dts_cpuflags, cpuflags);
+  GST_LOG ("CPU flags: dts=%08x, cpuid: mmx=%x, mmxext=%x, 3dnow=%x",
+      klass->dts_cpuflags, cpuid_mmx, cpuid_mmxext, cpuid_3dnow);
 }
 
 static void
@@ -792,10 +786,6 @@
 {
   GST_DEBUG_CATEGORY_INIT (dtsdec_debug, "dtsdec", 0, "DTS/DCA audio decoder");
 
-#if HAVE_ORC
-  orc_init ();
-#endif
-
   return gst_element_register (plugin, "dtsdec", GST_RANK_PRIMARY,
       GST_TYPE_DTSDEC);
 }
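dtsdec above now derives libdca's MM_ACCEL_* bits from plain per-feature CPU probes instead of going through Orc target flags. A standalone sketch of folding such booleans into one acceleration mask (the flag values are illustrative placeholders, not libdca's real constants):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative placeholders for libdca's MM_ACCEL_X86_* bits. */
#define ACCEL_MMX    (1u << 0)
#define ACCEL_3DNOW  (1u << 1)
#define ACCEL_MMXEXT (1u << 2)

/* Mirrors the new class_init logic: fold per-feature booleans into
 * one acceleration bitmask handed to the decoder library. */
static uint32_t
build_accel_flags (bool mmx, bool threednow, bool mmxext)
{
  uint32_t flags = 0;

  if (mmx)
    flags |= ACCEL_MMX;
  if (threednow)
    flags |= ACCEL_3DNOW;
  if (mmxext)
    flags |= ACCEL_MMXEXT;
  return flags;
}
```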
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/dts/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/dts/meson.build
Changed
@@ -49,7 +49,7 @@
   c_args : gst_plugins_bad_args + no_warn_c_args,
   link_args : noseh_link_args,
   include_directories : [configinc, libsinc],
-  dependencies : [gstaudio_dep, orc_dep, dca_dep],
+  dependencies : [gstaudio_dep, dca_dep],
   install : true,
   install_dir : plugins_install_dir,
 )
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/gtk/gstgtkwaylandsink.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/gtk/gstgtkwaylandsink.c
Changed
@@ -31,6 +31,7 @@
 #include <drm_fourcc.h>
 #include <gdk/gdk.h>
 #include <gst/allocators/allocators.h>
+#include <gst/video/gstvideodmabufpool.h>
 #include <gst/wayland/wayland.h>
 
 #ifdef GDK_WINDOWING_WAYLAND
@@ -102,9 +103,14 @@
   GstBufferPool *pool;
   GstBuffer *last_buffer;
 
-  gboolean video_info_changed;
+  gboolean render_info_changed;
+  GstVideoInfo render_info;
   GstVideoInfo video_info;
   GstVideoInfoDmaDrm drm_info;
+  GstVideoMasteringDisplayInfo minfo;
+  GstVideoContentLightLevel linfo;
+  gboolean have_mastering_info;
+  gboolean have_light_info;
   GstCaps *caps;
 
   GMutex render_lock;
@@ -155,7 +161,8 @@
           "rotate method",
           "rotate method",
           GST_TYPE_VIDEO_ORIENTATION_METHOD, GST_VIDEO_ORIENTATION_IDENTITY,
-          G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));
+          G_PARAM_READWRITE | GST_PARAM_MUTABLE_PLAYING |
+          G_PARAM_STATIC_STRINGS));
 
   /**
   * GstGtkWaylandSink:drm-device:
@@ -171,8 +178,8 @@
   gstelement_class->change_state =
       GST_DEBUG_FUNCPTR (gst_gtk_wayland_sink_change_state);
 
-  gst_element_class_set_metadata (gstelement_class, "Gtk Wayland Video Sink",
-      "Sink/Video",
+  gst_element_class_set_static_metadata (gstelement_class,
+      "Gtk Wayland Video Sink", "Sink/Video",
       "A video sink that renders to a GtkWidget using Wayland API",
       "George Kiagiadakis <george.kiagiadakis@collabora.com>");
 
@@ -819,7 +826,7 @@
 {
   GstGtkWaylandSinkPrivate *priv =
       gst_gtk_wayland_sink_get_instance_private (self);
-  gsize size = priv->video_info.size;
+  gsize size = priv->render_info.size;
   GstStructure *config;
 
   /* Pools with outstanding buffer cannot be reconfigured, so we must use
@@ -918,19 +925,24 @@
       goto invalid_format;
 
     if (!gst_video_info_dma_drm_to_video_info (&priv->drm_info,
-            &priv->video_info))
+            &priv->render_info))
       goto invalid_format;
   } else {
     /* extract info from caps */
-    if (!gst_video_info_from_caps (&priv->video_info, caps))
+    if (!gst_video_info_from_caps (&priv->render_info, caps))
       goto invalid_format;
 
     if (!gst_video_info_dma_drm_from_video_info (&priv->drm_info,
-            &priv->video_info, DRM_FORMAT_MOD_LINEAR))
+            &priv->render_info, DRM_FORMAT_MOD_LINEAR))
       gst_video_info_dma_drm_init (&priv->drm_info);
   }
 
-  priv->video_info_changed = TRUE;
+  priv->have_mastering_info =
+      gst_video_mastering_display_info_from_caps (&priv->minfo, caps);
+  priv->have_light_info =
+      gst_video_content_light_level_from_caps (&priv->linfo, caps);
+
+  priv->render_info_changed = TRUE;
   priv->skip_dumb_buffer_copy = FALSE;
 
   /* free pooled buffer used with previous caps */
@@ -948,7 +960,7 @@
             &priv->drm_info))
       goto unsupported_drm_format;
   } else if (!gst_wl_display_check_format_for_shm (priv->display,
-          &priv->video_info)) {
+          &priv->render_info)) {
     /* Note: we still support dmabuf in this case, but formats must also be
     * supported on SHM interface to ensure a fallback is possible as we are
     * not guarantied we'll get dmabuf in the buffers. */
@@ -965,7 +977,7 @@
   }
 
   if (!gtk_gst_base_widget_set_format (GTK_GST_BASE_WIDGET (priv->gtk_widget),
-          &priv->video_info)) {
+          &priv->render_info)) {
     GST_OBJECT_UNLOCK (self);
     return FALSE;
   }
@@ -979,6 +991,7 @@
 
   /* Will be used to create buffer pools */
   gst_caps_replace (&priv->caps, caps);
+  priv->video_info = priv->render_info;
 
   return TRUE;
 
@@ -998,7 +1011,8 @@
 unsupported_format:
   {
     GST_ERROR_OBJECT (self, "Format %s is not available on the display",
-        gst_video_format_to_string (GST_VIDEO_INFO_FORMAT (&priv->video_info)));
+        gst_video_format_to_string (GST_VIDEO_INFO_FORMAT
+            (&priv->render_info)));
     return FALSE;
   }
 }
@@ -1009,35 +1023,66 @@
   GstGtkWaylandSink *self = GST_GTK_WAYLAND_SINK (bsink);
   GstGtkWaylandSinkPrivate *priv =
       gst_gtk_wayland_sink_get_instance_private (self);
+  GstAllocator *allocator = NULL;
   GstCaps *caps;
   GstBufferPool *pool = NULL;
   gboolean need_pool;
+  guint size;
 
   gst_query_parse_allocation (query, &caps, &need_pool);
 
+  if (caps == NULL)
+    return FALSE;
+
+  if (gst_video_is_dma_drm_caps (caps)) {
+    GstVideoInfoDmaDrm drm_info;
+
+    if (!gst_video_info_dma_drm_from_caps (&drm_info, caps))
+      return FALSE;
+
+    size = drm_info.vinfo.size;
+  } else {
+    GstVideoInfo vinfo;
+
+    /* extract info from caps */
+    if (!gst_video_info_from_caps (&vinfo, caps))
+      return FALSE;
+
+    size = vinfo.size;
+
+    allocator = gst_udmabuf_allocator_get ();
+    if (!allocator)
+      allocator = gst_shm_allocator_get ();
+  }
+
   if (need_pool && !gst_video_is_dma_drm_caps (caps)) {
     GstStructure *config;
-    pool = gst_wl_video_buffer_pool_new ();
-    config = gst_buffer_pool_get_config (pool);
-    gst_buffer_pool_config_set_params (config,
-        caps, priv->video_info.size, 2, 0);
-    gst_buffer_pool_config_set_allocator (config,
-        gst_shm_allocator_get (), NULL);
-    gst_buffer_pool_set_config (pool, config);
+
+    if (GST_IS_UDMABUF_ALLOCATOR (allocator)) {
+      pool = gst_video_dmabuf_pool_new ();
+    } else {
+      pool = gst_wl_video_buffer_pool_new ();
+      config = gst_buffer_pool_get_config (pool);
+      gst_buffer_pool_config_set_params (config, caps, size, 2, 0);
+      gst_buffer_pool_config_set_allocator (config,
+          gst_object_ref (allocator), NULL);
+      gst_buffer_pool_set_config (pool, config);
+    }
   }
 
-  gst_query_add_allocation_pool (query, pool, priv->video_info.size, 2, 0);
+  gst_query_add_allocation_pool (query, pool, size, 2, 0);
   if (pool)
     g_object_unref (pool);
 
-  if (!gst_video_is_dma_drm_caps (caps)) {
-    GstAllocator *alloc = gst_shm_allocator_get ();
-    gst_query_add_allocation_param (query, alloc, NULL);
-    g_object_unref (alloc);
-  }
+  if (!gst_video_is_dma_drm_caps (caps))
+    gst_query_add_allocation_param (query, allocator, NULL);
 
   gst_query_add_allocation_meta (query, GST_VIDEO_META_API_TYPE, NULL);
+  if (gst_wl_display_get_viewporter (priv->display))
+    gst_query_add_allocation_meta (query, GST_VIDEO_CROP_META_API_TYPE, NULL);
+
+  gst_clear_object (&allocator);
 
   return TRUE;
 }
@@ -1049,17 +1094,74 @@
   GstGtkWaylandSinkPrivate *priv =
       gst_gtk_wayland_sink_get_instance_private (self);
   GstWlBuffer *wlbuffer;
   const GstVideoInfo *info = NULL;
+  const GstVideoMasteringDisplayInfo *minfo = NULL;
+  const GstVideoContentLightLevel *linfo = NULL;
 
   if (!priv->wl_window)
     return FALSE;
 
   wlbuffer = gst_buffer_get_wl_buffer (priv->display, priv->last_buffer);
 
-  if (G_UNLIKELY (priv->video_info_changed && !redraw)) {
-    info = &priv->video_info;
-    priv->video_info_changed = FALSE;
+  if (G_UNLIKELY (priv->render_info_changed && !redraw)) {
+    info = &priv->render_info;
+
+    if (priv->have_mastering_info)
+      minfo = &priv->minfo;
+
+    if (priv->have_light_info)
+      linfo = &priv->linfo;
+
+    priv->render_info_changed = FALSE;
+  }
+
+  return gst_wl_window_render_hdr (priv->wl_window, wlbuffer, info, minfo,
+      linfo);
+}
+
+static GstFlowReturn
+gst_gtk_wayland_sink_copy_frame (GstGtkWaylandSink * self,
+    GstBuffer * src_buffer, GstBuffer * dst_buffer)
+{
+  GstGtkWaylandSinkPrivate *priv =
+      gst_gtk_wayland_sink_get_instance_private (self);
+  GstVideoFrame src, dst;
+
+  if (!gst_video_frame_map (&dst, &priv->video_info, dst_buffer, GST_MAP_WRITE))
+    goto dst_map_failed;
+
+  if (!gst_video_frame_map (&src, &priv->video_info, src_buffer, GST_MAP_READ)) {
+    gst_video_frame_unmap (&dst);
+    goto src_map_failed;
+  }
+
+  gst_video_frame_copy (&dst, &src);
+
+  gst_video_frame_unmap (&src);
+  gst_video_frame_unmap (&dst);
+
+  /* Also copy the crop meta so its offloaded */
+  GstVideoCropMeta *src_cmeta = gst_buffer_get_video_crop_meta (src_buffer);
+  if (src_cmeta) {
+    GstVideoCropMeta *dst_cmeta = gst_buffer_add_video_crop_meta (dst_buffer);
+    dst_cmeta->x = src_cmeta->x;
+    dst_cmeta->y = src_cmeta->y;
+    dst_cmeta->width = src_cmeta->width;
+    dst_cmeta->height = src_cmeta->height;
+  }
+
+  return GST_FLOW_OK;
+
+src_map_failed:
+  {
+    GST_ELEMENT_ERROR (self, RESOURCE, READ,
+        ("Video memory can not be read from userspace."), (NULL));
+    return GST_FLOW_ERROR;
+  }
+dst_map_failed:
+  {
+    GST_ELEMENT_ERROR (self, RESOURCE, WRITE,
+        ("Video memory can not be written from userspace."), (NULL));
+    return GST_FLOW_ERROR;
  }
-  return gst_wl_window_render (priv->wl_window, wlbuffer, info);
 }
 
 static GstFlowReturn
@@ -1086,6 +1188,35 @@
     goto done;
   }
 
+  /*
+   * The
GstVideoFrame fast copy can't crop, make sure the internal pool + * allocated buffers large enough to hold the padded frames. + */ + if (gst_buffer_get_video_crop_meta (buffer)) { + gint padded_width, padded_height; + GstVideoMeta *vmeta; + GstStructure *s; + + vmeta = gst_buffer_get_video_meta (buffer); + priv->caps = gst_caps_make_writable (priv->caps); + s = gst_caps_get_structure (priv->caps, 0); + gst_structure_get (s, "width", G_TYPE_INT, &padded_width, + "height", G_TYPE_INT, &padded_height, NULL); + + if (vmeta->width != padded_width || vmeta->height != padded_height) { + gst_structure_set (s, "width", G_TYPE_INT, vmeta->width, + "height", G_TYPE_INT, vmeta->height, NULL); + + if (priv->pool) { + gst_buffer_pool_set_active (priv->pool, FALSE); + gst_clear_object (&priv->pool); + } + + gst_video_info_set_format (&priv->video_info, vmeta->format, + vmeta->width, vmeta->height); + } + } + /* make sure that the application has called set_render_rectangle() */ if (G_UNLIKELY (gst_wl_window_get_render_rectangle (priv->wl_window)->w == 0)) goto no_window_size; @@ -1125,7 +1256,6 @@ * offloading the compositor from a copy helps maintaining a smoother * desktop. 
*/ - GstVideoFrame src, dst; if (!gst_gtk_wayland_activate_drm_dumb_pool (self)) { priv->skip_dumb_buffer_copy = TRUE; @@ -1153,19 +1283,9 @@ wlbuffer = gst_buffer_add_wl_buffer (to_render, wbuf, priv->display); } - if (!gst_video_frame_map (&dst, &priv->video_info, to_render, - GST_MAP_WRITE)) - goto dst_map_failed; - - if (!gst_video_frame_map (&src, &priv->video_info, buffer, GST_MAP_READ)) { - gst_video_frame_unmap (&dst); - goto src_map_failed; - } - - gst_video_frame_copy (&dst, &src); - - gst_video_frame_unmap (&src); - gst_video_frame_unmap (&dst); + ret = gst_gtk_wayland_sink_copy_frame (self, buffer, to_render); + if (ret != GST_FLOW_OK) + goto done; goto render; } @@ -1173,15 +1293,13 @@ handle_shm: if (!wbuf && gst_wl_display_check_format_for_shm (priv->display, - &priv->video_info)) { + &priv->render_info)) { if (gst_buffer_n_memory (buffer) == 1 && gst_is_fd_memory (mem)) wbuf = gst_wl_shm_memory_construct_wl_buffer (mem, priv->display, - &priv->video_info); + &priv->render_info); /* If nothing worked, copy into our internal pool */ if (!wbuf) { - GstVideoFrame src, dst; - /* we don't know how to create a wl_buffer directly from the provided * memory, so we have to copy the data to shm memory that we know how * to handle... 
*/ @@ -1205,7 +1323,7 @@ if (G_UNLIKELY (!wlbuffer)) { mem = gst_buffer_peek_memory (to_render, 0); wbuf = gst_wl_shm_memory_construct_wl_buffer (mem, priv->display, - &priv->video_info); + &priv->render_info); if (G_UNLIKELY (!wbuf)) goto no_wl_buffer_shm; @@ -1213,19 +1331,9 @@ wlbuffer = gst_buffer_add_wl_buffer (to_render, wbuf, priv->display); } - if (!gst_video_frame_map (&dst, &priv->video_info, to_render, - GST_MAP_WRITE)) - goto dst_map_failed; - - if (!gst_video_frame_map (&src, &priv->video_info, buffer, GST_MAP_READ)) { - gst_video_frame_unmap (&dst); - goto src_map_failed; - } - - gst_video_frame_copy (&dst, &src); - - gst_video_frame_unmap (&src); - gst_video_frame_unmap (&dst); + ret = gst_gtk_wayland_sink_copy_frame (self, buffer, to_render); + if (ret != GST_FLOW_OK) + goto done; goto render; } @@ -1289,20 +1397,6 @@ ret = GST_FLOW_ERROR; goto done; } -src_map_failed: - { - GST_ELEMENT_ERROR (self, RESOURCE, READ, - ("Video memory can not be read from userspace."), (NULL)); - ret = GST_FLOW_ERROR; - goto done; - } -dst_map_failed: - { - GST_ELEMENT_ERROR (self, RESOURCE, WRITE, - ("Video memory can not be written from userspace."), (NULL)); - ret = GST_FLOW_ERROR; - goto done; - } done: { g_mutex_unlock (&priv->render_lock);
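The gtkwaylandsink changes above factor two identical map/copy/unmap sequences into a single `gst_gtk_wayland_sink_copy_frame` helper, forward the `GstVideoCropMeta` to the copied buffer, and re-create the internal pool when the video meta's padded size no longer matches the caps. The two decisions involved are simple enough to sketch standalone; the plain-C snippet below mirrors that logic with illustrative stand-in types (these are not the GStreamer API):

```c
#include <assert.h>

/* Illustrative stand-in for GstVideoCropMeta: offset plus visible size. */
typedef struct { int x, y, width, height; } CropMeta;

/* Field-by-field forwarding, as done in the new copy helper: if the
 * source buffer carries a crop meta, attach an identical one to the
 * destination so cropping stays offloaded to the compositor. */
static void copy_crop_meta (const CropMeta *src, CropMeta *dst)
{
  dst->x = src->x;
  dst->y = src->y;
  dst->width = src->width;
  dst->height = src->height;
}

/* The fast GstVideoFrame copy cannot crop, so the internal pool must
 * allocate buffers matching the padded (uncropped) frame size from the
 * video meta; a mismatch against the caps size forces a pool rebuild. */
static int pool_needs_rebuild (int meta_w, int meta_h, int caps_w, int caps_h)
{
  return meta_w != caps_w || meta_h != caps_h;
}
```

Usage: a 1920x1088 padded frame arriving while the caps still say 1920x1080 makes `pool_needs_rebuild` return non-zero, which is exactly the case where the diff deactivates and clears `priv->pool`.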
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/hls/gsthlsdemux.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/hls/gsthlsdemux.c
Changed
@@ -428,7 +428,9 @@ /* FIXME: Here we need proper discont handling */ for (walk = hls_stream->playlist->files; walk; walk = walk->next) { file = walk->data; - + if (file->discont) { + stream->discont = TRUE; + } current_sequence = file->sequence; if ((forward && snap_after) || snap_nearest) { if (current_pos >= ts)
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/lcevcdecoder/gstlcevcdec.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/lcevcdecoder/gstlcevcdec.c
Changed
@@ -88,7 +88,7 @@ { LCEVC_DecoderHandle decoder_handle; LCEVC_PictureHandle picture_handle; - guint32 width; + guint width; guint height; } PictureData; @@ -103,7 +103,7 @@ /* Alloc LCEVC picture handle */ if (!gst_lcevc_dec_utils_alloc_picture_handle (decoder_handle, frame, - &ret->picture_handle)) { + &ret->picture_handle, LCEVC_Access_Write)) { g_free (ret); return NULL; } @@ -257,6 +257,10 @@ { GstLcevcDec *lcevc = GST_LCEVC_DEC (decoder); + /* Reset */ + lcevc->out_alloc_width = 0; + lcevc->out_alloc_height = 0; + /* Initialize LCEVC decoder */ if (!initialize_lcevc_decoder (lcevc)) { GST_ELEMENT_ERROR (decoder, LIBRARY, INIT, (NULL), @@ -272,6 +276,12 @@ { GstLcevcDec *lcevc = GST_LCEVC_DEC (decoder); + /* Clear input state */ + g_clear_pointer (&lcevc->input_state, gst_video_codec_state_unref); + + /* Clear output state */ + g_clear_pointer (&lcevc->output_state, gst_video_codec_state_unref); + /* Destry LCEVC decoder */ LCEVC_DestroyDecoder (lcevc->decoder_handle); @@ -295,22 +305,66 @@ static gboolean ensure_output_resolution (GstLcevcDec * lcevc, guint32 width, guint32 height, - GstVideoCodecState * state) + guint32 alloc_width, guint32 alloc_height) { - /* Set output state with input resolution to do passthrough */ - if (width != lcevc->out_width || height != lcevc->out_height) { - GstVideoCodecState *s; + GstVideoCodecState *curr_s, *new_s; + + curr_s = lcevc->output_state ? 
lcevc->output_state : lcevc->input_state; + if (curr_s && + width == GST_VIDEO_INFO_WIDTH (&curr_s->info) && + height == GST_VIDEO_INFO_HEIGHT (&curr_s->info) && + alloc_width == lcevc->out_alloc_width && + alloc_height == lcevc->out_alloc_height) + return TRUE; + + new_s = gst_video_decoder_set_output_state (GST_VIDEO_DECODER (lcevc), + GST_VIDEO_INFO_FORMAT (&lcevc->input_state->info), width, height, curr_s); + if (!new_s) + return FALSE; - s = gst_video_decoder_set_output_state (GST_VIDEO_DECODER (lcevc), - GST_VIDEO_INFO_FORMAT (&lcevc->in_info), width, height, state); - if (!s) - return FALSE; + /* Set allocation caps */ + new_s->allocation_caps = gst_video_info_to_caps (&new_s->info); + gst_caps_set_simple (new_s->allocation_caps, "width", G_TYPE_INT, alloc_width, + "height", G_TYPE_INT, alloc_height, NULL); + lcevc->out_alloc_width = alloc_width; + lcevc->out_alloc_height = alloc_height; - lcevc->out_width = width; - lcevc->out_height = height; + g_clear_pointer (&lcevc->output_state, gst_video_codec_state_unref); + lcevc->output_state = new_s; - gst_video_codec_state_unref (s); - } + GST_INFO_OBJECT (lcevc, "Set output resolution to %dx%d", width, height); + + return TRUE; +} + +static gboolean +ensure_output_par (GstLcevcDec * lcevc, guint32 par_n, guint32 par_d) +{ + GstVideoCodecState *curr_s, *new_s; + + curr_s = lcevc->output_state ? lcevc->output_state : lcevc->input_state; + if (curr_s && + par_n == GST_VIDEO_INFO_PAR_N (&curr_s->info) && + par_d == GST_VIDEO_INFO_PAR_D (&curr_s->info)) + return TRUE; + + new_s = gst_video_decoder_set_output_state (GST_VIDEO_DECODER (lcevc), + GST_VIDEO_INFO_FORMAT (&curr_s->info), + GST_VIDEO_INFO_WIDTH (&curr_s->info), + GST_VIDEO_INFO_HEIGHT (&curr_s->info), curr_s); + if (!new_s) + return FALSE; + + new_s->allocation_caps = + curr_s->allocation_caps ? 
gst_caps_ref (curr_s->allocation_caps) : NULL; + + GST_VIDEO_INFO_PAR_N (&new_s->info) = par_n; + GST_VIDEO_INFO_PAR_D (&new_s->info) = par_d; + + g_clear_pointer (&lcevc->output_state, gst_video_codec_state_unref); + lcevc->output_state = new_s; + + GST_INFO_OBJECT (lcevc, "Set output par to %d/%d", par_n, par_d); return TRUE; } @@ -320,8 +374,6 @@ { GstLcevcDec *lcevc = GST_LCEVC_DEC (decoder); LCEVC_ColorFormat format; - gint par_n, par_d; - guint32 w, h; /* Make sure format is supported */ format = @@ -330,30 +382,13 @@ if (format == LCEVC_ColorFormat_Unknown) return FALSE; - /* Keep input info */ - lcevc->in_info = state->info; - - /* Output resultion is always twice as big as input resultion divided by - * pixel aspect ratio */ - par_n = GST_VIDEO_INFO_PAR_N (&state->info); - if (par_n == 0) - par_n = 1; - par_d = GST_VIDEO_INFO_PAR_D (&state->info); - if (par_d == 0) - par_d = 1; - w = (GST_VIDEO_INFO_WIDTH (&state->info) * 2) / par_d; - h = (GST_VIDEO_INFO_HEIGHT (&state->info) * 2) / par_n; - - /* Set pixel aspect ratio back to 1/1 */ - GST_VIDEO_INFO_PAR_N (&state->info) = 1; - GST_VIDEO_INFO_PAR_D (&state->info) = 1; - - /* Set output resolution */ - if (!ensure_output_resolution (lcevc, w, h, state)) - return FALSE; + /* Keep input state reference */ + g_clear_pointer (&lcevc->input_state, gst_video_codec_state_unref); + lcevc->input_state = gst_video_codec_state_ref (state); - /* We always work with full RAW video frames */ - gst_video_decoder_set_subframe_mode (decoder, FALSE); + GST_INFO_OBJECT (lcevc, "Input resolution changed to %dx%d", + GST_VIDEO_INFO_WIDTH (&lcevc->input_state->info), + GST_VIDEO_INFO_HEIGHT (&lcevc->input_state->info)); return TRUE; } @@ -397,7 +432,6 @@ &picture_handle, &decode_info)) == LCEVC_Success) { LCEVC_PictureDesc pic_desc = { 0, }; GstVideoCodecFrame *received_frame; - GstVideoCropMeta *cmeta; if (LCEVC_GetPictureDesc (lcevc->decoder_handle, picture_handle, &pic_desc) != LCEVC_Success) { @@ -408,67 +442,86 @@ 
GST_INFO_OBJECT (lcevc, "Received enhanced picture: ts=%" G_GINT64_FORMAT " e=%d w=%d h=%d" - " t=%d b=%d l=%d r=%d", + " t=%d b=%d l=%d r=%d par=%d/%d", decode_info.timestamp, decode_info.enhanced, pic_desc.width, pic_desc.height, pic_desc.cropTop, pic_desc.cropBottom, - pic_desc.cropLeft, pic_desc.cropRight); + pic_desc.cropLeft, pic_desc.cropRight, pic_desc.sampleAspectRatioNum, + pic_desc.sampleAspectRatioDen); + /* Get the pending frame */ received_frame = find_pending_frame_from_picture_handle (lcevc, picture_handle); - if (received_frame) { - /* Change output allocation if enhanced picutre resolution changed */ - if (!ensure_output_resolution (lcevc, pic_desc.width, pic_desc.height, - NULL)) { + if (!received_frame) { + GST_ELEMENT_ERROR (lcevc, STREAM, DECODE, (NULL), + ("Decoded LCEVC picture has no pending frame")); + return FALSE; + } + + /* Make sure enhanced resolution is valid */ + if (pic_desc.width != GST_VIDEO_INFO_WIDTH (&lcevc->output_state->info) || + pic_desc.height != GST_VIDEO_INFO_HEIGHT (&lcevc->output_state->info)) { + GST_ELEMENT_ERROR (lcevc, STREAM, DECODE, (NULL), + ("Decoded LCEVC picture has wrong resolution")); + gst_video_codec_frame_unref (received_frame); + return FALSE; + } + + /* Check if decoded picture is cropped */ + if (pic_desc.cropTop > 0 || pic_desc.cropBottom > 0 || + pic_desc.cropLeft > 0 || pic_desc.cropRight > 0) { + guint32 crop_width, crop_height; + + /* Make sure enhanced crop dimensions are valid */ + if (pic_desc.width <= pic_desc.cropLeft + pic_desc.cropRight || + pic_desc.height <= pic_desc.cropTop + pic_desc.cropBottom) { + GST_ELEMENT_ERROR (lcevc, STREAM, DECODE, (NULL), + ("Decoded LCEVC picture has wrong crop dimensions")); gst_video_codec_frame_unref (received_frame); return FALSE; } - /* Add crop meta if downstream can crop */ + crop_width = pic_desc.width - (pic_desc.cropLeft + pic_desc.cropRight); + crop_height = pic_desc.height - (pic_desc.cropTop + pic_desc.cropBottom); + + /* Attach crop meta if 
downstream can crop */ if (lcevc->can_crop) { + GstVideoCropMeta *cmeta; cmeta = gst_buffer_add_video_crop_meta (received_frame->output_buffer); cmeta->x = pic_desc.cropLeft; cmeta->y = pic_desc.cropTop; - cmeta->width = - pic_desc.width - (pic_desc.cropLeft + pic_desc.cropRight); - cmeta->height = - pic_desc.height - (pic_desc.cropTop + pic_desc.cropBottom); - - /* Change output caps if crop values changed */ - if (lcevc->out_crop_top != pic_desc.cropTop || - lcevc->out_crop_bottom != pic_desc.cropBottom || - lcevc->out_crop_left != pic_desc.cropLeft || - lcevc->out_crop_right != pic_desc.cropRight) { - GstVideoCodecState *s; - - lcevc->out_crop_top = pic_desc.cropTop; - lcevc->out_crop_bottom = pic_desc.cropBottom; - lcevc->out_crop_left = pic_desc.cropLeft; - lcevc->out_crop_right = pic_desc.cropRight; - - s = gst_video_decoder_get_output_state (GST_VIDEO_DECODER (lcevc)); - if (!s) { - gst_video_codec_frame_unref (received_frame); - return FALSE; - } - - s->caps = gst_video_info_to_caps (&s->info); - gst_caps_set_simple (s->caps, - "width", G_TYPE_INT, - pic_desc.width - (pic_desc.cropLeft + pic_desc.cropRight), - "height", G_TYPE_INT, - pic_desc.height - (pic_desc.cropTop + pic_desc.cropBottom), NULL); - gst_video_decoder_negotiate (GST_VIDEO_DECODER (lcevc)); - - gst_video_codec_state_unref (s); + cmeta->width = crop_width; + cmeta->height = crop_height; + + /* Update the crop resolution */ + if (!ensure_output_resolution (lcevc, crop_width, crop_height, + lcevc->out_alloc_width, lcevc->out_alloc_width)) { + GST_ELEMENT_ERROR (lcevc, STREAM, DECODE, (NULL), + ("Could not update output crop resolution")); + gst_video_codec_frame_unref (received_frame); + return FALSE; } + } else { + /* FIXME: Do a copy of the cropped area instead of error */ + GST_ELEMENT_ERROR (lcevc, STREAM, DECODE, (NULL), + ("Decoded LCEVC picture is cropped but downstream cannot crop")); + gst_video_codec_frame_unref (received_frame); + return FALSE; } + } - /* Finish frame */ - 
received_frame->output_buffer->pts = decode_info.timestamp; - gst_video_decoder_finish_frame (GST_VIDEO_DECODER (lcevc), - received_frame); + /* Update the pixel aspect ratio */ + if (!ensure_output_par (lcevc, pic_desc.sampleAspectRatioNum, + pic_desc.sampleAspectRatioDen)) { + GST_ELEMENT_ERROR (lcevc, STREAM, DECODE, (NULL), + ("Could not update output pixel aspect ratio")); gst_video_codec_frame_unref (received_frame); + return FALSE; } + + /* Finish frame */ + received_frame->output_buffer->pts = decode_info.timestamp; + gst_video_decoder_finish_frame (GST_VIDEO_DECODER (lcevc), received_frame); + gst_video_codec_frame_unref (received_frame); } /* Make sure no errors happened */ @@ -517,6 +570,10 @@ gboolean ret = FALSE; GstLcevcMeta *lcevc_meta; GstMapInfo enhancement_info; + uint32_t out_w, out_h; + + out_w = GST_VIDEO_INFO_WIDTH (&lcevc->input_state->info); + out_h = GST_VIDEO_INFO_HEIGHT (&lcevc->input_state->info); lcevc_meta = gst_buffer_get_lcevc_meta (input_buffer); if (!lcevc_meta) { @@ -524,11 +581,7 @@ "Input buffer %" GST_TIME_FORMAT " enhancement data not found, doing passthrough", GST_TIME_ARGS (GST_BUFFER_PTS (input_buffer))); - - /* Set output state with input resolution to do passthrough */ - return ensure_output_resolution (lcevc, - GST_VIDEO_INFO_WIDTH (&lcevc->in_info), - GST_VIDEO_INFO_HEIGHT (&lcevc->in_info), NULL); + return ensure_output_resolution (lcevc, out_w, out_h, out_w, out_h); } if (!gst_buffer_map (lcevc_meta->enhancement_data, &enhancement_info, @@ -539,7 +592,7 @@ } if (LCEVC_SendDecoderEnhancementData (lcevc->decoder_handle, - input_buffer->pts, TRUE, enhancement_info.data, + input_buffer->pts, enhancement_info.data, enhancement_info.size) != LCEVC_Success) { GST_INFO_OBJECT (lcevc, "Could not send input buffer %" GST_TIME_FORMAT @@ -548,6 +601,19 @@ goto done; } + /* Now peek and update the output resolution */ + if (LCEVC_PeekDecoder (lcevc->decoder_handle, input_buffer->pts, + &out_w, &out_h) != LCEVC_Success) { + 
GST_INFO_OBJECT (lcevc, "Could not peek decoder for output resolution"); + goto done; + } + + if (!ensure_output_resolution (lcevc, out_w, out_h, out_w, out_h)) { + GST_INFO_OBJECT (lcevc, "Could not set output resolution to %dx%d", out_w, + out_h); + goto done; + } + GST_INFO_OBJECT (lcevc, "Sent input buffer %" GST_TIME_FORMAT " enhancement data with size %zu", GST_TIME_ARGS (GST_BUFFER_PTS (input_buffer)), enhancement_info.size); @@ -565,7 +631,7 @@ LCEVC_PictureHandle picture_handle; GstVideoFrame frame = { 0, }; - if (!gst_video_frame_map (&frame, &lcevc->in_info, input_buffer, + if (!gst_video_frame_map (&frame, &lcevc->input_state->info, input_buffer, GST_MAP_READ)) { GST_ELEMENT_ERROR (lcevc, STREAM, DECODE, (NULL), ("Could not map input buffer %" GST_TIME_FORMAT, @@ -574,14 +640,14 @@ } if (!gst_lcevc_dec_utils_alloc_picture_handle (lcevc->decoder_handle, - &frame, &picture_handle)) { + &frame, &picture_handle, LCEVC_Access_Read)) { GST_ELEMENT_ERROR (lcevc, STREAM, DECODE, (NULL), ("Could not allocate input picture handle %" GST_TIME_FORMAT, GST_TIME_ARGS (GST_BUFFER_PTS (input_buffer)))); goto done; } - if (LCEVC_SendDecoderBase (lcevc->decoder_handle, input_buffer->pts, TRUE, + if (LCEVC_SendDecoderBase (lcevc->decoder_handle, input_buffer->pts, picture_handle, 1000000, NULL) != LCEVC_Success) { GST_ELEMENT_ERROR (lcevc, STREAM, DECODE, (NULL), ("Could not send input buffer %" GST_TIME_FORMAT " base picture", @@ -589,8 +655,10 @@ goto done; } - GST_INFO_OBJECT (lcevc, "Sent input buffer %" GST_TIME_FORMAT " base picture", - GST_TIME_ARGS (GST_BUFFER_PTS (input_buffer))); + GST_INFO_OBJECT (lcevc, + "Sent input buffer %" GST_TIME_FORMAT " base picture %dx%d", + GST_TIME_ARGS (GST_BUFFER_PTS (input_buffer)), + GST_VIDEO_FRAME_WIDTH (&frame), GST_VIDEO_FRAME_HEIGHT (&frame)); ret = TRUE; done: @@ -625,7 +693,8 @@ /* Get pic data if any and size didn't change, otherwise create a new one */ pd = gst_mini_object_get_qdata (GST_MINI_OBJECT 
(frame->output_buffer), GST_LCEVC_DEC_PICTURE_DATA); - if (!pd || pd->width != lcevc->out_width || pd->height != lcevc->out_height) { + if (!pd || pd->width != lcevc->out_alloc_width || + pd->height != lcevc->out_alloc_height) { /* Create picture data */ pd = picture_data_new (lcevc->decoder_handle, &map); if (!pd) {
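The gstlcevcdec.c changes above add validation of the crop values reported in the decoded `LCEVC_PictureDesc`: crops are rejected unless they leave a non-empty visible region, and only then are the cropped dimensions derived. A minimal standalone sketch of that arithmetic in plain C (function name is illustrative, not part of the element):

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors the validation added to the LCEVC decode path: a decoded
 * picture's crop values are usable only if they leave a non-empty
 * visible area.  Returns 0 for invalid crops; otherwise fills the
 * cropped width/height and returns 1. */
static int compute_cropped_size (uint32_t width, uint32_t height,
                                 uint32_t left, uint32_t right,
                                 uint32_t top, uint32_t bottom,
                                 uint32_t *crop_w, uint32_t *crop_h)
{
  /* Same "<=" check as the diff: an exact or over-sized crop is invalid. */
  if (width <= left + right || height <= top + bottom)
    return 0;
  *crop_w = width - (left + right);
  *crop_h = height - (top + bottom);
  return 1;
}
```

In the element, a failed check raises `GST_ELEMENT_ERROR (… STREAM, DECODE …)` and drops the pending frame; on success the values feed the crop meta and `ensure_output_resolution`.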
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/lcevcdecoder/gstlcevcdec.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/lcevcdecoder/gstlcevcdec.h
Changed
@@ -51,15 +51,12 @@ gint max_latency; LCEVC_DecoderHandle decoder_handle; - GstVideoInfo in_info; + GstVideoCodecState *input_state; + GstVideoCodecState *output_state; gboolean can_crop; - guint32 out_width; - guint32 out_height; - guint32 out_crop_top; - guint32 out_crop_bottom; - guint32 out_crop_left; - guint32 out_crop_right; + guint32 out_alloc_width; + guint32 out_alloc_height; }; struct _GstLcevcDecClass {
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/lcevcdecoder/gstlcevcdecodebin.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/lcevcdecoder/gstlcevcdecodebin.c
Changed
@@ -25,6 +25,7 @@ #include "gstlcevcdecutils.h" #include "gstlcevcdecodebin.h" +#include "gstlcevcdec.h" enum { @@ -40,8 +41,10 @@ /* Props */ gchar *base_decoder; - gboolean constructed; - const gchar *missing_element; + GstPad *sink_pad; + GstPad *src_pad; + GstElement *base_decoder_element; + GstElement *lcevcdec_element; } GstLcevcDecodeBinPrivate; #define gst_lcevc_decode_bin_parent_class parent_class @@ -59,42 +62,6 @@ (GST_LCEVC_DEC_UTILS_SUPPORTED_FORMATS)) ); -static gboolean -gst_lcevc_decode_bin_open (GstLcevcDecodeBin * self) -{ - GstLcevcDecodeBinPrivate *priv = - gst_lcevc_decode_bin_get_instance_private (self); - - if (priv->missing_element) { - gst_element_post_message (GST_ELEMENT (self), - gst_missing_element_message_new (GST_ELEMENT (self), - priv->missing_element)); - } else if (!priv->constructed) { - GST_ELEMENT_ERROR (self, CORE, FAILED, (NULL), - ("Failed to construct or link LCEVC decoder elements.")); - } - - return priv->constructed; -} - -static GstStateChangeReturn -gst_lcevc_decode_bin_change_state (GstElement * element, - GstStateChange transition) -{ - GstLcevcDecodeBin *self = GST_LCEVC_DECODE_BIN (element); - - switch (transition) { - case GST_STATE_CHANGE_NULL_TO_READY: - if (!gst_lcevc_decode_bin_open (self)) - return GST_STATE_CHANGE_FAILURE; - break; - default: - break; - } - - return GST_ELEMENT_CLASS (parent_class)->change_state (element, transition); -} - static char * gst_lcevc_decode_bin_find_base_decoder (GstLcevcDecodeBin * self) { @@ -158,78 +125,135 @@ return res; } -static void -gst_lcevc_decode_bin_constructed (GObject * obj) +static gboolean +gst_lcevc_decode_bin_open (GstLcevcDecodeBin * self) { - GstLcevcDecodeBin *self = GST_LCEVC_DECODE_BIN (obj); - GstLcevcDecodeBinClass *klass = GST_LCEVC_DECODE_BIN_GET_CLASS (self); GstLcevcDecodeBinPrivate *priv = gst_lcevc_decode_bin_get_instance_private (self); - GstPad *src_gpad, *sink_gpad; - GstPad *src_pad = NULL, *sink_pad = NULL; - GstElement *base_decoder = NULL; - 
GstElement *lcevcdec = NULL; - - /* setup ghost pads */ - sink_gpad = gst_ghost_pad_new_no_target_from_template ("sink", - gst_element_class_get_pad_template (GST_ELEMENT_CLASS (klass), "sink")); - gst_element_add_pad (GST_ELEMENT (self), sink_gpad); + GstPad *sink_pad = NULL, *src_pad = NULL; - src_gpad = gst_ghost_pad_new_no_target_from_template ("src", - gst_element_class_get_pad_template (GST_ELEMENT_CLASS (klass), "src")); - gst_element_add_pad (GST_ELEMENT (self), src_gpad); + /* Create and add the LCEVC decoder */ + priv->lcevcdec_element = g_object_new (GST_TYPE_LCEVC_DEC, NULL); + gst_bin_add (GST_BIN (self), gst_object_ref (priv->lcevcdec_element)); - /* Create base decoder if name is given, otherwise fine one */ + /* Create the base decoder if name is given, otherwise find one */ if (priv->base_decoder) { - base_decoder = gst_element_factory_make (priv->base_decoder, NULL); - if (!base_decoder) { - priv->missing_element = priv->base_decoder; + priv->base_decoder_element = gst_element_factory_make (priv->base_decoder, + NULL); + if (!priv->base_decoder_element) { + GST_ELEMENT_ERROR (self, CORE, FAILED, (NULL), + ("Could not create %s element", priv->base_decoder)); goto error; } } else { gchar *name = gst_lcevc_decode_bin_find_base_decoder (self); - if (!name) + if (!name) { + GST_ELEMENT_ERROR (self, CORE, FAILED, (NULL), + ("Could not find any base decoder element")); goto error; - base_decoder = gst_element_factory_make (name, NULL); + } + priv->base_decoder_element = gst_element_factory_make (name, NULL); + g_assert (priv->base_decoder_element); g_free (name); - if (!base_decoder) - goto error; - } - - /* Create LCEVC decoder */ - lcevcdec = gst_element_factory_make ("lcevcdec", NULL); - if (!lcevcdec) { - priv->missing_element = "lcevcdec"; - goto error; } - if (!gst_bin_add (GST_BIN (self), base_decoder)) - goto error; - if (!gst_bin_add (GST_BIN (self), lcevcdec)) - goto error; + /* Add the base decoder to bin */ + gst_bin_add (GST_BIN (self), 
gst_object_ref (priv->base_decoder_element)); - if (!gst_element_link (base_decoder, lcevcdec)) + /* Link the base decoder with the LCEVC decoder */ + if (!gst_element_link (priv->base_decoder_element, priv->lcevcdec_element)) { + GST_ELEMENT_ERROR (self, CORE, FAILED, (NULL), + ("Could not link base decoder with LCEVC decoder")); goto error; + } - /* link elements */ - sink_pad = gst_element_get_static_pad (base_decoder, "sink"); - gst_ghost_pad_set_target (GST_GHOST_PAD (sink_gpad), sink_pad); + /* Set sink ghost pad target */ + sink_pad = gst_element_get_static_pad (priv->base_decoder_element, "sink"); + gst_ghost_pad_set_target (GST_GHOST_PAD (priv->sink_pad), sink_pad); gst_clear_object (&sink_pad); - src_pad = gst_element_get_static_pad (lcevcdec, "src"); - gst_ghost_pad_set_target (GST_GHOST_PAD (src_gpad), src_pad); + /* Set src ghost pad target */ + src_pad = gst_element_get_static_pad (priv->lcevcdec_element, "src"); + gst_ghost_pad_set_target (GST_GHOST_PAD (priv->src_pad), src_pad); gst_object_unref (src_pad); - /* signal success, we will handle this in NULL->READY transition */ - priv->constructed = TRUE; - G_OBJECT_CLASS (parent_class)->constructed (obj); - return; + return TRUE; error: - gst_clear_object (&base_decoder); - gst_clear_object (&lcevcdec); + if (priv->base_decoder_element) { + gst_bin_remove (GST_BIN (self), priv->base_decoder_element); + priv->base_decoder_element = NULL; + } + if (priv->lcevcdec_element) { + gst_bin_remove (GST_BIN (self), priv->lcevcdec_element); + priv->lcevcdec_element = NULL; + } + return FALSE; +} + +static void +gst_lcevc_decode_bin_close (GstLcevcDecodeBin * self) +{ + GstLcevcDecodeBinPrivate *priv = + gst_lcevc_decode_bin_get_instance_private (self); + + g_assert (priv->base_decoder_element); + + /* Unset sink ghost pad target */ + gst_ghost_pad_set_target (GST_GHOST_PAD (priv->sink_pad), NULL); + + /* Unset source ghost pad target */ + gst_ghost_pad_set_target (GST_GHOST_PAD (priv->src_pad), NULL); + + /* 
Unlink and remove base decoder */ + if (priv->base_decoder_element) { + gst_element_unlink (priv->base_decoder_element, priv->lcevcdec_element); + gst_bin_remove (GST_BIN (self), priv->base_decoder_element); + priv->base_decoder_element = NULL; + } + + /* Remove LCEVC decoder */ + gst_bin_remove (GST_BIN (self), priv->lcevcdec_element); + priv->lcevcdec_element = NULL; +} + +static GstStateChangeReturn +gst_lcevc_decode_bin_change_state (GstElement * element, + GstStateChange transition) +{ + GstLcevcDecodeBin *self = GST_LCEVC_DECODE_BIN (element); + + switch (transition) { + case GST_STATE_CHANGE_NULL_TO_READY: + if (!gst_lcevc_decode_bin_open (self)) + return GST_STATE_CHANGE_FAILURE; + break; + case GST_STATE_CHANGE_READY_TO_NULL: + gst_lcevc_decode_bin_close (self); + break; + default: + break; + } + + return GST_ELEMENT_CLASS (parent_class)->change_state (element, transition); +} + +static void +gst_lcevc_decode_bin_constructed (GObject * obj) +{ + GstLcevcDecodeBin *self = GST_LCEVC_DECODE_BIN (obj); + GstLcevcDecodeBinPrivate *priv = + gst_lcevc_decode_bin_get_instance_private (self); + GstLcevcDecodeBinClass *klass = GST_LCEVC_DECODE_BIN_GET_CLASS (self); + + priv->sink_pad = gst_ghost_pad_new_no_target_from_template ("sink", + gst_element_class_get_pad_template (GST_ELEMENT_CLASS (klass), "sink")); + gst_element_add_pad (GST_ELEMENT (self), gst_object_ref (priv->sink_pad)); + + priv->src_pad = gst_ghost_pad_new_no_target_from_template ("src", + gst_element_class_get_pad_template (GST_ELEMENT_CLASS (klass), "src")); + gst_element_add_pad (GST_ELEMENT (self), gst_object_ref (priv->src_pad)); - priv->constructed = FALSE; G_OBJECT_CLASS (parent_class)->constructed (obj); } @@ -240,6 +264,10 @@ GstLcevcDecodeBinPrivate *priv = gst_lcevc_decode_bin_get_instance_private (self); + gst_clear_object (&priv->sink_pad); + gst_clear_object (&priv->src_pad); + + /* Props */ g_free (priv->base_decoder); G_OBJECT_CLASS (parent_class)->finalize (obj); @@ -256,9 +284,15 @@ 
switch (prop_id) { case PROP_BASE_DECODER: + if (GST_STATE (self) != GST_STATE_NULL) { + GST_WARNING_OBJECT (self, + "Can't set base decoder property if not on NULL state"); + break; + } g_clear_pointer (&priv->base_decoder, g_free); priv->base_decoder = g_value_dup_string (value); break; + default: G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); break;
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/lcevcdecoder/gstlcevcdecutils.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/lcevcdecoder/gstlcevcdecutils.c
Changed
@@ -25,10 +25,30 @@ switch (format) { case GST_VIDEO_FORMAT_I420: return LCEVC_I420_8; + case GST_VIDEO_FORMAT_I420_10LE: + return LCEVC_I420_10_LE; + case GST_VIDEO_FORMAT_I420_12LE: + return LCEVC_I420_12_LE; + + case GST_VIDEO_FORMAT_Y42B: + return LCEVC_I422_8; + case GST_VIDEO_FORMAT_I422_10LE: + return LCEVC_I422_10_LE; + case GST_VIDEO_FORMAT_I422_12LE: + return LCEVC_I422_12_LE; + + case GST_VIDEO_FORMAT_Y444: + return LCEVC_I444_8; + case GST_VIDEO_FORMAT_Y444_10LE: + return LCEVC_I444_10_LE; + case GST_VIDEO_FORMAT_Y444_12LE: + return LCEVC_I444_12_LE; + case GST_VIDEO_FORMAT_NV12: return LCEVC_NV12_8; case GST_VIDEO_FORMAT_NV21: return LCEVC_NV21_8; + case GST_VIDEO_FORMAT_RGB: return LCEVC_RGB_8; case GST_VIDEO_FORMAT_BGR: @@ -41,6 +61,12 @@ return LCEVC_ARGB_8; case GST_VIDEO_FORMAT_ABGR: return LCEVC_ABGR_8; + + case GST_VIDEO_FORMAT_GRAY8: + return LCEVC_GRAY_8; + case GST_VIDEO_FORMAT_GRAY16_LE: + return LCEVC_GRAY_16_LE; + default: break; } @@ -50,7 +76,8 @@ gboolean gst_lcevc_dec_utils_alloc_picture_handle (LCEVC_DecoderHandle decoder_handle, - GstVideoFrame * frame, LCEVC_PictureHandle * picture_handle) + GstVideoFrame * frame, LCEVC_PictureHandle * picture_handle, + LCEVC_Access access) { LCEVC_PictureDesc picture_desc = { 0, }; LCEVC_PictureBufferDesc buffer_desc = { 0, }; @@ -67,11 +94,13 @@ GST_VIDEO_FRAME_WIDTH (frame), GST_VIDEO_FRAME_HEIGHT (frame)) != LCEVC_Success) return FALSE; + picture_desc.sampleAspectRatioNum = GST_VIDEO_INFO_PAR_N (&frame->info); + picture_desc.sampleAspectRatioDen = GST_VIDEO_INFO_PAR_D (&frame->info); /* Set buffer description */ buffer_desc.data = GST_VIDEO_FRAME_PLANE_DATA (frame, 0); buffer_desc.byteSize = GST_VIDEO_FRAME_SIZE (frame); - buffer_desc.access = LCEVC_Access_Write; + buffer_desc.access = access; /* Set plane description */ for (i = 0; i < GST_VIDEO_FRAME_N_PLANES (frame); i++) { @@ -79,10 +108,6 @@ plane_desci.rowByteStride = GST_VIDEO_FRAME_PLANE_STRIDE (frame, i); } - /* FIXME: We set the stride 
on all the array (needed for LCEVCdec 2.0.0) */ - for (; i < GST_VIDEO_MAX_PLANES; i++) - plane_desc[i].rowByteStride = GST_VIDEO_FRAME_WIDTH (frame); - /* Allocate LCEVC Picture */ if (LCEVC_AllocPictureExternal (decoder_handle, &picture_desc, &buffer_desc, plane_desc, picture_handle) != LCEVC_Success)
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/lcevcdecoder/gstlcevcdecutils.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/lcevcdecoder/gstlcevcdecutils.h
Changed
@@ -27,15 +27,17 @@ G_BEGIN_DECLS -/* TODO: Only I420 and NV12 are currently working with the SDK */ +/* RGB and GRAY formats are only placeholders in LCEVCDec and therefore are not + * supported yet. */ #define GST_LCEVC_DEC_UTILS_SUPPORTED_FORMATS \ - "{ I420, NV12 }" + "{ I420, I420_10LE, I420_12LE, Y42B, I422_10LE, I422_12LE, Y444, \ + Y444_10LE, Y444_12LE, NV12, NV21 }" LCEVC_ColorFormat gst_lcevc_dec_utils_get_color_format (GstVideoFormat format); gboolean gst_lcevc_dec_utils_alloc_picture_handle ( LCEVC_DecoderHandle decoder_handle, GstVideoFrame *frame, - LCEVC_PictureHandle *picture_handle); + LCEVC_PictureHandle *picture_handle, LCEVC_Access access); G_END_DECLS
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/lcevcdecoder/gstlcevch264decodebin.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/lcevcdecoder/gstlcevch264decodebin.c
Changed
@@ -38,9 +38,10 @@ G_DEFINE_TYPE (GstLcevcH264DecodeBin, gst_lcevc_h264_decode_bin, GST_TYPE_LCEVC_DECODE_BIN); +// No rank for now owing to autoplugging issues with non-LCEVC streams. +// was: GST_RANK_PRIMARY + GST_LCEVC_DECODE_BIN_RANK_OFFSET, GST_ELEMENT_REGISTER_DEFINE (lcevch264decodebin, "lcevch264decodebin", - GST_RANK_PRIMARY + GST_LCEVC_DECODE_BIN_RANK_OFFSET, - GST_TYPE_LCEVC_H264_DECODE_BIN); + GST_RANK_NONE, GST_TYPE_LCEVC_H264_DECODE_BIN); static GstCaps * gst_lcevc_h264_decode_bin_get_base_decoder_sink_caps (GstLcevcDecodeBin * base)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/lcevcdecoder/gstlcevch265decodebin.c
Added
@@ -0,0 +1,73 @@ +/* GStreamer + * Copyright (C) <2025> V-Nova International Limited + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gstlcevch265decodebin.h" + +static GstStaticPadTemplate sink_template = GST_STATIC_PAD_TEMPLATE ("sink", + GST_PAD_SINK, + GST_PAD_ALWAYS, + GST_STATIC_CAPS ("video/x-h265, lcevc = (boolean) true") + ); + +struct _GstLcevcH265DecodeBin +{ + GstLcevcDecodeBin parent; +}; + +#define gst_lcevc_h265_decode_bin_parent_class parent_class +G_DEFINE_TYPE (GstLcevcH265DecodeBin, gst_lcevc_h265_decode_bin, + GST_TYPE_LCEVC_DECODE_BIN); + +// No rank for now owing to autoplugging issues with non-LCEVC streams. 
+// was: GST_RANK_PRIMARY + GST_LCEVC_DECODE_BIN_RANK_OFFSET, +GST_ELEMENT_REGISTER_DEFINE (lcevch265decodebin, "lcevch265decodebin", + GST_RANK_NONE, GST_TYPE_LCEVC_H265_DECODE_BIN); + +static GstCaps * +gst_lcevc_h265_decode_bin_get_base_decoder_sink_caps (GstLcevcDecodeBin * base) +{ + return gst_caps_new_simple ("video/x-h265", + "lcevc", G_TYPE_BOOLEAN, FALSE, NULL); +} + +static void +gst_lcevc_h265_decode_bin_class_init (GstLcevcH265DecodeBinClass * klass) +{ + GstElementClass *element_class = GST_ELEMENT_CLASS (klass); + GstLcevcDecodeBinClass *ldb_class = GST_LCEVC_DECODE_BIN_CLASS (klass); + + gst_element_class_add_static_pad_template (element_class, &sink_template); + + gst_element_class_set_static_metadata (element_class, + "H.265 + MPEG-5 LCEVC Decode Bin", "Codec/Decoder/Video", + "Wrapper bin to decode H265 with LCEVC data.", + "Julian Bouzas <julian.bouzas@collabora.com>"); + + ldb_class->get_base_decoder_sink_caps = + gst_lcevc_h265_decode_bin_get_base_decoder_sink_caps; +} + +static void +gst_lcevc_h265_decode_bin_init (GstLcevcH265DecodeBin * self) +{ +}
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/lcevcdecoder/gstlcevch265decodebin.h
Added
@@ -0,0 +1,34 @@ +/* GStreamer + * Copyright (C) <2025> V-Nova International Limited + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifndef __GST_LCEVC_H265_DECODE_BIN_H__ +#define __GST_LCEVC_H265_DECODE_BIN_H__ + +#include "gstlcevcdecodebin.h" + +G_BEGIN_DECLS + +#define GST_TYPE_LCEVC_H265_DECODE_BIN (gst_lcevc_h265_decode_bin_get_type()) +G_DECLARE_FINAL_TYPE (GstLcevcH265DecodeBin, gst_lcevc_h265_decode_bin, + GST, LCEVC_H265_DECODE_BIN, GstLcevcDecodeBin); + +GST_ELEMENT_REGISTER_DECLARE (lcevch265decodebin); + +G_END_DECLS +#endif
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/lcevcdecoder/gstlcevch266decodebin.c
Added
@@ -0,0 +1,73 @@ +/* GStreamer + * Copyright (C) <2025> V-Nova International Limited + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gstlcevch266decodebin.h" + +static GstStaticPadTemplate sink_template = GST_STATIC_PAD_TEMPLATE ("sink", + GST_PAD_SINK, + GST_PAD_ALWAYS, + GST_STATIC_CAPS ("video/x-h266, lcevc = (boolean) true") + ); + +struct _GstLcevcH266DecodeBin +{ + GstLcevcDecodeBin parent; +}; + +#define gst_lcevc_h266_decode_bin_parent_class parent_class +G_DEFINE_TYPE (GstLcevcH266DecodeBin, gst_lcevc_h266_decode_bin, + GST_TYPE_LCEVC_DECODE_BIN); + +// No rank for now owing to autoplugging issues with non-LCEVC streams. 
+// was: GST_RANK_PRIMARY + GST_LCEVC_DECODE_BIN_RANK_OFFSET, +GST_ELEMENT_REGISTER_DEFINE (lcevch266decodebin, "lcevch266decodebin", + GST_RANK_NONE, GST_TYPE_LCEVC_H266_DECODE_BIN); + +static GstCaps * +gst_lcevc_h266_decode_bin_get_base_decoder_sink_caps (GstLcevcDecodeBin * base) +{ + return gst_caps_new_simple ("video/x-h266", + "lcevc", G_TYPE_BOOLEAN, FALSE, NULL); +} + +static void +gst_lcevc_h266_decode_bin_class_init (GstLcevcH266DecodeBinClass * klass) +{ + GstElementClass *element_class = GST_ELEMENT_CLASS (klass); + GstLcevcDecodeBinClass *ldb_class = GST_LCEVC_DECODE_BIN_CLASS (klass); + + gst_element_class_add_static_pad_template (element_class, &sink_template); + + gst_element_class_set_static_metadata (element_class, + "H.266 + MPEG-5 LCEVC Decode Bin", "Codec/Decoder/Video", + "Wrapper bin to decode H266 with LCEVC data.", + "Julian Bouzas <julian.bouzas@collabora.com>"); + + ldb_class->get_base_decoder_sink_caps = + gst_lcevc_h266_decode_bin_get_base_decoder_sink_caps; +} + +static void +gst_lcevc_h266_decode_bin_init (GstLcevcH266DecodeBin * self) +{ +}
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/lcevcdecoder/gstlcevch266decodebin.h
Added
@@ -0,0 +1,34 @@ +/* GStreamer + * Copyright (C) <2025> V-Nova International Limited + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifndef __GST_LCEVC_H266_DECODE_BIN_H__ +#define __GST_LCEVC_H266_DECODE_BIN_H__ + +#include "gstlcevcdecodebin.h" + +G_BEGIN_DECLS + +#define GST_TYPE_LCEVC_H266_DECODE_BIN (gst_lcevc_h266_decode_bin_get_type()) +G_DECLARE_FINAL_TYPE (GstLcevcH266DecodeBin, gst_lcevc_h266_decode_bin, + GST, LCEVC_H266_DECODE_BIN, GstLcevcDecodeBin); + +GST_ELEMENT_REGISTER_DECLARE (lcevch266decodebin); + +G_END_DECLS +#endif
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/lcevcdecoder/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/lcevcdecoder/meson.build
Changed
@@ -4,11 +4,15 @@ 'gstlcevcdec.c', 'gstlcevcdecodebin.c', 'gstlcevch264decodebin.c', + 'gstlcevch265decodebin.c', + 'gstlcevch266decodebin.c', ] lcevcdecoder_headers = [ 'gstlcevcdec.h', 'gstlcevch264decodebin.h', + 'gstlcevch265decodebin.h', + 'gstlcevch266decodebin.h', 'gstlcevcdecutils.h', 'gstlcevcdecodebin.h', @@ -22,7 +26,7 @@ 'lcevcdecoder': pathsep.join(doc_sources) } -lcevc_dec_dep = dependency ('lcevc_dec', required: get_option('lcevcdecoder')) +lcevc_dec_dep = dependency ('lcevc_dec', version: '>= 4.0.1', required: get_option('lcevcdecoder')) if lcevc_dec_dep.found() gstlcevcdecoder = library('gstlcevcdecoder',
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/lcevcdecoder/plugin.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/lcevcdecoder/plugin.c
Changed
@@ -25,6 +25,8 @@ #include "gstlcevcdec.h" #include "gstlcevch264decodebin.h" +#include "gstlcevch265decodebin.h" +#include "gstlcevch266decodebin.h" static gboolean plugin_init (GstPlugin * plugin) @@ -33,6 +35,8 @@ ret |= GST_ELEMENT_REGISTER (lcevcdec, plugin); ret |= GST_ELEMENT_REGISTER (lcevch264decodebin, plugin); + ret |= GST_ELEMENT_REGISTER (lcevch265decodebin, plugin); + ret |= GST_ELEMENT_REGISTER (lcevch266decodebin, plugin); return ret; }
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/lcevcencoder/README.md -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/lcevcencoder/README.md
Changed
@@ -16,8 +16,9 @@ - For example, $INSTALL_DIR for Linux can be `/usr/local`: ``` -$ cp -v include/*.h /usr/local/include -$ cp -v *.so /usr/local/lib +# cp -v include/*.h /usr/local/include +# cp -v *.so /usr/local/lib +# ldconfig ``` - Afterwards, you need to manually create the `lcevc_eil.pc` package config file with this contents: @@ -50,6 +51,7 @@ $ cd GStreamer $ meson setup $BUILD_DIR --pkg-config-path=$INSTALL_DIR/lib/pkgconfig -Dgst-plugins-bad:lcevcencoder=enabled $ ninja -C $BUILD_DIR +$ sudo ninja -C $BUILD_DIR install ``` 3. Run GStreamer LCEVC encoder pipeline:
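The README hunk above instructs the user to hand-write an `lcevc_eil.pc` pkg-config file, but the hunk context cuts off before showing its body. As a rough sketch only — the prefix, `Version` number, and `-l` library name below are assumptions for illustration, not values taken from the SDK or the tarball — such a file generally looks like:

```
# Hypothetical lcevc_eil.pc sketch; adjust prefix, Version and the
# -llcevc_eil name to match the actual V-Nova EIL SDK installation.
prefix=/usr/local
libdir=${prefix}/lib
includedir=${prefix}/include

Name: lcevc_eil
Description: LCEVC Encoder Integration Layer
Version: 1.0.0
Libs: -L${libdir} -llcevc_eil
Cflags: -I${includedir}
```

With the file placed under `$INSTALL_DIR/lib/pkgconfig`, the `--pkg-config-path` argument shown in the README's meson setup command lets the build resolve the `lcevc_eil` dependency.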
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/lcevcencoder/gstlcevch265enc.c
Added
@@ -0,0 +1,81 @@ +/* GStreamer + * Copyright (C) <2025> V-Nova International Limited + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gstlcevch265enc.h" + +#define GST_LCEVC_H265_ENC_CAPS \ + "video/x-h265, " \ + "lcevc = (boolean) true, " \ + "stream-format = (string) byte-stream, " \ + "alignment = (string) au" + +static GstStaticPadTemplate src_template = GST_STATIC_PAD_TEMPLATE ("src", + GST_PAD_SRC, + GST_PAD_ALWAYS, + GST_STATIC_CAPS (GST_LCEVC_H265_ENC_CAPS) + ); + +struct _GstLcevcH265Enc +{ + GstLcevcEncoder parent; +}; + +#define gst_lcevc_h265_enc_parent_class parent_class +G_DEFINE_TYPE (GstLcevcH265Enc, gst_lcevc_h265_enc, GST_TYPE_LCEVC_ENCODER); + +GST_ELEMENT_REGISTER_DEFINE (lcevch265enc, "lcevch265enc", + GST_RANK_PRIMARY, GST_TYPE_LCEVC_H265_ENC); + +static const gchar * +gst_lecevc_h265_enc_get_eil_plugin_name (GstLcevcEncoder * enc) +{ + return "x265"; +} + +static GstCaps * +gst_lecevc_h265_enc_get_output_caps (GstLcevcEncoder * enc) +{ + return gst_static_caps_get (&src_template.static_caps); +} + +static void +gst_lcevc_h265_enc_class_init (GstLcevcH265EncClass * klass) +{ + GstElementClass *element_class = GST_ELEMENT_CLASS (klass); + GstLcevcEncoderClass *le_class = 
GST_LCEVC_ENCODER_CLASS (klass); + + gst_element_class_add_static_pad_template (element_class, &src_template); + + gst_element_class_set_static_metadata (element_class, + "H.265 LCEVC Encoder", "Codec/Encoder/Video", + "Encoder that internally uses EIL plugins to encode LCEVC H.265 video", + "Julian Bouzas <julian.bouzas@collabora.com>"); + + le_class->get_eil_plugin_name = gst_lecevc_h265_enc_get_eil_plugin_name; + le_class->get_output_caps = gst_lecevc_h265_enc_get_output_caps; +} + +static void +gst_lcevc_h265_enc_init (GstLcevcH265Enc * self) +{ +}
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/lcevcencoder/gstlcevch265enc.h
Added
@@ -0,0 +1,34 @@ +/* GStreamer + * Copyright (C) <2025> V-Nova International Limited + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifndef __GST_LCEVC_H265_ENC_H__ +#define __GST_LCEVC_H265_ENC_H__ + +#include "gstlcevcencoder.h" + +G_BEGIN_DECLS + +#define GST_TYPE_LCEVC_H265_ENC (gst_lcevc_h265_enc_get_type()) +G_DECLARE_FINAL_TYPE (GstLcevcH265Enc, gst_lcevc_h265_enc, + GST, LCEVC_H265_ENC, GstLcevcEncoder); + +GST_ELEMENT_REGISTER_DECLARE (lcevch265enc); + +G_END_DECLS +#endif
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/lcevcencoder/gstlcevch266enc.c
Added
@@ -0,0 +1,81 @@ +/* GStreamer + * Copyright (C) <2025> V-Nova International Limited + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gstlcevch266enc.h" + +#define GST_LCEVC_H266_ENC_CAPS \ + "video/x-h266, " \ + "lcevc = (boolean) true, " \ + "stream-format = (string) byte-stream, " \ + "alignment = (string) au" + +static GstStaticPadTemplate src_template = GST_STATIC_PAD_TEMPLATE ("src", + GST_PAD_SRC, + GST_PAD_ALWAYS, + GST_STATIC_CAPS (GST_LCEVC_H266_ENC_CAPS) + ); + +struct _GstLcevcH266Enc +{ + GstLcevcEncoder parent; +}; + +#define gst_lcevc_h266_enc_parent_class parent_class +G_DEFINE_TYPE (GstLcevcH266Enc, gst_lcevc_h266_enc, GST_TYPE_LCEVC_ENCODER); + +GST_ELEMENT_REGISTER_DEFINE (lcevch266enc, "lcevch266enc", + GST_RANK_PRIMARY, GST_TYPE_LCEVC_H266_ENC); + +static const gchar * +gst_lecevc_h266_enc_get_eil_plugin_name (GstLcevcEncoder * enc) +{ + return "vvenc"; +} + +static GstCaps * +gst_lecevc_h266_enc_get_output_caps (GstLcevcEncoder * enc) +{ + return gst_static_caps_get (&src_template.static_caps); +} + +static void +gst_lcevc_h266_enc_class_init (GstLcevcH266EncClass * klass) +{ + GstElementClass *element_class = GST_ELEMENT_CLASS (klass); + GstLcevcEncoderClass *le_class = 
GST_LCEVC_ENCODER_CLASS (klass); + + gst_element_class_add_static_pad_template (element_class, &src_template); + + gst_element_class_set_static_metadata (element_class, + "H.266 LCEVC Encoder", "Codec/Encoder/Video", + "Encoder that internally uses EIL plugins to encode LCEVC H.266 video", + "Julian Bouzas <julian.bouzas@collabora.com>"); + + le_class->get_eil_plugin_name = gst_lecevc_h266_enc_get_eil_plugin_name; + le_class->get_output_caps = gst_lecevc_h266_enc_get_output_caps; +} + +static void +gst_lcevc_h266_enc_init (GstLcevcH266Enc * self) +{ +}
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/lcevcencoder/gstlcevch266enc.h
Added
@@ -0,0 +1,34 @@ +/* GStreamer + * Copyright (C) <2025> V-Nova International Limited + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifndef __GST_LCEVC_H266_ENC_H__ +#define __GST_LCEVC_H266_ENC_H__ + +#include "gstlcevcencoder.h" + +G_BEGIN_DECLS + +#define GST_TYPE_LCEVC_H266_ENC (gst_lcevc_h266_enc_get_type()) +G_DECLARE_FINAL_TYPE (GstLcevcH266Enc, gst_lcevc_h266_enc, + GST, LCEVC_H266_ENC, GstLcevcEncoder); + +GST_ELEMENT_REGISTER_DECLARE (lcevch266enc); + +G_END_DECLS +#endif
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/lcevcencoder/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/lcevcencoder/meson.build
Changed
@@ -3,12 +3,16 @@ 'gstlcevcencoderutils.c', 'gstlcevcencoder.c', 'gstlcevch264enc.c', + 'gstlcevch265enc.c', + 'gstlcevch266enc.c', ] lcevcencoder_headers = [ 'gstlcevcencoder.h', 'gstlcevcencoderutils.h', 'gstlcevch264enc.h', + 'gstlcevch265enc.h', + 'gstlcevch266enc.h', ] doc_sources = [
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/lcevcencoder/plugin.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/lcevcencoder/plugin.c
Changed
@@ -24,6 +24,8 @@ #include <gst/gst.h> #include "gstlcevch264enc.h" +#include "gstlcevch265enc.h" +#include "gstlcevch266enc.h" static gboolean plugin_init (GstPlugin * plugin) @@ -31,6 +33,8 @@ gboolean ret = FALSE; ret |= GST_ELEMENT_REGISTER (lcevch264enc, plugin); + ret |= GST_ELEMENT_REGISTER (lcevch265enc, plugin); + ret |= GST_ELEMENT_REGISTER (lcevch266enc, plugin); return ret; }
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/meson.build
Changed
@@ -6,7 +6,6 @@ subdir('bs2b') subdir('bz2') subdir('chromaprint') -subdir('closedcaption') subdir('codec2json') subdir('colormanagement') subdir('curl') @@ -37,6 +36,7 @@ subdir('mdns') subdir('modplug') subdir('mpeg2enc') +subdir('mpeghdec') subdir('mplex') subdir('musepack') subdir('neon') @@ -69,7 +69,9 @@ subdir('svthevcenc') subdir('svtjpegxs') subdir('teletextdec') +subdir('tflite') subdir('ttml') +subdir('vmaf') subdir('voaacenc') subdir('voamrwbenc') subdir('vulkan') @@ -79,6 +81,7 @@ subdir('webp') subdir('wildmidi') subdir('wpe') +subdir('wpe2') subdir('x265') subdir('zxing') subdir('zbar')
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/mpeg2enc/gstmpeg2enc.cc -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/mpeg2enc/gstmpeg2enc.cc
Changed
@@ -90,7 +90,7 @@ static gboolean gst_mpeg2enc_stop (GstVideoEncoder * video_encoder); static gboolean gst_mpeg2enc_set_format (GstVideoEncoder * video_encoder, GstVideoCodecState * state); -static GstCaps * gst_mpeg2enc_getcaps (GstVideoEncoder * +static GstCaps *gst_mpeg2enc_getcaps (GstVideoEncoder * video_encoder, GstCaps * filter); static GstFlowReturn gst_mpeg2enc_handle_frame (GstVideoEncoder * video_encoder, GstVideoCodecFrame * frame); @@ -104,8 +104,8 @@ guint prop_id, GValue * value, GParamSpec * pspec); static void gst_mpeg2enc_set_property (GObject * object, guint prop_id, const GValue * value, GParamSpec * pspec); -static gboolean gst_mpeg2enc_src_activate_mode (GstPad * pad, GstObject * parent, - GstPadMode mode, gboolean active); +static gboolean gst_mpeg2enc_src_activate_mode (GstPad * pad, + GstObject * parent, GstPadMode mode, gboolean active); static gboolean mpeg2enc_element_init (GstPlugin * plugin); #define gst_mpeg2enc_parent_class parent_class @@ -145,7 +145,8 @@ video_encoder_class->start = GST_DEBUG_FUNCPTR (gst_mpeg2enc_start); video_encoder_class->stop = GST_DEBUG_FUNCPTR (gst_mpeg2enc_stop); - video_encoder_class->handle_frame = GST_DEBUG_FUNCPTR (gst_mpeg2enc_handle_frame); + video_encoder_class->handle_frame = + GST_DEBUG_FUNCPTR (gst_mpeg2enc_handle_frame); video_encoder_class->set_format = GST_DEBUG_FUNCPTR (gst_mpeg2enc_set_format); video_encoder_class->finish = GST_DEBUG_FUNCPTR (gst_mpeg2enc_finish); //video_encoder_class->pre_push = GST_DEBUG_FUNCPTR (gst_mpeg2enc_pre_push); @@ -162,6 +163,10 @@ gst_mpeg2enc_reset (enc); delete enc->options; + if (enc->input_state) { + gst_video_codec_state_unref (enc->input_state); + enc->input_state = NULL; + } g_mutex_clear (&enc->tlock); g_cond_clear (&enc->cond); @@ -219,7 +224,8 @@ /* in case of error'ed ending */ if (enc->pending_frame) { - gst_video_encoder_finish_frame (GST_VIDEO_ENCODER (enc), enc->pending_frame); + gst_video_encoder_finish_frame (GST_VIDEO_ENCODER (enc), + 
enc->pending_frame); enc->pending_frame = NULL; } @@ -244,8 +250,7 @@ } /* start task to create multiplexor and start muxing */ if (G_UNLIKELY (enc->srcresult != GST_FLOW_OK)) { - GST_ELEMENT_ERROR (enc, LIBRARY, INIT, - ("Invalid encoder state"), (NULL)); + GST_ELEMENT_ERROR (enc, LIBRARY, INIT, ("Invalid encoder state"), (NULL)); return FALSE; } @@ -290,7 +295,8 @@ gst_mpeg2enc_add_fps (GstStructure * structure, gint fpss) { GValue list = { 0, }, fps = { - 0,}; + 0, + }; guint n; g_value_init (&list, GST_TYPE_LIST); @@ -323,7 +329,8 @@ } static gboolean -gst_mpeg2enc_set_format (GstVideoEncoder * video_encoder, GstVideoCodecState * state) +gst_mpeg2enc_set_format (GstVideoEncoder * video_encoder, + GstVideoCodecState * state) { GstVideoCodecState *output_state; GstMpeg2enc *enc; @@ -348,11 +355,10 @@ caps = gst_caps_new_simple ("video/mpeg", "systemstream", G_TYPE_BOOLEAN, FALSE, - "mpegversion", G_TYPE_INT, (enc->options->mpeg == 1)?1:2, NULL); + "mpegversion", G_TYPE_INT, (enc->options->mpeg == 1) ? 
1 : 2, NULL); output_state = - gst_video_encoder_set_output_state (video_encoder, - caps, state); + gst_video_encoder_set_output_state (video_encoder, caps, state); gst_video_codec_state_unref (output_state); gst_video_encoder_negotiate (GST_VIDEO_ENCODER (enc)); @@ -388,7 +394,8 @@ { GValue list = { 0, } , val = { - 0,}; + 0, + }; g_value_init (&list, GST_TYPE_LIST); g_value_init (&val, G_TYPE_INT); @@ -456,11 +463,14 @@ case 3: case 8: case 9: - default: - caps = gst_caps_copy (gst_pad_get_pad_template_caps (video_encoder->sinkpad)); + default:{ + GstCaps *tcaps = gst_pad_get_pad_template_caps (video_encoder->sinkpad); + caps = gst_caps_copy (tcaps); + gst_caps_unref (tcaps); gst_mpeg2enc_add_fps (gst_caps_get_structure (caps, 0), gst_mpeg2enc_get_fps (enc)); break; + } } return caps; @@ -496,14 +506,18 @@ switch (GST_EVENT_TYPE (event)) { case GST_EVENT_FLUSH_START: /* forward event */ - result = GST_VIDEO_ENCODER_CLASS (parent_class)->sink_event (video_encoder, event); + result = + GST_VIDEO_ENCODER_CLASS (parent_class)->sink_event (video_encoder, + event); /* no special action as there is not much to flush; * neither is it possible to halt the mpeg encoding loop */ goto done; case GST_EVENT_FLUSH_STOP: /* forward event */ - result = GST_VIDEO_ENCODER_CLASS (parent_class)->sink_event (video_encoder, event); + result = + GST_VIDEO_ENCODER_CLASS (parent_class)->sink_event (video_encoder, + event); if (!result) goto done; @@ -535,7 +549,8 @@ break; } - result = GST_VIDEO_ENCODER_CLASS (parent_class)->sink_event (video_encoder, event); + result = + GST_VIDEO_ENCODER_CLASS (parent_class)->sink_event (video_encoder, event); done: return result; @@ -562,10 +577,13 @@ gboolean ret; GstClockTime latency; GstVideoInfo *info = &enc->input_state->info; + GstCaps *caps; /* create new encoder with these settings */ - enc->encoder = new GstMpeg2Encoder (enc->options, GST_ELEMENT (video_encoder), - gst_pad_get_current_caps(video_encoder->sinkpad)); + caps = 
gst_pad_get_current_caps (video_encoder->sinkpad); + enc->encoder = + new GstMpeg2Encoder (enc->options, GST_ELEMENT (video_encoder), caps); + gst_clear_caps (&caps); ret = enc->encoder->setup (); @@ -587,7 +605,8 @@ 1 * GST_SECOND, 25); } else { latency = gst_util_uint64_scale (enc->options->max_GOP_size + 5, - GST_VIDEO_INFO_FPS_D (info) * GST_SECOND, GST_VIDEO_INFO_FPS_N (info)); + GST_VIDEO_INFO_FPS_D (info) * GST_SECOND, + GST_VIDEO_INFO_FPS_N (info)); } gst_video_encoder_set_latency (video_encoder, latency, latency); @@ -639,7 +658,7 @@ encoder: { GST_ELEMENT_ERROR (enc, CORE, NEGOTIATION, (NULL), - ("encoder setup failed")); + ("encoder setup failed")); if (enc->encoder) { delete enc->encoder; @@ -662,7 +681,8 @@ } static GstFlowReturn -gst_mpeg2enc_handle_frame (GstVideoEncoder *video_encoder, GstVideoCodecFrame *frame) +gst_mpeg2enc_handle_frame (GstVideoEncoder * video_encoder, + GstVideoCodecFrame * frame) { GstMpeg2enc *enc = GST_MPEG2ENC (video_encoder); @@ -696,7 +716,8 @@ if (!enc->started) { GST_DEBUG_OBJECT (video_encoder, "handle_frame: START task"); - gst_pad_start_task (video_encoder->srcpad, (GstTaskFunction) gst_mpeg2enc_loop, enc, NULL); + gst_pad_start_task (video_encoder->srcpad, + (GstTaskFunction) gst_mpeg2enc_loop, enc, NULL); enc->started = TRUE; }
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/mpeghdec
Added
+(directory)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/mpeghdec/gstmpeghdec.c
Added
@@ -0,0 +1,900 @@ +/* + * Copyright (C) 2025 Fraunhofer Institute for Integrated Circuits IIS + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gstmpeghdec.h" + +#include <gst/pbutils/pbutils.h> +#include <string.h> + +#define MAX_NUM_OUTPUT_CHANNELS 24 +#define MAX_AUDIO_FRAME_SIZE 3072 +#define MAX_OUTBUF_SIZE (MAX_NUM_OUTPUT_CHANNELS * MAX_AUDIO_FRAME_SIZE) + +typedef struct +{ + gint channels; + GstAudioChannelPosition positions[24]; +} GstMpeghChannelLayout; + +static const GstMpeghChannelLayout channel_layouts[] = { + /* CICP 1: Mono */ + {1, {GST_AUDIO_CHANNEL_POSITION_MONO}}, + /* CICP 2: Stereo */ + {2, { + GST_AUDIO_CHANNEL_POSITION_FRONT_LEFT, + GST_AUDIO_CHANNEL_POSITION_FRONT_RIGHT, + }}, + /* CICP 3: */ + {3, { + GST_AUDIO_CHANNEL_POSITION_FRONT_LEFT, + GST_AUDIO_CHANNEL_POSITION_FRONT_RIGHT, + GST_AUDIO_CHANNEL_POSITION_FRONT_CENTER, + }}, + /* CICP 4: */ + {4, { + GST_AUDIO_CHANNEL_POSITION_FRONT_LEFT, + GST_AUDIO_CHANNEL_POSITION_FRONT_RIGHT, + GST_AUDIO_CHANNEL_POSITION_FRONT_CENTER, + GST_AUDIO_CHANNEL_POSITION_REAR_CENTER, + }}, + /* CICP 5: */ + {5, { + GST_AUDIO_CHANNEL_POSITION_FRONT_LEFT, + GST_AUDIO_CHANNEL_POSITION_FRONT_RIGHT, + GST_AUDIO_CHANNEL_POSITION_FRONT_CENTER, + 
GST_AUDIO_CHANNEL_POSITION_REAR_LEFT, + GST_AUDIO_CHANNEL_POSITION_REAR_RIGHT, + }}, + /* CICP 6: */ + {6, { + GST_AUDIO_CHANNEL_POSITION_FRONT_LEFT, + GST_AUDIO_CHANNEL_POSITION_FRONT_RIGHT, + GST_AUDIO_CHANNEL_POSITION_FRONT_CENTER, + GST_AUDIO_CHANNEL_POSITION_LFE1, + GST_AUDIO_CHANNEL_POSITION_REAR_LEFT, + GST_AUDIO_CHANNEL_POSITION_REAR_RIGHT, + }}, + /* CICP 7: */ + {8, { + GST_AUDIO_CHANNEL_POSITION_FRONT_LEFT, + GST_AUDIO_CHANNEL_POSITION_FRONT_RIGHT, + GST_AUDIO_CHANNEL_POSITION_FRONT_CENTER, + GST_AUDIO_CHANNEL_POSITION_LFE1, + GST_AUDIO_CHANNEL_POSITION_REAR_LEFT, + GST_AUDIO_CHANNEL_POSITION_REAR_RIGHT, + GST_AUDIO_CHANNEL_POSITION_WIDE_LEFT, + GST_AUDIO_CHANNEL_POSITION_WIDE_RIGHT, + }}, + /* CICP 8: not defined */ + {0, { + }}, + /* CICP 9: */ + {3, { + GST_AUDIO_CHANNEL_POSITION_FRONT_LEFT, + GST_AUDIO_CHANNEL_POSITION_FRONT_RIGHT, + GST_AUDIO_CHANNEL_POSITION_REAR_CENTER, + }}, + /* CICP 10: */ + {4, { + GST_AUDIO_CHANNEL_POSITION_FRONT_LEFT, + GST_AUDIO_CHANNEL_POSITION_FRONT_RIGHT, + GST_AUDIO_CHANNEL_POSITION_REAR_LEFT, + GST_AUDIO_CHANNEL_POSITION_REAR_RIGHT, + }}, + /* CICP 11: */ + {7, { + GST_AUDIO_CHANNEL_POSITION_FRONT_LEFT, + GST_AUDIO_CHANNEL_POSITION_FRONT_RIGHT, + GST_AUDIO_CHANNEL_POSITION_FRONT_CENTER, + GST_AUDIO_CHANNEL_POSITION_LFE1, + GST_AUDIO_CHANNEL_POSITION_REAR_LEFT, + GST_AUDIO_CHANNEL_POSITION_REAR_RIGHT, + GST_AUDIO_CHANNEL_POSITION_REAR_CENTER, + }}, + /* CICP 12: */ + {8, { + GST_AUDIO_CHANNEL_POSITION_FRONT_LEFT, + GST_AUDIO_CHANNEL_POSITION_FRONT_RIGHT, + GST_AUDIO_CHANNEL_POSITION_FRONT_CENTER, + GST_AUDIO_CHANNEL_POSITION_LFE1, + GST_AUDIO_CHANNEL_POSITION_SIDE_LEFT, + GST_AUDIO_CHANNEL_POSITION_SIDE_RIGHT, + GST_AUDIO_CHANNEL_POSITION_REAR_LEFT, + GST_AUDIO_CHANNEL_POSITION_REAR_RIGHT, + }}, + /* CICP 13: */ + {24, { + GST_AUDIO_CHANNEL_POSITION_WIDE_LEFT, + GST_AUDIO_CHANNEL_POSITION_WIDE_RIGHT, + GST_AUDIO_CHANNEL_POSITION_FRONT_CENTER, + GST_AUDIO_CHANNEL_POSITION_LFE1, + GST_AUDIO_CHANNEL_POSITION_REAR_LEFT, + 
GST_AUDIO_CHANNEL_POSITION_REAR_RIGHT, + GST_AUDIO_CHANNEL_POSITION_FRONT_LEFT, + GST_AUDIO_CHANNEL_POSITION_FRONT_RIGHT, + GST_AUDIO_CHANNEL_POSITION_REAR_CENTER, + GST_AUDIO_CHANNEL_POSITION_LFE2, + GST_AUDIO_CHANNEL_POSITION_SIDE_LEFT, + GST_AUDIO_CHANNEL_POSITION_SIDE_RIGHT, + GST_AUDIO_CHANNEL_POSITION_TOP_FRONT_LEFT, + GST_AUDIO_CHANNEL_POSITION_TOP_FRONT_RIGHT, + GST_AUDIO_CHANNEL_POSITION_TOP_FRONT_CENTER, + GST_AUDIO_CHANNEL_POSITION_TOP_CENTER, + GST_AUDIO_CHANNEL_POSITION_TOP_REAR_LEFT, + GST_AUDIO_CHANNEL_POSITION_TOP_REAR_RIGHT, + GST_AUDIO_CHANNEL_POSITION_TOP_SIDE_LEFT, + GST_AUDIO_CHANNEL_POSITION_TOP_SIDE_RIGHT, + GST_AUDIO_CHANNEL_POSITION_TOP_REAR_CENTER, + GST_AUDIO_CHANNEL_POSITION_BOTTOM_FRONT_CENTER, + GST_AUDIO_CHANNEL_POSITION_BOTTOM_FRONT_LEFT, + GST_AUDIO_CHANNEL_POSITION_BOTTOM_FRONT_RIGHT, + }}, + /* CICP 14: */ + {8, { + GST_AUDIO_CHANNEL_POSITION_FRONT_LEFT, + GST_AUDIO_CHANNEL_POSITION_FRONT_RIGHT, + GST_AUDIO_CHANNEL_POSITION_FRONT_CENTER, + GST_AUDIO_CHANNEL_POSITION_LFE1, + GST_AUDIO_CHANNEL_POSITION_REAR_LEFT, + GST_AUDIO_CHANNEL_POSITION_REAR_RIGHT, + GST_AUDIO_CHANNEL_POSITION_TOP_FRONT_LEFT, + GST_AUDIO_CHANNEL_POSITION_TOP_FRONT_RIGHT, + }}, + /* CICP 15: */ + {12, { + GST_AUDIO_CHANNEL_POSITION_FRONT_LEFT, + GST_AUDIO_CHANNEL_POSITION_FRONT_RIGHT, + GST_AUDIO_CHANNEL_POSITION_FRONT_CENTER, + GST_AUDIO_CHANNEL_POSITION_LFE1, + GST_AUDIO_CHANNEL_POSITION_REAR_LEFT, + GST_AUDIO_CHANNEL_POSITION_REAR_RIGHT, + GST_AUDIO_CHANNEL_POSITION_LFE2, + GST_AUDIO_CHANNEL_POSITION_SIDE_LEFT, + GST_AUDIO_CHANNEL_POSITION_SIDE_RIGHT, + GST_AUDIO_CHANNEL_POSITION_TOP_FRONT_LEFT, + GST_AUDIO_CHANNEL_POSITION_TOP_FRONT_RIGHT, + GST_AUDIO_CHANNEL_POSITION_TOP_REAR_CENTER, + }}, + /* CICP 16: */ + {10, { + GST_AUDIO_CHANNEL_POSITION_FRONT_LEFT, + GST_AUDIO_CHANNEL_POSITION_FRONT_RIGHT, + GST_AUDIO_CHANNEL_POSITION_FRONT_CENTER, + GST_AUDIO_CHANNEL_POSITION_LFE1, + GST_AUDIO_CHANNEL_POSITION_REAR_LEFT, + GST_AUDIO_CHANNEL_POSITION_REAR_RIGHT, + 
GST_AUDIO_CHANNEL_POSITION_TOP_FRONT_LEFT, + GST_AUDIO_CHANNEL_POSITION_TOP_FRONT_RIGHT, + GST_AUDIO_CHANNEL_POSITION_TOP_REAR_LEFT, + GST_AUDIO_CHANNEL_POSITION_TOP_REAR_RIGHT, + }}, + /* CICP 17: */ + {12, { + GST_AUDIO_CHANNEL_POSITION_FRONT_LEFT, + GST_AUDIO_CHANNEL_POSITION_FRONT_RIGHT, + GST_AUDIO_CHANNEL_POSITION_FRONT_CENTER, + GST_AUDIO_CHANNEL_POSITION_LFE1, + GST_AUDIO_CHANNEL_POSITION_REAR_LEFT, + GST_AUDIO_CHANNEL_POSITION_REAR_RIGHT, + GST_AUDIO_CHANNEL_POSITION_TOP_FRONT_LEFT, + GST_AUDIO_CHANNEL_POSITION_TOP_FRONT_RIGHT, + GST_AUDIO_CHANNEL_POSITION_TOP_FRONT_CENTER, + GST_AUDIO_CHANNEL_POSITION_TOP_REAR_LEFT, + GST_AUDIO_CHANNEL_POSITION_TOP_REAR_RIGHT, + GST_AUDIO_CHANNEL_POSITION_TOP_CENTER, + }}, + /* CICP 18: */ + {14, { + GST_AUDIO_CHANNEL_POSITION_FRONT_LEFT, + GST_AUDIO_CHANNEL_POSITION_FRONT_RIGHT, + GST_AUDIO_CHANNEL_POSITION_FRONT_CENTER, + GST_AUDIO_CHANNEL_POSITION_LFE1, + GST_AUDIO_CHANNEL_POSITION_SIDE_LEFT, + GST_AUDIO_CHANNEL_POSITION_SIDE_RIGHT, + GST_AUDIO_CHANNEL_POSITION_REAR_LEFT, + GST_AUDIO_CHANNEL_POSITION_REAR_RIGHT, + GST_AUDIO_CHANNEL_POSITION_TOP_FRONT_LEFT, + GST_AUDIO_CHANNEL_POSITION_TOP_FRONT_RIGHT, + GST_AUDIO_CHANNEL_POSITION_TOP_FRONT_CENTER, + GST_AUDIO_CHANNEL_POSITION_TOP_SIDE_LEFT, + GST_AUDIO_CHANNEL_POSITION_TOP_SIDE_RIGHT, + GST_AUDIO_CHANNEL_POSITION_TOP_CENTER, + }}, + /* CICP 19: */ + {12, { + GST_AUDIO_CHANNEL_POSITION_FRONT_LEFT, + GST_AUDIO_CHANNEL_POSITION_FRONT_RIGHT, + GST_AUDIO_CHANNEL_POSITION_FRONT_CENTER, + GST_AUDIO_CHANNEL_POSITION_LFE1, + GST_AUDIO_CHANNEL_POSITION_REAR_LEFT, + GST_AUDIO_CHANNEL_POSITION_REAR_RIGHT, + GST_AUDIO_CHANNEL_POSITION_SIDE_LEFT, + GST_AUDIO_CHANNEL_POSITION_SIDE_RIGHT, + GST_AUDIO_CHANNEL_POSITION_TOP_FRONT_LEFT, + GST_AUDIO_CHANNEL_POSITION_TOP_FRONT_RIGHT, + GST_AUDIO_CHANNEL_POSITION_TOP_REAR_LEFT, + GST_AUDIO_CHANNEL_POSITION_TOP_REAR_RIGHT, + }}, + /* CICP 20: */ + {14, { + GST_AUDIO_CHANNEL_POSITION_FRONT_LEFT, + GST_AUDIO_CHANNEL_POSITION_FRONT_RIGHT, + 
GST_AUDIO_CHANNEL_POSITION_FRONT_CENTER, + GST_AUDIO_CHANNEL_POSITION_LFE1, + GST_AUDIO_CHANNEL_POSITION_REAR_LEFT, + GST_AUDIO_CHANNEL_POSITION_REAR_RIGHT, + GST_AUDIO_CHANNEL_POSITION_SIDE_LEFT, + GST_AUDIO_CHANNEL_POSITION_SIDE_RIGHT, + GST_AUDIO_CHANNEL_POSITION_TOP_FRONT_LEFT, + GST_AUDIO_CHANNEL_POSITION_TOP_FRONT_RIGHT, + GST_AUDIO_CHANNEL_POSITION_TOP_REAR_LEFT, + GST_AUDIO_CHANNEL_POSITION_TOP_REAR_RIGHT, + GST_AUDIO_CHANNEL_POSITION_WIDE_LEFT, + GST_AUDIO_CHANNEL_POSITION_WIDE_RIGHT, + }}, +}; + +enum +{ + PROP_0, + PROP_MPEGH_TARGET_LAYOUT, + PROP_MPEGH_TARGET_REFERENCE_LEVEL, + PROP_MPEGH_DRC_EFFECT_TYPE, + PROP_MPEGH_DRC_ATTENUATION_FACTOR, + PROP_MPEGH_DRC_BOOST_FACTOR, + PROP_MPEGH_ALBUM_MODE +}; + +#define PROP_DEFAULT_MPEGH_TARGET_LAYOUT (6) +#define PROP_DEFAULT_MPEGH_TARGET_REFERENCE_LEVEL (-24.0) +#define PROP_DEFAULT_MPEGH_DRC_EFFECT_TYPE (GST_MPEGH_DRC_EFFECT_TYPE_GENERAL) +#define PROP_DEFAULT_MPEGH_DRC_ATTENUATION_FACTOR (1.0) +#define PROP_DEFAULT_MPEGH_DRC_BOOST_FACTOR (1.0) +#define PROP_DEFAULT_MPEGH_ALBUM_MODE (FALSE) + +/* Notes on MPEG-D DRC + * + * Suggested Target Reference Level + Effect Types + default based on device classes: + * Mobile Device: -16 LKFS, 2, 3, default: 3 + * TV: -24 LKFS, -1, 1, 2, 6, default: 6 + * AVR: -31 LKFS. 
-1, 1, 2, 6, default: 6 + */ + +static GstStaticPadTemplate sink_template = GST_STATIC_PAD_TEMPLATE ("sink", + GST_PAD_SINK, + GST_PAD_ALWAYS, + GST_STATIC_CAPS ("audio/x-mpeg-h, " + "stream-format = (string) { mhas, raw }, " + "framed = (boolean) true, " + "stream-type = (string) single, " + "profile = (string) baseline, " + "level = (int) { 1, 2, 3, 4 }, " "rate = (int) 48000") + ); + +static GstStaticPadTemplate src_template = GST_STATIC_PAD_TEMPLATE ("src", + GST_PAD_SRC, + GST_PAD_ALWAYS, + GST_STATIC_CAPS ("audio/x-raw, " + "format=(string) " GST_AUDIO_NE (S32) ", " + "layout=(string) interleaved, " + "channels = (int) [ 1, 24 ], " "rate = (int) 48000") + ); + +GST_DEBUG_CATEGORY_STATIC (gst_mpeghdec_debug); +#define GST_CAT_DEFAULT gst_mpeghdec_debug + +static void gst_mpeghdec_set_property (GObject * object, guint prop_id, + const GValue * value, GParamSpec * pspec); +static void gst_mpeghdec_get_property (GObject * object, guint prop_id, + GValue * value, GParamSpec * pspec); + +static gboolean gst_mpeghdec_start (GstAudioDecoder * dec); +static gboolean gst_mpeghdec_stop (GstAudioDecoder * dec); +static gboolean gst_mpeghdec_set_format (GstAudioDecoder * dec, GstCaps * caps); +static GstFlowReturn gst_mpeghdec_handle_frame (GstAudioDecoder * dec, + GstBuffer * inbuf); +static void gst_mpeghdec_flush (GstAudioDecoder * dec, gboolean hard); + +G_DEFINE_TYPE (GstMpeghDec, gst_mpeghdec, GST_TYPE_AUDIO_DECODER); + +#define GST_MPEGH_EFFECT_TYPE (gst_mpegh_effect_type_get_type()) +static GType +gst_mpegh_effect_type_get_type (void) +{ + static GType mpegh_drc_effect_type = 0; + static const GEnumValue drc_effect_types[] = { + {GST_MPEGH_DRC_EFFECT_TYPE_OFF, "Off", "off"}, + {GST_MPEGH_DRC_EFFECT_TYPE_NONE, "None", "none"}, + {GST_MPEGH_DRC_EFFECT_TYPE_NIGHT, "Late night", "night"}, + {GST_MPEGH_DRC_EFFECT_TYPE_NOISY, "Noisy environment", "noisy"}, + {GST_MPEGH_DRC_EFFECT_TYPE_LIMITED, "Limited playback range", "limited"}, + {GST_MPEGH_DRC_EFFECT_TYPE_LOWLEVEL, "Low
playback level", "lowlevel"}, + {GST_MPEGH_DRC_EFFECT_TYPE_DIALOG, "Dialog enhancement", "dialog"}, + {GST_MPEGH_DRC_EFFECT_TYPE_GENERAL, "General compression", "general"}, + {0, NULL, NULL} + }; + + if (!mpegh_drc_effect_type) { + mpegh_drc_effect_type = + g_enum_register_static ("GstMpeghEffectType", drc_effect_types); + } + return mpegh_drc_effect_type; +} + +static void +gst_mpeghdec_set_property (GObject * object, guint prop_id, + const GValue * value, GParamSpec * pspec) +{ + GstMpeghDec *self = GST_MPEGHDEC (object); + MPEGH_DECODER_ERROR err; + GST_DEBUG_OBJECT (self, "set_property: property_id = %d", prop_id); + switch (prop_id) { + case PROP_MPEGH_TARGET_LAYOUT: + GST_OBJECT_LOCK (object); + self->target_layout = g_value_get_int (value); + GST_OBJECT_UNLOCK (object); + break; + case PROP_MPEGH_TARGET_REFERENCE_LEVEL: + GST_OBJECT_LOCK (object); + self->target_reference_level = g_value_get_float (value); + /* If decoder is already initialized, also set on API directly to switch during runtime */ + if (self->dec) { + /* Note: mpeghdec API needs the loudness value mapped to an int 40...127 */ + gint loudness = self->target_reference_level * -4; + err = + mpeghdecoder_setParam (self->dec, + MPEGH_DEC_PARAM_TARGET_REFERENCE_LEVEL, loudness); + if (err != MPEGH_DEC_OK) { + GST_ERROR_OBJECT (self, + "Failed to set drc reference level %d with error: %d", loudness, + err); + } + } + GST_OBJECT_UNLOCK (object); + break; + case PROP_MPEGH_DRC_EFFECT_TYPE: + GST_OBJECT_LOCK (object); + self->drc_effect_type = g_value_get_enum (value); + /* If decoder is already initialized, also set on API directly to switch during runtime */ + if (self->dec) { + err = + mpeghdecoder_setParam (self->dec, MPEGH_DEC_PARAM_EFFECT_TYPE, + self->drc_effect_type); + if (err != MPEGH_DEC_OK) { + GST_ERROR_OBJECT (self, + "Failed to set drc effect type %d with error: %d", + self->drc_effect_type, err); + } + } + GST_OBJECT_UNLOCK (object); + break; + case PROP_MPEGH_DRC_ATTENUATION_FACTOR: + 
GST_OBJECT_LOCK (object); + self->drc_attenuation_factor = g_value_get_float (value); + /* If decoder is already initialized, also set on API directly to switch during runtime */ + if (self->dec) { + /* Note: FDK API needs the attenuation factor mapped to an int 0...127 */ + gint attenuation = self->drc_attenuation_factor * 127; + err = + mpeghdecoder_setParam (self->dec, + MPEGH_DEC_PARAM_ATTENUATION_FACTOR, attenuation); + if (err != MPEGH_DEC_OK) { + GST_ERROR_OBJECT (self, + "Failed to set drc attenuation factor %d with error: %d", + attenuation, err); + } + } + GST_OBJECT_UNLOCK (object); + break; + case PROP_MPEGH_DRC_BOOST_FACTOR: + GST_OBJECT_LOCK (object); + self->drc_boost_factor = g_value_get_float (value); + /* If decoder is already initialized, also set on API directly to switch during runtime */ + if (self->dec) { + /* Note: FDK API needs the boost factor mapped to an int 0...127 */ + gint boost = self->drc_boost_factor * 127; + err = + mpeghdecoder_setParam (self->dec, MPEGH_DEC_PARAM_BOOST_FACTOR, + boost); + if (err != MPEGH_DEC_OK) { + GST_ERROR_OBJECT (self, + "Failed to set drc boost factor %d with error: %d", boost, err); + } + } + GST_OBJECT_UNLOCK (object); + break; + case PROP_MPEGH_ALBUM_MODE: + GST_OBJECT_LOCK (object); + self->album_mode = g_value_get_boolean (value); + /* If decoder is already initialized, also set on API directly to switch during runtime */ + if (self->dec) { + gint album_mode = self->album_mode ? 
1 : 0; + err = + mpeghdecoder_setParam (self->dec, MPEGH_DEC_PARAM_ALBUM_MODE, + album_mode); + if (err != MPEGH_DEC_OK) { + GST_ERROR_OBJECT (self, + "Failed to set album mode %d with error: %d", album_mode, err); + } + } + GST_OBJECT_UNLOCK (object); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static void +gst_mpeghdec_get_property (GObject * object, guint prop_id, GValue * value, + GParamSpec * pspec) +{ + GstMpeghDec *self = GST_MPEGHDEC (object); + GST_DEBUG_OBJECT (self, "get_property: property_id = %d", prop_id); + switch (prop_id) { + case PROP_MPEGH_TARGET_LAYOUT: + GST_OBJECT_LOCK (object); + g_value_set_int (value, self->target_layout); + GST_OBJECT_UNLOCK (object); + break; + case PROP_MPEGH_TARGET_REFERENCE_LEVEL: + GST_OBJECT_LOCK (object); + g_value_set_float (value, self->target_reference_level); + GST_OBJECT_UNLOCK (object); + break; + case PROP_MPEGH_DRC_EFFECT_TYPE: + GST_OBJECT_LOCK (object); + g_value_set_enum (value, self->drc_effect_type); + GST_OBJECT_UNLOCK (object); + break; + case PROP_MPEGH_DRC_ATTENUATION_FACTOR: + GST_OBJECT_LOCK (object); + g_value_set_float (value, self->drc_attenuation_factor); + GST_OBJECT_UNLOCK (object); + break; + case PROP_MPEGH_DRC_BOOST_FACTOR: + GST_OBJECT_LOCK (object); + g_value_set_float (value, self->drc_boost_factor); + GST_OBJECT_UNLOCK (object); + break; + case PROP_MPEGH_ALBUM_MODE: + GST_OBJECT_LOCK (object); + g_value_set_boolean (value, self->album_mode); + GST_OBJECT_UNLOCK (object); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static gboolean +gst_mpeghdec_start (GstAudioDecoder * dec) +{ + GstMpeghDec *self = GST_MPEGHDEC (dec); + GST_DEBUG_OBJECT (self, "start"); + + self->samplerate = 0; + self->channels = 0; + return TRUE; +} + +static gboolean +gst_mpeghdec_stop (GstAudioDecoder * dec) +{ + GstMpeghDec *self = GST_MPEGHDEC (dec); + GST_DEBUG_OBJECT (self, "stop"); + + if 
(self->dec) + mpeghdecoder_destroy (self->dec); + self->dec = NULL; + return TRUE; +} + +static gboolean +gst_mpeghdec_set_format (GstAudioDecoder * dec, GstCaps * caps) +{ + GstMpeghDec *self = GST_MPEGHDEC (dec); + GST_DEBUG_OBJECT (self, "set_format"); + + gboolean ret = TRUE; + gboolean is_raw = FALSE; + GstStructure *s; + MPEGH_DECODER_ERROR err; + + if (self->dec) { + /* drain */ + gst_mpeghdec_handle_frame (dec, NULL); + mpeghdecoder_destroy (self->dec); + self->dec = NULL; + } + + s = gst_caps_get_structure (caps, 0); + const gchar *stream_format = gst_structure_get_string (s, "stream-format"); + if (strcmp (stream_format, "raw") == 0) { + is_raw = TRUE; + } else if (strcmp (stream_format, "mhas") == 0) { + is_raw = FALSE; + } else { + g_assert_not_reached (); + } + + GST_OBJECT_LOCK (dec); + int target_layout = self->target_layout; + GST_OBJECT_UNLOCK (dec); + self->dec = mpeghdecoder_init (target_layout); + if (!self->dec) { + GST_ERROR_OBJECT (self, + "mpeghdecoder_init FAILED! Maybe unsupported target layout(%d)", + target_layout); + ret = FALSE; + goto out; + } + + if (is_raw) { + GstBuffer *codec_data = NULL; + GstMapInfo map; + const guint8 *data; + guint size; + gst_structure_get (s, "codec_data", GST_TYPE_BUFFER, &codec_data, NULL); + if (!codec_data) { + GST_ERROR_OBJECT (self, "MHA1 without codec_data not supported"); + ret = FALSE; + goto out; + } + + gst_buffer_map (codec_data, &map, GST_MAP_READ); + data = map.data; + size = map.size; + err = mpeghdecoder_setMhaConfig (self->dec, data, size); + if (err != MPEGH_DEC_OK) { + gst_buffer_unmap (codec_data, &map); + gst_buffer_unref (codec_data); + GST_ERROR_OBJECT (self, "Invalid codec_data: %d", err); + ret = FALSE; + goto out; + } + gst_buffer_unmap (codec_data, &map); + gst_buffer_unref (codec_data); + } + + /* Configure default target reference level parameter. 
*/ + /* Note: FDK API needs the loudness value mapped to a int 40...127 */ + GST_OBJECT_LOCK (dec); + gint loudness = self->target_reference_level * -4; + GST_OBJECT_UNLOCK (dec); + err = + mpeghdecoder_setParam (self->dec, MPEGH_DEC_PARAM_TARGET_REFERENCE_LEVEL, + loudness); + if (err != MPEGH_DEC_OK) { + GST_ERROR_OBJECT (self, + "Failed to set drc reference level %d with error: %d", loudness, err); + ret = FALSE; + goto out; + } + + /* Configure default drc target effect type parameter (only applied for xHE-AAC) */ + GST_OBJECT_LOCK (dec); + int drc_effect_type = self->drc_effect_type; + GST_OBJECT_UNLOCK (dec); + err = + mpeghdecoder_setParam (self->dec, MPEGH_DEC_PARAM_EFFECT_TYPE, + drc_effect_type); + if (err != MPEGH_DEC_OK) { + GST_ERROR_OBJECT (self, "Failed to set drc effect type %d with error: %d", + drc_effect_type, err); + ret = FALSE; + goto out; + } + + /* Configure default drc attenuation factor */ + /* Note: FDK API needs the attenuation factor mapped to an int 0...127 */ + GST_OBJECT_LOCK (dec); + gint attenuation = self->drc_attenuation_factor * 127; + GST_OBJECT_UNLOCK (dec); + err = + mpeghdecoder_setParam (self->dec, MPEGH_DEC_PARAM_ATTENUATION_FACTOR, + attenuation); + if (err != MPEGH_DEC_OK) { + GST_ERROR_OBJECT (self, + "Failed to set drc attenuation factor %d with error: %d", attenuation, + err); + ret = FALSE; + goto out; + } + + /* Configure default drc boost factor */ + /* Note: FDK API needs the boost factor mapped to an int 0...127 */ + GST_OBJECT_LOCK (dec); + gint boost = self->drc_boost_factor * 127; + GST_OBJECT_UNLOCK (dec); + err = mpeghdecoder_setParam (self->dec, MPEGH_DEC_PARAM_BOOST_FACTOR, boost); + if (err != MPEGH_DEC_OK) { + GST_ERROR_OBJECT (self, "Failed to set drc boost factor %d with error: %d", + boost, err); + ret = FALSE; + goto out; + } + + /* Configure default album mode */ + GST_OBJECT_LOCK (dec); + gint album_mode = self->album_mode ? 
1 : 0; + GST_OBJECT_UNLOCK (dec); + err = + mpeghdecoder_setParam (self->dec, MPEGH_DEC_PARAM_ALBUM_MODE, album_mode); + if (err != MPEGH_DEC_OK) { + GST_ERROR_OBJECT (self, "Failed to set drc album mode %d with error: %d", + album_mode, err); + ret = FALSE; + goto out; + } + +out: + return ret; +} + +static gboolean +gst_mpeghdec_map_channels (GstMpeghDec * self, int channels) +{ + GST_OBJECT_LOCK (self); + int target_layout = self->target_layout; + GST_OBJECT_UNLOCK (self); + if (channel_layouts[target_layout - 1].channels == 0 + || channels != channel_layouts[target_layout - 1].channels) { + return FALSE; + } + + memset (self->positions, 0, sizeof (self->positions)); + memcpy (self->positions, channel_layouts[target_layout - 1].positions, + channels * sizeof (GstAudioChannelPosition)); + return TRUE; +} + +static gboolean +gst_mpeghdec_update_info (GstMpeghDec * self, int channels, int samplerate) +{ + if (!gst_mpeghdec_map_channels (self, channels)) { + GST_ERROR_OBJECT (self, "Failed to get channel positions"); + return FALSE; + } + + if (self->channels != channels || self->samplerate != samplerate + || memcmp (self->mapped_positions, self->positions, + sizeof (self->positions)) != 0) { + self->channels = channels; + self->samplerate = samplerate; + + memcpy (self->mapped_positions, self->positions, sizeof (self->positions)); + if (!gst_audio_channel_positions_to_valid_order (self->mapped_positions, + self->channels)) { + GST_ERROR_OBJECT (self, "Failed to reorder channels"); + return FALSE; + } + + gst_audio_info_set_format (&self->info, GST_AUDIO_FORMAT_S32, + self->samplerate, self->channels, self->mapped_positions); + + if (!gst_audio_decoder_set_output_format (GST_AUDIO_DECODER (self), + &self->info)) { + GST_ERROR_OBJECT (self, "Failed to set output format"); + return FALSE; + } + + self->need_reorder = memcmp (self->mapped_positions, self->positions, + sizeof (self->positions)) != 0; + } + return TRUE; +} + +static GstFlowReturn +gst_mpeghdec_handle_frame
(GstAudioDecoder * dec, GstBuffer * inbuf) +{ + GstMpeghDec *self = GST_MPEGHDEC (dec); + GST_DEBUG_OBJECT (self, "handle_frame"); + GstMapInfo imap; + GstFlowReturn ret = GST_FLOW_OK; + GstBuffer *outbuf; + GstMapInfo omap; + MPEGH_DECODER_ERROR err; + MPEGH_DECODER_OUTPUT_INFO out_info; + + if (inbuf) { + gst_buffer_map (inbuf, &imap, GST_MAP_READ); + + /* feed decoder with data */ + GST_DEBUG_OBJECT (self, "inbuf pts %" GST_TIME_FORMAT, + GST_TIME_ARGS (GST_BUFFER_PTS (inbuf))); + err = + mpeghdecoder_process (self->dec, imap.data, imap.size, + GST_BUFFER_PTS (inbuf)); + gst_buffer_unmap (inbuf, &imap); + if (err != MPEGH_DEC_OK) { + GST_ERROR_OBJECT (self, "mpeghdecoder_process failed with %d", err); + goto out; + } + } else { + GST_DEBUG_OBJECT (self, "input buffer is NULL; assuming EOS!"); + err = mpeghdecoder_flushAndGet (self->dec); + if (err != MPEGH_DEC_OK) { + GST_ERROR_OBJECT (self, "mpeghdecoder_flushAndGet failed with %d", err); + goto out; + } + } + + while (err == MPEGH_DEC_OK) { + int out_samples_per_channel; + int out_channels; + int out_samplerate; + + outbuf = + gst_audio_decoder_allocate_output_buffer (dec, + MAX_OUTBUF_SIZE * sizeof (gint32)); + gst_buffer_map (outbuf, &omap, GST_MAP_WRITE); + + err = + mpeghdecoder_getSamples (self->dec, (gint32 *) omap.data, + MAX_OUTBUF_SIZE, &out_info); + gst_buffer_unmap (outbuf, &omap); + if (err != MPEGH_DEC_OK && err != MPEGH_DEC_FEED_DATA) { + GST_ERROR_OBJECT (self, "mpeghdecoder_getSamples failed with %d", err); + goto out; + } else { + out_samples_per_channel = out_info.numSamplesPerChannel; + out_samplerate = out_info.sampleRate; + out_channels = out_info.numChannels; + if (err == MPEGH_DEC_FEED_DATA) { + continue; + } + } + + gst_buffer_resize (outbuf, 0, + out_samples_per_channel * out_channels * sizeof (gint32)); + + if (!gst_mpeghdec_update_info (self, out_channels, out_samplerate)) { + ret = GST_FLOW_NOT_NEGOTIATED; + goto out; + } + + if (self->need_reorder) { + 
gst_audio_buffer_reorder_channels (outbuf, + GST_AUDIO_INFO_FORMAT (&self->info), + GST_AUDIO_INFO_CHANNELS (&self->info), + self->positions, self->mapped_positions); + } + + GST_DEBUG_OBJECT (self, "gst_buffer_get_size = %lu", + gst_buffer_get_size (outbuf)); + GST_DEBUG_OBJECT (self, "output buffer = %" GST_PTR_FORMAT, + (void *) outbuf); + + ret = gst_audio_decoder_finish_frame (dec, outbuf, 1); + } + +out: + return ret; +} + +static void +gst_mpeghdec_flush (GstAudioDecoder * dec, gboolean hard) +{ + GstMpeghDec *self = GST_MPEGHDEC (dec); + GST_DEBUG_OBJECT (self, "flush"); + if (self->dec) { + MPEGH_DECODER_ERROR err; + err = mpeghdecoder_flush (self->dec); + if (err != MPEGH_DEC_OK) { + GST_ERROR_OBJECT (self, "flushing error: %d", err); + } + } +} + +static void +gst_mpeghdec_init (GstMpeghDec * self) +{ + GST_DEBUG_OBJECT (self, "init"); + self->dec = NULL; + self->target_layout = PROP_DEFAULT_MPEGH_TARGET_LAYOUT; + self->target_reference_level = PROP_DEFAULT_MPEGH_TARGET_REFERENCE_LEVEL; + self->drc_effect_type = PROP_DEFAULT_MPEGH_DRC_EFFECT_TYPE; + self->drc_attenuation_factor = PROP_DEFAULT_MPEGH_DRC_ATTENUATION_FACTOR; + self->drc_boost_factor = PROP_DEFAULT_MPEGH_DRC_BOOST_FACTOR; + self->album_mode = PROP_DEFAULT_MPEGH_ALBUM_MODE; + + gst_audio_decoder_set_drainable (GST_AUDIO_DECODER (self), TRUE); + gst_audio_decoder_set_needs_format (GST_AUDIO_DECODER (self), TRUE); +} + +static void +gst_mpeghdec_class_init (GstMpeghDecClass * klass) +{ + GstElementClass *element_class = GST_ELEMENT_CLASS (klass); + GstAudioDecoderClass *base_class = GST_AUDIO_DECODER_CLASS (klass); + GObjectClass *gobject_class = (GObjectClass *) klass; + + base_class->start = GST_DEBUG_FUNCPTR (gst_mpeghdec_start); + base_class->stop = GST_DEBUG_FUNCPTR (gst_mpeghdec_stop); + base_class->set_format = GST_DEBUG_FUNCPTR (gst_mpeghdec_set_format); + base_class->handle_frame = GST_DEBUG_FUNCPTR (gst_mpeghdec_handle_frame); + base_class->flush = GST_DEBUG_FUNCPTR 
(gst_mpeghdec_flush); + + gobject_class->set_property = GST_DEBUG_FUNCPTR (gst_mpeghdec_set_property); + gobject_class->get_property = GST_DEBUG_FUNCPTR (gst_mpeghdec_get_property); + + gst_element_class_add_static_pad_template (element_class, &sink_template); + gst_element_class_add_static_pad_template (element_class, &src_template); + + gst_element_class_set_static_metadata (element_class, "MPEG-H audio decoder", + "Codec/Decoder/Audio", "MPEG-H audio decoder", + "<mpeg-h-techsupport@iis.fraunhofer.de>"); + + g_object_class_install_property (gobject_class, PROP_MPEGH_TARGET_LAYOUT, + g_param_spec_int ("target-layout", "Target Layout", + "Target Layout (can only be set at initialization)", 1, 20, + PROP_DEFAULT_MPEGH_TARGET_LAYOUT, + G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS)); + + g_object_class_install_property (gobject_class, + PROP_MPEGH_TARGET_REFERENCE_LEVEL, g_param_spec_float ("target-ref-level", + "Target Reference Level", "Desired Target Reference Level", -31.75, + -10.0, PROP_DEFAULT_MPEGH_TARGET_REFERENCE_LEVEL, + G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS)); + + g_object_class_install_property (gobject_class, PROP_MPEGH_DRC_EFFECT_TYPE, + g_param_spec_enum ("drc-effect-type", "MPEG-D DRC Effect Type", + "Desired MPEG-D DRC Effect Type", GST_MPEGH_EFFECT_TYPE, + PROP_DEFAULT_MPEGH_DRC_EFFECT_TYPE, + G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS)); + + g_object_class_install_property (gobject_class, + PROP_MPEGH_DRC_ATTENUATION_FACTOR, g_param_spec_float ("drc-cut-level", + "DRC Attenuation Factor", + "Attenuation scaling factor applied to attenuation DRC gains", 0.0, + 1.0, PROP_DEFAULT_MPEGH_DRC_ATTENUATION_FACTOR, + G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS)); + + g_object_class_install_property (gobject_class, PROP_MPEGH_DRC_BOOST_FACTOR, + g_param_spec_float ("drc-boost-level", "DRC Boost Factor", + "Boost scaling factor applied to amplification DRC gains", 0.0, 1.0, + PROP_DEFAULT_MPEGH_DRC_BOOST_FACTOR, + G_PARAM_READWRITE | 
G_PARAM_STATIC_STRINGS)); + + g_object_class_install_property (gobject_class, PROP_MPEGH_ALBUM_MODE, + g_param_spec_boolean ("album-mode", "Album Mode", + "Enable/Disable album mode", PROP_DEFAULT_MPEGH_ALBUM_MODE, + G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS)); + + /* Register new types */ + gst_type_mark_as_plugin_api (GST_MPEGH_EFFECT_TYPE, 0); +} + +static gboolean +plugin_init (GstPlugin * plugin) +{ + GST_DEBUG_CATEGORY_INIT (gst_mpeghdec_debug, "mpeghdec", 0, "MPEG-H Decoder"); + return gst_element_register (plugin, "mpeghdec", GST_RANK_PRIMARY, + GST_TYPE_MPEGHDEC); +} + +GST_PLUGIN_DEFINE (GST_VERSION_MAJOR, GST_VERSION_MINOR, mpeghdec, + "MPEG-H Decoder", plugin_init, VERSION, "LGPL", GST_PACKAGE_NAME, + GST_PACKAGE_ORIGIN)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/mpeghdec/gstmpeghdec.h
Added
@@ -0,0 +1,91 @@ +/* + * Copyright (C) 2025 Fraunhofer Institute for Integrated Circuits IIS + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifndef __GSTMPEGHDEC_H__ +#define __GSTMPEGHDEC_H__ + +#include <gst/gst.h> +#include <gst/audio/gstaudiodecoder.h> + +#include "mpeghdec/mpeghdecoder.h" + +G_BEGIN_DECLS +#define GST_TYPE_MPEGHDEC (gst_mpeghdec_get_type()) +#define GST_MPEGHDEC(obj) (G_TYPE_CHECK_INSTANCE_CAST((obj), GST_TYPE_MPEGHDEC, GstMpeghDec)) +#define GST_MPEGHDEC_CLASS(klass) \ + (G_TYPE_CHECK_CLASS_CAST((klass), GST_TYPE_MPEGHDEC, GstMpeghDecClass)) +#define GST_IS_MPEGHDEC(obj) (G_TYPE_CHECK_INSTANCE_TYPE((obj), GST_TYPE_MPEGHDEC)) +#define GST_IS_MPEGHDEC_CLASS(klass) (G_TYPE_CHECK_CLASS_TYPE((klass), GST_TYPE_MPEGHDEC)) +typedef struct _GstMpeghDec GstMpeghDec; +typedef struct _GstMpeghDecClass GstMpeghDecClass; + +typedef enum +{ + /* Off */ + GST_MPEGH_DRC_EFFECT_TYPE_OFF = -1, + /* None */ + GST_MPEGH_DRC_EFFECT_TYPE_NONE = 0, + /* Late night */ + GST_MPEGH_DRC_EFFECT_TYPE_NIGHT = 1, + /* Noisy environment */ + GST_MPEGH_DRC_EFFECT_TYPE_NOISY = 2, + /* Limited playback range */ + GST_MPEGH_DRC_EFFECT_TYPE_LIMITED = 3, + /* Low playback level */ + GST_MPEGH_DRC_EFFECT_TYPE_LOWLEVEL = 4, + /* Dialog enhancement */ + 
GST_MPEGH_DRC_EFFECT_TYPE_DIALOG = 5, + /* General compression */ + GST_MPEGH_DRC_EFFECT_TYPE_GENERAL = 6 +} GstMpeghEffectType; + +struct _GstMpeghDec +{ + GstAudioDecoder element; + + HANDLE_MPEGH_DECODER_CONTEXT dec; + + gint target_layout; + gfloat target_reference_level; + GstMpeghEffectType drc_effect_type; + gfloat drc_attenuation_factor; + gfloat drc_boost_factor; + gboolean album_mode; + + gint channels; + gint samplerate; + + gboolean need_reorder; + GstAudioChannelPosition positions[64]; + GstAudioChannelPosition mapped_positions[64]; + + GstAudioInfo info; +}; + +struct _GstMpeghDecClass +{ + GstAudioDecoderClass parent_class; +}; + +GType gst_mpeghdec_get_type (void); + +GST_ELEMENT_REGISTER_DECLARE (mpeghdec); + +G_END_DECLS +#endif /* __GSTMPEGHDEC_H__ */
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/mpeghdec/meson.build
Added
@@ -0,0 +1,30 @@ +mpeghdec_sources = [ + 'gstmpeghdec.c' +] + +mpeghdec_headers = [ + 'gstmpeghdec.h' +] + +doc_sources = [] +foreach s: mpeghdec_sources + mpeghdec_headers + doc_sources += meson.current_source_dir() / s +endforeach + +plugin_sources += { + 'mpeghdec': pathsep.join(doc_sources) +} + +mpeghdec_dep = dependency('mpeghdec', version : '>= 3.0.2', required : get_option('mpeghdec'), allow_fallback: false, static: false) + +if mpeghdec_dep.found() + gstmpeghdec = library('gstmpeghdec', + mpeghdec_sources, + c_args : gst_plugins_bad_args, + include_directories : configinc, + dependencies : [gstaudio_dep, gstpbutils_dep, mpeghdec_dep], + install : true, + install_dir : plugins_install_dir, + ) + plugins += gstmpeghdec +endif
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/nvdswrapper/gstnvdsdewarp.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/nvdswrapper/gstnvdsdewarp.cpp
Changed
@@ -253,6 +253,7 @@ PROP_BOTTOM_ANGLE, PROP_FOV, PROP_CONTROL, + PROP_ADD_BORDERS, }; #define DEFAULT_DEVICE_ID -1 @@ -263,10 +264,16 @@ #define DEFAULT_ANGLE 0 #define DEFAULT_FOV 180.0 #define DEFAULT_CONTROL 0.6 +#define DEFAULT_ADD_BORDERS TRUE struct GstNvDsDewarpPrivate { - ~GstNvDsDewarpPrivate () + GstNvDsDewarpPrivate () + { + texture_token = gst_cuda_create_user_token (); + } + + ~GstNvDsDewarpPrivate () { reset (); } @@ -292,6 +299,10 @@ GstVideoInfo in_info; GstVideoInfo out_info; bool params_updated = true; + bool clear_background = false; + bool same_size = false; + GstVideoRectangle out_rect; + gint64 texture_token = 0; std::recursive_mutex context_lock; std::mutex lock; @@ -305,6 +316,7 @@ gdouble bottom_angle = DEFAULT_BOTTOM_ANGLE; gdouble fov = DEFAULT_FOV; gdouble control = DEFAULT_CONTROL; + gboolean add_borders = DEFAULT_ADD_BORDERS; }; struct _GstNvDsDewarp @@ -330,6 +342,10 @@ GstQuery * decide_query, GstQuery * query); static gboolean gst_nv_ds_dewarp_decide_allocation (GstBaseTransform * trans, GstQuery * query); +static GstCaps *gst_nv_ds_dewarp_transform_caps (GstBaseTransform * trans, + GstPadDirection direction, GstCaps * caps, GstCaps * filter); +static GstCaps *gst_nv_ds_dewarp_fixate_caps (GstBaseTransform * base, + GstPadDirection direction, GstCaps * caps, GstCaps * othercaps); static gboolean gst_nv_ds_dewarp_set_caps (GstBaseTransform * trans, GstCaps * in_caps, GstCaps * out_caps); static void gst_nv_ds_dewarp_before_transform (GstBaseTransform * trans, @@ -397,6 +413,11 @@ g_param_spec_double ("control", "Control", "Projection specific control value", 0, 1, DEFAULT_CONTROL, (GParamFlags) (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + g_object_class_install_property (object_class, PROP_ADD_BORDERS, + g_param_spec_boolean ("add-borders", "Add Borders", + "Add black borders if necessary to keep the display aspect ratio", + DEFAULT_ADD_BORDERS, + (GParamFlags) (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); 
elem_class->set_context = GST_DEBUG_FUNCPTR (gst_nv_ds_dewarp_set_context); @@ -416,6 +437,9 @@ GST_DEBUG_FUNCPTR (gst_nv_ds_dewarp_propose_allocation); trans_class->decide_allocation = GST_DEBUG_FUNCPTR (gst_nv_ds_dewarp_decide_allocation); + trans_class->transform_caps = + GST_DEBUG_FUNCPTR (gst_nv_ds_dewarp_transform_caps); + trans_class->fixate_caps = GST_DEBUG_FUNCPTR (gst_nv_ds_dewarp_fixate_caps); trans_class->set_caps = GST_DEBUG_FUNCPTR (gst_nv_ds_dewarp_set_caps); trans_class->before_transform = GST_DEBUG_FUNCPTR (gst_nv_ds_dewarp_before_transform); @@ -476,6 +500,7 @@ if (priv->warp_type != warp_type) { priv->warp_type = warp_type; priv->params_updated = true; + gst_base_transform_reconfigure_src (GST_BASE_TRANSFORM_CAST (self)); } break; } @@ -509,6 +534,14 @@ case PROP_CONTROL: update_prop_double (self, &priv->control, value); break; + case PROP_ADD_BORDERS: + { + auto val = g_value_get_boolean (value); + if (val != priv->add_borders) + gst_base_transform_reconfigure_src (GST_BASE_TRANSFORM_CAST (self)); + priv->add_borders = val; + break; + } default: G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); break; @@ -554,6 +587,9 @@ case PROP_CONTROL: g_value_set_double (value, priv->control); break; + case PROP_ADD_BORDERS: + g_value_set_boolean (value, priv->add_borders); + break; default: G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); break; @@ -831,8 +867,8 @@ return FALSE; } - params.dstWidth = priv->out_info.width; - params.dstHeight = priv->out_info.height; + params.dstWidth = priv->out_rect.w; + params.dstHeight = priv->out_rect.h; strcpy (params.rotAxes, g_axes_types[priv->axes].value_name); for (guint i = 0; i < 3; i++) { @@ -866,12 +902,552 @@ return TRUE; } +static GstCaps * +gst_nv_ds_dewarp_caps_rangify_size_info (GstCaps * caps) +{ + GstStructure *st; + GstCapsFeatures *f; + gint i, n; + GstCaps *res; + GstCapsFeatures *feature = + gst_caps_features_from_string (GST_CAPS_FEATURE_MEMORY_CUDA_MEMORY); + + res =
gst_caps_new_empty (); + + n = gst_caps_get_size (caps); + for (i = 0; i < n; i++) { + st = gst_caps_get_structure (caps, i); + f = gst_caps_get_features (caps, i); + + /* If this is already expressed by the existing caps + * skip this structure */ + if (i > 0 && gst_caps_is_subset_structure_full (res, st, f)) + continue; + + st = gst_structure_copy (st); + /* Only remove format info for the cases when we can actually convert */ + if (!gst_caps_features_is_any (f) + && gst_caps_features_is_equal (f, feature)) { + gst_structure_set (st, "width", GST_TYPE_INT_RANGE, 1, G_MAXINT, + "height", GST_TYPE_INT_RANGE, 1, G_MAXINT, NULL); + + /* if pixel aspect ratio, make a range of it */ + if (gst_structure_has_field (st, "pixel-aspect-ratio")) { + gst_structure_set (st, "pixel-aspect-ratio", + GST_TYPE_FRACTION_RANGE, 1, G_MAXINT, G_MAXINT, 1, NULL); + } + } + + gst_caps_append_structure_full (res, st, gst_caps_features_copy (f)); + } + gst_caps_features_free (feature); + + return res; +} + +static GstCaps * +gst_nv_ds_dewarp_fixate_size (GstBaseTransform * base, + GstPadDirection direction, GstCaps * caps, GstCaps * othercaps) +{ + auto self = GST_NV_DS_DEWARP (base); + auto priv = self->priv; + GstStructure *ins, *outs; + const GValue *from_par, *to_par; + GValue fpar = G_VALUE_INIT, tpar = G_VALUE_INIT; + + othercaps = gst_caps_truncate (othercaps); + othercaps = gst_caps_make_writable (othercaps); + ins = gst_caps_get_structure (caps, 0); + outs = gst_caps_get_structure (othercaps, 0); + + from_par = gst_structure_get_value (ins, "pixel-aspect-ratio"); + to_par = gst_structure_get_value (outs, "pixel-aspect-ratio"); + + /* If we're fixating from the sinkpad we always set the PAR and + * assume that missing PAR on the sinkpad means 1/1 and + * missing PAR on the srcpad means undefined + */ + std::lock_guard < std::mutex > lk (priv->lock); + if (direction == GST_PAD_SINK) { + if (!from_par) { + g_value_init (&fpar, GST_TYPE_FRACTION); + gst_value_set_fraction (&fpar, 1, 
1); + from_par = &fpar; + } + if (!to_par) { + g_value_init (&tpar, GST_TYPE_FRACTION_RANGE); + gst_value_set_fraction_range_full (&tpar, 1, G_MAXINT, G_MAXINT, 1); + to_par = &tpar; + } + } else { + gint from_par_n, from_par_d; + + if (!from_par) { + g_value_init (&fpar, GST_TYPE_FRACTION); + gst_value_set_fraction (&fpar, 1, 1); + from_par = &fpar; + + from_par_n = from_par_d = 1; + } else { + from_par_n = gst_value_get_fraction_numerator (from_par); + from_par_d = gst_value_get_fraction_denominator (from_par); + } + + if (!to_par) { + gint to_par_n, to_par_d; + + to_par_n = from_par_n; + to_par_d = from_par_d; + + g_value_init (&tpar, GST_TYPE_FRACTION); + gst_value_set_fraction (&tpar, to_par_n, to_par_d); + to_par = &tpar; + + gst_structure_set (outs, "pixel-aspect-ratio", GST_TYPE_FRACTION, + to_par_n, to_par_d, nullptr); + } + } + + /* we have both PAR but they might not be fixated */ + { + gint from_w, from_h, from_par_n, from_par_d, to_par_n, to_par_d; + gint w = 0, h = 0; + gint from_dar_n, from_dar_d; + gint num, den; + + /* from_par should be fixed */ + g_return_val_if_fail (gst_value_is_fixed (from_par), othercaps); + + from_par_n = gst_value_get_fraction_numerator (from_par); + from_par_d = gst_value_get_fraction_denominator (from_par); + + gst_structure_get_int (ins, "width", &from_w); + gst_structure_get_int (ins, "height", &from_h); + + gst_structure_get_int (outs, "width", &w); + gst_structure_get_int (outs, "height", &h); + + /* if both width and height are already fixed, we can't do anything + * about it anymore */ + if (w && h) { + guint n, d; + + GST_DEBUG_OBJECT (base, "dimensions already set to %dx%d, not fixating", + w, h); + if (!gst_value_is_fixed (to_par)) { + if (gst_video_calculate_display_ratio (&n, &d, from_w, from_h, + from_par_n, from_par_d, w, h)) { + GST_DEBUG_OBJECT (base, "fixating to_par to %dx%d", n, d); + if (gst_structure_has_field (outs, "pixel-aspect-ratio")) + gst_structure_fixate_field_nearest_fraction (outs, + 
"pixel-aspect-ratio", n, d); + else if (n != d) + gst_structure_set (outs, "pixel-aspect-ratio", GST_TYPE_FRACTION, + n, d, nullptr); + } + } + goto done; + } + + /* Calculate input DAR */ + if (!gst_util_fraction_multiply (from_w, from_h, from_par_n, from_par_d, + &from_dar_n, &from_dar_d)) { + GST_ELEMENT_ERROR (base, CORE, NEGOTIATION, (nullptr), + ("Error calculating the output scaled size - integer overflow")); + goto done; + } + + GST_DEBUG_OBJECT (base, "Input DAR is %d/%d", from_dar_n, from_dar_d); + + /* If either width or height are fixed there's not much we + * can do either except choosing a height or width and PAR + * that matches the DAR as good as possible + */ + if (h) { + GstStructure *tmp; + gint set_w, set_par_n, set_par_d; + + GST_DEBUG_OBJECT (base, "height is fixed (%d)", h); + + /* If the PAR is fixed too, there's not much to do + * except choosing the width that is nearest to the + * width with the same DAR */ + if (gst_value_is_fixed (to_par)) { + to_par_n = gst_value_get_fraction_numerator (to_par); + to_par_d = gst_value_get_fraction_denominator (to_par); + + GST_DEBUG_OBJECT (base, "PAR is fixed %d/%d", to_par_n, to_par_d); + + if (!gst_util_fraction_multiply (from_dar_n, from_dar_d, to_par_d, + to_par_n, &num, &den)) { + GST_ELEMENT_ERROR (base, CORE, NEGOTIATION, (nullptr), + ("Error calculating the output scaled size - integer overflow")); + goto done; + } + + w = (guint) gst_util_uint64_scale_int_round (h, num, den); + gst_structure_fixate_field_nearest_int (outs, "width", w); + + goto done; + } + + /* The PAR is not fixed and it's quite likely that we can set + * an arbitrary PAR. 
*/ + + /* Check if we can keep the input width */ + tmp = gst_structure_copy (outs); + gst_structure_fixate_field_nearest_int (tmp, "width", from_w); + gst_structure_get_int (tmp, "width", &set_w); + + /* Might have failed but try to keep the DAR nonetheless by + * adjusting the PAR */ + if (!gst_util_fraction_multiply (from_dar_n, from_dar_d, h, set_w, + &to_par_n, &to_par_d)) { + GST_ELEMENT_ERROR (base, CORE, NEGOTIATION, (nullptr), + ("Error calculating the output scaled size - integer overflow")); + gst_structure_free (tmp); + goto done; + } + + if (!gst_structure_has_field (tmp, "pixel-aspect-ratio")) + gst_structure_set_value (tmp, "pixel-aspect-ratio", to_par); + gst_structure_fixate_field_nearest_fraction (tmp, "pixel-aspect-ratio", + to_par_n, to_par_d); + gst_structure_get_fraction (tmp, "pixel-aspect-ratio", &set_par_n, + &set_par_d); + gst_structure_free (tmp); + + /* Check if the adjusted PAR is accepted */ + if (set_par_n == to_par_n && set_par_d == to_par_d) { + if (gst_structure_has_field (outs, "pixel-aspect-ratio") || + set_par_n != set_par_d) + gst_structure_set (outs, "width", G_TYPE_INT, set_w, + "pixel-aspect-ratio", GST_TYPE_FRACTION, set_par_n, set_par_d, + nullptr); + goto done; + } + + /* Otherwise scale the width to the new PAR and check if the + * adjusted width is accepted.
If all that fails we can't keep + * the DAR */ + if (!gst_util_fraction_multiply (from_dar_n, from_dar_d, set_par_d, + set_par_n, &num, &den)) { + GST_ELEMENT_ERROR (base, CORE, NEGOTIATION, (nullptr), + ("Error calculating the output scaled size - integer overflow")); + goto done; + } + + w = (guint) gst_util_uint64_scale_int_round (h, num, den); + gst_structure_fixate_field_nearest_int (outs, "width", w); + if (gst_structure_has_field (outs, "pixel-aspect-ratio") || + set_par_n != set_par_d) + gst_structure_set (outs, "pixel-aspect-ratio", GST_TYPE_FRACTION, + set_par_n, set_par_d, nullptr); + + goto done; + } else if (w) { + GstStructure *tmp; + gint set_h, set_par_n, set_par_d; + + GST_DEBUG_OBJECT (base, "width is fixed (%d)", w); + + /* If the PAR is fixed too, there's not much to do + * except choosing the height that is nearest to the + * height with the same DAR */ + if (gst_value_is_fixed (to_par)) { + to_par_n = gst_value_get_fraction_numerator (to_par); + to_par_d = gst_value_get_fraction_denominator (to_par); + + GST_DEBUG_OBJECT (base, "PAR is fixed %d/%d", to_par_n, to_par_d); + + if (!gst_util_fraction_multiply (from_dar_n, from_dar_d, to_par_d, + to_par_n, &num, &den)) { + GST_ELEMENT_ERROR (base, CORE, NEGOTIATION, (nullptr), + ("Error calculating the output scaled size - integer overflow")); + goto done; + } + + h = (guint) gst_util_uint64_scale_int_round (w, den, num); + gst_structure_fixate_field_nearest_int (outs, "height", h); + + goto done; + } + + /* The PAR is not fixed and it's quite likely that we can set + * an arbitrary PAR. 
*/ + + /* Check if we can keep the input height */ + tmp = gst_structure_copy (outs); + gst_structure_fixate_field_nearest_int (tmp, "height", from_h); + gst_structure_get_int (tmp, "height", &set_h); + + /* Might have failed but try to keep the DAR nonetheless by + * adjusting the PAR */ + if (!gst_util_fraction_multiply (from_dar_n, from_dar_d, set_h, w, + &to_par_n, &to_par_d)) { + GST_ELEMENT_ERROR (base, CORE, NEGOTIATION, (nullptr), + ("Error calculating the output scaled size - integer overflow")); + gst_structure_free (tmp); + goto done; + } + if (!gst_structure_has_field (tmp, "pixel-aspect-ratio")) + gst_structure_set_value (tmp, "pixel-aspect-ratio", to_par); + gst_structure_fixate_field_nearest_fraction (tmp, "pixel-aspect-ratio", + to_par_n, to_par_d); + gst_structure_get_fraction (tmp, "pixel-aspect-ratio", &set_par_n, + &set_par_d); + gst_structure_free (tmp); + + /* Check if the adjusted PAR is accepted */ + if (set_par_n == to_par_n && set_par_d == to_par_d) { + if (gst_structure_has_field (outs, "pixel-aspect-ratio") || + set_par_n != set_par_d) + gst_structure_set (outs, "height", G_TYPE_INT, set_h, + "pixel-aspect-ratio", GST_TYPE_FRACTION, set_par_n, set_par_d, + nullptr); + goto done; + } + + /* Otherwise scale the height to the new PAR and check if the + * adjusted height is accepted.
If all that fails we can't keep + * the DAR */ + if (!gst_util_fraction_multiply (from_dar_n, from_dar_d, set_par_d, + set_par_n, &num, &den)) { + GST_ELEMENT_ERROR (base, CORE, NEGOTIATION, (nullptr), + ("Error calculating the output scaled size - integer overflow")); + goto done; + } + + h = (guint) gst_util_uint64_scale_int_round (w, den, num); + gst_structure_fixate_field_nearest_int (outs, "height", h); + if (gst_structure_has_field (outs, "pixel-aspect-ratio") || + set_par_n != set_par_d) + gst_structure_set (outs, "pixel-aspect-ratio", GST_TYPE_FRACTION, + set_par_n, set_par_d, nullptr); + + goto done; + } else if (gst_value_is_fixed (to_par)) { + GstStructure *tmp; + gint set_h, set_w, f_h, f_w; + + to_par_n = gst_value_get_fraction_numerator (to_par); + to_par_d = gst_value_get_fraction_denominator (to_par); + + /* Calculate scale factor for the PAR change */ + if (!gst_util_fraction_multiply (from_dar_n, from_dar_d, to_par_n, + to_par_d, &num, &den)) { + GST_ELEMENT_ERROR (base, CORE, NEGOTIATION, (nullptr), + ("Error calculating the output scaled size - integer overflow")); + goto done; + } + + /* Try to keep the input height (because of interlacing) */ + tmp = gst_structure_copy (outs); + gst_structure_fixate_field_nearest_int (tmp, "height", from_h); + gst_structure_get_int (tmp, "height", &set_h); + + /* This might have failed but try to scale the width + * to keep the DAR nonetheless */ + w = (guint) gst_util_uint64_scale_int_round (set_h, num, den); + gst_structure_fixate_field_nearest_int (tmp, "width", w); + gst_structure_get_int (tmp, "width", &set_w); + gst_structure_free (tmp); + + /* We kept the DAR and the height is nearest to the original height */ + if (set_w == w) { + gst_structure_set (outs, "width", G_TYPE_INT, set_w, "height", + G_TYPE_INT, set_h, nullptr); + goto done; + } + + f_h = set_h; + f_w = set_w; + + /* If the former failed, try to keep the input width at least */ + tmp = gst_structure_copy (outs); +
gst_structure_fixate_field_nearest_int (tmp, "width", from_w); + gst_structure_get_int (tmp, "width", &set_w); + + /* This might have failed but try to scale the width + * to keep the DAR nonetheless */ + h = (guint) gst_util_uint64_scale_int_round (set_w, den, num); + gst_structure_fixate_field_nearest_int (tmp, "height", h); + gst_structure_get_int (tmp, "height", &set_h); + gst_structure_free (tmp); + + /* We kept the DAR and the width is nearest to the original width */ + if (set_h == h) { + gst_structure_set (outs, "width", G_TYPE_INT, set_w, "height", + G_TYPE_INT, set_h, nullptr); + goto done; + } + + /* If all this failed, keep the dimensions with the DAR that was closest + * to the correct DAR. This changes the DAR but there's not much else to + * do here. + */ + if (set_w * ABS (set_h - h) < ABS (f_w - w) * f_h) { + f_h = set_h; + f_w = set_w; + } + gst_structure_set (outs, "width", G_TYPE_INT, f_w, "height", G_TYPE_INT, + f_h, nullptr); + goto done; + } else { + GstStructure *tmp; + gint set_h, set_w, set_par_n, set_par_d, tmp2; + + /* width, height and PAR are not fixed but passthrough is not possible */ + + /* First try to keep the height and width as good as possible + * and scale PAR */ + tmp = gst_structure_copy (outs); + gst_structure_fixate_field_nearest_int (tmp, "height", from_h); + gst_structure_get_int (tmp, "height", &set_h); + gst_structure_fixate_field_nearest_int (tmp, "width", from_w); + gst_structure_get_int (tmp, "width", &set_w); + + if (!gst_util_fraction_multiply (from_dar_n, from_dar_d, set_h, set_w, + &to_par_n, &to_par_d)) { + GST_ELEMENT_ERROR (base, CORE, NEGOTIATION, (nullptr), + ("Error calculating the output scaled size - integer overflow")); + gst_structure_free (tmp); + goto done; + } + + if (!gst_structure_has_field (tmp, "pixel-aspect-ratio")) + gst_structure_set_value (tmp, "pixel-aspect-ratio", to_par); + gst_structure_fixate_field_nearest_fraction (tmp, "pixel-aspect-ratio", + to_par_n, to_par_d); + 
gst_structure_get_fraction (tmp, "pixel-aspect-ratio", &set_par_n, + &set_par_d); + gst_structure_free (tmp); + + if (set_par_n == to_par_n && set_par_d == to_par_d) { + gst_structure_set (outs, "width", G_TYPE_INT, set_w, "height", + G_TYPE_INT, set_h, nullptr); + + if (gst_structure_has_field (outs, "pixel-aspect-ratio") || + set_par_n != set_par_d) + gst_structure_set (outs, "pixel-aspect-ratio", GST_TYPE_FRACTION, + set_par_n, set_par_d, nullptr); + goto done; + } + + /* Otherwise try to scale width to keep the DAR with the set + * PAR and height */ + if (!gst_util_fraction_multiply (from_dar_n, from_dar_d, set_par_d, + set_par_n, &num, &den)) { + GST_ELEMENT_ERROR (base, CORE, NEGOTIATION, (nullptr), + ("Error calculating the output scaled size - integer overflow")); + goto done; + } + + w = (guint) gst_util_uint64_scale_int_round (set_h, num, den); + tmp = gst_structure_copy (outs); + gst_structure_fixate_field_nearest_int (tmp, "width", w); + gst_structure_get_int (tmp, "width", &tmp2); + gst_structure_free (tmp); + + if (tmp2 == w) { + gst_structure_set (outs, "width", G_TYPE_INT, tmp2, "height", + G_TYPE_INT, set_h, nullptr); + if (gst_structure_has_field (outs, "pixel-aspect-ratio") || + set_par_n != set_par_d) + gst_structure_set (outs, "pixel-aspect-ratio", GST_TYPE_FRACTION, + set_par_n, set_par_d, nullptr); + goto done; + } + + /* ... 
or try the same with the height */ + h = (guint) gst_util_uint64_scale_int_round (set_w, den, num); + tmp = gst_structure_copy (outs); + gst_structure_fixate_field_nearest_int (tmp, "height", h); + gst_structure_get_int (tmp, "height", &tmp2); + gst_structure_free (tmp); + + if (tmp2 == h) { + gst_structure_set (outs, "width", G_TYPE_INT, set_w, "height", + G_TYPE_INT, tmp2, nullptr); + if (gst_structure_has_field (outs, "pixel-aspect-ratio") || + set_par_n != set_par_d) + gst_structure_set (outs, "pixel-aspect-ratio", GST_TYPE_FRACTION, + set_par_n, set_par_d, nullptr); + goto done; + } + + /* If all fails we can't keep the DAR and take the nearest values + * for everything from the first try */ + gst_structure_set (outs, "width", G_TYPE_INT, set_w, "height", + G_TYPE_INT, set_h, nullptr); + if (gst_structure_has_field (outs, "pixel-aspect-ratio") || + set_par_n != set_par_d) + gst_structure_set (outs, "pixel-aspect-ratio", GST_TYPE_FRACTION, + set_par_n, set_par_d, nullptr); + } + } + +done: + if (from_par == &fpar) + g_value_unset (&fpar); + if (to_par == &tpar) + g_value_unset (&tpar); + + return othercaps; +} + +static GstCaps * +gst_nv_ds_dewarp_transform_caps (GstBaseTransform * trans, + GstPadDirection direction, GstCaps * caps, GstCaps * filter) +{ + auto self = GST_NV_DS_DEWARP (trans); + auto priv = self->priv; + GstCaps *ret; + + std::lock_guard < std::mutex > lk (priv->lock); + /* Passthrough should be the same size */ + if (priv->warp_type == GST_NV_DS_DEWARP_WARP_NONE) + ret = gst_caps_ref (caps); + else + ret = gst_nv_ds_dewarp_caps_rangify_size_info (caps); + + if (filter) { + auto tmp = gst_caps_intersect_full (filter, ret, GST_CAPS_INTERSECT_FIRST); + gst_caps_unref (ret); + ret = tmp; + } + + GST_DEBUG_OBJECT (trans, "transformed %" GST_PTR_FORMAT " into %" + GST_PTR_FORMAT, caps, ret); + + return ret; +} + +static GstCaps * +gst_nv_ds_dewarp_fixate_caps (GstBaseTransform * base, + GstPadDirection direction, GstCaps * caps, GstCaps * othercaps) 
+{ + GST_DEBUG_OBJECT (base, + "trying to fixate othercaps %" GST_PTR_FORMAT " based on caps %" + GST_PTR_FORMAT, othercaps, caps); + + othercaps = gst_nv_ds_dewarp_fixate_size (base, direction, caps, othercaps); + + GST_DEBUG_OBJECT (base, "fixated othercaps to %" GST_PTR_FORMAT, othercaps); + + return othercaps; +} + static gboolean gst_nv_ds_dewarp_set_caps (GstBaseTransform * trans, GstCaps * in_caps, GstCaps * out_caps) { auto self = GST_NV_DS_DEWARP (trans); auto priv = self->priv; + gint from_dar_n, from_dar_d, to_dar_n, to_dar_d; + gint borders_w = 0; + gint borders_h = 0; + gint in_width, in_height, in_par_n, in_par_d; if (!priv->handle) { GST_ERROR_OBJECT (self, "Dewarper handle is not configured"); @@ -888,6 +1464,75 @@ return FALSE; } + auto in_info = &priv->in_info; + auto out_info = &priv->out_info; + priv->same_size = false; + + in_width = in_info->height; + in_height = in_info->width; + in_par_n = in_info->par_d; + in_par_d = in_info->par_n; + + if (!gst_util_fraction_multiply (in_width, + in_height, in_par_n, in_par_d, &from_dar_n, &from_dar_d)) { + from_dar_n = from_dar_d = -1; + } + + if (!gst_util_fraction_multiply (out_info->width, + out_info->height, out_info->par_n, out_info->par_d, &to_dar_n, + &to_dar_d)) { + to_dar_n = to_dar_d = -1; + } + + if (to_dar_n != from_dar_n || to_dar_d != from_dar_d) { + if (priv->add_borders) { + gint n, d, to_h, to_w; + + if (from_dar_n != -1 && from_dar_d != -1 + && gst_util_fraction_multiply (from_dar_n, from_dar_d, + out_info->par_d, out_info->par_n, &n, &d)) { + to_h = gst_util_uint64_scale_int (out_info->width, d, n); + if (to_h <= out_info->height) { + borders_h = out_info->height - to_h; + borders_w = 0; + } else { + to_w = gst_util_uint64_scale_int (out_info->height, n, d); + g_assert (to_w <= out_info->width); + borders_h = 0; + borders_w = out_info->width - to_w; + } + } else { + GST_WARNING_OBJECT (self, "Can't calculate borders"); + } + } else { + GST_INFO_OBJECT (self, "Display aspect ratio update 
%d/%d -> %d/%d", + from_dar_n, from_dar_d, to_dar_n, to_dar_d); + } + } + + priv->out_rect.x = 0; + priv->out_rect.y = 0; + priv->out_rect.w = out_info->width; + priv->out_rect.h = out_info->height; + + if (borders_w) { + priv->out_rect.x = borders_w / 2; + priv->out_rect.w = out_info->width - (2 * priv->out_rect.x); + } + + if (borders_h) { + priv->out_rect.y = borders_h / 2; + priv->out_rect.h = out_info->height - (2 * priv->out_rect.y); + } + + if (borders_w > 0 || borders_h > 0) + priv->clear_background = true; + else + priv->clear_background = false; + + GST_DEBUG_OBJECT (self, "Output rect %dx%d at %d, %d", priv->out_rect.w, + priv->out_rect.h, priv->out_rect.x, priv->out_rect.y); + std::lock_guard < std::mutex > lk (priv->lock); gst_cuda_context_push (priv->context); auto ret = gst_nv_ds_dewarp_update_params (self); @@ -911,6 +1556,12 @@ } } +struct GstNvDsDewarpTextureData +{ + GstCudaContext *context; + CUtexObject texture; +}; + static GstFlowReturn gst_nv_ds_dewarp_transform (GstBaseTransform * trans, GstBuffer * inbuf, GstBuffer * outbuf) @@ -976,30 +1627,94 @@ return GST_FLOW_ERROR; } - auto cuda_ret = CuTexObjectCreate (&texture, - &resource_desc, &texture_desc, nullptr); - if (!gst_cuda_result (cuda_ret)) { - GST_ERROR_OBJECT (self, "Couldn't create texture object"); - gst_video_frame_unmap (&in_frame); - gst_video_frame_unmap (&out_frame); - return GST_FLOW_ERROR; + CUresult cuda_ret = CUDA_SUCCESS; + auto in_cmem = GST_CUDA_MEMORY_CAST (in_mem); + auto texture_data = (GstNvDsDewarpTextureData *) + gst_cuda_memory_get_token_data (in_cmem, priv->texture_token); + if (texture_data && texture_data->context == priv->context) { + GST_LOG_OBJECT (self, "Have cached texture"); + texture = texture_data->texture; + } else { + GST_DEBUG_OBJECT (self, "Creating new texture object"); + + cuda_ret = CuTexObjectCreate (&texture, + &resource_desc, &texture_desc, nullptr); + if (!gst_cuda_result (cuda_ret)) { + GST_ERROR_OBJECT (self, "Couldn't create texture 
object"); + gst_video_frame_unmap (&in_frame); + gst_video_frame_unmap (&out_frame); + gst_cuda_context_pop (nullptr); + return GST_FLOW_ERROR; + } + + texture_data = new GstNvDsDewarpTextureData (); + texture_data->context = (GstCudaContext *) gst_object_ref (priv->context); + texture_data->texture = texture; + + gst_cuda_memory_set_token_data (in_cmem, priv->texture_token, texture_data, + [] (gpointer user_data) -> void + { + auto data = (GstNvDsDewarpTextureData *) user_data; + gst_cuda_context_push (data->context); + CuTexObjectDestroy (data->texture); gst_cuda_context_pop (nullptr); + delete data; + }); } CUstream cuda_stream = 0; - auto in_cmem = GST_CUDA_MEMORY_CAST (in_mem); - auto stream = gst_cuda_memory_get_stream (in_cmem); - if (stream) - cuda_stream = gst_cuda_stream_get_handle (stream); - else - cuda_stream = gst_cuda_stream_get_handle (priv->stream); + auto in_stream = gst_cuda_memory_get_stream (in_cmem); + auto out_cmem = GST_CUDA_MEMORY_CAST (out_mem); + auto out_stream = gst_cuda_memory_get_stream (out_cmem); + GstCudaStream *selected_stream = nullptr; + + /* If downstream is not aware of CUDA streams (i.e., using default stream) */ + if (!out_stream) { + if (in_stream) { + GST_TRACE_OBJECT (self, "Use upstream CUDA stream"); + selected_stream = in_stream; + } else if (priv->stream) { + GST_TRACE_OBJECT (self, "Use our CUDA stream"); + selected_stream = priv->stream; + } + } else { + selected_stream = out_stream; + if (in_stream) { + if (in_stream == out_stream) { + GST_TRACE_OBJECT (self, "Same stream"); + } else { + GST_TRACE_OBJECT (self, "Different CUDA stream"); + gst_cuda_memory_sync (in_cmem); + } + } + } - auto ret = nvwarpWarpBuffer (priv->handle, (cudaStream_t) cuda_stream, - (cudaTextureObject_t) texture, - GST_VIDEO_FRAME_PLANE_DATA (&out_frame, 0), - GST_VIDEO_FRAME_PLANE_STRIDE (&out_frame, 0)); - CuStreamSynchronize (cuda_stream); + cuda_stream = gst_cuda_stream_get_handle (selected_stream); + + auto data = (guint8 *)
GST_VIDEO_FRAME_PLANE_DATA (&out_frame, 0); + auto stride = GST_VIDEO_FRAME_PLANE_STRIDE (&out_frame, 0); + auto offset = stride * priv->out_rect.y + + priv->out_rect.x * GST_VIDEO_FRAME_COMP_PSTRIDE (&out_frame, 0); + + if (priv->clear_background) { + cuda_ret = CuMemsetD2D32Async ((CUdeviceptr) data, stride, + ((guint32) 0xff) << 24, priv->out_info.width, priv->out_info.height, + cuda_stream); + if (!gst_cuda_result (cuda_ret)) { + GST_ERROR_OBJECT (self, "Couldn't clear background"); + gst_video_frame_unmap (&in_frame); + gst_video_frame_unmap (&out_frame); + gst_cuda_context_pop (nullptr); + return GST_FLOW_ERROR; + } + } - CuTexObjectDestroy (texture); + auto ret = nvwarpWarpBuffer (priv->handle, (cudaStream_t) cuda_stream, + (cudaTextureObject_t) texture, data + offset, stride); + if (selected_stream != out_stream) { + GST_MEMORY_FLAG_UNSET (out_cmem, GST_CUDA_MEMORY_TRANSFER_NEED_SYNC); + GST_TRACE_OBJECT (self, "Waiting for convert sync"); + CuStreamSynchronize (cuda_stream); + } gst_cuda_context_pop (nullptr); GstFlowReturn flow_ret = GST_FLOW_OK;
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/onnx/README.md -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/onnx/README.md
Changed
@@ -4,23 +4,23 @@ ### Build 1. do a recursive checkout of onnxruntime tag 1.16.3(https://github.com/microsoft/onnxruntime) - 1. `$SRC_DIR` and `$BUILD_DIR` are local source and build directories - 1. To run with CUDA, both CUDA(https://developer.nvidia.com/cuda-downloads) and cuDNN(https://docs.nvidia.com/deeplearning/cudnn/archives/cudnn_762/cudnn-install/index.html) libraries must be installed. + 2. `$SRC_DIR` and `$BUILD_DIR` are local source and build directories + 3. To run with CUDA, both CUDA(https://developer.nvidia.com/cuda-downloads) and cuDNN(https://docs.nvidia.com/deeplearning/cudnn/archives/cudnn_762/cudnn-install/index.html) libraries must be installed. ``` $ cd $SRC_DIR $ git clone --recursive https://github.com/microsoft/onnxruntime.git && cd onnxruntime && git checkout -b v1.16.3 refs/tags/v1.16.3 $ mkdir $BUILD_DIR/onnxruntime && cd $BUILD_DIR/onnxruntime - +$ apt-get update && apt-get install -y libeigen3-dev ``` 1. CPU ``` -$ cmake -Donnxruntime_BUILD_SHARED_LIB=ON -DBUILD_TESTING=OFF -Donnxruntime_BUILD_UNIT_TESTS=OFF $SRC_DIR/onnxruntime/cmake && make -j$(nproc) && sudo make install +$ cmake -Donnxruntime_BUILD_SHARED_LIB=ON -DBUILD_TESTING=OFF -Donnxruntime_BUILD_UNIT_TESTS=OFF -Donnxruntime_USE_PREINSTALLED_EIGEN=ON -Deigen_SOURCE_PATH=/usr/include/eigen3 $SRC_DIR/onnxruntime/cmake && make -j$(nproc) && sudo make install ``` 2. 
CUDA ``` -cmake -Donnxruntime_BUILD_SHARED_LIB=ON -DBUILD_TESTING=OFF -Donnxruntime_BUILD_UNIT_TESTS=OFF -Donnxruntime_USE_CUDA=ON -Donnxruntime_CUDA_HOME=/usr/local/cuda -Donnxruntime_CUDNN_HOME=/usr/local/cuda -DCMAKE_CUDA_ARCHITECTURES=native -DCMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc $SRC_DIR/onnxruntime/cmake && make -j$(nproc) && sudo make install +cmake -Donnxruntime_BUILD_SHARED_LIB=ON -DBUILD_TESTING=OFF -Donnxruntime_BUILD_UNIT_TESTS=OFF -Donnxruntime_USE_CUDA=ON -Donnxruntime_CUDA_HOME=/usr/local/cuda -Donnxruntime_CUDNN_HOME=/usr/local/cuda -DCMAKE_CUDA_ARCHITECTURES=native -DCMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc -Donnxruntime_USE_PREINSTALLED_EIGEN=ON -Deigen_SOURCE_PATH=/usr/include/eigen3 $SRC_DIR/onnxruntime/cmake && make -j$(nproc) && sudo make install ``` 3. Intel oneDNN
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/onnx/gstonnxinference.c
Added
@@ -0,0 +1,1813 @@ +/* + * GStreamer gstreamer-onnxinference + * Copyright (C) 2023-2025 Collabora Ltd. + * + * gstonnxinference.c + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +/** + * SECTION:element-onnxinference + * @short_description: Run ONNX inference model on video buffers + * + * This element can apply an ONNX model to video buffers. It attaches + * the tensor output to the buffer as a @ref GstTensorMeta. + * + * To install ONNX on your system, follow the instructions in the + * README.md shipped with this plugin. + * + * ## Example launch command: + * + * Test image file, model file (SSD) and label file can be found here : + * https://gitlab.collabora.com/gstreamer/onnx-models + * + * GST_DEBUG=ssdobjectdetector:5 \ + * gst-launch-1.0 filesrc location=onnx-models/images/bus.jpg ! \ + * jpegdec ! videoconvert ! onnxinference execution-provider=cpu model-file=onnx-models/models/ssd_mobilenet_v1_coco.onnx ! \ + * ssdobjectdetector label-file=onnx-models/labels/COCO_classes.txt ! videoconvert ! imagefreeze ! autovideosink + * + * + * Note: in order for downstream tensor decoders to correctly parse the tensor + * data in the GstTensorMeta, meta data must be attached to the ONNX model + * assigning a unique string id to each output layer.
These unique string ids + * and corresponding GQuark ids are currently stored in the tensor decoder's + * header file, in this case gstssdobjectdetector.h. If the meta data is absent, + * the pipeline will fail. + * + * As a convenience, there is a python script + * currently stored at + * https://gitlab.collabora.com/gstreamer/onnx-models/-/blob/master/scripts/modify_onnx_metadata.py + * to enable users to easily add and remove meta data from json files. It can also dump + * the names of all output layers, which can then be used to craft the json meta data file. + * + * Since: 1.20 + */ +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gstonnxinference.h" + +#include <gst/gst.h> +#include <gst/analytics/analytics.h> + +#include <onnxruntime_c_api.h> + +#ifdef HAVE_VSI_NPU +#include <core/providers/vsinpu/vsinpu_provider_factory.h> +#endif + +typedef enum +{ + GST_ONNX_OPTIMIZATION_LEVEL_DISABLE_ALL, + GST_ONNX_OPTIMIZATION_LEVEL_ENABLE_BASIC, + GST_ONNX_OPTIMIZATION_LEVEL_ENABLE_EXTENDED, + GST_ONNX_OPTIMIZATION_LEVEL_ENABLE_ALL, +} GstOnnxOptimizationLevel; + +typedef enum +{ + GST_ONNX_EXECUTION_PROVIDER_CPU, + GST_ONNX_EXECUTION_PROVIDER_CUDA, + GST_ONNX_EXECUTION_PROVIDER_VSI, +} GstOnnxExecutionProvider; + +struct _GstOnnxInference +{ + GstBaseTransform basetransform; + gchar *model_file; + GstOnnxOptimizationLevel optimization_level; + GstOnnxExecutionProvider execution_provider; + GstVideoInfo video_info; + GstCaps *input_tensors_caps; + GstCaps *output_tensors_caps; + + OrtEnv *env; + OrtSession *session; + OrtMemoryInfo *memory_info; + OrtAllocator *allocator; + int32_t width; + int32_t height; + int32_t channels; + gboolean planar; + gint height_dim; + gint width_dim; + gint channels_dim; + gint batch_dim; + uint8_t *dest; + size_t output_count; + gchar **output_names; + GQuark *output_ids; + GstTensorDataType input_data_type; + bool fixedInputImageSize; + double *scales; + double *offsets; + gsize num_channels; +}; + +static const OrtApi 
*api = NULL; + + +GST_DEBUG_CATEGORY (onnx_inference_debug); +GST_DEBUG_CATEGORY (onnx_runtime_debug); + +#define GST_CAT_DEFAULT onnx_inference_debug +GST_ELEMENT_REGISTER_DEFINE (onnx_inference, "onnxinference", + GST_RANK_PRIMARY, GST_TYPE_ONNX_INFERENCE); + +/* GstOnnxInference properties */ +enum +{ + PROP_0, + PROP_MODEL_FILE, + PROP_OPTIMIZATION_LEVEL, + PROP_EXECUTION_PROVIDER, + PROP_INPUT_OFFSET, + PROP_INPUT_SCALE +}; + +#define GST_ONNX_INFERENCE_DEFAULT_EXECUTION_PROVIDER GST_ONNX_EXECUTION_PROVIDER_CPU +#define GST_ONNX_INFERENCE_DEFAULT_OPTIMIZATION_LEVEL GST_ONNX_OPTIMIZATION_LEVEL_ENABLE_EXTENDED + +static GstStaticPadTemplate gst_onnx_inference_src_template = +GST_STATIC_PAD_TEMPLATE ("src", + GST_PAD_SRC, + GST_PAD_ALWAYS, + GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE ("{ RGB,RGBA,BGR,BGRA }")) + ); + +static GstStaticPadTemplate gst_onnx_inference_sink_template = +GST_STATIC_PAD_TEMPLATE ("sink", + GST_PAD_SINK, + GST_PAD_ALWAYS, + GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE ("{ RGB,RGBA,BGR,BGRA }")) + ); + + +static void gst_onnx_inference_set_property (GObject * object, + guint prop_id, const GValue * value, GParamSpec * pspec); +static void gst_onnx_inference_get_property (GObject * object, + guint prop_id, GValue * value, GParamSpec * pspec); +static void gst_onnx_inference_finalize (GObject * object); +static GstFlowReturn gst_onnx_inference_transform_ip (GstBaseTransform * + trans, GstBuffer * buf); +static GstCaps *gst_onnx_inference_transform_caps (GstBaseTransform * + trans, GstPadDirection direction, GstCaps * caps, GstCaps * filter_caps); +static gboolean +gst_onnx_inference_set_caps (GstBaseTransform * trans, GstCaps * incaps, + GstCaps * outcaps); +static gboolean gst_onnx_inference_start (GstBaseTransform * trans); +static gboolean gst_onnx_inference_stop (GstBaseTransform * trans); + +G_DEFINE_TYPE (GstOnnxInference, gst_onnx_inference, GST_TYPE_BASE_TRANSFORM); + +GType gst_onnx_optimization_level_get_type (void); +#define 
GST_TYPE_ONNX_OPTIMIZATION_LEVEL (gst_onnx_optimization_level_get_type ()) + +GType gst_onnx_execution_provider_get_type (void); +#define GST_TYPE_ONNX_EXECUTION_PROVIDER (gst_onnx_execution_provider_get_type ()) + +GType +gst_onnx_optimization_level_get_type (void) +{ + static GType onnx_optimization_type = 0; + + if (g_once_init_enter (&onnx_optimization_type)) { + static GEnumValue optimization_level_types[] = { + {GST_ONNX_OPTIMIZATION_LEVEL_DISABLE_ALL, "Disable all optimization", + "disable-all"}, + {GST_ONNX_OPTIMIZATION_LEVEL_ENABLE_BASIC, + "Enable basic optimizations (redundant node removals)", + "enable-basic"}, + {GST_ONNX_OPTIMIZATION_LEVEL_ENABLE_EXTENDED, + "Enable extended optimizations (redundant node removals + node fusions)", + "enable-extended"}, + {GST_ONNX_OPTIMIZATION_LEVEL_ENABLE_ALL, + "Enable all possible optimizations", "enable-all"}, + {0, NULL, NULL}, + }; + + GType temp = g_enum_register_static ("GstOnnxOptimizationLevel", + optimization_level_types); + + g_once_init_leave (&onnx_optimization_type, temp); + } + + return onnx_optimization_type; +} + +GType +gst_onnx_execution_provider_get_type (void) +{ + static GType onnx_execution_type = 0; + + if (g_once_init_enter (&onnx_execution_type)) { + static GEnumValue execution_provider_types[] = { + {GST_ONNX_EXECUTION_PROVIDER_CPU, "CPU execution provider", + "cpu"}, +#if HAVE_CUDA + {GST_ONNX_EXECUTION_PROVIDER_CUDA, + "CUDA execution provider", + "cuda"}, +#else + {GST_ONNX_EXECUTION_PROVIDER_CUDA, + "CUDA execution provider (compiled out, will use CPU)", + "cuda"}, +#endif +#ifdef HAVE_VSI_NPU + {GST_ONNX_EXECUTION_PROVIDER_VSI, + "VeriSilicon NPU execution provider", + "vsi"}, +#else + {GST_ONNX_EXECUTION_PROVIDER_VSI, + "VeriSilicon NPU execution provider (compiled out, will use CPU)", + "vsi"}, +#endif + {0, NULL, NULL}, + }; + + GType temp = g_enum_register_static ("GstOnnxExecutionProvider", + execution_provider_types); + + g_once_init_leave (&onnx_execution_type, temp); + } + 
return onnx_execution_type; +} + +static void +gst_onnx_inference_class_init (GstOnnxInferenceClass * klass) +{ + GObjectClass *gobject_class = (GObjectClass *) klass; + GstElementClass *element_class = (GstElementClass *) klass; + GstBaseTransformClass *basetransform_class = (GstBaseTransformClass *) klass; + + GST_DEBUG_CATEGORY_INIT (onnx_inference_debug, "onnxinference", + 0, "ONNX Runtime Inference"); + GST_DEBUG_CATEGORY_INIT (onnx_runtime_debug, "onnxruntime", + 0, "ONNX Runtime"); + gobject_class->set_property = gst_onnx_inference_set_property; + gobject_class->get_property = gst_onnx_inference_get_property; + gobject_class->finalize = gst_onnx_inference_finalize; + + /** + * GstOnnxInference:model-file + * + * ONNX model file + * + * Since: 1.24 + */ + g_object_class_install_property (G_OBJECT_CLASS (klass), PROP_MODEL_FILE, + g_param_spec_string ("model-file", + "ONNX model file", "ONNX model file", NULL, (GParamFlags) + (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + /** + * GstOnnxInference:optimization-level + * + * ONNX optimization level + * + * Since: 1.24 + */ + g_object_class_install_property (G_OBJECT_CLASS (klass), + PROP_OPTIMIZATION_LEVEL, + g_param_spec_enum ("optimization-level", + "Optimization level", + "ONNX optimization level", + GST_TYPE_ONNX_OPTIMIZATION_LEVEL, + GST_ONNX_OPTIMIZATION_LEVEL_ENABLE_EXTENDED, (GParamFlags) + (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + /** + * GstOnnxInference:execution-provider + * + * ONNX execution provider + * + * Since: 1.24 + */ + g_object_class_install_property (G_OBJECT_CLASS (klass), + PROP_EXECUTION_PROVIDER, + g_param_spec_enum ("execution-provider", + "Execution provider", + "ONNX execution provider", + GST_TYPE_ONNX_EXECUTION_PROVIDER, + GST_ONNX_EXECUTION_PROVIDER_CPU, (GParamFlags) + (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + gst_element_class_set_static_metadata (element_class, "onnxinference", + "Filter/Video", + "Apply neural network to video frames and create tensor 
output", + "Aaron Boxer <aaron.boxer@collabora.com>"); + gst_element_class_add_pad_template (element_class, + gst_static_pad_template_get (&gst_onnx_inference_sink_template)); + gst_element_class_add_pad_template (element_class, + gst_static_pad_template_get (&gst_onnx_inference_src_template)); + basetransform_class->transform_ip = + GST_DEBUG_FUNCPTR (gst_onnx_inference_transform_ip); + basetransform_class->transform_caps = + GST_DEBUG_FUNCPTR (gst_onnx_inference_transform_caps); + basetransform_class->set_caps = + GST_DEBUG_FUNCPTR (gst_onnx_inference_set_caps); + basetransform_class->start = GST_DEBUG_FUNCPTR (gst_onnx_inference_start); + basetransform_class->stop = GST_DEBUG_FUNCPTR (gst_onnx_inference_stop); + + gst_type_mark_as_plugin_api (GST_TYPE_ONNX_OPTIMIZATION_LEVEL, + (GstPluginAPIFlags) 0); + gst_type_mark_as_plugin_api (GST_TYPE_ONNX_EXECUTION_PROVIDER, + (GstPluginAPIFlags) 0); + + api = OrtGetApiBase ()->GetApi (ORT_API_VERSION); +} + +static void +gst_onnx_inference_init (GstOnnxInference * self) +{ + /* TODO: at the moment onnx inference only supports video output. 
We + * should revisit this aspect once we generalize it */ + self->input_tensors_caps = gst_caps_new_empty_simple ("video/x-raw"); + self->output_tensors_caps = gst_caps_new_empty_simple ("video/x-raw"); + + self->execution_provider = GST_ONNX_EXECUTION_PROVIDER_CPU; + + self->scales = NULL; + self->offsets = NULL; + self->num_channels = 0; + + self->height_dim = -1; + self->width_dim = -1; + self->channels_dim = -1; + self->batch_dim = -1; + + /* Passthrough would propagate tensors caps upstream */ + gst_base_transform_set_prefer_passthrough (GST_BASE_TRANSFORM (self), FALSE); +} + +static void +gst_onnx_inference_finalize (GObject * object) +{ + GstOnnxInference *self = GST_ONNX_INFERENCE (object); + + g_free (self->model_file); + g_free (self->scales); + g_free (self->offsets); + gst_caps_unref (self->input_tensors_caps); + gst_caps_unref (self->output_tensors_caps); + G_OBJECT_CLASS (gst_onnx_inference_parent_class)->finalize (object); +} + +static void +gst_onnx_inference_set_property (GObject * object, guint prop_id, + const GValue * value, GParamSpec * pspec) +{ + GstOnnxInference *self = GST_ONNX_INFERENCE (object); + const gchar *filename; + + switch (prop_id) { + case PROP_MODEL_FILE: + filename = g_value_get_string (value); + if (filename + && g_file_test (filename, + (GFileTest) (G_FILE_TEST_EXISTS | G_FILE_TEST_IS_REGULAR))) { + if (self->model_file) + g_free (self->model_file); + self->model_file = g_strdup (filename); + } else { + GST_WARNING_OBJECT (self, "Model file '%s' not found!", filename); + } + break; + case PROP_OPTIMIZATION_LEVEL: + self->optimization_level = + (GstOnnxOptimizationLevel) g_value_get_enum (value); + break; + case PROP_EXECUTION_PROVIDER: + self->execution_provider = + (GstOnnxExecutionProvider) g_value_get_enum (value); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static void +gst_onnx_inference_get_property (GObject * object, guint prop_id, + GValue * value, GParamSpec 
* pspec) +{ + GstOnnxInference *self = GST_ONNX_INFERENCE (object); + + switch (prop_id) { + case PROP_MODEL_FILE: + g_value_set_string (value, self->model_file); + break; + case PROP_OPTIMIZATION_LEVEL: + g_value_set_enum (value, self->optimization_level); + break; + case PROP_EXECUTION_PROVIDER: + g_value_set_enum (value, self->execution_provider); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static gsize +get_tensor_type_size (GstTensorDataType data_type) +{ + switch (data_type) { + case GST_TENSOR_DATA_TYPE_UINT8: + return sizeof (uint8_t); + case GST_TENSOR_DATA_TYPE_UINT16: + return sizeof (uint16_t); + case GST_TENSOR_DATA_TYPE_UINT32: + return sizeof (uint32_t); + case GST_TENSOR_DATA_TYPE_INT32: + return sizeof (int32_t); + case GST_TENSOR_DATA_TYPE_FLOAT16: + return 2; + case GST_TENSOR_DATA_TYPE_FLOAT32: + return sizeof (float); + default: + g_error ("Data type %d not handled", data_type); + return 0; + }; +} + +static GstCaps * +gst_onnx_inference_transform_caps (GstBaseTransform * + trans, GstPadDirection direction, GstCaps * caps, GstCaps * filter_caps) +{ + GstOnnxInference *self = GST_ONNX_INFERENCE (trans); + GstCaps *other_caps; + GstCaps *restrictions; + bool has_session; + + GST_OBJECT_LOCK (self); + has_session = self->session != NULL; + GST_OBJECT_UNLOCK (self); + + if (!has_session) { + other_caps = gst_caps_ref (caps); + goto done; + } + + GST_LOG_OBJECT (self, "transforming caps %" GST_PTR_FORMAT, caps); + + GST_DEBUG_OBJECT (self, "Applying model input tensors caps restrictions: %" + GST_PTR_FORMAT, self->input_tensors_caps); + + restrictions = gst_caps_ref (self->input_tensors_caps); + + if (direction == GST_PAD_SINK) { + /* Create tensors_caps from output_tensor_caps and intersect with + * restrictions */ + GstCaps *tensors_caps = gst_caps_copy (self->output_tensors_caps); + GstCaps *intersect = gst_caps_intersect_full (restrictions, tensors_caps, + GST_CAPS_INTERSECT_FIRST); + 
gst_caps_replace (&restrictions, intersect); + gst_caps_unref (tensors_caps); + gst_caps_unref (intersect); + other_caps = gst_caps_intersect_full (caps, restrictions, + GST_CAPS_INTERSECT_FIRST); + + } else if (direction == GST_PAD_SRC) { + /* Remove tensors from caps to prevent upstream propagation. */ + GstCaps *tmp_caps = gst_caps_copy (caps); + + if (!gst_caps_is_empty (tmp_caps)) { + GstStructure *tstruct = gst_caps_get_structure (tmp_caps, 0); + gst_structure_remove_field (tstruct, "tensors"); + } + + other_caps = gst_caps_intersect_full (tmp_caps, restrictions, + GST_CAPS_INTERSECT_FIRST); + gst_caps_unref (tmp_caps); + } else { + other_caps = gst_caps_intersect_full (caps, restrictions, + GST_CAPS_INTERSECT_FIRST); + } + + gst_caps_unref (restrictions); + +done: + if (filter_caps) { + GstCaps *tmp = gst_caps_intersect_full (other_caps, filter_caps, + GST_CAPS_INTERSECT_FIRST); + gst_caps_replace (&other_caps, tmp); + gst_caps_unref (tmp); + } + + return other_caps; +} + +static GstTensorDataType +onnx_data_type_to_gst (ONNXTensorElementDataType dt) +{ + const gint ONNX_TO_GST_TENSOR_DATATYPE[] = { + -1, /* ONNX_TENSOR_ELEMENT_DATA_TYPE_UNDEFINED */ + GST_TENSOR_DATA_TYPE_FLOAT32, /* ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT */ + GST_TENSOR_DATA_TYPE_UINT8, /* ONNX_TENSOR_ELEMENT_DATA_TYPE_UINT8 */ + GST_TENSOR_DATA_TYPE_INT8, /* ONNX_TENSOR_ELEMENT_DATA_TYPE_INT8 */ + GST_TENSOR_DATA_TYPE_UINT16, /* ONNX_TENSOR_ELEMENT_DATA_TYPE_UINT16 */ + GST_TENSOR_DATA_TYPE_INT16, /* ONNX_TENSOR_ELEMENT_DATA_TYPE_INT16 */ + GST_TENSOR_DATA_TYPE_INT32, /* ONNX_TENSOR_ELEMENT_DATA_TYPE_INT32 */ + GST_TENSOR_DATA_TYPE_INT64, /* ONNX_TENSOR_ELEMENT_DATA_TYPE_INT64 */ + GST_TENSOR_DATA_TYPE_STRING, /* ONNX_TENSOR_ELEMENT_DATA_TYPE_STRING */ + GST_TENSOR_DATA_TYPE_BOOL, /* ONNX_TENSOR_ELEMENT_DATA_TYPE_BOOL */ + GST_TENSOR_DATA_TYPE_FLOAT16, /* ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT16 */ + GST_TENSOR_DATA_TYPE_FLOAT64, /* ONNX_TENSOR_ELEMENT_DATA_TYPE_DOUBLE */ + 
GST_TENSOR_DATA_TYPE_UINT32, /* ONNX_TENSOR_ELEMENT_DATA_TYPE_UINT32 */ + GST_TENSOR_DATA_TYPE_UINT64, /* ONNX_TENSOR_ELEMENT_DATA_TYPE_UINT64 */ + GST_TENSOR_DATA_TYPE_COMPLEX64, /* ONNX_TENSOR_ELEMENT_DATA_TYPE_COMPLEX64 */ + GST_TENSOR_DATA_TYPE_COMPLEX128, /* ONNX_TENSOR_ELEMENT_DATA_TYPE_COMPLEX128 */ + GST_TENSOR_DATA_TYPE_BFLOAT16, /* ONNX_TENSOR_ELEMENT_DATA_TYPE_BFLOAT16 */ + GST_TENSOR_DATA_TYPE_FLOAT8E4M3FN, /* ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT8E4M3FN */ + GST_TENSOR_DATA_TYPE_FLOAT8E4M3FNUZ, /* ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT8E4M3FNUZ */ + GST_TENSOR_DATA_TYPE_FLOAT8E5M2, /* ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT8E5M2 */ + GST_TENSOR_DATA_TYPE_FLOAT8E5M2FNUZ, /* ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT8E5M2FNUZ */ + GST_TENSOR_DATA_TYPE_UINT4, /* ONNX_TENSOR_ELEMENT_DATA_TYPE_UINT4 */ + GST_TENSOR_DATA_TYPE_INT4, /* ONNX_TENSOR_ELEMENT_DATA_TYPE_INT4 */ + }; + + if (dt > ONNX_TENSOR_ELEMENT_DATA_TYPE_UNDEFINED && + dt <= ONNX_TENSOR_ELEMENT_DATA_TYPE_INT4) { + return ONNX_TO_GST_TENSOR_DATATYPE[dt]; + } + + g_error ("Unexpected datatype: %d", dt); +} + +static gboolean +gst_onnx_inference_set_tensordec_datatype (GstOnnxInference * self, + ONNXTensorElementDataType dt, GstStructure * tensor_desc) +{ + GValue val = G_VALUE_INIT; + GstTensorDataType gst_dt; + + g_value_init (&val, G_TYPE_STRING); + + if (dt > ONNX_TENSOR_ELEMENT_DATA_TYPE_UNDEFINED && + dt <= ONNX_TENSOR_ELEMENT_DATA_TYPE_INT4) { + gst_dt = onnx_data_type_to_gst (dt); + g_value_set_string (&val, gst_tensor_data_type_get_name (gst_dt)); + } else { + GST_ERROR_OBJECT (self, "Unexpected datatype: %d", dt); + g_value_unset (&val); + return FALSE; + } + + gst_structure_take_value (tensor_desc, "type", &val); + g_value_unset (&val); + return TRUE; +} + +static void +gst_onnx_log_function (void *param, OrtLoggingLevel severity, + const char *category, const char *logid, const char *code_location, + const char *message) +{ + GObject *obj = param; + GstDebugLevel level = GST_LEVEL_ERROR; + + switch 
(severity) { + case ORT_LOGGING_LEVEL_VERBOSE: + level = GST_LEVEL_LOG; + break; + case ORT_LOGGING_LEVEL_INFO: + level = GST_LEVEL_INFO; + break; + case ORT_LOGGING_LEVEL_WARNING: + level = GST_LEVEL_WARNING; + break; + case ORT_LOGGING_LEVEL_ERROR: + case ORT_LOGGING_LEVEL_FATAL: + level = GST_LEVEL_ERROR; + break; + } + + gst_debug_log (onnx_runtime_debug, level, code_location, + "gst_onnx_log_function", 0, obj, "%s", message); +} + +/* FIXME: This is copied from Gsttfliteinference and we should create something + * more generic + */ + +static gboolean +_guess_tensor_data_type (GstOnnxInference * self, gsize dims_count, + gsize * dims, const gchar ** gst_format) +{ + self->height_dim = -1; + self->width_dim = -1; + self->channels_dim = -1; + self->batch_dim = -1; + + if (dims_count < 2 || dims_count > 4) { + GST_ERROR_OBJECT (self, + "Don't know how to interpret tensors with %zu dimensions", dims_count); + return FALSE; + } + + switch (dims_count) { + case 2: + *gst_format = "GRAY8"; + self->height_dim = 0; + self->width_dim = 1; + break; + case 3: + if (dims[0] == 1 || dims[0] == 3) { + self->channels_dim = 0; + if (dims[0] == 1) { + *gst_format = "GRAY8"; + } else { + *gst_format = "RGBP"; + } + self->height_dim = 1; + self->width_dim = 2; + } else if (dims[2] == 1 || dims[2] == 3) { + self->channels_dim = 2; + if (dims[2] == 1) + *gst_format = "GRAY8"; + else + *gst_format = "RGB"; + self->height_dim = 0; + self->width_dim = 1; + } else { + GST_ERROR_OBJECT (self, "Don't know how to interpret dims"); + return FALSE; + } + break; + case 4: + /* Assuming dims[0] is a batch */ + self->batch_dim = 0; + if (dims[1] == 1 || dims[1] == 3) { + self->channels_dim = 1; + self->height_dim = 2; + self->width_dim = 3; + } else if (dims[3] == 1 || dims[3] == 3) { + self->height_dim = 1; + self->width_dim = 2; + self->channels_dim = 3; + } else { + GST_ERROR_OBJECT (self, "Don't know how to interpret dims"); + return FALSE; + } + + if (dims[self->channels_dim] == 1) { + *gst_format = "GRAY8"; + } 
else if (dims[self->channels_dim] == 3) { + if (self->planar) + *gst_format = "RGBP"; + else + *gst_format = "RGB"; + } else { + g_assert_not_reached (); + } + break; + } + + return TRUE; +} + +static gchar * +build_dims_str (gsize dims_count, gsize * dims) +{ + GString *dims_gstr = g_string_new (""); + gsize j; + + if (dims_count == 0) + goto done; + + + if (dims[0] == G_MAXSIZE) + g_string_append (dims_gstr, "-1"); + else + g_string_append_printf (dims_gstr, "%zu", dims[0]); + + for (j = 1; j < dims_count; j++) + if (dims[j] == G_MAXSIZE) + g_string_append (dims_gstr, ",-1"); + else + g_string_append_printf (dims_gstr, ",%zu", dims[j]); + +done: + return g_string_free (dims_gstr, FALSE); +} + +static gboolean +gst_onnx_inference_start (GstBaseTransform * trans) +{ + GstOnnxInference *self = GST_ONNX_INFERENCE (trans); + gboolean ret = FALSE; + OrtStatus *status = NULL; + OrtSessionOptions *session_options = NULL; + OrtTypeInfo *input_type_info = NULL; + const OrtTensorTypeAndShapeInfo *input_tensor_info = NULL; + GraphOptimizationLevel onnx_optim; + size_t num_input_dims; + int64_t *input_dims; + gsize *gst_input_dims; + ONNXTensorElementDataType element_type; + size_t i; + const gchar *gst_format; + GstAnalyticsModelInfo *modelinfo = NULL; + const gchar *onnx_input_tensor_name = NULL; + gchar *tensor_name = NULL; + + + GST_OBJECT_LOCK (self); + if (self->session) { + ret = TRUE; + goto done; + } + + if (self->model_file == NULL) { + GST_ELEMENT_ERROR (self, STREAM, FAILED, (NULL), + ("model-file property not set")); + goto done; + } + + modelinfo = gst_analytics_modelinfo_load (self->model_file); + if (!modelinfo) { + GST_ERROR_OBJECT (self, "Failed to load modelinfo for %s. 
" + "This could be due to: file not found, unsupported version, " + "or invalid file format.", self->model_file); + goto error; + } + + if (self->session) { + ret = TRUE; + goto done; + } + // Create environment + OrtLoggingLevel ort_logging; + + switch (gst_debug_category_get_threshold (GST_CAT_DEFAULT)) { + case GST_LEVEL_NONE: + case GST_LEVEL_ERROR: + ort_logging = ORT_LOGGING_LEVEL_ERROR; + break; + case GST_LEVEL_WARNING: + case GST_LEVEL_FIXME: + ort_logging = ORT_LOGGING_LEVEL_WARNING; + break; + case GST_LEVEL_INFO: + ort_logging = ORT_LOGGING_LEVEL_INFO; + break; + case GST_LEVEL_DEBUG: + case GST_LEVEL_LOG: + case GST_LEVEL_TRACE: + case GST_LEVEL_MEMDUMP: + default: + ort_logging = ORT_LOGGING_LEVEL_VERBOSE; + break; + } + + status = api->CreateEnvWithCustomLogger (gst_onnx_log_function, self, + ort_logging, "GstOnnx", &self->env); + if (status) { + GST_ERROR_OBJECT (self, "Failed to create environment: %s", + api->GetErrorMessage (status)); + goto error; + } + // Create session options + status = api->CreateSessionOptions (&session_options); + if (status) { + GST_ERROR_OBJECT (self, "Failed to create session options: %s", + api->GetErrorMessage (status)); + goto error; + } + // Set graph optimization level + switch (self->optimization_level) { + case GST_ONNX_OPTIMIZATION_LEVEL_DISABLE_ALL: + onnx_optim = ORT_DISABLE_ALL; + break; + case GST_ONNX_OPTIMIZATION_LEVEL_ENABLE_BASIC: + onnx_optim = ORT_ENABLE_BASIC; + break; + case GST_ONNX_OPTIMIZATION_LEVEL_ENABLE_EXTENDED: + onnx_optim = ORT_ENABLE_EXTENDED; + break; + case GST_ONNX_OPTIMIZATION_LEVEL_ENABLE_ALL: + onnx_optim = ORT_ENABLE_ALL; + break; + default: + onnx_optim = ORT_ENABLE_EXTENDED; + break; + } + + status = api->SetSessionGraphOptimizationLevel (session_options, onnx_optim); + if (status) { + GST_ELEMENT_ERROR (self, STREAM, FAILED, (NULL), + ("Failed to set optimization level: %s", + api->GetErrorMessage (status))); + goto error; + } + // Set execution provider + switch 
(self->execution_provider) { + case GST_ONNX_EXECUTION_PROVIDER_CUDA: + { + OrtCUDAProviderOptionsV2 *cuda_options = NULL; + status = api->CreateCUDAProviderOptions (&cuda_options); + if (status) { + GST_ERROR_OBJECT (self, + "Failed to create CUDA provider %s", api->GetErrorMessage (status)); + goto error; + } + + status = + api->SessionOptionsAppendExecutionProvider_CUDA_V2 (session_options, + cuda_options); + api->ReleaseCUDAProviderOptions (cuda_options); + if (status) { + GST_ERROR_OBJECT (self, "Failed to append CUDA provider: %s", + api->GetErrorMessage (status)); + goto error; + } + break; + } + case GST_ONNX_EXECUTION_PROVIDER_VSI: +#ifdef HAVE_VSI_NPU + status = + OrtSessionOptionsAppendExecutionProvider_VSINPU (session_options); + if (status) { + GST_ERROR_OBJECT (self, "Failed to set VSINPU AI execution provider:" + " %s", api->GetErrorMessage (status)); + goto error; + } + api->DisableCpuMemArena (session_options); +#else + GST_ERROR_OBJECT (self, "Compiled without VSI support"); + goto error; +#endif + break; + default: + break; + } + + // Create session + status = api->CreateSession (self->env, self->model_file, session_options, + &self->session); + if (status) { + GST_ERROR_OBJECT (self, "Failed to create session: %s", + api->GetErrorMessage (status)); + self->session = NULL; + goto error; + } + + api->ReleaseSessionOptions (session_options); + session_options = NULL; + + // Get allocator + status = api->GetAllocatorWithDefaultOptions (&self->allocator); + if (status) { + GST_ERROR_OBJECT (self, "Failed to get allocator: %s", + api->GetErrorMessage (status)); + goto error; + } + // Get input info + status = api->SessionGetInputTypeInfo (self->session, 0, &input_type_info); + if (status) { + GST_ERROR_OBJECT (self, "Failed to get input type info: %s", + api->GetErrorMessage (status)); + goto error; + } + + status = api->CastTypeInfoToTensorInfo (input_type_info, &input_tensor_info); + if (status) { + GST_ERROR_OBJECT (self, "Failed to cast type info: 
%s", + api->GetErrorMessage (status)); + goto error; + } + + status = api->GetDimensionsCount (input_tensor_info, &num_input_dims); + if (status) { + GST_ERROR_OBJECT (self, "Failed to get dimensions count: %s", + api->GetErrorMessage (status)); + goto error; + } + + input_dims = (int64_t *) g_alloca (num_input_dims * sizeof (int64_t)); + gst_input_dims = (gsize *) g_alloca (num_input_dims * sizeof (gsize)); + status = api->GetDimensions (input_tensor_info, input_dims, num_input_dims); + if (status) { + GST_ERROR_OBJECT (self, "Failed to get dimensions: %s", + api->GetErrorMessage (status)); + goto error; + } + + for (i = 0; i < num_input_dims; i++) { + if (input_dims[i] < 0) + gst_input_dims[i] = G_MAXSIZE; + else + gst_input_dims[i] = input_dims[i]; + } + + gchar *dims = build_dims_str (num_input_dims, gst_input_dims); + GST_DEBUG_OBJECT (self, "Input dimensions: %s", dims); + g_free (dims); + + if (!_guess_tensor_data_type (self, num_input_dims, gst_input_dims, + &gst_format)) + goto error; + + self->height = gst_input_dims[self->height_dim]; + self->width = gst_input_dims[self->width_dim]; + if (self->channels_dim >= 0) { + self->channels = gst_input_dims[self->channels_dim]; + self->planar = (self->channels_dim != num_input_dims - 1); + } else { + self->channels = 1; + } + + + GST_DEBUG_OBJECT (self, "height dim[%d]=%d, width dim[%d]=%d," + " channels dim[%d]=%d, batch_dim[%d]=%zu planar=%d", + self->height_dim, self->height, self->width_dim, self->width, + self->channels_dim, self->channels, self->batch_dim, + self->batch_dim >= 0 ? 
gst_input_dims[self->batch_dim] : 0, self->planar); + + self->fixedInputImageSize = self->width > 0 && self->height > 0; + + status = api->SessionGetOutputCount (self->session, &self->output_count); + if (status) { + GST_ERROR_OBJECT (self, "Could not retrieve output count: %s", + api->GetErrorMessage (status)); + goto error; + } + GST_DEBUG_OBJECT (self, "Number of Output Nodes: %zu", self->output_count); + + if (self->output_count == 0) { + GST_ERROR_OBJECT (self, "Model with 0 output nodes is not " "supported."); + goto error; + } + + status = api->GetTensorElementType (input_tensor_info, &element_type); + if (status) { + GST_ERROR_OBJECT (self, "Failed to get element type: %s", + api->GetErrorMessage (status)); + goto error; + } + + api->ReleaseTypeInfo (input_type_info); + input_type_info = NULL; + + self->input_data_type = onnx_data_type_to_gst (element_type); + + /* Get input tensor name from ONNX file */ + status = api->SessionGetInputName (self->session, 0, self->allocator, + (char **) &onnx_input_tensor_name); + if (status) { + GST_ERROR_OBJECT (self, "Failed to get input name: %s", + api->GetErrorMessage (status)); + goto error; + } + + tensor_name = gst_analytics_modelinfo_find_tensor_name (modelinfo, + MODELINFO_DIRECTION_INPUT, 0, onnx_input_tensor_name, + self->input_data_type, num_input_dims, gst_input_dims); + + if (!tensor_name) { + gchar *dims_str = build_dims_str (num_input_dims, gst_input_dims); + GST_ERROR_OBJECT (self, + "Model info file doesn't contain info for input_tensor[0]:%s matching the" + " type %s and dims %s", onnx_input_tensor_name, + gst_tensor_data_type_get_name (self->input_data_type), dims_str); + g_free (dims_str); + if (onnx_input_tensor_name) + self->allocator->Free (self->allocator, (char *) onnx_input_tensor_name); + goto error; + } + + /* Validation: modelinfo successfully matched dims and datatype from ONNX */ + GST_INFO_OBJECT (self, + "Input tensor[0]:%s validated - modelinfo matches ONNX model (type: %s)", + 
onnx_input_tensor_name, + gst_tensor_data_type_get_name (self->input_data_type)); + + /* Get per-channel scales and offsets from modelinfo */ + /* For video input, we assume uint8 pixel values in range [0, 255] */ + { + gdouble *input_mins = NULL; + gdouble *input_maxs = NULL; + gsize num_target_ranges; + gsize j; + + /* First, get the number of target ranges from modelinfo to allocate input ranges */ + if (!gst_analytics_modelinfo_get_target_ranges (modelinfo, tensor_name, + &num_target_ranges, &input_mins, &input_maxs)) { + GST_ERROR_OBJECT (self, + "Failed to get target ranges from modelinfo for tensor %s", + tensor_name); + g_free (tensor_name); + if (onnx_input_tensor_name) + self->allocator->Free (self->allocator, + (char *) onnx_input_tensor_name); + goto error; + } + + /* Free the target ranges - we only needed them to know the count */ + g_free (input_mins); + g_free (input_maxs); + + /* Prepare input ranges - for video uint8 input, range is [0, 255] for all channels */ + input_mins = g_new (gdouble, num_target_ranges); + input_maxs = g_new (gdouble, num_target_ranges); + for (j = 0; j < num_target_ranges; j++) { + input_mins[j] = 0.0; + input_maxs[j] = 255.0; + } + + if (!gst_analytics_modelinfo_get_input_scales_offsets (modelinfo, + tensor_name, num_target_ranges, input_mins, input_maxs, + &self->num_channels, &self->scales, &self->offsets)) { + GST_ERROR_OBJECT (self, "Failed to get scales/offsets for tensor %s", + tensor_name); + g_free (input_mins); + g_free (input_maxs); + g_free (tensor_name); + if (onnx_input_tensor_name) + self->allocator->Free (self->allocator, + (char *) onnx_input_tensor_name); + goto error; + } + + g_free (input_mins); + g_free (input_maxs); + } + + GST_INFO_OBJECT (self, "Input tensor normalization: %zu channel(s)", + self->num_channels); + for (i = 0; i < self->num_channels; i++) { + GST_DEBUG_OBJECT (self, " Channel[%zu]: scale=%f, offset=%f", i, + self->scales[i], self->offsets[i]); + } + + g_free (tensor_name); + if 
(onnx_input_tensor_name) + self->allocator->Free (self->allocator, (char *) onnx_input_tensor_name); + + /* Setting input tensor caps */ + self->input_tensors_caps = gst_caps_make_writable (self->input_tensors_caps); + + /* Check if all channels are passthrough (scale=1.0, offset=0.0) */ + gboolean is_passthrough = TRUE; + if (self->scales && self->offsets) { + for (i = 0; i < self->num_channels; i++) { + if (self->scales[i] != 1.0 || self->offsets[i] != 0.0) { + is_passthrough = FALSE; + break; + } + } + } + + if (self->input_data_type == GST_TENSOR_DATA_TYPE_UINT8 && gst_format && + is_passthrough) + gst_caps_set_simple (self->input_tensors_caps, "format", G_TYPE_STRING, + gst_format, NULL); + if (self->fixedInputImageSize) + gst_caps_set_simple (self->input_tensors_caps, "width", G_TYPE_INT, + self->width, "height", G_TYPE_INT, self->height, NULL); + + // Get output names + self->output_names = g_new0 (char *, self->output_count); + for (i = 0; i < self->output_count; ++i) { + status = + api->SessionGetOutputName (self->session, i, self->allocator, + &self->output_names[i]); + if (status) { + GST_ERROR_OBJECT (self, "Failed to get output name %zu: %s", i, + api->GetErrorMessage (status)); + goto error; + } + GST_DEBUG_OBJECT (self, "Output name %zu:%s", i, self->output_names[i]); + } + + GValue v_tensors_set = G_VALUE_INIT; + GstStructure *tensors_s = NULL; + gchar *group_id = NULL; + + g_value_init (&v_tensors_set, GST_TYPE_UNIQUE_LIST); + + self->output_ids = g_new0 (GQuark, self->output_count); + + for (i = 0; i < self->output_count; i++) { + OrtTypeInfo *output_type_info = NULL; + const OrtTensorTypeAndShapeInfo *output_tensor_info = NULL; + size_t card; + ONNXTensorElementDataType type; + GstTensorDataType gst_data_type; + size_t j; + gchar *tensor_name = NULL; + gchar *tensor_id = NULL; + gsize *output_dims = NULL; + + + status = + api->SessionGetOutputTypeInfo (self->session, i, &output_type_info); + if (status) { + GST_ERROR_OBJECT (self, "Failed to get info for 
output tensor %zu: %s", + i, api->GetErrorMessage (status)); + goto error; + } + + status = + api->CastTypeInfoToTensorInfo (output_type_info, &output_tensor_info); + if (status) { + GST_ERROR_OBJECT (self, "Failed to get cast type for output tensor" + " %zu: %s", i, api->GetErrorMessage (status)); + api->ReleaseTypeInfo (output_type_info); + goto error; + } + + status = api->GetDimensionsCount (output_tensor_info, &card); + if (status) { + GST_ERROR_OBJECT (self, "Failed to get cardinality for output tensor" + " %zu: %s", i, api->GetErrorMessage (status)); + api->ReleaseTypeInfo (output_type_info); + goto error; + } + + status = api->GetTensorElementType (output_tensor_info, &type); + if (status) { + GST_ERROR_OBJECT (self, "Failed to get element type for output tensor" + " %zu: %s", i, api->GetErrorMessage (status)); + api->ReleaseTypeInfo (output_type_info); + goto error; + } + + gst_data_type = onnx_data_type_to_gst (type); + + /* Get dimensions from ONNX */ + int64_t *shape = (int64_t *) g_alloca (card * sizeof (int64_t)); + output_dims = (gsize *) g_malloc0 (card * sizeof (gsize)); + status = api->GetDimensions (output_tensor_info, shape, card); + if (status) { + GST_ERROR_OBJECT (self, "Failed to get output tensor (%s) dimensions", + self->output_names[i]); + api->ReleaseStatus (status); + status = NULL; + g_free (output_dims); + api->ReleaseTypeInfo (output_type_info); + goto error; + } + + for (j = 0; j < card; j++) { + output_dims[j] = shape[j] > 0 ? 
shape[j] : G_MAXSIZE; + } + + /* Look up tensor name in modelinfo */ + tensor_name = gst_analytics_modelinfo_find_tensor_name (modelinfo, + MODELINFO_DIRECTION_OUTPUT, i, self->output_names[i], + gst_data_type, card, output_dims); + + if (!tensor_name) { + gchar *dims_str = build_dims_str (card, output_dims); + GST_ERROR_OBJECT (self, + "Model info file doesn't contain info for output_tensor[%zu]:%s matching the" + " type %s and dims %s", i, self->output_names[i], + gst_tensor_data_type_get_name (gst_data_type), dims_str); + g_free (dims_str); + g_free (output_dims); + api->ReleaseTypeInfo (output_type_info); + goto error; + } + + /* Validation: modelinfo successfully matched dims and datatype from ONNX */ + GST_INFO_OBJECT (self, + "Output tensor[%zu]:%s validated - modelinfo matches ONNX model " + "(type: %s)", i, self->output_names[i], + gst_tensor_data_type_get_name (gst_data_type)); + + /* Get tensor ID from modelinfo */ + tensor_id = gst_analytics_modelinfo_get_id (modelinfo, tensor_name); + if (!tensor_id) { + GST_ERROR_OBJECT (self, "Model info doesn't have 'id' for tensor %s", + tensor_name); + g_free (tensor_name); + g_free (output_dims); + api->ReleaseTypeInfo (output_type_info); + goto error; + } + + GST_DEBUG_OBJECT (self, "Mapping output_tensor[%zu]:%s of type %s to id %s", + i, self->output_names[i], gst_tensor_data_type_get_name (gst_data_type), + tensor_id); + + self->output_ids[i] = g_quark_from_string (tensor_id); + + /* tensor description */ + GstStructure *tensor_desc = gst_structure_new_empty ("tensor/strided"); + + /* Setting dims */ + GValue val_dims = G_VALUE_INIT, val = G_VALUE_INIT; + GValue val_caps = G_VALUE_INIT; + gst_value_array_init (&val_dims, card); + g_value_init (&val, G_TYPE_INT); + g_value_init (&val_caps, GST_TYPE_CAPS); + + for (j = 0; j < card; j++) { + g_value_set_int (&val, output_dims[j] != G_MAXSIZE ? 
output_dims[j] : 0); + gst_value_array_append_value (&val_dims, &val); + } + + /* Get dims-order from modelinfo (defaults to row-major if not specified) */ + GstTensorDimOrder dims_order = + gst_analytics_modelinfo_get_dims_order (modelinfo, tensor_name); + const gchar *dims_order_str = + dims_order == + GST_TENSOR_DIM_ORDER_COL_MAJOR ? "col-major" : "row-major"; + gst_structure_set (tensor_desc, "dims-order", G_TYPE_STRING, dims_order_str, + "tensor-id", G_TYPE_STRING, g_quark_to_string (self->output_ids[i]), + NULL); + GST_INFO_OBJECT (self, "%s[dims-order]=%s", + g_quark_to_string (self->output_ids[i]), dims_order_str); + + gst_structure_take_value (tensor_desc, "dims", &val_dims); + g_value_unset (&val); + + /* Setting datatype */ + if (!gst_onnx_inference_set_tensordec_datatype (self, type, tensor_desc)) { + GST_ERROR_OBJECT (self, + "Failed to set datatype for output tensor (%s)", + self->output_names[i]); + + gst_structure_free (tensor_desc); + g_value_unset (&v_tensors_set); + api->ReleaseTypeInfo (output_type_info); + goto error; + } + + /* tensor caps */ + GstCaps *tensor_caps = gst_caps_new_full (tensor_desc, NULL); + + /* Append tensor caps to set */ + gst_value_set_caps (&val_caps, tensor_caps); + gst_caps_unref (tensor_caps); + gst_value_unique_list_append_and_take_value (&v_tensors_set, &val_caps); + + /* Get group-id from modelinfo on last tensor */ + if (i == (self->output_count - 1)) { + group_id = gst_analytics_modelinfo_get_group_id (modelinfo); + if (!group_id) { + GST_ERROR_OBJECT (self, "Model info doesn't have 'group-id'"); + g_free (tensor_name); + g_free (tensor_id); + g_free (output_dims); + api->ReleaseTypeInfo (output_type_info); + goto error; + } + } + + /* Cleanup */ + g_free (tensor_name); + g_free (tensor_id); + g_free (output_dims); + api->ReleaseTypeInfo (output_type_info); + } + + if (!tensors_s) + tensors_s = gst_structure_new_empty ("tensorgroups"); + GstStructure *output_caps_struct; + + gst_structure_set_value (tensors_s, 
+      group_id, &v_tensors_set);
+  output_caps_struct = gst_caps_get_structure (self->output_tensors_caps, 0);
+  gst_structure_set (output_caps_struct, "tensors", GST_TYPE_STRUCTURE,
+      tensors_s, NULL);
+  gst_structure_free (tensors_s);
+  g_value_unset (&v_tensors_set);
+
+  if (group_id)
+    g_free (group_id);
+
+  // Create memory info for CPU
+  status =
+      api->CreateCpuMemoryInfo (OrtArenaAllocator, OrtMemTypeDefault,
+      &self->memory_info);
+  if (status) {
+    GST_WARNING_OBJECT (self, "Failed to create memory info: %s",
+        api->GetErrorMessage (status));
+    goto error;
+  }
+
+  ret = TRUE;
+done:
+  if (modelinfo)
+    gst_analytics_modelinfo_free (modelinfo);
+  GST_OBJECT_UNLOCK (self);
+
+  return ret;
+
+error:
+  if (status)
+    api->ReleaseStatus (status);
+  if (input_type_info)
+    api->ReleaseTypeInfo (input_type_info);
+  if (session_options)
+    api->ReleaseSessionOptions (session_options);
+
+  if (modelinfo)
+    gst_analytics_modelinfo_free (modelinfo);
+  GST_OBJECT_UNLOCK (self);
+
+  gst_onnx_inference_stop (trans);
+  return ret;
+
+}
+
+static gboolean
+gst_onnx_inference_stop (GstBaseTransform * trans)
+{
+  GstOnnxInference *self = GST_ONNX_INFERENCE (trans);
+  size_t i;
+
+  GST_OBJECT_LOCK (self);
+  if (!self->session)
+    goto done;
+  // Clean up output names
+
+  if (self->output_names) {
+    for (i = 0; i < self->output_count; i++) {
+      if (self->output_names[i])
+        self->allocator->Free (self->allocator, self->output_names[i]);
+    }
+  }
+  g_free (self->output_names);
+  self->output_names = NULL;
+
+  g_free (self->output_ids);
+  self->output_ids = NULL;
+  self->output_count = 0;
+
+  if (self->memory_info)
+    api->ReleaseMemoryInfo (self->memory_info);
+  self->memory_info = NULL;
+
+  api->ReleaseSession (self->session);
+  self->session = NULL;
+
+  if (self->env)
+    api->ReleaseEnv (self->env);
+  self->env = NULL;
+
+done:
+  GST_OBJECT_UNLOCK (self);
+
+  return TRUE;
+}
+
+static gboolean
+gst_onnx_inference_set_caps (GstBaseTransform * trans, GstCaps * incaps,
+    GstCaps * outcaps)
+{
+  GstOnnxInference *self = GST_ONNX_INFERENCE (trans);
+
+  if (!gst_video_info_from_caps (&self->video_info, incaps)) {
+    GST_ERROR_OBJECT (self, "Failed to parse caps");
+    return FALSE;
+  }
+
+  if (self->fixedInputImageSize &&
+      (self->video_info.width != self->width ||
+          self->video_info.height != self->height)) {
+    GST_ERROR_OBJECT (self, "Dimensions from caps %ux%u doesn't match model"
+        " dimensions %dx%d", self->video_info.width, self->video_info.height,
+        self->width, self->height);
+    return FALSE;
+  }
+
+  if (self->dest == NULL || self->width * self->height !=
+      self->video_info.width * self->video_info.height) {
+    gsize element_size = get_tensor_type_size (self->input_data_type);
+    gsize alloc_size;
+
+    /* Use GLib's checked multiplication to prevent overflow */
+    if (!g_size_checked_mul (&alloc_size, self->video_info.width,
+            self->video_info.height) ||
+        !g_size_checked_mul (&alloc_size, alloc_size, self->channels) ||
+        !g_size_checked_mul (&alloc_size, alloc_size, element_size)) {
+      GST_ERROR_OBJECT (self,
+          "Integer overflow in buffer allocation: %dx%d pixels, %u channels, %zu bytes per element",
+          self->video_info.width, self->video_info.height, self->channels,
+          element_size);
+      return FALSE;
+    }
+
+    g_free (self->dest);
+    self->dest = g_malloc (alloc_size);
+  }
+  self->width = self->video_info.width;
+  self->height = self->video_info.height;
+
+  return TRUE;
+}
+
+#define _convert_image_scale_offset(Type) \
+G_STMT_START { \
+  size_t destIndex = 0; \
+  Type tmp; \
+  \
+  if (!planar) { \
+    for (int32_t j = 0; j < dstHeight; ++j) { \
+      for (int32_t i = 0; i < dstWidth; ++i) { \
+        for (int32_t k = 0; k < dstChannels; ++k) { \
+          tmp = *srcPtr[k]; \
+          dst[destIndex++] = (Type)(tmp * scales[k] + offsets[k]); \
+          srcPtr[k] += pixel_stride; \
+        } \
+      } \
+      /* correct for stride */ \
+      for (uint32_t k = 0; k < 3; ++k) \
+        srcPtr[k] += stride - pixel_stride * dstWidth; \
+    } \
+  } else { \
+    size_t frameSize = dstWidth * dstHeight; \
+    Type *destPtr[3] = { dst, dst +
+        frameSize, dst + 2 * frameSize }; \
+    for (int32_t j = 0; j < dstHeight; ++j) { \
+      for (int32_t i = 0; i < dstWidth; ++i) { \
+        for (int32_t k = 0; k < dstChannels; ++k) { \
+          tmp = *srcPtr[k]; \
+          destPtr[k][destIndex] = (Type)(tmp * scales[k] + offsets[k]); \
+          srcPtr[k] += pixel_stride; \
+        } \
+        destIndex++; \
+      } \
+      /* correct for stride */ \
+      for (uint32_t k = 0; k < 3; ++k) \
+        srcPtr[k] += stride - pixel_stride * dstWidth; \
+    } \
+  } \
+} \
+G_STMT_END;
+
+static void
+convert_image_scale_offset_u8 (guint8 * dst, gint dstWidth, gint dstHeight,
+    gint dstChannels, gboolean planar, guint8 ** srcPtr,
+    guint8 pixel_stride, guint32 stride, const gdouble * scales,
+    const gdouble * offsets)
+{
+  _convert_image_scale_offset (guint8);
+}
+
+static void
+convert_image_scale_offset_f32 (gfloat * dst, gint dstWidth, gint dstHeight,
+    gint dstChannels, gboolean planar, guint8 ** srcPtr,
+    guint8 pixel_stride, guint32 stride, const gdouble * scales,
+    const gdouble * offsets)
+{
+  _convert_image_scale_offset (gfloat);
+}
+
+static GstFlowReturn
+gst_onnx_inference_transform_ip (GstBaseTransform * trans, GstBuffer * buf)
+{
+  GstOnnxInference *self = GST_ONNX_INFERENCE (trans);
+  GstMapInfo info;
+  OrtStatus *status = NULL;
+  OrtTypeInfo *input_type_info = NULL;
+  OrtValue *input_tensor = NULL;
+  OrtValue **output_tensors = NULL;
+  const OrtTensorTypeAndShapeInfo *input_tensor_info;
+  size_t num_dims;
+  int64_t *input_dims;
+  uint8_t *srcPtr[3];
+  size_t inputTensorSize;
+  char *input_names[1];
+  GstTensorMeta *tmeta = NULL;
+  OrtTensorTypeAndShapeInfo *output_tensor_info = NULL;
+
+  if (!gst_buffer_map (buf, &info, GST_MAP_READ)) {
+    GST_ELEMENT_ERROR (trans, STREAM, FAILED, (NULL),
+        ("Could not map input buffer"));
+    return GST_FLOW_ERROR;
+  }
+
+  status =
+      api->SessionGetInputName (self->session, 0, self->allocator, input_names);
+  if (status) {
+    GST_ELEMENT_ERROR (self, STREAM, FAILED, (NULL),
+        ("Failed to get input name"));
+    goto error;
+  }
+
+  status =
+      api->SessionGetInputTypeInfo (self->session, 0, &input_type_info);
+  if (status) {
+    GST_ELEMENT_ERROR (self, STREAM, FAILED, (NULL),
+        ("Failed to get input type info: %s", api->GetErrorMessage (status)));
+    goto error;
+  }
+
+  status = api->CastTypeInfoToTensorInfo (input_type_info, &input_tensor_info);
+  if (status) {
+    GST_ELEMENT_ERROR (self, STREAM, FAILED, (NULL),
+        ("Failed to cast type info: %s", api->GetErrorMessage (status)));
+    goto error;
+  }
+
+  status = api->GetDimensionsCount (input_tensor_info, &num_dims);
+  if (status) {
+    GST_ELEMENT_ERROR (self, STREAM, FAILED, (NULL),
+        ("Failed to get dimensions count: %s", api->GetErrorMessage (status)));
+    goto error;
+  }
+
+  input_dims = (int64_t *) g_alloca (num_dims * sizeof (int64_t));
+  status = api->GetDimensions (input_tensor_info, input_dims, num_dims);
+  if (status) {
+    GST_ELEMENT_ERROR (self, STREAM, FAILED, (NULL),
+        ("Failed to get dimensions: %s", api->GetErrorMessage (status)));
+    goto error;
+  }
+
+  api->ReleaseTypeInfo (input_type_info);
+  input_type_info = NULL;
+
+  if (self->batch_dim >= 0)
+    input_dims[self->batch_dim] = 1;
+
+  if (input_dims[self->height_dim] >= 0) {
+    if (input_dims[self->height_dim] != self->height) {
+      GST_ELEMENT_ERROR (self, STREAM, FAILED, (NULL),
+          ("Buffer has height %d, but model expects %zu",
+              self->height, input_dims[self->height_dim]));
+      goto error;
+    }
+  } else {
+    input_dims[self->height_dim] = self->height;
+  }
+  if (input_dims[self->width_dim] >= 0) {
+    if (input_dims[self->width_dim] != self->width) {
+      GST_ELEMENT_ERROR (self, STREAM, FAILED, (NULL),
+          ("Buffer has width %d, but model expects %zu",
+              self->width, input_dims[self->width_dim]));
+      goto error;
+    }
+  } else {
+    input_dims[self->width_dim] = self->width;
+  }
+
+  GST_LOG_OBJECT (self, "Input dimensions: %" G_GINT64_FORMAT
+      ":%" G_GINT64_FORMAT ":%" G_GINT64_FORMAT ":%" G_GINT64_FORMAT,
+      input_dims[0], input_dims[1], input_dims[2], input_dims[3]);
+
+  // copy video frame
+  switch (self->video_info.finfo->format) {
+    case GST_VIDEO_FORMAT_RGBA:
+      srcPtr[0] = info.data;
+      srcPtr[1] = info.data + 1;
+      srcPtr[2] = info.data + 2;
+      break;
+    case GST_VIDEO_FORMAT_BGRA:
+      srcPtr[0] = info.data + 2;
+      srcPtr[1] = info.data + 1;
+      srcPtr[2] = info.data + 0;
+      break;
+    case GST_VIDEO_FORMAT_ARGB:
+      srcPtr[0] = info.data + 1;
+      srcPtr[1] = info.data + 2;
+      srcPtr[2] = info.data + 3;
+      break;
+    case GST_VIDEO_FORMAT_ABGR:
+      srcPtr[0] = info.data + 3;
+      srcPtr[1] = info.data + 2;
+      srcPtr[2] = info.data + 1;
+      break;
+    case GST_VIDEO_FORMAT_RGB:
+      srcPtr[0] = info.data;
+      srcPtr[1] = info.data + 1;
+      srcPtr[2] = info.data + 2;
+      break;
+    case GST_VIDEO_FORMAT_BGR:
+      srcPtr[0] = info.data + 2;
+      srcPtr[1] = info.data + 1;
+      srcPtr[2] = info.data + 0;
+      break;
+    default:
+      g_assert_not_reached ();
+      break;
+  }
+
+  inputTensorSize = self->width * self->height * self->channels *
+      get_tensor_type_size (self->input_data_type);
+
+  /* Check if all channels are passthrough (scale=1.0, offset=0.0) */
+  gboolean is_passthrough_transform = TRUE;
+  if (self->scales && self->offsets) {
+    for (gsize c = 0; c < self->num_channels; c++) {
+      if (self->scales[c] != 1.0 || self->offsets[c] != 0.0) {
+        is_passthrough_transform = FALSE;
+        break;
+      }
+    }
+  }
+
+  switch (self->input_data_type) {
+    case GST_TENSOR_DATA_TYPE_UINT8:{
+      uint8_t *src_data;
+
+      if (is_passthrough_transform) {
+        src_data = info.data;
+      } else {
+        convert_image_scale_offset_u8 (self->dest, self->width, self->height,
+            self->channels, self->planar, srcPtr,
+            GST_VIDEO_INFO_COMP_PSTRIDE (&self->video_info, 0),
+            GST_VIDEO_INFO_PLANE_STRIDE (&self->video_info, 0),
+            self->scales, self->offsets);
+        src_data = self->dest;
+      }
+
+      status = api->CreateTensorWithDataAsOrtValue (self->memory_info, src_data,
+          inputTensorSize, input_dims, num_dims,
+          ONNX_TENSOR_ELEMENT_DATA_TYPE_UINT8, &input_tensor);
+      break;
+    }
+    case GST_TENSOR_DATA_TYPE_FLOAT32:{
+      convert_image_scale_offset_f32 ((float *) self->dest, self->width,
+          self->height,
+          self->channels, self->planar, srcPtr,
+          GST_VIDEO_INFO_COMP_PSTRIDE (&self->video_info, 0),
+          GST_VIDEO_INFO_PLANE_STRIDE (&self->video_info, 0),
+          self->scales, self->offsets);
+
+      status = api->CreateTensorWithDataAsOrtValue (self->memory_info,
+          (float *) self->dest,
+          inputTensorSize, input_dims, num_dims,
+          ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT, &input_tensor);
+      break;
+    }
+    default:
+      GST_ELEMENT_ERROR (self, STREAM, FAILED, (NULL),
+          ("Unsupported input datatype"));
+      goto error;
+  }
+
+  if (status) {
+    GST_ELEMENT_ERROR (self, STREAM, FAILED, (NULL),
+        ("Failed to create input tensor: %s", api->GetErrorMessage (status)));
+    goto error;
+  }
+
+  output_tensors = g_new0 (OrtValue *, self->output_count);
+
+  status = api->Run (self->session, NULL, (const char *const *) input_names,
+      (const OrtValue * const *) &input_tensor, 1,
+      (const char *const *) self->output_names, self->output_count,
+      output_tensors);
+
+  if (status) {
+    GST_ELEMENT_ERROR (self, STREAM, FAILED, (NULL),
+        ("Failed to run inference: %s", api->GetErrorMessage (status)));
+    goto error;
+  }
+
+  self->allocator->Free (self->allocator, input_names[0]);
+  api->ReleaseValue (input_tensor);
+
+  if (!output_tensors || self->output_count == 0) {
+    GST_ELEMENT_ERROR (self, STREAM, FAILED, (NULL),
+        ("ONNX inference failed to produce outputs"));
+    goto error;
+  }
+
+
+  tmeta = gst_buffer_add_tensor_meta (buf);
+  tmeta->num_tensors = self->output_count;
+  tmeta->tensors = g_new0 (GstTensor *, self->output_count);
+
+  for (size_t i = 0; i < self->output_count; i++) {
+    size_t j;
+    ONNXTensorElementDataType tensor_type;
+    size_t num_dims;
+    size_t num_elements;
+    void *tensor_data;
+
+    status =
+        api->GetTensorTypeAndShape (output_tensors[i], &output_tensor_info);
+    if (status) {
+      GST_ELEMENT_ERROR (self, STREAM, FAILED, (NULL),
+          ("Failed to get tensor info: %s", api->GetErrorMessage (status)));
+      goto error;
+    }
+
+    status = api->GetTensorElementType (output_tensor_info, &tensor_type);
+    if (status) {
+      GST_ELEMENT_ERROR (self, STREAM, FAILED,
+          (NULL),
+          ("Failed to get tensor type: %s", api->GetErrorMessage (status)));
+      goto error;
+    }
+
+    status = api->GetDimensionsCount (output_tensor_info, &num_dims);
+    if (status) {
+      GST_ELEMENT_ERROR (self, STREAM, FAILED, (NULL),
+          ("Failed to get dimensions count: %s",
+              api->GetErrorMessage (status)));
+
+      api->ReleaseTensorTypeAndShapeInfo (output_tensor_info);
+      goto error;
+    }
+
+    int64_t *shape = (int64_t *) g_alloca (num_dims * sizeof (int64_t));
+    status = api->GetDimensions (output_tensor_info, shape, num_dims);
+    if (status) {
+      GST_ELEMENT_ERROR (self, STREAM, FAILED, (NULL),
+          ("Failed to get dimensions: %s", api->GetErrorMessage (status)));
+      goto error;
+    }
+
+    GstTensor *tensor = gst_tensor_alloc (num_dims);
+    tmeta->tensors[i] = tensor;
+    tensor->id = self->output_ids[i];
+
+    for (j = 0; j < num_dims; ++j)
+      tensor->dims[j] = shape[j];
+
+    status =
+        api->GetTensorShapeElementCount (output_tensor_info, &num_elements);
+    if (status) {
+      GST_ELEMENT_ERROR (self, STREAM, FAILED, (NULL),
+          ("Could not get the number of elements in the tensor: %s",
+              api->GetErrorMessage (status)));
+      goto error;
+    }
+
+    api->ReleaseTensorTypeAndShapeInfo (output_tensor_info);
+    output_tensor_info = NULL;
+
+    status = api->GetTensorMutableData (output_tensors[i], &tensor_data);
+    if (status) {
+      GST_ELEMENT_ERROR (self, STREAM, FAILED, (NULL),
+          ("Failed to get tensor data: %s", api->GetErrorMessage (status)));
+      goto error;
+    }
+
+    if (tensor_type == ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT) {
+      size_t buffer_size = num_elements * sizeof (float);
+      tensor->data = gst_buffer_new_allocate (NULL, buffer_size, NULL);
+      gst_buffer_fill (tensor->data, 0, tensor_data, buffer_size);
+      tensor->data_type = GST_TENSOR_DATA_TYPE_FLOAT32;
+    } else if (tensor_type == ONNX_TENSOR_ELEMENT_DATA_TYPE_INT32) {
+      size_t buffer_size = num_elements * sizeof (int);
+      tensor->data = gst_buffer_new_allocate (NULL, buffer_size, NULL);
+      gst_buffer_fill (tensor->data, 0, tensor_data, buffer_size);
+      tensor->data_type = GST_TENSOR_DATA_TYPE_INT32;
+    } else {
+      GST_ELEMENT_ERROR (self, STREAM, FAILED, (NULL),
+          ("Output tensor is not FLOAT32 or INT32, not supported"));
+      goto error;
+    }
+  }
+
+  // Clean up output tensors
+  for (size_t i = 0; i < self->output_count; i++) {
+    if (output_tensors[i])
+      api->ReleaseValue (output_tensors[i]);
+  }
+  g_free (output_tensors);
+
+  GST_TRACE_OBJECT (trans, "Num tensors:%zu", self->output_count);
+  gst_buffer_unmap (buf, &info);
+
+  return GST_FLOW_OK;
+
+error:
+  if (status)
+    api->ReleaseStatus (status);
+  if (input_names[0])
+    self->allocator->Free (self->allocator, input_names[0]);
+  if (input_type_info)
+    api->ReleaseTypeInfo (input_type_info);
+  if (input_tensor)
+    api->ReleaseValue (input_tensor);
+  if (output_tensors) {
+    for (size_t i = 0; i < self->output_count; i++) {
+      if (output_tensors[i])
+        api->ReleaseValue (output_tensors[i]);
+    }
+    g_free (output_tensors);
+  }
+
+  if (output_tensor_info)
+    api->ReleaseTensorTypeAndShapeInfo (output_tensor_info);
+
+  if (tmeta)
+    gst_buffer_remove_meta (buf, (GstMeta *) tmeta);
+
+
+  gst_buffer_unmap (buf, &info);
+
+  return GST_FLOW_ERROR;
+}
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/onnx/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/onnx/meson.build
Changed
@@ -1,13 +1,10 @@
 onnx_sources = [
   'gstonnx.c',
-  'gstonnxinference.cpp',
-  'gstonnxclient.cpp',
+  'gstonnxinference.c',
 ]
 
 onnx_headers = [
   'gstonnxinference.h',
-  'gstml.h',
-  'gstonnxclient.h',
 ]
 
 doc_sources = []
@@ -23,26 +20,55 @@
   subdir_done()
 endif
 
-onnxrt_dep = dependency('libonnxruntime', version : '>= 1.16.1', required : get_option('onnx'))
-
 extra_args = []
 extra_deps = []
+extra_incs = []
+
+onnxrt_dep = dependency('libonnxruntime', version : '>= 1.16.1',
+    required: false)
+if not onnxrt_dep.found()
+  fsmod = import('fs')
+  sysroot = meson.get_external_property('sys_root', '/')
+  onnx_inc = join_paths(sysroot, 'usr/include/onnxruntime')
+
+  incs = []
+  if fsmod.is_dir(onnx_inc)
+    incs = include_directories(onnx_inc)
+  endif
+
+  onnxrt_dep = cc.find_library('onnxruntime',
+    has_headers: 'onnxruntime_c_api.h',
+    header_include_directories: incs,
+    required: get_option('onnx'))
+  extra_incs += incs
+endif
+
+if not onnxrt_dep.found()
+  subdir_done()
+endif
+
 if gstcuda_dep.found()
   extra_args += ['-DHAVE_CUDA']
   extra_deps += [gstcuda_dep]
 endif
 
-if onnxrt_dep.found()
-  gstonnx = library('gstonnx',
-    onnx_sources,
-    c_args : gst_plugins_bad_args + extra_args,
-    cpp_args : gst_plugins_bad_args + extra_args,
-    link_args : noseh_link_args,
-    include_directories : [configinc, libsinc, cuda_stubinc],
-    dependencies : [gstbase_dep, gstvideo_dep, gstanalytics_dep, onnxrt_dep,
-                    libm] + extra_deps,
-    install : true,
-    install_dir : plugins_install_dir,
-  )
-  plugins += [gstonnx]
+if cc.has_function('OrtSessionOptionsAppendExecutionProvider_VSINPU',
+    dependencies: onnxrt_dep) and \
+   cc.has_header('core/providers/vsinpu/vsinpu_provider_factory.h',
+    dependencies: onnxrt_dep,
+    include_directories: extra_incs)
+  message('Enabled VSI Onnx VSI NPU provider')
+  extra_args += ['-DHAVE_VSI_NPU']
 endif
+
+gstonnx = library('gstonnx',
+  onnx_sources,
+  c_args : gst_plugins_bad_args + extra_args,
+  link_args : noseh_link_args,
+  include_directories : [configinc, libsinc, cuda_stubinc] + extra_incs,
+  dependencies : [gstbase_dep, gstvideo_dep, gstanalytics_dep, onnxrt_dep,
+                  libm] + extra_deps,
+  install : true,
+  install_dir : plugins_install_dir,
+)
+plugins += [gstonnx]
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/openaptx/openaptx-plugin.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/openaptx/openaptx-plugin.c
Changed
@@ -42,7 +42,7 @@ gboolean ret = FALSE; ret |= GST_ELEMENT_REGISTER (openaptxdec, plugin); ret |= GST_ELEMENT_REGISTER (openaptxenc, plugin); - return TRUE; + return ret; } GST_PLUGIN_DEFINE (GST_VERSION_MAJOR,
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/opencv/gstmotioncells.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/opencv/gstmotioncells.h
Changed
@@ -102,4 +102,4 @@ GST_ELEMENT_REGISTER_DECLARE (motioncells); G_END_DECLS -#endif /* __GST_MOTION_CELLS_H__ */ +#endif /* __GST_MOTIONCELLS_H__ */
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/openjpeg/gstopenjpegdec.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/openjpeg/gstopenjpegdec.c
Changed
@@ -1,4 +1,4 @@ -/* +/* * Copyright (C) 2012 Collabora Ltd. * Author: Sebastian Dröge <sebastian.droege@collabora.co.uk> * Copyright (C) 2013 Sebastian Dröge <slomo@circular-chaos.org> @@ -1560,8 +1560,8 @@ self->available_threads--; g_mutex_unlock (&self->messages_lock); - gst_element_call_async (GST_ELEMENT (self), - (GstElementCallAsyncFunc) gst_openjpeg_dec_decode_stripe, message, NULL); + gst_object_call_async (GST_OBJECT (self), + (GstObjectCallAsyncFunc) gst_openjpeg_dec_decode_stripe, message); if (gst_video_decoder_get_subframe_mode (decoder) && gst_openjpeg_dec_is_last_input_subframe (decoder, message)) gst_video_decoder_have_last_subframe (decoder, frame);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/openjpeg/gstopenjpegenc.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/openjpeg/gstopenjpegenc.c
Changed
@@ -1,4 +1,4 @@ -/* +/* * Copyright (C) 2012 Collabora Ltd. * Author: Sebastian Dröge <sebastian.droege@collabora.co.uk> * Copyright (C) 2013 Sebastian Dröge <slomo@circular-chaos.org> @@ -1287,9 +1287,8 @@ GST_LOG_OBJECT (self, "About to enqueue an encoding message from frame %p stripe %d", frame, message->stripe); - gst_element_call_async (GST_ELEMENT (self), - (GstElementCallAsyncFunc) gst_openjpeg_enc_encode_stripe, message, - NULL); + gst_object_call_async (GST_OBJECT (self), + (GstObjectCallAsyncFunc) gst_openjpeg_enc_encode_stripe, message); enqueued_stripes++; } while (enqueued_stripes > 0) {
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/qroverlay/gstdebugqroverlay.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/qroverlay/gstdebugqroverlay.c
Changed
@@ -156,8 +156,8 @@ gst_element_class_set_details_simple (gstelement_class, - "qroverlay", - "Qrcode overlay containing buffer information", + "debugqroverlay", + "Video/Overlay/Debug", "Overlay Qrcodes over each buffer with buffer information and custom data", "Anthony Violo <anthony.violo@ubicast.eu>");
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/qroverlay/gstqroverlay.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/qroverlay/gstqroverlay.c
Changed
@@ -181,7 +181,7 @@ gst_element_class_set_details_simple (gstelement_class, "qroverlay", - "Qrcode overlay containing random data", + "Video/Overlay", "Overlay Qrcodes over each buffer with data passed in", "Thibault Saunier <tsaunier@igalia.com>");
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/rsvg/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/rsvg/meson.build
Changed
@@ -18,7 +18,9 @@ 'rsvg': pathsep.join(doc_sources) } -cairo_dep = dependency('cairo', version: '>= 1.16.0', allow_fallback: true, required : get_option('rsvg')) +cairo_dep = dependency('cairo', version: '>= 1.16.0', + default_options: {'tests': 'disabled'}, + allow_fallback: true, required : get_option('rsvg')) rsvg_dep = dependency('librsvg-2.0', version : '>= 2.36.2', required : get_option('rsvg')) if cairo_dep.found() and rsvg_dep.found() gstrsvg = library('gstrsvg',
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/rtmp/gstrtmp.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/rtmp/gstrtmp.c
Changed
@@ -41,6 +41,9 @@ ret |= GST_ELEMENT_REGISTER (rtmpsrc, plugin); ret |= GST_ELEMENT_REGISTER (rtmpsink, plugin); + gst_plugin_add_status_warning (plugin, "The rtmp plugin is deprecated, " + "please use rtmp2"); + return ret; }
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/rtmp/gstrtmpsink.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/rtmp/gstrtmpsink.c
Changed
@@ -137,6 +137,9 @@ GST_ERROR_OBJECT (sink, "WSAStartup failed: 0x%08x", WSAGetLastError ()); } #endif + + GST_WARNING_OBJECT (sink, "rtmpsink is deprecated, please move to " + "rtmp2sink"); } static void
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/rtmp/gstrtmpsrc.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/rtmp/gstrtmpsrc.c
Changed
@@ -180,6 +180,9 @@ } #endif + GST_WARNING_OBJECT (rtmpsrc, "rtmpsrc is deprecated, please move to " + "rtmp2src"); + rtmpsrc->cur_offset = 0; rtmpsrc->last_timestamp = 0; rtmpsrc->timeout = DEFAULT_TIMEOUT;
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/sctp/gstsctpdec.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/sctp/gstsctpdec.c
Changed
@@ -64,6 +64,7 @@
   PROP_GST_SCTP_ASSOCIATION_ID,
   PROP_LOCAL_SCTP_PORT,
+  PROP_AUTOMATIC_ASSOCIATION_ID,
 
   NUM_PROPERTIES
 };
@@ -72,6 +73,7 @@
 
 #define DEFAULT_GST_SCTP_ASSOCIATION_ID 1
 #define DEFAULT_LOCAL_SCTP_PORT 0
+#define DEFAULT_AUTOMATIC_ASSOCIATION_ID FALSE
 #define MAX_SCTP_PORT 65535
 #define MAX_GST_SCTP_ASSOCIATION_ID 65535
 
@@ -207,6 +209,20 @@
       0, MAX_SCTP_PORT, DEFAULT_LOCAL_SCTP_PORT,
       G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS);
 
+  /**
+   * GstSctpDec:automatic-association-id:
+   *
+   * Whether a SCTP Association ID should be automatically generated.
+   *
+   * Since: 1.28
+   */
+  properties[PROP_AUTOMATIC_ASSOCIATION_ID] =
+      g_param_spec_boolean ("automatic-association-id",
+      "Automatic SCTP Association ID",
+      "Whether a SCTP Association ID should be automatically generated.",
+      DEFAULT_AUTOMATIC_ASSOCIATION_ID,
+      G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS);
+
   g_object_class_install_properties (gobject_class, NUM_PROPERTIES, properties);
 
   signals[SIGNAL_RESET_STREAM] = g_signal_new ("reset-stream",
@@ -226,7 +242,9 @@
 {
   self->sctp_association_id = DEFAULT_GST_SCTP_ASSOCIATION_ID;
   self->local_sctp_port = DEFAULT_LOCAL_SCTP_PORT;
+  self->automatic_association_id = DEFAULT_AUTOMATIC_ASSOCIATION_ID;
 
+  self->automatic_sctp_association = NULL;
   self->flow_combiner = gst_flow_combiner_new ();
   self->sink_pad = gst_pad_new_from_static_template (&sink_template, "sink");
@@ -246,11 +264,29 @@
 
   switch (prop_id) {
     case PROP_GST_SCTP_ASSOCIATION_ID:
-      self->sctp_association_id = g_value_get_uint (value);
+      if (self->automatic_association_id) {
+        GST_WARNING_OBJECT (object,
+            "Cannot modify association id if automatic association id enabled");
+      } else {
+        self->sctp_association_id = g_value_get_uint (value);
+      }
       break;
     case PROP_LOCAL_SCTP_PORT:
       self->local_sctp_port = g_value_get_uint (value);
       break;
+    case PROP_AUTOMATIC_ASSOCIATION_ID:
+      self->automatic_association_id = g_value_get_boolean (value);
+      if (self->automatic_association_id && !self->automatic_sctp_association) {
+        self->automatic_sctp_association = gst_sctp_association_create ();
+        if (self->automatic_sctp_association)
+          self->sctp_association_id =
+              self->automatic_sctp_association->association_id;
+      } else if (!self->automatic_association_id
+          && self->automatic_sctp_association) {
+        gst_clear_object (&self->automatic_sctp_association);
+        self->sctp_association_id = DEFAULT_GST_SCTP_ASSOCIATION_ID;
+      }
+      break;
     default:
       G_OBJECT_WARN_INVALID_PROPERTY_ID (self, prop_id, pspec);
       break;
@@ -270,6 +306,9 @@
     case PROP_LOCAL_SCTP_PORT:
       g_value_set_uint (value, self->local_sctp_port);
       break;
+    case PROP_AUTOMATIC_ASSOCIATION_ID:
+      g_value_set_boolean (value, self->automatic_association_id);
+      break;
     default:
       G_OBJECT_WARN_INVALID_PROPERTY_ID (self, prop_id, pspec);
       break;
@@ -284,6 +323,9 @@
   gst_flow_combiner_free (self->flow_combiner);
   self->flow_combiner = NULL;
 
+  if (self->automatic_sctp_association)
+    gst_object_unref (self->automatic_sctp_association);
+
   G_OBJECT_CLASS (parent_class)->finalize (object);
 }
 
@@ -467,7 +509,13 @@
 {
   gint state;
 
-  self->sctp_association = gst_sctp_association_get (self->sctp_association_id);
+  if (self->automatic_sctp_association) {
+    self->sctp_association = self->automatic_sctp_association;
+    self->automatic_sctp_association = NULL;
+  } else {
+    self->sctp_association =
+        gst_sctp_association_get (self->sctp_association_id);
+  }
 
   g_object_get (self->sctp_association, "state", &state, NULL);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/sctp/gstsctpdec.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/sctp/gstsctpdec.h
Changed
@@ -50,7 +50,9 @@ GstPad *sink_pad; guint sctp_association_id; guint local_sctp_port; + gboolean automatic_association_id; + GstSctpAssociation *automatic_sctp_association; GstSctpAssociation *sctp_association; gulong signal_handler_stream_reset; };
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/sctp/gstsctpenc.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/sctp/gstsctpenc.c
Changed
@@ -395,8 +395,9 @@ case GST_STATE_CHANGE_NULL_TO_READY: break; case GST_STATE_CHANGE_READY_TO_PAUSED: - gst_pad_start_task (self->src_pad, - (GstTaskFunction) gst_sctp_enc_srcpad_loop, self->src_pad, NULL); + if (ret != GST_STATE_CHANGE_FAILURE) + gst_pad_start_task (self->src_pad, + (GstTaskFunction) gst_sctp_enc_srcpad_loop, self->src_pad, NULL); break; case GST_STATE_CHANGE_PLAYING_TO_PAUSED: break;
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/sctp/sctpassociation.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/sctp/sctpassociation.c
Changed
@@ -67,7 +67,22 @@ return id; } -G_DEFINE_TYPE (GstSctpAssociation, gst_sctp_association, G_TYPE_OBJECT); +static void +_init_debug (void) +{ + static gsize _init = 0; + + if (g_once_init_enter (&_init)) { + GST_DEBUG_CATEGORY_INIT (gst_sctp_association_debug_category, + "sctpassociation", 0, "debug category for sctpassociation"); + GST_DEBUG_CATEGORY_INIT (gst_sctp_debug_category, "sctplib", 0, + "debug category for messages from usrsctp"); + g_once_init_leave (&_init, 1); + } +} + +G_DEFINE_TYPE_WITH_CODE (GstSctpAssociation, gst_sctp_association, + G_TYPE_OBJECT, _init_debug ()); enum { @@ -363,11 +378,6 @@ GstSctpAssociation *association; G_LOCK (associations_lock); - GST_DEBUG_CATEGORY_INIT (gst_sctp_association_debug_category, - "sctpassociation", 0, "debug category for sctpassociation"); - GST_DEBUG_CATEGORY_INIT (gst_sctp_debug_category, - "sctplib", 0, "debug category for messages from usrsctp"); - if (!associations) { associations = g_hash_table_new_full (g_direct_hash, g_direct_equal, NULL, NULL); @@ -388,6 +398,40 @@ return association; } +GstSctpAssociation * +gst_sctp_association_create (void) +{ + GstSctpAssociation *association; + guint association_id = 0; + + G_LOCK (associations_lock); + if (!associations) { + associations = + g_hash_table_new_full (g_direct_hash, g_direct_equal, NULL, NULL); + } + + while (association_id < G_MAXUINT16) { + association = + g_hash_table_lookup (associations, GUINT_TO_POINTER (association_id)); + if (!association) + break; + association_id++; + } + + if (association) { + G_UNLOCK (associations_lock); + return NULL; + } + + association = + g_object_new (GST_SCTP_TYPE_ASSOCIATION, "association-id", association_id, + NULL); + g_hash_table_insert (associations, GUINT_TO_POINTER (association_id), + association); + G_UNLOCK (associations_lock); + return association; +} + gboolean gst_sctp_association_start (GstSctpAssociation * self) {
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/sctp/sctpassociation.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/sctp/sctpassociation.h
Changed
@@ -103,6 +103,7 @@ GType gst_sctp_association_get_type (void); GstSctpAssociation *gst_sctp_association_get (guint32 association_id); +GstSctpAssociation *gst_sctp_association_create (void); gboolean gst_sctp_association_start (GstSctpAssociation * self); void gst_sctp_association_set_on_packet_out (GstSctpAssociation * self,
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/smoothstreaming/gstmssdemux.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/smoothstreaming/gstmssdemux.c
Changed
@@ -392,9 +392,11 @@ if (tmpl != NULL) { srcpad = GST_PAD_CAST (gst_pad_new_from_template (tmpl, name)); - g_free (name); gst_object_unref (tmpl); } + + g_free (name); + if (!srcpad) { GST_WARNING_OBJECT (mssdemux, "Ignoring unknown type stream"); return NULL;
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/smoothstreaming/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/smoothstreaming/meson.build
Changed
@@ -20,7 +20,8 @@ 'smoothstreaming': pathsep.join(doc_sources) } -xml28_dep = dependency('libxml-2.0', version : '>= 2.8', required : get_option('smoothstreaming')) +xml28_dep = dependency('libxml-2.0', version: '>= 2.8', required: get_option('smoothstreaming'), + default_options: {'python': false}) if xml28_dep.found() gstmss = library('gstsmoothstreaming',
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/sndfile/gstsfdec.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/sndfile/gstsfdec.c
Changed
@@ -356,8 +356,8 @@ GstSFDec *self = GST_SF_DEC (element); GST_INFO_OBJECT (self, "transition: %s -> %s", - gst_element_state_get_name (GST_STATE_TRANSITION_CURRENT (transition)), - gst_element_state_get_name (GST_STATE_TRANSITION_NEXT (transition))); + gst_state_get_name (GST_STATE_TRANSITION_CURRENT (transition)), + gst_state_get_name (GST_STATE_TRANSITION_NEXT (transition))); switch (transition) { case GST_STATE_CHANGE_READY_TO_PAUSED:
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/sndfile/gstsfsrc.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/sndfile/gstsfsrc.c
Changed
@@ -405,11 +405,13 @@ gst_sf_src_get_caps (GstBaseSrc * bsrc) { GstSFSrc *this; - GstCaps *ret; + GstCaps *ret, *tcaps; this = GST_SF_SRC (bsrc); - ret = gst_caps_copy (gst_pad_get_pad_template_caps (bsrc->srcpad)); + tcaps = gst_pad_get_pad_template_caps (bsrc->srcpad); + ret = gst_caps_copy (tcaps); + gst_caps_unref (tcaps); if (this->file) { GstStructure *s;
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/srt/gstsrtsink.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/srt/gstsrtsink.c
Changed
@@ -407,7 +407,7 @@ gst_srt_object_install_properties_helper (gobject_class); gst_element_class_add_static_pad_template (gstelement_class, &sink_template); - gst_element_class_set_metadata (gstelement_class, + gst_element_class_set_static_metadata (gstelement_class, "SRT sink", "Sink/Network", "Send data over the network via SRT", "Justin Kim <justin.joy.9to5@gmail.com>");
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/srt/gstsrtsrc.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/srt/gstsrtsrc.c
Changed
@@ -464,7 +464,7 @@ FALSE, G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS)); gst_element_class_add_static_pad_template (gstelement_class, &src_template); - gst_element_class_set_metadata (gstelement_class, + gst_element_class_set_static_metadata (gstelement_class, "SRT source", "Source/Network", "Receive data over the network via SRT", "Justin Kim <justin.joy.9to5@gmail.com>");
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/svtav1/gstsvtav1enc.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/svtav1/gstsvtav1enc.c
Changed
@@ -80,7 +80,9 @@ #else gint logical_processors; #endif +#if !SVT_AV1_CHECK_VERSION(4, 0, 0) gint target_socket; +#endif gchar *parameters_string; EbBufferHeaderType *input_buf; @@ -132,7 +134,7 @@ PROP_INTRA_PERIOD_LENGTH, PROP_INTRA_REFRESH_TYPE, PROP_LOGICAL_PROCESSORS, /// DEPRECATED: should be removed once the minimum version is 3.0.0 - PROP_TARGET_SOCKET, + PROP_TARGET_SOCKET, /// DEPRECATED: should be removed once the minimum version is 4.0.0 PROP_PARAMETERS_STRING, PROP_LEVEL_OF_PARALLELISM, }; @@ -149,7 +151,7 @@ #define PROP_INTRA_REFRESH_TYPE_DEFAULT SVT_AV1_KF_REFRESH #define PROP_LEVEL_OF_PARALLELISM_DEFAULT 0 #define PROP_LOGICAL_PROCESSORS_DEFAULT 0 /// DEPRECATED: should be removed once the minimum version is 3.0.0 -#define PROP_TARGET_SOCKET_DEFAULT -1 +#define PROP_TARGET_SOCKET_DEFAULT -1 /// DEPRECATED: should be removed once the minimum version is 4.0.0 #define PROP_PARAMETERS_STRING_DEFAULT NULL #if G_BYTE_ORDER == G_LITTLE_ENDIAN @@ -339,10 +341,12 @@ G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS | G_PARAM_DEPRECATED)); g_object_class_install_property (gobject_class, - PROP_TARGET_SOCKET, - g_param_spec_int ("target-socket", - "Target socket", + PROP_TARGET_SOCKET, g_param_spec_int ("target-socket", "Target socket", +#if SVT_AV1_CHECK_VERSION(4, 0, 0) + "Deprecated. This property is ignored in SVT-AV1 4.0.0 and later.", +#else "Target CPU socket to run on. -1: all available", +#endif -1, 15, PROP_TARGET_SOCKET_DEFAULT, @@ -378,7 +382,9 @@ #else svtav1enc->logical_processors = PROP_LOGICAL_PROCESSORS_DEFAULT; #endif +#if !SVT_AV1_CHECK_VERSION(4, 0, 0) svtav1enc->target_socket = PROP_TARGET_SOCKET_DEFAULT; +#endif svtav1enc->parameters_string = PROP_PARAMETERS_STRING_DEFAULT; } @@ -439,7 +445,9 @@ #endif break; case PROP_TARGET_SOCKET: +#if !SVT_AV1_CHECK_VERSION(4, 0, 0) svtav1enc->target_socket = g_value_get_int (value); +#endif break; case PROP_PARAMETERS_STRING:{ g_free (svtav1enc->parameters_string); @@ -500,7 +508,9 @@ #endif break; case PROP_TARGET_SOCKET: +#if !SVT_AV1_CHECK_VERSION(4, 0, 0) g_value_set_int (value, svtav1enc->target_socket); +#endif break; case PROP_PARAMETERS_STRING: g_value_set_string (value, svtav1enc->parameters_string); @@ -586,7 +596,11 @@ GST_DEBUG_OBJECT (svtav1enc, "Enabling CQP mode (qp %u)", svtav1enc->cqp); svtav1enc->svt_config->qp = svtav1enc->cqp; svtav1enc->svt_config->rate_control_mode = SVT_AV1_RC_MODE_CQP_OR_CRF; +#if SVT_AV1_CHECK_VERSION(4, 0, 0) + svtav1enc->svt_config->aq_mode = 0; +#else svtav1enc->svt_config->enable_adaptive_quantization = FALSE; +#endif svtav1enc->svt_config->force_key_frames = TRUE; } else { GST_DEBUG_OBJECT (svtav1enc, "Using default rate control settings"); @@ -598,7 +612,9 @@ #else svtav1enc->svt_config->logical_processors = svtav1enc->logical_processors; #endif +#if !SVT_AV1_CHECK_VERSION(4, 0, 0) svtav1enc->svt_config->target_socket = svtav1enc->target_socket; +#endif gst_svtav1enc_parse_parameters_string (svtav1enc); /* set properties out of GstVideoInfo */
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/svthevcenc/gstsvthevcenc.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/svthevcenc/gstsvthevcenc.c
Changed
@@ -627,6 +627,9 @@ gst_type_mark_as_plugin_api (GST_SVTHEVC_ENC_PRED_STRUCT_TYPE, 0); gst_type_mark_as_plugin_api (GST_SVTHEVC_ENC_RC_TYPE, 0); gst_type_mark_as_plugin_api (GST_SVTHEVC_ENC_TUNE_TYPE, 0); + + GST_DEBUG_CATEGORY_INIT (svthevc_enc_debug, "svthevcenc", 0, + "h265 encoding element"); } static void @@ -1454,6 +1457,12 @@ caps = gst_video_info_to_caps (info); pool = gst_video_buffer_pool_new (); + { + gchar *name = + g_strdup_printf ("%s-internal-pool", GST_OBJECT_NAME (encoder)); + g_object_set (pool, "name", name, NULL); + g_free (name); + } size = GST_VIDEO_INFO_SIZE (info); GST_INFO_OBJECT (encoder, @@ -1554,6 +1563,12 @@ gst_query_add_allocation_param (query, allocator, &params); pool = gst_video_buffer_pool_new (); + { + gchar *name = + g_strdup_printf ("%s-propose-pool", GST_OBJECT_NAME (svthevcenc)); + g_object_set (pool, "name", name, NULL); + g_free (name); + } config = gst_buffer_pool_get_config (pool); gst_buffer_pool_config_set_params (config, caps, size, 0, 0); @@ -2271,9 +2286,6 @@ static gboolean plugin_init (GstPlugin * plugin) { - GST_DEBUG_CATEGORY_INIT (svthevc_enc_debug, "svthevcenc", 0, - "h265 encoding element"); - return gst_element_register (plugin, "svthevcenc", GST_RANK_PRIMARY, GST_TYPE_SVTHEVC_ENC); }
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/svtjpegxs/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/svtjpegxs/meson.build
Changed
@@ -18,7 +18,10 @@ 'svtjpegxs': pathsep.join(doc_sources) } -svtjpegxs_dep = dependency('SvtJpegxs', version: '>= 0.9', required: get_option('svtjpegxs')) +svtjpegxs_dep = dependency('SvtJpegxs', + version: '>= 0.9', + required: get_option('svtjpegxs'), +) if svtjpegxs_dep.found() gstsvtjpegxs = library('gstsvtjpegxs',
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/tflite
Added
+(directory)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/tflite/README.md
Added
@@ -0,0 +1,9 @@ +# GStreamer elements for TensorFlow Lite # + +Given a TensorFlow Lite model, this element runs inference and attaches the resulting `GstTensorMeta` metadata to the buffer, to be consumed by a tensor decoder. + +Requires the TensorFlow Lite library. Tested with TensorFlow r2.18. + +# To build TensorFlow Lite: + +See detailed info on: [https://www.tensorflow.org/lite/guide/build_cmake](https://www.tensorflow.org/lite/guide/build_cmake)
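The README above describes the flow: decoded video goes into the inference element, which attaches `GstTensorMeta` to each buffer for a downstream tensor decoder to interpret. A sketch of such a pipeline, assuming this plugin is installed; the model and label paths are placeholders, not files shipped with the package:

```shell
# Hypothetical example: run a TFLite SSD model on a test pattern and
# render the decoded detections. Replace the placeholder paths first.
gst-launch-1.0 videotestsrc num-buffers=300 ! videoconvert ! \
  tfliteinference model-file=/path/to/model.tflite ! \
  ssdobjectdetector label-file=/path/to/labels.txt ! \
  videoconvert ! autovideosink
```

The element names match those registered by this plugin (`tfliteinference`) and by gst-plugins-bad (`ssdobjectdetector`); whether a given model works depends on its input and output tensor layout.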
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/tflite/VX
Added
+(directory)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/tflite/VX/vsi_npu_custom_op.cc
Added
@@ -0,0 +1,63 @@ +/* Copyright 2018 The TensorFlow Authors. All Rights Reserved. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +==============================================================================*/ + +#include <cstring> +#include <cstdlib> + +#include "vsi_npu_custom_op.h" + +namespace tflite { +namespace ops { +namespace custom { +namespace vsi_npu { + +static void* Init(TfLiteContext* context, const char* buffer, size_t length) { + TfLiteVsiNpuParams* data = reinterpret_cast<TfLiteVsiNpuParams*>( + malloc(sizeof(TfLiteVsiNpuParams) + sizeof(char) * length)); + data->length = length; + data->binary = reinterpret_cast<char*>(data) + sizeof(TfLiteVsiNpuParams); + memcpy(reinterpret_cast<char*>(data->binary), buffer, length); + return reinterpret_cast<void*>(data); +} + +static void Free(TfLiteContext* context, void* buffer) { + auto* data = reinterpret_cast<TfLiteVsiNpuParams*>(buffer); + delete data; +} + +static TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) { + auto* data = + reinterpret_cast<TfLiteVsiNpuParams*>(node->user_data); + data->input_count = node->inputs->size; + data->output_cout = node->outputs->size; + return kTfLiteOk; +} + +static TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) { + return kTfLiteOk; +} + +} // namespace vsi_npu + +TfLiteRegistration* Register_VSI_NPU_PRECOMPILED() { + static TfLiteRegistration r = { + vsi_npu::Init, vsi_npu::Free, + vsi_npu::Prepare,vsi_npu::Eval}; + return &r; +} + +} // namespace custom +} // namespace ops +} // namespace tflite
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/tflite/VX/vsi_npu_custom_op.h
Added
@@ -0,0 +1,49 @@ +/* Copyright 2018 The TensorFlow Authors. All Rights Reserved. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +==============================================================================*/ +#ifndef TENSORFLOW_LITE_DELEGATES_VSI_NPU_CUSTOM_OP_H_ +#define TENSORFLOW_LITE_DELEGATES_VSI_NPU_CUSTOM_OP_H_ + +#include "tensorflow/lite/c/common.h" + +#ifdef __cplusplus +extern "C" { +#endif // __cplusplus + +static const char kNbgCustomOp[] = "vsi-npu"; + +typedef struct { + size_t length; + size_t input_count; + size_t output_cout; + char* binary; +} TfLiteVsiNpuParams; + +#ifdef __cplusplus +namespace tflite { + namespace ops { + namespace custom { +#endif // __cplusplus + +TfLiteRegistration* Register_VSI_NPU_PRECOMPILED(void); + +#ifdef __cplusplus + } // namespace custom + } // namespace ops +} // namespace tflite + +} // extern "C" +#endif // __cplusplus + +#endif //TENSORFLOW_LITE_DELEGATES_VSI_NPU_CUSTOM_OP_H_
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/tflite/gstml.h
Changed
(renamed from ext/onnx/gstml.h)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/tflite/gsttflite.c
Added
@@ -0,0 +1,57 @@ + +/* + * GStreamer gstreamer-tflite + * Copyright (C) 2024 Collabora Ltd + * + * gsttflite.c + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gsttfliteinference.h" + +#ifdef EDGETPU +#include "gsttfliteedgetpuinference.h" +#endif + +#ifdef TFLITE_VSI +#include "gsttflitevsiinference.h" +#endif + +static gboolean +plugin_init (GstPlugin * plugin) +{ + gboolean ret = GST_ELEMENT_REGISTER (tflite_inference, plugin); + +#ifdef EDGETPU + ret |= GST_ELEMENT_REGISTER (tflite_edgetpu_inference, plugin); +#endif + +#ifdef TFLITE_VSI + ret |= GST_ELEMENT_REGISTER (tflite_vsi_inference, plugin); +#endif + + return ret; +} + +GST_PLUGIN_DEFINE (GST_VERSION_MAJOR, + GST_VERSION_MINOR, + tflite, + "TFLITE neural network plugin", + plugin_init, VERSION, GST_LICENSE, GST_PACKAGE_NAME, GST_PACKAGE_ORIGIN);
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/tflite/gsttfliteedgetpuinference.c
Added
@@ -0,0 +1,165 @@ +/* + * GStreamer + * Copyright (C) 2025 Collabora Ltd. + * + * gsttfliteedgetpuinference.c + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +/** + * SECTION:element-tfliteedgetpuinference + * @short_description: Run TFLITE inference model on video buffers using a EdgeTpu device + * + * This element can apply an TFLITE model to video buffers. It attaches + * the tensor output to the buffer as a @ref GstTensorMeta. + * + * Uses the Google Coral EdgeTpu devices. + * + * To install TFLITE on your system, follow the instructions in the + * README.md in with this plugin. + * + * ## Example launch command: + * + * GST_DEBUG=ssdobjectdetector:5 \ + * gst-launch-1.0 filesrc location=tflite-models/images/bus.jpg ! \ + * jpegdec ! videoconvert ! tfliteedgetpuinference model-file=tflite-models/models/ssd_mobilenet_v1_coco.tflite ! \ + * ssdobjectdetector label-file=tflite-models/labels/COCO_classes.txt ! videoconvert ! imagefreeze ! autovideosink + * + */ +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gsttfliteedgetpuinference.h" +#include <libedgetpu/edgetpu_c.h> + + +typedef struct _GstTFliteEdgeTpuInference +{ + GstTFliteInference parent; + + TfLiteDelegate *tflite_delegate; +} GstTFliteEdgeTpuInference; + +GST_DEBUG_CATEGORY (tflite_edgetpu_inference_debug); +#define GST_CAT_DEFAULT tflite_edgetpu_inference_debug + +GST_ELEMENT_REGISTER_DEFINE (tflite_edgetpu_inference, + "tfliteedgetpuinference", GST_RANK_NONE, GST_TYPE_TFLITE_EDGETPU_INFERENCE); + + +static gboolean gst_tflite_edgetpu_update_options (GstTFliteInference * inf, + TfLiteInterpreterOptions * interpreter_options); +static gboolean gst_tflite_edgetpu_inference_stop (GstBaseTransform * trans); + +G_DEFINE_TYPE (GstTFliteEdgeTpuInference, gst_tflite_edgetpu_inference, + GST_TYPE_TFLITE_INFERENCE); + +static void +gst_tflite_edgetpu_inference_class_init (GstTFliteEdgeTpuInferenceClass * klass) +{ + GstElementClass *element_class = (GstElementClass *) klass; + GstBaseTransformClass *basetransform_class = (GstBaseTransformClass *) klass; + GstTFliteInferenceClass *tflite_class = (GstTFliteInferenceClass *) klass; + + GST_DEBUG_CATEGORY_INIT (tflite_edgetpu_inference_debug, + "tfliteedgetpuinference", 0, "TFLlite edgetpu inference"); + + gst_element_class_set_static_metadata (element_class, + "tfliteedgetpuinference", + "Filter/Effect", + "Apply neural network to video frames and create tensor output" + " using the Google Edge TPU", + "Olivier Crête <olivier.crete@collabora.com>"); + + basetransform_class->stop = gst_tflite_edgetpu_inference_stop; + + tflite_class->update_options = gst_tflite_edgetpu_update_options; +} + +static void +gst_tflite_edgetpu_inference_init (GstTFliteEdgeTpuInference * self) +{ +} + +static gboolean +gst_tflite_edgetpu_update_options (GstTFliteInference * inf, + TfLiteInterpreterOptions * interpreter_options) +{ + GstTFliteEdgeTpuInference *self = GST_TFLITE_EDGETPU_INFERENCE (inf); + size_t num_devices = 0; + struct edgetpu_device *devices; + + devices = edgetpu_list_devices (&num_devices); + + if (num_devices == 0) { + GST_ERROR_OBJECT (self, + "Could not create EdgeTPU session because no EdgeTPU" + " device is connected"); + return FALSE; + } + + /* Not passing options or a path for now */ + self->tflite_delegate = edgetpu_create_delegate (devices[0].type, + devices[0].path, NULL, 0); + + if (self->tflite_delegate == NULL) { + GST_ERROR_OBJECT (self, "Could not create EdgeTPU session"); + edgetpu_free_devices (devices); + return FALSE; + } + + const gchar *dev_type_str = ""; + switch (devices[0].type) { + case EDGETPU_APEX_PCI: + dev_type_str = "PCIe"; + break; + case EDGETPU_APEX_USB: + dev_type_str = "USB"; + break; + default: + dev_type_str = "unknown"; + break; + } + + GST_DEBUG ("Using EdgeTPU version %s device of type %s at %s", + edgetpu_version (), dev_type_str, devices[0].path); + + edgetpu_free_devices (devices); + + if (self->tflite_delegate) + TfLiteInterpreterOptionsAddDelegate (interpreter_options, + self->tflite_delegate); + + return TRUE; +} + +static gboolean +gst_tflite_edgetpu_inference_stop (GstBaseTransform * trans) +{ + GstTFliteEdgeTpuInference *self = GST_TFLITE_EDGETPU_INFERENCE (trans); + gboolean ret; + + ret = GST_BASE_TRANSFORM_CLASS (gst_tflite_edgetpu_inference_parent_class) + ->stop (trans); + + if (self->tflite_delegate) + edgetpu_free_delegate (self->tflite_delegate); + self->tflite_delegate = NULL; + + return ret; +}
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/tflite/gsttfliteedgetpuinference.h
Added
@@ -0,0 +1,39 @@ +/* + * GStreamer gstreamer-tfliteinference + * Copyright (C) 2024 Collabora Ltd + * + * gsttfliteinference.h + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifndef __GST_TFLITE_EDGETPU_INFERENCE_H__ +#define __GST_TFLITE_EDGETPU_INFERENCE_H__ + +#include "gsttfliteinference.h" + + +G_BEGIN_DECLS + +#define GST_TYPE_TFLITE_EDGETPU_INFERENCE (gst_tflite_edgetpu_inference_get_type()) +G_DECLARE_FINAL_TYPE (GstTFliteEdgeTpuInference, gst_tflite_edgetpu_inference, GST, + TFLITE_EDGETPU_INFERENCE, GstTFliteInference) + +GST_ELEMENT_REGISTER_DECLARE (tflite_edgetpu_inference) + +G_END_DECLS + +#endif /* __GST_TFLITE_INFERENCE_H__ */
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/tflite/gsttfliteinference.c
Added
@@ -0,0 +1,1108 @@ +/* + * GStreamer + * Copyright (C) 2024 Collabora Ltd. + * + * gsttfliteinference.c + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +/** + * SECTION:element-tfliteinference + * @short_description: Run TFLITE inference model on video buffers + * + * This element can apply an TFLITE model to video buffers. It attaches + * the tensor output to the buffer as a @ref GstTensorMeta. + * + * To install TFLITE on your system, follow the instructions in the + * README.md in with this plugin. + * + * ## Example launch command: + * + * GST_DEBUG=ssdobjectdetector:5 \ + * gst-launch-1.0 filesrc location=tflite-models/images/bus.jpg ! \ + * jpegdec ! videoconvert ! tfliteinference model-file=tflite-models/models/ssd_mobilenet_v1_coco.tflite ! \ + * ssdobjectdetector label-file=tflite-models/labels/COCO_classes.txt ! videoconvert ! imagefreeze ! autovideosink + * + */ +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include <gst/gst.h> +#include <gst/video/video.h> +#include "gsttfliteinference.h" +#include <gst/analytics/analytics.h> + +#include <tensorflow/lite/c/common.h> + +#define DEFAULT_MODEL_FILE "" +#define DEFAULT_THREADS 0 + +/* + * GstTFliteInference: + * + * @model_file model file + * @tflite_client opaque pointer to TFLITE client + * @tflite_disabled true if inference is disabled + * @video_info @ref GstVideoInfo of sink caps + */ +typedef struct _GstTFliteInferencePrivate +{ + GstBaseTransform basetransform; + gchar *model_file; + gsize numberOfThreads; + gchar *vxdelegate; + gboolean planar; + GPtrArray *tensor_templates; + + TfLiteInterpreter *interpreter; + TfLiteInterpreterOptions *interpreter_options; + TfLiteModel *model; + gboolean tflite_disabled; + GstVideoInfo video_info; + guint8 *dest; + + GstCaps *model_incaps; + GstCaps *model_outcaps; + + + gint channels; + gdouble *scales; + gdouble *offsets; + gsize num_channels; + +} GstTFliteInferencePrivate; + +GST_DEBUG_CATEGORY (tflite_inference_debug); + +#define GST_CAT_DEFAULT tflite_inference_debug +GST_ELEMENT_REGISTER_DEFINE (tflite_inference, "tfliteinference", + GST_RANK_NONE, GST_TYPE_TFLITE_INFERENCE); + +/* GstTFliteInference properties */ +enum +{ + PROP_0, + PROP_MODEL_FILE, + PROP_THREADS, +}; + +#define VIDEO_CAPS GST_VIDEO_CAPS_MAKE ("{ RGB, RGBA, BGR, BGRA }") + +static GstStaticPadTemplate gst_tflite_inference_src_template = +GST_STATIC_PAD_TEMPLATE ("src", + GST_PAD_SRC, + GST_PAD_ALWAYS, + GST_STATIC_CAPS (VIDEO_CAPS) + ); + +static GstStaticPadTemplate gst_tflite_inference_sink_template = +GST_STATIC_PAD_TEMPLATE ("sink", + GST_PAD_SINK, + GST_PAD_ALWAYS, + GST_STATIC_CAPS (VIDEO_CAPS) + ); + +static gboolean gst_tflite_inference_start (GstBaseTransform * trans); +static gboolean gst_tflite_inference_stop (GstBaseTransform * trans); + +static void gst_tflite_inference_set_property (GObject * object, + guint prop_id, const GValue * value, GParamSpec * pspec); +static void gst_tflite_inference_get_property (GObject * object, + guint prop_id, GValue * value, GParamSpec * pspec); +static void gst_tflite_inference_finalize (GObject * object); +static GstFlowReturn gst_tflite_inference_transform_ip (GstBaseTransform * + trans, GstBuffer * buf); +static gboolean gst_tflite_inference_process (GstBaseTransform * trans, + GstBuffer * buf); +static GstCaps *gst_tflite_inference_transform_caps (GstBaseTransform * + trans, GstPadDirection direction, GstCaps * caps, GstCaps * filter_caps); +static gboolean +gst_tflite_inference_set_caps (GstBaseTransform * trans, GstCaps * incaps, + GstCaps * outcaps); + +G_DEFINE_TYPE_WITH_PRIVATE (GstTFliteInference, gst_tflite_inference, + GST_TYPE_BASE_TRANSFORM); + +static void +gst_tflite_inference_class_init (GstTFliteInferenceClass * klass) +{ + GObjectClass *gobject_class = (GObjectClass *) klass; + GstElementClass *element_class = (GstElementClass *) klass; + GstBaseTransformClass *basetransform_class = (GstBaseTransformClass *) klass; + + GST_DEBUG_CATEGORY_INIT (tflite_inference_debug, "tfliteinference", + 0, "tflite_inference"); + gobject_class->set_property = gst_tflite_inference_set_property; + gobject_class->get_property = gst_tflite_inference_get_property; + gobject_class->finalize = gst_tflite_inference_finalize; + + g_object_class_install_property (G_OBJECT_CLASS (klass), + PROP_MODEL_FILE, + g_param_spec_string ("model-file", + "TFLITE model file", "TFLITE model file", DEFAULT_MODEL_FILE, + (GParamFlags) + (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + g_object_class_install_property (G_OBJECT_CLASS (klass), + PROP_THREADS, + g_param_spec_int ("threads", + "Number of Threads", + "Set the number of threads to be used by the TFLITE inference (-1 for auto)", + -1, G_MAXINT, DEFAULT_THREADS, + (GParamFlags) (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + + gst_element_class_set_static_metadata (element_class, "tfliteinference", + "Filter/Video", + "Apply neural network to video frames and create tensor output", + "Denis Shimizu <denis.shimizu@collabora.com>, " + "Aaron Boxer <aaron.boxer@collabora.com>," + "Daniel Morin <daniel.morin@collabora.com>"); + gst_element_class_add_pad_template (element_class, + gst_static_pad_template_get (&gst_tflite_inference_sink_template)); + gst_element_class_add_pad_template (element_class, + gst_static_pad_template_get (&gst_tflite_inference_src_template)); + basetransform_class->transform_ip = + GST_DEBUG_FUNCPTR (gst_tflite_inference_transform_ip); + basetransform_class->transform_caps = + GST_DEBUG_FUNCPTR (gst_tflite_inference_transform_caps); + basetransform_class->set_caps = + GST_DEBUG_FUNCPTR (gst_tflite_inference_set_caps); + basetransform_class->start = GST_DEBUG_FUNCPTR (gst_tflite_inference_start); + basetransform_class->stop = GST_DEBUG_FUNCPTR (gst_tflite_inference_stop); +} + +static gboolean +gst_tflite_inference_has_session (GstTFliteInference * self) +{ + GstTFliteInferencePrivate *priv = + gst_tflite_inference_get_instance_private (self); + + return priv->interpreter != NULL; +} + +static void +gst_tflite_inference_init (GstTFliteInference * self) +{ + GstTFliteInferencePrivate *priv = + gst_tflite_inference_get_instance_private (self); + + priv->numberOfThreads = DEFAULT_THREADS; + priv->tensor_templates = g_ptr_array_new_with_free_func ((GDestroyNotify) + gst_tensor_free); + priv->tflite_disabled = TRUE; + priv->scales = NULL; + priv->offsets = NULL; + priv->num_channels = 0; + + /* Passthrough would propagate tensors caps upstream */ + gst_base_transform_set_prefer_passthrough (GST_BASE_TRANSFORM (self), FALSE); +} + +static void +gst_tflite_inference_finalize (GObject * object) +{ + GstTFliteInference *self = GST_TFLITE_INFERENCE (object); + GstTFliteInferencePrivate *priv = + gst_tflite_inference_get_instance_private (self); + + g_free (priv->model_file); + g_free (priv->scales); + g_free (priv->offsets); + g_ptr_array_unref (priv->tensor_templates); + G_OBJECT_CLASS (gst_tflite_inference_parent_class)->finalize (object); +} + +static void +gst_tflite_inference_set_property (GObject * object, guint prop_id, + const GValue * value, GParamSpec * pspec) +{ + GstTFliteInference *self = GST_TFLITE_INFERENCE (object); + GstTFliteInferencePrivate *priv = + gst_tflite_inference_get_instance_private (self); + const gchar *filename; + + switch (prop_id) { + case PROP_MODEL_FILE: + filename = g_value_get_string (value); + if (filename + && g_file_test (filename, + (GFileTest) (G_FILE_TEST_EXISTS | G_FILE_TEST_IS_REGULAR))) { + if (priv->model_file) + g_free (priv->model_file); + priv->model_file = g_strdup (filename); + priv->tflite_disabled = FALSE; + } else { + GST_WARNING_OBJECT (self, "Model file '%s' not found!", filename); + } + break; + case PROP_THREADS: + priv->numberOfThreads = g_value_get_int (value); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static void +gst_tflite_inference_get_property (GObject * object, guint prop_id, + GValue * value, GParamSpec * pspec) +{ + GstTFliteInference *self = GST_TFLITE_INFERENCE (object); + GstTFliteInferencePrivate *priv = + gst_tflite_inference_get_instance_private (self); + + switch (prop_id) { + case PROP_MODEL_FILE: + g_value_set_string (value, priv->model_file); + break; + case PROP_THREADS: + g_value_set_int (value, priv->numberOfThreads); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static GstTensorDataType +gst_tflite_convert_data_type (TfLiteType type) +{ + switch (type) { + case kTfLiteFloat32: + return GST_TENSOR_DATA_TYPE_FLOAT32; + case kTfLiteInt32: + return GST_TENSOR_DATA_TYPE_INT32; + case kTfLiteUInt8: + return GST_TENSOR_DATA_TYPE_UINT8; + case kTfLiteInt64: + return GST_TENSOR_DATA_TYPE_INT64; + case kTfLiteInt16: + return GST_TENSOR_DATA_TYPE_INT16; + case kTfLiteInt8: + return GST_TENSOR_DATA_TYPE_INT8; + case kTfLiteFloat16: + return GST_TENSOR_DATA_TYPE_FLOAT16; + case kTfLiteFloat64: + return GST_TENSOR_DATA_TYPE_FLOAT64; + case kTfLiteUInt64: + return GST_TENSOR_DATA_TYPE_UINT64; + case kTfLiteUInt32: + return GST_TENSOR_DATA_TYPE_UINT32; + case kTfLiteUInt16: + return GST_TENSOR_DATA_TYPE_UINT16; + case kTfLiteInt4: + return GST_TENSOR_DATA_TYPE_INT4; +#ifdef TFLITE_HAS_BFLOAT16 + case kTfLiteBFloat16: + return GST_TENSOR_DATA_TYPE_BFLOAT16; +#endif + + default: + GST_FIXME ("GstTensorDataType currently does not have a mapping \ + for this type."); + g_assert_not_reached (); + } +} + +static gboolean +convert_tensor_info (const TfLiteTensor * tflite_tensor, + const gchar ** tname, GstTensorDataType * data_type, + gsize * dims_count, gsize ** out_dims) +{ + gsize j; + gsize *dims; + + if (tname) + *tname = TfLiteTensorName (tflite_tensor); + *dims_count = TfLiteTensorNumDims (tflite_tensor); + + if (*dims_count == 0) + return FALSE; + + dims = *out_dims = (gsize *) g_malloc0_n (*dims_count, sizeof (gsize)); + + if (tflite_tensor->dims_signature && tflite_tensor->dims_signature->size) { + for (j = 0; j < *dims_count; j++) { + if (tflite_tensor->dims_signature->data[j] < 0) + dims[j] = G_MAXSIZE; + else + dims[j] = tflite_tensor->dims_signature->data[j]; + } + } else { + for (j = 0; j < *dims_count; j++) + dims[j] = TfLiteTensorDim (tflite_tensor, j); + } + + *data_type = gst_tflite_convert_data_type (TfLiteTensorType (tflite_tensor)); + + return TRUE; +} + +static gchar * +build_dims_str (gsize dims_count, gsize * dims) +{ + GString *dims_gstr = g_string_new (""); + gsize j; + + if (dims_count == 0) + goto done; + + + if (dims[0] == G_MAXSIZE) + g_string_append (dims_gstr, "-1"); + else + g_string_append_printf (dims_gstr, "%zu", dims[0]); + + for (j = 1; j < dims_count; j++) + if (dims[j] == G_MAXSIZE) + g_string_append (dims_gstr, ",-1"); + else + g_string_append_printf (dims_gstr, ",%zu", dims[j]); + +done: + return g_string_free (dims_gstr, FALSE); +} + +static gboolean +_guess_tensor_data_type (GstTFliteInference * self, gsize dims_count, + gsize * dims, const gchar ** gst_format, gint * width, gint * height, + gint * channels, gboolean * planar) +{ + if (dims_count < 2 || dims_count > 4) { + GST_ERROR_OBJECT (self, + "Don't know how to interpret tensors with %zu dimensions", dims_count); + return FALSE; + } + + *planar = FALSE; + + switch (dims_count) { + case 2: + *gst_format = "GRAY8"; + *height = dims[0]; + *width = dims[1]; + break; + case 3: + if (dims[0] == 1 || dims[0] == 3) { + *channels = dims[0]; + if (dims[0] == 1) { + *gst_format = "GRAY8"; + } else { + *gst_format = "RGBP"; + *planar = TRUE; + } + *height = dims[1]; + *width = dims[2]; + } else if (dims[2] == 1 || dims[2] == 3) { + *channels = dims[2]; + if (dims[2] == 1) + *gst_format = "GRAY"; + else + *gst_format = "RGB"; + *height = dims[0]; + *width = dims[1]; + } else { + GST_ERROR_OBJECT (self, "Don't know how to interpret dims"); + return FALSE; + } + break; + case 4: + /* Assuming dims[0] is a batch */ + if (dims[1] == 1 || dims[1] == 3) { + *channels = dims[1]; + *planar = TRUE; + *height = dims[2]; + *width = dims[3]; + } else if (dims[3] == 1 || dims[3] == 3) { + *channels = dims[3]; + *height = dims[1]; + *width = dims[2]; + } else { + GST_ERROR_OBJECT (self, "Don't know how to interpret dims"); + return FALSE; + } + + if (*channels == 1) { + *gst_format = "GRAY8"; + *planar = FALSE; + } else if (*channels == 3) { + if (*planar) + *gst_format = "RGBP"; + else + *gst_format = "RGB"; + } else { + g_assert_not_reached (); + } + break; + } + + return TRUE; +} + +static gboolean +_get_input_params (GstTFliteInference * self, GstTensorDataType * data_type, + gint * width, gint * height, const gchar ** gst_format, + gint * channels, gboolean * planar) +{ + GstTFliteInferencePrivate *priv = + gst_tflite_inference_get_instance_private (self); + const TfLiteTensor *input_tensor; + gint i_size = TfLiteInterpreterGetInputTensorCount (priv->interpreter); + gsize dims_count; + gsize *dims = NULL; + gboolean ret; + + if (i_size != 1) { + GST_ERROR_OBJECT (self, "Currently only support model with a single" + " input tensor, but model has %d", i_size); + return FALSE; + } + + input_tensor = TfLiteInterpreterGetInputTensor (priv->interpreter, 0); + if (convert_tensor_info (input_tensor, NULL, data_type, &dims_count, &dims)) { + ret = _guess_tensor_data_type (self, dims_count, dims, gst_format, width, + height, channels, planar); + } else { + GST_ERROR_OBJECT (self, "Input tensor has no dimensions, rejecting"); + ret = FALSE; + } + g_free (dims); + + return ret; +} + +static gboolean +gst_tflite_inference_start (GstBaseTransform * trans) +{ + GstTFliteInference *self = GST_TFLITE_INFERENCE (trans); + GstTFliteInferencePrivate *priv = + gst_tflite_inference_get_instance_private (self); + gboolean ret = FALSE; + GstAnalyticsModelInfo *modelinfo = NULL; + gint i_size, o_size; + GstTFliteInferenceClass *klass = GST_TFLITE_INFERENCE_GET_CLASS (self); + GstStructure *tensors_s = NULL; + GValue v_tensors_set = G_VALUE_INIT; + + GST_OBJECT_LOCK (self); + if (gst_tflite_inference_has_session (self)) { + ret = TRUE; + goto done; + } + + if (priv->model_file == NULL) { + GST_ERROR_OBJECT (self, "model-file property not set"); + goto done; + } + + priv->model = TfLiteModelCreateFromFile (priv->model_file); + if (!priv->model) { + GST_ERROR_OBJECT (self, "Failed to mmap model %s", priv->model_file); + goto error; + } + + GST_DEBUG_OBJECT (self, "Loaded model %s", priv->model_file); + + priv->interpreter_options = TfLiteInterpreterOptionsCreate (); + if (priv->numberOfThreads != 0) { + TfLiteInterpreterOptionsSetNumThreads (priv->interpreter_options, + priv->numberOfThreads); + } + + if (klass->update_options) + if (!klass->update_options (self, priv->interpreter_options)) + goto error; + + priv->interpreter = TfLiteInterpreterCreate (priv->model, + priv->interpreter_options); + if (!priv->interpreter) { + GST_ERROR_OBJECT (self, "Failed to construct interpreter"); + goto error; + } + + modelinfo = gst_analytics_modelinfo_load (priv->model_file); + if (!modelinfo) { + GST_ERROR_OBJECT (self, "Failed to load modelinfo for %s. " + "This could be due to: file not found, unsupported version, " + "or invalid file format.", priv->model_file); + goto error; + } + + i_size = TfLiteInterpreterGetInputTensorCount (priv->interpreter); + if (i_size != 1) { + GST_ERROR_OBJECT (self, "Currently only support model with a single" + " input tensor, but model has %d", i_size); + goto error; + } + + { + const guint i = 0; + const TfLiteTensor *tflite_tensor = + TfLiteInterpreterGetInputTensor (priv->interpreter, i); + const gchar *tname; + GstTensorDataType data_type; + gsize dims_count; + gsize *dims; + gchar *tensor_name = NULL; + gint width = 0, height = 0; + const gchar *gst_format = NULL; + + if (!_get_input_params (self, &data_type, &width, &height, &gst_format, + &priv->channels, &priv->planar)) { + GST_ERROR_OBJECT (self, "Failed to get parameters"); + goto error; + } + + if (!convert_tensor_info (tflite_tensor, &tname, &data_type, + &dims_count, &dims)) { + GST_ERROR_OBJECT (self, "Rejecting input_tensor%d:%s with no dims", + i, tname); + goto error; + } + + tensor_name = gst_analytics_modelinfo_find_tensor_name (modelinfo, + MODELINFO_DIRECTION_INPUT, i, tname, data_type, dims_count, dims); + + if (tensor_name == NULL) { + gchar *dims_str = build_dims_str (dims_count, dims); + GST_DEBUG_OBJECT (self, + "Model info file doesn't contain info for input_tensor%u:%s matching the" + " type %s and dims %s", i, tname, + gst_tensor_data_type_get_name (data_type), dims_str); + g_free (dims); + g_free (dims_str); + } else { + + /* Get per-channel scales and offsets from modelinfo */ + /* For video input, we assume uint8 pixel values in range 0, 255 */ + { + gdouble *input_mins = NULL; + gdouble *input_maxs = NULL; + gsize num_target_ranges; + gsize j; + + /* First, get the number of target ranges from modelinfo to allocate input ranges */ + if
(!gst_analytics_modelinfo_get_target_ranges (modelinfo, tensor_name, + &num_target_ranges, &input_mins, &input_maxs)) { + GST_ERROR_OBJECT (self, + "Failed to get target ranges from modelinfo for tensor %s", + tensor_name); + g_free (tensor_name); + goto error; + } + + /* Free the target ranges - we only needed them to know the count */ + g_free (input_mins); + g_free (input_maxs); + + /* Prepare input ranges - for video uint8 input, range is [0, 255] for all channels */ + input_mins = g_new (gdouble, num_target_ranges); + input_maxs = g_new (gdouble, num_target_ranges); + for (j = 0; j < num_target_ranges; j++) { + input_mins[j] = 0.0; + input_maxs[j] = 255.0; + } + + if (!gst_analytics_modelinfo_get_input_scales_offsets (modelinfo, + tensor_name, num_target_ranges, input_mins, input_maxs, + &priv->num_channels, &priv->scales, &priv->offsets)) { + GST_ERROR_OBJECT (self, "Failed to get scales/offsets for tensor %s", + tensor_name); + g_free (input_mins); + g_free (input_maxs); + g_free (tensor_name); + goto error; + } + + g_free (input_mins); + g_free (input_maxs); + } + + } + + gst_clear_caps (&priv->model_incaps); + priv->model_incaps = gst_caps_new_empty_simple ("video/x-raw"); + if (width && height) + gst_caps_set_simple (priv->model_incaps, "width", G_TYPE_INT, width, + "height", G_TYPE_INT, height, NULL); + + /* Check if all channels are passthrough (scale=1.0, offset=0.0) */ + gboolean is_passthrough = TRUE; + if (priv->scales && priv->offsets) { + for (gsize c = 0; c < priv->num_channels; c++) { + if (priv->scales[c] != 1.0 || priv->offsets[c] != 0.0) { + is_passthrough = FALSE; + break; + } + } + } + + if (data_type == GST_TENSOR_DATA_TYPE_UINT8 && gst_format && is_passthrough) + gst_caps_set_simple (priv->model_incaps, "format", G_TYPE_STRING, + gst_format, NULL); + + g_free (tensor_name); + } + + if (TfLiteInterpreterAllocateTensors (priv->interpreter) != kTfLiteOk) { + GST_ERROR_OBJECT (self, "Failed to allocate tensors"); + goto error; + } + + gst_clear_caps 
(&priv->model_outcaps); + o_size = TfLiteInterpreterGetOutputTensorCount (priv->interpreter); + + if (o_size != 0) { + tensors_s = gst_structure_new_empty ("tensorgroups"); + g_value_init (&v_tensors_set, GST_TYPE_UNIQUE_LIST); + } + + for (guint i = 0; i < o_size; i++) { + const TfLiteTensor *tflite_tensor = + TfLiteInterpreterGetOutputTensor (priv->interpreter, i); + const gchar *tname; + GstTensorDataType data_type; + gsize dims_count; + gsize *dims; + gchar *tensor_name = NULL; + + if (!convert_tensor_info (tflite_tensor, &tname, &data_type, + &dims_count, &dims)) { + GST_WARNING_OBJECT (self, "Skipping output_tensor%d:%s with no dims", + i, tname); + continue; + } + + tensor_name = gst_analytics_modelinfo_find_tensor_name (modelinfo, + MODELINFO_DIRECTION_OUTPUT, i, tname, data_type, dims_count, dims); + + + gchar *dims_str = build_dims_str (dims_count, dims); + if (tensor_name == NULL) { + GST_ERROR_OBJECT (self, + "Model info file doesn't contain info for output_tensor%u:%s matching the" + " type %s and dims %s", i, tname, + gst_tensor_data_type_get_name (data_type), dims_str); + g_free (dims); + g_free (dims_str); + g_ptr_array_set_size (priv->tensor_templates, 0); + goto error; + } + + GstTensor *t = gst_tensor_alloc (dims_count); + + gchar *id = gst_analytics_modelinfo_get_id (modelinfo, tensor_name); + GST_DEBUG_OBJECT (self, "Mapping output_tensor%d:%s of type %s and" + " dims %s to id %s", i, tname, + gst_tensor_data_type_get_name (data_type), dims_str, id); + g_free (dims_str); + + /* Get dims-order from modelinfo (defaults to row-major if not specified) */ + GstTensorDimOrder dims_order = + gst_analytics_modelinfo_get_dims_order (modelinfo, tensor_name); + const gchar *dims_order_str = + dims_order == + GST_TENSOR_DIM_ORDER_COL_MAJOR ? 
"col-major" : "row-major"; + + t->id = gst_analytics_modelinfo_get_quark_id (modelinfo, tensor_name); + t->layout = GST_TENSOR_LAYOUT_CONTIGUOUS; + t->data_type = data_type; + t->dims_order = dims_order; + memcpy (t->dims, dims, sizeof (gsize) * t->num_dims); + + GstStructure *tensor_desc = gst_structure_new_empty ("tensor/strided"); + + /* Setting dims */ + GValue val_dims = G_VALUE_INIT, val = G_VALUE_INIT; + GValue val_caps = G_VALUE_INIT; + GValue val_dt = G_VALUE_INIT; + + gst_value_array_init (&val_dims, t->num_dims); + g_value_init (&val, G_TYPE_INT); + g_value_init (&val_caps, GST_TYPE_CAPS); + g_value_init (&val_dt, G_TYPE_STRING); + + for (gsize i = 0; i < t->num_dims; i++) { + g_value_set_int (&val, t->dims[i] ? t->dims[i] : 0); + gst_value_array_append_value (&val_dims, &val); + } + + gst_structure_set (tensor_desc, "dims-order", G_TYPE_STRING, dims_order_str, + "tensor-id", G_TYPE_STRING, id, NULL); + + gst_structure_take_value (tensor_desc, "dims", &val_dims); + g_value_unset (&val); + + /* Setting datatype */ + g_value_set_string (&val_dt, gst_tensor_data_type_get_name (t->data_type)); + gst_structure_take_value (tensor_desc, "type", &val_dt); + + /* tensor caps */ + GstCaps *tensor_caps = gst_caps_new_full (tensor_desc, NULL); + + /* Append tensor caps to set */ + gst_value_set_caps (&val_caps, tensor_caps); + gst_caps_unref (tensor_caps); + gst_value_unique_list_append_and_take_value (&v_tensors_set, &val_caps); + + + if (i == (o_size - 1)) { + gchar *gid = gst_analytics_modelinfo_get_group_id (modelinfo); + gst_structure_set_value (tensors_s, gid, &v_tensors_set); + g_free (gid); + + priv->model_outcaps = gst_caps_new_simple ("video/x-raw", "tensors", + GST_TYPE_STRUCTURE, tensors_s, NULL); + } + g_free (dims); + + g_ptr_array_add (priv->tensor_templates, t); + + g_free (tensor_name); + } + + TfLiteTensor *itensor = TfLiteInterpreterGetInputTensor (priv->interpreter, + 0); + if (TfLiteTensorType (itensor) == kTfLiteFloat32) { + GST_DEBUG_OBJECT (self, 
"Floating point Tensorflow Lite Model"); + } + + ret = TRUE; + +done: + if (modelinfo) + gst_analytics_modelinfo_free (modelinfo); + + GST_OBJECT_UNLOCK (self); + + return ret; + +error: + + GST_ERROR_OBJECT (self, + "Unable to create TFLITE session. Inference is disabled."); + + GST_BASE_TRANSFORM_GET_CLASS (self)->stop (trans); + + goto done; +} + +static gboolean +gst_tflite_inference_stop (GstBaseTransform * trans) +{ + GstTFliteInference *self = GST_TFLITE_INFERENCE (trans); + GstTFliteInferencePrivate *priv = + gst_tflite_inference_get_instance_private (self); + + if (priv->interpreter) + TfLiteInterpreterDelete (priv->interpreter); + priv->interpreter = NULL; + + if (priv->interpreter_options) + TfLiteInterpreterOptionsDelete (priv->interpreter_options); + priv->interpreter_options = NULL; + + if (priv->model) + TfLiteModelDelete (priv->model); + priv->model = NULL; + + gst_clear_caps (&priv->model_incaps); + + g_ptr_array_set_size (priv->tensor_templates, 0); + + return TRUE; +} + +static GstCaps * +gst_tflite_inference_transform_caps (GstBaseTransform * trans, + GstPadDirection direction, GstCaps * caps, GstCaps * filter_caps) +{ + GstTFliteInference *self = GST_TFLITE_INFERENCE (trans); + GstTFliteInferencePrivate *priv = + gst_tflite_inference_get_instance_private (self); + GstCaps *other_caps, *restrictions; + + if (priv->model_incaps == NULL) { + other_caps = gst_caps_ref (caps); + goto done; + } + + GST_DEBUG_OBJECT (self, "Applying caps restrictions: %" GST_PTR_FORMAT, + priv->model_incaps); + + if (direction == GST_PAD_SINK) { + restrictions = gst_caps_intersect_full (caps, priv->model_incaps, + GST_CAPS_INTERSECT_FIRST); + other_caps = gst_caps_intersect (restrictions, priv->model_outcaps); + gst_caps_unref (restrictions); + } else if (direction == GST_PAD_SRC) { + /* Remove tensors from caps if no upstream element produce tensors. 
*/ + GstCaps *tmp_caps = gst_caps_copy (caps); + + if (!gst_caps_is_empty (tmp_caps)) { + GstStructure *tstruct = gst_caps_get_structure (tmp_caps, 0); + gst_structure_remove_field (tstruct, "tensors"); + } + + other_caps = gst_caps_intersect_full (tmp_caps, priv->model_incaps, + GST_CAPS_INTERSECT_FIRST); + gst_caps_unref (tmp_caps); + } + + +done: + if (filter_caps) { + GstCaps *tmp = gst_caps_intersect_full (other_caps, filter_caps, + GST_CAPS_INTERSECT_FIRST); + gst_caps_replace (&other_caps, tmp); + gst_caps_unref (tmp); + } + + return other_caps; +} + +static gboolean +gst_tflite_inference_set_caps (GstBaseTransform * trans, GstCaps * incaps, + GstCaps * outcaps) +{ + GstTFliteInference *self = GST_TFLITE_INFERENCE (trans); + GstTFliteInferencePrivate *priv = + gst_tflite_inference_get_instance_private (self); + + if (!gst_video_info_from_caps (&priv->video_info, incaps)) { + GST_ERROR_OBJECT (self, "Failed to parse caps"); + return FALSE; + } + + return TRUE; +} + +static GstFlowReturn +gst_tflite_inference_transform_ip (GstBaseTransform * trans, GstBuffer * buf) +{ + if (!gst_base_transform_is_passthrough (trans) + && !gst_tflite_inference_process (trans, buf)) { + GST_ELEMENT_ERROR (trans, STREAM, FAILED, + (NULL), ("TFLITE inference failed")); + return GST_FLOW_ERROR; + } + + return GST_FLOW_OK; +} + +#define _convert_image_scale_offset(Type, dst, srcPtr, \ + srcSamplesPerPixel, stride, scales, offsets) \ +G_STMT_START { \ + size_t destIndex = 0; \ + Type tmp; \ + \ + if (!planar) { \ + for (int32_t j = 0; j < dstHeight; ++j) { \ + for (int32_t i = 0; i < dstWidth; ++i) { \ + for (int32_t k = 0; k < dstChannels; ++k) { \ + tmp = *srcPtr[k]; \ + dst[destIndex++] = (Type)(tmp * scales[k] + offsets[k]); \ + srcPtr[k] += srcSamplesPerPixel; \ + } \ + } \ + /* correct for stride */ \ + for (uint32_t k = 0; k < 3; ++k) \ + srcPtr[k] += stride - srcSamplesPerPixel * dstWidth; \ + } \ + } else { \ + size_t frameSize = dstWidth * dstHeight; \ + Type *destPtr[3] = { dst, dst + 
frameSize, dst + 2 * frameSize }; \ + for (int32_t j = 0; j < dstHeight; ++j) { \ + for (int32_t i = 0; i < dstWidth; ++i) { \ + for (int32_t k = 0; k < dstChannels; ++k) { \ + tmp = *srcPtr[k]; \ + destPtr[k][destIndex] = (Type)(tmp * scales[k] + offsets[k]); \ + srcPtr[k] += srcSamplesPerPixel; \ + } \ + destIndex++; \ + } \ + /* correct for stride */ \ + for (uint32_t k = 0; k < 3; ++k) \ + srcPtr[k] += stride - srcSamplesPerPixel * dstWidth; \ + } \ + } \ +} \ +G_STMT_END; + +static void +convert_image_scale_offset_u8 (guint8 * dst, gint dstWidth, gint dstHeight, + gint dstChannels, gboolean planar, guint8 ** srcPtr, + guint8 srcSamplesPerPixel, guint32 stride, const gdouble * scales, + const gdouble * offsets) +{ + _convert_image_scale_offset (guint8, dst, srcPtr, srcSamplesPerPixel, + stride, scales, offsets); +} + +static void +convert_image_scale_offset_f32 (gfloat * dst, gint dstWidth, gint dstHeight, + gint dstChannels, gboolean planar, guint8 ** srcPtr, + guint8 srcSamplesPerPixel, guint32 stride, const gdouble * scales, + const gdouble * offsets) +{ + _convert_image_scale_offset (gfloat, dst, srcPtr, srcSamplesPerPixel, + stride, scales, offsets); +} + +static gboolean +gst_tflite_inference_process (GstBaseTransform * trans, GstBuffer * buf) +{ + GstTFliteInference *self = GST_TFLITE_INFERENCE (trans); + GstTFliteInferencePrivate *priv = + gst_tflite_inference_get_instance_private (self); + GstMapInfo info; + guint8 *srcPtr[3]; + gsize srcSamplesPerPixel = 3; + GstTensorDataType datatype; + + if (gst_buffer_map (buf, &info, GST_MAP_READ)) { + + srcPtr[0] = info.data; + srcPtr[1] = info.data + 1; + srcPtr[2] = info.data + 2; + + switch (priv->video_info.finfo->format) { + case GST_VIDEO_FORMAT_RGBA: + srcSamplesPerPixel = 4; + break; + case GST_VIDEO_FORMAT_BGRA: + srcSamplesPerPixel = 4; + srcPtr[0] = info.data + 2; + srcPtr[1] = info.data + 1; + srcPtr[2] = info.data + 0; + break; + case GST_VIDEO_FORMAT_ARGB: + srcSamplesPerPixel = 4; + srcPtr[0] = info.data + 1; + 
srcPtr[1] = info.data + 2; + srcPtr[2] = info.data + 3; + break; + case GST_VIDEO_FORMAT_ABGR: + srcSamplesPerPixel = 4; + srcPtr[0] = info.data + 3; + srcPtr[1] = info.data + 2; + srcPtr[2] = info.data + 1; + break; + case GST_VIDEO_FORMAT_BGR: + srcPtr[0] = info.data + 2; + srcPtr[1] = info.data + 1; + srcPtr[2] = info.data + 0; + break; + default: + break; + } + + TfLiteTensor *tensor = TfLiteInterpreterGetInputTensor (priv->interpreter, + 0); + + guint width = GST_VIDEO_INFO_WIDTH (&priv->video_info); + guint height = GST_VIDEO_INFO_HEIGHT (&priv->video_info); + guint32 stride = priv->video_info.stride[0]; + guint channels; + if (GST_VIDEO_INFO_IS_GRAY (&priv->video_info)) { + channels = 1; + } else if (GST_VIDEO_INFO_IS_RGB (&priv->video_info)) { + channels = 3; + } else { + g_assert_not_reached (); + } + + + datatype = gst_tflite_convert_data_type (TfLiteTensorType (tensor)); + switch (datatype) { + case GST_TENSOR_DATA_TYPE_UINT8:{ + uint8_t *dest = (uint8_t *) TfLiteTensorData (tensor); + + if (dest == NULL) + return false; + convert_image_scale_offset_u8 (dest, width, height, channels, + priv->planar, srcPtr, srcSamplesPerPixel, stride, priv->scales, + priv->offsets); + break; + } + case GST_TENSOR_DATA_TYPE_FLOAT32:{ + float *dest = (float *) TfLiteTensorData (tensor); + + if (dest == NULL) + return false; + convert_image_scale_offset_f32 (dest, width, height, channels, + priv->planar, srcPtr, srcSamplesPerPixel, stride, priv->scales, + priv->offsets); + break; + } + default:{ + GST_ERROR_OBJECT (self, "Data type not handled"); + return false; + } + break; + } + + /* Run inference */ + if (TfLiteInterpreterInvoke (priv->interpreter) != kTfLiteOk) { + GST_ERROR_OBJECT (self, "Failed to invoke tflite!"); + return false; + } + + gsize num_tensors = + TfLiteInterpreterGetOutputTensorCount (priv->interpreter); + + g_assert (num_tensors == priv->tensor_templates->len); + GstTensor **tensors = + (GstTensor **) g_malloc0_n (num_tensors, sizeof (gpointer)); + + for (size_t i = 0; i 
< num_tensors; i++) { + + const TfLiteTensor *output_tensor = + TfLiteInterpreterGetOutputTensor (priv->interpreter, i); + + tensors[i] = gst_tensor_alloc (TfLiteTensorNumDims (output_tensor)); + memcpy (tensors[i], g_ptr_array_index (priv->tensor_templates, i), + sizeof (GstTensor)); + tensors[i]->num_dims = TfLiteTensorNumDims (output_tensor); + + for (gsize j = 0; j < tensors[i]->num_dims; j++) + tensors[i]->dims[j] = TfLiteTensorDim (output_tensor, j); + + tensors[i]->data = + gst_buffer_new_allocate (NULL, TfLiteTensorByteSize (output_tensor), + NULL); + + gst_buffer_fill (tensors[i]->data, 0, TfLiteTensorData (output_tensor), + TfLiteTensorByteSize (output_tensor)); + } + + GstTensorMeta *tmeta = gst_buffer_add_tensor_meta (buf); + gst_tensor_meta_set (tmeta, num_tensors, tensors); + + if (!tmeta) + return FALSE; + + GST_TRACE_OBJECT (trans, "Num tensors: %zu", tmeta->num_tensors); + gst_buffer_unmap (buf, &info); + } + + return TRUE; +}
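The per-channel affine conversion performed by the `_convert_image_scale_offset` macro above (out = in * scale[k] + offset[k], written either interleaved or planar) can be sketched in NumPy for reference. This is an illustrative model of the algorithm only, not part of the plugin; the function and array names are hypothetical:

```python
import numpy as np

def convert_image_scale_offset(frame, scales, offsets, planar):
    """frame: H x W x C uint8 array; scales/offsets: one double per channel.

    Applies out = in * scale[c] + offset[c] per channel, then emits the
    result interleaved (HWC) or planar (CHW, as for the "RGBP" format).
    """
    out = frame.astype(np.float64) * np.asarray(scales) + np.asarray(offsets)
    if planar:
        out = np.transpose(out, (2, 0, 1))  # HWC -> CHW
    return out.astype(np.float32)
```

For example, normalizing a uint8 frame into [0, 1] for a float32 model input would use `scales = [1/255.0] * 3` and `offsets = [0.0] * 3`.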
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/tflite/gsttfliteinference.h
Added
@@ -0,0 +1,48 @@ +/* + * GStreamer gstreamer-tfliteinference + * Copyright (C) 2024 Collabora Ltd + * + * gsttfliteinference.h + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifndef __GST_TFLITE_INFERENCE_H__ +#define __GST_TFLITE_INFERENCE_H__ + +#include <gst/gst.h> +#include <gst/base/base.h> + +#include "tensorflow/lite/c/c_api.h" + +G_BEGIN_DECLS + +#define GST_TYPE_TFLITE_INFERENCE (gst_tflite_inference_get_type()) +G_DECLARE_DERIVABLE_TYPE (GstTFliteInference, gst_tflite_inference, GST, + TFLITE_INFERENCE, GstBaseTransform) + +GST_ELEMENT_REGISTER_DECLARE (tflite_inference) +struct _GstTFliteInferenceClass +{ + GstBaseTransformClass basetransform; + + gboolean (*update_options) (GstTFliteInference * self, + TfLiteInterpreterOptions * interpreter_options); +}; + +G_END_DECLS + +#endif /* __GST_TFLITE_INFERENCE_H__ */
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/tflite/gsttflitevsiinference.c
Added
@@ -0,0 +1,205 @@ +/* + * GStreamer + * Copyright (C) 2025 Collabora Ltd. + * + * gsttflitevsiinference.c + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +/** + * SECTION:element-tflitevsiinference + * @short_description: Run a TFLite inference model on video buffers + * using a Verisilicon accelerator + * + * This element can apply a TFLite model to video buffers. It attaches + * the tensor output to the buffer as a @ref GstTensorMeta. + * + * To install TFLite on your system, follow the instructions in the + * README.md provided with this plugin. + * + * ## Example launch command: + * + * GST_DEBUG=ssdobjectdetector:5 \ + * gst-launch-1.0 filesrc location=tflite-models/images/bus.jpg ! \ + * jpegdec ! videoconvert ! tflitevsiinference model-file=tflite-models/models/ssd_mobilenet_v1_coco.tflite ! \ + * ssdobjectdetector label-file=tflite-models/labels/COCO_classes.txt ! videoconvert ! imagefreeze ! 
autovideosink + * + * Since: 1.28 + */ +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gsttflitevsiinference.h" + +#include "VX/vsi_npu_custom_op.h" + +#include <tensorflow/lite/delegates/external/external_delegate.h> + +typedef struct _GstTFliteVsiInference +{ + GstTFliteInference parent; + + gchar *delegate_path; + + TfLiteDelegate *tflite_delegate; +} GstTFliteVsiInference; + +GST_DEBUG_CATEGORY (tflite_vsi_inference_debug); +#define GST_CAT_DEFAULT tflite_vsi_inference_debug + +GST_ELEMENT_REGISTER_DEFINE (tflite_vsi_inference, + "tflitevsiinference", GST_RANK_NONE, GST_TYPE_TFLITE_VSI_INFERENCE); + +enum +{ + PROP_0, + PROP_DELEGATE, +}; + +#define DEFAULT_DELEGATE_PATH "libvx_delegate.so.2" + +static void gst_tflite_vsi_inference_set_property (GObject * object, + guint prop_id, const GValue * value, GParamSpec * pspec); +static void gst_tflite_vsi_inference_get_property (GObject * object, + guint prop_id, GValue * value, GParamSpec * pspec); +static void gst_tflite_vsi_inference_finalize (GObject * object); + +static gboolean gst_tflite_vsi_update_options (GstTFliteInference * inf, + TfLiteInterpreterOptions * interpreter_options); +static gboolean gst_tflite_vsi_inference_stop (GstBaseTransform * trans); + +G_DEFINE_TYPE (GstTFliteVsiInference, gst_tflite_vsi_inference, + GST_TYPE_TFLITE_INFERENCE); + +static void +gst_tflite_vsi_inference_class_init (GstTFliteVsiInferenceClass * klass) +{ + GObjectClass *gobject_class = (GObjectClass *) klass; + GstElementClass *element_class = (GstElementClass *) klass; + GstBaseTransformClass *basetransform_class = (GstBaseTransformClass *) klass; + GstTFliteInferenceClass *tflite_class = (GstTFliteInferenceClass *) klass; + + GST_DEBUG_CATEGORY_INIT (tflite_vsi_inference_debug, + "tflitevsiinference", 0, "TFLlite vsi inference"); + + gst_element_class_set_static_metadata (element_class, + "tflitevsiinference", + "Filter/Effect", + "Apply neural network to video frames and create tensor output" + " using a 
Verisilicon accelerator", + "Olivier Crête <olivier.crete@collabora.com>"); + + gobject_class->set_property = gst_tflite_vsi_inference_set_property; + gobject_class->get_property = gst_tflite_vsi_inference_get_property; + gobject_class->finalize = gst_tflite_vsi_inference_finalize; + basetransform_class->stop = gst_tflite_vsi_inference_stop; + tflite_class->update_options = gst_tflite_vsi_update_options; + + g_object_class_install_property (G_OBJECT_CLASS (klass), + PROP_DELEGATE, + g_param_spec_string ("delegate", + "TfLite Delegate", "Path to the VSI TfLite delegate library", + DEFAULT_DELEGATE_PATH, G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS)); +} + +static void +gst_tflite_vsi_inference_init (GstTFliteVsiInference * self) +{ + self->delegate_path = g_strdup (DEFAULT_DELEGATE_PATH); +} + +static void +gst_tflite_vsi_inference_set_property (GObject * object, guint prop_id, + const GValue * value, GParamSpec * pspec) +{ + GstTFliteVsiInference *self = GST_TFLITE_VSI_INFERENCE (object); + + switch (prop_id) { + case PROP_DELEGATE: + g_free (self->delegate_path); + self->delegate_path = g_value_dup_string (value); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static void +gst_tflite_vsi_inference_get_property (GObject * object, guint prop_id, + GValue * value, GParamSpec * pspec) +{ + GstTFliteVsiInference *self = GST_TFLITE_VSI_INFERENCE (object); + + switch (prop_id) { + case PROP_DELEGATE: + g_value_set_string (value, self->delegate_path); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static void +gst_tflite_vsi_inference_finalize (GObject * object) +{ + GstTFliteVsiInference *self = GST_TFLITE_VSI_INFERENCE (object); + + g_free (self->delegate_path); + + G_OBJECT_CLASS (gst_tflite_vsi_inference_parent_class)->finalize (object); +} + +static gboolean +gst_tflite_vsi_update_options (GstTFliteInference * inf, + TfLiteInterpreterOptions * 
interpreter_options) +{ + GstTFliteVsiInference *self = GST_TFLITE_VSI_INFERENCE (inf); + TfLiteExternalDelegateOptions external_delegate_options; + + external_delegate_options = + TfLiteExternalDelegateOptionsDefault (self->delegate_path); + + self->tflite_delegate = + TfLiteExternalDelegateCreate (&external_delegate_options); + + TfLiteInterpreterOptionsAddDelegate (interpreter_options, + self->tflite_delegate); + + TfLiteInterpreterOptionsAddCustomOp (interpreter_options, + kNbgCustomOp, Register_VSI_NPU_PRECOMPILED (), 1, 1); + + return TRUE; +} + +static gboolean +gst_tflite_vsi_inference_stop (GstBaseTransform * trans) +{ + GstTFliteVsiInference *self = GST_TFLITE_VSI_INFERENCE (trans); + gboolean ret; + + ret = GST_BASE_TRANSFORM_CLASS (gst_tflite_vsi_inference_parent_class) + ->stop (trans); + + if (self->tflite_delegate) + TfLiteExternalDelegateDelete (self->tflite_delegate); + self->tflite_delegate = NULL; + + return ret; +}
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/tflite/gsttflitevsiinference.h
Added
@@ -0,0 +1,39 @@ +/* + * GStreamer gstreamer-tflitevsiinference + * Copyright (C) 2025 Collabora Ltd + * + * gsttflitevsiinference.h + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifndef __GST_TFLITE_VSI_INFERENCE_H__ +#define __GST_TFLITE_VSI_INFERENCE_H__ + +#include "gsttfliteinference.h" + + +G_BEGIN_DECLS + +#define GST_TYPE_TFLITE_VSI_INFERENCE (gst_tflite_vsi_inference_get_type()) +G_DECLARE_FINAL_TYPE (GstTFliteVsiInference, gst_tflite_vsi_inference, GST, + TFLITE_VSI_INFERENCE, GstTFliteInference) + +GST_ELEMENT_REGISTER_DECLARE (tflite_vsi_inference) + +G_END_DECLS + +#endif /* __GST_TFLITE_VSI_INFERENCE_H__ */
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/tflite/meson.build
Added
@@ -0,0 +1,88 @@ +tflite_sources = [ + 'gsttflite.c', + 'gsttfliteinference.c', +] + +tflite_headers = [ + 'gsttfliteinference.h', + 'gsttfliteedgetpuinference.h', + 'gsttflitevsiinference.h', +] + +edgetpu_sources = [ + 'gsttfliteedgetpuinference.c' +] + +vsi_sources = [ + 'gsttflitevsiinference.c', + 'VX/vsi_npu_custom_op.cc' +] + +doc_sources = [] +foreach s: tflite_sources + tflite_headers + edgetpu_sources + doc_sources += meson.current_source_dir() / s +endforeach + +plugin_sources += { + 'tflite': pathsep.join(doc_sources) +} + +if get_option('tflite').disabled() + subdir_done() +endif + +tensorflow_lite_dep = cc.find_library('tensorflowlite_c', required: false) + +if not tensorflow_lite_dep.found() + tensorflow_lite_dep = cc.find_library('tensorflow-lite', + required: get_option('tflite')) + + if not cc.has_function('TfLiteInterpreterCreate', + dependencies: tensorflow_lite_dep, + required: get_option('tflite')) + tensorflow_lite_dep = disabler() + endif +endif + +tensorflow_lite_header_found = cc.has_header('tensorflow/lite/c/c_api.h', + dependencies: tensorflow_lite_dep, + required: get_option('tflite')) + +if tensorflow_lite_dep.found() and tensorflow_lite_header_found + tflite_extra_dep = [] + tflite_c_args = [] + + if cc.has_header_symbol('tensorflow/lite/c/c_api.h', 'kTfLiteBFloat16', + dependencies: tensorflow_lite_dep) + tflite_c_args += '-DTFLITE_HAS_BFLOAT16' + endif + + edgetpu_dep = cc.find_library('edgetpu', + required : get_option('tflite-edgetpu')) + + if edgetpu_dep.found() and cc.has_header('libedgetpu/edgetpu_c.h', + dependencies: edgetpu_dep, + required: get_option('tflite-edgetpu')) + tflite_c_args += ['-DEDGETPU', '-DTFLITE_USE_OPAQUE_DELEGATE=0', + '-DTFLITE_WITH_STABLE_ABI=0'] + tflite_sources += edgetpu_sources + tflite_extra_dep += edgetpu_dep + endif + + if get_option('tflite-vsi').allowed() + tflite_sources += vsi_sources + tflite_c_args += ['-Wno-aggregate-return', '-DTFLITE_VSI'] + endif + + gsttflite = library('gsttflite', + tflite_sources, + c_args : gst_plugins_bad_args + tflite_c_args, + include_directories : [configinc, libsinc], + dependencies : [gstbase_dep, gstvideo_dep, gstanalytics_dep, + tensorflow_lite_dep, libm, gio_dep, tflite_extra_dep], + install : true, + install_dir : plugins_install_dir, + ) + + plugins += gsttflite +endif
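Assuming the option names probed by the checks above (`tflite`, `tflite-edgetpu`, `tflite-vsi`) are declared in the project's `meson_options.txt`, the optional backends would be toggled at configure time roughly as follows. This is an illustrative configure invocation, not taken from the source:

```shell
# Enable the TFLite plugin; let the EdgeTPU and Verisilicon backends
# be picked up automatically if their libraries are found
meson setup build \
  -Dtflite=enabled \
  -Dtflite-edgetpu=auto \
  -Dtflite-vsi=auto
```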
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/ttml/gstttmlrender.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/ttml/gstttmlrender.c
Changed
@@ -695,7 +695,9 @@ if (gst_caps_is_any (peer_caps)) { /* if peer returns ANY caps, return filtered src pad template caps */ - caps = gst_caps_copy (gst_pad_get_pad_template_caps (srcpad)); + GstCaps *tcaps = gst_pad_get_pad_template_caps (srcpad); + caps = gst_caps_copy (tcaps); + gst_caps_unref (tcaps); } else { /* duplicate caps which contains the composition into one version with @@ -758,7 +760,9 @@ if (gst_caps_is_any (peer_caps)) { /* if peer returns ANY caps, return filtered sink pad template caps */ - caps = gst_caps_copy (gst_pad_get_pad_template_caps (sinkpad)); + GstCaps *tcaps = gst_pad_get_pad_template_caps (sinkpad); + caps = gst_caps_copy (tcaps); + gst_caps_unref (tcaps); } else {
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/ttml/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/ttml/meson.build
Changed
@@ -1,4 +1,5 @@ -libxml_dep = dependency('libxml-2.0', version : '>= 2.9.2', required : get_option('ttml')) +libxml_dep = dependency('libxml-2.0', version : '>= 2.9.2', required : get_option('ttml'), + default_options: {'python': false}) pango_dep = dependency('pango', required : get_option('ttml')) cairo_dep = dependency('cairo', required : get_option('ttml')) pangocairo_dep = dependency('pangocairo', required : get_option('ttml'))
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/vmaf
Added
+(directory)
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/vmaf/gstvmafelement.c
Added
@@ -0,0 +1,1199 @@ +/* VMAF plugin + * Copyright (C) 2021 Hudl + * @author: Casey Bateman <Casey.Bateman@hudl.com> + * Copyright (C) 2025 Fluendo S.A. <contact@fluendo.com> + * Authors: Diego Nieto <dnieto@fluendo.com> + * Authors: Andoni Morales Alastruey <amorales@fluendo.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ +/** + * SECTION:element-vmaf + * @title: vmaf + * @short_description: Provides Video Multi-Method Assessment Fusion quality metrics + * + * VMAF (Video Multi-Method Assessment Fusion) is a perceptual video quality + * assessment algorithm developed by Netflix. It combines multiple elementary + * quality metrics (VIF, DLM, Motion, ADM) and fuses them using a machine + * learning model to predict the perceived video quality as experienced by + * human viewers. VMAF scores range from 0 to 100, where higher scores indicate + * better perceptual quality. 
+ *
+ * This element is useful for:
+ * - Evaluating video encoding quality and compression efficiency
+ * - Comparing different encoding settings or codecs
+ * - Quality assurance in video processing pipelines
+ * - A/B testing of video content
+ *
+ * For more information about VMAF, see: https://github.com/Netflix/vmaf
+ *
+ * VMAF will perform perceptual video quality analysis on a pair of input
+ * pads: the first pad is the reference video, the second is the distorted video.
+ *
+ * The image output will be the reference video pad, ref_pad.
+ *
+ * VMAF will post a message containing a structure named "VMAF" at EOS,
+ * or for every reference frame if the frame-message property is set to true.
+ *
+ * The VMAF message structure contains the following fields:
+ *
+ * - "timestamp" #G_TYPE_UINT64 Buffer timestamp in nanoseconds
+ * - "stream-time" #G_TYPE_UINT64 Stream time in nanoseconds
+ * - "running-time" #G_TYPE_UINT64 Running time in nanoseconds
+ * - "duration" #G_TYPE_UINT64 Duration in nanoseconds
+ * - "score" #G_TYPE_DOUBLE The VMAF quality score (0-100, higher is better)
+ * - "type" #G_TYPE_STRING Message type: "frame" = per-frame score, "pooled" = aggregate score
+ * - "index" #G_TYPE_INT Frame index (only present for type="frame", per-frame messages)
+ * - "psnr-y" #G_TYPE_DOUBLE Peak Signal-to-Noise Ratio for Y (luma) channel in dB
+ *   (only present if psnr property is enabled)
+ * - "ssim" #G_TYPE_DOUBLE Structural Similarity Index (0-1, higher is better)
+ *   (only present if ssim property is enabled)
+ * - "ms-ssim" #G_TYPE_DOUBLE Multi-Scale Structural Similarity Index (0-1, higher is better)
+ *   (only present if ms-ssim property is enabled)
+ *
+ * The "type" field indicates whether the message contains a score for an individual
+ * frame (type="frame") or a pooled score for the entire stream up to that point (type="pooled").
+ * Pooled scores are calculated at EOS using the pool-method property (mean, min, max,
+ * or harmonic mean).
+ *
+ * The timing fields (timestamp, stream-time, running-time, duration) allow correlation
+ * of VMAF scores with specific video frames in the pipeline.
+ *
+ * Per-frame messages (type="frame") include an "index" field indicating the frame number.
+ * With sub-sampling enabled, scores are only computed for frames at the sub-sampling
+ * rate, except motion scores, which are computed for every frame.
+ *
+ * It is possible to configure and run PSNR, SSIM and MS-SSIM together with VMAF
+ * by setting the appropriate properties to true.
+ *
+ * For example, if ms-ssim, ssim and psnr are set to true, the emitted structure will look like this:
+ *
+ * VMAF, timestamp=(guint64)1234567890, stream-time=(guint64)1234567890, running-time=(guint64)1234567890, duration=(guint64)40000000, score=(double)78.910751757633022, index=(int)26, type=(string)frame, ms-ssim=(double)0.96676034472760064, ssim=(double)0.8706783652305603, psnr-y=(double)30.758853484390933;
+ *
+ * ## Example launch line
+ * |[
+ * gst-launch-1.0 -m \
+ * filesrc location=test1.yuv ! rawvideoparse width=1920 height=1080 ! v.ref_sink \
+ * filesrc location=test2.yuv ! rawvideoparse width=1920 height=1080 ! v.dist_sink \
+ * vmaf name=v frame-message=true results-filename=scores.json psnr=true ssim=true ms-ssim=true ! autovideosink
+ * ]| This pipeline will output messages to the console for each set of compared frames.
+ * + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include <gst/gst.h> +#include <gst/video/video.h> +#include <gst/video/gstvideofilter.h> + +#include "gstvmafelement.h" + +#include <stdio.h> +#include <libvmaf.h> + +GST_DEBUG_CATEGORY_STATIC (gst_vmaf_debug); +#define GST_CAT_DEFAULT gst_vmaf_debug +#define SINK_FORMATS " { I420, NV12, YV12, Y42B, Y444, I420_10LE, I422_10LE, Y444_10LE } " +#define SRC_FORMAT " { I420, NV12, YV12, Y42B, Y444, I420_10LE, I422_10LE, Y444_10LE } " +#define DEFAULT_MODEL_FILENAME "vmaf_v0.6.1" +#define DEFAULT_DISABLE_CLIP FALSE +#define DEFAULT_ENABLE_TRANSFORM FALSE +#define DEFAULT_PHONE_MODEL FALSE +#define DEFAULT_PSNR FALSE +#define DEFAULT_SSIM FALSE +#define DEFAULT_MS_SSIM FALSE +#define DEFAULT_FRAME_MESSAGING FALSE +#define DEFAULT_POOL_METHOD VMAF_POOL_METHOD_MEAN +#define DEFAULT_NUM_THREADS g_get_num_processors() +#define DEFAULT_SUBSAMPLE 1 +#define DEFAULT_CONF_INT FALSE +#define DEFAULT_VMAF_LOG_LEVEL VMAF_LOG_LEVEL_NONE +#define DEFAULT_VMAF_RESULTS_FORMAT VMAF_OUTPUT_FORMAT_NONE +#define DEFAULT_VMAF_RESULTS_FILENAME NULL +#define GST_TYPE_VMAF_POOL_METHOD (gst_vmaf_pool_method_get_type ()) +#define GST_TYPE_VMAF_OUTPUT_FORMATS (gst_vmaf_results_format_get_type ()) +#define GST_TYPE_VMAF_LOG_LEVEL (gst_vmaf_log_level_get_type ()) + +typedef enum _GstReadFrameReturnCodes +{ + READING_SUCCESSFUL = 0, + READING_ERROR = 1, + READING_EOS = 2, +} GstReadFrameReturnCodes; + +typedef enum _GstVmafPropertyTypes +{ + PROP_0, + PROP_MODEL_FILENAME, + PROP_DISABLE_CLIP, + PROP_ENABLE_TRANSFORM, + PROP_PHONE_MODEL, + PROP_PSNR, + PROP_SSIM, + PROP_MS_SSIM, + PROP_NUM_THREADS, + PROP_SUBSAMPLE, + PROP_CONF_INT, + PROP_LAST, + PROP_POOL_METHOD, + PROP_FRAME_MESSAGING, + PROP_VMAF_RESULTS_FORMAT, + PROP_VMAF_RESULTS_FILENAME, + PROP_LOG_LEVEL, +} GstVmafPropertyTypes; + +typedef enum _GstVmafMessageBusScoreTypes +{ + MESSAGE_TYPE_FRAME = 0, + MESSAGE_TYPE_POOLED = 1, +} GstVmafMessageBusScoreTypes; + +static 
GstStaticPadTemplate src_factory = GST_STATIC_PAD_TEMPLATE ("src",
+    GST_PAD_SRC,
+    GST_PAD_ALWAYS,
+    GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE (SRC_FORMAT)));
+
+static GstStaticPadTemplate ref_factory = GST_STATIC_PAD_TEMPLATE ("ref_sink",
+    GST_PAD_SINK,
+    GST_PAD_ALWAYS,
+    GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE (SINK_FORMATS)));
+
+static GstStaticPadTemplate dist_factory = GST_STATIC_PAD_TEMPLATE ("dist_sink",
+    GST_PAD_SINK,
+    GST_PAD_ALWAYS,
+    GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE (SINK_FORMATS)));
+
+#define gst_vmaf_parent_class parent_class
+G_DEFINE_TYPE (GstVmaf, gst_vmaf, GST_TYPE_VIDEO_AGGREGATOR);
+
+static GType
+gst_vmaf_pool_method_get_type (void)
+{
+  static const GEnumValue types[] = {
+    {VMAF_POOL_METHOD_MIN, "Minimum value", "min"},
+    {VMAF_POOL_METHOD_MAX, "Maximum value", "max"},
+    {VMAF_POOL_METHOD_MEAN, "Arithmetic mean", "mean"},
+    {VMAF_POOL_METHOD_HARMONIC_MEAN, "Harmonic mean", "harmonic_mean"},
+    {0, NULL, NULL},
+  };
+  static gsize id = 0;
+
+  if (g_once_init_enter (&id)) {
+    GType _id = g_enum_register_static ("GstVmafPoolMethod", types);
+    g_once_init_leave (&id, _id);
+  }
+
+  return (GType) id;
+}
+
+#define GST_VMAF_POOL_METHOD_TYPE (gst_vmaf_pool_method_get_type())
+
+static GType
+gst_vmaf_results_format_get_type (void)
+{
+  static const GEnumValue types[] = {
+    {VMAF_OUTPUT_FORMAT_NONE, "None", "none"},
+    {VMAF_OUTPUT_FORMAT_XML, "XML", "xml"},
+    {VMAF_OUTPUT_FORMAT_CSV, "Comma Separated File (csv)", "csv"},
+    {VMAF_OUTPUT_FORMAT_JSON, "JSON", "json"},
+    {0, NULL, NULL},
+  };
+  static gsize id = 0;
+
+  if (g_once_init_enter (&id)) {
+    GType _id = g_enum_register_static ("GstVmafResultsFormat", types);
+    g_once_init_leave (&id, _id);
+  }
+
+  return (GType) id;
+}
+
+#define GST_VMAF_RESULTS_FORMAT_TYPE (gst_vmaf_results_format_get_type())
+
+static GType
+gst_vmaf_log_level_get_type (void)
+{
+  static const GEnumValue types[] = {
+    {VMAF_LOG_LEVEL_NONE, "No logging", "none"},
+    {VMAF_LOG_LEVEL_ERROR, "Error", "error"},
+
{VMAF_LOG_LEVEL_WARNING, "Warning", "warning"}, + {VMAF_LOG_LEVEL_INFO, "Info", "info"}, + {VMAF_LOG_LEVEL_DEBUG, "Debug", "debug"}, + {0, NULL, NULL}, + }; + static gsize id = 0; + + if (g_once_init_enter (&id)) { + GType _id = g_enum_register_static ("GstVmafLogLevel", types); + g_once_init_leave (&id, _id); + } + + return (GType) id; +} + +#define GST_VMAF_LOG_LEVEL_TYPE (gst_vmaf_log_level_get_type()) + +static void +gst_vmaf_context_free (GstVmaf * self) +{ + g_clear_pointer (&self->vmaf_ctx, vmaf_close); + g_clear_pointer (&self->vmaf_model, vmaf_model_destroy); + g_clear_pointer (&self->vmaf_model_collection, vmaf_model_collection_destroy); +} + +static gboolean +gst_vmaf_model_init (GstVmaf * self, VmafModelConfig * model_cfg) +{ + gint err = 0; + gint err_builtin = 0; + gint err_path = 0; + + //attempt to load from the built in models first + err_builtin = + vmaf_model_load (&self->vmaf_model, model_cfg, + self->vmaf_config_model_filename); + if (err_builtin) { + //if built in model will not load, attempt to load from file path + err_path = + vmaf_model_load_from_path (&self->vmaf_model, model_cfg, + self->vmaf_config_model_filename); + if (err_path) { + GST_ERROR_OBJECT (self, + "Failed to load VMAF model '%s': not found as built-in model " + "(err=%d) or file path (err=%d)", + self->vmaf_config_model_filename, err_builtin, err_path); + return FALSE; + } + } + + err = vmaf_use_features_from_model (self->vmaf_ctx, self->vmaf_model); + if (err) { + GST_ERROR_OBJECT (self, + "Error %d. 
Failed to load self feature extractors from model file: %s", + err, self->vmaf_config_model_filename); + return FALSE; + } + + return TRUE; +} + +static gboolean +gst_vmaf_model_collection_init (GstVmaf * self, VmafModelConfig * model_cfg) +{ + gint err = 0; + gint err_builtin = 0; + gint err_path = 0; + //attempt to load from the built in models first + err_builtin = + vmaf_model_collection_load (&self->vmaf_model, + &self->vmaf_model_collection, model_cfg, + self->vmaf_config_model_filename); + if (err_builtin) { + //if built in model will not load, attempt to load from file path + err_path = + vmaf_model_collection_load_from_path (&self->vmaf_model, + &self->vmaf_model_collection, model_cfg, + self->vmaf_config_model_filename); + if (err_path) { + GST_ERROR_OBJECT (self, + "Failed to load VMAF model collection '%s': not found as built-in model collection " + "(err=%d) or file path (err=%d)", + self->vmaf_config_model_filename, err_builtin, err_path); + return FALSE; + } + } + + err = + vmaf_use_features_from_model_collection (self->vmaf_ctx, + self->vmaf_model_collection); + if (err) { + GST_ERROR_OBJECT (self, + "Error %d. Failed to load self feature extractors from model file: %s", + err, self->vmaf_config_model_filename); + return FALSE; + } + + return TRUE; +} + +static gboolean +gst_vmaf_context_init (GstVmaf * self) +{ + gint err = 0; + gboolean result = TRUE; + VmafFeatureDictionary *d = NULL; + enum VmafModelFlags flags = VMAF_MODEL_FLAGS_DEFAULT; + VmafModelConfig model_cfg = { 0 }; + VmafConfiguration cfg = { + .log_level = self->vmaf_config_log_level, + .n_threads = + self->vmaf_config_frame_messaging ? 
0 : self->vmaf_config_num_threads, + .n_subsample = self->vmaf_config_subsample, + }; + + GST_INFO_OBJECT (self, "Initializing VMAF"); + + err = vmaf_init (&self->vmaf_ctx, cfg); + if (err) { + GST_ERROR_OBJECT (self, "Failed to initialize self context."); + result = FALSE; + goto free_data; + } + + if (self->vmaf_config_disable_clip) + flags |= VMAF_MODEL_FLAG_DISABLE_CLIP; + if (self->vmaf_config_enable_transform || self->vmaf_config_phone_model) + flags |= VMAF_MODEL_FLAG_ENABLE_TRANSFORM; + + model_cfg.name = "self"; + model_cfg.flags = flags; + + if (self->vmaf_config_conf_int) { + if (!gst_vmaf_model_collection_init (self, &model_cfg)) { + GST_ERROR_OBJECT (self, "Failed to initialize model collection"); + result = FALSE; + goto free_data; + } + } else { + if (!gst_vmaf_model_init (self, &model_cfg)) { + GST_ERROR_OBJECT (self, "Failed to initialize model"); + result = FALSE; + goto free_data; + } + } + + if (self->vmaf_config_psnr) { + vmaf_feature_dictionary_set (&d, "enable_chroma", "false"); + + err = vmaf_use_feature (self->vmaf_ctx, "psnr", d); + if (err) { + GST_ERROR_OBJECT (self, "Problem loading feature extractor: psnr"); + result = FALSE; + goto free_data; + } + } + if (self->vmaf_config_ssim) { + err = vmaf_use_feature (self->vmaf_ctx, "float_ssim", NULL); + if (err) { + GST_ERROR_OBJECT (self, "Problem loading feature extractor: ssim"); + result = FALSE; + goto free_data; + } + } + if (self->vmaf_config_ms_ssim) { + err = vmaf_use_feature (self->vmaf_ctx, "float_ms_ssim", NULL); + if (err) { + GST_ERROR_OBJECT (self, + "Problem loading feature extractor: float_ms_ssim"); + result = FALSE; + goto free_data; + } + } + + self->processed_frames = 0; + self->pix_fmt = VMAF_PIX_FMT_YUV400P; + self->initialized = TRUE; + self->flushed = FALSE; + + GST_INFO_OBJECT (self, "Initialized VMAF"); + +end: + return result; +free_data: + gst_vmaf_context_free (self); + goto end; +} + +static gboolean +gst_vmaf_context_flush (GstVmaf * self) +{ + gint err = 0; + 
+ GST_DEBUG_OBJECT (self, "Flushing buffers and calculating pooled score."); + + GST_OBJECT_LOCK (self); + + if (self->vmaf_ctx && !self->flushed) { + err = vmaf_read_pictures (self->vmaf_ctx, NULL, NULL, 0); + if (err) { + GST_ERROR_OBJECT (self, "failed to flush VMAF context"); + GST_OBJECT_UNLOCK (self); + return FALSE; + } + self->flushed = TRUE; + } + + GST_OBJECT_UNLOCK (self); + + return TRUE; +} + +static void +gst_vmaf_add_feature_score (GstVmaf * self, + GstStructure * structure, + const gchar * feature_name, const gchar * field_name, gint frame_index) +{ + gint err; + gdouble score = 0; + + err = vmaf_feature_score_at_index (self->vmaf_ctx, feature_name, + &score, frame_index); + if (err) { + GST_WARNING_OBJECT (self, + "could not calculate %s score on frame:%d err:%d", + feature_name, frame_index, err); + } else { + gst_structure_set (structure, field_name, G_TYPE_DOUBLE, score, NULL); + } +} + +static void +gst_vmaf_add_pooled_feature_score (GstVmaf * self, GstStructure * structure, + const gchar * feature_name, const gchar * field_name, + enum VmafPoolingMethod pooling_method, gint start_frame, gint end_frame) +{ + gint err; + gdouble score = 0; + + err = vmaf_feature_score_pooled (self->vmaf_ctx, feature_name, + pooling_method, &score, start_frame, end_frame); + if (err) { + GST_WARNING_OBJECT (self, + "could not calculate %s score on range:%d-%d err:%d", + feature_name, start_frame, end_frame, err); + } else { + gst_structure_set (structure, field_name, G_TYPE_DOUBLE, score, NULL); + } +} + +static gint +gst_vmaf_post_pooled_score (GstVmaf * self) +{ + gint err = 0; + gdouble vmaf_score = 0; + gboolean successful_post = TRUE; + VmafModelCollectionScore model_collection_score; + GstStructure *vmaf_message_structure; + GstMessage *vmaf_message; + enum VmafOutputFormat vmaf_output_format = self->vmaf_config_results_format; + gint last_frame_index = self->processed_frames - 1; + GstClockTime timestamp, stream_time, running_time, duration; + 
GstAggregator *agg = GST_AGGREGATOR (self); + GstSegment *segment; + + if (self->vmaf_config_conf_int) { + err = vmaf_score_pooled_model_collection (self->vmaf_ctx, + self->vmaf_model_collection, + self->vmaf_config_pool_method, &model_collection_score, 0, + last_frame_index); + if (err) { + GST_DEBUG_OBJECT (self, + "could not calculate pooled vmaf score on range 0 to %d, for model collection", + last_frame_index); + return FALSE; + } + } + + err = vmaf_score_pooled (self->vmaf_ctx, + self->vmaf_model, + self->vmaf_config_pool_method, &vmaf_score, 0, last_frame_index); + if (err) { + GST_WARNING_OBJECT (self, + "could not calculate pooled vmaf score on range 0 to %d", + last_frame_index); + return FALSE; + } + GST_DEBUG_OBJECT (self, + "posting pooled vmaf score on range:0-%d score:%f", + last_frame_index, vmaf_score); + + GST_OBJECT_LOCK (agg->srcpad); + segment = &GST_AGGREGATOR_PAD (agg->srcpad)->segment; + timestamp = segment->position; + + if (GST_CLOCK_TIME_IS_VALID (timestamp)) { + if (GST_VIDEO_INFO_FPS_N (&GST_VIDEO_AGGREGATOR_PAD (self->ref_pad)->info) > + 0) { + duration = + gst_util_uint64_scale (self->processed_frames, + GST_SECOND * + GST_VIDEO_INFO_FPS_D (&GST_VIDEO_AGGREGATOR_PAD (self-> + ref_pad)->info), + GST_VIDEO_INFO_FPS_N (&GST_VIDEO_AGGREGATOR_PAD (self-> + ref_pad)->info)); + } else { + duration = GST_CLOCK_TIME_NONE; + } + + running_time = gst_segment_to_running_time (segment, GST_FORMAT_TIME, + timestamp); + stream_time = gst_segment_to_stream_time (segment, GST_FORMAT_TIME, + timestamp); + } else { + duration = GST_CLOCK_TIME_NONE; + running_time = GST_CLOCK_TIME_NONE; + stream_time = GST_CLOCK_TIME_NONE; + } + GST_OBJECT_UNLOCK (agg->srcpad); + + vmaf_message_structure = gst_structure_new_empty ("VMAF"); + gst_structure_set (vmaf_message_structure, + "timestamp", G_TYPE_UINT64, timestamp, + "stream-time", G_TYPE_UINT64, stream_time, + "running-time", G_TYPE_UINT64, running_time, + "duration", G_TYPE_UINT64, duration, + "score", 
G_TYPE_DOUBLE, vmaf_score, + "type", G_TYPE_STRING, "pooled", NULL); + + if (self->vmaf_config_ms_ssim) { + gst_vmaf_add_pooled_feature_score (self, + vmaf_message_structure, "float_ms_ssim", "ms-ssim", + self->vmaf_config_pool_method, 0, last_frame_index); + } + if (self->vmaf_config_ssim) { + gst_vmaf_add_pooled_feature_score (self, + vmaf_message_structure, "float_ssim", "ssim", + self->vmaf_config_pool_method, 0, last_frame_index); + } + if (self->vmaf_config_psnr) { + gst_vmaf_add_pooled_feature_score (self, + vmaf_message_structure, "psnr_y", "psnr-y", + self->vmaf_config_pool_method, 0, last_frame_index); + } + + vmaf_message = + gst_message_new_element (GST_OBJECT (self), vmaf_message_structure); + successful_post = gst_element_post_message (GST_ELEMENT (self), vmaf_message); + if (!successful_post) { + GST_WARNING_OBJECT (self, + "could not post pooled VMAF on message bus. score:%f", vmaf_score); + } + + if (vmaf_output_format == VMAF_OUTPUT_FORMAT_NONE + && self->vmaf_config_results_filename) { + vmaf_output_format = VMAF_OUTPUT_FORMAT_JSON; + GST_DEBUG_OBJECT (self, "using default JSON style logging."); + } + + if (vmaf_output_format) { + GST_DEBUG_OBJECT (self, + "writing VMAF score data to location:%s.", + self->vmaf_config_results_filename); + + err = + vmaf_write_output (self->vmaf_ctx, self->vmaf_config_results_filename, + vmaf_output_format); + + if (err) { + GST_WARNING_OBJECT (self, + "Failed to write VMAF output to '%s' (format=%d, err=%d)", + self->vmaf_config_results_filename, vmaf_output_format, err); + return FALSE; + } + } + + return TRUE; +} + +static gint +gst_vmaf_post_frame_score (GstVmaf * self, gint frame_index) +{ + gint err = 0, scored_frame; + gdouble vmaf_score = 0; + gboolean mod_frame; + GstStructure *vmaf_message_structure; + GstMessage *vmaf_message; + GstClockTime timestamp, stream_time, running_time, duration; + GstAggregator *agg = GST_AGGREGATOR (self); + GstSegment *segment; + + /* With sub-sampling, scores are only 
computed for frames at the sub-sampling rate + * except VMAF_integer_feature_motion_score and VMAF_integer_feature_motion_score2 + * that are computed for every frame. + * VMAF_integer_feature_motion2_score is computed for the past frame, so there is a + * 1 frame delay in scores. + * mod_frame is true when the current frame is one where a score was computed. + * scored_frame is the frame index where the score was actually computed. + */ + if (self->vmaf_config_subsample <= 1) { + mod_frame = TRUE; + } else { + mod_frame = (frame_index % self->vmaf_config_subsample) == 1; + } + scored_frame = frame_index - 2; + + if ((!self->vmaf_config_frame_messaging) + || frame_index <= 0 || !mod_frame) { + GST_LOG_OBJECT (self, + "Skipping frame vmaf score posting. frame:%d", frame_index); + return TRUE; + } + + err = + vmaf_score_at_index (self->vmaf_ctx, self->vmaf_model, + &vmaf_score, scored_frame); + if (err) { + GST_WARNING_OBJECT (self, + "could not calculate vmaf score on frame:%d err:%d", scored_frame, err); + return FALSE; + } + + GST_DEBUG_OBJECT (self, + "posting frame vmaf score. 
score:%f frame:%d", vmaf_score, scored_frame); + + GST_OBJECT_LOCK (agg->srcpad); + segment = &GST_AGGREGATOR_PAD (agg->srcpad)->segment; + timestamp = segment->position; + + if (GST_CLOCK_TIME_IS_VALID (timestamp)) { + if (GST_VIDEO_INFO_FPS_N (&GST_VIDEO_AGGREGATOR_PAD (self->ref_pad)->info) > + 0) { + duration = + gst_util_uint64_scale (1, + GST_SECOND * + GST_VIDEO_INFO_FPS_D (&GST_VIDEO_AGGREGATOR_PAD (self-> + ref_pad)->info), + GST_VIDEO_INFO_FPS_N (&GST_VIDEO_AGGREGATOR_PAD (self-> + ref_pad)->info)); + } else { + duration = GST_CLOCK_TIME_NONE; + } + + running_time = gst_segment_to_running_time (segment, GST_FORMAT_TIME, + timestamp); + stream_time = gst_segment_to_stream_time (segment, GST_FORMAT_TIME, + timestamp); + } else { + duration = GST_CLOCK_TIME_NONE; + running_time = GST_CLOCK_TIME_NONE; + stream_time = GST_CLOCK_TIME_NONE; + } + GST_OBJECT_UNLOCK (agg->srcpad); + + vmaf_message_structure = gst_structure_new_empty ("VMAF"); + vmaf_message = gst_message_new_element (GST_OBJECT (self), + vmaf_message_structure); + + gst_structure_set (vmaf_message_structure, + "timestamp", G_TYPE_UINT64, timestamp, + "stream-time", G_TYPE_UINT64, stream_time, + "running-time", G_TYPE_UINT64, running_time, + "duration", G_TYPE_UINT64, duration, + "score", G_TYPE_DOUBLE, vmaf_score, + "index", G_TYPE_INT, scored_frame, "type", G_TYPE_STRING, "frame", NULL); + + if (self->vmaf_config_ms_ssim) { + gst_vmaf_add_feature_score (self, + vmaf_message_structure, "float_ms_ssim", "ms-ssim", scored_frame); + } + if (self->vmaf_config_ssim) { + gst_vmaf_add_feature_score (self, + vmaf_message_structure, "float_ssim", "ssim", scored_frame); + } + if (self->vmaf_config_psnr) { + gst_vmaf_add_feature_score (self, vmaf_message_structure, + "psnr_y", "psnr-y", scored_frame); + } + if (!gst_element_post_message (GST_ELEMENT (self), vmaf_message)) { + GST_WARNING_OBJECT (self, + "could not post frame VMAF on message bus. 
score:%f frame:%d",
+        vmaf_score, scored_frame);
+    return FALSE;
+  }
+
+  return TRUE;
+}
+
+static void
+gst_vmaf_fill_picture (VmafPicture * dst, guint8 * src, unsigned width,
+    unsigned height, int src_stride)
+{
+  guint8 *a = src;
+  uint8_t *b = dst->data[0];
+  for (unsigned i = 0; i < height; i++) {
+    memcpy (b, a, width);
+    a += src_stride;
+    b += dst->stride[0];
+  }
+}
+
+static void
+gst_vmaf_process_frame (GstVmaf * self, GstVideoFrame * ref_frame,
+    GstVideoFrame * dist_frame)
+{
+  gint err = 0;
+  VmafPicture pic_ref, pic_dist;
+  gint frame_index = self->processed_frames;
+  guint8 *ref_data, *dist_data;
+
+  // allocate vmaf pictures
+  err =
+      vmaf_picture_alloc (&pic_ref, self->pix_fmt, ref_frame->info.finfo->bits,
+      ref_frame->info.width, ref_frame->info.height);
+  if (err) {
+    GST_ERROR_OBJECT (self,
+        "failed to allocate reference picture VMAF picture memory");
+    goto end;
+  }
+  err =
+      vmaf_picture_alloc (&pic_dist, self->pix_fmt,
+      dist_frame->info.finfo->bits, dist_frame->info.width,
+      dist_frame->info.height);
+  if (err) {
+    vmaf_picture_unref (&pic_ref);
+    GST_ERROR_OBJECT (self,
+        "failed to allocate distorted picture VMAF picture memory");
+    goto end;
+  }
+
+  ref_data = ref_frame->map[0].data;
+  dist_data = dist_frame->map[0].data;
+
+  // vmaf only uses luma data, so we only fill that plane
+  gst_vmaf_fill_picture (&pic_ref, ref_data, ref_frame->info.width,
+      ref_frame->info.height, ref_frame->info.stride[0]);
+  gst_vmaf_fill_picture (&pic_dist, dist_data, dist_frame->info.width,
+      dist_frame->info.height, dist_frame->info.stride[0]);
+
+  //read pictures, run calculation
+  GST_DEBUG_OBJECT (self,
+      "reading images into vmaf context. 
ref:%p dist:%p frame:%d", + &pic_ref, &pic_dist, frame_index); + + err = vmaf_read_pictures (self->vmaf_ctx, &pic_ref, &pic_dist, frame_index); + self->processed_frames++; + if (err != 0) { + vmaf_picture_unref (&pic_ref); + vmaf_picture_unref (&pic_dist); + GST_ERROR_OBJECT (self, "failed to read VMAF pictures into context"); + } +end: + return; +} + +static GstFlowReturn +gst_vmaf_create_output_buffer (GstVideoAggregator * videoaggregator, + GstBuffer ** outbuffer) +{ + GstVmaf *self = GST_VMAF (videoaggregator); + GstBuffer *current_buf; + + current_buf = + gst_video_aggregator_pad_get_current_buffer (GST_VIDEO_AGGREGATOR_PAD + (self->ref_pad)); + + if (current_buf == NULL) { + if (gst_aggregator_pad_is_eos (GST_AGGREGATOR_PAD (self->ref_pad))) { + GST_INFO_OBJECT (self, "Reference pad is EOS, forwarding EOS"); + return GST_FLOW_EOS; + } + GST_ERROR_OBJECT (self, "No frame available on reference pad."); + return GST_FLOW_ERROR; + } + + *outbuffer = gst_buffer_ref (current_buf); + + return GST_FLOW_OK; +} + +static GstFlowReturn +gst_vmaf_aggregate_frames (GstVideoAggregator * vagg, GstBuffer * outbuf) +{ + GstVmaf *self = GST_VMAF (vagg); + GstVideoFrame *ref_frame = NULL; + GstVideoFrame *dist_frame = NULL; + + GST_DEBUG_OBJECT (self, "frames are prepared and ready for processing"); + + ref_frame = gst_video_aggregator_pad_get_prepared_frame (self->ref_pad); + dist_frame = gst_video_aggregator_pad_get_prepared_frame (self->dist_pad); + + if (ref_frame == NULL) { + GST_ERROR_OBJECT (self, + "No frame available on reference pad but not EOS yet"); + } + + if (dist_frame == NULL) { + if (gst_aggregator_pad_is_eos (GST_AGGREGATOR_PAD (self->dist_pad))) { + GST_INFO_OBJECT (self, + "Distorted pad is EOS, skipping VMAF processing for remaining frames"); + return GST_FLOW_OK; + } else { + GST_ERROR_OBJECT (self, + "No frame available on distorted pad but not EOS yet"); + return GST_FLOW_ERROR; + } + } + + if (G_UNLIKELY (!self->initialized)) { + gst_vmaf_context_init 
(self); + } + + gst_vmaf_process_frame (self, ref_frame, dist_frame); + gst_vmaf_post_frame_score (self, self->processed_frames); + + return GST_FLOW_OK; +} + +static void +gst_vmaf_set_property (GObject * object, guint prop_id, const GValue * value, + GParamSpec * pspec) +{ + GstVmaf *self = GST_VMAF (object); + + GST_OBJECT_LOCK (self); + switch (prop_id) { + case PROP_MODEL_FILENAME: + g_free (self->vmaf_config_model_filename); + self->vmaf_config_model_filename = g_value_dup_string (value); + break; + case PROP_DISABLE_CLIP: + self->vmaf_config_disable_clip = g_value_get_boolean (value); + break; + case PROP_ENABLE_TRANSFORM: + self->vmaf_config_enable_transform = g_value_get_boolean (value); + break; + case PROP_PHONE_MODEL: + self->vmaf_config_phone_model = g_value_get_boolean (value); + break; + case PROP_PSNR: + self->vmaf_config_psnr = g_value_get_boolean (value); + break; + case PROP_SSIM: + self->vmaf_config_ssim = g_value_get_boolean (value); + break; + case PROP_MS_SSIM: + self->vmaf_config_ms_ssim = g_value_get_boolean (value); + break; + case PROP_POOL_METHOD: + self->vmaf_config_pool_method = g_value_get_enum (value); + break; + case PROP_NUM_THREADS: + self->vmaf_config_num_threads = g_value_get_uint (value); + break; + case PROP_SUBSAMPLE: + self->vmaf_config_subsample = g_value_get_uint (value); + break; + case PROP_CONF_INT: + self->vmaf_config_conf_int = g_value_get_boolean (value); + break; + case PROP_FRAME_MESSAGING: + self->vmaf_config_frame_messaging = g_value_get_boolean (value); + break; + case PROP_VMAF_RESULTS_FORMAT: + self->vmaf_config_results_format = g_value_get_enum (value); + break; + case PROP_VMAF_RESULTS_FILENAME: + g_free (self->vmaf_config_results_filename); + self->vmaf_config_results_filename = g_value_dup_string (value); + break; + case PROP_LOG_LEVEL: + self->vmaf_config_log_level = g_value_get_enum (value); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } + GST_OBJECT_UNLOCK 
(self);
+}
+
+static void
+gst_vmaf_get_property (GObject * object, guint prop_id, GValue * value,
+    GParamSpec * pspec)
+{
+  GstVmaf *self = GST_VMAF (object);
+
+  GST_OBJECT_LOCK (self);
+  switch (prop_id) {
+    case PROP_MODEL_FILENAME:
+      g_value_set_string (value, self->vmaf_config_model_filename);
+      break;
+    case PROP_DISABLE_CLIP:
+      g_value_set_boolean (value, self->vmaf_config_disable_clip);
+      break;
+    case PROP_ENABLE_TRANSFORM:
+      g_value_set_boolean (value, self->vmaf_config_enable_transform);
+      break;
+    case PROP_PHONE_MODEL:
+      g_value_set_boolean (value, self->vmaf_config_phone_model);
+      break;
+    case PROP_PSNR:
+      g_value_set_boolean (value, self->vmaf_config_psnr);
+      break;
+    case PROP_SSIM:
+      g_value_set_boolean (value, self->vmaf_config_ssim);
+      break;
+    case PROP_MS_SSIM:
+      g_value_set_boolean (value, self->vmaf_config_ms_ssim);
+      break;
+    case PROP_POOL_METHOD:
+      g_value_set_enum (value, self->vmaf_config_pool_method);
+      break;
+    case PROP_NUM_THREADS:
+      g_value_set_uint (value, self->vmaf_config_num_threads);
+      break;
+    case PROP_SUBSAMPLE:
+      g_value_set_uint (value, self->vmaf_config_subsample);
+      break;
+    case PROP_CONF_INT:
+      g_value_set_boolean (value, self->vmaf_config_conf_int);
+      break;
+    case PROP_FRAME_MESSAGING:
+      g_value_set_boolean (value, self->vmaf_config_frame_messaging);
+      break;
+    case PROP_VMAF_RESULTS_FORMAT:
+      g_value_set_enum (value, self->vmaf_config_results_format);
+      break;
+    case PROP_VMAF_RESULTS_FILENAME:
+      g_value_set_string (value, self->vmaf_config_results_filename);
+      break;
+    case PROP_LOG_LEVEL:
+      g_value_set_enum (value, self->vmaf_config_log_level);
+      break;
+    default:
+      G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec);
+      break;
+  }
+  GST_OBJECT_UNLOCK (self);
+}
+
+static void
+gst_vmaf_init (GstVmaf * self)
+{
+  GstPadTemplate *ref_template = gst_static_pad_template_get (&ref_factory);
+  GstPadTemplate *dist_template = gst_static_pad_template_get (&dist_factory);
+  self->vmaf_config_model_filename =
g_strdup (DEFAULT_MODEL_FILENAME); + self->vmaf_config_disable_clip = DEFAULT_DISABLE_CLIP; + self->vmaf_config_enable_transform = DEFAULT_ENABLE_TRANSFORM; + self->vmaf_config_phone_model = DEFAULT_PHONE_MODEL; + self->vmaf_config_psnr = DEFAULT_PSNR; + self->vmaf_config_ssim = DEFAULT_SSIM; + self->vmaf_config_ms_ssim = DEFAULT_MS_SSIM; + self->vmaf_config_num_threads = DEFAULT_NUM_THREADS; + self->vmaf_config_subsample = DEFAULT_SUBSAMPLE; + self->vmaf_config_conf_int = DEFAULT_CONF_INT; + self->vmaf_config_pool_method = DEFAULT_POOL_METHOD; + self->vmaf_config_frame_messaging = DEFAULT_FRAME_MESSAGING; + self->vmaf_config_results_filename = DEFAULT_VMAF_RESULTS_FILENAME; + self->vmaf_config_results_format = DEFAULT_VMAF_RESULTS_FORMAT; + self->vmaf_config_log_level = DEFAULT_VMAF_LOG_LEVEL; + self->initialized = FALSE; + + self->ref_pad = + GST_VIDEO_AGGREGATOR_PAD (g_object_new (gst_video_aggregator_pad_get_type + (), "name", "ref_sink", "direction", GST_PAD_SINK, "template", + ref_template, NULL)); + gst_element_add_pad (GST_ELEMENT (self), GST_PAD (self->ref_pad)); + gst_object_unref (ref_template); + + self->dist_pad = + GST_VIDEO_AGGREGATOR_PAD (g_object_new (gst_video_aggregator_pad_get_type + (), "name", "dist_sink", "direction", GST_PAD_SINK, "template", + dist_template, NULL)); + gst_element_add_pad (GST_ELEMENT (self), GST_PAD (self->dist_pad)); + gst_object_unref (dist_template); +} + +static void +gst_vmaf_finalize (GObject * object) +{ + GstVmaf *self = GST_VMAF (object); + GST_DEBUG_OBJECT (self, "finalize plugin called, freeing memory"); + g_free (self->vmaf_config_model_filename); + self->vmaf_config_model_filename = NULL; + g_free (self->vmaf_config_results_filename); + self->vmaf_config_results_filename = NULL; + + G_OBJECT_CLASS (parent_class)->finalize (object); +} + +static gboolean +gst_vmaf_sink_event (GstAggregator * aggregator, + GstAggregatorPad * aggregator_pad, GstEvent * event) +{ + GstVmaf *self = GST_VMAF (aggregator); + + if 
(GST_EVENT_TYPE (event) == GST_EVENT_EOS) { + GST_DEBUG_OBJECT (self, "Received EOS on pad %s", + GST_PAD_NAME (aggregator_pad)); + if (GST_VIDEO_AGGREGATOR_PAD (aggregator_pad) == self->ref_pad) { + gst_vmaf_context_flush (self); + if (self->vmaf_ctx != NULL) { + gst_vmaf_post_pooled_score (self); + } + } + } + + return GST_AGGREGATOR_CLASS (parent_class)->sink_event (aggregator, + aggregator_pad, event); +} + +static gboolean +gst_vmaf_start (GstAggregator * agg) +{ + GstVmaf *self = GST_VMAF (agg); + gboolean ret = TRUE; + + GST_DEBUG_OBJECT (self, "Starting vmaf"); + + return ret; +} + +static gboolean +gst_vmaf_stop (GstAggregator * agg) +{ + GstVmaf *self = GST_VMAF (agg); + + gst_vmaf_context_free (self); + + GST_DEBUG_OBJECT (self, "Stopping vmaf element."); + + return TRUE; +} + +static gboolean +gst_vmaf_flush (GstAggregator * agg) +{ + GstVmaf *self = GST_VMAF (agg); + gboolean ret = TRUE; + GST_DEBUG_OBJECT (self, "Flushing vmaf element."); + + ret &= gst_vmaf_context_flush (self); + ret &= gst_vmaf_post_pooled_score (self); + + return ret; +} + +static void +gst_vmaf_class_init (GstVmafClass * klass) +{ + GObjectClass *gobject_class = G_OBJECT_CLASS (klass); + GstElementClass *gstelement_class = (GstElementClass *) klass; + GstVideoAggregatorClass *videoaggregator_class = + (GstVideoAggregatorClass *) klass; + GstAggregatorClass *aggregator_class = (GstAggregatorClass *) klass; + + videoaggregator_class->aggregate_frames = gst_vmaf_aggregate_frames; + videoaggregator_class->create_output_buffer = gst_vmaf_create_output_buffer; + + gst_element_class_add_static_pad_template_with_gtype (gstelement_class, + &src_factory, GST_TYPE_AGGREGATOR_PAD); + gst_element_class_add_static_pad_template_with_gtype (gstelement_class, + &ref_factory, GST_TYPE_VIDEO_AGGREGATOR_PAD); + gst_element_class_add_static_pad_template_with_gtype (gstelement_class, + &dist_factory, GST_TYPE_VIDEO_AGGREGATOR_PAD); + + aggregator_class->sink_event = gst_vmaf_sink_event; + 
aggregator_class->start = gst_vmaf_start; + aggregator_class->stop = gst_vmaf_stop; + aggregator_class->flush = gst_vmaf_flush; + + gobject_class->set_property = GST_DEBUG_FUNCPTR (gst_vmaf_set_property); + gobject_class->get_property = GST_DEBUG_FUNCPTR (gst_vmaf_get_property); + gobject_class->finalize = GST_DEBUG_FUNCPTR (gst_vmaf_finalize); + + g_object_class_install_property (gobject_class, PROP_MODEL_FILENAME, + g_param_spec_string ("model-filename", + "model-filename", + "Model *.pkl abs filename, or file version for built in models", + DEFAULT_MODEL_FILENAME, G_PARAM_READWRITE)); + + g_object_class_install_property (gobject_class, PROP_DISABLE_CLIP, + g_param_spec_boolean ("disable-clip", + "disable-clip", + "Disable clipping VMAF values", + DEFAULT_DISABLE_CLIP, G_PARAM_READWRITE)); + + g_object_class_install_property (gobject_class, PROP_ENABLE_TRANSFORM, + g_param_spec_boolean ("enable-transform", + "enable-transform", + "Enable transform VMAF scores", + DEFAULT_ENABLE_TRANSFORM, G_PARAM_READWRITE)); + + g_object_class_install_property (gobject_class, PROP_PHONE_MODEL, + g_param_spec_boolean ("phone-model", + "phone-model", + "Use VMAF phone model", DEFAULT_PHONE_MODEL, G_PARAM_READWRITE)); + + g_object_class_install_property (gobject_class, PROP_PSNR, + g_param_spec_boolean ("psnr", "psnr", + "Estimate PSNR", DEFAULT_PSNR, G_PARAM_READWRITE)); + + g_object_class_install_property (gobject_class, PROP_SSIM, + g_param_spec_boolean ("ssim", "ssim", + "Estimate SSIM", DEFAULT_SSIM, G_PARAM_READWRITE)); + + g_object_class_install_property (gobject_class, PROP_MS_SSIM, + g_param_spec_boolean ("ms-ssim", "ms-ssim", + "Estimate MS-SSIM", DEFAULT_MS_SSIM, G_PARAM_READWRITE)); + + g_object_class_install_property (gobject_class, PROP_POOL_METHOD, + g_param_spec_enum ("pool-method", "pool-method", + "Pool method for mean", GST_TYPE_VMAF_POOL_METHOD, + DEFAULT_POOL_METHOD, + (GParamFlags) (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + 
g_object_class_install_property (gobject_class, PROP_NUM_THREADS, + g_param_spec_uint ("threads", "threads", + "The number of threads", + 0, G_MAXINT, DEFAULT_NUM_THREADS, G_PARAM_READWRITE)); + + g_object_class_install_property (gobject_class, PROP_SUBSAMPLE, + g_param_spec_uint ("subsample", + "subsample", + "Computing on one of every N frames", + 1, 128, DEFAULT_SUBSAMPLE, G_PARAM_READWRITE)); + + g_object_class_install_property (gobject_class, PROP_CONF_INT, + g_param_spec_boolean ("conf-interval", + "conf-interval", + "Enable confidence intervals", DEFAULT_CONF_INT, G_PARAM_READWRITE)); + + g_object_class_install_property (gobject_class, PROP_FRAME_MESSAGING, + g_param_spec_boolean ("frame-message", + "frame-message", + "Enable frame level score messaging", DEFAULT_FRAME_MESSAGING, + G_PARAM_READWRITE)); + + g_object_class_install_property (gobject_class, PROP_VMAF_RESULTS_FILENAME, + g_param_spec_string ("results-filename", + "results-filename", + "VMAF results filename for scores", + DEFAULT_VMAF_RESULTS_FILENAME, G_PARAM_READWRITE)); + + g_object_class_install_property (gobject_class, PROP_VMAF_RESULTS_FORMAT, + g_param_spec_enum ("results-format", "results-format", + "VMAF results file format used for scores (csv, xml, json)", + GST_TYPE_VMAF_OUTPUT_FORMATS, DEFAULT_VMAF_RESULTS_FORMAT, + (GParamFlags) (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + g_object_class_install_property (gobject_class, PROP_LOG_LEVEL, + g_param_spec_enum ("log-level", "(internal) VMAF log level", + "VMAF log level", GST_TYPE_VMAF_LOG_LEVEL, + DEFAULT_VMAF_LOG_LEVEL, G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS)); + + gst_element_class_set_static_metadata (gstelement_class, "vmaf", + "Filter/Analyzer/Video", + "Provides Video Multi-Method Assessment Fusion metric", + "Casey Bateman <casey.bateman@hudl.com>, Andoni Morales <amorales@fluendo.com>, Diego Nieto <dnieto@fluendo.com>"); + GST_DEBUG_CATEGORY_INIT (gst_vmaf_debug, "vmaf", 0, "vmaf"); + + gst_type_mark_as_plugin_api 
(GST_VMAF_RESULTS_FORMAT_TYPE, 0); + gst_type_mark_as_plugin_api (GST_VMAF_POOL_METHOD_TYPE, 0); + gst_type_mark_as_plugin_api (GST_VMAF_LOG_LEVEL_TYPE, 0); +}
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/vmaf/gstvmafelement.h
Added
@@ -0,0 +1,98 @@ +/* VMAF plugin + * Copyright (C) 2021 Hudl + * @author: Casey Bateman <Casey.Bateman@hudl.com> + * Copyright (C) 2025 Fluendo S.A. <contact@fluendo.com> + * Authors: Diego Nieto <dnieto@fluendo.com> + * Authors: Andoni Morales <amorales@fluendo.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +/** + * SECTION:plugin-vmaf + * + * Provides Video Multi-Method Assessment Fusion quality metrics. 
+ * + * Since: 1.28 + */ + +#ifndef __GST_VMAFELEMENT_H__ +#define __GST_VMAFELEMENT_H__ + +#include <gst/gst.h> +#include <gst/video/video.h> +#include <gst/video/gstvideoaggregator.h> + +#include <libvmaf.h> + +G_BEGIN_DECLS +#define GST_TYPE_VMAF (gst_vmaf_get_type()) +#define GST_VMAF(obj) \ + (G_TYPE_CHECK_INSTANCE_CAST((obj),GST_TYPE_VMAF, GstVmaf)) +#define GST_VMAF_CLASS(klass) \ + (G_TYPE_CHECK_CLASS_CAST((klass),GST_TYPE_VMAF, GstVmafClass)) +#define GST_IS_VMAF(obj) \ + (G_TYPE_CHECK_INSTANCE_TYPE((obj),GST_TYPE_VMAF)) +#define GST_IS_VMAF_CLASS(klass) \ + (G_TYPE_CHECK_CLASS_TYPE((klass),GST_TYPE_VMAF)) +typedef struct _GstVmaf GstVmaf; +typedef struct _GstVmafClass GstVmafClass; + +struct _GstVmaf +{ + GstVideoAggregator videoaggregator; + + GstVideoAggregatorPad *ref_pad; + GstVideoAggregatorPad *dist_pad; + + // VMAF settings + enum VmafPoolingMethod vmaf_config_pool_method; + enum VmafOutputFormat vmaf_config_results_format; + gchar *vmaf_config_model_filename; + gboolean vmaf_config_disable_clip; + gboolean vmaf_config_enable_transform; + gboolean vmaf_config_phone_model; + gboolean vmaf_config_psnr; + gboolean vmaf_config_ssim; + gboolean vmaf_config_ms_ssim; + guint vmaf_config_num_threads; + guint vmaf_config_subsample; + gboolean vmaf_config_conf_int; + gboolean vmaf_config_frame_messaging; + gchar *vmaf_config_results_filename; + enum VmafLogLevel vmaf_config_log_level; + + // Process state + gboolean flushed; + gboolean initialized; + + gint processed_frames; + enum VmafPixelFormat pix_fmt; + + VmafContext *vmaf_ctx; + VmafModel *vmaf_model; + VmafModelCollection *vmaf_model_collection; +}; + +struct _GstVmafClass +{ + GstVideoAggregatorClass parent_class; +}; + +GType gst_vmaf_get_type (void); + +G_END_DECLS +#endif /* __GST_VMAFELEMENT_H__ */
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/vmaf/gstvmafplugin.c
Added
@@ -0,0 +1,49 @@ +/* VMAF plugin + * Copyright (C) 2021 Hudl + * @author: Casey Bateman <Casey.Bateman@hudl.com> + * Copyright (C) 2025 Fluendo S.A. <contact@fluendo.com> + * Authors: Diego Nieto <dnieto@fluendo.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef HAVE_CONFIG_H +#include <config.h> +#endif + +#include "gstvmafelement.h" + +/** + * SECTION:plugin-vmaf + * + * Provides Video Multi-Method Assessment Fusion quality metrics. + * + * Since: 1.28 + */ + +static gboolean +plugin_init (GstPlugin * plugin) +{ + gboolean result = + gst_element_register (plugin, "vmaf", GST_RANK_NONE, GST_TYPE_VMAF); + return result; +} + +GST_PLUGIN_DEFINE (GST_VERSION_MAJOR, + GST_VERSION_MINOR, + vmaf, + "Netflix VMAF quality metric plugin", + plugin_init, VERSION, "LGPL", GST_PACKAGE_NAME, GST_PACKAGE_ORIGIN)
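Once the plugin is installed, the registered `vmaf` element can be exercised from the command line. A hypothetical pipeline sketch — the `ref_sink` and `dist_sink` pad names come from `gst_vmaf_init()` above, `-m` prints the bus messages that carry the scores, and real use would feed matching reference/distorted streams (exact caps negotiation may need explicit capsfilters):

```
gst-launch-1.0 -m vmaf name=v frame-message=true ! fakesink \
    videotestsrc num-buffers=100 ! v.ref_sink \
    videotestsrc num-buffers=100 ! v.dist_sink
```

With `frame-message=true` each frame's score is posted as an element message; `results-filename` and `results-format` write the aggregate report to disk instead.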
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/vmaf/meson.build
Added
@@ -0,0 +1,45 @@
+vmaf_sources = [
+  'gstvmafelement.c',
+  'gstvmafplugin.c',
+]
+
+vmaf_headers = [
+  'gstvmafenums.h',
+  'gstvmafelement.h',
+]
+
+doc_sources = []
+foreach s: vmaf_sources + vmaf_headers
+  doc_sources += meson.current_source_dir() / s
+endforeach
+
+plugin_sources += {
+  'vmaf': pathsep.join(doc_sources)
+}
+
+if get_option('vmaf').disabled()
+  libvmaf_dep = dependency('', required: false)
+  subdir_done()
+endif
+
+cc = meson.get_compiler('c')
+is_msvc_windows = host_system == 'windows' and cc.get_id() == 'msvc'
+
+libvmaf_dep = dependency('libvmaf',
+  required : get_option('vmaf'),
+  fallback : is_msvc_windows ? [] : ['libvmaf', 'libvmaf_dep'])
+
+if not libvmaf_dep.found()
+  subdir_done()
+endif
+
+gstvmaf = library('gstvmaf',
+  vmaf_sources,
+  c_args : gst_plugins_bad_args + ['-DGST_USE_UNSTABLE_API'],
+  include_directories : [configinc],
+  dependencies : [gstvideo_dep, gstbase_dep, gst_dep, libvmaf_dep],
+  install : true,
+  install_dir : plugins_install_dir,
+)
+pkgconfig.generate(gstvmaf, install_dir : plugins_pkgconfig_install_dir)
+plugins += [gstvmaf]
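The build is gated on the `vmaf` Meson feature option read above via `get_option('vmaf')`. Typical configure invocations, assuming the standard Meson feature-option semantics used throughout gst-plugins-bad (the option itself is declared in the project's `meson_options.txt`):

```
meson setup builddir -Dvmaf=enabled   # error out if libvmaf is not found
meson setup builddir -Dvmaf=auto      # build the plugin only when libvmaf is available
meson setup builddir -Dvmaf=disabled  # skip the plugin entirely
```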
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/vulkan/base
Added
+(directory)
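The H.264 encoder base class added below (`gsth264encoder.c`) precomputes a per-GOP map of slice types: frame 0 is the IDR, every `ip_period`-th frame (with `ip_period = num_bframes + 1`) is a P frame, the frames in between are Bs, and the last frame of a multi-frame GOP is forced to P. A minimal Python sketch of that basic layout — B-pyramid levels, open-GOP I-frame insertion and reference flags from the real code are deliberately omitted:

```python
def gop_frame_map(idr_period, num_bframes):
    """Simplified slice-type map for one GOP, mirroring the basic
    I/P/B layout built by gst_h264_encoder_create_gop_frame_map()."""
    ip_period = num_bframes + 1   # distance between consecutive I/P frames
    frame_map = []
    for i in range(idr_period):
        if i == 0:
            frame_map.append('I')     # the IDR that opens the GOP
        elif i % ip_period:
            frame_map.append('B')     # B frames between the anchors
        else:
            frame_map.append('P')
    # the real code forces the last frame of the GOP to be a P
    if idr_period > 1:
        frame_map[-1] = 'P'
    return frame_map

print(gop_frame_map(8, 2))  # ['I', 'B', 'B', 'P', 'B', 'B', 'P', 'P']
```

For example, `idr_period=8` with two B frames yields the `IDR, B, B, P, B, B, P, P` pattern that `gst_h264_encoder_print_gop_structure()` would log for the same configuration.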
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/vulkan/base/gsth264encoder.c
Added
@@ -0,0 +1,3231 @@
+/* GStreamer
+ * Copyright (C) 2021 Intel Corporation
+ *   Author: He Junyan <junyan.he@intel.com>
+ * Copyright (C) 2023 Michael Grzeschik <m.grzeschik@pengutronix.de>
+ * Copyright (C) 2021, 2025 Igalia, S.L.
+ *   Author: Stéphane Cerveau <scerveau@igalia.com>
+ *   Author: Víctor Jáquez <ceyusa@igalia.com>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Library General Public
+ * License as published by the Free Software Foundation; either
+ * version 2 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Library General Public License for more details.
+ *
+ * You should have received a copy of the GNU Library General Public
+ * License along with this library; if not, write to the
+ * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
+ * Boston, MA 02110-1301, USA.
+ */
+
+/**
+ * SECTION:gsth264encoder
+ * @title: GstH264Encoder
+ * @short_description: Base class to implement stateless H.264 encoders
+ *
+ * This H.264 encoder base class helps in the generation of GOPs (Group of
+ * Pictures) using I, P and B frames, along with SPS and PPS proposals. The
+ * subclass is expected to implement the rate control algorithms and the
+ * specific accelerator logic.
+ *
+ * + Extended profile isn't supported.
+ * + Only progressive frames are supported (not interlaced) + * * Neither intra profiles are fully supported + */ + +/* ToDo: +* + add SEI message support */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gsth264encoder.h" + +/** + * GST_FLOW_OUTPUT_NOT_READY: + * + * A #GstFlowReturn for not ready operations + */ +#define GST_FLOW_OUTPUT_NOT_READY GST_FLOW_CUSTOM_SUCCESS_2 + +GST_DEBUG_CATEGORY (gst_h264_encoder_debug); +#define GST_CAT_DEFAULT gst_h264_encoder_debug + +#define H264ENC_IDR_PERIOD_DEFAULT 0 +#define H264ENC_B_FRAMES_DEFAULT 0 +#define H264ENC_I_FRAMES_DEFAULT 0 +#define H264ENC_NUM_REF_FRAMES_DEFAULT 3 +#define H264ENC_B_PYRAMID_DEFAULT FALSE + +typedef struct _GstH264EncoderPrivate GstH264EncoderPrivate; + +/* *INDENT-OFF* */ +/* Table A-1 - Level limits */ +static const GstH264LevelDescriptor _h264_levels = { + /* level idc MaxMBPS MaxFS MaxDpbMbs MaxBR MaxCPB MinCr */ + { "1", GST_H264_LEVEL_L1, 1485, 99, 396, 64, 175, 2 }, + { "1b", GST_H264_LEVEL_L1B, 1485, 99, 396, 128, 350, 2 }, + { "1.1", GST_H264_LEVEL_L1_1, 3000, 396, 900, 192, 500, 2 }, + { "1.2", GST_H264_LEVEL_L1_2, 6000, 396, 2376, 384, 1000, 2 }, + { "1.3", GST_H264_LEVEL_L1_3, 11880, 396, 2376, 768, 2000, 2 }, + { "2", GST_H264_LEVEL_L2, 11880, 396, 2376, 2000, 2000, 2 }, + { "2.1", GST_H264_LEVEL_L2_1, 19800, 792, 4752, 4000, 4000, 2 }, + { "2.2", GST_H264_LEVEL_L2_2, 20250, 1620, 8100, 4000, 4000, 2 }, + { "3", GST_H264_LEVEL_L3, 40500, 1620, 8100, 10000, 10000, 2 }, + { "3.1", GST_H264_LEVEL_L3_1, 108000, 3600, 18000, 14000, 14000, 4 }, + { "3.2", GST_H264_LEVEL_L3_2, 216000, 5120, 20480, 20000, 20000, 4 }, + { "4", GST_H264_LEVEL_L4, 245760, 8192, 32768, 20000, 25000, 4 }, + { "4.1", GST_H264_LEVEL_L4_1, 245760, 8192, 32768, 50000, 62500, 2 }, + { "4.2", GST_H264_LEVEL_L4_2, 522240, 8704, 34816, 50000, 62500, 2 }, + { "5", GST_H264_LEVEL_L5, 589824, 22080, 110400, 135000, 135000, 2 }, + { "5.1", GST_H264_LEVEL_L5_1, 983040, 36864, 184320, 240000, 240000, 2 }, 
+ { "5.2", GST_H264_LEVEL_L5_2, 2073600, 36864, 184320, 240000, 240000, 2 }, + { "6", GST_H264_LEVEL_L6, 4177920, 139264, 696320, 240000, 240000, 2 }, + { "6.1", GST_H264_LEVEL_L6_1, 8355840, 139264, 696320, 480000, 480000, 2 }, + { "6.2", GST_H264_LEVEL_L6_2, 16711680, 139264, 696320, 800000, 800000, 2 }, +}; + +/* Table A-2 - CPB BR NAL factor + H.10.2.1 (r) */ +static const struct { + GstH264Profile profile; + int cpb_br_nal_factor; +} _h264_nal_factors = { + { GST_H264_PROFILE_BASELINE, 1200 }, + { GST_H264_PROFILE_MAIN, 1200 }, + { GST_H264_PROFILE_EXTENDED, 1200 }, + { GST_H264_PROFILE_STEREO_HIGH, 1500 }, + { GST_H264_PROFILE_MULTIVIEW_HIGH, 1500 }, + { GST_H264_PROFILE_HIGH, 1500 }, + { GST_H264_PROFILE_HIGH10, 3600 }, + { GST_H264_PROFILE_HIGH_422, 4800 }, + { GST_H264_PROFILE_HIGH_444, 4800 }, +}; + +/* TABLE E-1 Meaning of sample aspect ratio indicator */ +static const struct { + int num; + int den; +} _h264_aspect_ratio = { + { 0, 1 }, + { 1, 1 }, + { 12, 11 }, + { 10, 11 }, + { 16, 11 }, + { 40, 33 }, + { 24, 11 }, + { 20, 11 }, + { 32, 11 }, + { 80, 33 }, + { 18, 11 }, + { 15, 11 }, + { 64, 33 }, + { 160, 99 }, + { 4, 3 }, + { 3, 2 }, + { 2, 1 }, +}; +/* *INDENT-ON* */ + +enum +{ + PROP_IDR_PERIOD = 1, /* aka PROP_KEY_INT_MAX */ + PROP_BFRAMES, + PROP_IFRAMES, + PROP_NUM_REF_FRAMES, + PROP_B_PYRAMID, + N_PROPERTIES +}; + +static GParamSpec *propertiesN_PROPERTIES; + +struct _GstH264EncoderPrivate +{ + GstVideoCodecState *input_state; + + struct + { + guint max_num_reference_list0; + guint max_num_reference_list1; + guint preferred_output_delay; + } config; + + struct + { + guint32 idr_period; + guint num_iframes; + guint num_bframes; + guint num_ref_frames; + gboolean b_pyramid; + } prop; + + struct + { + GstH264Profile profile; + GstH264Level level; + } stream; + + struct + { + /* frames between two IDR idr, ...., idr) */ + guint32 idr_period; + /* How may IDRs we have encoded */ + guint32 total_idr_count; + /* frames between I/P and P frames I, B, 
B, .., B, P) */ + guint32 ip_period; + /* frames between I frames I, B, B, .., B, P, ..., I), open GOP */ + guint32 i_period; + /* B frames between I/P and P. */ + guint32 num_bframes; + /* Use B pyramid structure in the GOP. */ + gboolean b_pyramid; + /* Level 0 is the simple B not acting as ref. */ + guint32 highest_pyramid_level; + /* If open GOP, I frames within a GOP. */ + guint32 num_iframes; + /* A map of all frames types within a GOP. */ + GArray *frame_map; + /* current index in the frames types map. */ + guint32 cur_frame_index; + /* Number of ref frames within current GOP. H264's frame num. */ + guint32 cur_frame_num; + /* Max frame num within a GOP. */ + guint32 max_frame_num; + guint32 log2_max_frame_num; + /* Max poc within a GOP. */ + guint32 max_pic_order_cnt; + guint32 log2_max_poc_lsb; + + /* Total ref frames of list0 and list1. */ + guint32 num_ref_frames; + guint32 ref_num_list0; + guint32 ref_num_list1; + + guint num_reorder_frames; + guint max_dec_frame_buffering; + guint max_num_ref_frames; + + GstVideoCodecFrame *last_keyframe; + } gop; + + /* current params */ + struct + { + GstH264SPS sps; + GstH264PPS pps; + } params; + + GstClockTime frame_duration; + guint fps_n; + guint fps_d; + + GQueue output_list; + GQueue ref_list; + GQueue reorder_list; + GstVecDeque *dts_queue; + + GArray *ref_list0; + GArray *ref_list1; + + gboolean is_live; + gboolean need_configure; +}; + +/** + * GstH264Encoder: + * + * Opaque #GstH264Encoder data structure. 
+ *
+ * Since: 1.28
+ */
+
+#define parent_class gst_h264_encoder_parent_class
+G_DEFINE_ABSTRACT_TYPE_WITH_CODE (GstH264Encoder, gst_h264_encoder,
+    GST_TYPE_VIDEO_ENCODER,
+    G_ADD_PRIVATE (GstH264Encoder);
+    GST_DEBUG_CATEGORY_INIT (gst_h264_encoder_debug, "h264encoder", 0,
+        "H264 Video Encoder"));
+
+GST_DEFINE_MINI_OBJECT_TYPE (GstH264EncoderFrame, gst_h264_encoder_frame);
+
+#define update_property(type, obj, old_val, new_val, prop_id) \
+static inline void \
+gst_h264_encoder_update_property_##type (GstH264Encoder * encoder, type * old_val, type new_val, guint prop_id) \
+{ \
+  GST_OBJECT_LOCK (encoder); \
+  if (*old_val == new_val) { \
+    GST_OBJECT_UNLOCK (encoder); \
+    return; \
+  } \
+  *old_val = new_val; \
+  GST_OBJECT_UNLOCK (encoder); \
+  if (prop_id > 0) \
+    g_object_notify_by_pspec (G_OBJECT (encoder), properties[prop_id]); \
+}
+
+update_property (guint, obj, old_val, new_val, prop_id);
+update_property (gboolean, obj, old_val, new_val, prop_id);
+
+#undef update_property
+
+#define update_property_uint(obj, old_val, new_val, prop_id) \
+  gst_h264_encoder_update_property_guint (obj, old_val, new_val, prop_id)
+#define update_property_bool(obj, old_val, new_val, prop_id) \
+  gst_h264_encoder_update_property_gboolean (obj, old_val, new_val, prop_id)
+
+#define _GET_PRIV(obj) gst_h264_encoder_get_instance_private (obj)
+#define _GET_FRAME(codec_frame) GST_H264_ENCODER_FRAME (gst_video_codec_frame_get_user_data (codec_frame))
+
+static void
+gst_h264_encoder_frame_free (GstMiniObject * mini_object)
+{
+  GstH264EncoderFrame *frame = GST_H264_ENCODER_FRAME (mini_object);
+
+  GST_TRACE ("Free frame %p", frame);
+  if (frame->user_data_destroy_notify)
+    frame->user_data_destroy_notify (frame->user_data);
+
+  g_free (frame);
+}
+
+/**
+ * gst_h264_encoder_frame_new:
+ *
+ * Create new #GstH264EncoderFrame
+ *
+ * Returns: a new #GstH264EncoderFrame
+ */
+GstH264EncoderFrame *
+gst_h264_encoder_frame_new (void)
+{
+  GstH264EncoderFrame *frame;
+
+  frame = g_new
(GstH264EncoderFrame, 1); + + /* *INDENT-OFF* */ + *frame = (GstH264EncoderFrame) { + .gop_frame_num = 0, + .unused_for_reference_pic_num = -1, + }; + /* *INDENT-ON */ + + gst_mini_object_init (GST_MINI_OBJECT_CAST (frame), 0, + GST_TYPE_H264_ENCODER_FRAME, NULL, NULL, gst_h264_encoder_frame_free); + + GST_TRACE ("New frame %p", frame); + + return frame; +} + +/** + * gst_h264_encoder_frame_set_user_data: + * @frame: a #GstH264EncoderFrame + * @user_data: private data + * @notify: (closure user_data): a #GDestroyNotify + * + * Sets @user_data on the frame and the #GDestroyNotify that will be called when + * the frame is freed. Allows to attach private data by the subclass to frames. + * + * If a @user_data was previously set, then the previous set @notify will be called + * before the @user_data is replaced. + */ +void +gst_h264_encoder_frame_set_user_data (GstH264EncoderFrame * frame, + gpointer user_data, GDestroyNotify notify) +{ + if (frame->user_data_destroy_notify) + frame->user_data_destroy_notify (frame->user_data); + + frame->user_data = user_data; + frame->user_data_destroy_notify = notify; +} + +/** + * gst_h264_encoder_frame_get_user_data: + * @frame: a #GstH264EncoderFrame + * + * Gets private data set on the frame by the subclass via + * gst_video_codec_frame_set_user_data() previously. 
+ *
+ * Returns: (transfer none): The previously set user_data
+ */
+gpointer
+gst_h264_encoder_frame_get_user_data (GstH264EncoderFrame * frame)
+{
+  return frame->user_data;
+}
+
+struct PyramidInfo
+{
+  guint level;
+  gint left_ref_poc_diff;
+  gint right_ref_poc_diff;
+};
+
+/* recursive function */
+static void
+gst_h264_encoder_set_pyramid_info (struct PyramidInfo *info, guint len,
+    guint current_level, guint highest_level)
+{
+  guint index;
+
+  g_assert (len >= 1 && len <= 31);
+
+  if (current_level == highest_level || len == 1) {
+    for (index = 0; index < len; index++) {
+      info[index].level = current_level;
+      info[index].left_ref_poc_diff = (index + 1) * -2;
+      info[index].right_ref_poc_diff = (len - index) * 2;
+    }
+
+    return;
+  }
+
+  index = len / 2;
+  info[index].level = current_level;
+  info[index].left_ref_poc_diff = (index + 1) * -2;
+  info[index].right_ref_poc_diff = (len - index) * 2;
+
+  current_level++;
+
+  if (index > 0) {
+    gst_h264_encoder_set_pyramid_info (info, index, current_level,
+        highest_level);
+  }
+
+  if (index + 1 < len) {
+    gst_h264_encoder_set_pyramid_info (&info[index + 1], len - (index + 1),
+        current_level, highest_level);
+  }
+}
+
+static void
+gst_h264_encoder_create_gop_frame_map (GstH264Encoder * self)
+{
+  GstH264EncoderPrivate *priv = _GET_PRIV (self);
+  guint i;
+  guint i_frames = priv->gop.num_iframes;
+  struct PyramidInfo pyramid_info[31] = { 0, };
+  GstH264GOPFrame *gop_frame;
+
+  if (priv->gop.highest_pyramid_level > 0) {
+    g_assert (priv->gop.num_bframes > 0);
+    gst_h264_encoder_set_pyramid_info (pyramid_info, priv->gop.num_bframes,
+        0, priv->gop.highest_pyramid_level);
+  }
+
+  if (!priv->gop.frame_map) {
+    priv->gop.frame_map = g_array_sized_new (TRUE, TRUE,
+        sizeof (GstH264GOPFrame), priv->gop.idr_period);
+  } else {
+    priv->gop.frame_map = g_array_set_size (priv->gop.frame_map,
+        priv->gop.idr_period);
+  }
+
+  for (i = 0; i < priv->gop.idr_period; i++) {
+    gop_frame = &g_array_index (priv->gop.frame_map, GstH264GOPFrame, i);
+
+    if (i == 0) {
+      gop_frame->slice_type = GST_H264_I_SLICE;
+      gop_frame->is_ref = TRUE;
+      continue;
+    }
+
+    /* 
Intra only stream. */
+    if (priv->gop.ip_period == 0) {
+      gop_frame->slice_type = GST_H264_I_SLICE;
+      gop_frame->is_ref = FALSE;
+      continue;
+    }
+
+    if (i % priv->gop.ip_period) {
+      guint pyramid_index =
+          i % priv->gop.ip_period - 1 /* The first P or IDR */ ;
+
+      gop_frame->slice_type = GST_H264_B_SLICE;
+      gop_frame->pyramid_level = pyramid_info[pyramid_index].level;
+      gop_frame->is_ref =
+          (gop_frame->pyramid_level < priv->gop.highest_pyramid_level);
+      gop_frame->left_ref_poc_diff =
+          pyramid_info[pyramid_index].left_ref_poc_diff;
+      gop_frame->right_ref_poc_diff =
+          pyramid_info[pyramid_index].right_ref_poc_diff;
+      continue;
+    }
+
+    if (priv->gop.i_period && i % priv->gop.i_period == 0 && i_frames > 0) {
+      /* Replace P with I. */
+      gop_frame->slice_type = GST_H264_I_SLICE;
+      gop_frame->is_ref = TRUE;
+      i_frames--;
+      continue;
+    }
+
+    gop_frame->slice_type = GST_H264_P_SLICE;
+    gop_frame->is_ref = TRUE;
+  }
+
+  /* Force the last one to be a P */
+  if (priv->gop.idr_period > 1 && priv->gop.ip_period > 0) {
+    gop_frame = &g_array_index (priv->gop.frame_map, GstH264GOPFrame,
+        priv->gop.idr_period - 1);
+
+    gop_frame->slice_type = GST_H264_P_SLICE;
+    gop_frame->is_ref = TRUE;
+  }
+}
+
+static void
+gst_h264_encoder_print_gop_structure (GstH264Encoder * self)
+{
+#ifndef GST_DISABLE_GST_DEBUG
+  GstH264EncoderPrivate *priv = _GET_PRIV (self);
+  GString *str;
+  guint i;
+
+  if (gst_debug_category_get_threshold (GST_CAT_DEFAULT) < GST_LEVEL_INFO)
+    return;
+
+  str = g_string_new (NULL);
+
+  g_string_append_printf (str, " ");
+
+  for (i = 0; i < priv->gop.idr_period; i++) {
+    GstH264GOPFrame *gop_frame =
+        &g_array_index (priv->gop.frame_map, GstH264GOPFrame, i);
+    if (i == 0) {
+      g_string_append_printf (str, "IDR");
+      continue;
+    } else {
+      g_string_append_printf (str, ", ");
+    }
+
+    g_string_append_printf (str, "%s",
+        gst_h264_slice_type_to_string (gop_frame->slice_type));
+
+    if (priv->gop.b_pyramid && gop_frame->slice_type == GST_H264_B_SLICE) {
+      g_string_append_printf
(str, "<L%d (%d, %d)>", + gop_frame->pyramid_level, + gop_frame->left_ref_poc_diff, gop_frame->right_ref_poc_diff); + } + + if (gop_frame->is_ref) { + g_string_append_printf (str, "(ref)"); + } + } + + g_string_append_printf (str, " "); + + GST_INFO_OBJECT (self, "GOP size: %d, forward reference %d, backward" + " reference %d, GOP structure: %s", priv->gop.idr_period, + priv->gop.ref_num_list0, priv->gop.ref_num_list1, str->str); + + g_string_free (str, TRUE); +#endif +} + +/* + * TODO: + * + Load some preset fixed GOP structure. + * + Skip this if in lookahead mode. + */ +static void +gst_h264_encoder_generate_gop_structure (GstH264Encoder * self) +{ + GstH264EncoderPrivate *priv = _GET_PRIV (self); + guint32 list0, list1, gop_ref_num; + gint32 p_frames; + + if (priv->stream.profile == GST_H264_PROFILE_BASELINE) + priv->gop.num_bframes = 0; + + /* If not set, generate a idr every second */ + if (priv->gop.idr_period == 0) { + priv->gop.idr_period = (priv->fps_n + priv->fps_d - 1) / priv->fps_d; + } + + /* Prefer have more than 1 reference for the GOP which is not very small. 
*/ + if (priv->gop.idr_period > 8) { + if (priv->gop.num_bframes > (priv->gop.idr_period - 1) / 2) { + priv->gop.num_bframes = (priv->gop.idr_period - 1) / 2; + GST_INFO_OBJECT (self, "Lowering the number of num_bframes to %d", + priv->gop.num_bframes); + } + } else { + /* begin and end should be reference */ + if (priv->gop.num_bframes > priv->gop.idr_period - 1 - 1) { + if (priv->gop.idr_period > 1) { + priv->gop.num_bframes = priv->gop.idr_period - 1 - 1; + } else { + priv->gop.num_bframes = 0; + } + GST_INFO_OBJECT (self, "Lowering the number of num_bframes to %d", + priv->gop.num_bframes); + } + } + + list0 = MIN (priv->config.max_num_reference_list0, priv->gop.num_ref_frames); + list1 = MIN (priv->config.max_num_reference_list1, priv->gop.num_ref_frames); + + if (list0 == 0) { + GST_INFO_OBJECT (self, + "No reference support, fallback to intra only stream"); + + /* It does not make sense that if only the list1 exists. */ + priv->gop.num_ref_frames = 0; + + priv->gop.ip_period = 0; + priv->gop.num_bframes = 0; + priv->gop.b_pyramid = FALSE; + priv->gop.highest_pyramid_level = 0; + priv->gop.num_iframes = priv->gop.idr_period - 1 /* The idr */ ; + priv->gop.ref_num_list0 = 0; + priv->gop.ref_num_list1 = 0; + goto create_poc; + } + + if (priv->gop.num_ref_frames <= 1) { + GST_INFO_OBJECT (self, "The number of reference frames is only %d," + " no B frame allowed, fallback to I/P mode", priv->gop.num_ref_frames); + priv->gop.num_bframes = 0; + list1 = 0; + } + + /* b_pyramid needs at least 1 ref for B, besides the I/P */ + if (priv->gop.b_pyramid && priv->gop.num_ref_frames <= 1) { + GST_INFO_OBJECT (self, "The number of reference frames is only %d," + " not enough for b_pyramid", priv->gop.num_ref_frames); + priv->gop.b_pyramid = FALSE; + } + + if (list1 == 0 && priv->gop.num_bframes > 0) { + GST_INFO_OBJECT (self, + "No max reference count for list 1, fallback to I/P mode"); + priv->gop.num_bframes = 0; + priv->gop.b_pyramid = FALSE; + } + + /* I/P mode, no 
list1 needed. */ + if (priv->gop.num_bframes == 0) + list1 = 0; + + /* Not enough B frame, no need for b_pyramid. */ + if (priv->gop.num_bframes <= 1) + priv->gop.b_pyramid = FALSE; + + /* b pyramid has only one backward reference. */ + if (priv->gop.b_pyramid) + list1 = 1; + + if (priv->gop.num_ref_frames > list0 + list1) { + priv->gop.num_ref_frames = list0 + list1; + GST_WARNING_OBJECT (self, "number of reference frames is bigger than max " + "reference count. Lowered number of reference frames to %d", + priv->gop.num_ref_frames); + } + + /* How many possible refs within a GOP. */ + gop_ref_num = (priv->gop.idr_period + priv->gop.num_bframes) / + (priv->gop.num_bframes + 1); + + /* The end reference. */ + if (priv->gop.num_bframes > 0 + /* frame_num % (priv->gop.num_bframes + 1) happens to be the end P */ + && (priv->gop.idr_period % (priv->gop.num_bframes + 1) != 1)) + gop_ref_num++; + + /* Adjust reference num based on B frames and B pyramid. */ + if (priv->gop.num_bframes == 0) { + priv->gop.b_pyramid = FALSE; + priv->gop.ref_num_list0 = priv->gop.num_ref_frames; + priv->gop.ref_num_list1 = 0; + } else if (priv->gop.b_pyramid) { + guint b_frames = priv->gop.num_bframes; + + /* b pyramid has only one backward ref. */ + g_assert (list1 == 1); + priv->gop.ref_num_list1 = list1; + priv->gop.ref_num_list0 = + MIN (priv->gop.num_ref_frames - priv->gop.ref_num_list1, list0); + + b_frames = b_frames / 2; + while (b_frames) { + /* All the reference pictures and the current picture should be in the + DPB. So each B level as reference, plus the IDR or P in both ends and + the current picture should not exceed the max_dpb_size. */ + if (priv->gop.highest_pyramid_level + 2 + 1 == 16) + break; + + priv->gop.highest_pyramid_level++; + b_frames = b_frames / 2; + } + + GST_INFO_OBJECT (self, "pyramid level is %d", + priv->gop.highest_pyramid_level); + } else { + /* We prefer list0. Backward references have more latency. 
*/ + priv->gop.ref_num_list1 = 1; + priv->gop.ref_num_list0 = + priv->gop.num_ref_frames - priv->gop.ref_num_list1; + /* Balance the forward and backward references, but not cause a big + latency. */ + while ((priv->gop.num_bframes * priv->gop.ref_num_list1 <= 16) + && (priv->gop.ref_num_list1 <= gop_ref_num) + && (priv->gop.ref_num_list1 < list1) + && (priv->gop.ref_num_list0 / priv->gop.ref_num_list1 > 4)) { + priv->gop.ref_num_list0--; + priv->gop.ref_num_list1++; + } + + if (priv->gop.ref_num_list0 > list0) + priv->gop.ref_num_list0 = list0; + } + + /* It's OK, keep slots for GST_VIDEO_CODEC_FRAME_IS_FORCE_KEYFRAME frame. */ + if (priv->gop.ref_num_list0 > gop_ref_num) { + GST_DEBUG_OBJECT (self, "num_ref_frames %d is bigger than gop_ref_num %d", + priv->gop.ref_num_list0, gop_ref_num); + } + + /* Include the reference picture itself. */ + priv->gop.ip_period = 1 + priv->gop.num_bframes; + + p_frames = MAX (gop_ref_num - 1 /* IDR */, 0); + if (priv->gop.num_iframes > p_frames) { + priv->gop.num_iframes = p_frames; + GST_INFO_OBJECT (self, "Too many I frames insertion, lowering it to %d", + priv->gop.num_iframes); + } + + if (priv->gop.num_iframes > 0) { + guint total_i_frames = priv->gop.num_iframes + 1 /* IDR */ ; + priv->gop.i_period = + (gop_ref_num / total_i_frames) * (priv->gop.num_bframes + 1); + } + +create_poc: + /* initialize max_frame_num and max_poc. */ + priv->gop.log2_max_frame_num = 4; + while ((1 << priv->gop.log2_max_frame_num) <= priv->gop.idr_period) + priv->gop.log2_max_frame_num++; + + priv->gop.max_frame_num = (1 << priv->gop.log2_max_frame_num); + priv->gop.log2_max_poc_lsb = priv->gop.log2_max_frame_num + 1; + + /* 8.2.1.1 Decoding process for picture order count type 0: For intra only + stream, because all frames are non-reference, poc is easy to wrap. Need to + increase the max poc. 
*/ + if (priv->gop.ip_period == 0) + priv->gop.log2_max_poc_lsb++; + priv->gop.max_pic_order_cnt = (1 << priv->gop.log2_max_poc_lsb); + + /* Intra only stream. */ + if (priv->gop.ip_period == 0) { + priv->gop.num_reorder_frames = 0; + + priv->gop.max_dec_frame_buffering = 1 + 1; /* IDR and current frame. */ + priv->gop.max_num_ref_frames = 0; + } else { + priv->gop.num_reorder_frames = MIN (16, priv->gop.b_pyramid ? + priv->gop.highest_pyramid_level + 1 /* the last P frame. */ : + priv->gop.num_bframes > 0 ? priv->gop.ref_num_list1 : 0); + + priv->gop.max_dec_frame_buffering = MIN (16, + MAX (priv->gop.num_ref_frames + 1, priv->gop.b_pyramid + ? priv->gop.highest_pyramid_level + 2 + 1 + : priv->gop.num_reorder_frames + 1)); + + priv->gop.max_num_ref_frames = priv->gop.max_dec_frame_buffering - 1; + } + + /* logic from x264 -- keep it in order to support open GOPs in the future */ +#if 0 + { + /* number of refs + current frame */ + guint max_frame_num = + priv->gop.max_dec_frame_buffering * (priv->gop.b_pyramid ? 2 : 1) + 1; + + priv->gop.log2_max_frame_num = 4; + while ((1 << priv->gop.log2_max_frame_num) <= max_frame_num) + priv->gop.log2_max_frame_num++; + + priv->gop.max_frame_num = (1 << priv->gop.log2_max_frame_num); + + if (priv->gop.num_bframes > 0) { /* poc_type == 0 */ + gint32 max_delta_poc = + (priv->gop.num_bframes + 2) * (priv->gop.b_pyramid ? 
2 : 1) * 2; + priv->gop.log2_max_poc_lsb = 4; + while ((1 << priv->gop.log2_max_poc_lsb) <= max_delta_poc * 2) + priv->gop.log2_max_poc_lsb++; + } + + priv->gop.max_pic_order_cnt = (1 << priv->gop.log2_max_poc_lsb); + } +#endif + + gst_h264_encoder_create_gop_frame_map (self); + gst_h264_encoder_print_gop_structure (self); + + /* updates & notifications */ + update_property_uint (self, &priv->prop.idr_period, priv->gop.idr_period, + PROP_IDR_PERIOD); + update_property_uint (self, &priv->prop.num_ref_frames, + priv->gop.num_ref_frames, PROP_NUM_REF_FRAMES); + update_property_uint (self, &priv->prop.num_iframes, priv->gop.num_iframes, + PROP_IFRAMES); + update_property_bool (self, &priv->prop.b_pyramid, priv->gop.b_pyramid, + PROP_B_PYRAMID); + update_property_uint (self, &priv->prop.num_bframes, priv->gop.num_bframes, + PROP_BFRAMES); +} + +static inline void +gst_h264_encoder_flush_lists (GstH264Encoder * self) +{ + GstH264EncoderPrivate *priv = _GET_PRIV (self); + + g_queue_clear_full (&priv->output_list, + (GDestroyNotify) gst_video_codec_frame_unref); + g_queue_clear_full (&priv->ref_list, + (GDestroyNotify) gst_video_codec_frame_unref); + g_queue_clear_full (&priv->reorder_list, + (GDestroyNotify) gst_video_codec_frame_unref); + + g_clear_pointer (&priv->gop.frame_map, g_array_unref); + g_clear_pointer (&priv->dts_queue, gst_vec_deque_free); + + g_clear_pointer (&priv->ref_list0, g_array_unref); + g_clear_pointer (&priv->ref_list1, g_array_unref); +} + +static gboolean +gst_h264_encoder_start (GstVideoEncoder * encoder) +{ + /* Set the minimum pts to some huge value (1000 hours). This keeps + * the dts at the start of the stream from needing to be negative. 
*/ + gst_video_encoder_set_min_pts (encoder, GST_SECOND * 60 * 60 * 1000); + return TRUE; +} + +static gboolean +gst_h264_encoder_stop (GstVideoEncoder * encoder) +{ + GstH264Encoder *self = GST_H264_ENCODER (encoder); + GstH264EncoderPrivate *priv = _GET_PRIV (self); + + gst_h264_encoder_flush_lists (self); + + g_clear_pointer (&priv->input_state, gst_video_codec_state_unref); + + return TRUE; +} + +static void +gst_h264_encoder_reset (GstH264Encoder * self) +{ + GstH264EncoderPrivate *priv = _GET_PRIV (self); + GstH264EncoderClass *klass = GST_H264_ENCODER_GET_CLASS (self); + + + GST_OBJECT_LOCK (self); + priv->gop.idr_period = priv->prop.idr_period; + priv->gop.num_ref_frames = priv->prop.num_ref_frames; + priv->gop.num_bframes = priv->prop.num_bframes; + priv->gop.num_iframes = priv->prop.num_iframes; + priv->gop.b_pyramid = priv->prop.b_pyramid; + GST_OBJECT_UNLOCK (self); + + priv->stream.profile = GST_H264_PROFILE_INVALID; + priv->stream.level = 0; + + priv->gop.i_period = 0; + priv->gop.total_idr_count = 0; + priv->gop.ip_period = 0; + priv->gop.highest_pyramid_level = 0; + if (priv->gop.frame_map) + g_array_set_size (priv->gop.frame_map, 0); + priv->gop.cur_frame_index = 0; + priv->gop.cur_frame_num = 0; + priv->gop.max_frame_num = 0; + priv->gop.log2_max_frame_num = 0; + priv->gop.max_pic_order_cnt = 0; + priv->gop.log2_max_poc_lsb = 0; + priv->gop.ref_num_list0 = 0; + priv->gop.ref_num_list1 = 0; + priv->gop.num_reorder_frames = 0; + priv->gop.max_dec_frame_buffering = 0; + priv->gop.max_num_ref_frames = 0; + priv->gop.last_keyframe = NULL; + + gst_h264_sps_clear (&priv->params.sps); + gst_h264_pps_clear (&priv->params.pps); + + g_atomic_int_set (&priv->need_configure, FALSE); + + if (klass->reset) + klass->reset (self); +} + +static gboolean +gst_h264_encoder_set_format (GstVideoEncoder * encoder, + GstVideoCodecState * state) +{ + GstH264Encoder *self = GST_H264_ENCODER (encoder); + GstH264EncoderPrivate *priv = _GET_PRIV (self); + GstQuery *query; + + 
if (priv->input_state) + gst_video_codec_state_unref (priv->input_state); + priv->input_state = gst_video_codec_state_ref (state); + + priv->fps_d = GST_VIDEO_INFO_FPS_D (&priv->input_state->info); + priv->fps_n = GST_VIDEO_INFO_FPS_N (&priv->input_state->info); + + /* if still image */ + if (priv->fps_d == 0 || priv->fps_n == 0) { + priv->fps_d = 1; + priv->fps_n = 30; + } + + /* in case live streaming, we should run on low-latency mode */ + priv->is_live = FALSE; + query = gst_query_new_latency (); + if (gst_pad_peer_query (GST_VIDEO_ENCODER_SINK_PAD (encoder), query)) + gst_query_parse_latency (query, &priv->is_live, NULL, NULL); + gst_query_unref (query); + + g_atomic_int_set (&priv->need_configure, TRUE); + + return TRUE; +} + +static GstFlowReturn +gst_h264_encoder_finish_frame (GstH264Encoder * self, + GstVideoCodecFrame * frame) +{ + GstH264EncoderPrivate *priv = _GET_PRIV (self); + GstH264EncoderClass *base_class = GST_H264_ENCODER_GET_CLASS (self); + GstH264EncoderFrame *h264_frame = _GET_FRAME (frame); + GstFlowReturn ret; + + if (gst_vec_deque_get_length (priv->dts_queue) > 0) + frame->dts = + *((GstClockTime *) gst_vec_deque_pop_head_struct (priv->dts_queue)); + else + frame->dts = GST_CLOCK_TIME_NONE; + + if (base_class->prepare_output) { + ret = base_class->prepare_output (self, frame); + if (ret == GST_FLOW_ERROR) + goto prepare_error; + else if (ret == GST_FLOW_OUTPUT_NOT_READY) + return GST_FLOW_OK; + } + + if (h264_frame->poc == 0) { + GST_VIDEO_CODEC_FRAME_SET_SYNC_POINT (frame); + GST_BUFFER_FLAG_UNSET (frame->output_buffer, GST_BUFFER_FLAG_DELTA_UNIT); + GST_BUFFER_FLAG_SET (frame->output_buffer, GST_BUFFER_FLAG_HEADER); + } else { + GST_VIDEO_CODEC_FRAME_UNSET_SYNC_POINT (frame); + GST_BUFFER_FLAG_SET (frame->output_buffer, GST_BUFFER_FLAG_DELTA_UNIT); + } + + GST_LOG_OBJECT (self, "Push to downstream: frame system_frame_number: %d," + " pts: %" GST_TIME_FORMAT ", dts: %" GST_TIME_FORMAT + " duration: %" GST_TIME_FORMAT ", buffer size: %" 
G_GSIZE_FORMAT, + frame->system_frame_number, GST_TIME_ARGS (frame->pts), + GST_TIME_ARGS (frame->dts), GST_TIME_ARGS (frame->duration), + frame->output_buffer ? gst_buffer_get_size (frame->output_buffer) : 0); + + return gst_video_encoder_finish_frame (GST_VIDEO_ENCODER (self), frame); + +prepare_error: + { + GST_ERROR_OBJECT (self, "Failed to prepare output"); + gst_clear_buffer (&frame->output_buffer); + ret = gst_video_encoder_finish_frame (GST_VIDEO_ENCODER (self), frame); + if (ret != GST_FLOW_OK) + GST_WARNING_OBJECT (self, "Failed to drop unprepared frame"); + + return GST_FLOW_ERROR; + } +} + +static gboolean +gst_h264_encoder_reorder_lists_push (GstH264Encoder * self, + GstVideoCodecFrame * frame, gboolean last) +{ + GstH264EncoderFrame *h264_frame; + GstH264EncoderPrivate *priv = _GET_PRIV (self); + gboolean add_cached_key_frame = FALSE; + + g_return_val_if_fail (priv->gop.cur_frame_index <= priv->gop.idr_period, + FALSE); + + if (frame) { + h264_frame = _GET_FRAME (frame); + + /* Force to insert the key frame inside a GOP, just end the current + * GOP and start a new one. */ + if (GST_VIDEO_CODEC_FRAME_IS_FORCE_KEYFRAME (frame) && + !(priv->gop.cur_frame_index == 0 || + priv->gop.cur_frame_index == priv->gop.idr_period)) { + GST_DEBUG_OBJECT (self, "system_frame_number: %u is a force key " + "frame(IDR), begin a new GOP.", frame->system_frame_number); + + h264_frame->type = + g_array_index (priv->gop.frame_map, GstH264GOPFrame, 0); + h264_frame->poc = 0; + h264_frame->force_idr = TRUE; + + /* The previous key frame should be already be poped out. */ + g_assert (priv->gop.last_keyframe == NULL); + + /* An empty reorder list, start the new GOP immediately. */ + if (g_queue_is_empty (&priv->reorder_list)) { + priv->gop.cur_frame_index = 1; + priv->gop.cur_frame_num = 0; + g_queue_clear_full (&priv->ref_list, + (GDestroyNotify) gst_video_codec_frame_unref); + last = FALSE; + } else { + /* Cache the key frame and end the current GOP. 
+ * Next time calling this push() without frame, start the new GOP. */ + priv->gop.last_keyframe = frame; + last = TRUE; + } + + add_cached_key_frame = TRUE; + } else { + /* Begin a new GOP, should have a empty reorder_list. */ + if (priv->gop.cur_frame_index == priv->gop.idr_period) { + g_assert (g_queue_is_empty (&priv->reorder_list)); + priv->gop.cur_frame_index = 0; + priv->gop.cur_frame_num = 0; + } + + if (priv->gop.cur_frame_index == 0) { + g_assert (h264_frame->poc == 0); + GST_LOG_OBJECT (self, "system_frame_number: %d, an IDR frame, starts" + " a new GOP", frame->system_frame_number); + + g_queue_clear_full (&priv->ref_list, + (GDestroyNotify) gst_video_codec_frame_unref); + } + + h264_frame->type = g_array_index (priv->gop.frame_map, GstH264GOPFrame, + priv->gop.cur_frame_index); + h264_frame->poc = + (priv->gop.cur_frame_index * 2) % priv->gop.max_pic_order_cnt; + + GST_LOG_OBJECT (self, "Push frame, system_frame_number: %d, poc %d, " + "frame type %s", frame->system_frame_number, h264_frame->poc, + gst_h264_slice_type_to_string (h264_frame->type.slice_type)); + + priv->gop.cur_frame_index++; + + g_queue_push_tail (&priv->reorder_list, + gst_video_codec_frame_ref (frame)); + } + } else if (priv->gop.last_keyframe) { + g_assert (priv->gop.last_keyframe == + g_queue_peek_tail (&priv->reorder_list)); + + if (g_queue_get_length (&priv->reorder_list) == 1) { + /* The last cached key frame begins a new GOP */ + priv->gop.cur_frame_index = 1; + priv->gop.cur_frame_num = 0; + priv->gop.last_keyframe = NULL; + g_queue_clear_full (&priv->ref_list, + (GDestroyNotify) gst_video_codec_frame_unref); + } + } + + /* ensure the last one a non-B and end the GOP. */ + if (last && priv->gop.cur_frame_index < priv->gop.idr_period) { + GstVideoCodecFrame *last_frame; + + /* Ensure next push will start a new GOP. 
*/ + priv->gop.cur_frame_index = priv->gop.idr_period; + + if (!g_queue_is_empty (&priv->reorder_list)) { + last_frame = g_queue_peek_tail (&priv->reorder_list); + h264_frame = _GET_FRAME (last_frame); + if (h264_frame->type.slice_type == GST_H264_B_SLICE) { + h264_frame->type.slice_type = GST_H264_P_SLICE; + h264_frame->type.is_ref = TRUE; + } + } + } + + /* Insert the cached next key frame after ending the current GOP. */ + if (add_cached_key_frame) { + g_queue_push_tail (&priv->reorder_list, gst_video_codec_frame_ref (frame)); + } + + return TRUE; +} + +struct RefFramesCount +{ + gint poc; + guint num; +}; + +static void +_count_backward_ref_num (gpointer data, gpointer user_data) +{ + GstH264EncoderFrame *frame = _GET_FRAME (data); + struct RefFramesCount *count = (struct RefFramesCount *) user_data; + + g_assert (frame->poc != count->poc); + if (frame->poc > count->poc) + count->num++; +} + +static GstVideoCodecFrame * +_pop_pyramid_b_frame (GstH264Encoder * self, guint gop_len) +{ + GstH264EncoderPrivate *priv = _GET_PRIV (self); + guint i; + gint index = -1; + GstH264EncoderFrame *h264_frame, *b_h264_frame; + GstVideoCodecFrame *frame, *b_frame; + struct RefFramesCount count; + + g_assert (priv->gop.ref_num_list1 == 1); + + b_frame = NULL; + b_h264_frame = NULL; + + /* Find the lowest level with smallest poc. */ + for (i = 0; i < gop_len; i++) { + + frame = g_queue_peek_nth (&priv->reorder_list, i); + + if (!b_frame) { + b_frame = frame; + b_h264_frame = _GET_FRAME (b_frame); + index = i; + continue; + } + + h264_frame = _GET_FRAME (frame); + if (b_h264_frame->type.pyramid_level < h264_frame->type.pyramid_level) { + b_frame = frame; + b_h264_frame = h264_frame; + index = i; + continue; + } + + if (b_h264_frame->poc > h264_frame->poc) { + b_frame = frame; + b_h264_frame = h264_frame; + index = i; + } + } + +again: + /* Check whether its refs are already poped. 
*/ + g_assert (b_h264_frame->type.left_ref_poc_diff != 0); + g_assert (b_h264_frame->type.right_ref_poc_diff != 0); + + for (i = 0; i < gop_len; i++) { + GstH264EncoderFrame *h264_frame; + GstVideoCodecFrame *frame; + + frame = g_queue_peek_nth (&priv->reorder_list, i); + + if (frame == b_frame) + continue; + + h264_frame = _GET_FRAME (frame); + if (h264_frame->poc == b_h264_frame->poc + + b_h264_frame->type.left_ref_poc_diff + || h264_frame->poc == b_h264_frame->poc + + b_h264_frame->type.right_ref_poc_diff) { + b_frame = frame; + b_h264_frame = h264_frame; + index = i; + goto again; + } + } + + /* Ensure we already have enough backward refs */ + count.num = 0; + count.poc = b_h264_frame->poc; + g_queue_foreach (&priv->ref_list, (GFunc) _count_backward_ref_num, &count); + if (count.num >= priv->gop.ref_num_list1) { + GstVideoCodecFrame *frame; + + /* it will unref at pop_frame */ + frame = g_queue_pop_nth (&priv->reorder_list, index); + g_assert (frame == b_frame); + } else { + b_frame = NULL; + } + + return b_frame; +} + +static gboolean +gst_h264_encoder_reorder_lists_pop (GstH264Encoder * self, + GstVideoCodecFrame ** out_frame) +{ + GstH264EncoderPrivate *priv = _GET_PRIV (self); + GstH264EncoderFrame *h264_frame; + GstVideoCodecFrame *frame; + struct RefFramesCount count; + guint gop_len; + + g_return_val_if_fail (priv->gop.cur_frame_index <= priv->gop.idr_period, + FALSE); + + *out_frame = NULL; + + if (g_queue_is_empty (&priv->reorder_list)) + return TRUE; + + gop_len = g_queue_get_length (&priv->reorder_list); + + if (priv->gop.last_keyframe && gop_len > 1) + gop_len--; + + /* Return the last pushed non-B immediately. 
*/ + frame = g_queue_peek_nth (&priv->reorder_list, gop_len - 1); + h264_frame = _GET_FRAME (frame); + if (h264_frame->type.slice_type != GST_H264_B_SLICE) { + frame = g_queue_pop_nth (&priv->reorder_list, gop_len - 1); + goto get_one; + } + + if (priv->gop.b_pyramid) { + frame = _pop_pyramid_b_frame (self, gop_len); + if (!frame) + return TRUE; + goto get_one; + } + + g_assert (priv->gop.ref_num_list1 > 0); + + /* If GOP end, pop anyway. */ + if (priv->gop.cur_frame_index == priv->gop.idr_period) { + frame = g_queue_pop_head (&priv->reorder_list); + goto get_one; + } + + /* Ensure we already have enough backward refs */ + frame = g_queue_peek_head (&priv->reorder_list); + h264_frame = _GET_FRAME (frame); + count.num = 0; + count.poc = h264_frame->poc; + g_queue_foreach (&priv->ref_list, _count_backward_ref_num, &count); + if (count.num >= priv->gop.ref_num_list1) { + frame = g_queue_pop_head (&priv->reorder_list); + goto get_one; + } + + return TRUE; + +get_one: + g_assert (priv->gop.cur_frame_num < priv->gop.max_frame_num); + + h264_frame = _GET_FRAME (frame); + h264_frame->gop_frame_num = priv->gop.cur_frame_num; + + /* Add the frame number for ref frames. */ + if (h264_frame->type.is_ref) { + if (!g_uint_checked_add (&priv->gop.cur_frame_num, priv->gop.cur_frame_num, + 1)) + return FALSE; + } + + /* used to identify idr_pic_id, incremented only when are two consecutive + * IDR */ + if (h264_frame->gop_frame_num == 0) { + if (!g_uint_checked_add (&priv->gop.total_idr_count, + priv->gop.total_idr_count, 1)) + return FALSE; + } + + h264_frame->idr_pic_id = priv->gop.total_idr_count; + + if (priv->gop.b_pyramid && h264_frame->type.slice_type == GST_H264_B_SLICE) { + GST_LOG_OBJECT (self, "pop a pyramid B frame with system_frame_number:" + " %d, poc: %d, frame num: %d, is_ref: %s, level %d", + frame->system_frame_number, h264_frame->poc, + h264_frame->gop_frame_num, h264_frame->type.is_ref ? 
"true" : "false", + h264_frame->type.pyramid_level); + } else { + GST_LOG_OBJECT (self, "pop a frame with system_frame_number: %d," + " frame type: %s, poc: %d, frame num: %d, is_ref: %s", + frame->system_frame_number, + gst_h264_slice_type_to_string (h264_frame->type.slice_type), + h264_frame->poc, h264_frame->gop_frame_num, + h264_frame->type.is_ref ? "true" : "false"); + } + + /* unref frame popped from queue or pyramid b_frame */ + gst_video_codec_frame_unref (frame); + *out_frame = frame; + return TRUE; +} + +static gboolean +gst_h264_encoder_reorder_frame (GstH264Encoder * self, + GstVideoCodecFrame * frame, gboolean bump_all, + GstVideoCodecFrame ** out_frame) +{ + if (!gst_h264_encoder_reorder_lists_push (self, frame, bump_all)) { + GST_ERROR_OBJECT (self, "Failed to push the input frame" + " system_frame_number: %d into the reorder list", + frame->system_frame_number); + + *out_frame = NULL; + return FALSE; + } + + if (!gst_h264_encoder_reorder_lists_pop (self, out_frame)) { + GST_ERROR_OBJECT (self, "Failed to pop the frame from the reorder list"); + *out_frame = NULL; + return FALSE; + } + + return TRUE; +} + +static void +_update_ref_pic_marking_for_unused_frame (GstH264SliceHdr * slice_hdr, + GstH264EncoderFrame * frame) +{ + GstH264RefPicMarking *refpicmarking; + + slice_hdr->dec_ref_pic_marking.adaptive_ref_pic_marking_mode_flag = 1; + slice_hdr->dec_ref_pic_marking.n_ref_pic_marking = 2; + + refpicmarking = &slice_hdr->dec_ref_pic_marking.ref_pic_marking0; + + refpicmarking->memory_management_control_operation = 1; + refpicmarking->difference_of_pic_nums_minus1 = + frame->gop_frame_num - frame->unused_for_reference_pic_num - 1; + + refpicmarking = &slice_hdr->dec_ref_pic_marking.ref_pic_marking1; + refpicmarking->memory_management_control_operation = 0; +} + +static gint +_frame_num_asc_compare (const GstH264EncoderFrame ** a, + const GstH264EncoderFrame ** b) +{ + return (*a)->gop_frame_num - (*b)->gop_frame_num; +} + +static gint 
+_frame_num_desc_compare (const GstH264EncoderFrame ** a, + const GstH264EncoderFrame ** b) +{ + return (*b)->gop_frame_num - (*a)->gop_frame_num; +} + +static void +_update_ref_pic_list_modification (GstH264SliceHdr * slice_hdr, GArray * list, + gboolean is_asc) +{ + GArray *list_by_pic_num; + guint modified, i; + GstH264RefPicListModification *ref_pic_list_modification = NULL; + guint16 pic_num_lx_pred; + + list_by_pic_num = g_array_copy (list); + + if (is_asc) + g_array_sort (list_by_pic_num, (GCompareFunc) _frame_num_asc_compare); + else + g_array_sort (list_by_pic_num, (GCompareFunc) _frame_num_desc_compare); + + modified = 0; + for (i = 0; i < list->len; i++) { + GstH264EncoderFrame *frame_poc = + g_array_index (list, GstH264EncoderFrame *, i); + GstH264EncoderFrame *frame_framenum = + g_array_index (list_by_pic_num, GstH264EncoderFrame *, i); + + if (frame_poc->poc != frame_framenum->poc) + modified++; + } + + g_array_unref (list_by_pic_num); + + if (modified == 0) + return; + + if (is_asc) { + slice_hdr->ref_pic_list_modification_flag_l1 = 1; + slice_hdr->n_ref_pic_list_modification_l1 = modified + 1; /* The end operation */ + ref_pic_list_modification = slice_hdr->ref_pic_list_modification_l1; + } else { + slice_hdr->ref_pic_list_modification_flag_l0 = 1; + slice_hdr->n_ref_pic_list_modification_l0 = modified + 1; /* The end operation */ + ref_pic_list_modification = slice_hdr->ref_pic_list_modification_l0; + } + + pic_num_lx_pred = slice_hdr->frame_num; + for (i = 0; i < modified; i++) { + GstH264EncoderFrame *frame = g_array_index (list, GstH264EncoderFrame *, i); + gint pic_num_diff = frame->gop_frame_num - pic_num_lx_pred; + + g_assert (pic_num_diff != 0); + + ref_pic_list_modificationi = (GstH264RefPicListModification) { + .modification_of_pic_nums_idc = pic_num_diff > 0 ? 1 : 0, + .value.abs_diff_pic_num_minus1 = ABS (pic_num_diff) - 1, + }; + + /* For the nex loop. 
*/ + pic_num_lx_pred = frame->gop_frame_num; + } + + /* *INDENT-OFF* */ + ref_pic_list_modificationi = (GstH264RefPicListModification) { + .modification_of_pic_nums_idc = 3, + }; + /* *INDENT-ON* */ +} + +/* If all the pic_num in the same order, OK. */ +static gboolean +_ref_list_need_reorder (GArray * list, gboolean is_asc) +{ + guint i; + + if (list->len <= 1) + return FALSE; + + for (i = 1; i < list->len; i++) { + GstH264EncoderFrame *frame = g_array_index (list, GstH264EncoderFrame *, i); + GstH264EncoderFrame *prev_frame = + g_array_index (list, GstH264EncoderFrame *, i - 1); + gint pic_num_diff = frame->gop_frame_num - prev_frame->gop_frame_num; + g_assert (pic_num_diff != 0); + + if (pic_num_diff > 0 && !is_asc) + return TRUE; + + if (pic_num_diff < 0 && is_asc) + return TRUE; + } + + return FALSE; +} + +static void +gst_h264_encoder_slicehdr_init (GstH264Encoder * self, + GstH264EncoderFrame * frame, GstH264SliceHdr * slice_hdr) +{ + GstH264EncoderPrivate *priv = _GET_PRIV (self); + + g_assert (priv->params.sps.separate_colour_plane_flag == 0); + /* only progressive so far */ + g_assert (priv->params.sps.frame_mbs_only_flag == 1); + + g_assert (priv->params.pps.pic_order_present_flag == 0); + g_assert (priv->params.pps.redundant_pic_cnt_present_flag == 0); + + /* *INDENT-OFF* */ + *slice_hdr = (GstH264SliceHdr) { + .first_mb_in_slice = 0, /* XXX: update if multiple slices */ + .type = frame->type.slice_type, + .pps = &priv->params.pps, + + /* if seq->separate_colour_plane_flag */ + .colour_plane_id = 0, + + .frame_num = frame->gop_frame_num, + + /* interlaced not supported now. */ + .field_pic_flag = 0, + .bottom_field_flag = 0, + + /* if nal_unit.type == IDR */ + .idr_pic_id = + frame->gop_frame_num == 0 ? frame->idr_pic_id : 0, + + /* if seq->pic_order_cnt_type == 0 */ + /* only pic_order_cnt_type 1 is supported now. */ + .pic_order_cnt_lsb = frame->poc, + /* if seq->pic_order_present_flag && !field_pic_flag: Not support + * top/bottom. 
*/ + .delta_pic_order_cnt_bottom = 0, + + .delta_pic_order_cnt = { 0, 0 }, + .redundant_pic_cnt = 0, + + /* if slice_type == B_SLICE */ + .direct_spatial_mv_pred_flag = + frame->type.slice_type == GST_H264_B_SLICE ? 1 : 0, + + .num_ref_idx_l0_active_minus1 = 0, /* defined later */ + .num_ref_idx_l1_active_minus1 = 0, /* defined later */ + .num_ref_idx_active_override_flag = 0, /* defined later */ + + /* Calculate it later. */ + .ref_pic_list_modification_flag_l0 = 0, + .n_ref_pic_list_modification_l0 = 0, + .ref_pic_list_modification_l0 = { { 0, }, }, + .ref_pic_list_modification_flag_l1 = 0, + .n_ref_pic_list_modification_l1 = 0, + .ref_pic_list_modification_l1 = { { 0, }, }, + + /* We have weighted_pred_flag and weighted_bipred_idc 0 here, no + * need weight_table. */ + .pred_weight_table = { 0, }, + /* if nal_unit.ref_idc != 0 */ + .dec_ref_pic_marking = { 0, }, + + .cabac_init_idc = 0, + .slice_qp_delta = 0, /* XXX: update it if rate control */ + + .disable_deblocking_filter_idc = 0, + .slice_alpha_c0_offset_div2 = 2, + .slice_beta_offset_div2 = 2, + + .slice_group_change_cycle = 0, + + /* Size of the slice_header() in bits */ + .header_size = 0, + + /* Number of emulation prevention bytes (EPB) in this slice_header() */ + .n_emulation_prevention_bytes = 0, + .sp_for_switch_flag = 0, + + .pic_order_cnt_bit_size = 0, + }; + /* *INDENT-ON* */ + + if (frame->type.slice_type == GST_H264_B_SLICE + || frame->type.slice_type == GST_H264_P_SLICE) { + slice_hdr->num_ref_idx_active_override_flag = + priv->ref_list0->len > 0 || priv->ref_list1->len > 0; + slice_hdr->num_ref_idx_l0_active_minus1 = + priv->ref_list0->len > 0 ? priv->ref_list0->len - 1 : 0; + if (frame->type.slice_type == GST_H264_B_SLICE) { + slice_hdr->num_ref_idx_l1_active_minus1 = + priv->ref_list1->len > 0 ? priv->ref_list1->len - 1 : 0; + } + } + + /* Reorder the ref lists if needed. 
*/ + if (_ref_list_need_reorder (priv->ref_list0, FALSE)) + _update_ref_pic_list_modification (slice_hdr, priv->ref_list0, FALSE); + + /* Mark the unused reference explicitly which this frame replaces. */ + if (frame->unused_for_reference_pic_num >= 0) + _update_ref_pic_marking_for_unused_frame (slice_hdr, frame); +} + +static gint +_sort_by_frame_num (gconstpointer a, gconstpointer b, gpointer user_data) +{ + GstH264EncoderFrame *frame1 = _GET_FRAME ((GstVideoCodecFrame *) a); + GstH264EncoderFrame *frame2 = _GET_FRAME ((GstVideoCodecFrame *) b); + + g_assert (frame1->gop_frame_num != frame2->gop_frame_num); + + return frame1->gop_frame_num - frame2->gop_frame_num; +} + +static GstVideoCodecFrame * +gst_h264_encoder_find_unused_reference_frame (GstH264Encoder * self, + GstH264EncoderFrame * h264_frame) +{ + GstH264EncoderPrivate *priv = _GET_PRIV (self); + GstH264EncoderFrame *b_h264_frame; + GstVideoCodecFrame *b_frame; + guint i; + + /* We still have more space. */ + if (g_queue_get_length (&priv->ref_list) < + priv->gop.max_dec_frame_buffering - 1) + return NULL; + + /* Not b_pyramid, sliding window is enough. */ + if (!priv->gop.b_pyramid) + return g_queue_peek_head (&priv->ref_list); + + /* I/P frame, just using sliding window. */ + if (h264_frame->type.slice_type != GST_H264_B_SLICE) + return g_queue_peek_head (&priv->ref_list); + + /* Choose the B frame with lowest POC. 
*/ + b_frame = NULL; + b_h264_frame = NULL; + for (i = 0; i < g_queue_get_length (&priv->ref_list); i++) { + GstH264EncoderFrame *h264frame; + GstVideoCodecFrame *frame; + + frame = g_queue_peek_nth (&priv->ref_list, i); + h264frame = _GET_FRAME (frame); + if (h264frame->type.slice_type != GST_H264_B_SLICE) + continue; + + if (!b_frame) { + b_frame = frame; + b_h264_frame = _GET_FRAME (b_frame); + continue; + } + + b_h264_frame = _GET_FRAME (b_frame); + g_assert (h264frame->poc != b_h264_frame->poc); + if (h264frame->poc < b_h264_frame->poc) { + b_frame = frame; + b_h264_frame = _GET_FRAME (b_frame); + } + } + + /* No B frame as ref. */ + if (!b_frame) + return g_queue_peek_head (&priv->ref_list); + + if (b_frame != g_queue_peek_head (&priv->ref_list)) { + b_h264_frame = _GET_FRAME (b_frame); + h264_frame->unused_for_reference_pic_num = b_h264_frame->gop_frame_num; + GST_LOG_OBJECT (self, "The frame with POC: %d, pic_num %d will be" + " replaced by the frame with POC: %d, pic_num %d explicitly by" + " using memory_management_control_operation=1", + b_h264_frame->poc, b_h264_frame->gop_frame_num, + h264_frame->poc, h264_frame->gop_frame_num); + } + + return b_frame; +} + +static gint +_poc_asc_compare (const GstH264EncoderFrame ** a, + const GstH264EncoderFrame ** b) +{ + return (*a)->poc - (*b)->poc; +} + +static gint +_poc_desc_compare (const GstH264EncoderFrame ** a, + const GstH264EncoderFrame ** b) +{ + return (*b)->poc - (*a)->poc; +} + +static GstFlowReturn +gst_h264_encoder_encode_frame_with_ref_lists (GstH264Encoder * self, + GstVideoCodecFrame * frame) +{ + GstH264EncoderClass *klass = GST_H264_ENCODER_GET_CLASS (self); + GstH264EncoderPrivate *priv = _GET_PRIV (self); + GstH264EncoderFrame *h264_frame; + GArray *list0, *list1; + GstH264SliceHdr slice_hdr; + gint i; + + g_return_val_if_fail (frame, FALSE); + + h264_frame = _GET_FRAME (frame); + + list0 = priv->ref_list0; + list1 = priv->ref_list1; + + g_array_set_size (list0, 0); + g_array_set_size (list1, 
0); + + /* Non I frame, construct reference list. */ + if (h264_frame->type.slice_type != GST_H264_I_SLICE) { + g_assert (g_queue_get_length (&priv->ref_list) < + priv->gop.max_dec_frame_buffering); + + GST_INFO_OBJECT (self, "Default RefPicList0 for fn=%u/poc=%d:", + h264_frame->gop_frame_num, h264_frame->poc); + for (i = g_queue_get_length (&priv->ref_list) - 1; i >= 0; i--) { + GstVideoCodecFrame *ref_frame; + GstH264EncoderFrame *ref_h264_frame; + + ref_frame = g_queue_peek_nth (&priv->ref_list, i); + ref_h264_frame = _GET_FRAME (ref_frame); + if (ref_h264_frame->poc > h264_frame->poc) + continue; + + GST_INFO_OBJECT (self, " fn=%u/poc=%d:", ref_h264_frame->gop_frame_num, + ref_h264_frame->poc); + g_array_append_val (list0, ref_h264_frame); + } + + /* reorder to select the nearest forward frames. */ + g_array_sort (list0, (GCompareFunc) _poc_desc_compare); + + if (list0->len > priv->gop.ref_num_list0) + g_array_set_size (list0, priv->gop.ref_num_list0); + } + + if (h264_frame->type.slice_type == GST_H264_B_SLICE) { + GST_INFO_OBJECT (self, "Default RefPicList1 for fn=%u/poc=%d:", + h264_frame->gop_frame_num, h264_frame->poc); + for (i = 0; i < g_queue_get_length (&priv->ref_list); i++) { + GstH264EncoderFrame *ref_h264_frame; + GstVideoCodecFrame *ref_frame; + + ref_frame = g_queue_peek_nth (&priv->ref_list, i); + ref_h264_frame = _GET_FRAME (ref_frame); + if (ref_h264_frame->poc < h264_frame->poc) + continue; + + GST_INFO_OBJECT (self, " fn=%d/poc=%d", + ref_h264_frame->gop_frame_num, ref_h264_frame->poc); + g_array_append_val (list1, ref_h264_frame); + } + + /* reorder to select the nearest backward frames. 
*/ + g_array_sort (list1, (GCompareFunc) _poc_asc_compare); + + if (list1->len > priv->gop.ref_num_list1) + g_array_set_size (list1, priv->gop.ref_num_list1); + } + + g_assert (list0->len + list1->len <= priv->gop.num_ref_frames); + + gst_h264_encoder_slicehdr_init (self, h264_frame, &slice_hdr); + + g_assert (klass->encode_frame); + return klass->encode_frame (self, frame, h264_frame, &slice_hdr, list0, + list1); +} + +static GstFlowReturn +gst_h264_encoder_encode_frame (GstH264Encoder * self, + GstVideoCodecFrame * frame, gboolean is_last) +{ + GstH264EncoderPrivate *priv = _GET_PRIV (self); + GstH264EncoderFrame *h264_frame; + GstVideoCodecFrame *unused_ref = NULL; + GstFlowReturn ret; + + h264_frame = _GET_FRAME (frame); + h264_frame->last_frame = is_last; + + if (h264_frame->type.is_ref) { + unused_ref = + gst_h264_encoder_find_unused_reference_frame (self, h264_frame); + } + + ret = gst_h264_encoder_encode_frame_with_ref_lists (self, frame); + if (ret != GST_FLOW_OK) { + GST_ERROR_OBJECT (self, "Failed to encode the frame: %s", + gst_flow_get_name (ret)); + return ret; + } + + g_queue_push_tail (&priv->output_list, gst_video_codec_frame_ref (frame)); + + if (h264_frame->type.is_ref) { + if (unused_ref) { + if (!g_queue_remove (&priv->ref_list, unused_ref)) + g_assert_not_reached (); + + gst_video_codec_frame_unref (unused_ref); + } + + /* Add it into the reference list. 
*/ + g_queue_push_tail (&priv->ref_list, gst_video_codec_frame_ref (frame)); + g_queue_sort (&priv->ref_list, _sort_by_frame_num, NULL); + + g_assert (g_queue_get_length (&priv->ref_list) < + priv->gop.max_dec_frame_buffering); + } + + return ret; +} + +static GstFlowReturn +gst_h264_encoder_finish_last_frame (GstH264Encoder * self) +{ + GstH264EncoderPrivate *priv = _GET_PRIV (self); + GstVideoCodecFrame *frame; + GstFlowReturn ret; + guint32 system_frame_number; + + if (g_queue_is_empty (&priv->output_list)) + return GST_FLOW_OUTPUT_NOT_READY; + + /* TODO: check if the output buffer is ready */ + + frame = g_queue_pop_head (&priv->output_list); + system_frame_number = frame->system_frame_number; + + gst_video_codec_frame_unref (frame); + + ret = gst_h264_encoder_finish_frame (self, frame); + + if (ret != GST_FLOW_OK) { + GST_DEBUG_OBJECT (self, "fails to push one buffer, system_frame_number " + "%d: %s", system_frame_number, gst_flow_get_name (ret)); + } + + return ret; +} + +static GstFlowReturn +gst_h264_encoder_drain (GstH264Encoder * self) +{ + GstVideoEncoder *encoder = GST_VIDEO_ENCODER (self); + GstH264EncoderPrivate *priv = _GET_PRIV (self); + GstFlowReturn ret = GST_FLOW_OK; + GstVideoCodecFrame *frame = NULL; + gboolean is_last; + + GST_DEBUG_OBJECT (self, "Encoder is draining"); + + /* Kickout all cached frames */ + if (!gst_h264_encoder_reorder_frame (self, NULL, TRUE, &frame)) { + ret = GST_FLOW_ERROR; + goto error_and_purge_all; + } + + while (frame) { + is_last = g_queue_is_empty (&priv->reorder_list); + ret = gst_h264_encoder_encode_frame (self, frame, is_last); + if (ret != GST_FLOW_OK) + goto error_and_purge_all; + + frame = NULL; + + ret = gst_h264_encoder_finish_last_frame (self); + if (ret != GST_FLOW_OK) + goto error_and_purge_all; + + if (!gst_h264_encoder_reorder_frame (self, NULL, TRUE, &frame)) { + ret = GST_FLOW_ERROR; + goto error_and_purge_all; + } + } + + g_assert (g_queue_is_empty (&priv->reorder_list)); + + /* Output all frames. 
*/ + while (!g_queue_is_empty (&priv->output_list)) { + ret = gst_h264_encoder_finish_last_frame (self); + if (ret != GST_FLOW_OK) + goto error_and_purge_all; + } + + /* Also clear the reference list. */ + g_queue_clear_full (&priv->ref_list, + (GDestroyNotify) gst_video_codec_frame_unref); + + return GST_FLOW_OK; + +error_and_purge_all: + if (frame) { + gst_clear_buffer (&frame->output_buffer); + gst_video_encoder_finish_frame (encoder, frame); + } + + if (!g_queue_is_empty (&priv->output_list)) { + GST_WARNING_OBJECT (self, "Still %d frames in the output list" + " after drain", g_queue_get_length (&priv->output_list)); + while (!g_queue_is_empty (&priv->output_list)) { + frame = g_queue_pop_head (&priv->output_list); + gst_video_codec_frame_unref (frame); + gst_clear_buffer (&frame->output_buffer); + gst_video_encoder_finish_frame (encoder, frame); + } + } + + if (!g_queue_is_empty (&priv->reorder_list)) { + GST_WARNING_OBJECT (self, "Still %d frames in the reorder list" + " after drain", g_queue_get_length (&priv->reorder_list)); + while (!g_queue_is_empty (&priv->reorder_list)) { + frame = g_queue_pop_head (&priv->reorder_list); + gst_video_codec_frame_unref (frame); + gst_clear_buffer (&frame->output_buffer); + gst_video_encoder_finish_frame (encoder, frame); + } + } + + /* Also clear the reference list. 
*/ + g_queue_clear_full (&priv->ref_list, + (GDestroyNotify) gst_video_codec_frame_unref); + + gst_vec_deque_clear (priv->dts_queue); + + return ret; +} + +enum +{ + GST_CHROMA_420 = 1, + GST_CHROMA_422 = 2, + GST_CHROMA_444 = 3, + GST_CHROMA_INVALID = 0xFF, +}; + +static guint8 +_h264_get_chroma_idc (GstVideoInfo * info) +{ + gint w_sub, h_sub; + + if (!GST_VIDEO_FORMAT_INFO_IS_YUV (info->finfo)) + return GST_CHROMA_INVALID; + + w_sub = 1 << GST_VIDEO_FORMAT_INFO_W_SUB (info->finfo, 1); + h_sub = 1 << GST_VIDEO_FORMAT_INFO_H_SUB (info->finfo, 1); + + if (w_sub == 2 && h_sub == 2) + return GST_CHROMA_420; + else if (w_sub == 2 && h_sub == 1) + return GST_CHROMA_422; + else if (w_sub == 1 && h_sub == 1) + return GST_CHROMA_444; + return GST_CHROMA_INVALID; +} + +static const struct +{ + const char *name; + GstH264Level level; +} _h264_level_map[] = { + {"1", GST_H264_LEVEL_L1}, + {"1b", GST_H264_LEVEL_L1B}, + {"1.1", GST_H264_LEVEL_L1_1}, + {"1.2", GST_H264_LEVEL_L1_2}, + {"1.3", GST_H264_LEVEL_L1_3}, + {"2", GST_H264_LEVEL_L2}, + {"2.1", GST_H264_LEVEL_L2_1}, + {"2.2", GST_H264_LEVEL_L2_2}, + {"3", GST_H264_LEVEL_L3}, + {"3.1", GST_H264_LEVEL_L3_1}, + {"3.2", GST_H264_LEVEL_L3_2}, + {"4", GST_H264_LEVEL_L4}, + {"4.1", GST_H264_LEVEL_L4_1}, + {"4.2", GST_H264_LEVEL_L4_2}, + {"5", GST_H264_LEVEL_L5}, + {"5.1", GST_H264_LEVEL_L5_1}, + {"5.2", GST_H264_LEVEL_L5_2}, + {"6", GST_H264_LEVEL_L6}, + {"6.1", GST_H264_LEVEL_L6_1}, + {"6.2", GST_H264_LEVEL_L6_2}, +}; + +static guint8 +_h264_get_level_idc (const gchar * level) +{ + if (!level) + return 0; + + for (int i = 0; i < G_N_ELEMENTS (_h264_level_map); i++) { + if (strcmp (level, _h264_level_map[i].name) == 0) + return _h264_level_map[i].level; + } + + return 0; +} + +static GstH264Profile +gst_h264_encoder_profile_from_string (const char *profile) +{ + if (g_strcmp0 (profile, "constrained-baseline") == 0) + return GST_H264_PROFILE_BASELINE; + return gst_h264_profile_from_string (profile); +} + +struct ProfileCandidate +{ + 
const char *profile_name; + GstH264Profile profile; + guint level; +}; + +static GstFlowReturn +gst_h264_encoder_negotiate_default (GstH264Encoder * self, + GstVideoCodecState * in_state, GstH264Profile * profile, + GstH264Level * level) +{ + GstCaps *allowed_caps; + guint i, num_structures, num_candidates = 0; + guint8 chroma, bit_depth_luma; + struct ProfileCandidate candidates[16] = + { {NULL, GST_H264_PROFILE_INVALID, 0}, }; + + allowed_caps = gst_pad_get_allowed_caps (GST_VIDEO_ENCODER_SRC_PAD (self)); + if (!allowed_caps) + return GST_FLOW_NOT_LINKED; + if (gst_caps_is_empty (allowed_caps)) { + gst_caps_unref (allowed_caps); + return GST_FLOW_NOT_NEGOTIATED; + } + + num_structures = gst_caps_get_size (allowed_caps); + for (i = 0; i < num_structures; i++) { + GstStructure *structure = gst_caps_get_structure (allowed_caps, i); + const GValue *profiles = gst_structure_get_value (structure, "profile"), + *level = gst_structure_get_value (structure, "level"); + struct ProfileCandidate *candidate; + + if (!profiles) + continue; + + candidate = &candidates[num_candidates]; + + if (G_VALUE_HOLDS_STRING (profiles)) { + candidate->profile_name = g_value_get_string (profiles); + candidate->profile = + gst_h264_encoder_profile_from_string (candidate->profile_name); + candidate->level = level ? + _h264_get_level_idc (g_value_get_string (level)) : 0; + num_candidates++; + } else if (GST_VALUE_HOLDS_LIST (profiles)) { + for (guint j = 0; j < gst_value_list_get_size (profiles); j++) { + const GValue *profile = gst_value_list_get_value (profiles, j); + + if (num_candidates == G_N_ELEMENTS (candidates)) + break; + + candidate = &candidates[num_candidates]; + candidate->profile_name = g_value_get_string (profile); + candidate->profile = + gst_h264_encoder_profile_from_string (candidate->profile_name); + candidate->level = level ? 
+ _h264_get_level_idc (g_value_get_string (level)) : 0; + num_candidates++; + } + } + + if (num_candidates == G_N_ELEMENTS (candidates)) + break; + } + + gst_caps_unref (allowed_caps); + + if (num_candidates == 0) { + GST_ERROR_OBJECT (self, "Source caps with no profile"); + return GST_FLOW_NOT_NEGOTIATED; + } + + chroma = _h264_get_chroma_idc (&in_state->info); + if (chroma == GST_CHROMA_INVALID) + return GST_FLOW_NOT_NEGOTIATED; + bit_depth_luma = GST_VIDEO_INFO_COMP_DEPTH (&in_state->info, 0); + + /* let's just pick the best one according to the input */ + for (i = 0; i < num_candidates; i++) { + struct ProfileCandidate *candidate = &candidates[i]; + + if (candidate->profile < *profile) + continue; + if (candidate->profile < GST_H264_PROFILE_HIGH_444 + && chroma == GST_CHROMA_444) { + GST_INFO_OBJECT (self, "Profile %s doesn't support 4:4:4", + candidate->profile_name); + continue; + } + if (candidate->profile < GST_H264_PROFILE_HIGH_422 + && chroma >= GST_CHROMA_422) { + GST_INFO_OBJECT (self, "Profile %s doesn't support 4:2:2", + candidate->profile_name); + continue; + } + if (candidate->profile < GST_H264_PROFILE_HIGH10 && bit_depth_luma > 8) { + GST_INFO_OBJECT (self, "Profile %s doesn't support a bit depth of %d", + candidate->profile_name, bit_depth_luma); + continue; + } + + *profile = candidates[i].profile; + *level = candidates[i].level; + } + + if (*profile == GST_H264_PROFILE_INVALID) { + GST_ERROR_OBJECT (self, "No valid profile found"); + return GST_FLOW_NOT_NEGOTIATED; + } + + return GST_FLOW_OK; +} + +#ifndef GST_DISABLE_GST_DEBUG +#define SPS_MEMBERS(F) \ + F(id) \ + F(profile_idc) \ + F(constraint_set0_flag) \ + F(constraint_set1_flag) \ + F(constraint_set2_flag) \ + F(constraint_set3_flag) \ + F(constraint_set4_flag) \ + F(constraint_set5_flag) \ + F(level_idc) \ + F(chroma_format_idc) \ + F(separate_colour_plane_flag) \ + F(bit_depth_luma_minus8) \ + F(bit_depth_chroma_minus8) \ + F(qpprime_y_zero_transform_bypass_flag) \ + 
F(scaling_matrix_present_flag) \ + F(log2_max_frame_num_minus4) \ + F(pic_order_cnt_type) \ + F(log2_max_pic_order_cnt_lsb_minus4) \ + F(delta_pic_order_always_zero_flag) \ + F(offset_for_non_ref_pic) \ + F(offset_for_top_to_bottom_field) \ + F(num_ref_frames_in_pic_order_cnt_cycle) \ + F(num_ref_frames) \ + F(gaps_in_frame_num_value_allowed_flag) \ + F(pic_width_in_mbs_minus1) \ + F(pic_height_in_map_units_minus1) \ + F(frame_mbs_only_flag) \ + F(mb_adaptive_frame_field_flag) \ + F(direct_8x8_inference_flag) \ + F(frame_cropping_flag) \ + F(frame_crop_left_offset) \ + F(frame_crop_right_offset) \ + F(frame_crop_top_offset) \ + F(frame_crop_bottom_offset) \ + F(vui_parameters_present_flag) \ + F(vui_parameters.aspect_ratio_info_present_flag) \ + F(vui_parameters.aspect_ratio_idc) \ + F(vui_parameters.sar_width) \ + F(vui_parameters.sar_height) \ + F(vui_parameters.overscan_info_present_flag) \ + F(vui_parameters.overscan_appropriate_flag) \ + F(vui_parameters.chroma_loc_info_present_flag) \ + F(vui_parameters.timing_info_present_flag) \ + F(vui_parameters.num_units_in_tick) \ + F(vui_parameters.time_scale) \ + F(vui_parameters.fixed_frame_rate_flag) \ + F(vui_parameters.nal_hrd_parameters_present_flag) \ + F(vui_parameters.vcl_hrd_parameters_present_flag) \ + F(vui_parameters.low_delay_hrd_flag) \ + F(vui_parameters.pic_struct_present_flag) \ + F(vui_parameters.bitstream_restriction_flag) \ + F(vui_parameters.motion_vectors_over_pic_boundaries_flag) \ + F(vui_parameters.max_bytes_per_pic_denom) \ + F(vui_parameters.max_bits_per_mb_denom) \ + F(vui_parameters.log2_max_mv_length_horizontal) \ + F(vui_parameters.log2_max_mv_length_vertical) \ + F(vui_parameters.num_reorder_frames) \ + F(vui_parameters.max_dec_frame_buffering) +#endif + +static void +gst_h264_sps_dump (GstH264Encoder * self, GstH264SPS * sps) +{ +#ifndef GST_DISABLE_GST_DEBUG +#define SPS_STR(member) " " G_STRINGIFY(member) " = %u\n" +#define SPS_VAL(member) sps->member, + GST_INFO_OBJECT (self, 
"SPS\n" SPS_MEMBERS (SPS_STR) "%s", + SPS_MEMBERS (SPS_VAL) ""); +#undef SPS_STR +#undef SPS_VAL +#endif +} + +#ifndef GST_DISABLE_GST_DEBUG +#define PPS_MEMBERS(F) \ + F(id) \ + F(entropy_coding_mode_flag) \ + F(pic_order_present_flag) \ + F(num_slice_groups_minus1) \ + F(slice_group_map_type) \ + F(slice_group_change_direction_flag) \ + F(slice_group_change_rate_minus1) \ + F(pic_size_in_map_units_minus1) \ + F(num_ref_idx_l0_active_minus1) \ + F(num_ref_idx_l1_active_minus1) \ + F(weighted_pred_flag) \ + F(weighted_bipred_idc) \ + F(pic_init_qp_minus26) \ + F(pic_init_qs_minus26) \ + F(chroma_qp_index_offset) \ + F(deblocking_filter_control_present_flag) \ + F(constrained_intra_pred_flag) \ + F(redundant_pic_cnt_present_flag) \ + F(transform_8x8_mode_flag) \ + F(second_chroma_qp_index_offset) \ + F(pic_scaling_matrix_present_flag) +#endif + +static void +gst_h264_pps_dump (GstH264Encoder * self, GstH264PPS * pps) +{ +#ifndef GST_DISABLE_GST_DEBUG +#define PPS_STR(member) " " G_STRINGIFY(member) " = %u\n" +#define PPS_VAL(member) pps->member, + GST_INFO_OBJECT (self, "PPS\n" PPS_MEMBERS (PPS_STR) "%s", + PPS_MEMBERS (PPS_VAL) ""); +#undef PPS_STR +#undef PPS_VAL +#endif +} + +/* 7.4.2.1.1 Sequence parameter set data semantics */ +static void +gst_h264_encoder_sps_init (GstH264Encoder * self) +{ + GstH264EncoderPrivate *priv = _GET_PRIV (self); + GstVideoInfo *info; + gint i, mb_width, mb_height; + guint8 chroma_format_idc, bit_depth_luma, bit_depth_chroma, + frame_cropping_flag, frame_crop_right_offset, frame_crop_bottom_offset, + aspect_ratio_present_flag, aspect_ratio_idc, sar_width, sar_height, + timing_info_present_flag, num_units_in_tick, time_scale, + fixed_frame_rate_flag, constraint_set3_flag, constraint_set4_flag, + constraint_set5_flag, level_idc, direct_8x8_inference_flag; + + info = &priv->input_state->info; + + GST_DEBUG_OBJECT (self, "filling SPS"); + + chroma_format_idc = _h264_get_chroma_idc (info); + mb_width = GST_ROUND_UP_16 
(GST_VIDEO_INFO_WIDTH (info)) / 16; + mb_height = GST_ROUND_UP_16 (GST_VIDEO_INFO_HEIGHT (info)) / 16; + bit_depth_luma = GST_VIDEO_INFO_COMP_DEPTH (info, 0); + bit_depth_chroma = GST_VIDEO_INFO_COMP_DEPTH (info, 1); + + if (GST_VIDEO_INFO_WIDTH (info) != + GST_ROUND_UP_16 (GST_VIDEO_INFO_WIDTH (info)) + || GST_VIDEO_INFO_HEIGHT (info) != + GST_ROUND_UP_16 (GST_VIDEO_INFO_HEIGHT (info))) { + /* Table 6-1 */ + const guint SubWidthC[] = { 1, 2, 2, 1 }; + const guint SubHeightC[] = { 1, 2, 1, 1 }; + + frame_cropping_flag = 1; + frame_crop_right_offset = (16 * mb_width - GST_VIDEO_INFO_WIDTH (info)) + / SubWidthC[chroma_format_idc]; + frame_crop_bottom_offset = (16 * mb_height - GST_VIDEO_INFO_HEIGHT (info)) + / SubHeightC[chroma_format_idc]; + } else { + frame_cropping_flag = frame_crop_right_offset = frame_crop_bottom_offset = + 0; + } + + aspect_ratio_present_flag = aspect_ratio_idc = sar_width = sar_height = 0; + + if (GST_VIDEO_INFO_PAR_N (info) != 0 && GST_VIDEO_INFO_PAR_D (info) != 0) { + aspect_ratio_present_flag = 1; + for (i = 0; i < G_N_ELEMENTS (_h264_aspect_ratio); i++) { + if (gst_util_fraction_compare (GST_VIDEO_INFO_PAR_N (info), + GST_VIDEO_INFO_PAR_D (info), _h264_aspect_ratio[i].num, + _h264_aspect_ratio[i].den) == 0) { + aspect_ratio_idc = i; + sar_width = sar_height = 0; + break; + } + } + + /* Extended SAR */ + if (i >= G_N_ELEMENTS (_h264_aspect_ratio)) { + aspect_ratio_idc = 0xff; + sar_width = GST_VIDEO_INFO_PAR_N (info); + sar_height = GST_VIDEO_INFO_PAR_D (info); + } + } + + if (GST_VIDEO_INFO_FPS_N (info) > 0 && GST_VIDEO_INFO_FPS_D (info) > 0) { + timing_info_present_flag = 1; + num_units_in_tick = GST_VIDEO_INFO_FPS_D (info); + time_scale = 2 * GST_VIDEO_INFO_FPS_N (info); + fixed_frame_rate_flag = 1; + } else { + timing_info_present_flag = num_units_in_tick = time_scale = + fixed_frame_rate_flag = 0; + } + + constraint_set3_flag = 0; + if (priv->stream.level == GST_H264_LEVEL_L1B + && (priv->stream.profile == GST_H264_PROFILE_BASELINE + || 
priv->stream.profile == GST_H264_PROFILE_MAIN)) { + constraint_set3_flag = 1; /* level 1b with Baseline or Main profile is + * signaled via constraint_set3 */ + } + + /* support intra profiles */ + if (priv->gop.idr_period == 1 + && priv->stream.profile >= GST_H264_PROFILE_HIGH) + constraint_set3_flag = 1; + + constraint_set4_flag = 0; + /* If profile_idc is equal to 77, 88, 100, or 110, constraint_set4_flag equal + * to 1 indicates that the value of frame_mbs_only_flag is equal to 1 */ + /* and frame_mbs_only_flag is 1 since we don't support interlaced streams */ + if (priv->stream.profile == GST_H264_PROFILE_MAIN + || priv->stream.profile == GST_H264_PROFILE_EXTENDED + || priv->stream.profile == GST_H264_PROFILE_HIGH + || priv->stream.profile == GST_H264_PROFILE_HIGH10) + constraint_set4_flag = 1; + + constraint_set5_flag = 0; + /* If profile_idc is equal to 77, 88, or 100, constraint_set5_flag equal to 1 + * indicates that B slice types are not present */ + if (priv->gop.num_bframes == 0 + && (priv->stream.profile == GST_H264_PROFILE_MAIN + || priv->stream.profile == GST_H264_PROFILE_EXTENDED + || priv->stream.profile == GST_H264_PROFILE_HIGH)) + constraint_set5_flag = 1; + + if (priv->stream.level >= GST_H264_LEVEL_L1B) { + level_idc = priv->stream.level; + } else { + level_idc = 0; + } + + g_assert (priv->gop.log2_max_poc_lsb >= 4); + g_assert (priv->gop.log2_max_frame_num >= 4); + + /* A.2.3 Extended profile: + * + * Sequence parameter sets shall have direct_8x8_inference_flag equal to 1. + * + * A.3.3 Profile-specific level limits: + * + * direct_8x8_inference_flag is not relevant to the Baseline, + * Constrained Baseline, Constrained High, High 10 Intra, High 4:2:2 + * Intra, High 4:4:4 Intra, and CAVLC 4:4:4 Intra profiles as these + * profiles do not allow B slice types, and + * direct_8x8_inference_flag is equal to 1 for all levels of the + * Extended profile. Table A-4. We only have constrained baseline + * here. 
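The constraint_set4/constraint_set5 rules quoted above can be checked in isolation. This sketch uses the raw profile_idc values from H.264 Annex A (66 Baseline, 77 Main, 88 Extended, 100 High, 110 High 10); the helper name is illustrative, not part of this encoder:

```c
#include <assert.h>

/* profile_idc values from ITU-T H.264 Annex A */
enum { P_BASELINE = 66, P_MAIN = 77, P_EXTENDED = 88, P_HIGH = 100,
  P_HIGH10 = 110 };

/* Mirrors the semantics above: constraint_set4_flag signals
 * frame_mbs_only_flag == 1 for profile_idc 77/88/100/110, and
 * constraint_set5_flag signals "no B slices" for 77/88/100. */
static void
constraint_flags (int profile_idc, unsigned num_bframes, int *set4, int *set5)
{
  *set4 = (profile_idc == P_MAIN || profile_idc == P_EXTENDED
      || profile_idc == P_HIGH || profile_idc == P_HIGH10);
  *set5 = (num_bframes == 0 && (profile_idc == P_MAIN
          || profile_idc == P_EXTENDED || profile_idc == P_HIGH));
}
```

For example, High profile without B-frames sets both flags, while High 10 with B-frames sets only constraint_set4.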
*/ + direct_8x8_inference_flag = + priv->stream.profile == GST_H264_PROFILE_BASELINE ? 0 : 1; + + priv->params.sps = (GstH264SPS) { + /* *INDENT-OFF* */ + .id = 0, + + .profile_idc = priv->stream.profile, + .constraint_set0_flag = priv->stream.profile == GST_H264_PROFILE_BASELINE, + .constraint_set1_flag = priv->stream.profile <= GST_H264_PROFILE_MAIN, + /* Extended profile not supported and not widely used */ + .constraint_set2_flag = 0, + .constraint_set3_flag = constraint_set3_flag, + .constraint_set4_flag = constraint_set4_flag, + .constraint_set5_flag = constraint_set5_flag, + /* override by implementation if 0 */ + .level_idc = level_idc, + + .chroma_format_idc = chroma_format_idc, + .separate_colour_plane_flag = 0, + .bit_depth_luma_minus8 = CLAMP (bit_depth_luma - 8, 0, 6), + .bit_depth_chroma_minus8 = CLAMP (bit_depth_chroma - 8, 0, 6), + .qpprime_y_zero_transform_bypass_flag = 0, + + .scaling_matrix_present_flag = 0, + .scaling_lists_4x4 = { { 0, }, }, + .scaling_lists_8x8 = { { 0, }, }, + + .log2_max_frame_num_minus4 = + CLAMP ((gint) (priv->gop.log2_max_frame_num - 4), 0, 12), + .pic_order_cnt_type = 0, + + /* if pic_order_cnt_type == 0 */ + .log2_max_pic_order_cnt_lsb_minus4 = + CLAMP ((gint) (priv->gop.log2_max_poc_lsb - 4), 0, 12), + /* else if pic_order_cnt_type == 1 */ + .delta_pic_order_always_zero_flag = 0, + .offset_for_non_ref_pic = 0, + .offset_for_top_to_bottom_field = 0, + .num_ref_frames_in_pic_order_cnt_cycle = 0, + .offset_for_ref_frame = { 0, }, + + .num_ref_frames = priv->gop.max_num_ref_frames, + .gaps_in_frame_num_value_allowed_flag = 0, + .pic_width_in_mbs_minus1 = mb_width - 1, + .pic_height_in_map_units_minus1 = mb_height - 1, + .frame_mbs_only_flag = 1, + + .mb_adaptive_frame_field_flag = 0, + + /* override if implementation doesn't support it for profile */ + .direct_8x8_inference_flag = direct_8x8_inference_flag, + + .frame_cropping_flag = frame_cropping_flag, + /* if frame_cropping_flag = 1 */ + .frame_crop_left_offset = 0, + 
.frame_crop_right_offset = frame_crop_right_offset, + .frame_crop_top_offset = 0, + .frame_crop_bottom_offset = frame_crop_bottom_offset, + + .vui_parameters_present_flag = 1, + .vui_parameters = { + .aspect_ratio_info_present_flag = aspect_ratio_present_flag, + .aspect_ratio_idc = aspect_ratio_idc, + /* if aspect_ratio_idc == 255 */ + .sar_width = sar_width, + .sar_height = sar_height, + + .overscan_info_present_flag = 0, + /* if overscan_info_present_flag */ + .overscan_appropriate_flag = 0, + + .chroma_loc_info_present_flag = 0, /* chroma location isn't defined in GStreamer */ + .timing_info_present_flag = timing_info_present_flag, + .num_units_in_tick = num_units_in_tick, + .time_scale = time_scale, + .fixed_frame_rate_flag = fixed_frame_rate_flag, + + /* We do not write hrd and no need for buffering period SEI. */ + /* TODO: support timing units */ + .nal_hrd_parameters_present_flag = 0, + .vcl_hrd_parameters_present_flag = 0, + + .low_delay_hrd_flag = 0, + .pic_struct_present_flag = 1, /* Table E-6 */ + .bitstream_restriction_flag = 1, + .motion_vectors_over_pic_boundaries_flag = 1, + .max_bytes_per_pic_denom = 0, /* not present */ + .max_bits_per_mb_denom = 0, /* not present */ + .log2_max_mv_length_horizontal = 15, + .log2_max_mv_length_vertical = 15, + .num_reorder_frames = priv->gop.num_reorder_frames, + .max_dec_frame_buffering = priv->gop.max_dec_frame_buffering, + }, + + /* ... 
*/ + /* *INDENT-ON* */ + }; +} + +static void +gst_h264_encoder_pps_init (GstH264Encoder * self) +{ + GstH264EncoderPrivate *priv = _GET_PRIV (self); + GstH264SPS *sps = &priv->params.sps; + + /* *INDENT-OFF* */ + priv->params.pps = (GstH264PPS) { + .id = 0, + + .sequence = sps, + + /* override by implementation if CABAC isn't supported or disabled */ + .entropy_coding_mode_flag = !(sps->profile_idc == GST_H264_PROFILE_BASELINE + || sps->profile_idc == GST_H264_PROFILE_EXTENDED), + + .pic_order_present_flag = 0, + + .num_slice_groups_minus1 = 0, + /* if num_slice_groups_minus1 > 0 */ + .slice_group_map_type = 0, + /* if slice_group_map_type == 0 */ + .run_length_minus1 = { 0, }, + /* if slice_group_map_type == 2 */ + .top_left = { 0, }, + .bottom_right = { 0, }, + /* if slice_group_map_type == 3, 4, 5 */ + .slice_group_change_direction_flag = 0, + .slice_group_change_rate_minus1 = 0, + /* if slice_group_map_type == 6 */ + .pic_size_in_map_units_minus1 = 0, + .slice_group_id = NULL, + + /* Use these slice header fields to control the number of references. 
*/ + .num_ref_idx_l0_active_minus1 = 0, + .num_ref_idx_l1_active_minus1 = 0, + + .weighted_pred_flag = 0, + .weighted_bipred_idc = 0, + + .pic_init_qp_minus26 = 0, /* XXX: defined by rate control QP I */ + .pic_init_qs_minus26 = 0, + .chroma_qp_index_offset = 0, + .second_chroma_qp_index_offset = 0, + + /* enable deblocking */ + .deblocking_filter_control_present_flag = 1, + .constrained_intra_pred_flag = 0, + .redundant_pic_cnt_present_flag = 0, + + /* override by implementation if supported or enabled */ + .transform_8x8_mode_flag = !(sps->profile_idc == GST_H264_PROFILE_BASELINE + || sps->profile_idc == GST_H264_PROFILE_EXTENDED + || sps->profile_idc == GST_H264_PROFILE_MAIN), + + /* unsupport scaling lists */ + .pic_scaling_matrix_present_flag = 0, + .scaling_lists_4x4 = { { 0, }, }, + .scaling_lists_8x8 = { { 0, }, }, + }; + /* *INDENT-ON* */ +} + +static GstFlowReturn +gst_h264_encoder_configure (GstH264Encoder * self) +{ + GstH264EncoderPrivate *priv = _GET_PRIV (self); + GstH264EncoderClass *klass = GST_H264_ENCODER_GET_CLASS (self); + GstFlowReturn ret; + + if (!priv->input_state) + return GST_FLOW_NOT_NEGOTIATED; + + if (gst_h264_encoder_drain (self) != GST_FLOW_OK) + return GST_FLOW_ERROR; + + GST_LOG_OBJECT (self, "Configuring encoder"); + + gst_h264_encoder_reset (self); + + ret = klass->negotiate (self, priv->input_state, &priv->stream.profile, + &priv->stream.level); + if (ret != GST_FLOW_OK) + return ret; + + if (klass->new_sequence) { + ret = klass->new_sequence (self, priv->input_state, priv->stream.profile, + &priv->stream.level); + if (ret != GST_FLOW_OK) + return ret; + } + + /* now we have the L0/L1 list sizes */ + gst_h264_encoder_generate_gop_structure (self); + + if (priv->stream.level == 0) { + const GstH264LevelDescriptor *desc; + + desc = gst_h264_get_level_descriptor (priv->stream.profile, 0, + &priv->input_state->info, priv->gop.max_dec_frame_buffering); + if (!desc) + return GST_FLOW_ERROR; + + priv->stream.level = desc->level_idc; + 
} + + /* after gop generation */ + gst_h264_encoder_sps_init (self); + gst_h264_encoder_pps_init (self); + + /* this has to be the last operation since it calls + * gst_video_encoder_set_output() */ + g_assert (klass->new_parameters); + ret = klass->new_parameters (self, &priv->params.sps, &priv->params.pps); + + if (ret != GST_FLOW_OK) + return ret; + + /* latency */ + { + GstVideoEncoder *encoder = GST_VIDEO_ENCODER (self); + guint frames_latency = + priv->config.preferred_output_delay + priv->gop.ip_period - 1; + GstClockTime latency = gst_util_uint64_scale (frames_latency, + priv->fps_d * GST_SECOND, priv->fps_n); + gst_video_encoder_set_latency (encoder, latency, latency); + } + + /* dump parameter sets after being overridden by the implementation */ + gst_h264_sps_dump (self, &priv->params.sps); + gst_h264_pps_dump (self, &priv->params.pps); + + return ret; +} + +static inline void +gst_h264_encoder_push_dts (GstH264Encoder * self, GstVideoCodecFrame * frame) +{ + GstH264EncoderPrivate *priv = _GET_PRIV (self); + guint max_reorder_num = priv->gop.num_reorder_frames; + + /* We need to manually insert max_reorder_num slots before the first frame to + ensure the DTS is never bigger than the PTS. 
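The DTS pre-fill described in that comment reduces to simple arithmetic: before the first PTS, emit max_reorder_num back-shifted DTS slots. This sketch uses plain integers in place of GstClockTime; the function name and types are illustrative:

```c
#include <assert.h>

/* Fill out[] with the DTS values produced for the first frame:
 * max_reorder_num slots shifted below the PTS by multiples of
 * dts_diff (the frame duration), followed by the PTS itself.
 * Returns the number of values written. */
static unsigned
prefill_dts (long long pts, long long dts_diff, unsigned max_reorder_num,
    long long *out)
{
  unsigned i, n = 0;

  for (i = max_reorder_num; i > 0; i--)
    out[n++] = pts - dts_diff * i;
  out[n++] = pts;
  return n;
}
```

With a first PTS of 1000, a frame duration of 40, and 2 reorder frames, the queue starts as 920, 960, 1000, so every later DTS popped for a frame is at most its PTS.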
*/ + if (gst_vec_deque_get_length (priv->dts_queue) == 0 && max_reorder_num > 0) { + GstClockTime dts_diff = 0, dts; + + if (GST_CLOCK_TIME_IS_VALID (frame->duration)) + dts_diff = frame->duration; + + if (GST_CLOCK_TIME_IS_VALID (priv->frame_duration)) + dts_diff = MAX (priv->frame_duration, dts_diff); + + while (max_reorder_num > 0) { + if (GST_CLOCK_TIME_IS_VALID (frame->pts)) { + dts = frame->pts - dts_diff * max_reorder_num; + } else { + dts = frame->pts; + } + + gst_vec_deque_push_tail_struct (priv->dts_queue, &dts); + max_reorder_num--; + } + } + + gst_vec_deque_push_tail_struct (priv->dts_queue, &frame->pts); +} + + +static inline GstFlowReturn +gst_h264_encoder_try_to_finish_all_frames (GstH264Encoder * self) +{ + GstFlowReturn ret; + + do { + ret = gst_h264_encoder_finish_last_frame (self); + } while (ret == GST_FLOW_OK); + + if (ret == GST_FLOW_OUTPUT_NOT_READY) + ret = GST_FLOW_OK; + + return ret; +} + +static GstFlowReturn +gst_h264_encoder_handle_frame (GstVideoEncoder * encoder, + GstVideoCodecFrame * frame) +{ + GstH264Encoder *self = GST_H264_ENCODER (encoder); + GstH264EncoderPrivate *priv = _GET_PRIV (self); + GstH264EncoderClass *klass = GST_H264_ENCODER_GET_CLASS (self); + GstFlowReturn ret = GST_FLOW_ERROR; + GstH264EncoderFrame *h264_frame; + GstVideoCodecFrame *frame_encode = NULL; + + GST_LOG_OBJECT (encoder, "handle frame id %d, dts %" GST_TIME_FORMAT + ", pts %" GST_TIME_FORMAT, frame->system_frame_number, + GST_TIME_ARGS (GST_BUFFER_DTS (frame->input_buffer)), + GST_TIME_ARGS (GST_BUFFER_PTS (frame->input_buffer))); + + if (g_atomic_int_compare_and_exchange (&priv->need_configure, TRUE, FALSE)) { + if (gst_h264_encoder_configure (self) != GST_FLOW_OK) { + gst_video_encoder_finish_frame (encoder, frame); + return GST_FLOW_ERROR; + } + } + + h264_frame = gst_h264_encoder_frame_new (); + gst_video_codec_frame_set_user_data (frame, h264_frame, + gst_h264_encoder_frame_unref); + gst_h264_encoder_push_dts (self, frame); + + if 
(klass->new_output) { + ret = klass->new_output (self, frame, h264_frame); + if (ret != GST_FLOW_OK) + goto error_new_frame; + } + + if (!gst_h264_encoder_reorder_frame (self, frame, FALSE, &frame_encode)) + goto error_reorder; + + /* pass it to reorder list and we should not use it again. */ + frame = NULL; + + if (frame_encode) { + while (frame_encode) { + ret = gst_h264_encoder_encode_frame (self, frame_encode, FALSE); + if (ret != GST_FLOW_OK) + goto error_encode; + + while (ret == GST_FLOW_OK && g_queue_get_length (&priv->output_list) > + priv->config.preferred_output_delay) + ret = gst_h264_encoder_finish_last_frame (self); + + if (ret != GST_FLOW_OK) + goto error_push_buffer; + + /* Try to push out all ready frames. */ + ret = gst_h264_encoder_try_to_finish_all_frames (self); + if (ret != GST_FLOW_OK) + goto error_push_buffer; + + frame_encode = NULL; + if (!gst_h264_encoder_reorder_frame (self, NULL, FALSE, &frame_encode)) + goto error_reorder; + } + } else { + /* Try to push out all ready frames. 
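The loop above holds encoded frames until more than preferred_output_delay of them are pending, then finishes the oldest first. A toy model of that policy, with an array standing in for the GQueue and ints for frames (all names here are illustrative):

```c
#include <assert.h>

#define QUEUE_MAX 64

static int queue[QUEUE_MAX];        /* pending encoded frames */
static unsigned queue_len;
static int finished[QUEUE_MAX];     /* frames pushed downstream, in order */
static unsigned finished_len;

/* Append a newly encoded frame, then finish the oldest frames while
 * more than `delay` of them are pending (the output-delay loop above). */
static void
push_encoded (int frame_id, unsigned delay)
{
  unsigned i;

  queue[queue_len++] = frame_id;
  while (queue_len > delay) {
    finished[finished_len++] = queue[0];
    for (i = 1; i < queue_len; i++)
      queue[i - 1] = queue[i];
    queue_len--;
  }
}
```

With a delay of 2, pushing frames 1..4 finishes frames 1 and 2 while 3 and 4 stay queued, matching how an accelerator can keep a small batch in flight.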
*/ + ret = gst_h264_encoder_try_to_finish_all_frames (self); + if (ret != GST_FLOW_OK) + goto error_push_buffer; + } + + return ret; + +error_new_frame: + { + GST_ELEMENT_ERROR (encoder, STREAM, ENCODE, + ("Failed to create the input frame."), (NULL)); + gst_clear_buffer (&frame->output_buffer); + gst_video_encoder_finish_frame (encoder, frame); + return GST_FLOW_ERROR; + } +error_reorder: + { + GST_ELEMENT_ERROR (encoder, STREAM, ENCODE, + ("Failed to reorder the input frame."), (NULL)); + if (frame) { + gst_clear_buffer (&frame->output_buffer); + gst_video_encoder_finish_frame (encoder, frame); + } + return GST_FLOW_ERROR; + } +error_encode: + { + GST_ELEMENT_ERROR (encoder, STREAM, ENCODE, + ("Failed to encode the frame %s.", gst_flow_get_name (ret)), (NULL)); + gst_clear_buffer (&frame_encode->output_buffer); + gst_video_encoder_finish_frame (encoder, frame_encode); + return ret; + } +error_push_buffer: + { + GST_ELEMENT_ERROR (encoder, STREAM, ENCODE, + ("Failed to finish frame."), (NULL)); + return ret; + } +} + +static gboolean +gst_h264_encoder_flush (GstVideoEncoder * encoder) +{ + GstH264Encoder *self = GST_H264_ENCODER (encoder); + GstH264EncoderPrivate *priv = _GET_PRIV (self); + + gst_h264_encoder_flush_lists (self); + gst_vec_deque_clear (priv->dts_queue); + + /* begin from an IDR after flush. */ + priv->gop.cur_frame_index = 0; + priv->gop.cur_frame_num = 0; + priv->gop.last_keyframe = NULL; + /* XXX: enough? 
*/ + + return TRUE; +} + +static GstFlowReturn +gst_h264_encoder_finish (GstVideoEncoder * encoder) +{ + return gst_h264_encoder_drain (GST_H264_ENCODER (encoder)); +} + +static void +gst_h264_encoder_init (GstH264Encoder * self) +{ + GstH264EncoderPrivate *priv = _GET_PRIV (self); + + g_queue_init (&priv->output_list); + g_queue_init (&priv->ref_list); + g_queue_init (&priv->reorder_list); + + priv->dts_queue = gst_vec_deque_new_for_struct (sizeof (GstClockTime), 8); + + priv->config.max_num_reference_list0 = 1; + priv->config.max_num_reference_list1 = 0; + priv->config.preferred_output_delay = 0; + + priv->ref_list0 = g_array_sized_new (FALSE, TRUE, + sizeof (GstH264EncoderFrame *), 16); + priv->ref_list1 = g_array_sized_new (FALSE, TRUE, + sizeof (GstH264EncoderFrame *), 16); + + /* default values */ + priv->prop.idr_period = H264ENC_IDR_PERIOD_DEFAULT; + priv->prop.num_bframes = H264ENC_B_FRAMES_DEFAULT; + priv->prop.num_iframes = H264ENC_I_FRAMES_DEFAULT; + priv->prop.num_ref_frames = H264ENC_NUM_REF_FRAMES_DEFAULT; + priv->prop.b_pyramid = H264ENC_B_PYRAMID_DEFAULT; +} + +static void +gst_h264_encoder_dispose (GObject * object) +{ + gst_h264_encoder_flush_lists (GST_H264_ENCODER (object)); + + G_OBJECT_CLASS (parent_class)->dispose (object); +} + +static void +gst_h264_encoder_get_property (GObject * object, guint property_id, + GValue * value, GParamSpec * pspec) +{ + GstH264Encoder *self = GST_H264_ENCODER (object); + GstH264EncoderPrivate *priv = _GET_PRIV (self); + + GST_OBJECT_LOCK (self); + switch (property_id) { + case PROP_IDR_PERIOD: + g_value_set_uint (value, priv->prop.idr_period); + break; + case PROP_BFRAMES: + g_value_set_uint (value, priv->prop.num_bframes); + break; + case PROP_IFRAMES: + g_value_set_uint (value, priv->prop.num_iframes); + break; + case PROP_NUM_REF_FRAMES: + g_value_set_int (value, priv->prop.num_ref_frames); + break; + case PROP_B_PYRAMID: + g_value_set_boolean (value, priv->prop.b_pyramid); + break; + default: + 
G_OBJECT_WARN_INVALID_PROPERTY_ID (object, property_id, pspec); + break; + } + GST_OBJECT_UNLOCK (self); +} + +static void +gst_h264_encoder_set_property (GObject * object, guint property_id, + const GValue * value, GParamSpec * pspec) +{ + GstH264Encoder *self = GST_H264_ENCODER (object); + GstH264EncoderPrivate *priv = _GET_PRIV (self); + + GST_OBJECT_LOCK (self); + switch (property_id) { + case PROP_IDR_PERIOD: + priv->prop.idr_period = g_value_get_uint (value); + g_atomic_int_set (&priv->need_configure, TRUE); + break; + case PROP_BFRAMES: + priv->prop.num_bframes = g_value_get_uint (value); + g_atomic_int_set (&priv->need_configure, TRUE); + break; + case PROP_IFRAMES: + priv->prop.num_iframes = g_value_get_uint (value); + g_atomic_int_set (&priv->need_configure, TRUE); + break; + case PROP_NUM_REF_FRAMES: + priv->prop.num_ref_frames = g_value_get_int (value); + g_atomic_int_set (&priv->need_configure, TRUE); + break; + case PROP_B_PYRAMID: + priv->prop.b_pyramid = g_value_get_boolean (value); + g_atomic_int_set (&priv->need_configure, TRUE); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, property_id, pspec); + break; + } + GST_OBJECT_UNLOCK (self); +} + +static void +gst_h264_encoder_class_init (GstH264EncoderClass * klass) +{ + GstVideoEncoderClass *encoder_class = GST_VIDEO_ENCODER_CLASS (klass); + GObjectClass *object_class = G_OBJECT_CLASS (klass); + GParamFlags param_flags = + G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS | G_PARAM_CONSTRUCT; + + object_class->get_property = gst_h264_encoder_get_property; + object_class->set_property = gst_h264_encoder_set_property; + object_class->dispose = gst_h264_encoder_dispose; + + encoder_class->start = GST_DEBUG_FUNCPTR (gst_h264_encoder_start); + encoder_class->stop = GST_DEBUG_FUNCPTR (gst_h264_encoder_stop); + encoder_class->set_format = GST_DEBUG_FUNCPTR (gst_h264_encoder_set_format); + encoder_class->handle_frame = + GST_DEBUG_FUNCPTR (gst_h264_encoder_handle_frame); + encoder_class->flush = 
GST_DEBUG_FUNCPTR (gst_h264_encoder_flush); + encoder_class->finish = GST_DEBUG_FUNCPTR (gst_h264_encoder_finish); + + klass->negotiate = GST_DEBUG_FUNCPTR (gst_h264_encoder_negotiate_default); + + /** + * GstH264Encoder:idr-period: + * + * Maximum number of frames between two IDR frames. A higher value will result + * in a larger IDR frame interval and thus slow down seeking; a lower value will + * result in a shorter IDR frame interval and thus improve seeking. As a rule + * of thumb, the IDR period shouldn't be lower than the framerate of the video + * multiplied by a factor in the range 1..10 + * + * Set to 0 to auto-calculate it. + * + * Since: 1.28 + */ + properties[PROP_IDR_PERIOD] = g_param_spec_uint ("idr-period", + "Maximum GOP size", "Maximum number of frames between two IDR frames", + 0, MIN (G_MAXINT, 1 << 30), H264ENC_IDR_PERIOD_DEFAULT, param_flags); + + /** + * GstH264Encoder:b-frames: + * + * Maximum number of consecutive B-Frames. B-Frames refer to both the + * previous and the following I-Frame (or P-Frame). This way B-Frames can + * compress even more efficiently than P-Frames. + * + * The availability of B-frames depends on the driver. + * + * Since: 1.28 + */ + properties[PROP_BFRAMES] = g_param_spec_uint ("b-frames", "B Frames", + "Maximum number of consecutive B frames between I and P reference frames", + 0, 31, H264ENC_B_FRAMES_DEFAULT, param_flags); + + /** + * GstH264Encoder:i-frames: + * + * Force the number of I-Frames inserted within one GOP. More I-Frames will + * increase the size of the video, but make it more resilient to data + * loss. + * + * Since: 1.28 + */ + properties[PROP_IFRAMES] = g_param_spec_uint ("i-frames", "I Frames", + "Force the number of I frames inserted within one GOP, not including the " + "first IDR frame", 0, G_MAXINT, H264ENC_I_FRAMES_DEFAULT, param_flags); + + /** + * GstH264Encoder:num-ref-frames: + * + * The number of frames that can be referenced by P-Frames and B-Frames. 
Higher + * values will usually result in more efficient compression, which means + * better visual quality at the same file size, but it may increase encoding + * time. + * + * Since: 1.28 + */ + properties[PROP_NUM_REF_FRAMES] = g_param_spec_int ("num-ref-frames", + "Number of reference frames", "Number of frames referenced by P and B " + "frames", 0, 16, H264ENC_NUM_REF_FRAMES_DEFAULT, param_flags); + + /** + * GstH264Encoder:b-pyramid: + * + * Enable the b-pyramid reference structure in the GOP. It allows references + * to be made non-linearly in order to improve bitrate usage and quality. This + * way B-Frames can refer to B-Frames. + * + * It only works with the "high" profile. + * + * Since: 1.28 + */ + properties[PROP_B_PYRAMID] = g_param_spec_boolean ("b-pyramid", "b pyramid", + "Enable the b-pyramid reference structure in the GOP", + H264ENC_B_PYRAMID_DEFAULT, param_flags); + + g_object_class_install_properties (object_class, N_PROPERTIES, properties); + + gst_type_mark_as_plugin_api (GST_TYPE_H264_ENCODER, 0); +} + +/** + * gst_h264_encoder_set_max_num_references: + * @self: A #GstH264Encoder + * @list0: the maximum number of reference pictures for list L0 + * @list1: the maximum number of reference pictures for list L1 + * + * Set the maximum number of reference pictures allowed by the accelerator.
+ */ +void +gst_h264_encoder_set_max_num_references (GstH264Encoder * self, guint list0, + guint list1) +{ + GstH264EncoderPrivate *priv; + + g_return_if_fail (GST_IS_H264_ENCODER (self)); + + priv = _GET_PRIV (self); + + priv->config.max_num_reference_list0 = list0; + priv->config.max_num_reference_list1 = list1; +} + +/** + * gst_h264_encoder_is_live: + * @self: a #GstH264Encoder + * + * Returns: whether the current stream is live + */ +gboolean +gst_h264_encoder_is_live (GstH264Encoder * self) +{ + GstH264EncoderPrivate *priv; + + g_return_val_if_fail (GST_IS_H264_ENCODER (self), FALSE); + + priv = _GET_PRIV (self); + return priv->is_live; +} + +/** + * gst_h264_encoder_set_preferred_output_delay: + * @self: a #GstH264Encoder + * @delay: the number of frames to hold and process + * + * Some accelerators, such as Intel VA-API, have better performance if they + * hold a group of frames to process. + */ +void +gst_h264_encoder_set_preferred_output_delay (GstH264Encoder * self, guint delay) +{ + GstH264EncoderPrivate *priv; + + g_return_if_fail (GST_IS_H264_ENCODER (self)); + + priv = _GET_PRIV (self); + priv->config.preferred_output_delay = delay; +} + +/** + * gst_h264_encoder_reconfigure: + * @self: a #GstH264Encoder + * @force: whether the configuration runs now or at the next input frame + * + * Through this method the subclass can request encoder reconfiguration + * and downstream renegotiation.
+ */ +gboolean +gst_h264_encoder_reconfigure (GstH264Encoder * self, gboolean force) +{ + GstH264EncoderPrivate *priv; + + g_return_val_if_fail (GST_IS_H264_ENCODER (self), FALSE); + + priv = _GET_PRIV (self); + + if (!force) { + g_atomic_int_set (&priv->need_configure, TRUE); + return TRUE; + } else { + if (g_atomic_int_compare_and_exchange (&priv->need_configure, TRUE, FALSE)) { + return (gst_h264_encoder_configure (self) == GST_FLOW_OK); + } + return TRUE; + } +} + +/** + * gst_h264_encoder_get_idr_period: + * @self: a #GstH264Encoder + * + * Returns the IDR period property without the marshalling burden of GObject + * properties. + * + * Returns: the IDR period + */ +guint32 +gst_h264_encoder_get_idr_period (GstH264Encoder * self) +{ + GstH264EncoderPrivate *priv; + guint32 ret; + + g_return_val_if_fail (GST_IS_H264_ENCODER (self), -1); + + priv = _GET_PRIV (self); + + GST_OBJECT_LOCK (self); + ret = priv->prop.idr_period; + GST_OBJECT_UNLOCK (self); + + return ret; +} + +/** + * gst_h264_encoder_get_num_b_frames: + * @self: a #GstH264Encoder + * + * Returns the number of consecutive B-Frames without the marshalling burden of + * GObject properties. + * + * Returns: the number of consecutive B-Frames + */ +guint32 +gst_h264_encoder_get_num_b_frames (GstH264Encoder * self) +{ + GstH264EncoderPrivate *priv; + guint32 ret; + + g_return_val_if_fail (GST_IS_H264_ENCODER (self), -1); + + priv = _GET_PRIV (self); + + GST_OBJECT_LOCK (self); + ret = priv->prop.num_bframes; + GST_OBJECT_UNLOCK (self); + + return ret; +} + +/** + * gst_h264_encoder_gop_is_b_pyramid: + * @self: a #GstH264Encoder + * + * Returns whether the GOP has a b-pyramid structure. 
+ * + * Returns: %TRUE if GOP has a b-pyramid structure + */ +gboolean +gst_h264_encoder_gop_is_b_pyramid (GstH264Encoder * self) +{ + GstH264EncoderPrivate *priv; + gboolean ret; + + g_return_val_if_fail (GST_IS_H264_ENCODER (self), FALSE); + + priv = _GET_PRIV (self); + + GST_OBJECT_LOCK (self); + ret = priv->prop.b_pyramid; + GST_OBJECT_UNLOCK (self); + + return ret; +} + +/** + * gst_h264_get_cpb_nal_factor: + * @profile: a #GstH264Profile + * + * The values come from Table A-2 + H.10.2.1 + * + * Returns: the bitrate NAL factor of the coded picture buffer. + * + * Since: 1.28 + */ +guint +gst_h264_get_cpb_nal_factor (GstH264Profile profile) +{ + for (int i = 0; i < G_N_ELEMENTS (_h264_nal_factors); i++) { + if (_h264_nal_factors[i].profile == profile) + return _h264_nal_factors[i].cpb_br_nal_factor; + } + + /* default to non-high profile */ + return 1200; +} + +/** + * gst_h264_get_level_descriptor: + * @profile: a #GstH264Profile + * @bitrate: bit rate in bits per second + * @in_info: raw stream's #GstVideoInfo + * @max_dec_frame_buffering: the max size of DPB + * + * Returns: the #GstH264LevelDescriptor associated with @profile, + * @bitrate, frame size and framerate in @in_info, and + * @max_dec_frame_buffering. If no descriptor is found, it returns %NULL.
+ * + * Since: 1.28 + */ +const GstH264LevelDescriptor * +gst_h264_get_level_descriptor (GstH264Profile profile, guint64 bitrate, + GstVideoInfo * in_info, int max_dec_frame_buffering) +{ + guint mbWidth, mbHeight, cpb_factor; + guint32 i, picSizeMbs, maxMBPS; + + g_return_val_if_fail (in_info, NULL); + + cpb_factor = gst_h264_get_cpb_nal_factor (profile); + mbWidth = GST_ROUND_UP_16 (GST_VIDEO_INFO_WIDTH (in_info)) / 16; + mbHeight = GST_ROUND_UP_16 (GST_VIDEO_INFO_HEIGHT (in_info)) / 16; + + picSizeMbs = mbWidth * mbHeight; + if (GST_VIDEO_INFO_FPS_N (in_info) > 0 && GST_VIDEO_INFO_FPS_D (in_info) > 0) { + maxMBPS = gst_util_uint64_scale_int_ceil (picSizeMbs, + GST_VIDEO_INFO_FPS_N (in_info), GST_VIDEO_INFO_FPS_D (in_info)); + } else { + maxMBPS = 16; + } + + for (i = 0; i < G_N_ELEMENTS (_h264_levels); i++) { + const GstH264LevelDescriptor *level = &_h264_levels[i]; + + if (bitrate > (guint64) level->max_br * cpb_factor) + continue; + if (picSizeMbs > level->max_fs) + continue; + if (picSizeMbs > 0) { + gint max_dpb_frames = MIN (level->max_dpb_mbs / picSizeMbs, 16); + if (max_dec_frame_buffering > max_dpb_frames) + continue; + + if (maxMBPS > level->max_mbps) + continue; + } + + return level; + } + + GST_ERROR ("Failed to find a suitable level: " + "frame is too big or bitrate too high"); + return NULL; +} + +/* Maximum sizes for common headers (in bits) */ +#define MAX_SPS_HDR_SIZE 16473 +#define MAX_VUI_PARAMS_SIZE 210 +#define MAX_HRD_PARAMS_SIZE 4103 +#define MAX_PPS_HDR_SIZE 101 +#define MAX_SLICE_HDR_SIZE 397 + 2572 + 6670 + 2402 + +/** + * gst_h264_calculate_coded_size: + * @sps: the #GstH264SPS + * @num_slices: number of slices to encode per frame + * + * Returns the calculated size of the encoded buffer.
+ * + * Since: 1.28 + */ +gsize +gst_h264_calculate_coded_size (GstH264SPS * sps, guint num_slices) +{ + gsize codedbuf_size = 0; + GstH264Profile profile; + guint mb_width, mb_height, chroma_subsampling; + + g_return_val_if_fail (sps && num_slices >= 1, 0); + + profile = sps->profile_idc; + chroma_subsampling = sps->chroma_format_idc; + mb_width = sps->pic_width_in_mbs_minus1 + 1; + mb_height = sps->pic_height_in_map_units_minus1 + 1; + + if (profile >= GST_H264_PROFILE_HIGH + && profile <= GST_H264_PROFILE_STEREO_HIGH) { + /* The number of bits of macroblock_layer( ) data for any macroblock + is not greater than 128 + RawMbBits */ + guint RawMbBits, MbWidthC, MbHeightC; + guint8 bit_depth_luma, bit_depth_chroma; + + bit_depth_luma = sps->bit_depth_luma_minus8 + 8; + bit_depth_chroma = sps->bit_depth_chroma_minus8 + 8; + + switch (chroma_subsampling) { + case GST_CHROMA_420: + MbWidthC = 8; + MbHeightC = 8; + break; + case GST_CHROMA_422: + MbWidthC = 8; + MbHeightC = 16; + break; + case GST_CHROMA_444: + MbWidthC = 16; + MbHeightC = 16; + break; + default: + g_assert_not_reached (); + break; + } + + /* The variable RawMbBits is derived as + * RawMbBits = 256 * BitDepthY + 2 * MbWidthC * MbHeightC * BitDepthC */ + RawMbBits = + 256 * bit_depth_luma + 2 * MbWidthC * MbHeightC * bit_depth_chroma; + codedbuf_size = (mb_width * mb_height) * (128 + RawMbBits) / 8; + } else { + /* The number of bits of macroblock_layer( ) data for any macroblock + * is not greater than 3200 */ + codedbuf_size = (mb_width * mb_height) * (3200 / 8); + } + + /* Account for SPS header */ + /* XXX: exclude scaling lists, MVC/SVC extensions */ + codedbuf_size += 4 /* start code */ + GST_ROUND_UP_8 (MAX_SPS_HDR_SIZE + + MAX_VUI_PARAMS_SIZE + 2 * MAX_HRD_PARAMS_SIZE) / 8; + + /* Account for PPS header */ + /* XXX: exclude slice groups, scaling lists, MVC/SVC extensions */ + codedbuf_size += 4 + GST_ROUND_UP_8 (MAX_PPS_HDR_SIZE) / 8; + + /* Account for slice header */ + codedbuf_size += 
num_slices * (4 + GST_ROUND_UP_8 (MAX_SLICE_HDR_SIZE) / 8); + + /* Add ceil 5% for safety */ + codedbuf_size = ((guint) (((gfloat) codedbuf_size * 1.05) + 1)) >> 0; + + return codedbuf_size; +}
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/vulkan/base/gsth264encoder.h
Added
@@ -0,0 +1,326 @@ +/* GStreamer + * Copyright (C) 2021 Intel Corporation + * Author: He Junyan <junyan.he@intel.com> + * Copyright (C) 2023 Michael Grzeschik <m.grzeschik@pengutronix.de> + * Copyright (C) 2021, 2025 Igalia, S.L. + * Author: Stéphane Cerveau <scerveau@igalia.com> + * Author: Víctor Jáquez <vjaquez@igalia.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. 
+ */ + +#pragma once + +#include <gst/codecparsers/gsth264parser.h> +#include <gst/video/gstvideoencoder.h> + +G_BEGIN_DECLS + +typedef struct _GstH264Encoder GstH264Encoder; +typedef struct _GstH264EncoderClass GstH264EncoderClass; +typedef struct _GstH264EncoderFrame GstH264EncoderFrame; +typedef struct _GstH264GOPFrame GstH264GOPFrame; + +typedef struct _GstH264LevelDescriptor GstH264LevelDescriptor; + +/** + * GstH264LevelDescriptor: + * @name: level identifier string + * @level_idc: the #GstH264Level + * @max_mbps: maximum macroblock processing rate (mb/s) + * @max_fs: maximum frame size (mb) + * @max_dpb_mbs: maximum decoded picture buffer size (mb) + * @max_br: maximum bitrate (bits/s) + * @max_cpb: maximum CPB size + * @min_cr: minimum compression ratio + * + * Since: 1.28 + */ +struct _GstH264LevelDescriptor +{ + const gchar *name; + GstH264Level level_idc; + guint32 max_mbps; + guint32 max_fs; + guint32 max_dpb_mbs; + guint32 max_br; + guint32 max_cpb; + guint32 min_cr; +}; + +#define GST_TYPE_H264_ENCODER (gst_h264_encoder_get_type()) +#define GST_H264_ENCODER(obj) (G_TYPE_CHECK_INSTANCE_CAST((obj), GST_TYPE_H264_ENCODER, GstH264Encoder)) +#define GST_H264_ENCODER_CLASS(klass) (G_TYPE_CHECK_CLASS_CAST((klass), GST_TYPE_H264_ENCODER, GstH264EncoderClass)) +#define GST_H264_ENCODER_GET_CLASS(obj) (G_TYPE_INSTANCE_GET_CLASS((obj), GST_TYPE_H264_ENCODER, GstH264EncoderClass)) +#define GST_IS_H264_ENCODER(obj) (G_TYPE_CHECK_INSTANCE_TYPE((obj), GST_TYPE_H264_ENCODER)) +#define GST_IS_H264_ENCODER_CLASS(klass) (G_TYPE_CHECK_CLASS_TYPE((klass), GST_TYPE_H264_ENCODER)) + +_GLIB_DEFINE_AUTOPTR_CHAINUP (GstH264Encoder, GstVideoEncoder) +G_DEFINE_AUTOPTR_CLEANUP_FUNC (GstH264EncoderClass, g_type_class_unref) + +struct _GstH264Encoder +{ + GstVideoEncoder parent_instance; +}; + +/** + * GstH264EncoderClass: + * + * The opaque #GstH264EncoderClass data structure.
+ * + * Since: 1.28 + */ +struct _GstH264EncoderClass +{ + GstVideoEncoderClass parent_class; + + /** + * GstH264Encoder::negotiate: + * @encoder: a #GstH264Encoder + * @in_state: (transfer none): the input #GstVideoCodecState + * @profile: (out): the negotiated profile + * @level: (out): the negotiated level + * + * Optional. Allows the subclass to negotiate the @profile and @level + * downstream. The default implementation will choose the most advanced profile + * allowed. If the callee sets @level to zero, it will be guessed later. + * + * Since: 1.28 + */ + GstFlowReturn (*negotiate) (GstH264Encoder * encoder, + GstVideoCodecState * in_state, + GstH264Profile * profile, + GstH264Level * level); + + /** + * GstH264Encoder::new_sequence: + * @encoder: a #GstH264Encoder + * @in_state: (transfer none): the input #GstVideoCodecState + * @profile: the negotiated profile + * @level: (out): the negotiated level + * + * Optional. Allows the subclass to open a session with the hardware + * accelerator given the stream properties, such as video info (from + * @in_state), @profile and @level, and to verify the accelerator limitations. + * If the callee sets @level to zero, it will be guessed later. + * + * Since: 1.28 + */ + GstFlowReturn (*new_sequence) (GstH264Encoder * encoder, + GstVideoCodecState * in_state, + GstH264Profile profile, + GstH264Level * level); + + /** + * GstH264Encoder::new_parameters: + * @encoder: a #GstH264Encoder + * @input_state: (transfer none): the input #GstVideoCodecState + * @sps: (transfer none): a #GstH264SPS + * @pps: (transfer none): a #GstH264PPS + * + * Called when configuration changes and H.264 parameters change. The subclass + * can modify them, carefully, according to the accelerator limitations, and + * transfer them to their own structures. In particular the subclass has to + * define the profile and its related @sps parameters.
The method is expected + * to call gst_video_encoder_set_output_state(), if needed, to (re)negotiate + * downstream. + * + * Since: 1.28 + */ + GstFlowReturn (*new_parameters) (GstH264Encoder * encoder, + GstH264SPS * sps, + GstH264PPS * pps); + + /** + * GstH264EncoderClass::new_output: + * @encoder: a #GstH264Encoder + * @frame: (transfer none): a #GstVideoCodecFrame + * @h264_frame: (transfer none): a #GstH264EncoderFrame + * + * Optional. Called whenever a new #GstH264EncoderFrame is created. Subclass + * can set implementation specific user data on #GstH264EncoderFrame via + * gst_h264_encoder_frame_set_user_data() + * + * Since: 1.28 + */ + GstFlowReturn (*new_output) (GstH264Encoder * encoder, + GstVideoCodecFrame * frame, + GstH264EncoderFrame * h264_frame); + + /** + * GstH264EncoderClass::encode_frame: + * @encoder: a #GstH264Encoder + * @frame: (transfer none): a #GstVideoCodecFrame + * @h264_frame: (transfer none): a #GstH264EncoderFrame + * @slice_hdr: (transfer none): a #GstH264SliceHdr + * @list0: (transfer none) (element-type GstH264EncoderFrame): a list of + * reference #GstH264EncoderFrame pointers + * @list1: (transfer none) (element-type GstH264EncoderFrame): a list of + * reference #GstH264EncoderFrame pointers + * + * Provide the frame to be encoded with the reference lists. If the + * accelerator hasn't completed the encoding, the callee can return + * @GST_FLOW_OUTPUT_NOT_READY + * + * Since: 1.28 + */ + GstFlowReturn (*encode_frame) (GstH264Encoder * encoder, + GstVideoCodecFrame * frame, + GstH264EncoderFrame * h264_frame, + GstH264SliceHdr * slice_hdr, + GArray * list0, + GArray * list1); + + /** + * GstH264EncoderClass::prepare_output: + * @encoder: a #GstH264Encoder + * @frame: (transfer none): a #GstVideoCodecFrame + * + * Optional. It's called before pushing @frame downstream. It's intended to + * add metadata, and prepend other units, to @frame and its user data.
+ * + * Since: 1.28 + */ + GstFlowReturn (*prepare_output) (GstH264Encoder * encoder, + GstVideoCodecFrame * frame); + + /** + * GstH264EncoderClass::reset: + * @encoder: a #GstH264Encoder + * + * Optional. It's called when resetting the global state of the encoder. + * Allows the subclass to re-initialize its internal variables. + * + * Since: 1.28 + */ + void (*reset) (GstH264Encoder * encoder); + + /*< private > */ + gpointer padding[GST_PADDING_LARGE]; +}; + +GType gst_h264_encoder_get_type (void); + +void gst_h264_encoder_set_max_num_references (GstH264Encoder * self, + guint list0, + guint list1); + +void gst_h264_encoder_set_preferred_output_delay + (GstH264Encoder * self, + guint delay); + +gboolean gst_h264_encoder_is_live (GstH264Encoder * self); + +gboolean gst_h264_encoder_reconfigure (GstH264Encoder * self, + gboolean force); + +guint32 gst_h264_encoder_get_idr_period (GstH264Encoder * self); + +guint32 gst_h264_encoder_get_num_b_frames (GstH264Encoder * self); + +gboolean gst_h264_encoder_gop_is_b_pyramid (GstH264Encoder * self); + +const GstH264LevelDescriptor *gst_h264_get_level_descriptor (GstH264Profile profile, + guint64 bitrate, + GstVideoInfo * in_info, + int max_dec_frame_buffering); + +guint gst_h264_get_cpb_nal_factor (GstH264Profile profile); + +gsize gst_h264_calculate_coded_size (GstH264SPS * sps, + guint num_slices); + +/* H264 encoder frame */ + +#define GST_TYPE_H264_ENCODER_FRAME (gst_h264_encoder_frame_get_type ()) +#define GST_IS_H264_ENCODER_FRAME(obj) (GST_IS_MINI_OBJECT_TYPE (obj, GST_TYPE_H264_ENCODER_FRAME)) +#define GST_H264_ENCODER_FRAME(obj) ((GstH264EncoderFrame *)obj) + +/** + * GstH264GOPFrame: + * + * Description of an H.264 frame in the Group Of Pictures (GOP).
+ * + * Since: 1.28 + */ +struct _GstH264GOPFrame +{ + /*< private >*/ + GstH264SliceType slice_type; + gboolean is_ref; + guint8 pyramid_level; + + /* Only for b pyramid */ + gint left_ref_poc_diff; + gint right_ref_poc_diff; +}; + +/** + * GstH264EncoderFrame: + * + * Represents a frame that is going to be encoded with H.264 + * + * Since: 1.28 + */ +struct _GstH264EncoderFrame +{ + GstMiniObject parent; + + /*< private >*/ + GstH264GOPFrame type; + + /* Number of ref frames within current GOP. H264's frame number. */ + guint16 gop_frame_num; + gboolean last_frame; + gint poc; + guint32 idr_pic_id; + gboolean force_idr; + + /* The pic_num will be marked as unused_for_reference, which is replaced by + * this frame. -1 if we do not need to care about it explicitly. */ + gint32 unused_for_reference_pic_num; + + gpointer user_data; + GDestroyNotify user_data_destroy_notify; +}; + +GType gst_h264_encoder_frame_get_type (void); + +GstH264EncoderFrame *gst_h264_encoder_frame_new (void); + +void gst_h264_encoder_frame_set_user_data (GstH264EncoderFrame * frame, + gpointer user_data, + GDestroyNotify notify); + +static inline gpointer +gst_h264_encoder_frame_get_user_data (GstH264EncoderFrame * frame) +{ + return frame->user_data; +} + +static inline GstH264EncoderFrame * +gst_h264_encode_frame_ref (GstH264EncoderFrame * frame) +{ + return (GstH264EncoderFrame *) gst_mini_object_ref (GST_MINI_OBJECT_CAST (frame)); +} + +static inline void +gst_h264_encoder_frame_unref (void * frame) +{ + gst_mini_object_unref (GST_MINI_OBJECT_CAST (frame)); +} + +G_END_DECLS
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/vulkan/gstvkutils.c
Added
@@ -0,0 +1,121 @@ +/* GStreamer + * + * GStreamer Vulkan plugins utilities + * Copyright (C) 2025 Igalia, S.L. + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gstvkutils.h" + +/** + * gst_vulkan_buffer_peek_plane_memory: + * @buffer: a #GstBuffer + * @vinfo: a #GstVideoInfo + * @plane: the plane number + * @cat: the #GstDebugCategory to log into + * + * Returns: (transfer none): the #GstMemory that belongs to @plane + */ +GstMemory * +_gst_vulkan_buffer_peek_plane_memory (GstBuffer * buffer, + const GstVideoInfo * vinfo, gint plane, GstDebugCategory * cat) +{ + guint idx, len; + gsize offset, skip; + GstVideoMeta *vmeta; + + g_return_val_if_fail (GST_IS_BUFFER (buffer), NULL); + g_return_val_if_fail (vinfo, NULL); + g_return_val_if_fail (plane >= 0 && plane < GST_VIDEO_MAX_PLANES, NULL); + g_return_val_if_fail (cat, NULL); + + vmeta = gst_buffer_get_video_meta (buffer); + if (vmeta) + offset = vmeta->offset[plane]; + else + offset = GST_VIDEO_INFO_PLANE_OFFSET (vinfo, plane); + + if (!gst_buffer_find_memory (buffer, offset, 1, &idx, &len, &skip)) { + GST_CAT_WARNING (cat, + "Buffer's plane %u has no memory at offset %" G_GSIZE_FORMAT, plane, + offset); + return NULL; + } + 
+ return gst_buffer_peek_memory (buffer, idx); +} + +/** + * gst_vulkan_buffer_get_plane_dimensions: + * @buffer: a #GstBuffer + * @info: a #GstVideoInfo + * @plane: the plane to get its dimensions + * @width: (out) (not nullable): width in texels of @plane + * @height: (out) (not nullable): height in texels of @plane + * @row_length: (out) (not nullable): stride in texels of @plane + * @img_height: (out) (not nullable): height plus paddings of @plane + * + * This function returns the values required for VkBufferImageCopy. In that + * structure, bufferRowLength and bufferImageHeight are the stride and height of + * the image in texels, so this function calculates the number of texels + * (pixels) given the stride (in bytes) and the pixel stride (in bytes too) of + * the component. For that, we have to find the component that maps to the + * specified @plane. + */ +void +gst_vulkan_buffer_get_plane_dimensions (GstBuffer * buffer, + const GstVideoInfo * info, gint plane, guint32 * width, guint32 * height, + guint32 * row_length, guint32 * img_height) +{ + gint comp[GST_VIDEO_MAX_COMPONENTS], pixel_stride; + GstVideoMeta *meta; + + g_return_if_fail (GST_IS_BUFFER (buffer)); + g_return_if_fail (info && plane >= 0 && plane < GST_VIDEO_MAX_PLANES); + g_return_if_fail (width && height && row_length && img_height); + + gst_video_format_info_component (info->finfo, plane, comp); + + *width = GST_VIDEO_INFO_COMP_WIDTH (info, comp[0]); + *height = GST_VIDEO_INFO_COMP_HEIGHT (info, comp[0]); + + pixel_stride = GST_VIDEO_INFO_COMP_PSTRIDE (info, comp[0]); + /* FIXME: complex formats like v210, UYVP and IYU1 have pstride == 0; we + * don't currently support these color formats in GStreamer Vulkan */ + g_assert (pixel_stride > 0); + + meta = gst_buffer_get_video_meta (buffer); + if (meta) { + *row_length = meta->stride[plane] + meta->alignment.padding_left + + meta->alignment.padding_right; + *img_height = *height + meta->alignment.padding_top + + meta->alignment.padding_bottom; + 
} else { + *row_length = GST_VIDEO_INFO_COMP_STRIDE (info, comp0); + *img_height = *height; + } + + g_assert (*row_length % pixel_stride == 0); + + /* Convert row length from bytes to texels for Vulkan's bufferRowLength */ + *row_length /= pixel_stride; +}
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/vulkan/gstvkutils.h
Added
@@ -0,0 +1,49 @@ +/* GStreamer + * + * GStreamer Vulkan plugins utilities + * Copyright (C) 2025 Igalia, S.L. + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/vulkan/vulkan.h> + +G_BEGIN_DECLS + +#ifndef GST_DISABLE_GST_DEBUG +#define gst_vulkan_buffer_peek_plane_memory(buffer, vinfo, plane) \ + _gst_vulkan_buffer_peek_plane_memory(buffer, vinfo, plane, GST_CAT_DEFAULT) +#else +#define gst_vulkan_buffer_peek_plane_memory(buffer, vinfo, plane) \ + _gst_vulkan_buffer_peek_plane_memory(buffer, vinfo, plane, NULL) +#endif + +GstMemory * _gst_vulkan_buffer_peek_plane_memory (GstBuffer * buffer, + const GstVideoInfo * vinfo, + gint plane, + GstDebugCategory * cat); + +void gst_vulkan_buffer_get_plane_dimensions (GstBuffer * buffer, + const GstVideoInfo * info, + gint plane, + guint32 * width, + guint32 * height, + guint32 * row_length, + guint32 * img_height); + +G_END_DECLS
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/vulkan/gstvkvideocaps.c
Added
@@ -0,0 +1,565 @@ +/* GStreamer + * Copyright (C) 2025 Igalia, S.L. + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gstvkvideocaps.h" +#include "gst/vulkan/gstvkvideo-private.h" + +GST_DEBUG_CATEGORY_EXTERN (gst_vulkan_debug); +#define GST_CAT_DEFAULT gst_vulkan_debug + +static gboolean +try_profile (GstVulkanPhysicalDevice * device, GstVulkanVideoProfile * profile, + GstCaps ** codec_caps, GstCaps ** raw_caps) +{ + gboolean ret; + GstVulkanVideoCapabilities vkcaps; + GstCaps *codec, *raw = NULL; + GArray *vkformats; + GError *err = NULL; + + ret = + gst_vulkan_video_try_configuration (device, profile, &vkcaps, &codec, + &vkformats, &err); + if (!ret) { + GST_LOG ("Couldn't get configuration for 0x%x, %u %d %d: %s", + profile->profile.videoCodecOperation, + profile->profile.chromaSubsampling, + profile->profile.chromaBitDepth, profile->profile.lumaBitDepth, + err ? 
err->message : "Unknown error"); + g_clear_error (&err); + return FALSE; + } + + if (!codec || gst_caps_is_empty (codec)) { + GST_DEBUG ("No codec caps could be generated"); + g_clear_pointer (&vkformats, g_array_unref); + gst_clear_caps (&codec); + return FALSE; + } + + gst_caps_set_simple (codec, "width", GST_TYPE_INT_RANGE, + vkcaps.caps.minCodedExtent.width, vkcaps.caps.maxCodedExtent.width, + "height", GST_TYPE_INT_RANGE, vkcaps.caps.minCodedExtent.height, + vkcaps.caps.maxCodedExtent.height, NULL); + + for (int i = 0; i < gst_caps_get_size (codec); i++) { + GstStructure *st = gst_caps_get_structure (codec, i); + + /* these fields are removed because they aren't exposed by all the parsers + * for negotiation, and no other decoder/encoder element exposes them in + * their pad templates */ + gst_structure_remove_fields (st, "interlace-mode", "bit-depth-luma", + "bit-depth-chroma", "chroma-format", "film-grain", NULL); + } + + /* generate raw caps given the possible output formats */ + raw = gst_caps_new_empty (); + for (int i = 0; i < vkformats->len; i++) { + GstCaps *raw_next = NULL; + VkVideoFormatPropertiesKHR *fmt = + &g_array_index (vkformats, VkVideoFormatPropertiesKHR, i); + GstVideoFormat format = gst_vulkan_format_to_video_format (fmt->format); + + if (format == GST_VIDEO_FORMAT_UNKNOWN) { + GST_DEBUG ("Missing mapping to output format %u", fmt->format); + continue; + } + + raw_next = gst_caps_new_simple ("video/x-raw", "format", G_TYPE_STRING, + gst_video_format_to_string (format), "width", GST_TYPE_INT_RANGE, + vkcaps.caps.minCodedExtent.width, vkcaps.caps.maxCodedExtent.width, + "height", GST_TYPE_INT_RANGE, vkcaps.caps.minCodedExtent.height, + vkcaps.caps.maxCodedExtent.height, NULL); + raw = gst_caps_merge (raw, raw_next); + } + + g_array_unref (vkformats); + + if (gst_caps_is_empty (raw)) { + gst_caps_unref (codec); + gst_caps_unref (raw); + GST_DEBUG ("Couldn't get configuration for %u, %u %d %d: %s", + profile->profile.videoCodecOperation, + 
profile->profile.chromaSubsampling, + profile->profile.chromaBitDepth, profile->profile.lumaBitDepth, + "Invalid output format"); + return FALSE; + } + + *codec_caps = codec; + *raw_caps = raw; + + return TRUE; +} + +static void +build_profile (GstVulkanVideoProfile * profile, + VkVideoCodecOperationFlagBitsKHR codec) +{ + /* *INDENT-OFF* */ + *profile = (GstVulkanVideoProfile) { + .profile = { + .sType = VK_STRUCTURE_TYPE_VIDEO_PROFILE_INFO_KHR, + .pNext = &profile->usage, + .videoCodecOperation = codec, + } + }; + + if (GST_VULKAN_VIDEO_CODEC_OPERATION_IS_DECODE (codec)) { + profile->usage.decode = (VkVideoDecodeUsageInfoKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_DECODE_USAGE_INFO_KHR, + .pNext = &profile->codec, + .videoUsageHints = VK_VIDEO_DECODE_USAGE_DEFAULT_KHR,}; + } else if (GST_VULKAN_VIDEO_CODEC_OPERATION_IS_ENCODE (codec)) { + profile->usage.encode = (VkVideoEncodeUsageInfoKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_USAGE_INFO_KHR, + .pNext = &profile->codec, + .videoUsageHints = VK_VIDEO_ENCODE_USAGE_DEFAULT_KHR, + .videoContentHints = VK_VIDEO_ENCODE_CONTENT_DEFAULT_KHR, + .tuningMode = VK_VIDEO_ENCODE_TUNING_MODE_DEFAULT_KHR, + }; + } else { + g_assert_not_reached (); + } + /* *INDENT-ON* */ +} + +static const VkVideoChromaSubsamplingFlagBitsKHR chroma_map[] = { + VK_VIDEO_CHROMA_SUBSAMPLING_MONOCHROME_BIT_KHR, + VK_VIDEO_CHROMA_SUBSAMPLING_420_BIT_KHR, + VK_VIDEO_CHROMA_SUBSAMPLING_422_BIT_KHR, + VK_VIDEO_CHROMA_SUBSAMPLING_444_BIT_KHR, +}; + +static const VkVideoComponentBitDepthFlagsKHR bit_depth_map[] = { + VK_VIDEO_COMPONENT_BIT_DEPTH_8_BIT_KHR, + VK_VIDEO_COMPONENT_BIT_DEPTH_10_BIT_KHR, + VK_VIDEO_COMPONENT_BIT_DEPTH_12_BIT_KHR +}; + +/* Try to generate GStreamer caps given the Vulkan profile.
The caps can be + * empty if the function fails */ +static inline void +try_get_caps (GstVulkanPhysicalDevice * device, GstVulkanVideoProfile * profile, + GstCaps * codec_caps, GstCaps * raw_caps) +{ + for (int j = 0; j < G_N_ELEMENTS (chroma_map); j++) { + profile->profile.chromaSubsampling = chroma_map[j]; + + for (int k = 0; k < G_N_ELEMENTS (bit_depth_map); k++) { + profile->profile.chromaBitDepth = bit_depth_map[k]; + for (int l = 0; l < G_N_ELEMENTS (bit_depth_map); l++) { + profile->profile.lumaBitDepth = bit_depth_map[l]; + + if (profile->profile.chromaSubsampling == + VK_VIDEO_CHROMA_SUBSAMPLING_MONOCHROME_BIT_KHR + && profile->profile.chromaBitDepth != profile->profile.lumaBitDepth) + continue; + + { + GstCaps *codec = NULL, *raw = NULL; + + if (!try_profile (device, profile, &codec, &raw)) + continue; + + codec_caps = gst_caps_merge (codec_caps, codec); + raw_caps = gst_caps_merge (raw_caps, raw); + } + } + } + } +} + +static inline gboolean +check_caps (GstCaps ** codec_caps, GstCaps ** raw_caps) +{ + if (gst_caps_is_empty (*codec_caps) || gst_caps_is_empty (*raw_caps)) { + gst_clear_caps (codec_caps); + gst_clear_caps (raw_caps); + return FALSE; + } + + *codec_caps = gst_caps_simplify (*codec_caps); + *raw_caps = gst_caps_simplify (*raw_caps); + return TRUE; +} + +static const StdVideoH264ProfileIdc h264_profile_idc[] = { + STD_VIDEO_H264_PROFILE_IDC_HIGH, STD_VIDEO_H264_PROFILE_IDC_MAIN, + STD_VIDEO_H264_PROFILE_IDC_BASELINE, +}; + +static const VkVideoDecodeH264PictureLayoutFlagBitsKHR h264_layout_map[] = { + VK_VIDEO_DECODE_H264_PICTURE_LAYOUT_PROGRESSIVE_KHR, + VK_VIDEO_DECODE_H264_PICTURE_LAYOUT_INTERLACED_INTERLEAVED_LINES_BIT_KHR, + VK_VIDEO_DECODE_H264_PICTURE_LAYOUT_INTERLACED_SEPARATE_PLANES_BIT_KHR, +}; + +static void +h26x_complete_caps (GstCaps * caps, char **stream_formats) +{ + int i; + GValue stream_format_value = G_VALUE_INIT; + + for (i = 0; stream_formats[i]; i++); + + if (i > 1) { + g_value_init (&stream_format_value, GST_TYPE_LIST); + for (int i = 
0; stream_formatsi; i++) { + GValue value = G_VALUE_INIT; + + g_value_init (&value, G_TYPE_STRING); + g_value_set_string (&value, stream_formatsi); + gst_value_list_append_value (&stream_format_value, &value); + g_value_unset (&value); + } + } else { + g_value_init (&stream_format_value, G_TYPE_STRING); + g_value_set_string (&stream_format_value, stream_formats0); + } + + gst_caps_set_value (caps, "stream-format", &stream_format_value); + g_value_unset (&stream_format_value); + gst_caps_set_simple (caps, "alignment", G_TYPE_STRING, "au", NULL); +} + +static gboolean +h264_encode_caps (GstVulkanPhysicalDevice * device, + GstVulkanVideoProfile * profile, GstCaps ** codec_caps_ptr, + GstCaps ** raw_caps_ptr) +{ + GstCaps *codec_caps, *raw_caps; + const char *stream_format = { "byte-stream", NULL }; + + profile->codec.h264enc.sType = + VK_STRUCTURE_TYPE_VIDEO_ENCODE_H264_PROFILE_INFO_KHR; + + codec_caps = gst_caps_new_empty (); + raw_caps = gst_caps_new_empty (); + + for (int i = 0; i < G_N_ELEMENTS (h264_profile_idc); i++) { + profile->codec.h264enc.stdProfileIdc = h264_profile_idci; + + try_get_caps (device, profile, codec_caps, raw_caps); + } + + if (!check_caps (&codec_caps, &raw_caps)) + return FALSE; + + h26x_complete_caps (codec_caps, (char **) stream_format); + + gst_caps_set_features_simple (raw_caps, + gst_caps_features_new (GST_CAPS_FEATURE_MEMORY_VULKAN_IMAGE, NULL)); + + *codec_caps_ptr = codec_caps; + *raw_caps_ptr = raw_caps; + + return TRUE; +} + +static const StdVideoH265ProfileIdc h265_profile_idc = { + STD_VIDEO_H265_PROFILE_IDC_MAIN, STD_VIDEO_H265_PROFILE_IDC_MAIN_10, + STD_VIDEO_H265_PROFILE_IDC_MAIN_STILL_PICTURE, + STD_VIDEO_H265_PROFILE_IDC_FORMAT_RANGE_EXTENSIONS, + STD_VIDEO_H265_PROFILE_IDC_SCC_EXTENSIONS, +}; + +static gboolean +h265_encode_caps (GstVulkanPhysicalDevice * device, + GstVulkanVideoProfile * profile, GstCaps ** codec_caps_ptr, + GstCaps ** raw_caps_ptr) +{ + GstCaps *codec_caps, *raw_caps; + const char *stream_format = { 
"byte-stream", NULL }; + + profile->codec.h265enc.sType = + VK_STRUCTURE_TYPE_VIDEO_ENCODE_H265_PROFILE_INFO_KHR; + + codec_caps = gst_caps_new_empty (); + raw_caps = gst_caps_new_empty (); + + for (int i = 0; i < G_N_ELEMENTS (h265_profile_idc); i++) { + profile->codec.h265enc.stdProfileIdc = h265_profile_idci; + + try_get_caps (device, profile, codec_caps, raw_caps); + } + + if (!check_caps (&codec_caps, &raw_caps)) + return FALSE; + + h26x_complete_caps (codec_caps, (char **) stream_format); + + gst_caps_set_features_simple (raw_caps, + gst_caps_features_new (GST_CAPS_FEATURE_MEMORY_VULKAN_IMAGE, NULL)); + + *codec_caps_ptr = codec_caps; + *raw_caps_ptr = raw_caps; + + return TRUE; +} + +static gboolean +h264_decode_caps (GstVulkanPhysicalDevice * device, + GstVulkanVideoProfile * profile, GstCaps ** codec_caps_ptr, + GstCaps ** raw_caps_ptr) +{ + GstCaps *codec_caps, *raw_caps; + const char *stream_format = { "avc", "byte-stream", NULL }; + + profile->codec.h264dec.sType = + VK_STRUCTURE_TYPE_VIDEO_DECODE_H264_PROFILE_INFO_KHR; + + codec_caps = gst_caps_new_empty (); + raw_caps = gst_caps_new_empty (); + + for (int i = 0; i < G_N_ELEMENTS (h264_profile_idc); i++) { + profile->codec.h264dec.stdProfileIdc = h264_profile_idci; + + for (int j = 0; j < G_N_ELEMENTS (h264_layout_map); j++) { + profile->codec.h264dec.pictureLayout = h264_layout_mapj; + + try_get_caps (device, profile, codec_caps, raw_caps); + } + } + + if (!check_caps (&codec_caps, &raw_caps)) + return FALSE; + + h26x_complete_caps (codec_caps, (char **) stream_format); + + /* HACK: add baseline and extended profiles if constrained-baseline is + * supported */ + { + const GstStructure *structure = gst_caps_get_structure (codec_caps, 0); + const GValue *profiles_value = + gst_structure_get_value (structure, "profile"); + gboolean has_constrained_baseline = FALSE; + + if (GST_VALUE_HOLDS_LIST (profiles_value)) { + for (int i = 0; i < gst_value_list_get_size (profiles_value); i++) { + const GValue 
*profile = gst_value_list_get_value (profiles_value, i); + if (G_VALUE_HOLDS_STRING (profile)) { + const gchar *profile_str = g_value_get_string (profile); + if (g_strcmp0 (profile_str, "constrained-baseline") == 0) { + has_constrained_baseline = TRUE; + break; + } + } + } + } else if (G_VALUE_HOLDS_STRING (profiles_value)) { + const gchar *profile_str = g_value_get_string (profiles_value); + has_constrained_baseline = + (g_strcmp0 (profile_str, "constrained-baseline") == 0); + } + + if (has_constrained_baseline) { + const char *profiles = { "baseline", "extended" }; + GValue new_profiles = G_VALUE_INIT; + + g_value_init (&new_profiles, GST_TYPE_LIST); + g_value_copy (profiles_value, &new_profiles); + + for (int i = 0; i < G_N_ELEMENTS (profiles); i++) { + GValue value = G_VALUE_INIT; + + g_value_init (&value, G_TYPE_STRING); + g_value_set_string (&value, profilesi); + gst_value_list_append_value (&new_profiles, &value); + g_value_unset (&value); + } + + gst_caps_set_value (codec_caps, "profile", &new_profiles); + g_value_unset (&new_profiles); + } + } + + gst_caps_set_features_simple (raw_caps, + gst_caps_features_new (GST_CAPS_FEATURE_MEMORY_VULKAN_IMAGE, NULL)); + + *codec_caps_ptr = codec_caps; + *raw_caps_ptr = raw_caps; + + return TRUE; +} + +static gboolean +h265_decode_caps (GstVulkanPhysicalDevice * device, + GstVulkanVideoProfile * profile, GstCaps ** codec_caps_ptr, + GstCaps ** raw_caps_ptr) +{ + GstCaps *codec_caps, *raw_caps; + const char *stream_format = { "hvc1", "hev1", "byte-stream", NULL }; + + profile->codec.h265dec.sType = + VK_STRUCTURE_TYPE_VIDEO_DECODE_H265_PROFILE_INFO_KHR; + + codec_caps = gst_caps_new_empty (); + raw_caps = gst_caps_new_empty (); + + for (int i = 0; i < G_N_ELEMENTS (h265_profile_idc); i++) { + profile->codec.h265dec.stdProfileIdc = h265_profile_idci; + + try_get_caps (device, profile, codec_caps, raw_caps); + } + + if (!check_caps (&codec_caps, &raw_caps)) + return FALSE; + + h26x_complete_caps (codec_caps, (char **) 
stream_format); + + gst_caps_set_features_simple (raw_caps, + gst_caps_features_new (GST_CAPS_FEATURE_MEMORY_VULKAN_IMAGE, NULL)); + + *codec_caps_ptr = codec_caps; + *raw_caps_ptr = raw_caps; + + return TRUE; +} + +static const StdVideoAV1Profile av1_profile = { + STD_VIDEO_AV1_PROFILE_MAIN, STD_VIDEO_AV1_PROFILE_HIGH, + STD_VIDEO_AV1_PROFILE_PROFESSIONAL, +}; + +static const VkBool32 av1_film_grain_map = { + VK_TRUE, VK_FALSE, +}; + +static gboolean +av1_decode_caps (GstVulkanPhysicalDevice * device, + GstVulkanVideoProfile * profile, GstCaps ** codec_caps_ptr, + GstCaps ** raw_caps_ptr) +{ + GstCaps *codec_caps, *raw_caps; + + profile->codec.av1dec.sType = + VK_STRUCTURE_TYPE_VIDEO_DECODE_AV1_PROFILE_INFO_KHR; + + codec_caps = gst_caps_new_empty (); + raw_caps = gst_caps_new_empty (); + + for (int i = 0; i < G_N_ELEMENTS (av1_profile); i++) { + profile->codec.av1dec.stdProfile = av1_profilei; + + for (int j = 0; j < G_N_ELEMENTS (av1_film_grain_map); j++) { + profile->codec.av1dec.filmGrainSupport = av1_film_grain_mapj; + + try_get_caps (device, profile, codec_caps, raw_caps); + } + } + + if (!check_caps (&codec_caps, &raw_caps)) + return FALSE; + + gst_caps_set_simple (codec_caps, "alignment", G_TYPE_STRING, "frame", + "stream-format", G_TYPE_STRING, "obu-stream", NULL); + + gst_caps_set_features_simple (raw_caps, + gst_caps_features_new (GST_CAPS_FEATURE_MEMORY_VULKAN_IMAGE, NULL)); + + *codec_caps_ptr = codec_caps; + *raw_caps_ptr = raw_caps; + + return TRUE; +} + +static const StdVideoVP9Profile vp9_profile = { + STD_VIDEO_VP9_PROFILE_0, STD_VIDEO_VP9_PROFILE_1, STD_VIDEO_VP9_PROFILE_2, + STD_VIDEO_VP9_PROFILE_3, +}; + +static gboolean +vp9_decode_caps (GstVulkanPhysicalDevice * device, + GstVulkanVideoProfile * profile, GstCaps ** codec_caps_ptr, + GstCaps ** raw_caps_ptr) +{ + GstCaps *codec_caps, *raw_caps; + + profile->codec.vp9dec.sType = + VK_STRUCTURE_TYPE_VIDEO_DECODE_VP9_PROFILE_INFO_KHR; + + codec_caps = gst_caps_new_empty (); + raw_caps = 
gst_caps_new_empty (); + + for (int i = 0; i < G_N_ELEMENTS (vp9_profile); i++) { + profile->codec.vp9dec.stdProfile = vp9_profilei; + + try_get_caps (device, profile, codec_caps, raw_caps); + } + + if (!check_caps (&codec_caps, &raw_caps)) + return FALSE; + + gst_caps_set_simple (codec_caps, "alignment", G_TYPE_STRING, "frame", NULL); + + gst_caps_set_features_simple (raw_caps, + gst_caps_features_new (GST_CAPS_FEATURE_MEMORY_VULKAN_IMAGE, NULL)); + + *codec_caps_ptr = codec_caps; + *raw_caps_ptr = raw_caps; + + return TRUE; +} + +/** + * gst_vulkan_physical_device_codec_caps: + * @device: a #GstVulkanPhysicalDevice + * @codec: (type int): Vulkan codec operation type + * @codec_caps: (out) (not nullable) (transfer full): the codec #GstCaps + * @raw_caps: (out) (not nullable) (transfer full): the raw #GstCaps + * + * Returns: whether the @codec_caps and @raw_caps were extracted from the + * @device configured for @codec. + */ +gboolean +gst_vulkan_physical_device_codec_caps (GstVulkanPhysicalDevice * device, + VkVideoCodecOperationFlagBitsKHR codec, GstCaps ** codec_caps, + GstCaps ** raw_caps) +{ + GstVulkanVideoProfile profile; + + g_return_val_if_fail (GST_IS_VULKAN_PHYSICAL_DEVICE (device), FALSE); + + build_profile (&profile, codec); + + switch (codec) { + case VK_VIDEO_CODEC_OPERATION_ENCODE_H264_BIT_KHR: + return h264_encode_caps (device, &profile, codec_caps, raw_caps); + case VK_VIDEO_CODEC_OPERATION_ENCODE_H265_BIT_KHR: + return h265_encode_caps (device, &profile, codec_caps, raw_caps); + case VK_VIDEO_CODEC_OPERATION_DECODE_H264_BIT_KHR: + return h264_decode_caps (device, &profile, codec_caps, raw_caps); + case VK_VIDEO_CODEC_OPERATION_DECODE_H265_BIT_KHR: + return h265_decode_caps (device, &profile, codec_caps, raw_caps); + case VK_VIDEO_CODEC_OPERATION_DECODE_AV1_BIT_KHR: + return av1_decode_caps (device, &profile, codec_caps, raw_caps); + case VK_VIDEO_CODEC_OPERATION_ENCODE_AV1_BIT_KHR: + return FALSE; /* unimplemented */ + case 
VK_VIDEO_CODEC_OPERATION_DECODE_VP9_BIT_KHR: + return vp9_decode_caps (device, &profile, codec_caps, raw_caps); + default: + g_assert_not_reached (); + } + + return FALSE; +}
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/vulkan/gstvkvideocaps.h
Added
@@ -0,0 +1,31 @@ +/* GStreamer + * Copyright (C) 2025 Igalia, S.L. + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/vulkan/vulkan.h> + +G_BEGIN_DECLS + +gboolean gst_vulkan_physical_device_codec_caps (GstVulkanPhysicalDevice * device, + VkVideoCodecOperationFlagBitsKHR codec, + GstCaps ** codec_caps, + GstCaps ** raw_caps); + +G_END_DECLS
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/vulkan/gstvulkan.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/vulkan/gstvulkan.c
Changed
@@ -46,8 +46,14 @@ #if GST_VULKAN_HAVE_VIDEO_EXTENSIONS #include "vkh264dec.h" #include "vkh265dec.h" +#include "vkav1dec.h" +#include "vkvp9dec.h" +#include "vkh264enc.h" #endif +GST_DEBUG_CATEGORY_EXTERN (gst_vulkan_debug); +#define GST_CAT_DEFAULT gst_vulkan_debug + static gboolean plugin_init (GstPlugin * plugin) { @@ -70,6 +76,8 @@ gst_plugin_add_dependency (plugin, env_vars, NULL, NULL, GST_PLUGIN_DEPENDENCY_FLAG_NONE); + vulkan_element_init (plugin); + if (!have_instance) { GST_WARNING_OBJECT (plugin, "Failed to create vulkan instance: %s", error->message); @@ -104,7 +112,19 @@ VK_KHR_VIDEO_DECODE_H265_EXTENSION_NAME)) { ret |= gst_vulkan_h265_decoder_register (plugin, device, GST_RANK_NONE); } -#endif + if (gst_vulkan_device_is_extension_enabled (device, + VK_KHR_VIDEO_DECODE_VP9_EXTENSION_NAME)) { + ret |= gst_vulkan_vp9_decoder_register (plugin, device, GST_RANK_NONE); + } + if (gst_vulkan_device_is_extension_enabled (device, + VK_KHR_VIDEO_DECODE_AV1_EXTENSION_NAME)) { + ret |= gst_vulkan_av1_decoder_register (plugin, device, GST_RANK_NONE); + } + if (gst_vulkan_device_is_extension_enabled (device, + VK_KHR_VIDEO_ENCODE_H264_EXTENSION_NAME)) { + ret |= gst_vulkan_h264_encoder_register (plugin, device, GST_RANK_NONE); + } +#endif /* GST_VULKAN_HAVE_VIDEO_EXTENSIONS */ ret |= gst_vulkan_sink_register (plugin, device, GST_RANK_NONE); gst_object_unref (device); }
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/vulkan/gstvulkanelement.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/vulkan/gstvulkanelement.c
Changed
@@ -34,7 +34,7 @@ #include <gst/vulkan/vulkan.h> #define GST_CAT_DEFAULT gst_vulkan_debug -GST_DEBUG_CATEGORY_STATIC (GST_CAT_DEFAULT); +GST_DEBUG_CATEGORY (GST_CAT_DEFAULT); void vulkan_element_init (GstPlugin * plugin)
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/vulkan/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/vulkan/meson.build
Changed
@@ -1,6 +1,7 @@ vulkan_sources = [ 'gstvulkan.c', 'gstvulkanelement.c', + 'gstvkutils.c', 'vkdownload.c', 'vkdeviceprovider.c', 'vksink.c', @@ -14,6 +15,7 @@ 'vkdownload.h', 'vkh264dec.h', 'vkh265dec.h', + 'vkvp9dec.h', 'vkimageidentity.h', 'vkoverlaycompositor.h', 'vkshaderspv.h', @@ -31,8 +33,13 @@ video_sources = [ + 'base/gsth264encoder.c', + 'gstvkvideocaps.c', + 'vkav1dec.c', + 'vkh264enc.c', 'vkh264dec.c', 'vkh265dec.c', + 'vkvp9dec.c', doc_sources = [
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/vulkan/vkav1dec.c
Added
@@ -0,0 +1,1489 @@ +/* GStreamer + * Copyright (C) 2025 Collabora, Ltd. + * Copyright (C) 2025 Igalia, S.L. + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#include "vkav1dec.h" + +#include <gst/video/video.h> +#include <gst/codecs/gstav1decoder.h> + +#include "gst/vulkan/gstvkdecoder-private.h" +#include "gst/vulkan/gstvkphysicaldevice-private.h" +#include "gstvkvideocaps.h" +#include "gstvulkanelements.h" + +#define GST_VULKAN_AV1_DECODER(obj) ((GstVulkanAV1Decoder *) obj) +#define GST_VULKAN_AV1_DECODER_GET_CLASS(obj) (G_TYPE_INSTANCE_GET_CLASS ((obj), G_TYPE_FROM_INSTANCE (obj), GstVulkanAV1DecoderClass)) +#define GST_VULKAN_AV1_DECODER_CLASS(klass) ((GstVulkanAV1DecoderClass *) klass) + + +#define GST_VULKAN_AV1_MAX_DPB_SLOTS 32 + +typedef struct _GstVulkanAV1Decoder GstVulkanAV1Decoder; +typedef struct _GstVulkanAV1DecoderClass GstVulkanAV1DecoderClass; +typedef struct _GstVulkanAV1Picture GstVulkanAV1Picture; + +struct _GstVulkanAV1Decoder +{ + GstAV1Decoder parent; + + GstVulkanInstance *instance; + GstVulkanDevice *device; + GstVulkanQueue *graphic_queue, *decode_queue; + + GstVulkanDecoder *decoder; + + gboolean need_negotiation; + gboolean resolution_changed; + + gint width, height; + gint coded_width, coded_height; + gint dpb_size; + + 
VkSamplerYcbcrRange range; + VkChromaLocation chroma_location; + + GstVideoCodecState *output_state; + struct + { + StdVideoAV1SequenceHeader sequence; + StdVideoAV1TimingInfo timing_info; + StdVideoAV1ColorConfig color_config; + } vk; + + guint32 free_slot_mask; +}; + +struct _GstVulkanAV1DecoderClass +{ + GstAV1DecoderClass parent; + + gint device_index; +}; + +static GstElementClass *parent_class = NULL; + +GST_DEBUG_CATEGORY (gst_vulkan_av1_decoder_debug); +#define GST_CAT_DEFAULT gst_vulkan_av1_decoder_debug + +struct _GstVulkanAV1Picture +{ + GstVulkanDecoderPicture base; + + /* Picture refs */ + StdVideoDecodeAV1ReferenceInfo std_refs[GST_AV1_NUM_REF_FRAMES]; + VkVideoDecodeAV1DpbSlotInfoKHR vk_slots[GST_AV1_NUM_REF_FRAMES]; + + /* Current picture */ + StdVideoDecodeAV1ReferenceInfo std_ref; + VkVideoDecodeAV1DpbSlotInfoKHR vk_slot; + guint16 width_in_sbs_minus1[64]; + guint16 height_in_sbs_minus1[64]; + guint16 mi_col_starts[64]; + guint16 mi_row_starts[64]; + StdVideoAV1TileInfo tile_info; + StdVideoAV1Quantization quantization; + StdVideoAV1Segmentation segmentation; + StdVideoAV1LoopFilter loop_filter; + StdVideoAV1CDEF cdef; + StdVideoAV1LoopRestoration loop_restoration; + StdVideoAV1GlobalMotion global_motion; + StdVideoAV1FilmGrain film_grain; + + GArray *tile_sizes; + GArray *tile_offsets; + guint num_tiles; + guint32 tile_data_sz; + + VkVideoDecodeAV1PictureInfoKHR vk_av1pic; + StdVideoDecodeAV1PictureInfo std_av1pic; + + gint32 slot_idx; + + // Used to update the mask when this picture is freed.
+ guint32 *free_slot_mask; +}; + +static gpointer +_register_debug_category (gpointer data) +{ + GST_DEBUG_CATEGORY_INIT (gst_vulkan_av1_decoder_debug, "vulkanav1dec", 0, + "Vulkan AV1 decoder"); + + return NULL; +} + +static void +gst_vulkan_av1_decoder_set_context (GstElement * element, GstContext * context) +{ + GstVulkanAV1Decoder *self = GST_VULKAN_AV1_DECODER (element); + + gst_vulkan_handle_set_context (element, context, NULL, &self->instance); + + GST_ELEMENT_CLASS (parent_class)->set_context (element, context); +} + +static gboolean +_query_context (GstVulkanAV1Decoder * self, GstQuery * query) +{ + if (gst_vulkan_handle_context_query (GST_ELEMENT (self), query, NULL, + self->instance, self->device)) + return TRUE; + + if (gst_vulkan_queue_handle_context_query (GST_ELEMENT (self), query, + self->graphic_queue)) + return TRUE; + + return FALSE; +} + +static gboolean +gst_vulkan_av1_decoder_src_query (GstVideoDecoder * decoder, GstQuery * query) +{ + gboolean ret = FALSE; + + switch (GST_QUERY_TYPE (query)) { + case GST_QUERY_CONTEXT: + ret = _query_context (GST_VULKAN_AV1_DECODER (decoder), query); + break; + default: + ret = GST_VIDEO_DECODER_CLASS (parent_class)->src_query (decoder, query); + break; + } + + return ret; +} + +static gboolean +gst_vulkan_av1_decoder_sink_query (GstVideoDecoder * decoder, GstQuery * query) +{ + gboolean ret = FALSE; + + switch (GST_QUERY_TYPE (query)) { + case GST_QUERY_CONTEXT: + ret = _query_context (GST_VULKAN_AV1_DECODER (decoder), query); + break; + default: + ret = GST_VIDEO_DECODER_CLASS (parent_class)->sink_query (decoder, query); + break; + } + + return ret; +} + +static gboolean +_find_queues (GstVulkanDevice * device, GstVulkanQueue * queue, gpointer data) +{ + GstVulkanAV1Decoder *self = data; + guint32 flags = + device->physical_device->queue_family_props[queue->family].queueFlags; + guint32 codec = + device->physical_device->queue_family_ops[queue->family].video; + + if (!self->graphic_queue + && ((flags &
VK_QUEUE_GRAPHICS_BIT) == VK_QUEUE_GRAPHICS_BIT)) { + self->graphic_queue = gst_object_ref (queue); + } + + if (!self->decode_queue + && ((codec & VK_VIDEO_CODEC_OPERATION_DECODE_AV1_BIT_KHR) + == VK_VIDEO_CODEC_OPERATION_DECODE_AV1_BIT_KHR) + && ((flags & VK_QUEUE_VIDEO_DECODE_BIT_KHR) + == VK_QUEUE_VIDEO_DECODE_BIT_KHR)) { + self->decode_queue = gst_object_ref (queue); + } + + return !(self->decode_queue && self->graphic_queue); +} + +static gboolean +gst_vulkan_av1_decoder_open (GstVideoDecoder * decoder) +{ + GstVulkanAV1Decoder *self = GST_VULKAN_AV1_DECODER (decoder); + + if (!gst_vulkan_ensure_element_data (GST_ELEMENT (decoder), NULL, + &self->instance)) { + GST_ELEMENT_ERROR (self, RESOURCE, NOT_FOUND, + ("Failed to retrieve vulkan instance"), (NULL)); + return FALSE; + } + + if (!gst_vulkan_ensure_element_device (GST_ELEMENT (decoder), self->instance, + &self->device, 0)) { + return FALSE; + } + + if (!gst_vulkan_queue_run_context_query (GST_ELEMENT (self), + &self->graphic_queue)) { + GST_DEBUG_OBJECT (self, "No graphic queue retrieved from peer elements"); + } + + gst_vulkan_device_foreach_queue (self->device, _find_queues, self); + + if (!self->decode_queue) { + GST_ELEMENT_ERROR (self, RESOURCE, NOT_FOUND, + ("Failed to create/retrieve vulkan AV1 decoder queue"), (NULL)); + return FALSE; + } + + self->decoder = gst_vulkan_decoder_new_from_queue (self->decode_queue, + VK_VIDEO_CODEC_OPERATION_DECODE_AV1_BIT_KHR); + if (!self->decoder) { + GST_ELEMENT_ERROR (self, RESOURCE, NOT_FOUND, + ("Failed to create vulkan AV1 decoder"), (NULL)); + return FALSE; + } + + return TRUE; +} + +static gboolean +gst_vulkan_av1_decoder_close (GstVideoDecoder * decoder) +{ + GstVulkanAV1Decoder *self = GST_VULKAN_AV1_DECODER (decoder); + + gst_clear_object (&self->decoder); + gst_clear_object (&self->decode_queue); + gst_clear_object (&self->graphic_queue); + gst_clear_object (&self->device); + gst_clear_object (&self->instance); + + return TRUE; +} + +static gboolean 
+gst_vulkan_av1_decoder_stop (GstVideoDecoder * decoder) +{ + GstVulkanAV1Decoder *self = GST_VULKAN_AV1_DECODER (decoder); + + if (self->decoder) + gst_vulkan_decoder_stop (self->decoder); + + if (self->output_state) + gst_video_codec_state_unref (self->output_state); + + return GST_VIDEO_DECODER_CLASS (parent_class)->stop (decoder); +} + +static gboolean +gst_vulkan_av1_decoder_negotiate (GstVideoDecoder * decoder) +{ + GstVulkanAV1Decoder *self = GST_VULKAN_AV1_DECODER (decoder); + GstAV1Decoder *av1dec = GST_AV1_DECODER (decoder); + VkVideoFormatPropertiesKHR format_prop; + GstVideoFormat format; + + /* Ignore downstream renegotiation request. */ + if (!self->need_negotiation) { + GST_DEBUG_OBJECT (decoder, + "Input state hasn't changed, no need to reconfigure downstream caps"); + goto bail; + } + + if (!gst_vulkan_decoder_out_format (self->decoder, &format_prop)) + return FALSE; + + self->need_negotiation = FALSE; + + if (self->output_state) + gst_video_codec_state_unref (self->output_state); + + format = gst_vulkan_format_to_video_format (format_prop.format); + self->output_state = gst_video_decoder_set_interlaced_output_state (decoder, + format, GST_VIDEO_INTERLACE_MODE_PROGRESSIVE, self->width, self->height, + av1dec->input_state); + + self->output_state->caps = gst_video_info_to_caps (&self->output_state->info); + gst_caps_set_features_simple (self->output_state->caps, + gst_caps_features_new_static_str (GST_CAPS_FEATURE_MEMORY_VULKAN_IMAGE, + NULL)); + + GST_INFO_OBJECT (self, "Negotiated caps %" GST_PTR_FORMAT, + self->output_state->caps); + +bail: + return GST_VIDEO_DECODER_CLASS (parent_class)->negotiate (decoder); +} + +static gboolean +gst_vulkan_av1_decoder_decide_allocation (GstVideoDecoder * decoder, + GstQuery * query) +{ + GstVulkanAV1Decoder *self = GST_VULKAN_AV1_DECODER (decoder); + GstCaps *new_caps, *profile_caps, *caps = NULL, *dpb_caps = NULL; + GstBufferPool *pool = NULL; + GstStructure *config; + guint size, min, max; + gboolean 
update_pool; + VkImageUsageFlags usage; + GstVulkanVideoCapabilities vk_caps; + + if (self->dpb_size == 0) { + return + GST_VIDEO_DECODER_CLASS (parent_class)->decide_allocation (decoder, + query); + } + + gst_query_parse_allocation (query, &caps, NULL); + if (!caps) + return FALSE; + if (!gst_vulkan_decoder_caps (self->decoder, &vk_caps)) + return FALSE; + + if (gst_query_get_n_allocation_pools (query) > 0) { + gst_query_parse_nth_allocation_pool (query, 0, &pool, &size, &min, &max); + update_pool = TRUE; + } else { + GstVideoInfo vinfo; + + gst_video_info_from_caps (&vinfo, caps); + size = GST_VIDEO_INFO_SIZE (&vinfo); + min = 2; + max = 0; + update_pool = FALSE; + } + + if (!(pool && GST_IS_VULKAN_IMAGE_BUFFER_POOL (pool))) { + gst_clear_object (&pool); + pool = gst_vulkan_image_buffer_pool_new (self->device); + } + + usage = VK_IMAGE_USAGE_TRANSFER_SRC_BIT | VK_IMAGE_USAGE_SAMPLED_BIT + | VK_IMAGE_USAGE_VIDEO_DECODE_DST_BIT_KHR; + + if (!self->decoder->dedicated_dpb) { + min = MAX (min, MIN (self->dpb_size, vk_caps.caps.maxDpbSlots)); + max = 0; + usage |= VK_IMAGE_USAGE_VIDEO_DECODE_DPB_BIT_KHR; + } + + new_caps = gst_caps_copy (caps); + gst_caps_set_simple (new_caps, "width", G_TYPE_INT, self->coded_width, + "height", G_TYPE_INT, self->coded_height, NULL); + profile_caps = gst_vulkan_decoder_profile_caps (self->decoder); + + config = gst_buffer_pool_get_config (pool); + + gst_buffer_pool_config_set_params (config, new_caps, size, min, max); + + gst_vulkan_image_buffer_pool_config_set_allocation_params (config, usage, + VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT, VK_IMAGE_LAYOUT_VIDEO_DECODE_DST_KHR, + VK_ACCESS_TRANSFER_WRITE_BIT); + gst_vulkan_image_buffer_pool_config_set_decode_caps (config, profile_caps); + + gst_caps_unref (profile_caps); + + if (!gst_buffer_pool_set_config (pool, config)) + goto bail; + + if (update_pool) + gst_query_set_nth_allocation_pool (query, 0, pool, size, min, max); + else + gst_query_add_allocation_pool (query, pool, size, min, max); + 
+ gst_object_unref (pool); + + dpb_caps = gst_caps_copy (caps); + gst_caps_set_simple (dpb_caps, "width", G_TYPE_INT, + vk_caps.caps.maxCodedExtent.width, "height", G_TYPE_INT, + vk_caps.caps.maxCodedExtent.height, NULL); + + if (!gst_vulkan_decoder_create_dpb_pool (self->decoder, dpb_caps)) + goto bail; + + gst_caps_unref (dpb_caps); + gst_caps_unref (new_caps); + + return TRUE; + +bail: + { + gst_clear_caps (&new_caps); + gst_clear_caps (&dpb_caps); + gst_clear_object (&pool); + return FALSE; + } +} + +static VkVideoChromaSubsamplingFlagBitsKHR +_get_chroma_subsampling_flag (const GstAV1SequenceHeaderOBU * seq_hdr) +{ + if (seq_hdr->color_config.mono_chrome) { + return VK_VIDEO_CHROMA_SUBSAMPLING_MONOCHROME_BIT_KHR; + } else if (seq_hdr->color_config.subsampling_x == 0 + && seq_hdr->color_config.subsampling_y == 0) { + return VK_VIDEO_CHROMA_SUBSAMPLING_444_BIT_KHR; + } else if (seq_hdr->color_config.subsampling_x == 1 + && seq_hdr->color_config.subsampling_y == 0) { + return VK_VIDEO_CHROMA_SUBSAMPLING_422_BIT_KHR; + } else if (seq_hdr->color_config.subsampling_x == 1 + && seq_hdr->color_config.subsampling_y == 1) { + return VK_VIDEO_CHROMA_SUBSAMPLING_420_BIT_KHR; + } else { + return VK_VIDEO_CHROMA_SUBSAMPLING_INVALID_KHR; + } +} + +static VkVideoComponentBitDepthFlagBitsKHR +_get_component_bit_depth (const GstAV1SequenceHeaderOBU * seq_hdr) +{ + switch (seq_hdr->bit_depth) { + case 8: + return VK_VIDEO_COMPONENT_BIT_DEPTH_8_BIT_KHR; + case 10: + return VK_VIDEO_COMPONENT_BIT_DEPTH_10_BIT_KHR; + case 12: + return VK_VIDEO_COMPONENT_BIT_DEPTH_12_BIT_KHR; + default: + return VK_VIDEO_COMPONENT_BIT_DEPTH_INVALID_KHR; + } +} + +static StdVideoAV1Profile +_get_av1_profile (const GstAV1SequenceHeaderOBU * seq_hdr) +{ + switch (seq_hdr->seq_profile) { + case GST_AV1_PROFILE_0: + return STD_VIDEO_AV1_PROFILE_MAIN; + case GST_AV1_PROFILE_1: + return STD_VIDEO_AV1_PROFILE_HIGH; + case GST_AV1_PROFILE_2: + return STD_VIDEO_AV1_PROFILE_PROFESSIONAL; + default: + return 
STD_VIDEO_AV1_PROFILE_INVALID; + } +} + +static void +gst_vulkan_video_profile_from_av1_sequence_hdr (GstVulkanVideoProfile * profile, + const GstAV1SequenceHeaderOBU * seq_hdr) +{ + /* *INDENT-OFF* */ + *profile = (GstVulkanVideoProfile) { + .profile = { + .sType = VK_STRUCTURE_TYPE_VIDEO_PROFILE_INFO_KHR, + .pNext = &profile->usage, + .videoCodecOperation = VK_VIDEO_CODEC_OPERATION_DECODE_AV1_BIT_KHR, + .chromaSubsampling = _get_chroma_subsampling_flag (seq_hdr), + .lumaBitDepth = _get_component_bit_depth (seq_hdr), + .chromaBitDepth = _get_component_bit_depth (seq_hdr), + }, + .usage.decode = { + .sType = VK_STRUCTURE_TYPE_VIDEO_DECODE_USAGE_INFO_KHR, + .videoUsageHints = VK_VIDEO_DECODE_USAGE_DEFAULT_KHR, + .pNext = &profile->codec, + }, + .codec.av1dec = { + .sType = VK_STRUCTURE_TYPE_VIDEO_DECODE_AV1_PROFILE_INFO_KHR, + .stdProfile = _get_av1_profile (seq_hdr), + .filmGrainSupport = VK_FALSE, + }, + }; + /* *INDENT-ON* */ +} + +static GstFlowReturn +_update_parameters (GstVulkanAV1Decoder * self, + const GstAV1SequenceHeaderOBU * seq) +{ + GError *error = NULL; + /* *INDENT-OFF* */ + self->vk.timing_info = (StdVideoAV1TimingInfo) { + .flags = { + .equal_picture_interval = seq->timing_info.equal_picture_interval, + }, + .num_units_in_display_tick = seq->timing_info.num_units_in_display_tick, + .time_scale = seq->timing_info.time_scale, + .num_ticks_per_picture_minus_1 = + seq->timing_info.num_ticks_per_picture_minus_1, + }; + + self->vk.color_config = (StdVideoAV1ColorConfig) { + .flags = { + .mono_chrome = seq->color_config.mono_chrome, + .color_range = seq->color_config.color_range, + .separate_uv_delta_q = seq->color_config.separate_uv_delta_q, + }, + .BitDepth = seq->color_config.twelve_bit ? 12 : + seq->color_config.high_bitdepth ? 
10 : 8, + .subsampling_x = seq->color_config.subsampling_x, + .subsampling_y = seq->color_config.subsampling_y, + .color_primaries = + (StdVideoAV1ColorPrimaries) seq->color_config.color_primaries, + .transfer_characteristics = + (StdVideoAV1TransferCharacteristics) seq->color_config.transfer_characteristics, + .matrix_coefficients = + (StdVideoAV1MatrixCoefficients) seq->color_config.matrix_coefficients, + }; + + self->vk.sequence = (StdVideoAV1SequenceHeader) { + .flags = { + .still_picture = seq->still_picture, + .reduced_still_picture_header = seq->reduced_still_picture_header, + .use_128x128_superblock = seq->use_128x128_superblock, + .enable_filter_intra = seq->enable_filter_intra, + .enable_intra_edge_filter = seq->enable_intra_edge_filter, + .enable_interintra_compound = seq->enable_interintra_compound, + .enable_masked_compound = seq->enable_masked_compound, + .enable_warped_motion = seq->enable_warped_motion, + .enable_dual_filter = seq->enable_dual_filter, + .enable_order_hint = seq->enable_order_hint, + .enable_jnt_comp = seq->enable_jnt_comp, + .enable_ref_frame_mvs = seq->enable_ref_frame_mvs, + .frame_id_numbers_present_flag = seq->frame_id_numbers_present_flag, + .enable_superres = seq->enable_superres, + .enable_cdef = seq->enable_cdef, + .enable_restoration = seq->enable_restoration, + .film_grain_params_present = seq->film_grain_params_present, + .timing_info_present_flag = seq->timing_info_present_flag, + .initial_display_delay_present_flag = seq->initial_display_delay_present_flag, + }, + .seq_profile = _get_av1_profile (seq), + .frame_width_bits_minus_1 = seq->frame_width_bits_minus_1, + .frame_height_bits_minus_1 = seq->frame_height_bits_minus_1, + .max_frame_width_minus_1 = seq->max_frame_width_minus_1, + .max_frame_height_minus_1 = seq->max_frame_height_minus_1, + .delta_frame_id_length_minus_2 = seq->delta_frame_id_length_minus_2, + .additional_frame_id_length_minus_1 = seq->additional_frame_id_length_minus_1, + .order_hint_bits_minus_1 = 
seq->order_hint_bits_minus_1, + .seq_force_integer_mv = seq->seq_force_integer_mv, + .seq_force_screen_content_tools = seq->seq_force_screen_content_tools, + .pTimingInfo = &self->vk.timing_info, + .pColorConfig = &self->vk.color_config, + }; + + GstVulkanDecoderParameters dec_params = { + .av1 = { + .sType = VK_STRUCTURE_TYPE_VIDEO_DECODE_AV1_SESSION_PARAMETERS_CREATE_INFO_KHR, + .pNext = NULL, + .pStdSequenceHeader = &self->vk.sequence, + }, + }; + /* *INDENT-ON* */ + + if (!gst_vulkan_decoder_update_video_session_parameters (self->decoder, + &dec_params, &error)) { + if (error) { + GST_ERROR_OBJECT (self, "Couldn't set codec parameters: %s", + error->message); + g_clear_error (&error); + } + return GST_FLOW_ERROR; + } + + return GST_FLOW_OK; +} + +static GstFlowReturn +gst_vulkan_av1_decoder_new_sequence (GstAV1Decoder * decoder, + const GstAV1SequenceHeaderOBU * seq_hdr, gint max_dpb_size) +{ + GstVulkanAV1Decoder *self = GST_VULKAN_AV1_DECODER (decoder); + GstVulkanVideoProfile profile; + GstVulkanVideoCapabilities vk_caps; + GError *error = NULL; + gint width = seq_hdr->max_frame_width_minus_1 + 1; + gint height = seq_hdr->max_frame_height_minus_1 + 1; + VkFormat old_format = VK_FORMAT_UNDEFINED; + VkVideoFormatPropertiesKHR format_prop; + GstFlowReturn ret; + + gst_vulkan_video_profile_from_av1_sequence_hdr (&profile, seq_hdr); + + if (gst_vulkan_decoder_is_started (self->decoder)) { + if (!gst_vulkan_video_profile_is_equal (&self->decoder->profile, &profile)) { + if (gst_vulkan_decoder_out_format (self->decoder, &format_prop)) + old_format = format_prop.format; + gst_vulkan_decoder_stop (self->decoder); + } else { + self->need_negotiation = FALSE; + } + } + + if (!gst_vulkan_decoder_is_started (self->decoder)) { + self->need_negotiation = TRUE; + if (!gst_vulkan_decoder_start (self->decoder, &profile, &error)) { + GST_ERROR_OBJECT (self, "Couldn't start decoder: %s", + error ? 
error->message : ""); + g_clear_error (&error); + return GST_FLOW_ERROR; + } + } + + ret = _update_parameters (self, seq_hdr); + if (ret != GST_FLOW_OK) + return ret; + + self->dpb_size = CLAMP (max_dpb_size, 0, GST_VULKAN_AV1_MAX_DPB_SLOTS); + + gst_vulkan_decoder_caps (self->decoder, &vk_caps); + if (width < vk_caps.caps.minCodedExtent.width + || height < vk_caps.caps.minCodedExtent.height + || width > vk_caps.caps.maxCodedExtent.width + || height > vk_caps.caps.maxCodedExtent.height) { + + GST_ERROR_OBJECT (self, + "The following sequence can not be decoded because the frame dimension does not fit the decoder bounds: %dx%d" + ", minCodedExtent=%dx%d, maxCodedExtent=%dx%d", + width, height, vk_caps.caps.minCodedExtent.width, + vk_caps.caps.minCodedExtent.height, vk_caps.caps.maxCodedExtent.width, + vk_caps.caps.maxCodedExtent.height); + return GST_FLOW_ERROR; + } + + self->coded_width = width; + self->coded_height = height; + + self->resolution_changed = self->coded_width > 0 && self->coded_height > 0 + && (width != self->coded_width || height != self->coded_height); + self->need_negotiation &= (width != self->width || height != self->height); + self->width = width; + self->height = height; + + /* Ycbcr sampler */ + { + VkSamplerYcbcrRange range; + VkChromaLocation chroma_location; + gboolean ret; + + ret = gst_vulkan_decoder_out_format (self->decoder, &format_prop); + g_assert (ret); + + range = (seq_hdr->color_config.color_range) ? 
+ VK_SAMPLER_YCBCR_RANGE_ITU_FULL : VK_SAMPLER_YCBCR_RANGE_ITU_NARROW; + + switch (seq_hdr->color_config.chroma_sample_position) { + case GST_AV1_CSP_COLOCATED: + chroma_location = VK_CHROMA_LOCATION_COSITED_EVEN; + break; + default: + chroma_location = VK_CHROMA_LOCATION_MIDPOINT; + } + + if (old_format != format_prop.format || range != self->range || + chroma_location != self->chroma_location) { + self->range = range; + self->chroma_location = chroma_location; + ret = + gst_vulkan_decoder_update_ycbcr_sampler (self->decoder, range, + VK_CHROMA_LOCATION_COSITED_EVEN, chroma_location, &error); + if (!ret && error) { + GST_WARNING_OBJECT (self, "Unable to create Ycbcr sampler: %s", + error->message); + g_clear_error (&error); + } + } + } + + return GST_FLOW_OK; +} + +static GstVulkanAV1Picture * +gst_vulkan_av1_picture_new (GstVulkanAV1Decoder * self, GstBuffer * out) +{ + GstVulkanAV1Picture *pic; + + pic = g_new0 (GstVulkanAV1Picture, 1); + gst_vulkan_decoder_picture_init (self->decoder, &pic->base, out); + + pic->tile_sizes = g_array_new (TRUE, TRUE, sizeof (guint32)); + pic->tile_offsets = g_array_new (TRUE, TRUE, sizeof (guint32)); + pic->tile_data_sz = 0; + pic->slot_idx = -1; + pic->free_slot_mask = &self->free_slot_mask; + + return pic; +} + +static void +gst_vulkan_av1_picture_free (gpointer data) +{ + GstVulkanAV1Picture *pic = data; + + // Mark our slot as free in the decoder, if we were assigned any. 
+ if (pic->slot_idx >= 0) + *pic->free_slot_mask &= ~(1 << pic->slot_idx); + + gst_vulkan_decoder_picture_release (&pic->base); + g_clear_pointer (&pic->tile_offsets, g_array_unref); + g_clear_pointer (&pic->tile_sizes, g_array_unref); + g_free (pic); +} + +static GstFlowReturn +_check_resolution_change (GstVulkanAV1Decoder * self, GstAV1Picture * picture) +{ + const GstAV1FrameHeaderOBU *frame_hdr = &picture->frame_hdr; + + if (!self->output_state) { + GST_DEBUG_OBJECT (self, "output_state not yet initialized"); + return GST_FLOW_OK; + } + + if (self->resolution_changed + || self->coded_width != frame_hdr->frame_width + || self->coded_height != frame_hdr->frame_height) { + GstVideoInfo *info = &self->output_state->info; + GST_VIDEO_INFO_WIDTH (info) = self->coded_width = frame_hdr->frame_width; + GST_VIDEO_INFO_HEIGHT (info) = self->coded_height = frame_hdr->frame_height; + + self->need_negotiation = TRUE; + + if (!gst_video_decoder_negotiate (GST_VIDEO_DECODER (self))) { + GST_ERROR_OBJECT (self, "Resolution changed, but failed to" + " negotiate with downstream"); + return GST_FLOW_NOT_NEGOTIATED; + } + self->resolution_changed = TRUE; + } + + return GST_FLOW_OK; +} + +static GstFlowReturn +gst_vulkan_av1_decoder_new_picture (GstAV1Decoder * decoder, + GstVideoCodecFrame * frame, GstAV1Picture * picture) +{ + GstVulkanAV1Decoder *self = GST_VULKAN_AV1_DECODER (decoder); + GstVideoDecoder *vdec = GST_VIDEO_DECODER (decoder); + GstFlowReturn ret; + GstVulkanAV1Picture *pic; + + GST_TRACE_OBJECT (self, "New picture"); + + ret = _check_resolution_change (self, picture); + if (ret != GST_FLOW_OK) + return ret; + + if (self->need_negotiation) { + if (!gst_video_decoder_negotiate (vdec)) { + GST_ERROR_OBJECT (self, "Failed downstream negotiation."); + return GST_FLOW_ERROR; + } + } + + ret = gst_video_decoder_allocate_output_frame (vdec, frame); + if (ret != GST_FLOW_OK) + goto allocation_failed; + + pic = gst_vulkan_av1_picture_new (self, frame->output_buffer); + 
gst_av1_picture_set_user_data (picture, pic, gst_vulkan_av1_picture_free); + + return GST_FLOW_OK; + +allocation_failed: + { + GST_WARNING_OBJECT (self, "Failed to allocate input or output buffer: %s", + gst_flow_get_name (ret)); + return ret; + } +} + +static void +_fill_ref_slot (GstVulkanAV1Decoder * self, GstAV1Picture * picture, + VkVideoReferenceSlotInfoKHR * slot, VkVideoPictureResourceInfoKHR * res, + VkVideoDecodeAV1DpbSlotInfoKHR * vkav1_slot, + StdVideoDecodeAV1ReferenceInfo * stdav1_ref, GstVulkanDecoderPicture ** ref) +{ + GstVulkanAV1Picture *pic = gst_av1_picture_get_user_data (picture); + GstAV1FrameHeaderOBU *fh = &picture->frame_hdr; + guint8 ref_frame_sign_bias = 0; + guint8 i; + + for (i = 0; i < STD_VIDEO_AV1_NUM_REF_FRAMES; i++) + ref_frame_sign_bias |= (fh->ref_frame_sign_bias[i] <= 0) << i; + + /* *INDENT-OFF* */ + *stdav1_ref = (StdVideoDecodeAV1ReferenceInfo) { + .flags = (StdVideoDecodeAV1ReferenceInfoFlags) { + .disable_frame_end_update_cdf = fh->disable_frame_end_update_cdf, + .segmentation_enabled = fh->segmentation_params.segmentation_enabled, + }, + .frame_type = (StdVideoAV1FrameType)fh->frame_type, + .RefFrameSignBias = ref_frame_sign_bias, + .OrderHint = fh->order_hint, + }; + + /* Fill SavedOrderHints after the designated initializer; assigning the + * compound literal above would otherwise zero it again. */ + for (i = 0; i < STD_VIDEO_AV1_NUM_REF_FRAMES; i++) + stdav1_ref->SavedOrderHints[i] = fh->order_hints[i]; + + *vkav1_slot = (VkVideoDecodeAV1DpbSlotInfoKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_DECODE_AV1_DPB_SLOT_INFO_KHR, + .pStdReferenceInfo = stdav1_ref, + }; + + *res = (VkVideoPictureResourceInfoKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_PICTURE_RESOURCE_INFO_KHR, + .codedExtent = { self->coded_width, self->coded_height }, + .baseArrayLayer = (self->decoder->layered_dpb && self->decoder->dedicated_dpb) ? 
pic->slot_idx : 0, + .imageViewBinding = pic->base.img_view_ref->view, + }; + + *slot = (VkVideoReferenceSlotInfoKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_REFERENCE_SLOT_INFO_KHR, + .pNext = vkav1_slot, + .slotIndex = pic->slot_idx, + .pPictureResource = res, + }; + /* *INDENT-ON* */ + + if (ref) + *ref = &pic->base; + + GST_TRACE_OBJECT (self, "0x%" G_GUINT64_FORMAT "x slotIndex: %d", + res->imageViewBinding, slot->slotIndex); +} + +static gint32 +_find_next_slot_idx (GstVulkanAV1Decoder * self) +{ + gint32 i; + g_return_val_if_fail (self != NULL, -1); + + + for (i = 0; i < self->dpb_size; i++) + if (!(self->free_slot_mask & (1 << i))) { + // Mark as used. + self->free_slot_mask |= (1 << i); + return i; + } + + GST_ERROR_OBJECT (self, + "Failed to find free DPB slot (dpb_size=%d, free_mask=0x%08x)", + self->dpb_size, self->free_slot_mask); + return -1; +} + +static inline guint8 +gst_vulkan_av1_dec_get_lr_unit_size (guint size) +{ + switch (size) { + case 32: + return 0; + case 64: + return 1; + case 128: + return 2; + case 256: + return 3; + default: + break; + } + + return 3; +} + +static GstFlowReturn +gst_vulkan_av1_decoder_start_picture (GstAV1Decoder * decoder, + GstAV1Picture * picture, GstAV1Dpb * dpb) +{ + + GstVulkanAV1Decoder *self = GST_VULKAN_AV1_DECODER (decoder); + GstAV1FrameHeaderOBU *fh = &picture->frame_hdr; + GstAV1QuantizationParams *qp = &fh->quantization_params; + GstAV1LoopFilterParams *lf = &fh->loop_filter_params; + GstAV1SegmenationParams *seg = &fh->segmentation_params; + GstAV1LoopRestorationParams *lr = &fh->loop_restoration_params; + GstAV1TileInfo *ti = &fh->tile_info; + GstAV1CDEFParams *cdef = &fh->cdef_params; + GstAV1FilmGrainParams *fg = &fh->film_grain_params; + GstAV1GlobalMotionParams *gm = &fh->global_motion_params; + GstVulkanAV1Picture *pic = gst_av1_picture_get_user_data (picture); + guint num_refs = 0; + guint i, j; + + GST_TRACE_OBJECT (self, "Start picture"); + + /* *INDENT-OFF* */ + pic->tile_info = 
(StdVideoAV1TileInfo) { + .flags = (StdVideoAV1TileInfoFlags) { + .uniform_tile_spacing_flag = ti->uniform_tile_spacing_flag, + }, + .TileCols = ti->tile_cols, + .TileRows = ti->tile_rows, + .context_update_tile_id = ti->context_update_tile_id, + .tile_size_bytes_minus_1 = ti->tile_size_bytes_minus_1, + .pWidthInSbsMinus1 = pic->width_in_sbs_minus1, + .pHeightInSbsMinus1 = pic->height_in_sbs_minus1, + .pMiColStarts = pic->mi_col_starts, + .pMiRowStarts = pic->mi_row_starts, + }; + /* *INDENT-ON* */ + + for (guint i = 0; i < 64; i++) { + pic->width_in_sbs_minus1[i] = ti->width_in_sbs_minus_1[i]; + pic->height_in_sbs_minus1[i] = ti->height_in_sbs_minus_1[i]; + pic->mi_col_starts[i] = ti->mi_col_starts[i]; + pic->mi_row_starts[i] = ti->mi_row_starts[i]; + } + /* *INDENT-OFF* */ + pic->quantization = (StdVideoAV1Quantization) { + .flags = (StdVideoAV1QuantizationFlags) { + .diff_uv_delta = qp->diff_uv_delta, + .using_qmatrix = qp->using_qmatrix, + }, + .base_q_idx = qp->base_q_idx, + .DeltaQYDc = qp->delta_q_y_dc, + .DeltaQUDc = qp->delta_q_u_dc, + .DeltaQUAc = qp->delta_q_u_ac, + .DeltaQVDc = qp->delta_q_v_dc, + .DeltaQVAc = qp->delta_q_v_ac, + .qm_y = qp->qm_y, + .qm_u = qp->qm_u, + .qm_v = qp->qm_v, + }; + + pic->loop_filter = (StdVideoAV1LoopFilter) { + .flags = (StdVideoAV1LoopFilterFlags) { + .loop_filter_delta_enabled = lf->loop_filter_delta_enabled, + .loop_filter_delta_update = lf->loop_filter_delta_update, + }, + .loop_filter_sharpness = lf->loop_filter_sharpness, + }; + /* *INDENT-ON* */ + + for (i = 0; i < STD_VIDEO_AV1_TOTAL_REFS_PER_FRAME; i++) + pic->loop_filter.loop_filter_ref_deltas[i] = lf->loop_filter_ref_deltas[i]; + + for (i = 0; i < STD_VIDEO_AV1_LOOP_FILTER_ADJUSTMENTS; i++) + pic->loop_filter.loop_filter_mode_deltas[i] = + lf->loop_filter_mode_deltas[i]; + + for (i = 0; i < STD_VIDEO_AV1_MAX_LOOP_FILTER_STRENGTHS; i++) + pic->loop_filter.loop_filter_level[i] = lf->loop_filter_level[i]; + + /* *INDENT-OFF* */ + pic->cdef = (StdVideoAV1CDEF) { + .cdef_damping_minus_3 = 
cdef->cdef_damping - 3, + .cdef_bits = cdef->cdef_bits, + }; + /* *INDENT-ON* */ + + for (i = 0; i < STD_VIDEO_AV1_MAX_CDEF_FILTER_STRENGTHS; i++) { + pic->cdef.cdef_y_pri_strength[i] = cdef->cdef_y_pri_strength[i]; + // Trick from gstnvav1dec.c + pic->cdef.cdef_y_sec_strength[i] = + cdef->cdef_y_sec_strength[i] == 4 ? 3 : cdef->cdef_y_sec_strength[i]; + pic->cdef.cdef_uv_pri_strength[i] = cdef->cdef_uv_pri_strength[i]; + // Trick from gstnvav1dec.c + pic->cdef.cdef_uv_sec_strength[i] = + cdef->cdef_uv_sec_strength[i] == 4 ? 3 : cdef->cdef_uv_sec_strength[i]; + } + + for (i = 0; i < 3; i++) { + pic->loop_restoration.FrameRestorationType[i] = + (StdVideoAV1FrameRestorationType) lr->frame_restoration_type[i]; + pic->loop_restoration.LoopRestorationSize[i] = + gst_vulkan_av1_dec_get_lr_unit_size (lr->loop_restoration_size[i]); + } + + for (i = 0; i < GST_AV1_MAX_SEGMENTS; i++) { + pic->segmentation.FeatureEnabled[i] = 0; + for (j = 0; j < GST_AV1_SEG_LVL_MAX; j++) { + pic->segmentation.FeatureEnabled[i] |= seg->feature_enabled[i][j] << j; + pic->segmentation.FeatureData[i][j] = seg->feature_data[i][j]; + } + } + /* *INDENT-OFF* */ + pic->film_grain = (StdVideoAV1FilmGrain) { + .flags = (StdVideoAV1FilmGrainFlags) { + .chroma_scaling_from_luma = fg->chroma_scaling_from_luma, + .overlap_flag = fg->overlap_flag, + .clip_to_restricted_range = fg->clip_to_restricted_range, + }, + .grain_scaling_minus_8 = fg->grain_scaling_minus_8, + .ar_coeff_lag = fg->ar_coeff_lag, + .ar_coeff_shift_minus_6 = fg->ar_coeff_shift_minus_6, + .grain_scale_shift = fg->grain_scale_shift, + .grain_seed = fg->grain_seed, + .film_grain_params_ref_idx = fg->film_grain_params_ref_idx, + .num_y_points = fg->num_y_points, + .num_cb_points = fg->num_cb_points, + .num_cr_points = fg->num_cr_points, + .cb_mult = fg->cb_mult, + .cb_luma_mult = fg->cb_luma_mult, + .cb_offset = fg->cb_offset, + .cr_mult = fg->cr_mult, + .cr_luma_mult = fg->cr_luma_mult, + .cr_offset = fg->cr_offset, + }; + /* *INDENT-ON* */ + + if (fg->apply_grain) { + for (i = 0; 
i < STD_VIDEO_AV1_MAX_NUM_Y_POINTS; i++) { + pic->film_grain.point_y_value[i] = fg->point_y_value[i]; + pic->film_grain.point_y_scaling[i] = fg->point_y_scaling[i]; + } + + for (i = 0; i < STD_VIDEO_AV1_MAX_NUM_CB_POINTS; i++) { + pic->film_grain.point_cb_value[i] = fg->point_cb_value[i]; + pic->film_grain.point_cb_scaling[i] = fg->point_cb_scaling[i]; + pic->film_grain.point_cr_value[i] = fg->point_cr_value[i]; + pic->film_grain.point_cr_scaling[i] = fg->point_cr_scaling[i]; + } + + for (i = 0; i < STD_VIDEO_AV1_MAX_NUM_POS_LUMA; i++) + pic->film_grain.ar_coeffs_y_plus_128[i] = fg->ar_coeffs_y_plus_128[i]; + + for (i = 0; i < STD_VIDEO_AV1_MAX_NUM_POS_CHROMA; i++) { + pic->film_grain.ar_coeffs_cb_plus_128[i] = fg->ar_coeffs_cb_plus_128[i]; + pic->film_grain.ar_coeffs_cr_plus_128[i] = fg->ar_coeffs_cr_plus_128[i]; + } + } + + for (i = 0; i < 8; i++) { + pic->global_motion.GmType[i] = gm->gm_type[i]; + for (j = 0; j < STD_VIDEO_AV1_GLOBAL_MOTION_PARAMS; j++) { + pic->global_motion.gm_params[i][j] = gm->gm_params[i][j]; + } + } + /* *INDENT-OFF* */ + pic->std_av1pic = (StdVideoDecodeAV1PictureInfo) { + .flags = (StdVideoDecodeAV1PictureInfoFlags){ + .error_resilient_mode = fh->error_resilient_mode, + .disable_cdf_update = fh->disable_cdf_update, + .use_superres = fh->use_superres, + .render_and_frame_size_different = fh->render_and_frame_size_different, + .allow_screen_content_tools = fh->allow_screen_content_tools, + .is_filter_switchable = fh->is_filter_switchable, + .force_integer_mv = fh->force_integer_mv, + .frame_size_override_flag = fh->frame_size_override_flag, + .buffer_removal_time_present_flag = fh->buffer_removal_time_present_flag, + .allow_intrabc = fh->allow_intrabc, + .frame_refs_short_signaling = fh->frame_refs_short_signaling, + .allow_high_precision_mv = fh->allow_high_precision_mv, + .is_motion_mode_switchable = fh->is_motion_mode_switchable, + .use_ref_frame_mvs = fh->use_ref_frame_mvs, + .disable_frame_end_update_cdf = fh->disable_frame_end_update_cdf, + .allow_warped_motion = 
fh->allow_warped_motion, + .reduced_tx_set = fh->reduced_tx_set, + .reference_select = fh->reference_select, + .skip_mode_present = fh->skip_mode_present, + .delta_q_present = qp->delta_q_present, + .delta_lf_present = lf->delta_lf_present, + .delta_lf_multi = lf->delta_lf_multi, + .segmentation_enabled = seg->segmentation_enabled, + .segmentation_update_map = seg->segmentation_update_map, + .segmentation_temporal_update = seg->segmentation_temporal_update, + .segmentation_update_data = seg->segmentation_update_data, + .UsesLr = lr->uses_lr, + }, + .frame_type = (StdVideoAV1FrameType)fh->frame_type, + .current_frame_id = fh->current_frame_id, + .OrderHint = fh->order_hint, + .primary_ref_frame = fh->primary_ref_frame, + .refresh_frame_flags = fh->refresh_frame_flags, + .interpolation_filter = + (StdVideoAV1InterpolationFilter) fh->interpolation_filter, + .TxMode = (StdVideoAV1TxMode) fh->tx_mode, + .delta_q_res = qp->delta_q_res, + .delta_lf_res = lf->delta_lf_res, + .SkipModeFrame = { fh->skip_mode_frame[0], fh->skip_mode_frame[1], }, + .coded_denom = fh->use_superres ? fh->superres_denom - 9 : 0, + /* .OrderHints (filled below) */ + .pTileInfo = &pic->tile_info, + .pQuantization = &pic->quantization, + .pSegmentation = &pic->segmentation, + .pLoopFilter = &pic->loop_filter, + .pCDEF = &pic->cdef, + .pLoopRestoration = &pic->loop_restoration, + .pGlobalMotion = &pic->global_motion, + .pFilmGrain = &pic->film_grain, + }; + /* *INDENT-ON* */ + + for (i = 0; i < VK_MAX_VIDEO_AV1_REFERENCES_PER_FRAME_KHR; i++) + pic->std_av1pic.OrderHints[i] = fh->order_hints[i]; + + /* *INDENT-OFF* */ + pic->vk_av1pic = (VkVideoDecodeAV1PictureInfoKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_DECODE_AV1_PICTURE_INFO_KHR, + .pStdPictureInfo = &pic->std_av1pic, + .frameHeaderOffset = 0, /* ?? 
*/ + /* + * Filled in end_picture(): + * + * uint32_t tileCount; + * const uint32_t* pTileOffsets; + * const uint32_t* pTileSizes; + */ + }; + /* *INDENT-ON* */ + + for (i = 0; i < VK_MAX_VIDEO_AV1_REFERENCES_PER_FRAME_KHR; i++) { + gint ref_idx = fh->ref_frame_idx[i]; + if (ref_idx >= 0) { + GstAV1Picture *ref_pic = dpb->pic_list[ref_idx]; + if (ref_pic) { + GstVulkanAV1Picture *ref_vk_pic = + gst_av1_picture_get_user_data (ref_pic); + + pic->vk_av1pic.referenceNameSlotIndices[i] = ref_vk_pic->slot_idx; + } + } else { + pic->vk_av1pic.referenceNameSlotIndices[i] = -1; + } + } + + pic->slot_idx = _find_next_slot_idx (self); + if (pic->slot_idx < 0) { + GST_ERROR_OBJECT (self, "No free DPB slots available"); + return GST_FLOW_ERROR; + } + /* fill main slot */ + _fill_ref_slot (self, picture, &pic->base.slot, &pic->base.pic_res, + &pic->vk_slot, &pic->std_ref, NULL); + + for (i = 0; i < VK_MAX_VIDEO_AV1_REFERENCES_PER_FRAME_KHR; i++) { + gint ref_idx = fh->ref_frame_idx[i]; + if (ref_idx >= 0) { + GstAV1Picture *ref_pic = dpb->pic_list[ref_idx]; + int found = 0; + + if (ref_pic) { + GstVulkanAV1Picture *ref_vk_pic = + gst_av1_picture_get_user_data (ref_pic); + + for (j = 0; j < num_refs; j++) { + if (pic->base.slots[j].slotIndex == ref_vk_pic->slot_idx) { + found = 1; + break; + } + } + + if (found) + continue; + + _fill_ref_slot (self, ref_pic, &pic->base.slots[num_refs], + &pic->base.pics_res[num_refs], &pic->vk_slots[num_refs], + &pic->std_refs[num_refs], &pic->base.refs[num_refs]); + + num_refs++; + } + } + } + + /* *INDENT-OFF* */ + pic->base.decode_info = (VkVideoDecodeInfoKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_DECODE_INFO_KHR, + .pNext = &pic->vk_av1pic, + .flags = 0x0, + .pSetupReferenceSlot = &pic->base.slot, + .referenceSlotCount = num_refs, + .pReferenceSlots = (const VkVideoReferenceSlotInfoKHR *) &pic->base.slots, + .dstPictureResource = { + .sType = VK_STRUCTURE_TYPE_VIDEO_PICTURE_RESOURCE_INFO_KHR, + // .codedOffset = {0, 0} /* is there any cropping rectangle in AV1? 
*/ + .codedExtent = { self->coded_width, self->coded_height }, + .baseArrayLayer = 0, + .imageViewBinding = pic->base.img_view_out->view, + }, + }; + /* *INDENT-ON* */ + + self->resolution_changed = FALSE; + + /* only wait if there's a buffer processed */ + if (GST_CODEC_PICTURE_FRAME_NUMBER (picture) > 0) { + if (!gst_vulkan_decoder_wait (self->decoder)) { + GST_ERROR_OBJECT (self, "Error while waiting for decoding operation to end"); + return GST_FLOW_ERROR; + } + } + + return GST_FLOW_OK; +} + +static GstFlowReturn +gst_vulkan_av1_decoder_decode_tile (GstAV1Decoder * decoder, + GstAV1Picture * picture, GstAV1Tile * tile) +{ + GstVulkanAV1Decoder *self = GST_VULKAN_AV1_DECODER (decoder); + GstAV1TileGroupOBU *tile_group = &tile->tile_group; + GstVulkanAV1Picture *pic; + guint i; + + GST_TRACE_OBJECT (self, "Decode tile"); + + pic = gst_av1_picture_get_user_data (picture); + g_assert (pic); + + if (!gst_vulkan_decoder_append_slice (self->decoder, &pic->base, + tile->obu.data, tile->obu.obu_size, FALSE)) + return GST_FLOW_ERROR; + + for (i = tile_group->tg_start; i <= tile_group->tg_end; i++) { + guint32 offset = tile_group->entry[i].tile_offset + pic->tile_data_sz; + + g_array_append_val (pic->tile_sizes, tile_group->entry[i].tile_size); + g_array_append_val (pic->tile_offsets, offset); + pic->num_tiles++; + } + + pic->tile_data_sz += tile->obu.obu_size; + + return GST_FLOW_OK; +} + +static GstFlowReturn +gst_vulkan_av1_decoder_end_picture (GstAV1Decoder * decoder, + GstAV1Picture * picture) +{ + GstVulkanAV1Decoder *self = GST_VULKAN_AV1_DECODER (decoder); + GstVulkanAV1Picture *pic; + GError *error = NULL; + VkVideoDecodeAV1InlineSessionParametersInfoKHR inline_params = { + .sType = + VK_STRUCTURE_TYPE_VIDEO_DECODE_AV1_INLINE_SESSION_PARAMETERS_INFO_KHR, + .pStdSequenceHeader = &self->vk.sequence, + }; + + GST_TRACE_OBJECT (self, "End picture"); + + pic = gst_av1_picture_get_user_data (picture); + g_assert (pic); + + if (pic->base.slice_offs->len == 0) + return 
GST_FLOW_OK; + + pic->vk_av1pic.pTileOffsets = &g_array_index (pic->tile_offsets, guint32, 0); + pic->vk_av1pic.tileCount = pic->num_tiles; + pic->vk_av1pic.pTileSizes = &g_array_index (pic->tile_sizes, guint32, 0); + + if (gst_vulkan_decoder_has_feature (self->decoder, + GST_VULKAN_DECODER_FEATURE_INLINE_PARAMS)) + vk_link_struct (&pic->base.decode_info, &inline_params); + + GST_LOG_OBJECT (self, "Decoding frame, %d", picture->display_frame_id); + + if (!gst_vulkan_decoder_decode (self->decoder, &pic->base, &error)) { + GST_ERROR_OBJECT (self, "Couldn't decode frame: %s", + error ? error->message : ""); + g_clear_error (&error); + return GST_FLOW_ERROR; + } + + return GST_FLOW_OK; +} + +static GstFlowReturn +gst_vulkan_av1_decoder_output_picture (GstAV1Decoder * decoder, + GstVideoCodecFrame * frame, GstAV1Picture * picture) +{ + GstVideoDecoder *vdec = GST_VIDEO_DECODER (decoder); + GstVulkanAV1Decoder *self = GST_VULKAN_AV1_DECODER (decoder); + + GST_TRACE_OBJECT (self, "Output picture"); + + GST_LOG_OBJECT (self, + "Outputting picture %p (poc %d)", picture, picture->display_frame_id); + + if (GST_CODEC_PICTURE (picture)->discont_state) { + self->need_negotiation = TRUE; + if (!gst_video_decoder_negotiate (vdec)) { + gst_av1_picture_unref (picture); + GST_ERROR_OBJECT (self, "Could not re-negotiate with updated state"); + return GST_FLOW_ERROR; + } + } + + gst_av1_picture_unref (picture); + + return gst_video_decoder_finish_frame (vdec, frame); +} + +static GstAV1Picture * +gst_vulkan_av1_decoder_duplicate_picture (GstAV1Decoder * decoder, + GstVideoCodecFrame * frame, GstAV1Picture * picture) +{ + GstVulkanAV1Decoder *self = GST_VULKAN_AV1_DECODER (decoder); + GstVulkanAV1Picture *pic, *new_pic; + GstAV1Picture *new_picture; + + pic = gst_av1_picture_get_user_data (picture); + if (!pic) { + GST_ERROR_OBJECT (self, "Parent picture does not have a vulkan picture"); + return NULL; + } + + new_picture = gst_av1_picture_new (); + new_picture->frame_hdr = 
picture->frame_hdr; + new_pic = gst_vulkan_av1_picture_new (self, pic->base.out); + + frame->output_buffer = gst_buffer_ref (new_pic->base.out); + + GST_LOG_OBJECT (self, "Duplicate output with buffer %" GST_PTR_FORMAT, pic); + + gst_av1_picture_set_user_data (new_picture, new_pic, + gst_vulkan_av1_picture_free); + + return new_picture; +} + +static void +gst_vulkan_av1_decoder_init (GTypeInstance * instance, gpointer klass) +{ + gst_vulkan_buffer_memory_init_once (); +} + +struct CData +{ + gchar *description; + gint device_index; + GstCaps *codec; + GstCaps *raw; +}; + +static void +gst_vulkan_av1_decoder_class_init (gpointer klass, gpointer class_data) +{ + GstElementClass *element_class = GST_ELEMENT_CLASS (klass); + GstVideoDecoderClass *decoder_class = GST_VIDEO_DECODER_CLASS (klass); + GstAV1DecoderClass *av1decoder_class = GST_AV1_DECODER_CLASS (klass); + GstVulkanAV1DecoderClass *vk_av1decoder_class = + GST_VULKAN_AV1_DECODER_CLASS (klass); + struct CData *cdata = class_data; + gchar *long_name; + const gchar *name; + GstPadTemplate *sink_pad_template, *src_pad_template; + GstCaps *sink_doc_caps, *src_doc_caps; + + name = "Vulkan AV1 decoder"; + if (cdata->description) + long_name = g_strdup_printf ("%s on %s", name, cdata->description); + else + long_name = g_strdup (name); + + vk_av1decoder_class->device_index = cdata->device_index; + + gst_element_class_set_metadata (element_class, long_name, + "Codec/Decoder/Video/Hardware", "An AV1 video decoder based on Vulkan", + "Daniel Almeida <daniel.almeida@collabora.com>"); + + parent_class = g_type_class_peek_parent (klass); + + sink_doc_caps = gst_caps_from_string ("video/x-av1, " + "profile = (string) { main, high }, alignment = (string) frame"); + src_doc_caps = + gst_caps_from_string (GST_VIDEO_CAPS_MAKE_WITH_FEATURES + (GST_CAPS_FEATURE_MEMORY_VULKAN_IMAGE, "NV12")); + + sink_pad_template = + gst_pad_template_new ("sink", GST_PAD_SINK, GST_PAD_ALWAYS, cdata->codec); + gst_element_class_add_pad_template 
(element_class, sink_pad_template); + + src_pad_template = + gst_pad_template_new ("src", GST_PAD_SRC, GST_PAD_ALWAYS, cdata->raw); + gst_element_class_add_pad_template (element_class, src_pad_template); + + gst_pad_template_set_documentation_caps (sink_pad_template, sink_doc_caps); + gst_caps_unref (sink_doc_caps); + + gst_pad_template_set_documentation_caps (src_pad_template, src_doc_caps); + gst_caps_unref (src_doc_caps); + + element_class->set_context = + GST_DEBUG_FUNCPTR (gst_vulkan_av1_decoder_set_context); + + decoder_class->src_query = + GST_DEBUG_FUNCPTR (gst_vulkan_av1_decoder_src_query); + decoder_class->sink_query = + GST_DEBUG_FUNCPTR (gst_vulkan_av1_decoder_sink_query); + decoder_class->open = GST_DEBUG_FUNCPTR (gst_vulkan_av1_decoder_open); + decoder_class->close = GST_DEBUG_FUNCPTR (gst_vulkan_av1_decoder_close); + decoder_class->stop = GST_DEBUG_FUNCPTR (gst_vulkan_av1_decoder_stop); + decoder_class->negotiate = + GST_DEBUG_FUNCPTR (gst_vulkan_av1_decoder_negotiate); + decoder_class->decide_allocation = + GST_DEBUG_FUNCPTR (gst_vulkan_av1_decoder_decide_allocation); + + av1decoder_class->new_sequence = + GST_DEBUG_FUNCPTR (gst_vulkan_av1_decoder_new_sequence); + av1decoder_class->new_picture = + GST_DEBUG_FUNCPTR (gst_vulkan_av1_decoder_new_picture); + av1decoder_class->start_picture = + GST_DEBUG_FUNCPTR (gst_vulkan_av1_decoder_start_picture); + + av1decoder_class->decode_tile = + GST_DEBUG_FUNCPTR (gst_vulkan_av1_decoder_decode_tile); + av1decoder_class->end_picture = + GST_DEBUG_FUNCPTR (gst_vulkan_av1_decoder_end_picture); + av1decoder_class->output_picture = + GST_DEBUG_FUNCPTR (gst_vulkan_av1_decoder_output_picture); + av1decoder_class->duplicate_picture = + GST_DEBUG_FUNCPTR (gst_vulkan_av1_decoder_duplicate_picture); + + g_free (long_name); + g_free (cdata->description); + g_free (cdata); +} + +gboolean +gst_vulkan_av1_decoder_register (GstPlugin * plugin, GstVulkanDevice * device, + guint rank) +{ + static GOnce debug_once = G_ONCE_INIT; 
+ GType type; + GTypeInfo type_info = { + .class_size = sizeof (GstVulkanAV1DecoderClass), + .class_init = gst_vulkan_av1_decoder_class_init, + .instance_size = sizeof (GstVulkanAV1Decoder), + .instance_init = gst_vulkan_av1_decoder_init, + }; + struct CData *cdata; + gboolean ret; + gchar *type_name, *feature_name; + GstCaps *codec = NULL, *raw = NULL; + + g_return_val_if_fail (GST_IS_PLUGIN (plugin), FALSE); + g_return_val_if_fail (GST_IS_VULKAN_DEVICE (device), FALSE); + + if (!gst_vulkan_physical_device_codec_caps (device->physical_device, + VK_VIDEO_CODEC_OPERATION_DECODE_AV1_BIT_KHR, &codec, &raw)) { + gst_plugin_add_status_warning (plugin, + "Unable to query AV1 decoder properties"); + return FALSE; + } + + cdata = g_new (struct CData, 1); + cdata->description = NULL; + cdata->device_index = device->physical_device->device_index; + cdata->codec = codec; + cdata->raw = raw; + + /* class data will be leaked if the element never gets instantiated */ + GST_MINI_OBJECT_FLAG_SET (cdata->codec, GST_MINI_OBJECT_FLAG_MAY_BE_LEAKED); + GST_MINI_OBJECT_FLAG_SET (cdata->raw, GST_MINI_OBJECT_FLAG_MAY_BE_LEAKED); + + gst_vulkan_create_feature_name (device, "GstVulkanAV1Decoder", + "GstVulkanAV1Device%dDecoder", &type_name, "vulkanav1dec", + "vulkanav1device%ddec", &feature_name, &cdata->description, &rank); + + type_info.class_data = cdata; + + g_once (&debug_once, _register_debug_category, NULL); + type = + g_type_register_static (GST_TYPE_AV1_DECODER, type_name, &type_info, 0); + + ret = gst_element_register (plugin, feature_name, rank, type); + + g_free (type_name); + g_free (feature_name); + + return ret; +}
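The DPB slot bookkeeping above (`_find_next_slot_idx` paired with the release in `gst_vulkan_av1_picture_free`) is a plain first-fit bitmask allocator: each decoded-picture-buffer slot is one bit in `free_slot_mask`, set on acquire and cleared when the picture holding it is freed. A minimal standalone sketch of the same idea (function names here are illustrative, not GStreamer API):

```c
#include <assert.h>
#include <stdint.h>

/* Acquire the lowest unused slot in [0, dpb_size): scan the mask from bit 0,
 * mark the first clear bit as used and return its index; -1 when full. */
static int
slot_acquire (uint32_t * mask, int dpb_size)
{
  int i;

  for (i = 0; i < dpb_size; i++) {
    if (!(*mask & (1u << i))) {
      *mask |= (1u << i);
      return i;
    }
  }
  return -1;                    /* no free DPB slot */
}

/* Release mirrors the decoder's picture-free path: clear the bit so the
 * slot can be handed out to a later picture. */
static void
slot_release (uint32_t * mask, int slot)
{
  if (slot >= 0)
    *mask &= ~(1u << slot);
}
```

Because slots are recycled in lowest-index-first order, a freshly released slot is reused before a higher index is ever touched, which keeps the layered-DPB array layers densely packed.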
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/vulkan/vkav1dec.h
Added
@@ -0,0 +1,31 @@ +/* GStreamer + * Copyright (C) 2025 Collabora, Ltd. + * Copyright (C) 2025 Igalia, S.L. + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/vulkan/vulkan.h> + +G_BEGIN_DECLS + +gboolean gst_vulkan_av1_decoder_register (GstPlugin * plugin, + GstVulkanDevice *device, + guint rank); + +G_END_DECLS
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/vulkan/vkcolorconvert.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/vulkan/vkcolorconvert.c
Changed
@@ -706,7 +706,7 @@ if (sinfo->user_data) { return gst_memory_ref (sinfo->user_data); } else { - struct YUVUpdateData data; + struct YUVUpdateData data = { 0 }; ConvertInfo *conv_info; GstMapInfo map_info; GstMemory *uniforms; @@ -899,8 +899,8 @@ gstelement_class = (GstElementClass *) klass; gstbasetransform_class = (GstBaseTransformClass *) klass; - gst_element_class_set_metadata (gstelement_class, "Vulkan Color Convert", - "Filter/Video/Convert", "A Vulkan Color Convert", + gst_element_class_set_static_metadata (gstelement_class, + "Vulkan Color Convert", "Filter/Video/Convert", "A Vulkan Color Convert", "Matthew Waters <matthew@centricular.com>"); gst_element_class_add_static_pad_template (gstelement_class,
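The first hunk in this file changes a stack-allocated `struct YUVUpdateData data;` to `struct YUVUpdateData data = { 0 };` before it is filled and copied into a uniform buffer. The point of the change is a general C rule: `= { 0 }` initializes every member (members without an explicit initializer get the value static objects would), so no indeterminate stack bytes leak into the GPU upload. A small illustration with a made-up struct, not the real `YUVUpdateData` layout:

```c
#include <assert.h>

/* Hypothetical uniform-style struct; stands in for YUVUpdateData. */
struct params
{
  int width;
  int height;
  float matrix[4];
};

/* With "= { 0 }" every member is zero, including array elements that were
 * never named in the initializer (C11 6.7.9p21). */
static int
all_members_zero (void)
{
  struct params p = { 0 };

  return p.width == 0 && p.height == 0
      && p.matrix[0] == 0.0f && p.matrix[3] == 0.0f;
}
```

Without the initializer, only the members the code later assigns would be defined; any field the shader reads but the CPU side forgot to set would contain stack garbage, which is exactly the class of bug the `{ 0 }` fix removes.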
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/vulkan/vkdownload.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/vulkan/vkdownload.c
Changed
@@ -32,6 +32,7 @@ #include <string.h> #include "gstvulkanelements.h" +#include "gstvkutils.h" #include "vkdownload.h" GST_DEBUG_CATEGORY (gst_debug_vulkan_download); @@ -83,7 +84,6 @@ GstVideoInfo out_info; GstBufferPool *pool; - gboolean pool_active; GstVulkanOperation *exec; }; @@ -138,6 +138,61 @@ /* FIXME: implement */ } +static gboolean +_image_to_raw_decide_allocation (gpointer impl, GstQuery * query) +{ + struct ImageToRawDownload *raw = impl; + GstStructure *config; + guint min = 1, max = 0, size = 1; + GstCaps *caps; + gboolean update_pool = FALSE; + GstBufferPool *pool = NULL; + + gst_query_parse_allocation (query, &caps, NULL); + if (!caps) + return FALSE; + + if (gst_query_get_n_allocation_pools (query) > 0) { + gst_query_parse_nth_allocation_pool (query, 0, &pool, &size, &min, &max); + if (GST_IS_VULKAN_BUFFER_POOL (pool)) { + update_pool = TRUE; + } else { + gst_clear_object (&pool); + } + } + + /* let's null current pool */ + gst_clear_object (&raw->pool); + + if (!pool) { + pool = gst_vulkan_buffer_pool_new (raw->download->device); + } + + config = gst_buffer_pool_get_config (pool); + + gst_buffer_pool_config_set_params (config, raw->download->out_caps, size, + min, max); + if (gst_query_find_allocation_meta (query, GST_VIDEO_META_API_TYPE, NULL)) { + gst_buffer_pool_config_add_option (config, + GST_BUFFER_POOL_OPTION_VIDEO_META); + } + + if (!gst_buffer_pool_set_config (pool, config)) { + gst_clear_object (&pool); + GST_ERROR_OBJECT (raw->download, "Failed to set buffer pool config"); + return FALSE; + } + + if (update_pool) + gst_query_set_nth_allocation_pool (query, 0, pool, size, min, max); + else + gst_query_add_allocation_pool (query, pool, size, min, max); + + raw->pool = pool; + + return TRUE; +} + static GstFlowReturn _image_to_raw_perform (gpointer impl, GstBuffer * inbuf, GstBuffer ** outbuf) { @@ -161,22 +216,13 @@ } if (!raw->pool) { - GstStructure *config; - guint min = 0, max = 0; - gsize size = 1; - - raw->pool = 
gst_vulkan_buffer_pool_new (raw->download->device); - config = gst_buffer_pool_get_config (raw->pool); - gst_buffer_pool_config_set_params (config, raw->download->out_caps, size, - min, max); - if (!gst_buffer_pool_set_config (raw->pool, config)) { - gst_clear_object (&raw->pool); - return GST_FLOW_ERROR; - } + GST_ERROR_OBJECT (raw->download, "No pool found."); + goto error; } - if (!raw->pool_active) { - gst_buffer_pool_set_active (raw->pool, TRUE); - raw->pool_active = TRUE; + + if (!gst_buffer_pool_set_active (raw->pool, TRUE)) { + GST_ERROR_OBJECT (raw->download, "Couldn't activate pool."); + goto error; } if ((ret = @@ -236,49 +282,60 @@ for (i = 0; i < n_planes; i++) { VkBufferImageCopy region; - GstMemory *out_mem; + GstMemory *mem; GstVulkanBufferMemory *buf_mem; GstVulkanImageMemory *img_mem; - gint idx; const VkImageAspectFlags aspects[] = { VK_IMAGE_ASPECT_PLANE_0_BIT, VK_IMAGE_ASPECT_PLANE_1_BIT, VK_IMAGE_ASPECT_PLANE_2_BIT, }; VkImageAspectFlags plane_aspect; + guint32 width, height, row, img_h; - idx = MIN (i, n_mems - 1); - img_mem = (GstVulkanImageMemory *) gst_buffer_peek_memory (inbuf, idx); + mem = gst_vulkan_buffer_peek_plane_memory (inbuf, &raw->in_info, i); + if (!mem) + goto unlock_error; + if (!gst_is_vulkan_image_memory (mem)) { + GST_WARNING_OBJECT (raw->download, "Input buffer is not a Vulkan image"); + goto unlock_error; + } + img_mem = (GstVulkanImageMemory *) mem; - out_mem = gst_buffer_peek_memory (*outbuf, i); - if (!gst_is_vulkan_buffer_memory (out_mem)) { + mem = gst_vulkan_buffer_peek_plane_memory (*outbuf, &raw->out_info, i); + if (!mem) + goto unlock_error; + if (!gst_is_vulkan_buffer_memory (mem)) { GST_WARNING_OBJECT (raw->download, - "Output is not a GstVulkanBufferMemory"); + "Output buffer is not a Vulkan buffer"); goto unlock_error; } - buf_mem = (GstVulkanBufferMemory *) out_mem; + buf_mem = (GstVulkanBufferMemory *) mem; if (n_planes == n_mems) plane_aspect = VK_IMAGE_ASPECT_COLOR_BIT; else plane_aspect = aspects[i]; + 
gst_vulkan_buffer_get_plane_dimensions (inbuf, &raw->in_info, i, &width, + &height, &row, &img_h); + /* *INDENT-OFF* */ region = (VkBufferImageCopy) { - .bufferOffset = 0, - .bufferRowLength = GST_VIDEO_INFO_COMP_WIDTH (&raw->in_info, i), - .bufferImageHeight = GST_VIDEO_INFO_COMP_HEIGHT (&raw->in_info, i), - .imageSubresource = { - /* XXX: each plane is a buffer */ - .aspectMask = plane_aspect, - .mipLevel = 0, - .baseArrayLayer = 0, - .layerCount = 1, - }, - .imageOffset = { .x = 0, .y = 0, .z = 0, }, - .imageExtent = { - .width = GST_VIDEO_INFO_COMP_WIDTH (&raw->out_info, i), - .height = GST_VIDEO_INFO_COMP_HEIGHT (&raw->out_info, i), - .depth = 1, - } + .bufferOffset = 0, + .bufferRowLength = row, + .bufferImageHeight = img_h, + .imageSubresource = { + /* XXX: each plane is a buffer */ + .aspectMask = plane_aspect, + .mipLevel = 0, + .baseArrayLayer = 0, + .layerCount = 1, + }, + .imageOffset = { .x = 0, .y = 0, .z = 0, }, + .imageExtent = { + .width = width, + .height = height, + .depth = 1, + } }; /* *INDENT-ON* */ @@ -321,10 +378,7 @@ struct ImageToRawDownload *raw = impl; if (raw->pool) { - if (raw->pool_active) { - gst_buffer_pool_set_active (raw->pool, FALSE); - } - raw->pool_active = FALSE; + gst_buffer_pool_set_active (raw->pool, FALSE); gst_object_unref (raw->pool); raw->pool = NULL; } @@ -346,6 +400,7 @@ _image_to_raw_transform_caps, _image_to_raw_set_caps, _image_to_raw_propose_allocation, + _image_to_raw_decide_allocation, _image_to_raw_perform, _image_to_raw_free, }; @@ -441,7 +496,7 @@ gstelement_class = (GstElementClass *) klass; gstbasetransform_class = (GstBaseTransformClass *) klass; - gst_element_class_set_metadata (gstelement_class, "Vulkan Downloader", + gst_element_class_set_static_metadata (gstelement_class, "Vulkan Downloader", "Filter/Video", "A Vulkan data downloader", "Matthew Waters <matthew@centricular.com>"); @@ -556,8 +611,8 @@ GstStateChangeReturn ret = GST_STATE_CHANGE_SUCCESS; GST_DEBUG ("changing state: %s => %s", - 
gst_element_state_get_name (GST_STATE_TRANSITION_CURRENT (transition)), - gst_element_state_get_name (GST_STATE_TRANSITION_NEXT (transition))); + gst_state_get_name (GST_STATE_TRANSITION_CURRENT (transition)), + gst_state_get_name (GST_STATE_TRANSITION_NEXT (transition))); switch (transition) { case GST_STATE_CHANGE_NULL_TO_READY: @@ -756,7 +811,40 @@ static gboolean gst_vulkan_download_decide_allocation (GstBaseTransform * bt, GstQuery * query) { - return TRUE; + GstVulkanDownload *vk_download = GST_VULKAN_DOWNLOAD (bt); + guint i; + gboolean ret = TRUE; + + for (i = 0; i < G_N_ELEMENTS (download_methods); i++) { + GstCaps *templ; + gboolean res; + + templ = gst_static_caps_get (download_methods[i]->in_template); + if (!gst_caps_can_intersect (vk_download->in_caps, templ)) { + gst_caps_unref (templ); + continue; + } + gst_caps_unref (templ); + + templ = gst_static_caps_get (download_methods[i]->out_template); + if (!gst_caps_can_intersect (vk_download->out_caps, templ)) { + gst_caps_unref (templ); + continue; + } + gst_caps_unref (templ); + + res = + download_methods[i]->decide_allocation (vk_download->download_impls[i], + query); + + /* if all methods fail, function fails */ + if (i == 0) + ret = res; + else + ret |= res; + } + + return ret; } static gboolean
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/vulkan/vkdownload.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/vulkan/vkdownload.h
Changed
@@ -53,6 +53,8 @@ void (*propose_allocation) (gpointer impl, GstQuery * decide_query, GstQuery * query); + gboolean (*decide_allocation) (gpointer impl, + GstQuery * decide_query); GstFlowReturn (*perform) (gpointer impl, GstBuffer * buffer, GstBuffer ** outbuf);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/vulkan/vkh264dec.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/vulkan/vkh264dec.c
Changed
@@ -24,12 +24,13 @@ #include "vkh264dec.h" #include <gst/video/video.h> -#include <gst/vulkan/vulkan.h> +#include <gst/codecs/gsth264decoder.h> #include "gst/vulkan/gstvkdecoder-private.h" +#include "gst/vulkan/gstvkphysicaldevice-private.h" +#include "gstvkvideocaps.h" #include "gstvulkanelements.h" - GST_DEBUG_CATEGORY_STATIC (gst_vulkan_h264_decoder_debug); #define GST_CAT_DEFAULT gst_vulkan_h264_decoder_debug @@ -43,6 +44,8 @@ { gchar *description; gint device_index; + GstCaps *codec; + GstCaps *raw; }; typedef struct _GstVulkanH264Decoder GstVulkanH264Decoder; @@ -120,18 +123,6 @@ gint device_index; }; -static GstStaticPadTemplate gst_vulkan_h264dec_sink_template = -GST_STATIC_PAD_TEMPLATE ("sink", GST_PAD_SINK, GST_PAD_ALWAYS, - GST_STATIC_CAPS ("video/x-h264, " - "profile = { (string) high, (string) main, (string) constrained-baseline, (string) baseline, (string) extended } ," - "stream-format = { (string) avc, (string) byte-stream }, " - "alignment = (string) au")); - -static GstStaticPadTemplate gst_vulkan_h264dec_src_template = -GST_STATIC_PAD_TEMPLATE ("src", GST_PAD_SRC, GST_PAD_ALWAYS, - GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE_WITH_FEATURES - (GST_CAPS_FEATURE_MEMORY_VULKAN_IMAGE, "NV12"))); - #define gst_vulkan_h264_decoder_parent_class parent_class static gpointer @@ -353,6 +344,12 @@ VkImageUsageFlags usage; GstVulkanVideoCapabilities vk_caps; + if (self->dpb_size == 0) { + return + GST_VIDEO_DECODER_CLASS (parent_class)->decide_allocation (decoder, + query); + } + gst_query_parse_allocation (query, &caps, NULL); if (!caps) return FALSE; @@ -1294,6 +1291,12 @@ GstVulkanH264Decoder *self = GST_VULKAN_H264_DECODER (decoder); GstVulkanH264Picture *pic; GError *error = NULL; + VkVideoDecodeH264InlineSessionParametersInfoKHR inline_params = { + .sType = + VK_STRUCTURE_TYPE_VIDEO_DECODE_H264_INLINE_SESSION_PARAMETERS_INFO_KHR, + .pStdSPS = &self->std_sps.sps, + .pStdPPS = &self->std_pps.pps, + }; GST_TRACE_OBJECT (self, "End picture"); @@ -1303,6 +1306,10 @@ 
pic->vk_h264pic.sliceCount = pic->base.slice_offs->len - 1; pic->vk_h264pic.pSliceOffsets = (const guint32 *) pic->base.slice_offs->data; + if (gst_vulkan_decoder_has_feature (self->decoder, + GST_VULKAN_DECODER_FEATURE_INLINE_PARAMS)) + vk_link_struct (&pic->base.decode_info, &inline_params); + GST_LOG_OBJECT (self, "Decoding frame, %d bytes %d slices", pic->vk_h264pic.pSliceOffsets[pic->vk_h264pic.sliceCount], pic->vk_h264pic.sliceCount); @@ -1368,6 +1375,8 @@ struct CData *cdata = class_data; gchar *long_name; const gchar *name; + GstPadTemplate *sink_pad_template, *src_pad_template; + GstCaps *sink_doc_caps, *src_doc_caps; name = "Vulkan H.264 decoder"; if (cdata->description) @@ -1383,11 +1392,27 @@ parent_class = g_type_class_peek_parent (g_klass); - gst_element_class_add_static_pad_template (element_class, - &gst_vulkan_h264dec_sink_template); + sink_doc_caps = gst_caps_from_string ("video/x-h264, " + "profile = { (string) high, (string) main, (string) constrained-baseline }, " + "stream-format = { (string) avc, (string) byte-stream }, " + "alignment = (string) au"); + src_doc_caps = + gst_caps_from_string (GST_VIDEO_CAPS_MAKE_WITH_FEATURES + (GST_CAPS_FEATURE_MEMORY_VULKAN_IMAGE, "NV12")); + + sink_pad_template = + gst_pad_template_new ("sink", GST_PAD_SINK, GST_PAD_ALWAYS, cdata->codec); + gst_element_class_add_pad_template (element_class, sink_pad_template); - gst_element_class_add_static_pad_template (element_class, - &gst_vulkan_h264dec_src_template); + src_pad_template = + gst_pad_template_new ("src", GST_PAD_SRC, GST_PAD_ALWAYS, cdata->raw); + gst_element_class_add_pad_template (element_class, src_pad_template); + + gst_pad_template_set_documentation_caps (sink_pad_template, sink_doc_caps); + gst_caps_unref (sink_doc_caps); + + gst_pad_template_set_documentation_caps (src_pad_template, src_doc_caps); + gst_caps_unref (src_doc_caps); element_class->set_context = GST_DEBUG_FUNCPTR (gst_vulkan_h264_decoder_set_context); @@ -1421,6 +1446,8 @@ g_free 
(long_name); g_free (cdata->description); + gst_clear_caps (&cdata->codec); + gst_clear_caps (&cdata->raw); g_free (cdata); } @@ -1439,12 +1466,27 @@ struct CData *cdata; gboolean ret; gchar *type_name, *feature_name; + GstCaps *codec = NULL, *raw = NULL; + + g_return_val_if_fail (GST_IS_PLUGIN (plugin), FALSE); + g_return_val_if_fail (GST_IS_VULKAN_DEVICE (device), FALSE); + + if (!gst_vulkan_physical_device_codec_caps (device->physical_device, + VK_VIDEO_CODEC_OPERATION_DECODE_H264_BIT_KHR, &codec, &raw)) { + gst_plugin_add_status_warning (plugin, + "Unable to query H.264 decoder properties"); + return FALSE; + } cdata = g_new (struct CData, 1); cdata->description = NULL; cdata->device_index = device->physical_device->device_index; + cdata->codec = codec; + cdata->raw = raw; - g_return_val_if_fail (GST_IS_PLUGIN (plugin), FALSE); + /* class data will be leaked if the element never gets instantiated */ + GST_MINI_OBJECT_FLAG_SET (cdata->codec, GST_MINI_OBJECT_FLAG_MAY_BE_LEAKED); + GST_MINI_OBJECT_FLAG_SET (cdata->raw, GST_MINI_OBJECT_FLAG_MAY_BE_LEAKED); gst_vulkan_create_feature_name (device, "GstVulkanH264Decoder", "GstVulkanH264Device%dDecoder", &type_name, "vulkanh264dec",
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/vulkan/vkh264dec.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/vulkan/vkh264dec.h
Changed
@@ -19,13 +19,12 @@ #pragma once -#include <gst/codecs/gsth264decoder.h> - #include <gst/vulkan/vulkan.h> G_BEGIN_DECLS -gboolean -gst_vulkan_h264_decoder_register (GstPlugin * plugin, GstVulkanDevice *device, guint rank); +gboolean gst_vulkan_h264_decoder_register (GstPlugin * plugin, + GstVulkanDevice * device, + guint rank); G_END_DECLS
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/vulkan/vkh264enc.c
Added
@@ -0,0 +1,2331 @@ +/* GStreamer + * Copyright (C) 2025 Igalia, S.L. + * Author: Stéphane Cerveau <scerveau@igalia.com> + * Author: Victor Jaquez <vjaquez@igalia.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +/** + * SECTION:element-vkh264enc + * @title: vkh264enc + * @short_description: A Vulkan based H264 video encoder + * + * vkh264enc encodes raw video surfaces into H.264 bitstreams using + * Vulkan video extensions. + * + * + * ## Example launch line + * ``` + * gst-launch-1.0 videotestsrc num-buffers=60 ! timeoverlay ! vulkanupload ! vulkanh264enc ! h264parse ! mp4mux ! filesink location=test.mp4 + * ``` + * + * Since: 1.28 + */ + +/* + * TODO: + * + * + support multi-slices + */ + +/** + * GstVulkanEncoderRateControlMode: + * + * Rate control modes for Vulkan encoders. 
+ * + * Since: 1.28 + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "vkh264enc.h" + +#include <gst/codecparsers/gsth264bitwriter.h> +#include <gst/codecparsers/gsth264parser.h> + +#include "base/gsth264encoder.h" +#include "gst/vulkan/gstvkencoder-private.h" +#include "gstvkvideocaps.h" +#include "gstvulkanelements.h" + +typedef struct _GstVulkanH264Encoder GstVulkanH264Encoder; +typedef struct _GstVulkanH264EncoderClass GstVulkanH264EncoderClass; +typedef struct _GstVulkanH264EncoderFrame GstVulkanH264EncoderFrame; + +enum +{ + PROP_BITRATE = 1, + PROP_AUD, + PROP_QUALITY, + PROP_RATECONTROL, + PROP_QP_I, + PROP_QP_P, + PROP_QP_B, + PROP_MAX_QP, + PROP_MIN_QP, + N_PROPERTIES +}; + +static GParamSpec *properties[N_PROPERTIES]; + +struct _GstVulkanH264Encoder +{ + /*< private > */ + GstH264Encoder parent; + + GstVideoCodecState *in_state; + + gint coded_width; + gint coded_height; + + GstVulkanInstance *instance; + GstVulkanDevice *device; + GstVulkanQueue *encode_queue; + GstVulkanEncoder *encoder; + + /* sequence configuration */ + GstVulkanVideoProfile profile; + GstH264SPS sps; + GstH264PPS pps; + gsize coded_buffer_size; + + struct + { + StdVideoH264SequenceParameterSet sps; + StdVideoH264PictureParameterSet pps; + StdVideoH264SequenceParameterSetVui vui; + StdVideoH264HrdParameters hrd; + } params; + + struct + { + guint bitrate; + gboolean aud; + guint32 quality; + VkVideoEncodeRateControlModeFlagBitsKHR ratecontrol; + guint32 qp_i; + guint32 qp_p; + guint32 qp_b; + guint32 max_qp; + guint32 min_qp; + } prop; + + gboolean update_props; + + struct + { + guint bitrate; + guint max_bitrate; + guint cpb_size; + guint32 quality; + VkVideoEncodeRateControlModeFlagBitsKHR ratecontrol; + guint32 max_qp; + guint32 min_qp; + guint32 qp_i; + guint32 qp_p; + guint32 qp_b; + } rc; +}; + +struct _GstVulkanH264EncoderClass +{ + GstH264EncoderClass parent; + + gint device_index; +}; + +struct _GstVulkanH264EncoderFrame +{ + GstVulkanEncoderPicture 
picture; + GstVulkanEncoder *encoder; + + VkVideoEncodeH264RateControlInfoKHR vkrc_info; + VkVideoEncodeH264RateControlLayerInfoKHR vkrc_layer_info; + + /* StdVideoEncodeH264WeightTable slice_wt; *//* UNUSED */ + StdVideoEncodeH264SliceHeader slice_hdr; + VkVideoEncodeH264NaluSliceInfoKHR vkslice_info; + + StdVideoEncodeH264PictureInfo h264pic_info; + VkVideoEncodeH264PictureInfoKHR vkh264pic_info; + + StdVideoEncodeH264ReferenceInfo ref_info; + VkVideoEncodeH264DpbSlotInfoKHR vkref_info; + + StdVideoEncodeH264RefListModEntry mods[2][STD_VIDEO_H264_MAX_NUM_LIST_REF + + 1]; + StdVideoEncodeH264RefPicMarkingEntry mmco[STD_VIDEO_H264_MAX_NUM_LIST_REF + + 1]; + StdVideoEncodeH264ReferenceListsInfo ref_list_info; +}; + +struct CData +{ + gchar *description; + gint device_index; + GstCaps *codec; + GstCaps *raw; +}; + +#define GST_VULKAN_H264_ENCODER(obj) ((GstVulkanH264Encoder *)obj) +#define GST_VULKAN_H264_ENCODER_GET_CLASS(obj) \ + (G_TYPE_INSTANCE_GET_CLASS((obj), G_TYPE_FROM_INSTANCE(obj), \ + GstVulkanH264EncoderClass)) +#define GST_VULKAN_H264_ENCODER_CLASS(klass) \ + ((GstVulkanH264EncoderClass *)klass) + +static GstElementClass *parent_class = NULL; + +GST_DEBUG_CATEGORY_STATIC (gst_vulkan_h264_encoder_debug); +#define GST_CAT_DEFAULT gst_vulkan_h264_encoder_debug + +static gpointer +_register_debug_category (gpointer data) +{ + GST_DEBUG_CATEGORY_INIT (gst_vulkan_h264_encoder_debug, "vulkanh264enc", 0, + "Vulkan H.264 encoder"); + + return NULL; +} + +#define update_property(type, obj, old_val, new_val, prop_id) \ +static inline void \ +gst_vulkan_h264_encoder_update_property_##type (GstVulkanH264Encoder * encoder, type * old_val, type new_val, guint prop_id) \ +{ \ + GST_OBJECT_LOCK (encoder); \ + if (*old_val == new_val) { \ + GST_OBJECT_UNLOCK (encoder); \ + return; \ + } \ + *old_val = new_val; \ + GST_OBJECT_UNLOCK (encoder); \ + if (prop_id > 0) \ + g_object_notify_by_pspec (G_OBJECT (encoder), properties[prop_id]); \ +} + +update_property (guint, obj, old_val, 
new_val, prop_id); +#undef update_property + +#define update_property_uint(obj, old_val, new_val, prop_id) \ + gst_vulkan_h264_encoder_update_property_guint (obj, old_val, new_val, prop_id) + +static GstVulkanH264EncoderFrame * +gst_vulkan_h264_encoder_frame_new (GstVulkanH264Encoder * self, + GstVideoCodecFrame * frame) +{ + GstVulkanH264EncoderFrame *vkframe; + + if (self->coded_buffer_size == 0) { + self->coded_buffer_size = gst_h264_calculate_coded_size (&self->sps, 1); + if (self->coded_buffer_size == 0) + goto fail; + GST_DEBUG_OBJECT (self, "Calculated coded buffer size: %" G_GSIZE_FORMAT, + self->coded_buffer_size); + } + + vkframe = g_new (GstVulkanH264EncoderFrame, 1); + vkframe->encoder = gst_object_ref (self->encoder); + if (!gst_vulkan_encoder_picture_init (&vkframe->picture, self->encoder, + frame->input_buffer, self->coded_buffer_size)) { + gst_object_unref (vkframe->encoder); + g_free (vkframe); + goto fail; + } + + return vkframe; + +fail: + { + GST_DEBUG_OBJECT (self, "Failed to allocate a vulkan encoding frame"); + return NULL; + } +} + +static void +gst_vulkan_h264_encoder_frame_free (gpointer frame) +{ + GstVulkanH264EncoderFrame *vkframe = frame; + gst_vulkan_encoder_picture_clear (&vkframe->picture, vkframe->encoder); + gst_object_unref (vkframe->encoder); + g_free (vkframe); +} + +static inline GstVulkanH264EncoderFrame * +_GET_FRAME (GstH264EncoderFrame * frame) +{ + GstVulkanH264EncoderFrame *enc_frame = + gst_h264_encoder_frame_get_user_data (frame); + g_assert (enc_frame); + return enc_frame; +} + +static StdVideoH264SliceType +gst_vulkan_h264_slice_type (GstH264SliceType type) +{ + switch (type) { + case GST_H264_I_SLICE: + return STD_VIDEO_H264_SLICE_TYPE_I; + case GST_H264_P_SLICE: + return STD_VIDEO_H264_SLICE_TYPE_P; + case GST_H264_B_SLICE: + return STD_VIDEO_H264_SLICE_TYPE_B; + default: + GST_WARNING ("Unsupported picture type '%d'", type); + return STD_VIDEO_H264_SLICE_TYPE_INVALID; + } +} + +static const struct +{ + 
GstH264Profile gst; + StdVideoH264ProfileIdc vk; + const char *name; +} H264ProfileMap[] = { + /* *INDENT-OFF* */ + { GST_H264_PROFILE_BASELINE, STD_VIDEO_H264_PROFILE_IDC_BASELINE, "constrained-baseline" }, + { GST_H264_PROFILE_MAIN, STD_VIDEO_H264_PROFILE_IDC_MAIN, "main" }, + { GST_H264_PROFILE_HIGH, STD_VIDEO_H264_PROFILE_IDC_HIGH, "high" }, + /* { GST_H264_PROFILE_HIGH_444, STD_VIDEO_H264_PROFILE_IDC_HIGH_444_PREDICTIVE, "high-4:4:4" }, */ + /* *INDENT-ON* */ +}; + +static StdVideoH264ProfileIdc +gst_vulkan_h264_profile_type (GstH264Profile profile) +{ + for (int i = 0; i < G_N_ELEMENTS (H264ProfileMap); i++) { + if (profile == H264ProfileMap[i].gst) + return H264ProfileMap[i].vk; + } + + GST_WARNING ("Unsupported profile type '%d'", profile); + return STD_VIDEO_H264_PROFILE_IDC_INVALID; +} + +static const char * +gst_vulkan_h264_profile_name (StdVideoH264ProfileIdc profile) +{ + for (int i = 0; i < G_N_ELEMENTS (H264ProfileMap); i++) { + if (profile == H264ProfileMap[i].vk) + return H264ProfileMap[i].name; + } + + GST_WARNING ("Unsupported profile type '%d'", profile); + return NULL; +} + +/* *INDENT-OFF* */ +static const struct +{ + GstH264Level gst; + StdVideoH264LevelIdc vk; + const char *name; +} H264LevelMap[] = { + { GST_H264_LEVEL_L1, STD_VIDEO_H264_LEVEL_IDC_1_0, "1" }, + /* {GST_H264_LEVEL_L1B, "1b", }, */ + { GST_H264_LEVEL_L1_1, STD_VIDEO_H264_LEVEL_IDC_1_1, "1.1"}, + { GST_H264_LEVEL_L1_2, STD_VIDEO_H264_LEVEL_IDC_1_2, "1.2" }, + { GST_H264_LEVEL_L1_3, STD_VIDEO_H264_LEVEL_IDC_1_3, "1.3" }, + { GST_H264_LEVEL_L2, STD_VIDEO_H264_LEVEL_IDC_2_0, "2" }, + { GST_H264_LEVEL_L2_1, STD_VIDEO_H264_LEVEL_IDC_2_1, "2.1" }, + { GST_H264_LEVEL_L2_2, STD_VIDEO_H264_LEVEL_IDC_2_2, "2.2" }, + { GST_H264_LEVEL_L3, STD_VIDEO_H264_LEVEL_IDC_3_0, "3" }, + { GST_H264_LEVEL_L3_1, STD_VIDEO_H264_LEVEL_IDC_3_1, "3.1" }, + { GST_H264_LEVEL_L3_2, STD_VIDEO_H264_LEVEL_IDC_3_2, "3.2" }, + { GST_H264_LEVEL_L4, STD_VIDEO_H264_LEVEL_IDC_4_0, "4" }, + { GST_H264_LEVEL_L4_1, 
STD_VIDEO_H264_LEVEL_IDC_4_1, "4.1" }, + { GST_H264_LEVEL_L4_2, STD_VIDEO_H264_LEVEL_IDC_4_2, "4.2" }, + { GST_H264_LEVEL_L5, STD_VIDEO_H264_LEVEL_IDC_5_0, "5" }, + { GST_H264_LEVEL_L5_1, STD_VIDEO_H264_LEVEL_IDC_5_1, "5.1" }, + { GST_H264_LEVEL_L5_2, STD_VIDEO_H264_LEVEL_IDC_5_2, "5.2" }, + { GST_H264_LEVEL_L6, STD_VIDEO_H264_LEVEL_IDC_6_0, "6" }, + { GST_H264_LEVEL_L6_1, STD_VIDEO_H264_LEVEL_IDC_6_1, "6.1" }, + { GST_H264_LEVEL_L6_2, STD_VIDEO_H264_LEVEL_IDC_6_2, "6.2" }, +}; +/* *INDENT-ON* */ + +static StdVideoH264LevelIdc +gst_vulkan_h264_level_idc (int level_idc) +{ + for (guint i = 0; i < G_N_ELEMENTS (H264LevelMap); i++) { + if (level_idc == (int) H264LevelMap[i].gst) + return H264LevelMap[i].vk; + } + + GST_WARNING ("Unsupported level idc '%d'", level_idc); + return STD_VIDEO_H264_LEVEL_IDC_INVALID; +} + +static GstH264Level +gst_h264_level_idc_from_vk (StdVideoH264LevelIdc vk_level_idc) +{ + for (guint i = 0; i < G_N_ELEMENTS (H264LevelMap); i++) { + if (vk_level_idc == (int) H264LevelMap[i].vk) + return H264LevelMap[i].gst; + } + + GST_WARNING ("Unsupported level idc '%d'", vk_level_idc); + return -1; +} + +static const char * +gst_vulkan_h264_level_name (StdVideoH264LevelIdc level_idc) +{ + for (guint i = 0; i < G_N_ELEMENTS (H264LevelMap); i++) { + if (level_idc == (int) H264LevelMap[i].vk) + return H264LevelMap[i].name; + } + + GST_WARNING ("Unsupported level idc '%d'", level_idc); + return NULL; +} + +static VkVideoComponentBitDepthFlagBitsKHR +gst_vulkan_h264_bit_depth (guint8 depth) +{ + switch (depth) { + case 8: + return VK_VIDEO_COMPONENT_BIT_DEPTH_8_BIT_KHR; + case 10: + return VK_VIDEO_COMPONENT_BIT_DEPTH_10_BIT_KHR; + case 12: + return VK_VIDEO_COMPONENT_BIT_DEPTH_12_BIT_KHR; + default: + GST_WARNING ("Unsupported bit depth '%u'", depth); + return VK_VIDEO_COMPONENT_BIT_DEPTH_INVALID_KHR; + } +} + +#define SPS_GST_2_VK(F) \ + F(constraint_set0_flag, flags.constraint_set0_flag) \ + F(constraint_set1_flag, flags.constraint_set1_flag) \ + 
F(constraint_set2_flag, flags.constraint_set2_flag) \ + F(constraint_set3_flag, flags.constraint_set3_flag) \ + F(constraint_set4_flag, flags.constraint_set4_flag) \ + F(constraint_set5_flag, flags.constraint_set5_flag) \ + F(direct_8x8_inference_flag, flags.direct_8x8_inference_flag) \ + F(mb_adaptive_frame_field_flag, flags.mb_adaptive_frame_field_flag) \ + F(frame_mbs_only_flag, flags.frame_mbs_only_flag) \ + F(delta_pic_order_always_zero_flag, flags.delta_pic_order_always_zero_flag) \ + F(separate_colour_plane_flag, flags.separate_colour_plane_flag) \ + F(gaps_in_frame_num_value_allowed_flag, flags.gaps_in_frame_num_value_allowed_flag) \ + F(qpprime_y_zero_transform_bypass_flag, flags.qpprime_y_zero_transform_bypass_flag) \ + F(frame_cropping_flag, flags.frame_cropping_flag) \ + F(scaling_matrix_present_flag, flags.seq_scaling_matrix_present_flag) \ + F(vui_parameters_present_flag, flags.vui_parameters_present_flag) \ + F(id, seq_parameter_set_id) \ + F(bit_depth_luma_minus8, bit_depth_luma_minus8) \ + F(bit_depth_chroma_minus8, bit_depth_chroma_minus8) \ + F(log2_max_frame_num_minus4, log2_max_frame_num_minus4) \ + F(pic_order_cnt_type, pic_order_cnt_type) \ + F(offset_for_non_ref_pic, offset_for_non_ref_pic) \ + F(offset_for_top_to_bottom_field, offset_for_top_to_bottom_field) \ + F(log2_max_pic_order_cnt_lsb_minus4, log2_max_pic_order_cnt_lsb_minus4) \ + F(num_ref_frames_in_pic_order_cnt_cycle, num_ref_frames_in_pic_order_cnt_cycle) \ + F(num_ref_frames, max_num_ref_frames) \ + F(pic_width_in_mbs_minus1, pic_width_in_mbs_minus1) \ + F(pic_height_in_map_units_minus1, pic_height_in_map_units_minus1) \ + F(frame_crop_left_offset, frame_crop_left_offset) \ + F(frame_crop_right_offset, frame_crop_right_offset) \ + F(frame_crop_top_offset, frame_crop_top_offset) \ + F(frame_crop_bottom_offset, frame_crop_bottom_offset) + +#define SPS_VUI_GST_2_VK(F) \ + F(aspect_ratio_info_present_flag, flags.aspect_ratio_info_present_flag) \ + F(overscan_info_present_flag, 
flags.overscan_info_present_flag) \ + F(overscan_appropriate_flag, flags.overscan_appropriate_flag) \ + F(chroma_loc_info_present_flag, flags.chroma_loc_info_present_flag) \ + F(timing_info_present_flag, flags.timing_info_present_flag) \ + F(nal_hrd_parameters_present_flag, flags.nal_hrd_parameters_present_flag) \ + F(vcl_hrd_parameters_present_flag, flags.vcl_hrd_parameters_present_flag) \ + F(fixed_frame_rate_flag, flags.fixed_frame_rate_flag) \ + F(bitstream_restriction_flag, flags.bitstream_restriction_flag) \ + F(aspect_ratio_idc, aspect_ratio_idc) \ + F(sar_width, sar_width) \ + F(sar_height, sar_height) \ + F(num_units_in_tick, num_units_in_tick) \ + F(time_scale, time_scale) \ + F(num_reorder_frames, max_num_reorder_frames) \ + F(max_dec_frame_buffering, max_dec_frame_buffering) \ + F(video_signal_type_present_flag, flags.video_signal_type_present_flag) \ + F(video_full_range_flag, flags.video_full_range_flag) \ + F(colour_description_present_flag, flags.color_description_present_flag) \ + F(video_format, video_format) \ + F(colour_primaries, colour_primaries) \ + F(transfer_characteristics, transfer_characteristics) \ + F(matrix_coefficients, matrix_coefficients) \ + F(chroma_sample_loc_type_top_field, chroma_sample_loc_type_top_field) \ + F(chroma_sample_loc_type_bottom_field, chroma_sample_loc_type_bottom_field) + +static inline void +_configure_rate_control (GstVulkanH264Encoder * self, + GstVulkanVideoCapabilities * vk_caps) +{ + self->rc.bitrate = + MIN (self->rc.bitrate, vk_caps->encoder.caps.maxBitrate / 1024); + update_property_uint (self, &self->prop.bitrate, self->rc.bitrate, + PROP_BITRATE); + + switch (self->rc.ratecontrol) { + case VK_VIDEO_ENCODE_RATE_CONTROL_MODE_CBR_BIT_KHR: + self->rc.max_bitrate = self->rc.bitrate; + break; + case VK_VIDEO_ENCODE_RATE_CONTROL_MODE_VBR_BIT_KHR: + /* by default max bitrate is 66% from vah264enc (target_percentage) */ + self->rc.max_bitrate = (guint) + gst_util_uint64_scale_int (self->rc.bitrate, 100, 66); + 
self->rc.max_bitrate = + MIN (self->rc.max_bitrate, vk_caps->encoder.caps.maxBitrate / 1024); + break; + default: + break; + } + + self->rc.cpb_size = (guint) + gst_util_uint64_scale_int (self->rc.max_bitrate, 1000LL, + self->rc.bitrate); + + /* uncomment if max_bitrate turns into a property */ + /* update_property_uint (self, &self->prop.max_bitrate, self->rc.max_bitrate, */ + /* PROP_MAX_BITRATE); */ + + /* uncomment if cpb_size turns into a property */ + /* update_property_uint (self, &self->prop.cpb_size, self->rc.cpb_size, */ + /* PROP_MAX_BITRATE); */ + + { + GstTagList *tags = gst_tag_list_new_empty (); + gst_tag_list_add (tags, GST_TAG_MERGE_REPLACE, GST_TAG_NOMINAL_BITRATE, + self->rc.bitrate, GST_TAG_MAXIMUM_BITRATE, self->rc.max_bitrate, + GST_TAG_CODEC, "H.264", GST_TAG_ENCODER, "vulkanh264enc", NULL); + + gst_video_encoder_merge_tags (GST_VIDEO_ENCODER (self), tags, + GST_TAG_MERGE_REPLACE); + gst_tag_list_unref (tags); + } +} + +static gboolean +gst_vulkan_h264_encoder_init_std_sps (GstVulkanH264Encoder * self, + GstH264SPS * sps) +{ + GstVulkanVideoCapabilities vk_caps; + VkVideoEncodeH264CapabilitiesKHR *vk_h264_caps; + + if (!gst_vulkan_encoder_caps (self->encoder, &vk_caps)) + return FALSE; + vk_h264_caps = &vk_caps.encoder.codec.h264; + + g_assert (sps->vui_parameters_present_flag == 1); + g_assert (sps->scaling_matrix_present_flag == 0); + + self->params.sps = (StdVideoH264SequenceParameterSet) { +#define FILL_SPS(gst, vk) .vk = sps->gst, + SPS_GST_2_VK (FILL_SPS) +#undef FILL_SPS + }; + + self->params.sps.profile_idc = + gst_vulkan_h264_profile_type (sps->profile_idc); + self->params.sps.chroma_format_idc = + (StdVideoH264ChromaFormatIdc) sps->chroma_format_idc; + + self->params.sps.level_idc = gst_vulkan_h264_level_idc (sps->level_idc); + if (sps->level_idc == 0xff) + return FALSE; + + if (self->rc.bitrate == 0) { + const GstH264LevelDescriptor *desc; + + desc = gst_h264_get_level_descriptor (sps->profile_idc, 0, + &self->in_state->info, 
sps->vui_parameters.max_dec_frame_buffering); + if (!desc) + return FALSE; + + self->rc.bitrate = + desc->max_br * gst_h264_get_cpb_nal_factor (sps->profile_idc) / 1024; + } + + _configure_rate_control (self, &vk_caps); + + if (sps->direct_8x8_inference_flag == 0 + && (vk_h264_caps->stdSyntaxFlags & + VK_VIDEO_ENCODE_H264_STD_DIRECT_8X8_INFERENCE_FLAG_UNSET_BIT_KHR) == + 0) { + sps->direct_8x8_inference_flag = + self->params.sps.flags.direct_8x8_inference_flag = 1; + } + + if (sps->vui_parameters_present_flag == 1) { + g_assert (sps->vui_parameters.nal_hrd_parameters_present_flag == 0); + g_assert (sps->vui_parameters.vcl_hrd_parameters_present_flag == 0); + + self->params.vui = (StdVideoH264SequenceParameterSetVui) { +#define FILL_VUI(gst, vk) .vk = sps->vui_parameters.gst, + SPS_VUI_GST_2_VK (FILL_VUI) +#undef FILL_VUI + }; + + self->params.vui.aspect_ratio_idc = + (StdVideoH264AspectRatioIdc) sps->vui_parameters.aspect_ratio_idc; + self->params.sps.pSequenceParameterSetVui = &self->params.vui; + } + + return TRUE; +} + +#define PPS_MEMBERS(F) \ + F(id, pic_parameter_set_id) \ + F(sequence->id, seq_parameter_set_id) \ + F(entropy_coding_mode_flag, flags.entropy_coding_mode_flag) \ + F(pic_order_present_flag, \ + flags.bottom_field_pic_order_in_frame_present_flag) \ + F(num_ref_idx_l0_active_minus1, num_ref_idx_l0_default_active_minus1) \ + F(num_ref_idx_l1_active_minus1, num_ref_idx_l1_default_active_minus1) \ + F(weighted_pred_flag, flags.weighted_pred_flag) \ + F(weighted_bipred_idc, weighted_bipred_idc) \ + F(pic_init_qp_minus26, pic_init_qp_minus26) \ + F(pic_init_qs_minus26, pic_init_qs_minus26) \ + F(chroma_qp_index_offset, chroma_qp_index_offset) \ + F(deblocking_filter_control_present_flag, \ + flags.deblocking_filter_control_present_flag) \ + F(constrained_intra_pred_flag, flags.constrained_intra_pred_flag) \ + F(redundant_pic_cnt_present_flag, flags.redundant_pic_cnt_present_flag) \ + F(transform_8x8_mode_flag, flags.transform_8x8_mode_flag) \ + 
F(second_chroma_qp_index_offset, second_chroma_qp_index_offset) \ + F(pic_scaling_matrix_present_flag, flags.pic_scaling_matrix_present_flag) + /* Missing in Vulkan + * num_slice_groups_minus1 + * slice_group_map_type + * slice_group_change_direction_flag + * slice_group_change_rate_minus1 + * pic_size_in_map_units_minus1 + */ + +static gboolean +gst_vulkan_h264_encoder_init_std_pps (GstVulkanH264Encoder * self, + GstH264PPS * pps) +{ + GstVulkanVideoCapabilities vk_caps; + VkVideoEncodeH264CapabilitiesKHR *caps; + + if (!gst_vulkan_encoder_caps (self->encoder, &vk_caps)) + return FALSE; + caps = &vk_caps.encoder.codec.h264; + + self->params.pps = (StdVideoH264PictureParameterSet) { +#define FILL_PPS(gst, vk) .vk = pps->gst, + PPS_MEMBERS (FILL_PPS) +#undef FILL_PPS + }; + + /* CABAC */ + if (pps->entropy_coding_mode_flag + && !(caps->stdSyntaxFlags + & VK_VIDEO_ENCODE_H264_STD_ENTROPY_CODING_MODE_FLAG_SET_BIT_KHR)) { + pps->entropy_coding_mode_flag = + self->params.pps.flags.entropy_coding_mode_flag = 0; + } + + /* dct 8x8 */ + if (pps->transform_8x8_mode_flag + && !(caps->stdSyntaxFlags + & VK_VIDEO_ENCODE_H264_STD_TRANSFORM_8X8_MODE_FLAG_SET_BIT_KHR)) { + pps->transform_8x8_mode_flag = + self->params.pps.flags.transform_8x8_mode_flag = 0; + } + + return TRUE; +} + +static VkVideoChromaSubsamplingFlagBitsKHR +_h264_get_chroma_subsampling (GstVideoInfo * info) +{ + gint w_sub, h_sub; + + w_sub = 1 << GST_VIDEO_FORMAT_INFO_W_SUB (info->finfo, 1); + h_sub = 1 << GST_VIDEO_FORMAT_INFO_H_SUB (info->finfo, 1); + + if (w_sub == 2 && h_sub == 2) + return VK_VIDEO_CHROMA_SUBSAMPLING_420_BIT_KHR; + else if (w_sub == 2 && h_sub == 1) + return VK_VIDEO_CHROMA_SUBSAMPLING_422_BIT_KHR; + else if (w_sub == 1 && h_sub == 1) + return VK_VIDEO_CHROMA_SUBSAMPLING_444_BIT_KHR; + + g_assert_not_reached (); +} + +static GstFlowReturn +gst_vulkan_h264_encoder_new_sequence (GstH264Encoder * encoder, + GstVideoCodecState * in_state, GstH264Profile profile, GstH264Level * level) +{ + 
GstVulkanH264Encoder *self = GST_VULKAN_H264_ENCODER (encoder); + GError *err = NULL; + GstVideoInfo *in_info = &in_state->info; + VkVideoChromaSubsamplingFlagBitsKHR chroma_subsampling; + VkVideoComponentBitDepthFlagsKHR bit_depth_luma, bit_depth_chroma; + StdVideoH264ProfileIdc vk_profile; + GstVulkanVideoCapabilities vk_caps; + VkVideoEncodeH264CapabilitiesKHR *vk_h264_caps; + GstVulkanEncoderQualityProperties quality_props; + + if (!self->encoder) { + GST_ELEMENT_ERROR (self, RESOURCE, NOT_FOUND, + ("The vulkan encoder has not been initialized properly"), (NULL)); + return GST_FLOW_ERROR; + } + + /* profile configuration */ + { + chroma_subsampling = _h264_get_chroma_subsampling (in_info); + bit_depth_luma = + gst_vulkan_h264_bit_depth (GST_VIDEO_INFO_COMP_DEPTH (in_info, 0)); + g_assert (bit_depth_luma != VK_VIDEO_COMPONENT_BIT_DEPTH_INVALID_KHR); + bit_depth_chroma = + gst_vulkan_h264_bit_depth (GST_VIDEO_INFO_COMP_DEPTH (in_info, 1)); + g_assert (bit_depth_chroma != VK_VIDEO_COMPONENT_BIT_DEPTH_INVALID_KHR); + + vk_profile = gst_vulkan_h264_profile_type (profile); + + /* *INDENT-OFF* */ + self->profile = (GstVulkanVideoProfile) { + .profile = (VkVideoProfileInfoKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_PROFILE_INFO_KHR, + .pNext = &self->profile.usage.encode, + .videoCodecOperation = VK_VIDEO_CODEC_OPERATION_ENCODE_H264_BIT_KHR, + .chromaSubsampling = chroma_subsampling, + .chromaBitDepth = bit_depth_chroma, + .lumaBitDepth = bit_depth_luma, + }, + .usage.encode = (VkVideoEncodeUsageInfoKHR) { + .pNext = &self->profile.codec.h264enc, + .sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_USAGE_INFO_KHR, + .videoUsageHints = VK_VIDEO_ENCODE_USAGE_DEFAULT_KHR, + .videoContentHints = VK_VIDEO_ENCODE_CONTENT_DEFAULT_KHR, + .tuningMode = VK_VIDEO_ENCODE_TUNING_MODE_DEFAULT_KHR, + }, + .codec.h264enc = (VkVideoEncodeH264ProfileInfoKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_H264_PROFILE_INFO_KHR, + .stdProfileIdc = vk_profile, + }, + }; + quality_props = 
(GstVulkanEncoderQualityProperties) { + .quality_level = self->rc.quality, + .codec.h264 = { + .sType = + VK_STRUCTURE_TYPE_VIDEO_ENCODE_H264_QUALITY_LEVEL_PROPERTIES_KHR, + }, + }; + /* *INDENT-ON* */ + } + + if (gst_vulkan_encoder_is_started (self->encoder)) { + if (self->profile.profile.chromaSubsampling == chroma_subsampling + && self->profile.profile.chromaBitDepth == bit_depth_chroma + && self->profile.profile.lumaBitDepth == bit_depth_luma + && self->profile.codec.h264enc.stdProfileIdc == vk_profile) { + return GST_FLOW_OK; + } else { + GST_DEBUG_OBJECT (self, "Restarting vulkan encoder"); + gst_vulkan_encoder_stop (self->encoder); + } + } + + if (!gst_vulkan_encoder_start (self->encoder, &self->profile, &quality_props, + &err)) { + GST_ELEMENT_ERROR (self, RESOURCE, NOT_FOUND, + ("Unable to start vulkan encoder with error %s", err->message), (NULL)); + g_clear_error (&err); + return GST_FLOW_ERROR; + } + + /* quality configuration */ + { + self->rc.quality = gst_vulkan_encoder_quality_level (self->encoder); + update_property_uint (self, &self->prop.quality, self->rc.quality, + PROP_QUALITY); + self->rc.ratecontrol = gst_vulkan_encoder_rc_mode (self->encoder); + update_property_uint (self, &self->prop.ratecontrol, self->rc.ratecontrol, + PROP_RATECONTROL); + } + + gst_vulkan_encoder_caps (self->encoder, &vk_caps); + vk_h264_caps = &vk_caps.encoder.codec.h264; + + GST_LOG_OBJECT (self, "H264 encoder capabilities:\n" + " Standard capability flags:\n" + " separate_color_plane: %i\n" + " qprime_y_zero_transform_bypass: %i\n" + " scaling_lists: %i\n" + " chroma_qp_index_offset: %i\n" + " second_chroma_qp_index_offset: %i\n" + " pic_init_qp: %i\n" + " weighted:%s%s%s\n" + " 8x8_transforms: %i\n" + " disable_direct_spatial_mv_pred: %i\n" + " coder:%s%s\n" + " direct_8x8_inference: %i\n" + " constrained_intra_pred: %i\n" + " deblock:%s%s%s\n" + " Capability flags:\n" + " hdr_compliance: %i\n" + " pred_weight_table_generated: %i\n" + " row_unaligned_slice: %i\n" + " 
different_slice_type: %i\n" + " b_frame_in_l0_list: %i\n" + " b_frame_in_l1_list: %i\n" + " per_pict_type_min_max_qp: %i\n" + " per_slice_constant_qp: %i\n" + " generate_prefix_nalu: %i\n" + " Capabilities:\n" + " maxLevelIdc: %i\n" + " maxSliceCount: %i\n" + " max(P/B)PictureL0ReferenceCount: %i P / %i B\n" + " maxL1ReferenceCount: %i\n" + " maxTemporalLayerCount: %i\n" + " expectDyadicTemporalLayerPattern: %i\n" + " min/max Qp: %i, %i\n" + " prefersGopRemainingFrames: %i\n" + " requiresGopRemainingFrames: %i\n", + !!(vk_h264_caps->stdSyntaxFlags & + VK_VIDEO_ENCODE_H264_STD_SEPARATE_COLOR_PLANE_FLAG_SET_BIT_KHR), + !!(vk_h264_caps->stdSyntaxFlags & + VK_VIDEO_ENCODE_H264_STD_QPPRIME_Y_ZERO_TRANSFORM_BYPASS_FLAG_SET_BIT_KHR), + !!(vk_h264_caps->stdSyntaxFlags & + VK_VIDEO_ENCODE_H264_STD_SCALING_MATRIX_PRESENT_FLAG_SET_BIT_KHR), + !!(vk_h264_caps->stdSyntaxFlags & + VK_VIDEO_ENCODE_H264_STD_CHROMA_QP_INDEX_OFFSET_BIT_KHR), + !!(vk_h264_caps->stdSyntaxFlags & + VK_VIDEO_ENCODE_H264_STD_SECOND_CHROMA_QP_INDEX_OFFSET_BIT_KHR), + !!(vk_h264_caps->stdSyntaxFlags & + VK_VIDEO_ENCODE_H264_STD_PIC_INIT_QP_MINUS26_BIT_KHR), + vk_h264_caps->stdSyntaxFlags & + VK_VIDEO_ENCODE_H264_STD_WEIGHTED_PRED_FLAG_SET_BIT_KHR ? + " pred" : "", + vk_h264_caps->stdSyntaxFlags & + VK_VIDEO_ENCODE_H264_STD_WEIGHTED_BIPRED_IDC_EXPLICIT_BIT_KHR ? + " bipred_explicit" : "", + vk_h264_caps->stdSyntaxFlags & + VK_VIDEO_ENCODE_H264_STD_WEIGHTED_BIPRED_IDC_IMPLICIT_BIT_KHR ? + " bipred_implicit" : "", + !!(vk_h264_caps->stdSyntaxFlags & + VK_VIDEO_ENCODE_H264_STD_TRANSFORM_8X8_MODE_FLAG_SET_BIT_KHR), + !!(vk_h264_caps->stdSyntaxFlags & + VK_VIDEO_ENCODE_H264_STD_DIRECT_SPATIAL_MV_PRED_FLAG_UNSET_BIT_KHR), + vk_h264_caps->stdSyntaxFlags & + VK_VIDEO_ENCODE_H264_STD_ENTROPY_CODING_MODE_FLAG_UNSET_BIT_KHR ? + " cabac" : "", + vk_h264_caps->stdSyntaxFlags & + VK_VIDEO_ENCODE_H264_STD_ENTROPY_CODING_MODE_FLAG_SET_BIT_KHR ? 
+ " cavlc" : "", + !!(vk_h264_caps->stdSyntaxFlags & + VK_VIDEO_ENCODE_H264_STD_DIRECT_8X8_INFERENCE_FLAG_UNSET_BIT_KHR), + !!(vk_h264_caps->stdSyntaxFlags & + VK_VIDEO_ENCODE_H264_STD_CONSTRAINED_INTRA_PRED_FLAG_SET_BIT_KHR), + vk_h264_caps->stdSyntaxFlags & + VK_VIDEO_ENCODE_H264_STD_DEBLOCKING_FILTER_DISABLED_BIT_KHR ? + " filter_disabling" : "", + vk_h264_caps->stdSyntaxFlags & + VK_VIDEO_ENCODE_H264_STD_DEBLOCKING_FILTER_ENABLED_BIT_KHR ? + " filter_enabling" : "", + vk_h264_caps->stdSyntaxFlags & + VK_VIDEO_ENCODE_H264_STD_DEBLOCKING_FILTER_PARTIAL_BIT_KHR ? + " filter_partial" : "", + !!(vk_h264_caps->flags & + VK_VIDEO_ENCODE_H264_CAPABILITY_HRD_COMPLIANCE_BIT_KHR), + !!(vk_h264_caps->flags & + VK_VIDEO_ENCODE_H264_CAPABILITY_PREDICTION_WEIGHT_TABLE_GENERATED_BIT_KHR), + !!(vk_h264_caps->flags & + VK_VIDEO_ENCODE_H264_CAPABILITY_ROW_UNALIGNED_SLICE_BIT_KHR), + !!(vk_h264_caps->flags & + VK_VIDEO_ENCODE_H264_CAPABILITY_DIFFERENT_SLICE_TYPE_BIT_KHR), + !!(vk_h264_caps->flags & + VK_VIDEO_ENCODE_H264_CAPABILITY_B_FRAME_IN_L0_LIST_BIT_KHR), + !!(vk_h264_caps->flags & + VK_VIDEO_ENCODE_H264_CAPABILITY_B_FRAME_IN_L1_LIST_BIT_KHR), + !!(vk_h264_caps->flags & + VK_VIDEO_ENCODE_H264_CAPABILITY_PER_PICTURE_TYPE_MIN_MAX_QP_BIT_KHR), + !!(vk_h264_caps->flags & + VK_VIDEO_ENCODE_H264_CAPABILITY_PER_SLICE_CONSTANT_QP_BIT_KHR), + !!(vk_h264_caps->flags & + VK_VIDEO_ENCODE_H264_CAPABILITY_GENERATE_PREFIX_NALU_BIT_KHR), + vk_h264_caps->maxLevelIdc, + vk_h264_caps->maxSliceCount, + vk_h264_caps->maxPPictureL0ReferenceCount, + vk_h264_caps->maxBPictureL0ReferenceCount, + vk_h264_caps->maxL1ReferenceCount, + vk_h264_caps->maxTemporalLayerCount, + vk_h264_caps->expectDyadicTemporalLayerPattern, + vk_h264_caps->minQp, vk_h264_caps->maxQp, + vk_h264_caps->prefersGopRemainingFrames, + vk_h264_caps->requiresGopRemainingFrames); + + if (GST_VIDEO_INFO_WIDTH (in_info) > vk_caps.caps.maxCodedExtent.width + || GST_VIDEO_INFO_HEIGHT (in_info) > vk_caps.caps.maxCodedExtent.height + || 
GST_VIDEO_INFO_WIDTH (in_info) < vk_caps.caps.minCodedExtent.width + || GST_VIDEO_INFO_HEIGHT (in_info) < vk_caps.caps.minCodedExtent.height) { + GST_ERROR_OBJECT (self, "Frame size is out of driver limits"); + gst_vulkan_encoder_stop (self->encoder); + return GST_FLOW_NOT_NEGOTIATED; + } + + gst_h264_encoder_set_max_num_references (encoder, + vk_h264_caps->maxPPictureL0ReferenceCount, + vk_h264_caps->maxL1ReferenceCount); + + if (gst_h264_encoder_is_live (encoder)) { + /* low latency */ + gst_h264_encoder_set_preferred_output_delay (encoder, 0); + } else { + /* experimental best value for VA */ + gst_h264_encoder_set_preferred_output_delay (encoder, 4); + } + + if (self->in_state) + gst_video_codec_state_unref (self->in_state); + self->in_state = gst_video_codec_state_ref (in_state); + + self->coded_width = GST_ROUND_UP_N (GST_VIDEO_INFO_WIDTH (in_info), + vk_caps.encoder.caps.encodeInputPictureGranularity.width); + self->coded_height = GST_ROUND_UP_N (GST_VIDEO_INFO_HEIGHT (in_info), + vk_caps.encoder.caps.encodeInputPictureGranularity.height); + + return GST_FLOW_OK; +} + +static gboolean +_h264_parameters_parse (GstVulkanH264Encoder * self, gpointer data, + gsize data_size, GstH264SPS * sps, GstH264PPS * pps) +{ + GstH264ParserResult res, pres; + GstH264NalUnit nalu = { 0, }; + GstH264NalParser parser = { 0, }; + guint offset = 0; + + do { + res = + gst_h264_parser_identify_nalu (&parser, data, offset, data_size, &nalu); + if (res != GST_H264_PARSER_OK && res != GST_H264_PARSER_NO_NAL_END) { + GST_WARNING_OBJECT (self, "Failed to parse overridden parameters"); + return FALSE; + } + + if (nalu.type == GST_H264_NAL_SPS) { + pres = gst_h264_parser_parse_sps (&parser, &nalu, sps); + if (pres != GST_H264_PARSER_OK) + GST_WARNING_OBJECT (self, "Failed to parse overridden SPS"); + } else if (nalu.type == GST_H264_NAL_PPS) { + pres = gst_h264_parser_parse_pps (&parser, &nalu, pps); + if (pres != GST_H264_PARSER_OK) + GST_WARNING_OBJECT (self, "Failed to parse 
overridden PPS"); + } else { + GST_WARNING_OBJECT (self, "Unexpected NAL identified: %d", nalu.type); + } + + offset = nalu.offset + nalu.size; + } while (res == GST_H264_PARSER_OK); + + /* from gst_h264_nal_parser_free */ + gst_h264_sps_clear (&parser.sps[0]); + gst_h264_pps_clear (&parser.pps[0]); + + return res == GST_H264_PARSER_OK; +} + +static GstFlowReturn +gst_vulkan_h264_encoder_update_parameters (GstVulkanH264Encoder * self, + GstH264SPS * sps, GstH264PPS * pps) +{ + GError *err = NULL; + GstVulkanEncoderParameters params; + VkVideoEncodeH264SessionParametersAddInfoKHR params_add; + + if (!gst_vulkan_h264_encoder_init_std_sps (self, sps)) + return GST_FLOW_ERROR; + if (!gst_vulkan_h264_encoder_init_std_pps (self, pps)) + return GST_FLOW_ERROR; + + /* *INDENT-OFF* */ + params_add = (VkVideoEncodeH264SessionParametersAddInfoKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_H264_SESSION_PARAMETERS_ADD_INFO_KHR, + .pStdSPSs = &self->params.sps, + .stdSPSCount = 1, + .pStdPPSs = &self->params.pps, + .stdPPSCount = 1, + }; + params.h264 = (VkVideoEncodeH264SessionParametersCreateInfoKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_H264_SESSION_PARAMETERS_CREATE_INFO_KHR, + .maxStdSPSCount = params_add.stdSPSCount, + .maxStdPPSCount = params_add.stdPPSCount, + .pParametersAddInfo = &params_add, + }; + /* *INDENT-ON* */ + + if (!gst_vulkan_encoder_update_video_session_parameters (self->encoder, + &params, &err)) { + GST_ELEMENT_ERROR (self, RESOURCE, NOT_FOUND, + ("Unable to update session parameters with error %s", err->message), + (NULL)); + g_clear_error (&err); + return GST_FLOW_ERROR; + } + + return GST_FLOW_OK; +} + +static GstFlowReturn +gst_vulkan_h264_encoder_new_parameters (GstH264Encoder * encoder, + GstH264SPS * sps, GstH264PPS * pps) +{ + GstVulkanH264Encoder *self = GST_VULKAN_H264_ENCODER (encoder); + GError *err = NULL; + GstVulkanEncoderParametersOverrides overrides; + GstVulkanEncoderParametersFeedback feedback; + GstVulkanVideoCapabilities vk_caps; + 
GstFlowReturn ret; + gpointer data = NULL; + gsize data_size = 0; + StdVideoH264LevelIdc vk_max_level; + + if (!self->encoder) { + GST_ELEMENT_ERROR (self, RESOURCE, NOT_FOUND, + ("The vulkan encoder has not been initialized properly"), (NULL)); + return GST_FLOW_ERROR; + } + + /* gallium drivers always reply 10 level idc */ + gst_vulkan_encoder_caps (self->encoder, &vk_caps); + vk_max_level = vk_caps.encoder.codec.h264.maxLevelIdc; + if (vk_max_level > STD_VIDEO_H264_LEVEL_IDC_1_0) { + sps->level_idc = + MIN (gst_h264_level_idc_from_vk (vk_max_level), sps->level_idc); + } + + ret = gst_vulkan_h264_encoder_update_parameters (self, sps, pps); + if (ret != GST_FLOW_OK) + return ret; + + overrides = (GstVulkanEncoderParametersOverrides) { + .h264 = { + .sType = + VK_STRUCTURE_TYPE_VIDEO_ENCODE_H264_SESSION_PARAMETERS_GET_INFO_KHR, + .stdSPSId = self->params.sps.seq_parameter_set_id, + .stdPPSId = self->params.pps.pic_parameter_set_id, + .writeStdPPS = VK_TRUE, + .writeStdSPS = VK_TRUE, + } + }; + + feedback = (GstVulkanEncoderParametersFeedback) { + .h264 = { + .sType = + VK_STRUCTURE_TYPE_VIDEO_ENCODE_H264_SESSION_PARAMETERS_FEEDBACK_INFO_KHR, + } + }; + + if (!gst_vulkan_encoder_video_session_parameters_overrides (self->encoder, + &overrides, &feedback, &data_size, &data, &err)) + return GST_FLOW_ERROR; + + /* ignore overrides until we get a use case they are actually needed */ + feedback.h264.hasStdPPSOverrides = feedback.h264.hasStdSPSOverrides = 0; + + if (feedback.h264.hasStdSPSOverrides || feedback.h264.hasStdPPSOverrides) { + GstH264SPS new_sps; + GstH264PPS new_pps; + GST_LOG_OBJECT (self, "Vulkan driver overrode parameters:%s%s", + feedback.h264.hasStdSPSOverrides ? " SPS" : "", + feedback.h264.hasStdPPSOverrides ? 
" PPS" : ""); + + if (_h264_parameters_parse (self, data, data_size, &new_sps, &new_pps)) { + if (feedback.h264.hasStdSPSOverrides) + *sps = new_sps; + + if (feedback.h264.hasStdPPSOverrides) { + new_pps.sequence = sps; + *pps = new_pps; + } + + ret = gst_vulkan_h264_encoder_update_parameters (self, sps, pps); + if (ret != GST_FLOW_OK) + return ret; + } + } + + g_free (data); + + /* copy it to calculate coded buffer size (MVC extension not supported!) */ + self->sps = *sps; + self->pps = *pps; + self->pps.sequence = &self->sps; + + { + GstCaps *caps; + GstVideoInfo *info = &self->in_state->info; + const char *profile, *level; + GstVideoCodecState *out_state; + + profile = gst_vulkan_h264_profile_name (self->params.sps.profile_idc); + level = gst_vulkan_h264_level_name (self->params.sps.level_idc); + + if (!(profile && level)) + return GST_FLOW_ERROR; + + caps = gst_caps_new_simple ("video/x-h264", "profile", G_TYPE_STRING, + profile, "level", G_TYPE_STRING, level, "width", G_TYPE_INT, + GST_VIDEO_INFO_WIDTH (info), "height", G_TYPE_INT, + GST_VIDEO_INFO_HEIGHT (info), "alignment", G_TYPE_STRING, "au", + "stream-format", G_TYPE_STRING, "byte-stream", NULL); + + out_state = + gst_video_encoder_set_output_state (GST_VIDEO_ENCODER_CAST (self), + caps, self->in_state); + gst_video_codec_state_unref (out_state); + } + + return GST_FLOW_OK; +} + +static GstFlowReturn +gst_vulkan_h264_encoder_new_output (GstH264Encoder * base, + GstVideoCodecFrame * codec_frame, GstH264EncoderFrame * h264_frame) +{ + GstVulkanH264EncoderFrame *vk_frame; + GstVulkanH264Encoder *self = GST_VULKAN_H264_ENCODER (base); + + vk_frame = gst_vulkan_h264_encoder_frame_new (self, codec_frame); + if (!vk_frame) + return GST_FLOW_NOT_NEGOTIATED; + + gst_h264_encoder_frame_set_user_data (h264_frame, vk_frame, + gst_vulkan_h264_encoder_frame_free); + + return GST_FLOW_OK; +} + +static gboolean +_write_headers (GstVulkanH264Encoder * self, + GstVulkanH264EncoderFrame * vk_frame) +{ + GstMapInfo info; + 
guint aligned_offset, offset, orig_size, size, fillers; + GstH264BitWriterResult res; + guint8 aud_pic_type, *data; + GstVulkanVideoCapabilities vk_caps; + gboolean aud, ret = FALSE; + StdVideoH264PictureType pic_type = vk_frame->h264pic_info.primary_pic_type; + GstBuffer *buffer = vk_frame->picture.out_buffer; + + if (!gst_buffer_map (buffer, &info, GST_MAP_WRITE)) { + GST_ERROR_OBJECT (self, "Failed to map output buffer"); + return FALSE; + } + + offset = 0; + data = info.data; + orig_size = size = info.size; + + GST_OBJECT_LOCK (self); + aud = self->prop.aud; + GST_OBJECT_UNLOCK (self); + + if (aud) { + guint8 nal_buf[4096] = { 0, }; + guint nal_size = sizeof (nal_buf); + + switch (pic_type) { + case STD_VIDEO_H264_PICTURE_TYPE_IDR: + case STD_VIDEO_H264_PICTURE_TYPE_I: + aud_pic_type = 0; + break; + case STD_VIDEO_H264_PICTURE_TYPE_P: + aud_pic_type = 1; + break; + case STD_VIDEO_H264_PICTURE_TYPE_B: + aud_pic_type = 2; + break; + default: + g_assert_not_reached (); + break; + } + + res = gst_h264_bit_writer_aud (aud_pic_type, TRUE, nal_buf, &nal_size); + if (res != GST_H264_BIT_WRITER_OK) { + GST_ERROR_OBJECT (self, "Failed to generate the AUD header"); + goto bail; + } + + res = gst_h264_bit_writer_convert_to_nal (4, FALSE, TRUE, FALSE, nal_buf, + nal_size * 8, data, &size); + if (res != GST_H264_BIT_WRITER_OK) { + GST_ERROR_OBJECT (self, "Failed to generate the AUD bytes"); + goto bail; + } + + offset += size + 1; + } + + if (pic_type == STD_VIDEO_H264_PICTURE_TYPE_IDR) { + guint8 nal_buf[4096] = { 0, }; + guint nal_size = sizeof (nal_buf); + + res = gst_h264_bit_writer_sps (&self->sps, TRUE, nal_buf, &nal_size); + if (res != GST_H264_BIT_WRITER_OK) { + GST_ERROR_OBJECT (self, "Failed to generate the sequence header"); + goto bail; + } + + data = info.data + offset; + size = orig_size - offset; + + res = gst_h264_bit_writer_convert_to_nal (4, FALSE, TRUE, FALSE, nal_buf, + nal_size * 8, data, &size); + if (res != GST_H264_BIT_WRITER_OK) { + GST_ERROR_OBJECT 
(self, "Failed to generate the SPS bytes"); + goto bail; + } + + offset += size + 1; + } + + if (pic_type == STD_VIDEO_H264_PICTURE_TYPE_I + || pic_type == STD_VIDEO_H264_PICTURE_TYPE_IDR) { + guint8 nal_buf[4096] = { 0, }; + guint nal_size = sizeof (nal_buf); + + res = gst_h264_bit_writer_pps (&self->pps, TRUE, nal_buf, &nal_size); + if (res != GST_H264_BIT_WRITER_OK) { + GST_ERROR_OBJECT (self, "Failed to generate the picture header"); + goto bail; + } + + data = info.data + offset; + size = orig_size - offset; + + res = gst_h264_bit_writer_convert_to_nal (4, FALSE, TRUE, FALSE, nal_buf, + nal_size * 8, data, &size); + if (res != GST_H264_BIT_WRITER_OK) { + GST_ERROR_OBJECT (self, "Failed to generate the PPS bytes"); + goto bail; + } + + offset += size + 1; + } + + gst_vulkan_encoder_caps (self->encoder, &vk_caps); + aligned_offset = GST_ROUND_UP_N (offset, + vk_caps.caps.minBitstreamBufferOffsetAlignment); + + fillers = aligned_offset - offset; + if (fillers > 0) { + guint8 nal_buf[4096] = { 0, }; + guint nal_size = sizeof (nal_buf); + + while (fillers < 7 /* filler header size */ ) + fillers += vk_caps.caps.minBitstreamBufferOffsetAlignment; + + fillers -= 7 /* filler header size */ ; + + res = gst_h264_bit_writer_filler (TRUE, fillers, nal_buf, &nal_size); + if (res != GST_H264_BIT_WRITER_OK) { + GST_ERROR_OBJECT (self, "Failed to generate fillers"); + goto bail; + } + + data = info.data + offset; + size = orig_size - offset; + + res = gst_h264_bit_writer_convert_to_nal (4, FALSE, TRUE, FALSE, nal_buf, + nal_size * 8, data, &size); + if (res != GST_H264_BIT_WRITER_OK) { + GST_ERROR_OBJECT (self, "Failed to generate the fillers bytes"); + goto bail; + } + + offset += size + 1; + } + + vk_frame->picture.offset = offset; + + ret = TRUE; + +bail: + gst_buffer_unmap (buffer, &info); + return ret; +} + +static void +_setup_rc_pic (GstVulkanEncoderPicture * pic, + VkVideoEncodeRateControlInfoKHR * rc_info, + VkVideoEncodeRateControlLayerInfoKHR * rc_layer, gpointer data) 
+{ + GstVulkanH264Encoder *self = data; + GstVulkanH264EncoderFrame *vk_frame = (GstVulkanH264EncoderFrame *) pic; + GstH264Encoder *h264enc = GST_H264_ENCODER (self); + guint32 idr_period, num_bframes; + gboolean b_pyramid; + VkVideoEncodeH264RateControlFlagsKHR rc_flag; + + idr_period = gst_h264_encoder_get_idr_period (h264enc); + num_bframes = gst_h264_encoder_get_num_b_frames (h264enc); + b_pyramid = gst_h264_encoder_gop_is_b_pyramid (h264enc); + + rc_flag = b_pyramid ? + VK_VIDEO_ENCODE_H264_RATE_CONTROL_REFERENCE_PATTERN_DYADIC_BIT_KHR + : VK_VIDEO_ENCODE_H264_RATE_CONTROL_REFERENCE_PATTERN_FLAT_BIT_KHR; + + /* *INDENT-OFF* */ + vk_frame->vkrc_info = (VkVideoEncodeH264RateControlInfoKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_H264_RATE_CONTROL_INFO_KHR, + .flags = rc_flag | VK_VIDEO_ENCODE_H264_RATE_CONTROL_REGULAR_GOP_BIT_KHR, + .pNext = NULL, + .gopFrameCount = idr_period, + .idrPeriod = idr_period, + .consecutiveBFrameCount = num_bframes, + .temporalLayerCount = 0, + }; + /* *INDENT-ON* */ + + rc_info->pNext = &vk_frame->vkrc_info; + + if (rc_info->rateControlMode > + VK_VIDEO_ENCODE_RATE_CONTROL_MODE_DISABLED_BIT_KHR) { + rc_layer->averageBitrate = self->rc.bitrate * 1024; + rc_layer->maxBitrate = self->rc.max_bitrate * 1024; + + /* virtualBufferSizeInMs ~ hrd_buffer_size * 1000LL / bitrate + * + * FIXME: add max-bitrate and coded-buffer-size properties to customize the + * bucket model + * + * for more information: https://www.youtube.com/watch?v=Mn8v1ojV80M */ + rc_info->virtualBufferSizeInMs = self->rc.cpb_size; + rc_info->initialVirtualBufferSizeInMs = self->rc.cpb_size * 3 / 4; + + /* *INDENT-OFF* */ + vk_frame->vkrc_layer_info = (VkVideoEncodeH264RateControlLayerInfoKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_H264_RATE_CONTROL_LAYER_INFO_KHR, + + .useMinQp = self->rc.min_qp > 0, + .minQp.qpI = self->rc.min_qp, + .minQp.qpP = self->rc.min_qp, + .minQp.qpB = self->rc.min_qp, + + .useMaxQp = self->rc.max_qp > 0, + .maxQp.qpI = 
self->rc.max_qp, + .maxQp.qpP = self->rc.max_qp, + .maxQp.qpB = self->rc.max_qp, + + .useMaxFrameSize = 0, + }; + /* *INDENT-ON* */ + + rc_layer->pNext = &vk_frame->vkrc_layer_info; + vk_frame->vkrc_info.temporalLayerCount = 1; + } +} + +static void +_setup_codec_pic (GstVulkanEncoderPicture * pic, VkVideoEncodeInfoKHR * info, + gpointer data) +{ + GstVulkanH264EncoderFrame *vk_frame = (GstVulkanH264EncoderFrame *) pic; + + info->pNext = &vk_frame->vkh264pic_info; + pic->dpb_slot.pNext = &vk_frame->vkref_info; + + /* *INDENT-OFF* */ + vk_frame->vkh264pic_info = (VkVideoEncodeH264PictureInfoKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_H264_PICTURE_INFO_KHR, + .pNext = NULL, + .naluSliceEntryCount = 1, + .pNaluSliceEntries = &vk_frame->vkslice_info, /* filled in _setup_slice() */ + .pStdPictureInfo = &vk_frame->h264pic_info, /* filled in encode_frame() */ + .generatePrefixNalu = VK_FALSE, + }; + vk_frame->vkref_info = (VkVideoEncodeH264DpbSlotInfoKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_H264_DPB_SLOT_INFO_KHR, + .pNext = NULL, + .pStdReferenceInfo = &vk_frame->ref_info, /* filled in encode_frame() */ + }; + /* *INDENT-ON* */ +} + +static guint8 +_get_slot_index (GArray * list, int i) +{ + GstH264EncoderFrame *h264_frame; + + h264_frame = g_array_index (list, GstH264EncoderFrame *, i); + return _GET_FRAME (h264_frame)->picture.dpb_slot.slotIndex; +} + +static void +_setup_ref_lists (GstH264EncoderFrame * h264_frame, GstH264SliceHdr * slice_hdr, + GArray * list0, GArray * list1) +{ + int i; + GstVulkanH264EncoderFrame *vk_frame = _GET_FRAME (h264_frame); + + /* *INDENT-OFF* */ + vk_frame->ref_list_info = (StdVideoEncodeH264ReferenceListsInfo) { + .flags = { + .ref_pic_list_modification_flag_l0 = 0, + .ref_pic_list_modification_flag_l1 = 0, + .reserved = 0, + }, + .num_ref_idx_l0_active_minus1 = + MIN (slice_hdr->num_ref_idx_l0_active_minus1, + STD_VIDEO_H264_MAX_NUM_LIST_REF), + .num_ref_idx_l1_active_minus1 = + MIN (slice_hdr->num_ref_idx_l1_active_minus1, 
+ STD_VIDEO_H264_MAX_NUM_LIST_REF), + .RefPicList0 = { 0, }, /* filled below */ + .RefPicList1 = { 0, }, /* filled below */ + .refList0ModOpCount = MIN (slice_hdr->n_ref_pic_list_modification_l0, 33), + .refList1ModOpCount = MIN (slice_hdr->n_ref_pic_list_modification_l1, 33), + .refPicMarkingOpCount = + MIN (slice_hdr->dec_ref_pic_marking.n_ref_pic_marking, 10), + .reserved1 = { 0, }, + .pRefList0ModOperations = NULL, + .pRefList1ModOperations = NULL, + .pRefPicMarkingOperations = NULL, /*filled below */ + }; + /* *INDENT-ON* */ + + for (i = 0; i < STD_VIDEO_H264_MAX_NUM_LIST_REF; i++) { + if (i < list0->len) { + vk_frame->ref_list_info.RefPicList0[i] = _get_slot_index (list0, i); + } else { + vk_frame->ref_list_info.RefPicList0[i] = + STD_VIDEO_H264_NO_REFERENCE_PICTURE; + } + + if (i < list1->len) { + vk_frame->ref_list_info.RefPicList1[i] = _get_slot_index (list1, i); + } else { + vk_frame->ref_list_info.RefPicList1[i] = + STD_VIDEO_H264_NO_REFERENCE_PICTURE; + } + } + + for (i = 0; i < vk_frame->ref_list_info.refList0ModOpCount; i++) { + GstH264RefPicListModification *mod = + &slice_hdr->ref_pic_list_modification_l0[i]; + + /* *INDENT-OFF* */ + vk_frame->mods0[i] = (StdVideoEncodeH264RefListModEntry) { + .modification_of_pic_nums_idc = mod->modification_of_pic_nums_idc, + .abs_diff_pic_num_minus1 = mod->value.abs_diff_pic_num_minus1, + }; + /* *INDENT-ON* */ + } + if (vk_frame->ref_list_info.refList0ModOpCount > 0) + vk_frame->ref_list_info.pRefList0ModOperations = vk_frame->mods0; + + for (i = 0; i < vk_frame->ref_list_info.refList1ModOpCount; i++) { + GstH264RefPicListModification *mod = + &slice_hdr->ref_pic_list_modification_l1[i]; + + /* *INDENT-OFF* */ + vk_frame->mods1[i] = (StdVideoEncodeH264RefListModEntry) { + .modification_of_pic_nums_idc = mod->modification_of_pic_nums_idc, + .abs_diff_pic_num_minus1 = mod->value.abs_diff_pic_num_minus1, + }; + /* *INDENT-ON* */ + } + if (vk_frame->ref_list_info.refList1ModOpCount > 0) + 
vk_frame->ref_list_info.pRefList1ModOperations = vk_frame->mods1; + + for (i = 0; i < vk_frame->ref_list_info.refPicMarkingOpCount; i++) { + GstH264RefPicMarking *mmco = + &slice_hdr->dec_ref_pic_marking.ref_pic_marking[i]; + + /* *INDENT-OFF* */ + vk_frame->mmco[i] = (StdVideoEncodeH264RefPicMarkingEntry) { + .long_term_frame_idx = mmco->long_term_frame_idx, + .max_long_term_frame_idx_plus1 = mmco->max_long_term_frame_idx_plus1, + .long_term_pic_num = mmco->long_term_pic_num, + .difference_of_pic_nums_minus1 = mmco->difference_of_pic_nums_minus1, + }; + /* *INDENT-ON* */ + } + if (vk_frame->ref_list_info.refPicMarkingOpCount > 0) + vk_frame->ref_list_info.pRefPicMarkingOperations = vk_frame->mmco; +} + +static void +_setup_slice (GstVulkanH264Encoder * self, GstH264EncoderFrame * h264_frame, + GstH264SliceHdr * slice_hdr) +{ + GstVulkanH264EncoderFrame *vk_frame = _GET_FRAME (h264_frame); + GstH264SliceType slice_type = h264_frame->type.slice_type; + + /* *INDENT-OFF* */ + vk_frame->slice_hdr = (StdVideoEncodeH264SliceHeader) { + .flags = (StdVideoEncodeH264SliceHeaderFlags) { + .direct_spatial_mv_pred_flag = slice_hdr->direct_spatial_mv_pred_flag, + .num_ref_idx_active_override_flag = + slice_hdr->num_ref_idx_active_override_flag, + }, + .first_mb_in_slice = slice_hdr->first_mb_in_slice, /* 0 */ + .slice_type = gst_vulkan_h264_slice_type(h264_frame->type.slice_type), + .cabac_init_idc = slice_hdr->cabac_init_idc, + .disable_deblocking_filter_idc = slice_hdr->disable_deblocking_filter_idc, + .slice_qp_delta = slice_hdr->slice_qp_delta, + .slice_alpha_c0_offset_div2 = slice_hdr->slice_alpha_c0_offset_div2, + .slice_beta_offset_div2 = slice_hdr->slice_beta_offset_div2, + .pWeightTable = NULL, + }; + + vk_frame->vkslice_info = (VkVideoEncodeH264NaluSliceInfoKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_H264_NALU_SLICE_INFO_KHR, + .pNext = NULL, + .constantQp = slice_type == GST_H264_P_SLICE ? self->rc.qp_p : + slice_type == GST_H264_B_SLICE ? 
self->rc.qp_b : + self->rc.qp_i, + .pStdSliceHeader = &vk_frame->slice_hdr, + }; + /* *INDENT-ON* */ + + vk_frame->slice_hdr.slice_qp_delta = vk_frame->vkslice_info.constantQp - + (self->params.pps.pic_init_qp_minus26 + 26); +} + +static void +_reset_rc_props (GstVulkanH264Encoder * self) +{ + GstVulkanVideoCapabilities vk_caps; + gint32 rc_mode; + + if (!self->encoder) + return; + + if (!gst_vulkan_encoder_caps (self->encoder, &vk_caps)) + return; + + GST_OBJECT_LOCK (self); + self->rc.ratecontrol = self->prop.ratecontrol; + self->rc.min_qp = (self->prop.min_qp > 0) ? + MAX (self->prop.min_qp, vk_caps.encoder.codec.h264.minQp) : 0; + self->rc.max_qp = (self->prop.max_qp > 0) ? + MIN (self->prop.max_qp, vk_caps.encoder.codec.h264.maxQp) : 0; + GST_OBJECT_UNLOCK (self); + + if (self->rc.ratecontrol == + VK_VIDEO_ENCODE_RATE_CONTROL_MODE_DISABLED_BIT_KHR) { + GST_OBJECT_LOCK (self); + self->rc.qp_i = + CLAMP (self->prop.qp_i, vk_caps.encoder.codec.h264.minQp, + vk_caps.encoder.codec.h264.maxQp); + self->rc.qp_p = + CLAMP (self->prop.qp_p, vk_caps.encoder.codec.h264.minQp, + vk_caps.encoder.codec.h264.maxQp); + self->rc.qp_b = + CLAMP (self->prop.qp_b, vk_caps.encoder.codec.h264.minQp, + vk_caps.encoder.codec.h264.maxQp); + GST_OBJECT_UNLOCK (self); + } else { + self->rc.qp_i = 0; + self->rc.qp_p = 0; + self->rc.qp_b = 0; + } + + gst_vulkan_encoder_set_rc_mode (self->encoder, self->rc.ratecontrol); + rc_mode = gst_vulkan_encoder_rc_mode (self->encoder); + if (rc_mode != -1) { + self->rc.ratecontrol = rc_mode; + update_property_uint (self, &self->prop.ratecontrol, self->rc.ratecontrol, + PROP_RATECONTROL); + } + + update_property_uint (self, &self->prop.qp_i, self->rc.qp_i, PROP_QP_I); + update_property_uint (self, &self->prop.qp_p, self->rc.qp_p, PROP_QP_P); + update_property_uint (self, &self->prop.qp_b, self->rc.qp_b, PROP_QP_B); + update_property_uint (self, &self->prop.min_qp, self->rc.min_qp, PROP_MIN_QP); + update_property_uint (self, &self->prop.max_qp, 
self->rc.max_qp, PROP_MAX_QP); +} + +static StdVideoH264PictureType +_gst_slice_type_2_vk_pic_type (GstH264GOPFrame * frame) +{ + if ((frame->slice_type == GST_H264_I_SLICE) && frame->is_ref) + return STD_VIDEO_H264_PICTURE_TYPE_IDR; + switch (frame->slice_type) { + case GST_H264_B_SLICE: + return STD_VIDEO_H264_PICTURE_TYPE_B; + case GST_H264_P_SLICE: + return STD_VIDEO_H264_PICTURE_TYPE_P; + case GST_H264_I_SLICE: + return STD_VIDEO_H264_PICTURE_TYPE_I; + default: + GST_WARNING ("Unsupported slice type '%d' for picture", + frame->slice_type); + return STD_VIDEO_H264_PICTURE_TYPE_INVALID; + } +} + +static void +update_properties_unlocked (GstVulkanH264Encoder * self) +{ + if (!self->update_props) + return; + + GST_OBJECT_UNLOCK (self); + _reset_rc_props (self); + GST_OBJECT_LOCK (self); + + self->update_props = FALSE; +} + +static GstFlowReturn +gst_vulkan_h264_encoder_encode_frame (GstH264Encoder * base, + GstVideoCodecFrame * frame, GstH264EncoderFrame * h264_frame, + GstH264SliceHdr * slice_hdr, GArray * list0, GArray * list1) +{ + GstVulkanH264Encoder *self = GST_VULKAN_H264_ENCODER (base); + GstVulkanEncoderPicture *ref_pics[16] = { NULL, }; + gint i, j; + GstVulkanH264EncoderFrame *vk_frame = _GET_FRAME (h264_frame); + + if (!gst_vulkan_encoder_is_started (self->encoder)) + return GST_FLOW_NOT_NEGOTIATED; + + GST_OBJECT_LOCK (self); + update_properties_unlocked (self); + GST_OBJECT_UNLOCK (self); + + /* *INDENT-OFF* */ + vk_frame->h264pic_info = (StdVideoEncodeH264PictureInfo) { + .flags = { + .IdrPicFlag = ((h264_frame->type.slice_type == GST_H264_I_SLICE) + && h264_frame->type.is_ref), + .is_reference = h264_frame->type.is_ref, + .no_output_of_prior_pics_flag = + slice_hdr->dec_ref_pic_marking.no_output_of_prior_pics_flag, + .long_term_reference_flag = + slice_hdr->dec_ref_pic_marking.long_term_reference_flag, + .adaptive_ref_pic_marking_mode_flag = + slice_hdr->dec_ref_pic_marking.adaptive_ref_pic_marking_mode_flag, + }, + .seq_parameter_set_id = 
self->params.sps.seq_parameter_set_id, + .pic_parameter_set_id = self->params.pps.pic_parameter_set_id, + .idr_pic_id = slice_hdr->idr_pic_id, + .primary_pic_type = _gst_slice_type_2_vk_pic_type (&h264_frame->type), + .frame_num = h264_frame->gop_frame_num, + .PicOrderCnt = h264_frame->poc, + .temporal_id = 0, /* no support for MVC extension */ + .reserved1 = { 0, }, + .pRefLists = &vk_frame->ref_list_info, /* filled in setup_refs() */ + }; + + vk_frame->ref_info = (StdVideoEncodeH264ReferenceInfo) { + .flags = { + .used_for_long_term_reference = 0, + .reserved = 0, + }, + .primary_pic_type = vk_frame->h264pic_info.primary_pic_type, + .FrameNum = vk_frame->h264pic_info.frame_num, + .PicOrderCnt = vk_frame->h264pic_info.PicOrderCnt, + .long_term_frame_idx = 0, + .long_term_pic_num = 0, + .temporal_id = vk_frame->h264pic_info.temporal_id, + }; + /* *INDENT-ON* */ + + _setup_ref_lists (h264_frame, slice_hdr, list0, list1); + _setup_slice (self, h264_frame, slice_hdr); + + vk_frame->picture.codec_rc_info = &vk_frame->vkrc_info; + + g_assert (list0->len + list1->len <= 16); + for (i = 0; i < list0->len; i++) { + GstH264EncoderFrame *pic = g_array_index (list0, GstH264EncoderFrame *, i); + ref_pics[i] = &_GET_FRAME (pic)->picture; + } + for (j = 0; j < list1->len; j++) { + GstH264EncoderFrame *pic = g_array_index (list1, GstH264EncoderFrame *, j); + ref_pics[i++] = &_GET_FRAME (pic)->picture; + } + + if (!_write_headers (self, vk_frame)) + return GST_FLOW_ERROR; + + if (!gst_vulkan_encoder_encode (self->encoder, &self->in_state->info, + &vk_frame->picture, i, ref_pics)) { + GST_ERROR_OBJECT (self, "Encode frame error"); + return GST_FLOW_ERROR; + } + return GST_FLOW_OK; +} + +static GstFlowReturn +gst_vulkan_h264_encoder_prepare_output (GstH264Encoder * base, + GstVideoCodecFrame * frame) +{ + GstH264EncoderFrame *h264_frame; + GstVulkanH264EncoderFrame *vk_frame; + + h264_frame = gst_video_codec_frame_get_user_data (frame); + vk_frame = _GET_FRAME (h264_frame); + + 
gst_buffer_replace (&frame->output_buffer, vk_frame->picture.out_buffer); + + return GST_FLOW_OK; +} + +static void +gst_vulkan_h264_encoder_reset (GstH264Encoder * base) +{ + GstVulkanH264Encoder *self = GST_VULKAN_H264_ENCODER (base); + + GST_OBJECT_LOCK (self); + self->rc.bitrate = self->prop.bitrate; + self->rc.quality = self->prop.quality; + GST_OBJECT_UNLOCK (self); + + _reset_rc_props (self); + + self->coded_buffer_size = 0; +} + +static gboolean +gst_vulkan_h264_encoder_open (GstVideoEncoder * base) +{ + GstVulkanH264Encoder *self = GST_VULKAN_H264_ENCODER (base); + GstVulkanH264EncoderClass *klass = GST_VULKAN_H264_ENCODER_GET_CLASS (self); + GstVulkanEncoderCallbacks callbacks = { _setup_codec_pic, _setup_rc_pic }; + + if (!gst_vulkan_ensure_element_data (GST_ELEMENT (self), NULL, + &self->instance)) { + GST_ELEMENT_ERROR (self, RESOURCE, NOT_FOUND, + ("Failed to retrieve vulkan instance"), (NULL)); + return FALSE; + } + + if (!gst_vulkan_ensure_element_device (GST_ELEMENT (self), self->instance, + &self->device, klass->device_index)) { + return FALSE; + } + + self->encode_queue = gst_vulkan_device_select_queue (self->device, + VK_QUEUE_VIDEO_ENCODE_BIT_KHR); + if (!self->encode_queue) { + GST_ELEMENT_ERROR (self, RESOURCE, NOT_FOUND, + ("Failed to create/retrieve vulkan H.264 encoder queue"), (NULL)); + gst_clear_object (&self->instance); + return FALSE; + } + + self->encoder = + gst_vulkan_encoder_create_from_queue (self->encode_queue, + VK_VIDEO_CODEC_OPERATION_ENCODE_H264_BIT_KHR); + + if (!self->encoder) { + GST_ELEMENT_ERROR (self, RESOURCE, NOT_FOUND, + ("Failed to retrieve vulkan encoder"), (NULL)); + return FALSE; + } + + gst_vulkan_encoder_set_callbacks (self->encoder, &callbacks, self, NULL); + + return TRUE; +} + +static gboolean +gst_vulkan_h264_encoder_close (GstVideoEncoder * encoder) +{ + GstVulkanH264Encoder *self = GST_VULKAN_H264_ENCODER (encoder); + + gst_clear_object (&self->encoder); + gst_clear_object (&self->encode_queue); + 
gst_clear_object (&self->device); + gst_clear_object (&self->instance); + + return TRUE; +} + +static gboolean +gst_vulkan_h264_encoder_stop (GstVideoEncoder * encoder) +{ + GstVulkanH264Encoder *self = GST_VULKAN_H264_ENCODER (encoder); + + if (self->in_state) + gst_video_codec_state_unref (self->in_state); + self->in_state = NULL; + + gst_vulkan_encoder_stop (self->encoder); + + return GST_VIDEO_ENCODER_CLASS (parent_class)->stop (encoder); +} + +static gboolean +_query_context (GstVulkanH264Encoder * self, GstQuery * query) +{ + if (!self->encoder) + return FALSE; + if (gst_vulkan_handle_context_query (GST_ELEMENT (self), query, NULL, + self->instance, self->device)) + return TRUE; + + if (gst_vulkan_queue_handle_context_query (GST_ELEMENT (self), query, + self->encode_queue)) + return TRUE; + + return FALSE; +} + +static gboolean +gst_vulkan_h264_encoder_src_query (GstVideoEncoder * encoder, GstQuery * query) +{ + gboolean ret = FALSE; + + switch (GST_QUERY_TYPE (query)) { + case GST_QUERY_CONTEXT: + ret = _query_context (GST_VULKAN_H264_ENCODER (encoder), query); + break; + default: + ret = GST_VIDEO_ENCODER_CLASS (parent_class)->src_query (encoder, query); + break; + } + + return ret; +} + +static gboolean +gst_vulkan_h264_encoder_sink_query (GstVideoEncoder * encoder, GstQuery * query) +{ + gboolean ret = FALSE; + + switch (GST_QUERY_TYPE (query)) { + case GST_QUERY_CONTEXT: + ret = _query_context (GST_VULKAN_H264_ENCODER (encoder), query); + break; + default: + ret = GST_VIDEO_ENCODER_CLASS (parent_class)->sink_query (encoder, query); + break; + } + + return ret; +} + +static gboolean +gst_vulkan_h264_encoder_propose_allocation (GstVideoEncoder * venc, + GstQuery * query) +{ + gboolean need_pool; + GstCaps *caps, *profile_caps; + GstVideoInfo info; + guint size; + GstBufferPool *pool = NULL; + GstVulkanH264Encoder *self = GST_VULKAN_H264_ENCODER (venc); + + if (!self->encoder) { + GST_ELEMENT_ERROR (self, RESOURCE, NOT_FOUND, + ("The vulkan encoder has not 
been initialized properly"), (NULL)); + return FALSE; + } + + gst_query_parse_allocation (query, &caps, &need_pool); + + if (caps == NULL) + return FALSE; + + if (!gst_video_info_from_caps (&info, caps)) + return FALSE; + + /* the normal size of a frame */ + size = info.size; + + if (!need_pool) { + gint height, width; + + width = GST_VIDEO_INFO_WIDTH (&info); + height = GST_VIDEO_INFO_HEIGHT (&info); + need_pool = self->coded_width != width || self->coded_height != height; + } + + if (need_pool) { + GstCaps *new_caps; + GstStructure *config; + GstVulkanVideoCapabilities vk_caps; + + new_caps = gst_caps_copy (caps); + gst_caps_set_simple (new_caps, "width", G_TYPE_INT, self->coded_width, + "height", G_TYPE_INT, self->coded_height, NULL); + + pool = gst_vulkan_image_buffer_pool_new (self->device); + config = gst_buffer_pool_get_config (pool); + gst_buffer_pool_config_set_params (config, new_caps, size, 0, 0); + gst_caps_unref (new_caps); + + profile_caps = gst_vulkan_encoder_profile_caps (self->encoder); + gst_vulkan_image_buffer_pool_config_set_encode_caps (config, profile_caps); + gst_caps_unref (profile_caps); + + gst_vulkan_image_buffer_pool_config_set_allocation_params (config, + VK_IMAGE_USAGE_TRANSFER_DST_BIT | + VK_IMAGE_USAGE_VIDEO_ENCODE_SRC_BIT_KHR, + VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT, + VK_IMAGE_LAYOUT_VIDEO_ENCODE_SRC_KHR, + VK_ACCESS_TRANSFER_READ_BIT | VK_ACCESS_TRANSFER_WRITE_BIT); + + if (!gst_vulkan_encoder_caps (self->encoder, &vk_caps)) { + gst_structure_free (config); + g_object_unref (pool); + return FALSE; + } + if ((vk_caps.caps. 
+ flags & VK_VIDEO_CAPABILITY_SEPARATE_REFERENCE_IMAGES_BIT_KHR) + == 0) { + gst_structure_set (config, "num-layers", G_TYPE_UINT, + vk_caps.caps.maxDpbSlots, NULL); + } + + if (!gst_buffer_pool_set_config (pool, config)) { + GST_WARNING_OBJECT (self, "Failed to set pool config"); + g_object_unref (pool); + return FALSE; + } + } + + gst_query_add_allocation_pool (query, pool, size, + self->sps.vui_parameters.max_dec_frame_buffering, 0); + if (pool) + gst_object_unref (pool); + + if (!gst_vulkan_encoder_create_dpb_pool (self->encoder, caps)) { + GST_ERROR_OBJECT (self, "Unable to create the dpb pool"); + return FALSE; + } + + return TRUE; +} + +static gboolean +gst_vulkan_h264_encoder_set_format (GstVideoEncoder * encoder, + GstVideoCodecState * state) +{ + gboolean ret; + + ret = GST_VIDEO_ENCODER_CLASS (parent_class)->set_format (encoder, state); + if (ret) + ret = gst_h264_encoder_reconfigure (GST_H264_ENCODER (encoder), TRUE); + return ret; +} + +static void +gst_vulkan_h264_encoder_init (GTypeInstance * instance, gpointer g_class) +{ + GstVulkanH264Encoder *self = GST_VULKAN_H264_ENCODER (instance); + + gst_vulkan_buffer_memory_init_once (); + + self->prop.aud = TRUE; + self->prop.qp_i = 26; + self->prop.qp_p = 26; + self->prop.qp_b = 26; + self->prop.max_qp = 0; + self->prop.min_qp = 0; + self->prop.ratecontrol = VK_VIDEO_ENCODE_RATE_CONTROL_MODE_DISABLED_BIT_KHR; + self->prop.quality = 2; +} + +static void +gst_vulkan_h264_encoder_get_property (GObject * object, guint property_id, + GValue * value, GParamSpec * pspec) +{ + GstVulkanH264Encoder *self = GST_VULKAN_H264_ENCODER (object); + + GST_OBJECT_LOCK (self); + switch (property_id) { + case PROP_BITRATE: + g_value_set_uint (value, self->prop.bitrate); + break; + case PROP_AUD: + g_value_set_boolean (value, self->prop.aud); + break; + case PROP_QUALITY: + g_value_set_uint (value, self->prop.quality); + break; + case PROP_RATECONTROL: + g_value_set_enum (value, self->prop.ratecontrol); + break; + case 
PROP_QP_I: + g_value_set_uint (value, self->prop.qp_i); + break; + case PROP_QP_B: + g_value_set_uint (value, self->prop.qp_b); + break; + case PROP_QP_P: + g_value_set_uint (value, self->prop.qp_p); + break; + case PROP_MAX_QP: + g_value_set_uint (value, self->prop.max_qp); + break; + case PROP_MIN_QP: + g_value_set_uint (value, self->prop.min_qp); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, property_id, pspec); + break; + } + GST_OBJECT_UNLOCK (self); +} + +static void +gst_vulkan_h264_encoder_set_property (GObject * object, guint property_id, + const GValue * value, GParamSpec * pspec) +{ + GstVulkanH264Encoder *self = GST_VULKAN_H264_ENCODER (object); + GstH264Encoder *h264enc = GST_H264_ENCODER (object); + gboolean reconfigure = FALSE; + + GST_OBJECT_LOCK (self); + switch (property_id) { + case PROP_BITRATE: + self->prop.bitrate = g_value_get_uint (value); + reconfigure = TRUE; + break; + case PROP_AUD: + self->prop.aud = g_value_get_boolean (value); + break; + case PROP_QUALITY: + self->prop.quality = g_value_get_uint (value); + reconfigure = TRUE; + break; + case PROP_RATECONTROL: + self->prop.ratecontrol = g_value_get_enum (value); + reconfigure = TRUE; + break; + case PROP_QP_I: + self->prop.qp_i = g_value_get_uint (value); + self->update_props = TRUE; + break; + case PROP_QP_P: + self->prop.qp_p = g_value_get_uint (value); + self->update_props = TRUE; + break; + case PROP_QP_B: + self->prop.qp_b = g_value_get_uint (value); + self->update_props = TRUE; + break; + case PROP_MAX_QP: + self->prop.max_qp = g_value_get_uint (value); + self->update_props = TRUE; + break; + case PROP_MIN_QP: + self->prop.min_qp = g_value_get_uint (value); + self->update_props = TRUE; + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, property_id, pspec); + break; + } + GST_OBJECT_UNLOCK (self); + + if (reconfigure) + gst_h264_encoder_reconfigure (h264enc, FALSE); +} + +static void +gst_vulkan_h264_encoder_class_init (gpointer g_klass, gpointer 
class_data) +{ + GstVulkanH264EncoderClass *klass = g_klass; + GstElementClass *element_class = GST_ELEMENT_CLASS (klass); + GstVideoEncoderClass *encoder_class = GST_VIDEO_ENCODER_CLASS (klass); + GstH264EncoderClass *h264encoder_class = GST_H264_ENCODER_CLASS (klass); + GObjectClass *gobject_class = G_OBJECT_CLASS (klass); + struct CData *cdata = class_data; + GParamFlags param_flags = + G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS | G_PARAM_CONSTRUCT + | GST_PARAM_MUTABLE_PLAYING; + gchar *long_name; + const gchar *name; + GstPadTemplate *sink_pad_template, *src_pad_template; + GstCaps *sink_doc_caps, *src_doc_caps; + + name = "Vulkan H.264 encoder"; + if (cdata->description) + long_name = g_strdup_printf ("%s on %s", name, cdata->description); + else + long_name = g_strdup (name); + + klass->device_index = cdata->device_index; + + gst_element_class_set_metadata (element_class, long_name, + "Codec/Encoder/Video/Hardware", "A H.264 video encoder based on Vulkan", + "Stéphane Cerveau <scerveau@igalia.com>, " + "Victor Jaquez <vjaquez@igalia.com>"); + + parent_class = g_type_class_peek_parent (klass); + + src_doc_caps = gst_caps_from_string ("video/x-h264, " + "profile = { (string) high, (string) main, (string) constrained-baseline }, " + "stream-format = (string) byte-stream, alignment = (string) au"); + sink_doc_caps = gst_caps_from_string (GST_VIDEO_CAPS_MAKE_WITH_FEATURES + (GST_CAPS_FEATURE_MEMORY_VULKAN_IMAGE, "NV12")); + + sink_pad_template = + gst_pad_template_new ("sink", GST_PAD_SINK, GST_PAD_ALWAYS, cdata->raw); + gst_element_class_add_pad_template (element_class, sink_pad_template); + + src_pad_template = + gst_pad_template_new ("src", GST_PAD_SRC, GST_PAD_ALWAYS, cdata->codec); + gst_element_class_add_pad_template (element_class, src_pad_template); + + gst_pad_template_set_documentation_caps (sink_pad_template, sink_doc_caps); + gst_caps_unref (sink_doc_caps); + + gst_pad_template_set_documentation_caps (src_pad_template, src_doc_caps); + gst_caps_unref 
(src_doc_caps); + + gobject_class->set_property = + GST_DEBUG_FUNCPTR (gst_vulkan_h264_encoder_set_property); + gobject_class->get_property = + GST_DEBUG_FUNCPTR (gst_vulkan_h264_encoder_get_property); + + encoder_class->open = GST_DEBUG_FUNCPTR (gst_vulkan_h264_encoder_open); + encoder_class->close = GST_DEBUG_FUNCPTR (gst_vulkan_h264_encoder_close); + encoder_class->stop = GST_DEBUG_FUNCPTR (gst_vulkan_h264_encoder_stop); + encoder_class->src_query = + GST_DEBUG_FUNCPTR (gst_vulkan_h264_encoder_src_query); + encoder_class->sink_query = + GST_DEBUG_FUNCPTR (gst_vulkan_h264_encoder_sink_query); + encoder_class->propose_allocation = + GST_DEBUG_FUNCPTR (gst_vulkan_h264_encoder_propose_allocation); + encoder_class->set_format = + GST_DEBUG_FUNCPTR (gst_vulkan_h264_encoder_set_format); + + h264encoder_class->new_sequence = + GST_DEBUG_FUNCPTR (gst_vulkan_h264_encoder_new_sequence); + h264encoder_class->new_parameters = + GST_DEBUG_FUNCPTR (gst_vulkan_h264_encoder_new_parameters); + h264encoder_class->new_output = + GST_DEBUG_FUNCPTR (gst_vulkan_h264_encoder_new_output); + h264encoder_class->encode_frame = + GST_DEBUG_FUNCPTR (gst_vulkan_h264_encoder_encode_frame); + h264encoder_class->prepare_output = + GST_DEBUG_FUNCPTR (gst_vulkan_h264_encoder_prepare_output); + h264encoder_class->reset = GST_DEBUG_FUNCPTR (gst_vulkan_h264_encoder_reset); + + /** + * GstVulkanH264Encoder:bitrate: + * + * Bitrate is the amount of data (in kilobits) to process per second. It's + * both a function of the encoded bitstream data size of the encoded pictures + * as well as the frame rate used by the video sequence + * + * A higher bitrate will result in a better visual quality but it will result + * in a bigger file. A lower bitrate will result in a smaller file, but it + * will also result in a worse visual quality. 
+ * + * Since: 1.28 + */ + properties[PROP_BITRATE] = g_param_spec_uint ("bitrate", "Bitrate (kbps)", + "The desired bitrate expressed in kbps (0: auto-calculate)", + 0, G_MAXUINT, 0, param_flags); + + /** + * GstVulkanH264Encoder:aud: + * + * Insert the AU (Access Unit) delimiter for each frame. + * + * Since: 1.28 + */ + properties[PROP_AUD] = g_param_spec_boolean ("aud", "Insert AUD", + "Insert AU (Access Unit) delimiter for each frame", TRUE, param_flags); + + /** + * GstVulkanH264Encoder:qp-i: + * + * Indicates the quantization parameter for all the slices in each I frame. + * It's only applied when the rate control mode + * (#GstVulkanH264Encoder:rc-mode) is CQP (constant quantization parameter). + * + * Lower QP values mean higher video quality, but larger file sizes or higher + * bitrates. + * + * Since: 1.28 + */ + properties[PROP_QP_I] = g_param_spec_uint ("qp-i", "Constant I frame QP", + "Constant quantization value for each I-frame slice", 0, 51, 26, + param_flags); + + /** + * GstVulkanH264Encoder:qp-p: + * + * Indicates the quantization parameter for all the slices in each P frame. + * It's only applied when the rate control mode + * (#GstVulkanH264Encoder:rc-mode) is CQP (constant quantization parameter). + * + * Lower QP values mean higher video quality, but larger file sizes or higher + * bitrates. + * + * Since: 1.28 + */ + properties[PROP_QP_P] = g_param_spec_uint ("qp-p", "Constant P frame QP", + "Constant quantization value for each P-frame slice", 0, 51, 26, + param_flags); + + /** + * GstVulkanH264Encoder:qp-b: + * + * Indicates the quantization parameter for all the slices in each B frame. + * It's only applied when the rate control mode + * (#GstVulkanH264Encoder:rc-mode) is CQP (constant quantization parameter). + * + * Lower QP values mean higher video quality, but larger file sizes or higher + * bitrates. 
+ * + * Since: 1.28 + */ + properties[PROP_QP_B] = g_param_spec_uint ("qp-b", "Constant B frame QP", + "Constant quantization value for each B-frame slice", 0, 51, 26, + param_flags); + + /** + * GstVulkanH264Encoder:max-qp: + * + * Indicates the quantization parameter upper bound for each frame. It's only + * applied when the rate control mode (#GstVulkanH264Encoder:rc-mode) is + * either CBR (constant bitrate) or VBR (variable bitrate). + * + * Lower QP values mean higher video quality, but larger file sizes or higher + * bitrates. + * + * If zero, the upper bound will not be clamped. + * + * Since: 1.28 + */ + properties[PROP_MAX_QP] = g_param_spec_uint ("max-qp", "Maximum QP", + "Maximum quantization value for each frame (0: disabled)", 0, 51, 0, + param_flags); + + /** + * GstVulkanH264Encoder:min-qp: + * + * Indicates the quantization parameter lower bound for each frame. It's only + * applied when the rate control mode (#GstVulkanH264Encoder:rc-mode) is + * either CBR (constant bitrate) or VBR (variable bitrate). + * + * Lower QP values mean higher video quality, but larger file sizes or higher + * bitrates. + * + * If zero, the lower bound will not be clamped. + * + * Since: 1.28 + */ + properties[PROP_MIN_QP] = g_param_spec_uint ("min-qp", "Minimum QP", + "Minimum quantization value for each frame (0: disabled)", 0, 51, 0, + param_flags); + + /** + * GstVulkanH264Encoder:rate-control: + * + * Rate control algorithms adjust encoding parameters dynamically to regulate + * the output bitrate. This can involve managing Quantization Parameters (QP), + * quality, or other encoding parameters. + * + * Since: 1.28 + */ + properties[PROP_RATECONTROL] = g_param_spec_enum ("rate-control", + "rate control mode", "The encoding rate control mode to use", + GST_TYPE_VULKAN_ENCODER_RATE_CONTROL_MODE, + VK_VIDEO_ENCODE_RATE_CONTROL_MODE_DISABLED_BIT_KHR, param_flags); + + /** + * GstVulkanH264Encoder:quality: + * + * Video encode quality level. 
+ * + * Higher quality levels may produce higher quality videos at the cost of + * additional processing time. + * + * Since: 1.28 + */ + properties[PROP_QUALITY] = g_param_spec_uint ("quality", "quality level", + "Video encoding quality level", 0, 10, 2, param_flags); + + g_object_class_install_properties (gobject_class, N_PROPERTIES, properties); + + /* since GstVulkanEncoder is private API */ + gst_type_mark_as_plugin_api (GST_TYPE_VULKAN_ENCODER_RATE_CONTROL_MODE, 0); + + g_free (long_name); + g_free (cdata->description); + gst_clear_caps (&cdata->codec); + gst_clear_caps (&cdata->raw); + g_free (cdata); +} + +gboolean +gst_vulkan_h264_encoder_register (GstPlugin * plugin, GstVulkanDevice * device, + guint rank) +{ + static GOnce debug_once = G_ONCE_INIT; + GType type; + GTypeInfo type_info = { + .class_size = sizeof (GstVulkanH264EncoderClass), + .class_init = gst_vulkan_h264_encoder_class_init, + .instance_size = sizeof (GstVulkanH264Encoder), + .instance_init = gst_vulkan_h264_encoder_init, + }; + struct CData *cdata; + gboolean ret; + gchar *type_name, *feature_name; + GstCaps *codec = NULL, *raw = NULL; + + g_return_val_if_fail (GST_IS_PLUGIN (plugin), FALSE); + g_return_val_if_fail (GST_IS_VULKAN_DEVICE (device), FALSE); + + if (!gst_vulkan_physical_device_codec_caps (device->physical_device, + VK_VIDEO_CODEC_OPERATION_ENCODE_H264_BIT_KHR, &codec, &raw)) { + gst_plugin_add_status_warning (plugin, + "Unable to query H.264 encoder properties"); + return FALSE; + } + + cdata = g_new (struct CData, 1); + cdata->description = NULL; + cdata->device_index = device->physical_device->device_index; + cdata->codec = codec; + cdata->raw = raw; + + /* class data will be leaked if the element never gets instantiated */ + GST_MINI_OBJECT_FLAG_SET (cdata->codec, GST_MINI_OBJECT_FLAG_MAY_BE_LEAKED); + GST_MINI_OBJECT_FLAG_SET (cdata->raw, GST_MINI_OBJECT_FLAG_MAY_BE_LEAKED); + + gst_vulkan_create_feature_name (device, "GstVulkanH264Encoder", + 
"GstVulkanH264Device%dEncoder", &type_name, "vulkanh264enc", + "vulkanh264device%denc", &feature_name, &cdata->description, &rank); + + type_info.class_data = cdata; + + g_once (&debug_once, _register_debug_category, NULL); + type = g_type_register_static (GST_TYPE_H264_ENCODER, + type_name, &type_info, 0); + + ret = gst_element_register (plugin, feature_name, rank, type); + + g_free (type_name); + g_free (feature_name); + + return ret; +}
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/vulkan/vkh264enc.h
Added
@@ -0,0 +1,31 @@ +/* GStreamer + * Copyright (C) 2025 Igalia, S.L. + * Author: Stéphane Cerveau <scerveau@igalia.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/vulkan/vulkan.h> + +G_BEGIN_DECLS + +gboolean gst_vulkan_h264_encoder_register (GstPlugin * plugin, + GstVulkanDevice * device, + guint rank); + +G_END_DECLS
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/vulkan/vkh265dec.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/vulkan/vkh265dec.c
Changed
@@ -24,9 +24,11 @@ #include "vkh265dec.h" #include <gst/video/video.h> -#include <gst/vulkan/vulkan.h> -#include "gst/vulkan/gstvkdecoder-private.h" +#include <gst/codecs/gsth265decoder.h> +#include "gst/vulkan/gstvkdecoder-private.h" +#include "gst/vulkan/gstvkphysicaldevice-private.h" +#include "gstvkvideocaps.h" #include "gstvulkanelements.h" GST_DEBUG_CATEGORY_STATIC (gst_vulkan_h265_decoder_debug); @@ -42,6 +44,8 @@ { gchar *description; gint device_index; + GstCaps *codec; + GstCaps *raw; }; typedef struct _GstVulkanH265Decoder GstVulkanH265Decoder; @@ -142,18 +146,6 @@ gint device_index; }; -static GstStaticPadTemplate gst_vulkan_h265dec_sink_template = -GST_STATIC_PAD_TEMPLATE ("sink", GST_PAD_SINK, GST_PAD_ALWAYS, - GST_STATIC_CAPS ("video/x-h265, " - "profile = (string) main," - "stream-format = { (string) hvc1, (string) hev1, (string) byte-stream }, " - "alignment = (string) au")); - -static GstStaticPadTemplate gst_vulkan_h265dec_src_template = -GST_STATIC_PAD_TEMPLATE ("src", GST_PAD_SRC, GST_PAD_ALWAYS, - GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE_WITH_FEATURES - (GST_CAPS_FEATURE_MEMORY_VULKAN_IMAGE, "NV12"))); - #define gst_vulkan_h265_decoder_parent_class parent_class static gpointer @@ -370,6 +362,12 @@ VkImageUsageFlags usage; GstVulkanVideoCapabilities vk_caps; + if (self->dpb_size == 0) { + return + GST_VIDEO_DECODER_CLASS (parent_class)->decide_allocation (decoder, + query); + } + gst_query_parse_allocation (query, &caps, NULL); if (!caps) return FALSE; @@ -414,7 +412,7 @@ gst_vulkan_image_buffer_pool_config_set_allocation_params (config, usage, VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT, VK_IMAGE_LAYOUT_VIDEO_DECODE_DST_KHR, - VK_ACCESS_TRANSFER_WRITE_BIT); + VK_ACCESS_NONE_KHR); gst_vulkan_image_buffer_pool_config_set_decode_caps (config, profile_caps); gst_caps_unref (profile_caps); @@ -1612,6 +1610,13 @@ GstVulkanH265Decoder *self = GST_VULKAN_H265_DECODER (decoder); GstVulkanH265Picture *pic; GError *error = NULL; + 
VkVideoDecodeH265InlineSessionParametersInfoKHR inline_params = { + .sType = + VK_STRUCTURE_TYPE_VIDEO_DECODE_H265_INLINE_SESSION_PARAMETERS_INFO_KHR, + .pStdSPS = &self->std_sps.sps, + .pStdPPS = &self->std_pps.pps, + .pStdVPS = &self->std_vps.vps, + }; GST_TRACE_OBJECT (self, "End picture"); @@ -1625,6 +1630,10 @@ pic->vk_h265pic.pSliceSegmentOffsets = (const guint32 *) pic->base.slice_offs->data; + if (gst_vulkan_decoder_has_feature (self->decoder, + GST_VULKAN_DECODER_FEATURE_INLINE_PARAMS)) + vk_link_struct (&pic->base.decode_info, &inline_params); + GST_LOG_OBJECT (self, "Decoding frame, %d bytes %d slices", pic->vk_h265pic.pSliceSegmentOffsets[pic->vk_h265pic.sliceSegmentCount], pic->vk_h265pic.sliceSegmentCount); @@ -1687,6 +1696,8 @@ struct CData *cdata = class_data; gchar *long_name; const gchar *name; + GstPadTemplate *sink_pad_template, *src_pad_template; + GstCaps *sink_doc_caps, *src_doc_caps; name = "Vulkan H.265 decoder"; if (cdata->description) @@ -1702,11 +1713,27 @@ parent_class = g_type_class_peek_parent (g_klass); - gst_element_class_add_static_pad_template (element_class, - &gst_vulkan_h265dec_sink_template); + sink_doc_caps = gst_caps_from_string ("video/x-h265, " + "profile = { (string) main, (string) main-10}, " + "stream-format = { (string) hvc1, (string) hev1, (string) byte-stream }, " + "alignment = (string) au"); + src_doc_caps = + gst_caps_from_string (GST_VIDEO_CAPS_MAKE_WITH_FEATURES + (GST_CAPS_FEATURE_MEMORY_VULKAN_IMAGE, "NV12")); - gst_element_class_add_static_pad_template (element_class, - &gst_vulkan_h265dec_src_template); + sink_pad_template = + gst_pad_template_new ("sink", GST_PAD_SINK, GST_PAD_ALWAYS, cdata->codec); + gst_element_class_add_pad_template (element_class, sink_pad_template); + + src_pad_template = + gst_pad_template_new ("src", GST_PAD_SRC, GST_PAD_ALWAYS, cdata->raw); + gst_element_class_add_pad_template (element_class, src_pad_template); + + gst_pad_template_set_documentation_caps (sink_pad_template, 
sink_doc_caps); + gst_caps_unref (sink_doc_caps); + + gst_pad_template_set_documentation_caps (src_pad_template, src_doc_caps); + gst_caps_unref (src_doc_caps); element_class->set_context = GST_DEBUG_FUNCPTR (gst_vulkan_h265_decoder_set_context); @@ -1738,6 +1765,8 @@ g_free (long_name); g_free (cdata->description); + gst_clear_caps (&cdata->codec); + gst_clear_caps (&cdata->raw); g_free (cdata); } @@ -1756,12 +1785,26 @@ struct CData *cdata; gboolean ret; gchar *type_name, *feature_name; + GstCaps *codec = NULL, *raw = NULL; + + g_return_val_if_fail (GST_IS_PLUGIN (plugin), FALSE); + g_return_val_if_fail (GST_IS_VULKAN_DEVICE (device), FALSE); + if (!gst_vulkan_physical_device_codec_caps (device->physical_device, + VK_VIDEO_CODEC_OPERATION_DECODE_H265_BIT_KHR, &codec, &raw)) { + gst_plugin_add_status_warning (plugin, + "Unable to query H.265 decoder properties"); + return FALSE; + } cdata = g_new (struct CData, 1); cdata->description = NULL; cdata->device_index = device->physical_device->device_index; + cdata->codec = codec; + cdata->raw = raw; - g_return_val_if_fail (GST_IS_PLUGIN (plugin), FALSE); + /* class data will be leaked if the element never gets instantiated */ + GST_MINI_OBJECT_FLAG_SET (cdata->codec, GST_MINI_OBJECT_FLAG_MAY_BE_LEAKED); + GST_MINI_OBJECT_FLAG_SET (cdata->raw, GST_MINI_OBJECT_FLAG_MAY_BE_LEAKED); gst_vulkan_create_feature_name (device, "GstVulkanH265Decoder", "GstVulkanH265Device%dDecoder", &type_name, "vulkanh265dec",
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/vulkan/vkh265dec.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/vulkan/vkh265dec.h
Changed
@@ -19,13 +19,12 @@ #pragma once -#include <gst/codecs/gsth265decoder.h> - #include <gst/vulkan/vulkan.h> G_BEGIN_DECLS -gboolean -gst_vulkan_h265_decoder_register (GstPlugin * plugin, GstVulkanDevice *device, guint rank); +gboolean gst_vulkan_h265_decoder_register (GstPlugin * plugin, + GstVulkanDevice * device, + guint rank); G_END_DECLS
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/vulkan/vkimageidentity.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/vulkan/vkimageidentity.c
Changed
@@ -97,8 +97,8 @@ gstelement_class = (GstElementClass *) klass; gstbasetransform_class = (GstBaseTransformClass *) klass; - gst_element_class_set_metadata (gstelement_class, "Vulkan Image Identity", - "Filter/Video", "A Vulkan image copier", + gst_element_class_set_static_metadata (gstelement_class, + "Vulkan Image Identity", "Filter/Video", "A Vulkan image copier", "Matthew Waters <matthew@centricular.com>"); gst_element_class_add_static_pad_template (gstelement_class,
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/vulkan/vkoverlaycompositor.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/vulkan/vkoverlaycompositor.c
Changed
@@ -45,39 +45,31 @@ struct vk_overlay { - GstBuffer *buffer; - GstVideoOverlayComposition *composition; GstVideoOverlayRectangle *rectangle; GstVulkanFullScreenQuad *quad; }; static void -vk_overlay_clear (struct vk_overlay *overlay) +vk_overlay_free (struct vk_overlay *overlay) { - gst_clear_buffer (&overlay->buffer); - overlay->rectangle = NULL; - if (overlay->composition) - gst_video_overlay_composition_unref (overlay->composition); - overlay->composition = NULL; - + gst_video_overlay_rectangle_unref (overlay->rectangle); gst_clear_object (&overlay->quad); + g_free (overlay); } -static void -vk_overlay_init (struct vk_overlay *overlay, GstVulkanQueue * queue, - GstBuffer * buffer, GstVideoOverlayComposition * comp, +static struct vk_overlay * +vk_overlay_new (GstVulkanQueue * queue, GstBuffer * buffer, GstVideoOverlayRectangle * rectangle, GstVulkanHandle * vert, GstVulkanHandle * frag) { + struct vk_overlay *overlay = g_new0 (struct vk_overlay, 1); GstVideoOverlayFormatFlags flags; memset (overlay, 0, sizeof (*overlay)); flags = gst_video_overlay_rectangle_get_flags (rectangle); - overlay->buffer = gst_buffer_ref (buffer); - overlay->composition = gst_video_overlay_composition_ref (comp); - overlay->rectangle = rectangle; + overlay->rectangle = gst_video_overlay_rectangle_ref (rectangle); overlay->quad = gst_vulkan_full_screen_quad_new (queue); gst_vulkan_full_screen_quad_enable_clear (overlay->quad, FALSE); gst_vulkan_full_screen_quad_set_shaders (overlay->quad, vert, frag); @@ -93,6 +85,8 @@ VK_BLEND_FACTOR_SRC_ALPHA, VK_BLEND_FACTOR_ONE_MINUS_SRC_ALPHA, VK_BLEND_FACTOR_ONE, VK_BLEND_FACTOR_ONE_MINUS_SRC_ALPHA); } + + return overlay; } struct Vertex @@ -485,7 +479,7 @@ GstVulkanHandle *vert; GstVulkanHandle *frag; - GArray *overlays; + GQueue overlays; gboolean render_overlays; }; @@ -509,8 +503,9 @@ gstelement_class = (GstElementClass *) klass; gstbasetransform_class = (GstBaseTransformClass *) klass; - gst_element_class_set_metadata (gstelement_class, 
"Vulkan Overlay Compositor", - "Filter/Video", "Vulkan Overlay Composition element", + gst_element_class_set_static_metadata (gstelement_class, + "Vulkan Overlay Compositor", "Filter/Video", + "Vulkan Overlay Composition element", "Matthew Waters <matthew@centricular.com>"); gst_element_class_add_static_pad_template (gstelement_class, @@ -556,9 +551,7 @@ goto error; } - vk_overlay->overlays = g_array_new (FALSE, TRUE, sizeof (struct vk_overlay)); - g_array_set_clear_func (vk_overlay->overlays, - (GDestroyNotify) vk_overlay_clear); + g_queue_init (&vk_overlay->overlays); return TRUE; @@ -572,11 +565,7 @@ { GstVulkanOverlayCompositor *vk_overlay = GST_VULKAN_OVERLAY_COMPOSITOR (bt); - if (vk_overlay->overlays) { - g_array_set_size (vk_overlay->overlays, 0); - g_array_unref (vk_overlay->overlays); - } - vk_overlay->overlays = NULL; + g_queue_clear_full (&vk_overlay->overlays, (GDestroyNotify) vk_overlay_free); gst_clear_vulkan_handle (&vk_overlay->vert); gst_clear_vulkan_handle (&vk_overlay->frag); @@ -584,42 +573,6 @@ return GST_BASE_TRANSFORM_CLASS (parent_class)->stop (bt); } -static struct vk_overlay * -find_by_rectangle (GstVulkanOverlayCompositor * vk_overlay, - GstVideoOverlayRectangle * rectangle) -{ - int i; - - for (i = 0; i < vk_overlay->overlays->len; i++) { - struct vk_overlay *over = - &g_array_index (vk_overlay->overlays, struct vk_overlay, i); - - if (over->rectangle == rectangle) - return over; - } - - return NULL; -} - -static gboolean -overlay_in_rectangles (struct vk_overlay *over, - GstVideoOverlayComposition * composition) -{ - int i, n; - - n = gst_video_overlay_composition_n_rectangles (composition); - for (i = 0; i < n; i++) { - GstVideoOverlayRectangle *rect; - - rect = gst_video_overlay_composition_get_rectangle (composition, i); - - if (over->rectangle == rect) - return TRUE; - } - - return FALSE; -} - static GstCaps * gst_vulkan_overlay_compositor_transform_caps (GstBaseTransform * bt, GstPadDirection direction, GstCaps * caps, GstCaps * 
filter) @@ -704,15 +657,30 @@ return TRUE; } +static gint +_find_overlay_cmp (gconstpointer item, gconstpointer user_data) +{ + struct vk_overlay *overlay = (struct vk_overlay *) item; + GstVideoOverlayRectangle *rectangle = (GstVideoOverlayRectangle *) user_data; + return overlay->rectangle == rectangle ? 0 : 1; +} + +static gboolean +remove_overlay_meta_foreach (GstBuffer * buffer, GstMeta ** meta, + gpointer user_data) +{ + if ((*meta)->info->api == GST_VIDEO_OVERLAY_COMPOSITION_META_API_TYPE) + *meta = NULL; + + return TRUE; +} + static GstFlowReturn gst_vulkan_overlay_compositor_transform_ip (GstBaseTransform * bt, GstBuffer * buffer) { GstVulkanOverlayCompositor *vk_overlay = GST_VULKAN_OVERLAY_COMPOSITOR (bt); - GstVideoOverlayCompositionMeta *ometa; - GstVideoOverlayComposition *comp = NULL; GError *error = NULL; - int i, n; if (!vk_overlay->render_overlays) { GST_LOG_OBJECT (bt, @@ -720,76 +688,58 @@ return GST_FLOW_OK; } - ometa = gst_buffer_get_video_overlay_composition_meta (buffer); - if (!ometa) { - GST_LOG_OBJECT (bt, - "no GstVideoOverlayCompositionMeta on buffer, passthrough"); - return GST_FLOW_OK; - } - - comp = gst_video_overlay_composition_ref (ometa->overlay); - gst_buffer_remove_meta (buffer, (GstMeta *) ometa); - ometa = NULL; - - n = gst_video_overlay_composition_n_rectangles (comp); - if (n == 0) { - GST_LOG_OBJECT (bt, - "GstVideoOverlayCompositionMeta has 0 rectangles, passthrough"); - return GST_FLOW_OK; - } - - GST_LOG_OBJECT (bt, - "rendering GstVideoOverlayCompositionMeta with %u rectangles", n); - for (i = 0; i < n; i++) { - GstVideoOverlayRectangle *rectangle; - struct vk_overlay *over; - - rectangle = gst_video_overlay_composition_get_rectangle (comp, i); - - over = find_by_rectangle (vk_overlay, rectangle); - if (!over) { - struct vk_overlay new_overlay = { 0, }; - - vk_overlay_init (&new_overlay, vk_overlay->parent.queue, buffer, comp, - rectangle, vk_overlay->vert, vk_overlay->frag); + /* Steal previous list of overlays */ + 
GList *overlays = vk_overlay->overlays.head; + g_queue_init (&vk_overlay->overlays); + + gpointer state = NULL; + GstMeta *meta; + while ((meta = + gst_buffer_iterate_meta_filtered (buffer, &state, + GST_VIDEO_OVERLAY_COMPOSITION_META_API_TYPE)) != NULL) { + GstVideoOverlayCompositionMeta *ometa = + (GstVideoOverlayCompositionMeta *) meta; + guint n = gst_video_overlay_composition_n_rectangles (ometa->overlay); + + for (int i = 0; i < n; i++) { + GstVideoOverlayRectangle *rectangle = + gst_video_overlay_composition_get_rectangle (ometa->overlay, i); + struct vk_overlay *over; + + GList *l = g_list_find_custom (overlays, rectangle, _find_overlay_cmp); + if (l == NULL) { + over = vk_overlay_new (vk_overlay->parent.queue, buffer, + rectangle, vk_overlay->vert, vk_overlay->frag); + if (!vk_overlay_upload (over, &vk_overlay->parent.out_info, &error)) { + vk_overlay_free (over); + goto error; + } + g_queue_push_tail (&vk_overlay->overlays, over); + } else { + over = l->data; + overlays = g_list_remove_link (overlays, l); + g_queue_push_tail_link (&vk_overlay->overlays, l); + } - if (!vk_overlay_upload (&new_overlay, &vk_overlay->parent.out_info, + if (!gst_vulkan_full_screen_quad_set_output_buffer (over->quad, buffer, &error)) goto error; - g_array_append_val (vk_overlay->overlays, new_overlay); + if (!gst_vulkan_full_screen_quad_draw (over->quad, &error)) + goto error; } } - n = vk_overlay->overlays->len; - for (i = 0; i < n;) { - struct vk_overlay *over = - &g_array_index (vk_overlay->overlays, struct vk_overlay, i); - - if (!overlay_in_rectangles (over, comp)) { - g_array_remove_index (vk_overlay->overlays, i); - continue; - } - - if (!gst_vulkan_full_screen_quad_set_output_buffer (over->quad, buffer, - &error)) - goto error; - - if (!gst_vulkan_full_screen_quad_draw (over->quad, &error)) - goto error; - - i++; - } + /* Remove all composition metas, otherwise downstream might render them twice. 
*/ + gst_buffer_foreach_meta (buffer, remove_overlay_meta_foreach, NULL); - if (comp) - gst_video_overlay_composition_unref (comp); + /* Free any previous overlays that are not in use anymore */ + g_list_free_full (overlays, (GDestroyNotify) vk_overlay_free); return GST_FLOW_OK; error: GST_ELEMENT_ERROR (bt, LIBRARY, FAILED, ("%s", error->message), (NULL)); g_clear_error (&error); - if (comp) - gst_video_overlay_composition_unref (comp); return GST_FLOW_ERROR; }
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/vulkan/vkshaderspv.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/vulkan/vkshaderspv.c
Changed
@@ -162,7 +162,7 @@ "SPIRV fragment source", NULL, G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS)); - gst_element_class_set_metadata (gstelement_class, "Vulkan Shader SPV", + gst_element_class_set_static_metadata (gstelement_class, "Vulkan Shader SPV", "Filter/Video", "Performs operations with SPIRV shaders in Vulkan", "Martin Reboredo <yakoyoku@gmail.com>");
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/vulkan/vksink.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/vulkan/vksink.c
Changed
@@ -359,8 +359,8 @@ GError *error = NULL; GST_DEBUG ("changing state: %s => %s", - gst_element_state_get_name (GST_STATE_TRANSITION_CURRENT (transition)), - gst_element_state_get_name (GST_STATE_TRANSITION_NEXT (transition))); + gst_state_get_name (GST_STATE_TRANSITION_CURRENT (transition)), + gst_state_get_name (GST_STATE_TRANSITION_NEXT (transition))); switch (transition) { case GST_STATE_CHANGE_NULL_TO_READY:
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/vulkan/vkupload.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/vulkan/vkupload.c
Changed
@@ -31,6 +31,7 @@ #include <string.h> #include "gstvulkanelements.h" +#include "gstvkutils.h" #include "vkupload.h" GST_DEBUG_CATEGORY (gst_debug_vulkan_upload); @@ -239,13 +240,33 @@ _buffer_propose_allocation (impl, decide_query, query); } +static gboolean +_copy_frames (const GstVideoInfo * vinfo, GstBuffer * inbuf, GstBuffer * outbuf) +{ + GstVideoFrame in_frame, out_frame; + gboolean copied; + + if (!gst_video_frame_map (&in_frame, vinfo, inbuf, GST_MAP_READ)) + return FALSE; + + if (!gst_video_frame_map (&out_frame, vinfo, outbuf, GST_MAP_WRITE)) { + gst_video_frame_unmap (&in_frame); + return FALSE; + } + + copied = gst_video_frame_copy (&out_frame, &in_frame); + + gst_video_frame_unmap (&in_frame); + gst_video_frame_unmap (&out_frame); + + return copied; +} + static GstFlowReturn _raw_to_buffer_perform (gpointer impl, GstBuffer * inbuf, GstBuffer ** outbuf) { struct RawToBufferUpload *raw = impl; - GstVideoFrame v_frame; - GstFlowReturn ret; - guint i, n_mems; + GstFlowReturn ret = GST_FLOW_ERROR; GstBufferPool *pool; pool = gst_base_transform_get_buffer_pool @@ -258,41 +279,12 @@ != GST_FLOW_OK) goto out; - if (!gst_video_frame_map (&v_frame, &raw->in_info, inbuf, GST_MAP_READ)) { + if (!_copy_frames (&raw->in_info, inbuf, *outbuf)) { GST_ELEMENT_ERROR (raw->upload, RESOURCE, NOT_FOUND, ("%s", "Failed to map input buffer"), NULL); - return GST_FLOW_ERROR; - } - - n_mems = gst_buffer_n_memory (*outbuf); - for (i = 0; i < n_mems; i++) { - GstMapInfo map_info; - gsize plane_size; - GstMemory *mem; - - mem = gst_buffer_peek_memory (*outbuf, i); - if (!gst_memory_map (GST_MEMORY_CAST (mem), &map_info, GST_MAP_WRITE)) { - GST_ELEMENT_ERROR (raw->upload, RESOURCE, NOT_FOUND, - ("%s", "Failed to map output memory"), NULL); - gst_buffer_unref (*outbuf); - *outbuf = NULL; - ret = GST_FLOW_ERROR; - goto out; - } - - plane_size = - GST_VIDEO_INFO_PLANE_STRIDE (&raw->out_info, - i) * GST_VIDEO_INFO_COMP_HEIGHT (&raw->out_info, i); - g_assert (plane_size <= 
map_info.size);
-    memcpy (map_info.data, v_frame.data[i], plane_size);
-
-    gst_memory_unmap (GST_MEMORY_CAST (mem), &map_info);
+    ret = GST_FLOW_ERROR;
   }
-  gst_video_frame_unmap (&v_frame);
-
-  ret = GST_FLOW_OK;
-
 out:
   gst_object_unref (pool);
   return ret;
@@ -392,7 +384,6 @@
   GArray *barriers = NULL;
   VkImageLayout dst_layout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL;
   GstBufferPool *pool;
-  GstVideoMeta *in_vmeta, *out_vmeta;
   pool = gst_base_transform_get_buffer_pool
       (GST_BASE_TRANSFORM_CAST (raw->upload));
@@ -459,78 +450,64 @@
   g_clear_pointer (&barriers, g_array_unref);
   n_mems = gst_buffer_n_memory (*outbuf);
-  out_vmeta = gst_buffer_get_video_meta (*outbuf);
   n_planes = GST_VIDEO_INFO_N_PLANES (&raw->out_info);
-  in_vmeta = gst_buffer_get_video_meta (inbuf);
-
   for (i = 0; i < n_planes; i++) {
     VkBufferImageCopy region;
-    GstMemory *in_mem, *out_mem;
+    GstMemory *mem;
     GstVulkanBufferMemory *buf_mem;
     GstVulkanImageMemory *img_mem;
     const VkImageAspectFlags aspects[] = { VK_IMAGE_ASPECT_PLANE_0_BIT,
       VK_IMAGE_ASPECT_PLANE_1_BIT, VK_IMAGE_ASPECT_PLANE_2_BIT, };
     VkImageAspectFlags plane_aspect;
-    guint idx, len;
-    gsize offset, skip;
+    guint32 width, height, row, img_h;
-    offset = in_vmeta ? 
in_vmeta->offset[i]
-        : GST_VIDEO_INFO_PLANE_OFFSET (&raw->in_info, i);
-    if (!gst_buffer_find_memory (inbuf, offset, 1, &idx, &len, &skip)) {
-      GST_WARNING_OBJECT (raw->upload,
-          "Input buffer plane %u, no memory at offset %" G_GSIZE_FORMAT, i,
-          offset);
+    mem = gst_vulkan_buffer_peek_plane_memory (inbuf, &raw->in_info, i);
+    if (!mem)
       goto unlock_error;
-    }
-    in_mem = gst_buffer_peek_memory (inbuf, i);
-
-    if (!gst_is_vulkan_buffer_memory (in_mem)) {
-      GST_WARNING_OBJECT (raw->upload, "Input is not a GstVulkanBufferMemory");
+    if (!gst_is_vulkan_buffer_memory (mem)) {
+      GST_WARNING_OBJECT (raw->upload, "Input buffer is not a Vulkan buffer");
       goto unlock_error;
     }
-    buf_mem = (GstVulkanBufferMemory *) in_mem;
+    buf_mem = (GstVulkanBufferMemory *) mem;
     if (n_planes == n_mems)
       plane_aspect = VK_IMAGE_ASPECT_COLOR_BIT;
     else
       plane_aspect = aspects[i];
+    gst_vulkan_buffer_get_plane_dimensions (inbuf, &raw->in_info, i, &width,
+        &height, &row, &img_h);
+
     /* *INDENT-OFF* */
     region = (VkBufferImageCopy) {
-        .bufferOffset = 0,
-        .bufferRowLength = GST_VIDEO_INFO_COMP_WIDTH (&raw->in_info, i),
-        .bufferImageHeight = GST_VIDEO_INFO_COMP_HEIGHT (&raw->in_info, i),
-        .imageSubresource = {
-            .aspectMask = plane_aspect,
-            .mipLevel = 0,
-            .baseArrayLayer = 0,
-            .layerCount = 1,
-        },
-        .imageOffset = { .x = 0, .y = 0, .z = 0, },
-        .imageExtent = {
-            .width = GST_VIDEO_INFO_COMP_WIDTH (&raw->out_info, i),
-            .height = GST_VIDEO_INFO_COMP_HEIGHT (&raw->out_info, i),
-            .depth = 1,
-        }
+      .bufferOffset = 0,
+      .bufferRowLength = row,
+      .bufferImageHeight = img_h,
+      .imageSubresource = {
+        .aspectMask = plane_aspect,
+        .mipLevel = 0,
+        .baseArrayLayer = 0,
+        .layerCount = 1,
+      },
+      .imageOffset = { .x = 0, .y = 0, .z = 0, },
+      .imageExtent = {
+        .width = width,
+        .height = height,
+        .depth = 1,
+      }
     };
+    /* *INDENT-ON* */
-    offset = out_vmeta ? 
out_vmeta->offset[i]
-        : GST_VIDEO_INFO_PLANE_OFFSET (&raw->out_info, i);
-    if (!gst_buffer_find_memory (*outbuf, offset, 1, &idx, &len, &skip)) {
-      GST_WARNING_OBJECT (raw->upload,
-          "Output buffer plane %u, no memory at offset %" G_GSIZE_FORMAT, i,
-          offset);
+    mem = gst_vulkan_buffer_peek_plane_memory (*outbuf, &raw->out_info, i);
+    if (!mem)
       goto unlock_error;
-    }
-    out_mem = gst_buffer_peek_memory (*outbuf, idx);
-
-    if (!gst_is_vulkan_image_memory (out_mem)) {
-      GST_WARNING_OBJECT (raw->upload, "Output is not a GstVulkanImageMemory");
+    if (!gst_is_vulkan_image_memory (mem)) {
+      GST_WARNING_OBJECT (raw->upload, "Output buffer is not a Vulkan image");
       goto unlock_error;
     }
-    img_mem = (GstVulkanImageMemory *) out_mem;
+    img_mem = (GstVulkanImageMemory *) mem;
     gst_vulkan_command_buffer_lock (cmd_buf);
     vkCmdCopyBufferToImage (cmd_buf->cmd, buf_mem->buffer, img_mem->image,
@@ -679,7 +656,6 @@
   guint i, n_planes, n_out_mems;
   VkImageLayout dst_layout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL;
   GstBufferPool *pool;
-  GstVideoMeta *in_vmeta, *out_vmeta;
   pool = gst_base_transform_get_buffer_pool
       (GST_BASE_TRANSFORM_CAST (raw->upload));
@@ -743,45 +719,32 @@
   }
   g_clear_pointer (&barriers, g_array_unref);
-  in_vmeta = gst_buffer_get_video_meta (inbuf);
   n_out_mems = gst_buffer_n_memory (*outbuf);
-  out_vmeta = gst_buffer_get_video_meta (*outbuf);
   n_planes = GST_VIDEO_INFO_N_PLANES (&raw->in_info);
   for (i = 0; i < n_planes; i++) {
     VkBufferImageCopy region;
-    GstMemory *in_mem = NULL, *out_mem;
+    GstMemory *mem;
     GstVulkanBufferMemory *buf_mem;
     GstVulkanImageMemory *img_mem;
     const VkImageAspectFlags aspects[] = { VK_IMAGE_ASPECT_PLANE_0_BIT,
       VK_IMAGE_ASPECT_PLANE_1_BIT, VK_IMAGE_ASPECT_PLANE_2_BIT, };
     VkImageAspectFlags plane_aspect;
-    guint idx, len;
-    gsize offset, skip;
+    guint32 width, height, row, img_h;
-    offset = in_vmeta ? 
in_vmeta->offset[i] :
-        GST_VIDEO_INFO_PLANE_OFFSET (&raw->in_info, i);
-    if (!gst_buffer_find_memory (inbuf, offset, 1, &idx, &len, &skip)) {
-      GST_WARNING_OBJECT (raw->upload,
-          "Input buffer plane %u, no memory at offset %" G_GSIZE_FORMAT, i,
-          offset);
+    mem = gst_vulkan_buffer_peek_plane_memory (inbuf, &raw->in_info, i);
+    if (!mem)
       goto unlock_error;
-    }
-    in_mem = gst_buffer_peek_memory (inbuf, idx);
-    if (gst_is_vulkan_buffer_memory (in_mem)) {
+    if (gst_is_vulkan_buffer_memory (mem)) {
       GST_TRACE_OBJECT (raw->upload, "Input is a GstVulkanBufferMemory");
-      buf_mem = (GstVulkanBufferMemory *) in_mem;
     } else if (in_vk_copy) {
       GST_TRACE_OBJECT (raw->upload,
           "Have buffer copy of GstVulkanBufferMemory");
-      in_mem = gst_buffer_peek_memory (in_vk_copy, i);
-      g_assert (gst_is_vulkan_buffer_memory (in_mem));
-      buf_mem = (GstVulkanBufferMemory *) in_mem;
+      mem = gst_buffer_peek_memory (in_vk_copy, i);
+      g_assert (gst_is_vulkan_buffer_memory (mem));
     } else {
-      GstVideoFrame in_frame, out_frame;
-
       GST_TRACE_OBJECT (raw->upload,
           "Copying input to a new GstVulkanBufferMemory");
       if (!raw->in_pool) {
@@ -806,69 +769,50 @@
         goto unlock_error;
       }
-      if (!gst_video_frame_map (&in_frame, &raw->in_info, inbuf, GST_MAP_READ)) {
-        GST_WARNING_OBJECT (raw->upload, "Failed to map input buffer");
+      if (!_copy_frames (&raw->in_info, inbuf, in_vk_copy)) {
+        GST_ERROR_OBJECT (raw->upload, "Failed to copy to Vulkan buffer");
         goto unlock_error;
       }
-      if (!gst_video_frame_map (&out_frame, &raw->in_info, in_vk_copy,
-              GST_MAP_WRITE)) {
-        gst_video_frame_unmap (&in_frame);
-        GST_WARNING_OBJECT (raw->upload, "Failed to map input buffer");
-        goto unlock_error;
-      }
-
-      if (!gst_video_frame_copy (&out_frame, &in_frame)) {
-        gst_video_frame_unmap (&in_frame);
-        gst_video_frame_unmap (&out_frame);
-        GST_WARNING_OBJECT (raw->upload, "Failed to copy input buffer");
-        goto unlock_error;
-      }
-
-      gst_video_frame_unmap (&in_frame);
-      gst_video_frame_unmap (&out_frame);
-
-      in_mem = gst_buffer_peek_memory 
(in_vk_copy, i);
-      buf_mem = (GstVulkanBufferMemory *) in_mem;
+      mem = gst_buffer_peek_memory (in_vk_copy, i);
     }
-    offset = out_vmeta ? out_vmeta->offset[i] : GST_VIDEO_INFO_PLANE_OFFSET (&raw->out_info, i);
-    if (!gst_buffer_find_memory (*outbuf, offset, 1, &idx, &len, &skip)) {
-      GST_WARNING_OBJECT (raw->upload,
-          "Output buffer plane %u, no memory at offset %" G_GSIZE_FORMAT, i,
-          offset);
-      goto unlock_error;
-    }
-    out_mem = gst_buffer_peek_memory (*outbuf, idx);
+    buf_mem = (GstVulkanBufferMemory *) mem;
-    if (!gst_is_vulkan_image_memory (out_mem)) {
-      GST_WARNING_OBJECT (raw->upload, "Output is not a GstVulkanImageMemory");
+    mem = gst_vulkan_buffer_peek_plane_memory (*outbuf, &raw->out_info, i);
+    if (!mem)
+      goto unlock_error;
+    if (!gst_is_vulkan_image_memory (mem)) {
+      GST_WARNING_OBJECT (raw->upload, "Output buffer is not a Vulkan image");
       goto unlock_error;
     }
-    img_mem = (GstVulkanImageMemory *) out_mem;
+    img_mem = (GstVulkanImageMemory *) mem;
     if (n_planes == n_out_mems)
      plane_aspect = VK_IMAGE_ASPECT_COLOR_BIT;
    else
      plane_aspect = aspects[i];
+    gst_vulkan_buffer_get_plane_dimensions (inbuf, &raw->in_info, i, &width,
+        &height, &row, &img_h);
+
     /* *INDENT-OFF* */
     region = (VkBufferImageCopy) {
-        .bufferOffset = 0,
-        .bufferRowLength = GST_VIDEO_INFO_COMP_WIDTH (&raw->in_info, i),
-        .bufferImageHeight = GST_VIDEO_INFO_COMP_HEIGHT (&raw->in_info, i),
-        .imageSubresource = {
-            .aspectMask = plane_aspect,
-            .mipLevel = 0,
-            .baseArrayLayer = 0,
-            .layerCount = 1,
-        },
-        .imageOffset = { .x = 0, .y = 0, .z = 0, },
-        .imageExtent = {
-            .width = GST_VIDEO_INFO_COMP_WIDTH (&raw->out_info, i),
-            .height = GST_VIDEO_INFO_COMP_HEIGHT (&raw->out_info, i),
-            .depth = 1,
-        }
+      .bufferOffset = 0,
+      .bufferRowLength = row,
+      .bufferImageHeight = img_h,
+      .imageSubresource = {
+        .aspectMask = plane_aspect,
+        .mipLevel = 0,
+        .baseArrayLayer = 0,
+        .layerCount = 1,
+      },
+      .imageOffset = { .x = 0, .y = 0, .z = 0, },
+      .imageExtent = {
+        .width = width,
+        .height = height,
+        
.depth = 1, + } }; /* *INDENT-ON* */ @@ -1046,7 +990,7 @@ gobject_class->set_property = gst_vulkan_upload_set_property; gobject_class->get_property = gst_vulkan_upload_get_property; - gst_element_class_set_metadata (gstelement_class, "Vulkan Uploader", + gst_element_class_set_static_metadata (gstelement_class, "Vulkan Uploader", "Filter/Video", "A Vulkan data uploader", "Matthew Waters <matthew@centricular.com>"); @@ -1178,8 +1122,8 @@ GstStateChangeReturn ret = GST_STATE_CHANGE_SUCCESS; GST_DEBUG ("changing state: %s => %s", - gst_element_state_get_name (GST_STATE_TRANSITION_CURRENT (transition)), - gst_element_state_get_name (GST_STATE_TRANSITION_NEXT (transition))); + gst_state_get_name (GST_STATE_TRANSITION_CURRENT (transition)), + gst_state_get_name (GST_STATE_TRANSITION_NEXT (transition))); switch (transition) { case GST_STATE_CHANGE_NULL_TO_READY:
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/vulkan/vkviewconvert.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/vulkan/vkviewconvert.c
Changed
@@ -368,7 +368,7 @@ GstVulkanVideoFilter *vfilter = GST_VULKAN_VIDEO_FILTER (conv); GstVideoMultiviewMode in_mode, out_mode; GstVideoMultiviewFlags in_flags, out_flags; - struct ViewUpdate data; + struct ViewUpdate data = { 0 }; GstMapInfo map_info; guint l_index, r_index; gboolean mono_input = FALSE; @@ -610,8 +610,8 @@ GST_TYPE_VULKAN_STEREO_DOWNMIX, DEFAULT_DOWNMIX, G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS)); - gst_element_class_set_metadata (gstelement_class, "Vulkan View Convert", - "Filter/Video/Convert", "A Vulkan View Convert", + gst_element_class_set_static_metadata (gstelement_class, + "Vulkan View Convert", "Filter/Video/Convert", "A Vulkan View Convert", "Matthew Waters <matthew@centricular.com>"); gst_type_mark_as_plugin_api (GST_TYPE_VULKAN_STEREO_DOWNMIX, 0);
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/vulkan/vkvp9dec.c
Added
@@ -0,0 +1,1205 @@ +/* GStreamer + * Copyright (C) 2025 Igalia, S.L. + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +/** + * SECTION:element-vulkanvp9dec + * @title: vulkanvp9dec + * @short_description: A Vulkan based VP9 video decoder + * + * vulkanvp9dec decodes VP9 bitstreams into raw video surfaces using + * Vulkan video extensions. + * + * + * ## Example launch line + * ``` + * gst-launch-1.0 filesrc location=video.webm ! matroskademux ! vp9parse ! vulkanvp9dec ! vulkandownload ! videoconvert ! 
autovideosink + * ``` + * + * Since: 1.28 + */ + +#include "vkvp9dec.h" + +#include <gst/video/video.h> +#include <gst/codecs/gstvp9decoder.h> + +#include "gst/vulkan/gstvkdecoder-private.h" +#include "gstvkvideocaps.h" +#include "gstvulkanelements.h" + +GST_DEBUG_CATEGORY_STATIC (gst_vulkan_vp9_decoder_debug); +#define GST_CAT_DEFAULT gst_vulkan_vp9_decoder_debug + +#define GST_VULKAN_VP9_DECODER(obj) ((GstVulkanVp9Decoder *) obj) +#define GST_VULKAN_VP9_DECODER_GET_CLASS(obj) (G_TYPE_INSTANCE_GET_CLASS ((obj), G_TYPE_FROM_INSTANCE (obj), GstVulkanVp9DecoderClass)) +#define GST_VULKAN_VP9_DECODER_CLASS(klass) ((GstVulkanVp9DecoderClass *) klass) + +static GstElementClass *parent_class = NULL; + +#define GST_VULKAN_VP9_MAX_DPB_SLOTS 32 + +struct CData +{ + gchar *description; + gint device_index; + GstCaps *codec; + GstCaps *raw; +}; + +typedef struct _GstVulkanVp9Decoder GstVulkanVp9Decoder; +typedef struct _GstVulkanVp9DecoderClass GstVulkanVp9DecoderClass; +typedef struct _GstVulkanVp9Picture GstVulkanVp9Picture; + +struct _GstVulkanVp9Decoder +{ + GstVp9Decoder parent; + + GstVulkanInstance *instance; + GstVulkanDevice *device; + GstVulkanQueue *graphic_queue, *decode_queue; + + GstVulkanDecoder *decoder; + + gboolean need_negotiation; + gboolean resolution_changed; + + gint coded_width, coded_height; + gint dpb_size; + + VkSamplerYcbcrRange range; + VkChromaLocation yloc; + + GstVideoCodecState *output_state; + GstVideoCodecState *input_state; + struct + { + StdVideoVP9ColorConfig color_config; + } vk; + + guint32 free_slot_mask; + gboolean last_show_frame; +}; + +struct _GstVulkanVp9Picture +{ + GstVulkanDecoderPicture base; + + StdVideoVP9Segmentation segmentation; + StdVideoVP9LoopFilter loop_filter; + + VkVideoDecodeVP9PictureInfoKHR vk_pic; + StdVideoDecodeVP9PictureInfo std_pic; + + gint32 slot_idx; + + /* Used to update the mask when this picture is freed. 
*/
+  guint32 *free_slot_mask;
+};
+struct _GstVulkanVp9DecoderClass
+{
+  GstVp9DecoderClass parent;
+
+  gint device_index;
+};
+
+#define gst_vulkan_vp9_decoder_parent_class parent_class
+
+static gpointer
+_register_debug_category (gpointer data)
+{
+  GST_DEBUG_CATEGORY_INIT (gst_vulkan_vp9_decoder_debug, "vulkanvp9dec", 0,
+      "Vulkan VP9 decoder");
+
+  return NULL;
+}
+
+static gboolean
+_find_queues (GstVulkanDevice * device, GstVulkanQueue * queue, gpointer data)
+{
+  GstVulkanVp9Decoder *self = (GstVulkanVp9Decoder *) data;
+  guint32 flags =
+      device->physical_device->queue_family_props[queue->family].queueFlags;
+  guint32 codec =
+      device->physical_device->queue_family_ops[queue->family].video;
+
+  if (!self->graphic_queue
+      && ((flags & VK_QUEUE_GRAPHICS_BIT) == VK_QUEUE_GRAPHICS_BIT)) {
+    self->graphic_queue = (GstVulkanQueue *) gst_object_ref (queue);
+  }
+
+  if (!self->decode_queue
+      && ((codec & VK_VIDEO_CODEC_OPERATION_DECODE_VP9_BIT_KHR)
+          == VK_VIDEO_CODEC_OPERATION_DECODE_VP9_BIT_KHR)
+      && ((flags & VK_QUEUE_VIDEO_DECODE_BIT_KHR)
+          == VK_QUEUE_VIDEO_DECODE_BIT_KHR)) {
+    self->decode_queue = (GstVulkanQueue *) gst_object_ref (queue);
+  }
+
+  return !(self->decode_queue && self->graphic_queue);
+}
+
+static gboolean
+gst_vulkan_vp9_decoder_open (GstVideoDecoder * decoder)
+{
+  GstVulkanVp9Decoder *self = GST_VULKAN_VP9_DECODER (decoder);
+
+  if (!gst_vulkan_ensure_element_data (GST_ELEMENT (decoder), NULL,
+          &self->instance)) {
+    GST_ELEMENT_ERROR (self, RESOURCE, NOT_FOUND,
+        ("Failed to retrieve vulkan instance"), (NULL));
+    return FALSE;
+  }
+
+  if (!gst_vulkan_ensure_element_device (GST_ELEMENT (decoder), self->instance,
+          &self->device, 0)) {
+    return FALSE;
+  }
+
+  if (!gst_vulkan_queue_run_context_query (GST_ELEMENT (self),
+          &self->graphic_queue)) {
+    GST_DEBUG_OBJECT (self, "No graphic queue retrieved from peer elements");
+  }
+
+  gst_vulkan_device_foreach_queue (self->device, _find_queues, self);
+
+  if (!self->decode_queue) {
+    GST_ELEMENT_ERROR 
(self, RESOURCE, NOT_FOUND, + ("Failed to create/retrieve vulkan VP9 decoder queue"), (NULL)); + return FALSE; + } + + self->decoder = gst_vulkan_decoder_new_from_queue (self->decode_queue, + VK_VIDEO_CODEC_OPERATION_DECODE_VP9_BIT_KHR); + if (!self->decoder) { + GST_ELEMENT_ERROR (self, RESOURCE, NOT_FOUND, + ("Failed to create vulkan VP9 decoder"), (NULL)); + return FALSE; + } + + return TRUE; +} + +static gboolean +gst_vulkan_vp9_decoder_close (GstVideoDecoder * decoder) +{ + GstVulkanVp9Decoder *self = GST_VULKAN_VP9_DECODER (decoder); + + gst_clear_object (&self->decoder); + gst_clear_object (&self->decode_queue); + gst_clear_object (&self->graphic_queue); + gst_clear_object (&self->device); + gst_clear_object (&self->instance); + + return TRUE; +} + +static gboolean +gst_vulkan_vp9_decoder_stop (GstVideoDecoder * decoder) +{ + GstVulkanVp9Decoder *self = GST_VULKAN_VP9_DECODER (decoder); + + if (self->decoder) + gst_vulkan_decoder_stop (self->decoder); + + g_clear_pointer (&self->output_state, gst_video_codec_state_unref); + g_clear_pointer (&self->input_state, gst_video_codec_state_unref); + + return GST_VIDEO_DECODER_CLASS (parent_class)->stop (decoder); +} + +static void +gst_vulkan_vp9_decoder_set_context (GstElement * element, GstContext * context) +{ + GstVulkanVp9Decoder *self = GST_VULKAN_VP9_DECODER (element); + + gst_vulkan_handle_set_context (element, context, NULL, &self->instance); + + GST_ELEMENT_CLASS (parent_class)->set_context (element, context); +} + +static gboolean +_query_context (GstVulkanVp9Decoder * self, GstQuery * query) +{ + if (gst_vulkan_handle_context_query (GST_ELEMENT (self), query, NULL, + self->instance, self->device)) + return TRUE; + + if (gst_vulkan_queue_handle_context_query (GST_ELEMENT (self), query, + self->graphic_queue)) + return TRUE; + + return FALSE; +} + +static gboolean +gst_vulkan_vp9_decoder_src_query (GstVideoDecoder * decoder, GstQuery * query) +{ + gboolean ret = FALSE; + + switch (GST_QUERY_TYPE (query)) { 
+ case GST_QUERY_CONTEXT: + ret = _query_context (GST_VULKAN_VP9_DECODER (decoder), query); + break; + default: + ret = GST_VIDEO_DECODER_CLASS (parent_class)->src_query (decoder, query); + break; + } + + return ret; +} + +static gboolean +gst_vulkan_vp9_decoder_sink_query (GstVideoDecoder * decoder, GstQuery * query) +{ + gboolean ret = FALSE; + + switch (GST_QUERY_TYPE (query)) { + case GST_QUERY_CONTEXT: + ret = _query_context (GST_VULKAN_VP9_DECODER (decoder), query); + break; + default: + ret = GST_VIDEO_DECODER_CLASS (parent_class)->sink_query (decoder, query); + break; + } + + return ret; +} + +static gboolean +gst_vulkan_vp9_decoder_negotiate (GstVideoDecoder * decoder) +{ + GstVulkanVp9Decoder *self = GST_VULKAN_VP9_DECODER (decoder); + VkVideoFormatPropertiesKHR format_prop; + GstVideoFormat format; + + /* Ignore downstream renegotiation request. */ + if (!self->need_negotiation) { + GST_DEBUG_OBJECT (decoder, + "Input state hasn't changed, no need to reconfigure downstream caps"); + goto bail; + } + + if (!gst_vulkan_decoder_out_format (self->decoder, &format_prop)) + return FALSE; + + self->need_negotiation = FALSE; + + if (self->output_state) + gst_video_codec_state_unref (self->output_state); + + format = gst_vulkan_format_to_video_format (format_prop.format); + self->output_state = gst_video_decoder_set_interlaced_output_state (decoder, + format, GST_VIDEO_INTERLACE_MODE_PROGRESSIVE, self->coded_width, + self->coded_height, self->input_state); + + self->output_state->caps = gst_video_info_to_caps (&self->output_state->info); + gst_caps_set_features_simple (self->output_state->caps, + gst_caps_features_new_static_str (GST_CAPS_FEATURE_MEMORY_VULKAN_IMAGE, + NULL)); + + GST_INFO_OBJECT (self, "Negotiated caps %" GST_PTR_FORMAT, + self->output_state->caps); + +bail: + return GST_VIDEO_DECODER_CLASS (parent_class)->negotiate (decoder); +} + +static gboolean +gst_vulkan_vp9_decoder_decide_allocation (GstVideoDecoder * decoder, + GstQuery * query) +{ + 
GstVulkanVp9Decoder *self = GST_VULKAN_VP9_DECODER (decoder); + GstCaps *new_caps, *profile_caps, *caps = NULL, *dpb_caps = NULL; + GstBufferPool *pool = NULL; + GstStructure *config; + guint size, min, max; + gboolean update_pool; + VkImageUsageFlags usage; + GstVulkanVideoCapabilities vk_caps; + + if (self->dpb_size == 0) { + return + GST_VIDEO_DECODER_CLASS (parent_class)->decide_allocation (decoder, + query); + } + + gst_query_parse_allocation (query, &caps, NULL); + if (!caps) + return FALSE; + if (!gst_vulkan_decoder_caps (self->decoder, &vk_caps)) + return FALSE; + + if (gst_query_get_n_allocation_pools (query) > 0) { + gst_query_parse_nth_allocation_pool (query, 0, &pool, &size, &min, &max); + update_pool = TRUE; + } else { + GstVideoInfo vinfo; + + gst_video_info_from_caps (&vinfo, caps); + size = GST_VIDEO_INFO_SIZE (&vinfo); + min = 2; + max = 0; + update_pool = FALSE; + } + + if (!(pool && GST_IS_VULKAN_IMAGE_BUFFER_POOL (pool))) { + gst_clear_object (&pool); + pool = gst_vulkan_image_buffer_pool_new (self->device); + } + + usage = VK_IMAGE_USAGE_TRANSFER_SRC_BIT | VK_IMAGE_USAGE_SAMPLED_BIT + | VK_IMAGE_USAGE_VIDEO_DECODE_DST_BIT_KHR; + + if (!self->decoder->dedicated_dpb) { + min = MAX (min, MIN (self->dpb_size, vk_caps.caps.maxDpbSlots)); + max = 0; + usage |= VK_IMAGE_USAGE_VIDEO_DECODE_DPB_BIT_KHR; + } + + new_caps = gst_caps_copy (caps); + gst_caps_set_simple (new_caps, "width", G_TYPE_INT, self->coded_width, + "height", G_TYPE_INT, self->coded_height, NULL); + profile_caps = gst_vulkan_decoder_profile_caps (self->decoder); + + config = gst_buffer_pool_get_config (pool); + + gst_buffer_pool_config_set_params (config, new_caps, size, min, max); + + gst_vulkan_image_buffer_pool_config_set_allocation_params (config, usage, + VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT, VK_IMAGE_LAYOUT_VIDEO_DECODE_DST_KHR, + VK_ACCESS_TRANSFER_WRITE_BIT); + gst_vulkan_image_buffer_pool_config_set_decode_caps (config, profile_caps); + + gst_caps_unref (profile_caps); + 
gst_caps_unref (new_caps);
+
+  if (!gst_buffer_pool_set_config (pool, config))
+    goto bail;
+
+  if (update_pool)
+    gst_query_set_nth_allocation_pool (query, 0, pool, size, min, max);
+  else
+    gst_query_add_allocation_pool (query, pool, size, min, max);
+
+  gst_object_unref (pool);
+
+  /*
+   * When the pool is destroyed during resolution changes, previously decoded
+   * reference frames stored in the DPBs are lost, which can cause decoding
+   * errors or corruption when those reference frames are needed for inter-frame
+   * prediction at different resolutions. By sizing the pool for the maximum
+   * supported resolution upfront, we ensure reference frame continuity across
+   * resolution changes.
+   */
+  dpb_caps = gst_caps_copy (caps);
+  gst_caps_set_simple (dpb_caps, "width", G_TYPE_INT,
+      vk_caps.caps.maxCodedExtent.width, "height", G_TYPE_INT,
+      vk_caps.caps.maxCodedExtent.height, NULL);
+
+  if (!gst_vulkan_decoder_create_dpb_pool (self->decoder, dpb_caps))
+    goto bail;
+  gst_caps_unref (dpb_caps);
+
+  return TRUE;
+
+bail:
+  {
+    gst_clear_caps (&new_caps);
+    gst_clear_caps (&dpb_caps);
+    gst_clear_object (&pool);
+    return FALSE;
+  }
+}
+
+static VkVideoChromaSubsamplingFlagBitsKHR
+_get_chroma_subsampling_flag (const GstVp9FrameHeader * seq_hdr)
+{
+  switch (seq_hdr->profile) {
+    case GST_VP9_PROFILE_0:
+    case GST_VP9_PROFILE_2:
+      return VK_VIDEO_CHROMA_SUBSAMPLING_420_BIT_KHR;
+      /* TODO: Add caps negotiation to support these video formats
+       * such as GST_VIDEO_FORMAT_Y42B or GST_VIDEO_FORMAT_Y444 etc. 
*/ + case GST_VP9_PROFILE_1: + case GST_VP9_PROFILE_3: + if (seq_hdr->subsampling_x == 1 && seq_hdr->subsampling_y == 0) + return VK_VIDEO_CHROMA_SUBSAMPLING_422_BIT_KHR; + else if (seq_hdr->subsampling_x == 0 && seq_hdr->subsampling_y == 0) + return VK_VIDEO_CHROMA_SUBSAMPLING_444_BIT_KHR; + break; + default: + break; + } + return VK_VIDEO_CHROMA_SUBSAMPLING_INVALID_KHR; +} + +static VkVideoComponentBitDepthFlagBitsKHR +_get_component_bit_depth (const GstVp9FrameHeader * seq_hdr) +{ + switch (seq_hdr->bit_depth) { + case 8: + return VK_VIDEO_COMPONENT_BIT_DEPTH_8_BIT_KHR; + case 10: + return VK_VIDEO_COMPONENT_BIT_DEPTH_10_BIT_KHR; + case 12: + return VK_VIDEO_COMPONENT_BIT_DEPTH_12_BIT_KHR; + default: + return VK_VIDEO_COMPONENT_BIT_DEPTH_INVALID_KHR; + } +} + +static StdVideoVP9Profile +_get_vp9_profile (const GstVp9FrameHeader * seq_hdr) +{ + switch (seq_hdr->profile) { + case GST_VP9_PROFILE_0: + return STD_VIDEO_VP9_PROFILE_0; + case GST_VP9_PROFILE_1: + return STD_VIDEO_VP9_PROFILE_1; + case GST_VP9_PROFILE_2: + return STD_VIDEO_VP9_PROFILE_2; + case GST_VP9_PROFILE_3: + return STD_VIDEO_VP9_PROFILE_3; + default: + return STD_VIDEO_VP9_PROFILE_INVALID; + } +} + +static void +gst_vulkan_video_profile_from_vp9_frame_hdr (GstVulkanVideoProfile * profile, + const GstVp9FrameHeader * frame_hdr) +{ + /* *INDENT-OFF* */ + *profile = (GstVulkanVideoProfile) { + .profile = { + .sType = VK_STRUCTURE_TYPE_VIDEO_PROFILE_INFO_KHR, + .pNext = &profile->usage, + .videoCodecOperation = VK_VIDEO_CODEC_OPERATION_DECODE_VP9_BIT_KHR, + .chromaSubsampling = _get_chroma_subsampling_flag (frame_hdr), + .lumaBitDepth = _get_component_bit_depth(frame_hdr), + .chromaBitDepth = _get_component_bit_depth (frame_hdr), + }, + .usage.decode = { + .sType = VK_STRUCTURE_TYPE_VIDEO_DECODE_USAGE_INFO_KHR, + .pNext = &profile->codec, + .videoUsageHints = VK_VIDEO_DECODE_USAGE_DEFAULT_KHR, + }, + .codec.vp9dec = { + .sType = VK_STRUCTURE_TYPE_VIDEO_DECODE_VP9_PROFILE_INFO_KHR, + .stdProfile = 
_get_vp9_profile (frame_hdr), + }, + }; + /* *INDENT-ON* */ +} + +static GstFlowReturn +gst_vulkan_vp9_decoder_new_sequence (GstVp9Decoder * decoder, + const GstVp9FrameHeader * frame_hdr, gint max_dpb_size) +{ + GstVulkanVp9Decoder *self = GST_VULKAN_VP9_DECODER (decoder); + GstVulkanVideoProfile profile; + GstVulkanVideoCapabilities vk_caps; + GError *error = NULL; + gint width = frame_hdr->width; + gint height = frame_hdr->height; + VkFormat old_format = VK_FORMAT_UNDEFINED; + VkVideoFormatPropertiesKHR format_prop; + + GST_DEBUG_OBJECT (self, "new sequence %dx%d", width, height); + + gst_vulkan_video_profile_from_vp9_frame_hdr (&profile, frame_hdr); + + if (gst_vulkan_decoder_is_started (self->decoder)) { + if (!gst_vulkan_video_profile_is_equal (&self->decoder->profile, &profile)) { + if (gst_vulkan_decoder_out_format (self->decoder, &format_prop)) + old_format = format_prop.format; + gst_vulkan_decoder_stop (self->decoder); + + } else { + self->need_negotiation = FALSE; + } + } + + if (!gst_vulkan_decoder_is_started (self->decoder)) { + self->need_negotiation = TRUE; + if (!gst_vulkan_decoder_start (self->decoder, &profile, &error)) { + GST_ERROR_OBJECT (self, "Couldn't start decoder: %s", + error ? 
error->message : ""); + g_clear_error (&error); + return GST_FLOW_ERROR; + } + } + + gst_vulkan_decoder_caps (self->decoder, &vk_caps); + + if (frame_hdr->width < vk_caps.caps.minCodedExtent.width + || frame_hdr->height < vk_caps.caps.minCodedExtent.height + || frame_hdr->width > vk_caps.caps.maxCodedExtent.width + || frame_hdr->height > vk_caps.caps.maxCodedExtent.height) { + + GST_ERROR_OBJECT (self, + "The following sequence can not be decoded because the frame dimension does not fit the decoder bounds: %dx%d" + ", minCodedExtent=%dx%d, maxCodedExtent=%dx%d", + frame_hdr->width, frame_hdr->height, vk_caps.caps.minCodedExtent.width, + vk_caps.caps.minCodedExtent.height, vk_caps.caps.maxCodedExtent.width, + vk_caps.caps.maxCodedExtent.height); + return GST_FLOW_ERROR; + } + self->resolution_changed = self->coded_width > 0 && self->coded_height > 0 + && (width != self->coded_width || height != self->coded_height); + self->need_negotiation &= (width != self->coded_width + || height != self->coded_height); + + self->coded_width = frame_hdr->width; + self->coded_height = frame_hdr->height; + + self->vk.color_config = (StdVideoVP9ColorConfig) { + /* *INDENT-OFF* */ + .flags = { + .color_range = frame_hdr->color_range, + }, + .BitDepth = frame_hdr->bit_depth, + .subsampling_x = frame_hdr->subsampling_x, + .subsampling_y = frame_hdr->subsampling_y, + .color_space = (StdVideoVP9ColorSpace)frame_hdr->color_space, + /* *INDENT-ON* */ + }; + + self->dpb_size = CLAMP (max_dpb_size, 0, GST_VULKAN_VP9_MAX_DPB_SLOTS); + + g_clear_pointer (&self->input_state, gst_video_codec_state_unref); + self->input_state = gst_video_codec_state_ref (decoder->input_state); + + /* Ycbcr sampler */ + { + VkSamplerYcbcrRange range; + VkChromaLocation yloc; + gboolean ret; + + ret = gst_vulkan_decoder_out_format (self->decoder, &format_prop); + g_assert (ret); + + range = (frame_hdr->color_range) ? 
+ VK_SAMPLER_YCBCR_RANGE_ITU_FULL : VK_SAMPLER_YCBCR_RANGE_ITU_NARROW; + + yloc = VK_CHROMA_LOCATION_MIDPOINT; + + if (old_format != format_prop.format || range != self->range || + yloc != self->yloc) { + self->range = range; + self->yloc = yloc; + ret = + gst_vulkan_decoder_update_ycbcr_sampler (self->decoder, range, + VK_CHROMA_LOCATION_COSITED_EVEN, yloc, &error); + if (!ret && error) { + GST_WARNING_OBJECT (self, "Unable to create Ycbcr sampler: %s", + error->message); + g_clear_error (&error); + } + } + } + + return GST_FLOW_OK; +} + +static GstVulkanVp9Picture * +gst_vulkan_vp9_picture_new (GstVulkanVp9Decoder * self, GstBuffer * out) +{ + GstVulkanVp9Picture *pic; + + pic = g_new0 (GstVulkanVp9Picture, 1); + gst_vulkan_decoder_picture_init (self->decoder, &pic->base, out); + + pic->slot_idx = -1; + pic->free_slot_mask = &self->free_slot_mask; + + return pic; +} + +static void +gst_vulkan_vp9_picture_free (gpointer data) +{ + GstVulkanVp9Picture *pic = (GstVulkanVp9Picture *) data; + + // Mark our slot as free in the decoder, if we were assigned any. 
+ if (pic->slot_idx >= 0 && pic->slot_idx < GST_VULKAN_VP9_MAX_DPB_SLOTS) + *pic->free_slot_mask &= ~(1 << pic->slot_idx); + + gst_vulkan_decoder_picture_release (&pic->base); + + g_free (pic); +} + +static GstFlowReturn +_check_resolution_change (GstVulkanVp9Decoder * self, GstVp9Picture * picture) +{ + const GstVp9FrameHeader *frame_hdr = &picture->frame_hdr; + if (!self->output_state) { + GST_DEBUG_OBJECT (self, "output_state not yet initialized"); + return GST_FLOW_OK; + } + + if (self->resolution_changed + || self->coded_width != frame_hdr->width + || self->coded_height != frame_hdr->height) { + GstVideoInfo *info = &self->output_state->info; + GST_VIDEO_INFO_WIDTH (info) = self->coded_width = frame_hdr->width; + GST_VIDEO_INFO_HEIGHT (info) = self->coded_height = frame_hdr->height; + + self->need_negotiation = TRUE; + + if (!gst_video_decoder_negotiate (GST_VIDEO_DECODER (self))) { + GST_ERROR_OBJECT (self, "Resolution changed, but failed to" + " negotiate with downstream"); + return GST_FLOW_NOT_NEGOTIATED; + } + self->resolution_changed = TRUE; + } + + return GST_FLOW_OK; +} + +static GstFlowReturn +gst_vulkan_vp9_decoder_new_picture (GstVp9Decoder * decoder, + GstVideoCodecFrame * frame, GstVp9Picture * picture) +{ + GstVulkanVp9Decoder *self = GST_VULKAN_VP9_DECODER (decoder); + GstVideoDecoder *vdec = GST_VIDEO_DECODER (decoder); + GstFlowReturn ret; + GstVulkanVp9Picture *pic; + + GST_TRACE_OBJECT (self, "New picture"); + + ret = _check_resolution_change (self, picture); + if (ret != GST_FLOW_OK) + return ret; + + if (self->need_negotiation) { + if (!gst_video_decoder_negotiate (vdec)) { + GST_ERROR_OBJECT (self, "Failed to negotiate with downstream"); + return GST_FLOW_NOT_NEGOTIATED; + } + } + + ret = gst_video_decoder_allocate_output_frame (vdec, frame); + if (ret != GST_FLOW_OK) + goto allocation_failed; + + pic = gst_vulkan_vp9_picture_new (self, frame->output_buffer); + gst_vp9_picture_set_user_data (picture, pic, gst_vulkan_vp9_picture_free); + + 
return GST_FLOW_OK; + +allocation_failed: + { + GST_WARNING_OBJECT (self, "Failed to allocated input or output buffer: %s", + gst_flow_get_name (ret)); + return ret; + } +} + +static void +_fill_ref_slot (GstVulkanVp9Decoder * self, GstVp9Picture * picture, + VkVideoReferenceSlotInfoKHR * slot, VkVideoPictureResourceInfoKHR * res, + GstVulkanDecoderPicture ** ref) +{ + GstVulkanVp9Picture *pic = + (GstVulkanVp9Picture *) gst_vp9_picture_get_user_data (picture); + + /* *INDENT-OFF* */ + *res = (VkVideoPictureResourceInfoKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_PICTURE_RESOURCE_INFO_KHR, + .codedExtent = { + .width = picture->frame_hdr.width, + .height = picture->frame_hdr.height, + }, + .baseArrayLayer = (self->decoder->layered_dpb && self->decoder->dedicated_dpb) ? pic->slot_idx : 0, + .imageViewBinding = pic->base.img_view_ref->view, + }; + + *slot = (VkVideoReferenceSlotInfoKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_REFERENCE_SLOT_INFO_KHR, + .pNext = NULL, + .slotIndex = pic->slot_idx, + .pPictureResource = res, + }; + /* *INDENT-ON* */ + + if (ref) + *ref = &pic->base; + + GST_TRACE_OBJECT (self, "0x%" G_GUINT64_FORMAT "x slotIndex: %d", + res->imageViewBinding, slot->slotIndex); +} + +/** + * _find_next_slot_idx: + * @self: The VP9 decoder instance + * + * Finds the next available slot index in the DPB. + * + * Returns: Valid slot index (0-31) or -1 if no slots available + */ +static gint32 +_find_next_slot_idx (GstVulkanVp9Decoder * self) +{ + gint32 i; + + for (i = 0; i < self->dpb_size; i++) { + if (!(self->free_slot_mask & (1 << i))) { + // Mark as used. 
+ self->free_slot_mask |= (1 << i); + return i; + } + } + + GST_ERROR_OBJECT (self, + "Failed to find free DPB slot (dpb_size=%d, free_mask=0x%08x)", + self->dpb_size, self->free_slot_mask); + return -1; +} + +static GstFlowReturn +gst_vulkan_vp9_decoder_decode_picture (GstVp9Decoder * decoder, + GstVp9Picture * picture, GstVp9Dpb * dpb) +{ + + GstVulkanVp9Decoder *self = GST_VULKAN_VP9_DECODER (decoder); + GstVp9FrameHeader *fh = &picture->frame_hdr; + GstVp9QuantizationParams *qp = &fh->quantization_params; + GstVp9LoopFilterParams *lf = &fh->loop_filter_params; + GstVp9SegmentationParams *seg = &fh->segmentation_params; + GstVulkanVp9Picture *pic; + guint num_refs = 0; + guint i, j; + gboolean intra_only; + + GST_TRACE_OBJECT (self, "Start picture %p", picture); + + pic = gst_vp9_picture_get_user_data (picture); + + /* *INDENT-OFF* */ + pic->loop_filter = (StdVideoVP9LoopFilter) { + .flags = { + .loop_filter_delta_enabled = lf->loop_filter_delta_enabled, + .loop_filter_delta_update = lf->loop_filter_delta_update, + }, + .loop_filter_level = lf->loop_filter_level, + .loop_filter_sharpness = lf->loop_filter_sharpness, + .update_ref_delta = 0, + }; + /* *INDENT-ON* */ + + for (i = 0; i < STD_VIDEO_VP9_MAX_REF_FRAMES; i++) { + pic->loop_filter.loop_filter_ref_deltasi = lf->loop_filter_ref_deltasi; + pic->loop_filter.update_ref_delta |= lf->update_ref_deltai << i; + } + + for (i = 0; i < STD_VIDEO_VP9_LOOP_FILTER_ADJUSTMENTS; i++) { + pic->loop_filter.loop_filter_mode_deltasi = + lf->loop_filter_mode_deltasi; + pic->loop_filter.update_mode_delta |= lf->update_mode_deltai << i; + } + + /* *INDENT-OFF* */ + pic->segmentation = (StdVideoVP9Segmentation) { + .flags = (StdVideoVP9SegmentationFlags) { + .segmentation_update_map = seg->segmentation_update_map, + .segmentation_temporal_update = seg->segmentation_temporal_update, + .segmentation_update_data = seg->segmentation_update_data, + .segmentation_abs_or_delta_update = seg->segmentation_abs_or_delta_update, + }, + }; 
+ /* *INDENT-ON* */ + + for (i = 0; i < GST_VP9_MAX_SEGMENTS; i++) { + pic->segmentation.FeatureEnabled[i] = 0; + for (j = 0; j < GST_VP9_SEG_LVL_MAX; j++) { + pic->segmentation.FeatureEnabled[i] |= seg->feature_enabled[i][j] << j; + pic->segmentation.FeatureData[i][j] = seg->feature_data[i][j]; + } + } + memcpy (pic->segmentation.segmentation_tree_probs, + seg->segmentation_tree_probs, sizeof (seg->segmentation_tree_probs)); + memcpy (pic->segmentation.segmentation_pred_prob, seg->segmentation_pred_prob, + sizeof (seg->segmentation_pred_prob)); + + intra_only = (fh->frame_type == STD_VIDEO_VP9_FRAME_TYPE_KEY + || fh->intra_only); + + /* *INDENT-OFF* */ + pic->std_pic = (StdVideoDecodeVP9PictureInfo) { + .flags = (StdVideoDecodeVP9PictureInfoFlags) { + .error_resilient_mode = fh->error_resilient_mode, + .intra_only = fh->intra_only, + .allow_high_precision_mv = fh->allow_high_precision_mv, + .refresh_frame_context = fh->refresh_frame_context, + .frame_parallel_decoding_mode = fh->frame_parallel_decoding_mode, + .segmentation_enabled = seg->segmentation_enabled, + .show_frame = fh->show_frame, + .UsePrevFrameMvs = + (self->last_show_frame + && !intra_only + && fh->error_resilient_mode == 0 + && !self->resolution_changed), + }, + .profile = (StdVideoVP9Profile) fh->profile, + .frame_type = (StdVideoVP9FrameType) fh->frame_type, + .frame_context_idx = fh->frame_context_idx, + .reset_frame_context = fh->reset_frame_context, + .refresh_frame_flags = fh->refresh_frame_flags, + .ref_frame_sign_bias_mask = 0, + .interpolation_filter = + (StdVideoVP9InterpolationFilter) fh->interpolation_filter, + .base_q_idx = qp->base_q_idx, + .delta_q_y_dc = qp->delta_q_y_dc, + .delta_q_uv_dc = qp->delta_q_uv_dc, + .delta_q_uv_ac = qp->delta_q_uv_ac, + .tile_cols_log2 = fh->tile_cols_log2, + .tile_rows_log2 = fh->tile_rows_log2, + .pColorConfig = &self->vk.color_config, + .pLoopFilter = &pic->loop_filter, + .pSegmentation = seg->segmentation_enabled ? 
&pic->segmentation : NULL, + }; + /* *INDENT-ON* */ + self->resolution_changed = FALSE; + self->last_show_frame = fh->show_frame; + + for (i = 0; i < GST_VP9_REF_FRAME_MAX; i++) { + pic->std_pic.ref_frame_sign_bias_mask |= (fh->ref_frame_sign_bias[i] << i); + } + + /* *INDENT-OFF* */ + pic->vk_pic = (VkVideoDecodeVP9PictureInfoKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_DECODE_VP9_PICTURE_INFO_KHR, + .pStdPictureInfo = &pic->std_pic, + .uncompressedHeaderOffset = 0, + .compressedHeaderOffset = fh->frame_header_length_in_bytes, + .tilesOffset = fh->frame_header_length_in_bytes + fh->header_size_in_bytes, + }; + /* *INDENT-ON* */ + + for (i = 0; i < VK_MAX_VIDEO_VP9_REFERENCES_PER_FRAME_KHR; i++) { + GstVp9Picture *ref_pic = dpb->pic_list[fh->ref_frame_idx[i]]; + if (ref_pic) { + GstVulkanVp9Picture *ref_vk_pic = + (GstVulkanVp9Picture *) gst_vp9_picture_get_user_data (ref_pic); + + pic->vk_pic.referenceNameSlotIndices[i] = ref_vk_pic->slot_idx; + } else { + pic->vk_pic.referenceNameSlotIndices[i] = -1; + } + } + + pic->slot_idx = _find_next_slot_idx (self); + if (pic->slot_idx < 0) { + GST_ERROR_OBJECT (self, "No free DPB slots available"); + return GST_FLOW_ERROR; + } + + /* fill main slot */ + _fill_ref_slot (self, picture, &pic->base.slot, &pic->base.pic_res, NULL); + + for (i = 0; i < GST_VP9_REF_FRAME_MAX; i++) { + GstVp9Picture *ref_pic = dpb->pic_list[i]; + gboolean found = FALSE; + GstVulkanVp9Picture *ref_vk_pic; + + if (!ref_pic) + continue; + + ref_vk_pic = + (GstVulkanVp9Picture *) gst_vp9_picture_get_user_data (ref_pic); + + for (j = 0; j < num_refs; j++) { + if (pic->base.slots[j].slotIndex == ref_vk_pic->slot_idx) { + found = TRUE; + break; + } + } + + if (!found) { + _fill_ref_slot (self, ref_pic, &pic->base.slots[num_refs], + &pic->base.pics_res[num_refs], &pic->base.refs[num_refs]); + num_refs++; + } + + } + + /* *INDENT-OFF* */ + pic->base.decode_info = (VkVideoDecodeInfoKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_DECODE_INFO_KHR, + .pNext = &pic->vk_pic, + .flags = 0x0, 
+ .dstPictureResource = { + .sType = VK_STRUCTURE_TYPE_VIDEO_PICTURE_RESOURCE_INFO_KHR, + .codedExtent = { picture->frame_hdr.width, picture->frame_hdr.height }, + .baseArrayLayer = 0, + .imageViewBinding = pic->base.img_view_out->view, + }, + .pSetupReferenceSlot = &pic->base.slot, + .referenceSlotCount = num_refs, + .pReferenceSlots = (const VkVideoReferenceSlotInfoKHR *) &pic->base.slots, + }; + /* *INDENT-ON* */ + + /* only wait if there's a buffer processed */ + if (GST_CODEC_PICTURE_FRAME_NUMBER (picture) > 0) { + if (!gst_vulkan_decoder_wait (self->decoder)) { + GST_ERROR_OBJECT (self, "Error at waiting for decoding operation to end"); + return GST_FLOW_ERROR; + } + } + + if (!gst_vulkan_decoder_append_slice (self->decoder, &pic->base, + picture->data, picture->size, FALSE)) + return GST_FLOW_ERROR; + + return GST_FLOW_OK; +} + +static GstFlowReturn +gst_vulkan_vp9_decoder_end_picture (GstVp9Decoder * decoder, + GstVp9Picture * picture) +{ + GstVulkanVp9Decoder *self = GST_VULKAN_VP9_DECODER (decoder); + GstVulkanVp9Picture *pic; + GError *error = NULL; + + GST_TRACE_OBJECT (self, "End picture %p", picture); + + pic = (GstVulkanVp9Picture *) gst_vp9_picture_get_user_data (picture); + g_assert (pic); + + if (pic->base.slice_offs->len == 0) + return GST_FLOW_OK; + + GST_TRACE_OBJECT (self, "Decoding frame, %p", picture); + + if (!gst_vulkan_decoder_decode (self->decoder, &pic->base, &error)) { + GST_ERROR_OBJECT (self, "Couldn't decode frame: %s", + error ? 
error->message : ""); + g_clear_error (&error); + return GST_FLOW_ERROR; + } + + return GST_FLOW_OK; +} + +static GstFlowReturn +gst_vulkan_vp9_decoder_output_picture (GstVp9Decoder * decoder, + GstVideoCodecFrame * frame, GstVp9Picture * picture) +{ + GstVideoDecoder *vdec = GST_VIDEO_DECODER (decoder); + GstVulkanVp9Decoder *self = GST_VULKAN_VP9_DECODER (decoder); + GstVideoCodecState *discont_state = + GST_CODEC_PICTURE (picture)->discont_state; + + GST_TRACE_OBJECT (self, "Output picture %p", picture); + + if (discont_state) { + g_clear_pointer (&self->input_state, gst_video_codec_state_unref); + self->input_state = gst_video_codec_state_ref (discont_state); + + self->need_negotiation = TRUE; + if (!gst_video_decoder_negotiate (vdec)) { + gst_vp9_picture_unref (picture); + GST_ERROR_OBJECT (self, "Could not re-negotiate with updated state"); + return GST_FLOW_ERROR; + } + } + + gst_vp9_picture_unref (picture); + + return gst_video_decoder_finish_frame (vdec, frame); +} + +static GstVp9Picture * +gst_vulkan_vp9_decoder_duplicate_picture (GstVp9Decoder * decoder, + GstVideoCodecFrame * frame, GstVp9Picture * picture) +{ + GstVulkanVp9Decoder *self = GST_VULKAN_VP9_DECODER (decoder); + GstVulkanVp9Picture *pic, *new_pic; + GstVp9Picture *new_picture; + + if (_check_resolution_change (self, picture) != GST_FLOW_OK) { + return NULL; + } + + pic = (GstVulkanVp9Picture *) gst_vp9_picture_get_user_data (picture); + if (!pic) { + GST_ERROR_OBJECT (self, "Parent picture does not have a vulkan picture"); + return NULL; + } + + new_picture = gst_vp9_picture_new (); + new_picture->frame_hdr = picture->frame_hdr; + new_pic = gst_vulkan_vp9_picture_new (self, pic->base.out); + + frame->output_buffer = gst_buffer_ref (new_pic->base.out); + + GST_LOG_OBJECT (self, "Duplicate output with buffer %" GST_PTR_FORMAT, pic); + + gst_vp9_picture_set_user_data (new_picture, new_pic, + gst_vulkan_vp9_picture_free); + + return new_picture; +} + +static void +gst_vulkan_vp9_decoder_init 
(GTypeInstance * instance, gpointer g_class) +{ + gst_vulkan_buffer_memory_init_once (); +} + +static void +gst_vulkan_vp9_decoder_class_init (gpointer g_klass, gpointer class_data) +{ + GstElementClass *element_class = GST_ELEMENT_CLASS (g_klass); + GstVideoDecoderClass *decoder_class = GST_VIDEO_DECODER_CLASS (g_klass); + GstVp9DecoderClass *vp9decoder_class = GST_VP9_DECODER_CLASS (g_klass); + GstVulkanVp9DecoderClass *vk_vp9_class = + GST_VULKAN_VP9_DECODER_CLASS (g_klass); + struct CData *cdata = class_data; + gchar *long_name; + const gchar *name; + GstPadTemplate *sink_pad_template, *src_pad_template; + GstCaps *sink_doc_caps, *src_doc_caps; + + name = "Vulkan VP9 decoder"; + if (cdata->description) + long_name = g_strdup_printf ("%s on %s", name, cdata->description); + else + long_name = g_strdup (name); + + vk_vp9_class->device_index = cdata->device_index; + + gst_element_class_set_metadata (element_class, long_name, + "Codec/Decoder/Video/Hardware", "A VP9 video decoder based on Vulkan", + "Stephane Cerveau <scerveau@igalia.com>"); + + parent_class = g_type_class_peek_parent (g_klass); + + sink_doc_caps = gst_caps_from_string ("video/x-vp9, " + "profile = (string) { 0, 1, 2, 3 }, alignment = (string) frame"); + src_doc_caps = + gst_caps_from_string (GST_VIDEO_CAPS_MAKE_WITH_FEATURES + (GST_CAPS_FEATURE_MEMORY_VULKAN_IMAGE, "NV12")); + + sink_pad_template = + gst_pad_template_new ("sink", GST_PAD_SINK, GST_PAD_ALWAYS, cdata->codec); + gst_element_class_add_pad_template (element_class, sink_pad_template); + + src_pad_template = + gst_pad_template_new ("src", GST_PAD_SRC, GST_PAD_ALWAYS, cdata->raw); + gst_element_class_add_pad_template (element_class, src_pad_template); + + gst_pad_template_set_documentation_caps (sink_pad_template, sink_doc_caps); + gst_caps_unref (sink_doc_caps); + + gst_pad_template_set_documentation_caps (src_pad_template, src_doc_caps); + gst_caps_unref (src_doc_caps); + + element_class->set_context = + GST_DEBUG_FUNCPTR 
(gst_vulkan_vp9_decoder_set_context); + + decoder_class->src_query = + GST_DEBUG_FUNCPTR (gst_vulkan_vp9_decoder_src_query); + decoder_class->sink_query = + GST_DEBUG_FUNCPTR (gst_vulkan_vp9_decoder_sink_query); + decoder_class->open = GST_DEBUG_FUNCPTR (gst_vulkan_vp9_decoder_open); + decoder_class->close = GST_DEBUG_FUNCPTR (gst_vulkan_vp9_decoder_close); + decoder_class->stop = GST_DEBUG_FUNCPTR (gst_vulkan_vp9_decoder_stop); + decoder_class->negotiate = + GST_DEBUG_FUNCPTR (gst_vulkan_vp9_decoder_negotiate); + decoder_class->decide_allocation = + GST_DEBUG_FUNCPTR (gst_vulkan_vp9_decoder_decide_allocation); + + vp9decoder_class->new_sequence = + GST_DEBUG_FUNCPTR (gst_vulkan_vp9_decoder_new_sequence); + vp9decoder_class->new_picture = + GST_DEBUG_FUNCPTR (gst_vulkan_vp9_decoder_new_picture); + vp9decoder_class->decode_picture = + GST_DEBUG_FUNCPTR (gst_vulkan_vp9_decoder_decode_picture); + vp9decoder_class->end_picture = + GST_DEBUG_FUNCPTR (gst_vulkan_vp9_decoder_end_picture); + vp9decoder_class->output_picture = + GST_DEBUG_FUNCPTR (gst_vulkan_vp9_decoder_output_picture); + vp9decoder_class->duplicate_picture = + GST_DEBUG_FUNCPTR (gst_vulkan_vp9_decoder_duplicate_picture); + + g_free (long_name); + g_free (cdata->description); + g_free (cdata); +} + +gboolean +gst_vulkan_vp9_decoder_register (GstPlugin * plugin, GstVulkanDevice * device, + guint rank) +{ + static GOnce debug_once = G_ONCE_INIT; + GType type; + GTypeInfo type_info = { + .class_size = sizeof (GstVulkanVp9DecoderClass), + .class_init = gst_vulkan_vp9_decoder_class_init, + .instance_size = sizeof (GstVulkanVp9Decoder), + .instance_init = gst_vulkan_vp9_decoder_init, + }; + struct CData *cdata; + gboolean ret; + gchar *type_name, *feature_name; + GstCaps *codec = NULL, *raw = NULL; + + g_return_val_if_fail (GST_IS_PLUGIN (plugin), FALSE); + g_return_val_if_fail (GST_IS_VULKAN_DEVICE (device), FALSE); + + if (!gst_vulkan_physical_device_codec_caps (device->physical_device, + 
VK_VIDEO_CODEC_OPERATION_DECODE_VP9_BIT_KHR, &codec, &raw)) { + gst_plugin_add_status_warning (plugin, + "Unable to query VP9 decoder properties"); + return FALSE; + } + + cdata = g_new (struct CData, 1); + cdata->description = NULL; + cdata->device_index = device->physical_device->device_index; + cdata->codec = codec; + cdata->raw = raw; + + /* class data will be leaked if the element never gets instantiated */ + GST_MINI_OBJECT_FLAG_SET (cdata->codec, GST_MINI_OBJECT_FLAG_MAY_BE_LEAKED); + GST_MINI_OBJECT_FLAG_SET (cdata->raw, GST_MINI_OBJECT_FLAG_MAY_BE_LEAKED); + + gst_vulkan_create_feature_name (device, "GstVulkanVp9Decoder", + "GstVulkanVp9Device%dDecoder", &type_name, "vulkanvp9dec", + "vulkanVp9device%ddec", &feature_name, &cdata->description, &rank); + + type_info.class_data = cdata; + + g_once (&debug_once, _register_debug_category, NULL); + type = g_type_register_static (GST_TYPE_VP9_DECODER, + type_name, &type_info, 0); + + ret = gst_element_register (plugin, feature_name, rank, type); + + g_free (type_name); + g_free (feature_name); + + return ret; +}
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/vulkan/vkvp9dec.h
Added
@@ -0,0 +1,30 @@ +/* GStreamer + * Copyright (C) 2025 Igalia, S.L. + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/vulkan/vulkan.h> + +G_BEGIN_DECLS + +gboolean gst_vulkan_vp9_decoder_register (GstPlugin * plugin, + GstVulkanDevice * device, + guint rank); + +G_END_DECLS
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/wayland/gstwaylandsink.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/wayland/gstwaylandsink.c
Changed
@@ -46,6 +46,7 @@ #include <drm_fourcc.h> #include <gst/allocators/allocators.h> +#include <gst/video/gstvideodmabufpool.h> #include <gst/video/videooverlay.h> /* signals */ @@ -61,8 +62,10 @@ PROP_0, PROP_DISPLAY, PROP_FULLSCREEN, + PROP_FULLSCREEN_OUTPUT, PROP_ROTATE_METHOD, PROP_DRM_DEVICE, + PROP_FORCE_ASPECT_RATIO, PROP_LAST }; @@ -159,6 +162,17 @@ g_object_class_install_property (gobject_class, PROP_FULLSCREEN, g_param_spec_boolean ("fullscreen", "Fullscreen", "Whether the surface should be made fullscreen ", FALSE, + G_PARAM_READWRITE | GST_PARAM_MUTABLE_PLAYING | + G_PARAM_STATIC_STRINGS)); + + /** + * waylandsink:fullscreen-output: + * + * Since: 1.28 + */ + g_object_class_install_property (gobject_class, PROP_FULLSCREEN_OUTPUT, + g_param_spec_string ("fullscreen-output", "Wayland Output name", + "The name of the wayland output to fullscreen to.", NULL, G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS)); /** @@ -171,7 +185,8 @@ "rotate method", "rotate method", GST_TYPE_VIDEO_ORIENTATION_METHOD, GST_VIDEO_ORIENTATION_IDENTITY, - G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS)); + G_PARAM_READWRITE | GST_PARAM_MUTABLE_PLAYING | + G_PARAM_STATIC_STRINGS)); /** * waylandsink:drm-device: @@ -184,6 +199,18 @@ NULL, G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS | G_PARAM_CONSTRUCT_ONLY)); + /** + * waylandsink:force-aspect-ratio: + * + * Since: 1.28 + */ + g_object_class_install_property (gobject_class, PROP_FORCE_ASPECT_RATIO, + g_param_spec_boolean ("force-aspect-ratio", "Force aspect ratio", + "When enabled, scaling will respect original aspect ratio", + TRUE, + G_PARAM_READWRITE | GST_PARAM_MUTABLE_PLAYING | + G_PARAM_STATIC_STRINGS)); + /** * waylandsink:render-rectangle: @@ -194,6 +221,9 @@ * Since: 1.22 */ gst_video_overlay_install_properties (gobject_class, PROP_LAST); + + GST_DEBUG_CATEGORY_INIT (gstwayland_debug, "waylandsink", 0, + " wayland video sink"); } static void @@ -201,18 +231,27 @@ { g_mutex_init (&self->display_lock); g_mutex_init (&self->render_lock); + 
self->force_aspect_ratio = TRUE; } +/* must be called with the OBJECT_LOCK */ static void -gst_wayland_sink_set_fullscreen (GstWaylandSink * self, gboolean fullscreen) +gst_wayland_sink_set_fullscreen (GstWaylandSink * self, gboolean fullscreen, + const gchar * fullscreen_output) { - if (fullscreen == self->fullscreen) - return; - - g_mutex_lock (&self->render_lock); self->fullscreen = fullscreen; - gst_wl_window_ensure_fullscreen (self->window, fullscreen); - g_mutex_unlock (&self->render_lock); + + if (self->fullscreen_output != fullscreen_output) { + g_free (self->fullscreen_output); + self->fullscreen_output = g_strdup (fullscreen_output); + } + + if (self->window) { + g_mutex_lock (&self->render_lock); + gst_wl_window_ensure_fullscreen_for_output (self->window, fullscreen, + fullscreen_output); + g_mutex_unlock (&self->render_lock); + } } static void @@ -252,6 +291,23 @@ GST_OBJECT_UNLOCK (self); } +/* must be called with the OBJECT_LOCK */ +static void +gst_wayland_sink_set_force_aspect_ratio (GstWaylandSink * self, + gboolean force_aspect_ratio) +{ + if (force_aspect_ratio == self->force_aspect_ratio) + return; + + self->force_aspect_ratio = force_aspect_ratio; + if (self->window) { + g_mutex_lock (&self->render_lock); + gst_wl_window_set_force_aspect_ratio (self->window, + self->force_aspect_ratio); + g_mutex_unlock (&self->render_lock); + } +} + static void gst_wayland_sink_get_property (GObject * object, guint prop_id, GValue * value, GParamSpec * pspec) @@ -269,6 +325,11 @@ g_value_set_boolean (value, self->fullscreen); GST_OBJECT_UNLOCK (self); break; + case PROP_FULLSCREEN_OUTPUT: + GST_OBJECT_LOCK (self); + g_value_set_string (value, self->fullscreen_output); + GST_OBJECT_UNLOCK (self); + break; case PROP_ROTATE_METHOD: GST_OBJECT_LOCK (self); g_value_set_enum (value, self->current_rotate_method); @@ -279,6 +340,11 @@ g_value_set_string (value, self->drm_device); GST_OBJECT_UNLOCK (self); break; + case PROP_FORCE_ASPECT_RATIO: + GST_OBJECT_LOCK 
(self); + g_value_set_boolean (value, self->force_aspect_ratio); + GST_OBJECT_UNLOCK (self); + break; default: G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); break; @@ -300,7 +366,14 @@ break; case PROP_FULLSCREEN: GST_OBJECT_LOCK (self); - gst_wayland_sink_set_fullscreen (self, g_value_get_boolean (value)); + gst_wayland_sink_set_fullscreen (self, g_value_get_boolean (value), + self->fullscreen_output); + GST_OBJECT_UNLOCK (self); + break; + case PROP_FULLSCREEN_OUTPUT: + GST_OBJECT_LOCK (self); + gst_wayland_sink_set_fullscreen (self, self->fullscreen, + g_value_get_string (value)); GST_OBJECT_UNLOCK (self); break; case PROP_ROTATE_METHOD: @@ -313,6 +386,12 @@ self->drm_device = g_value_dup_string (value); GST_OBJECT_UNLOCK (self); break; + case PROP_FORCE_ASPECT_RATIO: + GST_OBJECT_LOCK (self); + gst_wayland_sink_set_force_aspect_ratio (self, + g_value_get_boolean (value)); + GST_OBJECT_UNLOCK (self); + break; default: if (!gst_video_overlay_set_property (object, PROP_LAST, prop_id, value)) G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); @@ -340,6 +419,7 @@ g_free (self->display_name); g_free (self->drm_device); + g_free (self->fullscreen_output); g_mutex_clear (&self->display_lock); g_mutex_clear (&self->render_lock); @@ -546,6 +626,9 @@ } break; + case GST_EVENT_FLUSH_STOP: + gst_wl_window_flush (self->window); + break; default: break; } @@ -689,26 +772,31 @@ GstWaylandSink *self = GST_WAYLAND_SINK (bsink);; gboolean use_dmabuf; - GST_DEBUG_OBJECT (self, "set caps %" GST_PTR_FORMAT, caps); + GST_INFO_OBJECT (self, "set caps %" GST_PTR_FORMAT, caps); if (gst_video_is_dma_drm_caps (caps)) { if (!gst_video_info_dma_drm_from_caps (&self->drm_info, caps)) goto invalid_format; if (!gst_video_info_dma_drm_to_video_info (&self->drm_info, - &self->video_info)) + &self->render_info)) goto invalid_format; } else { /* extract info from caps */ - if (!gst_video_info_from_caps (&self->video_info, caps)) + if (!gst_video_info_from_caps 
(&self->render_info, caps)) goto invalid_format; if (!gst_video_info_dma_drm_from_video_info (&self->drm_info, - &self->video_info, DRM_FORMAT_MOD_LINEAR)) + &self->render_info, DRM_FORMAT_MOD_LINEAR)) gst_video_info_dma_drm_init (&self->drm_info); } - self->video_info_changed = TRUE; + self->have_mastering_info = + gst_video_mastering_display_info_from_caps (&self->minfo, caps); + self->have_light_info = + gst_video_content_light_level_from_caps (&self->linfo, caps); + + self->render_info_changed = TRUE; self->skip_dumb_buffer_copy = FALSE; /* free pooled buffer used with previous caps */ @@ -726,7 +814,7 @@ &self->drm_info)) goto unsupported_drm_format; } else if (!gst_wl_display_check_format_for_shm (self->display, - &self->video_info)) { + &self->render_info)) { /* Note: we still support dmabuf in this case, but formats must also be * supported on SHM interface to ensure a fallback is possible as we are * not guarantied we'll get dmabuf in the buffers. */ @@ -735,6 +823,7 @@ /* Will be used to create buffer pools */ gst_caps_replace (&self->caps, caps); + self->video_info = self->render_info; return TRUE; @@ -754,7 +843,8 @@ unsupported_format: { GST_ERROR_OBJECT (self, "Format %s is not available on the display", - gst_video_format_to_string (GST_VIDEO_INFO_FORMAT (&self->video_info))); + gst_video_format_to_string (GST_VIDEO_INFO_FORMAT + (&self->render_info))); return FALSE; } } @@ -762,11 +852,11 @@ static gboolean gst_wayland_sink_propose_allocation (GstBaseSink * bsink, GstQuery * query) { + GstWaylandSink *self = GST_WAYLAND_SINK (bsink);; + GstAllocator *allocator = NULL; GstCaps *caps; GstBufferPool *pool = NULL; gboolean need_pool; - GstVideoInfoDmaDrm drm_info; - GstVideoInfo vinfo; guint size; gst_query_parse_allocation (query, &caps, &need_pool); @@ -775,38 +865,54 @@ return FALSE; if (gst_video_is_dma_drm_caps (caps)) { + GstVideoInfoDmaDrm drm_info; + if (!gst_video_info_dma_drm_from_caps (&drm_info, caps)) return FALSE; + size = 
drm_info.vinfo.size; } else { + GstVideoInfo vinfo; + /* extract info from caps */ if (!gst_video_info_from_caps (&vinfo, caps)) return FALSE; + size = vinfo.size; + + allocator = gst_udmabuf_allocator_get (); + if (!allocator) + allocator = gst_shm_allocator_get (); } if (need_pool && !gst_video_is_dma_drm_caps (caps)) { GstStructure *config; - pool = gst_wl_video_buffer_pool_new (); - config = gst_buffer_pool_get_config (pool); - gst_buffer_pool_config_set_params (config, caps, size, 2, 0); - gst_buffer_pool_config_set_allocator (config, - gst_shm_allocator_get (), NULL); - gst_buffer_pool_set_config (pool, config); + + if (GST_IS_UDMABUF_ALLOCATOR (allocator)) { + pool = gst_video_dmabuf_pool_new (); + } else { + pool = gst_wl_video_buffer_pool_new (); + config = gst_buffer_pool_get_config (pool); + gst_buffer_pool_config_set_params (config, caps, size, 2, 0); + gst_buffer_pool_config_set_allocator (config, + gst_object_ref (allocator), NULL); + gst_buffer_pool_set_config (pool, config); + } } gst_query_add_allocation_pool (query, pool, size, 2, 0); if (pool) g_object_unref (pool); - if (!gst_video_is_dma_drm_caps (caps)) { - GstAllocator *alloc = gst_shm_allocator_get (); - gst_query_add_allocation_param (query, alloc, NULL); - g_object_unref (alloc); - } + if (!gst_video_is_dma_drm_caps (caps)) + gst_query_add_allocation_param (query, allocator, NULL); gst_query_add_allocation_meta (query, GST_VIDEO_META_API_TYPE, NULL); + if (gst_wl_display_get_viewporter (self->display)) + gst_query_add_allocation_meta (query, GST_VIDEO_CROP_META_API_TYPE, NULL); + + gst_clear_object (&allocator); return TRUE; } @@ -816,14 +922,24 @@ { GstWlBuffer *wlbuffer; const GstVideoInfo *info = NULL; + const GstVideoMasteringDisplayInfo *minfo = NULL; + const GstVideoContentLightLevel *linfo = NULL; wlbuffer = gst_buffer_get_wl_buffer (self->display, self->last_buffer); - if (G_UNLIKELY (self->video_info_changed && !redraw)) { - info = &self->video_info; - self->video_info_changed = 
FALSE; + if (G_UNLIKELY (self->render_info_changed && !redraw)) { + info = &self->render_info; + + if (self->have_mastering_info) + minfo = &self->minfo; + + if (self->have_light_info) + linfo = &self->linfo; + + self->render_info_changed = FALSE; } - return gst_wl_window_render (self->window, wlbuffer, info); + + return gst_wl_window_render_hdr (self->window, wlbuffer, info, minfo, linfo); } static void @@ -837,6 +953,51 @@ } static GstFlowReturn +gst_wayland_sink_copy_frame (GstWaylandSink * self, GstBuffer * src_buffer, + GstBuffer * dst_buffer) +{ + GstVideoFrame src, dst; + + if (!gst_video_frame_map (&dst, &self->video_info, dst_buffer, GST_MAP_WRITE)) + goto dst_map_failed; + + if (!gst_video_frame_map (&src, &self->video_info, src_buffer, GST_MAP_READ)) { + gst_video_frame_unmap (&dst); + goto src_map_failed; + } + + gst_video_frame_copy (&dst, &src); + + gst_video_frame_unmap (&src); + gst_video_frame_unmap (&dst); + + /* Also copy the crop meta so its offloaded */ + GstVideoCropMeta *src_cmeta = gst_buffer_get_video_crop_meta (src_buffer); + if (src_cmeta) { + GstVideoCropMeta *dst_cmeta = gst_buffer_add_video_crop_meta (dst_buffer); + dst_cmeta->x = src_cmeta->x; + dst_cmeta->y = src_cmeta->y; + dst_cmeta->width = src_cmeta->width; + dst_cmeta->height = src_cmeta->height; + } + + return GST_FLOW_OK; + +src_map_failed: + { + GST_ELEMENT_ERROR (self, RESOURCE, READ, + ("Video memory can not be read from userspace."), (NULL)); + return GST_FLOW_ERROR; + } +dst_map_failed: + { + GST_ELEMENT_ERROR (self, RESOURCE, WRITE, + ("Video memory can not be written from userspace."), (NULL)); + return GST_FLOW_ERROR; + } +} + +static GstFlowReturn gst_wayland_sink_show_frame (GstVideoSink * vsink, GstBuffer * buffer) { GstWaylandSink *self = GST_WAYLAND_SINK (vsink); @@ -860,12 +1021,44 @@ if (!self->window) { /* if we were not provided a window, create one ourselves */ - self->window = gst_wl_window_new_toplevel (self->display, - &self->video_info, self->fullscreen, 
&self->render_lock); + self->window = gst_wl_window_new_toplevel_full (self->display, + &self->render_info, self->fullscreen, self->fullscreen_output, + &self->render_lock); g_signal_connect_object (self->window, "closed", G_CALLBACK (on_window_closed), self, 0); gst_wl_window_set_rotate_method (self->window, self->current_rotate_method); + gst_wl_window_set_force_aspect_ratio (self->window, + self->force_aspect_ratio); + } + } + + /* + * The GstVideoFrame fast copy can't crop, make sure the internal pool + * allocated buffers large enough to hold the padded frames. + */ + if (gst_buffer_get_video_crop_meta (buffer)) { + gint padded_width, padded_height; + GstVideoMeta *vmeta; + GstStructure *s; + + vmeta = gst_buffer_get_video_meta (buffer); + self->caps = gst_caps_make_writable (self->caps); + s = gst_caps_get_structure (self->caps, 0); + gst_structure_get (s, "width", G_TYPE_INT, &padded_width, + "height", G_TYPE_INT, &padded_height, NULL); + + if (vmeta->width != padded_width || vmeta->height != padded_height) { + gst_structure_set (s, "width", G_TYPE_INT, vmeta->width, + "height", G_TYPE_INT, vmeta->height, NULL); + + if (self->pool) { + gst_buffer_pool_set_active (self->pool, FALSE); + gst_clear_object (&self->pool); + } + + gst_video_info_set_format (&self->video_info, vmeta->format, + vmeta->width, vmeta->height); } } @@ -908,7 +1101,6 @@ * offloading the compositor from a copy helps maintaining a smoother * desktop. 
*/ - GstVideoFrame src, dst; if (!gst_wayland_activate_drm_dumb_pool (self)) { self->skip_dumb_buffer_copy = TRUE; @@ -936,19 +1128,9 @@ wlbuffer = gst_buffer_add_wl_buffer (to_render, wbuf, self->display); } - if (!gst_video_frame_map (&dst, &self->video_info, to_render, - GST_MAP_WRITE)) - goto dst_map_failed; - - if (!gst_video_frame_map (&src, &self->video_info, buffer, GST_MAP_READ)) { - gst_video_frame_unmap (&dst); - goto src_map_failed; - } - - gst_video_frame_copy (&dst, &src); - - gst_video_frame_unmap (&src); - gst_video_frame_unmap (&dst); + ret = gst_wayland_sink_copy_frame (self, buffer, to_render); + if (ret != GST_FLOW_OK) + goto done; goto render; } @@ -956,15 +1138,13 @@ handle_shm: if (!wbuf && gst_wl_display_check_format_for_shm (self->display, - &self->video_info)) { + &self->render_info)) { if (gst_buffer_n_memory (buffer) == 1 && gst_is_fd_memory (mem)) wbuf = gst_wl_shm_memory_construct_wl_buffer (mem, self->display, &self->video_info); /* If nothing worked, copy into our internal pool */ if (!wbuf) { - GstVideoFrame src, dst; - /* we don't know how to create a wl_buffer directly from the provided * memory, so we have to copy the data to shm memory that we know how * to handle... 
*/ @@ -995,19 +1175,9 @@ wlbuffer = gst_buffer_add_wl_buffer (to_render, wbuf, self->display); } - if (!gst_video_frame_map (&dst, &self->video_info, to_render, - GST_MAP_WRITE)) - goto dst_map_failed; - - if (!gst_video_frame_map (&src, &self->video_info, buffer, GST_MAP_READ)) { - gst_video_frame_unmap (&dst); - goto src_map_failed; - } - - gst_video_frame_copy (&dst, &src); - - gst_video_frame_unmap (&src); - gst_video_frame_unmap (&dst); + ret = gst_wayland_sink_copy_frame (self, buffer, to_render); + if (ret != GST_FLOW_OK) + goto done; goto render; } @@ -1071,20 +1241,6 @@ ret = GST_FLOW_ERROR; goto done; } -src_map_failed: - { - GST_ELEMENT_ERROR (self, RESOURCE, READ, - ("Video memory can not be read from userspace."), (NULL)); - ret = GST_FLOW_ERROR; - goto done; - } -dst_map_failed: - { - GST_ELEMENT_ERROR (self, RESOURCE, WRITE, - ("Video memory can not be written from userspace."), (NULL)); - ret = GST_FLOW_ERROR; - goto done; - } done: { g_mutex_unlock (&self->render_lock); @@ -1134,6 +1290,8 @@ &self->render_lock); gst_wl_window_set_rotate_method (self->window, self->current_rotate_method); + gst_wl_window_set_force_aspect_ratio (self->window, + self->force_aspect_ratio); } } else { GST_ERROR_OBJECT (self, "Failed to find display handle, " @@ -1187,9 +1345,6 @@ static gboolean plugin_init (GstPlugin * plugin) { - GST_DEBUG_CATEGORY_INIT (gstwayland_debug, "waylandsink", 0, - " wayland video sink"); - return GST_ELEMENT_REGISTER (waylandsink, plugin); }

_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/wayland/gstwaylandsink.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/wayland/gstwaylandsink.h
Changed
@@ -52,14 +52,22 @@ GstWlWindow *window; GstBufferPool *pool; - gboolean video_info_changed; + gboolean render_info_changed; + GstVideoInfo render_info; GstVideoInfo video_info; GstVideoInfoDmaDrm drm_info; + GstVideoMasteringDisplayInfo minfo; + GstVideoContentLightLevel linfo; + gboolean have_mastering_info; + gboolean have_light_info; gboolean fullscreen; + gchar *fullscreen_output; GstCaps *caps; gchar *display_name; + /* If both OBJECT_LOCK and render_lock are needed, + * OBJECT_LOCK must be taken first */ GMutex render_lock; GstBuffer *last_buffer; @@ -69,6 +77,7 @@ gchar *drm_device; gboolean skip_dumb_buffer_copy; + gboolean force_aspect_ratio; }; struct _GstWaylandSinkClass
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/webrtc/gstwebrtcbin.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/webrtc/gstwebrtcbin.c
Changed
@@ -656,6 +656,7 @@ REQUEST_AUX_SENDER, REQUEST_POST_RTP_AUX_SENDER, ADD_ICE_CANDIDATE_FULL_SIGNAL, + CLOSE_SIGNAL, LAST_SIGNAL, }; @@ -1169,12 +1170,8 @@ } static void -_stop_thread (GstWebRTCBin * webrtc) +_quit_pc_loop (GstWebRTCBin * webrtc) { - GST_OBJECT_LOCK (webrtc); - webrtc->priv->is_closed = TRUE; - GST_OBJECT_UNLOCK (webrtc); - PC_LOCK (webrtc); g_main_loop_quit (webrtc->priv->loop); while (webrtc->priv->loop) @@ -1184,6 +1181,20 @@ g_thread_unref (webrtc->priv->thread); } +static void +_stop_thread (GstWebRTCBin * webrtc) +{ + GST_OBJECT_LOCK (webrtc); + if (webrtc->priv->is_closed) { + GST_OBJECT_UNLOCK (webrtc); + return; + } + webrtc->priv->is_closed = TRUE; + GST_OBJECT_UNLOCK (webrtc); + + _quit_pc_loop (webrtc); +} + static gboolean _execute_op (GstWebRTCBinTask * op) { @@ -1213,6 +1224,13 @@ PC_UNLOCK (op->webrtc); + if (op->deferred) { + GST_DEBUG_OBJECT (op->webrtc, + "Task successfully submitted, promise result is expected to be notified asynchronously"); + gst_clear_structure (&s); + goto out; + } + if (op->promise) gst_promise_reply (op->promise, s); else if (s) @@ -1238,9 +1256,10 @@ * be replied to in the case that @webrtc becomes closed between the idle * source addition and the the execution of the idle source. 
*/ -gboolean -gst_webrtc_bin_enqueue_task (GstWebRTCBin * webrtc, GstWebRTCBinFunc func, - gpointer data, GDestroyNotify notify, GstPromise * promise) +static gboolean +gst_webrtc_bin_enqueue_task_full (GstWebRTCBin * webrtc, + GstWebRTCBinFunc func, gpointer data, + GDestroyNotify notify, GstPromise * promise, gboolean deferred) { GstWebRTCBinTask *op; GMainContext *ctx; @@ -1261,6 +1280,7 @@ op = g_new0 (GstWebRTCBinTask, 1); op->webrtc = webrtc; + op->deferred = deferred; op->op = func; op->data = data; op->notify = notify; @@ -1278,6 +1298,14 @@ return TRUE; } +gboolean +gst_webrtc_bin_enqueue_task (GstWebRTCBin * pc, GstWebRTCBinFunc func, + gpointer data, GDestroyNotify notify, GstPromise * promise) +{ + return gst_webrtc_bin_enqueue_task_full (pc, func, data, notify, promise, + FALSE); +} + void gst_webrtc_bin_get_peer_connection_stats (GstWebRTCBin * webrtc, guint * data_channels_opened, guint * data_channels_closed) @@ -1640,7 +1668,15 @@ if (new_state == GST_WEBRTC_ICE_GATHERING_STATE_COMPLETE) { ICE_LOCK (webrtc); if (webrtc->priv->pending_local_ice_candidates->len != 0) { - /* ICE candidates queued for emissiong -> we're gathering, not complete */ + /* ICE candidates queued for emission -> we're gathering, not complete */ + + const gchar *new_s = + _enum_value_to_string (GST_TYPE_WEBRTC_ICE_GATHERING_STATE, + GST_WEBRTC_ICE_GATHERING_STATE_GATHERING); + GST_INFO_OBJECT (webrtc, + "Deferring ICE gathering state change to %s(%u) due to pending candidates", + new_s, GST_WEBRTC_ICE_GATHERING_STATE_GATHERING); + new_state = GST_WEBRTC_ICE_GATHERING_STATE_GATHERING; } ICE_UNLOCK (webrtc); @@ -2153,7 +2189,7 @@ static GstCaps * _add_supported_attributes_to_caps (GstWebRTCBin * webrtc, - WebRTCTransceiver * trans, const GstCaps * caps) + WebRTCTransceiver * trans, GstCaps * caps) { GstWebRTCKind kind; GstCaps *ret; @@ -4171,7 +4207,7 @@ /* Verify that we didn't ignore any locked m-line transceivers */ for (i = 0; i < webrtc->priv->transceivers->len; i++) { - 
WebRTCTransceiver *wtrans; + WebRTCTransceiver *wtrans GST_UNUSED_ASSERT; trans = g_ptr_array_index (webrtc->priv->transceivers, i); wtrans = WEBRTC_TRANSCEIVER (trans); @@ -4403,16 +4439,6 @@ } } -static void -_get_rtx_target_pt_and_ssrc_from_caps (GstCaps * answer_caps, gint * target_pt, - guint * target_ssrc) -{ - const GstStructure *s = gst_caps_get_structure (answer_caps, 0); - - gst_structure_get_int (s, "payload", target_pt); - gst_structure_get_uint (s, "ssrc", target_ssrc); -} - /* TODO: use the options argument */ static GstSDPMessage * _create_answer_task (GstWebRTCBin * webrtc, const GstStructure * options, @@ -4540,8 +4566,6 @@ } mid = gst_sdp_media_get_attribute_val (media, "mid"); - /* XXX: not strictly required but a lot of functionality requires a mid */ - g_assert (mid); /* set the a=setup: attribute */ offer_setup = _get_dtls_setup_from_media (offer_media); @@ -4594,21 +4618,28 @@ GstWebRTCRTPTransceiver *rtp_trans = NULL; WebRTCTransceiver *trans = NULL; GstWebRTCRTPTransceiverDirection offer_dir, answer_dir; - gint target_pt = -1; - gint original_target_pt = -1; - guint target_ssrc = 0; gst_sdp_media_set_proto (media, "UDP/TLS/RTP/SAVPF"); offer_caps = _rtp_caps_from_media (offer_media); _remove_optional_offer_fields (offer_caps); - rtp_trans = _find_transceiver_for_mid (webrtc, mid); - if (!rtp_trans) { - g_set_error (error, GST_WEBRTC_ERROR, GST_WEBRTC_ERROR_INVALID_STATE, - "Transceiver for media with mid %s not found", mid); - gst_caps_unref (offer_caps); - goto rejected; + if (mid) { + rtp_trans = _find_transceiver_for_mid (webrtc, mid); + if (!rtp_trans) { + g_set_error (error, GST_WEBRTC_ERROR, GST_WEBRTC_ERROR_INVALID_STATE, + "Transceiver for media with mid %s not found", mid); + gst_caps_unref (offer_caps); + goto rejected; + } + } else { + rtp_trans = _find_transceiver_for_mline (webrtc, i); + if (!rtp_trans) { + g_set_error (error, GST_WEBRTC_ERROR, GST_WEBRTC_ERROR_INVALID_STATE, + "Transceiver for media with mline %u not found", 
i); + gst_caps_unref (offer_caps); + goto rejected; + } } GstCaps *current_caps = _find_codec_preferences (webrtc, rtp_trans, i, error); @@ -4623,7 +4654,8 @@ const gchar *last_mid = gst_sdp_media_get_attribute_val (last_media, "mid"); /* FIXME: assumes no shenanigans with recycling transceivers */ - g_assert (g_strcmp0 (mid, last_mid) == 0); + if (mid != last_mid) + g_assert (g_strcmp0 (mid, last_mid) == 0); if (!current_caps) current_caps = _rtp_caps_from_media (last_media); } @@ -4676,25 +4708,42 @@ gst_structure_remove_fields (s, "rtcp-fb-nack", NULL); } - if (gst_sdp_media_set_media_from_caps (answer_caps, media) != GST_SDP_OK) { - GST_WARNING_OBJECT (webrtc, - "Could not build media from caps %" GST_PTR_FORMAT, answer_caps); - gst_clear_caps (&answer_caps); - gst_clear_caps (&offer_caps); - goto rejected; - } - - _get_rtx_target_pt_and_ssrc_from_caps (answer_caps, &target_pt, - &target_ssrc); + static const gchar *disallowed_payloads[4] = { "rtx", "red", "ulpfec", + NULL + }; + guint answer_caps_size = gst_caps_get_size (answer_caps); + for (guint l = 0; l < answer_caps_size; l++) { + const GstStructure *s = gst_caps_get_structure (answer_caps, l); + const gchar *enc_name = gst_structure_get_string (s, "encoding-name"); + gchar *tmp = g_ascii_strdown (enc_name, -1); + gint target_pt = -1; + gint original_target_pt = -1; + guint target_ssrc = 0; - original_target_pt = target_pt; + if (g_strv_contains (disallowed_payloads, tmp)) { + g_free (tmp); + continue; + } + g_free (tmp); + if (gst_sdp_media_add_media_from_structure (s, media) != GST_SDP_OK) { + GST_WARNING_OBJECT (webrtc, + "Could not set media from %" GST_PTR_FORMAT, s); + gst_clear_caps (&answer_caps); + gst_clear_caps (&offer_caps); + goto rejected; + } - - _media_add_fec (media, trans, offer_caps, &target_pt); - if (trans->do_nack) { - _media_add_rtx (media, trans, offer_caps, target_pt, target_ssrc); - if (target_pt != original_target_pt) - _media_add_rtx (media, trans, offer_caps, original_target_pt, - 
target_ssrc); + gst_structure_get_int (s, "payload", &target_pt); + gst_structure_get_uint (s, "ssrc", &target_ssrc); + original_target_pt = target_pt; + + _media_add_fec (media, trans, offer_caps, &target_pt); + if (trans->do_nack) { + _media_add_rtx (media, trans, offer_caps, target_pt, target_ssrc); + if (target_pt != original_target_pt) + _media_add_rtx (media, trans, offer_caps, original_target_pt, + target_ssrc); + } } if (answer_dir != GST_WEBRTC_RTP_TRANSCEIVER_DIRECTION_RECVONLY) @@ -5430,6 +5479,7 @@ ICE_LOCK (webrtc); g_array_append_val (webrtc->priv->pending_remote_ice_candidates, new); ICE_UNLOCK (webrtc); + gst_promise_reply (item->promise, NULL); } return; } @@ -6433,11 +6483,10 @@ mid = gst_sdp_media_get_attribute_val (media, "mid"); direction = _get_direction_from_media (media); - /* XXX: not strictly required but a lot of functionality requires a mid */ - if (!mid) { - g_set_error (error, GST_WEBRTC_ERROR, GST_WEBRTC_ERROR_SDP_SYNTAX_ERROR, - "Missing mid attribute in media"); - goto out; + if (mid) { + trans = _find_transceiver_for_mid (webrtc, mid); + } else { + trans = _find_transceiver_for_mline (webrtc, i); } if (bundled) @@ -6445,7 +6494,6 @@ else transport_idx = i; - trans = _find_transceiver_for_mid (webrtc, mid); if (sd->source == SDP_LOCAL) { /* If the media description was not yet associated with an RTCRtpTransceiver object then run the following steps: */ @@ -6469,8 +6517,10 @@ } trans->mline = i; /* Set transceiver.Mid to transceiver.JsepMid */ - g_free (trans->mid); - trans->mid = g_strdup (mid); + g_clear_pointer (&trans->mid, g_free); + if (mid) { + trans->mid = g_strdup (mid); + } g_object_notify (G_OBJECT (trans), "mid"); /* If transceiver.Stopped is true, abort these sub steps */ if (trans->stopped) @@ -7116,6 +7166,9 @@ ICE_LOCK (webrtc); g_array_append_val (webrtc->priv->pending_remote_ice_candidates, new); ICE_UNLOCK (webrtc); + if (item->promise) { + gst_promise_reply (item->promise, NULL); + } } else { _add_ice_candidate 
(webrtc, item, FALSE); } @@ -7135,6 +7188,7 @@ const gchar * attr, GstPromise * promise) { IceCandidateItem *item; + gboolean defer_result = promise != NULL; item = g_new0 (IceCandidateItem, 1); item->mlineindex = mline; @@ -7145,9 +7199,10 @@ else if (!g_ascii_strncasecmp (attr, "candidate:", 10)) item->candidate = g_strdup_printf ("a=%s", attr); } - if (!gst_webrtc_bin_enqueue_task (webrtc, + + if (!gst_webrtc_bin_enqueue_task_full (webrtc, (GstWebRTCBinFunc) _add_ice_candidate_task, item, - (GDestroyNotify) _free_ice_candidate_item, promise)) { + (GDestroyNotify) _free_ice_candidate_item, promise, defer_result)) { GError *error = g_error_new (GST_WEBRTC_ERROR, GST_WEBRTC_ERROR_INVALID_STATE, "Could not add ICE candidate. webrtcbin is closed"); @@ -7161,7 +7216,7 @@ } static GstStructure * -_on_local_ice_candidate_task (GstWebRTCBin * webrtc) +_on_local_ice_candidate_task (GstWebRTCBin * webrtc, gpointer data) { gsize i; GArray *items; @@ -7230,7 +7285,9 @@ } g_array_free (items, TRUE); - return NULL; + /* Clearing all pending ice candidates may have allowed the gathering + * state to transition to complete - so check it before exiting */ + return _update_ice_gathering_state_task (webrtc, data); } static void @@ -7433,7 +7490,10 @@ g_return_val_if_fail (GST_IS_WEBRTC_BIN (webrtc), NULL); g_return_val_if_fail (label != NULL, NULL); g_return_val_if_fail (strlen (label) <= 65535, NULL); - g_return_val_if_fail (webrtc->priv->is_closed != TRUE, NULL); + + if (webrtc->priv->is_closed) { + return NULL; + } if (!init_params || !gst_structure_get_boolean (init_params, "ordered", &ordered)) @@ -7575,7 +7635,7 @@ guint32 session_id = 0, ssrc = 0, pt = 0; SsrcMapItem *mid_entry; GstWebRTCRTPTransceiver *rtp_trans = NULL; - WebRTCTransceiver *trans; + WebRTCTransceiver *trans GST_UNUSED_ASSERT; TransportStream *stream; GstWebRTCBinPad *pad; guint media_idx; @@ -8234,8 +8294,8 @@ GstStateChangeReturn ret = GST_STATE_CHANGE_SUCCESS; GST_DEBUG ("changing state: %s => %s", - 
gst_element_state_get_name (GST_STATE_TRANSITION_CURRENT (transition)), - gst_element_state_get_name (GST_STATE_TRANSITION_NEXT (transition))); + gst_state_get_name (GST_STATE_TRANSITION_CURRENT (transition)), + gst_state_get_name (GST_STATE_TRANSITION_NEXT (transition))); switch (transition) { case GST_STATE_CHANGE_NULL_TO_READY:{ @@ -8270,6 +8330,7 @@ webrtc->priv->running = FALSE; break; case GST_STATE_CHANGE_READY_TO_NULL: + gst_webrtc_ice_close (webrtc->priv->ice, NULL); _stop_thread (webrtc); break; default: @@ -8615,6 +8676,164 @@ } } +struct close_data +{ + GWeakRef webrtc_weak; + GstPromise *promise; +}; + +static struct close_data * +close_data_new (GstWebRTCBin * webrtc, GstPromise * p) +{ + struct close_data *d = g_atomic_rc_box_new0 (struct close_data); + g_weak_ref_init (&d->webrtc_weak, webrtc); + if (p) + d->promise = gst_promise_ref (p); + return d; +} + +static void +close_data_clear (struct close_data *d) +{ + g_weak_ref_clear (&d->webrtc_weak); + if (d->promise) + gst_promise_unref (d->promise); +} + +static void +close_data_unref (struct close_data *d) +{ + g_atomic_rc_box_release_full (d, (GDestroyNotify) close_data_clear); +} + +static void +on_ice_closed (GstPromise * close_promise, gpointer user_data) +{ + struct close_data *d = (struct close_data *) user_data; + GstWebRTCBin *webrtc = g_weak_ref_get (&d->webrtc_weak); + + if (webrtc) { + GST_OBJECT_LOCK (webrtc); + /* 10. Set connection.IceConnectionState to "closed". This does not fire + * any event. */ + webrtc->ice_connection_state = GST_WEBRTC_ICE_CONNECTION_STATE_CLOSED; + + /* 11. Set connection.ConnectionState to "closed". This does not fire + * any event. 
*/ + webrtc->peer_connection_state = GST_WEBRTC_PEER_CONNECTION_STATE_CLOSED; + GST_OBJECT_UNLOCK (webrtc); + gst_object_unref (webrtc); + } + + if (d->promise) + gst_promise_reply (d->promise, NULL); +} + +static void +gst_webrtc_bin_close (GstWebRTCBin * webrtc, GstPromise * promise) +{ + guint i; + GstPromise *close_promise = NULL; + struct close_data *d = NULL; + + /* https://www.w3.org/TR/webrtc/#dom-rtcpeerconnection-close */ + + GST_OBJECT_LOCK (webrtc); + + /* 1. If connection.IsClosed is true, abort these steps. */ + if (webrtc->priv->is_closed) { + GError *error = NULL; + GstStructure *s = NULL; + + GST_OBJECT_UNLOCK (webrtc); + + error = + g_error_new (GST_WEBRTC_ERROR, GST_WEBRTC_ERROR_INVALID_STATE, + "Connection is already closed"); + s = gst_structure_new ("application/x-gst-promise", "error", + G_TYPE_ERROR, error, NULL); + gst_promise_reply (promise, s); + g_clear_error (&error); + return; + } + + /* 2. Set connection.IsClosed to true. */ + webrtc->priv->is_closed = TRUE; + GST_OBJECT_UNLOCK (webrtc); + + _quit_pc_loop (webrtc); + + /* 3. Set connection.SignalingState to "closed". This does not fire any + * event. */ + GST_OBJECT_LOCK (webrtc); + webrtc->signaling_state = GST_WEBRTC_SIGNALING_STATE_CLOSED; + + /* 4. Let transceivers be the result of executing the CollectTransceivers + * algorithm. + * For every RTCRtpTransceiver transceiver in transceivers, run the + * following steps: */ + for (i = 0; i < webrtc->priv->transceivers->len; i++) { + GstWebRTCRTPTransceiver *rtp_trans = + g_ptr_array_index (webrtc->priv->transceivers, i); + + /* 4.1. If transceiver.Stopped is true, abort these sub steps. */ + if (rtp_trans->stopped) { + GST_TRACE_OBJECT (webrtc, "transceiver %p stopped", rtp_trans); + continue; + } + /* 4.2. Stop the RTCRtpTransceiver with transceiver and disappear. (Currently unsupported) */ + } + GST_OBJECT_UNLOCK (webrtc); + + /* 5. Set the ReadyState slot of each of connection's RTCDataChannels to + * "closed". 
*/ + DC_LOCK (webrtc); + for (i = 0; i < webrtc->priv->data_channels->len; i++) { + WebRTCDataChannel *channel = + g_ptr_array_index (webrtc->priv->data_channels, i); + channel->parent.ready_state = GST_WEBRTC_DATA_CHANNEL_STATE_CLOSED; + } + DC_UNLOCK (webrtc); + + /* 6. If connection.SctpTransport is not null, tear down the underlying + * SCTP association by sending an SCTP ABORT chunk and set the + * SctpTransportState to "closed". */ + if (webrtc->priv->sctp_transport) { + gst_element_set_state (webrtc->priv->sctp_transport->sctpenc, + GST_STATE_READY); + } + + GST_OBJECT_LOCK (webrtc); + + /* 7. Set the DtlsTransportState slot of each of connection's + * RTCDtlsTransports to "closed". */ + for (i = 0; i < webrtc->priv->transceivers->len; i++) { + GstWebRTCRTPTransceiver *rtp_trans = + g_ptr_array_index (webrtc->priv->transceivers, i); + GstWebRTCDTLSTransport *transport; + + transport = webrtc_transceiver_get_dtls_transport (rtp_trans); + if (transport) { + transport->state = GST_WEBRTC_DTLS_TRANSPORT_STATE_CLOSED; + } + } + + GST_OBJECT_UNLOCK (webrtc); + + /* 8. Destroy connection's ICE Agent, abruptly ending any active ICE + * processing and releasing any relevant resources (e.g. TURN permissions). + * 9. Set the IceTransportState slot of each of connection's + * RTCIceTransports to "closed". */ + /* NOTE: We perform these operations asynchronously while the "abruptly" word + * from the spec suggests this should be done synchronously. 
*/ + d = close_data_new (webrtc, promise); + close_promise = + gst_promise_new_with_change_func (on_ice_closed, d, + (GDestroyNotify) close_data_unref); + gst_webrtc_ice_close (webrtc->priv->ice, close_promise); + gst_promise_unref (close_promise); +} + static void gst_webrtc_bin_set_property (GObject * object, guint prop_id, const GValue * value, GParamSpec * pspec) @@ -8788,6 +9007,14 @@ g_array_free (webrtc->priv->ice_stream_map, TRUE); webrtc->priv->ice_stream_map = NULL; + if (webrtc->priv->sctp_transport) { + gst_element_set_locked_state (webrtc->priv->sctp_transport->sctpdec, FALSE); + gst_element_set_locked_state (webrtc->priv->sctp_transport->sctpenc, FALSE); + gst_element_set_state (webrtc->priv->sctp_transport->sctpdec, + GST_STATE_NULL); + gst_element_set_state (webrtc->priv->sctp_transport->sctpenc, + GST_STATE_NULL); + } g_clear_object (&webrtc->priv->sctp_transport); G_OBJECT_CLASS (parent_class)->dispose (object); @@ -8876,7 +9103,7 @@ gst_element_class_add_static_pad_template_with_gtype (element_class, &src_template, GST_TYPE_WEBRTC_BIN_SRC_PAD); - gst_element_class_set_metadata (element_class, "WebRTC Bin", + gst_element_class_set_static_metadata (element_class, "WebRTC Bin", "Filter/Network/WebRTC", "A bin for webrtc connections", "Matthew Waters <matthew@centricular.com>"); @@ -9158,6 +9385,22 @@ G_TYPE_NONE, 3, G_TYPE_UINT, G_TYPE_STRING, GST_TYPE_PROMISE); /** + * GstWebRTCBin::close: + * @object: the #webrtcbin + * @promise: (nullable): a #GstPromise to be notified when the task is + * complete. + * + * Invoke the close procedure as specified in + * https://www.w3.org/TR/webrtc/#dom-rtcpeerconnection-close. 
+ * + * Since: 1.28 + */ + gst_webrtc_bin_signals[CLOSE_SIGNAL] = + g_signal_new_class_handler ("close", G_TYPE_FROM_CLASS (klass), + G_SIGNAL_RUN_LAST | G_SIGNAL_ACTION, G_CALLBACK (gst_webrtc_bin_close), + NULL, NULL, NULL, G_TYPE_NONE, 1, GST_TYPE_PROMISE); + + /** * GstWebRTCBin::get-stats: * @object: the #webrtcbin * @pad: (nullable): A #GstPad to get the stats for, or %NULL for all @@ -9250,6 +9493,11 @@ * "protocol" G_TYPE_STRING Either "udp" or "tcp". Based on the "transport" defined in RFC 5245 * "relay-protocol" G_TYPE_STRING protocol used by the endpoint to communicate with the TURN server. Only present for local candidates. Either "udp", "tcp" or "tls" * "url" G_TYPE_STRING URL of the ICE server from which the candidate was obtained. Only present for local candidates + * "foundation" G_TYPE_STRING ICE foundation as defined in RFC 5245 section 15.1 (Since: 1.28) + * "related-address" G_TYPE_STRING ICE rel-addr as defined in RFC 5245 section 15.1. Only set for server-reflexive, peer-reflexive and relay candidates (Since: 1.28) + * "related-port" G_TYPE_UINT ICE rel-port as defined in RFC 5245 section 15.1. Only set for server-reflexive, peer-reflexive and relay candidates (Since: 1.28) + * "username-fragment" G_TYPE_STRING ICE username fragment as defined in RFC 5245 section 7.1.2.3 (Since: 1.28) + * "tcp-type" G_TYPE_STRING ICE candidate TCP type as defined in RTCIceTcpCandidateType (Since: 1.28) * * RTCIceCandidatePairStats supported fields (https://www.w3.org/TR/webrtc-stats/#candidatepair-dict*) (Since: 1.22) *
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/webrtc/gstwebrtcbin.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/webrtc/gstwebrtcbin.h
Changed
@@ -169,6 +169,7 @@ gpointer data; GDestroyNotify notify; GstPromise *promise; + gboolean deferred; } GstWebRTCBinTask; gboolean gst_webrtc_bin_enqueue_task (GstWebRTCBin * pc,
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/webrtc/gstwebrtcstats.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/webrtc/gstwebrtcstats.c
Changed
@@ -615,6 +615,11 @@ long priority; DOMString url; DOMString relayProtocol; + DOMString foundation; + DOMString relatedAddress; + long relatedPort; + DOMString usernameFragment; + RTCIceTcpCandidateType tcpType; */ if (transport_id) @@ -630,6 +635,21 @@ NULL); if (can->url) gst_structure_set (stats, "url", G_TYPE_STRING, can->url, NULL); + if (can->ABI.abi.foundation) + gst_structure_set (stats, "foundation", G_TYPE_STRING, + can->ABI.abi.foundation, NULL); + if (can->ABI.abi.related_address) + gst_structure_set (stats, "related-address", G_TYPE_STRING, + can->ABI.abi.related_address, NULL); + if (can->ABI.abi.related_port != -1) + gst_structure_set (stats, "related-port", G_TYPE_UINT, + can->ABI.abi.related_port, NULL); + if (can->ABI.abi.username_fragment) + gst_structure_set (stats, "username-fragment", G_TYPE_STRING, + can->ABI.abi.username_fragment, NULL); + if (can->ABI.abi.tcp_type != GST_WEBRTC_ICE_TCP_CANDIDATE_TYPE_NONE) + gst_structure_set (stats, "tcp-type", + GST_TYPE_WEBRTC_ICE_TCP_CANDIDATE_TYPE, can->ABI.abi.tcp_type, NULL); gst_structure_set (s, id, GST_TYPE_STRUCTURE, stats, NULL); gst_structure_free (stats); @@ -648,6 +668,7 @@ gchar *id; gchar *local_cand_id = NULL, *remote_cand_id = NULL; double ts; + GstWebRTCICECandidatePair *selected_pair; GstWebRTCICECandidateStats *local_cand = NULL, *remote_cand = NULL; gst_structure_get_double (s, "timestamp", &ts); @@ -694,23 +715,30 @@ unsigned long long responseBytesSent; */ - if (gst_webrtc_ice_get_selected_pair (webrtc->priv->ice, stream, - &local_cand, &remote_cand)) { - local_cand_id = - _get_stats_from_ice_candidates (webrtc, local_cand, transport_id, - "local", s); - remote_cand_id = - _get_stats_from_ice_candidates (webrtc, remote_cand, transport_id, - "remote", s); - - gst_structure_set (stats, "local-candidate-id", G_TYPE_STRING, - local_cand_id, NULL); - gst_structure_set (stats, "remote-candidate-id", G_TYPE_STRING, - remote_cand_id, NULL); - } else + selected_pair = + 
gst_webrtc_ice_transport_get_selected_candidate_pair (transport); + if (selected_pair) { + if (selected_pair->local) { + local_cand_id = + _get_stats_from_ice_candidates (webrtc, selected_pair->local->stats, + transport_id, "local", s); + gst_structure_set (stats, "local-candidate-id", G_TYPE_STRING, + local_cand_id, NULL); + } + if (selected_pair->remote) { + remote_cand_id = + _get_stats_from_ice_candidates (webrtc, selected_pair->remote->stats, + transport_id, "remote", s); + + gst_structure_set (stats, "remote-candidate-id", G_TYPE_STRING, + remote_cand_id, NULL); + } + gst_webrtc_ice_candidate_pair_free (selected_pair); + } else { GST_INFO_OBJECT (webrtc, "No selected ICE candidate pair was found for transport %s", GST_OBJECT_NAME (transport)); + } /* XXX: these stats are at the rtp session level but there isn't a specific * stats structure for that. The RTCIceCandidatePairStats is the closest with @@ -743,6 +771,7 @@ gchar *id; double ts; gchar *ice_id; + GstWebRTCDTLSRole dtls_role = GST_WEBRTC_DTLS_ROLE_UNKNOWN; gst_structure_get_double (s, "timestamp", &ts); @@ -779,6 +808,17 @@ g_free (ice_id); } + if (transport->state > GST_WEBRTC_DTLS_TRANSPORT_STATE_NEW) { + if (transport->client) { + dtls_role = GST_WEBRTC_DTLS_ROLE_CLIENT; + } else { + dtls_role = GST_WEBRTC_DTLS_ROLE_SERVER; + } + } + gst_structure_set (stats, "dtls-role", GST_TYPE_WEBRTC_DTLS_ROLE, dtls_role, + "dtls-state", GST_TYPE_WEBRTC_DTLS_TRANSPORT_STATE, transport->state, + NULL); + gst_structure_set (s, id, GST_TYPE_STRUCTURE, stats, NULL); gst_structure_free (stats); @@ -1015,6 +1055,28 @@ return TRUE; } +static void +_get_data_channel_transport_stats (GstWebRTCBin * webrtc, GstStructure * s) +{ + struct transport_stream_stats ts_stats = { + NULL, + }; + GObject *gst_rtp_session; + + if (!webrtc->priv->data_channel_transport) + return; + + ts_stats.stream = webrtc->priv->data_channel_transport; + + g_signal_emit_by_name (webrtc->rtpbin, "get-session", + ts_stats.stream->session_id, 
&gst_rtp_session); + + ts_stats.transport_id = + _get_stats_from_dtls_transport (webrtc, ts_stats.stream->transport, + GST_WEBRTC_ICE_STREAM (ts_stats.stream->stream), NULL, s); + g_clear_pointer (&ts_stats.transport_id, g_free); +} + GstStructure * gst_webrtc_bin_create_stats (GstWebRTCBin * webrtc, GstPad * pad) { @@ -1039,6 +1101,8 @@ gst_structure_free (pc_stats); } + _get_data_channel_transport_stats (webrtc, s); + if (pad) _get_stats_from_pad (webrtc, pad, s); else
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/webrtc/transportreceivebin.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/webrtc/transportreceivebin.c
Changed
@@ -235,8 +235,8 @@ GstStateChangeReturn ret = GST_STATE_CHANGE_SUCCESS; GST_DEBUG ("changing state: %s => %s", - gst_element_state_get_name (GST_STATE_TRANSITION_CURRENT (transition)), - gst_element_state_get_name (GST_STATE_TRANSITION_NEXT (transition))); + gst_state_get_name (GST_STATE_TRANSITION_CURRENT (transition)), + gst_state_get_name (GST_STATE_TRANSITION_NEXT (transition))); switch (transition) { case GST_STATE_CHANGE_NULL_TO_READY:{ @@ -395,8 +395,9 @@ gst_element_class_add_static_pad_template (element_class, &data_sink_template); - gst_element_class_set_metadata (element_class, "WebRTC Transport Receive Bin", - "Filter/Network/WebRTC", "A bin for webrtc connections", + gst_element_class_set_static_metadata (element_class, + "WebRTC Transport Receive Bin", "Filter/Network/WebRTC", + "A bin for webrtc connections", "Matthew Waters <matthew@centricular.com>"); gobject_class->constructed = transport_receive_bin_constructed;
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/webrtc/transportsendbin.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/webrtc/transportsendbin.c
Changed
@@ -163,8 +163,8 @@ GstStateChangeReturn ret = GST_STATE_CHANGE_SUCCESS; GST_DEBUG_OBJECT (element, "changing state: %s => %s", - gst_element_state_get_name (GST_STATE_TRANSITION_CURRENT (transition)), - gst_element_state_get_name (GST_STATE_TRANSITION_NEXT (transition))); + gst_state_get_name (GST_STATE_TRANSITION_CURRENT (transition)), + gst_state_get_name (GST_STATE_TRANSITION_NEXT (transition))); switch (transition) { case GST_STATE_CHANGE_NULL_TO_READY:{ @@ -509,8 +509,9 @@ gst_element_class_add_static_pad_template (element_class, &data_sink_template); - gst_element_class_set_metadata (element_class, "WebRTC Transport Send Bin", - "Filter/Network/WebRTC", "A bin for webrtc connections", + gst_element_class_set_static_metadata (element_class, + "WebRTC Transport Send Bin", "Filter/Network/WebRTC", + "A bin for webrtc connections", "Matthew Waters <matthew@centricular.com>"); gobject_class->constructed = transport_send_bin_constructed;
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/webrtc/webrtcsctptransport.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/webrtc/webrtcsctptransport.c
Changed
@@ -193,13 +193,14 @@ webrtc_sctp_transport_constructed (GObject * object) { WebRTCSCTPTransport *sctp = WEBRTC_SCTP_TRANSPORT (object); - guint association_id; - - association_id = g_random_int_range (0, G_MAXUINT16); sctp->sctpdec = g_object_ref_sink (gst_element_factory_make ("sctpdec", NULL)); - g_object_set (sctp->sctpdec, "sctp-association-id", association_id, NULL); + g_object_set (sctp->sctpdec, "automatic-association-id", TRUE, NULL); + + guint association_id; + g_object_get (sctp->sctpdec, "sctp-association-id", &association_id, NULL); + sctp->sctpenc = g_object_ref_sink (gst_element_factory_make ("sctpenc", NULL)); g_object_set (sctp->sctpenc, "sctp-association-id", association_id, NULL);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/webrtc/webrtcsdp.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/webrtc/webrtcsdp.c
Changed
@@ -193,16 +193,10 @@ } static gboolean -_media_has_mid (const GstSDPMedia * media, guint media_idx, GError ** error) +_media_has_mid (const GstSDPMedia * media, guint media_idx) { const gchar *mid = gst_sdp_media_get_attribute_val (media, "mid"); - if (IS_EMPTY_SDP_ATTRIBUTE (mid)) { - g_set_error (error, GST_WEBRTC_ERROR, GST_WEBRTC_ERROR_SDP_SYNTAX_ERROR, - "media %u is missing or contains an empty \'mid\' attribute", - media_idx); - return FALSE; - } - return TRUE; + return !IS_EMPTY_SDP_ATTRIBUTE (mid); } const gchar * @@ -220,6 +214,25 @@ return ice_ufrag; } +/* https://datatracker.ietf.org/doc/html/rfc5245#section-15.4 */ +static gboolean +_validate_ice_attr (const gchar * attr, guint min_length) +{ + guint len = strlen (attr); + + if (len < min_length) + return FALSE; + + if (len > 256) + return FALSE; + + for (guint i = 0; i < len; i++) { + if (!g_ascii_isalnum (attr[i]) && attr[i] != '+' && attr[i] != '/') + return FALSE; + } + return TRUE; +} + const gchar * _media_get_ice_pwd (const GstSDPMessage * msg, guint media_idx) { @@ -308,44 +321,61 @@ const GstSDPMedia *media = gst_sdp_message_get_media (sdp->sdp, i); const gchar *mid; gboolean media_in_bundle = FALSE; - if (!_media_has_mid (media, i, error)) - goto fail; - mid = gst_sdp_media_get_attribute_val (media, "mid"); - media_in_bundle = is_bundle - && g_strv_contains ((const gchar **) group_members, mid); - if (!_media_get_ice_ufrag (sdp->sdp, i)) { + const gchar *ice_ufrag; + const gchar *ice_pwd; + + if (_media_has_mid (media, i)) { + mid = gst_sdp_media_get_attribute_val (media, "mid"); + media_in_bundle = + is_bundle && g_strv_contains ((const gchar **) group_members, mid); + } + + ice_ufrag = _media_get_ice_ufrag (sdp->sdp, i); + if (!ice_ufrag) { g_set_error (error, GST_WEBRTC_ERROR, GST_WEBRTC_ERROR_SDP_SYNTAX_ERROR, "media %u is missing or contains an empty \'ice-ufrag\' attribute", i); goto fail; } - if (!_media_get_ice_pwd (sdp->sdp, i)) { + if (!_validate_ice_attr (ice_ufrag, 4)) { + g_set_error 
(error, GST_WEBRTC_ERROR, GST_WEBRTC_ERROR_SDP_SYNTAX_ERROR, + "media %u has an invalid \'ice-ufrag\' attribute", i); + goto fail; + } + ice_pwd = _media_get_ice_pwd (sdp->sdp, i); + if (!ice_pwd) { g_set_error (error, GST_WEBRTC_ERROR, GST_WEBRTC_ERROR_SDP_SYNTAX_ERROR, "media %u is missing or contains an empty \'ice-pwd\' attribute", i); goto fail; } + if (!_validate_ice_attr (ice_pwd, 22)) { + g_set_error (error, GST_WEBRTC_ERROR, GST_WEBRTC_ERROR_SDP_SYNTAX_ERROR, + "media %u has an invalid \'ice-pwd\' attribute", i); + goto fail; + } + if (!has_session_setup && !_media_has_setup (media, i, error)) goto fail; - /* check parameters in bundle are the same */ + /* Validate ICE ufrag and pwd attributes. According to RFC 8839 section 5.4: + * If two data streams have identical "ice-ufrag"s, they MUST have + * identical "ice-pwd"s. + */ if (media_in_bundle) { const gchar *ice_ufrag = gst_sdp_media_get_attribute_val (media, "ice-ufrag"); const gchar *ice_pwd = gst_sdp_media_get_attribute_val (media, "ice-pwd"); if (!bundle_ice_ufrag) bundle_ice_ufrag = ice_ufrag; - else if (g_strcmp0 (bundle_ice_ufrag, ice_ufrag) != 0) { - g_set_error (error, GST_WEBRTC_ERROR, GST_WEBRTC_ERROR_SDP_SYNTAX_ERROR, - "media %u has different ice-ufrag values in bundle. " - "%s != %s", i, bundle_ice_ufrag, ice_ufrag); - goto fail; - } - if (!bundle_ice_pwd) { - bundle_ice_pwd = ice_pwd; - } else if (g_strcmp0 (bundle_ice_pwd, ice_pwd) != 0) { - g_set_error (error, GST_WEBRTC_ERROR, GST_WEBRTC_ERROR_SDP_SYNTAX_ERROR, - "media %u has different ice-pwd values in bundle. " - "%s != %s", i, bundle_ice_pwd, ice_pwd); - goto fail; + else if (g_strcmp0 (bundle_ice_ufrag, ice_ufrag) == 0) { + if (!bundle_ice_pwd) { + bundle_ice_pwd = ice_pwd; + } else if (g_strcmp0 (bundle_ice_pwd, ice_pwd) != 0) { + g_set_error (error, GST_WEBRTC_ERROR, + GST_WEBRTC_ERROR_SDP_SYNTAX_ERROR, + "media %u shares ice-ufrag with another bundled media but has different ice-pwd values. 
" + "%s != %s", i, bundle_ice_pwd, ice_pwd); + goto fail; + } } } }
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/webrtcdsp/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/webrtcdsp/meson.build
Changed
@@ -8,6 +8,12 @@ 'gstwebrtcechoprobe.h', 'gstwebrtcdsp.h', + +webrtcdsp_opt = get_option('webrtcdsp') +if webrtcdsp_opt.disabled() + subdir_done() +endif + doc_sources = foreach s: webrtc_sources + webrtc_headers doc_sources += meson.current_source_dir() / s @@ -40,7 +46,7 @@ webrtc_dep = dependency('webrtc-audio-processing-2', version : '>= 2.0', allow_fallback : true, default_options : default_cppstd, - required : get_option('webrtcdsp')) + required : webrtcdsp_opt) endif if webrtc_dep.found()
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/wpe2
Added
+(directory)
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/wpe2/gstwpe.h
Added
@@ -0,0 +1,24 @@ +/* Copyright (C) <2018, 2019, 2025> Philippe Normand <philn@igalia.com> + * Copyright (C) <2018, 2019> Žan Doberšek <zdobersek@igalia.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/gst.h> + +#define DEFAULT_LOCATION "about:blank"
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/wpe2/gstwpe2.cpp
Added
@@ -0,0 +1,48 @@ +/* Copyright (C) <2018, 2025> Philippe Normand <philn@igalia.com> + * Copyright (C) <2018> Žan Doberšek <zdobersek@igalia.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef HAVE_CONFIG_H +#include <config.h> +#endif + +#include "gstwpevideosrc.h" +#include "gstwpe.h" + +GST_DEBUG_CATEGORY (wpe_video_src_debug); +GST_DEBUG_CATEGORY (wpe_view_debug); +GST_DEBUG_CATEGORY (wpe_src_debug); + +static gboolean +plugin_init (GstPlugin * plugin) +{ + gboolean result; + + GST_DEBUG_CATEGORY_INIT (wpe_video_src_debug, "wpevideosrc2", 0, + "WPE Video Source"); + GST_DEBUG_CATEGORY_INIT (wpe_view_debug, "wpeview2", 0, "WPE Threaded View"); + GST_DEBUG_CATEGORY_INIT (wpe_src_debug, "wpesrc2", 0, "WPE Source"); + + result = gst_element_register (plugin, "wpevideosrc2", GST_RANK_NONE, + GST_TYPE_WPE_VIDEO_SRC); + return result; +} + +GST_PLUGIN_DEFINE (GST_VERSION_MAJOR, GST_VERSION_MINOR, + wpe2, "WPE src plugin", plugin_init, VERSION, GST_LICENSE, PACKAGE, + GST_PACKAGE_ORIGIN)
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/wpe2/gstwpedisplay.cpp
Added
@@ -0,0 +1,224 @@ +/* Copyright (C) <2025> Philippe Normand <philn@igalia.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef HAVE_CONFIG_H +#include <config.h> +#endif + +#include "gstwpedisplay.h" +#include "gstwpeview.h" +#include "gstwpetoplevel.h" +#include <EGL/egl.h> +#include <gst/gl/gstglfeature.h> +#include <EGL/eglext.h> + +GST_DEBUG_CATEGORY_EXTERN (wpe_view_debug); +#define GST_CAT_DEFAULT wpe_view_debug + +enum +{ + SIGNAL_WPE_VIEW_CREATED, + LAST_SIGNAL +}; +static guint gst_wpe_display_signals[LAST_SIGNAL] = { 0 }; + +struct _WPEDisplayGStreamer +{ + WPEDisplay parent; + + GstGLDisplay *gstDisplay; + GstGLContext *gstContext; + GstGLDisplayEGL *gstEGLDisplay; + + EGLDisplay eglDisplay; + WPEDRMDevice *drmDevice; +}; + +#define wpe_display_gstreamer_parent_class parent_class +G_DEFINE_TYPE (WPEDisplayGStreamer, wpe_display_gstreamer, WPE_TYPE_DISPLAY); + +typedef EGLBoolean (*eglQueryDisplayAttribEXTFunc) (EGLDisplay, EGLint, + EGLAttrib *); +typedef const char *(*eglQueryDeviceStringEXTFunc) (EGLDeviceEXT device, + EGLint name); + +typedef struct _VTable +{ + eglQueryDisplayAttribEXTFunc eglQueryDisplayAttribEXT; + eglQueryDeviceStringEXTFunc eglQueryDeviceStringEXT; +} VTable; + +static gboolean +wpe_display_gstreamer_connect (WPEDisplay 
* display, GError ** error) +{ + auto self = WPE_DISPLAY_GSTREAMER (display); + + if (!self->gstDisplay) + return TRUE; + + if (gst_gl_context_get_gl_platform (self->gstContext) == GST_GL_PLATFORM_EGL) { + self->gstEGLDisplay = gst_gl_display_egl_from_gl_display (self->gstDisplay); + } else { + g_set_error_literal (error, WPE_VIEW_ERROR, WPE_VIEW_ERROR_RENDER_FAILED, + "Available GStreamer GL Context is not EGL - not creating an EGL display from it"); + return FALSE; + } + + const gchar *egl_exts = eglQueryString (EGL_NO_DISPLAY, EGL_EXTENSIONS); + + self->eglDisplay = (EGLDisplay) + gst_gl_display_get_handle (GST_GL_DISPLAY (self->gstEGLDisplay)); + + if (!gst_gl_check_extension ("EGL_EXT_device_query", egl_exts)) { + g_set_error_literal (error, WPE_VIEW_ERROR, WPE_VIEW_ERROR_RENDER_FAILED, + "Failed to initialize rendering: 'EGL_EXT_device_query' not available"); + return FALSE; + } + + EGLDeviceEXT eglDevice; + VTable vt; + vt.eglQueryDisplayAttribEXT = (eglQueryDisplayAttribEXTFunc) + gst_gl_context_get_proc_address (self->gstContext, + "eglQueryDisplayAttribEXT"); + if (!vt.eglQueryDisplayAttribEXT (self->eglDisplay, EGL_DEVICE_EXT, + reinterpret_cast < EGLAttrib * >(&eglDevice))) { + g_set_error_literal (error, WPE_VIEW_ERROR, WPE_VIEW_ERROR_RENDER_FAILED, + "Failed to initialize rendering: 'EGLDeviceEXT' not available"); + return FALSE; + } + + vt.eglQueryDeviceStringEXT = (eglQueryDeviceStringEXTFunc) + gst_gl_context_get_proc_address (self->gstContext, + "eglQueryDeviceStringEXT"); + + const char *drmDevice = NULL; + const char *extensions = + vt.eglQueryDeviceStringEXT (eglDevice, EGL_EXTENSIONS); + if (gst_gl_check_extension ("EGL_EXT_device_drm", extensions)) { + drmDevice = vt.eglQueryDeviceStringEXT (eglDevice, EGL_DRM_DEVICE_FILE_EXT); + } else { + // For some unknown reason this path is triggered when using gtkglsink. 
+ GST_DEBUG_OBJECT (self, + "'EGL_EXT_device_drm' extension missing, using empty DRM device and hoping for the best"); + drmDevice = ""; + } + + const char *drmRenderNode = NULL; + if (gst_gl_check_extension ("EGL_EXT_device_drm_render_node", extensions)) { + drmRenderNode = + vt.eglQueryDeviceStringEXT (eglDevice, EGL_DRM_RENDER_NODE_FILE_EXT); + } else { + GST_DEBUG_OBJECT (self, + "EGL_EXT_device_drm_render_node extension is missing, not setting drm_render_node path"); + } + + self->drmDevice = wpe_drm_device_new (drmDevice, drmRenderNode); + return TRUE; +} + +static WPEView * +wpe_display_gstreamer_create_view (WPEDisplay * display) +{ + auto gst_display = WPE_DISPLAY_GSTREAMER (display); + auto view = wpe_view_gstreamer_new (gst_display); + GValue args[2] = { {0}, {0} }; + + g_value_init (&args[0], WPE_TYPE_DISPLAY_GSTREAMER); + g_value_set_object (&args[0], gst_display); + + g_value_init (&args[1], WPE_TYPE_VIEW); + g_value_set_object (&args[1], view); + + g_signal_emitv (args, gst_wpe_display_signals[SIGNAL_WPE_VIEW_CREATED], 0, + NULL); + + g_value_unset (&args[0]); + g_value_unset (&args[1]); + + auto toplevel = wpe_toplevel_gstreamer_new (gst_display); + wpe_view_set_toplevel (view, toplevel); + g_object_unref (toplevel); + + return view; +} + +static gpointer +wpe_display_gstreamer_get_egl_display (WPEDisplay * display, GError **) +{ + return WPE_DISPLAY_GSTREAMER (display)->eglDisplay; +} + +static WPEDRMDevice * +wpe_display_gstreamer_get_drm_device (WPEDisplay * display) +{ + return WPE_DISPLAY_GSTREAMER (display)->drmDevice; +} + +static void +wpe_display_gstreamer_init (WPEDisplayGStreamer * display) +{ + display->drmDevice = nullptr; +} + +static void +wpe_display_gstreamer_finalize (GObject * object) +{ + auto self = WPE_DISPLAY_GSTREAMER (object); + + if (self->drmDevice) + wpe_drm_device_unref (self->drmDevice); + + gst_clear_object (&self->gstEGLDisplay); + + G_OBJECT_CLASS (parent_class)->finalize (object); +} + +static void +wpe_display_gstreamer_class_init 
(WPEDisplayGStreamerClass * klass) +{ + GObjectClass *gobject_class = G_OBJECT_CLASS (klass); + gobject_class->finalize = wpe_display_gstreamer_finalize; + + WPEDisplayClass *displayClass = WPE_DISPLAY_CLASS (klass); + displayClass->connect = wpe_display_gstreamer_connect; + displayClass->create_view = wpe_display_gstreamer_create_view; + displayClass->get_egl_display = wpe_display_gstreamer_get_egl_display; + displayClass->get_drm_device = wpe_display_gstreamer_get_drm_device; + + gst_wpe_display_signals[SIGNAL_WPE_VIEW_CREATED] = + g_signal_new ("wpe-view-created", G_TYPE_FROM_CLASS (klass), + G_SIGNAL_RUN_LAST, 0, NULL, NULL, NULL, G_TYPE_NONE, 1, WPE_TYPE_VIEW); +} + +WPEDisplay * +wpe_display_gstreamer_new () +{ + auto display = + WPE_DISPLAY_GSTREAMER (g_object_new (WPE_TYPE_DISPLAY_GSTREAMER, + nullptr)); + return WPE_DISPLAY (display); +} + +void +wpe_display_gstreamer_set_gl (WPEDisplay * display, GstGLDisplay * glDisplay, + GstGLContext * context) +{ + auto self = WPE_DISPLAY_GSTREAMER (display); + self->gstDisplay = glDisplay; + self->gstContext = context; +}
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/wpe2/gstwpedisplay.h
Added
@@ -0,0 +1,42 @@ +/* Copyright (C) <2025> Philippe Normand <philn@igalia.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifndef GstWPEDisplay_h +#define GstWPEDisplay_h + +#include <glib-object.h> +#include <wpe/wpe-platform.h> + +#include <gst/gl/egl/gstgldisplay_egl.h> +#include <gst/gl/gl.h> +#include <gst/gl/gstglfuncs.h> +#include "gstwpevideosrc.h" + +G_BEGIN_DECLS + +#define WPE_TYPE_DISPLAY_GSTREAMER (wpe_display_gstreamer_get_type()) +G_DECLARE_FINAL_TYPE(WPEDisplayGStreamer, wpe_display_gstreamer, WPE, + DISPLAY_GSTREAMER, WPEDisplay) + +WPEDisplay *wpe_display_gstreamer_new(); + +void wpe_display_gstreamer_set_gl(WPEDisplay *, GstGLDisplay *, GstGLContext *); + +G_END_DECLS + +#endif /* GstWPEDisplay_h */
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/wpe2/gstwpethreadedview.cpp
Added
@@ -0,0 +1,947 @@ +/* Copyright (C) <2018, 2019, 2020, 2025> Philippe Normand <philn@igalia.com> + * Copyright (C) <2018> Žan Doberšek <zdobersek@igalia.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef HAVE_CONFIG_H +#include <config.h> +#endif + +#include "gstwpe.h" +#include "gstwpethreadedview.h" +#include "gstwpedisplay.h" +#include "gstwpeview.h" + +#include <gst/gl/gl.h> +#include <gst/gl/egl/gsteglimage.h> +#include <gst/gl/egl/gstgldisplay_egl.h> + +#include <cstdio> +#include <mutex> + +GST_DEBUG_CATEGORY_EXTERN (wpe_view_debug); +#define GST_CAT_DEFAULT wpe_view_debug + +/* *INDENT-OFF* */ +class GMutexHolder { +public: + GMutexHolder (GMutex & mutex) + :m(mutex) + { + g_mutex_lock (&m); + } + ~GMutexHolder () + { + g_mutex_unlock (&m); + } + +private: + GMutex &m; +}; +/* *INDENT-ON* */ + +static GstWPEContextThread *s_view = NULL; + +GstWPEContextThread & GstWPEContextThread::singleton () +{ + /* *INDENT-OFF* */ + static gsize initialized = 0; + /* *INDENT-ON* */ + + if (g_once_init_enter (&initialized)) { + s_view = new GstWPEContextThread; + + g_once_init_leave (&initialized, 1); + } + + return *s_view; +} + +GstWPEContextThread::GstWPEContextThread () +{ + g_mutex_init (&threading.mutex); + g_cond_init (&threading.cond); + 
threading.ready = FALSE; + + { + GMutexHolder lock (threading.mutex); + threading.thread = g_thread_new ("GstWPEContextThread", s_viewThread, this); + while (!threading.ready) { + g_cond_wait (&threading.cond, &threading.mutex); + } + GST_DEBUG ("thread spawned"); + } +} + +GstWPEContextThread::~GstWPEContextThread () +{ + if (threading.thread) { + g_thread_unref (threading.thread); + threading.thread = nullptr; + } + + g_mutex_clear (&threading.mutex); + g_cond_clear (&threading.cond); +} + +template < typename Function > void +GstWPEContextThread::dispatch (Function func) +{ + /* *INDENT-OFF* */ + struct Job { + Job (Function & f) + :func (f) + { + g_mutex_init (&mutex); + g_cond_init (&cond); + dispatched = FALSE; + } + ~Job () + { + g_mutex_clear (&mutex); + g_cond_clear (&cond); + } + + void dispatch () + { + GMutexHolder lock (mutex); + func (); + dispatched = TRUE; + g_cond_signal (&cond); + } + + void waitCompletion () + { + GMutexHolder lock (mutex); + while (!dispatched) { + g_cond_wait (&cond, &mutex); + } + } + + Function & func; + GMutex mutex; + GCond cond; + gboolean dispatched; + }; + /* *INDENT-ON* */ + + struct Job job (func); + GSource *source = g_idle_source_new (); + /* *INDENT-OFF* */ + g_source_set_callback (source, [](gpointer data) -> gboolean { + auto job = static_cast<struct Job *>(data); + job->dispatch (); + return G_SOURCE_REMOVE; + }, &job, nullptr); + /* *INDENT-ON* */ + g_source_set_priority (source, G_PRIORITY_DEFAULT); + g_source_attach (source, glib.context); + job.waitCompletion (); + g_source_unref (source); +} + +gpointer +GstWPEContextThread::s_viewThread (gpointer data) +{ + /* *INDENT-OFF* */ + auto &view = *static_cast<GstWPEContextThread *>(data); + /* *INDENT-ON* */ + + view.glib.context = g_main_context_new (); + view.glib.loop = g_main_loop_new (view.glib.context, FALSE); + + g_main_context_push_thread_default (view.glib.context); + + { + GSource *source = g_idle_source_new (); + /* *INDENT-OFF* */ + 
g_source_set_callback(source, [](gpointer data) -> gboolean { + auto& view = *static_cast<GstWPEContextThread*>(data); + GMutexHolder lock (view.threading.mutex); + view.threading.ready = TRUE; + g_cond_signal(&view.threading.cond); + return G_SOURCE_REMOVE; + }, &view, nullptr); + /* *INDENT-ON* */ + g_source_attach (source, view.glib.context); + g_source_unref (source); + } + + g_main_loop_run (view.glib.loop); + + g_main_loop_unref (view.glib.loop); + view.glib.loop = nullptr; + + g_main_context_pop_thread_default (view.glib.context); + g_main_context_unref (view.glib.context); + view.glib.context = nullptr; + return nullptr; +} + +GstWPEThreadedView * +GstWPEContextThread::createWPEView (GstWpeVideoSrc2 * src, + GstGLContext * context, + GstGLDisplay * display, WPEDisplay * wpe_display, int width, int height) +{ + GST_DEBUG ("context %p display %p, size (%d,%d)", context, display, width, + height); + + GstWPEThreadedView *view = nullptr; + /* *INDENT-OFF* */ + dispatch([&]() mutable { + if (!glib.web_context) { + glib.web_context = + WEBKIT_WEB_CONTEXT (g_object_new (WEBKIT_TYPE_WEB_CONTEXT, nullptr)); + } + view = + new GstWPEThreadedView (glib.web_context, src, context, display, wpe_display, + width, height); + }); + /* *INDENT-ON* */ + + if (view && view->hasUri ()) { + GST_DEBUG ("waiting load to finish"); + view->waitLoadCompletion (); + GST_DEBUG ("done"); + } + + return view; +} + +static gboolean +s_loadFailed (WebKitWebView *, WebKitLoadEvent, gchar * failing_uri, + GError * error, gpointer data) +{ + GstWpeVideoSrc2 *src = GST_WPE_VIDEO_SRC (data); + + if (g_error_matches (error, WEBKIT_NETWORK_ERROR, + WEBKIT_NETWORK_ERROR_CANCELLED)) { + GST_INFO_OBJECT (src, "Loading cancelled."); + + return FALSE; + } + + GST_ELEMENT_ERROR (GST_ELEMENT_CAST (src), RESOURCE, FAILED, (NULL), + ("Failed to load %s (%s)", failing_uri, error->message)); + return FALSE; +} + +static gboolean +s_loadFailedWithTLSErrors (WebKitWebView *, gchar * failing_uri, + GTlsCertificate 
*, GTlsCertificateFlags, gpointer data) +{ + // Defer to load-failed. + return FALSE; +} + +static void +s_loadProgressChanged (GObject * object, GParamSpec *, gpointer data) +{ + GstElement *src = GST_ELEMENT_CAST (data); + // The src element is locked already so we can't call + // gst_element_post_message(). Instead retrieve the bus manually and use it + // directly. + GstBus *bus = GST_ELEMENT_BUS (src); + double estimatedProgress; + g_object_get (object, "estimated-load-progress", &estimatedProgress, nullptr); + gst_object_ref (bus); + gst_bus_post (bus, gst_message_new_element (GST_OBJECT_CAST (src), + gst_structure_new ("wpe-stats", "estimated-load-progress", + G_TYPE_DOUBLE, estimatedProgress * 100, nullptr))); + gst_object_unref (bus); +} + +static void +s_webProcessCrashed (WebKitWebView *, WebKitWebProcessTerminationReason reason, + gpointer data) +{ + /* *INDENT-OFF* */ + auto &view = *static_cast<GstWPEThreadedView *>(data); + /* *INDENT-ON* */ + auto *src = view.src (); + gchar *reason_str = + g_enum_to_string (WEBKIT_TYPE_WEB_PROCESS_TERMINATION_REASON, reason); + + // In case the crash happened while doing the initial URL loading, unlock + // the load completion waiting. + view.notifyLoadFinished (); + + // TODO: Emit a signal here and fallback to error system if signal wasn't handled by application? 
+ + GST_ELEMENT_ERROR (GST_ELEMENT_CAST (src), RESOURCE, FAILED, (NULL), ("%s", + reason_str)); + + g_free (reason_str); +} + +/* *INDENT-OFF* */ +GstWPEThreadedView::GstWPEThreadedView( + WebKitWebContext *web_context, GstWpeVideoSrc2 *src, GstGLContext *context, + GstGLDisplay *display, WPEDisplay *wpe_display, int width, int height) + : m_src(src) { + g_mutex_init (&threading.ready_mutex); + g_cond_init (&threading.ready_cond); + threading.ready = FALSE; + + g_mutex_init (&images_mutex); + if (context) + gst.context = GST_GL_CONTEXT (gst_object_ref (context)); + if (display) + gst.display = GST_GL_DISPLAY (gst_object_ref (display)); + + wpe.width = width; + wpe.height = height; + + auto *defaultWebsitePolicies = webkit_website_policies_new_with_policies( + "autoplay", WEBKIT_AUTOPLAY_ALLOW, nullptr); + + webkit.view = WEBKIT_WEB_VIEW(g_object_new( + WEBKIT_TYPE_WEB_VIEW, "web-context", web_context, "display", wpe_display, + "website-policies", defaultWebsitePolicies, nullptr)); + + g_object_unref(defaultWebsitePolicies); + + wpe.view = webkit_web_view_get_wpe_view (webkit.view); + wpe_view_gstreamer_set_client (WPE_VIEW_GSTREAMER (wpe.view), this); + if (auto wpeToplevel = wpe_view_get_toplevel (wpe.view)) + wpe_toplevel_resize (wpeToplevel, width, height); + + // FIXME: unmap when appropriate and implement can_be_mapped if needed. 
+ wpe_view_map (wpe.view); + + g_signal_connect (webkit.view, "load-failed", G_CALLBACK (s_loadFailed), src); + g_signal_connect (webkit.view, "load-failed-with-tls-errors", + G_CALLBACK (s_loadFailedWithTLSErrors), src); + g_signal_connect (webkit.view, "notify::estimated-load-progress", + G_CALLBACK (s_loadProgressChanged), src); + g_signal_connect (webkit.view, "web-process-terminated", + G_CALLBACK (s_webProcessCrashed), this); + + auto *settings = webkit_web_view_get_settings (webkit.view); + webkit_settings_set_enable_webaudio (settings, TRUE); + + gst_wpe_video_src_configure_web_view (src, webkit.view); + + gchar *location; + gboolean drawBackground = TRUE; + g_object_get (src, "location", &location, "draw-background", &drawBackground, nullptr); + setDrawBackground (drawBackground); + if (location) { + loadUriUnlocked (location); + g_free (location); + } +} +/* *INDENT-ON* */ + +GstWPEThreadedView::~GstWPEThreadedView () +{ + GstEGLImage *egl_pending = NULL; + GstEGLImage *egl_committed = NULL; + GstBuffer *shm_pending = NULL; + GstBuffer *shm_committed = NULL; + GST_TRACE ("%p destroying", this); + + g_mutex_clear (&threading.ready_mutex); + g_cond_clear (&threading.ready_cond); + + { + GMutexHolder lock (images_mutex); + + if (egl.pending) { + egl_pending = egl.pending; + egl.pending = nullptr; + } + if (egl.committed) { + egl_committed = egl.committed; + egl.committed = nullptr; + } + if (shm.pending) { + GST_TRACE ("%p freeing shm pending %" GST_PTR_FORMAT, this, shm.pending); + shm_pending = shm.pending; + shm.pending = nullptr; + } + if (shm.committed) { + GST_TRACE ("%p freeing shm commited %" GST_PTR_FORMAT, this, + shm.committed); + shm_committed = shm.committed; + shm.committed = nullptr; + } + } + + if (egl_pending) + gst_egl_image_unref (egl_pending); + if (egl_committed) + gst_egl_image_unref (egl_committed); + if (shm_pending) + gst_buffer_unref (shm_pending); + if (shm_committed) + gst_buffer_unref (shm_committed); + + /* *INDENT-OFF* */ + 
GstWPEContextThread::singleton().dispatch([&]() { + if (webkit.view) { + g_object_unref (webkit.view); + webkit.view = nullptr; + } + }); + /* *INDENT-ON* */ + + if (gst.display_egl) { + gst_object_unref (gst.display_egl); + gst.display_egl = nullptr; + } + + if (gst.display) { + gst_object_unref (gst.display); + gst.display = nullptr; + } + + if (gst.context) { + gst_object_unref (gst.context); + gst.context = nullptr; + } + if (webkit.uri) { + g_free (webkit.uri); + webkit.uri = nullptr; + } + + g_mutex_clear (&images_mutex); + GST_TRACE ("%p destroyed", this); +} + +void +GstWPEThreadedView::notifyLoadFinished () +{ + GMutexHolder lock (threading.ready_mutex); + if (!threading.ready) { + threading.ready = TRUE; + g_cond_signal (&threading.ready_cond); + } +} + +void +GstWPEThreadedView::waitLoadCompletion () +{ + GMutexHolder lock (threading.ready_mutex); + while (!threading.ready) + g_cond_wait (&threading.ready_cond, &threading.ready_mutex); +} + +GstEGLImage * +GstWPEThreadedView::image () +{ + GstEGLImage *ret = nullptr; + bool dispatchFrameComplete = false; + GstEGLImage *prev_image = NULL; + + { + GMutexHolder lock (images_mutex); + + GST_TRACE ("pending %" GST_PTR_FORMAT " (%d) committed %" GST_PTR_FORMAT + " (%d)", egl.pending, + GST_IS_EGL_IMAGE (egl.pending) ? + GST_MINI_OBJECT_REFCOUNT_VALUE (GST_MINI_OBJECT_CAST (egl.pending)) : 0, + egl.committed, + GST_IS_EGL_IMAGE (egl.committed) ? 
+ GST_MINI_OBJECT_REFCOUNT_VALUE (GST_MINI_OBJECT_CAST (egl.committed)) : + 0); + + if (egl.pending) { + prev_image = egl.committed; + egl.committed = egl.pending; + egl.pending = nullptr; + + dispatchFrameComplete = true; + } + + if (egl.committed) + ret = egl.committed; + } + + if (prev_image) { + gst_egl_image_unref (prev_image); + } + + if (dispatchFrameComplete) { + frameComplete (); + } + + return ret; +} + +GstBuffer * +GstWPEThreadedView::buffer () +{ + GstBuffer *ret = nullptr; + bool dispatchFrameComplete = false; + GstBuffer *prev_image = NULL; + + { + GMutexHolder lock (images_mutex); + + GST_TRACE ("pending %" GST_PTR_FORMAT " (%d) committed %" GST_PTR_FORMAT + " (%d)", shm.pending, + GST_IS_BUFFER (shm.pending) ? + GST_MINI_OBJECT_REFCOUNT_VALUE (GST_MINI_OBJECT_CAST (shm.pending)) : 0, + shm.committed, + GST_IS_BUFFER (shm.committed) ? + GST_MINI_OBJECT_REFCOUNT_VALUE (GST_MINI_OBJECT_CAST (shm.committed)) : + 0); + + if (shm.pending) { + prev_image = shm.committed; + shm.committed = shm.pending; + shm.pending = nullptr; + + dispatchFrameComplete = true; + } + + if (shm.committed) + ret = shm.committed; + } + + if (prev_image) + gst_buffer_unref (prev_image); + + if (dispatchFrameComplete) { + frameComplete (); + } + + return ret; +} + +void +GstWPEThreadedView::resize (int width, int height) +{ + GST_DEBUG ("resize to %dx%d", width, height); + wpe.width = width; + wpe.height = height; + if (auto wpeToplevel = wpe_view_get_toplevel (wpe.view)) + wpe_toplevel_resize (wpeToplevel, wpe.width, wpe.height); +} + +void +GstWPEThreadedView::clearBuffers () +{ + bool dispatchFrameComplete = false; + { + GMutexHolder lock (images_mutex); + + if (shm.pending) { + auto meta = gst_buffer_get_video_meta (shm.pending); + if (static_cast < int >(meta->width) != wpe.width || + static_cast < int >(meta->height) != wpe.height) { + gst_clear_buffer (&shm.pending); + dispatchFrameComplete = true; + } + } + + if (shm.committed) { + auto meta = gst_buffer_get_video_meta 
(shm.committed);
+      if (static_cast < int >(meta->width) != wpe.width ||
+          static_cast < int >(meta->height) != wpe.height) {
+        gst_clear_buffer (&shm.committed);
+        dispatchFrameComplete = true;
+      }
+    }
+  }
+
+  if (dispatchFrameComplete) {
+    frameComplete ();
+    // Wait until the next SHM buffer has been received.
+    threading.ready = false;
+    waitLoadCompletion ();
+  }
+}
+
+void
+GstWPEThreadedView::loadUriUnlocked (const gchar * uri)
+{
+  if (webkit.uri)
+    g_free (webkit.uri);
+
+  GST_DEBUG ("loading %s", uri);
+  webkit.uri = g_strdup (uri);
+  webkit_web_view_load_uri (webkit.view, webkit.uri);
+}
+
+void
+GstWPEThreadedView::loadUri (const gchar * uri)
+{
+  s_view->dispatch ([&]() {
+    loadUriUnlocked (uri);});
+}
+
+static void
+s_runJavascriptFinished (GObject * object, GAsyncResult * result,
+    gpointer user_data)
+{
+  GError *error = NULL;
+  g_autoptr (JSCValue) js_result =
+      webkit_web_view_evaluate_javascript_finish (WEBKIT_WEB_VIEW (object),
+      result, &error);
+
+  // TODO: Pass result back to signal call site using a GstPromise?
+  (void) js_result;
+
+  if (error) {
+    GST_WARNING ("Error running javascript: %s", error->message);
+    g_error_free (error);
+  }
+}
+
+void
+GstWPEThreadedView::runJavascript (const char *script)
+{
+  /* *INDENT-OFF* */
+  s_view->dispatch([&]() {
+    webkit_web_view_evaluate_javascript(webkit.view, script, -1, nullptr,
+                                        nullptr, nullptr,
+                                        s_runJavascriptFinished, nullptr);
+  });
+  /* *INDENT-ON* */
+}
+
+void
+GstWPEThreadedView::loadData (GBytes * bytes)
+{
+  /* *INDENT-OFF* */
+  s_view->dispatch([this, bytes = g_bytes_ref(bytes)]() {
+    webkit_web_view_load_bytes(webkit.view, bytes, nullptr, nullptr, nullptr);
+    g_bytes_unref(bytes);
+  });
+  /* *INDENT-ON* */
+}
+
+void
+GstWPEThreadedView::setDrawBackground (gboolean drawsBackground)
+{
+  GST_DEBUG ("%s background rendering",
+      drawsBackground ? "Enabling" : "Disabling");
+  WebKitColor color;
+  webkit_color_parse (&color, drawsBackground ?
"white" : "transparent");
+  webkit_web_view_set_background_color (webkit.view, &color);
+}
+
+struct WPEBufferContext
+{
+  GstWPEThreadedView *view;
+  WPEBuffer *buffer;
+};
+
+void
+GstWPEThreadedView::s_releaseBuffer (gpointer data)
+{
+  /* *INDENT-OFF* */
+  s_view->dispatch([&]() {
+    WPEBufferContext *context = static_cast<WPEBufferContext *>(data);
+    if (WPE_IS_VIEW(context->view->wpe.view)) {
+      wpe_view_buffer_released(WPE_VIEW(context->view->wpe.view),
+                               context->buffer);
+    }
+    g_object_unref(context->buffer);
+    g_free(context);
+  });
+/* *INDENT-ON* */
+}
+
+/* *INDENT-OFF* */
+gboolean GstWPEThreadedView::setPendingBuffer(WPEBuffer *buffer, GError **error)
+{
+  WPEBufferContext *bufferContext = g_new (WPEBufferContext, 1);
+  bufferContext->view = this;
+  bufferContext->buffer = g_object_ref (buffer);
+
+  if (WPE_IS_BUFFER_DMA_BUF (buffer)) {
+    auto eglImage = wpe_buffer_import_to_egl_image (buffer, error);
+    if (*error)
+      return FALSE;
+
+    auto *gstImage =
+        gst_egl_image_new_wrapped (gst.context, eglImage, GST_GL_RGBA,
+        bufferContext, [](GstEGLImage *, gpointer data) { s_releaseBuffer (data); });
+    {
+      GMutexHolder lock (images_mutex);
+
+      GST_TRACE ("EGLImage %p wrapped in GstEGLImage %" GST_PTR_FORMAT,
+          eglImage, gstImage);
+      gst_clear_mini_object ((GstMiniObject **) & egl.pending);
+      egl.pending = gstImage;
+
+      m_pending_buffer = g_object_ref (buffer);
+      notifyLoadFinished ();
+    }
+    return TRUE;
+  }
+
+  if (!WPE_IS_BUFFER_SHM (buffer)) {
+    g_set_error_literal (error, WPE_VIEW_ERROR, WPE_VIEW_ERROR_RENDER_FAILED,
+        "Unsupported WPEBuffer format");
+    return FALSE;
+  }
+
+  GBytes *bytes = wpe_buffer_import_to_pixels (buffer, error);
+  if (!bytes) {
+    return FALSE;
+  }
+
+  auto width = wpe_buffer_get_width (buffer);
+  auto height = wpe_buffer_get_height (buffer);
+
+  guint stride;
+  g_object_get (buffer, "stride", &stride, nullptr);
+
+  gsize size = g_bytes_get_size (bytes);
+  auto *gstBuffer = gst_buffer_new_wrapped_full (GST_MEMORY_FLAG_READONLY,
+
(gpointer) g_bytes_get_data (bytes, nullptr), size, 0, size,
+      bufferContext, s_releaseBuffer);
+  gsize offsets[1];
+  gint strides[1];
+  offsets[0] = 0;
+  strides[0] = stride;
+  gst_buffer_add_video_meta_full (gstBuffer, GST_VIDEO_FRAME_FLAG_NONE,
+      GST_VIDEO_FORMAT_BGRA, width, height, 1, offsets, strides);
+
+  {
+    GMutexHolder lock (images_mutex);
+    GST_TRACE ("SHM buffer %p wrapped in buffer %" GST_PTR_FORMAT, buffer,
+        gstBuffer);
+    gst_clear_buffer (&shm.pending);
+    shm.pending = gstBuffer;
+    m_pending_buffer = g_object_ref (buffer);
+    notifyLoadFinished ();
+  }
+  return TRUE;
+}
+/* *INDENT-ON* */
+
+static uint32_t
+_pointer_modifiers_from_gst_event (GstEvent * ev)
+{
+  GstNavigationModifierType modifier_state;
+  uint32_t modifiers = 0;
+
+  if (gst_navigation_event_parse_modifier_state (ev, &modifier_state)) {
+    if (modifier_state & GST_NAVIGATION_MODIFIER_BUTTON1_MASK)
+      modifiers |= WPE_MODIFIER_POINTER_BUTTON1;
+    if (modifier_state & GST_NAVIGATION_MODIFIER_BUTTON2_MASK)
+      modifiers |= WPE_MODIFIER_POINTER_BUTTON2;
+    if (modifier_state & GST_NAVIGATION_MODIFIER_BUTTON3_MASK)
+      modifiers |= WPE_MODIFIER_POINTER_BUTTON3;
+    if (modifier_state & GST_NAVIGATION_MODIFIER_BUTTON4_MASK)
+      modifiers |= WPE_MODIFIER_POINTER_BUTTON4;
+    if (modifier_state & GST_NAVIGATION_MODIFIER_BUTTON5_MASK)
+      modifiers |= WPE_MODIFIER_POINTER_BUTTON5;
+  }
+
+  return modifiers;
+}
+
+static uint32_t
+_keyboard_modifiers_from_gst_event (GstEvent * ev)
+{
+  GstNavigationModifierType modifier_state;
+  uint32_t modifiers = 0;
+
+  if (gst_navigation_event_parse_modifier_state (ev, &modifier_state)) {
+    if (modifier_state & GST_NAVIGATION_MODIFIER_CONTROL_MASK)
+      modifiers |= WPE_MODIFIER_KEYBOARD_CONTROL;
+    if (modifier_state & GST_NAVIGATION_MODIFIER_SHIFT_MASK)
+      modifiers |= WPE_MODIFIER_KEYBOARD_SHIFT;
+    if (modifier_state & GST_NAVIGATION_MODIFIER_MOD1_MASK)
+      modifiers |= WPE_MODIFIER_KEYBOARD_ALT;
+    if (modifier_state & GST_NAVIGATION_MODIFIER_META_MASK)
+      modifiers |=
WPE_MODIFIER_KEYBOARD_META;
+  }
+
+  return modifiers;
+}
+
+static WPEModifiers
+modifiers_from_gst_event (GstEvent * event)
+{
+  /* *INDENT-OFF* */
+  return static_cast<WPEModifiers>
+      (_pointer_modifiers_from_gst_event (event) |
+       _keyboard_modifiers_from_gst_event (event));
+  /* *INDENT-ON* */
+}
+
+void
+GstWPEThreadedView::frameComplete ()
+{
+  GST_TRACE ("frame complete");
+  /* *INDENT-OFF* */
+  s_view->dispatch([&]() {
+    if (m_committed_buffer) {
+      wpe_view_buffer_released(WPE_VIEW(wpe.view), m_committed_buffer);
+      g_object_unref(m_committed_buffer);
+    }
+    m_committed_buffer = m_pending_buffer;
+    wpe_view_buffer_rendered (WPE_VIEW (wpe.view), m_committed_buffer);
+  });
+  /* *INDENT-ON* */
+}
+
+void
+GstWPEThreadedView::dispatchEvent (WPEEvent * wpe_event)
+{
+  /* *INDENT-OFF* */
+  s_view->dispatch([&]() {
+    wpe_view_event(WPE_VIEW(wpe.view), wpe_event);
+    wpe_event_unref(wpe_event);
+  });
+  /* *INDENT-ON* */
+}
+
+/* *INDENT-OFF* */
+gboolean GstWPEThreadedView::dispatchKeyboardEvent(GstEvent *event) {
+  const gchar *key;
+  if (!gst_navigation_event_parse_key_event (event, &key)) {
+    return FALSE;
+  }
+
+  auto modifiers = static_cast<WPEModifiers>(_keyboard_modifiers_from_gst_event (event));
+  auto timestamp = GST_TIME_AS_MSECONDS (GST_EVENT_TIMESTAMP (event));
+
+  /* FIXME: This is wrong...
The GstNavigation API should pass + hardware-level information, not high-level keysym strings */ + gunichar *unichar; + glong items_written; + uint32_t keysym; + + unichar = g_utf8_to_ucs4_fast (key, -1, &items_written); + if (items_written == 1) + keysym = (uint32_t) xkb_utf32_to_keysym (*unichar); + else + keysym = (uint32_t) xkb_keysym_from_name (key, XKB_KEYSYM_NO_FLAGS); + + WPEEventType event_type = WPE_EVENT_NONE; + if (gst_navigation_event_get_type (event) == GST_NAVIGATION_EVENT_KEY_PRESS) + event_type = WPE_EVENT_KEYBOARD_KEY_DOWN; + else + event_type = WPE_EVENT_KEYBOARD_KEY_UP; + + dispatchEvent (wpe_event_keyboard_new (event_type, WPE_VIEW (wpe.view), + WPE_INPUT_SOURCE_KEYBOARD, timestamp, modifiers, keysym, keysym)); + return TRUE; +} + +gboolean GstWPEThreadedView::dispatchPointerEvent (GstEvent * event) +{ + gdouble x, y; + gint button; + if (!gst_navigation_event_parse_mouse_button_event (event, &button, &x, &y)) { + return FALSE; + } + + GstNavigationModifierType modifier_state; + guint wpe_button = 0; + if (gst_navigation_event_parse_modifier_state (event, &modifier_state)) { + if (modifier_state & GST_NAVIGATION_MODIFIER_BUTTON1_MASK) + wpe_button = WPE_BUTTON_PRIMARY; + else if (modifier_state & GST_NAVIGATION_MODIFIER_BUTTON2_MASK) + wpe_button = WPE_BUTTON_MIDDLE; + else if (modifier_state & GST_NAVIGATION_MODIFIER_BUTTON3_MASK) + wpe_button = WPE_BUTTON_SECONDARY; + } + + auto timestamp = GST_TIME_AS_MSECONDS (GST_EVENT_TIMESTAMP (event)); + guint press_count = 0; + WPEEventType type; + if (gst_navigation_event_get_type (event) == + GST_NAVIGATION_EVENT_MOUSE_BUTTON_PRESS) { + press_count = wpe_view_compute_press_count (WPE_VIEW (wpe.view), x, y, + wpe_button, timestamp); + type = WPE_EVENT_POINTER_DOWN; + } else { + type = WPE_EVENT_POINTER_UP; + } + dispatchEvent (wpe_event_pointer_button_new (type, WPE_VIEW (wpe.view), + WPE_INPUT_SOURCE_MOUSE, timestamp, modifiers_from_gst_event (event), + wpe_button, x, y, press_count)); + return TRUE; 
+} + +gboolean GstWPEThreadedView::dispatchPointerMoveEvent (GstEvent * event) +{ + gdouble x, y; + if (!gst_navigation_event_parse_mouse_move_event (event, &x, &y)) { + return FALSE; + } + + gdouble delta_x = 0; + gdouble delta_y = 0; + if (m_last_pointer_position) { + delta_x = x - m_last_pointer_position->first; + delta_y = y - m_last_pointer_position->second; + } + m_last_pointer_position = { x, y }; + + auto timestamp = GST_TIME_AS_MSECONDS (GST_EVENT_TIMESTAMP (event)); + dispatchEvent (wpe_event_pointer_move_new (WPE_EVENT_POINTER_MOVE, + WPE_VIEW (wpe.view), WPE_INPUT_SOURCE_MOUSE, timestamp, + modifiers_from_gst_event (event), x, y, delta_x, delta_y)); + return TRUE; +} + +gboolean GstWPEThreadedView::dispatchAxisEvent (GstEvent * event) +{ + gdouble x, y, delta_x, delta_y; + if (!gst_navigation_event_parse_mouse_scroll_event (event, &x, &y, &delta_x, + &delta_y)) { + return FALSE; + } + + auto timestamp = GST_TIME_AS_MSECONDS (GST_EVENT_TIMESTAMP (event)); + dispatchEvent (wpe_event_scroll_new (WPE_VIEW (wpe.view), + WPE_INPUT_SOURCE_MOUSE, timestamp, modifiers_from_gst_event (event), + delta_x, delta_y, TRUE, FALSE, x, y)); + + return TRUE; +} + +gboolean GstWPEThreadedView::dispatchTouchEvent (GstEvent * event) +{ + guint touch_id; + gdouble x, y; + if (!gst_navigation_event_parse_touch_event (event, &touch_id, &x, &y, NULL)) { + return FALSE; + } + + WPEEventType event_type = WPE_EVENT_NONE; + switch (gst_navigation_event_get_type (event)) { + case GST_NAVIGATION_EVENT_TOUCH_DOWN: + event_type = WPE_EVENT_TOUCH_DOWN; + break; + case GST_NAVIGATION_EVENT_TOUCH_MOTION: + event_type = WPE_EVENT_TOUCH_MOVE; + break; + case GST_NAVIGATION_EVENT_TOUCH_UP: + event_type = WPE_EVENT_TOUCH_UP; + break; + default: + break; + } + + auto timestamp = GST_TIME_AS_MSECONDS (GST_EVENT_TIMESTAMP (event)); + auto modifiers = static_cast<WPEModifiers>(_keyboard_modifiers_from_gst_event (event)); + dispatchEvent (wpe_event_touch_new (event_type, WPE_VIEW (wpe.view), + 
WPE_INPUT_SOURCE_TOUCHPAD, timestamp, modifiers, touch_id, x, y)); + return TRUE; +} +/* *INDENT-ON* */
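The `image()` and `buffer()` accessors above share one pattern: the WPE thread parks the newest frame in a `pending` slot, and the streaming thread promotes it to `committed` under `images_mutex`, releasing the previously committed frame and acknowledging with `frameComplete()`. A minimal sketch of that handoff, using plain C++ and a hypothetical `Frame` type standing in for `GstEGLImage`/`GstBuffer` (this is an illustration of the pattern, not the element's actual API):

```cpp
#include <cassert>
#include <mutex>
#include <optional>

// Hypothetical frame type standing in for GstEGLImage / GstBuffer.
struct Frame { int id; };

class FrameSlot {
public:
    // Producer side: stash the newest frame, overwriting any unconsumed one.
    void submit(Frame f) {
        std::lock_guard<std::mutex> lock(m_);
        pending_ = f;
    }
    // Consumer side: promote pending -> committed; returns true when a new
    // frame was promoted (the caller would then dispatch frame-complete).
    bool acquire(Frame &out) {
        bool promoted = false;
        std::lock_guard<std::mutex> lock(m_);
        if (pending_) {
            committed_ = *pending_;  // real code releases the old committed
            pending_.reset();        // frame outside the lock
            promoted = true;
        }
        if (committed_) out = *committed_;
        return promoted;
    }
private:
    std::mutex m_;
    std::optional<Frame> pending_;
    std::optional<Frame> committed_;
};
```

Repeated `acquire()` calls with no new `submit()` keep returning the committed frame without re-acknowledging, which matches how `image()`/`buffer()` only trigger `frameComplete()` when a pending frame was actually swapped in.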
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/wpe2/gstwpethreadedview.h
Added
@@ -0,0 +1,154 @@ +/* Copyright (C) <2018, 2025> Philippe Normand <philn@igalia.com> + * Copyright (C) <2018> Žan Doberšek <zdobersek@igalia.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <EGL/egl.h> +#include <glib.h> +#include <gst/gl/gstglfuncs.h> +#include <gst/gl/egl/gstgldisplay_egl.h> +#include <wpe/webkit.h> +#include "gstwpevideosrc.h" +#include <optional> +#include <utility> + +typedef struct _GstGLContext GstGLContext; +typedef struct _GstGLDisplay GstGLDisplay; +typedef struct _GstEGLImage GstEGLImage; + +class GstWPEThreadedView { +public: + GstWPEThreadedView(WebKitWebContext *, GstWpeVideoSrc2 *, GstGLContext *, + GstGLDisplay *, WPEDisplay *, int width, int height); + ~GstWPEThreadedView(); + + /* Used by gstwpeview */ + gboolean setPendingBuffer(WPEBuffer*, GError**); + + /* Used by wpevideosrc */ + void resize(int width, int height); + void loadUri(const gchar*); + void loadData(GBytes*); + void runJavascript(const gchar*); + void setDrawBackground(gboolean); + void clearBuffers(); + + GstEGLImage* image(); + GstBuffer* buffer(); + + gboolean dispatchKeyboardEvent(GstEvent*); + gboolean dispatchPointerEvent(GstEvent*); + gboolean dispatchPointerMoveEvent(GstEvent*); + gboolean dispatchAxisEvent(GstEvent*); + 
gboolean dispatchTouchEvent(GstEvent*); + + /* Used by GstWPEContextThread */ + bool hasUri() const { return webkit.uri; } + void disconnectLoadFailedSignal(); + void waitLoadCompletion(); + + GstWpeVideoSrc2 *src() const { return m_src; } + + void notifyLoadFinished(); + +private: + void frameComplete(); + + void dispatchEvent(WPEEvent*); + void loadUriUnlocked(const gchar*); + + static void s_releaseBuffer(gpointer); + + struct { + GstGLContext* context; + GstGLDisplay* display; + GstGLDisplayEGL* display_egl; + } gst { nullptr, nullptr, nullptr }; + + struct { + WPEView *view; + int width; + int height; + } wpe { nullptr, 0, 0 }; + + struct { + gchar* uri; + WebKitWebView* view; + } webkit = { nullptr, nullptr }; + + struct { + GMutex ready_mutex; + GCond ready_cond; + gboolean ready; + } threading; + + // This mutex guards access to either egl or shm resources declared below, + // depending on the runtime behavior. + GMutex images_mutex; + + struct { + GstEGLImage* pending; + GstEGLImage* committed; + } egl { nullptr, nullptr }; + + struct { + GstBuffer* pending; + GstBuffer* committed; + } shm { nullptr, nullptr }; + + struct { + gulong init_ext_sigid; + gulong extension_msg_sigid; + } audio {0, 0}; + + GstWpeVideoSrc2 *m_src { nullptr }; + + WPEBuffer *m_pending_buffer { nullptr }; + WPEBuffer *m_committed_buffer { nullptr }; + + std::optional<std::pair<gdouble, gdouble>> m_last_pointer_position; +}; + +class GstWPEContextThread { +public: + static GstWPEContextThread& singleton(); + + GstWPEContextThread(); + ~GstWPEContextThread(); + + GstWPEThreadedView* createWPEView(GstWpeVideoSrc2*, GstGLContext*, GstGLDisplay*, WPEDisplay*, int width, int height); + + template<typename Function> + void dispatch(Function); + +private: + static gpointer s_viewThread(gpointer); + struct { + GMutex mutex; + GCond cond; + gboolean ready; + GThread* thread { nullptr }; + } threading; + + struct { + GMainContext* context; + GMainLoop* loop; + WebKitWebContext* web_context; + 
} glib { nullptr, nullptr, nullptr }; +};
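The `threading` struct above (mutex, cond, ready flag) backs the `notifyLoadFinished()`/`waitLoadCompletion()` handshake seen in the implementation file: the view thread flips a flag once under the mutex and signals, while callers block until they observe the flag. The same handshake can be sketched with standard C++ primitives instead of `GMutex`/`GCond` (a simplified analogue, not the plugin's code):

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>

// Minimal analogue of threading.ready_mutex / ready_cond: the worker flips a
// flag once and signals; waiters block until the flag is seen under the mutex.
class ReadyLatch {
public:
    void notify() {                    // cf. notifyLoadFinished()
        {
            std::lock_guard<std::mutex> lock(m_);
            ready_ = true;
        }
        cv_.notify_all();
    }
    void wait() {                      // cf. waitLoadCompletion()
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return ready_; });
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    bool ready_ = false;
};

int wait_for_worker() {
    ReadyLatch latch;
    int value = 0;
    std::thread worker([&] {
        value = 42;                    // simulate "first buffer is ready"
        latch.notify();
    });
    latch.wait();                      // blocks until the worker signals
    worker.join();
    return value;
}
```

Checking the predicate inside the wait (rather than waiting unconditionally) is what makes the handshake safe against spurious wakeups and against the signal firing before the waiter arrives, which is also why the GLib version loops on `threading.ready`.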
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/wpe2/gstwpetoplevel.cpp
Added
@@ -0,0 +1,65 @@
+/* Copyright (C) <2025> Philippe Normand <philn@igalia.com>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Library General Public
+ * License as published by the Free Software Foundation; either
+ * version 2 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Library General Public License for more details.
+ *
+ * You should have received a copy of the GNU Library General Public
+ * License along with this library; if not, write to the
+ * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
+ * Boston, MA 02110-1301, USA.
+ */
+
+#ifdef HAVE_CONFIG_H
+#include <config.h>
+#endif
+
+#include "gstwpetoplevel.h"
+
+struct _WPEToplevelGStreamer
+{
+  WPEToplevel parent;
+};
+
+#define wpe_toplevel_gstreamer_parent_class parent_class
+G_DEFINE_TYPE (WPEToplevelGStreamer, wpe_toplevel_gstreamer, WPE_TYPE_TOPLEVEL);
+
+static gboolean
+wpe_toplevel_gstreamer_resize (WPEToplevel * toplevel, int width, int height)
+{
+  wpe_toplevel_resized (toplevel, width, height);
+  /* *INDENT-OFF* */
+  wpe_toplevel_foreach_view(toplevel, [](WPEToplevel *toplevel, WPEView *view, gpointer) -> gboolean {
+    int width, height;
+    wpe_toplevel_get_size (toplevel, &width, &height);
+    wpe_view_resized (view, width, height);
+    return FALSE;
+  }, nullptr);
+  /* *INDENT-ON* */
+  return TRUE;
+}
+
+static void
+wpe_toplevel_gstreamer_init (WPEToplevelGStreamer * toplevel)
+{
+}
+
+static void
+wpe_toplevel_gstreamer_class_init (WPEToplevelGStreamerClass * klass)
+{
+  WPEToplevelClass *toplevelClass = WPE_TOPLEVEL_CLASS (klass);
+  toplevelClass->resize = wpe_toplevel_gstreamer_resize;
+}
+
+WPEToplevel *
+wpe_toplevel_gstreamer_new (WPEDisplayGStreamer * display)
+{
+  return WPE_TOPLEVEL (g_object_new
(WPE_TYPE_TOPLEVEL_GSTREAMER, "display", + display, nullptr)); +}
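The `resize` vfunc above records the new size on the toplevel and then walks every attached view, pushing that size down to each one. A toy model of that propagation in plain C++ (hypothetical `View`/`Toplevel` types, mirroring the structure of `wpe_toplevel_resized` plus `wpe_toplevel_foreach_view`, not the WPE Platform API itself):

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Toy stand-in for WPEView: just holds its current size.
struct View { int width = 0; int height = 0; };

class Toplevel {
public:
    void attach(View *v) { views_.push_back(v); }
    // Record the new size, then forward it to every attached view,
    // like wpe_toplevel_gstreamer_resize() does via foreach_view.
    void resize(int w, int h) {
        width_ = w;
        height_ = h;
        for_each_view([](View &v, int w, int h) {
            v.width = w;
            v.height = h;
        });
    }
private:
    void for_each_view(const std::function<void(View &, int, int)> &fn) {
        for (View *v : views_)
            fn(*v, width_, height_);
    }
    std::vector<View *> views_;
    int width_ = 0, height_ = 0;
};
```

Storing the size on the toplevel first, and having the per-view callback re-read it, keeps a single source of truth even when several views share one toplevel.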
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/wpe2/gstwpetoplevel.h
Added
@@ -0,0 +1,35 @@ +/* Copyright (C) <2025> Philippe Normand <philn@igalia.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifndef GstWPETopLevel_h +#define GstWPETopLevel_h + +#include <glib-object.h> +#include "gstwpedisplay.h" + +G_BEGIN_DECLS + +#define WPE_TYPE_TOPLEVEL_GSTREAMER (wpe_toplevel_gstreamer_get_type()) +G_DECLARE_FINAL_TYPE(WPEToplevelGStreamer, wpe_toplevel_gstreamer, WPE, + TOPLEVEL_GSTREAMER, WPEToplevel) + +WPEToplevel *wpe_toplevel_gstreamer_new(WPEDisplayGStreamer *); + +G_END_DECLS + +#endif /* GstWPETopLevel_h */
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/wpe2/gstwpevideosrc.cpp
Added
@@ -0,0 +1,829 @@ +/* Copyright (C) <2018, 2025> Philippe Normand <philn@igalia.com> + * Copyright (C) <2018> Žan Doberšek <zdobersek@igalia.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +/** + * SECTION:element-wpevideosrc2 + * @title: wpevideosrc2 + * + * The wpevideosrc2 element is used to produce a video texture representing a web page + * rendered off-screen by WPE. + * + * Software rendering support is also available. This features allows + * wpevideosrc2 to be used on machines without GPU, and/or for testing purpose. + * To enable software rendering support, set the `LIBGL_ALWAYS_SOFTWARE=true` + * environment variable and make sure `video/x-raw, format=BGRA` caps are + * negotiated by the wpevideosrc2 element. + * + * As the webview loading is usually not instantaneous, the wpevideosrc2 element emits + * messages indicating the load progress, in percent. The value is an estimate + * based on the total number of bytes expected to be received for a document, + * including all its possible subresources and child documents. The application + * can handle these `element` messages synchronously for instance, in order to + * display a progress bar or other visual load indicator. 
The load percent value + * is stored in the message structure as a double value named + * `estimated-load-progress` and the structure name is `wpe-stats`. + * + * ## Example launch lines + * + * ```shell + * gst-launch-1.0 -v wpevideosrc2 location="https://gstreamer.freedesktop.org" ! queue ! glimagesink + * ``` + * Shows the GStreamer website homepage + * + * ```shell + * LIBGL_ALWAYS_SOFTWARE=true gst-launch-1.0 -v wpevideosrc2 num-buffers=50 location="https://gstreamer.freedesktop.org" \ + * videoconvert ! pngenc ! multifilesink location=/tmp/snapshot-%05d.png + * ``` + * Saves the first 50 video frames generated for the GStreamer website as PNG files in /tmp. + * + * ```shell + * gst-play-1.0 --videosink gtkglsink web+https://gstreamer.freedesktop.org + * ``` + * Shows the GStreamer website homepage as played with GstPlayer in a GTK+ window. + * + * ```shell + * gst-launch-1.0 glvideomixer name=m sink_1::zorder=0 ! glimagesink wpevideosrc2 location="file:///tmp/asset.html" draw-background=0 \ + * ! m. videotestsrc ! queue ! glupload ! glcolorconvert ! m. + * ``` + * Composite WPE with a video stream in a single OpenGL scene. + * + * ```shell + * gst-launch-1.0 glvideomixer name=m sink_1::zorder=0 sink_0::height=818 sink_0::width=1920 ! gtkglsink \ + * wpevideosrc2 location="file:///tmp/asset.html" draw-background=0 ! m. + * uridecodebin uri="http://example.com/Sintel.2010.1080p.mkv" name=d d. ! queue ! glupload ! glcolorconvert ! m. + * ``` + * Composite WPE with a video stream, sink_0 pad properties have to match the video dimensions. + * + * ```shell + * weston -S $HOME/weston-sock -B headless-backend.so --use-gl & + * WAYLAND_DISPLAY=$HOME/weston-sock gst-launch-1.0 wpevideosrc2 location=https://google.com ! queue ! fakevideosink + * ``` + * Render Google.com with WPE in a headless Weston compositor. This can be useful for server-side WPE video processing. 
+ *
+ * Since: 1.28
+ */
+
+/*
+ * TODO:
+ * - Better navigation events handling (would require a new GstNavigation API)
+ */
+
+#ifdef HAVE_CONFIG_H
+#include <config.h>
+#endif
+
+#include "gstwpe.h"
+#include "gstwpevideosrc.h"
+#include <gst/gl/gl.h>
+#include <gst/gl/egl/gstglmemoryegl.h>
+#include <gst/gl/wayland/gstgldisplay_wayland.h>
+#include <gst/video/video.h>
+#include <xkbcommon/xkbcommon.h>
+
+#include "gstwpethreadedview.h"
+#include "gstwpedisplay.h"
+
+#define DEFAULT_WIDTH 1920
+#define DEFAULT_HEIGHT 1080
+#define DEFAULT_FPS_N 30
+#define DEFAULT_FPS_D 1
+#define DEFAULT_DRAW_BACKGROUND TRUE
+
+enum
+{
+  PROP_0,
+  PROP_LOCATION,
+  PROP_DRAW_BACKGROUND
+};
+
+enum
+{
+  SIGNAL_WPE_VIEW_CREATED,
+  SIGNAL_CONFIGURE_WEB_VIEW,
+  SIGNAL_LOAD_BYTES,
+  SIGNAL_RUN_JAVASCRIPT,
+  LAST_SIGNAL
+};
+static guint gst_wpe_video_src_signals[LAST_SIGNAL] = { 0 };
+
+struct _GstWpeVideoSrc2
+{
+  GstGLBaseSrc parent;
+
+  /* properties */
+  gchar *location;
+  gboolean draw_background;
+
+  GBytes *bytes;
+  gboolean gl_enabled;
+
+  gint64 n_frames; /* total frames sent */
+
+  WPEDisplay *display;
+  GstWPEThreadedView *view;
+
+  GMutex lock;
+};
+
+#define WPE_LOCK(o) g_mutex_lock(&(o)->lock)
+#define WPE_UNLOCK(o) g_mutex_unlock(&(o)->lock)
+
+GST_DEBUG_CATEGORY_EXTERN (wpe_video_src_debug);
+#define GST_CAT_DEFAULT wpe_video_src_debug
+
+#define gst_wpe_video_src_parent_class parent_class
+G_DEFINE_TYPE (GstWpeVideoSrc2, gst_wpe_video_src, GST_TYPE_GL_BASE_SRC);
+
+#define WPE_RAW_CAPS "video/x-raw, " \
+    "format = (string) BGRA, " \
+    "width = " GST_VIDEO_SIZE_RANGE ", " \
+    "height = " GST_VIDEO_SIZE_RANGE ", " \
+    "framerate = " GST_VIDEO_FPS_RANGE ", " \
+    "pixel-aspect-ratio = (fraction)1/1"
+
+#define WPE_GL_CAPS "video/x-raw(memory:GLMemory), " \
+    "format = (string) RGBA, " \
+    "width = " GST_VIDEO_SIZE_RANGE ", " \
+    "height = " GST_VIDEO_SIZE_RANGE ", " \
+    "framerate = " GST_VIDEO_FPS_RANGE ", " \
+    "pixel-aspect-ratio = (fraction)1/1, texture-target =
(string)2D" + +#define WPE_VIDEO_SRC_CAPS WPE_GL_CAPS "; " WPE_RAW_CAPS +#define WPE_VIDEO_SRC_DOC_CAPS WPE_GL_CAPS "; video/x-raw, format = (string) BGRA" + +static GstStaticPadTemplate src_factory = GST_STATIC_PAD_TEMPLATE ("src", + GST_PAD_SRC, + GST_PAD_ALWAYS, + GST_STATIC_CAPS (WPE_VIDEO_SRC_CAPS)); + +#define GST_ELEMENT_PROGRESS(el, type, code, text) \ + G_STMT_START { \ + gchar *__txt = _gst_element_error_printf text; \ + gst_element_post_message( \ + GST_ELEMENT_CAST(el), \ + gst_message_new_progress(GST_OBJECT_CAST(el), \ + GST_PROGRESS_TYPE_##type, code, __txt)); \ + g_free(__txt); \ + } \ + G_STMT_END + +static GstFlowReturn +gst_wpe_video_src_create (GstBaseSrc * bsrc, guint64 offset, guint length, + GstBuffer ** buf) +{ + GstGLBaseSrc *gl_src = GST_GL_BASE_SRC (bsrc); + GstWpeVideoSrc2 *src = GST_WPE_VIDEO_SRC (bsrc); + GstFlowReturn ret = GST_FLOW_ERROR; + GstBuffer *locked_buffer; + GstClockTime next_time; + gint64 ts_offset = 0; + + WPE_LOCK (src); + if (src->gl_enabled) { + WPE_UNLOCK (src); + return GST_CALL_PARENT_WITH_DEFAULT (GST_BASE_SRC_CLASS, create, (bsrc, + offset, length, buf), ret); + } + + locked_buffer = src->view->buffer (); + if (locked_buffer == NULL) { + WPE_UNLOCK (src); + GST_ELEMENT_ERROR (src, RESOURCE, FAILED, + ("WPE View did not render a buffer"), (NULL)); + return ret; + } + *buf = gst_buffer_copy_deep (locked_buffer); + + g_object_get (gl_src, "timestamp-offset", &ts_offset, NULL); + + /* The following code mimics the behaviour of GLBaseSrc::fill */ + GST_BUFFER_TIMESTAMP (*buf) = ts_offset + gl_src->running_time; + GST_BUFFER_OFFSET (*buf) = src->n_frames; + src->n_frames++; + GST_BUFFER_OFFSET_END (*buf) = src->n_frames; + if (gl_src->out_info.fps_n) { + next_time = gst_util_uint64_scale_int (src->n_frames * GST_SECOND, + gl_src->out_info.fps_d, gl_src->out_info.fps_n); + GST_BUFFER_DURATION (*buf) = next_time - gl_src->running_time; + } else { + next_time = ts_offset; + GST_BUFFER_DURATION (*buf) = 
GST_CLOCK_TIME_NONE; + } + + GST_LOG_OBJECT (src, "Created buffer from SHM %" GST_PTR_FORMAT, *buf); + + gl_src->running_time = next_time; + + ret = GST_FLOW_OK; + WPE_UNLOCK (src); + return ret; +} + +static GQuark +_egl_image_quark (void) +{ + static GQuark quark = 0; + + if (!quark) + quark = g_quark_from_static_string ("GstWPEEGLImage"); + return quark; +} + +static gboolean +gst_wpe_video_src_fill_memory (GstGLBaseSrc * bsrc, GstGLMemory * memory) +{ + GstWpeVideoSrc2 *src = GST_WPE_VIDEO_SRC (bsrc); + const GstGLFuncs *gl; + guint tex_id; + GstEGLImage *locked_image; + + if (!gst_gl_context_check_feature (GST_GL_CONTEXT (bsrc->context), + "EGL_KHR_image_base")) { + GST_ERROR_OBJECT (src, "EGL_KHR_image_base is not supported"); + return FALSE; + } + + WPE_LOCK (src); + + gl = bsrc->context->gl_vtable; + tex_id = gst_gl_memory_get_texture_id (memory); + locked_image = src->view->image (); + + if (!locked_image) { + WPE_UNLOCK (src); + return TRUE; + } + // The EGLImage is implicitely associated with the memory we're filling, so we + // need to ensure their life cycles are tied. 
+ gst_mini_object_set_qdata (GST_MINI_OBJECT_CAST (memory), _egl_image_quark (), + gst_egl_image_ref (locked_image), (GDestroyNotify) gst_egl_image_unref); + + gl->ActiveTexture (GL_TEXTURE0 + memory->plane); + gl->BindTexture (GL_TEXTURE_2D, tex_id); + gl->EGLImageTargetTexture2D (GL_TEXTURE_2D, + gst_egl_image_get_image (locked_image)); + WPE_UNLOCK (src); + return TRUE; +} + +static gboolean +gst_wpe_video_src_start (GstWpeVideoSrc2 * src) +{ + GstGLContext *context = NULL; + GstGLDisplay *display = NULL; + GstGLBaseSrc *base_src = GST_GL_BASE_SRC (src); + gboolean created_view = FALSE; + GBytes *bytes; + + GST_ELEMENT_PROGRESS (src, START, "open", ("Starting up")); + GST_INFO_OBJECT (src, "Starting up"); + WPE_LOCK (src); + + if (src->gl_enabled) { + context = base_src->context; + display = base_src->display; + } + + GST_DEBUG_OBJECT (src, "Will %sfill GLMemories", + src->gl_enabled ? "" : "NOT "); + + auto & thread = GstWPEContextThread::singleton (); + + if (!src->view) { + GST_ELEMENT_PROGRESS (src, CONTINUE, "open", ("Creating WPE WebView")); + + GError *error = nullptr; + wpe_display_gstreamer_set_gl (src->display, display, context); + + if (!wpe_display_connect (src->display, &error)) { + WPE_UNLOCK (src); + GST_ELEMENT_PROGRESS (src, ERROR, "open", + ("WPE display initialisation failed")); + GST_ELEMENT_ERROR (src, RESOURCE, FAILED, + ("Display initialisation failed: %s", error->message), (NULL)); + g_error_free (error); + return FALSE; + } + + src->view = + thread.createWPEView (src, context, display, src->display, + GST_VIDEO_INFO_WIDTH (&base_src->out_info), + GST_VIDEO_INFO_HEIGHT (&base_src->out_info)); + created_view = TRUE; + GST_DEBUG_OBJECT (src, "created view %p", src->view); + GST_ELEMENT_PROGRESS (src, CONTINUE, "open", ("WPE WebView is ready")); + } + + if (!created_view) { + GST_INFO_OBJECT (src, + "Re-starting after re-negotiation, clearing cached SHM buffers"); + src->view->clearBuffers (); + } + + GST_OBJECT_LOCK (src); + bytes = 
src->bytes; + src->bytes = NULL; + GST_OBJECT_UNLOCK (src); + + if (bytes != NULL) { + GST_ELEMENT_PROGRESS (src, CONTINUE, "open", ("Loading HTML data")); + src->view->loadData (bytes); + g_bytes_unref (bytes); + } + + if (created_view) { + src->n_frames = 0; + } + WPE_UNLOCK (src); + GST_ELEMENT_PROGRESS (src, COMPLETE, "open", ("Ready to produce buffers")); + return TRUE; +} + +static gboolean +gst_wpe_video_src_decide_allocation (GstBaseSrc * base_src, GstQuery * query) +{ + GstGLBaseSrc *gl_src = GST_GL_BASE_SRC (base_src); + GstWpeVideoSrc2 *src = GST_WPE_VIDEO_SRC (base_src); + GstCapsFeatures *caps_features; + + WPE_LOCK (src); + caps_features = gst_caps_get_features (gl_src->out_caps, 0); + if (caps_features != NULL + && gst_caps_features_contains (caps_features, + GST_CAPS_FEATURE_MEMORY_GL_MEMORY)) { + src->gl_enabled = TRUE; + } else { + src->gl_enabled = FALSE; + } + + if (src->gl_enabled) { + WPE_UNLOCK (src); + return GST_CALL_PARENT_WITH_DEFAULT (GST_BASE_SRC_CLASS, decide_allocation, + (base_src, query), FALSE); + } + WPE_UNLOCK (src); + return gst_wpe_video_src_start (src); +} + +static gboolean +gst_wpe_video_src_gl_start (GstGLBaseSrc * base_src) +{ + GstWpeVideoSrc2 *src = GST_WPE_VIDEO_SRC (base_src); + return gst_wpe_video_src_start (src); +} + +static void +gst_wpe_video_src_stop_unlocked (GstWpeVideoSrc2 * src) +{ + if (src->view) { + GST_DEBUG_OBJECT (src, "deleting view %p", src->view); + delete src->view; + src->view = NULL; + } +} + +static void +gst_wpe_video_src_gl_stop (GstGLBaseSrc * base_src) +{ + GstWpeVideoSrc2 *src = GST_WPE_VIDEO_SRC (base_src); + + WPE_LOCK (src); + gst_wpe_video_src_stop_unlocked (src); + WPE_UNLOCK (src); +} + +static gboolean +gst_wpe_video_src_stop (GstBaseSrc * base_src) +{ + GstWpeVideoSrc2 *src = GST_WPE_VIDEO_SRC (base_src); + + /* we can call this always, GstGLBaseSrc is smart enough to not crash if + * gst_gl_base_src_gl_start() has not been called from chaining up + * 
gst_wpe_video_src_decide_allocation() */ + if (!GST_CALL_PARENT_WITH_DEFAULT (GST_BASE_SRC_CLASS, stop, (base_src), + FALSE)) + return FALSE; + + WPE_LOCK (src); + + /* if gl-enabled, gst_wpe_video_src_stop_unlocked() would have already been called + * inside gst_wpe_video_src_gl_stop() from the base class stopping the OpenGL + * context */ + if (!src->gl_enabled) + gst_wpe_video_src_stop_unlocked (src); + + WPE_UNLOCK (src); + return TRUE; +} + +static GstCaps * +gst_wpe_video_src_fixate (GstBaseSrc * base_src, GstCaps * combined_caps) +{ + GstWpeVideoSrc2 *src = GST_WPE_VIDEO_SRC (base_src); + GstStructure *structure; + gint width, height; + GstCaps *caps; + + /* In situation where software GL support is explicitly requested, select raw + * caps, otherwise perform default caps negotiation. Unfortunately at this + * point we don't know yet if a GL context will be usable or not, so we can't + * check the element GstContext. + */ + if (!g_strcmp0 (g_getenv ("LIBGL_ALWAYS_SOFTWARE"), "true")) { + caps = gst_caps_from_string (WPE_RAW_CAPS); + } else { + caps = gst_caps_make_writable (combined_caps); + } + + structure = gst_caps_get_structure (caps, 0); + + gst_structure_fixate_field_nearest_int (structure, "width", DEFAULT_WIDTH); + gst_structure_fixate_field_nearest_int (structure, "height", DEFAULT_HEIGHT); + + if (gst_structure_has_field (structure, "framerate")) + gst_structure_fixate_field_nearest_fraction (structure, "framerate", + DEFAULT_FPS_N, DEFAULT_FPS_D); + else + gst_structure_set (structure, "framerate", GST_TYPE_FRACTION, DEFAULT_FPS_N, + DEFAULT_FPS_D, NULL); + + caps = GST_BASE_SRC_CLASS (parent_class)->fixate (base_src, caps); + GST_INFO_OBJECT (base_src, "Fixated caps to %" GST_PTR_FORMAT, caps); + + if (src->view) { + gst_structure_get (structure, "width", G_TYPE_INT, &width, "height", + G_TYPE_INT, &height, NULL); + src->view->resize (width, height); + } + return caps; +} + +void +gst_wpe_video_src_configure_web_view (GstWpeVideoSrc2 * src, + 
WebKitWebView * webview) +{ + GValue args[2] = { {0}, {0} }; + + g_value_init (&args[0], GST_TYPE_ELEMENT); + g_value_set_object (&args[0], src); + g_value_init (&args[1], G_TYPE_OBJECT); + g_value_set_object (&args[1], webview); + + g_signal_emitv (args, gst_wpe_video_src_signals[SIGNAL_CONFIGURE_WEB_VIEW], 0, + NULL); + + g_value_unset (&args[0]); + g_value_unset (&args[1]); +} + +static void +gst_wpe_video_src_run_javascript (GstWpeVideoSrc2 * src, const gchar * script) +{ + if (src->view && GST_STATE (GST_ELEMENT_CAST (src)) > GST_STATE_NULL) { + GST_INFO_OBJECT (src, "running javascript"); + src->view->runJavascript (script); + } +} + +static void +gst_wpe_video_src_load_bytes (GstWpeVideoSrc2 * src, GBytes * bytes) +{ + if (src->view && GST_STATE (GST_ELEMENT_CAST (src)) > GST_STATE_NULL) { + src->view->loadData (bytes); + } else { + GST_OBJECT_LOCK (src); + if (src->bytes) + g_bytes_unref (src->bytes); + src->bytes = g_bytes_ref (bytes); + GST_OBJECT_UNLOCK (src); + } +} + +static gboolean +gst_wpe_video_src_set_location (GstWpeVideoSrc2 * src, const gchar * location, + GError ** error) +{ + GST_OBJECT_LOCK (src); + g_free (src->location); + src->location = g_strdup (location); + GST_OBJECT_UNLOCK (src); + + if (src->view) + src->view->loadUri (location); + + return TRUE; +} + +static void +gst_wpe_video_src_set_draw_background (GstWpeVideoSrc2 * src, + gboolean draw_background) +{ + GST_OBJECT_LOCK (src); + src->draw_background = draw_background; + GST_OBJECT_UNLOCK (src); + + if (src->view) + src->view->setDrawBackground (draw_background); +} + +static void +gst_wpe_video_src_set_property (GObject * object, guint prop_id, + const GValue * value, GParamSpec * pspec) +{ + GstWpeVideoSrc2 *src = GST_WPE_VIDEO_SRC (object); + + switch (prop_id) { + case PROP_LOCATION: + { + const gchar *location; + + location = g_value_get_string (value); + if (location == NULL) { + GST_WARNING_OBJECT (src, "location property cannot be NULL"); + return; + } + + if
(!gst_wpe_video_src_set_location (src, location, NULL)) { + GST_WARNING_OBJECT (src, "badly formatted location"); + return; + } + break; + } + case PROP_DRAW_BACKGROUND: + gst_wpe_video_src_set_draw_background (src, g_value_get_boolean (value)); + break; + default: + break; + } +} + +static void +gst_wpe_video_src_get_property (GObject * object, guint prop_id, GValue * value, + GParamSpec * pspec) +{ + GstWpeVideoSrc2 *src = GST_WPE_VIDEO_SRC (object); + + switch (prop_id) { + case PROP_LOCATION: + GST_OBJECT_LOCK (src); + g_value_set_string (value, src->location); + GST_OBJECT_UNLOCK (src); + break; + case PROP_DRAW_BACKGROUND: + GST_OBJECT_LOCK (src); + g_value_set_boolean (value, src->draw_background); + GST_OBJECT_UNLOCK (src); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static gboolean +gst_wpe_video_src_event (GstBaseSrc * base_src, GstEvent * event) +{ + gboolean ret = FALSE; + GstWpeVideoSrc2 *src = GST_WPE_VIDEO_SRC (base_src); + + if (src->view && GST_EVENT_TYPE (event) == GST_EVENT_NAVIGATION) { + GST_DEBUG_OBJECT (src, "Processing event %" GST_PTR_FORMAT, event); + switch (gst_navigation_event_get_type (event)) { + case GST_NAVIGATION_EVENT_KEY_PRESS: + case GST_NAVIGATION_EVENT_KEY_RELEASE: + ret = src->view->dispatchKeyboardEvent (event); + break; + case GST_NAVIGATION_EVENT_MOUSE_BUTTON_PRESS: + case GST_NAVIGATION_EVENT_MOUSE_BUTTON_RELEASE: + ret = src->view->dispatchPointerEvent (event); + break; + case GST_NAVIGATION_EVENT_MOUSE_MOVE: + ret = src->view->dispatchPointerMoveEvent (event); + break; + case GST_NAVIGATION_EVENT_MOUSE_SCROLL: + ret = src->view->dispatchAxisEvent (event); + break; + case GST_NAVIGATION_EVENT_TOUCH_DOWN: + case GST_NAVIGATION_EVENT_TOUCH_MOTION: + case GST_NAVIGATION_EVENT_TOUCH_UP: + ret = src->view->dispatchTouchEvent (event); + break; + case GST_NAVIGATION_EVENT_TOUCH_FRAME: + case GST_NAVIGATION_EVENT_TOUCH_CANCEL: + break; + default: + break; + } + } + + if 
(!ret) { + ret = + GST_CALL_PARENT_WITH_DEFAULT (GST_BASE_SRC_CLASS, event, (base_src, + event), FALSE); + } + return ret; +} + +static gboolean +gst_wpe_video_src_query (GstBaseSrc * base_src, GstQuery * query) +{ + GstWpeVideoSrc2 *src = GST_WPE_VIDEO_SRC (base_src); + GstGLBaseSrc *gl_src = GST_GL_BASE_SRC (base_src); + gboolean ret = FALSE; + + switch (GST_QUERY_TYPE (query)) { + case GST_QUERY_LATENCY:{ + GST_OBJECT_LOCK (src); + if (gl_src->out_info.fps_n > 0) { + GstClockTime latency; + + latency = gst_util_uint64_scale (GST_SECOND, gl_src->out_info.fps_d, + gl_src->out_info.fps_n); + GST_OBJECT_UNLOCK (src); + gst_query_set_latency (query, + gst_base_src_is_live (GST_BASE_SRC_CAST (src)), latency, + GST_CLOCK_TIME_NONE); + GST_DEBUG_OBJECT (src, "Reporting latency of %" GST_TIME_FORMAT, + GST_TIME_ARGS (latency)); + ret = TRUE; + } else { + GST_OBJECT_UNLOCK (src); + } + + break; + } + default: + ret = GST_CALL_PARENT_WITH_DEFAULT (GST_BASE_SRC_CLASS, query, + (base_src, query), FALSE); + break; + } + return ret; +} + +static void +on_view_created (WPEDisplayGStreamer *, WPEView * view, gpointer user_data) +{ + GstWpeVideoSrc2 *src = GST_WPE_VIDEO_SRC (user_data); + GValue args[2] = { {0}, {0} + }; + + g_value_init (&args[0], GST_TYPE_WPE_VIDEO_SRC); + g_value_set_object (&args[0], src); + + g_value_init (&args[1], WPE_TYPE_VIEW); + g_value_set_object (&args[1], view); + + g_signal_emitv (args, gst_wpe_video_src_signals[SIGNAL_WPE_VIEW_CREATED], 0, + NULL); + + g_value_unset (&args[0]); + g_value_unset (&args[1]); +} + +static void +gst_wpe_video_src_init (GstWpeVideoSrc2 * src) +{ + src->draw_background = DEFAULT_DRAW_BACKGROUND; + src->location = g_strdup (DEFAULT_LOCATION); + src->display = wpe_display_gstreamer_new (); + + g_signal_connect (src->display, "wpe-view-created", + G_CALLBACK (on_view_created), src); + + gst_base_src_set_live (GST_BASE_SRC_CAST (src), TRUE); + + g_mutex_init (&src->lock); +} + +static void +gst_wpe_video_src_finalize (GObject * object) +{ +
GstWpeVideoSrc2 *src = GST_WPE_VIDEO_SRC (object); + + g_free (src->location); + g_clear_pointer (&src->bytes, g_bytes_unref); + g_mutex_clear (&src->lock); + g_clear_pointer (&src->display, g_object_unref); + + G_OBJECT_CLASS (parent_class)->finalize (object); +} + +static void +gst_wpe_video_src_class_init (GstWpeVideoSrc2Class * klass) +{ + GObjectClass *gobject_class = G_OBJECT_CLASS (klass); + GstElementClass *gstelement_class = GST_ELEMENT_CLASS (klass); + GstGLBaseSrcClass *gl_base_src_class = GST_GL_BASE_SRC_CLASS (klass); + GstBaseSrcClass *base_src_class = GST_BASE_SRC_CLASS (klass); + GstPadTemplate *tmpl; + GstCaps *doc_caps; + + gobject_class->set_property = gst_wpe_video_src_set_property; + gobject_class->get_property = gst_wpe_video_src_get_property; + gobject_class->finalize = gst_wpe_video_src_finalize; + + g_object_class_install_property (gobject_class, PROP_LOCATION, + g_param_spec_string ("location", "location", + "The URL to display", + DEFAULT_LOCATION, + (GParamFlags) (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + g_object_class_install_property (gobject_class, PROP_DRAW_BACKGROUND, + g_param_spec_boolean ("draw-background", "Draws the background", + "Whether to draw the WebView background", DEFAULT_DRAW_BACKGROUND, + (GParamFlags) (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + gst_element_class_set_static_metadata (gstelement_class, + "WPE source", "Source/Video", + "Creates a video stream from a WPE browser", + "Philippe Normand <philn@igalia.com>, Žan Doberšek <zdobersek@igalia.com>"); + + tmpl = gst_static_pad_template_get (&src_factory); + gst_element_class_add_pad_template (gstelement_class, tmpl); + + base_src_class->fixate = GST_DEBUG_FUNCPTR (gst_wpe_video_src_fixate); + base_src_class->create = GST_DEBUG_FUNCPTR (gst_wpe_video_src_create); + base_src_class->decide_allocation = + GST_DEBUG_FUNCPTR (gst_wpe_video_src_decide_allocation); + base_src_class->stop = GST_DEBUG_FUNCPTR (gst_wpe_video_src_stop); + 
base_src_class->event = GST_DEBUG_FUNCPTR (gst_wpe_video_src_event); + base_src_class->query = GST_DEBUG_FUNCPTR (gst_wpe_video_src_query); + + gl_base_src_class->supported_gl_api = + static_cast < GstGLAPI > + (GST_GL_API_OPENGL | GST_GL_API_OPENGL3 | GST_GL_API_GLES2); + gl_base_src_class->gl_start = GST_DEBUG_FUNCPTR (gst_wpe_video_src_gl_start); + gl_base_src_class->gl_stop = GST_DEBUG_FUNCPTR (gst_wpe_video_src_gl_stop); + gl_base_src_class->fill_gl_memory = + GST_DEBUG_FUNCPTR (gst_wpe_video_src_fill_memory); + + doc_caps = gst_caps_from_string (WPE_VIDEO_SRC_DOC_CAPS); + gst_pad_template_set_documentation_caps (tmpl, doc_caps); + gst_clear_caps (&doc_caps); + + /** + * GstWpeVideoSrc2::wpe-view-created: + * @src: the object which received the signal + * @view: the #WPEView that was created + * + * This signal can be used to hook into the WPEView signals as soon as it was + * created. + */ + gst_wpe_video_src_signals[SIGNAL_WPE_VIEW_CREATED] = + g_signal_new ("wpe-view-created", G_TYPE_FROM_CLASS (klass), + G_SIGNAL_RUN_LAST, 0, NULL, NULL, NULL, G_TYPE_NONE, 1, WPE_TYPE_VIEW); + + /** + * GstWpeVideoSrc2::configure-web-view: + * @src: the object which received the signal + * @webview: the webView + * + * Allow application to configure the webView settings. + */ + gst_wpe_video_src_signals[SIGNAL_CONFIGURE_WEB_VIEW] = + g_signal_new ("configure-web-view", G_TYPE_FROM_CLASS (klass), + G_SIGNAL_RUN_LAST, 0, NULL, NULL, NULL, G_TYPE_NONE, 1, G_TYPE_OBJECT); + + /** + * GstWpeVideoSrc2::load-bytes: + * @src: the object which received the signal + * @bytes: the GBytes data to load + * + * Load the specified bytes into the internal webView. 
+ */ + gst_wpe_video_src_signals[SIGNAL_LOAD_BYTES] = + g_signal_new_class_handler ("load-bytes", G_TYPE_FROM_CLASS (klass), + static_cast < GSignalFlags > (G_SIGNAL_RUN_LAST | G_SIGNAL_ACTION), + G_CALLBACK (gst_wpe_video_src_load_bytes), NULL, NULL, NULL, + G_TYPE_NONE, 1, G_TYPE_BYTES); + + /** + * GstWpeVideoSrc2::run-javascript: + * @src: the object which received the signal + * @script: the script to run + * + * Asynchronously run script in the context of the current page on the + * internal webView. + * + */ + gst_wpe_video_src_signals[SIGNAL_RUN_JAVASCRIPT] = + g_signal_new_class_handler ("run-javascript", G_TYPE_FROM_CLASS (klass), + static_cast < GSignalFlags > (G_SIGNAL_RUN_LAST | G_SIGNAL_ACTION), + G_CALLBACK (gst_wpe_video_src_run_javascript), NULL, NULL, NULL, + G_TYPE_NONE, 1, G_TYPE_STRING); +}
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/wpe2/gstwpevideosrc.h
Added
@@ -0,0 +1,17 @@ +#pragma once + +#include <wpe/webkit.h> +#include <gst/gl/gl.h> +#include "gstwpeview.h" + +typedef struct _GstWpeVideoSrc2 GstWpeVideoSrc2; + +G_BEGIN_DECLS + + +#define GST_TYPE_WPE_VIDEO_SRC (gst_wpe_video_src_get_type ()) +G_DECLARE_FINAL_TYPE (GstWpeVideoSrc2, gst_wpe_video_src, GST, WPE_VIDEO_SRC, GstGLBaseSrc); + +void gst_wpe_video_src_configure_web_view (GstWpeVideoSrc2 * src, WebKitWebView * webview); + +G_END_DECLS
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/wpe2/gstwpeview.cpp
Added
@@ -0,0 +1,69 @@ +/* Copyright (C) <2025> Philippe Normand <philn@igalia.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef HAVE_CONFIG_H +#include <config.h> +#endif + +#include "gstwpeview.h" +#include "gstwpethreadedview.h" + +struct _WPEViewGStreamer +{ + WPEView parent; + + GstWPEThreadedView *client; +}; + +#define wpe_view_gstreamer_parent_class parent_class +G_DEFINE_TYPE (WPEViewGStreamer, wpe_view_gstreamer, WPE_TYPE_VIEW); + +static gboolean +wpe_view_gstreamer_render_buffer (WPEView * view, WPEBuffer * buffer, + const WPERectangle *, guint, GError ** error) +{ + auto self = WPE_VIEW_GSTREAMER (view); + // TODO: Add support for damage rects. 
+ return self->client->setPendingBuffer (buffer, error); +} + +static void +wpe_view_gstreamer_init (WPEViewGStreamer * view) +{ +} + +static void +wpe_view_gstreamer_class_init (WPEViewGStreamerClass * klass) +{ + WPEViewClass *viewClass = WPE_VIEW_CLASS (klass); + viewClass->render_buffer = wpe_view_gstreamer_render_buffer; +} + +WPEView * +wpe_view_gstreamer_new (WPEDisplayGStreamer * display) +{ + return WPE_VIEW (g_object_new (WPE_TYPE_VIEW_GSTREAMER, "display", display, + nullptr)); +} + +void +wpe_view_gstreamer_set_client (WPEViewGStreamer * view, + GstWPEThreadedView * client) +{ + view->client = client; +}
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/wpe2/gstwpeview.h
Added
@@ -0,0 +1,41 @@ +/* Copyright (C) <2025> Philippe Normand <philn@igalia.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifndef GstWPEView_h +#define GstWPEView_h + +#include <glib-object.h> +#include "gstwpedisplay.h" + +class GstWPEThreadedView; + +G_BEGIN_DECLS + +#define WPE_TYPE_VIEW_GSTREAMER (wpe_view_gstreamer_get_type()) +G_DECLARE_FINAL_TYPE(WPEViewGStreamer, wpe_view_gstreamer, WPE, + VIEW_GSTREAMER, WPEView) + +typedef struct _WPEDisplayGStreamer WPEDisplayGStreamer; + +WPEView *wpe_view_gstreamer_new(WPEDisplayGStreamer *); + +void wpe_view_gstreamer_set_client(WPEViewGStreamer*, GstWPEThreadedView*); + +G_END_DECLS + +#endif /* GstWPEView_h */
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/wpe2/meson.build
Added
@@ -0,0 +1,57 @@ +wpe_sources = [ + 'gstwpethreadedview.cpp','gstwpe2.cpp', + 'gstwpevideosrc.cpp', + 'gstwpedisplay.cpp', + 'gstwpeview.cpp', + 'gstwpetoplevel.cpp', +] + +wpe_headers = [ + 'gstwpevideosrc.h', + 'gstwpe2.h', + 'gstwpethreadedview.h', +] + +doc_sources = [] +foreach s: wpe_sources + wpe_headers + doc_sources += meson.current_source_dir() / s +endforeach + +plugin_sources += { + 'wpe': pathsep.join(doc_sources) +} + +wpe_feat = get_option('wpe2').require(gstgl_dep.found(), + error_message : 'wpe plugin enabled but GL support was not detected') + +if not wpe_feat.allowed() + subdir_done() +endif + +wpewebkit_dep = dependency('wpe-webkit-2.0', version: '>=2.50.0', required: wpe_feat) +if not wpewebkit_dep.found() + subdir_done() +endif + +if not cc.check_header('wpe/wpe-platform.h', dependencies: wpewebkit_dep, required: wpe_feat) + subdir_done() +endif + +egl_dep = dependency('egl', required : wpe_feat) +xkbcommon_dep = dependency('xkbcommon', version : '>= 0.8', required : wpe_feat) + +if not (egl_dep.found() and xkbcommon_dep.found()) + subdir_done() +endif + +gstwpe = library('gstwpe2', + wpe_sources, + override_options : ['cpp_std=c++17'], + dependencies : [egl_dep, wpewebkit_dep, gstallocators_dep, gstaudio_dep, gstvideo_dep, + gstbase_dep, gstgl_dep, xkbcommon_dep], + cpp_args : gst_plugins_bad_args + ['-DHAVE_CONFIG_H=1'], + include_directories : [configinc], + install : true, + install_dir : plugins_install_dir) + +plugins += [gstwpe]
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/x265/gstx265enc.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/x265/gstx265enc.c
Changed
@@ -199,6 +199,8 @@ GValue * value, GParamSpec * pspec); static gboolean x265enc_element_init (GstPlugin * plugin); +static GstBuffer *gst_x265_enc_get_header_buffer (GstX265Enc * encoder); + #define gst_x265_enc_parent_class parent_class G_DEFINE_TYPE_WITH_CODE (GstX265Enc, gst_x265_enc, GST_TYPE_VIDEO_ENCODER, G_IMPLEMENT_INTERFACE (GST_TYPE_PRESET, NULL)); @@ -561,6 +563,7 @@ gst_x265_enc_init (GstX265Enc * encoder) { encoder->push_header = TRUE; + encoder->header_buffer = NULL; encoder->bitrate = PROP_BITRATE_DEFAULT; encoder->qp = PROP_QP_DEFAULT; @@ -658,6 +661,7 @@ GST_DEBUG_OBJECT (encoder, "stop encoder"); + gst_clear_buffer (&x265enc->header_buffer); gst_x265_enc_flush_frames (x265enc, FALSE); gst_x265_enc_close_encoder (x265enc); gst_x265_enc_dequeue_all_frames (x265enc); @@ -693,6 +697,8 @@ { GstX265Enc *encoder = GST_X265_ENC (object); + gst_clear_buffer (&encoder->header_buffer); + if (encoder->input_state) gst_video_codec_state_unref (encoder->input_state); encoder->input_state = NULL; @@ -1000,6 +1006,7 @@ encoder->api->encoder_parameters (encoder->x265enc, &encoder->x265param); encoder->push_header = TRUE; + encoder->header_buffer = gst_x265_enc_get_header_buffer (encoder); return TRUE; } @@ -1251,7 +1258,9 @@ tags = gst_tag_list_new_empty (); gst_tag_list_add (tags, GST_TAG_MERGE_REPLACE, GST_TAG_ENCODER, "x265", - GST_TAG_ENCODER_VERSION, x265_version_str, NULL); + GST_TAG_ENCODER_VERSION, x265_version_str, + GST_TAG_MAXIMUM_BITRATE, encoder->bitrate * 1024, + GST_TAG_NOMINAL_BITRATE, encoder->bitrate * 1024, NULL); gst_video_encoder_merge_tags (GST_VIDEO_ENCODER (encoder), tags, GST_TAG_MERGE_REPLACE); gst_tag_list_unref (tags); @@ -1615,12 +1624,11 @@ frame->output_buffer = out_buf; - if (encoder->push_header) { - GstBuffer *header; - - header = gst_x265_enc_get_header_buffer (encoder); - frame->output_buffer = gst_buffer_append (header, frame->output_buffer); + if (encoder->push_header && encoder->header_buffer) { + frame->output_buffer = + 
gst_buffer_append (encoder->header_buffer, frame->output_buffer); encoder->push_header = FALSE; + encoder->header_buffer = NULL; } GST_LOG_OBJECT (encoder,
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/ext/x265/gstx265enc.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/ext/x265/gstx265enc.h
Changed
@@ -50,6 +50,7 @@ x265_param x265param; GstClockTime dts_offset; gboolean push_header; + GstBuffer *header_buffer; const x265_api *api; /* List of frame/buffer mapping structs for
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/analytics/analytics.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/analytics/analytics.h
Changed
@@ -31,6 +31,10 @@ #include <gst/analytics/gstanalyticsobjectdetectionmtd.h> #include <gst/analytics/gstanalyticsobjecttrackingmtd.h> #include <gst/analytics/gstanalyticssegmentationmtd.h> +#include <gst/analytics/gstanalyticstensormtd.h> #include <gst/analytics/gsttensormeta.h> +#include <gst/analytics/gstanalyticsbatchmeta.h> +#include <gst/analytics/gstanalytics_image_util.h> +#include <gst/analytics/modelinfo.h> #endif /* __ANALYTICS_H__ */
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/analytics/gstanalytics_image_util.c
Added
@@ -0,0 +1,226 @@ +/* GStreamer + * Copyright (C) 2025 Collabora Ltd + * @author: Daniel Morin <daniel.morin@dmohub.org> + * + * gstanalytics_image_util.c + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#include "gstanalytics_image_util.h" + +/* Evaluate if there's an intersection between segments s1 and s2 */ +static guint +linear_intersection_uint (guint s1_min, guint s1_max, guint s2_min, + guint s2_max) +{ + guint tmp; + if (s1_max > s2_min && s2_max > s1_min) { + if (s1_min > s2_min) { + tmp = (s2_max > s1_max) ? s1_max : s2_max; + return tmp - s1_min; + } else { + tmp = (s1_max > s2_max) ? s2_max : s1_max; + return tmp - s2_min; + } + } + return 0; +} + +static gfloat +linear_intersection_float (gfloat s1_min, gfloat s1_max, gfloat s2_min, + gfloat s2_max) +{ + gfloat tmp; + if (s1_max > s2_min && s2_max > s1_min) { + if (s1_min > s2_min) { + tmp = (s2_max > s1_max) ? s1_max : s2_max; + return tmp - s1_min; + } else { + tmp = (s1_max > s2_max) ? 
s2_max : s1_max; + return tmp - s2_min; + } + } + return 0.0f; +} + + +static gboolean +_clips_and_adj_dim_int (gint * xy, gint * wh) +{ + g_return_val_if_fail (xy != NULL, FALSE); + g_return_val_if_fail (wh != NULL, FALSE); + + if (*xy < 0) { + if ((*xy + *wh) < 0) { + /* Bounding box completely outside the visible area */ + return FALSE; + } + + /* reduce width by the portion that is negative */ + *wh += *xy; + *xy = 0; + } + return TRUE; +} + +static gboolean +_clips_and_adj_dim_float (gfloat * xy, gfloat * wh) +{ + g_return_val_if_fail (xy != NULL, FALSE); + g_return_val_if_fail (wh != NULL, FALSE); + + if (*xy < 0.0f) { + if ((*xy + *wh) < 0.0f) { + /* Bounding box completely outside the visible area */ + return FALSE; + } + + /* reduce width by the portion that is negative */ + *wh += *xy; + *xy = 0.0; + } + return TRUE; +} + +/** + * gst_analytics_image_util_iou_int: + * @bb1_x: Bounding box 1, X coordinate + * @bb1_y: Bounding box 1, Y coordinate + * @bb1_w: Bounding box 1, width + * @bb1_h: Bounding box 1, height + * @bb2_x: Bounding box 2, X coordinate + * @bb2_y: Bounding box 2, Y coordinate + * @bb2_w: Bounding box 2, width + * @bb2_h: Bounding box 2, height + * + * Calculate the intersection over the union (IoU) of the two areas defined by + * the bounding box 1 and bounding box 2. IoU is a measure of how much two + * regions overlap. + * + * Return: IoU of bb1 and bb2. + * + * Since: 1.28 + */ +gfloat +gst_analytics_image_util_iou_int (gint bb1_x, gint bb1_y, gint bb1_w, + gint bb1_h, gint bb2_x, gint bb2_y, gint bb2_w, gint bb2_h) +{ + if (_clips_and_adj_dim_int (&bb1_x, &bb1_w) == FALSE) { + return 0.0f; + } + + if (_clips_and_adj_dim_int (&bb1_y, &bb1_h) == FALSE) { + return 0.0f; + } + + if (_clips_and_adj_dim_int (&bb2_x, &bb2_w) == FALSE) { + return 0.0f; + } + + if (_clips_and_adj_dim_int (&bb2_y, &bb2_h) == FALSE) { + return 0.0f; + } + + /* Rationale: linear intersection is much faster to calculate than + * 2d intersection. 
We project the two bounding boxes considered for + * intersection on one axis and verify if the segments they create intersect. + * If they don't, the bounding boxes can't intersect in 2d and we don't + * need to verify if they intersect on the other dimension. If they + * intersect on the first dimension we verify if they intersect on the other + * dimension. Again, if they don't intersect the bounding boxes can't intersect + * in a 2D space. If they intersected on both axes we calculate the IoU.*/ + const guint x_intersection = + linear_intersection_uint (bb1_x, bb1_x + bb1_w, bb2_x, bb2_x + bb2_w); + if (x_intersection > 0) { + const guint y_intersection = linear_intersection_uint (bb1_y, bb1_y + bb1_h, + bb2_y, bb2_y + bb2_h); + if (y_intersection > 0) { + const guint bb1_area = bb1_w * bb1_h; + const guint bb2_area = bb2_w * bb2_h; + const guint intersect_area = x_intersection * y_intersection; + const guint union_area = bb1_area + bb2_area - intersect_area; + return union_area == 0 ? 0.0f : ((gfloat) intersect_area) / union_area; + } + } + + return 0.0f; +} + +/** + * gst_analytics_image_util_iou_float: + * @bb1_x: Bounding box 1, X coordinate + * @bb1_y: Bounding box 1, Y coordinate + * @bb1_w: Bounding box 1, width + * @bb1_h: Bounding box 1, height + * @bb2_x: Bounding box 2, X coordinate + * @bb2_y: Bounding box 2, Y coordinate + * @bb2_w: Bounding box 2, width + * @bb2_h: Bounding box 2, height + * + * Calculate the intersection over the union (IoU) of the two areas defined by + * the bounding box 1 and bounding box 2. IoU is a measure of how much two + * regions overlap. + * + * Return: IoU of bb1 and bb2. 
+ * + * Since: 1.28 + */ +gfloat +gst_analytics_image_util_iou_float (gfloat bb1_x, gfloat bb1_y, gfloat bb1_w, + gfloat bb1_h, gfloat bb2_x, gfloat bb2_y, gfloat bb2_w, gfloat bb2_h) +{ + if (_clips_and_adj_dim_float (&bb1_x, &bb1_w) == FALSE) { + return 0.0f; + } + + if (_clips_and_adj_dim_float (&bb1_y, &bb1_h) == FALSE) { + return 0.0f; + } + + if (_clips_and_adj_dim_float (&bb2_x, &bb2_w) == FALSE) { + return 0.0f; + } + + if (_clips_and_adj_dim_float (&bb2_y, &bb2_h) == FALSE) { + return 0.0f; + } + + /* Rationale: linear intersection is much faster to calculate than + * 2d intersection. We project the two bounding boxes considered for + * intersection on one axis and verify if the segments they create intersect. + * If they don't, the bounding boxes can't intersect in 2d and we don't + * need to verify if they intersect on the other dimension. If they + * intersect on the first dimension we verify if they intersect on the other + * dimension. Again, if they don't intersect the bounding boxes can't intersect + * in a 2D space. If they intersected on both axes we calculate the IoU.*/ + const gfloat x_intersection = + linear_intersection_float (bb1_x, bb1_x + bb1_w, bb2_x, bb2_x + bb2_w); + if (x_intersection > 0) { + const float y_intersection = + linear_intersection_float (bb1_y, bb1_y + bb1_h, + bb2_y, bb2_y + bb2_h); + if (y_intersection > 0) { + const gfloat bb1_area = bb1_w * bb1_h; + const gfloat bb2_area = bb2_w * bb2_h; + const gfloat intersect_area = x_intersection * y_intersection; + const gfloat union_area = bb1_area + bb2_area - intersect_area; + return union_area == 0.0f ? 0.0f : intersect_area / union_area; + } + } + + return 0.0f; +}
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/analytics/gstanalytics_image_util.h
Added
@@ -0,0 +1,42 @@ +/* GStreamer + * Copyright (C) 2025 Collabora Ltd + * @author: Daniel Morin <daniel.morin@dmohub.org> + * + * gstanalytics_image_util.h + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifndef __GST_ANALYTICS_IMAGE_UTIL_H__ +#define __GST_ANALYTICS_IMAGE_UTIL_H__ + +#include <gst/gst.h> +#include <gst/analytics/analytics-meta-prelude.h> + +G_BEGIN_DECLS + +GST_ANALYTICS_META_API +gfloat gst_analytics_image_util_iou_int (gint bb1_x, gint bb1_y, gint bb1_w, + gint bb1_h, gint bb2_x, gint bb2_y, gint bb2_w, gint bb2_h); + +GST_ANALYTICS_META_API +gfloat gst_analytics_image_util_iou_float (gfloat bb1_x, gfloat bb1_y, gfloat + bb1_w, gfloat bb1_h, gfloat bb2_x, gfloat bb2_y, gfloat bb2_w, gfloat + bb2_h); + +G_END_DECLS + +#endif /* __GST_ANALYTICS_IMAGE_UTIL_H__ */
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/analytics/gstanalyticsbatchmeta.c
Added
@@ -0,0 +1,268 @@ +/* + * GStreamer + * + * Copyright (C) 2025 Sebastian Dröge <sebastian@centricular.com> + * + * gstanalyticsbatchmeta.c + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#include "gstanalyticsbatchmeta.h" + +static gboolean +gst_analytics_batch_meta_transform (GstBuffer * dest, GstMeta * meta, + GstBuffer * buffer, GQuark type, gpointer data) +{ + GstAnalyticsBatchMeta *dmeta, *smeta; + + smeta = (GstAnalyticsBatchMeta *) meta; + + if (GST_META_TRANSFORM_IS_COPY (type)) { + smeta = (GstAnalyticsBatchMeta *) meta; + dmeta = gst_buffer_add_analytics_batch_meta (dest); + if (!dmeta) + return FALSE; + GST_TRACE ("copy analytics batch metadata"); + + dmeta->streams = g_new (GstAnalyticsBatchStream, smeta->n_streams); + for (gsize i = 0; i < smeta->n_streams; i++) { + GstAnalyticsBatchStream *sstream = &smeta->streams[i]; + GstAnalyticsBatchStream *dstream = &dmeta->streams[i]; + + dstream->index = sstream->index; + + dstream->sticky_events = g_new (GstEvent *, sstream->n_sticky_events); + for (gsize j = 0; j < sstream->n_sticky_events; j++) { + dstream->sticky_events[j] = gst_event_ref (sstream->sticky_events[j]); + } + dstream->n_sticky_events = sstream->n_sticky_events; + + dstream->objects = g_new (GstMiniObject *, sstream->n_objects); + for (gsize j = 
0; j < sstream->n_objects; j++) { + dstream->objects[j] = gst_mini_object_ref (sstream->objects[j]); + } + dstream->n_objects = sstream->n_objects; + } + dmeta->n_streams = smeta->n_streams; + + } else { + GST_WARNING + ("gst_analytics_batch_meta_transform: transform type %u not supported", + type); + return FALSE; + } + return TRUE; +} + +static gboolean +gst_analytics_batch_meta_init (GstMeta * meta, gpointer params, + GstBuffer * buffer) +{ + GstAnalyticsBatchMeta *bmeta = (GstAnalyticsBatchMeta *) meta; + + bmeta->streams = NULL; + bmeta->n_streams = 0; + + return TRUE; +} + +static void +gst_analytics_batch_meta_free (GstMeta * meta, GstBuffer * buffer) +{ + GstAnalyticsBatchMeta *bmeta = (GstAnalyticsBatchMeta *) meta; + + for (gsize i = 0; i < bmeta->n_streams; i++) { + GstAnalyticsBatchStream *stream = &bmeta->streams[i]; + + for (gsize j = 0; j < stream->n_sticky_events; j++) + gst_event_unref (stream->sticky_events[j]); + + for (gsize j = 0; j < stream->n_objects; j++) + gst_mini_object_unref (stream->objects[j]); + + g_clear_pointer (&stream->objects, g_free); + } + + g_free (bmeta->streams); +} + +/** + * gst_analytics_batch_meta_api_get_type: (skip) + * + * Since: 1.28 + */ +GType +gst_analytics_batch_meta_api_get_type (void) +{ + static GType type = 0; + static const gchar *tags[] = { NULL }; + + if (g_once_init_enter (&type)) { + GType _type = gst_meta_api_type_register ("GstAnalyticsBatchMetaAPI", tags); + g_once_init_leave (&type, _type); + } + return type; +} + + +/** + * gst_analytics_batch_meta_get_info: (skip) + * + * Since: 1.28 + */ +const GstMetaInfo * +gst_analytics_batch_meta_get_info (void) +{ + static const GstMetaInfo *tmeta_info = NULL; + + if (g_once_init_enter (&tmeta_info)) { + const GstMetaInfo *meta = + gst_meta_register (gst_analytics_batch_meta_api_get_type (), + "GstAnalyticsBatchMeta", + sizeof (GstAnalyticsBatchMeta), + gst_analytics_batch_meta_init, + gst_analytics_batch_meta_free, + gst_analytics_batch_meta_transform); + 
g_once_init_leave (&tmeta_info, meta); + } + return tmeta_info; +} + +/** + * gst_buffer_add_analytics_batch_meta: + * @buffer: A writable #GstBuffer + * + * Adds a #GstAnalyticsBatchMeta to a buffer or returns the existing one + * + * Returns: (transfer none): The new #GstAnalyticsBatchMeta + * + * Since: 1.28 + */ + +GstAnalyticsBatchMeta * +gst_buffer_add_analytics_batch_meta (GstBuffer * buffer) +{ + return (GstAnalyticsBatchMeta *) gst_buffer_add_meta (buffer, + gst_analytics_batch_meta_get_info (), NULL); +} + +/** + * gst_buffer_get_analytics_batch_meta: + * @buffer: A #GstBuffer + * + * Gets the #GstAnalyticsBatchMeta from a buffer + * + * Returns: (nullable)(transfer none): The #GstAnalyticsBatchMeta if there is one + * + * Since: 1.28 + */ +GstAnalyticsBatchMeta * +gst_buffer_get_analytics_batch_meta (GstBuffer * buffer) +{ + return (GstAnalyticsBatchMeta *) gst_buffer_get_meta (buffer, + GST_ANALYTICS_BATCH_META_API_TYPE); +} + +/** + * gst_analytics_batch_stream_get_stream_id: + * @stream: A #GstAnalyticsBatchStream + * + * Gets the current stream id from a stream + * + * Returns: (nullable) (transfer none): The stream id if there is any + * + * Since: 1.28 + */ +const gchar * +gst_analytics_batch_stream_get_stream_id (GstAnalyticsBatchStream * stream) +{ + g_return_val_if_fail (stream != NULL, NULL); + + for (gsize i = 0; i < stream->n_sticky_events; i++) { + GstEvent *event = stream->sticky_events[i]; + + if (GST_EVENT_TYPE (event) == GST_EVENT_STREAM_START) { + const gchar *stream_id; + + gst_event_parse_stream_start (event, &stream_id); + + return stream_id; + } + } + + return NULL; +} + +/** + * gst_analytics_batch_stream_get_caps: + * @stream: A #GstAnalyticsBatchStream + * + * Gets the #GstCaps from a stream + * + * Returns: (nullable) (transfer none): The #GstCaps if there are any + * + * Since: 1.28 + */ +GstCaps * +gst_analytics_batch_stream_get_caps (GstAnalyticsBatchStream * stream) +{ + g_return_val_if_fail (stream != NULL, NULL); + + for 
(gsize i = 0; i < stream->n_sticky_events; i++) { + GstEvent *event = stream->sticky_events[i]; + + if (GST_EVENT_TYPE (event) == GST_EVENT_CAPS) { + GstCaps *caps; + + gst_event_parse_caps (event, &caps); + + return caps; + } + } + + return NULL; +} + +/** + * gst_analytics_batch_stream_get_segment: + * @stream: A #GstAnalyticsBatchStream + * + * Gets the #GstSegment from a stream + * + * Returns: (nullable) (transfer none): The #GstSegment if there is one + * + * Since: 1.28 + */ +const GstSegment * +gst_analytics_batch_stream_get_segment (GstAnalyticsBatchStream * stream) +{ + g_return_val_if_fail (stream != NULL, NULL); + + for (gsize i = 0; i < stream->n_sticky_events; i++) { + GstEvent *event = stream->sticky_events[i]; + + if (GST_EVENT_TYPE (event) == GST_EVENT_SEGMENT) { + const GstSegment *segment; + + gst_event_parse_segment (event, &segment); + + return segment; + } + } + + return NULL; +}
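A short consumer-side sketch of the accessors added above. This is a hypothetical usage example, not part of the patch: how the buffer is obtained (pad probe, appsink, etc.) is an application choice, and it assumes a GStreamer build >= 1.28 with the analytics library.

```c
/* Minimal sketch: inspect a GstAnalyticsBatchMeta attached to a buffer.
 * All accessors return (transfer none), so nothing needs to be unreffed. */
#include <gst/gst.h>
#include <gst/analytics/gstanalyticsbatchmeta.h>

static void
inspect_batch (GstBuffer * buffer)
{
  GstAnalyticsBatchMeta *meta = gst_buffer_get_analytics_batch_meta (buffer);

  if (meta == NULL)
    return;                     /* buffer carries no batch meta */

  for (gsize i = 0; i < meta->n_streams; i++) {
    GstAnalyticsBatchStream *stream = &meta->streams[i];
    const gchar *stream_id = gst_analytics_batch_stream_get_stream_id (stream);
    GstCaps *caps = gst_analytics_batch_stream_get_caps (stream);

    g_print ("stream %u (%s): %" G_GSIZE_FORMAT " objects\n",
        stream->index, stream_id ? stream_id : "unknown", stream->n_objects);

    if (caps) {
      gchar *s = gst_caps_to_string (caps);
      g_print ("  caps: %s\n", s);
      g_free (s);
    }
  }
}
```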
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/analytics/gstanalyticsbatchmeta.h
Added
@@ -0,0 +1,153 @@ +/* + * GStreamer + * + * Copyright (C) 2025 Sebastian Dröge <sebastian@centricular.com> + * + * gstanalyticsbatchmeta.h + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#ifndef __GST_ANALYTICS_BATCH_META_H__ +#define __GST_ANALYTICS_BATCH_META_H__ + +#include <gst/gst.h> +#include <gst/analytics/analytics-meta-prelude.h> + +G_BEGIN_DECLS + +/** + * GstAnalyticsBatchStream: + * @index: Index of the stream in the meta's stream array + * @sticky_events: (array length=n_sticky_events): The sticky events stored before any of the mini objects in the @objects field are processed + * @n_sticky_events: Number of sticky events + * @objects: (nullable) (array length=n_objects): #GstMiniObject in this batch for this stream. 
Those are serialized mini objects: buffers, bufferlists and serialized events + * @n_objects: Number of objects + * + * Since: 1.28 + */ +typedef struct _GstAnalyticsBatchStream { + guint index; + + GstEvent **sticky_events; + gsize n_sticky_events; + + GstMiniObject **objects; + gsize n_objects; + + /* <private> */ + gpointer padding[GST_PADDING]; +} GstAnalyticsBatchStream; + +/** + * GstAnalyticsBatchMeta: + * @meta: parent + * @streams: (nullable) (array length=n_streams): #GstAnalyticsBatchStream for this batch + * @n_streams: Number of streams + * + * This meta represents a batch of buffers from one or more streams together + * with the relevant events to be able to interpret the buffers and to be able + * to reconstruct the original streams. + * + * When used for multiple streams and batching them temporally, caps of type + * `multistream/x-analytics-batch(meta:GstAnalyticsBatchMeta)` should be used, + * with the original caps of each stream in an array-typed `streams` field. The + * original caps of each stream might be extended by additional fields and the + * order of the streams in the array corresponds to the order of the @streams + * array of the meta. In this case, empty buffers would be used without any + * #GstMemory. + * + * When used for a single stream, the original caps might be used together with + * the `meta:GstAnalyticsBatchMeta` caps feature and potentially extended by + * additional fields to describe the kind of batching and its configuration, + * e.g. that each batch is made of 25% overlapping 320x320 slices of the + * original video frame. + * + * The timestamp, duration and other metadata of each batch can be retrieved + * from the parent buffer of this meta. 
+ * + * Since: 1.28 + */ +typedef struct _GstAnalyticsBatchMeta +{ + GstMeta meta; + + GstAnalyticsBatchStream *streams; + gsize n_streams; +} GstAnalyticsBatchMeta; + +/** + * GST_ANALYTICS_BATCH_META_API_TYPE: + * + * The Analytics Batch Meta API type + * + * Since: 1.28 + */ +#define GST_ANALYTICS_BATCH_META_API_TYPE \ + (gst_analytics_batch_meta_api_get_type()) + +/** + * GST_ANALYTICS_BATCH_META_INFO: (skip) + * + * The Analytics Batch Meta API Info + * + * Since: 1.28 + */ +#define GST_ANALYTICS_BATCH_META_INFO \ + (gst_analytics_batch_meta_get_info()) + +/** + * GST_CAPS_FEATURE_META_GST_ANALYTICS_BATCH_META: + * + * The caps feature to be used on streams that make use of this meta. + * + * Since: 1.28 + */ +#define GST_CAPS_FEATURE_META_GST_ANALYTICS_BATCH_META "meta:GstAnalyticsBatchMeta" + +GST_ANALYTICS_META_API +GType gst_analytics_batch_meta_api_get_type (void); + +GST_ANALYTICS_META_API +const GstMetaInfo *gst_analytics_batch_meta_get_info (void); + +GST_ANALYTICS_META_API +GstAnalyticsBatchMeta * +gst_buffer_add_analytics_batch_meta (GstBuffer * buffer); + +GST_ANALYTICS_META_API +GstAnalyticsBatchMeta * +gst_buffer_get_analytics_batch_meta (GstBuffer * buffer); + +GST_ANALYTICS_META_API +const gchar * +gst_analytics_batch_stream_get_stream_id (GstAnalyticsBatchStream * stream); + +GST_ANALYTICS_META_API +GstCaps * +gst_analytics_batch_stream_get_caps (GstAnalyticsBatchStream * stream); + +GST_ANALYTICS_META_API +const GstSegment * +gst_analytics_batch_stream_get_segment (GstAnalyticsBatchStream * stream); + +G_END_DECLS + +#endif
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/analytics/gstanalyticsmeta.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/analytics/gstanalyticsmeta.c
Changed
@@ -44,8 +44,8 @@ * Since: 1.24 */ -GST_DEBUG_CATEGORY_STATIC (an_relation_meta_debug); -#define GST_CAT_AN_RELATION an_relation_meta_debug +G_GNUC_INTERNAL GST_DEBUG_CATEGORY (gst_analytics_relation_meta_debug); +#define GST_CAT_DEFAULT gst_analytics_relation_meta_debug /* * GstAnalyticsRelatableMtdData: @@ -121,7 +121,7 @@ GstAnalyticsRelatableMtdData *rv; g_return_val_if_fail (meta, NULL); if (an_meta_id >= meta->rel_order) { - GST_CAT_ERROR (GST_CAT_AN_RELATION, "Invalid parameter"); + GST_ERROR ("Invalid parameter"); return NULL; } rv = (GstAnalyticsRelatableMtdData *) @@ -184,7 +184,7 @@ instance->id); if (rlt == NULL) { - GST_CAT_ERROR (GST_CAT_AN_RELATION, "Invalid parameter"); + GST_ERROR ("Invalid parameter"); return 0; } @@ -205,8 +205,6 @@ { GstAnalyticsMtdImpl *impl = (GstAnalyticsMtdImpl *) type; - g_return_val_if_fail (impl != NULL, NULL); - if (type == GST_ANALYTICS_MTD_TYPE_ANY) return "ANY"; else @@ -298,7 +296,7 @@ if (g_once_init_enter (&type)) { GType newType = gst_meta_api_type_register ("GstAnalyticsRelationMetaAPI", tags); - GST_DEBUG_CATEGORY_INIT (an_relation_meta_debug, "anrelmeta", + GST_DEBUG_CATEGORY_INIT (gst_analytics_relation_meta_debug, "anrelmeta", GST_DEBUG_FG_BLACK, "Content analysis meta relations meta"); g_once_init_leave (&type, newType); } @@ -315,8 +313,7 @@ g_return_val_if_fail (params != NULL, FALSE); - GST_CAT_TRACE (GST_CAT_AN_RELATION, "Relation order:%" G_GSIZE_FORMAT, - *((gsize *) params)); + GST_TRACE ("Relation order:%" G_GSIZE_FORMAT, *((gsize *) params)); rmeta->rel_order_increment = rel_params->initial_relation_order; rmeta->rel_order = rmeta->rel_order_increment; @@ -332,8 +329,7 @@ if (buffer->pool) GST_META_FLAG_SET (meta, GST_META_FLAG_POOLED); - GST_CAT_DEBUG (GST_CAT_AN_RELATION, - "Content analysis meta-relation meta(%p, order=%" G_GSIZE_FORMAT + GST_DEBUG ("Content analysis meta-relation meta(%p, order=%" G_GSIZE_FORMAT ") created for buffer(%p)", (gpointer) rmeta, *(gsize *) params, (gpointer) buffer); 
return TRUE; @@ -344,8 +340,7 @@ { GstAnalyticsRelationMeta *rmeta = (GstAnalyticsRelationMeta *) meta; - GST_CAT_TRACE (GST_CAT_AN_RELATION, - "Content analysis meta-data(%p) freed for buffer(%p)", + GST_TRACE ("Content analysis meta-data(%p) freed for buffer(%p)", (gpointer) rmeta, (gpointer) buffer); gst_analytics_relation_meta_clear (buffer, meta); @@ -357,89 +352,100 @@ static gboolean gst_analytics_relation_meta_transform (GstBuffer * transbuf, - GstMeta * meta, GstBuffer * buffer, GQuark type, gpointer data) + GstMeta * src_meta, GstBuffer * buffer, GQuark type, gpointer data) { + GstAnalyticsRelationMeta *src_rmeta = (GstAnalyticsRelationMeta *) src_meta; + GstAnalyticsRelationMeta *dst_rmeta = (GstAnalyticsRelationMeta *) + gst_buffer_get_meta (transbuf, GST_ANALYTICS_RELATION_META_API_TYPE); + guint i; + guint *free_match = NULL; + guint *match = NULL; + + if (!GST_META_TRANSFORM_IS_COPY (type) && + !GST_VIDEO_META_TRANSFORM_IS_SCALE (type) && + !GST_VIDEO_META_TRANSFORM_IS_MATRIX (type)) + return FALSE; - GST_CAT_TRACE (GST_CAT_AN_RELATION, "meta transform %s", - g_quark_to_string (type)); + if (dst_rmeta == NULL) { + GstAnalyticsRelationMetaInitParams init_params = { + src_rmeta->rel_order, src_rmeta->max_size + }; - if (GST_META_TRANSFORM_IS_COPY (type) || - GST_VIDEO_META_TRANSFORM_IS_SCALE (type)) { - GstAnalyticsRelationMeta *rmeta = (GstAnalyticsRelationMeta *) meta; - GstAnalyticsRelationMeta *new = (GstAnalyticsRelationMeta *) - gst_buffer_get_meta (transbuf, GST_ANALYTICS_RELATION_META_API_TYPE); + GST_TRACE ("meta transform creating new meta rel_order:%" G_GSIZE_FORMAT + " max_size:%" G_GSIZE_FORMAT, + init_params.initial_relation_order, init_params.initial_buf_size); - if (new == NULL) { - GstAnalyticsRelationMetaInitParams init_params = { - rmeta->rel_order, rmeta->max_size - }; + dst_rmeta = + gst_buffer_add_analytics_relation_meta_full (transbuf, &init_params); + } - GST_CAT_TRACE (GST_CAT_AN_RELATION, - "meta transform creating new meta 
rel_order:%" G_GSIZE_FORMAT - " max_size:%" G_GSIZE_FORMAT, - init_params.initial_relation_order, init_params.initial_buf_size); - new = - gst_buffer_add_analytics_relation_meta_full (transbuf, &init_params); - } + /* If it's under 2K, do it on the stack, otherwise, use the heap */ + /* Our default is 5 */ + if (src_rmeta->length < 2048 / sizeof (guint)) + match = g_alloca (src_rmeta->length * sizeof (guint)); + else + free_match = match = g_malloc (src_rmeta->length * sizeof (guint)); - if (new->offset == 0) { - guint i; + for (i = 0; i < src_rmeta->length; i++) { + GstAnalyticsRelatableMtdData *src_mtd_data = + (GstAnalyticsRelatableMtdData *) + (src_rmeta->mtd_data_lookupi + src_rmeta->analysis_results); + GstAnalyticsMtd dst_mtd; - if (new->rel_order < rmeta->rel_order) { - g_free (new->adj_mat); - g_free (new->mtd_data_lookup); - new->adj_mat = gst_analytics_relation_adj_mat_create (rmeta->rel_order); - new->mtd_data_lookup = g_malloc0 (sizeof (gpointer) * rmeta->rel_order); - new->rel_order = rmeta->rel_order; - } + if (src_mtd_data->impl == NULL) { + matchi = G_MAXUINT; + continue; + } - if (new->max_size < rmeta->max_size) { - g_free (new->analysis_results); - new->analysis_results = g_malloc (rmeta->max_size); - new->max_size = rmeta->max_size; - } + gpointer dst_data = gst_analytics_relation_meta_add_mtd (dst_rmeta, + src_mtd_data->impl, src_mtd_data->size, &dst_mtd); + + memcpy (dst_data, src_mtd_data->data, src_mtd_data->size); - if (rmeta->rel_order == new->rel_order) { - memcpy (new->adj_mat + new->rel_order, rmeta->adj_mat + - rmeta->rel_order, rmeta->rel_order * rmeta->rel_order); + if (src_mtd_data->impl->mtd_meta_transform) { + if (src_mtd_data->impl->mtd_meta_transform (transbuf, &dst_mtd, buffer, + type, data)) { + matchi = dst_mtd.id; } else { - /* When destination adj_mat has a higher order than source we need - * to copy by row to have the correct alignment */ - for (gsize r = 0; r < rmeta->rel_order; r++) { - memcpy (new->adj_matr, 
rmeta->adj_mat[r], rmeta->rel_order); - } - } - memcpy (new->mtd_data_lookup, rmeta->mtd_data_lookup, - sizeof (gpointer) * rmeta->rel_order); - memcpy (new->analysis_results, rmeta->analysis_results, rmeta->offset); - - new->length = rmeta->length; - new->next_id = rmeta->next_id; - new->offset = rmeta->offset; - - for (i = 0; i < new->length; i++) { - GstAnalyticsRelatableMtdData *rlt_mtd_data = - (GstAnalyticsRelatableMtdData *) (new->mtd_data_lookup[i] + - new->analysis_results); - if (rlt_mtd_data->impl && rlt_mtd_data->impl->mtd_meta_transform) { - GstAnalyticsMtd transmtd; - transmtd.id = rlt_mtd_data->id; - transmtd.meta = new; - rlt_mtd_data->impl->mtd_meta_transform (transbuf, &transmtd, buffer, - type, data); - } + GstAnalyticsRelatableMtdData *dst_mtd_data = + (GstAnalyticsRelatableMtdData *) + (dst_rmeta->mtd_data_lookup[dst_mtd.id] + + dst_rmeta->analysis_results); + + dst_mtd_data->impl = NULL; + match[i] = G_MAXUINT; } - return TRUE; } else { - g_warning ("Trying to copy GstAnalyticsRelationMeta into non-empty meta"); - g_debug ("ofs:%" G_GSIZE_FORMAT, new->offset); + match[i] = dst_mtd.id; + } + } + + for (i = 0; i < src_rmeta->length; i++) { + GstAnalyticsRelatableMtdData *src_mtd_data_i = + (GstAnalyticsRelatableMtdData *) + (src_rmeta->mtd_data_lookup[i] + src_rmeta->analysis_results); + guint j; + + if (match[i] == G_MAXUINT) + continue; + + for (j = 0; j < src_rmeta->length; j++) { + GstAnalyticsRelatableMtdData *src_mtd_data_j = + (GstAnalyticsRelatableMtdData *) + (src_rmeta->mtd_data_lookup[j] + src_rmeta->analysis_results); - return FALSE + if (match[j] == G_MAXUINT) + continue; + + dst_rmeta->adj_mat[match[i]][match[j]] |= + src_rmeta->adj_mat[src_mtd_data_i->id][src_mtd_data_j->id]; } } - return FALSE + g_free (free_match); + + return TRUE; } static void @@ -536,9 +542,8 @@ memset (level, -1, sizeof (gint) * adj_mat_order); memset (parent, -1, sizeof (gint) * adj_mat_order); - GST_CAT_TRACE (GST_CAT_AN_RELATION, - "Performing bfs to find relation(%x) starting from 
%d with less than %" - G_GSIZE_FORMAT " edges from start", edge_mask, start, max_span); + GST_TRACE ("Performing bfs to find relation(%x) starting from %d with less" + " than %" G_GSIZE_FORMAT " edges from start", edge_mask, start, max_span); // vertex that has a relation with itself if (adj_mat[start][start] & edge_mask) { @@ -555,8 +560,7 @@ if (level[j] == -1) { level[j] = i; parent[j] = GPOINTER_TO_INT (iter->data); - GST_CAT_TRACE (GST_CAT_AN_RELATION, "Parent of %" G_GSIZE_FORMAT - " is %d", j, parent[j]); + GST_TRACE ("Parent of %" G_GSIZE_FORMAT " is %d", j, parent[j]); next_frontier = g_slist_prepend (next_frontier, GINT_TO_POINTER ((gint) j)); } @@ -610,19 +614,16 @@ if (meta->rel_order > an_meta_first_id && meta->rel_order > an_meta_second_id) { types = meta->adj_mat[an_meta_first_id][an_meta_second_id]; } else { - GST_CAT_DEBUG (GST_CAT_AN_RELATION, - "an_meta_first(%u) and an_meta_second(%u) must be inferior to %" + GST_DEBUG ("an_meta_first(%u) and an_meta_second(%u) must be inferior to %" G_GSIZE_FORMAT, an_meta_first_id, an_meta_second_id, meta->rel_order); if (an_meta_first_id >= meta->rel_order) { - GST_CAT_ERROR (GST_CAT_AN_RELATION, - "an_meta_first(%u) must be from a call to " + GST_ERROR ("an_meta_first(%u) must be from a call to " "gst_analytics_mtd_get_id(...)", an_meta_first_id); } if (an_meta_second_id >= meta->rel_order) { - GST_CAT_ERROR (GST_CAT_AN_RELATION, - "an_meta_second(%u) must be from a call to " + GST_ERROR ("an_meta_second(%u) must be from a call to " "gst_analytics_mtd_get_id(...)", an_meta_second_id); } } @@ -653,13 +654,12 @@ g_return_val_if_fail (meta, FALSE); if (an_meta_first_id >= meta->rel_order || an_meta_second_id >= meta->rel_order) { - GST_CAT_ERROR (GST_CAT_AN_RELATION, "Invalid parameter"); + GST_ERROR ("Invalid parameter"); return FALSE; } meta->adj_mat[an_meta_first_id][an_meta_second_id] = type; - GST_CAT_TRACE (GST_CAT_AN_RELATION, - "Relation %x set between %u and %u", - type, an_meta_first_id, an_meta_second_id); + GST_TRACE 
("Relation %x set between %u and %u", type, an_meta_first_id, + an_meta_second_id); return TRUE; } @@ -707,18 +707,15 @@ g_return_val_if_fail (rmeta, FALSE); if (!rmeta) { - GST_CAT_ERROR (GST_CAT_AN_RELATION, "Invalid parameter"); + GST_ERROR ("Invalid parameter"); return EINVAL; } adj_mat_order = rmeta->rel_order; if (adj_mat_order < (an_meta_first_id + 1) || adj_mat_order < (an_meta_second_id + 1)) { - - GST_CAT_DEBUG (GST_CAT_AN_RELATION, - "Testing relation existence for analysis-meta that have no index in " - "adj-mat."); - + GST_DEBUG ("Testing relation existence for analysis-meta that have no" + " index in adj-mat.") ; return FALSE; } @@ -758,8 +755,7 @@ (const guint8 **) adj_mat, adj_mat_order, cond_types, span, level, parent); - GST_CAT_TRACE (GST_CAT_AN_RELATION, "Adj order:%" G_GSIZE_FORMAT, - adj_mat_order); + GST_TRACE ("Adj order:%" G_GSIZE_FORMAT, adj_mat_order); rv = level[an_meta_second_id] != -1; if (rv && relations_path) { @@ -783,7 +779,7 @@ g_array_index (path, gint, --path_left) = an_meta_second_id; //path = g_slist_prepend (path, GINT_TO_POINTER (an_meta_second_id)); while (i != -1 && i != an_meta_second_id) { - GST_CAT_TRACE (GST_CAT_AN_RELATION, "Relation parent of %d", i); + GST_TRACE ("Relation parent of %d", i); g_array_index (path, gint, --path_left) = i; //path = g_slist_prepend (path, GINT_TO_POINTER (i)); i = parent[i]; @@ -800,10 +796,8 @@ g_free (parent); } - GST_CAT_TRACE (GST_CAT_AN_RELATION, - "Relation %x between %d and %d %s", - cond_types, an_meta_first_id, an_meta_second_id, - rv ? "exist" : "does not exist"); + GST_TRACE ("Relation %x between %d and %d %s", cond_types, an_meta_first_id, + an_meta_second_id, rv ? 
"exist" : "does not exist"); return rv; } @@ -902,8 +896,7 @@ gpointer mem; guint8 **new_adj_mat; gsize new_mem_cap, new_rel_order; - GST_CAT_TRACE (GST_CAT_AN_RELATION, "Adding relatable metadata to rmeta %p", - meta); + GST_TRACE ("Adding relatable metadata to rmeta %p", meta); object_size = sizeof (GstAnalyticsRelatableMtdData); object_size += sizeof (gpointer) * (size / sizeof (gpointer)); @@ -950,12 +943,10 @@ rlt_mtd->id = dest->id; rlt_mtd->meta = meta; } - GST_CAT_TRACE (GST_CAT_AN_RELATION, "Add %p relatable type=%s (%" - G_GSIZE_FORMAT " / %" G_GSIZE_FORMAT ").", dest, - impl->name, new_size, meta->max_size); + GST_TRACE ("Add %p relatable type=%s (%" G_GSIZE_FORMAT " / %" + G_GSIZE_FORMAT ").", dest, impl->name, new_size, meta->max_size); } else { - GST_CAT_ERROR (GST_CAT_AN_RELATION, - "Failed to add relatable, out-of-space (%" G_GSIZE_FORMAT " / %" + GST_ERROR ("Failed to add relatable, out-of-space (%" G_GSIZE_FORMAT " / %" G_GSIZE_FORMAT ").", new_size, meta->max_size); } return &dest->data[0]; @@ -988,7 +979,7 @@ rlt->meta = NULL; if (an_meta_id >= meta->length) { - GST_CAT_ERROR (GST_CAT_AN_RELATION, "Invalid parameter"); + GST_ERROR ("Invalid parameter"); return FALSE; } @@ -1055,8 +1046,7 @@ GstAnalyticsRelatableMtdData *rlt_mtd_data = NULL; gsize i; - GST_CAT_TRACE (GST_CAT_AN_RELATION, - "Looking for %s related to %u by %d", + GST_TRACE ("Looking for %s related to %u by %d", gst_analytics_mtd_type_get_name (type), an_meta_id, relation_type); g_return_val_if_fail (rmeta != NULL, FALSE); @@ -1075,9 +1065,8 @@ adj_mat_order = meta->rel_order; if (adj_mat_order < (an_meta_id + 1)) { - GST_CAT_DEBUG (GST_CAT_AN_RELATION, - "Testing relation existence for analysis-meta that have no index in " - "adj-mat."); + GST_DEBUG ("Testing relation existence for analysis-meta that have no" + " index in adj-mat."); return FALSE; } @@ -1093,8 +1082,7 @@ if (state) { *state = GSIZE_TO_POINTER (G_MINSSIZE | i); } - GST_CAT_TRACE (GST_CAT_AN_RELATION, "Found match at 
%" G_GSIZE_FORMAT, - i); + GST_TRACE ("Found match at %" G_GSIZE_FORMAT, i); break; } rlt_mtd_data = NULL;
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/analytics/gstanalyticsobjectdetectionmtd.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/analytics/gstanalyticsobjectdetectionmtd.c
Changed
@@ -28,6 +28,9 @@ #include <gst/video/video.h> #include <math.h> +GST_DEBUG_CATEGORY_EXTERN (gst_analytics_relation_meta_debug); +#define GST_CAT_DEFAULT gst_analytics_relation_meta_debug + /** * SECTION:gstanalyticsobjectdetectionmtd * @title: GstAnalyticsODMtd @@ -75,7 +78,43 @@ gst_analytics_od_mtd_meta_transform (GstBuffer * transbuf, GstAnalyticsMtd * transmtd, GstBuffer * buffer, GQuark type, gpointer data) { - if (GST_VIDEO_META_TRANSFORM_IS_SCALE (type)) { + if (GST_VIDEO_META_TRANSFORM_IS_MATRIX (type)) { + GstVideoMetaTransformMatrix *trans = data; + GstAnalyticsODMtdData *oddata = + gst_analytics_relation_meta_get_mtd_data (transmtd->meta, + transmtd->id); + GstVideoRectangle rect = { oddata->x, oddata->y, oddata->w, oddata->h }; + + gboolean is_diagonal = trans->matrix[0][1] == 0 && trans->matrix[1][0] == 0; + gboolean is_antidiagonal = trans->matrix[0][0] == 0 && + trans->matrix[1][1] == 0; + + if (!is_diagonal && !is_antidiagonal) { + GST_WARNING ("Transformation not possible from buffer %" GST_PTR_FORMAT + " to buffer %" GST_PTR_FORMAT, buffer, transbuf); + return FALSE; + } else if (is_diagonal) { + if (trans->matrix[0][0] == 0 || trans->matrix[1][1] == 0) { + GST_WARNING ("Transformation not possible from buffer %" GST_PTR_FORMAT + " to buffer %" GST_PTR_FORMAT, buffer, transbuf); + return FALSE; + } + } else { + if (trans->matrix[0][1] == 0 || trans->matrix[1][0] == 0) { + GST_WARNING ("Transformation not possible from buffer %" GST_PTR_FORMAT + " to buffer %" GST_PTR_FORMAT, buffer, transbuf); + return FALSE; + } + } + + if (!gst_video_meta_transform_matrix_rectangle (trans, &rect)) + return FALSE; + + oddata->x = rect.x; + oddata->y = rect.y; + oddata->w = rect.w; + oddata->h = rect.h; + } else if (GST_VIDEO_META_TRANSFORM_IS_SCALE (type)) { GstVideoMetaTransform *trans = data; gint ow, oh, nw, nh; GstAnalyticsODMtdData *oddata; @@ -99,6 +138,8 @@ oddata->h *= nh; oddata->h /= oh; + } else { + return FALSE; } return TRUE;
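The matrix branch above only accepts axis-aligned transforms: a diagonal linear part (pure scale) or an antidiagonal one (90/270-degree rotations), and rejects matrices whose accepted entries are zero (non-invertible). The predicate can be sketched standalone, independent of the GStreamer types; `matrix_is_supported` is a hypothetical helper name for illustration:

```c
#include <stdbool.h>

/* Accept a 2x2 linear part only if it is diagonal (pure scale) or
 * antidiagonal (90/270-degree rotation), with the relevant entries
 * non-zero so the transform stays invertible. */
static bool
matrix_is_supported (double m[2][2])
{
  bool is_diagonal = m[0][1] == 0 && m[1][0] == 0;
  bool is_antidiagonal = m[0][0] == 0 && m[1][1] == 0;

  if (is_diagonal)
    return m[0][0] != 0 && m[1][1] != 0;   /* scale factors must be non-zero */
  if (is_antidiagonal)
    return m[0][1] != 0 && m[1][0] != 0;   /* rotation entries must be non-zero */
  return false;                            /* shear or general affine: rejected */
}
```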
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/analytics/gstanalyticssegmentationmtd.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/analytics/gstanalyticssegmentationmtd.c
Changed
@@ -26,6 +26,9 @@ #include "gstanalyticssegmentationmtd.h" #include <gst/video/video-info.h> +GST_DEBUG_CATEGORY_EXTERN (gst_analytics_relation_meta_debug); +#define GST_CAT_DEFAULT gst_analytics_relation_meta_debug + /** * SECTION: gstanalyticssegmentationmtd * @title: GstAnalyticsSegmentationMtd @@ -390,12 +393,15 @@ { const gsize region_ids_size = sizeof (guint) * region_count; const gsize size = sizeof (GstAnalyticsSegMtdData) + region_ids_size; + + g_return_val_if_fail (instance != NULL, FALSE); +#ifndef G_DISABLE_CHECKS GstVideoMeta *vmeta = gst_buffer_get_video_meta (buffer); g_return_val_if_fail (vmeta != NULL, FALSE); - g_return_val_if_fail (instance != NULL, FALSE); g_return_val_if_fail (vmeta->format == GST_VIDEO_FORMAT_GRAY8 || vmeta->format == GST_VIDEO_FORMAT_GRAY16_BE || vmeta->format == GST_VIDEO_FORMAT_GRAY16_LE, FALSE); +#endif GstAnalyticsSegMtdData *mtddata = NULL; mtddata = @@ -429,11 +435,34 @@ gst_analytics_segmentation_mtd_transform (GstBuffer * transbuf, GstAnalyticsMtd * transmtd, GstBuffer * buffer, GQuark type, gpointer data) { - GstAnalyticsSegMtdData *segdata; - if (GST_META_TRANSFORM_IS_COPY (type)) { - segdata = gst_analytics_relation_meta_get_mtd_data (transmtd->meta, - transmtd->id); + GstAnalyticsSegMtdData *segdata = + gst_analytics_relation_meta_get_mtd_data (transmtd->meta, + transmtd->id); + + if (transbuf != buffer) gst_buffer_ref (segdata->masks); + + if (GST_VIDEO_META_TRANSFORM_IS_MATRIX (type)) { + GstVideoMetaTransformMatrix *trans = data; + GstVideoRectangle rect = { segdata->masks_loc_x, segdata->masks_loc_y, + segdata->masks_loc_w, segdata->masks_loc_h + }; + + if (trans->matrix[0][1] != 0 || trans->matrix[1][0] != 0 || + trans->matrix[0][0] < 0 || trans->matrix[1][1] < 0) { + GST_WARNING ("Segmentation meta doesn't support rotations or flips," + " not copying from buffer %" GST_PTR_FORMAT " to buffer: %" + GST_PTR_FORMAT, buffer, transbuf); + return FALSE; + } + + if (!gst_video_meta_transform_matrix_rectangle (trans, &rect)) + 
return FALSE; + + segdata->masks_loc_x = rect.x; + segdata->masks_loc_y = rect.y; + segdata->masks_loc_w = rect.w; + segdata->masks_loc_h = rect.h; } else if (GST_VIDEO_META_TRANSFORM_IS_SCALE (type)) { GstVideoMetaTransform *trans = data; gint ow, oh, nw, nh; @@ -443,9 +472,6 @@ oh = GST_VIDEO_INFO_HEIGHT (trans->in_info); nh = GST_VIDEO_INFO_HEIGHT (trans->out_info); - segdata = gst_analytics_relation_meta_get_mtd_data (transmtd->meta, - transmtd->id); - segdata->masks_loc_x *= nw; segdata->masks_loc_x /= ow; @@ -458,9 +484,6 @@ segdata->masks_loc_h *= nh; segdata->masks_loc_h /= oh; - if (transbuf != buffer) { - gst_buffer_ref (segdata->masks); - } } return TRUE;
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/analytics/gstanalyticssegmentationmtd.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/analytics/gstanalyticssegmentationmtd.h
Changed
@@ -68,7 +68,7 @@ GstBuffer * gst_analytics_segmentation_mtd_get_mask (const GstAnalyticsSegmentationMtd * handle, gint * masks_loc_x, gint * masks_loc_y, guint * masks_loc_w, guint * - masks_loc_h); + masks_loc_h) G_GNUC_WARN_UNUSED_RESULT; GST_ANALYTICS_META_API gboolean
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/analytics/gstanalyticstensormtd.c
Added
@@ -0,0 +1,235 @@ +/* GStreamer + * Copyright (C) 2024 Collabora Ltd + * @author: Daniel Morin <daniel.morin@collabora.com> + * + * gstanalyticstensormtd.c + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gstanalyticstensormtd.h" +#include <gst/video/video.h> + +/** + * SECTION: gstanalyticstensormtd + * @title: GstAnalyticsTensorMtd + * @short_description: Analytics metadata to store a tensor inside a + * #GstAnalyticsRelationMeta + * @symbols: + * - GstAnalyticsTensorMtd + * @see_also: #GstAnalyticsMtd, #GstAnalyticsRelationMeta + * + * This type of metadata holds a tensor. It can be used to store tensors as + * analytics-meta for their ability to relate to each other. For example, in a + * multi-model analytics pipeline, we sometimes have one model's input match + * the output of another model. In this context it can be useful to keep the + * ancestry relation between the first tensor, output of the first inference, + * and the second tensor, output of the second inference. Another use-case for + * #GstAnalyticsTensorMtd is to transport tensors from an inference element to a + * post-processing element using a computing graph framework, like ONNX. 
+ * Essentially #GstAnalyticsTensorMtd is a GstBuffer encapsulated by an + * analytics-meta with additional parameters describing the tensor. + * + * Since: 1.28 + */ + +static void gst_analytics_tensor_mtd_clear (GstBuffer * buffer, + GstAnalyticsMtd * mtd); + +static gboolean +gst_analytics_tensor_mtd_transform (GstBuffer * transbuf, + GstAnalyticsMtd * transmtd, GstBuffer * buffer, GQuark type, gpointer data); + +static const GstAnalyticsMtdImpl tensor_impl = { + "tensor", + gst_analytics_tensor_mtd_transform, + gst_analytics_tensor_mtd_clear +}; + + +typedef GstTensor GstAnalyticsTensorMtdData; + +/** + * gst_analytics_tensor_mtd_get_mtd_type: + * + * Get an id that represents the tensor metadata type + * + * Returns: Opaque id of the #GstAnalyticsMtd type + * + * Since: 1.28 + */ +GstAnalyticsMtdType +gst_analytics_tensor_mtd_get_mtd_type (void) +{ + return (GstAnalyticsMtdType) & tensor_impl; +} + +/** + * gst_analytics_tensor_mtd_get_tensor: + * @instance: Instance of #GstAnalyticsTensorMtd + * + * Get the tensor + * + * Returns: (transfer none): a #GstTensor + * + * Since: 1.28 + */ +GstTensor * +gst_analytics_tensor_mtd_get_tensor (const GstAnalyticsTensorMtd * instance) +{ + GstAnalyticsTensorMtdData *mtddata; + + g_return_val_if_fail (instance, NULL); + + mtddata = + gst_analytics_relation_meta_get_mtd_data (instance->meta, instance->id); + g_return_val_if_fail (mtddata != NULL, NULL); + + return mtddata; +} + +/** + * gst_analytics_relation_meta_add_tensor_mtd: + * @meta: Instance of #GstAnalyticsRelationMeta + * @num_dims: The number of dimensions in the tensor + * @tensor_mtd: (out)(nullable): Handle updated with the newly added tensor mtd + * + * Add a new #GstAnalyticsTensorMtd holding a #GstTensor to @meta. The + * #GstTensor still needs to be filled in. 
+ * + * Returns: Added successfully + * + * Since: 1.28 + */ +gboolean +gst_analytics_relation_meta_add_tensor_mtd (GstAnalyticsRelationMeta + * meta, gsize num_dims, GstAnalyticsTensorMtd * tensor_mtd) +{ + GstTensor *tensor; + + tensor = gst_analytics_relation_meta_add_mtd (meta, &tensor_impl, + sizeof (GstAnalyticsTensorMtdData) + (num_dims * sizeof (gsize)), + tensor_mtd); + + if (tensor == NULL) + return FALSE; + + memset (tensor, 0, sizeof (GstTensor)); + tensor->num_dims = num_dims; + + return TRUE; +} + +/** + * gst_analytics_relation_meta_add_tensor_mtd_simple: + * @meta: Instance of #GstAnalyticsRelationMeta + * @id: semantically identifies the contents of the tensor + * @data_type: #GstTensorDataType of tensor data + * @data: (transfer full): #GstBuffer holding tensor data + * @dims_order: Indicates tensor dimension indexing order + * @num_dims: number of tensor dimensions + * @dims: (array length=num_dims): size of tensor in each dimension. + * A value of 0 means the dimension is dynamic. + * @tensor_mtd: (out)(nullable): Handle updated with the newly added tensor mtd + * + * Add a new #GstAnalyticsTensorMtd holding a #GstTensor to @meta. 
+ * + * Returns: TRUE if the mtd was added successfully + * + * Since: 1.28 + */ +gboolean +gst_analytics_relation_meta_add_tensor_mtd_simple (GstAnalyticsRelationMeta + * meta, GQuark id, GstTensorDataType data_type, GstBuffer * data, + GstTensorDimOrder dims_order, gsize num_dims, gsize * dims, + GstAnalyticsTensorMtd * tensor_mtd) +{ + GstTensor *tensor; + GstAnalyticsTensorMtd mtd; + + if (!gst_analytics_relation_meta_add_tensor_mtd (meta, num_dims, &mtd)) + return FALSE; + + tensor = gst_analytics_relation_meta_get_mtd_data (meta, mtd.id); + + if (!gst_tensor_set_simple (tensor, id, data_type, data, + dims_order, num_dims, dims)) + return FALSE; + + if (tensor_mtd) + *tensor_mtd = mtd; + + return TRUE; +} + +static void +gst_analytics_tensor_mtd_clear (GstBuffer * buffer, GstAnalyticsMtd * mtd) +{ + GstAnalyticsTensorMtdData *tensor; + gsize num_dims; + + tensor = gst_analytics_relation_meta_get_mtd_data (mtd->meta, mtd->id); + g_assert (tensor); + + num_dims = tensor->num_dims; + if (tensor->data) + gst_buffer_unref (tensor->data); + + memset (tensor, 0, sizeof (GstTensor)); + tensor->num_dims = num_dims; +} + +static gboolean +gst_analytics_tensor_mtd_transform (GstBuffer * transbuf, + GstAnalyticsMtd * transmtd, GstBuffer * buffer, GQuark type, gpointer data) +{ + GstTensor *tensor; + + tensor = gst_analytics_relation_meta_get_mtd_data (transmtd->meta, + transmtd->id); + + if (tensor->data) + tensor->data = gst_buffer_ref (tensor->data); + + return TRUE; +} + +/** + * gst_analytics_relation_meta_get_tensor_mtd: + * @meta: Instance of #GstAnalyticsRelationMeta + * @an_meta_id: Id of #GstAnalyticsTensorMtd instance to retrieve + * @rlt: (out caller-allocates)(not nullable): Will be filled with relatable + * meta + * + * Fill @rlt if an analytics-meta with id == @an_meta_id exists in @meta + * instance, otherwise this method returns FALSE and @rlt is invalid. + * + * Returns: TRUE if successful. 
+ * + * Since: 1.28 + */ +gboolean +gst_analytics_relation_meta_get_tensor_mtd (GstAnalyticsRelationMeta * meta, + guint an_meta_id, GstAnalyticsTensorMtd * rlt) +{ + return gst_analytics_relation_meta_get_mtd (meta, an_meta_id, + gst_analytics_tensor_mtd_get_mtd_type (), (GstAnalyticsTensorMtd *) rlt); +}
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/analytics/gstanalyticstensormtd.h
Added
@@ -0,0 +1,73 @@ +/* GStreamer + * Copyright (C) 2024 Collabora Ltd + * @author: Daniel Morin <daniel.morin@collabora.com> + * + * gstanalyticstensormtd.h + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifndef __GST_ANALYTICS_TENSOR_MTD_H__ +#define __GST_ANALYTICS_TENSOR_MTD_H__ + +#include <gst/gst.h> +#include <gst/analytics/analytics-meta-prelude.h> +#include <gst/analytics/gstanalyticsmeta.h> +#include <gst/analytics/gsttensor.h> + +G_BEGIN_DECLS + +/** + * GstAnalyticsTensorMtd: + * @id: Instance identifier + * @meta: Instance of #GstAnalyticsRelationMeta where the analytics-metadata + * identified by @id is stored + * + * Handle containing data required to use gst_analytics_tensor_mtd APIs. + * This type is generally expected to be allocated on stack. 
+ * + * Since: 1.28 + */ +typedef struct _GstAnalyticsMtd GstAnalyticsTensorMtd; + + +GST_ANALYTICS_META_API +GstAnalyticsMtdType +gst_analytics_tensor_mtd_get_mtd_type (void); + +GST_ANALYTICS_META_API +GstTensor * +gst_analytics_tensor_mtd_get_tensor (const GstAnalyticsTensorMtd * instance); + +GST_ANALYTICS_META_API +gboolean +gst_analytics_relation_meta_add_tensor_mtd (GstAnalyticsRelationMeta * + instance, gsize num_dims, GstAnalyticsTensorMtd * tensor_mtd); + +GST_ANALYTICS_META_API +gboolean +gst_analytics_relation_meta_add_tensor_mtd_simple (GstAnalyticsRelationMeta * + instance, GQuark id, GstTensorDataType data_type, + GstBuffer * data, GstTensorDimOrder dims_order, gsize num_dims, + gsize * dims, GstAnalyticsTensorMtd * tensor_mtd); + +GST_ANALYTICS_META_API +gboolean +gst_analytics_relation_meta_get_tensor_mtd (GstAnalyticsRelationMeta * meta, + guint an_meta_id, GstAnalyticsTensorMtd * rlt); + +G_END_DECLS +#endif /* __GST_ANALYTICS_TENSOR_MTD_H__ */
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/analytics/gsttensor.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/analytics/gsttensor.c
Changed
@@ -91,11 +91,14 @@ * @data: (transfer full): #GstBuffer holding tensor data * @dims_order: Indicate tensor dimension indexing order * @num_dims: number of tensor dimensions - * @dims: (array length=num_dims): tensor dimensions. Value of 0 mean the - * dimension is dynamic. + * @dims: (array length=num_dims): size of tensor in each dimension. + * A value of 0 means the dimension is dynamic. * * Allocates a new #GstTensor of @dims_order ROW_MAJOR or COLUMN_MAJOR and - * with an interleaved layout + * with an interleaved layout. + * + * For example, for a two-dimensional tensor with 32 rows and 4 columns, @dims + * would be the two-element array `32, 4`. * * Returns: A newly allocated #GstTensor * @@ -106,17 +109,56 @@ GstTensorDimOrder dims_order, gsize num_dims, gsize * dims) { GstTensor *tensor; + + tensor = gst_tensor_alloc (num_dims); + if (!gst_tensor_set_simple (tensor, id, + data_type, data, dims_order, num_dims, dims)) { + g_free (tensor); + return NULL; + } + + return tensor; +} + + +/** + * gst_tensor_set_simple: + * @tensor: a #GstTensor + * @id: semantically identifies the contents of the tensor + * @data_type: #GstTensorDataType of tensor data + * @data: (transfer full): #GstBuffer holding tensor data + * @dims_order: Indicates the tensor dimension indexing order + * @num_dims: number of tensor dimensions + * @dims: (array length=num_dims): size of tensor in each dimension. + * A value of 0 means the dimension is dynamic. + * + * Sets the content of a #GstTensor of @dims_order ROW_MAJOR or COLUMN_MAJOR and + * with an interleaved layout. The #GstTensor must have exactly @num_dims + * dimensions. + * + * For example, for a two-dimensional tensor with 32 rows and 4 columns, @dims + * would be the two-element array `32, 4`. 
+ * + * Returns: TRUE if it could be set correctly + * + * Since: 1.28 + */ +gboolean +gst_tensor_set_simple (GstTensor * tensor, GQuark id, + GstTensorDataType data_type, GstBuffer * data, + GstTensorDimOrder dims_order, gsize num_dims, gsize * dims) +{ gsize num_elements = 1; gsize i; gboolean dynamic_tensor_size = FALSE; /* Update this if adding more to GstTensorDataType */ - g_return_val_if_fail (data_type <= GST_TENSOR_DATA_TYPE_BFLOAT16, NULL); + g_return_val_if_fail (data_type <= GST_TENSOR_DATA_TYPE_BFLOAT16, FALSE); - g_return_val_if_fail (GST_IS_BUFFER (data), NULL); + g_return_val_if_fail (GST_IS_BUFFER (data), FALSE); g_return_val_if_fail (dims_order == GST_TENSOR_DIM_ORDER_ROW_MAJOR || - dims_order == GST_TENSOR_DIM_ORDER_COL_MAJOR, NULL); - g_return_val_if_fail (num_dims > 0, NULL); + dims_order == GST_TENSOR_DIM_ORDER_COL_MAJOR, FALSE); + g_return_val_if_fail (num_dims > 0, FALSE); + g_return_val_if_fail (tensor->num_dims == num_dims, FALSE); for (i = 0; i < num_dims; i++) { dynamic_tensor_size = dims[i] == 0; @@ -135,10 +177,11 @@ " but buffer has size %zu", size_for_elements (data_type, num_elements), num_elements, gst_buffer_get_size (data)); - return NULL; + return FALSE; } - tensor = gst_tensor_alloc (num_dims); + memset (tensor, 0, sizeof (GstTensor)); + tensor->id = id; tensor->layout = GST_TENSOR_LAYOUT_CONTIGUOUS; tensor->data_type = data_type; @@ -147,7 +190,7 @@ tensor->num_dims = num_dims; memcpy (tensor->dims, dims, sizeof (gsize) * num_dims); - return tensor; + return TRUE; } /** @@ -209,3 +252,117 @@ *num_dims = tensor->num_dims; return tensor->dims; } + +/** + * gst_tensor_data_type_get_name: + * @data_type: a #GstTensorDataType + * + * Get a string version of the data type + * + * Returns: a constant string with the name of the data type + * + * Since: 1.28 + */ +const gchar * +gst_tensor_data_type_get_name (GstTensorDataType data_type) +{ + switch (data_type) { + case GST_TENSOR_DATA_TYPE_INT4: + return "int4"; + case 
GST_TENSOR_DATA_TYPE_INT8: + return "int8"; + case GST_TENSOR_DATA_TYPE_INT16: + return "int16"; + case GST_TENSOR_DATA_TYPE_INT32: + return "int32"; + case GST_TENSOR_DATA_TYPE_INT64: + return "int64"; + case GST_TENSOR_DATA_TYPE_UINT4: + return "uint4"; + case GST_TENSOR_DATA_TYPE_UINT8: + return "uint8"; + case GST_TENSOR_DATA_TYPE_UINT16: + return "uint16"; + case GST_TENSOR_DATA_TYPE_UINT32: + return "uint32"; + case GST_TENSOR_DATA_TYPE_UINT64: + return "uint64"; + case GST_TENSOR_DATA_TYPE_FLOAT16: + return "float16"; + case GST_TENSOR_DATA_TYPE_FLOAT32: + return "float32"; + case GST_TENSOR_DATA_TYPE_FLOAT64: + return "float64"; + case GST_TENSOR_DATA_TYPE_BFLOAT16: + return "bfloat16"; + default: + return NULL; + } +} + +/** + * gst_tensor_check_type: + * @tensor: A #GstTensor + * @data_type: The data type of the tensor + * @order: The order of the tensor to read from the memory + * @num_dims: The number of dimensions that the tensor can have + * @dims: (array length=num_dims)(nullable): An optional array of dimensions, where G_MAXSIZE means ANY. + * + * Validate whether the tensor matches the reading order, dimensions and data type. + * Validate whether the #GstBuffer has enough size to hold the tensor data. + * + * Returns: TRUE if the #GstTensor has the reading order from the memory matching @order, + * dimensions matching @num_dims, and data type matching @data_type. + * Otherwise FALSE will be returned. 
+ * + * Since: 1.28 + */ +gboolean +gst_tensor_check_type (const GstTensor * tensor, GstTensorDataType data_type, + GstTensorDimOrder order, gsize num_dims, const gsize * dims) +{ + gsize num_elements = 1, tensor_size, i; + + if (tensor->dims_order != order) { + GST_DEBUG ("Tensor \"%s\" has order %d, expected %d", + g_quark_to_string (tensor->id), tensor->dims_order, order); + return FALSE; + } + + if (tensor->num_dims != num_dims) { + GST_DEBUG ("Tensor \"%s\" has %zu dimensions, expected %zu", + g_quark_to_string (tensor->id), tensor->num_dims, num_dims); + return FALSE; + } + + if (tensor->data_type != data_type) { + GST_DEBUG ("Tensor \"%s\" has data type \"%s\", expected \"%s\".", + g_quark_to_string (tensor->id), + gst_tensor_data_type_get_name (tensor->data_type), + gst_tensor_data_type_get_name (data_type)); + return FALSE; + } + + for (i = 0; i < tensor->num_dims; i++) { + num_elements *= tensor->dims[i]; + + if (dims) { + if (dims[i] != G_MAXSIZE && dims[i] != tensor->dims[i]) { + GST_DEBUG ("Tensor \"%s\" has dim[%zu]=%zu but expect dim[%zu]=%zu", + g_quark_to_string (tensor->id), i, tensor->dims[i], i, dims[i]); + return FALSE; + } + } + } + + tensor_size = size_for_elements (tensor->data_type, num_elements); + + if (gst_buffer_get_size (tensor->data) < tensor_size) { + GST_ERROR ("Expected tensor \"%s\" buffer of size %zu (%zu elements)," + " but buffer has size %zu", g_quark_to_string (tensor->id), + tensor_size, num_elements, gst_buffer_get_size (tensor->data)); + return FALSE; + } + + return TRUE; +}
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/analytics/gsttensor.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/analytics/gsttensor.h
Changed
@@ -66,7 +66,75 @@ GST_TENSOR_DATA_TYPE_FLOAT16, GST_TENSOR_DATA_TYPE_FLOAT32, GST_TENSOR_DATA_TYPE_FLOAT64, - GST_TENSOR_DATA_TYPE_BFLOAT16 + GST_TENSOR_DATA_TYPE_BFLOAT16, + /** + * GST_TENSOR_DATA_TYPE_STRING: + * + * UTF-8 string + * + * Since: 1.28 + */ + GST_TENSOR_DATA_TYPE_STRING, + /** + * GST_TENSOR_DATA_TYPE_BOOL: + * + * A boolean value stored in 1 byte. + * + * Since: 1.28 + */ + GST_TENSOR_DATA_TYPE_BOOL, + /** + * GST_TENSOR_DATA_TYPE_COMPLEX64: + * + * A 64-bit complex number stored in 2 32-bit values. + * + * Since: 1.28 + */ + GST_TENSOR_DATA_TYPE_COMPLEX64, + /** + * GST_TENSOR_DATA_TYPE_COMPLEX128: + * + * A 128-bit complex number stored in 2 64-bit values. + * + * Since: 1.28 + */ + GST_TENSOR_DATA_TYPE_COMPLEX128, + /** + * GST_TENSOR_DATA_TYPE_FLOAT8E4M3FN: + * + * A non-IEEE 8-bit floating point format with 4 exponent bits and 3 mantissa bits, with NaN and no infinite values (FN). + * See this paper for more details(https://onnx.ai/onnx/technical/float8.html) + * + * Since: 1.28 + */ + GST_TENSOR_DATA_TYPE_FLOAT8E4M3FN, + /** + * GST_TENSOR_DATA_TYPE_FLOAT8E4M3FNUZ: + * + * A non-IEEE 8-bit floating point format with 4 exponent bits and 3 mantissa bits, with NaN, no infinite values (FN) and no negative zero (UZ). + * See this paper for more details(https://onnx.ai/onnx/technical/float8.html) + * + * Since: 1.28 + */ + GST_TENSOR_DATA_TYPE_FLOAT8E4M3FNUZ, + /** + * GST_TENSOR_DATA_TYPE_FLOAT8E5M2: + * + * A non-IEEE 8-bit floating point format with 5 exponent bits and 2 mantissa bits. + * See this paper for more details(https://onnx.ai/onnx/technical/float8.html) + * + * Since: 1.28 + */ + GST_TENSOR_DATA_TYPE_FLOAT8E5M2, + /** + * GST_TENSOR_DATA_TYPE_FLOAT8E5M2FNUZ: + * + * A non-IEEE 8-bit floating point format with 5 exponent bits and 2 mantissa bits, with NaN, no infinite values (FN) and no negative zero (UZ). 
+ * See this paper for more details(https://onnx.ai/onnx/technical/float8.html) + * + * Since: 1.28 + */ + GST_TENSOR_DATA_TYPE_FLOAT8E5M2FNUZ } GstTensorDataType; /** @@ -127,7 +195,7 @@ #define GST_TYPE_TENSOR (gst_tensor_get_type()) GST_ANALYTICS_META_API -GstTensor * gst_tensor_alloc (gsize num_dims); +GstTensor * gst_tensor_alloc (gsize num_dims) G_GNUC_WARN_UNUSED_RESULT; GST_ANALYTICS_META_API GstTensor * gst_tensor_new_simple (GQuark id, @@ -135,13 +203,18 @@ GstBuffer * data, GstTensorDimOrder dims_order, gsize num_dims, - gsize * dims); + gsize * dims) G_GNUC_WARN_UNUSED_RESULT; + +GST_ANALYTICS_META_API +gboolean gst_tensor_set_simple (GstTensor * tensor, GQuark id, + GstTensorDataType data_type, GstBuffer * data, + GstTensorDimOrder dims_order, gsize num_dims, gsize * dims); GST_ANALYTICS_META_API void gst_tensor_free (GstTensor * tensor); GST_ANALYTICS_META_API -GstTensor * gst_tensor_copy (const GstTensor * tensor); +GstTensor * gst_tensor_copy (const GstTensor * tensor) G_GNUC_WARN_UNUSED_RESULT; GST_ANALYTICS_META_API gsize * gst_tensor_get_dims (GstTensor * tensor, gsize * num_dims); @@ -149,6 +222,14 @@ GST_ANALYTICS_META_API GType gst_tensor_get_type (void); +GST_ANALYTICS_META_API +const gchar *gst_tensor_data_type_get_name (GstTensorDataType data_type); + +GST_ANALYTICS_META_API +gboolean gst_tensor_check_type(const GstTensor * tensor, + GstTensorDataType data_type, GstTensorDimOrder order, gsize num_dims, + const gsize *dims); + G_END_DECLS #endif /* __GST_TENSOR_H__ */
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/analytics/gsttensormeta.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/analytics/gsttensormeta.c
Changed
@@ -23,6 +23,33 @@ #include "gsttensormeta.h" static gboolean +gst_tensor_meta_transform (GstBuffer * dest, GstMeta * meta, + GstBuffer * buffer, GQuark type, gpointer data) +{ + GstTensorMeta *dmeta, *smeta; + + smeta = (GstTensorMeta *) meta; + + if (GST_META_TRANSFORM_IS_COPY (type)) { + dmeta = gst_buffer_add_tensor_meta (dest); + if (!dmeta) + return FALSE; + GST_TRACE ("copy tensor metadata"); + dmeta->num_tensors = smeta->num_tensors; + dmeta->tensors = g_new (GstTensor *, smeta->num_tensors); + for (int i = 0; i < smeta->num_tensors; i++) { + dmeta->tensors[i] = gst_tensor_copy (smeta->tensors[i]); + } + } else { + GST_WARNING ("gst_tensor_meta_transform: transform type %u not supported", + type); + return FALSE; + } + return TRUE; +} + +static gboolean gst_tensor_meta_init (GstMeta * meta, gpointer params, GstBuffer * buffer) { GstTensorMeta *tmeta = (GstTensorMeta *) meta; @@ -80,7 +107,7 @@ sizeof (GstTensorMeta), gst_tensor_meta_init, gst_tensor_meta_free, - NULL); /* tensor_meta_transform not implemented */ + gst_tensor_meta_transform); g_once_init_leave (&tmeta_info, meta); } return tmeta_info; @@ -148,6 +175,69 @@ } /** + * gst_tensor_meta_get_by_id: + * @tmeta: A #GstTensorMeta + * @id: A #GQuark identifying tensor-encoding + * + * Get the first tensor from the #GstTensorMeta identified by @id. + * + * Returns: (nullable)(transfer none): a GstTensor with id matching @id. + * Otherwise NULL will be returned. 
+ * + * Since: 1.28 + */ +const GstTensor * +gst_tensor_meta_get_by_id (GstTensorMeta * tmeta, GQuark id) +{ + g_return_val_if_fail (tmeta != NULL, NULL); + g_return_val_if_fail (tmeta->tensors, NULL); + + for (int i = 0; i < tmeta->num_tensors; ++i) { + if (tmeta->tensors[i]->id == id) + return tmeta->tensors[i]; + } + + return NULL; +} + +/** + * gst_tensor_meta_get_typed_tensor: + * @tmeta: A #GstTensorMeta + * @tensor_id: A #GQuark identifying the tensor-encoding + * @data_type: The data type of the tensor + * @order: The order of the tensor to read from the memory + * @num_dims: The number of dimensions that the tensor can have + * @dims: (array length=num_dims)(nullable): An optional array of dimensions, where G_MAXSIZE means ANY. + * + * Get the first tensor from the #GstTensorMeta identified by + * @tensor_id, matching the reading order, dimensions and the data + * type and optionally the dimensions. Validate whether the + * #GstBuffer has enough size to hold the tensor data. + * + * Returns: (nullable) (transfer none): a matching #GstTensor, + * otherwise NULL + * + * Since: 1.28 + */ +const GstTensor * +gst_tensor_meta_get_typed_tensor (GstTensorMeta * tmeta, + GQuark tensor_id, GstTensorDataType data_type, GstTensorDimOrder order, + gsize num_dims, const gsize * dims) +{ + const GstTensor *tensor; + + tensor = gst_tensor_meta_get_by_id (tmeta, tensor_id); + + if (tensor == NULL) + return NULL; + + if (!gst_tensor_check_type (tensor, data_type, order, num_dims, dims)) + return NULL; + + return tensor; +} + +/** * gst_tensor_meta_get: + * @tmeta: A #GstTensorMeta + * @index: The number of the tensor to get
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/analytics/gsttensormeta.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/analytics/gsttensormeta.h
Changed
@@ -80,6 +80,14 @@ GstTensor **tensors); GST_ANALYTICS_META_API +const GstTensor *gst_tensor_meta_get_by_id (GstTensorMeta *tmeta, GQuark id); + +GST_ANALYTICS_META_API +const GstTensor *gst_tensor_meta_get_typed_tensor (GstTensorMeta * tmeta, + GQuark tensor_id, GstTensorDataType data_type, GstTensorDimOrder order, + gsize num_dims, const gsize * dims); + +GST_ANALYTICS_META_API const GstTensor *gst_tensor_meta_get (GstTensorMeta *tmeta, gsize index); GST_ANALYTICS_META_API
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/analytics/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/analytics/meson.build
Changed
@@ -3,8 +3,12 @@ 'gstanalyticsobjectdetectionmtd.c', 'gstanalyticsobjecttrackingmtd.c', 'gstanalyticssegmentationmtd.c', + 'gstanalyticsbatchmeta.c', + 'gstanalyticstensormtd.c', 'gsttensormeta.c', - 'gsttensor.c') + 'gsttensor.c', + 'gstanalytics_image_util.c', + 'modelinfo.c') analytics_headers = files( 'analytics.h', 'gstanalyticsmeta.h', @@ -13,8 +17,13 @@ 'gstanalyticsobjectdetectionmtd.h', 'gstanalyticsobjecttrackingmtd.h', 'gstanalyticssegmentationmtd.h', + 'gstanalyticstensormtd.h', + 'gstanalyticsbatchmeta.h', 'gsttensormeta.h', - 'gsttensor.h') + 'gsttensor.h', + 'gstanalytics_image_util.h', + 'modelinfo.h') + install_headers(analytics_headers, subdir : 'gstreamer-1.0/gst/analytics') doc_sources =
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/analytics/modelinfo.c
Added
@@ -0,0 +1,903 @@ +/* + * GStreamer + * Copyright (C) 2025 Collabora Ltd. + * @author: Olivier Crete <olivier.crete@collabora.com> + * + * modelinfo.c + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "modelinfo.h" + +/** + * SECTION: GstAnalyticsModelInfo + * @title: GstAnalyticsModelInfo + * @short_description: A GstAnalyticsModelInfo to store model information + * @symbols: + * - GstAnalyticsModelInfo + * + * The #GstAnalyticsModelInfo is an object storing artificial neural network + * model metadata describing the input and output tensors. This information + * is required by inference elements. + * + * The ".modelinfo" files describe the additional metadata for + * a given serialized model file such as `.tflite`, `.onnx` or `.pte` files. + * + * The ModelInfo files are ini-style. Each section is matched to a + * particular input or output tensor. + * + * The title of the section must match the name of the tensor in the model file. + * + * The fields used to match the modelinfo to the model are: + * `[title]`: The name of the tensor, must be unique + * `dims`: The dimensions as a comma-separated list of ints. 
-1 matches a dynamic dimension and is a wildcard + * `dir`: Either "input" or "output" + * `type`: The data type matching #GstTensorDataType, one of: + * `int4` + * `int8` + * `int16` + * `int32` + * `int64` + * `uint4` + * `uint8` + * `uint16` + * `uint32` + * `uint64` + * `float16` + * `float32` + * `float64` + * `bfloat16` + * + * Based on these fields, the following metadata is applied to output tensors: + * `id`: The tensor ID so other elements can identify it, ideally registered in the Tensor ID Registry (https://github.com/collabora/tensor-id-registry/blob/main/tensor-id-register.md). + * `group-id`: The group ID that groups related tensors together (e.g., all outputs from the same model) + * `dims-order`: The dimension ordering, either "row-major" or "col-major". Defaults to "row-major" if not specified. + * + * Those fields are applied to input tensors for normalization: + * `ranges`: semicolon-separated list of comma-separated pairs of floats, + * each representing (min, max) for a single channel or dimension. + * For per-channel normalization: `ranges=0.0,255.0;-1.0,1.0;0.0,1.0` (R,G,B) + * For single range (applies to all channels): `ranges=0.0,255.0` + * The inference elements will convert 8-bit input 0-255 to target ranges using: + * output[i] = input[i] * scale[i] + offset[i] + * where for each channel i: + * scale[i] = (max[i] - min[i]) / 255.0 + * offset[i] = min[i] + * + * Common ranges: + * `0.0,255.0` - No normalization (passthrough, scale=1.0, offset=0.0) + * `0.0,1.0` - Normalized to [0,1] range (scale≈0.00392, offset=0.0) + * `-1.0,1.0` - Normalized to [-1,1] range (scale≈0.00784, offset=-1.0) + * `16.0,235.0` - TV/limited range (scale≈0.859, offset=16.0) + * + * Other fields are ignored for now. + * + * The API is meant to be used by inference elements. + * + * Since: 1.28 + */ + +/** + * gst_analytics_modelinfo_get_type: + * + * Get the GType of the #GstAnalyticsModelInfo boxed type. 
+ * + * Returns: The GType + * + * Since: 1.28 + */ +G_DEFINE_BOXED_TYPE (GstAnalyticsModelInfo, gst_analytics_modelinfo, + (GBoxedCopyFunc) g_key_file_ref, + (GBoxedFreeFunc) gst_analytics_modelinfo_free) +#define GST_CAT_DEFAULT analytics_modelinfo_debug + GST_DEBUG_CATEGORY (analytics_modelinfo_debug); + + static gboolean + key_file_string_matches (GKeyFile * keyfile, const gchar * group, + const gchar * key, const gchar * value) +{ + gchar *kf_value = g_key_file_get_string (keyfile, group, key, NULL); + + gboolean matches = !g_strcmp0 (kf_value, value); + + g_free (kf_value); + + return matches; +} + +/** + * modelinfo_check_version: + * @kf: The loaded GKeyFile + * + * Checks if the modelinfo version is supported. Files without version + * are treated as version 1.0 for backward compatibility. + * + * Returns: TRUE if version is supported, FALSE otherwise + */ +static gboolean +modelinfo_check_version (GKeyFile * kf) +{ + gchar *file_version; + gboolean has_version_section; + gboolean supported = FALSE; + gchar **version_parts; + gint major = 0, minor = 0; + + /* Check if modelinfo section exists */ + has_version_section = g_key_file_has_group (kf, GST_MODELINFO_SECTION_NAME); + + if (!has_version_section) { + /* v1.0 is the first public version and requires modelinfo section. */ + GST_ERROR ("No modelinfo section found. This is a pre-v1.0 format file. " + "Please regenerate modelinfo using modelinfo-generator.py to create " + "a v%s compatible file.", GST_MODELINFO_VERSION_STR); + return FALSE; + } + + /* Get version string */ + file_version = g_key_file_get_string (kf, GST_MODELINFO_SECTION_NAME, + "version", NULL); + + if (!file_version) { + GST_ERROR ("Modelinfo section exists but no version field found. " + "v1.0 is the first public version and requires version field. 
" + "Please regenerate modelinfo using modelinfo-generator.py to create " + "a v%s compatible file.", GST_MODELINFO_VERSION_STR); + return FALSE; + } + + /* Parse version string (format: "Major.Minor") */ + version_parts = g_strsplit (file_version, ".", -1); + + if (!version_parts || !version_parts[0] || !version_parts[1] || + version_parts[2] != NULL) { + GST_ERROR ("Invalid version format: '%s'. Expected format: 'Major.Minor'", + file_version); + g_strfreev (version_parts); + g_free (file_version); + return FALSE; + } + + major = g_ascii_strtoll (version_parts[0], NULL, 10); + minor = g_ascii_strtoll (version_parts[1], NULL, 10); + + /* Check if version is supported + * Major version must match exactly. + * Minor versions can be older (backward compatible within same major) */ + if (major != GST_MODELINFO_VERSION_MAJOR) { + /* Major version mismatch - not supported */ + if (major < GST_MODELINFO_VERSION_MAJOR) { + GST_ERROR + ("Modelinfo major version %d is not supported by this version of " + "GStreamer (current major: %d). Please use the modelinfo-generator.py " + "script with --upgrade to upgrade the file to version %s.", major, + GST_MODELINFO_VERSION_MAJOR, GST_MODELINFO_VERSION_STR); + } else { + GST_ERROR ("Modelinfo version %s is not supported by this version of " + "GStreamer (current: %s). Please upgrade GStreamer.", + file_version, GST_MODELINFO_VERSION_STR); + } + supported = FALSE; + } else if (minor > GST_MODELINFO_VERSION_MINOR) { + /* Newer minor version in same major - log warning but still supported */ + GST_WARNING ("Modelinfo minor version %d is newer than supported (%d). 
" + "Some features may not be available.", + minor, GST_MODELINFO_VERSION_MINOR); + supported = TRUE; + } else { + /* Same major, same or older minor - fully supported */ + supported = TRUE; + } + + g_strfreev (version_parts); + g_free (file_version); + return supported; +} + +/** + * gst_analytics_modelinfo_get_id: + * @modelinfo: Instance of #GstAnalyticsModelInfo + * @tensor_name: The name of the tensor + * + * Get the tensor ID from the modelinfo for the specified tensor name. + * + * The tensor ID is ideally registered in the Tensor ID Registry(https://github.com/collabora/tensor-id-registry/blob/main/tensor-id-register.md). + * + * Returns: (nullable) (transfer full): The tensor ID string, or %NULL if not found. + * The caller must free this with g_free() when done. + * + * Since: 1.28 + */ +gchar * +gst_analytics_modelinfo_get_id (GstAnalyticsModelInfo * modelinfo, + const gchar * tensor_name) +{ + GKeyFile *kf = (GKeyFile *) modelinfo; + gchar *id = g_key_file_get_string (kf, tensor_name, "id", NULL); + + /* Check for placeholder that needs to be filled */ + if (id && g_str_has_prefix (id, "PLACEHOLDER")) { + GST_WARNING ("Modelinfo file contains unresolved placeholder for id " + "in tensor '%s': %s. Please regenerate the modelinfo file using " + "modelinfo-generator.py --prompt and provide the correct values.", + tensor_name, id); + } + + return id; +} + +/** + * gst_analytics_modelinfo_get_group_id: + * @modelinfo: Instance of #GstAnalyticsModelInfo + * + * Get the group ID that groups related tensors together (e.g., all outputs + * from the same model). + * + * The group ID is stored in the modelinfo section and is global for all + * tensors in the model. + * + * Returns: (nullable) (transfer full): The group ID string, or %NULL if not found. + * The caller must free this with g_free() when done. 
+ * + * Since: 1.28 + */ +gchar * +gst_analytics_modelinfo_get_group_id (GstAnalyticsModelInfo * modelinfo) +{ + GKeyFile *kf = (GKeyFile *) modelinfo; + gchar *group_id; + + /* group-id is in modelinfo section (global for all tensors in v2.0+) + * Major version compatibility is already checked in modelinfo_load() */ + group_id = g_key_file_get_string (kf, GST_MODELINFO_SECTION_NAME, + "group-id", NULL); + + /* Check for placeholder that needs to be filled */ + if (group_id && g_str_has_prefix (group_id, "PLACEHOLDER")) { + GST_WARNING + ("Modelinfo file contains unresolved placeholder for group-id: %s. " + "Please regenerate the modelinfo file using " + "modelinfo-generator.py --prompt and provide the correct values.", + group_id); + } + + return group_id; +} + +/** + * gst_analytics_modelinfo_get_quark_id: + * @modelinfo: Instance of #GstAnalyticsModelInfo + * @tensor_name: The name of the tensor + * + * Get the tensor ID as a GQuark for efficient string comparison and storage. + * + * Using GQuark is more efficient than string comparison when you need to + * compare multiple IDs. + * + * Returns: The GQuark of the tensor ID, or 0 if not found + * + * Since: 1.28 + */ +GQuark +gst_analytics_modelinfo_get_quark_id (GstAnalyticsModelInfo * modelinfo, + const gchar * tensor_name) +{ + GKeyFile *kf = (GKeyFile *) modelinfo; + GQuark q = 0; + gchar *id = g_key_file_get_string (kf, tensor_name, "id", NULL); + + if (id) + q = g_quark_from_string (id); + g_free (id); + + return q; +} + +/** + * gst_analytics_modelinfo_get_quark_group_id: + * @modelinfo: Instance of #GstAnalyticsModelInfo + * + * Get the group ID as a GQuark for efficient string comparison and storage. + * + * Using GQuark is more efficient than string comparison when you need to + * compare multiple group IDs. 
+ * + * Returns: The GQuark of the group ID, or 0 if not found + * + * Since: 1.28 + */ +GQuark +gst_analytics_modelinfo_get_quark_group_id (GstAnalyticsModelInfo * modelinfo) +{ + GQuark q = 0; + gchar *id = gst_analytics_modelinfo_get_group_id (modelinfo); + + if (id) + q = g_quark_from_string (id); + g_free (id); + + return q; +} + +static gboolean +modelinfo_check_direction (GKeyFile * kf, + const gchar * tensor_name, GstAnalyticsModelInfoTensorDirection dir) +{ + gchar *value; + gboolean ret = FALSE; + + if (dir == MODELINFO_DIRECTION_UNKNOWN) + return TRUE; + + value = g_key_file_get_string (kf, tensor_name, "dir", NULL); + if (!value) + return TRUE; + + if (dir == MODELINFO_DIRECTION_INPUT) + ret = g_str_equal (value, "input"); + if (dir == MODELINFO_DIRECTION_OUTPUT) + ret = g_str_equal (value, "output"); + + g_free (value); + + return ret; +} + +static gboolean +modelinfo_validate_internal (GKeyFile * kf, const gchar * tensor_name, + GstAnalyticsModelInfoTensorDirection dir, GstTensorDataType data_type, + gsize num_dims, const gsize * dims, gboolean accept_no_dims) +{ + gsize kf_dims_length = 0; + gint *kf_dims; + gsize i; + gboolean ret = FALSE; + + if (!key_file_string_matches (kf, tensor_name, "type", + gst_tensor_data_type_get_name (data_type))) + return FALSE; + + if (!modelinfo_check_direction (kf, tensor_name, dir)) + return FALSE; + + if (!g_key_file_has_key (kf, tensor_name, "dims", NULL)) + return accept_no_dims; + + kf_dims = g_key_file_get_integer_list (kf, tensor_name, "dims", + &kf_dims_length, NULL); + if (kf_dims == NULL) { + GST_ERROR ("Invalid model info file, dims in %s is no in the" + " right format", tensor_name); + return FALSE; + } + + if (kf_dims_length != num_dims) + goto done; + + for (i = 0; i < kf_dims_length; i++) { + /* If the keyfile contains dims < 0, then its a wildcard, + * accept anything */ + if (kf_dimsi < 0) + continue; + /* Dimensions of size "-1" means dynamic, but we didn't accept a wildcard, + * reject it */ + if 
(dims[i] == G_MAXSIZE) + goto done; + + if (kf_dims[i] != dims[i]) + goto done; + } + + ret = TRUE; +done: + g_free (kf_dims); + return ret; +} + +static gboolean +modelinfo_validate (GstAnalyticsModelInfo * modelinfo, + const gchar * tensor_name, GstAnalyticsModelInfoTensorDirection dir, + GstTensorDataType data_type, gsize num_dims, const gsize * dims) +{ + GKeyFile *kf = (GKeyFile *) modelinfo; + + return modelinfo_validate_internal (kf, tensor_name, dir, data_type, + num_dims, dims, TRUE); +} + +static gboolean +modelinfo_has_tensor_name (GstAnalyticsModelInfo * modelinfo, + const char *tensor_name) +{ + GKeyFile *kf = (GKeyFile *) modelinfo; + + return g_key_file_has_group (kf, tensor_name); +} + +static gchar * +modelinfo_find_tensor_name_by_index (GstAnalyticsModelInfo * modelinfo, + GstAnalyticsModelInfoTensorDirection dir, gsize index) +{ + GKeyFile *kf = (GKeyFile *) modelinfo; + gchar **groups; + gsize i, j; + gchar *tensor_name = NULL; + + groups = g_key_file_get_groups (kf, NULL); + + for (i = 0, j = 0; groups[i]; i++) { + if (!modelinfo_check_direction (kf, groups[i], dir)) + continue; + + if (index == j++) { + tensor_name = g_strdup (groups[i]); + break; + } + } + + g_strfreev (groups); + return tensor_name; +} + +static gchar * +modelinfo_find_tensor_name_by_dims (GstAnalyticsModelInfo * modelinfo, + GstAnalyticsModelInfoTensorDirection dir, GstTensorDataType data_type, + gsize num_dims, const gsize * dims) +{ + GKeyFile *kf = (GKeyFile *) modelinfo; + gchar **groups; + gsize i; + gchar *tensor_name = NULL; + + groups = g_key_file_get_groups (kf, NULL); + + for (i = 0; groups[i]; i++) { + if (modelinfo_validate_internal (kf, groups[i], dir, data_type, + num_dims, dims, FALSE)) { + tensor_name = g_strdup (groups[i]); + break; + } + } + + g_strfreev (groups); + return tensor_name; +} + + +/** + * gst_analytics_modelinfo_load: + * @model_filename: (type filename): Path to the model file (e.g., "model.onnx", "model.tflite") + * + * Load a modelinfo file 
associated with the given model file. + * + * This function attempts to load a `.modelinfo` file in the following order: + * 1. `{model_filename}.modelinfo` + * 2. `{model_filename_without_extension}.modelinfo` + * + * The modelinfo file contains metadata for the model's input and output tensors, + * including normalization ranges, dimension ordering, tensor IDs, etc. + * + * The loaded modelinfo must be freed with gst_analytics_modelinfo_free() + * when no longer needed. + * + * Returns: (transfer full) (nullable): A new #GstAnalyticsModelInfo instance, + * or %NULL if the modelinfo file could not be found or loaded. + * + * Since: 1.28 + */ +GstAnalyticsModelInfo * +gst_analytics_modelinfo_load (const gchar * model_filename) +{ + GKeyFile *kf = g_key_file_new (); + gchar *filename; + gboolean ret; + gchar *last_dot; + + g_key_file_set_list_separator (kf, ','); + + GST_DEBUG_CATEGORY_INIT (analytics_modelinfo_debug, "modelinfo", + 0, "analytics model info"); + + filename = g_strconcat (model_filename, ".modelinfo", NULL); + ret = g_key_file_load_from_file (kf, filename, G_KEY_FILE_NONE, NULL); + g_free (filename); + if (ret) { + /* Version check */ + if (!modelinfo_check_version (kf)) { + GST_ERROR ("Unsupported modelinfo version in file"); + g_key_file_free (kf); + return NULL; + } + return (GstAnalyticsModelInfo *) kf; + } + + last_dot = g_utf8_strrchr (model_filename, -1, '.'); + if (last_dot && !g_utf8_strchr (last_dot, -1, '/')) { + gchar *tmp = g_strndup (model_filename, last_dot - model_filename); + filename = g_strconcat (tmp, ".modelinfo", NULL); + g_free (tmp); + ret = g_key_file_load_from_file (kf, filename, G_KEY_FILE_NONE, NULL); + g_free (filename); + if (ret) { + /* Version check */ + if (!modelinfo_check_version (kf)) { + GST_ERROR ("Unsupported modelinfo version in file"); + g_key_file_free (kf); + return NULL; + } + return (GstAnalyticsModelInfo *) kf; + } + } + + g_key_file_free (kf); + return NULL; +} + +/** + * gst_analytics_modelinfo_free: + 
* @model_info: (transfer full) (nullable): Instance of #GstAnalyticsModelInfo + * + * Free a modelinfo object allocated by gst_analytics_modelinfo_load(). + * + * This function should be called when the modelinfo is no longer needed + * to release the associated resources. + * + * Since: 1.28 + */ +void +gst_analytics_modelinfo_free (GstAnalyticsModelInfo * modelinfo) +{ + GKeyFile *kf = (GKeyFile *) modelinfo; + + g_key_file_free (kf); +} + + +/** + * gst_analytics_modelinfo_find_tensor_name: + * @modelinfo: Instance of #GstAnalyticsModelInfo + * @dir: The tensor direction (input or output) + * @index: The tensor index within the specified direction + * @in_tensor_name: (nullable): An optional tensor name hint to check first + * @data_type: The tensor data type to match + * @num_dims: The number of dimensions + * @dims: (array length=num_dims): The dimension sizes. Use -1 for dynamic dimensions. + * + * Find the name of a tensor in the modelinfo that matches the given criteria. + * + * The function performs the following checks in order: + * 1. If @in_tensor_name is provided and exists in modelinfo, validate it matches + * 2. Search by index for the specified direction and validate + * 3. Search by dimensions and data type + * + * Returns: (nullable) (transfer full): The tensor name if found, or %NULL otherwise. + * The caller must free this with g_free() when done. 
+ * + * Since: 1.28 + */ +gchar * +gst_analytics_modelinfo_find_tensor_name (GstAnalyticsModelInfo * modelinfo, + GstAnalyticsModelInfoTensorDirection dir, gsize index, + const gchar * in_tensor_name, GstTensorDataType data_type, gsize num_dims, + const gsize * dims) +{ + gchar *tensor_name = NULL; + + if (in_tensor_name && modelinfo_has_tensor_name (modelinfo, in_tensor_name)) { + if (modelinfo_validate (modelinfo, in_tensor_name, dir, data_type, + num_dims, dims)) { + return g_strdup (in_tensor_name); + } + } + + tensor_name = modelinfo_find_tensor_name_by_index (modelinfo, dir, index); + if (tensor_name) { + if (modelinfo_validate (modelinfo, tensor_name, dir, data_type, + num_dims, dims)) { + return tensor_name; + } + g_free (tensor_name); + } + + return modelinfo_find_tensor_name_by_dims (modelinfo, dir, data_type, + num_dims, dims); +} + + +/** + * gst_analytics_modelinfo_get_target_ranges: + * @modelinfo: Instance of #GstAnalyticsModelInfo + * @tensor_name: The name of the tensor + * @num_ranges: (out): The number of ranges + * @mins: (out) (transfer full) (array length=num_ranges): The minimum values for each target range + * @maxs: (out) (transfer full) (array length=num_ranges): The maximum values for each target range + * + * Retrieve all target ranges (min/max pairs) expected by the model for a given tensor. + * + * This function retrieves all target ranges from the `ranges` field in the modelinfo. + * Each range represents the expected input range for a channel or dimension that the + * model requires. + * + * The function reads from the `ranges` field: Semicolon-separated list of + * comma-separated pairs (min,max) for per-channel target ranges + * (e.g., "0.0,1.0;-1.0,1.0;0.0,1.0" for RGB channels with different normalization targets). + * + * The caller must free @mins and @maxs with g_free() when done. 
+ * + * Returns: %TRUE if range information was found and valid, %FALSE otherwise + * + * Since: 1.28 + */ +gboolean +gst_analytics_modelinfo_get_target_ranges (GstAnalyticsModelInfo * modelinfo, + const gchar * tensor_name, gsize * num_ranges, gdouble ** mins, + gdouble ** maxs) +{ + GKeyFile *kf = (GKeyFile *) modelinfo; + gchar *ranges_str = NULL; + gchar **range_parts = NULL; + gsize local_num_ranges = 0; + gsize i; + + *mins = NULL; + *maxs = NULL; + *num_ranges = 0; + + /* Parse 'ranges' field: "min,max;..." */ + ranges_str = g_key_file_get_string (kf, tensor_name, "ranges", NULL); + if (!ranges_str) { + GST_DEBUG ("Tensor '%s': no ranges specified, returning FALSE", + tensor_name); + return FALSE; + } + + /* Check for placeholder */ + if (g_str_has_prefix (ranges_str, "PLACEHOLDER")) { + GST_ERROR + ("Modelinfo file contains unresolved placeholder for ranges in tensor '%s'. " + "Please regenerate the modelinfo file using modelinfo-generator.py --prompt " + "and provide the correct values.", tensor_name); + g_free (ranges_str); + return FALSE; + } + + /* Parse ranges: semicolon-separated, each is "min,max" */ + range_parts = g_strsplit (ranges_str, ";", -1); + local_num_ranges = g_strv_length (range_parts); + + *mins = g_new (gdouble, local_num_ranges); + *maxs = g_new (gdouble, local_num_ranges); + + for (i = 0; i < local_num_ranges; i++) { + gchar **minmax = g_strsplit (range_parts[i], ",", 2); + if (g_strv_length (minmax) == 2) { + (*mins)[i] = g_ascii_strtod (minmax[0], NULL); + (*maxs)[i] = g_ascii_strtod (minmax[1], NULL); + GST_DEBUG ("Tensor '%s'[%zu]: range=[%f, %f]", + tensor_name, i, (*mins)[i], (*maxs)[i]); + } else { + GST_ERROR ("Invalid range format in tensor '%s'[%zu]: %s", + tensor_name, i, range_parts[i]); + g_strfreev (minmax); + g_free (*mins); + g_free (*maxs); + *mins = NULL; + *maxs = NULL; + g_strfreev (range_parts); + g_free (ranges_str); + return FALSE; + } + g_strfreev (minmax); + } + + *num_ranges = local_num_ranges; + g_strfreev (range_parts); + g_free 
(ranges_str); + + return TRUE; +} + +/** + * gst_analytics_modelinfo_get_input_scales_offsets: + * @modelinfo: Instance of #GstAnalyticsModelInfo + * @tensor_name: The name of the tensor + * @num_input_ranges: The number of input ranges (channels/dimensions) + * @input_mins: (array length=num_input_ranges): The minimum values of the actual input data for each channel + * @input_maxs: (array length=num_input_ranges): The maximum values of the actual input data for each channel + * @num_output_ranges: (out): The number of output ranges/scale-offset pairs + * @output_scales: (out) (transfer full) (array length=num_output_ranges): The scale values for normalization + * @output_offsets: (out) (transfer full) (array length=num_output_ranges): The offset values for normalization + * + * Calculate normalization scales and offsets to transform input data to the target range. + * + * This function calculates transformation parameters to convert from the actual input data range + * [input_min, input_max] to the target range expected by the model [target_min, target_max]: + * `normalized_value[i] = input[i] * output_scale[i] + output_offset[i]` + * + * The target ranges are read from the modelinfo `ranges` field: Semicolon-separated list of + * comma-separated pairs (min,max) for per-channel target ranges + * (e.g., "0.0,255.0;-1.0,1.0;0.0,1.0" for RGB channels with different target ranges). + * + * Common input ranges: + * - [0.0, 255.0]: 8-bit unsigned (uint8) + * - [-128.0, 127.0]: 8-bit signed (int8) + * - [0.0, 65535.0]: 16-bit unsigned (uint16) + * - [-32768.0, 32767.0]: 16-bit signed (int16) + * - [0.0, 1.0]: Normalized float + * - [-1.0, 1.0]: Normalized signed float + * + * The number of input ranges (@num_input_ranges) must equal the number of target ranges + * in the modelinfo. The function will return FALSE if they don't match. + * + * The caller must free @output_scales and @output_offsets with g_free() when done. 
+ * + * Returns: %TRUE on success, %FALSE on error, if ranges field is not found, or if @num_input_ranges + * doesn't match the number of target ranges in the modelinfo + * + * Since: 1.28 + */ +gboolean +gst_analytics_modelinfo_get_input_scales_offsets (GstAnalyticsModelInfo * + modelinfo, const gchar * tensor_name, gsize num_input_ranges, + const gdouble * input_mins, const gdouble * input_maxs, + gsize * num_output_ranges, gdouble ** output_scales, + gdouble ** output_offsets) +{ + gdouble *target_mins = NULL; + gdouble *target_maxs = NULL; + gsize num_target_ranges; + gsize i; + gdouble target_min, target_max; + gdouble input_min, input_max; + gdouble scale, offset; + + *output_scales = NULL; + *output_offsets = NULL; + *num_output_ranges = 0; + + /* Get target ranges from modelinfo */ + if (!gst_analytics_modelinfo_get_target_ranges (modelinfo, tensor_name, + &num_target_ranges, &target_mins, &target_maxs)) { + GST_DEBUG ("Tensor '%s': no ranges specified, returning FALSE", + tensor_name); + return FALSE; + } + + /* Validate that input ranges match target ranges */ + if (num_input_ranges != num_target_ranges) { + GST_ERROR + ("Tensor '%s': number of input ranges (%zu) doesn't match number of " + "target ranges in modelinfo (%zu)", tensor_name, num_input_ranges, + num_target_ranges); + g_free (target_mins); + g_free (target_maxs); + return FALSE; + } + + /* Allocate output arrays */ + *output_scales = g_new (gdouble, num_target_ranges); + *output_offsets = g_new (gdouble, num_target_ranges); + + /* Calculate scale and offset for each channel */ + for (i = 0; i < num_target_ranges; i++) { + target_min = target_mins[i]; + target_max = target_maxs[i]; + input_min = input_mins[i]; + input_max = input_maxs[i]; + + /* Calculate scale and offset to transform from input range to target range + * Formula: output = input * scale + offset + * where: scale = (target_max - target_min) / (input_max - input_min) + * offset = target_min - input_min * scale */ + scale = (target_max - 
target_min) / (input_max - input_min); + offset = target_min - input_min * scale; + + (*output_scales)[i] = scale; + (*output_offsets)[i] = offset; + + GST_DEBUG ("Tensor '%s'[%zu]: input=[%f, %f], target=[%f, %f] to scale=%f, " + "offset=%f", tensor_name, i, input_min, input_max, target_min, + target_max, scale, offset); + } + + *num_output_ranges = num_target_ranges; + g_free (target_mins); + g_free (target_maxs); + + return TRUE; +} + +/** + * gst_analytics_modelinfo_get_dims_order: + * @modelinfo: Instance of #GstAnalyticsModelInfo + * @tensor_name: The name of the tensor + * + * Retrieve the dimension ordering for a given tensor. + * + * The dimension ordering specifies how multi-dimensional tensor data is + * laid out in memory: + * - Row-major (C/NumPy style): Last dimension changes fastest in memory + * - Column-major (Fortran style): First dimension changes fastest in memory + * + * If not specified in the modelinfo, defaults to row-major. + * + * Returns: The dimension order as #GstTensorDimOrder + * + * Since: 1.28 + */ +GstTensorDimOrder +gst_analytics_modelinfo_get_dims_order (GstAnalyticsModelInfo * modelinfo, + const gchar * tensor_name) +{ + GKeyFile *kf = (GKeyFile *) modelinfo; + gchar *dims_order_str; + GstTensorDimOrder dims_order; + + dims_order_str = g_key_file_get_string (kf, tensor_name, "dims-order", NULL); + + /* Default to row-major if not specified */ + if (dims_order_str && g_str_equal (dims_order_str, "col-major")) + dims_order = GST_TENSOR_DIM_ORDER_COL_MAJOR; + else + dims_order = GST_TENSOR_DIM_ORDER_ROW_MAJOR; + + g_free (dims_order_str); + return dims_order; +} + +/** + * gst_analytics_modelinfo_get_version: + * @modelinfo: Instance of #GstAnalyticsModelInfo + * + * Retrieve the version string of the modelinfo file format. + * + * The version is in the format "Major.Minor" and is stored in the + * modelinfo section of the modelinfo file. + * + * Returns: (transfer full): The version string (e.g., "1.0"). 
+ * The caller must free this with g_free() when done. + * Defaults to "1.0" if not specified. + * + * Since: 1.28 + */ +gchar * +gst_analytics_modelinfo_get_version (GstAnalyticsModelInfo * modelinfo) +{ + GKeyFile *kf = (GKeyFile *) modelinfo; + gchar *version; + + if (!g_key_file_has_group (kf, GST_MODELINFO_SECTION_NAME)) { + /* No version section means version 1.0 */ + return g_strdup ("1.0"); + } + + version = g_key_file_get_string (kf, GST_MODELINFO_SECTION_NAME, + "version", NULL); + + if (!version) { + /* Section exists but no version field, default to 1.0 */ + return g_strdup ("1.0"); + } + + return version; +}
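The scale/offset math documented in `gst_analytics_modelinfo_get_input_scales_offsets()` above is easy to verify in isolation. Here is a minimal, GStreamer-free C sketch of the same two formulas (`scale = (target_max - target_min) / (input_max - input_min)` and `offset = target_min - input_min * scale`); the helper name `compute_scale_offset` is ours for illustration, not part of the library API.

```c
#include <assert.h>
#include <math.h>

/* Hypothetical helper mirroring the per-channel normalization used by
 * gst_analytics_modelinfo_get_input_scales_offsets():
 *   normalized = input * scale + offset */
static void
compute_scale_offset (double input_min, double input_max,
    double target_min, double target_max, double *scale, double *offset)
{
  *scale = (target_max - target_min) / (input_max - input_min);
  *offset = target_min - input_min * *scale;
}
```

For example, mapping uint8 data in [0, 255] onto a model that expects [-1, 1] yields scale = 2/255 and offset = -1, so both endpoints of the input range land exactly on the target range.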
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/analytics/modelinfo.h
Added
@@ -0,0 +1,158 @@ +/* + * GStreamer + * Copyright (C) 2025 Collabora Ltd. + * @author: Olivier Crete <olivier.crete@collabora.com> + * + * modelinfo.h + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + + +#include <glib.h> +#include <gst/analytics/analytics-meta-prelude.h> +#include <gst/analytics/gsttensor.h> + +#pragma once + +/** + * GST_MODELINFO_VERSION_MAJOR: + * + * The current major version of the modelinfo format + * + * Since: 1.28 + */ +#define GST_MODELINFO_VERSION_MAJOR (1) + +/** + * GST_MODELINFO_VERSION_MINOR: + * + * The current minor version of the modelinfo format + * + * Since: 1.28 + */ +#define GST_MODELINFO_VERSION_MINOR (0) + +/** + * GST_MODELINFO_VERSION_STR: + * + * The current version string for the modelinfo format. + * This MUST be updated whenever the format changes. 
+ * + * Since: 1.28 + */ +#define GST_MODELINFO_VERSION_STR "1.0" + +/** + * GST_MODELINFO_SECTION_NAME: + * + * The name of the modelinfo header section + * + * Since: 1.28 + */ +#define GST_MODELINFO_SECTION_NAME "modelinfo" + +G_BEGIN_DECLS + +/** + * GstAnalyticsModelInfoTensorDirection: + * @MODELINFO_DIRECTION_UNKNOWN: Tensor location is unknown + * @MODELINFO_DIRECTION_INPUT: Input tensor + * @MODELINFO_DIRECTION_OUTPUT: Output tensor + * + * Since: 1.28 + */ +typedef enum { + MODELINFO_DIRECTION_UNKNOWN, + MODELINFO_DIRECTION_INPUT, + MODELINFO_DIRECTION_OUTPUT, +} GstAnalyticsModelInfoTensorDirection; + +/** + * GstAnalyticsModelInfo: + * + * The #GstAnalyticsModelInfo is an object storing artificial neural network + * model metadata describing the input and output tensors. This information + * is required by inference elements. + * + * Since: 1.28 + */ +typedef struct _ModelInfo GstAnalyticsModelInfo; + +GST_ANALYTICS_META_API +GType gst_analytics_modelinfo_get_type (void); + +/** + * GST_ANALYTICS_MODELINFO_TYPE: + * + * The model info type + * + * Since: 1.28 + */ +#define GST_ANALYTICS_MODELINFO_TYPE (gst_analytics_modelinfo_get_type()) + +GST_ANALYTICS_META_API +GstAnalyticsModelInfo * +gst_analytics_modelinfo_load (const gchar *model_filename); + +GST_ANALYTICS_META_API +gchar * +gst_analytics_modelinfo_find_tensor_name (GstAnalyticsModelInfo * modelinfo, + GstAnalyticsModelInfoTensorDirection dir, gsize index, const gchar *in_tensor_name, + GstTensorDataType data_type, gsize num_dims, const gsize * dims); + +GST_ANALYTICS_META_API +gchar * +gst_analytics_modelinfo_get_id (GstAnalyticsModelInfo *modelinfo, const gchar * tensor_name); + +GST_ANALYTICS_META_API +gchar * +gst_analytics_modelinfo_get_group_id (GstAnalyticsModelInfo * modelinfo); + +GST_ANALYTICS_META_API +GQuark +gst_analytics_modelinfo_get_quark_id (GstAnalyticsModelInfo *modelinfo, const gchar * tensor_name); + +GST_ANALYTICS_META_API +GQuark 
+gst_analytics_modelinfo_get_quark_group_id (GstAnalyticsModelInfo * modelinfo); + +GST_ANALYTICS_META_API +gboolean +gst_analytics_modelinfo_get_target_ranges (GstAnalyticsModelInfo * modelinfo, + const gchar *tensor_name, gsize *num_ranges, gdouble **mins, gdouble **maxs); + +GST_ANALYTICS_META_API +gboolean +gst_analytics_modelinfo_get_input_scales_offsets (GstAnalyticsModelInfo * modelinfo, + const gchar *tensor_name, gsize num_input_ranges, const gdouble *input_mins, + const gdouble *input_maxs, gsize *num_output_ranges, gdouble **output_scales, + gdouble **output_offsets); + +GST_ANALYTICS_META_API +GstTensorDimOrder +gst_analytics_modelinfo_get_dims_order (GstAnalyticsModelInfo * modelinfo, + const gchar * tensor_name); + +GST_ANALYTICS_META_API +gchar * +gst_analytics_modelinfo_get_version (GstAnalyticsModelInfo * modelinfo); + +GST_ANALYTICS_META_API +void +gst_analytics_modelinfo_free (GstAnalyticsModelInfo *model_info); + +G_END_DECLS
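For reference, the keys read by the loader in modelinfo.c map onto a GKeyFile along these lines. This is an illustrative sketch assembled only from the fields the code consumes (a `[modelinfo]` header with `version` and `group-id`, one group per tensor with `dir`, `type`, `dims`, `dims-order`, `ranges`, and `id`, `,` as the list separator, and `;`-separated min,max pairs); the tensor names, the `type` spellings, and all values here are hypothetical, not taken from a shipped model.

```ini
# model.onnx.modelinfo (hypothetical example)
[modelinfo]
version=1.0
group-id=example-detector

[input_tensor]
dir=input
# type must match gst_tensor_data_type_get_name(); "uint8" is a guess
type=uint8
# a negative dimension acts as a wildcard/dynamic size
dims=1,3,-1,-1
dims-order=row-major
# per-channel target ranges: "min,max" pairs separated by ';'
ranges=0.0,1.0;0.0,1.0;0.0,1.0
id=input-frame

[output_tensor]
dir=output
type=float32
dims=1,100,6
id=detection-boxes
```

Loading `model.onnx` would pick this file up as `model.onnx.modelinfo` (or `model.modelinfo` as a fallback), per the lookup order documented in gst_analytics_modelinfo_load().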
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/audio/gstnonstreamaudiodecoder.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/audio/gstnonstreamaudiodecoder.h
Changed
@@ -403,7 +403,7 @@ GST_AUDIO_BAD_API GstBuffer *gst_nonstream_audio_decoder_allocate_output_buffer (GstNonstreamAudioDecoder * dec, - gsize size); + gsize size) G_GNUC_WARN_UNUSED_RESULT; G_END_DECLS
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/audio/gstplanaraudioadapter.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/audio/gstplanaraudioadapter.c
Changed
@@ -416,7 +416,7 @@ if (!buffer) buffer = cur; else - gst_buffer_append (buffer, cur); + buffer = gst_buffer_append (buffer, cur); need -= take_from_cur; cur_skip = 0;
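The fix above matters because `gst_buffer_append()` takes ownership of both buffers and may return a different buffer object, so a caller that discards the return value keeps a stale handle. A library-free C analog of that "use the returned head" contract (the `Buf`/`buf_append` names are ours, not GStreamer API):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Toy analog of gst_buffer_append(): consumes both inputs and returns a
 * newly allocated combined buffer, so the caller must reassign:
 *   buffer = buf_append (buffer, cur);
 * Calling buf_append (buffer, cur) and ignoring the result would leave
 * `buffer` pointing at freed memory, mirroring the bug fixed above. */
typedef struct { size_t len; char *data; } Buf;

static Buf *
buf_new (const char *s)
{
  Buf *b = malloc (sizeof (Buf));
  b->len = strlen (s);
  b->data = malloc (b->len + 1);
  memcpy (b->data, s, b->len + 1);
  return b;
}

static Buf *
buf_append (Buf * a, Buf * b)
{
  Buf *r = malloc (sizeof (Buf));
  r->len = a->len + b->len;
  r->data = malloc (r->len + 1);
  memcpy (r->data, a->data, a->len);
  memcpy (r->data + a->len, b->data, b->len + 1);
  free (a->data);
  free (a);
  free (b->data);
  free (b);
  return r;
}
```

The corrected adapter line follows exactly this pattern: the result of the append replaces the old `buffer` pointer.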
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/audio/gstplanaraudioadapter.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/audio/gstplanaraudioadapter.h
Changed
@@ -57,7 +57,7 @@ GType gst_planar_audio_adapter_get_type (void); GST_AUDIO_BAD_API -GstPlanarAudioAdapter * gst_planar_audio_adapter_new (void) G_GNUC_MALLOC; +GstPlanarAudioAdapter * gst_planar_audio_adapter_new (void) G_GNUC_MALLOC G_GNUC_WARN_UNUSED_RESULT; GST_AUDIO_BAD_API void gst_planar_audio_adapter_configure (GstPlanarAudioAdapter * adapter, @@ -76,11 +76,11 @@ GST_AUDIO_BAD_API GstBuffer * gst_planar_audio_adapter_get_buffer (GstPlanarAudioAdapter * adapter, - gsize nsamples, GstMapFlags flags); + gsize nsamples, GstMapFlags flags) G_GNUC_WARN_UNUSED_RESULT; GST_AUDIO_BAD_API GstBuffer * gst_planar_audio_adapter_take_buffer (GstPlanarAudioAdapter * adapter, - gsize nsamples, GstMapFlags flags); + gsize nsamples, GstMapFlags flags) G_GNUC_WARN_UNUSED_RESULT; GST_AUDIO_BAD_API gsize gst_planar_audio_adapter_available (GstPlanarAudioAdapter * adapter);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/codecparsers/gstav1parser.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/codecparsers/gstav1parser.c
Changed
@@ -2045,7 +2045,7 @@ gint i, j; GstAV1ParserResult retval = GST_AV1_PARSER_OK; gint clipped_value /* clippedValue */ ; - GstAV1SegmenationParams *seg_params; + GstAV1SegmentationParams *seg_params; gint feature_value = 0; const guint8 segmentation_feature_bits[GST_AV1_SEG_LVL_MAX] = { @@ -2119,7 +2119,7 @@ } } else { gint8 ref_idx; - GstAV1SegmenationParams *ref_seg_params; + GstAV1SegmentationParams *ref_seg_params; /* Copy it from prime_ref */ if (frame_header->primary_ref_frame >= GST_AV1_PRIMARY_REF_NONE) {
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/codecparsers/gstav1parser.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/codecparsers/gstav1parser.h
Changed
@@ -110,7 +110,15 @@ typedef struct _GstAV1MetadataTimecode GstAV1MetadataTimecode; typedef struct _GstAV1LoopFilterParams GstAV1LoopFilterParams; typedef struct _GstAV1QuantizationParams GstAV1QuantizationParams; -typedef struct _GstAV1SegmenationParams GstAV1SegmenationParams; +typedef struct _GstAV1SegmentationParams GstAV1SegmentationParams; +/** + * GstAV1SegmenationParams: + * + * Deprecated: 1.28: Use GstAV1SegmentationParams. + */ +#ifndef GST_DISABLE_DEPRECATED +#define GstAV1SegmenationParams GstAV1SegmentationParams +#endif typedef struct _GstAV1TileInfo GstAV1TileInfo; typedef struct _GstAV1CDEFParams GstAV1CDEFParams; typedef struct _GstAV1LoopRestorationParams GstAV1LoopRestorationParams; @@ -1151,7 +1159,7 @@ }; /** - * GstAV1SegmenationParams: + * GstAV1SegmentationParams: * @segmentation_enabled: equal to 1 indicates that this frame makes use of the segmentation * tool; @segmentation_enabled equal to 0 indicates that the frame does not use segmentation. * @segmentation_update_map: equal to 1 indicates that the segmentation map are updated during @@ -1173,7 +1181,7 @@ * This is used when decoding the segment id to only decode choices corresponding to used * segments. */ -struct _GstAV1SegmenationParams { +struct _GstAV1SegmentationParams { gboolean segmentation_enabled; guint8 segmentation_update_map; guint8 segmentation_temporal_update; @@ -1506,7 +1514,7 @@ * for performing inter prediction. * @loop_filter_params: a #GstAV1LoopFilterParams holding the loop filter parameters. * @quantization_params: a #GstAV1QuantizationParams holding the quantization parameters. - * @segmentation_params: a #GstAV1SegmenationParams holding the segementation parameters. + * @segmentation_params: a #GstAV1SegmentationParams holding the segmentation parameters. * @tile_info: a #GstAV1TileInfo holding the tile info. * @cdef_params: a #GstAV1CDEFParams holding the CDEF paramters. 
* @loop_restoration_params: a #GstAV1LoopRestorationParams holding the loop restoration parameters. @@ -1580,7 +1588,7 @@ GstAV1InterpolationFilter interpolation_filter; GstAV1LoopFilterParams loop_filter_params; GstAV1QuantizationParams quantization_params; - GstAV1SegmenationParams segmentation_params; + GstAV1SegmentationParams segmentation_params; GstAV1TileInfo tile_info; GstAV1CDEFParams cdef_params; GstAV1LoopRestorationParams loop_restoration_params; @@ -1649,7 +1657,7 @@ guint8 ref_subsampling_y; /* RefSubsamplingY */ guint8 ref_bit_depth; /* RefBitDepth */ guint32 ref_order_hint; /* RefOrderHint */ - GstAV1SegmenationParams ref_segmentation_params; + GstAV1SegmentationParams ref_segmentation_params; GstAV1GlobalMotionParams ref_global_motion_params; GstAV1LoopFilterParams ref_lf_params; GstAV1FilmGrainParams ref_film_grain_params;
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/codecparsers/gsth264bitwriter.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/codecparsers/gsth264bitwriter.c
Changed
@@ -172,27 +172,48 @@ size = 64; } - if (memcmp (scaling_list, default_lists[i], size)) - scaling_list_present_flag = TRUE; + switch (i) { + case 0: + case 3: + case 6: + case 7: + if (memcmp (scaling_list, default_lists[i], size)) + scaling_list_present_flag = TRUE; + break; + case 1: + case 2: + case 4: + case 5: + if (memcmp (scaling_list, scaling_lists_4x4[i - 1], size)) + scaling_list_present_flag = TRUE; + break; + case 8: + case 9: + case 10: + case 11: + if (memcmp (scaling_list, scaling_lists_8x8[i - 6 - 2], size)) + scaling_list_present_flag = TRUE; + break; + default: + break; + } WRITE_BITS (bw, scaling_list_present_flag, 1); if (scaling_list_present_flag) { guint8 last_scale, next_scale; gint8 delta_scale; - for (j = 0; j < size; j++) { - last_scale = next_scale = 8; + last_scale = next_scale = 8; - for (j = 0; j < size; j++) { - if (next_scale != 0) { - delta_scale = (gint8) (scaling_list[j] - last_scale); + for (j = 0; j < size; j++) { + if (next_scale != 0) { + delta_scale = (gint8) (scaling_list[j] - last_scale); - WRITE_SE (bw, delta_scale); + WRITE_SE (bw, delta_scale); - next_scale = scaling_list[j]; - } - last_scale = (next_scale == 0) ? last_scale : next_scale; + next_scale = scaling_list[j]; } + last_scale = (next_scale == 0) ? last_scale : next_scale; } } }
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/codecparsers/gsth264parser.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/codecparsers/gsth264parser.c
Changed
@@ -2278,6 +2278,7 @@ return GST_H264_PARSER_BROKEN_LINK; } pps->sequence = sps; + pps->sps_id = sps_id; qp_bd_offset = 6 * (sps->bit_depth_luma_minus8 + sps->separate_colour_plane_flag); @@ -2470,6 +2471,7 @@ } slice->pps = pps; + slice->pps_id = pps_id; sps = pps->sequence; if (!sps) { GST_WARNING ("couldn't find associated sequence parameter set with id: %d",
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/codecparsers/gsth264parser.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/codecparsers/gsth264parser.h
Changed
@@ -925,6 +925,15 @@ /* Since: 1.18 */ guint8 pic_scaling_matrix_present_flag; + + /** + * _GstH264PPS.sps_id: + * + * SPS id + * + * Since: 1.28 + */ + guint sps_id; }; struct _GstH264RefPicListModification @@ -1059,6 +1068,15 @@ * delta_pic_order_cnt1. (Since: 1.18) */ guint pic_order_cnt_bit_size; + + /** + * _GstH264SliceHdr.pps_id: + * + * PPS id + * + * Since: 1.28 + */ + guint pps_id; }; /**
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/codecparsers/gsth265bitwriter.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/codecparsers/gsth265bitwriter.c
Changed
@@ -659,7 +659,7 @@ /* If some previous matrix is the same, just ref it. */ scaling_list_pred_matrix_id_delta = 0; for (j = 0; j < matrixId; j++) { - gboolean ret; + gboolean ret GST_UNUSED_ASSERT; guint8 size2; const guint8 *sl2; gint16 scaling_list_dc_coef_minus8_2 = 8;
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/codecparsers/gsth265parser.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/codecparsers/gsth265parser.c
Changed
@@ -66,7 +66,6 @@ #include "nalutils.h" #include "gsth265parser.h" -#include "gsth265parser-private.h" #include <gst/base/gstbytereader.h> #include <gst/base/gstbitreader.h> @@ -2765,6 +2764,7 @@ return err; } + slice->pps_id = pps_id; slice->pps = pps; sps = pps->sps; if (!sps) { @@ -5171,27 +5171,40 @@ #undef SKIP_CONFIG_BITS } +/** + * gst_h265_parser_link_slice_hdr: + * @parser: a #GstH265Parser + * @slice: The #GstH265SliceHdr to fill. + * + * Link SPS and PPS of @parser to @slice. @slice must be valid and parsed + * already by @parser or other #GstH265Parser + * + * Returns: a #GstH265ParserResult + * + * Since: 1.28 + */ GstH265ParserResult -gst_h265_parser_link_slice_hdr (GstH265Parser * parser, GstH265SliceHdr * slice, - guint pps_id) +gst_h265_parser_link_slice_hdr (GstH265Parser * parser, GstH265SliceHdr * slice) { GstH265ParserResult ret; GstH265PPS *pps; g_return_val_if_fail (parser, GST_H265_PARSER_ERROR); g_return_val_if_fail (slice, GST_H265_PARSER_ERROR); - g_return_val_if_fail (pps_id < GST_H265_MAX_PPS_COUNT, GST_H265_PARSER_ERROR); + g_return_val_if_fail (slice->pps_id < GST_H265_MAX_PPS_COUNT, + GST_H265_PARSER_ERROR); - pps = gst_h265_parser_get_pps (parser, pps_id); + pps = gst_h265_parser_get_pps (parser, slice->pps_id); if (!pps) { GST_WARNING - ("couldn't find associated picture parameter set with id: %d", pps_id); + ("couldn't find associated picture parameter set with id: %d", + slice->pps_id); return GST_H265_PARSER_BROKEN_LINK; } ret = gst_h265_parser_fill_pps (parser, pps); if (ret != GST_H265_PARSER_OK) { - GST_WARNING ("couldn't fill pps id: %d", pps_id); + GST_WARNING ("couldn't fill pps id: %d", slice->pps_id); return ret; }
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/codecparsers/gsth265parser.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/codecparsers/gsth265parser.h
Changed
@@ -1556,6 +1556,15 @@ * Since: 1.22 */ guint long_term_ref_pic_set_size; + + /** + * _GstH265SliceHdr.pps_id: + * + * PPS id + * + * Since: 1.28 + */ + guint pps_id; }; struct _GstH265PicTiming @@ -2119,6 +2128,10 @@ GstH265PPS * pps); GST_CODEC_PARSERS_API +GstH265ParserResult gst_h265_parser_link_slice_hdr (GstH265Parser * parser, + GstH265SliceHdr * slice); + +GST_CODEC_PARSERS_API void gst_h265_parser_free (GstH265Parser * parser); GST_CODEC_PARSERS_API
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/codecparsers/gsth266parser.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/codecparsers/gsth266parser.c
Changed
@@ -675,7 +675,7 @@ const GstH266SPS * sps, const GstH266PPS * pps) { const GstH266RefPicListStruct *ref_list; - gint i, j, num_ltrp_entries; + gint i, j, num_ltrp_entries GST_UNUSED_ASSERT; GST_LOG ("parsing \"ref_pic_lists\""); @@ -1513,6 +1513,55 @@ } static GstH266ParserResult +gst_h266_parser_parse_registered_user_data (GstH266Parser * parser, + GstH266RegisteredUserData * rud, NalReader * nr, guint payload_size) +{ + guint8 *data = NULL; + guint i; + + rud->data = NULL; + rud->size = 0; + + if (payload_size < 2) { + GST_WARNING ("Too small payload size %d", payload_size); + return GST_H266_PARSER_BROKEN_DATA; + } + + READ_UINT8 (nr, rud->country_code, 8); + --payload_size; + + if (rud->country_code == 0xFF) { + READ_UINT8 (nr, rud->country_code_extension, 8); + --payload_size; + } else { + rud->country_code_extension = 0; + } + + if (payload_size < 1) { + GST_WARNING ("No more remaining payload data to store"); + return GST_H266_PARSER_BROKEN_DATA; + } + + data = g_malloc (payload_size); + for (i = 0; i < payload_size; ++i) { + READ_UINT8 (nr, data[i], 8); + } + + GST_MEMDUMP ("SEI user data", data, payload_size); + + rud->data = data; + rud->size = payload_size; + return GST_H266_PARSER_OK; + +error: + { + GST_WARNING ("error parsing \"Registered User Data\""); + g_free (data); + return GST_H266_PARSER_ERROR; + } +} + +static GstH266ParserResult gst_h266_parser_parse_du_info (GstH266DUInfo * dui, NalReader * nr, const GstH266BufferingPeriod * bp, guint8 TemporalId) { @@ -1929,6 +1978,180 @@ } /** + * gst_h266_parser_identify_and_split_nalu_vvc: + * @parser: a #GstH266Parser + * @data: The data to parse, must be the beginning of the NAL unit + * @offset: the offset from which to parse @data + * @size: the size of @data + * @nal_length_size: the size in bytes of the VVC nal length prefix. 
+ * @nalus: a caller-allocated #GArray of #GstH266NalUnit in which to store parsed nal headers + * @consumed: the size of consumed bytes + * + * Parses @data for a packetized (e.g., vvc1/vvi1) bitstream and + * sets @nalus. In addition to the NAL identifying process, + * this method scans for start-code prefixes to split a malformed packet into + * actual nal chunks. + * + * Returns: a #GstH266ParserResult + * + * Since: 1.28 + */ +GstH266ParserResult +gst_h266_parser_identify_and_split_nalu_vvc (GstH266Parser * parser, + const guint8 * data, guint offset, gsize size, guint8 nal_length_size, + GArray * nalus, gsize * consumed) +{ + GstBitReader br; + guint nalu_size; + guint remaining; + guint off; + guint sc_size; + + g_return_val_if_fail (parser != NULL, GST_H266_PARSER_ERROR); + g_return_val_if_fail (data != NULL, GST_H266_PARSER_ERROR); + g_return_val_if_fail (nalus != NULL, GST_H266_PARSER_ERROR); + g_return_val_if_fail (nal_length_size > 0 && nal_length_size < 5, + GST_H266_PARSER_ERROR); + + g_array_set_size (nalus, 0); + + if (consumed) + *consumed = 0; + + /* Would overflow guint below otherwise: the caller needs to ensure that + * this never happens */ + if (offset > G_MAXUINT32 - nal_length_size) { + GST_WARNING ("offset + nal_length_size overflow"); + return GST_H266_PARSER_BROKEN_DATA; + } + + if (size < offset + nal_length_size) { + GST_DEBUG ("Can't parse, buffer has too small size %" G_GSIZE_FORMAT + ", offset %u", size, offset); + return GST_H266_PARSER_ERROR; + } + + /* Read nal unit size and unwrap the size field */ + gst_bit_reader_init (&br, data + offset, size - offset); + nalu_size = gst_bit_reader_get_bits_uint32_unchecked (&br, + nal_length_size * 8); + + if (nalu_size < 2) { + GST_WARNING ("too small nal size %d", nalu_size); + return GST_H266_PARSER_BROKEN_DATA; + } + + if (size < (gsize) nalu_size + nal_length_size) { + GST_WARNING ("larger nalu size %d than data size %" G_GSIZE_FORMAT, + nalu_size + nal_length_size, size); + return 
GST_H266_PARSER_BROKEN_DATA; + } + + if (consumed) + *consumed = nalu_size + nal_length_size; + + off = offset + nal_length_size; + remaining = nalu_size; + sc_size = nal_length_size; + + /* Drop trailing start-code since it will not be scanned */ + if (remaining >= 3) { + if (data[off + remaining - 1] == 0x01 && data[off + remaining - 2] == 0x00 + && data[off + remaining - 3] == 0x00) { + remaining -= 3; + + /* 4 bytes start-code */ + if (remaining > 0 && data[off + remaining - 1] == 0x00) + remaining--; + } + } + + /* Looping to split malformed nal units. nal-length field was dropped above + * so the expected bitstream structures are: + * + * <complete nalu> + * | nalu | + * sc scan result will be -1 and handled in CONDITION-A + * + * <nalu with startcode prefix> + * | SC | nalu | + * Hit CONDITION-C first then terminated in CONDITION-A + * + * <first nal has no startcode but others have> + * | nalu | SC | nalu | ... + * CONDITION-B handles those cases + */ + do { + GstH266NalUnit nalu; + gint sc_offset = -1; + guint skip_size = 0; + + memset (&nalu, 0, sizeof (GstH266NalUnit)); + + /* startcode 3 bytes + minimum nal size 2 */ + if (remaining >= 5) + sc_offset = scan_for_start_codes (data + off, remaining); + + if (sc_offset < 0) { + if (remaining >= 2) { + /* CONDITION-A */ + /* Last chunk */ + nalu.size = remaining; + nalu.sc_offset = off - sc_size; + nalu.offset = off; + nalu.data = (guint8 *) data; + nalu.valid = TRUE; + + gst_h266_parse_nalu_header (&nalu); + g_array_append_val (nalus, nalu); + } + break; + } else if ((sc_offset == 2 && data[off + sc_offset - 1] != 0) + || sc_offset > 2) { + /* CONDITION-B */ + /* Found trailing startcode prefix */ + + nalu.size = sc_offset; + if (data[off + sc_offset - 1] == 0) { + /* 4 bytes start code */ + nalu.size--; + } + + nalu.sc_offset = off - sc_size; + nalu.offset = off; + nalu.data = (guint8 *) data; + nalu.valid = TRUE; + + gst_h266_parse_nalu_header (&nalu); + g_array_append_val (nalus, nalu); + } else { + /* CONDITION-C */ + 
/* startcode located at beginning of this chunk without actual nal data. + * skip this start code */ + } + + skip_size = sc_offset + 3; + if (skip_size >= remaining) + break; + + /* no more nal-length bytes but 3-byte startcode */ + sc_size = 3; + if (sc_offset > 0 && data[off + sc_offset - 1] == 0) + sc_size++; + + remaining -= skip_size; + off += skip_size; + } while (remaining >= 2); + + if (nalus->len > 0) + return GST_H266_PARSER_OK; + + GST_WARNING ("No nal found"); + + return GST_H266_PARSER_BROKEN_DATA; +} + +/** * gst_h266_parser_parse_nal: * @parser: a #GstH266Parser * @nalu: The #GstH266NalUnit to parse @@ -5818,6 +6041,10 @@ res = gst_h266_parser_parse_pic_timing (&sei->payload.pic_timing, nr, &parser->buffering_period.payload.buffering_period, nal_tid); break; + case GST_H266_SEI_REGISTERED_USER_DATA: + res = gst_h266_parser_parse_registered_user_data (parser, + &sei->payload.registered_user_data, nr, payload_size >> 3); + break; case GST_H266_SEI_DU_INFO: if (!parser->last_buffering_period) { GST_WARNING ("No buffering_period SEI.");
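The splitter added above does two generic things before handing NAL units to the parser: it unwraps each unit's big-endian `nal_length_size` prefix, and it trims a stray trailing Annex B start code (00 00 01 or 00 00 00 01) from malformed packets. A standalone sketch of those two steps in plain C — the function names are illustrative only, not the GStreamer API:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Read the big-endian length prefix that vvc1/vvi1 packetized streams
 * place in front of each NAL unit. VVC allows 1-4 length bytes
 * (length_size_minus_one + 1 in the vvcC config record).
 * Returns 0 on invalid input. */
static uint32_t
read_nal_length (const uint8_t *data, size_t size, unsigned nal_length_size)
{
  uint32_t len = 0;
  unsigned i;

  if (nal_length_size == 0 || nal_length_size > 4 || size < nal_length_size)
    return 0;

  for (i = 0; i < nal_length_size; i++)
    len = (len << 8) | data[i];
  return len;
}

/* Drop a trailing 3- or 4-byte start code, mirroring the
 * "Drop trailing start-code" step in the splitter above.
 * Returns the trimmed payload size. */
static uint32_t
trim_trailing_start_code (const uint8_t *data, uint32_t off,
    uint32_t remaining)
{
  if (remaining >= 3 && data[off + remaining - 1] == 0x01
      && data[off + remaining - 2] == 0x00
      && data[off + remaining - 3] == 0x00) {
    remaining -= 3;
    /* 4-byte start code has an extra leading zero */
    if (remaining > 0 && data[off + remaining - 1] == 0x00)
      remaining--;
  }
  return remaining;
}
```

With a 4-byte length prefix of 5 followed by a 2-byte payload and a trailing 3-byte start code, `read_nal_length` yields 5 and `trim_trailing_start_code` shrinks the payload to 2, which is what lets the real splitter treat the remainder as a clean NAL chunk.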
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/codecparsers/gsth266parser.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/codecparsers/gsth266parser.h
Changed
@@ -448,6 +448,7 @@ typedef struct _GstH266BufferingPeriod GstH266BufferingPeriod; typedef struct _GstH266PicTiming GstH266PicTiming; +typedef struct _GstH266RegisteredUserData GstH266RegisteredUserData; typedef struct _GstH266DUInfo GstH266DUInfo; typedef struct _GstH266ScalableNesting GstH266ScalableNesting; typedef struct _GstH266SubPicLevelInfo GstH266SubPicLevelInfo; @@ -3073,6 +3074,28 @@ }; /** + * GstH266RegisteredUserData: + * + * The User data registered by Rec. ITU-T T.35 SEI message. + * + * @country_code: an itu_t_t35_country_code. + * @country_code_extension: an itu_t_t35_country_code_extension_byte. + * Should be ignored when @country_code is not 0xff + * @size: the size of @data in bytes + * @data: the data of itu_t_t35_payload_byte + * excluding @country_code and @country_code_extension + * + * Since: 1.28 + */ +struct _GstH266RegisteredUserData +{ + guint8 country_code; + guint8 country_code_extension; + guint size; + const guint8 *data; +}; + +/** * GstH266DUInfo: * * Structure defining the H266 decoding unit info. @@ -3232,6 +3255,7 @@ * @scalable_nesting: scalable nesting sei of #GstH266ScalableNesting. * @subpic_level_info: subpicture level info sei of #GstH266SubPicLevelInfo. * @frame_field_info: frame field info sei of #GstH266FrameFieldInfo. + * @registered_user_data: registered user data sei of #GstH266RegisteredUserData. (Since: 1.28) * * Since: 1.26 */ @@ -3247,7 +3271,19 @@ GstH266SubPicLevelInfo subpic_level_info; GstH266FrameFieldInfo frame_field_info; + /** + * _GstH266SEIMessage.payload.registered_user_data: + * + * Registered user data sei of #GstH266RegisteredUserData. + * + * Since: 1.28 + */ + GstH266RegisteredUserData registered_user_data; + /* ... 
could implement more */ + + /*< private >*/ + gpointer padding[GST_PADDING_LARGE]; } payload; }; @@ -3471,6 +3507,16 @@ gsize size, guint8 nal_length_size, GstH266NalUnit * nalu); + +GST_CODEC_PARSERS_API +GstH266ParserResult gst_h266_parser_identify_and_split_nalu_vvc (GstH266Parser * parser, + const guint8 * data, + guint offset, + gsize size, + guint8 nal_length_size, + GArray * nalus, + gsize * consumed); + GST_CODEC_PARSERS_API GstH266ParserResult gst_h266_parser_parse_nal (GstH266Parser * parser, GstH266NalUnit * nalu);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/codecs/gstav1decoder.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/codecs/gstav1decoder.c
Changed
@@ -19,7 +19,7 @@ /** * SECTION:gstav1decoder - * @title: Gstav1Decoder + * @title: GstAV1Decoder * @short_description: Base class to implement stateless AV1 decoders * @sources: * - gstav1picture.h
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/codecs/gstav1picture.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/codecs/gstav1picture.h
Changed
@@ -88,7 +88,7 @@ GType gst_av1_picture_get_type (void); GST_CODECS_API -GstAV1Picture * gst_av1_picture_new (void); +GstAV1Picture * gst_av1_picture_new (void) G_GNUC_WARN_UNUSED_RESULT; static inline GstAV1Picture * gst_av1_picture_ref (GstAV1Picture * picture)
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/codecs/gstcodecpicture.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/codecs/gstcodecpicture.c
Changed
@@ -17,6 +17,12 @@ * Boston, MA 02110-1301, USA. */ +/** + * SECTION:gstcodecpicture + * @title: GstCodecPicture + * @short_description: Base struct for coded picture representation + */ + #ifdef HAVE_CONFIG_H #include <config.h> #endif
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/codecs/gsth264decoder.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/codecs/gsth264decoder.h
Changed
@@ -249,7 +249,7 @@ GST_CODECS_API GstH264Picture * gst_h264_decoder_get_picture (GstH264Decoder * decoder, - guint32 system_frame_number); + guint32 system_frame_number) G_GNUC_WARN_UNUSED_RESULT; G_END_DECLS
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/codecs/gsth264picture.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/codecs/gsth264picture.h
Changed
@@ -182,7 +182,7 @@ GType gst_h264_picture_get_type (void); GST_CODECS_API -GstH264Picture * gst_h264_picture_new (void); +GstH264Picture * gst_h264_picture_new (void) G_GNUC_WARN_UNUSED_RESULT; static inline GstH264Picture * gst_h264_picture_ref (GstH264Picture * picture) @@ -306,11 +306,11 @@ GArray * out); GST_CODECS_API -GArray * gst_h264_dpb_get_pictures_all (GstH264Dpb * dpb); +GArray * gst_h264_dpb_get_pictures_all (GstH264Dpb * dpb) G_GNUC_WARN_UNUSED_RESULT; GST_CODECS_API GstH264Picture * gst_h264_dpb_get_picture (GstH264Dpb * dpb, - guint32 system_frame_number); + guint32 system_frame_number) G_GNUC_WARN_UNUSED_RESULT; GST_CODECS_API gint gst_h264_dpb_get_size (GstH264Dpb * dpb); @@ -325,7 +325,7 @@ GST_CODECS_API GstH264Picture * gst_h264_dpb_bump (GstH264Dpb * dpb, - gboolean drain); + gboolean drain) G_GNUC_WARN_UNUSED_RESULT; GST_CODECS_API void gst_h264_dpb_set_last_output (GstH264Dpb * dpb,
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/codecs/gsth265decoder.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/codecs/gsth265decoder.c
Changed
@@ -32,7 +32,6 @@ #include <gst/base/base.h> #include "gsth265decoder.h" -#include <gst/codecparsers/gsth265parser-private.h> GST_DEBUG_CATEGORY (gst_h265_decoder_debug); #define GST_CAT_DEFAULT gst_h265_decoder_debug @@ -152,7 +151,6 @@ GstH265Slice slice; } unit; GstH265NalUnitType nalu_type; - guint pps_id; } GstH265DecoderNalUnit; typedef struct @@ -961,7 +959,6 @@ decoder_nalu.unit.slice = slice; decoder_nalu.nalu_type = nalu->type; - decoder_nalu.pps_id = slice.header.pps->id; g_array_append_val (priv->nalu, decoder_nalu); @@ -1084,8 +1081,7 @@ break; } - rst = gst_h265_parser_link_slice_hdr (priv->parser, - &nalu->unit.slice.header, nalu->pps_id); + rst = gst_h265_parser_link_slice_hdr (priv->parser, &nalu->unit.slice.header); if (rst != GST_H265_PARSER_OK) { GST_ERROR_OBJECT (self, "Couldn't update slice header");
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/codecs/gsth265decoder.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/codecs/gsth265decoder.h
Changed
@@ -196,7 +196,7 @@ GST_CODECS_API GstH265Picture * gst_h265_decoder_get_picture (GstH265Decoder * decoder, - guint32 system_frame_number); + guint32 system_frame_number) G_GNUC_WARN_UNUSED_RESULT; G_END_DECLS
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/codecs/gsth265picture.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/codecs/gsth265picture.h
Changed
@@ -93,7 +93,7 @@ GType gst_h265_picture_get_type (void); GST_CODECS_API -GstH265Picture * gst_h265_picture_new (void); +GstH265Picture * gst_h265_picture_new (void) G_GNUC_WARN_UNUSED_RESULT; static inline GstH265Picture * gst_h265_picture_ref (GstH265Picture * picture) @@ -182,26 +182,26 @@ GST_CODECS_API GstH265Picture * gst_h265_dpb_get_ref_by_poc (GstH265Dpb * dpb, - gint poc); + gint poc) G_GNUC_WARN_UNUSED_RESULT; GST_CODECS_API GstH265Picture * gst_h265_dpb_get_ref_by_poc_lsb (GstH265Dpb * dpb, - gint poc_lsb); + gint poc_lsb) G_GNUC_WARN_UNUSED_RESULT; GST_CODECS_API GstH265Picture * gst_h265_dpb_get_short_ref_by_poc (GstH265Dpb * dpb, - gint poc); + gint poc) G_GNUC_WARN_UNUSED_RESULT; GST_CODECS_API GstH265Picture * gst_h265_dpb_get_long_ref_by_poc (GstH265Dpb * dpb, - gint poc); + gint poc) G_GNUC_WARN_UNUSED_RESULT; GST_CODECS_API -GArray * gst_h265_dpb_get_pictures_all (GstH265Dpb * dpb); +GArray * gst_h265_dpb_get_pictures_all (GstH265Dpb * dpb) G_GNUC_WARN_UNUSED_RESULT; GST_CODECS_API GstH265Picture * gst_h265_dpb_get_picture (GstH265Dpb * dpb, - guint32 system_frame_number); + guint32 system_frame_number) G_GNUC_WARN_UNUSED_RESULT; GST_CODECS_API gint gst_h265_dpb_get_size (GstH265Dpb * dpb); @@ -214,7 +214,7 @@ GST_CODECS_API GstH265Picture * gst_h265_dpb_bump (GstH265Dpb * dpb, - gboolean drain); + gboolean drain) G_GNUC_WARN_UNUSED_RESULT; G_DEFINE_AUTOPTR_CLEANUP_FUNC(GstH265Picture, gst_h265_picture_unref)
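The `G_GNUC_WARN_UNUSED_RESULT` annotations added across these picture/DPB getters matter because the returns are transfer-full references: a caller that silently drops one leaks a picture. On GCC and Clang the GLib macro expands to `__attribute__((warn_unused_result))`, so ignoring the return becomes a compiler warning. A minimal self-contained sketch of the pattern — the `Picture` type here is a stand-in, not the real GstH265Picture:

```c
#include <assert.h>
#include <stdlib.h>

#ifdef __GNUC__
#define WARN_UNUSED __attribute__((warn_unused_result))
#else
#define WARN_UNUSED
#endif

/* Hypothetical refcounted object standing in for GstH265Picture */
typedef struct
{
  int refcount;
} Picture;

/* transfer-full: the caller owns the returned reference, so calling
 * this and discarding the result would leak it -- the attribute turns
 * that mistake into a -Wunused-result warning */
WARN_UNUSED static Picture *
picture_new (void)
{
  Picture *p = calloc (1, sizeof (Picture));
  if (p)
    p->refcount = 1;
  return p;
}

static void
picture_unref (Picture *p)
{
  if (p && --p->refcount == 0)
    free (p);
}
```

Writing `picture_new ();` as a bare statement would now warn at compile time, which is exactly the class of reference leak these header changes are meant to catch in decoder subclasses.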
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/codecs/gsth266decoder.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/codecs/gsth266decoder.c
Changed
@@ -102,6 +102,9 @@ gboolean input_state_changed; GstFlowReturn last_flow; + + /* Split packetized data into actual nal chunks (for malformed stream) */ + GArray *split_nalu; }; typedef struct @@ -254,6 +257,73 @@ } } +static GstFlowReturn +gst_h266_decoder_parse_codec_data (GstH266Decoder * self, const guint8 * data, + gsize size) +{ + GstH266DecoderPrivate *priv = self->priv; + GstH266Parser *parser = priv->parser; + GstH266ParserResult pres; + GstFlowReturn ret = GST_FLOW_ERROR; + GstH266VPS vps; + GstH266SPS sps; + GstH266PPS pps; + GstH266DecoderConfigRecord *config = NULL; + guint i, j; + + pres = gst_h266_parser_parse_decoder_config_record (parser, + data, size, &config); + if (pres != GST_H266_PARSER_OK) { + GST_WARNING_OBJECT (self, "Failed to parse vvcC data"); + return GST_FLOW_ERROR; + } + + priv->nal_length_size = config->length_size_minus_one + 1; + GST_DEBUG_OBJECT (self, "nal length size %u", priv->nal_length_size); + + for (i = 0; i < config->nalu_array->len; i++) { + GstH266DecoderConfigRecordNalUnitArray *array = + &g_array_index (config->nalu_array, + GstH266DecoderConfigRecordNalUnitArray, i); + + for (j = 0; j < array->nalu->len; j++) { + GstH266NalUnit *nalu = &g_array_index (array->nalu, GstH266NalUnit, j); + + switch (nalu->type) { + case GST_H266_NAL_VPS: + pres = gst_h266_parser_parse_vps (parser, nalu, &vps); + if (pres != GST_H266_PARSER_OK) { + GST_WARNING_OBJECT (self, "Failed to parse VPS"); + goto out; + } + break; + case GST_H266_NAL_SPS: + pres = gst_h266_parser_parse_sps (parser, nalu, &sps); + if (pres != GST_H266_PARSER_OK) { + GST_WARNING_OBJECT (self, "Failed to parse SPS"); + goto out; + } + break; + case GST_H266_NAL_PPS: + pres = gst_h266_parser_parse_pps (parser, nalu, &pps); + if (pres != GST_H266_PARSER_OK) { + GST_WARNING_OBJECT (self, "Failed to parse PPS"); + goto out; + } + break; + default: + break; + } + } + } + + ret = GST_FLOW_OK; + +out: + gst_h266_decoder_config_record_free (config); + return ret; +} + static 
gboolean gst_h266_decoder_set_format (GstVideoDecoder * decoder, GstVideoCodecState * state) @@ -319,9 +389,16 @@ } if (state->codec_data) { - /* TODO: */ - GST_WARNING_OBJECT (self, "vvc1 or vvi1 mode is not supported now."); - return FALSE; + GstMapInfo map; + + gst_buffer_map (state->codec_data, &map, GST_MAP_READ); + if (gst_h266_decoder_parse_codec_data (self, map.data, map.size) != + GST_FLOW_OK) { + /* keep going without error. + * Probably inband SPS/PPS might be valid data */ + GST_WARNING_OBJECT (self, "Failed to handle codec data"); + } + gst_buffer_unmap (state->codec_data, &map); } return TRUE; @@ -1520,9 +1597,29 @@ if (priv->in_format == GST_H266_DECODER_FORMAT_VVC1 || priv->in_format == GST_H266_DECODER_FORMAT_VVI1) { - gst_buffer_unmap (in_buf, &map); - gst_h266_decoder_reset_frame_state (self); - return GST_FLOW_NOT_SUPPORTED; + guint offset = 0; + gsize consumed; + + do { + pres = gst_h266_parser_identify_and_split_nalu_vvc (priv->parser, + map.data, offset, map.size, priv->nal_length_size, priv->split_nalu, + &consumed); + if (pres != GST_H266_PARSER_OK) + break; + + for (i = 0; i < priv->split_nalu->len; i++) { + GstH266NalUnit *nl = + &g_array_index (priv->split_nalu, GstH266NalUnit, i); + pres = gst_h266_decoder_parse_nalu (self, nl); + if (pres != GST_H266_PARSER_OK) + break; + } + + if (pres != GST_H266_PARSER_OK) + break; + + offset += consumed; + } while (pres == GST_H266_PARSER_OK); } else { pres = gst_h266_parser_identify_nalu (priv->parser, map.data, 0, map.size, &nalu); @@ -1606,6 +1703,7 @@ for (i = 0; i < GST_H266_APS_TYPE_MAX; i++) g_array_unref (self->aps_list[i]); + g_array_unref (priv->split_nalu); gst_queue_array_free (priv->output_queue); G_OBJECT_CLASS (parent_class)->finalize (object); @@ -1634,6 +1732,7 @@ gst_queue_array_new_for_struct (sizeof (GstH266DecoderOutputFrame), 1); gst_queue_array_set_clear_func (priv->output_queue, (GDestroyNotify) gst_h266_decoder_clear_output_frame); + priv->split_nalu = g_array_new (FALSE, 
FALSE, sizeof (GstH266NalUnit)); } static void
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/codecs/gsth266picture.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/codecs/gsth266picture.h
Changed
@@ -108,7 +108,7 @@ GType gst_h266_picture_get_type (void); GST_CODECS_API -GstH266Picture *gst_h266_picture_new (void); +GstH266Picture *gst_h266_picture_new (void) G_GNUC_WARN_UNUSED_RESULT; static inline GstH266Picture * gst_h266_picture_ref (GstH266Picture * picture) @@ -195,7 +195,7 @@ guint max_latency_increase, guint max_dec_pic_buffering); GST_CODECS_API -GstH266Picture *gst_h266_dpb_bump (GstH266Dpb * dpb, gboolean drain); +GstH266Picture *gst_h266_dpb_bump (GstH266Dpb * dpb, gboolean drain) G_GNUC_WARN_UNUSED_RESULT; GST_CODECS_API void gst_h266_dpb_mark_all_non_ref (GstH266Dpb * dpb); @@ -205,13 +205,13 @@ GST_CODECS_API GstH266Picture *gst_h266_dpb_get_picture_by_poc_lsb (GstH266Dpb * dpb, - gint poc_lsb); + gint poc_lsb) G_GNUC_WARN_UNUSED_RESULT; GST_CODECS_API -GstH266Picture *gst_h266_dpb_get_picture_by_poc (GstH266Dpb * dpb, gint poc); +GstH266Picture *gst_h266_dpb_get_picture_by_poc (GstH266Dpb * dpb, gint poc) G_GNUC_WARN_UNUSED_RESULT; GST_CODECS_API -GArray *gst_h266_dpb_get_pictures_all (GstH266Dpb * dpb); +GArray *gst_h266_dpb_get_pictures_all (GstH266Dpb * dpb) G_GNUC_WARN_UNUSED_RESULT; G_DEFINE_AUTOPTR_CLEANUP_FUNC (GstH266Picture, gst_h266_picture_unref)
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/codecs/gstmpeg2decoder.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/codecs/gstmpeg2decoder.c
Changed
@@ -192,11 +192,13 @@ #define QUANT_MATRIX_EXT_INIT (GstMpegVideoQuantMatrixExt) { 0xff, { 0, } } +#ifndef G_DISABLE_ASSERT static inline gboolean _pic_hdr_is_valid (GstMpegVideoPictureHdr * hdr) { return hdr->tsn != 0xffff; } +#endif #define PIC_HDR_INIT (GstMpegVideoPictureHdr) { 0xffff, 0, }
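The `#ifndef G_DISABLE_ASSERT` guard added around `_pic_hdr_is_valid` presumably exists because the helper is referenced only from assertions: when GLib assertions are compiled out, the static function becomes unused and would trigger an unused-function warning. The same pattern works with plain C `assert`/`NDEBUG` — a self-contained sketch (names are illustrative, not the GStreamer code):

```c
#include <assert.h>

/* A validity helper referenced only from assertions gets compiled
 * out together with them, so guard its definition the same way the
 * assertion machinery is guarded. */
#ifndef NDEBUG
static int
pic_hdr_is_valid (unsigned tsn)
{
  /* 0xffff is used as the "uninitialized header" marker */
  return tsn != 0xffff;
}
#endif

static unsigned
pic_hdr_tsn_checked (unsigned tsn)
{
  /* In NDEBUG builds this whole statement disappears, so the
   * guarded helper is never referenced */
  assert (pic_hdr_is_valid (tsn));
  return tsn;
}
```

Either way the build stays warning-clean: with assertions enabled both functions are compiled and used; with `NDEBUG` defined both the assert and the helper vanish together.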
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/codecs/gstmpeg2picture.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/codecs/gstmpeg2picture.h
Changed
@@ -110,7 +110,7 @@ GType gst_mpeg2_picture_get_type (void); GST_CODECS_API -GstMpeg2Picture * gst_mpeg2_picture_new (void); +GstMpeg2Picture * gst_mpeg2_picture_new (void) G_GNUC_WARN_UNUSED_RESULT; static inline GstMpeg2Picture * gst_mpeg2_picture_ref (GstMpeg2Picture * picture) @@ -183,7 +183,7 @@ void gst_mpeg2_dpb_add (GstMpeg2Dpb * dpb, GstMpeg2Picture * picture); GST_CODECS_API -GstMpeg2Picture * gst_mpeg2_dpb_bump (GstMpeg2Dpb * dpb); +GstMpeg2Picture * gst_mpeg2_dpb_bump (GstMpeg2Dpb * dpb) G_GNUC_WARN_UNUSED_RESULT; GST_CODECS_API gboolean gst_mpeg2_dpb_need_bump (GstMpeg2Dpb * dpb);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/codecs/gstvp8decoder.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/codecs/gstvp8decoder.c
Changed
@@ -289,7 +289,7 @@ break; default: GST_WARNING_OBJECT (self, "unrecognized copy_buffer_to_golden %d", - frame_hdr->copy_buffer_to_alternate); + frame_hdr->copy_buffer_to_golden); break; } }
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/codecs/gstvp8picture.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/codecs/gstvp8picture.h
Changed
@@ -50,7 +50,7 @@ GType gst_vp8_picture_get_type (void); GST_CODECS_API -GstVp8Picture * gst_vp8_picture_new (void); +GstVp8Picture * gst_vp8_picture_new (void) G_GNUC_WARN_UNUSED_RESULT; static inline GstVp8Picture * gst_vp8_picture_ref (GstVp8Picture * picture)
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/codecs/gstvp9decoder.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/codecs/gstvp9decoder.c
Changed
@@ -48,7 +48,7 @@ */ /** * SECTION:gstvp9decoder - * @title: Gstvp9Decoder + * @title: GstVp9Decoder * @short_description: Base class to implement stateless VP9 decoders * @sources: * - gstvp9picture.h
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/codecs/gstvp9picture.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/codecs/gstvp9picture.h
Changed
@@ -50,7 +50,7 @@ GType gst_vp9_picture_get_type (void); GST_CODECS_API -GstVp9Picture * gst_vp9_picture_new (void); +GstVp9Picture * gst_vp9_picture_new (void) G_GNUC_WARN_UNUSED_RESULT; static inline GstVp9Picture * gst_vp9_picture_ref (GstVp9Picture * picture)
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/codecs/gstvp9statefulparser.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/codecs/gstvp9statefulparser.c
Changed
@@ -1387,7 +1387,7 @@ } /** - * gst_vp9_stateful_parser_new: + * gst_vp9_stateful_parser_new: (skip) * * Creates a new #GstVp9StatefulParser. It should be freed with * gst_vp9_stateful_parser_free() after use.
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/cuda/gstcudabufferpool.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/cuda/gstcudabufferpool.h
Changed
@@ -70,7 +70,7 @@ GType gst_cuda_buffer_pool_get_type (void); GST_CUDA_API -GstBufferPool * gst_cuda_buffer_pool_new (GstCudaContext * context); +GstBufferPool * gst_cuda_buffer_pool_new (GstCudaContext * context) G_GNUC_WARN_UNUSED_RESULT; GST_CUDA_API GstCudaStream * gst_buffer_pool_config_get_cuda_stream (GstStructure * config);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/cuda/gstcudacontext.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/cuda/gstcudacontext.h
Changed
@@ -72,10 +72,10 @@ GType gst_cuda_context_get_type (void); GST_CUDA_API -GstCudaContext * gst_cuda_context_new (guint device_id); +GstCudaContext * gst_cuda_context_new (guint device_id) G_GNUC_WARN_UNUSED_RESULT; GST_CUDA_API -GstCudaContext * gst_cuda_context_new_wrapped (CUcontext handler, CUdevice device); +GstCudaContext * gst_cuda_context_new_wrapped (CUcontext handler, CUdevice device) G_GNUC_WARN_UNUSED_RESULT; GST_CUDA_API gboolean gst_cuda_context_push (GstCudaContext * ctx);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/cuda/gstcudamemory.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/cuda/gstcudamemory.cpp
Changed
@@ -269,6 +269,7 @@ case GST_VIDEO_FORMAT_BGRx: case GST_VIDEO_FORMAT_ARGB: case GST_VIDEO_FORMAT_ABGR: + case GST_VIDEO_FORMAT_ARGB64: case GST_VIDEO_FORMAT_RGB: case GST_VIDEO_FORMAT_BGR: case GST_VIDEO_FORMAT_BGR10A2_LE: @@ -815,6 +816,7 @@ MAKE_FORMAT_RGBP (GBR_12LE, UNSIGNED_INT16), MAKE_FORMAT_RGBP (GBR_16LE, UNSIGNED_INT16), MAKE_FORMAT_RGBAP (GBRA, UNSIGNED_INT8), + MAKE_FORMAT_RGB (VUYA, UNSIGNED_INT8), }; /** @@ -1167,6 +1169,42 @@ } /** + * gst_cuda_allocator_alloc_stream_ordered: + * @allocator: (transfer none) (allow-none): a #GstCudaAllocator + * @context: (transfer none): a #GstCudaContext + * @stream: (transfer none): a #GstCudaStream + * @info: a #GstVideoInfo + * + * Returns: (transfer full) (nullable): a newly allocated #GstCudaMemory + * + * Since: 1.28 + */ +GstMemory * +gst_cuda_allocator_alloc_stream_ordered (GstCudaAllocator * allocator, + GstCudaContext * context, GstCudaStream * stream, const GstVideoInfo * info) +{ + guint alloc_height; + + g_return_val_if_fail (GST_IS_CUDA_CONTEXT (context), nullptr); + g_return_val_if_fail (GST_IS_CUDA_STREAM (stream), nullptr); + g_return_val_if_fail (info != nullptr, nullptr); + + if (stream->context != context) { + GST_ERROR_OBJECT (context, + "stream object is holding different CUDA context"); + return nullptr; + } + + if (!allocator) + allocator = (GstCudaAllocator *) _gst_cuda_allocator; + + alloc_height = gst_cuda_allocator_calculate_alloc_height (info); + + return gst_cuda_allocator_alloc_internal (allocator, context, stream, + info, info->stride[0], alloc_height, TRUE, nullptr); +} + +/** * gst_cuda_allocator_set_active: * @allocator: a #GstCudaAllocator * @active: the new active state
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/cuda/gstcudamemory.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/cuda/gstcudamemory.h
Changed
@@ -84,6 +84,24 @@ #define GST_MAP_CUDA (GST_MAP_FLAG_LAST << 1) /** + * GST_MAP_READ_CUDA: (value 131073) (type GstMapFlags) + * + * GstMapFlags value alias for GST_MAP_READ | GST_MAP_CUDA + * + * Since: 1.28 + */ +#define GST_MAP_READ_CUDA ((GstMapFlags) (GST_MAP_READ | GST_MAP_CUDA)) + +/** + * GST_MAP_WRITE_CUDA: (value 131074) (type GstMapFlags) + * + * GstMapFlags value alias for GST_MAP_WRITE | GST_MAP_CUDA + * + * Since: 1.28 + */ +#define GST_MAP_WRITE_CUDA ((GstMapFlags) (GST_MAP_WRITE | GST_MAP_CUDA)) + +/** * GST_CUDA_MEMORY_TYPE_NAME: * * Name of cuda memory type @@ -299,7 +317,13 @@ GstMemory * gst_cuda_allocator_alloc (GstCudaAllocator * allocator, GstCudaContext * context, GstCudaStream * stream, - const GstVideoInfo * info); + const GstVideoInfo * info) G_GNUC_WARN_UNUSED_RESULT; + +GST_CUDA_API +GstMemory * gst_cuda_allocator_alloc_stream_ordered (GstCudaAllocator * allocator, + GstCudaContext * context, + GstCudaStream * stream, + const GstVideoInfo * info) G_GNUC_WARN_UNUSED_RESULT; GST_CUDA_API gboolean gst_cuda_allocator_set_active (GstCudaAllocator * allocator, @@ -312,7 +336,7 @@ const GstVideoInfo * info, CUdeviceptr dev_ptr, gpointer user_data, - GDestroyNotify notify); + GDestroyNotify notify) G_GNUC_WARN_UNUSED_RESULT; GST_CUDA_API GstMemory * gst_cuda_allocator_virtual_alloc (GstCudaAllocator * allocator, @@ -320,7 +344,7 @@ GstCudaStream * stream, const GstVideoInfo * info, const CUmemAllocationProp * prop, - CUmemAllocationGranularity_flags granularity_flags); + CUmemAllocationGranularity_flags granularity_flags) G_GNUC_WARN_UNUSED_RESULT; /** * GstCudaPoolAllocator: @@ -357,20 +381,20 @@ GST_CUDA_API GstCudaPoolAllocator * gst_cuda_pool_allocator_new (GstCudaContext * context, GstCudaStream * stream, - const GstVideoInfo * info); + const GstVideoInfo * info) G_GNUC_WARN_UNUSED_RESULT; GST_CUDA_API GstCudaPoolAllocator * gst_cuda_pool_allocator_new_for_virtual_memory (GstCudaContext * context, GstCudaStream * stream, const 
GstVideoInfo * info, const CUmemAllocationProp * prop, - CUmemAllocationGranularity_flags granularity_flags); + CUmemAllocationGranularity_flags granularity_flags) G_GNUC_WARN_UNUSED_RESULT; GST_CUDA_API GstCudaPoolAllocator * gst_cuda_pool_allocator_new_full (GstCudaContext * context, GstCudaStream * stream, const GstVideoInfo * info, - GstStructure * config); + GstStructure * config) G_GNUC_WARN_UNUSED_RESULT; GST_CUDA_API GstFlowReturn gst_cuda_pool_allocator_acquire_memory (GstCudaPoolAllocator * allocator, GstMemory ** memory);
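The new `GST_MAP_READ_CUDA` / `GST_MAP_WRITE_CUDA` aliases simply OR the CUDA flag into the core map flags. The `(value 131073)` / `(value 131074)` annotations in the diff follow from the core GstMapFlags values (`GST_MAP_READ` = 1, `GST_MAP_WRITE` = 2, `GST_MAP_FLAG_LAST` = 1 << 16) combined with `GST_MAP_CUDA` = `GST_MAP_FLAG_LAST << 1` from this header. A standalone arithmetic check in plain C, with the constant values copied in rather than pulled from GStreamer headers:

```c
#include <assert.h>

/* Core GstMapFlags values, as defined in gstmemory.h */
enum
{
  MAP_READ = 1 << 0,            /* GST_MAP_READ */
  MAP_WRITE = 1 << 1,           /* GST_MAP_WRITE */
  MAP_FLAG_LAST = 1 << 16       /* GST_MAP_FLAG_LAST */
};

/* gst-libs/gst/cuda: GST_MAP_CUDA is (GST_MAP_FLAG_LAST << 1),
 * and the 1.28 aliases OR it with the read/write flags */
enum
{
  MAP_CUDA = MAP_FLAG_LAST << 1,        /* 131072 */
  MAP_READ_CUDA = MAP_READ | MAP_CUDA,  /* documented as 131073 */
  MAP_WRITE_CUDA = MAP_WRITE | MAP_CUDA /* documented as 131074 */
};
```

This confirms the literal values embedded in the GIR annotations: 131072 + 1 and 131072 + 2.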
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/cuda/gstcudamemorypool.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/cuda/gstcudamemorypool.h
Changed
@@ -53,7 +53,7 @@ GST_CUDA_API GstCudaMemoryPool * gst_cuda_memory_pool_new (GstCudaContext * context, - const CUmemPoolProps * props); + const CUmemPoolProps * props) G_GNUC_WARN_UNUSED_RESULT; GST_CUDA_API CUmemoryPool gst_cuda_memory_pool_get_handle (GstCudaMemoryPool * pool);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/cuda/gstcudanvrtc.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/cuda/gstcudanvrtc.cpp
Changed
@@ -323,7 +323,7 @@ ret = NvrtcCreateProgram (&prog, source, nullptr, 0, nullptr, nullptr); if (ret != NVRTC_SUCCESS) { - GST_ERROR ("couldn't create nvrtc program, ret %d", ret); + GST_WARNING ("couldn't create nvrtc program, ret %d", ret); return nullptr; } @@ -344,12 +344,12 @@ if (ret != NVRTC_SUCCESS) { gsize log_size; - GST_ERROR ("couldn't compile nvrtc program, ret %d", ret); + GST_WARNING ("couldn't compile nvrtc program, ret %d", ret); if (NvrtcGetProgramLogSize (prog, &log_size) == NVRTC_SUCCESS && log_size > 0) { gchar *compile_log = (gchar *) g_alloca (log_size); if (NvrtcGetProgramLog (prog, compile_log) == NVRTC_SUCCESS) { - GST_ERROR ("nvrtc compile log %s", compile_log); + GST_INFO ("nvrtc compile log %s", compile_log); } }
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/cuda/gstcudastream.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/cuda/gstcudastream.h
Changed
@@ -52,7 +52,7 @@ GType gst_cuda_stream_get_type (void); GST_CUDA_API -GstCudaStream * gst_cuda_stream_new (GstCudaContext * context); +GstCudaStream * gst_cuda_stream_new (GstCudaContext * context) G_GNUC_WARN_UNUSED_RESULT; GST_CUDA_API CUstream gst_cuda_stream_get_handle (GstCudaStream * stream);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/d3d11/gstd3d11converter.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/d3d11/gstd3d11converter.cpp
Changed
@@ -155,7 +155,7 @@ { FLOAT alpha; FLOAT padding[3]; - FLOAT padding_other[4]; + FLOAT padding_other[8]; }; struct PSConstBuffer @@ -2132,6 +2132,7 @@ } self->device = (GstD3D11Device *) gst_object_ref (device); + memset (&priv->alpha_data, 0, sizeof (priv->alpha_data)); priv->alpha_data.alpha = 1.0; priv->in_info = *in_info; priv->preproc_info = *in_info;
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/d3d12/gstd3d12.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/d3d12/gstd3d12.h
Changed
@@ -37,4 +37,5 @@ #include <gst/d3d12/gstd3d12frame.h> #include <gst/d3d12/gstd3d12memory.h> #include <gst/d3d12/gstd3d12utils.h> - +#include <gst/d3d12/gstd3d12stagingmemory.h> +#include <gst/d3d12/gstd3d12stagingbufferpool.h>
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/d3d12/gstd3d12_fwd.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/d3d12/gstd3d12_fwd.h
Changed
@@ -49,6 +49,9 @@ typedef struct _GstD3D12Memory GstD3D12Memory; typedef struct _GstD3D12MemoryPrivate GstD3D12MemoryPrivate; +typedef struct _GstD3D12StagingMemory GstD3D12StagingMemory; +typedef struct _GstD3D12StagingMemoryPrivate GstD3D12StagingMemoryPrivate; + typedef struct _GstD3D12Allocator GstD3D12Allocator; typedef struct _GstD3D12AllocatorClass GstD3D12AllocatorClass; typedef struct _GstD3D12AllocatorPrivate GstD3D12AllocatorPrivate; @@ -57,6 +60,10 @@ typedef struct _GstD3D12PoolAllocatorClass GstD3D12PoolAllocatorClass; typedef struct _GstD3D12PoolAllocatorPrivate GstD3D12PoolAllocatorPrivate; +typedef struct _GstD3D12StagingAllocator GstD3D12StagingAllocator; +typedef struct _GstD3D12StagingAllocatorClass GstD3D12StagingAllocatorClass; +typedef struct _GstD3D12StagingAllocatorPrivate GstD3D12StagingAllocatorPrivate; + typedef struct _GstD3D12Format GstD3D12Format; typedef struct _GstD3D12AllocationParams GstD3D12AllocationParams; @@ -65,6 +72,10 @@ typedef struct _GstD3D12BufferPoolClass GstD3D12BufferPoolClass; typedef struct _GstD3D12BufferPoolPrivate GstD3D12BufferPoolPrivate; +typedef struct _GstD3D12StagingBufferPool GstD3D12StagingBufferPool; +typedef struct _GstD3D12StagingBufferPoolClass GstD3D12StagingBufferPoolClass; +typedef struct _GstD3D12StagingBufferPoolPrivate GstD3D12StagingBufferPoolPrivate; + typedef struct _GstD3D12CmdAllocPool GstD3D12CmdAllocPool; typedef struct _GstD3D12CmdAllocPoolClass GstD3D12CmdAllocPoolClass; typedef struct _GstD3D12CmdAllocPoolPrivate GstD3D12CmdAllocPoolPrivate;
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/d3d12/gstd3d12compat.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/d3d12/gstd3d12compat.h
Changed
@@ -58,4 +58,17 @@
   return desc;
 #endif
 }
+
+template <typename T>
+LUID
+GetAdapterLuid (T device)
+{
+#if defined(_MSC_VER) || !defined(_WIN32)
+  return device->GetAdapterLuid ();
+#else
+  LUID luid;
+  device->GetAdapterLuid (&luid);
+  return luid;
+#endif
+}
 #endif /* __cplusplus */
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/d3d12/gstd3d12converter-builder.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/d3d12/gstd3d12converter-builder.cpp
Changed
@@ -181,6 +181,11 @@
         D3D12_DESCRIPTOR_RANGE_FLAG_DATA_STATIC_WHILE_SET_AT_EXECUTE));
   }
 
+  range_v1_1.push_back (CD3DX12_DESCRIPTOR_RANGE1
+      (D3D12_DESCRIPTOR_RANGE_TYPE_SRV, 1, 6, 0,
+          D3D12_DESCRIPTOR_RANGE_FLAG_DESCRIPTORS_VOLATILE |
+          D3D12_DESCRIPTOR_RANGE_FLAG_DATA_VOLATILE));
+
   param.InitAsDescriptorTable (range_v1_1.size (), range_v1_1.data (),
       D3D12_SHADER_VISIBILITY_PIXEL);
   param_list_v1_1.push_back (param);
@@ -190,11 +195,9 @@
   sampler_range_v1_1.push_back (CD3DX12_DESCRIPTOR_RANGE1
       (D3D12_DESCRIPTOR_RANGE_TYPE_SAMPLER, 1, 0, 0,
           D3D12_DESCRIPTOR_RANGE_FLAG_DESCRIPTORS_VOLATILE));
-  if (build_lut) {
-    sampler_range_v1_1.push_back (CD3DX12_DESCRIPTOR_RANGE1
-        (D3D12_DESCRIPTOR_RANGE_TYPE_SAMPLER, 1, 1, 0,
-            D3D12_DESCRIPTOR_RANGE_FLAG_DESCRIPTORS_VOLATILE));
-  }
+  sampler_range_v1_1.push_back (CD3DX12_DESCRIPTOR_RANGE1
+      (D3D12_DESCRIPTOR_RANGE_TYPE_SAMPLER, 1, 1, 0,
+          D3D12_DESCRIPTOR_RANGE_FLAG_DESCRIPTORS_VOLATILE));
   param.InitAsDescriptorTable (sampler_range_v1_1.size (),
       sampler_range_v1_1.data (), D3D12_SHADER_VISIBILITY_PIXEL);
   param_list_v1_1.push_back (param);
@@ -206,7 +209,7 @@
 
   /* PS alpha constant value, maybe updated */
   ps_root_const_ = (UINT) param_list_v1_1.size ();
-  param.InitAsConstants (8, 1, 0, D3D12_SHADER_VISIBILITY_PIXEL);
+  param.InitAsConstants (12, 1, 0, D3D12_SHADER_VISIBILITY_PIXEL);
   param_list_v1_1.push_back (param);
 
   /* PS CBV, this is static */
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/d3d12/gstd3d12converter-private.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/d3d12/gstd3d12converter-private.h
Changed
@@ -44,4 +44,27 @@
                                           gfloat brightness,
                                           gfloat contrast);
 
+GST_D3D12_API
+gboolean  gst_d3d12_converter_set_remap  (GstD3D12Converter * converter,
+                                          ID3D12Resource * remap_vector);
+
+GST_D3D12_API
+gboolean  gst_d3d12_converter_update_viewport (GstD3D12Converter * converter,
+                                          gint x,
+                                          gint y,
+                                          gint width,
+                                          gint height);
+
+GST_D3D12_API
+gboolean  gst_d3d12_converter_convert_buffer_for_uv_remap (GstD3D12Converter * converter,
+                                          GstBuffer * in_buf,
+                                          GstBuffer * out_buf,
+                                          GstD3D12FenceData * fence_data,
+                                          ID3D12GraphicsCommandList * command_list,
+                                          gboolean execute_gpu_wait,
+                                          guint num_remap,
+                                          ID3D12Resource ** lut,
+                                          GstVideoRectangle * viewport,
+                                          guint64 * border_color);
+
 G_END_DECLS
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/d3d12/gstd3d12converter.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/d3d12/gstd3d12converter.cpp
Changed
@@ -167,8 +167,10 @@ struct PSConstBufferDyn { float alphaFactor; - float padding3; + UINT samplerRemap; + float padding2; float hsvcFactor4; + float bgColor4; }; struct VertexData @@ -284,6 +286,7 @@ ConvertCtxCommon() { const_data_dyn.alphaFactor = 1.0; + const_data_dyn.samplerRemap = 0; const_data_dyn.hsvcFactor0 = DEFAULT_HUE; const_data_dyn.hsvcFactor1 = DEFAULT_SATURATION; const_data_dyn.hsvcFactor2 = DEFAULT_BRIGHTNESS; @@ -317,11 +320,11 @@ ComPtr<ID3D12Resource> gamma_enc_lut; ComPtr<ID3D12DescriptorHeap> gamma_lut_heap; ComPtr<ID3D12DescriptorHeap> sampler_heap; + ComPtr<ID3D12Resource> sampler_remap; D3D12_VIEWPORT viewportGST_VIDEO_MAX_PLANES; D3D12_RECT scissor_rectGST_VIDEO_MAX_PLANES; ComPtr<ID3D12Fence> setup_fence; guint64 setup_fence_val = 0; - gboolean have_lut = FALSE; gboolean need_color_balance = FALSE; PSConstBufferDyn const_data_dyn; }; @@ -427,6 +430,7 @@ gboolean clear_background = FALSE; FLOAT clear_color44; GstD3D12ColorMatrix clear_color_matrix; + GstD3D12ColorMatrix in_clear_color_matrix; GstVideoOrientationMethod video_direction = GST_VIDEO_ORIENTATION_IDENTITY; gboolean color_balance_enabled = FALSE; @@ -467,6 +471,12 @@ static void gst_d3d12_converter_finalize (GObject * object); static void gst_d3d12_converter_calculate_border_color (GstD3D12Converter * self); +static gboolean +gst_d3d12_converter_set_remap_unlocked (GstD3D12Converter * self, + ID3D12Resource * remap_vector); +static gboolean +gst_d3d12_converter_update_viewport_unlocked (GstD3D12Converter * self, + gint x, gint y, gint width, gint height); #define gst_d3d12_converter_parent_class parent_class G_DEFINE_TYPE (GstD3D12Converter, gst_d3d12_converter, GST_TYPE_OBJECT); @@ -863,7 +873,7 @@ const GstVideoInfo * in_info, const GstVideoInfo * out_info, D3D12_FILTER sampler_filter, const DXGI_SAMPLE_DESC * sample_desc, const D3D12_BLEND_DESC * blend_desc, - CONVERT_TYPE * convert_type, gboolean have_lut, + CONVERT_TYPE * convert_type, gboolean color_balance_enabled, 
GstD3D12ConverterAlphaMode src_alpha, GstD3D12ConverterAlphaMode dst_alpha, PSConstBuffer * const_data, ConvertCtxCommonPtr ref) @@ -873,6 +883,7 @@ VertexData vertex_data4; ComPtr < ID3D12Resource > upload_buf; GstD3D12Format in_format, out_format; + gboolean have_lut = FALSE; if (!gst_d3d12_device_get_format (self->device, GST_VIDEO_INFO_FORMAT (in_info), &in_format)) { @@ -932,6 +943,8 @@ } ctx->pipeline_datai.quad_data.resize (psblob_size); + if (ctx->pipeline_datai.crs->HaveLut ()) + have_lut = TRUE; } D3D12_SHADER_BYTECODE vs_blob; @@ -998,6 +1011,9 @@ if (have_lut) num_srv += 2; + /* for sampler remap SRV */ + num_srv++; + if (priv->max_srv_desc < num_srv) priv->max_srv_desc = num_srv; @@ -1006,7 +1022,6 @@ ctx->comm = std::make_shared < ConvertCtxCommon > (); auto comm = ctx->comm; - comm->have_lut = have_lut; if (!gst_d3d12_converter_create_sampler (self, sampler_filter, &comm->sampler_heap)) { @@ -1723,11 +1738,13 @@ { auto priv = self->priv; GstD3D12ColorMatrix *m = &priv->clear_color_matrix; + GstD3D12ColorMatrix *in_m = &priv->in_clear_color_matrix; const GstVideoInfo *out_info = &priv->out_info; gdouble a; gdouble rgb3; gdouble converted3; GstVideoFormat format = GST_VIDEO_INFO_FORMAT (out_info); + auto comm = priv->main_ctx->comm; a = ((priv->border_color & 0xffff000000000000) >> 48) / (gdouble) G_MAXUINT16; rgb0 = @@ -1738,12 +1755,19 @@ for (guint i = 0; i < 3; i++) { convertedi = 0; + comm->const_data_dyn.bgColori = 0; for (guint j = 0; j < 3; j++) { convertedi += m->matrixij * rgbj; + comm->const_data_dyn.bgColori += in_m->matrixij * rgbj; } convertedi += m->offseti; + comm->const_data_dyn.bgColori += in_m->offseti; + convertedi = CLAMP (convertedi, m->mini, m->maxi); + comm->const_data_dyn.bgColori = CLAMP (comm->const_data_dyn.bgColori, + in_m->mini, in_m->maxi); } + comm->const_data_dyn.bgColor3 = a; GST_DEBUG_OBJECT (self, "Calculated background color ARGB: %f, %f, %f, %f", a, converted0, converted1, converted2); @@ -1932,18 +1956,46 @@ } } 
+static void +gst_d3d12_converter_calculate_remap_border_color (GstD3D12Converter * self, + guint64 color) +{ + auto priv = self->priv; + GstD3D12ColorMatrix *in_m = &priv->in_clear_color_matrix; + gdouble a; + gdouble rgb3; + auto comm = priv->main_ctx->comm; + + a = ((color & 0xffff000000000000) >> 48) / (gdouble) G_MAXUINT16; + rgb0 = ((color & 0x0000ffff00000000) >> 32) / (gdouble) G_MAXUINT16; + rgb1 = ((color & 0x00000000ffff0000) >> 16) / (gdouble) G_MAXUINT16; + rgb2 = (color & 0x000000000000ffff) / (gdouble) G_MAXUINT16; + + for (guint i = 0; i < 3; i++) { + comm->const_data_dyn.bgColori = 0; + + for (guint j = 0; j < 3; j++) + comm->const_data_dyn.bgColori += in_m->matrixij * rgbj; + + comm->const_data_dyn.bgColori += in_m->offseti; + + comm->const_data_dyn.bgColori = CLAMP (comm->const_data_dyn.bgColori, + in_m->mini, in_m->maxi); + } + + comm->const_data_dyn.bgColor3 = a; +} + static gboolean gst_d3d12_converter_setup_colorspace (GstD3D12Converter * self, const GstVideoInfo * in_info, const GstVideoInfo * out_info, gboolean allow_gamma, gboolean allow_primaries, - gboolean color_balance_enabled, gboolean * have_lut, - CONVERT_TYPE * convert_type, PSConstBuffer * const_data) + gboolean color_balance_enabled, CONVERT_TYPE * convert_type, + PSConstBuffer * const_data) { GstVideoInfo matrix_in_info; GstVideoInfo matrix_out_info; - *have_lut = FALSE; - convert_type0 = CONVERT_TYPE::IDENTITY; convert_type1 = CONVERT_TYPE::COLOR_BALANCE; if (GST_VIDEO_INFO_IS_RGB (in_info) != GST_VIDEO_INFO_IS_RGB (out_info)) { @@ -2010,11 +2062,6 @@ return FALSE; } - if (convert_type0 == CONVERT_TYPE::GAMMA || - convert_type0 == CONVERT_TYPE::PRIMARY || color_balance_enabled) { - *have_lut = TRUE; - } - return TRUE; } @@ -2184,11 +2231,31 @@ &yuv_info, &priv->clear_color_matrix); } - gst_d3d12_converter_calculate_border_color (self); + if (GST_VIDEO_INFO_IS_RGB (&priv->in_info)) { + GstVideoInfo rgb_info = priv->in_info; + rgb_info.colorimetry.range = 
GST_VIDEO_COLOR_RANGE_0_255; + gst_d3d12_color_range_adjust_matrix_unorm (&rgb_info, &priv->in_info, + &priv->in_clear_color_matrix); + } else { + GstVideoInfo rgb_info; + GstVideoInfo yuv_info; + + gst_video_info_set_format (&rgb_info, GST_VIDEO_FORMAT_RGBA64_LE, + priv->in_info.width, priv->in_info.height); + convert_info_gray_to_yuv (&priv->in_info, &yuv_info); + + if (yuv_info.colorimetry.matrix == GST_VIDEO_COLOR_MATRIX_UNKNOWN || + yuv_info.colorimetry.matrix == GST_VIDEO_COLOR_MATRIX_RGB) { + GST_WARNING_OBJECT (self, "Invalid matrix is detected"); + yuv_info.colorimetry.matrix = GST_VIDEO_COLOR_MATRIX_BT709; + } + + gst_d3d12_rgb_to_yuv_matrix_unorm (&rgb_info, + &yuv_info, &priv->in_clear_color_matrix); + } PSConstBuffer const_data2; CONVERT_TYPE convert_type2; - gboolean have_lut = FALSE; if (priv->mipgen_enabled) { GstVideoFormat mipgen_format = GST_VIDEO_FORMAT_RGBA; @@ -2228,7 +2295,7 @@ * a supported mip format */ if (mipgen_format != GST_VIDEO_INFO_FORMAT (&priv->in_info)) { if (!gst_d3d12_converter_setup_colorspace (self, &priv->in_info, - &priv->mipgen_info, FALSE, FALSE, FALSE, &have_lut, convert_type, + &priv->mipgen_info, FALSE, FALSE, FALSE, convert_type, const_data)) { gst_object_unref (self); return nullptr; @@ -2242,7 +2309,7 @@ priv->mipgen_ctx = gst_d3d12_converter_setup_resource (self, &priv->in_info, &priv->mipgen_info, DEFAULT_SAMPLER_FILTER, &sample_desc_default, - &blend_desc_default, convert_type, have_lut, FALSE, + &blend_desc_default, convert_type, FALSE, GST_D3D12_CONVERTER_ALPHA_MODE_STRAIGHT, GST_D3D12_CONVERTER_ALPHA_MODE_STRAIGHT, const_data, nullptr); if (!priv->mipgen_ctx) { @@ -2292,14 +2359,14 @@ if (!gst_d3d12_converter_setup_colorspace (self, &priv->in_info, &priv->out_info, allow_gamma, allow_primaries, - priv->color_balance_enabled, &have_lut, convert_type, const_data)) { + priv->color_balance_enabled, convert_type, const_data)) { gst_object_unref (self); return nullptr; } priv->main_ctx = 
gst_d3d12_converter_setup_resource (self, &priv->in_info, &priv->out_info, sampler_filter, &priv->sample_desc, &priv->blend_desc, - convert_type, have_lut, priv->color_balance_enabled, + convert_type, priv->color_balance_enabled, priv->src_alpha_mode, priv->dst_alpha_mode, const_data, nullptr); if (!priv->main_ctx) { gst_object_unref (self); @@ -2309,14 +2376,14 @@ if (priv->mipgen_ctx) { if (!gst_d3d12_converter_setup_colorspace (self, &priv->mipgen_info, &priv->out_info, allow_gamma, allow_primaries, - priv->color_balance_enabled, &have_lut, convert_type, const_data)) { + priv->color_balance_enabled, convert_type, const_data)) { gst_object_unref (self); return nullptr; } priv->post_mipgen_ctx = gst_d3d12_converter_setup_resource (self, &priv->mipgen_info, &priv->out_info, sampler_filter, &priv->sample_desc, - &priv->blend_desc, convert_type, have_lut, priv->color_balance_enabled, + &priv->blend_desc, convert_type, priv->color_balance_enabled, priv->src_alpha_mode, priv->dst_alpha_mode, const_data, priv->main_ctx->comm); if (!priv->post_mipgen_ctx) { @@ -2325,6 +2392,8 @@ } } + gst_d3d12_converter_calculate_border_color (self); + D3D12_DESCRIPTOR_HEAP_DESC srv_heap_desc = { }; srv_heap_desc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV; srv_heap_desc.NumDescriptors = priv->max_srv_desc; @@ -2484,6 +2553,12 @@ GST_DEBUG_OBJECT (self, "Vertex updated"); } + guint pipeline_index = 0; + if (!is_internal && comm->need_color_balance) + pipeline_index = 1; + + auto & pipeline_data = ctx->pipeline_datapipeline_index; + auto device = gst_d3d12_device_get_device_handle (self->device); GstD3D12DescHeap *descriptor; @@ -2505,10 +2580,22 @@ cpu_handle.Offset (priv->srv_inc_size); } - if (comm->have_lut) { + if (pipeline_data.crs->HaveLut ()) { device->CopyDescriptorsSimple (2, cpu_handle, GetCPUDescriptorHandleForHeapStart (comm->gamma_lut_heap), D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV); + cpu_handle.Offset (2, priv->srv_inc_size); + } + + auto prev_sampler_remap = 
comm->const_data_dyn.samplerRemap; + + /* Do not enable UV remap for intermediate conversion */ + if (is_internal) + comm->const_data_dyn.samplerRemap = 0; + + if (comm->const_data_dyn.samplerRemap) { + device->CreateShaderResourceView (comm->sampler_remap.Get (), + nullptr, cpu_handle); } if (priv->clear_background) { @@ -2522,9 +2609,6 @@ reorder_rtv_handles (GST_VIDEO_INFO_FORMAT (&out_frame->info), out_frame->rtv_desc_handle, reordered_rtv_handle); - guint pipeline_index = comm->need_color_balance ? 1 : 0; - auto & pipeline_data = ctx->pipeline_datapipeline_index; - auto pso = pipeline_data.quad_data0.pso.Get (); cl->SetGraphicsRootSignature (pipeline_data.rs.Get ()); cl->SetPipelineState (pso); @@ -2578,6 +2662,15 @@ FENCE_NOTIFY_COM (ctx->vertex_upload.Detach ())); } + if (ctx->comm->sampler_remap && comm->const_data_dyn.samplerRemap) { + ComPtr < ID3D12Resource > remap_clone = ctx->comm->sampler_remap; + gst_d3d12_fence_data_push (fence_data, + FENCE_NOTIFY_COM (remap_clone.Detach ())); + } + + /* Restore remap flag */ + comm->const_data_dyn.samplerRemap = prev_sampler_remap; + auto sampler = comm->sampler_heap.Get (); sampler->AddRef (); gst_d3d12_fence_data_push (fence_data, FENCE_NOTIFY_COM (sampler)); @@ -2619,43 +2712,17 @@ priv->auto_mipgen_level = priv->mipgen_desc.MipLevels; } -/** - * gst_d3d12_converter_convert_buffer: - * @converter: a #GstD3D12Converter - * @in_buf: a #GstBuffer - * @out_buf: a #GstBuffer - * @fence_data: a #GstD3D12FenceData - * @command_list: a ID3D12GraphicsCommandList - * @execute_gpu_wait: Executes wait operation against @queue - * - * Records command list for conversion operation. converter will attach - * conversion command associated resources such as command allocator - * to @fence_data. - * - * If @execute_wait is %TRUE and buffers are associated with external fences, - * this method will schedule GPU wait operation against @queue. 
- * - * Returns: %TRUE if successful - * - * Since: 1.26 - */ -gboolean -gst_d3d12_converter_convert_buffer (GstD3D12Converter * converter, +static gboolean +gst_d3d12_converter_convert_buffer_internal (GstD3D12Converter * converter, GstBuffer * in_buf, GstBuffer * out_buf, GstD3D12FenceData * fence_data, - ID3D12GraphicsCommandList * command_list, gboolean execute_gpu_wait) + ID3D12GraphicsCommandList * command_list, gboolean execute_gpu_wait, + guint num_remap, ID3D12Resource ** lut, GstVideoRectangle * viewport, + guint64 * border_color) { - g_return_val_if_fail (GST_IS_D3D12_CONVERTER (converter), FALSE); - g_return_val_if_fail (GST_IS_BUFFER (in_buf), FALSE); - g_return_val_if_fail (GST_IS_BUFFER (out_buf), FALSE); - g_return_val_if_fail (fence_data, FALSE); - g_return_val_if_fail (command_list, FALSE); - GstD3D12Frame in_frame; GstD3D12Frame out_frame; auto priv = converter->priv; - std::lock_guard < std::mutex > lk (priv->prop_lock); - auto render_target = gst_d3d12_pack_acquire_render_target (priv->pack, out_buf); if (!render_target) { @@ -2881,20 +2948,91 @@ mipgen_frame.srv_desc_handle0 = cpu_handle; } - if (priv->post_mipgen_ctx) { - ret = gst_d3d12_converter_execute (converter, - &mipgen_frame, &out_frame, priv->post_mipgen_ctx, - FALSE, fence_data, command_list); + if (num_remap == 0) { + if (priv->post_mipgen_ctx) { + ret = gst_d3d12_converter_execute (converter, + &mipgen_frame, &out_frame, priv->post_mipgen_ctx, + FALSE, fence_data, command_list); + } else { + ret = gst_d3d12_converter_execute (converter, + &mipgen_frame, &out_frame, priv->main_ctx, + FALSE, fence_data, command_list); + } } else { - ret = gst_d3d12_converter_execute (converter, - &mipgen_frame, &out_frame, priv->main_ctx, - FALSE, fence_data, command_list); + auto prev_remap = priv->main_ctx->comm->sampler_remap; + auto prev_x = priv->dest_x; + auto prev_y = priv->dest_y; + auto prev_w = priv->dest_width; + auto prev_h = priv->dest_height; + FLOAT prev_color4; + for (guint i = 0; i 
< 4; i++) + prev_colori = priv->main_ctx->comm->const_data_dyn.bgColori; + + for (guint i = 0; i < num_remap; i++) { + gst_d3d12_converter_set_remap_unlocked (converter, luti); + gst_d3d12_converter_update_viewport_unlocked (converter, + viewporti.x, viewporti.y, viewporti.w, viewporti.h); + gst_d3d12_converter_calculate_remap_border_color (converter, + border_colori); + + if (priv->post_mipgen_ctx) { + ret = gst_d3d12_converter_execute (converter, + &mipgen_frame, &out_frame, priv->post_mipgen_ctx, + FALSE, fence_data, command_list); + } else { + ret = gst_d3d12_converter_execute (converter, + &mipgen_frame, &out_frame, priv->main_ctx, + FALSE, fence_data, command_list); + } + + if (!ret) + break; + } + + /* Restore previous state */ + gst_d3d12_converter_set_remap_unlocked (converter, prev_remap.Get ()); + gst_d3d12_converter_update_viewport_unlocked (converter, + prev_x, prev_y, prev_w, prev_h); + for (guint i = 0; i < 4; i++) + priv->main_ctx->comm->const_data_dyn.bgColori = prev_colori; } gst_d3d12_frame_unmap (&mipgen_frame); } else { - ret = gst_d3d12_converter_execute (converter, - &in_frame, &out_frame, priv->main_ctx, FALSE, fence_data, command_list); + if (num_remap == 0) { + ret = gst_d3d12_converter_execute (converter, &in_frame, &out_frame, + priv->main_ctx, FALSE, fence_data, command_list); + } else { + auto prev_remap = priv->main_ctx->comm->sampler_remap; + auto prev_x = priv->dest_x; + auto prev_y = priv->dest_y; + auto prev_w = priv->dest_width; + auto prev_h = priv->dest_height; + FLOAT prev_color4; + for (guint i = 0; i < 4; i++) + prev_colori = priv->main_ctx->comm->const_data_dyn.bgColori; + + for (guint i = 0; i < num_remap; i++) { + gst_d3d12_converter_set_remap_unlocked (converter, luti); + gst_d3d12_converter_update_viewport_unlocked (converter, + viewporti.x, viewporti.y, viewporti.w, viewporti.h); + gst_d3d12_converter_calculate_remap_border_color (converter, + border_colori); + + ret = gst_d3d12_converter_execute (converter, &in_frame, 
&out_frame, + priv->main_ctx, FALSE, fence_data, command_list); + + if (!ret) + break; + } + + /* Restore previous state */ + gst_d3d12_converter_set_remap_unlocked (converter, prev_remap.Get ()); + gst_d3d12_converter_update_viewport_unlocked (converter, + prev_x, prev_y, prev_w, prev_h); + for (guint i = 0; i < 4; i++) + priv->main_ctx->comm->const_data_dyn.bgColori = prev_colori; + } } if (ret) { @@ -2919,6 +3057,66 @@ } /** + * gst_d3d12_converter_convert_buffer: + * @converter: a #GstD3D12Converter + * @in_buf: a #GstBuffer + * @out_buf: a #GstBuffer + * @fence_data: a #GstD3D12FenceData + * @command_list: a ID3D12GraphicsCommandList + * @execute_gpu_wait: Executes wait operation against @queue + * + * Records command list for conversion operation. converter will attach + * conversion command associated resources such as command allocator + * to @fence_data. + * + * If @execute_wait is %TRUE and buffers are associated with external fences, + * this method will schedule GPU wait operation against @queue. 
+ * + * Returns: %TRUE if successful + * + * Since: 1.26 + */ +gboolean +gst_d3d12_converter_convert_buffer (GstD3D12Converter * converter, + GstBuffer * in_buf, GstBuffer * out_buf, GstD3D12FenceData * fence_data, + ID3D12GraphicsCommandList * command_list, gboolean execute_gpu_wait) +{ + g_return_val_if_fail (GST_IS_D3D12_CONVERTER (converter), FALSE); + g_return_val_if_fail (GST_IS_BUFFER (in_buf), FALSE); + g_return_val_if_fail (GST_IS_BUFFER (out_buf), FALSE); + g_return_val_if_fail (fence_data, FALSE); + g_return_val_if_fail (command_list, FALSE); + + auto priv = converter->priv; + std::lock_guard < std::mutex > lk (priv->prop_lock); + + return gst_d3d12_converter_convert_buffer_internal (converter, + in_buf, out_buf, fence_data, command_list, execute_gpu_wait, + 0, nullptr, nullptr, nullptr); +} + +gboolean +gst_d3d12_converter_convert_buffer_for_uv_remap (GstD3D12Converter * converter, + GstBuffer * in_buf, GstBuffer * out_buf, GstD3D12FenceData * fence_data, + ID3D12GraphicsCommandList * command_list, gboolean execute_gpu_wait, + guint num_remap, ID3D12Resource ** lut, GstVideoRectangle * viewport, + guint64 * border_color) +{ + g_return_val_if_fail (GST_IS_D3D12_CONVERTER (converter), FALSE); + g_return_val_if_fail (GST_IS_BUFFER (in_buf), FALSE); + g_return_val_if_fail (GST_IS_BUFFER (out_buf), FALSE); + g_return_val_if_fail (fence_data, FALSE); + g_return_val_if_fail (command_list, FALSE); + + auto priv = converter->priv; + std::lock_guard < std::mutex > lk (priv->prop_lock); + + return gst_d3d12_converter_convert_buffer_internal (converter, + in_buf, out_buf, fence_data, command_list, execute_gpu_wait, + num_remap, lut, viewport, border_color); +} + +/** * gst_d3d12_converter_update_blend_state: * @converter: a #GstD3D12Converter * @blend_desc: (nullable): D3D12_BLEND_DESC @@ -3053,3 +3251,61 @@ return FALSE; } + +static gboolean +gst_d3d12_converter_set_remap_unlocked (GstD3D12Converter * self, + ID3D12Resource * remap_vector) +{ + auto priv = 
self->priv; + auto comm = priv->main_ctx->comm; + + comm->sampler_remap = remap_vector; + if (remap_vector) + comm->const_data_dyn.samplerRemap = 1; + else + comm->const_data_dyn.samplerRemap = 0; + + return TRUE; +} + +gboolean +gst_d3d12_converter_set_remap (GstD3D12Converter * converter, + ID3D12Resource * remap_vector) +{ + g_return_val_if_fail (GST_IS_D3D12_CONVERTER (converter), FALSE); + + auto priv = converter->priv; + std::lock_guard < std::mutex > lk (priv->prop_lock); + return gst_d3d12_converter_set_remap_unlocked (converter, remap_vector); +} + +static gboolean +gst_d3d12_converter_update_viewport_unlocked (GstD3D12Converter * self, + gint x, gint y, gint width, gint height) +{ + auto priv = self->priv; + + if (priv->dest_x != x || priv->dest_y != y || priv->dest_width != width || + priv->dest_height != height) { + priv->dest_x = x; + priv->dest_y = y; + priv->dest_width = width; + priv->dest_height = height; + priv->update_dest_rect = TRUE; + } + + return TRUE; +} + +gboolean +gst_d3d12_converter_update_viewport (GstD3D12Converter * converter, gint x, + gint y, gint width, gint height) +{ + g_return_val_if_fail (GST_IS_D3D12_CONVERTER (converter), FALSE); + + auto priv = converter->priv; + std::lock_guard < std::mutex > lk (priv->prop_lock); + + return gst_d3d12_converter_update_viewport_unlocked (converter, x, y, width, + height); +}
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/d3d12/gstd3d12frame.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/d3d12/gstd3d12frame.cpp
Changed
@@ -354,6 +354,29 @@
 gst_d3d12_frame_copy (GstD3D12Frame * dest, const GstD3D12Frame * src,
     guint64 * fence_value)
 {
+  return gst_d3d12_frame_copy_full (dest, src, D3D12_COMMAND_LIST_TYPE_DIRECT,
+      nullptr, fence_value);
+}
+
+/**
+ * gst_d3d12_frame_copy_full:
+ * @dest: a #GstD3D12Frame
+ * @src: a #GstD3D12Frame
+ * @queue_type: queue type on which the copy command will be performed
+ * @fence: (out) (transfer full) (allow-none): a ID3D12Fence
+ * @fence_value: (out): a fence value for the copy operation
+ *
+ * Copy the contents from @src to @dest.
+ *
+ * Returns: %TRUE on success.
+ *
+ * Since: 1.28
+ */
+gboolean
+gst_d3d12_frame_copy_full (GstD3D12Frame * dest, const GstD3D12Frame * src,
+    D3D12_COMMAND_LIST_TYPE queue_type, ID3D12Fence ** fence,
+    guint64 * fence_value)
+{
   g_return_val_if_fail (dest, FALSE);
   g_return_val_if_fail (src, FALSE);
   g_return_val_if_fail (dest->device, FALSE);
@@ -361,6 +384,16 @@
   g_return_val_if_fail (GST_VIDEO_INFO_FORMAT (&dest->info) ==
       GST_VIDEO_INFO_FORMAT (&src->info), FALSE);
 
+  switch (queue_type) {
+    case D3D12_COMMAND_LIST_TYPE_DIRECT:
+    case D3D12_COMMAND_LIST_TYPE_COMPUTE:
+    case D3D12_COMMAND_LIST_TYPE_COPY:
+      break;
+    default:
+      GST_ERROR ("Invalid queue type %d", queue_type);
+      return FALSE;
+  }
+
   if (!gst_d3d12_device_is_equal (dest->device, src->device)) {
     GST_ERROR ("Cross device copy is not supported");
     return FALSE;
@@ -394,11 +427,21 @@
     }
   }
 
-  return gst_d3d12_device_copy_texture_region (dest->device,
+  auto ret = gst_d3d12_device_copy_texture_region (dest->device,
       GST_VIDEO_INFO_N_PLANES (&dest->info), args, fence_data,
       (guint) fences_to_wait.size (), fences_to_wait.data (),
-      fence_values_to_wait.data (), D3D12_COMMAND_LIST_TYPE_DIRECT,
+      fence_values_to_wait.data (), queue_type,
       fence_value);
+
+  if (!ret)
+    return FALSE;
+
+  if (fence) {
+    *fence = gst_d3d12_device_get_fence_handle (dest->device, queue_type);
+    (*fence)->AddRef ();
+  }
+
+  return TRUE;
 }
 
 /**
@@ -418,6 +461,30 @@
 gst_d3d12_frame_copy_plane (GstD3D12Frame * dest, const GstD3D12Frame * src,
     guint plane, guint64 * fence_value)
 {
+  return gst_d3d12_frame_copy_plane_full (dest, src, plane,
+      D3D12_COMMAND_LIST_TYPE_DIRECT, nullptr, fence_value);
+}
+
+/**
+ * gst_d3d12_frame_copy_plane_full:
+ * @dest: a #GstD3D12Frame
+ * @src: a #GstD3D12Frame
+ * @plane: a plane
+ * @queue_type: queue type on which the copy command will be performed
+ * @fence: (out) (transfer full) (allow-none): a ID3D12Fence
+ * @fence_value: (out): a fence value for the copy operation
+ *
+ * Copy the plane with index @plane from @src to @dest.
+ *
+ * Returns: %TRUE on success.
+ *
+ * Since: 1.28
+ */
+gboolean
+gst_d3d12_frame_copy_plane_full (GstD3D12Frame * dest,
+    const GstD3D12Frame * src, guint plane, D3D12_COMMAND_LIST_TYPE queue_type,
+    ID3D12Fence ** fence, guint64 * fence_value)
+{
   g_return_val_if_fail (dest, FALSE);
   g_return_val_if_fail (src, FALSE);
   g_return_val_if_fail (dest->device, FALSE);
@@ -426,6 +493,16 @@
       GST_VIDEO_INFO_FORMAT (&src->info), FALSE);
   g_return_val_if_fail (plane < GST_VIDEO_INFO_N_PLANES (&dest->info), FALSE);
 
+  switch (queue_type) {
+    case D3D12_COMMAND_LIST_TYPE_DIRECT:
+    case D3D12_COMMAND_LIST_TYPE_COMPUTE:
+    case D3D12_COMMAND_LIST_TYPE_COPY:
+      break;
+    default:
+      GST_ERROR ("Invalid queue type %d", queue_type);
+      return FALSE;
+  }
+
   if (!gst_d3d12_device_is_equal (dest->device, src->device)) {
     GST_ERROR ("Cross device copy is not supported");
     return FALSE;
@@ -443,7 +520,7 @@
       FENCE_NOTIFY_MINI_OBJECT (gst_buffer_ref (src->buffer)));
 
   auto cq = gst_d3d12_device_get_cmd_queue (src->device,
-      D3D12_COMMAND_LIST_TYPE_DIRECT);
+      queue_type);
   auto cq_handle = gst_d3d12_cmd_queue_get_handle (cq);
 
   if (src->fence[plane].fence)
     cq_handle->Wait (src->fence[plane].fence, src->fence[plane].fence_value);
@@ -452,9 +529,18 @@
 
   if (dest->fence[plane].fence)
     cq_handle->Wait (dest->fence[plane].fence, dest->fence[plane].fence_value);
 
-  return gst_d3d12_device_copy_texture_region (dest->device, 1, &args,
-      fence_data, 0, nullptr, nullptr, D3D12_COMMAND_LIST_TYPE_DIRECT,
+  auto ret = gst_d3d12_device_copy_texture_region (dest->device, 1, &args,
+      fence_data, 0, nullptr, nullptr, queue_type,
       fence_value);
+  if (!ret)
+    return ret;
+
+  if (fence) {
+    *fence = gst_d3d12_device_get_fence_handle (dest->device, queue_type);
+    (*fence)->AddRef ();
+  }
+
+  return TRUE;
 }
 
 /**
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/d3d12/gstd3d12frame.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/d3d12/gstd3d12frame.h
Changed
@@ -108,12 +108,27 @@
                                            guint64 * fence_value);
 
 GST_D3D12_API
+gboolean  gst_d3d12_frame_copy_full       (GstD3D12Frame * dest,
+                                           const GstD3D12Frame * src,
+                                           D3D12_COMMAND_LIST_TYPE queue_type,
+                                           ID3D12Fence ** fence,
+                                           guint64 * fence_value);
+
+GST_D3D12_API
 gboolean  gst_d3d12_frame_copy_plane      (GstD3D12Frame * dest,
                                            const GstD3D12Frame * src,
                                            guint plane,
                                            guint64 * fence_value);
 
 GST_D3D12_API
+gboolean  gst_d3d12_frame_copy_plane_full (GstD3D12Frame * dest,
+                                           const GstD3D12Frame * src,
+                                           guint plane,
+                                           D3D12_COMMAND_LIST_TYPE queue_type,
+                                           ID3D12Fence ** fence,
+                                           guint64 * fence_value);
+
+GST_D3D12_API
 gboolean  gst_d3d12_frame_fence_gpu_wait  (const GstD3D12Frame * frame,
                                            GstD3D12CmdQueue * queue);
 
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/d3d12/gstd3d12stagingbufferpool.cpp
Added
@@ -0,0 +1,255 @@
+/* GStreamer
+ * Copyright (C) 2023 Seungha Yang <seungha@centricular.com>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Library General Public
+ * License as published by the Free Software Foundation; either
+ * version 2 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Library General Public License for more details.
+ *
+ * You should have received a copy of the GNU Library General Public
+ * License along with this library; if not, write to the
+ * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
+ * Boston, MA 02110-1301, USA.
+ */
+
+#ifdef HAVE_CONFIG_H
+#include "config.h"
+#endif
+
+#include "gstd3d12.h"
+#include "gstd3d12memory-private.h"
+#include <directx/d3dx12.h>
+
+GST_DEBUG_CATEGORY_STATIC (gst_d3d12_staging_buffer_pool_debug);
+#define GST_CAT_DEFAULT gst_d3d12_staging_buffer_pool_debug
+
+struct _GstD3D12StagingBufferPoolPrivate
+{
+  GstVideoInfo info;
+  D3D12_PLACED_SUBRESOURCE_FOOTPRINT layout[GST_VIDEO_MAX_PLANES];
+  gint stride[GST_VIDEO_MAX_PLANES];
+  gsize offset[GST_VIDEO_MAX_PLANES];
+  gsize total_mem_size;
+  guint layout_count;
+};
+
+#define gst_d3d12_staging_buffer_pool_parent_class parent_class
+G_DEFINE_TYPE_WITH_PRIVATE (GstD3D12StagingBufferPool,
+    gst_d3d12_staging_buffer_pool, GST_TYPE_BUFFER_POOL);
+
+static const gchar **gst_d3d12_staging_buffer_pool_get_options (GstBufferPool *
+    pool);
+static gboolean gst_d3d12_staging_buffer_pool_set_config (GstBufferPool * pool,
+    GstStructure * config);
+static GstFlowReturn gst_d3d12_staging_buffer_pool_alloc_buffer (GstBufferPool *
+    pool, GstBuffer ** buffer, GstBufferPoolAcquireParams * params);
+
+static void
+gst_d3d12_staging_buffer_pool_class_init (GstD3D12StagingBufferPoolClass *
+    klass)
+{
+  auto pool_class = GST_BUFFER_POOL_CLASS (klass);
+
+  pool_class->get_options = gst_d3d12_staging_buffer_pool_get_options;
+  pool_class->set_config = gst_d3d12_staging_buffer_pool_set_config;
+  pool_class->alloc_buffer = gst_d3d12_staging_buffer_pool_alloc_buffer;
+
+  GST_DEBUG_CATEGORY_INIT (gst_d3d12_staging_buffer_pool_debug,
+      "d3d12stagingbufferpool", 0, "d3d12stagingbufferpool");
+}
+
+static void
+gst_d3d12_staging_buffer_pool_init (GstD3D12StagingBufferPool * self)
+{
+  self->priv = (GstD3D12StagingBufferPoolPrivate *)
+      gst_d3d12_staging_buffer_pool_get_instance_private (self);
+}
+
+static const gchar **
+gst_d3d12_staging_buffer_pool_get_options (GstBufferPool * pool)
+{
+  /* NOTE: d3d12 memory does not support alignment */
+  static const gchar *options[] =
+      { GST_BUFFER_POOL_OPTION_VIDEO_META, nullptr };
+
+  return options;
+}
+
+static void
+gst_d3d12_staging_buffer_pool_do_align (D3D12_RESOURCE_DESC & desc)
+{
+  UINT width_align =
+      D3D12_PROPERTY_LAYOUT_FORMAT_TABLE::GetWidthAlignment (desc.Format);
+  UINT height_align =
+      D3D12_PROPERTY_LAYOUT_FORMAT_TABLE::GetHeightAlignment (desc.Format);
+
+  if (width_align > 1)
+    desc.Width = GST_ROUND_UP_N (desc.Width, (UINT64) width_align);
+
+  if (height_align > 1)
+    desc.Height = GST_ROUND_UP_N (desc.Height, height_align);
+}
+
+static gboolean
+gst_d3d12_staging_buffer_pool_set_config (GstBufferPool * pool,
+    GstStructure * config)
+{
+  auto self = GST_D3D12_STAGING_BUFFER_POOL (pool);
+  auto priv = self->priv;
+  GstCaps *caps = nullptr;
+  guint min_buffers, max_buffers;
+
+  if (!gst_buffer_pool_config_get_params (config, &caps, nullptr, &min_buffers,
+          &max_buffers)) {
+    GST_WARNING_OBJECT (self, "invalid config");
+    return FALSE;
+  }
+
+  if (!caps) {
+    GST_WARNING_OBJECT (self, "Empty caps");
+    return FALSE;
+  }
+
+  if (!gst_video_info_from_caps (&priv->info, caps)) {
+    GST_WARNING_OBJECT (self, "Invalid caps %" GST_PTR_FORMAT, caps);
+    return FALSE;
+  }
+
+  GST_LOG_OBJECT (self, "%dx%d, caps %" GST_PTR_FORMAT, priv->info.width,
+      priv->info.height, caps);
+
+  GstD3D12Format d3d12_format;
+  auto format = GST_VIDEO_INFO_FORMAT (&priv->info);
+  if (!gst_d3d12_device_get_format (self->device, format, &d3d12_format)) {
+    GST_ERROR_OBJECT (self, "%s is not supported",
+        gst_video_format_to_string (format));
+    return FALSE;
+  }
+
+  memset (priv->stride, 0, sizeof (priv->stride));
+  memset (priv->offset, 0, sizeof (priv->offset));
+  memset (priv->layout, 0, sizeof (priv->layout));
+  priv->layout_count = 0;
+  priv->total_mem_size = 0;
+
+  auto device = gst_d3d12_device_get_device_handle (self->device);
+
+  if (d3d12_format.dxgi_format != DXGI_FORMAT_UNKNOWN) {
+    auto desc = CD3DX12_RESOURCE_DESC::Tex2D (d3d12_format.dxgi_format,
+        priv->info.width, priv->info.height, 1, 1, 1, 0,
+        D3D12_RESOURCE_FLAG_NONE);
+
+    gst_d3d12_staging_buffer_pool_do_align (desc);
+
+    auto num_planes = D3D12GetFormatPlaneCount (device,
+        d3d12_format.dxgi_format);
+
+    UINT64 mem_size;
+    device->GetCopyableFootprints (&desc, 0, num_planes, 0,
+        priv->layout, nullptr, nullptr, &mem_size);
+    for (guint i = 0; i < num_planes; i++) {
+      priv->stride[i] = priv->layout[i].Footprint.RowPitch;
+      priv->offset[i] = (gsize) priv->layout[i].Offset;
+    }
+
+    priv->layout_count = num_planes;
+    priv->total_mem_size = mem_size;
+  } else {
+    auto finfo = priv->info.finfo;
+    UINT64 base_offset = 0;
+
+    for (guint i = 0; i < GST_VIDEO_MAX_PLANES; i++) {
+      if (d3d12_format.resource_format[i] == DXGI_FORMAT_UNKNOWN)
+        break;
+
+      gint comp[GST_VIDEO_MAX_COMPONENTS];
+      gst_video_format_info_component (finfo, i, comp);
+
+      guint width = GST_VIDEO_INFO_COMP_WIDTH (&priv->info, comp[0]);
+      guint height = GST_VIDEO_INFO_COMP_HEIGHT (&priv->info, comp[0]);
+      width = MAX (width, 1);
+      height = MAX (height, 1);
+
+      auto desc =
+          CD3DX12_RESOURCE_DESC::Tex2D (d3d12_format.resource_format[i],
+          width, height, 1, 1, 1, 0, D3D12_RESOURCE_FLAG_NONE);
+
+      gst_d3d12_staging_buffer_pool_do_align (desc);
+
+      UINT64 mem_size;
+      device->GetCopyableFootprints (&desc, 0, 1, base_offset,
+          &priv->layout[i], nullptr, nullptr, &mem_size);
+
+      priv->stride[i] = priv->layout[i].Footprint.RowPitch;
+      priv->offset[i] = (gsize) priv->layout[i].Offset;
+
+      base_offset += mem_size;
+
+      priv->layout_count++;
+      base_offset = GST_ROUND_UP_N (base_offset,
+          (UINT64) D3D12_TEXTURE_DATA_PLACEMENT_ALIGNMENT);
+    }
+
+    priv->total_mem_size = (gsize) base_offset;
+  }
+
+  gst_buffer_pool_config_set_params (config,
+      caps, priv->total_mem_size, min_buffers, max_buffers);
+
+  return GST_BUFFER_POOL_CLASS (parent_class)->set_config (pool, config);
+}
+
+static GstFlowReturn
+gst_d3d12_staging_buffer_pool_alloc_buffer (GstBufferPool * pool,
+    GstBuffer ** buffer, GstBufferPoolAcquireParams * params)
+{
+  auto self = GST_D3D12_STAGING_BUFFER_POOL (pool);
+  auto priv = self->priv;
+  auto info = &priv->info;
+
+  auto mem = gst_d3d12_staging_allocator_alloc (nullptr, self->device,
+      priv->layout_count, priv->layout, priv->total_mem_size);
+  if (!mem) {
+    GST_ERROR_OBJECT (self, "Couldn't allocate memory");
+    return GST_FLOW_ERROR;
+  }
+
+  auto buf = gst_buffer_new ();
+  gst_buffer_append_memory (buf, mem);
+
+  gst_buffer_add_video_meta_full (buf, GST_VIDEO_FRAME_FLAG_NONE,
+      GST_VIDEO_INFO_FORMAT (info), GST_VIDEO_INFO_WIDTH (info),
+      GST_VIDEO_INFO_HEIGHT (info), GST_VIDEO_INFO_N_PLANES (info),
+      priv->offset, priv->stride);
+
+  *buffer = buf;
+
+  return GST_FLOW_OK;
+}
+
+/**
+ * gst_d3d12_staging_buffer_pool_new:
+ * @device: a #GstD3D12Device to use
+ *
+ * Returns: (transfer full): a #GstBufferPool that allocates buffers with
+ * #GstD3D12StagingMemory
+ *
+ * Since: 1.28
+ */
+GstBufferPool *
+gst_d3d12_staging_buffer_pool_new (GstD3D12Device * device)
+{
+  g_return_val_if_fail (GST_IS_D3D12_DEVICE (device), nullptr);
+
+  auto self = (GstD3D12StagingBufferPool *)
+      g_object_new (GST_TYPE_D3D12_STAGING_BUFFER_POOL, nullptr);
+  gst_object_ref_sink (self);
+
+  self->device = (GstD3D12Device *) gst_object_ref (device);
+
+  return GST_BUFFER_POOL_CAST (self);
+}
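The non-DXGI fallback path in set_config() above walks each video plane, rounds the row pitch up, and places the next plane's footprint at an alignment boundary. A minimal self-contained sketch of that arithmetic follows; it is illustrative only (the real code asks ID3D12Device::GetCopyableFootprints()), and the constants stand in for D3D12_TEXTURE_DATA_PITCH_ALIGNMENT (256) and D3D12_TEXTURE_DATA_PLACEMENT_ALIGNMENT (512). `ComputePlane` and `PlaneFootprint` are hypothetical helper names, not part of the library.

```cpp
#include <cassert>
#include <cstdint>

// Stand-ins for D3D12_TEXTURE_DATA_PITCH_ALIGNMENT and
// D3D12_TEXTURE_DATA_PLACEMENT_ALIGNMENT (illustrative sketch only).
constexpr uint64_t kPitchAlign = 256;
constexpr uint64_t kPlacementAlign = 512;

// Equivalent of GST_ROUND_UP_N for power-of-two-ish alignments.
constexpr uint64_t RoundUp (uint64_t v, uint64_t align)
{
  return (v + align - 1) / align * align;
}

struct PlaneFootprint
{
  uint64_t offset;      // becomes priv->offset[i]
  uint64_t row_pitch;   // becomes priv->stride[i]
  uint64_t size;
};

// Compute one plane's placed footprint at base_offset, then advance
// base_offset the way set_config() does between planes.
PlaneFootprint ComputePlane (uint64_t & base_offset, uint64_t width_bytes,
    uint64_t height)
{
  PlaneFootprint fp;
  fp.offset = base_offset;
  fp.row_pitch = RoundUp (width_bytes, kPitchAlign);
  fp.size = fp.row_pitch * height;
  base_offset = RoundUp (base_offset + fp.size, kPlacementAlign);
  return fp;
}
```

For a hypothetical 320x240 NV12 frame this yields a 512-byte pitch (320 rounded up to the next multiple of 256) and a UV plane that starts right after the Y plane, since the Y plane size already lands on a placement boundary.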
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/d3d12/gstd3d12stagingbufferpool.h
Added
@@ -0,0 +1,72 @@
+/* GStreamer
+ * Copyright (C) 2025 Seungha Yang <seungha@centricular.com>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Library General Public
+ * License as published by the Free Software Foundation; either
+ * version 2 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Library General Public License for more details.
+ *
+ * You should have received a copy of the GNU Library General Public
+ * License along with this library; if not, write to the
+ * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
+ * Boston, MA 02110-1301, USA.
+ */
+
+#pragma once
+
+#include <gst/gst.h>
+#include <gst/video/video.h>
+#include <gst/d3d12/gstd3d12_fwd.h>
+
+G_BEGIN_DECLS
+
+#define GST_TYPE_D3D12_STAGING_BUFFER_POOL (gst_d3d12_staging_buffer_pool_get_type ())
+#define GST_D3D12_STAGING_BUFFER_POOL(obj) (G_TYPE_CHECK_INSTANCE_CAST ((obj), GST_TYPE_D3D12_STAGING_BUFFER_POOL, GstD3D12StagingBufferPool))
+#define GST_D3D12_STAGING_BUFFER_POOL_CLASS(klass) (G_TYPE_CHECK_CLASS_CAST ((klass), GST_TYPE_D3D12_STAGING_BUFFER_POOL, GstD3D12StagingBufferPoolClass))
+#define GST_IS_D3D12_STAGING_BUFFER_POOL(obj) (G_TYPE_CHECK_INSTANCE_TYPE ((obj), GST_TYPE_D3D12_STAGING_BUFFER_POOL))
+#define GST_IS_D3D12_STAGING_BUFFER_POOL_CLASS(klass) (G_TYPE_CHECK_CLASS_TYPE ((klass), GST_TYPE_D3D12_STAGING_BUFFER_POOL))
+#define GST_D3D12_STAGING_BUFFER_POOL_GET_CLASS(obj) (G_TYPE_INSTANCE_GET_CLASS ((obj), GST_TYPE_D3D12_STAGING_BUFFER_POOL, GstD3D12StagingBufferPoolClass))
+#define GST_D3D12_STAGING_BUFFER_POOL_CAST(obj) ((GstD3D12StagingBufferPool*)(obj))
+
+/**
+ * GstD3D12StagingBufferPool:
+ *
+ * Opaque GstD3D12StagingBufferPool struct
+ *
+ * Since: 1.28
+ */
+struct _GstD3D12StagingBufferPool
+{
+  GstBufferPool parent;
+
+  GstD3D12Device *device;
+
+  /*< private >*/
+  GstD3D12StagingBufferPoolPrivate *priv;
+};
+
+/**
+ * GstD3D12StagingBufferPoolClass:
+ *
+ * Opaque GstD3D12StagingBufferPoolClass struct
+ *
+ * Since: 1.28
+ */
+struct _GstD3D12StagingBufferPoolClass
+{
+  GstBufferPoolClass parent_class;
+};
+
+GST_D3D12_API
+GType gst_d3d12_staging_buffer_pool_get_type (void);
+
+GST_D3D12_API
+GstBufferPool * gst_d3d12_staging_buffer_pool_new (GstD3D12Device * device);
+
+G_END_DECLS
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/d3d12/gstd3d12stagingmemory.cpp
Added
@@ -0,0 +1,426 @@
+/* GStreamer
+ * Copyright (C) 2025 Seungha Yang <seungha@centricular.com>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Library General Public
+ * License as published by the Free Software Foundation; either
+ * version 2 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Library General Public License for more details.
+ *
+ * You should have received a copy of the GNU Library General Public
+ * License along with this library; if not, write to the
+ * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
+ * Boston, MA 02110-1301, USA.
+ */
+
+#ifdef HAVE_CONFIG_H
+#include "config.h"
+#endif
+
+#include "gstd3d12.h"
+#include "gstd3d12-private.h"
+#include <wrl.h>
+#include <directx/d3dx12.h>
+
+/* *INDENT-OFF* */
+using namespace Microsoft::WRL;
+/* *INDENT-ON* */
+
+#ifndef GST_DISABLE_GST_DEBUG
+#define GST_CAT_DEFAULT ensure_debug_category()
+static GstDebugCategory *
+ensure_debug_category (void)
+{
+  static GstDebugCategory *cat = nullptr;
+
+  GST_D3D12_CALL_ONCE_BEGIN {
+    cat = _gst_debug_category_new ("d3d12stagingmemory",
+        0, "d3d12stagingmemory");
+  } GST_D3D12_CALL_ONCE_END;
+
+  return cat;
+}
+#endif
+
+static GstD3D12StagingAllocator *_d3d12_memory_allocator = nullptr;
+
+/* *INDENT-OFF* */
+struct _GstD3D12StagingMemoryPrivate
+{
+  ~_GstD3D12StagingMemoryPrivate ()
+  {
+    SetFence (nullptr, 0, true);
+  }
+
+  void SetFence (ID3D12Fence * new_fence, guint64 new_fence_val, bool wait)
+  {
+    if (fence && fence.Get () != new_fence && wait) {
+      auto completed = fence->GetCompletedValue ();
+      if (completed < fence_val)
+        fence->SetEventOnCompletion (fence_val, nullptr);
+    }
+
+    fence = new_fence;
+    if (new_fence)
+      fence_val = new_fence_val;
+    else
+      fence_val = 0;
+  }
+
+  ComPtr<ID3D12Resource> resource;
+  D3D12_PLACED_SUBRESOURCE_FOOTPRINT layout[GST_VIDEO_MAX_PLANES];
+  guint num_layouts;
+
+  std::mutex lock;
+
+  ComPtr<ID3D12Fence> fence;
+  UINT64 fence_val = 0;
+  INT64 cpu_write_count = 0;
+};
+/* *INDENT-ON* */
+
+/**
+ * gst_is_d3d12_staging_memory:
+ * @mem: a #GstMemory
+ *
+ * Returns: %TRUE if @mem is allocated by #GstD3D12StagingAllocator
+ *
+ * Since: 1.28
+ */
+gboolean
+gst_is_d3d12_staging_memory (GstMemory * mem)
+{
+  return mem != nullptr && mem->allocator != nullptr &&
+      (GST_IS_D3D12_STAGING_ALLOCATOR (mem->allocator));
+}
+
+/**
+ * gst_d3d12_staging_memory_sync:
+ * @mem: a #GstD3D12StagingMemory
+ *
+ * Wait for pending GPU operation
+ *
+ * Returns: %TRUE if successful
+ *
+ * Since: 1.28
+ */
+gboolean
+gst_d3d12_staging_memory_sync (GstD3D12StagingMemory * mem)
+{
+  g_return_val_if_fail (gst_is_d3d12_staging_memory (GST_MEMORY_CAST (mem)),
+      FALSE);
+
+  auto priv = mem->priv;
+
+  std::lock_guard < std::mutex > lk (priv->lock);
+  priv->SetFence (nullptr, 0, true);
+
+  return TRUE;
+}
+
+/**
+ * gst_d3d12_staging_memory_get_layout:
+ * @mem: a #GstD3D12StagingMemory
+ * @index: layout index
+ * @layout: D3D12_PLACED_SUBRESOURCE_FOOTPRINT
+ *
+ * Gets copyable resource layout for @index
+ *
+ * Returns: %TRUE if layout information is available for @index
+ *
+ * Since: 1.28
+ */
+gboolean
+gst_d3d12_staging_memory_get_layout (GstD3D12StagingMemory * mem,
+    guint index, D3D12_PLACED_SUBRESOURCE_FOOTPRINT * layout)
+{
+  g_return_val_if_fail (gst_is_d3d12_staging_memory (GST_MEMORY_CAST (mem)),
+      FALSE);
+  g_return_val_if_fail (layout, FALSE);
+
+  auto priv = mem->priv;
+  if (index >= priv->num_layouts)
+    return FALSE;
+
+  *layout = priv->layout[index];
+
+  return TRUE;
+}
+
+/**
+ * gst_d3d12_staging_memory_set_fence:
+ * @mem: a #GstD3D12StagingMemory
+ * @fence: (allow-none): a ID3D12Fence
+ * @fence_value: fence value
+ * @wait: waits for previously configured fence if any
+ *
+ * Replace fence object of @mem with new @fence.
+ * This method will block calling thread for synchronization
+ * if @wait is %TRUE and configured fence is different from new @fence
+ *
+ * Since: 1.28
+ */
+void
+gst_d3d12_staging_memory_set_fence (GstD3D12StagingMemory * mem,
+    ID3D12Fence * fence, guint64 fence_value, gboolean wait)
+{
+  g_return_if_fail (gst_is_d3d12_staging_memory (GST_MEMORY_CAST (mem)));
+  g_return_if_fail (gst_mini_object_is_writable ((GstMiniObject *) mem));
+
+  auto priv = mem->priv;
+  std::lock_guard < std::mutex > lk (priv->lock);
+  priv->SetFence (fence, fence_value, wait);
+}
+
+/**
+ * gst_d3d12_staging_memory_get_fence:
+ * @mem: a #GstD3D12StagingMemory
+ * @fence: (out) (transfer full) (allow-none): a ID3D12Fence
+ * @fence_value: (out) (allow-none): fence value
+ *
+ * Gets configured fence and fence value. Valid operations against returned
+ * fence object are ID3D12Fence::GetCompletedValue() and
+ * ID3D12Fence::SetEventOnCompletion(). Caller should not try to update
+ * completed value via ID3D12Fence::Signal() since the fence is likely
+ * owned by external component and shared only for read-only operations.
+ *
+ * Returns: %TRUE if @mem has configured fence object
+ *
+ * Since: 1.28
+ */
+gboolean
+gst_d3d12_staging_memory_get_fence (GstD3D12StagingMemory * mem,
+    ID3D12Fence ** fence, guint64 * fence_value)
+{
+  g_return_val_if_fail (gst_is_d3d12_staging_memory (GST_MEMORY_CAST (mem)),
+      FALSE);
+
+  auto priv = mem->priv;
+
+  std::lock_guard < std::mutex > lk (priv->lock);
+  if (priv->fence) {
+    if (fence) {
+      *fence = priv->fence.Get ();
+      (*fence)->AddRef ();
+    }
+
+    if (fence_value)
+      *fence_value = priv->fence_val;
+
+    return TRUE;
+  }
+
+  return FALSE;
+}
+
+static gpointer
+gst_d3d12_staging_memory_map_full (GstMemory * mem, GstMapInfo * info,
+    gsize maxsize)
+{
+  auto dmem = GST_D3D12_STAGING_MEMORY_CAST (mem);
+  auto priv = dmem->priv;
+  auto flags = info->flags;
+
+  std::lock_guard < std::mutex > lk (priv->lock);
+  if ((flags & GST_MAP_D3D12) == GST_MAP_D3D12) {
+    if (priv->cpu_write_count > 0) {
+      GST_INFO_OBJECT (dmem->device, "CPU write map count %" G_GINT64_FORMAT,
+          priv->cpu_write_count);
+      return nullptr;
+    }
+
+    return priv->resource.Get ();
+  }
+
+  priv->SetFence (nullptr, 0, true);
+
+  gpointer ret;
+  D3D12_RANGE range = { };
+  if ((flags & GST_MAP_READ) == GST_MAP_READ)
+    range.End = mem->size;
+
+  auto hr = priv->resource->Map (0, &range, &ret);
+  if (!gst_d3d12_result (hr, dmem->device)) {
+    GST_ERROR_OBJECT (dmem->device, "Couldn't map memory");
+    return nullptr;
+  }
+
+  if ((flags & GST_MAP_WRITE) == GST_MAP_WRITE)
+    priv->cpu_write_count++;
+
+  return ret;
+}
+
+static void
+gst_d3d12_staging_memory_unmap_full (GstMemory * mem, GstMapInfo * info)
+{
+  auto dmem = GST_D3D12_STAGING_MEMORY_CAST (mem);
+  auto priv = dmem->priv;
+  auto flags = info->flags;
+
+  std::lock_guard < std::mutex > lk (priv->lock);
+  if ((flags & GST_MAP_D3D12) == GST_MAP_D3D12)
+    return;
+
+  D3D12_RANGE range = { };
+  if ((flags & GST_MAP_WRITE) == GST_MAP_WRITE) {
+    range.End = mem->size;
+    if (priv->cpu_write_count <= 0)
+      GST_WARNING_OBJECT (dmem->device, "Couldn't trace CPU write map count");
+    else
+      priv->cpu_write_count--;
+  }
+
+  priv->resource->Unmap (0, &range);
+}
+
+static GstMemory *
+gst_d3d12_staging_memory_share (GstMemory * mem, gssize offset, gssize size)
+{
+  return nullptr;
+}
+
+struct _GstD3D12StagingAllocatorPrivate
+{
+  gpointer padding;
+};
+
+#define gst_d3d12_staging_allocator_parent_class parent_class
+G_DEFINE_TYPE_WITH_PRIVATE (GstD3D12StagingAllocator,
+    gst_d3d12_staging_allocator, GST_TYPE_ALLOCATOR);
+
+static GstMemory *gst_d3d12_staging_allocator_dummy_alloc (GstAllocator *
+    allocator, gsize size, GstAllocationParams * params);
+static void gst_d3d12_staging_allocator_free (GstAllocator * allocator,
+    GstMemory * mem);
+
+static void
+gst_d3d12_staging_allocator_class_init (GstD3D12StagingAllocatorClass * klass)
+{
+  auto allocator_class = GST_ALLOCATOR_CLASS (klass);
+
+  allocator_class->alloc = gst_d3d12_staging_allocator_dummy_alloc;
+  allocator_class->free = gst_d3d12_staging_allocator_free;
+}
+
+static void
+gst_d3d12_staging_allocator_init (GstD3D12StagingAllocator * self)
+{
+  auto alloc = GST_ALLOCATOR_CAST (self);
+
+  self->priv = (GstD3D12StagingAllocatorPrivate *)
+      gst_d3d12_staging_allocator_get_instance_private (self);
+
+  alloc->mem_type = GST_D3D12_STAGING_MEMORY_NAME;
+  alloc->mem_map_full = gst_d3d12_staging_memory_map_full;
+  alloc->mem_unmap_full = gst_d3d12_staging_memory_unmap_full;
+  alloc->mem_share = gst_d3d12_staging_memory_share;
+
+  GST_OBJECT_FLAG_SET (alloc, GST_ALLOCATOR_FLAG_CUSTOM_ALLOC);
+}
+
+static GstMemory *
+gst_d3d12_staging_allocator_dummy_alloc (GstAllocator * allocator, gsize size,
+    GstAllocationParams * params)
+{
+  g_return_val_if_reached (nullptr);
+}
+
+static void
+gst_d3d12_staging_allocator_free (GstAllocator * allocator, GstMemory * mem)
+{
+  auto dmem = GST_D3D12_STAGING_MEMORY_CAST (mem);
+
+  GST_LOG_OBJECT (allocator, "Free memory %p", mem);
+
+  delete dmem->priv;
+
+  gst_clear_object (&dmem->device);
+  g_free (dmem);
+}
+
+static void
+gst_d3d12_staging_memory_init_once (void)
+{
+  GST_D3D12_CALL_ONCE_BEGIN {
+    _d3d12_memory_allocator = (GstD3D12StagingAllocator *)
+        g_object_new (GST_TYPE_D3D12_STAGING_ALLOCATOR, nullptr);
+    gst_object_ref_sink (_d3d12_memory_allocator);
+    gst_object_ref (_d3d12_memory_allocator);
+
+    gst_allocator_register (GST_D3D12_STAGING_MEMORY_NAME,
+        GST_ALLOCATOR_CAST (_d3d12_memory_allocator));
+  } GST_D3D12_CALL_ONCE_END;
+}
+
+/**
+ * gst_d3d12_staging_allocator_alloc:
+ * @allocator: (allow-none): a #GstD3D12StagingAllocator
+ * @device: a GstD3D12Device
+ * @num_layouts: layout count
+ * @layouts: an array of D3D12_PLACED_SUBRESOURCE_FOOTPRINT
+ * @total_bytes: Total bytes to allocate
+ *
+ * Allocates staging resource allocated in custom heap
+ * D3D12_CPU_PAGE_PROPERTY_WRITE_BACK + D3D12_MEMORY_POOL_L0.
+ *
+ * Returns: (transfer full) (nullable): a newly allocated #GstD3D12StagingMemory
+ * or otherwise %NULL if allocation failed
+ *
+ * Since: 1.28
+ */
+GstMemory *
+gst_d3d12_staging_allocator_alloc (GstD3D12StagingAllocator * allocator,
+    GstD3D12Device * device, guint num_layouts,
+    const D3D12_PLACED_SUBRESOURCE_FOOTPRINT * layouts, gsize total_bytes)
+{
+  g_return_val_if_fail (GST_IS_D3D12_DEVICE (device), nullptr);
+  g_return_val_if_fail (num_layouts > 0, nullptr);
+  g_return_val_if_fail (num_layouts <= GST_VIDEO_MAX_PLANES, nullptr);
+  g_return_val_if_fail (layouts, nullptr);
+  g_return_val_if_fail (total_bytes > 0, nullptr);
+
+  if (!allocator) {
+    gst_d3d12_staging_memory_init_once ();
+    allocator = _d3d12_memory_allocator;
+  }
+
+  auto device_handle = gst_d3d12_device_get_device_handle (device);
+  D3D12_HEAP_PROPERTIES prop =
+      CD3DX12_HEAP_PROPERTIES (D3D12_CPU_PAGE_PROPERTY_WRITE_BACK,
+      D3D12_MEMORY_POOL_L0);
+  D3D12_RESOURCE_DESC desc = CD3DX12_RESOURCE_DESC::Buffer (total_bytes);
+  D3D12_HEAP_FLAGS heap_flags = D3D12_HEAP_FLAG_NONE;
+  if (gst_d3d12_device_non_zeroed_supported (device))
+    heap_flags = D3D12_HEAP_FLAG_CREATE_NOT_ZEROED;
+
+  ComPtr < ID3D12Resource > resource;
+  auto hr = device_handle->CreateCommittedResource (&prop, heap_flags,
+      &desc, D3D12_RESOURCE_STATE_COMMON, nullptr, IID_PPV_ARGS (&resource));
+  if (!gst_d3d12_result (hr, device)) {
+    GST_ERROR_OBJECT (device, "Couldn't allocate resource");
+    return nullptr;
+  }
+
+  auto priv = new GstD3D12StagingMemoryPrivate ();
+
+  priv->num_layouts = num_layouts;
+  priv->resource = resource;
+
+  for (guint i = 0; i < num_layouts; i++)
+    priv->layout[i] = layouts[i];
+
+  auto mem = g_new0 (GstD3D12StagingMemory, 1);
+  mem->priv = priv;
+  mem->device = (GstD3D12Device *) gst_object_ref (device);
+
+  gst_memory_init (GST_MEMORY_CAST (mem),
+      (GstMemoryFlags) 0, GST_ALLOCATOR_CAST (allocator), nullptr,
+      total_bytes, 0, 0, total_bytes);
+
+  return GST_MEMORY_CAST (mem);
+}
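The `SetFence()` helper above has a subtle contract: replacing a *different* fence with `wait=true` must first block until the previously tracked fence value completes, and clearing the slot resets the tracked value. A toy CPU-only model of that state machine is sketched below; `ToyFence` and `FenceSlot` are invented stand-ins (the real code calls ID3D12Fence::GetCompletedValue() and SetEventOnCompletion()).

```cpp
#include <cassert>
#include <cstdint>

// Toy fence: WaitFor() stands in for the blocking
// ID3D12Fence::SetEventOnCompletion() call, and records that a wait happened.
struct ToyFence
{
  uint64_t completed = 0;
  int waits = 0;

  void WaitFor (uint64_t value)
  {
    waits++;
    if (completed < value)
      completed = value;
  }
};

// Mirrors GstD3D12StagingMemoryPrivate::SetFence() semantics.
struct FenceSlot
{
  ToyFence *fence = nullptr;
  uint64_t fence_val = 0;

  void SetFence (ToyFence * new_fence, uint64_t new_val, bool wait)
  {
    // Only wait when swapping in a *different* fence and the old one
    // has not yet reached the tracked value.
    if (fence && fence != new_fence && wait && fence->completed < fence_val)
      fence->WaitFor (fence_val);

    fence = new_fence;
    fence_val = new_fence ? new_val : 0;
  }
};
```

The destructor and `gst_d3d12_staging_memory_sync()` are then just `SetFence(nullptr, 0, true)`: clear the slot, waiting for whatever GPU work was still tracked.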
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/d3d12/gstd3d12stagingmemory.h
Added
@@ -0,0 +1,130 @@
+/* GStreamer
+ * Copyright (C) 2025 Seungha Yang <seungha@centricular.com>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Library General Public
+ * License as published by the Free Software Foundation; either
+ * version 2 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Library General Public License for more details.
+ *
+ * You should have received a copy of the GNU Library General Public
+ * License along with this library; if not, write to the
+ * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
+ * Boston, MA 02110-1301, USA.
+ */
+
+#pragma once
+
+#include <gst/gst.h>
+#include <gst/video/video.h>
+#include <gst/d3d12/gstd3d12_fwd.h>
+
+G_BEGIN_DECLS
+
+#define GST_D3D12_STAGING_MEMORY_CAST(obj) ((GstD3D12StagingMemory *)obj)
+
+#define GST_TYPE_D3D12_STAGING_ALLOCATOR (gst_d3d12_staging_allocator_get_type())
+#define GST_D3D12_STAGING_ALLOCATOR(obj) (G_TYPE_CHECK_INSTANCE_CAST((obj), GST_TYPE_D3D12_STAGING_ALLOCATOR, GstD3D12Allocator))
+#define GST_D3D12_STAGING_ALLOCATOR_CLASS(klass) (G_TYPE_CHECK_CLASS_CAST((klass), GST_TYPE_D3D12_STAGING_ALLOCATOR, GstD3D12AllocatorClass))
+#define GST_IS_D3D12_STAGING_ALLOCATOR(obj) (G_TYPE_CHECK_INSTANCE_TYPE((obj), GST_TYPE_D3D12_STAGING_ALLOCATOR))
+#define GST_IS_D3D12_STAGING_ALLOCATOR_CLASS(klass) (G_TYPE_CHECK_CLASS_TYPE((klass), GST_TYPE_D3D12_STAGING_ALLOCATOR))
+#define GST_D3D12_STAGING_ALLOCATOR_GET_CLASS(obj) (G_TYPE_INSTANCE_GET_CLASS((obj), GST_TYPE_D3D12_STAGING_ALLOCATOR, GstD3D12AllocatorClass))
+#define GST_D3D12_STAGING_ALLOCATOR_CAST(obj) ((GstD3D12Allocator *)obj)
+
+/**
+ * GST_D3D12_STAGING_MEMORY_NAME:
+ *
+ * The name of the Direct3D12 staging memory
+ *
+ * Since: 1.28
+ */
+#define GST_D3D12_STAGING_MEMORY_NAME "D3D12StagingMemory"
+
+/**
+ * GstD3D12StagingMemory:
+ *
+ * Opaque GstD3D12StagingMemory struct
+ *
+ * Since: 1.28
+ */
+struct _GstD3D12StagingMemory
+{
+  GstMemory mem;
+
+  GstD3D12Device *device;
+
+  /*< private >*/
+  GstD3D12StagingMemoryPrivate *priv;
+  gpointer _gst_reserved[GST_PADDING];
+};
+
+GST_D3D12_API
+gboolean gst_is_d3d12_staging_memory (GstMemory * mem);
+
+GST_D3D12_API
+gboolean gst_d3d12_staging_memory_sync (GstD3D12StagingMemory * mem);
+
+GST_D3D12_API
+gboolean gst_d3d12_staging_memory_get_layout (GstD3D12StagingMemory * mem,
+                                              guint index,
+                                              D3D12_PLACED_SUBRESOURCE_FOOTPRINT * layout);
+
+GST_D3D12_API
+void     gst_d3d12_staging_memory_set_fence (GstD3D12StagingMemory * mem,
+                                             ID3D12Fence * fence,
+                                             guint64 fence_value,
+                                             gboolean wait);
+
+GST_D3D12_API
+gboolean gst_d3d12_staging_memory_get_fence (GstD3D12StagingMemory * mem,
+                                             ID3D12Fence ** fence,
+                                             guint64 * fence_value);
+
+/**
+ * GstD3D12StagingAllocator:
+ *
+ * Opaque GstD3D12StagingAllocator struct
+ *
+ * Since: 1.28
+ */
+struct _GstD3D12StagingAllocator
+{
+  GstAllocator allocator;
+
+  /*< private >*/
+  GstD3D12StagingAllocatorPrivate *priv;
+
+  gpointer _gst_reserved[GST_PADDING];
+};
+
+/**
+ * GstD3D12StagingAllocatorClass:
+ *
+ * Opaque GstD3D12AllocatorClass struct
+ *
+ * Since: 1.28
+ */
+struct _GstD3D12StagingAllocatorClass
+{
+  GstAllocatorClass allocator_class;
+
+  /*< private >*/
+  gpointer _gst_reserved[GST_PADDING_LARGE];
+};
+
+GST_D3D12_API
+GType gst_d3d12_staging_allocator_get_type (void);
+
+GST_D3D12_API
+GstMemory * gst_d3d12_staging_allocator_alloc (GstD3D12StagingAllocator * allocator,
+                                               GstD3D12Device * device,
+                                               guint num_layouts,
+                                               const D3D12_PLACED_SUBRESOURCE_FOOTPRINT * layouts,
+                                               gsize total_bytes);
+
+G_END_DECLS
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/d3d12/gstd3d12utils.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/d3d12/gstd3d12utils.cpp
Changed
@@ -26,8 +26,10 @@
 #include <mutex>
 #include <atomic>
 #include <directx/d3dx12.h>
+#include <wrl.h>
 
 /* *INDENT-OFF* */
+using namespace Microsoft::WRL;
 static std::recursive_mutex context_lock_;
 /* *INDENT-ON* */
 
@@ -582,6 +584,198 @@
   return device;
 }
 
+static gboolean
+is_staging_buffer (GstBuffer * buffer)
+{
+  if (gst_buffer_n_memory (buffer) != 1)
+    return FALSE;
+
+  auto mem = gst_buffer_peek_memory (buffer, 0);
+  return gst_is_d3d12_staging_memory (mem);
+}
+
+static gboolean
+is_d3d12_buffer (GstBuffer * buffer)
+{
+  auto mem = gst_buffer_peek_memory (buffer, 0);
+  return gst_is_d3d12_memory (mem);
+}
+
+static gboolean
+try_d3d12_to_staging_copy (GstBuffer * dst, GstBuffer * src,
+    const GstVideoInfo * info, D3D12_COMMAND_LIST_TYPE queue_type)
+{
+  if (!is_staging_buffer (dst))
+    return FALSE;
+
+  if (!is_d3d12_buffer (src))
+    return FALSE;
+
+  auto device = get_device_from_buffer (src);
+  if (!device)
+    return FALSE;
+
+  auto dmem = (GstD3D12StagingMemory *) gst_buffer_peek_memory (dst, 0);
+  if (!gst_d3d12_device_is_equal (dmem->device, device))
+    return FALSE;
+
+  GstD3D12Frame frame;
+  if (!gst_d3d12_frame_map (&frame, info, src, GST_MAP_READ_D3D12,
+          GST_D3D12_FRAME_MAP_FLAG_NONE)) {
+    return FALSE;
+  }
+
+  GstMapInfo map;
+  if (!gst_memory_map (GST_MEMORY_CAST (dmem), &map, GST_MAP_WRITE_D3D12)) {
+    gst_d3d12_frame_unmap (&frame);
+    return FALSE;
+  }
+
+  GstD3D12CopyTextureRegionArgs args[GST_VIDEO_MAX_PLANES] = { };
+  D3D12_BOX src_box[GST_VIDEO_MAX_PLANES] = { };
+  D3D12_PLACED_SUBRESOURCE_FOOTPRINT layout[GST_VIDEO_MAX_PLANES] = { };
+  auto resource = (ID3D12Resource *) map.data;
+
+  for (guint i = 0; i < GST_VIDEO_INFO_N_PLANES (info); i++) {
+    auto sbox = &src_box[i];
+    gst_d3d12_staging_memory_get_layout (dmem, i, &layout[i]);
+
+    sbox->left = 0;
+    sbox->top = 0;
+    sbox->right = MIN (layout[i].Footprint.Width,
+        (UINT) frame.plane_rect[i].right);
+    sbox->bottom = MIN (layout[i].Footprint.Height,
+        (UINT) frame.plane_rect[i].bottom);
+    sbox->front = 0;
+    sbox->back = 1;
+
+    args[i].src = CD3DX12_TEXTURE_COPY_LOCATION (frame.data[i],
+        frame.subresource_index[i]);
+    args[i].dst = CD3DX12_TEXTURE_COPY_LOCATION (resource, layout[i]);
+    args[i].src_box = &src_box[i];
+  }
+
+  GstD3D12FenceData *fence_data;
+  gst_d3d12_device_acquire_fence_data (device, &fence_data);
+  gst_d3d12_fence_data_push (fence_data,
+      FENCE_NOTIFY_MINI_OBJECT (gst_buffer_ref (src)));
+
+  std::vector < ID3D12Fence * >fences_to_wait;
+  std::vector < guint64 > fence_values_to_wait;
+
+  for (guint i = 0; i < G_N_ELEMENTS (frame.fence); i++) {
+    if (frame.fence[i].fence) {
+      fences_to_wait.push_back (frame.fence[i].fence);
+      fence_values_to_wait.push_back (frame.fence[i].fence_value);
+    }
+  }
+
+  guint64 fence_val = 0;
+  auto ret = gst_d3d12_device_copy_texture_region (device,
+      GST_VIDEO_INFO_N_PLANES (info), args, fence_data,
+      (guint) fences_to_wait.size (), fences_to_wait.data (),
+      fence_values_to_wait.data (), queue_type,
+      &fence_val);
+
+  gst_memory_unmap (GST_MEMORY_CAST (dmem), &map);
+  gst_d3d12_frame_unmap (&frame);
+
+  auto fence = gst_d3d12_device_get_fence_handle (device, queue_type);
+  gst_d3d12_staging_memory_set_fence (dmem, fence, fence_val, FALSE);
+
+  GST_TRACE ("Copy d3d12 to staging result %d", ret);
+
+  return ret;
+}
+
+static gboolean
+try_staging_to_d3d12_copy (GstBuffer * dst, GstBuffer * src,
+    const GstVideoInfo * info, D3D12_COMMAND_LIST_TYPE queue_type)
+{
+  if (!is_staging_buffer (src))
+    return FALSE;
+
+  if (!is_d3d12_buffer (dst))
+    return FALSE;
+
+  auto device = get_device_from_buffer (dst);
+  if (!device)
+    return FALSE;
+
+  auto dmem = (GstD3D12StagingMemory *) gst_buffer_peek_memory (src, 0);
+  if (!gst_d3d12_device_is_equal (dmem->device, device))
+    return FALSE;
+
+  GstD3D12Frame frame;
+  if (!gst_d3d12_frame_map (&frame, info, dst, GST_MAP_WRITE_D3D12,
+          GST_D3D12_FRAME_MAP_FLAG_NONE)) {
+    return FALSE;
+  }
+
+  GstMapInfo map;
+  if (!gst_memory_map (GST_MEMORY_CAST (dmem), &map, GST_MAP_READ_D3D12)) {
+    gst_d3d12_frame_unmap (&frame);
+    return FALSE;
+  }
+
+  GstD3D12CopyTextureRegionArgs args[GST_VIDEO_MAX_PLANES] = { };
+  D3D12_BOX src_box[GST_VIDEO_MAX_PLANES] = { };
+  D3D12_PLACED_SUBRESOURCE_FOOTPRINT layout[GST_VIDEO_MAX_PLANES] = { };
+
+  auto resource = (ID3D12Resource *) map.data;
+
+  for (guint i = 0; i < GST_VIDEO_INFO_N_PLANES (info); i++) {
+    auto sbox = &src_box[i];
+    gst_d3d12_staging_memory_get_layout (dmem, i, &layout[i]);
+
+    sbox->left = 0;
+    sbox->top = 0;
+    sbox->right = MIN (layout[i].Footprint.Width,
+        (UINT) frame.plane_rect[i].right);
+    sbox->bottom = MIN (layout[i].Footprint.Height,
+        (UINT) frame.plane_rect[i].bottom);
+    sbox->front = 0;
+    sbox->back = 1;
+
+    args[i].src = CD3DX12_TEXTURE_COPY_LOCATION (resource, layout[i]);
+    args[i].dst = CD3DX12_TEXTURE_COPY_LOCATION (frame.data[i],
+        frame.subresource_index[i]);
+    args[i].src_box = &src_box[i];
+  }
+
+  GstD3D12FenceData *fence_data;
+  gst_d3d12_device_acquire_fence_data (device, &fence_data);
+  gst_d3d12_fence_data_push (fence_data,
+      FENCE_NOTIFY_MINI_OBJECT (gst_buffer_ref (src)));
+
+  std::vector < ID3D12Fence * >fences_to_wait;
+  std::vector < guint64 > fence_values_to_wait;
+
+  for (guint i = 0; i < G_N_ELEMENTS (frame.fence); i++) {
+    if (frame.fence[i].fence) {
+      fences_to_wait.push_back (frame.fence[i].fence);
+      fence_values_to_wait.push_back (frame.fence[i].fence_value);
+    }
+  }
+
+  guint64 fence_val = 0;
+  auto ret = gst_d3d12_device_copy_texture_region (device,
+      GST_VIDEO_INFO_N_PLANES (info), args, fence_data,
+      (guint) fences_to_wait.size (), fences_to_wait.data (),
+      fence_values_to_wait.data (), queue_type,
+      &fence_val);
+
+  gst_memory_unmap (GST_MEMORY_CAST (dmem), &map);
+  gst_d3d12_frame_unmap (&frame);
+
+  auto fence = gst_d3d12_device_get_fence_handle (device, queue_type);
+  gst_d3d12_buffer_set_fence (dst, fence, fence_val, FALSE);
+
+  GST_TRACE ("Copy staging to d3d12 result %d", ret);
+
+  return ret;
+}
+
 /**
  * gst_d3d12_buffer_copy_into:
  * @dest: a #GstBuffer
@@ -597,10 +791,47 @@
 gst_d3d12_buffer_copy_into (GstBuffer * dest, GstBuffer * src,
     const GstVideoInfo * info)
 {
+  return gst_d3d12_buffer_copy_into_full (dest,
+      src, info, D3D12_COMMAND_LIST_TYPE_DIRECT);
+}
+
+/**
+ * gst_d3d12_buffer_copy_into_full:
+ * @dest: a #GstBuffer
+ * @src: a #GstBuffer
+ * @info: a #GstVideoInfo
+ * @queue_type: command queue type to use
+ *
+ * Copy @src data into @dest using command queue specified by @queue_type.
+ * This method executes only memory copy.
+ * Use gst_buffer_copy_into() method for metadata copy
+ *
+ * Since: 1.28
+ */
+gboolean
+gst_d3d12_buffer_copy_into_full (GstBuffer * dest, GstBuffer * src,
+    const GstVideoInfo * info, D3D12_COMMAND_LIST_TYPE queue_type)
+{
   g_return_val_if_fail (GST_IS_BUFFER (dest), FALSE);
   g_return_val_if_fail (GST_IS_BUFFER (src), FALSE);
   g_return_val_if_fail (info, FALSE);
 
+  switch (queue_type) {
+    case D3D12_COMMAND_LIST_TYPE_DIRECT:
+    case D3D12_COMMAND_LIST_TYPE_COMPUTE:
+    case D3D12_COMMAND_LIST_TYPE_COPY:
+      break;
+    default:
+      GST_ERROR ("Invalid queue type %d", queue_type);
+      return FALSE;
+  }
+
+  if (try_d3d12_to_staging_copy (dest, src, info, queue_type))
+    return TRUE;
+
+  if (try_staging_to_d3d12_copy (dest, src, info, queue_type))
+    return TRUE;
+
   auto num_mem = gst_buffer_n_memory (dest);
   if (gst_buffer_n_memory (src) != num_mem)
     return gst_d3d12_buffer_copy_into_fallback (dest, src, info);
@@ -628,15 +859,14 @@
   }
 
   guint64 fence_val = 0;
-  auto ret = gst_d3d12_frame_copy (&dest_frame, &src_frame, &fence_val);
+  ComPtr < ID3D12Fence > fence;
+  auto ret = gst_d3d12_frame_copy_full (&dest_frame, &src_frame, queue_type,
+      &fence, &fence_val);
   gst_d3d12_frame_unmap (&dest_frame);
   gst_d3d12_frame_unmap (&src_frame);
 
-  if (ret) {
-    auto fence = gst_d3d12_device_get_fence_handle (dest_device,
-        D3D12_COMMAND_LIST_TYPE_DIRECT);
-    gst_d3d12_buffer_set_fence (dest, fence, fence_val, FALSE);
-  }
+  if (ret)
+    gst_d3d12_buffer_set_fence (dest, fence.Get (), fence_val, FALSE);
 
   return ret;
 }
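Both copy helpers above clamp the per-plane copy box to the smaller of the staging footprint and the texture's plane rect, because either side may be padded past the visible video frame. A minimal sketch of that clamp, with invented local names (`Box`, `ClampCopyBox`) standing in for the D3D12_BOX setup in the loop:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Stand-in for D3D12_BOX right/bottom (left/top/front/back are fixed
// to 0/0/0/1 in the real loop).
struct Box
{
  uint32_t right;
  uint32_t bottom;
};

// Mirrors: sbox->right  = MIN (layout[i].Footprint.Width,  plane_rect[i].right)
//          sbox->bottom = MIN (layout[i].Footprint.Height, plane_rect[i].bottom)
Box ClampCopyBox (uint32_t fp_width, uint32_t fp_height,
    uint32_t rect_right, uint32_t rect_bottom)
{
  return Box { std::min (fp_width, rect_right),
      std::min (fp_height, rect_bottom) };
}
```

So a 320-pixel-wide plane stored with a padded 512-wide footprint still copies only 320 columns, and conversely a footprint narrower than the texture rect bounds the copy.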
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/d3d12/gstd3d12utils.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/d3d12/gstd3d12utils.h
Changed
@@ -67,6 +67,12 @@
                                            const GstVideoInfo * info);
 
 GST_D3D12_API
+gboolean  gst_d3d12_buffer_copy_into_full (GstBuffer * dest,
+                                           GstBuffer * src,
+                                           const GstVideoInfo * info,
+                                           D3D12_COMMAND_LIST_TYPE queue_type);
+
+GST_D3D12_API
 void      gst_d3d12_buffer_set_fence      (GstBuffer * buffer,
                                            ID3D12Fence * fence,
                                            guint64 fence_value,
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/d3d12/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/d3d12/meson.build
Changed
@@ -14,6 +14,8 @@
   'gstd3d12frame.cpp',
   'gstd3d12memory.cpp',
   'gstd3d12mipgen.cpp',
+  'gstd3d12stagingbufferpool.cpp',
+  'gstd3d12stagingmemory.cpp',
   'gstd3d12utils.cpp',
@@ -31,6 +33,8 @@
   'gstd3d12format.h',
   'gstd3d12frame.h',
   'gstd3d12memory.h',
+  'gstd3d12stagingbufferpool.h',
+  'gstd3d12stagingmemory.h',
   'gstd3d12utils.h',
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/d3dshader/converter-hlsl/PSMain_converter.hlsl -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/d3dshader/converter-hlsl/PSMain_converter.hlsl
Changed
@@ -21,8 +21,10 @@ cbuffer PsConstBufferDyn : register(b1) { float alphaFactor; - float3 padding_0; + uint remapUV; + float2 padding_0; float4 hsvcFactor; + float4 bg_color; }; struct PSColorSpace @@ -49,6 +51,7 @@ Texture2D shaderTexture_3 : register(t3); Texture1D<float> gammaDecLUT : register(t4); Texture1D<float> gammaEncLUT: register(t5); +Texture2D samplerRemap : register(t6); SamplerState samplerState : register(s0); SamplerState lutSamplerState : register(s1); @@ -1460,15 +1463,28 @@ SAMPLER g_sampler; CONVERTER g_converter; OUTPUT_BUILDER g_builder; - return g_builder.Build (g_converter.Execute (g_sampler.Execute (input.Texture))); + float2 uv; + [branch] if (remapUV) { + float4 val = samplerRemap.Sample(lutSamplerState, input.Texture); + if (val.w < 0.5) + return g_builder.Build (g_converter.Execute (bg_color)); + + uv = val.xy; + } else { + uv = input.Texture; + } + + return g_builder.Build (g_converter.Execute (g_sampler.Execute (uv))); } #else /* BUILDING_HLSL */ static const char str_PSMain_converter[] = "cbuffer PsConstBufferDyn : register(b1)\n" "{\n" " float alphaFactor;\n" -" float3 padding_0;\n" +" uint remapUV;\n" +" float2 padding_0;\n" " float4 hsvcFactor;\n" +" float4 bg_color;\n" "};\n" "\n" "struct PSColorSpace\n" @@ -1495,6 +1511,7 @@ "Texture2D shaderTexture_3 : register(t3);\n" "Texture1D<float> gammaDecLUT : register(t4);\n" "Texture1D<float> gammaEncLUT: register(t5);\n" +"Texture2D samplerRemap : register(t6);\n" "\n" "SamplerState samplerState : register(s0);\n" "SamplerState lutSamplerState : register(s1);\n" @@ -2906,6 +2923,17 @@ " SAMPLER g_sampler;\n" " CONVERTER g_converter;\n" " OUTPUT_BUILDER g_builder;\n" -" return g_builder.Build (g_converter.Execute (g_sampler.Execute (input.Texture)));\n" +" float2 uv;\n" +" [branch] if (remapUV) {\n" +" float4 val = samplerRemap.Sample(lutSamplerState, input.Texture);\n" +" if (val.w < 0.5)\n" +" return g_builder.Build (g_converter.Execute (bg_color));\n" +"\n" +" uv = val.xy;\n" +" } else {\n" +" uv = input.Texture;\n" +" }\n" +"\n" +" return g_builder.Build (g_converter.Execute (g_sampler.Execute (uv)));\n" "}\n"; #endif
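The remap path added to PSMain_converter boils down to: sample a UV lookup texture at the output coordinate, emit the background color when the texel's `w` flags the pixel out-of-view, otherwise sample the source at the remapped UV. A minimal Python sketch of that branch (function and parameter names are illustrative, not from the source; the LUT and sampler are modelled as plain callables):

```python
def remap_sample(lut, uv, bg_color, sample):
    """Mimic the remapUV branch of PSMain_converter: look up the remap
    LUT at the screen coordinate; a texel with w < 0.5 marks the pixel
    out-of-view and yields the background color, otherwise the source
    is sampled at the remapped coordinate stored in xy."""
    val = lut(uv)                 # float4: (u', v', unused, valid)
    if val[3] < 0.5:
        return bg_color           # out-of-view pixel
    return sample((val[0], val[1]))
```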
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/d3dshader/gstd3dshadercache.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/d3dshader/gstd3dshadercache.cpp
Changed
@@ -78,6 +78,10 @@ {GST_D3D_PLUGIN_PS_SAMPLE_SCRGB_TONEMAP, BUILD_SOURCE (PSMain_sample_scrgb_tonemap)}, {GST_D3D_PLUGIN_PS_SAMPLE_SCRGB, BUILD_SOURCE (PSMain_sample_scrgb)}, {GST_D3D_PLUGIN_PS_SNOW, BUILD_SOURCE (PSMain_snow)}, + {GST_D3D_PLUGIN_PS_SAMPLE_BGRA_TO_VUYA_FULL_PREMUL, BUILD_SOURCE (PSMain_sample_bgra_to_vuya_full_premul)}, + {GST_D3D_PLUGIN_PS_SAMPLE_BGRA_TO_VUYA_FULL, BUILD_SOURCE (PSMain_sample_bgra_to_vuya_full)}, + {GST_D3D_PLUGIN_PS_SAMPLE_BGRA_TO_VUYA_LIMITED_PREMUL, BUILD_SOURCE (PSMain_sample_bgra_to_vuya_limited_premul)}, + {GST_D3D_PLUGIN_PS_SAMPLE_BGRA_TO_VUYA_LIMITED, BUILD_SOURCE (PSMain_sample_bgra_to_vuya_limited)}, }; static const ShaderItem g_vs_map[] = { @@ -96,6 +100,12 @@ {GST_D3D_PLUGIN_CS_YADIF_1_12, BUILD_SOURCE (CSMain_yadif_1_12)}, {GST_D3D_PLUGIN_CS_YADIF_2, BUILD_SOURCE (CSMain_yadif_2)}, {GST_D3D_PLUGIN_CS_YADIF_4, BUILD_SOURCE (CSMain_yadif_4)}, + {GST_D3D_PLUGIN_CS_FISHEYE_EQUIRECT, BUILD_SOURCE (CSMain_fisheye_equirect)}, + {GST_D3D_PLUGIN_CS_FISHEYE_PANORAMA, BUILD_SOURCE (CSMain_fisheye_panorama)}, + {GST_D3D_PLUGIN_CS_FISHEYE_PERSPECTIVE, BUILD_SOURCE (CSMain_fisheye_perspective)}, + {GST_D3D_PLUGIN_CS_WEAVE_INTERLACE_1, BUILD_SOURCE (CSMain_weave_interlace_1)}, + {GST_D3D_PLUGIN_CS_WEAVE_INTERLACE_2, BUILD_SOURCE (CSMain_weave_interlace_2)}, + {GST_D3D_PLUGIN_CS_WEAVE_INTERLACE_4, BUILD_SOURCE (CSMain_weave_interlace_4)}, }; #undef BUILD_SOURCE
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/d3dshader/gstd3dshadercache.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/d3dshader/gstd3dshadercache.h
Changed
@@ -38,6 +38,10 @@ GST_D3D_PLUGIN_PS_SAMPLE_SCRGB_TONEMAP, GST_D3D_PLUGIN_PS_SAMPLE_SCRGB, GST_D3D_PLUGIN_PS_SNOW, + GST_D3D_PLUGIN_PS_SAMPLE_BGRA_TO_VUYA_FULL_PREMUL, + GST_D3D_PLUGIN_PS_SAMPLE_BGRA_TO_VUYA_FULL, + GST_D3D_PLUGIN_PS_SAMPLE_BGRA_TO_VUYA_LIMITED_PREMUL, + GST_D3D_PLUGIN_PS_SAMPLE_BGRA_TO_VUYA_LIMITED, GST_D3D_PLUGIN_PS_LAST } GstD3DPluginPS; @@ -62,6 +66,12 @@ GST_D3D_PLUGIN_CS_YADIF_1_12, GST_D3D_PLUGIN_CS_YADIF_2, GST_D3D_PLUGIN_CS_YADIF_4, + GST_D3D_PLUGIN_CS_FISHEYE_EQUIRECT, + GST_D3D_PLUGIN_CS_FISHEYE_PANORAMA, + GST_D3D_PLUGIN_CS_FISHEYE_PERSPECTIVE, + GST_D3D_PLUGIN_CS_WEAVE_INTERLACE_1, + GST_D3D_PLUGIN_CS_WEAVE_INTERLACE_2, + GST_D3D_PLUGIN_CS_WEAVE_INTERLACE_4, GST_D3D_PLUGIN_CS_LAST, } GstD3DPluginCS;
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/d3dshader/plugin-hlsl/CSMain_fisheye_equirect.hlsl
Added
@@ -0,0 +1,167 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef BUILDING_HLSL +RWTexture2D<float4> uvLUT : register(u0); + +cbuffer Parameters : register(b0) +{ + float2 fisheyeCenter; + float2 fisheyeRadius; + + float maxAngle; + float horizontalFOV; + float verticalFOV; + float rollAngle; // unused + + float2 roiOffset; + float2 roiScale; + + float4 padding; + + float3x3 RotationMatrix; +}; + +[numthreads(8, 8, 1)] +void ENTRY_POINT (uint3 DTid : SV_DispatchThreadID) +{ + uint width, height; + uvLUT.GetDimensions(width, height); + if (DTid.x >= width || DTid.y >= height) + return; + + // Compute normalized screen coordinate + float2 uv = float2(DTid.x, DTid.y) / float2(width, height); + + // Apply ROI cropping and scaling + float2 uv_roi = roiOffset + uv * roiScale; + + // Convert to spherical coordinates (delta = latitude, psi = longitude) + float delta = verticalFOV * (uv_roi.y - 0.5); // up-down angle + float psi = horizontalFOV * (uv_roi.x - 0.5); // left-right angle + + // Convert spherical to 3D ray (Z-forward) + float cosD = cos(delta); + float sinD = sin(delta); + float cosP = cos(psi); + float sinP = sin(psi); + + float3 ray = float3( + cosD * sinP, // X + sinD, // Y + cosD * cosP // Z + ); + + // Apply rotation matrix + float3 rotatedRay = mul(RotationMatrix, ray); + rotatedRay = normalize(rotatedRay); + + // Convert back to spherical angles + float theta = acos(rotatedRay.z); // zenith angle + + float4 fishUV = float4(0.0, 0.0, 0.0, 1.0); + if (theta <= maxAngle) { + // azimuth angle + float phi = atan2(rotatedRay.y, rotatedRay.x); + + // Map to fisheye UV via equidistant projection + float2 r = (fisheyeRadius / maxAngle) * theta; + fishUV.xy = fisheyeCenter + r * float2(cos(phi), sin(phi)); + } else { + // Out of view + fishUV.w = 0.0; + } + + uvLUT[DTid.xy] = fishUV; +} +#else +static const char str_CSMain_fisheye_equirect[] = +"RWTexture2D<float4> uvLUT : register(u0);\n" +"\n" +"cbuffer Parameters : register(b0)\n" +"{\n" +" float2 fisheyeCenter;\n" +" float2 fisheyeRadius;\n" +"\n" +" float maxAngle;\n" +" float horizontalFOV;\n" +" float verticalFOV;\n" +" float rollAngle; // unused\n" +"\n" +" float2 roiOffset;\n" +" float2 roiScale;\n" +"\n" +" float4 padding;\n" +"\n" +" float3x3 RotationMatrix;\n" +"};\n" +"\n" +"[numthreads(8, 8, 1)]\n" +"void ENTRY_POINT (uint3 DTid : SV_DispatchThreadID)\n" +"{\n" +" uint width, height;\n" +" uvLUT.GetDimensions(width, height);\n" +" if (DTid.x >= width || DTid.y >= height)\n" +" return;\n" +"\n" +" // Compute normalized screen coordinate\n" +" float2 uv = float2(DTid.x, DTid.y) / float2(width, height);\n" +"\n" +" // Apply ROI cropping and scaling\n" +" float2 uv_roi = roiOffset + uv * roiScale;\n" +"\n" +" // Convert to spherical coordinates (delta = latitude, psi = longitude)\n" +" float delta = verticalFOV * (uv_roi.y - 0.5); // up-down angle\n" +" float psi = horizontalFOV * (uv_roi.x - 0.5); // left-right angle\n" +"\n" +" // Convert spherical to 3D ray (Z-forward)\n" +" float cosD = cos(delta);\n" +" float sinD = sin(delta);\n" +" float cosP = cos(psi);\n" +" float sinP = sin(psi);\n" +"\n" +" float3 ray = float3(\n" +" cosD * sinP, // X\n" +" sinD, // Y\n" +" cosD * cosP // Z\n" +" );\n" +"\n" +" // Apply rotation matrix\n" +" float3 rotatedRay = mul(RotationMatrix, ray);\n" +" rotatedRay = normalize(rotatedRay);\n" +"\n" +" // Convert back to spherical angles\n" +" float theta = acos(rotatedRay.z); // zenith angle\n" +"\n" +" float4 fishUV = float4(0.0, 0.0, 0.0, 1.0);\n" +" if (theta <= maxAngle) {\n" +" // azimuth angle\n" +" float phi = atan2(rotatedRay.y, rotatedRay.x);\n" +"\n" +" // Map to fisheye UV via equidistant projection\n" +" float2 r = (fisheyeRadius / maxAngle) * theta;\n" +" fishUV.xy = fisheyeCenter + r * float2(cos(phi), sin(phi));\n" +" } else {\n" +" // Out of view\n" +" fishUV.w = 0.0;\n" +" }\n" +"\n" +" uvLUT[DTid.xy] = fishUV;\n" +"}\n"; +#endif
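The per-pixel mapping of CSMain_fisheye_equirect — equirectangular output coordinate to fisheye UV via an equidistant projection — can be sketched in Python for sanity-checking. This is an illustrative port (names and defaults are mine, not from the source); `rotation` is a row-major 3x3 matrix, `None` meaning identity:

```python
import math

def equirect_to_fisheye(u, v, *, center=(0.5, 0.5), radius=(0.5, 0.5),
                        max_angle=math.pi / 2, hfov=2 * math.pi,
                        vfov=math.pi, rotation=None):
    """Map one output pixel (u, v in [0, 1]) to a fisheye UV plus a
    validity flag, following the shader: screen -> spherical angles ->
    rotated 3D ray -> zenith/azimuth -> equidistant projection."""
    delta = vfov * (v - 0.5)                 # latitude (up-down)
    psi = hfov * (u - 0.5)                   # longitude (left-right)
    ray = (math.cos(delta) * math.sin(psi),  # X
           math.sin(delta),                  # Y
           math.cos(delta) * math.cos(psi))  # Z (forward)
    if rotation is not None:
        ray = tuple(sum(rotation[i][j] * ray[j] for j in range(3))
                    for i in range(3))
    n = math.sqrt(sum(c * c for c in ray))
    x, y, z = (c / n for c in ray)
    theta = math.acos(max(-1.0, min(1.0, z)))  # zenith angle
    if theta > max_angle:
        return (0.0, 0.0, False)               # out of view
    phi = math.atan2(y, x)                     # azimuth angle
    # equidistant projection: radial distance grows linearly with theta
    return (center[0] + radius[0] / max_angle * theta * math.cos(phi),
            center[1] + radius[1] / max_angle * theta * math.sin(phi),
            True)
```

With identity rotation, the output center looks straight down the optical axis and lands exactly on the fisheye center, while a point looking backwards falls outside `max_angle`.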
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/d3dshader/plugin-hlsl/CSMain_fisheye_panorama.hlsl
Added
@@ -0,0 +1,131 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef BUILDING_HLSL +RWTexture2D<float4> uvLUT : register(u0); + +cbuffer Parameters : register(b0) +{ + float2 fisheyeCenter; + float2 fisheyeRadius; + + float maxAngle; + float horizontalFOV; // Unused + float verticalFOV; // Unused + float rollAngle; + + float2 roiOffset; + float2 roiScale; + + float innerRadius; + float3 padding; + + float3x3 RotationMatrix; // Unused +}; + +[numthreads(8, 8, 1)] +void ENTRY_POINT (uint3 DTid : SV_DispatchThreadID) +{ + uint width, height; + uvLUT.GetDimensions(width, height); + if (DTid.x >= width || DTid.y >= height) + return; + + // Compute normalized screen coordinate + float2 uv = float2(DTid.xy) / float2(width, height); + + // Apply ROI cropping and scaling + float2 uv_roi = roiOffset + uv * roiScale; + + // Zenith angle (theta): 0 = center, maxAngle = outer edge + float minTheta = maxAngle * saturate(innerRadius); + float theta = lerp(minTheta, maxAngle, 1.0 - uv_roi.y); + + // Map to azimuthal angle (phi) across full 360 degrees + float phi = -6.28318530718 * (uv_roi.x - 0.5) + rollAngle; + + float4 fishUV = float4(0.0, 0.0, 0.0, 1.0); + if (theta >= minTheta && theta <= maxAngle) { + // Convert spherical coordinates to 2D fisheye UV using equidistant projection + float2 r = (fisheyeRadius / maxAngle) * theta; + fishUV.xy = fisheyeCenter + r * float2(cos(phi), -sin(phi)); + } else { + // Out of view + fishUV.w = 0.0; + } + + uvLUT[DTid.xy] = fishUV; +} +#else +static const char str_CSMain_fisheye_panorama[] = +"RWTexture2D<float4> uvLUT : register(u0);\n" +"\n" +"cbuffer Parameters : register(b0)\n" +"{\n" +" float2 fisheyeCenter;\n" +" float2 fisheyeRadius;\n" +"\n" +" float maxAngle;\n" +" float horizontalFOV; // Unused\n" +" float verticalFOV; // Unused\n" +" float rollAngle;\n" +"\n" +" float2 roiOffset;\n" +" float2 roiScale;\n" +"\n" +" float innerRadius;\n" +" float3 padding;\n" +"\n" +" float3x3 RotationMatrix; // Unused\n" +"};\n" +"\n" +"[numthreads(8, 8, 1)]\n" +"void ENTRY_POINT (uint3 DTid : SV_DispatchThreadID)\n" +"{\n" +" uint width, height;\n" +" uvLUT.GetDimensions(width, height);\n" +" if (DTid.x >= width || DTid.y >= height)\n" +" return;\n" +"\n" +" // Compute normalized screen coordinate\n" +" float2 uv = float2(DTid.xy) / float2(width, height);\n" +"\n" +" // Apply ROI cropping and scaling\n" +" float2 uv_roi = roiOffset + uv * roiScale;\n" +"\n" +" // Zenith angle (theta): 0 = center, maxAngle = outer edge\n" +" float minTheta = maxAngle * saturate(innerRadius);\n" +" float theta = lerp(minTheta, maxAngle, 1.0 - uv_roi.y);\n" +"\n" +" // Map to azimuthal angle (phi) across full 360 degrees\n" +" float phi = -6.28318530718 * (uv_roi.x - 0.5) + rollAngle;\n" +"\n" +" float4 fishUV = float4(0.0, 0.0, 0.0, 1.0);\n" +" if (theta >= minTheta && theta <= maxAngle) {\n" +" // Convert spherical coordinates to 2D fisheye UV using equidistant projection\n" +" float2 r = (fisheyeRadius / maxAngle) * theta;\n" +" fishUV.xy = fisheyeCenter + r * float2(cos(phi), -sin(phi));\n" +" } else {\n" +" // Out of view\n" +" fishUV.w = 0.0;\n" +" }\n" +"\n" +" uvLUT[DTid.xy] = fishUV;\n" +"}\n"; +#endif
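CSMain_fisheye_panorama unrolls the fisheye circle into a rectangle: the output row selects a zenith angle between an inner cutoff and `maxAngle`, and the column sweeps the full 360-degree azimuth. An illustrative Python port of the per-pixel math (names and defaults are mine):

```python
import math

def panorama_to_fisheye(u, v, *, center=(0.5, 0.5), radius=(0.5, 0.5),
                        max_angle=math.pi / 2, inner_radius=0.0,
                        roll=0.0):
    """Map one panorama output pixel (u, v in [0, 1]) to a fisheye UV
    plus a validity flag, following the shader: row -> zenith angle
    (lerp between the inner cutoff and max_angle), column -> azimuth,
    then equidistant projection onto the fisheye image."""
    min_theta = max_angle * max(0.0, min(1.0, inner_radius))  # saturate
    theta = min_theta + (max_angle - min_theta) * (1.0 - v)   # lerp
    phi = -2.0 * math.pi * (u - 0.5) + roll
    if not (min_theta <= theta <= max_angle):
        return (0.0, 0.0, False)          # out of view
    return (center[0] + radius[0] / max_angle * theta * math.cos(phi),
            center[1] - radius[1] / max_angle * theta * math.sin(phi),
            True)
```

The bottom row (v = 1, with no inner cutoff) collapses onto the fisheye center, and the top row (v = 0) traces the outer rim of the fisheye circle.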
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/d3dshader/plugin-hlsl/CSMain_fisheye_perspective.hlsl
Added
@@ -0,0 +1,151 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef BUILDING_HLSL +RWTexture2D<float4> uvLUT : register(u0); + +cbuffer Parameters : register(b0) +{ + float2 fisheyeCenter; + float2 fisheyeRadius; + + float maxAngle; + float horizontalFOV; // unused + float verticalFOV; // unused + float rollAngle; // Unused + + float2 roiOffset; + float2 roiScale; + + float padding; + float invFocalLenX; + float invFocalLenY; + float otherPadding; + + float3x3 RotationMatrix; +}; + +[numthreads(8, 8, 1)] +void ENTRY_POINT (uint3 DTid : SV_DispatchThreadID) +{ + uint width, height; + uvLUT.GetDimensions(width, height); + if (DTid.x >= width || DTid.y >= height) + return; + + // Compute normalized screen coordinate + float2 uv = float2(DTid.xy) / float2(width, height); + + // Apply ROI cropping and scaling + float2 uv_roi = roiOffset + uv * roiScale; + + // Convert to NDC [-1, 1] + float2 uv_ndc = uv_roi * 2.0 - 1.0; + + // Compute view ray from perspective FOV (pinhole model) + float x = invFocalLenX * uv_ndc.x; + float y = invFocalLenY * uv_ndc.y; + float3 localRay = normalize(float3(x, y, 1.0)); + + float3 worldRay = normalize(mul(RotationMatrix, localRay)); + + // Compute angle from Z-axis (zenith angle) + float angle = acos(worldRay.z); + + float4 fishUV = float4(0.0, 0.0, 0.0, 1.0); + if (angle <= maxAngle) { + // Project to fisheye image using equidistant projection + float phi = atan2(worldRay.y, worldRay.x); + + float2 r = (fisheyeRadius / maxAngle) * angle; + fishUV.xy = fisheyeCenter + r * float2(cos(phi), sin(phi)); + } else { + // Out of view + fishUV.w = 0.0; + } + + uvLUT[DTid.xy] = fishUV; +} +#else +static const char str_CSMain_fisheye_perspective[] = +"RWTexture2D<float4> uvLUT : register(u0);\n" +"\n" +"cbuffer Parameters : register(b0)\n" +"{\n" +" float2 fisheyeCenter;\n" +" float2 fisheyeRadius;\n" +"\n" +" float maxAngle;\n" +" float horizontalFOV; // unused\n" +" float verticalFOV; // unused\n" +" float rollAngle; // Unused\n" +"\n" +" float2 roiOffset;\n" +" float2 roiScale;\n" +"\n" +" float padding;\n" +" float invFocalLenX;\n" +" float invFocalLenY;\n" +" float otherPadding;\n" +"\n" +" float3x3 RotationMatrix;\n" +"};\n" +"\n" +"[numthreads(8, 8, 1)]\n" +"void ENTRY_POINT (uint3 DTid : SV_DispatchThreadID)\n" +"{\n" +" uint width, height;\n" +" uvLUT.GetDimensions(width, height);\n" +" if (DTid.x >= width || DTid.y >= height)\n" +" return;\n" +"\n" +" // Compute normalized screen coordinate\n" +" float2 uv = float2(DTid.xy) / float2(width, height);\n" +"\n" +" // Apply ROI cropping and scaling\n" +" float2 uv_roi = roiOffset + uv * roiScale;\n" +"\n" +" // Convert to NDC [-1, 1]\n" +" float2 uv_ndc = uv_roi * 2.0 - 1.0;\n" +"\n" +" // Compute view ray from perspective FOV (pinhole model)\n" +" float x = invFocalLenX * uv_ndc.x;\n" +" float y = invFocalLenY * uv_ndc.y;\n" +" float3 localRay = normalize(float3(x, y, 1.0));\n" +"\n" +" float3 worldRay = normalize(mul(RotationMatrix, localRay));\n" +"\n" +" // Compute angle from Z-axis (zenith angle)\n" +" float angle = acos(worldRay.z);\n" +"\n" +" float4 fishUV = float4(0.0, 0.0, 0.0, 1.0);\n" +" if (angle <= maxAngle) {\n" +" // Project to fisheye image using equidistant projection\n" +" float phi = atan2(worldRay.y, worldRay.x);\n" +"\n" +" float2 r = (fisheyeRadius / maxAngle) * angle;\n" +" fishUV.xy = fisheyeCenter + r * float2(cos(phi), sin(phi));\n" +" } else {\n" +" // Out of view\n" +" fishUV.w = 0.0;\n" +" }\n" +"\n" +" uvLUT[DTid.xy] = fishUV;\n" +"}\n"; +#endif
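CSMain_fisheye_perspective casts a pinhole-camera ray through each output pixel, rotates it into the fisheye's frame, then projects the ray onto the fisheye image with the same equidistant model as the other two shaders. An illustrative Python port (names and defaults are mine; `rotation` is a row-major 3x3, `None` meaning identity):

```python
import math

def perspective_to_fisheye(u, v, *, center=(0.5, 0.5), radius=(0.5, 0.5),
                           max_angle=math.pi / 2,
                           inv_focal=(1.0, 1.0), rotation=None):
    """Map one perspective-view pixel (u, v in [0, 1]) to a fisheye UV
    plus a validity flag: pixel -> NDC -> pinhole ray -> rotated world
    ray -> zenith/azimuth -> equidistant projection."""
    ndc = (u * 2.0 - 1.0, v * 2.0 - 1.0)          # NDC in [-1, 1]
    ray = (inv_focal[0] * ndc[0], inv_focal[1] * ndc[1], 1.0)
    if rotation is not None:
        ray = tuple(sum(rotation[i][j] * ray[j] for j in range(3))
                    for i in range(3))
    n = math.sqrt(sum(c * c for c in ray))
    x, y, z = (c / n for c in ray)
    angle = math.acos(max(-1.0, min(1.0, z)))     # zenith angle
    if angle > max_angle:
        return (0.0, 0.0, False)                  # out of view
    phi = math.atan2(y, x)                        # azimuth angle
    return (center[0] + radius[0] / max_angle * angle * math.cos(phi),
            center[1] + radius[1] / max_angle * angle * math.sin(phi),
            True)
```

With identity rotation the image center maps to the fisheye center, and with unit inverse focal lengths the right edge midpoint sits at a 45-degree zenith angle, i.e. halfway out along the fisheye radius.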
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/d3dshader/plugin-hlsl/CSMain_weave_interlace_1.hlsl
Added
@@ -0,0 +1,89 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef BUILDING_HLSL +cbuffer WeaveCBData : register(b0) +{ + uint Width; + uint Height; + uint Mode; + uint FieldOrder; // 0 = tff, 1 = bff +}; + +Texture2D<float> srcTexA : register(t0); +Texture2D<float> srcTexB : register(t1); +RWTexture2D<unorm float> outTex : register(u0); + +[numthreads(8, 8, 1)] +void ENTRY_POINT (uint3 tid : SV_DispatchThreadID) +{ + uint x = tid.x; + uint y = tid.y; + + if (x >= Width || y >= Height) + return; + + bool is_top = ((y & 1u) == 0u); + if (FieldOrder == 1) + is_top = !is_top; + + float val; + if (is_top) + val = srcTexA.Load (uint3 (x, y, 0)); + else + val = srcTexB.Load (uint3 (x, y, 0)); + + outTex[uint2(x, y)] = val; +} +#else +static const char str_CSMain_weave_interlace_1[] = +"cbuffer WeaveCBData : register(b0)\n" +"{\n" +" uint Width;\n" +" uint Height;\n" +" uint Mode;\n" +" uint FieldOrder; // 0 = tff, 1 = bff\n" +"};\n" +"\n" +"Texture2D<float> srcTexA : register(t0);\n" +"Texture2D<float> srcTexB : register(t1);\n" +"RWTexture2D<unorm float> outTex : register(u0);\n" +"\n" +"[numthreads(8, 8, 1)]\n" +"void ENTRY_POINT (uint3 tid : SV_DispatchThreadID)\n" +"{\n" +" uint x = tid.x;\n" +" uint y = tid.y;\n" +"\n" +" if (x >= Width || y >= Height)\n" +" return;\n" +"\n" +" bool is_top = ((y & 1u) == 0u);\n" +" if (FieldOrder == 1)\n" +" is_top = !is_top;\n" +"\n" +" float val;\n" +" if (is_top)\n" +" val = srcTexA.Load (uint3 (x, y, 0));\n" +" else\n" +" val = srcTexB.Load (uint3 (x, y, 0));\n" +"\n" +" outTex[uint2(x, y)] = val;\n" +"}\n"; +#endif
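The three CSMain_weave_interlace_* variants differ only in channel count (float/float2/float4); the weave itself takes even output rows from one field texture and odd rows from the other, with `FieldOrder` flipping the assignment. An illustrative Python sketch over 2D lists (names are mine; both fields are full-height, as in the shader):

```python
def weave_interlace(field_a, field_b, field_order=0):
    """Mimic CSMain_weave_interlace_*: build an interlaced frame by
    taking rows from field_a where is_top holds and from field_b
    otherwise. field_order 0 = top-field-first (field_a supplies the
    even rows), 1 = bottom-field-first (assignment inverted)."""
    out = []
    for y in range(len(field_a)):
        is_top = (y & 1) == 0
        if field_order == 1:
            is_top = not is_top
        out.append(list(field_a[y] if is_top else field_b[y]))
    return out
```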
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/d3dshader/plugin-hlsl/CSMain_weave_interlace_2.hlsl
Added
@@ -0,0 +1,89 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef BUILDING_HLSL +cbuffer WeaveCBData : register(b0) +{ + uint Width; + uint Height; + uint Mode; + uint FieldOrder; // 0 = tff, 1 = bff +}; + +Texture2D<float2> srcTexA : register(t0); +Texture2D<float2> srcTexB : register(t1); +RWTexture2D<unorm float2> outTex : register(u0); + +[numthreads(8, 8, 1)] +void ENTRY_POINT (uint3 tid : SV_DispatchThreadID) +{ + uint x = tid.x; + uint y = tid.y; + + if (x >= Width || y >= Height) + return; + + bool is_top = ((y & 1u) == 0u); + if (FieldOrder == 1) + is_top = !is_top; + + float2 val; + if (is_top) + val = srcTexA.Load (uint3 (x, y, 0)); + else + val = srcTexB.Load (uint3 (x, y, 0)); + + outTex[uint2(x, y)] = val; +} +#else +static const char str_CSMain_weave_interlace_2[] = +"cbuffer WeaveCBData : register(b0)\n" +"{\n" +" uint Width;\n" +" uint Height;\n" +" uint Mode;\n" +" uint FieldOrder; // 0 = tff, 1 = bff\n" +"};\n" +"\n" +"Texture2D<float2> srcTexA : register(t0);\n" +"Texture2D<float2> srcTexB : register(t1);\n" +"RWTexture2D<unorm float2> outTex : register(u0);\n" +"\n" +"[numthreads(8, 8, 1)]\n" +"void ENTRY_POINT (uint3 tid : SV_DispatchThreadID)\n" +"{\n" +" uint x = tid.x;\n" +" uint y = tid.y;\n" +"\n" +" if (x >= Width || y >= Height)\n" +" return;\n" +"\n" +" bool is_top = ((y & 1u) == 0u);\n" +" if (FieldOrder == 1)\n" +" is_top = !is_top;\n" +"\n" +" float2 val;\n" +" if (is_top)\n" +" val = srcTexA.Load (uint3 (x, y, 0));\n" +" else\n" +" val = srcTexB.Load (uint3 (x, y, 0));\n" +"\n" +" outTex[uint2(x, y)] = val;\n" +"}\n"; +#endif
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/d3dshader/plugin-hlsl/CSMain_weave_interlace_4.hlsl
Added
@@ -0,0 +1,89 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef BUILDING_HLSL +cbuffer WeaveCBData : register(b0) +{ + uint Width; + uint Height; + uint Mode; + uint FieldOrder; // 0 = tff, 1 = bff +}; + +Texture2D<float4> srcTexA : register(t0); +Texture2D<float4> srcTexB : register(t1); +RWTexture2D<unorm float4> outTex : register(u0); + +[numthreads(8, 8, 1)] +void ENTRY_POINT (uint3 tid : SV_DispatchThreadID) +{ + uint x = tid.x; + uint y = tid.y; + + if (x >= Width || y >= Height) + return; + + bool is_top = ((y & 1u) == 0u); + if (FieldOrder == 1) + is_top = !is_top; + + float4 val; + if (is_top) + val = srcTexA.Load (uint3 (x, y, 0)); + else + val = srcTexB.Load (uint3 (x, y, 0)); + + outTex[uint2(x, y)] = val; +} +#else +static const char str_CSMain_weave_interlace_4[] = +"cbuffer WeaveCBData : register(b0)\n" +"{\n" +" uint Width;\n" +" uint Height;\n" +" uint Mode;\n" +" uint FieldOrder; // 0 = tff, 1 = bff\n" +"};\n" +"\n" +"Texture2D<float4> srcTexA : register(t0);\n" +"Texture2D<float4> srcTexB : register(t1);\n" +"RWTexture2D<unorm float4> outTex : register(u0);\n" +"\n" +"[numthreads(8, 8, 1)]\n" +"void ENTRY_POINT (uint3 tid : SV_DispatchThreadID)\n" +"{\n" +" uint x = tid.x;\n" +" uint y = tid.y;\n" +"\n" +" if (x >= Width || y >= Height)\n" +" return;\n" +"\n" +" bool is_top = ((y & 1u) == 0u);\n" +" if (FieldOrder == 1)\n" +" is_top = !is_top;\n" +"\n" +" float4 val;\n" +" if (is_top)\n" +" val = srcTexA.Load (uint3 (x, y, 0));\n" +" else\n" +" val = srcTexB.Load (uint3 (x, y, 0));\n" +"\n" +" outTex[uint2(x, y)] = val;\n" +"}\n"; +#endif
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/d3dshader/plugin-hlsl/PSMain_sample_bgra_to_vuya_full.hlsl
Added
@@ -0,0 +1,73 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef BUILDING_HLSL +Texture2D shaderTexture; +SamplerState samplerState; + +struct PS_INPUT +{ + float4 Position : SV_POSITION; + float2 Texture : TEXCOORD; +}; + +static const float3x3 RGB2YCbCr = { + 0.2126, 0.7152, 0.0722, // Y + -0.1146, -0.3854, 0.5000, // Cb + 0.5000, -0.4542, -0.0458 // Cr +}; + +static const float3 Offset = float3 (0.0, 0.5, 0.5); + +float4 ENTRY_POINT (PS_INPUT input): SV_TARGET +{ + float4 bgra = shaderTexture.Sample (samplerState, input.Texture); + float3 rgb = float3(bgra.r, bgra.g, bgra.b); + float3 yuv = mul (RGB2YCbCr, rgb) + Offset; + + return float4(yuv.z, yuv.y, yuv.x, bgra.a); +} +#else +static const char str_PSMain_sample_bgra_to_vuya_full[] = +"Texture2D shaderTexture;\n" +"SamplerState samplerState;\n" +"\n" +"struct PS_INPUT\n" +"{\n" +" float4 Position : SV_POSITION;\n" +" float2 Texture : TEXCOORD;\n" +"};\n" +"\n" +"static const float3x3 RGB2YCbCr = {\n" +" 0.2126, 0.7152, 0.0722, // Y\n" +" -0.1146, -0.3854, 0.5000, // Cb\n" +" 0.5000, -0.4542, -0.0458 // Cr\n" +"};\n" +"\n" +"static const float3 Offset = float3 (0.0, 0.5, 0.5);\n" +"\n" +"float4 ENTRY_POINT (PS_INPUT input):
SV_TARGET\n" +"{\n" +" float4 bgra = shaderTexture.Sample (samplerState, input.Texture);\n" +" float3 rgb = float3(bgra.r, bgra.g, bgra.b);\n" +" float3 yuv = mul (RGB2YCbCr, rgb) + Offset;\n" +"\n" +" return float4(yuv.z, yuv.y, yuv.x, bgra.a);\n" +"}\n"; +#endif
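The full-range BT.709 matrix used by PSMain_sample_bgra_to_vuya_full can be sanity-checked outside HLSL. This illustrative Python sketch (names are mine) reproduces the matrix, the chroma offset, and the VUYA channel order of the shader's `float4(yuv.z, yuv.y, yuv.x, a)` return:

```python
# Full-range BT.709 RGB -> YCbCr, rows as in the shader's RGB2YCbCr
RGB2YCBCR_FULL = [
    ( 0.2126,  0.7152,  0.0722),   # Y
    (-0.1146, -0.3854,  0.5000),   # Cb
    ( 0.5000, -0.4542, -0.0458),   # Cr
]
OFFSET_FULL = (0.0, 0.5, 0.5)      # center the chroma channels

def bgra_to_vuya_full(r, g, b, a):
    """Mimic PSMain_sample_bgra_to_vuya_full: matrix multiply plus
    offset, packed as (V, U, Y, A) = (Cr, Cb, Y, alpha)."""
    y, cb, cr = (sum(m * c for m, c in zip(row, (r, g, b))) + off
                 for row, off in zip(RGB2YCBCR_FULL, OFFSET_FULL))
    return (cr, cb, y, a)
```

White maps to Y = 1 with centered chroma (0.5, 0.5), confirming the rows sum to 1 (luma) and 0 (chroma), as full-range coefficients should.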
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/d3dshader/plugin-hlsl/PSMain_sample_bgra_to_vuya_full_premul.hlsl
Added
@@ -0,0 +1,73 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef BUILDING_HLSL +Texture2D shaderTexture; +SamplerState samplerState; + +struct PS_INPUT +{ + float4 Position : SV_POSITION; + float2 Texture : TEXCOORD; +}; + +static const float3x3 RGB2YCbCr = { + 0.2126, 0.7152, 0.0722, // Y + -0.1146, -0.3854, 0.5000, // Cb + 0.5000, -0.4542, -0.0458 // Cr +}; + +static const float3 Offset = float3 (0.0, 0.5, 0.5); + +float4 ENTRY_POINT (PS_INPUT input): SV_TARGET +{ + float4 bgra = shaderTexture.Sample (samplerState, input.Texture); + float3 rgb = float3(bgra.r, bgra.g, bgra.b) * bgra.a; + float3 yuv = mul (RGB2YCbCr, rgb) + Offset; + + return float4 (yuv.z, yuv.y, yuv.x, bgra.a); +} +#else +static const char str_PSMain_sample_bgra_to_vuya_full_premul[] = +"Texture2D shaderTexture;\n" +"SamplerState samplerState;\n" +"\n" +"struct PS_INPUT\n" +"{\n" +" float4 Position : SV_POSITION;\n" +" float2 Texture : TEXCOORD;\n" +"};\n" +"\n" +"static const float3x3 RGB2YCbCr = {\n" +" 0.2126, 0.7152, 0.0722, // Y\n" +" -0.1146, -0.3854, 0.5000, // Cb\n" +" 0.5000, -0.4542, -0.0458 // Cr\n" +"};\n" +"\n" +"static const float3 Offset = float3 (0.0, 0.5, 0.5);\n" +"\n" +"float4 ENTRY_POINT
(PS_INPUT input): SV_TARGET\n" +"{\n" +" float4 bgra = shaderTexture.Sample (samplerState, input.Texture);\n" +" float3 rgb = float3(bgra.r, bgra.g, bgra.b) * bgra.a;\n" +" float3 yuv = mul (RGB2YCbCr, rgb) + Offset;\n" +"\n" +" return float4 (yuv.z, yuv.y, yuv.x, bgra.a);\n" +"}\n"; +#endif
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/d3dshader/plugin-hlsl/PSMain_sample_bgra_to_vuya_limited.hlsl
Added
@@ -0,0 +1,73 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef BUILDING_HLSL +Texture2D shaderTexture; +SamplerState samplerState; + +struct PS_INPUT +{ + float4 Position : SV_POSITION; + float2 Texture : TEXCOORD; +}; + +static const float3x3 RGB2YCbCr = { + 0.1826, 0.6142, 0.0620, // Y + -0.1006, -0.3386, 0.4392, // Cb + 0.4392, -0.3989, -0.0403 // Cr +}; + +static const float3 Offset = float3 (0.0625, 0.5, 0.5); + +float4 ENTRY_POINT (PS_INPUT input): SV_TARGET +{ + float4 bgra = shaderTexture.Sample (samplerState, input.Texture); + float3 rgb = float3(bgra.r, bgra.g, bgra.b); + float3 yuv = mul (RGB2YCbCr, rgb) + Offset; + + return float4 (yuv.z, yuv.y, yuv.x, bgra.a); +} +#else +static const char str_PSMain_sample_bgra_to_vuya_limited[] = +"Texture2D shaderTexture;\n" +"SamplerState samplerState;\n" +"\n" +"struct PS_INPUT\n" +"{\n" +" float4 Position : SV_POSITION;\n" +" float2 Texture : TEXCOORD;\n" +"};\n" +"\n" +"static const float3x3 RGB2YCbCr = {\n" +" 0.1826, 0.6142, 0.0620, // Y\n" +" -0.1006, -0.3386, 0.4392, // Cb\n" +" 0.4392, -0.3989, -0.0403 // Cr\n" +"};\n" +"\n" +"static const float3 Offset = float3 (0.0625, 0.5, 0.5);\n" +"\n" +"float4 ENTRY_POINT (PS_INPUT input): SV_TARGET\n" +"{\n" +" float4 bgra = shaderTexture.Sample (samplerState, input.Texture);\n" +" float3 rgb = float3(bgra.r, bgra.g, bgra.b);\n" +" float3 yuv = mul (RGB2YCbCr, rgb) + Offset;\n" +"\n" +" return float4 (yuv.z, yuv.y, yuv.x, bgra.a);\n" +"}\n"; #endif
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/d3dshader/plugin-hlsl/PSMain_sample_bgra_to_vuya_limited_premul.hlsl
Added
@@ -0,0 +1,73 @@
+/* GStreamer
+ * Copyright (C) 2025 Seungha Yang <seungha@centricular.com>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Library General Public
+ * License as published by the Free Software Foundation; either
+ * version 2 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Library General Public License for more details.
+ *
+ * You should have received a copy of the GNU Library General Public
+ * License along with this library; if not, write to the
+ * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
+ * Boston, MA 02110-1301, USA.
+ */
+
+#ifdef BUILDING_HLSL
+Texture2D shaderTexture;
+SamplerState samplerState;
+
+struct PS_INPUT
+{
+  float4 Position : SV_POSITION;
+  float2 Texture : TEXCOORD;
+};
+
+static const float3x3 RGB2YCbCr = {
+  0.1826, 0.6142, 0.0620,   // Y
+  -0.1006, -0.3386, 0.4392, // Cb
+  0.4392, -0.3989, -0.0403  // Cr
+};
+
+static const float3 Offset = float3 (0.0625, 0.5, 0.5);
+
+float4 ENTRY_POINT (PS_INPUT input): SV_TARGET
+{
+  float4 bgra = shaderTexture.Sample (samplerState, input.Texture);
+  float3 rgb = float3(bgra.r, bgra.g, bgra.b);
+  float3 yuv = mul (RGB2YCbCr, rgb) + Offset;
+
+  return float4(yuv.z, yuv.y, yuv.x, bgra.a);
+}
+#else
+static const char str_PSMain_sample_bgra_to_vuya_limited_premul[] =
+"Texture2D shaderTexture;\n"
+"SamplerState samplerState;\n"
+"\n"
+"struct PS_INPUT\n"
+"{\n"
+"  float4 Position : SV_POSITION;\n"
+"  float2 Texture : TEXCOORD;\n"
+"};\n"
+"\n"
+"static const float3x3 RGB2YCbCr = {\n"
+"  0.1826, 0.6142, 0.0620,   // Y\n"
+"  -0.1006, -0.3386, 0.4392, // Cb\n"
+"  0.4392, -0.3989, -0.0403  // Cr\n"
+"};\n"
+"\n"
+"static const float3 Offset = float3 (0.0625, 0.5, 0.5);\n"
+"\n"
+"float4 ENTRY_POINT (PS_INPUT input): SV_TARGET\n"
+"{\n"
+"  float4 bgra = shaderTexture.Sample (samplerState, input.Texture);\n"
+"  float3 rgb = float3(bgra.r, bgra.g, bgra.b);\n"
+"  float3 yuv = mul (RGB2YCbCr, rgb) + Offset;\n"
+"\n"
+"  return float4(yuv.z, yuv.y, yuv.x, bgra.a);\n"
+"}\n";
+#endif
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/d3dshader/plugin-hlsl/hlsl.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/d3dshader/plugin-hlsl/hlsl.h
Changed
@@ -28,6 +28,10 @@ #include "PSMain_sample.hlsl" #include "PSMain_sample_scrgb_tonemap.hlsl" #include "PSMain_sample_scrgb.hlsl" +#include "PSMain_sample_bgra_to_vuya_full_premul.hlsl" +#include "PSMain_sample_bgra_to_vuya_full.hlsl" +#include "PSMain_sample_bgra_to_vuya_limited_premul.hlsl" +#include "PSMain_sample_bgra_to_vuya_limited.hlsl" #include "PSMain_snow.hlsl" #include "VSMain_color.hlsl" #include "VSMain_coord.hlsl" @@ -41,3 +45,9 @@ #include "CSMain_yadif_1_12.hlsl" #include "CSMain_yadif_2.hlsl" #include "CSMain_yadif_4.hlsl" +#include "CSMain_fisheye_equirect.hlsl" +#include "CSMain_fisheye_panorama.hlsl" +#include "CSMain_fisheye_perspective.hlsl" +#include "CSMain_weave_interlace_1.hlsl" +#include "CSMain_weave_interlace_2.hlsl" +#include "CSMain_weave_interlace_4.hlsl"
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/d3dshader/plugin-hlsl/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/d3dshader/plugin-hlsl/meson.build
Changed
@@ -8,6 +8,10 @@
   ['PSMain_sample', 'ps'],
   ['PSMain_sample_scrgb_tonemap', 'ps'],
   ['PSMain_sample_scrgb', 'ps'],
+  ['PSMain_sample_bgra_to_vuya_full_premul', 'ps'],
+  ['PSMain_sample_bgra_to_vuya_full', 'ps'],
+  ['PSMain_sample_bgra_to_vuya_limited_premul', 'ps'],
+  ['PSMain_sample_bgra_to_vuya_limited', 'ps'],
   ['PSMain_snow', 'ps'],
   ['VSMain_color', 'vs'],
   ['VSMain_coord', 'vs'],
@@ -21,6 +25,12 @@
   ['CSMain_yadif_1', 'cs'],
   ['CSMain_yadif_2', 'cs'],
   ['CSMain_yadif_4', 'cs'],
+  ['CSMain_fisheye_equirect', 'cs'],
+  ['CSMain_fisheye_panorama', 'cs'],
+  ['CSMain_fisheye_perspective', 'cs'],
+  ['CSMain_weave_interlace_1', 'cs'],
+  ['CSMain_weave_interlace_2', 'cs'],
+  ['CSMain_weave_interlace_4', 'cs'],
 ]
 
 shader_model = '5_0'
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip
Added
+(directory)
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/gsthip-enums.cpp
Added
@@ -0,0 +1,44 @@
+/* GStreamer
+ * Copyright (C) 2025 Seungha Yang <seungha@centricular.com>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Library General Public
+ * License as published by the Free Software Foundation; either
+ * version 2 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Library General Public License for more details.
+ *
+ * You should have received a copy of the GNU Library General Public
+ * License along with this library; if not, write to the
+ * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
+ * Boston, MA 02110-1301, USA.
+ */
+
+#ifdef HAVE_CONFIG_H
+#include "config.h"
+#endif
+
+#include "gsthip-enums.h"
+#include <mutex>
+
+GType
+gst_hip_vendor_get_type (void)
+{
+  static std::once_flag once;
+  static GType type = 0;
+  static const GEnumValue vendor[] = {
+    {GST_HIP_VENDOR_UNKNOWN, "Unknown", "unknown"},
+    {GST_HIP_VENDOR_AMD, "AMD", "amd"},
+    {GST_HIP_VENDOR_NVIDIA, "NVIDIA", "nvidia"},
+    {0, nullptr, nullptr},
+  };
+
+  std::call_once (once, [&]() {
+    type = g_enum_register_static ("GstHipVendor", vendor);
+  });
+
+  return type;
+}
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/gsthip-enums.h
Added
@@ -0,0 +1,45 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/gst.h> +#include <gst/hip/hip-prelude.h> + +G_BEGIN_DECLS + +/** + * GstHipVendor: + * + * Since: 1.28 + */ +typedef enum +{ + GST_HIP_VENDOR_UNKNOWN, + GST_HIP_VENDOR_AMD, + GST_HIP_VENDOR_NVIDIA, +} GstHipVendor; + +#define GST_TYPE_HIP_VENDOR (gst_hip_vendor_get_type()) + +GST_HIP_API +GType gst_hip_vendor_get_type (void); + +G_END_DECLS +
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/gsthip-gl.h
Added
@@ -0,0 +1,26 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/hip/gsthip.h> +#include <gst/hip/gsthip-interop-gl.h> +#include <gst/hip/hip-gst-gl.h> +#include <gst/gl/gl.h> +
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/gsthip-interop-gl.h
Added
@@ -0,0 +1,34 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/gst.h> +#include <gst/gl/gl.h> +#include <gst/hip/gsthip-interop.h> + +G_BEGIN_DECLS + +GST_HIP_API +hipError_t gst_hip_gl_get_graphics_resource_from_memory (GstHipDevice * device, + GstMemory * mem, + GstHipGraphicsResource ** resource); + +G_END_DECLS +
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/gsthip-interop.cpp
Added
@@ -0,0 +1,377 @@
+/* GStreamer
+ * Copyright (C) 2025 Seungha Yang <seungha@centricular.com>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Library General Public
+ * License as published by the Free Software Foundation; either
+ * version 2 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Library General Public License for more details.
+ *
+ * You should have received a copy of the GNU Library General Public
+ * License along with this library; if not, write to the
+ * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
+ * Boston, MA 02110-1301, USA.
+ */
+
+#ifdef HAVE_CONFIG_H
+#include "config.h"
+#endif
+
+#include "gsthip-config.h"
+#include "gsthip.h"
+
+#include <mutex>
+#include <condition_variable>
+
+#ifdef HAVE_GST_GL
+#include "gsthip-gl.h"
+#endif
+
+#ifndef GST_DISABLE_GST_DEBUG
+#define GST_CAT_DEFAULT ensure_debug_category()
+static GstDebugCategory *
+ensure_debug_category (void)
+{
+  static GstDebugCategory *cat = nullptr;
+  static std::once_flag once;
+
+  std::call_once (once, [&] {
+    cat = _gst_debug_category_new ("hip-interop", 0, "hip-interop");
+  });
+
+  return cat;
+}
+#endif
+
+#ifdef HAVE_GST_GL
+static void
+unregister_resource_on_gl_thread (GstGLContext * gl_context,
+    GstHipGraphicsResource * resource);
+#endif
+
+/* *INDENT-OFF* */
+struct _GstHipGraphicsResource : public GstMiniObject
+{
+  _GstHipGraphicsResource ()
+  {
+  }
+
+  ~_GstHipGraphicsResource ()
+  {
+#ifdef HAVE_GST_GL
+    if (gl_context) {
+      gst_gl_context_thread_add (gl_context,
+          (GstGLContextThreadFunc) unregister_resource_on_gl_thread,
+          this);
+
+      gst_object_unref (gl_context);
+    } else
+#endif
+    if (gst_hip_device_set_current (device))
+      HipGraphicsUnregisterResource (vendor, handle);
+
+    gst_object_unref (device);
+  }
+
+  GstHipDevice *device = nullptr;
+  GstHipVendor vendor = GST_HIP_VENDOR_UNKNOWN;
+  hipGraphicsResource_t handle = nullptr;
+  std::mutex lock;
+  std::condition_variable cond;
+  guint64 map_count = 0;
+  void *mapped_dev_ptr = nullptr;
+  size_t mapped_size = 0;
+  hipStream_t mapped_stream = nullptr;
+#ifdef HAVE_GST_GL
+  GstGLContext *gl_context = nullptr;
+#endif
+};
+/* *INDENT-ON* */
+
+#ifdef HAVE_GST_GL
+static void
+unregister_resource_on_gl_thread (GstGLContext * gl_context,
+    GstHipGraphicsResource * resource)
+{
+  if (gst_hip_device_set_current (resource->device))
+    HipGraphicsUnregisterResource (resource->vendor, resource->handle);
+}
+#endif
+
+GST_DEFINE_MINI_OBJECT_TYPE (GstHipGraphicsResource,
+    gst_hip_graphics_resource);
+
+/**
+ * gst_hip_graphics_resource_map:
+ * @resource: a #GstHipGraphicsResource
+ * @stream: (type gpointer) (nullable): a hipStream_t handle
+ *
+ * Map registered @resource for I/O operation
+ *
+ * Returns: (type gint): hipError_t error code
+ *
+ * Since: 1.28
+ */
+hipError_t
+gst_hip_graphics_resource_map (GstHipGraphicsResource * resource,
+    hipStream_t stream)
+{
+  g_return_val_if_fail (resource, hipErrorInvalidValue);
+
+  std::unique_lock < std::mutex > lk (resource->lock);
+
+  if (resource->map_count > 0) {
+    if (stream == resource->mapped_stream) {
+      resource->map_count++;
+      return hipSuccess;
+    }
+
+    while (resource->map_count > 0)
+      resource->cond.wait (lk);
+  }
+
+  auto ret = HipGraphicsMapResources (resource->vendor, 1, &resource->handle,
+      stream);
+  if (!gst_hip_result (ret, resource->vendor))
+    return ret;
+
+  resource->map_count++;
+  resource->mapped_stream = stream;
+  return hipSuccess;
+}
+
+/**
+ * gst_hip_graphics_resource_unmap:
+ * @resource: a #GstHipGraphicsResource
+ * @stream: (type gpointer) (nullable): a hipStream_t handle
+ *
+ * Unmap mapped @resource via gst_hip_graphics_resource_map()
+ *
+ * Returns: (type gint): hipError_t error code
+ *
+ * Since: 1.28
+ */
+hipError_t
+gst_hip_graphics_resource_unmap (GstHipGraphicsResource * resource,
+    hipStream_t stream)
+{
+  g_return_val_if_fail (resource, hipErrorInvalidValue);
+
+  std::lock_guard < std::mutex > lk (resource->lock);
+
+  if (resource->map_count == 0) {
+    GST_WARNING ("resource %p is not mapped", resource);
+    return hipErrorNotMapped;
+  }
+
+  resource->map_count--;
+
+  if (resource->map_count > 0)
+    return hipSuccess;
+
+  auto ret = HipGraphicsUnmapResources (resource->vendor, 1,
+      &resource->handle, stream);
+
+  resource->mapped_stream = nullptr;
+  resource->mapped_dev_ptr = nullptr;
+  resource->mapped_size = 0;
+
+  resource->cond.notify_all ();
+
+  return ret;
+}
+
+/**
+ * gst_hip_graphics_resource_get_mapped_pointer:
+ * @resource: a #GstHipGraphicsResource
+ * @dev_ptr: (out) (optional): a pointer to mapped device memory
+ * @size: (out) (optional): the size of mapped device memory
+ *
+ * Get mapped device pointer from @resource.
+ * Caller must map @resource via gst_hip_graphics_resource_map()
+ * before getting mapped device memory
+ *
+ * Returns: (type gint): hipError_t error code
+ *
+ * Since: 1.28
+ */
+hipError_t
+gst_hip_graphics_resource_get_mapped_pointer (GstHipGraphicsResource * resource,
+    void **dev_ptr, size_t *size)
+{
+  g_return_val_if_fail (resource, hipErrorInvalidValue);
+
+  std::lock_guard < std::mutex > lk (resource->lock);
+
+  if (resource->map_count == 0) {
+    GST_WARNING ("resource %p is not mapped", resource);
+    return hipErrorNotMapped;
+  }
+
+  if (!resource->mapped_dev_ptr) {
+    auto ret = HipGraphicsResourceGetMappedPointer (resource->vendor,
+        &resource->mapped_dev_ptr, &resource->mapped_size, resource->handle);
+    if (!gst_hip_result (ret, resource->vendor))
+      return ret;
+  }
+
+  if (dev_ptr)
+    *dev_ptr = resource->mapped_dev_ptr;
+
+  if (size)
+    *size = resource->mapped_size;
+
+  return hipSuccess;
+}
+
+/**
+ * gst_hip_graphics_resource_ref:
+ * @resource: a #GstHipGraphicsResource
+ *
+ * Increments the reference count on @resource
+ *
+ * Returns: (transfer full): a pointer to @resource
+ *
+ * Since: 1.28
+ */
+GstHipGraphicsResource *
+gst_hip_graphics_resource_ref (GstHipGraphicsResource * resource)
+{
+  return (GstHipGraphicsResource *) gst_mini_object_ref (resource);
+}
+
+/**
+ * gst_hip_graphics_resource_unref:
+ * @resource: a #GstHipGraphicsResource
+ *
+ * Decrements the reference count on @resource
+ *
+ * Since: 1.28
+ */
+void
+gst_hip_graphics_resource_unref (GstHipGraphicsResource * resource)
+{
+  gst_mini_object_unref (resource);
+}
+
+/**
+ * gst_clear_hip_graphics_resource: (skip)
+ * @resource: a pointer to a #GstHipGraphicsResource
+ *
+ * Clears a reference to the @resource
+ *
+ * Since: 1.28
+ */
+void
+gst_clear_hip_graphics_resource (GstHipGraphicsResource ** resource)
+{
+  gst_clear_mini_object (resource);
+}
+
+#ifdef HAVE_GST_GL
+static void
+gst_hip_graphics_resource_free (GstHipGraphicsResource * resource)
+{
+  delete resource;
+}
+
+struct GetResourceData
+{
+  GstHipGraphicsResource *resource = nullptr;
+  hipError_t ret = hipSuccess;
+  GstMemory *gl_mem;
+  GstHipDevice *device;
+};
+
+static void
+get_resource_on_gl_thread (GstGLContext * gl_context, GetResourceData * data)
+{
+  static GQuark gl_quark = 0;
+  static std::once_flag once;
+
+  std::call_once (once, [&] {
+    gl_quark = g_quark_from_static_string ("GstHipGraphicsResourceGL");
+  });
+
+  auto resource = (GstHipGraphicsResource *)
+      gst_mini_object_get_qdata ((GstMiniObject *) data->gl_mem, gl_quark);
+
+  if (resource) {
+    data->resource = gst_hip_graphics_resource_ref (resource);
+    data->ret = hipSuccess;
+    return;
+  }
+
+  auto vendor = gst_hip_device_get_vendor (data->device);
+  auto ret = HipSetDevice (vendor, gst_hip_device_get_device_id (data->device));
+  if (!gst_hip_result (ret, vendor)) {
+    data->ret = ret;
+    return;
+  }
+
+  auto pbo = (GstGLMemoryPBO *) data->gl_mem;
+  hipGraphicsResource *handle;
+  ret = HipGraphicsGLRegisterBuffer (vendor,
+      &handle, pbo->pbo->id, hipGraphicsRegisterFlagsNone);
+  if (!gst_hip_result (ret, vendor)) {
+    data->ret = ret;
+    return;
+  }
+
+  auto new_resource = new GstHipGraphicsResource ();
+  new_resource->device = (GstHipDevice *) gst_object_ref (data->device);
+  new_resource->gl_context = (GstGLContext *) gst_object_ref (gl_context);
+  new_resource->vendor = vendor;
+  new_resource->handle = handle;
+
+  gst_mini_object_init (new_resource, 0, gst_hip_graphics_resource_get_type (),
+      nullptr, nullptr,
+      (GstMiniObjectFreeFunction) gst_hip_graphics_resource_free);
+
+  gst_mini_object_set_qdata ((GstMiniObject *) data->gl_mem, gl_quark,
+      gst_hip_graphics_resource_ref (new_resource),
+      (GDestroyNotify) gst_mini_object_unref);
+
+  data->resource = new_resource;
+  data->ret = hipSuccess;
+}
+
+/**
+ * gst_hip_gl_get_graphics_resource_from_memory:
+ * @device: a #GstHipDevice
+ * @mem: a #GstMemory
+ * @resource: (out) (transfer full) (nullable): a location to store created #GstHipGraphicsResource
+ *
+ * Creates a new #GstHipGraphicsResource from gl memory.
+ * @mem must be a valid #GstGLMemoryPBO
+ *
+ * Returns: (type gint): hipError_t error code
+ *
+ * Since: 1.28
+ */
+hipError_t
+gst_hip_gl_get_graphics_resource_from_memory (GstHipDevice * device,
+    GstMemory * mem, GstHipGraphicsResource ** resource)
+{
+  g_return_val_if_fail (GST_IS_HIP_DEVICE (device), hipErrorInvalidValue);
+  g_return_val_if_fail (gst_is_gl_memory_pbo (mem), hipErrorInvalidValue);
+  g_return_val_if_fail (resource, hipErrorInvalidValue);
+
+  GetResourceData data;
+  data.device = device;
+  data.gl_mem = mem;
+
+  gst_gl_context_thread_add (GST_GL_BASE_MEMORY_CAST (mem)->context,
+      (GstGLContextThreadFunc) get_resource_on_gl_thread, &data);
+
+  if (data.ret != hipSuccess)
+    return data.ret;
+
+  *resource = data.resource;
+  return hipSuccess;
+}
+#endif
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/gsthip-interop.h
Added
@@ -0,0 +1,53 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/gst.h> +#include <gst/hip/gsthip_fwd.h> + +G_BEGIN_DECLS + +GST_HIP_API +GType gst_hip_graphics_resource_get_type (void); + +GST_HIP_API +hipError_t gst_hip_graphics_resource_map (GstHipGraphicsResource * resource, + hipStream_t stream); + +GST_HIP_API +hipError_t gst_hip_graphics_resource_unmap (GstHipGraphicsResource * resource, + hipStream_t stream); + +GST_HIP_API +hipError_t gst_hip_graphics_resource_get_mapped_pointer (GstHipGraphicsResource * resource, + void ** dev_ptr, + size_t * size); + +GST_HIP_API +GstHipGraphicsResource * gst_hip_graphics_resource_ref (GstHipGraphicsResource * resource); + +GST_HIP_API +void gst_hip_graphics_resource_unref (GstHipGraphicsResource * resource); + +GST_HIP_API +void gst_clear_hip_graphics_resource (GstHipGraphicsResource ** resource); + +G_END_DECLS +
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/gsthip-private.h
Added
@@ -0,0 +1,30 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/hip/gsthip.h> + +G_BEGIN_DECLS + +GST_HIP_API +void gst_hip_memory_init_once (void); + +G_END_DECLS +
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/gsthip.h
Added
@@ -0,0 +1,44 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#ifndef GST_USE_UNSTABLE_API +#pragma message ("The hip library from gst-plugins-bad is unstable API and may change in future.") +#pragma message ("You can define GST_USE_UNSTABLE_API to avoid this warning.") +#endif + +#include <hip/hip_runtime.h> + +#include <gst/gst.h> +#include <gst/hip/hip-gst.h> +#include <gst/hip/hip-prelude.h> +#include <gst/hip/gsthip_fwd.h> +#include <gst/hip/gsthip-enums.h> +#include <gst/hip/gsthip-interop.h> +#include <gst/hip/gsthipbufferpool.h> +#include <gst/hip/gsthipdevice.h> +#include <gst/hip/gsthipevent.h> +#include <gst/hip/gsthiploader.h> +#include <gst/hip/gsthipmemory.h> +#include <gst/hip/gsthiprtc.h> +#include <gst/hip/gsthipstream.h> +#include <gst/hip/gsthiputils.h> + +
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/gsthip_fwd.h
Added
@@ -0,0 +1,57 @@
+/* GStreamer
+ * Copyright (C) 2025 Seungha Yang <seungha@centricular.com>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Library General Public
+ * License as published by the Free Software Foundation; either
+ * version 2 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Library General Public License for more details.
+ *
+ * You should have received a copy of the GNU Library General Public
+ * License along with this library; if not, write to the
+ * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
+ * Boston, MA 02110-1301, USA.
+ */
+
+#pragma once
+
+#include <gst/gst.h>
+#include <gst/hip/hip-prelude.h>
+#include <hip/hip_runtime.h>
+
+G_BEGIN_DECLS
+
+typedef struct _GstHipDevice GstHipDevice;
+typedef struct _GstHipDeviceClass GstHipDeviceClass;
+typedef struct _GstHipDevicePrivate GstHipDevicePrivate;
+
+typedef struct _GstHipMemory GstHipMemory;
+typedef struct _GstHipMemoryPrivate GstHipMemoryPrivate;
+
+typedef struct _GstHipAllocator GstHipAllocator;
+typedef struct _GstHipAllocatorClass GstHipAllocatorClass;
+typedef struct _GstHipAllocatorPrivate GstHipAllocatorPrivate;
+
+typedef struct _GstHipPoolAllocator GstHipPoolAllocator;
+typedef struct _GstHipPoolAllocatorClass GstHipPoolAllocatorClass;
+typedef struct _GstHipPoolAllocatorPrivate GstHipPoolAllocatorPrivate;
+
+typedef struct _GstHipBufferPool GstHipBufferPool;
+typedef struct _GstHipBufferPoolClass GstHipBufferPoolClass;
+typedef struct _GstHipBufferPoolPrivate GstHipBufferPoolPrivate;
+
+typedef struct _GstHipGraphicsResource GstHipGraphicsResource;
+
+typedef struct _GstHipStream GstHipStream;
+
+typedef struct _GstHipEvent GstHipEvent;
+typedef struct _GstHipEventPool GstHipEventPool;
+typedef struct _GstHipEventPoolClass GstHipEventPoolClass;
+typedef struct _GstHipEventPoolPrivate GstHipEventPoolPrivate;
+
+G_END_DECLS
+
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/gsthipbufferpool.cpp
Added
@@ -0,0 +1,258 @@
+/* GStreamer
+ * Copyright (C) 2025 Seungha Yang <seungha@centricular.com>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Library General Public
+ * License as published by the Free Software Foundation; either
+ * version 2 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Library General Public License for more details.
+ *
+ * You should have received a copy of the GNU Library General Public
+ * License along with this library; if not, write to the
+ * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
+ * Boston, MA 02110-1301, USA.
+ */
+
+#ifdef HAVE_CONFIG_H
+#include "config.h"
+#endif
+
+#include "gsthip.h"
+
+GST_DEBUG_CATEGORY_STATIC (gst_hip_buffer_pool_debug);
+#define GST_CAT_DEFAULT gst_hip_buffer_pool_debug
+
+struct _GstHipBufferPoolPrivate
+{
+  GstVideoInfo info;
+  GstHipPoolAllocator *alloc = nullptr;
+};
+
+#define gst_hip_buffer_pool_parent_class parent_class
+G_DEFINE_TYPE (GstHipBufferPool, gst_hip_buffer_pool, GST_TYPE_BUFFER_POOL);
+
+static void gst_hip_buffer_pool_finalize (GObject * object);
+static const gchar **gst_hip_buffer_pool_get_options (GstBufferPool * pool);
+static gboolean gst_hip_buffer_pool_set_config (GstBufferPool * pool,
+    GstStructure * config);
+static gboolean gst_hip_buffer_pool_start (GstBufferPool * pool);
+static gboolean gst_hip_buffer_pool_stop (GstBufferPool * pool);
+static GstFlowReturn gst_hip_buffer_pool_alloc (GstBufferPool * pool,
+    GstBuffer ** buffer, GstBufferPoolAcquireParams * params);
+static GstFlowReturn gst_hip_buffer_pool_acquire_buffer (GstBufferPool * pool,
+    GstBuffer ** buffer, GstBufferPoolAcquireParams * params);
+
+static void
+gst_hip_buffer_pool_class_init (GstHipBufferPoolClass * klass)
+{
+  auto object_class = G_OBJECT_CLASS (klass);
+  auto pool_class = GST_BUFFER_POOL_CLASS (klass);
+
+  object_class->finalize = gst_hip_buffer_pool_finalize;
+
+  pool_class->get_options = gst_hip_buffer_pool_get_options;
+  pool_class->set_config = gst_hip_buffer_pool_set_config;
+  pool_class->start = gst_hip_buffer_pool_start;
+  pool_class->stop = gst_hip_buffer_pool_stop;
+  pool_class->alloc_buffer = gst_hip_buffer_pool_alloc;
+  pool_class->acquire_buffer = gst_hip_buffer_pool_acquire_buffer;
+
+  GST_DEBUG_CATEGORY_INIT (gst_hip_buffer_pool_debug, "hipbufferpool", 0,
+      "hipbufferpool");
+}
+
+static void
+gst_hip_buffer_pool_init (GstHipBufferPool * self)
+{
+  self->priv = new GstHipBufferPoolPrivate ();
+}
+
+static void
+gst_hip_buffer_pool_finalize (GObject * object)
+{
+  auto self = GST_HIP_BUFFER_POOL (object);
+  auto priv = self->priv;
+
+  if (priv->alloc) {
+    gst_hip_allocator_set_active (GST_HIP_ALLOCATOR (priv->alloc), FALSE);
+    gst_clear_object (&priv->alloc);
+  }
+
+  gst_clear_object (&self->device);
+
+  delete self->priv;
+
+  G_OBJECT_CLASS (parent_class)->finalize (object);
+}
+
+static const gchar **
+gst_hip_buffer_pool_get_options (GstBufferPool * pool)
+{
+  static const gchar *options[] = { GST_BUFFER_POOL_OPTION_VIDEO_META, nullptr
+  };
+
+  return options;
+}
+
+static gboolean
+gst_hip_buffer_pool_set_config (GstBufferPool * pool, GstStructure * config)
+{
+  auto self = GST_HIP_BUFFER_POOL (pool);
+  auto priv = self->priv;
+  GstCaps *caps = nullptr;
+  guint size, min_buffers, max_buffers;
+  GstVideoInfo info;
+  GstMemory *mem = nullptr;
+
+  if (!gst_buffer_pool_config_get_params (config, &caps, &size, &min_buffers,
+          &max_buffers)) {
+    GST_WARNING_OBJECT (self, "invalid config");
+    return FALSE;
+  }
+
+  if (!caps) {
+    GST_WARNING_OBJECT (pool, "no caps in config");
+    return FALSE;
+  }
+
+  if (!gst_video_info_from_caps (&info, caps)) {
+    GST_WARNING_OBJECT (self, "Failed to convert caps to video-info");
+    return FALSE;
+  }
+
+  if (priv->alloc) {
+    gst_hip_allocator_set_active (GST_HIP_ALLOCATOR (priv->alloc), FALSE);
+    gst_clear_object (&priv->alloc);
+  }
+
+  priv->alloc = gst_hip_pool_allocator_new (self->device, &info);
+
+  if (!priv->alloc) {
+    GST_ERROR_OBJECT (self, "Couldn't create allocator");
+    return FALSE;
+  }
+
+  if (!gst_hip_allocator_set_active (GST_HIP_ALLOCATOR (priv->alloc), TRUE)) {
+    GST_ERROR_OBJECT (self, "Couldn't set active");
+    return FALSE;
+  }
+
+  gst_hip_pool_allocator_acquire_memory (priv->alloc, &mem);
+  gst_hip_allocator_set_active (GST_HIP_ALLOCATOR (priv->alloc), FALSE);
+  if (!mem) {
+    GST_WARNING_OBJECT (self, "Failed to allocate memory");
+    return FALSE;
+  }
+
+  auto hmem = GST_HIP_MEMORY_CAST (mem);
+
+  gst_buffer_pool_config_set_params (config, caps,
+      GST_VIDEO_INFO_SIZE (&hmem->info), min_buffers, max_buffers);
+
+  priv->info = info;
+
+  gst_memory_unref (mem);
+
+  return GST_BUFFER_POOL_CLASS (parent_class)->set_config (pool, config);
+}
+
+static GstFlowReturn
+gst_hip_buffer_pool_alloc (GstBufferPool * pool, GstBuffer ** buffer,
+    GstBufferPoolAcquireParams * params)
+{
+  auto self = GST_HIP_BUFFER_POOL (pool);
+  auto priv = self->priv;
+  GstVideoInfo *info = &priv->info;
+  GstMemory *mem;
+  GstFlowReturn ret;
+
+  ret = gst_hip_pool_allocator_acquire_memory (priv->alloc, &mem);
+  if (ret != GST_FLOW_OK) {
+    GST_ERROR_OBJECT (self, "Couldn't acquire memory");
+    return ret;
+  }
+
+  auto buf = gst_buffer_new ();
+  gst_buffer_append_memory (buf, mem);
+
+  auto hmem = GST_HIP_MEMORY_CAST (mem);
+  gst_hip_memory_sync (hmem);
+  gst_buffer_add_video_meta_full (buf, GST_VIDEO_FRAME_FLAG_NONE,
+      GST_VIDEO_INFO_FORMAT (info), GST_VIDEO_INFO_WIDTH (info),
+      GST_VIDEO_INFO_HEIGHT (info), GST_VIDEO_INFO_N_PLANES (info),
+      hmem->info.offset, hmem->info.stride);
+
+  *buffer = buf;
+
+  return GST_FLOW_OK;
+}
+
+static GstFlowReturn
+gst_hip_buffer_pool_acquire_buffer (GstBufferPool * pool,
+    GstBuffer ** buffer, GstBufferPoolAcquireParams * params)
+{
+  auto ret = GST_BUFFER_POOL_CLASS (parent_class)->acquire_buffer (pool,
+      buffer, params);
+  if (ret != GST_FLOW_OK)
+    return ret;
+
+  auto mem = (GstHipMemory *) gst_buffer_peek_memory (*buffer, 0);
+  gst_hip_memory_sync (mem);
+
+  return GST_FLOW_OK;
+}
+
+static gboolean
+gst_hip_buffer_pool_start (GstBufferPool * pool)
+{
+  auto self = GST_HIP_BUFFER_POOL (pool);
+  auto priv = self->priv;
+
+  if (!gst_hip_allocator_set_active (GST_HIP_ALLOCATOR (priv->alloc), TRUE)) {
+    GST_ERROR_OBJECT (self, "Couldn't activate allocator");
+    return FALSE;
+  }
+
+  return TRUE;
+}
+
+static gboolean
+gst_hip_buffer_pool_stop (GstBufferPool * pool)
+{
+  auto self = GST_HIP_BUFFER_POOL (pool);
+  auto priv = self->priv;
+
+  if (priv->alloc)
+    gst_hip_allocator_set_active (GST_HIP_ALLOCATOR (priv->alloc), FALSE);
+
+  return GST_BUFFER_POOL_CLASS (parent_class)->stop (pool);
+}
+
+/**
+ * gst_hip_buffer_pool_new:
+ * @device: a #GstHipDevice
+ *
+ * Creates new #GstHipBufferPool instance
+ *
+ * Returns: (transfer full): a #GstBufferPool that allocates buffers with
+ * #GstHipMemory
+ *
+ * Since: 1.28
+ */
+GstBufferPool *
+gst_hip_buffer_pool_new (GstHipDevice * device)
+{
+  g_return_val_if_fail (GST_IS_HIP_DEVICE (device), nullptr);
+
+  auto self = (GstHipBufferPool *)
+      g_object_new (GST_TYPE_HIP_BUFFER_POOL, nullptr);
+  gst_object_ref_sink (self);
+
+  self->device = (GstHipDevice *) gst_object_ref (device);
+
+  return GST_BUFFER_POOL_CAST (self);
+}
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/gsthipbufferpool.h
Added
@@ -0,0 +1,76 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/gst.h> +#include <gst/video/video.h> +#include <gst/hip/gsthip_fwd.h> + +G_BEGIN_DECLS + +#define GST_TYPE_HIP_BUFFER_POOL (gst_hip_buffer_pool_get_type ()) +#define GST_HIP_BUFFER_POOL(obj) (G_TYPE_CHECK_INSTANCE_CAST ((obj),GST_TYPE_HIP_BUFFER_POOL,GstHipBufferPool)) +#define GST_HIP_BUFFER_POOL_CLASS(klass) (G_TYPE_CHECK_CLASS_CAST ((klass), GST_TYPE_HIP_BUFFER_POOL,GstHipBufferPoolClass)) +#define GST_HIP_BUFFER_POOL_GET_CLASS(obj) (G_TYPE_INSTANCE_GET_CLASS((obj), GST_TYPE_HIP_BUFFER_POOL,GstHipBufferPoolClass)) +#define GST_IS_HIP_BUFFER_POOL(obj) (G_TYPE_CHECK_INSTANCE_TYPE ((obj),GST_TYPE_HIP_BUFFER_POOL)) +#define GST_IS_HIP_BUFFER_POOL_CLASS(klass) (G_TYPE_CHECK_CLASS_TYPE ((klass), GST_TYPE_HIP_BUFFER_POOL)) +#define GST_HIP_BUFFER_POOL_CAST(obj) ((GstHipBufferPool*)(obj)) + +/** + * GstHipBufferPool: + * + * Opaque GstHipBufferPool struct + * + * Since: 1.28 + */ +struct _GstHipBufferPool +{ + GstBufferPool parent; + + GstHipDevice *device; + + /*< private >*/ + GstHipBufferPoolPrivate *priv; + gpointer _gst_reserved[GST_PADDING]; +}; + +/** + * GstHipBufferPoolClass: + * + * Opaque 
GstHipBufferPoolClass struct + * + * Since: 1.28 + */ +struct _GstHipBufferPoolClass +{ + GstBufferPoolClass parent_class; + + /*< private >*/ + gpointer _gst_reserved[GST_PADDING]; +}; + +GST_HIP_API +GType gst_hip_buffer_pool_get_type (void); + +GST_HIP_API +GstBufferPool * gst_hip_buffer_pool_new (GstHipDevice * device); + +G_END_DECLS +
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/gsthipdevice.cpp
Added
@@ -0,0 +1,338 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gsthip.h" +#include "gsthip-private.h" +#include <mutex> + +#ifndef GST_DISABLE_GST_DEBUG +#define GST_CAT_DEFAULT ensure_debug_category() +static GstDebugCategory * +ensure_debug_category (void) +{ + static GstDebugCategory *cat = nullptr; + static std::once_flag once; + + std::call_once (once, [&] { + cat = _gst_debug_category_new ("hipdevice", 0, "hipdevice"); + }); + + return cat; +} +#endif + +enum +{ + PROP_0, + PROP_DEVICE_ID, + PROP_VENDOR, + PROP_TEXTURE2D_SUPPORT, +}; + +struct _GstHipDevicePrivate +{ + ~_GstHipDevicePrivate () + { + gst_clear_hip_stream (&stream); + } + guint device_id; + GstHipVendor vendor; + gboolean texture_support; + GstHipStream *stream = nullptr; +}; + +#define gst_hip_device_parent_class parent_class +G_DEFINE_TYPE (GstHipDevice, gst_hip_device, GST_TYPE_OBJECT); + +static void gst_hip_device_get_property (GObject * object, guint prop_id, + GValue * value, GParamSpec * pspec); + +static void gst_hip_device_finalize (GObject * object); + +static void +gst_hip_device_class_init (GstHipDeviceClass * klass) +{ + auto object_class = 
G_OBJECT_CLASS (klass); + + object_class->get_property = gst_hip_device_get_property; + object_class->finalize = gst_hip_device_finalize; + + g_object_class_install_property (object_class, PROP_DEVICE_ID, + g_param_spec_uint ("device-id", "Device ID", "Device ID", + 0, G_MAXUINT, 0, + (GParamFlags) (G_PARAM_READABLE | G_PARAM_STATIC_STRINGS))); + g_object_class_install_property (object_class, PROP_VENDOR, + g_param_spec_enum ("vendor", "Vendor", "Vendor", + GST_TYPE_HIP_VENDOR, GST_HIP_VENDOR_UNKNOWN, + (GParamFlags) (G_PARAM_READABLE | G_PARAM_STATIC_STRINGS))); + g_object_class_install_property (object_class, PROP_TEXTURE2D_SUPPORT, + g_param_spec_boolean ("texture2d-support", "Texture2D support", + "Texture2D support", FALSE, + (GParamFlags) (G_PARAM_READABLE | G_PARAM_STATIC_STRINGS))); + + gst_hip_memory_init_once (); +} + +static void +gst_hip_device_init (GstHipDevice * self) +{ + self->priv = new GstHipDevicePrivate (); +} + +static void +gst_hip_device_get_property (GObject * object, guint prop_id, + GValue * value, GParamSpec * pspec) +{ + auto self = GST_HIP_DEVICE (object); + auto priv = self->priv; + + switch (prop_id) { + case PROP_DEVICE_ID: + g_value_set_uint (value, priv->device_id); + break; + case PROP_VENDOR: + g_value_set_enum (value, priv->vendor); + break; + case PROP_TEXTURE2D_SUPPORT: + g_value_set_boolean (value, priv->texture_support); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static void +gst_hip_device_finalize (GObject * object) +{ + auto self = GST_HIP_DEVICE (object); + + delete self->priv; + + G_OBJECT_CLASS (parent_class)->finalize (object); +} + +/** + * gst_hip_device_new: + * @vendor: a #GstHipVendor + * @device_id: device identifier + * + * Creates a new device instance with @vendor and @device_id. 
+ * + * Returns: (transfer full) (nullable): a #GstHipDevice if succeeded, + * otherwise %NULL + * + * Since: 1.28 + */ +GstHipDevice * +gst_hip_device_new (GstHipVendor vendor, guint device_id) +{ + if (vendor == GST_HIP_VENDOR_UNKNOWN) { + if (gst_hip_load_library (GST_HIP_VENDOR_AMD)) + vendor = GST_HIP_VENDOR_AMD; + else if (gst_hip_load_library (GST_HIP_VENDOR_NVIDIA)) + vendor = GST_HIP_VENDOR_NVIDIA; + else + return nullptr; + } + + if (!gst_hip_load_library (vendor)) { + GST_INFO ("Couldn't load HIP library"); + return nullptr; + } + + int num_dev = 0; + auto hip_ret = HipGetDeviceCount (vendor, &num_dev); + if (hip_ret != hipSuccess || num_dev <= 0) { + GST_DEBUG ("No supported HIP device, error: %d", hip_ret); + return nullptr; + } + + if ((guint) num_dev <= device_id) { + GST_DEBUG ("Num device %d <= requested device id %d", num_dev, device_id); + return nullptr; + } + + gboolean texture_support = FALSE; + int val = 0; + hip_ret = HipDeviceGetAttribute (vendor, &val, + hipDeviceAttributeMaxTexture2DWidth, device_id); + if (hip_ret == hipSuccess && val > 0) { + hip_ret = HipDeviceGetAttribute (vendor, &val, + hipDeviceAttributeMaxTexture2DHeight, device_id); + if (hip_ret == hipSuccess && val > 0) { + hip_ret = HipDeviceGetAttribute (vendor, &val, + hipDeviceAttributeTextureAlignment, device_id); + if (hip_ret == hipSuccess && val > 0) { + texture_support = TRUE; + } + } + } + + auto stream = gst_hip_stream_new (vendor, device_id); + if (!stream) { + GST_ERROR ("Couldn't create stream"); + return nullptr; + } + + auto self = (GstHipDevice *) g_object_new (GST_TYPE_HIP_DEVICE, nullptr); + gst_object_ref_sink (self); + self->priv->device_id = device_id; + self->priv->vendor = vendor; + self->priv->texture_support = texture_support; + self->priv->stream = stream; + + return self; +} + +/** + * gst_hip_device_set_current: + * @device: a #GstHipDevice + * + * Sets @device to current stack via hipSetDevice + * + * Returns: %TRUE if hipSetDevice call succeeded + 
* + * Since: 1.28 + */ +gboolean +gst_hip_device_set_current (GstHipDevice * device) +{ + g_return_val_if_fail (GST_IS_HIP_DEVICE (device), FALSE); + + auto priv = device->priv; + auto hip_ret = HipSetDevice (priv->vendor, priv->device_id); + if (!gst_hip_result (hip_ret, priv->vendor)) { + GST_ERROR_OBJECT (device, "hipSetDevice result %d", hip_ret); + return FALSE; + } + + return TRUE; +} + +/** + * gst_hip_device_get_attribute: + * @device: a #GstHipDevice + * @attr: (type gint): a hipDeviceAttribute_t value + * @value: (out): an attribute value + * + * Gets a device attribute via hipDeviceGetAttribute + * + * Returns: (type gint): hipError_t error code + * + * Since: 1.28 + */ +hipError_t +gst_hip_device_get_attribute (GstHipDevice * device, hipDeviceAttribute_t attr, + gint * value) +{ + g_return_val_if_fail (GST_IS_HIP_DEVICE (device), hipErrorInvalidDevice); + + auto priv = device->priv; + + return HipDeviceGetAttribute (priv->vendor, value, attr, priv->device_id); +} + +/** + * gst_hip_device_is_equal: + * @device1: a #GstHipDevice + * @device2: a #GstHipDevice + * + * Checks equality of @device1 and @device2 + * + * Returns: %TRUE if both devices are associated with the same hardware device + * + * Since: 1.28 + */ +gboolean +gst_hip_device_is_equal (GstHipDevice * device1, GstHipDevice * device2) +{ + if (!device1 || !device2) + return FALSE; + + g_return_val_if_fail (GST_IS_HIP_DEVICE (device1), FALSE); + g_return_val_if_fail (GST_IS_HIP_DEVICE (device2), FALSE); + + if (device1 == device2) + return TRUE; + + if (device1->priv->device_id == device2->priv->device_id && + device1->priv->vendor == device2->priv->vendor) { + return TRUE; + } + + return FALSE; +} + +/** + * gst_hip_device_get_vendor: + * @device: a #GstHipDevice + * + * Gets vendor of @device + * + * Returns: #GstHipVendor + * + * Since: 1.28 + */ +GstHipVendor +gst_hip_device_get_vendor (GstHipDevice * device) +{ + g_return_val_if_fail (GST_IS_HIP_DEVICE (device), GST_HIP_VENDOR_UNKNOWN); + 
+ return device->priv->vendor; +} + +/** + * gst_hip_device_get_device_id: + * @device: a #GstHipDevice + * + * Gets numeric device identifier of @device + * + * Returns: the device identifier + * + * Since: 1.28 + */ +guint +gst_hip_device_get_device_id (GstHipDevice * device) +{ + g_return_val_if_fail (GST_IS_HIP_DEVICE (device), (guint) - 1); + + return device->priv->device_id; +} + +/** + * gst_hip_device_get_stream: + * @device: a #GstHipDevice + * + * Gets per #GstHipDevice default #GstHipStream owned by @device + * + * Returns: a #GstHipStream + * + * Since: 1.28 + */ +GstHipStream * +gst_hip_device_get_stream (GstHipDevice * device) +{ + g_return_val_if_fail (GST_IS_HIP_DEVICE (device), nullptr); + + return device->priv->stream; +}
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/gsthipdevice.h
Added
@@ -0,0 +1,97 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/gst.h> +#include <gst/hip/gsthip_fwd.h> +#include <gst/hip/gsthip-enums.h> + +G_BEGIN_DECLS + +#define GST_TYPE_HIP_DEVICE (gst_hip_device_get_type()) +#define GST_HIP_DEVICE(obj) (G_TYPE_CHECK_INSTANCE_CAST((obj), GST_TYPE_HIP_DEVICE,GstHipDevice)) +#define GST_HIP_DEVICE_CLASS(klass) (G_TYPE_CHECK_CLASS_CAST((klass), GST_TYPE_HIP_DEVICE,GstHipDeviceClass)) +#define GST_HIP_DEVICE_GET_CLASS(obj) (G_TYPE_INSTANCE_GET_CLASS((obj), GST_TYPE_HIP_DEVICE,GstHipDeviceClass)) +#define GST_IS_HIP_DEVICE(obj) (G_TYPE_CHECK_INSTANCE_TYPE ((obj),GST_TYPE_HIP_DEVICE)) +#define GST_IS_HIP_DEVICE_CLASS(klass) (G_TYPE_CHECK_CLASS_TYPE ((klass), GST_TYPE_HIP_DEVICE)) + +#define GST_HIP_DEVICE_CONTEXT_TYPE "gst.hip.device" + +/** + * GstHipDevice: + * + * Opaque GstHipDevice struct + * + * Since: 1.28 + */ +struct _GstHipDevice +{ + GstObject object; + + /*< private >*/ + GstHipDevicePrivate *priv; + gpointer _gst_reserved[GST_PADDING]; +}; + +/** + * GstHipDeviceClass: + * + * Opaque GstHipDeviceClass struct + * + * Since: 1.28 + */ +struct _GstHipDeviceClass +{ + GstObjectClass parent_class; + + /*< private 
>*/ + gpointer _gst_reserved[GST_PADDING]; +}; + +GST_HIP_API +GType gst_hip_device_get_type (void); + +GST_HIP_API +GstHipDevice * gst_hip_device_new (GstHipVendor vendor, + guint device_id); + +GST_HIP_API +gboolean gst_hip_device_set_current (GstHipDevice * device); + +GST_HIP_API +hipError_t gst_hip_device_get_attribute (GstHipDevice * device, + hipDeviceAttribute_t attr, + gint * value); + +GST_HIP_API +gboolean gst_hip_device_is_equal (GstHipDevice * device1, + GstHipDevice * device2); + +GST_HIP_API +GstHipVendor gst_hip_device_get_vendor (GstHipDevice * device); + +GST_HIP_API +guint gst_hip_device_get_device_id (GstHipDevice * device); + +GST_HIP_API +GstHipStream * gst_hip_device_get_stream (GstHipDevice * device); + +G_END_DECLS +
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/gsthipevent.cpp
Added
@@ -0,0 +1,382 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gsthip-config.h" +#include "gsthip.h" +#include <mutex> +#include <queue> + +#ifndef GST_DISABLE_GST_DEBUG +#define GST_CAT_DEFAULT ensure_debug_category() +static GstDebugCategory * +ensure_debug_category (void) +{ + static GstDebugCategory *cat = nullptr; + static std::once_flag once; + + std::call_once (once, [&] { + cat = _gst_debug_category_new ("hipevent", 0, "hipevent"); + }); + + return cat; +} +#endif + +/* *INDENT-OFF* */ +struct _GstHipEvent : public GstMiniObject +{ + ~_GstHipEvent () + { + if (handle) { + auto hip_ret = HipSetDevice (vendor, device_id); + if (gst_hip_result (hip_ret, vendor)) { + HipEventSynchronize (vendor, handle); + HipEventDestroy (vendor, handle); + } + } + } + + GstHipEventPool *pool = nullptr; + hipEvent_t handle = nullptr; + GstHipVendor vendor; + guint device_id; +}; + +struct _GstHipEventPoolPrivate +{ + ~_GstHipEventPoolPrivate () + { + while (!event_pool.empty ()) { + auto event = event_pool.front (); + event_pool.pop (); + gst_mini_object_unref (event); + } + } + + GstHipVendor vendor; + guint device_id; + std::mutex 
lock; + std::queue<GstHipEvent *>event_pool; +}; +/* *INDENT-ON* */ + +GST_DEFINE_MINI_OBJECT_TYPE (GstHipEvent, gst_hip_event); + +static void gst_hip_event_pool_finalize (GObject * object); + +#define gst_hip_event_pool_parent_class parent_class +G_DEFINE_TYPE (GstHipEventPool, gst_hip_event_pool, GST_TYPE_OBJECT); + +static void +gst_hip_event_pool_class_init (GstHipEventPoolClass * klass) +{ + auto object_class = G_OBJECT_CLASS (klass); + + object_class->finalize = gst_hip_event_pool_finalize; +} + +static void +gst_hip_event_pool_init (GstHipEventPool * self) +{ + self->priv = new GstHipEventPoolPrivate (); +} + +static void +gst_hip_event_pool_finalize (GObject * object) +{ + auto self = GST_HIP_EVENT_POOL (object); + + delete self->priv; + + G_OBJECT_CLASS (parent_class)->finalize (object); +} + +/** + * gst_hip_event_pool_new: + * @vendor: a #GstHipVendor + * @device_id: device identifier + * + * Creates a new event pool instance with @vendor and @device_id. + * + * Returns: (transfer full) (nullable): a #GstHipEventPool if succeeded, + * otherwise %NULL + * + * Since: 1.28 + */ +GstHipEventPool * +gst_hip_event_pool_new (GstHipVendor vendor, guint device_id) +{ + g_return_val_if_fail (vendor != GST_HIP_VENDOR_UNKNOWN, nullptr); + + auto self = (GstHipEventPool *) + g_object_new (GST_TYPE_HIP_EVENT_POOL, nullptr); + gst_object_ref_sink (self); + + auto priv = self->priv; + priv->vendor = vendor; + priv->device_id = device_id; + + return self; +} + +static void +gst_hip_event_pool_release (GstHipEventPool * pool, GstHipEvent * event) +{ + auto priv = pool->priv; + { + std::lock_guard < std::mutex > lk (priv->lock); + event->dispose = nullptr; + event->pool = nullptr; + priv->event_pool.push (event); + } + + gst_object_unref (pool); +} + +static gboolean +gst_hip_event_dispose (GstHipEvent * event) +{ + if (!event->pool) + return TRUE; + + gst_mini_object_ref (event); + gst_hip_event_pool_release (event->pool, event); + + return FALSE; +} + +static void 
+gst_hip_event_free (GstHipEvent * event) +{ + delete event; +} + +/** + * gst_hip_event_pool_acquire: + * @pool: a #GstHipEventPool + * @event: (out) (transfer full) (nullable): a location to store #GstHipEvent + * + * Acquires #GstHipEvent from @pool + * + * Returns: %TRUE if succeeded + * + * Since: 1.28 + */ +gboolean +gst_hip_event_pool_acquire (GstHipEventPool * pool, GstHipEvent ** event) +{ + g_return_val_if_fail (GST_IS_HIP_EVENT_POOL (pool), FALSE); + g_return_val_if_fail (event, FALSE); + + *event = nullptr; + + auto priv = pool->priv; + GstHipEvent *new_event = nullptr; + + { + std::lock_guard < std::mutex > lk (priv->lock); + if (!priv->event_pool.empty ()) { + new_event = priv->event_pool.front (); + priv->event_pool.pop (); + } + } + + if (!new_event) { + auto hip_ret = HipSetDevice (priv->vendor, priv->device_id); + if (!gst_hip_result (hip_ret, priv->vendor)) { + GST_ERROR_OBJECT (pool, "Couldn't set device"); + return FALSE; + } + + hipEvent_t handle; + hip_ret = HipEventCreateWithFlags (priv->vendor, &handle, + hipEventDisableTiming); + + if (!gst_hip_result (hip_ret, priv->vendor)) { + GST_ERROR_OBJECT (pool, "Couldn't create event"); + return FALSE; + } + + new_event = new GstHipEvent (); + new_event->handle = handle; + new_event->vendor = priv->vendor; + new_event->device_id = priv->device_id; + + gst_mini_object_init (new_event, 0, gst_hip_event_get_type (), + nullptr, nullptr, (GstMiniObjectFreeFunction) gst_hip_event_free); + } + + new_event->pool = (GstHipEventPool *) gst_object_ref (pool); + new_event->dispose = (GstMiniObjectDisposeFunction) gst_hip_event_dispose; + + *event = new_event; + + return TRUE; +} + +/** + * gst_hip_event_get_vendor: + * @event: a #GstHipEvent + * + * Gets device vendor of @event object + * + * Returns: #GstHipVendor + * + * Since: 1.28 + */ +GstHipVendor +gst_hip_event_get_vendor (GstHipEvent * event) +{ + g_return_val_if_fail (event, GST_HIP_VENDOR_UNKNOWN); + + return event->vendor; +} + +/** + * 
gst_hip_event_get_device_id: + * @event: a #GstHipEvent + * + * Gets numeric device identifier of @event object + * + * Returns: device identifier + * + * Since: 1.28 + */ +guint +gst_hip_event_get_device_id (GstHipEvent * event) +{ + g_return_val_if_fail (event, G_MAXUINT); + + return event->device_id; +} + +/** + * gst_hip_event_record: + * @event: a #GstHipEvent + * @stream: (type gpointer): a hipStream_t handle + * + * Records operations currently scheduled by @stream to @event + * + * Returns: (type gint): hipError_t error code + * + * Since: 1.28 + */ +hipError_t +gst_hip_event_record (GstHipEvent * event, hipStream_t stream) +{ + g_return_val_if_fail (event, hipErrorInvalidValue); + + auto hip_ret = HipSetDevice (event->vendor, event->device_id); + if (!gst_hip_result (hip_ret, event->vendor)) + return hip_ret; + + return HipEventRecord (event->vendor, event->handle, stream); +} + +/** + * gst_hip_event_query: + * @event: a #GstHipEvent + * + * Queries event status via hipEventQuery() + * + * Returns: (type gint): hipError_t error code + * + * Since: 1.28 + */ +hipError_t +gst_hip_event_query (GstHipEvent * event) +{ + g_return_val_if_fail (event, hipErrorInvalidValue); + + auto hip_ret = HipSetDevice (event->vendor, event->device_id); + if (!gst_hip_result (hip_ret, event->vendor)) + return hip_ret; + + return HipEventQuery (event->vendor, event->handle); +} + +/** + * gst_hip_event_synchronize: + * @event: a #GstHipEvent + * + * Waits for recorded operations via hipEventSynchronize() + * + * Returns: (type gint): hipError_t error code + * + * Since: 1.28 + */ +hipError_t +gst_hip_event_synchronize (GstHipEvent * event) +{ + g_return_val_if_fail (event, hipErrorInvalidValue); + + auto hip_ret = HipSetDevice (event->vendor, event->device_id); + if (!gst_hip_result (hip_ret, event->vendor)) + return hip_ret; + + return HipEventSynchronize (event->vendor, event->handle); +} + +/** + * gst_hip_event_ref: + * @event: a #GstHipEvent + * + * Increments the reference 
count on @event + * + * Returns: (transfer full): a pointer to @event + * + * Since: 1.28 + */ +GstHipEvent * +gst_hip_event_ref (GstHipEvent * event) +{ + return (GstHipEvent *) gst_mini_object_ref (event); +} + +/** + * gst_hip_event_unref: + * @event: a #GstHipEvent + * + * Decrements the reference count on @event + * + * Since: 1.28 + */ +void +gst_hip_event_unref (GstHipEvent * event) +{ + return gst_mini_object_unref (event); +} + +/** + * gst_clear_hip_event: (skip) + * @event: a pointer to a #GstHipEvent + * + * Clears a reference to the @event + * + * Since: 1.28 + */ +void +gst_clear_hip_event (GstHipEvent ** event) +{ + gst_clear_mini_object (event); +}
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/gsthipevent.h
Added
@@ -0,0 +1,107 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/gst.h> +#include <gst/hip/gsthip_fwd.h> +#include <gst/hip/gsthip-enums.h> + +G_BEGIN_DECLS + +#define GST_TYPE_HIP_EVENT_POOL (gst_hip_event_pool_get_type ()) +#define GST_HIP_EVENT_POOL(obj) (G_TYPE_CHECK_INSTANCE_CAST ((obj), GST_TYPE_HIP_EVENT_POOL, GstHipEventPool)) +#define GST_HIP_EVENT_POOL_CLASS(klass) (G_TYPE_CHECK_CLASS_CAST ((klass), GST_TYPE_HIP_EVENT_POOL, GstHipEventPoolClass)) +#define GST_IS_HIP_EVENT_POOL(obj) (G_TYPE_CHECK_INSTANCE_TYPE ((obj), GST_TYPE_HIP_EVENT_POOL)) +#define GST_IS_HIP_EVENT_POOL_CLASS(klass) (G_TYPE_CHECK_CLASS_TYPE ((klass), GST_TYPE_HIP_EVENT_POOL)) +#define GST_HIP_EVENT_POOL_GET_CLASS(obj) (G_TYPE_INSTANCE_GET_CLASS ((obj), GST_TYPE_HIP_EVENT_POOL, GstHipEventPoolClass)) +#define GST_HIP_EVENT_POOL_CAST(obj) ((GstHipEventPool*)(obj)) + +/** + * GstHipEventPool: + * + * Opaque GstHipEventPool struct + * + * Since: 1.28 + */ +struct _GstHipEventPool +{ + GstObject parent; + + /*< private >*/ + GstHipEventPoolPrivate *priv; + gpointer _gst_reserved[GST_PADDING]; +}; + +/** + * GstHipEventPoolClass: + * + * Opaque GstHipEventPoolClass struct + * + * 
Since: 1.28 + */ +struct _GstHipEventPoolClass +{ + GstObjectClass parent_class; + + /*< private >*/ + gpointer _gst_reserved[GST_PADDING]; +}; + +GST_HIP_API +GType gst_hip_event_pool_get_type (void); + +GST_HIP_API +GType gst_hip_event_get_type (void); + +GST_HIP_API +GstHipEventPool * gst_hip_event_pool_new (GstHipVendor vendor, + guint device_id); + +GST_HIP_API +gboolean gst_hip_event_pool_acquire (GstHipEventPool * pool, + GstHipEvent ** event); + +GST_HIP_API +GstHipVendor gst_hip_event_get_vendor (GstHipEvent * event); + +GST_HIP_API +guint gst_hip_event_get_device_id (GstHipEvent * event); + +GST_HIP_API +hipError_t gst_hip_event_record (GstHipEvent * event, + hipStream_t stream); + +GST_HIP_API +hipError_t gst_hip_event_query (GstHipEvent * event); + +GST_HIP_API +hipError_t gst_hip_event_synchronize (GstHipEvent * event); + +GST_HIP_API +GstHipEvent * gst_hip_event_ref (GstHipEvent * event); + +GST_HIP_API +void gst_hip_event_unref (GstHipEvent * event); + +GST_HIP_API +void gst_clear_hip_event (GstHipEvent ** event); + +G_END_DECLS +
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/gsthiploader.cpp
Added
@@ -0,0 +1,1319 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gsthip-config.h" + +#include "gsthip.h" +#include "gsthiploader.h" +#include <gmodule.h> +#include <mutex> +#include <hip/nvidia_hip_runtime_api.h> +#include <string.h> +#include "gsthiputils-private.h" + +#ifdef HAVE_GST_GL +#include "gsthip-gl.h" +#include <cudaGL.h> +#endif + +#ifndef GST_DISABLE_GST_DEBUG +#define GST_CAT_DEFAULT ensure_debug_category() +static GstDebugCategory * +ensure_debug_category (void) +{ + static GstDebugCategory *cat = nullptr; + static std::once_flag once; + + std::call_once (once, [&] { + cat = _gst_debug_category_new ("hiploader", 0, "hiploader"); + }); + + return cat; +} +#endif + +/* *INDENT-OFF* */ +struct GstHipFuncTableAmd +{ + gboolean loaded = FALSE; + + hipError_t (*hipInit) (unsigned int flags); + hipError_t (*hipDriverGetVersion) (int *driverVersion); + hipError_t (*hipRuntimeGetVersion) (int *runtimeVersion); + const char *(*hipGetErrorName) (hipError_t hip_error); + const char *(*hipGetErrorString) (hipError_t hipError); + hipError_t (*hipGetDeviceCount) (int *count); + hipError_t (*hipGetDeviceProperties) 
(hipDeviceProp_t * prop, int deviceId); + hipError_t (*hipDeviceGetAttribute) (int *pi, hipDeviceAttribute_t attr, + int deviceId); + hipError_t (*hipSetDevice) (int deviceId); + hipError_t (*hipMalloc) (void **ptr, size_t size); + hipError_t (*hipFree) (void *ptr); + hipError_t (*hipHostMalloc) (void **ptr, size_t size, unsigned int flags); + hipError_t (*hipHostFree) (void *ptr); + hipError_t (*hipStreamCreate) (hipStream_t* stream); + hipError_t (*hipStreamDestroy) (hipStream_t stream); + hipError_t (*hipStreamSynchronize) (hipStream_t stream); + hipError_t (*hipEventCreateWithFlags) (hipEvent_t* event, unsigned flags); + hipError_t (*hipEventRecord) (hipEvent_t event, hipStream_t stream); + hipError_t (*hipEventDestroy) (hipEvent_t event); + hipError_t (*hipEventSynchronize) (hipEvent_t event); + hipError_t (*hipEventQuery) (hipEvent_t event); + hipError_t (*hipModuleLoadData) (hipModule_t * module, const void *image); + hipError_t (*hipModuleUnload) (hipModule_t module); + hipError_t (*hipModuleGetFunction) (hipFunction_t * function, + hipModule_t module, const char *kname); + hipError_t (*hipModuleLaunchKernel) (hipFunction_t f, unsigned int gridDimX, + unsigned int gridDimY, + unsigned int gridDimZ, unsigned int blockDimX, + unsigned int blockDimY, unsigned int blockDimZ, + unsigned int sharedMemBytes, hipStream_t stream, + void **kernelParams, void **extra); + hipError_t (*hipMemcpyParam2DAsync) (const hip_Memcpy2D * pCopy, + hipStream_t stream); + hipError_t (*hipMemsetD8Async) (hipDeviceptr_t dest, unsigned char value, + size_t count, hipStream_t stream); + hipError_t (*hipMemsetD16Async) (hipDeviceptr_t dest, unsigned short value, + size_t count, hipStream_t stream); + hipError_t (*hipMemsetD32Async) (hipDeviceptr_t dst, int value, size_t count, + hipStream_t stream); + hipError_t (*hipTexObjectCreate) (hipTextureObject_t * pTexObject, + const HIP_RESOURCE_DESC * pResDesc, const HIP_TEXTURE_DESC * pTexDesc, + const HIP_RESOURCE_VIEW_DESC * pResViewDesc); 
+ hipError_t (*hipTexObjectDestroy) (hipTextureObject_t texObject); + hipError_t (*hipGraphicsMapResources) (int count, + hipGraphicsResource_t* resources, hipStream_t stream); + hipError_t (*hipGraphicsResourceGetMappedPointer) (void** devPtr, + size_t* size, hipGraphicsResource_t resource); + hipError_t (*hipGraphicsUnmapResources) (int count, + hipGraphicsResource_t* resources, hipStream_t stream); + hipError_t (*hipGraphicsUnregisterResource) (hipGraphicsResource_t resource); +#ifdef HAVE_GST_GL + hipError_t (*hipGLGetDevices) (unsigned int* pHipDeviceCount, + int* pHipDevices, unsigned int hipDeviceCount, + hipGLDeviceList deviceList); + hipError_t (*hipGraphicsGLRegisterBuffer) (hipGraphicsResource** resource, + unsigned int buffer, unsigned int flags); +#endif +}; + +struct GstHipFuncTableCuda +{ + gboolean loaded = FALSE; + + CUresult (CUDAAPI *cuInit) (unsigned int flags); + CUresult (CUDAAPI *cuDriverGetVersion) (int *driverVersion); + CUresult (CUDAAPI *cuDeviceGetAttribute) (int *pi, + CUdevice_attribute attrib, CUdevice dev); + CUresult (CUDAAPI *cuModuleLoadData) (CUmodule * module, const void *image); + CUresult (CUDAAPI *cuModuleUnload) (CUmodule module); + CUresult (CUDAAPI *cuModuleGetFunction) (CUfunction * function, + CUmodule module, const char *kname); + CUresult (CUDAAPI *cuLaunchKernel) (CUfunction f, unsigned int gridDimX, + unsigned int gridDimY, + unsigned int gridDimZ, unsigned int blockDimX, + unsigned int blockDimY, unsigned int blockDimZ, + unsigned int sharedMemBytes, CUstream stream, + void **kernelParams, void **extra); + CUresult (CUDAAPI *cuMemcpy2DAsync) (const CUDA_MEMCPY2D * pCopy, + CUstream stream); + CUresult (CUDAAPI *cuMemsetD8Async) (CUdeviceptr dstDevice, + unsigned char uc, size_t N, CUstream hStream); + CUresult (CUDAAPI *cuMemsetD16Async) (CUdeviceptr dstDevice, + unsigned short us, size_t N, CUstream hStream); + CUresult (CUDAAPI *cuMemsetD32Async) (CUdeviceptr dstDevice, unsigned int ui, + size_t N, CUstream 
hStream); + CUresult (CUDAAPI *cuTexObjectCreate) (CUtexObject * pTexObject, + const CUDA_RESOURCE_DESC * pResDesc, const CUDA_TEXTURE_DESC * pTexDesc, + const CUDA_RESOURCE_VIEW_DESC * pResViewDesc); + CUresult (CUDAAPI *cuTexObjectDestroy) (CUtexObject texObject); +}; + +struct GstHipFuncTableCudaRt +{ + gboolean loaded = FALSE; + + cudaError_t (CUDAAPI *cudaRuntimeGetVersion) (int *runtimeVersion); + const char * (CUDAAPI *cudaGetErrorName) (cudaError_t error); + const char * (CUDAAPI *cudaGetErrorString) (cudaError_t error); + cudaError_t (CUDAAPI *cudaGetDeviceCount) (int *count); + cudaError_t (CUDAAPI *cudaGetDeviceProperties) (struct cudaDeviceProp * prop, + int device); + cudaError_t (CUDAAPI *cudaDeviceGetAttribute) (int *value, enum cudaDeviceAttr attr, + int device); + cudaError_t (CUDAAPI *cudaSetDevice) (int device); + cudaError_t (CUDAAPI *cudaMalloc) (void **ptr, size_t size); + cudaError_t (CUDAAPI *cudaFree) (void *ptr); + cudaError_t (CUDAAPI *cudaMallocHost) (void **ptr, size_t size, unsigned int flags); + cudaError_t (CUDAAPI *cudaFreeHost) (void *ptr); + cudaError_t (CUDAAPI *cudaStreamCreate) (cudaStream_t *pStream); + cudaError_t (CUDAAPI *cudaStreamDestroy) (cudaStream_t stream); + cudaError_t (CUDAAPI *cudaStreamSynchronize) (cudaStream_t stream); + cudaError_t (CUDAAPI *cudaEventCreateWithFlags) (cudaEvent_t *event, + unsigned int flags); + cudaError_t (CUDAAPI *cudaEventRecord) (cudaEvent_t event, cudaStream_t stream); + cudaError_t (CUDAAPI *cudaEventDestroy) (cudaEvent_t event); + cudaError_t (CUDAAPI *cudaEventSynchronize)(cudaEvent_t event); + cudaError_t (CUDAAPI *cudaEventQuery) (cudaEvent_t event); + cudaError_t (CUDAAPI *cudaGraphicsMapResources) (int count, + cudaGraphicsResource_t *resources, cudaStream_t stream); + cudaError_t (CUDAAPI *cudaGraphicsResourceGetMappedPointer) (void **devPtr, + size_t *size, cudaGraphicsResource_t resource); + cudaError_t (CUDAAPI *cudaGraphicsUnmapResources) (int count, + cudaGraphicsResource_t 
*resources, cudaStream_t stream);
+  cudaError_t (CUDAAPI *cudaGraphicsUnregisterResource) (cudaGraphicsResource_t resource);
+#ifdef HAVE_GST_GL
+  cudaError_t (CUDAAPI *cudaGLGetDevices) (unsigned int *pCudaDeviceCount,
+      int *pCudaDevices, unsigned int cudaDeviceCount,
+      enum cudaGLDeviceList deviceList);
+  cudaError_t (CUDAAPI *cudaGraphicsGLRegisterBuffer) (struct cudaGraphicsResource **resource,
+      unsigned int buffer, unsigned int flags);
+#endif
+};
+/* *INDENT-ON* */
+
+static GstHipFuncTableAmd amd_ftable = { };
+static GstHipFuncTableCuda cuda_ftable = { };
+static GstHipFuncTableCudaRt cudart_ftable = { };
+
+#define LOAD_SYMBOL(name) G_STMT_START { \
+  if (!g_module_symbol (module, G_STRINGIFY (name), (gpointer *) &table->name)) { \
+    GST_ERROR ("Failed to load '%s', %s", G_STRINGIFY (name), g_module_error()); \
+    g_module_close (module); \
+    return; \
+  } \
+} G_STMT_END;
+
+static void
+load_amd_func_table (void)
+{
+  GModule *module = nullptr;
+
+#ifndef G_OS_WIN32
+  module = g_module_open ("libamdhip64.so.7", G_MODULE_BIND_LAZY);
+  if (module) {
+    GST_INFO ("Loaded libamdhip64.so.7");
+  } else {
+    module = g_module_open ("libamdhip64.so.6", G_MODULE_BIND_LAZY);
+    if (module)
+      GST_INFO ("Loaded libamdhip64.so.6");
+  }
+
+  if (!module)
+    module = load_hiplib_from_root ("/opt/rocm", "lib", "libamdhip64.so.", "");
+#else
+  /* Prefer hip dll in SDK */
+  auto hip_root = g_getenv ("HIP_PATH");
+  if (hip_root) {
+    module = load_hiplib_from_root (hip_root, "bin", "amdhip64_", ".dll");
+  }
+
+  /* Try dll in System32, unless the SDK copy was already loaded */
+  if (!module) {
+    module = g_module_open ("amdhip64_7.dll", G_MODULE_BIND_LAZY);
+    if (module) {
+      GST_INFO ("Loaded amdhip64_7.dll");
+    } else {
+      module = g_module_open ("amdhip64_6.dll", G_MODULE_BIND_LAZY);
+      if (module)
+        GST_INFO ("Loaded amdhip64_6.dll");
+    }
+  }
+#endif
+
+  if (!module) {
+    GST_INFO ("Couldn't open HIP library");
+    return;
+  }
+
+  auto table = &amd_ftable;
+  LOAD_SYMBOL (hipInit);
+  LOAD_SYMBOL (hipDriverGetVersion);
+  LOAD_SYMBOL (hipRuntimeGetVersion);
+  LOAD_SYMBOL (hipGetErrorName);
+  LOAD_SYMBOL (hipGetErrorString);
+  LOAD_SYMBOL (hipGetDeviceCount);
+  LOAD_SYMBOL (hipGetDeviceProperties);
+  LOAD_SYMBOL (hipDeviceGetAttribute);
+  LOAD_SYMBOL (hipSetDevice);
+  LOAD_SYMBOL (hipMalloc);
+  LOAD_SYMBOL (hipFree);
+  LOAD_SYMBOL (hipHostMalloc);
+  LOAD_SYMBOL (hipHostFree);
+  LOAD_SYMBOL (hipStreamCreate);
+  LOAD_SYMBOL (hipStreamDestroy);
+  LOAD_SYMBOL (hipStreamSynchronize);
+  LOAD_SYMBOL (hipEventCreateWithFlags);
+  LOAD_SYMBOL (hipEventRecord);
+  LOAD_SYMBOL (hipEventDestroy);
+  LOAD_SYMBOL (hipEventSynchronize);
+  LOAD_SYMBOL (hipEventQuery);
+  LOAD_SYMBOL (hipModuleLoadData);
+  LOAD_SYMBOL (hipModuleUnload);
+  LOAD_SYMBOL (hipModuleGetFunction);
+  LOAD_SYMBOL (hipModuleLaunchKernel);
+  LOAD_SYMBOL (hipMemcpyParam2DAsync);
+  LOAD_SYMBOL (hipMemsetD8Async);
+  LOAD_SYMBOL (hipMemsetD16Async);
+  LOAD_SYMBOL (hipMemsetD32Async);
+  LOAD_SYMBOL (hipTexObjectCreate);
+  LOAD_SYMBOL (hipTexObjectDestroy);
+  LOAD_SYMBOL (hipGraphicsMapResources);
+  LOAD_SYMBOL (hipGraphicsResourceGetMappedPointer);
+  LOAD_SYMBOL (hipGraphicsUnmapResources);
+  LOAD_SYMBOL (hipGraphicsUnregisterResource);
+#ifdef HAVE_GST_GL
+  LOAD_SYMBOL (hipGLGetDevices);
+  LOAD_SYMBOL (hipGraphicsGLRegisterBuffer);
+#endif
+
+  table->loaded = TRUE;
+}
+
+static void
+load_cuda_func_table (void)
+{
+  GModule *module = nullptr;
+#ifndef G_OS_WIN32
+  module = g_module_open ("libcuda.so", G_MODULE_BIND_LAZY);
+#else
+  module = g_module_open ("nvcuda.dll", G_MODULE_BIND_LAZY);
+#endif
+
+  if (!module) {
+    GST_INFO ("Couldn't open CUDA library");
+    return;
+  }
+
+  auto table = &cuda_ftable;
+  LOAD_SYMBOL (cuInit);
+  LOAD_SYMBOL (cuDriverGetVersion);
+  LOAD_SYMBOL (cuDeviceGetAttribute);
+  LOAD_SYMBOL (cuModuleLoadData);
+  LOAD_SYMBOL (cuModuleUnload);
+  LOAD_SYMBOL (cuModuleGetFunction);
+  LOAD_SYMBOL (cuLaunchKernel);
+  LOAD_SYMBOL (cuMemcpy2DAsync);
+  LOAD_SYMBOL (cuMemsetD8Async);
+  LOAD_SYMBOL (cuMemsetD16Async);
+  LOAD_SYMBOL (cuMemsetD32Async);
+  LOAD_SYMBOL
(cuTexObjectCreate); + LOAD_SYMBOL (cuTexObjectDestroy); + + table->loaded = TRUE; +} + +static void +load_cudart_func_table (guint major_ver, guint minor_ver) +{ + GModule *module = nullptr; + auto module_name = g_getenv ("GST_HIP_CUDART_LIBNAME"); + if (module_name) + module = g_module_open (module_name, G_MODULE_BIND_LAZY); + + if (!module) { +#ifndef G_OS_WIN32 + module = g_module_open ("libcudart.so", G_MODULE_BIND_LAZY); +#else + auto lib_name = g_strdup_printf ("cudart64_%d.dll", major_ver); + module = g_module_open (lib_name, G_MODULE_BIND_LAZY); + g_free (lib_name); + + if (!module) { + lib_name = g_strdup_printf ("cudart64_%d%d.dll", major_ver, minor_ver); + module = g_module_open (lib_name, G_MODULE_BIND_LAZY); + g_free (lib_name); + } + + if (!module) { + auto cuda_root = g_getenv ("CUDA_PATH"); + if (cuda_root) { + auto path = g_build_path (G_DIR_SEPARATOR_S, cuda_root, "bin", nullptr); + auto dir = g_dir_open (path, 0, nullptr); + if (dir) { + const gchar *name; + while ((name = g_dir_read_name (dir))) { + if (g_str_has_prefix (name, "cudart64_") && + g_str_has_suffix (name, ".dll")) { + auto lib_path = g_build_filename (path, name, nullptr); + module = g_module_open (lib_path, G_MODULE_BIND_LAZY); + g_free (lib_path); + break; + } + } + + g_dir_close (dir); + } + g_free (path); + } + } +#endif + } + + if (!module) { + GST_INFO ("Couldn't open CUDA runtime library"); + return; + } + + auto table = &cudart_ftable; + LOAD_SYMBOL (cudaRuntimeGetVersion); + LOAD_SYMBOL (cudaGetErrorName); + LOAD_SYMBOL (cudaGetErrorString); + LOAD_SYMBOL (cudaGetDeviceCount); + LOAD_SYMBOL (cudaGetDeviceProperties); + LOAD_SYMBOL (cudaDeviceGetAttribute); + LOAD_SYMBOL (cudaSetDevice); + LOAD_SYMBOL (cudaMalloc); + LOAD_SYMBOL (cudaFree); + LOAD_SYMBOL (cudaMallocHost); + LOAD_SYMBOL (cudaFreeHost); + LOAD_SYMBOL (cudaStreamCreate); + LOAD_SYMBOL (cudaStreamDestroy); + LOAD_SYMBOL (cudaStreamSynchronize); + LOAD_SYMBOL (cudaEventCreateWithFlags); + LOAD_SYMBOL 
(cudaEventRecord);
+  LOAD_SYMBOL (cudaEventDestroy);
+  LOAD_SYMBOL (cudaEventSynchronize);
+  LOAD_SYMBOL (cudaEventQuery);
+  LOAD_SYMBOL (cudaGraphicsMapResources);
+  LOAD_SYMBOL (cudaGraphicsResourceGetMappedPointer);
+  LOAD_SYMBOL (cudaGraphicsUnmapResources);
+  LOAD_SYMBOL (cudaGraphicsUnregisterResource);
+#ifdef HAVE_GST_GL
+  LOAD_SYMBOL (cudaGLGetDevices);
+  LOAD_SYMBOL (cudaGraphicsGLRegisterBuffer);
+#endif
+
+  table->loaded = TRUE;
+}
+
+/* *INDENT-OFF* */
+static gboolean
+gst_hip_load_library_amd (void)
+{
+  static std::once_flag once;
+  std::call_once (once, [] () {
+    load_amd_func_table ();
+    if (amd_ftable.loaded) {
+      auto ret = amd_ftable.hipInit (0);
+      if (ret != hipSuccess)
+        amd_ftable.loaded = FALSE;
+    }
+  });
+
+  return amd_ftable.loaded;
+}
+
+static gboolean
+gst_hip_load_library_nvidia (void)
+{
+  static std::once_flag once;
+  std::call_once (once, [] () {
+    load_cuda_func_table ();
+    if (cuda_ftable.loaded) {
+      auto ret = cuda_ftable.cuInit (0);
+      if (ret != CUDA_SUCCESS) {
+        cuda_ftable.loaded = FALSE;
+        return;
+      }
+
+      int cuda_ver = 0;
+      ret = cuda_ftable.cuDriverGetVersion (&cuda_ver);
+      if (ret != CUDA_SUCCESS)
+        return;
+
+      int major_ver = cuda_ver / 1000;
+      int minor_ver = (cuda_ver % 1000) / 10;
+      load_cudart_func_table (major_ver, minor_ver);
+    }
+  });
+
+  if (!cuda_ftable.loaded || !cudart_ftable.loaded)
+    return FALSE;
+
+  return TRUE;
+}
+/* *INDENT-ON* */
+
+/**
+ * gst_hip_load_library:
+ * @vendor: a #GstHipVendor
+ *
+ * Opens @vendor specific runtime libraries
+ *
+ * Returns: %TRUE if succeeded
+ *
+ * Since: 1.28
+ */
+gboolean
+gst_hip_load_library (GstHipVendor vendor)
+{
+  switch (vendor) {
+    case GST_HIP_VENDOR_AMD:
+      return gst_hip_load_library_amd ();
+    case GST_HIP_VENDOR_NVIDIA:
+      return gst_hip_load_library_nvidia ();
+    case GST_HIP_VENDOR_UNKNOWN:
+      if (gst_hip_load_library_amd () || gst_hip_load_library_nvidia ())
+        return TRUE;
+      break;
+  }
+
+  return FALSE;
+}
+
+#define CHECK_VENDOR(v) \
+  g_return_val_if_fail
(vendor != GST_HIP_VENDOR_UNKNOWN, \ + hipErrorNotInitialized); \ + g_return_val_if_fail (gst_hip_load_library (vendor), hipErrorNotInitialized); + + +hipError_t +HipInit (GstHipVendor vendor, unsigned int flags) +{ + CHECK_VENDOR (vendor); + + if (vendor == GST_HIP_VENDOR_AMD) + return amd_ftable.hipInit (flags); + + auto cuda_ret = cuda_ftable.cuInit (flags); + return hipCUResultTohipError (cuda_ret); +} + +hipError_t +HipDriverGetVersion (GstHipVendor vendor, int *driverVersion) +{ + CHECK_VENDOR (vendor); + + if (vendor == GST_HIP_VENDOR_AMD) + return amd_ftable.hipDriverGetVersion (driverVersion); + + auto cuda_ret = cuda_ftable.cuDriverGetVersion (driverVersion); + return hipCUResultTohipError (cuda_ret); +} + +hipError_t +HipRuntimeGetVersion (GstHipVendor vendor, int *runtimeVersion) +{ + CHECK_VENDOR (vendor); + + if (vendor == GST_HIP_VENDOR_AMD) + return amd_ftable.hipRuntimeGetVersion (runtimeVersion); + + auto cuda_ret = cudart_ftable.cudaRuntimeGetVersion (runtimeVersion); + return hipCUDAErrorTohipError (cuda_ret); +} + +const char * +HipGetErrorName (GstHipVendor vendor, hipError_t hip_error) +{ + g_return_val_if_fail (vendor != GST_HIP_VENDOR_UNKNOWN, nullptr); + g_return_val_if_fail (gst_hip_load_library (vendor), nullptr); + + if (vendor == GST_HIP_VENDOR_AMD) + return amd_ftable.hipGetErrorName (hip_error); + + auto cuda_ret = hipErrorToCudaError (hip_error); + return cudart_ftable.cudaGetErrorName (cuda_ret); +} + +const char * +HipGetErrorString (GstHipVendor vendor, hipError_t hipError) +{ + g_return_val_if_fail (vendor != GST_HIP_VENDOR_UNKNOWN, nullptr); + g_return_val_if_fail (gst_hip_load_library (vendor), nullptr); + + if (vendor == GST_HIP_VENDOR_AMD) + return amd_ftable.hipGetErrorString (hipError); + + auto cuda_ret = hipErrorToCudaError (hipError); + return cudart_ftable.cudaGetErrorString (cuda_ret); +} + +hipError_t +HipGetDeviceCount (GstHipVendor vendor, int *count) +{ + CHECK_VENDOR (vendor); + + if (vendor == 
GST_HIP_VENDOR_AMD)
+    return amd_ftable.hipGetDeviceCount (count);
+
+  auto cuda_ret = cudart_ftable.cudaGetDeviceCount (count);
+  return hipCUDAErrorTohipError (cuda_ret);
+}
+
+hipError_t
+HipGetDeviceProperties (GstHipVendor vendor, hipDeviceProp_t * prop,
+    int deviceId)
+{
+  CHECK_VENDOR (vendor);
+
+  if (vendor == GST_HIP_VENDOR_AMD)
+    return amd_ftable.hipGetDeviceProperties (prop, deviceId);
+
+  if (!prop)
+    return hipErrorInvalidValue;
+
+  struct cudaDeviceProp cdprop;
+  auto cuda_ret = cudart_ftable.cudaGetDeviceProperties (&cdprop, deviceId);
+  if (cuda_ret != cudaSuccess)
+    return hipCUDAErrorTohipError (cuda_ret);
+
+  strncpy (prop->name, cdprop.name, 256);
+  /* uuid/luid are raw byte arrays, not NUL-terminated strings */
+  memcpy (prop->uuid.bytes, cdprop.uuid.bytes, 16);
+  memcpy (prop->luid, cdprop.luid, 8);
+  prop->luidDeviceNodeMask = cdprop.luidDeviceNodeMask;
+  prop->totalGlobalMem = cdprop.totalGlobalMem;
+  prop->sharedMemPerBlock = cdprop.sharedMemPerBlock;
+  prop->regsPerBlock = cdprop.regsPerBlock;
+  prop->memPitch = cdprop.memPitch;
+  prop->maxThreadsPerBlock = cdprop.maxThreadsPerBlock;
+  prop->maxThreadsDim[0] = cdprop.maxThreadsDim[0];
+  prop->maxThreadsDim[1] = cdprop.maxThreadsDim[1];
+  prop->maxThreadsDim[2] = cdprop.maxThreadsDim[2];
+  prop->maxGridSize[0] = cdprop.maxGridSize[0];
+  prop->maxGridSize[1] = cdprop.maxGridSize[1];
+  prop->maxGridSize[2] = cdprop.maxGridSize[2];
+  prop->clockRate = cdprop.clockRate;
+  prop->totalConstMem = cdprop.totalConstMem;
+  prop->major = cdprop.major;
+  prop->minor = cdprop.minor;
+  prop->textureAlignment = cdprop.textureAlignment;
+  prop->texturePitchAlignment = cdprop.texturePitchAlignment;
+  prop->deviceOverlap = cdprop.deviceOverlap;
+  prop->multiProcessorCount = cdprop.multiProcessorCount;
+  prop->kernelExecTimeoutEnabled = cdprop.kernelExecTimeoutEnabled;
+  prop->integrated = cdprop.integrated;
+  prop->canMapHostMemory = cdprop.canMapHostMemory;
+  prop->computeMode = cdprop.computeMode;
+  prop->maxTexture1D = cdprop.maxTexture1D;
+  prop->maxTexture1DMipmap = cdprop.maxTexture1DMipmap;
+  prop->maxTexture1DLinear = cdprop.maxTexture1DLinear;
+  prop->maxTexture2D[0] = cdprop.maxTexture2D[0];
+  prop->maxTexture2D[1] = cdprop.maxTexture2D[1];
+  prop->maxTexture2DMipmap[0] = cdprop.maxTexture2DMipmap[0];
+  prop->maxTexture2DMipmap[1] = cdprop.maxTexture2DMipmap[1];
+  prop->maxTexture2DLinear[0] = cdprop.maxTexture2DLinear[0];
+  prop->maxTexture2DLinear[1] = cdprop.maxTexture2DLinear[1];
+  prop->maxTexture2DLinear[2] = cdprop.maxTexture2DLinear[2];
+  prop->maxTexture2DGather[0] = cdprop.maxTexture2DGather[0];
+  prop->maxTexture2DGather[1] = cdprop.maxTexture2DGather[1];
+  prop->maxTexture3D[0] = cdprop.maxTexture3D[0];
+  prop->maxTexture3D[1] = cdprop.maxTexture3D[1];
+  prop->maxTexture3D[2] = cdprop.maxTexture3D[2];
+  prop->maxTexture3DAlt[0] = cdprop.maxTexture3DAlt[0];
+  prop->maxTexture3DAlt[1] = cdprop.maxTexture3DAlt[1];
+  prop->maxTexture3DAlt[2] = cdprop.maxTexture3DAlt[2];
+  prop->maxTextureCubemap = cdprop.maxTextureCubemap;
+  prop->maxTexture1DLayered[0] = cdprop.maxTexture1DLayered[0];
+  prop->maxTexture1DLayered[1] = cdprop.maxTexture1DLayered[1];
+  prop->maxTexture2DLayered[0] = cdprop.maxTexture2DLayered[0];
+  prop->maxTexture2DLayered[1] = cdprop.maxTexture2DLayered[1];
+  prop->maxTexture2DLayered[2] = cdprop.maxTexture2DLayered[2];
+  prop->maxTextureCubemapLayered[0] = cdprop.maxTextureCubemapLayered[0];
+  prop->maxTextureCubemapLayered[1] = cdprop.maxTextureCubemapLayered[1];
+  prop->maxSurface1D = cdprop.maxSurface1D;
+  prop->maxSurface2D[0] = cdprop.maxSurface2D[0];
+  prop->maxSurface2D[1] = cdprop.maxSurface2D[1];
+  prop->maxSurface3D[0] = cdprop.maxSurface3D[0];
+  prop->maxSurface3D[1] = cdprop.maxSurface3D[1];
+  prop->maxSurface3D[2] = cdprop.maxSurface3D[2];
+  prop->maxSurface1DLayered[0] = cdprop.maxSurface1DLayered[0];
+  prop->maxSurface1DLayered[1] = cdprop.maxSurface1DLayered[1];
+  prop->maxSurface2DLayered[0] = cdprop.maxSurface2DLayered[0];
+  prop->maxSurface2DLayered[1] = cdprop.maxSurface2DLayered[1];
+  prop->maxSurface2DLayered[2] = cdprop.maxSurface2DLayered[2];
+  prop->maxSurfaceCubemap = cdprop.maxSurfaceCubemap;
+  prop->maxSurfaceCubemapLayered[0] = cdprop.maxSurfaceCubemapLayered[0];
+  prop->maxSurfaceCubemapLayered[1] = cdprop.maxSurfaceCubemapLayered[1];
+  prop->surfaceAlignment = cdprop.surfaceAlignment;
+  prop->concurrentKernels = cdprop.concurrentKernels;
+  prop->ECCEnabled = cdprop.ECCEnabled;
+  prop->pciBusID = cdprop.pciBusID;
+  prop->pciDeviceID = cdprop.pciDeviceID;
+  prop->pciDomainID = cdprop.pciDomainID;
+  prop->tccDriver = cdprop.tccDriver;
+  prop->asyncEngineCount = cdprop.asyncEngineCount;
+  prop->unifiedAddressing = cdprop.unifiedAddressing;
+  prop->memoryClockRate = cdprop.memoryClockRate;
+  prop->memoryBusWidth = cdprop.memoryBusWidth;
+  prop->l2CacheSize = cdprop.l2CacheSize;
+  prop->maxThreadsPerMultiProcessor = cdprop.maxThreadsPerMultiProcessor;
+  prop->streamPrioritiesSupported = cdprop.streamPrioritiesSupported;
+  prop->globalL1CacheSupported = cdprop.globalL1CacheSupported;
+  prop->localL1CacheSupported = cdprop.localL1CacheSupported;
+  prop->sharedMemPerMultiprocessor = cdprop.sharedMemPerMultiprocessor;
+  prop->regsPerMultiprocessor = cdprop.regsPerMultiprocessor;
+  prop->managedMemory = cdprop.managedMemory;
+  prop->isMultiGpuBoard = cdprop.isMultiGpuBoard;
+  prop->multiGpuBoardGroupID = cdprop.multiGpuBoardGroupID;
+  prop->hostNativeAtomicSupported = cdprop.hostNativeAtomicSupported;
+  prop->singleToDoublePrecisionPerfRatio =
+      cdprop.singleToDoublePrecisionPerfRatio;
+  prop->pageableMemoryAccess = cdprop.pageableMemoryAccess;
+  prop->concurrentManagedAccess = cdprop.concurrentManagedAccess;
+  prop->computePreemptionSupported = cdprop.computePreemptionSupported;
+  prop->canUseHostPointerForRegisteredMem =
+      cdprop.canUseHostPointerForRegisteredMem;
+  prop->cooperativeLaunch = cdprop.cooperativeLaunch;
+  prop->cooperativeMultiDeviceLaunch = cdprop.cooperativeMultiDeviceLaunch;
+  prop->sharedMemPerBlockOptin = cdprop.sharedMemPerBlockOptin;
+  prop->pageableMemoryAccessUsesHostPageTables =
+      cdprop.pageableMemoryAccessUsesHostPageTables;
+
prop->directManagedMemAccessFromHost = cdprop.directManagedMemAccessFromHost; + prop->accessPolicyMaxWindowSize = cdprop.accessPolicyMaxWindowSize; + prop->maxBlocksPerMultiProcessor = cdprop.maxBlocksPerMultiProcessor; + prop->persistingL2CacheMaxSize = cdprop.persistingL2CacheMaxSize; + prop->reservedSharedMemPerBlock = cdprop.reservedSharedMemPerBlock; + prop->warpSize = cdprop.warpSize; + prop->clusterLaunch = cdprop.clusterLaunch; + prop->deferredMappingHipArraySupported = + cdprop.deferredMappingCudaArraySupported; + prop->gpuDirectRDMAFlushWritesOptions = + cdprop.gpuDirectRDMAFlushWritesOptions; + prop->gpuDirectRDMASupported = cdprop.gpuDirectRDMASupported; + prop->gpuDirectRDMAWritesOrdering = cdprop.gpuDirectRDMAWritesOrdering; + prop->hostRegisterReadOnlySupported = cdprop.hostRegisterReadOnlySupported; + prop->hostRegisterSupported = cdprop.hostRegisterSupported; + prop->ipcEventSupported = cdprop.ipcEventSupported; + prop->memoryPoolSupportedHandleTypes = cdprop.memoryPoolSupportedHandleTypes; + prop->memoryPoolsSupported = cdprop.memoryPoolsSupported; + prop->sparseHipArraySupported = cdprop.sparseCudaArraySupported; + prop->timelineSemaphoreInteropSupported = + cdprop.timelineSemaphoreInteropSupported; + prop->unifiedFunctionPointers = cdprop.unifiedFunctionPointers; + + return hipSuccess; +} + +hipError_t +HipDeviceGetAttribute (GstHipVendor vendor, int *pi, hipDeviceAttribute_t attr, + int deviceId) +{ + CHECK_VENDOR (vendor); + + if (vendor == GST_HIP_VENDOR_AMD) + return amd_ftable.hipDeviceGetAttribute (pi, attr, deviceId); + + enum cudaDeviceAttr cdattr; + switch (attr) { + case hipDeviceAttributeMaxThreadsPerBlock: + cdattr = cudaDevAttrMaxThreadsPerBlock; + break; + case hipDeviceAttributeMaxBlockDimX: + cdattr = cudaDevAttrMaxBlockDimX; + break; + case hipDeviceAttributeMaxBlockDimY: + cdattr = cudaDevAttrMaxBlockDimY; + break; + case hipDeviceAttributeMaxBlockDimZ: + cdattr = cudaDevAttrMaxBlockDimZ; + break; + case 
hipDeviceAttributeMaxGridDimX: + cdattr = cudaDevAttrMaxGridDimX; + break; + case hipDeviceAttributeMaxGridDimY: + cdattr = cudaDevAttrMaxGridDimY; + break; + case hipDeviceAttributeMaxGridDimZ: + cdattr = cudaDevAttrMaxGridDimZ; + break; + case hipDeviceAttributeMaxSharedMemoryPerBlock: + cdattr = cudaDevAttrMaxSharedMemoryPerBlock; + break; + case hipDeviceAttributeTotalConstantMemory: + cdattr = cudaDevAttrTotalConstantMemory; + break; + case hipDeviceAttributeWarpSize: + cdattr = cudaDevAttrWarpSize; + break; + case hipDeviceAttributeMaxRegistersPerBlock: + cdattr = cudaDevAttrMaxRegistersPerBlock; + break; + case hipDeviceAttributeClockRate: + cdattr = cudaDevAttrClockRate; + break; + case hipDeviceAttributeMemoryClockRate: + cdattr = cudaDevAttrMemoryClockRate; + break; + case hipDeviceAttributeMemoryBusWidth: + cdattr = cudaDevAttrGlobalMemoryBusWidth; + break; + case hipDeviceAttributeMultiprocessorCount: + cdattr = cudaDevAttrMultiProcessorCount; + break; + case hipDeviceAttributeComputeMode: + cdattr = cudaDevAttrComputeMode; + break; + case hipDeviceAttributeL2CacheSize: + cdattr = cudaDevAttrL2CacheSize; + break; + case hipDeviceAttributeMaxThreadsPerMultiProcessor: + cdattr = cudaDevAttrMaxThreadsPerMultiProcessor; + break; + case hipDeviceAttributeComputeCapabilityMajor: + cdattr = cudaDevAttrComputeCapabilityMajor; + break; + case hipDeviceAttributeComputeCapabilityMinor: + cdattr = cudaDevAttrComputeCapabilityMinor; + break; + case hipDeviceAttributeConcurrentKernels: + cdattr = cudaDevAttrConcurrentKernels; + break; + case hipDeviceAttributePciBusId: + cdattr = cudaDevAttrPciBusId; + break; + case hipDeviceAttributePciDeviceId: + cdattr = cudaDevAttrPciDeviceId; + break; + case hipDeviceAttributeMaxSharedMemoryPerMultiprocessor: + cdattr = cudaDevAttrMaxSharedMemoryPerMultiprocessor; + break; + case hipDeviceAttributeIsMultiGpuBoard: + cdattr = cudaDevAttrIsMultiGpuBoard; + break; + case hipDeviceAttributeIntegrated: + cdattr = 
cudaDevAttrIntegrated; + break; + case hipDeviceAttributeMaxTexture1DWidth: + cdattr = cudaDevAttrMaxTexture1DWidth; + break; + case hipDeviceAttributeMaxTexture2DWidth: + cdattr = cudaDevAttrMaxTexture2DWidth; + break; + case hipDeviceAttributeMaxTexture2DHeight: + cdattr = cudaDevAttrMaxTexture2DHeight; + break; + case hipDeviceAttributeMaxTexture3DWidth: + cdattr = cudaDevAttrMaxTexture3DWidth; + break; + case hipDeviceAttributeMaxTexture3DHeight: + cdattr = cudaDevAttrMaxTexture3DHeight; + break; + case hipDeviceAttributeMaxTexture3DDepth: + cdattr = cudaDevAttrMaxTexture3DDepth; + break; + case hipDeviceAttributeMaxPitch: + cdattr = cudaDevAttrMaxPitch; + break; + case hipDeviceAttributeTextureAlignment: + cdattr = cudaDevAttrTextureAlignment; + break; + case hipDeviceAttributeTexturePitchAlignment: + cdattr = cudaDevAttrTexturePitchAlignment; + break; + case hipDeviceAttributeKernelExecTimeout: + cdattr = cudaDevAttrKernelExecTimeout; + break; + case hipDeviceAttributeCanMapHostMemory: + cdattr = cudaDevAttrCanMapHostMemory; + break; + case hipDeviceAttributeEccEnabled: + cdattr = cudaDevAttrEccEnabled; + break; + case hipDeviceAttributeCooperativeLaunch: + cdattr = cudaDevAttrCooperativeLaunch; + break; + case hipDeviceAttributeCooperativeMultiDeviceLaunch: + cdattr = cudaDevAttrCooperativeMultiDeviceLaunch; + break; + case hipDeviceAttributeHostRegisterSupported: + cdattr = cudaDevAttrHostRegisterSupported; + break; + case hipDeviceAttributeConcurrentManagedAccess: + cdattr = cudaDevAttrConcurrentManagedAccess; + break; + case hipDeviceAttributeManagedMemory: + cdattr = cudaDevAttrManagedMemory; + break; + case hipDeviceAttributePageableMemoryAccessUsesHostPageTables: + cdattr = cudaDevAttrPageableMemoryAccessUsesHostPageTables; + break; + case hipDeviceAttributePageableMemoryAccess: + cdattr = cudaDevAttrPageableMemoryAccess; + break; + case hipDeviceAttributeDirectManagedMemAccessFromHost: + cdattr = cudaDevAttrDirectManagedMemAccessFromHost; + break; + 
case hipDeviceAttributeGlobalL1CacheSupported: + cdattr = cudaDevAttrGlobalL1CacheSupported; + break; + case hipDeviceAttributeMaxBlocksPerMultiProcessor: + cdattr = cudaDevAttrMaxBlocksPerMultiprocessor; + break; + case hipDeviceAttributeMultiGpuBoardGroupID: + cdattr = cudaDevAttrMultiGpuBoardGroupID; + break; + case hipDeviceAttributeReservedSharedMemPerBlock: + cdattr = cudaDevAttrReservedSharedMemoryPerBlock; + break; + case hipDeviceAttributeSingleToDoublePrecisionPerfRatio: + cdattr = cudaDevAttrSingleToDoublePrecisionPerfRatio; + break; + case hipDeviceAttributeStreamPrioritiesSupported: + cdattr = cudaDevAttrStreamPrioritiesSupported; + break; + case hipDeviceAttributeSurfaceAlignment: + cdattr = cudaDevAttrSurfaceAlignment; + break; + case hipDeviceAttributeTccDriver: + cdattr = cudaDevAttrTccDriver; + break; + case hipDeviceAttributeUnifiedAddressing: + cdattr = cudaDevAttrUnifiedAddressing; + break; + case hipDeviceAttributeMemoryPoolsSupported: + cdattr = cudaDevAttrMemoryPoolsSupported; + break; + case hipDeviceAttributeVirtualMemoryManagementSupported: + { + auto cuda_ret = cuda_ftable.cuDeviceGetAttribute (pi, + CU_DEVICE_ATTRIBUTE_VIRTUAL_MEMORY_MANAGEMENT_SUPPORTED, + deviceId); + return hipCUResultTohipError (cuda_ret); + } + case hipDeviceAttributeAccessPolicyMaxWindowSize: + cdattr = cudaDevAttrMaxAccessPolicyWindowSize; + break; + case hipDeviceAttributeAsyncEngineCount: + cdattr = cudaDevAttrAsyncEngineCount; + break; + case hipDeviceAttributeCanUseHostPointerForRegisteredMem: + cdattr = cudaDevAttrCanUseHostPointerForRegisteredMem; + break; + case hipDeviceAttributeComputePreemptionSupported: + cdattr = cudaDevAttrComputePreemptionSupported; + break; + case hipDeviceAttributeHostNativeAtomicSupported: + cdattr = cudaDevAttrHostNativeAtomicSupported; + break; + default: + return hipCUDAErrorTohipError (cudaErrorInvalidValue); + } + + auto cuda_ret = cudart_ftable.cudaDeviceGetAttribute (pi, cdattr, deviceId); + return hipCUDAErrorTohipError 
(cuda_ret); +} + +hipError_t +HipSetDevice (GstHipVendor vendor, int deviceId) +{ + CHECK_VENDOR (vendor); + + if (vendor == GST_HIP_VENDOR_AMD) + return amd_ftable.hipSetDevice (deviceId); + + auto cuda_ret = cudart_ftable.cudaSetDevice (deviceId); + return hipCUDAErrorTohipError (cuda_ret); +} + +hipError_t +HipMalloc (GstHipVendor vendor, void **ptr, size_t size) +{ + CHECK_VENDOR (vendor); + + if (vendor == GST_HIP_VENDOR_AMD) + return amd_ftable.hipMalloc (ptr, size); + + auto cuda_ret = cudart_ftable.cudaMalloc (ptr, size); + return hipCUDAErrorTohipError (cuda_ret); +} + +hipError_t +HipFree (GstHipVendor vendor, void *ptr) +{ + CHECK_VENDOR (vendor); + + if (vendor == GST_HIP_VENDOR_AMD) + return amd_ftable.hipFree (ptr); + + auto cuda_ret = cudart_ftable.cudaFree (ptr); + return hipCUDAErrorTohipError (cuda_ret); +} + +hipError_t +HipHostMalloc (GstHipVendor vendor, void **ptr, size_t size, unsigned int flags) +{ + CHECK_VENDOR (vendor); + + if (vendor == GST_HIP_VENDOR_AMD) + return amd_ftable.hipHostMalloc (ptr, size, flags); + + auto cuda_ret = cudart_ftable.cudaMallocHost (ptr, size, flags); + return hipCUDAErrorTohipError (cuda_ret); +} + +hipError_t +HipHostFree (GstHipVendor vendor, void *ptr) +{ + CHECK_VENDOR (vendor); + + if (vendor == GST_HIP_VENDOR_AMD) + return amd_ftable.hipHostFree (ptr); + + auto cuda_ret = cudart_ftable.cudaFreeHost (ptr); + return hipCUDAErrorTohipError (cuda_ret); +} + +hipError_t +HipStreamCreate (GstHipVendor vendor, hipStream_t * stream) +{ + CHECK_VENDOR (vendor); + + if (vendor == GST_HIP_VENDOR_AMD) + return amd_ftable.hipStreamCreate (stream); + + auto cuda_ret = cudart_ftable.cudaStreamCreate ((cudaStream_t *) stream); + return hipCUDAErrorTohipError (cuda_ret); +} + +hipError_t +HipStreamDestroy (GstHipVendor vendor, hipStream_t stream) +{ + CHECK_VENDOR (vendor); + + if (vendor == GST_HIP_VENDOR_AMD) + return amd_ftable.hipStreamDestroy (stream); + + auto cuda_ret = cudart_ftable.cudaStreamDestroy (stream); + 
return hipCUDAErrorTohipError (cuda_ret); +} + +hipError_t +HipStreamSynchronize (GstHipVendor vendor, hipStream_t stream) +{ + CHECK_VENDOR (vendor); + + if (vendor == GST_HIP_VENDOR_AMD) + return amd_ftable.hipStreamSynchronize (stream); + + auto cuda_ret = cudart_ftable.cudaStreamSynchronize (stream); + return hipCUDAErrorTohipError (cuda_ret); +} + +hipError_t +HipEventCreateWithFlags (GstHipVendor vendor, hipEvent_t * event, + unsigned flags) +{ + CHECK_VENDOR (vendor); + + if (vendor == GST_HIP_VENDOR_AMD) + return amd_ftable.hipEventCreateWithFlags (event, flags); + + auto cuda_ret = cudart_ftable.cudaEventCreateWithFlags ((cudaEvent_t *) event, + flags); + return hipCUDAErrorTohipError (cuda_ret); +} + +hipError_t +HipEventRecord (GstHipVendor vendor, hipEvent_t event, hipStream_t stream) +{ + CHECK_VENDOR (vendor); + + if (vendor == GST_HIP_VENDOR_AMD) + return amd_ftable.hipEventRecord (event, stream); + + auto cuda_ret = cudart_ftable.cudaEventRecord ((cudaEvent_t) event, stream); + return hipCUDAErrorTohipError (cuda_ret); +} + +hipError_t +HipEventDestroy (GstHipVendor vendor, hipEvent_t event) +{ + CHECK_VENDOR (vendor); + + if (vendor == GST_HIP_VENDOR_AMD) + return amd_ftable.hipEventDestroy (event); + + auto cuda_ret = cudart_ftable.cudaEventDestroy ((cudaEvent_t) event); + return hipCUDAErrorTohipError (cuda_ret); +} + +hipError_t +HipEventSynchronize (GstHipVendor vendor, hipEvent_t event) +{ + CHECK_VENDOR (vendor); + + if (vendor == GST_HIP_VENDOR_AMD) + return amd_ftable.hipEventSynchronize (event); + + auto cuda_ret = cudart_ftable.cudaEventSynchronize ((cudaEvent_t) event); + return hipCUDAErrorTohipError (cuda_ret); +} + +hipError_t +HipEventQuery (GstHipVendor vendor, hipEvent_t event) +{ + CHECK_VENDOR (vendor); + + if (vendor == GST_HIP_VENDOR_AMD) + return amd_ftable.hipEventQuery (event); + + auto cuda_ret = cudart_ftable.cudaEventQuery ((cudaEvent_t) event); + return hipCUDAErrorTohipError (cuda_ret); +} + +hipError_t 
+HipModuleLoadData (GstHipVendor vendor, hipModule_t * module, const void *image) +{ + CHECK_VENDOR (vendor); + + if (vendor == GST_HIP_VENDOR_AMD) + return amd_ftable.hipModuleLoadData (module, image); + + auto cuda_ret = cuda_ftable.cuModuleLoadData ((CUmodule *) module, image); + return hipCUResultTohipError (cuda_ret); +} + +hipError_t +HipModuleUnload (GstHipVendor vendor, hipModule_t module) +{ + CHECK_VENDOR (vendor); + + if (vendor == GST_HIP_VENDOR_AMD) + return amd_ftable.hipModuleUnload (module); + + auto cuda_ret = cuda_ftable.cuModuleUnload ((CUmodule) module); + return hipCUResultTohipError (cuda_ret); +} + +hipError_t +HipModuleGetFunction (GstHipVendor vendor, hipFunction_t * function, + hipModule_t module, const char *kname) +{ + CHECK_VENDOR (vendor); + + if (vendor == GST_HIP_VENDOR_AMD) + return amd_ftable.hipModuleGetFunction (function, module, kname); + + + auto cuda_ret = cuda_ftable.cuModuleGetFunction ((CUfunction *) function, + (CUmodule) module, kname); + return hipCUResultTohipError (cuda_ret); +} + +hipError_t +HipModuleLaunchKernel (GstHipVendor vendor, hipFunction_t f, + unsigned int gridDimX, unsigned int gridDimY, unsigned int gridDimZ, + unsigned int blockDimX, unsigned int blockDimY, unsigned int blockDimZ, + unsigned int sharedMemBytes, hipStream_t stream, void **kernelParams, + void **extra) +{ + CHECK_VENDOR (vendor); + + if (vendor == GST_HIP_VENDOR_AMD) + return amd_ftable.hipModuleLaunchKernel (f, gridDimX, gridDimY, gridDimZ, + blockDimX, blockDimY, blockDimZ, sharedMemBytes, stream, + kernelParams, extra); + + auto cuda_ret = cuda_ftable.cuLaunchKernel ((CUfunction) f, gridDimX, + gridDimY, gridDimZ, + blockDimX, blockDimY, blockDimZ, sharedMemBytes, (CUstream) stream, + kernelParams, extra); + return hipCUResultTohipError (cuda_ret); +} + +hipError_t +HipMemcpyParam2DAsync (GstHipVendor vendor, const hip_Memcpy2D * pCopy, + hipStream_t stream) +{ + CHECK_VENDOR (vendor); + + if (vendor == GST_HIP_VENDOR_AMD) + return 
amd_ftable.hipMemcpyParam2DAsync (pCopy, stream); + + CUresult cuda_ret; + if (!pCopy) { + cuda_ret = cuda_ftable.cuMemcpy2DAsync (nullptr, (CUstream) stream); + } else { + CUDA_MEMCPY2D cudaCopy = { }; + hipMemcpy2DTocudaMemcpy2D (cudaCopy, pCopy); + cuda_ret = cuda_ftable.cuMemcpy2DAsync (&cudaCopy, (CUstream) stream); + } + + return hipCUResultTohipError (cuda_ret); +} + +hipError_t +HipMemsetD8Async (GstHipVendor vendor, hipDeviceptr_t dest, unsigned char value, + size_t count, hipStream_t stream) +{ + CHECK_VENDOR (vendor); + + if (vendor == GST_HIP_VENDOR_AMD) + return amd_ftable.hipMemsetD8Async (dest, value, count, stream); + + auto cuda_ret = cuda_ftable.cuMemsetD8Async ((CUdeviceptr) dest, value, + count, (CUstream) stream); + return hipCUResultTohipError (cuda_ret); +} + +hipError_t +HipMemsetD16Async (GstHipVendor vendor, hipDeviceptr_t dest, + unsigned short value, size_t count, hipStream_t stream) +{ + CHECK_VENDOR (vendor); + + if (vendor == GST_HIP_VENDOR_AMD) + return amd_ftable.hipMemsetD16Async (dest, value, count, stream); + + auto cuda_ret = cuda_ftable.cuMemsetD16Async ((CUdeviceptr) dest, value, + count, (CUstream) stream); + return hipCUResultTohipError (cuda_ret); +} + +hipError_t +HipMemsetD32Async (GstHipVendor vendor, hipDeviceptr_t dst, int value, + size_t count, hipStream_t stream) +{ + CHECK_VENDOR (vendor); + + if (vendor == GST_HIP_VENDOR_AMD) + return amd_ftable.hipMemsetD32Async (dst, value, count, stream); + + auto cuda_ret = cuda_ftable.cuMemsetD32Async ((CUdeviceptr) dst, value, + count, (CUstream) stream); + return hipCUResultTohipError (cuda_ret); +} + +hipError_t +HipTexObjectCreate (GstHipVendor vendor, hipTextureObject_t * pTexObject, + const HIP_RESOURCE_DESC * pResDesc, + const HIP_TEXTURE_DESC * pTexDesc, + const HIP_RESOURCE_VIEW_DESC * pResViewDesc) +{ + CHECK_VENDOR (vendor); + + if (vendor == GST_HIP_VENDOR_AMD) + return amd_ftable.hipTexObjectCreate (pTexObject, pResDesc, pTexDesc, + pResViewDesc); + + auto 
cuda_ret = cuda_ftable.cuTexObjectCreate ((CUtexObject *) pTexObject, + (const CUDA_RESOURCE_DESC *) pResDesc, + (const CUDA_TEXTURE_DESC *) pTexDesc, + (const CUDA_RESOURCE_VIEW_DESC *) pResViewDesc); + + return hipCUResultTohipError (cuda_ret); +} + +hipError_t +HipTexObjectDestroy (GstHipVendor vendor, hipTextureObject_t texObject) +{ + CHECK_VENDOR (vendor); + + if (vendor == GST_HIP_VENDOR_AMD) + return amd_ftable.hipTexObjectDestroy (texObject); + + auto cuda_ret = cuda_ftable.cuTexObjectDestroy ((CUtexObject) texObject); + return hipCUResultTohipError (cuda_ret); +} + +hipError_t +HipGraphicsMapResources (GstHipVendor vendor, int count, + hipGraphicsResource_t * resources, hipStream_t stream) +{ + CHECK_VENDOR (vendor); + + if (vendor == GST_HIP_VENDOR_AMD) + return amd_ftable.hipGraphicsMapResources (count, resources, stream); + + auto cuda_ret = cudart_ftable.cudaGraphicsMapResources (count, + (cudaGraphicsResource_t *) resources, stream); + return hipCUDAErrorTohipError (cuda_ret); +} + +hipError_t +HipGraphicsResourceGetMappedPointer (GstHipVendor vendor, void **devPtr, + size_t *size, hipGraphicsResource_t resource) +{ + CHECK_VENDOR (vendor); + + if (vendor == GST_HIP_VENDOR_AMD) { + return amd_ftable.hipGraphicsResourceGetMappedPointer (devPtr, + size, resource); + } + + auto cuda_ret = cudart_ftable.cudaGraphicsResourceGetMappedPointer (devPtr, + size, (cudaGraphicsResource_t) resource); + return hipCUDAErrorTohipError (cuda_ret); +} + +hipError_t +HipGraphicsUnmapResources (GstHipVendor vendor, int count, + hipGraphicsResource_t * resources, hipStream_t stream) +{ + CHECK_VENDOR (vendor); + + if (vendor == GST_HIP_VENDOR_AMD) + return amd_ftable.hipGraphicsUnmapResources (count, resources, stream); + + auto cuda_ret = cudart_ftable.cudaGraphicsUnmapResources (count, + (cudaGraphicsResource_t *) resources, stream); + return hipCUDAErrorTohipError (cuda_ret); +} + +hipError_t +HipGraphicsUnregisterResource (GstHipVendor vendor, + hipGraphicsResource_t 
resource) +{ + CHECK_VENDOR (vendor); + + if (vendor == GST_HIP_VENDOR_AMD) + return amd_ftable.hipGraphicsUnregisterResource (resource); + + auto cuda_ret = + cudart_ftable.cudaGraphicsUnregisterResource ((cudaGraphicsResource_t) + resource); + return hipCUDAErrorTohipError (cuda_ret); +} + +#ifdef HAVE_GST_GL +hipError_t +HipGLGetDevices (GstHipVendor vendor, unsigned int *pHipDeviceCount, + int *pHipDevices, unsigned int hipDeviceCount, hipGLDeviceList deviceList) +{ + CHECK_VENDOR (vendor); + + if (vendor == GST_HIP_VENDOR_AMD) { + return amd_ftable.hipGLGetDevices (pHipDeviceCount, pHipDevices, + hipDeviceCount, deviceList); + } + + auto cuda_ret = cudart_ftable.cudaGLGetDevices (pHipDeviceCount, pHipDevices, + hipDeviceCount, (enum cudaGLDeviceList) deviceList); + return hipCUDAErrorTohipError (cuda_ret); +} + +hipError_t +HipGraphicsGLRegisterBuffer (GstHipVendor vendor, + hipGraphicsResource ** resource, unsigned int buffer, unsigned int flags) +{ + CHECK_VENDOR (vendor); + + if (vendor == GST_HIP_VENDOR_AMD) + return amd_ftable.hipGraphicsGLRegisterBuffer (resource, buffer, flags); + + auto cuda_ret = + cudart_ftable.cudaGraphicsGLRegisterBuffer ((struct cudaGraphicsResource + **) resource, + buffer, flags); + return hipCUDAErrorTohipError (cuda_ret); +} +#endif

_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/gsthiploader.h
Added
@@ -0,0 +1,31 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/gst.h> +#include <gst/hip/hip-prelude.h> +#include <gst/hip/gsthip-enums.h> + +G_BEGIN_DECLS + +GST_HIP_API +gboolean gst_hip_load_library (GstHipVendor vendor); + +G_END_DECLS
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/gsthipmemory.cpp
Added
@@ -0,0 +1,1212 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gsthip.h" +#include "gsthip-private.h" +#include <mutex> +#include <condition_variable> +#include <queue> + +#ifndef GST_DISABLE_GST_DEBUG +#define GST_CAT_DEFAULT ensure_debug_category() +static GstDebugCategory * +ensure_debug_category (void) +{ + static GstDebugCategory *cat = nullptr; + static std::once_flag once; + + std::call_once (once, [&] { + cat = _gst_debug_category_new ("hipallocator", 0, "hipallocator"); + }); + + return cat; +} +#endif + +static GstHipAllocator *_hip_memory_allocator = nullptr; +#define N_TEX_ADDR_MODES 4 +#define N_TEX_FILTER_MODES 2 +struct _GstHipMemoryPrivate +{ + ~_GstHipMemoryPrivate () + { + gst_clear_hip_event (&event); + gst_clear_hip_stream (&stream); + } + + GstHipVendor vendor; + void *data = nullptr; + void *staging = nullptr; + gsize pitch = 0; + guint width_in_bytes = 0; + guint height = 0; + gboolean texture_support = FALSE; + hipTextureObject_t texture[4][N_TEX_ADDR_MODES][N_TEX_FILTER_MODES] = { }; + GstHipStream *stream = nullptr; + GstHipEvent *event = nullptr; + + std::mutex lock; +}; + +struct 
_GstHipAllocatorPrivate +{ + GstMemoryCopyFunction fallback_copy; +}; + +#define gst_hip_allocator_parent_class parent_class +G_DEFINE_TYPE_WITH_PRIVATE (GstHipAllocator, + gst_hip_allocator, GST_TYPE_ALLOCATOR); + +static void gst_hip_allocator_free (GstAllocator * allocator, + GstMemory * memory); + +static gpointer hip_mem_map (GstMemory * mem, gsize maxsize, GstMapFlags flags); +static void hip_mem_unmap (GstMemory * mem); +static GstMemory *hip_mem_copy (GstMemory * mem, gssize offset, gssize size); + +static GstMemory * +gst_hip_allocator_dummy_alloc (GstAllocator * allocator, gsize size, + GstAllocationParams * params) +{ + g_return_val_if_reached (nullptr); +} + +static void +gst_hip_allocator_class_init (GstHipAllocatorClass * klass) +{ + auto alloc_class = GST_ALLOCATOR_CLASS (klass); + + alloc_class->alloc = GST_DEBUG_FUNCPTR (gst_hip_allocator_dummy_alloc); + alloc_class->free = GST_DEBUG_FUNCPTR (gst_hip_allocator_free); +} + +static void +gst_hip_allocator_init (GstHipAllocator * allocator) +{ + GstAllocator *alloc = GST_ALLOCATOR_CAST (allocator); + GstHipAllocatorPrivate *priv; + + priv = allocator->priv = (GstHipAllocatorPrivate *) + gst_hip_allocator_get_instance_private (allocator); + + alloc->mem_type = GST_HIP_MEMORY_NAME; + + alloc->mem_map = hip_mem_map; + alloc->mem_unmap = hip_mem_unmap; + + /* Store pointer to default mem_copy method for fallback copy */ + priv->fallback_copy = alloc->mem_copy; + alloc->mem_copy = hip_mem_copy; + + GST_OBJECT_FLAG_SET (allocator, GST_ALLOCATOR_FLAG_CUSTOM_ALLOC); +} + +static gboolean +gst_hip_allocator_update_info (const GstVideoInfo * reference, + gsize pitch, gsize alloc_height, GstVideoInfo * aligned) +{ + GstVideoInfo ret = *reference; + guint height = reference->height; + + ret.size = pitch * alloc_height; + + switch (GST_VIDEO_INFO_FORMAT (reference)) { + case GST_VIDEO_FORMAT_I420: + case GST_VIDEO_FORMAT_YV12: + case GST_VIDEO_FORMAT_I420_10LE: + case GST_VIDEO_FORMAT_I420_12LE: + { + guint 
chroma_height = GST_ROUND_UP_2 (height) / 2; + /* we are wasting space yes, but required so that this memory + * can be used in kernel function */ + ret.stride[0] = pitch; + ret.stride[1] = pitch; + ret.stride[2] = pitch; + ret.offset[0] = 0; + ret.offset[1] = ret.stride[0] * height; + ret.offset[2] = ret.offset[1] + (ret.stride[1] * chroma_height); + break; + } + case GST_VIDEO_FORMAT_Y42B: + case GST_VIDEO_FORMAT_I422_10LE: + case GST_VIDEO_FORMAT_I422_12LE: + ret.stride[0] = pitch; + ret.stride[1] = pitch; + ret.stride[2] = pitch; + ret.offset[0] = 0; + ret.offset[1] = ret.stride[0] * height; + ret.offset[2] = ret.offset[1] + (ret.stride[1] * height); + break; + case GST_VIDEO_FORMAT_NV12: + case GST_VIDEO_FORMAT_NV21: + case GST_VIDEO_FORMAT_P010_10LE: + case GST_VIDEO_FORMAT_P012_LE: + case GST_VIDEO_FORMAT_P016_LE: + ret.stride[0] = pitch; + ret.stride[1] = pitch; + ret.offset[0] = 0; + ret.offset[1] = ret.stride[0] * height; + break; + case GST_VIDEO_FORMAT_Y444: + case GST_VIDEO_FORMAT_Y444_10LE: + case GST_VIDEO_FORMAT_Y444_12LE: + case GST_VIDEO_FORMAT_Y444_16LE: + case GST_VIDEO_FORMAT_RGBP: + case GST_VIDEO_FORMAT_BGRP: + case GST_VIDEO_FORMAT_GBR: + case GST_VIDEO_FORMAT_GBR_10LE: + case GST_VIDEO_FORMAT_GBR_12LE: + case GST_VIDEO_FORMAT_GBR_16LE: + ret.stride[0] = pitch; + ret.stride[1] = pitch; + ret.stride[2] = pitch; + ret.offset[0] = 0; + ret.offset[1] = ret.stride[0] * height; + ret.offset[2] = ret.offset[1] * 2; + break; + case GST_VIDEO_FORMAT_GBRA: + ret.stride[0] = pitch; + ret.stride[1] = pitch; + ret.stride[2] = pitch; + ret.stride[3] = pitch; + ret.offset[0] = 0; + ret.offset[1] = ret.stride[0] * height; + ret.offset[2] = ret.offset[1] * 2; + ret.offset[3] = ret.offset[1] * 3; + break; + case GST_VIDEO_FORMAT_BGRA: + case GST_VIDEO_FORMAT_RGBA: + case GST_VIDEO_FORMAT_RGBx: + case GST_VIDEO_FORMAT_BGRx: + case GST_VIDEO_FORMAT_ARGB: + case GST_VIDEO_FORMAT_ABGR: + case GST_VIDEO_FORMAT_RGB: + case GST_VIDEO_FORMAT_BGR: + case GST_VIDEO_FORMAT_BGR10A2_LE: + case GST_VIDEO_FORMAT_RGB10A2_LE: + case GST_VIDEO_FORMAT_YUY2: + 
case GST_VIDEO_FORMAT_UYVY: + case GST_VIDEO_FORMAT_VUYA: + ret.stride[0] = pitch; + ret.offset[0] = 0; + break; + default: + return FALSE; + } + + *aligned = ret; + + return TRUE; +} + +static size_t +do_align (size_t value, size_t align) +{ + if (align == 0) + return value; + + return ((value + align - 1) / align) * align; +} + +static GstMemory * +gst_hip_allocator_alloc_internal (GstHipAllocator * self, + GstHipDevice * device, const GstVideoInfo * info, + guint width_in_bytes, guint alloc_height, GstHipStream * stream) +{ + hipError_t hip_ret = hipSuccess; + + if (!gst_hip_device_set_current (device)) + return nullptr; + + auto vendor = gst_hip_device_get_vendor (device); + gint texture_align = 0; + gst_hip_device_get_attribute (device, + hipDeviceAttributeTextureAlignment, &texture_align); + if (texture_align <= 0) + texture_align = 0; + auto pitch = do_align (width_in_bytes, texture_align); + + void *data; + hip_ret = HipMalloc (vendor, &data, pitch * alloc_height); + if (!gst_hip_result (hip_ret, vendor)) { + GST_ERROR_OBJECT (self, "Failed to allocate memory"); + return nullptr; + } + + GstVideoInfo alloc_info; + if (!gst_hip_allocator_update_info (info, pitch, alloc_height, &alloc_info)) { + GST_ERROR_OBJECT (self, "Couldn't calculate aligned info"); + HipFree (vendor, data); + return nullptr; + } + + auto mem = g_new0 (GstHipMemory, 1); + mem->device = (GstHipDevice *) gst_object_ref (device); + mem->info = alloc_info; + + auto priv = new GstHipMemoryPrivate (); + mem->priv = priv; + + priv->data = data; + priv->pitch = pitch; + priv->width_in_bytes = width_in_bytes; + priv->height = alloc_height; + priv->vendor = vendor; + priv->stream = stream; + if (stream) + gst_hip_stream_ref (stream); + + g_object_get (device, "texture2d-support", &priv->texture_support, nullptr); + + gst_memory_init (GST_MEMORY_CAST (mem), (GstMemoryFlags) 0, + GST_ALLOCATOR_CAST (self), nullptr, alloc_info.size, 0, 0, + alloc_info.size); + + return GST_MEMORY_CAST (mem); +} + +static 
void +gst_hip_allocator_free (GstAllocator * allocator, GstMemory * mem) +{ + auto hmem = GST_HIP_MEMORY_CAST (mem); + auto priv = hmem->priv; + + gst_hip_device_set_current (hmem->device); + + for (guint i = 0; i < 4; i++) { + for (guint j = 0; j < N_TEX_ADDR_MODES; j++) { + for (guint k = 0; k < N_TEX_FILTER_MODES; k++) { + if (priv->texture[i][j][k]) { + HipTexObjectDestroy (priv->vendor, priv->texture[i][j][k]); + } + } + } + } + + HipFree (priv->vendor, priv->data); + + if (priv->staging) + HipHostFree (priv->vendor, priv->staging); + + gst_object_unref (hmem->device); + + delete hmem->priv; + + g_free (mem); +} + +static gboolean +gst_hip_memory_upload (GstHipAllocator * self, GstHipMemory * mem) +{ + auto priv = mem->priv; + hip_Memcpy2D param = { }; + + if (!priv->staging || + !GST_MEMORY_FLAG_IS_SET (mem, GST_HIP_MEMORY_TRANSFER_NEED_UPLOAD)) { + return TRUE; + } + + if (!gst_hip_device_set_current (mem->device)) { + GST_ERROR_OBJECT (self, "Failed to set device"); + return FALSE; + } + + param.srcMemoryType = hipMemoryTypeHost; + param.srcHost = priv->staging; + param.srcPitch = priv->pitch; + + param.dstMemoryType = hipMemoryTypeDevice; + param.dstDevice = priv->data; + param.dstPitch = priv->pitch; + param.WidthInBytes = priv->width_in_bytes; + param.Height = priv->height; + + auto stream = gst_hip_stream_get_handle (priv->stream); + auto hip_ret = HipMemcpyParam2DAsync (priv->vendor, &param, stream); + if (gst_hip_result (hip_ret, priv->vendor)) + hip_ret = HipStreamSynchronize (priv->vendor, stream); + + /* Already synchronized */ + gst_clear_hip_event (&priv->event); + + GST_MEMORY_FLAG_UNSET (mem, GST_HIP_MEMORY_TRANSFER_NEED_UPLOAD); + + return gst_hip_result (hip_ret, priv->vendor); +} + +static gboolean +gst_hip_memory_download (GstHipAllocator * self, GstHipMemory * mem) +{ + auto priv = mem->priv; + hip_Memcpy2D param = { }; + + if (!GST_MEMORY_FLAG_IS_SET (mem, GST_HIP_MEMORY_TRANSFER_NEED_DOWNLOAD)) + return TRUE; + + if (!gst_hip_device_set_current 
(mem->device)) { + GST_ERROR_OBJECT (self, "Failed to push cuda context"); + return FALSE; + } + + if (!priv->staging) { + auto hip_ret = HipHostMalloc (priv->vendor, + &priv->staging, GST_MEMORY_CAST (mem)->size, 0); + + if (!gst_hip_result (hip_ret, priv->vendor)) { + GST_ERROR_OBJECT (self, "Failed to allocate staging memory"); + return FALSE; + } + } + + param.srcMemoryType = hipMemoryTypeDevice; + param.srcDevice = priv->data; + param.srcPitch = priv->pitch; + + param.dstMemoryType = hipMemoryTypeHost; + param.dstHost = priv->staging; + param.dstPitch = priv->pitch; + param.WidthInBytes = priv->width_in_bytes; + param.Height = priv->height; + auto stream = gst_hip_stream_get_handle (priv->stream); + + auto hip_ret = HipMemcpyParam2DAsync (priv->vendor, &param, stream); + if (gst_hip_result (hip_ret, priv->vendor)) + hip_ret = HipStreamSynchronize (priv->vendor, stream); + + /* Already synchronized */ + gst_clear_hip_event (&priv->event); + + GST_MEMORY_FLAG_UNSET (mem, GST_HIP_MEMORY_TRANSFER_NEED_DOWNLOAD); + + return gst_hip_result (hip_ret, priv->vendor); +} + +static gpointer +hip_mem_map (GstMemory * mem, gsize maxsize, GstMapFlags flags) +{ + auto self = GST_HIP_ALLOCATOR (mem->allocator); + auto hmem = GST_HIP_MEMORY_CAST (mem); + auto priv = hmem->priv; + + std::lock_guard < std::mutex > lk (priv->lock); + + if ((flags & GST_MAP_HIP) == GST_MAP_HIP) { + if (!gst_hip_memory_upload (self, hmem)) + return nullptr; + + if ((flags & GST_MAP_WRITE) != 0) + GST_MINI_OBJECT_FLAG_SET (mem, GST_HIP_MEMORY_TRANSFER_NEED_DOWNLOAD); + + return priv->data; + } + + /* First CPU access, must be downloaded */ + if (!priv->staging) + GST_MINI_OBJECT_FLAG_SET (mem, GST_HIP_MEMORY_TRANSFER_NEED_DOWNLOAD); + + if (!gst_hip_memory_download (self, hmem)) + return nullptr; + + if ((flags & GST_MAP_WRITE) != 0) + GST_MINI_OBJECT_FLAG_SET (mem, GST_HIP_MEMORY_TRANSFER_NEED_UPLOAD); + + return priv->staging; +} + +static void +hip_mem_unmap (GstMemory * mem) +{ + /* Nothing to do */ 
+} + +static GstMemory * +hip_mem_copy (GstMemory * mem, gssize offset, gssize size) +{ + auto self = GST_HIP_ALLOCATOR (mem->allocator); + auto src_mem = GST_HIP_MEMORY_CAST (mem); + auto vendor = src_mem->priv->vendor; + auto device = src_mem->device; + GstMapInfo src_info, dst_info; + hip_Memcpy2D param = { }; + GstMemory *copy = nullptr; + auto stream = gst_hip_device_get_stream (device); + + /* non-zero offset or different size is not supported */ + if (offset != 0 || (size != -1 && (gsize) size != mem->size)) { + GST_DEBUG_OBJECT (self, "Different size/offset, try fallback copy"); + return self->priv->fallback_copy (mem, offset, size); + } + + if (GST_IS_HIP_POOL_ALLOCATOR (self)) { + gst_hip_pool_allocator_acquire_memory (GST_HIP_POOL_ALLOCATOR (self), + ©); + } + + if (!copy) { + copy = gst_hip_allocator_alloc_internal (self, device, + &src_mem->info, src_mem->priv->width_in_bytes, src_mem->priv->height, + stream); + } + + if (!copy) { + GST_ERROR_OBJECT (self, "Failed to allocate memory for copying"); + return nullptr; + } + + if (!gst_memory_map (mem, &src_info, GST_MAP_READ_HIP)) { + GST_ERROR_OBJECT (self, "Failed to map src memory"); + gst_memory_unref (copy); + return nullptr; + } + + if (!gst_memory_map (copy, &dst_info, GST_MAP_WRITE_HIP)) { + GST_ERROR_OBJECT (self, "Failed to map dst memory"); + gst_memory_unmap (mem, &src_info); + gst_memory_unref (copy); + return nullptr; + } + + if (!gst_hip_device_set_current (device)) { + GST_ERROR_OBJECT (self, "Failed to set device"); + gst_memory_unmap (mem, &src_info); + gst_memory_unmap (copy, &dst_info); + + return nullptr; + } + + param.srcMemoryType = hipMemoryTypeDevice; + param.srcDevice = src_info.data; + param.srcPitch = src_mem->priv->pitch; + + param.dstMemoryType = hipMemoryTypeDevice; + param.dstDevice = dst_info.data; + param.dstPitch = src_mem->priv->pitch; + param.WidthInBytes = src_mem->priv->width_in_bytes; + param.Height = src_mem->priv->height; + + auto stream_handle = 
gst_hip_stream_get_handle (stream); + + auto ret = HipMemcpyParam2DAsync (vendor, &param, stream_handle); + if (gst_hip_result (ret, vendor)) + ret = HipStreamSynchronize (vendor, stream_handle); + + gst_memory_unmap (mem, &src_info); + gst_memory_unmap (copy, &dst_info); + + if (!gst_hip_result (ret, vendor)) { + GST_ERROR_OBJECT (self, "Failed to copy memory"); + gst_memory_unref (copy); + return nullptr; + } + + return copy; +} + +void +gst_hip_memory_init_once (void) +{ + static std::once_flag once; + + std::call_once (once, [&] { + _hip_memory_allocator = + (GstHipAllocator *) g_object_new (GST_TYPE_HIP_ALLOCATOR, nullptr); + gst_object_ref_sink (_hip_memory_allocator); + gst_object_ref (_hip_memory_allocator); + gst_allocator_register (GST_HIP_MEMORY_NAME, + GST_ALLOCATOR_CAST (_hip_memory_allocator)); + }); +} + +/** + * gst_is_hip_memory: + * @mem: a #GstMemory + * + * Returns: %TRUE if @mem is a #GstHipMemory + * + * Since: 1.28 + */ +gboolean +gst_is_hip_memory (GstMemory * mem) +{ + return mem != nullptr && mem->allocator != nullptr && + GST_IS_HIP_ALLOCATOR (mem->allocator); +} + +typedef struct _TextureFormat +{ + GstVideoFormat format; + hipArray_Format array_format[GST_VIDEO_MAX_COMPONENTS]; + guint channels[GST_VIDEO_MAX_COMPONENTS]; +} TextureFormat; + +#define HIP_AD_FORMAT_NONE ((hipArray_Format) 0) +#define MAKE_FORMAT_YUV_PLANAR(f,cf) \ + { GST_VIDEO_FORMAT_ ##f, { HIP_AD_FORMAT_ ##cf, HIP_AD_FORMAT_ ##cf, \ + HIP_AD_FORMAT_ ##cf, HIP_AD_FORMAT_NONE }, {1, 1, 1, 0} } +#define MAKE_FORMAT_YUV_SEMI_PLANAR(f,cf) \ + { GST_VIDEO_FORMAT_ ##f, { HIP_AD_FORMAT_ ##cf, HIP_AD_FORMAT_ ##cf, \ + HIP_AD_FORMAT_NONE, HIP_AD_FORMAT_NONE }, {1, 2, 0, 0} } +#define MAKE_FORMAT_RGB(f,cf) \ + { GST_VIDEO_FORMAT_ ##f, { HIP_AD_FORMAT_ ##cf, HIP_AD_FORMAT_NONE, \ + HIP_AD_FORMAT_NONE, HIP_AD_FORMAT_NONE }, {4, 0, 0, 0} } +#define MAKE_FORMAT_RGBP(f,cf) \ + { GST_VIDEO_FORMAT_ ##f, { HIP_AD_FORMAT_ ##cf, HIP_AD_FORMAT_ ##cf, \ + HIP_AD_FORMAT_ ##cf, HIP_AD_FORMAT_NONE }, {1, 
1, 1, 0} } +#define MAKE_FORMAT_RGBAP(f,cf) \ + { GST_VIDEO_FORMAT_ ##f, { HIP_AD_FORMAT_ ##cf, HIP_AD_FORMAT_ ##cf, \ + HIP_AD_FORMAT_ ##cf, HIP_AD_FORMAT_ ##cf }, {1, 1, 1, 1} } + +static const TextureFormat format_map[] = { + MAKE_FORMAT_YUV_PLANAR (I420, UNSIGNED_INT8), + MAKE_FORMAT_YUV_PLANAR (YV12, UNSIGNED_INT8), + MAKE_FORMAT_YUV_SEMI_PLANAR (NV12, UNSIGNED_INT8), + MAKE_FORMAT_YUV_SEMI_PLANAR (NV21, UNSIGNED_INT8), + MAKE_FORMAT_YUV_SEMI_PLANAR (P010_10LE, UNSIGNED_INT16), + MAKE_FORMAT_YUV_SEMI_PLANAR (P012_LE, UNSIGNED_INT16), + MAKE_FORMAT_YUV_SEMI_PLANAR (P016_LE, UNSIGNED_INT16), + MAKE_FORMAT_YUV_PLANAR (I420_10LE, UNSIGNED_INT16), + MAKE_FORMAT_YUV_PLANAR (I420_12LE, UNSIGNED_INT16), + MAKE_FORMAT_YUV_PLANAR (Y444, UNSIGNED_INT8), + MAKE_FORMAT_YUV_PLANAR (Y444_10LE, UNSIGNED_INT16), + MAKE_FORMAT_YUV_PLANAR (Y444_12LE, UNSIGNED_INT16), + MAKE_FORMAT_YUV_PLANAR (Y444_16LE, UNSIGNED_INT16), + MAKE_FORMAT_RGB (RGBA, UNSIGNED_INT8), + MAKE_FORMAT_RGB (BGRA, UNSIGNED_INT8), + MAKE_FORMAT_RGB (RGBx, UNSIGNED_INT8), + MAKE_FORMAT_RGB (BGRx, UNSIGNED_INT8), + MAKE_FORMAT_RGB (ARGB, UNSIGNED_INT8), + MAKE_FORMAT_RGB (ARGB64, UNSIGNED_INT16), + MAKE_FORMAT_RGB (ABGR, UNSIGNED_INT8), + MAKE_FORMAT_YUV_PLANAR (Y42B, UNSIGNED_INT8), + MAKE_FORMAT_YUV_PLANAR (I422_10LE, UNSIGNED_INT16), + MAKE_FORMAT_YUV_PLANAR (I422_12LE, UNSIGNED_INT16), + MAKE_FORMAT_RGBP (RGBP, UNSIGNED_INT8), + MAKE_FORMAT_RGBP (BGRP, UNSIGNED_INT8), + MAKE_FORMAT_RGBP (GBR, UNSIGNED_INT8), + MAKE_FORMAT_RGBP (GBR_10LE, UNSIGNED_INT16), + MAKE_FORMAT_RGBP (GBR_12LE, UNSIGNED_INT16), + MAKE_FORMAT_RGBP (GBR_16LE, UNSIGNED_INT16), + MAKE_FORMAT_RGBAP (GBRA, UNSIGNED_INT8), + MAKE_FORMAT_RGB (VUYA, UNSIGNED_INT8), +}; + +/** + * gst_hip_memory_get_texture: + * @mem: a #GstHipMemory + * @plane: the plane index + * @filter_mode: (type gint): filter mode + * @address_mode: (type gint): address mode + * @texture: (type gpointer) (out) (transfer none): a pointer to hipTextureObject_t object + * + * 
Creates hipTextureObject_t with given parameters + * + * Returns: %TRUE if succeeded + * + * Since: 1.28 + */ +gboolean +gst_hip_memory_get_texture (GstHipMemory * mem, guint plane, + HIPfilter_mode filter_mode, HIPaddress_mode address_mode, + hipTextureObject_t * texture) +{ + g_return_val_if_fail (gst_is_hip_memory (GST_MEMORY_CAST (mem)), FALSE); + g_return_val_if_fail (GST_VIDEO_INFO_N_PLANES (&mem->info) > plane, FALSE); + g_return_val_if_fail (texture, FALSE); + + auto priv = mem->priv; + + if (!priv->texture_support) { + GST_WARNING_OBJECT (mem->device, "Texture not supported"); + return FALSE; + } + + std::lock_guard < std::mutex > lk (priv->lock); + if (priv->texture[plane][address_mode][filter_mode]) { + *texture = priv->texture[plane][address_mode][filter_mode]; + return TRUE; + } + + const TextureFormat *format = nullptr; + for (guint i = 0; i < G_N_ELEMENTS (format_map); i++) { + if (format_map[i].format == GST_VIDEO_INFO_FORMAT (&mem->info)) { + format = &format_map[i]; + break; + } + } + + if (!format) { + GST_WARNING_OBJECT (mem->device, "Not supported format %s", + gst_video_format_to_string (GST_VIDEO_INFO_FORMAT (&mem->info))); + return FALSE; + } + + if (!gst_hip_device_set_current (mem->device)) { + GST_ERROR_OBJECT (mem->device, "Couldn't set current"); + return FALSE; + } + + auto src_ptr = ((guint8 *) priv->data) + mem->info.offset[plane]; + + HIP_RESOURCE_DESC res_desc = { }; + HIP_TEXTURE_DESC tex_desc = { }; + + res_desc.resType = HIP_RESOURCE_TYPE_PITCH2D; + res_desc.res.pitch2D.format = format->array_format[plane]; + res_desc.res.pitch2D.numChannels = format->channels[plane]; + res_desc.res.pitch2D.width = GST_VIDEO_INFO_COMP_WIDTH (&mem->info, plane); + res_desc.res.pitch2D.height = GST_VIDEO_INFO_COMP_HEIGHT (&mem->info, plane); + res_desc.res.pitch2D.pitchInBytes = + GST_VIDEO_INFO_PLANE_STRIDE (&mem->info, plane); + res_desc.res.pitch2D.devPtr = src_ptr; + + tex_desc.filterMode = (HIPfilter_mode) filter_mode; + /* Will read texture value as a normalized 
[0, 1] float value + * with [0, 1) coordinates */ + tex_desc.flags = HIP_TRSF_NORMALIZED_COORDINATES; + tex_desc.addressMode[0] = address_mode; + tex_desc.addressMode[1] = address_mode; + tex_desc.addressMode[2] = address_mode; + + hipTextureObject_t tex_obj; + auto hip_ret = + HipTexObjectCreate (priv->vendor, &tex_obj, &res_desc, &tex_desc, + nullptr); + if (!gst_hip_result (hip_ret, priv->vendor)) { + GST_ERROR_OBJECT (mem->device, "Couldn't create texture object"); + return FALSE; + } + + priv->texture[plane][address_mode][filter_mode] = tex_obj; + + *texture = tex_obj; + + return TRUE; +} + +/** + * gst_hip_memory_get_stream: + * @mem: a #GstHipMemory + * + * Gets HIP stream object associated with @mem + * + * Returns: (transfer none) (nullable): a #GstHipStream or %NULL if default + * HIP stream is in use + * + * Since: 1.28 + */ +GstHipStream * +gst_hip_memory_get_stream (GstHipMemory * mem) +{ + g_return_val_if_fail (gst_is_hip_memory (GST_MEMORY_CAST (mem)), nullptr); + + return mem->priv->stream; +} + +/** + * gst_hip_memory_set_event: + * @mem: a #GstHipMemory + * @event: (transfer none) (allow-none): a #GstHipEvent + * + * Sets @event to @mem for later synchronization operation + * + * Since: 1.28 + */ +void +gst_hip_memory_set_event (GstHipMemory * mem, GstHipEvent * event) +{ + g_return_if_fail (gst_is_hip_memory (GST_MEMORY_CAST (mem))); + + auto priv = mem->priv; + + std::lock_guard < std::mutex > lk (priv->lock); + gst_clear_hip_event (&priv->event); + priv->event = event; + if (priv->event) + gst_hip_event_ref (priv->event); +} + +/** + * gst_hip_memory_sync: + * @mem: a #GstHipMemory + * + * Waits for device synchronization by using previously configured #GstHipEvent + * via gst_hip_memory_set_event() + * + * Since: 1.28 + */ +void +gst_hip_memory_sync (GstHipMemory * mem) +{ + g_return_if_fail (gst_is_hip_memory (GST_MEMORY_CAST (mem))); + + auto priv = mem->priv; + + std::lock_guard < std::mutex > lk (priv->lock); + if (priv->event) + 
gst_hip_event_synchronize (priv->event); + + gst_clear_hip_event (&priv->event); +} + +static guint +gst_hip_allocator_calculate_alloc_height (const GstVideoInfo * info) +{ + guint alloc_height; + + alloc_height = GST_VIDEO_INFO_HEIGHT (info); + + /* make sure valid height for subsampled formats */ + switch (GST_VIDEO_INFO_FORMAT (info)) { + case GST_VIDEO_FORMAT_I420: + case GST_VIDEO_FORMAT_YV12: + case GST_VIDEO_FORMAT_NV12: + case GST_VIDEO_FORMAT_P010_10LE: + case GST_VIDEO_FORMAT_P012_LE: + case GST_VIDEO_FORMAT_P016_LE: + case GST_VIDEO_FORMAT_I420_10LE: + case GST_VIDEO_FORMAT_I420_12LE: + alloc_height = GST_ROUND_UP_2 (alloc_height); + break; + default: + break; + } + + switch (GST_VIDEO_INFO_FORMAT (info)) { + case GST_VIDEO_FORMAT_I420: + case GST_VIDEO_FORMAT_YV12: + case GST_VIDEO_FORMAT_I420_10LE: + case GST_VIDEO_FORMAT_I420_12LE: + alloc_height *= 2; + break; + case GST_VIDEO_FORMAT_NV12: + case GST_VIDEO_FORMAT_NV21: + case GST_VIDEO_FORMAT_P010_10LE: + case GST_VIDEO_FORMAT_P012_LE: + case GST_VIDEO_FORMAT_P016_LE: + alloc_height += alloc_height / 2; + break; + case GST_VIDEO_FORMAT_Y42B: + case GST_VIDEO_FORMAT_I422_10LE: + case GST_VIDEO_FORMAT_I422_12LE: + case GST_VIDEO_FORMAT_Y444: + case GST_VIDEO_FORMAT_Y444_10LE: + case GST_VIDEO_FORMAT_Y444_12LE: + case GST_VIDEO_FORMAT_Y444_16LE: + case GST_VIDEO_FORMAT_RGBP: + case GST_VIDEO_FORMAT_BGRP: + case GST_VIDEO_FORMAT_GBR: + case GST_VIDEO_FORMAT_GBR_10LE: + case GST_VIDEO_FORMAT_GBR_12LE: + case GST_VIDEO_FORMAT_GBR_16LE: + alloc_height *= 3; + break; + case GST_VIDEO_FORMAT_GBRA: + alloc_height *= 4; + break; + default: + break; + } + + return alloc_height; +} + +/** + * gst_hip_allocator_alloc: + * @allocator: (allow-none): a #GstHipAllocator + * @device: a #GstHipDevice + * @info: a #GstVideoInfo + * + * Allocates a new GstHipMemory + * + * Returns: (transfer full) (nullable): a newly allocated #GstHipMemory + * or %NULL if allocation failed + * + * Since: 1.28 + */ +GstMemory * 
+gst_hip_allocator_alloc (GstHipAllocator * allocator, + GstHipDevice * device, const GstVideoInfo * info) +{ + guint alloc_height; + + g_return_val_if_fail (GST_IS_HIP_DEVICE (device), nullptr); + g_return_val_if_fail (info, nullptr); + + if (!allocator) + allocator = (GstHipAllocator *) _hip_memory_allocator; + + alloc_height = gst_hip_allocator_calculate_alloc_height (info); + + return gst_hip_allocator_alloc_internal (allocator, device, + info, info->stride[0], alloc_height, gst_hip_device_get_stream (device)); +} + +/** + * gst_hip_allocator_set_active: + * @allocator: a #GstCudaAllocator + * @active: the new active state + * + * Controls the active state of @allocator. + * + * Returns: %TRUE if active state of @allocator was successfully updated. + * + * Since: 1.28 + */ +gboolean +gst_hip_allocator_set_active (GstHipAllocator * allocator, gboolean active) +{ + g_return_val_if_fail (GST_IS_HIP_ALLOCATOR (allocator), FALSE); + + auto klass = GST_HIP_ALLOCATOR_GET_CLASS (allocator); + if (klass->set_active) + return klass->set_active (allocator, active); + + return TRUE; +} + +struct _GstHipPoolAllocatorPrivate +{ + std::queue < GstMemory * >queue; + + std::mutex lock; + std::condition_variable cond; + gboolean started = FALSE; + gboolean active = FALSE; + + guint outstanding = 0; + guint cur_mems = 0; + guint alloc_height; + gboolean flushing = FALSE; +}; + +static void gst_hip_pool_allocator_finalize (GObject * object); + +static gboolean +gst_hip_pool_allocator_set_active (GstHipAllocator * allocator, + gboolean active); + +static gboolean gst_hip_pool_allocator_start (GstHipPoolAllocator * self); +static gboolean gst_hip_pool_allocator_stop (GstHipPoolAllocator * self); +static gboolean gst_hip_memory_release (GstMiniObject * obj); + +#define gst_hip_pool_allocator_parent_class pool_alloc_parent_class +G_DEFINE_TYPE (GstHipPoolAllocator, gst_hip_pool_allocator, + GST_TYPE_HIP_ALLOCATOR); + +static void +gst_hip_pool_allocator_class_init 
(GstHipPoolAllocatorClass * klass) +{ + auto object_class = G_OBJECT_CLASS (klass); + auto hipalloc_class = GST_HIP_ALLOCATOR_CLASS (klass); + + object_class->finalize = gst_hip_pool_allocator_finalize; + hipalloc_class->set_active = gst_hip_pool_allocator_set_active; +} + +static void +gst_hip_pool_allocator_init (GstHipPoolAllocator * self) +{ + self->priv = new GstHipPoolAllocatorPrivate (); +} + +static void +gst_hip_pool_allocator_finalize (GObject * object) +{ + auto self = GST_HIP_POOL_ALLOCATOR (object); + + GST_DEBUG_OBJECT (self, "Finalize"); + + gst_hip_pool_allocator_stop (self); + delete self->priv; + + g_clear_object (&self->device); + + G_OBJECT_CLASS (pool_alloc_parent_class)->finalize (object); +} + +static gboolean +gst_hip_pool_allocator_start (GstHipPoolAllocator * self) +{ + auto priv = self->priv; + + priv->started = TRUE; + return TRUE; +} + +static gboolean +gst_hip_pool_allocator_set_active (GstHipAllocator * allocator, gboolean active) +{ + auto self = GST_HIP_POOL_ALLOCATOR (allocator); + auto priv = self->priv; + + GST_LOG_OBJECT (self, "active %d", active); + + std::unique_lock < std::mutex > lk (priv->lock); + /* just return if we are already in the right state */ + if (priv->active == active) { + GST_LOG_OBJECT (self, "allocator was in the right state"); + return TRUE; + } + + if (active) { + if (!gst_hip_pool_allocator_start (self)) { + GST_ERROR_OBJECT (self, "start failed"); + return FALSE; + } + + priv->active = TRUE; + priv->flushing = FALSE; + } else { + priv->flushing = TRUE; + priv->active = FALSE; + + priv->cond.notify_all (); + + /* when all memory objects are in the pool, free them. 
Else they will be + * freed when they are released */ + GST_LOG_OBJECT (self, "outstanding memories %d, (in queue %u)", + priv->outstanding, (guint) priv->queue.size ()); + if (priv->outstanding == 0) { + if (!gst_hip_pool_allocator_stop (self)) { + GST_ERROR_OBJECT (self, "stop failed"); + return FALSE; + } + } + } + + return TRUE; +} + +static void +gst_hip_pool_allocator_free_memory (GstHipPoolAllocator * self, GstMemory * mem) +{ + auto priv = self->priv; + + priv->cur_mems--; + GST_LOG_OBJECT (self, "freeing memory %p (%u left)", mem, priv->cur_mems); + + GST_MINI_OBJECT_CAST (mem)->dispose = nullptr; + gst_memory_unref (mem); +} + +/* must be called with the lock */ +static void +gst_hip_pool_allocator_clear_queue (GstHipPoolAllocator * self) +{ + auto priv = self->priv; + + GST_LOG_OBJECT (self, "Clearing queue"); + + while (!priv->queue.empty ()) { + GstMemory *mem = priv->queue.front (); + priv->queue.pop (); + gst_hip_pool_allocator_free_memory (self, mem); + } + + GST_LOG_OBJECT (self, "Clear done"); +} + +/* must be called with the lock */ +static gboolean +gst_hip_pool_allocator_stop (GstHipPoolAllocator * self) +{ + auto priv = self->priv; + + GST_DEBUG_OBJECT (self, "Stop"); + + if (priv->started) { + gst_hip_pool_allocator_clear_queue (self); + priv->started = FALSE; + } + + return TRUE; +} + +static void +gst_hip_pool_allocator_release_memory (GstHipPoolAllocator * self, + GstMemory * mem) +{ + auto priv = self->priv; + + GST_LOG_OBJECT (self, "Released memory %p", mem); + + GST_MINI_OBJECT_CAST (mem)->dispose = nullptr; + mem->allocator = (GstAllocator *) gst_object_ref (_hip_memory_allocator); + + /* keep it around in our queue */ + priv->queue.push (mem); + priv->outstanding--; + if (priv->outstanding == 0 && priv->flushing) + gst_hip_pool_allocator_stop (self); + priv->cond.notify_all (); + priv->lock.unlock (); + + gst_object_unref (self); +} + +static gboolean +gst_hip_memory_release (GstMiniObject * obj) +{ + GstMemory *mem = GST_MEMORY_CAST 
(obj); + + g_assert (mem->allocator); + + if (!GST_IS_HIP_POOL_ALLOCATOR (mem->allocator)) { + GST_LOG_OBJECT (mem->allocator, "Not our memory, free"); + return TRUE; + } + + auto self = GST_HIP_POOL_ALLOCATOR (mem->allocator); + auto priv = self->priv; + + priv->lock.lock (); + /* return the memory to the allocator */ + gst_memory_ref (mem); + gst_hip_pool_allocator_release_memory (self, mem); + + return FALSE; +} + +static GstFlowReturn +gst_hip_pool_allocator_alloc (GstHipPoolAllocator * self, GstMemory ** mem) +{ + auto priv = self->priv; + + auto new_mem = gst_hip_allocator_alloc_internal (_hip_memory_allocator, + self->device, &self->info, self->info.stride[0], priv->alloc_height, + gst_hip_device_get_stream (self->device)); + + if (!new_mem) { + GST_ERROR_OBJECT (self, "Failed to allocate new memory"); + return GST_FLOW_ERROR; + } + + priv->cur_mems++; + *mem = new_mem; + + return GST_FLOW_OK; +} + +static GstFlowReturn +gst_hip_pool_allocator_acquire_memory_internal (GstHipPoolAllocator * self, + GstMemory ** memory, std::unique_lock < std::mutex > &lk) +{ + auto priv = self->priv; + GstFlowReturn ret = GST_FLOW_ERROR; + + do { + if (priv->flushing) { + GST_DEBUG_OBJECT (self, "we are flushing"); + return GST_FLOW_FLUSHING; + } + + if (!priv->queue.empty ()) { + *memory = priv->queue.front (); + priv->queue.pop (); + GST_LOG_OBJECT (self, "acquired memory %p", *memory); + return GST_FLOW_OK; + } + + /* no memory, try to allocate some more */ + GST_LOG_OBJECT (self, "no memory, trying to allocate"); + ret = gst_hip_pool_allocator_alloc (self, memory); + if (ret == GST_FLOW_OK) + return ret; + + /* something went wrong, return error */ + if (ret != GST_FLOW_EOS) + break; + + GST_LOG_OBJECT (self, "waiting for free memory or flushing"); + priv->cond.wait (lk); + } while (TRUE); + + return ret; +} + +/** + * gst_hip_pool_allocator_new: + * @device: a #GstHipDevice + * @info: a #GstVideoInfo + * + * Creates a new #GstHipPoolAllocator instance + * + * Returns:
(transfer full): a #GstHipPoolAllocator + * + * Since: 1.28 + */ +GstHipPoolAllocator * +gst_hip_pool_allocator_new (GstHipDevice * device, const GstVideoInfo * info) +{ + g_return_val_if_fail (GST_IS_HIP_DEVICE (device), nullptr); + g_return_val_if_fail (info, nullptr); + + auto self = (GstHipPoolAllocator *) g_object_new (GST_TYPE_HIP_POOL_ALLOCATOR, + nullptr); + gst_object_ref_sink (self); + + self->device = (GstHipDevice *) gst_object_ref (device); + self->info = *info; + + self->priv->alloc_height = gst_hip_allocator_calculate_alloc_height (info); + + return self; +} + +/** + * gst_hip_pool_allocator_acquire_memory: + * @allocator: a #GstHipPoolAllocator + * @memory: (out) (transfer full) (nullable): a #GstMemory + * + * Acquires a #GstMemory from @allocator. @memory should point to a memory + * location that can hold a pointer to the new #GstMemory. + * + * Returns: a #GstFlowReturn such as %GST_FLOW_FLUSHING when the allocator is + * inactive. + * + * Since: 1.28 + */ +GstFlowReturn +gst_hip_pool_allocator_acquire_memory (GstHipPoolAllocator * allocator, + GstMemory ** memory) +{ + g_return_val_if_fail (GST_IS_HIP_POOL_ALLOCATOR (allocator), GST_FLOW_ERROR); + g_return_val_if_fail (memory, GST_FLOW_ERROR); + GstFlowReturn ret; + + auto priv = allocator->priv; + + std::unique_lock < std::mutex > lk (priv->lock); + ret = gst_hip_pool_allocator_acquire_memory_internal (allocator, memory, lk); + + if (ret == GST_FLOW_OK) { + GstMemory *mem = *memory; + /* Replace default allocator with ours */ + gst_object_unref (mem->allocator); + mem->allocator = (GstAllocator *) gst_object_ref (allocator); + GST_MINI_OBJECT_CAST (mem)->dispose = gst_hip_memory_release; + priv->outstanding++; + } + + return ret; +}
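The pool logic above follows a fixed pattern: acquired memories carry a dispose hook that returns them to a free queue, an `outstanding` counter tracks handed-out memories, and when the allocator is deactivated while memories are still outstanding, teardown is deferred until the last release. That accounting can be sketched as a self-contained stand-in with no GStreamer or HIP dependency; every name below is illustrative, not part of the library:

```c
/* Minimal stand-in for the GstHipPoolAllocator bookkeeping: a free queue
 * plus an outstanding counter, with teardown deferred while flushing.
 * Illustrative only -- not the real allocator. */
#include <assert.h>

#define POOL_CAP 8

typedef struct {
  int free_stack[POOL_CAP];   /* "memories" currently sitting in the pool */
  int n_free;
  int outstanding;            /* memories handed out to callers */
  int flushing;               /* set when the pool is deactivated */
  int stopped;                /* set once teardown actually ran */
} Pool;

static void pool_init (Pool *p) {
  p->n_free = 0; p->outstanding = 0; p->flushing = 0; p->stopped = 0;
}

/* acquire: refuse while flushing; reuse a pooled memory or pretend-allocate */
static int pool_acquire (Pool *p, int *mem) {
  if (p->flushing)
    return -1;                          /* like GST_FLOW_FLUSHING */
  if (p->n_free > 0)
    *mem = p->free_stack[--p->n_free];
  else
    *mem = p->outstanding + p->n_free;  /* pretend-allocate a fresh one */
  p->outstanding++;
  return 0;                             /* like GST_FLOW_OK */
}

/* release: return to the queue; last release while flushing stops the pool */
static void pool_release (Pool *p, int mem) {
  p->free_stack[p->n_free++] = mem;
  p->outstanding--;
  if (p->flushing && p->outstanding == 0)
    p->stopped = 1;                     /* like gst_hip_pool_allocator_stop() */
}

static void pool_set_active (Pool *p, int active) {
  if (active) {
    p->flushing = 0;
  } else {
    p->flushing = 1;
    if (p->outstanding == 0)
      p->stopped = 1;                   /* nothing outstanding: stop now */
  }
}
```

This mirrors why `gst_hip_pool_allocator_set_active (FALSE)` alone cannot free everything: memories still held by downstream elements only come back through their dispose hook, and only the final release may clear the queue.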
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/gsthipmemory.h
Added
@@ -0,0 +1,191 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/gst.h> +#include <gst/video/video.h> +#include <gst/hip/gsthip_fwd.h> + +G_BEGIN_DECLS + +#define GST_HIP_MEMORY_CAST(obj) ((GstHipMemory *)obj) + +#define GST_TYPE_HIP_ALLOCATOR (gst_hip_allocator_get_type()) +#define GST_HIP_ALLOCATOR(obj) (G_TYPE_CHECK_INSTANCE_CAST((obj), GST_TYPE_HIP_ALLOCATOR, GstHipAllocator)) +#define GST_HIP_ALLOCATOR_CLASS(klass) (G_TYPE_CHECK_CLASS_CAST((klass), GST_TYPE_HIP_ALLOCATOR, GstHipAllocatorClass)) +#define GST_IS_HIP_ALLOCATOR(obj) (G_TYPE_CHECK_INSTANCE_TYPE((obj), GST_TYPE_HIP_ALLOCATOR)) +#define GST_IS_HIP_ALLOCATOR_CLASS(klass) (G_TYPE_CHECK_CLASS_TYPE((klass), GST_TYPE_HIP_ALLOCATOR)) +#define GST_HIP_ALLOCATOR_GET_CLASS(obj) (G_TYPE_INSTANCE_GET_CLASS((obj), GST_TYPE_HIP_ALLOCATOR, GstHipAllocatorClass)) +#define GST_HIP_ALLOCATOR_CAST(obj) ((GstHipAllocator *)obj) + +#define GST_TYPE_HIP_POOL_ALLOCATOR (gst_hip_pool_allocator_get_type()) +#define GST_HIP_POOL_ALLOCATOR(obj) (G_TYPE_CHECK_INSTANCE_CAST((obj), GST_TYPE_HIP_POOL_ALLOCATOR, GstHipPoolAllocator)) +#define GST_HIP_POOL_ALLOCATOR_CLASS(klass) (G_TYPE_CHECK_CLASS_CAST((klass), 
GST_TYPE_HIP_POOL_ALLOCATOR, GstHipPoolAllocatorClass)) +#define GST_IS_HIP_POOL_ALLOCATOR(obj) (G_TYPE_CHECK_INSTANCE_TYPE((obj), GST_TYPE_HIP_POOL_ALLOCATOR)) +#define GST_IS_HIP_POOL_ALLOCATOR_CLASS(klass) (G_TYPE_CHECK_CLASS_TYPE((klass), GST_TYPE_HIP_POOL_ALLOCATOR)) +#define GST_HIP_POOL_ALLOCATOR_GET_CLASS(obj) (G_TYPE_INSTANCE_GET_CLASS((obj), GST_TYPE_HIP_POOL_ALLOCATOR, GstHipPoolAllocatorClass)) +#define GST_HIP_POOL_ALLOCATOR_CAST(obj) ((GstHipPoolAllocator *)obj) + +#define GST_HIP_MEMORY_NAME "HIPMemory" +#define GST_CAPS_FEATURE_MEMORY_HIP_MEMORY "memory:HIPMemory" +#define GST_MAP_HIP ((GstMapFlags) (GST_MAP_FLAG_LAST << 1)) +#define GST_MAP_READ_HIP ((GstMapFlags) (GST_MAP_READ | GST_MAP_HIP)) +#define GST_MAP_WRITE_HIP ((GstMapFlags) (GST_MAP_WRITE | GST_MAP_HIP)) + +typedef enum +{ + GST_HIP_MEMORY_TRANSFER_NEED_DOWNLOAD = (GST_MEMORY_FLAG_LAST << 0), + GST_HIP_MEMORY_TRANSFER_NEED_UPLOAD = (GST_MEMORY_FLAG_LAST << 1) +} GstHipMemoryTransfer; + +/** + * GstHipMemory: + * + * Opaque GstHipMemory struct + * + * Since: 1.28 + */ +struct _GstHipMemory +{ + GstMemory mem; + + /*< public >*/ + GstHipDevice *device; + GstVideoInfo info; + + /*< private >*/ + GstHipMemoryPrivate *priv; + gpointer _gst_reserved[GST_PADDING]; +}; + +GST_HIP_API +gboolean gst_is_hip_memory (GstMemory * mem); + +GST_HIP_API +gboolean gst_hip_memory_get_texture (GstHipMemory * mem, + guint plane, + HIPfilter_mode filter_mode, + HIPaddress_mode address_mode, + hipTextureObject_t * texture); + +GST_HIP_API +GstHipStream * gst_hip_memory_get_stream (GstHipMemory * mem); + +GST_HIP_API +void gst_hip_memory_set_event (GstHipMemory * mem, + GstHipEvent * event); + +GST_HIP_API +void gst_hip_memory_sync (GstHipMemory * mem); + +/** + * GstHipAllocator: + * + * Opaque GstHipAllocator struct + * + * Since: 1.28 + */ +struct _GstHipAllocator +{ + GstAllocator allocator; + + /*< private >*/ + GstHipAllocatorPrivate *priv; + + gpointer _gst_reserved[GST_PADDING]; +}; + +/** + *
GstHipAllocatorClass: + * + * Opaque GstHipAllocatorClass struct + * + * Since: 1.28 + */ +struct _GstHipAllocatorClass +{ + GstAllocatorClass allocator_class; + + gboolean (*set_active) (GstHipAllocator * allocator, + gboolean active); + + /*< private >*/ + gpointer _gst_reserved[GST_PADDING_LARGE]; +}; + +GST_HIP_API +GType gst_hip_allocator_get_type (void); + +GST_HIP_API +GstMemory * gst_hip_allocator_alloc (GstHipAllocator * allocator, + GstHipDevice * device, + const GstVideoInfo * info); + +GST_HIP_API +gboolean gst_hip_allocator_set_active (GstHipAllocator * allocator, + gboolean active); + +/** + * GstHipPoolAllocator: + * + * Opaque GstHipPoolAllocator struct + * + * Since: 1.28 + */ +struct _GstHipPoolAllocator +{ + GstHipAllocator parent; + + GstHipDevice *device; + GstVideoInfo info; + + /*< private >*/ + GstHipPoolAllocatorPrivate *priv; + gpointer _gst_reserved[GST_PADDING]; +}; + +/** + * GstHipPoolAllocatorClass: + * + * Opaque GstHipPoolAllocatorClass struct + * + * Since: 1.28 + */ +struct _GstHipPoolAllocatorClass +{ + GstHipAllocatorClass parent_class; + + /*< private >*/ + gpointer _gst_reserved[GST_PADDING]; +}; + +GST_HIP_API +GType gst_hip_pool_allocator_get_type (void); + +GST_HIP_API +GstHipPoolAllocator * gst_hip_pool_allocator_new (GstHipDevice * device, + const GstVideoInfo * info); + +GST_HIP_API +GstFlowReturn gst_hip_pool_allocator_acquire_memory (GstHipPoolAllocator * allocator, + GstMemory ** memory); + +G_END_DECLS +
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/gsthiprtc.cpp
Added
@@ -0,0 +1,432 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gsthip.h" +#include <hip/hiprtc.h> +#include <mutex> +#include <vector> +#include <string> +#include <gmodule.h> +#include <string.h> +#include "gsthiputils-private.h" + +#ifndef GST_DISABLE_GST_DEBUG +#define GST_CAT_DEFAULT ensure_debug_category() +static GstDebugCategory * +ensure_debug_category (void) +{ + static GstDebugCategory *cat = nullptr; + static std::once_flag once; + + std::call_once (once, [&] { + cat = _gst_debug_category_new ("hiprtc", 0, "hiprtc"); + }); + + return cat; +} +#endif + +#define LOAD_SYMBOL(name) G_STMT_START { \ + if (!g_module_symbol (module, G_STRINGIFY (name), (gpointer *) &table->name)) { \ + GST_ERROR ("Failed to load '%s', %s", G_STRINGIFY (name), g_module_error()); \ + g_module_close (module); \ + return; \ + } \ +} G_STMT_END; + +/* *INDENT-OFF* */ +struct GstHipRtcFuncTableAmd +{ + gboolean loaded = FALSE; + + hiprtcResult (*hiprtcCreateProgram) (hiprtcProgram * prog, + const char *src, + const char *name, + int numHeaders, const char **headers, const char **includeNames); + hiprtcResult (*hiprtcCompileProgram)
(hiprtcProgram prog, + int numOptions, const char **options); + hiprtcResult (*hiprtcGetProgramLog) (hiprtcProgram prog, char *log); + hiprtcResult (*hiprtcGetProgramLogSize) (hiprtcProgram prog, + size_t *logSizeRet); + hiprtcResult (*hiprtcGetCodeSize) (hiprtcProgram prog, size_t *codeSizeRet); + hiprtcResult (*hiprtcGetCode) (hiprtcProgram prog, char *code); + hiprtcResult (*hiprtcDestroyProgram) (hiprtcProgram * prog); +}; + +typedef gpointer nvrtcProgram; + +typedef enum { + NVRTC_SUCCESS = 0, +} nvrtcResult; + + +struct GstHipRtcFuncTableNvidia +{ + gboolean loaded = FALSE; + + nvrtcResult (*nvrtcCompileProgram) (nvrtcProgram prog, int numOptions, + const char **options); + nvrtcResult (*nvrtcCreateProgram) (nvrtcProgram * prog, const char *src, + const char *name, int numHeaders, const char **headers, + const char **includeNames); + nvrtcResult (*nvrtcDestroyProgram) (nvrtcProgram * prog); + nvrtcResult (*nvrtcGetPTX) (nvrtcProgram prog, char *ptx); + nvrtcResult (*nvrtcGetPTXSize) (nvrtcProgram prog, size_t * ptxSizeRet); + nvrtcResult (*nvrtcGetProgramLog) (nvrtcProgram prog, char *log); + nvrtcResult (*nvrtcGetProgramLogSize) (nvrtcProgram prog, + size_t * logSizeRet); +}; + +/* *INDENT-ON* */ + +static GstHipRtcFuncTableAmd amd_ftable = { }; +static GstHipRtcFuncTableNvidia nvidia_ftable = { }; + +static void +load_rtc_amd_func_table (void) +{ + GModule *module = nullptr; + auto module_name = g_getenv ("GST_HIP_HIPRTC_LIBNAME"); + if (module_name) + module = g_module_open (module_name, G_MODULE_BIND_LAZY); + + if (!module) { +#ifndef G_OS_WIN32 + // Keep this logic in sync with gsthiploader.cpp to ensure that the order + // of searching is the same, and both libs are loaded from the same place + module = g_module_open ("libhiprtc.so.7", G_MODULE_BIND_LAZY); + if (module) { + GST_INFO ("Loaded libhiprtc.so.7"); + } else { + module = g_module_open ("libhiprtc.so.6", G_MODULE_BIND_LAZY); + if (module) + GST_INFO ("Loaded libhiprtc.so.6"); + } + + if 
(!module) + module = load_hiplib_from_root ("/opt/rocm", "lib", "libhiprtc.so.", ""); +#else + int version = 0; + auto hip_ret = HipRuntimeGetVersion (GST_HIP_VENDOR_AMD, &version); + if (hip_ret != hipSuccess) + return; + + int major = version / 10000000; + int minor = (version - (major * 10000000)) / 100000; + auto lib_name = g_strdup_printf ("hiprtc%02d%02d.dll", major, minor); + /* Prefer hip dll in SDK */ + auto hip_root = g_getenv ("HIP_PATH"); + if (hip_root) { + auto lib_path = g_build_filename (hip_root, "bin", lib_name, nullptr); + module = g_module_open (lib_path, G_MODULE_BIND_LAZY); + g_free (lib_path); + } + + if (!module) + module = g_module_open (lib_name, G_MODULE_BIND_LAZY); + + g_free (lib_name); +#endif + } + + if (!module) { + GST_INFO ("Couldn't open HIP RTC library"); + return; + } + + auto table = &amd_ftable; + LOAD_SYMBOL (hiprtcCreateProgram); + LOAD_SYMBOL (hiprtcCompileProgram); + LOAD_SYMBOL (hiprtcGetProgramLog); + LOAD_SYMBOL (hiprtcGetProgramLogSize); + LOAD_SYMBOL (hiprtcGetCodeSize); + LOAD_SYMBOL (hiprtcGetCode); + LOAD_SYMBOL (hiprtcDestroyProgram); + + table->loaded = TRUE; +} + +/* *INDENT-OFF* */ +static gboolean +gst_hip_rtc_load_library_amd (void) +{ + static std::once_flag once; + std::call_once (once, [] () { + if (!gst_hip_load_library (GST_HIP_VENDOR_AMD)) + return; + + load_rtc_amd_func_table (); + }); + + return amd_ftable.loaded; +} +/* *INDENT-ON* */ + +static void +load_rtc_nvidia_func_table (void) +{ + GModule *module = nullptr; + auto module_name = g_getenv ("GST_HIP_NVRTC_LIBNAME"); + if (module_name) + module = g_module_open (module_name, G_MODULE_BIND_LAZY); + + if (!module) { +#ifndef G_OS_WIN32 + module = g_module_open ("libnvrtc.so", G_MODULE_BIND_LAZY); +#else + int version = 0; + auto hip_ret = HipDriverGetVersion (GST_HIP_VENDOR_NVIDIA, &version); + if (hip_ret != hipSuccess) + return; + + int major = version / 1000; + int minor = (version % 1000) / 10; + auto lib_name = g_strdup_printf
("nvrtc64_%d%d_0.dll", major, minor); + module = g_module_open (lib_name, G_MODULE_BIND_LAZY); + g_free (lib_name); + + if (!module) { + lib_name = g_strdup_printf ("nvrtc64_%d0_0.dll", major); + module = g_module_open (lib_name, G_MODULE_BIND_LAZY); + g_free (lib_name); + } + + if (!module) { + auto cuda_root = g_getenv ("CUDA_PATH"); + if (cuda_root) { + auto path = g_build_path (G_DIR_SEPARATOR_S, cuda_root, "bin", nullptr); + auto dir = g_dir_open (path, 0, nullptr); + if (dir) { + const gchar *name; + while ((name = g_dir_read_name (dir))) { + if (g_str_has_prefix (name, "nvrtc64_") && + g_str_has_suffix (name, "_0.dll")) { + auto lib_path = g_build_filename (path, name, nullptr); + module = g_module_open (lib_path, G_MODULE_BIND_LAZY); + g_free (lib_path); + break; + } + } + + g_dir_close (dir); + } + g_free (path); + } + } +#endif + } + + if (!module) { + GST_INFO ("Couldn't open NVRTC library"); + return; + } + + auto table = &nvidia_ftable; + LOAD_SYMBOL (nvrtcCompileProgram); + LOAD_SYMBOL (nvrtcCreateProgram); + LOAD_SYMBOL (nvrtcDestroyProgram); + LOAD_SYMBOL (nvrtcGetPTX); + LOAD_SYMBOL (nvrtcGetPTXSize); + LOAD_SYMBOL (nvrtcGetProgramLog); + LOAD_SYMBOL (nvrtcGetProgramLogSize); + + table->loaded = TRUE; +} + +/* *INDENT-OFF* */ +static gboolean +gst_hip_rtc_load_library_nvidia (void) +{ + static std::once_flag once; + std::call_once (once, [] () { + if (!gst_hip_load_library (GST_HIP_VENDOR_NVIDIA)) + return; + + load_rtc_nvidia_func_table (); + }); + + return nvidia_ftable.loaded; +} +/* *INDENT-ON* */ + +/** + * gst_hip_rtc_load_library: + * @vendor: a #GstHipVendor + * + * Opens @vendor specific runtime compiler libraries + * + * Returns: %TRUE if succeeded + * + * Since: 1.28 + */ +gboolean +gst_hip_rtc_load_library (GstHipVendor vendor) +{ + switch (vendor) { + case GST_HIP_VENDOR_AMD: + return gst_hip_rtc_load_library_amd (); + case GST_HIP_VENDOR_NVIDIA: + return gst_hip_rtc_load_library_nvidia (); + case GST_HIP_VENDOR_UNKNOWN: + if
(gst_hip_rtc_load_library_amd () || gst_hip_rtc_load_library_nvidia ()) + return TRUE; + break; + } + + return FALSE; +} + +static gchar * +gst_hip_rtc_compile_amd (GstHipDevice * device, + const gchar * source, const gchar ** options, guint num_options) +{ + hiprtcProgram prog; + auto rtc_ret = amd_ftable.hiprtcCreateProgram (&prog, source, "program.cpp", + 0, nullptr, nullptr); + + if (rtc_ret != HIPRTC_SUCCESS) { + GST_ERROR_OBJECT (device, "Couldn't create program, ret: %d", rtc_ret); + return nullptr; + } + + rtc_ret = amd_ftable.hiprtcCompileProgram (prog, num_options, options); + if (rtc_ret != HIPRTC_SUCCESS) { + size_t log_size = 0; + gchar *err_str = nullptr; + rtc_ret = amd_ftable.hiprtcGetProgramLogSize (prog, &log_size); + if (rtc_ret == HIPRTC_SUCCESS) { + err_str = (gchar *) g_malloc0 (log_size); + err_str[log_size - 1] = '\0'; + amd_ftable.hiprtcGetProgramLog (prog, err_str); + } + + GST_ERROR_OBJECT (device, "Couldn't compile program, ret: %d (%s)", + rtc_ret, GST_STR_NULL (err_str)); + g_free (err_str); + return nullptr; + } + + size_t code_size; + rtc_ret = amd_ftable.hiprtcGetCodeSize (prog, &code_size); + if (rtc_ret != HIPRTC_SUCCESS) { + GST_ERROR_OBJECT (device, "Couldn't get code size, ret: %d", rtc_ret); + return nullptr; + } + + auto code = (gchar *) g_malloc0 (code_size); + rtc_ret = amd_ftable.hiprtcGetCode (prog, code); + + if (rtc_ret != HIPRTC_SUCCESS) { + GST_ERROR_OBJECT (device, "Couldn't get code, ret: %d", rtc_ret); + g_free (code); + return nullptr; + } + + amd_ftable.hiprtcDestroyProgram (&prog); + + return code; +} + +static gchar * +gst_hip_rtc_compile_nvidia (GstHipDevice * device, + const gchar * source, const gchar ** options, guint num_options) +{ + nvrtcProgram prog; + auto rtc_ret = nvidia_ftable.nvrtcCreateProgram (&prog, source, "program.cpp", + 0, nullptr, nullptr); + + if (rtc_ret != NVRTC_SUCCESS) { + GST_ERROR_OBJECT (device, "Couldn't create program, ret: %d", rtc_ret); + return nullptr; + } + + rtc_ret =
nvidia_ftable.nvrtcCompileProgram (prog, num_options, options); + if (rtc_ret != NVRTC_SUCCESS) { + size_t log_size = 0; + gchar *err_str = nullptr; + rtc_ret = nvidia_ftable.nvrtcGetProgramLogSize (prog, &log_size); + if (rtc_ret == NVRTC_SUCCESS) { + err_str = (gchar *) g_malloc0 (log_size); + err_str[log_size - 1] = '\0'; + nvidia_ftable.nvrtcGetProgramLog (prog, err_str); + } + + GST_ERROR_OBJECT (device, "Couldn't compile program, ret: %d (%s)", + rtc_ret, GST_STR_NULL (err_str)); + g_free (err_str); + return nullptr; + } + + size_t code_size; + rtc_ret = nvidia_ftable.nvrtcGetPTXSize (prog, &code_size); + if (rtc_ret != NVRTC_SUCCESS) { + GST_ERROR_OBJECT (device, "Couldn't get code size, ret: %d", rtc_ret); + return nullptr; + } + + auto code = (gchar *) g_malloc0 (code_size); + rtc_ret = nvidia_ftable.nvrtcGetPTX (prog, code); + + if (rtc_ret != NVRTC_SUCCESS) { + GST_ERROR_OBJECT (device, "Couldn't get code, ret: %d", rtc_ret); + g_free (code); + return nullptr; + } + + nvidia_ftable.nvrtcDestroyProgram (&prog); + + return code; +} + +/** + * gst_hip_rtc_compile: + * @device: a #GstHipDevice + * @source: HIP kernel source + * @options: array of compile option string + * @num_options: option array size + * + * Compiles @source with given compile options + * + * Returns: (transfer full) (nullable): Compiled kernel blob or %NULL if failed. + * + * Since: 1.28 + */ +gchar * +gst_hip_rtc_compile (GstHipDevice * device, + const gchar * source, const gchar ** options, guint num_options) +{ + auto vendor = gst_hip_device_get_vendor (device); + if (!gst_hip_rtc_load_library (vendor)) + return nullptr; + + switch (vendor) { + case GST_HIP_VENDOR_AMD: + return gst_hip_rtc_compile_amd (device, source, options, num_options); + case GST_HIP_VENDOR_NVIDIA: + return gst_hip_rtc_compile_nvidia (device, source, options, num_options); + default: + break; + } + + return nullptr; +}
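gst_hip_rtc_compile() dispatches to hipRTC on AMD and NVRTC on NVIDIA and returns a vendor-specific code blob (AMD code object or NVIDIA PTX). A hedged caller sketch, using only the public functions declared in this patch; the kernel source and the `-O3` option string here are illustrative examples, not values taken from the library:

```c
/* Hypothetical caller sketch -- assumes a valid GstHipDevice obtained
 * elsewhere and the public <gst/hip/gsthip.h> header. */
static const gchar *kernel_src =
    "extern \"C\" __global__ void fill (unsigned char *dst) { "
    "  dst[threadIdx.x] = 0xff; "
    "}";

static gchar *
compile_fill_kernel (GstHipDevice * device)
{
  const gchar *options[] = { "-O3" };   /* illustrative option only */

  /* NULL on failure; the returned blob is owned by the caller (g_free) */
  return gst_hip_rtc_compile (device, kernel_src, options,
      G_N_ELEMENTS (options));
}
```

Note that gst_hip_rtc_compile() itself loads the right runtime-compiler library on first use, so callers do not need to call gst_hip_rtc_load_library() beforehand.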
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/gsthiprtc.h
Added
@@ -0,0 +1,38 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/gst.h> +#include <gst/hip/hip-prelude.h> +#include <gst/hip/gsthip_fwd.h> +#include <gst/hip/gsthip-enums.h> + +G_BEGIN_DECLS + +GST_HIP_API +gboolean gst_hip_rtc_load_library (GstHipVendor vendor); + +GST_HIP_API +gchar * gst_hip_rtc_compile (GstHipDevice * device, + const gchar * source, + const gchar ** options, + guint num_options); + +G_END_DECLS
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/gsthipstream.cpp
Added
@@ -0,0 +1,252 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gsthip-config.h" +#include "gsthip.h" +#include <mutex> + +#ifndef GST_DISABLE_GST_DEBUG +#define GST_CAT_DEFAULT ensure_debug_category() +static GstDebugCategory * +ensure_debug_category (void) +{ + static GstDebugCategory *cat = nullptr; + static std::once_flag once; + + std::call_once (once, [&] { + cat = _gst_debug_category_new ("hipstream", 0, "hipstream"); + }); + + return cat; +} +#endif + +/* *INDENT-OFF* */ +struct _GstHipStream : public GstMiniObject +{ + ~_GstHipStream () + { + if (handle) { + auto hip_ret = HipSetDevice (vendor, device_id); + if (gst_hip_result (hip_ret, vendor)) + HipStreamDestroy (vendor, handle); + } + + gst_clear_object (&event_pool); + } + + hipStream_t handle = nullptr; + GstHipEventPool *event_pool = nullptr; + GstHipVendor vendor; + guint device_id; +}; +/* *INDENT-ON* */ + +static void +gst_hip_stream_free (GstHipStream * stream) +{ + delete stream; +} + +GST_DEFINE_MINI_OBJECT_TYPE (GstHipStream, gst_hip_stream); + +/** + * gst_hip_stream_new: + * @vendor: a #GstHipVendor + * @device_id: device identifier + * + * Creates a
new #GstHipStream object + * + * Returns: (transfer full) (nullable): a #GstHipStream object or %NULL if failed + * + * Since: 1.28 + */ +GstHipStream * +gst_hip_stream_new (GstHipVendor vendor, guint device_id) +{ + g_return_val_if_fail (vendor != GST_HIP_VENDOR_UNKNOWN, nullptr); + + auto hip_ret = HipSetDevice (vendor, device_id); + if (!gst_hip_result (hip_ret, vendor)) { + GST_ERROR ("Couldn't set device"); + return nullptr; + } + + hipStream_t handle; + hip_ret = HipStreamCreate (vendor, &handle); + if (!gst_hip_result (hip_ret, vendor)) { + GST_ERROR ("Couldn't create stream"); + return nullptr; + } + + auto stream = new GstHipStream (); + stream->handle = handle; + stream->vendor = vendor; + stream->device_id = device_id; + stream->event_pool = gst_hip_event_pool_new (vendor, device_id); + + gst_mini_object_init (stream, 0, gst_hip_stream_get_type (), + nullptr, nullptr, (GstMiniObjectFreeFunction) gst_hip_stream_free); + + return stream; +} + +/** + * gst_hip_stream_get_vendor: + * @stream: a #GstHipStream + * + * Gets device vendor of @stream object + * + * Returns: #GstHipVendor + * + * Since: 1.28 + */ +GstHipVendor +gst_hip_stream_get_vendor (GstHipStream * stream) +{ + g_return_val_if_fail (stream, GST_HIP_VENDOR_UNKNOWN); + + return stream->vendor; +} + +/** + * gst_hip_stream_get_device_id: + * @stream: a #GstHipStream + * + * Gets numeric device identifier of @stream object + * + * Returns: device identifier + * + * Since: 1.28 + */ +guint +gst_hip_stream_get_device_id (GstHipStream * stream) +{ + g_return_val_if_fail (stream, G_MAXUINT); + + return stream->device_id; +} + +/** + * gst_hip_stream_get_handle: + * @stream: (allow-none): a #GstHipStream + * + * Gets hipStream_t handle owned by @stream + * + * Returns: (type gpointer) (transfer none): hipStream_t handle + * + * Since: 1.28 + */ +hipStream_t +gst_hip_stream_get_handle (GstHipStream * stream) +{ + if (!stream) + return nullptr; + + return stream->handle; +} + +/** + *
gst_hip_stream_record_event: + * @stream: a #GstHipStream + * @event: (out) (transfer full) (nullable): a location to store #GstHipEvent + * + * Records currently scheduled operations in @stream to #GstHipEvent + * + * Returns: %TRUE if succeeded + * + * Since: 1.28 + */ +gboolean +gst_hip_stream_record_event (GstHipStream * stream, GstHipEvent ** event) +{ + g_return_val_if_fail (stream, FALSE); + g_return_val_if_fail (event, FALSE); + + auto hip_ret = HipSetDevice (stream->vendor, stream->device_id); + if (!gst_hip_result (hip_ret, stream->vendor)) { + GST_ERROR ("Couldn't set device"); + return FALSE; + } + + GstHipEvent *new_event; + if (!gst_hip_event_pool_acquire (stream->event_pool, &new_event)) { + GST_ERROR ("Couldn't acquire event"); + return FALSE; + } + + hip_ret = gst_hip_event_record (new_event, stream->handle); + if (!gst_hip_result (hip_ret, stream->vendor)) { + GST_ERROR ("Couldn't record event"); + gst_hip_event_unref (new_event); + return FALSE; + } + + *event = new_event; + + return TRUE; +} + +/** + * gst_hip_stream_ref: + * @stream: a #GstHipStream + * + * Increments the reference count on @stream + * + * Returns: (transfer full): a pointer to @stream + * + * Since: 1.28 + */ +GstHipStream * +gst_hip_stream_ref (GstHipStream * stream) +{ + return (GstHipStream *) gst_mini_object_ref (stream); +} + +/** + * gst_hip_stream_unref: + * @stream: a #GstHipStream + * + * Decrements the reference count on @stream + * + * Since: 1.28 + */ +void +gst_hip_stream_unref (GstHipStream * stream) +{ + return gst_mini_object_unref (stream); +} + +/** + * gst_clear_hip_stream: (skip) + * @stream: a pointer to a #GstHipStream + * + * Clears a reference to the @stream + * + * Since: 1.28 + */ +void +gst_clear_hip_stream (GstHipStream ** stream) +{ + gst_clear_mini_object (stream); +}
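The stream API above pairs naturally with the event pool: work is queued asynchronously on the stream handle, then a recorded event marks the point a consumer must wait for. A hedged usage sketch, assuming only the public functions declared in this patch (the surrounding function name and the commented-out async work are illustrative):

```c
/* Hypothetical caller sketch: record pending work on a stream and
 * release the event once synchronization is done. */
static void
sync_after_async_work (GstHipVendor vendor, guint device_id)
{
  GstHipStream *stream = gst_hip_stream_new (vendor, device_id);
  if (!stream)
    return;

  /* ... enqueue async HIP work on gst_hip_stream_get_handle (stream) ... */

  GstHipEvent *event = NULL;
  if (gst_hip_stream_record_event (stream, &event)) {
    /* Alternatively, attach the event to a GstHipMemory via
     * gst_hip_memory_set_event() so readers synchronize lazily */
    gst_hip_event_unref (event);
  }

  gst_hip_stream_unref (stream);
}
```

Attaching the recorded event to a memory (rather than synchronizing immediately) is what lets gst_hip_memory_sync() defer the wait until the data is actually mapped.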
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/gsthipstream.h
Added
@@ -0,0 +1,58 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/gst.h> +#include <gst/hip/gsthip_fwd.h> +#include <gst/hip/gsthip-enums.h> + +G_BEGIN_DECLS + +GST_HIP_API +GType gst_hip_stream_get_type (void); + +GST_HIP_API +GstHipStream * gst_hip_stream_new (GstHipVendor vendor, + guint device_id); + +GST_HIP_API +GstHipVendor gst_hip_stream_get_vendor (GstHipStream * stream); + +GST_HIP_API +guint gst_hip_stream_get_device_id (GstHipStream * stream); + +GST_HIP_API +hipStream_t gst_hip_stream_get_handle (GstHipStream * stream); + +GST_HIP_API +gboolean gst_hip_stream_record_event (GstHipStream * stream, + GstHipEvent ** event); + +GST_HIP_API +GstHipStream * gst_hip_stream_ref (GstHipStream * stream); + +GST_HIP_API +void gst_hip_stream_unref (GstHipStream * stream); + +GST_HIP_API +void gst_clear_hip_stream (GstHipStream ** stream); + +G_END_DECLS +
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/gsthiputils-private.h
Added
@@ -0,0 +1,33 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gmodule.h> +#include <gst/gst.h> + +G_BEGIN_DECLS + +GModule * load_hiplib_from_root (const char * hip_root, + const char * subdir, + const char * prefix, + const char * suffix); + +G_END_DECLS +
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/gsthiputils.cpp
Added
@@ -0,0 +1,343 @@
+/* GStreamer
+ * Copyright (C) 2025 Seungha Yang <seungha@centricular.com>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Library General Public
+ * License as published by the Free Software Foundation; either
+ * version 2 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Library General Public License for more details.
+ *
+ * You should have received a copy of the GNU Library General Public
+ * License along with this library; if not, write to the
+ * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
+ * Boston, MA 02110-1301, USA.
+ */
+
+#ifdef HAVE_CONFIG_H
+#include "config.h"
+#endif
+
+#include "gsthip.h"
+#include <mutex>
+#include <gmodule.h>
+#include "gsthiputils-private.h"
+
+#ifndef GST_DISABLE_GST_DEBUG
+#define GST_CAT_DEFAULT ensure_debug_category()
+static GstDebugCategory *
+ensure_debug_category (void)
+{
+  static GstDebugCategory *cat = nullptr;
+  static std::once_flag once;
+
+  std::call_once (once, [&] {
+    cat = _gst_debug_category_new ("hiputils", 0, "hiputils");
+  });
+
+  return cat;
+}
+#endif
+
+/*
+ * Note: this function's usage of g_dir_read_name() on Win32 is inefficient
+ * because of UTF16-UTF8 conversions, so it cannot be used in directories with
+ * lots of files like C:\Windows\System32. Should be changed to
+ * `FindFirstFileEx()` etc if that becomes needed.
+ */ +GModule * +load_hiplib_from_root (const char *hip_root, const char *subdir, + const char *prefix, const char *suffix) +{ + GModule *module = nullptr; + char *path = g_build_path (G_DIR_SEPARATOR_S, hip_root, subdir, nullptr); + GDir *dir = g_dir_open (path, 0, nullptr); + if (dir) { + const gchar *name; + while ((name = g_dir_read_name (dir))) { + if (g_str_has_prefix (name, prefix) && g_str_has_suffix (name, suffix)) { + char *lib_path = g_build_filename (path, name, nullptr); + module = g_module_open (lib_path, G_MODULE_BIND_LAZY); + GST_INFO ("Loaded %s", lib_path); + g_free (lib_path); + break; + } + } + g_dir_close (dir); + } + g_free (path); + return module; +} + +gboolean +_gst_hip_result (hipError_t result, GstHipVendor vendor, GstDebugCategory * cat, + const gchar * file, const gchar * function, gint line) +{ + if (result != hipSuccess) { +#ifndef GST_DISABLE_GST_DEBUG + if (vendor != GST_HIP_VENDOR_UNKNOWN) { + auto error_name = HipGetErrorName (vendor, result); + auto error_str = HipGetErrorString (vendor, result); + gst_debug_log (cat, GST_LEVEL_ERROR, file, function, line, + NULL, "HIP call failed: %s, %s", error_name, error_str); + } +#endif + return FALSE; + } + + return TRUE; +} + +static void +context_set_hip_device (GstContext * context, GstHipDevice * device) +{ + g_return_if_fail (context != nullptr); + + guint device_id; + GstHipVendor vendor; + g_object_get (device, "device-id", &device_id, "vendor", &vendor, nullptr); + + auto s = gst_context_writable_structure (context); + gst_structure_set (s, "device", GST_TYPE_HIP_DEVICE, device, + "vendor", GST_TYPE_HIP_VENDOR, vendor, + "device-id", G_TYPE_UINT, device_id, nullptr); +} + +static gboolean +pad_query (const GValue * item, GValue * value, gpointer user_data) +{ + GstPad *pad = (GstPad *) g_value_get_object (item); + GstQuery *query = (GstQuery *) user_data; + gboolean res; + + res = gst_pad_peer_query (pad, query); + if (res) { + g_value_set_boolean (value, TRUE); + return FALSE; + } 
+ + return TRUE; +} + +static gboolean +run_query (GstElement * element, GstQuery * query, GstPadDirection direction) +{ + GstIterator *it; + GstIteratorFoldFunction func = pad_query; + GValue res = G_VALUE_INIT; + + g_value_init (&res, G_TYPE_BOOLEAN); + g_value_set_boolean (&res, FALSE); + + /* Ask neighbor */ + if (direction == GST_PAD_SRC) + it = gst_element_iterate_src_pads (element); + else + it = gst_element_iterate_sink_pads (element); + + while (gst_iterator_fold (it, func, &res, query) == GST_ITERATOR_RESYNC) + gst_iterator_resync (it); + + gst_iterator_free (it); + + return g_value_get_boolean (&res); +} + +static void +run_hip_context_query (GstElement * element, GstHipDevice ** device) +{ + GstQuery *query; + GstContext *ctx = nullptr; + + query = gst_query_new_context (GST_HIP_DEVICE_CONTEXT_TYPE); + if (run_query (element, query, GST_PAD_SRC)) { + gst_query_parse_context (query, &ctx); + if (ctx) + gst_element_set_context (element, ctx); + } + + if (*device == nullptr && run_query (element, query, GST_PAD_SINK)) { + gst_query_parse_context (query, &ctx); + if (ctx) + gst_element_set_context (element, ctx); + } + + if (*device == nullptr) { + auto msg = gst_message_new_need_context (GST_OBJECT_CAST (element), + GST_HIP_DEVICE_CONTEXT_TYPE); + gst_element_post_message (element, msg); + } + + gst_query_unref (query); +} + +/** + * gst_hip_ensure_element_data: + * @element: the #GstElement running the query + * @vendor: a #GstHipVendor + * @device_id: preferred device-id, pass device_id >=0 when + * the device_id explicitly required. Otherwise, set -1. + * @device: (inout): the resulting #GstHipDevice + * + * Perform the steps necessary for retrieving a #GstHipDevice from the + * surrounding elements or from the application using the #GstContext mechanism. + * + * If the content of @device is not %NULL, then no #GstContext query is + * necessary for #GstHipDevice. 
+ * + * Returns: whether a #GstHipDevice exists in @device + * + * Since: 1.28 + */ +gboolean +gst_hip_ensure_element_data (GstElement * element, GstHipVendor vendor, + gint device_id, GstHipDevice ** device) +{ + if (*device) + return TRUE; + + run_hip_context_query (element, device); + if (*device) + return TRUE; + + guint target_device_id = 0; + if (device_id > 0) + target_device_id = device_id; + + *device = gst_hip_device_new (vendor, target_device_id); + + if (*device == nullptr) { + GST_ERROR_OBJECT (element, + "Couldn't create new device with device id %d", target_device_id); + return FALSE; + } else { + auto ctx = gst_context_new_hip_device (*device); + gst_element_set_context (element, ctx); + auto msg = gst_message_new_have_context (GST_OBJECT_CAST (element), ctx); + gst_element_post_message (GST_ELEMENT_CAST (element), msg); + } + + return TRUE; +} + +/** + * gst_hip_handle_set_context: + * @element: a #GstElement + * @context: a #GstContext + * @vendor: a #GstHipVendor + * @device_id: preferred device-id, pass device_id >=0 when + * the device_id explicitly required. Otherwise, set -1. + * @device: (inout) (transfer full): location of a #GstHipDevice + * + * Helper function for implementing #GstElementClass.set_context() in + * HIP capable elements. + * + * Retrieves the #GstHipDevice in @context and places the result in @device. 
+ * + * Returns: whether the @device could be set successfully + * + * Since: 1.28 + */ +gboolean +gst_hip_handle_set_context (GstElement * element, GstContext * context, + GstHipVendor vendor, gint device_id, GstHipDevice ** device) +{ + g_return_val_if_fail (GST_IS_ELEMENT (element), FALSE); + g_return_val_if_fail (device != nullptr, FALSE); + + if (!context) + return FALSE; + + auto context_type = gst_context_get_context_type (context); + if (g_strcmp0 (context_type, GST_HIP_DEVICE_CONTEXT_TYPE) == 0) { + GstHipDevice *other_device = nullptr; + guint other_idx = 0; + GstHipVendor other_vendor; + + /* If we had device already, will not replace it */ + if (*device) + return TRUE; + + auto s = gst_context_get_structure (context); + if (gst_structure_get (s, "device", GST_TYPE_HIP_DEVICE, &other_device, + "vendor", GST_TYPE_HIP_VENDOR, &other_vendor, + "device-id", G_TYPE_UINT, &other_idx, nullptr)) { + if ((device_id == -1 || (guint) device_id == other_idx) && + (vendor == GST_HIP_VENDOR_UNKNOWN || vendor == other_vendor)) { + *device = other_device; + return TRUE; + } + + gst_object_unref (other_device); + } + } + + return FALSE; +} + +/** + * gst_hip_handle_context_query: + * @element: a #GstElement + * @query: a #GstQuery of type %GST_QUERY_CONTEXT + * @device: (transfer none) (nullable): a #GstHipDevice + * + * Returns: Whether the @query was successfully responded to from the passed + * @context. 
+ * + * Since: 1.28 + */ +gboolean +gst_hip_handle_context_query (GstElement * element, GstQuery * query, + GstHipDevice * device) +{ + const gchar *context_type; + GstContext *context; + + g_return_val_if_fail (GST_IS_ELEMENT (element), FALSE); + g_return_val_if_fail (GST_IS_QUERY (query), FALSE); + + if (!GST_IS_HIP_DEVICE (device)) + return FALSE; + + gst_query_parse_context_type (query, &context_type); + if (g_strcmp0 (context_type, GST_HIP_DEVICE_CONTEXT_TYPE) != 0) + return FALSE; + + GstContext *old_ctx = nullptr; + gst_query_parse_context (query, &old_ctx); + if (old_ctx) + context = gst_context_copy (old_ctx); + else + context = gst_context_new (GST_HIP_DEVICE_CONTEXT_TYPE, TRUE); + + context_set_hip_device (context, device); + gst_query_set_context (query, context); + gst_context_unref (context); + + GST_DEBUG_OBJECT (element, "successfully set %" GST_PTR_FORMAT + " on %" GST_PTR_FORMAT, device, query); + + return TRUE; +} + +/** + * gst_context_new_hip_device: + * @device: (transfer none): a #GstHipDevice + * + * Returns: (transfer full): a new #GstContext embedding the @device + * + * Since: 1.28 + */ +GstContext * +gst_context_new_hip_device (GstHipDevice * device) +{ + g_return_val_if_fail (GST_HIP_DEVICE (device), nullptr); + + auto ctx = gst_context_new (GST_HIP_DEVICE_CONTEXT_TYPE, TRUE); + context_set_hip_device (ctx, device); + + return ctx; +}
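The three helpers above (`gst_hip_ensure_element_data()`, `gst_hip_handle_set_context()`, `gst_hip_handle_context_query()`) implement the standard GstContext sharing pattern. A HIP-capable element would typically wire them into its vfuncs like this — a sketch only; `GstMyHipFilter` and its `device` field are hypothetical names, and only the `gst_hip_*` calls come from this library:

```c
/* Hypothetical element glue showing the usual GstContext plumbing */
static void
gst_my_hip_filter_set_context (GstElement * element, GstContext * context)
{
  GstMyHipFilter *self = GST_MY_HIP_FILTER (element);

  /* Accept a device offered by the app or neighboring elements */
  gst_hip_handle_set_context (element, context,
      GST_HIP_VENDOR_UNKNOWN, -1, &self->device);

  GST_ELEMENT_CLASS (parent_class)->set_context (element, context);
}

static gboolean
gst_my_hip_filter_query (GstElement * element, GstQuery * query)
{
  GstMyHipFilter *self = GST_MY_HIP_FILTER (element);

  /* Answer context queries with our device, if we have one */
  if (GST_QUERY_TYPE (query) == GST_QUERY_CONTEXT &&
      gst_hip_handle_context_query (element, query, self->device))
    return TRUE;

  return GST_ELEMENT_CLASS (parent_class)->query (element, query);
}

static gboolean
gst_my_hip_filter_start (GstBaseTransform * trans)
{
  GstMyHipFilter *self = GST_MY_HIP_FILTER (trans);

  /* Query neighbors/app first; creates a device only as a fallback */
  return gst_hip_ensure_element_data (GST_ELEMENT (trans),
      GST_HIP_VENDOR_UNKNOWN, -1, &self->device);
}
```

The division of labor matters: `run_hip_context_query()` never assigns `*device` directly — it calls `gst_element_set_context()`, which re-enters the element's `set_context` vfunc, where `gst_hip_handle_set_context()` stores the device.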
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/gsthiputils.h
Added
@@ -0,0 +1,77 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gmodule.h> +#include <gst/gst.h> +#include <gst/hip/hip-prelude.h> +#include <gst/hip/gsthip_fwd.h> +#include <gst/hip/gsthip-enums.h> + +G_BEGIN_DECLS + +GST_HIP_API +gboolean _gst_hip_result (hipError_t result, + GstHipVendor vendor, + GstDebugCategory * cat, + const gchar * file, + const gchar * function, + gint line); + +/** + * gst_hip_result: + * @result: HIP device API return code `hipError_t` + * @vendor: a #GstHipVendor + * + * Returns: %TRUE if HIP device API call result is hipSuccess + * + * Since: 1.28 + */ +#ifndef GST_DISABLE_GST_DEBUG +#define gst_hip_result(result,vendor) \ +_gst_hip_result(result, vendor, GST_CAT_DEFAULT, __FILE__, GST_FUNCTION, __LINE__) +#else +#define gst_hip_result(result,vendor) \ +_gst_hip_result(result, vendor, NULL, __FILE__, GST_FUNCTION, __LINE__) +#endif /* GST_DISABLE_GST_DEBUG */ + +GST_HIP_API +gboolean gst_hip_ensure_element_data (GstElement * element, + GstHipVendor vendor, + gint device_id, + GstHipDevice ** device); + +GST_HIP_API +gboolean gst_hip_handle_set_context (GstElement * element, + GstContext * context, + GstHipVendor vendor, + gint 
device_id, + GstHipDevice ** device); + +GST_HIP_API +gboolean gst_hip_handle_context_query (GstElement * element, + GstQuery * query, + GstHipDevice * device); + +GST_HIP_API +GstContext * gst_context_new_hip_device (GstHipDevice * device); + +G_END_DECLS +
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/hip-gst-gl.h
Added
@@ -0,0 +1,44 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/gst.h> +#include <gst/hip/hip-prelude.h> +#include <gst/hip/gsthip-enums.h> +#include <hip/hip_runtime.h> +#include <hip/hip_gl_interop.h> + +G_BEGIN_DECLS + +GST_HIP_API +hipError_t HipGLGetDevices (GstHipVendor vendor, + unsigned int* pHipDeviceCount, + int* pHipDevices, + unsigned int hipDeviceCount, + hipGLDeviceList deviceList); + +GST_HIP_API +hipError_t HipGraphicsGLRegisterBuffer (GstHipVendor vendor, + hipGraphicsResource** resource, + unsigned int buffer, + unsigned int flags); + +G_END_DECLS +
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/hip-gst.h
Added
@@ -0,0 +1,211 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/gst.h> +#include <hip/hip_runtime.h> +#include <gst/hip/hip-prelude.h> +#include <gst/hip/gsthip-enums.h> + +G_BEGIN_DECLS + +GST_HIP_API +hipError_t HipInit (GstHipVendor vendor, + unsigned int flags); + +GST_HIP_API +hipError_t HipDriverGetVersion (GstHipVendor vendor, + int* driverVersion); + +GST_HIP_API +hipError_t HipRuntimeGetVersion (GstHipVendor vendor, + int* runtimeVersion); + +GST_HIP_API +const char* HipGetErrorName (GstHipVendor vendor, + hipError_t hip_error); + +GST_HIP_API +const char* HipGetErrorString (GstHipVendor vendor, + hipError_t hipError); + +GST_HIP_API +hipError_t HipGetDeviceCount (GstHipVendor vendor, + int* count); + +GST_HIP_API +hipError_t HipGetDeviceProperties (GstHipVendor vendor, + hipDeviceProp_t* prop, + int deviceId); + +GST_HIP_API +hipError_t HipDeviceGetAttribute (GstHipVendor vendor, + int* pi, + hipDeviceAttribute_t attr, + int deviceId); + +GST_HIP_API +hipError_t HipSetDevice (GstHipVendor vendor, + int deviceId); + +GST_HIP_API +hipError_t HipMalloc (GstHipVendor vendor, + void** ptr, + size_t size); + +GST_HIP_API +hipError_t HipFree 
(GstHipVendor vendor, + void* ptr); + +GST_HIP_API +hipError_t HipHostMalloc (GstHipVendor vendor, + void** ptr, + size_t size, + unsigned int flags); + +GST_HIP_API +hipError_t HipHostFree (GstHipVendor vendor, + void* ptr); + +GST_HIP_API +hipError_t HipStreamCreate (GstHipVendor vendor, + hipStream_t* stream); + +GST_HIP_API +hipError_t HipStreamDestroy (GstHipVendor vendor, + hipStream_t stream); + +GST_HIP_API +hipError_t HipStreamSynchronize (GstHipVendor vendor, + hipStream_t stream); + +GST_HIP_API +hipError_t HipEventCreateWithFlags (GstHipVendor vendor, + hipEvent_t* event, + unsigned flags); + +GST_HIP_API +hipError_t HipEventRecord (GstHipVendor vendor, + hipEvent_t event, + hipStream_t stream); + +GST_HIP_API +hipError_t HipEventDestroy (GstHipVendor vendor, + hipEvent_t event); + +GST_HIP_API +hipError_t HipEventSynchronize (GstHipVendor vendor, + hipEvent_t event); + +GST_HIP_API +hipError_t HipEventQuery (GstHipVendor vendor, + hipEvent_t event); + +GST_HIP_API +hipError_t HipModuleLoadData (GstHipVendor vendor, + hipModule_t* module, + const void* image); + +GST_HIP_API +hipError_t HipModuleUnload (GstHipVendor vendor, + hipModule_t module); + +GST_HIP_API +hipError_t HipModuleGetFunction (GstHipVendor vendor, + hipFunction_t* function, + hipModule_t module, + const char* kname); + +GST_HIP_API +hipError_t HipModuleLaunchKernel (GstHipVendor vendor, + hipFunction_t f, + unsigned int gridDimX, + unsigned int gridDimY, + unsigned int gridDimZ, + unsigned int blockDimX, + unsigned int blockDimY, + unsigned int blockDimZ, + unsigned int sharedMemBytes, + hipStream_t stream, + void** kernelParams, + void** extra); + +GST_HIP_API +hipError_t HipMemcpyParam2DAsync (GstHipVendor vendor, + const hip_Memcpy2D* pCopy, + hipStream_t stream); + +GST_HIP_API +hipError_t HipMemsetD8Async (GstHipVendor vendor, + hipDeviceptr_t dest, + unsigned char value, + size_t count, + hipStream_t stream); + +GST_HIP_API +hipError_t HipMemsetD16Async (GstHipVendor vendor, + 
hipDeviceptr_t dest, + unsigned short value, + size_t count, + hipStream_t stream); + +GST_HIP_API +hipError_t HipMemsetD32Async (GstHipVendor vendor, + hipDeviceptr_t dst, + int value, + size_t count, + hipStream_t stream); + +GST_HIP_API +hipError_t HipTexObjectCreate (GstHipVendor vendor, + hipTextureObject_t* pTexObject, + const HIP_RESOURCE_DESC* pResDesc, + const HIP_TEXTURE_DESC* pTexDesc, + const HIP_RESOURCE_VIEW_DESC* pResViewDesc); + +GST_HIP_API +hipError_t HipTexObjectDestroy (GstHipVendor vendor, + hipTextureObject_t texObject); + +GST_HIP_API +hipError_t HipGraphicsMapResources (GstHipVendor vendor, + int count, + hipGraphicsResource_t* resources, + hipStream_t stream); + +GST_HIP_API +hipError_t HipGraphicsResourceGetMappedPointer (GstHipVendor vendor, + void** devPtr, + size_t* size, + hipGraphicsResource_t resource); + +GST_HIP_API +hipError_t HipGraphicsUnmapResources (GstHipVendor vendor, + int count, + hipGraphicsResource_t* resources, + hipStream_t stream); + +GST_HIP_API +hipError_t HipGraphicsUnregisterResource (GstHipVendor vendor, + hipGraphicsResource_t resource); + +G_END_DECLS + +
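Every entry point in this header takes a leading #GstHipVendor, so one binary can dispatch to either vendor's HIP runtime at run time. A probing loop might look like the following sketch — it assumes the `name` field of `hipDeviceProp_t` from the HIP runtime headers, and takes the vendor value as a parameter since the concrete enum values are not shown in this chunk:

```c
/* Sketch: count devices for one vendor and print each device name */
static void
probe_vendor (GstHipVendor vendor)
{
  int count = 0;

  if (HipInit (vendor, 0) != hipSuccess)
    return;

  if (HipGetDeviceCount (vendor, &count) != hipSuccess)
    return;

  for (int i = 0; i < count; i++) {
    hipDeviceProp_t prop;
    if (HipGetDeviceProperties (vendor, &prop, i) == hipSuccess)
      g_print ("device %d: %s\n", i, prop.name);
  }
}
```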
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/hip-prelude.h
Added
@@ -0,0 +1,34 @@ +/* GStreamer + * Copyright (C) 2025 GStreamer developers + * + * -prelude.h: prelude include header for gst-hip library + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/gst.h> + +#ifndef GST_HIP_API +# ifdef BUILDING_GST_HIP +# define GST_HIP_API GST_API_EXPORT /* from config.h */ +# else +# define GST_HIP_API GST_API_IMPORT +# endif +#endif + +
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/meson.build
Added
@@ -0,0 +1,178 @@
+hip_sources = [
+  'gsthip-enums.cpp',
+  'gsthip-interop.cpp',
+  'gsthipbufferpool.cpp',
+  'gsthipdevice.cpp',
+  'gsthipevent.cpp',
+  'gsthiploader.cpp',
+  'gsthipmemory.cpp',
+  'gsthiprtc.cpp',
+  'gsthipstream.cpp',
+  'gsthiputils.cpp',
+]
+
+hip_headers = [
+  'gsthip_fwd.h',
+  'gsthip-enums.h',
+  'gsthip-interop.h',
+  'gsthip.h',
+  'gsthipbufferpool.h',
+  'gsthipdevice.h',
+  'gsthipevent.h',
+  'gsthiploader.h',
+  'gsthipmemory.h',
+  'gsthiprtc.h',
+  'gsthipstream.h',
+  'gsthiputils.h',
+  'hip-prelude.h',
+]
+
+hipgst_headers = [
+  'hip-gst.h',
+]
+
+hip_gl_headers = [
+  'gsthip-gl.h',
+  'gsthip-interop-gl.h',
+]
+
+hipgst_gl_headers = [
+  'hip-gst-gl.h',
+]
+
+gsthip_dep = dependency('', required : false)
+gsthip_gl_dep = dependency('', required : false)
+
+hip_option = get_option('hip')
+if hip_option.disabled()
+  subdir_done()
+endif
+
+if host_system not in ['linux', 'windows']
+  subdir_done()
+endif
+
+extra_args = [
+  '-DGST_USE_UNSTABLE_API',
+  '-DBUILDING_GST_HIP',
+  '-DG_LOG_DOMAIN="GStreamer-HIP"',
+]
+
+extra_deps = []
+
+hip_cdata = configuration_data()
+if gstgl_dep.found()
+  hip_cdata.set('HAVE_GST_GL', true)
+  extra_deps += gstgl_dep
+endif
+
+configure_file(
+  output: 'gsthip-config.h',
+  configuration: hip_cdata,
+)
+
+hipstub_incdir = include_directories('./stub')
+
+pkg_name = 'gstreamer-hip-' + api_version
+gsthip = library('gsthip', hip_sources,
+  c_args : gst_plugins_bad_args + extra_args,
+  cpp_args: gst_plugins_bad_args + extra_args,
+  include_directories : [configinc, libsinc, hipstub_incdir],
+  dependencies : [gst_dep, gstbase_dep, gstvideo_dep, gmodule_dep] + extra_deps,
+  version : libversion,
+  soversion : soversion,
+  install : true,
+  override_options : ['cpp_std=c++14'],
+)
+
+gen_sources = []
+library_def = {'lib': gsthip}
+
+stub_path = meson.current_source_dir() / 'stub'
+
+if build_gir
+  gir_includes = ['Gst-1.0', 'GstBase-1.0', 'GstVideo-1.0']
+
+  gir = {
+    'sources' : hip_sources + hip_headers,
+    'namespace' : 'GstHip',
+    'nsversion' : api_version,
+    'identifier_prefix' : 'Gst',
+    'symbol_prefix' : 'gst',
+    'export_packages' : pkg_name,
+    'includes' : gir_includes,
+    'install' : true,
+    'extra_args' : gir_init_section + ['-DGST_USE_UNSTABLE_API', '-I' + stub_path],
+    'dependencies' : [gst_dep, gstbase_dep, gstvideo_dep],
+  }
+
+  library_def += {'gir': gir}
+  hip_gir = ['GstHip-1.0']
+  if not static_build
+    hip_gir = gnome.generate_gir(gsthip, kwargs: gir)
+    library_def += {'gir_targets': hip_gir}
+    gen_sources += hip_gir
+  endif
+
+endif
+gst_libraries += [[pkg_name, library_def]]
+
+pkgconfig.generate(
+  libraries : [gst_dep, gstbase_dep, gstvideo_dep, gsthip],
+  variables : pkgconfig_variables,
+  subdirs : pkgconfig_subdirs,
+  name : pkg_name,
+  description : 'GStreamer HIP library',
+)
+
+install_headers(hip_headers + hipgst_headers, subdir : 'gstreamer-1.0/gst/hip')
+gsthip_dep = declare_dependency(link_with : gsthip,
+  include_directories : libsinc,
+  dependencies : [gst_dep, gstbase_dep, gstvideo_dep],
+  sources: gen_sources)
+meson.override_dependency(pkg_name, gsthip_dep)
+
+if gstgl_dep.found()
+  pkg_name = 'gstreamer-hip-gl-' + api_version
+
+  hip_gl_gir = []
+  if build_gir
+    gir_includes += ['GstGL-1.0', hip_gir[0]]
+
+    gir = {
+      'sources' : hip_gl_headers + ['gsthip-interop.cpp'],
+      'namespace' : 'GstHipGL',
+      'nsversion' : api_version,
+      'identifier_prefix' : 'Gst',
+      'symbol_prefix' : 'gst',
+      'export_packages' : pkg_name,
+      'includes' : gir_includes,
+      'install' : true,
+      'extra_args' : gir_init_section + ['-DGST_USE_UNSTABLE_API', '-I' + stub_path],
+      'dependencies' : [gst_dep, gstbase_dep, gstvideo_dep, gstgl_dep],
+    }
+
+    library_def += {'gir': gir}
+    if not static_build
+      hip_gl_gir = gnome.generate_gir(gsthip, kwargs: gir)
+      library_def += {'gir_targets': hip_gl_gir}
+    endif
+  endif
+
+  gst_libraries += [[pkg_name, library_def]]
+
+  pkgconfig.generate(
+    libraries : [gst_dep, gstbase_dep, gstvideo_dep, gsthip, gstgl_dep],
+    variables : pkgconfig_variables,
+    subdirs : pkgconfig_subdirs,
+    name : pkg_name,
+    description : 'GStreamer HIP library (OpenGL specifics)',
+  )
+
+  install_headers(hip_gl_headers + hipgst_gl_headers, subdir : 'gstreamer-1.0/gst/hip')
+  gsthip_gl_dep = declare_dependency(link_with : gsthip,
+    include_directories : libsinc,
+    dependencies : [gst_dep, gstbase_dep, gstvideo_dep, gstgl_dep],
+    sources: hip_gl_gir)
+  meson.override_dependency(pkg_name, gsthip_gl_dep)
+endif
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/stub
Added
+(directory)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/stub/cuda.h
Added
@@ -0,0 +1,599 @@ +/* CUDA stub header + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <glib.h> + +G_BEGIN_DECLS + +typedef gpointer CUcontext; +typedef gpointer CUgraphicsResource; +typedef gpointer CUstream; +typedef gpointer CUarray; +typedef gpointer CUmodule; +typedef gpointer CUfunction; +typedef gpointer CUmipmappedArray; +typedef gpointer CUevent; +typedef gpointer CUmemoryPool; +typedef gpointer CUexternalMemory; +typedef gpointer CUexternalSemaphore; + +typedef guint64 CUtexObject; +typedef guintptr CUdeviceptr; +typedef gint CUdevice; + +typedef enum +{ + CUDA_SUCCESS = 0, + CUDA_ERROR_INVALID_VALUE = 1, + CUDA_ERROR_OUT_OF_MEMORY = 2, + CUDA_ERROR_NOT_INITIALIZED = 3, + CUDA_ERROR_DEINITIALIZED = 4, + CUDA_ERROR_PROFILER_DISABLED = 5, + CUDA_ERROR_PROFILER_NOT_INITIALIZED = 6, + CUDA_ERROR_PROFILER_ALREADY_STARTED = 7, + CUDA_ERROR_PROFILER_ALREADY_STOPPED = 8, + CUDA_ERROR_STUB_LIBRARY = 34, + CUDA_ERROR_DEVICE_UNAVAILABLE = 46, + CUDA_ERROR_NO_DEVICE = 100, + CUDA_ERROR_INVALID_DEVICE = 101, + CUDA_ERROR_DEVICE_NOT_LICENSED = 102, + CUDA_ERROR_INVALID_IMAGE = 200, + CUDA_ERROR_INVALID_CONTEXT = 201, + CUDA_ERROR_CONTEXT_ALREADY_CURRENT = 202, + CUDA_ERROR_MAP_FAILED = 205, + CUDA_ERROR_UNMAP_FAILED = 206, + 
CUDA_ERROR_ARRAY_IS_MAPPED = 207, + CUDA_ERROR_ALREADY_MAPPED = 208, + CUDA_ERROR_NO_BINARY_FOR_GPU = 209, + CUDA_ERROR_ALREADY_ACQUIRED = 210, + CUDA_ERROR_NOT_MAPPED = 211, + CUDA_ERROR_NOT_MAPPED_AS_ARRAY = 212, + CUDA_ERROR_NOT_MAPPED_AS_POINTER = 213, + CUDA_ERROR_ECC_UNCORRECTABLE = 214, + CUDA_ERROR_UNSUPPORTED_LIMIT = 215, + CUDA_ERROR_CONTEXT_ALREADY_IN_USE = 216, + CUDA_ERROR_PEER_ACCESS_UNSUPPORTED = 217, + CUDA_ERROR_INVALID_PTX = 218, + CUDA_ERROR_INVALID_GRAPHICS_CONTEXT = 219, + CUDA_ERROR_NVLINK_UNCORRECTABLE = 220, + CUDA_ERROR_JIT_COMPILER_NOT_FOUND = 221, + CUDA_ERROR_UNSUPPORTED_PTX_VERSION = 222, + CUDA_ERROR_JIT_COMPILATION_DISABLED = 223, + CUDA_ERROR_UNSUPPORTED_EXEC_AFFINITY = 224, + CUDA_ERROR_UNSUPPORTED_DEVSIDE_SYNC = 225, + CUDA_ERROR_INVALID_SOURCE = 300, + CUDA_ERROR_FILE_NOT_FOUND = 301, + CUDA_ERROR_SHARED_OBJECT_SYMBOL_NOT_FOUND = 302, + CUDA_ERROR_SHARED_OBJECT_INIT_FAILED = 303, + CUDA_ERROR_OPERATING_SYSTEM = 304, + CUDA_ERROR_INVALID_HANDLE = 400, + CUDA_ERROR_ILLEGAL_STATE = 401, + CUDA_ERROR_LOSSY_QUERY = 402, + CUDA_ERROR_NOT_FOUND = 500, + CUDA_ERROR_NOT_READY = 600, + CUDA_ERROR_ILLEGAL_ADDRESS = 700, + CUDA_ERROR_LAUNCH_OUT_OF_RESOURCES = 701, + CUDA_ERROR_LAUNCH_TIMEOUT = 702, + CUDA_ERROR_LAUNCH_INCOMPATIBLE_TEXTURING = 703, + CUDA_ERROR_PEER_ACCESS_ALREADY_ENABLED = 704, + CUDA_ERROR_PEER_ACCESS_NOT_ENABLED = 705, + CUDA_ERROR_PRIMARY_CONTEXT_ACTIVE = 708, + CUDA_ERROR_CONTEXT_IS_DESTROYED = 709, + CUDA_ERROR_ASSERT = 710, + CUDA_ERROR_TOO_MANY_PEERS = 711, + CUDA_ERROR_HOST_MEMORY_ALREADY_REGISTERED = 712, + CUDA_ERROR_HOST_MEMORY_NOT_REGISTERED = 713, + CUDA_ERROR_HARDWARE_STACK_ERROR = 714, + CUDA_ERROR_ILLEGAL_INSTRUCTION = 715, + CUDA_ERROR_MISALIGNED_ADDRESS = 716, + CUDA_ERROR_INVALID_ADDRESS_SPACE = 717, + CUDA_ERROR_INVALID_PC = 718, + CUDA_ERROR_LAUNCH_FAILED = 719, + CUDA_ERROR_COOPERATIVE_LAUNCH_TOO_LARGE = 720, + CUDA_ERROR_NOT_PERMITTED = 800, + CUDA_ERROR_NOT_SUPPORTED = 801, + 
CUDA_ERROR_SYSTEM_NOT_READY = 802, + CUDA_ERROR_SYSTEM_DRIVER_MISMATCH = 803, + CUDA_ERROR_COMPAT_NOT_SUPPORTED_ON_DEVICE = 804, + CUDA_ERROR_MPS_CONNECTION_FAILED = 805, + CUDA_ERROR_MPS_RPC_FAILURE = 806, + CUDA_ERROR_MPS_SERVER_NOT_READY = 807, + CUDA_ERROR_MPS_MAX_CLIENTS_REACHED = 808, + CUDA_ERROR_MPS_MAX_CONNECTIONS_REACHED = 809, + CUDA_ERROR_MPS_CLIENT_TERMINATED = 810, + CUDA_ERROR_CDP_NOT_SUPPORTED = 811, + CUDA_ERROR_CDP_VERSION_MISMATCH = 812, + CUDA_ERROR_STREAM_CAPTURE_UNSUPPORTED = 900, + CUDA_ERROR_STREAM_CAPTURE_INVALIDATED = 901, + CUDA_ERROR_STREAM_CAPTURE_MERGE = 902, + CUDA_ERROR_STREAM_CAPTURE_UNMATCHED = 903, + CUDA_ERROR_STREAM_CAPTURE_UNJOINED = 904, + CUDA_ERROR_STREAM_CAPTURE_ISOLATION = 905, + CUDA_ERROR_STREAM_CAPTURE_IMPLICIT = 906, + CUDA_ERROR_CAPTURED_EVENT = 907, + CUDA_ERROR_STREAM_CAPTURE_WRONG_THREAD = 908, + CUDA_ERROR_TIMEOUT = 909, + CUDA_ERROR_GRAPH_EXEC_UPDATE_FAILURE = 910, + CUDA_ERROR_EXTERNAL_DEVICE = 911, + CUDA_ERROR_INVALID_CLUSTER_SIZE = 912, + CUDA_ERROR_FUNCTION_NOT_LOADED = 913, + CUDA_ERROR_INVALID_RESOURCE_TYPE = 914, + CUDA_ERROR_INVALID_RESOURCE_CONFIGURATION = 915, + CUDA_ERROR_UNKNOWN = 999 +} CUresult; + +typedef enum +{ + CU_MEMORYTYPE_HOST = 1, + CU_MEMORYTYPE_DEVICE = 2, + CU_MEMORYTYPE_ARRAY = 3, + CU_MEMORYTYPE_UNIFIED = 4, +} CUmemorytype; + +typedef enum +{ + CU_DEVICE_ATTRIBUTE_TEXTURE_ALIGNMENT = 14, + CU_DEVICE_ATTRIBUTE_UNIFIED_ADDRESSING = 41, + CU_DEVICE_ATTRIBUTE_COMPUTE_CAPABILITY_MAJOR = 75, + CU_DEVICE_ATTRIBUTE_COMPUTE_CAPABILITY_MINOR = 76, + CU_DEVICE_ATTRIBUTE_VIRTUAL_MEMORY_MANAGEMENT_SUPPORTED = 102, + CU_DEVICE_ATTRIBUTE_HANDLE_TYPE_POSIX_FILE_DESCRIPTOR_SUPPORTED = 103, + CU_DEVICE_ATTRIBUTE_HANDLE_TYPE_WIN32_HANDLE_SUPPORTED = 104, + CU_DEVICE_ATTRIBUTE_HANDLE_TYPE_WIN32_KMT_HANDLE_SUPPORTED = 105, + CU_DEVICE_ATTRIBUTE_MEMORY_POOLS_SUPPORTED = 115, +} CUdevice_attribute; + +typedef enum +{ + CU_GRAPHICS_REGISTER_FLAGS_NONE = 0x00, + CU_GRAPHICS_REGISTER_FLAGS_READ_ONLY = 0x01, + 
CU_GRAPHICS_REGISTER_FLAGS_WRITE_DISCARD = 0x02, + CU_GRAPHICS_REGISTER_FLAGS_SURFACE_LOAD_STORE = 0x04, + CU_GRAPHICS_REGISTER_FLAGS_TEXTURE_GATHER = 0x08, +} CUgraphicsRegisterFlags; + +typedef enum +{ + CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE = 0x00, + CU_GRAPHICS_MAP_RESOURCE_FLAGS_READ_ONLY = 0x01, + CU_GRAPHICS_MAP_RESOURCE_FLAGS_WRITE_DISCARD = 0x02, +} CUgraphicsMapResourceFlags; + +typedef enum +{ + CU_STREAM_DEFAULT = 0x0, + CU_STREAM_NON_BLOCKING = 0x1 +} CUstream_flags; + +typedef enum +{ + CU_TR_FILTER_MODE_POINT = 0, + CU_TR_FILTER_MODE_LINEAR = 1 +} CUfilter_mode; + +typedef enum +{ + CU_TR_ADDRESS_MODE_WRAP = 0, + CU_TR_ADDRESS_MODE_CLAMP = 1, + CU_TR_ADDRESS_MODE_MIRROR = 2, + CU_TR_ADDRESS_MODE_BORDER = 3 +} CUaddress_mode; + +typedef enum +{ + CU_RESOURCE_TYPE_ARRAY = 0, + CU_RESOURCE_TYPE_MIPMAPPED_ARRAY = 1, + CU_RESOURCE_TYPE_LINEAR = 2, + CU_RESOURCE_TYPE_PITCH2D = 3 +} CUresourcetype; + +typedef enum +{ + CU_AD_FORMAT_UNSIGNED_INT8 = 1, + CU_AD_FORMAT_UNSIGNED_INT16 = 2, +} CUarray_format; + +typedef enum +{ + CU_RES_VIEW_FORMAT_NONE = 0, +} CUresourceViewFormat; + +typedef enum +{ + CU_EVENT_DEFAULT = 0x0, + CU_EVENT_BLOCKING_SYNC = 0x1, + CU_EVENT_DISABLE_TIMING = 0x2, + CU_EVENT_INTERPROCESS = 0x4, +} CUevent_flags; + +typedef enum +{ + CU_LIMIT_STACK_SIZE = 0x0, + CU_LIMIT_PRINTF_FIFO_SIZE = 0x1, + CU_LIMIT_MALLOC_HEAP_SIZE = 0x2, + CU_LIMIT_DEV_RUNTIME_SYNC_DEPTH = 0x3, + CU_LIMIT_DEV_RUNTIME_PENDING_LAUNCH_COUNT = 0x4, + CU_LIMIT_MAX_L2_FETCH_GRANULARITY = 0x5, + CU_LIMIT_PERSISTING_L2_CACHE_SIZE = 0x6, + CU_LIMIT_SHMEM_SIZE = 0x7, + CU_LIMIT_CIG_ENABLED = 0x8, + CU_LIMIT_CIG_SHMEM_FALLBACK_ENABLED = 0x9, +} CUlimit; + +typedef struct +{ + gsize srcXInBytes; + gsize srcY; + CUmemorytype srcMemoryType; + gconstpointer srcHost; + CUdeviceptr srcDevice; + CUarray srcArray; + gsize srcPitch; + + gsize dstXInBytes; + gsize dstY; + CUmemorytype dstMemoryType; + gpointer dstHost; + CUdeviceptr dstDevice; + CUarray dstArray; + gsize dstPitch; + + 
gsize WidthInBytes; + gsize Height; +} CUDA_MEMCPY2D; + +typedef struct +{ + CUaddress_mode addressMode[3]; + CUfilter_mode filterMode; + guint flags; + guint maxAnisotropy; + CUfilter_mode mipmapFilterMode; + gfloat mipmapLevelBias; + gfloat minMipmapLevelClamp; + gfloat maxMipmapLevelClamp; + gfloat borderColor[4]; + gint reserved[12]; +} CUDA_TEXTURE_DESC; + +typedef struct +{ + CUresourcetype resType; + + union { + struct { + CUarray hArray; + } array; + struct { + CUmipmappedArray hMipmappedArray; + } mipmap; + struct { + CUdeviceptr devPtr; + CUarray_format format; + guint numChannels; + gsize sizeInBytes; + } linear; + struct { + CUdeviceptr devPtr; + CUarray_format format; + guint numChannels; + gsize width; + gsize height; + gsize pitchInBytes; + } pitch2D; + struct { + gint reserved[32]; + } reserved; + } res; + + guint flags; +} CUDA_RESOURCE_DESC; + +typedef struct +{ + CUresourceViewFormat format; + gsize width; + gsize height; + gsize depth; + guint firstMipmapLevel; + guint lastMipmapLevel; + guint firstLayer; + guint lastLayer; + guint reserved[16]; +} CUDA_RESOURCE_VIEW_DESC; + +typedef enum +{ + CU_IPC_MEM_LAZY_ENABLE_PEER_ACCESS = 0x1 +} CUipcMem_flags; + +#define CU_IPC_HANDLE_SIZE 64 +typedef struct +{ + char reserved[CU_IPC_HANDLE_SIZE]; +} CUipcMemHandle; + +typedef struct +{ + char reserved[CU_IPC_HANDLE_SIZE]; +} CUipcEventHandle; + +typedef unsigned long long CUmemGenericAllocationHandle; + +typedef enum +{ + CU_MEM_HANDLE_TYPE_NONE = 0x0, + CU_MEM_HANDLE_TYPE_POSIX_FILE_DESCRIPTOR = 0x1, + CU_MEM_HANDLE_TYPE_WIN32 = 0x2, + CU_MEM_HANDLE_TYPE_WIN32_KMT = 0x4, + CU_MEM_HANDLE_TYPE_MAX = 0x7FFFFFFF +} CUmemAllocationHandleType; + +typedef enum +{ + CU_MEM_ACCESS_FLAGS_PROT_NONE = 0x0, + CU_MEM_ACCESS_FLAGS_PROT_READ = 0x1, + CU_MEM_ACCESS_FLAGS_PROT_READWRITE = 0x3, + CU_MEM_ACCESS_FLAGS_PROT_MAX = 0x7FFFFFFF +} CUmemAccess_flags; + +typedef enum +{ + CU_MEM_LOCATION_TYPE_INVALID = 0x0, + CU_MEM_LOCATION_TYPE_DEVICE = 0x1, + CU_MEM_LOCATION_TYPE_MAX = 
0x7FFFFFFF +} CUmemLocationType; + +typedef enum CUmemAllocationType_enum { + CU_MEM_ALLOCATION_TYPE_INVALID = 0x0, + CU_MEM_ALLOCATION_TYPE_PINNED = 0x1, + CU_MEM_ALLOCATION_TYPE_MAX = 0x7FFFFFFF +} CUmemAllocationType; + +typedef enum +{ + CU_MEM_ALLOC_GRANULARITY_MINIMUM = 0x0, + CU_MEM_ALLOC_GRANULARITY_RECOMMENDED = 0x1 +} CUmemAllocationGranularity_flags; + +typedef struct +{ + CUmemLocationType type; + int id; +} CUmemLocation; + +typedef struct +{ + unsigned char compressionType; + unsigned char gpuDirectRDMACapable; + unsigned short usage; + unsigned char reserved[4]; +} CUmemAllocationPropAllocFlags; + +typedef struct +{ + CUmemAllocationType type; + CUmemAllocationHandleType requestedHandleTypes; + CUmemLocation location; + void *win32HandleMetaData; + CUmemAllocationPropAllocFlags allocFlags; +} CUmemAllocationProp; + +typedef struct +{ + CUmemLocation location; + CUmemAccess_flags flags; +} CUmemAccessDesc; + +typedef struct +{ + CUmemAllocationType allocType; + CUmemAllocationHandleType handleTypes; + CUmemLocation location; + void *win32SecurityAttributes; + size_t maxSize; + unsigned char reserved[56]; +} CUmemPoolProps; + +typedef enum +{ + CU_MEMPOOL_ATTR_REUSE_FOLLOW_EVENT_DEPENDENCIES = 1, + CU_MEMPOOL_ATTR_REUSE_ALLOW_OPPORTUNISTIC, + CU_MEMPOOL_ATTR_REUSE_ALLOW_INTERNAL_DEPENDENCIES, + CU_MEMPOOL_ATTR_RELEASE_THRESHOLD, + CU_MEMPOOL_ATTR_RESERVED_MEM_CURRENT, + CU_MEMPOOL_ATTR_RESERVED_MEM_HIGH, + CU_MEMPOOL_ATTR_USED_MEM_CURRENT, + CU_MEMPOOL_ATTR_USED_MEM_HIGH, +} CUmemPool_attribute; + +typedef struct +{ + unsigned long long offset; + unsigned long long size; + unsigned int flags; + unsigned int reserved[16]; +} CUDA_EXTERNAL_MEMORY_BUFFER_DESC; + +typedef enum +{ + CU_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD = 1, + CU_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_WIN32 = 2, + CU_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_WIN32_KMT = 3, + CU_EXTERNAL_MEMORY_HANDLE_TYPE_D3D12_HEAP = 4, + CU_EXTERNAL_MEMORY_HANDLE_TYPE_D3D12_RESOURCE = 5, + 
CU_EXTERNAL_MEMORY_HANDLE_TYPE_D3D11_RESOURCE = 6, + CU_EXTERNAL_MEMORY_HANDLE_TYPE_D3D11_RESOURCE_KMT = 7, + CU_EXTERNAL_MEMORY_HANDLE_TYPE_NVSCIBUF = 8 +} CUexternalMemoryHandleType; + +/** + * CUDA_EXTERNAL_MEMORY_HANDLE_DESC: (skip) (attributes doc.skip=true) + */ +typedef struct +{ + CUexternalMemoryHandleType type; + union { + int fd; + struct { + void *handle; + const void *name; + } win32; + const void *nvSciBufObject; + } handle; + unsigned long long size; + unsigned int flags; + unsigned int reserved[16]; +} CUDA_EXTERNAL_MEMORY_HANDLE_DESC; + +typedef enum +{ + CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_OPAQUE_FD = 1, + CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_OPAQUE_WIN32 = 2, + CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_OPAQUE_WIN32_KMT = 3, + CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_D3D12_FENCE = 4, + CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_D3D11_FENCE = 5, + CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_NVSCISYNC = 6, + CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_D3D11_KEYED_MUTEX = 7, + CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_D3D11_KEYED_MUTEX_KMT = 8, + CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_TIMELINE_SEMAPHORE_FD = 9, + CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_TIMELINE_SEMAPHORE_WIN32 = 10 +} CUexternalSemaphoreHandleType; + +/** + * CUDA_EXTERNAL_SEMAPHORE_HANDLE_DESC: (skip) (attributes doc.skip=true) + */ +typedef struct +{ + CUexternalSemaphoreHandleType type; + union { + int fd; + struct { + void *handle; + const void *name; + } win32; + const void* nvSciSyncObj; + } handle; + unsigned int flags; + unsigned int reserved[16]; +} CUDA_EXTERNAL_SEMAPHORE_HANDLE_DESC; + +/** + * CUDA_EXTERNAL_SEMAPHORE_SIGNAL_PARAMS: (skip) (attributes doc.skip=true) + */ +typedef struct +{ + struct { + struct { + unsigned long long value; + } fence; + union { + void *fence; + unsigned long long reserved; + } nvSciSync; + struct { + unsigned long long key; + } keyedMutex; + unsigned int reserved[12]; + } params; + unsigned int flags; + unsigned int reserved[16]; +} CUDA_EXTERNAL_SEMAPHORE_SIGNAL_PARAMS; + +/** + * 
CUDA_EXTERNAL_SEMAPHORE_WAIT_PARAMS: (skip) (attributes doc.skip=true) + */ +typedef struct +{ + struct { + struct { + unsigned long long value; + } fence; + union { + void *fence; + unsigned long long reserved; + } nvSciSync; + struct { + unsigned long long key; + unsigned int timeoutMs; + } keyedMutex; + unsigned int reserved[10]; + } params; + unsigned int flags; + unsigned int reserved[16]; +} CUDA_EXTERNAL_SEMAPHORE_WAIT_PARAMS; + +typedef struct +{ + size_t Width; + size_t Height; + size_t Depth; + CUarray_format Format; + unsigned int NumChannels; + unsigned int Flags; +} CUDA_ARRAY3D_DESCRIPTOR; + +typedef struct +{ + unsigned long long offset; + CUDA_ARRAY3D_DESCRIPTOR arrayDesc; + unsigned int numLevels; + unsigned int reserved[16]; +} CUDA_EXTERNAL_MEMORY_MIPMAPPED_ARRAY_DESC; + +#define CUDA_VERSION 10000 + +#ifdef _WIN32 +#define CUDAAPI __stdcall +#else +#define CUDAAPI +#endif + +#define cuCtxCreate cuCtxCreate_v2 +#define cuCtxDestroy cuCtxDestroy_v2 +#define cuCtxPopCurrent cuCtxPopCurrent_v2 +#define cuCtxPushCurrent cuCtxPushCurrent_v2 +#define cuGraphicsResourceGetMappedPointer cuGraphicsResourceGetMappedPointer_v2 +#define cuGraphicsResourceSetMapFlags cuGraphicsResourceSetMapFlags_v2 + +#define cuStreamDestroy cuStreamDestroy_v2 + +#define cuMemAlloc cuMemAlloc_v2 +#define cuMemAllocPitch cuMemAllocPitch_v2 +#define cuMemAllocHost cuMemAllocHost_v2 +#define cuMemcpy2D cuMemcpy2D_v2 +#define cuMemcpy2DAsync cuMemcpy2DAsync_v2 +#define cuMemcpyDtoD cuMemcpyDtoD_v2 +#define cuMemcpyDtoDAsync cuMemcpyDtoDAsync_v2 +#define cuMemcpyDtoH cuMemcpyDtoH_v2 +#define cuMemcpyDtoHAsync cuMemcpyDtoHAsync_v2 +#define cuMemcpyHtoD cuMemcpyHtoD_v2 +#define cuMemcpyHtoDAsync cuMemcpyHtoDAsync_v2 +#define cuMemFree cuMemFree_v2 +#define cuMemsetD2D8 cuMemsetD2D8_v2 +#define cuMemsetD2D16 cuMemsetD2D16_v2 +#define cuMemsetD2D32 cuMemsetD2D32_v2 + +#define cuEventDestroy cuEventDestroy_v2 + +#define CU_TRSF_READ_AS_INTEGER 1 + +G_END_DECLS +
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/stub/cudaD3D11.h
Added
@@ -0,0 +1,32 @@ +/* CUDA stub header + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <glib.h> + +G_BEGIN_DECLS + +typedef enum +{ + CU_D3D11_DEVICE_LIST_ALL = 0x01, + CU_D3D11_DEVICE_LIST_CURRENT_FRAME = 0x02, + CU_D3D11_DEVICE_LIST_NEXT_FRAME = 0x03, +} CUd3d11DeviceList; + +G_END_DECLS
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/stub/cudaGL.h
Added
@@ -0,0 +1,39 @@ +/* CUDA stub header + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <glib.h> + +G_BEGIN_DECLS +typedef enum +{ + CU_GL_DEVICE_LIST_ALL = 0x01, +} CUGLDeviceList; + +enum cudaGLDeviceList +{ + cudaGLDeviceListAll = 1, + cudaGLDeviceListCurrentFrame = 2, + cudaGLDeviceListNextFrame = 3 +}; + +#define cuGLGetDevices cuGLGetDevices_v2 + +G_END_DECLS +
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/stub/driver_types.h
Added
@@ -0,0 +1,405 @@ +/* CUDA stub header + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <glib.h> + +G_BEGIN_DECLS + +enum cudaError +{ + cudaSuccess = 0, + cudaErrorInvalidValue = 1, + cudaErrorMemoryAllocation = 2, + cudaErrorInitializationError = 3, + cudaErrorCudartUnloading = 4, + cudaErrorProfilerDisabled = 5, + cudaErrorProfilerNotInitialized = 6, + cudaErrorProfilerAlreadyStarted = 7, + cudaErrorProfilerAlreadyStopped = 8, + cudaErrorInvalidConfiguration = 9, + cudaErrorInvalidPitchValue = 12, + cudaErrorInvalidSymbol = 13, + cudaErrorInvalidHostPointer = 16, + cudaErrorInvalidDevicePointer = 17, + cudaErrorInvalidTexture = 18, + cudaErrorInvalidTextureBinding = 19, + cudaErrorInvalidChannelDescriptor = 20, + cudaErrorInvalidMemcpyDirection = 21, + cudaErrorAddressOfConstant = 22, + cudaErrorTextureFetchFailed = 23, + cudaErrorTextureNotBound = 24, + cudaErrorSynchronizationError = 25, + cudaErrorInvalidFilterSetting = 26, + cudaErrorInvalidNormSetting = 27, + cudaErrorMixedDeviceExecution = 28, + cudaErrorNotYetImplemented = 31, + cudaErrorMemoryValueTooLarge = 32, + cudaErrorStubLibrary = 34, + cudaErrorInsufficientDriver = 35, + cudaErrorCallRequiresNewerDriver = 36, + cudaErrorInvalidSurface = 37, + 
cudaErrorDuplicateVariableName = 43, + cudaErrorDuplicateTextureName = 44, + cudaErrorDuplicateSurfaceName = 45, + cudaErrorDevicesUnavailable = 46, + cudaErrorIncompatibleDriverContext = 49, + cudaErrorMissingConfiguration = 52, + cudaErrorPriorLaunchFailure = 53, + cudaErrorLaunchMaxDepthExceeded = 65, + cudaErrorLaunchFileScopedTex = 66, + cudaErrorLaunchFileScopedSurf = 67, + cudaErrorSyncDepthExceeded = 68, + cudaErrorLaunchPendingCountExceeded = 69, + cudaErrorInvalidDeviceFunction = 98, + cudaErrorNoDevice = 100, + cudaErrorInvalidDevice = 101, + cudaErrorDeviceNotLicensed = 102, + cudaErrorSoftwareValidityNotEstablished = 103, + cudaErrorInvalidKernelImage = 200, + cudaErrorDeviceUninitialized = 201, + cudaErrorMapBufferObjectFailed = 205, + cudaErrorUnmapBufferObjectFailed = 206, + cudaErrorArrayIsMapped = 207, + cudaErrorAlreadyMapped = 208, + cudaErrorNoKernelImageForDevice = 209, + cudaErrorAlreadyAcquired = 210, + cudaErrorNotMapped = 211, + cudaErrorNotMappedAsArray = 212, + cudaErrorNotMappedAsPointer = 213, + cudaErrorECCUncorrectable = 214, + cudaErrorUnsupportedLimit = 215, + cudaErrorDeviceAlreadyInUse = 216, + cudaErrorPeerAccessUnsupported = 217, + cudaErrorInvalidPtx = 218, + cudaErrorInvalidGraphicsContext = 219, + cudaErrorNvlinkUncorrectable = 220, + cudaErrorJitCompilerNotFound = 221, + cudaErrorUnsupportedPtxVersion = 222, + cudaErrorJitCompilationDisabled = 223, + cudaErrorUnsupportedExecAffinity = 224, + cudaErrorUnsupportedDevSideSync = 225, + cudaErrorInvalidSource = 300, + cudaErrorFileNotFound = 301, + cudaErrorSharedObjectSymbolNotFound = 302, + cudaErrorSharedObjectInitFailed = 303, + cudaErrorOperatingSystem = 304, + cudaErrorInvalidResourceHandle = 400, + cudaErrorIllegalState = 401, + cudaErrorLossyQuery = 402, + cudaErrorSymbolNotFound = 500, + cudaErrorNotReady = 600, + cudaErrorIllegalAddress = 700, + cudaErrorLaunchOutOfResources = 701, + cudaErrorLaunchTimeout = 702, + cudaErrorLaunchIncompatibleTexturing = 703, + 
cudaErrorPeerAccessAlreadyEnabled = 704, + cudaErrorPeerAccessNotEnabled = 705, + cudaErrorSetOnActiveProcess = 708, + cudaErrorContextIsDestroyed = 709, + cudaErrorAssert = 710, + cudaErrorTooManyPeers = 711, + cudaErrorHostMemoryAlreadyRegistered = 712, + cudaErrorHostMemoryNotRegistered = 713, + cudaErrorHardwareStackError = 714, + cudaErrorIllegalInstruction = 715, + cudaErrorMisalignedAddress = 716, + cudaErrorInvalidAddressSpace = 717, + cudaErrorInvalidPc = 718, + cudaErrorLaunchFailure = 719, + cudaErrorCooperativeLaunchTooLarge = 720, + cudaErrorNotPermitted = 800, + cudaErrorNotSupported = 801, + cudaErrorSystemNotReady = 802, + cudaErrorSystemDriverMismatch = 803, + cudaErrorCompatNotSupportedOnDevice = 804, + cudaErrorMpsConnectionFailed = 805, + cudaErrorMpsRpcFailure = 806, + cudaErrorMpsServerNotReady = 807, + cudaErrorMpsMaxClientsReached = 808, + cudaErrorMpsMaxConnectionsReached = 809, + cudaErrorMpsClientTerminated = 810, + cudaErrorCdpNotSupported = 811, + cudaErrorCdpVersionMismatch = 812, + cudaErrorStreamCaptureUnsupported = 900, + cudaErrorStreamCaptureInvalidated = 901, + cudaErrorStreamCaptureMerge = 902, + cudaErrorStreamCaptureUnmatched = 903, + cudaErrorStreamCaptureUnjoined = 904, + cudaErrorStreamCaptureIsolation = 905, + cudaErrorStreamCaptureImplicit = 906, + cudaErrorCapturedEvent = 907, + cudaErrorStreamCaptureWrongThread = 908, + cudaErrorTimeout = 909, + cudaErrorGraphExecUpdateFailure = 910, + cudaErrorExternalDevice = 911, + cudaErrorInvalidClusterSize = 912, + cudaErrorUnknown = 999, + cudaErrorApiFailureBase = 10000 +}; + +typedef enum cudaError cudaError_t; + +typedef struct +{ + char bytes[16]; +} cudaUUID_t; + +struct cudaDeviceProp +{ + char name[256]; + cudaUUID_t uuid; + char luid[8]; + unsigned int luidDeviceNodeMask; + size_t totalGlobalMem; + size_t sharedMemPerBlock; + int regsPerBlock; + int warpSize; + size_t memPitch; + int maxThreadsPerBlock; + int maxThreadsDim[3]; + int maxGridSize[3]; + int clockRate; + size_t totalConstMem; + int major; + int minor; + size_t textureAlignment; + size_t texturePitchAlignment; + int deviceOverlap; + int multiProcessorCount; + int kernelExecTimeoutEnabled; + int integrated; + int canMapHostMemory; + int computeMode; + int maxTexture1D; + int maxTexture1DMipmap; + int maxTexture1DLinear; + int maxTexture2D[2]; + int maxTexture2DMipmap[2]; + int maxTexture2DLinear[3]; + int maxTexture2DGather[2]; + int maxTexture3D[3]; + int maxTexture3DAlt[3]; + int maxTextureCubemap; + int maxTexture1DLayered[2]; + int maxTexture2DLayered[3]; + int maxTextureCubemapLayered[2]; + int maxSurface1D; + int maxSurface2D[2]; + int maxSurface3D[3]; + int maxSurface1DLayered[2]; + int maxSurface2DLayered[3]; + int maxSurfaceCubemap; + int maxSurfaceCubemapLayered[2]; + size_t surfaceAlignment; + int concurrentKernels; + int ECCEnabled; + int pciBusID; + int pciDeviceID; + int pciDomainID; + int tccDriver; + int asyncEngineCount; + int unifiedAddressing; + int memoryClockRate; + int memoryBusWidth; + int l2CacheSize; + int persistingL2CacheMaxSize; + int maxThreadsPerMultiProcessor; + int streamPrioritiesSupported; + int globalL1CacheSupported; + int localL1CacheSupported; + size_t sharedMemPerMultiprocessor; + int regsPerMultiprocessor; + int managedMemory; + int isMultiGpuBoard; + int multiGpuBoardGroupID; + int hostNativeAtomicSupported; + int singleToDoublePrecisionPerfRatio; + int pageableMemoryAccess; + int concurrentManagedAccess; + int computePreemptionSupported; + int canUseHostPointerForRegisteredMem; + int cooperativeLaunch; + int cooperativeMultiDeviceLaunch; + size_t sharedMemPerBlockOptin; + int pageableMemoryAccessUsesHostPageTables; + int directManagedMemAccessFromHost; + int maxBlocksPerMultiProcessor; + int accessPolicyMaxWindowSize; + size_t reservedSharedMemPerBlock; + int hostRegisterSupported; + int sparseCudaArraySupported; + int hostRegisterReadOnlySupported; + int timelineSemaphoreInteropSupported; + int memoryPoolsSupported; + int gpuDirectRDMASupported; + unsigned int gpuDirectRDMAFlushWritesOptions; + int gpuDirectRDMAWritesOrdering; + unsigned int memoryPoolSupportedHandleTypes; + int deferredMappingCudaArraySupported; + int ipcEventSupported; + int clusterLaunch; + int unifiedFunctionPointers; + int reserved2[2]; + int reserved1[1]; + int reserved[60]; +}; + +enum cudaDeviceAttr +{ + cudaDevAttrMaxThreadsPerBlock = 1, + cudaDevAttrMaxBlockDimX = 2, + cudaDevAttrMaxBlockDimY = 3, + cudaDevAttrMaxBlockDimZ = 4, + cudaDevAttrMaxGridDimX = 5, + cudaDevAttrMaxGridDimY = 6, + cudaDevAttrMaxGridDimZ = 7, + cudaDevAttrMaxSharedMemoryPerBlock = 8, + cudaDevAttrTotalConstantMemory = 9, + cudaDevAttrWarpSize = 10, + cudaDevAttrMaxPitch = 11, + cudaDevAttrMaxRegistersPerBlock = 12, + cudaDevAttrClockRate = 13, + cudaDevAttrTextureAlignment = 14, + cudaDevAttrGpuOverlap = 15, + cudaDevAttrMultiProcessorCount = 16, + cudaDevAttrKernelExecTimeout = 17, + cudaDevAttrIntegrated = 18, + cudaDevAttrCanMapHostMemory = 19, + cudaDevAttrComputeMode = 20, + cudaDevAttrMaxTexture1DWidth = 21, + cudaDevAttrMaxTexture2DWidth = 22, + cudaDevAttrMaxTexture2DHeight = 23, + cudaDevAttrMaxTexture3DWidth = 24, + cudaDevAttrMaxTexture3DHeight = 25, + cudaDevAttrMaxTexture3DDepth = 26, + cudaDevAttrMaxTexture2DLayeredWidth = 27, + cudaDevAttrMaxTexture2DLayeredHeight = 28, + cudaDevAttrMaxTexture2DLayeredLayers = 29, + cudaDevAttrSurfaceAlignment = 30, + cudaDevAttrConcurrentKernels = 31, + cudaDevAttrEccEnabled = 32, + cudaDevAttrPciBusId = 33, + cudaDevAttrPciDeviceId = 34, + cudaDevAttrTccDriver = 35, + cudaDevAttrMemoryClockRate = 36, + cudaDevAttrGlobalMemoryBusWidth = 37, + cudaDevAttrL2CacheSize = 38, + cudaDevAttrMaxThreadsPerMultiProcessor = 39, + cudaDevAttrAsyncEngineCount = 40, + cudaDevAttrUnifiedAddressing = 41, + cudaDevAttrMaxTexture1DLayeredWidth = 42, + cudaDevAttrMaxTexture1DLayeredLayers = 43, + cudaDevAttrMaxTexture2DGatherWidth = 45, + cudaDevAttrMaxTexture2DGatherHeight = 46, + cudaDevAttrMaxTexture3DWidthAlt = 47, + 
cudaDevAttrMaxTexture3DHeightAlt = 48, + cudaDevAttrMaxTexture3DDepthAlt = 49, + cudaDevAttrPciDomainId = 50, + cudaDevAttrTexturePitchAlignment = 51, + cudaDevAttrMaxTextureCubemapWidth = 52, + cudaDevAttrMaxTextureCubemapLayeredWidth = 53, + cudaDevAttrMaxTextureCubemapLayeredLayers = 54, + cudaDevAttrMaxSurface1DWidth = 55, + cudaDevAttrMaxSurface2DWidth = 56, + cudaDevAttrMaxSurface2DHeight = 57, + cudaDevAttrMaxSurface3DWidth = 58, + cudaDevAttrMaxSurface3DHeight = 59, + cudaDevAttrMaxSurface3DDepth = 60, + cudaDevAttrMaxSurface1DLayeredWidth = 61, + cudaDevAttrMaxSurface1DLayeredLayers = 62, + cudaDevAttrMaxSurface2DLayeredWidth = 63, + cudaDevAttrMaxSurface2DLayeredHeight = 64, + cudaDevAttrMaxSurface2DLayeredLayers = 65, + cudaDevAttrMaxSurfaceCubemapWidth = 66, + cudaDevAttrMaxSurfaceCubemapLayeredWidth = 67, + cudaDevAttrMaxSurfaceCubemapLayeredLayers = 68, + cudaDevAttrMaxTexture1DLinearWidth = 69, + cudaDevAttrMaxTexture2DLinearWidth = 70, + cudaDevAttrMaxTexture2DLinearHeight = 71, + cudaDevAttrMaxTexture2DLinearPitch = 72, + cudaDevAttrMaxTexture2DMipmappedWidth = 73, + cudaDevAttrMaxTexture2DMipmappedHeight = 74, + cudaDevAttrComputeCapabilityMajor = 75, + cudaDevAttrComputeCapabilityMinor = 76, + cudaDevAttrMaxTexture1DMipmappedWidth = 77, + cudaDevAttrStreamPrioritiesSupported = 78, + cudaDevAttrGlobalL1CacheSupported = 79, + cudaDevAttrLocalL1CacheSupported = 80, + cudaDevAttrMaxSharedMemoryPerMultiprocessor = 81, + cudaDevAttrMaxRegistersPerMultiprocessor = 82, + cudaDevAttrManagedMemory = 83, + cudaDevAttrIsMultiGpuBoard = 84, + cudaDevAttrMultiGpuBoardGroupID = 85, + cudaDevAttrHostNativeAtomicSupported = 86, + cudaDevAttrSingleToDoublePrecisionPerfRatio = 87, + cudaDevAttrPageableMemoryAccess = 88, + cudaDevAttrConcurrentManagedAccess = 89, + cudaDevAttrComputePreemptionSupported = 90, + cudaDevAttrCanUseHostPointerForRegisteredMem = 91, + cudaDevAttrReserved92 = 92, + cudaDevAttrReserved93 = 93, + cudaDevAttrReserved94 = 94, + 
cudaDevAttrCooperativeLaunch = 95, + cudaDevAttrCooperativeMultiDeviceLaunch = 96, + cudaDevAttrMaxSharedMemoryPerBlockOptin = 97, + cudaDevAttrCanFlushRemoteWrites = 98, + cudaDevAttrHostRegisterSupported = 99, + cudaDevAttrPageableMemoryAccessUsesHostPageTables = 100, + cudaDevAttrDirectManagedMemAccessFromHost = 101, + cudaDevAttrMaxBlocksPerMultiprocessor = 106, + cudaDevAttrMaxPersistingL2CacheSize = 108, + cudaDevAttrMaxAccessPolicyWindowSize = 109, + cudaDevAttrReservedSharedMemoryPerBlock = 111, + cudaDevAttrSparseCudaArraySupported = 112, + cudaDevAttrHostRegisterReadOnlySupported = 113, + cudaDevAttrTimelineSemaphoreInteropSupported = 114, + cudaDevAttrMaxTimelineSemaphoreInteropSupported = 114, + cudaDevAttrMemoryPoolsSupported = 115, + cudaDevAttrGPUDirectRDMASupported = 116, + cudaDevAttrGPUDirectRDMAFlushWritesOptions = 117, + cudaDevAttrGPUDirectRDMAWritesOrdering = 118, + cudaDevAttrMemoryPoolSupportedHandleTypes = 119, + cudaDevAttrClusterLaunch = 120, + cudaDevAttrDeferredMappingCudaArraySupported = 121, + cudaDevAttrReserved122 = 122, + cudaDevAttrReserved123 = 123, + cudaDevAttrReserved124 = 124, + cudaDevAttrIpcEventSupport = 125, + cudaDevAttrMemSyncDomainCount = 126, + cudaDevAttrReserved127 = 127, + cudaDevAttrReserved128 = 128, + cudaDevAttrReserved129 = 129, + cudaDevAttrNumaConfig = 130, + cudaDevAttrNumaId = 131, + cudaDevAttrReserved132 = 132, + cudaDevAttrMpsEnabled = 133, + cudaDevAttrHostNumaId = 134, + cudaDevAttrMax +}; + +typedef gpointer cudaStream_t; + +struct cudaGraphicsResource; + +typedef struct cudaGraphicsResource *cudaGraphicsResource_t; +typedef struct CUevent_st *cudaEvent_t; + +G_END_DECLS +
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/stub/hip
Added
+(directory)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/stub/hip/driver_types.h
Added
@@ -0,0 +1,429 @@ +/* +Copyright (c) 2015 - 2023 Advanced Micro Devices, Inc. All rights reserved. + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in +all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN +THE SOFTWARE. 
+*/ + +#pragma once + +#include <hip/hip_runtime_api.h> + +#ifndef __cplusplus +#include <stdbool.h> +#endif + +typedef void* hipDeviceptr_t; +typedef enum hipChannelFormatKind { + hipChannelFormatKindSigned = 0, + hipChannelFormatKindUnsigned = 1, + hipChannelFormatKindFloat = 2, + hipChannelFormatKindNone = 3 +}hipChannelFormatKind; +typedef struct hipChannelFormatDesc { + int x; + int y; + int z; + int w; + enum hipChannelFormatKind f; +}hipChannelFormatDesc; +#define HIP_TRSA_OVERRIDE_FORMAT 0x01 +#define HIP_TRSF_READ_AS_INTEGER 0x01 +#define HIP_TRSF_NORMALIZED_COORDINATES 0x02 +#define HIP_TRSF_SRGB 0x10 + +typedef struct hipArray* hipArray_t; +typedef const struct hipArray* hipArray_const_t; +typedef enum hipArray_Format { + HIP_AD_FORMAT_UNSIGNED_INT8 = 0x01, + HIP_AD_FORMAT_UNSIGNED_INT16 = 0x02, + HIP_AD_FORMAT_UNSIGNED_INT32 = 0x03, + HIP_AD_FORMAT_SIGNED_INT8 = 0x08, + HIP_AD_FORMAT_SIGNED_INT16 = 0x09, + HIP_AD_FORMAT_SIGNED_INT32 = 0x0a, + HIP_AD_FORMAT_HALF = 0x10, + HIP_AD_FORMAT_FLOAT = 0x20 +}hipArray_Format; +typedef struct HIP_ARRAY_DESCRIPTOR { + size_t Width; + size_t Height; + enum hipArray_Format Format; + unsigned int NumChannels; +}HIP_ARRAY_DESCRIPTOR; +typedef struct HIP_ARRAY3D_DESCRIPTOR { + size_t Width; + size_t Height; + size_t Depth; + enum hipArray_Format Format; + unsigned int NumChannels; + unsigned int Flags; +}HIP_ARRAY3D_DESCRIPTOR; +typedef struct hip_Memcpy2D { + size_t srcXInBytes; + size_t srcY; + hipMemoryType srcMemoryType; + const void* srcHost; + hipDeviceptr_t srcDevice; + hipArray_t srcArray; + size_t srcPitch; + size_t dstXInBytes; + size_t dstY; + hipMemoryType dstMemoryType; + void* dstHost; + hipDeviceptr_t dstDevice; + hipArray_t dstArray; + size_t dstPitch; + size_t WidthInBytes; + size_t Height; +} hip_Memcpy2D; +typedef struct hipMipmappedArray { + void* data; + struct hipChannelFormatDesc desc; + unsigned int type; + unsigned int width; + unsigned int height; + unsigned int depth; + unsigned int 
min_mipmap_level; + unsigned int max_mipmap_level; + unsigned int flags; + enum hipArray_Format format; + unsigned int num_channels; +} hipMipmappedArray; +typedef struct hipMipmappedArray* hipMipmappedArray_t; +typedef hipMipmappedArray_t hipmipmappedArray; +typedef const struct hipMipmappedArray* hipMipmappedArray_const_t; +/** + * hip resource types + */ +typedef enum hipResourceType { + hipResourceTypeArray = 0x00, + hipResourceTypeMipmappedArray = 0x01, + hipResourceTypeLinear = 0x02, + hipResourceTypePitch2D = 0x03 +}hipResourceType; +typedef enum HIPresourcetype_enum { + HIP_RESOURCE_TYPE_ARRAY = 0x00, /**< Array resoure */ + HIP_RESOURCE_TYPE_MIPMAPPED_ARRAY = 0x01, /**< Mipmapped array resource */ + HIP_RESOURCE_TYPE_LINEAR = 0x02, /**< Linear resource */ + HIP_RESOURCE_TYPE_PITCH2D = 0x03 /**< Pitch 2D resource */ +} HIPresourcetype, hipResourcetype; +/** + * hip address modes + */ +typedef enum HIPaddress_mode_enum { + HIP_TR_ADDRESS_MODE_WRAP = 0, + HIP_TR_ADDRESS_MODE_CLAMP = 1, + HIP_TR_ADDRESS_MODE_MIRROR = 2, + HIP_TR_ADDRESS_MODE_BORDER = 3 +} HIPaddress_mode; +/** + * hip filter modes + */ +typedef enum HIPfilter_mode_enum { + HIP_TR_FILTER_MODE_POINT = 0, + HIP_TR_FILTER_MODE_LINEAR = 1 +} HIPfilter_mode; +/** + * Texture descriptor + */ +typedef struct HIP_TEXTURE_DESC_st { + HIPaddress_mode addressMode[3]; /**< Address modes */ + HIPfilter_mode filterMode; /**< Filter mode */ + unsigned int flags; /**< Flags */ + unsigned int maxAnisotropy; /**< Maximum anisotropy ratio */ + HIPfilter_mode mipmapFilterMode; /**< Mipmap filter mode */ + float mipmapLevelBias; /**< Mipmap level bias */ + float minMipmapLevelClamp; /**< Mipmap minimum level clamp */ + float maxMipmapLevelClamp; /**< Mipmap maximum level clamp */ + float borderColor[4]; /**< Border Color */ + int reserved[12]; +} HIP_TEXTURE_DESC; +/** + * hip texture resource view formats + */ +typedef enum hipResourceViewFormat { + hipResViewFormatNone = 0x00, + hipResViewFormatUnsignedChar1 = 0x01, + 
hipResViewFormatUnsignedChar2 = 0x02, + hipResViewFormatUnsignedChar4 = 0x03, + hipResViewFormatSignedChar1 = 0x04, + hipResViewFormatSignedChar2 = 0x05, + hipResViewFormatSignedChar4 = 0x06, + hipResViewFormatUnsignedShort1 = 0x07, + hipResViewFormatUnsignedShort2 = 0x08, + hipResViewFormatUnsignedShort4 = 0x09, + hipResViewFormatSignedShort1 = 0x0a, + hipResViewFormatSignedShort2 = 0x0b, + hipResViewFormatSignedShort4 = 0x0c, + hipResViewFormatUnsignedInt1 = 0x0d, + hipResViewFormatUnsignedInt2 = 0x0e, + hipResViewFormatUnsignedInt4 = 0x0f, + hipResViewFormatSignedInt1 = 0x10, + hipResViewFormatSignedInt2 = 0x11, + hipResViewFormatSignedInt4 = 0x12, + hipResViewFormatHalf1 = 0x13, + hipResViewFormatHalf2 = 0x14, + hipResViewFormatHalf4 = 0x15, + hipResViewFormatFloat1 = 0x16, + hipResViewFormatFloat2 = 0x17, + hipResViewFormatFloat4 = 0x18, + hipResViewFormatUnsignedBlockCompressed1 = 0x19, + hipResViewFormatUnsignedBlockCompressed2 = 0x1a, + hipResViewFormatUnsignedBlockCompressed3 = 0x1b, + hipResViewFormatUnsignedBlockCompressed4 = 0x1c, + hipResViewFormatSignedBlockCompressed4 = 0x1d, + hipResViewFormatUnsignedBlockCompressed5 = 0x1e, + hipResViewFormatSignedBlockCompressed5 = 0x1f, + hipResViewFormatUnsignedBlockCompressed6H = 0x20, + hipResViewFormatSignedBlockCompressed6H = 0x21, + hipResViewFormatUnsignedBlockCompressed7 = 0x22 +}hipResourceViewFormat; +typedef enum HIPresourceViewFormat_enum +{ + HIP_RES_VIEW_FORMAT_NONE = 0x00, /**< No resource view format (use underlying resource format) */ + HIP_RES_VIEW_FORMAT_UINT_1X8 = 0x01, /**< 1 channel unsigned 8-bit integers */ + HIP_RES_VIEW_FORMAT_UINT_2X8 = 0x02, /**< 2 channel unsigned 8-bit integers */ + HIP_RES_VIEW_FORMAT_UINT_4X8 = 0x03, /**< 4 channel unsigned 8-bit integers */ + HIP_RES_VIEW_FORMAT_SINT_1X8 = 0x04, /**< 1 channel signed 8-bit integers */ + HIP_RES_VIEW_FORMAT_SINT_2X8 = 0x05, /**< 2 channel signed 8-bit integers */ + HIP_RES_VIEW_FORMAT_SINT_4X8 = 0x06, /**< 4 channel signed 8-bit 
integers */
+    HIP_RES_VIEW_FORMAT_UINT_1X16 = 0x07,     /**< 1 channel unsigned 16-bit integers */
+    HIP_RES_VIEW_FORMAT_UINT_2X16 = 0x08,     /**< 2 channel unsigned 16-bit integers */
+    HIP_RES_VIEW_FORMAT_UINT_4X16 = 0x09,     /**< 4 channel unsigned 16-bit integers */
+    HIP_RES_VIEW_FORMAT_SINT_1X16 = 0x0a,     /**< 1 channel signed 16-bit integers */
+    HIP_RES_VIEW_FORMAT_SINT_2X16 = 0x0b,     /**< 2 channel signed 16-bit integers */
+    HIP_RES_VIEW_FORMAT_SINT_4X16 = 0x0c,     /**< 4 channel signed 16-bit integers */
+    HIP_RES_VIEW_FORMAT_UINT_1X32 = 0x0d,     /**< 1 channel unsigned 32-bit integers */
+    HIP_RES_VIEW_FORMAT_UINT_2X32 = 0x0e,     /**< 2 channel unsigned 32-bit integers */
+    HIP_RES_VIEW_FORMAT_UINT_4X32 = 0x0f,     /**< 4 channel unsigned 32-bit integers */
+    HIP_RES_VIEW_FORMAT_SINT_1X32 = 0x10,     /**< 1 channel signed 32-bit integers */
+    HIP_RES_VIEW_FORMAT_SINT_2X32 = 0x11,     /**< 2 channel signed 32-bit integers */
+    HIP_RES_VIEW_FORMAT_SINT_4X32 = 0x12,     /**< 4 channel signed 32-bit integers */
+    HIP_RES_VIEW_FORMAT_FLOAT_1X16 = 0x13,    /**< 1 channel 16-bit floating point */
+    HIP_RES_VIEW_FORMAT_FLOAT_2X16 = 0x14,    /**< 2 channel 16-bit floating point */
+    HIP_RES_VIEW_FORMAT_FLOAT_4X16 = 0x15,    /**< 4 channel 16-bit floating point */
+    HIP_RES_VIEW_FORMAT_FLOAT_1X32 = 0x16,    /**< 1 channel 32-bit floating point */
+    HIP_RES_VIEW_FORMAT_FLOAT_2X32 = 0x17,    /**< 2 channel 32-bit floating point */
+    HIP_RES_VIEW_FORMAT_FLOAT_4X32 = 0x18,    /**< 4 channel 32-bit floating point */
+    HIP_RES_VIEW_FORMAT_UNSIGNED_BC1 = 0x19,  /**< Block compressed 1 */
+    HIP_RES_VIEW_FORMAT_UNSIGNED_BC2 = 0x1a,  /**< Block compressed 2 */
+    HIP_RES_VIEW_FORMAT_UNSIGNED_BC3 = 0x1b,  /**< Block compressed 3 */
+    HIP_RES_VIEW_FORMAT_UNSIGNED_BC4 = 0x1c,  /**< Block compressed 4 unsigned */
+    HIP_RES_VIEW_FORMAT_SIGNED_BC4 = 0x1d,    /**< Block compressed 4 signed */
+    HIP_RES_VIEW_FORMAT_UNSIGNED_BC5 = 0x1e,  /**< Block compressed 5 unsigned */
+    HIP_RES_VIEW_FORMAT_SIGNED_BC5 = 0x1f,    /**< Block compressed 5 signed */
+    HIP_RES_VIEW_FORMAT_UNSIGNED_BC6H = 0x20, /**< Block compressed 6 unsigned half-float */
+    HIP_RES_VIEW_FORMAT_SIGNED_BC6H = 0x21,   /**< Block compressed 6 signed half-float */
+    HIP_RES_VIEW_FORMAT_UNSIGNED_BC7 = 0x22   /**< Block compressed 7 */
+} HIPresourceViewFormat;
+/**
+ * HIP resource descriptor
+ */
+typedef struct hipResourceDesc {
+    enum hipResourceType resType;
+    union {
+        struct {
+            hipArray_t array;
+        } array;
+        struct {
+            hipMipmappedArray_t mipmap;
+        } mipmap;
+        struct {
+            void* devPtr;
+            struct hipChannelFormatDesc desc;
+            size_t sizeInBytes;
+        } linear;
+        struct {
+            void* devPtr;
+            struct hipChannelFormatDesc desc;
+            size_t width;
+            size_t height;
+            size_t pitchInBytes;
+        } pitch2D;
+    } res;
+} hipResourceDesc;
+typedef struct HIP_RESOURCE_DESC_st
+{
+    HIPresourcetype resType;                     /**< Resource type */
+    union {
+        struct {
+            hipArray_t hArray;                   /**< HIP array */
+        } array;
+        struct {
+            hipMipmappedArray_t hMipmappedArray; /**< HIP mipmapped array */
+        } mipmap;
+        struct {
+            hipDeviceptr_t devPtr;               /**< Device pointer */
+            hipArray_Format format;              /**< Array format */
+            unsigned int numChannels;            /**< Channels per array element */
+            size_t sizeInBytes;                  /**< Size in bytes */
+        } linear;
+        struct {
+            hipDeviceptr_t devPtr;               /**< Device pointer */
+            hipArray_Format format;              /**< Array format */
+            unsigned int numChannels;            /**< Channels per array element */
+            size_t width;                        /**< Width of the array in elements */
+            size_t height;                       /**< Height of the array in elements */
+            size_t pitchInBytes;                 /**< Pitch between two rows in bytes */
+        } pitch2D;
+        struct {
+            int reserved[32];
+        } reserved;
+    } res;
+    unsigned int flags;                          /**< Flags (must be zero) */
+} HIP_RESOURCE_DESC;
+/**
+ * hip resource view descriptor
+ */
+struct hipResourceViewDesc {
+    enum hipResourceViewFormat format;
+    size_t width;
+    size_t height;
+    size_t depth;
+    unsigned int firstMipmapLevel;
+    unsigned int lastMipmapLevel;
+    unsigned int firstLayer;
+    unsigned int lastLayer;
+};
+/**
+ * Resource view descriptor
+ */
+typedef struct HIP_RESOURCE_VIEW_DESC_st
+{
+    HIPresourceViewFormat format;   /**< Resource view format */
+    size_t width;                   /**< Width of the resource view */
+    size_t height;                  /**< Height of the resource view */
+    size_t depth;                   /**< Depth of the resource view */
+    unsigned int firstMipmapLevel;  /**< First defined mipmap level */
+    unsigned int lastMipmapLevel;   /**< Last defined mipmap level */
+    unsigned int firstLayer;        /**< First layer index */
+    unsigned int lastLayer;         /**< Last layer index */
+    unsigned int reserved[16];
+} HIP_RESOURCE_VIEW_DESC;
+/**
+ * Memory copy types
+ *
+ */
+typedef enum hipMemcpyKind {
+    hipMemcpyHostToHost = 0,            ///< Host-to-Host Copy
+    hipMemcpyHostToDevice = 1,          ///< Host-to-Device Copy
+    hipMemcpyDeviceToHost = 2,          ///< Device-to-Host Copy
+    hipMemcpyDeviceToDevice = 3,        ///< Device-to-Device Copy
+    hipMemcpyDefault = 4,               ///< Runtime will automatically determine
+                                        ///< copy-kind based on virtual addresses.
+    hipMemcpyDeviceToDeviceNoCU = 1024  ///< Device-to-Device Copy without using compute units
+} hipMemcpyKind;
+typedef struct hipPitchedPtr {
+    void* ptr;
+    size_t pitch;
+    size_t xsize;
+    size_t ysize;
+} hipPitchedPtr;
+typedef struct hipExtent {
+    size_t width;  // Width in elements when referring to array memory, in bytes when referring to
+                   // linear memory
+    size_t height;
+    size_t depth;
+} hipExtent;
+typedef struct hipPos {
+    size_t x;
+    size_t y;
+    size_t z;
+} hipPos;
+typedef struct hipMemcpy3DParms {
+    hipArray_t srcArray;
+    struct hipPos srcPos;
+    struct hipPitchedPtr srcPtr;
+    hipArray_t dstArray;
+    struct hipPos dstPos;
+    struct hipPitchedPtr dstPtr;
+    struct hipExtent extent;
+    enum hipMemcpyKind kind;
+} hipMemcpy3DParms;
+typedef struct HIP_MEMCPY3D {
+    size_t srcXInBytes;
+    size_t srcY;
+    size_t srcZ;
+    size_t srcLOD;
+    hipMemoryType srcMemoryType;
+    const void* srcHost;
+    hipDeviceptr_t srcDevice;
+    hipArray_t srcArray;
+    size_t srcPitch;
+    size_t srcHeight;
+    size_t dstXInBytes;
+    size_t dstY;
+    size_t dstZ;
+    size_t dstLOD;
+    hipMemoryType dstMemoryType;
+    void* dstHost;
+    hipDeviceptr_t dstDevice;
+    hipArray_t dstArray;
+    size_t dstPitch;
+    size_t dstHeight;
+    size_t WidthInBytes;
+    size_t Height;
+    size_t Depth;
+} HIP_MEMCPY3D;
+typedef enum hipFunction_attribute {
+    HIP_FUNC_ATTRIBUTE_MAX_THREADS_PER_BLOCK,
+    HIP_FUNC_ATTRIBUTE_SHARED_SIZE_BYTES,
+    HIP_FUNC_ATTRIBUTE_CONST_SIZE_BYTES,
+    HIP_FUNC_ATTRIBUTE_LOCAL_SIZE_BYTES,
+    HIP_FUNC_ATTRIBUTE_NUM_REGS,
+    HIP_FUNC_ATTRIBUTE_PTX_VERSION,
+    HIP_FUNC_ATTRIBUTE_BINARY_VERSION,
+    HIP_FUNC_ATTRIBUTE_CACHE_MODE_CA,
+    HIP_FUNC_ATTRIBUTE_MAX_DYNAMIC_SHARED_SIZE_BYTES,
+    HIP_FUNC_ATTRIBUTE_PREFERRED_SHARED_MEMORY_CARVEOUT,
+    HIP_FUNC_ATTRIBUTE_MAX
+} hipFunction_attribute;
+
+typedef enum hipPointer_attribute {
+    HIP_POINTER_ATTRIBUTE_CONTEXT = 1,    ///< The context on which a pointer was allocated
+                                          ///< @warning - not supported in HIP
+    HIP_POINTER_ATTRIBUTE_MEMORY_TYPE,    ///< memory type describing location of a pointer
+    HIP_POINTER_ATTRIBUTE_DEVICE_POINTER, ///< address at which the pointer is allocated on device
+    HIP_POINTER_ATTRIBUTE_HOST_POINTER,   ///< address at which the pointer is allocated on host
+    HIP_POINTER_ATTRIBUTE_P2P_TOKENS,     ///< A pair of tokens for use with linux kernel interface
+                                          ///< @warning - not supported in HIP
+    HIP_POINTER_ATTRIBUTE_SYNC_MEMOPS,    ///< Synchronize every synchronous memory operation
+                                          ///< initiated on this region
+    HIP_POINTER_ATTRIBUTE_BUFFER_ID,      ///< Unique ID for an allocated memory region
+    HIP_POINTER_ATTRIBUTE_IS_MANAGED,     ///< Indicates if the pointer points to managed memory
+    HIP_POINTER_ATTRIBUTE_DEVICE_ORDINAL, ///< device ordinal of a device on which a pointer
+                                          ///< was allocated or registered
+    HIP_POINTER_ATTRIBUTE_IS_LEGACY_HIP_IPC_CAPABLE, ///< if this pointer maps to an allocation
+                                                     ///< that is suitable for hipIpcGetMemHandle
+                                                     ///< @warning - not supported in HIP
+    HIP_POINTER_ATTRIBUTE_RANGE_START_ADDR, ///< Starting address for this requested pointer
+    HIP_POINTER_ATTRIBUTE_RANGE_SIZE,     ///< Size of the address range for this requested pointer
+    HIP_POINTER_ATTRIBUTE_MAPPED,         ///< tells if this pointer is in a valid address range
+                                          ///< that is mapped to a backing allocation
+    HIP_POINTER_ATTRIBUTE_ALLOWED_HANDLE_TYPES, ///< Bitmask of allowed hipmemAllocationHandleType
+                                                ///< for this allocation @warning - not supported in HIP
+    HIP_POINTER_ATTRIBUTE_IS_GPU_DIRECT_RDMA_CAPABLE, ///< returns if the memory referenced by
+                                                      ///< this pointer can be used with the GPUDirect RDMA API
+                                                      ///< @warning - not supported in HIP
+    HIP_POINTER_ATTRIBUTE_ACCESS_FLAGS,   ///< Returns the access flags the device associated with
+                                          ///< for the corresponding memory referenced by the ptr
+    HIP_POINTER_ATTRIBUTE_MEMPOOL_HANDLE  ///< Returns the mempool handle for the allocation if
+                                          ///< it was allocated from a mempool
+                                          ///< @warning - not supported in HIP
+} hipPointer_attribute;
+
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/stub/hip/hip_gl_interop.h
Added
@@ -0,0 +1,31 @@
+/*
+Copyright (c) 2023 Advanced Micro Devices, Inc. All rights reserved.
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.
+*/
+
+#pragma once
+
+typedef enum hipGLDeviceList {
+    hipGLDeviceListAll = 1,           ///< All hip devices used by current OpenGL context.
+    hipGLDeviceListCurrentFrame = 2,  ///< Hip devices used by current OpenGL context in current
+                                      ///< frame
+    hipGLDeviceListNextFrame = 3      ///< Hip devices used by current OpenGL context in next
+                                      ///< frame.
+} hipGLDeviceList;
\ No newline at end of file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/stub/hip/hip_runtime.h
Added
@@ -0,0 +1,27 @@
+/*
+Copyright (c) 2015 - 2023 Advanced Micro Devices, Inc. All rights reserved.
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.
+*/
+
+#pragma once
+
+#include <hip/texture_types.h>
+#include <hip/hip_runtime_api.h>
+#include <hip/driver_types.h>
\ No newline at end of file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/stub/hip/hip_runtime_api.h
Added
@@ -0,0 +1,508 @@ +/* +Copyright (c) 2015 - 2023 Advanced Micro Devices, Inc. All rights reserved. + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in +all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN +THE SOFTWARE. +*/ + +#pragma once + +#include <stdint.h> +#include <stddef.h> +#include <hip/texture_types.h> + +typedef struct { + // 32-bit Atomics + unsigned hasGlobalInt32Atomics : 1; ///< 32-bit integer atomics for global memory. + unsigned hasGlobalFloatAtomicExch : 1; ///< 32-bit float atomic exch for global memory. + unsigned hasSharedInt32Atomics : 1; ///< 32-bit integer atomics for shared memory. + unsigned hasSharedFloatAtomicExch : 1; ///< 32-bit float atomic exch for shared memory. + unsigned hasFloatAtomicAdd : 1; ///< 32-bit float atomic add in global and shared memory. + + // 64-bit Atomics + unsigned hasGlobalInt64Atomics : 1; ///< 64-bit integer atomics for global memory. + unsigned hasSharedInt64Atomics : 1; ///< 64-bit integer atomics for shared memory. + + // Doubles + unsigned hasDoubles : 1; ///< Double-precision floating point. 
+ + // Warp cross-lane operations + unsigned hasWarpVote : 1; ///< Warp vote instructions (__any, __all). + unsigned hasWarpBallot : 1; ///< Warp ballot instructions (__ballot). + unsigned hasWarpShuffle : 1; ///< Warp shuffle operations. (__shfl_*). + unsigned hasFunnelShift : 1; ///< Funnel two words into one with shift&mask caps. + + // Sync + unsigned hasThreadFenceSystem : 1; ///< __threadfence_system. + unsigned hasSyncThreadsExt : 1; ///< __syncthreads_count, syncthreads_and, syncthreads_or. + + // Misc + unsigned hasSurfaceFuncs : 1; ///< Surface functions. + unsigned has3dGrid : 1; ///< Grid and group dims are 3D (rather than 2D). + unsigned hasDynamicParallelism : 1; ///< Dynamic parallelism. +} hipDeviceArch_t; + +typedef struct hipUUID_t { + char bytes16; +} hipUUID; + +typedef enum hipDeviceAttribute_t { + hipDeviceAttributeCudaCompatibleBegin = 0, + + hipDeviceAttributeEccEnabled = hipDeviceAttributeCudaCompatibleBegin, ///< Whether ECC support is enabled. + hipDeviceAttributeAccessPolicyMaxWindowSize, ///< Cuda only. The maximum size of the window policy in bytes. + hipDeviceAttributeAsyncEngineCount, ///< Asynchronous engines number. + hipDeviceAttributeCanMapHostMemory, ///< Whether host memory can be mapped into device address space + hipDeviceAttributeCanUseHostPointerForRegisteredMem,///< Device can access host registered memory + ///< at the same virtual address as the CPU + hipDeviceAttributeClockRate, ///< Peak clock frequency in kilohertz. + hipDeviceAttributeComputeMode, ///< Compute mode that device is currently in. + hipDeviceAttributeComputePreemptionSupported, ///< Device supports Compute Preemption. + hipDeviceAttributeConcurrentKernels, ///< Device can possibly execute multiple kernels concurrently. 
+ hipDeviceAttributeConcurrentManagedAccess, ///< Device can coherently access managed memory concurrently with the CPU + hipDeviceAttributeCooperativeLaunch, ///< Support cooperative launch + hipDeviceAttributeCooperativeMultiDeviceLaunch, ///< Support cooperative launch on multiple devices + hipDeviceAttributeDeviceOverlap, ///< Device can concurrently copy memory and execute a kernel. + ///< Deprecated. Use instead asyncEngineCount. + hipDeviceAttributeDirectManagedMemAccessFromHost, ///< Host can directly access managed memory on + ///< the device without migration + hipDeviceAttributeGlobalL1CacheSupported, ///< Device supports caching globals in L1 + hipDeviceAttributeHostNativeAtomicSupported, ///< Link between the device and the host supports native atomic operations + hipDeviceAttributeIntegrated, ///< Device is integrated GPU + hipDeviceAttributeIsMultiGpuBoard, ///< Multiple GPU devices. + hipDeviceAttributeKernelExecTimeout, ///< Run time limit for kernels executed on the device + hipDeviceAttributeL2CacheSize, ///< Size of L2 cache in bytes. 0 if the device doesn't have L2 cache. + hipDeviceAttributeLocalL1CacheSupported, ///< caching locals in L1 is supported + hipDeviceAttributeLuid, ///< 8-byte locally unique identifier in 8 bytes. Undefined on TCC and non-Windows platforms + hipDeviceAttributeLuidDeviceNodeMask, ///< Luid device node mask. Undefined on TCC and non-Windows platforms + hipDeviceAttributeComputeCapabilityMajor, ///< Major compute capability version number. + hipDeviceAttributeManagedMemory, ///< Device supports allocating managed memory on this system + hipDeviceAttributeMaxBlocksPerMultiProcessor, ///< Max block size per multiprocessor + hipDeviceAttributeMaxBlockDimX, ///< Max block size in width. + hipDeviceAttributeMaxBlockDimY, ///< Max block size in height. + hipDeviceAttributeMaxBlockDimZ, ///< Max block size in depth. + hipDeviceAttributeMaxGridDimX, ///< Max grid size in width. 
+ hipDeviceAttributeMaxGridDimY, ///< Max grid size in height. + hipDeviceAttributeMaxGridDimZ, ///< Max grid size in depth. + hipDeviceAttributeMaxSurface1D, ///< Maximum size of 1D surface. + hipDeviceAttributeMaxSurface1DLayered, ///< Cuda only. Maximum dimensions of 1D layered surface. + hipDeviceAttributeMaxSurface2D, ///< Maximum dimension (width, height) of 2D surface. + hipDeviceAttributeMaxSurface2DLayered, ///< Cuda only. Maximum dimensions of 2D layered surface. + hipDeviceAttributeMaxSurface3D, ///< Maximum dimension (width, height, depth) of 3D surface. + hipDeviceAttributeMaxSurfaceCubemap, ///< Cuda only. Maximum dimensions of Cubemap surface. + hipDeviceAttributeMaxSurfaceCubemapLayered, ///< Cuda only. Maximum dimension of Cubemap layered surface. + hipDeviceAttributeMaxTexture1DWidth, ///< Maximum size of 1D texture. + hipDeviceAttributeMaxTexture1DLayered, ///< Maximum dimensions of 1D layered texture. + hipDeviceAttributeMaxTexture1DLinear, ///< Maximum number of elements allocatable in a 1D linear texture. + ///< Use cudaDeviceGetTexture1DLinearMaxWidth() instead on Cuda. + hipDeviceAttributeMaxTexture1DMipmap, ///< Maximum size of 1D mipmapped texture. + hipDeviceAttributeMaxTexture2DWidth, ///< Maximum dimension width of 2D texture. + hipDeviceAttributeMaxTexture2DHeight, ///< Maximum dimension hight of 2D texture. + hipDeviceAttributeMaxTexture2DGather, ///< Maximum dimensions of 2D texture if gather operations performed. + hipDeviceAttributeMaxTexture2DLayered, ///< Maximum dimensions of 2D layered texture. + hipDeviceAttributeMaxTexture2DLinear, ///< Maximum dimensions (width, height, pitch) of 2D textures bound to pitched memory. + hipDeviceAttributeMaxTexture2DMipmap, ///< Maximum dimensions of 2D mipmapped texture. + hipDeviceAttributeMaxTexture3DWidth, ///< Maximum dimension width of 3D texture. + hipDeviceAttributeMaxTexture3DHeight, ///< Maximum dimension height of 3D texture. 
+ hipDeviceAttributeMaxTexture3DDepth, ///< Maximum dimension depth of 3D texture. + hipDeviceAttributeMaxTexture3DAlt, ///< Maximum dimensions of alternate 3D texture. + hipDeviceAttributeMaxTextureCubemap, ///< Maximum dimensions of Cubemap texture + hipDeviceAttributeMaxTextureCubemapLayered, ///< Maximum dimensions of Cubemap layered texture. + hipDeviceAttributeMaxThreadsDim, ///< Maximum dimension of a block + hipDeviceAttributeMaxThreadsPerBlock, ///< Maximum number of threads per block. + hipDeviceAttributeMaxThreadsPerMultiProcessor, ///< Maximum resident threads per multiprocessor. + hipDeviceAttributeMaxPitch, ///< Maximum pitch in bytes allowed by memory copies + hipDeviceAttributeMemoryBusWidth, ///< Global memory bus width in bits. + hipDeviceAttributeMemoryClockRate, ///< Peak memory clock frequency in kilohertz. + hipDeviceAttributeComputeCapabilityMinor, ///< Minor compute capability version number. + hipDeviceAttributeMultiGpuBoardGroupID, ///< Unique ID of device group on the same multi-GPU board + hipDeviceAttributeMultiprocessorCount, ///< Number of multiprocessors on the device. + hipDeviceAttributeUnused1, ///< Previously hipDeviceAttributeName + hipDeviceAttributePageableMemoryAccess, ///< Device supports coherently accessing pageable memory + ///< without calling hipHostRegister on it + hipDeviceAttributePageableMemoryAccessUsesHostPageTables, ///< Device accesses pageable memory via the host's page tables + hipDeviceAttributePciBusId, ///< PCI Bus ID. + hipDeviceAttributePciDeviceId, ///< PCI Device ID. + hipDeviceAttributePciDomainID, ///< PCI Domain ID. + hipDeviceAttributePersistingL2CacheMaxSize, ///< Maximum l2 persisting lines capacity in bytes + hipDeviceAttributeMaxRegistersPerBlock, ///< 32-bit registers available to a thread block. This number is shared + ///< by all thread blocks simultaneously resident on a multiprocessor. + hipDeviceAttributeMaxRegistersPerMultiprocessor, ///< 32-bit registers available per block. 
+ hipDeviceAttributeReservedSharedMemPerBlock, ///< Shared memory reserved by CUDA driver per block. + hipDeviceAttributeMaxSharedMemoryPerBlock, ///< Maximum shared memory available per block in bytes. + hipDeviceAttributeSharedMemPerBlockOptin, ///< Maximum shared memory per block usable by special opt in. + hipDeviceAttributeSharedMemPerMultiprocessor, ///< Shared memory available per multiprocessor. + hipDeviceAttributeSingleToDoublePrecisionPerfRatio, ///< Cuda only. Performance ratio of single precision to double precision. + hipDeviceAttributeStreamPrioritiesSupported, ///< Whether to support stream priorities. + hipDeviceAttributeSurfaceAlignment, ///< Alignment requirement for surfaces + hipDeviceAttributeTccDriver, ///< Cuda only. Whether device is a Tesla device using TCC driver + hipDeviceAttributeTextureAlignment, ///< Alignment requirement for textures + hipDeviceAttributeTexturePitchAlignment, ///< Pitch alignment requirement for 2D texture references bound to pitched memory; + hipDeviceAttributeTotalConstantMemory, ///< Constant memory size in bytes. + hipDeviceAttributeTotalGlobalMem, ///< Global memory available on devicice. + hipDeviceAttributeUnifiedAddressing, ///< Cuda only. An unified address space shared with the host. + hipDeviceAttributeUnused2, ///< Previously hipDeviceAttributeUuid + hipDeviceAttributeWarpSize, ///< Warp size in threads. 
+ hipDeviceAttributeMemoryPoolsSupported, ///< Device supports HIP Stream Ordered Memory Allocator + hipDeviceAttributeVirtualMemoryManagementSupported, ///< Device supports HIP virtual memory management + hipDeviceAttributeHostRegisterSupported, ///< Can device support host memory registration via hipHostRegister + hipDeviceAttributeMemoryPoolSupportedHandleTypes, ///< Supported handle mask for HIP Stream Ordered Memory Allocator + + hipDeviceAttributeCudaCompatibleEnd = 9999, + hipDeviceAttributeAmdSpecificBegin = 10000, + + hipDeviceAttributeClockInstructionRate = hipDeviceAttributeAmdSpecificBegin, ///< Frequency in khz of the timer used by the device-side "clock*" + hipDeviceAttributeUnused3, ///< Previously hipDeviceAttributeArch + hipDeviceAttributeMaxSharedMemoryPerMultiprocessor, ///< Maximum Shared Memory PerMultiprocessor. + hipDeviceAttributeUnused4, ///< Previously hipDeviceAttributeGcnArch + hipDeviceAttributeUnused5, ///< Previously hipDeviceAttributeGcnArchName + hipDeviceAttributeHdpMemFlushCntl, ///< Address of the HDP_MEM_COHERENCY_FLUSH_CNTL register + hipDeviceAttributeHdpRegFlushCntl, ///< Address of the HDP_REG_COHERENCY_FLUSH_CNTL register + hipDeviceAttributeCooperativeMultiDeviceUnmatchedFunc, ///< Supports cooperative launch on multiple + ///< devices with unmatched functions + hipDeviceAttributeCooperativeMultiDeviceUnmatchedGridDim, ///< Supports cooperative launch on multiple + ///< devices with unmatched grid dimensions + hipDeviceAttributeCooperativeMultiDeviceUnmatchedBlockDim, ///< Supports cooperative launch on multiple + ///< devices with unmatched block dimensions + hipDeviceAttributeCooperativeMultiDeviceUnmatchedSharedMem, ///< Supports cooperative launch on multiple + ///< devices with unmatched shared memories + hipDeviceAttributeIsLargeBar, ///< Whether it is LargeBar + hipDeviceAttributeAsicRevision, ///< Revision of the GPU in this device + hipDeviceAttributeCanUseStreamWaitValue, ///< '1' if Device supports 
hipStreamWaitValue32() and + ///< hipStreamWaitValue64(), '0' otherwise. + hipDeviceAttributeImageSupport, ///< '1' if Device supports image, '0' otherwise. + hipDeviceAttributePhysicalMultiProcessorCount, ///< All available physical compute + ///< units for the device + hipDeviceAttributeFineGrainSupport, ///< '1' if Device supports fine grain, '0' otherwise + hipDeviceAttributeWallClockRate, ///< Constant frequency of wall clock in kilohertz. + + hipDeviceAttributeAmdSpecificEnd = 19999, + hipDeviceAttributeVendorSpecificBegin = 20000, + // Extended attributes for vendors +} hipDeviceAttribute_t; + +#define hipGetDeviceProperties hipGetDevicePropertiesR0600 +#define hipDeviceProp_t hipDeviceProp_tR0600 +#define hipChooseDevice hipChooseDeviceR0600 + +typedef struct hipDeviceProp_t { + char name256; ///< Device name. + hipUUID uuid; ///< UUID of a device + char luid8; ///< 8-byte unique identifier. Only valid on windows + unsigned int luidDeviceNodeMask; ///< LUID node mask + size_t totalGlobalMem; ///< Size of global memory region (in bytes). + size_t sharedMemPerBlock; ///< Size of shared memory per block (in bytes). + int regsPerBlock; ///< Registers per block. + int warpSize; ///< Warp size. + size_t memPitch; ///< Maximum pitch in bytes allowed by memory copies + ///< pitched memory + int maxThreadsPerBlock; ///< Max work items per work group or workgroup max size. + int maxThreadsDim3; ///< Max number of threads in each dimension (XYZ) of a block. + int maxGridSize3; ///< Max grid dimensions (XYZ). + int clockRate; ///< Max clock frequency of the multiProcessors in khz. + size_t totalConstMem; ///< Size of shared constant memory region on the device + ///< (in bytes). + int major; ///< Major compute capability. On HCC, this is an approximation and features may + ///< differ from CUDA CC. See the arch feature flags for portable ways to query + ///< feature caps. + int minor; ///< Minor compute capability. 
On HCC, this is an approximation and features may + ///< differ from CUDA CC. See the arch feature flags for portable ways to query + ///< feature caps. + size_t textureAlignment; ///< Alignment requirement for textures + size_t texturePitchAlignment; ///< Pitch alignment requirement for texture references bound to + int deviceOverlap; ///< Deprecated. Use asyncEngineCount instead + int multiProcessorCount; ///< Number of multi-processors (compute units). + int kernelExecTimeoutEnabled; ///< Run time limit for kernels executed on the device + int integrated; ///< APU vs dGPU + int canMapHostMemory; ///< Check whether HIP can map host memory + int computeMode; ///< Compute mode. + int maxTexture1D; ///< Maximum number of elements in 1D images + int maxTexture1DMipmap; ///< Maximum 1D mipmap texture size + int maxTexture1DLinear; ///< Maximum size for 1D textures bound to linear memory + int maxTexture2D2; ///< Maximum dimensions (width, height) of 2D images, in image elements + int maxTexture2DMipmap2; ///< Maximum number of elements in 2D array mipmap of images + int maxTexture2DLinear3; ///< Maximum 2D tex dimensions if tex are bound to pitched memory + int maxTexture2DGather2; ///< Maximum 2D tex dimensions if gather has to be performed + int maxTexture3D3; ///< Maximum dimensions (width, height, depth) of 3D images, in image + ///< elements + int maxTexture3DAlt3; ///< Maximum alternate 3D texture dims + int maxTextureCubemap; ///< Maximum cubemap texture dims + int maxTexture1DLayered2; ///< Maximum number of elements in 1D array images + int maxTexture2DLayered3; ///< Maximum number of elements in 2D array images + int maxTextureCubemapLayered2; ///< Maximum cubemaps layered texture dims + int maxSurface1D; ///< Maximum 1D surface size + int maxSurface2D2; ///< Maximum 2D surface size + int maxSurface3D3; ///< Maximum 3D surface size + int maxSurface1DLayered2; ///< Maximum 1D layered surface size + int maxSurface2DLayered3; ///< Maximum 2D layared surface 
size + int maxSurfaceCubemap; ///< Maximum cubemap surface size + int maxSurfaceCubemapLayered2; ///< Maximum cubemap layered surface size + size_t surfaceAlignment; ///< Alignment requirement for surface + int concurrentKernels; ///< Device can possibly execute multiple kernels concurrently. + int ECCEnabled; ///< Device has ECC support enabled + int pciBusID; ///< PCI Bus ID. + int pciDeviceID; ///< PCI Device ID. + int pciDomainID; ///< PCI Domain ID + int tccDriver; ///< 1:If device is Tesla device using TCC driver, else 0 + int asyncEngineCount; ///< Number of async engines + int unifiedAddressing; ///< Does device and host share unified address space + int memoryClockRate; ///< Max global memory clock frequency in khz. + int memoryBusWidth; ///< Global memory bus width in bits. + int l2CacheSize; ///< L2 cache size. + int persistingL2CacheMaxSize; ///< Device's max L2 persisting lines in bytes + int maxThreadsPerMultiProcessor; ///< Maximum resident threads per multi-processor. + int streamPrioritiesSupported; ///< Device supports stream priority + int globalL1CacheSupported; ///< Indicates globals are cached in L1 + int localL1CacheSupported; ///< Locals are cahced in L1 + size_t sharedMemPerMultiprocessor; ///< Amount of shared memory available per multiprocessor. + int regsPerMultiprocessor; ///< registers available per multiprocessor + int managedMemory; ///< Device supports allocating managed memory on this system + int isMultiGpuBoard; ///< 1 if device is on a multi-GPU board, 0 if not. + int multiGpuBoardGroupID; ///< Unique identifier for a group of devices on same multiboard GPU + int hostNativeAtomicSupported; ///< Link between host and device supports native atomics + int singleToDoublePrecisionPerfRatio; ///< Deprecated. CUDA only. 
+ int pageableMemoryAccess; ///< Device supports coherently accessing pageable memory + ///< without calling hipHostRegister on it + int concurrentManagedAccess; ///< Device can coherently access managed memory concurrently with + ///< the CPU + int computePreemptionSupported; ///< Is compute preemption supported on the device + int canUseHostPointerForRegisteredMem; ///< Device can access host registered memory with same + ///< address as the host + int cooperativeLaunch; ///< HIP device supports cooperative launch + int cooperativeMultiDeviceLaunch; ///< HIP device supports cooperative launch on multiple + ///< devices + size_t + sharedMemPerBlockOptin; ///< Per device m ax shared mem per block usable by special opt in + int pageableMemoryAccessUsesHostPageTables; ///< Device accesses pageable memory via the host's + ///< page tables + int directManagedMemAccessFromHost; ///< Host can directly access managed memory on the device + ///< without migration + int maxBlocksPerMultiProcessor; ///< Max number of blocks on CU + int accessPolicyMaxWindowSize; ///< Max value of access policy window + size_t reservedSharedMemPerBlock; ///< Shared memory reserved by driver per block + int hostRegisterSupported; ///< Device supports hipHostRegister + int sparseHipArraySupported; ///< Indicates if device supports sparse hip arrays + int hostRegisterReadOnlySupported; ///< Device supports using the hipHostRegisterReadOnly flag + ///< with hipHostRegistger + int timelineSemaphoreInteropSupported; ///< Indicates external timeline semaphore support + int memoryPoolsSupported; ///< Indicates if device supports hipMallocAsync and hipMemPool APIs + int gpuDirectRDMASupported; ///< Indicates device support of RDMA APIs + unsigned int gpuDirectRDMAFlushWritesOptions; ///< Bitmask to be interpreted according to + ///< hipFlushGPUDirectRDMAWritesOptions + int gpuDirectRDMAWritesOrdering; ///< value of hipGPUDirectRDMAWritesOrdering + unsigned int + memoryPoolSupportedHandleTypes; ///< 
Bitmask of handle types support with mempool based IPC + int deferredMappingHipArraySupported; ///< Device supports deferred mapping HIP arrays and HIP + ///< mipmapped arrays + int ipcEventSupported; ///< Device supports IPC events + int clusterLaunch; ///< Device supports cluster launch + int unifiedFunctionPointers; ///< Indicates device supports unified function pointers + int reserved63; ///< CUDA Reserved. + + int hipReserved32; ///< Reserved for adding new entries for HIP/CUDA. + + /* HIP Only struct members */ + char gcnArchName256; ///< AMD GCN Arch Name. HIP Only. + size_t maxSharedMemoryPerMultiProcessor; ///< Maximum Shared Memory Per CU. HIP Only. + int clockInstructionRate; ///< Frequency in khz of the timer used by the device-side "clock*" + ///< instructions. New for HIP. + hipDeviceArch_t arch; ///< Architectural feature flags. New for HIP. + unsigned int* hdpMemFlushCntl; ///< Addres of HDP_MEM_COHERENCY_FLUSH_CNTL register + unsigned int* hdpRegFlushCntl; ///< Addres of HDP_REG_COHERENCY_FLUSH_CNTL register + int cooperativeMultiDeviceUnmatchedFunc; ///< HIP device supports cooperative launch on + ///< multiple + /// devices with unmatched functions + int cooperativeMultiDeviceUnmatchedGridDim; ///< HIP device supports cooperative launch on + ///< multiple + /// devices with unmatched grid dimensions + int cooperativeMultiDeviceUnmatchedBlockDim; ///< HIP device supports cooperative launch on + ///< multiple + /// devices with unmatched block dimensions + int cooperativeMultiDeviceUnmatchedSharedMem; ///< HIP device supports cooperative launch on + ///< multiple + /// devices with unmatched shared memories + int isLargeBar; ///< 1: if it is a large PCI bar device, else 0 + int asicRevision; ///< Revision of the GPU in this device +} hipDeviceProp_t; + +typedef enum hipMemoryType { + hipMemoryTypeUnregistered = 0, ///< Unregistered memory + hipMemoryTypeHost = 1, ///< Memory is physically located on host + hipMemoryTypeDevice = 2, ///< Memory is 
physically located on device. (see deviceId for + ///< specific device) + hipMemoryTypeManaged = 3, ///< Managed memory, automatically managed by the unified + ///< memory system + ///< placeholder for new values. + hipMemoryTypeArray = 10, ///< Array memory, physically located on device. (see deviceId for + ///< specific device) + hipMemoryTypeUnified = 11 ///< unified address space + +} hipMemoryType; + +typedef enum hipError_t { + hipSuccess = 0, ///< Successful completion. + hipErrorInvalidValue = 1, ///< One or more of the parameters passed to the API call is NULL + ///< or not in an acceptable range. + hipErrorOutOfMemory = 2, ///< Out of memory. + // Deprecated + hipErrorMemoryAllocation = 2, ///< Memory allocation error. + hipErrorNotInitialized = 3, ///< Not initialized + // Deprecated + hipErrorInitializationError = 3, + hipErrorDeinitialized = 4, ///< Deinitialized + hipErrorProfilerDisabled = 5, + hipErrorProfilerNotInitialized = 6, + hipErrorProfilerAlreadyStarted = 7, + hipErrorProfilerAlreadyStopped = 8, + hipErrorInvalidConfiguration = 9, ///< Invalid configuration + hipErrorInvalidPitchValue = 12, ///< Invalid pitch value + hipErrorInvalidSymbol = 13, ///< Invalid symbol + hipErrorInvalidDevicePointer = 17, ///< Invalid Device Pointer + hipErrorInvalidMemcpyDirection = 21, ///< Invalid memory copy direction + hipErrorInsufficientDriver = 35, + hipErrorMissingConfiguration = 52, + hipErrorPriorLaunchFailure = 53, + hipErrorInvalidDeviceFunction = 98, ///< Invalid device function + hipErrorNoDevice = 100, ///< Call to hipGetDeviceCount returned 0 devices + hipErrorInvalidDevice = 101, ///< DeviceID must be in the range 0 to (number of compute devices - 1). + hipErrorInvalidImage = 200, ///< Invalid image + hipErrorInvalidContext = 201, ///< Produced when input context is invalid. 
+ hipErrorContextAlreadyCurrent = 202, + hipErrorMapFailed = 205, + // Deprecated + hipErrorMapBufferObjectFailed = 205, ///< Produced when the IPC memory attach failed from ROCr. + hipErrorUnmapFailed = 206, + hipErrorArrayIsMapped = 207, + hipErrorAlreadyMapped = 208, + hipErrorNoBinaryForGpu = 209, + hipErrorAlreadyAcquired = 210, + hipErrorNotMapped = 211, + hipErrorNotMappedAsArray = 212, + hipErrorNotMappedAsPointer = 213, + hipErrorECCNotCorrectable = 214, + hipErrorUnsupportedLimit = 215, ///< Unsupported limit + hipErrorContextAlreadyInUse = 216, ///< The context is already in use + hipErrorPeerAccessUnsupported = 217, + hipErrorInvalidKernelFile = 218, ///< In CUDA DRV, it is CUDA_ERROR_INVALID_PTX + hipErrorInvalidGraphicsContext = 219, + hipErrorInvalidSource = 300, ///< Invalid source. + hipErrorFileNotFound = 301, ///< The file is not found. + hipErrorSharedObjectSymbolNotFound = 302, + hipErrorSharedObjectInitFailed = 303, ///< Failed to initialize shared object. + hipErrorOperatingSystem = 304, ///< Not the correct operating system + hipErrorInvalidHandle = 400, ///< Invalid handle + // Deprecated + hipErrorInvalidResourceHandle = 400, ///< Resource handle (hipEvent_t or hipStream_t) invalid. + hipErrorIllegalState = 401, ///< Resource required is not in a valid state to perform operation. + hipErrorNotFound = 500, ///< Not found + hipErrorNotReady = 600, ///< Indicates that asynchronous operations enqueued earlier are not + ///< ready. This is not actually an error, but is used to distinguish + ///< from hipSuccess (which indicates completion). APIs that return + ///< this error include hipEventQuery and hipStreamQuery. + hipErrorIllegalAddress = 700, + hipErrorLaunchOutOfResources = 701, ///< Out of resources error. + hipErrorLaunchTimeOut = 702, ///< Timeout for the launch. + hipErrorPeerAccessAlreadyEnabled = 704, ///< Peer access was already enabled from the current + ///< device. 
+ hipErrorPeerAccessNotEnabled = 705, ///< Peer access was never enabled from the current device. + hipErrorSetOnActiveProcess = 708, ///< The process is active. + hipErrorContextIsDestroyed = 709, ///< The context is already destroyed + hipErrorAssert = 710, ///< Produced when the kernel calls assert. + hipErrorHostMemoryAlreadyRegistered = 712, ///< Produced when trying to lock page-locked + ///< memory. + hipErrorHostMemoryNotRegistered = 713, ///< Produced when trying to unlock non-page-locked + ///< memory. + hipErrorLaunchFailure = 719, ///< An exception occurred on the device while executing a kernel. + hipErrorCooperativeLaunchTooLarge = 720, ///< This error indicates that the number of blocks + ///< launched per grid for a kernel that was launched + ///< via cooperative launch APIs exceeds the maximum + ///< number of allowed blocks for the current device. + hipErrorNotSupported = 801, ///< Produced when the hip API is not supported/implemented + hipErrorStreamCaptureUnsupported = 900, ///< The operation is not permitted when the stream + ///< is capturing. + hipErrorStreamCaptureInvalidated = 901, ///< The current capture sequence on the stream + ///< has been invalidated due to a previous error. + hipErrorStreamCaptureMerge = 902, ///< The operation would have resulted in a merge of + ///< two independent capture sequences. + hipErrorStreamCaptureUnmatched = 903, ///< The capture was not initiated in this stream. + hipErrorStreamCaptureUnjoined = 904, ///< The capture sequence contains a fork that was not + ///< joined to the primary stream. + hipErrorStreamCaptureIsolation = 905, ///< A dependency would have been created which crosses + ///< the capture sequence boundary. Only implicit + ///< in-stream ordering dependencies are allowed + ///< to cross the boundary + hipErrorStreamCaptureImplicit = 906, ///< The operation would have resulted in a disallowed + ///< implicit dependency on a current capture sequence + ///< from hipStreamLegacy. 
+ hipErrorCapturedEvent = 907, ///< The operation is not permitted on an event which was last + ///< recorded in a capturing stream. + hipErrorStreamCaptureWrongThread = 908, ///< A stream capture sequence not initiated with + ///< the hipStreamCaptureModeRelaxed argument to + ///< hipStreamBeginCapture was passed to + ///< hipStreamEndCapture in a different thread. + hipErrorGraphExecUpdateFailure = 910, ///< This error indicates that the graph update + ///< was not performed because it included changes which + ///< violated constraints specific to instantiated graph + ///< update. + hipErrorUnknown = 999, ///< Unknown error. + // HSA Runtime Error Codes start here. + hipErrorRuntimeMemory = 1052, ///< HSA runtime memory call returned error. Typically not seen + ///< in production systems. + hipErrorRuntimeOther = 1053, ///< HSA runtime call other than memory returned error. Typically + ///< not seen in production systems. + hipErrorTbd ///< Marker that more error codes are needed. +} hipError_t; + +typedef enum hipGraphicsRegisterFlags { + hipGraphicsRegisterFlagsNone = 0, + hipGraphicsRegisterFlagsReadOnly = 1, ///< HIP will not write to this registered resource + hipGraphicsRegisterFlagsWriteDiscard = + 2, ///< HIP will only write and will not read from this registered resource + hipGraphicsRegisterFlagsSurfaceLoadStore = 4, ///< HIP will bind this resource to a surface + hipGraphicsRegisterFlagsTextureGather = + 8 ///< HIP will perform texture gather operations on this registered resource +} hipGraphicsRegisterFlags; + +typedef struct _hipGraphicsResource hipGraphicsResource; + +typedef hipGraphicsResource* hipGraphicsResource_t; + +typedef struct ihipStream_t* hipStream_t; +typedef struct ihipModule_t* hipModule_t; +typedef struct ihipModuleSymbol_t* hipFunction_t; +typedef struct ihipEvent_t* hipEvent_t; + +/** Default stream creation flags. 
These are used with hipStreamCreate().*/ +#define hipStreamDefault 0x00 + +/** Stream does not implicitly synchronize with null stream.*/ +#define hipStreamNonBlocking 0x01 + +// Flags that can be used with hipEventCreateWithFlags. +/** Default flags.*/ +#define hipEventDefault 0x0 + +/** Waiting will yield CPU. Power-friendly and usage-friendly but may increase latency.*/ +#define hipEventBlockingSync 0x1 + +/** Disable event's capability to record timing information. May improve performance.*/ +#define hipEventDisableTiming 0x2 + +/** Event can support IPC. hipEventDisableTiming also must be set.*/ +#define hipEventInterprocess 0x4 +
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/stub/hip/hiprtc.h
Added
@@ -0,0 +1,45 @@ +/* +Copyright (c) 2015 - 2023 Advanced Micro Devices, Inc. All rights reserved. + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in +all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN +THE SOFTWARE. 
+*/ + +#pragma once + +G_BEGIN_DECLS + +typedef enum hiprtcResult { + HIPRTC_SUCCESS = 0, ///< Success + HIPRTC_ERROR_OUT_OF_MEMORY = 1, ///< Out of memory + HIPRTC_ERROR_PROGRAM_CREATION_FAILURE = 2, ///< Failed to create program + HIPRTC_ERROR_INVALID_INPUT = 3, ///< Invalid input + HIPRTC_ERROR_INVALID_PROGRAM = 4, ///< Invalid program + HIPRTC_ERROR_INVALID_OPTION = 5, ///< Invalid option + HIPRTC_ERROR_COMPILATION = 6, ///< Compilation error + HIPRTC_ERROR_BUILTIN_OPERATION_FAILURE = 7, ///< Failed in builtin operation + HIPRTC_ERROR_NO_NAME_EXPRESSIONS_AFTER_COMPILATION = 8, ///< No name expression after compilation + HIPRTC_ERROR_NO_LOWERED_NAMES_BEFORE_COMPILATION = 9, ///< No lowered names before compilation + HIPRTC_ERROR_NAME_EXPRESSION_NOT_VALID = 10, ///< Invalid name expression + HIPRTC_ERROR_INTERNAL_ERROR = 11, ///< Internal error + HIPRTC_ERROR_LINKING = 100 ///< Error in linking +} hiprtcResult; + +typedef struct _hiprtcProgram* hiprtcProgram; + +G_END_DECLS \ No newline at end of file
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/stub/hip/nvidia_hip_runtime_api.h
Added
@@ -0,0 +1,628 @@ +/* +Copyright (c) 2015 - 2022 Advanced Micro Devices, Inc. All rights reserved. + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in +all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN +THE SOFTWARE. 
+*/ + +#pragma once + +#include <hip/hip_runtime_api.h> +#include <cuda.h> +#include <driver_types.h> + +inline static hipError_t hipCUDAErrorTohipError(cudaError_t cuError) { + switch (cuError) { + case cudaSuccess: + return hipSuccess; + case cudaErrorProfilerDisabled: + return hipErrorProfilerDisabled; + case cudaErrorProfilerNotInitialized: + return hipErrorProfilerNotInitialized; + case cudaErrorProfilerAlreadyStarted: + return hipErrorProfilerAlreadyStarted; + case cudaErrorProfilerAlreadyStopped: + return hipErrorProfilerAlreadyStopped; + case cudaErrorInsufficientDriver: + return hipErrorInsufficientDriver; + case cudaErrorUnsupportedLimit: + return hipErrorUnsupportedLimit; + case cudaErrorPeerAccessUnsupported: + return hipErrorPeerAccessUnsupported; + case cudaErrorInvalidGraphicsContext: + return hipErrorInvalidGraphicsContext; + case cudaErrorSharedObjectSymbolNotFound: + return hipErrorSharedObjectSymbolNotFound; + case cudaErrorSharedObjectInitFailed: + return hipErrorSharedObjectInitFailed; + case cudaErrorOperatingSystem: + return hipErrorOperatingSystem; + case cudaErrorIllegalState: + return hipErrorIllegalState; + case cudaErrorSetOnActiveProcess: + return hipErrorSetOnActiveProcess; + case cudaErrorIllegalAddress: + return hipErrorIllegalAddress; + case cudaErrorInvalidSymbol: + return hipErrorInvalidSymbol; + case cudaErrorMissingConfiguration: + return hipErrorMissingConfiguration; + case cudaErrorMemoryAllocation: + return hipErrorOutOfMemory; + case cudaErrorInitializationError: + return hipErrorNotInitialized; + case cudaErrorLaunchFailure: + return hipErrorLaunchFailure; + case cudaErrorCooperativeLaunchTooLarge: + return hipErrorCooperativeLaunchTooLarge; + case cudaErrorPriorLaunchFailure: + return hipErrorPriorLaunchFailure; + case cudaErrorLaunchOutOfResources: + return hipErrorLaunchOutOfResources; + case cudaErrorInvalidDeviceFunction: + return hipErrorInvalidDeviceFunction; + case cudaErrorInvalidConfiguration: + return 
hipErrorInvalidConfiguration; + case cudaErrorInvalidDevice: + return hipErrorInvalidDevice; + case cudaErrorInvalidValue: + return hipErrorInvalidValue; + case cudaErrorInvalidPitchValue: + return hipErrorInvalidPitchValue; + case cudaErrorInvalidDevicePointer: + return hipErrorInvalidDevicePointer; + case cudaErrorInvalidMemcpyDirection: + return hipErrorInvalidMemcpyDirection; + case cudaErrorInvalidResourceHandle: + return hipErrorInvalidHandle; + case cudaErrorNotReady: + return hipErrorNotReady; + case cudaErrorNoDevice: + return hipErrorNoDevice; + case cudaErrorPeerAccessAlreadyEnabled: + return hipErrorPeerAccessAlreadyEnabled; + case cudaErrorPeerAccessNotEnabled: + return hipErrorPeerAccessNotEnabled; + case cudaErrorContextIsDestroyed: + return hipErrorContextIsDestroyed; + case cudaErrorHostMemoryAlreadyRegistered: + return hipErrorHostMemoryAlreadyRegistered; + case cudaErrorHostMemoryNotRegistered: + return hipErrorHostMemoryNotRegistered; + case cudaErrorMapBufferObjectFailed: + return hipErrorMapFailed; + case cudaErrorAssert: + return hipErrorAssert; + case cudaErrorNotSupported: + return hipErrorNotSupported; + case cudaErrorCudartUnloading: + return hipErrorDeinitialized; + case cudaErrorInvalidKernelImage: + return hipErrorInvalidImage; + case cudaErrorUnmapBufferObjectFailed: + return hipErrorUnmapFailed; + case cudaErrorNoKernelImageForDevice: + return hipErrorNoBinaryForGpu; + case cudaErrorECCUncorrectable: + return hipErrorECCNotCorrectable; + case cudaErrorDeviceAlreadyInUse: + return hipErrorContextAlreadyInUse; + case cudaErrorInvalidPtx: + return hipErrorInvalidKernelFile; + case cudaErrorLaunchTimeout: + return hipErrorLaunchTimeOut; + case cudaErrorInvalidSource: + return hipErrorInvalidSource; + case cudaErrorFileNotFound: + return hipErrorFileNotFound; + case cudaErrorSymbolNotFound: + return hipErrorNotFound; + case cudaErrorArrayIsMapped: + return hipErrorArrayIsMapped; + case cudaErrorNotMappedAsPointer: + return 
hipErrorNotMappedAsPointer; + case cudaErrorNotMappedAsArray: + return hipErrorNotMappedAsArray; + case cudaErrorNotMapped: + return hipErrorNotMapped; + case cudaErrorAlreadyAcquired: + return hipErrorAlreadyAcquired; + case cudaErrorAlreadyMapped: + return hipErrorAlreadyMapped; + case cudaErrorDeviceUninitialized: + return hipErrorInvalidContext; + case cudaErrorStreamCaptureUnsupported: + return hipErrorStreamCaptureUnsupported; + case cudaErrorStreamCaptureInvalidated: + return hipErrorStreamCaptureInvalidated; + case cudaErrorStreamCaptureMerge: + return hipErrorStreamCaptureMerge; + case cudaErrorStreamCaptureUnmatched: + return hipErrorStreamCaptureUnmatched; + case cudaErrorStreamCaptureUnjoined: + return hipErrorStreamCaptureUnjoined; + case cudaErrorStreamCaptureIsolation: + return hipErrorStreamCaptureIsolation; + case cudaErrorStreamCaptureImplicit: + return hipErrorStreamCaptureImplicit; + case cudaErrorCapturedEvent: + return hipErrorCapturedEvent; + case cudaErrorStreamCaptureWrongThread: + return hipErrorStreamCaptureWrongThread; + case cudaErrorGraphExecUpdateFailure: + return hipErrorGraphExecUpdateFailure; + case cudaErrorUnknown: + default: + return hipErrorUnknown; // Note - translated error. 
+ } +} + +inline static hipError_t hipCUResultTohipError(CUresult cuError) { + switch (cuError) { + case CUDA_SUCCESS: + return hipSuccess; + case CUDA_ERROR_OUT_OF_MEMORY: + return hipErrorOutOfMemory; + case CUDA_ERROR_INVALID_VALUE: + return hipErrorInvalidValue; + case CUDA_ERROR_INVALID_DEVICE: + return hipErrorInvalidDevice; + case CUDA_ERROR_DEINITIALIZED: + return hipErrorDeinitialized; + case CUDA_ERROR_NO_DEVICE: + return hipErrorNoDevice; + case CUDA_ERROR_INVALID_CONTEXT: + return hipErrorInvalidContext; + case CUDA_ERROR_NOT_INITIALIZED: + return hipErrorNotInitialized; + case CUDA_ERROR_INVALID_HANDLE: + return hipErrorInvalidHandle; + case CUDA_ERROR_MAP_FAILED: + return hipErrorMapFailed; + case CUDA_ERROR_PROFILER_DISABLED: + return hipErrorProfilerDisabled; + case CUDA_ERROR_PROFILER_NOT_INITIALIZED: + return hipErrorProfilerNotInitialized; + case CUDA_ERROR_PROFILER_ALREADY_STARTED: + return hipErrorProfilerAlreadyStarted; + case CUDA_ERROR_PROFILER_ALREADY_STOPPED: + return hipErrorProfilerAlreadyStopped; + case CUDA_ERROR_INVALID_IMAGE: + return hipErrorInvalidImage; + case CUDA_ERROR_CONTEXT_ALREADY_CURRENT: + return hipErrorContextAlreadyCurrent; + case CUDA_ERROR_UNMAP_FAILED: + return hipErrorUnmapFailed; + case CUDA_ERROR_ARRAY_IS_MAPPED: + return hipErrorArrayIsMapped; + case CUDA_ERROR_ALREADY_MAPPED: + return hipErrorAlreadyMapped; + case CUDA_ERROR_NO_BINARY_FOR_GPU: + return hipErrorNoBinaryForGpu; + case CUDA_ERROR_ALREADY_ACQUIRED: + return hipErrorAlreadyAcquired; + case CUDA_ERROR_NOT_MAPPED: + return hipErrorNotMapped; + case CUDA_ERROR_NOT_MAPPED_AS_ARRAY: + return hipErrorNotMappedAsArray; + case CUDA_ERROR_NOT_MAPPED_AS_POINTER: + return hipErrorNotMappedAsPointer; + case CUDA_ERROR_ECC_UNCORRECTABLE: + return hipErrorECCNotCorrectable; + case CUDA_ERROR_UNSUPPORTED_LIMIT: + return hipErrorUnsupportedLimit; + case CUDA_ERROR_CONTEXT_ALREADY_IN_USE: + return hipErrorContextAlreadyInUse; + case 
CUDA_ERROR_PEER_ACCESS_UNSUPPORTED: + return hipErrorPeerAccessUnsupported; + case CUDA_ERROR_INVALID_PTX: + return hipErrorInvalidKernelFile; + case CUDA_ERROR_INVALID_GRAPHICS_CONTEXT: + return hipErrorInvalidGraphicsContext; + case CUDA_ERROR_INVALID_SOURCE: + return hipErrorInvalidSource; + case CUDA_ERROR_FILE_NOT_FOUND: + return hipErrorFileNotFound; + case CUDA_ERROR_SHARED_OBJECT_SYMBOL_NOT_FOUND: + return hipErrorSharedObjectSymbolNotFound; + case CUDA_ERROR_SHARED_OBJECT_INIT_FAILED: + return hipErrorSharedObjectInitFailed; + case CUDA_ERROR_OPERATING_SYSTEM: + return hipErrorOperatingSystem; + case CUDA_ERROR_ILLEGAL_STATE: + return hipErrorIllegalState; + case CUDA_ERROR_NOT_FOUND: + return hipErrorNotFound; + case CUDA_ERROR_NOT_READY: + return hipErrorNotReady; + case CUDA_ERROR_ILLEGAL_ADDRESS: + return hipErrorIllegalAddress; + case CUDA_ERROR_LAUNCH_OUT_OF_RESOURCES: + return hipErrorLaunchOutOfResources; + case CUDA_ERROR_LAUNCH_TIMEOUT: + return hipErrorLaunchTimeOut; + case CUDA_ERROR_PEER_ACCESS_ALREADY_ENABLED: + return hipErrorPeerAccessAlreadyEnabled; + case CUDA_ERROR_PEER_ACCESS_NOT_ENABLED: + return hipErrorPeerAccessNotEnabled; + case CUDA_ERROR_PRIMARY_CONTEXT_ACTIVE: + return hipErrorSetOnActiveProcess; + case CUDA_ERROR_CONTEXT_IS_DESTROYED: + return hipErrorContextIsDestroyed; + case CUDA_ERROR_ASSERT: + return hipErrorAssert; + case CUDA_ERROR_HOST_MEMORY_ALREADY_REGISTERED: + return hipErrorHostMemoryAlreadyRegistered; + case CUDA_ERROR_HOST_MEMORY_NOT_REGISTERED: + return hipErrorHostMemoryNotRegistered; + case CUDA_ERROR_LAUNCH_FAILED: + return hipErrorLaunchFailure; + case CUDA_ERROR_COOPERATIVE_LAUNCH_TOO_LARGE: + return hipErrorCooperativeLaunchTooLarge; + case CUDA_ERROR_NOT_SUPPORTED: + return hipErrorNotSupported; + case CUDA_ERROR_STREAM_CAPTURE_UNSUPPORTED: + return hipErrorStreamCaptureUnsupported; + case CUDA_ERROR_STREAM_CAPTURE_INVALIDATED: + return hipErrorStreamCaptureInvalidated; + case 
CUDA_ERROR_STREAM_CAPTURE_MERGE: + return hipErrorStreamCaptureMerge; + case CUDA_ERROR_STREAM_CAPTURE_UNMATCHED: + return hipErrorStreamCaptureUnmatched; + case CUDA_ERROR_STREAM_CAPTURE_UNJOINED: + return hipErrorStreamCaptureUnjoined; + case CUDA_ERROR_STREAM_CAPTURE_ISOLATION: + return hipErrorStreamCaptureIsolation; + case CUDA_ERROR_STREAM_CAPTURE_IMPLICIT: + return hipErrorStreamCaptureImplicit; + case CUDA_ERROR_CAPTURED_EVENT: + return hipErrorCapturedEvent; + case CUDA_ERROR_STREAM_CAPTURE_WRONG_THREAD: + return hipErrorStreamCaptureWrongThread; + case CUDA_ERROR_GRAPH_EXEC_UPDATE_FAILURE: + return hipErrorGraphExecUpdateFailure; + case CUDA_ERROR_UNKNOWN: + default: + return hipErrorUnknown; // Note - translated error. + } +} + +inline static CUresult hipErrorToCUResult(hipError_t hError) { + switch (hError) { + case hipSuccess: + return CUDA_SUCCESS; + case hipErrorOutOfMemory: + return CUDA_ERROR_OUT_OF_MEMORY; + case hipErrorInvalidValue: + return CUDA_ERROR_INVALID_VALUE; + case hipErrorInvalidDevice: + return CUDA_ERROR_INVALID_DEVICE; + case hipErrorDeinitialized: + return CUDA_ERROR_DEINITIALIZED; + case hipErrorNoDevice: + return CUDA_ERROR_NO_DEVICE; + case hipErrorInvalidContext: + return CUDA_ERROR_INVALID_CONTEXT; + case hipErrorNotInitialized: + return CUDA_ERROR_NOT_INITIALIZED; + case hipErrorInvalidHandle: + return CUDA_ERROR_INVALID_HANDLE; + case hipErrorMapFailed: + return CUDA_ERROR_MAP_FAILED; + case hipErrorProfilerDisabled: + return CUDA_ERROR_PROFILER_DISABLED; + case hipErrorProfilerNotInitialized: + return CUDA_ERROR_PROFILER_NOT_INITIALIZED; + case hipErrorProfilerAlreadyStarted: + return CUDA_ERROR_PROFILER_ALREADY_STARTED; + case hipErrorProfilerAlreadyStopped: + return CUDA_ERROR_PROFILER_ALREADY_STOPPED; + case hipErrorInvalidImage: + return CUDA_ERROR_INVALID_IMAGE; + case hipErrorContextAlreadyCurrent: + return CUDA_ERROR_CONTEXT_ALREADY_CURRENT; + case hipErrorUnmapFailed: + return CUDA_ERROR_UNMAP_FAILED; + case 
hipErrorArrayIsMapped: + return CUDA_ERROR_ARRAY_IS_MAPPED; + case hipErrorAlreadyMapped: + return CUDA_ERROR_ALREADY_MAPPED; + case hipErrorNoBinaryForGpu: + return CUDA_ERROR_NO_BINARY_FOR_GPU; + case hipErrorAlreadyAcquired: + return CUDA_ERROR_ALREADY_ACQUIRED; + case hipErrorNotMapped: + return CUDA_ERROR_NOT_MAPPED; + case hipErrorNotMappedAsArray: + return CUDA_ERROR_NOT_MAPPED_AS_ARRAY; + case hipErrorNotMappedAsPointer: + return CUDA_ERROR_NOT_MAPPED_AS_POINTER; + case hipErrorECCNotCorrectable: + return CUDA_ERROR_ECC_UNCORRECTABLE; + case hipErrorUnsupportedLimit: + return CUDA_ERROR_UNSUPPORTED_LIMIT; + case hipErrorContextAlreadyInUse: + return CUDA_ERROR_CONTEXT_ALREADY_IN_USE; + case hipErrorPeerAccessUnsupported: + return CUDA_ERROR_PEER_ACCESS_UNSUPPORTED; + case hipErrorInvalidKernelFile: + return CUDA_ERROR_INVALID_PTX; + case hipErrorInvalidGraphicsContext: + return CUDA_ERROR_INVALID_GRAPHICS_CONTEXT; + case hipErrorInvalidSource: + return CUDA_ERROR_INVALID_SOURCE; + case hipErrorFileNotFound: + return CUDA_ERROR_FILE_NOT_FOUND; + case hipErrorSharedObjectSymbolNotFound: + return CUDA_ERROR_SHARED_OBJECT_SYMBOL_NOT_FOUND; + case hipErrorSharedObjectInitFailed: + return CUDA_ERROR_SHARED_OBJECT_INIT_FAILED; + case hipErrorOperatingSystem: + return CUDA_ERROR_OPERATING_SYSTEM; + case hipErrorIllegalState: + return CUDA_ERROR_ILLEGAL_STATE; + case hipErrorNotFound: + return CUDA_ERROR_NOT_FOUND; + case hipErrorNotReady: + return CUDA_ERROR_NOT_READY; + case hipErrorIllegalAddress: + return CUDA_ERROR_ILLEGAL_ADDRESS; + case hipErrorLaunchOutOfResources: + return CUDA_ERROR_LAUNCH_OUT_OF_RESOURCES; + case hipErrorLaunchTimeOut: + return CUDA_ERROR_LAUNCH_TIMEOUT; + case hipErrorPeerAccessAlreadyEnabled: + return CUDA_ERROR_PEER_ACCESS_ALREADY_ENABLED; + case hipErrorPeerAccessNotEnabled: + return CUDA_ERROR_PEER_ACCESS_NOT_ENABLED; + case hipErrorSetOnActiveProcess: + return CUDA_ERROR_PRIMARY_CONTEXT_ACTIVE; + case hipErrorContextIsDestroyed: + 
return CUDA_ERROR_CONTEXT_IS_DESTROYED; + case hipErrorAssert: + return CUDA_ERROR_ASSERT; + case hipErrorHostMemoryAlreadyRegistered: + return CUDA_ERROR_HOST_MEMORY_ALREADY_REGISTERED; + case hipErrorHostMemoryNotRegistered: + return CUDA_ERROR_HOST_MEMORY_NOT_REGISTERED; + case hipErrorLaunchFailure: + return CUDA_ERROR_LAUNCH_FAILED; + case hipErrorCooperativeLaunchTooLarge: + return CUDA_ERROR_COOPERATIVE_LAUNCH_TOO_LARGE; + case hipErrorNotSupported: + return CUDA_ERROR_NOT_SUPPORTED; + case hipErrorStreamCaptureUnsupported: + return CUDA_ERROR_STREAM_CAPTURE_UNSUPPORTED; + case hipErrorStreamCaptureInvalidated: + return CUDA_ERROR_STREAM_CAPTURE_INVALIDATED; + case hipErrorStreamCaptureMerge: + return CUDA_ERROR_STREAM_CAPTURE_MERGE; + case hipErrorStreamCaptureUnmatched: + return CUDA_ERROR_STREAM_CAPTURE_UNMATCHED; + case hipErrorStreamCaptureUnjoined: + return CUDA_ERROR_STREAM_CAPTURE_UNJOINED; + case hipErrorStreamCaptureIsolation: + return CUDA_ERROR_STREAM_CAPTURE_ISOLATION; + case hipErrorStreamCaptureImplicit: + return CUDA_ERROR_STREAM_CAPTURE_IMPLICIT; + case hipErrorCapturedEvent: + return CUDA_ERROR_CAPTURED_EVENT; + case hipErrorStreamCaptureWrongThread: + return CUDA_ERROR_STREAM_CAPTURE_WRONG_THREAD; + case hipErrorGraphExecUpdateFailure: + return CUDA_ERROR_GRAPH_EXEC_UPDATE_FAILURE; + case hipErrorUnknown: + default: + return CUDA_ERROR_UNKNOWN; // Note - translated error. 
+ } +} + +inline static cudaError_t hipErrorToCudaError(hipError_t hError) { + switch (hError) { + case hipSuccess: + return cudaSuccess; + case hipErrorOutOfMemory: + return cudaErrorMemoryAllocation; + case hipErrorProfilerDisabled: + return cudaErrorProfilerDisabled; + case hipErrorProfilerNotInitialized: + return cudaErrorProfilerNotInitialized; + case hipErrorProfilerAlreadyStarted: + return cudaErrorProfilerAlreadyStarted; + case hipErrorProfilerAlreadyStopped: + return cudaErrorProfilerAlreadyStopped; + case hipErrorInvalidConfiguration: + return cudaErrorInvalidConfiguration; + case hipErrorLaunchOutOfResources: + return cudaErrorLaunchOutOfResources; + case hipErrorInvalidValue: + return cudaErrorInvalidValue; + case hipErrorInvalidPitchValue: + return cudaErrorInvalidPitchValue; + case hipErrorInvalidHandle: + return cudaErrorInvalidResourceHandle; + case hipErrorInvalidDevice: + return cudaErrorInvalidDevice; + case hipErrorInvalidMemcpyDirection: + return cudaErrorInvalidMemcpyDirection; + case hipErrorInvalidDevicePointer: + return cudaErrorInvalidDevicePointer; + case hipErrorNotInitialized: + return cudaErrorInitializationError; + case hipErrorNoDevice: + return cudaErrorNoDevice; + case hipErrorNotReady: + return cudaErrorNotReady; + case hipErrorPeerAccessNotEnabled: + return cudaErrorPeerAccessNotEnabled; + case hipErrorPeerAccessAlreadyEnabled: + return cudaErrorPeerAccessAlreadyEnabled; + case hipErrorHostMemoryAlreadyRegistered: + return cudaErrorHostMemoryAlreadyRegistered; + case hipErrorHostMemoryNotRegistered: + return cudaErrorHostMemoryNotRegistered; + case hipErrorDeinitialized: + return cudaErrorCudartUnloading; + case hipErrorInvalidSymbol: + return cudaErrorInvalidSymbol; + case hipErrorInsufficientDriver: + return cudaErrorInsufficientDriver; + case hipErrorMissingConfiguration: + return cudaErrorMissingConfiguration; + case hipErrorPriorLaunchFailure: + return cudaErrorPriorLaunchFailure; + case hipErrorInvalidDeviceFunction: + 
return cudaErrorInvalidDeviceFunction; + case hipErrorInvalidImage: + return cudaErrorInvalidKernelImage; + case hipErrorInvalidContext: + return cudaErrorDeviceUninitialized; + case hipErrorMapFailed: + return cudaErrorMapBufferObjectFailed; + case hipErrorUnmapFailed: + return cudaErrorUnmapBufferObjectFailed; + case hipErrorArrayIsMapped: + return cudaErrorArrayIsMapped; + case hipErrorAlreadyMapped: + return cudaErrorAlreadyMapped; + case hipErrorNoBinaryForGpu: + return cudaErrorNoKernelImageForDevice; + case hipErrorAlreadyAcquired: + return cudaErrorAlreadyAcquired; + case hipErrorNotMapped: + return cudaErrorNotMapped; + case hipErrorNotMappedAsArray: + return cudaErrorNotMappedAsArray; + case hipErrorNotMappedAsPointer: + return cudaErrorNotMappedAsPointer; + case hipErrorECCNotCorrectable: + return cudaErrorECCUncorrectable; + case hipErrorUnsupportedLimit: + return cudaErrorUnsupportedLimit; + case hipErrorContextAlreadyInUse: + return cudaErrorDeviceAlreadyInUse; + case hipErrorPeerAccessUnsupported: + return cudaErrorPeerAccessUnsupported; + case hipErrorInvalidKernelFile: + return cudaErrorInvalidPtx; + case hipErrorInvalidGraphicsContext: + return cudaErrorInvalidGraphicsContext; + case hipErrorInvalidSource: + return cudaErrorInvalidSource; + case hipErrorFileNotFound: + return cudaErrorFileNotFound; + case hipErrorSharedObjectSymbolNotFound: + return cudaErrorSharedObjectSymbolNotFound; + case hipErrorSharedObjectInitFailed: + return cudaErrorSharedObjectInitFailed; + case hipErrorOperatingSystem: + return cudaErrorOperatingSystem; + case hipErrorIllegalState: + return cudaErrorIllegalState; + case hipErrorNotFound: + return cudaErrorSymbolNotFound; + case hipErrorIllegalAddress: + return cudaErrorIllegalAddress; + case hipErrorLaunchTimeOut: + return cudaErrorLaunchTimeout; + case hipErrorSetOnActiveProcess: + return cudaErrorSetOnActiveProcess; + case hipErrorContextIsDestroyed: + return cudaErrorContextIsDestroyed; + 
case hipErrorAssert: + return cudaErrorAssert; + case hipErrorLaunchFailure: + return cudaErrorLaunchFailure; + case hipErrorCooperativeLaunchTooLarge: + return cudaErrorCooperativeLaunchTooLarge; + case hipErrorStreamCaptureUnsupported: + return cudaErrorStreamCaptureUnsupported; + case hipErrorStreamCaptureInvalidated: + return cudaErrorStreamCaptureInvalidated; + case hipErrorStreamCaptureMerge: + return cudaErrorStreamCaptureMerge; + case hipErrorStreamCaptureUnmatched: + return cudaErrorStreamCaptureUnmatched; + case hipErrorStreamCaptureUnjoined: + return cudaErrorStreamCaptureUnjoined; + case hipErrorStreamCaptureIsolation: + return cudaErrorStreamCaptureIsolation; + case hipErrorStreamCaptureImplicit: + return cudaErrorStreamCaptureImplicit; + case hipErrorCapturedEvent: + return cudaErrorCapturedEvent; + case hipErrorStreamCaptureWrongThread: + return cudaErrorStreamCaptureWrongThread; + case hipErrorGraphExecUpdateFailure: + return cudaErrorGraphExecUpdateFailure; + case hipErrorNotSupported: + return cudaErrorNotSupported; + // HSA: does not exist in CUDA + case hipErrorRuntimeMemory: + // HSA: does not exist in CUDA + case hipErrorRuntimeOther: + case hipErrorUnknown: + case hipErrorTbd: + default: + return cudaErrorUnknown; // Note - translated error. 
+ } +} + +static inline void hipMemcpy2DTocudaMemcpy2D(CUDA_MEMCPY2D &a, const hip_Memcpy2D* p){ + a.srcXInBytes = (size_t)p->srcXInBytes; + a.srcY = (size_t)p->srcY; + switch (p->srcMemoryType) { + case hipMemoryTypeHost: + a.srcMemoryType = CU_MEMORYTYPE_HOST; + break; + case hipMemoryTypeDevice: + a.srcMemoryType = CU_MEMORYTYPE_DEVICE; + break; + case hipMemoryTypeArray: + a.srcMemoryType = CU_MEMORYTYPE_ARRAY; + break; + default: + a.srcMemoryType = CU_MEMORYTYPE_UNIFIED; + } + a.srcHost = p->srcHost; + a.srcDevice = (CUdeviceptr)p->srcDevice; + a.srcArray = (CUarray)p->srcArray; + a.srcPitch = (size_t)p->srcPitch; + a.dstXInBytes = (size_t)p->dstXInBytes; + a.dstY = (size_t)p->dstY; + switch (p->dstMemoryType) { + case hipMemoryTypeHost: + a.dstMemoryType = CU_MEMORYTYPE_HOST; + break; + case hipMemoryTypeDevice: + a.dstMemoryType = CU_MEMORYTYPE_DEVICE; + break; + case hipMemoryTypeArray: + a.dstMemoryType = CU_MEMORYTYPE_ARRAY; + break; + default: + a.dstMemoryType = CU_MEMORYTYPE_UNIFIED; + } + a.dstHost = p->dstHost; + a.dstDevice = (CUdeviceptr)p->dstDevice; + a.dstArray = (CUarray)p->dstArray; + a.dstPitch = (size_t)p->dstPitch; + a.WidthInBytes = (size_t)p->WidthInBytes; + a.Height = (size_t)p->Height; +} \ No newline at end of file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/hip/stub/hip/texture_types.h
Added
@@ -0,0 +1,26 @@ +/* +Copyright (c) 2015 - 2023 Advanced Micro Devices, Inc. All rights reserved. + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in +all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN +THE SOFTWARE. +*/ + +#pragma once + +struct __hip_texture; +typedef struct __hip_texture* hipTextureObject_t;
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/insertbin/gstinsertbin.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/insertbin/gstinsertbin.h
Changed
@@ -102,7 +102,7 @@ GType gst_insert_bin_get_type (void); GST_INSERT_BIN_API -GstElement *gst_insert_bin_new (const gchar * name); +GstElement *gst_insert_bin_new (const gchar * name) G_GNUC_WARN_UNUSED_RESULT; GST_INSERT_BIN_API void gst_insert_bin_prepend (GstInsertBin * self, GstElement * element,
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/isoff/gstisoff.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/isoff/gstisoff.c
Changed
@@ -885,7 +885,7 @@ parser->sidx.entry_index = 0; parser->status = GST_ISOFF_SIDX_PARSER_DATA; - + G_GNUC_FALLTHROUGH; case GST_ISOFF_SIDX_PARSER_DATA: while (parser->sidx.entry_index < parser->sidx.entries_count) { GstSidxBoxEntry *entry = @@ -912,10 +912,12 @@ parser->sidx.entry_index++; } - if (parser->sidx.entry_index == parser->sidx.entries_count) + if (parser->sidx.entry_index == parser->sidx.entries_count) { parser->status = GST_ISOFF_SIDX_PARSER_FINISHED; - else + } else { break; + } + G_GNUC_FALLTHROUGH; case GST_ISOFF_SIDX_PARSER_FINISHED: parser->sidx.entry_index = 0; res = GST_ISOFF_PARSER_DONE;
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/meson.build
Changed
@@ -11,6 +11,7 @@ # cuda can depend on d3d11 subdir('cuda') subdir('dxva') +subdir('hip') subdir('insertbin') subdir('interfaces') subdir('isoff')
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/mpegts/gst-atsc-section.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/mpegts/gst-atsc-section.h
Changed
@@ -264,7 +264,7 @@ GstMpegtsSection * gst_mpegts_section_from_atsc_mgt (GstMpegtsAtscMGT * mgt); GST_MPEGTS_API -GstMpegtsAtscMGT * gst_mpegts_atsc_mgt_new (void); +GstMpegtsAtscMGT * gst_mpegts_atsc_mgt_new (void) G_GNUC_WARN_UNUSED_RESULT; /* Multiple string structure (used in ETT and EIT) */ @@ -446,13 +446,13 @@ /* FIXME receive a non-const parameter but we only provide a const getter */ GST_MPEGTS_API -GstDateTime * gst_mpegts_atsc_stt_get_datetime_utc (GstMpegtsAtscSTT * stt); +GstDateTime * gst_mpegts_atsc_stt_get_datetime_utc (GstMpegtsAtscSTT * stt) G_GNUC_WARN_UNUSED_RESULT; GST_MPEGTS_API GstMpegtsSection * gst_mpegts_section_from_atsc_stt (GstMpegtsAtscSTT * stt); GST_MPEGTS_API -GstMpegtsAtscSTT * gst_mpegts_atsc_stt_new (void); +GstMpegtsAtscSTT * gst_mpegts_atsc_stt_new (void) G_GNUC_WARN_UNUSED_RESULT; /* RRT */ #define GST_TYPE_MPEGTS_ATSC_RRT (gst_mpegts_atsc_rrt_get_type ()) @@ -530,13 +530,13 @@ GstMpegtsSection * gst_mpegts_section_from_atsc_rrt (GstMpegtsAtscRRT * rrt); GST_MPEGTS_API -GstMpegtsAtscRRT * gst_mpegts_atsc_rrt_new (void); +GstMpegtsAtscRRT * gst_mpegts_atsc_rrt_new (void) G_GNUC_WARN_UNUSED_RESULT; GST_MPEGTS_API -GstMpegtsAtscRRTDimension * gst_mpegts_atsc_rrt_dimension_new (void); +GstMpegtsAtscRRTDimension * gst_mpegts_atsc_rrt_dimension_new (void) G_GNUC_WARN_UNUSED_RESULT; GST_MPEGTS_API -GstMpegtsAtscRRTDimensionValue * gst_mpegts_atsc_rrt_dimension_value_new (void); +GstMpegtsAtscRRTDimensionValue * gst_mpegts_atsc_rrt_dimension_value_new (void) G_GNUC_WARN_UNUSED_RESULT; G_END_DECLS
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/mpegts/gst-dvb-section.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/mpegts/gst-dvb-section.h
Changed
@@ -237,10 +237,10 @@ GstMpegtsSection *gst_mpegts_section_from_nit (GstMpegtsNIT *nit); GST_MPEGTS_API -GstMpegtsNIT *gst_mpegts_nit_new (void); +GstMpegtsNIT *gst_mpegts_nit_new (void) G_GNUC_WARN_UNUSED_RESULT; GST_MPEGTS_API -GstMpegtsNITStream *gst_mpegts_nit_stream_new (void); +GstMpegtsNITStream *gst_mpegts_nit_stream_new (void) G_GNUC_WARN_UNUSED_RESULT; /* BAT */ @@ -343,10 +343,10 @@ GstMpegtsSection *gst_mpegts_section_from_sdt (GstMpegtsSDT * sdt); GST_MPEGTS_API -GstMpegtsSDT *gst_mpegts_sdt_new (void); +GstMpegtsSDT *gst_mpegts_sdt_new (void) G_GNUC_WARN_UNUSED_RESULT; GST_MPEGTS_API -GstMpegtsSDTService *gst_mpegts_sdt_service_new (void); +GstMpegtsSDTService *gst_mpegts_sdt_service_new (void) G_GNUC_WARN_UNUSED_RESULT; /* EIT */ @@ -407,7 +407,7 @@ /* TDT */ GST_MPEGTS_API -GstDateTime *gst_mpegts_section_get_tdt (GstMpegtsSection *section); +GstDateTime *gst_mpegts_section_get_tdt (GstMpegtsSection *section) G_GNUC_WARN_UNUSED_RESULT; /* TOT */
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/mpegts/gst-scte-section.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/mpegts/gst-scte-section.h
Changed
@@ -245,7 +245,7 @@ GType gst_mpegts_scte_sit_get_type (void); GST_MPEGTS_API -GstMpegtsSCTESIT *gst_mpegts_scte_sit_new (void); +GstMpegtsSCTESIT *gst_mpegts_scte_sit_new (void) G_GNUC_WARN_UNUSED_RESULT; GST_MPEGTS_API GstMpegtsSCTESIT *gst_mpegts_scte_null_new (void); @@ -267,7 +267,7 @@ GType gst_mpegts_scte_splice_event_get_type (void); GST_MPEGTS_API -GstMpegtsSCTESpliceEvent *gst_mpegts_scte_splice_event_new (void); +GstMpegtsSCTESpliceEvent *gst_mpegts_scte_splice_event_new (void) G_GNUC_WARN_UNUSED_RESULT; GST_MPEGTS_API const GstMpegtsSCTESIT *gst_mpegts_section_get_scte_sit (GstMpegtsSection *section); @@ -279,7 +279,7 @@ GType gst_mpegts_scte_splice_component_get_type (void); GST_MPEGTS_API -GstMpegtsSCTESpliceComponent *gst_mpegts_scte_splice_component_new (guint8 tag); +GstMpegtsSCTESpliceComponent *gst_mpegts_scte_splice_component_new (guint8 tag) G_GNUC_WARN_UNUSED_RESULT; G_END_DECLS
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/mpegts/gstmpegtsdescriptor.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/mpegts/gstmpegtsdescriptor.h
Changed
@@ -227,7 +227,7 @@ void gst_mpegts_descriptor_free (GstMpegtsDescriptor *desc); GST_MPEGTS_API -GstMpegtsDescriptor * gst_mpegts_descriptor_copy (GstMpegtsDescriptor *desc); +GstMpegtsDescriptor * gst_mpegts_descriptor_copy (GstMpegtsDescriptor *desc) G_GNUC_WARN_UNUSED_RESULT; GST_MPEGTS_API GPtrArray *gst_mpegts_parse_descriptors (guint8 * buffer, gsize buf_len);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/mpegts/gstmpegtssection.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/mpegts/gstmpegtssection.h
Changed
@@ -238,7 +238,7 @@ }; GST_MPEGTS_API -GBytes *gst_mpegts_section_get_data (GstMpegtsSection *section); +GBytes *gst_mpegts_section_get_data (GstMpegtsSection *section) G_GNUC_WARN_UNUSED_RESULT; /* PAT */ #define GST_TYPE_MPEGTS_PAT_PROGRAM (gst_mpegts_pat_program_get_type()) @@ -267,7 +267,7 @@ GPtrArray *gst_mpegts_pat_new (void); GST_MPEGTS_API -GstMpegtsPatProgram *gst_mpegts_pat_program_new (void); +GstMpegtsPatProgram *gst_mpegts_pat_program_new (void) G_GNUC_WARN_UNUSED_RESULT; GST_MPEGTS_API GstMpegtsSection *gst_mpegts_section_from_pat (GPtrArray * programs, @@ -474,10 +474,10 @@ GType gst_mpegts_pmt_stream_get_type (void); GST_MPEGTS_API -GstMpegtsPMT *gst_mpegts_pmt_new (void); +GstMpegtsPMT *gst_mpegts_pmt_new (void) G_GNUC_WARN_UNUSED_RESULT; GST_MPEGTS_API -GstMpegtsPMTStream *gst_mpegts_pmt_stream_new (void); +GstMpegtsPMTStream *gst_mpegts_pmt_stream_new (void) G_GNUC_WARN_UNUSED_RESULT; GST_MPEGTS_API const GstMpegtsPMT *gst_mpegts_section_get_pmt (GstMpegtsSection *section); @@ -514,7 +514,7 @@ GST_MPEGTS_API GstMpegtsSection *gst_mpegts_section_new (guint16 pid, guint8 * data, - gsize data_size); + gsize data_size) G_GNUC_WARN_UNUSED_RESULT; GST_MPEGTS_API guint8 *gst_mpegts_section_packetize (GstMpegtsSection * section, gsize * output_size);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/mse/gstappendpipeline.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/mse/gstappendpipeline.c
Changed
@@ -792,7 +792,7 @@ gst_element_set_state (GST_ELEMENT (self->pipeline), GST_STATE_PLAYING); if (started != GST_STATE_CHANGE_SUCCESS) { GST_ERROR_OBJECT (self, "failed to start: %s", - gst_element_state_change_return_get_name (started)); + gst_state_change_return_get_name (started)); g_set_error (error, GST_MEDIA_SOURCE_ERROR, GST_MEDIA_SOURCE_ERROR_INVALID_STATE, "failed to start append pipeline"); @@ -849,7 +849,7 @@ gst_element_set_state (pipeline, GST_STATE_NULL); if (stopped != GST_STATE_CHANGE_SUCCESS) { GST_ERROR_OBJECT (self, "failed to stop: %s", - gst_element_state_change_return_get_name (stopped)); + gst_state_change_return_get_name (stopped)); return FALSE; } self->received_init_segment = FALSE; @@ -869,7 +869,7 @@ gst_element_set_state (pipeline, GST_STATE_READY); if (stopped != GST_STATE_CHANGE_SUCCESS) { GST_ERROR_OBJECT (self, "failed to stop: %s", - gst_element_state_change_return_get_name (stopped)); + gst_state_change_return_get_name (stopped)); return FALSE; } @@ -896,7 +896,7 @@ return TRUE; } else { GST_ERROR_OBJECT (self, "failed to start: %s", - gst_element_state_change_return_get_name (started)); + gst_state_change_return_get_name (started)); return FALSE; } }
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/mse/gstmediasource.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/mse/gstmediasource.h
Changed
@@ -140,7 +140,7 @@ GstObject); GST_MSE_API -GstMediaSource *gst_media_source_new (void); +GstMediaSource *gst_media_source_new (void) G_GNUC_WARN_UNUSED_RESULT; GST_MSE_API void gst_media_source_attach (GstMediaSource * self, GstMseSrc * element); @@ -150,11 +150,11 @@ GST_MSE_API GstSourceBufferList * gst_media_source_get_source_buffers ( - GstMediaSource * self); + GstMediaSource * self) G_GNUC_WARN_UNUSED_RESULT; GST_MSE_API GstSourceBufferList * gst_media_source_get_active_source_buffers ( - GstMediaSource * self); + GstMediaSource * self) G_GNUC_WARN_UNUSED_RESULT; GST_MSE_API GstMediaSourceReadyState gst_media_source_get_ready_state ( @@ -174,7 +174,7 @@ GST_MSE_API GstSourceBuffer * gst_media_source_add_source_buffer (GstMediaSource * self, const gchar * type, - GError ** error); + GError ** error) G_GNUC_WARN_UNUSED_RESULT; GST_MSE_API gboolean gst_media_source_remove_source_buffer (GstMediaSource * self,
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/mse/gstsourcebuffer.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/mse/gstsourcebuffer.c
Changed
@@ -815,9 +815,11 @@ { GstSourceBuffer *self = GST_SOURCE_BUFFER (user_data); +#ifndef G_DISABLE_CHECKS gboolean processed_init_segment = g_atomic_int_get (&self->processed_init_segment); g_return_if_fail (processed_init_segment); +#endif GST_OBJECT_LOCK (self); gboolean is_within_window = is_within_append_window_unlocked (self, sample);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/mse/gstsourcebuffer.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/mse/gstsourcebuffer.h
Changed
@@ -75,14 +75,14 @@ GstSourceBufferAppendMode mode, GError ** error); GST_MSE_API -gchar *gst_source_buffer_get_content_type (GstSourceBuffer * self); +gchar *gst_source_buffer_get_content_type (GstSourceBuffer * self) G_GNUC_WARN_UNUSED_RESULT; GST_MSE_API gboolean gst_source_buffer_get_updating (GstSourceBuffer * self); GST_MSE_API GArray * gst_source_buffer_get_buffered (GstSourceBuffer * self, - GError ** error); + GError ** error) G_GNUC_WARN_UNUSED_RESULT; GST_MSE_API gboolean gst_source_buffer_set_timestamp_offset (GstSourceBuffer * self,
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/mse/gstsourcebufferlist.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/mse/gstsourcebufferlist.h
Changed
@@ -37,7 +37,7 @@ GST_MSE_API GstSourceBuffer *gst_source_buffer_list_index (GstSourceBufferList * self, - guint index); + guint index) G_GNUC_WARN_UNUSED_RESULT; GST_MSE_API guint gst_source_buffer_list_get_length (GstSourceBufferList * self);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/opencv/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/opencv/meson.build
Changed
@@ -30,9 +30,9 @@ gstopencv_cargs = -opencv_dep = dependency('opencv', version : '>= 3.0.0', '< 3.5.0', required : false) +opencv_dep = dependency('opencv', version : '>= 3.0.0', '< 3.5.0', required : false, include_type: 'system' ) if not opencv_dep.found() - opencv_dep = dependency('opencv4', version : '>= 4.0.0', required : opencv_opt) + opencv_dep = dependency('opencv4', version : '>= 4.0.0', required : opencv_opt, include_type: 'system') if not opencv_dep.found() subdir_done() endif
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/play/gstplay-signal-adapter.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/play/gstplay-signal-adapter.h
Changed
@@ -47,13 +47,13 @@ GType gst_play_signal_adapter_get_type (void); GST_PLAY_API -GstPlaySignalAdapter * gst_play_signal_adapter_new (GstPlay * play); +GstPlaySignalAdapter * gst_play_signal_adapter_new (GstPlay * play) G_GNUC_WARN_UNUSED_RESULT; GST_PLAY_API -GstPlaySignalAdapter * gst_play_signal_adapter_new_with_main_context (GstPlay * play, GMainContext * context); +GstPlaySignalAdapter * gst_play_signal_adapter_new_with_main_context (GstPlay * play, GMainContext * context) G_GNUC_WARN_UNUSED_RESULT; GST_PLAY_API -GstPlaySignalAdapter * gst_play_signal_adapter_new_sync_emit (GstPlay * play); +GstPlaySignalAdapter * gst_play_signal_adapter_new_sync_emit (GstPlay * play) G_GNUC_WARN_UNUSED_RESULT; GST_PLAY_API GstPlay * gst_play_signal_adapter_get_play (GstPlaySignalAdapter * adapter);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/play/gstplay-visualization.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/play/gstplay-visualization.h
Changed
@@ -45,7 +45,7 @@ GType gst_play_visualization_get_type (void); GST_PLAY_API -GstPlayVisualization * gst_play_visualization_copy (const GstPlayVisualization *vis); +GstPlayVisualization * gst_play_visualization_copy (const GstPlayVisualization *vis) G_GNUC_WARN_UNUSED_RESULT; GST_PLAY_API void gst_play_visualization_free (GstPlayVisualization *vis);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/play/gstplay.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/play/gstplay.c
Changed
@@ -197,6 +197,8 @@ GstClockTime seek_position; GstStreamCollection *collection; + guint32 selection_seqnum; + GList *current_selection; gchar *video_sid; gboolean video_enabled; gchar *audio_sid; @@ -310,6 +312,7 @@ self->audio_enabled = TRUE; self->video_enabled = TRUE; self->subtitle_enabled = TRUE; + self->selection_seqnum = GST_SEQNUM_INVALID; GST_TRACE_OBJECT (self, "Initialized"); } @@ -513,6 +516,7 @@ gst_structure_free (self->config); if (self->collection) gst_object_unref (self->collection); + g_list_free_full (self->current_selection, g_free); if (self->media_info) g_object_unref (self->media_info); g_mutex_clear (&self->lock); @@ -1430,13 +1434,11 @@ gchar *transition_name; GST_DEBUG_OBJECT (self, "Changed state old: %s new: %s pending: %s", - gst_element_state_get_name (old_state), - gst_element_state_get_name (new_state), - gst_element_state_get_name (pending_state)); + gst_state_get_name (old_state), + gst_state_get_name (new_state), gst_state_get_name (pending_state)); transition_name = g_strdup_printf ("%s_%s", - gst_element_state_get_name (old_state), - gst_element_state_get_name (new_state)); + gst_state_get_name (old_state), gst_state_get_name (new_state)); dump_dot_file (self, transition_name); g_free (transition_name); @@ -1613,15 +1615,14 @@ gst_message_parse_request_state (msg, &state); - GST_DEBUG_OBJECT (self, "State %s requested", - gst_element_state_get_name (state)); + GST_DEBUG_OBJECT (self, "State %s requested", gst_state_get_name (state)); self->target_state = state; state_ret = gst_element_set_state (self->playbin, state); if (state_ret == GST_STATE_CHANGE_FAILURE) on_error (self, g_error_new (GST_PLAY_ERROR, GST_PLAY_ERROR_FAILED, "Failed to change to requested state %s", - gst_element_state_get_name (state)), NULL); + gst_state_get_name (state)), NULL); } static void @@ -1869,22 +1870,13 @@ gpointer user_data) { GstPlay *self = GST_PLAY (user_data); - GstStreamCollection *collection = NULL; - - gst_message_parse_streams_selected 
(msg, &collection); - - if (!collection) - return; + guint32 seqnum = gst_message_get_seqnum (msg); g_mutex_lock (&self->lock); - gboolean updated = update_stream_collection (self, collection); - gst_object_unref (collection); - - // This should not really happen: we should first get a stream-collection - // message with the new collection, then selection happens. - if (updated) { - GST_WARNING_OBJECT (self, - "Updated stream collection from streams-selected message"); + // Ignore selections for previous select-streams events + if (self->selection_seqnum != seqnum) { + g_mutex_unlock (&self->lock); + return; } gboolean found_audio = self->audio_sid == NULL; @@ -1921,9 +1913,10 @@ } } - if ((stream_type & GST_STREAM_TYPE_TEXT) && self->subtitle_enabled) { + if ((stream_type & GST_STREAM_TYPE_TEXT)) { GST_DEBUG_OBJECT (self, "Selected subtitle track %s", stream_id); - if (g_strcmp0 (self->subtitle_sid, stream_id) == 0) { + if (g_strcmp0 (self->subtitle_sid, stream_id) == 0 + && self->subtitle_enabled) { found_subtitle = TRUE; } else { GST_WARNING_OBJECT (self, "Unexpected subtitle stream id '%s' selected", @@ -1947,9 +1940,6 @@ self->subtitle_sid); } g_mutex_unlock (&self->lock); - - if (self->media_info && updated) - on_media_info_updated (self); } static void @@ -2251,6 +2241,7 @@ return FALSE; } +/* Must be called with lock */ static GstPlayStreamInfo * gst_play_stream_info_get_current_from_stream_id (GstPlay * self, const gchar * stream_id, GType type) @@ -2260,13 +2251,11 @@ if (!self->media_info || !stream_id) return NULL; - g_mutex_lock (&self->lock); info = gst_play_stream_info_find_from_stream_id (self->media_info, stream_id); if (info && G_OBJECT_TYPE (info) == type) info = gst_play_stream_info_copy (info); else info = NULL; - g_mutex_unlock (&self->lock); return info; } @@ -2579,6 +2568,30 @@ } } +static void +about_to_finish_cb (GstElement * playbin, GstPlay * self) +{ + GstPlayLoop loop; + gchar *uri = NULL; + + g_mutex_lock (&self->lock); + loop = 
gst_play_config_get_loop (self->config); + uri = g_strdup (self->uri); + g_mutex_unlock (&self->lock); + + switch (loop) { + case GST_PLAY_LOOP_NONE: + break; + case GST_PLAY_LOOP_TRACK: + GST_DEBUG_OBJECT (self, "Resetting URI to '%s'", GST_STR_NULL (uri)); + + g_object_set (self->playbin, "uri", uri, NULL); + break; + } + + g_free (uri); +} + static gpointer gst_play_main (gpointer data) { @@ -2652,6 +2665,8 @@ G_CALLBACK (mute_notify_cb), self); g_signal_connect (self->playbin, "source-setup", G_CALLBACK (source_setup_cb), self); + g_signal_connect (self->playbin, "about-to-finish", + G_CALLBACK (about_to_finish_cb), self); self->target_state = GST_STATE_NULL; self->current_state = GST_STATE_NULL; @@ -2961,6 +2976,9 @@ gst_object_unref (self->collection); self->collection = NULL; } + self->selection_seqnum = GST_SEQNUM_INVALID; + g_list_free_full (self->current_selection, g_free); + self->current_selection = NULL; g_free (self->video_sid); g_free (self->audio_sid); g_free (self->subtitle_sid); @@ -3448,12 +3466,16 @@ g_return_val_if_fail (GST_IS_PLAY (self), NULL); - if (!is_track_enabled (self, GST_PLAY_FLAG_AUDIO)) + g_mutex_lock (&self->lock); + if (!self->audio_enabled) { + g_mutex_unlock (&self->lock); return NULL; + } info = (GstPlayAudioInfo *) gst_play_stream_info_get_current_from_stream_id (self, self->audio_sid, GST_TYPE_PLAY_AUDIO_INFO); + g_mutex_unlock (&self->lock); return info; } @@ -3476,12 +3498,16 @@ g_return_val_if_fail (GST_IS_PLAY (self), NULL); - if (!is_track_enabled (self, GST_PLAY_FLAG_VIDEO)) + g_mutex_lock (&self->lock); + if (!self->video_enabled) { + g_mutex_unlock (&self->lock); return NULL; + } info = (GstPlayVideoInfo *) gst_play_stream_info_get_current_from_stream_id (self, self->video_sid, GST_TYPE_PLAY_VIDEO_INFO); + g_mutex_unlock (&self->lock); return info; } @@ -3504,16 +3530,37 @@ g_return_val_if_fail (GST_IS_PLAY (self), NULL); - if (!is_track_enabled (self, GST_PLAY_FLAG_SUBTITLE)) + g_mutex_lock (&self->lock); + if 
(!self->subtitle_enabled) { + g_mutex_unlock (&self->lock); return NULL; + } info = (GstPlaySubtitleInfo *) gst_play_stream_info_get_current_from_stream_id (self, self->subtitle_sid, GST_TYPE_PLAY_SUBTITLE_INFO); + g_mutex_unlock (&self->lock); return info; } +static gboolean +is_same_stream_selection (GList * a, GList * b) +{ + // We always create the list in the same order so + // checking both lists linearly is sufficient + while (a && b) { + if (!g_str_equal (a->data, b->data)) + return FALSE; + + a = a->next; + b = b->next; + } + + // If both lists are at the end now then they were equal + return a == b; +} + /* Must be called with lock */ static gboolean gst_play_select_streams (GstPlay * self) @@ -3537,15 +3584,25 @@ stream_list = g_list_append (stream_list, g_strdup (self->subtitle_sid)); } - g_mutex_unlock (&self->lock); if (stream_list) { - ret = gst_element_send_event (self->playbin, - gst_event_new_select_streams (stream_list)); - g_list_free_full (stream_list, g_free); + if (is_same_stream_selection (self->current_selection, stream_list)) { + GST_DEBUG_OBJECT (self, "Stream selection did not change"); + g_list_free_full (stream_list, g_free); + } else { + GstEvent *ev = gst_event_new_select_streams (stream_list); + g_list_free_full (self->current_selection, g_free); + self->current_selection = stream_list; + self->selection_seqnum = gst_event_get_seqnum (ev); + g_mutex_unlock (&self->lock); + ret = gst_element_send_event (self->playbin, ev); + g_mutex_lock (&self->lock); + if (!ret) { + GST_WARNING_OBJECT (self, "Stream selection failed"); + } + } } else { GST_ERROR_OBJECT (self, "No available streams for select-streams"); } - g_mutex_lock (&self->lock); return ret; } @@ -4485,6 +4542,24 @@ return (GType) id; } +GType +gst_play_loop_get_type (void) +{ + static gsize id = 0; + static const GEnumValue values = { + {C_ENUM (GST_PLAY_LOOP_NONE), "GST_PLAY_LOOP_NONE", "none"}, + {C_ENUM (GST_PLAY_LOOP_TRACK), "GST_PLAY_LOOP_TRACK", "track"}, + {0, NULL, 
NULL} + }; + + if (g_once_init_enter (&id)) { + GType tmp = g_enum_register_static ("GstPlayLoop", values); + g_once_init_leave (&id, tmp); + } + + return (GType) id; +} + /** * gst_play_state_get_name: * @state: a #GstPlayState @@ -4531,6 +4606,25 @@ return enum_value->value_name; } +/** + * gst_play_loop_get_name: + * @loop: a #GstPlayLoop + * + * Returns: (transfer none): a string with the name of the loop. + * Since: 1.28 + */ +const gchar * +gst_play_loop_get_name (GstPlayLoop loop) +{ + GEnumClass *enum_class; + GEnumValue *enum_value; + enum_class = g_type_class_ref (GST_TYPE_PLAY_LOOP); + enum_value = g_enum_get_value (enum_class, loop); + g_assert (enum_value != NULL); + g_type_class_unref (enum_class); + return enum_value->value_name; +} + GType gst_play_error_get_type (void) { @@ -4768,6 +4862,45 @@ } /** + * gst_play_config_set_loop: + * @config: a #GstPlay configuration + * @loop: #GstPlayLoop + * + * Sets the looping mode. + * + * Looping is disabled by default. + * + * Since: 1.28 + */ +void +gst_play_config_set_loop (GstStructure * config, GstPlayLoop loop) +{ + g_return_if_fail (config != NULL); + + gst_structure_set (config, "loop", GST_TYPE_PLAY_LOOP, loop, NULL); +} + +/** + * gst_play_config_get_loop: + * @config: a #GstPlay configuration + * + * Returns: The looping mode. + * + * Since: 1.28 + */ +GstPlayLoop +gst_play_config_get_loop (const GstStructure * config) +{ + GstPlayLoop loop = GST_PLAY_LOOP_NONE; + + g_return_val_if_fail (config != NULL, GST_PLAY_LOOP_NONE); + + gst_structure_get (config, "loop", GST_TYPE_PLAY_LOOP, &loop, NULL); + + return loop; +} + +/** * gst_play_config_set_pipeline_dump_in_error_details: * @config: a #GstPlay configuration * @value: Include pipeline dumps in error details, or not.
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/play/gstplay.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/play/gstplay.h
Changed
@@ -50,6 +50,15 @@ */ #define GST_TYPE_PLAY_MESSAGE (gst_play_message_get_type ()) +GST_PLAY_API +GType gst_play_loop_get_type (void); + +/** + * GST_TYPE_PLAY_LOOP: + * Since: 1.28 + */ +#define GST_TYPE_PLAY_LOOP (gst_play_loop_get_type ()) + /** * GstPlayState: * @GST_PLAY_STATE_STOPPED: the play is stopped. @@ -108,6 +117,19 @@ GST_PLAY_MESSAGE_SEEK_DONE } GstPlayMessage; +/** + * GstPlayLoop: + * @GST_PLAY_LOOP_NONE: Don't loop. + * @GST_PLAY_LOOP_TRACK: Loop over the current track. + * + * Since: 1.28 + */ +typedef enum +{ + GST_PLAY_LOOP_NONE, + GST_PLAY_LOOP_TRACK, +} GstPlayLoop; + GST_PLAY_API const gchar *gst_play_state_get_name (GstPlayState state); @@ -115,6 +137,9 @@ const gchar *gst_play_message_get_name (GstPlayMessage message_type); GST_PLAY_API +const gchar *gst_play_loop_get_name (GstPlayLoop loop); + +GST_PLAY_API GQuark gst_play_error_quark (void); GST_PLAY_API @@ -216,10 +241,10 @@ GType gst_play_get_type (void); GST_PLAY_API -GstPlay * gst_play_new (GstPlayVideoRenderer * video_renderer); +GstPlay * gst_play_new (GstPlayVideoRenderer * video_renderer) G_GNUC_WARN_UNUSED_RESULT; GST_PLAY_API -GstBus * gst_play_get_message_bus (GstPlay * play); +GstBus * gst_play_get_message_bus (GstPlay * play) G_GNUC_WARN_UNUSED_RESULT; GST_PLAY_API void gst_play_play (GstPlay * play); @@ -242,14 +267,14 @@ gdouble gst_play_get_rate (GstPlay * play); GST_PLAY_API -gchar * gst_play_get_uri (GstPlay * play); +gchar * gst_play_get_uri (GstPlay * play) G_GNUC_WARN_UNUSED_RESULT; GST_PLAY_API void gst_play_set_uri (GstPlay * play, const gchar * uri); GST_PLAY_API -gchar * gst_play_get_subtitle_uri (GstPlay * play); +gchar * gst_play_get_subtitle_uri (GstPlay * play) G_GNUC_WARN_UNUSED_RESULT; GST_PLAY_API void gst_play_set_subtitle_uri (GstPlay * play, @@ -276,7 +301,7 @@ gboolean val); GST_PLAY_API -GstElement * gst_play_get_pipeline (GstPlay * play); +GstElement * gst_play_get_pipeline (GstPlay * play) G_GNUC_WARN_UNUSED_RESULT; GST_PLAY_API void 
gst_play_set_video_track_enabled (GstPlay * play, @@ -320,16 +345,16 @@ const gchar *subtitle_stream_id); GST_PLAY_API -GstPlayMediaInfo * gst_play_get_media_info (GstPlay * play); +GstPlayMediaInfo * gst_play_get_media_info (GstPlay * play) G_GNUC_WARN_UNUSED_RESULT; GST_PLAY_API -GstPlayAudioInfo * gst_play_get_current_audio_track (GstPlay * play); +GstPlayAudioInfo * gst_play_get_current_audio_track (GstPlay * play) G_GNUC_WARN_UNUSED_RESULT; GST_PLAY_API -GstPlayVideoInfo * gst_play_get_current_video_track (GstPlay * play); +GstPlayVideoInfo * gst_play_get_current_video_track (GstPlay * play) G_GNUC_WARN_UNUSED_RESULT; GST_PLAY_API -GstPlaySubtitleInfo * gst_play_get_current_subtitle_track (GstPlay * play); +GstPlaySubtitleInfo * gst_play_get_current_subtitle_track (GstPlay * play) G_GNUC_WARN_UNUSED_RESULT; GST_PLAY_API gboolean gst_play_set_visualization (GstPlay * play, @@ -340,7 +365,7 @@ gboolean enabled); GST_PLAY_API -gchar * gst_play_get_current_visualization (GstPlay * play); +gchar * gst_play_get_current_visualization (GstPlay * play) G_GNUC_WARN_UNUSED_RESULT; GST_PLAY_API gboolean gst_play_has_color_balance (GstPlay * play); @@ -388,7 +413,7 @@ GstStructure * config); GST_PLAY_API -GstStructure * gst_play_get_config (GstPlay * play); +GstStructure * gst_play_get_config (GstPlay * play) G_GNUC_WARN_UNUSED_RESULT; /* helpers for configuring the config structure */ @@ -413,6 +438,13 @@ gboolean gst_play_config_get_seek_accurate (const GstStructure * config); GST_PLAY_API +void gst_play_config_set_loop (GstStructure *config, + GstPlayLoop loop); + +GST_PLAY_API +GstPlayLoop gst_play_config_get_loop (const GstStructure * config); + +GST_PLAY_API void gst_play_config_set_pipeline_dump_in_error_details (GstStructure * config, gboolean value); @@ -440,7 +472,7 @@ GST_PLAY_API GstSample * gst_play_get_video_snapshot (GstPlay * play, - GstPlaySnapshotFormat format, const GstStructure * config); + GstPlaySnapshotFormat format, const GstStructure * config) 
G_GNUC_WARN_UNUSED_RESULT; GST_PLAY_API gboolean gst_play_is_play_message (GstMessage *msg);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/player/gstplayer-visualization.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/player/gstplayer-visualization.h
Changed
@@ -44,7 +44,7 @@ GType gst_player_visualization_get_type (void); GST_PLAYER_API -GstPlayerVisualization * gst_player_visualization_copy (const GstPlayerVisualization *vis); +GstPlayerVisualization * gst_player_visualization_copy (const GstPlayerVisualization *vis) G_GNUC_WARN_UNUSED_RESULT; GST_PLAYER_API void gst_player_visualization_free (GstPlayerVisualization *vis);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/player/gstplayer.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/player/gstplayer.h
Changed
@@ -112,7 +112,7 @@
 GType gst_player_get_type (void);
 
 GST_PLAYER_API
-GstPlayer * gst_player_new (GstPlayerVideoRenderer * video_renderer, GstPlayerSignalDispatcher * signal_dispatcher);
+GstPlayer * gst_player_new (GstPlayerVideoRenderer * video_renderer, GstPlayerSignalDispatcher * signal_dispatcher) G_GNUC_WARN_UNUSED_RESULT;
 
 GST_PLAYER_API
 void gst_player_play (GstPlayer * player);
@@ -135,14 +135,14 @@
 gdouble gst_player_get_rate (GstPlayer * player);
 
 GST_PLAYER_API
-gchar * gst_player_get_uri (GstPlayer * player);
+gchar * gst_player_get_uri (GstPlayer * player) G_GNUC_WARN_UNUSED_RESULT;
 
 GST_PLAYER_API
 void gst_player_set_uri (GstPlayer * player, const gchar * uri);
 
 GST_PLAYER_API
-gchar * gst_player_get_subtitle_uri (GstPlayer * player);
+gchar * gst_player_get_subtitle_uri (GstPlayer * player) G_GNUC_WARN_UNUSED_RESULT;
 
 GST_PLAYER_API
 void gst_player_set_subtitle_uri (GstPlayer * player,
@@ -169,7 +169,7 @@
     gboolean val);
 
 GST_PLAYER_API
-GstElement * gst_player_get_pipeline (GstPlayer * player);
+GstElement * gst_player_get_pipeline (GstPlayer * player) G_GNUC_WARN_UNUSED_RESULT;
 
 GST_PLAYER_API
 void gst_player_set_video_track_enabled (GstPlayer * player,
@@ -196,16 +196,16 @@
     gint stream_index);
 
 GST_PLAYER_API
-GstPlayerMediaInfo * gst_player_get_media_info (GstPlayer * player);
+GstPlayerMediaInfo * gst_player_get_media_info (GstPlayer * player) G_GNUC_WARN_UNUSED_RESULT;
 
 GST_PLAYER_API
-GstPlayerAudioInfo * gst_player_get_current_audio_track (GstPlayer * player);
+GstPlayerAudioInfo * gst_player_get_current_audio_track (GstPlayer * player) G_GNUC_WARN_UNUSED_RESULT;
 
 GST_PLAYER_API
-GstPlayerVideoInfo * gst_player_get_current_video_track (GstPlayer * player);
+GstPlayerVideoInfo * gst_player_get_current_video_track (GstPlayer * player) G_GNUC_WARN_UNUSED_RESULT;
 
 GST_PLAYER_API
-GstPlayerSubtitleInfo * gst_player_get_current_subtitle_track (GstPlayer * player);
+GstPlayerSubtitleInfo * gst_player_get_current_subtitle_track (GstPlayer * player) G_GNUC_WARN_UNUSED_RESULT;
 
 GST_PLAYER_API
 gboolean gst_player_set_visualization (GstPlayer * player,
@@ -216,7 +216,7 @@
     gboolean enabled);
 
 GST_PLAYER_API
-gchar * gst_player_get_current_visualization (GstPlayer * player);
+gchar * gst_player_get_current_visualization (GstPlayer * player) G_GNUC_WARN_UNUSED_RESULT;
 
 GST_PLAYER_API
 gboolean gst_player_has_color_balance (GstPlayer * player);
@@ -264,7 +264,7 @@
     GstStructure * config);
 
 GST_PLAYER_API
-GstStructure * gst_player_get_config (GstPlayer * player);
+GstStructure * gst_player_get_config (GstPlayer * player) G_GNUC_WARN_UNUSED_RESULT;
 
 /* helpers for configuring the config structure */
@@ -299,7 +299,7 @@
 GST_PLAYER_API
 GstSample * gst_player_get_video_snapshot (GstPlayer * player,
-    GstPlayerSnapshotFormat format, const GstStructure * config);
+    GstPlayerSnapshotFormat format, const GstStructure * config) G_GNUC_WARN_UNUSED_RESULT;
 
 G_END_DECLS
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/transcoder/gsttranscoder-signal-adapter.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/transcoder/gsttranscoder-signal-adapter.c
Changed
@@ -110,7 +110,8 @@
   GstStructure *details = NULL;
 
   gst_structure_get (message_data, GST_TRANSCODER_MESSAGE_DATA_ERROR,
-      G_TYPE_ERROR, &error, GST_TYPE_STRUCTURE, &details, NULL);
+      G_TYPE_ERROR, &error, GST_TRANSCODER_MESSAGE_DATA_ISSUE_DETAILS,
+      GST_TYPE_STRUCTURE, &details, NULL);
 
   g_signal_emit (self, signals[SIGNAL_ERROR], 0, error, details);
   g_error_free (error);
   if (details)
@@ -122,7 +123,8 @@
   GError *error = NULL;
 
   gst_structure_get (message_data, GST_TRANSCODER_MESSAGE_DATA_WARNING,
-      G_TYPE_ERROR, &error, GST_TYPE_STRUCTURE, &details, NULL);
+      G_TYPE_ERROR, &error, GST_TRANSCODER_MESSAGE_DATA_ISSUE_DETAILS,
+      GST_TYPE_STRUCTURE, &details, NULL);
 
   g_signal_emit (self, signals[SIGNAL_WARNING], 0, error, details);
   g_error_free (error);
   if (details)
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/transcoder/gsttranscoder-signal-adapter.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/transcoder/gsttranscoder-signal-adapter.h
Changed
@@ -49,7 +49,7 @@
 G_DECLARE_FINAL_TYPE(GstTranscoderSignalAdapter, gst_transcoder_signal_adapter, GST, TRANSCODER_SIGNAL_ADAPTER, GObject)
 
 GST_TRANSCODER_API
-GstTranscoder * gst_transcoder_signal_adapter_get_transcoder (GstTranscoderSignalAdapter * self);
+GstTranscoder * gst_transcoder_signal_adapter_get_transcoder (GstTranscoderSignalAdapter * self) G_GNUC_WARN_UNUSED_RESULT;
 
 G_END_DECLS
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/transcoder/gsttranscoder.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/transcoder/gsttranscoder.c
Changed
@@ -80,7 +80,6 @@
   GMainLoop *loop;
 
   GstElement *transcodebin;
-  GstBus *bus;
   GstState target_state, current_state;
   gboolean is_live, is_eos;
   GSource *tick_source, *ready_timeout_source;
@@ -665,13 +664,11 @@
     gchar *transition_name;
 
     GST_DEBUG_OBJECT (self, "Changed state old: %s new: %s pending: %s",
-        gst_element_state_get_name (old_state),
-        gst_element_state_get_name (new_state),
-        gst_element_state_get_name (pending_state));
+        gst_state_get_name (old_state),
+        gst_state_get_name (new_state), gst_state_get_name (pending_state));
 
     transition_name = g_strdup_printf ("%s_%s",
-        gst_element_state_get_name (old_state),
-        gst_element_state_get_name (new_state));
+        gst_state_get_name (old_state), gst_state_get_name (new_state));
     dump_dot_file (self, transition_name);
     g_free (transition_name);
@@ -727,8 +724,7 @@
 
       gst_message_parse_request_state (msg, &state);
 
-      GST_DEBUG_OBJECT (self, "State %s requested",
-          gst_element_state_get_name (state));
+      GST_DEBUG_OBJECT (self, "State %s requested", gst_state_get_name (state));
 
       self->target_state = state;
       state_ret = gst_element_set_state (self->transcodebin, state);
@@ -736,7 +732,7 @@
         GError *err = g_error_new (GST_TRANSCODER_ERROR,
             GST_TRANSCODER_ERROR_FAILED,
             "Failed to change to requested state %s",
-            gst_element_state_get_name (state));
+            gst_state_get_name (state));
 
         api_bus_post_message (self, GST_TRANSCODER_MESSAGE_ERROR,
             GST_TRANSCODER_MESSAGE_DATA_ERROR, G_TYPE_ERROR, err, NULL);
@@ -802,7 +798,7 @@
   g_source_attach (source, self->context);
   g_source_unref (source);
 
-  self->bus = bus = gst_element_get_bus (self->transcodebin);
+  bus = gst_element_get_bus (self->transcodebin);
   gst_bus_add_signal_watch (bus);
 
   g_signal_connect (G_OBJECT (bus), "message::error", G_CALLBACK (error_cb),
@@ -1471,7 +1467,7 @@
  * Since: 1.20
  */
 void
-gst_transcoder_message_parse_error (GstMessage * msg, GError * error,
+gst_transcoder_message_parse_error (GstMessage * msg, GError ** error,
     GstStructure ** details)
 {
   PARSE_MESSAGE_FIELD (msg, GST_TRANSCODER_MESSAGE_DATA_ERROR, G_TYPE_ERROR,
@@ -1491,7 +1487,7 @@
  * Since: 1.20
  */
 void
-gst_transcoder_message_parse_warning (GstMessage * msg, GError * error,
+gst_transcoder_message_parse_warning (GstMessage * msg, GError ** error,
     GstStructure ** details)
 {
   PARSE_MESSAGE_FIELD (msg, GST_TRANSCODER_MESSAGE_DATA_WARNING, G_TYPE_ERROR,
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/transcoder/gsttranscoder.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/transcoder/gsttranscoder.h
Changed
@@ -93,10 +93,10 @@
 void gst_transcoder_message_parse_state (GstMessage * msg, GstTranscoderState * state);
 
 GST_TRANSCODER_API
-void gst_transcoder_message_parse_error (GstMessage * msg, GError * error, GstStructure ** details);
+void gst_transcoder_message_parse_error (GstMessage * msg, GError ** error, GstStructure ** details);
 
 GST_TRANSCODER_API
-void gst_transcoder_message_parse_warning (GstMessage * msg, GError * error, GstStructure ** details);
+void gst_transcoder_message_parse_warning (GstMessage * msg, GError ** error, GstStructure ** details);
@@ -114,19 +114,19 @@
 GST_TRANSCODER_API
 GstTranscoder * gst_transcoder_new (const gchar * source_uri,
     const gchar * dest_uri,
-    const gchar * encoding_profile);
+    const gchar * encoding_profile) G_GNUC_WARN_UNUSED_RESULT;
 
 GST_TRANSCODER_API
 GstTranscoder * gst_transcoder_new_full (const gchar * source_uri,
     const gchar * dest_uri,
-    GstEncodingProfile * profile);
+    GstEncodingProfile * profile) G_GNUC_WARN_UNUSED_RESULT;
 
 GST_TRANSCODER_API
 gboolean gst_transcoder_run (GstTranscoder * self, GError ** error);
 
 GST_TRANSCODER_API
-GstBus * gst_transcoder_get_message_bus (GstTranscoder * transcoder);
+GstBus * gst_transcoder_get_message_bus (GstTranscoder * transcoder) G_GNUC_WARN_UNUSED_RESULT;
 
 GST_TRANSCODER_API
 void gst_transcoder_set_cpu_usage (GstTranscoder * self,
@@ -140,10 +140,10 @@
     guint interval);
 
 GST_TRANSCODER_API
-gchar * gst_transcoder_get_source_uri (GstTranscoder * self);
+gchar * gst_transcoder_get_source_uri (GstTranscoder * self) G_GNUC_WARN_UNUSED_RESULT;
 
 GST_TRANSCODER_API
-gchar * gst_transcoder_get_dest_uri (GstTranscoder * self);
+gchar * gst_transcoder_get_dest_uri (GstTranscoder * self) G_GNUC_WARN_UNUSED_RESULT;
 
 GST_TRANSCODER_API
 guint gst_transcoder_get_position_update_interval (GstTranscoder * self);
@@ -155,7 +155,7 @@
 GstClockTime gst_transcoder_get_duration (GstTranscoder * self);
 
 GST_TRANSCODER_API
-GstElement * gst_transcoder_get_pipeline (GstTranscoder * self);
+GstElement * gst_transcoder_get_pipeline (GstTranscoder * self) G_GNUC_WARN_UNUSED_RESULT;
 
 GST_TRANSCODER_API
 gboolean gst_transcoder_get_avoid_reencoding (GstTranscoder * self);
@@ -168,10 +168,10 @@
 GST_TRANSCODER_API
 GstTranscoderSignalAdapter* gst_transcoder_get_signal_adapter (GstTranscoder * self,
-    GMainContext *context);
+    GMainContext *context) G_GNUC_WARN_UNUSED_RESULT;
 
 GST_TRANSCODER_API
 GstTranscoderSignalAdapter*
-gst_transcoder_get_sync_signal_adapter (GstTranscoder * self);
+gst_transcoder_get_sync_signal_adapter (GstTranscoder * self) G_GNUC_WARN_UNUSED_RESULT;
 
 G_END_DECLS
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/va/gstvaallocator.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/va/gstvaallocator.c
Changed
@@ -241,6 +241,10 @@
 
 /*=========================== GstVaDmabufAllocator ===========================*/
 
+#define GST_VA_DMABUF_ALLOCATOR(obj) (G_TYPE_CHECK_INSTANCE_CAST((obj), GST_TYPE_VA_DMABUF_ALLOCATOR, GstVaDmabufAllocator))
+#define GST_VA_DMABUF_ALLOCATOR_CLASS(klass) (G_TYPE_CHECK_CLASS_CAST((klass), GST_TYPE_VA_DMABUF_ALLOCATOR, GstVaDmabufAllocatorClass))
+#define GST_VA_DMABUF_ALLOCATOR_GET_CLASS(obj) (G_TYPE_INSTANCE_GET_CLASS((obj), GST_TYPE_VA_DMABUF_ALLOCATOR, GstVaDmabufAllocatorClass))
+
 /**
  * GstVaDmabufAllocator:
  *
@@ -1192,6 +1196,10 @@
 
 /*===================== GstVaAllocator / GstVaMemory =========================*/
 
+#define GST_VA_ALLOCATOR(obj) (G_TYPE_CHECK_INSTANCE_CAST((obj), GST_TYPE_VA_ALLOCATOR, GstVaAllocator))
+#define GST_VA_ALLOCATOR_CLASS(klass) (G_TYPE_CHECK_CLASS_CAST((klass), GST_TYPE_VA_ALLOCATOR, GstVaAllocatorClass))
+#define GST_VA_ALLOCATOR_GET_CLASS(obj) (G_TYPE_INSTANCE_GET_CLASS((obj), GST_TYPE_VA_ALLOCATOR, GstVaAllocatorClass))
+
 /**
  * GstVaAllocator:
  *
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/va/gstvaallocator.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/va/gstvaallocator.h
Changed
@@ -29,16 +29,13 @@
 G_BEGIN_DECLS
 
 #define GST_TYPE_VA_DMABUF_ALLOCATOR (gst_va_dmabuf_allocator_get_type())
-#define GST_VA_DMABUF_ALLOCATOR(obj) (G_TYPE_CHECK_INSTANCE_CAST((obj), GST_TYPE_VA_DMABUF_ALLOCATOR, GstVaDmabufAllocator))
-#define GST_VA_DMABUF_ALLOCATOR_CLASS(klass) (G_TYPE_CHECK_CLASS_CAST((klass), GST_TYPE_VA_DMABUF_ALLOCATOR, GstVaDmabufAllocatorClass))
 #define GST_IS_VA_DMABUF_ALLOCATOR(obj) (G_TYPE_CHECK_INSTANCE_TYPE((obj), GST_TYPE_VA_DMABUF_ALLOCATOR))
 #define GST_IS_VA_DMABUF_ALLOCATOR_CLASS(klass) (G_TYPE_CHECK_CLASS_TYPE((klass), GST_TYPE_VA_DMABUF_ALLOCATOR))
-#define GST_VA_DMABUF_ALLOCATOR_GET_CLASS(obj) (G_TYPE_INSTANCE_GET_CLASS((obj), GST_TYPE_VA_DMABUF_ALLOCATOR, GstVaDmabufAllocatorClass))
 
 GST_VA_API
 GType gst_va_dmabuf_allocator_get_type (void);
 GST_VA_API
-GstAllocator * gst_va_dmabuf_allocator_new (GstVaDisplay * display);
+GstAllocator * gst_va_dmabuf_allocator_new (GstVaDisplay * display) G_GNUC_WARN_UNUSED_RESULT;
 GST_VA_API
 gboolean gst_va_dmabuf_allocator_setup_buffer (GstAllocator * allocator,
     GstBuffer * buffer);
@@ -70,11 +67,8 @@
     guint usage_hint);
 
 #define GST_TYPE_VA_ALLOCATOR (gst_va_allocator_get_type())
-#define GST_VA_ALLOCATOR(obj) (G_TYPE_CHECK_INSTANCE_CAST((obj), GST_TYPE_VA_ALLOCATOR, GstVaAllocator))
-#define GST_VA_ALLOCATOR_CLASS(klass) (G_TYPE_CHECK_CLASS_CAST((klass), GST_TYPE_VA_ALLOCATOR, GstVaAllocatorClass))
 #define GST_IS_VA_ALLOCATOR(obj) (G_TYPE_CHECK_INSTANCE_TYPE((obj), GST_TYPE_VA_ALLOCATOR))
 #define GST_IS_VA_ALLOCATOR_CLASS(klass) (G_TYPE_CHECK_CLASS_TYPE((klass), GST_TYPE_VA_ALLOCATOR))
-#define GST_VA_ALLOCATOR_GET_CLASS(obj) (G_TYPE_INSTANCE_GET_CLASS((obj), GST_TYPE_VA_ALLOCATOR, GstVaAllocatorClass))
 
 /**
  * GST_ALLOCATOR_VASURFACE:
@@ -98,7 +92,7 @@
 GType gst_va_allocator_get_type (void);
 GST_VA_API
 GstAllocator * gst_va_allocator_new (GstVaDisplay * display,
-    GArray * surface_formats);
+    GArray * surface_formats) G_GNUC_WARN_UNUSED_RESULT;
 GST_VA_API
 GstMemory * gst_va_allocator_alloc (GstAllocator * allocator);
 GST_VA_API
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/va/gstvadisplay_drm.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/va/gstvadisplay_drm.h
Changed
@@ -37,6 +37,6 @@
 GST_VA_API
 GType gst_va_display_drm_get_type (void);
 GST_VA_API
-GstVaDisplay * gst_va_display_drm_new_from_path (const gchar * path);
+GstVaDisplay * gst_va_display_drm_new_from_path (const gchar * path) G_GNUC_WARN_UNUSED_RESULT;
 
 G_END_DECLS
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/va/gstvadisplay_wrapped.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/va/gstvadisplay_wrapped.h
Changed
@@ -37,6 +37,6 @@
 GST_VA_API
 GType gst_va_display_wrapped_get_type (void);
 GST_VA_API
-GstVaDisplay * gst_va_display_wrapped_new (gpointer handle);
+GstVaDisplay * gst_va_display_wrapped_new (gpointer handle) G_GNUC_WARN_UNUSED_RESULT;
 
 G_END_DECLS
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/va/gstvapool.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/va/gstvapool.c
Changed
@@ -40,6 +40,10 @@
 GST_DEBUG_CATEGORY_STATIC (gst_va_pool_debug);
 #define GST_CAT_DEFAULT gst_va_pool_debug
 
+#define GST_VA_POOL(obj) (G_TYPE_CHECK_INSTANCE_CAST((obj), GST_TYPE_VA_POOL, GstVaPool))
+#define GST_VA_POOL_CLASS(klass) (G_TYPE_CHECK_CLASS_CAST((klass), GST_TYPE_VA_POOL, GstVaPoolClass))
+#define GST_VA_POOL_GET_CLASS(obj) (G_TYPE_INSTANCE_GET_CLASS((obj), GST_TYPE_VA_POOL, GstVaPoolClass))
+
 /**
  * GstVaPool:
  *
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/va/gstvapool.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/va/gstvapool.h
Changed
@@ -27,16 +27,13 @@
 G_BEGIN_DECLS
 
 #define GST_TYPE_VA_POOL (gst_va_pool_get_type())
-#define GST_VA_POOL(obj) (G_TYPE_CHECK_INSTANCE_CAST((obj), GST_TYPE_VA_POOL, GstVaPool))
-#define GST_VA_POOL_CLASS(klass) (G_TYPE_CHECK_CLASS_CAST((klass), GST_TYPE_VA_POOL, GstVaPoolClass))
 #define GST_IS_VA_POOL(obj) (G_TYPE_CHECK_INSTANCE_TYPE((obj), GST_TYPE_VA_POOL))
 #define GST_IS_VA_POOL_CLASS(klass) (G_TYPE_CHECK_CLASS_TYPE((klass), GST_TYPE_VA_POOL))
-#define GST_VA_POOL_GET_CLASS(obj) (G_TYPE_INSTANCE_GET_CLASS((obj), GST_TYPE_VA_POOL, GstVaPoolClass))
 
 GST_VA_API
 GType gst_va_pool_get_type (void);
 GST_VA_API
-GstBufferPool * gst_va_pool_new (void);
+GstBufferPool * gst_va_pool_new (void) G_GNUC_WARN_UNUSED_RESULT;
 GST_VA_API
 gboolean gst_va_pool_requires_video_meta (GstBufferPool * pool);
 GST_VA_API
@@ -53,7 +50,7 @@
     guint usage_hint,
     GstVaFeature use_derived,
     GstAllocator * allocator,
-    GstAllocationParams * alloc_params);
+    GstAllocationParams * alloc_params) G_GNUC_WARN_UNUSED_RESULT;
 GST_VA_API
 gboolean gst_va_pool_get_buffer_size (GstBufferPool * pool,
     guint * size);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkbufferpool.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkbufferpool.h
Changed
@@ -79,7 +79,7 @@
 G_DEFINE_AUTOPTR_CLEANUP_FUNC(GstVulkanBufferPool, gst_object_unref);
 
 GST_VULKAN_API
-GstBufferPool *gst_vulkan_buffer_pool_new (GstVulkanDevice * device);
+GstBufferPool *gst_vulkan_buffer_pool_new (GstVulkanDevice * device) G_GNUC_WARN_UNUSED_RESULT;
 
 GST_VULKAN_API
 void gst_vulkan_buffer_pool_config_set_allocation_params
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkcommandbuffer.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkcommandbuffer.h
Changed
@@ -145,7 +145,7 @@
 GST_VULKAN_API
 GstVulkanCommandBuffer * gst_vulkan_command_buffer_new_wrapped (VkCommandBuffer cmd,
-    VkCommandBufferLevel level);
+    VkCommandBufferLevel level) G_GNUC_WARN_UNUSED_RESULT;
 
 G_END_DECLS
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkcommandpool.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkcommandpool.h
Changed
@@ -71,11 +71,11 @@
 G_DEFINE_AUTOPTR_CLEANUP_FUNC (GstVulkanCommandPool, gst_object_unref)
 
 GST_VULKAN_API
-GstVulkanQueue * gst_vulkan_command_pool_get_queue (GstVulkanCommandPool * pool);
+GstVulkanQueue * gst_vulkan_command_pool_get_queue (GstVulkanCommandPool * pool) G_GNUC_WARN_UNUSED_RESULT;
 GST_VULKAN_API
 GstVulkanCommandBuffer * gst_vulkan_command_pool_create (GstVulkanCommandPool * pool,
-    GError ** error);
+    GError ** error) G_GNUC_WARN_UNUSED_RESULT;
 GST_VULKAN_API
 void gst_vulkan_command_pool_lock (GstVulkanCommandPool * pool);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkdebug.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkdebug.c
Changed
@@ -109,8 +109,10 @@
 #if VK_HEADER_VERSION >= 70
   {VK_QUEUE_PROTECTED_BIT, "protected"},
 #endif
-#if GST_VULKAN_HAVE_VIDEO_EXTENSIONS
+#if defined(VK_KHR_video_decode_queue)
   {VK_QUEUE_VIDEO_DECODE_BIT_KHR, "decode"},
+#endif
+#if defined(VK_KHR_video_encode_queue)
   {VK_QUEUE_VIDEO_ENCODE_BIT_KHR, "encode"}
 #endif
 };
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkdecoder-private.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkdecoder-private.c
Changed
@@ -59,6 +59,8 @@ GstVulkanVideoFunctions vk; gboolean started; + + guint32 features; }; #define GST_CAT_DEFAULT gst_vulkan_decoder_debug @@ -77,19 +79,12 @@ { GstVulkanDecoderPrivate *priv = gst_vulkan_decoder_get_instance_private (self); - GstVulkanInstance *instance; if (priv->vk_populated) return TRUE; - instance = gst_vulkan_device_get_instance (self->queue->device); - if (!instance) { - GST_ERROR_OBJECT (self, "Failed to get instance from the device"); - return FALSE; - } - - priv->vk_populated = gst_vulkan_video_get_vk_functions (instance, &priv->vk); - gst_object_unref (instance); + priv->vk_populated = + gst_vulkan_video_get_vk_functions (self->queue->device, &priv->vk); return priv->vk_populated; } @@ -116,6 +111,47 @@ gobject_class->finalize = gst_vulkan_decoder_finalize; } +static gboolean +_create_empty_params (GstVulkanDecoder * self, GError ** error) +{ + GstVulkanDecoderParameters empty_params; + GstVulkanDecoderPrivate *priv = + gst_vulkan_decoder_get_instance_private (self); + + switch (self->profile.profile.videoCodecOperation) { + case VK_VIDEO_CODEC_OPERATION_DECODE_H264_BIT_KHR: + /* *INDENT-OFF* */ + empty_params.h264 = (VkVideoDecodeH264SessionParametersCreateInfoKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_DECODE_H264_SESSION_PARAMETERS_CREATE_INFO_KHR, + }; + /* *INDENT-ON* */ + break; + case VK_VIDEO_CODEC_OPERATION_DECODE_H265_BIT_KHR: + /* *INDENT-OFF* */ + empty_params.h265 = (VkVideoDecodeH265SessionParametersCreateInfoKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_DECODE_H265_SESSION_PARAMETERS_CREATE_INFO_KHR, + }; + /* *INDENT-ON* */ + break; + case VK_VIDEO_CODEC_OPERATION_DECODE_VP9_BIT_KHR: + /* VP9 doesn't have session parameters */ + return TRUE; + case VK_VIDEO_CODEC_OPERATION_DECODE_AV1_BIT_KHR: + /* *INDENT-OFF* */ + empty_params.av1 = (VkVideoDecodeAV1SessionParametersCreateInfoKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_DECODE_AV1_SESSION_PARAMETERS_CREATE_INFO_KHR, + }; + /* *INDENT-ON* */ + break; + default: + 
g_assert_not_reached (); + } + + priv->empty_params = gst_vulkan_decoder_new_video_session_parameters (self, + &empty_params, error); + return (priv->empty_params != NULL); +} + /** * gst_vulkan_decoder_start: * @self: a #GstVulkanDecoder @@ -132,26 +168,17 @@ GstVulkanVideoProfile * profile, GError ** error) { GstVulkanDecoderPrivate *priv; - VkPhysicalDevice gpu; - VkResult res; - VkVideoFormatPropertiesKHR *fmts = NULL; - VkVideoProfileListInfoKHR profile_list = { - .sType = VK_STRUCTURE_TYPE_VIDEO_PROFILE_LIST_INFO_KHR, - .profileCount = 1, - }; - VkPhysicalDeviceVideoFormatInfoKHR fmt_info = { - .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VIDEO_FORMAT_INFO_KHR, - .pNext = &profile_list, - }; + GArray *fmts = NULL; VkVideoSessionCreateInfoKHR session_create; - GstVulkanDecoderParameters empty_params; - guint i, maxlevel, n_fmts, codec_idx; + guint i, maxlevel, codec_idx; GstVideoFormat format = GST_VIDEO_FORMAT_UNKNOWN; VkFormat vk_format = VK_FORMAT_UNDEFINED; GstVulkanCommandPool *cmd_pool; + GstVulkanPhysicalDevice *phy_dev; GError *query_err = NULL; g_return_val_if_fail (GST_IS_VULKAN_DECODER (self), FALSE); + g_return_val_if_fail (profile != NULL, FALSE); priv = gst_vulkan_decoder_get_instance_private (self); @@ -169,6 +196,8 @@ switch (self->codec) { case VK_VIDEO_CODEC_OPERATION_DECODE_H264_BIT_KHR: case VK_VIDEO_CODEC_OPERATION_DECODE_H265_BIT_KHR: + case VK_VIDEO_CODEC_OPERATION_DECODE_VP9_BIT_KHR: + case VK_VIDEO_CODEC_OPERATION_DECODE_AV1_BIT_KHR: if (!gst_vulkan_video_profile_is_valid (profile, self->codec)) { g_set_error (error, GST_VULKAN_ERROR, VK_ERROR_INITIALIZATION_FAILED, "Invalid profile"); @@ -185,57 +214,32 @@ self->profile.profile.pNext = &self->profile.usage.decode; self->profile.usage.decode.pNext = &self->profile.codec; - switch (self->codec) { - case VK_VIDEO_CODEC_OPERATION_DECODE_H264_BIT_KHR: - /* *INDENT-OFF* */ - priv->caps.decoder.codec.h264 = (VkVideoDecodeH264CapabilitiesKHR) { - .sType = 
VK_STRUCTURE_TYPE_VIDEO_DECODE_H264_CAPABILITIES_KHR, - }; - /* *INDENT-ON* */ - codec_idx = GST_VK_VIDEO_EXTENSION_DECODE_H264; - break; - case VK_VIDEO_CODEC_OPERATION_DECODE_H265_BIT_KHR: - /* *INDENT-OFF* */ - priv->caps.decoder.codec.h265 = (VkVideoDecodeH265CapabilitiesKHR) { - .sType = VK_STRUCTURE_TYPE_VIDEO_DECODE_H265_CAPABILITIES_KHR, - }; - /* *INDENT-ON* */ - codec_idx = GST_VK_VIDEO_EXTENSION_DECODE_H265; - break; - default: - g_assert_not_reached (); - } - - /* *INDENT-OFF* */ - priv->caps.decoder.caps = (VkVideoDecodeCapabilitiesKHR) { - .sType = VK_STRUCTURE_TYPE_VIDEO_DECODE_CAPABILITIES_KHR, - .pNext = &priv->caps.decoder.codec, - }; - priv->caps.caps = (VkVideoCapabilitiesKHR) { - .sType = VK_STRUCTURE_TYPE_VIDEO_CAPABILITIES_KHR, - .pNext = &priv->caps.decoder.caps, - }; - /* *INDENT-ON* */ - - gpu = gst_vulkan_device_get_physical_device (self->queue->device); - res = priv->vk.GetPhysicalDeviceVideoCapabilities (gpu, - &self->profile.profile, &priv->caps.caps); - if (gst_vulkan_error_to_g_error (res, error, - "vkGetPhysicalDeviceVideoCapabilitiesKHR") != VK_SUCCESS) + phy_dev = self->queue->device->physical_device; + if (!gst_vulkan_video_try_configuration (phy_dev, &self->profile, &priv->caps, + &priv->profile_caps, &fmts, error)) return FALSE; switch (self->codec) { case VK_VIDEO_CODEC_OPERATION_DECODE_H264_BIT_KHR: + codec_idx = GST_VK_VIDEO_EXTENSION_DECODE_H264; maxlevel = priv->caps.decoder.codec.h264.maxLevelIdc; break; case VK_VIDEO_CODEC_OPERATION_DECODE_H265_BIT_KHR: + codec_idx = GST_VK_VIDEO_EXTENSION_DECODE_H265; maxlevel = priv->caps.decoder.codec.h265.maxLevelIdc; break; + case VK_VIDEO_CODEC_OPERATION_DECODE_VP9_BIT_KHR: + codec_idx = GST_VK_VIDEO_EXTENSION_DECODE_VP9; + maxlevel = priv->caps.decoder.codec.vp9.maxLevel; + break; + case VK_VIDEO_CODEC_OPERATION_DECODE_AV1_BIT_KHR: + codec_idx = GST_VK_VIDEO_EXTENSION_DECODE_AV1; + maxlevel = priv->caps.decoder.codec.av1.maxLevel; + break; default: - maxlevel = 0; + 
g_assert_not_reached ();
   }
 
-  priv->profile_caps = gst_vulkan_video_profile_to_caps (&self->profile);
 
   GST_LOG_OBJECT (self, "Capabilities for %" GST_PTR_FORMAT ":\n"
       "    Maximum level: %d\n"
       "    Width from %i to %i\n"
@@ -295,60 +299,22 @@
   self->layered_dpb = ((priv->caps.caps.flags
       & VK_VIDEO_CAPABILITY_SEPARATE_REFERENCE_IMAGES_BIT_KHR) == 0);
 
-  priv->caps.caps.pNext = NULL;
-
-  /* Get output format */
-  profile_list.pProfiles = &self->profile.profile;
-
-  fmt_info.imageUsage = VK_IMAGE_USAGE_VIDEO_DECODE_DST_BIT_KHR
-      | VK_IMAGE_USAGE_TRANSFER_SRC_BIT | VK_IMAGE_USAGE_SAMPLED_BIT;
-  if (!self->dedicated_dpb)
-    fmt_info.imageUsage |= VK_IMAGE_USAGE_VIDEO_DECODE_DPB_BIT_KHR;
-
-  res = priv->vk.GetPhysicalDeviceVideoFormatProperties (gpu, &fmt_info,
-      &n_fmts, NULL);
-  if (gst_vulkan_error_to_g_error (res, error,
-          "vkGetPhysicalDeviceVideoFormatPropertiesKHR") != VK_SUCCESS)
-    goto failed;
-
-  if (n_fmts == 0) {
-    g_set_error (error, GST_VULKAN_ERROR, VK_ERROR_INITIALIZATION_FAILED,
-        "Profile doesn't have an output format");
-    goto failed;
-  }
-
-  fmts = g_new0 (VkVideoFormatPropertiesKHR, n_fmts);
-  for (i = 0; i < n_fmts; i++)
-    fmts[i].sType = VK_STRUCTURE_TYPE_VIDEO_FORMAT_PROPERTIES_KHR;
-
-  res = priv->vk.GetPhysicalDeviceVideoFormatProperties (gpu, &fmt_info,
-      &n_fmts, fmts);
-  if (gst_vulkan_error_to_g_error (res, error,
-          "vkGetPhysicalDeviceVideoFormatPropertiesKHR") != VK_SUCCESS) {
-    goto failed;
-  }
-
-  if (n_fmts == 0) {
-    g_free (fmts);
-    g_set_error (error, GST_VULKAN_ERROR, VK_ERROR_INITIALIZATION_FAILED,
-        "Profile doesn't have an output format");
-    goto failed;
-  }
-
   /* find the best output format */
-  for (i = 0; i < n_fmts; i++) {
-    format = gst_vulkan_format_to_video_format (fmts[i].format);
+  for (i = 0; i < fmts->len; i++) {
+    VkVideoFormatPropertiesKHR *fmt =
+        &g_array_index (fmts, VkVideoFormatPropertiesKHR, i);
+
+    format = gst_vulkan_format_to_video_format (fmt->format);
     if (format == GST_VIDEO_FORMAT_UNKNOWN) {
-      GST_WARNING_OBJECT (self, "Unknown Vulkan format %i", fmts[i].format);
+      GST_WARNING_OBJECT (self, "Unknown Vulkan format %i", fmt->format);
       continue;
     } else {
-      vk_format = fmts[i].format;
-      priv->format = fmts[i];
-      priv->format.pNext = NULL;
+      vk_format = fmt->format;
+      priv->format = *fmt;
       break;
     }
   }
 
-  g_clear_pointer (&fmts, g_free);
+  g_array_unref (fmts);
 
   if (vk_format == VK_FORMAT_UNDEFINED) {
     g_set_error (error, GST_VULKAN_ERROR, VK_ERROR_INITIALIZATION_FAILED,
@@ -373,35 +339,24 @@
   };
   /* *INDENT-ON* */
 
+  if (gst_vulkan_physical_device_has_feature_video_maintenance2 (self->
+          queue->device->physical_device)) {
+    priv->features |= GST_VULKAN_DECODER_FEATURE_INLINE_PARAMS;
+    session_create.flags |=
+        VK_VIDEO_SESSION_CREATE_INLINE_SESSION_PARAMETERS_BIT_KHR;
+  }
+
   /* create video session */
   if (!gst_vulkan_video_session_create (&priv->session, self->queue->device,
           &priv->vk, &session_create, error))
     goto failed;
 
-  /* create empty codec params */
-  switch (self->profile.profile.videoCodecOperation) {
-    case VK_VIDEO_CODEC_OPERATION_DECODE_H264_BIT_KHR:
-      /* *INDENT-OFF* */
-      empty_params.h264 = (VkVideoDecodeH264SessionParametersCreateInfoKHR) {
-        .sType = VK_STRUCTURE_TYPE_VIDEO_DECODE_H264_SESSION_PARAMETERS_CREATE_INFO_KHR,
-      };
-      /* *INDENT-ON* */
-      break;
-    case VK_VIDEO_CODEC_OPERATION_DECODE_H265_BIT_KHR:
-      /* *INDENT-OFF* */
-      empty_params.h265 = (VkVideoDecodeH265SessionParametersCreateInfoKHR) {
-        .sType = VK_STRUCTURE_TYPE_VIDEO_DECODE_H265_SESSION_PARAMETERS_CREATE_INFO_KHR,
-      };
-      /* *INDENT-ON* */
-      break;
-    default:
-      g_assert_not_reached ();
+  if (!gst_vulkan_decoder_has_feature (self,
+          GST_VULKAN_DECODER_FEATURE_INLINE_PARAMS)) {
+    if (!_create_empty_params (self, error))
+      goto failed;
   }
 
-  priv->empty_params = gst_vulkan_decoder_new_video_session_parameters (self,
-      &empty_params, error);
-  if (!priv->empty_params)
-    goto failed;
 
   cmd_pool = gst_vulkan_queue_create_command_pool (self->queue, error);
   if (!cmd_pool)
     goto failed;
@@ -426,7 +381,6 @@
 
 failed:
   {
-    g_free (fmts);
gst_clear_caps (&priv->profile_caps); if (priv->session.session) @@ -470,6 +424,8 @@ gst_vulkan_video_session_destroy (&priv->session); + priv->features = 0; + gst_clear_caps (&priv->profile_caps); gst_clear_vulkan_handle (&priv->empty_params); @@ -512,14 +468,15 @@ priv = gst_vulkan_decoder_get_instance_private (self); - if (!(priv->empty_params && priv->exec)) + if (!priv->exec) return FALSE; /* *INDENT-OFF* */ decode_start = (VkVideoBeginCodingInfoKHR) { .sType = VK_STRUCTURE_TYPE_VIDEO_BEGIN_CODING_INFO_KHR, .videoSession = priv->session.session->handle, - .videoSessionParameters = priv->empty_params->handle, + .videoSessionParameters = + priv->empty_params ? priv->empty_params->handle : VK_NULL_HANDLE, }; /* *INDENT-ON* */ @@ -572,6 +529,23 @@ max_buffers = 0; } + if (priv->dpb_pool) { + GstCaps *old_caps = NULL; + gboolean keep_pool = FALSE; + + config = gst_buffer_pool_get_config (priv->dpb_pool); + gst_buffer_pool_config_get_params (config, &old_caps, NULL, NULL, NULL); + keep_pool = gst_caps_is_strictly_equal (caps, old_caps); + gst_structure_free (config); + + if (keep_pool) { + GST_INFO_OBJECT (self, "Reusing existing DPB pool"); + return TRUE; + } + + gst_clear_object (&priv->dpb_pool); + } + priv->dpb_pool = gst_vulkan_image_buffer_pool_new (self->queue->device); config = gst_buffer_pool_get_config (priv->dpb_pool); @@ -646,15 +620,16 @@ decode_start = (VkVideoBeginCodingInfoKHR) { .sType = VK_STRUCTURE_TYPE_VIDEO_BEGIN_CODING_INFO_KHR, .videoSession = priv->session.session->handle, - .videoSessionParameters = priv->session_params->handle, + .videoSessionParameters = + priv->session_params ? 
priv->session_params->handle : VK_NULL_HANDLE, .referenceSlotCount = pic->decode_info.referenceSlotCount, .pReferenceSlots = pic->decode_info.pReferenceSlots, }; /* *INDENT-ON* */ - if (!(priv->started && priv->session_params)) { + if (!priv->started) { g_set_error (error, GST_VULKAN_ERROR, VK_ERROR_INITIALIZATION_FAILED, - "Vulkan Decoder has not started or no session parameters are set"); + "Vulkan Decoder has not started"); return FALSE; } @@ -955,6 +930,12 @@ g_return_val_if_fail (GST_IS_VULKAN_DECODER (self), FALSE); g_return_val_if_fail (params, FALSE); + /* if inline session parameters are enabled, there's no need to update session + * parameters. This function is no-op */ + if (gst_vulkan_decoder_has_feature (self, + GST_VULKAN_DECODER_FEATURE_INLINE_PARAMS)) + return TRUE; + handle = gst_vulkan_decoder_new_video_session_parameters (self, params, error); if (!handle) @@ -1001,46 +982,24 @@ VkSamplerYcbcrRange range, VkChromaLocation xloc, VkChromaLocation yloc, GError ** error) { - const VkPhysicalDeviceFeatures2 *features; - const VkBaseOutStructure *iter; GstVulkanDevice *device; GstVulkanDecoderPrivate *priv; GstVulkanHandle *handle; VkSamplerYcbcrConversionCreateInfo create_info; VkSamplerYcbcrConversion ycbr_conversion; VkResult res; - gboolean found = FALSE; g_return_val_if_fail (GST_IS_VULKAN_DECODER (self), FALSE); device = self->queue->device; - if (!gst_vulkan_physical_device_check_api_version (device->physical_device, 1, - 2, 0)) { + if (!gst_vulkan_physical_device_has_feature_sampler_ycbrc_conversion + (device->physical_device)) { g_set_error (error, GST_VULKAN_ERROR, VK_ERROR_INITIALIZATION_FAILED, "Sampler Ycbcr conversion not available in API"); return FALSE; } - features = gst_vulkan_physical_device_get_features (device->physical_device); - for (iter = (const VkBaseOutStructure *) features; iter; iter = iter->pNext) { - if (iter->sType == VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VULKAN_1_1_FEATURES) { - const VkPhysicalDeviceVulkan11Features 
*features11 = - (const VkPhysicalDeviceVulkan11Features *) iter; - - if (!features11->samplerYcbcrConversion) - return FALSE; - found = TRUE; - break; - } - } - - if (!found) { - g_set_error (error, GST_VULKAN_ERROR, VK_ERROR_INITIALIZATION_FAILED, - "Sampler Ycbcr conversion not available in driver"); - return FALSE; - } - priv = gst_vulkan_decoder_get_instance_private (self); /* *INDENT-OFF* */ @@ -1223,7 +1182,7 @@ /* append data */ { - GstBufferMapInfo mapinfo; + GstMapInfo mapinfo; guint32 offset; if (!gst_buffer_map (self->input_buffer, &mapinfo, GST_MAP_WRITE)) @@ -1284,6 +1243,21 @@ return TRUE; } +static const struct +{ + VkVideoCodecOperationFlagsKHR codec; + const char *extension; +} _vk_decoder_extension_map = { + {VK_VIDEO_CODEC_OPERATION_DECODE_H264_BIT_KHR, + VK_KHR_VIDEO_DECODE_H264_EXTENSION_NAME}, + {VK_VIDEO_CODEC_OPERATION_DECODE_H265_BIT_KHR, + VK_KHR_VIDEO_DECODE_H265_EXTENSION_NAME}, + {VK_VIDEO_CODEC_OPERATION_DECODE_VP9_BIT_KHR, + VK_KHR_VIDEO_DECODE_VP9_EXTENSION_NAME}, + {VK_VIDEO_CODEC_OPERATION_DECODE_AV1_BIT_KHR, + VK_KHR_VIDEO_DECODE_AV1_EXTENSION_NAME}, +}; + /** * gst_vulkan_decoder_new_from_queue: * @queue: a #GstVulkanQueue @@ -1298,8 +1272,8 @@ { GstVulkanPhysicalDevice *device; GstVulkanDecoder *decoder; - guint flags, expected_flag, supported_video_ops; - const char *extension; + guint i, flags, expected_flag, supported_video_ops; + const char *extension = NULL; static gsize cat_gonce = 0; g_return_val_if_fail (GST_IS_VULKAN_QUEUE (queue), NULL); @@ -1315,25 +1289,25 @@ g_once_init_leave (&cat_gonce, TRUE); } - if (device->properties.apiVersion < VK_MAKE_VERSION (1, 3, 275)) { + /* XXX: sync with the meson version for vulkan video enabling */ + if (!gst_vulkan_physical_device_check_api_version (device, 1, 4, 306)) { GST_WARNING_OBJECT (queue, - "Driver API version %d.%d.%d doesn't support Video extensions", + "Driver version %d.%d.%d doesn't support required video extensions", VK_VERSION_MAJOR (device->properties.apiVersion), 
VK_VERSION_MINOR (device->properties.apiVersion), VK_VERSION_PATCH (device->properties.apiVersion)); return NULL; } - switch (codec) { - case VK_VIDEO_CODEC_OPERATION_DECODE_H264_BIT_KHR: - extension = VK_KHR_VIDEO_DECODE_H264_EXTENSION_NAME; - break; - case VK_VIDEO_CODEC_OPERATION_DECODE_H265_BIT_KHR: - extension = VK_KHR_VIDEO_DECODE_H265_EXTENSION_NAME; + for (i = 0; i < G_N_ELEMENTS (_vk_decoder_extension_map); i++) { + if (_vk_decoder_extension_map[i].codec == codec) { + extension = _vk_decoder_extension_map[i].extension; break; - default: - GST_WARNING_OBJECT (queue, "Unsupported codec %u", codec); - return NULL; + } + } + if (!extension) { + GST_WARNING_OBJECT (queue, "Unsupported codec %u", codec); + return NULL; } if ((flags & expected_flag) != expected_flag) { GST_WARNING_OBJECT (queue, "Queue doesn't support decoding"); @@ -1345,11 +1319,7 @@ return NULL; } - if (!(gst_vulkan_device_is_extension_enabled (queue->device, - VK_KHR_VIDEO_QUEUE_EXTENSION_NAME) - && gst_vulkan_device_is_extension_enabled (queue->device, - VK_KHR_VIDEO_DECODE_QUEUE_EXTENSION_NAME) - && gst_vulkan_device_is_extension_enabled (queue->device, extension))) + if (!gst_vulkan_device_is_extension_enabled (queue->device, extension)) return NULL; decoder = g_object_new (GST_TYPE_VULKAN_DECODER, NULL); @@ -1359,3 +1329,24 @@ return decoder; } + +/** + * gst_vulkan_decoder_has_feature: + * @self: a #GstVulkanDecoder + * @features: (type guint32): the features to support + * + * Check if the #GstVulkanDecoder supports the given features + * + * Returns: whether the features are supported + */ +gboolean +gst_vulkan_decoder_has_feature (GstVulkanDecoder * self, guint32 features) +{ + GstVulkanDecoderPrivate *priv; + + g_return_val_if_fail (GST_IS_VULKAN_DECODER (self), FALSE); + + priv = gst_vulkan_decoder_get_instance_private (self); + + return ((priv->features & features) != 0); +}
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkdecoder-private.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkdecoder-private.h
Changed
@@ -21,6 +21,7 @@ #pragma once #include <gst/vulkan/gstvkqueue.h> +#include "gstvkvideoutils-private.h" G_BEGIN_DECLS @@ -33,6 +34,10 @@ GST_VULKAN_API GType gst_vulkan_decoder_get_type (void); +enum { + GST_VULKAN_DECODER_FEATURE_INLINE_PARAMS = 1 << 0, +}; + typedef struct _GstVulkanDecoder GstVulkanDecoder; typedef struct _GstVulkanDecoderClass GstVulkanDecoderClass; typedef struct _GstVulkanDecoderPicture GstVulkanDecoderPicture; @@ -103,6 +108,8 @@ gboolean dedicated_dpb; gboolean layered_dpb; + + /*< private >*/ gpointer _reserved[GST_PADDING]; }; @@ -132,6 +139,7 @@ /*< private >*/ VkVideoDecodeH264SessionParametersCreateInfoKHR h264; VkVideoDecodeH265SessionParametersCreateInfoKHR h265; + VkVideoDecodeAV1SessionParametersCreateInfoKHR av1; }; G_DEFINE_AUTOPTR_CLEANUP_FUNC (GstVulkanDecoder, gst_object_unref) @@ -198,4 +206,7 @@ GST_VULKAN_API gboolean gst_vulkan_decoder_wait (GstVulkanDecoder * self); +GST_VULKAN_API +gboolean gst_vulkan_decoder_has_feature (GstVulkanDecoder * self, guint32 feature); + G_END_DECLS
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkdescriptorcache.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkdescriptorcache.h
Changed
@@ -69,10 +69,10 @@ GST_VULKAN_API GstVulkanDescriptorCache * gst_vulkan_descriptor_cache_new (GstVulkanDescriptorPool * pool, guint n_layouts, - GstVulkanHandle ** layouts); + GstVulkanHandle ** layouts) G_GNUC_WARN_UNUSED_RESULT; GST_VULKAN_API GstVulkanDescriptorSet * gst_vulkan_descriptor_cache_acquire (GstVulkanDescriptorCache * cache, - GError ** error); + GError ** error) G_GNUC_WARN_UNUSED_RESULT; #endif /* __GST_VULKAN_DESCRIPTOR_CACHE_H__ */
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkdescriptorpool.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkdescriptorpool.h
Changed
@@ -71,16 +71,16 @@ GST_VULKAN_API GstVulkanDescriptorPool * gst_vulkan_descriptor_pool_new_wrapped (GstVulkanDevice * device, VkDescriptorPool pool, - gsize max_sets); + gsize max_sets) G_GNUC_WARN_UNUSED_RESULT; GST_VULKAN_API -GstVulkanDevice * gst_vulkan_descriptor_pool_get_device (GstVulkanDescriptorPool * pool); +GstVulkanDevice * gst_vulkan_descriptor_pool_get_device (GstVulkanDescriptorPool * pool) G_GNUC_WARN_UNUSED_RESULT; GST_VULKAN_API GstVulkanDescriptorSet * gst_vulkan_descriptor_pool_create (GstVulkanDescriptorPool * pool, guint n_layouts, GstVulkanHandle **layouts, - GError ** error); + GError ** error) G_GNUC_WARN_UNUSED_RESULT; GST_VULKAN_API gsize gst_vulkan_descriptor_pool_get_max_sets (GstVulkanDescriptorPool * pool);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkdescriptorset.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkdescriptorset.h
Changed
@@ -129,7 +129,7 @@ GstVulkanDescriptorSet * gst_vulkan_descriptor_set_new_wrapped (GstVulkanDescriptorPool * pool, VkDescriptorSet set, guint n_layouts, - GstVulkanHandle ** layouts); + GstVulkanHandle ** layouts) G_GNUC_WARN_UNUSED_RESULT; G_DEFINE_AUTOPTR_CLEANUP_FUNC(GstVulkanDescriptorSet, gst_vulkan_descriptor_set_unref);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkdevice.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkdevice.c
Changed
@@ -23,7 +23,6 @@ #endif #include "gstvkdevice.h" -#include "gstvkdebug.h" #include "gstvkphysicaldevice-private.h" #include <string.h> @@ -179,6 +178,8 @@ typedef gboolean (*CanExtensionBeEnabled) (const struct extension * extension, GstVulkanPhysicalDevice * phy_dev); +typedef gboolean (*IsFeatureEnabled) (GstVulkanPhysicalDevice * phy_dev); + struct extension { /* name of the extension */ @@ -191,8 +192,18 @@ /* the Vulkan API version that the extension has been promoted to core and * does not need explicit enabling */ guint promoted_api_version; + /* another extension that this one depends on */ + const char *dependency; + /* function to check if the feature is enabled in the physical device */ + IsFeatureEnabled is_enabled; }; +static inline gboolean +gst_vulkan_physical_device_has_feature_none (GstVulkanPhysicalDevice * phy_dev) +{ + return TRUE; +} + #define NEVER_VK_VERSION VK_MAKE_VERSION (999, 0, 0) static gboolean @@ -206,42 +217,47 @@ VK_VERSION_PATCH (extension->promoted_api_version))) return FALSE; - return gst_vulkan_physical_device_check_api_version (phy_dev, - VK_VERSION_MAJOR (extension->min_api_version), - VK_VERSION_MINOR (extension->min_api_version), - VK_VERSION_PATCH (extension->min_api_version)); -} - -#define OPTIONAL_EXTENSION_VERSION(name, min, promoted) \ - { name, can_enable_api_version, min, promoted, } - -#if GST_VULKAN_HAVE_VIDEO_EXTENSIONS -static gboolean -can_enable_video_queue (const struct extension *extension, - GstVulkanPhysicalDevice * phy_dev) -{ - if (gst_vulkan_physical_device_check_api_version (phy_dev, 1, 3, 0)) - return TRUE; - -#if defined(VK_KHR_synchronization2) - if (gst_vulkan_physical_device_check_api_version (phy_dev, 1, 1, 0) - && gst_vulkan_physical_device_get_extension_info (phy_dev, - VK_KHR_SYNCHRONIZATION_2_EXTENSION_NAME, NULL)) + if (gst_vulkan_physical_device_check_api_version (phy_dev, + VK_VERSION_MAJOR (extension->min_api_version), + VK_VERSION_MINOR (extension->min_api_version), + VK_VERSION_PATCH 
(extension->min_api_version))) { + if (extension->is_enabled && !extension->is_enabled (phy_dev)) + return FALSE; + if (extension->dependency) { + return gst_vulkan_physical_device_get_extension_info (phy_dev, + extension->dependency, NULL); + } return TRUE; -#endif + } return FALSE; } -#define OPTIONAL_VIDEO_EXTENSION(name) \ - { name, can_enable_video_queue, VK_MAKE_VERSION (1, 1, 0), NEVER_VK_VERSION, } +#define OPTIONAL_EXTENSION_VERSION(name, min, promoted) \ + { name, can_enable_api_version, min, promoted, NULL, \ + gst_vulkan_physical_device_has_feature_none } +#if GST_VULKAN_HAVE_VIDEO_EXTENSIONS +#define OPTIONAL_VIDEO_EXTENSION(name, dep, feat) \ + { name, can_enable_api_version, VK_MAKE_VERSION (1, 3, 0), \ + NEVER_VK_VERSION, dep, gst_vulkan_physical_device_has_feature_##feat } #endif static const struct extension optional_extensions[] = { OPTIONAL_EXTENSION_VERSION (VK_KHR_SWAPCHAIN_EXTENSION_NAME, VK_MAKE_VERSION (1, 0, 0), NEVER_VK_VERSION), OPTIONAL_EXTENSION_VERSION (VK_KHR_SAMPLER_YCBCR_CONVERSION_EXTENSION_NAME, + VK_MAKE_VERSION (1, 0, 0), NEVER_VK_VERSION), +#if defined(VK_KHR_get_physical_device_properties2) + OPTIONAL_EXTENSION_VERSION + (VK_KHR_GET_PHYSICAL_DEVICE_PROPERTIES_2_EXTENSION_NAME, VK_MAKE_VERSION (1, 0, 0), VK_MAKE_VERSION (1, 1, 0)), +#endif +#if defined(VK_KHR_format_feature_flags2) + /* TODO: dependency on VK_KHR_GET_PHYSICAL_DEVICE_PROPERTIES_2_EXTENSION_NAME + * in 1.0 */ + OPTIONAL_EXTENSION_VERSION (VK_KHR_FORMAT_FEATURE_FLAGS_2_EXTENSION_NAME, + VK_MAKE_VERSION (1, 1, 0), VK_MAKE_VERSION (1, 3, 0)), +#endif #if defined(VK_KHR_timeline_semaphore) OPTIONAL_EXTENSION_VERSION (VK_KHR_TIMELINE_SEMAPHORE_EXTENSION_NAME, VK_MAKE_VERSION (1, 1, 0), VK_MAKE_VERSION (1, 2, 0)), @@ -251,17 +267,56 @@ VK_MAKE_VERSION (1, 1, 0), VK_MAKE_VERSION (1, 3, 0)), #endif #if GST_VULKAN_HAVE_VIDEO_EXTENSIONS - OPTIONAL_VIDEO_EXTENSION (VK_KHR_VIDEO_QUEUE_EXTENSION_NAME), - OPTIONAL_VIDEO_EXTENSION (VK_KHR_VIDEO_DECODE_QUEUE_EXTENSION_NAME), - 
OPTIONAL_VIDEO_EXTENSION (VK_KHR_VIDEO_DECODE_H264_EXTENSION_NAME), - OPTIONAL_VIDEO_EXTENSION (VK_KHR_VIDEO_DECODE_H265_EXTENSION_NAME), - OPTIONAL_VIDEO_EXTENSION (VK_KHR_VIDEO_ENCODE_QUEUE_EXTENSION_NAME), - OPTIONAL_VIDEO_EXTENSION (VK_KHR_VIDEO_ENCODE_H264_EXTENSION_NAME), - OPTIONAL_VIDEO_EXTENSION (VK_KHR_VIDEO_ENCODE_H265_EXTENSION_NAME), -#if defined(VK_KHR_video_maintenance1) - OPTIONAL_VIDEO_EXTENSION (VK_KHR_VIDEO_MAINTENANCE_1_EXTENSION_NAME), -#endif +# if defined(VK_KHR_video_queue) + /* synchronization2 was promoted in 1.3 */ + OPTIONAL_VIDEO_EXTENSION (VK_KHR_VIDEO_QUEUE_EXTENSION_NAME, + /* VK_KHR_SYNCHRONIZATION_2_EXTENSION_NAME */ NULL, none), #endif +# if defined(VK_KHR_video_decode_queue) + OPTIONAL_VIDEO_EXTENSION (VK_KHR_VIDEO_DECODE_QUEUE_EXTENSION_NAME, + VK_KHR_VIDEO_QUEUE_EXTENSION_NAME, none), +# endif +# if defined(VK_KHR_video_decode_h264) + OPTIONAL_VIDEO_EXTENSION (VK_KHR_VIDEO_DECODE_H264_EXTENSION_NAME, + VK_KHR_VIDEO_DECODE_QUEUE_EXTENSION_NAME, none), +# endif +# if defined(VK_KHR_video_decode_h265) + OPTIONAL_VIDEO_EXTENSION (VK_KHR_VIDEO_DECODE_H265_EXTENSION_NAME, + VK_KHR_VIDEO_DECODE_QUEUE_EXTENSION_NAME, none), +# endif +# if defined(VK_KHR_video_decode_av1) + OPTIONAL_VIDEO_EXTENSION (VK_KHR_VIDEO_DECODE_AV1_EXTENSION_NAME, + VK_KHR_VIDEO_DECODE_QUEUE_EXTENSION_NAME, none), +# endif +# if defined(VK_KHR_video_encode_queue) + OPTIONAL_VIDEO_EXTENSION (VK_KHR_VIDEO_ENCODE_QUEUE_EXTENSION_NAME, + VK_KHR_VIDEO_QUEUE_EXTENSION_NAME, none), +# endif +# if defined(VK_KHR_video_encode_h264) + OPTIONAL_VIDEO_EXTENSION (VK_KHR_VIDEO_ENCODE_H264_EXTENSION_NAME, + VK_KHR_VIDEO_ENCODE_QUEUE_EXTENSION_NAME, none), +# endif +# if defined(VK_KHR_video_encode_h265) + OPTIONAL_VIDEO_EXTENSION (VK_KHR_VIDEO_ENCODE_H265_EXTENSION_NAME, + VK_KHR_VIDEO_ENCODE_QUEUE_EXTENSION_NAME, none), +# endif +# if defined(VK_KHR_video_maintenance1) + OPTIONAL_VIDEO_EXTENSION (VK_KHR_VIDEO_MAINTENANCE_1_EXTENSION_NAME, + VK_KHR_VIDEO_QUEUE_EXTENSION_NAME, 
video_maintenance1), +# endif +# if defined(VK_KHR_video_maintenance2) + OPTIONAL_VIDEO_EXTENSION (VK_KHR_VIDEO_MAINTENANCE_2_EXTENSION_NAME, + VK_KHR_VIDEO_QUEUE_EXTENSION_NAME, video_maintenance2), +# endif +# if defined(VK_KHR_video_encode_av1) + OPTIONAL_VIDEO_EXTENSION (VK_KHR_VIDEO_ENCODE_AV1_EXTENSION_NAME, + VK_KHR_VIDEO_ENCODE_QUEUE_EXTENSION_NAME, video_encode_av1), +# endif +# if defined(VK_KHR_video_decode_vp9) + OPTIONAL_VIDEO_EXTENSION (VK_KHR_VIDEO_DECODE_VP9_EXTENSION_NAME, + VK_KHR_VIDEO_DECODE_QUEUE_EXTENSION_NAME, video_decode_vp9), +# endif +#endif /* GST_VULKAN_HAVE_VIDEO_EXTENSIONS */ }; static void @@ -447,10 +502,6 @@ GArray *array; guint32 *family_scores, n_queue_families; int graph_index, comp_index, tx_index; -#if GST_VULKAN_HAVE_VIDEO_EXTENSIONS - int dec_index = -1; - int enc_index = -1; -#endif n_queue_families = device->physical_device->n_queue_families; queue_family_props = device->physical_device->queue_family_props; @@ -470,13 +521,21 @@ VK_QUEUE_TRANSFER_BIT, family_scores); array = _append_queue_create_info (array, tx_index, queue_family_props); #if GST_VULKAN_HAVE_VIDEO_EXTENSIONS - dec_index = _pick_queue_family (queue_family_props, n_queue_families, - VK_QUEUE_VIDEO_DECODE_BIT_KHR, family_scores); - array = _append_queue_create_info (array, dec_index, queue_family_props); - enc_index = _pick_queue_family (queue_family_props, n_queue_families, - VK_QUEUE_VIDEO_ENCODE_BIT_KHR, family_scores); - array = _append_queue_create_info (array, enc_index, queue_family_props); -#endif +# if defined(VK_KHR_video_decode_queue) + { + int dec_index = _pick_queue_family (queue_family_props, n_queue_families, + VK_QUEUE_VIDEO_DECODE_BIT_KHR, family_scores); + array = _append_queue_create_info (array, dec_index, queue_family_props); + } +# endif +# if defined(VK_KHR_video_encode_queue) + { + int enc_index = _pick_queue_family (queue_family_props, n_queue_families, + VK_QUEUE_VIDEO_ENCODE_BIT_KHR, family_scores); + array = 
_append_queue_create_info (array, enc_index, queue_family_props); + } +# endif +#endif /* GST_VULKAN_HAVE_VIDEO_EXTENSIONS */ g_free (family_scores);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkdevice.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkdevice.h
Changed
@@ -87,11 +87,11 @@ G_DEFINE_AUTOPTR_CLEANUP_FUNC (GstVulkanDevice, gst_object_unref) GST_VULKAN_API -GstVulkanDevice * gst_vulkan_device_new (GstVulkanPhysicalDevice * physical_device); +GstVulkanDevice * gst_vulkan_device_new (GstVulkanPhysicalDevice * physical_device) G_GNUC_WARN_UNUSED_RESULT; GST_VULKAN_API -GstVulkanDevice * gst_vulkan_device_new_with_index (GstVulkanInstance * instance, guint device_index); +GstVulkanDevice * gst_vulkan_device_new_with_index (GstVulkanInstance * instance, guint device_index) G_GNUC_WARN_UNUSED_RESULT; GST_VULKAN_API -GstVulkanInstance * gst_vulkan_device_get_instance (GstVulkanDevice * device); +GstVulkanInstance * gst_vulkan_device_get_instance (GstVulkanDevice * device) G_GNUC_WARN_UNUSED_RESULT; GST_VULKAN_API gboolean gst_vulkan_device_open (GstVulkanDevice * device, GError ** error); @@ -122,9 +122,9 @@ GST_VULKAN_API GstVulkanQueue * gst_vulkan_device_get_queue (GstVulkanDevice * device, guint32 queue_family, - guint32 queue_i); + guint32 queue_i) G_GNUC_WARN_UNUSED_RESULT; GST_VULKAN_API -GArray * gst_vulkan_device_queue_family_indices (GstVulkanDevice * device); +GArray * gst_vulkan_device_queue_family_indices (GstVulkanDevice * device) G_GNUC_WARN_UNUSED_RESULT; GST_VULKAN_API VkPhysicalDevice gst_vulkan_device_get_physical_device (GstVulkanDevice * device); @@ -145,11 +145,11 @@ GST_VULKAN_API GstVulkanFence * gst_vulkan_device_create_fence (GstVulkanDevice * device, - GError ** error); + GError ** error) G_GNUC_WARN_UNUSED_RESULT; GST_VULKAN_API GstVulkanQueue * gst_vulkan_device_select_queue (GstVulkanDevice * device, - VkQueueFlagBits expected_flags); + VkQueueFlagBits expected_flags) G_GNUC_WARN_UNUSED_RESULT; G_END_DECLS
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkdisplay.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkdisplay.h
Changed
@@ -128,10 +128,10 @@ G_DEFINE_AUTOPTR_CLEANUP_FUNC (GstVulkanDisplay, gst_object_unref) GST_VULKAN_API -GstVulkanDisplay * gst_vulkan_display_new (GstVulkanInstance *instance); +GstVulkanDisplay * gst_vulkan_display_new (GstVulkanInstance *instance) G_GNUC_WARN_UNUSED_RESULT; GST_VULKAN_API GstVulkanDisplay * gst_vulkan_display_new_with_type (GstVulkanInstance *instance, - GstVulkanDisplayType type); + GstVulkanDisplayType type) G_GNUC_WARN_UNUSED_RESULT; GST_VULKAN_API GstVulkanDisplayType gst_vulkan_display_choose_type (GstVulkanInstance *instance); GST_VULKAN_API @@ -143,7 +143,7 @@ GST_VULKAN_API GstVulkanDisplayType gst_vulkan_display_get_handle_type (GstVulkanDisplay * display); GST_VULKAN_API -GstVulkanWindow * gst_vulkan_display_create_window (GstVulkanDisplay * display); +GstVulkanWindow * gst_vulkan_display_create_window (GstVulkanDisplay * display) G_GNUC_WARN_UNUSED_RESULT; GST_VULKAN_API gboolean gst_context_get_vulkan_display (GstContext * context, @@ -163,7 +163,7 @@ GST_VULKAN_API gboolean gst_vulkan_display_remove_window (GstVulkanDisplay * display, GstVulkanWindow * window); GST_VULKAN_API -GstVulkanWindow * gst_vulkan_display_find_window (GstVulkanDisplay * display, gpointer data, GCompareFunc compare_func); +GstVulkanWindow * gst_vulkan_display_find_window (GstVulkanDisplay * display, gpointer data, GCompareFunc compare_func) G_GNUC_WARN_UNUSED_RESULT; G_END_DECLS
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkencoder-private.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkencoder-private.c
Changed
@@ -24,11 +24,9 @@ #include "gstvkencoder-private.h" +#include "gstvkphysicaldevice-private.h" #include "gstvkvideo-private.h" -extern const VkExtensionProperties vk_codec_extensions[3]; -extern const uint32_t _vk_codec_supported_extensions[4]; - typedef struct _GstVulkanEncoderPrivate GstVulkanEncoderPrivate; struct _GstVulkanEncoderPrivate { @@ -82,6 +80,7 @@ const uint32_t _vk_codec_supported_extensions[] = { [GST_VK_VIDEO_EXTENSION_ENCODE_H264] = VK_MAKE_VIDEO_STD_VERSION (0, 9, 11), [GST_VK_VIDEO_EXTENSION_ENCODE_H265] = VK_MAKE_VIDEO_STD_VERSION (0, 9, 12), + [GST_VK_VIDEO_EXTENSION_ENCODE_AV1] = VK_MAKE_VIDEO_STD_VERSION (0, 9, 1), }; static gboolean @@ -89,19 +88,12 @@ { GstVulkanEncoderPrivate *priv = gst_vulkan_encoder_get_instance_private (self); - GstVulkanInstance *instance; if (priv->vk_loaded) return TRUE; - instance = gst_vulkan_device_get_instance (self->queue->device); - if (!instance) { - GST_ERROR_OBJECT (self, "Failed to get instance from the device"); - return FALSE; - } - - priv->vk_loaded = gst_vulkan_video_get_vk_functions (instance, &priv->vk); - gst_object_unref (instance); + priv->vk_loaded = + gst_vulkan_video_get_vk_functions (self->queue->device, &priv->vk); return priv->vk_loaded; } @@ -141,82 +133,6 @@ gobject_class->finalize = gst_vulkan_encoder_finalize; } -static VkFormat -gst_vulkan_video_encoder_get_format (GstVulkanEncoder * self, - VkImageUsageFlagBits imageUsage, GError ** error) -{ - VkResult res; - VkVideoFormatPropertiesKHR *fmts = NULL; - guint i, n_fmts; - VkPhysicalDevice gpu = - gst_vulkan_device_get_physical_device (self->queue->device); - GstVulkanEncoderPrivate *priv = - gst_vulkan_encoder_get_instance_private (self); - GstVideoFormat format = GST_VIDEO_FORMAT_UNKNOWN; - VkFormat vk_format = VK_FORMAT_UNDEFINED; - VkVideoProfileListInfoKHR profile_list = { - .sType = VK_STRUCTURE_TYPE_VIDEO_PROFILE_LIST_INFO_KHR, - .profileCount = 1, - .pProfiles = &priv->profile.profile, - }; - VkPhysicalDeviceVideoFormatInfoKHR fmt_info = { - 
.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VIDEO_FORMAT_INFO_KHR, - .pNext = &profile_list, - .imageUsage = imageUsage, - }; - - res = priv->vk.GetPhysicalDeviceVideoFormatProperties (gpu, &fmt_info, - &n_fmts, NULL); - if (gst_vulkan_error_to_g_error (res, error, - "vkGetPhysicalDeviceVideoFormatPropertiesKHR") != VK_SUCCESS) - goto beach; - - if (n_fmts == 0) { - g_set_error (error, GST_VULKAN_ERROR, VK_ERROR_INITIALIZATION_FAILED, - "Profile doesn't have an output format"); - return vk_format; - } - - fmts = g_new0 (VkVideoFormatPropertiesKHR, n_fmts); - for (i = 0; i < n_fmts; i++) - fmts[i].sType = VK_STRUCTURE_TYPE_VIDEO_FORMAT_PROPERTIES_KHR; - - res = priv->vk.GetPhysicalDeviceVideoFormatProperties (gpu, &fmt_info, - &n_fmts, fmts); - if (gst_vulkan_error_to_g_error (res, error, - "vkGetPhysicalDeviceVideoFormatPropertiesKHR") != VK_SUCCESS) { - goto beach; - } - - if (n_fmts == 0) { - g_set_error (error, GST_VULKAN_ERROR, VK_ERROR_INITIALIZATION_FAILED, - "Profile doesn't have an output format"); - goto beach; - } - - /* find the best output format */ - for (i = 0; i < n_fmts; i++) { - format = gst_vulkan_format_to_video_format (fmts[i].format); - if (format == GST_VIDEO_FORMAT_UNKNOWN) { - GST_WARNING_OBJECT (self, "Unknown Vulkan format %i", fmts[i].format); - continue; - } else { - vk_format = fmts[i].format; - priv->format = fmts[i]; - break; - } - } - - if (vk_format == VK_FORMAT_UNDEFINED) { - g_set_error (error, GST_VULKAN_ERROR, VK_ERROR_INITIALIZATION_FAILED, - "No valid output format found"); - } - -beach: - g_clear_pointer (&fmts, g_free); - return vk_format; -} - static void gst_vulkan_handle_free_video_session_parameters (GstVulkanHandle * handle, gpointer data) @@ -473,6 +389,30 @@ } /** + * gst_vulkan_encoder_rc_mode: + * @self: a #GstVulkanEncoder + * + * Get the current rate control mode. 
+ * Returns: the rate control mode if the encoder has started; otherwise -1 + */ +gint32 +gst_vulkan_encoder_rc_mode (GstVulkanEncoder * self) +{ + GstVulkanEncoderPrivate *priv; + + g_return_val_if_fail (GST_IS_VULKAN_ENCODER (self), -1); + + priv = gst_vulkan_encoder_get_instance_private (self); + + if (!priv->started) + return -1; + + return priv->rc_mode; +} + +/** * gst_vulkan_encoder_stop: * @self: a #GstVulkanEncoder * @@ -537,20 +477,20 @@ static void _rate_control_mode_validate (GstVulkanEncoder * self, - VkVideoEncodeRateControlModeFlagBitsKHR rc_mode) + VkVideoEncodeRateControlModeFlagBitsKHR * rc_mode) { GstVulkanEncoderPrivate *priv = gst_vulkan_encoder_get_instance_private (self); if (rc_mode > VK_VIDEO_ENCODE_RATE_CONTROL_MODE_DEFAULT_KHR - && !(priv->caps.encoder.caps.rateControlModes & rc_mode)) { - rc_mode = VK_VIDEO_ENCODE_RATE_CONTROL_MODE_DEFAULT_KHR; + && !(priv->caps.encoder.caps.rateControlModes & *rc_mode)) { + *rc_mode = VK_VIDEO_ENCODE_RATE_CONTROL_MODE_DEFAULT_KHR; for (int i = VK_VIDEO_ENCODE_RATE_CONTROL_MODE_DISABLED_BIT_KHR; i <= VK_VIDEO_ENCODE_RATE_CONTROL_MODE_VBR_BIT_KHR; i++) { if ((priv->caps.encoder.caps.rateControlModes) & i) { GST_DEBUG_OBJECT (self, "rate control mode is forced to: %s", _rate_control_mode_to_str (i)); - rc_mode = i; + *rc_mode = i; break; } } @@ -578,12 +518,15 @@ VkResult res; VkVideoSessionCreateInfoKHR session_create; VkPhysicalDevice gpu; - VkFormat pic_format = VK_FORMAT_UNDEFINED; - int codec_idx; + VkFormat vk_format = VK_FORMAT_UNDEFINED; + guint i, codec_idx; GstVulkanCommandPool *cmd_pool; + GstVulkanPhysicalDevice *phy_dev; VkQueryPoolVideoEncodeFeedbackCreateInfoKHR query_create; VkPhysicalDeviceVideoEncodeQualityLevelInfoKHR quality_info; VkVideoEncodeQualityLevelPropertiesKHR quality_props; + GArray *fmts; + GstVideoFormat format; GError *query_err = NULL; g_return_val_if_fail (GST_IS_VULKAN_ENCODER (self), FALSE); @@ -603,65 +546,31 @@ switch 
(self->codec) { case VK_VIDEO_CODEC_OPERATION_ENCODE_H264_BIT_KHR: - if (!gst_vulkan_video_profile_is_valid (profile, self->codec)) { - g_set_error (error, GST_VULKAN_ERROR, VK_ERROR_INITIALIZATION_FAILED, - "Invalid profile"); - return FALSE; - } - priv->caps.encoder.codec.h264 = (VkVideoEncodeH264CapabilitiesKHR) { - /* *INDENT-OFF* */ - .sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_H264_CAPABILITIES_KHR, - /* *INDENT-ON* */ - }; codec_idx = GST_VK_VIDEO_EXTENSION_ENCODE_H264; break; case VK_VIDEO_CODEC_OPERATION_ENCODE_H265_BIT_KHR: - if (!gst_vulkan_video_profile_is_valid (profile, self->codec)) { - g_set_error (error, GST_VULKAN_ERROR, VK_ERROR_INITIALIZATION_FAILED, - "Invalid profile"); - return FALSE; - } - priv->caps.encoder.codec.h265 = (VkVideoEncodeH265CapabilitiesKHR) { - /* *INDENT-OFF* */ - .sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_H265_CAPABILITIES_KHR, - /* *INDENT-ON* */ - }; codec_idx = GST_VK_VIDEO_EXTENSION_ENCODE_H265; break; + case VK_VIDEO_CODEC_OPERATION_ENCODE_AV1_BIT_KHR: + codec_idx = GST_VK_VIDEO_EXTENSION_ENCODE_AV1; + break; default: g_set_error (error, GST_VULKAN_ERROR, VK_ERROR_INITIALIZATION_FAILED, "Invalid codec"); return FALSE; } - priv->profile = *profile; - - /* ensure the chain up of structure */ - priv->profile.usage.encode.pNext = &priv->profile.codec; - priv->profile.profile.pNext = &priv->profile.usage.encode; - - /* *INDENT-OFF* */ - priv->caps.encoder.caps = (VkVideoEncodeCapabilitiesKHR) { - .sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_CAPABILITIES_KHR, - .pNext = &priv->caps.encoder.codec, - }; - priv->caps.caps = (VkVideoCapabilitiesKHR) { - .sType = VK_STRUCTURE_TYPE_VIDEO_CAPABILITIES_KHR, - .pNext = &priv->caps.encoder.caps, - }; - /* *INDENT-ON* */ - - gpu = gst_vulkan_device_get_physical_device (self->queue->device); - res = priv->vk.GetPhysicalDeviceVideoCapabilities (gpu, - &priv->profile.profile, &priv->caps.caps); - if (gst_vulkan_error_to_g_error (res, error, - "vkGetPhysicalDeviceVideoCapabilitiesKHR") != VK_SUCCESS) + 
if (!gst_vulkan_video_profile_is_valid (profile, self->codec)) { + g_set_error (error, GST_VULKAN_ERROR, VK_ERROR_INITIALIZATION_FAILED, + "Invalid profile"); + return FALSE; + } if (_vk_codec_extensions[codec_idx].specVersion < _vk_codec_supported_extensions[codec_idx]) { g_set_error (error, GST_VULKAN_ERROR, VK_ERROR_INITIALIZATION_FAILED, - "STD version headers %i.%i.%i not supported, need at least %i.%i.%i, check your SDK path.", + "STD version headers %i.%i.%i not supported, need at least %i.%i.%i," + " check your SDK path.", VK_CODEC_VERSION (_vk_codec_extensions[codec_idx].specVersion), VK_CODEC_VERSION (_vk_codec_supported_extensions[codec_idx])); return FALSE; @@ -670,22 +579,51 @@ if (_vk_codec_extensions[codec_idx].specVersion < priv->caps.caps.stdHeaderVersion.specVersion) { g_set_error (error, GST_VULKAN_ERROR, VK_ERROR_INITIALIZATION_FAILED, - "The driver needs a newer version %i.%i.%i of the current headers %d.%d.%d, please update the code to support this driver.", + "The driver needs a newer version %i.%i.%i of the current headers" + "%d.%d.%d, please update the code to support this driver.", VK_CODEC_VERSION (priv->caps.caps.stdHeaderVersion.specVersion), VK_CODEC_VERSION (_vk_codec_extensions[codec_idx].specVersion)); return FALSE; } - /* Get output format */ - pic_format = gst_vulkan_video_encoder_get_format (self, - VK_IMAGE_USAGE_VIDEO_ENCODE_SRC_BIT_KHR | - VK_IMAGE_USAGE_VIDEO_ENCODE_DPB_BIT_KHR, error); - if (pic_format == VK_FORMAT_UNDEFINED) + priv->profile = *profile; + + /* ensure the chain up of structure */ + priv->profile.usage.encode.pNext = &priv->profile.codec; + priv->profile.profile.pNext = &priv->profile.usage.encode; + + phy_dev = self->queue->device->physical_device; + if (!gst_vulkan_video_try_configuration (phy_dev, &priv->profile, &priv->caps, + &priv->profile_caps, &fmts, error)) return FALSE; + /* Get output format */ + for (i = 0; i < fmts->len; i++) { + VkVideoFormatPropertiesKHR *fmt = + &g_array_index (fmts, 
VkVideoFormatPropertiesKHR, i); + + format = gst_vulkan_format_to_video_format (fmt->format); + if (format == GST_VIDEO_FORMAT_UNKNOWN) { + GST_WARNING_OBJECT (self, "Unknown Vulkan format %i", fmt->format); + continue; + } else { + vk_format = fmt->format; + priv->format = *fmt; + priv->format.pNext = NULL; + break; + } + } + g_array_unref (fmts); + + if (vk_format == VK_FORMAT_UNDEFINED) { + g_set_error (error, GST_VULKAN_ERROR, VK_ERROR_INITIALIZATION_FAILED, + "No valid input format found"); + goto failed; + } + cmd_pool = gst_vulkan_queue_create_command_pool (self->queue, error); if (!cmd_pool) - return FALSE; + goto failed; priv->exec = gst_vulkan_operation_new (cmd_pool); gst_object_unref (cmd_pool); @@ -709,8 +647,6 @@ g_clear_error (&query_err); } - priv->profile_caps = gst_vulkan_video_profile_to_caps (&priv->profile); - GST_LOG_OBJECT (self, "Encoder capabilities for %" GST_PTR_FORMAT ":\n" " Codec header version: %i.%i.%i (driver), %i.%i.%i (compiled)\n" " Width from %i to %i\n" @@ -789,6 +725,7 @@ }; /* *INDENT-ON* */ + gpu = gst_vulkan_device_get_physical_device (self->queue->device); res = priv->vk.GetPhysicalDeviceVideoEncodeQualityLevelProperties (gpu, &quality_info, &quality_props); if (gst_vulkan_error_to_g_error (res, error, @@ -801,9 +738,9 @@ .sType = VK_STRUCTURE_TYPE_VIDEO_SESSION_CREATE_INFO_KHR, .queueFamilyIndex = self->queue->family, .pVideoProfile = &profile->profile, - .pictureFormat = pic_format, + .pictureFormat = vk_format, .maxCodedExtent = priv->caps.caps.maxCodedExtent, - .referencePictureFormat = pic_format, + .referencePictureFormat = vk_format, .maxDpbSlots = priv->caps.caps.maxDpbSlots, .maxActiveReferencePictures = priv->caps.caps.maxActiveReferencePictures, .pStdHeaderVersion = &_vk_codec_extensions[codec_idx], @@ -815,7 +752,7 @@ goto failed; /* check rate control mode if it was set before start */ - _rate_control_mode_validate (self, priv->rc_mode); + _rate_control_mode_validate (self, &priv->rc_mode); priv->session_reset = 
TRUE; priv->started = TRUE; @@ -898,7 +835,6 @@ gboolean write; g_return_val_if_fail (GST_IS_VULKAN_ENCODER (self), FALSE); - g_return_val_if_fail (params != NULL && feedback != NULL, FALSE); priv = gst_vulkan_encoder_get_instance_private (self); if (!priv->started) @@ -906,6 +842,7 @@ switch (self->codec) { case VK_VIDEO_CODEC_OPERATION_ENCODE_H264_BIT_KHR: + g_return_val_if_fail (params != NULL && feedback != NULL, FALSE); if (params->h264.sType != VK_STRUCTURE_TYPE_VIDEO_ENCODE_H264_SESSION_PARAMETERS_GET_INFO_KHR) { gst_vulkan_error_to_g_error (GST_VULKAN_ERROR, error, @@ -919,6 +856,7 @@ } break; case VK_VIDEO_CODEC_OPERATION_ENCODE_H265_BIT_KHR: + g_return_val_if_fail (params != NULL && feedback != NULL, FALSE); if (params->h265.sType != VK_STRUCTURE_TYPE_VIDEO_ENCODE_H265_SESSION_PARAMETERS_GET_INFO_KHR) { gst_vulkan_error_to_g_error (GST_VULKAN_ERROR, error, @@ -932,6 +870,10 @@ VK_STRUCTURE_TYPE_VIDEO_ENCODE_H265_SESSION_PARAMETERS_FEEDBACK_INFO_KHR; } break; + case VK_VIDEO_CODEC_OPERATION_ENCODE_AV1_BIT_KHR: + g_return_val_if_fail (params == NULL && feedback == NULL, FALSE); + write = TRUE; + break; default: return FALSE; } @@ -967,8 +909,10 @@ res = priv->vk.GetEncodedVideoSessionParameters (self->queue->device->device, &video_params_info, &feedback_info, &size, param_data); if (gst_vulkan_error_to_g_error (res, error, - "vGetEncodedVideoSessionParametersKHR") != VK_SUCCESS) + "vGetEncodedVideoSessionParametersKHR") != VK_SUCCESS) { + g_free (param_data); return FALSE; + } if (data_size) *data_size = size; @@ -1182,7 +1126,7 @@ .width = GST_VIDEO_INFO_WIDTH (info), .height = GST_VIDEO_INFO_HEIGHT (info), }, - .baseArrayLayer = 0, + .baseArrayLayer = priv->layered_dpb ? 
slot_index : 0, .imageViewBinding = pic->dpb_view->view, }; pic->dpb_slot = (VkVideoReferenceSlotInfoKHR) { @@ -1309,13 +1253,18 @@ priv->vk.CmdEndVideoCoding (cmd_buf->cmd, &end_coding); if (!gst_vulkan_operation_end (priv->exec, &err)) { - GST_ERROR_OBJECT (self, "The operation did not complete properly"); + GST_ERROR_OBJECT (self, "The operation did not complete properly: %s", + err->message); goto bail; } /* Wait the operation to complete or we might have a failing query */ gst_vulkan_operation_wait (priv->exec); - gst_vulkan_operation_get_query (priv->exec, (gpointer *) & encode_res, &err); + if (!gst_vulkan_operation_get_query (priv->exec, (gpointer *) & encode_res, + &err)) { + GST_ERROR_OBJECT (self, "Failed to query the operation: %s", err->message); + goto bail; + } if (encode_res->status == VK_QUERY_RESULT_STATUS_COMPLETE_KHR) { GST_INFO_OBJECT (self, "The frame %p has been encoded with size %" G_GUINT64_FORMAT, pic, encode_res->data_size + pic->offset); @@ -1330,10 +1279,25 @@ return ret; bail: { + if (err) + g_error_free (err); return FALSE; } } +static const struct +{ + VkVideoCodecOperationFlagsKHR codec; + const char *extension; +} _vk_encoder_extension_map[] = { + {VK_VIDEO_CODEC_OPERATION_ENCODE_H264_BIT_KHR, + VK_KHR_VIDEO_ENCODE_H264_EXTENSION_NAME}, + {VK_VIDEO_CODEC_OPERATION_ENCODE_H265_BIT_KHR, + VK_KHR_VIDEO_ENCODE_H265_EXTENSION_NAME}, + {VK_VIDEO_CODEC_OPERATION_ENCODE_AV1_BIT_KHR, + VK_KHR_VIDEO_ENCODE_AV1_EXTENSION_NAME}, +}; + /** * gst_vulkan_create_encoder_from_queue: * @queue: a #GstVulkanQueue * @@ -1342,15 +1306,14 @@ * Creates a #GstVulkanEncoder object if @codec encoding is supported by @queue * * Returns: (transfer full) (nullable): the #GstVulkanEncoder object - * */ GstVulkanEncoder * gst_vulkan_encoder_create_from_queue (GstVulkanQueue * queue, guint codec) { GstVulkanPhysicalDevice *device; GstVulkanEncoder *encoder; - guint flags, expected_flag, supported_video_ops; - const char *extension; + guint i, flags, expected_flag, 
supported_video_ops; + const char *extension = NULL; static gsize cat_gonce = 0; g_return_val_if_fail (GST_IS_VULKAN_QUEUE (queue), NULL); @@ -1366,27 +1329,26 @@ g_once_init_leave (&cat_gonce, TRUE); } - if (device->properties.apiVersion < VK_MAKE_VERSION (1, 3, 275)) { + /* XXX: sync with the meson version for vulkan video enabling */ + if (!gst_vulkan_physical_device_check_api_version (device, 1, 4, 306)) { GST_WARNING_OBJECT (queue, - "API version %d.%d.%d doesn't support video encode extensions", + "Driver version %d.%d.%d doesn't support required video extensions", VK_VERSION_MAJOR (device->properties.apiVersion), VK_VERSION_MINOR (device->properties.apiVersion), VK_VERSION_PATCH (device->properties.apiVersion)); return NULL; } - switch (codec) { - case VK_VIDEO_CODEC_OPERATION_ENCODE_H264_BIT_KHR: - extension = VK_KHR_VIDEO_ENCODE_H264_EXTENSION_NAME; - break; - case VK_VIDEO_CODEC_OPERATION_ENCODE_H265_BIT_KHR: - extension = VK_KHR_VIDEO_ENCODE_H265_EXTENSION_NAME; + for (i = 0; i < G_N_ELEMENTS (_vk_encoder_extension_map); i++) { + if (_vk_encoder_extension_map[i].codec == codec) { + extension = _vk_encoder_extension_map[i].extension; break; - default: - GST_WARNING_OBJECT (queue, "Unsupported codec"); - return NULL; + } + } + if (!extension) { + GST_WARNING_OBJECT (queue, "Unsupported codec %u", codec); + return NULL; } - if ((flags & expected_flag) != expected_flag) { GST_WARNING_OBJECT (queue, "Queue doesn't support encoding"); return NULL; @@ -1396,11 +1358,7 @@ return NULL; } - if (!(gst_vulkan_device_is_extension_enabled (queue->device, - VK_KHR_VIDEO_QUEUE_EXTENSION_NAME) - && gst_vulkan_device_is_extension_enabled (queue->device, - VK_KHR_VIDEO_ENCODE_QUEUE_EXTENSION_NAME) - && gst_vulkan_device_is_extension_enabled (queue->device, extension))) + if (!gst_vulkan_device_is_extension_enabled (queue->device, extension)) return NULL; encoder = g_object_new (GST_TYPE_VULKAN_ENCODER, NULL); @@ -1442,8 +1400,11 @@ if (priv->rc_mode == rc_mode) return; - if 
(priv->started) - _rate_control_mode_validate (self, rc_mode); + if (priv->started) { + _rate_control_mode_validate (self, &rc_mode); + if (priv->rc_mode == rc_mode) + return; + } priv->session_reset = TRUE; priv->rc_mode = rc_mode;
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkencoder-private.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkencoder-private.h
Changed
@@ -21,6 +21,7 @@ #pragma once #include <gst/vulkan/vulkan.h> +#include "gstvkvideoutils-private.h" #define GST_TYPE_VULKAN_ENCODER (gst_vulkan_encoder_get_type()) #define GST_VULKAN_ENCODER(o) (G_TYPE_CHECK_INSTANCE_CAST((o), GST_TYPE_VULKAN_ENCODER, GstVulkanEncoder)) @@ -131,6 +132,7 @@ /*< private >*/ VkVideoEncodeH264SessionParametersCreateInfoKHR h264; VkVideoEncodeH265SessionParametersCreateInfoKHR h265; + VkVideoEncodeAV1SessionParametersCreateInfoKHR av1; }; union _GstVulkanEncoderParametersOverrides @@ -153,6 +155,7 @@ { VkVideoEncodeH264QualityLevelPropertiesKHR h264; VkVideoEncodeH265QualityLevelPropertiesKHR h265; + VkVideoEncodeAV1QualityLevelPropertiesKHR av1; } codec; }; @@ -209,6 +212,8 @@ GstCaps * gst_vulkan_encoder_profile_caps (GstVulkanEncoder * self); GST_VULKAN_API gint32 gst_vulkan_encoder_quality_level (GstVulkanEncoder * self); +GST_VULKAN_API +gint32 gst_vulkan_encoder_rc_mode (GstVulkanEncoder * self); GST_VULKAN_API gboolean gst_vulkan_encoder_picture_init (GstVulkanEncoderPicture * pic,
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkerror.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkerror.c
Changed
@@ -54,6 +54,15 @@ {VK_ERROR_OUT_OF_DATE_KHR, "Out of date"}, {VK_ERROR_INCOMPATIBLE_DISPLAY_KHR, "Incompatible display"}, {VK_ERROR_NATIVE_WINDOW_IN_USE_KHR, "Native window in use"}, +#if defined(VK_KHR_video_queue) + {VK_ERROR_VIDEO_PICTURE_LAYOUT_NOT_SUPPORTED_KHR, "Video picture layout not supported"}, + {VK_ERROR_VIDEO_PROFILE_CODEC_NOT_SUPPORTED_KHR, "Video codec not supported"}, + {VK_ERROR_VIDEO_PROFILE_FORMAT_NOT_SUPPORTED_KHR, "Video profile format not supported"}, + {VK_ERROR_VIDEO_PROFILE_OPERATION_NOT_SUPPORTED_KHR, "Video profile operation not supported"}, +#endif +#if defined(VK_KHR_video_encode_queue) + {VK_ERROR_INVALID_VIDEO_STD_PARAMETERS_KHR, "Invalid video std parameters"}, +#endif }; /* *INDENT-ON* */
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkfence.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkfence.h
Changed
@@ -71,12 +71,12 @@ GST_VULKAN_API GstVulkanFence * gst_vulkan_fence_new (GstVulkanDevice * device, - GError ** error); + GError ** error) G_GNUC_WARN_UNUSED_RESULT; GST_VULKAN_API void gst_vulkan_fence_reset (GstVulkanFence * fence); GST_VULKAN_API -GstVulkanFence * gst_vulkan_fence_new_always_signalled (GstVulkanDevice *device); +GstVulkanFence * gst_vulkan_fence_new_always_signalled (GstVulkanDevice *device) G_GNUC_WARN_UNUSED_RESULT; GST_VULKAN_API gboolean gst_vulkan_fence_is_signaled (GstVulkanFence * fence); @@ -134,7 +134,7 @@ G_DEFINE_AUTOPTR_CLEANUP_FUNC(GstVulkanFenceCache, gst_object_unref); -GstVulkanFenceCache * gst_vulkan_fence_cache_new (GstVulkanDevice * device); +GstVulkanFenceCache * gst_vulkan_fence_cache_new (GstVulkanDevice * device) G_GNUC_WARN_UNUSED_RESULT; /** * gst_vulkan_fence_cache_acquire:
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkformat.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkformat.c
Changed
@@ -23,6 +23,7 @@ #endif #include "gstvkformat.h" +#include "gstvkphysicaldevice-private.h" /** * SECTION:vkformat @@ -433,6 +434,14 @@ return 0; /* VK_IMAGE_ASPECT_NONE */ } +/* The in-memory ordering of bytes within a component is determined by the host + * endianness. */ +#if G_BYTE_ORDER == G_LITTLE_ENDIAN +#define GST_VIDEO_FORMAT_ENDIANNESS(fmt) G_PASTE(G_PASTE(GST_VIDEO_FORMAT_, fmt), LE) +#else +#define GST_VIDEO_FORMAT_ENDIANNESS(fmt) G_PASTE(G_PASTE(GST_VIDEO_FORMAT_, fmt), BE) +#endif + /* *INDENT-OFF* */ const static GstVulkanFormatMap vk_formats_map[] = { /* RGB unsigned normalized format sRGB nonlinear encoding */ @@ -449,20 +458,72 @@ { GST_VIDEO_FORMAT_RGB16, VK_FORMAT_R5G6B5_UNORM_PACK16, { VK_FORMAT_UNDEFINED, } }, { GST_VIDEO_FORMAT_BGR16, VK_FORMAT_B5G6R5_UNORM_PACK16, { VK_FORMAT_UNDEFINED, } }, /* Gray */ - { GST_VIDEO_FORMAT_GRAY16_BE, VK_FORMAT_R8G8_UNORM, { VK_FORMAT_UNDEFINED, } }, - { GST_VIDEO_FORMAT_GRAY16_LE, VK_FORMAT_R8G8_UNORM, { VK_FORMAT_UNDEFINED, } }, - { GST_VIDEO_FORMAT_GRAY8, VK_FORMAT_R8_UNORM, { VK_FORMAT_UNDEFINED, } }, + { GST_VIDEO_FORMAT_GRAY8, VK_FORMAT_R8_UNORM, { VK_FORMAT_R8_UNORM, } }, + { GST_VIDEO_FORMAT_ENDIANNESS (GRAY16_), VK_FORMAT_R16_UNORM, { VK_FORMAT_R16_UNORM, } }, + /* From Vulkan 1.4.319 spec chapter 51.1.1 "Compatible Formats of Planes of + * Multi-Planar Formats" Table 72 */ /* YUV planes */ - { GST_VIDEO_FORMAT_AYUV, VK_FORMAT_UNDEFINED, { VK_FORMAT_R8G8B8A8_UNORM, } }, - { GST_VIDEO_FORMAT_YUY2, VK_FORMAT_UNDEFINED, { VK_FORMAT_R8G8_UNORM, } }, - { GST_VIDEO_FORMAT_UYVY, VK_FORMAT_UNDEFINED, { VK_FORMAT_R8G8_UNORM, } }, - { GST_VIDEO_FORMAT_NV12, VK_FORMAT_G8_B8R8_2PLANE_420_UNORM, { VK_FORMAT_R8_UNORM, VK_FORMAT_R8G8_UNORM } }, - { GST_VIDEO_FORMAT_NV21, VK_FORMAT_UNDEFINED, { VK_FORMAT_R8_UNORM, VK_FORMAT_R8G8_UNORM } }, - { GST_VIDEO_FORMAT_Y444, VK_FORMAT_UNDEFINED, { VK_FORMAT_R8_UNORM, } }, - { GST_VIDEO_FORMAT_Y42B, VK_FORMAT_UNDEFINED, { VK_FORMAT_R8_UNORM, } }, - { GST_VIDEO_FORMAT_Y41B, 
VK_FORMAT_UNDEFINED, { VK_FORMAT_R8_UNORM, } }, - { GST_VIDEO_FORMAT_I420, VK_FORMAT_UNDEFINED, { VK_FORMAT_R8_UNORM, } }, - { GST_VIDEO_FORMAT_YV12, VK_FORMAT_UNDEFINED, { VK_FORMAT_R8_UNORM, } }, + { GST_VIDEO_FORMAT_AYUV, VK_FORMAT_UNDEFINED, { VK_FORMAT_R8G8B8A8_UNORM, } }, + /* 1-plane 410 */ + { GST_VIDEO_FORMAT_Y41B, VK_FORMAT_UNDEFINED, { VK_FORMAT_R8_UNORM, } }, + /* 1-plane 422 */ + { GST_VIDEO_FORMAT_YUY2, VK_FORMAT_G8B8G8R8_422_UNORM, { VK_FORMAT_R8G8B8A8_UNORM } }, + { GST_VIDEO_FORMAT_UYVY, VK_FORMAT_B8G8R8G8_422_UNORM, { VK_FORMAT_R8G8B8A8_UNORM } }, + { GST_VIDEO_FORMAT_Y210, VK_FORMAT_G10X6B10X6G10X6R10X6_422_UNORM_4PACK16, { VK_FORMAT_R16G16B16A16_UNORM } }, + { GST_VIDEO_FORMAT_ENDIANNESS (Y212_), VK_FORMAT_G12X4B12X4G12X4R12X4_422_UNORM_4PACK16, { VK_FORMAT_R16G16B16A16_UNORM } }, + /* { XXX, VK_FORMAT_G16B16G16R16_422_UNORM, { VK_FORMAT_R16G16B16A16_UNORM } }, */ + /* 1-plane 444 */ + /* { XXX, VK_FORMAT_B8G8R8A8_UNORM, { VK_FORMAT_B8G8R8A8_UNORM } }, */ + { GST_VIDEO_FORMAT_Y410, VK_FORMAT_A2R10G10B10_UNORM_PACK32, { VK_FORMAT_R16G16B16A16_UNORM } }, + { GST_VIDEO_FORMAT_ENDIANNESS (Y412_), VK_FORMAT_R12X4G12X4B12X4A12X4_UNORM_4PACK16, { VK_FORMAT_R16G16B16A16_UNORM } }, + { GST_VIDEO_FORMAT_ENDIANNESS (Y416_), VK_FORMAT_R16G16B16A16_UNORM, { VK_FORMAT_R16G16B16A16_UNORM } }, + /* 2-planes 420 */ + { GST_VIDEO_FORMAT_NV21, VK_FORMAT_UNDEFINED, { VK_FORMAT_R8_UNORM, VK_FORMAT_R8G8_UNORM } }, + { GST_VIDEO_FORMAT_NV12, VK_FORMAT_G8_B8R8_2PLANE_420_UNORM, { VK_FORMAT_R8_UNORM, VK_FORMAT_R8G8_UNORM } }, + { GST_VIDEO_FORMAT_ENDIANNESS (P010_10), VK_FORMAT_G10X6_B10X6R10X6_2PLANE_420_UNORM_3PACK16, { VK_FORMAT_R10X6_UNORM_PACK16, VK_FORMAT_R10X6G10X6_UNORM_2PACK16 } }, + { GST_VIDEO_FORMAT_ENDIANNESS (P012_), VK_FORMAT_G12X4_B12X4R12X4_2PLANE_420_UNORM_3PACK16, { VK_FORMAT_R12X4_UNORM_PACK16, VK_FORMAT_R12X4G12X4_UNORM_2PACK16 } }, + { GST_VIDEO_FORMAT_ENDIANNESS (P016_), VK_FORMAT_G16_B16R16_2PLANE_420_UNORM, { VK_FORMAT_R16_UNORM, 
VK_FORMAT_R16G16_UNORM } }, + /* 2-planes 422 */ + { GST_VIDEO_FORMAT_NV16, VK_FORMAT_G8_B8R8_2PLANE_422_UNORM, { VK_FORMAT_R8_UNORM, VK_FORMAT_R8G8_UNORM } }, + /* { GST_VIDEO_FORMAT_ENDIANNESS (P021_10), VK_FORMAT_G10X6_B10X6R10X6_2PLANE_422_UNORM_3PACK16, { VK_FORMAT_R16_UNORM, VK_FORMAT_R16G16_UNORM } }, */ + /* { GST_VIDEO_FORMAT_ENDIANNESS (P021_12), VK_FORMAT_G12X4_B12X4R12X4_2PLANE_422_UNORM_3PACK16, { VK_FORMAT_R16_UNORM, VK_FORMAT_R16G16_UNORM } }, */ + /* { GST_VIDEO_FORMAT_ENDIANNESS (P021_16), VK_FORMAT_G16_B16R16_2PLANE_422_UNORM, { VK_FORMAT_R16_UNORM, VK_FORMAT_R16G16_UNORM } }, */ + /* 2-planes 444 */ + { GST_VIDEO_FORMAT_NV24, VK_FORMAT_G8_B8R8_2PLANE_444_UNORM, { VK_FORMAT_R8_UNORM, VK_FORMAT_R8G8_UNORM } }, + /* { XXX, VK_FORMAT_G16_B16R16_2PLANE_444_UNORM, { VK_FORMAT_R16_UNORM, VK_FORMAT_R16G16_UNORM } } */ + /* { XXX, VK_FORMAT_G10X6_B10X6R10X6_2PLANE_444_UNORM_3PACK16, { VK_FORMAT_R16_UNORM, VK_FORMAT_R16G16_UNORM } } */ + /* { XXX, VK_FORMAT_G12X4_B12X4R12X4_2PLANE_444_UNORM_3PACK16, { VK_FORMAT_R16_UNORM, VK_FORMAT_R16G16_UNORM } } */ + /* 3-planes 420 */ + /* { GST_VIDEO_FORMAT_YV12, VK_FORMAT_UNDEFINED, { VK_FORMAT_R8_UNORM, } }, UV inverted I420 */ + { GST_VIDEO_FORMAT_I420, VK_FORMAT_G8_B8_R8_3PLANE_420_UNORM, { VK_FORMAT_R8_UNORM, VK_FORMAT_R8_UNORM, VK_FORMAT_R8_UNORM } }, + { GST_VIDEO_FORMAT_ENDIANNESS (I420_10), VK_FORMAT_G16_B16_R16_3PLANE_420_UNORM, { VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM } }, + { GST_VIDEO_FORMAT_ENDIANNESS (I420_12), VK_FORMAT_G16_B16_R16_3PLANE_420_UNORM, { VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM } }, + /* { GST_VIDEO_FORMAT_ENDIANNESS (I420_16), VK_FORMAT_G16_B16_R16_3PLANE_420_UNORM, { VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM } }, */ + /* 3-planes 422 */ + { GST_VIDEO_FORMAT_Y42B, VK_FORMAT_G8_B8_R8_3PLANE_422_UNORM, { VK_FORMAT_R8_UNORM, VK_FORMAT_R8_UNORM, VK_FORMAT_R8_UNORM } }, + { GST_VIDEO_FORMAT_ENDIANNESS (I422_10), 
VK_FORMAT_G16_B16_R16_3PLANE_422_UNORM, { VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM } }, + { GST_VIDEO_FORMAT_ENDIANNESS (I422_12), VK_FORMAT_G16_B16_R16_3PLANE_422_UNORM, { VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM } }, + /* { GST_VIDEO_FORMAT_ENDIANNESS (I422_16), VK_FORMAT_G16_B16_R16_3PLANE_422_UNORM, { VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM } }, */ + /* 3-planes 444 */ + { GST_VIDEO_FORMAT_Y444, VK_FORMAT_G8_B8_R8_3PLANE_444_UNORM, { VK_FORMAT_R8_UNORM, VK_FORMAT_R8_UNORM, VK_FORMAT_R8_UNORM } }, + { GST_VIDEO_FORMAT_ENDIANNESS (Y444_10), VK_FORMAT_G16_B16_R16_3PLANE_444_UNORM, { VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM } }, + { GST_VIDEO_FORMAT_ENDIANNESS (Y444_12), VK_FORMAT_G16_B16_R16_3PLANE_444_UNORM, { VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM } }, + /* { GST_VIDEO_FORMAT_ENDIANNESS (Y444_16), VK_FORMAT_G16_B16_R16_3PLANE_444_UNORM, { VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM } }, */ + /* YUVA 420 */ + { GST_VIDEO_FORMAT_A420, VK_FORMAT_R8_UNORM, { VK_FORMAT_R8_UNORM, VK_FORMAT_R8_UNORM, VK_FORMAT_R8_UNORM, VK_FORMAT_R8_UNORM } }, + { GST_VIDEO_FORMAT_ENDIANNESS (A420_10), VK_FORMAT_R16_UNORM, { VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM } }, + { GST_VIDEO_FORMAT_ENDIANNESS (A420_12), VK_FORMAT_R16_UNORM, { VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM } }, + { GST_VIDEO_FORMAT_ENDIANNESS (A420_16), VK_FORMAT_R16_UNORM, { VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM } }, + /* YUVA 422 */ + { GST_VIDEO_FORMAT_A422, VK_FORMAT_R8_UNORM, { VK_FORMAT_R8_UNORM, VK_FORMAT_R8_UNORM, VK_FORMAT_R8_UNORM, VK_FORMAT_R8_UNORM } }, + { GST_VIDEO_FORMAT_ENDIANNESS (A422_10), VK_FORMAT_R16_UNORM, { VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM } }, + { GST_VIDEO_FORMAT_ENDIANNESS (A422_12), 
VK_FORMAT_R16_UNORM, { VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM } }, + { GST_VIDEO_FORMAT_ENDIANNESS (A422_16), VK_FORMAT_R16_UNORM, { VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM } }, + /* YUVA 444 */ + { GST_VIDEO_FORMAT_A444, VK_FORMAT_R8_UNORM, { VK_FORMAT_R8_UNORM, VK_FORMAT_R8_UNORM, VK_FORMAT_R8_UNORM, VK_FORMAT_R8_UNORM } }, + { GST_VIDEO_FORMAT_ENDIANNESS (A444_10), VK_FORMAT_R16_UNORM, { VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM } }, + { GST_VIDEO_FORMAT_ENDIANNESS (A444_12), VK_FORMAT_R16_UNORM, { VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM } }, + { GST_VIDEO_FORMAT_ENDIANNESS (A444_16), VK_FORMAT_R16_UNORM, { VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM, VK_FORMAT_R16_UNORM } }, }; /* *INDENT-ON* */ @@ -542,11 +603,13 @@ {VK_FORMAT_FEATURE_STORAGE_IMAGE_BIT, VK_IMAGE_USAGE_STORAGE_BIT}, {VK_FORMAT_FEATURE_COLOR_ATTACHMENT_BIT, VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT | VK_IMAGE_USAGE_INPUT_ATTACHMENT_BIT}, -#if GST_VULKAN_HAVE_VIDEO_EXTENSIONS +#if defined(VK_KHR_video_decode_queue) {VK_FORMAT_FEATURE_2_VIDEO_DECODE_OUTPUT_BIT_KHR, VK_IMAGE_USAGE_VIDEO_DECODE_DST_BIT_KHR}, {VK_FORMAT_FEATURE_2_VIDEO_DECODE_DPB_BIT_KHR, VK_IMAGE_USAGE_VIDEO_DECODE_DPB_BIT_KHR}, +#endif +#if defined(VK_KHR_video_encode_queue) {VK_FORMAT_FEATURE_2_VIDEO_ENCODE_DPB_BIT_KHR, VK_IMAGE_USAGE_VIDEO_ENCODE_DPB_BIT_KHR}, {VK_FORMAT_FEATURE_2_VIDEO_ENCODE_INPUT_BIT_KHR, @@ -563,81 +626,6 @@ return usage; } -static gboolean -supports_KHR_get_physical_device_properties2 (GstVulkanDevice * device) -{ -#if defined (VK_KHR_get_physical_device_properties2) - return gst_vulkan_physical_device_check_api_version (device->physical_device, - 1, 1, 0) - || gst_vulkan_instance_is_extension_enabled (device->instance, - "VK_KHR_get_physical_device_properties2"); -#else - return FALSE; -#endif -} - -static gboolean 
-supports_KHR_format_feature_flags2 (GstVulkanDevice * device) -{ -#if defined (VK_KHR_format_feature_flags2) - if (gst_vulkan_physical_device_check_api_version (device->physical_device, 1, - 3, 0)) - return TRUE; - - if (supports_KHR_get_physical_device_properties2 (device) - && gst_vulkan_device_is_extension_enabled (device, - "VK_KHR_format_feature_flags2")) - return TRUE; -#endif - return FALSE; -} - -static guint64 -_get_feature_flags (GstVulkanDevice * device, gpointer func, - VkFormat format, VkImageTiling tiling) -{ - VkFormatProperties prop = { 0 }; - VkPhysicalDevice gpu = gst_vulkan_device_get_physical_device (device); -#if defined (VK_KHR_get_physical_device_properties2) -#if defined (VK_KHR_format_feature_flags2) - VkFormatProperties3KHR prop3 = { - .sType = VK_STRUCTURE_TYPE_FORMAT_PROPERTIES_3_KHR, - }; -#endif - VkFormatProperties2KHR prop2 = { - .sType = VK_STRUCTURE_TYPE_FORMAT_PROPERTIES_2_KHR, - .pNext = NULL, - }; - - if (func && supports_KHR_get_physical_device_properties2 (device)) { - PFN_vkGetPhysicalDeviceFormatProperties2KHR - gst_vkGetPhysicalDeviceFormatProperties2 = func; -#if defined (VK_KHR_format_feature_flags2) - prop2.pNext = &prop3; -#endif - - gst_vkGetPhysicalDeviceFormatProperties2 (gpu, format, &prop2); - if (supports_KHR_format_feature_flags2 (device)) { -#if defined (VK_KHR_format_feature_flags2) - return tiling == VK_IMAGE_TILING_LINEAR ? - prop3.linearTilingFeatures : prop3.optimalTilingFeatures; -#else - g_assert_not_reached (); -#endif - } else { - return tiling == VK_IMAGE_TILING_LINEAR ? - prop2.formatProperties.linearTilingFeatures : - prop2.formatProperties.optimalTilingFeatures; - } - } -#endif /* defined (VK_KHR_get_physical_device_properties2) */ - - /* fallback */ - vkGetPhysicalDeviceFormatProperties (gpu, format, &prop); - return tiling == VK_IMAGE_TILING_LINEAR ? 
- prop.linearTilingFeatures : prop.optimalTilingFeatures; -} - /** * gst_vulkan_format_from_video_info_2: (skip) * @device: a #GstVulkanDevice @@ -660,20 +648,7 @@ int *n_imgs, VkImageUsageFlags * usage_ret) { int i; -#if defined (VK_KHR_get_physical_device_properties2) - PFN_vkGetPhysicalDeviceFormatProperties2KHR - gst_vkGetPhysicalDeviceFormatProperties2 = NULL; - - gst_vkGetPhysicalDeviceFormatProperties2 = - gst_vulkan_instance_get_proc_address (device->instance, - "vkGetPhysicalDeviceFormatProperties2"); - if (!gst_vkGetPhysicalDeviceFormatProperties2) - gst_vkGetPhysicalDeviceFormatProperties2 = - gst_vulkan_instance_get_proc_address (device->instance, - "vkGetPhysicalDeviceFormatProperties2KHR"); -#else - gpointer gst_vkGetPhysicalDeviceFormatProperties2 = NULL; -#endif + GstVulkanFormatProperties props = { 0, }; for (i = 0; i < G_N_ELEMENTS (vk_formats_map); i++) { guint64 feats_primary, feats_secondary = 0; @@ -682,14 +657,16 @@ if (vk_formats_map[i].format != GST_VIDEO_INFO_FORMAT (info)) continue; - feats_primary = _get_feature_flags (device, - gst_vkGetPhysicalDeviceFormatProperties2, vk_formats_map[i].vkfrmt, - tiling); + gst_vulkan_physical_device_get_format_properties (device->physical_device, + vk_formats_map[i].vkfrmt, &props); + feats_primary = (tiling == VK_IMAGE_TILING_LINEAR) ? + props.linear_tiling_feat : props.optimal_tiling_feat; if (vk_formats_map[i].vkfrmt != vk_formats_map[i].vkfrmts[0]) { - feats_secondary = _get_feature_flags (device, - gst_vkGetPhysicalDeviceFormatProperties2, - vk_formats_map[i].vkfrmts[0], tiling); + gst_vulkan_physical_device_get_format_properties (device->physical_device, + vk_formats_map[i].vkfrmts[0], &props); + feats_secondary = (tiling == VK_IMAGE_TILING_LINEAR) ? + props.linear_tiling_feat : props.optimal_tiling_feat; } if (GST_VIDEO_INFO_IS_RGB (info)) {
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkfullscreenquad.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkfullscreenquad.h
Changed
@@ -102,7 +102,7 @@ G_DEFINE_AUTOPTR_CLEANUP_FUNC (GstVulkanFullScreenQuad, gst_object_unref) GST_VULKAN_API -GstVulkanFullScreenQuad * gst_vulkan_full_screen_quad_new (GstVulkanQueue * queue); +GstVulkanFullScreenQuad * gst_vulkan_full_screen_quad_new (GstVulkanQueue * queue) G_GNUC_WARN_UNUSED_RESULT; GST_VULKAN_API gboolean gst_vulkan_full_screen_quad_set_info (GstVulkanFullScreenQuad * self, const GstVideoInfo *in_info, const GstVideoInfo * out_info); @@ -146,10 +146,10 @@ gboolean gst_vulkan_full_screen_quad_draw (GstVulkanFullScreenQuad * self, GError ** error); GST_VULKAN_API -GstVulkanFence * gst_vulkan_full_screen_quad_get_last_fence (GstVulkanFullScreenQuad * self); +GstVulkanFence * gst_vulkan_full_screen_quad_get_last_fence (GstVulkanFullScreenQuad * self) G_GNUC_WARN_UNUSED_RESULT; GST_VULKAN_API -GstVulkanQueue * gst_vulkan_full_screen_quad_get_queue (GstVulkanFullScreenQuad * self); +GstVulkanQueue * gst_vulkan_full_screen_quad_get_queue (GstVulkanFullScreenQuad * self) G_GNUC_WARN_UNUSED_RESULT; G_END_DECLS #endif /* __GST_VULKAN_FULL_SCREEN_QUAD_H__ */
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkhandle.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkhandle.h
Changed
@@ -214,7 +214,7 @@ GstVulkanHandleType type, GstVulkanHandleTypedef handle, GstVulkanHandleDestroyNotify notify, - gpointer user_data); + gpointer user_data) G_GNUC_WARN_UNUSED_RESULT; GST_VULKAN_API void gst_vulkan_handle_free_descriptor_set_layout (GstVulkanHandle * handle,
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkimagebufferpool.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkimagebufferpool.c
Changed
@@ -23,6 +23,11 @@ #endif #include "gstvkimagebufferpool.h" +#include "gstvkphysicaldevice-private.h" + +#if GST_VULKAN_HAVE_VIDEO_EXTENSIONS +#include "gst/vulkan/gstvkvideoutils-private.h" +#endif /** * SECTION:vkimagebufferpool @@ -55,7 +60,9 @@ int n_imgs; guint32 n_layers; guint32 n_profiles; +#if GST_VULKAN_HAVE_VIDEO_EXTENSIONS GstVulkanVideoProfile profiles[2]; +#endif GstVulkanOperation *exec; gboolean add_videometa; }; @@ -202,6 +209,142 @@ } static gboolean +_is_video_usage (VkImageUsageFlags requested_usage) +{ + VkImageUsageFlags video_usage = 0; + +#if defined(VK_KHR_video_decode_queue) + video_usage |= (VK_IMAGE_USAGE_VIDEO_DECODE_DPB_BIT_KHR + | VK_IMAGE_USAGE_VIDEO_DECODE_DST_BIT_KHR); +#endif +#if defined(VK_KHR_video_encode_queue) + video_usage |= (VK_IMAGE_USAGE_VIDEO_ENCODE_SRC_BIT_KHR + | VK_IMAGE_USAGE_VIDEO_ENCODE_DPB_BIT_KHR); +#endif + + return ((requested_usage & video_usage) != 0); +} + +static gboolean +_is_video_profile_independent (VkImageUsageFlags requested_usage) +{ + VkImageUsageFlags video_dependent = 0; + +#if defined(VK_KHR_video_decode_queue) + if ((requested_usage & VK_IMAGE_USAGE_VIDEO_DECODE_DPB_BIT_KHR) != 0 + && (requested_usage & VK_IMAGE_USAGE_VIDEO_DECODE_DST_BIT_KHR) == 0) + return FALSE; +#endif +#if defined(VK_KHR_video_encode_queue) + video_dependent |= VK_IMAGE_USAGE_VIDEO_ENCODE_DPB_BIT_KHR; +#endif +#if defined(VK_KHR_video_encode_quantization_map) + video_dependent |= VK_IMAGE_USAGE_VIDEO_ENCODE_QUANTIZATION_DELTA_MAP_BIT_KHR; +#endif +#if defined(VK_KHR_video_encode_quantization_map) + video_dependent |= VK_IMAGE_USAGE_VIDEO_ENCODE_EMPHASIS_MAP_BIT_KHR; +#endif + + return ((requested_usage & video_dependent) == 0); +} + +static gboolean +gst_vulkan_image_buffer_pool_fill_buffer (GstVulkanImageBufferPool * vk_pool, + VkImageTiling tiling, gsize offset[GST_VIDEO_MAX_PLANES], + GstBuffer * buffer) +{ + GstVulkanImageBufferPoolPrivate *priv = GET_PRIV (vk_pool); + int i; + VkImageCreateInfo image_info; +#if 
GST_VULKAN_HAVE_VIDEO_EXTENSIONS + VkVideoProfileInfoKHR profiles[2]; + VkVideoProfileListInfoKHR profile_list = { + .sType = VK_STRUCTURE_TYPE_VIDEO_PROFILE_LIST_INFO_KHR, + .profileCount = priv->n_profiles, + .pProfiles = profiles, + }; +#endif + + /* *INDENT-OFF* */ + image_info = (VkImageCreateInfo) { + .sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO, + .pNext = NULL, + .flags = priv->img_flags, + .imageType = VK_IMAGE_TYPE_2D, + /* .format = fill per image, */ + /* .extent = fill per plane, */ + .mipLevels = 1, + .arrayLayers = priv->n_layers, + .samples = VK_SAMPLE_COUNT_1_BIT, + .tiling = tiling, + .usage = priv->usage, + .sharingMode = VK_SHARING_MODE_EXCLUSIVE, + .queueFamilyIndexCount = 0, + .pQueueFamilyIndices = NULL, + .initialLayout = priv->initial_layout == VK_IMAGE_LAYOUT_PREINITIALIZED + ? VK_IMAGE_LAYOUT_PREINITIALIZED + : VK_IMAGE_LAYOUT_UNDEFINED, + }; + /* *INDENT-ON* */ + if (_is_video_usage (priv->usage)) { + GstVulkanPhysicalDevice *gpu = vk_pool->device->physical_device; + if (gst_vulkan_physical_device_has_feature_video_maintenance1 (gpu) + && _is_video_profile_independent (priv->usage)) { +#if defined(VK_KHR_video_maintenance1) + image_info.flags |= VK_IMAGE_CREATE_VIDEO_PROFILE_INDEPENDENT_BIT_KHR; +#endif + } else if (priv->n_profiles > 0) { +#if GST_VULKAN_HAVE_VIDEO_EXTENSIONS + for (i = 0; i < priv->n_profiles; i++) + profiles[i] = priv->profiles[i].profile; + + image_info.pNext = &profile_list; +#endif + } + } + + priv->v_info.size = 0; + for (i = 0; i < priv->n_imgs; i++) { + GstMemory *mem; + guint width, height; + + if (GST_VIDEO_INFO_N_PLANES (&priv->v_info) != priv->n_imgs) { + width = GST_VIDEO_INFO_WIDTH (&priv->v_info); + height = GST_VIDEO_INFO_HEIGHT (&priv->v_info); + } else { + width = GST_VIDEO_INFO_COMP_WIDTH (&priv->v_info, i); + height = GST_VIDEO_INFO_COMP_HEIGHT (&priv->v_info, i); + } + + image_info.format = priv->vk_fmts[i]; + /* *INDENT-OFF* */ + image_info.extent = (VkExtent3D) { width, height, 1 }; + /* *INDENT-ON* */ + 
+ mem = gst_vulkan_image_memory_alloc_with_image_info (vk_pool->device, + &image_info, priv->mem_props); + if (!mem) + return FALSE; + + if (buffer) { + if (i < GST_VIDEO_MAX_PLANES - 1) + offset[i + 1] = mem->size; + + gst_buffer_append_memory (buffer, mem); + } else { + GstVulkanImageMemory *img_mem = (GstVulkanImageMemory *) mem; + + priv->v_info.offset[i] = priv->v_info.size; + priv->v_info.size += img_mem->requirements.size; + + gst_memory_unref (mem); + } + } + + return TRUE; +} + +static gboolean gst_vulkan_image_buffer_pool_set_config (GstBufferPool * pool, GstStructure * config) { @@ -209,12 +352,10 @@ GstVulkanImageBufferPoolPrivate *priv = GET_PRIV (vk_pool); VkImageTiling tiling; VkImageUsageFlags requested_usage; - VkImageCreateInfo image_info; guint min_buffers, max_buffers; GstCaps *caps = NULL, *decode_caps = NULL, *encode_caps = NULL; GstCapsFeatures *features; gboolean found, no_multiplane; - guint i; if (!gst_buffer_pool_config_get_params (config, &caps, NULL, &min_buffers, &max_buffers)) @@ -240,40 +381,50 @@ &priv->mem_props, &priv->initial_layout, &priv->initial_access, &priv->n_layers, &decode_caps, &encode_caps); -#if GST_VULKAN_HAVE_VIDEO_EXTENSIONS - { - guint n = 0; - - priv->n_profiles = 0; - if (decode_caps && ((requested_usage - & (VK_IMAGE_USAGE_VIDEO_DECODE_DST_BIT_KHR - | VK_IMAGE_USAGE_VIDEO_DECODE_DPB_BIT_KHR)) != 0)) { - n++; - if (gst_vulkan_video_profile_from_caps (&priv->profiles[priv->n_profiles], - decode_caps, GST_VULKAN_VIDEO_OPERATION_DECODE)) - priv->n_profiles++; - } - gst_clear_caps (&decode_caps); - if (encode_caps && ((requested_usage - & (VK_IMAGE_USAGE_VIDEO_ENCODE_SRC_BIT_KHR - | VK_IMAGE_USAGE_VIDEO_ENCODE_DPB_BIT_KHR)) != 0)) { - n++; - if (gst_vulkan_video_profile_from_caps (&priv->profiles[priv->n_profiles], - encode_caps, GST_VULKAN_VIDEO_OPERATION_ENCODE)) - priv->n_profiles++; - } - gst_clear_caps (&encode_caps); + priv->n_profiles = 0; - if (priv->n_profiles != n) - goto missing_profile; - } - if (priv->n_profiles 
> 0) { - no_multiplane = FALSE; - } else +#if GST_VULKAN_HAVE_VIDEO_EXTENSIONS + if (_is_video_usage (requested_usage)) { + GstVulkanPhysicalDevice *gpu = vk_pool->device->physical_device; + if (!gst_vulkan_physical_device_has_feature_video_maintenance1 (gpu) + || !_is_video_profile_independent (requested_usage)) { + guint n = 0; + +#if defined(VK_KHR_video_decode_queue) + if (decode_caps && ((requested_usage + & (VK_IMAGE_USAGE_VIDEO_DECODE_DST_BIT_KHR + | VK_IMAGE_USAGE_VIDEO_DECODE_DPB_BIT_KHR)) != 0)) { + n++; + if (gst_vulkan_video_profile_from_caps (&priv-> + profiles[priv->n_profiles], decode_caps, + GST_VULKAN_VIDEO_OPERATION_DECODE)) + priv->n_profiles++; + } #endif - { - no_multiplane = TRUE; +#if defined(VK_KHR_video_encode_queue) + if (encode_caps && ((requested_usage + & (VK_IMAGE_USAGE_VIDEO_ENCODE_SRC_BIT_KHR + | VK_IMAGE_USAGE_VIDEO_ENCODE_DPB_BIT_KHR)) != 0)) { + n++; + if (gst_vulkan_video_profile_from_caps (&priv-> + profiles[priv->n_profiles], encode_caps, + GST_VULKAN_VIDEO_OPERATION_ENCODE)) + priv->n_profiles++; + } +#endif + if (priv->n_profiles != n) + goto missing_profile; + if (priv->n_profiles == 0) + GST_WARNING ("Vulkan video image allocation without video profiles"); + } } +#endif /* GST_VULKAN_HAVE_VIDEO_EXTENSIONS */ + + gst_clear_caps (&decode_caps); + gst_clear_caps (&encode_caps); + + no_multiplane = !(GST_VIDEO_INFO_IS_YUV (&priv->v_info) && + _is_video_usage (requested_usage)); tiling = priv->raw_caps ? 
VK_IMAGE_TILING_LINEAR : VK_IMAGE_TILING_OPTIMAL; found = gst_vulkan_format_from_video_info_2 (vk_pool->device, @@ -283,19 +434,13 @@ goto no_vk_format; { - gboolean video = FALSE, sampleable; + gboolean sampleable; const GstVulkanFormatMap *vkmap; -#if GST_VULKAN_HAVE_VIDEO_EXTENSIONS - video = (requested_usage & (VK_IMAGE_USAGE_VIDEO_DECODE_DPB_BIT_KHR - | VK_IMAGE_USAGE_VIDEO_DECODE_DST_BIT_KHR - | VK_IMAGE_USAGE_VIDEO_ENCODE_SRC_BIT_KHR - | VK_IMAGE_USAGE_VIDEO_ENCODE_DPB_BIT_KHR)); -#endif - sampleable = requested_usage & - (VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_STORAGE_BIT); + sampleable = ((requested_usage & + (VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_STORAGE_BIT)) != 0); - if (sampleable && !video) { + if (sampleable && !_is_video_usage (requested_usage)) { vkmap = gst_vulkan_format_get_map (GST_VIDEO_INFO_FORMAT (&priv->v_info)); priv->img_flags = VK_IMAGE_CREATE_ALIAS_BIT; if (GST_VIDEO_INFO_N_PLANES (&priv->v_info) > 1 @@ -306,79 +451,15 @@ } } - /* get the size of the buffer to allocate */ - /* *INDENT-OFF* */ - image_info = (VkImageCreateInfo) { - .sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO, - .pNext = NULL, - .flags = priv->img_flags, - .imageType = VK_IMAGE_TYPE_2D, - /* .format = fill per image, */ - /* .extent = fill per plane, */ - .mipLevels = 1, - .arrayLayers = priv->n_layers, - .samples = VK_SAMPLE_COUNT_1_BIT, - .tiling = tiling, - .usage = requested_usage, - .sharingMode = VK_SHARING_MODE_EXCLUSIVE, - .queueFamilyIndexCount = 0, - .pQueueFamilyIndices = NULL, - .initialLayout = priv->initial_layout == VK_IMAGE_LAYOUT_PREINITIALIZED - ? 
VK_IMAGE_LAYOUT_PREINITIALIZED - : VK_IMAGE_LAYOUT_UNDEFINED, - }; - /* *INDENT-ON* */ - priv->v_info.size = 0; - for (i = 0; i < priv->n_imgs; i++) { - GstVulkanImageMemory *img_mem; - guint width, height; -#if GST_VULKAN_HAVE_VIDEO_EXTENSIONS - VkVideoProfileInfoKHR profiles[2] = - { priv->profiles[0].profile, priv->profiles[1].profile }; - VkVideoProfileListInfoKHR profile_list = { - .sType = VK_STRUCTURE_TYPE_VIDEO_PROFILE_LIST_INFO_KHR, - .profileCount = priv->n_profiles, - .pProfiles = profiles, - }; -#endif - - if (GST_VIDEO_INFO_N_PLANES (&priv->v_info) != priv->n_imgs) { - width = GST_VIDEO_INFO_WIDTH (&priv->v_info); - height = GST_VIDEO_INFO_HEIGHT (&priv->v_info); - } else { - width = GST_VIDEO_INFO_COMP_WIDTH (&priv->v_info, i); - height = GST_VIDEO_INFO_COMP_HEIGHT (&priv->v_info, i); - } - - image_info.format = priv->vk_fmts[i]; - /* *INDENT-OFF* */ - image_info.extent = (VkExtent3D) { width, height, 1 }; - /* *INDENT-ON* */ -#if GST_VULKAN_HAVE_VIDEO_EXTENSIONS - if (priv->n_profiles > 0) - image_info.pNext = &profile_list; -#endif - - img_mem = (GstVulkanImageMemory *) - gst_vulkan_image_memory_alloc_with_image_info (vk_pool->device, - &image_info, priv->mem_props); - if (!img_mem) - goto mem_create_failed; - - if (!img_mem) - goto image_failed; - - priv->v_info.offset[i] = priv->v_info.size; - priv->v_info.size += img_mem->requirements.size; + priv->usage = requested_usage; - gst_memory_unref (GST_MEMORY_CAST (img_mem)); - } + /* get the size of the buffer to allocate */ + if (!gst_vulkan_image_buffer_pool_fill_buffer (vk_pool, tiling, NULL, NULL)) + goto image_failed; gst_buffer_pool_config_set_params (config, caps, priv->v_info.size, min_buffers, max_buffers); - priv->usage = requested_usage; - /* enable metadata based on config of the pool */ priv->add_videometa = gst_buffer_pool_config_has_option (config, GST_BUFFER_POOL_OPTION_VIDEO_META); @@ -408,14 +489,12 @@ gst_video_format_to_string (GST_VIDEO_INFO_FORMAT (&priv->v_info))); return FALSE; } 
-mem_create_failed: - { - GST_WARNING_OBJECT (pool, "Could not create Vulkan Memory"); - return FALSE; - } #if GST_VULKAN_HAVE_VIDEO_EXTENSIONS missing_profile: { + gst_clear_caps (&decode_caps); + gst_clear_caps (&encode_caps); + GST_WARNING_OBJECT (pool, "missing or invalid decode-caps"); return FALSE; } @@ -430,7 +509,6 @@ static gboolean prepare_buffer (GstVulkanImageBufferPool * vk_pool, GstBuffer * buffer) { -#if defined(VK_KHR_synchronization2) GstVulkanImageBufferPoolPrivate *priv = GET_PRIV (vk_pool); GArray *barriers = NULL; GError *error = NULL; @@ -471,6 +549,7 @@ barriers = gst_vulkan_operation_retrieve_image_barriers (priv->exec); if (barriers->len > 0) { if (gst_vulkan_operation_use_sync2 (priv->exec)) { +#if defined(VK_KHR_synchronization2) VkDependencyInfoKHR dependency_info = { .sType = VK_STRUCTURE_TYPE_DEPENDENCY_INFO_KHR, .pImageMemoryBarriers = (gpointer) barriers->data, @@ -478,6 +557,7 @@ }; gst_vulkan_operation_pipeline_barrier2 (priv->exec, &dependency_info); +#endif } else { gst_vulkan_command_buffer_lock (priv->exec->cmd_buf); vkCmdPipelineBarrier (priv->exec->cmd_buf->cmd, @@ -501,8 +581,6 @@ } return FALSE; } -#endif - return TRUE; } /* This function handles GstBuffer creation */ @@ -514,79 +592,15 @@ GstVulkanImageBufferPoolPrivate *priv = GET_PRIV (vk_pool); VkImageTiling tiling = priv->raw_caps ? 
VK_IMAGE_TILING_LINEAR : VK_IMAGE_TILING_OPTIMAL; - VkImageCreateInfo image_info; GstBuffer *buf; - guint i; gsize offset[GST_VIDEO_MAX_PLANES] = { 0, }; - /* *INDENT-OFF* */ - image_info = (VkImageCreateInfo) { - .sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO, - .pNext = NULL, - .flags = priv->img_flags, - .imageType = VK_IMAGE_TYPE_2D, - /* .format = fill per image, */ - /* .extent = fill per plane, */ - .mipLevels = 1, - .arrayLayers = priv->n_layers, - .samples = VK_SAMPLE_COUNT_1_BIT, - .tiling = tiling, - .usage = priv->usage, - .sharingMode = VK_SHARING_MODE_EXCLUSIVE, - .queueFamilyIndexCount = 0, - .pQueueFamilyIndices = NULL, - .initialLayout = priv->initial_layout == VK_IMAGE_LAYOUT_PREINITIALIZED - ? VK_IMAGE_LAYOUT_PREINITIALIZED - : VK_IMAGE_LAYOUT_UNDEFINED, - }; - /* *INDENT-ON* */ - if (!(buf = gst_buffer_new ())) { goto no_buffer; } - for (i = 0; i < priv->n_imgs; i++) { - GstMemory *mem; - guint width, height; -#if GST_VULKAN_HAVE_VIDEO_EXTENSIONS - VkVideoProfileInfoKHR profiles[] = - { priv->profiles[0].profile, priv->profiles[1].profile }; - VkVideoProfileListInfoKHR profile_list = { - .sType = VK_STRUCTURE_TYPE_VIDEO_PROFILE_LIST_INFO_KHR, - .profileCount = priv->n_profiles, - .pProfiles = profiles, - }; -#endif - - if (GST_VIDEO_INFO_N_PLANES (&priv->v_info) != priv->n_imgs) { - width = GST_VIDEO_INFO_WIDTH (&priv->v_info); - height = GST_VIDEO_INFO_HEIGHT (&priv->v_info); - } else { - width = GST_VIDEO_INFO_COMP_WIDTH (&priv->v_info, i); - height = GST_VIDEO_INFO_COMP_HEIGHT (&priv->v_info, i); - } - - image_info.format = priv->vk_fmts[i]; - /* *INDENT-OFF* */ - image_info.extent = (VkExtent3D) { width, height, 1 }; - /* *INDENT-ON* */ -#if GST_VULKAN_HAVE_VIDEO_EXTENSIONS - if (priv->n_profiles > 0) - image_info.pNext = &profile_list; -#endif - - mem = gst_vulkan_image_memory_alloc_with_image_info (vk_pool->device, - &image_info, priv->mem_props); - if (!mem) { - gst_buffer_unref (buf); - goto mem_create_failed; - } - - if (i < GST_VIDEO_MAX_PLANES - 1) - offset[i + 1] = mem->size; - - gst_buffer_append_memory (buf, mem); - } + if (!gst_vulkan_image_buffer_pool_fill_buffer (vk_pool, tiling, offset, buf)) + goto mem_create_failed; prepare_buffer (vk_pool, buf); @@ -612,6 +626,8 @@ } mem_create_failed: { + gst_buffer_unref (buf); + GST_WARNING_OBJECT (pool, "Could not create Vulkan Memory"); return GST_FLOW_ERROR; }
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkimagebufferpool.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkimagebufferpool.h
Changed
@@ -78,7 +78,7 @@ G_DEFINE_AUTOPTR_CLEANUP_FUNC(GstVulkanImageBufferPool, gst_object_unref); GST_VULKAN_API -GstBufferPool * gst_vulkan_image_buffer_pool_new (GstVulkanDevice * device); +GstBufferPool * gst_vulkan_image_buffer_pool_new (GstVulkanDevice * device) G_GNUC_WARN_UNUSED_RESULT; GST_VULKAN_API void gst_vulkan_image_buffer_pool_config_set_allocation_params
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkimagememory.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkimagememory.h
Changed
@@ -212,7 +212,7 @@ GST_VULKAN_API GstVulkanImageView *gst_vulkan_image_memory_find_view (GstVulkanImageMemory * image, GstVulkanImageMemoryFindViewFunc find_func, - gpointer user_data); + gpointer user_data) G_GNUC_WARN_UNUSED_RESULT; GST_VULKAN_API void gst_vulkan_image_memory_add_view (GstVulkanImageMemory * image, GstVulkanImageView * view);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkimageview.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkimageview.h
Changed
@@ -121,7 +121,7 @@ GST_VULKAN_API GstVulkanImageView * gst_vulkan_image_view_new (GstVulkanImageMemory * image, - const VkImageViewCreateInfo * create_info); + const VkImageViewCreateInfo * create_info) G_GNUC_WARN_UNUSED_RESULT; G_END_DECLS
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkinstance.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkinstance.c
Changed
@@ -87,10 +87,14 @@ GPtrArray *enabled_extensions; #if !defined (GST_DISABLE_DEBUG) +#if defined(VK_EXT_debug_utils) + VkDebugUtilsMessengerEXT dbg_messenger; + PFN_vkCreateDebugUtilsMessengerEXT dbgCreateDebugUtilsMessenger; + PFN_vkDestroyDebugUtilsMessengerEXT dbgDestroyDebugUtilsMessenger; +#endif VkDebugReportCallbackEXT msg_callback; PFN_vkCreateDebugReportCallbackEXT dbgCreateDebugReportCallback; PFN_vkDestroyDebugReportCallbackEXT dbgDestroyDebugReportCallback; - PFN_vkDebugReportMessageEXT dbgReportMessage; #endif }; @@ -277,6 +281,12 @@ GstVulkanInstancePrivate *priv = GET_PRIV (instance); if (priv->opened) { +#if !defined (GST_DISABLE_DEBUG) + if (priv->dbg_messenger) { + priv->dbgDestroyDebugUtilsMessenger (instance->instance, + priv->dbg_messenger, NULL); + } +#endif if (priv->dbgDestroyDebugReportCallback) priv->dbgDestroyDebugReportCallback (instance->instance, priv->msg_callback, NULL); @@ -304,6 +314,7 @@ G_OBJECT_CLASS (parent_class)->finalize (object); } +#if !defined (GST_DISABLE_DEBUG) VKAPI_ATTR static VkBool32 _gst_vk_debug_callback (VkDebugReportFlagsEXT msgFlags, VkDebugReportObjectTypeEXT objType, uint64_t srcObject, size_t location, @@ -341,6 +352,52 @@ return FALSE; } +#if defined(VK_EXT_debug_utils) +VKAPI_ATTR static VkBool32 +_gst_vk_debug_utils_callback (VkDebugUtilsMessageSeverityFlagBitsEXT severity, + VkDebugUtilsMessageTypeFlagsEXT messageType, + const VkDebugUtilsMessengerCallbackDataEXT * data, gpointer pUserData) +{ + GstDebugLevel level = + messageType == VK_DEBUG_UTILS_MESSAGE_TYPE_PERFORMANCE_BIT_EXT ? + GST_LEVEL_FIXME : + severity == VK_DEBUG_UTILS_MESSAGE_SEVERITY_ERROR_BIT_EXT ? + GST_LEVEL_ERROR : + severity == VK_DEBUG_UTILS_MESSAGE_SEVERITY_WARNING_BIT_EXT ? + GST_LEVEL_WARNING : + severity == VK_DEBUG_UTILS_MESSAGE_SEVERITY_INFO_BIT_EXT ? 
+ GST_LEVEL_INFO : GST_LEVEL_DEBUG; + const char *fmt = "%s Code 0x%x : %s"; + + /* Ignore */ + switch (data->messageIdNumber) { + case 0x24b5c69f: /* VkPhysicalDeviceVulkan12Properties::maxUpdateAfterBindDescriptorsInAllPools + = 32 */ + return VK_FALSE; + default: + break; + } + + GST_CAT_LEVEL_LOG (GST_VULKAN_DEBUG_CAT, level, NULL, fmt, + data->pMessageIdName, data->messageIdNumber, data->pMessage); + switch (level) { + case GST_LEVEL_ERROR: + g_critical (fmt, data->pMessageIdName, data->messageIdNumber, + data->pMessage); + break; + case GST_LEVEL_WARNING: + g_warning (fmt, data->pMessageIdName, data->messageIdNumber, + data->pMessage); + break; + default: + break; + } + + return FALSE; +} +#endif +#endif + static gboolean gst_vulkan_instance_get_layer_info_unlocked (GstVulkanInstance * instance, const gchar * name, gchar ** description, guint32 * spec_version, @@ -805,6 +862,13 @@ gst_debug_category_get_threshold (GST_VULKAN_DEBUG_CAT); if (vulkan_debug_level >= GST_LEVEL_ERROR) { +#if defined(VK_EXT_debug_utils) + if (gst_vulkan_instance_get_extension_info_unlocked (instance, + VK_EXT_DEBUG_UTILS_EXTENSION_NAME, NULL)) { + gst_vulkan_instance_enable_extension_unlocked (instance, + VK_EXT_DEBUG_UTILS_EXTENSION_NAME); + } else +#endif if (gst_vulkan_instance_get_extension_info_unlocked (instance, VK_EXT_DEBUG_REPORT_EXTENSION_NAME, NULL)) { gst_vulkan_instance_enable_extension_unlocked (instance, @@ -844,6 +908,128 @@ return ret; } +static gboolean +_gst_vulkan_configure_debug_utils (GstVulkanInstance * instance, + GstDebugLevel vulkan_debug_level) +{ +#if defined(VK_EXT_debug_utils) + GstVulkanInstancePrivate *priv = GET_PRIV (instance); + VkResult err; + + if (gst_vulkan_instance_is_extension_enabled_unlocked (instance, + VK_EXT_DEBUG_UTILS_EXTENSION_NAME, NULL)) { + VkDebugUtilsMessengerCreateInfoEXT dbg = { + .sType = VK_STRUCTURE_TYPE_DEBUG_UTILS_MESSENGER_CREATE_INFO_EXT, + .pNext = NULL, + .flags = 0, + .messageSeverity = 0, + .messageType = 
VK_DEBUG_UTILS_MESSAGE_TYPE_GENERAL_BIT_EXT + | VK_DEBUG_UTILS_MESSAGE_TYPE_VALIDATION_BIT_EXT, + .pfnUserCallback = _gst_vk_debug_utils_callback, + .pUserData = NULL + }; + + if (vulkan_debug_level >= GST_LEVEL_ERROR) + dbg.messageSeverity |= VK_DEBUG_UTILS_MESSAGE_SEVERITY_ERROR_BIT_EXT; + if (vulkan_debug_level >= GST_LEVEL_WARNING) + dbg.messageSeverity |= VK_DEBUG_UTILS_MESSAGE_SEVERITY_WARNING_BIT_EXT; + if (vulkan_debug_level >= GST_LEVEL_FIXME) { + dbg.messageType |= VK_DEBUG_UTILS_MESSAGE_TYPE_PERFORMANCE_BIT_EXT; + dbg.messageSeverity |= VK_DEBUG_UTILS_MESSAGE_SEVERITY_INFO_BIT_EXT; + } + if (vulkan_debug_level >= GST_LEVEL_INFO) + dbg.messageSeverity |= VK_DEBUG_UTILS_MESSAGE_SEVERITY_INFO_BIT_EXT; + if (vulkan_debug_level >= GST_LEVEL_DEBUG) + dbg.messageSeverity |= VK_DEBUG_UTILS_MESSAGE_SEVERITY_VERBOSE_BIT_EXT; + + priv->dbgCreateDebugUtilsMessenger = (PFN_vkCreateDebugUtilsMessengerEXT) + gst_vulkan_instance_get_proc_address (instance, + "vkCreateDebugUtilsMessengerEXT"); + if (!priv->dbgCreateDebugUtilsMessenger) + return FALSE; + priv->dbgDestroyDebugUtilsMessenger = (PFN_vkDestroyDebugUtilsMessengerEXT) + gst_vulkan_instance_get_proc_address (instance, + "vkDestroyDebugUtilsMessengerEXT"); + if (!priv->dbgDestroyDebugUtilsMessenger) + return FALSE; + + err = priv->dbgCreateDebugUtilsMessenger (instance->instance, &dbg, NULL, + &priv->dbg_messenger); + if (err != VK_SUCCESS) + return FALSE; + + return TRUE; + } +#endif + return FALSE; +} + +static gboolean +gst_vulkan_instance_configure_debugging (GstVulkanInstance * instance, + GstDebugLevel vulkan_debug_level, GError ** error) +{ +#if !defined (GST_DISABLE_DEBUG) + GstVulkanInstancePrivate *priv = GET_PRIV (instance); + VkResult err; + + if (vulkan_debug_level == GST_LEVEL_NONE) + return TRUE; + + if (!_gst_vulkan_configure_debug_utils (instance, vulkan_debug_level)) { + /* fallback */ + if (gst_vulkan_instance_is_extension_enabled_unlocked (instance, + VK_EXT_DEBUG_REPORT_EXTENSION_NAME, NULL)) 
{ + VkDebugReportCallbackCreateInfoEXT info = { + .sType = VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT, + .pNext = NULL, + .flags = 0, + .pfnCallback = (PFN_vkDebugReportCallbackEXT) _gst_vk_debug_callback, + .pUserData = NULL, + }; + + priv->dbgCreateDebugReportCallback = (PFN_vkCreateDebugReportCallbackEXT) + gst_vulkan_instance_get_proc_address (instance, + "vkCreateDebugReportCallbackEXT"); + if (!priv->dbgCreateDebugReportCallback) { + g_set_error (error, GST_VULKAN_ERROR, VK_ERROR_INITIALIZATION_FAILED, + "Failed to retrieve vkCreateDebugReportCallback"); + return FALSE; + } + priv->dbgDestroyDebugReportCallback = + (PFN_vkDestroyDebugReportCallbackEXT) + gst_vulkan_instance_get_proc_address (instance, + "vkDestroyDebugReportCallbackEXT"); + if (!priv->dbgDestroyDebugReportCallback) { + g_set_error (error, GST_VULKAN_ERROR, VK_ERROR_INITIALIZATION_FAILED, + "Failed to retrieve vkDestroyDebugReportCallback"); + return FALSE; + } + + /* matches the conditions in _gst_vk_debug_callback() */ + if (vulkan_debug_level >= GST_LEVEL_ERROR) + info.flags |= VK_DEBUG_REPORT_ERROR_BIT_EXT; + if (vulkan_debug_level >= GST_LEVEL_WARNING) + info.flags |= VK_DEBUG_REPORT_WARNING_BIT_EXT; + if (vulkan_debug_level >= GST_LEVEL_FIXME) + info.flags |= VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT; + if (vulkan_debug_level >= GST_LEVEL_LOG) + info.flags |= VK_DEBUG_REPORT_INFORMATION_BIT_EXT; + if (vulkan_debug_level >= GST_LEVEL_TRACE) + info.flags |= VK_DEBUG_REPORT_DEBUG_BIT_EXT; + + err = + priv->dbgCreateDebugReportCallback (instance->instance, &info, NULL, + &priv->msg_callback); + if (gst_vulkan_error_to_g_error (err, error, + "vkCreateDebugReportCallback") < 0) + return FALSE; + } + } +#endif + + return TRUE; +} + /** * gst_vulkan_instance_open: * @instance: a #GstVulkanInstance @@ -936,6 +1122,7 @@ VkValidationFeatureEnableEXT feat_list[] = { VK_VALIDATION_FEATURE_ENABLE_GPU_ASSISTED_EXT, VK_VALIDATION_FEATURE_ENABLE_GPU_ASSISTED_RESERVE_BINDING_SLOT_EXT, + 
VK_VALIDATION_FEATURE_ENABLE_BEST_PRACTICES_EXT, #if defined (VK_API_VERSION_1_3) VK_VALIDATION_FEATURE_ENABLE_SYNCHRONIZATION_VALIDATION_EXT, #endif @@ -1015,62 +1202,9 @@ "vkEnumeratePhysicalDevices") < 0) goto error; -#if !defined (GST_DISABLE_DEBUG) - if (vulkan_debug_level >= GST_LEVEL_ERROR - && gst_vulkan_instance_is_extension_enabled_unlocked (instance, - VK_EXT_DEBUG_REPORT_EXTENSION_NAME, NULL)) { - VkDebugReportCallbackCreateInfoEXT info = { 0, }; - - priv->dbgCreateDebugReportCallback = (PFN_vkCreateDebugReportCallbackEXT) - gst_vulkan_instance_get_proc_address (instance, - "vkCreateDebugReportCallbackEXT"); - if (!priv->dbgCreateDebugReportCallback) { - g_set_error (error, GST_VULKAN_ERROR, VK_ERROR_INITIALIZATION_FAILED, - "Failed to retrieve vkCreateDebugReportCallback"); - goto error; - } - priv->dbgDestroyDebugReportCallback = (PFN_vkDestroyDebugReportCallbackEXT) - gst_vulkan_instance_get_proc_address (instance, - "vkDestroyDebugReportCallbackEXT"); - if (!priv->dbgDestroyDebugReportCallback) { - g_set_error (error, GST_VULKAN_ERROR, VK_ERROR_INITIALIZATION_FAILED, - "Failed to retrieve vkDestroyDebugReportCallback"); - goto error; - } - priv->dbgReportMessage = (PFN_vkDebugReportMessageEXT) - gst_vulkan_instance_get_proc_address (instance, - "vkDebugReportMessageEXT"); - if (!priv->dbgReportMessage) { - g_set_error (error, GST_VULKAN_ERROR, VK_ERROR_INITIALIZATION_FAILED, - "Failed to retrieve vkDebugReportMessage"); - goto error; - } - - info.sType = VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT; - info.pNext = NULL; - info.flags = 0; - info.pfnCallback = (PFN_vkDebugReportCallbackEXT) _gst_vk_debug_callback; - info.pUserData = NULL; - /* matches the conditions in _gst_vk_debug_callback() */ - if (vulkan_debug_level >= GST_LEVEL_ERROR) - info.flags |= VK_DEBUG_REPORT_ERROR_BIT_EXT; - if (vulkan_debug_level >= GST_LEVEL_WARNING) - info.flags |= VK_DEBUG_REPORT_WARNING_BIT_EXT; - if (vulkan_debug_level >= GST_LEVEL_FIXME) - info.flags |= 
VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT; - if (vulkan_debug_level >= GST_LEVEL_LOG) - info.flags |= VK_DEBUG_REPORT_INFORMATION_BIT_EXT; - if (vulkan_debug_level >= GST_LEVEL_TRACE) - info.flags |= VK_DEBUG_REPORT_DEBUG_BIT_EXT; - - err = - priv->dbgCreateDebugReportCallback (instance->instance, &info, NULL, - &priv->msg_callback); - if (gst_vulkan_error_to_g_error (err, error, - "vkCreateDebugReportCallback") < 0) - goto error; - } -#endif + if (!gst_vulkan_instance_configure_debugging (instance, vulkan_debug_level, + error)) + goto error; priv->opened = TRUE; GST_OBJECT_UNLOCK (instance);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkinstance.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkinstance.h
Changed
@@ -79,7 +79,7 @@ G_DEFINE_AUTOPTR_CLEANUP_FUNC (GstVulkanInstance, gst_object_unref) GST_VULKAN_API -GstVulkanInstance * gst_vulkan_instance_new (void); +GstVulkanInstance * gst_vulkan_instance_new (void) G_GNUC_WARN_UNUSED_RESULT; GST_VULKAN_API gboolean gst_vulkan_instance_fill_info (GstVulkanInstance * instance, GError ** error); @@ -93,11 +93,11 @@ GST_VULKAN_API GstVulkanDevice * gst_vulkan_instance_create_device (GstVulkanInstance * instance, - GError ** error); + GError ** error) G_GNUC_WARN_UNUSED_RESULT; GST_VULKAN_API GstVulkanDevice * gst_vulkan_instance_create_device_with_index(GstVulkanInstance * instance, guint device_index, - GError ** error); + GError ** error) G_GNUC_WARN_UNUSED_RESULT; GST_VULKAN_API void gst_context_set_vulkan_instance (GstContext * context,
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkoperation.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkoperation.c
Changed
@@ -23,9 +23,7 @@ #endif #include "gstvkoperation.h" -#if GST_VULKAN_HAVE_VIDEO_EXTENSIONS -# include "gstvkvideo-private.h" -#endif +#include "gstvkphysicaldevice-private.h" /** * SECTION:vkoperation @@ -149,39 +147,37 @@ } } - - static void gst_vulkan_operation_constructed (GObject * object) { -#if defined(VK_KHR_timeline_semaphore) || defined(VK_KHR_synchronization2) GstVulkanOperation *self = GST_VULKAN_OPERATION (object); GstVulkanOperationPrivate *priv = GET_PRIV (self); GstVulkanDevice *device = priv->cmd_pool->queue->device; -#if defined(VK_KHR_synchronization2) priv->has_sync2 = - gst_vulkan_physical_device_check_api_version (device->physical_device, 1, - 3, 0) - || gst_vulkan_device_is_extension_enabled (device, - VK_KHR_SYNCHRONIZATION_2_EXTENSION_NAME); + gst_vulkan_physical_device_has_feature_synchronization2 + (device->physical_device); + + priv->has_timeline = + gst_vulkan_physical_device_has_feature_timeline_sempahore + (device->physical_device); if (priv->has_sync2) { - if (gst_vulkan_physical_device_check_api_version (device->physical_device, - 1, 3, 0)) + gboolean vulkan_1_3 = + gst_vulkan_physical_device_check_api_version (device->physical_device, + 1, 3, 0); + + if (vulkan_1_3) { priv->QueueSubmit2 = gst_vulkan_device_get_proc_address (device, "vkQueueSubmit2"); + priv->CmdPipelineBarrier2 = + gst_vulkan_device_get_proc_address (device, "vkCmdPipelineBarrier2"); + } if (!priv->QueueSubmit2) { priv->QueueSubmit2 = gst_vulkan_device_get_proc_address (device, "vkQueueSubmit2KHR"); } - - if (gst_vulkan_physical_device_check_api_version (device->physical_device, - 1, 3, 0)) - priv->CmdPipelineBarrier2 = - gst_vulkan_device_get_proc_address (device, "vkCmdPipelineBarrier2"); - if (!priv->CmdPipelineBarrier2) { priv->CmdPipelineBarrier2 = gst_vulkan_device_get_proc_address (device, @@ -190,21 +186,13 @@ priv->has_sync2 = (priv->QueueSubmit2 && priv->CmdPipelineBarrier2); } -#endif - -#if GST_VULKAN_HAVE_VIDEO_EXTENSIONS +#if 
defined(VK_KHR_video_queue) priv->has_video = gst_vulkan_device_is_extension_enabled (device, VK_KHR_VIDEO_QUEUE_EXTENSION_NAME); - priv->has_video_maintenance1 = gst_vulkan_video_has_maintenance1 (device); -#endif -#if defined(VK_KHR_timeline_semaphore) - priv->has_timeline = - gst_vulkan_physical_device_check_api_version (device->physical_device, 1, - 2, 0) - || gst_vulkan_device_is_extension_enabled (device, - VK_KHR_TIMELINE_SEMAPHORE_EXTENSION_NAME); -#endif #endif + priv->has_video_maintenance1 = + gst_vulkan_physical_device_has_feature_video_maintenance1 + (device->physical_device); G_OBJECT_CLASS (parent_class)->constructed (object); } @@ -1206,6 +1194,20 @@ GST_OBJECT_UNLOCK (self); } +#if defined(VK_KHR_video_queue) +static inline gboolean +_query_type_is_video (VkQueryType query_type) +{ + if (query_type == VK_QUERY_TYPE_RESULT_STATUS_ONLY_KHR) + return TRUE; +# if defined(VK_KHR_video_encode_queue) + if (query_type == VK_QUERY_TYPE_VIDEO_ENCODE_FEEDBACK_KHR) + return TRUE; +# endif + return FALSE; +} +#endif /* defined(VK_KHR_video_queue) */ + /** * gst_vulkan_operation_enable_query: * @self: a #GstVulkanOperation @@ -1226,10 +1228,6 @@ VkQueryType query_type, guint n_queries, gpointer pnext, GError ** error) { GstVulkanOperationPrivate *priv; -#if GST_VULKAN_HAVE_VIDEO_EXTENSIONS - GstVulkanPhysicalDevice *device; - guint32 queue_family; -#endif VkQueryPoolCreateInfo query_pool_info = { .sType = VK_STRUCTURE_TYPE_QUERY_POOL_CREATE_INFO, .pNext = pnext, @@ -1247,34 +1245,31 @@ if (priv->query_pool) return TRUE; -#if GST_VULKAN_HAVE_VIDEO_EXTENSIONS - queue_family = priv->cmd_pool->queue->family; - device = priv->cmd_pool->queue->device->physical_device; - /* - * The VK_QUERY_TYPE_RESULT_STATUS_ONLY_KHR can be optional, so .query_result_status - * can be FALSE, see AMD's case. - * vkCreateQueryPool needs to be called when the query is - * VK_QUERY_TYPE_VIDEO_ENCODE_FEEDBACK_KHR to enable it anyway. 
- */ - if (!device->queue_family_ops[queue_family].query_result_status && - query_type == VK_QUERY_TYPE_RESULT_STATUS_ONLY_KHR) { - g_set_error (error, GST_VULKAN_ERROR, VK_ERROR_FEATURE_NOT_PRESENT, - "Queue %" GST_PTR_FORMAT - " doesn't support result status query operations", - priv->cmd_pool->queue); - return FALSE; - } - - if ((query_type == VK_QUERY_TYPE_RESULT_STATUS_ONLY_KHR - || query_type == VK_QUERY_TYPE_VIDEO_ENCODE_FEEDBACK_KHR) - && priv->has_video && priv->has_video_maintenance1) { - VkBaseInStructure *base; - for (base = pnext; base; base = (VkBaseInStructure *) base->pNext) { - if (base->sType == VK_STRUCTURE_TYPE_VIDEO_PROFILE_INFO_KHR) { - priv->use_inline_query = TRUE; - break; - } +#if defined(VK_KHR_video_queue) + { + guint32 queue_family = priv->cmd_pool->queue->family; + GstVulkanPhysicalDevice *device = + priv->cmd_pool->queue->device->physical_device; + + /* + * The VK_QUERY_TYPE_RESULT_STATUS_ONLY_KHR can be optional, so .query_result_status + * can be FALSE, see AMD's case. + * vkCreateQueryPool needs to be called when the query is + * VK_QUERY_TYPE_VIDEO_ENCODE_FEEDBACK_KHR to enable it anyway. 
+ */ + if (!device->queue_family_ops[queue_family].query_result_status && + query_type == VK_QUERY_TYPE_RESULT_STATUS_ONLY_KHR) { + g_set_error (error, GST_VULKAN_ERROR, VK_ERROR_FEATURE_NOT_PRESENT, + "Queue %" GST_PTR_FORMAT + " doesn't support result status query operations", + priv->cmd_pool->queue); + return FALSE; + } + + priv->use_inline_query = (_query_type_is_video (query_type) + && priv->has_video && priv->has_video_maintenance1 + && (vk_find_struct (pnext, + VK_STRUCTURE_TYPE_VIDEO_PROFILE_INFO_KHR) != NULL)); } #endif @@ -1292,17 +1287,18 @@ * + result support other structures besides a guint32 array */ switch (query_type) { -#if GST_VULKAN_HAVE_VIDEO_EXTENSIONS +#if defined(VK_KHR_video_queue) case VK_QUERY_TYPE_RESULT_STATUS_ONLY_KHR: if (priv->has_video) stride = sizeof (guint32); break; +#endif +#if defined(VK_KHR_video_encode_queue) case VK_QUERY_TYPE_VIDEO_ENCODE_FEEDBACK_KHR: if (priv->has_video) stride = sizeof (GstVulkanEncodeQueryResult); break; #endif - default: break; } @@ -1342,10 +1338,8 @@ if (!priv->query_pool || !priv->query_data || !priv->op_submitted) return TRUE; -#if GST_VULKAN_HAVE_VIDEO_EXTENSIONS - if (priv->has_video - && (priv->query_type == VK_QUERY_TYPE_RESULT_STATUS_ONLY_KHR - || priv->query_type == VK_QUERY_TYPE_VIDEO_ENCODE_FEEDBACK_KHR)) { +#if defined(VK_KHR_video_queue) + if (priv->has_video && _query_type_is_video (priv->query_type)) { flags |= VK_QUERY_RESULT_WITH_STATUS_BIT_KHR; } #endif @@ -1397,11 +1391,10 @@ return FALSE; } - if (priv->use_inline_query) - g_return_val_if_fail (base, FALSE); - #if defined(VK_KHR_video_maintenance1) if (priv->use_inline_query) { + g_return_val_if_fail (base, FALSE); + /* *INDENT-OFF* */ priv->inline_query = (VkVideoInlineQueryInfoKHR) { .sType = VK_STRUCTURE_TYPE_VIDEO_INLINE_QUERY_INFO_KHR,
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkoperation.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkoperation.h
Changed
@@ -83,7 +83,7 @@ G_DEFINE_AUTOPTR_CLEANUP_FUNC (GstVulkanOperation, gst_object_unref) GST_VULKAN_API -GstVulkanOperation * gst_vulkan_operation_new (GstVulkanCommandPool * cmd_pool); +GstVulkanOperation * gst_vulkan_operation_new (GstVulkanCommandPool * cmd_pool) G_GNUC_WARN_UNUSED_RESULT; GST_VULKAN_API gboolean gst_vulkan_operation_begin (GstVulkanOperation * self, GError ** error); @@ -96,10 +96,10 @@ void gst_vulkan_operation_reset (GstVulkanOperation * self); GST_VULKAN_API GArray * gst_vulkan_operation_retrieve_image_barriers - (GstVulkanOperation * self); + (GstVulkanOperation * self) G_GNUC_WARN_UNUSED_RESULT; GST_VULKAN_API GArray * gst_vulkan_operation_new_extra_image_barriers - (GstVulkanOperation * self); + (GstVulkanOperation * self) G_GNUC_WARN_UNUSED_RESULT; GST_VULKAN_API gboolean gst_vulkan_operation_add_frame_barrier (GstVulkanOperation * self, GstBuffer * frame,
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkphysicaldevice-private.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkphysicaldevice-private.h
Changed
@@ -25,8 +25,63 @@ G_BEGIN_DECLS -const -VkPhysicalDeviceFeatures2 * gst_vulkan_physical_device_get_features (GstVulkanPhysicalDevice * device); +typedef struct _GstVulkanFormatProperties GstVulkanFormatProperties; + +/** + * GstVulkanFormatProperties: (skip): + * @linear_tiling_feat: linear tiling features + * @optimal_tiling_feat: optimal tiling features + * @buffer_feat: buffer features + * + * Common structure for Vulkan color format properties. + */ +struct _GstVulkanFormatProperties +{ + guint64 linear_tiling_feat; + guint64 optimal_tiling_feat; + guint64 buffer_feat; +}; + +const VkPhysicalDeviceFeatures2 * + gst_vulkan_physical_device_get_features (GstVulkanPhysicalDevice * device); + +gboolean gst_vulkan_physical_device_has_feature_sampler_ycbrc_conversion + (GstVulkanPhysicalDevice * device); + +gboolean gst_vulkan_physical_device_has_feature_synchronization2 + (GstVulkanPhysicalDevice * device); + +gboolean gst_vulkan_physical_device_has_feature_timeline_sempahore + (GstVulkanPhysicalDevice * device); + +gboolean gst_vulkan_physical_device_has_feature_video_maintenance1 + (GstVulkanPhysicalDevice * device); + +gboolean gst_vulkan_physical_device_has_feature_video_maintenance2 + (GstVulkanPhysicalDevice * device); + +gboolean gst_vulkan_physical_device_has_feature_video_decode_vp9 + (GstVulkanPhysicalDevice * device); + +gboolean gst_vulkan_physical_device_has_feature_video_encode_av1 + (GstVulkanPhysicalDevice * device); + +void gst_vulkan_physical_device_get_format_properties + (GstVulkanPhysicalDevice * device, + guint vk_format, + GstVulkanFormatProperties * props); + +GArray * gst_vulkan_physical_device_get_video_formats (GstVulkanPhysicalDevice * device, + guint64 image_usage, + gpointer pprofile, + GError ** error); + +gboolean gst_vulkan_physical_device_get_video_capabilities + (GstVulkanPhysicalDevice * device, + gpointer pprofile, + gpointer pcaps_out, + GError ** error); + static inline void vk_link_struct (gpointer chain, gconstpointer in) @@ 
-39,6 +94,20 @@ out->pNext = (void *) in; } +static inline gconstpointer +vk_find_struct (gconstpointer chain, VkStructureType stype) +{ + const VkBaseInStructure *in = chain; + + while (in) { + if (in->sType == stype) + return in; + in = in->pNext; + } + + return NULL; +} + G_END_DECLS #endif /* __GST_VULKAN_PHYSICAL_DEVICE_PRIVATE_H__ */
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkphysicaldevice.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkphysicaldevice.c
Changed
@@ -71,6 +71,10 @@ VkPhysicalDeviceVulkan13Features features13; VkPhysicalDeviceVulkan13Properties properties13; #endif +#if defined (VK_API_VERSION_1_4) + VkPhysicalDeviceVulkan14Features features14; + VkPhysicalDeviceVulkan14Properties properties14; +#endif #if defined (VK_KHR_synchronization2) VkPhysicalDeviceSynchronization2FeaturesKHR synchronization2; #endif @@ -80,6 +84,22 @@ #if defined (VK_KHR_video_maintenance1) VkPhysicalDeviceVideoMaintenance1FeaturesKHR videomaintenance1; #endif +#if defined (VK_KHR_video_maintenance2) + VkPhysicalDeviceVideoMaintenance2FeaturesKHR videomaintenance2; +#endif +#if defined (VK_KHR_video_encode_av1) + VkPhysicalDeviceVideoEncodeAV1FeaturesKHR video_encoder_av1; +#endif +#if defined (VK_KHR_video_decode_vp9) + VkPhysicalDeviceVideoDecodeVP9FeaturesKHR video_decoder_vp9; +#endif +#if defined (VK_KHR_get_physical_device_properties2) + PFN_vkGetPhysicalDeviceFormatProperties2KHR get_format_props_fn; +#endif +#if defined (VK_KHR_video_queue) + PFN_vkGetPhysicalDeviceVideoFormatPropertiesKHR get_video_format_props_fn; + PFN_vkGetPhysicalDeviceVideoCapabilitiesKHR get_video_capabilties_fn; +#endif }; static void @@ -211,6 +231,15 @@ VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VULKAN_1_3_FEATURES; priv->features12.pNext = &priv->features13; #endif +#if defined (VK_API_VERSION_1_4) + priv->properties14.sType = + VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VULKAN_1_4_PROPERTIES; + priv->properties13.pNext = &priv->properties14; + + priv->features14.sType = + VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VULKAN_1_4_FEATURES; + priv->features13.pNext = &priv->features14; +#endif } static void @@ -490,6 +519,37 @@ } #endif /* defined (VK_API_VERSION_1_3) */ +#if defined (VK_API_VERSION_1_4) +static void +dump_features14 (GstVulkanPhysicalDevice * device, + VkPhysicalDeviceVulkan14Features * features) +{ + /* *INDENT-OFF* */ + DEBUG_BOOL_STRUCT ("support for (1.4)", features, globalPriorityQuery); + DEBUG_BOOL_STRUCT ("support for (1.4)", features, 
shaderSubgroupRotate); + DEBUG_BOOL_STRUCT ("support for (1.4)", features, shaderSubgroupRotateClustered); + DEBUG_BOOL_STRUCT ("support for (1.4)", features, shaderFloatControls2); + DEBUG_BOOL_STRUCT ("support for (1.4)", features, shaderExpectAssume); + DEBUG_BOOL_STRUCT ("support for (1.4)", features, rectangularLines); + DEBUG_BOOL_STRUCT ("support for (1.4)", features, bresenhamLines); + DEBUG_BOOL_STRUCT ("support for (1.4)", features, smoothLines); + DEBUG_BOOL_STRUCT ("support for (1.4)", features, stippledRectangularLines); + DEBUG_BOOL_STRUCT ("support for (1.4)", features, stippledBresenhamLines); + DEBUG_BOOL_STRUCT ("support for (1.4)", features, stippledSmoothLines); + DEBUG_BOOL_STRUCT ("support for (1.4)", features, vertexAttributeInstanceRateDivisor); + DEBUG_BOOL_STRUCT ("support for (1.4)", features, vertexAttributeInstanceRateZeroDivisor); + DEBUG_BOOL_STRUCT ("support for (1.4)", features, indexTypeUint8); + DEBUG_BOOL_STRUCT ("support for (1.4)", features, dynamicRenderingLocalRead); + DEBUG_BOOL_STRUCT ("support for (1.4)", features, maintenance5); + DEBUG_BOOL_STRUCT ("support for (1.4)", features, maintenance6); + DEBUG_BOOL_STRUCT ("support for (1.4)", features, pipelineProtectedAccess); + DEBUG_BOOL_STRUCT ("support for (1.4)", features, pipelineRobustness); + DEBUG_BOOL_STRUCT ("support for (1.4)", features, hostImageCopy); + DEBUG_BOOL_STRUCT ("support for (1.4)", features, pushDescriptor); + /* *INDENT-ON* */ +} +#endif /* defined (VK_API_VERSION_1_4) */ + static void dump_features_extras (GstVulkanPhysicalDevice * device, VkBaseOutStructure * chain) @@ -517,6 +577,25 @@ videoMaintenance1); } #endif +#if defined (VK_KHR_video_maintenance2) + if (chain->sType == + VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VIDEO_MAINTENANCE_2_FEATURES_KHR) { + DEBUG_BOOL_STRUCT ("support for", &priv->videomaintenance2, + videoMaintenance2); + } +#endif +#if defined (VK_KHR_video_decode_vp9) + if (chain->sType == + 
VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VIDEO_DECODE_VP9_FEATURES_KHR) { + DEBUG_BOOL_STRUCT ("support for", &priv->video_decoder_vp9, videoDecodeVP9); + } +#endif +#if defined (VK_KHR_video_encode_av1) + if (chain->sType == + VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VIDEO_ENCODE_AV1_FEATURES_KHR) { + DEBUG_BOOL_STRUCT ("support for", &priv->video_encoder_av1, videoEncodeAV1); + } +#endif } static gboolean @@ -544,11 +623,17 @@ VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VULKAN_1_3_FEATURES) dump_features13 (device, (VkPhysicalDeviceVulkan13Features *) iter); #endif +#if defined (VK_API_VERSION_1_4) + else if (gst_vulkan_physical_device_check_api_version (device, 1, 4, 0) + && iter->sType == + VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VULKAN_1_4_FEATURES) + dump_features14 (device, (VkPhysicalDeviceVulkan14Features *) iter); +#endif else dump_features_extras (device, iter); } } else -#endif +#endif /* VK_API_VERSION_1_2 */ { dump_features10 (device, &device->features); } @@ -894,6 +979,41 @@ } #endif +#if defined (VK_API_VERSION_1_4) +static void +dump_properties14 (GstVulkanPhysicalDevice * device, + VkPhysicalDeviceVulkan14Properties * properties) +{ + /* *INDENT-OFF* */ + DEBUG_UINT32 ("properties (1.4)", properties, lineSubPixelPrecisionBits); + DEBUG_UINT32 ("properties (1.4)", properties, maxVertexAttribDivisor); + DEBUG_BOOL_STRUCT ("properties (1.4)", properties, supportsNonZeroFirstInstance); + DEBUG_UINT32 ("properties (1.4)", properties, maxPushDescriptors); + DEBUG_BOOL_STRUCT ("properties (1.4)", properties, dynamicRenderingLocalReadDepthStencilAttachments); + DEBUG_BOOL_STRUCT ("properties (1.4)", properties, dynamicRenderingLocalReadMultisampledAttachments); + DEBUG_BOOL_STRUCT ("properties (1.4)", properties, earlyFragmentMultisampleCoverageAfterSampleCounting); + DEBUG_BOOL_STRUCT ("properties (1.4)", properties, earlyFragmentSampleMaskTestBeforeSampleCounting); + DEBUG_BOOL_STRUCT ("properties (1.4)", properties, depthStencilSwizzleOneSupport); + DEBUG_BOOL_STRUCT ("properties 
(1.4)", properties, polygonModePointSize); + DEBUG_BOOL_STRUCT ("properties (1.4)", properties, nonStrictSinglePixelWideLinesUseParallelogram); + DEBUG_BOOL_STRUCT ("properties (1.4)", properties, nonStrictWideLinesUseParallelogram); + DEBUG_BOOL_STRUCT ("properties (1.4)", properties, blockTexelViewCompatibleMultipleLayers); + DEBUG_UINT32 ("properties (1.4)", properties, maxCombinedImageSamplerDescriptorCount); + DEBUG_BOOL_STRUCT ("properties (1.4)", properties, fragmentShadingRateClampCombinerInputs); + /* VkPipelineRobustnessBufferBehavior defaultRobustnessStorageBuffers; */ + /* VkPipelineRobustnessBufferBehavior defaultRobustnessUniformBuffers; */ + /* VkPipelineRobustnessBufferBehavior defaultRobustnessVertexInputs; */ + /* VkPipelineRobustnessImageBehavior defaultRobustnessImages; */ + DEBUG_UINT32 ("properties (1.4)", properties, copySrcLayoutCount); + /* VkImageLayout* pCopySrcLayouts); */ + DEBUG_UINT32 ("properties (1.4)", properties, copyDstLayoutCount); + /* VkImageLayout* pCopyDstLayouts); */ + /* uint8_t optimalTilingLayoutUUID[VK_UUID_SIZE]); */ + DEBUG_BOOL_STRUCT ("properties (1.4)", properties, identicalMemoryTypeRequirements); + /* *INDENT-ON* */ +} +#endif + static gboolean physical_device_info (GstVulkanPhysicalDevice * device, GError ** error) { @@ -941,9 +1061,15 @@ VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VULKAN_1_3_PROPERTIES) dump_properties13 (device, (VkPhysicalDeviceVulkan13Properties *) iter); #endif +#if defined (VK_API_VERSION_1_4) + else if (gst_vulkan_physical_device_check_api_version (device, 1, 4, 0) + && iter->sType == + VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VULKAN_1_4_PROPERTIES) + dump_properties14 (device, (VkPhysicalDeviceVulkan14Properties *) iter); +#endif } } -#endif +#endif /* VK_API_VERSION_1_2 */ return TRUE; } @@ -973,7 +1099,22 @@ #if defined (VK_KHR_video_maintenance1) priv->videomaintenance1.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VIDEO_MAINTENANCE_1_FEATURES_KHR; - vk_link_struct (&priv->features13, 
&priv->videomaintenance1); + vk_link_struct (&priv->features12, &priv->videomaintenance1); +#endif +#if defined (VK_KHR_video_maintenance2) + priv->videomaintenance2.sType = + VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VIDEO_MAINTENANCE_2_FEATURES_KHR; + vk_link_struct (&priv->features12, &priv->videomaintenance2); +#endif +#if defined (VK_KHR_video_encode_av1) + priv->video_encoder_av1.sType = + VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VIDEO_ENCODE_AV1_FEATURES_KHR; + vk_link_struct (&priv->features12, &priv->video_encoder_av1); +#endif +#if defined(VK_KHR_video_decode_vp9) + priv->video_decoder_vp9.sType = + VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VIDEO_DECODE_VP9_FEATURES_KHR; + vk_link_struct (&priv->features12, &priv->video_decoder_vp9); #endif } #endif /* VK_API_VERSION_1_2 */ @@ -1081,7 +1222,7 @@ VkQueueFamilyProperties2 *props; int i; void *next = NULL; -#if GST_VULKAN_HAVE_VIDEO_EXTENSIONS +#if defined(VK_KHR_video_queue) VkQueueFamilyVideoPropertiesKHR *queue_family_video_props; VkQueueFamilyQueryResultStatusPropertiesKHR *queue_family_query_props; @@ -1093,7 +1234,7 @@ #endif props = g_new0 (VkQueueFamilyProperties2, device->n_queue_families); for (i = 0; i < device->n_queue_families; i++) { -#if GST_VULKAN_HAVE_VIDEO_EXTENSIONS +#if defined(VK_KHR_video_queue) queue_family_query_props[i].sType = VK_STRUCTURE_TYPE_QUEUE_FAMILY_QUERY_RESULT_STATUS_PROPERTIES_KHR; @@ -1117,7 +1258,7 @@ memcpy (&device->queue_family_props[i], &props[i].queueFamilyProperties, sizeof (device->queue_family_props[i])); -#if GST_VULKAN_HAVE_VIDEO_EXTENSIONS +#if defined(VK_KHR_video_queue) device->queue_family_ops[i].video = queue_family_video_props[i].videoCodecOperations; device->queue_family_ops[i].query_result_status = @@ -1125,7 +1266,7 @@ #endif } g_free (props); -#if GST_VULKAN_HAVE_VIDEO_EXTENSIONS +#if defined(VK_KHR_video_queue) g_free (queue_family_video_props); g_free (queue_family_query_props); #endif @@ -1316,6 +1457,116 @@ return NULL; } +gboolean + 
gst_vulkan_physical_device_has_feature_sampler_ycbrc_conversion + (GstVulkanPhysicalDevice * device) { +#if defined (VK_API_VERSION_1_2) + GstVulkanPhysicalDevicePrivate *priv; + + g_return_val_if_fail (GST_IS_VULKAN_PHYSICAL_DEVICE (device), FALSE); + + priv = GET_PRIV (device); + if (gst_vulkan_physical_device_check_api_version (device, 1, 1, 0)) + return priv->features11.samplerYcbcrConversion; +#endif + return FALSE; +} + +gboolean +gst_vulkan_physical_device_has_feature_synchronization2 (GstVulkanPhysicalDevice + * device) +{ +#if defined (VK_KHR_synchronization2) + GstVulkanPhysicalDevicePrivate *priv; + + g_return_val_if_fail (GST_IS_VULKAN_PHYSICAL_DEVICE (device), FALSE); + + priv = GET_PRIV (device); +# if defined (VK_API_VERSION_1_3) + if (gst_vulkan_physical_device_check_api_version (device, 1, 3, 0)) + return priv->features13.synchronization2; +# endif + return priv->synchronization2.synchronization2; +#endif + return FALSE; +} + +gboolean + gst_vulkan_physical_device_has_feature_timeline_sempahore + (GstVulkanPhysicalDevice * device) { +#if defined (VK_KHR_timeline_semaphore) + GstVulkanPhysicalDevicePrivate *priv; + + g_return_val_if_fail (GST_IS_VULKAN_PHYSICAL_DEVICE (device), FALSE); + + priv = GET_PRIV (device); +# if defined (VK_API_VERSION_1_2) + if (gst_vulkan_physical_device_check_api_version (device, 1, 2, 0)) + return priv->features12.timelineSemaphore; +# endif + return priv->timeline_semaphore.timelineSemaphore; +#endif + return FALSE; +} + +gboolean + gst_vulkan_physical_device_has_feature_video_maintenance1 + (GstVulkanPhysicalDevice * device) { +#if defined (VK_KHR_video_maintenance1) + GstVulkanPhysicalDevicePrivate *priv; + + g_return_val_if_fail (GST_IS_VULKAN_PHYSICAL_DEVICE (device), FALSE); + + priv = GET_PRIV (device); + return priv->videomaintenance1.videoMaintenance1; +#endif + return FALSE; +} + +gboolean + gst_vulkan_physical_device_has_feature_video_maintenance2 + (GstVulkanPhysicalDevice * device) { +#if defined 
(VK_KHR_video_maintenance2) + GstVulkanPhysicalDevicePrivate *priv; + + g_return_val_if_fail (GST_IS_VULKAN_PHYSICAL_DEVICE (device), FALSE); + + priv = GET_PRIV (device); + return priv->videomaintenance2.videoMaintenance2; +#endif + return FALSE; +} + +gboolean +gst_vulkan_physical_device_has_feature_video_decode_vp9 (GstVulkanPhysicalDevice + * device) +{ +#if defined (VK_KHR_video_decode_vp9) + GstVulkanPhysicalDevicePrivate *priv; + + g_return_val_if_fail (GST_IS_VULKAN_PHYSICAL_DEVICE (device), FALSE); + + priv = GET_PRIV (device); + return priv->video_decoder_vp9.videoDecodeVP9; +#endif + return FALSE; +} + +gboolean +gst_vulkan_physical_device_has_feature_video_encode_av1 (GstVulkanPhysicalDevice + * device) +{ +#if defined (VK_KHR_video_encode_av1) + GstVulkanPhysicalDevicePrivate *priv; + + g_return_val_if_fail (GST_IS_VULKAN_PHYSICAL_DEVICE (device), FALSE); + + priv = GET_PRIV (device); + return priv->video_encoder_av1.videoEncodeAV1; +#endif + return FALSE; +} + /** * gst_vulkan_physical_device_get_api_version: * @device: a #GstVulkanPhysicalDevice @@ -1365,3 +1616,247 @@ && gst_vulkan_instance_check_api_version (device->instance, major, minor, patch); } + +static inline void +_copy_format_properties (GstVulkanFormatProperties * props, + VkFormatProperties * props1) +{ + props->optimal_tiling_feat = props1->optimalTilingFeatures; + props->linear_tiling_feat = props1->linearTilingFeatures; + props->buffer_feat = props1->bufferFeatures; +} + +/** + * gst_vulkan_physical_device_get_format_properties: (skip): + * @device: a #GstVulkanPhysicalDevice + * @vk_format: Vulkan color format to get it's properties + * @props: (out caller-allocates): a #GstVulkanFormatProperties + * + * This method will query for @vk_format's properties depending on the supported + * extensions by the driver. 
+ */ +void +gst_vulkan_physical_device_get_format_properties (GstVulkanPhysicalDevice * + device, guint vk_format, GstVulkanFormatProperties * props) +{ +#if defined (VK_KHR_get_physical_device_properties2) + GstVulkanPhysicalDevicePrivate *priv; + VkFormatProperties2KHR prop2 = { + .sType = VK_STRUCTURE_TYPE_FORMAT_PROPERTIES_2_KHR, + .pNext = NULL, + }; +# if defined (VK_KHR_format_feature_flags2) + VkFormatProperties3KHR prop3 = { + .sType = VK_STRUCTURE_TYPE_FORMAT_PROPERTIES_3_KHR, + }; +# endif + gboolean has_flags2 = FALSE; + + g_return_if_fail (GST_IS_VULKAN_PHYSICAL_DEVICE (device)); + + priv = GET_PRIV (device); + + if (!priv->get_format_props_fn) { + if (gst_vulkan_physical_device_check_api_version (device, 1, 3, 0)) { + priv->get_format_props_fn = + gst_vulkan_instance_get_proc_address (device->instance, + "vkGetPhysicalDeviceFormatProperties2"); + } else { + priv->get_format_props_fn = + gst_vulkan_instance_get_proc_address (device->instance, + "vkGetPhysicalDeviceFormatProperties2KHR"); + } + + if (!priv->get_format_props_fn) + goto fallback; + } + +# if defined (VK_KHR_format_feature_flags2) + if (gst_vulkan_physical_device_check_api_version (device, 1, 3, 0) + || gst_vulkan_physical_device_get_extension_info (device, + VK_KHR_FORMAT_FEATURE_FLAGS_2_EXTENSION_NAME, NULL)) { + prop2.pNext = &prop3; + has_flags2 = TRUE; + } +# endif + + priv->get_format_props_fn (device->device, vk_format, &prop2); + + if (!props) + return; + +# if defined (VK_KHR_format_feature_flags2) + if (has_flags2) { + props->optimal_tiling_feat = prop3.optimalTilingFeatures; + props->linear_tiling_feat = prop3.linearTilingFeatures; + props->buffer_feat = prop3.bufferFeatures; + } else +# endif + { + _copy_format_properties (props, &prop2.formatProperties); + } + + return; +#endif + +fallback: + { + VkFormatProperties prop1 = { 0, }; + + vkGetPhysicalDeviceFormatProperties (device->device, vk_format, &prop1); + if (props) + _copy_format_properties (props, &prop1); + } +} + +/** + 
* gst_vulkan_physical_device_get_video_formats: (skip): + * @device: a #GstVulkanPhysicalDevice + * @image_usage: Vulkan format's usage (VkImageUsageFlags) + * @pprofile: a pointer to Vulkan video profile (VkVideoProfileInfoKHR) + * @error: a #GError pointer + * + * It will load vkGetPhysicalDeviceVideoFormatPropertiesKHR() once and it will + * query the format properties defined by @pprofile and @image_usage. + * + * Returns: (transfer full): a #GArray storing the format properties + * (VkVideoFormatPropertiesKHR) or %NULL if @error + */ +GArray * +gst_vulkan_physical_device_get_video_formats (GstVulkanPhysicalDevice * device, + guint64 image_usage, gpointer pprofile, GError ** error) +{ +#if defined(VK_KHR_video_queue) + GstVulkanPhysicalDevicePrivate *priv; + VkResult res; + VkVideoProfileInfoKHR *profile = pprofile; + VkVideoProfileListInfoKHR profiles = { + .sType = VK_STRUCTURE_TYPE_VIDEO_PROFILE_LIST_INFO_KHR, + .profileCount = 1, + .pProfiles = profile, + }; + VkPhysicalDeviceVideoFormatInfoKHR fmt_info = { + .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VIDEO_FORMAT_INFO_KHR, + .pNext = &profiles, + .imageUsage = image_usage, + }; + guint32 n_fmts; + GArray *ret; + + g_return_val_if_fail (GST_IS_VULKAN_PHYSICAL_DEVICE (device), NULL); + g_return_val_if_fail (profile && profile->sType == + VK_STRUCTURE_TYPE_VIDEO_PROFILE_INFO_KHR, NULL); + + priv = GET_PRIV (device); + + if (!priv->get_video_format_props_fn) { + if (!gst_vulkan_physical_device_get_extension_info (device, + VK_KHR_VIDEO_QUEUE_EXTENSION_NAME, NULL)) + goto bail; + + priv->get_video_format_props_fn = + gst_vulkan_instance_get_proc_address (device->instance, + "vkGetPhysicalDeviceVideoFormatPropertiesKHR"); + + g_assert (priv->get_video_format_props_fn); + } + + res = + priv->get_video_format_props_fn (device->device, &fmt_info, &n_fmts, + NULL); + if (gst_vulkan_error_to_g_error (res, error, + "vkGetPhysicalDeviceVideoFormatPropertiesKHR") != VK_SUCCESS) + return NULL; + if (n_fmts == 0) { + 
gst_vulkan_error_to_g_error + (VK_ERROR_VIDEO_PROFILE_FORMAT_NOT_SUPPORTED_KHR, error, + "vkGetPhysicalDeviceVideoFormatPropertiesKHR"); + return NULL; + } + + ret = + g_array_sized_new (FALSE, TRUE, sizeof (VkVideoFormatPropertiesKHR), + n_fmts); + ret = g_array_set_size (ret, n_fmts); + for (int i = 0; i < n_fmts; i++) { + VkVideoFormatPropertiesKHR *props = + &g_array_index (ret, VkVideoFormatPropertiesKHR, i); + props->sType = VK_STRUCTURE_TYPE_VIDEO_FORMAT_PROPERTIES_KHR; + } + res = + priv->get_video_format_props_fn (device->device, &fmt_info, &n_fmts, + (VkVideoFormatPropertiesKHR *) ret->data); + if (gst_vulkan_error_to_g_error (res, error, + "vkGetPhysicalDeviceVideoFormatPropertiesKHR") != VK_SUCCESS) { + g_array_unref (ret); + return NULL; + } + + return ret; + +bail: +#endif + + gst_vulkan_error_to_g_error (VK_ERROR_EXTENSION_NOT_PRESENT, error, + "VK_KHR_video_queue"); + return NULL; +} + +/** + * gst_vulkan_physical_device_get_video_capabilities: (skip): + * @device: a #GstVulkanPhysicalDevice + * @pprofile: a pointer to Vulkan video profile (VkVideoProfileInfoKHR) + * @pcaps_out: (out caller-allocates): Where the Vulkan video capabilities will + * be stored (VkVideoCapabilitiesKHR) + * @error: a #GError pointer + * + * It will query @device for its video capabilities, stored in @pcaps_out given + * the @pprofile. 
+ * + * Returns: %TRUE if no @error; otherwise %FALSE + */ +gboolean +gst_vulkan_physical_device_get_video_capabilities (GstVulkanPhysicalDevice * + device, gpointer pprofile, gpointer pcaps_out, GError ** error) +{ +#if defined(VK_KHR_video_queue) + GstVulkanPhysicalDevicePrivate *priv; + VkResult res; + VkVideoProfileInfoKHR *profile = pprofile; + VkVideoCapabilitiesKHR *caps = pcaps_out; + + g_return_val_if_fail (GST_IS_VULKAN_PHYSICAL_DEVICE (device), FALSE); + g_return_val_if_fail (profile && profile->sType == + VK_STRUCTURE_TYPE_VIDEO_PROFILE_INFO_KHR, FALSE); + g_return_val_if_fail (caps && caps->sType == + VK_STRUCTURE_TYPE_VIDEO_CAPABILITIES_KHR, FALSE); + g_return_val_if_fail (caps && caps->pNext != NULL, FALSE); + + priv = GET_PRIV (device); + + if (!priv->get_video_capabilties_fn) { + if (!gst_vulkan_physical_device_get_extension_info (device, + VK_KHR_VIDEO_QUEUE_EXTENSION_NAME, NULL)) + goto bail; + + priv->get_video_capabilties_fn = + gst_vulkan_instance_get_proc_address (device->instance, + "vkGetPhysicalDeviceVideoCapabilitiesKHR"); + + g_assert (priv->get_video_capabilties_fn); + } + + res = priv->get_video_capabilties_fn (device->device, profile, caps); + if (gst_vulkan_error_to_g_error (res, error, + "vkGetPhysicalDeviceVideoCapabilitiesKHR") != VK_SUCCESS) + return FALSE; + + return TRUE; + +bail: +#endif + + gst_vulkan_error_to_g_error (VK_ERROR_EXTENSION_NOT_PRESENT, error, + "VK_KHR_video_queue"); + return FALSE; +}
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkphysicaldevice.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkphysicaldevice.h
Changed
@@ -116,9 +116,9 @@ GST_VULKAN_API GstVulkanPhysicalDevice * gst_vulkan_physical_device_new (GstVulkanInstance * instance, - guint device_index); + guint device_index) G_GNUC_WARN_UNUSED_RESULT; GST_VULKAN_API -GstVulkanInstance * gst_vulkan_physical_device_get_instance (GstVulkanPhysicalDevice * device); +GstVulkanInstance * gst_vulkan_physical_device_get_instance (GstVulkanPhysicalDevice * device) G_GNUC_WARN_UNUSED_RESULT; GST_VULKAN_API VkPhysicalDevice gst_vulkan_physical_device_get_handle (GstVulkanPhysicalDevice * device);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkqueue.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkqueue.h
Changed
@@ -83,11 +83,11 @@ G_DEFINE_AUTOPTR_CLEANUP_FUNC (GstVulkanQueue, gst_object_unref) GST_VULKAN_API -GstVulkanDevice * gst_vulkan_queue_get_device (GstVulkanQueue * queue); +GstVulkanDevice * gst_vulkan_queue_get_device (GstVulkanQueue * queue) G_GNUC_WARN_UNUSED_RESULT; GST_VULKAN_API GstVulkanCommandPool * gst_vulkan_queue_create_command_pool (GstVulkanQueue * queue, - GError ** error); + GError ** error) G_GNUC_WARN_UNUSED_RESULT; GST_VULKAN_API void gst_vulkan_queue_submit_lock (GstVulkanQueue * queue);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkswapper.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkswapper.h
Changed
@@ -88,7 +88,7 @@ GST_VULKAN_API GstVulkanSwapper * gst_vulkan_swapper_new (GstVulkanDevice * device, - GstVulkanWindow * window); + GstVulkanWindow * window) G_GNUC_WARN_UNUSED_RESULT; GST_VULKAN_API gboolean gst_vulkan_swapper_choose_queue (GstVulkanSwapper * swapper, @@ -96,7 +96,7 @@ GError ** error); GST_VULKAN_API GstCaps * gst_vulkan_swapper_get_supported_caps (GstVulkanSwapper * swapper, - GError ** error); + GError ** error) G_GNUC_WARN_UNUSED_RESULT; GST_VULKAN_API gboolean gst_vulkan_swapper_set_caps (GstVulkanSwapper * swapper, GstCaps * caps,
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvktrash.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvktrash.h
Changed
@@ -101,7 +101,7 @@ GST_VULKAN_API GstVulkanTrash * gst_vulkan_trash_new (GstVulkanFence * fence, GstVulkanTrashNotify notify, - gpointer user_data); + gpointer user_data) G_GNUC_WARN_UNUSED_RESULT; GST_VULKAN_API void gst_vulkan_trash_mini_object_unref (GstVulkanDevice * device, gpointer user_data); @@ -110,7 +110,7 @@ gpointer user_data); GST_VULKAN_API GstVulkanTrash * gst_vulkan_trash_new_free_semaphore (GstVulkanFence * fence, - VkSemaphore semaphore); + VkSemaphore semaphore) G_GNUC_WARN_UNUSED_RESULT; /** * gst_vulkan_trash_new_object_unref: @@ -247,7 +247,7 @@ GstVulkanTrash * gst_vulkan_trash_list_acquire (GstVulkanTrashList * trash_list, GstVulkanFence * fence, GstVulkanTrashNotify notify, - gpointer user_data); + gpointer user_data) G_GNUC_WARN_UNUSED_RESULT; /** * GstVulkanTrashFenceList: * @@ -261,7 +261,7 @@ GST_VULKAN_API G_DECLARE_FINAL_TYPE (GstVulkanTrashFenceList, gst_vulkan_trash_fence_list, GST, VULKAN_TRASH_FENCE_LIST, GstVulkanTrashList); GST_VULKAN_API -GstVulkanTrashList * gst_vulkan_trash_fence_list_new (void); +GstVulkanTrashList * gst_vulkan_trash_fence_list_new (void) G_GNUC_WARN_UNUSED_RESULT; G_END_DECLS
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkvideo-private.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkvideo-private.c
Changed
@@ -38,6 +38,14 @@ .extensionName = VK_STD_VULKAN_VIDEO_CODEC_H265_DECODE_EXTENSION_NAME, .specVersion = VK_STD_VULKAN_VIDEO_CODEC_H265_DECODE_SPEC_VERSION, }, + GST_VK_VIDEO_EXTENSION_DECODE_VP9 = { + .extensionName = VK_STD_VULKAN_VIDEO_CODEC_VP9_DECODE_EXTENSION_NAME, + .specVersion = VK_STD_VULKAN_VIDEO_CODEC_VP9_DECODE_SPEC_VERSION, + }, + GST_VK_VIDEO_EXTENSION_DECODE_AV1 = { + .extensionName = VK_STD_VULKAN_VIDEO_CODEC_AV1_DECODE_EXTENSION_NAME, + .specVersion = VK_STD_VULKAN_VIDEO_CODEC_AV1_DECODE_SPEC_VERSION, + }, GST_VK_VIDEO_EXTENSION_ENCODE_H264 = { .extensionName = VK_STD_VULKAN_VIDEO_CODEC_H264_ENCODE_EXTENSION_NAME, .specVersion = VK_STD_VULKAN_VIDEO_CODEC_H264_ENCODE_SPEC_VERSION, @@ -45,7 +53,11 @@ GST_VK_VIDEO_EXTENSION_ENCODE_H265 = { .extensionName = VK_STD_VULKAN_VIDEO_CODEC_H265_ENCODE_EXTENSION_NAME, .specVersion = VK_STD_VULKAN_VIDEO_CODEC_H265_ENCODE_SPEC_VERSION, - } + }, + GST_VK_VIDEO_EXTENSION_ENCODE_AV1 = { + .extensionName = VK_STD_VULKAN_VIDEO_CODEC_AV1_ENCODE_EXTENSION_NAME, + .specVersion = VK_STD_VULKAN_VIDEO_CODEC_AV1_ENCODE_SPEC_VERSION, + }, }; const VkComponentMapping _vk_identity_component_map = { @@ -57,28 +69,37 @@ /* *INDENT-ON* */ gboolean -gst_vulkan_video_get_vk_functions (GstVulkanInstance * instance, +gst_vulkan_video_get_vk_functions (GstVulkanDevice * device, GstVulkanVideoFunctions * vk_funcs) { gboolean ret = FALSE; + GstVulkanInstance *instance; - g_return_val_if_fail (GST_IS_VULKAN_INSTANCE (instance), FALSE); + g_return_val_if_fail (GST_IS_VULKAN_DEVICE (device), FALSE); g_return_val_if_fail (vk_funcs, FALSE); -#define GET_PROC_ADDRESS_REQUIRED(name) \ + instance = gst_vulkan_device_get_instance (device); + +#define GET_PROC_ADDRESS_REQUIRED(name, type) \ G_STMT_START { \ const char *fname = "vk" G_STRINGIFY (name) "KHR"; \ - vk_funcs->G_PASTE (, name) = gst_vulkan_instance_get_proc_address (instance, fname); \ + vk_funcs->G_PASTE (, name) = G_PASTE(G_PASTE(gst_vulkan_, type), _get_proc_address) (type, fname); 
\ if (!vk_funcs->G_PASTE(, name)) { \ - GST_ERROR_OBJECT (instance, "Failed to find required function %s", fname); \ + GST_ERROR_OBJECT (device, "Failed to find required function %s", fname); \ goto bail; \ } \ } G_STMT_END; - GST_VULKAN_VIDEO_FN_LIST (GET_PROC_ADDRESS_REQUIRED) +#define GET_DEVICE_PROC_ADDRESS_REQUIRED(name) GET_PROC_ADDRESS_REQUIRED(name, device) +#define GET_INSTANCE_PROC_ADDRESS_REQUIRED(name) GET_PROC_ADDRESS_REQUIRED(name, instance) + GST_VULKAN_DEVICE_VIDEO_FN_LIST (GET_DEVICE_PROC_ADDRESS_REQUIRED); + GST_VULKAN_INSTANCE_VIDEO_FN_LIST (GET_INSTANCE_PROC_ADDRESS_REQUIRED); +#undef GET_DEVICE_PROC_ADDRESS_REQUIRED +#undef GET_INSTANCE_PROC_ADDRESS_REQUIRED #undef GET_PROC_ADDRESS_REQUIRED - ret = TRUE; + ret = TRUE; bail: + gst_object_unref (instance); return ret; } @@ -115,11 +136,10 @@ g_return_val_if_fail (vk, FALSE); g_return_val_if_fail (session_create, FALSE); -#if defined(VK_KHR_video_maintenance1) - if (gst_vulkan_video_has_maintenance1 (device)) { + if (gst_vulkan_physical_device_has_feature_video_maintenance1 + (device->physical_device)) { session_create->flags |= VK_VIDEO_SESSION_CREATE_INLINE_QUERIES_BIT_KHR; } -#endif res = vk->CreateVideoSession (device->device, session_create, NULL, &vk_session); @@ -327,22 +347,162 @@ &view_create_info); } +/** + * gst_vulkan_video_try_configuration: + * @device: a #GstVulkanPhysicalDevice + * @profile: the #GstVulkanVideoProfile to configure + * @out_vkcaps: (out caller-allocates): the capabilities given @profile + * @out_caps: (out) (optional) (transfer full): the codec #GstCaps given + * @profile + * @out_formats: (out) (optional) (transfer full): a #GArray with all possible + * raw video formats + * @error: (out) (optional) (transfer full): the resulting error + * + * This function will try @profile, as a configuration in @device, by getting + * its Vulkan capabilities and the output formats that @profile can produce by + * the driver. 
+ * + * If the capabilities are fetched correctly, then @out_caps is generated. If + * the output formats are fetched correctly, then @out_formats is generated. + * + * Return: whether @profile configuration is possible in @device + */ gboolean -gst_vulkan_video_has_maintenance1 (GstVulkanDevice * device) +gst_vulkan_video_try_configuration (GstVulkanPhysicalDevice * device, + GstVulkanVideoProfile * profile, GstVulkanVideoCapabilities * out_vkcaps, + GstCaps ** out_caps, GArray ** out_formats, GError ** error) { -#if defined(VK_KHR_video_maintenance1) - const VkPhysicalDeviceFeatures2 *features; - const VkBaseOutStructure *iter; - - features = gst_vulkan_physical_device_get_features (device->physical_device); - for (iter = (const VkBaseOutStructure *) features; iter; iter = iter->pNext) { - if (iter->sType == - VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VIDEO_MAINTENANCE_1_FEATURES_KHR) { - const VkPhysicalDeviceVideoMaintenance1FeaturesKHR *video_maintenance1 = - (const VkPhysicalDeviceVideoMaintenance1FeaturesKHR *) iter; - return video_maintenance1->videoMaintenance1; - } + VkVideoCodecOperationFlagBitsKHR codec_op; + VkImageUsageFlags image_usage; + GstVulkanVideoCapabilities vkcaps = { + .caps = {.sType = VK_STRUCTURE_TYPE_VIDEO_CAPABILITIES_KHR,}, + }; + GArray *fmts; + gboolean decode, encode; + + g_return_val_if_fail (GST_IS_VULKAN_PHYSICAL_DEVICE (device), FALSE); + g_return_val_if_fail (profile && profile->profile.videoCodecOperation, FALSE); + + codec_op = profile->profile.videoCodecOperation; + + /* VkVideoCodecOperationFlagBitsKHR distinguish decoding and encoding + * operations by the bit position with the following masks */ + decode = GST_VULKAN_VIDEO_CODEC_OPERATION_IS_DECODE (codec_op); + encode = GST_VULKAN_VIDEO_CODEC_OPERATION_IS_ENCODE (codec_op); + g_assert (decode ^ encode); + + /* fill vkcaps & output format usage */ + if (decode) { + gboolean dedicated_dpb; + + vkcaps.caps.pNext = &vkcaps.decoder; + /* *INDENT-OFF* */ + vkcaps.decoder.caps = 
(VkVideoDecodeCapabilitiesKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_DECODE_CAPABILITIES_KHR, + .pNext = &vkcaps.decoder.codec, + }; + /* *INDENT-ON* */ + + dedicated_dpb = ((vkcaps.decoder.caps.flags & + VK_VIDEO_DECODE_CAPABILITY_DPB_AND_OUTPUT_COINCIDE_BIT_KHR) == 0); + + image_usage = VK_IMAGE_USAGE_VIDEO_DECODE_DST_BIT_KHR + | VK_IMAGE_USAGE_TRANSFER_SRC_BIT | VK_IMAGE_USAGE_SAMPLED_BIT; + if (!dedicated_dpb) + image_usage |= VK_IMAGE_USAGE_VIDEO_DECODE_DPB_BIT_KHR; + } else if (encode) { + vkcaps.caps.pNext = &vkcaps.encoder; + /* *INDENT-OFF* */ + vkcaps.encoder.caps = (VkVideoEncodeCapabilitiesKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_CAPABILITIES_KHR, + .pNext = &vkcaps.encoder.codec, + }; + /* *INDENT-ON* */ + + image_usage = VK_IMAGE_USAGE_VIDEO_ENCODE_SRC_BIT_KHR + | VK_IMAGE_USAGE_VIDEO_ENCODE_DPB_BIT_KHR; + } else { + g_assert_not_reached (); } -#endif - return FALSE; + + switch (codec_op) { + case VK_VIDEO_CODEC_OPERATION_DECODE_H264_BIT_KHR: + /* *INDENT-OFF* */ + vkcaps.decoder.codec.h264 = (VkVideoDecodeH264CapabilitiesKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_DECODE_H264_CAPABILITIES_KHR, + }; + /* *INDENT-ON* */ + break; + case VK_VIDEO_CODEC_OPERATION_DECODE_H265_BIT_KHR: + /* *INDENT-OFF* */ + vkcaps.decoder.codec.h265 = (VkVideoDecodeH265CapabilitiesKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_DECODE_H265_CAPABILITIES_KHR, + }; + /* *INDENT-ON* */ + break; + case VK_VIDEO_CODEC_OPERATION_DECODE_VP9_BIT_KHR: + /* *INDENT-OFF* */ + vkcaps.decoder.codec.vp9 = (VkVideoDecodeVP9CapabilitiesKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_DECODE_VP9_CAPABILITIES_KHR, + }; + /* *INDENT-ON* */ + break; + case VK_VIDEO_CODEC_OPERATION_ENCODE_H264_BIT_KHR: + /* *INDENT-OFF* */ + vkcaps.encoder.codec.h264 = (VkVideoEncodeH264CapabilitiesKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_H264_CAPABILITIES_KHR, + }; + /* *INDENT-ON* */ + break; + case VK_VIDEO_CODEC_OPERATION_DECODE_AV1_BIT_KHR: + /* *INDENT-OFF* */ + vkcaps.decoder.codec.av1 = 
(VkVideoDecodeAV1CapabilitiesKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_DECODE_AV1_CAPABILITIES_KHR, + }; + /* *INDENT-ON* */ + break; + case VK_VIDEO_CODEC_OPERATION_ENCODE_H265_BIT_KHR: + /* *INDENT-OFF* */ + vkcaps.encoder.codec.h265 = (VkVideoEncodeH265CapabilitiesKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_H265_CAPABILITIES_KHR, + }; + /* *INDENT-ON* */ + break; + case VK_VIDEO_CODEC_OPERATION_ENCODE_AV1_BIT_KHR: + /* *INDENT-OFF* */ + vkcaps.encoder.codec.av1 = (VkVideoEncodeAV1CapabilitiesKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_AV1_CAPABILITIES_KHR, + }; + /* *INDENT-ON* */ + break; + default: + g_assert_not_reached (); + } + + if (!gst_vulkan_physical_device_get_video_capabilities (device, + &profile->profile, &vkcaps.caps, error)) + return FALSE; + + fmts = + gst_vulkan_physical_device_get_video_formats (device, image_usage, + &profile->profile, error); + if (!fmts || (error && *error)) { + g_clear_pointer (&fmts, g_array_unref); + return FALSE; + } + + if (out_vkcaps) { + *out_vkcaps = vkcaps; + out_vkcaps->caps.pNext = NULL; + } + + if (out_formats) + *out_formats = fmts; + else + g_array_unref (fmts); + + if (out_caps) + *out_caps = gst_vulkan_video_profile_to_caps (profile); + + return TRUE; }
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkvideo-private.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkvideo-private.h
Changed
@@ -22,7 +22,7 @@ #include <gst/gst.h> #include <gst/vulkan/gstvkapi.h> -#include <gst/vulkan/gstvkvideoutils.h> +#include "gstvkvideoutils-private.h" G_BEGIN_DECLS @@ -44,40 +44,47 @@ typedef enum { GST_VK_VIDEO_EXTENSION_DECODE_H264, GST_VK_VIDEO_EXTENSION_DECODE_H265, + GST_VK_VIDEO_EXTENSION_DECODE_VP9, + GST_VK_VIDEO_EXTENSION_DECODE_AV1, GST_VK_VIDEO_EXTENSION_ENCODE_H264, GST_VK_VIDEO_EXTENSION_ENCODE_H265, + GST_VK_VIDEO_EXTENSION_ENCODE_AV1, + GST_VK_VIDEO_EXTENSION_MAX, } GST_VK_VIDEO_EXTENSIONS; -#define GST_VULKAN_VIDEO_FN_LIST(V) \ - V(GetPhysicalDeviceVideoFormatProperties) \ - V(GetPhysicalDeviceVideoCapabilities) \ - V(CreateVideoSession) \ - V(DestroyVideoSession) \ - V(GetVideoSessionMemoryRequirements) \ - V(DestroyVideoSessionParameters) \ - V(UpdateVideoSessionParameters) \ - V(CreateVideoSessionParameters) \ - V(BindVideoSessionMemory) \ - V(CmdPipelineBarrier2) \ - V(CmdBeginVideoCoding) \ - V(CmdControlVideoCoding) \ - V(CmdEndVideoCoding) \ - V(CmdDecodeVideo) \ - V(CmdEncodeVideo) \ - V(GetEncodedVideoSessionParameters) \ +#define GST_VULKAN_DEVICE_VIDEO_FN_LIST(V) \ + V(CreateVideoSession) \ + V(DestroyVideoSession) \ + V(GetVideoSessionMemoryRequirements) \ + V(DestroyVideoSessionParameters) \ + V(UpdateVideoSessionParameters) \ + V(CreateVideoSessionParameters) \ + V(BindVideoSessionMemory) \ + V(CmdBeginVideoCoding) \ + V(CmdControlVideoCoding) \ + V(CmdEndVideoCoding) \ + V(CmdDecodeVideo) \ + V(CmdEncodeVideo) \ + V(GetEncodedVideoSessionParameters) + +#define GST_VULKAN_INSTANCE_VIDEO_FN_LIST(V) \ V(GetPhysicalDeviceVideoEncodeQualityLevelProperties) struct _GstVulkanVideoFunctions { #define DEFINE_FUNCTION(name) G_PASTE(G_PASTE(PFN_vk, name), KHR) name; - GST_VULKAN_VIDEO_FN_LIST (DEFINE_FUNCTION) + GST_VULKAN_DEVICE_VIDEO_FN_LIST (DEFINE_FUNCTION) + GST_VULKAN_INSTANCE_VIDEO_FN_LIST (DEFINE_FUNCTION) #undef DEFINE_FUNCTION }; -extern const VkExtensionProperties _vk_codec_extensions[4]; +#define 
GST_VULKAN_VIDEO_CODEC_OPERATION_IS_DECODE(codec) ((codec & 0x0000ffff) == codec) +#define GST_VULKAN_VIDEO_CODEC_OPERATION_IS_ENCODE(codec) ((codec & 0xffff0000) == codec) + +extern const VkExtensionProperties _vk_codec_extensions[GST_VK_VIDEO_EXTENSION_MAX]; extern const VkComponentMapping _vk_identity_component_map; -gboolean gst_vulkan_video_get_vk_functions (GstVulkanInstance * instance, +gboolean gst_vulkan_video_get_vk_functions (GstVulkanDevice * device, GstVulkanVideoFunctions * vk_funcs); gboolean gst_vulkan_video_session_create (GstVulkanVideoSession * session, @@ -98,6 +105,12 @@ gboolean is_out, GstVulkanHandle * sampler); -gboolean gst_vulkan_video_has_maintenance1 (GstVulkanDevice * device); +GST_VULKAN_API +gboolean gst_vulkan_video_try_configuration (GstVulkanPhysicalDevice * device, + GstVulkanVideoProfile * profile, + GstVulkanVideoCapabilities * out_vkcaps, + GstCaps ** out_caps, + GArray ** out_formats, + GError ** error); G_END_DECLS
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkvideofilter.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkvideofilter.h
Changed
@@ -86,11 +86,11 @@ G_DEFINE_AUTOPTR_CLEANUP_FUNC(GstVulkanVideoFilter, gst_object_unref); GST_VULKAN_API -GstVulkanInstance * gst_vulkan_video_filter_get_instance (GstVulkanVideoFilter * filter); +GstVulkanInstance * gst_vulkan_video_filter_get_instance (GstVulkanVideoFilter * filter) G_GNUC_WARN_UNUSED_RESULT; GST_VULKAN_API -GstVulkanDevice * gst_vulkan_video_filter_get_device (GstVulkanVideoFilter * filter); +GstVulkanDevice * gst_vulkan_video_filter_get_device (GstVulkanVideoFilter * filter) G_GNUC_WARN_UNUSED_RESULT; GST_VULKAN_API -GstVulkanQueue * gst_vulkan_video_filter_get_queue (GstVulkanVideoFilter * filter); +GstVulkanQueue * gst_vulkan_video_filter_get_queue (GstVulkanVideoFilter * filter) G_GNUC_WARN_UNUSED_RESULT; G_END_DECLS
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkvideoutils-private.c
Added
@@ -0,0 +1,651 @@ +/* + * GStreamer + * Copyright (C) 2023 Igalia, S.L. + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gstvkvideoutils-private.h" + +/* *INDENT-OFF* */ +static const struct { + GstVulkanVideoOperation video_operation; + VkVideoCodecOperationFlagBitsKHR codec; + const char *mime; + VkStructureType stype; +} video_codecs_map[] = { + { GST_VULKAN_VIDEO_OPERATION_DECODE, VK_VIDEO_CODEC_OPERATION_DECODE_H264_BIT_KHR, "video/x-h264", + VK_STRUCTURE_TYPE_VIDEO_DECODE_H264_PROFILE_INFO_KHR }, + { GST_VULKAN_VIDEO_OPERATION_DECODE, VK_VIDEO_CODEC_OPERATION_DECODE_H265_BIT_KHR, "video/x-h265", + VK_STRUCTURE_TYPE_VIDEO_DECODE_H265_PROFILE_INFO_KHR }, + { GST_VULKAN_VIDEO_OPERATION_DECODE, VK_VIDEO_CODEC_OPERATION_DECODE_VP9_BIT_KHR, "video/x-vp9", + VK_STRUCTURE_TYPE_VIDEO_DECODE_VP9_PROFILE_INFO_KHR }, + { GST_VULKAN_VIDEO_OPERATION_ENCODE, VK_VIDEO_CODEC_OPERATION_ENCODE_H264_BIT_KHR, "video/x-h264", + VK_STRUCTURE_TYPE_VIDEO_ENCODE_H264_PROFILE_INFO_KHR }, + { GST_VULKAN_VIDEO_OPERATION_ENCODE, VK_VIDEO_CODEC_OPERATION_ENCODE_H265_BIT_KHR, "video/x-h265", + VK_STRUCTURE_TYPE_VIDEO_ENCODE_H265_PROFILE_INFO_KHR }, + { GST_VULKAN_VIDEO_OPERATION_ENCODE, 
VK_VIDEO_CODEC_OPERATION_ENCODE_AV1_BIT_KHR, "video/x-av1", + VK_STRUCTURE_TYPE_VIDEO_ENCODE_AV1_PROFILE_INFO_KHR }, + { GST_VULKAN_VIDEO_OPERATION_DECODE, VK_VIDEO_CODEC_OPERATION_DECODE_AV1_BIT_KHR, "video/x-av1", + VK_STRUCTURE_TYPE_VIDEO_DECODE_AV1_PROFILE_INFO_KHR }, +}; + +#define VK_VIDEO_CHROMA_SUBSAMPLING_ANY \ + VK_VIDEO_CHROMA_SUBSAMPLING_420_BIT_KHR \ + | VK_VIDEO_CHROMA_SUBSAMPLING_422_BIT_KHR \ + | VK_VIDEO_CHROMA_SUBSAMPLING_444_BIT_KHR + +static const struct { + VkVideoChromaSubsamplingFlagBitsKHR chroma; + const char *chroma_str; +} video_chroma_map = { + { VK_VIDEO_CHROMA_SUBSAMPLING_420_BIT_KHR, "4:2:0" }, + { VK_VIDEO_CHROMA_SUBSAMPLING_422_BIT_KHR, "4:2:2" }, + { VK_VIDEO_CHROMA_SUBSAMPLING_444_BIT_KHR, "4:4:4" }, +}; + +#define VK_VIDEO_COMPONENT_BIT_DEPTH_ANY \ + VK_VIDEO_COMPONENT_BIT_DEPTH_8_BIT_KHR \ + | VK_VIDEO_COMPONENT_BIT_DEPTH_10_BIT_KHR \ + | VK_VIDEO_COMPONENT_BIT_DEPTH_12_BIT_KHR + +static const struct { + VkVideoComponentBitDepthFlagBitsKHR bitdepth; + int bit_depth; +} bit_depth_map = { + { VK_VIDEO_COMPONENT_BIT_DEPTH_8_BIT_KHR, 8 }, + { VK_VIDEO_COMPONENT_BIT_DEPTH_10_BIT_KHR, 10 }, + { VK_VIDEO_COMPONENT_BIT_DEPTH_12_BIT_KHR, 12 }, +}; + +static const struct { + StdVideoH264ProfileIdc vk_profile; + const char *profile_str; +} h264_profile_map = { + { STD_VIDEO_H264_PROFILE_IDC_BASELINE, "constrained-baseline" }, + { STD_VIDEO_H264_PROFILE_IDC_MAIN, "main" }, + { STD_VIDEO_H264_PROFILE_IDC_HIGH, "high" }, + { STD_VIDEO_H264_PROFILE_IDC_HIGH_444_PREDICTIVE, "high-4:4:4" }, +}; + +static const struct { + VkVideoDecodeH264PictureLayoutFlagBitsKHR layout; + const char *layout_str; +} h264_layout_map = { + { VK_VIDEO_DECODE_H264_PICTURE_LAYOUT_PROGRESSIVE_KHR, "progressive" }, + { VK_VIDEO_DECODE_H264_PICTURE_LAYOUT_INTERLACED_INTERLEAVED_LINES_BIT_KHR, + "interleaved" }, + { VK_VIDEO_DECODE_H264_PICTURE_LAYOUT_INTERLACED_SEPARATE_PLANES_BIT_KHR, + "fields" }, +}; + +static const struct { + StdVideoH265ProfileIdc vk_profile; + 
VkVideoChromaSubsamplingFlagsKHR subsampling; + VkVideoComponentBitDepthFlagsKHR depth; + const char *profile_str; +} h265_profile_map = { + { STD_VIDEO_H265_PROFILE_IDC_MAIN, VK_VIDEO_CHROMA_SUBSAMPLING_ANY, VK_VIDEO_COMPONENT_BIT_DEPTH_ANY, "main" }, + { STD_VIDEO_H265_PROFILE_IDC_MAIN_10, VK_VIDEO_CHROMA_SUBSAMPLING_ANY, VK_VIDEO_COMPONENT_BIT_DEPTH_ANY, "main-10" }, + { STD_VIDEO_H265_PROFILE_IDC_MAIN_STILL_PICTURE, VK_VIDEO_CHROMA_SUBSAMPLING_420_BIT_KHR, VK_VIDEO_COMPONENT_BIT_DEPTH_ANY, "main-still-picture" }, + { STD_VIDEO_H265_PROFILE_IDC_MAIN_STILL_PICTURE, VK_VIDEO_CHROMA_SUBSAMPLING_422_BIT_KHR, VK_VIDEO_COMPONENT_BIT_DEPTH_ANY, "main-still-picture" }, + { STD_VIDEO_H265_PROFILE_IDC_MAIN_STILL_PICTURE, VK_VIDEO_CHROMA_SUBSAMPLING_444_BIT_KHR, VK_VIDEO_COMPONENT_BIT_DEPTH_ANY, "main-444-still-picture" }, + /* { STD_VIDEO_H265_PROFILE_IDC_FORMAT_RANGE_EXTENSIONS, VK_VIDEO_CHROMA_SUBSAMPLING_420_BIT_KHR, VK_VIDEO_COMPONENT_BIT_DEPTH_8_BIT_KHR, "XXX" }, */ + /* { STD_VIDEO_H265_PROFILE_IDC_FORMAT_RANGE_EXTENSIONS, VK_VIDEO_CHROMA_SUBSAMPLING_420_BIT_KHR, VK_VIDEO_COMPONENT_BIT_DEPTH_10_BIT_KHR, "XXX" }, */ + { STD_VIDEO_H265_PROFILE_IDC_FORMAT_RANGE_EXTENSIONS, VK_VIDEO_CHROMA_SUBSAMPLING_420_BIT_KHR, VK_VIDEO_COMPONENT_BIT_DEPTH_12_BIT_KHR, "main-12" }, + { STD_VIDEO_H265_PROFILE_IDC_FORMAT_RANGE_EXTENSIONS, VK_VIDEO_CHROMA_SUBSAMPLING_422_BIT_KHR, VK_VIDEO_COMPONENT_BIT_DEPTH_8_BIT_KHR, "main-422" }, + { STD_VIDEO_H265_PROFILE_IDC_FORMAT_RANGE_EXTENSIONS, VK_VIDEO_CHROMA_SUBSAMPLING_422_BIT_KHR, VK_VIDEO_COMPONENT_BIT_DEPTH_10_BIT_KHR, "main-422-10" }, + { STD_VIDEO_H265_PROFILE_IDC_FORMAT_RANGE_EXTENSIONS, VK_VIDEO_CHROMA_SUBSAMPLING_422_BIT_KHR, VK_VIDEO_COMPONENT_BIT_DEPTH_12_BIT_KHR, "main-422-12" }, + { STD_VIDEO_H265_PROFILE_IDC_FORMAT_RANGE_EXTENSIONS, VK_VIDEO_CHROMA_SUBSAMPLING_444_BIT_KHR, VK_VIDEO_COMPONENT_BIT_DEPTH_8_BIT_KHR, "main-444" }, + { STD_VIDEO_H265_PROFILE_IDC_FORMAT_RANGE_EXTENSIONS, VK_VIDEO_CHROMA_SUBSAMPLING_444_BIT_KHR, 
VK_VIDEO_COMPONENT_BIT_DEPTH_10_BIT_KHR, "main-444-10" }, + { STD_VIDEO_H265_PROFILE_IDC_FORMAT_RANGE_EXTENSIONS, VK_VIDEO_CHROMA_SUBSAMPLING_444_BIT_KHR, VK_VIDEO_COMPONENT_BIT_DEPTH_12_BIT_KHR, "main-444-12" }, + { STD_VIDEO_H265_PROFILE_IDC_SCC_EXTENSIONS, VK_VIDEO_CHROMA_SUBSAMPLING_420_BIT_KHR, VK_VIDEO_COMPONENT_BIT_DEPTH_8_BIT_KHR, "screen-extended-main" }, + { STD_VIDEO_H265_PROFILE_IDC_SCC_EXTENSIONS, VK_VIDEO_CHROMA_SUBSAMPLING_420_BIT_KHR, VK_VIDEO_COMPONENT_BIT_DEPTH_10_BIT_KHR, "screen-extended-main-10" }, + /* { STD_VIDEO_H265_PROFILE_IDC_SCC_EXTENSIONS, VK_VIDEO_CHROMA_SUBSAMPLING_420_BIT_KHR, VK_VIDEO_COMPONENT_BIT_DEPTH_12_BIT_KHR, "XXX" }, */ + /* { STD_VIDEO_H265_PROFILE_IDC_SCC_EXTENSIONS, VK_VIDEO_CHROMA_SUBSAMPLING_422_BIT_KHR, VK_VIDEO_COMPONENT_BIT_DEPTH_8_BIT_KHR, "XXX" }, */ + /* { STD_VIDEO_H265_PROFILE_IDC_SCC_EXTENSIONS, VK_VIDEO_CHROMA_SUBSAMPLING_422_BIT_KHR, VK_VIDEO_COMPONENT_BIT_DEPTH_10_BIT_KHR, "XXX" }, */ + /* { STD_VIDEO_H265_PROFILE_IDC_SCC_EXTENSIONS, VK_VIDEO_CHROMA_SUBSAMPLING_422_BIT_KHR, VK_VIDEO_COMPONENT_BIT_DEPTH_12_BIT_KHR, "XXX" }, */ + { STD_VIDEO_H265_PROFILE_IDC_SCC_EXTENSIONS, VK_VIDEO_CHROMA_SUBSAMPLING_444_BIT_KHR, VK_VIDEO_COMPONENT_BIT_DEPTH_8_BIT_KHR, "screen-extended-main-444" }, + { STD_VIDEO_H265_PROFILE_IDC_SCC_EXTENSIONS, VK_VIDEO_CHROMA_SUBSAMPLING_444_BIT_KHR, VK_VIDEO_COMPONENT_BIT_DEPTH_10_BIT_KHR, "screen-extended-main-444-10" }, + /* { STD_VIDEO_H265_PROFILE_IDC_SCC_EXTENSIONS, VK_VIDEO_CHROMA_SUBSAMPLING_444_BIT_KHR, VK_VIDEO_COMPONENT_BIT_DEPTH_12_BIT_KHR, "XXX" }, */ +}; + +static const struct { + StdVideoAV1Profile vk_profile; + const char *profile_str; +} av1_profile_map = { + { STD_VIDEO_AV1_PROFILE_MAIN, "main" }, + { STD_VIDEO_AV1_PROFILE_HIGH, "high" }, + { STD_VIDEO_AV1_PROFILE_PROFESSIONAL, "professional" }, +}; +static const struct { + StdVideoVP9Profile vk_profile; + const char *profile_str; +} vp9_profile_map = { + { STD_VIDEO_VP9_PROFILE_0, "0" }, + { STD_VIDEO_VP9_PROFILE_1, "1" }, 
+ { STD_VIDEO_VP9_PROFILE_2, "2" }, + { STD_VIDEO_VP9_PROFILE_3, "3" }, +}; +/* *INDENT-ON* */ + +typedef enum _GstVulkanVideoProfileFeature +{ + NO = 0, + YES = 1, + UNDEFINED = -1, +} GstVulkanVideoProfileFeature; + +static inline gboolean +h265_subsampling_match (const VkVideoProfileInfoKHR * profile, int i) +{ + return (((profile->chromaSubsampling & h265_profile_mapi.subsampling) == + profile->chromaSubsampling) + && ((profile->chromaBitDepth & h265_profile_mapi.depth) == + profile->chromaBitDepth) + && ((profile->lumaBitDepth & h265_profile_mapi.depth) == + profile->lumaBitDepth)); +} + +/** + * gst_vulkan_video_profile_to_caps: (skip) + * @profile: #GstVulkanVideoProfile to convert into a #GstCaps + * + * Returns: (transfer full): a #GstCaps from @profile + * + * Since: 1.24 + */ +GstCaps * +gst_vulkan_video_profile_to_caps (const GstVulkanVideoProfile * profile) +{ + const char *mime = NULL, *chroma_sub = NULL; + const char *profile_str = NULL, *layout = NULL; + int i, luma = 0, chroma = 0; + GstVulkanVideoProfileFeature film_grain = UNDEFINED; + GstCaps *caps; + + g_return_val_if_fail (profile + && profile->profile.sType == VK_STRUCTURE_TYPE_VIDEO_PROFILE_INFO_KHR, + NULL); + + for (i = 0; i < G_N_ELEMENTS (video_codecs_map); i++) { + if (profile->profile.videoCodecOperation == video_codecs_mapi.codec) { + mime = video_codecs_mapi.mime; + + switch (profile->profile.videoCodecOperation) { + case VK_VIDEO_CODEC_OPERATION_DECODE_H264_BIT_KHR: + if (profile->codec.h264dec.sType == video_codecs_mapi.stype) { + int j; + for (j = 0; j < G_N_ELEMENTS (h264_profile_map); j++) { + if (profile->codec.h264dec.stdProfileIdc + == h264_profile_mapj.vk_profile) { + profile_str = h264_profile_mapj.profile_str; + break; + } + } + for (j = 0; j < G_N_ELEMENTS (h264_layout_map); j++) { + if (profile->codec.h264dec.pictureLayout + == h264_layout_mapj.layout) { + layout = h264_layout_mapj.layout_str; + break; + } + } + } + break; + case 
VK_VIDEO_CODEC_OPERATION_DECODE_H265_BIT_KHR: + if (profile->codec.h265dec.sType == video_codecs_mapi.stype) { + int j; + for (j = 0; j < G_N_ELEMENTS (h265_profile_map); j++) { + if ((profile->codec.h265dec.stdProfileIdc + == h265_profile_mapj.vk_profile) + && h265_subsampling_match (&profile->profile, j)) { + profile_str = h265_profile_mapj.profile_str; + break; + } + } + } + break; + case VK_VIDEO_CODEC_OPERATION_DECODE_VP9_BIT_KHR: + if (profile->codec.vp9dec.sType == video_codecs_mapi.stype) { + int j; + for (j = 0; j < G_N_ELEMENTS (vp9_profile_map); j++) { + if (profile->codec.vp9dec.stdProfile + == vp9_profile_mapj.vk_profile) { + profile_str = vp9_profile_mapj.profile_str; + break; + } + } + } + break; + case VK_VIDEO_CODEC_OPERATION_DECODE_AV1_BIT_KHR: + if (profile->codec.av1dec.sType == video_codecs_mapi.stype) { + int j; + for (j = 0; j < G_N_ELEMENTS (av1_profile_map); j++) { + if (profile->codec.av1dec.stdProfile + == av1_profile_mapj.vk_profile) { + profile_str = av1_profile_mapj.profile_str; + film_grain = profile->codec.av1dec.filmGrainSupport == VK_TRUE ? 
+ YES : NO; + break; + } + } + } + break; + case VK_VIDEO_CODEC_OPERATION_ENCODE_H264_BIT_KHR: + if (profile->codec.h264enc.sType == video_codecs_mapi.stype) { + int j; + for (j = 0; j < G_N_ELEMENTS (h264_profile_map); j++) { + if (profile->codec.h264enc.stdProfileIdc + == h264_profile_mapj.vk_profile) { + profile_str = h264_profile_mapj.profile_str; + break; + } + } + } + break; + case VK_VIDEO_CODEC_OPERATION_ENCODE_H265_BIT_KHR: + if (profile->codec.h265enc.sType == video_codecs_mapi.stype) { + int j; + for (j = 0; j < G_N_ELEMENTS (h265_profile_map); j++) { + if ((profile->codec.h265enc.stdProfileIdc + == h265_profile_mapj.vk_profile) + && h265_subsampling_match (&profile->profile, j)) { + profile_str = h265_profile_mapj.profile_str; + break; + } + } + } + break; + case VK_VIDEO_CODEC_OPERATION_ENCODE_AV1_BIT_KHR: + if (profile->codec.av1enc.sType == video_codecs_mapi.stype) { + int j; + for (j = 0; j < G_N_ELEMENTS (av1_profile_map); j++) { + if (profile->codec.av1enc.stdProfile + == av1_profile_mapj.vk_profile) { + profile_str = av1_profile_mapj.profile_str; + break; + } + } + } + break; + default: + break; + } + + break; + } + } + if (i == G_N_ELEMENTS (video_codecs_map)) + return NULL; + + if (!profile_str) + return NULL; + + for (i = 0; i < G_N_ELEMENTS (video_chroma_map); i++) { + if (profile->profile.chromaSubsampling == video_chroma_mapi.chroma) { + chroma_sub = video_chroma_mapi.chroma_str; + break; + } + } + if (i == G_N_ELEMENTS (video_chroma_map)) + return NULL; + + for (i = 0; i < G_N_ELEMENTS (bit_depth_map); i++) { + if (profile->profile.chromaBitDepth == bit_depth_mapi.bitdepth) { + chroma = bit_depth_mapi.bit_depth; + break; + } + } + if (i == G_N_ELEMENTS (bit_depth_map)) + return NULL; + + for (i = 0; i < G_N_ELEMENTS (bit_depth_map); i++) { + if (profile->profile.lumaBitDepth == bit_depth_mapi.bitdepth) { + luma = bit_depth_mapi.bit_depth; + break; + } + } + if (i == G_N_ELEMENTS (bit_depth_map)) + return NULL; + + caps = 
gst_caps_new_simple (mime, "profile", G_TYPE_STRING, profile_str, + "chroma-format", G_TYPE_STRING, chroma_sub, "bit-depth-luma", G_TYPE_UINT, + luma, "bit-depth-chroma", G_TYPE_UINT, chroma, NULL); + + if (layout) + gst_caps_set_simple (caps, "interlace-mode", G_TYPE_STRING, layout, NULL); + + if (film_grain != UNDEFINED) { + gst_caps_set_simple (caps, "film-grain", G_TYPE_BOOLEAN, + film_grain == YES ? TRUE : FALSE, NULL); + } + + return caps; +} + +/** + * gst_vulkan_video_profile_from_caps: (skip) + * @profile: (out): the output profile + * @caps: a #GstCaps to parse + * @video_operation: a supported video operation + * + * Returns: %TRUE if @caps was parsed correctly, otherwise %FALSE + * + * Since: 1.24 + */ +gboolean +gst_vulkan_video_profile_from_caps (GstVulkanVideoProfile * profile, + GstCaps * caps, GstVulkanVideoOperation video_operation) +{ + const GstStructure *structure; + const gchar *mime, *profile_str = NULL, *layout = NULL; + gint i; + + g_return_val_if_fail (GST_IS_CAPS (caps), FALSE); + g_return_val_if_fail (profile, FALSE); + g_return_val_if_fail (video_operation < GST_VULKAN_VIDEO_OPERATION_UNKNOWN, + FALSE); + + structure = gst_caps_get_structure (caps, 0); + + profile->usage.decode.sType = VK_STRUCTURE_TYPE_VIDEO_DECODE_USAGE_INFO_KHR; + profile->usage.decode.videoUsageHints = VK_VIDEO_DECODE_USAGE_DEFAULT_KHR; + + profile->profile.sType = VK_STRUCTURE_TYPE_VIDEO_PROFILE_INFO_KHR; + profile->profile.pNext = &profile->usage; + + mime = gst_structure_get_name (structure); + + { + gint i, luma, chroma; + const gchar *chroma_sub; + + chroma_sub = gst_structure_get_string (structure, "chroma-format"); + if (!chroma_sub) + return FALSE; + if (!gst_structure_get (structure, "bit-depth-luma", G_TYPE_UINT, &luma, + "bit-depth-chroma", G_TYPE_UINT, &chroma, NULL)) + return FALSE; + + for (i = 0; i < G_N_ELEMENTS (video_chroma_map); i++) { + if (g_strcmp0 (chroma_sub, video_chroma_mapi.chroma_str) == 0) { + profile->profile.chromaSubsampling = 
video_chroma_mapi.chroma; + break; + } + } + if (i == G_N_ELEMENTS (video_chroma_map)) + return FALSE; + + for (i = 0; i < G_N_ELEMENTS (bit_depth_map); i++) { + if (luma == bit_depth_mapi.bit_depth) { + profile->profile.lumaBitDepth = bit_depth_mapi.bitdepth; + break; + } + } + if (i == G_N_ELEMENTS (bit_depth_map)) + return FALSE; + + for (i = 0; i < G_N_ELEMENTS (bit_depth_map); i++) { + if (chroma == bit_depth_mapi.bit_depth) { + profile->profile.chromaBitDepth = bit_depth_mapi.bitdepth; + break; + } + } + if (i == G_N_ELEMENTS (bit_depth_map)) + return FALSE; + } + + for (i = 0; i < G_N_ELEMENTS (video_codecs_map); i++) { + if ((video_codecs_mapi.video_operation == video_operation) + && (g_strcmp0 (video_codecs_mapi.mime, mime) == 0)) { + profile->profile.videoCodecOperation = video_codecs_mapi.codec; + + switch (profile->profile.videoCodecOperation) { + case VK_VIDEO_CODEC_OPERATION_DECODE_H264_BIT_KHR:{ + int j; + + profile->codec.h264dec.sType = video_codecs_mapi.stype; + profile->codec.h264dec.stdProfileIdc = + STD_VIDEO_H264_PROFILE_IDC_INVALID; + profile->codec.h264dec.pictureLayout = + VK_VIDEO_DECODE_H264_PICTURE_LAYOUT_FLAG_BITS_MAX_ENUM_KHR; + profile->usage.decode.pNext = &profile->codec; + + profile_str = gst_structure_get_string (structure, "profile"); + for (j = 0; profile_str && j < G_N_ELEMENTS (h264_profile_map); j++) { + if (g_strcmp0 (profile_str, h264_profile_mapj.profile_str) == 0) { + profile->codec.h264dec.stdProfileIdc = + h264_profile_mapj.vk_profile; + break; + } + } + layout = gst_structure_get_string (structure, "interlace-mode"); + for (j = 0; layout && j < G_N_ELEMENTS (h264_layout_map); j++) { + if (g_strcmp0 (layout, h264_layout_mapj.layout_str) == 0) { + profile->codec.h264dec.pictureLayout = h264_layout_mapj.layout; + break; + } + } + break; + } + case VK_VIDEO_CODEC_OPERATION_DECODE_H265_BIT_KHR:{ + int j; + + profile->codec.h265dec.sType = video_codecs_mapi.stype; + profile->codec.h265dec.stdProfileIdc = + 
STD_VIDEO_H265_PROFILE_IDC_INVALID; + profile->usage.decode.pNext = &profile->codec; + + profile_str = gst_structure_get_string (structure, "profile"); + + for (j = 0; profile_str && j < G_N_ELEMENTS (h265_profile_map); j++) { + if ((g_strcmp0 (profile_str, h265_profile_mapj.profile_str) == 0) + && h265_subsampling_match (&profile->profile, j)) { + profile->codec.h265dec.stdProfileIdc = + h265_profile_mapj.vk_profile; + break; + } + } + break; + } + case VK_VIDEO_CODEC_OPERATION_DECODE_VP9_BIT_KHR:{ + int j; + + profile->codec.vp9dec.sType = video_codecs_mapi.stype; + profile->codec.vp9dec.stdProfile = STD_VIDEO_VP9_PROFILE_INVALID; + profile->usage.decode.pNext = &profile->codec; + + profile_str = gst_structure_get_string (structure, "profile"); + for (j = 0; profile_str && j < G_N_ELEMENTS (vp9_profile_map); j++) { + if (g_strcmp0 (profile_str, vp9_profile_mapj.profile_str) == 0) { + profile->codec.vp9dec.stdProfile = vp9_profile_mapj.vk_profile; + break; + } + } + break; + } + case VK_VIDEO_CODEC_OPERATION_DECODE_AV1_BIT_KHR:{ + int j; + gboolean film_grain; + + profile->codec.av1dec.sType = video_codecs_mapi.stype; + profile->codec.av1dec.stdProfile = STD_VIDEO_AV1_PROFILE_INVALID; + profile->usage.decode.pNext = &profile->codec; + + profile_str = gst_structure_get_string (structure, "profile"); + for (j = 0; profile_str && j < G_N_ELEMENTS (av1_profile_map); j++) { + if (g_strcmp0 (profile_str, av1_profile_mapj.profile_str) == 0) { + profile->codec.av1dec.stdProfile = av1_profile_mapj.vk_profile; + break; + } + } + + if (gst_structure_get_boolean (structure, "film-grain", &film_grain)) { + profile->codec.av1dec.filmGrainSupport = + film_grain ? 
VK_TRUE : VK_FALSE; + } + + break; + } + case VK_VIDEO_CODEC_OPERATION_ENCODE_H264_BIT_KHR:{ + int j; + + profile->codec.h264enc.sType = video_codecs_mapi.stype; + profile->codec.h264enc.stdProfileIdc = + STD_VIDEO_H264_PROFILE_IDC_INVALID; + profile->profile.pNext = &profile->codec; + + profile_str = gst_structure_get_string (structure, "profile"); + for (j = 0; profile_str && j < G_N_ELEMENTS (h264_profile_map); j++) { + if (g_strcmp0 (profile_str, h264_profile_mapj.profile_str) == 0) { + profile->codec.h264enc.stdProfileIdc = + h264_profile_mapj.vk_profile; + break; + } + } + break; + } + case VK_VIDEO_CODEC_OPERATION_ENCODE_H265_BIT_KHR:{ + int j; + + profile->codec.h265enc.sType = video_codecs_mapi.stype; + profile->codec.h265enc.stdProfileIdc = + STD_VIDEO_H265_PROFILE_IDC_INVALID; + profile->profile.pNext = &profile->codec; + + profile_str = gst_structure_get_string (structure, "profile"); + for (j = 0; profile_str && j < G_N_ELEMENTS (h265_profile_map); j++) { + if ((g_strcmp0 (profile_str, h265_profile_mapj.profile_str) == 0) + && h265_subsampling_match (&profile->profile, j)) { + profile->codec.h265enc.stdProfileIdc = + h265_profile_mapj.vk_profile; + break; + } + } + break; + } + case VK_VIDEO_CODEC_OPERATION_ENCODE_AV1_BIT_KHR:{ + int j; + + profile->codec.av1enc.sType = video_codecs_mapi.stype; + profile->codec.av1enc.stdProfile = STD_VIDEO_AV1_PROFILE_INVALID; + profile->profile.pNext = &profile->codec; + + profile_str = gst_structure_get_string (structure, "profile"); + for (j = 0; profile_str && j < G_N_ELEMENTS (av1_profile_map); j++) { + if (g_strcmp0 (profile_str, av1_profile_mapj.profile_str) == 0) { + profile->codec.av1enc.stdProfile = av1_profile_mapj.vk_profile; + break; + } + } + break; + } + default: + profile->usage.decode.pNext = NULL; + break; + } + + break; + } + } + if (i == G_N_ELEMENTS (video_codecs_map)) + return FALSE; + + return TRUE; +} + +/** + * gst_vulkan_video_profile_is_valid: (skip) + * @profile: the output profile + * 
@codec: VkVideoCodecOperationFlagBitsKHR described by @profile + * + * Returns: %TRUE if @profile is correct and matches with @codec + * + * Since: 1.24 + */ +gboolean +gst_vulkan_video_profile_is_valid (GstVulkanVideoProfile * profile, guint codec) +{ + int i; + VkVideoCodecOperationFlagBitsKHR op = codec; + VkStructureType stype = VK_STRUCTURE_TYPE_MAX_ENUM; + + if (op == VK_VIDEO_CODEC_OPERATION_NONE_KHR) + return FALSE; + + if (profile->profile.videoCodecOperation != op) + return FALSE; + + for (i = 0; i < G_N_ELEMENTS (video_codecs_map); i++) { + if (op == video_codecs_mapi.codec) { + stype = video_codecs_mapi.stype; + break; + } + } + + if (stype == VK_STRUCTURE_TYPE_MAX_ENUM) + return FALSE; + + if (profile->codec.base.sType != stype) + return FALSE; + + return TRUE; +} + +/** + * gst_vulkan_video_profile_is_equal: + * @a: a #GstVulkanVideoProfile + * @b: another #GstVulkanVideoProfile + * + * Returns: whether @a and @b contains the same information. + */ +gboolean +gst_vulkan_video_profile_is_equal (const GstVulkanVideoProfile * a, + const GstVulkanVideoProfile * b) +{ + gboolean profile; + + g_return_val_if_fail (a && b, FALSE); + + profile = ((a->profile.videoCodecOperation == b->profile.videoCodecOperation) + && (a->profile.chromaSubsampling == b->profile.chromaSubsampling) + && (a->profile.chromaBitDepth == b->profile.chromaBitDepth) + && (a->profile.lumaBitDepth == b->profile.lumaBitDepth) + && (a->codec.base.sType == b->codec.base.sType)); + + if (!profile) + return FALSE; + + switch (a->profile.videoCodecOperation) { + case VK_VIDEO_CODEC_OPERATION_DECODE_H264_BIT_KHR: + return ((a->codec.h264dec.stdProfileIdc == b->codec.h264dec.stdProfileIdc) + && a->codec.h264dec.pictureLayout == b->codec.h264dec.pictureLayout); + case VK_VIDEO_CODEC_OPERATION_DECODE_H265_BIT_KHR: + return (a->codec.h265dec.stdProfileIdc == b->codec.h265dec.stdProfileIdc); + case VK_VIDEO_CODEC_OPERATION_DECODE_VP9_BIT_KHR: + return (a->codec.vp9dec.stdProfile == 
b->codec.vp9dec.stdProfile); + case VK_VIDEO_CODEC_OPERATION_DECODE_AV1_BIT_KHR: + return (a->codec.av1dec.stdProfile == b->codec.av1dec.stdProfile); + default: + return FALSE; + } + + g_assert_not_reached (); +}
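The conversion functions in the file above are table-driven: each static map pairs a Vulkan enum with its GStreamer caps string, a linear scan looks for a hit, and running off the end of the table (the `if (i == G_N_ELEMENTS (...))` checks) signals failure. Here is a minimal, self-contained sketch of that idiom; the table entries and the `N_ELEMENTS` macro are illustrative stand-ins, not the real Vulkan enum values or GLib's `G_N_ELEMENTS`:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Stand-in for one row of video_codecs_map: codec enum -> caps MIME type. */
typedef struct {
  int codec;                    /* stand-in for VkVideoCodecOperationFlagBitsKHR */
  const char *mime;
} CodecMapEntry;

static const CodecMapEntry codec_map[] = {
  { 1, "video/x-h264" },
  { 2, "video/x-h265" },
  { 3, "video/x-av1"  },
};

/* Same element-count trick as GLib's G_N_ELEMENTS. */
#define N_ELEMENTS(arr) (sizeof (arr) / sizeof ((arr)[0]))

static const char *
codec_to_mime (int codec)
{
  size_t i;

  /* Linear scan; stop at the first matching row. */
  for (i = 0; i < N_ELEMENTS (codec_map); i++) {
    if (codec_map[i].codec == codec)
      return codec_map[i].mime;
  }
  /* Fell off the end of the table: no mapping, mirror the NULL return. */
  return NULL;
}
```

With three rows the scan is trivially cheap; the real code uses the same shape for chroma subsampling, bit depth, and per-codec profile strings.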
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkvideoutils-private.h
Added
@@ -0,0 +1,171 @@
+/*
+ * GStreamer
+ * Copyright (C) 2023 Igalia, S.L.
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Library General Public
+ * License as published by the Free Software Foundation; either
+ * version 2 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Library General Public License for more details.
+ *
+ * You should have received a copy of the GNU Library General Public
+ * License along with this library; if not, write to the
+ * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
+ * Boston, MA 02110-1301, USA.
+ */
+
+#pragma once
+
+#include <gst/gst.h>
+#include <gst/vulkan/gstvkapi.h>
+
+G_BEGIN_DECLS
+
+typedef struct _GstVulkanVideoProfile GstVulkanVideoProfile;
+typedef struct _GstVulkanVideoCapabilities GstVulkanVideoCapabilities;
+
+/**
+ * GstVulkanVideoProfile:
+ * @profile: the generic vulkan video profile
+ * @codec: the specific codec profile
+ *
+ * Since: 1.24
+ */
+struct _GstVulkanVideoProfile
+{
+  /*< private >*/
+  VkVideoProfileInfoKHR profile;
+  union {
+    VkVideoDecodeUsageInfoKHR decode;
+    /**
+     * GstVulkanVideoProfile.usage.encode:
+     *
+     * Since: 1.26
+     **/
+    VkVideoEncodeUsageInfoKHR encode;
+  } usage;
+
+  union {
+    VkBaseInStructure base;
+    VkVideoDecodeH264ProfileInfoKHR h264dec;
+    VkVideoDecodeH265ProfileInfoKHR h265dec;
+    VkVideoDecodeAV1ProfileInfoKHR av1dec;
+    /**
+     * GstVulkanVideoProfile.usage.codec.vp9dec:
+     *
+     * Since: 1.28
+     **/
+    VkVideoDecodeVP9ProfileInfoKHR vp9dec;
+    /**
+     * GstVulkanVideoProfile.usage.codec.h264enc:
+     *
+     * Since: 1.26
+     **/
+    VkVideoEncodeH264ProfileInfoKHR h264enc;
+    /**
+     * GstVulkanVideoProfile.usage.codec.h265enc:
+     *
+     * Since: 1.26
+     **/
+    VkVideoEncodeH265ProfileInfoKHR h265enc;
+    /**
+     * GstVulkanVideoProfile.usage.codec.av1enc:
+     *
+     * Since: 1.28
+     **/
+    VkVideoEncodeAV1ProfileInfoKHR av1enc;
+  } codec;
+  gpointer _reserved[GST_PADDING];
+};
+
+/**
+ * GstVulkanVideoCapabilities:
+ *
+ * Since: 1.24
+ */
+struct _GstVulkanVideoCapabilities
+{
+  /*< private >*/
+  VkVideoCapabilitiesKHR caps;
+  union
+  {
+    struct
+    {
+      /*< private >*/
+      VkVideoDecodeCapabilitiesKHR caps;
+      union
+      {
+        /*< private >*/
+        VkVideoDecodeH264CapabilitiesKHR h264;
+        VkVideoDecodeH265CapabilitiesKHR h265;
+        /**
+         * GstVulkanVideoCapabilities.caps.codec.vp9:
+         *
+         * Since: 1.28
+         **/
+        VkVideoDecodeVP9CapabilitiesKHR vp9;
+        /**
+         * GstVulkanVideoCapabilities.caps.codec.av1:
+         *
+         * Since: 1.28
+         **/
+        VkVideoDecodeAV1CapabilitiesKHR av1;
+      } codec;
+    } decoder;
+    struct
+    {
+      /*< private >*/
+      VkVideoEncodeCapabilitiesKHR caps;
+      union
+      {
+        /*< private >*/
+        VkVideoEncodeH264CapabilitiesKHR h264;
+        VkVideoEncodeH265CapabilitiesKHR h265;
+        /**
+         * _GstVulkanVideoCapabilities.encoder.codec.av1:
+         *
+         * Since: 1.28
+         **/
+        VkVideoEncodeAV1CapabilitiesKHR av1;
+
+      } codec;
+    } encoder;
+  };
+  /*< private >*/
+  gpointer _reserved[GST_PADDING];
+};
+
+/**
+ * GstVulkanVideoOperation:
+ * @GST_VULKAN_VIDEO_OPERATION_DECODE: decode operation
+ * @GST_VULKAN_VIDEO_OPERATION_ENCODE: encode operation
+ * @GST_VULKAN_VIDEO_OPERATION_UNKNOWN: unknown
+ *
+ * The type of video operation.
+ *
+ * Since: 1.24
+ */
+typedef enum {
+  GST_VULKAN_VIDEO_OPERATION_DECODE = 0,
+  GST_VULKAN_VIDEO_OPERATION_ENCODE,
+  GST_VULKAN_VIDEO_OPERATION_UNKNOWN,
+} GstVulkanVideoOperation;
+
+GST_VULKAN_API
+GstCaps * gst_vulkan_video_profile_to_caps (const GstVulkanVideoProfile * profile);
+GST_VULKAN_API
+gboolean  gst_vulkan_video_profile_from_caps (GstVulkanVideoProfile * profile,
+                                              GstCaps * caps,
+                                              GstVulkanVideoOperation video_operation);
+GST_VULKAN_API
+gboolean  gst_vulkan_video_profile_is_valid (GstVulkanVideoProfile * profile,
+                                             guint codec);
+GST_VULKAN_API
+gboolean  gst_vulkan_video_profile_is_equal (const GstVulkanVideoProfile * a,
+                                             const GstVulkanVideoProfile * b);
+
+G_END_DECLS
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/gstvkwindow.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/gstvkwindow.h
Changed
@@ -143,10 +143,10 @@
 G_DEFINE_AUTOPTR_CLEANUP_FUNC(GstVulkanWindow, gst_object_unref)
 
 GST_VULKAN_API
-GstVulkanWindow * gst_vulkan_window_new (GstVulkanDisplay *display);
+GstVulkanWindow * gst_vulkan_window_new (GstVulkanDisplay *display) G_GNUC_WARN_UNUSED_RESULT;
 
 GST_VULKAN_API
-GstVulkanDisplay * gst_vulkan_window_get_display (GstVulkanWindow *window);
+GstVulkanDisplay * gst_vulkan_window_get_display (GstVulkanWindow *window) G_GNUC_WARN_UNUSED_RESULT;
 
 GST_VULKAN_API
 VkSurfaceKHR gst_vulkan_window_get_surface (GstVulkanWindow *window, GError **error);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/meson.build
Changed
@@ -35,7 +35,6 @@
   'gstvkswapper.c',
   'gstvktrash.c',
   'gstvkvideofilter.c',
-  'gstvkvideoutils.c',
   'gstvkutils.c',
   'gstvkwindow.c',
 )
@@ -71,7 +70,6 @@
   'gstvktrash.h',
   'gstvkutils.h',
   'gstvkvideofilter.h',
-  'gstvkvideoutils.h',
   'gstvkwindow.h',
   'vulkan-prelude.h',
   'vulkan_fwd.h',
@@ -322,8 +320,8 @@
 video_test = '''
 #include <vulkan/vulkan.h>
 
-#if !(defined(VK_VERSION_1_4) || (defined(VK_VERSION_1_3) && VK_HEADER_VERSION >= 275))
-#error "Need at least Vulkan 1.3.275"
+#if !(defined(VK_VERSION_1_5) || (defined(VK_VERSION_1_4) && VK_HEADER_VERSION >= 317))
+#error "Need at least Vulkan 1.4.317"
 #endif
 
 /* vk_video/vulkan_video_codec_h264std.h */
@@ -346,6 +344,7 @@
 if have_vk_video
   vulkan_conf.set('GST_VULKAN_HAVE_VIDEO_EXTENSIONS', 1)
   vulkan_priv_sources += files(
+    'gstvkvideoutils-private.c',
     'gstvkvideo-private.c',
     'gstvkdecoder-private.c',
     'gstvkencoder-private.c',
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/vulkan.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/vulkan.h
Changed
@@ -61,6 +61,5 @@
 #include <gst/vulkan/gstvkoperation.h>
 #include <gst/vulkan/gstvkutils.h>
-#include <gst/vulkan/gstvkvideoutils.h>
 
 #endif /* __GST_VULKAN_H__ */
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/vulkan_fwd.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/vulkan_fwd.h
Changed
@@ -112,8 +112,6 @@
 typedef struct _GstVulkanFullScreenQuadPrivate GstVulkanFullScreenQuadPrivate;
 
 typedef struct _GstVulkanQueueFamilyOps GstVulkanQueueFamilyOps;
 
-typedef struct _GstVulkanVideoProfile GstVulkanVideoProfile;
-typedef struct _GstVulkanVideoCapabilities GstVulkanVideoCapabilities;
 typedef struct _GstVulkanOperation GstVulkanOperation;
 typedef struct _GstVulkanOperationClass GstVulkanOperationClass;
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/vulkan/wayland/gstvkdisplay_wayland.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/vulkan/wayland/gstvkdisplay_wayland.h
Changed
@@ -88,9 +88,9 @@
 #define GST_VULKAN_DISPLAY_WAYLAND_DISPLAY(display_) (GST_VULKAN_DISPLAY_WAYLAND (display_)->display)
 
 GST_VULKAN_API
-GstVulkanDisplayWayland *gst_vulkan_display_wayland_new (const gchar * name);
+GstVulkanDisplayWayland *gst_vulkan_display_wayland_new (const gchar * name) G_GNUC_WARN_UNUSED_RESULT;
 
 GST_VULKAN_API
-GstVulkanDisplayWayland *gst_vulkan_display_wayland_new_with_display (struct wl_display *display);
+GstVulkanDisplayWayland *gst_vulkan_display_wayland_new_with_display (struct wl_display *display) G_GNUC_WARN_UNUSED_RESULT;
 
 G_END_DECLS
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/wayland/gstwlbuffer.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/wayland/gstwlbuffer.c
Changed
@@ -172,9 +172,10 @@
 static void
 gstmemory_disposed (GstWlBuffer * self)
 {
+#ifndef G_DISABLE_ASSERT
   GstWlBufferPrivate *priv = gst_wl_buffer_get_instance_private (self);
-
   g_assert (!priv->used_by_compositor);
+#endif
 
   GST_TRACE_OBJECT (self, "owning GstMemory was finalized");
 
@@ -303,3 +304,17 @@
 
   return priv->display;
 }
+
+GstVideoMeta *
+gst_wl_buffer_get_video_meta (GstWlBuffer * self)
+{
+  GstWlBufferPrivate *priv = gst_wl_buffer_get_instance_private (self);
+  return gst_buffer_get_video_meta (priv->current_gstbuffer);
+}
+
+GstVideoCropMeta *
+gst_wl_buffer_get_video_crop_meta (GstWlBuffer * self)
+{
+  GstWlBufferPrivate *priv = gst_wl_buffer_get_instance_private (self);
+  return gst_buffer_get_video_crop_meta (priv->current_gstbuffer);
+}
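The `#ifndef G_DISABLE_ASSERT` guard added to `gstmemory_disposed()` above exists because the `priv` local is only read by the assertion: if the assert is compiled out but the fetch stays, release builds warn about an unused variable. A minimal sketch of the same pattern using standard C's `NDEBUG` in place of GLib's `G_DISABLE_ASSERT` (the `Buffer` type here is a hypothetical stand-in):

```c
#include <assert.h>

/* Hypothetical stand-in for the wl-buffer object. */
typedef struct {
  int used_by_compositor;
  int disposed;
} Buffer;

static void
buffer_disposed (Buffer * self)
{
#ifndef NDEBUG
  /* This local exists only to feed the assertion, so it is compiled out
   * together with the assert when assertions are disabled. */
  int in_use = self->used_by_compositor;
  assert (!in_use);
#endif
  self->disposed = 1;
}
```

Compiling with `-DNDEBUG` removes both lines at once, which is exactly what the upstream change achieves with `G_DISABLE_ASSERT`.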
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/wayland/gstwlbuffer.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/wayland/gstwlbuffer.h
Changed
@@ -21,17 +21,18 @@
 #pragma once
 
 #include <gst/wayland/wayland.h>
+#include <gst/video/video.h>
 
 G_BEGIN_DECLS
 
 #define GST_TYPE_WL_BUFFER gst_wl_buffer_get_type ()
 
 GST_WL_API
-G_DECLARE_FINAL_TYPE (GstWlBuffer, gst_wl_buffer, GST, WL_BUFFER, GObject);
+G_DECLARE_FINAL_TYPE (GstWlBuffer, gst_wl_buffer, GST, WL_BUFFER, GstObject);
 
 struct _GstWlBuffer
 {
-  GObject parent_instance;
+  GstObject parent_instance;
 };
 
 GST_WL_API
@@ -56,4 +57,10 @@
 GST_WL_API
 GstWlDisplay *gst_wl_buffer_get_display (GstWlBuffer * self);
 
+GST_WL_API
+GstVideoMeta * gst_wl_buffer_get_video_meta (GstWlBuffer * self);
+
+GST_WL_API
+GstVideoCropMeta * gst_wl_buffer_get_video_crop_meta (GstWlBuffer * self);
+
 G_END_DECLS
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/wayland/gstwldisplay.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/wayland/gstwldisplay.c
Changed
@@ -23,7 +23,10 @@
 #endif
 
 #include "gstwldisplay.h"
+#include "gstwloutput-private.h"
 
+#include "color-management-v1-client-protocol.h"
+#include "color-representation-v1-client-protocol.h"
 #include "fullscreen-shell-unstable-v1-client-protocol.h"
 #include "linux-dmabuf-unstable-v1-client-protocol.h"
 #include "single-pixel-buffer-v1-client-protocol.h"
@@ -53,10 +56,24 @@
   struct wl_shm *shm;
   struct wp_viewporter *viewporter;
   struct zwp_linux_dmabuf_v1 *dmabuf;
+  struct wp_color_manager_v1 *color;
+  struct wp_color_representation_manager_v1 *color_representation;
+
   GArray *shm_formats;
   GArray *dmabuf_formats;
   GArray *dmabuf_modifiers;
 
+  gboolean color_parametric_creator_supported;
+  gboolean color_mastering_display_supported;
+  GArray *color_transfer_functions;
+  GArray *color_primaries;
+  GArray *color_alpha_modes;
+  GArray *color_coefficients;
+  GArray *color_coefficients_range;
+
+  GMutex outputs_mutex;
+  GHashTable *outputs;
+
   /* private */
   gboolean own_display;
   GThread *thread;
@@ -92,11 +109,22 @@
   priv->shm_formats = g_array_new (FALSE, FALSE, sizeof (uint32_t));
   priv->dmabuf_formats = g_array_new (FALSE, FALSE, sizeof (uint32_t));
   priv->dmabuf_modifiers = g_array_new (FALSE, FALSE, sizeof (guint64));
+  priv->color_transfer_functions =
+      g_array_new (FALSE, FALSE, sizeof (uint32_t));
+  priv->color_primaries = g_array_new (FALSE, FALSE, sizeof (uint32_t));
+  priv->color_coefficients = g_array_new (FALSE, FALSE, sizeof (uint32_t));
+  priv->color_coefficients_range =
+      g_array_new (FALSE, FALSE, sizeof (uint32_t));
+  priv->color_alpha_modes = g_array_new (FALSE, FALSE, sizeof (uint32_t));
 
   priv->wl_fd_poll = gst_poll_new (TRUE);
   priv->buffers = g_hash_table_new (g_direct_hash, g_direct_equal);
   g_mutex_init (&priv->buffers_mutex);
   g_rec_mutex_init (&priv->sync_mutex);
 
+  g_mutex_init (&priv->outputs_mutex);
+  priv->outputs = g_hash_table_new_full (g_str_hash, g_str_equal, g_free,
+      (GDestroyNotify) g_object_unref);
+
   gst_wl_linux_dmabuf_init_once ();
   gst_wl_shm_init_once ();
   gst_shm_allocator_init_once ();
@@ -133,11 +161,27 @@
   g_array_unref (priv->shm_formats);
   g_array_unref (priv->dmabuf_formats);
   g_array_unref (priv->dmabuf_modifiers);
+
+  g_array_unref (priv->color_transfer_functions);
+  g_array_unref (priv->color_primaries);
+  g_array_unref (priv->color_alpha_modes);
+  g_array_unref (priv->color_coefficients);
+  g_array_unref (priv->color_coefficients_range);
+
   gst_poll_free (priv->wl_fd_poll);
   g_hash_table_unref (priv->buffers);
   g_mutex_clear (&priv->buffers_mutex);
   g_rec_mutex_clear (&priv->sync_mutex);
 
+  g_mutex_clear (&priv->outputs_mutex);
+  g_hash_table_unref (priv->outputs);
+
+  if (priv->color)
+    wp_color_manager_v1_destroy (priv->color);
+
+  if (priv->color_representation)
+    wp_color_representation_manager_v1_destroy (priv->color_representation);
+
   if (priv->viewporter)
     wp_viewporter_destroy (priv->viewporter);
@@ -240,6 +284,190 @@
   dmabuf_modifier,
 };
 
+static void
+color_supported_intent (void *data,
+    struct wp_color_manager_v1 *wp_color_manager_v1, uint32_t render_intent)
+{
+}
+
+static void
+color_supported_feature (void *data,
+    struct wp_color_manager_v1 *wp_color_manager_v1, uint32_t feature)
+{
+  GstWlDisplay *self = data;
+  GstWlDisplayPrivate *priv = gst_wl_display_get_instance_private (self);
+
+  switch (feature) {
+    case WP_COLOR_MANAGER_V1_FEATURE_PARAMETRIC:
+      GST_INFO_OBJECT (self, "New_parametric_creator supported");
+      priv->color_parametric_creator_supported = TRUE;
+      break;
+    case WP_COLOR_MANAGER_V1_FEATURE_SET_MASTERING_DISPLAY_PRIMARIES:
+      GST_INFO_OBJECT (self, "Mastering Display supported");
+      priv->color_mastering_display_supported = TRUE;
+      break;
+    default:
+      break;
+  }
+}
+
+static void
+color_supported_tf_named (void *data,
+    struct wp_color_manager_v1 *wp_color_manager_v1, uint32_t tf)
+{
+  GstWlDisplay *self = data;
+  GstWlDisplayPrivate *priv = gst_wl_display_get_instance_private (self);
+
+  GST_INFO_OBJECT (self, "Supported transfer function 0x%x", tf);
+  g_array_append_val (priv->color_transfer_functions, tf);
+}
+
+static void
+color_supported_primaries_named (void *data,
+    struct wp_color_manager_v1 *wp_color_manager_v1, uint32_t primaries)
+{
+  GstWlDisplay *self = data;
+  GstWlDisplayPrivate *priv = gst_wl_display_get_instance_private (self);
+
+  GST_INFO_OBJECT (self, "Supported primaries: 0x%x", primaries);
+  g_array_append_val (priv->color_primaries, primaries);
+}
+
+static void
+color_done (void *data, struct wp_color_manager_v1 *wp_color_manager_v1)
+{
+}
+
+static const struct wp_color_manager_v1_listener color_listener = {
+  .supported_intent = color_supported_intent,
+  .supported_feature = color_supported_feature,
+  .supported_tf_named = color_supported_tf_named,
+  .supported_primaries_named = color_supported_primaries_named,
+  .done = color_done,
+};
+
+static void
+color_representation_supported_alpha_mode (void *data,
+    struct wp_color_representation_manager_v1
+    *wp_color_representation_manager_v1, uint32_t alpha_mode)
+{
+  GstWlDisplay *self = data;
+  GstWlDisplayPrivate *priv = gst_wl_display_get_instance_private (self);
+
+  GST_INFO_OBJECT (self, "Supported alpha mode: 0x%x", alpha_mode);
+  g_array_append_val (priv->color_alpha_modes, alpha_mode);
+}
+
+static void
+color_representation_supported_coefficients_and_ranges (void *data,
+    struct wp_color_representation_manager_v1
+    *wp_color_representation_manager_v1, uint32_t coefficients, uint32_t range)
+{
+  GstWlDisplay *self = data;
+  GstWlDisplayPrivate *priv = gst_wl_display_get_instance_private (self);
+
+  GST_INFO_OBJECT (self, "Supported coefficients and range: 0x%x/0x%x",
+      coefficients, range);
+  g_array_append_val (priv->color_coefficients, coefficients);
+  g_array_append_val (priv->color_coefficients_range, range);
+}
+
+static void
+color_representation_done (void *data, struct wp_color_representation_manager_v1
+    *wp_color_representation_manager_v1)
+{
+}
+
+static const struct wp_color_representation_manager_v1_listener
color_representation_listener = { + .supported_alpha_mode = color_representation_supported_alpha_mode, + .supported_coefficients_and_ranges = + color_representation_supported_coefficients_and_ranges, + .done = color_representation_done, +}; + +static void +output_geometry (void *data, struct wl_output *wl_output, + int32_t x, int32_t y, int32_t physical_width, int32_t physical_height, + int32_t subpixel, const char *make, const char *model, int32_t transform) +{ + GstWlOutput *output = GST_WL_OUTPUT (data); + gst_wl_output_set_geometry (output, x, y, physical_width, physical_height, + subpixel, make, model, transform); +} + +static void +output_mode (void *data, struct wl_output *wl_output, + uint32_t flags, int32_t width, int32_t height, int32_t refresh) +{ + GstWlOutput *output = GST_WL_OUTPUT (data); + gst_wl_output_set_mode (output, flags, width, height, refresh); +} + +static void +output_scale (void *data, struct wl_output *wl_output, int32_t factor) +{ + GstWlOutput *output = GST_WL_OUTPUT (data); + gst_wl_output_set_scale (output, factor); +} + +static void +output_name (void *data, struct wl_output *wl_output, const char *name) +{ + GstWlOutput *output = GST_WL_OUTPUT (data); + gst_wl_output_set_name (output, name); +} + +static void +output_description (void *data, struct wl_output *wl_output, + const char *description) +{ + GstWlOutput *output = GST_WL_OUTPUT (data); + gst_wl_output_set_description (output, description); +} + +static void +output_done (void *data, struct wl_output *wl_output) +{ + GstWlOutput *output = GST_WL_OUTPUT (data); + GstWlDisplay *self = g_object_steal_data (G_OBJECT (output), "display"); + GstWlDisplayPrivate *priv = gst_wl_display_get_instance_private (self); + const gchar *name = gst_wl_output_get_name (output); + + GST_INFO ("Adding output %s (%p):", name, wl_output); + GST_INFO (" Make: %s", gst_wl_output_get_make (output)); + GST_INFO (" Model: %s", gst_wl_output_get_model (output)); + +#define ARGS(r) (r) /1000 , (r) % 
1000 + GST_INFO (" Mode: %ix%i px %i.%ifps flags %x", + gst_wl_output_get_width (output), gst_wl_output_get_height (output), + ARGS (gst_wl_output_get_refresh (output)), + gst_wl_output_get_mode_flags (output)); +#undef ARGS + + GST_INFO (" Geometry: %i,%i %ix%i mm scale %i", + gst_wl_output_get_x (output), gst_wl_output_get_y (output), + gst_wl_output_get_physical_width (output), + gst_wl_output_get_physical_height (output), + gst_wl_output_get_scale (output)); + GST_INFO (" Subpixel %i", gst_wl_output_get_subpixel (output)); + GST_INFO (" Transform: %i", gst_wl_output_get_transform (output)); + GST_INFO ("---"); + + g_mutex_lock (&priv->outputs_mutex); + g_hash_table_replace (priv->outputs, g_strdup (name), output); + g_mutex_unlock (&priv->outputs_mutex); +} + +static const struct wl_output_listener output_listener = { + output_geometry, + output_mode, + output_done, + output_scale, + output_name, + output_description, +}; + gboolean gst_wl_display_check_format_for_shm (GstWlDisplay * self, const GstVideoInfo * video_info) @@ -334,6 +562,23 @@ priv->single_pixel_buffer = wl_registry_bind (registry, id, &wp_single_pixel_buffer_manager_v1_interface, 1); + } else if (g_strcmp0 (interface, wp_color_manager_v1_interface.name) == 0) { + priv->color = wl_registry_bind (registry, id, + &wp_color_manager_v1_interface, 1); + wp_color_manager_v1_add_listener (priv->color, &color_listener, self); + } else if (g_strcmp0 (interface, + wp_color_representation_manager_v1_interface.name) == 0) { + priv->color_representation = + wl_registry_bind (registry, id, + &wp_color_representation_manager_v1_interface, 1); + wp_color_representation_manager_v1_add_listener (priv->color_representation, + &color_representation_listener, self); + } else if (g_strcmp0 (interface, "wl_output") == 0) { + struct wl_output *wl_output = + wl_registry_bind (registry, id, &wl_output_interface, MIN (version, 4)); + GstWlOutput *output = gst_wl_output_new (wl_output, id); + g_object_set_data (G_OBJECT 
(output), "display", self); + wl_output_add_listener (wl_output, &output_listener, output); } } @@ -341,7 +586,24 @@ registry_handle_global_remove (void *data, struct wl_registry *registry, uint32_t name) { - /* temporarily do nothing */ + GstWlDisplay *self = data; + GstWlDisplayPrivate *priv = gst_wl_display_get_instance_private (self); + + g_mutex_lock (&priv->outputs_mutex); + + GHashTableIter iter; + gpointer key, value; + g_hash_table_iter_init (&iter, priv->outputs); + while (g_hash_table_iter_next (&iter, &key, &value)) { + GstWlOutput *output = value; + + if (gst_wl_output_get_id (output) == name) { + g_hash_table_iter_remove (&iter); + break; + } + } + + g_mutex_unlock (&priv->outputs_mutex); } static const struct wl_registry_listener registry_listener = { @@ -767,3 +1029,210 @@ return priv->own_display; } + +/** + * gst_wl_display_get_color_manager_v1: + * @self: A #GstWlDisplay + * + * Returns: (transfer none): The color manager global or %NULL + * + * Since: 1.28 + */ +struct wp_color_manager_v1 * +gst_wl_display_get_color_manager_v1 (GstWlDisplay * self) +{ + GstWlDisplayPrivate *priv = gst_wl_display_get_instance_private (self); + + return priv->color; +} + +/** + * gst_wl_display_get_color_representation_manager_v1: + * @self: A #GstWlDisplay + * + * Returns: (transfer none): The color representation global or %NULL + * + * Since: 1.28 + */ +struct wp_color_representation_manager_v1 * +gst_wl_display_get_color_representation_manager_v1 (GstWlDisplay * self) +{ + GstWlDisplayPrivate *priv = gst_wl_display_get_instance_private (self); + + return priv->color_representation; +} + +/** + * gst_wl_display_is_color_parametric_creator_supported: + * @self: A #GstWlDisplay + * + * Returns: %TRUE if the compositor supports parametric image descriptions + * + * Since: 1.28 + */ +gboolean +gst_wl_display_is_color_parametric_creator_supported (GstWlDisplay * self) +{ + GstWlDisplayPrivate *priv = gst_wl_display_get_instance_private (self); + + return 
priv->color_parametric_creator_supported; +} + +/** + * gst_wl_display_is_color_mastering_display_supported: + * @self: A #GstWlDisplay + * + * Returns: %TRUE if the compositor supports mastering display primaries + * image descriptions + * + * Since: 1.28 + */ +gboolean +gst_wl_display_is_color_mastering_display_supported (GstWlDisplay * self) +{ + GstWlDisplayPrivate *priv = gst_wl_display_get_instance_private (self); + + return priv->color_mastering_display_supported; +} + +/** + * gst_wl_display_is_color_transfer_function_supported: + * @self: A #GstWlDisplay + * + * Returns: %TRUE if the compositor supports @transfer_function + * + * Since: 1.28 + */ +gboolean +gst_wl_display_is_color_transfer_function_supported (GstWlDisplay * self, + uint32_t transfer_function) +{ + GstWlDisplayPrivate *priv = gst_wl_display_get_instance_private (self); + guint i; + + /* A value of 0 is invalid and will never be present in the list of enums. */ + if (transfer_function == 0) + return FALSE; + + for (i = 0; i < priv->color_transfer_functions->len; i++) { + uint32_t candidate = + g_array_index (priv->color_transfer_functions, uint32_t, i); + + if (candidate == transfer_function) + return TRUE; + } + + return FALSE; +} + +/** + * gst_wl_display_are_color_primaries_supported: + * @self: A #GstWlDisplay + * + * Returns: %TRUE if the compositor supports @primaries + * + * Since: 1.28 + */ +gboolean +gst_wl_display_are_color_primaries_supported (GstWlDisplay * self, + uint32_t primaries) +{ + GstWlDisplayPrivate *priv = gst_wl_display_get_instance_private (self); + guint i; + + /* A value of 0 is invalid and will never be present in the list of enums. 
*/ + if (primaries == 0) + return FALSE; + + for (i = 0; i < priv->color_primaries->len; i++) { + uint32_t candidate = g_array_index (priv->color_primaries, uint32_t, i); + + if (candidate == primaries) + return TRUE; + } + + return FALSE; +} + +/** + * gst_wl_display_is_color_alpha_mode_supported: + * @self: A #GstWlDisplay + * + * Returns: %TRUE if the compositor supports @alpha_mode + * + * Since: 1.28 + */ +gboolean +gst_wl_display_is_color_alpha_mode_supported (GstWlDisplay * self, + uint32_t alpha_mode) +{ + GstWlDisplayPrivate *priv = gst_wl_display_get_instance_private (self); + guint i; + + for (i = 0; i < priv->color_alpha_modes->len; i++) { + uint32_t candidate = g_array_index (priv->color_alpha_modes, uint32_t, i); + + if (candidate == alpha_mode) + return TRUE; + } + + return FALSE; +} + +/** + * gst_wl_display_are_color_coefficients_supported: + * @self: A #GstWlDisplay + * + * Returns: %TRUE if the compositor supports the combination of @coefficients and @range + * + * Since: 1.28 + */ +gboolean +gst_wl_display_are_color_coefficients_supported (GstWlDisplay * self, + uint32_t coefficients, uint32_t range) +{ + GstWlDisplayPrivate *priv = gst_wl_display_get_instance_private (self); + guint i; + + /* A value of 0 is invalid and will never be present in the list of enums. */ + if (coefficients == 0 || range == 0) + return FALSE; + + for (i = 0; i < priv->color_coefficients->len; i++) { + uint32_t candidate = g_array_index (priv->color_coefficients, uint32_t, i); + uint32_t candidate_range = + g_array_index (priv->color_coefficients_range, uint32_t, i); + + if (candidate == coefficients && candidate_range == range) + return TRUE; + } + + return FALSE; +} + +/** +* gst_wl_display_get_output_by_name: +* @self: A #GstWlDisplay +* @output_name: Name of the output +* +* Looks up a wl_output with the specified name. +* +* Returns: (transfer full): A #GstWlOutput or %NULL if not found. 
+* +* Since: 1.28 +*/ +GstWlOutput * +gst_wl_display_get_output_by_name (GstWlDisplay * self, + const gchar * output_name) +{ + GstWlDisplayPrivate *priv = gst_wl_display_get_instance_private (self); + GstWlOutput *output; + + g_mutex_lock (&priv->outputs_mutex); + output = GST_WL_OUTPUT (g_hash_table_lookup (priv->outputs, output_name)); + if (output) + g_object_ref (output); + g_mutex_unlock (&priv->outputs_mutex); + + return output; +}
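The new gst_wl_display_are_color_coefficients_supported() above keeps the advertised coefficients and ranges in two parallel arrays and checks them pairwise: entry i of both arrays together forms one supported combination. A minimal stand-alone sketch of that lookup pattern, using plain C arrays instead of GArray (the name pair_supported and the sample values are hypothetical):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Parallel-array pairwise lookup: the listener appends one value to
 * each array per supported_coefficients_and_ranges event, so entry i
 * of both arrays together forms one supported combination. */
static int
pair_supported (const uint32_t *coeffs, const uint32_t *ranges, size_t len,
    uint32_t coefficients, uint32_t range)
{
  size_t i;

  /* A value of 0 is invalid and never advertised. */
  if (coefficients == 0 || range == 0)
    return 0;

  for (i = 0; i < len; i++)
    if (coeffs[i] == coefficients && ranges[i] == range)
      return 1;

  return 0;
}
```

Checking both halves of the same index matters: with pairs (1,1) and (3,2) advertised, the mixed combination (1,2) must be rejected even though each value appears somewhere in the lists.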
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/wayland/gstwldisplay.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/wayland/gstwldisplay.h
Changed
@@ -30,11 +30,11 @@ #define GST_TYPE_WL_DISPLAY (gst_wl_display_get_type ()) GST_WL_API -G_DECLARE_FINAL_TYPE (GstWlDisplay, gst_wl_display, GST, WL_DISPLAY, GObject); +G_DECLARE_FINAL_TYPE (GstWlDisplay, gst_wl_display, GST, WL_DISPLAY, GstObject); struct _GstWlDisplay { - GObject parent_instance; + GstObject parent_instance; }; GST_WL_API @@ -121,4 +121,31 @@ GST_WL_API gboolean gst_wl_display_has_own_display (GstWlDisplay * self); +GST_WL_API +struct wp_color_manager_v1 *gst_wl_display_get_color_manager_v1 (GstWlDisplay * self); + +GST_WL_API +struct wp_color_representation_manager_v1 *gst_wl_display_get_color_representation_manager_v1 (GstWlDisplay * self); + +GST_WL_API +gboolean gst_wl_display_is_color_parametric_creator_supported (GstWlDisplay * self); + +GST_WL_API +gboolean gst_wl_display_is_color_mastering_display_supported (GstWlDisplay * self); + +GST_WL_API +gboolean gst_wl_display_is_color_transfer_function_supported (GstWlDisplay * self, uint32_t transfer_function); + +GST_WL_API +gboolean gst_wl_display_are_color_primaries_supported (GstWlDisplay * self, uint32_t primaries); + +GST_WL_API +gboolean gst_wl_display_is_color_alpha_mode_supported (GstWlDisplay * self, uint32_t alpha_mode); + +GST_WL_API +gboolean gst_wl_display_are_color_coefficients_supported (GstWlDisplay * self, uint32_t coefficients, uint32_t range); + +GST_WL_API +GstWlOutput * gst_wl_display_get_output_by_name (GstWlDisplay * self, const gchar * output_name); + G_END_DECLS
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/wayland/gstwloutput-private.h
Added
@@ -0,0 +1,45 @@ +/* GStreamer Wayland Library + * + * Copyright (C) 2025 Collabora Ltd. + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the Free + * Software Foundation, Inc., 51 Franklin Street, Fifth Floor, + * Boston, MA 02110-1301 USA. + */ + +#pragma once + +#include <gst/wayland/wayland.h> + +G_BEGIN_DECLS + +/* <private> */ +GstWlOutput *gst_wl_output_new (struct wl_output *output, guint32 id); + +void gst_wl_output_set_name (GstWlOutput * self, const gchar *name); + +void gst_wl_output_set_description (GstWlOutput * self, const gchar *description); + +void gst_wl_output_set_scale (GstWlOutput * self, gint scale_factor); + +void gst_wl_output_set_geometry (GstWlOutput * self, gint x,gint y, + gint physical_width, gint physical_height, + enum wl_output_subpixel subpixel, + const gchar * make, const gchar * model, + enum wl_output_transform transform); + +void gst_wl_output_set_mode (GstWlOutput * self, guint flags, gint width, + gint height, gint refresh); + +G_END_DECLS
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/wayland/gstwloutput.c
Added
@@ -0,0 +1,454 @@ +/* GStreamer Wayland Library + * + * Copyright (C) 2025 Collabora Ltd. + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the Free + * Software Foundation, Inc., 51 Franklin Street, Fifth Floor, + * Boston, MA 02110-1301 USA. + */ + +#ifdef HAVE_CONFIG_H +#include <config.h> +#endif + +#include "gstwloutput.h" +#include "gstwloutput-private.h" + +struct _GstWlOutput +{ + GObject parent; + + struct wl_output *output; + guint32 global_id; + + gchar *name; + gchar *description; + + gint scale_factor; + + struct + { + gint x; + gint y; + gint physical_width; + gint physical_height; + enum wl_output_subpixel subpixel; + gchar *make; + gchar *model; + enum wl_output_transform transform; + } geometry; + + struct + { + guint flags; + gint width; + gint height; + gint refresh; + } mode; +}; + +G_DEFINE_TYPE (GstWlOutput, gst_wl_output, G_TYPE_OBJECT); + +static void +gst_wl_output_finalize (GObject * object) +{ + GstWlOutput *self = GST_WL_OUTPUT (object); + g_free (self->name); + g_free (self->description); + g_free (self->geometry.make); + g_free (self->geometry.model); + + wl_output_destroy (self->output); + + G_OBJECT_CLASS (gst_wl_output_parent_class)->finalize (object); +} + +static void +gst_wl_output_init (GstWlOutput * self) +{ +} + +static void +gst_wl_output_class_init (GstWlOutputClass * klass) +{ + GObjectClass *gobject_class = G_OBJECT_CLASS 
(klass); + gobject_class->finalize = gst_wl_output_finalize; +} + +/** + * gst_wl_output_new: (skip): + * @output: A wl_output proxy + * + * Returns: (transfer full): A #GstWlOutput object + * + * Since: 1.28 + */ +GstWlOutput * +gst_wl_output_new (struct wl_output *output, guint32 global_id) +{ + GstWlOutput *self = GST_WL_OUTPUT (g_object_new (GST_TYPE_WL_OUTPUT, NULL)); + + self->output = output; + self->global_id = global_id; + + return self; +} + +/** + * gst_wl_output_set_name: (skip): + * @self: the #GstWlOutput + * @name: the name to set + * + * Saves the name of the #GstWlOutput. + * + * Since: 1.28 + */ +void +gst_wl_output_set_name (GstWlOutput * self, const gchar * name) +{ + g_free (self->name); + self->name = g_strdup (name); +} + +/** + * gst_wl_output_set_description: (skip): + * @self: the #GstWlOutput + * @description: the description to set + * + * Saves the description of the #GstWlOutput. + * + * Since: 1.28 + */ +void +gst_wl_output_set_description (GstWlOutput * self, const gchar * description) +{ + g_free (self->description); + self->description = g_strdup (description); +} + +/** + * gst_wl_output_set_scale: (skip): + * @self: the #GstWlOutput + * @scale_factor: the scale factor to set + * + * Saves the scale of the #GstWlOutput. + * + * Since: 1.28 + */ +void +gst_wl_output_set_scale (GstWlOutput * self, gint scale_factor) +{ + self->scale_factor = scale_factor; +} + +/** + * gst_wl_output_set_geometry: (skip): + * @self: the #GstWlOutput + * @x: the x coordinate + * @y: the y coordinate + * @physical_width: the width in mm + * @physical_height: the height in mm + * @subpixel: type of pixels + * @make: the brand of the output + * @model: the specific model + * @transform: the transform used to render to this output + * + * Saves all the parameters that are part of the geometry callback for the + * output. 
+ * + * Since: 1.28 + */ +void +gst_wl_output_set_geometry (GstWlOutput * self, gint x, gint y, + gint physical_width, gint physical_height, + enum wl_output_subpixel subpixel, const gchar * make, const gchar * model, + enum wl_output_transform transform) +{ + g_free (self->geometry.make); + g_free (self->geometry.model); + self->geometry.x = x; + self->geometry.y = y; + self->geometry.physical_width = physical_width; + self->geometry.physical_height = physical_height; + self->geometry.subpixel = subpixel; + self->geometry.make = g_strdup (make); + self->geometry.model = g_strdup (model); + self->geometry.transform = transform; +} + +/** +* gst_wl_output_set_mode: (skip): +* @self: A #GstWlOutput +* @flags: enum wl_output_mode +* @width: the width in pixels +* @height: the height in pixels +* @refresh: the refresh rate in mHz +* +* Saves all the parameters that are part of the mode callback for the output. The +* compositor may call this multiple times but must send the current mode last. +* Only the last mode is kept. 
+* +* Since: 1.28 +*/ +void +gst_wl_output_set_mode (GstWlOutput * self, guint flags, gint width, + gint height, gint refresh) +{ + self->mode.flags = flags; + self->mode.width = width; + self->mode.height = height; + self->mode.refresh = refresh; +} + +/** +* gst_wl_output_get_wl_output: +* @self: A #GstWlOutput +* +* Returns: the struct wl_output pointer +* +* Since: 1.28 +*/ +struct wl_output * +gst_wl_output_get_wl_output (GstWlOutput * self) +{ + return self->output; +} + +/** +* gst_wl_output_get_id: +* @self: A #GstWlOutput +* +* Returns: the Wayland object global id +* +* Since: 1.28 +*/ +guint32 +gst_wl_output_get_id (GstWlOutput * self) +{ + return wl_proxy_get_id ((struct wl_proxy *) self->output); +} + +/** +* gst_wl_output_get_name: +* @self: A #GstWlOutput +* +* Returns: the output name +* +* Since: 1.28 +*/ +const gchar * +gst_wl_output_get_name (GstWlOutput * self) +{ + return self->name; +} + +/** +* gst_wl_output_get_description: +* @self: A #GstWlOutput +* +* Returns: the output description +* +* Since: 1.28 +*/ +const gchar * +gst_wl_output_get_decscription (GstWlOutput * self) +{ + return self->description; +} + +/** +* gst_wl_output_get_make: +* @self: A #GstWlOutput +* +* Returns: the output make +* +* Since: 1.28 +*/ +const gchar * +gst_wl_output_get_make (GstWlOutput * self) +{ + return self->geometry.make; +} + +/** +* gst_wl_output_get_model: +* @self: A #GstWlOutput +* +* Returns: the output model +* +* Since: 1.28 +*/ +const gchar * +gst_wl_output_get_model (GstWlOutput * self) +{ + return self->geometry.model; +} + +/** +* gst_wl_output_get_scale: +* @self: A #GstWlOutput +* +* The output scale from the output protocol is rounded up to the next integer +* value. For an accurate scaling factor, use the fractional scaling protocol, which +* is queried per surface rather than per output. 
+* +* Returns: the output scale factor +* +* Since: 1.28 +*/ +gint +gst_wl_output_get_scale (GstWlOutput * self) +{ + return self->scale_factor; +} + +/** +* gst_wl_output_get_x: +* @self: A #GstWlOutput +* +* Returns: the output virtual x coordinate +* +* Since: 1.28 +*/ +gint +gst_wl_output_get_x (GstWlOutput * self) +{ + return self->geometry.x; +} + +/** +* gst_wl_output_get_y: +* @self: A #GstWlOutput +* +* Returns: the output virtual y coordinate +* +* Since: 1.28 +*/ +gint +gst_wl_output_get_y (GstWlOutput * self) +{ + return self->geometry.y; +} + +/** +* gst_wl_output_get_physical_width: +* @self: A #GstWlOutput +* +* Returns: the output physical width +* +* Since: 1.28 +*/ +gint +gst_wl_output_get_physical_width (GstWlOutput * self) +{ + return self->geometry.physical_width; +} + +/** +* gst_wl_output_get_physical_height: +* @self: A #GstWlOutput +* +* Returns: the output physical height +* +* Since: 1.28 +*/ +gint +gst_wl_output_get_physical_height (GstWlOutput * self) +{ + return self->geometry.physical_height; +} + +/** +* gst_wl_output_get_subpixel: +* @self: A #GstWlOutput +* +* Returns: the output subpixel type (see enum wl_output_subpixel) +* +* Since: 1.28 +*/ +enum wl_output_subpixel +gst_wl_output_get_subpixel (GstWlOutput * self) +{ + return self->geometry.subpixel; +} + +/** +* gst_wl_output_get_transform: +* @self: A #GstWlOutput +* +* Returns: the output transform (see enum wl_output_transform) +* +* Since: 1.28 +*/ +enum wl_output_transform +gst_wl_output_get_transform (GstWlOutput * self) +{ + return self->geometry.transform; +} + +/** +* gst_wl_output_get_mode_flags: +* @self: A #GstWlOutput +* +* Returns: the output mode flags (see enum wl_output_mode) +* +* Since: 1.28 +*/ +guint +gst_wl_output_get_mode_flags (GstWlOutput * self) +{ + return self->mode.flags; +} + +/** +* gst_wl_output_get_width: +* @self: A #GstWlOutput +* +* Returns: the output width in pixels +* +* Since: 1.28 +*/ +gint +gst_wl_output_get_width (GstWlOutput * self) +{ 
+ return self->mode.width; +} + +/** +* gst_wl_output_get_height: +* @self: A #GstWlOutput +* +* Returns: the output height in pixels +* +* Since: 1.28 +*/ +gint +gst_wl_output_get_height (GstWlOutput * self) +{ + return self->mode.height; +} + +/** +* gst_wl_output_get_refresh: +* @self: A #GstWlOutput +* +* Returns: the output refresh in mHz +* +* Since: 1.28 +*/ +gint +gst_wl_output_get_refresh (GstWlOutput * self) +{ + return self->mode.refresh; +}
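The output_done() handler in gstwldisplay.c prints the refresh rate with an ARGS macro that splits the mHz value from wl_output.mode into whole Hz and a remainder. A tiny stand-alone version of that formatting (format_refresh is a hypothetical helper, not part of the library):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* wl_output.mode reports refresh in millihertz; split it into whole
 * frames per second and the mHz remainder, as the ARGS macro does.
 * Like that macro, the remainder is not zero-padded, so e.g. 60050
 * mHz prints as "60.50fps". */
static void
format_refresh (int refresh_mhz, char *buf, size_t len)
{
  snprintf (buf, len, "%d.%dfps", refresh_mhz / 1000, refresh_mhz % 1000);
}
```

For a typical NTSC-style mode of 59940 mHz this yields "59.940fps".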
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/wayland/gstwloutput.h
Added
@@ -0,0 +1,85 @@ +/* GStreamer Wayland Library + * + * Copyright (C) 2025 Collabora Ltd. + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the Free + * Software Foundation, Inc., 51 Franklin Street, Fifth Floor, + * Boston, MA 02110-1301 USA. + */ + +#pragma once + +#include <gst/wayland/wayland-prelude.h> + +G_BEGIN_DECLS + +#define GST_TYPE_WL_OUTPUT (gst_wl_output_get_type ()) + +G_DECLARE_FINAL_TYPE (GstWlOutput, gst_wl_output, GST, WL_OUTPUT, GObject); + +GST_WL_API +struct wl_output * gst_wl_output_get_wl_output (GstWlOutput *self); + +GST_WL_API +guint32 gst_wl_output_get_id (GstWlOutput *self); + +GST_WL_API +const gchar * gst_wl_output_get_name (GstWlOutput *self); + +GST_WL_API +void gst_wl_output_info (GstWlOutput *self); + +GST_WL_API +const gchar * gst_wl_output_get_decscription (GstWlOutput *self); + +GST_WL_API +const gchar * gst_wl_output_get_make (GstWlOutput *self); + +GST_WL_API +const gchar * gst_wl_output_get_model (GstWlOutput *self); + +GST_WL_API +gint gst_wl_output_get_scale (GstWlOutput * self); + +GST_WL_API +gint gst_wl_output_get_x (GstWlOutput *self); + +GST_WL_API +gint gst_wl_output_get_y (GstWlOutput *self); + +GST_WL_API +gint gst_wl_output_get_physical_width (GstWlOutput *self); + +GST_WL_API +gint gst_wl_output_get_physical_height (GstWlOutput *self); + +GST_WL_API +enum wl_output_subpixel gst_wl_output_get_subpixel (GstWlOutput 
*self); + +GST_WL_API +enum wl_output_transform gst_wl_output_get_transform (GstWlOutput *self); + +GST_WL_API +guint gst_wl_output_get_mode_flags (GstWlOutput *self); + +GST_WL_API +gint gst_wl_output_get_width (GstWlOutput *self); + +GST_WL_API +gint gst_wl_output_get_height (GstWlOutput *self); + +GST_WL_API +gint gst_wl_output_get_refresh (GstWlOutput *self); + +G_END_DECLS
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/wayland/gstwlvideoformat.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/wayland/gstwlvideoformat.c
Changed
@@ -29,17 +29,6 @@ #include <drm_fourcc.h> -/* This can be removed once we can bump the required wl_client_dep, - * which again is blocked by a CI image update, see - * https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5275 - */ -#ifndef WL_SHM_FORMAT_P010 -#define WL_SHM_FORMAT_P010 DRM_FORMAT_P010 -#endif -#ifndef WL_SHM_FORMAT_NV15 -#define WL_SHM_FORMAT_NV15 DRM_FORMAT_NV15 -#endif - #define GST_CAT_DEFAULT gst_wl_videoformat_debug GST_DEBUG_CATEGORY_STATIC (GST_CAT_DEFAULT); @@ -56,94 +45,59 @@ } } -typedef struct -{ - enum wl_shm_format wl_shm_format; - guint32 dma_format; - GstVideoFormat gst_format; -} wl_VideoFormat; - -static const wl_VideoFormat wl_formats[] = { - {WL_SHM_FORMAT_XRGB8888, DRM_FORMAT_XRGB8888, GST_VIDEO_FORMAT_BGRx}, - {WL_SHM_FORMAT_ARGB8888, DRM_FORMAT_ARGB8888, GST_VIDEO_FORMAT_BGRA}, - {WL_SHM_FORMAT_XBGR8888, DRM_FORMAT_XBGR8888, GST_VIDEO_FORMAT_RGBx}, - {WL_SHM_FORMAT_RGBX8888, DRM_FORMAT_RGBX8888, GST_VIDEO_FORMAT_xBGR}, - {WL_SHM_FORMAT_BGRX8888, DRM_FORMAT_BGRX8888, GST_VIDEO_FORMAT_xRGB}, - {WL_SHM_FORMAT_ABGR8888, DRM_FORMAT_ABGR8888, GST_VIDEO_FORMAT_RGBA}, - {WL_SHM_FORMAT_RGBA8888, DRM_FORMAT_RGBA8888, GST_VIDEO_FORMAT_ABGR}, - {WL_SHM_FORMAT_BGRA8888, DRM_FORMAT_BGRA8888, GST_VIDEO_FORMAT_ARGB}, - {WL_SHM_FORMAT_RGB888, DRM_FORMAT_RGB888, GST_VIDEO_FORMAT_BGR}, - {WL_SHM_FORMAT_BGR888, DRM_FORMAT_BGR888, GST_VIDEO_FORMAT_RGB}, - {WL_SHM_FORMAT_RGB565, DRM_FORMAT_RGB565, GST_VIDEO_FORMAT_RGB16}, - {WL_SHM_FORMAT_BGR565, DRM_FORMAT_BGR565, GST_VIDEO_FORMAT_BGR16}, - - {WL_SHM_FORMAT_YUYV, DRM_FORMAT_YUYV, GST_VIDEO_FORMAT_YUY2}, - {WL_SHM_FORMAT_YVYU, DRM_FORMAT_YVYU, GST_VIDEO_FORMAT_YVYU}, - {WL_SHM_FORMAT_UYVY, DRM_FORMAT_UYVY, GST_VIDEO_FORMAT_UYVY}, - {WL_SHM_FORMAT_AYUV, DRM_FORMAT_AYUV, GST_VIDEO_FORMAT_AYUV}, - {WL_SHM_FORMAT_NV12, DRM_FORMAT_NV12, GST_VIDEO_FORMAT_NV12}, - {WL_SHM_FORMAT_NV21, DRM_FORMAT_NV21, GST_VIDEO_FORMAT_NV21}, - {WL_SHM_FORMAT_NV16, DRM_FORMAT_NV16, GST_VIDEO_FORMAT_NV16}, - 
{WL_SHM_FORMAT_NV61, DRM_FORMAT_NV61, GST_VIDEO_FORMAT_NV61}, - {WL_SHM_FORMAT_P010, DRM_FORMAT_P010, GST_VIDEO_FORMAT_P010_10LE}, - {WL_SHM_FORMAT_NV15, DRM_FORMAT_NV15, GST_VIDEO_FORMAT_NV12_10LE40}, - {WL_SHM_FORMAT_YUV410, DRM_FORMAT_YUV410, GST_VIDEO_FORMAT_YUV9}, - {WL_SHM_FORMAT_YVU410, DRM_FORMAT_YVU410, GST_VIDEO_FORMAT_YVU9}, - {WL_SHM_FORMAT_YUV411, DRM_FORMAT_YUV411, GST_VIDEO_FORMAT_Y41B}, - {WL_SHM_FORMAT_YUV420, DRM_FORMAT_YUV420, GST_VIDEO_FORMAT_I420}, - {WL_SHM_FORMAT_YVU420, DRM_FORMAT_YVU420, GST_VIDEO_FORMAT_YV12}, - {WL_SHM_FORMAT_YUV422, DRM_FORMAT_YUV422, GST_VIDEO_FORMAT_Y42B}, - {WL_SHM_FORMAT_YUV444, DRM_FORMAT_YUV444, GST_VIDEO_FORMAT_v308}, -}; - enum wl_shm_format gst_video_format_to_wl_shm_format (GstVideoFormat format) { - guint i; + guint32 drm_format; + guint64 modifier; + + drm_format = gst_video_dma_drm_format_from_gst_format (format, &modifier); - for (i = 0; i < G_N_ELEMENTS (wl_formats); i++) - if (wl_formats[i].gst_format == format) - return wl_formats[i].wl_shm_format; + if (drm_format == DRM_FORMAT_INVALID || modifier != DRM_FORMAT_MOD_LINEAR) { + GST_WARNING ("wayland shm video format not found"); + return -1; + } + + if (drm_format == DRM_FORMAT_XRGB8888) + drm_format = WL_SHM_FORMAT_XRGB8888; + else if (drm_format == DRM_FORMAT_ARGB8888) + drm_format = WL_SHM_FORMAT_ARGB8888; - GST_WARNING ("wayland shm video format not found"); - return -1; + return drm_format; } guint32 gst_video_format_to_wl_dmabuf_format (GstVideoFormat format) { - guint i; + guint32 drm_format; + guint64 modifier; - for (i = 0; i < G_N_ELEMENTS (wl_formats); i++) - if (wl_formats[i].gst_format == format) - return wl_formats[i].dma_format; + drm_format = gst_video_dma_drm_format_from_gst_format (format, &modifier); + + if (drm_format == DRM_FORMAT_INVALID || modifier != DRM_FORMAT_MOD_LINEAR) { + GST_WARNING ("wayland dmabuf video format not found"); + return DRM_FORMAT_INVALID; + } - GST_WARNING ("wayland dmabuf video format not found"); - return 0; + 
return drm_format; } GstVideoFormat gst_wl_shm_format_to_video_format (enum wl_shm_format wl_format) { - guint i; + if (wl_format == WL_SHM_FORMAT_XRGB8888) + wl_format = DRM_FORMAT_XRGB8888; + else if (wl_format == WL_SHM_FORMAT_ARGB8888) + wl_format = DRM_FORMAT_ARGB8888; - for (i = 0; i < G_N_ELEMENTS (wl_formats); i++) - if (wl_formatsi.wl_shm_format == wl_format) - return wl_formatsi.gst_format; - - return GST_VIDEO_FORMAT_UNKNOWN; + return gst_wl_dmabuf_format_to_video_format (wl_format); } GstVideoFormat gst_wl_dmabuf_format_to_video_format (guint wl_format) { - guint i; - - for (i = 0; i < G_N_ELEMENTS (wl_formats); i++) - if (wl_formatsi.dma_format == wl_format) - return wl_formatsi.gst_format; - - return GST_VIDEO_FORMAT_UNKNOWN; + return gst_video_dma_drm_format_to_gst_format (wl_format, + DRM_FORMAT_MOD_LINEAR); } const gchar *
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/wayland/gstwlvideoformat.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/wayland/gstwlvideoformat.h
Changed
@@ -37,13 +37,15 @@ * Since: 1.24 */ #if G_BYTE_ORDER == G_BIG_ENDIAN -#define GST_WL_VIDEO_FORMATS "{ AYUV, RGBA, ARGB, BGRA, ABGR, P010_10LE, " \ - "NV12_10LE40, v308, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, " \ - "YUY2, YVYU, UYVY, I420, YV12, NV12, NV21, Y41B, YUV9, YVU9, BGR16, RGB16 }" +#define GST_WL_VIDEO_FORMATS "{ BGR10A2_LE, RGB10A2_LE, AYUV, RGBA, ARGB, " \ + "BGRA, ABGR, BGR10x2_LE, RGB10x2_LE, P010_10LE, NV12_10LE40, Y444, v308, " \ + "RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, " \ + "I420, YV12, NV12, NV21, Y41B, YUV9, YVU9, BGR16, RGB16 }" #elif G_BYTE_ORDER == G_LITTLE_ENDIAN -#define GST_WL_VIDEO_FORMATS "{ AYUV, RGBA, ARGB, BGRA, ABGR, P010_10LE, " \ - "NV12_10LE40, v308, RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, " \ - "YUY2, YVYU, UYVY, I420, YV12, NV12, NV21, Y41B, YUV9, YVU9, BGR16, RGB16 }" +#define GST_WL_VIDEO_FORMATS "{ BGR10A2_LE, RGB10A2_LE, AYUV, RGBA, ARGB, " \ + "BGRA, ABGR, BGR10x2_LE, RGB10x2_LE, P010_10LE, NV12_10LE40, Y444, v308, " \ + "RGBx, xRGB, BGRx, xBGR, RGB, BGR, Y42B, NV16, NV61, YUY2, YVYU, UYVY, " \ + "I420, YV12, NV12, NV21, Y41B, YUV9, YVU9, BGR16, RGB16 }" #endif GST_WL_API
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/wayland/gstwlwindow.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/wayland/gstwlwindow.c
Changed
@@ -26,6 +26,8 @@ #include "gstwlwindow.h" +#include "color-management-v1-client-protocol.h" +#include "color-representation-v1-client-protocol.h" #include "fullscreen-shell-unstable-v1-client-protocol.h" #include "single-pixel-buffer-v1-client-protocol.h" #include "viewporter-client-protocol.h" @@ -51,6 +53,8 @@ struct wp_viewport *video_viewport; struct xdg_surface *xdg_surface; struct xdg_toplevel *xdg_toplevel; + struct wp_color_management_surface_v1 *color_management_surface; + struct wp_color_representation_surface_v1 *color_representation_surface; gboolean configured; GCond configure_cond; GMutex configure_mutex; @@ -61,14 +65,25 @@ /* the size and position of the video_subsurface */ GstVideoRectangle video_rectangle; - /* the size of the video in the buffers */ + /* the size of the video in the buffers (unpadded) */ gint video_width, video_height; + /* the size of the video in the buffers (padded) */ + gint buffer_width, buffer_height; + + /* default window dimension used when the compositor does not chose a size */ + gint default_width, default_height; + /* video width scaled according to par */ gint scaled_width; + /* the crop rectangle */ + GstVideoRectangle crop; + enum wl_output_transform buffer_transform; + gboolean force_aspect_ratio; + /* when this is not set both the area_surface and the video_surface are not * visible and certain steps should be skipped */ gboolean is_area_surface_mapped; @@ -76,6 +91,8 @@ GMutex window_lock; GstWlBuffer *next_buffer; GstVideoInfo *next_video_info; + GstVideoMasteringDisplayInfo *next_minfo; + GstVideoContentLightLevel *next_linfo; GstWlBuffer *staged_buffer; gboolean clear_window; struct wl_callback *frame_callback; @@ -99,17 +116,24 @@ static void gst_wl_window_finalize (GObject * gobject); +static void gst_wl_window_update_geometry (GstWlWindow * self); + static void gst_wl_window_update_borders (GstWlWindow * self); static void gst_wl_window_commit_buffer (GstWlWindow * self, GstWlBuffer * buffer); +static 
void gst_wl_window_set_colorimetry (GstWlWindow * self,
+    const GstVideoColorimetry * colorimetry,
+    const GstVideoMasteringDisplayInfo * minfo,
+    const GstVideoContentLightLevel * linfo);
+
 static void
 handle_xdg_toplevel_close (void *data, struct xdg_toplevel *xdg_toplevel)
 {
   GstWlWindow *self = data;
 
-  GST_DEBUG ("XDG toplevel got a \"close\" event.");
+  GST_DEBUG_OBJECT (self, "XDG toplevel got a \"close\" event.");
   g_signal_emit (self, signals[CLOSED], 0);
 }
 
@@ -118,25 +142,38 @@
     int32_t width, int32_t height, struct wl_array *states)
 {
   GstWlWindow *self = data;
+  GstWlWindowPrivate *priv = gst_wl_window_get_instance_private (self);
   const uint32_t *state;
 
-  GST_DEBUG ("XDG toplevel got a \"configure\" event, %d, %d .",
+  GST_DEBUG_OBJECT (self, "XDG toplevel got a \"configure\" event, %d, %d .",
       width, height);
 
   wl_array_for_each (state, states) {
     switch (*state) {
       case XDG_TOPLEVEL_STATE_FULLSCREEN:
+        GST_DEBUG_OBJECT (self, "XDG top-level now FULLSCREEN");
+        break;
       case XDG_TOPLEVEL_STATE_MAXIMIZED:
+        GST_DEBUG_OBJECT (self, "XDG top-level now MAXIMIZED");
+        break;
      case XDG_TOPLEVEL_STATE_RESIZING:
+        GST_DEBUG_OBJECT (self, "XDG top-level being RESIZED");
+        break;
      case XDG_TOPLEVEL_STATE_ACTIVATED:
+        GST_DEBUG_OBJECT (self, "XDG top-level being ACTIVATED");
        break;
     }
   }
 
-  if (width <= 0 || height <= 0)
-    return;
+  if (width <= 0 || height <= 0) {
+    width = priv->default_width;
+    height = priv->default_height;
+  }
 
+  g_mutex_lock (&priv->configure_mutex);
+  priv->configured = FALSE;
   gst_wl_window_set_render_rectangle (self, 0, 0, width, height);
+  g_mutex_unlock (&priv->configure_mutex);
 }
 
 static const struct xdg_toplevel_listener xdg_toplevel_listener = {
@@ -156,6 +193,7 @@
   g_mutex_lock (&priv->configure_mutex);
   priv->configured = TRUE;
   g_cond_signal (&priv->configure_cond);
+  gst_wl_window_update_geometry (self);
   g_mutex_unlock (&priv->configure_mutex);
 }
 
@@ -210,6 +248,13 @@
   if (priv->video_viewport)
     wp_viewport_destroy (priv->video_viewport);
 
+  if (priv->color_management_surface)
+    wp_color_management_surface_v1_destroy (priv->color_management_surface);
+
+  if (priv->color_representation_surface)
+    wp_color_representation_surface_v1_destroy
+        (priv->color_representation_surface);
+
   wl_proxy_wrapper_destroy (priv->video_surface_wrapper);
   wl_subsurface_destroy (priv->video_subsurface);
   wl_surface_destroy (priv->video_surface);
@@ -243,6 +288,7 @@
   priv->display = g_object_ref (display);
   priv->render_lock = render_lock;
   g_cond_init (&priv->configure_cond);
+  priv->force_aspect_ratio = TRUE;
 
   compositor = gst_wl_display_get_compositor (display);
   priv->area_surface = wl_compositor_create_surface (compositor);
@@ -279,23 +325,69 @@
   return self;
 }
 
+/**
+ * gst_wl_window_ensure_fullscreen_for_output:
+ * @self: A #GstWlWindow
+ * @fullscreen: %TRUE to set fullscreen, %FALSE to unset it
+ * @output_name: (nullable): The name of the wl_output to fullscreen to
+ *
+ * Ensure the window fullscreen state matches the desired state. If an
+ * output_name is provided, and this output exists, the window will be set to
+ * fullscreen on that screen. Otherwise the compositor will decide.
+ * + * Since: 1.28 + */ void -gst_wl_window_ensure_fullscreen (GstWlWindow * self, gboolean fullscreen) +gst_wl_window_ensure_fullscreen_for_output (GstWlWindow * self, + gboolean fullscreen, const gchar * output_name) { GstWlWindowPrivate *priv; + GstWlOutput *output = NULL; + struct wl_output *wl_output = NULL; g_return_if_fail (self); - priv = gst_wl_window_get_instance_private (self); - if (fullscreen) - xdg_toplevel_set_fullscreen (priv->xdg_toplevel, NULL); - else + + if (!fullscreen) { xdg_toplevel_unset_fullscreen (priv->xdg_toplevel); + wl_display_flush (gst_wl_display_get_display (priv->display)); + return; + } + + if (output_name) { + output = gst_wl_display_get_output_by_name (priv->display, output_name); + if (output) + wl_output = gst_wl_output_get_wl_output (output); + else + GST_WARNING ("Could not find any output named '%s'", output_name); + } + + xdg_toplevel_set_fullscreen (priv->xdg_toplevel, wl_output); + wl_display_flush (gst_wl_display_get_display (priv->display)); + + // Unref last for thread safety + if (output) + g_object_unref (output); +} + +/** + * gst_wl_window_ensure_fullscreen: + * @self: A #GstWlWindow + * @fullscreen: %TRUE to set fullscreen, %FALSE to unset it + * + * Same as gst_wl_window_ensure_fullscreen_for_output() without specifying an + * output. 
+ */ +void +gst_wl_window_ensure_fullscreen (GstWlWindow * self, gboolean fullscreen) +{ + gst_wl_window_ensure_fullscreen_for_output (self, fullscreen, NULL); } GstWlWindow * -gst_wl_window_new_toplevel (GstWlDisplay * display, const GstVideoInfo * info, - gboolean fullscreen, GMutex * render_lock) +gst_wl_window_new_toplevel_full (GstWlDisplay * display, + const GstVideoInfo * info, gboolean fullscreen, const gchar * output_name, + GMutex * render_lock) { GstWlWindow *self; GstWlWindowPrivate *priv; @@ -316,7 +408,7 @@ priv->xdg_surface = xdg_wm_base_get_xdg_surface (xdg_wm_base, priv->area_surface); if (!priv->xdg_surface) { - GST_ERROR ("Unable to get xdg_surface"); + GST_ERROR_OBJECT (self, "Unable to get xdg_surface"); goto error; } xdg_surface_add_listener (priv->xdg_surface, &xdg_surface_listener, self); @@ -324,7 +416,7 @@ /* Then the toplevel */ priv->xdg_toplevel = xdg_surface_get_toplevel (priv->xdg_surface); if (!priv->xdg_toplevel) { - GST_ERROR ("Unable to get xdg_toplevel"); + GST_ERROR_OBJECT (self, "Unable to get xdg_toplevel"); goto error; } xdg_toplevel_add_listener (priv->xdg_toplevel, @@ -335,39 +427,55 @@ xdg_toplevel_set_app_id (priv->xdg_toplevel, "org.gstreamer.wayland"); } - gst_wl_window_ensure_fullscreen (self, fullscreen); + gst_wl_window_ensure_fullscreen_for_output (self, fullscreen, output_name); /* Finally, commit the xdg_surface state as toplevel */ priv->configured = FALSE; + + /* set the initial size to be the same as the reported video size */ + priv->default_width = + gst_util_uint64_scale_int_round (info->width, info->par_n, info->par_d); + priv->default_height = info->height; + gst_wl_window_set_render_rectangle (self, 0, 0, priv->default_width, + priv->default_height); + + GST_INFO_OBJECT (self, "Configured default rectangle to %ix%i", + priv->default_width, priv->default_height); + wl_surface_commit (priv->area_surface); wl_display_flush (gst_wl_display_get_display (display)); g_mutex_lock (&priv->configure_mutex); - 
timeout = g_get_monotonic_time () + 100 * G_TIME_SPAN_MILLISECOND; + timeout = g_get_monotonic_time () + 5 * G_TIME_SPAN_SECOND; while (!priv->configured) { if (!g_cond_wait_until (&priv->configure_cond, &priv->configure_mutex, timeout)) { - GST_WARNING ("The compositor did not send configure event."); + GST_WARNING_OBJECT (self, + "The compositor did not send configure event."); break; } } g_mutex_unlock (&priv->configure_mutex); } else if (fullscreen_shell) { + GstWlOutput *output = NULL; + struct wl_output *wl_output = NULL; + if (output_name) { + output = gst_wl_display_get_output_by_name (priv->display, output_name); + wl_output = gst_wl_output_get_wl_output (output); + } + zwp_fullscreen_shell_v1_present_surface (fullscreen_shell, - priv->area_surface, ZWP_FULLSCREEN_SHELL_V1_PRESENT_METHOD_ZOOM, NULL); + priv->area_surface, ZWP_FULLSCREEN_SHELL_V1_PRESENT_METHOD_ZOOM, + wl_output); + + if (output) + gst_object_unref (output); } else { - GST_ERROR ("Unable to use either xdg_wm_base or zwp_fullscreen_shell."); + GST_ERROR_OBJECT (self, + "Unable to use either xdg_wm_base or zwp_fullscreen_shell."); goto error; } - /* render_rectangle is already set via toplevel_configure in - * xdg_shell fullscreen mode */ - if (!(xdg_wm_base && fullscreen)) { - /* set the initial size to be the same as the reported video size */ - gint width = - gst_util_uint64_scale_int_round (info->width, info->par_n, info->par_d); - gst_wl_window_set_render_rectangle (self, 0, 0, width, info->height); - } return self; @@ -377,6 +485,14 @@ } GstWlWindow * +gst_wl_window_new_toplevel (GstWlDisplay * display, const GstVideoInfo * info, + gboolean fullscreen, GMutex * render_lock) +{ + return gst_wl_window_new_toplevel_full (display, info, fullscreen, NULL, + render_lock); +} + +GstWlWindow * gst_wl_window_new_in_surface (GstWlDisplay * display, struct wl_surface *parent, GMutex * render_lock) { @@ -449,15 +565,19 @@ } static void -gst_wl_window_resize_video_surface (GstWlWindow * self, 
gboolean commit) +gst_wl_window_resize_video_surface (GstWlWindow * self) { GstWlWindowPrivate *priv = gst_wl_window_get_instance_private (self); GstVideoRectangle src = { 0, }; + GstVideoRectangle wp_src = { 0, }; GstVideoRectangle dst = { 0, }; GstVideoRectangle res; - int wp_src_width; - int wp_src_height; + /* viewport coordinates will be based on the trasnformed surface */ + wl_surface_set_buffer_transform (priv->video_surface_wrapper, + priv->buffer_transform); + + /* adjust the width/height base on the rotation */ switch (priv->buffer_transform) { case WL_OUTPUT_TRANSFORM_NORMAL: case WL_OUTPUT_TRANSFORM_180: @@ -465,8 +585,8 @@ case WL_OUTPUT_TRANSFORM_FLIPPED_180: src.w = priv->scaled_width; src.h = priv->video_height; - wp_src_width = priv->video_width; - wp_src_height = priv->video_height; + wp_src.w = priv->crop.w; + wp_src.h = priv->crop.h; break; case WL_OUTPUT_TRANSFORM_90: case WL_OUTPUT_TRANSFORM_270: @@ -474,9 +594,48 @@ case WL_OUTPUT_TRANSFORM_FLIPPED_270: src.w = priv->video_height; src.h = priv->scaled_width; - wp_src_width = priv->video_height; - wp_src_height = priv->video_width; + wp_src.w = priv->crop.h; + wp_src.h = priv->crop.w; + break; + default: + g_assert_not_reached (); + } + + /* apply the x/y crop based on the transformation */ + switch (priv->buffer_transform) { + case WL_OUTPUT_TRANSFORM_NORMAL: + wp_src.x = priv->crop.x; + wp_src.y = priv->crop.y; + break; + case WL_OUTPUT_TRANSFORM_180: + wp_src.x = priv->buffer_width - (priv->crop.w + priv->crop.x); + wp_src.y = priv->buffer_height - (priv->crop.h + priv->crop.y); + break; + case WL_OUTPUT_TRANSFORM_FLIPPED: + wp_src.x = priv->buffer_width - (priv->crop.w + priv->crop.x); + wp_src.y = priv->crop.y; + break; + case WL_OUTPUT_TRANSFORM_FLIPPED_180: + wp_src.x = priv->crop.x; + wp_src.y = priv->buffer_height - (priv->crop.h + priv->crop.y); + break; + case WL_OUTPUT_TRANSFORM_90: + wp_src.x = priv->buffer_height - (priv->crop.h + priv->crop.y); + wp_src.y = priv->crop.x; + 
break; + case WL_OUTPUT_TRANSFORM_270: + wp_src.x = priv->crop.y; + wp_src.y = priv->buffer_width - (priv->crop.w + priv->crop.x); + break; + case WL_OUTPUT_TRANSFORM_FLIPPED_270: + wp_src.x = priv->buffer_height - (priv->crop.h + priv->crop.y); + wp_src.y = priv->buffer_width - (priv->crop.w + priv->crop.x); break; + case WL_OUTPUT_TRANSFORM_FLIPPED_90: + wp_src.x = priv->crop.y; + wp_src.y = priv->crop.x; + break; + default: g_assert_not_reached (); } @@ -486,21 +645,24 @@ /* center the video_subsurface inside area_subsurface */ if (priv->video_viewport) { - gst_video_center_rect (&src, &dst, &res, TRUE); - wp_viewport_set_source (priv->video_viewport, wl_fixed_from_int (0), - wl_fixed_from_int (0), wl_fixed_from_int (wp_src_width), - wl_fixed_from_int (wp_src_height)); + if (!priv->force_aspect_ratio) + res = dst; + else + gst_video_center_rect (&src, &dst, &res, TRUE); + wp_viewport_set_source (priv->video_viewport, wl_fixed_from_int (wp_src.x), + wl_fixed_from_int (wp_src.y), wl_fixed_from_int (wp_src.w), + wl_fixed_from_int (wp_src.h)); + + /* The protocol does not allow for a size set to 0 */ + res.w = MAX (res.w, 1); + res.h = MAX (res.h, 1); + wp_viewport_set_destination (priv->video_viewport, res.w, res.h); } else { gst_video_center_rect (&src, &dst, &res, FALSE); } wl_subsurface_set_position (priv->video_subsurface, res.x, res.y); - wl_surface_set_buffer_transform (priv->video_surface_wrapper, - priv->buffer_transform); - - if (commit) - wl_surface_commit (priv->video_surface_wrapper); priv->video_rectangle = res; } @@ -535,7 +697,7 @@ GstWlWindowPrivate *priv = gst_wl_window_get_instance_private (self); GstWlBuffer *next_buffer; - GST_INFO ("frame_redraw_cb "); + GST_DEBUG_OBJECT (self, "frame_redraw_cb"); wl_callback_destroy (callback); priv->frame_callback = NULL; @@ -556,22 +718,76 @@ frame_redraw_callback }; +static gboolean +gst_wl_window_crop_rectangle_changed (GstWlWindow * self, + const GstVideoRectangle * pending_crop) +{ + GstWlWindowPrivate 
*priv = gst_wl_window_get_instance_private (self); + + if (priv->crop.x == pending_crop->x + && priv->crop.y == pending_crop->y + && priv->crop.w == pending_crop->w && priv->crop.h == pending_crop->h) + return FALSE; + + return TRUE; +} + static void gst_wl_window_commit_buffer (GstWlWindow * self, GstWlBuffer * buffer) { GstWlWindowPrivate *priv = gst_wl_window_get_instance_private (self); GstVideoInfo *info = priv->next_video_info; + GstVideoMasteringDisplayInfo *minfo = priv->next_minfo; + GstVideoContentLightLevel *linfo = priv->next_linfo; struct wl_callback *callback; + gboolean needs_layout_update = FALSE; + GstVideoMeta *vmeta = gst_wl_buffer_get_video_meta (buffer); + GstVideoCropMeta *cmeta = gst_wl_buffer_get_video_crop_meta (buffer); + GstVideoRectangle crop = priv->crop; if (G_UNLIKELY (info)) { priv->scaled_width = gst_util_uint64_scale_int_round (info->width, info->par_n, info->par_d); - priv->video_width = info->width; - priv->video_height = info->height; + priv->video_width = priv->buffer_width = info->width; + priv->video_height = priv->buffer_height = info->height; + + /* we don't have video_width/height saved initially, so if we didn't have a + * crop meta the width/height needs to be fixed from its reset value of 0 */ + if (crop.w == 0) + crop.w = priv->video_width; + if (crop.h == 0) + crop.h = priv->video_height; + + needs_layout_update = TRUE; + } + + if (vmeta) { + if (priv->buffer_width != vmeta->width + || priv->buffer_height != vmeta->height) { + priv->buffer_width = vmeta->width; + priv->buffer_height = vmeta->height; + needs_layout_update = TRUE; + } + } + + if (cmeta) { + crop.x = cmeta->x; + crop.y = cmeta->y; + crop.w = cmeta->width; + crop.h = cmeta->height; + } + + if (gst_wl_window_crop_rectangle_changed (self, &crop)) { + priv->crop = crop; + needs_layout_update = TRUE; + } + if (G_UNLIKELY (needs_layout_update)) { wl_subsurface_set_sync (priv->video_subsurface); - gst_wl_window_resize_video_surface (self, FALSE); + 
gst_wl_window_resize_video_surface (self); gst_wl_window_set_opaque (self, info); + + gst_wl_window_set_colorimetry (self, &info->colorimetry, minfo, linfo); } if (G_LIKELY (buffer)) { @@ -599,13 +815,15 @@ priv->clear_window = FALSE; } - if (G_UNLIKELY (info)) { + if (G_UNLIKELY (needs_layout_update)) { /* commit also the parent (area_surface) in order to change * the position of the video_subsurface */ wl_surface_commit (priv->area_surface_wrapper); wl_subsurface_set_desync (priv->video_subsurface); gst_video_info_free (priv->next_video_info); priv->next_video_info = NULL; + g_clear_pointer (&priv->next_minfo, g_free); + g_clear_pointer (&priv->next_linfo, g_free); } } @@ -638,6 +856,14 @@ gst_wl_window_render (GstWlWindow * self, GstWlBuffer * buffer, const GstVideoInfo * info) { + return gst_wl_window_render_hdr (self, buffer, info, NULL, NULL); +} + +gboolean +gst_wl_window_render_hdr (GstWlWindow * self, GstWlBuffer * buffer, + const GstVideoInfo * info, const GstVideoMasteringDisplayInfo * minfo, + const GstVideoContentLightLevel * linfo) +{ GstWlWindowPrivate *priv = gst_wl_window_get_instance_private (self); gboolean ret = TRUE; @@ -645,8 +871,20 @@ gst_wl_buffer_ref_gst_buffer (buffer); g_mutex_lock (&priv->window_lock); - if (G_UNLIKELY (info)) + if (G_UNLIKELY (info)) { + gst_video_info_free (priv->next_video_info); priv->next_video_info = gst_video_info_copy (info); + } + + if (G_UNLIKELY (minfo)) { + g_clear_pointer (&priv->next_minfo, g_free); + priv->next_minfo = g_memdup2 (minfo, sizeof (*minfo)); + } + + if (G_UNLIKELY (linfo)) { + g_clear_pointer (&priv->next_linfo, g_free); + priv->next_linfo = g_memdup2 (linfo, sizeof (*linfo)); + } if (priv->next_buffer && priv->staged_buffer) { GST_LOG_OBJECT (self, "buffer %p dropped (replaced)", priv->staged_buffer); @@ -670,6 +908,34 @@ return ret; } +/** + * gst_wl_window_flush: + * @self: a #GstWlWindow + * + * Releases and drops the currently staged buffer associated with the window, + * if one exists. 
This function is thread-safe and will set the staged buffer + * pointer to NULL after unreferencing it. + * + * Returns: %TRUE if flush successful, %FALSE otherwise. + * + * Since: 1.28 + */ +gboolean +gst_wl_window_flush (GstWlWindow * self) +{ + GstWlWindowPrivate *priv = gst_wl_window_get_instance_private (self); + + g_mutex_lock (&priv->window_lock); + if (priv->staged_buffer) { + GST_LOG_OBJECT (self, "drop buffer %p", priv->staged_buffer); + gst_wl_buffer_unref_buffer (priv->staged_buffer); + priv->staged_buffer = NULL; + } + g_mutex_unlock (&priv->window_lock); + + return TRUE; +} + /* Update the buffer used to draw black borders. When we have viewporter * support, this is a scaled up 1x1 image, and without we need an black image * the size of the rendering areay. */ @@ -758,7 +1024,8 @@ if (priv->scaled_width != 0) { wl_subsurface_set_sync (priv->video_subsurface); - gst_wl_window_resize_video_surface (self, TRUE); + gst_wl_window_resize_video_surface (self); + wl_surface_commit (priv->video_surface_wrapper); } wl_surface_commit (priv->area_surface_wrapper); @@ -828,3 +1095,315 @@ gst_wl_window_update_geometry (self); } + +enum ImageDescriptionFeedback +{ + IMAGE_DESCRIPTION_FEEDBACK_UNKNOWN = 0, + IMAGE_DESCRIPTION_FEEDBACK_READY, + IMAGE_DESCRIPTION_FEEDBACK_FAILED, +}; + +static void +image_description_failed (void *data, + struct wp_image_description_v1 *wp_image_description_v1, uint32_t cause, + const char *msg) +{ + enum ImageDescriptionFeedback *image_description_feedback = data; + + *image_description_feedback = IMAGE_DESCRIPTION_FEEDBACK_FAILED; +} + +static void +image_description_ready (void *data, + struct wp_image_description_v1 *wp_image_description_v1, uint32_t identity) +{ + enum ImageDescriptionFeedback *image_description_feedback = data; + + *image_description_feedback = IMAGE_DESCRIPTION_FEEDBACK_READY; +} + +static const struct wp_image_description_v1_listener description_listerer = { + .failed = image_description_failed, + .ready = 
image_description_ready, +}; + +static enum wp_color_manager_v1_transfer_function +gst_colorimetry_tf_to_wl (GstVideoTransferFunction tf) +{ + switch (tf) { + case GST_VIDEO_TRANSFER_SRGB: + return WP_COLOR_MANAGER_V1_TRANSFER_FUNCTION_SRGB; + case GST_VIDEO_TRANSFER_BT601: + case GST_VIDEO_TRANSFER_BT709: + case GST_VIDEO_TRANSFER_BT2020_10: + return WP_COLOR_MANAGER_V1_TRANSFER_FUNCTION_BT1886; + case GST_VIDEO_TRANSFER_SMPTE2084: + return WP_COLOR_MANAGER_V1_TRANSFER_FUNCTION_ST2084_PQ; + case GST_VIDEO_TRANSFER_ARIB_STD_B67: + return WP_COLOR_MANAGER_V1_TRANSFER_FUNCTION_HLG; + default: + GST_WARNING ("Transfer function not handled"); + return 0; + } +} + +static enum wp_color_manager_v1_primaries +gst_colorimetry_primaries_to_wl (GstVideoColorPrimaries primaries) +{ + switch (primaries) { + case GST_VIDEO_COLOR_PRIMARIES_BT709: + return WP_COLOR_MANAGER_V1_PRIMARIES_SRGB; + case GST_VIDEO_COLOR_PRIMARIES_SMPTE170M: + return WP_COLOR_MANAGER_V1_PRIMARIES_NTSC; + case GST_VIDEO_COLOR_PRIMARIES_BT2020: + return WP_COLOR_MANAGER_V1_PRIMARIES_BT2020; + default: + GST_WARNING ("Primaries not handled"); + return 0; + } +} + +static enum wp_color_representation_surface_v1_coefficients +gst_colorimetry_matrix_to_wl (GstVideoColorMatrix matrix) +{ + switch (matrix) { + case GST_VIDEO_COLOR_MATRIX_RGB: + return WP_COLOR_REPRESENTATION_SURFACE_V1_COEFFICIENTS_IDENTITY; + case GST_VIDEO_COLOR_MATRIX_BT709: + return WP_COLOR_REPRESENTATION_SURFACE_V1_COEFFICIENTS_BT709; + case GST_VIDEO_COLOR_MATRIX_BT601: + return WP_COLOR_REPRESENTATION_SURFACE_V1_COEFFICIENTS_BT601; + case GST_VIDEO_COLOR_MATRIX_BT2020: + return WP_COLOR_REPRESENTATION_SURFACE_V1_COEFFICIENTS_BT2020; + default: + GST_WARNING ("Matrix not handled"); + return 0; + } +} + +static enum wp_color_representation_surface_v1_range +gst_colorimetry_range_to_wl (GstVideoColorRange range) +{ + switch (range) { + case GST_VIDEO_COLOR_RANGE_0_255: + return WP_COLOR_REPRESENTATION_SURFACE_V1_RANGE_FULL; + case 
GST_VIDEO_COLOR_RANGE_16_235: + return WP_COLOR_REPRESENTATION_SURFACE_V1_RANGE_LIMITED; + default: + GST_WARNING ("Range not handled"); + return 0; + } +} + +static void +gst_wl_window_set_image_description (GstWlWindow * self, + const GstVideoColorimetry * colorimetry, + const GstVideoMasteringDisplayInfo * minfo, + const GstVideoContentLightLevel * linfo) +{ + GstWlWindowPrivate *priv = gst_wl_window_get_instance_private (self); + struct wl_display *wl_display; + struct wp_color_manager_v1 *color_manager; + struct wp_color_manager_v1 *color_manager_wrapper = NULL; + struct wl_event_queue *color_manager_queue = NULL; + struct wp_image_description_v1 *image_description = NULL; + struct wp_image_description_creator_params_v1 *params; + enum ImageDescriptionFeedback image_description_feedback = + IMAGE_DESCRIPTION_FEEDBACK_UNKNOWN; + uint32_t wl_transfer_function; + uint32_t wl_primaries; + + if (!gst_wl_display_is_color_parametric_creator_supported (priv->display)) { + GST_INFO_OBJECT (self, + "Color management or parametric creator not supported"); + return; + } + + color_manager = gst_wl_display_get_color_manager_v1 (priv->display); + if (!priv->color_management_surface) { + priv->color_management_surface = + wp_color_manager_v1_get_surface (color_manager, + priv->video_surface_wrapper); + } + + wl_transfer_function = gst_colorimetry_tf_to_wl (colorimetry->transfer); + wl_primaries = gst_colorimetry_primaries_to_wl (colorimetry->primaries); + + if (!gst_wl_display_is_color_transfer_function_supported (priv->display, + wl_transfer_function) || + !gst_wl_display_are_color_primaries_supported (priv->display, + wl_primaries)) { + wp_color_management_surface_v1_unset_image_description + (priv->color_management_surface); + + GST_INFO_OBJECT (self, + "Can not create image description: primaries or transfer function not supported"); + return; + } + + color_manager_wrapper = wl_proxy_create_wrapper (color_manager); + wl_display = gst_wl_display_get_display 
(priv->display);
+#ifdef HAVE_WL_EVENT_QUEUE_NAME
+  color_manager_queue = wl_display_create_queue_with_name (wl_display,
+      "GStreamer color manager queue");
+#else
+  color_manager_queue = wl_display_create_queue (wl_display);
+#endif
+  wl_proxy_set_queue ((struct wl_proxy *) color_manager_wrapper,
+      color_manager_queue);
+
+  params =
+      wp_color_manager_v1_create_parametric_creator (color_manager_wrapper);
+
+  wp_image_description_creator_params_v1_set_tf_named (params,
+      wl_transfer_function);
+  wp_image_description_creator_params_v1_set_primaries_named (params,
+      wl_primaries);
+
+  if (gst_wl_display_is_color_mastering_display_supported (priv->display)
+      && minfo) {
+    /* first validate our luminance range */
+    guint min_luminance = minfo->min_display_mastering_luminance / 10000;
+    guint max_luminance =
+        MAX (min_luminance + 1, minfo->max_display_mastering_luminance / 10000);
+
+    /* We need to convert from 0.00002 unit to 0.000001 */
+    const guint f = 20;
+    wp_image_description_creator_params_v1_set_mastering_display_primaries
+        (params,
+        minfo->display_primaries[0].x * f, minfo->display_primaries[0].y * f,
+        minfo->display_primaries[1].x * f, minfo->display_primaries[1].y * f,
+        minfo->display_primaries[2].x * f, minfo->display_primaries[2].y * f,
+        minfo->white_point.x * f, minfo->white_point.y * f);
+    wp_image_description_creator_params_v1_set_mastering_luminance (params,
+        minfo->min_display_mastering_luminance, max_luminance);
+
+    /*
+     * FIXME it's unclear what makes a color volume exceed the primary volume,
+     * and how to verify it; ignoring this aspect for now, but it may need to
+     * be revisited.
+ */ + + /* We can't set the light level if we don't know the luminance range */ + if (linfo) { + guint maxFALL = CLAMP (min_luminance + 1, + linfo->max_frame_average_light_level, max_luminance); + guint maxCLL = + CLAMP (maxFALL, linfo->max_content_light_level, max_luminance); + wp_image_description_creator_params_v1_set_max_cll (params, maxCLL); + wp_image_description_creator_params_v1_set_max_fall (params, maxFALL); + } + } + + image_description = wp_image_description_creator_params_v1_create (params); + wp_image_description_v1_add_listener (image_description, + &description_listerer, &image_description_feedback); + + while (image_description_feedback == IMAGE_DESCRIPTION_FEEDBACK_UNKNOWN) { + if (wl_display_dispatch_queue (wl_display, color_manager_queue) == -1) + break; + } + + if (image_description_feedback == IMAGE_DESCRIPTION_FEEDBACK_READY) { + wp_color_management_surface_v1_set_image_description + (priv->color_management_surface, image_description, + WP_COLOR_MANAGER_V1_RENDER_INTENT_PERCEPTUAL); + + GST_INFO_OBJECT (self, "Successfully set parametric image description"); + } else { + wp_color_management_surface_v1_unset_image_description + (priv->color_management_surface); + + GST_INFO_OBJECT (self, "Creating image description failed"); + } + + /* Setting the image description has copy semantics */ + wp_image_description_v1_destroy (image_description); + wl_proxy_wrapper_destroy (color_manager_wrapper); + wl_event_queue_destroy (color_manager_queue); +} + +static void +gst_wl_window_set_color_representation (GstWlWindow * self, + const GstVideoColorimetry * colorimetry) +{ + GstWlWindowPrivate *priv = gst_wl_window_get_instance_private (self); + struct wp_color_representation_manager_v1 *cr_manager; + uint32_t wl_alpha_mode; + uint32_t wl_coefficients; + uint32_t wl_range; + gboolean alpha_mode_supported; + gboolean coefficients_supported; + + cr_manager = + gst_wl_display_get_color_representation_manager_v1 (priv->display); + if (!cr_manager) { + 
GST_INFO_OBJECT (self, "Color representation not supported"); + return; + } + + wl_alpha_mode = WP_COLOR_REPRESENTATION_SURFACE_V1_ALPHA_MODE_STRAIGHT; + alpha_mode_supported = + gst_wl_display_is_color_alpha_mode_supported (priv->display, + wl_alpha_mode); + + wl_coefficients = gst_colorimetry_matrix_to_wl (colorimetry->matrix); + wl_range = gst_colorimetry_range_to_wl (colorimetry->range); + coefficients_supported = + gst_wl_display_are_color_coefficients_supported (priv->display, + wl_coefficients, wl_range); + + if (alpha_mode_supported || coefficients_supported) { + if (!priv->color_representation_surface) { + priv->color_representation_surface = + wp_color_representation_manager_v1_get_surface (cr_manager, + priv->video_surface_wrapper); + } + + if (alpha_mode_supported) + wp_color_representation_surface_v1_set_alpha_mode + (priv->color_representation_surface, wl_alpha_mode); + + if (coefficients_supported) + wp_color_representation_surface_v1_set_coefficients_and_range + (priv->color_representation_surface, wl_coefficients, wl_range); + + GST_INFO_OBJECT (self, "Successfully set color representation"); + } else { + if (priv->color_representation_surface) { + wp_color_representation_surface_v1_destroy + (priv->color_representation_surface); + priv->color_representation_surface = NULL; + } + + GST_INFO_OBJECT (self, "Coefficients and range not supported"); + } +} + +static void +gst_wl_window_set_colorimetry (GstWlWindow * self, + const GstVideoColorimetry * colorimetry, + const GstVideoMasteringDisplayInfo * minfo, + const GstVideoContentLightLevel * linfo) +{ + GST_OBJECT_LOCK (self); + + GST_INFO_OBJECT (self, "Trying to set colorimetry: %s", + gst_video_colorimetry_to_string (colorimetry)); + + gst_wl_window_set_image_description (self, colorimetry, minfo, linfo); + gst_wl_window_set_color_representation (self, colorimetry); + + GST_OBJECT_UNLOCK (self); +} + +void +gst_wl_window_set_force_aspect_ratio (GstWlWindow * self, + gboolean force_aspect_ratio) +{ 
+ GstWlWindowPrivate *priv = gst_wl_window_get_instance_private (self); + + priv->force_aspect_ratio = force_aspect_ratio; + + gst_wl_window_update_geometry (self); +}

_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/wayland/gstwlwindow.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/wayland/gstwlwindow.h
Changed
@@ -28,22 +28,29 @@ #define GST_TYPE_WL_WINDOW (gst_wl_window_get_type ()) GST_WL_API -G_DECLARE_FINAL_TYPE (GstWlWindow, gst_wl_window, GST, WL_WINDOW, GObject); +G_DECLARE_FINAL_TYPE (GstWlWindow, gst_wl_window, GST, WL_WINDOW, GstObject); struct _GstWlWindow { - GObject parent_instance; + GstObject parent_instance; }; GST_WL_API -void gst_wl_window_ensure_fullscreen (GstWlWindow * self, - gboolean fullscreen); +void gst_wl_window_ensure_fullscreen (GstWlWindow * self, gboolean fullscreen); + +GST_WL_API +void gst_wl_window_ensure_fullscreen_for_output (GstWlWindow * self, + gboolean fullscreen, const gchar * output_name); GST_WL_API GstWlWindow *gst_wl_window_new_toplevel (GstWlDisplay * display, const GstVideoInfo * info, gboolean fullscreen, GMutex * render_lock); GST_WL_API +GstWlWindow * gst_wl_window_new_toplevel_full (GstWlDisplay * display, const GstVideoInfo * info, + gboolean fullscreen, const gchar * output_name, GMutex * render_lock); + +GST_WL_API GstWlWindow *gst_wl_window_new_in_surface (GstWlDisplay * display, struct wl_surface * parent, GMutex * render_lock); @@ -64,6 +71,14 @@ const GstVideoInfo * info); GST_WL_API +gboolean gst_wl_window_flush (GstWlWindow * self); + +GST_WL_API +gboolean gst_wl_window_render_hdr (GstWlWindow * self, GstWlBuffer * buffer, + const GstVideoInfo * info, const GstVideoMasteringDisplayInfo *minfo, + const GstVideoContentLightLevel *linfo); + +GST_WL_API void gst_wl_window_set_render_rectangle (GstWlWindow * self, gint x, gint y, gint w, gint h); @@ -74,4 +89,8 @@ void gst_wl_window_set_rotate_method (GstWlWindow *self, GstVideoOrientationMethod rotate_method); +GST_WL_API +void gst_wl_window_set_force_aspect_ratio (GstWlWindow * self, + gboolean force_aspect_ratio); + G_END_DECLS
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/wayland/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/wayland/meson.build
Changed
@@ -1,7 +1,7 @@ wl_req = '>= 1.15' wl_client_dep = dependency('wayland-client', version: wl_req, required: get_option('wayland')) libdrm_dep = dependency('libdrm', version: '>= 2.4.98', required: get_option('wayland')) -wl_proto_req = '>= 1.26' +wl_proto_req = '>= 1.44' wl_protocol_dep = dependency('wayland-protocols', version: wl_proto_req, required: get_option('wayland')) wl_scanner = find_program('wayland-scanner', required: get_option('wayland')) # Also used in ext/wayland @@ -13,6 +13,7 @@ 'gstwlcontext.c', 'gstwldisplay.c', 'gstwllinuxdmabuf.c', + 'gstwloutput.c', 'gstwlshmallocator.c', 'gstwlvideobufferpool.c', 'gstwlvideoformat.c', @@ -25,6 +26,7 @@ 'gstwlcontext.h', 'gstwldisplay.h', 'gstwllinuxdmabuf.h', + 'gstwloutput.h', 'gstwlshmallocator.h', 'gstwlvideobufferpool.h', 'gstwlvideoformat.h', @@ -36,6 +38,8 @@ protocols_datadir = wl_protocol_dep.get_variable('pkgdatadir') protocol_defs = [ + ['color-management', 'staging', 'v1'], + ['color-representation', 'staging', 'v1'], ['viewporter', 'stable', ], ['linux-dmabuf', 'unstable', 'v1', ], ['fullscreen-shell', 'unstable', 'v1', ], @@ -45,22 +49,28 @@ protocols_files = [] foreach protodef: protocol_defs - proto_name = protodef.get(0) - proto_stability = protodef.get(1) - if proto_stability == 'stable' - output_base = proto_name + proto_name = protodef[0] + proto_stability = protodef[1] + + if proto_stability == 'internal' + base_file = proto_name + xml_path = 'protocols' / proto_name + '.xml' + elif proto_stability == 'stable' + base_file = proto_name + xml_path = protocols_datadir / 'stable' / proto_name / (base_file + '.xml') + elif proto_stability == 'unstable' + base_file = '@0@-unstable-@1@'.format(proto_name, protodef[2]) + xml_path = protocols_datadir / 'unstable' / proto_name / (base_file + '.xml') elif proto_stability == 'staging' - proto_version = protodef.get(2) - output_base = f'@proto_name@-@proto_version@' + base_file = '@0@-@1@'.format(proto_name, protodef[2]) + xml_path = protocols_datadir / 'staging' / proto_name / (base_file + '.xml') else - proto_version = protodef.get(2) - output_base = f'@proto_name@-@proto_stability@-@proto_version@' + error('Unsupported protocol stability') endif - input = protocols_datadir / proto_stability / proto_name / f'@output_base@.xml' - protocols_files += custom_target(f'@output_base@ client header', - input: input, - output: f'@output_base@-client-protocol.h', + protocols_files += custom_target(f'@base_file@ client header', + input: xml_path, + output: f'@base_file@-client-protocol.h', command: [wl_scanner, 'client-header', @@ -68,9 +78,9 @@ ], ) - protocols_files += custom_target(f'@output_base@ source', - input: input, - output: f'@output_base@-protocol.c', + protocols_files += custom_target(f'@base_file@ source', + input: xml_path, + output: f'@base_file@-protocol.c', command: [wl_scanner, 'private-code',
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/wayland/wayland.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/wayland/wayland.h
Changed
@@ -31,6 +31,7 @@ #include <gst/wayland/gstwl_fwd.h> #include <gst/wayland/gstwlbuffer.h> #include <gst/wayland/gstwlcontext.h> +#include <gst/wayland/gstwloutput.h> #include <gst/wayland/gstwldisplay.h> #include <gst/wayland/gstwllinuxdmabuf.h> #include <gst/wayland/gstwlshmallocator.h>
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/webrtc/datachannel.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/webrtc/datachannel.c
Changed
@@ -332,10 +332,13 @@ g_signal_new ("on-buffered-amount-low", G_TYPE_FROM_CLASS (klass), G_SIGNAL_RUN_LAST, 0, NULL, NULL, NULL, G_TYPE_NONE, 0); +#ifndef GST_REMOVE_DEPRECATED /** * GstWebRTCDataChannel::send-data: * @object: the #GstWebRTCDataChannel * @data: (nullable): a #GBytes with the data + * + * Deprecated: 1.22: Use gst_webrtc_data_channel_send_data_full() instead */ gst_webrtc_data_channel_signals[SIGNAL_SEND_DATA] = g_signal_new_class_handler ("send-data", G_TYPE_FROM_CLASS (klass), @@ -347,12 +350,15 @@ * GstWebRTCDataChannel::send-string: * @object: the #GstWebRTCDataChannel * @data: (nullable): the data to send as a string + * + * Deprecated: 1.22: Use gst_webrtc_data_channel_send_string_full() instead */ gst_webrtc_data_channel_signals[SIGNAL_SEND_STRING] = g_signal_new_class_handler ("send-string", G_TYPE_FROM_CLASS (klass), - G_SIGNAL_RUN_LAST | G_SIGNAL_ACTION, + G_SIGNAL_RUN_LAST | G_SIGNAL_ACTION | G_SIGNAL_DEPRECATED, G_CALLBACK (gst_webrtc_data_channel_send_string), NULL, NULL, NULL, G_TYPE_NONE, 1, G_TYPE_STRING); +#endif /** * GstWebRTCDataChannel::close: @@ -508,12 +514,15 @@ gst_webrtc_data_channel_signals[SIGNAL_ON_BUFFERED_AMOUNT_LOW], 0); } +#ifndef GST_REMOVE_DEPRECATED /** * gst_webrtc_data_channel_send_data: * @channel: a #GstWebRTCDataChannel * @data: (nullable): a #GBytes or %NULL * * Send @data as a data message over @channel. + * + * Deprecated: 1.22: Use gst_webrtc_data_channel_send_data_full() instead */ void gst_webrtc_data_channel_send_data (GstWebRTCDataChannel * channel, @@ -526,6 +535,7 @@ klass = GST_WEBRTC_DATA_CHANNEL_GET_CLASS (channel); (void) klass->send_data (channel, data, NULL); } +#endif /** * gst_webrtc_data_channel_send_data_full: @@ -551,12 +561,15 @@ return klass->send_data (channel, data, error); } +#ifndef GST_REMOVE_DEPRECATED /** * gst_webrtc_data_channel_send_string: * @channel: a #GstWebRTCDataChannel * @str: (nullable): a string or %NULL * * Send @str as a string message over @channel. 
+ * + * Deprecated: 1.22: Use gst_webrtc_data_channel_send_string_full() instead */ void gst_webrtc_data_channel_send_string (GstWebRTCDataChannel * channel, @@ -569,6 +582,7 @@ klass = GST_WEBRTC_DATA_CHANNEL_GET_CLASS (channel); (void) klass->send_string (channel, str, NULL); } +#endif /** * gst_webrtc_data_channel_send_string_full:
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/webrtc/ice.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/webrtc/ice.c
Changed
@@ -314,6 +314,8 @@ * Returns: FALSE on failure, otherwise @local_stats @remote_stats will be set * * Since: 1.22 + * + * Deprecated: 1.28: Use gst_webrtc_ice_transport_get_selected_candidate_pair(). */ gboolean gst_webrtc_ice_get_selected_pair (GstWebRTCICE * ice, @@ -341,6 +343,9 @@ if (stats) { g_free (stats->ipaddr); g_free (stats->url); + g_free (stats->ABI.abi.foundation); + g_free (stats->ABI.abi.related_address); + g_free (stats->ABI.abi.username_fragment); } g_free (stats); @@ -364,6 +369,9 @@ copy->ipaddr = g_strdup (stats->ipaddr); copy->url = g_strdup (stats->url); + copy->ABI.abi.foundation = g_strdup (stats->ABI.abi.foundation); + copy->ABI.abi.related_address = g_strdup (stats->ABI.abi.related_address); + copy->ABI.abi.username_fragment = g_strdup (stats->ABI.abi.username_fragment); return copy; } @@ -497,6 +505,115 @@ return GST_WEBRTC_ICE_GET_CLASS (ice)->get_http_proxy (ice); } +/** + * gst_webrtc_ice_close: + * @ice: The #GstWebRTCICE + * @promise: (transfer none) (nullable): a #GstPromise to be notified when the task is + * complete. + * + * Invoke the close procedure as specified in + * https://www.w3.org/TR/webrtc/#dom-rtcpeerconnection-close. 
+ * + * Since: 1.28 + */ +void +gst_webrtc_ice_close (GstWebRTCICE * ice, GstPromise * promise) +{ + g_return_if_fail (GST_IS_WEBRTC_ICE (ice)); + g_assert (GST_WEBRTC_ICE_GET_CLASS (ice)->close); + + GST_WEBRTC_ICE_GET_CLASS (ice)->close (ice, promise); +} + +/** + * gst_webrtc_ice_candidate_free: + * @candidate: The #GstWebRTCICECandidate to be free'd + * + * Helper function to free #GstWebRTCICECandidate + * + * Since: 1.28 + */ +void +gst_webrtc_ice_candidate_free (GstWebRTCICECandidate * candidate) +{ + if (candidate) { + g_free (candidate->candidate); + gst_webrtc_ice_candidate_stats_free (candidate->stats); + g_free (candidate->sdp_mid); + } + + g_free (candidate); +} + +/** + * gst_webrtc_ice_candidate_copy: + * @candidate: The #GstWebRTCICECandidate to be copied + * + * Returns: (transfer full): A copy of @candidate + * + * Since: 1.28 + */ +GstWebRTCICECandidate * +gst_webrtc_ice_candidate_copy (GstWebRTCICECandidate * candidate) +{ + GstWebRTCICECandidate *copy = g_malloc (sizeof (GstWebRTCICECandidate)); + + *copy = *candidate; + + copy->candidate = g_strdup (candidate->candidate); + copy->stats = gst_webrtc_ice_candidate_stats_copy (candidate->stats); + copy->sdp_mid = g_strdup (candidate->sdp_mid); + + return copy; +} + +G_DEFINE_BOXED_TYPE (GstWebRTCICECandidate, gst_webrtc_ice_candidate, + (GBoxedCopyFunc) gst_webrtc_ice_candidate_copy, + (GBoxedFreeFunc) gst_webrtc_ice_candidate_free); + + +/** + * gst_webrtc_ice_candidate_pair_free: + * @pair: The #GstWebRTCICECandidatePair to be free'd + * + * Helper function to free #GstWebRTCICECandidatePair + * + * Since: 1.28 + */ +void +gst_webrtc_ice_candidate_pair_free (GstWebRTCICECandidatePair * pair) +{ + if (pair) { + gst_webrtc_ice_candidate_free (pair->local); + gst_webrtc_ice_candidate_free (pair->remote); + } + + g_free (pair); +} + +/** + * gst_webrtc_ice_candidate_pair_copy: + * @pair: The #GstWebRTCICE + * + * Returns: (transfer full): A copy of @pair + * + * Since: 1.28 + */ 
+GstWebRTCICECandidatePair * +gst_webrtc_ice_candidate_pair_copy (GstWebRTCICECandidatePair * pair) +{ + GstWebRTCICECandidatePair *copy = + g_malloc (sizeof (GstWebRTCICECandidatePair)); + + copy->local = gst_webrtc_ice_candidate_copy (pair->local); + copy->remote = gst_webrtc_ice_candidate_copy (pair->remote); + + return copy; +} + +G_DEFINE_BOXED_TYPE (GstWebRTCICECandidatePair, gst_webrtc_ice_candidate_pair, + (GBoxedCopyFunc) gst_webrtc_ice_candidate_pair_copy, + (GBoxedFreeFunc) gst_webrtc_ice_candidate_pair_free); static void gst_webrtc_ice_set_property (GObject * object, guint prop_id, @@ -568,6 +685,7 @@ klass->get_local_candidates = NULL; klass->get_remote_candidates = NULL; klass->get_selected_pair = NULL; + klass->close = NULL; gobject_class->get_property = gst_webrtc_ice_get_property; gobject_class->set_property = gst_webrtc_ice_set_property;
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/webrtc/ice.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/webrtc/ice.h
Changed
@@ -47,6 +47,31 @@ gpointer _gst_reserved[GST_PADDING]; }; +/** + * GstWebRTCICECandidateStats: + * @ipaddr: A string containing the address of the candidate. This value may be + * an IPv4 address, an IPv6 address, or a fully-qualified domain name + * @port: The network port number used by the candidate + * @stream_id: A string that uniquely identifies the object that is being + * monitored to produce this set of statistics + * @type: The candidate type + * @proto: A string specifying the protocol (tcp or udp) used to transmit data + * on the @port + * @relay_proto: A string identifying the protocol used by the endpoint for + * communicating with the TURN server; valid values are tcp, udp, and tls + * @prio: The candidate's priority, corresponding to RTCIceCandidate.priority + * @url: For local candidates, the url property is the URL of the ICE server + * from which the candidate was received + * @foundation: The ICE foundation as defined in RFC5245 section 15.1 (Since: 1.28) + * @related_address: The ICE rel-addr defined in RFC5245 section 15.1. Only + * set for server-reflexive, peer-reflexive and relay candidates. (Since: 1.28) + * @related_port: The ICE rel-port defined in RFC5245 section 15.1. Only set + * for server-reflexive, peer-reflexive and relay candidates. (Since: 1.28) + * @username_fragment: The ICE username fragment as defined in RFC5245 section 7.1.2.3 (Since: 1.28) + * @tcp_type: The ICE candidate TCP type (Since: 1.28) + * + * Since: 1.22 + */ struct _GstWebRTCICECandidateStats { gchar *ipaddr; @@ -58,9 +83,184 @@ guint prio; gchar *url; + /** + * GstWebRTCICECandidateStats.ABI: (attributes doc.skip=true) + * + * ABI compatibility union + * + * Since: 1.28 + */ + union { + /** + * GstWebRTCICECandidateStats.ABI.abi: (attributes doc.skip=true) + * + * ABI compatibility struct + * + * Since: 1.28 + */ + struct { + /** + * GstWebRTCICECandidateStats.ABI.abi.foundation: + * + * The foundation of the ICE candidate. 
+ * + * Since: 1.28 + */ + gchar *foundation; + + /** + * GstWebRTCICECandidateStats.ABI.abi.related_address: + * + * The related address (STUN or TURN server) of the candidate. Will be + * NULL for host candidates. + * + * Since: 1.28 + */ + gchar *related_address; + + /** + * GstWebRTCICECandidateStats.ABI.abi.related_port: + * + * The related port (STUN or TURN server) of the candidate. Will be + * 0 for host candidates. + * + * Since: 1.28 + */ + guint related_port; + + /** + * GstWebRTCICECandidateStats.ABI.abi.username_fragment: + * + * The ICE username for this candidate. + * + * Since: 1.28 + */ + gchar *username_fragment; + + /** + * GstWebRTCICECandidateStats.ABI.abi.tcp_type: + * + * The type of TCP candidate. Will be NULL if the candidate is not a TCP + * candidate. + * + * Since: 1.28 + */ + GstWebRTCICETcpCandidateType tcp_type; + } abi; + /*< private >*/ + gpointer _gst_reserved[GST_PADDING_LARGE]; + } ABI; +}; + +/** + * GST_WEBRTC_ICE_CANDIDATE_STATS_ADDRESS: + * + * Since: 1.28 + */ +#define GST_WEBRTC_ICE_CANDIDATE_STATS_ADDRESS(c) ((c)->ipaddr) +/** + * GST_WEBRTC_ICE_CANDIDATE_STATS_PORT: + * + * Since: 1.28 + */ +#define GST_WEBRTC_ICE_CANDIDATE_STATS_PORT(c) ((c)->port) +/** + * GST_WEBRTC_ICE_CANDIDATE_STATS_STREAM_ID: + * + * Since: 1.28 + */ +#define GST_WEBRTC_ICE_CANDIDATE_STATS_STREAM_ID(c) ((c)->stream_id) +/** + * GST_WEBRTC_ICE_CANDIDATE_STATS_TYPE: + * + * Since: 1.28 + */ +#define GST_WEBRTC_ICE_CANDIDATE_STATS_TYPE(c) ((c)->type) +/** + * GST_WEBRTC_ICE_CANDIDATE_STATS_PROTOCOL: + * + * Since: 1.28 + */ +#define GST_WEBRTC_ICE_CANDIDATE_STATS_PROTOCOL(c) ((c)->proto) +/** + * GST_WEBRTC_ICE_CANDIDATE_STATS_RELAY_PROTOCOL: + * + * Since: 1.28 + */ +#define GST_WEBRTC_ICE_CANDIDATE_STATS_RELAY_PROTOCOL(c) ((c)->relay_proto) +/** + * GST_WEBRTC_ICE_CANDIDATE_STATS_PRIORITY: + * + * Since: 1.28 + */ +#define GST_WEBRTC_ICE_CANDIDATE_STATS_PRIORITY(c) ((c)->prio) +/** + * GST_WEBRTC_ICE_CANDIDATE_STATS_URL: + * + * Since: 1.28 + */ 
+#define GST_WEBRTC_ICE_CANDIDATE_STATS_URL(c) ((c)->url) + +/** + * GST_WEBRTC_ICE_CANDIDATE_STATS_FOUNDATION: + * + * Since: 1.28 + */ +#define GST_WEBRTC_ICE_CANDIDATE_STATS_FOUNDATION(c) ((c)->ABI.abi.foundation) +/** + * GST_WEBRTC_ICE_CANDIDATE_STATS_RELATED_ADDRESS: + * + * Since: 1.28 + */ +#define GST_WEBRTC_ICE_CANDIDATE_STATS_RELATED_ADDRESS(c) ((c)->ABI.abi.related_address) +/** + * GST_WEBRTC_ICE_CANDIDATE_STATS_RELATED_PORT: + * + * Since: 1.28 + */ +#define GST_WEBRTC_ICE_CANDIDATE_STATS_RELATED_PORT(c) ((c)->ABI.abi.related_port) +/** + * GST_WEBRTC_ICE_CANDIDATE_STATS_USERNAME_FRAGMENT: + * + * Since: 1.28 + */ +#define GST_WEBRTC_ICE_CANDIDATE_STATS_USERNAME_FRAGMENT(c) ((c)->ABI.abi.username_fragment) +/** + * GST_WEBRTC_ICE_CANDIDATE_STATS_TCP_TYPE: + * + * Since: 1.28 + */ +#define GST_WEBRTC_ICE_CANDIDATE_STATS_TCP_TYPE(c) ((c)->ABI.abi.tcp_type) + +/** + * GstWebRTCICECandidate: + * @candidate: String carrying the candidate-attribute as defined in + * section 15.1 of RFC5245 + * @component: The assigned network component of the candidate (1 for RTP + * 2 for RTCP). + * @sdp_mid: The media stream "identification-tag" defined in RFC5888 for the + * media component this candidate is associated with. + * @sdp_mline_index: The index (starting at zero) of the media description in + * the SDP this candidate is associated with. + * @stats: The #GstWebRTCICECandidateStats associated to this candidate. + * + * Since: 1.28 + */ +struct _GstWebRTCICECandidate { + gchar *candidate; + gint component; + gchar *sdp_mid; + gint sdp_mline_index; /* Set to -1 if unknown. 
*/ + GstWebRTCICECandidateStats *stats; + gpointer _gst_reserved[GST_PADDING_LARGE]; }; +struct _GstWebRTCICECandidatePair { + GstWebRTCICECandidate *local; + GstWebRTCICECandidate *remote; +}; + /** * GstWebRTCICEOnCandidateFunc: * @ice: The #GstWebRTCICE @@ -149,17 +349,32 @@ GstWebRTCICEStream * stream, GstWebRTCICECandidateStats ** local_stats, GstWebRTCICECandidateStats ** remote_stats); - gpointer _gst_reserved[GST_PADDING]; + + /** + * GstWebRTCICEClass::close: + * @ice: a #GstWebRTCICE + * @promise: (transfer full) (nullable): a #GstPromise to be notified when the task is + * complete. + * + * Invoke the close procedure as specified in + * https://www.w3.org/TR/webrtc/#dom-rtcpeerconnection-close. + * + * Since: 1.28 + */ + void (*close) (GstWebRTCICE * ice, + GstPromise * promise); + + gpointer _gst_reserved[GST_PADDING - 1]; }; GST_WEBRTC_API GstWebRTCICEStream * gst_webrtc_ice_add_stream (GstWebRTCICE * ice, - guint session_id); + guint session_id) G_GNUC_WARN_UNUSED_RESULT; GST_WEBRTC_API GstWebRTCICETransport * gst_webrtc_ice_find_transport (GstWebRTCICE * ice, GstWebRTCICEStream * stream, - GstWebRTCICEComponent component); + GstWebRTCICEComponent component) G_GNUC_WARN_UNUSED_RESULT; GST_WEBRTC_API @@ -206,21 +421,21 @@ const gchar * uri); GST_WEBRTC_API -gchar * gst_webrtc_ice_get_stun_server (GstWebRTCICE * ice); +gchar * gst_webrtc_ice_get_stun_server (GstWebRTCICE * ice) G_GNUC_WARN_UNUSED_RESULT; GST_WEBRTC_API void gst_webrtc_ice_set_turn_server (GstWebRTCICE * ice, const gchar * uri); GST_WEBRTC_API -gchar * gst_webrtc_ice_get_turn_server (GstWebRTCICE * ice); +gchar * gst_webrtc_ice_get_turn_server (GstWebRTCICE * ice) G_GNUC_WARN_UNUSED_RESULT; GST_WEBRTC_API void gst_webrtc_ice_set_http_proxy (GstWebRTCICE * ice, const gchar * uri); GST_WEBRTC_API -gchar * gst_webrtc_ice_get_http_proxy (GstWebRTCICE * ice); +gchar * gst_webrtc_ice_get_http_proxy (GstWebRTCICE * ice) G_GNUC_WARN_UNUSED_RESULT; GST_WEBRTC_API void 
gst_webrtc_ice_set_on_ice_candidate (GstWebRTCICE * ice, @@ -235,17 +450,19 @@ GST_WEBRTC_API GstWebRTCICECandidateStats** gst_webrtc_ice_get_local_candidates (GstWebRTCICE * ice, - GstWebRTCICEStream * stream); + GstWebRTCICEStream * stream) G_GNUC_WARN_UNUSED_RESULT; GST_WEBRTC_API GstWebRTCICECandidateStats** gst_webrtc_ice_get_remote_candidates (GstWebRTCICE * ice, - GstWebRTCICEStream * stream); + GstWebRTCICEStream * stream) G_GNUC_WARN_UNUSED_RESULT; -GST_WEBRTC_API +#ifndef GST_DISABLE_DEPRECATED +GST_WEBRTC_DEPRECATED_FOR(gst_webrtc_ice_transport_get_selected_pair) gboolean gst_webrtc_ice_get_selected_pair (GstWebRTCICE * ice, GstWebRTCICEStream * stream, GstWebRTCICECandidateStats ** local_stats, GstWebRTCICECandidateStats ** remote_stats); +#endif GST_WEBRTC_API void gst_webrtc_ice_candidate_stats_free (GstWebRTCICECandidateStats * stats); @@ -256,7 +473,29 @@ G_DEFINE_AUTOPTR_CLEANUP_FUNC(GstWebRTCICE, gst_object_unref) GST_WEBRTC_API -GstWebRTCICECandidateStats * gst_webrtc_ice_candidate_stats_copy (GstWebRTCICECandidateStats *stats); +GstWebRTCICECandidateStats * gst_webrtc_ice_candidate_stats_copy (GstWebRTCICECandidateStats *stats) G_GNUC_WARN_UNUSED_RESULT; + +GST_WEBRTC_API +void gst_webrtc_ice_close (GstWebRTCICE * ice, + GstPromise * promise); + +GST_WEBRTC_API +void gst_webrtc_ice_candidate_free (GstWebRTCICECandidate * candidate); + +GST_WEBRTC_API +GType gst_webrtc_ice_candidate_get_type (void); + +GST_WEBRTC_API +GstWebRTCICECandidate * gst_webrtc_ice_candidate_copy (GstWebRTCICECandidate * candidate); + +GST_WEBRTC_API +void gst_webrtc_ice_candidate_pair_free (GstWebRTCICECandidatePair * pair); + +GST_WEBRTC_API +GType gst_webrtc_ice_candidate_pair_get_type (void); + +GST_WEBRTC_API +GstWebRTCICECandidatePair * gst_webrtc_ice_candidate_pair_copy (GstWebRTCICECandidatePair * pair); G_END_DECLS
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/webrtc/icestream.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/webrtc/icestream.h
Changed
@@ -50,7 +50,7 @@ GST_WEBRTC_API GstWebRTCICETransport * gst_webrtc_ice_stream_find_transport (GstWebRTCICEStream * stream, - GstWebRTCICEComponent component); + GstWebRTCICEComponent component) G_GNUC_WARN_UNUSED_RESULT; GST_WEBRTC_API gboolean gst_webrtc_ice_stream_gather_candidates (GstWebRTCICEStream * ice);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/webrtc/icetransport.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/webrtc/icetransport.c
Changed
@@ -102,6 +102,30 @@ stream_id, component, attr); } +/** + * gst_webrtc_ice_transport_get_selected_candidate_pair: + * @transport: ICE Transport + * + * See also + * https://w3c.github.io/webrtc-pc/#dom-rtcicetransport-getselectedcandidatepair + * + * Returns: (transfer full) (nullable): A #GstWebRTCICECandidatePair + * + * Since: 1.28 + */ +GstWebRTCICECandidatePair * +gst_webrtc_ice_transport_get_selected_candidate_pair (GstWebRTCICETransport * + transport) +{ + GstWebRTCICETransportClass *klass = + GST_WEBRTC_ICE_TRANSPORT_GET_CLASS (transport); + + if (!klass->get_selected_candidate_pair) + return NULL; + + return klass->get_selected_candidate_pair (transport); +} + static void gst_webrtc_ice_transport_set_property (GObject * object, guint prop_id, const GValue * value, GParamSpec * pspec)
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/webrtc/icetransport.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/webrtc/icetransport.h
Changed
@@ -53,9 +53,22 @@ { GstObjectClass parent_class; - gboolean (*gather_candidates) (GstWebRTCICETransport * transport); + gboolean (*gather_candidates) (GstWebRTCICETransport * transport); - gpointer _padding[GST_PADDING]; + /** + * GstWebRTCICETransportClass::get_selected_candidate_pair: + * @transport: a #GstWebRTCICETransport + * + * See also + * https://w3c.github.io/webrtc-pc/#dom-rtcicetransport-getselectedcandidatepair + * + * Returns: (transfer full) (nullable): A #GstWebRTCICECandidatePair + * + * Since: 1.28 + */ + GstWebRTCICECandidatePair* (*get_selected_candidate_pair) (GstWebRTCICETransport * transport); + + gpointer _padding[GST_PADDING - 1]; }; GST_WEBRTC_API @@ -69,6 +82,9 @@ GST_WEBRTC_API void gst_webrtc_ice_transport_new_candidate (GstWebRTCICETransport * ice, guint stream_id, GstWebRTCICEComponent component, const gchar * attr); +GST_WEBRTC_API +GstWebRTCICECandidatePair * gst_webrtc_ice_transport_get_selected_candidate_pair (GstWebRTCICETransport * transport); + G_DEFINE_AUTOPTR_CLEANUP_FUNC(GstWebRTCICETransport, gst_object_unref) G_END_DECLS
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/webrtc/nice/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/webrtc/nice/meson.build
Changed
@@ -13,20 +13,16 @@ libgstwebrtcnice_dep = dependency('', required : false) -libnice_dep = dependency('nice', version : '>=0.1.20', required : get_option('webrtc'), +libnice_dep = dependency('nice', version : '>=0.1.23', required : get_option('webrtc'), allow_fallback: true, default_options: ['tests=disabled']) deps = [gstwebrtc_dep, libnice_dep] if libnice_dep.found() libnice_version = libnice_dep.version() - libnice_c_args = [] - if libnice_version.version_compare('> 0.1.21.1') - libnice_c_args += ['-DHAVE_LIBNICE_CONSENT_FIX'] - endif libgstwebrtcnice = library('gstwebrtcnice-' + api_version, libgstwebrtcnice_sources, libgstwebrtcnice_headers, - c_args : gst_plugins_bad_args + ['-DGST_USE_UNSTABLE_API', '-DBUILDING_GST_WEBRTCNICE', '-DG_LOG_DOMAIN="GStreamer-webrtcnice"'] + libnice_c_args, + c_args : gst_plugins_bad_args + ['-DGST_USE_UNSTABLE_API', '-DBUILDING_GST_WEBRTCNICE', '-DG_LOG_DOMAIN="GStreamer-webrtcnice"'], include_directories: configinc, version : libversion, soversion : soversion,
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/webrtc/nice/nice.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/webrtc/nice/nice.c
Changed
@@ -23,10 +23,20 @@ #include "nice.h" #include "nicestream.h" +#include "niceutils.h" /* libnice */ #include <agent.h> #define HTTP_PROXY_PORT_DEFAULT 3128 +#define MAX_CLOSING_TIME_MILLI_SECONDS 2 * 1000 /* limit closing procedure to 2s */ + +typedef struct +{ + GMutex mutex; /* Mutex for guarding count */ + GCond cond; /* Condition for signaling that all resolves have finished */ + guint count; + gboolean cancelled; +} OutstandingResolves; /* XXX: * @@ -70,6 +80,12 @@ GHashTable *turn_servers; GstUri *http_proxy; + + gchar *remote_ufrag; + gchar *remote_pwd; + + GCancellable *resolve_cancellable; + OutstandingResolves *outstanding_resolves; /* keeps track of uncompleted resolve tasks */ }; #define gst_webrtc_nice_parent_class parent_class @@ -78,6 +94,59 @@ GST_DEBUG_CATEGORY_INIT (gst_webrtc_nice_debug, "webrtcnice", 0, "webrtcnice");); +static OutstandingResolves * +outstanding_resolves_ref (OutstandingResolves * r) +{ + return g_atomic_rc_box_acquire (r); +} + +static void +outstanding_resolves_free (OutstandingResolves * r) +{ + g_cond_clear (&r->cond); + g_mutex_clear (&r->mutex); +} + +static void +outstanding_resolves_unref (OutstandingResolves * r) +{ + g_atomic_rc_box_release_full (r, (GDestroyNotify) outstanding_resolves_free); +} + +static void +outstanding_resolves_dec (OutstandingResolves * r) +{ + g_mutex_lock (&r->mutex); + r->count--; + if (r->count == 0) + g_cond_signal (&r->cond); + g_mutex_unlock (&r->mutex); +} + +static gboolean +outstanding_resolves_try_inc (OutstandingResolves * r) +{ + gboolean ret = FALSE; + g_mutex_lock (&r->mutex); + if (!r->cancelled) { + r->count++; + ret = TRUE; + } + g_mutex_unlock (&r->mutex); + + return ret; +} + +static void +outstanding_resolves_wait (OutstandingResolves * r) +{ + g_mutex_lock (&r->mutex); + r->cancelled = TRUE; + while (r->count != 0) + g_cond_wait (&r->cond, &r->mutex); + g_mutex_unlock (&r->mutex); +} + static gboolean _unlock_pc_thread (GMutex * lock) { @@ -284,6 +353,7 @@ 
GstResolvedCallback resolved_callback; gpointer user_data; GDestroyNotify notify; + OutstandingResolves *outstanding_resolves; }; static struct resolve_host_data * @@ -330,6 +400,9 @@ GError *error = NULL; GList *addresses; + outstanding_resolves_dec (rh->outstanding_resolves); + outstanding_resolves_unref (rh->outstanding_resolves); + if (!nice) { error = g_error_new_literal (G_IO_ERROR, G_IO_ERROR_CANCELLED, "Cancelled"); rh->resolved_callback (NULL, NULL, error, rh->user_data); @@ -366,15 +439,18 @@ struct resolve_host_data *rh = user_data; GstWebRTCNice *nice = g_weak_ref_get (&rh->nice_weak); - if (nice) { + if (nice && outstanding_resolves_try_inc (rh->outstanding_resolves)) { /* no need to error anymore if the main context disappears and this task is * not run */ rh->main_context_handled = TRUE; GST_DEBUG_OBJECT (nice, "Resolving host %s", rh->host); - g_resolver_lookup_by_name_async (resolver, rh->host, NULL, - (GAsyncReadyCallback) on_resolve_host, resolve_host_data_ref (rh)); + g_resolver_lookup_by_name_async (resolver, rh->host, + nice->priv->resolve_cancellable, (GAsyncReadyCallback) on_resolve_host, + resolve_host_data_ref (rh)); gst_object_unref (nice); + } else { + outstanding_resolves_unref (rh->outstanding_resolves); } return G_SOURCE_REMOVE; @@ -415,6 +491,8 @@ rh->resolved_callback = resolved_callback; rh->user_data = user_data; rh->notify = notify; + rh->outstanding_resolves = + outstanding_resolves_ref (nice->priv->outstanding_resolves); GST_TRACE_OBJECT (nice, "invoking main context for resolving host %s " "with data %p", host, rh); @@ -551,26 +629,18 @@ return item->stream; } -static void -_on_new_candidate (NiceAgent * agent, NiceCandidate * candidate, - GstWebRTCNice * ice) +void +gst_webrtc_nice_fill_local_candidate_credentials (NiceAgent * agent, + NiceCandidate * candidate) { - struct NiceStreamItem *item; - gchar *attr; - - item = _find_item (ice, -1, candidate->stream_id, NULL); - if (!item) { - GST_WARNING_OBJECT (ice, "received 
signal for non-existent stream %u", - candidate->stream_id); - return; - } if (!candidate->username || !candidate->password) { gboolean got_credentials; gchar *ufrag, *password; - got_credentials = nice_agent_get_local_credentials (ice->priv->nice_agent, - candidate->stream_id, &ufrag, &password); + got_credentials = + nice_agent_get_local_credentials (agent, candidate->stream_id, &ufrag, + &password); g_warn_if_fail (got_credentials); if (!candidate->username) @@ -583,8 +653,40 @@ else g_free (password); } +} + +void +gst_webrtc_nice_fill_remote_candidate_credentials (GstWebRTCNice * nice, + NiceCandidate * candidate) +{ + if (!candidate->username) + candidate->username = g_strdup (nice->priv->remote_ufrag); + + if (!candidate->password) + candidate->password = g_strdup (nice->priv->remote_pwd); +} + +static void +_on_new_candidate (NiceAgent * agent, NiceCandidate * candidate, + GstWebRTCNice * ice) +{ + struct NiceStreamItem *item; + NiceCandidate *c; + gchar *attr; + + item = _find_item (ice, -1, candidate->stream_id, NULL); + if (!item) { + GST_WARNING_OBJECT (ice, "received signal for non-existent stream %u", + candidate->stream_id); + return; + } - attr = nice_agent_generate_local_candidate_sdp (agent, candidate); + c = nice_candidate_copy (candidate); + gst_webrtc_nice_fill_local_candidate_credentials (agent, c); + + attr = nice_agent_generate_local_candidate_sdp (agent, c); + + nice_candidate_free (c); if (ice->priv->on_candidate) ice->priv->on_candidate (GST_WEBRTC_ICE (ice), item->session_id, attr, @@ -606,79 +708,6 @@ return gst_webrtc_ice_stream_find_transport (item->stream, component); } -#if 0 -/* TODO don't rely on libnice to (de)serialize candidates */ -static NiceCandidateType -_candidate_type_from_string (const gchar * s) -{ - if (g_strcmp0 (s, "host") == 0) { - return NICE_CANDIDATE_TYPE_HOST; - } else if (g_strcmp0 (s, "srflx") == 0) { - return NICE_CANDIDATE_TYPE_SERVER_REFLEXIVE; - } else if (g_strcmp0 (s, "prflx") == 0) { /* FIXME: is the 
right string? */ - return NICE_CANDIDATE_TYPE_PEER_REFLEXIVE; - } else if (g_strcmp0 (s, "relay") == 0) { - return NICE_CANDIDATE_TYPE_RELAY; - } else { - g_assert_not_reached (); - return 0; - } -} - -static const gchar * -_candidate_type_to_string (NiceCandidateType type) -{ - switch (type) { - case NICE_CANDIDATE_TYPE_HOST: - return "host"; - case NICE_CANDIDATE_TYPE_SERVER_REFLEXIVE: - return "srflx"; - case NICE_CANDIDATE_TYPE_PEER_REFLEXIVE: - return "prflx"; - case NICE_CANDIDATE_TYPE_RELAY: - return "relay"; - default: - g_assert_not_reached (); - return NULL; - } -} - -static NiceCandidateTransport -_candidate_transport_from_string (const gchar * s) -{ - if (g_strcmp0 (s, "UDP") == 0) { - return NICE_CANDIDATE_TRANSPORT_UDP; - } else if (g_strcmp0 (s, "TCP tcptype") == 0) { - return NICE_CANDIDATE_TRANSPORT_TCP_ACTIVE; - } else if (g_strcmp0 (s, "tcp-passive") == 0) { /* FIXME: is the right string? */ - return NICE_CANDIDATE_TRANSPORT_TCP_PASSIVE; - } else if (g_strcmp0 (s, "tcp-so") == 0) { - return NICE_CANDIDATE_TRANSPORT_TCP_SO; - } else { - g_assert_not_reached (); - return 0; - } -} - -static const gchar * -_candidate_type_to_string (NiceCandidateType type) -{ - switch (type) { - case NICE_CANDIDATE_TYPE_HOST: - return "host"; - case NICE_CANDIDATE_TYPE_SERVER_REFLEXIVE: - return "srflx"; - case NICE_CANDIDATE_TYPE_PEER_REFLEXIVE: - return "prflx"; - case NICE_CANDIDATE_TYPE_RELAY: - return "relay"; - default: - g_assert_not_reached (); - return NULL; - } -} -#endif - /* parse the address for possible resolution */ static gboolean get_candidate_address (const gchar * candidate, gchar ** prefix, @@ -848,6 +877,9 @@ if (candidate == NULL) { nice_agent_peer_candidate_gathering_done (nice->priv->nice_agent, item->nice_stream_id); + if (promise) { + gst_promise_reply (promise, NULL); + } return; } @@ -916,6 +948,9 @@ add_ice_candidate_to_libnice (ice, item->nice_stream_id, cand); nice_candidate_free (cand); + if (promise) { + gst_promise_reply (promise, 
NULL); + } } static gboolean @@ -936,6 +971,11 @@ nice_agent_set_remote_credentials (nice->priv->nice_agent, item->nice_stream_id, ufrag, pwd); + g_free (nice->priv->remote_ufrag); + g_free (nice->priv->remote_pwd); + nice->priv->remote_ufrag = g_strdup (ufrag); + nice->priv->remote_pwd = g_strdup (pwd); + return TRUE; } @@ -1096,8 +1136,8 @@ nice_agent_set_stream_tos (nice->priv->nice_agent, item->nice_stream_id, tos); } -static const gchar * -_relay_type_to_string (GstUri * turn_server) +const gchar * +gst_webrtc_nice_get_candidate_relay_protocol (GstUri * turn_server) { const gchar *scheme; const gchar *transport; @@ -1120,8 +1160,9 @@ return "none"; } -static gchar * -_get_server_url (GstWebRTCNice * ice, NiceCandidate * cand) +gchar * +gst_webrtc_nice_get_candidate_server_url (GstWebRTCNice * ice, + NiceCandidate * cand) { switch (cand->type) { case NICE_CANDIDATE_TYPE_RELAYED:{ @@ -1147,55 +1188,72 @@ } } -/* TODO: replace it with nice_candidate_type_to_string() - * when it's ready for use - * https://libnice.freedesktop.org/libnice/NiceCandidate.html#nice-candidate-type-to-string - */ -static const gchar * -_candidate_type_to_string (NiceCandidateType type) -{ - switch (type) { - case NICE_CANDIDATE_TYPE_HOST: - return "host"; - case NICE_CANDIDATE_TYPE_SERVER_REFLEXIVE: - return "srflx"; - case NICE_CANDIDATE_TYPE_PEER_REFLEXIVE: - return "prflx"; - case NICE_CANDIDATE_TYPE_RELAYED: - return "relay"; - default: - g_assert_not_reached (); - return NULL; - } -} - static void _populate_candidate_stats (GstWebRTCNice * ice, NiceCandidate * cand, GstWebRTCICEStream * stream, GstWebRTCICECandidateStats * stats, - gboolean is_local) + GstWebRTCNiceCandidateOrigin origin) { gchar ipaddrINET6_ADDRSTRLEN; g_assert (cand != NULL); nice_address_to_string (&cand->addr, ipaddr); - stats->port = nice_address_get_port (&cand->addr); - stats->ipaddr = g_strdup (ipaddr); - stats->stream_id = stream->stream_id; - stats->type = _candidate_type_to_string (cand->type); - 
stats->prio = cand->priority; - stats->proto = + GST_WEBRTC_ICE_CANDIDATE_STATS_PORT (stats) = + nice_address_get_port (&cand->addr); + GST_WEBRTC_ICE_CANDIDATE_STATS_ADDRESS (stats) = g_strdup (ipaddr); + GST_WEBRTC_ICE_CANDIDATE_STATS_STREAM_ID (stats) = stream->stream_id; + GST_WEBRTC_ICE_CANDIDATE_STATS_TYPE (stats) = + nice_candidate_type_to_string (cand->type); + GST_WEBRTC_ICE_CANDIDATE_STATS_PRIORITY (stats) = cand->priority; + GST_WEBRTC_ICE_CANDIDATE_STATS_PROTOCOL (stats) = cand->transport == NICE_CANDIDATE_TRANSPORT_UDP ? "udp" : "tcp"; - if (is_local) { - if (cand->type == NICE_CANDIDATE_TYPE_RELAYED) - stats->relay_proto = _relay_type_to_string (ice->priv->turn_server); - stats->url = _get_server_url (ice, cand); + if (origin == GST_WEBRTC_NICE_CANDIDATE_ORIGIN_LOCAL) { + if (cand->type == NICE_CANDIDATE_TYPE_RELAYED) { + NiceAddress relay_address; + nice_candidate_relay_address (cand, &relay_address); + + GST_WEBRTC_ICE_CANDIDATE_STATS_RELATED_ADDRESS (stats) = + nice_address_dup_string (&relay_address); + GST_WEBRTC_ICE_CANDIDATE_STATS_RELATED_PORT (stats) = + nice_address_get_port (&relay_address); + + GST_WEBRTC_ICE_CANDIDATE_STATS_RELAY_PROTOCOL (stats) = + gst_webrtc_nice_get_candidate_relay_protocol (ice->priv->turn_server); + } + GST_WEBRTC_ICE_CANDIDATE_STATS_URL (stats) = + gst_webrtc_nice_get_candidate_server_url (ice, cand); } + + GST_WEBRTC_ICE_CANDIDATE_STATS_FOUNDATION (stats) = + g_strdup (cand->foundation); + + switch (cand->transport) { + case NICE_CANDIDATE_TRANSPORT_UDP: + GST_WEBRTC_ICE_CANDIDATE_STATS_TCP_TYPE (stats) = + GST_WEBRTC_ICE_TCP_CANDIDATE_TYPE_NONE; + break; + case NICE_CANDIDATE_TRANSPORT_TCP_ACTIVE: + GST_WEBRTC_ICE_CANDIDATE_STATS_TCP_TYPE (stats) = + GST_WEBRTC_ICE_TCP_CANDIDATE_TYPE_ACTIVE; + break; + case NICE_CANDIDATE_TRANSPORT_TCP_PASSIVE: + GST_WEBRTC_ICE_CANDIDATE_STATS_TCP_TYPE (stats) = + GST_WEBRTC_ICE_TCP_CANDIDATE_TYPE_PASSIVE; + break; + case NICE_CANDIDATE_TRANSPORT_TCP_SO: + 
GST_WEBRTC_ICE_CANDIDATE_STATS_TCP_TYPE (stats) = + GST_WEBRTC_ICE_TCP_CANDIDATE_TYPE_SO; + break; + }; + + GST_WEBRTC_ICE_CANDIDATE_STATS_USERNAME_FRAGMENT (stats) = + g_strdup (cand->username); } static void _populate_candidate_list_stats (GstWebRTCNice * ice, GSList * cands, - GstWebRTCICEStream * stream, GPtrArray * result, gboolean is_local) + GstWebRTCICEStream * stream, GPtrArray * result, + GstWebRTCNiceCandidateOrigin origin) { GSList *item; @@ -1203,7 +1261,7 @@ GstWebRTCICECandidateStats *stats = g_malloc0 (sizeof (GstWebRTCICECandidateStats)); NiceCandidate *c = item->data; - _populate_candidate_stats (ice, c, stream, stats, is_local); + _populate_candidate_stats (ice, c, stream, stats, origin); g_ptr_array_add (result, stats); } @@ -1223,7 +1281,8 @@ cands = nice_agent_get_local_candidates (nice->priv->nice_agent, stream->stream_id, NICE_COMPONENT_TYPE_RTP); - _populate_candidate_list_stats (nice, cands, stream, result, TRUE); + _populate_candidate_list_stats (nice, cands, stream, result, + GST_WEBRTC_NICE_CANDIDATE_ORIGIN_LOCAL); g_slist_free_full (cands, (GDestroyNotify) nice_candidate_free); return (GstWebRTCICECandidateStats **) g_ptr_array_free (result, FALSE); @@ -1242,39 +1301,13 @@ cands = nice_agent_get_remote_candidates (nice->priv->nice_agent, stream->stream_id, NICE_COMPONENT_TYPE_RTP); - _populate_candidate_list_stats (nice, cands, stream, result, FALSE); + _populate_candidate_list_stats (nice, cands, stream, result, + GST_WEBRTC_NICE_CANDIDATE_ORIGIN_REMOTE); g_slist_free_full (cands, (GDestroyNotify) nice_candidate_free); return (GstWebRTCICECandidateStats **) g_ptr_array_free (result, FALSE); } -static gboolean -gst_webrtc_nice_get_selected_pair (GstWebRTCICE * ice, - GstWebRTCICEStream * stream, GstWebRTCICECandidateStats ** local_stats, - GstWebRTCICECandidateStats ** remote_stats) -{ - GstWebRTCNice *nice = GST_WEBRTC_NICE (ice); - NiceCandidate *local_cand = NULL; - NiceCandidate *remote_cand = NULL; - - - if (stream) { - if 
(nice_agent_get_selected_pair (nice->priv->nice_agent, stream->stream_id, - NICE_COMPONENT_TYPE_RTP, &local_cand, &remote_cand)) { - *local_stats = g_new0 (GstWebRTCICECandidateStats, 1); - _populate_candidate_stats (nice, local_cand, stream, *local_stats, TRUE); - - *remote_stats = g_new0 (GstWebRTCICECandidateStats, 1); - _populate_candidate_stats (nice, remote_cand, stream, *remote_stats, - FALSE); - - return TRUE; - } - } - - return FALSE; -} - static void _clear_ice_stream (struct NiceStreamItem *item) { @@ -1574,6 +1607,94 @@ return NULL; } +struct close_data +{ + GWeakRef nice_weak; + GstPromise *promise; + gboolean agent_closed; +}; + +static struct close_data * +close_data_new (GstWebRTCNice * ice, GstPromise * p) +{ + struct close_data *d = g_atomic_rc_box_new0 (struct close_data); + g_weak_ref_init (&d->nice_weak, ice); + d->promise = p ? gst_promise_ref (p) : NULL; + d->agent_closed = FALSE; + return d; +} + +static void +close_data_clear (struct close_data *d) +{ + g_weak_ref_clear (&d->nice_weak); + if (d->promise) + gst_promise_unref (d->promise); +} + +static struct close_data * +close_data_ref (struct close_data *d) +{ + return (struct close_data *) g_atomic_rc_box_acquire (d); +} + +static void +close_data_unref (struct close_data *d) +{ + g_atomic_rc_box_release_full (d, (GDestroyNotify) close_data_clear); +} + +static void +on_agent_closed (GObject * src, GAsyncResult * result, gpointer user_data) +{ + struct close_data *d = (struct close_data *) user_data; + + /* 9. Set the IceTransportState slot of each of connection's + * RTCIceTransports to "closed". */ + /* FIXME: We don't expose IceTransportState yet. */ + + if (d->promise) { + gst_promise_reply (d->promise, NULL); + } + + d->agent_closed = TRUE; + close_data_unref (d); +} + +static gboolean +close_main_cb (gpointer user_data) +{ + struct close_data *d = (struct close_data *) user_data; + GstWebRTCNice *nice = g_weak_ref_get (&d->nice_weak); + + if (nice) { + /* 8. 
Destroy connection's ICE Agent, abruptly ending any active ICE + * processing and releasing any relevant resources (e.g. TURN permissions). */ + nice_agent_close_async (NICE_AGENT (nice->priv->nice_agent), + on_agent_closed, close_data_ref (d)); + if (!d->promise) { + while (!d->agent_closed) { + g_main_context_iteration (nice->priv->main_context, TRUE); + } + } + gst_object_unref (nice); + } + + return G_SOURCE_REMOVE; +} + +static void +gst_webrtc_nice_close (GstWebRTCICE * ice, GstPromise * promise) +{ + GstWebRTCNice *nice = GST_WEBRTC_NICE (ice); + struct close_data *d = close_data_new (nice, promise); + + /* https://www.w3.org/TR/webrtc/#dom-rtcpeerconnection-close */ + + g_main_context_invoke_full (nice->priv->main_context, G_PRIORITY_DEFAULT, + close_main_cb, d, (GDestroyNotify) close_data_unref); +} + static void gst_webrtc_nice_set_property (GObject * object, guint prop_id, const GValue * value, GParamSpec * pspec) @@ -1640,14 +1761,66 @@ } static void +_agent_closed_cb (GObject * source_object, GAsyncResult * res, + gpointer user_data) +{ + gboolean *agent_closed = user_data; + + *agent_closed = TRUE; +} + +static gboolean +_agent_closed_timeout_cb (gpointer user_data) +{ + gboolean *agent_timeout = user_data; + + *agent_timeout = TRUE; + return FALSE; +}; + +static void +_close_agent (GstWebRTCNice * ice) +{ + GMainContext *main_context = g_main_context_new (); + gboolean agent_closed = FALSE; + gboolean agent_timeout = FALSE; + GSource *timeout_source; + + g_main_context_push_thread_default (main_context); + timeout_source = g_timeout_source_new (MAX_CLOSING_TIME_MILLI_SECONDS); + g_source_set_callback (timeout_source, _agent_closed_timeout_cb, + &agent_timeout, NULL); + g_source_attach (timeout_source, main_context); + nice_agent_close_async (ice->priv->nice_agent, _agent_closed_cb, + &agent_closed); + while (!agent_closed && !agent_timeout) { + g_main_context_iteration (main_context, TRUE); + } + if (agent_timeout) { + GST_WARNING 
("nice_agent_close_async() did not finish"); + } + g_source_destroy (timeout_source); + g_source_unref (timeout_source); + g_main_context_pop_thread_default (main_context); + g_main_context_unref (main_context); +} + +static void gst_webrtc_nice_finalize (GObject * object) { GstWebRTCNice *ice = GST_WEBRTC_NICE (object); g_signal_handlers_disconnect_by_data (ice->priv->nice_agent, ice); + g_cancellable_cancel (ice->priv->resolve_cancellable); + _close_agent (ice); + outstanding_resolves_wait (ice->priv->outstanding_resolves); + _stop_thread (ice); + g_clear_object (&ice->priv->resolve_cancellable); + outstanding_resolves_unref (ice->priv->outstanding_resolves); + if (ice->priv->on_candidate_notify) ice->priv->on_candidate_notify (ice->priv->on_candidate_data); ice->priv->on_candidate = NULL; @@ -1669,6 +1842,9 @@ g_hash_table_unref (ice->priv->turn_servers); + g_free (ice->priv->remote_ufrag); + g_free (ice->priv->remote_pwd); + G_OBJECT_CLASS (parent_class)->finalize (object); } @@ -1682,11 +1858,8 @@ options |= NICE_AGENT_OPTION_ICE_TRICKLE; options |= NICE_AGENT_OPTION_REGULAR_NOMINATION; - -/* https://gitlab.freedesktop.org/libnice/libnice/-/merge_requests/257 */ -#ifdef HAVE_LIBNICE_CONSENT_FIX + options |= NICE_AGENT_OPTION_CLOSE_FORCED; options |= NICE_AGENT_OPTION_CONSENT_FRESHNESS; -#endif ice->priv->nice_agent = nice_agent_new_full (ice->priv->main_context, NICE_COMPATIBILITY_RFC5245, options); @@ -1728,7 +1901,7 @@ gst_webrtc_nice_get_local_candidates; gst_webrtc_ice_class->get_remote_candidates = gst_webrtc_nice_get_remote_candidates; - gst_webrtc_ice_class->get_selected_pair = gst_webrtc_nice_get_selected_pair; + gst_webrtc_ice_class->close = gst_webrtc_nice_close; gobject_class->constructed = gst_webrtc_nice_constructed; gobject_class->get_property = gst_webrtc_nice_get_property; @@ -1776,6 +1949,11 @@ g_array_new (FALSE, TRUE, sizeof (struct NiceStreamItem)); g_array_set_clear_func (ice->priv->nice_stream_map, (GDestroyNotify) _clear_ice_stream); + + 
ice->priv->resolve_cancellable = g_cancellable_new (); + ice->priv->outstanding_resolves = g_atomic_rc_box_new0 (OutstandingResolves); + g_mutex_init (&ice->priv->outstanding_resolves->mutex); + g_cond_init (&ice->priv->outstanding_resolves->cond); } GstWebRTCNice *
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/webrtc/nice/nicetransport.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/webrtc/nice/nicetransport.c
Changed
@@ -23,6 +23,7 @@ #include "nicestream.h" #include "nicetransport.h" +#include "niceutils.h" #define GST_CAT_DEFAULT gst_webrtc_nice_transport_debug GST_DEBUG_CATEGORY_STATIC (GST_CAT_DEFAULT); @@ -315,6 +316,134 @@ g_free (weak); } +static GstWebRTCICECandidate * +nice_candidate_to_gst (GstWebRTCNice * webrtc_ice, + NiceAgent * agent, NiceCandidate * cand, + GstWebRTCNiceCandidateOrigin origin) +{ + GstWebRTCICECandidate *gst_candidate = g_new0 (GstWebRTCICECandidate, 1); + gchar *attr = nice_agent_generate_local_candidate_sdp (agent, cand); + GstWebRTCICEComponent comp = _nice_component_to_gst (cand->component_id); + gchar *addr = nice_address_dup_string (&cand->addr); + guint port = nice_address_get_port (&cand->addr); + gchar *url = + origin == + GST_WEBRTC_NICE_CANDIDATE_ORIGIN_LOCAL ? + gst_webrtc_nice_get_candidate_server_url (webrtc_ice, + cand) : NULL; + + gst_candidate->stats = g_new0 (GstWebRTCICECandidateStats, 1); + GST_WEBRTC_ICE_CANDIDATE_STATS_TYPE (gst_candidate->stats) = + nice_candidate_type_to_string (cand->type); + + GST_WEBRTC_ICE_CANDIDATE_STATS_PROTOCOL (gst_candidate->stats) = + cand->transport == NICE_CANDIDATE_TRANSPORT_UDP ? 
"udp" : "tcp"; + + switch (cand->transport) { + case NICE_CANDIDATE_TRANSPORT_UDP: + GST_WEBRTC_ICE_CANDIDATE_STATS_TCP_TYPE (gst_candidate->stats) = + GST_WEBRTC_ICE_TCP_CANDIDATE_TYPE_NONE; + break; + case NICE_CANDIDATE_TRANSPORT_TCP_ACTIVE: + GST_WEBRTC_ICE_CANDIDATE_STATS_TCP_TYPE (gst_candidate->stats) = + GST_WEBRTC_ICE_TCP_CANDIDATE_TYPE_ACTIVE; + break; + case NICE_CANDIDATE_TRANSPORT_TCP_PASSIVE: + GST_WEBRTC_ICE_CANDIDATE_STATS_TCP_TYPE (gst_candidate->stats) = + GST_WEBRTC_ICE_TCP_CANDIDATE_TYPE_PASSIVE; + break; + case NICE_CANDIDATE_TRANSPORT_TCP_SO: + GST_WEBRTC_ICE_CANDIDATE_STATS_TCP_TYPE (gst_candidate->stats) = + GST_WEBRTC_ICE_TCP_CANDIDATE_TYPE_SO; + break; + }; + + /* FIXME: sdpMid, sdpMLineIndex */ + gst_candidate->sdp_mid = NULL; + gst_candidate->sdp_mline_index = -1; + + gst_candidate->candidate = attr; + GST_WEBRTC_ICE_CANDIDATE_STATS_FOUNDATION (gst_candidate->stats) = + g_strdup (cand->foundation); + gst_candidate->component = comp; + GST_WEBRTC_ICE_CANDIDATE_STATS_PRIORITY (gst_candidate->stats) = + cand->priority; + GST_WEBRTC_ICE_CANDIDATE_STATS_ADDRESS (gst_candidate->stats) = addr; + GST_WEBRTC_ICE_CANDIDATE_STATS_PORT (gst_candidate->stats) = port; + GST_WEBRTC_ICE_CANDIDATE_STATS_USERNAME_FRAGMENT (gst_candidate->stats) = + g_strdup (cand->username); + GST_WEBRTC_ICE_CANDIDATE_STATS_URL (gst_candidate->stats) = NULL; + if (url && !g_str_equal (url, "")) + GST_WEBRTC_ICE_CANDIDATE_STATS_URL (gst_candidate->stats) = url; + if (!GST_WEBRTC_ICE_CANDIDATE_STATS_URL (gst_candidate->stats)) + g_free (url); + + GST_WEBRTC_ICE_CANDIDATE_STATS_RELATED_ADDRESS (gst_candidate->stats) = NULL; + GST_WEBRTC_ICE_CANDIDATE_STATS_RELATED_PORT (gst_candidate->stats) = -1; + + if (cand->type == NICE_CANDIDATE_TYPE_RELAYED + && origin == GST_WEBRTC_NICE_CANDIDATE_ORIGIN_LOCAL) { + NiceAddress relay_address; + + nice_candidate_relay_address (cand, &relay_address); + if (nice_address_is_valid (&relay_address)) { + 
GST_WEBRTC_ICE_CANDIDATE_STATS_RELATED_ADDRESS (gst_candidate->stats) = + nice_address_dup_string (&relay_address); + GST_WEBRTC_ICE_CANDIDATE_STATS_RELATED_PORT (gst_candidate->stats) = + nice_address_get_port (&relay_address); + + /* FIXME: Set relayProtocol as one of these strings (udp, tcp, tls), from + * the candidate TURN server. libnice API needed for this. */ + } + } + + return gst_candidate; +} + +static GstWebRTCICECandidatePair * +gst_webrtc_nice_get_selected_candidate_pair (GstWebRTCICETransport * ice) +{ + GstWebRTCNiceTransport *nice = GST_WEBRTC_NICE_TRANSPORT (ice); + NiceAgent *agent = NULL; + GstWebRTCNice *webrtc_ice = NULL; + GstWebRTCICECandidatePair *candidates_pair = NULL; + GstWebRTCICEStream *nice_stream; + NiceCandidate *local_candidate = NULL; + NiceCandidate *remote_candidate = NULL; + NiceComponentType component; + + nice_stream = GST_WEBRTC_ICE_STREAM (nice->stream); + + g_object_get (nice->stream, "ice", &webrtc_ice, NULL); + g_assert (webrtc_ice != NULL); + + g_object_get (webrtc_ice, "agent", &agent, NULL); + g_assert (agent != NULL); + + component = _gst_component_to_nice (ice->component); + + if (nice_agent_get_selected_pair (agent, nice_stream->stream_id, component, + &local_candidate, &remote_candidate)) { + + gst_webrtc_nice_fill_local_candidate_credentials (agent, local_candidate); + gst_webrtc_nice_fill_remote_candidate_credentials (webrtc_ice, + remote_candidate); + + candidates_pair = g_new0 (GstWebRTCICECandidatePair, 1); + candidates_pair->local = + nice_candidate_to_gst (webrtc_ice, agent, local_candidate, + GST_WEBRTC_NICE_CANDIDATE_ORIGIN_LOCAL); + candidates_pair->remote = + nice_candidate_to_gst (webrtc_ice, agent, remote_candidate, + GST_WEBRTC_NICE_CANDIDATE_ORIGIN_REMOTE); + } + + g_object_unref (agent); + gst_object_unref (webrtc_ice); + + return candidates_pair; +} + static void gst_webrtc_nice_transport_constructed (GObject * object) { @@ -369,12 +498,17 @@ gst_webrtc_nice_transport_class_init 
(GstWebRTCNiceTransportClass * klass) { GObjectClass *gobject_class = (GObjectClass *) klass; + GstWebRTCICETransportClass *transport_class = + (GstWebRTCICETransportClass *) klass; gobject_class->constructed = gst_webrtc_nice_transport_constructed; gobject_class->get_property = gst_webrtc_nice_transport_get_property; gobject_class->set_property = gst_webrtc_nice_transport_set_property; gobject_class->finalize = gst_webrtc_nice_transport_finalize; + transport_class->get_selected_candidate_pair = + gst_webrtc_nice_get_selected_candidate_pair; + g_object_class_install_property (gobject_class, PROP_STREAM, g_param_spec_object ("stream",
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/webrtc/nice/niceutils.h
Added
@@ -0,0 +1,34 @@ +/* GStreamer + * Copyright (C) 2022 Sherrill Lin <lshuying@amazon.com> + * Copyright (C) 2022 Philippe Normand <philn@igalia.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <glib.h> +#include "nice.h" + +gchar *gst_webrtc_nice_get_candidate_server_url(GstWebRTCNice * ice, NiceCandidate * cand); +const gchar *gst_webrtc_nice_get_candidate_relay_protocol(GstUri * turn_server); +void gst_webrtc_nice_fill_local_candidate_credentials(NiceAgent * agent, NiceCandidate * cand); +void gst_webrtc_nice_fill_remote_candidate_credentials(GstWebRTCNice *nice, NiceCandidate * cand); + +typedef enum { + GST_WEBRTC_NICE_CANDIDATE_ORIGIN_LOCAL, + GST_WEBRTC_NICE_CANDIDATE_ORIGIN_REMOTE, +} GstWebRTCNiceCandidateOrigin;
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/webrtc/rtcsessiondescription.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/webrtc/rtcsessiondescription.h
Changed
@@ -47,9 +47,9 @@
 };

 GST_WEBRTC_API
-GstWebRTCSessionDescription * gst_webrtc_session_description_new (GstWebRTCSDPType type, GstSDPMessage *sdp);
+GstWebRTCSessionDescription * gst_webrtc_session_description_new (GstWebRTCSDPType type, GstSDPMessage *sdp) G_GNUC_WARN_UNUSED_RESULT;

 GST_WEBRTC_API
-GstWebRTCSessionDescription * gst_webrtc_session_description_copy (const GstWebRTCSessionDescription * src);
+GstWebRTCSessionDescription * gst_webrtc_session_description_copy (const GstWebRTCSessionDescription * src) G_GNUC_WARN_UNUSED_RESULT;

 GST_WEBRTC_API
 void gst_webrtc_session_description_free (GstWebRTCSessionDescription * desc);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-libs/gst/webrtc/webrtc_fwd.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-libs/gst/webrtc/webrtc_fwd.h
Changed
@@ -70,14 +70,79 @@ typedef struct _GstWebRTCICE GstWebRTCICE; typedef struct _GstWebRTCICEClass GstWebRTCICEClass; +typedef struct _GstWebRTCICECandidate GstWebRTCICECandidate; + /** - * GstWebRTCICECandidateStats: + * GstWebRTCICECandidatePair: * - * Since: 1.22 + * Since: 1.28 */ +typedef struct _GstWebRTCICECandidatePair GstWebRTCICECandidatePair; + typedef struct _GstWebRTCICECandidateStats GstWebRTCICECandidateStats; /** + * GstWebRTCICETcpCandidateType: + * @GST_WEBRTC_ICE_TCP_CANDIDATE_TYPE_ACTIVE: An "active" TCP candidate is one for which the transport + * will attempt to open an outbound connection but will not + * receive incoming connection requests. + * @GST_WEBRTC_ICE_TCP_CANDIDATE_TYPE_PASSIVE: A "passive" TCP candidate is one for which the transport + * will receive incoming connection attempts but not attempt + * a connection. + * @GST_WEBRTC_ICE_TCP_CANDIDATE_TYPE_SO: An "so" candidate is one for which the transport will attempt + * to open a connection simultaneously with its peer. + * @GST_WEBRTC_ICE_TCP_CANDIDATE_TYPE_NONE: Value used for non-TCP candidate type. 
+ * + * Since: 1.28 + */ +typedef enum /*< underscore_name=gst_webrtc_ice_tcp_candidate_type >*/ +{ + GST_WEBRTC_ICE_TCP_CANDIDATE_TYPE_ACTIVE, + GST_WEBRTC_ICE_TCP_CANDIDATE_TYPE_PASSIVE, + GST_WEBRTC_ICE_TCP_CANDIDATE_TYPE_SO, + GST_WEBRTC_ICE_TCP_CANDIDATE_TYPE_NONE, +} GstWebRTCICETcpCandidateType; + +/** + * GstWebRTCICECandidateProtocolType: + * @GST_WEBRTC_ICE_CANDIDATE_PROTOCOL_TYPE_TCP: + * @GST_WEBRTC_ICE_CANDIDATE_PROTOCOL_TYPE_UDP: + * + * Since: 1.28 + */ +typedef enum /*< underscore_name=gst_webrtc_ice_candidate_protocol_type >*/ +{ + GST_WEBRTC_ICE_CANDIDATE_PROTOCOL_TYPE_TCP, + GST_WEBRTC_ICE_CANDIDATE_PROTOCOL_TYPE_UDP, +} GstWebRTCICECandidateProtocolType; + +/** + * GstWebRTCICECandidateType: + * @GST_WEBRTC_ICE_CANDIDATE_TYPE_HOST: The candidate is a host candidate, whose + * IP address as specified in the RTCIceCandidate.address property is in fact the + * true address of the remote peer. + * @GST_WEBRTC_ICE_CANDIDATE_TYPE_SERVER_REFLEXIVE: The candidate is a server + * reflexive candidate; the ip and port are a binding allocated by a NAT for an + * agent when it sent a packet through the NAT to a server. They can be learned by + * the STUN server and TURN server to represent the candidate's peer anonymously. + * @GST_WEBRTC_ICE_CANDIDATE_TYPE_PEER_REFLEXIVE: The candidate is a peer + * reflexive candidate; the ip and port are a binding allocated by a NAT when it + * sent a STUN request to represent the candidate's peer anonymously. + * @GST_WEBRTC_ICE_CANDIDATE_TYPE_RELAYED: The candidate is a relay candidate, + * obtained from a TURN server. The relay candidate's IP address is an address the + * TURN server uses to forward the media between the two peers. 
+ * + * Since: 1.28 + */ +typedef enum /*< underscore_name=gst_webrtc_ice_candidate_type >*/ +{ + GST_WEBRTC_ICE_CANDIDATE_TYPE_HOST, + GST_WEBRTC_ICE_CANDIDATE_TYPE_SERVER_REFLEXIVE, + GST_WEBRTC_ICE_CANDIDATE_TYPE_PEER_REFLEXIVE, + GST_WEBRTC_ICE_CANDIDATE_TYPE_RELAYED, +} GstWebRTCICECandidateType; + +/** * GstWebRTCICEStream: * * Since: 1.22 @@ -150,6 +215,21 @@ } GstWebRTCDTLSTransportState; /** + * GstWebRTCDTLSRole: + * @GST_WEBRTC_DTLS_ROLE_CLIENT: client + * @GST_WEBRTC_DTLS_ROLE_SERVER: server + * @GST_WEBRTC_DTLS_ROLE_UNKNOWN: unknown + * + * Since: 1.28 + */ +typedef enum /*< underscore_name=gst_webrtc_dtls_role >*/ +{ + GST_WEBRTC_DTLS_ROLE_CLIENT, + GST_WEBRTC_DTLS_ROLE_SERVER, + GST_WEBRTC_DTLS_ROLE_UNKNOWN +} GstWebRTCDTLSRole; + +/** * GstWebRTCICEGatheringState: * @GST_WEBRTC_ICE_GATHERING_STATE_NEW: new * @GST_WEBRTC_ICE_GATHERING_STATE_GATHERING: gathering
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst-plugins-bad.doap -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst-plugins-bad.doap
Changed
@@ -35,101 +35,51 @@ <release> <Version> - <revision>1.26.10</revision> - <branch>1.26</branch> - <name></name> - <created>2025-12-25</created> - <file-release rdf:resource="https://gstreamer.freedesktop.org/src/gst-plugins-bad/gst-plugins-bad-1.26.10.tar.xz" /> - </Version> - </release> - - <release> - <Version> - <revision>1.26.9</revision> - <branch>1.26</branch> - <name></name> - <created>2025-12-01</created> - <file-release rdf:resource="https://gstreamer.freedesktop.org/src/gst-plugins-bad/gst-plugins-bad-1.26.9.tar.xz" /> - </Version> - </release> - - <release> - <Version> - <revision>1.26.8</revision> - <branch>1.26</branch> - <name></name> - <created>2025-11-10</created> - <file-release rdf:resource="https://gstreamer.freedesktop.org/src/gst-plugins-bad/gst-plugins-bad-1.26.8.tar.xz" /> - </Version> - </release> - - <release> - <Version> - <revision>1.26.7</revision> - <branch>1.26</branch> - <name></name> - <created>2025-10-14</created> - <file-release rdf:resource="https://gstreamer.freedesktop.org/src/gst-plugins-bad/gst-plugins-bad-1.26.7.tar.xz" /> - </Version> - </release> - - <release> - <Version> - <revision>1.26.6</revision> - <branch>1.26</branch> - <name></name> - <created>2025-09-14</created> - <file-release rdf:resource="https://gstreamer.freedesktop.org/src/gst-plugins-bad/gst-plugins-bad-1.26.6.tar.xz" /> - </Version> - </release> - - <release> - <Version> - <revision>1.26.5</revision> - <branch>1.26</branch> + <revision>1.28.0</revision> + <branch>main</branch> <name></name> - <created>2025-08-07</created> - <file-release rdf:resource="https://gstreamer.freedesktop.org/src/gst-plugins-bad/gst-plugins-bad-1.26.5.tar.xz" /> + <created>2026-01-27</created> + <file-release rdf:resource="https://gstreamer.freedesktop.org/src/gst-plugins-bad/gst-plugins-bad-1.28.0.tar.xz" /> </Version> </release> <release> <Version> - <revision>1.26.4</revision> - <branch>1.26</branch> + <revision>1.27.90</revision> + <branch>main</branch> <name></name> - 
<created>2025-07-16</created> - <file-release rdf:resource="https://gstreamer.freedesktop.org/src/gst-plugins-bad/gst-plugins-bad-1.26.4.tar.xz" /> + <created>2026-01-05</created> + <file-release rdf:resource="https://gstreamer.freedesktop.org/src/gst-plugins-bad/gst-plugins-bad-1.27.90.tar.xz" /> </Version> </release> <release> <Version> - <revision>1.26.3</revision> - <branch>1.26</branch> + <revision>1.27.50</revision> + <branch>main</branch> <name></name> - <created>2025-06-26</created> - <file-release rdf:resource="https://gstreamer.freedesktop.org/src/gst-plugins-bad/gst-plugins-bad-1.26.3.tar.xz" /> + <created>2025-12-09</created> + <file-release rdf:resource="https://gstreamer.freedesktop.org/src/gst-plugins-bad/gst-plugins-bad-1.27.50.tar.xz" /> </Version> </release> <release> <Version> - <revision>1.26.2</revision> - <branch>1.26</branch> + <revision>1.27.2</revision> + <branch>main</branch> <name></name> - <created>2025-05-29</created> - <file-release rdf:resource="https://gstreamer.freedesktop.org/src/gst-plugins-bad/gst-plugins-bad-1.26.2.tar.xz" /> + <created>2025-09-07</created> + <file-release rdf:resource="https://gstreamer.freedesktop.org/src/gst-plugins-bad/gst-plugins-bad-1.27.2.tar.xz" /> </Version> </release> <release> <Version> - <revision>1.26.1</revision> - <branch>1.26</branch> + <revision>1.27.1</revision> + <branch>main</branch> <name></name> - <created>2025-04-24</created> - <file-release rdf:resource="https://gstreamer.freedesktop.org/src/gst-plugins-bad/gst-plugins-bad-1.26.1.tar.xz" /> + <created>2025-07-08</created> + <file-release rdf:resource="https://gstreamer.freedesktop.org/src/gst-plugins-bad/gst-plugins-bad-1.27.1.tar.xz" /> </Version> </release>
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/accurip/gstaccurip.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/accurip/gstaccurip.c
Changed
@@ -120,7 +120,7 @@
   gstbasetrans_class->sink_event = GST_DEBUG_FUNCPTR (gst_accurip_sink_event);
   gstbasetrans_class->passthrough_on_same_caps = TRUE;

-  gst_element_class_set_metadata (GST_ELEMENT_CLASS (klass),
+  gst_element_class_set_static_metadata (GST_ELEMENT_CLASS (klass),
       "AccurateRip(TM) CRC element", "Filter/Analyzer/Audio",
       "Computes an AccurateRip CRC", "Christophe Fergeau <teuf@gnome.org>");
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/audiomixmatrix/gstaudiomixmatrix.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/audiomixmatrix/gstaudiomixmatrix.c
Changed
@@ -87,7 +87,8 @@ PROP_MODE }; -GType +#define GST_TYPE_AUDIO_MIX_MATRIX_MODE (gst_audio_mix_matrix_mode_get_type()) +static GType gst_audio_mix_matrix_mode_get_type (void) { static GType gst_audio_mix_matrix_mode_type = 0; @@ -130,11 +131,55 @@ GST_AUDIO_NE (S32) "}") ); +typedef struct _MixOutEntry +{ + guint index; + guint offset; + guint count; +} MixOutEntry; + +typedef struct _MixEntry +{ + guint index; + gdouble coeff; + gint64 coeff_s32; + gint32 coeff_s16; +} MixEntry; + +typedef void (*MixerFunc) (GstAudioMixMatrix * self, GstMapInfo * in_map, + GstMapInfo * out_map); + +#define NONZERO_DENSITY_THRESHOLD 0.5 + +struct _GstAudioMixMatrix +{ + GstBaseTransform audiofilter; + + guint in_channels; + guint out_channels; + gdouble *matrix; + gint32 *s16_conv_matrix; + gint64 *s32_conv_matrix; + guint64 channel_mask; + GstAudioMixMatrixMode mode; + gint shift_bytes_s16; + gint shift_bytes_s32; + + GstAudioInfo info; + + MixerFunc func; + + /* sparse-matrix optimization */ + MixOutEntry *out_entry; + MixEntry *entry; + guint num_valid_out_ch; +}; + static void gst_audio_mix_matrix_set_property (GObject * object, guint prop_id, const GValue * value, GParamSpec * pspec); static void gst_audio_mix_matrix_get_property (GObject * object, guint prop_id, GValue * value, GParamSpec * pspec); -static void gst_audio_mix_matrix_dispose (GObject * object); +static void gst_audio_mix_matrix_finalize (GObject * object); static gboolean gst_audio_mix_matrix_get_unit_size (GstBaseTransform * trans, GstCaps * caps, gsize * size); static gboolean gst_audio_mix_matrix_set_caps (GstBaseTransform * trans, @@ -169,7 +214,7 @@ gobject_class->set_property = gst_audio_mix_matrix_set_property; gobject_class->get_property = gst_audio_mix_matrix_get_property; - gobject_class->dispose = gst_audio_mix_matrix_dispose; + gobject_class->finalize = gst_audio_mix_matrix_finalize; g_object_class_install_property (gobject_class, PROP_IN_CHANNELS, g_param_spec_uint ("in-channels", "Input audio 
channels", @@ -225,64 +270,118 @@ static void gst_audio_mix_matrix_init (GstAudioMixMatrix * self) { - self->in_channels = 0; - self->out_channels = 0; - self->matrix = NULL; - self->channel_mask = 0; - self->s16_conv_matrix = NULL; - self->s32_conv_matrix = NULL; self->mode = GST_AUDIO_MIX_MATRIX_MODE_MANUAL; } static void -gst_audio_mix_matrix_dispose (GObject * object) +gst_audio_mix_matrix_clear (GstAudioMixMatrix * self, gboolean full) +{ + g_clear_pointer (&self->s16_conv_matrix, g_free); + g_clear_pointer (&self->s32_conv_matrix, g_free); + g_clear_pointer (&self->out_entry, g_free); + g_clear_pointer (&self->entry, g_free); + self->num_valid_out_ch = 0; + + if (full) + g_clear_pointer (&self->matrix, g_free); +} + +static void +gst_audio_mix_matrix_finalize (GObject * object) { GstAudioMixMatrix *self = GST_AUDIO_MIX_MATRIX (object); - if (self->matrix) { - g_free (self->matrix); - self->matrix = NULL; - } + gst_audio_mix_matrix_clear (self, TRUE); - G_OBJECT_CLASS (gst_audio_mix_matrix_parent_class)->dispose (object); + G_OBJECT_CLASS (gst_audio_mix_matrix_parent_class)->finalize (object); } -static void -gst_audio_mix_matrix_convert_s16_matrix (GstAudioMixMatrix * self) +static gboolean +gst_audio_mix_matrix_build_matrix (GstAudioMixMatrix * self) { - gint i; + const gdouble eps = 1e-12; + guint out, in; + guint offset = 0; + guint total_pairs = 0; + gdouble density; + + if (!self->matrix || !self->in_channels || !self->out_channels) + return TRUE; + + gst_audio_mix_matrix_clear (self, FALSE); /* converted bits - input bits - sign - bits needed for channel */ - self->shift_bytes = 32 - 16 - 1 - ceil (log (self->in_channels) / log (2)); - - if (self->s16_conv_matrix) - g_free (self->s16_conv_matrix); - self->s16_conv_matrix = - g_new (gint32, self->in_channels * self->out_channels); - for (i = 0; i < self->in_channels * self->out_channels; i++) { - self->s16_conv_matrix[i] = - (gint32) ((self->matrix[i]) * (1 << self->shift_bytes)); + self->shift_bytes_s16 = + 
32 - 16 - 1 - ceil (log (self->in_channels) / log (2)); + self->shift_bytes_s32 = + 64 - 32 - 1 - (gint) (log (self->in_channels) / log (2)); + + for (out = 0; out < self->out_channels; out++) { + for (in = 0; in < self->in_channels; in++) { + if (fabs (self->matrix[out * self->in_channels + in]) > eps) + total_pairs++; + } } -} -static void -gst_audio_mix_matrix_convert_s32_matrix (GstAudioMixMatrix * self) -{ - gint i; + density = ((double) total_pairs) / (self->out_channels * self->in_channels); + + GST_DEBUG_OBJECT (self, "nonzero coeff ratio: %.2lf (%d / %d)", density, + total_pairs, self->out_channels * self->in_channels); + + /* Sparse matrix mixing involves extra lookup and memset overhead. + * Use sparse optimization only when a sufficient number of zero coefficients + * is detected */ + if (NONZERO_DENSITY_THRESHOLD <= density) { + guint i; + self->s16_conv_matrix = + g_new (gint32, self->in_channels * self->out_channels); + self->s32_conv_matrix = + g_new (gint64, self->in_channels * self->out_channels); + + for (i = 0; i < self->in_channels * self->out_channels; i++) { + self->s16_conv_matrix[i] = + (gint32) ((self->matrix[i]) * (1 << self->shift_bytes_s16)); + self->s32_conv_matrix[i] = + (gint64) ((self->matrix[i]) * (1LL << self->shift_bytes_s32)); + } - /* converted bits - input bits - sign - bits needed for channel */ - self->shift_bytes = 64 - 32 - 1 - (gint) (log (self->in_channels) / log (2)); - - if (self->s32_conv_matrix) - g_free (self->s32_conv_matrix); - self->s32_conv_matrix = - g_new (gint64, self->in_channels * self->out_channels); - for (i = 0; i < self->in_channels * self->out_channels; i++) { - self->s32_conv_matrix[i] = - (gint64) ((self->matrix[i]) * (1 << self->shift_bytes)); + return FALSE; } -} + self->out_entry = g_new0 (MixOutEntry, self->out_channels); + self->entry = g_new0 (MixEntry, total_pairs); + + for (out = 0; out < self->out_channels; out++) { + guint count = 0; + for (in = 0; in < self->in_channels; in++) { + gdouble coeff = 
self->matrix[out * self->in_channels + in]; + if (fabs (coeff) > eps) { + self->entry[offset].index = in; + self->entry[offset].coeff = coeff; + self->entry[offset].coeff_s32 = + (gint64) (coeff * (1LL << self->shift_bytes_s32)); + self->entry[offset].coeff_s16 = + (gint32) (coeff * (1 << self->shift_bytes_s16)); + offset++; + count++; + } + } + + if (count > 0) { + MixOutEntry *out_entry = &self->out_entry[self->num_valid_out_ch]; + out_entry->index = out; + out_entry->offset = offset - count; + out_entry->count = count; + self->num_valid_out_ch++; + } + } + + GST_DEBUG_OBJECT (self, + "in-channels: %d, out-channels: %d, matrix-size: %d", + self->in_channels, self->out_channels, self->num_valid_out_ch); + + return TRUE; +} static void gst_audio_mix_matrix_set_property (GObject * object, guint prop_id, @@ -293,23 +392,16 @@ switch (prop_id) { case PROP_IN_CHANNELS: self->in_channels = g_value_get_uint (value); - if (self->matrix) { - gst_audio_mix_matrix_convert_s16_matrix (self); - gst_audio_mix_matrix_convert_s32_matrix (self); - } + gst_audio_mix_matrix_build_matrix (self); break; case PROP_OUT_CHANNELS: self->out_channels = g_value_get_uint (value); - if (self->matrix) { - gst_audio_mix_matrix_convert_s16_matrix (self); - gst_audio_mix_matrix_convert_s32_matrix (self); - } + gst_audio_mix_matrix_build_matrix (self); break; case PROP_MATRIX:{ gint in, out; - if (self->matrix) - g_free (self->matrix); + g_free (self->matrix); self->matrix = g_new (gdouble, self->in_channels * self->out_channels); g_return_if_fail (gst_value_array_get_size (value) == self->out_channels); @@ -326,8 +418,7 @@ self->matrix[out * self->in_channels + in] = coefficient; } } - gst_audio_mix_matrix_convert_s16_matrix (self); - gst_audio_mix_matrix_convert_s32_matrix (self); + gst_audio_mix_matrix_build_matrix (self); break; } case PROP_CHANNEL_MASK: @@ -398,32 +489,18 @@ s = GST_ELEMENT_CLASS (gst_audio_mix_matrix_parent_class)->change_state (element, transition); - if (transition == 
GST_STATE_CHANGE_PAUSED_TO_READY) { - if (self->s16_conv_matrix) { - g_free (self->s16_conv_matrix); - self->s16_conv_matrix = NULL; - } - - if (self->s32_conv_matrix) { - g_free (self->s32_conv_matrix); - self->s32_conv_matrix = NULL; - } - } + if (transition == GST_STATE_CHANGE_PAUSED_TO_READY) + gst_audio_mix_matrix_clear (self, FALSE); return s; } - static GstFlowReturn gst_audio_mix_matrix_transform (GstBaseTransform * vfilter, GstBuffer * inbuf, GstBuffer * outbuf) { GstMapInfo inmap, outmap; GstAudioMixMatrix *self = GST_AUDIO_MIX_MATRIX (vfilter); - gint in, out, sample; - guint inchannels = self->in_channels; - guint outchannels = self->out_channels; - gdouble *matrix = self->matrix; if (!gst_buffer_map (inbuf, &inmap, GST_MAP_READ)) { return GST_FLOW_ERROR; } @@ -433,103 +510,7 @@ return GST_FLOW_ERROR; } - switch (self->format) { - case GST_AUDIO_FORMAT_F32LE: - case GST_AUDIO_FORMAT_F32BE:{ - const gfloat *inarray; - gfloat *outarray; - guint n_samples = outmap.size / (sizeof (gfloat) * outchannels); - - inarray = (gfloat *) inmap.data; - outarray = (gfloat *) outmap.data; - - for (sample = 0; sample < n_samples; sample++) { - for (out = 0; out < outchannels; out++) { - gfloat outval = 0; - for (in = 0; in < inchannels; in++) { - outval += - inarray[sample * inchannels + - in] * matrix[out * inchannels + in]; - } - outarray[sample * outchannels + out] = outval; - } - } - break; - } - case GST_AUDIO_FORMAT_F64LE: - case GST_AUDIO_FORMAT_F64BE:{ - const gdouble *inarray; - gdouble *outarray; - guint n_samples = outmap.size / (sizeof (gdouble) * outchannels); - - inarray = (gdouble *) inmap.data; - outarray = (gdouble *) outmap.data; - - for (sample = 0; sample < n_samples; sample++) { - for (out = 0; out < outchannels; out++) { - gdouble outval = 0; - for (in = 0; in < inchannels; in++) { - outval += - inarray[sample * inchannels + - in] * matrix[out * inchannels + in]; - } - outarray[sample * outchannels + out] = outval; - } - } - break; - } - case 
GST_AUDIO_FORMAT_S16LE: - case GST_AUDIO_FORMAT_S16BE:{ - const gint16 *inarray; - gint16 *outarray; - guint n_samples = outmap.size / (sizeof (gint16) * outchannels); - guint n = self->shift_bytes; - gint32 *conv_matrix = self->s16_conv_matrix; - - inarray = (gint16 *) inmap.data; - outarray = (gint16 *) outmap.data; - - for (sample = 0; sample < n_samples; sample++) { - for (out = 0; out < outchannels; out++) { - gint32 outval = 0; - for (in = 0; in < inchannels; in++) { - outval += (gint32) (inarray[sample * inchannels + in] * - conv_matrix[out * inchannels + in]); - } - outarray[sample * outchannels + out] = (gint16) (outval >> n); - } - } - break; - } - case GST_AUDIO_FORMAT_S32LE: - case GST_AUDIO_FORMAT_S32BE:{ - const gint32 *inarray; - gint32 *outarray; - guint n_samples = outmap.size / (sizeof (gint32) * outchannels); - guint n = self->shift_bytes; - gint64 *conv_matrix = self->s32_conv_matrix; - - inarray = (gint32 *) inmap.data; - outarray = (gint32 *) outmap.data; - - for (sample = 0; sample < n_samples; sample++) { - for (out = 0; out < outchannels; out++) { - gint64 outval = 0; - for (in = 0; in < inchannels; in++) { - outval += (gint64) (inarray[sample * inchannels + in] * - conv_matrix[out * inchannels + in]); - } - outarray[sample * outchannels + out] = (gint32) (outval >> n); - } - } - break; - } - default: - gst_buffer_unmap (inbuf, &inmap); - gst_buffer_unmap (outbuf, &outmap); - return GST_FLOW_NOT_SUPPORTED; - - } + self->func (self, &inmap, &outmap); gst_buffer_unmap (inbuf, &inmap); gst_buffer_unmap (outbuf, &outmap); @@ -550,27 +531,303 @@ return TRUE; } +static void +gst_audio_mix_matrix_mix_f32 (GstAudioMixMatrix * self, GstMapInfo * in_map, + GstMapInfo * out_map) +{ + guint inchannels = self->in_channels; + guint outchannels = self->out_channels; + const gdouble *matrix = self->matrix; + const gfloat *inarray; + gfloat *outarray; + guint n_samples = out_map->size / (sizeof (gfloat) * outchannels); + guint in, out, sample; + + inarray = (gfloat *) 
in_map->data; + outarray = (gfloat *) out_map->data; + + for (sample = 0; sample < n_samples; sample++) { + gfloat *out_arr = &outarray[sample * outchannels]; + const gfloat *in_arr = &inarray[sample * inchannels]; + + for (out = 0; out < outchannels; out++) { + gfloat outval = 0; + const gdouble *coeff = &matrix[out * inchannels]; + for (in = 0; in < inchannels; in++) + outval += (gfloat) (in_arr[in] * coeff[in]); + out_arr[out] = outval; + } + } +} + +static void +gst_audio_mix_matrix_sparse_mix_f32 (GstAudioMixMatrix * self, + GstMapInfo * in_map, GstMapInfo * out_map) +{ + guint inchannels = self->in_channels; + guint outchannels = self->out_channels; + const gfloat *inarray; + gfloat *outarray; + guint n_samples = out_map->size / (sizeof (gfloat) * outchannels); + guint in, out, sample; + + gst_audio_format_info_fill_silence (self->info.finfo, + out_map->data, out_map->size); + + inarray = (gfloat *) in_map->data; + outarray = (gfloat *) out_map->data; + + for (sample = 0; sample < n_samples; sample++) { + gfloat *out_arr = &outarray[sample * outchannels]; + const gfloat *in_arr = &inarray[sample * inchannels]; + + for (out = 0; out < self->num_valid_out_ch; out++) { + gfloat outval = 0; + const MixOutEntry *out_entry = &self->out_entry[out]; + guint out_index = out_entry->index; + guint offset = out_entry->offset; + guint count = out_entry->count; + + for (in = 0; in < count; in++) { + const MixEntry *entry = &self->entry[offset + in]; + guint in_index = entry->index; + gfloat coeff = (gfloat) entry->coeff; + outval += in_arr[in_index] * coeff; + } + out_arr[out_index] = outval; + } + } +} + +static void +gst_audio_mix_matrix_mix_f64 (GstAudioMixMatrix * self, GstMapInfo * in_map, + GstMapInfo * out_map) +{ + guint inchannels = self->in_channels; + guint outchannels = self->out_channels; + const gdouble *matrix = self->matrix; + const gdouble *inarray; + gdouble *outarray; + guint n_samples = out_map->size / (sizeof (gdouble) * outchannels); + guint in, out, sample; + + inarray = 
(gdouble *) in_map->data; + outarray = (gdouble *) out_map->data; + + for (sample = 0; sample < n_samples; sample++) { + gdouble *out_arr = &outarray[sample * outchannels]; + const gdouble *in_arr = &inarray[sample * inchannels]; + + for (out = 0; out < outchannels; out++) { + gdouble outval = 0; + const gdouble *coeff = &matrix[out * inchannels]; + for (in = 0; in < inchannels; in++) + outval += in_arr[in] * coeff[in]; + out_arr[out] = outval; + } + } +} + +static void +gst_audio_mix_matrix_sparse_mix_f64 (GstAudioMixMatrix * self, + GstMapInfo * in_map, GstMapInfo * out_map) +{ + guint inchannels = self->in_channels; + guint outchannels = self->out_channels; + const gdouble *inarray; + gdouble *outarray; + guint n_samples = out_map->size / (sizeof (gdouble) * outchannels); + guint in, out, sample; + + gst_audio_format_info_fill_silence (self->info.finfo, + out_map->data, out_map->size); + + inarray = (gdouble *) in_map->data; + outarray = (gdouble *) out_map->data; + + for (sample = 0; sample < n_samples; sample++) { + gdouble *out_arr = &outarray[sample * outchannels]; + const gdouble *in_arr = &inarray[sample * inchannels]; + + for (out = 0; out < self->num_valid_out_ch; out++) { + gdouble outval = 0; + const MixOutEntry *out_entry = &self->out_entry[out]; + guint out_index = out_entry->index; + guint offset = out_entry->offset; + guint count = out_entry->count; + + for (in = 0; in < count; in++) { + const MixEntry *entry = &self->entry[offset + in]; + guint in_index = entry->index; + gdouble coeff = entry->coeff; + outval += in_arr[in_index] * coeff; + } + out_arr[out_index] = outval; + } + } +} + +static void +gst_audio_mix_matrix_mix_s16 (GstAudioMixMatrix * self, GstMapInfo * in_map, + GstMapInfo * out_map) +{ + guint inchannels = self->in_channels; + guint outchannels = self->out_channels; + const gint32 *matrix = self->s16_conv_matrix; + const gint16 *inarray; + gint16 *outarray; + guint n_samples = out_map->size / (sizeof (gint16) * outchannels); + guint n = 
self->shift_bytes_s16; + guint in, out, sample; + + inarray = (gint16 *) in_map->data; + outarray = (gint16 *) out_map->data; + + for (sample = 0; sample < n_samples; sample++) { + gint16 *out_arr = &outarray[sample * outchannels]; + const gint16 *in_arr = &inarray[sample * inchannels]; + + for (out = 0; out < outchannels; out++) { + gint32 outval = 0; + const gint32 *coeff = &matrix[out * inchannels]; + for (in = 0; in < inchannels; in++) + outval += in_arr[in] * coeff[in]; + out_arr[out] = (gint16) (outval >> n); + } + } +} + +static void +gst_audio_mix_matrix_sparse_mix_s16 (GstAudioMixMatrix * self, + GstMapInfo * in_map, GstMapInfo * out_map) +{ + guint inchannels = self->in_channels; + guint outchannels = self->out_channels; + const gint16 *inarray; + gint16 *outarray; + guint n_samples = out_map->size / (sizeof (gint16) * outchannels); + guint n = self->shift_bytes_s16; + guint in, out, sample; + + gst_audio_format_info_fill_silence (self->info.finfo, + out_map->data, out_map->size); + + inarray = (gint16 *) in_map->data; + outarray = (gint16 *) out_map->data; + + for (sample = 0; sample < n_samples; sample++) { + gint16 *out_arr = &outarray[sample * outchannels]; + const gint16 *in_arr = &inarray[sample * inchannels]; + + for (out = 0; out < self->num_valid_out_ch; out++) { + gint32 outval = 0; + const MixOutEntry *out_entry = &self->out_entry[out]; + guint out_index = out_entry->index; + guint offset = out_entry->offset; + guint count = out_entry->count; + + for (in = 0; in < count; in++) { + const MixEntry *entry = &self->entry[offset + in]; + guint in_index = entry->index; + gint32 coeff = entry->coeff_s16; + outval += in_arr[in_index] * coeff; + } + out_arr[out_index] = (gint16) (outval >> n); + } + } +} + +static void +gst_audio_mix_matrix_mix_s32 (GstAudioMixMatrix * self, GstMapInfo * in_map, + GstMapInfo * out_map) +{ + guint inchannels = self->in_channels; + guint outchannels = self->out_channels; + const gint64 *matrix = self->s32_conv_matrix; + const gint32 *inarray; + 
gint32 *outarray; + guint n_samples = out_map->size / (sizeof (gint32) * outchannels); + guint n = self->shift_bytes_s32; + guint in, out, sample; + + inarray = (gint32 *) in_map->data; + outarray = (gint32 *) out_map->data; + + for (sample = 0; sample < n_samples; sample++) { + gint32 *out_arr = &outarray[sample * outchannels]; + const gint32 *in_arr = &inarray[sample * inchannels]; + + for (out = 0; out < outchannels; out++) { + gint64 outval = 0; + const gint64 *coeff = &matrix[out * inchannels]; + for (in = 0; in < inchannels; in++) + outval += in_arr[in] * coeff[in]; + out_arr[out] = (gint32) (outval >> n); + } + } +} + +static void +gst_audio_mix_matrix_sparse_mix_s32 (GstAudioMixMatrix * self, + GstMapInfo * in_map, GstMapInfo * out_map) +{ + guint inchannels = self->in_channels; + guint outchannels = self->out_channels; + const gint32 *inarray; + gint32 *outarray; + guint n_samples = out_map->size / (sizeof (gint32) * outchannels); + guint n = self->shift_bytes_s32; + guint in, out, sample; + + gst_audio_format_info_fill_silence (self->info.finfo, + out_map->data, out_map->size); + + inarray = (gint32 *) in_map->data; + outarray = (gint32 *) out_map->data; + + for (sample = 0; sample < n_samples; sample++) { + gint32 *out_arr = &outarray[sample * outchannels]; + const gint32 *in_arr = &inarray[sample * inchannels]; + + for (out = 0; out < self->num_valid_out_ch; out++) { + gint64 outval = 0; + const MixOutEntry *out_entry = &self->out_entry[out]; + guint out_index = out_entry->index; + guint offset = out_entry->offset; + guint count = out_entry->count; + + for (in = 0; in < count; in++) { + const MixEntry *entry = &self->entry[offset + in]; + guint in_index = entry->index; + gint64 coeff = entry->coeff_s32; + outval += in_arr[in_index] * coeff; + } + out_arr[out_index] = (gint32) (outval >> n); + } + } +} + static gboolean gst_audio_mix_matrix_set_caps (GstBaseTransform * trans, GstCaps * incaps, GstCaps * outcaps) { GstAudioMixMatrix *self = GST_AUDIO_MIX_MATRIX (trans); - 
GstAudioInfo info, out_info; + GstAudioInfo out_info; + gboolean use_sparse; - if (!gst_audio_info_from_caps (&info, incaps)) + if (!gst_audio_info_from_caps (&self->info, incaps)) return FALSE; if (!gst_audio_info_from_caps (&out_info, outcaps)) return FALSE; - self->format = info.finfo->format; - if (self->mode == GST_AUDIO_MIX_MATRIX_MODE_FIRST_CHANNELS) { gint in, out; - self->in_channels = info.channels; + self->in_channels = self->info.channels; self->out_channels = out_info.channels; + g_free (self->matrix); self->matrix = g_new (gdouble, self->in_channels * self->out_channels); for (out = 0; out < self->out_channels; out++) { @@ -578,7 +835,7 @@ self->matrix[out * self->in_channels + in] = (out == in); } } - } else if (!self->matrix || info.channels != self->in_channels || + } else if (!self->matrix || self->info.channels != self->in_channels || out_info.channels != self->out_channels) { GST_ELEMENT_ERROR (self, LIBRARY, SETTINGS, ("Erroneous matrix detected"), @@ -586,20 +843,41 @@ return FALSE; } - switch (self->format) { + use_sparse = gst_audio_mix_matrix_build_matrix (self); + switch (GST_AUDIO_INFO_FORMAT (&self->info)) { + case GST_AUDIO_FORMAT_F32LE: + case GST_AUDIO_FORMAT_F32BE: + if (use_sparse) + self->func = (MixerFunc) gst_audio_mix_matrix_sparse_mix_f32; + else + self->func = (MixerFunc) gst_audio_mix_matrix_mix_f32; + break; + case GST_AUDIO_FORMAT_F64LE: + case GST_AUDIO_FORMAT_F64BE: + if (use_sparse) + self->func = (MixerFunc) gst_audio_mix_matrix_sparse_mix_f64; + else + self->func = (MixerFunc) gst_audio_mix_matrix_mix_f64; + break; case GST_AUDIO_FORMAT_S16LE: - case GST_AUDIO_FORMAT_S16BE:{ - gst_audio_mix_matrix_convert_s16_matrix (self); + case GST_AUDIO_FORMAT_S16BE: + if (use_sparse) + self->func = (MixerFunc) gst_audio_mix_matrix_sparse_mix_s16; + else + self->func = (MixerFunc) gst_audio_mix_matrix_mix_s16; break; - } case GST_AUDIO_FORMAT_S32LE: - case GST_AUDIO_FORMAT_S32BE:{ - gst_audio_mix_matrix_convert_s32_matrix (self); + 
case GST_AUDIO_FORMAT_S32BE: + if (use_sparse) + self->func = (MixerFunc) gst_audio_mix_matrix_sparse_mix_s32; + else + self->func = (MixerFunc) gst_audio_mix_matrix_mix_s32; break; - } default: - break; + g_assert_not_reached (); + return FALSE; } + return TRUE; }
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/audiomixmatrix/gstaudiomixmatrix.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/audiomixmatrix/gstaudiomixmatrix.h
Changed
@@ -26,16 +26,9 @@ #include <gst/gst.h> #include <gst/audio/audio.h> -#define GST_TYPE_AUDIO_MIX_MATRIX (gst_audio_mix_matrix_get_type()) -#define GST_AUDIO_MIX_MATRIX(obj) (G_TYPE_CHECK_INSTANCE_CAST((obj),GST_TYPE_AUDIO_MIX_MATRIX,GstAudioMixMatrix)) -#define GST_AUDIO_MIX_MATRIX_CLASS(klass) (G_TYPE_CHECK_CLASS_CAST((klass), GST_TYPE_AUDIO_MIX_MATRIX,GstAudioMixMatrixClass)) -#define GST_AUDIO_MIX_MATRIX_GET_CLASS(obj) (G_TYPE_INSTANCE_GET_CLASS((obj), GST_TYPE_AUDIO_MIX_MATRIX,GstAudioMixMatrixClass)) -#define GST_IS_AUDIO_MIX_MATRIX(obj) (G_TYPE_CHECK_INSTANCE_TYPE((obj),GST_TYPE_AUDIO_MIX_MATRIX)) -#define GST_IS_AUDIO_MIX_MATRIX_CLASS(klass) (G_TYPE_CHECK_CLASS_TYPE((klass), GST_TYPE_AUDIO_MIX_MATRIX)) -#define GST_TYPE_AUDIO_MIX_MATRIX_MODE (gst_audio_mix_matrix_mode_get_type()) - -typedef struct _GstAudioMixMatrix GstAudioMixMatrix; -typedef struct _GstAudioMixMatrixClass GstAudioMixMatrixClass; +#define GST_TYPE_AUDIO_MIX_MATRIX (gst_audio_mix_matrix_get_type()) +G_DECLARE_FINAL_TYPE (GstAudioMixMatrix, gst_audio_mix_matrix, + GST, AUDIO_MIX_MATRIX, GstBaseTransform) typedef enum _GstAudioMixMatrixMode { @@ -43,38 +36,7 @@ GST_AUDIO_MIX_MATRIX_MODE_FIRST_CHANNELS = 1 } GstAudioMixMatrixMode; -/** - * GstAudioMixMatrix: - * - * Opaque data structure. - */ -struct _GstAudioMixMatrix -{ - GstBaseTransform audiofilter; - - /* < private > */ - guint in_channels; - guint out_channels; - gdouble *matrix; - guint64 channel_mask; - GstAudioMixMatrixMode mode; - gint32 *s16_conv_matrix; - gint64 *s32_conv_matrix; - gint shift_bytes; - - GstAudioFormat format; -}; - -struct _GstAudioMixMatrixClass -{ - GstBaseTransformClass parent_class; -}; - -GType gst_audio_mix_matrix_get_type (void); - GST_ELEMENT_REGISTER_DECLARE (audiomixmatrix); -GType gst_audio_mix_matrix_mode_get_type (void); - G_END_DECLS #endif /* __GST_AUDIO_MIX_MATRIX_H__ */
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/bayer/gstbayer2rgb.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/bayer/gstbayer2rgb.c
Changed
@@ -557,7 +557,9 @@ const int bayersrc16 = bayer2rgb->bpp > 8; int j; guint8 *tmp; - guint32 *dtmp; + // This is always initialized when we check for bayersrc16 + // but explicitly do so to avoid the gcc false-positive warning + guint32 *dtmp = 0; process_func merge[2] = { NULL, NULL }; process_func16 merge16[2] = { NULL, NULL }; int r_off, g_off, b_off;
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/bayer/gstbayerorc-dist.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/bayer/gstbayerorc-dist.c
Changed
@@ -67,6 +67,7 @@ orc_int32 x2[2]; float x2f[2]; orc_int16 x4[4]; + orc_int8 x8[8]; } orc_union64; #endif #ifndef ORC_RESTRICT @@ -74,6 +75,8 @@ #define ORC_RESTRICT restrict #elif defined(__GNUC__) && __GNUC__ >= 4 #define ORC_RESTRICT __restrict__ +#elif defined(_MSC_VER) +#define ORC_RESTRICT __restrict #else #define ORC_RESTRICT #endif @@ -186,6 +189,7 @@ /* begin Orc C target preamble */ +#include <math.h> #define ORC_CLAMP(x,a,b) ((x)<(a) ? (a) : ((x)>(b) ? (b) : (x))) #define ORC_ABS(a) ((a)<0 ? -(a) : (a)) #define ORC_MIN(a,b) ((a)<(b) ? (a) : (b)) @@ -221,6 +225,8 @@ #define ORC_RESTRICT restrict #elif defined(__GNUC__) && __GNUC__ >= 4 #define ORC_RESTRICT __restrict__ +#elif defined(_MSC_VER) +#define ORC_RESTRICT __restrict #else #define ORC_RESTRICT #endif @@ -375,65 +381,61 @@ guint8 * ORC_RESTRICT d2, const guint8 * ORC_RESTRICT s1, int n) { OrcExecutor _ex, *ex = &_ex; - static volatile int p_inited = 0; - static OrcCode *c = 0; - void (*func) (OrcExecutor *); - - if (!p_inited) { - orc_once_mutex_lock (); - if (!p_inited) { - OrcProgram *p; + static OrcOnce once = ORC_ONCE_INIT; + OrcCode *c; + OrcExecutorFunc func = NULL; + + if (!orc_once_enter (&once, (void **) &c)) { + OrcProgram *p; #if 1 - static const orc_uint8 bc[] = { - 1, 9, 34, 98, 97, 121, 101, 114, 95, 111, 114, 99, 95, 104, 111, 114, - 105, 122, 95, 117, 112, 115, 97, 109, 112, 108, 101, 95, 117, 110, 97, - 108, - 105, 103, 110, 101, 100, 11, 2, 2, 11, 2, 2, 12, 2, 2, 14, 4, - 1, 0, 0, 0, 20, 2, 20, 1, 20, 1, 20, 1, 20, 1, 199, 34, - 33, 4, 83, 32, 4, 16, 199, 36, 35, 32, 39, 36, 34, 36, 196, 0, - 34, 36, 39, 33, 33, 35, 196, 1, 33, 35, 2, 0, - }; - p = orc_program_new_from_static_bytecode (bc); - orc_program_set_backup_function (p, - _backup_bayer_orc_horiz_upsample_unaligned); -#else - p = orc_program_new (); - orc_program_set_name (p, "bayer_orc_horiz_upsample_unaligned"); - orc_program_set_backup_function (p, - _backup_bayer_orc_horiz_upsample_unaligned); - orc_program_add_destination (p, 
2, "d1"); - orc_program_add_destination (p, 2, "d2"); - orc_program_add_source (p, 2, "s1"); - orc_program_add_constant (p, 4, 0x00000001, "c1"); - orc_program_add_temporary (p, 2, "t1"); - orc_program_add_temporary (p, 1, "t2"); - orc_program_add_temporary (p, 1, "t3"); - orc_program_add_temporary (p, 1, "t4"); - orc_program_add_temporary (p, 1, "t5"); - - orc_program_append_2 (p, "splitwb", 0, ORC_VAR_T3, ORC_VAR_T2, ORC_VAR_S1, - ORC_VAR_D1); - orc_program_append_2 (p, "loadoffw", 0, ORC_VAR_T1, ORC_VAR_S1, - ORC_VAR_C1, ORC_VAR_D1); - orc_program_append_2 (p, "splitwb", 0, ORC_VAR_T5, ORC_VAR_T4, ORC_VAR_T1, - ORC_VAR_D1); - orc_program_append_2 (p, "avgub", 0, ORC_VAR_T5, ORC_VAR_T3, ORC_VAR_T5, - ORC_VAR_D1); - orc_program_append_2 (p, "mergebw", 0, ORC_VAR_D1, ORC_VAR_T3, ORC_VAR_T5, - ORC_VAR_D1); - orc_program_append_2 (p, "avgub", 0, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_T4, - ORC_VAR_D1); - orc_program_append_2 (p, "mergebw", 0, ORC_VAR_D2, ORC_VAR_T2, ORC_VAR_T4, - ORC_VAR_D1); -#endif - - orc_program_compile (p); - c = orc_program_take_code (p); - orc_program_free (p); - } - p_inited = TRUE; - orc_once_mutex_unlock (); + static const orc_uint8 bc[] = { + 1, 9, 34, 98, 97, 121, 101, 114, 95, 111, 114, 99, 95, 104, 111, 114, + 105, 122, 95, 117, 112, 115, 97, 109, 112, 108, 101, 95, 117, 110, 97, + 108, + 105, 103, 110, 101, 100, 11, 2, 2, 11, 2, 2, 12, 2, 2, 14, 4, + 1, 0, 0, 0, 20, 2, 20, 1, 20, 1, 20, 1, 20, 1, 199, 34, + 33, 4, 83, 32, 4, 16, 199, 36, 35, 32, 39, 36, 34, 36, 196, 0, + 34, 36, 39, 33, 33, 35, 196, 1, 33, 35, 2, 0, + }; + p = orc_program_new_from_static_bytecode (bc); + orc_program_set_backup_function (p, + _backup_bayer_orc_horiz_upsample_unaligned); +#else + p = orc_program_new (); + orc_program_set_name (p, "bayer_orc_horiz_upsample_unaligned"); + orc_program_set_backup_function (p, + _backup_bayer_orc_horiz_upsample_unaligned); + orc_program_add_destination (p, 2, "d1"); + orc_program_add_destination (p, 2, "d2"); + orc_program_add_source 
(p, 2, "s1"); + orc_program_add_constant (p, 4, 0x00000001, "c1"); + orc_program_add_temporary (p, 2, "t1"); + orc_program_add_temporary (p, 1, "t2"); + orc_program_add_temporary (p, 1, "t3"); + orc_program_add_temporary (p, 1, "t4"); + orc_program_add_temporary (p, 1, "t5"); + + orc_program_append_2 (p, "splitwb", 0, ORC_VAR_T3, ORC_VAR_T2, ORC_VAR_S1, + ORC_VAR_D1); + orc_program_append_2 (p, "loadoffw", 0, ORC_VAR_T1, ORC_VAR_S1, ORC_VAR_C1, + ORC_VAR_D1); + orc_program_append_2 (p, "splitwb", 0, ORC_VAR_T5, ORC_VAR_T4, ORC_VAR_T1, + ORC_VAR_D1); + orc_program_append_2 (p, "avgub", 0, ORC_VAR_T5, ORC_VAR_T3, ORC_VAR_T5, + ORC_VAR_D1); + orc_program_append_2 (p, "mergebw", 0, ORC_VAR_D1, ORC_VAR_T3, ORC_VAR_T5, + ORC_VAR_D1); + orc_program_append_2 (p, "avgub", 0, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_T4, + ORC_VAR_D1); + orc_program_append_2 (p, "mergebw", 0, ORC_VAR_D2, ORC_VAR_T2, ORC_VAR_T4, + ORC_VAR_D1); +#endif + + orc_program_compile (p); + c = orc_program_take_code (p); + orc_program_free (p); + orc_once_leave (&once, c); } ex->arrays[ORC_VAR_A2] = c; ex->program = 0; @@ -611,67 +613,63 @@ const guint8 * ORC_RESTRICT s1, int n) { OrcExecutor _ex, *ex = &_ex; - static volatile int p_inited = 0; - static OrcCode *c = 0; - void (*func) (OrcExecutor *); - - if (!p_inited) { - orc_once_mutex_lock (); - if (!p_inited) { - OrcProgram *p; + static OrcOnce once = ORC_ONCE_INIT; + OrcCode *c; + OrcExecutorFunc func = NULL; + + if (!orc_once_enter (&once, (void **) &c)) { + OrcProgram *p; #if 1 - static const orc_uint8 bc[] = { - 1, 9, 24, 98, 97, 121, 101, 114, 95, 111, 114, 99, 95, 104, 111, 114, - 105, 122, 95, 117, 112, 115, 97, 109, 112, 108, 101, 11, 2, 2, 11, 2, - 2, 12, 2, 2, 14, 4, 255, 255, 255, 255, 14, 4, 1, 0, 0, 0, - 20, 2, 20, 1, 20, 1, 20, 1, 20, 1, 83, 32, 4, 16, 189, 33, - 32, 199, 35, 34, 4, 83, 32, 4, 17, 188, 36, 32, 39, 36, 34, 36, - 196, 0, 34, 36, 39, 33, 33, 35, 196, 1, 33, 35, 2, 0, - }; - p = orc_program_new_from_static_bytecode (bc); - 
orc_program_set_backup_function (p, _backup_bayer_orc_horiz_upsample); -#else - p = orc_program_new (); - orc_program_set_name (p, "bayer_orc_horiz_upsample"); - orc_program_set_backup_function (p, _backup_bayer_orc_horiz_upsample); - orc_program_add_destination (p, 2, "d1"); - orc_program_add_destination (p, 2, "d2"); - orc_program_add_source (p, 2, "s1"); - orc_program_add_constant (p, 4, 0xffffffff, "c1"); - orc_program_add_constant (p, 4, 0x00000001, "c2"); - orc_program_add_temporary (p, 2, "t1"); - orc_program_add_temporary (p, 1, "t2"); - orc_program_add_temporary (p, 1, "t3"); - orc_program_add_temporary (p, 1, "t4"); - orc_program_add_temporary (p, 1, "t5"); - - orc_program_append_2 (p, "loadoffw", 0, ORC_VAR_T1, ORC_VAR_S1, - ORC_VAR_C1, ORC_VAR_D1); - orc_program_append_2 (p, "select1wb", 0, ORC_VAR_T2, ORC_VAR_T1, - ORC_VAR_D1, ORC_VAR_D1); - orc_program_append_2 (p, "splitwb", 0, ORC_VAR_T4, ORC_VAR_T3, ORC_VAR_S1, - ORC_VAR_D1); - orc_program_append_2 (p, "loadoffw", 0, ORC_VAR_T1, ORC_VAR_S1, - ORC_VAR_C2, ORC_VAR_D1); - orc_program_append_2 (p, "select0wb", 0, ORC_VAR_T5, ORC_VAR_T1, - ORC_VAR_D1, ORC_VAR_D1); - orc_program_append_2 (p, "avgub", 0, ORC_VAR_T5, ORC_VAR_T3, ORC_VAR_T5, - ORC_VAR_D1); - orc_program_append_2 (p, "mergebw", 0, ORC_VAR_D1, ORC_VAR_T3, ORC_VAR_T5, - ORC_VAR_D1); - orc_program_append_2 (p, "avgub", 0, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_T4, - ORC_VAR_D1); - orc_program_append_2 (p, "mergebw", 0, ORC_VAR_D2, ORC_VAR_T2, ORC_VAR_T4, - ORC_VAR_D1); -#endif - - orc_program_compile (p); - c = orc_program_take_code (p); - orc_program_free (p); - } - p_inited = TRUE; - orc_once_mutex_unlock (); + static const orc_uint8 bc[] = { + 1, 9, 24, 98, 97, 121, 101, 114, 95, 111, 114, 99, 95, 104, 111, 114, + 105, 122, 95, 117, 112, 115, 97, 109, 112, 108, 101, 11, 2, 2, 11, 2, + 2, 12, 2, 2, 14, 4, 255, 255, 255, 255, 14, 4, 1, 0, 0, 0, + 20, 2, 20, 1, 20, 1, 20, 1, 20, 1, 83, 32, 4, 16, 189, 33, + 32, 199, 35, 34, 4, 83, 32, 4, 17, 188, 36, 
32, 39, 36, 34, 36, + 196, 0, 34, 36, 39, 33, 33, 35, 196, 1, 33, 35, 2, 0, + }; + p = orc_program_new_from_static_bytecode (bc); + orc_program_set_backup_function (p, _backup_bayer_orc_horiz_upsample); +#else + p = orc_program_new (); + orc_program_set_name (p, "bayer_orc_horiz_upsample"); + orc_program_set_backup_function (p, _backup_bayer_orc_horiz_upsample); + orc_program_add_destination (p, 2, "d1"); + orc_program_add_destination (p, 2, "d2"); + orc_program_add_source (p, 2, "s1"); + orc_program_add_constant (p, 4, 0xffffffff, "c1"); + orc_program_add_constant (p, 4, 0x00000001, "c2"); + orc_program_add_temporary (p, 2, "t1"); + orc_program_add_temporary (p, 1, "t2"); + orc_program_add_temporary (p, 1, "t3"); + orc_program_add_temporary (p, 1, "t4"); + orc_program_add_temporary (p, 1, "t5"); + + orc_program_append_2 (p, "loadoffw", 0, ORC_VAR_T1, ORC_VAR_S1, ORC_VAR_C1, + ORC_VAR_D1); + orc_program_append_2 (p, "select1wb", 0, ORC_VAR_T2, ORC_VAR_T1, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "splitwb", 0, ORC_VAR_T4, ORC_VAR_T3, ORC_VAR_S1, + ORC_VAR_D1); + orc_program_append_2 (p, "loadoffw", 0, ORC_VAR_T1, ORC_VAR_S1, ORC_VAR_C2, + ORC_VAR_D1); + orc_program_append_2 (p, "select0wb", 0, ORC_VAR_T5, ORC_VAR_T1, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "avgub", 0, ORC_VAR_T5, ORC_VAR_T3, ORC_VAR_T5, + ORC_VAR_D1); + orc_program_append_2 (p, "mergebw", 0, ORC_VAR_D1, ORC_VAR_T3, ORC_VAR_T5, + ORC_VAR_D1); + orc_program_append_2 (p, "avgub", 0, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_T4, + ORC_VAR_D1); + orc_program_append_2 (p, "mergebw", 0, ORC_VAR_D2, ORC_VAR_T2, ORC_VAR_T4, + ORC_VAR_D1); +#endif + + orc_program_compile (p); + c = orc_program_take_code (p); + orc_program_free (p); + orc_once_leave (&once, c); } ex->arrays[ORC_VAR_A2] = c; ex->program = 0; @@ -969,77 +967,73 @@ const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n) { OrcExecutor _ex, *ex = &_ex; - static volatile int p_inited = 0; - static OrcCode *c = 0; - void 
(*func) (OrcExecutor *); - - if (!p_inited) { - orc_once_mutex_lock (); - if (!p_inited) { - OrcProgram *p; + static OrcOnce once = ORC_ONCE_INIT; + OrcCode *c; + OrcExecutorFunc func = NULL; + + if (!orc_once_enter (&once, (void **) &c)) { + OrcProgram *p; #if 1 - static const orc_uint8 bc = { - 1, 9, 23, 98, 97, 121, 101, 114, 95, 111, 114, 99, 95, 109, 101, 114, - 103, 101, 95, 98, 103, 95, 98, 103, 114, 97, 11, 8, 8, 12, 2, 2, - 12, 2, 2, 12, 2, 2, 12, 2, 2, 12, 2, 2, 12, 2, 2, 14, - 2, 255, 0, 0, 0, 14, 2, 0, 255, 0, 0, 14, 1, 255, 0, 0, - 0, 20, 4, 20, 4, 20, 2, 20, 2, 20, 2, 21, 1, 39, 34, 5, - 9, 21, 1, 39, 35, 4, 8, 79, 36, 7, 21, 1, 39, 35, 35, 36, - 73, 35, 35, 16, 73, 36, 36, 17, 92, 35, 36, 35, 21, 1, 196, 33, - 6, 35, 21, 1, 196, 32, 34, 18, 21, 1, 195, 0, 33, 32, 2, 0, - - }; - p = orc_program_new_from_static_bytecode (bc); - orc_program_set_backup_function (p, _backup_bayer_orc_merge_bg_bgra); -#else - p = orc_program_new (); - orc_program_set_name (p, "bayer_orc_merge_bg_bgra"); - orc_program_set_backup_function (p, _backup_bayer_orc_merge_bg_bgra); - orc_program_add_destination (p, 8, "d1"); - orc_program_add_source (p, 2, "s1"); - orc_program_add_source (p, 2, "s2"); - orc_program_add_source (p, 2, "s3"); - orc_program_add_source (p, 2, "s4"); - orc_program_add_source (p, 2, "s5"); - orc_program_add_source (p, 2, "s6"); - orc_program_add_constant (p, 2, 0x000000ff, "c1"); - orc_program_add_constant (p, 2, 0x0000ff00, "c2"); - orc_program_add_constant (p, 1, 0x000000ff, "c3"); - orc_program_add_temporary (p, 4, "t1"); - orc_program_add_temporary (p, 4, "t2"); - orc_program_add_temporary (p, 2, "t3"); - orc_program_add_temporary (p, 2, "t4"); - orc_program_add_temporary (p, 2, "t5"); - - orc_program_append_2 (p, "avgub", 1, ORC_VAR_T3, ORC_VAR_S2, ORC_VAR_S6, - ORC_VAR_D1); - orc_program_append_2 (p, "avgub", 1, ORC_VAR_T4, ORC_VAR_S1, ORC_VAR_S5, - ORC_VAR_D1); - orc_program_append_2 (p, "copyw", 0, ORC_VAR_T5, ORC_VAR_S4, ORC_VAR_D1, - 
ORC_VAR_D1); - orc_program_append_2 (p, "avgub", 1, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_T5, - ORC_VAR_D1); - orc_program_append_2 (p, "andw", 0, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_C1, - ORC_VAR_D1); - orc_program_append_2 (p, "andw", 0, ORC_VAR_T5, ORC_VAR_T5, ORC_VAR_C2, - ORC_VAR_D1); - orc_program_append_2 (p, "orw", 0, ORC_VAR_T4, ORC_VAR_T5, ORC_VAR_T4, - ORC_VAR_D1); - orc_program_append_2 (p, "mergebw", 1, ORC_VAR_T2, ORC_VAR_S3, ORC_VAR_T4, - ORC_VAR_D1); - orc_program_append_2 (p, "mergebw", 1, ORC_VAR_T1, ORC_VAR_T3, ORC_VAR_C3, - ORC_VAR_D1); - orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D1, ORC_VAR_T2, ORC_VAR_T1, - ORC_VAR_D1); -#endif - - orc_program_compile (p); - c = orc_program_take_code (p); - orc_program_free (p); - } - p_inited = TRUE; - orc_once_mutex_unlock (); + static const orc_uint8 bc = { + 1, 9, 23, 98, 97, 121, 101, 114, 95, 111, 114, 99, 95, 109, 101, 114, + 103, 101, 95, 98, 103, 95, 98, 103, 114, 97, 11, 8, 8, 12, 2, 2, + 12, 2, 2, 12, 2, 2, 12, 2, 2, 12, 2, 2, 12, 2, 2, 14, + 2, 255, 0, 0, 0, 14, 2, 0, 255, 0, 0, 14, 1, 255, 0, 0, + 0, 20, 4, 20, 4, 20, 2, 20, 2, 20, 2, 21, 1, 39, 34, 5, + 9, 21, 1, 39, 35, 4, 8, 79, 36, 7, 21, 1, 39, 35, 35, 36, + 73, 35, 35, 16, 73, 36, 36, 17, 92, 35, 36, 35, 21, 1, 196, 33, + 6, 35, 21, 1, 196, 32, 34, 18, 21, 1, 195, 0, 33, 32, 2, 0, + + }; + p = orc_program_new_from_static_bytecode (bc); + orc_program_set_backup_function (p, _backup_bayer_orc_merge_bg_bgra); +#else + p = orc_program_new (); + orc_program_set_name (p, "bayer_orc_merge_bg_bgra"); + orc_program_set_backup_function (p, _backup_bayer_orc_merge_bg_bgra); + orc_program_add_destination (p, 8, "d1"); + orc_program_add_source (p, 2, "s1"); + orc_program_add_source (p, 2, "s2"); + orc_program_add_source (p, 2, "s3"); + orc_program_add_source (p, 2, "s4"); + orc_program_add_source (p, 2, "s5"); + orc_program_add_source (p, 2, "s6"); + orc_program_add_constant (p, 2, 0x000000ff, "c1"); + orc_program_add_constant (p, 2, 0x0000ff00, "c2"); + 
orc_program_add_constant (p, 1, 0x000000ff, "c3"); + orc_program_add_temporary (p, 4, "t1"); + orc_program_add_temporary (p, 4, "t2"); + orc_program_add_temporary (p, 2, "t3"); + orc_program_add_temporary (p, 2, "t4"); + orc_program_add_temporary (p, 2, "t5"); + + orc_program_append_2 (p, "avgub", 1, ORC_VAR_T3, ORC_VAR_S2, ORC_VAR_S6, + ORC_VAR_D1); + orc_program_append_2 (p, "avgub", 1, ORC_VAR_T4, ORC_VAR_S1, ORC_VAR_S5, + ORC_VAR_D1); + orc_program_append_2 (p, "copyw", 0, ORC_VAR_T5, ORC_VAR_S4, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "avgub", 1, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_T5, + ORC_VAR_D1); + orc_program_append_2 (p, "andw", 0, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_C1, + ORC_VAR_D1); + orc_program_append_2 (p, "andw", 0, ORC_VAR_T5, ORC_VAR_T5, ORC_VAR_C2, + ORC_VAR_D1); + orc_program_append_2 (p, "orw", 0, ORC_VAR_T4, ORC_VAR_T5, ORC_VAR_T4, + ORC_VAR_D1); + orc_program_append_2 (p, "mergebw", 1, ORC_VAR_T2, ORC_VAR_S3, ORC_VAR_T4, + ORC_VAR_D1); + orc_program_append_2 (p, "mergebw", 1, ORC_VAR_T1, ORC_VAR_T3, ORC_VAR_C3, + ORC_VAR_D1); + orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D1, ORC_VAR_T2, ORC_VAR_T1, + ORC_VAR_D1); +#endif + + orc_program_compile (p); + c = orc_program_take_code (p); + orc_program_free (p); + orc_once_leave (&once, c); } ex->arraysORC_VAR_A2 = c; ex->program = 0; @@ -1341,77 +1335,73 @@ const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n) { OrcExecutor _ex, *ex = &_ex; - static volatile int p_inited = 0; - static OrcCode *c = 0; - void (*func) (OrcExecutor *); - - if (!p_inited) { - orc_once_mutex_lock (); - if (!p_inited) { - OrcProgram *p; + static OrcOnce once = ORC_ONCE_INIT; + OrcCode *c; + OrcExecutorFunc func = NULL; + + if (!orc_once_enter (&once, (void **) &c)) { + OrcProgram *p; #if 1 - static const orc_uint8 bc = { - 1, 9, 23, 98, 97, 121, 101, 114, 95, 111, 114, 99, 95, 109, 101, 114, - 103, 101, 95, 103, 114, 95, 98, 103, 114, 97, 11, 8, 8, 12, 2, 2, - 12, 2, 2, 12, 2, 2, 12, 2, 2, 12, 
2, 2, 12, 2, 2, 14, - 2, 0, 255, 0, 0, 14, 2, 255, 0, 0, 0, 14, 1, 255, 0, 0, - 0, 20, 4, 20, 4, 20, 2, 20, 2, 20, 2, 21, 1, 39, 34, 4, - 8, 21, 1, 39, 35, 5, 9, 79, 36, 6, 21, 1, 39, 35, 35, 36, - 73, 35, 35, 16, 73, 36, 36, 17, 92, 35, 36, 35, 21, 1, 196, 33, - 34, 35, 21, 1, 196, 32, 7, 18, 21, 1, 195, 0, 33, 32, 2, 0, - - }; - p = orc_program_new_from_static_bytecode (bc); - orc_program_set_backup_function (p, _backup_bayer_orc_merge_gr_bgra); -#else - p = orc_program_new (); - orc_program_set_name (p, "bayer_orc_merge_gr_bgra"); - orc_program_set_backup_function (p, _backup_bayer_orc_merge_gr_bgra); - orc_program_add_destination (p, 8, "d1"); - orc_program_add_source (p, 2, "s1"); - orc_program_add_source (p, 2, "s2"); - orc_program_add_source (p, 2, "s3"); - orc_program_add_source (p, 2, "s4"); - orc_program_add_source (p, 2, "s5"); - orc_program_add_source (p, 2, "s6"); - orc_program_add_constant (p, 2, 0x0000ff00, "c1"); - orc_program_add_constant (p, 2, 0x000000ff, "c2"); - orc_program_add_constant (p, 1, 0x000000ff, "c3"); - orc_program_add_temporary (p, 4, "t1"); - orc_program_add_temporary (p, 4, "t2"); - orc_program_add_temporary (p, 2, "t3"); - orc_program_add_temporary (p, 2, "t4"); - orc_program_add_temporary (p, 2, "t5"); - - orc_program_append_2 (p, "avgub", 1, ORC_VAR_T3, ORC_VAR_S1, ORC_VAR_S5, - ORC_VAR_D1); - orc_program_append_2 (p, "avgub", 1, ORC_VAR_T4, ORC_VAR_S2, ORC_VAR_S6, - ORC_VAR_D1); - orc_program_append_2 (p, "copyw", 0, ORC_VAR_T5, ORC_VAR_S3, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "avgub", 1, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_T5, - ORC_VAR_D1); - orc_program_append_2 (p, "andw", 0, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_C1, - ORC_VAR_D1); - orc_program_append_2 (p, "andw", 0, ORC_VAR_T5, ORC_VAR_T5, ORC_VAR_C2, - ORC_VAR_D1); - orc_program_append_2 (p, "orw", 0, ORC_VAR_T4, ORC_VAR_T5, ORC_VAR_T4, - ORC_VAR_D1); - orc_program_append_2 (p, "mergebw", 1, ORC_VAR_T2, ORC_VAR_T3, ORC_VAR_T4, - ORC_VAR_D1); - 
orc_program_append_2 (p, "mergebw", 1, ORC_VAR_T1, ORC_VAR_S4, ORC_VAR_C3, - ORC_VAR_D1); - orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D1, ORC_VAR_T2, ORC_VAR_T1, - ORC_VAR_D1); -#endif - - orc_program_compile (p); - c = orc_program_take_code (p); - orc_program_free (p); - } - p_inited = TRUE; - orc_once_mutex_unlock (); + static const orc_uint8 bc = { + 1, 9, 23, 98, 97, 121, 101, 114, 95, 111, 114, 99, 95, 109, 101, 114, + 103, 101, 95, 103, 114, 95, 98, 103, 114, 97, 11, 8, 8, 12, 2, 2, + 12, 2, 2, 12, 2, 2, 12, 2, 2, 12, 2, 2, 12, 2, 2, 14, + 2, 0, 255, 0, 0, 14, 2, 255, 0, 0, 0, 14, 1, 255, 0, 0, + 0, 20, 4, 20, 4, 20, 2, 20, 2, 20, 2, 21, 1, 39, 34, 4, + 8, 21, 1, 39, 35, 5, 9, 79, 36, 6, 21, 1, 39, 35, 35, 36, + 73, 35, 35, 16, 73, 36, 36, 17, 92, 35, 36, 35, 21, 1, 196, 33, + 34, 35, 21, 1, 196, 32, 7, 18, 21, 1, 195, 0, 33, 32, 2, 0, + + }; + p = orc_program_new_from_static_bytecode (bc); + orc_program_set_backup_function (p, _backup_bayer_orc_merge_gr_bgra); +#else + p = orc_program_new (); + orc_program_set_name (p, "bayer_orc_merge_gr_bgra"); + orc_program_set_backup_function (p, _backup_bayer_orc_merge_gr_bgra); + orc_program_add_destination (p, 8, "d1"); + orc_program_add_source (p, 2, "s1"); + orc_program_add_source (p, 2, "s2"); + orc_program_add_source (p, 2, "s3"); + orc_program_add_source (p, 2, "s4"); + orc_program_add_source (p, 2, "s5"); + orc_program_add_source (p, 2, "s6"); + orc_program_add_constant (p, 2, 0x0000ff00, "c1"); + orc_program_add_constant (p, 2, 0x000000ff, "c2"); + orc_program_add_constant (p, 1, 0x000000ff, "c3"); + orc_program_add_temporary (p, 4, "t1"); + orc_program_add_temporary (p, 4, "t2"); + orc_program_add_temporary (p, 2, "t3"); + orc_program_add_temporary (p, 2, "t4"); + orc_program_add_temporary (p, 2, "t5"); + + orc_program_append_2 (p, "avgub", 1, ORC_VAR_T3, ORC_VAR_S1, ORC_VAR_S5, + ORC_VAR_D1); + orc_program_append_2 (p, "avgub", 1, ORC_VAR_T4, ORC_VAR_S2, ORC_VAR_S6, + ORC_VAR_D1); + orc_program_append_2 
(p, "copyw", 0, ORC_VAR_T5, ORC_VAR_S3, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "avgub", 1, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_T5, + ORC_VAR_D1); + orc_program_append_2 (p, "andw", 0, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_C1, + ORC_VAR_D1); + orc_program_append_2 (p, "andw", 0, ORC_VAR_T5, ORC_VAR_T5, ORC_VAR_C2, + ORC_VAR_D1); + orc_program_append_2 (p, "orw", 0, ORC_VAR_T4, ORC_VAR_T5, ORC_VAR_T4, + ORC_VAR_D1); + orc_program_append_2 (p, "mergebw", 1, ORC_VAR_T2, ORC_VAR_T3, ORC_VAR_T4, + ORC_VAR_D1); + orc_program_append_2 (p, "mergebw", 1, ORC_VAR_T1, ORC_VAR_S4, ORC_VAR_C3, + ORC_VAR_D1); + orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D1, ORC_VAR_T2, ORC_VAR_T1, + ORC_VAR_D1); +#endif + + orc_program_compile (p); + c = orc_program_take_code (p); + orc_program_free (p); + orc_once_leave (&once, c); } ex->arraysORC_VAR_A2 = c; ex->program = 0; @@ -1713,77 +1703,73 @@ const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n) { OrcExecutor _ex, *ex = &_ex; - static volatile int p_inited = 0; - static OrcCode *c = 0; - void (*func) (OrcExecutor *); - - if (!p_inited) { - orc_once_mutex_lock (); - if (!p_inited) { - OrcProgram *p; + static OrcOnce once = ORC_ONCE_INIT; + OrcCode *c; + OrcExecutorFunc func = NULL; + + if (!orc_once_enter (&once, (void **) &c)) { + OrcProgram *p; #if 1 - static const orc_uint8 bc = { - 1, 9, 23, 98, 97, 121, 101, 114, 95, 111, 114, 99, 95, 109, 101, 114, - 103, 101, 95, 98, 103, 95, 97, 98, 103, 114, 11, 8, 8, 12, 2, 2, - 12, 2, 2, 12, 2, 2, 12, 2, 2, 12, 2, 2, 12, 2, 2, 14, - 2, 255, 0, 0, 0, 14, 2, 0, 255, 0, 0, 14, 1, 255, 0, 0, - 0, 20, 4, 20, 4, 20, 2, 20, 2, 20, 2, 21, 1, 39, 34, 5, - 9, 21, 1, 39, 35, 4, 8, 79, 36, 7, 21, 1, 39, 35, 35, 36, - 73, 35, 35, 16, 73, 36, 36, 17, 92, 35, 36, 35, 21, 1, 196, 32, - 18, 6, 21, 1, 196, 33, 35, 34, 21, 1, 195, 0, 32, 33, 2, 0, - - }; - p = orc_program_new_from_static_bytecode (bc); - orc_program_set_backup_function (p, _backup_bayer_orc_merge_bg_abgr); -#else - p = 
orc_program_new (); - orc_program_set_name (p, "bayer_orc_merge_bg_abgr"); - orc_program_set_backup_function (p, _backup_bayer_orc_merge_bg_abgr); - orc_program_add_destination (p, 8, "d1"); - orc_program_add_source (p, 2, "s1"); - orc_program_add_source (p, 2, "s2"); - orc_program_add_source (p, 2, "s3"); - orc_program_add_source (p, 2, "s4"); - orc_program_add_source (p, 2, "s5"); - orc_program_add_source (p, 2, "s6"); - orc_program_add_constant (p, 2, 0x000000ff, "c1"); - orc_program_add_constant (p, 2, 0x0000ff00, "c2"); - orc_program_add_constant (p, 1, 0x000000ff, "c3"); - orc_program_add_temporary (p, 4, "t1"); - orc_program_add_temporary (p, 4, "t2"); - orc_program_add_temporary (p, 2, "t3"); - orc_program_add_temporary (p, 2, "t4"); - orc_program_add_temporary (p, 2, "t5"); - - orc_program_append_2 (p, "avgub", 1, ORC_VAR_T3, ORC_VAR_S2, ORC_VAR_S6, - ORC_VAR_D1); - orc_program_append_2 (p, "avgub", 1, ORC_VAR_T4, ORC_VAR_S1, ORC_VAR_S5, - ORC_VAR_D1); - orc_program_append_2 (p, "copyw", 0, ORC_VAR_T5, ORC_VAR_S4, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "avgub", 1, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_T5, - ORC_VAR_D1); - orc_program_append_2 (p, "andw", 0, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_C1, - ORC_VAR_D1); - orc_program_append_2 (p, "andw", 0, ORC_VAR_T5, ORC_VAR_T5, ORC_VAR_C2, - ORC_VAR_D1); - orc_program_append_2 (p, "orw", 0, ORC_VAR_T4, ORC_VAR_T5, ORC_VAR_T4, - ORC_VAR_D1); - orc_program_append_2 (p, "mergebw", 1, ORC_VAR_T1, ORC_VAR_C3, ORC_VAR_S3, - ORC_VAR_D1); - orc_program_append_2 (p, "mergebw", 1, ORC_VAR_T2, ORC_VAR_T4, ORC_VAR_T3, - ORC_VAR_D1); - orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D1, ORC_VAR_T1, ORC_VAR_T2, - ORC_VAR_D1); -#endif - - orc_program_compile (p); - c = orc_program_take_code (p); - orc_program_free (p); - } - p_inited = TRUE; - orc_once_mutex_unlock (); + static const orc_uint8 bc = { + 1, 9, 23, 98, 97, 121, 101, 114, 95, 111, 114, 99, 95, 109, 101, 114, + 103, 101, 95, 98, 103, 95, 97, 98, 103, 114, 11, 
8, 8, 12, 2, 2, + 12, 2, 2, 12, 2, 2, 12, 2, 2, 12, 2, 2, 12, 2, 2, 14, + 2, 255, 0, 0, 0, 14, 2, 0, 255, 0, 0, 14, 1, 255, 0, 0, + 0, 20, 4, 20, 4, 20, 2, 20, 2, 20, 2, 21, 1, 39, 34, 5, + 9, 21, 1, 39, 35, 4, 8, 79, 36, 7, 21, 1, 39, 35, 35, 36, + 73, 35, 35, 16, 73, 36, 36, 17, 92, 35, 36, 35, 21, 1, 196, 32, + 18, 6, 21, 1, 196, 33, 35, 34, 21, 1, 195, 0, 32, 33, 2, 0, + + }; + p = orc_program_new_from_static_bytecode (bc); + orc_program_set_backup_function (p, _backup_bayer_orc_merge_bg_abgr); +#else + p = orc_program_new (); + orc_program_set_name (p, "bayer_orc_merge_bg_abgr"); + orc_program_set_backup_function (p, _backup_bayer_orc_merge_bg_abgr); + orc_program_add_destination (p, 8, "d1"); + orc_program_add_source (p, 2, "s1"); + orc_program_add_source (p, 2, "s2"); + orc_program_add_source (p, 2, "s3"); + orc_program_add_source (p, 2, "s4"); + orc_program_add_source (p, 2, "s5"); + orc_program_add_source (p, 2, "s6"); + orc_program_add_constant (p, 2, 0x000000ff, "c1"); + orc_program_add_constant (p, 2, 0x0000ff00, "c2"); + orc_program_add_constant (p, 1, 0x000000ff, "c3"); + orc_program_add_temporary (p, 4, "t1"); + orc_program_add_temporary (p, 4, "t2"); + orc_program_add_temporary (p, 2, "t3"); + orc_program_add_temporary (p, 2, "t4"); + orc_program_add_temporary (p, 2, "t5"); + + orc_program_append_2 (p, "avgub", 1, ORC_VAR_T3, ORC_VAR_S2, ORC_VAR_S6, + ORC_VAR_D1); + orc_program_append_2 (p, "avgub", 1, ORC_VAR_T4, ORC_VAR_S1, ORC_VAR_S5, + ORC_VAR_D1); + orc_program_append_2 (p, "copyw", 0, ORC_VAR_T5, ORC_VAR_S4, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "avgub", 1, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_T5, + ORC_VAR_D1); + orc_program_append_2 (p, "andw", 0, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_C1, + ORC_VAR_D1); + orc_program_append_2 (p, "andw", 0, ORC_VAR_T5, ORC_VAR_T5, ORC_VAR_C2, + ORC_VAR_D1); + orc_program_append_2 (p, "orw", 0, ORC_VAR_T4, ORC_VAR_T5, ORC_VAR_T4, + ORC_VAR_D1); + orc_program_append_2 (p, "mergebw", 1, ORC_VAR_T1, 
ORC_VAR_C3, ORC_VAR_S3, + ORC_VAR_D1); + orc_program_append_2 (p, "mergebw", 1, ORC_VAR_T2, ORC_VAR_T4, ORC_VAR_T3, + ORC_VAR_D1); + orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D1, ORC_VAR_T1, ORC_VAR_T2, + ORC_VAR_D1); +#endif + + orc_program_compile (p); + c = orc_program_take_code (p); + orc_program_free (p); + orc_once_leave (&once, c); } ex->arraysORC_VAR_A2 = c; ex->program = 0; @@ -2085,77 +2071,73 @@ const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n) { OrcExecutor _ex, *ex = &_ex; - static volatile int p_inited = 0; - static OrcCode *c = 0; - void (*func) (OrcExecutor *); - - if (!p_inited) { - orc_once_mutex_lock (); - if (!p_inited) { - OrcProgram *p; + static OrcOnce once = ORC_ONCE_INIT; + OrcCode *c; + OrcExecutorFunc func = NULL; + + if (!orc_once_enter (&once, (void **) &c)) { + OrcProgram *p; #if 1 - static const orc_uint8 bc = { - 1, 9, 23, 98, 97, 121, 101, 114, 95, 111, 114, 99, 95, 109, 101, 114, - 103, 101, 95, 103, 114, 95, 97, 98, 103, 114, 11, 8, 8, 12, 2, 2, - 12, 2, 2, 12, 2, 2, 12, 2, 2, 12, 2, 2, 12, 2, 2, 14, - 2, 0, 255, 0, 0, 14, 2, 255, 0, 0, 0, 14, 1, 255, 0, 0, - 0, 20, 4, 20, 4, 20, 2, 20, 2, 20, 2, 21, 1, 39, 34, 4, - 8, 21, 1, 39, 35, 5, 9, 79, 36, 6, 21, 1, 39, 35, 35, 36, - 73, 35, 35, 16, 73, 36, 36, 17, 92, 35, 36, 35, 21, 1, 196, 32, - 18, 34, 21, 1, 196, 33, 35, 7, 21, 1, 195, 0, 32, 33, 2, 0, - - }; - p = orc_program_new_from_static_bytecode (bc); - orc_program_set_backup_function (p, _backup_bayer_orc_merge_gr_abgr); -#else - p = orc_program_new (); - orc_program_set_name (p, "bayer_orc_merge_gr_abgr"); - orc_program_set_backup_function (p, _backup_bayer_orc_merge_gr_abgr); - orc_program_add_destination (p, 8, "d1"); - orc_program_add_source (p, 2, "s1"); - orc_program_add_source (p, 2, "s2"); - orc_program_add_source (p, 2, "s3"); - orc_program_add_source (p, 2, "s4"); - orc_program_add_source (p, 2, "s5"); - orc_program_add_source (p, 2, "s6"); - orc_program_add_constant (p, 2, 0x0000ff00, "c1"); 
- orc_program_add_constant (p, 2, 0x000000ff, "c2"); - orc_program_add_constant (p, 1, 0x000000ff, "c3"); - orc_program_add_temporary (p, 4, "t1"); - orc_program_add_temporary (p, 4, "t2"); - orc_program_add_temporary (p, 2, "t3"); - orc_program_add_temporary (p, 2, "t4"); - orc_program_add_temporary (p, 2, "t5"); - - orc_program_append_2 (p, "avgub", 1, ORC_VAR_T3, ORC_VAR_S1, ORC_VAR_S5, - ORC_VAR_D1); - orc_program_append_2 (p, "avgub", 1, ORC_VAR_T4, ORC_VAR_S2, ORC_VAR_S6, - ORC_VAR_D1); - orc_program_append_2 (p, "copyw", 0, ORC_VAR_T5, ORC_VAR_S3, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "avgub", 1, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_T5, - ORC_VAR_D1); - orc_program_append_2 (p, "andw", 0, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_C1, - ORC_VAR_D1); - orc_program_append_2 (p, "andw", 0, ORC_VAR_T5, ORC_VAR_T5, ORC_VAR_C2, - ORC_VAR_D1); - orc_program_append_2 (p, "orw", 0, ORC_VAR_T4, ORC_VAR_T5, ORC_VAR_T4, - ORC_VAR_D1); - orc_program_append_2 (p, "mergebw", 1, ORC_VAR_T1, ORC_VAR_C3, ORC_VAR_T3, - ORC_VAR_D1); - orc_program_append_2 (p, "mergebw", 1, ORC_VAR_T2, ORC_VAR_T4, ORC_VAR_S4, - ORC_VAR_D1); - orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D1, ORC_VAR_T1, ORC_VAR_T2, - ORC_VAR_D1); -#endif - - orc_program_compile (p); - c = orc_program_take_code (p); - orc_program_free (p); - } - p_inited = TRUE; - orc_once_mutex_unlock (); + static const orc_uint8 bc = { + 1, 9, 23, 98, 97, 121, 101, 114, 95, 111, 114, 99, 95, 109, 101, 114, + 103, 101, 95, 103, 114, 95, 97, 98, 103, 114, 11, 8, 8, 12, 2, 2, + 12, 2, 2, 12, 2, 2, 12, 2, 2, 12, 2, 2, 12, 2, 2, 14, + 2, 0, 255, 0, 0, 14, 2, 255, 0, 0, 0, 14, 1, 255, 0, 0, + 0, 20, 4, 20, 4, 20, 2, 20, 2, 20, 2, 21, 1, 39, 34, 4, + 8, 21, 1, 39, 35, 5, 9, 79, 36, 6, 21, 1, 39, 35, 35, 36, + 73, 35, 35, 16, 73, 36, 36, 17, 92, 35, 36, 35, 21, 1, 196, 32, + 18, 34, 21, 1, 196, 33, 35, 7, 21, 1, 195, 0, 32, 33, 2, 0, + + }; + p = orc_program_new_from_static_bytecode (bc); + orc_program_set_backup_function (p, 
_backup_bayer_orc_merge_gr_abgr); +#else + p = orc_program_new (); + orc_program_set_name (p, "bayer_orc_merge_gr_abgr"); + orc_program_set_backup_function (p, _backup_bayer_orc_merge_gr_abgr); + orc_program_add_destination (p, 8, "d1"); + orc_program_add_source (p, 2, "s1"); + orc_program_add_source (p, 2, "s2"); + orc_program_add_source (p, 2, "s3"); + orc_program_add_source (p, 2, "s4"); + orc_program_add_source (p, 2, "s5"); + orc_program_add_source (p, 2, "s6"); + orc_program_add_constant (p, 2, 0x0000ff00, "c1"); + orc_program_add_constant (p, 2, 0x000000ff, "c2"); + orc_program_add_constant (p, 1, 0x000000ff, "c3"); + orc_program_add_temporary (p, 4, "t1"); + orc_program_add_temporary (p, 4, "t2"); + orc_program_add_temporary (p, 2, "t3"); + orc_program_add_temporary (p, 2, "t4"); + orc_program_add_temporary (p, 2, "t5"); + + orc_program_append_2 (p, "avgub", 1, ORC_VAR_T3, ORC_VAR_S1, ORC_VAR_S5, + ORC_VAR_D1); + orc_program_append_2 (p, "avgub", 1, ORC_VAR_T4, ORC_VAR_S2, ORC_VAR_S6, + ORC_VAR_D1); + orc_program_append_2 (p, "copyw", 0, ORC_VAR_T5, ORC_VAR_S3, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "avgub", 1, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_T5, + ORC_VAR_D1); + orc_program_append_2 (p, "andw", 0, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_C1, + ORC_VAR_D1); + orc_program_append_2 (p, "andw", 0, ORC_VAR_T5, ORC_VAR_T5, ORC_VAR_C2, + ORC_VAR_D1); + orc_program_append_2 (p, "orw", 0, ORC_VAR_T4, ORC_VAR_T5, ORC_VAR_T4, + ORC_VAR_D1); + orc_program_append_2 (p, "mergebw", 1, ORC_VAR_T1, ORC_VAR_C3, ORC_VAR_T3, + ORC_VAR_D1); + orc_program_append_2 (p, "mergebw", 1, ORC_VAR_T2, ORC_VAR_T4, ORC_VAR_S4, + ORC_VAR_D1); + orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D1, ORC_VAR_T1, ORC_VAR_T2, + ORC_VAR_D1); +#endif + + orc_program_compile (p); + c = orc_program_take_code (p); + orc_program_free (p); + orc_once_leave (&once, c); } ex->arraysORC_VAR_A2 = c; ex->program = 0; @@ -2457,77 +2439,73 @@ const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT 
s6, int n) { OrcExecutor _ex, *ex = &_ex; - static volatile int p_inited = 0; - static OrcCode *c = 0; - void (*func) (OrcExecutor *); - - if (!p_inited) { - orc_once_mutex_lock (); - if (!p_inited) { - OrcProgram *p; + static OrcOnce once = ORC_ONCE_INIT; + OrcCode *c; + OrcExecutorFunc func = NULL; + + if (!orc_once_enter (&once, (void **) &c)) { + OrcProgram *p; #if 1 - static const orc_uint8 bc = { - 1, 9, 23, 98, 97, 121, 101, 114, 95, 111, 114, 99, 95, 109, 101, 114, - 103, 101, 95, 98, 103, 95, 114, 103, 98, 97, 11, 8, 8, 12, 2, 2, - 12, 2, 2, 12, 2, 2, 12, 2, 2, 12, 2, 2, 12, 2, 2, 14, - 2, 255, 0, 0, 0, 14, 2, 0, 255, 0, 0, 14, 1, 255, 0, 0, - 0, 20, 4, 20, 4, 20, 2, 20, 2, 20, 2, 21, 1, 39, 34, 5, - 9, 21, 1, 39, 35, 4, 8, 79, 36, 7, 21, 1, 39, 35, 35, 36, - 73, 35, 35, 16, 73, 36, 36, 17, 92, 35, 36, 35, 21, 1, 196, 32, - 34, 35, 21, 1, 196, 33, 6, 18, 21, 1, 195, 0, 32, 33, 2, 0, - - }; - p = orc_program_new_from_static_bytecode (bc); - orc_program_set_backup_function (p, _backup_bayer_orc_merge_bg_rgba); -#else - p = orc_program_new (); - orc_program_set_name (p, "bayer_orc_merge_bg_rgba"); - orc_program_set_backup_function (p, _backup_bayer_orc_merge_bg_rgba); - orc_program_add_destination (p, 8, "d1"); - orc_program_add_source (p, 2, "s1"); - orc_program_add_source (p, 2, "s2"); - orc_program_add_source (p, 2, "s3"); - orc_program_add_source (p, 2, "s4"); - orc_program_add_source (p, 2, "s5"); - orc_program_add_source (p, 2, "s6"); - orc_program_add_constant (p, 2, 0x000000ff, "c1"); - orc_program_add_constant (p, 2, 0x0000ff00, "c2"); - orc_program_add_constant (p, 1, 0x000000ff, "c3"); - orc_program_add_temporary (p, 4, "t1"); - orc_program_add_temporary (p, 4, "t2"); - orc_program_add_temporary (p, 2, "t3"); - orc_program_add_temporary (p, 2, "t4"); - orc_program_add_temporary (p, 2, "t5"); - - orc_program_append_2 (p, "avgub", 1, ORC_VAR_T3, ORC_VAR_S2, ORC_VAR_S6, - ORC_VAR_D1); - orc_program_append_2 (p, "avgub", 1, ORC_VAR_T4, ORC_VAR_S1, 
ORC_VAR_S5, - ORC_VAR_D1); - orc_program_append_2 (p, "copyw", 0, ORC_VAR_T5, ORC_VAR_S4, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "avgub", 1, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_T5, - ORC_VAR_D1); - orc_program_append_2 (p, "andw", 0, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_C1, - ORC_VAR_D1); - orc_program_append_2 (p, "andw", 0, ORC_VAR_T5, ORC_VAR_T5, ORC_VAR_C2, - ORC_VAR_D1); - orc_program_append_2 (p, "orw", 0, ORC_VAR_T4, ORC_VAR_T5, ORC_VAR_T4, - ORC_VAR_D1); - orc_program_append_2 (p, "mergebw", 1, ORC_VAR_T1, ORC_VAR_T3, ORC_VAR_T4, - ORC_VAR_D1); - orc_program_append_2 (p, "mergebw", 1, ORC_VAR_T2, ORC_VAR_S3, ORC_VAR_C3, - ORC_VAR_D1); - orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D1, ORC_VAR_T1, ORC_VAR_T2, - ORC_VAR_D1); -#endif - - orc_program_compile (p); - c = orc_program_take_code (p); - orc_program_free (p); - } - p_inited = TRUE; - orc_once_mutex_unlock (); + static const orc_uint8 bc = { + 1, 9, 23, 98, 97, 121, 101, 114, 95, 111, 114, 99, 95, 109, 101, 114, + 103, 101, 95, 98, 103, 95, 114, 103, 98, 97, 11, 8, 8, 12, 2, 2, + 12, 2, 2, 12, 2, 2, 12, 2, 2, 12, 2, 2, 12, 2, 2, 14, + 2, 255, 0, 0, 0, 14, 2, 0, 255, 0, 0, 14, 1, 255, 0, 0, + 0, 20, 4, 20, 4, 20, 2, 20, 2, 20, 2, 21, 1, 39, 34, 5, + 9, 21, 1, 39, 35, 4, 8, 79, 36, 7, 21, 1, 39, 35, 35, 36, + 73, 35, 35, 16, 73, 36, 36, 17, 92, 35, 36, 35, 21, 1, 196, 32, + 34, 35, 21, 1, 196, 33, 6, 18, 21, 1, 195, 0, 32, 33, 2, 0, + + }; + p = orc_program_new_from_static_bytecode (bc); + orc_program_set_backup_function (p, _backup_bayer_orc_merge_bg_rgba); +#else + p = orc_program_new (); + orc_program_set_name (p, "bayer_orc_merge_bg_rgba"); + orc_program_set_backup_function (p, _backup_bayer_orc_merge_bg_rgba); + orc_program_add_destination (p, 8, "d1"); + orc_program_add_source (p, 2, "s1"); + orc_program_add_source (p, 2, "s2"); + orc_program_add_source (p, 2, "s3"); + orc_program_add_source (p, 2, "s4"); + orc_program_add_source (p, 2, "s5"); + orc_program_add_source (p, 2, "s6"); + 
orc_program_add_constant (p, 2, 0x000000ff, "c1"); + orc_program_add_constant (p, 2, 0x0000ff00, "c2"); + orc_program_add_constant (p, 1, 0x000000ff, "c3"); + orc_program_add_temporary (p, 4, "t1"); + orc_program_add_temporary (p, 4, "t2"); + orc_program_add_temporary (p, 2, "t3"); + orc_program_add_temporary (p, 2, "t4"); + orc_program_add_temporary (p, 2, "t5"); + + orc_program_append_2 (p, "avgub", 1, ORC_VAR_T3, ORC_VAR_S2, ORC_VAR_S6, + ORC_VAR_D1); + orc_program_append_2 (p, "avgub", 1, ORC_VAR_T4, ORC_VAR_S1, ORC_VAR_S5, + ORC_VAR_D1); + orc_program_append_2 (p, "copyw", 0, ORC_VAR_T5, ORC_VAR_S4, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "avgub", 1, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_T5, + ORC_VAR_D1); + orc_program_append_2 (p, "andw", 0, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_C1, + ORC_VAR_D1); + orc_program_append_2 (p, "andw", 0, ORC_VAR_T5, ORC_VAR_T5, ORC_VAR_C2, + ORC_VAR_D1); + orc_program_append_2 (p, "orw", 0, ORC_VAR_T4, ORC_VAR_T5, ORC_VAR_T4, + ORC_VAR_D1); + orc_program_append_2 (p, "mergebw", 1, ORC_VAR_T1, ORC_VAR_T3, ORC_VAR_T4, + ORC_VAR_D1); + orc_program_append_2 (p, "mergebw", 1, ORC_VAR_T2, ORC_VAR_S3, ORC_VAR_C3, + ORC_VAR_D1); + orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D1, ORC_VAR_T1, ORC_VAR_T2, + ORC_VAR_D1); +#endif + + orc_program_compile (p); + c = orc_program_take_code (p); + orc_program_free (p); + orc_once_leave (&once, c); } ex->arraysORC_VAR_A2 = c; ex->program = 0; @@ -2829,77 +2807,73 @@ const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n) { OrcExecutor _ex, *ex = &_ex; - static volatile int p_inited = 0; - static OrcCode *c = 0; - void (*func) (OrcExecutor *); - - if (!p_inited) { - orc_once_mutex_lock (); - if (!p_inited) { - OrcProgram *p; + static OrcOnce once = ORC_ONCE_INIT; + OrcCode *c; + OrcExecutorFunc func = NULL; + + if (!orc_once_enter (&once, (void **) &c)) { + OrcProgram *p; #if 1 - static const orc_uint8 bc = { - 1, 9, 23, 98, 97, 121, 101, 114, 95, 111, 114, 99, 95, 109, 101, 
114, - 103, 101, 95, 103, 114, 95, 114, 103, 98, 97, 11, 8, 8, 12, 2, 2, - 12, 2, 2, 12, 2, 2, 12, 2, 2, 12, 2, 2, 12, 2, 2, 14, - 2, 0, 255, 0, 0, 14, 2, 255, 0, 0, 0, 14, 1, 255, 0, 0, - 0, 20, 4, 20, 4, 20, 2, 20, 2, 20, 2, 21, 1, 39, 34, 4, - 8, 21, 1, 39, 35, 5, 9, 79, 36, 6, 21, 1, 39, 35, 35, 36, - 73, 35, 35, 16, 73, 36, 36, 17, 92, 35, 36, 35, 21, 1, 196, 32, - 7, 35, 21, 1, 196, 33, 34, 18, 21, 1, 195, 0, 32, 33, 2, 0, - - }; - p = orc_program_new_from_static_bytecode (bc); - orc_program_set_backup_function (p, _backup_bayer_orc_merge_gr_rgba); -#else - p = orc_program_new (); - orc_program_set_name (p, "bayer_orc_merge_gr_rgba"); - orc_program_set_backup_function (p, _backup_bayer_orc_merge_gr_rgba); - orc_program_add_destination (p, 8, "d1"); - orc_program_add_source (p, 2, "s1"); - orc_program_add_source (p, 2, "s2"); - orc_program_add_source (p, 2, "s3"); - orc_program_add_source (p, 2, "s4"); - orc_program_add_source (p, 2, "s5"); - orc_program_add_source (p, 2, "s6"); - orc_program_add_constant (p, 2, 0x0000ff00, "c1"); - orc_program_add_constant (p, 2, 0x000000ff, "c2"); - orc_program_add_constant (p, 1, 0x000000ff, "c3"); - orc_program_add_temporary (p, 4, "t1"); - orc_program_add_temporary (p, 4, "t2"); - orc_program_add_temporary (p, 2, "t3"); - orc_program_add_temporary (p, 2, "t4"); - orc_program_add_temporary (p, 2, "t5"); - - orc_program_append_2 (p, "avgub", 1, ORC_VAR_T3, ORC_VAR_S1, ORC_VAR_S5, - ORC_VAR_D1); - orc_program_append_2 (p, "avgub", 1, ORC_VAR_T4, ORC_VAR_S2, ORC_VAR_S6, - ORC_VAR_D1); - orc_program_append_2 (p, "copyw", 0, ORC_VAR_T5, ORC_VAR_S3, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "avgub", 1, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_T5, - ORC_VAR_D1); - orc_program_append_2 (p, "andw", 0, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_C1, - ORC_VAR_D1); - orc_program_append_2 (p, "andw", 0, ORC_VAR_T5, ORC_VAR_T5, ORC_VAR_C2, - ORC_VAR_D1); - orc_program_append_2 (p, "orw", 0, ORC_VAR_T4, ORC_VAR_T5, ORC_VAR_T4, - ORC_VAR_D1); - 
orc_program_append_2 (p, "mergebw", 1, ORC_VAR_T1, ORC_VAR_S4, ORC_VAR_T4, - ORC_VAR_D1); - orc_program_append_2 (p, "mergebw", 1, ORC_VAR_T2, ORC_VAR_T3, ORC_VAR_C3, - ORC_VAR_D1); - orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D1, ORC_VAR_T1, ORC_VAR_T2, - ORC_VAR_D1); -#endif - - orc_program_compile (p); - c = orc_program_take_code (p); - orc_program_free (p); - } - p_inited = TRUE; - orc_once_mutex_unlock (); + static const orc_uint8 bc = { + 1, 9, 23, 98, 97, 121, 101, 114, 95, 111, 114, 99, 95, 109, 101, 114, + 103, 101, 95, 103, 114, 95, 114, 103, 98, 97, 11, 8, 8, 12, 2, 2, + 12, 2, 2, 12, 2, 2, 12, 2, 2, 12, 2, 2, 12, 2, 2, 14, + 2, 0, 255, 0, 0, 14, 2, 255, 0, 0, 0, 14, 1, 255, 0, 0, + 0, 20, 4, 20, 4, 20, 2, 20, 2, 20, 2, 21, 1, 39, 34, 4, + 8, 21, 1, 39, 35, 5, 9, 79, 36, 6, 21, 1, 39, 35, 35, 36, + 73, 35, 35, 16, 73, 36, 36, 17, 92, 35, 36, 35, 21, 1, 196, 32, + 7, 35, 21, 1, 196, 33, 34, 18, 21, 1, 195, 0, 32, 33, 2, 0, + + }; + p = orc_program_new_from_static_bytecode (bc); + orc_program_set_backup_function (p, _backup_bayer_orc_merge_gr_rgba); +#else + p = orc_program_new (); + orc_program_set_name (p, "bayer_orc_merge_gr_rgba"); + orc_program_set_backup_function (p, _backup_bayer_orc_merge_gr_rgba); + orc_program_add_destination (p, 8, "d1"); + orc_program_add_source (p, 2, "s1"); + orc_program_add_source (p, 2, "s2"); + orc_program_add_source (p, 2, "s3"); + orc_program_add_source (p, 2, "s4"); + orc_program_add_source (p, 2, "s5"); + orc_program_add_source (p, 2, "s6"); + orc_program_add_constant (p, 2, 0x0000ff00, "c1"); + orc_program_add_constant (p, 2, 0x000000ff, "c2"); + orc_program_add_constant (p, 1, 0x000000ff, "c3"); + orc_program_add_temporary (p, 4, "t1"); + orc_program_add_temporary (p, 4, "t2"); + orc_program_add_temporary (p, 2, "t3"); + orc_program_add_temporary (p, 2, "t4"); + orc_program_add_temporary (p, 2, "t5"); + + orc_program_append_2 (p, "avgub", 1, ORC_VAR_T3, ORC_VAR_S1, ORC_VAR_S5, + ORC_VAR_D1); + 
orc_program_append_2 (p, "avgub", 1, ORC_VAR_T4, ORC_VAR_S2, ORC_VAR_S6, + ORC_VAR_D1); + orc_program_append_2 (p, "copyw", 0, ORC_VAR_T5, ORC_VAR_S3, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "avgub", 1, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_T5, + ORC_VAR_D1); + orc_program_append_2 (p, "andw", 0, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_C1, + ORC_VAR_D1); + orc_program_append_2 (p, "andw", 0, ORC_VAR_T5, ORC_VAR_T5, ORC_VAR_C2, + ORC_VAR_D1); + orc_program_append_2 (p, "orw", 0, ORC_VAR_T4, ORC_VAR_T5, ORC_VAR_T4, + ORC_VAR_D1); + orc_program_append_2 (p, "mergebw", 1, ORC_VAR_T1, ORC_VAR_S4, ORC_VAR_T4, + ORC_VAR_D1); + orc_program_append_2 (p, "mergebw", 1, ORC_VAR_T2, ORC_VAR_T3, ORC_VAR_C3, + ORC_VAR_D1); + orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D1, ORC_VAR_T1, ORC_VAR_T2, + ORC_VAR_D1); +#endif + + orc_program_compile (p); + c = orc_program_take_code (p); + orc_program_free (p); + orc_once_leave (&once, c); } ex->arraysORC_VAR_A2 = c; ex->program = 0; @@ -3201,77 +3175,73 @@ const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n) { OrcExecutor _ex, *ex = &_ex; - static volatile int p_inited = 0; - static OrcCode *c = 0; - void (*func) (OrcExecutor *); - - if (!p_inited) { - orc_once_mutex_lock (); - if (!p_inited) { - OrcProgram *p; + static OrcOnce once = ORC_ONCE_INIT; + OrcCode *c; + OrcExecutorFunc func = NULL; + + if (!orc_once_enter (&once, (void **) &c)) { + OrcProgram *p; #if 1 - static const orc_uint8 bc = { - 1, 9, 23, 98, 97, 121, 101, 114, 95, 111, 114, 99, 95, 109, 101, 114, - 103, 101, 95, 98, 103, 95, 97, 114, 103, 98, 11, 8, 8, 12, 2, 2, - 12, 2, 2, 12, 2, 2, 12, 2, 2, 12, 2, 2, 12, 2, 2, 14, - 2, 255, 0, 0, 0, 14, 2, 0, 255, 0, 0, 14, 1, 255, 0, 0, - 0, 20, 4, 20, 4, 20, 2, 20, 2, 20, 2, 21, 1, 39, 34, 5, - 9, 21, 1, 39, 35, 4, 8, 79, 36, 7, 21, 1, 39, 35, 35, 36, - 73, 35, 35, 16, 73, 36, 36, 17, 92, 35, 36, 35, 21, 1, 196, 32, - 18, 34, 21, 1, 196, 33, 35, 6, 21, 1, 195, 0, 32, 33, 2, 0, - - }; - p = 
orc_program_new_from_static_bytecode (bc); - orc_program_set_backup_function (p, _backup_bayer_orc_merge_bg_argb); -#else - p = orc_program_new (); - orc_program_set_name (p, "bayer_orc_merge_bg_argb"); - orc_program_set_backup_function (p, _backup_bayer_orc_merge_bg_argb); - orc_program_add_destination (p, 8, "d1"); - orc_program_add_source (p, 2, "s1"); - orc_program_add_source (p, 2, "s2"); - orc_program_add_source (p, 2, "s3"); - orc_program_add_source (p, 2, "s4"); - orc_program_add_source (p, 2, "s5"); - orc_program_add_source (p, 2, "s6"); - orc_program_add_constant (p, 2, 0x000000ff, "c1"); - orc_program_add_constant (p, 2, 0x0000ff00, "c2"); - orc_program_add_constant (p, 1, 0x000000ff, "c3"); - orc_program_add_temporary (p, 4, "t1"); - orc_program_add_temporary (p, 4, "t2"); - orc_program_add_temporary (p, 2, "t3"); - orc_program_add_temporary (p, 2, "t4"); - orc_program_add_temporary (p, 2, "t5"); - - orc_program_append_2 (p, "avgub", 1, ORC_VAR_T3, ORC_VAR_S2, ORC_VAR_S6, - ORC_VAR_D1); - orc_program_append_2 (p, "avgub", 1, ORC_VAR_T4, ORC_VAR_S1, ORC_VAR_S5, - ORC_VAR_D1); - orc_program_append_2 (p, "copyw", 0, ORC_VAR_T5, ORC_VAR_S4, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "avgub", 1, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_T5, - ORC_VAR_D1); - orc_program_append_2 (p, "andw", 0, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_C1, - ORC_VAR_D1); - orc_program_append_2 (p, "andw", 0, ORC_VAR_T5, ORC_VAR_T5, ORC_VAR_C2, - ORC_VAR_D1); - orc_program_append_2 (p, "orw", 0, ORC_VAR_T4, ORC_VAR_T5, ORC_VAR_T4, - ORC_VAR_D1); - orc_program_append_2 (p, "mergebw", 1, ORC_VAR_T1, ORC_VAR_C3, ORC_VAR_T3, - ORC_VAR_D1); - orc_program_append_2 (p, "mergebw", 1, ORC_VAR_T2, ORC_VAR_T4, ORC_VAR_S3, - ORC_VAR_D1); - orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D1, ORC_VAR_T1, ORC_VAR_T2, - ORC_VAR_D1); -#endif - - orc_program_compile (p); - c = orc_program_take_code (p); - orc_program_free (p); - } - p_inited = TRUE; - orc_once_mutex_unlock (); + static const orc_uint8 bc 
= { + 1, 9, 23, 98, 97, 121, 101, 114, 95, 111, 114, 99, 95, 109, 101, 114, + 103, 101, 95, 98, 103, 95, 97, 114, 103, 98, 11, 8, 8, 12, 2, 2, + 12, 2, 2, 12, 2, 2, 12, 2, 2, 12, 2, 2, 12, 2, 2, 14, + 2, 255, 0, 0, 0, 14, 2, 0, 255, 0, 0, 14, 1, 255, 0, 0, + 0, 20, 4, 20, 4, 20, 2, 20, 2, 20, 2, 21, 1, 39, 34, 5, + 9, 21, 1, 39, 35, 4, 8, 79, 36, 7, 21, 1, 39, 35, 35, 36, + 73, 35, 35, 16, 73, 36, 36, 17, 92, 35, 36, 35, 21, 1, 196, 32, + 18, 34, 21, 1, 196, 33, 35, 6, 21, 1, 195, 0, 32, 33, 2, 0, + + }; + p = orc_program_new_from_static_bytecode (bc); + orc_program_set_backup_function (p, _backup_bayer_orc_merge_bg_argb); +#else + p = orc_program_new (); + orc_program_set_name (p, "bayer_orc_merge_bg_argb"); + orc_program_set_backup_function (p, _backup_bayer_orc_merge_bg_argb); + orc_program_add_destination (p, 8, "d1"); + orc_program_add_source (p, 2, "s1"); + orc_program_add_source (p, 2, "s2"); + orc_program_add_source (p, 2, "s3"); + orc_program_add_source (p, 2, "s4"); + orc_program_add_source (p, 2, "s5"); + orc_program_add_source (p, 2, "s6"); + orc_program_add_constant (p, 2, 0x000000ff, "c1"); + orc_program_add_constant (p, 2, 0x0000ff00, "c2"); + orc_program_add_constant (p, 1, 0x000000ff, "c3"); + orc_program_add_temporary (p, 4, "t1"); + orc_program_add_temporary (p, 4, "t2"); + orc_program_add_temporary (p, 2, "t3"); + orc_program_add_temporary (p, 2, "t4"); + orc_program_add_temporary (p, 2, "t5"); + + orc_program_append_2 (p, "avgub", 1, ORC_VAR_T3, ORC_VAR_S2, ORC_VAR_S6, + ORC_VAR_D1); + orc_program_append_2 (p, "avgub", 1, ORC_VAR_T4, ORC_VAR_S1, ORC_VAR_S5, + ORC_VAR_D1); + orc_program_append_2 (p, "copyw", 0, ORC_VAR_T5, ORC_VAR_S4, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "avgub", 1, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_T5, + ORC_VAR_D1); + orc_program_append_2 (p, "andw", 0, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_C1, + ORC_VAR_D1); + orc_program_append_2 (p, "andw", 0, ORC_VAR_T5, ORC_VAR_T5, ORC_VAR_C2, + ORC_VAR_D1); + 
orc_program_append_2 (p, "orw", 0, ORC_VAR_T4, ORC_VAR_T5, ORC_VAR_T4, + ORC_VAR_D1); + orc_program_append_2 (p, "mergebw", 1, ORC_VAR_T1, ORC_VAR_C3, ORC_VAR_T3, + ORC_VAR_D1); + orc_program_append_2 (p, "mergebw", 1, ORC_VAR_T2, ORC_VAR_T4, ORC_VAR_S3, + ORC_VAR_D1); + orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D1, ORC_VAR_T1, ORC_VAR_T2, + ORC_VAR_D1); +#endif + + orc_program_compile (p); + c = orc_program_take_code (p); + orc_program_free (p); + orc_once_leave (&once, c); } ex->arraysORC_VAR_A2 = c; ex->program = 0; @@ -3573,77 +3543,73 @@ const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n) { OrcExecutor _ex, *ex = &_ex; - static volatile int p_inited = 0; - static OrcCode *c = 0; - void (*func) (OrcExecutor *); - - if (!p_inited) { - orc_once_mutex_lock (); - if (!p_inited) { - OrcProgram *p; + static OrcOnce once = ORC_ONCE_INIT; + OrcCode *c; + OrcExecutorFunc func = NULL; + + if (!orc_once_enter (&once, (void **) &c)) { + OrcProgram *p; #if 1 - static const orc_uint8 bc = { - 1, 9, 23, 98, 97, 121, 101, 114, 95, 111, 114, 99, 95, 109, 101, 114, - 103, 101, 95, 103, 114, 95, 97, 114, 103, 98, 11, 8, 8, 12, 2, 2, - 12, 2, 2, 12, 2, 2, 12, 2, 2, 12, 2, 2, 12, 2, 2, 14, - 2, 0, 255, 0, 0, 14, 2, 255, 0, 0, 0, 14, 1, 255, 0, 0, - 0, 20, 4, 20, 4, 20, 2, 20, 2, 20, 2, 21, 1, 39, 34, 4, - 8, 21, 1, 39, 35, 5, 9, 79, 36, 6, 21, 1, 39, 35, 35, 36, - 73, 35, 35, 16, 73, 36, 36, 17, 92, 35, 36, 35, 21, 1, 196, 32, - 18, 7, 21, 1, 196, 33, 35, 34, 21, 1, 195, 0, 32, 33, 2, 0, - - }; - p = orc_program_new_from_static_bytecode (bc); - orc_program_set_backup_function (p, _backup_bayer_orc_merge_gr_argb); -#else - p = orc_program_new (); - orc_program_set_name (p, "bayer_orc_merge_gr_argb"); - orc_program_set_backup_function (p, _backup_bayer_orc_merge_gr_argb); - orc_program_add_destination (p, 8, "d1"); - orc_program_add_source (p, 2, "s1"); - orc_program_add_source (p, 2, "s2"); - orc_program_add_source (p, 2, "s3"); - orc_program_add_source (p, 
2, "s4"); - orc_program_add_source (p, 2, "s5"); - orc_program_add_source (p, 2, "s6"); - orc_program_add_constant (p, 2, 0x0000ff00, "c1"); - orc_program_add_constant (p, 2, 0x000000ff, "c2"); - orc_program_add_constant (p, 1, 0x000000ff, "c3"); - orc_program_add_temporary (p, 4, "t1"); - orc_program_add_temporary (p, 4, "t2"); - orc_program_add_temporary (p, 2, "t3"); - orc_program_add_temporary (p, 2, "t4"); - orc_program_add_temporary (p, 2, "t5"); - - orc_program_append_2 (p, "avgub", 1, ORC_VAR_T3, ORC_VAR_S1, ORC_VAR_S5, - ORC_VAR_D1); - orc_program_append_2 (p, "avgub", 1, ORC_VAR_T4, ORC_VAR_S2, ORC_VAR_S6, - ORC_VAR_D1); - orc_program_append_2 (p, "copyw", 0, ORC_VAR_T5, ORC_VAR_S3, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "avgub", 1, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_T5, - ORC_VAR_D1); - orc_program_append_2 (p, "andw", 0, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_C1, - ORC_VAR_D1); - orc_program_append_2 (p, "andw", 0, ORC_VAR_T5, ORC_VAR_T5, ORC_VAR_C2, - ORC_VAR_D1); - orc_program_append_2 (p, "orw", 0, ORC_VAR_T4, ORC_VAR_T5, ORC_VAR_T4, - ORC_VAR_D1); - orc_program_append_2 (p, "mergebw", 1, ORC_VAR_T1, ORC_VAR_C3, ORC_VAR_S4, - ORC_VAR_D1); - orc_program_append_2 (p, "mergebw", 1, ORC_VAR_T2, ORC_VAR_T4, ORC_VAR_T3, - ORC_VAR_D1); - orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D1, ORC_VAR_T1, ORC_VAR_T2, - ORC_VAR_D1); -#endif - - orc_program_compile (p); - c = orc_program_take_code (p); - orc_program_free (p); - } - p_inited = TRUE; - orc_once_mutex_unlock (); + static const orc_uint8 bc = { + 1, 9, 23, 98, 97, 121, 101, 114, 95, 111, 114, 99, 95, 109, 101, 114, + 103, 101, 95, 103, 114, 95, 97, 114, 103, 98, 11, 8, 8, 12, 2, 2, + 12, 2, 2, 12, 2, 2, 12, 2, 2, 12, 2, 2, 12, 2, 2, 14, + 2, 0, 255, 0, 0, 14, 2, 255, 0, 0, 0, 14, 1, 255, 0, 0, + 0, 20, 4, 20, 4, 20, 2, 20, 2, 20, 2, 21, 1, 39, 34, 4, + 8, 21, 1, 39, 35, 5, 9, 79, 36, 6, 21, 1, 39, 35, 35, 36, + 73, 35, 35, 16, 73, 36, 36, 17, 92, 35, 36, 35, 21, 1, 196, 32, + 18, 7, 21, 1, 196, 
33, 35, 34, 21, 1, 195, 0, 32, 33, 2, 0, + + }; + p = orc_program_new_from_static_bytecode (bc); + orc_program_set_backup_function (p, _backup_bayer_orc_merge_gr_argb); +#else + p = orc_program_new (); + orc_program_set_name (p, "bayer_orc_merge_gr_argb"); + orc_program_set_backup_function (p, _backup_bayer_orc_merge_gr_argb); + orc_program_add_destination (p, 8, "d1"); + orc_program_add_source (p, 2, "s1"); + orc_program_add_source (p, 2, "s2"); + orc_program_add_source (p, 2, "s3"); + orc_program_add_source (p, 2, "s4"); + orc_program_add_source (p, 2, "s5"); + orc_program_add_source (p, 2, "s6"); + orc_program_add_constant (p, 2, 0x0000ff00, "c1"); + orc_program_add_constant (p, 2, 0x000000ff, "c2"); + orc_program_add_constant (p, 1, 0x000000ff, "c3"); + orc_program_add_temporary (p, 4, "t1"); + orc_program_add_temporary (p, 4, "t2"); + orc_program_add_temporary (p, 2, "t3"); + orc_program_add_temporary (p, 2, "t4"); + orc_program_add_temporary (p, 2, "t5"); + + orc_program_append_2 (p, "avgub", 1, ORC_VAR_T3, ORC_VAR_S1, ORC_VAR_S5, + ORC_VAR_D1); + orc_program_append_2 (p, "avgub", 1, ORC_VAR_T4, ORC_VAR_S2, ORC_VAR_S6, + ORC_VAR_D1); + orc_program_append_2 (p, "copyw", 0, ORC_VAR_T5, ORC_VAR_S3, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "avgub", 1, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_T5, + ORC_VAR_D1); + orc_program_append_2 (p, "andw", 0, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_C1, + ORC_VAR_D1); + orc_program_append_2 (p, "andw", 0, ORC_VAR_T5, ORC_VAR_T5, ORC_VAR_C2, + ORC_VAR_D1); + orc_program_append_2 (p, "orw", 0, ORC_VAR_T4, ORC_VAR_T5, ORC_VAR_T4, + ORC_VAR_D1); + orc_program_append_2 (p, "mergebw", 1, ORC_VAR_T1, ORC_VAR_C3, ORC_VAR_S4, + ORC_VAR_D1); + orc_program_append_2 (p, "mergebw", 1, ORC_VAR_T2, ORC_VAR_T4, ORC_VAR_T3, + ORC_VAR_D1); + orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D1, ORC_VAR_T1, ORC_VAR_T2, + ORC_VAR_D1); +#endif + + orc_program_compile (p); + c = orc_program_take_code (p); + orc_program_free (p); + orc_once_leave (&once, 
c);
  }
  ex->arrays[ORC_VAR_A2] = c;
  ex->program = 0;
@@ -3809,65 +3775,59 @@
    guint16 * ORC_RESTRICT d2, const guint16 * ORC_RESTRICT s1, int n)
{
  OrcExecutor _ex, *ex = &_ex;
-  static volatile int p_inited = 0;
-  static OrcCode *c = 0;
-  void (*func) (OrcExecutor *);
-
-  if (!p_inited) {
-    orc_once_mutex_lock ();
-    if (!p_inited) {
-      OrcProgram *p;
+  static OrcOnce once = ORC_ONCE_INIT;
+  OrcCode *c;
+  OrcExecutorFunc func = NULL;
+
+  if (!orc_once_enter (&once, (void **) &c)) {
+    OrcProgram *p;
#if 1
-      static const orc_uint8 bc[] = {
-        1, 9, 29, 98, 97, 121, 101, 114, 49, 54, 95, 111, 114, 99, 95, 104,
-        111, 114, 105, 122, 95, 117, 112, 115, 97, 109, 112, 108, 101, 95, 108,
-        101,
-        11, 4, 4, 11, 4, 4, 12, 4, 4, 14, 4, 1, 0, 0, 0, 20,
-        4, 20, 2, 20, 2, 20, 2, 20, 2, 198, 34, 33, 4, 114, 32, 4,
-        16, 198, 36, 35, 32, 76, 36, 34, 36, 195, 0, 34, 36, 76, 33, 33,
-        35, 195, 1, 33, 35, 2, 0,
-      };
-      p = orc_program_new_from_static_bytecode (bc);
-      orc_program_set_backup_function (p,
-          _backup_bayer16_orc_horiz_upsample_le);
-#else
-      p = orc_program_new ();
-      orc_program_set_name (p, "bayer16_orc_horiz_upsample_le");
-      orc_program_set_backup_function (p,
-          _backup_bayer16_orc_horiz_upsample_le);
-      orc_program_add_destination (p, 4, "d1");
-      orc_program_add_destination (p, 4, "d2");
-      orc_program_add_source (p, 4, "s1");
-      orc_program_add_constant (p, 4, 0x00000001, "c1");
-      orc_program_add_temporary (p, 4, "t1");
-      orc_program_add_temporary (p, 2, "t2");
-      orc_program_add_temporary (p, 2, "t3");
-      orc_program_add_temporary (p, 2, "t4");
-      orc_program_add_temporary (p, 2, "t5");
-
-      orc_program_append_2 (p, "splitlw", 0, ORC_VAR_T3, ORC_VAR_T2, ORC_VAR_S1,
-          ORC_VAR_D1);
-      orc_program_append_2 (p, "loadoffl", 0, ORC_VAR_T1, ORC_VAR_S1,
-          ORC_VAR_C1, ORC_VAR_D1);
-      orc_program_append_2 (p, "splitlw", 0, ORC_VAR_T5, ORC_VAR_T4, ORC_VAR_T1,
-          ORC_VAR_D1);
-      orc_program_append_2 (p, "avguw", 0, ORC_VAR_T5, ORC_VAR_T3, ORC_VAR_T5,
-          ORC_VAR_D1);
-      orc_program_append_2 (p, "mergewl", 0, ORC_VAR_D1, ORC_VAR_T3, ORC_VAR_T5,
-          ORC_VAR_D1);
-      orc_program_append_2 (p, "avguw", 0, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_T4,
-          ORC_VAR_D1);
-      orc_program_append_2 (p, "mergewl", 0, ORC_VAR_D2, ORC_VAR_T2, ORC_VAR_T4,
-          ORC_VAR_D1);
-#endif
-
-      orc_program_compile (p);
-      c = orc_program_take_code (p);
-      orc_program_free (p);
-    }
-    p_inited = TRUE;
-    orc_once_mutex_unlock ();
+    static const orc_uint8 bc[] = {
+      1, 9, 29, 98, 97, 121, 101, 114, 49, 54, 95, 111, 114, 99, 95, 104,
+      111, 114, 105, 122, 95, 117, 112, 115, 97, 109, 112, 108, 101, 95, 108,
+      101,
+      11, 4, 4, 11, 4, 4, 12, 4, 4, 14, 4, 1, 0, 0, 0, 20,
+      4, 20, 2, 20, 2, 20, 2, 20, 2, 198, 34, 33, 4, 114, 32, 4,
+      16, 198, 36, 35, 32, 76, 36, 34, 36, 195, 0, 34, 36, 76, 33, 33,
+      35, 195, 1, 33, 35, 2, 0,
+    };
+    p = orc_program_new_from_static_bytecode (bc);
+    orc_program_set_backup_function (p, _backup_bayer16_orc_horiz_upsample_le);
+#else
+    p = orc_program_new ();
+    orc_program_set_name (p, "bayer16_orc_horiz_upsample_le");
+    orc_program_set_backup_function (p, _backup_bayer16_orc_horiz_upsample_le);
+    orc_program_add_destination (p, 4, "d1");
+    orc_program_add_destination (p, 4, "d2");
+    orc_program_add_source (p, 4, "s1");
+    orc_program_add_constant (p, 4, 0x00000001, "c1");
+    orc_program_add_temporary (p, 4, "t1");
+    orc_program_add_temporary (p, 2, "t2");
+    orc_program_add_temporary (p, 2, "t3");
+    orc_program_add_temporary (p, 2, "t4");
+    orc_program_add_temporary (p, 2, "t5");
+
+    orc_program_append_2 (p, "splitlw", 0, ORC_VAR_T3, ORC_VAR_T2, ORC_VAR_S1,
+        ORC_VAR_D1);
+    orc_program_append_2 (p, "loadoffl", 0, ORC_VAR_T1, ORC_VAR_S1, ORC_VAR_C1,
+        ORC_VAR_D1);
+    orc_program_append_2 (p, "splitlw", 0, ORC_VAR_T5, ORC_VAR_T4, ORC_VAR_T1,
+        ORC_VAR_D1);
+    orc_program_append_2 (p, "avguw", 0, ORC_VAR_T5, ORC_VAR_T3, ORC_VAR_T5,
+        ORC_VAR_D1);
+    orc_program_append_2 (p, "mergewl", 0, ORC_VAR_D1, ORC_VAR_T3, ORC_VAR_T5,
+        ORC_VAR_D1);
+    orc_program_append_2 (p, "avguw", 0, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_T4,
+        ORC_VAR_D1);
+    orc_program_append_2 (p, "mergewl", 0, ORC_VAR_D2, ORC_VAR_T2, ORC_VAR_T4,
+        ORC_VAR_D1);
+#endif
+
+    orc_program_compile (p);
+    c = orc_program_take_code (p);
+    orc_program_free (p);
+    orc_once_leave (&once, c);
  }
  ex->arrays[ORC_VAR_A2] = c;
  ex->program = 0;
@@ -4053,74 +4013,68 @@
    guint16 * ORC_RESTRICT d2, const guint16 * ORC_RESTRICT s1, int n)
{
  OrcExecutor _ex, *ex = &_ex;
-  static volatile int p_inited = 0;
-  static OrcCode *c = 0;
-  void (*func) (OrcExecutor *);
-
-  if (!p_inited) {
-    orc_once_mutex_lock ();
-    if (!p_inited) {
-      OrcProgram *p;
+  static OrcOnce once = ORC_ONCE_INIT;
+  OrcCode *c;
+  OrcExecutorFunc func = NULL;
+
+  if (!orc_once_enter (&once, (void **) &c)) {
+    OrcProgram *p;
#if 1
-      static const orc_uint8 bc[] = {
-        1, 9, 29, 98, 97, 121, 101, 114, 49, 54, 95, 111, 114, 99, 95, 104,
-        111, 114, 105, 122, 95, 117, 112, 115, 97, 109, 112, 108, 101, 95, 98,
-        101,
-        11, 4, 4, 11, 4, 4, 12, 4, 4, 14, 4, 1, 0, 0, 0, 20,
-        4, 20, 2, 20, 2, 20, 2, 20, 2, 198, 34, 33, 4, 183, 33, 33,
-        183, 34, 34, 114, 32, 4, 16, 198, 36, 35, 32, 183, 35, 35, 183, 36,
-        36, 76, 36, 34, 36, 195, 0, 34, 36, 76, 33, 33, 35, 195, 1, 33,
-        35, 2, 0,
-      };
-      p = orc_program_new_from_static_bytecode (bc);
-      orc_program_set_backup_function (p,
-          _backup_bayer16_orc_horiz_upsample_be);
-#else
-      p = orc_program_new ();
-      orc_program_set_name (p, "bayer16_orc_horiz_upsample_be");
-      orc_program_set_backup_function (p,
-          _backup_bayer16_orc_horiz_upsample_be);
-      orc_program_add_destination (p, 4, "d1");
-      orc_program_add_destination (p, 4, "d2");
-      orc_program_add_source (p, 4, "s1");
-      orc_program_add_constant (p, 4, 0x00000001, "c1");
-      orc_program_add_temporary (p, 4, "t1");
-      orc_program_add_temporary (p, 2, "t2");
-      orc_program_add_temporary (p, 2, "t3");
-      orc_program_add_temporary (p, 2, "t4");
-      orc_program_add_temporary (p, 2, "t5");
-
-      orc_program_append_2 (p, "splitlw", 0, ORC_VAR_T3, ORC_VAR_T2, ORC_VAR_S1,
-          ORC_VAR_D1);
-
orc_program_append_2 (p, "swapw", 0, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "swapw", 0, ORC_VAR_T3, ORC_VAR_T3, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "loadoffl", 0, ORC_VAR_T1, ORC_VAR_S1, - ORC_VAR_C1, ORC_VAR_D1); - orc_program_append_2 (p, "splitlw", 0, ORC_VAR_T5, ORC_VAR_T4, ORC_VAR_T1, - ORC_VAR_D1); - orc_program_append_2 (p, "swapw", 0, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "swapw", 0, ORC_VAR_T5, ORC_VAR_T5, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "avguw", 0, ORC_VAR_T5, ORC_VAR_T3, ORC_VAR_T5, - ORC_VAR_D1); - orc_program_append_2 (p, "mergewl", 0, ORC_VAR_D1, ORC_VAR_T3, ORC_VAR_T5, - ORC_VAR_D1); - orc_program_append_2 (p, "avguw", 0, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_T4, - ORC_VAR_D1); - orc_program_append_2 (p, "mergewl", 0, ORC_VAR_D2, ORC_VAR_T2, ORC_VAR_T4, - ORC_VAR_D1); -#endif - - orc_program_compile (p); - c = orc_program_take_code (p); - orc_program_free (p); - } - p_inited = TRUE; - orc_once_mutex_unlock (); + static const orc_uint8 bc = { + 1, 9, 29, 98, 97, 121, 101, 114, 49, 54, 95, 111, 114, 99, 95, 104, + 111, 114, 105, 122, 95, 117, 112, 115, 97, 109, 112, 108, 101, 95, 98, + 101, + 11, 4, 4, 11, 4, 4, 12, 4, 4, 14, 4, 1, 0, 0, 0, 20, + 4, 20, 2, 20, 2, 20, 2, 20, 2, 198, 34, 33, 4, 183, 33, 33, + 183, 34, 34, 114, 32, 4, 16, 198, 36, 35, 32, 183, 35, 35, 183, 36, + 36, 76, 36, 34, 36, 195, 0, 34, 36, 76, 33, 33, 35, 195, 1, 33, + 35, 2, 0, + }; + p = orc_program_new_from_static_bytecode (bc); + orc_program_set_backup_function (p, _backup_bayer16_orc_horiz_upsample_be); +#else + p = orc_program_new (); + orc_program_set_name (p, "bayer16_orc_horiz_upsample_be"); + orc_program_set_backup_function (p, _backup_bayer16_orc_horiz_upsample_be); + orc_program_add_destination (p, 4, "d1"); + orc_program_add_destination (p, 4, "d2"); + orc_program_add_source (p, 4, "s1"); + orc_program_add_constant (p, 4, 0x00000001, "c1"); + 
orc_program_add_temporary (p, 4, "t1"); + orc_program_add_temporary (p, 2, "t2"); + orc_program_add_temporary (p, 2, "t3"); + orc_program_add_temporary (p, 2, "t4"); + orc_program_add_temporary (p, 2, "t5"); + + orc_program_append_2 (p, "splitlw", 0, ORC_VAR_T3, ORC_VAR_T2, ORC_VAR_S1, + ORC_VAR_D1); + orc_program_append_2 (p, "swapw", 0, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "swapw", 0, ORC_VAR_T3, ORC_VAR_T3, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "loadoffl", 0, ORC_VAR_T1, ORC_VAR_S1, ORC_VAR_C1, + ORC_VAR_D1); + orc_program_append_2 (p, "splitlw", 0, ORC_VAR_T5, ORC_VAR_T4, ORC_VAR_T1, + ORC_VAR_D1); + orc_program_append_2 (p, "swapw", 0, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "swapw", 0, ORC_VAR_T5, ORC_VAR_T5, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "avguw", 0, ORC_VAR_T5, ORC_VAR_T3, ORC_VAR_T5, + ORC_VAR_D1); + orc_program_append_2 (p, "mergewl", 0, ORC_VAR_D1, ORC_VAR_T3, ORC_VAR_T5, + ORC_VAR_D1); + orc_program_append_2 (p, "avguw", 0, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_T4, + ORC_VAR_D1); + orc_program_append_2 (p, "mergewl", 0, ORC_VAR_D2, ORC_VAR_T2, ORC_VAR_T4, + ORC_VAR_D1); +#endif + + orc_program_compile (p); + c = orc_program_take_code (p); + orc_program_free (p); + orc_once_leave (&once, c); } ex->arraysORC_VAR_A2 = c; ex->program = 0; @@ -4410,73 +4364,69 @@ const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n) { OrcExecutor _ex, *ex = &_ex; - static volatile int p_inited = 0; - static OrcCode *c = 0; - void (*func) (OrcExecutor *); - - if (!p_inited) { - orc_once_mutex_lock (); - if (!p_inited) { - OrcProgram *p; + static OrcOnce once = ORC_ONCE_INIT; + OrcCode *c; + OrcExecutorFunc func = NULL; + + if (!orc_once_enter (&once, (void **) &c)) { + OrcProgram *p; #if 1 - static const orc_uint8 bc = { - 1, 9, 25, 98, 97, 121, 101, 114, 49, 54, 95, 111, 114, 99, 95, 109, - 101, 114, 103, 101, 95, 98, 103, 95, 98, 103, 114, 97, 11, 
8, 8, 11, - 8, 8, 12, 4, 4, 12, 4, 4, 12, 4, 4, 12, 4, 4, 12, 4, - 4, 12, 4, 4, 14, 4, 255, 255, 0, 0, 14, 4, 0, 0, 255, 255, - 14, 2, 255, 255, 0, 0, 20, 4, 20, 4, 20, 4, 21, 1, 76, 32, - 5, 9, 21, 1, 76, 33, 4, 8, 112, 34, 7, 21, 1, 76, 33, 33, - 34, 106, 33, 33, 16, 106, 34, 34, 17, 123, 33, 34, 33, 21, 1, 195, - 0, 6, 33, 21, 1, 195, 1, 32, 18, 2, 0, - }; - p = orc_program_new_from_static_bytecode (bc); - orc_program_set_backup_function (p, _backup_bayer16_orc_merge_bg_bgra); -#else - p = orc_program_new (); - orc_program_set_name (p, "bayer16_orc_merge_bg_bgra"); - orc_program_set_backup_function (p, _backup_bayer16_orc_merge_bg_bgra); - orc_program_add_destination (p, 8, "d1"); - orc_program_add_destination (p, 8, "d2"); - orc_program_add_source (p, 4, "s1"); - orc_program_add_source (p, 4, "s2"); - orc_program_add_source (p, 4, "s3"); - orc_program_add_source (p, 4, "s4"); - orc_program_add_source (p, 4, "s5"); - orc_program_add_source (p, 4, "s6"); - orc_program_add_constant (p, 4, 0x0000ffff, "c1"); - orc_program_add_constant (p, 4, 0xffff0000, "c2"); - orc_program_add_constant (p, 2, 0x0000ffff, "c3"); - orc_program_add_temporary (p, 4, "t1"); - orc_program_add_temporary (p, 4, "t2"); - orc_program_add_temporary (p, 4, "t3"); - - orc_program_append_2 (p, "avguw", 1, ORC_VAR_T1, ORC_VAR_S2, ORC_VAR_S6, - ORC_VAR_D1); - orc_program_append_2 (p, "avguw", 1, ORC_VAR_T2, ORC_VAR_S1, ORC_VAR_S5, - ORC_VAR_D1); - orc_program_append_2 (p, "copyl", 0, ORC_VAR_T3, ORC_VAR_S4, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "avguw", 1, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_T3, - ORC_VAR_D1); - orc_program_append_2 (p, "andl", 0, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_C1, - ORC_VAR_D1); - orc_program_append_2 (p, "andl", 0, ORC_VAR_T3, ORC_VAR_T3, ORC_VAR_C2, - ORC_VAR_D1); - orc_program_append_2 (p, "orl", 0, ORC_VAR_T2, ORC_VAR_T3, ORC_VAR_T2, - ORC_VAR_D1); - orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D1, ORC_VAR_S3, ORC_VAR_T2, - ORC_VAR_D1); - orc_program_append_2 
(p, "mergewl", 1, ORC_VAR_D2, ORC_VAR_T1, ORC_VAR_C3, - ORC_VAR_D1); -#endif - - orc_program_compile (p); - c = orc_program_take_code (p); - orc_program_free (p); - } - p_inited = TRUE; - orc_once_mutex_unlock (); + static const orc_uint8 bc = { + 1, 9, 25, 98, 97, 121, 101, 114, 49, 54, 95, 111, 114, 99, 95, 109, + 101, 114, 103, 101, 95, 98, 103, 95, 98, 103, 114, 97, 11, 8, 8, 11, + 8, 8, 12, 4, 4, 12, 4, 4, 12, 4, 4, 12, 4, 4, 12, 4, + 4, 12, 4, 4, 14, 4, 255, 255, 0, 0, 14, 4, 0, 0, 255, 255, + 14, 2, 255, 255, 0, 0, 20, 4, 20, 4, 20, 4, 21, 1, 76, 32, + 5, 9, 21, 1, 76, 33, 4, 8, 112, 34, 7, 21, 1, 76, 33, 33, + 34, 106, 33, 33, 16, 106, 34, 34, 17, 123, 33, 34, 33, 21, 1, 195, + 0, 6, 33, 21, 1, 195, 1, 32, 18, 2, 0, + }; + p = orc_program_new_from_static_bytecode (bc); + orc_program_set_backup_function (p, _backup_bayer16_orc_merge_bg_bgra); +#else + p = orc_program_new (); + orc_program_set_name (p, "bayer16_orc_merge_bg_bgra"); + orc_program_set_backup_function (p, _backup_bayer16_orc_merge_bg_bgra); + orc_program_add_destination (p, 8, "d1"); + orc_program_add_destination (p, 8, "d2"); + orc_program_add_source (p, 4, "s1"); + orc_program_add_source (p, 4, "s2"); + orc_program_add_source (p, 4, "s3"); + orc_program_add_source (p, 4, "s4"); + orc_program_add_source (p, 4, "s5"); + orc_program_add_source (p, 4, "s6"); + orc_program_add_constant (p, 4, 0x0000ffff, "c1"); + orc_program_add_constant (p, 4, 0xffff0000, "c2"); + orc_program_add_constant (p, 2, 0x0000ffff, "c3"); + orc_program_add_temporary (p, 4, "t1"); + orc_program_add_temporary (p, 4, "t2"); + orc_program_add_temporary (p, 4, "t3"); + + orc_program_append_2 (p, "avguw", 1, ORC_VAR_T1, ORC_VAR_S2, ORC_VAR_S6, + ORC_VAR_D1); + orc_program_append_2 (p, "avguw", 1, ORC_VAR_T2, ORC_VAR_S1, ORC_VAR_S5, + ORC_VAR_D1); + orc_program_append_2 (p, "copyl", 0, ORC_VAR_T3, ORC_VAR_S4, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "avguw", 1, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_T3, + ORC_VAR_D1); + 
orc_program_append_2 (p, "andl", 0, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_C1, + ORC_VAR_D1); + orc_program_append_2 (p, "andl", 0, ORC_VAR_T3, ORC_VAR_T3, ORC_VAR_C2, + ORC_VAR_D1); + orc_program_append_2 (p, "orl", 0, ORC_VAR_T2, ORC_VAR_T3, ORC_VAR_T2, + ORC_VAR_D1); + orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D1, ORC_VAR_S3, ORC_VAR_T2, + ORC_VAR_D1); + orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D2, ORC_VAR_T1, ORC_VAR_C3, + ORC_VAR_D1); +#endif + + orc_program_compile (p); + c = orc_program_take_code (p); + orc_program_free (p); + orc_once_leave (&once, c); } ex->arraysORC_VAR_A2 = c; ex->program = 0; @@ -4771,73 +4721,69 @@ const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n) { OrcExecutor _ex, *ex = &_ex; - static volatile int p_inited = 0; - static OrcCode *c = 0; - void (*func) (OrcExecutor *); - - if (!p_inited) { - orc_once_mutex_lock (); - if (!p_inited) { - OrcProgram *p; + static OrcOnce once = ORC_ONCE_INIT; + OrcCode *c; + OrcExecutorFunc func = NULL; + + if (!orc_once_enter (&once, (void **) &c)) { + OrcProgram *p; #if 1 - static const orc_uint8 bc = { - 1, 9, 25, 98, 97, 121, 101, 114, 49, 54, 95, 111, 114, 99, 95, 109, - 101, 114, 103, 101, 95, 103, 114, 95, 98, 103, 114, 97, 11, 8, 8, 11, - 8, 8, 12, 4, 4, 12, 4, 4, 12, 4, 4, 12, 4, 4, 12, 4, - 4, 12, 4, 4, 14, 4, 0, 0, 255, 255, 14, 4, 255, 255, 0, 0, - 14, 2, 255, 255, 0, 0, 20, 4, 20, 4, 20, 4, 21, 1, 76, 32, - 4, 8, 21, 1, 76, 33, 5, 9, 112, 34, 6, 21, 1, 76, 33, 33, - 34, 106, 33, 33, 16, 106, 34, 34, 17, 123, 33, 34, 33, 21, 1, 195, - 0, 32, 33, 21, 1, 195, 1, 7, 18, 2, 0, - }; - p = orc_program_new_from_static_bytecode (bc); - orc_program_set_backup_function (p, _backup_bayer16_orc_merge_gr_bgra); -#else - p = orc_program_new (); - orc_program_set_name (p, "bayer16_orc_merge_gr_bgra"); - orc_program_set_backup_function (p, _backup_bayer16_orc_merge_gr_bgra); - orc_program_add_destination (p, 8, "d1"); - orc_program_add_destination (p, 8, "d2"); - orc_program_add_source 
(p, 4, "s1"); - orc_program_add_source (p, 4, "s2"); - orc_program_add_source (p, 4, "s3"); - orc_program_add_source (p, 4, "s4"); - orc_program_add_source (p, 4, "s5"); - orc_program_add_source (p, 4, "s6"); - orc_program_add_constant (p, 4, 0xffff0000, "c1"); - orc_program_add_constant (p, 4, 0x0000ffff, "c2"); - orc_program_add_constant (p, 2, 0x0000ffff, "c3"); - orc_program_add_temporary (p, 4, "t1"); - orc_program_add_temporary (p, 4, "t2"); - orc_program_add_temporary (p, 4, "t3"); - - orc_program_append_2 (p, "avguw", 1, ORC_VAR_T1, ORC_VAR_S1, ORC_VAR_S5, - ORC_VAR_D1); - orc_program_append_2 (p, "avguw", 1, ORC_VAR_T2, ORC_VAR_S2, ORC_VAR_S6, - ORC_VAR_D1); - orc_program_append_2 (p, "copyl", 0, ORC_VAR_T3, ORC_VAR_S3, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "avguw", 1, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_T3, - ORC_VAR_D1); - orc_program_append_2 (p, "andl", 0, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_C1, - ORC_VAR_D1); - orc_program_append_2 (p, "andl", 0, ORC_VAR_T3, ORC_VAR_T3, ORC_VAR_C2, - ORC_VAR_D1); - orc_program_append_2 (p, "orl", 0, ORC_VAR_T2, ORC_VAR_T3, ORC_VAR_T2, - ORC_VAR_D1); - orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D1, ORC_VAR_T1, ORC_VAR_T2, - ORC_VAR_D1); - orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D2, ORC_VAR_S4, ORC_VAR_C3, - ORC_VAR_D1); -#endif - - orc_program_compile (p); - c = orc_program_take_code (p); - orc_program_free (p); - } - p_inited = TRUE; - orc_once_mutex_unlock (); + static const orc_uint8 bc = { + 1, 9, 25, 98, 97, 121, 101, 114, 49, 54, 95, 111, 114, 99, 95, 109, + 101, 114, 103, 101, 95, 103, 114, 95, 98, 103, 114, 97, 11, 8, 8, 11, + 8, 8, 12, 4, 4, 12, 4, 4, 12, 4, 4, 12, 4, 4, 12, 4, + 4, 12, 4, 4, 14, 4, 0, 0, 255, 255, 14, 4, 255, 255, 0, 0, + 14, 2, 255, 255, 0, 0, 20, 4, 20, 4, 20, 4, 21, 1, 76, 32, + 4, 8, 21, 1, 76, 33, 5, 9, 112, 34, 6, 21, 1, 76, 33, 33, + 34, 106, 33, 33, 16, 106, 34, 34, 17, 123, 33, 34, 33, 21, 1, 195, + 0, 32, 33, 21, 1, 195, 1, 7, 18, 2, 0, + }; + p = 
orc_program_new_from_static_bytecode (bc); + orc_program_set_backup_function (p, _backup_bayer16_orc_merge_gr_bgra); +#else + p = orc_program_new (); + orc_program_set_name (p, "bayer16_orc_merge_gr_bgra"); + orc_program_set_backup_function (p, _backup_bayer16_orc_merge_gr_bgra); + orc_program_add_destination (p, 8, "d1"); + orc_program_add_destination (p, 8, "d2"); + orc_program_add_source (p, 4, "s1"); + orc_program_add_source (p, 4, "s2"); + orc_program_add_source (p, 4, "s3"); + orc_program_add_source (p, 4, "s4"); + orc_program_add_source (p, 4, "s5"); + orc_program_add_source (p, 4, "s6"); + orc_program_add_constant (p, 4, 0xffff0000, "c1"); + orc_program_add_constant (p, 4, 0x0000ffff, "c2"); + orc_program_add_constant (p, 2, 0x0000ffff, "c3"); + orc_program_add_temporary (p, 4, "t1"); + orc_program_add_temporary (p, 4, "t2"); + orc_program_add_temporary (p, 4, "t3"); + + orc_program_append_2 (p, "avguw", 1, ORC_VAR_T1, ORC_VAR_S1, ORC_VAR_S5, + ORC_VAR_D1); + orc_program_append_2 (p, "avguw", 1, ORC_VAR_T2, ORC_VAR_S2, ORC_VAR_S6, + ORC_VAR_D1); + orc_program_append_2 (p, "copyl", 0, ORC_VAR_T3, ORC_VAR_S3, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "avguw", 1, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_T3, + ORC_VAR_D1); + orc_program_append_2 (p, "andl", 0, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_C1, + ORC_VAR_D1); + orc_program_append_2 (p, "andl", 0, ORC_VAR_T3, ORC_VAR_T3, ORC_VAR_C2, + ORC_VAR_D1); + orc_program_append_2 (p, "orl", 0, ORC_VAR_T2, ORC_VAR_T3, ORC_VAR_T2, + ORC_VAR_D1); + orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D1, ORC_VAR_T1, ORC_VAR_T2, + ORC_VAR_D1); + orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D2, ORC_VAR_S4, ORC_VAR_C3, + ORC_VAR_D1); +#endif + + orc_program_compile (p); + c = orc_program_take_code (p); + orc_program_free (p); + orc_once_leave (&once, c); } ex->arraysORC_VAR_A2 = c; ex->program = 0; @@ -5132,73 +5078,69 @@ const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n) { OrcExecutor _ex, *ex = &_ex; - 
static volatile int p_inited = 0; - static OrcCode *c = 0; - void (*func) (OrcExecutor *); - - if (!p_inited) { - orc_once_mutex_lock (); - if (!p_inited) { - OrcProgram *p; + static OrcOnce once = ORC_ONCE_INIT; + OrcCode *c; + OrcExecutorFunc func = NULL; + + if (!orc_once_enter (&once, (void **) &c)) { + OrcProgram *p; #if 1 - static const orc_uint8 bc = { - 1, 9, 25, 98, 97, 121, 101, 114, 49, 54, 95, 111, 114, 99, 95, 109, - 101, 114, 103, 101, 95, 98, 103, 95, 97, 98, 103, 114, 11, 8, 8, 11, - 8, 8, 12, 4, 4, 12, 4, 4, 12, 4, 4, 12, 4, 4, 12, 4, - 4, 12, 4, 4, 14, 4, 255, 255, 0, 0, 14, 4, 0, 0, 255, 255, - 14, 2, 255, 255, 0, 0, 20, 4, 20, 4, 20, 4, 21, 1, 76, 32, - 5, 9, 21, 1, 76, 33, 4, 8, 112, 34, 7, 21, 1, 76, 33, 33, - 34, 106, 33, 33, 16, 106, 34, 34, 17, 123, 33, 34, 33, 21, 1, 195, - 0, 18, 6, 21, 1, 195, 1, 33, 32, 2, 0, - }; - p = orc_program_new_from_static_bytecode (bc); - orc_program_set_backup_function (p, _backup_bayer16_orc_merge_bg_abgr); -#else - p = orc_program_new (); - orc_program_set_name (p, "bayer16_orc_merge_bg_abgr"); - orc_program_set_backup_function (p, _backup_bayer16_orc_merge_bg_abgr); - orc_program_add_destination (p, 8, "d1"); - orc_program_add_destination (p, 8, "d2"); - orc_program_add_source (p, 4, "s1"); - orc_program_add_source (p, 4, "s2"); - orc_program_add_source (p, 4, "s3"); - orc_program_add_source (p, 4, "s4"); - orc_program_add_source (p, 4, "s5"); - orc_program_add_source (p, 4, "s6"); - orc_program_add_constant (p, 4, 0x0000ffff, "c1"); - orc_program_add_constant (p, 4, 0xffff0000, "c2"); - orc_program_add_constant (p, 2, 0x0000ffff, "c3"); - orc_program_add_temporary (p, 4, "t1"); - orc_program_add_temporary (p, 4, "t2"); - orc_program_add_temporary (p, 4, "t3"); - - orc_program_append_2 (p, "avguw", 1, ORC_VAR_T1, ORC_VAR_S2, ORC_VAR_S6, - ORC_VAR_D1); - orc_program_append_2 (p, "avguw", 1, ORC_VAR_T2, ORC_VAR_S1, ORC_VAR_S5, - ORC_VAR_D1); - orc_program_append_2 (p, "copyl", 0, ORC_VAR_T3, ORC_VAR_S4, 
ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "avguw", 1, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_T3, - ORC_VAR_D1); - orc_program_append_2 (p, "andl", 0, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_C1, - ORC_VAR_D1); - orc_program_append_2 (p, "andl", 0, ORC_VAR_T3, ORC_VAR_T3, ORC_VAR_C2, - ORC_VAR_D1); - orc_program_append_2 (p, "orl", 0, ORC_VAR_T2, ORC_VAR_T3, ORC_VAR_T2, - ORC_VAR_D1); - orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D1, ORC_VAR_C3, ORC_VAR_S3, - ORC_VAR_D1); - orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D2, ORC_VAR_T2, ORC_VAR_T1, - ORC_VAR_D1); -#endif - - orc_program_compile (p); - c = orc_program_take_code (p); - orc_program_free (p); - } - p_inited = TRUE; - orc_once_mutex_unlock (); + static const orc_uint8 bc = { + 1, 9, 25, 98, 97, 121, 101, 114, 49, 54, 95, 111, 114, 99, 95, 109, + 101, 114, 103, 101, 95, 98, 103, 95, 97, 98, 103, 114, 11, 8, 8, 11, + 8, 8, 12, 4, 4, 12, 4, 4, 12, 4, 4, 12, 4, 4, 12, 4, + 4, 12, 4, 4, 14, 4, 255, 255, 0, 0, 14, 4, 0, 0, 255, 255, + 14, 2, 255, 255, 0, 0, 20, 4, 20, 4, 20, 4, 21, 1, 76, 32, + 5, 9, 21, 1, 76, 33, 4, 8, 112, 34, 7, 21, 1, 76, 33, 33, + 34, 106, 33, 33, 16, 106, 34, 34, 17, 123, 33, 34, 33, 21, 1, 195, + 0, 18, 6, 21, 1, 195, 1, 33, 32, 2, 0, + }; + p = orc_program_new_from_static_bytecode (bc); + orc_program_set_backup_function (p, _backup_bayer16_orc_merge_bg_abgr); +#else + p = orc_program_new (); + orc_program_set_name (p, "bayer16_orc_merge_bg_abgr"); + orc_program_set_backup_function (p, _backup_bayer16_orc_merge_bg_abgr); + orc_program_add_destination (p, 8, "d1"); + orc_program_add_destination (p, 8, "d2"); + orc_program_add_source (p, 4, "s1"); + orc_program_add_source (p, 4, "s2"); + orc_program_add_source (p, 4, "s3"); + orc_program_add_source (p, 4, "s4"); + orc_program_add_source (p, 4, "s5"); + orc_program_add_source (p, 4, "s6"); + orc_program_add_constant (p, 4, 0x0000ffff, "c1"); + orc_program_add_constant (p, 4, 0xffff0000, "c2"); + orc_program_add_constant (p, 2, 0x0000ffff, 
"c3"); + orc_program_add_temporary (p, 4, "t1"); + orc_program_add_temporary (p, 4, "t2"); + orc_program_add_temporary (p, 4, "t3"); + + orc_program_append_2 (p, "avguw", 1, ORC_VAR_T1, ORC_VAR_S2, ORC_VAR_S6, + ORC_VAR_D1); + orc_program_append_2 (p, "avguw", 1, ORC_VAR_T2, ORC_VAR_S1, ORC_VAR_S5, + ORC_VAR_D1); + orc_program_append_2 (p, "copyl", 0, ORC_VAR_T3, ORC_VAR_S4, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "avguw", 1, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_T3, + ORC_VAR_D1); + orc_program_append_2 (p, "andl", 0, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_C1, + ORC_VAR_D1); + orc_program_append_2 (p, "andl", 0, ORC_VAR_T3, ORC_VAR_T3, ORC_VAR_C2, + ORC_VAR_D1); + orc_program_append_2 (p, "orl", 0, ORC_VAR_T2, ORC_VAR_T3, ORC_VAR_T2, + ORC_VAR_D1); + orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D1, ORC_VAR_C3, ORC_VAR_S3, + ORC_VAR_D1); + orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D2, ORC_VAR_T2, ORC_VAR_T1, + ORC_VAR_D1); +#endif + + orc_program_compile (p); + c = orc_program_take_code (p); + orc_program_free (p); + orc_once_leave (&once, c); } ex->arraysORC_VAR_A2 = c; ex->program = 0; @@ -5493,73 +5435,69 @@ const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n) { OrcExecutor _ex, *ex = &_ex; - static volatile int p_inited = 0; - static OrcCode *c = 0; - void (*func) (OrcExecutor *); - - if (!p_inited) { - orc_once_mutex_lock (); - if (!p_inited) { - OrcProgram *p; + static OrcOnce once = ORC_ONCE_INIT; + OrcCode *c; + OrcExecutorFunc func = NULL; + + if (!orc_once_enter (&once, (void **) &c)) { + OrcProgram *p; #if 1 - static const orc_uint8 bc = { - 1, 9, 25, 98, 97, 121, 101, 114, 49, 54, 95, 111, 114, 99, 95, 109, - 101, 114, 103, 101, 95, 103, 114, 95, 97, 98, 103, 114, 11, 8, 8, 11, - 8, 8, 12, 4, 4, 12, 4, 4, 12, 4, 4, 12, 4, 4, 12, 4, - 4, 12, 4, 4, 14, 4, 0, 0, 255, 255, 14, 4, 255, 255, 0, 0, - 14, 2, 255, 255, 0, 0, 20, 4, 20, 4, 20, 4, 21, 1, 76, 32, - 4, 8, 21, 1, 76, 33, 5, 9, 112, 34, 6, 21, 1, 76, 33, 33, - 34, 106, 33, 
33, 16, 106, 34, 34, 17, 123, 33, 34, 33, 21, 1, 195, - 0, 18, 32, 21, 1, 195, 1, 33, 7, 2, 0, - }; - p = orc_program_new_from_static_bytecode (bc); - orc_program_set_backup_function (p, _backup_bayer16_orc_merge_gr_abgr); -#else - p = orc_program_new (); - orc_program_set_name (p, "bayer16_orc_merge_gr_abgr"); - orc_program_set_backup_function (p, _backup_bayer16_orc_merge_gr_abgr); - orc_program_add_destination (p, 8, "d1"); - orc_program_add_destination (p, 8, "d2"); - orc_program_add_source (p, 4, "s1"); - orc_program_add_source (p, 4, "s2"); - orc_program_add_source (p, 4, "s3"); - orc_program_add_source (p, 4, "s4"); - orc_program_add_source (p, 4, "s5"); - orc_program_add_source (p, 4, "s6"); - orc_program_add_constant (p, 4, 0xffff0000, "c1"); - orc_program_add_constant (p, 4, 0x0000ffff, "c2"); - orc_program_add_constant (p, 2, 0x0000ffff, "c3"); - orc_program_add_temporary (p, 4, "t1"); - orc_program_add_temporary (p, 4, "t2"); - orc_program_add_temporary (p, 4, "t3"); - - orc_program_append_2 (p, "avguw", 1, ORC_VAR_T1, ORC_VAR_S1, ORC_VAR_S5, - ORC_VAR_D1); - orc_program_append_2 (p, "avguw", 1, ORC_VAR_T2, ORC_VAR_S2, ORC_VAR_S6, - ORC_VAR_D1); - orc_program_append_2 (p, "copyl", 0, ORC_VAR_T3, ORC_VAR_S3, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "avguw", 1, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_T3, - ORC_VAR_D1); - orc_program_append_2 (p, "andl", 0, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_C1, - ORC_VAR_D1); - orc_program_append_2 (p, "andl", 0, ORC_VAR_T3, ORC_VAR_T3, ORC_VAR_C2, - ORC_VAR_D1); - orc_program_append_2 (p, "orl", 0, ORC_VAR_T2, ORC_VAR_T3, ORC_VAR_T2, - ORC_VAR_D1); - orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D1, ORC_VAR_C3, ORC_VAR_T1, - ORC_VAR_D1); - orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D2, ORC_VAR_T2, ORC_VAR_S4, - ORC_VAR_D1); -#endif - - orc_program_compile (p); - c = orc_program_take_code (p); - orc_program_free (p); - } - p_inited = TRUE; - orc_once_mutex_unlock (); + static const orc_uint8 bc = { + 1, 9, 25, 98, 
97, 121, 101, 114, 49, 54, 95, 111, 114, 99, 95, 109, + 101, 114, 103, 101, 95, 103, 114, 95, 97, 98, 103, 114, 11, 8, 8, 11, + 8, 8, 12, 4, 4, 12, 4, 4, 12, 4, 4, 12, 4, 4, 12, 4, + 4, 12, 4, 4, 14, 4, 0, 0, 255, 255, 14, 4, 255, 255, 0, 0, + 14, 2, 255, 255, 0, 0, 20, 4, 20, 4, 20, 4, 21, 1, 76, 32, + 4, 8, 21, 1, 76, 33, 5, 9, 112, 34, 6, 21, 1, 76, 33, 33, + 34, 106, 33, 33, 16, 106, 34, 34, 17, 123, 33, 34, 33, 21, 1, 195, + 0, 18, 32, 21, 1, 195, 1, 33, 7, 2, 0, + }; + p = orc_program_new_from_static_bytecode (bc); + orc_program_set_backup_function (p, _backup_bayer16_orc_merge_gr_abgr); +#else + p = orc_program_new (); + orc_program_set_name (p, "bayer16_orc_merge_gr_abgr"); + orc_program_set_backup_function (p, _backup_bayer16_orc_merge_gr_abgr); + orc_program_add_destination (p, 8, "d1"); + orc_program_add_destination (p, 8, "d2"); + orc_program_add_source (p, 4, "s1"); + orc_program_add_source (p, 4, "s2"); + orc_program_add_source (p, 4, "s3"); + orc_program_add_source (p, 4, "s4"); + orc_program_add_source (p, 4, "s5"); + orc_program_add_source (p, 4, "s6"); + orc_program_add_constant (p, 4, 0xffff0000, "c1"); + orc_program_add_constant (p, 4, 0x0000ffff, "c2"); + orc_program_add_constant (p, 2, 0x0000ffff, "c3"); + orc_program_add_temporary (p, 4, "t1"); + orc_program_add_temporary (p, 4, "t2"); + orc_program_add_temporary (p, 4, "t3"); + + orc_program_append_2 (p, "avguw", 1, ORC_VAR_T1, ORC_VAR_S1, ORC_VAR_S5, + ORC_VAR_D1); + orc_program_append_2 (p, "avguw", 1, ORC_VAR_T2, ORC_VAR_S2, ORC_VAR_S6, + ORC_VAR_D1); + orc_program_append_2 (p, "copyl", 0, ORC_VAR_T3, ORC_VAR_S3, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "avguw", 1, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_T3, + ORC_VAR_D1); + orc_program_append_2 (p, "andl", 0, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_C1, + ORC_VAR_D1); + orc_program_append_2 (p, "andl", 0, ORC_VAR_T3, ORC_VAR_T3, ORC_VAR_C2, + ORC_VAR_D1); + orc_program_append_2 (p, "orl", 0, ORC_VAR_T2, ORC_VAR_T3, ORC_VAR_T2, + ORC_VAR_D1); 
+ orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D1, ORC_VAR_C3, ORC_VAR_T1, + ORC_VAR_D1); + orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D2, ORC_VAR_T2, ORC_VAR_S4, + ORC_VAR_D1); +#endif + + orc_program_compile (p); + c = orc_program_take_code (p); + orc_program_free (p); + orc_once_leave (&once, c); } ex->arraysORC_VAR_A2 = c; ex->program = 0; @@ -5854,73 +5792,69 @@ const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n) { OrcExecutor _ex, *ex = &_ex; - static volatile int p_inited = 0; - static OrcCode *c = 0; - void (*func) (OrcExecutor *); - - if (!p_inited) { - orc_once_mutex_lock (); - if (!p_inited) { - OrcProgram *p; + static OrcOnce once = ORC_ONCE_INIT; + OrcCode *c; + OrcExecutorFunc func = NULL; + + if (!orc_once_enter (&once, (void **) &c)) { + OrcProgram *p; #if 1 - static const orc_uint8 bc = { - 1, 9, 25, 98, 97, 121, 101, 114, 49, 54, 95, 111, 114, 99, 95, 109, - 101, 114, 103, 101, 95, 98, 103, 95, 114, 103, 98, 97, 11, 8, 8, 11, - 8, 8, 12, 4, 4, 12, 4, 4, 12, 4, 4, 12, 4, 4, 12, 4, - 4, 12, 4, 4, 14, 4, 255, 255, 0, 0, 14, 4, 0, 0, 255, 255, - 14, 2, 255, 255, 0, 0, 20, 4, 20, 4, 20, 4, 21, 1, 76, 32, - 5, 9, 21, 1, 76, 33, 4, 8, 112, 34, 7, 21, 1, 76, 33, 33, - 34, 106, 33, 33, 16, 106, 34, 34, 17, 123, 33, 34, 33, 21, 1, 195, - 0, 32, 33, 21, 1, 195, 1, 6, 18, 2, 0, - }; - p = orc_program_new_from_static_bytecode (bc); - orc_program_set_backup_function (p, _backup_bayer16_orc_merge_bg_rgba); -#else - p = orc_program_new (); - orc_program_set_name (p, "bayer16_orc_merge_bg_rgba"); - orc_program_set_backup_function (p, _backup_bayer16_orc_merge_bg_rgba); - orc_program_add_destination (p, 8, "d1"); - orc_program_add_destination (p, 8, "d2"); - orc_program_add_source (p, 4, "s1"); - orc_program_add_source (p, 4, "s2"); - orc_program_add_source (p, 4, "s3"); - orc_program_add_source (p, 4, "s4"); - orc_program_add_source (p, 4, "s5"); - orc_program_add_source (p, 4, "s6"); - orc_program_add_constant (p, 4, 0x0000ffff, "c1"); - 
orc_program_add_constant (p, 4, 0xffff0000, "c2"); - orc_program_add_constant (p, 2, 0x0000ffff, "c3"); - orc_program_add_temporary (p, 4, "t1"); - orc_program_add_temporary (p, 4, "t2"); - orc_program_add_temporary (p, 4, "t3"); - - orc_program_append_2 (p, "avguw", 1, ORC_VAR_T1, ORC_VAR_S2, ORC_VAR_S6, - ORC_VAR_D1); - orc_program_append_2 (p, "avguw", 1, ORC_VAR_T2, ORC_VAR_S1, ORC_VAR_S5, - ORC_VAR_D1); - orc_program_append_2 (p, "copyl", 0, ORC_VAR_T3, ORC_VAR_S4, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "avguw", 1, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_T3, - ORC_VAR_D1); - orc_program_append_2 (p, "andl", 0, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_C1, - ORC_VAR_D1); - orc_program_append_2 (p, "andl", 0, ORC_VAR_T3, ORC_VAR_T3, ORC_VAR_C2, - ORC_VAR_D1); - orc_program_append_2 (p, "orl", 0, ORC_VAR_T2, ORC_VAR_T3, ORC_VAR_T2, - ORC_VAR_D1); - orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D1, ORC_VAR_T1, ORC_VAR_T2, - ORC_VAR_D1); - orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D2, ORC_VAR_S3, ORC_VAR_C3, - ORC_VAR_D1); -#endif - - orc_program_compile (p); - c = orc_program_take_code (p); - orc_program_free (p); - } - p_inited = TRUE; - orc_once_mutex_unlock (); + static const orc_uint8 bc = { + 1, 9, 25, 98, 97, 121, 101, 114, 49, 54, 95, 111, 114, 99, 95, 109, + 101, 114, 103, 101, 95, 98, 103, 95, 114, 103, 98, 97, 11, 8, 8, 11, + 8, 8, 12, 4, 4, 12, 4, 4, 12, 4, 4, 12, 4, 4, 12, 4, + 4, 12, 4, 4, 14, 4, 255, 255, 0, 0, 14, 4, 0, 0, 255, 255, + 14, 2, 255, 255, 0, 0, 20, 4, 20, 4, 20, 4, 21, 1, 76, 32, + 5, 9, 21, 1, 76, 33, 4, 8, 112, 34, 7, 21, 1, 76, 33, 33, + 34, 106, 33, 33, 16, 106, 34, 34, 17, 123, 33, 34, 33, 21, 1, 195, + 0, 32, 33, 21, 1, 195, 1, 6, 18, 2, 0, + }; + p = orc_program_new_from_static_bytecode (bc); + orc_program_set_backup_function (p, _backup_bayer16_orc_merge_bg_rgba); +#else + p = orc_program_new (); + orc_program_set_name (p, "bayer16_orc_merge_bg_rgba"); + orc_program_set_backup_function (p, _backup_bayer16_orc_merge_bg_rgba); 
+ orc_program_add_destination (p, 8, "d1"); + orc_program_add_destination (p, 8, "d2"); + orc_program_add_source (p, 4, "s1"); + orc_program_add_source (p, 4, "s2"); + orc_program_add_source (p, 4, "s3"); + orc_program_add_source (p, 4, "s4"); + orc_program_add_source (p, 4, "s5"); + orc_program_add_source (p, 4, "s6"); + orc_program_add_constant (p, 4, 0x0000ffff, "c1"); + orc_program_add_constant (p, 4, 0xffff0000, "c2"); + orc_program_add_constant (p, 2, 0x0000ffff, "c3"); + orc_program_add_temporary (p, 4, "t1"); + orc_program_add_temporary (p, 4, "t2"); + orc_program_add_temporary (p, 4, "t3"); + + orc_program_append_2 (p, "avguw", 1, ORC_VAR_T1, ORC_VAR_S2, ORC_VAR_S6, + ORC_VAR_D1); + orc_program_append_2 (p, "avguw", 1, ORC_VAR_T2, ORC_VAR_S1, ORC_VAR_S5, + ORC_VAR_D1); + orc_program_append_2 (p, "copyl", 0, ORC_VAR_T3, ORC_VAR_S4, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "avguw", 1, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_T3, + ORC_VAR_D1); + orc_program_append_2 (p, "andl", 0, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_C1, + ORC_VAR_D1); + orc_program_append_2 (p, "andl", 0, ORC_VAR_T3, ORC_VAR_T3, ORC_VAR_C2, + ORC_VAR_D1); + orc_program_append_2 (p, "orl", 0, ORC_VAR_T2, ORC_VAR_T3, ORC_VAR_T2, + ORC_VAR_D1); + orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D1, ORC_VAR_T1, ORC_VAR_T2, + ORC_VAR_D1); + orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D2, ORC_VAR_S3, ORC_VAR_C3, + ORC_VAR_D1); +#endif + + orc_program_compile (p); + c = orc_program_take_code (p); + orc_program_free (p); + orc_once_leave (&once, c); } ex->arraysORC_VAR_A2 = c; ex->program = 0; @@ -6215,73 +6149,69 @@ const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n) { OrcExecutor _ex, *ex = &_ex; - static volatile int p_inited = 0; - static OrcCode *c = 0; - void (*func) (OrcExecutor *); - - if (!p_inited) { - orc_once_mutex_lock (); - if (!p_inited) { - OrcProgram *p; + static OrcOnce once = ORC_ONCE_INIT; + OrcCode *c; + OrcExecutorFunc func = NULL; + + if (!orc_once_enter 
(&once, (void **) &c)) { + OrcProgram *p; #if 1 - static const orc_uint8 bc = { - 1, 9, 25, 98, 97, 121, 101, 114, 49, 54, 95, 111, 114, 99, 95, 109, - 101, 114, 103, 101, 95, 103, 114, 95, 114, 103, 98, 97, 11, 8, 8, 11, - 8, 8, 12, 4, 4, 12, 4, 4, 12, 4, 4, 12, 4, 4, 12, 4, - 4, 12, 4, 4, 14, 4, 0, 0, 255, 255, 14, 4, 255, 255, 0, 0, - 14, 2, 255, 255, 0, 0, 20, 4, 20, 4, 20, 4, 21, 1, 76, 32, - 4, 8, 21, 1, 76, 33, 5, 9, 112, 34, 6, 21, 1, 76, 33, 33, - 34, 106, 33, 33, 16, 106, 34, 34, 17, 123, 33, 34, 33, 21, 1, 195, - 0, 7, 33, 21, 1, 195, 1, 32, 18, 2, 0, - }; - p = orc_program_new_from_static_bytecode (bc); - orc_program_set_backup_function (p, _backup_bayer16_orc_merge_gr_rgba); -#else - p = orc_program_new (); - orc_program_set_name (p, "bayer16_orc_merge_gr_rgba"); - orc_program_set_backup_function (p, _backup_bayer16_orc_merge_gr_rgba); - orc_program_add_destination (p, 8, "d1"); - orc_program_add_destination (p, 8, "d2"); - orc_program_add_source (p, 4, "s1"); - orc_program_add_source (p, 4, "s2"); - orc_program_add_source (p, 4, "s3"); - orc_program_add_source (p, 4, "s4"); - orc_program_add_source (p, 4, "s5"); - orc_program_add_source (p, 4, "s6"); - orc_program_add_constant (p, 4, 0xffff0000, "c1"); - orc_program_add_constant (p, 4, 0x0000ffff, "c2"); - orc_program_add_constant (p, 2, 0x0000ffff, "c3"); - orc_program_add_temporary (p, 4, "t1"); - orc_program_add_temporary (p, 4, "t2"); - orc_program_add_temporary (p, 4, "t3"); - - orc_program_append_2 (p, "avguw", 1, ORC_VAR_T1, ORC_VAR_S1, ORC_VAR_S5, - ORC_VAR_D1); - orc_program_append_2 (p, "avguw", 1, ORC_VAR_T2, ORC_VAR_S2, ORC_VAR_S6, - ORC_VAR_D1); - orc_program_append_2 (p, "copyl", 0, ORC_VAR_T3, ORC_VAR_S3, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "avguw", 1, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_T3, - ORC_VAR_D1); - orc_program_append_2 (p, "andl", 0, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_C1, - ORC_VAR_D1); - orc_program_append_2 (p, "andl", 0, ORC_VAR_T3, ORC_VAR_T3, ORC_VAR_C2, - 
ORC_VAR_D1); - orc_program_append_2 (p, "orl", 0, ORC_VAR_T2, ORC_VAR_T3, ORC_VAR_T2, - ORC_VAR_D1); - orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D1, ORC_VAR_S4, ORC_VAR_T2, - ORC_VAR_D1); - orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D2, ORC_VAR_T1, ORC_VAR_C3, - ORC_VAR_D1); -#endif - - orc_program_compile (p); - c = orc_program_take_code (p); - orc_program_free (p); - } - p_inited = TRUE; - orc_once_mutex_unlock (); + static const orc_uint8 bc = { + 1, 9, 25, 98, 97, 121, 101, 114, 49, 54, 95, 111, 114, 99, 95, 109, + 101, 114, 103, 101, 95, 103, 114, 95, 114, 103, 98, 97, 11, 8, 8, 11, + 8, 8, 12, 4, 4, 12, 4, 4, 12, 4, 4, 12, 4, 4, 12, 4, + 4, 12, 4, 4, 14, 4, 0, 0, 255, 255, 14, 4, 255, 255, 0, 0, + 14, 2, 255, 255, 0, 0, 20, 4, 20, 4, 20, 4, 21, 1, 76, 32, + 4, 8, 21, 1, 76, 33, 5, 9, 112, 34, 6, 21, 1, 76, 33, 33, + 34, 106, 33, 33, 16, 106, 34, 34, 17, 123, 33, 34, 33, 21, 1, 195, + 0, 7, 33, 21, 1, 195, 1, 32, 18, 2, 0, + }; + p = orc_program_new_from_static_bytecode (bc); + orc_program_set_backup_function (p, _backup_bayer16_orc_merge_gr_rgba); +#else + p = orc_program_new (); + orc_program_set_name (p, "bayer16_orc_merge_gr_rgba"); + orc_program_set_backup_function (p, _backup_bayer16_orc_merge_gr_rgba); + orc_program_add_destination (p, 8, "d1"); + orc_program_add_destination (p, 8, "d2"); + orc_program_add_source (p, 4, "s1"); + orc_program_add_source (p, 4, "s2"); + orc_program_add_source (p, 4, "s3"); + orc_program_add_source (p, 4, "s4"); + orc_program_add_source (p, 4, "s5"); + orc_program_add_source (p, 4, "s6"); + orc_program_add_constant (p, 4, 0xffff0000, "c1"); + orc_program_add_constant (p, 4, 0x0000ffff, "c2"); + orc_program_add_constant (p, 2, 0x0000ffff, "c3"); + orc_program_add_temporary (p, 4, "t1"); + orc_program_add_temporary (p, 4, "t2"); + orc_program_add_temporary (p, 4, "t3"); + + orc_program_append_2 (p, "avguw", 1, ORC_VAR_T1, ORC_VAR_S1, ORC_VAR_S5, + ORC_VAR_D1); + orc_program_append_2 (p, "avguw", 1, ORC_VAR_T2, 
ORC_VAR_S2, ORC_VAR_S6, + ORC_VAR_D1); + orc_program_append_2 (p, "copyl", 0, ORC_VAR_T3, ORC_VAR_S3, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "avguw", 1, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_T3, + ORC_VAR_D1); + orc_program_append_2 (p, "andl", 0, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_C1, + ORC_VAR_D1); + orc_program_append_2 (p, "andl", 0, ORC_VAR_T3, ORC_VAR_T3, ORC_VAR_C2, + ORC_VAR_D1); + orc_program_append_2 (p, "orl", 0, ORC_VAR_T2, ORC_VAR_T3, ORC_VAR_T2, + ORC_VAR_D1); + orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D1, ORC_VAR_S4, ORC_VAR_T2, + ORC_VAR_D1); + orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D2, ORC_VAR_T1, ORC_VAR_C3, + ORC_VAR_D1); +#endif + + orc_program_compile (p); + c = orc_program_take_code (p); + orc_program_free (p); + orc_once_leave (&once, c); } ex->arraysORC_VAR_A2 = c; ex->program = 0; @@ -6576,73 +6506,69 @@ const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n) { OrcExecutor _ex, *ex = &_ex; - static volatile int p_inited = 0; - static OrcCode *c = 0; - void (*func) (OrcExecutor *); - - if (!p_inited) { - orc_once_mutex_lock (); - if (!p_inited) { - OrcProgram *p; + static OrcOnce once = ORC_ONCE_INIT; + OrcCode *c; + OrcExecutorFunc func = NULL; + + if (!orc_once_enter (&once, (void **) &c)) { + OrcProgram *p; #if 1 - static const orc_uint8 bc = { - 1, 9, 25, 98, 97, 121, 101, 114, 49, 54, 95, 111, 114, 99, 95, 109, - 101, 114, 103, 101, 95, 98, 103, 95, 97, 114, 103, 98, 11, 8, 8, 11, - 8, 8, 12, 4, 4, 12, 4, 4, 12, 4, 4, 12, 4, 4, 12, 4, - 4, 12, 4, 4, 14, 4, 255, 255, 0, 0, 14, 4, 0, 0, 255, 255, - 14, 2, 255, 255, 0, 0, 20, 4, 20, 4, 20, 4, 21, 1, 76, 32, - 5, 9, 21, 1, 76, 33, 4, 8, 112, 34, 7, 21, 1, 76, 33, 33, - 34, 106, 33, 33, 16, 106, 34, 34, 17, 123, 33, 34, 33, 21, 1, 195, - 0, 18, 32, 21, 1, 195, 1, 33, 6, 2, 0, - }; - p = orc_program_new_from_static_bytecode (bc); - orc_program_set_backup_function (p, _backup_bayer16_orc_merge_bg_argb); -#else - p = orc_program_new (); - orc_program_set_name 
(p, "bayer16_orc_merge_bg_argb"); - orc_program_set_backup_function (p, _backup_bayer16_orc_merge_bg_argb); - orc_program_add_destination (p, 8, "d1"); - orc_program_add_destination (p, 8, "d2"); - orc_program_add_source (p, 4, "s1"); - orc_program_add_source (p, 4, "s2"); - orc_program_add_source (p, 4, "s3"); - orc_program_add_source (p, 4, "s4"); - orc_program_add_source (p, 4, "s5"); - orc_program_add_source (p, 4, "s6"); - orc_program_add_constant (p, 4, 0x0000ffff, "c1"); - orc_program_add_constant (p, 4, 0xffff0000, "c2"); - orc_program_add_constant (p, 2, 0x0000ffff, "c3"); - orc_program_add_temporary (p, 4, "t1"); - orc_program_add_temporary (p, 4, "t2"); - orc_program_add_temporary (p, 4, "t3"); - - orc_program_append_2 (p, "avguw", 1, ORC_VAR_T1, ORC_VAR_S2, ORC_VAR_S6, - ORC_VAR_D1); - orc_program_append_2 (p, "avguw", 1, ORC_VAR_T2, ORC_VAR_S1, ORC_VAR_S5, - ORC_VAR_D1); - orc_program_append_2 (p, "copyl", 0, ORC_VAR_T3, ORC_VAR_S4, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "avguw", 1, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_T3, - ORC_VAR_D1); - orc_program_append_2 (p, "andl", 0, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_C1, - ORC_VAR_D1); - orc_program_append_2 (p, "andl", 0, ORC_VAR_T3, ORC_VAR_T3, ORC_VAR_C2, - ORC_VAR_D1); - orc_program_append_2 (p, "orl", 0, ORC_VAR_T2, ORC_VAR_T3, ORC_VAR_T2, - ORC_VAR_D1); - orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D1, ORC_VAR_C3, ORC_VAR_T1, - ORC_VAR_D1); - orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D2, ORC_VAR_T2, ORC_VAR_S3, - ORC_VAR_D1); -#endif - - orc_program_compile (p); - c = orc_program_take_code (p); - orc_program_free (p); - } - p_inited = TRUE; - orc_once_mutex_unlock (); + static const orc_uint8 bc = { + 1, 9, 25, 98, 97, 121, 101, 114, 49, 54, 95, 111, 114, 99, 95, 109, + 101, 114, 103, 101, 95, 98, 103, 95, 97, 114, 103, 98, 11, 8, 8, 11, + 8, 8, 12, 4, 4, 12, 4, 4, 12, 4, 4, 12, 4, 4, 12, 4, + 4, 12, 4, 4, 14, 4, 255, 255, 0, 0, 14, 4, 0, 0, 255, 255, + 14, 2, 255, 255, 0, 0, 20, 4, 20, 4, 
20, 4, 21, 1, 76, 32, + 5, 9, 21, 1, 76, 33, 4, 8, 112, 34, 7, 21, 1, 76, 33, 33, + 34, 106, 33, 33, 16, 106, 34, 34, 17, 123, 33, 34, 33, 21, 1, 195, + 0, 18, 32, 21, 1, 195, 1, 33, 6, 2, 0, + }; + p = orc_program_new_from_static_bytecode (bc); + orc_program_set_backup_function (p, _backup_bayer16_orc_merge_bg_argb); +#else + p = orc_program_new (); + orc_program_set_name (p, "bayer16_orc_merge_bg_argb"); + orc_program_set_backup_function (p, _backup_bayer16_orc_merge_bg_argb); + orc_program_add_destination (p, 8, "d1"); + orc_program_add_destination (p, 8, "d2"); + orc_program_add_source (p, 4, "s1"); + orc_program_add_source (p, 4, "s2"); + orc_program_add_source (p, 4, "s3"); + orc_program_add_source (p, 4, "s4"); + orc_program_add_source (p, 4, "s5"); + orc_program_add_source (p, 4, "s6"); + orc_program_add_constant (p, 4, 0x0000ffff, "c1"); + orc_program_add_constant (p, 4, 0xffff0000, "c2"); + orc_program_add_constant (p, 2, 0x0000ffff, "c3"); + orc_program_add_temporary (p, 4, "t1"); + orc_program_add_temporary (p, 4, "t2"); + orc_program_add_temporary (p, 4, "t3"); + + orc_program_append_2 (p, "avguw", 1, ORC_VAR_T1, ORC_VAR_S2, ORC_VAR_S6, + ORC_VAR_D1); + orc_program_append_2 (p, "avguw", 1, ORC_VAR_T2, ORC_VAR_S1, ORC_VAR_S5, + ORC_VAR_D1); + orc_program_append_2 (p, "copyl", 0, ORC_VAR_T3, ORC_VAR_S4, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "avguw", 1, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_T3, + ORC_VAR_D1); + orc_program_append_2 (p, "andl", 0, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_C1, + ORC_VAR_D1); + orc_program_append_2 (p, "andl", 0, ORC_VAR_T3, ORC_VAR_T3, ORC_VAR_C2, + ORC_VAR_D1); + orc_program_append_2 (p, "orl", 0, ORC_VAR_T2, ORC_VAR_T3, ORC_VAR_T2, + ORC_VAR_D1); + orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D1, ORC_VAR_C3, ORC_VAR_T1, + ORC_VAR_D1); + orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D2, ORC_VAR_T2, ORC_VAR_S3, + ORC_VAR_D1); +#endif + + orc_program_compile (p); + c = orc_program_take_code (p); + orc_program_free (p); + 
orc_once_leave (&once, c); } ex->arraysORC_VAR_A2 = c; ex->program = 0; @@ -6937,73 +6863,69 @@ const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n) { OrcExecutor _ex, *ex = &_ex; - static volatile int p_inited = 0; - static OrcCode *c = 0; - void (*func) (OrcExecutor *); - - if (!p_inited) { - orc_once_mutex_lock (); - if (!p_inited) { - OrcProgram *p; + static OrcOnce once = ORC_ONCE_INIT; + OrcCode *c; + OrcExecutorFunc func = NULL; + + if (!orc_once_enter (&once, (void **) &c)) { + OrcProgram *p; #if 1 - static const orc_uint8 bc = { - 1, 9, 25, 98, 97, 121, 101, 114, 49, 54, 95, 111, 114, 99, 95, 109, - 101, 114, 103, 101, 95, 103, 114, 95, 97, 114, 103, 98, 11, 8, 8, 11, - 8, 8, 12, 4, 4, 12, 4, 4, 12, 4, 4, 12, 4, 4, 12, 4, - 4, 12, 4, 4, 14, 4, 0, 0, 255, 255, 14, 4, 255, 255, 0, 0, - 14, 2, 255, 255, 0, 0, 20, 4, 20, 4, 20, 4, 21, 1, 76, 32, - 4, 8, 21, 1, 76, 33, 5, 9, 112, 34, 6, 21, 1, 76, 33, 33, - 34, 106, 33, 33, 16, 106, 34, 34, 17, 123, 33, 34, 33, 21, 1, 195, - 0, 18, 7, 21, 1, 195, 1, 33, 32, 2, 0, - }; - p = orc_program_new_from_static_bytecode (bc); - orc_program_set_backup_function (p, _backup_bayer16_orc_merge_gr_argb); -#else - p = orc_program_new (); - orc_program_set_name (p, "bayer16_orc_merge_gr_argb"); - orc_program_set_backup_function (p, _backup_bayer16_orc_merge_gr_argb); - orc_program_add_destination (p, 8, "d1"); - orc_program_add_destination (p, 8, "d2"); - orc_program_add_source (p, 4, "s1"); - orc_program_add_source (p, 4, "s2"); - orc_program_add_source (p, 4, "s3"); - orc_program_add_source (p, 4, "s4"); - orc_program_add_source (p, 4, "s5"); - orc_program_add_source (p, 4, "s6"); - orc_program_add_constant (p, 4, 0xffff0000, "c1"); - orc_program_add_constant (p, 4, 0x0000ffff, "c2"); - orc_program_add_constant (p, 2, 0x0000ffff, "c3"); - orc_program_add_temporary (p, 4, "t1"); - orc_program_add_temporary (p, 4, "t2"); - orc_program_add_temporary (p, 4, "t3"); - - orc_program_append_2 (p, "avguw", 1, 
ORC_VAR_T1, ORC_VAR_S1, ORC_VAR_S5, - ORC_VAR_D1); - orc_program_append_2 (p, "avguw", 1, ORC_VAR_T2, ORC_VAR_S2, ORC_VAR_S6, - ORC_VAR_D1); - orc_program_append_2 (p, "copyl", 0, ORC_VAR_T3, ORC_VAR_S3, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "avguw", 1, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_T3, - ORC_VAR_D1); - orc_program_append_2 (p, "andl", 0, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_C1, - ORC_VAR_D1); - orc_program_append_2 (p, "andl", 0, ORC_VAR_T3, ORC_VAR_T3, ORC_VAR_C2, - ORC_VAR_D1); - orc_program_append_2 (p, "orl", 0, ORC_VAR_T2, ORC_VAR_T3, ORC_VAR_T2, - ORC_VAR_D1); - orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D1, ORC_VAR_C3, ORC_VAR_S4, - ORC_VAR_D1); - orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D2, ORC_VAR_T2, ORC_VAR_T1, - ORC_VAR_D1); -#endif - - orc_program_compile (p); - c = orc_program_take_code (p); - orc_program_free (p); - } - p_inited = TRUE; - orc_once_mutex_unlock (); + static const orc_uint8 bc = { + 1, 9, 25, 98, 97, 121, 101, 114, 49, 54, 95, 111, 114, 99, 95, 109, + 101, 114, 103, 101, 95, 103, 114, 95, 97, 114, 103, 98, 11, 8, 8, 11, + 8, 8, 12, 4, 4, 12, 4, 4, 12, 4, 4, 12, 4, 4, 12, 4, + 4, 12, 4, 4, 14, 4, 0, 0, 255, 255, 14, 4, 255, 255, 0, 0, + 14, 2, 255, 255, 0, 0, 20, 4, 20, 4, 20, 4, 21, 1, 76, 32, + 4, 8, 21, 1, 76, 33, 5, 9, 112, 34, 6, 21, 1, 76, 33, 33, + 34, 106, 33, 33, 16, 106, 34, 34, 17, 123, 33, 34, 33, 21, 1, 195, + 0, 18, 7, 21, 1, 195, 1, 33, 32, 2, 0, + }; + p = orc_program_new_from_static_bytecode (bc); + orc_program_set_backup_function (p, _backup_bayer16_orc_merge_gr_argb); +#else + p = orc_program_new (); + orc_program_set_name (p, "bayer16_orc_merge_gr_argb"); + orc_program_set_backup_function (p, _backup_bayer16_orc_merge_gr_argb); + orc_program_add_destination (p, 8, "d1"); + orc_program_add_destination (p, 8, "d2"); + orc_program_add_source (p, 4, "s1"); + orc_program_add_source (p, 4, "s2"); + orc_program_add_source (p, 4, "s3"); + orc_program_add_source (p, 4, "s4"); + orc_program_add_source 
(p, 4, "s5"); + orc_program_add_source (p, 4, "s6"); + orc_program_add_constant (p, 4, 0xffff0000, "c1"); + orc_program_add_constant (p, 4, 0x0000ffff, "c2"); + orc_program_add_constant (p, 2, 0x0000ffff, "c3"); + orc_program_add_temporary (p, 4, "t1"); + orc_program_add_temporary (p, 4, "t2"); + orc_program_add_temporary (p, 4, "t3"); + + orc_program_append_2 (p, "avguw", 1, ORC_VAR_T1, ORC_VAR_S1, ORC_VAR_S5, + ORC_VAR_D1); + orc_program_append_2 (p, "avguw", 1, ORC_VAR_T2, ORC_VAR_S2, ORC_VAR_S6, + ORC_VAR_D1); + orc_program_append_2 (p, "copyl", 0, ORC_VAR_T3, ORC_VAR_S3, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "avguw", 1, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_T3, + ORC_VAR_D1); + orc_program_append_2 (p, "andl", 0, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_C1, + ORC_VAR_D1); + orc_program_append_2 (p, "andl", 0, ORC_VAR_T3, ORC_VAR_T3, ORC_VAR_C2, + ORC_VAR_D1); + orc_program_append_2 (p, "orl", 0, ORC_VAR_T2, ORC_VAR_T3, ORC_VAR_T2, + ORC_VAR_D1); + orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D1, ORC_VAR_C3, ORC_VAR_S4, + ORC_VAR_D1); + orc_program_append_2 (p, "mergewl", 1, ORC_VAR_D2, ORC_VAR_T2, ORC_VAR_T1, + ORC_VAR_D1); +#endif + + orc_program_compile (p); + c = orc_program_take_code (p); + orc_program_free (p); + orc_once_leave (&once, c); } ex->arraysORC_VAR_A2 = c; ex->program = 0; @@ -7072,8 +6994,8 @@ var40.x20 = ((orc_uint32) var39.x20) >> p1; var40.x21 = ((orc_uint32) var39.x21) >> p1; /* 4: convuuslw */ - var41.x20 = ORC_CLAMP_UW ((orc_uint32) var40.x20); - var41.x21 = ORC_CLAMP_UW ((orc_uint32) var40.x21); + var41.x20 = ORC_MIN ((orc_uint32) var40.x20, ORC_UW_MAX); + var41.x21 = ORC_MIN ((orc_uint32) var40.x21, ORC_UW_MAX); /* 5: loadl */ var37 = ptr5i; /* 6: muluwl */ @@ -7087,8 +7009,8 @@ var43.x20 = ((orc_uint32) var42.x20) >> p1; var43.x21 = ((orc_uint32) var42.x21) >> p1; /* 8: convuuslw */ - var44.x20 = ORC_CLAMP_UW ((orc_uint32) var43.x20); - var44.x21 = ORC_CLAMP_UW ((orc_uint32) var43.x21); + var44.x20 = ORC_MIN ((orc_uint32) 
var43.x20, ORC_UW_MAX); + var44.x21 = ORC_MIN ((orc_uint32) var43.x21, ORC_UW_MAX); /* 9: mergelq */ { orc_union64 _dest; @@ -7148,8 +7070,8 @@ var40.x20 = ((orc_uint32) var39.x20) >> ex->params24; var40.x21 = ((orc_uint32) var39.x21) >> ex->params24; /* 4: convuuslw */ - var41.x20 = ORC_CLAMP_UW ((orc_uint32) var40.x20); - var41.x21 = ORC_CLAMP_UW ((orc_uint32) var40.x21); + var41.x20 = ORC_MIN ((orc_uint32) var40.x20, ORC_UW_MAX); + var41.x21 = ORC_MIN ((orc_uint32) var40.x21, ORC_UW_MAX); /* 5: loadl */ var37 = ptr5i; /* 6: muluwl */ @@ -7163,8 +7085,8 @@ var43.x20 = ((orc_uint32) var42.x20) >> ex->params24; var43.x21 = ((orc_uint32) var42.x21) >> ex->params24; /* 8: convuuslw */ - var44.x20 = ORC_CLAMP_UW ((orc_uint32) var43.x20); - var44.x21 = ORC_CLAMP_UW ((orc_uint32) var43.x21); + var44.x20 = ORC_MIN ((orc_uint32) var43.x20, ORC_UW_MAX); + var44.x21 = ORC_MIN ((orc_uint32) var43.x21, ORC_UW_MAX); /* 9: mergelq */ { orc_union64 _dest; @@ -7184,61 +7106,57 @@ int n) { OrcExecutor _ex, *ex = &_ex; - static volatile int p_inited = 0; - static OrcCode *c = 0; - void (*func) (OrcExecutor *); - - if (!p_inited) { - orc_once_mutex_lock (); - if (!p_inited) { - OrcProgram *p; + static OrcOnce once = ORC_ONCE_INIT; + OrcCode *c; + OrcExecutorFunc func = NULL; + + if (!orc_once_enter (&once, (void **) &c)) { + OrcProgram *p; #if 1 - static const orc_uint8 bc = { - 1, 9, 23, 98, 97, 121, 101, 114, 49, 54, 116, 111, 49, 54, 95, 111, - 114, 99, 95, 114, 101, 111, 114, 100, 101, 114, 11, 8, 8, 12, 4, 4, - 12, 4, 4, 14, 2, 255, 255, 0, 0, 16, 4, 20, 4, 20, 4, 20, - 8, 21, 1, 177, 34, 4, 16, 21, 1, 126, 34, 34, 24, 21, 1, 168, - 32, 34, 21, 1, 177, 34, 5, 16, 21, 1, 126, 34, 34, 24, 21, 1, - 168, 33, 34, 194, 0, 32, 33, 2, 0, - }; - p = orc_program_new_from_static_bytecode (bc); - orc_program_set_backup_function (p, _backup_bayer16to16_orc_reorder); -#else - p = orc_program_new (); - orc_program_set_name (p, "bayer16to16_orc_reorder"); - orc_program_set_backup_function (p, 
_backup_bayer16to16_orc_reorder); - orc_program_add_destination (p, 8, "d1"); - orc_program_add_source (p, 4, "s1"); - orc_program_add_source (p, 4, "s2"); - orc_program_add_constant (p, 2, 0x0000ffff, "c1"); - orc_program_add_parameter (p, 4, "p1"); - orc_program_add_temporary (p, 4, "t1"); - orc_program_add_temporary (p, 4, "t2"); - orc_program_add_temporary (p, 8, "t3"); - - orc_program_append_2 (p, "muluwl", 1, ORC_VAR_T3, ORC_VAR_S1, ORC_VAR_C1, - ORC_VAR_D1); - orc_program_append_2 (p, "shrul", 1, ORC_VAR_T3, ORC_VAR_T3, ORC_VAR_P1, - ORC_VAR_D1); - orc_program_append_2 (p, "convuuslw", 1, ORC_VAR_T1, ORC_VAR_T3, - ORC_VAR_D1, ORC_VAR_D1); - orc_program_append_2 (p, "muluwl", 1, ORC_VAR_T3, ORC_VAR_S2, ORC_VAR_C1, - ORC_VAR_D1); - orc_program_append_2 (p, "shrul", 1, ORC_VAR_T3, ORC_VAR_T3, ORC_VAR_P1, - ORC_VAR_D1); - orc_program_append_2 (p, "convuuslw", 1, ORC_VAR_T2, ORC_VAR_T3, - ORC_VAR_D1, ORC_VAR_D1); - orc_program_append_2 (p, "mergelq", 0, ORC_VAR_D1, ORC_VAR_T1, ORC_VAR_T2, - ORC_VAR_D1); -#endif - - orc_program_compile (p); - c = orc_program_take_code (p); - orc_program_free (p); - } - p_inited = TRUE; - orc_once_mutex_unlock (); + static const orc_uint8 bc = { + 1, 9, 23, 98, 97, 121, 101, 114, 49, 54, 116, 111, 49, 54, 95, 111, + 114, 99, 95, 114, 101, 111, 114, 100, 101, 114, 11, 8, 8, 12, 4, 4, + 12, 4, 4, 14, 2, 255, 255, 0, 0, 16, 4, 20, 4, 20, 4, 20, + 8, 21, 1, 177, 34, 4, 16, 21, 1, 126, 34, 34, 24, 21, 1, 168, + 32, 34, 21, 1, 177, 34, 5, 16, 21, 1, 126, 34, 34, 24, 21, 1, + 168, 33, 34, 194, 0, 32, 33, 2, 0, + }; + p = orc_program_new_from_static_bytecode (bc); + orc_program_set_backup_function (p, _backup_bayer16to16_orc_reorder); +#else + p = orc_program_new (); + orc_program_set_name (p, "bayer16to16_orc_reorder"); + orc_program_set_backup_function (p, _backup_bayer16to16_orc_reorder); + orc_program_add_destination (p, 8, "d1"); + orc_program_add_source (p, 4, "s1"); + orc_program_add_source (p, 4, "s2"); + orc_program_add_constant 
(p, 2, 0x0000ffff, "c1"); + orc_program_add_parameter (p, 4, "p1"); + orc_program_add_temporary (p, 4, "t1"); + orc_program_add_temporary (p, 4, "t2"); + orc_program_add_temporary (p, 8, "t3"); + + orc_program_append_2 (p, "muluwl", 1, ORC_VAR_T3, ORC_VAR_S1, ORC_VAR_C1, + ORC_VAR_D1); + orc_program_append_2 (p, "shrul", 1, ORC_VAR_T3, ORC_VAR_T3, ORC_VAR_P1, + ORC_VAR_D1); + orc_program_append_2 (p, "convuuslw", 1, ORC_VAR_T1, ORC_VAR_T3, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "muluwl", 1, ORC_VAR_T3, ORC_VAR_S2, ORC_VAR_C1, + ORC_VAR_D1); + orc_program_append_2 (p, "shrul", 1, ORC_VAR_T3, ORC_VAR_T3, ORC_VAR_P1, + ORC_VAR_D1); + orc_program_append_2 (p, "convuuslw", 1, ORC_VAR_T2, ORC_VAR_T3, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "mergelq", 0, ORC_VAR_D1, ORC_VAR_T1, ORC_VAR_T2, + ORC_VAR_D1); +#endif + + orc_program_compile (p); + c = orc_program_take_code (p); + orc_program_free (p); + orc_once_leave (&once, c); } ex->arraysORC_VAR_A2 = c; ex->program = 0; @@ -7367,55 +7285,51 @@ int n) { OrcExecutor _ex, *ex = &_ex; - static volatile int p_inited = 0; - static OrcCode *c = 0; - void (*func) (OrcExecutor *); - - if (!p_inited) { - orc_once_mutex_lock (); - if (!p_inited) { - OrcProgram *p; + static OrcOnce once = ORC_ONCE_INIT; + OrcCode *c; + OrcExecutorFunc func = NULL; + + if (!orc_once_enter (&once, (void **) &c)) { + OrcProgram *p; #if 1 - static const orc_uint8 bc = { - 1, 9, 22, 98, 97, 121, 101, 114, 49, 54, 116, 111, 56, 95, 111, 114, - 99, 95, 114, 101, 111, 114, 100, 101, 114, 11, 4, 4, 12, 4, 4, 12, - 4, 4, 16, 4, 20, 2, 20, 2, 20, 4, 21, 1, 95, 34, 4, 24, - 21, 1, 162, 32, 34, 21, 1, 95, 34, 5, 24, 21, 1, 162, 33, 34, - 195, 0, 32, 33, 2, 0, - }; - p = orc_program_new_from_static_bytecode (bc); - orc_program_set_backup_function (p, _backup_bayer16to8_orc_reorder); -#else - p = orc_program_new (); - orc_program_set_name (p, "bayer16to8_orc_reorder"); - orc_program_set_backup_function (p, 
_backup_bayer16to8_orc_reorder); - orc_program_add_destination (p, 4, "d1"); - orc_program_add_source (p, 4, "s1"); - orc_program_add_source (p, 4, "s2"); - orc_program_add_parameter (p, 4, "p1"); - orc_program_add_temporary (p, 2, "t1"); - orc_program_add_temporary (p, 2, "t2"); - orc_program_add_temporary (p, 4, "t3"); - - orc_program_append_2 (p, "shruw", 1, ORC_VAR_T3, ORC_VAR_S1, ORC_VAR_P1, - ORC_VAR_D1); - orc_program_append_2 (p, "convuuswb", 1, ORC_VAR_T1, ORC_VAR_T3, - ORC_VAR_D1, ORC_VAR_D1); - orc_program_append_2 (p, "shruw", 1, ORC_VAR_T3, ORC_VAR_S2, ORC_VAR_P1, - ORC_VAR_D1); - orc_program_append_2 (p, "convuuswb", 1, ORC_VAR_T2, ORC_VAR_T3, - ORC_VAR_D1, ORC_VAR_D1); - orc_program_append_2 (p, "mergewl", 0, ORC_VAR_D1, ORC_VAR_T1, ORC_VAR_T2, - ORC_VAR_D1); -#endif - - orc_program_compile (p); - c = orc_program_take_code (p); - orc_program_free (p); - } - p_inited = TRUE; - orc_once_mutex_unlock (); + static const orc_uint8 bc = { + 1, 9, 22, 98, 97, 121, 101, 114, 49, 54, 116, 111, 56, 95, 111, 114, + 99, 95, 114, 101, 111, 114, 100, 101, 114, 11, 4, 4, 12, 4, 4, 12, + 4, 4, 16, 4, 20, 2, 20, 2, 20, 4, 21, 1, 95, 34, 4, 24, + 21, 1, 162, 32, 34, 21, 1, 95, 34, 5, 24, 21, 1, 162, 33, 34, + 195, 0, 32, 33, 2, 0, + }; + p = orc_program_new_from_static_bytecode (bc); + orc_program_set_backup_function (p, _backup_bayer16to8_orc_reorder); +#else + p = orc_program_new (); + orc_program_set_name (p, "bayer16to8_orc_reorder"); + orc_program_set_backup_function (p, _backup_bayer16to8_orc_reorder); + orc_program_add_destination (p, 4, "d1"); + orc_program_add_source (p, 4, "s1"); + orc_program_add_source (p, 4, "s2"); + orc_program_add_parameter (p, 4, "p1"); + orc_program_add_temporary (p, 2, "t1"); + orc_program_add_temporary (p, 2, "t2"); + orc_program_add_temporary (p, 4, "t3"); + + orc_program_append_2 (p, "shruw", 1, ORC_VAR_T3, ORC_VAR_S1, ORC_VAR_P1, + ORC_VAR_D1); + orc_program_append_2 (p, "convuuswb", 1, ORC_VAR_T1, ORC_VAR_T3, ORC_VAR_D1, + 
ORC_VAR_D1); + orc_program_append_2 (p, "shruw", 1, ORC_VAR_T3, ORC_VAR_S2, ORC_VAR_P1, + ORC_VAR_D1); + orc_program_append_2 (p, "convuuswb", 1, ORC_VAR_T2, ORC_VAR_T3, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "mergewl", 0, ORC_VAR_D1, ORC_VAR_T1, ORC_VAR_T2, + ORC_VAR_D1); +#endif + + orc_program_compile (p); + c = orc_program_take_code (p); + orc_program_free (p); + orc_once_leave (&once, c); } ex->arraysORC_VAR_A2 = c; ex->program = 0; @@ -7496,40 +7410,36 @@ const guint32 * ORC_RESTRICT s1, int n) { OrcExecutor _ex, *ex = &_ex; - static volatile int p_inited = 0; - static OrcCode *c = 0; - void (*func) (OrcExecutor *); - - if (!p_inited) { - orc_once_mutex_lock (); - if (!p_inited) { - OrcProgram *p; + static OrcOnce once = ORC_ONCE_INIT; + OrcCode *c; + OrcExecutorFunc func = NULL; + + if (!orc_once_enter (&once, (void **) &c)) { + OrcProgram *p; #if 1 - static const orc_uint8 bc = { - 1, 9, 22, 98, 97, 121, 101, 114, 56, 116, 111, 49, 54, 95, 111, 114, - 99, 95, 114, 101, 111, 114, 100, 101, 114, 11, 8, 8, 12, 4, 4, 21, - 2, 151, 0, 4, 2, 0, - }; - p = orc_program_new_from_static_bytecode (bc); - orc_program_set_backup_function (p, _backup_bayer8to16_orc_reorder); -#else - p = orc_program_new (); - orc_program_set_name (p, "bayer8to16_orc_reorder"); - orc_program_set_backup_function (p, _backup_bayer8to16_orc_reorder); - orc_program_add_destination (p, 8, "d1"); - orc_program_add_source (p, 4, "s1"); - - orc_program_append_2 (p, "splatbw", 2, ORC_VAR_D1, ORC_VAR_S1, ORC_VAR_D1, - ORC_VAR_D1); -#endif - - orc_program_compile (p); - c = orc_program_take_code (p); - orc_program_free (p); - } - p_inited = TRUE; - orc_once_mutex_unlock (); + static const orc_uint8 bc = { + 1, 9, 22, 98, 97, 121, 101, 114, 56, 116, 111, 49, 54, 95, 111, 114, + 99, 95, 114, 101, 111, 114, 100, 101, 114, 11, 8, 8, 12, 4, 4, 21, + 2, 151, 0, 4, 2, 0, + }; + p = orc_program_new_from_static_bytecode (bc); + orc_program_set_backup_function (p, 
_backup_bayer8to16_orc_reorder); +#else + p = orc_program_new (); + orc_program_set_name (p, "bayer8to16_orc_reorder"); + orc_program_set_backup_function (p, _backup_bayer8to16_orc_reorder); + orc_program_add_destination (p, 8, "d1"); + orc_program_add_source (p, 4, "s1"); + + orc_program_append_2 (p, "splatbw", 2, ORC_VAR_D1, ORC_VAR_S1, ORC_VAR_D1, + ORC_VAR_D1); +#endif + + orc_program_compile (p); + c = orc_program_take_code (p); + orc_program_free (p); + orc_once_leave (&once, c); } ex->arrays[ORC_VAR_A2] = c; ex->program = 0;
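The hunks above replace the hand-rolled `p_inited` double-checked lock with Orc's `orc_once_enter ()`/`orc_once_leave ()` pair: the first caller that gets past `orc_once_enter ()` compiles the program and publishes the resulting `OrcCode` via `orc_once_leave ()`, while later callers receive the cached pointer directly. A rough standalone sketch of that enter/leave contract, built on pthreads rather than Orc's real implementation (`MyOnce`, `my_once_enter`, and `my_once_leave` are hypothetical names, not Orc API):

```c
#include <pthread.h>
#include <stddef.h>

/* Hypothetical sketch of the once-init contract used above.  The real
 * OrcOnce is not reproduced here; this just shows the shape of the
 * pattern with a plain pthread mutex. */
typedef struct
{
  pthread_mutex_t lock;
  int inited;
  void *value;
} MyOnce;

#define MY_ONCE_INIT { PTHREAD_MUTEX_INITIALIZER, 0, NULL }

/* Returns nonzero (and fills *value) when already initialized.
 * Otherwise returns zero with the lock held; the caller must then
 * build the value and hand it to my_once_leave (). */
static int
my_once_enter (MyOnce * once, void **value)
{
  pthread_mutex_lock (&once->lock);
  if (once->inited) {
    *value = once->value;
    pthread_mutex_unlock (&once->lock);
    return 1;
  }
  return 0;
}

static void
my_once_leave (MyOnce * once, void *value)
{
  once->value = value;
  once->inited = 1;
  pthread_mutex_unlock (&once->lock);
}
```

Compared with the old `static volatile int p_inited` dance, the slow path and the publication of the compiled code are paired in one primitive, so there is no window where `p_inited` is set before `c` is visible.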
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/bayer/gstbayerorc-dist.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/bayer/gstbayerorc-dist.h
Changed
@@ -6,8 +6,7 @@ #include <glib.h> #ifdef __cplusplus -extern "C" -{ +extern "C" { #endif @@ -16,70 +15,55 @@ #define _ORC_INTEGER_TYPEDEFS_ #if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L #include <stdint.h> - typedef int8_t orc_int8; - typedef int16_t orc_int16; - typedef int32_t orc_int32; - typedef int64_t orc_int64; - typedef uint8_t orc_uint8; - typedef uint16_t orc_uint16; - typedef uint32_t orc_uint32; - typedef uint64_t orc_uint64; +typedef int8_t orc_int8; +typedef int16_t orc_int16; +typedef int32_t orc_int32; +typedef int64_t orc_int64; +typedef uint8_t orc_uint8; +typedef uint16_t orc_uint16; +typedef uint32_t orc_uint32; +typedef uint64_t orc_uint64; #define ORC_UINT64_C(x) UINT64_C(x) #elif defined(_MSC_VER) - typedef signed __int8 orc_int8; - typedef signed __int16 orc_int16; - typedef signed __int32 orc_int32; - typedef signed __int64 orc_int64; - typedef unsigned __int8 orc_uint8; - typedef unsigned __int16 orc_uint16; - typedef unsigned __int32 orc_uint32; - typedef unsigned __int64 orc_uint64; +typedef signed __int8 orc_int8; +typedef signed __int16 orc_int16; +typedef signed __int32 orc_int32; +typedef signed __int64 orc_int64; +typedef unsigned __int8 orc_uint8; +typedef unsigned __int16 orc_uint16; +typedef unsigned __int32 orc_uint32; +typedef unsigned __int64 orc_uint64; #define ORC_UINT64_C(x) (x##Ui64) #define inline __inline #else #include <limits.h> - typedef signed char orc_int8; - typedef short orc_int16; - typedef int orc_int32; - typedef unsigned char orc_uint8; - typedef unsigned short orc_uint16; - typedef unsigned int orc_uint32; +typedef signed char orc_int8; +typedef short orc_int16; +typedef int orc_int32; +typedef unsigned char orc_uint8; +typedef unsigned short orc_uint16; +typedef unsigned int orc_uint32; #if INT_MAX == LONG_MAX - typedef long long orc_int64; - typedef unsigned long long orc_uint64; +typedef long long orc_int64; +typedef unsigned long long orc_uint64; #define ORC_UINT64_C(x) (x##ULL) #else - 
typedef long orc_int64; - typedef unsigned long orc_uint64; +typedef long orc_int64; +typedef unsigned long orc_uint64; #define ORC_UINT64_C(x) (x##UL) #endif #endif - typedef union - { - orc_int16 i; - orc_int8 x2[2]; - } orc_union16; - typedef union - { - orc_int32 i; - float f; - orc_int16 x2[2]; - orc_int8 x4[4]; - } orc_union32; - typedef union - { - orc_int64 i; - double f; - orc_int32 x2[2]; - float x2f[2]; - orc_int16 x4[4]; - } orc_union64; +typedef union { orc_int16 i; orc_int8 x2[2]; } orc_union16; +typedef union { orc_int32 i; float f; orc_int16 x2[2]; orc_int8 x4[4]; } orc_union32; +typedef union { orc_int64 i; double f; orc_int32 x2[2]; float x2f[2]; orc_int16 x4[4]; orc_int8 x8[8]; } orc_union64; #endif #ifndef ORC_RESTRICT #if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L #define ORC_RESTRICT restrict #elif defined(__GNUC__) && __GNUC__ >= 4 #define ORC_RESTRICT __restrict__ +#elif defined(_MSC_VER) +#define ORC_RESTRICT __restrict #else #define ORC_RESTRICT #endif @@ -97,95 +81,31 @@ #endif #endif - void bayer_orc_horiz_upsample_unaligned (guint8 * ORC_RESTRICT d1, - guint8 * ORC_RESTRICT d2, const guint8 * ORC_RESTRICT s1, int n); - void bayer_orc_horiz_upsample (guint8 * ORC_RESTRICT d1, - guint8 * ORC_RESTRICT d2, const guint8 * ORC_RESTRICT s1, int n); - void bayer_orc_merge_bg_bgra (guint8 * ORC_RESTRICT d1, - const guint8 * ORC_RESTRICT s1, const guint8 * ORC_RESTRICT s2, - const guint8 * ORC_RESTRICT s3, const guint8 * ORC_RESTRICT s4, - const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n); - void bayer_orc_merge_gr_bgra (guint8 * ORC_RESTRICT d1, - const guint8 * ORC_RESTRICT s1, const guint8 * ORC_RESTRICT s2, - const guint8 * ORC_RESTRICT s3, const guint8 * ORC_RESTRICT s4, - const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n); - void bayer_orc_merge_bg_abgr (guint8 * ORC_RESTRICT d1, - const guint8 * ORC_RESTRICT s1, const guint8 * ORC_RESTRICT s2, - const guint8 * ORC_RESTRICT s3, const guint8 * ORC_RESTRICT s4, -
const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n); - void bayer_orc_merge_gr_abgr (guint8 * ORC_RESTRICT d1, - const guint8 * ORC_RESTRICT s1, const guint8 * ORC_RESTRICT s2, - const guint8 * ORC_RESTRICT s3, const guint8 * ORC_RESTRICT s4, - const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n); - void bayer_orc_merge_bg_rgba (guint8 * ORC_RESTRICT d1, - const guint8 * ORC_RESTRICT s1, const guint8 * ORC_RESTRICT s2, - const guint8 * ORC_RESTRICT s3, const guint8 * ORC_RESTRICT s4, - const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n); - void bayer_orc_merge_gr_rgba (guint8 * ORC_RESTRICT d1, - const guint8 * ORC_RESTRICT s1, const guint8 * ORC_RESTRICT s2, - const guint8 * ORC_RESTRICT s3, const guint8 * ORC_RESTRICT s4, - const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n); - void bayer_orc_merge_bg_argb (guint8 * ORC_RESTRICT d1, - const guint8 * ORC_RESTRICT s1, const guint8 * ORC_RESTRICT s2, - const guint8 * ORC_RESTRICT s3, const guint8 * ORC_RESTRICT s4, - const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n); - void bayer_orc_merge_gr_argb (guint8 * ORC_RESTRICT d1, - const guint8 * ORC_RESTRICT s1, const guint8 * ORC_RESTRICT s2, - const guint8 * ORC_RESTRICT s3, const guint8 * ORC_RESTRICT s4, - const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n); - void bayer16_orc_horiz_upsample_le (guint16 * ORC_RESTRICT d1, - guint16 * ORC_RESTRICT d2, const guint16 * ORC_RESTRICT s1, int n); - void bayer16_orc_horiz_upsample_be (guint16 * ORC_RESTRICT d1, - guint16 * ORC_RESTRICT d2, const guint16 * ORC_RESTRICT s1, int n); - void bayer16_orc_merge_bg_bgra (guint16 * ORC_RESTRICT d1, - guint16 * ORC_RESTRICT d2, const guint8 * ORC_RESTRICT s1, - const guint8 * ORC_RESTRICT s2, const guint8 * ORC_RESTRICT s3, - const guint8 * ORC_RESTRICT s4, const guint8 * ORC_RESTRICT s5, - const guint8 * ORC_RESTRICT s6, int n); - void bayer16_orc_merge_gr_bgra (guint16 
* ORC_RESTRICT d1, - guint16 * ORC_RESTRICT d2, const guint8 * ORC_RESTRICT s1, - const guint8 * ORC_RESTRICT s2, const guint8 * ORC_RESTRICT s3, - const guint8 * ORC_RESTRICT s4, const guint8 * ORC_RESTRICT s5, - const guint8 * ORC_RESTRICT s6, int n); - void bayer16_orc_merge_bg_abgr (guint16 * ORC_RESTRICT d1, - guint16 * ORC_RESTRICT d2, const guint8 * ORC_RESTRICT s1, - const guint8 * ORC_RESTRICT s2, const guint8 * ORC_RESTRICT s3, - const guint8 * ORC_RESTRICT s4, const guint8 * ORC_RESTRICT s5, - const guint8 * ORC_RESTRICT s6, int n); - void bayer16_orc_merge_gr_abgr (guint16 * ORC_RESTRICT d1, - guint16 * ORC_RESTRICT d2, const guint8 * ORC_RESTRICT s1, - const guint8 * ORC_RESTRICT s2, const guint8 * ORC_RESTRICT s3, - const guint8 * ORC_RESTRICT s4, const guint8 * ORC_RESTRICT s5, - const guint8 * ORC_RESTRICT s6, int n); - void bayer16_orc_merge_bg_rgba (guint16 * ORC_RESTRICT d1, - guint16 * ORC_RESTRICT d2, const guint8 * ORC_RESTRICT s1, - const guint8 * ORC_RESTRICT s2, const guint8 * ORC_RESTRICT s3, - const guint8 * ORC_RESTRICT s4, const guint8 * ORC_RESTRICT s5, - const guint8 * ORC_RESTRICT s6, int n); - void bayer16_orc_merge_gr_rgba (guint16 * ORC_RESTRICT d1, - guint16 * ORC_RESTRICT d2, const guint8 * ORC_RESTRICT s1, - const guint8 * ORC_RESTRICT s2, const guint8 * ORC_RESTRICT s3, - const guint8 * ORC_RESTRICT s4, const guint8 * ORC_RESTRICT s5, - const guint8 * ORC_RESTRICT s6, int n); - void bayer16_orc_merge_bg_argb (guint16 * ORC_RESTRICT d1, - guint16 * ORC_RESTRICT d2, const guint8 * ORC_RESTRICT s1, - const guint8 * ORC_RESTRICT s2, const guint8 * ORC_RESTRICT s3, - const guint8 * ORC_RESTRICT s4, const guint8 * ORC_RESTRICT s5, - const guint8 * ORC_RESTRICT s6, int n); - void bayer16_orc_merge_gr_argb (guint16 * ORC_RESTRICT d1, - guint16 * ORC_RESTRICT d2, const guint8 * ORC_RESTRICT s1, - const guint8 * ORC_RESTRICT s2, const guint8 * ORC_RESTRICT s3, - const guint8 * ORC_RESTRICT s4, const guint8 * ORC_RESTRICT s5, - const 
guint8 * ORC_RESTRICT s6, int n); - void bayer16to16_orc_reorder (guint8 * ORC_RESTRICT d1, - const guint32 * ORC_RESTRICT s1, const guint32 * ORC_RESTRICT s2, int p1, - int n); - void bayer16to8_orc_reorder (guint8 * ORC_RESTRICT d1, - const guint32 * ORC_RESTRICT s1, const guint32 * ORC_RESTRICT s2, int p1, - int n); - void bayer8to16_orc_reorder (guint8 * ORC_RESTRICT d1, - const guint32 * ORC_RESTRICT s1, int n); +void bayer_orc_horiz_upsample_unaligned (guint8 * ORC_RESTRICT d1, guint8 * ORC_RESTRICT d2, const guint8 * ORC_RESTRICT s1, int n); +void bayer_orc_horiz_upsample (guint8 * ORC_RESTRICT d1, guint8 * ORC_RESTRICT d2, const guint8 * ORC_RESTRICT s1, int n); +void bayer_orc_merge_bg_bgra (guint8 * ORC_RESTRICT d1, const guint8 * ORC_RESTRICT s1, const guint8 * ORC_RESTRICT s2, const guint8 * ORC_RESTRICT s3, const guint8 * ORC_RESTRICT s4, const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n); +void bayer_orc_merge_gr_bgra (guint8 * ORC_RESTRICT d1, const guint8 * ORC_RESTRICT s1, const guint8 * ORC_RESTRICT s2, const guint8 * ORC_RESTRICT s3, const guint8 * ORC_RESTRICT s4, const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n); +void bayer_orc_merge_bg_abgr (guint8 * ORC_RESTRICT d1, const guint8 * ORC_RESTRICT s1, const guint8 * ORC_RESTRICT s2, const guint8 * ORC_RESTRICT s3, const guint8 * ORC_RESTRICT s4, const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n); +void bayer_orc_merge_gr_abgr (guint8 * ORC_RESTRICT d1, const guint8 * ORC_RESTRICT s1, const guint8 * ORC_RESTRICT s2, const guint8 * ORC_RESTRICT s3, const guint8 * ORC_RESTRICT s4, const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n); +void bayer_orc_merge_bg_rgba (guint8 * ORC_RESTRICT d1, const guint8 * ORC_RESTRICT s1, const guint8 * ORC_RESTRICT s2, const guint8 * ORC_RESTRICT s3, const guint8 * ORC_RESTRICT s4, const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n); +void bayer_orc_merge_gr_rgba 
(guint8 * ORC_RESTRICT d1, const guint8 * ORC_RESTRICT s1, const guint8 * ORC_RESTRICT s2, const guint8 * ORC_RESTRICT s3, const guint8 * ORC_RESTRICT s4, const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n); +void bayer_orc_merge_bg_argb (guint8 * ORC_RESTRICT d1, const guint8 * ORC_RESTRICT s1, const guint8 * ORC_RESTRICT s2, const guint8 * ORC_RESTRICT s3, const guint8 * ORC_RESTRICT s4, const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n); +void bayer_orc_merge_gr_argb (guint8 * ORC_RESTRICT d1, const guint8 * ORC_RESTRICT s1, const guint8 * ORC_RESTRICT s2, const guint8 * ORC_RESTRICT s3, const guint8 * ORC_RESTRICT s4, const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n); +void bayer16_orc_horiz_upsample_le (guint16 * ORC_RESTRICT d1, guint16 * ORC_RESTRICT d2, const guint16 * ORC_RESTRICT s1, int n); +void bayer16_orc_horiz_upsample_be (guint16 * ORC_RESTRICT d1, guint16 * ORC_RESTRICT d2, const guint16 * ORC_RESTRICT s1, int n); +void bayer16_orc_merge_bg_bgra (guint16 * ORC_RESTRICT d1, guint16 * ORC_RESTRICT d2, const guint8 * ORC_RESTRICT s1, const guint8 * ORC_RESTRICT s2, const guint8 * ORC_RESTRICT s3, const guint8 * ORC_RESTRICT s4, const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n); +void bayer16_orc_merge_gr_bgra (guint16 * ORC_RESTRICT d1, guint16 * ORC_RESTRICT d2, const guint8 * ORC_RESTRICT s1, const guint8 * ORC_RESTRICT s2, const guint8 * ORC_RESTRICT s3, const guint8 * ORC_RESTRICT s4, const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n); +void bayer16_orc_merge_bg_abgr (guint16 * ORC_RESTRICT d1, guint16 * ORC_RESTRICT d2, const guint8 * ORC_RESTRICT s1, const guint8 * ORC_RESTRICT s2, const guint8 * ORC_RESTRICT s3, const guint8 * ORC_RESTRICT s4, const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n); +void bayer16_orc_merge_gr_abgr (guint16 * ORC_RESTRICT d1, guint16 * ORC_RESTRICT d2, const guint8 * ORC_RESTRICT s1, const guint8 * 
ORC_RESTRICT s2, const guint8 * ORC_RESTRICT s3, const guint8 * ORC_RESTRICT s4, const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n); +void bayer16_orc_merge_bg_rgba (guint16 * ORC_RESTRICT d1, guint16 * ORC_RESTRICT d2, const guint8 * ORC_RESTRICT s1, const guint8 * ORC_RESTRICT s2, const guint8 * ORC_RESTRICT s3, const guint8 * ORC_RESTRICT s4, const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n); +void bayer16_orc_merge_gr_rgba (guint16 * ORC_RESTRICT d1, guint16 * ORC_RESTRICT d2, const guint8 * ORC_RESTRICT s1, const guint8 * ORC_RESTRICT s2, const guint8 * ORC_RESTRICT s3, const guint8 * ORC_RESTRICT s4, const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n); +void bayer16_orc_merge_bg_argb (guint16 * ORC_RESTRICT d1, guint16 * ORC_RESTRICT d2, const guint8 * ORC_RESTRICT s1, const guint8 * ORC_RESTRICT s2, const guint8 * ORC_RESTRICT s3, const guint8 * ORC_RESTRICT s4, const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n); +void bayer16_orc_merge_gr_argb (guint16 * ORC_RESTRICT d1, guint16 * ORC_RESTRICT d2, const guint8 * ORC_RESTRICT s1, const guint8 * ORC_RESTRICT s2, const guint8 * ORC_RESTRICT s3, const guint8 * ORC_RESTRICT s4, const guint8 * ORC_RESTRICT s5, const guint8 * ORC_RESTRICT s6, int n); +void bayer16to16_orc_reorder (guint8 * ORC_RESTRICT d1, const guint32 * ORC_RESTRICT s1, const guint32 * ORC_RESTRICT s2, int p1, int n); +void bayer16to8_orc_reorder (guint8 * ORC_RESTRICT d1, const guint32 * ORC_RESTRICT s1, const guint32 * ORC_RESTRICT s2, int p1, int n); +void bayer8to16_orc_reorder (guint8 * ORC_RESTRICT d1, const guint32 * ORC_RESTRICT s1, int n); #ifdef __cplusplus } #endif +
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption
Added
+(directory)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/bcd.h
Changed
(renamed from ext/closedcaption/bcd.h)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/bit_slicer.c
Changed
(renamed from ext/closedcaption/bit_slicer.c)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/bit_slicer.h
Changed
(renamed from ext/closedcaption/bit_slicer.h)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/ccutils.c
Changed
(renamed from ext/closedcaption/ccutils.c)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/ccutils.h
Changed
(renamed from ext/closedcaption/ccutils.h)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/decoder.c
Changed
(renamed from ext/closedcaption/decoder.c)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/decoder.h
Changed
(renamed from ext/closedcaption/decoder.h)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/gstcccombiner.c
Changed
(renamed from ext/closedcaption/gstcccombiner.c)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/gstcccombiner.h
Changed
(renamed from ext/closedcaption/gstcccombiner.h)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/gstccconverter.c
Changed
(renamed from ext/closedcaption/gstccconverter.c)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/gstccconverter.h
Changed
(renamed from ext/closedcaption/gstccconverter.h)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/gstccextractor.c
Changed
(renamed from ext/closedcaption/gstccextractor.c)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/gstccextractor.h
Changed
(renamed from ext/closedcaption/gstccextractor.h)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/gstcea608mux.c
Added
@@ -0,0 +1,501 @@ +/* + * GStreamer + * Copyright (C) 2023 Mathieu Duponchelle <mathieu@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +/** + * SECTION:element-cea608mux + * @title: cea608mux + * @short_description: Combine CC1 and CC3 raw 608 streams + * + * ``` + * gst-launch-1.0 cea608mux name=mux ! fakesink dump=true \ + * filesrc location=one.scc ! sccparse ! closedcaption/x-cea-608 ! ccconverter ! mux. \ + * filesrc location=two.scc ! sccparse ! ccconverter ! closedcaption/x-cea-608, format=raw, field=0 ! \ + * capssetter caps="closedcaption/x-cea-608, format=raw, field=1" ! mux. 
+ * ``` + * + * Since: 1.24 + */ + + +#ifdef HAVE_CONFIG_H +# include <config.h> +#endif + +#include <gst/gst.h> +#include <gst/base/base.h> +#include <gst/video/video.h> +#include <string.h> + +#include "ccutils.h" +#include "gstcea608mux.h" + +GST_DEBUG_CATEGORY_STATIC (gst_cea608_mux_debug); +#define GST_CAT_DEFAULT gst_cea608_mux_debug + +enum +{ + PROP_0, + PROP_FORCE_LIVE, +}; + +#define DEFAULT_FORCE_LIVE FALSE + +static GstStaticPadTemplate srctemplate = GST_STATIC_PAD_TEMPLATE ("src", + GST_PAD_SRC, + GST_PAD_ALWAYS, + GST_STATIC_CAPS ("closedcaption/x-cea-608, format=s334-1a, " + "framerate=(fraction){60/1, 60000/1001, 50/1, 30/1, 30000/1001, 25/1, 24/1, 24000/1001}")); + +static GstStaticPadTemplate cc1_template = GST_STATIC_PAD_TEMPLATE ("cc1", + GST_PAD_SINK, + GST_PAD_REQUEST, + GST_STATIC_CAPS ("closedcaption/x-cea-608,format=raw,field=0")); + +static GstStaticPadTemplate cc3_template = GST_STATIC_PAD_TEMPLATE ("cc3", + GST_PAD_SINK, + GST_PAD_REQUEST, + GST_STATIC_CAPS ("closedcaption/x-cea-608,format=raw,field=1")); + +#define parent_class gst_cea608_mux_parent_class +G_DEFINE_TYPE (GstCea608Mux, gst_cea608_mux, GST_TYPE_AGGREGATOR); +GST_ELEMENT_REGISTER_DEFINE (cea608mux, "cea608mux", + GST_RANK_NONE, GST_TYPE_CEA608MUX); + +static void +gst_cea608_mux_finalize (GObject * object) +{ + GstCea608Mux *self = GST_CEA608MUX (object); + + gst_clear_object (&self->cc_buffer); + + G_OBJECT_CLASS (parent_class)->finalize (object); +} + +#define GST_FLOW_NEED_DATA GST_FLOW_CUSTOM_SUCCESS + +static GstAggregatorPad * +find_best_pad (GstAggregator * aggregator, GstClockTime * ts, gboolean timeout) +{ + GstAggregatorPad *best = NULL; + GstClockTime best_ts = GST_CLOCK_TIME_NONE; + GstIterator *pads; + GValue padptr = { 0, }; + gboolean done = FALSE; + + pads = gst_element_iterate_sink_pads (GST_ELEMENT (aggregator)); + + while (!done) { + switch (gst_iterator_next (pads, &padptr)) { + case GST_ITERATOR_OK:{ + GstAggregatorPad *apad = g_value_get_object 
(&padptr); + GstClockTime t = GST_CLOCK_TIME_NONE; + GstBuffer *buffer; + + buffer = gst_aggregator_pad_peek_buffer (apad); + if (!buffer) { + if (!timeout && !gst_aggregator_pad_is_eos (apad)) { + gst_object_replace ((GstObject **) & best, NULL); + best_ts = GST_CLOCK_TIME_NONE; + done = TRUE; + } + break; + } + + if (GST_CLOCK_TIME_IS_VALID (GST_BUFFER_DTS_OR_PTS (buffer))) { + t = gst_segment_to_running_time (&apad->segment, GST_FORMAT_TIME, + GST_BUFFER_PTS (buffer)); + } + + if (!GST_CLOCK_TIME_IS_VALID (best_ts) || + (GST_CLOCK_TIME_IS_VALID (t) && t < best_ts)) { + gst_object_replace ((GstObject **) & best, GST_OBJECT (apad)); + best_ts = t; + } + gst_buffer_unref (buffer); + break; + } + case GST_ITERATOR_DONE: + done = TRUE; + break; + case GST_ITERATOR_RESYNC: + gst_iterator_resync (pads); + /* Clear the best pad and start again. It might have disappeared */ + gst_object_replace ((GstObject **) & best, NULL); + best_ts = GST_CLOCK_TIME_NONE; + break; + case GST_ITERATOR_ERROR: + /* This can't happen if the parameters to gst_iterator_next() are valid */ + g_assert_not_reached (); + break; + } + g_value_reset (&padptr); + } + g_value_unset (&padptr); + gst_iterator_free (pads); + + if (best) { + GST_LOG_OBJECT (aggregator, + "Best pad found with TS %" GST_TIME_FORMAT ": %" GST_PTR_FORMAT, + GST_TIME_ARGS (best_ts), best); + } else { + GST_LOG_OBJECT (aggregator, "Best pad not found"); + } + + if (ts && GST_CLOCK_TIME_IS_VALID (best_ts)) + *ts = best_ts; + + return best; +} + +static gboolean +all_pads_eos (GstAggregator * agg) +{ + GList *l; + gboolean ret = TRUE; + + GST_OBJECT_LOCK (agg); + for (l = GST_ELEMENT_CAST (agg)->sinkpads; l; l = l->next) { + GstAggregatorPad *pad = GST_AGGREGATOR_PAD (l->data); + + if (!gst_aggregator_pad_is_eos (pad)) { + ret = FALSE; + break; + } + } + GST_OBJECT_UNLOCK (agg); + + return ret; +} + +static void +take_s334_both_fields (GstCea608Mux * self, GstBuffer * buffer) +{ + GstMapInfo out = GST_MAP_INFO_INIT; + gint 
s334_len; + guint cc_data_len, i; + + gst_buffer_map (buffer, &out, GST_MAP_READWRITE); + + cc_data_len = out.size; + cc_buffer_take_cc_data (self->cc_buffer, self->cdp_fps_entry, out.data, + &cc_data_len); + s334_len = drop_ccp_from_cc_data (out.data, cc_data_len); + if (s334_len < 0) { + s334_len = 0; + goto out; + } + + for (i = 0; i < s334_len / 3; i++) { + guint byte = out.data[i * 3]; + /* We have to assume a line offset of 0 */ + out.data[i * 3] = (byte == 0xfc || byte == 0xf8) ? 0x80 : 0x00; + } + +out: + gst_buffer_unmap (buffer, &out); + gst_buffer_set_size (buffer, s334_len); +} + +static GstFlowReturn +finish_s334_both_fields (GstCea608Mux * self) +{ + GstClockTime output_pts = gst_util_uint64_scale_int (GST_SECOND, + self->cdp_fps_entry->fps_d * self->n_output_buffers, + self->cdp_fps_entry->fps_n); + GstClockTime output_duration = + gst_util_uint64_scale_int (GST_SECOND, self->cdp_fps_entry->fps_d, + self->cdp_fps_entry->fps_n); + GstBuffer *output = gst_buffer_new_allocate (NULL, MAX_CDP_PACKET_LEN, NULL); + GstSegment *agg_segment = + &GST_AGGREGATOR_PAD (GST_AGGREGATOR (self)->srcpad)->segment; + + output_pts += self->start_time; + + take_s334_both_fields (self, output); + GST_BUFFER_PTS (output) = output_pts; + GST_BUFFER_DURATION (output) = output_duration; + GST_DEBUG_OBJECT (self, "Finishing %" GST_PTR_FORMAT, output); + self->n_output_buffers += 1; + agg_segment->position = output_pts + output_duration; + + return gst_aggregator_finish_buffer (GST_AGGREGATOR (self), output); +} + +static GstFlowReturn +gst_cea608_mux_aggregate (GstAggregator * aggregator, gboolean timeout) +{ + GstCea608Mux *self = GST_CEA608MUX (aggregator); + GstFlowReturn flow_ret = GST_FLOW_OK; + GstAggregatorPad *best_pad = NULL; + GstClockTime output_duration = + gst_util_uint64_scale_int (GST_SECOND, self->cdp_fps_entry->fps_d, + self->cdp_fps_entry->fps_n); + GstSegment *agg_segment = &GST_AGGREGATOR_PAD (aggregator->srcpad)->segment; + GstClockTime output_start_time =
agg_segment->position; + GstClockTime output_end_running_time; + + if (agg_segment->position == -1 || agg_segment->position < agg_segment->start) + output_start_time = agg_segment->start; + + if (!GST_CLOCK_TIME_IS_VALID (self->start_time)) { + self->start_time = output_start_time; + GST_DEBUG_OBJECT (self, "Start time %" GST_TIME_FORMAT, + GST_TIME_ARGS (self->start_time)); + } + + best_pad = + find_best_pad (aggregator, &self->earliest_input_running_time, timeout); + + output_end_running_time = + gst_segment_to_running_time (agg_segment, GST_FORMAT_TIME, + output_start_time + output_duration); + + GST_LOG_OBJECT (self, "best-pad: %s, timeout: %d, " + "earliest input running time: %" + GST_TIME_FORMAT ", output running time: %" GST_TIME_FORMAT, + best_pad ? GST_OBJECT_NAME (best_pad) : "NULL", timeout, + GST_TIME_ARGS (self->earliest_input_running_time), + GST_TIME_ARGS (output_end_running_time)); + + if (GST_CLOCK_TIME_IS_VALID (self->earliest_input_running_time) + && self->earliest_input_running_time > output_end_running_time) { + /* Nothing to consume, earliest pad is not ready yet */ + GST_LOG_OBJECT (self, "Nothing to consume"); + } else if (best_pad) { + GstBuffer *buffer; + + buffer = gst_aggregator_pad_pop_buffer (GST_AGGREGATOR_PAD (best_pad)); + + if (buffer) { + GstMapInfo map; + + gst_buffer_map (buffer, &map, GST_MAP_READ); + + if (g_strcmp0 (GST_PAD_NAME (best_pad), "cc1") == 0) { + GST_DEBUG_OBJECT (self, "Consuming CC1 %" GST_PTR_FORMAT, buffer); + cc_buffer_push_separated (self->cc_buffer, map.data, map.size, NULL, 0, + NULL, 0); + } else { + GST_DEBUG_OBJECT (self, "Consuming CC3 %" GST_PTR_FORMAT, buffer); + cc_buffer_push_separated (self->cc_buffer, NULL, 0, map.data, map.size, + NULL, 0); + } + + gst_buffer_unmap (buffer, &map); + gst_buffer_unref (buffer); + } else if (!timeout) { + /* We got flushed and still have time to wait before the deadline */ + flow_ret = GST_AGGREGATOR_FLOW_NEED_DATA; + } + } else if (!gst_aggregator_get_force_live 
(aggregator) + && all_pads_eos (aggregator)) { + GST_INFO_OBJECT (self, "EOS!"); + flow_ret = GST_FLOW_EOS; + } else if (!timeout) { + GST_LOG_OBJECT (self, "Need more data"); + flow_ret = GST_AGGREGATOR_FLOW_NEED_DATA; + } + + if (flow_ret == GST_FLOW_OK) { + if (timeout || output_end_running_time < self->earliest_input_running_time) { + flow_ret = finish_s334_both_fields (self); + } + } else if (flow_ret == GST_FLOW_EOS && !cc_buffer_is_empty (self->cc_buffer)) { + flow_ret = finish_s334_both_fields (self); + } + + g_clear_pointer (&best_pad, gst_object_unref); + + return flow_ret; +} + +static gboolean +gst_cea608_mux_stop (GstAggregator * aggregator) +{ + GstCea608Mux *self = GST_CEA608MUX (aggregator); + + cc_buffer_discard (self->cc_buffer); + self->n_output_buffers = 0; + self->earliest_input_running_time = 0; + self->start_time = GST_CLOCK_TIME_NONE; + + return TRUE; +} + +static GstFlowReturn +gst_cea608_mux_flush (GstAggregator * aggregator) +{ + GstCea608Mux *self = GST_CEA608MUX (aggregator); + GstSegment *agg_segment = &GST_AGGREGATOR_PAD (aggregator->srcpad)->segment; + + GST_DEBUG_OBJECT (self, "Flush"); + + cc_buffer_discard (self->cc_buffer); + self->n_output_buffers = 0; + self->earliest_input_running_time = 0; + self->start_time = GST_CLOCK_TIME_NONE; + agg_segment->position = -1; + + return GST_FLOW_OK; +} + +static gboolean +gst_cea608_mux_negotiated_src_caps (GstAggregator * agg, GstCaps * caps) +{ + GstStructure *s = gst_caps_get_structure (caps, 0); + gint fps_n, fps_d; + GstCea608Mux *self = GST_CEA608MUX (agg); + GstClockTime latency; + gboolean success GST_UNUSED_ASSERT; + + GST_INFO_OBJECT (agg->srcpad, "set src caps: %" GST_PTR_FORMAT, caps); + + success = gst_structure_get_fraction (s, "framerate", &fps_n, &fps_d); + g_assert (success); + self->cdp_fps_entry = cdp_fps_entry_from_fps (fps_n, fps_d); + g_assert (self->cdp_fps_entry != NULL && self->cdp_fps_entry->fps_n != 0); + + latency = + gst_util_uint64_scale (GST_SECOND, 
self->cdp_fps_entry->fps_d, + self->cdp_fps_entry->fps_n); + gst_aggregator_set_latency (agg, latency, latency); + + return TRUE; +} + +static GstBuffer * +gst_cea608_mux_clip (GstAggregator * aggregator, GstAggregatorPad * pad, + GstBuffer * buffer) +{ + GstClockTime time; + + if (!GST_BUFFER_PTS_IS_VALID (buffer)) + return buffer; + + time = gst_segment_to_running_time (&pad->segment, GST_FORMAT_TIME, + GST_BUFFER_PTS (buffer)); + if (!GST_CLOCK_TIME_IS_VALID (time)) { + GST_DEBUG_OBJECT (pad, "Dropping buffer on pad outside segment %" + GST_TIME_FORMAT, GST_TIME_ARGS (GST_BUFFER_PTS (buffer))); + gst_buffer_unref (buffer); + return NULL; + } + + return buffer; +} + +static void +gst_cea608_mux_get_property (GObject * object, + guint prop_id, GValue * value, GParamSpec * pspec) +{ + switch (prop_id) { + case PROP_FORCE_LIVE: + g_value_set_boolean (value, + gst_aggregator_get_force_live (GST_AGGREGATOR (object))); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static void +gst_cea608_mux_set_property (GObject * object, + guint prop_id, const GValue * value, GParamSpec * pspec) +{ + switch (prop_id) { + case PROP_FORCE_LIVE: + gst_aggregator_set_force_live (GST_AGGREGATOR (object), + g_value_get_boolean (value)); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static void +gst_cea608_mux_class_init (GstCea608MuxClass * klass) +{ + GObjectClass *gobject_class; + GstElementClass *gstelement_class; + GstAggregatorClass *aggregator_class; + + gobject_class = (GObjectClass *) klass; + gstelement_class = (GstElementClass *) klass; + aggregator_class = (GstAggregatorClass *) klass; + + gobject_class->finalize = gst_cea608_mux_finalize; + gobject_class->get_property = gst_cea608_mux_get_property; + gobject_class->set_property = gst_cea608_mux_set_property; + + gst_element_class_set_static_metadata (gstelement_class, + "Closed Caption Muxer", + "Aggregator", + 
"Combines raw 608 streams", + "Mathieu Duponchelle <mathieu@centricular.com>"); + + gst_element_class_add_static_pad_template_with_gtype (gstelement_class, + &srctemplate, GST_TYPE_AGGREGATOR_PAD); + gst_element_class_add_static_pad_template_with_gtype (gstelement_class, + &cc1_template, GST_TYPE_AGGREGATOR_PAD); + gst_element_class_add_static_pad_template_with_gtype (gstelement_class, + &cc3_template, GST_TYPE_AGGREGATOR_PAD); + + aggregator_class->aggregate = gst_cea608_mux_aggregate; + aggregator_class->stop = gst_cea608_mux_stop; + aggregator_class->flush = gst_cea608_mux_flush; + aggregator_class->negotiated_src_caps = gst_cea608_mux_negotiated_src_caps; + aggregator_class->get_next_time = gst_aggregator_simple_get_next_time; + aggregator_class->clip = gst_cea608_mux_clip; + + GST_DEBUG_CATEGORY_INIT (gst_cea608_mux_debug, "cea608mux", + 0, "Closed Caption muxer"); + + /** + * cea608mux:force-live: + * + * Causes the element to aggregate on a timeout even when no live source is + * connected to its sinks. See #GstAggregator:min-upstream-latency for a + * companion property: in the vast majority of cases where you plan to plug in + * live sources with a non-zero latency, you should set it to a non-zero value. 
+ * + * Since: 1.26 + */ + g_object_class_install_property (gobject_class, PROP_FORCE_LIVE, + g_param_spec_boolean ("force-live", "Force live", + "Always operate in live mode and aggregate on timeout regardless of " + "whether any live sources are linked upstream", + DEFAULT_FORCE_LIVE, + G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS | G_PARAM_CONSTRUCT_ONLY)); +} + +static void +gst_cea608_mux_init (GstCea608Mux * self) +{ + self->cc_buffer = cc_buffer_new (); + cc_buffer_set_max_buffer_time (self->cc_buffer, GST_CLOCK_TIME_NONE); + cc_buffer_set_output_padding (self->cc_buffer, TRUE, FALSE); + cc_buffer_set_cea608_padding_strategy (self->cc_buffer, + CC_BUFFER_CEA608_PADDING_STRATEGY_VALID | + CC_BUFFER_CEA608_PADDING_STRATEGY_INPUT_REMOVE); + self->cdp_fps_entry = &null_fps_entry; + self->start_time = GST_CLOCK_TIME_NONE; +}
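For context, the latency that `gst_cea608_mux_negotiated_src_caps` reports above is one frame duration at the negotiated framerate, computed as `gst_util_uint64_scale (GST_SECOND, fps_d, fps_n)`. A minimal Python sketch of that integer scaling (illustrative only, not part of the patch; `frame_duration_ns` is a hypothetical helper name):

```python
# Sketch of the one-frame latency computed in negotiated_src_caps() above.
# gst_util_uint64_scale() rounds down; floor division matches it for these values.

GST_SECOND = 1_000_000_000  # GStreamer clock time is expressed in nanoseconds

def frame_duration_ns(fps_n: int, fps_d: int) -> int:
    """Duration of one frame in nanoseconds at fps_n/fps_d frames per second."""
    return GST_SECOND * fps_d // fps_n

print(frame_duration_ns(25, 1))        # 40000000 (40 ms at 25 fps)
print(frame_duration_ns(30000, 1001))  # 33366666 (~33.4 ms at NTSC 29.97 fps)
```

Since min and max latency are both set to this value, the muxer advertises exactly one frame of buffering delay downstream.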
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/gstcea608mux.h
Changed
(renamed from ext/closedcaption/gstcea608mux.h)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/gstclosedcaption.c
Added
@@ -0,0 +1,66 @@ +/* + * GStreamer + * Copyright (C) 2018 Edward Hervey <edward@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + + +#ifdef HAVE_CONFIG_H +# include <config.h> +#endif + +#include <gst/gst.h> + +#include "gstcccombiner.h" +#include "gstccconverter.h" +#include "gstccextractor.h" +#include "gstcea608mux.h" +#include "gstline21dec.h" +#include "gstline21enc.h" +#include "ccutils.h" +#include "gsth264ccextractor.h" +#include "gsth265ccextractor.h" +#include "gsth264ccinserter.h" +#include "gsth265ccinserter.h" + +static gboolean +closedcaption_init (GstPlugin * plugin) +{ + gboolean ret = FALSE; + + GST_DEBUG_CATEGORY_INIT (ccutils_debug_cat, "ccutils", 0, + "Closed caption utilities"); + + ret |= GST_ELEMENT_REGISTER (cccombiner, plugin); + ret |= GST_ELEMENT_REGISTER (cea608mux, plugin); + ret |= GST_ELEMENT_REGISTER (ccconverter, plugin); + ret |= GST_ELEMENT_REGISTER (ccextractor, plugin); + ret |= GST_ELEMENT_REGISTER (line21decoder, plugin); + ret |= GST_ELEMENT_REGISTER (line21encoder, plugin); + ret |= GST_ELEMENT_REGISTER (h264ccextractor, plugin); + ret |= GST_ELEMENT_REGISTER (h265ccextractor, plugin); + ret |= GST_ELEMENT_REGISTER (h264ccinserter, plugin); + ret |= GST_ELEMENT_REGISTER (h265ccinserter, plugin); + + return 
ret; +} + +GST_PLUGIN_DEFINE (GST_VERSION_MAJOR, + GST_VERSION_MINOR, + closedcaption, + "Closed Caption elements", + closedcaption_init, VERSION, "LGPL", GST_PACKAGE_NAME, GST_PACKAGE_ORIGIN)
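The `closedcaption_init` function above OR-accumulates the result of each `GST_ELEMENT_REGISTER` call, so the plugin loads successfully as long as at least one of its elements registers. A small Python sketch of that accumulation pattern (hypothetical helper name, purely illustrative):

```python
# Sketch of the 'ret |= GST_ELEMENT_REGISTER (...)' pattern used in
# closedcaption_init() above: overall init succeeds if ANY element registers,
# so a single failing element does not prevent the plugin from loading.

def init_plugin(register_results):
    """OR-accumulate per-element registration results (True = registered)."""
    ret = False
    for ok in register_results:
        ret |= ok  # one success is enough
    return ret

print(init_plugin([True, True, False]))  # True: plugin loads
print(init_plugin([False, False]))       # False: plugin init fails
```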
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/gstcodecccinserter.c
Changed
(renamed from ext/closedcaption/gstcodecccinserter.c)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/gstcodecccinserter.h
Changed
(renamed from ext/closedcaption/gstcodecccinserter.h)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/gsth264ccextractor.c
Changed
(renamed from ext/closedcaption/gsth264ccextractor.c)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/gsth264ccextractor.h
Changed
(renamed from ext/closedcaption/gsth264ccextractor.h)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/gsth264ccinserter.c
Changed
(renamed from ext/closedcaption/gsth264ccinserter.c)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/gsth264ccinserter.h
Changed
(renamed from ext/closedcaption/gsth264ccinserter.h)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/gsth264reorder.c
Changed
(renamed from ext/closedcaption/gsth264reorder.c)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/gsth264reorder.h
Changed
(renamed from ext/closedcaption/gsth264reorder.h)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/gsth265ccextractor.c
Changed
(renamed from ext/closedcaption/gsth265ccextractor.c)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/gsth265ccextractor.h
Changed
(renamed from ext/closedcaption/gsth265ccextractor.h)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/gsth265ccinserter.c
Changed
(renamed from ext/closedcaption/gsth265ccinserter.c)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/gsth265ccinserter.h
Changed
(renamed from ext/closedcaption/gsth265ccinserter.h)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/gsth265reorder.c
Added
@@ -0,0 +1,1746 @@ +/* GStreamer + * Copyright (C) 2015 Intel Corporation + * Author: Sreerenj Balachandran <sreerenj.balachandran@intel.com> + * Copyright (C) 2019 Seungha Yang <seungha.yang@navercorp.com> + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. 
+ */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gsth265reorder.h" +#include "gsth264reorder.h" +#include <gst/codecs/gsth265picture.h> +#include <string.h> + +GST_DEBUG_CATEGORY_STATIC (gst_h265_reorder_debug); +#define GST_CAT_DEFAULT gst_h265_reorder_debug + +struct _GstH265Reorder +{ + GstObject parent; + + gboolean need_reorder; + + gint width; + gint height; + + guint8 conformance_window_flag; + gint crop_rect_width; + gint crop_rect_height; + gint crop_rect_x; + gint crop_rect_y; + gint fps_n; + gint fps_d; + + guint nal_length_size; + gboolean is_hevc; + GstH265Parser *parser; + GstH265Parser *preproc_parser; + GstH265Dpb *dpb; + + guint8 field_seq_flag; + guint8 progressive_source_flag; + guint8 interlaced_source_flag; + + GstH265SEIPicStructType cur_pic_struct; + guint8 cur_source_scan_type; + guint8 cur_duplicate_flag; + + gboolean no_output_of_prior_pics_flag; + + /* vps/sps/pps of the current slice */ + const GstH265VPS *active_vps; + const GstH265SPS *active_sps; + const GstH265PPS *active_pps; + + guint32 SpsMaxLatencyPictures; + + GstH265Picture *current_picture; + GstVideoCodecFrame *current_frame; + + /* Slice (slice header + nalu) currently being processed/decoded */ + GstH265Slice current_slice; + GstH265Slice prev_slice; + GstH265Slice prev_independent_slice; + + GstH265Picture *RefPicSetStCurrBefore[16]; + GstH265Picture *RefPicSetStCurrAfter[16]; + GstH265Picture *RefPicSetStFoll[16]; + GstH265Picture *RefPicSetLtCurr[16]; + GstH265Picture *RefPicSetLtFoll[16]; + + guint NumPocStCurrBefore; + guint NumPocStCurrAfter; + guint NumPocStFoll; + guint NumPocLtCurr; + guint NumPocLtFoll; + guint NumPicTotalCurr; + + gint32 poc; // PicOrderCntVal + gint32 poc_msb; // PicOrderCntMsb + gint32 poc_lsb; // pic_order_cnt_lsb (from slice_header()) + gint32 prev_poc_msb; // prevPicOrderCntMsb + gint32 prev_poc_lsb; // prevPicOrderCntLsb + gint32 prev_tid0pic_poc_lsb; + gint32 prev_tid0pic_poc_msb; + gint32 PocStCurrBefore[16]; + gint32
PocStCurrAfter[16]; + gint32 PocStFoll[16]; + gint32 PocLtCurr[16]; + gint32 PocLtFoll[16]; + + /* PicOrderCount of the previously outputted frame */ + gint last_output_poc; + + gboolean associated_irap_NoRaslOutputFlag; + gboolean new_bitstream; + gboolean prev_nal_is_eos; + + GArray *nalu; + + /* Split packetized data into actual nal chunks (for malformed stream) */ + GArray *split_nalu; + + GArray *au_nalus; + + GPtrArray *frame_queue; + GPtrArray *output_queue; + guint32 system_num; + guint32 present_num; + + GstClockTime latency; +}; + +typedef struct +{ + union + { + GstH265VPS vps; + GstH265SPS sps; + GstH265PPS pps; + GstH265Slice slice; + } unit; + GstH265NalUnitType nalu_type; +} GstH265ReorderNalUnit; + +static void gst_h265_reorder_finalize (GObject * object); + +static gboolean gst_h265_reorder_start_current_picture (GstH265Reorder * self); + +#define gst_h265_reorder_parent_class parent_class +G_DEFINE_TYPE (GstH265Reorder, gst_h265_reorder, GST_TYPE_OBJECT); + +static void +gst_h265_reorder_class_init (GstH265ReorderClass * klass) +{ + GObjectClass *object_class = G_OBJECT_CLASS (klass); + + object_class->finalize = gst_h265_reorder_finalize; + + GST_DEBUG_CATEGORY_INIT (gst_h265_reorder_debug, "h265reorder", 0, + "h265reorder"); +} + +static inline gboolean +is_slice_nalu (GstH265NalUnitType type) +{ + if ((type >= GST_H265_NAL_SLICE_TRAIL_N && + type <= GST_H265_NAL_SLICE_RASL_R) || + (type >= GST_H265_NAL_SLICE_BLA_W_LP && + type <= GST_H265_NAL_SLICE_CRA_NUT)) { + return TRUE; + } + + return FALSE; +} + +static void +gst_h265_reorder_clear_nalu (GstH265ReorderNalUnit * nalu) +{ + if (!nalu) + return; + + if (is_slice_nalu (nalu->nalu_type)) + gst_h265_slice_hdr_free (&nalu->unit.slice.header); + + memset (nalu, 0, sizeof (GstH265ReorderNalUnit)); +} + +static void +gst_h265_reorder_init (GstH265Reorder * self) +{ + self->parser = gst_h265_parser_new (); + self->preproc_parser = gst_h265_parser_new (); + self->dpb = gst_h265_dpb_new (); +
self->frame_queue = + g_ptr_array_new_with_free_func ( + (GDestroyNotify) gst_video_codec_frame_unref); + self->output_queue = + g_ptr_array_new_with_free_func ( + (GDestroyNotify) gst_video_codec_frame_unref); + + self->nalu = g_array_sized_new (FALSE, TRUE, sizeof (GstH265ReorderNalUnit), + 8); + g_array_set_clear_func (self->nalu, + (GDestroyNotify) gst_h265_reorder_clear_nalu); + self->split_nalu = g_array_new (FALSE, FALSE, sizeof (GstH265NalUnit)); + self->au_nalus = g_array_new (FALSE, FALSE, sizeof (GstH265NalUnit)); + self->fps_n = 25; + self->fps_d = 1; +} + +static void +gst_h265_reorder_clear_ref_pic_sets (GstH265Reorder * self) +{ + guint i; + + for (i = 0; i < 16; i++) { + gst_clear_h265_picture (&self->RefPicSetLtCurr[i]); + gst_clear_h265_picture (&self->RefPicSetLtFoll[i]); + gst_clear_h265_picture (&self->RefPicSetStCurrBefore[i]); + gst_clear_h265_picture (&self->RefPicSetStCurrAfter[i]); + gst_clear_h265_picture (&self->RefPicSetStFoll[i]); + } +} + +static void +gst_h265_reorder_finalize (GObject * object) +{ + GstH265Reorder *self = GST_H265_REORDER (object); + + gst_h265_parser_free (self->parser); + gst_h265_parser_free (self->preproc_parser); + g_ptr_array_unref (self->frame_queue); + g_ptr_array_unref (self->output_queue); + g_array_unref (self->nalu); + g_array_unref (self->split_nalu); + g_array_unref (self->au_nalus); + gst_h265_reorder_clear_ref_pic_sets (self); + gst_h265_dpb_free (self->dpb); + + G_OBJECT_CLASS (parent_class)->finalize (object); +} + +static gboolean +gst_h265_reorder_is_crop_rect_changed (GstH265Reorder * self, GstH265SPS * sps) +{ + if (self->conformance_window_flag != sps->conformance_window_flag) + return TRUE; + if (self->crop_rect_width != sps->crop_rect_width) + return TRUE; + if (self->crop_rect_height != sps->crop_rect_height) + return TRUE; + if (self->crop_rect_x != sps->crop_rect_x) + return TRUE; + if (self->crop_rect_y != sps->crop_rect_y) + return TRUE; + + return FALSE; +} + +typedef struct +{ + const gchar
*level_name; + guint8 level_idc; + guint32 MaxLumaPs; +} GstH265LevelLimits; + +/* *INDENT-OFF* */ +/* Table A.8 - General tier and level limits */ +static const GstH265LevelLimits level_limits[] = { + /* level idc MaxLumaPs */ + { "1", GST_H265_LEVEL_L1, 36864 }, + { "2", GST_H265_LEVEL_L2, 122880 }, + { "2.1", GST_H265_LEVEL_L2_1, 245760 }, + { "3", GST_H265_LEVEL_L3, 552960 }, + { "3.1", GST_H265_LEVEL_L3_1, 983040 }, + { "4", GST_H265_LEVEL_L4, 2228224 }, + { "4.1", GST_H265_LEVEL_L4_1, 2228224 }, + { "5", GST_H265_LEVEL_L5, 8912896 }, + { "5.1", GST_H265_LEVEL_L5_1, 8912896 }, + { "5.2", GST_H265_LEVEL_L5_2, 8912896 }, + { "6", GST_H265_LEVEL_L6, 35651584 }, + { "6.1", GST_H265_LEVEL_L6_1, 35651584 }, + { "6.2", GST_H265_LEVEL_L6_2, 35651584 }, +}; +/* *INDENT-ON* */ + +static gint +gst_h265_reorder_get_max_dpb_size_from_sps (GstH265Reorder * self, + GstH265SPS * sps) +{ + guint i; + guint PicSizeInSamplesY; + /* Default is the worst case level 6.2 */ + guint32 MaxLumaPS = G_MAXUINT32; + gint MaxDpbPicBuf = 6; + gint max_dpb_size; + + /* A.4.2, maxDpbPicBuf is equal to 6 for all profiles where the value of + * sps_curr_pic_ref_enabled_flag is required to be equal to 0 and 7 for all + * profiles where the value of sps_curr_pic_ref_enabled_flag is not required + * to be equal to 0 */ + if (sps->sps_scc_extension_flag) { + /* sps_curr_pic_ref_enabled_flag could be non-zero only if profile is SCC */ + MaxDpbPicBuf = 7; + } + + /* Unknown level */ + if (sps->profile_tier_level.level_idc == 0) + return 16; + + PicSizeInSamplesY = sps->width * sps->height; + for (i = 0; i < G_N_ELEMENTS (level_limits); i++) { + if (sps->profile_tier_level.level_idc <= level_limits[i].level_idc) { + if (PicSizeInSamplesY <= level_limits[i].MaxLumaPs) { + MaxLumaPS = level_limits[i].MaxLumaPs; + } else { + GST_DEBUG_OBJECT (self, + "%u (%dx%d) exceeds allowed max luma sample for level \"%s\" %u", + PicSizeInSamplesY, sps->width, sps->height, + level_limits[i].level_name,
level_limits[i].MaxLumaPs); + } + break; + } + } + + /* Unknown level */ + if (MaxLumaPS == G_MAXUINT32) + return 16; + + /* A.4.2 */ + if (PicSizeInSamplesY <= (MaxLumaPS >> 2)) + max_dpb_size = MaxDpbPicBuf * 4; + else if (PicSizeInSamplesY <= (MaxLumaPS >> 1)) + max_dpb_size = MaxDpbPicBuf * 2; + else if (PicSizeInSamplesY <= ((3 * MaxLumaPS) >> 2)) + max_dpb_size = (MaxDpbPicBuf * 4) / 3; + else + max_dpb_size = MaxDpbPicBuf; + + max_dpb_size = MIN (max_dpb_size, 16); + + /* MaxDpbSize is not an actual maximum required buffer size. + * Instead, it indicates upper bound for other syntax elements, such as + * sps_max_dec_pic_buffering_minus1. If this bitstream can satisfy + * the requirement, use this as our dpb size */ + if (sps->max_dec_pic_buffering_minus1[sps->max_sub_layers_minus1] + 1 <= + max_dpb_size) { + GST_DEBUG_OBJECT (self, "max_dec_pic_buffering_minus1 %d < MaxDpbSize %d", + sps->max_dec_pic_buffering_minus1[sps->max_sub_layers_minus1], + max_dpb_size); + max_dpb_size = + sps->max_dec_pic_buffering_minus1[sps->max_sub_layers_minus1] + 1; + } else { + /* not reliable values, use 16 */ + max_dpb_size = 16; + } + + return max_dpb_size; +} + +static gboolean +gst_h265_reorder_process_sps (GstH265Reorder * self, GstH265SPS * sps) +{ + gint max_dpb_size; + gint prev_max_dpb_size; + guint8 field_seq_flag = 0; + guint8 progressive_source_flag = 0; + guint8 interlaced_source_flag = 0; + guint frames_delay; + + max_dpb_size = gst_h265_reorder_get_max_dpb_size_from_sps (self, sps); + + if (sps->vui_parameters_present_flag) + field_seq_flag = sps->vui_params.field_seq_flag; + + progressive_source_flag = sps->profile_tier_level.progressive_source_flag; + interlaced_source_flag = sps->profile_tier_level.interlaced_source_flag; + + prev_max_dpb_size = gst_h265_dpb_get_max_num_pics (self->dpb); + if (self->width != sps->width || self->height != sps->height || + prev_max_dpb_size != max_dpb_size || + self->field_seq_flag != field_seq_flag || + self->progressive_source_flag
!= progressive_source_flag || + self->interlaced_source_flag != interlaced_source_flag || + gst_h265_reorder_is_crop_rect_changed (self, sps)) { + + GST_DEBUG_OBJECT (self, + "SPS updated, resolution: %dx%d -> %dx%d, dpb size: %d -> %d, " + "field_seq_flag: %d -> %d, progressive_source_flag: %d -> %d, " + "interlaced_source_flag: %d -> %d", + self->width, self->height, sps->width, sps->height, + prev_max_dpb_size, max_dpb_size, self->field_seq_flag, field_seq_flag, + self->progressive_source_flag, progressive_source_flag, + self->interlaced_source_flag, interlaced_source_flag); + + gst_h265_reorder_drain (self); + + self->width = sps->width; + self->height = sps->height; + self->conformance_window_flag = sps->conformance_window_flag; + self->crop_rect_width = sps->crop_rect_width; + self->crop_rect_height = sps->crop_rect_height; + self->crop_rect_x = sps->crop_rect_x; + self->crop_rect_y = sps->crop_rect_y; + self->field_seq_flag = field_seq_flag; + self->progressive_source_flag = progressive_source_flag; + self->interlaced_source_flag = interlaced_source_flag; + + gst_h265_dpb_set_max_num_pics (self->dpb, max_dpb_size); + + GST_DEBUG_OBJECT (self, "Set DPB max size %d", max_dpb_size); + } + + if (sps->max_latency_increase_plus1[sps->max_sub_layers_minus1]) { + self->SpsMaxLatencyPictures = + sps->max_num_reorder_pics[sps->max_sub_layers_minus1] + + sps->max_latency_increase_plus1[sps->max_sub_layers_minus1] - 1; + } else { + self->SpsMaxLatencyPictures = 0; + } + + frames_delay = sps->max_num_reorder_pics[sps->max_sub_layers_minus1]; + self->latency = gst_util_uint64_scale_int (frames_delay * GST_SECOND, + self->fps_d, self->fps_n); + + return TRUE; +} + +static GstH265ParserResult +gst_h265_reorder_parse_sei (GstH265Reorder * self, GstH265NalUnit * nalu) +{ + GstH265ParserResult pres; + GArray *messages = NULL; + guint i; + + pres = gst_h265_parser_parse_sei (self->preproc_parser, nalu, &messages); + if (pres != GST_H265_PARSER_OK) { + GST_WARNING_OBJECT (self, "Failed
to parse SEI, result %d", pres); + + /* XXX: Ignore error from SEI parsing, it might be malformed bitstream, + * or our fault. But shouldn't be critical */ + g_clear_pointer (&messages, g_array_unref); + return GST_H265_PARSER_OK; + } + + for (i = 0; i < messages->len; i++) { + GstH265SEIMessage *sei = &g_array_index (messages, GstH265SEIMessage, i); + + switch (sei->payloadType) { + case GST_H265_SEI_PIC_TIMING: + self->cur_pic_struct = sei->payload.pic_timing.pic_struct; + self->cur_source_scan_type = sei->payload.pic_timing.source_scan_type; + self->cur_duplicate_flag = sei->payload.pic_timing.duplicate_flag; + + GST_TRACE_OBJECT (self, + "Picture Timing SEI, pic_struct: %d, source_scan_type: %d, " + "duplicate_flag: %d", self->cur_pic_struct, + self->cur_source_scan_type, self->cur_duplicate_flag); + break; + default: + break; + } + } + + g_array_free (messages, TRUE); + GST_LOG_OBJECT (self, "SEI parsed"); + + return GST_H265_PARSER_OK; +} + +static gboolean +gst_h265_reorder_preprocess_slice (GstH265Reorder * self, GstH265Slice * slice) +{ + const GstH265SliceHdr *slice_hdr = &slice->header; + + if (self->current_picture && slice_hdr->first_slice_segment_in_pic_flag) { + GST_WARNING_OBJECT (self, + "Current picture is not finished but slice header has " + "first_slice_segment_in_pic_flag"); + return FALSE; + } + + return TRUE; +} + +static gboolean +gst_h265_reorder_process_slice (GstH265Reorder * self, GstH265Slice * slice) +{ + self->current_slice = *slice; + + if (self->current_slice.header.dependent_slice_segment_flag) { + GstH265SliceHdr *slice_hdr = &self->current_slice.header; + GstH265SliceHdr *indep_slice_hdr = &self->prev_independent_slice.header; + + memcpy (&slice_hdr->type, &indep_slice_hdr->type, + G_STRUCT_OFFSET (GstH265SliceHdr, num_entry_point_offsets) - + G_STRUCT_OFFSET (GstH265SliceHdr, type)); + } else { + self->prev_independent_slice = self->current_slice; + memset (&self->prev_independent_slice.nalu, 0, sizeof (GstH265NalUnit)); + } + 
+ if (!gst_h265_reorder_preprocess_slice (self, &self->current_slice)) + return FALSE; + + /* The used SPS may not be the latest parsed one, make + * sure we have updated it before decode the frame */ + if (!gst_h265_reorder_process_sps (self, self->current_slice.header.pps->sps)) { + GST_WARNING_OBJECT (self, "Failed to process sps"); + return FALSE; + } + + self->active_pps = self->current_slice.header.pps; + self->active_sps = self->active_pps->sps; + + if (!self->current_picture) { + GstH265Picture *picture; + + g_assert (self->current_frame); + + picture = gst_h265_picture_new (); + /* This allows accessing the frame from the picture. */ + GST_CODEC_PICTURE_FRAME_NUMBER (picture) = + self->current_frame->system_frame_number; + + self->current_picture = picture; + + if (!gst_h265_reorder_start_current_picture (self)) { + GST_WARNING_OBJECT (self, "start picture failed"); + return FALSE; + } + } + + return TRUE; +} + +static GstH265ParserResult +gst_h265_reorder_parse_slice (GstH265Reorder * self, GstH265NalUnit * nalu) +{ + GstH265ParserResult pres; + GstH265Slice slice; + GstH265ReorderNalUnit decoder_nalu; + + memset (&slice, 0, sizeof (GstH265Slice)); + + pres = gst_h265_parser_parse_slice_hdr (self->preproc_parser, + nalu, &slice.header); + if (pres != GST_H265_PARSER_OK) + return pres; + + slice.nalu = *nalu; + + if (nalu->type >= GST_H265_NAL_SLICE_BLA_W_LP && + nalu->type <= GST_H265_NAL_SLICE_CRA_NUT) { + slice.rap_pic_flag = TRUE; + } + + /* NoRaslOutputFlag == 1 if the current picture is + * 1) an IDR picture + * 2) a BLA picture + * 3) a CRA picture that is the first access unit in the bitstream + * 4) first picture that follows an end of sequence NAL unit in decoding order + * 5) has HandleCraAsBlaFlag == 1 (set by external means, so not considering ) + */ + if (GST_H265_IS_NAL_TYPE_IDR (nalu->type) || + GST_H265_IS_NAL_TYPE_BLA (nalu->type) || + (GST_H265_IS_NAL_TYPE_CRA (nalu->type) && self->new_bitstream) || + self->prev_nal_is_eos) { + 
slice.no_rasl_output_flag = TRUE; + } + + if (GST_H265_IS_NAL_TYPE_IRAP (nalu->type)) { + slice.intra_pic_flag = TRUE; + + if (slice.no_rasl_output_flag && !self->new_bitstream) { + /* C 3.2 */ + slice.clear_dpb = TRUE; + if (nalu->type == GST_H265_NAL_SLICE_CRA_NUT) { + slice.no_output_of_prior_pics_flag = TRUE; + } else { + slice.no_output_of_prior_pics_flag = + slice.header.no_output_of_prior_pics_flag; + } + } + } + + if (slice.no_output_of_prior_pics_flag) + self->no_output_of_prior_pics_flag = TRUE; + + decoder_nalu.unit.slice = slice; + decoder_nalu.nalu_type = nalu->type; + + g_array_append_val (self->nalu, decoder_nalu); + + return GST_H265_PARSER_OK; +} + +static GstH265ParserResult +gst_h265_reorder_parse_nalu (GstH265Reorder * self, GstH265NalUnit * nalu) +{ + GstH265VPS vps; + GstH265SPS sps; + GstH265PPS pps; + GstH265ParserResult ret = GST_H265_PARSER_OK; + GstH265ReorderNalUnit decoder_nalu; + + GST_LOG_OBJECT (self, "Parsed nal type: %d, offset %d, size %d", + nalu->type, nalu->offset, nalu->size); + + memset (&decoder_nalu, 0, sizeof (GstH265ReorderNalUnit)); + decoder_nalu.nalu_type = nalu->type; + + switch (nalu->type) { + case GST_H265_NAL_VPS: + ret = gst_h265_parser_parse_vps (self->preproc_parser, nalu, &vps); + if (ret != GST_H265_PARSER_OK) + break; + + decoder_nalu.unit.vps = vps; + g_array_append_val (self->nalu, decoder_nalu); + break; + case GST_H265_NAL_SPS: + ret = gst_h265_parser_parse_sps (self->preproc_parser, nalu, &sps, TRUE); + if (ret != GST_H265_PARSER_OK) + break; + + decoder_nalu.unit.sps = sps; + g_array_append_val (self->nalu, decoder_nalu); + break; + case GST_H265_NAL_PPS: + ret = gst_h265_parser_parse_pps (self->preproc_parser, nalu, &pps); + if (ret != GST_H265_PARSER_OK) + break; + + decoder_nalu.unit.pps = pps; + g_array_append_val (self->nalu, decoder_nalu); + break; + case GST_H265_NAL_PREFIX_SEI: + case GST_H265_NAL_SUFFIX_SEI: + ret = gst_h265_reorder_parse_sei (self, nalu); + break; + case 
GST_H265_NAL_SLICE_TRAIL_N: + case GST_H265_NAL_SLICE_TRAIL_R: + case GST_H265_NAL_SLICE_TSA_N: + case GST_H265_NAL_SLICE_TSA_R: + case GST_H265_NAL_SLICE_STSA_N: + case GST_H265_NAL_SLICE_STSA_R: + case GST_H265_NAL_SLICE_RADL_N: + case GST_H265_NAL_SLICE_RADL_R: + case GST_H265_NAL_SLICE_RASL_N: + case GST_H265_NAL_SLICE_RASL_R: + case GST_H265_NAL_SLICE_BLA_W_LP: + case GST_H265_NAL_SLICE_BLA_W_RADL: + case GST_H265_NAL_SLICE_BLA_N_LP: + case GST_H265_NAL_SLICE_IDR_W_RADL: + case GST_H265_NAL_SLICE_IDR_N_LP: + case GST_H265_NAL_SLICE_CRA_NUT: + ret = gst_h265_reorder_parse_slice (self, nalu); + self->new_bitstream = FALSE; + self->prev_nal_is_eos = FALSE; + break; + case GST_H265_NAL_EOB: + self->new_bitstream = TRUE; + break; + case GST_H265_NAL_EOS: + self->prev_nal_is_eos = TRUE; + break; + default: + break; + } + + return ret; +} + +static gboolean +gst_h265_reorder_decode_nalu (GstH265Reorder * self, + GstH265ReorderNalUnit * nalu) +{ + GstH265ParserResult rst; + + switch (nalu->nalu_type) { + case GST_H265_NAL_VPS: + gst_h265_parser_update_vps (self->parser, &nalu->unit.vps); + return TRUE; + case GST_H265_NAL_SPS: + gst_h265_parser_update_sps (self->parser, &nalu->unit.sps); + return TRUE; + case GST_H265_NAL_PPS: + gst_h265_parser_update_pps (self->parser, &nalu->unit.pps); + return TRUE; + default: + if (!is_slice_nalu (nalu->nalu_type)) { + GST_WARNING_OBJECT (self, "Unexpected nal type %d", nalu->nalu_type); + return TRUE; + } + break; + } + + rst = gst_h265_parser_link_slice_hdr (self->parser, &nalu->unit.slice.header); + + if (rst != GST_H265_PARSER_OK) { + GST_ERROR_OBJECT (self, "Couldn't update slice header"); + return FALSE; + } + + return gst_h265_reorder_process_slice (self, &nalu->unit.slice); +} + +static gboolean +gst_h265_reorder_parse_codec_data (GstH265Reorder * self, const guint8 * data, + gsize size) +{ + GstH265Parser *parser = self->parser; + GstH265ParserResult pres; + gboolean ret = FALSE; + GstH265VPS vps; + GstH265SPS sps; + 
GstH265PPS pps; + GstH265DecoderConfigRecord *config = NULL; + guint i, j; + + pres = gst_h265_parser_parse_decoder_config_record (parser, + data, size, &config); + if (pres != GST_H265_PARSER_OK) { + GST_WARNING_OBJECT (self, "Failed to parse hvcC data"); + return FALSE; + } + + self->nal_length_size = config->length_size_minus_one + 1; + GST_DEBUG_OBJECT (self, "nal length size %u", self->nal_length_size); + + for (i = 0; i < config->nalu_array->len; i++) { + GstH265DecoderConfigRecordNalUnitArray *array = + &g_array_index (config->nalu_array, + GstH265DecoderConfigRecordNalUnitArray, i); + + for (j = 0; j < array->nalu->len; j++) { + GstH265NalUnit *nalu = &g_array_index (array->nalu, GstH265NalUnit, j); + + switch (nalu->type) { + case GST_H265_NAL_VPS: + pres = gst_h265_parser_parse_vps (parser, nalu, &vps); + if (pres != GST_H265_PARSER_OK) { + GST_WARNING_OBJECT (self, "Failed to parse VPS"); + goto out; + } + gst_h265_parser_update_vps (self->preproc_parser, &vps); + break; + case GST_H265_NAL_SPS: + pres = gst_h265_parser_parse_sps (parser, nalu, &sps, TRUE); + if (pres != GST_H265_PARSER_OK) { + GST_WARNING_OBJECT (self, "Failed to parse SPS"); + goto out; + } + gst_h265_parser_update_sps (self->preproc_parser, &sps); + break; + case GST_H265_NAL_PPS: + pres = gst_h265_parser_parse_pps (parser, nalu, &pps); + if (pres != GST_H265_PARSER_OK) { + GST_WARNING_OBJECT (self, "Failed to parse PPS"); + goto out; + } + gst_h265_parser_update_pps (self->preproc_parser, &pps); + break; + default: + break; + } + } + } + + ret = TRUE; + +out: + gst_h265_decoder_config_record_free (config); + return ret; +} + +gboolean +gst_h265_reorder_set_caps (GstH265Reorder * self, GstCaps * caps, + GstClockTime * latency) +{ + GstStructure *s; + const gchar *str; + const GValue *codec_data; + gboolean ret = TRUE; + gint fps_n, fps_d; + + GST_DEBUG_OBJECT (self, "Set caps %" GST_PTR_FORMAT, caps); + + self->nal_length_size = 4; + self->is_hevc = FALSE; + + s = 
gst_caps_get_structure (caps, 0); + str = gst_structure_get_string (s, "stream-format"); + if (str && (g_strcmp0 (str, "hvc1") == 0 || g_strcmp0 (str, "hev1") == 0)) + self->is_hevc = TRUE; + + if (gst_structure_get_fraction (s, "framerate", &fps_n, &fps_d) && + fps_n > 0 && fps_d > 0) { + self->fps_n = fps_n; + self->fps_d = fps_d; + } else { + self->fps_n = 25; + self->fps_d = 1; + } + + codec_data = gst_structure_get_value (s, "codec_data"); + if (codec_data && G_VALUE_TYPE (codec_data) == GST_TYPE_BUFFER) { + GstBuffer *buf = gst_value_get_buffer (codec_data); + GstMapInfo info; + if (gst_buffer_map (buf, &info, GST_MAP_READ)) { + ret = gst_h265_reorder_parse_codec_data (self, info.data, info.size); + gst_buffer_unmap (buf, &info); + } else { + GST_ERROR_OBJECT (self, "Couldn't map codec data"); + ret = FALSE; + } + } + + if (self->need_reorder) + *latency = self->latency; + else + *latency = 0; + + return ret; +} + +static gboolean +gst_h265_reorder_fill_picture_from_slice (GstH265Reorder * self, + const GstH265Slice * slice, GstH265Picture * picture) +{ + const GstH265SliceHdr *slice_hdr = &slice->header; + const GstH265NalUnit *nalu = &slice->nalu; + + picture->RapPicFlag = slice->rap_pic_flag; + picture->NoRaslOutputFlag = slice->no_rasl_output_flag; + picture->IntraPicFlag = slice->intra_pic_flag; + picture->NoOutputOfPriorPicsFlag = slice->no_output_of_prior_pics_flag; + if (picture->IntraPicFlag) { + self->associated_irap_NoRaslOutputFlag = picture->NoRaslOutputFlag; + } + + if (GST_H265_IS_NAL_TYPE_RASL (nalu->type) && + self->associated_irap_NoRaslOutputFlag) { + picture->output_flag = FALSE; + } else { + picture->output_flag = slice_hdr->pic_output_flag; + } + + return TRUE; +} + +#define RSV_VCL_N10 10 +#define RSV_VCL_N12 12 +#define RSV_VCL_N14 14 + +static gboolean +nal_is_ref (guint8 nal_type) +{ + gboolean ret = FALSE; + switch (nal_type) { + case GST_H265_NAL_SLICE_TRAIL_N: + case GST_H265_NAL_SLICE_TSA_N: + case GST_H265_NAL_SLICE_STSA_N: + 
case GST_H265_NAL_SLICE_RADL_N: + case GST_H265_NAL_SLICE_RASL_N: + case RSV_VCL_N10: + case RSV_VCL_N12: + case RSV_VCL_N14: + ret = FALSE; + break; + default: + ret = TRUE; + break; + } + return ret; +} + +static gboolean +gst_h265_reorder_calculate_poc (GstH265Reorder * self, + const GstH265Slice * slice, GstH265Picture * picture) +{ + const GstH265SliceHdr *slice_hdr = &slice->header; + const GstH265NalUnit *nalu = &slice->nalu; + const GstH265SPS *sps = self->active_sps; + gint32 MaxPicOrderCntLsb = 1 << (sps->log2_max_pic_order_cnt_lsb_minus4 + 4); + gboolean is_irap; + + self->prev_poc_lsb = self->poc_lsb; + self->prev_poc_msb = self->poc_msb; + + is_irap = GST_H265_IS_NAL_TYPE_IRAP (nalu->type); + + if (!(is_irap && picture->NoRaslOutputFlag)) { + self->prev_poc_lsb = self->prev_tid0pic_poc_lsb; + self->prev_poc_msb = self->prev_tid0pic_poc_msb; + } + + /* Finding PicOrderCntMsb */ + if (is_irap && picture->NoRaslOutputFlag) { + self->poc_msb = 0; + } else { + /* (8-1) */ + if ((slice_hdr->pic_order_cnt_lsb < self->prev_poc_lsb) && + ((self->prev_poc_lsb - slice_hdr->pic_order_cnt_lsb) >= + (MaxPicOrderCntLsb / 2))) + self->poc_msb = self->prev_poc_msb + MaxPicOrderCntLsb; + + else if ((slice_hdr->pic_order_cnt_lsb > self->prev_poc_lsb) && + ((slice_hdr->pic_order_cnt_lsb - self->prev_poc_lsb) > + (MaxPicOrderCntLsb / 2))) + self->poc_msb = self->prev_poc_msb - MaxPicOrderCntLsb; + + else + self->poc_msb = self->prev_poc_msb; + } + + /* (8-2) */ + self->poc = picture->pic_order_cnt = + self->poc_msb + slice_hdr->pic_order_cnt_lsb; + self->poc_lsb = picture->pic_order_cnt_lsb = slice_hdr->pic_order_cnt_lsb; + + if (GST_H265_IS_NAL_TYPE_IDR (nalu->type)) { + picture->pic_order_cnt = 0; + picture->pic_order_cnt_lsb = 0; + self->poc_lsb = 0; + self->poc_msb = 0; + self->prev_poc_lsb = 0; + self->prev_poc_msb = 0; + self->prev_tid0pic_poc_lsb = 0; + self->prev_tid0pic_poc_msb = 0; + } + + GST_LOG_OBJECT (self, + "PicOrderCntVal %d, (lsb %d)", 
picture->pic_order_cnt,
+      picture->pic_order_cnt_lsb);
+
+  if (nalu->temporal_id_plus1 == 1 && !GST_H265_IS_NAL_TYPE_RASL (nalu->type) &&
+      !GST_H265_IS_NAL_TYPE_RADL (nalu->type) && nal_is_ref (nalu->type)) {
+    self->prev_tid0pic_poc_lsb = slice_hdr->pic_order_cnt_lsb;
+    self->prev_tid0pic_poc_msb = self->poc_msb;
+  }
+
+  return TRUE;
+}
+
+static gboolean
+gst_h265_reorder_init_current_picture (GstH265Reorder * self)
+{
+  if (!gst_h265_reorder_fill_picture_from_slice (self, &self->current_slice,
+          self->current_picture)) {
+    return FALSE;
+  }
+
+  if (!gst_h265_reorder_calculate_poc (self,
+          &self->current_slice, self->current_picture))
+    return FALSE;
+
+  /* Use picture struct parsed from picture timing SEI */
+  self->current_picture->pic_struct = self->cur_pic_struct;
+  self->current_picture->source_scan_type = self->cur_source_scan_type;
+  self->current_picture->duplicate_flag = self->cur_duplicate_flag;
+
+  return TRUE;
+}
+
+static gboolean
+has_entry_in_rps (GstH265Picture * dpb_pic,
+    GstH265Picture ** rps_list, guint rps_list_length)
+{
+  guint i;
+
+  if (!dpb_pic || !rps_list || !rps_list_length)
+    return FALSE;
+
+  for (i = 0; i < rps_list_length; i++) {
+    if (rps_list[i] && rps_list[i]->pic_order_cnt == dpb_pic->pic_order_cnt)
+      return TRUE;
+  }
+  return FALSE;
+}
+
+static void
+gst_h265_reorder_derive_and_mark_rps (GstH265Reorder * self,
+    GstH265Picture * picture, gint32 * CurrDeltaPocMsbPresentFlag,
+    gint32 * FollDeltaPocMsbPresentFlag)
+{
+  guint i;
+  GArray *dpb_array;
+
+  gst_h265_reorder_clear_ref_pic_sets (self);
+
+  /* (8-6) */
+  for (i = 0; i < self->NumPocLtCurr; i++) {
+    if (!CurrDeltaPocMsbPresentFlag[i]) {
+      self->RefPicSetLtCurr[i] =
+          gst_h265_dpb_get_ref_by_poc_lsb (self->dpb, self->PocLtCurr[i]);
+    } else {
+      self->RefPicSetLtCurr[i] =
+          gst_h265_dpb_get_ref_by_poc (self->dpb, self->PocLtCurr[i]);
+    }
+  }
+
+  for (i = 0; i < self->NumPocLtFoll; i++) {
+    if (!FollDeltaPocMsbPresentFlag[i]) {
+      self->RefPicSetLtFoll[i] =
+          gst_h265_dpb_get_ref_by_poc_lsb (self->dpb, self->PocLtFoll[i]);
+    } else {
+      self->RefPicSetLtFoll[i] =
+          gst_h265_dpb_get_ref_by_poc (self->dpb, self->PocLtFoll[i]);
+    }
+  }
+
+  /* Mark all ref pics in RefPicSetLtCurr and RefPicSetLtFol as long_term_refs */
+  for (i = 0; i < self->NumPocLtCurr; i++) {
+    if (self->RefPicSetLtCurr[i]) {
+      self->RefPicSetLtCurr[i]->ref = TRUE;
+      self->RefPicSetLtCurr[i]->long_term = TRUE;
+    }
+  }
+
+  for (i = 0; i < self->NumPocLtFoll; i++) {
+    if (self->RefPicSetLtFoll[i]) {
+      self->RefPicSetLtFoll[i]->ref = TRUE;
+      self->RefPicSetLtFoll[i]->long_term = TRUE;
+    }
+  }
+
+  /* (8-7) */
+  for (i = 0; i < self->NumPocStCurrBefore; i++) {
+    self->RefPicSetStCurrBefore[i] =
+        gst_h265_dpb_get_short_ref_by_poc (self->dpb, self->PocStCurrBefore[i]);
+  }
+
+  for (i = 0; i < self->NumPocStCurrAfter; i++) {
+    self->RefPicSetStCurrAfter[i] =
+        gst_h265_dpb_get_short_ref_by_poc (self->dpb, self->PocStCurrAfter[i]);
+  }
+
+  for (i = 0; i < self->NumPocStFoll; i++) {
+    self->RefPicSetStFoll[i] =
+        gst_h265_dpb_get_short_ref_by_poc (self->dpb, self->PocStFoll[i]);
+  }
+
+  /* Mark all dpb pics not beloging to RefPicSet* as unused for ref */
+  dpb_array = gst_h265_dpb_get_pictures_all (self->dpb);
+  for (i = 0; i < dpb_array->len; i++) {
+    GstH265Picture *dpb_pic = g_array_index (dpb_array, GstH265Picture *, i);
+
+    if (dpb_pic &&
+        !has_entry_in_rps (dpb_pic, self->RefPicSetLtCurr, self->NumPocLtCurr)
+        && !has_entry_in_rps (dpb_pic, self->RefPicSetLtFoll,
+            self->NumPocLtFoll)
+        && !has_entry_in_rps (dpb_pic, self->RefPicSetStCurrAfter,
+            self->NumPocStCurrAfter)
+        && !has_entry_in_rps (dpb_pic, self->RefPicSetStCurrBefore,
+            self->NumPocStCurrBefore)
+        && !has_entry_in_rps (dpb_pic, self->RefPicSetStFoll,
+            self->NumPocStFoll)) {
+      GST_LOG_OBJECT (self, "Mark Picture %p (poc %d) as non-ref", dpb_pic,
+          dpb_pic->pic_order_cnt);
+      dpb_pic->ref = FALSE;
+      dpb_pic->long_term = FALSE;
+    }
+  }
+
+  g_array_unref (dpb_array);
+}
+
+static gboolean
+gst_h265_reorder_prepare_rps
(GstH265Reorder * self, const GstH265Slice * slice,
+    GstH265Picture * picture)
+{
+  gint32 CurrDeltaPocMsbPresentFlag[16] = { 0, };
+  gint32 FollDeltaPocMsbPresentFlag[16] = { 0, };
+  const GstH265SliceHdr *slice_hdr = &slice->header;
+  const GstH265NalUnit *nalu = &slice->nalu;
+  const GstH265SPS *sps = self->active_sps;
+  guint32 MaxPicOrderCntLsb = 1 << (sps->log2_max_pic_order_cnt_lsb_minus4 + 4);
+  gint i, j, k;
+
+  /* if it is an irap pic, set all ref pics in dpb as unused for ref */
+  if (GST_H265_IS_NAL_TYPE_IRAP (nalu->type) && picture->NoRaslOutputFlag) {
+    GST_DEBUG_OBJECT (self, "Mark all pictures in DPB as non-ref");
+    gst_h265_dpb_mark_all_non_ref (self->dpb);
+  }
+
+  /* Reset everything for IDR */
+  if (GST_H265_IS_NAL_TYPE_IDR (nalu->type)) {
+    memset (self->PocStCurrBefore, 0, sizeof (self->PocStCurrBefore));
+    memset (self->PocStCurrAfter, 0, sizeof (self->PocStCurrAfter));
+    memset (self->PocStFoll, 0, sizeof (self->PocStFoll));
+    memset (self->PocLtCurr, 0, sizeof (self->PocLtCurr));
+    memset (self->PocLtFoll, 0, sizeof (self->PocLtFoll));
+    self->NumPocStCurrBefore = self->NumPocStCurrAfter = self->NumPocStFoll = 0;
+    self->NumPocLtCurr = self->NumPocLtFoll = 0;
+  } else {
+    const GstH265ShortTermRefPicSet *stRefPic = NULL;
+    gint32 num_lt_pics, pocLt;
+    gint32 PocLsbLt[16] = { 0, };
+    gint32 UsedByCurrPicLt[16] = { 0, };
+    gint32 DeltaPocMsbCycleLt[16] = { 0, };
+    gint numtotalcurr = 0;
+
+    /* this is based on CurrRpsIdx described in spec */
+    if (!slice_hdr->short_term_ref_pic_set_sps_flag)
+      stRefPic = &slice_hdr->short_term_ref_pic_sets;
+    else if (sps->num_short_term_ref_pic_sets)
+      stRefPic =
+          &sps->short_term_ref_pic_set[slice_hdr->short_term_ref_pic_set_idx];
+
+    if (stRefPic == NULL)
+      return FALSE;
+
+    GST_LOG_OBJECT (self,
+        "NumDeltaPocs: %d, NumNegativePics: %d, NumPositivePics %d",
+        stRefPic->NumDeltaPocs, stRefPic->NumNegativePics,
+        stRefPic->NumPositivePics);
+
+    for (i = 0, j = 0, k = 0; i < stRefPic->NumNegativePics; i++) {
+      if (stRefPic->UsedByCurrPicS0[i]) {
+        self->PocStCurrBefore[j++] =
+            picture->pic_order_cnt + stRefPic->DeltaPocS0[i];
+        numtotalcurr++;
+      } else
+        self->PocStFoll[k++] = picture->pic_order_cnt + stRefPic->DeltaPocS0[i];
+    }
+    self->NumPocStCurrBefore = j;
+    for (i = 0, j = 0; i < stRefPic->NumPositivePics; i++) {
+      if (stRefPic->UsedByCurrPicS1[i]) {
+        self->PocStCurrAfter[j++] =
+            picture->pic_order_cnt + stRefPic->DeltaPocS1[i];
+        numtotalcurr++;
+      } else
+        self->PocStFoll[k++] = picture->pic_order_cnt + stRefPic->DeltaPocS1[i];
+    }
+    self->NumPocStCurrAfter = j;
+    self->NumPocStFoll = k;
+    num_lt_pics = slice_hdr->num_long_term_sps + slice_hdr->num_long_term_pics;
+    /* The variables PocLsbLt[i] and UsedByCurrPicLt[i] are derived as follows: */
+    for (i = 0; i < num_lt_pics; i++) {
+      if (i < slice_hdr->num_long_term_sps) {
+        PocLsbLt[i] = sps->lt_ref_pic_poc_lsb_sps[slice_hdr->lt_idx_sps[i]];
+        UsedByCurrPicLt[i] =
+            sps->used_by_curr_pic_lt_sps_flag[slice_hdr->lt_idx_sps[i]];
+      } else {
+        PocLsbLt[i] = slice_hdr->poc_lsb_lt[i];
+        UsedByCurrPicLt[i] = slice_hdr->used_by_curr_pic_lt_flag[i];
+      }
+      if (UsedByCurrPicLt[i])
+        numtotalcurr++;
+    }
+
+    self->NumPicTotalCurr = numtotalcurr;
+
+    /* The variable DeltaPocMsbCycleLt[i] is derived as follows: (7-38) */
+    for (i = 0; i < num_lt_pics; i++) {
+      if (i == 0 || i == slice_hdr->num_long_term_sps)
+        DeltaPocMsbCycleLt[i] = slice_hdr->delta_poc_msb_cycle_lt[i];
+      else
+        DeltaPocMsbCycleLt[i] =
+            slice_hdr->delta_poc_msb_cycle_lt[i] + DeltaPocMsbCycleLt[i - 1];
+    }
+
+    /* (8-5) */
+    for (i = 0, j = 0, k = 0; i < num_lt_pics; i++) {
+      pocLt = PocLsbLt[i];
+      if (slice_hdr->delta_poc_msb_present_flag[i])
+        pocLt +=
+            picture->pic_order_cnt - DeltaPocMsbCycleLt[i] * MaxPicOrderCntLsb -
+            slice_hdr->pic_order_cnt_lsb;
+      if (UsedByCurrPicLt[i]) {
+        self->PocLtCurr[j] = pocLt;
+        CurrDeltaPocMsbPresentFlag[j++] =
+            slice_hdr->delta_poc_msb_present_flag[i];
+      } else {
+        self->PocLtFoll[k] = pocLt;
+        FollDeltaPocMsbPresentFlag[k++] =
+            slice_hdr->delta_poc_msb_present_flag[i];
+      }
+    }
+    self->NumPocLtCurr =
j; + self->NumPocLtFoll = k; + } + + GST_LOG_OBJECT (self, "NumPocStCurrBefore: %d", self->NumPocStCurrBefore); + GST_LOG_OBJECT (self, "NumPocStCurrAfter: %d", self->NumPocStCurrAfter); + GST_LOG_OBJECT (self, "NumPocStFoll: %d", self->NumPocStFoll); + GST_LOG_OBJECT (self, "NumPocLtCurr: %d", self->NumPocLtCurr); + GST_LOG_OBJECT (self, "NumPocLtFoll: %d", self->NumPocLtFoll); + GST_LOG_OBJECT (self, "NumPicTotalCurr: %d", self->NumPicTotalCurr); + + /* the derivation process for the RPS and the picture marking */ + gst_h265_reorder_derive_and_mark_rps (self, picture, + CurrDeltaPocMsbPresentFlag, FollDeltaPocMsbPresentFlag); + + return TRUE; +} + +static void +gst_h265_reorder_set_output_buffer (GstH265Reorder * self, guint frame_num) +{ + gsize i, j; + + for (i = 0; i < self->frame_queue->len; i++) { + GstVideoCodecFrame *frame = g_ptr_array_index (self->frame_queue, i); + if (frame->system_frame_number != frame_num) + continue; + + /* Copy frame at present index to */ + if (!frame->output_buffer) { + GST_LOG_OBJECT (self, "decoding order: %u, display order: %u", + frame_num, self->present_num); + frame->presentation_frame_number = self->present_num; + self->present_num++; + for (j = 0; j < self->frame_queue->len; j++) { + GstVideoCodecFrame *other_frame = + g_ptr_array_index (self->frame_queue, j); + if (other_frame->system_frame_number == + frame->presentation_frame_number) { + frame->output_buffer = gst_buffer_ref (other_frame->input_buffer); + return; + } + } + } + + break; + } +} + +static void +gst_h265_reorder_output_picture (GstH265Reorder * self, + GstH265Picture * picture) +{ + guint frame_num = GST_CODEC_PICTURE_FRAME_NUMBER (picture); + + gst_h265_reorder_set_output_buffer (self, frame_num); + gst_h265_picture_unref (picture); + + /* Move completed frames to output queue */ + while (self->frame_queue->len > 0) { + GstVideoCodecFrame *frame = g_ptr_array_index (self->frame_queue, 0); + if (!frame->output_buffer) + break; + + frame = 
g_ptr_array_steal_index (self->frame_queue, 0); + g_ptr_array_add (self->output_queue, frame); + } +} + +GstH265Reorder * +gst_h265_reorder_new (gboolean need_reorder) +{ + GstH265Reorder *self = g_object_new (GST_TYPE_H265_REORDER, NULL); + gst_object_ref_sink (self); + + self->need_reorder = need_reorder; + + return self; +} + +void +gst_h265_reorder_drain (GstH265Reorder * reorder) +{ + GstH265Picture *picture; + + while ((picture = gst_h265_dpb_bump (reorder->dpb, TRUE)) != NULL) { + gst_h265_reorder_output_picture (reorder, picture); + } + + gst_h265_dpb_clear (reorder->dpb); + + /* Frame queue should be empty or holding only current frame */ + while (reorder->frame_queue->len > 0) { + GstVideoCodecFrame *frame = g_ptr_array_index (reorder->frame_queue, 0); + if (frame == reorder->current_frame) + break; + + GST_WARNING_OBJECT (reorder, "Remaining frame after drain %" GST_PTR_FORMAT, + frame->input_buffer); + + /* Move to output queue anyway */ + frame->output_buffer = gst_buffer_ref (frame->input_buffer); + frame = g_ptr_array_steal_index (reorder->frame_queue, 0); + g_ptr_array_add (reorder->output_queue, frame); + } + + /* presentation number */ + if (reorder->current_frame) + reorder->present_num = reorder->current_frame->system_frame_number; + else + reorder->present_num = reorder->system_num; +} + +/* C.5.2.2 */ +static gboolean +gst_h265_reorder_dpb_init (GstH265Reorder * self, const GstH265Slice * slice, + GstH265Picture * picture) +{ + const GstH265SPS *sps = self->active_sps; + GstH265Picture *to_output; + + /* C 3.2 */ + if (slice->clear_dpb) { + /* Ignores NoOutputOfPriorPicsFlag and drain all */ + gst_h265_reorder_drain (self); + } else { + /* TODO: According to 7.4.3.3.3, TwoVersionsOfCurrDecPicFlag + * should be considered. 
+   *
+   * NOTE: (See 8.1.3) if TwoVersionsOfCurrDecPicFlag is 1,
+   * current picture requires two picture buffers allocated in DPB storage,
+   * one is decoded picture *after* in-loop filter, and the other is
+   * decoded picture *before* in-loop filter, so that current picture
+   * can be used as a reference of the current picture
+   * (e.g., intra block copy method in SCC).
+   * Here TwoVersionsOfCurrDecPicFlag takes effect in order to ensure
+   * at least two empty DPB buffer before starting current picture decoding.
+   *
+   * However, two DPB picture allocation is not implemented
+   * in current baseclass (which would imply that we are doing reference
+   * picture management wrongly in case of SCC).
+   * Let's ignore TwoVersionsOfCurrDecPicFlag for now */
+    guint max_dec_pic_buffering =
+        sps->max_dec_pic_buffering_minus1[sps->max_sub_layers_minus1] + 1;
+    gst_h265_dpb_delete_unused (self->dpb);
+    while (gst_h265_dpb_needs_bump (self->dpb,
+            sps->max_num_reorder_pics[sps->max_sub_layers_minus1],
+            self->SpsMaxLatencyPictures, max_dec_pic_buffering)) {
+      to_output = gst_h265_dpb_bump (self->dpb, FALSE);
+
+      /* Something wrong...
*/ + if (!to_output) { + GST_WARNING_OBJECT (self, "Bumping is needed but no picture to output"); + break; + } + + gst_h265_reorder_output_picture (self, to_output); + } + } + + return TRUE; +} + +static gboolean +gst_h265_reorder_start_current_picture (GstH265Reorder * self) +{ + g_assert (self->current_picture != NULL); + g_assert (self->active_sps != NULL); + g_assert (self->active_pps != NULL); + + if (!gst_h265_reorder_init_current_picture (self)) + return FALSE; + + /* Drop all RASL pictures having NoRaslOutputFlag is TRUE for the + * associated IRAP picture */ + if (GST_H265_IS_NAL_TYPE_RASL (self->current_slice.nalu.type) && + self->associated_irap_NoRaslOutputFlag) { + GST_DEBUG_OBJECT (self, "Ignores associated_irap_NoRaslOutputFlag"); + } + + if (!gst_h265_reorder_prepare_rps (self, &self->current_slice, + self->current_picture)) { + GST_WARNING_OBJECT (self, "Failed to prepare ref pic set"); + gst_clear_h265_picture (&self->current_picture); + return FALSE; + } + + if (!gst_h265_reorder_dpb_init (self, + &self->current_slice, self->current_picture)) { + GST_WARNING_OBJECT (self, "Failed to init dpb"); + gst_clear_h265_picture (&self->current_picture); + return FALSE; + } + + return TRUE; +} + +static void +gst_h265_reorder_finish_picture (GstH265Reorder * self, + GstH265Picture * picture) +{ + const GstH265SPS *sps = self->active_sps; + + GST_LOG_OBJECT (self, + "Finishing picture %p (poc %d), entries in DPB %d", + picture, picture->pic_order_cnt, gst_h265_dpb_get_size (self->dpb)); + + gst_h265_dpb_delete_unused (self->dpb); + + /* gst_h265_dpb_add() will take care of pic_latency_cnt increment and + * reference picture marking for this picture */ + gst_h265_dpb_add (self->dpb, picture); + + /* NOTE: As per C.5.2.2, bumping by sps_max_dec_pic_buffering_minus1 is + * applied only for the output and removal of pictures from the DPB before + * the decoding of the current picture. 
So pass zero here */
+  while (gst_h265_dpb_needs_bump (self->dpb,
+          sps->max_num_reorder_pics[sps->max_sub_layers_minus1],
+          self->SpsMaxLatencyPictures, 0)) {
+    GstH265Picture *to_output = gst_h265_dpb_bump (self->dpb, FALSE);
+
+    /* Something wrong... */
+    if (!to_output) {
+      GST_WARNING_OBJECT (self, "Bumping is needed but no picture to output");
+      break;
+    }
+
+    gst_h265_reorder_output_picture (self, to_output);
+  }
+}
+
+static void
+gst_h265_reorder_reset_frame_state (GstH265Reorder * self)
+{
+  /* Clear picture struct information */
+  self->cur_pic_struct = GST_H265_SEI_PIC_STRUCT_FRAME;
+  self->cur_source_scan_type = 2;
+  self->cur_duplicate_flag = 0;
+  self->no_output_of_prior_pics_flag = FALSE;
+  self->current_frame = NULL;
+  g_array_set_size (self->nalu, 0);
+}
+
+static GstBuffer *
+gst_h265_reorder_remove_caption_sei (GstH265Reorder * self, GstBuffer * buffer)
+{
+  GstH265ParserResult pres = GST_H265_PARSER_OK;
+  GstMapInfo map;
+  GstH265NalUnit nalu;
+  guint i;
+  gboolean have_sei = FALSE;
+  GstBuffer *new_buf;
+
+  g_array_set_size (self->au_nalus, 0);
+
+  gst_buffer_map (buffer, &map, GST_MAP_READ);
+  if (self->is_hevc) {
+    guint offset = 0;
+    gsize consumed = 0;
+    guint i;
+
+    do {
+      pres = gst_h265_parser_identify_and_split_nalu_hevc (self->parser,
+          map.data, offset, map.size, self->nal_length_size,
+          self->split_nalu, &consumed);
+      if (pres != GST_H265_PARSER_OK)
+        break;
+
+      for (i = 0; i < self->split_nalu->len; i++) {
+        nalu = g_array_index (self->split_nalu, GstH265NalUnit, i);
+        g_array_append_val (self->au_nalus, nalu);
+      }
+
+      offset += consumed;
+    } while (pres == GST_H265_PARSER_OK);
+  } else {
+    pres = gst_h265_parser_identify_nalu (self->parser,
+        map.data, 0, map.size, &nalu);
+
+    if (pres == GST_H265_PARSER_NO_NAL_END)
+      pres = GST_H265_PARSER_OK;
+
+    while (pres == GST_H265_PARSER_OK) {
+      g_array_append_val (self->au_nalus, nalu);
+
+      pres = gst_h265_parser_identify_nalu (self->parser,
+          map.data, nalu.offset + nalu.size, map.size,
&nalu); + + if (pres == GST_H265_PARSER_NO_NAL_END) + pres = GST_H265_PARSER_OK; + } + } + + /* Fast scan without parsing */ + for (i = 0; i < self->au_nalus->len; i++) { + GstH265NalUnit *nl = &g_array_index (self->au_nalus, GstH265NalUnit, i); + switch (nl->type) { + case GST_H265_NAL_VPS: + { + GstH265VPS vps; + gst_h265_parser_parse_vps (self->parser, nl, &vps); + break; + } + case GST_H265_NAL_SPS: + { + GstH265SPS sps; + gst_h265_parser_parse_sps (self->parser, nl, &sps, TRUE); + break; + } + case GST_H265_NAL_PREFIX_SEI: + case GST_H265_NAL_SUFFIX_SEI: + have_sei = TRUE; + break; + default: + break; + } + } + + if (!have_sei) { + GST_LOG_OBJECT (self, "Buffer without SEI, %" GST_PTR_FORMAT, buffer); + gst_buffer_unmap (buffer, &map); + g_array_set_size (self->au_nalus, 0); + return gst_buffer_ref (buffer); + } + + new_buf = gst_buffer_new (); + gst_buffer_copy_into (new_buf, buffer, GST_BUFFER_COPY_METADATA, 0, -1); + + for (i = 0; i < self->au_nalus->len; i++) { + GstH265NalUnit *nl = &g_array_index (self->au_nalus, GstH265NalUnit, i); + GstMemory *mem = NULL; + + if (nl->type == GST_H265_NAL_PREFIX_SEI || + nl->type == GST_H265_NAL_SUFFIX_SEI) { + GArray *msg = NULL; + gint j; + gst_h265_parser_parse_sei (self->parser, nl, &msg); + gboolean have_caption_sei = FALSE; + + for (j = 0; j < (gint) msg->len; j++) { + GstH265SEIMessage *sei = &g_array_index (msg, GstH265SEIMessage, j); + GstH265RegisteredUserData *rud; + if (sei->payloadType != GST_H265_SEI_REGISTERED_USER_DATA) + continue; + + rud = &sei->payload.registered_user_data; + + if (!gst_h264_reorder_is_cea708_sei (rud->country_code, + rud->data, rud->size)) { + continue; + } + + GST_LOG_OBJECT (self, "Found CEA708 caption SEI"); + have_caption_sei = TRUE; + + g_array_remove_index (msg, j); + j--; + } + + if (have_caption_sei) { + if (msg->len > 0) { + /* Creates new SEI memory */ + if (self->is_hevc) { + mem = gst_h265_create_sei_memory_hevc (nl->layer_id, + nl->temporal_id_plus1, 
self->nal_length_size, msg); + } else { + mem = gst_h265_create_sei_memory (nl->layer_id, + nl->temporal_id_plus1, 4, msg); + } + + if (!mem) + GST_ERROR_OBJECT (self, "Couldn't create SEI memory"); + else + gst_buffer_append_memory (new_buf, mem); + } + } else { + gsize size = nl->size + (nl->offset - nl->sc_offset); + gpointer *data = g_memdup2 (nl->data + nl->sc_offset, size); + mem = gst_memory_new_wrapped (0, data, size, 0, size, data, g_free); + gst_buffer_append_memory (new_buf, mem); + } + + g_array_unref (msg); + } else { + gsize size = nl->size + (nl->offset - nl->sc_offset); + gpointer *data = g_memdup2 (nl->data + nl->sc_offset, size); + mem = gst_memory_new_wrapped (0, data, size, 0, size, data, g_free); + gst_buffer_append_memory (new_buf, mem); + } + } + + gst_buffer_unmap (buffer, &map); + g_array_set_size (self->au_nalus, 0); + + return new_buf; +} + +gboolean +gst_h265_reorder_push (GstH265Reorder * reorder, GstVideoCodecFrame * frame, + GstClockTime * latency) +{ + GstBuffer *in_buf; + GstH265NalUnit nalu; + GstH265ParserResult pres = GST_H265_PARSER_OK; + GstMapInfo map; + gboolean decode_ret = TRUE; + guint i; + + gst_h265_reorder_reset_frame_state (reorder); + + frame->system_frame_number = reorder->system_num; + frame->decode_frame_number = reorder->system_num; + + GST_LOG_OBJECT (reorder, + "Push frame %u, frame queue size: %u, output queue size %u", + frame->system_frame_number, reorder->frame_queue->len, + reorder->output_queue->len); + + in_buf = gst_h265_reorder_remove_caption_sei (reorder, frame->input_buffer); + if (in_buf) { + gst_buffer_unref (frame->input_buffer); + frame->input_buffer = in_buf; + } else { + in_buf = frame->input_buffer; + } + + reorder->system_num++; + + if (!reorder->need_reorder) { + g_ptr_array_add (reorder->output_queue, frame); + *latency = 0; + return TRUE; + } + + g_ptr_array_add (reorder->frame_queue, frame); + reorder->current_frame = frame; + + gst_buffer_map (in_buf, &map, GST_MAP_READ); + if 
(reorder->is_hevc) { + guint offset = 0; + gsize consumed = 0; + + do { + pres = gst_h265_parser_identify_and_split_nalu_hevc (reorder->parser, + map.data, offset, map.size, reorder->nal_length_size, + reorder->split_nalu, &consumed); + if (pres != GST_H265_PARSER_OK) + break; + + for (i = 0; i < reorder->split_nalu->len; i++) { + GstH265NalUnit *nl = + &g_array_index (reorder->split_nalu, GstH265NalUnit, i); + pres = gst_h265_reorder_parse_nalu (reorder, nl); + if (pres != GST_H265_PARSER_OK) + break; + } + + if (pres != GST_H265_PARSER_OK) + break; + + offset += consumed; + } while (pres == GST_H265_PARSER_OK); + } else { + pres = gst_h265_parser_identify_nalu (reorder->parser, + map.data, 0, map.size, &nalu); + + if (pres == GST_H265_PARSER_NO_NAL_END) + pres = GST_H265_PARSER_OK; + + while (pres == GST_H265_PARSER_OK) { + pres = gst_h265_reorder_parse_nalu (reorder, &nalu); + if (pres != GST_H265_PARSER_OK) + break; + + pres = gst_h265_parser_identify_nalu (reorder->parser, + map.data, nalu.offset + nalu.size, map.size, &nalu); + if (pres == GST_H265_PARSER_NO_NAL_END) + pres = GST_H265_PARSER_OK; + } + } + + for (i = 0; i < reorder->nalu->len && decode_ret; i++) { + GstH265ReorderNalUnit *decoder_nalu = + &g_array_index (reorder->nalu, GstH265ReorderNalUnit, i); + decode_ret = gst_h265_reorder_decode_nalu (reorder, decoder_nalu); + } + + gst_buffer_unmap (in_buf, &map); + gst_h265_reorder_reset_frame_state (reorder); + + if (!decode_ret) { + GST_ERROR_OBJECT (reorder, "Couldn't decode frame"); + gst_clear_h265_picture (&reorder->current_picture); + reorder->current_frame = NULL; + + g_ptr_array_remove (reorder->frame_queue, frame); + reorder->system_num--; + + return FALSE; + } + + if (!reorder->current_picture) { + GST_DEBUG_OBJECT (reorder, + "AU buffer without slice data, current frame %u", + frame->system_frame_number); + + g_ptr_array_remove (reorder->frame_queue, frame); + reorder->current_frame = NULL; + reorder->system_num--; + + return FALSE; + } + + 
gst_h265_reorder_finish_picture (reorder, reorder->current_picture); + reorder->current_picture = NULL; + reorder->current_frame = NULL; + + *latency = reorder->latency; + + return TRUE; +} + +GstVideoCodecFrame * +gst_h265_reorder_pop (GstH265Reorder * reorder) +{ + if (!reorder->output_queue->len) { + GST_LOG_OBJECT (reorder, "Empty output queue, frames queue size %u", + reorder->frame_queue->len); + return NULL; + } + + return g_ptr_array_steal_index (reorder->output_queue, 0); +} + +guint +gst_h265_reorder_get_num_buffered (GstH265Reorder * reorder) +{ + return reorder->frame_queue->len + reorder->output_queue->len; +} + +GstBuffer * +gst_h265_reorder_insert_sei (GstH265Reorder * reorder, GstBuffer * au, + GArray * sei) +{ + GstMemory *mem; + GstBuffer *new_buf; + + if (reorder->is_hevc) + mem = gst_h265_create_sei_memory_hevc (0, 1, reorder->nal_length_size, sei); + else + mem = gst_h265_create_sei_memory (0, 1, 4, sei); + + if (!mem) { + GST_ERROR_OBJECT (reorder, "Couldn't create SEI memory"); + return NULL; + } + + if (reorder->is_hevc) { + new_buf = gst_h265_parser_insert_sei_hevc (reorder->parser, + reorder->nal_length_size, au, mem); + } else { + new_buf = gst_h265_parser_insert_sei (reorder->parser, au, mem); + } + + gst_memory_unref (mem); + return new_buf; +}
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/gsth265reorder.h
Changed
(renamed from ext/closedcaption/gsth265reorder.h)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/gstline21dec.c
Changed
(renamed from ext/closedcaption/gstline21dec.c)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/gstline21dec.h
Changed
(renamed from ext/closedcaption/gstline21dec.h)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/gstline21enc.c
Changed
(renamed from ext/closedcaption/gstline21enc.c)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/gstline21enc.h
Changed
(renamed from ext/closedcaption/gstline21enc.h)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/hamm.h
Changed
(renamed from ext/closedcaption/hamm.h)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/io-sim.c
Changed
(renamed from ext/closedcaption/io-sim.c)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/io-sim.h
Changed
(renamed from ext/closedcaption/io-sim.h)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/macros.h
Changed
(renamed from ext/closedcaption/macros.h)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/meson.build
Added
@@ -0,0 +1,63 @@
+closedcaption_sources = [
+  'gstcccombiner.c',
+  'gstccextractor.c',
+  'gstccconverter.c',
+  'gstcea608mux.c',
+  'gstclosedcaption.c',
+  'gstline21dec.c',
+  'gstline21enc.c',
+  'ccutils.c',
+  'gsth264ccextractor.c',
+  'gsth265ccextractor.c',
+  'gsth264reorder.c',
+  'gsth265reorder.c',
+  'gstcodecccinserter.c',
+  'gsth264ccinserter.c',
+  'gsth265ccinserter.c',
+]
+
+closedcaption_headers = [
+  'gstline21dec.h',
+  'gstcccombiner.h',
+  'gstcea608mux.h',
+  'gstccconverter.h',
+  'gstccextractor.h',
+  'ccutils.h',
+  'gstline21enc.h',
+]
+
+zvbi_sources = [
+  'bit_slicer.c',
+  'decoder.c',
+  'raw_decoder.c',
+  'sampling_par.c',
+  'io-sim.c',
+]
+
+extra_args = ['-DGST_USE_UNSTABLE_API']
+
+doc_sources = []
+foreach s: closedcaption_sources + closedcaption_headers
+  doc_sources += meson.current_source_dir() / s
+endforeach
+
+plugin_sources += {
+  'closedcaption': pathsep.join(doc_sources)
+}
+
+if get_option('closedcaption').disabled()
+  subdir_done()
+endif
+
+gstclosedcaption = library('gstclosedcaption',
+  closedcaption_sources,
+  zvbi_sources,
+  c_args : gst_plugins_bad_args + extra_args,
+  link_args : noseh_link_args,
+  include_directories : [configinc],
+  dependencies : [gstvideo_dep, gstbase_dep, gst_dep, libm,
+                  gstcodecs_dep],
+  install : true,
+  install_dir : plugins_install_dir,
+)
+plugins += [gstclosedcaption]
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/misc.h
Added
@@ -0,0 +1,526 @@ +/* + * libzvbi -- Miscellaneous cows and chickens + * + * Copyright (C) 2000-2003 Iñaki García Etxebarria + * Copyright (C) 2002-2007 Michael H. Schimek + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, + * Boston, MA 02110-1301 USA. + */ + +/* $Id: misc.h,v 1.24 2013-07-02 02:32:31 mschimek Exp $ */ + +#ifndef MISC_H +#define MISC_H + +#include <stdio.h> +#include <stdlib.h> +#include <stddef.h> +#include <stdarg.h> +#include <string.h> +#include <inttypes.h> /* (u)intXX_t */ +#include <sys/types.h> /* (s)size_t */ +#include <float.h> /* DBL_MAX */ +#include <limits.h> /* (S)SIZE_MAX */ +#include <assert.h> +#include <glib.h> +#include <gst/gst.h> + +#include "macros.h" + +#define N_ELEMENTS(array) (sizeof (array) / sizeof (*(array))) + +#ifdef __GNUC__ + +#if __GNUC__ < 3 +/* Expect expression usually true/false, schedule accordingly. */ +# define likely(expr) (expr) +# define unlikely(expr) (expr) +#else +# define likely(expr) __builtin_expect(expr, 1) +# define unlikely(expr) __builtin_expect(expr, 0) +#endif + +#undef __i386__ +#undef __i686__ +/* FIXME #cpu is deprecated +#if #cpu (i386) +# define __i386__ 1 +#endif +#if #cpu (i686) +# define __i686__ 1 +#endif +*/ + +/* &x == PARENT (&x.tm_min, struct tm, tm_min), + safer than &x == (struct tm *) &x.tm_min. 
A NULL _ptr is safe and + will return NULL, not -offsetof(_member). */ +#undef PARENT +#define PARENT(_ptr, _type, _member) ({ \ + __typeof__ (&((_type *) 0)->_member) _p = (_ptr); \ + (_p != 0) ? (_type *)(((char *) _p) - offsetof (_type, \ + _member)) : (_type *) 0; \ +}) + +/* Like PARENT(), to be used with const _ptr. */ +#define CONST_PARENT(_ptr, _type, _member) ({ \ + __typeof__ (&((const _type *) 0)->_member) _p = (_ptr); \ + (_p != 0) ? (const _type *)(((const char *) _p) - offsetof \ + (const _type, _member)) : (const _type *) 0; \ +}) + +/* Note the following macros have no side effects only when you + compile with GCC, so don't expect this. */ + +/* Absolute value of int, long or long long without a branch. + Note ABS (INT_MIN) -> INT_MAX + 1. */ +#undef ABS +#define ABS(n) ({ \ + register __typeof__ (n) _n = (n), _t = _n; \ + if (-1 == (-1 >> 1)) { /* do we have signed shifts? */ \ + _t >>= sizeof (_t) * 8 - 1; \ + _n ^= _t; \ + _n -= _t; \ + } else if (_n < 0) { /* also warns if n is unsigned type */ \ + _n = -_n; \ + } \ + /* return */ _n; \ +}) + +#undef MIN +#define MIN(x, y) ({ \ + __typeof__ (x) _x = (x); \ + __typeof__ (y) _y = (y); \ + (void)(&_x == &_y); /* warn if types do not match */ \ + /* return */ (_x < _y) ? _x : _y; \ +}) + +#undef MAX +#define MAX(x, y) ({ \ + __typeof__ (x) _x = (x); \ + __typeof__ (y) _y = (y); \ + (void)(&_x == &_y); /* warn if types do not match */ \ + /* return */ (_x > _y) ? _x : _y; \ +}) + +/* Note other compilers may swap only int, long or pointer. 
*/ +#undef SWAP +#define SWAP(x, y) \ +do { \ + __typeof__ (x) _x = x; \ + x = y; \ + y = _x; \ +} while (0) + +#undef SATURATE +#ifdef __i686__ /* has conditional move */ +#define SATURATE(n, min, max) ({ \ + __typeof__ (n) _n = (n); \ + __typeof__ (n) _min = (min); \ + __typeof__ (n) _max = (max); \ + (void)(&_n == &_min); /* warn if types do not match */ \ + (void)(&_n == &_max); \ + if (_n < _min) \ + _n = _min; \ + if (_n > _max) \ + _n = _max; \ + /* return */ _n; \ +}) +#else +#define SATURATE(n, min, max) ({ \ + __typeof__ (n) _n = (n); \ + __typeof__ (n) _min = (min); \ + __typeof__ (n) _max = (max); \ + (void)(&_n == &_min); /* warn if types do not match */ \ + (void)(&_n == &_max); \ + if (_n < _min) \ + _n = _min; \ + else if (_n > _max) \ + _n = _max; \ + /* return */ _n; \ +}) +#endif + +#else /* !__GNUC__ */ + +#define likely(expr) (expr) +#define unlikely(expr) (expr) +#undef __i386__ +#undef __i686__ + +static char * +PARENT_HELPER (char *p, unsigned int offset) +{ return (0 == p) ? ((char *) 0) : p - offset; } + +static const char * +CONST_PARENT_HELPER (const char *p, unsigned int offset) +{ return (0 == p) ? ((char *) 0) : p - offset; } + +#define PARENT(_ptr, _type, _member) \ + ((0 == offsetof (_type, _member)) ? (_type *)(_ptr) \ + : (_type *) PARENT_HELPER ((char *)(_ptr), offsetof (_type, _member))) +#define CONST_PARENT(_ptr, _type, _member) \ + ((0 == offsetof (const _type, _member)) ? (const _type *)(_ptr) \ + : (const _type *) CONST_PARENT_HELPER ((const char *)(_ptr), \ + offsetof (const _type, _member))) + +#undef ABS +#define ABS(n) (((n) < 0) ? -(n) : (n)) + +#undef MIN +#define MIN(x, y) (((x) < (y)) ? (x) : (y)) + +#undef MAX +#define MAX(x, y) (((x) > (y)) ? (x) : (y)) + +#undef SWAP +#define SWAP(x, y) \ +do { \ + long _x = x; \ + x = y; \ + y = _x; \ +} while (0) + +#undef SATURATE +#define SATURATE(n, min, max) MIN (MAX (min, n), max) + +#endif /* !__GNUC__ */ + +/* 32 bit constant byte reverse, e.g. 0xAABBCCDD -> 0xDDCCBBAA. 
*/ +#define SWAB32(m) \ + (+ (((m) & 0xFF000000) >> 24) \ + + (((m) & 0xFF0000) >> 8) \ + + (((m) & 0xFF00) << 8) \ + + (((m) & 0xFF) << 24)) + +#ifdef HAVE_BUILTIN_POPCOUNT +# define popcnt(x) __builtin_popcount ((uint32_t)(x)) +#else +# define popcnt(x) _vbi_popcnt (x) +#endif + +extern unsigned int +_vbi_popcnt (uint32_t x); + +/* NB GCC inlines and optimizes these functions when size is const. */ +#define SET(var) memset (&(var), ~0, sizeof (var)) + +#define CLEAR(var) memset (&(var), 0, sizeof (var)) + +/* Useful to copy arrays, otherwise use assignment. */ +#define COPY(d, s) \ + (assert (sizeof (d) == sizeof (s)), memcpy (d, s, sizeof (d))) + +/* Copy string const into char array. */ +#define STRACPY(array, s) \ +do { \ + /* Complain if s is no string const or won't fit. */ \ + const char t[sizeof (array) - 1] _vbi_unused = s; \ + \ + memcpy (array, s, sizeof (s)); \ +} while (0) + +/* Copy bits through mask. */ +#define COPY_SET_MASK(dest, from, mask) \ + (dest ^= (from) ^ (dest & (mask))) + +/* Set bits if cond is TRUE, clear if FALSE. */ +#define COPY_SET_COND(dest, bits, cond) \ + ((cond) ? (dest |= (bits)) : (dest &= ~(bits))) + +/* Set and clear bits. */ +#define COPY_SET_CLEAR(dest, set, clear) \ + (dest = (dest & ~(clear)) | (set)) + +/* For applications, debugging and fault injection during unit tests. */ + +#define vbi_malloc malloc +#define vbi_realloc realloc +#define vbi_strdup strdup +#define vbi_free free + +#define vbi_cache_malloc vbi_malloc +#define vbi_cache_free vbi_free + +/* Helper functions. 
*/ + +_vbi_inline int +_vbi_to_ascii (int c) +{ + if (c < 0) + return '?'; + + c &= 0x7F; + + if (c < 0x20 || c >= 0x7F) + return '.'; + + return c; +} + +typedef struct { + const char * key; + int value; +} _vbi_key_value_pair; + +extern vbi_bool +_vbi_keyword_lookup (int * value, + const char ** inout_s, + const _vbi_key_value_pair * table, + unsigned int n_pairs) + _vbi_nonnull ((1, 2, 3)); + +extern void +_vbi_shrink_vector_capacity (void ** vector, + size_t * capacity, + size_t min_capacity, + size_t element_size) + _vbi_nonnull ((1, 2)); +extern vbi_bool +_vbi_grow_vector_capacity (void ** vector, + size_t * capacity, + size_t min_capacity, + size_t element_size) + _vbi_nonnull ((1, 2)); + +GST_DEBUG_CATEGORY_EXTERN (libzvbi_debug); + +#ifndef GST_DISABLE_GST_DEBUG +/* Logging stuff. */ +#define VBI_CAT_LEVEL_LOG(level,object,...) G_STMT_START{ \ + if (G_UNLIKELY ((level) <= GST_LEVEL_MAX && (level) <= _gst_debug_min)) { \ + gst_debug_log (libzvbi_debug, (level), __FILE__, GST_FUNCTION, __LINE__, \ + (GObject *) (object), __VA_ARGS__); \ + } \ +}G_STMT_END +#else +static inline void +VBI_CAT_LEVEL_LOG (GstDebugLevel level, + gpointer object, const char *format, ...) +{ +} +#endif /* GST_DISABLE_GST_DEBUG */ + +#ifdef G_HAVE_GNUC_VARARGS +#define error(hook, templ, args...) \ + VBI_CAT_LEVEL_LOG (GST_LEVEL_ERROR, NULL, templ , ##args) +#define warn(hook, templ, args...) \ + VBI_CAT_LEVEL_LOG (GST_LEVEL_WARNING, NULL, templ , ##args) +#define notice(hook, templ, args...) \ + VBI_CAT_LEVEL_LOG (GST_LEVEL_INFO, NULL, templ , ##args) +#define info(hook, templ, args...) \ + VBI_CAT_LEVEL_LOG (GST_LEVEL_INFO, NULL, templ , ##args) +#define debug1(hook, templ, args...) \ + VBI_CAT_LEVEL_LOG (GST_LEVEL_DEBUG, NULL, templ , ##args) +#define debug2(hook, templ, args...) \ + VBI_CAT_LEVEL_LOG (GST_LEVEL_LOG, NULL, templ , ##args) +#define debug3(hook, templ, args...) 
\ + VBI_CAT_LEVEL_LOG (GST_LEVEL_TRACE, NULL, templ , ##args) +#elif defined(G_HAVE_ISO_VARARGS) +#define error(hook, templ, ...) \ + VBI_CAT_LEVEL_LOG (GST_LEVEL_ERROR, NULL, templ, __VA_ARGS__) +#define warn(hook, templ, ...) \ + VBI_CAT_LEVEL_LOG (GST_LEVEL_WARNING, NULL, templ, __VA_ARGS__) +#define notice(hook, templ, ...) \ + VBI_CAT_LEVEL_LOG (GST_LEVEL_INFO, NULL, templ, __VA_ARGS__) +#define info(hook, templ, ...) \ + VBI_CAT_LEVEL_LOG (GST_LEVEL_INFO, NULL, templ, __VA_ARGS__) +#define debug1(hook, templ, ...) \ + VBI_CAT_LEVEL_LOG (GST_LEVEL_DEBUG, NULL, templ, __VA_ARGS__) +#define debug2(hook, templ, ...) \ + VBI_CAT_LEVEL_LOG (GST_LEVEL_LOG, NULL, templ, __VA_ARGS__) +#define debug3(hook, templ, ...) \ + VBI_CAT_LEVEL_LOG (GST_LEVEL_TRACE, NULL, templ, __VA_ARGS__) +#else +/* if someone needs this, they can implement the inline functions for it */ +#error "variadic macros are required" +#endif + + +#if 0 /* Replaced logging with GStreamer logging system */ +extern _vbi_log_hook _vbi_global_log; + +extern void +_vbi_log_vprintf (vbi_log_fn * log_fn, + void * user_data, + vbi_log_mask level, + const char * source_file, + const char * context, + const char * templ, + va_list ap) + _vbi_nonnull ((1, 4, 5, 6)); +extern void +_vbi_log_printf (vbi_log_fn * log_fn, + void * user_data, + vbi_log_mask level, + const char * source_file, + const char * context, + const char * templ, + ...) + _vbi_nonnull ((1, 4, 5, 6)) _vbi_format ((printf, 6, 7)); + +#define _vbi_log(hook, level, templ, args...) 
\ +do { \ + _vbi_log_hook *_h = hook; \ + \ + if ((NULL != _h && 0 != (_h->mask & level)) \ + || (_h = &_vbi_global_log, 0 != (_h->mask & level))) \ + _vbi_log_printf (_h->fn, _h->user_data, \ + level, __FILE__, __FUNCTION__, \ + templ , ##args); \ +} while (0) + +#define _vbi_vlog(hook, level, templ, ap) \ +do { \ + _vbi_log_hook *_h = hook; \ + \ + if ((NULL != _h && 0 != (_h->mask & level)) \ + || (_h = &_vbi_global_log, 0 != (_h->mask & level))) \ + _vbi_log_vprintf (_h->fn, _h->user_data, \ + level, __FILE__, __FUNCTION__, \ + templ, ap); \ +} while (0) +#define error(hook, templ, args...) \ + _vbi_log (hook, VBI_LOG_ERROR, templ , ##args) +#define warning(hook, templ, args...) \ + _vbi_log (hook, VBI_LOG_ERROR, templ , ##args) +#define notice(hook, templ, args...) \ + _vbi_log (hook, VBI_LOG_NOTICE, templ , ##args) +#define info(hook, templ, args...) \ + _vbi_log (hook, VBI_LOG_INFO, templ , ##args) +#define debug1(hook, templ, args...) \ + _vbi_log (hook, VBI_LOG_DEBUG, templ , ##args) +#define debug2(hook, templ, args...) \ + _vbi_log (hook, VBI_LOG_DEBUG2, templ , ##args) +#define debug3(hook, templ, args...) \ + _vbi_log (hook, VBI_LOG_DEBUG3, templ , ##args) +#endif + +/* Portability stuff. */ + +/* These should be defined in inttypes.h. */ +#ifndef PRId64 +# define PRId64 "lld" +#endif +#ifndef PRIu64 +# define PRIu64 "llu" +#endif +#ifndef PRIx64 +# define PRIx64 "llx" +#endif + +/* Should be defined in C99 limits.h? */ +#ifndef SIZE_MAX +# define SIZE_MAX ((size_t) -1) +#endif + +#ifndef TIME_MIN +# define TIME_MIN (_vbi_time_min ()) +_vbi_inline time_t +_vbi_time_min (void) +{ + const time_t t = (time_t) -1.25; + + if (t < -1) { + return (time_t)((sizeof (time_t) > 4) ? 
DBL_MIN : FLT_MIN); + } else if (t < 0) { + return ((uint64_t) 1) << (sizeof (time_t) * 8 - 1); + } else { + return 0; + } +} +#endif + +#ifndef TIME_MAX +# define TIME_MAX (_vbi_time_max ()) +_vbi_inline time_t +_vbi_time_max (void) +{ + const time_t t = (time_t) -1.25; + + if (t < -1) { + return (time_t)((sizeof (time_t) > 4) ? DBL_MAX : FLT_MAX); + } else if (t < 0) { + /* Most likely signed 32 or 64 bit. */ + return (((uint64_t) 1) << (sizeof (time_t) * 8 - 1)) - 1; + } else { + return -1; + } +} +#endif + +/* __va_copy is a GNU extension. */ +#ifndef __va_copy +# define __va_copy(ap1, ap2) do { ap1 = ap2; } while (0) +#endif + +#if 0 +/* Use this instead of strncpy(). strlcpy() is a BSD extension. */ +#ifndef HAVE_STRLCPY +# define strlcpy _vbi_strlcpy +#endif +#undef strncpy +#define strncpy use_strlcpy_instead + +extern size_t +_vbi_strlcpy (char * dst, + const char * src, + size_t size) + _vbi_nonnull ((1, 2)); +#endif + +/* /\* strndup() is a BSD/GNU extension. *\/ */ +/* #ifndef HAVE_STRNDUP */ +/* # define strndup _vbi_strndup */ +/* #endif */ + +/* extern char * */ +/* _vbi_strndup (const char * s, */ +/* size_t len) */ +/* _vbi_nonnull ((1)); */ + +/* vasprintf() is a GNU extension. */ +#ifndef HAVE_VASPRINTF +# define vasprintf _vbi_vasprintf +#endif + +extern int +_vbi_vasprintf (char ** dstp, + const char * templ, + va_list ap) + _vbi_nonnull ((1, 2)); + +/* asprintf() is a GNU extension. */ +#ifndef HAVE_ASPRINTF +# define asprintf _vbi_asprintf +#endif + +extern int +_vbi_asprintf (char ** dstp, + const char * templ, + ...) + _vbi_nonnull ((1, 2)) _vbi_format ((printf, 2, 3)); + +#undef sprintf +#define sprintf use_snprintf_or_asprintf_instead + +#endif /* MISC_H */ + +/* +Local variables: +c-set-style: K&R +c-basic-offset: 8 +End: +*/
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/raw_decoder.c
Changed
(renamed from ext/closedcaption/raw_decoder.c)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/raw_decoder.h
Changed
(renamed from ext/closedcaption/raw_decoder.h)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/sampling_par.c
Changed
(renamed from ext/closedcaption/sampling_par.c)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/sampling_par.h
Changed
(renamed from ext/closedcaption/sampling_par.h)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/closedcaption/sliced.h
Changed
(renamed from ext/closedcaption/sliced.h)
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/debugutils/gsttestsrcbin.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/debugutils/gsttestsrcbin.c
Changed
@@ -572,9 +572,8 @@ if (self->expose_sources_async) { GST_OBJECT_UNLOCK (self); - gst_element_call_async (GST_ELEMENT (self), - (GstElementCallAsyncFunc) gst_test_src_bin_create_sources, - NULL, NULL); + gst_object_call_async (GST_OBJECT (self), + (GstObjectCallAsyncFunc) gst_test_src_bin_create_sources, NULL); } else { GST_OBJECT_UNLOCK (self); @@ -619,7 +618,7 @@ switch (prop_id) { case PROP_STREAM_TYPES: { - gboolean set G_GNUC_UNUSED; /* G_DISABLE_ASSERT */ + gboolean set GST_UNUSED_ASSERT; gchar *uri = g_strdup_printf ("testbin://%s", g_value_get_string (value)); set = gst_uri_handler_set_uri (GST_URI_HANDLER (self), uri, NULL); @@ -680,9 +679,8 @@ switch (transition) { case GST_STATE_CHANGE_READY_TO_PAUSED:{ if (self->expose_sources_async) { - gst_element_call_async (element, - (GstElementCallAsyncFunc) gst_test_src_bin_create_sources, - NULL, NULL); + gst_object_call_async (GST_OBJECT_CAST (element), + (GstObjectCallAsyncFunc) gst_test_src_bin_create_sources, NULL); } else { gst_test_src_bin_create_sources (self); }
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/dvdspu/gstdvdspu.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/dvdspu/gstdvdspu.c
Changed
@@ -487,7 +487,9 @@ if (gst_caps_is_any (peer_caps)) { /* if peer returns ANY caps, return filtered src pad template caps */ - caps = gst_caps_copy (gst_pad_get_pad_template_caps (srcpad)); + GstCaps *tcaps = gst_pad_get_pad_template_caps (srcpad); + caps = gst_caps_copy (tcaps); + gst_caps_unref (tcaps); } else { /* duplicate caps which contains the composition into one version with * the meta and one without. Filter the other caps by the software caps */ @@ -542,7 +544,9 @@ if (gst_caps_is_any (peer_caps)) { /* if peer returns ANY caps, return filtered sink pad template caps */ - caps = gst_caps_copy (gst_pad_get_pad_template_caps (sinkpad)); + GstCaps *tcaps = gst_pad_get_pad_template_caps (sinkpad); + caps = gst_caps_copy (tcaps); + gst_caps_unref (tcaps); } else { /* return upstream caps + composition feature + upstream caps * filtered by the software caps. */
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/fieldanalysis/gstfieldanalysisorc-dist.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/fieldanalysis/gstfieldanalysisorc-dist.c
Changed
@@ -67,6 +67,7 @@ orc_int32 x22; float x2f2; orc_int16 x44; + orc_int8 x88; } orc_union64; #endif #ifndef ORC_RESTRICT @@ -74,6 +75,8 @@ #define ORC_RESTRICT restrict #elif defined(__GNUC__) && __GNUC__ >= 4 #define ORC_RESTRICT __restrict__ +#elif defined(_MSC_VER) +#define ORC_RESTRICT __restrict #else #define ORC_RESTRICT #endif @@ -114,6 +117,7 @@ /* begin Orc C target preamble */ +#include <math.h> #define ORC_CLAMP(x,a,b) ((x)<(a) ? (a) : ((x)>(b) ? (b) : (x))) #define ORC_ABS(a) ((a)<0 ? -(a) : (a)) #define ORC_MIN(a,b) ((a)<(b) ? (a) : (b)) @@ -149,6 +153,8 @@ #define ORC_RESTRICT restrict #elif defined(__GNUC__) && __GNUC__ >= 4 #define ORC_RESTRICT __restrict__ +#elif defined(_MSC_VER) +#define ORC_RESTRICT __restrict #else #define ORC_RESTRICT #endif @@ -270,66 +276,61 @@ int p1, int n) { OrcExecutor _ex, *ex = &_ex; - static volatile int p_inited = 0; - static OrcCode *c = 0; - void (*func) (OrcExecutor *); + static OrcOnce once = ORC_ONCE_INIT; + OrcCode *c; + OrcExecutorFunc func = NULL; - if (!p_inited) { - orc_once_mutex_lock (); - if (!p_inited) { - OrcProgram *p; + if (!orc_once_enter (&once, (void **) &c)) { + OrcProgram *p; #if 1 - static const orc_uint8 bc = { - 1, 9, 44, 102, 105, 101, 108, 100, 97, 110, 97, 108, 121, 115, 105, 115, - 95, 111, 114, 99, 95, 115, 97, 109, 101, 95, 112, 97, 114, 105, 116, - 121, - 95, 115, 97, 100, 95, 112, 108, 97, 110, 97, 114, 95, 121, 117, 118, 12, - 1, 1, 12, 1, 1, 13, 4, 16, 4, 20, 2, 20, 2, 20, 4, 20, - 4, 150, 32, 4, 150, 33, 5, 98, 32, 32, 33, 69, 32, 32, 154, 34, - 32, 111, 35, 34, 24, 106, 34, 34, 35, 181, 12, 34, 2, 0, - }; - p = orc_program_new_from_static_bytecode (bc); - orc_program_set_backup_function (p, - _backup_fieldanalysis_orc_same_parity_sad_planar_yuv); + static const orc_uint8 bc = { + 1, 9, 44, 102, 105, 101, 108, 100, 97, 110, 97, 108, 121, 115, 105, 115, + 95, 111, 114, 99, 95, 115, 97, 109, 101, 95, 112, 97, 114, 105, 116, 121, + 95, 115, 97, 100, 95, 112, 108, 97, 110, 97, 114, 95, 
121, 117, 118, 12, + 1, 1, 12, 1, 1, 13, 4, 16, 4, 20, 2, 20, 2, 20, 4, 20, + 4, 150, 32, 4, 150, 33, 5, 98, 32, 32, 33, 69, 32, 32, 154, 34, + 32, 111, 35, 34, 24, 106, 34, 34, 35, 181, 12, 34, 2, 0, + }; + p = orc_program_new_from_static_bytecode (bc); + orc_program_set_backup_function (p, + _backup_fieldanalysis_orc_same_parity_sad_planar_yuv); #else - p = orc_program_new (); - orc_program_set_name (p, "fieldanalysis_orc_same_parity_sad_planar_yuv"); - orc_program_set_backup_function (p, - _backup_fieldanalysis_orc_same_parity_sad_planar_yuv); - orc_program_add_source (p, 1, "s1"); - orc_program_add_source (p, 1, "s2"); - orc_program_add_accumulator (p, 4, "a1"); - orc_program_add_parameter (p, 4, "p1"); - orc_program_add_temporary (p, 2, "t1"); - orc_program_add_temporary (p, 2, "t2"); - orc_program_add_temporary (p, 4, "t3"); - orc_program_add_temporary (p, 4, "t4"); - - orc_program_append_2 (p, "convubw", 0, ORC_VAR_T1, ORC_VAR_S1, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "convubw", 0, ORC_VAR_T2, ORC_VAR_S2, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "subw", 0, ORC_VAR_T1, ORC_VAR_T1, ORC_VAR_T2, - ORC_VAR_D1); - orc_program_append_2 (p, "absw", 0, ORC_VAR_T1, ORC_VAR_T1, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "convuwl", 0, ORC_VAR_T3, ORC_VAR_T1, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "cmpgtsl", 0, ORC_VAR_T4, ORC_VAR_T3, ORC_VAR_P1, - ORC_VAR_D1); - orc_program_append_2 (p, "andl", 0, ORC_VAR_T3, ORC_VAR_T3, ORC_VAR_T4, - ORC_VAR_D1); - orc_program_append_2 (p, "accl", 0, ORC_VAR_A1, ORC_VAR_T3, ORC_VAR_D1, - ORC_VAR_D1); + p = orc_program_new (); + orc_program_set_name (p, "fieldanalysis_orc_same_parity_sad_planar_yuv"); + orc_program_set_backup_function (p, + _backup_fieldanalysis_orc_same_parity_sad_planar_yuv); + orc_program_add_source (p, 1, "s1"); + orc_program_add_source (p, 1, "s2"); + orc_program_add_accumulator (p, 4, "a1"); + orc_program_add_parameter (p, 4, "p1"); + orc_program_add_temporary 
(p, 2, "t1"); + orc_program_add_temporary (p, 2, "t2"); + orc_program_add_temporary (p, 4, "t3"); + orc_program_add_temporary (p, 4, "t4"); + + orc_program_append_2 (p, "convubw", 0, ORC_VAR_T1, ORC_VAR_S1, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "convubw", 0, ORC_VAR_T2, ORC_VAR_S2, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "subw", 0, ORC_VAR_T1, ORC_VAR_T1, ORC_VAR_T2, + ORC_VAR_D1); + orc_program_append_2 (p, "absw", 0, ORC_VAR_T1, ORC_VAR_T1, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "convuwl", 0, ORC_VAR_T3, ORC_VAR_T1, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "cmpgtsl", 0, ORC_VAR_T4, ORC_VAR_T3, ORC_VAR_P1, + ORC_VAR_D1); + orc_program_append_2 (p, "andl", 0, ORC_VAR_T3, ORC_VAR_T3, ORC_VAR_T4, + ORC_VAR_D1); + orc_program_append_2 (p, "accl", 0, ORC_VAR_A1, ORC_VAR_T3, ORC_VAR_D1, + ORC_VAR_D1); #endif - orc_program_compile (p); - c = orc_program_take_code (p); - orc_program_free (p); - } - p_inited = TRUE; - orc_once_mutex_unlock (); + orc_program_compile (p); + c = orc_program_take_code (p); + orc_program_free (p); + orc_once_leave (&once, c); } ex->arraysORC_VAR_A2 = c; ex->program = 0; @@ -453,65 +454,59 @@ int p1, int n) { OrcExecutor _ex, *ex = &_ex; - static volatile int p_inited = 0; - static OrcCode *c = 0; - void (*func) (OrcExecutor *); + static OrcOnce once = ORC_ONCE_INIT; + OrcCode *c; + OrcExecutorFunc func = NULL; - if (!p_inited) { - orc_once_mutex_lock (); - if (!p_inited) { - OrcProgram *p; + if (!orc_once_enter (&once, (void **) &c)) { + OrcProgram *p; #if 1 - static const orc_uint8 bc = { - 1, 9, 44, 102, 105, 101, 108, 100, 97, 110, 97, 108, 121, 115, 105, 115, - 95, 111, 114, 99, 95, 115, 97, 109, 101, 95, 112, 97, 114, 105, 116, - 121, - 95, 115, 115, 100, 95, 112, 108, 97, 110, 97, 114, 95, 121, 117, 118, - 12, - 1, 1, 12, 1, 1, 13, 4, 16, 4, 20, 2, 20, 2, 20, 4, 20, - 4, 150, 32, 4, 150, 33, 5, 98, 32, 32, 33, 176, 34, 32, 32, 111, - 35, 34, 24, 106, 34, 34, 35, 181, 12, 34, 2, 0, 
- }; - p = orc_program_new_from_static_bytecode (bc); - orc_program_set_backup_function (p, - _backup_fieldanalysis_orc_same_parity_ssd_planar_yuv); + static const orc_uint8 bc = { + 1, 9, 44, 102, 105, 101, 108, 100, 97, 110, 97, 108, 121, 115, 105, 115, + 95, 111, 114, 99, 95, 115, 97, 109, 101, 95, 112, 97, 114, 105, 116, 121, + 95, 115, 115, 100, 95, 112, 108, 97, 110, 97, 114, 95, 121, 117, 118, 12, + 1, 1, 12, 1, 1, 13, 4, 16, 4, 20, 2, 20, 2, 20, 4, 20, + 4, 150, 32, 4, 150, 33, 5, 98, 32, 32, 33, 176, 34, 32, 32, 111, + 35, 34, 24, 106, 34, 34, 35, 181, 12, 34, 2, 0, + }; + p = orc_program_new_from_static_bytecode (bc); + orc_program_set_backup_function (p, + _backup_fieldanalysis_orc_same_parity_ssd_planar_yuv); #else - p = orc_program_new (); - orc_program_set_name (p, "fieldanalysis_orc_same_parity_ssd_planar_yuv"); - orc_program_set_backup_function (p, - _backup_fieldanalysis_orc_same_parity_ssd_planar_yuv); - orc_program_add_source (p, 1, "s1"); - orc_program_add_source (p, 1, "s2"); - orc_program_add_accumulator (p, 4, "a1"); - orc_program_add_parameter (p, 4, "p1"); - orc_program_add_temporary (p, 2, "t1"); - orc_program_add_temporary (p, 2, "t2"); - orc_program_add_temporary (p, 4, "t3"); - orc_program_add_temporary (p, 4, "t4"); - - orc_program_append_2 (p, "convubw", 0, ORC_VAR_T1, ORC_VAR_S1, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "convubw", 0, ORC_VAR_T2, ORC_VAR_S2, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "subw", 0, ORC_VAR_T1, ORC_VAR_T1, ORC_VAR_T2, - ORC_VAR_D1); - orc_program_append_2 (p, "mulswl", 0, ORC_VAR_T3, ORC_VAR_T1, ORC_VAR_T1, - ORC_VAR_D1); - orc_program_append_2 (p, "cmpgtsl", 0, ORC_VAR_T4, ORC_VAR_T3, ORC_VAR_P1, - ORC_VAR_D1); - orc_program_append_2 (p, "andl", 0, ORC_VAR_T3, ORC_VAR_T3, ORC_VAR_T4, - ORC_VAR_D1); - orc_program_append_2 (p, "accl", 0, ORC_VAR_A1, ORC_VAR_T3, ORC_VAR_D1, - ORC_VAR_D1); + p = orc_program_new (); + orc_program_set_name (p, 
"fieldanalysis_orc_same_parity_ssd_planar_yuv"); + orc_program_set_backup_function (p, + _backup_fieldanalysis_orc_same_parity_ssd_planar_yuv); + orc_program_add_source (p, 1, "s1"); + orc_program_add_source (p, 1, "s2"); + orc_program_add_accumulator (p, 4, "a1"); + orc_program_add_parameter (p, 4, "p1"); + orc_program_add_temporary (p, 2, "t1"); + orc_program_add_temporary (p, 2, "t2"); + orc_program_add_temporary (p, 4, "t3"); + orc_program_add_temporary (p, 4, "t4"); + + orc_program_append_2 (p, "convubw", 0, ORC_VAR_T1, ORC_VAR_S1, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "convubw", 0, ORC_VAR_T2, ORC_VAR_S2, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "subw", 0, ORC_VAR_T1, ORC_VAR_T1, ORC_VAR_T2, + ORC_VAR_D1); + orc_program_append_2 (p, "mulswl", 0, ORC_VAR_T3, ORC_VAR_T1, ORC_VAR_T1, + ORC_VAR_D1); + orc_program_append_2 (p, "cmpgtsl", 0, ORC_VAR_T4, ORC_VAR_T3, ORC_VAR_P1, + ORC_VAR_D1); + orc_program_append_2 (p, "andl", 0, ORC_VAR_T3, ORC_VAR_T3, ORC_VAR_T4, + ORC_VAR_D1); + orc_program_append_2 (p, "accl", 0, ORC_VAR_A1, ORC_VAR_T3, ORC_VAR_D1, + ORC_VAR_D1); #endif - orc_program_compile (p); - c = orc_program_take_code (p); - orc_program_free (p); - } - p_inited = TRUE; - orc_once_mutex_unlock (); + orc_program_compile (p); + c = orc_program_take_code (p); + orc_program_free (p); + orc_once_leave (&once, c); } ex->arraysORC_VAR_A2 = c; ex->program = 0; @@ -745,100 +740,94 @@ int p1, int n) { OrcExecutor _ex, *ex = &_ex; - static volatile int p_inited = 0; - static OrcCode *c = 0; - void (*func) (OrcExecutor *); + static OrcOnce once = ORC_ONCE_INIT; + OrcCode *c; + OrcExecutorFunc func = NULL; - if (!p_inited) { - orc_once_mutex_lock (); - if (!p_inited) { - OrcProgram *p; + if (!orc_once_enter (&once, (void **) &c)) { + OrcProgram *p; #if 1 - static const orc_uint8 bc = { - 1, 9, 46, 102, 105, 101, 108, 100, 97, 110, 97, 108, 121, 115, 105, 115, - 95, 111, 114, 99, 95, 115, 97, 109, 101, 95, 112, 97, 114, 105, 116, - 121, - 95, 
51, 95, 116, 97, 112, 95, 112, 108, 97, 110, 97, 114, 95, 121, 117, - 118, 12, 1, 1, 12, 1, 1, 12, 1, 1, 12, 1, 1, 12, 1, 1, - 12, 1, 1, 13, 4, 14, 2, 2, 0, 0, 0, 16, 4, 20, 2, 20, - 2, 20, 2, 20, 2, 20, 2, 20, 2, 20, 4, 20, 4, 150, 32, 4, - 150, 33, 5, 150, 34, 6, 150, 35, 7, 150, 36, 8, 150, 37, 9, 93, - 33, 33, 16, 93, 36, 36, 16, 70, 32, 32, 33, 70, 32, 32, 34, 70, - 35, 35, 36, 70, 35, 35, 37, 98, 32, 32, 35, 69, 32, 32, 154, 38, - 32, 111, 39, 38, 24, 106, 38, 38, 39, 181, 12, 38, 2, 0, - }; - p = orc_program_new_from_static_bytecode (bc); - orc_program_set_backup_function (p, - _backup_fieldanalysis_orc_same_parity_3_tap_planar_yuv); + static const orc_uint8 bc = { + 1, 9, 46, 102, 105, 101, 108, 100, 97, 110, 97, 108, 121, 115, 105, 115, + 95, 111, 114, 99, 95, 115, 97, 109, 101, 95, 112, 97, 114, 105, 116, 121, + 95, 51, 95, 116, 97, 112, 95, 112, 108, 97, 110, 97, 114, 95, 121, 117, + 118, 12, 1, 1, 12, 1, 1, 12, 1, 1, 12, 1, 1, 12, 1, 1, + 12, 1, 1, 13, 4, 14, 2, 2, 0, 0, 0, 16, 4, 20, 2, 20, + 2, 20, 2, 20, 2, 20, 2, 20, 2, 20, 4, 20, 4, 150, 32, 4, + 150, 33, 5, 150, 34, 6, 150, 35, 7, 150, 36, 8, 150, 37, 9, 93, + 33, 33, 16, 93, 36, 36, 16, 70, 32, 32, 33, 70, 32, 32, 34, 70, + 35, 35, 36, 70, 35, 35, 37, 98, 32, 32, 35, 69, 32, 32, 154, 38, + 32, 111, 39, 38, 24, 106, 38, 38, 39, 181, 12, 38, 2, 0, + }; + p = orc_program_new_from_static_bytecode (bc); + orc_program_set_backup_function (p, + _backup_fieldanalysis_orc_same_parity_3_tap_planar_yuv); #else - p = orc_program_new (); - orc_program_set_name (p, - "fieldanalysis_orc_same_parity_3_tap_planar_yuv"); - orc_program_set_backup_function (p, - _backup_fieldanalysis_orc_same_parity_3_tap_planar_yuv); - orc_program_add_source (p, 1, "s1"); - orc_program_add_source (p, 1, "s2"); - orc_program_add_source (p, 1, "s3"); - orc_program_add_source (p, 1, "s4"); - orc_program_add_source (p, 1, "s5"); - orc_program_add_source (p, 1, "s6"); - orc_program_add_accumulator (p, 4, "a1"); - 
orc_program_add_constant (p, 2, 0x00000002, "c1"); - orc_program_add_parameter (p, 4, "p1"); - orc_program_add_temporary (p, 2, "t1"); - orc_program_add_temporary (p, 2, "t2"); - orc_program_add_temporary (p, 2, "t3"); - orc_program_add_temporary (p, 2, "t4"); - orc_program_add_temporary (p, 2, "t5"); - orc_program_add_temporary (p, 2, "t6"); - orc_program_add_temporary (p, 4, "t7"); - orc_program_add_temporary (p, 4, "t8"); - - orc_program_append_2 (p, "convubw", 0, ORC_VAR_T1, ORC_VAR_S1, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "convubw", 0, ORC_VAR_T2, ORC_VAR_S2, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "convubw", 0, ORC_VAR_T3, ORC_VAR_S3, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "convubw", 0, ORC_VAR_T4, ORC_VAR_S4, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "convubw", 0, ORC_VAR_T5, ORC_VAR_S5, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "convubw", 0, ORC_VAR_T6, ORC_VAR_S6, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "shlw", 0, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_C1, - ORC_VAR_D1); - orc_program_append_2 (p, "shlw", 0, ORC_VAR_T5, ORC_VAR_T5, ORC_VAR_C1, - ORC_VAR_D1); - orc_program_append_2 (p, "addw", 0, ORC_VAR_T1, ORC_VAR_T1, ORC_VAR_T2, - ORC_VAR_D1); - orc_program_append_2 (p, "addw", 0, ORC_VAR_T1, ORC_VAR_T1, ORC_VAR_T3, - ORC_VAR_D1); - orc_program_append_2 (p, "addw", 0, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_T5, - ORC_VAR_D1); - orc_program_append_2 (p, "addw", 0, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_T6, - ORC_VAR_D1); - orc_program_append_2 (p, "subw", 0, ORC_VAR_T1, ORC_VAR_T1, ORC_VAR_T4, - ORC_VAR_D1); - orc_program_append_2 (p, "absw", 0, ORC_VAR_T1, ORC_VAR_T1, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "convuwl", 0, ORC_VAR_T7, ORC_VAR_T1, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "cmpgtsl", 0, ORC_VAR_T8, ORC_VAR_T7, ORC_VAR_P1, - ORC_VAR_D1); - orc_program_append_2 (p, "andl", 0, ORC_VAR_T7, ORC_VAR_T7, ORC_VAR_T8, - ORC_VAR_D1); - orc_program_append_2 (p, 
"accl", 0, ORC_VAR_A1, ORC_VAR_T7, ORC_VAR_D1, - ORC_VAR_D1); + p = orc_program_new (); + orc_program_set_name (p, "fieldanalysis_orc_same_parity_3_tap_planar_yuv"); + orc_program_set_backup_function (p, + _backup_fieldanalysis_orc_same_parity_3_tap_planar_yuv); + orc_program_add_source (p, 1, "s1"); + orc_program_add_source (p, 1, "s2"); + orc_program_add_source (p, 1, "s3"); + orc_program_add_source (p, 1, "s4"); + orc_program_add_source (p, 1, "s5"); + orc_program_add_source (p, 1, "s6"); + orc_program_add_accumulator (p, 4, "a1"); + orc_program_add_constant (p, 2, 0x00000002, "c1"); + orc_program_add_parameter (p, 4, "p1"); + orc_program_add_temporary (p, 2, "t1"); + orc_program_add_temporary (p, 2, "t2"); + orc_program_add_temporary (p, 2, "t3"); + orc_program_add_temporary (p, 2, "t4"); + orc_program_add_temporary (p, 2, "t5"); + orc_program_add_temporary (p, 2, "t6"); + orc_program_add_temporary (p, 4, "t7"); + orc_program_add_temporary (p, 4, "t8"); + + orc_program_append_2 (p, "convubw", 0, ORC_VAR_T1, ORC_VAR_S1, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "convubw", 0, ORC_VAR_T2, ORC_VAR_S2, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "convubw", 0, ORC_VAR_T3, ORC_VAR_S3, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "convubw", 0, ORC_VAR_T4, ORC_VAR_S4, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "convubw", 0, ORC_VAR_T5, ORC_VAR_S5, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "convubw", 0, ORC_VAR_T6, ORC_VAR_S6, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "shlw", 0, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_C1, + ORC_VAR_D1); + orc_program_append_2 (p, "shlw", 0, ORC_VAR_T5, ORC_VAR_T5, ORC_VAR_C1, + ORC_VAR_D1); + orc_program_append_2 (p, "addw", 0, ORC_VAR_T1, ORC_VAR_T1, ORC_VAR_T2, + ORC_VAR_D1); + orc_program_append_2 (p, "addw", 0, ORC_VAR_T1, ORC_VAR_T1, ORC_VAR_T3, + ORC_VAR_D1); + orc_program_append_2 (p, "addw", 0, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_T5, + ORC_VAR_D1); + orc_program_append_2 (p, 
"addw", 0, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_T6, + ORC_VAR_D1); + orc_program_append_2 (p, "subw", 0, ORC_VAR_T1, ORC_VAR_T1, ORC_VAR_T4, + ORC_VAR_D1); + orc_program_append_2 (p, "absw", 0, ORC_VAR_T1, ORC_VAR_T1, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "convuwl", 0, ORC_VAR_T7, ORC_VAR_T1, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "cmpgtsl", 0, ORC_VAR_T8, ORC_VAR_T7, ORC_VAR_P1, + ORC_VAR_D1); + orc_program_append_2 (p, "andl", 0, ORC_VAR_T7, ORC_VAR_T7, ORC_VAR_T8, + ORC_VAR_D1); + orc_program_append_2 (p, "accl", 0, ORC_VAR_A1, ORC_VAR_T7, ORC_VAR_D1, + ORC_VAR_D1); #endif - orc_program_compile (p); - c = orc_program_take_code (p); - orc_program_free (p); - } - p_inited = TRUE; - orc_once_mutex_unlock (); + orc_program_compile (p); + c = orc_program_take_code (p); + orc_program_free (p); + orc_once_leave (&once, c); } ex->arraysORC_VAR_A2 = c; ex->program = 0; @@ -1072,98 +1061,93 @@ const orc_uint8 * ORC_RESTRICT s5, int p1, int n) { OrcExecutor _ex, *ex = &_ex; - static volatile int p_inited = 0; - static OrcCode *c = 0; - void (*func) (OrcExecutor *); + static OrcOnce once = ORC_ONCE_INIT; + OrcCode *c; + OrcExecutorFunc func = NULL; - if (!p_inited) { - orc_once_mutex_lock (); - if (!p_inited) { - OrcProgram *p; + if (!orc_once_enter (&once, (void **) &c)) { + OrcProgram *p; #if 1 - static const orc_uint8 bc = { - 1, 9, 50, 102, 105, 101, 108, 100, 97, 110, 97, 108, 121, 115, 105, 115, - 95, 111, 114, 99, 95, 111, 112, 112, 111, 115, 105, 116, 101, 95, 112, - 97, - 114, 105, 116, 121, 95, 53, 95, 116, 97, 112, 95, 112, 108, 97, 110, 97, - 114, 95, 121, 117, 118, 12, 1, 1, 12, 1, 1, 12, 1, 1, 12, 1, - 1, 12, 1, 1, 13, 4, 14, 2, 2, 0, 0, 0, 14, 2, 3, 0, - 0, 0, 16, 4, 20, 2, 20, 2, 20, 2, 20, 2, 20, 2, 20, 4, - 20, 4, 150, 32, 4, 150, 33, 5, 150, 34, 6, 150, 35, 7, 150, 36, - 8, 93, 34, 34, 16, 89, 33, 33, 17, 89, 35, 35, 17, 98, 32, 32, - 33, 70, 32, 32, 34, 98, 32, 32, 35, 70, 32, 32, 36, 69, 32, 32, - 154, 37, 32, 111, 38, 37, 24, 
106, 37, 37, 38, 181, 12, 37, 2, 0, - - }; - p = orc_program_new_from_static_bytecode (bc); - orc_program_set_backup_function (p, - _backup_fieldanalysis_orc_opposite_parity_5_tap_planar_yuv); + static const orc_uint8 bc = { + 1, 9, 50, 102, 105, 101, 108, 100, 97, 110, 97, 108, 121, 115, 105, 115, + 95, 111, 114, 99, 95, 111, 112, 112, 111, 115, 105, 116, 101, 95, 112, 97, + 114, 105, 116, 121, 95, 53, 95, 116, 97, 112, 95, 112, 108, 97, 110, 97, + 114, 95, 121, 117, 118, 12, 1, 1, 12, 1, 1, 12, 1, 1, 12, 1, + 1, 12, 1, 1, 13, 4, 14, 2, 2, 0, 0, 0, 14, 2, 3, 0, + 0, 0, 16, 4, 20, 2, 20, 2, 20, 2, 20, 2, 20, 2, 20, 4, + 20, 4, 150, 32, 4, 150, 33, 5, 150, 34, 6, 150, 35, 7, 150, 36, + 8, 93, 34, 34, 16, 89, 33, 33, 17, 89, 35, 35, 17, 98, 32, 32, + 33, 70, 32, 32, 34, 98, 32, 32, 35, 70, 32, 32, 36, 69, 32, 32, + 154, 37, 32, 111, 38, 37, 24, 106, 37, 37, 38, 181, 12, 37, 2, 0, + + }; + p = orc_program_new_from_static_bytecode (bc); + orc_program_set_backup_function (p, + _backup_fieldanalysis_orc_opposite_parity_5_tap_planar_yuv); #else - p = orc_program_new (); - orc_program_set_name (p, - "fieldanalysis_orc_opposite_parity_5_tap_planar_yuv"); - orc_program_set_backup_function (p, - _backup_fieldanalysis_orc_opposite_parity_5_tap_planar_yuv); - orc_program_add_source (p, 1, "s1"); - orc_program_add_source (p, 1, "s2"); - orc_program_add_source (p, 1, "s3"); - orc_program_add_source (p, 1, "s4"); - orc_program_add_source (p, 1, "s5"); - orc_program_add_accumulator (p, 4, "a1"); - orc_program_add_constant (p, 2, 0x00000002, "c1"); - orc_program_add_constant (p, 2, 0x00000003, "c2"); - orc_program_add_parameter (p, 4, "p1"); - orc_program_add_temporary (p, 2, "t1"); - orc_program_add_temporary (p, 2, "t2"); - orc_program_add_temporary (p, 2, "t3"); - orc_program_add_temporary (p, 2, "t4"); - orc_program_add_temporary (p, 2, "t5"); - orc_program_add_temporary (p, 4, "t6"); - orc_program_add_temporary (p, 4, "t7"); - - orc_program_append_2 (p, "convubw", 0, 
ORC_VAR_T1, ORC_VAR_S1, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "convubw", 0, ORC_VAR_T2, ORC_VAR_S2, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "convubw", 0, ORC_VAR_T3, ORC_VAR_S3, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "convubw", 0, ORC_VAR_T4, ORC_VAR_S4, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "convubw", 0, ORC_VAR_T5, ORC_VAR_S5, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "shlw", 0, ORC_VAR_T3, ORC_VAR_T3, ORC_VAR_C1, - ORC_VAR_D1); - orc_program_append_2 (p, "mullw", 0, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_C2, - ORC_VAR_D1); - orc_program_append_2 (p, "mullw", 0, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_C2, - ORC_VAR_D1); - orc_program_append_2 (p, "subw", 0, ORC_VAR_T1, ORC_VAR_T1, ORC_VAR_T2, - ORC_VAR_D1); - orc_program_append_2 (p, "addw", 0, ORC_VAR_T1, ORC_VAR_T1, ORC_VAR_T3, - ORC_VAR_D1); - orc_program_append_2 (p, "subw", 0, ORC_VAR_T1, ORC_VAR_T1, ORC_VAR_T4, - ORC_VAR_D1); - orc_program_append_2 (p, "addw", 0, ORC_VAR_T1, ORC_VAR_T1, ORC_VAR_T5, - ORC_VAR_D1); - orc_program_append_2 (p, "absw", 0, ORC_VAR_T1, ORC_VAR_T1, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "convuwl", 0, ORC_VAR_T6, ORC_VAR_T1, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "cmpgtsl", 0, ORC_VAR_T7, ORC_VAR_T6, ORC_VAR_P1, - ORC_VAR_D1); - orc_program_append_2 (p, "andl", 0, ORC_VAR_T6, ORC_VAR_T6, ORC_VAR_T7, - ORC_VAR_D1); - orc_program_append_2 (p, "accl", 0, ORC_VAR_A1, ORC_VAR_T6, ORC_VAR_D1, - ORC_VAR_D1); + p = orc_program_new (); + orc_program_set_name (p, + "fieldanalysis_orc_opposite_parity_5_tap_planar_yuv"); + orc_program_set_backup_function (p, + _backup_fieldanalysis_orc_opposite_parity_5_tap_planar_yuv); + orc_program_add_source (p, 1, "s1"); + orc_program_add_source (p, 1, "s2"); + orc_program_add_source (p, 1, "s3"); + orc_program_add_source (p, 1, "s4"); + orc_program_add_source (p, 1, "s5"); + orc_program_add_accumulator (p, 4, "a1"); + orc_program_add_constant (p, 2, 0x00000002, "c1"); + 
orc_program_add_constant (p, 2, 0x00000003, "c2"); + orc_program_add_parameter (p, 4, "p1"); + orc_program_add_temporary (p, 2, "t1"); + orc_program_add_temporary (p, 2, "t2"); + orc_program_add_temporary (p, 2, "t3"); + orc_program_add_temporary (p, 2, "t4"); + orc_program_add_temporary (p, 2, "t5"); + orc_program_add_temporary (p, 4, "t6"); + orc_program_add_temporary (p, 4, "t7"); + + orc_program_append_2 (p, "convubw", 0, ORC_VAR_T1, ORC_VAR_S1, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "convubw", 0, ORC_VAR_T2, ORC_VAR_S2, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "convubw", 0, ORC_VAR_T3, ORC_VAR_S3, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "convubw", 0, ORC_VAR_T4, ORC_VAR_S4, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "convubw", 0, ORC_VAR_T5, ORC_VAR_S5, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "shlw", 0, ORC_VAR_T3, ORC_VAR_T3, ORC_VAR_C1, + ORC_VAR_D1); + orc_program_append_2 (p, "mullw", 0, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_C2, + ORC_VAR_D1); + orc_program_append_2 (p, "mullw", 0, ORC_VAR_T4, ORC_VAR_T4, ORC_VAR_C2, + ORC_VAR_D1); + orc_program_append_2 (p, "subw", 0, ORC_VAR_T1, ORC_VAR_T1, ORC_VAR_T2, + ORC_VAR_D1); + orc_program_append_2 (p, "addw", 0, ORC_VAR_T1, ORC_VAR_T1, ORC_VAR_T3, + ORC_VAR_D1); + orc_program_append_2 (p, "subw", 0, ORC_VAR_T1, ORC_VAR_T1, ORC_VAR_T4, + ORC_VAR_D1); + orc_program_append_2 (p, "addw", 0, ORC_VAR_T1, ORC_VAR_T1, ORC_VAR_T5, + ORC_VAR_D1); + orc_program_append_2 (p, "absw", 0, ORC_VAR_T1, ORC_VAR_T1, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "convuwl", 0, ORC_VAR_T6, ORC_VAR_T1, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "cmpgtsl", 0, ORC_VAR_T7, ORC_VAR_T6, ORC_VAR_P1, + ORC_VAR_D1); + orc_program_append_2 (p, "andl", 0, ORC_VAR_T6, ORC_VAR_T6, ORC_VAR_T7, + ORC_VAR_D1); + orc_program_append_2 (p, "accl", 0, ORC_VAR_A1, ORC_VAR_T6, ORC_VAR_D1, + ORC_VAR_D1); #endif - orc_program_compile (p); - c = orc_program_take_code (p); - 
orc_program_free (p); - } - p_inited = TRUE; - orc_once_mutex_unlock (); + orc_program_compile (p); + c = orc_program_take_code (p); + orc_program_free (p); + orc_once_leave (&once, c); } ex->arrays[ORC_VAR_A2] = c; ex->program = 0;
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/fieldanalysis/gstfieldanalysisorc-dist.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/fieldanalysis/gstfieldanalysisorc-dist.h
Changed
@@ -1,8 +1,7 @@ /* autogenerated from gstfieldanalysisorc.orc */ -#ifndef _GSTFIELDANALYSISORC_H_ -#define _GSTFIELDANALYSISORC_H_ +#pragma once #include <glib.h> @@ -56,13 +55,15 @@ #endif typedef union { orc_int16 i; orc_int8 x2[2]; } orc_union16; typedef union { orc_int32 i; float f; orc_int16 x2[2]; orc_int8 x4[4]; } orc_union32; -typedef union { orc_int64 i; double f; orc_int32 x2[2]; float x2f[2]; orc_int16 x4[4]; } orc_union64; +typedef union { orc_int64 i; double f; orc_int32 x2[2]; float x2f[2]; orc_int16 x4[4]; orc_int8 x8[8]; } orc_union64; #endif #ifndef ORC_RESTRICT #if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L #define ORC_RESTRICT restrict #elif defined(__GNUC__) && __GNUC__ >= 4 #define ORC_RESTRICT __restrict__ +#elif defined(_MSC_VER) +#define ORC_RESTRICT __restrict #else #define ORC_RESTRICT #endif @@ -89,5 +90,3 @@ } #endif -#endif -
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/gaudieffects/gstgaudieffectsorc-dist.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/gaudieffects/gstgaudieffectsorc-dist.c
Changed
@@ -67,6 +67,7 @@ orc_int32 x22; float x2f2; orc_int16 x44; + orc_int8 x88; } orc_union64; #endif #ifndef ORC_RESTRICT @@ -74,6 +75,8 @@ #define ORC_RESTRICT restrict #elif defined(__GNUC__) && __GNUC__ >= 4 #define ORC_RESTRICT __restrict__ +#elif defined(_MSC_VER) +#define ORC_RESTRICT __restrict #else #define ORC_RESTRICT #endif @@ -100,6 +103,7 @@ /* begin Orc C target preamble */ +#include <math.h> #define ORC_CLAMP(x,a,b) ((x)<(a) ? (a) : ((x)>(b) ? (b) : (x))) #define ORC_ABS(a) ((a)<0 ? -(a) : (a)) #define ORC_MIN(a,b) ((a)<(b) ? (a) : (b)) @@ -135,6 +139,8 @@ #define ORC_RESTRICT restrict #elif defined(__GNUC__) && __GNUC__ >= 4 #define ORC_RESTRICT __restrict__ +#elif defined(_MSC_VER) +#define ORC_RESTRICT __restrict #else #define ORC_RESTRICT #endif @@ -391,71 +397,67 @@ int p1, int n) { OrcExecutor _ex, *ex = &_ex; - static volatile int p_inited = 0; - static OrcCode *c = 0; - void (*func) (OrcExecutor *); + static OrcOnce once = ORC_ONCE_INIT; + OrcCode *c; + OrcExecutorFunc func = NULL; - if (!p_inited) { - orc_once_mutex_lock (); - if (!p_inited) { - OrcProgram *p; + if (!orc_once_enter (&once, (void **) &c)) { + OrcProgram *p; #if 1 - static const orc_uint8 bc = { - 1, 9, 14, 103, 97, 117, 100, 105, 95, 111, 114, 99, 95, 98, 117, 114, - 110, 11, 4, 4, 12, 4, 4, 14, 1, 255, 0, 0, 0, 14, 1, 7, - 0, 0, 0, 14, 1, 1, 0, 0, 0, 16, 4, 20, 4, 20, 8, 20, - 8, 21, 2, 42, 32, 4, 21, 2, 150, 33, 32, 21, 2, 70, 34, 33, - 24, 21, 2, 95, 34, 34, 18, 21, 2, 65, 32, 16, 32, 21, 2, 150, - 33, 32, 21, 2, 93, 33, 33, 17, 21, 2, 81, 33, 33, 34, 21, 2, - 98, 33, 16, 33, 21, 2, 157, 32, 33, 128, 0, 32, 2, 0, - }; - p = orc_program_new_from_static_bytecode (bc); - orc_program_set_backup_function (p, _backup_gaudi_orc_burn); + static const orc_uint8 bc = { + 1, 9, 14, 103, 97, 117, 100, 105, 95, 111, 114, 99, 95, 98, 117, 114, + 110, 11, 4, 4, 12, 4, 4, 14, 1, 255, 0, 0, 0, 14, 1, 7, + 0, 0, 0, 14, 1, 1, 0, 0, 0, 16, 4, 20, 4, 20, 8, 20, + 8, 21, 2, 42, 32, 4, 21, 2, 150, 
33, 32, 21, 2, 70, 34, 33, + 24, 21, 2, 95, 34, 34, 18, 21, 2, 65, 32, 16, 32, 21, 2, 150, + 33, 32, 21, 2, 93, 33, 33, 17, 21, 2, 81, 33, 33, 34, 21, 2, + 98, 33, 16, 33, 21, 2, 157, 32, 33, 128, 0, 32, 2, 0, + }; + p = orc_program_new_from_static_bytecode (bc); + orc_program_set_backup_function (p, _backup_gaudi_orc_burn); #else - p = orc_program_new (); - orc_program_set_name (p, "gaudi_orc_burn"); - orc_program_set_backup_function (p, _backup_gaudi_orc_burn); - orc_program_add_destination (p, 4, "d1"); - orc_program_add_source (p, 4, "s1"); - orc_program_add_constant (p, 1, 0x000000ff, "c1"); - orc_program_add_constant (p, 1, 0x00000007, "c2"); - orc_program_add_constant (p, 1, 0x00000001, "c3"); - orc_program_add_parameter (p, 4, "p1"); - orc_program_add_temporary (p, 4, "t1"); - orc_program_add_temporary (p, 8, "t2"); - orc_program_add_temporary (p, 8, "t3"); + p = orc_program_new (); + orc_program_set_name (p, "gaudi_orc_burn"); + orc_program_set_backup_function (p, _backup_gaudi_orc_burn); + orc_program_add_destination (p, 4, "d1"); + orc_program_add_source (p, 4, "s1"); + orc_program_add_constant (p, 1, 0x000000ff, "c1"); + orc_program_add_constant (p, 1, 0x00000007, "c2"); + orc_program_add_constant (p, 1, 0x00000001, "c3"); + orc_program_add_parameter (p, 4, "p1"); + orc_program_add_temporary (p, 4, "t1"); + orc_program_add_temporary (p, 8, "t2"); + orc_program_add_temporary (p, 8, "t3"); - orc_program_append_2 (p, "copyb", 2, ORC_VAR_T1, ORC_VAR_S1, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "convubw", 2, ORC_VAR_T2, ORC_VAR_T1, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "addw", 2, ORC_VAR_T3, ORC_VAR_T2, ORC_VAR_P1, - ORC_VAR_D1); - orc_program_append_2 (p, "shruw", 2, ORC_VAR_T3, ORC_VAR_T3, ORC_VAR_C3, - ORC_VAR_D1); - orc_program_append_2 (p, "subb", 2, ORC_VAR_T1, ORC_VAR_C1, ORC_VAR_T1, - ORC_VAR_D1); - orc_program_append_2 (p, "convubw", 2, ORC_VAR_T2, ORC_VAR_T1, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "shlw", 
2, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_C2, - ORC_VAR_D1); - orc_program_append_2 (p, "divluw", 2, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_T3, - ORC_VAR_D1); - orc_program_append_2 (p, "subw", 2, ORC_VAR_T2, ORC_VAR_C1, ORC_VAR_T2, - ORC_VAR_D1); - orc_program_append_2 (p, "convwb", 2, ORC_VAR_T1, ORC_VAR_T2, ORC_VAR_D1, - ORC_VAR_D1); - orc_program_append_2 (p, "storel", 0, ORC_VAR_D1, ORC_VAR_T1, ORC_VAR_D1, - ORC_VAR_D1); + orc_program_append_2 (p, "copyb", 2, ORC_VAR_T1, ORC_VAR_S1, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "convubw", 2, ORC_VAR_T2, ORC_VAR_T1, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "addw", 2, ORC_VAR_T3, ORC_VAR_T2, ORC_VAR_P1, + ORC_VAR_D1); + orc_program_append_2 (p, "shruw", 2, ORC_VAR_T3, ORC_VAR_T3, ORC_VAR_C3, + ORC_VAR_D1); + orc_program_append_2 (p, "subb", 2, ORC_VAR_T1, ORC_VAR_C1, ORC_VAR_T1, + ORC_VAR_D1); + orc_program_append_2 (p, "convubw", 2, ORC_VAR_T2, ORC_VAR_T1, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "shlw", 2, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_C2, + ORC_VAR_D1); + orc_program_append_2 (p, "divluw", 2, ORC_VAR_T2, ORC_VAR_T2, ORC_VAR_T3, + ORC_VAR_D1); + orc_program_append_2 (p, "subw", 2, ORC_VAR_T2, ORC_VAR_C1, ORC_VAR_T2, + ORC_VAR_D1); + orc_program_append_2 (p, "convwb", 2, ORC_VAR_T1, ORC_VAR_T2, ORC_VAR_D1, + ORC_VAR_D1); + orc_program_append_2 (p, "storel", 0, ORC_VAR_D1, ORC_VAR_T1, ORC_VAR_D1, + ORC_VAR_D1); #endif - orc_program_compile (p); - c = orc_program_take_code (p); - orc_program_free (p); - } - p_inited = TRUE; - orc_once_mutex_unlock (); + orc_program_compile (p); + c = orc_program_take_code (p); + orc_program_free (p); + orc_once_leave (&once, c); } ex->arrays[ORC_VAR_A2] = c; ex->program = 0;
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/gaudieffects/gstgaudieffectsorc-dist.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/gaudieffects/gstgaudieffectsorc-dist.h
Changed
@@ -1,8 +1,7 @@ /* autogenerated from gstgaudieffectsorc.orc */ -#ifndef _GSTGAUDIEFFECTSORC_H_ -#define _GSTGAUDIEFFECTSORC_H_ +#pragma once #include <glib.h> @@ -56,13 +55,15 @@ #endif typedef union { orc_int16 i; orc_int8 x2[2]; } orc_union16; typedef union { orc_int32 i; float f; orc_int16 x2[2]; orc_int8 x4[4]; } orc_union32; -typedef union { orc_int64 i; double f; orc_int32 x2[2]; float x2f[2]; orc_int16 x4[4]; } orc_union64; +typedef union { orc_int64 i; double f; orc_int32 x2[2]; float x2f[2]; orc_int16 x4[4]; orc_int8 x8[8]; } orc_union64; #endif #ifndef ORC_RESTRICT #if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L #define ORC_RESTRICT restrict #elif defined(__GNUC__) && __GNUC__ >= 4 #define ORC_RESTRICT __restrict__ +#elif defined(_MSC_VER) +#define ORC_RESTRICT __restrict #else #define ORC_RESTRICT #endif @@ -86,5 +87,3 @@ } #endif -#endif -
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/geometrictransform/gstgeometrictransform.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/geometrictransform/gstgeometrictransform.c
Changed
@@ -100,7 +100,18 @@ /* * (x,y) pairs of the inverse mapping */ - gt->map = g_malloc0 (sizeof (gdouble) * gt->width * gt->height * 2); + gsize map_size; + + /* Use GLib's checked multiplication to prevent overflow */ + if (!g_size_checked_mul (&map_size, gt->width, gt->height) || + !g_size_checked_mul (&map_size, map_size, 2) || + !g_size_checked_mul (&map_size, map_size, sizeof (gdouble))) { + GST_ERROR_OBJECT (gt, + "Image dimensions too large, map allocation would overflow"); + return FALSE; + } + + gt->map = g_malloc0 (map_size); ptr = gt->map; for (y = 0; y < gt->height; y++) {
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/geometrictransform/gstkaleidoscope.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/geometrictransform/gstkaleidoscope.c
Changed
@@ -179,9 +179,12 @@ theta = gst_gm_triangle (theta / G_PI * kaleidoscope->sides * 0.5); if (cgt->precalc_radius != 0) { - gdouble radiusc = cgt->precalc_radius / cos (theta); - - distance = radiusc * gst_gm_triangle (distance / radiusc); + gdouble cos_theta = cos (theta); + /* Avoid division by zero when cos(theta) is too close to zero */ + if (fabs (cos_theta) > 1e-10) { + gdouble radiusc = cgt->precalc_radius / cos_theta; + distance = radiusc * gst_gm_triangle (distance / radiusc); + } } theta += kaleidoscope->angle;
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/id3tag/id3tag.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/id3tag/id3tag.c
Changed
@@ -386,7 +386,7 @@ const gchar ** strings_utf8, int num_strings) { GstId3v2Frame frame; - guint len, i; + guint i; int encoding; if (num_strings < 1 || strings_utf8 == NULL || strings_utf8[0] == NULL) { @@ -402,8 +402,10 @@ GST_LOG ("Adding text frame %s with %d strings", frame_id, num_strings); for (i = 0; i < num_strings; ++i) { - len = strlen (strings_utf8[i]); +#ifndef G_DISABLE_CHECKS + guint len = strlen (strings_utf8[i]); g_return_if_fail (g_utf8_validate (strings_utf8[i], len, NULL)); +#endif id3v2_frame_write_string (&frame, encoding, strings_utf8[i], i != num_strings - 1); @@ -696,7 +698,7 @@ if (gst_tag_list_peek_string_index (list, tag, n, &s) && s != NULL) { gchar *desc = NULL, *val = NULL, *lang = NULL; - int desclen, vallen, encoding1, encoding2, encoding; + int encoding1, encoding2, encoding; GstId3v2Frame frame; id3v2_frame_init (&frame, "COMM", 0); @@ -713,10 +715,10 @@ if (!lang || strlen (lang) < 3) lang = g_strdup ("XXX"); - desclen = strlen (desc); - g_return_if_fail (g_utf8_validate (desc, desclen, NULL)); - vallen = strlen (val); - g_return_if_fail (g_utf8_validate (val, vallen, NULL)); +#ifndef G_DISABLE_CHECKS + g_return_if_fail (g_utf8_validate (desc, strlen (desc), NULL)); + g_return_if_fail (g_utf8_validate (val, strlen (val), NULL)); +#endif GST_LOG ("%s[%u] = '%s' (%s|%s|%s)", tag, n, s, GST_STR_NULL (desc), GST_STR_NULL (lang), GST_STR_NULL (val));
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/interlace/gstinterlace.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/interlace/gstinterlace.c
Changed
@@ -188,7 +188,7 @@ "v308, IYU2, Y444, NV24, " /* 8-bit 4:4:4 */ \ "v216, I422_12BE, I422_12LE, " /* 16-bit 4:2:2 */ \ "Y212_BE, Y212_LE, " /* 12-bit 4:2:2 */ \ - "UYVP, Y210, NV16_10LE32, v210, I422_10BE, I422_10LE, " /* 10-bit 4:2:2 */ \ + "UYVP, Y210, NV16_10LE40, NV16_10LE32, v210, I422_10BE, I422_10LE, " /* 10-bit 4:2:2 */ \ "YUY2, UYVY, VYUY, YVYU, Y42B, NV16, NV61, " /* 8-bit 4:2:2 */ \ "P016_BE, P016_LE, " /* 16-bit 4:2:0 */ \ "I420_12BE, I420_12LE, P012_BE, P012_LE, " /* 12-bit 4:2:0 */ \
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/jpegformat/gstjpegparse.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/jpegformat/gstjpegparse.c
Changed
@@ -71,6 +71,7 @@ GST_JPEG_PARSER_STATE_GOT_SOS = 1 << 2, GST_JPEG_PARSER_STATE_GOT_JFIF = 1 << 3, GST_JPEG_PARSER_STATE_GOT_ADOBE = 1 << 4, + GST_JPEG_PARSER_STATE_GOT_METADATA = 1 << 5, GST_JPEG_PARSER_STATE_VALID_PICTURE = (GST_JPEG_PARSER_STATE_GOT_SOI | GST_JPEG_PARSER_STATE_GOT_SOF | GST_JPEG_PARSER_STATE_GOT_SOS), @@ -109,6 +110,10 @@ GST_ELEMENT_REGISTER_DEFINE (jpegparse, "jpegparse", GST_RANK_PRIMARY, GST_TYPE_JPEG_PARSE); +/* CIPA DC-x 007-2009 MPF spec states as TIFF */ +#define MPF_LE 0x4949 +#define MPF_BE 0x4D4D + enum GstJPEGColorspace { GST_JPEG_COLORSPACE_NONE, @@ -328,6 +333,13 @@ return FALSE; } + if (parse->mpf.mode + && parse->mpf.primary_image_index != parse->mpf.cur_image_index) { + GST_DEBUG_OBJECT (parse, "Ignoring MPF SOF of picture %d", + parse->mpf.cur_image_index); + return TRUE; + } + colorspace = GST_JPEG_COLORSPACE_NONE; sampling = GST_JPEG_SAMPLING_NONE; @@ -515,7 +527,7 @@ if (xt > 0 && yt > 0) GST_FIXME_OBJECT (parse, "embedded thumbnail ignored"); - return TRUE; + goto bail; } /* JFIF Extension */ @@ -523,7 +535,7 @@ if (!valid_state (parse->state, GST_JPEG_PARSER_STATE_GOT_JFIF)) return FALSE; - return TRUE; + goto bail; } /* https://exiftool.org/TagNames/JPEG.html#AVI1 */ @@ -538,12 +550,14 @@ GST_DEBUG_OBJECT (parse, "MJPEG interleaved field: %s", unit == 0 ? "not interleaved" : unit % 2 ? 
"Odd" : "Even"); - return TRUE; + goto bail; } GST_MEMDUMP_OBJECT (parse, "Unhandled app0", seg->data + seg->offset, seg->size); +bail: + parse->state |= GST_JPEG_PARSER_STATE_GOT_METADATA; return TRUE; } @@ -592,8 +606,28 @@ if (!gst_byte_reader_get_data (&reader, size, &data)) return FALSE; - buf = gst_buffer_new_wrapped_full (GST_MEMORY_FLAG_READONLY, - (gpointer) data, size, 0, size, NULL, NULL); + /* add synthetic xpacket for xpm if it doesn't have it */ + if (i == 1 && !g_strstr_len ((const char *) data, size, "<?xpacket begin")) { + gpointer str; + gsize len; + GString *xmp = g_string_new ("<?xpacket begin=\"r\"?>"); + + g_string_append_len (xmp, (const char *) data, size); + g_string_append (xmp, "<?xpacket end=\"r\"?>"); + + len = xmp->len; +#if GLIB_CHECK_VERSION (2, 76, 0) + str = g_string_free_and_steal (xmp); +#else + str = g_string_free (xmp, FALSE); +#endif + + buf = gst_buffer_new_wrapped_full (GST_MEMORY_FLAG_READONLY, str, len, 0, + len, str, g_free); + } else { + buf = gst_buffer_new_wrapped_full (GST_MEMORY_FLAG_READONLY, + (gpointer) data, size, 0, size, NULL, NULL); + } if (buf) { GstTagList *tags; @@ -608,16 +642,255 @@ gst_tag_list_unref (tags); } else { GST_INFO_OBJECT (parse, "failed to parse %s: %s", id_str, data); - return FALSE; } } - return TRUE; + goto bail; } GST_MEMDUMP_OBJECT (parse, "Unhandled app1", seg->data + seg->offset, seg->size); +bail: + parse->state |= GST_JPEG_PARSER_STATE_GOT_METADATA; + return TRUE; +} + +struct MPF +{ + guint32 num_images; + guint32 individual_image_no; + struct + { + gboolean parent; + gboolean child; + gboolean representative; + enum + { + UNDEFINED = 0x0, + BASELINE_PRIMARY = 0x30000, + LARGE_THUMB_VGA = 0x10001, + LARGE_THUMB_HD = 0x10002, + MULTI_FRAME_PANO = 0x20001, + MULTI_FRAME_DISPARITY = 0x20002, + MULTI_FRAME_MULTI_ANGLE = 0x20003, + } type; + guint32 size; + guint32 offset; + } entries[32]; +}; + +static gboolean +gst_jpeg_parse_mpf (GstJpegParse * parse, GstByteReader * reader, + struct
MPF *mpf) +{ + guint16 endianness; + guint16 fortytwo; + guint32 offset; + guint16 num_entries; + gsize offset_ref, offset_cur; + + offset_ref = gst_byte_reader_get_pos (reader); + + /* MP Header */ + if (!gst_byte_reader_get_uint16_be (reader, &endianness)) + return FALSE; + + if (endianness == MPF_LE) { + if (!gst_byte_reader_get_uint16_le (reader, &fortytwo) + || !gst_byte_reader_get_uint32_le (reader, &offset)) + return FALSE; + } else if (endianness == MPF_BE) { + if (!gst_byte_reader_get_uint16_be (reader, &fortytwo) + || !gst_byte_reader_get_uint32_be (reader, &offset)) + return FALSE; + } else { + return FALSE; + } + + /* endianness check */ + if (fortytwo != 42) + return FALSE; + + /* Skip to MP Index IFD */ + offset_cur = gst_byte_reader_get_pos (reader); + /* number of bytes to skip = (reference offset + new offset) - current offset */ + if (!gst_byte_reader_skip (reader, offset_ref + offset - offset_cur)) + return FALSE; + + while (TRUE) { + /* MP Index IFD - number of entries */ + if (endianness == MPF_LE) { + if (!gst_byte_reader_get_uint16_le (reader, &num_entries)) + return FALSE; + } else { + if (!gst_byte_reader_get_uint16_be (reader, &num_entries)) + return FALSE; + } + + GST_DEBUG_OBJECT (parse, "MPF: %d IFD entries", num_entries); + + for (int i = 0; i < num_entries; i++) { + guint16 tag, type; + guint32 count, value; + + if (endianness == MPF_LE) { + if (!gst_byte_reader_get_uint16_le (reader, &tag) + || !gst_byte_reader_get_uint16_le (reader, &type) + || !gst_byte_reader_get_uint32_le (reader, &count) + || !gst_byte_reader_get_uint32_le (reader, &value)) + return FALSE; + } else { + if (!gst_byte_reader_get_uint16_be (reader, &tag) + || !gst_byte_reader_get_uint16_be (reader, &type) + || !gst_byte_reader_get_uint32_be (reader, &count) + || !gst_byte_reader_get_uint32_be (reader, &value)) + return FALSE; + } + + switch (tag) { + case 0XB000: /* MPF version # */ + GST_DEBUG_OBJECT (parse, "MPF version %" GST_FOURCC_FORMAT, + GST_FOURCC_ARGS 
(value)); + break; + case 0xB001: /* number of images */ + mpf->num_images = value; + GST_DEBUG_OBJECT (parse, "MPF number of images %d", mpf->num_images); + break; + case 0xB002:{ /* MP entries */ + if (count / 16 != mpf->num_images) + return FALSE; + + offset_cur = gst_byte_reader_get_pos (reader); + if (!gst_byte_reader_skip (reader, offset_ref + value - offset_cur)) + return FALSE; + + if (mpf->num_images > 32) { + GST_WARNING_OBJECT (parse, + "MPF has more than 32 pictures. Forced to 32"); + mpf->num_images = 32; + } + + for (int j = 0; j < mpf->num_images; j++) { + guint32 attr, size, offset, dependencies; + + if (endianness == MPF_LE) { + if (!gst_byte_reader_get_uint32_le (reader, &attr) + || !gst_byte_reader_get_uint32_le (reader, &size) + || !gst_byte_reader_get_uint32_le (reader, &offset) + || !gst_byte_reader_get_uint32_le (reader, &dependencies)) + return FALSE; + } else { + if (!gst_byte_reader_get_uint32_be (reader, &attr) + || !gst_byte_reader_get_uint32_be (reader, &size) + || !gst_byte_reader_get_uint32_be (reader, &offset) + || !gst_byte_reader_get_uint32_be (reader, &dependencies)) + return FALSE; + } + + mpf->entries[j].parent = attr & 0x8000000000u; + mpf->entries[j].child = attr & 0x4000000000u; + mpf->entries[j].representative = attr & 0x2000000000u; + mpf->entries[j].type = attr & 0xffffffu; + mpf->entries[j].size = size; + mpf->entries[j].offset = offset; + + GST_DEBUG_OBJECT (parse, "MPF entry image type 0x%x", + mpf->entries[j].type); + } + + break; + } + case 0xB101: /* individual image number */ + mpf->individual_image_no = value; + GST_DEBUG_OBJECT (parse, "MPF individual image %d", + mpf->individual_image_no); + break; + case 0xB003: /* image uid list */ + case 0xB004: /* total frames */ + case 0xB201: /* panorama scanning orientation */ + case 0xB202: /* panorama horiz overlap */ + case 0xB203: /* panorama vert overlap */ + case 0xB204: /* base viewpoint # */ + case 0xB205: /* convergence angle */ + case 0xB206: /* baseline length */ + case
0xB207: /* divergence angle */ + case 0xB208: /* horiz axis distance */ + case 0xB209: /* vert axis distance */ + case 0xB20A: /* collimation axis distance */ + case 0xB20B: /* yaw angle */ + case 0xB20C: /* pitch angle */ + case 0xB20D: /* roll angle */ + GST_DEBUG_OBJECT (parse, "unhandled MPF entry 0x%x", tag); + break; + default: + return FALSE; + }; + } + + if (gst_byte_reader_get_remaining (reader) == 0) + break; + + /* Next IFD offset */ + if (endianness == MPF_LE) { + if (!gst_byte_reader_get_uint32_le (reader, &offset)) + return FALSE; + } else { + if (!gst_byte_reader_get_uint32_be (reader, &offset)) + return FALSE; + } + + if (offset == 0) + break; + + offset_cur = gst_byte_reader_get_pos (reader); + if (!gst_byte_reader_skip (reader, offset_ref + offset - offset_cur)) + return FALSE; + } + + return TRUE; +} + +static gboolean +gst_jpeg_parse_app2 (GstJpegParse * parse, GstJpegSegment * seg) +{ + GstByteReader reader; + const gchar *id_str; + guint i; + + if (seg->size < 4) /* less than 6 means no id string */ + return FALSE; + + gst_byte_reader_init (&reader, seg->data + seg->offset, seg->size); + gst_byte_reader_skip_unchecked (&reader, 2); + + if (!gst_byte_reader_get_string_utf8 (&reader, &id_str)) + return FALSE; + + if (g_str_has_suffix (id_str, "MPF")) { + struct MPF mpf = { 0, }; + + if (!gst_jpeg_parse_mpf (parse, &reader, &mpf)) + return FALSE; + + parse->mpf.mode = TRUE; + parse->mpf.num_images = mpf.num_images; + parse->mpf.cur_image_index = 0; + + for (i = 0; i < mpf.num_images; i++) { + if (mpf.entries[i].type == BASELINE_PRIMARY) { + parse->mpf.primary_image_index = i; + break; + } + } + + if (i == mpf.num_images) { + GST_WARNING_OBJECT (parse, + "No baseline primary image found.
Forcing the first"); + parse->mpf.primary_image_index = 0; + } + } + return TRUE; } @@ -644,8 +917,10 @@ if (!gst_byte_reader_skip (&reader, 5)) return FALSE; } else { - GST_DEBUG_OBJECT (parse, "Unhandled app14"); - return TRUE; + GST_MEMDUMP_OBJECT (parse, "Unhandled app14", seg->data + seg->offset, + seg->size); + + goto bail; } /* skip version and flags */ @@ -656,9 +931,12 @@ /* transform bit might not exist */ if (!gst_byte_reader_get_uint8 (&reader, &transform)) - return TRUE; + goto bail; parse->adobe_transform = transform; + +bail: + parse->state |= GST_JPEG_PARSER_STATE_GOT_METADATA; return TRUE; } @@ -715,6 +993,8 @@ g_free (comment); } + parse->state |= GST_JPEG_PARSER_STATE_GOT_METADATA; + return TRUE; } @@ -835,6 +1115,25 @@ return ret; } +static inline gboolean +gst_jpeg_parse_should_finish_buffer (GstJpegParse * parse, GstJpegMarker marker) +{ + guint field_to_check; + + if (parse->mpf.mode) + return parse->mpf.cur_image_index + 1 == parse->mpf.num_images; + + if (marker == GST_JPEG_MARKER_SOI) + field_to_check = 0; + else if (marker == GST_JPEG_MARKER_EOI) + field_to_check = 1; + else + g_assert_not_reached (); + + return parse->interlace_mode == GST_VIDEO_INTERLACE_MODE_PROGRESSIVE + || parse->field == field_to_check; +} + static GstFlowReturn gst_jpeg_parse_handle_frame (GstBaseParse * bparse, GstBaseParseFrame * frame, gint * skipsize) @@ -844,6 +1143,7 @@ GstJpegMarker marker; GstJpegSegment seg; guint offset; + gint prev_state; GST_TRACE_OBJECT (parse, "frame %" GST_PTR_FORMAT, frame->buffer); @@ -890,9 +1190,7 @@ switch (marker) { case GST_JPEG_MARKER_SOI: /* This means that new SOI comes without an previous EOI. */ - if (offset > 2 - && (parse->interlace_mode == GST_VIDEO_INTERLACE_MODE_PROGRESSIVE - || parse->field == 0)) { + if (offset > 2 && gst_jpeg_parse_should_finish_buffer (parse, marker)) { /* If already some data segment parsed, push it as a frame. 
*/ if (valid_state (parse->state, GST_JPEG_PARSER_STATE_GOT_SOS)) { gst_buffer_unmap (frame->buffer, &mapinfo); @@ -907,15 +1205,19 @@ return gst_jpeg_parse_finish_frame (parse, frame, seg.offset - 2); } + prev_state = parse->state; gst_jpeg_parse_reset (parse); - parse->state |= GST_JPEG_PARSER_STATE_GOT_SOI; - /* unset tags */ - gst_base_parse_merge_tags (bparse, NULL, GST_TAG_MERGE_UNDEFINED); - - *skipsize = offset - 2; - GST_DEBUG_OBJECT (parse, "skipping %d bytes before SOI", *skipsize); - parse->last_offset = 2; - goto beach; + parse->state |= prev_state | GST_JPEG_PARSER_STATE_GOT_SOI; + + if (!valid_state (parse->state, GST_JPEG_PARSER_STATE_GOT_METADATA)) { + /* unset tags */ + gst_base_parse_merge_tags (bparse, NULL, GST_TAG_MERGE_UNDEFINED); + + *skipsize = offset - 2; + GST_DEBUG_OBJECT (parse, "skipping %d bytes before SOI", *skipsize); + parse->last_offset = 2; + goto beach; + } } /* unset tags */ @@ -923,14 +1225,20 @@ parse->state |= GST_JPEG_PARSER_STATE_GOT_SOI; break; case GST_JPEG_MARKER_EOI: - if (parse->interlace_mode == GST_VIDEO_INTERLACE_MODE_PROGRESSIVE - || parse->field == 1) { + if (gst_jpeg_parse_should_finish_buffer (parse, marker)) { gst_buffer_unmap (frame->buffer, &mapinfo); return gst_jpeg_parse_finish_frame (parse, frame, seg.offset); + } else if (parse->mpf.mode) { + parse->mpf.cur_image_index++; + GST_DEBUG_OBJECT (parse, "finished image number %d of %d", + parse->mpf.cur_image_index, parse->mpf.num_images); + /* reset the state to continue parsing */ + parse->state = GST_JPEG_PARSER_STATE_GOT_METADATA; } else if (parse->interlace_mode == GST_VIDEO_INTERLACE_MODE_INTERLEAVED && parse->field == 0) { parse->field = 1; - parse->state = 0; + /* reset the state to continue parsing */ + parse->state = GST_JPEG_PARSER_STATE_GOT_METADATA; } break; case GST_JPEG_MARKER_SOS: @@ -950,6 +1258,10 @@ if (!gst_jpeg_parse_app1 (parse, &seg)) GST_WARNING_OBJECT (parse, "Failed to parse app1 segment"); break; + case GST_JPEG_MARKER_APP2: + if 
(!gst_jpeg_parse_app2 (parse, &seg)) + GST_WARNING_OBJECT (parse, "Failed to parse app2 segment"); + break; case GST_JPEG_MARKER_APP14: if (!gst_jpeg_parse_app14 (parse, &seg)) GST_WARNING_OBJECT (parse, "Failed to parse app14 segment");
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/jpegformat/gstjpegparse.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/jpegformat/gstjpegparse.h
Changed
@@ -69,6 +69,14 @@ GstVideoFieldOrder field_order; guint field; + /* multi picture format */ + struct { + gboolean mode; + guint num_images; + guint primary_image_index; + guint cur_image_index; /* current picture index */ + } mpf; + /* format color space */ guint colorspace; guint sampling;
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/librfb/gstrfbsrc.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/librfb/gstrfbsrc.c
Changed
@@ -417,6 +417,11 @@ if (pool == NULL) { /* we did not get a pool, make one ourselves then */ pool = gst_video_buffer_pool_new (); + { + gchar *name = g_strdup_printf ("%s-pool", GST_OBJECT_NAME (bsrc)); + g_object_set (pool, "name", name, NULL); + g_free (name); + } size = info.size; min = 1; max = 0;
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/librfb/gstrfbsrc.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/librfb/gstrfbsrc.h
Changed
@@ -24,7 +24,7 @@ #include <gst/gst.h> #include <gst/base/gstpushsrc.h> -#include <gst/video/gstvideopool.h> +#include <gst/video/video.h> #include "rfbdecoder.h"
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/meson.build
Changed
@@ -1,7 +1,7 @@ foreach plugin : 'accurip', 'adpcmdec', 'adpcmenc', 'aiff', 'asfmux', 'audiobuffersplit', 'audiofxbad', 'audiomixmatrix', 'audiolatency', 'audiovisualizers', 'autoconvert', 'bayer', - 'camerabin2', 'codecalpha', 'codectimestamper', 'coloreffects', + 'camerabin2', 'closedcaption', 'codecalpha', 'codectimestamper', 'coloreffects', 'debugutils', 'dvbsubenc', 'dvbsuboverlay', 'dvdspu', 'faceoverlay', 'festival', 'fieldanalysis', 'freeverb', 'frei0r', 'gaudieffects', 'gdp', @@ -13,6 +13,6 @@ 'segmentclip', 'siren', 'smooth', 'speed', 'subenc', 'switchbin', 'tensordecoders', 'timecode', 'transcode', 'unixfd', 'videofilters', 'videoframe_audiolevel', 'videoparsers', 'videosignal', - 'vmnc', 'y4m' + 'vmnc' subdir(plugin) endforeach
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/mpegtsdemux/mpegtsbase.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/mpegtsdemux/mpegtsbase.c
Changed
@@ -60,13 +60,15 @@ ); #define DEFAULT_IGNORE_PCR FALSE +#define DEFAULT_SKEW_CORRECTIONS TRUE enum { PROP_0, PROP_PARSE_PRIVATE_SECTIONS, PROP_IGNORE_PCR, - /* FILL ME */ + PROP_SKEW_CORRECTIONS + /* FILL ME */ }; static void mpegts_base_dispose (GObject * object); @@ -149,7 +151,7 @@ G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS)); /** - * GstMpegtsBase:ignore-pcr: + * MpegTSBase:ignore-pcr: * * Ignore PCR (Program Clock Reference) data from MPEG-TS PSI. * This can help with playback of some broken files. @@ -161,6 +163,20 @@ "Ignore PCR stream for timing", DEFAULT_IGNORE_PCR, G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS)); + /** + * MpegTSBase:skew-corrections: + * + * With push TIME inputs, apply continuous skew corrections to the output. The + * default is enabled. You can disable it if downstream doesn't require live + * synchronization. + * + * Since: 1.28 + */ + g_object_class_install_property (gobject_class, PROP_SKEW_CORRECTIONS, + g_param_spec_boolean ("skew-corrections", "Apply skew corrections", + "Apply skew corrections", DEFAULT_SKEW_CORRECTIONS, + G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS)); + klass->sink_query = GST_DEBUG_FUNCPTR (mpegts_base_default_sink_query); klass->handle_psi = NULL; @@ -180,6 +196,9 @@ case PROP_IGNORE_PCR: base->ignore_pcr = g_value_get_boolean (value); break; + case PROP_SKEW_CORRECTIONS: + base->packetizer->skew_correction = g_value_get_boolean (value); + break; default: G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); } @@ -198,6 +217,9 @@ case PROP_IGNORE_PCR: g_value_set_boolean (value, base->ignore_pcr); break; + case PROP_SKEW_CORRECTIONS: + g_value_set_boolean (value, base->packetizer->skew_correction); + break; default: G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); } @@ -269,6 +291,7 @@ base->disposed = FALSE; base->packetizer = mpegts_packetizer_new (); + base->packetizer->skew_correction = DEFAULT_SKEW_CORRECTIONS; base->programs = g_ptr_array_new_full (16, (GDestroyNotify) 
mpegts_base_free_program);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/mpegtsdemux/mpegtspacketizer.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/mpegtsdemux/mpegtspacketizer.c
Changed
@@ -1471,6 +1471,9 @@ /* keep track of the last extended pcrtime */ pcr->last_pcrtime = gstpcrtime; + if (!packetizer->skew_correction) + goto no_skew; + /* we don't have an arrival timestamp so we can't do skew detection. we * should still apply a timestamp based on RTP timestamp and base_time */ if (!GST_CLOCK_TIME_IS_VALID (time)
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/mpegtsdemux/mpegtspacketizer.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/mpegtsdemux/mpegtspacketizer.h
Changed
@@ -259,6 +259,8 @@ /* clock skew calculation */ gboolean calculate_skew; + /* skew_correction: apply skew correction to values */ + gboolean skew_correction; /* offset/bitrate calculator */ gboolean calculate_offset;
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/mpegtsdemux/pesparse.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/mpegtsdemux/pesparse.c
Changed
@@ -106,10 +106,11 @@ GST_LOG ("scrambling_control 0x%0x", res->scrambling_control); GST_LOG ("flags_1: %s%s%s%s%s", - val8 & 0x08 ? "priority " : "", - val8 & 0x04 ? "data_alignment " : "", - val8 & 0x02 ? "copyright " : "", - val8 & 0x01 ? "original_or_copy " : "", val8 & 0x0f ? "" : "<none>"); + val8 & PES_FLAG_PRIORITY ? "priority " : "", + val8 & PES_FLAG_DATA_ALIGNMENT ? "data_alignment " : "", + val8 & PES_FLAG_COPYRIGHT ? "copyright " : "", + val8 & PES_FLAG_ORIGINAL_OR_COPY ? "original_or_copy " : "", + val8 & 0x0f ? "" : "<none>"); /* PTS_DTS_flags 2 * ESCR_flag 1
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/mpegtsdemux/tsdemux.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/mpegtsdemux/tsdemux.c
Changed
@@ -184,6 +184,8 @@ GstClockTime pts; GstClockTime dts; + PESHeaderFlags current_pes_packet_flags; + /* Reference PTS used to detect gaps */ GstClockTime gap_ref_pts; /* Number of outputted buffers */ @@ -802,6 +804,7 @@ break; } default: + GST_TRACE_OBJECT (stream->pad, "unit type %d", unit.type); break; } @@ -1348,7 +1351,7 @@ GstPad *pad = NULL; gboolean sparse = FALSE; gboolean is_audio = FALSE, is_video = FALSE, is_subpicture = FALSE, - is_private = FALSE; + is_private = FALSE, is_metadata = FALSE; gst_ts_demux_create_tags (stream); @@ -1403,6 +1406,9 @@ caps = gst_caps_new_empty_simple ("audio/x-dts"); stream->target_pes_substream = 0x71; break; + default: + GST_DEBUG_OBJECT (demux, "Stream type %d", bstream->stream_type); + break; } } @@ -1692,9 +1698,17 @@ case DRF_ID_KLVA: sparse = TRUE; is_private = TRUE; + is_metadata = TRUE; caps = gst_caps_new_simple ("meta/x-klv", "parsed", G_TYPE_BOOLEAN, TRUE, NULL); break; + case DRF_ID_ID3: + sparse = TRUE; + is_private = TRUE; + is_metadata = TRUE; + caps = gst_caps_new_simple ("meta/x-id3", + "parsed", G_TYPE_BOOLEAN, FALSE, NULL); + break; case DRF_ID_AC4: is_audio = TRUE; caps = gst_caps_new_empty_simple ("audio/x-ac4"); @@ -1705,6 +1719,7 @@ break; case DRF_ID_VANC: is_private = TRUE; + is_metadata = TRUE; caps = gst_caps_new_simple ("meta/x-st-2038", "alignment", G_TYPE_STRING, "line", NULL); @@ -1760,17 +1775,32 @@ if (desc) { GstMpegtsMetadataDescriptor *metadataDescriptor; if (gst_mpegts_descriptor_parse_metadata (desc, &metadataDescriptor)) { - if ((metadataDescriptor->metadata_format == - GST_MPEGTS_METADATA_FORMAT_IDENTIFIER_FIELD) - && (metadataDescriptor->metadata_format_identifier == - DRF_ID_KLVA)) { - sparse = TRUE; - is_private = TRUE; - /* registration_id is not correctly set or parsed for some streams */ - bstream->registration_id = DRF_ID_KLVA; + if (metadataDescriptor->metadata_format == + GST_MPEGTS_METADATA_FORMAT_IDENTIFIER_FIELD) { + + switch 
(metadataDescriptor->metadata_format_identifier) { + case DRF_ID_KLVA: + sparse = TRUE; + is_private = TRUE; + is_metadata = TRUE; + /* registration_id is not correctly set or parsed for some streams */ + bstream->registration_id = DRF_ID_KLVA; + + caps = gst_caps_new_simple ("meta/x-klv", + "parsed", G_TYPE_BOOLEAN, TRUE, NULL); + break; + + case DRF_ID_ID3: + sparse = TRUE; + is_private = TRUE; + is_metadata = TRUE; + bstream->registration_id = DRF_ID_ID3; + + caps = gst_caps_new_simple ("meta/x-id3", + "parsed", G_TYPE_BOOLEAN, FALSE, NULL); + break; + } - caps = gst_caps_new_simple ("meta/x-klv", - "parsed", G_TYPE_BOOLEAN, TRUE, NULL); } g_free (metadataDescriptor); } @@ -1859,6 +1889,8 @@ colorimetry_mode = GST_VIDEO_COLORIMETRY_BT709; break; default: + GST_DEBUG_OBJECT (demux, "color specification %d", + color_specification); break; } caps = gst_caps_new_simple ("image/x-jpc", @@ -2094,6 +2126,9 @@ name = g_strdup_printf ("private_%01x_%04x", demux->program_generation, bstream->pid); + if (is_metadata) + gst_stream_set_stream_type (bstream->stream_object, + GST_STREAM_TYPE_METADATA); } else if (is_subpicture) { template = gst_static_pad_template_get (&subpicture_template); name = @@ -2830,6 +2865,7 @@ stream->current_size = length; stream->state = PENDING_PACKET_BUFFER; + stream->current_pes_packet_flags = header.flags; if (stream->pending_header_data) { g_free (stream->pending_header_data); @@ -3622,9 +3658,17 @@ buffer = parse_jp2k_access_unit (stream); } else if (bs->stream_type == GST_MPEGTS_STREAM_TYPE_AUDIO_AAC_ADTS) { buffer = parse_aac_adts_frame (stream); - } else if (bs->stream_type == GST_MPEGTS_STREAM_TYPE_METADATA_PES_PACKETS - && bs->registration_id == DRF_ID_KLVA) { - buffer_list = parse_pes_metadata_frame (stream); + } else if (bs->stream_type == GST_MPEGTS_STREAM_TYPE_METADATA_PES_PACKETS) { + if (bs->registration_id == DRF_ID_KLVA) { + buffer_list = parse_pes_metadata_frame (stream); + } else if (bs->registration_id == DRF_ID_ID3) { + 
buffer = gst_buffer_new_wrapped (stream->data, stream->current_size); + if ((stream->current_pes_packet_flags & PES_FLAG_DATA_ALIGNMENT) == 0) { + gst_buffer_set_flags (buffer, GST_BUFFER_FLAG_DELTA_UNIT); + } + } else { + buffer = gst_buffer_new_wrapped (stream->data, stream->current_size); + } } else if (bs->stream_type == GST_MPEGTS_STREAM_TYPE_VIDEO_JPEG_XS) { buffer = parse_jpegxs_access_unit (stream); } else {
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/mpegtsmux/gstbasetsmux.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/mpegtsmux/gstbasetsmux.c
Changed
@@ -597,7 +597,6 @@ guint8 color_spec = 0; const gchar *stream_format = NULL; const char *interlace_mode = NULL; - gchar *pmt_name; GstMpegtsDescriptor *pmt_descriptor = NULL; GST_DEBUG_OBJECT (ts_pad, @@ -996,11 +995,23 @@ ts_pad->stream->pmt_descriptor = pmt_descriptor; } - pmt_name = g_strdup_printf ("PMT_%d", ts_pad->pid); - if (mux->prog_map && gst_structure_has_field (mux->prog_map, pmt_name)) { - gst_structure_get_int (mux->prog_map, pmt_name, &ts_pad->stream->pmt_index); + if (mux->prog_map) { + gchar *pmt_name = g_strdup_printf ("PMT_ORDER_%d", ts_pad->pid); + + if (!gst_structure_get_int (mux->prog_map, pmt_name, + &ts_pad->stream->pmt_index)) { + gchar *pmt_name_2 = g_strdup_printf ("PMT_%d", ts_pad->pid); + + if (gst_structure_get_int (mux->prog_map, pmt_name_2, + &ts_pad->stream->pmt_index)) + GST_FIXME_OBJECT (mux, "Use of ambiguous prog-map entry %s, prefer %s", + pmt_name_2, pmt_name); + + g_free (pmt_name_2); + } + + g_free (pmt_name); } - g_free (pmt_name); interlace_mode = gst_structure_get_string (s, "interlace-mode"); gst_structure_get_int (s, "rate", &ts_pad->stream->audio_sampling);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/mpegtsmux/gstbasetsmuxjpegxs.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/mpegtsmux/gstbasetsmuxjpegxs.h
Changed
@@ -24,7 +24,6 @@ #ifndef __BASETSMUX_JPEGXS_H__ #define __BASETSMUX_JPEGXS_H__ -#include "glib.h" #include "gstbasetsmux.h" typedef struct jpegxs_private_data
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/mxf/mxfvanc.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/mxf/mxfvanc.c
Changed
@@ -27,6 +27,8 @@ #include <gst/gst.h> #include <gst/base/base.h> +#include <gst/base/gstbitreader.h> +#include <gst/base/gstbitwriter.h> #include <gst/video/video.h> #include <string.h> @@ -56,6 +58,8 @@ mxf_metadata_vanc_descriptor, MXF_TYPE_METADATA_GENERIC_DATA_ESSENCE_DESCRIPTOR); +static gboolean HANDLE_AS_ST2038 = TRUE; + static void mxf_metadata_vanc_descriptor_init (MXFMetadataVANCDescriptor * self) { @@ -81,6 +85,175 @@ key->u14 == 0x00 && key->u15 == 0x00); } +static guint16 +with_parity (const guint8 word) +{ + guint8 bit8, parity; + + parity = word ^ (word >> 4); + parity ^= (parity >> 2); + parity ^= (parity >> 1); + bit8 = parity & 1; + + return (word | (bit8 << 8) | ((!bit8) << 9)); +} + +static gboolean +get_c_not_y_channel_flag (const guint8 payload_sample_coding) +{ + /* + * 5 - 8-bit color difference samples + * 8 - 10-bit color difference samples + * 11 - 8-bit color difference samples with parity + */ + return (payload_sample_coding == 5) || + (payload_sample_coding == 8) || (payload_sample_coding == 11); +} + +static gboolean +is_payload_10bit (const guint8 payload_sample_coding) +{ + /* + * 7 - 10-bit luma samples + * 8 - 10-bit color difference samples + * 9 - 10-bit luma and color difference samples + */ + if (payload_sample_coding == 7 || payload_sample_coding == 8 + || payload_sample_coding == 9) { + return TRUE; + } + + return FALSE; +} + +static void +write_st2038_header (GstBitWriter * writer, guint8 c_not_y_channel_flag, + guint16 line_number, guint16 did, guint sdid, guint16 data_count) +{ + gst_bit_writer_put_bits_uint8 (writer, 0, 6); /* 6 zero bits */ + gst_bit_writer_put_bits_uint8 (writer, c_not_y_channel_flag, 1); + gst_bit_writer_put_bits_uint16 (writer, line_number, 11); /* line number */ + gst_bit_writer_put_bits_uint16 (writer, 0xFFF /* Unknown/unspecified */ , 12); /* horizontal offset */ + + gst_bit_writer_put_bits_uint16 (writer, did, 10); + gst_bit_writer_put_bits_uint16 (writer, sdid, 10); + 
gst_bit_writer_put_bits_uint16 (writer, data_count, 10); +} + +static GstBuffer * +mxf_vanc_to_st2038 (const guint8 * vanc_data, gsize vanc_data_size, + guint16 line_number, guint16 payload_sample_count, + guint8 payload_sample_coding, guint32 array_count, guint32 array_item_size) +{ + GstBitWriter writer; + GstBitReader bit_reader; + GstByteReader byte_reader; + guint16 checksum, did, sdid, data_count; + guint8 c_not_y_channel_flag, *data; + gboolean payload_10bit; + gsize size; + guint i; + + c_not_y_channel_flag = get_c_not_y_channel_flag (payload_sample_coding); + payload_10bit = is_payload_10bit (payload_sample_coding); + + if (payload_10bit) { + gst_bit_reader_init (&bit_reader, vanc_data, vanc_data_size); + + /* Check if we can read DID, SDID and Data Count */ + if (gst_bit_reader_get_remaining (&bit_reader) < 32) { + GST_WARNING ("Insufficient VANC data"); + return NULL; + } + + /* See section 5.4.4 of ST-436 on 10-bit sample coding */ + did = gst_bit_reader_get_bits_uint16_unchecked (&bit_reader, 10); + sdid = gst_bit_reader_get_bits_uint16_unchecked (&bit_reader, 10); + data_count = gst_bit_reader_get_bits_uint16_unchecked (&bit_reader, 10); + + /* Skip 2-bit padding */ + gst_bit_reader_skip (&bit_reader, 2); + + if (payload_sample_count - 3 < data_count) { + GST_WARNING ("Insufficient user data words"); + return NULL; + } + + gst_bit_writer_init_with_size (&writer, 64 + data_count * 2, FALSE); + write_st2038_header (&writer, c_not_y_channel_flag, line_number, did, sdid, + data_count); + + /* + * See Section 6.7 of ST-291 on Checksum Word. + * Write data words and checksum. + * + * In 10-bit applications, the checksum value shall be equal to + * the nine least significant bits of the sum of the nine least + * significant bits of the DID, SDID, DC and UDW. 
+ */ + checksum = (did & 0x1FF) + (sdid & 0x1FF) + (data_count & 0x1FF); + for (i = 0; i < data_count; i++) { + /* + * For a 10-bit coding, 4 bytes representing 3 source samples + * are coded using the high-order 30-bits (bits 2 to 31) of a + * 32-bit (4 byte) Payload Array data word. The 2 low-order + * bits of the payload data 32-bit word (bits 0 and 1) are set + * to zero. + */ + guint16 udw = gst_bit_reader_get_bits_uint16_unchecked (&bit_reader, 10); + checksum += (udw & 0x1FF); + gst_bit_writer_put_bits_uint16 (&writer, udw, 10); + + if (i % 3 == 2) { + gst_bit_reader_skip (&bit_reader, 2); + } + } + + gst_bit_writer_put_bits_uint16 (&writer, checksum & 0x1FF, 10); + } else { + gst_byte_reader_init (&byte_reader, vanc_data, vanc_data_size); + + /* Check if we can read DID, SDID and Data Count */ + if (gst_byte_reader_get_remaining (&byte_reader) < 3) { + GST_WARNING ("Insufficient VANC data"); + return NULL; + } + + did = gst_byte_reader_get_uint8_unchecked (&byte_reader); + sdid = gst_byte_reader_get_uint8_unchecked (&byte_reader); + data_count = gst_byte_reader_get_uint8_unchecked (&byte_reader); + + if (payload_sample_count - 3 < data_count) { + GST_WARNING ("Insufficient user data words"); + return NULL; + } + + gst_bit_writer_init_with_size (&writer, 64 + data_count * 2, FALSE); + write_st2038_header (&writer, c_not_y_channel_flag, line_number, + with_parity (did), with_parity (sdid), with_parity (data_count)); + + /* + * See Section 6.7 of ST-291 on Checksum Word. + * Write data words and checksum. 
+ */ + checksum = did + sdid + data_count; + for (i = 0; i < data_count; i++) { + guint8 udw = gst_byte_reader_get_uint8_unchecked (&byte_reader); + checksum += udw; + gst_bit_writer_put_bits_uint16 (&writer, with_parity (udw), 10); + } + + gst_bit_writer_put_bits_uint16 (&writer, with_parity (checksum & 0xFF), 10); + } + + gst_bit_writer_align_bytes (&writer, 1); + + size = gst_bit_writer_get_size (&writer) / 8; + data = gst_bit_writer_reset_and_get_data (&writer); + + return gst_buffer_new_wrapped (data, size); +} + static GstFlowReturn mxf_vanc_handle_essence_element (const MXFUL * key, GstBuffer * buffer, GstCaps * caps, @@ -152,12 +325,14 @@ array_count = gst_byte_reader_get_uint32_be_unchecked (&reader); array_item_size = gst_byte_reader_get_uint32_be_unchecked (&reader); - /* Skip over anything that is not 8 bit VANC */ - if (payload_sample_coding != 4 && payload_sample_coding != 5 - && payload_sample_coding != 6) { - if (!gst_byte_reader_skip (&reader, array_count * array_item_size)) - goto out; - continue; + if (!HANDLE_AS_ST2038) { + /* Skip over anything that is not 8 bit VANC */ + if (payload_sample_coding != 4 && payload_sample_coding != 5 + && payload_sample_coding != 6) { + if (!gst_byte_reader_skip (&reader, array_count * array_item_size)) + goto out; + continue; + } } if (gst_byte_reader_get_remaining (&reader) < array_count * array_item_size) @@ -172,35 +347,46 @@ continue; } - did = gst_byte_reader_get_uint8_unchecked (&reader); - sdid = gst_byte_reader_get_uint8_unchecked (&reader); - - /* Not S334 EIA-708 */ - if (did != 0x61 && sdid != 0x01) { - GST_TRACE ("Skipping VANC data with DID/SDID 0x%02X/0x%02X", did, sdid); - if (!gst_byte_reader_skip (&reader, array_count * array_item_size - 2)) - goto out; - continue; - } - - if (payload_sample_count < 2) { - if (!gst_byte_reader_skip (&reader, array_count * array_item_size - 2)) - goto out; - continue; - } - - cdp_size = gst_byte_reader_get_uint8_unchecked (&reader); - if (payload_sample_count - 3 
< cdp_size) { - if (!gst_byte_reader_skip (&reader, array_count * array_item_size - 3)) - goto out; - continue; + if (!HANDLE_AS_ST2038) { + /* Type-2 Ancillary Data Packet Format */ + did = gst_byte_reader_get_uint8_unchecked (&reader); + sdid = gst_byte_reader_get_uint8_unchecked (&reader); + + /* Not S334 EIA-708 */ + if (did != 0x61 && sdid != 0x01) { + GST_TRACE ("Skipping VANC data with DID/SDID 0x%02X/0x%02X", did, sdid); + if (!gst_byte_reader_skip (&reader, array_count * array_item_size - 2)) + goto out; + continue; + } + + cdp_size = gst_byte_reader_get_uint8_unchecked (&reader); + if (payload_sample_count - 3 < cdp_size) { + if (!gst_byte_reader_skip (&reader, array_count * array_item_size - 3)) + goto out; + continue; + } + + gst_buffer_unmap (buffer, &map); + *outbuf = + gst_buffer_copy_region (buffer, GST_BUFFER_COPY_ALL, + gst_byte_reader_get_pos (&reader), cdp_size); + gst_buffer_unref (buffer); + } else { + gsize byte_pos = gst_byte_reader_get_pos (&reader); + gsize vanc_data_size = gst_byte_reader_get_remaining (&reader); + + /* Convert from ST-436M to ST-2038 */ + *outbuf = mxf_vanc_to_st2038 (&map.data[byte_pos], vanc_data_size, + line_num, payload_sample_count, payload_sample_coding, + array_count, array_item_size); + if (!outbuf) + goto no_data; + + gst_buffer_unmap (buffer, &map); + gst_buffer_unref (buffer); } - gst_buffer_unmap (buffer, &map); - *outbuf = - gst_buffer_copy_region (buffer, GST_BUFFER_COPY_ALL, - gst_byte_reader_get_pos (&reader), cdp_size); - gst_buffer_unref (buffer); return GST_FLOW_OK; } @@ -269,9 +455,15 @@ *handler = mxf_vanc_handle_essence_element; - caps = - gst_caps_new_simple ("closedcaption/x-cea-708", "format", - G_TYPE_STRING, "cdp", NULL); + if (!HANDLE_AS_ST2038) { + caps = + gst_caps_new_simple ("closedcaption/x-cea-708", "format", + G_TYPE_STRING, "cdp", NULL); + } else { + caps = + gst_caps_new_simple ("meta/x-st-2038", "alignment", + G_TYPE_STRING, "frame", NULL); + } if (p && p->parent.parent.sample_rate.d 
!= 0) { gst_caps_set_simple (caps, "framerate", GST_TYPE_FRACTION, @@ -341,6 +533,228 @@ return GST_FLOW_OK; } +/* Extract 10-bit user data words from ST 2038 packet */ +static gboolean +extract_st2038_user_data (const guint8 * data, guint data_size, + const St2038AncHeader * header, guint8 * user_data) +{ + GstBitReader reader; + guint16 temp16; + guint i; + + gst_bit_reader_init (&reader, data, data_size); + + /* Skip to user data: 6 + 1 + 11 + 12 + 10 + 10 + 10 = 60 bits */ + if (!gst_bit_reader_skip (&reader, 60)) + return FALSE; + + if (gst_bit_reader_get_remaining (&reader) < header->data_count * 10) + return FALSE; + + /* Read each 10-bit user data word (take lower 8 bits) */ + for (i = 0; i < header->data_count; i++) { + temp16 = gst_bit_reader_get_bits_uint16_unchecked (&reader, 10); + user_data[i] = temp16 & 0xFF; + } + + return TRUE; +} + +static gboolean +parse_st2038_header (const guint8 * data, guint data_size, + St2038AncHeader * header) +{ + GstBitReader reader; + guint8 zeroes; + guint16 temp16; + guint bit_pos; + + if (data_size < 8) + return FALSE; + + gst_bit_reader_init (&reader, data, data_size); + + /* Check if we have enough until Data Count */ + if (gst_bit_reader_get_remaining (&reader) < 50) { + GST_WARNING ("Incomplete ST-2038 header"); + return FALSE; + } + + /* Read 6 zero bits */ + zeroes = gst_bit_reader_get_bits_uint8_unchecked (&reader, 6); + if (zeroes != 0) { + GST_WARNING ("ST2038: Zero bits are not zero (got 0x%x)", zeroes); + return FALSE; + } + + header->c_not_y_channel_flag = + gst_bit_reader_get_bits_uint8_unchecked (&reader, 1); + header->line_number = gst_bit_reader_get_bits_uint16_unchecked (&reader, 11); + header->horizontal_offset = + gst_bit_reader_get_bits_uint16_unchecked (&reader, 12); + + temp16 = gst_bit_reader_get_bits_uint16_unchecked (&reader, 10); + header->did = temp16 & 0xFF; + + temp16 = gst_bit_reader_get_bits_uint16_unchecked (&reader, 10); + header->sdid = temp16 & 0xFF; + + if 
(!gst_bit_reader_get_bits_uint16 (&reader, &temp16, 10)) + return FALSE; + header->data_count = temp16 & 0xFF; + + if (!gst_bit_reader_skip (&reader, header->data_count * 10)) + return FALSE; + + if (!gst_bit_reader_get_bits_uint16 (&reader, &header->checksum, 10)) + return FALSE; + + /* Skip alignment bits (should be all 1's until byte aligned) */ + bit_pos = gst_bit_reader_get_pos (&reader); + if (bit_pos % 8 != 0) { + guint bits_to_skip = 8 - (bit_pos % 8); + guint8 alignment_bits; + + if (gst_bit_reader_get_bits_uint8 (&reader, &alignment_bits, bits_to_skip)) { + /* Verify alignment bits are all 1's */ + guint8 expected = (1 << bits_to_skip) - 1; + if (alignment_bits != expected) { + GST_WARNING + ("ST2038: Alignment bits are not all 1's (got 0x%x, expected 0x%x)", + alignment_bits, expected); + } + } + } + + /* Calculate total length in bytes */ + header->len_bytes = gst_bit_reader_get_pos (&reader) / 8; + + return TRUE; +} + +static GstFlowReturn +mxf_st2038_to_vanc_write_func (GstBuffer * buffer, + gpointer mapping_data, GstAdapter * adapter, GstBuffer ** outbuf, + gboolean flush) +{ + GstMapInfo map; + GstByteWriter writer; + guint8 *data; + guint size; + guint i, offset; + guint total_anc_size = 0; + guint num_anc_structures = 0; + + gst_buffer_map (buffer, &map, GST_MAP_READ); + + /* First pass: parse ST 2038 to determine total size needed */ + offset = 0; + while (offset < map.size) { + St2038AncHeader header; + + if (!parse_st2038_header (&map.data[offset], map.size - offset, &header)) + break; + + /* + * Each ANC packet in ST 436M needs: + * 2 bytes DID/SDID + 1 byte DC + data_count bytes + 1 checksum byte + */ + guint packet_size = 4 + header.data_count; + total_anc_size += packet_size; + num_anc_structures++; + + offset += header.len_bytes; + } + + if (num_anc_structures == 0) { + gst_buffer_unmap (buffer, &map); + gst_buffer_unref (buffer); + *outbuf = gst_buffer_new (); + return GST_FLOW_OK; + } + + /* + * Calculate total ST 436M wrapper size: + * 16 
bytes base header + 4 bytes array count + total ANC data + */ + size = 20 + total_anc_size; + + gst_byte_writer_init_with_size (&writer, size, TRUE); + + /* See ST-436M Section 7 */ + gst_byte_writer_put_uint16_be_unchecked (&writer, num_anc_structures); + + /* Second pass: convert each ST 2038 packet to ST 436M ANC payload */ + offset = 0; + while (offset < map.size) { + St2038AncHeader header; + guint8 user_data[256]; + guint8 checksum; + guint16 did_sdid; + guint packet_data_size; + + if (!parse_st2038_header (&map.data[offset], map.size - offset, &header)) + break; + + if (!extract_st2038_user_data (&map.data[offset], map.size - offset, + &header, user_data)) + break; + + gst_byte_writer_put_uint16_be_unchecked (&writer, header.line_number); + gst_byte_writer_put_uint8_unchecked (&writer, 1); /* Wrapping type */ + + /* + * ST2038 is 10 bits and we strip off the two parity bits, so + * use a value of 4 here which indicate 8-bit luma samples or + * 8-bit colour difference samples. + */ + if (header.c_not_y_channel_flag) { + gst_byte_writer_put_uint8_unchecked (&writer, 5); /* Payload Sample Coding */ + } else { + gst_byte_writer_put_uint8_unchecked (&writer, 4); /* Payload Sample Coding */ + } + + gst_byte_writer_put_uint16_be_unchecked (&writer, total_anc_size); /* Payload Sample Count */ + + /* + * See Section 4.3 of ST-377 on Compound Data Types. + * First 4 bytes define the number of elements in the array. + * Last 4 bytes define the length of each element. 
+ */ + gst_byte_writer_put_uint32_be_unchecked (&writer, total_anc_size); + gst_byte_writer_put_uint32_be_unchecked (&writer, 1); + + did_sdid = (header.did << 8) | header.sdid; + gst_byte_writer_put_uint16_be_unchecked (&writer, did_sdid); + gst_byte_writer_put_uint8_unchecked (&writer, header.data_count); + gst_byte_writer_put_data_unchecked (&writer, user_data, header.data_count); + + /* Calculate checksum (8-bit sum of DID + SDID + DC + all user data) */ + checksum = header.did + header.sdid + header.data_count; + for (i = 0; i < header.data_count; i++) + checksum += user_data[i]; + gst_byte_writer_put_uint8_unchecked (&writer, checksum & 0xff); + + /* Pad to 4-byte boundary */ + packet_data_size = 4 + header.data_count; + if (GST_ROUND_UP_4 (packet_data_size) != packet_data_size) { + gst_byte_writer_fill_unchecked (&writer, 0, + GST_ROUND_UP_4 (packet_data_size) - packet_data_size); + } + + offset += header.len_bytes; + } + + data = gst_byte_writer_reset_and_get_data (&writer); + + gst_buffer_unmap (buffer, &map); + gst_buffer_unref (buffer); + + *outbuf = gst_buffer_new_wrapped (data, size); + + return GST_FLOW_OK; +} + static const guint8 vanc_essence_container_ul[] = { 0x06, 0x0e, 0x2b, 0x34, 0x04, 0x01, 0x01, 0x09, 0x0d, 0x01, 0x03, 0x01, 0x02, 0x0e, 0x00, 0x00 @@ -356,11 +770,22 @@ gint fps_n, fps_d; s = gst_caps_get_structure (caps, 0); - if (strcmp (gst_structure_get_name (s), "closedcaption/x-cea-708") != 0 || - !(format = gst_structure_get_string (s, "format")) || - strcmp (format, "cdp") != 0 || - !gst_structure_get_value (s, "framerate")) { - GST_ERROR ("Invalid caps %" GST_PTR_FORMAT, caps); + if (!HANDLE_AS_ST2038) { + if (strcmp (gst_structure_get_name (s), "closedcaption/x-cea-708") != 0 || + !(format = gst_structure_get_string (s, "format")) || + strcmp (format, "cdp") != 0) { + GST_ERROR ("Invalid caps %" GST_PTR_FORMAT, caps); + return NULL; + } + } else { + if (strcmp (gst_structure_get_name (s), "meta/x-st-2038") != 0) { + GST_ERROR ("Invalid 
caps %" GST_PTR_FORMAT, caps); + return NULL; + } + } + + if (!gst_structure_get_value (s, "framerate")) { + GST_ERROR ("Missing framerate in caps %" GST_PTR_FORMAT, caps); return NULL; } @@ -372,7 +797,11 @@ memcpy (&ret->parent.parent.essence_container, &vanc_essence_container_ul, 16); - *handler = mxf_vanc_write_func; + if (HANDLE_AS_ST2038) { + *handler = mxf_st2038_to_vanc_write_func; + } else { + *handler = mxf_vanc_write_func; + } return (MXFMetadataFileDescriptor *) ret; } @@ -427,13 +856,19 @@ { mxf_metadata_register (MXF_TYPE_METADATA_VANC_DESCRIPTOR); mxf_essence_element_handler_register (&mxf_vanc_essence_element_handler); + const char *vanc_caps; + + if (g_getenv ("GST_VANC_AS_CEA708") != NULL) { + vanc_caps = "closedcaption/x-cea-708, format = (string) cdp, " + "framerate = " GST_VIDEO_FPS_RANGE; + HANDLE_AS_ST2038 = FALSE; + } else { + vanc_caps = "meta/x-st-2038,alignment=frame"; + } mxf_vanc_essence_element_writer.pad_template = gst_pad_template_new ("vanc_sink_%u", GST_PAD_SINK, - GST_PAD_REQUEST, - gst_caps_from_string - ("closedcaption/x-cea-708, format = (string) cdp, framerate = " - GST_VIDEO_FPS_RANGE)); + GST_PAD_REQUEST, gst_caps_from_string (vanc_caps)); memcpy (&mxf_vanc_essence_element_writer.data_definition, mxf_metadata_track_identifier_get (MXF_METADATA_TRACK_DATA_ESSENCE), 16); mxf_essence_element_writer_register (&mxf_vanc_essence_element_writer);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/mxf/mxfvanc.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/mxf/mxfvanc.h
Changed
@@ -26,6 +26,17 @@ #include <gst/gst.h> +typedef struct { + guint8 c_not_y_channel_flag; + guint8 did; + guint8 sdid; + guint16 line_number; + guint16 horizontal_offset; + guint8 data_count; + guint16 checksum; + guint len_bytes; /* Total length in bytes */ +} St2038AncHeader; + void mxf_vanc_init (void); #endif /* __MXF_VANC_H__ */
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/netsim/gstnetsim.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/netsim/gstnetsim.c
Changed
@@ -633,9 +633,10 @@ static void gst_net_sim_dispose (GObject * object) { +#ifndef G_DISABLE_ASSERT GstNetSim *netsim = GST_NET_SIM (object); - g_assert (netsim->main_loop == NULL); +#endif G_OBJECT_CLASS (gst_net_sim_parent_class)->dispose (object); } @@ -651,7 +652,7 @@ gst_element_class_add_static_pad_template (gstelement_class, &gst_net_sim_sink_template); - gst_element_class_set_metadata (gstelement_class, + gst_element_class_set_static_metadata (gstelement_class, "Network Simulator", "Filter/Network", "An element that simulates network jitter, "
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/rist/gstristrtpdeext.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/rist/gstristrtpdeext.c
Changed
@@ -318,7 +318,7 @@ GstElementClass *element_class = (GstElementClass *) klass; GObjectClass *object_class = (GObjectClass *) klass; - gst_element_class_set_metadata (element_class, + gst_element_class_set_static_metadata (element_class, "RIST RTP Extension remover", "Filter/Network", "Removes RIST TR-06-2 RTP Header extension", "Olivier Crete <olivier.crete@collabora.com");
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/rist/gstristrtpext.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/rist/gstristrtpext.c
Changed
@@ -279,7 +279,7 @@ GstElementClass *element_class = (GstElementClass *) klass; GObjectClass *object_class = (GObjectClass *) klass; - gst_element_class_set_metadata (element_class, + gst_element_class_set_static_metadata (element_class, "RIST RTP Extension adder", "Filter/Network", "Adds RIST TR-06-2 RTP Header extension", "Olivier Crete <olivier.crete@collabora.com");
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/rist/gstristsink.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/rist/gstristsink.c
Changed
@@ -1341,7 +1341,7 @@ session_id_quark = g_quark_from_static_string ("gst-rist-sink-session-id"); - gst_element_class_set_metadata (element_class, + gst_element_class_set_static_metadata (element_class, "RIST Sink", "Source/Network", "Sink that implements RIST TR-06-1 streaming specification", "Nicolas Dufresne <nicolas.dufresne@collabora.com");
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/rist/gstristsrc.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/rist/gstristsrc.c
Changed
@@ -1352,7 +1352,7 @@ GstElementClass *element_class = (GstElementClass *) klass; GObjectClass *object_class = (GObjectClass *) klass; - gst_element_class_set_metadata (element_class, + gst_element_class_set_static_metadata (element_class, "RIST Source", "Source/Network", "Source that implements RIST TR-06-1 streaming specification", "Nicolas Dufresne <nicolas.dufresne@collabora.com");
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/rist/gstroundrobin.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/rist/gstroundrobin.c
Changed
@@ -124,7 +124,7 @@ { GstElementClass *element_class = (GstElementClass *) klass; - gst_element_class_set_metadata (element_class, + gst_element_class_set_static_metadata (element_class, "Round Robin", "Source/Network", "A round robin dispatcher element.", "Nicolas Dufresne <nicolas.dufresne@collabora.com");
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/rtmp2/rtmp/amf.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/rtmp2/rtmp/amf.c
Changed
@@ -338,9 +338,12 @@ const gchar * gst_amf_node_peek_string (const GstAmfNode * node, gsize * size) { +#ifndef G_DISABLE_CHECKS GstAmfType type = gst_amf_node_get_type (node); g_return_val_if_fail (type == GST_AMF_TYPE_STRING || type == GST_AMF_TYPE_LONG_STRING, FALSE); +#endif + return g_bytes_get_data (node->value.v_bytes, size); } @@ -364,32 +367,40 @@ const GstAmfNode * gst_amf_node_get_field_by_index (const GstAmfNode * node, guint index) { +#ifndef G_DISABLE_CHECKS guint len = gst_amf_node_get_num_fields (node); g_return_val_if_fail (index < len, NULL); +#endif return get_field (node, index)->value; } guint gst_amf_node_get_num_fields (const GstAmfNode * node) { +#ifndef G_DISABLE_CHECKS GstAmfType type = gst_amf_node_get_type (node); g_return_val_if_fail (type == GST_AMF_TYPE_OBJECT || type == GST_AMF_TYPE_ECMA_ARRAY, 0); +#endif return node->value.v_fields->len; } const GstAmfNode * gst_amf_node_get_element (const GstAmfNode * node, guint index) { +#ifndef G_DISABLE_CHECKS guint len = gst_amf_node_get_num_elements (node); g_return_val_if_fail (index < len, NULL); +#endif return get_element (node, index); } guint gst_amf_node_get_num_elements (const GstAmfNode * node) { +#ifndef G_DISABLE_CHECKS GstAmfType type = gst_amf_node_get_type (node); +#endif g_return_val_if_fail (type == GST_AMF_TYPE_STRICT_ARRAY, 0); return node->value.v_elements->len; }
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/rtmp2/rtmp/rtmpconnection.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/rtmp2/rtmp/rtmpconnection.c
Changed
@@ -752,13 +752,14 @@ gst_rtmp_connection_handle_aggregate (GstRtmpConnection * connection, GstBuffer * buffer) { - GstRtmpMeta *meta; GstMapInfo map; gsize pos = 0; guint32 first_ts = 0; - meta = gst_buffer_get_rtmp_meta (buffer); +#ifndef G_DISABLE_CHECKS + GstRtmpMeta *meta = gst_buffer_get_rtmp_meta (buffer); g_return_if_fail (meta); +#endif gst_buffer_map (buffer, &map, GST_MAP_READ); GST_TRACE_OBJECT (connection, "got aggregate message");
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/rtmp2/rtmp/rtmpmessage.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/rtmp2/rtmp/rtmpmessage.c
Changed
@@ -430,7 +430,9 @@ gst_rtmp_message_parse_user_control (GstBuffer * buffer, GstRtmpUserControl * out) { +#ifndef G_DISABLE_CHECKS GstRtmpMeta *meta = gst_buffer_get_rtmp_meta (buffer); +#endif GstMapInfo map; GstRtmpUserControl uc; gsize uc_size;
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/rtp/gstrtpsink.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/rtp/gstrtpsink.c
Changed
@@ -535,8 +535,8 @@ GstStateChangeReturn ret = GST_STATE_CHANGE_SUCCESS; GST_DEBUG_OBJECT (self, "changing state: %s => %s", - gst_element_state_get_name (GST_STATE_TRANSITION_CURRENT (transition)), - gst_element_state_get_name (GST_STATE_TRANSITION_NEXT (transition))); + gst_state_get_name (GST_STATE_TRANSITION_CURRENT (transition)), + gst_state_get_name (GST_STATE_TRANSITION_NEXT (transition))); switch (transition) { case GST_STATE_CHANGE_NULL_TO_READY:
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/rtp/gstrtpsrc.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/rtp/gstrtpsrc.c
Changed
@@ -810,8 +810,8 @@ GstStateChangeReturn ret = GST_STATE_CHANGE_SUCCESS; GST_DEBUG_OBJECT (self, "Changing state: %s => %s", - gst_element_state_get_name (GST_STATE_TRANSITION_CURRENT (transition)), - gst_element_state_get_name (GST_STATE_TRANSITION_NEXT (transition))); + gst_state_get_name (GST_STATE_TRANSITION_CURRENT (transition)), + gst_state_get_name (GST_STATE_TRANSITION_NEXT (transition))); ret = GST_ELEMENT_CLASS (parent_class)->change_state (element, transition); if (ret == GST_STATE_CHANGE_FAILURE)
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/segmentclip/gstsegmentclip.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/segmentclip/gstsegmentclip.c
Changed
@@ -160,7 +160,9 @@ ret = gst_caps_intersect (tmp, gst_pad_get_pad_template_caps (pad)); gst_caps_unref (tmp); } else { - ret = gst_caps_copy (gst_pad_get_pad_template_caps (pad)); + tmp = gst_pad_get_pad_template_caps (pad); + ret = gst_caps_copy (tmp); + gst_caps_unref (tmp); } GST_LOG_OBJECT (pad, "Returning caps: %" GST_PTR_FORMAT, ret);
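The segmentclip hunk fixes a classic reference leak: `gst_pad_get_pad_template_caps()` returns a new reference, so passing its result straight into `gst_caps_copy()` drops that reference on the floor; the fix keeps a temporary and unrefs it. A toy refcounted object in plain C (illustrative names only, no GLib) shows why the temporary matters:

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal refcounted "caps" stand-in (illustrative, not the GStreamer API). */
typedef struct { int refcount; } Caps;

static Caps *caps_new (void)      { Caps *c = malloc (sizeof *c); c->refcount = 1; return c; }
static Caps *caps_ref (Caps * c)  { c->refcount++; return c; }
static void  caps_unref (Caps * c){ if (--c->refcount == 0) free (c); }

static Caps *template_caps;  /* owned by the "pad template" */

/* Like gst_pad_get_pad_template_caps(): returns a NEW reference. */
static Caps *get_template_caps (void) { return caps_ref (template_caps); }

/* Leaky pattern from before the diff: the returned reference is never
 * released, so the template's refcount creeps up on every call. */
static void leaky (void) { (void) get_template_caps (); }

/* Fixed pattern from the diff: hold the temporary, then unref it. */
static void fixed (void) { Caps *tmp = get_template_caps (); caps_unref (tmp); }
```

Each `leaky()` call permanently raises the refcount by one, while `fixed()` leaves it unchanged, which is exactly what the added `gst_caps_unref (tmp)` achieves in the element.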
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/tensordecoders/gstclassifiertensordecoder.c
Added
@@ -0,0 +1,707 @@ +/* + * GStreamer gstreamer-classifiertensordecoder + * Copyright (C) 2025 Collabora Ltd. + * @author: Daniel Morin <daniel.morin@dmohub.org> + * + * gstclassifiertensordecoder.c + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +/** + * SECTION:element-classifiertensordecoder.c + * @short_description: Decode tensors from classification model using a common + * tensor output format. + * + * + * This element can parse per-buffer inference tensor meta data generated by + * an upstream inference element. + * + * Tensor format must be: + * Dims: batch-size, class_count + * Datatype: float32 + * + * Tensor M,N + * Batch 0 | Class 0 confidence level | ... | Class N confidence level | + * ... + * Batch M | Class 0 confidence level | ... | Class N confidence level | + * + * In-memory tensor format: + * + * |Batch 0, Class 0 confidence level | + * |Batch 0, ... | + * |Batch 0, Class N confidence level | + * | ... | + * |Batch M, Class 0 confidence level | + * |Batch M, ... | + * |Batch M, Class N confidence level | + * + * + * ## Example launch command: + * | + * gst-launch-1.0 filesrc location=/onnx-models/images/bus.jpg \ + * ! jpegdec \ + * ! videoconvertscale add-borders=1 \ + * ! 
onnxinference execution-provider=cpu \ + * model-file=/onnx-models/models/mobilenet_v1.onnx \ + * ! classifiertensordecoder labels-file=labels.txt ! fakesink \ + * | This pipeline create an tensor-decoder for classification model + * + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gstclassifiertensordecoder.h" +#include <gst/gst.h> +#include <math.h> +#include <gst/analytics/analytics.h> + +#define GROUP_ID_CLASSIFICATION "classification-generic-out" +#define GROUP_ID_CLASSIFICATION_SOFTMAXED "classification-generic-softmaxed-out" +#define GST_MODEL_STD_IMAGE_CLASSIFICATION "classification-generic-out" +#define GST_MODEL_STD_IMAGE_CLASSIFICATION_SOFTMAXED "classification-generic-softmaxed-out" + +GST_DEBUG_CATEGORY_STATIC (classifier_tensor_decoder_debug); +#define GST_CAT_DEFAULT classifier_tensor_decoder_debug +#define gst_classifier_tensor_decoder_parent_class parent_class + +GST_ELEMENT_REGISTER_DEFINE (classifier_tensor_decoder, + "classifiertensordecoder", GST_RANK_SECONDARY, + GST_TYPE_CLASSIFIER_TENSOR_DECODER); + + +/* GstClassifierTensorDecoder properties */ +enum +{ + PROP_0, + PROP_THRESHOLD, + PROP_LABEL_FILE +}; + +static const float DEFAULT_THRESHOLD = 0.7f; + +static GstStaticPadTemplate gst_classifier_tensor_decoder_src_template = +GST_STATIC_PAD_TEMPLATE ("src", + GST_PAD_SRC, + GST_PAD_ALWAYS, + GST_STATIC_CAPS_ANY); + +/* *INDENT-OFF* */ + +static GstStaticPadTemplate gst_classifier_tensor_decoder_sink_template = +GST_STATIC_PAD_TEMPLATE ("sink", + GST_PAD_SINK, + GST_PAD_ALWAYS, + GST_STATIC_CAPS ( + "video/x-raw," + "tensors=(structure)" + "tensorgroups," + GROUP_ID_CLASSIFICATION"=(/uniquelist){" + "(GstCaps)" + "tensor/strided," + "tensor-id="GST_MODEL_STD_IMAGE_CLASSIFICATION"," + "dims=<(int)0,1, (int)1,max>," + "dims-order=(string)row-major," + "type={float32, uint8};" + "tensor/strided," + "tensor-id="GST_MODEL_STD_IMAGE_CLASSIFICATION"," + "dims=<(int)1,max>," + "dims-order=(string)row-major," + "type={float32, 
uint8};" + "}" + ";" + "video/x-raw," + "tensors=(structure)" + "tensorgroups," + GROUP_ID_CLASSIFICATION_SOFTMAXED"=(/uniquelist){" + "(GstCaps)" + "tensor/strided," + "tensor-id="GST_MODEL_STD_IMAGE_CLASSIFICATION_SOFTMAXED"," + "dims=<(int)0,1, (int)1,max>," + "dims-order=(string)row-major," + "type={float32, uint8};" + "tensor/strided," + "tensor-id="GST_MODEL_STD_IMAGE_CLASSIFICATION_SOFTMAXED"," + "dims=<(int)1,max>," + "dims-order=(string)row-major," + "type={float32, uint8};" + "}" + "" + ")); +/* *INDENT-ON* */ + +static void gst_classifier_tensor_decoder_set_property (GObject * object, + guint prop_id, const GValue * value, GParamSpec * pspec); +static void gst_classifier_tensor_decoder_get_property (GObject * object, + guint prop_id, GValue * value, GParamSpec * pspec); + +static void gst_classifier_tensor_decoder_finalize (GObject * object); + +static GstFlowReturn +gst_classifier_tensor_decoder_transform_ip (GstBaseTransform * trans, + GstBuffer * buf); + +static GstStateChangeReturn +gst_classifier_tensor_decoder_change_state (GstElement * element, + GstStateChange transition); + +static gboolean +gst_classifier_tensor_decoder_set_caps (GstBaseTransform * trans, + GstCaps * incaps, GstCaps * outcaps); + + +#define softmax(len, values, results, max_val) \ + gsize i; \ + gfloat sum = 0.0; \ + gfloat value; \ + g_return_if_fail (values != NULL); \ + g_return_if_fail (results != NULL); \ + \ + /* Calculate exponential of every value */ \ + for (i = 0; i < len; i++) { \ + value = values[i] / max_val; \ + results[i] = exp (value); \ + sum += results[i]; \ + } \ + \ + /* Complete softmax */ \ + for (i = 0; i < len; i++) { \ + results[i] = results[i] / sum; \ + } + +static void +softmax_u8 (gsize len, const guint8 * values, gfloat * result) +{ + softmax (len, values, result, 255.0); +} + +static void +softmax_f32 (gsize len, const gfloat * values, gfloat * result) +{ + softmax (len, values, result, 1.0); +} + +G_DEFINE_TYPE (GstClassifierTensorDecoder, 
gst_classifier_tensor_decoder, + GST_TYPE_BASE_TRANSFORM); + +static void +gst_classifier_tensor_decoder_class_init (GstClassifierTensorDecoderClass * + klass) +{ + GObjectClass *gobject_class = (GObjectClass *) klass; + GstElementClass *element_class = (GstElementClass *) klass; + GstBaseTransformClass *basetransform_class = (GstBaseTransformClass *) klass; + + GST_DEBUG_CATEGORY_INIT (classifier_tensor_decoder_debug, + "classifiertensordecoder", 0, + "Tensor decoder for classification model with common output format"); + + gobject_class->set_property = gst_classifier_tensor_decoder_set_property; + gobject_class->get_property = gst_classifier_tensor_decoder_get_property; + gobject_class->finalize = gst_classifier_tensor_decoder_finalize; + + g_object_class_install_property (G_OBJECT_CLASS (klass), + PROP_THRESHOLD, + g_param_spec_float ("class-confidence-threshold", + "Class confidence threshold", + "Classes with a confidence level inferior to this threshold " + "will be excluded", + 0.0, 1.0, DEFAULT_THRESHOLD, + (GParamFlags) (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + g_object_class_install_property (G_OBJECT_CLASS (klass), + PROP_LABEL_FILE, + g_param_spec_string ("labels-file", + "Class labels file", + "Path to a file containing class label. COCO format", + NULL, (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + element_class->change_state = gst_classifier_tensor_decoder_change_state; + + gst_element_class_set_static_metadata (element_class, + "Classification tensor decoder", "Tensordecoder", + "Decode tensors output from classification model using common format.\n" + "\tTensor format must be: \n" "\t\tDims: batch-size, class_count\n" + "\t\tDatatype: float32 \n" "\n" "\t\tTensor M,N\n" + "\t\t\tBatch 0 | Class 0 confidence level | ... | Class N-1 confidence level |\n" + "\t\t\t...\n" + "\t\t\tBatch M-1 | Class 0 confidence level | ... 
| Class N-1 confidence level |\n" + "\t\t\n" "\tIn-memory tensor format:\n" "\n" + "\t\t|Batch 0, Class 0 confidence level |\n" + "\t\t|Batch 0, ... |\n" + "\t\t|Batch 0, Class N-1 confidence level |\n" + "\t\t| ... |\n" + "\t\t|Batch M-1, Class 0 confidence level |\n" + "\t\t|Batch M-1, ... |\n" + "\t\t|Batch M-1, Class N-1 confidence level |\n" "\n" " model", + "Daniel Morin <daniel.morin@collabora.com>"); + + gst_element_class_add_pad_template (element_class, + gst_static_pad_template_get + (&gst_classifier_tensor_decoder_sink_template)); + + gst_element_class_add_pad_template (element_class, + gst_static_pad_template_get + (&gst_classifier_tensor_decoder_src_template)); + + basetransform_class->transform_ip = + GST_DEBUG_FUNCPTR (gst_classifier_tensor_decoder_transform_ip); + + basetransform_class->set_caps = + GST_DEBUG_FUNCPTR (gst_classifier_tensor_decoder_set_caps); +} + +static void +gst_classifier_tensor_decoder_init (GstClassifierTensorDecoder * self) +{ + self->threshold = DEFAULT_THRESHOLD; + self->labels_file = NULL; + self->postproc_result = NULL; + self->class_count = 0; + self->do_softmax = TRUE; + + gst_base_transform_set_passthrough (GST_BASE_TRANSFORM (self), FALSE); + GST_PAD_UNSET_ACCEPT_INTERSECT (self->basetransform.sinkpad); +} + +static void +gst_classifier_tensor_decoder_finalize (GObject * object) +{ + GstClassifierTensorDecoder *self = GST_CLASSIFIER_TENSOR_DECODER (object); + + g_free (self->labels_file); + G_OBJECT_CLASS (gst_classifier_tensor_decoder_parent_class)->finalize + (object); +} + +static void +gst_classifier_tensor_decoder_set_property (GObject * object, guint prop_id, + const GValue * value, GParamSpec * pspec) +{ + GstClassifierTensorDecoder *self = GST_CLASSIFIER_TENSOR_DECODER (object); + static GFileTest filetest = (G_FILE_TEST_EXISTS | G_FILE_TEST_IS_REGULAR); + + switch (prop_id) { + case PROP_THRESHOLD: + self->threshold = g_value_get_float (value); + break; + case PROP_LABEL_FILE: + self->labels_file = g_strdup 
(g_value_get_string (value)); + + if (self->labels_file) { + if (!g_file_test (self->labels_file, filetest)) { + GST_ERROR_OBJECT (self, "Unable to load %s", self->labels_file); + g_free (g_steal_pointer (&self->labels_file)); + } + } else { + GST_ERROR_OBJECT (self, "Invalid file"); + } + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static void +gst_classifier_tensor_decoder_get_property (GObject * object, guint prop_id, + GValue * value, GParamSpec * pspec) +{ + GstClassifierTensorDecoder *self = GST_CLASSIFIER_TENSOR_DECODER (object); + + switch (prop_id) { + case PROP_THRESHOLD: + g_value_set_float (value, self->threshold); + break; + case PROP_LABEL_FILE: + g_value_set_string (value, self->labels_file); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static guint +gst_classifier_tensor_decoder_load_labels (GstClassifierTensorDecoder * self) +{ + gchar *content = NULL; + gchar **tokens = NULL; + gsize len; + GError *err = NULL; + GQuark val; + GArray *class_quark = NULL; + + if (self->labels_file == NULL) { + GST_ERROR_OBJECT (self, "Missing label file"); + return 0; + } + if (!g_file_get_contents (self->labels_file, &content, &len, &err)) { + GST_ERROR_OBJECT (self, "Could not load labels file %s: %s", + self->labels_file, err->message); + g_error_free (err); + return 0; + } + + if (len == 0) { + GST_ERROR_OBJECT (self, "Labels file %s is empty", self->labels_file); + g_free (content); + return 0; + } + + tokens = g_strsplit (content, "\n", 0); + g_free (content); + + if (tokens[0] != NULL) { + class_quark = + g_array_sized_new (FALSE, FALSE, sizeof (GQuark), self->class_count); + } + + self->class_quark = g_array_new (FALSE, FALSE, sizeof (GQuark)); + + for (int i = 0; tokens[i] != NULL && tokens[i][0] != '\0'; i++) { + val = g_quark_from_string (tokens[i]); + g_array_append_val (class_quark, val); + } + + if (class_quark == NULL) + GST_WARNING_OBJECT (self, 
"Label %s file does not contain any labels", + self->labels_file); + + self->class_quark = class_quark; + + g_strfreev (tokens); + return self->class_quark->len; +} + +static GstStateChangeReturn +gst_classifier_tensor_decoder_change_state (GstElement * element, + GstStateChange transition) +{ + GstClassifierTensorDecoder *self = GST_CLASSIFIER_TENSOR_DECODER (element); + GstStateChangeReturn ret; + + switch (transition) { + case GST_STATE_CHANGE_NULL_TO_READY: + if (self->labels_file != NULL && + !gst_classifier_tensor_decoder_load_labels (self)) { + return GST_STATE_CHANGE_FAILURE; + } + break; + default: + break; + } + + ret = GST_ELEMENT_CLASS (parent_class)->change_state (element, transition); + + switch (transition) { + case GST_STATE_CHANGE_READY_TO_NULL: + if (self->class_quark) + g_array_free (self->class_quark, FALSE); + if (self->postproc_result) + g_array_free (self->postproc_result, TRUE); + break; + default: + break; + } + + return ret; +} + +static const GstTensor * +get_tensor (GstTensorMeta * tmeta, GQuark tensor_id) +{ + const GstTensor *tensor; + const gsize DIMS[] = { 1, G_MAXSIZE }; + + tensor = gst_tensor_meta_get_typed_tensor (tmeta, tensor_id, + GST_TENSOR_DATA_TYPE_FLOAT32, GST_TENSOR_DIM_ORDER_ROW_MAJOR, 1, NULL); + if (tensor == NULL) + tensor = gst_tensor_meta_get_typed_tensor (tmeta, tensor_id, + GST_TENSOR_DATA_TYPE_FLOAT32, GST_TENSOR_DIM_ORDER_ROW_MAJOR, 2, DIMS); + if (tensor == NULL) + tensor = gst_tensor_meta_get_typed_tensor (tmeta, tensor_id, + GST_TENSOR_DATA_TYPE_UINT8, GST_TENSOR_DIM_ORDER_ROW_MAJOR, 1, NULL); + if (tensor == NULL) + tensor = gst_tensor_meta_get_typed_tensor (tmeta, tensor_id, + GST_TENSOR_DATA_TYPE_UINT8, GST_TENSOR_DIM_ORDER_ROW_MAJOR, 2, DIMS); + + return tensor; +} + +static const GstTensor * +gst_classifier_tensor_decoder_get_tensor (GstClassifierTensorDecoder * + self, GstBuffer * buf) +{ + GstMeta *meta = NULL; + gpointer iter_state = NULL; + const gchar *expected_tensor_id; + + if (!gst_buffer_get_meta 
(buf, GST_TENSOR_META_API_TYPE)) { + GST_DEBUG_OBJECT (self, + "missing tensor meta from buffer %" GST_PTR_FORMAT, buf); + return NULL; + } + + /* Use the tensor-id that matches what was negotiated */ + expected_tensor_id = self->do_softmax ? + GST_MODEL_STD_IMAGE_CLASSIFICATION : + GST_MODEL_STD_IMAGE_CLASSIFICATION_SOFTMAXED; + + while ((meta = gst_buffer_iterate_meta_filtered (buf, &iter_state, + GST_TENSOR_META_API_TYPE))) { + GstTensorMeta *tensor_meta = (GstTensorMeta *) meta; + const GstTensor *tensor; + + tensor = get_tensor (tensor_meta, + g_quark_from_static_string (expected_tensor_id)); + + if (tensor) + return tensor; + } + + return NULL; +} + +static GstFlowReturn +gst_classifier_tensor_decoder_decode (GstClassifierTensorDecoder * self, + const GstTensor * tensor, GstAnalyticsRelationMeta * rmeta) +{ + GstMapInfo map_info = GST_MAP_INFO_INIT; + gfloat max = 0.0; + gfloat *result_data = NULL; + gsize len; + GQuark q, qmax; + gint max_idx = -1; + GstAnalyticsClsMtd cls_mtd; + + len = tensor->dims[tensor->num_dims - 1]; + + if (len != self->class_quark->len) { + GST_WARNING_OBJECT (self, "Labels file has size %zu, but the tensor has" + " %u entries, it is probably not the right labels file", + len, self->class_quark->len); + len = MIN (len, self->class_quark->len); + } + + if (!gst_buffer_map (tensor->data, &map_info, GST_MAP_READ)) { + GST_ELEMENT_ERROR (self, STREAM, FAILED, (NULL), + ("Failed to map tensor data")); + return GST_FLOW_ERROR; + } + + GST_TRACE_OBJECT (self, "Tensor shape dims %zu", tensor->num_dims); + + if (gst_debug_category_get_threshold (GST_CAT_DEFAULT) >= GST_LEVEL_TRACE) { + for (gint i = 0; i < tensor->num_dims; i++) { + GST_TRACE_OBJECT (self, "Tensor dim %d: %zu", i, tensor->dims[i]); + } + } + + switch (tensor->data_type) { + case GST_TENSOR_DATA_TYPE_FLOAT32: + if (map_info.size != len * sizeof (gfloat)) { + GST_ELEMENT_ERROR (self, STREAM, FAILED, (NULL), + ("Tensor size is not as expected for float: map.size(%zu) !=" + " 
label-file-length(%zu) * sizeof(float)(%zu)", map_info.size, + len, sizeof (float))); + goto error_mapped; + } + + if (self->do_softmax) { + result_data = (gfloat *) self->postproc_result->data; + softmax_f32 (len, (gfloat *) map_info.data, result_data); + } else { + /* Already softmaxed, use data directly */ + result_data = (gfloat *) map_info.data; + } + break; + case GST_TENSOR_DATA_TYPE_UINT8: + if (map_info.size != len) { + GST_ELEMENT_ERROR (self, STREAM, FAILED, (NULL), + ("Tensor size is not as expected for uint8: map.size(%zu) !=" + " label-file-length(%zu))", map_info.size, len)); + goto error_mapped; + } + + /* Always need conversion buffer for uint8 -> float */ + result_data = (gfloat *) self->postproc_result->data; + if (self->do_softmax) { + softmax_u8 (len, (guint8 *) map_info.data, result_data); + } else { + const guint8 *uint8_data = map_info.data; + for (gint i = 0; i < len; i++) { + result_data[i] = uint8_data[i] / 255.0; + } + } + break; + default: + GST_ELEMENT_ERROR (self, STREAM, FAILED, (NULL), + ("Can't handle data type %d", tensor->data_type)); + goto error_mapped; + } + + for (gint j = 0; j < len; j++) { + if (self->class_quark != NULL) { + q = g_array_index (self->class_quark, GQuark, j); + } else { + q = j; + } + + if (result_data[j] > max) { + max = result_data[j]; + max_idx = j; + qmax = q; + } + } + + gst_buffer_unmap (tensor->data, &map_info); + + if (max_idx != -1) { + gst_analytics_relation_meta_add_one_cls_mtd (rmeta, max, qmax, &cls_mtd); + GST_LOG_OBJECT (self, "Max class is %d:%s with %f", max_idx, + g_quark_to_string (qmax), max); + } + + return GST_FLOW_OK; + +error_mapped: + gst_buffer_unmap (tensor->data, &map_info); + return GST_FLOW_ERROR; +} + +static GstFlowReturn +gst_classifier_tensor_decoder_transform_ip (GstBaseTransform * trans, + GstBuffer * buf) +{ + GstClassifierTensorDecoder *self = GST_CLASSIFIER_TENSOR_DECODER (trans); + const GstTensor *tensor; + GstAnalyticsRelationMeta *rmeta; + + tensor = 
gst_classifier_tensor_decoder_get_tensor (self, buf); + if (tensor == NULL) { + GST_WARNING_OBJECT (trans, "missing tensor meta"); + return GST_FLOW_OK; + } + + rmeta = gst_buffer_add_analytics_relation_meta (buf); + + return gst_classifier_tensor_decoder_decode (self, tensor, rmeta); +} + +static gboolean +gst_classifier_tensor_decoder_set_caps (GstBaseTransform * trans, + GstCaps * incaps, GstCaps * outcaps) +{ + GstClassifierTensorDecoder *self = GST_CLASSIFIER_TENSOR_DECODER (trans); + const GstCaps *tcaps; + const GstStructure *s, *ts, *dims_s; + const GValue *dims_v, *dim_v, *tensors_v, *tensors_gv, *tensor_caps_v; + gsize dims_size, batchsize = 1; + gchar buffer[32]; + GQuark val; + + /* Get the classification tensor */ + s = gst_caps_get_structure (incaps, 0); + g_return_val_if_fail (s != NULL, FALSE); + + tensors_v = gst_structure_get_value (s, "tensors"); + g_return_val_if_fail (tensors_v != NULL, FALSE); + + ts = gst_value_get_structure (tensors_v); + g_return_val_if_fail (ts != NULL, FALSE); + + /* Try to get classification group (non-softmaxed) first */ + tensors_gv = gst_structure_get_value (ts, GROUP_ID_CLASSIFICATION); + /* If not found, try softmaxed group */ + if (tensors_gv == NULL) + tensors_gv = + gst_structure_get_value (ts, GROUP_ID_CLASSIFICATION_SOFTMAXED); + g_return_val_if_fail (tensors_gv != NULL, FALSE); + + tensor_caps_v = gst_value_unique_list_get_value (tensors_gv, 0); + g_return_val_if_fail (tensor_caps_v != NULL, FALSE); + + tcaps = gst_value_get_caps (tensor_caps_v); + s = gst_caps_get_structure (tcaps, 0); + g_return_val_if_fail (tcaps != NULL, FALSE); + + if (gst_structure_has_field (s, "tensor-id")) { + const gchar *tensor_id = gst_structure_get_string (s, "tensor-id"); + + /* Determine if we need to apply softmax based on negotiated tensor-id */ + if (g_strcmp0 (tensor_id, GST_MODEL_STD_IMAGE_CLASSIFICATION) == 0) { + self->do_softmax = TRUE; + } else if (g_strcmp0 (tensor_id, + GST_MODEL_STD_IMAGE_CLASSIFICATION_SOFTMAXED) == 
0) { + self->do_softmax = FALSE; + } else { + /* Unknown tensor-id, skip */ + return TRUE; + } + + dims_s = gst_caps_get_structure (tcaps, 0); + dims_v = gst_structure_get_value (dims_s, "dims"); + dims_size = gst_value_array_get_size (dims_v); + + if (dims_size == 2) { + /* Explicit batch-size */ + dim_v = gst_value_array_get_value (dims_v, 0); + batchsize = g_value_get_int (dim_v); + + if (batchsize == 0) + batchsize = 1; + + dim_v = gst_value_array_get_value (dims_v, 1); + } else { + dim_v = gst_value_array_get_value (dims_v, 0); + } + + /* Get classes count */ + self->class_count = g_value_get_int (dim_v); + + /* Allocate postproc_result buffer for softmax or uint8->float conversion */ + self->postproc_result = + g_array_sized_new (FALSE, TRUE, sizeof (gfloat), self->class_count); + + if (self->class_quark != NULL && + self->class_count != self->class_quark->len) { + GST_ELEMENT_ERROR (GST_BASE_TRANSFORM (self), STREAM, FAILED, + ("Label-file/Tensor mismatch"), + ("Class count from tensor mismatch class count from label file")); + return FALSE; + } + + /* Generate labels if no label file was specified. */ + if (self->class_quark == NULL) { + self->class_quark = g_array_sized_new (FALSE, FALSE, sizeof (GQuark), + self->class_count); + for (gsize i = 0; i < self->class_count; i++) { + if (g_snprintf (buffer, sizeof (buffer), "%zu", i) >= sizeof (buffer)) { + g_array_free (self->postproc_result, FALSE); + self->postproc_result = NULL; + return FALSE; + } + val = g_quark_from_string (buffer); + g_array_append_val (self->class_quark, val); + } + } + } + + return TRUE; +}
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/tensordecoders/gstclassifiertensordecoder.h
Added
@@ -0,0 +1,70 @@ +/* + * GStreamer gstreamer-classifiertensordecoder + * Copyright (C) 2025 Collabora Ltd + * @author: Daniel Morin <daniel.morin@dmohub.org> + * + * gstclassifiertensordecoder.h + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + + +#ifndef __GST_CLASSIFIER_TENSOR_DECODER_H__ +#define __GST_CLASSIFIER_TENSOR_DECODER_H__ + +#include <gst/gst.h> +#include <gst/base/gstbasetransform.h> + +G_BEGIN_DECLS + +#define GST_TYPE_CLASSIFIER_TENSOR_DECODER (gst_classifier_tensor_decoder_get_type ()) +G_DECLARE_FINAL_TYPE (GstClassifierTensorDecoder, gst_classifier_tensor_decoder, + GST, CLASSIFIER_TENSOR_DECODER, GstBaseTransform) + +/** + * GstClassifierTensorDecoder: + * + * @threshold: Class confidence threshold + * @labels_file: Path where to read class labels + * @class_quark: Class labels quark representation + * @postproc_result: Buffer for post-processing (softmax, uint8->float conversion) + * @class_count: Class count + * @do_softmax: Whether softmax needs to be applied (determined by negotiated caps) + * + * Since: 1.24 + */ +struct _GstClassifierTensorDecoder +{ + GstBaseTransform basetransform; + gfloat threshold; + gchar *labels_file; + GArray *class_quark; + GArray *postproc_result; + gsize class_count; + gboolean do_softmax; +}; + +struct 
_GstClassifierTensorDecoderClass +{ + GstBaseTransformClass parent_class; + + /* TODO: Add vmethod to allow overwriting: decode, postprocess, load_labels */ +}; + +GST_ELEMENT_REGISTER_DECLARE (classifier_tensor_decoder) + +G_END_DECLS +#endif /* __GST_CLASSIFIER_TENSOR_DECODER_H__ */
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/tensordecoders/gstfacedetectortensordecoder.c
Added
@@ -0,0 +1,702 @@ +/* + * GStreamer gstreamer-ultralightfacedetectortensordec + * Copyright (C) 2025 Collabora Ltd. + * + * gstfacedetectortensordecoder.c + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + + /** + * SECTION:element-ultralightfacedetectortensordec + * @short_description: Detect faces in video buffers using the Ultra Light Face Detection model. + * + * This element can parse per-buffer inference tensor meta data generated by an upstream + * inference element. + * + * ## Example launch command: + * + * Test image files can be found here : + * https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB/tree/master/imgs + * + * The Model file can be found here : + * https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB/tree/master/models/onnx + * + * GST_DEBUG=ultralightfacedetectortensordec \ + * gst-launch-1.0 multifilesrc location=~/imgs/11.jpg ! jpegdec ! videoconvertscale ! \ + * onnxinference model-file=version-RFB-320.onnx input-image-format=chw input-tensor-offset=-127 input-tensor-scale=128.0 ! \ + * ultralightfacedetectortensordec ! objectdetectionoverlay object-detection-outline-color=0xFF0000FF draw-labels=false ! \ + * videoconvertscale ! 
autovideosink + * + * Since: 1.28 + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gstfacedetectortensordecoder.h" + +#include <gio/gio.h> + +#include <gst/gst.h> +#include <gst/video/video.h> +#include <gst/analytics/analytics.h> + +/* Face detection tensor id strings */ +#define BOXES_TENSOR_ID "ssd-mobilenet-v1-variant-1-out-boxes" +#define SCORES_TENSOR_ID "ultra-lightweight-face-detection-rfb-320-v1-variant-1-out-scores" +#define GROUP_ID "ultra-lightweight-face-detection-rfb-320-v1-variant-1-out" + +GST_DEBUG_CATEGORY_STATIC (face_detector_tensor_decoder_debug); +#define GST_CAT_DEFAULT face_detector_tensor_decoder_debug + +GST_ELEMENT_REGISTER_DEFINE (face_detector_tensor_decoder, + "ultralightfacedetectortensordec", GST_RANK_PRIMARY, + GST_TYPE_FACE_DETECTOR_TENSOR_DECODER); + +/* GstFaceDetectorTensorDecoder properties, see properties description in + * gst_face_detector_tensor_decoder_class_init for more details. */ +enum +{ + PROP_0, + PROP_SCORE_THRESHOLD, + PROP_IOU_THRESHOLD +}; + +/* Default properties value */ +static const gfloat DEFAULT_SCORE_THRESHOLD = 0.6f; /* confidence threshold */ +static const gfloat DEFAULT_IOU_THRESHOLD = 0.3f; /* NMS IoU threshold */ + +/* To tensor-id are defined by a string that is converted to quark + * which is just an integer value using a hash function. For efficiency + * we compare on the quark (hash value). Since tensor-id never change we + * just calculate the hash once during initialization and store the value in + * these variables. */ +GQuark BOXES_TENSOR_ID_QUARK; +GQuark SCORES_TENSOR_ID_QUARK; + +GQuark FACE_QUARK; + +/* GStreamer element srcpad template. Template of a srcpad that can receive + * any raw video. */ +static GstStaticPadTemplate gst_face_detector_tensor_decoder_src_template = +GST_STATIC_PAD_TEMPLATE ("src", + GST_PAD_SRC, + GST_PAD_ALWAYS, + GST_STATIC_CAPS ("video/x-raw") + ); + +/* GStreamer element sinkpad template. 
Template of a sinkpad that can receive + * any raw video. */ + +/* *INDENT-OFF* */ +static GstStaticPadTemplate gst_face_detector_tensor_decoder_sink_template = +GST_STATIC_PAD_TEMPLATE ("sink", + GST_PAD_SINK, + GST_PAD_ALWAYS, + GST_STATIC_CAPS ("video/x-raw," + "tensors=(structure)" + "tensorgroups," + GROUP_ID"=(/uniquelist){" + "(GstCaps)" + "tensor/strided," + "tensor-id="BOXES_TENSOR_ID"," + "dims=<(int)1,(int)1,max,(int)4>," + "dims-order=(string)row-major," + "type=(string)float32;," + "(GstCaps)" + "tensor/strided," + "tensor-id="SCORES_TENSOR_ID"," + "dims=<(int)1,(int)1,max,(int)2>," + "dims-order=(string)row-major," + "type=(string)float32;" + "}" + ";" + )); +/* *INDENT-ON* */ + +/* Prototypes */ +static void gst_face_detector_tensor_decoder_set_property (GObject * object, + guint prop_id, const GValue * value, GParamSpec * pspec); +static void gst_face_detector_tensor_decoder_get_property (GObject * object, + guint prop_id, GValue * value, GParamSpec * pspec); +static void gst_face_detector_tensor_decoder_finalize (GObject * object); +static GstFlowReturn +gst_face_detector_tensor_decoder_transform_ip (GstBaseTransform * trans, + GstBuffer * buf); +static gboolean gst_face_detector_tensor_decoder_set_caps (GstBaseTransform * + trans, GstCaps * incaps, GstCaps * outcaps); + +G_DEFINE_TYPE (GstFaceDetectorTensorDecoder, gst_face_detector_tensor_decoder, + GST_TYPE_BASE_TRANSFORM); + +static void +gst_face_detector_tensor_decoder_class_init (GstFaceDetectorTensorDecoderClass + * klass) +{ + GObjectClass *gobject_class = (GObjectClass *) klass; + GstElementClass *element_class = (GstElementClass *) klass; + GstBaseTransformClass *basetransform_class = (GstBaseTransformClass *) klass; + + /* Define GstFaceDetectorTensorDecoder debug category. 
*/ + GST_DEBUG_CATEGORY_INIT (face_detector_tensor_decoder_debug, + "ultralightfacedetectortensordec", 0, + "Tensor Decoder for Face Detection"); + + /* Set GObject vmethod to get and set property */ + gobject_class->set_property = gst_face_detector_tensor_decoder_set_property; + gobject_class->get_property = gst_face_detector_tensor_decoder_get_property; + gobject_class->finalize = gst_face_detector_tensor_decoder_finalize; + + /* Define GstFaceDetectorTensorDecoder properties using GObject properties + * interface.*/ + + /** + * GstFaceDetectorTensorDecoder:score-threshold + * + * Threshold for deciding when to remove boxes based on score + * + * Since: 1.28 + */ + g_object_class_install_property (G_OBJECT_CLASS (klass), PROP_SCORE_THRESHOLD, + g_param_spec_float ("score-threshold", + "Score threshold", + "Threshold for deciding when to remove boxes based on score", + 0.0, 1.0, DEFAULT_SCORE_THRESHOLD, (GParamFlags) + (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + /** + * GstFaceDetectorTensorDecoder:iou-threshold + * + * Threshold for removing boxes based on proportion of the image + * + * Since: 1.28 + */ + g_object_class_install_property (G_OBJECT_CLASS (klass), PROP_IOU_THRESHOLD, + g_param_spec_float ("iou-threshold", + "IoU threshold", + "Threshold for removing boxes based on proportion of the image", + 0.0, 1.0, DEFAULT_IOU_THRESHOLD, + (GParamFlags) (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + /* Element description. */ + gst_element_class_set_static_metadata (element_class, + "ultralightfacedetectortensordec", "Tensordecoder/Video", + "Detect tensor output from the inference of Ultra Light Face Detection" + " to detect the faces in video frames." 
+ "The original repository of the Ultra Light Face Detection is located at" + " https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB.", + "Raghavendra Rao <raghavendra.rao@collabora.com>"); + + /* Add pads to the element based on the pad templates defined earlier */ + + gst_element_class_add_pad_template (element_class, + gst_static_pad_template_get + (&gst_face_detector_tensor_decoder_sink_template)); + + gst_element_class_add_pad_template (element_class, + gst_static_pad_template_get + (&gst_face_detector_tensor_decoder_src_template)); + + /* Set the GstBaseTransform transform_ip vmethod. This method is called + * when the sinkpad receives a buffer. "ip" stands for in-place, meaning the + * buffer remains unchanged by the element. The tensor decoder only inspects + * the buffer it receives for an attached GstTensorMeta with a tensor-id + * that can be handled by GstFaceDetectorTensorDecoder. */ + basetransform_class->transform_ip = + GST_DEBUG_FUNCPTR (gst_face_detector_tensor_decoder_transform_ip); + + /* Set GstBaseTransform set_caps vmethod. This will be called once the + * capability negotiation has been completed. We will be able to extract + * resolution from this callback. 
*/ + basetransform_class->set_caps = + GST_DEBUG_FUNCPTR (gst_face_detector_tensor_decoder_set_caps); + + BOXES_TENSOR_ID_QUARK = g_quark_from_static_string (BOXES_TENSOR_ID); + SCORES_TENSOR_ID_QUARK = g_quark_from_static_string (SCORES_TENSOR_ID); + FACE_QUARK = g_quark_from_static_string ("face"); +} + +static void +gst_face_detector_tensor_decoder_init (GstFaceDetectorTensorDecoder * self) +{ + self->score_threshold = DEFAULT_SCORE_THRESHOLD; + self->iou_threshold = DEFAULT_IOU_THRESHOLD; + self->sel_candidates = NULL; + self->selected = NULL; + gst_base_transform_set_passthrough (GST_BASE_TRANSFORM (self), FALSE); + + GST_PAD_UNSET_ACCEPT_INTERSECT (self->basetransform.sinkpad); +} + +static void +gst_face_detector_tensor_decoder_finalize (GObject * object) +{ + GstFaceDetectorTensorDecoder *self = + GST_FACE_DETECTOR_TENSOR_DECODER (object); + + g_clear_pointer (&self->sel_candidates, g_ptr_array_unref); + g_clear_pointer (&self->selected, g_ptr_array_unref); + g_clear_pointer (&self->candidates, g_free); + + G_OBJECT_CLASS (gst_face_detector_tensor_decoder_parent_class)->finalize + (object); +} + +static void +gst_face_detector_tensor_decoder_set_property (GObject * object, guint prop_id, + const GValue * value, GParamSpec * pspec) +{ + GstFaceDetectorTensorDecoder *self = + GST_FACE_DETECTOR_TENSOR_DECODER (object); + + switch (prop_id) { + case PROP_SCORE_THRESHOLD: + self->score_threshold = g_value_get_float (value); + break; + case PROP_IOU_THRESHOLD: + self->iou_threshold = g_value_get_float (value); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static void +gst_face_detector_tensor_decoder_get_property (GObject * object, guint prop_id, + GValue * value, GParamSpec * pspec) +{ + GstFaceDetectorTensorDecoder *self = + GST_FACE_DETECTOR_TENSOR_DECODER (object); + + switch (prop_id) { + case PROP_SCORE_THRESHOLD: + g_value_set_float (value, self->score_threshold); + break; + case PROP_IOU_THRESHOLD: + 
g_value_set_float (value, self->iou_threshold); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +/* gst_face_detector_tensor_decoder_set_caps: + * + * Callback on caps negotiation completed. We use it here to retrieve + * video resolution. See GstBaseTransform for more details. + */ +static gboolean +gst_face_detector_tensor_decoder_set_caps (GstBaseTransform * trans, + GstCaps * incaps, GstCaps * outcaps) +{ + GstFaceDetectorTensorDecoder *self = GST_FACE_DETECTOR_TENSOR_DECODER (trans); + + if (!gst_video_info_from_caps (&self->video_info, incaps)) { + GST_ERROR_OBJECT (self, "Failed to parse caps"); + return FALSE; + } + + return TRUE; +} + +/* gst_face_detector_tensor_decoder_get_tensor_meta + * @buf:in: buffer + * @boxes_tensor:out: Boxes tensor + * @scores_tensor:out: scores tensor + * + * Retrieve FaceDetection boxes and scores tensors from buffer. + * + * @return: TRUE if boxes and scores tensors with the desired features are attached to @buf, + * FALSE otherwise. 
+ */ +static gboolean +gst_face_detector_tensor_decoder_get_tensor_meta (GstFaceDetectorTensorDecoder + * self, GstBuffer * buf, const GstTensor ** boxes_tensor, + const GstTensor ** scores_tensor) +{ + GstMeta *meta; + gpointer state = NULL; + static const gsize BOXES_DIMS[] = { 1, G_MAXSIZE, 4 }; + static const gsize SCORES_DIMS[] = { 1, G_MAXSIZE, 2 }; + + g_return_val_if_fail (boxes_tensor != NULL, FALSE); + g_return_val_if_fail (scores_tensor != NULL, FALSE); + + *boxes_tensor = NULL; + *scores_tensor = NULL; + + /* Find ultralightfacedetectortensordec TensorMeta */ + while ((meta = gst_buffer_iterate_meta_filtered (buf, &state, + GST_TENSOR_META_API_TYPE))) { + GstTensorMeta *tensor_meta = (GstTensorMeta *) meta; + + GST_LOG_OBJECT (self, "Num tensors %zu", tensor_meta->num_tensors); + + /* Retrieve the tensor from the GstTensorMeta that has a tensor-id + * matching BOXES_TENSOR_ID_QUARK, a memory reading order matching + * GST_TENSOR_DIM_ORDER_ROW_MAJOR, 3 dimensions and a data type + * matching GST_TENSOR_DATA_TYPE_FLOAT32 */ + *boxes_tensor = + gst_tensor_meta_get_typed_tensor (tensor_meta, BOXES_TENSOR_ID_QUARK, + GST_TENSOR_DATA_TYPE_FLOAT32, GST_TENSOR_DIM_ORDER_ROW_MAJOR, 3, + BOXES_DIMS); + + if (*boxes_tensor == NULL) + continue; + + /* Retrieve the tensor from the GstTensorMeta that has a tensor-id + * matching SCORES_TENSOR_ID_QUARK, a memory reading order matching + * GST_TENSOR_DIM_ORDER_ROW_MAJOR, 3 dimensions and a data type + * matching GST_TENSOR_DATA_TYPE_FLOAT32 */ + *scores_tensor = + gst_tensor_meta_get_typed_tensor (tensor_meta, SCORES_TENSOR_ID_QUARK, + GST_TENSOR_DATA_TYPE_FLOAT32, GST_TENSOR_DIM_ORDER_ROW_MAJOR, 3, + SCORES_DIMS); + + if (*scores_tensor == NULL) + continue; + + } + + if (*boxes_tensor == NULL) { + GST_WARNING_OBJECT (self, "Can't retrieve boxes tensor"); + return FALSE; + } + + if (*scores_tensor == NULL) { + GST_WARNING_OBJECT (self, "Can't retrieve scores 
tensor"); + return FALSE; + } + + return TRUE; +} + +/* Compare c1 and c2 + * Utility function for sorting candidates based on the scores. + */ +static gint +gst_face_detector_tensor_decoder_sort_candidates (gconstpointer c1, + gconstpointer c2) +{ + const Candidate *candidate1 = *((Candidate **) c1); + const Candidate *candidate2 = *((Candidate **) c2); + + if (*candidate1->score < *candidate2->score) { + return 1; + } else if (*candidate1->score > *candidate2->score) { + return -1; + } else { + return 0; + } +} + +static gfloat +iou_box (const Candidate * a, const Candidate * b) +{ + gfloat ax1 = a->box[0]; + gfloat ay1 = a->box[1]; + gfloat ax2 = a->box[2]; + gfloat ay2 = a->box[3]; + + gfloat bx1 = b->box[0]; + gfloat by1 = b->box[1]; + gfloat bx2 = b->box[2]; + gfloat by2 = b->box[3]; + + gfloat xx1 = (ax1 > bx1) ? ax1 : bx1; + gfloat yy1 = (ay1 > by1) ? ay1 : by1; + gfloat xx2 = (ax2 < bx2) ? ax2 : bx2; + gfloat yy2 = (ay2 < by2) ? ay2 : by2; + + gfloat w = xx2 - xx1; + gfloat h = yy2 - yy1; + if (w < 0.0f || h < 0.0f) { + /* No overlap */ + return 0.0f; + } + + /* Area of intersection */ + gfloat intersection = w * h; + + /* Area of each box */ + gfloat areaA = (ax2 - ax1) * (ay2 - ay1); + gfloat areaB = (bx2 - bx1) * (by2 - by1); + if (areaA <= 0.0f || areaB <= 0.0f) + return 0.0f; + + /* IoU = intersection / union */ + gfloat iou = intersection / (areaA + areaB - intersection); + return iou; +} + +/* hard_nms: + * @sel_candidates: array of pointers to selected boxes with scores + * @selected: array of pointers to selected boxes with scores after removal of overlapping boxes + * @iou_threshold: threshold for removing boxes based on proportion of the image + * @top_k: number of boxes to keep (if top_k <= 0, keep all). 
+ * @return: void + * Hard NMS: + * 1) Keep highest scoring box + * 2) Remove boxes with IoU >= iou_threshold + * 3) Repeat until no boxes left or we reach top_k + */ +static void +hard_nms (const GPtrArray * sel_candidates, + GPtrArray * selected, gfloat iou_threshold, gint top_k) +{ + /* Edge case: Handle the case of no input boxes */ + if (sel_candidates->len == 0) { + return; + } + + /* We'll mark boxes as "suppressed" using an array of booleans. */ + gchar *discarded = g_alloca0 (sel_candidates->len); /* 0 => keep, 1 => discard */ + + /* The maximum possible output is 'sel_candidates->len'. We'll store the kept boxes into 'selected'. */ + + /* Perform NMS. */ + for (gsize i = 0; i < sel_candidates->len; i++) { + if (discarded[i]) { + /* Already thrown out due to overlap. */ + continue; + } + + /* Get the current indexed candidate from the selected candidates. + * Then store this current box/candidate into the final selected candidates array + */ + Candidate *c = (Candidate *) g_ptr_array_index (sel_candidates, i); + g_ptr_array_add (selected, c); + + /* If we have reached top_k (and top_k > 0), break. */ + if (top_k > 0 && selected->len == top_k) { + break; + } + + /* Suppress any candidate that overlaps (IoU >= iou_threshold) with the current one. 
*/ + for (gsize j = i + 1; j < sel_candidates->len; j++) { + if (discarded[j]) + continue; + + gfloat overlap = iou_box (g_ptr_array_index (sel_candidates, i), + g_ptr_array_index (sel_candidates, j)); + if (overlap >= iou_threshold) { + discarded[j] = 1; /* Mark for discard */ + } + } + } +} + +/* gst_face_detector_tensor_decoder_decode_boxes_f32: + * @self: Instance + * @boxes_tensor: Buffer containing the boxes tensor + * @scores_tensor: Buffer containing the scores/confidences tensor + * @rmeta: analytics-meta that is attached to the buffer + * @return: void + * Decode Face Detection tensors, post-process them and store the decoded information + * into an analytics-meta that is attached to the buffer before being pushed + * downstream. + */ +static void +gst_face_detector_tensor_decoder_decode_boxes_f32 (GstFaceDetectorTensorDecoder + * self, const GstTensor * boxes_tensor, const GstTensor * scores_tensor, + GstAnalyticsRelationMeta * rmeta) +{ + GstMapInfo map_info_boxes, map_info_scores; + gfloat *candidate, *score; + gboolean rv GST_UNUSED_ASSERT; + GPtrArray *sel_candidates = self->sel_candidates, *selected = self->selected; + + rv = gst_buffer_map (boxes_tensor->data, &map_info_boxes, GST_MAP_READ); + g_assert (rv); + + /* Retrieve memory at index 0 from scores_tensor in READ mode */ + rv = gst_buffer_map (scores_tensor->data, &map_info_scores, GST_MAP_READ); + g_assert (rv); + + GST_LOG_OBJECT (self, "Boxes Tensor shape dims %zu", boxes_tensor->num_dims); + GST_LOG_OBJECT (self, "scores Tensor shape dims %zu", + scores_tensor->num_dims); + + if (gst_debug_category_get_threshold (GST_CAT_DEFAULT) >= GST_LEVEL_TRACE) { + /* Trace boxes tensor dimensions */ + for (gsize i = 0; i < boxes_tensor->num_dims; i++) { + GST_TRACE_OBJECT (self, "Boxes Tensor dim %zu: %zu", i, + boxes_tensor->dims[i]); + } + + /* Trace scores tensor dimensions */ + for (gsize i = 0; i < scores_tensor->num_dims; i++) { + GST_TRACE_OBJECT (self, "Scores Tensor dim %zu: %zu", i, + 
scores_tensor->dims[i]); + } + } + + /* Allocate array to store selected candidates */ + if (sel_candidates == NULL) { + /* Number of candidates can be large, keep the array to avoid frequent + * allocation */ + sel_candidates = g_ptr_array_new_full (boxes_tensor->dims[1], NULL); + self->sel_candidates = sel_candidates; + selected = g_ptr_array_new_full (boxes_tensor->dims[1], NULL); + self->selected = selected; + self->candidates = (Candidate *) g_new0 (Candidate, boxes_tensor->dims[1]); + } else { + /* Reset lengths when we re-use arrays */ + g_ptr_array_set_size (sel_candidates, 0); + g_ptr_array_set_size (selected, 0); + } + + score = (gfloat *) map_info_scores.data; + candidate = (gfloat *) map_info_boxes.data; + + gsize idx = 0; + + /* For UltraLightFaceDetection: + * "boxes" => shape [N,4], where N = 4420 + * "scores"=> shape [N,2], (background,face) + * We'll skip the background (index = 0) and keep the foreground (index = 1). + */ + + /* + * Iterate through the Scores tensor. + * Check whether the score exceeds the score threshold; if it does, select the score and the corresponding box. + * Add these selected boxes to the sel_candidates array. 
+ * */ + for (gsize i = 1, j = 0; i < scores_tensor->dims[1] * 2; i += 2, j += 4) { + if (score[i] >= self->score_threshold) { + self->candidates[idx].index = idx; + self->candidates[idx].box = &candidate[j]; + self->candidates[idx].score = &score[i]; + + g_ptr_array_add (sel_candidates, &self->candidates[idx]); + idx++; + } + } + + GST_LOG_OBJECT (self, "Number of selected candidates = %d", + sel_candidates->len); + + if (sel_candidates->len == 0) { + GST_LOG_OBJECT (self, "No boxes above threshold=%1.2f", + self->score_threshold); + goto cleanup; + } + + if (gst_debug_category_get_threshold (GST_CAT_DEFAULT) >= GST_LEVEL_TRACE) { + for (gsize i = 0; i < sel_candidates->len; i++) { + Candidate *c = (Candidate *) g_ptr_array_index (sel_candidates, i); + gsize j = 0; + for (; j < boxes_tensor->dims[2]; j++) { + GST_TRACE_OBJECT (self, "sel_candidates[%zu] = %1.5f ", i + j, + c->box[j]); + } + GST_TRACE_OBJECT (self, "score[%zu] = %1.5f", i + j, c->score[0]); + } + } + + /* + * Sort the sel_candidates array so as to have the candidates in descending order w.r.t. 
scores + */ + g_ptr_array_sort (sel_candidates, + gst_face_detector_tensor_decoder_sort_candidates); + + if (gst_debug_category_get_threshold (GST_CAT_DEFAULT) >= GST_LEVEL_TRACE) { + for (gsize i = 0; i < sel_candidates->len; i++) { + Candidate *c = (Candidate *) g_ptr_array_index (sel_candidates, i); + GST_TRACE_OBJECT (self, "c[%zu] = %1.5f index = %d", i, c->score[0], + c->index); + } + } + + /* NMS */ + hard_nms (sel_candidates, selected, self->iou_threshold, -1); + + GST_LOG_OBJECT (self, "Number of faces detected = %d", selected->len); + if (gst_debug_category_get_threshold (GST_CAT_DEFAULT) >= GST_LEVEL_TRACE) { + for (gsize i = 0; i < selected->len; i++) { + Candidate *c = (Candidate *) g_ptr_array_index (selected, i); + GST_TRACE_OBJECT (self, + "%zu x1 = %1.5f y1 = %1.5f x2 = %1.5f y2 = %1.5f score = %1.5f", + i + 1, c->box[0], c->box[1], c->box[2], c->box[3], + c->score[0]); + } + } + + gsize frame_width = self->video_info.width; + gsize frame_height = self->video_info.height; + + /* Convert each final box from normalized to pixel coords and attach to meta. */ + for (gint i = 0; i < selected->len; i++) { + Candidate *c = (Candidate *) g_ptr_array_index (selected, i); + gfloat x1 = c->box[0] * frame_width; + gfloat y1 = c->box[1] * frame_height; + gfloat x2 = c->box[2] * frame_width; + gfloat y2 = c->box[3] * frame_height; + gfloat w_ = x2 - x1; + gfloat h_ = y2 - y1; + + /* Add to analytics meta: (x, y, width, height). 
*/ + gst_analytics_relation_meta_add_od_mtd (rmeta, FACE_QUARK, + (gint) (x1 + 0.5f), (gint) (y1 + 0.5f), + (gint) (w_ + 0.5f), (gint) (h_ + 0.5f), c->score[0], NULL); + } + +cleanup: + + /* Unmap */ + gst_buffer_unmap (boxes_tensor->data, &map_info_boxes); + gst_buffer_unmap (scores_tensor->data, &map_info_scores); +} + +/* gst_face_detector_tensor_decoder_transform_ip: + * @trans: Instance + * @buf:inout: Buffer containing media and where tensors can be attached + * @return: Flow errors + * Decode Face Detection tensors, post-process them and store the decoded information + * into an analytics-meta that is attached to the buffer before being pushed + * downstream. + */ +static GstFlowReturn +gst_face_detector_tensor_decoder_transform_ip (GstBaseTransform * trans, + GstBuffer * buf) +{ + GstFaceDetectorTensorDecoder *self = GST_FACE_DETECTOR_TENSOR_DECODER (trans); + const GstTensor *boxes_tensor, *scores_tensor; + GstAnalyticsRelationMeta *rmeta; + + /* Retrieve the desired Face Detection tensors. + * Return a flow error if the desired tensors are not found. */ + if (!gst_face_detector_tensor_decoder_get_tensor_meta (self, buf, + &boxes_tensor, &scores_tensor)) { + GST_ELEMENT_ERROR (self, STREAM, DECODE, (NULL), + ("Tensor doesn't have the expected data type or shape.")); + return GST_FLOW_ERROR; + } + + rmeta = gst_buffer_add_analytics_relation_meta (buf); + g_assert (rmeta != NULL); + + /* Decode boxes_tensor, scores_tensor and attach the information in a structured way + * to rmeta. */ + gst_face_detector_tensor_decoder_decode_boxes_f32 (self, boxes_tensor, + scores_tensor, rmeta); + + return GST_FLOW_OK; +}
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/tensordecoders/gstfacedetectortensordecoder.h
Added
@@ -0,0 +1,86 @@ +/* + * GStreamer gstreamer-facedetectortensordecoder + * Copyright (C) 2025 Collabora Ltd + * + * gstfacedetectortensordecoder.h + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifndef __GST_FACE_DETECTOR_TENSOR_DECODER_H__ +#define __GST_FACE_DETECTOR_TENSOR_DECODER_H__ + +#include <gst/gst.h> +#include <gst/video/video.h> +#include <gst/base/base.h> + +G_BEGIN_DECLS +#define GST_TYPE_FACE_DETECTOR_TENSOR_DECODER (gst_face_detector_tensor_decoder_get_type()) +G_DECLARE_FINAL_TYPE (GstFaceDetectorTensorDecoder, + gst_face_detector_tensor_decoder, GST, FACE_DETECTOR_TENSOR_DECODER, + GstBaseTransform) + +typedef struct +{ + guint16 index; + gfloat *box; + gfloat *score; +} Candidate; + +/** + * GstFaceDetectorTensorDecoder: + * + * Since: 1.28 + */ +struct _GstFaceDetectorTensorDecoder +{ + GstBaseTransform basetransform; + + /* Confidence threshold. */ + gfloat score_threshold; + + /* Intersection-over-Union threshold. */ + gfloat iou_threshold; + + /* Video Info */ + GstVideoInfo video_info; + + /* Candidates with a class confidence level above threshold. */ + GPtrArray *sel_candidates; + + /* Final candidates selected that respect the class confidence level, + * NMS and the maximum detection count. 
*/ + GPtrArray *selected; + + /* Candidates with a class confidence level and bounding boxes. */ + Candidate *candidates; +}; + +/** + * GstFaceDetectorTensorDecoderClass: + * + * @parent_class base transform base class + * + * Since: 1.28 + */ +struct _GstFaceDetectorTensorDecoderClass +{ + GstBaseTransformClass parent_class; +}; + +GST_ELEMENT_REGISTER_DECLARE (face_detector_tensor_decoder) + G_END_DECLS +#endif /* __GST_FACE_DETECTOR_TENSOR_DECODER_H__ */
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/tensordecoders/gstioutracker.c
Added
@@ -0,0 +1,469 @@ +/* + * GStreamer gstreamer-ioutracker + * Copyright (C) 2025 Collabora Ltd. + * author: Santosh Mahto <santosh.mahto@collabora.com> + * gstioutracker.c + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +/** + * SECTION:element-ioutracker + * @short_description: Simple object tracking based on Intersection-over-Union + * + * This element can parse per-buffer object-detection meta and add tracking information. + * + * It computes an intersection-over-union, so it tracks whether two detections share enough + * area to be likely to be the same object. + * + * Note: This is meant for the simplest cases of object tracking and has known limitations. + * For complex cases, please choose a more advanced tracker. + * + * |[ + * gst-launch-1.0 filesrc location=bouncing.mp4 ! decodebin \ + * ! videoconvertscale add-borders=1 ! 'video/x-raw,pixel-aspect-ratio=1/1' \ + * ! onnxinference execution-provider=cpu model-file=./yolov8s.onnx \ + * ! yolotensordecoder class-confidence-threshold=0.5 \ + * ! ioutracker iou-score-threshold=0.7 \ + * ! videoconvert ! 
glimagesink + * ]| + * + * Since: 1.28 + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gstioutracker.h" + +#include <gst/gst.h> +#include <gst/video/video.h> +#include <gst/analytics/analytics.h> +#include <gst/analytics/gstanalytics_image_util.h> + +GST_DEBUG_CATEGORY_STATIC (iou_tracker_debug); +#define GST_CAT_DEFAULT iou_tracker_debug +GST_ELEMENT_REGISTER_DEFINE (iou_tracker, "ioutracker", + GST_RANK_PRIMARY, GST_TYPE_IOU_TRACKER); + +/* GstIouTracker properties */ +enum +{ + PROP_0, + PROP_IOU_SCORE_THRESHOLD, + PROP_MIN_FRAME_COUNT_FOR_LOST_TRACK +}; + +#define DEFAULT_MIN_FRAME_COUNT_FOR_LOST_TRACK 5 /* randomly chosen */ +#define DEFAULT_IOU_SCORE_THRESHOLD 0.5f /* 0 to 1 */ + +static GstStaticPadTemplate gst_iou_tracker_src_template = +GST_STATIC_PAD_TEMPLATE ("src", + GST_PAD_SRC, + GST_PAD_ALWAYS, + GST_STATIC_CAPS ("video/x-raw") + ); + +static GstStaticPadTemplate gst_iou_tracker_sink_template = +GST_STATIC_PAD_TEMPLATE ("sink", + GST_PAD_SINK, + GST_PAD_ALWAYS, + GST_STATIC_CAPS ("video/x-raw") + ); + +typedef struct _BBox +{ + gint x; + gint y; + gint w; + gint h; +} BBox; + +typedef struct _TrackData +{ + guint64 id; + GstClockTime first_seen; // First time object was seen + GstClockTime last_seen; // Last time object was seen + GstClockTime last_tracked; // Last time object was tracked + guint unseen_frame_count; // Consecutive frames the object was not seen + gboolean lost; // Whether the object is lost or not + GQuark obj_type; // The object type from the object detection + GQueue bbqueue; // List of bounding boxes history for the object +} TrackData; + +static void gst_iou_tracker_set_property (GObject * object, + guint prop_id, const GValue * value, GParamSpec * pspec); +static void gst_iou_tracker_get_property (GObject * object, + guint prop_id, GValue * value, GParamSpec * pspec); +static void gst_iou_tracker_finalize (GObject * object); +static GstFlowReturn gst_iou_tracker_transform_ip (GstBaseTransform * + trans, GstBuffer * buf); +static 
gboolean gst_iou_tracker_stop (GstBaseTransform * trans); + +G_DEFINE_TYPE (GstIouTracker, gst_iou_tracker, GST_TYPE_BASE_TRANSFORM); + +static void +gst_iou_tracker_class_init (GstIouTrackerClass * klass) +{ + GObjectClass *gobject_class = (GObjectClass *) klass; + GstElementClass *element_class = (GstElementClass *) klass; + GstBaseTransformClass *basetransform_class = (GstBaseTransformClass *) klass; + + GST_DEBUG_CATEGORY_INIT (iou_tracker_debug, "ioutracker", 0, + "Intersection-over-Union tracker"); + + gobject_class->set_property = gst_iou_tracker_set_property; + gobject_class->get_property = gst_iou_tracker_get_property; + gobject_class->finalize = gst_iou_tracker_finalize; + + /** + * GstIouTracker:iou-score-threshold + * + * The IoU score below which a detection is considered a different object. + * + */ + g_object_class_install_property (G_OBJECT_CLASS (klass), + PROP_IOU_SCORE_THRESHOLD, g_param_spec_float ("iou-score-threshold", + "IoU Score threshold", + "Threshold for deciding whether the object is the same in different frames", + 0.0, 1.0, DEFAULT_IOU_SCORE_THRESHOLD, + G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS)); + + /** + * GstIouTracker:min-frame-count-for-lost-track + * + * Minimum number of consecutive frames in which the object is not seen before its track is marked as lost. 
+ * + */ + g_object_class_install_property (G_OBJECT_CLASS (klass), + PROP_MIN_FRAME_COUNT_FOR_LOST_TRACK, + g_param_spec_uint ("min-frame-count-for-lost-track", + "Min consecutive frame count for lost track", + "Min number of consecutive frames where object is absent before track is considered lost", + 0, G_MAXUINT, DEFAULT_MIN_FRAME_COUNT_FOR_LOST_TRACK, + G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS)); + + gst_element_class_set_static_metadata (element_class, + "Intersection-over-Union (IoU) object tracker", "Analyzer/Video", + "Track the objects across frames based on Intersection-over-Union (IoU)", + "Santosh Mahto <santosh.mahto@collabora.com>"); + gst_element_class_add_pad_template (element_class, + gst_static_pad_template_get (&gst_iou_tracker_sink_template)); + gst_element_class_add_pad_template (element_class, + gst_static_pad_template_get (&gst_iou_tracker_src_template)); + + basetransform_class->transform_ip = + GST_DEBUG_FUNCPTR (gst_iou_tracker_transform_ip); + basetransform_class->stop = gst_iou_tracker_stop; +} + +static void +gst_iou_tracker_track_data_free (TrackData * data) +{ + if (!data) + return; + + g_queue_clear_full (&data->bbqueue, (GDestroyNotify) g_free); + g_free (data); +} + +static gboolean +gst_iou_tracker_stop (GstBaseTransform * trans) +{ + GstIouTracker *self = GST_IOU_TRACKER (trans); + g_hash_table_remove_all (self->picked_odmtds); + g_list_free_full (self->tracks, + (GDestroyNotify) gst_iou_tracker_track_data_free); + self->tracks = NULL; + + return TRUE; +} + +static void +gst_iou_tracker_init (GstIouTracker * self) +{ + self->min_frame_count_for_lost_track = DEFAULT_MIN_FRAME_COUNT_FOR_LOST_TRACK; + self->iou_score_threshold = DEFAULT_IOU_SCORE_THRESHOLD; + self->tracks = NULL; + self->next_track_id = 0; + + self->picked_odmtds = g_hash_table_new_full (g_direct_hash, + g_direct_equal, NULL, NULL); + + gst_base_transform_set_passthrough (GST_BASE_TRANSFORM (self), FALSE); +} + +static void +gst_iou_tracker_finalize (GObject * 
object) +{ + GstIouTracker *self = GST_IOU_TRACKER (object); + g_hash_table_destroy (self->picked_odmtds); + g_list_free_full (self->tracks, + (GDestroyNotify) gst_iou_tracker_track_data_free); + G_OBJECT_CLASS (gst_iou_tracker_parent_class)->finalize (object); +} + +static void +gst_iou_tracker_set_property (GObject * object, guint prop_id, + const GValue * value, GParamSpec * pspec) +{ + GstIouTracker *self = GST_IOU_TRACKER (object); + + switch (prop_id) { + case PROP_IOU_SCORE_THRESHOLD: + self->iou_score_threshold = g_value_get_float (value); + break; + case PROP_MIN_FRAME_COUNT_FOR_LOST_TRACK: + self->min_frame_count_for_lost_track = g_value_get_uint (value); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static void +gst_iou_tracker_get_property (GObject * object, guint prop_id, + GValue * value, GParamSpec * pspec) +{ + GstIouTracker *self = GST_IOU_TRACKER (object); + + switch (prop_id) { + case PROP_IOU_SCORE_THRESHOLD: + g_value_set_float (value, self->iou_score_threshold); + break; + case PROP_MIN_FRAME_COUNT_FOR_LOST_TRACK: + g_value_set_uint (value, self->min_frame_count_for_lost_track); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static gfloat +gst_iou_tracker_get_iou (BBox b1, BBox b2) +{ + return gst_analytics_image_util_iou_float (b1.x, b1.y, b1.w, b1.h, + b2.x, b2.y, b2.w, b2.h); +} + +static GstFlowReturn +gst_iou_tracker_transform_ip (GstBaseTransform * trans, GstBuffer * buf) +{ + GstIouTracker *self = GST_IOU_TRACKER (trans); + GstAnalyticsRelationMeta *rmeta; + GstAnalyticsMtd mtd; + GstClockTime pt = GST_BUFFER_PTS (buf); + GstClockTime running_time; + gpointer state = NULL; + + rmeta = gst_buffer_get_analytics_relation_meta (buf); + + if (!rmeta) { + GST_DEBUG_OBJECT (self, "No GstAnalyticsRelationMeta found in buffer"); + if (self->tracks) { + // Tracking has started, so add an rmeta to allow adding + // TrackingMtd. 
+ rmeta = gst_buffer_add_analytics_relation_meta (buf); + } else { + return GST_FLOW_OK; + } + } + + g_hash_table_remove_all (self->picked_odmtds); + running_time = + gst_segment_to_running_time (&trans->segment, GST_FORMAT_TIME, pt); + + /* + * Iterate over all the existing tracks and update them with new detections. + * When an object is not seen for `min_frame_count_for_lost_track` consecutive frames, + * mark its track as lost and remove it; until then, keep tracking the object, using + * the predicted position as its new position. + */ + GList *track = self->tracks; + while (track) { + TrackData *tdata = (TrackData *) track->data; + GstAnalyticsODMtd nearest_mtd; + GstAnalyticsTrackingMtd tmtd; + gfloat max_iou_score = 0.0f; + gpointer state = NULL; + BBox *cbox; + + // Add the older tracking meta to the relation meta of this new buffer + if (!gst_analytics_relation_meta_add_tracking_mtd (rmeta, + tdata->id, tdata->first_seen, &tmtd)) { + GST_DEBUG_OBJECT (self, "Failed to add tracking meta to relation meta"); + track = track->next; // advance, otherwise this loop never terminates + continue; + } + + gst_analytics_tracking_mtd_update_last_seen (&tmtd, tdata->last_seen); + if (tdata->lost) { + gst_analytics_tracking_mtd_set_lost (&tmtd); + } + + cbox = g_queue_peek_head (&tdata->bbqueue); + // Iterate over od mtds in the current frame and find the latest position of + // the tracked object based on IoU score. + while (gst_analytics_relation_meta_iterate (rmeta, &state, + gst_analytics_od_mtd_get_mtd_type (), &mtd)) { + GstAnalyticsODMtd *od_mtd = (GstAnalyticsODMtd *) & mtd; + BBox odbox; + gfloat iou_score = 0.0f; + + if (g_hash_table_contains (self->picked_odmtds, + GINT_TO_POINTER (od_mtd->id))) + continue; + + /* Different type, ignore it */ + if (tdata->obj_type != gst_analytics_od_mtd_get_obj_type (od_mtd)) + continue; + + gst_analytics_od_mtd_get_location (od_mtd, &odbox.x, &odbox.y, + &odbox.w, &odbox.h, NULL); + + // Note: IoU-based tracking fails when the object's positions don't overlap across frames, since the IoU + // becomes zero. 
This mostly happens when frame rates are low or the object is moving fast. + // This is a known limitation of the current implementation. + iou_score = gst_iou_tracker_get_iou (odbox, *cbox); + if (iou_score > max_iou_score) { + max_iou_score = iou_score; + nearest_mtd = *od_mtd; + } + } + + if (max_iou_score >= self->iou_score_threshold) { + BBox *new_box = g_new0 (BBox, 1); + + gst_analytics_od_mtd_get_location (&nearest_mtd, &new_box->x, + &new_box->y, &new_box->w, &new_box->h, NULL); + + g_queue_push_head (&tdata->bbqueue, new_box); + + tdata->last_seen = running_time; + tdata->last_tracked = running_time; + tdata->unseen_frame_count = 0; + gst_analytics_tracking_mtd_update_last_seen (&tmtd, running_time); + gst_analytics_relation_meta_set_relation (rmeta, + GST_ANALYTICS_REL_TYPE_RELATE_TO, nearest_mtd.id, tmtd.id); + + GST_DEBUG_OBJECT (self, + "Total tracks: %u, Track %" G_GUINT64_FORMAT + " updated with new last seen time: %" + GST_TIME_FORMAT, g_list_length (self->tracks), + tdata->id, GST_TIME_ARGS (tdata->last_seen)); + + g_hash_table_insert (self->picked_odmtds, + GINT_TO_POINTER (nearest_mtd.id), (gpointer) TRUE); + } else { + tdata->unseen_frame_count++; + + // Remove the track once the object has been missing for enough consecutive frames. 
+ if (tdata->unseen_frame_count >= self->min_frame_count_for_lost_track) { + gst_analytics_tracking_mtd_set_lost (&tmtd); + GST_DEBUG_OBJECT (self, "Track %" G_GUINT64_FORMAT " marked as lost", + tdata->id); + GList *nexttrack = track->next; + guint64 trackid = tdata->id; // Logging purpose + guint unseen_count = tdata->unseen_frame_count; // Logging purpose + // Remove current track from the list + // caution: list element is freed within iteration + gst_iou_tracker_track_data_free (tdata); + self->tracks = g_list_delete_link (self->tracks, track); + track = nexttrack; + + GST_DEBUG_OBJECT (self, + "Track %" G_GUINT64_FORMAT " removed after %u unseen frames", + trackid, unseen_count); + + continue; // start next iteration + } else { + // Since the object is not seen in this frame, calculate a predicted position + // based on the previous position change + guint count = tdata->bbqueue.length; + BBox *new_box = g_new0 (BBox, 1); + + BBox *cur_box = g_queue_peek_head (&tdata->bbqueue); + BBox *last_box = g_queue_peek_tail (&tdata->bbqueue); + + new_box->x = cur_box->x + (cur_box->x - last_box->x) / count; + new_box->y = cur_box->y + (cur_box->y - last_box->y) / count; + new_box->w = cur_box->w; + new_box->h = cur_box->h; + g_queue_push_head (&tdata->bbqueue, new_box); + tdata->last_tracked = running_time; + GST_DEBUG_OBJECT (self, "Track %" G_GUINT64_FORMAT + " not updated, but predicted position is (%d, %d, %d, %d)", + tdata->id, new_box->x, new_box->y, new_box->w, new_box->h); + } + } + track = track->next; + } + + // Add new tracks for all the new objects found in detection, so on the first frame + // tracks for all the detections are created. 
+ while (gst_analytics_relation_meta_iterate (rmeta, &state, + gst_analytics_od_mtd_get_mtd_type (), &mtd)) { + GstAnalyticsODMtd *od_mtd = (GstAnalyticsODMtd *) & mtd; + GstAnalyticsTrackingMtd tmtd; + + if (!g_hash_table_contains (self->picked_odmtds, + GINT_TO_POINTER (od_mtd->id))) { + // If the mtd is not picked, it means it is not matched with any track + // and hence it is a new detection + if (!gst_analytics_relation_meta_add_tracking_mtd (rmeta, + self->next_track_id, running_time, &tmtd)) { + GST_ELEMENT_ERROR (self, STREAM, FAILED, (NULL), + ("Failed to add tracking mtd for new track")); + return GST_FLOW_ERROR; + } + + BBox *new_bbox = g_new0 (BBox, 1); + gst_analytics_od_mtd_get_location (od_mtd, &new_bbox->x, &new_bbox->y, + &new_bbox->w, &new_bbox->h, NULL); + + TrackData *new_track_data = g_new0 (TrackData, 1); + g_queue_init (&new_track_data->bbqueue); + g_queue_push_head (&new_track_data->bbqueue, new_bbox); + new_track_data->id = self->next_track_id; + new_track_data->first_seen = running_time; + new_track_data->last_seen = running_time; + new_track_data->last_tracked = running_time; + new_track_data->lost = FALSE; + new_track_data->unseen_frame_count = 0; + new_track_data->obj_type = gst_analytics_od_mtd_get_obj_type (od_mtd); + self->tracks = g_list_append (self->tracks, new_track_data); + GST_DEBUG_OBJECT (self, + "New track created with ID: %" G_GUINT64_FORMAT + ", First Seen: %" GST_TIME_FORMAT, + new_track_data->id, GST_TIME_ARGS (new_track_data->first_seen)); + + if (!gst_analytics_relation_meta_set_relation (rmeta, + GST_ANALYTICS_REL_TYPE_RELATE_TO, od_mtd->id, tmtd.id)) { + GST_ERROR_OBJECT (self, + "Failed to set relation for new track Tracking ID: %u and ODM ID: %u", + self->next_track_id, od_mtd->id); + } + + // Increment only after logging so any error message reports the right id + self->next_track_id++; + } + } + + // picked_odmtds keeps track of the ODMtds matched for a single buffer only, so clear it. + g_hash_table_remove_all (self->picked_odmtds); + + return GST_FLOW_OK; +}
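The matching loop in gstioutracker.c above scores each detection against each track with a plain axis-aligned Intersection-over-Union. The element's own `gst_iou_tracker_get_iou()` is not shown in this diff; the following is a minimal standalone sketch of that score, with `Box` as a stand-in for the internal `BBox` (x, y, w, h in pixels):

```c
#include <assert.h>

/* Stand-in for the tracker's internal BBox. */
typedef struct { int x, y, w, h; } Box;

/* Intersection-over-Union of two axis-aligned boxes, in 0.0 .. 1.0. */
static double box_iou (Box a, Box b)
{
  int ix1 = a.x > b.x ? a.x : b.x;                          /* left edge of intersection */
  int iy1 = a.y > b.y ? a.y : b.y;                          /* top edge */
  int ix2 = (a.x + a.w < b.x + b.w) ? a.x + a.w : b.x + b.w; /* right edge */
  int iy2 = (a.y + a.h < b.y + b.h) ? a.y + a.h : b.y + b.h; /* bottom edge */
  double inter, uni;

  if (ix2 <= ix1 || iy2 <= iy1)
    return 0.0;                                             /* disjoint boxes */

  inter = (double) (ix2 - ix1) * (iy2 - iy1);
  uni = (double) a.w * a.h + (double) b.w * b.h - inter;
  return uni > 0.0 ? inter / uni : 0.0;
}
```

A detection is matched to the track with the highest such score, and accepted only when that score reaches `iou-score-threshold`; two half-overlapping 10x10 boxes, for instance, score 50/150 ≈ 0.33.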
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/tensordecoders/gstioutracker.h
Added
@@ -0,0 +1,73 @@ +/* + * GStreamer gstreamer-ioutracker + * Copyright (C) 2025 Collabora Ltd + * + * gstioutracker.h + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifndef __GST_IOU_TRACKER_H__ +#define __GST_IOU_TRACKER_H__ + +#include <gst/gst.h> +#include <gst/video/video.h> +#include <gst/video/gstvideofilter.h> + +G_BEGIN_DECLS + +#define GST_TYPE_IOU_TRACKER (gst_iou_tracker_get_type()) +G_DECLARE_FINAL_TYPE (GstIouTracker, gst_iou_tracker, GST, IOU_TRACKER, GstBaseTransform) + +/* + * GstIouTracker: + * + * @basetransform: the parent class + * @min_frame_count_for_lost_track: minimum number of consecutive frames where the object is absent before the track is marked lost + * @iou_score_threshold: the threshold for Intersection over Union (IoU) score to consider a detection as a match + * @tracks: a list of current tracks being tracked + * @picked_odmtds: a hash table to keep track of picked object detection metadata + * @next_track_id: the next tracking id to assign to a new track + * + * Since: 1.28 + */ +struct _GstIouTracker +{ + GstBaseTransform basetransform; + guint min_frame_count_for_lost_track; + gfloat iou_score_threshold; + GList *tracks; + GHashTable *picked_odmtds; + guint next_track_id; // Next tracking id to assign +}; + +/** + * GstIouTrackerClass: + 
* + * @parent_class: base transform base class + * + * Since: 1.28 + */ +struct _GstIouTrackerClass +{ + GstBaseTransformClass parent_class; +}; + +GST_ELEMENT_REGISTER_DECLARE (iou_tracker); + +G_END_DECLS + +#endif /* __GST_IOU_TRACKER_H__ */
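When a track has no IoU match in the current frame (and is not yet lost), gstioutracker.c above extrapolates the newest box by the average per-frame displacement between the newest and oldest boxes in the track's history queue, keeping the size unchanged. A hypothetical sketch of that arithmetic, with `PBox` and `predict_next` as illustrative names rather than element API:

```c
#include <assert.h>

/* Mirrors the tracker's internal BBox (x, y, w, h in pixels). */
typedef struct { int x, y, w, h; } PBox;

/* Extrapolate one frame ahead: newest box plus the average per-frame
 * displacement over a history of `history_len` boxes (newest at head,
 * oldest at tail, as in the element's bbqueue). */
static PBox predict_next (PBox newest, PBox oldest, int history_len)
{
  PBox p = newest;              /* size (w, h) is carried over unchanged */
  p.x = newest.x + (newest.x - oldest.x) / history_len;
  p.y = newest.y + (newest.y - oldest.y) / history_len;
  return p;
}
```

So an object that drifted 10 px right over a 5-box history is nudged a further 2 px right, which keeps the IoU match alive across short occlusions.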
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/tensordecoders/gstssdtensordec.c
Added
@@ -0,0 +1,676 @@ +/* + * GStreamer gstreamer-ssdtensordec + * Copyright (C) 2021,2025 Collabora Ltd. + * + * gstssdtensordec.c + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +/** + * SECTION:element-ssdtensordec + * @short_description: Decode tensors from inference using a SSD neural network + * + * This element can parse per-buffer inference tensor meta data generated by an upstream + * inference element + * + * + * ## Example launch command: + * + * Test image file, model file (SSD) and label file can be found here : + * https://gitlab.collabora.com/gstreamer/onnx-models + * + * GST_DEBUG=ssdtensordec:5 \ + * gst-launch-1.0 multifilesrc location=onnx-models/images/bus.jpg ! \ + * jpegdec ! videoconvert ! onnxinference execution-provider=cpu model-file=onnx-models/models/ssd_mobilenet_v1_coco.onnx ! \ + * ssdtensordec label-file=onnx-models/labels/COCO_classes.txt ! videoconvert ! 
autovideosink + * + * Since: 1.28 + */ + +/** + * SECTION:element-ssdobjectdetector + * @short_description: Detect objects in video buffers using SSD neural network + * + * This element can parse per-buffer inference tensor meta data generated by an upstream + * inference element + * + * + * ## Example launch command: + * + * Test image file, model file (SSD) and label file can be found here : + * https://gitlab.collabora.com/gstreamer/onnx-models + * + * GST_DEBUG=ssdtensordec:5 \ + * gst-launch-1.0 multifilesrc location=onnx-models/images/bus.jpg ! \ + * jpegdec ! videoconvert ! onnxinference execution-provider=cpu model-file=onnx-models/models/ssd_mobilenet_v1_coco.onnx ! \ + * ssdtensordec label-file=onnx-models/labels/COCO_classes.txt ! videoconvert ! autovideosink + * + * Since: 1.20 + * Deprecated: 1.28 + */ +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gstssdtensordec.h" + +#include <gio/gio.h> + +#include <gst/gst.h> +#include <gst/video/video.h> +#include <gst/analytics/analytics.h> + +/* Object detection tensor id strings */ +#define GROUP_ID "ssd-mobilenet-v1-variant-1-out" +#define GST_MODEL_OBJECT_DETECTOR_BOXES "ssd-mobilenet-v1-variant-1-out-boxes" +#define GST_MODEL_OBJECT_DETECTOR_SCORES "ssd-mobilenet-v1-variant-1-out-scores" +#define GST_MODEL_OBJECT_DETECTOR_NUM_DETECTIONS "generic-variant-1-out-count" +#define GST_MODEL_OBJECT_DETECTOR_CLASSES "ssd-mobilenet-v1-variant-1-out-classes" + +GST_DEBUG_CATEGORY_STATIC (ssd_tensor_dec_debug); +#define GST_CAT_DEFAULT ssd_tensor_dec_debug +GST_ELEMENT_REGISTER_DEFINE (ssd_tensor_dec, "ssdtensordec", + GST_RANK_SECONDARY, GST_TYPE_SSD_TENSOR_DEC); + +GST_DEBUG_CATEGORY_STATIC (ssd_object_detector_debug); +GST_ELEMENT_REGISTER_DEFINE (ssd_object_detector, "ssdobjectdetector", + GST_RANK_NONE, GST_TYPE_SSD_OBJECT_DETECTOR); +/* GstSsdTensorDec properties */ +enum +{ + PROP_0, + PROP_LABEL_FILE, + PROP_SCORE_THRESHOLD, + PROP_SIZE_THRESHOLD +}; + +#define 
GST_SSD_TENSOR_DEC_DEFAULT_SCORE_THRESHOLD 0.3f /* 0 to 1 */ +#define GST_SSD_TENSOR_DEC_DEFAULT_SIZE_THRESHOLD 0.9f /* 0 to 1 */ + +static GstStaticPadTemplate gst_ssd_tensor_dec_src_template = +GST_STATIC_PAD_TEMPLATE ("src", + GST_PAD_SRC, + GST_PAD_ALWAYS, + GST_STATIC_CAPS ("video/x-raw") + ); + +/* *INDENT-OFF* */ + +static GstStaticPadTemplate gst_ssd_tensor_dec_sink_template = +GST_STATIC_PAD_TEMPLATE ("sink", + GST_PAD_SINK, + GST_PAD_ALWAYS, + GST_STATIC_CAPS ("video/x-raw," + "tensors=(structure)" + "tensorgroups," + GROUP_ID"=(/uniquelist){" + "(GstCaps)" + "tensor/strided," + "tensor-id="GST_MODEL_OBJECT_DETECTOR_NUM_DETECTIONS"," + "dims=(int)<0, max>," + "dims-order=(string)row-major," + "type=(string)float32;," + "(GstCaps)" + "tensor/strided," + "tensor-id="GST_MODEL_OBJECT_DETECTOR_SCORES"," + "dims=(int)<0, max,0>," + "dims-order=(string)row-major," + "type=(string)float32;," + "(GstCaps)" + "tensor/strided," + "tensor-id="GST_MODEL_OBJECT_DETECTOR_BOXES"," + "dims=(int)<0, max,0,4>," + "dims-order=(string)row-major," + "type=(string)float32;," + "(GstCaps)" + "tensor/strided," + "tensor-id="GST_MODEL_OBJECT_DETECTOR_CLASSES"," + "dims=(int)<0, max,0>," + "dims-order=(string)row-major," + "type=(string)float32;" + "}" + ";" + )); + +/* *INDENT-ON* */ + +static void gst_ssd_tensor_dec_set_property (GObject * object, + guint prop_id, const GValue * value, GParamSpec * pspec); +static void gst_ssd_tensor_dec_get_property (GObject * object, + guint prop_id, GValue * value, GParamSpec * pspec); +static void gst_ssd_tensor_dec_finalize (GObject * object); +static GstFlowReturn gst_ssd_tensor_dec_transform_ip (GstBaseTransform * + trans, GstBuffer * buf); +static gboolean gst_ssd_tensor_dec_process (GstBaseTransform * trans, + GstBuffer * buf); +static gboolean +gst_ssd_tensor_dec_set_caps (GstBaseTransform * trans, GstCaps * incaps, + GstCaps * outcaps); + +G_DEFINE_TYPE (GstSsdTensorDec, gst_ssd_tensor_dec, GST_TYPE_BASE_TRANSFORM); + +static void 
+gst_ssd_tensor_dec_class_init (GstSsdTensorDecClass * klass) +{ + GObjectClass *gobject_class = (GObjectClass *) klass; + GstElementClass *element_class = (GstElementClass *) klass; + GstBaseTransformClass *basetransform_class = (GstBaseTransformClass *) klass; + + GST_DEBUG_CATEGORY_INIT (ssd_tensor_dec_debug, "ssdtensordec", + 0, "ssdtensordec"); + gobject_class->set_property = gst_ssd_tensor_dec_set_property; + gobject_class->get_property = gst_ssd_tensor_dec_get_property; + gobject_class->finalize = gst_ssd_tensor_dec_finalize; + + /** + * GstSsdTensorDec:label-file + * + * Label file + * + * Since: 1.24 + */ + g_object_class_install_property (G_OBJECT_CLASS (klass), PROP_LABEL_FILE, + g_param_spec_string ("label-file", + "Label file", "Label file", NULL, (GParamFlags) + (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + /** + * GstSsdTensorDec:score-threshold + * + * Threshold for deciding when to remove boxes based on score + * + * Since: 1.24 + */ + g_object_class_install_property (G_OBJECT_CLASS (klass), PROP_SCORE_THRESHOLD, + g_param_spec_float ("score-threshold", + "Score threshold", + "Threshold for deciding when to remove boxes based on score", + 0.0, 1.0, GST_SSD_TENSOR_DEC_DEFAULT_SCORE_THRESHOLD, (GParamFlags) + (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + /** + * GstSsdTensorDec:size-threshold + * + * Threshold for deciding when to remove boxes based on proportion of the image + * + * Since: 1.26 + */ + g_object_class_install_property (G_OBJECT_CLASS (klass), PROP_SIZE_THRESHOLD, + g_param_spec_float ("size-threshold", + "Size threshold", + "Threshold for deciding when to remove boxes based on proportion of the image", + 0.0, 1.0, GST_SSD_TENSOR_DEC_DEFAULT_SIZE_THRESHOLD, (GParamFlags) + (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + gst_element_class_set_static_metadata (element_class, + "SSD MobileNet Object Detector tensor decoder", + "Tensordecoder/Video", + "Apply tensor output from inference to detect objects in video frames", 
+ "Aaron Boxer <aaron.boxer@collabora.com>, Marcus Edel <marcus.edel@collabora.com>"); + + gst_element_class_add_pad_template (element_class, + gst_static_pad_template_get (&gst_ssd_tensor_dec_sink_template)); + + gst_element_class_add_pad_template (element_class, + gst_static_pad_template_get (&gst_ssd_tensor_dec_src_template)); + basetransform_class->transform_ip = + GST_DEBUG_FUNCPTR (gst_ssd_tensor_dec_transform_ip); + basetransform_class->set_caps = + GST_DEBUG_FUNCPTR (gst_ssd_tensor_dec_set_caps); +} + +static void +gst_ssd_tensor_dec_init (GstSsdTensorDec * self) +{ + self->size_threshold = GST_SSD_TENSOR_DEC_DEFAULT_SIZE_THRESHOLD; + self->score_threshold = GST_SSD_TENSOR_DEC_DEFAULT_SCORE_THRESHOLD; + GST_PAD_UNSET_ACCEPT_INTERSECT (self->basetransform.sinkpad); +} + +static void +gst_ssd_tensor_dec_finalize (GObject * object) +{ + GstSsdTensorDec *self = GST_SSD_TENSOR_DEC (object); + + g_free (self->label_file); + g_clear_pointer (&self->labels, g_array_unref); + + G_OBJECT_CLASS (gst_ssd_tensor_dec_parent_class)->finalize (object); +} + +static GArray * +read_labels (const char *labels_file) +{ + GArray *array; + GFile *file = g_file_new_for_path (labels_file); + GFileInputStream *file_stream; + GDataInputStream *data_stream; + GError *error = NULL; + gchar *line; + + file_stream = g_file_read (file, NULL, &error); + g_object_unref (file); + if (!file_stream) { + GST_WARNING ("Could not open file %s: %s", labels_file, error->message); + g_clear_error (&error); + return NULL; + } + + data_stream = g_data_input_stream_new (G_INPUT_STREAM (file_stream)); + g_object_unref (file_stream); + + array = g_array_new (FALSE, FALSE, sizeof (GQuark)); + + while ((line = g_data_input_stream_read_line (data_stream, NULL, NULL, + &error))) { + GQuark label = g_quark_from_string (line); + g_array_append_val (array, label); + g_free (line); + } + + g_object_unref (data_stream); + + if (error) { + GST_WARNING ("Failed to read file %s: %s", labels_file, error->message); 
+ g_array_free (array, TRUE); + g_clear_error (&error); + return NULL; + } + + if (array->len == 0) { + g_array_free (array, TRUE); + return NULL; + } + + return array; +} + +static void +gst_ssd_tensor_dec_set_property (GObject * object, guint prop_id, + const GValue * value, GParamSpec * pspec) +{ + GstSsdTensorDec *self = GST_SSD_TENSOR_DEC (object); + const gchar *filename; + + switch (prop_id) { + case PROP_LABEL_FILE: + { + GArray *labels; + + filename = g_value_get_string (value); + labels = read_labels (filename); + + if (labels) { + g_free (self->label_file); + self->label_file = g_strdup (filename); + g_clear_pointer (&self->labels, g_array_unref); + self->labels = labels; + } else { + GST_WARNING_OBJECT (self, "Label file '%s' not found!", filename); + } + } + break; + case PROP_SCORE_THRESHOLD: + GST_OBJECT_LOCK (self); + self->score_threshold = g_value_get_float (value); + GST_OBJECT_UNLOCK (self); + break; + case PROP_SIZE_THRESHOLD: + GST_OBJECT_LOCK (self); + self->size_threshold = g_value_get_float (value); + GST_OBJECT_UNLOCK (self); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static void +gst_ssd_tensor_dec_get_property (GObject * object, guint prop_id, + GValue * value, GParamSpec * pspec) +{ + GstSsdTensorDec *self = GST_SSD_TENSOR_DEC (object); + + switch (prop_id) { + case PROP_LABEL_FILE: + g_value_set_string (value, self->label_file); + break; + case PROP_SCORE_THRESHOLD: + GST_OBJECT_LOCK (self); + g_value_set_float (value, self->score_threshold); + GST_OBJECT_UNLOCK (self); + break; + case PROP_SIZE_THRESHOLD: + GST_OBJECT_LOCK (self); + g_value_set_float (value, self->size_threshold); + GST_OBJECT_UNLOCK (self); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static gboolean +gst_ssd_tensor_dec_get_tensors (GstSsdTensorDec * object_detector, + GstBuffer * buf, const GstTensor ** classes_tensor, + const GstTensor ** 
numdetect_tensor, const GstTensor ** scores_tensor, + const GstTensor ** boxes_tensor) +{ + GstMeta *meta = NULL; + gpointer iter_state = NULL; + static const gsize BOXES_DIMS[] = { 1, G_MAXSIZE, 4 }; + static const gsize NUM_DETECT_DIMS[] = { 1 }; + static const gsize SCORES_CLASSES_DIMS[] = { 1, G_MAXSIZE }; + + if (!gst_buffer_get_meta (buf, GST_TENSOR_META_API_TYPE)) { + GST_DEBUG_OBJECT (object_detector, + "missing tensor meta from buffer %" GST_PTR_FORMAT, buf); + return FALSE; + } + + // find object detector meta + + while ((meta = gst_buffer_iterate_meta_filtered (buf, &iter_state, + GST_TENSOR_META_API_TYPE))) { + GstTensorMeta *tmeta = (GstTensorMeta *) meta; + + *boxes_tensor = gst_tensor_meta_get_typed_tensor (tmeta, + g_quark_from_static_string (GST_MODEL_OBJECT_DETECTOR_BOXES), + GST_TENSOR_DATA_TYPE_FLOAT32, GST_TENSOR_DIM_ORDER_ROW_MAJOR, 3, + BOXES_DIMS); + if (*boxes_tensor == NULL) + *boxes_tensor = gst_tensor_meta_get_typed_tensor (tmeta, + g_quark_from_static_string (GST_MODEL_OBJECT_DETECTOR_BOXES), + GST_TENSOR_DATA_TYPE_UINT32, GST_TENSOR_DIM_ORDER_ROW_MAJOR, 3, + BOXES_DIMS); + if (*boxes_tensor == NULL) + continue; + + *scores_tensor = gst_tensor_meta_get_typed_tensor (tmeta, + g_quark_from_static_string (GST_MODEL_OBJECT_DETECTOR_SCORES), + GST_TENSOR_DATA_TYPE_FLOAT32, GST_TENSOR_DIM_ORDER_ROW_MAJOR, 2, + SCORES_CLASSES_DIMS); + if (*scores_tensor == NULL) + *scores_tensor = gst_tensor_meta_get_typed_tensor (tmeta, + g_quark_from_static_string (GST_MODEL_OBJECT_DETECTOR_SCORES), + GST_TENSOR_DATA_TYPE_UINT32, GST_TENSOR_DIM_ORDER_ROW_MAJOR, 2, + SCORES_CLASSES_DIMS); + if (*scores_tensor == NULL) + continue; + + *numdetect_tensor = gst_tensor_meta_get_typed_tensor (tmeta, + g_quark_from_static_string (GST_MODEL_OBJECT_DETECTOR_NUM_DETECTIONS), + GST_TENSOR_DATA_TYPE_FLOAT32, GST_TENSOR_DIM_ORDER_ROW_MAJOR, 1, + NUM_DETECT_DIMS); + if (*numdetect_tensor == NULL) + *numdetect_tensor = gst_tensor_meta_get_typed_tensor (tmeta, + 
g_quark_from_static_string (GST_MODEL_OBJECT_DETECTOR_NUM_DETECTIONS), + GST_TENSOR_DATA_TYPE_UINT32, GST_TENSOR_DIM_ORDER_ROW_MAJOR, 1, + NUM_DETECT_DIMS); + if (*numdetect_tensor == NULL) + continue; + + *classes_tensor = gst_tensor_meta_get_typed_tensor (tmeta, + g_quark_from_static_string (GST_MODEL_OBJECT_DETECTOR_CLASSES), + GST_TENSOR_DATA_TYPE_FLOAT32, GST_TENSOR_DIM_ORDER_ROW_MAJOR, 2, + SCORES_CLASSES_DIMS); + if (*classes_tensor == NULL) + *classes_tensor = gst_tensor_meta_get_typed_tensor (tmeta, + g_quark_from_static_string (GST_MODEL_OBJECT_DETECTOR_CLASSES), + GST_TENSOR_DATA_TYPE_UINT32, GST_TENSOR_DIM_ORDER_ROW_MAJOR, 2, + SCORES_CLASSES_DIMS); + + return TRUE; + } + + return FALSE; +} + +static gboolean +gst_ssd_tensor_dec_set_caps (GstBaseTransform * trans, GstCaps * incaps, + GstCaps * outcaps) +{ + GstSsdTensorDec *self = GST_SSD_TENSOR_DEC (trans); + + if (!gst_video_info_from_caps (&self->video_info, incaps)) { + GST_ERROR_OBJECT (self, "Failed to parse caps"); + return FALSE; + } + + return TRUE; +} + +static GstFlowReturn +gst_ssd_tensor_dec_transform_ip (GstBaseTransform * trans, GstBuffer * buf) +{ + if (!gst_base_transform_is_passthrough (trans)) { + if (!gst_ssd_tensor_dec_process (trans, buf)) { + GST_ELEMENT_ERROR (trans, STREAM, FAILED, + (NULL), ("ssd object detection failed")); + return GST_FLOW_ERROR; + } + } + + return GST_FLOW_OK; +} + +#define DEFINE_GET_FUNC(TYPE, MAX) \ + static gboolean \ + get_ ## TYPE ## _at_index (const GstTensor *tensor, GstMapInfo *map, \ + guint index, TYPE * out) \ + { \ + switch (tensor->data_type) { \ + case GST_TENSOR_DATA_TYPE_FLOAT32: { \ + float *f = (float *) map->data; \ + if (sizeof(*f) * (index + 1) > map->size) \ + return FALSE; \ + *out = f[index]; \ + break; \ + } \ + case GST_TENSOR_DATA_TYPE_UINT32: { \ + guint32 *u = (guint32 *) map->data; \ + if (sizeof(*u) * (index + 1) > map->size) \ + return FALSE; \ + *out = u[index]; \ + break; \ + } \ + default: \ + GST_ERROR ("Only float32 and 
int32 tensors are understood"); \ + return FALSE; \ + } \ + return TRUE; \ + } + +DEFINE_GET_FUNC (guint32, UINT32_MAX); +DEFINE_GET_FUNC (float, FLOAT_MAX); +#undef DEFINE_GET_FUNC + +static void +extract_bounding_boxes (GstSsdTensorDec * self, gsize w, gsize h, + GstAnalyticsRelationMeta * rmeta, const GstTensor * classes_tensor, + const GstTensor * numdetect_tensor, const GstTensor * scores_tensor, + const GstTensor * boxes_tensor) +{ + GstMapInfo boxes_map = GST_MAP_INFO_INIT; + GstMapInfo numdetect_map = GST_MAP_INFO_INIT; + GstMapInfo scores_map = GST_MAP_INFO_INIT; + GstMapInfo classes_map = GST_MAP_INFO_INIT; + + guint num_detections = 0; + + if (numdetect_tensor == NULL || scores_tensor == NULL || boxes_tensor == NULL) { + GST_WARNING ("Missing tensor data expected for SSD model"); + return; + } + + if (!gst_buffer_map (numdetect_tensor->data, &numdetect_map, GST_MAP_READ)) { + GST_ERROR_OBJECT (self, "Failed to map numdetect tensor memory"); + goto cleanup; + } + + if (!gst_buffer_map (boxes_tensor->data, &boxes_map, GST_MAP_READ)) { + GST_ERROR_OBJECT (self, "Failed to map boxes tensor memory"); + goto cleanup; + } + + if (!gst_buffer_map (scores_tensor->data, &scores_map, GST_MAP_READ)) { + GST_ERROR_OBJECT (self, "Failed to map scores tensor memory"); + goto cleanup; + } + + if (classes_tensor && + !gst_buffer_map (classes_tensor->data, &classes_map, GST_MAP_READ)) { + GST_DEBUG_OBJECT (self, "Failed to map classes tensor memory"); + goto cleanup; + } + + + if (!get_guint32_at_index (numdetect_tensor, &numdetect_map, + 0, &num_detections)) { + GST_ERROR_OBJECT (self, "Failed to get the number of detections"); + goto cleanup; + } + + + GST_LOG_OBJECT (self, "Model claims %u detections", num_detections); + num_detections = MIN (num_detections, scores_tensor->dims[1]); + num_detections = MIN (num_detections, boxes_tensor->dims[1]); + if (classes_tensor) + num_detections = MIN (num_detections, classes_tensor->dims[1]); + GST_LOG_OBJECT (self, "Model really has 
%u detections" + " (%zu scores, %zu boxes, %zu classes)", num_detections, + scores_tensor->dims[1], boxes_tensor->dims[1], + classes_tensor ? classes_tensor->dims[1] : 0); + + for (guint i = 0; i < num_detections; i++) { + float score; + float x, y, bwidth, bheight; + gint x_i, y_i, bwidth_i, bheight_i; + guint32 bclass = 0; + GQuark label = 0; + GstAnalyticsODMtd odmtd; + + if (!get_float_at_index (scores_tensor, &scores_map, i, &score)) + continue; + + GST_LOG_OBJECT (self, "Detection %u score is %f", i, score); + if (score < self->score_threshold) + continue; + + if (!get_float_at_index (boxes_tensor, &boxes_map, i * 4, &y)) + continue; + if (!get_float_at_index (boxes_tensor, &boxes_map, i * 4 + 1, &x)) + continue; + if (!get_float_at_index (boxes_tensor, &boxes_map, i * 4 + 2, &bheight)) + continue; + if (!get_float_at_index (boxes_tensor, &boxes_map, i * 4 + 3, &bwidth)) + continue; + + if (CLAMP (bwidth, 0, 1) * CLAMP (bheight, 0, 1) > self->size_threshold) { + GST_LOG_OBJECT (self, "Object at (%fx%f)=%f > %f, skipping", + CLAMP (bwidth, 0, 1), CLAMP (bheight, 0, 1), + CLAMP (bwidth, 0, 1) * CLAMP (bheight, 0, 1), self->size_threshold); + continue; + } + + if (self->labels && classes_map.memory && + get_guint32_at_index (classes_tensor, &classes_map, i, &bclass)) { + if (bclass < self->labels->len) + label = g_array_index (self->labels, GQuark, bclass); + } + + x_i = x * w; + y_i = y * h; + bheight_i = (bheight * h) - y_i; + bwidth_i = (bwidth * w) - x_i; + + if (gst_analytics_relation_meta_add_od_mtd (rmeta, label, + x_i, y_i, bwidth_i, bheight_i, score, &odmtd)) + GST_DEBUG_OBJECT (self, + "Object detected with label[%u]: %s, score: %f, bounding box: %dx%d at (%d,%d)", + bclass, g_quark_to_string (label), score, bwidth_i, bheight_i, x_i, + y_i); + else + GST_WARNING_OBJECT (self, "Could not add detection to meta"); + } + +cleanup: + + if (numdetect_map.memory) + gst_buffer_unmap (numdetect_tensor->data, &numdetect_map); + if (classes_map.memory) + gst_buffer_unmap 
(classes_tensor->data, &classes_map); + if (scores_map.memory) + gst_buffer_unmap (scores_tensor->data, &scores_map); + if (boxes_map.memory) + gst_buffer_unmap (boxes_tensor->data, &boxes_map); +} + + +static gboolean +gst_ssd_tensor_dec_process (GstBaseTransform * trans, GstBuffer * buf) +{ + GstSsdTensorDec *self = GST_SSD_TENSOR_DEC (trans); + GstAnalyticsRelationMeta *rmeta; + const GstTensor *classes_tensor = NULL; + const GstTensor *numdetect_tensor = NULL; + const GstTensor *scores_tensor = NULL; + const GstTensor *boxes_tensor = NULL; + + // get all tensor metas + if (!gst_ssd_tensor_dec_get_tensors (self, buf, + &classes_tensor, &numdetect_tensor, &scores_tensor, &boxes_tensor)) { + GST_WARNING_OBJECT (trans, "missing tensor meta"); + return TRUE; + } else { + rmeta = gst_buffer_add_analytics_relation_meta (buf); + g_assert (rmeta); + } + + extract_bounding_boxes (self, self->video_info.width, + self->video_info.height, rmeta, classes_tensor, numdetect_tensor, + scores_tensor, boxes_tensor); + + return TRUE; +} + +G_DEFINE_TYPE (GstSsdObjectDetector, gst_ssd_object_detector, + GST_TYPE_SSD_TENSOR_DEC); + +static void +gst_ssd_object_detector_class_init (GstSsdObjectDetectorClass * klass) +{ + GST_DEBUG_CATEGORY_INIT (ssd_object_detector_debug, "ssdobjectdetector", + 0, "ssd object detector category"); + + gst_element_class_set_static_metadata (GST_ELEMENT_CLASS (klass), + "SSD MobileNet Object Detector tensor decoder (Deprecated)", + "Tensordecoder/Video", + "Apply tensor output from inference to detect objects in video frames", + "Aaron Boxer <aaron.boxer@collabora.com>, Marcus Edel <marcus.edel@collabora.com>"); + + gst_type_mark_as_plugin_api (GST_TYPE_SSD_TENSOR_DEC, 0); +} + +static void +gst_ssd_object_detector_init (GstSsdObjectDetector * self) +{ + GST_CAT_WARNING (ssd_object_detector_debug, "ssdobjectdetector is " + "deprecated, use ssdtensordec instead"); +}
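The coordinate mapping inside `extract_bounding_boxes()` scales SSD's normalized `[ymin, xmin, ymax, xmax]` output (each component in 0..1) to integer pixel rectangles against the frame size. A minimal sketch of that step; `ssd_box_to_pixels` is an illustrative name, not part of the element:

```c
#include <assert.h>

/* Map one normalized SSD box to an integer pixel rect, mirroring the
 * arithmetic in extract_bounding_boxes(): the min edges give the top-left
 * corner, the max edges are scaled first and the corner is subtracted. */
static void ssd_box_to_pixels (float ymin, float xmin, float ymax, float xmax,
    int img_w, int img_h, int *x, int *y, int *w, int *h)
{
  *x = (int) (xmin * img_w);        /* left edge in pixels */
  *y = (int) (ymin * img_h);        /* top edge in pixels */
  *w = (int) (xmax * img_w) - *x;   /* width from scaled right edge */
  *h = (int) (ymax * img_h) - *y;   /* height from scaled bottom edge */
}
```

For a 640x480 frame, a box of `[0.25, 0.5, 0.75, 1.0]` becomes a 320x240 rect at (320, 120), which is what lands in the `GstAnalyticsODMtd`.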
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/tensordecoders/gstssdtensordec.h
Added
@@ -0,0 +1,120 @@ +/* + * GStreamer gstreamer-ssdtensordec + * Copyright (C) 2021,2025 Collabora Ltd + * + * gstssdtensordec.h + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifndef __GST_SSD_TENSOR_DEC_H__ +#define __GST_SSD_TENSOR_DEC_H__ + +#include <gst/gst.h> +#include <gst/video/video.h> +#include <gst/video/gstvideofilter.h> + +G_BEGIN_DECLS + +GType gst_ssd_tensor_dec_get_type (void); + +#define GST_TYPE_SSD_TENSOR_DEC (gst_ssd_tensor_dec_get_type()) +#define GST_SSD_TENSOR_DEC(obj) \ + (G_TYPE_CHECK_INSTANCE_CAST ((obj), GST_TYPE_SSD_TENSOR_DEC, GstSsdTensorDec)) +#define GST_SSD_TENSOR_DEC_CLASS(klass) \ + (G_TYPE_CHECK_CLASS_CAST ((klass), GST_TYPE_SSD_TENSOR_DEC, GstSsdTensorDecClass)) +#define GST_IS_SSD_TENSOR_DEC(obj) \ + (G_TYPE_CHECK_INSTANCE_TYPE ((obj), GST_TYPE_SSD_TENSOR_DEC)) +#define GST_IS_SSD_TENSOR_DEC_CLASS(klass) \ + (G_TYPE_CHECK_CLASS_TYPE ((klass), GST_TYPE_SSD_TENSOR_DEC)) +#define GST_SSD_TENSOR_DEC_GET_CLASS(obj) \ + (G_TYPE_INSTANCE_GET_CLASS ((obj), GST_TYPE_SSD_TENSOR_DEC, GstSsdTensorDecClass)) + +typedef struct _GstSsdTensorDec GstSsdTensorDec; +typedef struct _GstSsdTensorDecClass GstSsdTensorDecClass; + + +#define GST_SSD_TENSOR_DEC_META_NAME "ssd-tensor-dec" +#define GST_SSD_TENSOR_DEC_META_PARAM_NAME "extra-data" 
+#define GST_SSD_TENSOR_DEC_META_FIELD_LABEL "label" +#define GST_SSD_TENSOR_DEC_META_FIELD_SCORE "score" + +/** + * GstSsdTensorDec: + * + * @label_file: label file + * @score_threshold: score threshold + * + * Since: 1.20 + */ +struct _GstSsdTensorDec +{ + GstBaseTransform basetransform; + gchar *label_file; + GArray *labels; + gfloat score_threshold; + gfloat size_threshold; + GstVideoInfo video_info; +}; + +/** + * GstSsdTensorDecClass: + * + * @parent_class: base transform base class + * + * Since: 1.20 + */ +struct _GstSsdTensorDecClass +{ + GstBaseTransformClass parent_class; +}; + +GST_ELEMENT_REGISTER_DECLARE (ssd_tensor_dec) + +G_DEFINE_AUTOPTR_CLEANUP_FUNC (GstSsdTensorDec, g_object_unref) + +#define GST_TYPE_SSD_OBJECT_DETECTOR (gst_ssd_object_detector_get_type()) +G_DECLARE_FINAL_TYPE (GstSsdObjectDetector, gst_ssd_object_detector, GST, SSD_OBJECT_DETECTOR, GstSsdTensorDec) + +/** + * GstSsdObjectDetector: + * + * Since: 1.20 + * Deprecated: 1.28: Use GstSsdTensorDec instead. + */ +struct _GstSsdObjectDetector +{ + GstSsdTensorDec parent; +}; + +/** + * GstSsdObjectDetectorClass: + * + * @parent_class: base transform base class + * + * Since: 1.20 + * Deprecated: 1.28: Use GstSsdTensorDecClass instead. + */ +struct _GstSsdObjectDetectorClass +{ + GstSsdTensorDecClass parent_class; +}; + +GST_ELEMENT_REGISTER_DECLARE (ssd_object_detector) + +G_END_DECLS + +#endif /* __GST_SSD_TENSOR_DEC_H__ */
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/tensordecoders/gsttensordecodebin.c
Added
@@ -0,0 +1,595 @@ +/* GStreamer object detection overlay + * Copyright (C) <2024, 2025> Collabora Ltd. + * @author: Daniel Morin <daniel.morin@collabora.com> + * + * gsttensordecodebin.c + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +/** + * SECTION:element-tensordecodebin + * @short_description: Find and instantiate a compatible tensor decoder + * + * This element instantiates a tensor decoder compatible with the upstream caps. + * + * ## Example launch command: + * | + * gst-launch-1.0 filesrc location=/onnx-models/images/bus.jpg + * ! jpegdec ! videoconvert ! onnxinference execution-provider=cpu + * model-file=/onnx-models/models/ssd_mobilenet_v1_coco.onnx + * ! tensordecodebin ! objectdetectionoverlay ! videoconvert ! imagefreeze + * ! autovideosink + * | Assuming the model is an object detection model, this pipeline will instantiate + * a tensor decoder compatible with the upstream tensor caps. 
+ * + * Since: 1.28 + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gsttensordecodebin.h" + +#include <gst/gst.h> + + +GST_DEBUG_CATEGORY_STATIC (tensordecodebin_debug); +#define GST_CAT_DEFAULT tensordecodebin_debug + +GST_ELEMENT_REGISTER_DEFINE (tensordecodebin, "tensordecodebin", + GST_RANK_NONE, GST_TYPE_TENSORDECODEBIN); + +static GstStaticPadTemplate gst_tensordecodebin_sink_template = +GST_STATIC_PAD_TEMPLATE ("sink", + GST_PAD_SINK, + GST_PAD_ALWAYS, + GST_STATIC_CAPS_ANY); + +static GstStaticPadTemplate gst_tensordecodebin_src_template = +GST_STATIC_PAD_TEMPLATE ("src", + GST_PAD_SRC, + GST_PAD_ALWAYS, + GST_STATIC_CAPS_ANY); + +static gboolean +gst_tensordecodebin_sink_query (GstPad * pad, GstObject * parent, GstQuery * + query); + +static gboolean +gst_tensordecodebin_sink_event (GstPad * pad, GstObject * parent, GstEvent * + event); + +static void gst_tensordecodebin_finalize (GObject * object); + +G_DEFINE_TYPE (GstTensorDecodeBin, gst_tensordecodebin, GST_TYPE_BIN); + +static void +gst_tensordecodebin_class_init (GstTensorDecodeBinClass * klass) +{ + GObjectClass *gobject_class = (GObjectClass *) klass; + GstElementClass *element_class = GST_ELEMENT_CLASS (klass); + + GST_DEBUG_CATEGORY_INIT (tensordecodebin_debug, "tensordecodebin", 0, + "Tensor decode bin"); + + /* Element description. 
*/ + gst_element_class_set_static_metadata (element_class, "tensordecodebin", + "Tensor Decoder Bin", + "Tensor Decode Bin", "Daniel Morin <daniel.morin@collabora.com>"); + + gst_element_class_add_pad_template (element_class, + gst_static_pad_template_get (&gst_tensordecodebin_src_template)); + + gst_element_class_add_pad_template (element_class, + gst_static_pad_template_get (&gst_tensordecodebin_sink_template)); + + gobject_class->finalize = gst_tensordecodebin_finalize; +} + +static void +gst_tensordecodebin_init (GstTensorDecodeBin * self) +{ + GstPadTemplate *pad_tmpl; + pad_tmpl = gst_static_pad_template_get (&gst_tensordecodebin_sink_template); + self->sinkpad = gst_ghost_pad_new_no_target_from_template ("sink", pad_tmpl); + + gst_clear_object (&pad_tmpl); + pad_tmpl = gst_static_pad_template_get (&gst_tensordecodebin_src_template); + self->srcpad = gst_ghost_pad_new_no_target_from_template ("src", pad_tmpl); + gst_clear_object (&pad_tmpl); + + self->last_event_caps = NULL; + self->factories_cookie = 0; + self->aggregated_caps = NULL; + + gst_pad_set_query_function (self->sinkpad, gst_tensordecodebin_sink_query); + gst_pad_set_event_function_full (self->sinkpad, + gst_tensordecodebin_sink_event, self, NULL); + + gst_element_add_pad (GST_ELEMENT (self), self->sinkpad); + gst_element_add_pad (GST_ELEMENT (self), self->srcpad); +} + +static void +gst_tensordecodebin_finalize (GObject * object) +{ + GstTensorDecodeBin *self = GST_TENSORDECODEBIN (object); + g_list_free_full (self->tensordec_factories, gst_object_unref); + gst_clear_caps (&self->last_event_caps); + gst_clear_caps (&self->aggregated_caps); + G_OBJECT_CLASS (gst_tensordecodebin_parent_class)->finalize (object); +} + +static gboolean +decoder_filter (GstPluginFeature * feature, GstTensorDecodeBin * self) +{ + const gchar *klass; + GstElementFactory *fact; + GstElementClass *self_class = GST_ELEMENT_GET_CLASS (self); + + /* we only care about element factories */ + if (G_UNLIKELY 
(!GST_IS_ELEMENT_FACTORY (feature))) + return FALSE; + + fact = GST_ELEMENT_FACTORY_CAST (feature); + + klass = gst_element_factory_get_metadata (fact, GST_ELEMENT_METADATA_KLASS); + + /* Filter on Tensordecoder Klass */ + if (strstr (klass, "Tensordecoder")) { + + /* Skip ourselves */ + if (fact == self_class->elementfactory) + return FALSE; + + /* Only keep elements with a rank of marginal or above */ + if (gst_plugin_feature_get_rank (feature) < GST_RANK_MARGINAL) + return FALSE; + + GST_DEBUG_OBJECT (self, "adding %s factory", GST_OBJECT_NAME (fact)); + return TRUE; + } + + return FALSE; +} + +static GList * +gst_tensordecodebin_get_or_load_tensordec_factories_unlocked (GstTensorDecodeBin + * self) +{ + guint32 cookie; + GList *all_tensordec_factories; + + cookie = gst_registry_get_feature_list_cookie (gst_registry_get ()); + if (!self->tensordec_factories || self->factories_cookie != cookie) { + + if (self->tensordec_factories) + g_list_free_full (self->tensordec_factories, gst_object_unref); + + all_tensordec_factories = + g_list_sort (gst_registry_feature_filter (gst_registry_get (), + (GstPluginFeatureFilter) decoder_filter, FALSE, self), + (GCompareFunc) gst_plugin_feature_rank_compare_func); + + self->tensordec_factories = all_tensordec_factories; + self->factories_cookie = cookie; + gst_clear_caps (&self->aggregated_caps); + } + + return g_list_copy_deep (self->tensordec_factories, + (GCopyFunc) gst_object_ref, NULL); +} + +static GList * +gst_tensordecodebin_get_or_load_tensordec_factories (GstTensorDecodeBin * self) +{ + GList *factories; + GST_OBJECT_LOCK (self); + factories = + gst_tensordecodebin_get_or_load_tensordec_factories_unlocked (self); + GST_OBJECT_UNLOCK (self); + return factories; +} + +static void +_remove_all_elements (GstBin * bin) +{ + GstElement *e; + GList *childs, *l; + + GST_DEBUG_OBJECT (bin, "Removing all children"); + + GST_OBJECT_LOCK (bin); + childs = g_list_copy_deep (bin->children, (GCopyFunc) gst_object_ref, NULL); + 
GST_OBJECT_UNLOCK (bin); + + l = childs; + while (l) { + GST_TRACE_OBJECT (bin, "Removing child %p", l->data); + e = l->data; + gst_bin_remove (bin, e); + gst_element_set_state (GST_ELEMENT (e), GST_STATE_NULL); + l = l->next; + } + g_list_free_full (childs, gst_object_unref); +} + +static GstPadTemplate * +_get_compatible_sinkpad_template (GstTensorDecodeBin * self, + GstElementFactory * factory) +{ + const GList *tpls; + GstPadTemplate *tpl; + const GList *fact_tpls; + GstPadTemplate *compa_sinkpad_tpl = NULL; + guint16 num_compa_sinkpad_template = 0, num_compa_srcpad_template = 0; + + fact_tpls = gst_element_factory_get_static_pad_templates (factory); + for (tpls = fact_tpls; tpls; tpls = tpls->next) { + tpl = gst_static_pad_template_get (tpls->data); + + /* FIXME: Add support for Request pads and Sometimes pads */ + if (tpl->presence != GST_PAD_ALWAYS) { + GST_WARNING_OBJECT (self, "Tensor decoder %s has a %s pad, which is " + "not currently supported by tensordecodebin and is ignored.", + gst_element_factory_get_metadata (factory, + GST_ELEMENT_METADATA_LONGNAME), + tpl->presence == GST_PAD_REQUEST ? 
"request" : "sometimes"); + + /* Skip this template */ + gst_clear_object (&tpl); + continue; + } + + if (tpl->direction == GST_PAD_SINK) { + num_compa_sinkpad_template++; + + if (num_compa_sinkpad_template == 1) + compa_sinkpad_tpl = gst_object_ref (tpl); + + } else if (tpl->direction == GST_PAD_SRC) { + num_compa_srcpad_template++; + } else { + GST_WARNING_OBJECT (self, + "Tensor decoder %s has a pad template with UNKNOWN direction," + " skipping this template.", gst_element_factory_get_metadata (factory, + GST_ELEMENT_METADATA_LONGNAME)); + + /* Skip this template */ + gst_clear_object (&tpl); + continue; + } + gst_clear_object (&tpl); + } + + /* FIXME: Add support for tensor decoders with multiple sinkpads and/or + * srcpads */ + if (num_compa_sinkpad_template != 1 || num_compa_srcpad_template != 1) { + GST_WARNING_OBJECT (self, + "tensordecodebin only supports tensor decoders with 1 always" + " sinkpad and 1 always srcpad, but %s has %u sinkpads and %u srcpads and will not be considered", + gst_element_factory_get_metadata (factory, + GST_ELEMENT_METADATA_LONGNAME), num_compa_sinkpad_template, + num_compa_srcpad_template); + gst_clear_object (&compa_sinkpad_tpl); + } + + return compa_sinkpad_tpl; +} + +static gboolean +gst_tensordecodebin_sink_caps_event (GstTensorDecodeBin * self, GstCaps * ecaps) +{ + gboolean ret = TRUE; + GstElement *e = NULL; + GstPad *sinkpad = NULL, *srcpad = NULL; + GstElementFactory *factory; + GList *factories = NULL; + GstCaps *tplcaps; + + const GstStructure *s; + const GValue *v; + + gst_ghost_pad_set_target (GST_GHOST_PAD (self->sinkpad), NULL); + gst_ghost_pad_set_target (GST_GHOST_PAD (self->srcpad), NULL); + _remove_all_elements (GST_BIN (self)); + gst_caps_replace (&self->last_event_caps, ecaps); + + /* Check that every tensor group can be handled by a tensor decoder */ + s = gst_caps_get_structure (ecaps, 0); + v = gst_structure_get_value (s, "tensors"); + + if (v == NULL) { + /* No tensor caps, we don't need any tensor decoder */ + 
GST_INFO_OBJECT (self, "No tensor caps in, tensordecodebin will be " + "passthrough"); + e = gst_element_factory_make ("identity", NULL); + if (!gst_bin_add (GST_BIN (self), e)) { + GST_ERROR_OBJECT (self, "Failed to add identity"); + goto fail; + } else { + sinkpad = gst_element_get_static_pad (e, "sink"); + if (!gst_ghost_pad_set_target (GST_GHOST_PAD (self->sinkpad), sinkpad)) { + GST_ERROR_OBJECT (self, "Failed to set sinkpad target to " + "identity.sinkpad"); + goto fail; + } + + gst_clear_object (&sinkpad); + srcpad = gst_element_get_static_pad (e, "src"); + gst_element_sync_state_with_parent (e); + + goto done; + } + } + + /* NOTE: tensordecodebin assumes that tensordecoder does not modify the media + * or the capabilities. This is not a fundamental limitation of tensor + * capabilities but rather a limitation of the current tensordecodebin + * implementation. To implement support for tensordecoder-induced capability + * changes, we would need to maintain a full history of transformations. Currently, + * tensordecoder assumes the tensor was produced by inference on the attached + * media. However, this assumption will not hold if tensordecoder can modify + * media. Consequently, a tensordecoder following one that changes media would + * need to retrieve media details from the time the inference produced the + * tensor being decoded. 
+ */ + factories = gst_tensordecodebin_get_or_load_tensordec_factories (self); + for (GList * f = factories; f; f = g_list_next (f)) { + GstPadTemplate *compa_sinkpad_tpl = NULL; + factory = GST_ELEMENT_FACTORY (f->data); + compa_sinkpad_tpl = _get_compatible_sinkpad_template (self, factory); + + if (compa_sinkpad_tpl == NULL) + continue; + + tplcaps = gst_pad_template_get_caps (compa_sinkpad_tpl); + + /* Check if sinkpad has at least a tensors field */ + s = gst_caps_get_structure (tplcaps, 0); + if (!gst_structure_has_field (s, "tensors")) { + GST_WARNING_OBJECT (self, + "Element from %s factory has no tensors capabilities", + gst_element_factory_get_longname (factory)); + gst_clear_caps (&tplcaps); + gst_clear_object (&compa_sinkpad_tpl); + continue; + } + + if (gst_caps_is_subset (ecaps, tplcaps)) { + gst_clear_caps (&tplcaps); + + e = gst_element_factory_create (factory, NULL); + sinkpad = + gst_element_get_static_pad (e, compa_sinkpad_tpl->name_template); + if (!sinkpad) { + GST_WARNING_OBJECT (self, "Element %p from %s factory has no sinkpad", + e, gst_element_factory_get_longname (factory)); + gst_clear_object (&e); + gst_clear_object (&compa_sinkpad_tpl); + + continue; + } + } else { + gst_clear_caps (&tplcaps); + gst_clear_object (&compa_sinkpad_tpl); + continue; + } + + gst_clear_object (&compa_sinkpad_tpl); + + if (gst_pad_query_accept_caps (sinkpad, ecaps)) { + gst_bin_add (GST_BIN (self), e); + + GST_DEBUG_OBJECT (self, "selected tensor decoder: %" GST_PTR_FORMAT, e); + + if (!gst_element_sync_state_with_parent (e)) { + GST_WARNING_OBJECT (self, "Element %" GST_PTR_FORMAT " failed to " + "synchronise its state with parent and will not be added to " + "this bin.", e); + gst_bin_remove (GST_BIN (self), e); + gst_clear_object (&e); + continue; + } + + if (srcpad) { + if (gst_pad_link (srcpad, sinkpad) != GST_PAD_LINK_OK) { + GST_ERROR_OBJECT (self, + "Could not link %" GST_PTR_FORMAT " and %" GST_PTR_FORMAT, srcpad, + sinkpad); + goto fail; + } + + gst_clear_object (&srcpad); + } else { + if 
(!gst_ghost_pad_set_target (GST_GHOST_PAD (self->sinkpad), sinkpad)) { + GST_ERROR_OBJECT (self, "Failed to set sinkpad target"); + goto fail; + } + } + + gst_clear_object (&sinkpad); + srcpad = gst_element_get_static_pad (e, "src"); + + e = NULL; + continue; + + } else { + GST_WARNING_OBJECT (self, "Factory (%p)'s sinkpad (%p) didn't accept " + "caps:%" GST_PTR_FORMAT, factory, sinkpad, ecaps); + gst_clear_object (&sinkpad); + gst_clear_object (&e); + } + } + + g_list_free_full (g_steal_pointer (&factories), gst_object_unref); + + if (srcpad == NULL) { + GST_WARNING_OBJECT (self, "Could not find tensor decoder for %" + GST_PTR_FORMAT, ecaps); + goto fail; + } + +done: + if (!srcpad || !gst_ghost_pad_set_target (GST_GHOST_PAD (self->srcpad), + srcpad)) { + GST_ERROR_OBJECT (self, "Failed to set srcpad target"); + goto fail; + } + + gst_clear_object (&srcpad); + return ret; + +fail: + g_list_free_full (factories, gst_object_unref); + + _remove_all_elements (GST_BIN (self)); + + gst_clear_object (&srcpad); + gst_clear_object (&sinkpad); + gst_clear_object (&e); + return FALSE; +} + +static gboolean +gst_tensordecodebin_sink_event (GstPad * pad, GstObject * parent, GstEvent * + event) +{ + gboolean ret = TRUE; + GstTensorDecodeBin *self = GST_TENSORDECODEBIN (parent); + GstCaps *ecaps = NULL; + + switch (GST_EVENT_TYPE (event)) { + case GST_EVENT_CAPS: + { + gst_event_parse_caps (event, &ecaps); + + if (!gst_tensordecodebin_sink_caps_event (self, ecaps)) { + gst_caps_unref (ecaps); + goto done; + } + + ret = gst_pad_event_default (pad, parent, event); + break; + } + default: + ret = gst_pad_event_default (pad, parent, event); + break; + } + +done: + return ret; +} + +static GstCaps * +_get_tensordecoders_caps (GstTensorDecodeBin * self) +{ + GstElementFactory *factory; + GstCaps *tplcaps, *acc_caps = NULL; + GstPadTemplate *tpl; + + GList *factories = + gst_tensordecodebin_get_or_load_tensordec_factories_unlocked (self); + + if (self->aggregated_caps != NULL) + goto 
done; + + acc_caps = gst_caps_new_empty (); + + for (GList * f = factories; f; f = g_list_next (f)) { + factory = GST_ELEMENT_FACTORY (f->data); + tpl = _get_compatible_sinkpad_template (self, factory); + + if (!tpl) { + GST_WARNING_OBJECT (self, + "No compatible sinkpad template found %s factory", + gst_element_factory_get_metadata (factory, + GST_ELEMENT_METADATA_LONGNAME)); + continue; + } + + tplcaps = gst_pad_template_get_caps (tpl); + gst_clear_object (&tpl); + acc_caps = gst_caps_merge (acc_caps, tplcaps); + } + + self->aggregated_caps = acc_caps; + +done: + g_list_free_full (factories, gst_object_unref); + + return gst_caps_ref (self->aggregated_caps); +} + +static gboolean +gst_tensordecodebin_sink_query (GstPad * pad, GstObject * parent, GstQuery * + query) +{ + gboolean ret; + GstTensorDecodeBin *self = GST_TENSORDECODEBIN (parent); + + switch (query->type) { + case GST_QUERY_CAPS: + { + GstCaps *acc_caps, *filter_caps = NULL, *intersection; + GstQuery *dn_query; + gst_query_parse_caps (query, &filter_caps); + + GST_OBJECT_LOCK (self); + acc_caps = _get_tensordecoders_caps (self); + GST_OBJECT_UNLOCK (self); + + if (filter_caps) { + intersection = gst_caps_intersect (acc_caps, filter_caps); + gst_caps_replace (&acc_caps, intersection); + } + + dn_query = gst_query_new_caps (acc_caps); + if ((ret = gst_pad_peer_query (self->srcpad, dn_query))) { + gst_query_parse_caps (dn_query, &filter_caps); + if (filter_caps) { + intersection = gst_caps_intersect (acc_caps, filter_caps); + gst_caps_replace (&acc_caps, intersection); + } + } + + gst_clear_query (&dn_query); + gst_query_set_caps_result (query, acc_caps); + gst_caps_unref (acc_caps); + break; + } + case GST_QUERY_ACCEPT_CAPS: + { + GstCaps *caps, *acc_caps = NULL; + + GST_OBJECT_LOCK (self); + acc_caps = _get_tensordecoders_caps (self); + gst_query_parse_accept_caps (query, &caps); + gst_query_set_accept_caps_result (query, gst_caps_can_intersect (acc_caps, + caps)); + GST_OBJECT_UNLOCK (self); + + 
gst_caps_unref (acc_caps); + ret = TRUE; + break; + } + default: + ret = gst_pad_query_default (pad, parent, query); + break; + } + + return ret; +}
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/tensordecoders/gsttensordecodebin.h
Added
@@ -0,0 +1,58 @@ +/* GStreamer object detection overlay + * Copyright (C) <2025> Collabora Ltd. + * @author: Daniel Morin <daniel.morin@collabora.com> + * + * gsttensordecodebin.h + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifndef __GST_TENSOR_DECODE_BIN__ +#define __GST_TENSOR_DECODE_BIN__ + +#include <gst/gst.h> + +G_BEGIN_DECLS + +#define GST_TYPE_TENSORDECODEBIN (gst_tensordecodebin_get_type ()) +G_DECLARE_FINAL_TYPE (GstTensorDecodeBin, gst_tensordecodebin, + GST, TENSORDECODEBIN, GstBin) + +struct _GstTensorDecodeBin +{ + GstBin basebin; + GstPad *sinkpad; + GstPad *srcpad; + + // only change under object lock + guint32 factories_cookie; + + // only change under object lock + GList *tensordec_factories; + + GstCaps *last_event_caps; + GstCaps *aggregated_caps; +}; + +struct _GstTensorDecodeBinClass +{ + GstBinClass parent_class; +}; + +GST_ELEMENT_REGISTER_DECLARE (tensordecodebin); + +G_END_DECLS +#endif /* __GST_TENSOR_DECODE_BIN__ */
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/tensordecoders/gsttensordecoders.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/tensordecoders/gsttensordecoders.c
Changed
@@ -24,7 +24,13 @@ # include "config.h" #endif -#include "gstssdobjectdetector.h" +#include "gstssdtensordec.h" +#include "gstclassifiertensordecoder.h" +#include "gstfacedetectortensordecoder.h" +#include "gstioutracker.h" +#include "gstyolotensordecoder.h" +#include "gstyolosegtensordecoder.h" +#include "gsttensordecodebin.h" /** * SECTION:plugin-tensordecoders @@ -37,7 +43,14 @@ plugin_init (GstPlugin * plugin) { gboolean ret = FALSE; + ret |= GST_ELEMENT_REGISTER (ssd_tensor_dec, plugin); ret |= GST_ELEMENT_REGISTER (ssd_object_detector, plugin); + ret |= GST_ELEMENT_REGISTER (classifier_tensor_decoder, plugin); + ret |= GST_ELEMENT_REGISTER (face_detector_tensor_decoder, plugin); + ret |= GST_ELEMENT_REGISTER (iou_tracker, plugin); + ret |= GST_ELEMENT_REGISTER (yolo_tensor_decoder, plugin); + ret |= GST_ELEMENT_REGISTER (yolo_seg_tensor_decoder, plugin); + ret |= GST_ELEMENT_REGISTER (tensordecodebin, plugin); return ret; }
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/tensordecoders/gstyolosegtensordecoder.c
Added
@@ -0,0 +1,433 @@ +/* + * GStreamer + * Copyright (C) 2024 Collabora Ltd. + * Authors: Daniel Morin <daniel.morin@collabora.com> + * Vineet Suryan <vineet.suryan@collabora.com> + * Santosh Mahto <santosh.mahto@collabora.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +/** + * SECTION:element-yolosegv8tensordec + * @short_description: Decode tensors from FastSAM/YOLOv8 segmentation + * models + * + * This element parses the per-buffer inference tensor metadata generated by an + * upstream inference element. + * + * ## Example launch command: + * + * Test image file, model file and labels file can be found here: + * https://gitlab.collabora.com/gstreamer/onnx-models + * + * gst-launch-1.0 v4l2src device=/dev/video4 ! videorate max-rate=3 \ + * ! videoconvertscale ! video/x-raw, pixel-aspect-ratio=1/1 \ + * ! onnxinference \ + * model-file=/home/dmorin/repos/onnx-models/models/yolov8s-seg.onnx \ + * ! yolosegv8tensordec class-confidence-threshold=0.8 iou-threshold=0.3 \ + * max-detections=100 \ + * label-file=/home/dmorin/repos/onnx-models/labels/COCO_classes.txt \ + * ! segmentationoverlay \ + * ! 
glimagesink sink="gtkglsink processing-deadline=300000000" + * + * The original Yolo repository is located at + * https://github.com/ultralytics/ultralytics. + * For easy experimentation, an object segmentation model based on the Yolo + * architecture in Onnx format can be found at https://col.la/gstonnxmodelseg. + * This model already has the required tensor-ids embedded in the model. + * It's also possible to embed tensor-ids into any model based on the Yolo + * architecture to allow this tensor-decoder to decode tensors. This process + * is described in the Readme of this repository: https://col.la/gstonnxmodels + * + * Since: 1.28 + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gstyolosegtensordecoder.h" + +#include <gst/analytics/analytics.h> +#include <gio/gio.h> + +#include <math.h> + +#define YOLO_SEGMENTATION_LOGITS "yolo-v8-segmentation-out-protos" +GQuark YOLO_SEGMENTATION_LOGITS_TENSOR_ID; + +#define YOLO_SEGMENTATION_DETECTION_MASK "yolo-v8-segmentation-out-detections" +GQuark YOLO_SEGMENTATION_DETECTION_MASK_ID; + +/* *INDENT-OFF* */ +static GstStaticPadTemplate gst_yolo_seg_tensor_decoder_sink_template = +GST_STATIC_PAD_TEMPLATE ("sink", + GST_PAD_SINK, + GST_PAD_ALWAYS, + GST_STATIC_CAPS ("video/x-raw," + "tensors=(structure)" + "tensorgroups," + "yolo-v8-segmentation-out=(/uniquelist){" + "(GstCaps)" + "tensor/strided," + "tensor-id=yolo-v8-segmentation-out-detections," + "dims=(int)<1, 1,max, 1,max>," + "dims-order=(string)col-major," + "type=(string)float32" + "," + "(GstCaps)" + "tensor/strided," + "tensor-id=yolo-v8-segmentation-out-protos," + "dims=(int)<1, 1,max, 1,max, 1,max>," + "dims-order=(string)col-major," + "type=(string)float32" + "" + "}" + "" + )); +/* *INDENT-ON* */ + + +GST_DEBUG_CATEGORY_STATIC (yolo_seg_tensor_decoder_debug); +#define GST_CAT_DEFAULT yolo_seg_tensor_decoder_debug + +GST_ELEMENT_REGISTER_DEFINE (yolo_seg_tensor_decoder, "yolosegv8tensordec", + GST_RANK_SECONDARY, 
GST_TYPE_YOLO_SEG_TENSOR_DECODER); + +/* For debug purposes */ +typedef struct _DebugCandidates +{ + gpointer self; + gsize fields; /* Fields count to debug */ + gsize offset; /* Fields offset */ + gsize start; /* First field index to debug */ +} DebugCandidates; + +/* GstYoloSegTensorDecoder Prototypes */ +static gboolean gst_yolo_seg_tensor_decoder_stop (GstBaseTransform * trans); +static GstFlowReturn gst_yolo_seg_tensor_decoder_transform_ip (GstBaseTransform + * trans, GstBuffer * buf); + +static void gst_yolo_seg_tensor_decoder_object_found (GstYoloTensorDecoder * od, + GstAnalyticsRelationMeta * rmeta, BBox * bb, gfloat confidence, + GQuark class_quark, const gfloat * candidate_masks, gsize offset, + guint count); + +G_DEFINE_TYPE (GstYoloSegTensorDecoder, gst_yolo_seg_tensor_decoder, + GST_TYPE_YOLO_TENSOR_DECODER); + +static gboolean +gst_yolo_seg_tensor_decoder_stop (GstBaseTransform * trans) +{ + GstYoloSegTensorDecoder *self = GST_YOLO_SEG_TENSOR_DECODER (trans); + + self->mask_w = 0; + self->mask_h = 0; + self->mask_length = 0; + if (self->mask_pool) + gst_buffer_pool_set_active (self->mask_pool, FALSE); + g_clear_object (&self->mask_pool); + + return TRUE; +} + +static void +gst_yolo_seg_tensor_decoder_class_init (GstYoloSegTensorDecoderClass * klass) +{ + GstElementClass *element_class = (GstElementClass *) klass; + GstBaseTransformClass *basetransform_class = (GstBaseTransformClass *) klass; + GstYoloTensorDecoderClass *od_class = (GstYoloTensorDecoderClass *) klass; + + /* Define GstYoloSegTensorDecoder debug category. 
*/ + GST_DEBUG_CATEGORY_INIT (yolo_seg_tensor_decoder_debug, + "yolosegv8tensordec", 0, "Tensor decoder for Yolo segmentation models"); + + YOLO_SEGMENTATION_DETECTION_MASK_ID = + g_quark_from_static_string (YOLO_SEGMENTATION_DETECTION_MASK); + YOLO_SEGMENTATION_LOGITS_TENSOR_ID = + g_quark_from_static_string (YOLO_SEGMENTATION_LOGITS); + + gst_element_class_set_static_metadata (element_class, + "YOLO v8-11 segmentation tensor decoder", "Tensordecoder/Video", + "Decode output tensors from the inference of a Yolo or FastSAM (segmentation)" + " model on video frames. It works with YOLO version >= 8 and FastSAM models.", + "Daniel Morin <daniel.morin@collabora.com>, Santosh Mahto <santosh.mahto@collabora.com>"); + + gst_element_class_add_pad_template (element_class, + gst_static_pad_template_get (&gst_yolo_seg_tensor_decoder_sink_template)); + + + basetransform_class->transform_ip = gst_yolo_seg_tensor_decoder_transform_ip; + basetransform_class->stop = gst_yolo_seg_tensor_decoder_stop; + + od_class->object_found = gst_yolo_seg_tensor_decoder_object_found; + + /* Workaround hotdoc bug */ + gst_type_mark_as_plugin_api (GST_TYPE_YOLO_TENSOR_DECODER, 0); +} + +static void +gst_yolo_seg_tensor_decoder_init (GstYoloSegTensorDecoder * self) +{ + /* GstYoloSegTensorDecoder instance initialization */ + self->mask_w = 0; + self->mask_h = 0; + self->mask_length = 0; + self->mask_pool = NULL; + memset (&self->mask_roi, 0, sizeof (BBox)); + + gst_base_transform_set_passthrough (GST_BASE_TRANSFORM (self), FALSE); +} + +static gboolean +gst_yolo_seg_tensor_decoder_get_tensors (GstYoloSegTensorDecoder * self, + GstBuffer * buf, const GstTensor ** logits_tensor, + const GstTensor ** detections_tensor) +{ + GstMeta *meta = NULL; + gpointer iter_state = NULL; + + if (!gst_buffer_get_meta (buf, GST_TENSOR_META_API_TYPE)) { + GST_WARNING_OBJECT (self, + "missing tensor meta from buffer %" GST_PTR_FORMAT, buf); + return FALSE; + } + + while ((meta = gst_buffer_iterate_meta_filtered (buf, 
&iter_state, + GST_TENSOR_META_API_TYPE))) { + GstTensorMeta *tmeta = (GstTensorMeta *) meta; + const gsize YOLO_LOGITS_TENSOR_N_DIMS = 4; + static const gsize logits_dims[4] = { 1, G_MAXSIZE, G_MAXSIZE, G_MAXSIZE }; + const gsize YOLO_DETECTIONS_TENSOR_N_DIMS = 3; + static const gsize detections_dims[3] = { 1, G_MAXSIZE, G_MAXSIZE }; + + *logits_tensor = gst_tensor_meta_get_typed_tensor (tmeta, + YOLO_SEGMENTATION_LOGITS_TENSOR_ID, GST_TENSOR_DATA_TYPE_FLOAT32, + GST_TENSOR_DIM_ORDER_ROW_MAJOR, YOLO_LOGITS_TENSOR_N_DIMS, logits_dims); + if (*logits_tensor == NULL) + continue; + + + *detections_tensor = gst_tensor_meta_get_typed_tensor (tmeta, + YOLO_SEGMENTATION_DETECTION_MASK_ID, GST_TENSOR_DATA_TYPE_FLOAT32, + GST_TENSOR_DIM_ORDER_ROW_MAJOR, YOLO_DETECTIONS_TENSOR_N_DIMS, + detections_dims); + + if (*detections_tensor == NULL) + continue; + + guint num_masks = (*logits_tensor)->dims[1]; + + if ((*detections_tensor)->dims[1] < 4 + 1 + num_masks) { + GST_WARNING_OBJECT (self, "Ignoring tensor because dims[1] is %zu < %u", + (*detections_tensor)->dims[1], 4 + 1 + num_masks); + continue; + } + + return TRUE; + } + + return FALSE; +} + +/* gst_yolo_seg_tensor_decoder_transform_ip: + * @trans: Instance + * @buf:inout: Buffer containing media and where tensors can be attached + * @return: Flow errors + * Decode Yolo tensors, post-process tensors and store decoded information + * into an analytics-meta that is attached to the buffer before being pushed + * downstream. 
+ */ +static GstFlowReturn +gst_yolo_seg_tensor_decoder_transform_ip (GstBaseTransform * trans, + GstBuffer * buf) +{ + GstYoloSegTensorDecoder *self = GST_YOLO_SEG_TENSOR_DECODER (trans); + GstYoloTensorDecoder *od = GST_YOLO_TENSOR_DECODER (trans); + GstAnalyticsRelationMeta *rmeta; + gsize mask_w, mask_h; + const GstTensor *detections_tensor; + const GstTensor *logits_tensor; + GstFlowReturn ret = GST_FLOW_OK; + gboolean rv; + + if (!gst_yolo_seg_tensor_decoder_get_tensors (self, buf, &logits_tensor, + &detections_tensor)) { + GST_WARNING_OBJECT (self, + "Couldn't find logit or detections tensor, skipping"); + return GST_FLOW_OK; + } + + rmeta = gst_buffer_add_analytics_relation_meta (buf); + if (rmeta == NULL) { + GST_ELEMENT_ERROR (trans, STREAM, FAILED, (NULL), + ("Analytics Relation meta allocation failed")); + return GST_FLOW_ERROR; + } + + mask_w = logits_tensor->dims[2]; + mask_h = logits_tensor->dims[3]; + + /* The detections need to be cropped to fit the SAR of the image. */ + /* TODO: We're reconstructing the transformation that was done on the + * original image based on the assumption that the complete image without + * deformation would be analyzed. 
This assumption is not always true and + * we should try to find a way to convey this transformation information + * and retrieve it here, to know the transformation that needs to be done + * on the mask.*/ + + if (self->mask_w != mask_w || self->mask_h != mask_h) { + self->mask_w = mask_w; + self->mask_h = mask_h; + self->mask_length = mask_w * mask_h; + + if (od->video_info.width > od->video_info.height) { + self->bb2mask_gain = ((gfloat) self->mask_w) / od->video_info.width; + self->mask_roi.x = 0; + self->mask_roi.w = self->mask_w; + self->mask_roi.h = ((gfloat) self->bb2mask_gain) * od->video_info.height; + self->mask_roi.y = (self->mask_h - self->mask_roi.h) / 2; + } else { + self->bb2mask_gain = ((gfloat) self->mask_h) / od->video_info.height; + self->mask_roi.y = 0; + self->mask_roi.h = self->mask_h; + self->mask_roi.w = self->bb2mask_gain * od->video_info.width; + self->mask_roi.x = (self->mask_w - self->mask_roi.w) / 2; + } + + if (self->mask_pool) { + gst_buffer_pool_set_active (self->mask_pool, FALSE); + g_clear_object (&self->mask_pool); + } + } + + if (self->mask_pool == NULL) { + GstVideoInfo minfo; + GstCaps *caps; + gst_video_info_init (&minfo); + gst_video_info_set_format (&minfo, GST_VIDEO_FORMAT_GRAY8, self->mask_w, + self->mask_h); + caps = gst_video_info_to_caps (&minfo); + self->mask_pool = gst_video_buffer_pool_new (); + + GstStructure *config = gst_buffer_pool_get_config (self->mask_pool); + gst_buffer_pool_config_set_params (config, caps, self->mask_length, 0, 0); + gst_buffer_pool_config_add_option (config, + GST_BUFFER_POOL_OPTION_VIDEO_META); + gst_buffer_pool_set_config (self->mask_pool, config); + gst_buffer_pool_set_active (self->mask_pool, TRUE); + gst_caps_unref (caps); + } + + /* Retrieve memory at index 0 from logits_tensor in READ mode */ + rv = gst_buffer_map (logits_tensor->data, &self->map_info_logits, + GST_MAP_READ); + if (!rv) { + GST_ELEMENT_ERROR (self, STREAM, FAILED, (NULL), + ("Couldn't map logits tensor buffer: %" 
GST_PTR_FORMAT, + logits_tensor->data)); + return GST_FLOW_ERROR; + } + + self->logits_tensor = logits_tensor; + + if (!gst_yolo_tensor_decoder_decode_f32 (od, rmeta, detections_tensor, + logits_tensor->dims[1])) + ret = GST_FLOW_ERROR; + + gst_buffer_unmap (logits_tensor->data, &self->map_info_logits); + + return ret; +} + +static float +sigmoid (float x) +{ + /* Check for positive overflow */ + if (x > 0) { + double exp_neg_x = exp (-x); + return 1.0 / (1.0 + exp_neg_x); + } + /* Check for negative overflow and improve stability for negative x */ + else { + double exp_x = exp (x); + return exp_x / (1.0 + exp_x); + } +} + +static void +gst_yolo_seg_tensor_decoder_object_found (GstYoloTensorDecoder * od, + GstAnalyticsRelationMeta * rmeta, BBox * bb, gfloat confidence, + GQuark class_quark, const gfloat * candidate_masks, gsize offset, + guint count) +{ + GstYoloSegTensorDecoder *self = GST_YOLO_SEG_TENSOR_DECODER (od); + GstAnalyticsODMtd od_mtd; + GstBuffer *mask_buf = NULL; + gfloat *data_logits = (gfloat *) self->map_info_logits.data; + BBox bb_mask; + GstFlowReturn flowret; + GstMapInfo out_mask_info; + guint region_ids[2] = { 0, count }; + GstAnalyticsMtd seg_mtd; + + gst_analytics_relation_meta_add_od_mtd (rmeta, class_quark, + bb->x, bb->y, bb->w, bb->h, confidence, &od_mtd); + + bb_mask.x = self->bb2mask_gain * bb->x + self->mask_roi.x; + bb_mask.y = self->bb2mask_gain * bb->y + self->mask_roi.y; + bb_mask.w = self->bb2mask_gain * bb->w; + bb_mask.h = self->bb2mask_gain * bb->h; + + flowret = gst_buffer_pool_acquire_buffer (self->mask_pool, &mask_buf, NULL); + g_assert (flowret == GST_FLOW_OK); + gst_buffer_map (mask_buf, &out_mask_info, GST_MAP_READWRITE); + + GstVideoMeta *vmeta = gst_buffer_get_video_meta (mask_buf); + g_assert (vmeta != NULL); + vmeta->width = bb_mask.w; + vmeta->height = bb_mask.h; + +#define MX_MAX (bb_mask.x + bb_mask.w) +#define MY_MAX (bb_mask.y + bb_mask.h) + + for (gint my = bb_mask.y, i = 0; my < MY_MAX; my++) { + for (gint mx = 
bb_mask.x; mx < MX_MAX; mx++, i++) { + float sum = 0.0f; + gint j = my * self->mask_w + mx; + for (gsize k = 0; k < self->logits_tensor->dims[1]; ++k) { + GST_TRACE_OBJECT (self, "protos data at ((mx=%d,my=%d)=%d, %zu) is %f", + mx, my, j, k, data_logits[k * self->mask_length + j]); + sum += candidate_masks[offset * k] * + data_logits[k * self->mask_length + j]; + } + out_mask_info.data[i] = sigmoid (sum) > 0.5 ? count : 0; + } + } + + gst_analytics_relation_meta_add_segmentation_mtd (rmeta, mask_buf, + GST_SEGMENTATION_TYPE_INSTANCE, 1, region_ids, bb->x, bb->y, bb->w, + bb->h, &seg_mtd); + + gst_analytics_relation_meta_set_relation (rmeta, + GST_ANALYTICS_REL_TYPE_RELATE_TO, od_mtd.id, seg_mtd.id); + gst_analytics_relation_meta_set_relation (rmeta, + GST_ANALYTICS_REL_TYPE_RELATE_TO, seg_mtd.id, od_mtd.id); + + + gst_buffer_unmap (mask_buf, &out_mask_info); +}
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/tensordecoders/gstyolosegtensordecoder.h
Added
@@ -0,0 +1,69 @@ +/* + * GStreamer gstreamer-yolotensordecoder + * Copyright (C) 2024 Collabora Ltd + * Authors: Daniel Morin <daniel.morin@collabora.com> + * Vineet Suryan <vineet.suryan@collabora.com> + * Santosh Mahto <santosh.mahto@collabora.com> + * + * gstyolotensordecoder.h + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. 
+ */ + + +#ifndef __GST_YOLO_SEG_TENSOR_DECODER_H__ +#define __GST_YOLO_SEG_TENSOR_DECODER_H__ + +#include <gst/gst.h> +#include <gst/video/video.h> +#include <gst/base/base.h> +#include "gstyolotensordecoder.h" + +/* Yolo segmentation tensor decoder */ +#define GST_TYPE_YOLO_SEG_TENSOR_DECODER (gst_yolo_seg_tensor_decoder_get_type ()) + +G_DECLARE_FINAL_TYPE (GstYoloSegTensorDecoder, gst_yolo_seg_tensor_decoder, + GST, YOLO_SEG_TENSOR_DECODER, GstYoloTensorDecoder) + +struct _GstYoloSegTensorDecoder +{ + GstYoloTensorDecoder parent; + + /* Mask width */ + guint mask_w; + /* Mask height */ + guint mask_h; + /* Mask length */ + gsize mask_length; + + /* Scaling factor to convert bounding-box coordinates to mask coordinates */ + gfloat bb2mask_gain; + /* Region of the mask that contains valid segmentation information */ + BBox mask_roi; + + /* BufferPool for mask */ + GstBufferPool *mask_pool; + + /* These are only valid during the call to + * the base class's gst_yolo_tensor_decoder_decode_f32 + */ + const GstTensor *logits_tensor; + GstMapInfo map_info_logits; +}; + +GST_ELEMENT_REGISTER_DECLARE (yolo_seg_tensor_decoder) + +#endif /* __GST_YOLO_SEG_TENSOR_DECODER_H__ */
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/tensordecoders/gstyolotensordecoder.c
Added
@@ -0,0 +1,805 @@ +/* + * GStreamer gstreamer-yolotensordecoder + * Copyright (C) 2024 Collabora Ltd. + * Authors: Daniel Morin <daniel.morin@collabora.com> + * Vineet Suryan <vineet.suryan@collabora.com> + * Santosh Mahto <santosh.mahto@collabora.com> + * + * gstyolotensordecoder.c + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +/** + * SECTION:element-yolotensordec + * @short_description: Decode tensors from YOLO detection models + * + * This element can parse per-buffer inference tensors meta data generated + * by an upstream inference element + * + * + * ## Example launch command: + * + * Test image file, model file and labels file can be found here : + * https://gitlab.collabora.com/gstreamer/onnx-models + * + * gst-launch-1.0 -v v4l2src \ + * ! videoconvertscale qos=false ! video/x-raw, pixel-aspect-ratio=1/1 \ + * ! onnxinference model-file=yolov8s.onnx \ + * ! yolov8tensordec class-confidence-threshold=0.8 iou-threshold=0.3 \ + * max-detections=100 label-file=labels/COCO_classes.txt \ + * ! objectdetectionoverlay ! glimagesink sink=gtkglsink + * + * The original repository of the Yolo is located at + * https://github.com/ultralytics/ultralytics. 
+ * For easy experimentation, the models based on Yolo architecture in Onnx + * format can be found at https://col.la/gstonnxmodels . This model already + * has tensor names embedded matching the default values of tensors-detections-name + * and tensors-logits-name properties. It's also possible to embed tensor-ids + * into any model based on Yolo architecture to allow this tensor-decoder + * to decode tensors. This process is described in the Readme of + * repository: https://col.la/gstonnxmodels + * + * Since: 1.28 + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gstyolotensordecoder.h" + +#include <gst/analytics/analytics.h> +#include <gst/analytics/gstanalytics_image_util.h> +#include <gio/gio.h> + +#include <math.h> + +#define YOLO_DETECTION_MASK "yolo-v8-out" +GQuark YOLO_DETECTION_MASK_ID; + +/** + * GstYoloTensorDecoder: + * + * A tensor decoder for YOLO v8-v11 models. + * + * Since: 1.28 + */ + +GST_DEBUG_CATEGORY_STATIC (yolo_tensor_decoder_debug); +#define GST_CAT_DEFAULT yolo_tensor_decoder_debug + +GST_ELEMENT_REGISTER_DEFINE (yolo_tensor_decoder, "yolov8tensordec", + GST_RANK_PRIMARY, GST_TYPE_YOLO_TENSOR_DECODER); + +/* GstYoloTensorDecoder properties, see properties description in + * gst_yolo_tensor_decoder_class_init for more details. */ +enum +{ + PROP_0, + PROP_BOX_CONFI_THRESH, + PROP_CLS_CONFI_THRESH, + PROP_IOU_THRESH, + PROP_MAX_DETECTION, + PROP_LABEL_FILE +}; + +/* Specify the range of confidence level in tensor output */ +typedef struct _ConfidenceRange +{ + gsize start; /* Start index of confidence level */ + gsize end; /* End index of confidence level */ + gsize step; /* Step size of next confidence level index */ +} ConfidenceRange; + +/* Default property values */ +static const gfloat DEFAULT_BOX_CONFI_THRESH = 0.4f; +static const gfloat DEFAULT_CLS_CONFI_THRESH = 0.4f; +static const gfloat DEFAULT_IOU_THRESH = 0.7f; +static const gsize DEFAULT_MAX_DETECTION = 100; + +/* Global variable storing class for OD. 
Generally OD has class + * and we need to provide one but this class is just a placeholder.*/ +GQuark OOI_CLASS_ID; + +/* GStreamer element srcpad template. Template of a srcpad that can receive + * any raw video. */ +static GstStaticPadTemplate gst_yolo_tensor_decoder_src_template = +GST_STATIC_PAD_TEMPLATE ("src", + GST_PAD_SRC, + GST_PAD_ALWAYS, + GST_STATIC_CAPS ("video/x-raw")); + +/* *INDENT-OFF* */ +static GstStaticPadTemplate gst_yolo_tensor_decoder_sink_template = +GST_STATIC_PAD_TEMPLATE ("sink", + GST_PAD_SINK, + GST_PAD_ALWAYS, + GST_STATIC_CAPS ( + "video/x-raw," + "tensors=(structure)" + "tensorgroups," + "yolo-v8-out=(/uniquelist){" + "(GstCaps)" + "tensor/strided," + "tensor-id=(string)yolo-v8-out," + "dims=<(int)1,(int)1,max,(int)1,max>," + "dims-order=(string)col-major," + "type=(string)float32" + "" + "}" + "" + )); +/* *INDENT-ON* */ + +/* GstYoloTensorDecoder Prototypes */ +static gboolean gst_yolo_tensor_decoder_set_caps (GstBaseTransform * trans, + GstCaps * incaps, GstCaps * outcaps); +static void gst_yolo_tensor_decoder_set_property (GObject * object, + guint prop_id, const GValue * value, GParamSpec * pspec); +static void gst_yolo_tensor_decoder_get_property (GObject * object, + guint prop_id, GValue * value, GParamSpec * pspec); +static GstFlowReturn gst_yolo_tensor_decoder_transform_ip (GstBaseTransform * + trans, GstBuffer * buf); +static void gst_yolo_tensor_decoder_finalize (GObject * object); + +static void gst_yolo_tensor_decoder_object_found (GstYoloTensorDecoder * self, + GstAnalyticsRelationMeta * rmeta, BBox * bb, gfloat confidence, + GQuark class_quark, const gfloat * candidate_masks, gsize offset, + guint count); + +G_DEFINE_TYPE (GstYoloTensorDecoder, gst_yolo_tensor_decoder, + GST_TYPE_BASE_TRANSFORM); + +static GArray * +read_labels (const char *labels_file) +{ + GArray *array; + GFile *file = g_file_new_for_path (labels_file); + GFileInputStream *file_stream; + GDataInputStream *data_stream; + GError *error = NULL; + gchar 
*line; + + file_stream = g_file_read (file, NULL, &error); + g_object_unref (file); + if (!file_stream) { + GST_WARNING ("Could not open file %s: %s\n", labels_file, error->message); + g_clear_error (&error); + return NULL; + } + + data_stream = g_data_input_stream_new (G_INPUT_STREAM (file_stream)); + g_object_unref (file_stream); + + array = g_array_new (FALSE, FALSE, sizeof (GQuark)); + + while ((line = g_data_input_stream_read_line (data_stream, NULL, NULL, + &error))) { + GQuark label = g_quark_from_string (line); + g_array_append_val (array, label); + g_free (line); + } + + g_object_unref (data_stream); + + if (error) { + GST_WARNING ("Could not open file %s: %s", labels_file, error->message); + g_array_free (array, TRUE); + g_clear_error (&error); + return NULL; + } + + if (array->len == 0) { + g_array_free (array, TRUE); + return NULL; + } + + return array; +} + +static gboolean +gst_yolo_tensor_decoder_set_caps (GstBaseTransform * trans, GstCaps * incaps, + GstCaps * outcaps) +{ + GstYoloTensorDecoder *self = GST_YOLO_TENSOR_DECODER (trans); + + if (!gst_video_info_from_caps (&self->video_info, incaps)) { + GST_ERROR_OBJECT (self, "Failed to parse caps"); + return FALSE; + } + + if (gst_base_transform_is_passthrough (trans)) { + GST_ERROR_OBJECT (self, "Failed. 
Can't handle passthrough"); + return FALSE; + } + + return TRUE; +} + +static void +gst_yolo_tensor_decoder_set_property (GObject * object, guint prop_id, + const GValue * value, GParamSpec * pspec) +{ + GstYoloTensorDecoder *self = GST_YOLO_TENSOR_DECODER (object); + const gchar *filename; + + switch (prop_id) { + case PROP_BOX_CONFI_THRESH: + self->box_confi_thresh = g_value_get_float (value); + break; + case PROP_CLS_CONFI_THRESH: + self->cls_confi_thresh = g_value_get_float (value); + break; + case PROP_IOU_THRESH: + self->iou_thresh = g_value_get_float (value); + break; + case PROP_MAX_DETECTION: + self->max_detection = g_value_get_uint (value); + break; + case PROP_LABEL_FILE: + { + GArray *labels; + + filename = g_value_get_string (value); + labels = read_labels (filename); + + if (labels) { + g_free (self->label_file); + self->label_file = g_strdup (filename); + g_clear_pointer (&self->labels, g_array_unref); + self->labels = labels; + } else { + GST_WARNING_OBJECT (self, "Label file '%s' not found!", filename); + } + break; + } + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static void +gst_yolo_tensor_decoder_get_property (GObject * object, guint prop_id, + GValue * value, GParamSpec * pspec) +{ + GstYoloTensorDecoder *self = GST_YOLO_TENSOR_DECODER (object); + + switch (prop_id) { + case PROP_BOX_CONFI_THRESH: + g_value_set_float (value, self->box_confi_thresh); + break; + case PROP_CLS_CONFI_THRESH: + g_value_set_float (value, self->cls_confi_thresh); + break; + case PROP_IOU_THRESH: + g_value_set_float (value, self->iou_thresh); + break; + case PROP_MAX_DETECTION: + g_value_set_uint (value, self->max_detection); + break; + case PROP_LABEL_FILE: + g_value_set_string (value, self->label_file); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static void +gst_yolo_tensor_decoder_class_init (GstYoloTensorDecoderClass * klass) +{ + GObjectClass *gobject_class = 
(GObjectClass *) klass; + GstElementClass *element_class = (GstElementClass *) klass; + GstBaseTransformClass *basetransform_class = (GstBaseTransformClass *) klass; + + /* Define GstYoloTensorDecoder debug category. */ + GST_DEBUG_CATEGORY_INIT (yolo_tensor_decoder_debug, + "yolov8tensordec", 0, "Tensor decoder for Yolo detection models"); + + YOLO_DETECTION_MASK_ID = g_quark_from_static_string (YOLO_DETECTION_MASK); + + /* Set GObject vmethod to get and set property */ + gobject_class->set_property = gst_yolo_tensor_decoder_set_property; + gobject_class->get_property = gst_yolo_tensor_decoder_get_property; + + g_object_class_install_property (G_OBJECT_CLASS (klass), + PROP_BOX_CONFI_THRESH, + g_param_spec_float ("box-confidence-threshold", + "Box location confidence threshold", + "Boxes with a location confidence level inferior to this threshold " + "will be excluded", + 0.0, 1.0, DEFAULT_BOX_CONFI_THRESH, + (GParamFlags) (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + g_object_class_install_property (G_OBJECT_CLASS (klass), + PROP_CLS_CONFI_THRESH, + g_param_spec_float ("class-confidence-threshold", + "Class confidence threshold", + "Classes with a confidence level inferior to this threshold " + "will be excluded", + 0.0, 1.0, DEFAULT_CLS_CONFI_THRESH, + (GParamFlags) (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + g_object_class_install_property (G_OBJECT_CLASS (klass), + PROP_IOU_THRESH, + g_param_spec_float ("iou-threshold", + "Maximum IOU threshold", + "Maximum intersection-over-union between bounding boxes to " + "consider them distinct.", + 0.0, 1.0, DEFAULT_IOU_THRESH, + (GParamFlags) (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + g_object_class_install_property (G_OBJECT_CLASS (klass), + PROP_MAX_DETECTION, + g_param_spec_uint ("max-detections", + "Maximum object/masks detections.", + "Maximum object/masks detections.", + 1, G_MAXUINT, DEFAULT_MAX_DETECTION, + (GParamFlags) (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + 
g_object_class_install_property (G_OBJECT_CLASS (klass), PROP_LABEL_FILE, + g_param_spec_string ("label-file", + "Label file", "Label file", NULL, (GParamFlags) + (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + + /* Element description. */ + gst_element_class_set_static_metadata (element_class, + "YOLO v8-11 object detection tensor decoder", "Tensordecoder/Video", + "Decode tensors output from the inference of YOLO Object Detection or FastSAM model (Detection) " + "on video frames. This works on YOLO version 8 and later (v11), and FastSAM models.", + "Daniel Morin <daniel.morin@collabora.com>, Santosh Mahto <santosh.mahto@collabora.com>"); + + /* Add pads to element based on the pad templates defined earlier */ + gst_element_class_add_pad_template (element_class, + gst_static_pad_template_get (&gst_yolo_tensor_decoder_src_template)); + gst_element_class_add_pad_template (element_class, + gst_static_pad_template_get (&gst_yolo_tensor_decoder_sink_template)); + + /* Set GstBaseTransform vmethod transform_ip. This method is called + * by the srcpad when it receives a buffer. ip stands for in-place, meaning the + * buffer remains unchanged by the element. The tensor decoder only monitors + * buffers it receives for a meta attached to the buffer that is a GstTensorMeta + * and has a tensor-id that can be handled by GstYoloTensorDecoder. */ + basetransform_class->transform_ip = + GST_DEBUG_FUNCPTR (gst_yolo_tensor_decoder_transform_ip); + + /* Set GstBaseTransform set_caps vmethod. This will be called once the + * capability negotiation has been completed. We will be able to extract + * resolution from this callback. */ + basetransform_class->set_caps = + GST_DEBUG_FUNCPTR (gst_yolo_tensor_decoder_set_caps); + + gobject_class->finalize = gst_yolo_tensor_decoder_finalize; + + klass->object_found = gst_yolo_tensor_decoder_object_found; + + /* Calculate the class id placeholder (also a quark) that will be set + * as label of objects if labels are not provided via label-file. 
*/ + OOI_CLASS_ID = g_quark_from_static_string ("Yolo-None"); +} + +struct Candidate +{ + const float *candidate; + const float max_confidence; + const guint max_class_offset; +}; + +static void +gst_yolo_tensor_decoder_init (GstYoloTensorDecoder * self) +{ + /* GstYoloTensorDecoder instance initialization */ + self->box_confi_thresh = DEFAULT_BOX_CONFI_THRESH; + self->cls_confi_thresh = DEFAULT_CLS_CONFI_THRESH; + self->iou_thresh = DEFAULT_IOU_THRESH; + self->max_detection = DEFAULT_MAX_DETECTION; + + self->sel_candidates = g_array_new (FALSE, FALSE, sizeof (struct Candidate)); + self->selected = g_ptr_array_new (); + + gst_base_transform_set_passthrough (GST_BASE_TRANSFORM (self), FALSE); +} + +static const GstTensor * +gst_yolo_tensor_decoder_get_tensor (GstYoloTensorDecoder * + self, GstBuffer * buf) +{ + GstMeta *meta = NULL; + gpointer iter_state = NULL; + + if (!gst_buffer_get_meta (buf, GST_TENSOR_META_API_TYPE)) { + GST_DEBUG_OBJECT (self, + "missing tensor meta from buffer %" GST_PTR_FORMAT, buf); + return NULL; + } + + while ((meta = gst_buffer_iterate_meta_filtered (buf, &iter_state, + GST_TENSOR_META_API_TYPE))) { + GstTensorMeta *tensor_meta = (GstTensorMeta *) meta; + const GstTensor *tensor; + const gsize YOLO_DETECTIONS_TENSOR_N_DIMS = 3; + static const gsize dims[3] = { 1, G_MAXSIZE, G_MAXSIZE }; + + tensor = gst_tensor_meta_get_typed_tensor (tensor_meta, + YOLO_DETECTION_MASK_ID, GST_TENSOR_DATA_TYPE_FLOAT32, + GST_TENSOR_DIM_ORDER_ROW_MAJOR, YOLO_DETECTIONS_TENSOR_N_DIMS, dims); + + if (tensor) { + if (tensor->dims[1] < 5) { + GST_WARNING_OBJECT (self, "Ignore tensor because dims[1] is %zu < 5", + tensor->dims[1]); + continue; + } + + return tensor; + } + } + + return NULL; +} + + +static GstFlowReturn +gst_yolo_tensor_decoder_transform_ip (GstBaseTransform * trans, GstBuffer * buf) +{ + GstYoloTensorDecoder *self = GST_YOLO_TENSOR_DECODER (trans); + GstAnalyticsRelationMeta *rmeta; + const GstTensor *detections_tensor; + + detections_tensor =
gst_yolo_tensor_decoder_get_tensor (self, buf); + if (detections_tensor == NULL) { + GST_WARNING_OBJECT (self, "Couldn't find mask tensor, skipping"); + return GST_FLOW_OK; + } + + /* Retrieve or attach an analytics-relation-meta to the buffer. + * Analytics-relation-meta are containers that can receive multiple + * analytics-meta, like OD and Segmentation. The following call will only + * retrieve an analytics-relation-meta if it exists or create one if it + * does not exist. */ + rmeta = gst_buffer_add_analytics_relation_meta (buf); + if (rmeta == NULL) { + GST_ELEMENT_ERROR (trans, STREAM, FAILED, (NULL), + ("Analytics Relation meta allocation failed")); + return GST_FLOW_ERROR; + } + + if (!gst_yolo_tensor_decoder_decode_f32 (self, rmeta, detections_tensor, 0)) + return GST_FLOW_ERROR; + + return GST_FLOW_OK; +} + +static void +gst_yolo_tensor_decoder_finalize (GObject * object) +{ + GstYoloTensorDecoder *self = GST_YOLO_TENSOR_DECODER (object); + + g_clear_pointer (&self->sel_candidates, g_array_unref); + g_clear_pointer (&self->selected, g_ptr_array_unref); + + g_free (self->label_file); + g_clear_pointer (&self->labels, g_array_unref); + + G_OBJECT_CLASS (gst_yolo_tensor_decoder_parent_class)->finalize (object); +} + +/* Extract bounding box from tensor data */ +static void +gst_yolo_tensor_decoder_convert_bbox (const gfloat * candidate, + const gsize * offset, BBox * bbox) +{ + gfloat w = *(candidate + offset[2]); + gfloat h = *(candidate + offset[3]); + bbox->x = *(candidate + offset[0]) - (w / 2); + bbox->y = *(candidate + offset[1]) - (h / 2); + bbox->w = w + 0.5; + bbox->h = h + 0.5; +} + +/* Calculate IoU between the bounding boxes of candidates c1 and c2 + */ +static gfloat +gst_yolo_tensor_decoder_iou (const gfloat * c1, const gfloat * c2, + const gsize * offset, BBox * bb1, BBox * bb2) +{ + gst_yolo_tensor_decoder_convert_bbox (c1, offset, bb1); + gst_yolo_tensor_decoder_convert_bbox (c2, offset, bb2); + return gst_analytics_image_util_iou_int (bb1->x, bb1->y,
bb1->w, bb1->h, + bb2->x, bb2->y, bb2->w, bb2->h); +} + +/* Utility function to find maximum confidence value across classes + * specified by range. + */ +static gfloat +gst_yolo_tensor_decoder_find_max_class_confidence (const gfloat * c, + const ConfidenceRange * c_range, gsize * max_class_ofs) +{ + gfloat max_val = 0.0; + for (gsize i = c_range->start; i <= c_range->end; i += c_range->step) { + if (*(c + i) > max_val) { + max_val = *(c + i); + *max_class_ofs = i; + } + } + return max_val; +} + +/* Compare c1 and c2 + * Utility function for sorting candidates based on the field identified + * by offset. + */ +static gint +gst_yolo_tensor_decoder_sort_candidates (gconstpointer p1, gconstpointer p2) +{ + const struct Candidate *c1 = p1; + const struct Candidate *c2 = p2; + + if (c1->max_confidence < c2->max_confidence) + return 1; + else if (c1->max_confidence > c2->max_confidence) + return -1; + else + return 0; +} + +static gboolean +gst_yolo_tensor_decoder_decode_valid_bb (GstYoloTensorDecoder * self, + gfloat x, gfloat y, gfloat w, gfloat h) +{ + GstYoloTensorDecoder *parent = GST_YOLO_TENSOR_DECODER (self); + + if (x > (GST_VIDEO_INFO_WIDTH (&parent->video_info))) + return FALSE; + if (y > (GST_VIDEO_INFO_HEIGHT (&parent->video_info))) + return FALSE; + if (x < -(gfloat) (GST_VIDEO_INFO_WIDTH (&parent->video_info) / 2.0)) + return FALSE; + if (y < -(gfloat) (GST_VIDEO_INFO_HEIGHT (&parent->video_info) / 2.0)) + return FALSE; + if (w <= 0) + return FALSE; + if (h <= 0) + return FALSE; + if (w > (GST_VIDEO_INFO_WIDTH (&parent->video_info))) + return FALSE; + if (h > (GST_VIDEO_INFO_HEIGHT (&parent->video_info))) + return FALSE; + + return TRUE; +} + +gboolean +gst_yolo_tensor_decoder_decode_f32 (GstYoloTensorDecoder * self, + GstAnalyticsRelationMeta * rmeta, const GstTensor * detections_tensor, + guint num_masks) +{ + GstMapInfo map_info_detections; + gfloat iou; + gboolean rv, keep; + gsize offset, x_offset, y_offset, w_offset, h_offset, offsets[4]; + BBox bb1,
bb2; + ConfidenceRange c_range; + gsize max_class_offset = 0, class_index; + GQuark class_quark = OOI_CLASS_ID; + gboolean ret = TRUE; + gsize i; + + /* Retrieve memory at index 0 and map it for reading */ + rv = gst_buffer_map (detections_tensor->data, &map_info_detections, + GST_MAP_READ); + if (rv == FALSE) { + GST_ELEMENT_ERROR (self, STREAM, FAILED, (NULL), + ("Could not map tensor buffer %" GST_PTR_FORMAT, + detections_tensor->data)); + return FALSE; + } + + GST_LOG_OBJECT (self, "Mask Tensor shape dims %zu", + detections_tensor->num_dims); + + /* Trace detections tensor dimensions */ + if (gst_debug_category_get_threshold (GST_CAT_DEFAULT) >= GST_LEVEL_TRACE) { + for (gsize i = 0; i < detections_tensor->num_dims; i++) { + GST_TRACE_OBJECT (self, "Detections Tensor dim %zu: %zu", i, + detections_tensor->dims[i]); + } + } + + /* Number of candidates can be large, reset the arrays */ + g_array_set_size (self->sel_candidates, 0); + g_ptr_array_set_size (self->selected, 0); + + /* detections_tensor->dims[2] contains the number of candidates. Let's call the + * number of candidates C. We store this value in offset as we use it to + * calculate the offset of candidate fields. The variable #data_detections above points + * at the detections tensor data, but candidate data is organized like a plane. + * Candidates' bbox X coord fields from 0 to C start at the beginning of the + * tensor data and are contiguous in memory, followed by all candidates' + * field Y, followed by field W, ... followed by field class confidence level, + * ..., followed by all candidates' mask[0], ..., followed by all candidates' + * mask[31]. Below we pre-calculate each field offset relative to the + * candidate pointer (pointer to field X), which will allow us to easily + * access each candidate's fields. 
+ * */ + offset = detections_tensor->dims[2]; + x_offset = 0; + y_offset = offset; + w_offset = 2 * offset; + h_offset = 3 * offset; + /* First index that contains confidence level */ + c_range.start = 4 * offset; + /* Last index that contains confidence level */ + c_range.end = (detections_tensor->dims[1] - num_masks - 1) * offset; + /* Step between class confidence level */ + c_range.step = offset; + offsets[0] = x_offset; + offsets[1] = y_offset; + offsets[2] = w_offset; + offsets[3] = h_offset; + +#define BB_X(candidate) candidate[x_offset] +#define BB_Y(candidate) candidate[y_offset] +#define BB_W(candidate) candidate[w_offset] +#define BB_H(candidate) candidate[h_offset] + + for (gsize c_idx = 0; c_idx < detections_tensor->dims[2]; c_idx++) { + float *candidate = (float *) map_info_detections.data; + + candidate += c_idx; + + /* Yolo has multiple classes, so the maximum confidence level across all classes is used + * to evaluate the relevance of the candidate. Here we filter candidates + * based on their class confidence level. */ + gfloat max_confidence = + gst_yolo_tensor_decoder_find_max_class_confidence (candidate, &c_range, + &max_class_offset); + if (max_confidence > self->cls_confi_thresh + && gst_yolo_tensor_decoder_decode_valid_bb (self, + BB_X (candidate), BB_Y (candidate), BB_W (candidate), + BB_H (candidate))) { + + struct Candidate c = { + candidate, + max_confidence, + max_class_offset, + }; + g_array_append_val (self->sel_candidates, c); + + GST_TRACE_OBJECT (self, "%zu: x,y=(%f;%f) w,h=(%f;%f), s=%f c=%f", + c_idx, candidate[x_offset], candidate[y_offset], + candidate[w_offset], candidate[h_offset], + candidate[w_offset] * candidate[h_offset], max_confidence); + } + + /* Pointer arithmetic, going to the next candidate. 
This is the candidate + * pointer that is now incremented to the next candidate which is also + * the field X of the next candidate. */ + candidate += 1; + } + + GST_LOG_OBJECT (self, "Before NMS selected candidates count: %u", + self->sel_candidates->len); + + /* We sort the remaining candidates because, in the next selection phase we + * have a maximum and we want to make sure that we consider only the candidates + * with the highest class confidence level before potentially reaching the + * maximum. */ + g_array_sort (self->sel_candidates, gst_yolo_tensor_decoder_sort_candidates); + + if (gst_debug_category_get_threshold (GST_CAT_DEFAULT) >= GST_LEVEL_TRACE) { + for (i = 0; i < self->sel_candidates->len; i++) { + struct Candidate *c = &g_array_index (self->sel_candidates, + struct Candidate, i); + GST_TRACE_OBJECT (self, + "Sorted: %zu: x,y=(%f;%f) w,h=(%f;%f), s=%f c=%f", + i, + c->candidate[x_offset], + c->candidate[y_offset], + c->candidate[w_offset], + c->candidate[h_offset], + c->candidate[w_offset] * c->candidate[h_offset], c->max_confidence); + } + } + + /* Algorithm in part inspired by OpenCV NMSBoxes */ + for (i = 0; i < self->sel_candidates->len; i++) { + const struct Candidate *c = &g_array_index (self->sel_candidates, + struct Candidate, i); + keep = TRUE; + + /* We only want to do NMS using IoU between candidates we've decided to + * keep and the new one we're considering to keep. 
The selected array contains + * the candidates we decided to keep and candidates[c] is the candidate + * we're considering to keep or reject */ + for (gsize s = 0; s < self->selected->len && keep; s++) { + const float *candidate2 = g_ptr_array_index (self->selected, s); + iou = gst_yolo_tensor_decoder_iou (c->candidate, candidate2, + offsets, &bb1, &bb2); + keep = (iou <= self->iou_thresh); + } + + if (keep) { + if (self->selected->len == 0) { + /* The first bounding-box always gets in as there are no other bboxes + * to filter on based on IoU */ + gst_yolo_tensor_decoder_convert_bbox (c->candidate, offsets, &bb1); + } + + g_ptr_array_add (self->selected, (gpointer) c->candidate); + + if (self->labels) { + class_index = (c->max_class_offset - c_range.start) / c_range.step; + + if (class_index < self->labels->len) + class_quark = g_array_index (self->labels, GQuark, class_index); + } + + const gfloat *candidate_masks = NULL; + if (num_masks) { + /* Detection weights are stored in the last `num_masks` + * rows of detections_tensor, so mask offset + * will start at the end of the detections_tensor minus + * `num_masks` + */ + candidate_masks = c->candidate + + ((detections_tensor->dims[1] - num_masks) * offset); + + if (candidate_masks + num_masks + offset > + (gfloat *) (map_info_detections.data + map_info_detections.size)) { + GST_ELEMENT_ERROR (self, STREAM, FAILED, (NULL), + ("Mask tensor data size %zu is smaller than expected (%zu)", + (candidate_masks - (gfloat *) map_info_detections.data) + + offset + num_masks, map_info_detections.size)); + ret = FALSE; + break; + } + } + + GST_YOLO_TENSOR_DECODER_GET_CLASS (self)->object_found (self, rmeta, &bb1, + c->max_confidence, class_quark, candidate_masks, offset, + self->selected->len); + + /* If the maximum number of selected candidates is reached, exit the + * selection process. 
*/ + if (self->selected->len >= self->max_detection) { + break; + } + } + } + + GST_LOG_OBJECT (self, "After NMS selected count: %u", self->selected->len); + + /* We unmap the memory */ + gst_buffer_unmap (detections_tensor->data, &map_info_detections); + + return ret; +} + +static void +gst_yolo_tensor_decoder_object_found (GstYoloTensorDecoder * self, + GstAnalyticsRelationMeta * rmeta, BBox * bb, gfloat confidence, + GQuark class_quark, const gfloat * candidate_masks, gsize offset, + guint count) +{ + gst_analytics_relation_meta_add_od_mtd (rmeta, class_quark, bb->x, bb->y, + bb->w, bb->h, confidence, NULL); +}
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/tensordecoders/gstyolotensordecoder.h
Added
@@ -0,0 +1,103 @@ +/* + * GStreamer gstreamer-yolotensordecoder + * Copyright (C) 2024 Collabora Ltd + * Authors: Daniel Morin <daniel.morin@collabora.com> + * Vineet Suryan <vineet.suryan@collabora.com> + * Santosh Mahto <santosh.mahto@collabora.com> + * + * gstyolotensordecoder.h + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. 
+ */ + + +#ifndef __GST_YOLO_TENSOR_DECODER_H__ +#define __GST_YOLO_TENSOR_DECODER_H__ + +#include <gst/gst.h> +#include <gst/video/video.h> +#include <gst/base/base.h> +#include <gst/analytics/analytics.h> + +GType gst_yolo_tensor_decoder_get_type (void); + +#define GST_TYPE_YOLO_TENSOR_DECODER (gst_yolo_tensor_decoder_get_type ()) +#define GST_YOLO_TENSOR_DECODER(obj) \ + (G_TYPE_CHECK_INSTANCE_CAST ((obj), GST_TYPE_YOLO_TENSOR_DECODER, GstYoloTensorDecoder)) +#define GST_YOLO_TENSOR_DECODER_CLASS(klass) \ + (G_TYPE_CHECK_CLASS_CAST ((klass), GST_TYPE_YOLO_TENSOR_DECODER, GstYoloTensorDecoderClass)) +#define GST_IS_YOLO_TENSOR_DECODER(obj) \ + (G_TYPE_CHECK_INSTANCE_TYPE ((obj), GST_TYPE_YOLO_TENSOR_DECODER)) +#define GST_IS_YOLO_TENSOR_DECODER_CLASS(klass) \ + (G_TYPE_CHECK_CLASS_TYPE ((klass), GST_TYPE_YOLO_TENSOR_DECODER)) +#define GST_YOLO_TENSOR_DECODER_GET_CLASS(obj) \ + (G_TYPE_INSTANCE_GET_CLASS ((obj), GST_TYPE_YOLO_TENSOR_DECODER, GstYoloTensorDecoderClass)) + +typedef struct _GstYoloTensorDecoder GstYoloTensorDecoder; +typedef struct _GstYoloTensorDecoderClass GstYoloTensorDecoderClass; + +typedef struct _BBox +{ + gint x; + gint y; + guint w; + guint h; +} BBox; + +struct _GstYoloTensorDecoder +{ + GstBaseTransform basetransform; + /* Box confidence threshold */ + gfloat box_confi_thresh; + /* Class confidence threshold */ + gfloat cls_confi_thresh; + /* Intersection-over-Union threshold */ + gfloat iou_thresh; + /* Maximum detections/masks */ + gsize max_detection; + /* Candidates with a class confidence level above the threshold. */ + GArray *sel_candidates; + /* Final candidates selected that respect class confidence level, + * NMS and maximum detection. 
*/ + GPtrArray *selected; + /* Video Info */ + GstVideoInfo video_info; + /* Labels file */ + gchar *label_file; + /* Labels */ + GArray *labels; +}; + +struct _GstYoloTensorDecoderClass +{ + GstBaseTransformClass parent_class; + + void (*object_found) (GstYoloTensorDecoder *self, + GstAnalyticsRelationMeta *rmeta, BBox *bb, gfloat confidence, + GQuark class_quark, const gfloat *candidate_masks, gsize offset, + guint count); +}; + +G_DEFINE_AUTOPTR_CLEANUP_FUNC (GstYoloTensorDecoder, g_object_unref) + +GST_ELEMENT_REGISTER_DECLARE (yolo_tensor_decoder) + +gboolean +gst_yolo_tensor_decoder_decode_f32 (GstYoloTensorDecoder * self, + GstAnalyticsRelationMeta * rmeta, const GstTensor * detections_tensor, + guint num_masks); + +#endif /* __GST_YOLO_TENSOR_DECODER_H__ */
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/tensordecoders/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/tensordecoders/meson.build
Changed
@@ -1,10 +1,22 @@ tensordecoders_sources = [ 'gsttensordecoders.c', - 'gstssdobjectdetector.c' + 'gstssdtensordec.c', + 'gstclassifiertensordecoder.c', + 'gstfacedetectortensordecoder.c', + 'gstioutracker.c', + 'gstyolotensordecoder.c', + 'gstyolosegtensordecoder.c', + 'gsttensordecodebin.c', ] tensordecoders_headers = [ - 'gstssdobjectdetector.h', + 'gstssdtensordec.h', + 'gstclassifiertensordecoder.h', + 'gstfacedetectortensordecoder.h', + 'gstioutracker.h', + 'gstyolotensordecoder.h', + 'gstyolosegtensordecoder.h', + 'gsttensordecodebin.h' ] doc_sources = [
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/unixfd/gstunixfdallocator.c
Added
@@ -0,0 +1,150 @@ +/* GStreamer unix file-descriptor source/sink + * + * Copyright (C) 2025 Netflix Inc. + * Author: Xavier Claessens <xclaessens@netflix.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#include "gstunixfdallocator.h" + +struct _GstUnixFdAllocator +{ + GstShmAllocator parent; + + GMutex lock; + GList *pool; + gboolean flush; +}; + +G_DEFINE_TYPE (GstUnixFdAllocator, gst_unix_fd_allocator, + GST_TYPE_SHM_ALLOCATOR); + +static gboolean +gst_unix_fd_allocator_mem_dispose (GstMiniObject * obj) +{ + GstMemory *mem = GST_MEMORY_CAST (obj); + GstUnixFdAllocator *self = GST_UNIX_FD_ALLOCATOR (mem->allocator); + + g_mutex_lock (&self->lock); + + if (self->flush) { + g_mutex_unlock (&self->lock); + return TRUE; + } + + gsize offset, maxsize; + gst_memory_get_sizes (mem, &offset, &maxsize); + gst_memory_resize (mem, -offset, maxsize); + + self->pool = g_list_prepend (self->pool, gst_memory_ref (mem)); + + g_mutex_unlock (&self->lock); + + return FALSE; +} + +static GstMemory * +gst_unix_fd_allocator_alloc (GstAllocator * allocator, gsize size, + GstAllocationParams * params) +{ + GstUnixFdAllocator *self = GST_UNIX_FD_ALLOCATOR (allocator); + gsize smallest_size = G_MAXSIZE; + GList *smallest_link = NULL; + + /* Check if we have a memory big enough in 
our pool. */ + g_mutex_lock (&self->lock); + for (GList * l = self->pool; l != NULL; l = l->next) { + GstMemory *mem = l->data; + gsize maxsize; + + gst_memory_get_sizes (mem, NULL, &maxsize); + if (maxsize >= size) { + self->pool = g_list_delete_link (self->pool, l); + g_mutex_unlock (&self->lock); + return mem; + } + if (maxsize < smallest_size) { + smallest_size = maxsize; + smallest_link = l; + } + } + /* All our memories are too small. Delete the smallest one to converge to a + * size that will avoid re-allocations in the future. */ + if (smallest_link != NULL) { + GstMemory *mem = smallest_link->data; + self->pool = g_list_delete_link (self->pool, smallest_link); + GST_MINI_OBJECT_CAST (mem)->dispose = NULL; + gst_memory_unref (mem); + } + g_mutex_unlock (&self->lock); + + /* Allocate a new memory */ + GstMemory *mem = + GST_ALLOCATOR_CLASS (gst_unix_fd_allocator_parent_class)->alloc + (allocator, size, params); + if (mem != NULL) + GST_MINI_OBJECT_CAST (mem)->dispose = gst_unix_fd_allocator_mem_dispose; + + return mem; +} + +static void +gst_unix_fd_allocator_finalize (GObject * object) +{ + GstUnixFdAllocator *self = GST_UNIX_FD_ALLOCATOR (object); + + g_mutex_clear (&self->lock); + + G_OBJECT_CLASS (gst_unix_fd_allocator_parent_class)->finalize (object); +} + +static void +gst_unix_fd_allocator_class_init (GstUnixFdAllocatorClass * klass) +{ + GstAllocatorClass *alloc_class = (GstAllocatorClass *) klass; + GObjectClass *object_class = (GObjectClass *) klass; + + object_class->finalize = gst_unix_fd_allocator_finalize; + + alloc_class->alloc = GST_DEBUG_FUNCPTR (gst_unix_fd_allocator_alloc); +} + +static void +gst_unix_fd_allocator_init (GstUnixFdAllocator * self) +{ + g_mutex_init (&self->lock); +} + +GstUnixFdAllocator * +gst_unix_fd_allocator_new (void) +{ + return g_object_new (GST_TYPE_UNIX_FD_ALLOCATOR, NULL); +} + +void +gst_unix_fd_allocator_flush (GstUnixFdAllocator * self) +{ + g_return_if_fail (GST_IS_UNIX_FD_ALLOCATOR (self)); + + g_mutex_lock 
(&self->lock); + GList *pool = self->pool; + self->pool = NULL; + self->flush = TRUE; + g_mutex_unlock (&self->lock); + + g_list_free_full (pool, (GDestroyNotify) gst_memory_unref); +}
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/unixfd/gstunixfdallocator.h
Added
@@ -0,0 +1,29 @@ +/* GStreamer unix file-descriptor source/sink + * + * Copyright (C) 2025 Netflix Inc. + * Author: Xavier Claessens <xclaessens@netflix.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#include <gst/allocators/allocators.h> + +#define GST_TYPE_UNIX_FD_ALLOCATOR gst_unix_fd_allocator_get_type() +G_DECLARE_FINAL_TYPE (GstUnixFdAllocator, gst_unix_fd_allocator, + GST, UNIX_FD_ALLOCATOR, GstShmAllocator); + +GstUnixFdAllocator *gst_unix_fd_allocator_new (void); +void gst_unix_fd_allocator_flush (GstUnixFdAllocator * self);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/unixfd/gstunixfdsink.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/unixfd/gstunixfdsink.c
Changed
@@ -35,8 +35,8 @@ * * ## Example launch lines * |[ - * gst-launch-1.0 -v videotestsrc ! unixfdsink socket-path=/tmp/blah - * gst-launch-1.0 -v unixfdsrc socket-path=/tmp/blah ! autovideosink + * gst-launch-1.0 -v videotestsrc ! video/x-raw,format=RGBx,width=1920,height=1080 ! timeoverlay ! unixfdsink socket-path=/tmp/blah + * gst-launch-1.0 -v unixfdsrc socket-path=/tmp/blah ! videoconvert ! autovideosink + * ]| * * Since: 1.24 @@ -44,9 +44,12 @@ #include "gstunixfd.h" +#include "gstunixfdallocator.h" + #include <gst/base/base.h> #include <gst/allocators/allocators.h> +#include <stdint.h> #include <glib/gstdio.h> #include <gio/gio.h> #include <gio/gunixsocketaddress.h> @@ -91,6 +94,9 @@ gboolean wait_for_connection; GCond wait_for_connection_cond; gboolean unlock; + + GstUnixFdAllocator *allocator; + gint64 min_memory_size; }; G_DEFINE_TYPE (GstUnixFdSink, gst_unix_fd_sink, GST_TYPE_BASE_SINK); @@ -99,6 +105,7 @@ #define DEFAULT_SOCKET_TYPE G_UNIX_SOCKET_ADDRESS_PATH #define DEFAULT_WAIT_FOR_CONNECTION FALSE +#define DEFAULT_MIN_MEMORY_SIZE 0 enum { @@ -106,8 +113,12 @@ PROP_SOCKET_PATH, PROP_SOCKET_TYPE, PROP_WAIT_FOR_CONNECTION, + PROP_MIN_MEMORY_SIZE, + PROP_NUM_CLIENTS, + NUM_PROPERTIES }; +static GParamSpec *properties[NUM_PROPERTIES]; static void client_free (Client * client) @@ -118,6 +129,66 @@ g_free (client); } +static GstMemory * +copy_to_shm (GstUnixFdSink * self, GstMemory * mem) +{ + GST_OBJECT_LOCK (self); + + if (self->min_memory_size < 0) { + GST_ERROR_OBJECT (self, + "Buffer has non-FD memories and copying is disabled. 
Set min-memory-size to a value >= 0 to allow copying."); + GST_OBJECT_UNLOCK (self); + return NULL; + } + + if (self->allocator == NULL) + self->allocator = gst_unix_fd_allocator_new (); + + gsize size = gst_memory_get_sizes (mem, NULL, NULL); + gsize alloc_size = MAX (size, self->min_memory_size); + GstMemory *fd_mem = + gst_allocator_alloc (GST_ALLOCATOR_CAST (self->allocator), alloc_size, + NULL); + + GST_OBJECT_UNLOCK (self); + + if (fd_mem == NULL) { + GST_ERROR_OBJECT (self, "Shared memory allocation failed."); + return NULL; + } + + gst_memory_resize (fd_mem, 0, size); + + GstMapInfo src_map, dst_map; + + if (!gst_memory_map (mem, &src_map, GST_MAP_READ)) { + GST_ERROR_OBJECT (self, "Mapping of source memory failed."); + gst_memory_unref (fd_mem); + return NULL; + } + + if (!gst_memory_map (fd_mem, &dst_map, GST_MAP_WRITE)) { + GST_ERROR_OBJECT (self, "Mapping of shared memory failed."); + gst_memory_unmap (mem, &src_map); + gst_memory_unref (fd_mem); + return NULL; + } + + memcpy (dst_map.data, src_map.data, src_map.size); + + gst_memory_unmap (mem, &src_map); + gst_memory_unmap (fd_mem, &dst_map); + + return fd_mem; +} + +static void +allocator_unref (GstUnixFdAllocator * allocator) +{ + gst_unix_fd_allocator_flush (allocator); + g_object_unref (allocator); +} + static void gst_unix_fd_sink_init (GstUnixFdSink * self) { @@ -175,6 +246,10 @@ self->wait_for_connection = g_value_get_boolean (value); g_cond_signal (&self->wait_for_connection_cond); break; + case PROP_MIN_MEMORY_SIZE: + self->min_memory_size = g_value_get_int64 (value); + g_clear_pointer (&self->allocator, allocator_unref); + break; default: G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); break; @@ -201,6 +276,12 @@ case PROP_WAIT_FOR_CONNECTION: g_value_set_boolean (value, self->wait_for_connection); break; + case PROP_MIN_MEMORY_SIZE: + g_value_set_int64 (value, self->min_memory_size); + break; + case PROP_NUM_CLIENTS: + g_value_set_uint (value, g_hash_table_size (self->clients)); 
+ break; default: G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); break; @@ -252,7 +333,9 @@ } /* id is actually the GstBuffer pointer casted to guint64. * We can now drop its reference kept for this client. */ - if (!g_hash_table_remove (client->buffers, (gpointer) release_buffer->id)) { + if (release_buffer->id > UINTPTR_MAX + || !g_hash_table_remove (client->buffers, + (gpointer) (guintptr) release_buffer->id)) { GST_ERROR_OBJECT (self, "Received wrong id %" G_GUINT64_FORMAT " in release-buffer command from client %p", release_buffer->id, @@ -277,6 +360,7 @@ g_clear_error (&error); g_free (payload); GST_OBJECT_UNLOCK (self); + g_object_notify_by_pspec (G_OBJECT (self), properties[PROP_NUM_CLIENTS]); return G_SOURCE_REMOVE; } @@ -333,6 +417,8 @@ GST_OBJECT_UNLOCK (self); + g_object_notify_by_pspec (G_OBJECT (self), properties[PROP_NUM_CLIENTS]); + return G_SOURCE_CONTINUE; } @@ -418,7 +504,7 @@ return TRUE; } -static void +static gboolean send_command_to_all (GstUnixFdSink * self, CommandType type, GUnixFDList * fds, const guint8 * payload, gsize payload_size, GstBuffer * buffer) { @@ -426,6 +512,7 @@ GSocket *socket; Client *client; GError *error = NULL; + gboolean client_removed = FALSE; g_hash_table_iter_init (&iter, self->clients); while (g_hash_table_iter_next (&iter, (gpointer) & socket, @@ -436,12 +523,15 @@ type, client, error->message); g_clear_error (&error); g_hash_table_iter_remove (&iter); + client_removed = TRUE; continue; } /* Keep a ref on this buffer until all clients released it. */ if (buffer != NULL) g_hash_table_add (client->buffers, gst_buffer_ref (buffer)); } + + return client_removed; } static GstClockTime @@ -510,7 +600,7 @@ NewBufferPayload *new_buffer = (NewBufferPayload *) self->payload->data; /* Cast buffer pointer to guint64 identifier. Client will send us back that * id so we know which buffer to unref. 
*/ - new_buffer->id = (guint64) buffer; + new_buffer->id = (guint64) (guintptr) buffer; new_buffer->pts = to_monotonic (GST_BUFFER_PTS (buffer), &GST_BASE_SINK_CAST (self)->segment, base_time, latency, clock_diff); @@ -534,14 +624,29 @@ return GST_FLOW_ERROR; } - gboolean dmabuf_count = 0; + /* dst_buffer is used to hold reference on new GstMemory we'll create, if any. + * ref_original_buffer is set to TRUE if dst_buffer also needs to hold + * reference on the original buffer. */ + GstBuffer *dst_buffer = NULL; + gboolean ref_original_buffer = FALSE; + + gint dmabuf_count = 0; GUnixFDList *fds = g_unix_fd_list_new (); + for (int i = 0; i < n_memory; i++) { GstMemory *mem = gst_buffer_peek_memory (buffer, i); + if (!gst_is_fd_memory (mem)) { - GST_ERROR_OBJECT (self, "Expecting buffers with FD memories"); - ret = GST_FLOW_ERROR; - goto out; + if (dst_buffer == NULL) + dst_buffer = gst_buffer_new (); + mem = copy_to_shm (self, mem); + if (mem == NULL) { + ret = GST_FLOW_ERROR; + goto out; + } + gst_buffer_append_memory (dst_buffer, mem); + } else { + ref_original_buffer = TRUE; } if (gst_is_dmabuf_memory (mem)) @@ -567,6 +672,13 @@ if (dmabuf_count > 0) new_buffer->type = MEMORY_TYPE_DMABUF; + if (dst_buffer != NULL) { + new_buffer->id = (guint64) (guintptr) dst_buffer; + if (ref_original_buffer) + gst_buffer_add_parent_buffer_meta (dst_buffer, buffer); + buffer = dst_buffer; + } + GST_OBJECT_LOCK (self); while (self->wait_for_connection && g_hash_table_size (self->clients) == 0) { @@ -580,12 +692,18 @@ } } - send_command_to_all (self, COMMAND_TYPE_NEW_BUFFER, fds, + gboolean client_removed = + send_command_to_all (self, COMMAND_TYPE_NEW_BUFFER, fds, self->payload->data, self->payload->len, buffer); GST_OBJECT_UNLOCK (self); + if (client_removed) { + g_object_notify_by_pspec (G_OBJECT (self), properties[PROP_NUM_CLIENTS]); + } + out: + gst_clear_buffer (&dst_buffer); g_clear_object (&fds); g_clear_error (&error); return ret; @@ -631,16 +749,30 @@ self->caps); gsize 
payload_size; guint8 *payload = caps_to_payload (self->caps, &payload_size); - send_command_to_all (self, COMMAND_TYPE_CAPS, NULL, payload, payload_size, + gboolean client_removed = + send_command_to_all (self, COMMAND_TYPE_CAPS, NULL, payload, + payload_size, NULL); g_free (payload); + /* New caps could mean new buffer size, or even no copies needed anymore. + * We'll create a new pool if still needed. */ + g_clear_pointer (&self->allocator, allocator_unref); GST_OBJECT_UNLOCK (self); + if (client_removed) { + g_object_notify_by_pspec (G_OBJECT (self), + properties[PROP_NUM_CLIENTS]); + } break; } case GST_EVENT_EOS:{ GST_OBJECT_LOCK (self); - send_command_to_all (self, COMMAND_TYPE_EOS, NULL, NULL, 0, NULL); + gboolean client_removed = + send_command_to_all (self, COMMAND_TYPE_EOS, NULL, NULL, 0, NULL); GST_OBJECT_UNLOCK (self); + if (client_removed) { + g_object_notify_by_pspec (G_OBJECT (self), + properties[PROP_NUM_CLIENTS]); + } break; } default: @@ -666,17 +798,33 @@ { GstUnixFdSink *self = (GstUnixFdSink *) element; - self->uses_monotonic_clock = FALSE; - if (clock != NULL && G_OBJECT_TYPE (clock) == GST_TYPE_SYSTEM_CLOCK) { - GstClockType clock_type; - g_object_get (clock, "clock-type", &clock_type, NULL); - self->uses_monotonic_clock = clock_type == GST_CLOCK_TYPE_MONOTONIC; - } + self->uses_monotonic_clock = clock != NULL + && gst_clock_is_system_monotonic (clock); return GST_ELEMENT_CLASS (gst_unix_fd_sink_parent_class)->set_clock (element, clock); } +static GstStateChangeReturn +gst_unix_fd_sink_change_state (GstElement * element, GstStateChange transition) +{ + GstUnixFdSink *self = (GstUnixFdSink *) element; + + GstStateChangeReturn ret = + GST_ELEMENT_CLASS (gst_unix_fd_sink_parent_class)->change_state (element, + transition); + + switch (transition) { + case GST_STATE_CHANGE_PAUSED_TO_READY: + g_clear_pointer (&self->allocator, allocator_unref); + break; + default: + break; + } + + return ret; +} + static void gst_unix_fd_sink_class_init 
(GstUnixFdSinkClass * klass) { @@ -698,6 +846,8 @@ gobject_class->get_property = gst_unix_fd_sink_get_property; gstelement_class->set_clock = GST_DEBUG_FUNCPTR (gst_unix_fd_sink_set_clock); + gstelement_class->change_state = + GST_DEBUG_FUNCPTR (gst_unix_fd_sink_change_state); gstbasesink_class->start = GST_DEBUG_FUNCPTR (gst_unix_fd_sink_start); gstbasesink_class->stop = GST_DEBUG_FUNCPTR (gst_unix_fd_sink_stop); @@ -709,21 +859,20 @@ gstbasesink_class->unlock_stop = GST_DEBUG_FUNCPTR (gst_unix_fd_sink_unlock_stop); - g_object_class_install_property (gobject_class, PROP_SOCKET_PATH, + properties[PROP_SOCKET_PATH] = g_param_spec_string ("socket-path", - "Path to the control socket", - "The path to the control socket used to control the shared memory " - "transport. This may be modified during the NULL->READY transition", - NULL, - G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS | - GST_PARAM_MUTABLE_READY)); - - g_object_class_install_property (gobject_class, PROP_SOCKET_TYPE, + "Path to the control socket", + "The path to the control socket used to control the shared memory " + "transport. 
This may be modified during the NULL->READY transition", + NULL, + G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS | GST_PARAM_MUTABLE_READY); + + properties[PROP_SOCKET_TYPE] = g_param_spec_enum ("socket-type", "Socket type", - "The type of underlying socket", - G_TYPE_UNIX_SOCKET_ADDRESS_TYPE, DEFAULT_SOCKET_TYPE, - G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS | G_PARAM_CONSTRUCT | - GST_PARAM_MUTABLE_READY)); + "The type of underlying socket", + G_TYPE_UNIX_SOCKET_ADDRESS_TYPE, DEFAULT_SOCKET_TYPE, + G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS | G_PARAM_CONSTRUCT | + GST_PARAM_MUTABLE_READY); /** * GstUnixFdSink:wait-for-connection: @@ -732,10 +881,46 @@ * * Since: 1.26 */ - g_object_class_install_property (gobject_class, PROP_WAIT_FOR_CONNECTION, + properties[PROP_WAIT_FOR_CONNECTION] = g_param_spec_boolean ("wait-for-connection", - "Wait for a connection until rendering", - "Block the stream until a least one client is connected", - DEFAULT_WAIT_FOR_CONNECTION, - G_PARAM_READWRITE | G_PARAM_CONSTRUCT | G_PARAM_STATIC_STRINGS)); + "Wait for a connection until rendering", + "Block the stream until at least one client is connected", + DEFAULT_WAIT_FOR_CONNECTION, + G_PARAM_READWRITE | G_PARAM_CONSTRUCT | G_PARAM_STATIC_STRINGS); + + /** + * GstUnixFdSink:min-memory-size: + * + * Minimum size to allocate in the case a copy into shared memory is needed. + * Memories are kept in a pool and reused when possible. + * + * A value of 0 (the default) means only the needed size is allocated which + * reduces the possibility of reusing the memory in the case not all buffers + * need the same size. + * + * A negative value disables copying and the pipeline will stop with an error + * in the case a copy into shared memory is needed. 
+ * + * Since: 1.28 + */ + properties[PROP_MIN_MEMORY_SIZE] = + g_param_spec_int64 ("min-memory-size", "Minimum memory size", + "Minimum size to allocate in the case a copy into shared memory is needed.", + -1, G_MAXINT64, DEFAULT_MIN_MEMORY_SIZE, + G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS | G_PARAM_CONSTRUCT); + + /** + * GstUnixFdSink:num-clients: + * + * The number of clients that are currently connected to the sink. + * This property is read-only and reflects the current connection count. + * + * Since: 1.28 + */ + properties[PROP_NUM_CLIENTS] = + g_param_spec_uint ("num-clients", "Number of clients", + "The number of clients that are connected currently", + 0, G_MAXUINT, 0, G_PARAM_READABLE | G_PARAM_STATIC_STRINGS); + + g_object_class_install_properties (gobject_class, NUM_PROPERTIES, properties); }
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/unixfd/gstunixfdsrc.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/unixfd/gstunixfdsrc.c
Changed
@@ -28,8 +28,8 @@ * * ## Example launch lines * |[ - * gst-launch-1.0 -v videotestsrc ! unixfdsink socket-path=/tmp/blah - * gst-launch-1.0 -v unixfdsrc socket-path=/tmp/blah ! autovideosink + * gst-launch-1.0 -v videotestsrc ! video/x-raw,format=RGBx,width=1920,height=1080 ! timeoverlay ! unixfdsink socket-path=/tmp/blah + * gst-launch-1.0 -v unixfdsrc socket-path=/tmp/blah ! videoconvert ! autovideosink + * ]| * * Since: 1.24 @@ -405,11 +405,10 @@ ctx->id = new_buffer->id; ctx->n_memory = new_buffer->n_memory; for (int i = 0; i < new_buffer->n_memory; i++) { - GstMemory *mem = gst_fd_allocator_alloc (allocator, fds_arr[i], - new_buffer->memories[i].size + new_buffer->memories[i].offset, + GstMemory *mem = gst_fd_allocator_alloc_full (allocator, fds_arr[i], + new_buffer->memories[i].offset + new_buffer->memories[i].size, + new_buffer->memories[i].offset, new_buffer->memories[i].size, GST_FD_MEMORY_FLAG_KEEP_MAPPED); - gst_memory_resize (mem, new_buffer->memories[i].offset, - new_buffer->memories[i].size); GST_MINI_OBJECT_FLAG_SET (mem, GST_MEMORY_FLAG_READONLY); g_hash_table_insert (self->memories, mem, ctx); @@ -471,12 +470,8 @@ { GstUnixFdSrc *self = (GstUnixFdSrc *) element; - self->uses_monotonic_clock = FALSE; - if (clock != NULL && G_OBJECT_TYPE (clock) == GST_TYPE_SYSTEM_CLOCK) { - GstClockType clock_type; - g_object_get (clock, "clock-type", &clock_type, NULL); - self->uses_monotonic_clock = clock_type == GST_CLOCK_TYPE_MONOTONIC; - } + self->uses_monotonic_clock = clock != NULL + && gst_clock_is_system_monotonic (clock); return GST_ELEMENT_CLASS (gst_unix_fd_src_parent_class)->set_clock (element, clock);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/unixfd/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/unixfd/meson.build
Changed
@@ -1,5 +1,6 @@ unixfd_sources = 'gstunixfd.c', + 'gstunixfdallocator.c', 'gstunixfdsink.c', 'gstunixfdsrc.c',
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/videofilters/gstscenechangeorc-dist.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/videofilters/gstscenechangeorc-dist.c
Changed
@@ -67,6 +67,7 @@ orc_int32 x2[2]; float x2f[2]; orc_int16 x4[4]; + orc_int8 x8[8]; } orc_union64; #endif #ifndef ORC_RESTRICT @@ -74,6 +75,8 @@ #define ORC_RESTRICT restrict #elif defined(__GNUC__) && __GNUC__ >= 4 #define ORC_RESTRICT __restrict__ +#elif defined(_MSC_VER) +#define ORC_RESTRICT __restrict #else #define ORC_RESTRICT #endif @@ -101,6 +104,7 @@ /* begin Orc C target preamble */ +#include <math.h> #define ORC_CLAMP(x,a,b) ((x)<(a) ? (a) : ((x)>(b) ? (b) : (x))) #define ORC_ABS(a) ((a)<0 ? -(a) : (a)) #define ORC_MIN(a,b) ((a)<(b) ? (a) : (b)) @@ -136,6 +140,8 @@ #define ORC_RESTRICT restrict #elif defined(__GNUC__) && __GNUC__ >= 4 #define ORC_RESTRICT __restrict__ +#elif defined(_MSC_VER) +#define ORC_RESTRICT __restrict #else #define ORC_RESTRICT #endif @@ -219,42 +225,38 @@ int m) { OrcExecutor _ex, *ex = &_ex; - static volatile int p_inited = 0; - static OrcCode *c = 0; - void (*func) (OrcExecutor *); + static OrcOnce once = ORC_ONCE_INIT; + OrcCode *c; + OrcExecutorFunc func = NULL; - if (!p_inited) { - orc_once_mutex_lock (); - if (!p_inited) { - OrcProgram *p; + if (!orc_once_enter (&once, (void **) &c)) { + OrcProgram *p; #if 1 - static const orc_uint8 bc[] = { - 1, 7, 9, 14, 111, 114, 99, 95, 115, 97, 100, 95, 110, 120, 109, 95, - 117, 56, 12, 1, 1, 12, 1, 1, 13, 4, 182, 12, 4, 5, 2, 0, + static const orc_uint8 bc[] = { + 1, 7, 9, 14, 111, 114, 99, 95, 115, 97, 100, 95, 110, 120, 109, 95, + 117, 56, 12, 1, 1, 12, 1, 1, 13, 4, 182, 12, 4, 5, 2, 0, - }; - p = orc_program_new_from_static_bytecode (bc); - orc_program_set_backup_function (p, _backup_orc_sad_nxm_u8); + }; + p = orc_program_new_from_static_bytecode (bc); + orc_program_set_backup_function (p, _backup_orc_sad_nxm_u8); #else - p = orc_program_new (); - orc_program_set_2d (p); - orc_program_set_name (p, "orc_sad_nxm_u8"); - orc_program_set_backup_function (p, _backup_orc_sad_nxm_u8); - orc_program_add_source (p, 1, "s1"); - orc_program_add_source (p, 1, "s2"); - orc_program_add_accumulator (p, 4, 
"a1"); + p = orc_program_new (); + orc_program_set_2d (p); + orc_program_set_name (p, "orc_sad_nxm_u8"); + orc_program_set_backup_function (p, _backup_orc_sad_nxm_u8); + orc_program_add_source (p, 1, "s1"); + orc_program_add_source (p, 1, "s2"); + orc_program_add_accumulator (p, 4, "a1"); - orc_program_append_2 (p, "accsadubl", 0, ORC_VAR_A1, ORC_VAR_S1, - ORC_VAR_S2, ORC_VAR_D1); + orc_program_append_2 (p, "accsadubl", 0, ORC_VAR_A1, ORC_VAR_S1, ORC_VAR_S2, + ORC_VAR_D1); #endif - orc_program_compile (p); - c = orc_program_take_code (p); - orc_program_free (p); - } - p_inited = TRUE; - orc_once_mutex_unlock (); + orc_program_compile (p); + c = orc_program_take_code (p); + orc_program_free (p); + orc_once_leave (&once, c); } ex->arrays[ORC_VAR_A2] = c; ex->program = 0;
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/videofilters/gstscenechangeorc-dist.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/videofilters/gstscenechangeorc-dist.h
Changed
@@ -55,13 +55,15 @@ #endif typedef union { orc_int16 i; orc_int8 x2[2]; } orc_union16; typedef union { orc_int32 i; float f; orc_int16 x2[2]; orc_int8 x4[4]; } orc_union32; -typedef union { orc_int64 i; double f; orc_int32 x2[2]; float x2f[2]; orc_int16 x4[4]; } orc_union64; +typedef union { orc_int64 i; double f; orc_int32 x2[2]; float x2f[2]; orc_int16 x4[4]; orc_int8 x8[8]; } orc_union64; #endif #ifndef ORC_RESTRICT #if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L #define ORC_RESTRICT restrict #elif defined(__GNUC__) && __GNUC__ >= 4 #define ORC_RESTRICT __restrict__ +#elif defined(_MSC_VER) +#define ORC_RESTRICT __restrict #else #define ORC_RESTRICT #endif
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/videoparsers/gstav1parse.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/videoparsers/gstav1parse.c
Changed
@@ -145,6 +145,18 @@ GstClockTime buffer_pts; GstClockTime buffer_dts; GstClockTime buffer_duration; + + GstVideoMasteringDisplayInfo mastering_display_info; + guint mastering_display_info_state; + + GstVideoContentLightLevel content_light_level; + guint content_light_level_state; +}; + +enum +{ + GST_AV1_PARSE_OBU_EXPIRED = 0, + GST_AV1_PARSE_OBU_PARSED = 1, +}; + static GstStaticPadTemplate sinktemplate = GST_STATIC_PAD_TEMPLATE ("sink", @@ -342,6 +354,10 @@ gst_adapter_clear (self->cache_out); gst_adapter_clear (self->frame_cache); gst_av1_parse_reset_tu_timestamp (self); + gst_video_mastering_display_info_init (&self->mastering_display_info); + self->mastering_display_info_state = GST_AV1_PARSE_OBU_EXPIRED; + gst_video_content_light_level_init (&self->content_light_level); + self->content_light_level_state = GST_AV1_PARSE_OBU_EXPIRED; } static void @@ -745,6 +761,8 @@ const gchar *profile = NULL; const gchar *level = NULL; const gchar *tier = NULL; + const gchar *mdi_str = NULL; + const gchar *cll_str = NULL; if (G_UNLIKELY (!gst_pad_has_current_caps (GST_BASE_PARSE_SRC_PAD (self)))) self->update_caps = TRUE; @@ -863,6 +881,28 @@ } } + if (s) + mdi_str = gst_structure_get_string (s, "mastering-display-info"); + if (mdi_str) { + gst_caps_set_simple (final_caps, "mastering-display-info", G_TYPE_STRING, + mdi_str, NULL); + } else if (self->mastering_display_info_state != GST_AV1_PARSE_OBU_EXPIRED && + !gst_video_mastering_display_info_add_to_caps + (&self->mastering_display_info, final_caps)) { + GST_WARNING_OBJECT (self, "Couldn't set mastering display info to caps"); + } + + if (s) + cll_str = gst_structure_get_string (s, "content-light-level"); + if (cll_str) { + gst_caps_set_simple (final_caps, "content-light-level", G_TYPE_STRING, + cll_str, NULL); + } else if (self->content_light_level_state != GST_AV1_PARSE_OBU_EXPIRED && + !gst_video_content_light_level_add_to_caps + (&self->content_light_level, final_caps)) { + GST_WARNING_OBJECT (self, "Couldn't set content light level to caps"); + } + src_caps = gst_pad_get_current_caps (GST_BASE_PARSE_SRC_PAD (self)); if (!(src_caps && gst_caps_is_strictly_equal (src_caps, final_caps))) { @@ -1529,6 +1569,27 @@ return ret; } +static guint64 +fixed_scale (guint in, guint fracbits, guint scale, guint max_bits) +{ + guint fracmax = 1 << fracbits; + guint fracmask = fracmax - 1; + guint whole; + guint64 out; + + out = in & fracmask; + out = gst_util_uint64_scale_int (scale, out, fracmask); + whole = in >> fracbits; + out += scale * whole; + + if (max_bits == 16) + out = MIN (out, G_MAXUINT16); + else if (max_bits == 32) + out = MIN (out, G_MAXUINT32); + + return out; +} + /* frame_complete will be set true if it is the frame edge. */ static GstAV1ParserResult gst_av1_parse_handle_one_obu (GstAV1Parse * self, GstAV1OBU * obu, @@ -1687,6 +1748,67 @@ } } + if (obu->obu_type == GST_AV1_OBU_METADATA) { + switch (metadata.metadata_type) { + case GST_AV1_METADATA_TYPE_HDR_CLL: + { + GstVideoContentLightLevel new_cll; + + new_cll.max_content_light_level = metadata.hdr_cll.max_cll; + new_cll.max_frame_average_light_level = metadata.hdr_cll.max_fall; + + if (self->content_light_level_state == GST_AV1_PARSE_OBU_EXPIRED) { + self->update_caps = TRUE; + } else if (new_cll.max_content_light_level != + self->content_light_level.max_content_light_level || + new_cll.max_frame_average_light_level != + self->content_light_level.max_frame_average_light_level) { + self->update_caps = TRUE; + } + + self->content_light_level = new_cll; + self->content_light_level_state = GST_AV1_PARSE_OBU_PARSED; + break; + } + case GST_AV1_METADATA_TYPE_HDR_MDCV: + { + GstVideoMasteringDisplayInfo new_minfo; + GstAV1MetadataHdrMdcv *mdcv = &metadata.hdr_mdcv; + gint i; + + for (i = 0; i < 3; i++) { + new_minfo.display_primaries[i].x = + fixed_scale (mdcv->primary_chromaticity_x[i], 16, 50000, 16); + + new_minfo.display_primaries[i].y = + fixed_scale (mdcv->primary_chromaticity_y[i], 16, 50000, 16); + } + + new_minfo.white_point.x = 
+ fixed_scale (mdcv->white_point_chromaticity_x, 16, 50000, 16); + new_minfo.white_point.y = + fixed_scale (mdcv->white_point_chromaticity_y, 16, 50000, 16); + new_minfo.max_display_mastering_luminance = + fixed_scale (mdcv->luminance_max, 8, 10000, 32); + new_minfo.min_display_mastering_luminance = + fixed_scale (mdcv->luminance_min, 14, 10000, 32); + + if (self->mastering_display_info_state == GST_AV1_PARSE_OBU_EXPIRED) { + self->update_caps = TRUE; + } else if (!gst_video_mastering_display_info_is_equal + (&self->mastering_display_info, &new_minfo)) { + self->update_caps = TRUE; + } + + self->mastering_display_info = new_minfo; + self->mastering_display_info_state = GST_AV1_PARSE_OBU_PARSED; + break; + } + default: + break; + } + } + out: if (res != GST_AV1_PARSER_OK) { /* Some verbose OBU can be skip */ @@ -1823,7 +1945,6 @@ GstBuffer *buffer = gst_buffer_ref (frame->buffer); guint32 offset, consumed_before_push, consumed; gboolean frame_complete; - GstBaseParseFrame subframe; if (!gst_buffer_map (buffer, &map_info, GST_MAP_READ)) { GST_ERROR_OBJECT (parse, "Couldn't map incoming buffer"); @@ -1856,9 +1977,11 @@ if ((self->align == GST_AV1_PARSE_ALIGN_OBU) || (self->align == GST_AV1_PARSE_ALIGN_FRAME && frame_complete)) { + GstBaseParseFrame subframe; gst_av1_parse_create_subframe (frame, &subframe, buffer); ret = gst_av1_parse_push_data (self, &subframe, consumed_before_push, frame_complete); + gst_base_parse_frame_free (&subframe); if (ret != GST_FLOW_OK) goto out;
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/videoparsers/gsth264parse.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/videoparsers/gsth264parse.c
Changed
@@ -265,6 +265,7 @@ h264parse->parsed_colorimetry.matrix = GST_VIDEO_COLOR_MATRIX_UNKNOWN; h264parse->parsed_colorimetry.transfer = GST_VIDEO_TRANSFER_UNKNOWN; h264parse->parsed_colorimetry.primaries = GST_VIDEO_COLOR_PRIMARIES_UNKNOWN; + h264parse->lcevc = FALSE; h264parse->have_pps = FALSE; h264parse->have_sps = FALSE; @@ -1258,17 +1259,13 @@ GstH264NalUnit nalu; const guint nl = h264parse->nal_length_size; GstMapInfo map; - gint left; + gsize parsed, left; if (nl < 1 || nl > 4) { GST_DEBUG_OBJECT (h264parse, "insufficient data to split input"); return GST_FLOW_NOT_NEGOTIATED; } - /* need to save buffer from invalidation upon _finish_frame */ - if (h264parse->split_packetized) - buffer = gst_buffer_copy (frame->buffer); - gst_buffer_map (buffer, &map, GST_MAP_READ); left = map.size; @@ -1317,29 +1314,66 @@ * a replacement output buffer is provided anyway. */ gst_h264_parse_parse_frame (parse, &tmp_frame); ret = gst_base_parse_finish_frame (parse, &tmp_frame, nl + nalu.size); - left -= nl + nalu.size; + gst_base_parse_frame_free (&tmp_frame); + + /* Bail out if we get a flow error. */ + if (ret != GST_FLOW_OK) { + gst_buffer_unmap (buffer, &map); + return ret; + } } + left -= nl + nalu.size; parse_res = gst_h264_parser_identify_nalu_avc (h264parse->nalparser, map.data, nalu.offset + nalu.size, map.size, nl, &nalu); } + parsed = map.size - left; gst_buffer_unmap (buffer, &map); if (!h264parse->split_packetized) { - h264parse->marker = TRUE; - gst_h264_parse_parse_frame (parse, frame); - ret = gst_base_parse_finish_frame (parse, frame, map.size); - } else { - gst_buffer_unref (buffer); - if (G_UNLIKELY (left)) { - /* should not be happening for nice AVC */ - GST_WARNING_OBJECT (parse, "skipping leftover AVC data %d", left); - frame->flags |= GST_BASE_PARSE_FRAME_FLAG_DROP; - ret = gst_base_parse_finish_frame (parse, frame, map.size); + /* Nothing to do if no NAL unit was parsed, the whole AU will be dropped + * below. 
*/ + if (parsed > 0) { + if (G_UNLIKELY (left)) { + /* Only part of the AU could be parsed, split out that part, the rest + * will be dropped below. Should not be happening for nice AVC. */ + GST_WARNING_OBJECT (parse, "Problem parsing part of AU, keep part that " + "has been correctly parsed (%" G_GSIZE_FORMAT " bytes).", parsed); + GstBaseParseFrame tmp_frame; + + gst_base_parse_frame_init (&tmp_frame); + tmp_frame.flags |= frame->flags; + tmp_frame.offset = frame->offset; + tmp_frame.overhead = frame->overhead; + tmp_frame.buffer = gst_buffer_copy_region (buffer, GST_BUFFER_COPY_ALL, + 0, parsed); + + h264parse->marker = TRUE; + gst_h264_parse_parse_frame (parse, &tmp_frame); + ret = gst_base_parse_finish_frame (parse, &tmp_frame, parsed); + gst_base_parse_frame_free (&tmp_frame); + + /* Bail out if we get a flow error. */ + if (ret != GST_FLOW_OK) + return ret; + } else { + /* The whole AU successfully parsed. */ + h264parse->marker = TRUE; + gst_h264_parse_parse_frame (parse, frame); + ret = gst_base_parse_finish_frame (parse, frame, parsed); + } } } + if (G_UNLIKELY (left)) { + /* should not be happening for nice AVC */ + GST_WARNING_OBJECT (parse, "skipping leftover AVC data %" G_GSIZE_FORMAT, + left); + frame->flags |= GST_BASE_PARSE_FRAME_FLAG_DROP; + ret = gst_base_parse_finish_frame (parse, frame, left); + } + if (parse_res == GST_H264_PARSER_NO_NAL_END || parse_res == GST_H264_PARSER_BROKEN_DATA) { @@ -2205,6 +2239,8 @@ gint par_n, par_d; GstH264VUIParams *vui = &sps->vui_parameters; gchar *colorimetry = NULL; + gint upstream_fps_n = 0; + gint upstream_fps_d = 1; if (sps->frame_cropping_flag) { crop_width = sps->crop_rect_width; @@ -2223,6 +2259,14 @@ modified = TRUE; } + if (s && gst_structure_get_fraction (s, + "framerate", &upstream_fps_n, &upstream_fps_d)) { + if (upstream_fps_n <= 0 || upstream_fps_d <= 0) { + upstream_fps_n = 0; + upstream_fps_d = 1; + } + } + /* 0/1 is set as the default in the codec parser, we will set * it in case we have no info 
*/ gst_h264_video_calculate_framerate (sps, h264parse->field_pic_flag, @@ -2369,8 +2413,9 @@ "height", G_TYPE_INT, height, NULL); /* upstream overrides */ - if (s && gst_structure_has_field (s, "framerate")) { - gst_structure_get_fraction (s, "framerate", &fps_num, &fps_den); + if (upstream_fps_n > 0 && upstream_fps_d > 0) { + fps_num = upstream_fps_n; + fps_den = upstream_fps_d; } /* but not necessarily or reliably this */ @@ -2491,7 +2536,7 @@ "Couldn't set content light level to caps"); } - if (h264parse->user_data.lcevc_enhancement_data) + if (h264parse->user_data.lcevc_enhancement_data || h264parse->lcevc) gst_caps_set_simple (caps, "lcevc", G_TYPE_BOOLEAN, TRUE, NULL); else gst_caps_set_simple (caps, "lcevc", G_TYPE_BOOLEAN, FALSE, NULL); @@ -3670,6 +3715,7 @@ &h264parse->fps_den); gst_structure_get_fraction (str, "pixel-aspect-ratio", &h264parse->upstream_par_n, &h264parse->upstream_par_d); + gst_structure_get_boolean (str, "lcevc", &h264parse->lcevc); /* get upstream format and align from caps */ gst_h264_parse_format_from_caps (caps, &format, &align); @@ -3863,6 +3909,7 @@ gst_structure_remove_field (s, "stream-format"); } gst_structure_remove_field (s, "parsed"); + gst_structure_remove_field (s, "lcevc"); } }
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/videoparsers/gsth264parse.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/videoparsers/gsth264parse.h
Changed
@@ -62,6 +62,7 @@ gint parsed_par_n, parsed_par_d; gint parsed_fps_n, parsed_fps_d; GstVideoColorimetry parsed_colorimetry; + gboolean lcevc; /* current codec_data in output caps, if any */ GstBuffer *codec_data; /* input codec_data, if any */
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/videoparsers/gsth265parse.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/videoparsers/gsth265parse.c
Changed
@@ -174,6 +174,9 @@ gst_base_parse_set_infer_ts (GST_BASE_PARSE (h265parse), FALSE); GST_PAD_SET_ACCEPT_INTERSECT (GST_BASE_PARSE_SINK_PAD (h265parse)); GST_PAD_SET_ACCEPT_TEMPLATE (GST_BASE_PARSE_SINK_PAD (h265parse)); + + h265parse->aud_needed = TRUE; + h265parse->aud_insert = TRUE; } @@ -208,6 +211,9 @@ h265parse->have_vps_in_frame = FALSE; h265parse->have_sps_in_frame = FALSE; h265parse->have_pps_in_frame = FALSE; + h265parse->have_aud_in_frame = FALSE; + h265parse->layer_id = 0; + h265parse->temporal_id_plus1 = 0; gst_adapter_clear (h265parse->frame_out); gst_video_clear_user_data (&h265parse->user_data, FALSE); gst_video_clear_user_data_unregistered (&h265parse->user_data_unregistered, @@ -231,6 +237,7 @@ h265parse->parsed_colorimetry.matrix = GST_VIDEO_COLOR_MATRIX_UNKNOWN; h265parse->parsed_colorimetry.transfer = GST_VIDEO_TRANSFER_UNKNOWN; h265parse->parsed_colorimetry.primaries = GST_VIDEO_COLOR_PRIMARIES_UNKNOWN; + h265parse->lcevc = FALSE; h265parse->have_pps = FALSE; h265parse->have_sps = FALSE; h265parse->have_vps = FALSE; @@ -243,6 +250,8 @@ h265parse->packetized = FALSE; h265parse->push_codec = FALSE; h265parse->first_frame = TRUE; + h265parse->layer_id = 0; + h265parse->temporal_id_plus1 = 0; gst_buffer_replace (&h265parse->codec_data, NULL); gst_buffer_replace (&h265parse->codec_data_in, NULL); @@ -288,6 +297,10 @@ h265parse->nalparser = gst_h265_parser_new (); h265parse->state = 0; + h265parse->layer_id = 0; + h265parse->temporal_id_plus1 = 0; + h265parse->aud_needed = TRUE; + h265parse->aud_insert = FALSE; gst_base_parse_set_min_frame_size (parse, 5); @@ -684,6 +697,8 @@ break; } default: + GST_DEBUG_OBJECT (h265parse, "Unknown SEI payload type %d", + sei.payloadType); break; } } @@ -698,8 +713,9 @@ GstByteReader br; GstVideoParseUtilsField field = GST_VIDEO_PARSE_UTILS_FIELD_1; - /* only US country code is currently supported */ + /* only US and UK country codes are currently supported */ switch (rud->country_code) { + case 
ITU_T_T35_COUNTRY_CODE_UK: case ITU_T_T35_COUNTRY_CODE_US: break; default: @@ -900,10 +916,6 @@ GST_H265_PARSE_STATE_VALID_PICTURE_HEADERS)) return FALSE; - /* This is similar to the GOT_SLICE state, but is only reset when the - * AU is complete. This is used to keep track of AU */ - h265parse->picture_start = TRUE; - pres = gst_h265_parser_parse_slice_hdr (nalparser, nalu, &slice); if (pres == GST_H265_PARSER_OK) { @@ -916,6 +928,15 @@ h265parse->state |= GST_H265_PARSE_STATE_GOT_SLICE; } + + /* This is similar to the GOT_SLICE state, but is only reset when the + * AU is complete. This is used to keep track of AU */ + if (!h265parse->picture_start) { + h265parse->picture_start = TRUE; + h265parse->layer_id = nalu->layer_id; + h265parse->temporal_id_plus1 = nalu->temporal_id_plus1; + } + if (slice.first_slice_segment_in_pic_flag == 1) GST_DEBUG_OBJECT (h265parse, "frame start, first_slice_segment_in_pic_flag = 1"); @@ -989,6 +1010,13 @@ return FALSE; break; case GST_H265_NAL_AUD: + pres = gst_h265_parser_parse_nal (nalparser, nalu); + if (pres != GST_H265_PARSER_OK) + return FALSE; + + h265parse->aud_needed = FALSE; + h265parse->have_aud_in_frame = TRUE; + break; default: /* Just accumulate AU Delimiter, whether it's before SPS or not */ pres = gst_h265_parser_parse_nal (nalparser, nalu); @@ -1059,17 +1087,13 @@ GstH265NalUnit nalu; const guint nl = h265parse->nal_length_size; GstMapInfo map; - gint left; + gsize parsed, left; if (nl < 1 || nl > 4) { GST_DEBUG_OBJECT (h265parse, "insufficient data to split input"); return GST_FLOW_NOT_NEGOTIATED; } - /* need to save buffer from invalidation upon _finish_frame */ - if (h265parse->split_packetized) - buffer = gst_buffer_copy (frame->buffer); - gst_buffer_map (buffer, &map, GST_MAP_READ); left = map.size; @@ -1080,6 +1104,11 @@ parse_res = gst_h265_parser_identify_nalu_hevc (h265parse->nalparser, map.data, 0, map.size, nl, &nalu); + /* Always enable AUD insertion per frame here. 
The pre_push function + * will only add it once, and will only add it for byte-stream output + * if AUD doesn't exist in the current frame */ + h265parse->aud_insert = TRUE; + while (parse_res == GST_H265_PARSER_OK) { GST_DEBUG_OBJECT (h265parse, "HEVC nal offset %d", nalu.offset + nalu.size); @@ -1113,29 +1142,66 @@ * a replacement output buffer is provided anyway. */ gst_h265_parse_parse_frame (parse, &tmp_frame); ret = gst_base_parse_finish_frame (parse, &tmp_frame, nl + nalu.size); - left -= nl + nalu.size; + gst_base_parse_frame_free (&tmp_frame); + + /* Bail out if we get a flow error. */ + if (ret != GST_FLOW_OK) { + gst_buffer_unmap (buffer, &map); + return ret; + } } + left -= nl + nalu.size; parse_res = gst_h265_parser_identify_nalu_hevc (h265parse->nalparser, map.data, nalu.offset + nalu.size, map.size, nl, &nalu); } + parsed = map.size - left; gst_buffer_unmap (buffer, &map); if (!h265parse->split_packetized) { - h265parse->marker = TRUE; - gst_h265_parse_parse_frame (parse, frame); - ret = gst_base_parse_finish_frame (parse, frame, map.size); - } else { - gst_buffer_unref (buffer); - if (G_UNLIKELY (left)) { - /* should not be happening for nice HEVC */ - GST_WARNING_OBJECT (parse, "skipping leftover HEVC data %d", left); - frame->flags |= GST_BASE_PARSE_FRAME_FLAG_DROP; - ret = gst_base_parse_finish_frame (parse, frame, map.size); + /* Nothing to do if no NAL unit was parsed, the whole AU will be dropped + * below. */ + if (parsed > 0) { + if (G_UNLIKELY (left)) { + /* Only part of the AU could be parsed, split out that part the rest + * will be dropped below. Should not be happening for nice HEVC. 
*/ + GST_WARNING_OBJECT (parse, "Problem parsing part of AU, keep part that " + "has been correctly parsed (%" G_GSIZE_FORMAT " bytes).", parsed); + GstBaseParseFrame tmp_frame; + + gst_base_parse_frame_init (&tmp_frame); + tmp_frame.flags |= frame->flags; + tmp_frame.offset = frame->offset; + tmp_frame.overhead = frame->overhead; + tmp_frame.buffer = gst_buffer_copy_region (buffer, GST_BUFFER_COPY_ALL, + 0, parsed); + + h265parse->marker = TRUE; + gst_h265_parse_parse_frame (parse, &tmp_frame); + ret = gst_base_parse_finish_frame (parse, &tmp_frame, parsed); + gst_base_parse_frame_free (&tmp_frame); + + /* Bail out if we get a flow error. */ + if (ret != GST_FLOW_OK) + return ret; + } else { + /* The whole AU successfully parsed. */ + h265parse->marker = TRUE; + gst_h265_parse_parse_frame (parse, frame); + ret = gst_base_parse_finish_frame (parse, frame, parsed); + } } } + if (G_UNLIKELY (left)) { + /* should not be happening for nice HEVC */ + GST_WARNING_OBJECT (parse, "skipping leftover HEVC data %" G_GSIZE_FORMAT, + left); + frame->flags |= GST_BASE_PARSE_FRAME_FLAG_DROP; + ret = gst_base_parse_finish_frame (parse, frame, left); + } + if (parse_res == GST_H265_PARSER_NO_NAL_END || parse_res == GST_H265_PARSER_BROKEN_DATA) { @@ -1327,6 +1393,7 @@ data, nalu.offset, nalu.size); if (gst_h265_parse_collect_nal (h265parse, data, size, &nalu)) { + h265parse->aud_needed = TRUE; /* complete current frame, if it exists */ if (current_off > 0) { nalu.size = 0; @@ -1344,6 +1411,12 @@ goto skip; } + /* Make sure the next buffer will contain an AUD */ + if (h265parse->aud_needed) { + h265parse->aud_insert = TRUE; + h265parse->aud_needed = FALSE; + } + /* Do not push immediately if we don't have all headers. 
This ensures that * our caps are complete, avoiding a renegotiation */ if (h265parse->align == GST_H265_PARSE_ALIGN_NAL && @@ -2166,6 +2239,8 @@ GstH265VPS *vps = sps->vps; GstH265VUIParams *vui = &sps->vui_params; gchar *colorimetry = NULL; + gint upstream_fps_n = 0; + gint upstream_fps_d = 1; GST_DEBUG_OBJECT (h265parse, "vps: %p", vps); @@ -2189,8 +2264,16 @@ modified = TRUE; } + if (s && gst_structure_get_fraction (s, + "framerate", &upstream_fps_n, &upstream_fps_d)) { + if (upstream_fps_n <= 0 || upstream_fps_d <= 0) { + upstream_fps_n = 0; + upstream_fps_d = 1; + } + } + /* 0/1 is set as the default in the codec parser */ - if (vui->timing_info_present_flag && !h265parse->framerate_from_caps) { + if (vui->timing_info_present_flag && !upstream_fps_n) { gint fps_num = 0, fps_den = 1; if (!(sps->fps_num == 0 && sps->fps_den == 1)) { @@ -2289,10 +2372,11 @@ gst_caps_set_simple (caps, "width", G_TYPE_INT, width, "height", G_TYPE_INT, height, NULL); - h265parse->framerate_from_caps = FALSE; /* upstream overrides */ - if (s && gst_structure_has_field (s, "framerate")) - gst_structure_get_fraction (s, "framerate", &fps_num, &fps_den); + if (upstream_fps_n > 0 && upstream_fps_d > 0) { + fps_num = upstream_fps_n; + fps_den = upstream_fps_d; + } /* but not necessarily or reliably this */ if (fps_den > 0) { @@ -2309,7 +2393,6 @@ fps_num, fps_den, 0, 0); val = gst_h265_parse_is_field_interlaced (h265parse) ? 
GST_SECOND / 2 : GST_SECOND; - h265parse->framerate_from_caps = TRUE; /* If we know the frame duration, and if we are not in one of the zero * latency patterns, add one frame of latency */ @@ -2340,6 +2423,8 @@ chroma_format = "4:4:4"; break; default: + GST_DEBUG_OBJECT (h265parse, "Unknown Chroma Format IDC %d", + sps->chroma_format_idc); break; } @@ -2519,6 +2604,11 @@ "Couldn't set content light level to caps"); } + if (h265parse->user_data.lcevc_enhancement_data || h265parse->lcevc) + gst_caps_set_simple (caps, "lcevc", G_TYPE_BOOLEAN, TRUE, NULL); + else + gst_caps_set_simple (caps, "lcevc", G_TYPE_BOOLEAN, FALSE, NULL); + src_caps = gst_pad_get_current_caps (GST_BASE_PARSE_SRC_PAD (h265parse)); if (src_caps) { @@ -2954,7 +3044,50 @@ h265parse->first_frame = FALSE; } - buffer = frame->buffer; + if (h265parse->aud_insert && !h265parse->have_aud_in_frame && + h265parse->format == GST_H265_PARSE_FORMAT_BYTE && + h265parse->align == GST_H265_PARSE_ALIGN_AU && + h265parse->temporal_id_plus1 > 0) { + static const guint8 aud[7] = { + 0x00, 0x00, 0x00, 0x01, + 0x46, 0x01, /* AUD, layer_id = 0, temporal_id_plus1 = 1 */ + 0x50 /* primary_pic_type = 2 (I/P/B) */ + }; + GstMemory *mem; + + GST_DEBUG_OBJECT (h265parse, "Inserting AUD into the stream"); + + if (h265parse->layer_id == 0 && h265parse->temporal_id_plus1 == 1) { + /* Common single layer I/P frame case, use static memory without + * heap allocation */ + mem = gst_memory_new_wrapped (GST_MEMORY_FLAG_READONLY, (gpointer) aud, + sizeof (aud), 0, sizeof (aud), NULL, NULL); + } else { + guint16 layer_info = ((GST_H265_NAL_AUD & 0x3f) << 9) | + ((h265parse->layer_id & 0x3f) << 3) | + (h265parse->temporal_id_plus1 & 0x7); + guint8 *aud_data = g_memdup2 (aud, sizeof (aud)); + + aud_data[4] = (layer_info >> 8) & 0xff; + aud_data[5] = layer_info & 0xff; + + mem = gst_memory_new_wrapped (0, aud_data, sizeof (aud), + 0, sizeof (aud), aud_data, g_free); + } + + frame->out_buffer = gst_buffer_copy (frame->buffer); + 
gst_buffer_prepend_memory (frame->out_buffer, mem); + if (h265parse->idr_pos >= 0) + h265parse->idr_pos += sizeof (aud); + if (h265parse->sei_pos >= 0) + h265parse->sei_pos += sizeof (aud); + + buffer = frame->out_buffer; + } else { + buffer = frame->buffer; + } + + h265parse->aud_insert = FALSE; if ((event = check_pending_key_unit_event (h265parse->force_key_unit_event, &parse->segment, GST_BUFFER_TIMESTAMP (buffer), @@ -3103,6 +3236,10 @@ case GST_H265_SEI_PIC_STRUCT_FRAME_TRIPLING: field_count = 0; break; + default: + GST_DEBUG_OBJECT (h265parse, "h265 sei_pic_struct %d", + h265parse->sei_pic_struct); + break; } if (field_count == -1) { @@ -3246,6 +3383,7 @@ &h265parse->fps_den); gst_structure_get_fraction (str, "pixel-aspect-ratio", &h265parse->upstream_par_n, &h265parse->upstream_par_d); + gst_structure_get_boolean (str, "lcevc", &h265parse->lcevc); /* get upstream format and align from caps */ gst_h265_parse_format_from_caps (caps, &format, &align); @@ -3325,19 +3463,7 @@ } if (format == h265parse->format && align == h265parse->align) { - /* do not set CAPS and passthrough mode if SPS/PPS have not been parsed */ - if (h265parse->have_sps && h265parse->have_pps) { - /* Don't enable passthrough here. This element will parse various - * SEI messages which would be very important/useful for downstream - * (HDR, timecode for example) - */ -#if 0 - gst_base_parse_set_passthrough (parse, TRUE); -#endif - - /* we did parse codec-data and might supplement src caps */ - gst_h265_parse_update_src_caps (h265parse, caps); - } + h265parse->have_vps = TRUE; } else if (format == GST_H265_PARSE_FORMAT_HVC1 || format == GST_H265_PARSE_FORMAT_HEV1) { /* if input != output, and input is hevc, must split before anything else */ @@ -3388,6 +3514,7 @@ gst_structure_remove_field (s, "stream-format"); } gst_structure_remove_field (s, "parsed"); + gst_structure_remove_field (s, "lcevc"); } }
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/videoparsers/gsth265parse.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/videoparsers/gsth265parse.h
Changed
@@ -56,6 +56,7 @@ gint parsed_par_n, parsed_par_d; gint parsed_fps_n, parsed_fps_d; GstVideoColorimetry parsed_colorimetry; + gboolean lcevc; /* current codec_data in output caps, if any */ GstBuffer *codec_data; /* input codec_data, if any */ @@ -85,6 +86,7 @@ gboolean have_vps_in_frame; gboolean have_sps_in_frame; gboolean have_pps_in_frame; + gboolean have_aud_in_frame; gboolean first_frame; @@ -110,10 +112,20 @@ gboolean predicted; gboolean bidirectional; gboolean header; - gboolean framerate_from_caps; /* AU state */ gboolean picture_start; + /* tracks whether h265parse needs to insert AUD or not. + * Used when in_format == byte-stream */ + gboolean aud_needed; + + /* For insertion of AU Delimiter */ + gboolean aud_insert; + + /* layer id info of first slice of the current AU */ + guint layer_id; + guint temporal_id_plus1; + GstVideoParseUserData user_data; GstVideoParseUserDataUnregistered user_data_unregistered;
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/videoparsers/gsth266parse.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/videoparsers/gsth266parse.c
Changed
@@ -163,6 +163,10 @@ GstCaps * filter); static void +gst_h266_parse_process_sei_user_data (GstH266Parse * h266parse, + GstH266RegisteredUserData * rud); + +static void gst_h266_parse_class_init (GstH266ParseClass * klass) { GObjectClass *gobject_class = (GObjectClass *) klass; @@ -272,6 +276,7 @@ h266parse->parsed_colorimetry.matrix = GST_VIDEO_COLOR_MATRIX_UNKNOWN; h266parse->parsed_colorimetry.transfer = GST_VIDEO_TRANSFER_UNKNOWN; h266parse->parsed_colorimetry.primaries = GST_VIDEO_COLOR_PRIMARIES_UNKNOWN; + h266parse->lcevc = FALSE; h266parse->have_pps = FALSE; h266parse->have_sps = FALSE; h266parse->have_vps = FALSE; @@ -618,6 +623,10 @@ case GST_H266_SEI_SUBPIC_LEVEL_INFO: /* FIXME */ break; + case GST_H266_SEI_REGISTERED_USER_DATA: + gst_h266_parse_process_sei_user_data (h266parse, + &sei.payload.registered_user_data); + break; default: break; } @@ -994,7 +1003,7 @@ GstH266NalUnit nalu; const guint nl = h266parse->nal_length_size; GstMapInfo map; - gint left; + gsize parsed, left; GST_TRACE_OBJECT (h266parse, "Handling packetized frame"); @@ -1003,10 +1012,6 @@ return GST_FLOW_NOT_NEGOTIATED; } - /* need to save buffer from invalidation upon _finish_frame */ - if (h266parse->split_packetized) - buffer = gst_buffer_copy (frame->buffer); - gst_buffer_map (buffer, &map, GST_MAP_READ); left = map.size; @@ -1050,29 +1055,66 @@ * a replacement output buffer is provided anyway. */ gst_h266_parse_parse_frame (parse, &tmp_frame); ret = gst_base_parse_finish_frame (parse, &tmp_frame, nl + nalu.size); - left -= nl + nalu.size; + gst_base_parse_frame_free (&tmp_frame); + + /* Bail out if we get a flow error. 
*/ + if (ret != GST_FLOW_OK) { + gst_buffer_unmap (buffer, &map); + return ret; + } } + left -= nl + nalu.size; parse_res = gst_h266_parser_identify_nalu_vvc (h266parse->nalparser, map.data, nalu.offset + nalu.size, map.size, nl, &nalu); } + parsed = map.size - left; gst_buffer_unmap (buffer, &map); if (!h266parse->split_packetized) { - h266parse->marker = TRUE; - gst_h266_parse_parse_frame (parse, frame); - ret = gst_base_parse_finish_frame (parse, frame, map.size); - } else { - gst_buffer_unref (buffer); - if (G_UNLIKELY (left)) { - /* should not be happening for nice VVC */ - GST_WARNING_OBJECT (parse, "skipping leftover VVC data %d", left); - frame->flags |= GST_BASE_PARSE_FRAME_FLAG_DROP; - ret = gst_base_parse_finish_frame (parse, frame, map.size); + /* Nothing to do if no NAL unit was parsed, the whole AU will be dropped + * below. */ + if (parsed > 0) { + if (G_UNLIKELY (left)) { + /* Only part of the AU could be parsed, split out that part, the rest + * will be dropped below. Should not be happening for nice VVC. */ + GST_WARNING_OBJECT (parse, "Problem parsing part of AU, keep part that " + "has been correctly parsed (%" G_GSIZE_FORMAT " bytes).", parsed); + GstBaseParseFrame tmp_frame; + + gst_base_parse_frame_init (&tmp_frame); + tmp_frame.flags |= frame->flags; + tmp_frame.offset = frame->offset; + tmp_frame.overhead = frame->overhead; + tmp_frame.buffer = gst_buffer_copy_region (buffer, GST_BUFFER_COPY_ALL, + 0, parsed); + + h266parse->marker = TRUE; + gst_h266_parse_parse_frame (parse, &tmp_frame); + ret = gst_base_parse_finish_frame (parse, &tmp_frame, parsed); + gst_base_parse_frame_free (&tmp_frame); + + /* Bail out if we get a flow error. */ + if (ret != GST_FLOW_OK) + return ret; + } else { + /* The whole AU successfully parsed. 
*/ + h266parse->marker = TRUE; + gst_h266_parse_parse_frame (parse, frame); + ret = gst_base_parse_finish_frame (parse, frame, parsed); + } } } + if (G_UNLIKELY (left)) { + /* should not be happening for nice VVC */ + GST_WARNING_OBJECT (parse, "skipping leftover VVC data %" G_GSIZE_FORMAT, + left); + frame->flags |= GST_BASE_PARSE_FRAME_FLAG_DROP; + ret = gst_base_parse_finish_frame (parse, frame, left); + } + if (parse_res == GST_H266_PARSER_NO_NAL_END || parse_res == GST_H266_PARSER_BROKEN_DATA) { @@ -2048,7 +2090,7 @@ g_value_unset (&value); } - gst_caps_set_value (caps, "profile", &compat_profiles); + gst_caps_set_value (compat_caps, "profile", &compat_profiles); g_value_unset (&compat_profiles); g_array_unref (profiles); } @@ -2191,6 +2233,8 @@ GstH266VUIParams *vui = &sps->vui_params; gchar *colorimetry = NULL; guint interlaced_mode; + gint upstream_fps_n = 0; + gint upstream_fps_d = 1; GST_DEBUG_OBJECT (h266parse, "vps: %p", vps); @@ -2223,7 +2267,15 @@ modified = TRUE; } - if (!h266parse->framerate_from_caps) { + if (s && gst_structure_get_fraction (s, + "framerate", &upstream_fps_n, &upstream_fps_d)) { + if (upstream_fps_n <= 0 || upstream_fps_d <= 0) { + upstream_fps_n = 0; + upstream_fps_d = 1; + } + } + + if (!upstream_fps_n) { gint fps_num, fps_den; /* 0/1 is set as the default in the codec parser */ @@ -2320,12 +2372,10 @@ gst_caps_set_simple (caps, "width", G_TYPE_INT, width, "height", G_TYPE_INT, height, NULL); - h266parse->framerate_from_caps = FALSE; /* upstream overrides */ - if (s && gst_structure_has_field (s, "framerate")) { - gst_structure_get_fraction (s, "framerate", &fps_num, &fps_den); - if (fps_den > 0) - h266parse->framerate_from_caps = TRUE; + if (upstream_fps_n > 0 && upstream_fps_d > 0) { + fps_num = upstream_fps_n; + fps_den = upstream_fps_d; } /* but not necessarily or reliably this */ @@ -2505,6 +2555,11 @@ "Couldn't set content light level to caps"); } + if (h266parse->user_data.lcevc_enhancement_data || h266parse->lcevc) + 
gst_caps_set_simple (caps, "lcevc", G_TYPE_BOOLEAN, TRUE, NULL); + else + gst_caps_set_simple (caps, "lcevc", G_TYPE_BOOLEAN, FALSE, NULL); + src_caps = gst_pad_get_current_caps (GST_BASE_PARSE_SRC_PAD (h266parse)); if (src_caps) { @@ -3100,6 +3155,7 @@ &h266parse->fps_den); gst_structure_get_fraction (str, "pixel-aspect-ratio", &h266parse->upstream_par_n, &h266parse->upstream_par_d); + gst_structure_get_boolean (str, "lcevc", &h266parse->lcevc); /* get upstream format and align from caps */ gst_h266_parse_format_from_caps (h266parse, caps, &format, &align); @@ -3176,10 +3232,7 @@ h266parse->nal_length_size = 4; } - if (format == h266parse->format && align == h266parse->align) { - /* we did parse codec-data and might supplement src caps */ - gst_h266_parse_update_src_caps (h266parse, caps); - } else if (format == GST_H266_PARSE_FORMAT_VVC1 + if (format == GST_H266_PARSE_FORMAT_VVC1 || format == GST_H266_PARSE_FORMAT_VVI1) { /* if input != output, and input is vvc, must split before anything else */ /* arrange to insert codec-data in-stream if needed. 
@@ -3229,6 +3282,7 @@ gst_structure_remove_field (s, "stream-format"); } gst_structure_remove_field (s, "parsed"); + gst_structure_remove_field (s, "lcevc"); } } @@ -3277,6 +3331,36 @@ } static void +gst_h266_parse_process_sei_user_data (GstH266Parse * h266parse, + GstH266RegisteredUserData * rud) +{ + guint16 provider_code; + GstByteReader br; + GstVideoParseUtilsField field = GST_VIDEO_PARSE_UTILS_FIELD_1; + + /* only US and UK country codes are currently supported */ + switch (rud->country_code) { + case ITU_T_T35_COUNTRY_CODE_UK: + case ITU_T_T35_COUNTRY_CODE_US: + break; + default: + GST_LOG_OBJECT (h266parse, "Unsupported country code %d", + rud->country_code); + return; + } + + if (rud->data == NULL || rud->size < 2) + return; + + gst_byte_reader_init (&br, rud->data, rud->size); + + provider_code = gst_byte_reader_get_uint16_be_unchecked (&br); + + gst_video_parse_user_data ((GstElement *) h266parse, &h266parse->user_data, + &br, field, provider_code); +} + +static void gst_h266_parse_set_property (GObject * object, guint prop_id, const GValue * value, GParamSpec * pspec) {
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/videoparsers/gsth266parse.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/videoparsers/gsth266parse.h
Changed
@@ -66,6 +66,7 @@ gint parsed_par_n, parsed_par_d; gint parsed_fps_n, parsed_fps_d; GstVideoColorimetry parsed_colorimetry; + gboolean lcevc; /* current codec_data in output caps, if any */ GstBuffer *codec_data; /* input codec_data, if any */ @@ -119,7 +120,6 @@ gboolean predicted; gboolean bidirectional; gboolean header; - gboolean framerate_from_caps; /* AU state */ gboolean picture_start; guint last_nuh_layer_id;
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/videoparsers/gstmpeg4videoparse.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/videoparsers/gstmpeg4videoparse.c
Changed
@@ -516,6 +516,7 @@ if (ret) { framesize = off - 3; + g_assert (framesize <= map.size); } else { goto next; } @@ -526,7 +527,6 @@ if (ret) { GstFlowReturn res; - g_assert (framesize <= map.size); res = gst_mpeg4vparse_parse_frame (parse, frame); if (res == GST_BASE_PARSE_FLOW_DROPPED) frame->flags |= GST_BASE_PARSE_FRAME_FLAG_DROP;
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/videoparsers/gstmpegvideoparse.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/videoparsers/gstmpegvideoparse.c
Changed
@@ -634,7 +634,7 @@ gint off = 0; GstMpegVideoPacket packet; guint8 *data; - gint size; + gsize size; gboolean need_more = FALSE; GstMapInfo map; @@ -729,7 +729,7 @@ GstFlowReturn res; *skipsize = 0; - g_assert (off <= map.size); + g_assert (off <= size); res = gst_mpegv_parse_parse_frame (parse, frame); if (res == GST_BASE_PARSE_FLOW_DROPPED) frame->flags |= GST_BASE_PARSE_FRAME_FLAG_DROP;
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/gst/videoparsers/gstvp9parse.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/gst/videoparsers/gstvp9parse.c
Changed
@@ -503,7 +503,7 @@ GstFlowReturn ret = GST_FLOW_OK; GstVp9ParserResult parse_res = GST_VP9_PARSER_ERROR; GstMapInfo map; - gsize offset = 0; + gsize offset = 0, size; GstVp9SuperframeInfo superframe_info; guint i; GstVp9FrameHdr frame_hdr; @@ -513,10 +513,6 @@ else self->discont = FALSE; - /* need to save buffer from invalidation upon _finish_frame */ - if (self->align == GST_VP9_PARSE_ALIGN_FRAME) - buffer = gst_buffer_copy (frame->buffer); - if (!gst_buffer_map (buffer, &map, GST_MAP_READ)) { GST_ELEMENT_ERROR (parse, CORE, NOT_IMPLEMENTED, (NULL), ("Couldn't map incoming buffer")); @@ -579,6 +575,7 @@ GST_BUFFER_FLAG_SET (subframe.buffer, GST_BUFFER_FLAG_DECODE_ONLY); ret = gst_base_parse_finish_frame (parse, &subframe, frame_size); + gst_base_parse_frame_free (&subframe); } else { /* FIXME: need to parse all frames belong to this superframe? */ break; @@ -590,16 +587,16 @@ gst_vp9_parse_reset_super_frame (self); done: + size = map.size; gst_buffer_unmap (buffer, &map); if (self->align != GST_VP9_PARSE_ALIGN_FRAME) { if (parse_res == GST_VP9_PARSER_OK) gst_vp9_parse_parse_frame (self, frame, &frame_hdr); - ret = gst_base_parse_finish_frame (parse, frame, map.size); + ret = gst_base_parse_finish_frame (parse, frame, size); } else { - gst_buffer_unref (buffer); - if (offset != map.size) { - gsize left = map.size - offset; + if (offset != size) { + gsize left = size - offset; if (left != superframe_info.superframe_index_size) { GST_WARNING_OBJECT (parse, "Skipping leftover frame data %" G_GSIZE_FORMAT, left);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/meson.build
Changed
@@ -1,5 +1,5 @@ project('gst-plugins-bad', 'c', 'cpp', - version : '1.26.10', + version : '1.28.0', meson_version : '>= 1.4', default_options : ['warning_level=1', 'buildtype=debugoptimized'] ) @@ -18,7 +18,7 @@ gst_version_is_dev = gst_version_minor.is_odd() and gst_version_micro < 90 glib_req = '>= 2.64.0' -orc_req = '>= 0.4.17' +orc_req = '>= 0.4.34' if gst_version_is_stable gst_req = '>= @0@.@1@.0'.format(gst_version_major, gst_version_minor) @@ -396,7 +396,7 @@ endif libm = cc.find_library('m', required : false) -gio_dep = dependency('gio-2.0', version: glib_req) +gio_dep = dependency('gio-2.0', version: glib_req, default_options: {'sysprof': 'disabled'}) gmodule_dep = dependency('gmodule-no-export-2.0') # gio-unix-2.0 is used by sys/bluez @@ -484,16 +484,21 @@ orcc_args = [] orc_targets = [] # Used by various libraries/elements that use Orc code -orc_dep = dependency('orc-0.4', version : orc_req, required : get_option('orc'), allow_fallback: true) -orcc = find_program('orcc', required : get_option('orc')) -if orc_dep.found() and orcc.found() - have_orcc = true - orcc_args = [orcc, '--include', 'glib.h'] +orc_dep = dependency('orc-0.4', version: orc_req, required: get_option('orc'), allow_fallback: true) +orcc = find_program('orcc', required: get_option('orc-compiler')) +if orc_dep.found() + if orcc.found() + have_orcc = true + orcc_args = [orcc, '--include', 'glib.h'] + else + message('Orc Compiler not found, not regenerating Orc sources') + endif cdata.set('HAVE_ORC', 1) else - message('Orc Compiler not found or disabled, will use backup C code') + warning('Orc not found or disabled, will use backup C code') cdata.set('DISABLE_ORC', 1) endif + cdata.set('GST_ENABLE_EXTRA_CHECKS', not get_option('extra-checks').disabled()) # Disable compiler warnings for unused variables and args if gst debug system is disabled @@ -636,7 +641,7 @@ configure_file(output : 'config.h', configuration : cdata) -meson.add_dist_script('scripts/gen-changelog.py', meson.project_name(), '1.24.0', 
meson.project_version()) +meson.add_dist_script('scripts/gen-changelog.py', meson.project_name(), '1.26.0', meson.project_version()) subdir('docs')
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/meson.options
Added
@@ -0,0 +1,328 @@
+option('gst_play_tests', type: 'boolean', value: false,
+  description: 'Enable GstPlay tests that need network access')
+
+# Feature options for plugins without external deps
+option('accurip', type : 'feature', value : 'auto')
+option('adpcmdec', type : 'feature', value : 'auto')
+option('adpcmenc', type : 'feature', value : 'auto')
+option('aiff', type : 'feature', value : 'auto')
+option('asfmux', type : 'feature', value : 'auto')
+option('audiobuffersplit', type : 'feature', value : 'auto')
+option('audiofxbad', type : 'feature', value : 'auto')
+option('audiolatency', type : 'feature', value : 'auto')
+option('audiomixmatrix', type : 'feature', value : 'auto')
+option('audiovisualizers', type : 'feature', value : 'auto')
+option('autoconvert', type : 'feature', value : 'auto')
+option('bayer', type : 'feature', value : 'auto')
+option('camerabin2', type : 'feature', value : 'auto')
+option('closedcaption', type : 'feature', value : 'auto')
+option('codecalpha', type : 'feature', value : 'auto')
+option('codectimestamper', type : 'feature', value : 'auto')
+option('coloreffects', type : 'feature', value : 'auto')
+option('debugutils', type : 'feature', value : 'auto')
+option('dvbsubenc', type : 'feature', value : 'auto')
+option('dvbsuboverlay', type : 'feature', value : 'auto')
+option('dvdspu', type : 'feature', value : 'auto')
+option('faceoverlay', type : 'feature', value : 'auto')
+option('festival', type : 'feature', value : 'auto')
+option('fieldanalysis', type : 'feature', value : 'auto')
+option('freeverb', type : 'feature', value : 'auto')
+option('frei0r', type : 'feature', value : 'auto')
+option('gaudieffects', type : 'feature', value : 'auto')
+option('gdp', type : 'feature', value : 'auto')
+option('geometrictransform', type : 'feature', value : 'auto')
+option('id3tag', type : 'feature', value : 'auto')
+option('insertbin', type : 'feature', value : 'auto')
+option('inter', type : 'feature', value : 'auto')
+option('interlace', type : 'feature', value : 'auto')
+option('ivfparse', type : 'feature', value : 'auto')
+option('ivtc', type : 'feature', value : 'auto')
+option('jp2kdecimator', type : 'feature', value : 'auto')
+option('jpegformat', type : 'feature', value : 'auto')
+option('lcevcdecoder', type : 'feature', value : 'auto')
+option('lcevcencoder', type : 'feature', value : 'auto')
+option('librfb', type : 'feature', value : 'auto')
+option('midi', type : 'feature', value : 'auto')
+option('mpegdemux', type : 'feature', value : 'auto')
+option('mpegpsmux', type : 'feature', value : 'auto')
+option('mpegtsdemux', type : 'feature', value : 'auto')
+option('mpegtsmux', type : 'feature', value : 'auto')
+option('mse', type : 'feature', value : 'auto')
+option('mxf', type : 'feature', value : 'auto')
+option('netsim', type : 'feature', value : 'auto')
+option('onvif', type : 'feature', value : 'auto')
+option('pcapparse', type : 'feature', value : 'auto')
+option('pnm', type : 'feature', value : 'auto')
+option('proxy', type : 'feature', value : 'auto')
+option('rawparse', type : 'feature', value : 'auto')
+option('removesilence', type : 'feature', value : 'auto')
+option('rist', type : 'feature', value : 'auto')
+option('rtmp2', type : 'feature', value : 'auto')
+option('rtp', type : 'feature', value : 'auto')
+option('sdp', type : 'feature', value : 'auto')
+option('segmentclip', type : 'feature', value : 'auto')
+option('siren', type : 'feature', value : 'auto')
+option('smooth', type : 'feature', value : 'auto')
+option('speed', type : 'feature', value : 'auto')
+option('subenc', type : 'feature', value : 'auto')
+option('switchbin', type : 'feature', value : 'auto')
+option('tensordecoders', type : 'feature', value : 'auto')
+option('timecode', type : 'feature', value : 'auto')
+option('unixfd', type : 'feature', value : 'auto')
+option('videofilters', type : 'feature', value : 'auto')
+option('videoframe_audiolevel', type : 'feature', value : 'auto')
+option('videoparsers', type : 'feature', value : 'auto')
+option('videosignal', type : 'feature', value : 'auto')
+option('vmnc', type : 'feature', value : 'auto')
+
+# Feature options for libraries that need external deps
+option('opencv', type : 'feature', value : 'auto', description : 'OpenCV computer vision library support')
+
+# Feature options for optional deps in plugins
+option('drm', type : 'feature', value : 'auto', description: 'libdrm support in the GstVA library')
+option('udev', type : 'feature', value : 'auto', description: 'gudev support in the new VA-API plugin')
+option('wayland', type : 'feature', value : 'auto', description : 'Wayland plugin/library, support in the Vulkan plugin')
+option('x11', type : 'feature', value : 'auto', description : 'X11 support in Vulkan, GL and rfb plugins')
+
+# Feature options for plugins that need external deps
+option('aes', type : 'feature', value : 'auto', description : 'AES encryption/decryption plugin')
+option('aja', type : 'feature', value : 'auto', description : 'AJA audio/video source/sink plugin')
+option('aom', type : 'feature', value : 'auto', description : 'AOM AV1 video codec plugin')
+option('avtp', type : 'feature', value : 'auto', description : 'Audio/Video Transport Protocol (AVTP) plugin')
+option('amfcodec', type : 'feature', value : 'auto', description : 'AMD AMF codec plugin')
+option('analyticsoverlay', type: 'feature', value : 'auto')
+option('androidmedia', type : 'feature', value : 'auto', description : 'Video capture and codec plugins for Android')
+option('applemedia', type : 'feature', value : 'auto', description : 'Video capture and codec access plugins for macOS and iOS')
+option('asio', type : 'feature', value : 'auto', description : 'Steinberg Audio Streaming Input Output (ASIO) plugin')
+option('assrender', type : 'feature', value : 'auto', description : 'ASS/SSA subtitle renderer plugin')
+option('bluez', type : 'feature', value : 'auto', description : 'Bluetooth audio A2DP/AVDTP sink, AVDTP source plugin')
+option('bs2b', type : 'feature', value : 'auto', description : 'Bauer stereophonic-to-binaural audio plugin')
+option('bz2', type : 'feature', value : 'auto', description : 'bz2 stream encoder and decoder plugin')
+option('chromaprint', type : 'feature', value : 'auto', description : 'Chromaprint fingerprint audio plugin')
+option('codec2json', type : 'feature', value : 'auto')
+option('colormanagement', type : 'feature', value : 'auto', description : 'Color management correction plugin')
+option('curl', type : 'feature', value : 'auto', description : 'cURL network source and sink plugin')
+option('curl-ssh2', type : 'feature', value : 'auto', description : 'cURL network source and sink plugin libssh2 support')
+option('d3dvideosink', type : 'feature', value : 'auto', description : 'Direct3D video sink plugin')
+option('d3d11', type : 'feature', value : 'auto', description : 'Direct3D11 plugin')
+option('d3d12', type : 'feature', value : 'auto', description : 'Direct3D12 plugin')
+option('dash', type : 'feature', value : 'auto', description : 'DASH demuxer plugin')
+option('dc1394', type : 'feature', value : 'auto', description : 'libdc1394 IIDC camera source plugin')
+option('decklink', type : 'feature', value : 'auto', description : 'DeckLink audio/video source/sink plugin')
+option('directfb', type : 'feature', value : 'auto', description : 'DirectFB video sink plugin')
+option('directsound', type : 'feature', value : 'auto', description : 'Directsound audio source plugin')
+option('directshow', type : 'feature', value : 'auto', description : 'Directshow audio/video plugins')
+option('dtls', type : 'feature', value : 'auto', description : 'DTLS encoder and decoder plugin')
+option('dts', type : 'feature', value : 'auto', description : 'DTS audio decoder plugin (GPL - only built if gpl option is also enabled!)')
+option('dvb', type : 'feature', value : 'auto', description : 'DVB video bin and source plugin')
+option('dwrite', type : 'feature', value : 'auto', description : 'DirectWrite plugin')
+option('faac', type : 'feature', value : 'auto', description : 'Free AAC audio encoder plugin')
+option('faad', type : 'feature', value : 'auto', description : 'Free AAC audio decoder plugin (GPL - only built if gpl option is also enabled!)')
+option('fbdev', type : 'feature', value : 'auto', description : 'Framebuffer video sink plugin')
+option('fdkaac', type : 'feature', value : 'auto', description : 'Fraunhofer AAC audio codec plugin')
+option('flite', type : 'feature', value : 'auto', description : 'Flite speech synthesizer source plugin')
+option('fluidsynth', type : 'feature', value : 'auto', description : 'Fluidsynth MIDI decoder plugin')
+option('gl', type : 'feature', value : 'auto', description : 'GStreamer OpenGL integration support (used by various plugins)')
+option('gme', type : 'feature', value : 'auto', description : 'libgme gaming console music file decoder plugin')
+option('gs', type : 'feature', value : 'auto', description : 'Google Cloud Storage source and sink plugin')
+option('gsm', type : 'feature', value : 'auto', description : 'GSM encoder/decoder plugin')
+option('gtk3', type : 'feature', value : 'auto', description : 'GTK+ video sink plugin')
+option('hip', type : 'feature', value : 'auto', description : 'AMD HIP plugin')
+option('ipcpipeline', type : 'feature', value : 'auto', description : 'Inter-process communication plugin')
+option('iqa', type : 'feature', value : 'auto', description : 'Image quality assessment plugin (AGPL - only built if gpl option is also enabled!)')
+option('kms', type : 'feature', value : 'auto', description : 'KMS video sink plugin')
+option('ladspa', type : 'feature', value : 'auto', description : 'LADSPA plugin bridge')
+option('ladspa-rdf', type : 'feature', value : 'auto', description : 'LADSPA plugin bridge RDF support')
+option('lc3', type : 'feature', value : 'auto', description : 'LC3 (Bluetooth) LE audio codec plugin')
+option('ldac', type : 'feature', value : 'auto', description : 'LDAC bluetooth audio codec plugin')
+option('libde265', type : 'feature', value : 'auto', description : 'HEVC/H.265 video decoder plugin')
+option('openaptx', type : 'feature', value : 'auto', description : 'Open Source implementation of Audio Processing Technology codec (aptX) plugin')
+option('lv2', type : 'feature', value : 'auto', description : 'LV2 audio plugin bridge')
+option('mediafoundation', type : 'feature', value : 'auto', description : 'Microsoft Media Foundation plugin')
+option('microdns', type : 'feature', value : 'auto', description : 'libmicrodns-based device provider')
+option('modplug', type : 'feature', value : 'auto', description : 'ModPlug audio decoder plugin')
+option('mpeg2enc', type : 'feature', value : 'auto', description : 'mpeg2enc video encoder plugin (GPL - only built if gpl option is also enabled!)')
+option('mpeghdec', type : 'feature', value : 'auto', description : 'MPEG-H audio decoder plugin')
+option('mplex', type : 'feature', value : 'auto', description : 'mplex audio/video multiplexer plugin (GPL - only built if gpl option is also enabled!)')
+option('msdk', type : 'feature', value : 'auto', description : 'Intel Media SDK video encoder/decoder plugin')
+option('musepack', type : 'feature', value : 'auto', description : 'libmpcdec Musepack decoder plugin')
+option('neon', type : 'feature', value : 'auto', description : 'NEON HTTP source plugin')
+option('nvcomp', type : 'feature', value : 'auto', description : 'NVIDIA nvCOMP compression/decompression plugin')
+option('nvcodec', type : 'feature', value : 'auto', description : 'NVIDIA GPU codec plugin')
+option('nvdswrapper', type : 'feature', value : 'auto', description : 'NVIDIA DeepStream SDK wrapper plugin')
+option('onnx', type : 'feature', value : 'auto', description : 'ONNX neural network plugin')
+option('openal', type : 'feature', value : 'auto', description : 'OpenAL plugin')
+option('openexr', type : 'feature', value : 'auto', description : 'OpenEXR plugin')
+option('openh264', type : 'feature', value : 'auto', description : 'H.264 video codec plugin')
+option('openjpeg', type : 'feature', value : 'auto', description : 'JPEG2000 image codec plugin')
+option('openmpt', type : 'feature', value : 'auto', description : 'OpenMPT module music library plugin')
+option('openni2', type : 'feature', value : 'auto', description : 'OpenNI2 library plugin')
+option('opensles', type : 'feature', value : 'auto', description : 'OpenSL ES audio source/sink plugin')
+option('opus', type : 'feature', value : 'auto', description : 'OPUS audio parser plugin')
+option('qroverlay', type : 'feature', value : 'auto', description : 'Element to set random data on a qroverlay')
+option('qsv', type : 'feature', value : 'auto', description : 'Intel Quick Sync Video plugin')
+option('resindvd', type : 'feature', value : 'auto', description : 'Resin DVD playback plugin (GPL - only built if gpl option is also enabled!)')
+option('rsvg', type : 'feature', value : 'auto', description : 'SVG overlayer and image decoder plugin')
+option('rtmp', type : 'feature', value : 'auto', description : 'RTMP video network source and sink plugin')
+option('sbc', type : 'feature', value : 'auto', description : 'SBC bluetooth audio codec plugin')
+option('sctp', type : 'feature', value : 'auto', description : 'SCTP plugin')
+option('shm', type : 'feature', value : 'auto', description : 'Shared memory source/sink plugin')
+option('smoothstreaming', type : 'feature', value : 'auto', description : 'Microsoft Smooth Streaming demuxer plugin')
+option('sndfile', type : 'feature', value : 'auto', description : 'libsndfile plugin')
+option('soundtouch', type : 'feature', value : 'auto', description : 'Audio pitch controller & BPM detection plugin')
+option('spandsp', type : 'feature', value : 'auto', description : 'Packet loss concealment audio plugin')
+option('srt', type : 'feature', value : 'auto', description : 'Secure, Reliable, Transport client/server network source/sink plugin')
+option('srtp', type : 'feature', value : 'auto', description : 'Secure RTP codec plugin')
+option('svtav1', type : 'feature', value : 'auto', description : 'Scalable Video Technology for AV1 plugin')
+option('svthevcenc', type : 'feature', value : 'auto', description : 'Scalable Video Technology for HEVC encoder plugin')
+option('svtjpegxs', type : 'feature', value : 'auto', description : 'Scalable Video Technology for JPEG-XS plugin')
+option('teletext', type : 'feature', value : 'auto', description : 'Teletext plugin')
+option('tflite', type : 'feature', value : 'auto', description : 'TensorFlow Lite (LiteRT) plugin')
+option('tflite-edgetpu', type : 'feature', value : 'auto', description : 'TensorFlow Lite (LiteRT) EdgeTPU (Coral) support')
+option('tflite-vsi', type : 'feature', value : 'disabled', description : 'TensorFlow Lite (LiteRT) Verisilicon support')
+option('tinyalsa', type : 'feature', value : 'auto', description : 'TinyALSA plugin')
+option('transcode', type : 'feature', value : 'auto', description : 'Transcode plugin')
+option('ttml', type : 'feature', value : 'auto', description : 'TTML subtitle parser and renderer plugin')
+option('uvch264', type : 'feature', value : 'auto', description : 'UVC compliant H.264 camera source plugin')
+option('va', type : 'feature', value : 'auto', description: 'VA-API new plugin')
+option('vmaf', type : 'feature', value : 'auto', description : 'Netflix VMAF image quality assessment plugin')
+option('voaacenc', type : 'feature', value : 'auto', description : 'AAC audio encoder plugin')
+option('voamrwbenc', type : 'feature', value : 'auto', description : 'AMR-WB audio encoder plugin')
+option('wasapi', type : 'feature', value : 'auto', description : 'Windows Audio Session API source/sink plugin')
+option('wasapi2', type : 'feature', value : 'auto', description : 'Windows Audio Session API source/sink plugin with WinRT API')
+option('webview2', type : 'feature', value : 'auto', description : 'WebView2 plugin')
+option('webp', type : 'feature', value : 'auto', description : 'WebP image codec plugin')
+option('webrtc', type : 'feature', value : 'auto', yield: true, description : 'WebRTC audio/video network bin plugin')
+option('webrtcdsp', type : 'feature', value : 'auto', description : 'Plugin with various audio filters provided by the WebRTC audio processing library')
+option('wildmidi', type : 'feature', value : 'auto', description : 'WildMidi midi soft synth plugin')
+option('wic', type : 'feature', value : 'auto', description : 'Windows Imaging Component plugin')
+option('win32ipc', type : 'feature', value : 'auto', description : 'Windows IPC plugin')
+option('winks', type : 'feature', value : 'auto', description : 'Windows Kernel Streaming video source plugin')
+option('winscreencap', type : 'feature', value : 'auto', description : 'Windows Screen Capture video source plugin')
+option('x265', type : 'feature', value : 'auto', description : 'HEVC/H.265 video encoder plugin (GPL - only built if gpl option is also enabled!)')
+option('zbar', type : 'feature', value : 'auto', description : 'Barcode image scanner plugin using zbar library')
+option('zxing', type : 'feature', value : 'auto', description : 'Barcode image scanner plugin using zxing-cpp library')
+option('wpe', type : 'feature', value : 'auto', description : 'WPE Web browser plugin')
+option(
+  'wpe_api',
+  type: 'combo',
+  value: 'auto',
+  choices: ['auto', '1.0', '1.1', '2.0'],
+  description: 'WPE WebKit API to target (1.0 = soup2, 1.1/2.0 = soup3)'
+)
+option('wpe2', type : 'feature', value : 'auto', description : 'WPE Web browser plugin')
+
+option('magicleap', type : 'feature', value : 'auto', description : 'Magic Leap platform support')
+option('v4l2codecs', type : 'feature', value : 'auto', description : 'Video4Linux Stateless CODECs support')
+option('uvcgadget', type : 'feature', value : 'auto', description : 'uvc video gadget plugin')
+option('isac', type : 'feature', value : 'auto', description : 'iSAC plugin')
+
+# AJA plugin options
+option('aja-include-dir', type : 'string', value : '',
+  description : 'Directory where AJA NTV2 headers are located')
+option('aja-lib-dir', type : 'string', value : '',
+  description : 'Directory where AJA NTV2 library is located')
+
+# CUDA library options
+option('cuda-nvmm', type : 'feature', value : 'auto', description : 'Enable NVMM support in cuda library')
+option('cuda-nvmm-include-path', type : 'string', value : '', description : 'Include path for NVMM support in cuda library')
+
+# D3D11/D3D12 HLSL library options
+option('d3d-hlsl-precompile', type : 'feature', value : 'auto', description : 'Enable buildtime HLSL compile for d3d11/d3d12 library/plugin')
+
+# D3D11 plugin options
+option('d3d11-math', type : 'feature', value : 'auto', description : 'Enable DirectX SIMD Math support')
+option('d3d11-hlsl-precompile', type : 'feature', value : 'auto', description : 'Enable buildtime HLSL compile for d3d11 library/plugin')
+option('d3d11-wgc', type : 'feature', value : 'auto', description : 'Windows Graphics Capture API support in d3d11 plugin')
+
+# D3D12 plugin options
+option('d3d12-wgc', type : 'feature', value : 'auto', description : 'Windows Graphics Capture API support in d3d12 plugin')
+
+# HLS plugin options
+option('hls', type : 'feature', value : 'auto', description : 'HTTP Live Streaming plugin')
+option('hls-crypto', type : 'combo', value : 'auto', choices : ['auto', 'nettle', 'libgcrypt', 'openssl'],
+  description: 'Crypto library to use for HLS plugin')
+
+# SCTP plugin options
+option('sctp-internal-usrsctp', type: 'feature', value : 'enabled',
+  description: 'Whether to use the bundled usrsctp library or the system one')
+
+# MSDK plugin options
+option('mfx_api', type : 'combo', choices : ['MSDK', 'oneVPL', 'auto'], value : 'auto',
+  description : 'Select MFX API to build against')
+
+# nvcodec plugin options
+option('nvcodec-cuda-precompile', type : 'feature', value : 'disabled', description : 'Enable CUDA kernel precompile')
+option('nvcodec-nvcc-arch', type : 'string', value : 'compute_52', description : 'GPU architectur for nvcc -arch option')
+
+# nvCOMP plugin options
+option('nvcomp-sdk-path', type: 'string', value : '',
+  description : 'nvCOMP SDK root directory')
+
+# nvdswrapper plugin options
+option('nvds-include-path', type: 'string', value : '',
+  description : 'DeepStream SDK include directory')
+option('nvds-lib-path', type: 'string', value : '',
+  description : 'DeepStream SDK library directory')
+
+# QSV plugin options
+option('mfx-modules-dir', type: 'string', value : '',
+  description : 'libmfx runtime module dir, linux only')
+
+# Qt6 plugin options
+option('qt6d3d11', type : 'feature', value : 'auto', description : 'Qt6 Direct3D11 plugin')
+option('qt-method', type: 'combo', value: 'auto', choices: ['auto', 'pkg-config', 'qmake'],
+  yield: true, description: 'Method to use to find Qt')
+
+# Vulkan integration library and plugin options
+option('vulkan', type: 'feature', value: 'auto', description: 'Vulkan integration library and video sink plugin')
+option('vulkan-video', type: 'feature', value: 'auto', description: 'Whether to use Vulkan Video Extensions for encoding/decoding')
+option('vulkan-windowing', type : 'array',
+  choices : ['x11', 'wayland', 'auto'], value : ['auto'],
+  description : 'A comma separated list of Vulkan windowing systems to enable. Non-Linux platforms are auto-detected.')
+
+# License-related feature options
+option('gpl', type: 'feature', value: 'disabled', yield: true,
+  description: 'Allow build plugins that have (A)GPL-licensed dependencies')
+
+# HIP plugin options
+option('hip-amd-precompile', type : 'feature', value : 'disabled', description : 'Enable HIP kernel precompile for AMD')
+option('hip-hipcc-arch', type : 'string', value : '', description : 'GPU architectur for hipcc --offload-arch option')
+option('hip-nvidia-precompile', type : 'feature', value : 'disabled', description : 'Enable HIP kernel precompile for NVIDIA')
+option('hip-nvcc-arch', type : 'string', value : 'compute_52', description : 'GPU architectur for nvcc -arch option')
+
+# Common feature options
+option('examples', type : 'feature', value : 'auto', yield : true)
+option('tools', type : 'feature', value : 'auto', yield : true)
+option('tests', type : 'feature', value : 'auto', yield : true)
+option('introspection', type : 'feature', value : 'auto', yield : true, description : 'Generate gobject-introspection bindings')
+option('nls', type : 'feature', value : 'auto', yield: true, description : 'Enable native language support (translations)')
+option('orc', type : 'feature', value : 'auto', yield : true)
+option('orc-compiler', type : 'feature', value : 'auto', yield: true, description : 'Enable targets to allow regeneration of disted orc files')
+option('extra-checks', type : 'feature', value : 'enabled', yield : true, description : 'Enable extra runtime checks')
+
+# Common options
+option('package-name', type : 'string', yield : true,
+  description : 'package name to use in plugins')
+option('package-origin', type : 'string', value : 'Unknown package origin', yield : true,
+  description : 'package origin URL to use in plugins')
+option('doc', type : 'feature', value : 'auto', yield: true,
+  description: 'Enable documentation.')
+option('glib_debug', type : 'feature', value : 'auto', yield : true, description : 'Enable GLib debug infrastructure (see docs/macros.txt)')
+option('glib_assert', type : 'boolean', value : true, yield : true, description : 'Enable GLib assertion (see docs/macros.txt)',
+  deprecated: {'enabled' : 'true', 'disabled' : 'false', 'auto' : 'false'},
+)
+option('glib_checks', type : 'boolean', value : true, yield : true, description : 'Enable GLib checks such as API guards (see docs/macros.txt)',
+  deprecated: {'enabled' : 'true', 'disabled' : 'false', 'auto' : 'false'},
+)
+
+# Deprecated, kept for backward compat
+option('gobject-cast-checks', type : 'feature', value : 'auto', yield : true,
+  description: 'Enable run-time GObject cast checks (auto = enabled for development, disabled for stable releases)',
+  deprecated: 'glib_debug')
+option('glib-asserts', type : 'feature', value : 'enabled', yield : true,
+  description: 'Enable GLib assertion (auto = enabled for development, disabled for stable releases)',
+  deprecated: 'glib_assert')
+option('glib-checks', type : 'feature', value : 'enabled', yield : true,
+  description: 'Enable GLib checks such as API guards (auto = enabled for development, disabled for stable releases)',
+  deprecated: 'glib_checks')
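The hunk above declares every plugin as a Meson `option()` entry, almost all of them `feature` options defaulting to `'auto'`. As a rough illustration of how such declarations can be inspected by a packager (a hypothetical helper script, not part of gst-plugins-bad), a few lines of Python with the standard `re` module can pull out option names and their default values:

```python
import re

# Matches lines such as: option('x265', type : 'feature', value : 'auto', ...)
# and captures the option name plus the first value after "value :".
OPTION_RE = re.compile(r"option\(\s*'([^']+)'.*?value\s*:\s*'?([\w.]+)'?", re.DOTALL)

sample = """
option('x265', type : 'feature', value : 'auto', description : 'HEVC/H.265 video encoder plugin')
option('gpl', type: 'feature', value: 'disabled', yield: true)
"""

defaults = dict(OPTION_RE.findall(sample))
print(defaults)  # {'x265': 'auto', 'gpl': 'disabled'}
```

At build time these options are set on the command line, e.g. `meson setup builddir -Dgpl=enabled -Dx265=enabled`, which is how the GPL-gated codecs named in the descriptions get built at all.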
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/po/LINGUAS -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/po/LINGUAS
Changed
@@ -1,1 +1,1 @@
-af ast az bg ca cs da de el en_GB eo es eu fi fr fur gl hr hu id it ja ka ky lt lv mt nb nl or pl pt_BR ro ru sk sl sq sr sv tr uk vi zh_CN zh_TW
+af ar ast az bg ca cs da de el en_GB eo es eu fi fr fur gl hr hu id it ja ka ky lt lv mt nb nl or pl pt_BR ro ru sk sl sq sr sv tr uk vi zh_CN zh_TW
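The LINGUAS change adds exactly one language code to the otherwise unchanged list. A quick set difference (illustrative only) confirms which code is new:

```python
# The old and new LINGUAS lines from the diff above, split into codes.
old = ("af ast az bg ca cs da de el en_GB eo es eu fi fr fur gl hr hu id it ja ka ky "
       "lt lv mt nb nl or pl pt_BR ro ru sk sl sq sr sv tr uk vi zh_CN zh_TW").split()
new = ("af ar ast az bg ca cs da de el en_GB eo es eu fi fr fur gl hr hu id it ja ka ky "
       "lt lv mt nb nl or pl pt_BR ro ru sk sl sq sr sv tr uk vi zh_CN zh_TW").split()

added = sorted(set(new) - set(old))
print(added)  # ['ar'] — the new Arabic catalog added in this revision
```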
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/po/ar.po
Added
@@ -0,0 +1,128 @@
+# Arabic translation for gst-plugins-bad-1.0 package.
+# Copyright (C) 2026 Free Software Foundation, Inc.
+# This file is distributed under the same license as the gst-plugins-bad package.
+# Zayed Al-Saidi <zayed.alsaidi@gmail.com>, 2026.
+#
+#
+msgid ""
+msgstr ""
+"Project-Id-Version: gst-plugins-bad-1.27.90\n"
+"Report-Msgid-Bugs-To: \n"
+"POT-Creation-Date: 2026-01-25 17:18+0000\n"
+"PO-Revision-Date: 2026-01-11 14:03+0400\n"
+"Last-Translator: Zayed Al-Saidi <zayed.alsaidi@gmail.com>\n"
+"Language-Team: Arabic <(nothing)>\n"
+"Language: ar\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"X-Bugs: Report translation errors to the Language-Team address.\n"
+"Plural-Forms: nplurals=6; plural=n==0 ? 0 : n==1 ? 1 : n==2 ? 2 : n%100>=3 "
+"&& n%100<=10 ? 3 : n%100>=11 ? 4 : 5;\n"
+"X-Generator: Lokalize 23.08.5\n"
+
+msgid "No URL set."
+msgstr "لم يعين عنوان."
+
+msgid "OpenCV failed to load template image"
+msgstr "فشل OpenCV في تحميل صورة القالب"
+
+msgid "Could not read title information for DVD."
+msgstr "يتعذر قراءة معلومات العنوان لقرص DVD."
+
+#, c-format
+msgid "Failed to open DVD device '%s'."
+msgstr "فشل فتح جهاز DVD '%s'."
+
+msgid "Failed to set PGC based seeking."
+msgstr "فشل تعيين السعي المستند إلى PGC."
+
+msgid ""
+"Could not read DVD. This may be because the DVD is encrypted and a DVD "
+"decryption library is not installed."
+msgstr ""
+"يتعذر قراءة قرص DVD. قد يكون هذا بسبب أن قرص DVD معميّ ومكتبة فك التعمية لقرص "
+"DVD غير مثبتة."
+
+msgid "Could not read DVD."
+msgstr "يتعذر قراءة DVD."
+
+msgid "This file contains no playable streams."
+msgstr "لا يحتوي هذا الملف على دفقات قابلة للتشغيل."
+
+msgid "Could not open sndfile stream for reading."
+msgstr "يتعذر فتح دفق sndfile للقراءة."
+
+msgid "Generated file has a larger preroll time than its streams duration"
+msgstr "الملف المولد له وقت تقديم (preroll) أكبر من مدة دفقاته"
+
+#, c-format
+msgid "Missing element '%s' - check your GStreamer installation."
+msgstr "العنصر '%s' مفقود - افحص تثبيت جي ستريمر في حاسوبك."
+
+msgid "File location is set to NULL, please set it to a valid filename"
+msgstr "موقع الملف معين كقيمة فارغة (NULL)، يرجى تعيينه لاسم ملف صالح"
+
+msgid "Digitalzoom element couldn't be created"
+msgstr "يتعذر إنشاء عنصر التقريب الرقمي"
+
+msgid "Subpicture format was not configured before data flow"
+msgstr "لم يضبط تنسيق الصورة الفرعية قبل تدفق البيانات"
+
+msgid "Failed to get fragment URL."
+msgstr "فشل الحصول على عنوان الشظية (fragment URL)."
+
+#, c-format
+msgid "Couldn't download fragments"
+msgstr "يتعذر تنزيل الشظايا"
+
+msgid "Internal data stream error."
+msgstr "خطأ داخلي في دفق البيانات."
+
+#, c-format
+msgid "Device \"%s\" does not exist."
+msgstr "الجهاز \"%s\" غير موجود."
+
+#, c-format
+msgid "Could not open frontend device \"%s\"."
+msgstr "يتعذر فتح جهاز الواجهة الأمامية \"%s\"."
+
+#, c-format
+msgid "Could not get settings from frontend device \"%s\"."
+msgstr "يتعذر الحصول على الإعدادات من جهاز الواجهة الأمامية \"%s\"."
+
+#, c-format
+msgid "Cannot enumerate delivery systems from frontend device \"%s\"."
+msgstr "يتعذر حصر أنظمة التوصيل من جهاز الواجهة الأمامية \"%s\"."
+
+#, c-format
+msgid "Could not open file \"%s\" for reading."
+msgstr "تعذر فتح ملف \"%s\" للقراءة."
+
+#, c-format
+msgid "Couldn't find channel configuration file"
+msgstr "يتعذر العثور على ملف ضبط القنوات"
+
+#, c-format
+msgid "Couldn't load channel configuration file: '%s'"
+msgstr "يتعذر تحميل ملف ضبط القنوات: '%s'"
+
+#, c-format
+msgid "Couldn't find details for channel '%s'"
+msgstr "يتعذر العثور على تفاصيل للقناة '%s'"
+
+#, c-format
+msgid "No properties for channel '%s'"
+msgstr "لا توجد خصائص للقناة '%s'"
+
+#, c-format
+msgid "Failed to set properties for channel '%s'"
+msgstr "فشل تعيين خصائص القناة '%s'"
+
+#, c-format
+msgid "Couldn't find channel configuration file: '%s'"
+msgstr "يتعذر العثور على ملف ضبط القنوات: '%s'"
+
+#, c-format
+msgid "Channel configuration file doesn't contain any channels"
+msgstr "ملف ضبط القنوات لا يحتوي على أي قنوات"
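The `Plural-Forms` header in the new Arabic catalog encodes gettext's six Arabic plural categories as a C-style ternary expression. Rewritten in Python for illustration (gettext evaluates the expression itself at runtime; this function is not part of the package):

```python
def arabic_plural(n: int) -> int:
    """Mirror of the PO header expression:
    plural = n==0 ? 0 : n==1 ? 1 : n==2 ? 2
             : n%100>=3 && n%100<=10 ? 3 : n%100>=11 ? 4 : 5
    """
    if n == 0:
        return 0          # zero
    if n == 1:
        return 1          # one
    if n == 2:
        return 2          # two
    if 3 <= n % 100 <= 10:
        return 3          # few
    if n % 100 >= 11:
        return 4          # many
    return 5              # other (e.g. 100, 101, 102)

# One representative n per category:
print([arabic_plural(n) for n in (0, 1, 2, 3, 11, 100)])  # [0, 1, 2, 3, 4, 5]
```

The index returned selects which of the six `msgstr[i]` entries is used for a given count, which is why `nplurals=6` must match the number of forms supplied for every plural entry.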
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/po/gst-plugins-bad-1.0.pot -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/po/gst-plugins-bad-1.0.pot
Changed
@@ -6,9 +6,9 @@
 #, fuzzy
 msgid ""
 msgstr ""
-"Project-Id-Version: gst-plugins-bad-1.26.10\n"
+"Project-Id-Version: gst-plugins-bad-1.28.0\n"
 "Report-Msgid-Bugs-To: \n"
-"POT-Creation-Date: 2025-12-25 15:45+0100\n"
+"POT-Creation-Date: 2026-01-27 17:03+0000\n"
 "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
 "Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
 "Language-Team: LANGUAGE <LL@li.org>\n"
@@ -48,7 +48,7 @@
 msgid "Could not read DVD."
 msgstr ""
 
-#: ext/smoothstreaming/gstmssdemux.c:439
+#: ext/smoothstreaming/gstmssdemux.c:441
 #: gst-libs/gst/adaptivedemux/gstadaptivedemux.c:735
 msgid "This file contains no playable streams."
 msgstr ""
@@ -75,7 +75,7 @@
 msgid "Digitalzoom element couldn't be created"
 msgstr ""
 
-#: gst/dvdspu/gstdvdspu.c:1570
+#: gst/dvdspu/gstdvdspu.c:1574
 msgid "Subpicture format was not configured before data flow"
 msgstr ""
@@ -89,7 +89,7 @@
 msgstr ""
 
 #: gst-libs/gst/adaptivedemux/gstadaptivedemux.c:4102
-#: gst/mpegtsdemux/mpegtsbase.c:1776
+#: gst/mpegtsdemux/mpegtsbase.c:1799
 msgid "Internal data stream error."
 msgstr ""
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/po/gst-plugins-bad.pot -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/po/gst-plugins-bad.pot
Changed
@@ -6,9 +6,9 @@
 #, fuzzy
 msgid ""
 msgstr ""
-"Project-Id-Version: gst-plugins-bad-1.26.10\n"
+"Project-Id-Version: gst-plugins-bad-1.28.0\n"
 "Report-Msgid-Bugs-To: \n"
-"POT-Creation-Date: 2025-12-25 15:45+0100\n"
+"POT-Creation-Date: 2026-01-27 17:03+0000\n"
 "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
 "Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
 "Language-Team: LANGUAGE <LL@li.org>\n"
@@ -48,7 +48,7 @@
 msgid "Could not read DVD."
 msgstr ""
 
-#: ext/smoothstreaming/gstmssdemux.c:439
+#: ext/smoothstreaming/gstmssdemux.c:441
 #: gst-libs/gst/adaptivedemux/gstadaptivedemux.c:735
 msgid "This file contains no playable streams."
 msgstr ""
@@ -75,7 +75,7 @@
 msgid "Digitalzoom element couldn't be created"
 msgstr ""
 
-#: gst/dvdspu/gstdvdspu.c:1570
+#: gst/dvdspu/gstdvdspu.c:1574
 msgid "Subpicture format was not configured before data flow"
 msgstr ""
@@ -89,7 +89,7 @@
 msgstr ""
 
 #: gst-libs/gst/adaptivedemux/gstadaptivedemux.c:4102
-#: gst/mpegtsdemux/mpegtsbase.c:1776
+#: gst/mpegtsdemux/mpegtsbase.c:1799
 msgid "Internal data stream error."
 msgstr ""
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/po/hr.po -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/po/hr.po
Changed
@@ -4,13 +4,13 @@
 # This file is distributed under the same license as the gst-plugins-bad package.
 #
 # Tomislav Krznar <tomislav.krznar@gmail.com>, 2012.
-# Božidar Putanec <bozidarp@yahoo.com>, 2016, 2018, 2019, 2021, 2022, 2024.
+# Božidar Putanec <bozidarp@yahoo.com>, 2016, 2018, 2019, 2021, 2022, 2024, 2026.
 msgid ""
 msgstr ""
-"Project-Id-Version: gst-plugins-bad-1.24.0\n"
+"Project-Id-Version: gst-plugins-bad-1.27.90\n"
 "Report-Msgid-Bugs-To: \n"
-"POT-Creation-Date: 2025-02-23 16:56+0000\n"
-"PO-Revision-Date: 2024-03-08 10:41-0800\n"
+"POT-Creation-Date: 2026-01-25 17:18+0000\n"
+"PO-Revision-Date: 2026-01-07 20:50-0800\n"
 "Last-Translator: Božidar Putanec <bozidarp@yahoo.com>\n"
 "Language-Team: Croatian <lokalizacija@linux.hr>\n"
 "Language: hr\n"
@@ -20,7 +20,7 @@
 "X-Bugs: Report translation errors to the Language-Team address.\n"
 "Plural-Forms: nplurals=3; plural=(n%10==1 && n%100!=11 ? 0 : n%10>=2 && "
 "n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2);\n"
-"X-Generator: Poedit 3.0\n"
+"X-Generator: Vim9.1\n"
 
 msgid "No URL set."
 msgstr "Nema URL-adrese (nije postavljena)."
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/po/ro.po -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/po/ro.po
Changed
@@ -2,9 +2,9 @@ # Mesajele în limba română pentru pachetul gst-plugins-bad. # This file is distributed under the same license as the gst-plugins-bad package. # -# Lucian Adrian Grijincu <lucian.grijincu@gmail.com>, 2010.. +# Lucian Adrian Grijincu <lucian.grijincu@gmail.com>, 2010. # Florentina Mușat <florentina.musat.28@gmail.com>, 2020. -# Remus-Gabriel Chelu <remusgabriel.chelu@disroot.org>. 2022 - 2024. +# Remus-Gabriel Chelu <remusgabriel.chelu@disroot.org>. 2022 - 2024, 2026. # # Cronologia traducerii fișierului „gstreamer”: # Traducerea inițială, făcută de LAG, pentru versiunea gst-plugins-bad 0.10.18.2 @@ -14,14 +14,15 @@ # Actualizare a traducerii pentru versiunea 1.19.2, făcută de R-GC, ian-2022. # Actualizare a traducerii pentru versiunea 1.21.90, făcută de R-GC, ian-2023. # Actualizare a traducerii pentru versiunea 1.24.0, făcută de R-GC, mar-2024. +# Actualizare a traducerii pentru versiunea 1.27.90, făcută de R-GC, ian-2026. # Actualizare a traducerii pentru versiunea Y, făcută de X, Z(anul). # msgid "" msgstr "" -"Project-Id-Version: gst-plugins-bad 1.24.0\n" +"Project-Id-Version: gst-plugins-bad 1.27.90\n" "Report-Msgid-Bugs-To: \n" -"POT-Creation-Date: 2024-11-03 17:37+0000\n" -"PO-Revision-Date: 2024-03-08 10:15+0100\n" +"POT-Creation-Date: 2026-01-25 17:18+0000\n" +"PO-Revision-Date: 2026-01-06 20:05+0100\n" "Last-Translator: Remus-Gabriel Chelu <remusgabriel.chelu@disroot.org>\n" "Language-Team: Romanian <translation-team-ro@lists.sourceforge.net>\n" "Language: ro\n" @@ -31,7 +32,7 @@ "Plural-Forms: nplurals=3; plural=(n==1 ? 0 : (n==0 || (n%100 > 0 && n%100 < " "20)) ? 1 : 2);;\n" "X-Bugs: Report translation errors to the Language-Team address.\n" -"X-Generator: Poedit 3.2.2\n" +"X-Generator: Poedit 3.6\n" msgid "No URL set." msgstr "Nicio adresă URL definită." @@ -53,14 +54,14 @@ "Could not read DVD. This may be because the DVD is encrypted and a DVD " "decryption library is not installed." msgstr "" -"Nu s-a putut citi DVD-ul. Aceasta poate fi pentru că DVD-ul este criptat și " -"o bibliotecă de decriptare a DVD-urilor nu este instalată." +"Nu s-a putut citi DVD-ul. Acest lucru se poate datora faptului că DVD-ul " +"este criptat și o bibliotecă de decriptare a DVD-ului nu este instalată." msgid "Could not read DVD." msgstr "Nu s-a putut citi DVD-ul." msgid "This file contains no playable streams." -msgstr "Fișierul nu conține fluxuri de redat." +msgstr "Acest fișier nu conține fluxuri ce pot fi redate." msgid "Could not open sndfile stream for reading." msgstr "Nu s-a putut deschide fluxul sndfile pentru citire." @@ -112,24 +113,30 @@ # sinonimul, traducerea adaptată de «client», # mi se pare cel puțin de moment o alegere # bună. +# **** +# modificat la (versiunea 1.27.90): +# „Nu s-a putut deschide dispozitivul de interfață „%s”.” #, c-format msgid "Could not open frontend device \"%s\"." -msgstr "Nu s-a putut deschide dispozitivul client „%s”." +msgstr "Nu s-a putut deschide dispozitivul de interfață „%s”." # R-GC, scrie: # modificat de la: -# „Nu s-au putut obține configurările de la dispozitivul frontend „%s”.” +# „Nu s-au putut obține configurările de la dispozitivul frontend „%s”.” - la: +# „Nu s-au putut obține configurările de la dispozitivul client „%s”.” +# *** +# modificat la (versiunea 1.27.90): +# „Nu s-au putut obține configurările de la dispozitivul de interfață „%s”.” #, c-format msgid "Could not get settings from frontend device \"%s\"." -msgstr "Nu s-au putut obține configurările de la dispozitivul client „%s”." +msgstr "" +"Nu s-au putut obține configurările de la dispozitivul de interfață „%s”." -# R-GC, scrie: -# modificat de la: -# „Nu s-au putut enumera sistemele de livrare de la dispozitivul de interfață „%s”.” #, c-format msgid "Cannot enumerate delivery systems from frontend device \"%s\"." msgstr "" -"Nu s-au putut enumera sistemele de livrare de la dispozitivul client„%s”." +"Nu s-au putut enumera sistemele de livrare de la dispozitivul de interfață " +"„%s”." #, c-format msgid "Could not open file \"%s\" for reading."
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/scripts/gen-changelog.py -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/scripts/gen-changelog.py
Changed
@@ -28,7 +28,6 @@ 'gst-rtsp-server': '5029c85a46a8c366c4bf272d503e22bbcd624ece', 'gst-editing-services': 'ee8bf88ebf131cf7c7161356540efc20bf411e14', 'gst-python': 'b3e564eff577e2f577d795051bbcca85d47c89dc', - 'gstreamer-vaapi': 'c89e9afc5d43837c498a55f8f13ddf235442b83b', 'gst-devtools': 'da962d096af9460502843e41b7d25fdece7ff1c2', 'gstreamer-sharp': 'b94528f8e7979df49fedf137dfa228d8fe475e1b', }
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/scripts/update-orc-dist-files.py -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/scripts/update-orc-dist-files.py
Changed
@@ -43,6 +43,3 @@ # copy generated files from build dir into source dir shutil.copyfile(gen_header, dist_h) shutil.copyfile(gen_source, dist_c) - -# run gst-indent on the .c files -subprocess.run(['gst-indent-1.0', dist_c])
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/aja/gstajasrc.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/aja/gstajasrc.cpp
Changed
@@ -2325,7 +2325,7 @@ // if the *next* output frame should have the discont flag set bool discont = true; // if the pipeline clock is the monotonic system clock - bool clock_is_monotonic_system_clock = false; + bool clock_is_system_monotonic = false; // if the next frame is the first one after autocirculate was started bool first_frame_after_start = true; // acFrameTime of the last captured frame. Used to detect @@ -2360,17 +2360,7 @@ gst_clear_object(&clock); clock = gst_element_get_clock(GST_ELEMENT_CAST(self)); - clock_is_monotonic_system_clock = false; - if (G_OBJECT_TYPE(clock) == GST_TYPE_SYSTEM_CLOCK) { - GstClock *system_clock = gst_system_clock_obtain(); - - if (clock == system_clock) { - GstClockType clock_type; - g_object_get(clock, "clock-type", &clock_type, NULL); - clock_is_monotonic_system_clock = clock_type == GST_CLOCK_TYPE_MONOTONIC; - } - gst_clear_object(&system_clock); - } + clock_is_system_monotonic = gst_clock_is_system_monotonic(clock); // Reset all local state after restart have_signal = true; @@ -2890,7 +2880,7 @@ gst_clock_unadjust_with_calibration(NULL, frame_src_time, internal, external, num, denom); - if (clock_is_monotonic_system_clock) { + if (clock_is_system_monotonic) { // If the pipeline is using the monotonic system clock then we can // just use this. GST_OBJECT_LOCK(clock);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/aja/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/aja/meson.build
Changed
@@ -122,7 +122,6 @@ libajantv2_dep = dependency('libajantv2', include_type: 'system', required: aja_option, - allow_fallback: true, default_options: 'warning_level=0') if not libajantv2_dep.found() subdir_done()
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/amfcodec/gstamfav1enc.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/amfcodec/gstamfav1enc.cpp
Changed
@@ -695,7 +695,7 @@ } } - gst_element_class_set_metadata (element_class, + gst_element_class_set_static_metadata (element_class, "AMD AMF AV1 Video Encoder", "Codec/Encoder/Video/Hardware", "Encode AV1 video streams using AMF API",
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/amfcodec/gstamfh264enc.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/amfcodec/gstamfh264enc.cpp
Changed
@@ -820,7 +820,7 @@ pa_param_flags)); } } - gst_element_class_set_metadata (element_class, + gst_element_class_set_static_metadata (element_class, "AMD AMF H.264 Video Encoder", "Codec/Encoder/Video/Hardware", "Encode H.264 video streams using AMF API",
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/amfcodec/gstamfh265enc.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/amfcodec/gstamfh265enc.cpp
Changed
@@ -716,7 +716,7 @@ } } - gst_element_class_set_metadata (element_class, + gst_element_class_set_static_metadata (element_class, "AMD AMF H.265 Video Encoder", "Codec/Encoder/Video/Hardware", "Encode H.265 video streams using AMF API",
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/androidmedia/gstamc.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/androidmedia/gstamc.c
Changed
@@ -36,6 +36,7 @@ #endif #include "gstamc.h" +#include "gstamcutils.h" #include "gstamc-constants.h" #include "gstamcvideodec.h" @@ -256,15 +257,19 @@ gst_codec_info->is_encoder = is_encoder; gst_codec_info->gl_output_only = FALSE; - if (!gst_amc_codec_info_handle_is_hardware_accelerated (codec_info, &is_hw, - &error)) { - GST_WARNING ("Failed to detect if codec is hardware-accelerated: %s", - error ? error->message : "unknown error"); - g_clear_error (&error); - gst_codec_info->accel = AMC_CODEC_ACCEL_IS_UNKNOWN; + if (gst_amc_get_android_level () >= 29) { + if (!gst_amc_codec_info_handle_is_hardware_accelerated (codec_info, + &is_hw, &error)) { + GST_WARNING ("Failed to detect if codec is hardware-accelerated: %s", + error ? error->message : "unknown error"); + g_clear_error (&error); + gst_codec_info->accel = AMC_CODEC_ACCEL_IS_UNKNOWN; + } else { + gst_codec_info->accel = + is_hw ? AMC_CODEC_ACCEL_IS_HW : AMC_CODEC_ACCEL_IS_SW; + } } else { - gst_codec_info->accel = - is_hw ? AMC_CODEC_ACCEL_IS_HW : AMC_CODEC_ACCEL_IS_SW; + gst_codec_info->accel = AMC_CODEC_ACCEL_IS_UNKNOWN; } supported_types =
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/androidmedia/gstamcutils.c
Added
@@ -0,0 +1,34 @@ +/* + * Copyright (C) 2026 Nirbheek Chauhan <nirbheek@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation + * version 2.1 of the License. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this library; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + * + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include <sys/system_properties.h> + +#include "gstamcutils.h" + +int +gst_amc_get_android_level () +{ + char sdk_ver_str[PROP_VALUE_MAX]; + int len = __system_property_get ("ro.build.version.sdk", sdk_ver_str); + return (len > 0) ? atoi (sdk_ver_str) : -1; +}
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/androidmedia/gstamcutils.h
Added
@@ -0,0 +1,31 @@ +/* + * Copyright (C) 2026 Nirbheek Chauhan <nirbheek@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation + * version 2.1 of the License. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this library; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + * + */ + +#ifndef __GST_AMC_UTILS_H__ +#define __GST_AMC_UTILS_H__ + +#include <glib.h> + +G_BEGIN_DECLS + +int gst_amc_get_android_level (void); + +G_END_DECLS + +#endif
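The new `gst_amc_get_android_level ()` helper replaces the old JNI lookup of `android.os.Build$VERSION.SDK_INT` (removed from gstjniutils.c) with a read of the `ro.build.version.sdk` system property. Its return convention — a positive property length yields the parsed SDK level, anything else yields -1 — can be sketched as a small pure function; `parse_sdk_level` is a hypothetical name used here for illustration only:

```c
#include <stdlib.h>

/* Hypothetical helper mirroring the return convention of
 * gst_amc_get_android_level (): a positive property length yields the
 * parsed SDK level, anything else yields -1. */
static int
parse_sdk_level (const char *sdk_ver_str, int len)
{
  return (len > 0) ? atoi (sdk_ver_str) : -1;
}
```

Callers such as gstamc.c then gate API-29-only lookups like `isHardwareAccelerated()` on the returned level, falling back to `AMC_CODEC_ACCEL_IS_UNKNOWN` on older devices.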
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/androidmedia/gstamcvideodec.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/androidmedia/gstamcvideodec.c
Changed
@@ -578,8 +578,8 @@ self = GST_AMC_VIDEO_DEC (element); GST_DEBUG_OBJECT (element, "changing state: %s => %s", - gst_element_state_get_name (GST_STATE_TRANSITION_CURRENT (transition)), - gst_element_state_get_name (GST_STATE_TRANSITION_NEXT (transition))); + gst_state_get_name (GST_STATE_TRANSITION_CURRENT (transition)), + gst_state_get_name (GST_STATE_TRANSITION_NEXT (transition))); switch (transition) { case GST_STATE_CHANGE_NULL_TO_READY:
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/androidmedia/gstamcvideoenc.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/androidmedia/gstamcvideoenc.c
Changed
@@ -41,10 +41,7 @@ #define orc_memcpy memcpy #endif -#ifdef HAVE_JNI_H -#include "gstjniutils.h" -#endif - +#include "gstamcutils.h" #include "gstamcvideoenc.h" #include "gstamc-constants.h" @@ -288,15 +285,12 @@ } /* On Android N_MR1 and higher, i-frame-interval can be a float value */ -#ifdef HAVE_JNI_H - if (gst_amc_jni_get_android_level () >= 25) { + if (gst_amc_get_android_level () >= 25) { GST_LOG_OBJECT (encoder, "Setting i-frame-interval to %f", encoder->i_frame_int); gst_amc_format_set_float (format, "i-frame-interval", encoder->i_frame_int, &err); - } else -#endif - { + } else { int i_frame_int = encoder->i_frame_int; /* Round a fractional interval to 1 per sec on older Android */ if (encoder->i_frame_int > 0 && encoder->i_frame_int < 1.0)
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/androidmedia/gstjniutils.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/androidmedia/gstjniutils.c
Changed
@@ -43,27 +43,6 @@ static pthread_key_t current_jni_env; static jobject (*get_class_loader) (void); -gint -gst_amc_jni_get_android_level () -{ - JNIEnv *env; - gint ret = __ANDROID_API__; - jfieldID sdkIntFieldID = NULL; - - env = gst_amc_jni_get_env (); - - jclass versionClass = (*env)->FindClass (env, "android/os/Build$VERSION"); - if (versionClass == NULL) - goto done; - - sdkIntFieldID = (*env)->GetStaticFieldID (env, versionClass, "SDK_INT", "I"); - if (sdkIntFieldID == NULL) - goto done; - - ret = (*env)->GetStaticIntField (env, versionClass, sdkIntFieldID); -done: - return ret; -} jclass gst_amc_jni_get_class (JNIEnv * env, GError ** err, const gchar * name) @@ -878,16 +857,21 @@ return class; } -#define CALL_STATIC_TYPE_METHOD(_type, _name, _jname) \ +#define CALL_STATIC_TYPE_METHOD(_type, _name, _jname) \ gboolean gst_amc_jni_call_static_##_name##_method (JNIEnv *env, GError ** err, jclass klass, jmethodID methodID, _type * value, ...) \ { \ gboolean ret = TRUE; \ va_list args; \ - va_start(args, value); \ + if (methodID == NULL) { \ + gst_amc_jni_set_error (env, err, GST_LIBRARY_ERROR, GST_LIBRARY_ERROR_INIT, \ + "Java method not found"); \ + return FALSE; \ + } \ + va_start(args, value); \ *value = (*env)->CallStatic##_jname##MethodV(env, klass, methodID, args); \ if ((*env)->ExceptionCheck (env)) { \ - gst_amc_jni_set_error (env, err, GST_LIBRARY_ERROR, GST_LIBRARY_ERROR_FAILED, \ - "Failed to call static Java method"); \ + gst_amc_jni_set_error (env, err, GST_LIBRARY_ERROR, GST_LIBRARY_ERROR_FAILED, \ + "Failed to call static Java method"); \ ret = FALSE; \ } \ va_end(args); \ @@ -927,11 +911,16 @@ { \ gboolean ret = TRUE; \ va_list args; \ - va_start(args, value); \ + if (methodID == NULL) { \ + gst_amc_jni_set_error (env, err, GST_LIBRARY_ERROR, GST_LIBRARY_ERROR_INIT, \ + "Java method not found"); \ + return FALSE; \ + } \ + va_start(args, value); \ *value = (*env)->Call##_jname##MethodV(env, obj, methodID, args); \ if ((*env)->ExceptionCheck (env)) { \ - gst_amc_jni_set_error (env, err, GST_LIBRARY_ERROR, GST_LIBRARY_ERROR_FAILED, \ - "Failed to call Java method"); \ + gst_amc_jni_set_error (env, err, GST_LIBRARY_ERROR, GST_LIBRARY_ERROR_FAILED, \ + "Failed to call Java method"); \ ret = FALSE; \ } \ va_end(args); \
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/androidmedia/gstjniutils.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/androidmedia/gstjniutils.h
Changed
@@ -37,8 +37,6 @@ #define GPOINTER_TO_JLONG(value) (jlong)(jint)(value) #endif -gint gst_amc_jni_get_android_level(void); - jclass gst_amc_jni_get_class (JNIEnv * env, GError ** err, const gchar * name);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/androidmedia/jni/gstamc-codeclist-jni.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/androidmedia/jni/gstamc-codeclist-jni.c
Changed
@@ -24,6 +24,7 @@ #endif #include "../gstjniutils.h" +#include "../gstamcutils.h" #include "../gstamc-codeclist.h" #include "gstamc-jni.h" @@ -159,14 +160,16 @@ return FALSE; } - media_codecinfo.is_hardware_accelerated = - gst_amc_jni_get_method_id (env, &err, media_codecinfo.klass, - "isHardwareAccelerated", "()Z"); - if (!media_codecinfo.is_hardware_accelerated) { - GST_ERROR ("Failed to get android.media.MediaCodecInfo " - "isHardwareAccelerated(): %s", err->message); - g_clear_error (&err); - return FALSE; + if (gst_amc_get_android_level () >= 29) { + media_codecinfo.is_hardware_accelerated = + gst_amc_jni_get_method_id (env, &err, media_codecinfo.klass, + "isHardwareAccelerated", "()Z"); + if (!media_codecinfo.is_hardware_accelerated) { + GST_ERROR ("Failed to get android.media.MediaCodecInfo " + "isHardwareAccelerated(): %s", err->message); + g_clear_error (&err); + return FALSE; + } } media_codeccapabilities.klass =
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/androidmedia/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/androidmedia/meson.build
Changed
@@ -1,6 +1,7 @@ androidmedia_sources = [ 'gstamcaudiodec.c', 'gstamc.c', + 'gstamcutils.c', 'gstamc-codec.c', 'gstamc-format.c', 'gstamcsurfacetexture.c', @@ -33,6 +34,7 @@ 'gstamc-codeclist.h', 'gstamc-constants.h', 'gstamc-format.h', + 'gstamcutils.h', 'gstamc.h', 'gstamcsurfacetexture.h', 'gstamcvideodec.h',
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/applemedia/avfassetsrc.m -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/applemedia/avfassetsrc.m
Changed
@@ -44,7 +44,7 @@ #define GST_CAT_DEFAULT gst_avf_asset_src_debug #define CMTIME_TO_GST_TIME(x) \ - (x.value == 0 ? 0 : (guint64)(x.value * GST_SECOND / x.timescale)); + (CMTIME_IS_INVALID(x) ? GST_CLOCK_TIME_NONE : (guint64)(x.value * GST_SECOND / x.timescale)); #define GST_AVF_ASSET_SRC_LOCK(x) (g_mutex_lock (&x->lock)); #define GST_AVF_ASSET_SRC_UNLOCK(x) (g_mutex_unlock (&x->lock)); #define MEDIA_TYPE_TO_STR(x) \ @@ -1050,14 +1050,17 @@ return NULL; } + ts = CMSampleBufferGetPresentationTimeStamp (cmbuf); + if (!CMTIME_IS_VALID (ts)) { + GST_WARNING ("Buffer %p has invalid timestamp", cmbuf); + } + dur = CMSampleBufferGetDuration (cmbuf); buf = gst_core_media_buffer_new (cmbuf, FALSE, NULL); CFRelease (cmbuf); if (buf == NULL) return NULL; - /* cmbuf is now retained by buf (in meta) */ - dur = CMSampleBufferGetDuration (cmbuf); - ts = CMSampleBufferGetPresentationTimeStamp (cmbuf); - if (dur.value != 0) { + + if (CMTIME_IS_VALID (dur) && dur.value != 0) { GST_BUFFER_DURATION (buf) = CMTIME_TO_GST_TIME (dur); } GST_BUFFER_TIMESTAMP (buf) = CMTIME_TO_GST_TIME (ts); @@ -1065,7 +1068,14 @@ GST_TIME_FORMAT, MEDIA_TYPE_TO_STR (type), GST_TIME_ARGS(GST_BUFFER_TIMESTAMP (buf)), GST_TIME_ARGS(GST_BUFFER_DURATION (buf))); - if (GST_BUFFER_TIMESTAMP (buf) > position) { + + /* FIXME: Buffers with invalid timestamp don't contribute to advancing the + * position. We have options: 1) try to use the previous buffer duration if + * available, 2) advance by one nominal frame duration (though this won't work + * with VFR content), or 3) advance by some epsilon value. Not sure yet what's + * best. + */ + if (GST_BUFFER_TIMESTAMP_IS_VALID (buf) && GST_BUFFER_TIMESTAMP (buf) > position) { position = GST_BUFFER_TIMESTAMP (buf); } return buf;
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/applemedia/avfvideosrc.m -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/applemedia/avfvideosrc.m
Changed
@@ -1225,7 +1225,7 @@ gstpushsrc_class->create = gst_avf_video_src_create; - gst_element_class_set_metadata (gstelement_class, + gst_element_class_set_static_metadata (gstelement_class, "Video Source (AVFoundation)", "Source/Video/Hardware", "Reads frames from an iOS/MacOS AVFoundation device", "Ole André Vadla Ravnås <oleavr@soundrop.com>");
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/applemedia/avsamplevideosink.m -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/applemedia/avsamplevideosink.m
Changed
@@ -100,7 +100,7 @@ "The CoreAnimation layer that can be placed in the render tree", G_PARAM_READABLE | G_PARAM_STATIC_STRINGS)); - gst_element_class_set_metadata (element_class, "AV Sample video sink", + gst_element_class_set_static_metadata (element_class, "AV Sample video sink", "Sink/Video", "A videosink based on AVSampleBuffers", "Matthew Waters <matthew@centricular.com>"); @@ -290,6 +290,8 @@ return kCVPixelFormatType_24BGR; case GST_VIDEO_FORMAT_NV12: return kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange; + case GST_VIDEO_FORMAT_P010_10LE: + return kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange; case GST_VIDEO_FORMAT_I420: return kCVPixelFormatType_420YpCbCr8Planar; case GST_VIDEO_FORMAT_YUY2: @@ -329,6 +331,8 @@ return GST_VIDEO_FORMAT_BGR; case kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange: return GST_VIDEO_FORMAT_NV12; + case kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange: + return GST_VIDEO_FORMAT_P010_10LE; case kCVPixelFormatType_420YpCbCr8Planar: return GST_VIDEO_FORMAT_I420; case kCVPixelFormatType_422YpCbCr8_yuvs:
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/applemedia/coremediabuffer.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/applemedia/coremediabuffer.c
Changed
@@ -127,6 +127,8 @@ return GST_VIDEO_FORMAT_I420; case kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange: return GST_VIDEO_FORMAT_NV12; + case kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange: + return GST_VIDEO_FORMAT_P010_10LE; case kCVPixelFormatType_422YpCbCr8_yuvs: return GST_VIDEO_FORMAT_YUY2; case kCVPixelFormatType_422YpCbCr8:
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/applemedia/corevideobuffer.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/applemedia/corevideobuffer.c
Changed
@@ -220,6 +220,8 @@ return GST_VIDEO_FORMAT_I420; case kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange: return GST_VIDEO_FORMAT_NV12; + case kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange: + return GST_VIDEO_FORMAT_P010_10LE; case kCVPixelFormatType_422YpCbCr8_yuvs: return GST_VIDEO_FORMAT_YUY2; case kCVPixelFormatType_422YpCbCr8:
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/applemedia/helpers.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/applemedia/helpers.c
Changed
@@ -32,6 +32,9 @@ case kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange: case kCVPixelFormatType_420YpCbCr8BiPlanarFullRange: return GST_VIDEO_FORMAT_NV12; + case kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange: + case kCVPixelFormatType_420YpCbCr10BiPlanarFullRange: + return GST_VIDEO_FORMAT_P010_10LE; case kCVPixelFormatType_422YpCbCr8: return GST_VIDEO_FORMAT_UYVY; case kCVPixelFormatType_422YpCbCr8_yuvs: @@ -62,6 +65,8 @@ return kCVPixelFormatType_420YpCbCr8Planar; case GST_VIDEO_FORMAT_NV12: return kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange; + case GST_VIDEO_FORMAT_P010_10LE: + return kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange; case GST_VIDEO_FORMAT_UYVY: return kCVPixelFormatType_422YpCbCr8; case GST_VIDEO_FORMAT_YUY2:
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/applemedia/helpers.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/applemedia/helpers.h
Changed
@@ -31,6 +31,12 @@ #endif #define GST_APPLEMEDIA_HAVE_64RGBALE __builtin_available(macOS 11.3, *) +// kCMVideoCodecType_AV1 is only available for M3 series or later +// The actual FourCC value for AV1 is 'av01' +#if defined(MAC_OS_X_VERSION_MAX_ALLOWED) && MAC_OS_X_VERSION_MAX_ALLOWED < 130100 +#define kCMVideoCodecType_AV1 'av01' +#endif + #define GST_CVPIXELFORMAT_FOURCC_ARGS(fourcc) \ __GST_PRINT_CHAR(((fourcc) >> 24) & 0xff), \ __GST_PRINT_CHAR(((fourcc) >> 16) & 0xff), \
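The fallback `#define kCMVideoCodecType_AV1 'av01'` added to helpers.h relies on C's multi-character constants packing the four bytes big-endian, which is the same layout the existing `GST_CVPIXELFORMAT_FOURCC_ARGS` macro unpacks. A minimal sketch of the equivalent explicit packing (the `fourcc` helper is illustrative, not part of the source):

```c
#include <stdint.h>

/* Pack four ASCII characters into a big-endian FourCC value, the same
 * byte order the GST_CVPIXELFORMAT_FOURCC_ARGS macro unpacks. */
static uint32_t
fourcc (char a, char b, char c, char d)
{
  return ((uint32_t) (uint8_t) a << 24) | ((uint32_t) (uint8_t) b << 16) |
      ((uint32_t) (uint8_t) c << 8) | (uint32_t) (uint8_t) d;
}
```

With this, `fourcc ('a', 'v', '0', '1')` yields `0x61763031`, matching the value `'av01'` produces on compilers that pack multi-character constants this way.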
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/applemedia/vtdec.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/applemedia/vtdec.c
Changed
@@ -50,8 +50,12 @@ #include "config.h" #endif +#ifdef HAVE_IOS +#include <dlfcn.h> +#endif #include <string.h> #include <gst/gst.h> +#include <gst/base/gstbytewriter.h> #include <gst/video/video.h> #include <gst/video/gstvideodecoder.h> #include <gst/gl/gstglcontext.h> @@ -76,6 +80,10 @@ VTDEC_FRAME_FLAG_ERROR = (1 << 12), }; +#if (defined(__IPHONE_OS_VERSION_MAX_ALLOWED) && __IPHONE_OS_VERSION_MAX_ALLOWED < 140000) || (defined(MAC_OS_X_VERSION_MAX_ALLOWED) && MAC_OS_X_VERSION_MAX_ALLOWED < 110000) +#define kCMVideoCodecType_VP9 'vp09' +#endif + static void gst_vtdec_finalize (GObject * object); static gboolean gst_vtdec_start (GstVideoDecoder * decoder); @@ -115,21 +123,32 @@ GstBuffer * codec_data, int *length); static gboolean gst_vtdec_compute_dpb_size (GstVtdec * vtdec, CMVideoCodecType cm_format, GstBuffer * codec_data); +static gboolean gst_vtdec_check_vp9_support (GstVtdec * vtdec); +static gboolean gst_vtdec_build_vp9_vpcc_from_caps (GstVtdec * vtdec, + GstStructure * caps_struct); +static gboolean gst_vtdec_check_av1_support (GstVtdec * vtdec); +static gboolean gst_vtdec_handle_av1_sequence_header (GstVtdec * vtdec, + GstVideoCodecFrame * frame); static void gst_vtdec_set_latency (GstVtdec * vtdec); static void gst_vtdec_set_context (GstElement * element, GstContext * context); +static GstCaps *gst_vtdec_getcaps (GstVideoDecoder * decoder, GstCaps * filter); static GstStaticPadTemplate gst_vtdec_sink_template = GST_STATIC_PAD_TEMPLATE ("sink", GST_PAD_SINK, GST_PAD_ALWAYS, GST_STATIC_CAPS ("video/x-h264, stream-format=avc, alignment=au," - " width=(int)[1, MAX], height=(int)[1, MAX];" + " width=(int)[8, MAX], height=(int)[8, MAX];" "video/x-h265, stream-format=(string){ hev1, hvc1 }, alignment=au," - " width=(int)[1, MAX], height=(int)[1, MAX];" + " width=(int)[16, MAX], height=(int)[16, MAX];" + "video/x-av1, stream-format=obu-stream, alignment=(string){ tu, frame }, " + "width=(int)[64, MAX], height=(int)[64, MAX];" "video/mpeg, mpegversion=2, systemstream=false, parsed=true;" "image/jpeg;" "video/x-prores, variant = { (string)standard, (string)hq, (string)lt," - " (string)proxy, (string)4444, (string)4444xq };") + " (string)proxy, (string)4444, (string)4444xq };" + "video/x-vp9, profile=(string){ 0, 2 }, " + " width=(int)[64, MAX], height=(int)[64, MAX];") ); /* define EnableHardwareAcceleratedVideoDecoder in < 10.9 */ @@ -142,7 +161,7 @@ CFSTR ("RequireHardwareAcceleratedVideoDecoder"); #endif -#define VIDEO_SRC_CAPS_FORMATS "{ NV12, AYUV64, ARGB64_BE }" +#define VIDEO_SRC_CAPS_FORMATS "{ NV12, AYUV64, ARGB64_BE, P010_10LE }" #define VIDEO_SRC_CAPS_NATIVE \ GST_VIDEO_CAPS_MAKE(VIDEO_SRC_CAPS_FORMATS) ";" \ @@ -200,6 +219,7 @@ video_decoder_class->handle_frame = GST_DEBUG_FUNCPTR (gst_vtdec_handle_frame); video_decoder_class->sink_event = GST_DEBUG_FUNCPTR (gst_vtdec_sink_event); + video_decoder_class->getcaps = GST_DEBUG_FUNCPTR (gst_vtdec_getcaps); } static void @@ -289,6 +309,12 @@ CFRelease (vtdec->format_description); vtdec->format_description = NULL; + g_clear_pointer (&vtdec->vp9_vpcc, g_free); + vtdec->vp9_vpcc_size = 0; + if (vtdec->av1_sequence_header_obu) + gst_buffer_unref (vtdec->av1_sequence_header_obu); + vtdec->av1_sequence_header_obu = NULL; + #if defined(APPLEMEDIA_MOLTENVK) gst_clear_object (&vtdec->device); gst_clear_object (&vtdec->instance); @@ -316,7 +342,7 @@ return; } - /* push a buffer if there are enough frames to guarantee + /* push a buffer if there are enough frames to guarantee * that we push in PTS order, or if we're draining/flushing */ while ((gst_vec_deque_get_length (vtdec->reorder_queue) >= vtdec->dbp_size) || vtdec->is_flushing || vtdec->is_draining) { @@ -360,7 +386,7 @@ g_mutex_unlock (&vtdec->queue_mutex); GST_VIDEO_DECODER_STREAM_LOCK (vtdec); - /* We need to empty the queue immediately so that session_output_callback() + /* We need to empty the queue immediately so that session_output_callback() * can push out the current buffer, otherwise it can deadlock */ if (ret != GST_FLOW_OK) { g_mutex_lock (&vtdec->queue_mutex); @@ -441,6 +467,7 @@ GstVideoFormat vfmt = gst_video_format_from_string (fmt); switch (vfmt) { case GST_VIDEO_FORMAT_NV12: + case GST_VIDEO_FORMAT_P010_10LE: if (!prores) return vfmt; break; @@ -710,6 +737,14 @@ GST_ERROR_OBJECT (vtdec, "Invalid ProRes variant %s", variant); return FALSE; } + } else if (!strcmp (caps_name, "video/x-vp9")) { + GST_INFO_OBJECT (vtdec, "cm_format is VP9"); + cm_format = kCMVideoCodecType_VP9; + } else if (!strcmp (caps_name, "video/x-av1")) { + GST_INFO_OBJECT (vtdec, + "Setting up for AV1 - will wait for sequence header"); + cm_format = kCMVideoCodecType_AV1; + vtdec->av1_needs_sequence_header = TRUE; /* Delay session creation until we get sequence header */ } if ((cm_format == kCMVideoCodecType_H264 @@ -717,6 +752,14 @@ && state->codec_data == NULL) { GST_INFO_OBJECT (vtdec, "waiting for codec_data before negotiation"); negotiate_now = FALSE; + } else if (cm_format == kCMVideoCodecType_VP9) { + negotiate_now = gst_vtdec_build_vp9_vpcc_from_caps (vtdec, structure); + } + + if (cm_format == kCMVideoCodecType_AV1 && vtdec->av1_needs_sequence_header) { + GST_INFO_OBJECT (vtdec, + "waiting for AV1 sequence header before negotiation"); + negotiate_now = FALSE; + } gst_video_info_from_caps (&vtdec->video_info, state->caps); @@ -841,7 +884,54 @@ goto drop; } - /* Negotiate now so that we know whether we need to use the GL upload meta or not. + /* Check if we need to extract AV1 sequence header for delayed initialization */ + if (vtdec->av1_needs_sequence_header && vtdec->session == NULL) { + if (gst_vtdec_handle_av1_sequence_header (vtdec, frame)) { + GST_INFO_OBJECT (vtdec, + "Successfully initialized AV1 decoder with sequence header"); + vtdec->av1_needs_sequence_header = FALSE; + + /* Recreate the format description with the sequence header OBU */ + if (vtdec->format_description) + CFRelease (vtdec->format_description); + + vtdec->format_description = + create_format_description_from_codec_data (vtdec, + kCMVideoCodecType_AV1, vtdec->input_state->codec_data); + + if (!vtdec->format_description) { + GST_ERROR_OBJECT (vtdec, + "Failed to create format description with sequence header"); + ret = GST_FLOW_NOT_NEGOTIATED; + goto drop; + } + + /* Compute DPB size and set latency for AV1 */ + if (!gst_vtdec_compute_dpb_size (vtdec, kCMVideoCodecType_AV1, + vtdec->input_state->codec_data)) { + GST_ERROR_OBJECT (vtdec, "Failed to compute DPB size for AV1"); + ret = GST_FLOW_NOT_NEGOTIATED; + goto drop; + } + + gst_vtdec_set_latency (vtdec); + + /* Now negotiate with the complete format description */ + if (!gst_vtdec_negotiate (decoder)) { + GST_ERROR_OBJECT (vtdec, + "Failed to negotiate after AV1 sequence header"); + ret = GST_FLOW_NOT_NEGOTIATED; + goto drop; + } + } else { + GST_DEBUG_OBJECT (vtdec, + "Waiting for AV1 sequence header, dropping frame"); + ret = GST_FLOW_OK; + goto drop; + } + } + + /* Negotiate now so that we know whether we need to use the GL upload meta or not. * gst_vtenc_negotiate() will drain before attempting to negotiate. */ if (gst_pad_check_reconfigure (decoder->srcpad)) { if (!gst_vtdec_negotiate (decoder)) { @@ -897,8 +987,8 @@ /* We need to unlock the stream lock here because * the decode call can wait until gst_vtdec_session_output_callback() - * is finished, which in turn can wait until there's space in the - * output queue, which is being handled by the output loop, + * is finished, which in turn can wait until there's space in the + * output queue, which is being handled by the output loop, * which also uses the stream lock... */ GST_VIDEO_DECODER_STREAM_UNLOCK (vtdec); status = VTDecompressionSessionDecodeFrame (vtdec->session, cm_sample_buffer, @@ -987,15 +1077,144 @@ return status; } +/* https://www.webmproject.org/vp9/mp4/#vp-codec-configuration-box */ +static gboolean +gst_vtdec_build_vp9_vpcc_from_caps (GstVtdec * vtdec, + GstStructure * caps_struct) +{ + GST_INFO_OBJECT (vtdec, "gst_vtdec_build_vp9_vpcc_from_caps"); + + gint profile = 0; /* Undefined profile 0 is generally acceptable. */ + guint bit_depth = 8; + guint bit_depth_chroma = 8; + /* Default to 4:2:0 */ + guint8 chroma_subsampling = 1; + const gchar *chroma_format = NULL; + /* Default to BT.709 limited range */ + gboolean video_full_range = FALSE; + guint8 colour_primaries = 1; + guint8 transfer_characteristics = 1; + guint8 matrix_coefficients = 1; + const gchar *colorimetry_str = NULL; + guint8 color_info_field = 0; + gboolean hdl = TRUE; + GstByteWriter writer; + + if (!gst_structure_has_name (caps_struct, "video/x-vp9")) { + return FALSE; + } + + gst_byte_writer_init (&writer); + + /* version is always 1 */ + hdl &= gst_byte_writer_put_uint8 (&writer, 1); + + hdl &= gst_byte_writer_put_uint8 (&writer, 0); + hdl &= gst_byte_writer_put_uint8 (&writer, 0); + hdl &= gst_byte_writer_put_uint8 (&writer, 0); + + gst_structure_get_int (caps_struct, "profile", &profile); + hdl &= gst_byte_writer_put_uint8 (&writer, profile); + + /* level is not in caps for VP9; 0 is acceptable */ + hdl &= gst_byte_writer_put_uint8 (&writer, 0); + + gst_structure_get_uint (caps_struct, "bit-depth-luma", &bit_depth); + + /* ensure chroma bit depth matches luma if present */ + if (gst_structure_get_uint (caps_struct, "bit-depth-chroma", + &bit_depth_chroma) + && (bit_depth != bit_depth_chroma)) { + GST_WARNING_OBJECT (vtdec, + "bit-depth-luma and bit-depth-chroma in caps disagree"); + } + + chroma_format = gst_structure_get_string (caps_struct, "chroma-format"); + if (chroma_format) { + if (g_strcmp0 (chroma_format, "4:2:0") == 0) { + const gchar *chroma_site = + gst_structure_get_string (caps_struct, "chroma-site"); + if (chroma_site) { + const GstVideoChromaSite site = + gst_video_chroma_site_from_string (chroma_site); + if (site == GST_VIDEO_CHROMA_SITE_V_COSITED) { + chroma_subsampling = 0; + } + } + } else if (g_strcmp0 (chroma_format, "4:2:2") == 0) { + chroma_subsampling = 2; + } else if (g_strcmp0 (chroma_format, "4:4:4") == 0) { + chroma_subsampling = 3; + } + } + + colorimetry_str = gst_structure_get_string (caps_struct, "colorimetry"); + if (colorimetry_str) { + GstVideoColorimetry vid_col; + if (gst_video_colorimetry_from_string (&vid_col, colorimetry_str)) { + video_full_range = + (vid_col.range == GST_VIDEO_COLOR_RANGE_0_255) ? TRUE : FALSE; + colour_primaries = gst_video_color_primaries_to_iso (vid_col.primaries); + transfer_characteristics = + gst_video_transfer_function_to_iso (vid_col.transfer); + matrix_coefficients = gst_video_color_matrix_to_iso (vid_col.matrix); + } + } + + color_info_field |= (bit_depth & 0xF) << 4; + color_info_field |= (chroma_subsampling & 0x3) << 1; + color_info_field |= !(!video_full_range); + hdl &= gst_byte_writer_put_uint8 (&writer, color_info_field); + hdl &= gst_byte_writer_put_uint8 (&writer, colour_primaries); + hdl &= gst_byte_writer_put_uint8 (&writer, transfer_characteristics); + hdl &= gst_byte_writer_put_uint8 (&writer, matrix_coefficients); + + /* codec initialization data, unused for VP9 */ + hdl &= gst_byte_writer_put_uint16_le (&writer, 0); + + if (!hdl) { + GST_ERROR_OBJECT (vtdec, "error creating vpcC header"); + return FALSE; + } + + guint vpcc_size = gst_byte_writer_get_size (&writer); + vtdec->vp9_vpcc = gst_byte_writer_reset_and_get_data (&writer); + if (vtdec->vp9_vpcc == NULL) { + GST_ERROR_OBJECT (vtdec, "error acquiring vpcC header"); + return FALSE; + } + vtdec->vp9_vpcc_size = vpcc_size; + + return TRUE; +} + static CMFormatDescriptionRef create_format_description (GstVtdec * vtdec, CMVideoCodecType cm_format) { OSStatus status; - CMFormatDescriptionRef format_description; + CMFormatDescriptionRef format_description = NULL; + CFMutableDictionaryRef extensions = NULL; + + if (vtdec->vp9_vpcc) { + CFMutableDictionaryRef atoms = CFDictionaryCreateMutable (NULL, 0, + &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks); + gst_vtutil_dict_set_data (atoms, CFSTR ("vpcC"), vtdec->vp9_vpcc, + vtdec->vp9_vpcc_size); + + extensions = + CFDictionaryCreateMutable (NULL, 0, &kCFTypeDictionaryKeyCallBacks, + &kCFTypeDictionaryValueCallBacks); + gst_vtutil_dict_set_object (extensions, + CFSTR ("SampleDescriptionExtensionAtoms"), (CFTypeRef *) atoms); + } status = CMVideoFormatDescriptionCreate (NULL, cm_format, vtdec->video_info.width, vtdec->video_info.height, - NULL, &format_description); + extensions, &format_description); + + if (extensions) + CFRelease (extensions); + if (status != noErr) return NULL; @@ -1038,8 +1257,43 @@ if (cm_format == kCMVideoCodecType_HEVC) gst_vtutil_dict_set_data (atoms, CFSTR ("hvcC"), map.data, map.size); - else + else if (cm_format == kCMVideoCodecType_AV1) { + GST_INFO_OBJECT (vtdec, "Creating av1C atom for VideoToolbox"); + + if (vtdec->av1_sequence_header_obu) { + /* The av1C atom should contain the 4-byte header followed by the sequence header OBU */ + GstMapInfo seq_map; + if (gst_buffer_map (vtdec->av1_sequence_header_obu, &seq_map, + GST_MAP_READ)) { + gsize total_size = 4 + seq_map.size; /* 4-byte av1C header + sequence header OBU */ + guint8 *av1c_with_obu = g_malloc (total_size); + + /* Copy the 4-byte av1C header */ + memcpy (av1c_with_obu, map.data, 4); + + /* Append the sequence header OBU */ + memcpy (av1c_with_obu + 4, seq_map.data, seq_map.size); + + GST_INFO_OBJECT (vtdec, + "Creating av1C with sequence header OBU: %zu bytes total", + total_size); + + gst_vtutil_dict_set_data (atoms, CFSTR ("av1C"), av1c_with_obu, + total_size); + g_free (av1c_with_obu); + gst_buffer_unmap (vtdec->av1_sequence_header_obu, &seq_map); + } else { + GST_ERROR_OBJECT (vtdec, "Missing sequence header OBU"); + return NULL; + } + } else { + /* No sequence header OBU yet, just use the 4-byte header */ + gst_vtutil_dict_set_data (atoms, CFSTR ("av1C"), map.data, MIN (map.size, + 4)); + } + } else { gst_vtutil_dict_set_data (atoms, CFSTR ("avcC"), map.data, map.size); + } gst_vtutil_dict_set_object (extensions, CFSTR ("SampleDescriptionExtensionAtoms"), (CFTypeRef *) atoms); @@ -1417,6 +1671,8 @@ &vtdec->dbp_size)) { return FALSE; } + } else if (cm_format == kCMVideoCodecType_AV1) { + vtdec->dbp_size = GST_AV1_NUM_REF_FRAMES; } else { vtdec->dbp_size = 0; } @@ -1480,7 +1736,7 @@ && sps.vui_parameters.bitstream_restriction_flag) max_dpb_frames = MAX
(1, sps.vui_parameters.max_dec_frame_buffering); - /* Some non-conforming H264 streams may request a number of frames + /* Some non-conforming H264 streams may request a number of frames * larger than the calculated limit. * See https://chromium-review.googlesource.com/c/chromium/src/+/760276/ */ @@ -1590,6 +1846,190 @@ gst_video_decoder_set_latency (GST_VIDEO_DECODER (vtdec), latency, latency); } +typedef void (*VTRegisterSupplementalVideoDecoderIfAvailableFunc) + (CMVideoCodecType codecType); + +static gboolean +gst_vtdec_check_vp9_support (GstVtdec * vtdec) +{ + gboolean vp9_supported = FALSE; + + GST_DEBUG_OBJECT (vtdec, "Checking VP9 VideoToolbox support"); + +#if !defined(HAVE_IOS) || (defined(__IPHONE_OS_VERSION_MAX_ALLOWED) && __IPHONE_OS_VERSION_MAX_ALLOWED >= 260200) + if (__builtin_available (macos 11.0, ios 26.2, *)) { + VTRegisterSupplementalVideoDecoderIfAvailable (kCMVideoCodecType_VP9); + } +#else + /* FIXME: Temporary measure until Xcode on CI has a SDK version that has the + * variant that introduces VTRegisterSupplementalVideoDecoderIfAvailable on + * iOS 26.2. 
+ */ + VTRegisterSupplementalVideoDecoderIfAvailableFunc func = + (VTRegisterSupplementalVideoDecoderIfAvailableFunc) + dlsym (RTLD_DEFAULT, "VTRegisterSupplementalVideoDecoderIfAvailable"); + + if (func != NULL) { + func (kCMVideoCodecType_VP9); + } +#endif + + vp9_supported = VTIsHardwareDecodeSupported (kCMVideoCodecType_VP9); + + if (vp9_supported) { + GST_INFO_OBJECT (vtdec, "VP9 hardware decoding is supported"); + } else { + GST_WARNING_OBJECT (vtdec, + "VP9 hardware decoding is not supported on this system"); + } + + return vp9_supported; +} + +static gboolean +gst_vtdec_check_av1_support (GstVtdec * vtdec) +{ + gboolean av1_supported = FALSE; + + GST_DEBUG_OBJECT (vtdec, "Checking AV1 VideoToolbox support"); + + +#if !defined(HAVE_IOS) || (defined(__IPHONE_OS_VERSION_MAX_ALLOWED) && __IPHONE_OS_VERSION_MAX_ALLOWED >= 260200) + if (__builtin_available (macos 11.0, ios 26.2, *)) { + VTRegisterSupplementalVideoDecoderIfAvailable (kCMVideoCodecType_AV1); + } +#else + /* FIXME: Temporary measure until Xcode on CI has a SDK version that has the + * variant that introduces VTRegisterSupplementalVideoDecoderIfAvailable on + * iOS 26.2. 
+ */ + VTRegisterSupplementalVideoDecoderIfAvailableFunc func = + (VTRegisterSupplementalVideoDecoderIfAvailableFunc) + dlsym (RTLD_DEFAULT, + "VTRegisterSupplementalVideoDecoderIfAvailable"); + + if (func != NULL) { + func (kCMVideoCodecType_AV1); + } +#endif + + /* Check if hardware decode is supported for AV1 */ + av1_supported = VTIsHardwareDecodeSupported (kCMVideoCodecType_AV1); + + if (av1_supported) { + GST_INFO_OBJECT (vtdec, "AV1 hardware decoding is supported"); + } else { + GST_WARNING_OBJECT (vtdec, + "AV1 hardware decoding is not supported on this system"); + } + + return av1_supported; +} + +static GstCaps * +gst_vtdec_getcaps (GstVideoDecoder * decoder, GstCaps * filter) +{ + GstVtdec *vtdec = GST_VTDEC (decoder); + GstCaps *sinkcaps, *result; + + sinkcaps = + gst_pad_get_pad_template_caps (GST_VIDEO_DECODER_SINK_PAD (decoder)); + sinkcaps = gst_caps_make_writable (sinkcaps); + + guint n = gst_caps_get_size (sinkcaps); + for (guint i = 0; i < n;) { + GstStructure *s = gst_caps_get_structure (sinkcaps, i); + + if ((gst_structure_has_name (s, "video/x-av1") + && !gst_vtdec_check_av1_support (vtdec)) + || (gst_structure_has_name (s, "video/x-vp9") + && !gst_vtdec_check_vp9_support (vtdec))) { + gst_caps_remove_structure (sinkcaps, i); + n--; + } else { + i++; + } + } + + result = gst_video_decoder_proxy_getcaps (decoder, sinkcaps, filter); + gst_caps_unref (sinkcaps); + + return result; +} + +static gboolean +gst_vtdec_handle_av1_sequence_header (GstVtdec * vtdec, + GstVideoCodecFrame * frame) +{ + GstMapInfo map_info; + GstAV1Parser *parser; + GstAV1OBU obu; + GstAV1ParserResult result; + guint32 consumed = 0; + gboolean found_sequence_header = FALSE; + + if (!gst_buffer_map (frame->input_buffer, &map_info, GST_MAP_READ)) { + GST_ERROR_OBJECT (vtdec, "Failed to map input buffer"); + return FALSE; + } + + GST_DEBUG_OBJECT (vtdec, "Checking for AV1 sequence header in %zu bytes", + map_info.size); + + /* Create AV1 parser to identify and parse OBUs */ + 
parser = gst_av1_parser_new (); + if (!parser) { + GST_ERROR_OBJECT (vtdec, "Failed to create AV1 parser"); + gst_buffer_unmap (frame->input_buffer, &map_info); + return FALSE; + } + + /* Search for sequence header OBU */ + while (consumed < map_info.size) { + guint32 bytes_consumed = 0; + result = gst_av1_parser_identify_one_obu (parser, map_info.data + consumed, + map_info.size - consumed, &obu, &bytes_consumed); + + if (result != GST_AV1_PARSER_OK) { + if (result == GST_AV1_PARSER_NO_MORE_DATA) + break; + GST_DEBUG_OBJECT (vtdec, "Failed to identify OBU: %d", result); + consumed += bytes_consumed; + continue; + } + + GST_DEBUG_OBJECT (vtdec, "Found OBU type %d", obu.obu_type); + + if (obu.obu_type == GST_AV1_OBU_SEQUENCE_HEADER) { + GST_INFO_OBJECT (vtdec, "Found AV1 sequence header OBU"); + + /* Store the sequence header OBU */ + if (vtdec->av1_sequence_header_obu) + gst_buffer_unref (vtdec->av1_sequence_header_obu); + + /* Calculate the complete OBU size including header */ + gsize obu_offset = consumed; + gsize obu_total_size = bytes_consumed; + + vtdec->av1_sequence_header_obu = + gst_buffer_copy_region (frame->input_buffer, GST_BUFFER_COPY_MEMORY, + obu_offset, obu_total_size); + + GST_INFO_OBJECT (vtdec, "Stored AV1 sequence header OBU (%zu bytes)", + obu_total_size); + found_sequence_header = TRUE; + break; + } + + consumed += bytes_consumed; + } + + gst_av1_parser_free (parser); + gst_buffer_unmap (frame->input_buffer, &map_info); + + return found_sequence_header; +} + static void gst_vtdec_set_context (GstElement * element, GstContext * context) {
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/applemedia/vtdec.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/applemedia/vtdec.h
Changed
@@ -32,6 +32,7 @@ #include <gst/vulkan/vulkan.h> #endif #include <gst/codecparsers/gsth264parser.h> +#include <gst/codecparsers/gstav1parser.h> G_BEGIN_DECLS @@ -72,6 +73,12 @@ #endif gboolean require_hardware; + + gboolean av1_needs_sequence_header; /* TRUE if we need to wait for sequence header OBU before creating session */ + GstBuffer *av1_sequence_header_obu; /* Store the sequence header OBU for format description */ + + guint8* vp9_vpcc; + gsize vp9_vpcc_size; }; struct _GstVtdecClass
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/applemedia/vtenc.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/applemedia/vtenc.c
Changed
@@ -290,7 +290,8 @@ } static GstStaticCaps sink_caps = -GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE ("{ AYUV64, UYVY, NV12, I420 }")); +GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE + ("{ AYUV64, UYVY, NV12, I420, P010_10LE }")); static void gst_vtenc_base_init (GstVTEncClass * klass) @@ -918,7 +919,7 @@ profile = "main"; if (level_arg == NULL) level_arg = "AutoLevel"; - strncpy (level, level_arg, sizeof (level)); + strlcpy (level, level_arg, sizeof (level)); if (!strcmp (profile, "constrained-baseline") || !strcmp (profile, "baseline")) {
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/asio/gstasiodeviceprovider.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/asio/gstasiodeviceprovider.cpp
Changed
@@ -21,6 +21,14 @@ #include "config.h" #endif +#ifdef HAVE_CM_NOTIFY +#define _WIN32_LEAN_AND_MEAN +#include <windows.h> +#include <cfgmgr32.h> +#include <initguid.h> +#include <usbiodef.h> +#endif + #include "gstasiodeviceprovider.h" #include "gstasioutils.h" #include "gstasioobject.h" @@ -132,30 +140,260 @@ struct _GstAsioDeviceProvider { GstDeviceProvider parent; + +#ifdef HAVE_CM_NOTIFY + GThread *thread; + GMainLoop *loop; + GMainContext *context; + guint device_update_id; + GMutex lock; + GCond cond; +#endif /* HAVE_CM_NOTIFY */ }; +GST_DEBUG_CATEGORY_STATIC (gst_asio_dp_debug); +#define GST_CAT_DEFAULT gst_asio_dp_debug + G_DEFINE_TYPE (GstAsioDeviceProvider, gst_asio_device_provider, GST_TYPE_DEVICE_PROVIDER); +#ifdef HAVE_CM_NOTIFY +static void gst_asio_device_provider_finalize (GObject * obj); +static gboolean gst_asio_device_provider_start (GstDeviceProvider * provider); +static void gst_asio_device_provider_stop (GstDeviceProvider * provider); +#endif /* HAVE_CM_NOTIFY */ static GList *gst_asio_device_provider_probe (GstDeviceProvider * provider); static void gst_asio_device_provider_class_init (GstAsioDeviceProviderClass * klass) { GstDeviceProviderClass *provider_class = GST_DEVICE_PROVIDER_CLASS (klass); +#ifdef HAVE_CM_NOTIFY + GObjectClass *gobject_class = G_OBJECT_CLASS (klass); + gobject_class->finalize = gst_asio_device_provider_finalize; + + provider_class->start = GST_DEBUG_FUNCPTR (gst_asio_device_provider_start); + provider_class->stop = GST_DEBUG_FUNCPTR (gst_asio_device_provider_stop); +#endif /* HAVE_CM_NOTIFY */ provider_class->probe = GST_DEBUG_FUNCPTR (gst_asio_device_provider_probe); gst_device_provider_class_set_static_metadata (provider_class, "ASIO Device Provider", "Source/Sink/Audio", "List ASIO source and sink devices", "Seungha Yang <seungha@centricular.com>"); + + GST_DEBUG_CATEGORY_INIT (gst_asio_dp_debug, "asio-deviceprovider", + 0, "ASIO device provider"); +} + +static void +gst_asio_device_provider_init 
(GstAsioDeviceProvider * self) +{ +#ifdef HAVE_CM_NOTIFY + self->context = g_main_context_new (); + self->loop = g_main_loop_new (self->context, FALSE); + self->device_update_id = 0; + g_mutex_init (&self->lock); + g_cond_init (&self->cond); +#endif +} + +#ifdef HAVE_CM_NOTIFY +static void +gst_asio_device_provider_finalize (GObject * obj) +{ + GstAsioDeviceProvider *self = GST_ASIO_DEVICE_PROVIDER (obj); + + g_main_loop_quit (self->loop); + g_clear_pointer (&self->thread, g_thread_join); + g_main_loop_unref (self->loop); + g_main_context_unref (self->context); + g_mutex_clear (&self->lock); + g_cond_clear (&self->cond); + + G_OBJECT_CLASS (gst_asio_device_provider_parent_class)->finalize (obj); +} + +static gboolean +gst_asio_device_provider_update_devices (GstDeviceProvider * provider) +{ + GstAsioDeviceProvider *self = GST_ASIO_DEVICE_PROVIDER (provider); + + GST_LOG_OBJECT (self, "Updating devices"); + + GList *prev_devices = g_list_copy_deep (provider->devices, + (GCopyFunc) gst_object_ref, NULL); + GList *new_devices = gst_asio_device_provider_probe (provider); + + // Emit device removal messages + for (GList *p = prev_devices; p; p = p->next) { + GstAsioDevice *pdev = GST_ASIO_DEVICE (p->data); + bool found = false; + for (GList *n = new_devices; n; n = n->next) { + GstAsioDevice *ndev = GST_ASIO_DEVICE (n->data); + if (g_strcmp0 (ndev->device_clsid, pdev->device_clsid) == 0 && + g_strcmp0 (ndev->factory_name, pdev->factory_name) == 0) { + found = true; + break; + } + } + if (!found) { + gst_device_provider_device_remove (provider, GST_DEVICE (gst_object_ref(pdev))); + } + } + + // Emit device added messages + for (GList *n = new_devices; n; n = n->next) { + GstAsioDevice *ndev = GST_ASIO_DEVICE (n->data); + bool found = false; + for (GList *p = prev_devices; p; p = p->next) { + GstAsioDevice *pdev = GST_ASIO_DEVICE (p->data); + if (g_strcmp0 (ndev->device_clsid, pdev->device_clsid) == 0 && + g_strcmp0 (ndev->factory_name, pdev->factory_name) == 0) { + found 
= true; + break; + } + } + if (!found) { + gst_device_provider_device_add (provider, GST_DEVICE (gst_object_ref (ndev))); + } + } + + g_list_free_full (new_devices, (GDestroyNotify) gst_object_unref); + g_list_free_full (prev_devices, (GDestroyNotify) gst_object_unref); + self->device_update_id = 0; + + return G_SOURCE_REMOVE; +} + +static DWORD +device_event_cb (HCMNOTIFICATION, PVOID data, CM_NOTIFY_ACTION action, + PCM_NOTIFY_EVENT_DATA, DWORD) +{ + GstAsioDeviceProvider *self = GST_ASIO_DEVICE_PROVIDER (data); + + GST_LOG_OBJECT (self, "USB device event: %d", action); + + if (action == CM_NOTIFY_ACTION_DEVICEINTERFACEARRIVAL || + action == CM_NOTIFY_ACTION_DEVICEINTERFACEREMOVAL || + action == CM_NOTIFY_ACTION_DEVICEINSTANCEREMOVED) { + GSource *source; + // We need to wait a bit before probing for ASIO devices, because device + // initialization can take some time. However, we must also not probe too + // frequently, so if there's already a device update scheduled when we get + // a WM_DEVICECHANGE event, then cancel the existing update and schedule + // a new one in the near future. 
+ if (self->device_update_id > 0) { + source = g_main_context_find_source_by_id (self->context, self->device_update_id); + self->device_update_id = 0; + g_source_destroy (source); + } + source = g_timeout_source_new (500); + g_source_set_callback (source, + (GSourceFunc) gst_asio_device_provider_update_devices, self, NULL); + self->device_update_id = g_source_attach (source, self->context); + } + + return ERROR_SUCCESS; +} + +static gpointer +gst_asio_device_provider_thread_func (GstDeviceProvider * provider) +{ + GstAsioDeviceProvider *self = GST_ASIO_DEVICE_PROVIDER (provider); + + g_main_context_push_thread_default (self->context); + + CONFIGRET cr; + CM_NOTIFY_FILTER filter = { 0 }; + + // Filter for USB device interface events + filter.cbSize = sizeof (CM_NOTIFY_FILTER); + filter.FilterType = CM_NOTIFY_FILTER_TYPE_DEVICEINTERFACE; + filter.u.DeviceInterface.ClassGuid = GUID_DEVINTERFACE_USB_DEVICE; + + HCMNOTIFICATION notify = NULL; + // Register for USB device interface events + cr = CM_Register_Notification (&filter, (gpointer) self, + (PCM_NOTIFY_CALLBACK) device_event_cb, &notify); + if (cr != CR_SUCCESS) { + GST_ERROR_OBJECT (self, "Failed to register for USB notifications:" + " 0x%lX\n", cr); + return NULL; + } + + auto source = g_idle_source_new (); + g_source_set_callback (source, [] (gpointer user_data) -> gboolean { + auto self = GST_ASIO_DEVICE_PROVIDER (user_data); + g_mutex_lock (&self->lock); + g_cond_broadcast (&self->cond); + g_mutex_unlock (&self->lock); + return G_SOURCE_REMOVE; + }, + self, nullptr); + + g_source_attach (source, self->context); + g_source_unref (source); + + GST_LOG_OBJECT (self, "Started device provider loop"); + g_main_loop_run (self->loop); + GST_LOG_OBJECT (self, "Stopped device provider loop"); + + // Cleanup + if (notify) { + CM_Unregister_Notification (notify); + } + + g_main_context_pop_thread_default (self->context); + + return NULL; +} + +static gboolean +gst_asio_device_provider_start (GstDeviceProvider * provider) +{ +
GstAsioDeviceProvider *self = GST_ASIO_DEVICE_PROVIDER (provider); + + GList *devices = gst_asio_device_provider_probe (provider); + if (devices) { + for (GList *iter = devices; iter; iter = g_list_next (iter)) { + gst_device_provider_device_add (provider, GST_DEVICE (iter->data)); + } + + g_list_free (devices); + } + + self->thread = g_thread_new ("GstAsioDeviceProvider", + (GThreadFunc) gst_asio_device_provider_thread_func, self); + + g_mutex_lock (&self->lock); + while (!g_main_loop_is_running (self->loop)) + g_cond_wait (&self->cond, &self->lock); + g_mutex_unlock (&self->lock); + + GST_LOG_OBJECT (self, "Started ASIO device provider"); + + return TRUE; } static void -gst_asio_device_provider_init (GstAsioDeviceProvider * provider) +gst_asio_device_provider_stop (GstDeviceProvider * provider) { + GstAsioDeviceProvider *self = GST_ASIO_DEVICE_PROVIDER (provider); + + if (self->thread) { + g_main_loop_quit (self->loop); + g_clear_pointer (&self->thread, g_thread_join); + } + + if (self->device_update_id > 0) { + GSource *source = g_main_context_find_source_by_id (self->context, self->device_update_id); + self->device_update_id = 0; + g_source_destroy (source); + } } +#endif /* HAVE_CM_NOTIFY */ static void gst_asio_device_provider_probe_internal (GstAsioDeviceProvider * self,
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/asio/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/asio/meson.build
Changed
@@ -42,21 +42,44 @@ if asio_option.enabled() error('asio plugin requires WINAPI_PARTITION_DESKTOP') else - subdir_done () + subdir_done() endif endif avrt_lib = cc.find_library('avrt', required: asio_option) if not avrt_lib.found() - subdir_done () + subdir_done() +endif + +have_cm_notify = false +cfgmgr32 = cc.find_library('cfgmgr32', required: false) +if cfgmgr32.found() + if cxx.links('''#define _WIN32_LEAN_AND_MEAN + #include <windows.h> + #include <cfgmgr32.h> + int main() { + CM_Unregister_Notification(nullptr); + return 0; + }''', name: 'CM_NOTIFY', dependencies: cfgmgr32) + have_cm_notify = true + endif +endif + +asio_args = [] +asio_deps = [avrt_lib] +if have_cm_notify + asio_deps += cfgmgr32 + asio_args += '-DHAVE_CM_NOTIFY' +else + message('CM_NOTIFY not found, your SDK is too old. ASIO will not support device monitoring.') endif gstasio = library('gstasio', asio_sources, include_directories : configinc, - dependencies : [gstaudio_dep, avrt_lib], - c_args : gst_plugins_bad_args, - cpp_args : gst_plugins_bad_args, + dependencies : [gstaudio_dep] + asio_deps, + c_args : gst_plugins_bad_args + asio_args, + cpp_args : gst_plugins_bad_args + asio_args, install : true, install_dir : plugins_install_dir) plugins += gstasio
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/d3d11/gstd3d11ipc.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3d11/gstd3d11ipc.cpp
Changed
@@ -323,27 +323,6 @@ memcpy (&buf0, &header, GST_D3D11_IPC_PKT_HEADER_SIZE); } -bool -gst_d3d11_ipc_clock_is_system (GstClock * clock) -{ - GstClockType clock_type = GST_CLOCK_TYPE_MONOTONIC; - GstClock *mclock; - - if (G_OBJECT_TYPE (clock) != GST_TYPE_SYSTEM_CLOCK) - return false; - - g_object_get (clock, "clock-type", &clock_type, nullptr); - if (clock_type != GST_CLOCK_TYPE_MONOTONIC) - return false; - - mclock = gst_clock_get_master (clock); - if (!mclock) - return true; - - gst_object_unref (mclock); - return false; -} - std::string gst_d3d11_ipc_wstring_to_string (const std::wstring & str) {
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/d3d11/gstd3d11ipc.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3d11/gstd3d11ipc.h
Changed
@@ -136,8 +136,6 @@ void gst_d3d11_ipc_pkt_build_fin (std::vector<guint8> & buf); -bool gst_d3d11_ipc_clock_is_system (GstClock * clock); - std::string gst_d3d11_ipc_wstring_to_string (const std::wstring & str); std::wstring gst_d3d11_ipc_string_to_wstring (const std::string & str);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/d3d11/gstd3d11ipcsink.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3d11/gstd3d11ipcsink.cpp
Changed
@@ -727,7 +727,7 @@ if (GST_CLOCK_TIME_IS_VALID (buffer_clock)) { GstClock *clock = gst_element_get_clock (GST_ELEMENT_CAST (sink)); - if (!gst_d3d11_ipc_clock_is_system (clock)) { + if (!gst_clock_is_system_monotonic (clock)) { GstClockTime now_gst = gst_clock_get_time (clock); GstClockTimeDiff converted = buffer_clock;
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/d3d11/gstd3d11ipcsrc.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3d11/gstd3d11ipcsrc.cpp
Changed
@@ -483,7 +483,7 @@ clock = gst_element_get_clock (GST_ELEMENT_CAST (self)); now_gst = gst_clock_get_time (clock); base_time = GST_ELEMENT_CAST (self)->base_time; - is_system_clock = gst_d3d11_ipc_clock_is_system (clock); + is_system_clock = gst_clock_is_system_monotonic (clock); gst_object_unref (clock); buffer = gst_sample_get_buffer (sample);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/d3d12/gstd3d12basefilter.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3d12/gstd3d12basefilter.cpp
Changed
@@ -22,6 +22,7 @@ #endif #include "gstd3d12basefilter.h" +#include <mutex> GST_DEBUG_CATEGORY_STATIC (gst_d3d12_base_filter_debug); #define GST_CAT_DEFAULT gst_d3d12_base_filter_debug @@ -37,6 +38,19 @@ #define DEFAULT_ADAPTER -1 +struct _GstD3D12BaseFilterPrivate +{ + ~_GstD3D12BaseFilterPrivate () + { + gst_clear_object (&device); + } + + GstD3D12Device *device = nullptr; + + gint adapter = DEFAULT_ADAPTER; + std::recursive_mutex lock; +}; + #define gst_d3d12_base_filter_parent_class parent_class G_DEFINE_ABSTRACT_TYPE_WITH_CODE (GstD3D12BaseFilter, gst_d3d12_base_filter, GST_TYPE_BASE_TRANSFORM, @@ -47,7 +61,7 @@ const GValue * value, GParamSpec * pspec); static void gst_d3d12_base_filter_get_property (GObject * object, guint prop_id, GValue * value, GParamSpec * pspec); -static void gst_d3d12_base_filter_dispose (GObject * object); +static void gst_d3d12_base_filter_finalize (GObject * object); static void gst_d3d12_base_filter_set_context (GstElement * element, GstContext * context); static gboolean gst_d3d12_base_filter_start (GstBaseTransform * trans); @@ -61,6 +75,19 @@ GstBuffer * buffer); static gboolean gst_d3d12_base_filter_transform_meta (GstBaseTransform * trans, GstBuffer * outbuf, GstMeta * meta, GstBuffer * inbuf); +static gboolean +gst_d3d12_base_filter_propose_allocation (GstBaseTransform * trans, + GstQuery * decide_query, GstQuery * query); +static gboolean +gst_d3d12_base_filter_decide_allocation (GstBaseTransform * trans, + GstQuery * query); +static gboolean +gst_d3d12_base_filter_default_decide_allocation (GstD3D12BaseFilter * self, + GstD3D12Device * device, GstQuery * query); + +static gboolean +gst_d3d12_base_filter_default_propose_allocation (GstD3D12BaseFilter * self, + GstD3D12Device * device, GstQuery * decide_query, GstQuery * query); static void gst_d3d12_base_filter_class_init (GstD3D12BaseFilterClass * klass) @@ -69,7 +96,7 @@ GstElementClass *element_class = GST_ELEMENT_CLASS (klass); GstBaseTransformClass *trans_class = 
GST_BASE_TRANSFORM_CLASS (klass); - gobject_class->dispose = gst_d3d12_base_filter_dispose; + gobject_class->finalize = gst_d3d12_base_filter_finalize; gobject_class->set_property = gst_d3d12_base_filter_set_property; gobject_class->get_property = gst_d3d12_base_filter_get_property; @@ -98,6 +125,15 @@ GST_DEBUG_FUNCPTR (gst_d3d12_base_filter_before_transform); trans_class->transform_meta = GST_DEBUG_FUNCPTR (gst_d3d12_base_filter_transform_meta); + trans_class->propose_allocation = + GST_DEBUG_FUNCPTR (gst_d3d12_base_filter_propose_allocation); + trans_class->decide_allocation = + GST_DEBUG_FUNCPTR (gst_d3d12_base_filter_decide_allocation); + + klass->propose_allocation = + GST_DEBUG_FUNCPTR (gst_d3d12_base_filter_default_propose_allocation); + klass->decide_allocation = + GST_DEBUG_FUNCPTR (gst_d3d12_base_filter_default_decide_allocation); gst_type_mark_as_plugin_api (GST_TYPE_D3D12_BASE_FILTER, (GstPluginAPIFlags) 0); @@ -107,17 +143,17 @@ static void gst_d3d12_base_filter_init (GstD3D12BaseFilter * self) { - self->adapter = DEFAULT_ADAPTER; + self->priv = new GstD3D12BaseFilterPrivate (); } static void -gst_d3d12_base_filter_dispose (GObject * object) +gst_d3d12_base_filter_finalize (GObject * object) { auto self = GST_D3D12_BASE_FILTER (object); - gst_clear_object (&self->device); + delete self->priv; - G_OBJECT_CLASS (parent_class)->dispose (object); + G_OBJECT_CLASS (parent_class)->finalize (object); } static void @@ -125,10 +161,11 @@ const GValue * value, GParamSpec * pspec) { auto self = GST_D3D12_BASE_FILTER (object); + auto priv = self->priv; switch (prop_id) { case PROP_ADAPTER: - self->adapter = g_value_get_int (value); + priv->adapter = g_value_get_int (value); break; default: G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); @@ -141,10 +178,11 @@ GValue * value, GParamSpec * pspec) { auto self = GST_D3D12_BASE_FILTER (object); + auto priv = self->priv; switch (prop_id) { case PROP_ADAPTER: - g_value_set_int (value, self->adapter); + 
g_value_set_int (value, priv->adapter); break; default: G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); @@ -156,8 +194,12 @@ gst_d3d12_base_filter_set_context (GstElement * element, GstContext * context) { auto self = GST_D3D12_BASE_FILTER (element); + auto priv = self->priv; - gst_d3d12_handle_set_context (element, context, -1, &self->device); + { + std::lock_guard < std::recursive_mutex > lk (priv->lock); + gst_d3d12_handle_set_context (element, context, -1, &priv->device); + } GST_ELEMENT_CLASS (parent_class)->set_context (element, context); } @@ -166,9 +208,11 @@ gst_d3d12_base_filter_start (GstBaseTransform * trans) { auto self = GST_D3D12_BASE_FILTER (trans); + auto priv = self->priv; + std::lock_guard < std::recursive_mutex > lk (priv->lock); if (!gst_d3d12_ensure_element_data (GST_ELEMENT_CAST (self), - self->adapter, &self->device)) { + priv->adapter, &priv->device)) { GST_ERROR_OBJECT (self, "Failed to get D3D12 device"); return FALSE; } @@ -180,8 +224,10 @@ gst_d3d12_base_filter_stop (GstBaseTransform * trans) { auto self = GST_D3D12_BASE_FILTER (trans); + auto priv = self->priv; - gst_clear_object (&self->device); + std::lock_guard < std::recursive_mutex > lk (priv->lock); + gst_clear_object (&priv->device); return TRUE; } @@ -191,15 +237,11 @@ GstCaps * outcaps) { auto self = GST_D3D12_BASE_FILTER (trans); + auto priv = self->priv; GstVideoInfo in_info, out_info; GstD3D12BaseFilterClass *klass; gboolean res; - if (!self->device) { - GST_ERROR_OBJECT (self, "No available D3D12 device"); - return FALSE; - } - if (!gst_video_info_from_caps (&in_info, incaps)) { GST_ERROR_OBJECT (self, "Invalid input caps %" GST_PTR_FORMAT, incaps); return FALSE; @@ -211,10 +253,18 @@ } klass = GST_D3D12_BASE_FILTER_GET_CLASS (self); - if (klass->set_info) - res = klass->set_info (self, incaps, &in_info, outcaps, &out_info); - else + if (klass->set_info) { + std::lock_guard < std::recursive_mutex > lk (priv->lock); + if (!priv->device) { + GST_ERROR_OBJECT 
(self, "No available D3D12 device"); + return FALSE; + } + + res = klass->set_info (self, + priv->device, incaps, &in_info, outcaps, &out_info); + } else { res = TRUE; + } if (res) { self->in_info = in_info; @@ -229,12 +279,14 @@ GstPadDirection direction, GstQuery * query) { auto self = GST_D3D12_BASE_FILTER (trans); + auto priv = self->priv; switch (GST_QUERY_TYPE (query)) { case GST_QUERY_CONTEXT: { + std::lock_guard < std::recursive_mutex > lk (priv->lock); if (gst_d3d12_handle_context_query (GST_ELEMENT (self), query, - self->device)) { + priv->device)) { return TRUE; } break; @@ -252,6 +304,7 @@ GstBuffer * buffer) { auto self = GST_D3D12_BASE_FILTER (trans); + auto priv = self->priv; GstD3D12Memory *dmem; GstMemory *mem; GstCaps *in_caps = nullptr; @@ -262,15 +315,17 @@ return; dmem = GST_D3D12_MEMORY_CAST (mem); + + std::lock_guard < std::recursive_mutex > lk (priv->lock); /* d3d12 devices are singletons per adapter */ - if (gst_d3d12_device_is_equal (dmem->device, self->device)) + if (gst_d3d12_device_is_equal (dmem->device, priv->device)) return; GST_INFO_OBJECT (self, "Updating device %" GST_PTR_FORMAT " -> %" - GST_PTR_FORMAT, self->device, dmem->device); + GST_PTR_FORMAT, priv->device, dmem->device); - gst_object_unref (self->device); - self->device = (GstD3D12Device *) gst_object_ref (dmem->device); + gst_object_unref (priv->device); + priv->device = (GstD3D12Device *) gst_object_ref (dmem->device); in_caps = gst_pad_get_current_caps (GST_BASE_TRANSFORM_SINK_PAD (trans)); if (!in_caps) { @@ -316,3 +371,239 @@ return GST_BASE_TRANSFORM_CLASS (parent_class)->transform_meta (trans, outbuf, meta, inbuf); } + +static gboolean +gst_d3d12_base_filter_propose_allocation (GstBaseTransform * trans, + GstQuery * decide_query, GstQuery * query) +{ + auto self = GST_D3D12_BASE_FILTER (trans); + auto priv = self->priv; + auto klass = GST_D3D12_BASE_FILTER_GET_CLASS (self); + + if (!GST_BASE_TRANSFORM_CLASS (parent_class)->propose_allocation (trans, + decide_query, 
query)) { + return FALSE; + } + + if (klass->propose_allocation) { + std::lock_guard < std::recursive_mutex > lk (priv->lock); + if (!priv->device) { + GST_DEBUG_OBJECT (self, "No device configured"); + return FALSE; + } + + return klass->propose_allocation (self, priv->device, decide_query, query); + } + + return TRUE; +} + +static gboolean +gst_d3d12_base_filter_decide_allocation (GstBaseTransform * trans, + GstQuery * query) +{ + auto self = GST_D3D12_BASE_FILTER (trans); + auto priv = self->priv; + auto klass = GST_D3D12_BASE_FILTER_GET_CLASS (self); + + if (klass->decide_allocation) { + std::lock_guard < std::recursive_mutex > lk (priv->lock); + if (!priv->device) { + GST_DEBUG_OBJECT (self, "No device configured"); + return FALSE; + } + + if (!klass->decide_allocation (self, priv->device, query)) + return FALSE; + } + + return GST_BASE_TRANSFORM_CLASS (parent_class)->decide_allocation (trans, + query); +} + +static gboolean +gst_d3d12_base_filter_default_propose_allocation (GstD3D12BaseFilter * self, + GstD3D12Device * device, GstQuery * decide_query, GstQuery * query) +{ + GstVideoInfo info; + GstBufferPool *pool = nullptr; + GstCaps *caps; + guint n_pools, i; + guint size; + + gst_query_parse_allocation (query, &caps, nullptr); + + if (!caps) + return FALSE; + + if (!gst_video_info_from_caps (&info, caps)) { + GST_ERROR_OBJECT (self, "Invalid caps %" GST_PTR_FORMAT, caps); + return FALSE; + } + + n_pools = gst_query_get_n_allocation_pools (query); + for (i = 0; i < n_pools; i++) { + gst_query_parse_nth_allocation_pool (query, i, &pool, nullptr, nullptr, + nullptr); + if (pool) { + if (!GST_IS_D3D12_BUFFER_POOL (pool)) { + gst_clear_object (&pool); + } else { + auto dpool = GST_D3D12_BUFFER_POOL (pool); + if (!gst_d3d12_device_is_equal (dpool->device, device)) + gst_clear_object (&pool); + } + } + } + + if (!pool) + pool = gst_d3d12_buffer_pool_new (device); + + auto config = gst_buffer_pool_get_config (pool); + gst_buffer_pool_config_add_option (config, 
GST_BUFFER_POOL_OPTION_VIDEO_META); + + auto d3d12_params = + gst_buffer_pool_config_get_d3d12_allocation_params (config); + if (!d3d12_params) { + d3d12_params = gst_d3d12_allocation_params_new (device, &info, + GST_D3D12_ALLOCATION_FLAG_DEFAULT, + D3D12_RESOURCE_FLAG_ALLOW_SIMULTANEOUS_ACCESS, D3D12_HEAP_FLAG_NONE); + } else { + gst_d3d12_allocation_params_set_resource_flags (d3d12_params, + D3D12_RESOURCE_FLAG_ALLOW_SIMULTANEOUS_ACCESS); + gst_d3d12_allocation_params_unset_resource_flags (d3d12_params, + D3D12_RESOURCE_FLAG_DENY_SHADER_RESOURCE); + } + + gst_buffer_pool_config_set_d3d12_allocation_params (config, d3d12_params); + gst_d3d12_allocation_params_free (d3d12_params); + + /* size will be updated by d3d12 buffer pool */ + gst_buffer_pool_config_set_params (config, caps, 0, 0, 0); + + if (!gst_buffer_pool_set_config (pool, config)) { + GST_ERROR_OBJECT (self, "failed to set config"); + gst_object_unref (pool); + return FALSE; + } + + gst_query_add_allocation_meta (query, GST_VIDEO_META_API_TYPE, nullptr); + + /* d3d12 buffer pool will update buffer size based on allocated texture, + * get size from config again */ + config = gst_buffer_pool_get_config (pool); + gst_buffer_pool_config_get_params (config, nullptr, &size, nullptr, nullptr); + gst_structure_free (config); + + gst_query_add_allocation_pool (query, pool, size, 0, 0); + + gst_object_unref (pool); + + return TRUE; +} + +static gboolean +gst_d3d12_base_filter_default_decide_allocation (GstD3D12BaseFilter * self, + GstD3D12Device * device, GstQuery * query) +{ + GstCaps *outcaps = nullptr; + GstBufferPool *pool = nullptr; + guint size, min = 0, max = 0; + GstStructure *config; + gboolean update_pool = FALSE; + GstVideoInfo info; + + gst_query_parse_allocation (query, &outcaps, nullptr); + + if (!outcaps) + return FALSE; + + if (!gst_video_info_from_caps (&info, outcaps)) { + GST_ERROR_OBJECT (self, "Invalid caps %" GST_PTR_FORMAT, outcaps); + return FALSE; + } + + GstD3D12Format device_format; + 
if (!gst_d3d12_device_get_format (device, + GST_VIDEO_INFO_FORMAT (&info), &device_format)) { + GST_ERROR_OBJECT (self, "Couldn't get device foramt"); + return FALSE; + } + + size = GST_VIDEO_INFO_SIZE (&info); + if (gst_query_get_n_allocation_pools (query) > 0) { + gst_query_parse_nth_allocation_pool (query, 0, &pool, &size, &min, &max); + if (pool) { + if (!GST_IS_D3D12_BUFFER_POOL (pool)) { + gst_clear_object (&pool); + } else { + auto dpool = GST_D3D12_BUFFER_POOL (pool); + if (!gst_d3d12_device_is_equal (dpool->device, device)) + gst_clear_object (&pool); + } + } + + update_pool = TRUE; + } + + if (!pool) + pool = gst_d3d12_buffer_pool_new (device); + + config = gst_buffer_pool_get_config (pool); + gst_buffer_pool_config_add_option (config, GST_BUFFER_POOL_OPTION_VIDEO_META); + + D3D12_RESOURCE_FLAGS resource_flags = + D3D12_RESOURCE_FLAG_ALLOW_SIMULTANEOUS_ACCESS; + if ((device_format.format_flags & GST_D3D12_FORMAT_FLAG_OUTPUT_UAV) + == GST_D3D12_FORMAT_FLAG_OUTPUT_UAV) { + resource_flags |= D3D12_RESOURCE_FLAG_ALLOW_UNORDERED_ACCESS; + } + + if ((device_format.support1 & D3D12_FORMAT_SUPPORT1_RENDER_TARGET) == + D3D12_FORMAT_SUPPORT1_RENDER_TARGET) { + resource_flags |= D3D12_RESOURCE_FLAG_ALLOW_RENDER_TARGET; + } + + auto d3d12_params = + gst_buffer_pool_config_get_d3d12_allocation_params (config); + if (!d3d12_params) { + d3d12_params = gst_d3d12_allocation_params_new (device, &info, + GST_D3D12_ALLOCATION_FLAG_DEFAULT, resource_flags, + D3D12_HEAP_FLAG_SHARED); + } else { + gst_d3d12_allocation_params_set_resource_flags (d3d12_params, + resource_flags); + } + + gst_buffer_pool_config_set_d3d12_allocation_params (config, d3d12_params); + gst_d3d12_allocation_params_free (d3d12_params); + + gst_buffer_pool_config_set_params (config, outcaps, size, min, max); + gst_buffer_pool_set_config (pool, config); + + /* d3d12 buffer pool will update buffer size based on allocated texture, + * get size from config again */ + config = gst_buffer_pool_get_config (pool); 
+ gst_buffer_pool_config_get_params (config, nullptr, &size, nullptr, nullptr); + gst_structure_free (config); + + if (update_pool) + gst_query_set_nth_allocation_pool (query, 0, pool, size, min, max); + else + gst_query_add_allocation_pool (query, pool, size, min, max); + + gst_object_unref (pool); + + return TRUE; +} + +GstD3D12Device * +gst_d3d12_base_filter_get_device (GstD3D12BaseFilter * filter) +{ + auto priv = filter->priv; + std::lock_guard < std::recursive_mutex > lk (priv->lock); + if (!priv->device) + return nullptr; + + return (GstD3D12Device *) gst_object_ref (priv->device); +}
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/d3d12/gstd3d12basefilter.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3d12/gstd3d12basefilter.h
Changed
@@ -35,17 +35,16 @@ typedef struct _GstD3D12BaseFilter GstD3D12BaseFilter; typedef struct _GstD3D12BaseFilterClass GstD3D12BaseFilterClass; +typedef struct _GstD3D12BaseFilterPrivate GstD3D12BaseFilterPrivate; struct _GstD3D12BaseFilter { GstBaseTransform parent; - GstD3D12Device *device; - - gint adapter; - GstVideoInfo in_info; GstVideoInfo out_info; + + GstD3D12BaseFilterPrivate *priv; }; struct _GstD3D12BaseFilterClass @@ -53,14 +52,26 @@ GstBaseTransformClass parent_class; gboolean (*set_info) (GstD3D12BaseFilter * filter, + GstD3D12Device * device, GstCaps * in_caps, GstVideoInfo * in_info, GstCaps * out_caps, GstVideoInfo * out_info); + + gboolean (*propose_allocation) (GstD3D12BaseFilter * filter, + GstD3D12Device * device, + GstQuery * decide_query, + GstQuery * query); + + gboolean (*decide_allocation) (GstD3D12BaseFilter * filter, + GstD3D12Device * device, + GstQuery * query); }; GType gst_d3d12_base_filter_get_type (void); +GstD3D12Device * gst_d3d12_base_filter_get_device (GstD3D12BaseFilter * filter); + G_DEFINE_AUTOPTR_CLEANUP_FUNC(GstD3D12BaseFilter, gst_object_unref) G_END_DECLS
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/d3d12/gstd3d12convert.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3d12/gstd3d12convert.cpp
Changed
@@ -265,10 +265,6 @@ trans, GstPadDirection direction, GstCaps * caps, GstCaps * filter); static GstCaps *gst_d3d12_base_convert_fixate_caps (GstBaseTransform * base, GstPadDirection direction, GstCaps * caps, GstCaps * othercaps); -static gboolean gst_d3d12_base_convert_propose_allocation (GstBaseTransform * - trans, GstQuery * decide_query, GstQuery * query); -static gboolean gst_d3d12_base_convert_decide_allocation (GstBaseTransform * - trans, GstQuery * query); static GstFlowReturn gst_d3d12_base_convert_generate_output (GstBaseTransform * trans, GstBuffer ** buffer); static gboolean gst_d3d12_base_convert_transform_meta (GstBaseTransform * trans, @@ -278,8 +274,12 @@ static GstFlowReturn gst_d3d12_base_convert_transform (GstBaseTransform * trans, GstBuffer * inbuf, GstBuffer * outbuf); static gboolean gst_d3d12_base_convert_set_info (GstD3D12BaseFilter * filter, - GstCaps * incaps, GstVideoInfo * in_info, GstCaps * outcaps, - GstVideoInfo * out_info); + GstD3D12Device * device, GstCaps * incaps, GstVideoInfo * in_info, + GstCaps * outcaps, GstVideoInfo * out_info); +static gboolean gst_d3d12_base_convert_propose_allocation (GstD3D12BaseFilter * + filter, GstD3D12Device * device, GstQuery * decide_query, GstQuery * query); +static gboolean gst_d3d12_base_convert_decide_allocation (GstD3D12BaseFilter * + filter, GstD3D12Device * device, GstQuery * query); /** * GstD3D12BaseConvert: @@ -333,10 +333,6 @@ GST_DEBUG_FUNCPTR (gst_d3d12_base_convert_transform_caps); trans_class->fixate_caps = GST_DEBUG_FUNCPTR (gst_d3d12_base_convert_fixate_caps); - trans_class->propose_allocation = - GST_DEBUG_FUNCPTR (gst_d3d12_base_convert_propose_allocation); - trans_class->decide_allocation = - GST_DEBUG_FUNCPTR (gst_d3d12_base_convert_decide_allocation); trans_class->generate_output = GST_DEBUG_FUNCPTR (gst_d3d12_base_convert_generate_output); trans_class->transform_meta = @@ -346,6 +342,10 @@ trans_class->transform = GST_DEBUG_FUNCPTR (gst_d3d12_base_convert_transform); 
filter_class->set_info = GST_DEBUG_FUNCPTR (gst_d3d12_base_convert_set_info); + filter_class->propose_allocation = + GST_DEBUG_FUNCPTR (gst_d3d12_base_convert_propose_allocation); + filter_class->decide_allocation = + GST_DEBUG_FUNCPTR (gst_d3d12_base_convert_decide_allocation); gst_type_mark_as_plugin_api (GST_TYPE_D3D12_BASE_CONVERT, (GstPluginAPIFlags) 0); @@ -1526,20 +1526,15 @@ } static gboolean -gst_d3d12_base_convert_propose_allocation (GstBaseTransform * trans, - GstQuery * decide_query, GstQuery * query) +gst_d3d12_base_convert_propose_allocation (GstD3D12BaseFilter * filter, + GstD3D12Device * device, GstQuery * decide_query, GstQuery * query) { - auto filter = GST_D3D12_BASE_FILTER (trans); - auto self = GST_D3D12_BASE_CONVERT (trans); + auto trans = GST_BASE_TRANSFORM (filter); + auto self = GST_D3D12_BASE_CONVERT (filter); auto priv = self->priv; - GstVideoInfo info; - GstBufferPool *pool = nullptr; - GstCaps *caps; - guint n_pools, i; - guint size; - if (!GST_BASE_TRANSFORM_CLASS (parent_class)->propose_allocation (trans, - decide_query, query)) { + if (!GST_D3D12_BASE_FILTER_CLASS (parent_class)->propose_allocation (filter, + device, decide_query, query)) { return FALSE; } @@ -1550,108 +1545,22 @@ return TRUE; } - gst_query_parse_allocation (query, &caps, nullptr); - - if (!caps) - return FALSE; - - if (!gst_video_info_from_caps (&info, caps)) { - GST_ERROR_OBJECT (filter, "Invalid caps %" GST_PTR_FORMAT, caps); - return FALSE; - } - - n_pools = gst_query_get_n_allocation_pools (query); - for (i = 0; i < n_pools; i++) { - gst_query_parse_nth_allocation_pool (query, i, &pool, nullptr, nullptr, - nullptr); - if (pool) { - if (!GST_IS_D3D12_BUFFER_POOL (pool)) { - gst_clear_object (&pool); - } else { - auto dpool = GST_D3D12_BUFFER_POOL (pool); - if (!gst_d3d12_device_is_equal (dpool->device, filter->device)) - gst_clear_object (&pool); - } - } - } - - if (!pool) - pool = gst_d3d12_buffer_pool_new (filter->device); - - auto config = 
gst_buffer_pool_get_config (pool); - gst_buffer_pool_config_add_option (config, GST_BUFFER_POOL_OPTION_VIDEO_META); - - auto d3d12_params = - gst_buffer_pool_config_get_d3d12_allocation_params (config); - if (!d3d12_params) { - d3d12_params = gst_d3d12_allocation_params_new (filter->device, &info, - GST_D3D12_ALLOCATION_FLAG_DEFAULT, - D3D12_RESOURCE_FLAG_ALLOW_SIMULTANEOUS_ACCESS, D3D12_HEAP_FLAG_NONE); - } else { - gst_d3d12_allocation_params_set_resource_flags (d3d12_params, - D3D12_RESOURCE_FLAG_ALLOW_SIMULTANEOUS_ACCESS); - gst_d3d12_allocation_params_unset_resource_flags (d3d12_params, - D3D12_RESOURCE_FLAG_DENY_SHADER_RESOURCE); - } - - gst_buffer_pool_config_set_d3d12_allocation_params (config, d3d12_params); - gst_d3d12_allocation_params_free (d3d12_params); - - /* size will be updated by d3d12 buffer pool */ - gst_buffer_pool_config_set_params (config, caps, 0, 0, 0); - - if (!gst_buffer_pool_set_config (pool, config)) { - GST_ERROR_OBJECT (filter, "failed to set config"); - gst_object_unref (pool); - return FALSE; - } - - gst_query_add_allocation_meta (query, GST_VIDEO_META_API_TYPE, nullptr); gst_query_add_allocation_meta (query, GST_VIDEO_OVERLAY_COMPOSITION_META_API_TYPE, nullptr); gst_query_add_allocation_meta (query, GST_VIDEO_CROP_META_API_TYPE, nullptr); - /* d3d12 buffer pool will update buffer size based on allocated texture, - * get size from config again */ - config = gst_buffer_pool_get_config (pool); - gst_buffer_pool_config_get_params (config, nullptr, &size, nullptr, nullptr); - gst_structure_free (config); - - gst_query_add_allocation_pool (query, pool, size, 0, 0); - - gst_object_unref (pool); - return TRUE; } static gboolean -gst_d3d12_base_convert_decide_allocation (GstBaseTransform * trans, - GstQuery * query) +gst_d3d12_base_convert_decide_allocation (GstD3D12BaseFilter * filter, + GstD3D12Device * device, GstQuery * query) { - auto filter = GST_D3D12_BASE_FILTER (trans); - auto self = GST_D3D12_BASE_CONVERT (trans); + auto self = 
GST_D3D12_BASE_CONVERT (filter); auto priv = self->priv; - GstCaps *outcaps = nullptr; - GstBufferPool *pool = nullptr; - guint size, min = 0, max = 0; - GstStructure *config; - gboolean update_pool = FALSE; - GstVideoInfo info; - - gst_query_parse_allocation (query, &outcaps, nullptr); - - if (!outcaps) - return FALSE; - - if (!gst_video_info_from_caps (&info, outcaps)) { - GST_ERROR_OBJECT (filter, "Invalid caps %" GST_PTR_FORMAT, outcaps); - return FALSE; - } - GstD3D12Format device_format; - if (!gst_d3d12_device_get_format (filter->device, - GST_VIDEO_INFO_FORMAT (&info), &device_format)) { - GST_ERROR_OBJECT (self, "Couldn't get device foramt"); + if (!GST_D3D12_BASE_FILTER_CLASS (parent_class)->decide_allocation (filter, + device, query)) { return FALSE; } @@ -1660,72 +1569,7 @@ GST_DEBUG_OBJECT (self, "Downstream crop meta support: %d", priv->downstream_supports_crop_meta); - size = GST_VIDEO_INFO_SIZE (&info); - if (gst_query_get_n_allocation_pools (query) > 0) { - gst_query_parse_nth_allocation_pool (query, 0, &pool, &size, &min, &max); - if (pool) { - if (!GST_IS_D3D12_BUFFER_POOL (pool)) { - gst_clear_object (&pool); - } else { - auto dpool = GST_D3D12_BUFFER_POOL (pool); - if (!gst_d3d12_device_is_equal (dpool->device, filter->device)) - gst_clear_object (&pool); - } - } - - update_pool = TRUE; - } - - if (!pool) - pool = gst_d3d12_buffer_pool_new (filter->device); - - config = gst_buffer_pool_get_config (pool); - gst_buffer_pool_config_add_option (config, GST_BUFFER_POOL_OPTION_VIDEO_META); - - D3D12_RESOURCE_FLAGS resource_flags = - D3D12_RESOURCE_FLAG_ALLOW_SIMULTANEOUS_ACCESS; - if ((device_format.format_flags & GST_D3D12_FORMAT_FLAG_OUTPUT_UAV) - == GST_D3D12_FORMAT_FLAG_OUTPUT_UAV) { - resource_flags |= D3D12_RESOURCE_FLAG_ALLOW_UNORDERED_ACCESS; - } - - if ((device_format.support1 & D3D12_FORMAT_SUPPORT1_RENDER_TARGET) == - D3D12_FORMAT_SUPPORT1_RENDER_TARGET) { - resource_flags |= D3D12_RESOURCE_FLAG_ALLOW_RENDER_TARGET; - } - - auto 
d3d12_params = - gst_buffer_pool_config_get_d3d12_allocation_params (config); - if (!d3d12_params) { - d3d12_params = gst_d3d12_allocation_params_new (filter->device, &info, - GST_D3D12_ALLOCATION_FLAG_DEFAULT, resource_flags, - D3D12_HEAP_FLAG_SHARED); - } else { - gst_d3d12_allocation_params_set_resource_flags (d3d12_params, - resource_flags); - } - - gst_buffer_pool_config_set_d3d12_allocation_params (config, d3d12_params); - gst_d3d12_allocation_params_free (d3d12_params); - - gst_buffer_pool_config_set_params (config, outcaps, size, min, max); - gst_buffer_pool_set_config (pool, config); - - /* d3d12 buffer pool will update buffer size based on allocated texture, - * get size from config again */ - config = gst_buffer_pool_get_config (pool); - gst_buffer_pool_config_get_params (config, nullptr, &size, nullptr, nullptr); - gst_structure_free (config); - - if (update_pool) - gst_query_set_nth_allocation_pool (query, 0, pool, size, min, max); - else - gst_query_add_allocation_pool (query, pool, size, min, max); - - gst_object_unref (pool); - - return GST_BASE_TRANSFORM_CLASS (parent_class)->decide_allocation (trans, - query); + return TRUE; } static gboolean @@ -1759,8 +1603,8 @@ static gboolean gst_d3d12_base_convert_set_info (GstD3D12BaseFilter * filter, - GstCaps * incaps, GstVideoInfo * in_info, GstCaps * outcaps, - GstVideoInfo * out_info) + GstD3D12Device * device, GstCaps * incaps, GstVideoInfo * in_info, + GstCaps * outcaps, GstVideoInfo * out_info) { auto self = GST_D3D12_BASE_CONVERT (filter); auto klass = GST_D3D12_BASE_CONVERT_GET_CLASS (self); @@ -1884,9 +1728,9 @@ klass->enable_mip_levels ? 
GST_D3D12_CONVERTER_MIP_GEN_ENABLED : GST_D3D12_CONVERTER_MIP_GEN_DISABLED, nullptr); - auto ctx = std::make_unique < ConvertContext > (filter->device); + auto ctx = std::make_unique < ConvertContext > (device); - ctx->conv = gst_d3d12_converter_new (filter->device, nullptr, in_info, + ctx->conv = gst_d3d12_converter_new (ctx->device, nullptr, in_info, out_info, nullptr, nullptr, config); if (!ctx->conv) { GST_ERROR_OBJECT (self, "Couldn't create converter");
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/d3d12/gstd3d12deinterlace.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3d12/gstd3d12deinterlace.cpp
Changed
@@ -260,23 +260,21 @@ trans, GstPadDirection direction, GstCaps * caps, GstCaps * filter); static GstCaps *gst_d3d12_deinterlace_fixate_caps (GstBaseTransform * base, GstPadDirection direction, GstCaps * caps, GstCaps * othercaps); -static gboolean gst_d3d12_deinterlace_propose_allocation (GstBaseTransform * - trans, GstQuery * decide_query, GstQuery * query); -static gboolean gst_d3d12_deinterlace_decide_allocation (GstBaseTransform * - trans, GstQuery * query); static gboolean gst_d3d12_deinterlace_sink_event (GstBaseTransform * trans, GstEvent * event); static gboolean gst_d3d12_deinterlace_query (GstBaseTransform * trans, GstPadDirection direction, GstQuery * query); -static gboolean gst_d3d12_deinterlace_set_info (GstD3D12BaseFilter * filter, - GstCaps * incaps, GstVideoInfo * in_info, GstCaps * outcaps, - GstVideoInfo * out_info); static GstFlowReturn gst_d3d12_deinterlace_generate_output (GstBaseTransform * trans, GstBuffer ** buffer); static GstFlowReturn gst_d3d12_deinterlace_transform (GstBaseTransform * trans, GstBuffer * inbuf, GstBuffer * outbuf); static GstFlowReturn gst_d3d12_deinterlace_submit_input_buffer (GstBaseTransform * trans, gboolean is_discont, GstBuffer * input); +static gboolean gst_d3d12_deinterlace_set_info (GstD3D12BaseFilter * filter, + GstD3D12Device * device, GstCaps * incaps, GstVideoInfo * in_info, + GstCaps * outcaps, GstVideoInfo * out_info); +static gboolean gst_d3d12_deinterlace_propose_allocation (GstD3D12BaseFilter * + filter, GstD3D12Device * device, GstQuery * decide_query, GstQuery * query); #define gst_d3d12_deinterlace_parent_class parent_class G_DEFINE_TYPE (GstD3D12Deinterlace, gst_d3d12_deinterlace, @@ -321,10 +319,6 @@ GST_DEBUG_FUNCPTR (gst_d3d12_deinterlace_transform_caps); trans_class->fixate_caps = GST_DEBUG_FUNCPTR (gst_d3d12_deinterlace_fixate_caps); - trans_class->propose_allocation = - GST_DEBUG_FUNCPTR (gst_d3d12_deinterlace_propose_allocation); - trans_class->decide_allocation = - GST_DEBUG_FUNCPTR 
(gst_d3d12_deinterlace_decide_allocation); trans_class->sink_event = GST_DEBUG_FUNCPTR (gst_d3d12_deinterlace_sink_event); trans_class->query = GST_DEBUG_FUNCPTR (gst_d3d12_deinterlace_query); @@ -335,6 +329,8 @@ trans_class->transform = GST_DEBUG_FUNCPTR (gst_d3d12_deinterlace_transform); filter_class->set_info = GST_DEBUG_FUNCPTR (gst_d3d12_deinterlace_set_info); + filter_class->propose_allocation = + GST_DEBUG_FUNCPTR (gst_d3d12_deinterlace_propose_allocation); gst_type_mark_as_plugin_api (GST_TYPE_D3D12_DEINTERLACE_FIELDS, (GstPluginAPIFlags) 0); @@ -572,148 +568,20 @@ } static gboolean -gst_d3d12_deinterlace_propose_allocation (GstBaseTransform * trans, - GstQuery * decide_query, GstQuery * query) +gst_d3d12_deinterlace_propose_allocation (GstD3D12BaseFilter * filter, + GstD3D12Device * device, GstQuery * decide_query, GstQuery * query) { - auto filter = GST_D3D12_BASE_FILTER (trans); - GstVideoInfo info; - GstBufferPool *pool; - GstCaps *caps; - guint size; - gboolean is_d3d12 = FALSE; - - if (!GST_BASE_TRANSFORM_CLASS (parent_class)->propose_allocation (trans, - decide_query, query)) + if (!GST_D3D12_BASE_FILTER_CLASS (parent_class)->propose_allocation (filter, + device, decide_query, query)) { return FALSE; - - /* passthrough, we're done */ - if (!decide_query) - return TRUE; - - gst_query_parse_allocation (query, &caps, nullptr); - - if (!caps) { - GST_WARNING_OBJECT (filter, "Allocation query without caps"); - return FALSE; - } - - if (!gst_video_info_from_caps (&info, caps)) - return FALSE; - - if (gst_query_get_n_allocation_pools (query) == 0) { - GstCapsFeatures *features; - GstStructure *config; - - features = gst_caps_get_features (caps, 0); - - if (features && gst_caps_features_contains (features, - GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY)) { - GST_DEBUG_OBJECT (filter, "upstream support d3d12 memory"); - pool = gst_d3d12_buffer_pool_new (filter->device); - is_d3d12 = TRUE; - } else { - pool = gst_video_buffer_pool_new (); - } - - config = 
gst_buffer_pool_get_config (pool); - - gst_buffer_pool_config_add_option (config, - GST_BUFFER_POOL_OPTION_VIDEO_META); - if (!is_d3d12) { - gst_buffer_pool_config_add_option (config, - GST_BUFFER_POOL_OPTION_VIDEO_ALIGNMENT); - } - - size = GST_VIDEO_INFO_SIZE (&info); - gst_buffer_pool_config_set_params (config, caps, size, 0, 0); - - if (!gst_buffer_pool_set_config (pool, config)) { - GST_ERROR_OBJECT (filter, "Bufferpool config failed"); - gst_object_unref (pool); - return FALSE; - } - - /* d3d12 buffer pool will update buffer size based on allocated texture, - * get size from config again */ - config = gst_buffer_pool_get_config (pool); - gst_buffer_pool_config_get_params (config, - nullptr, &size, nullptr, nullptr); - gst_structure_free (config); - - gst_query_add_allocation_pool (query, pool, size, 0, 0); - gst_object_unref (pool); } - gst_query_add_allocation_meta (query, GST_VIDEO_META_API_TYPE, nullptr); gst_query_add_allocation_meta (query, GST_VIDEO_OVERLAY_COMPOSITION_META_API_TYPE, nullptr); return TRUE; } -static gboolean -gst_d3d12_deinterlace_decide_allocation (GstBaseTransform * trans, - GstQuery * query) -{ - auto filter = GST_D3D12_BASE_FILTER (trans); - GstCaps *outcaps = nullptr; - GstBufferPool *pool = nullptr; - guint size, min = 0, max = 0; - GstStructure *config; - gboolean update_pool = FALSE; - GstVideoInfo info; - - gst_query_parse_allocation (query, &outcaps, nullptr); - - if (!outcaps) - return FALSE; - - if (!gst_video_info_from_caps (&info, outcaps)) { - GST_ERROR_OBJECT (filter, "Invalid caps %" GST_PTR_FORMAT, outcaps); - return FALSE; - } - - size = GST_VIDEO_INFO_SIZE (&info); - if (gst_query_get_n_allocation_pools (query) > 0) { - gst_query_parse_nth_allocation_pool (query, 0, &pool, &size, &min, &max); - if (pool) { - if (!GST_IS_D3D12_BUFFER_POOL (pool)) { - gst_clear_object (&pool); - } else { - auto dpool = GST_D3D12_BUFFER_POOL (pool); - if (!gst_d3d12_device_is_equal (dpool->device, filter->device)) - gst_clear_object 
(&pool); - } - } - - update_pool = TRUE; - } - - if (!pool) - pool = gst_d3d12_buffer_pool_new (filter->device); - - config = gst_buffer_pool_get_config (pool); - gst_buffer_pool_config_add_option (config, GST_BUFFER_POOL_OPTION_VIDEO_META); - gst_buffer_pool_config_set_params (config, outcaps, size, min, max); - gst_buffer_pool_set_config (pool, config); - - /* d3d12 buffer pool will update buffer size based on allocated texture, - * get size from config again */ - config = gst_buffer_pool_get_config (pool); - gst_buffer_pool_config_get_params (config, nullptr, &size, nullptr, nullptr); - gst_structure_free (config); - - if (update_pool) - gst_query_set_nth_allocation_pool (query, 0, pool, size, min, max); - else - gst_query_add_allocation_pool (query, pool, size, min, max); - - gst_object_unref (pool); - - return GST_BASE_TRANSFORM_CLASS (parent_class)->decide_allocation (trans, - query); -} - static void gst_d3d12_deinterlace_drain (GstD3D12Deinterlace * self) { @@ -744,9 +612,9 @@ static gboolean gst_d3d12_deinterlace_prepare_convert (GstD3D12Deinterlace * self, - GstCaps * in_caps, const GstVideoInfo * in_info, GstVideoInfo * yadif_info) + GstD3D12Device * device, GstCaps * in_caps, const GstVideoInfo * in_info, + GstVideoInfo * yadif_info) { - auto filter = GST_D3D12_BASE_FILTER (self); auto priv = self->priv; auto format = GST_VIDEO_INFO_FORMAT (in_info); @@ -768,9 +636,9 @@ GstCaps *caps = gst_video_info_to_caps (yadif_info); - auto ctx = std::make_shared < DeinterlaceConvCtx > (filter->device); - ctx->pre_pool = gst_d3d12_buffer_pool_new (filter->device); - ctx->post_pool = gst_d3d12_buffer_pool_new (filter->device); + auto ctx = std::make_shared < DeinterlaceConvCtx > (device); + ctx->pre_pool = gst_d3d12_buffer_pool_new (device); + ctx->post_pool = gst_d3d12_buffer_pool_new (device); auto config = gst_buffer_pool_get_config (ctx->pre_pool); gst_buffer_pool_config_set_params (config, caps, yadif_info->size, 0, 0); @@ -803,7 +671,7 @@ 
GST_TYPE_D3D12_CONVERTER_SAMPLER_FILTER, D3D12_FILTER_MIN_MAG_MIP_POINT, nullptr); - ctx->pre_conv = gst_d3d12_converter_new (filter->device, + ctx->pre_conv = gst_d3d12_converter_new (device, nullptr, in_info, yadif_info, nullptr, nullptr, gst_structure_copy (config)); if (!ctx->pre_conv) { @@ -812,7 +680,7 @@ return FALSE; } - ctx->post_conv = gst_d3d12_converter_new (filter->device, + ctx->post_conv = gst_d3d12_converter_new (device, nullptr, yadif_info, in_info, nullptr, nullptr, config); if (!ctx->post_conv) { GST_ERROR_OBJECT (self, "Couldn't create post converter"); @@ -826,8 +694,8 @@ static gboolean gst_d3d12_deinterlace_set_info (GstD3D12BaseFilter * filter, - GstCaps * incaps, GstVideoInfo * in_info, GstCaps * outcaps, - GstVideoInfo * out_info) + GstD3D12Device * device, GstCaps * incaps, GstVideoInfo * in_info, + GstCaps * outcaps, GstVideoInfo * out_info) { auto trans = GST_BASE_TRANSFORM (filter); auto self = GST_D3D12_DEINTERLACE (filter); @@ -862,8 +730,8 @@ return TRUE; priv->in_info = *in_info; - if (!gst_d3d12_deinterlace_prepare_convert (self, incaps, &priv->in_info, - &priv->yadif_info)) { + if (!gst_d3d12_deinterlace_prepare_convert (self, device, incaps, + &priv->in_info, &priv->yadif_info)) { return FALSE; } @@ -871,7 +739,7 @@ if (priv->engine == GST_D3D12_DEINTERLACE_ENGINE_COMPUTE) { priv->use_compute = TRUE; } else if (priv->engine == GST_D3D12_DEINTERLACE_ENGINE_AUTO && - !gst_d3d12_device_is_uma (filter->device) && !priv->conv_ctx) { + !gst_d3d12_device_is_uma (device) && !priv->conv_ctx) { /* Since yadif shader is full compute shader, in case of dGPU, * prefer compute queue so that task can be overlapped with other 3D tasks */ @@ -880,7 +748,7 @@ GST_DEBUG_OBJECT (self, "Use compute engine: %d", priv->use_compute); - priv->yadif = gst_d3d12_yadif_new (filter->device, &priv->yadif_info, + priv->yadif = gst_d3d12_yadif_new (device, &priv->yadif_info, priv->use_compute); if (!priv->yadif) { GST_ERROR_OBJECT (self, "Couldn't create yadif 
object");
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3d12/gstd3d12fisheyedewarp.cpp
Added
@@ -0,0 +1,1280 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +/** + * SECTION:element-d3d12fisheyedewarp + * @title: d3d12fisheyedewarp + * + * A Direct3D12-based fisheye dewarping element + * + * Since: 1.28 + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gstd3d12fisheyedewarp.h" +#include "gstd3d12pluginutils.h" +#include <directx/d3dx12.h> +#include <mutex> +#include <memory> +#include <wrl.h> +#include <math.h> +#include <gst/d3dshader/gstd3dshader.h> +#include <DirectXMath.h> + +/* *INDENT-OFF* */ +using namespace Microsoft::WRL; +using namespace DirectX; +/* *INDENT-ON* */ + +GST_DEBUG_CATEGORY_STATIC (gst_d3d12_fisheye_dewarp_debug); +#define GST_CAT_DEFAULT gst_d3d12_fisheye_dewarp_debug + +static GstStaticPadTemplate sink_template = GST_STATIC_PAD_TEMPLATE ("sink", + GST_PAD_SINK, + GST_PAD_ALWAYS, + GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE_WITH_FEATURES + (GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY, GST_D3D12_ALL_FORMATS) "; " + GST_VIDEO_CAPS_MAKE_WITH_FEATURES + (GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY "," + GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, + GST_D3D12_ALL_FORMATS))); + +static GstStaticPadTemplate src_template = GST_STATIC_PAD_TEMPLATE 
("src", + GST_PAD_SRC, + GST_PAD_ALWAYS, + GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE_WITH_FEATURES + (GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY, GST_D3D12_ALL_FORMATS) "; " + GST_VIDEO_CAPS_MAKE_WITH_FEATURES + (GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY "," + GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, + GST_D3D12_ALL_FORMATS))); + +enum ProjectionType +{ + PROJECTION_PASSTHROUGH, + PROJECTION_EQUIRECT, + PROJECTION_PANORAMA, + PROJECTION_PERSPECTIVE, +}; + +/** + * GstD3D12FisheyeDewarpProjectionType: + * + * Since: 1.28 + */ +static GType +gst_d3d12_fisheye_dewarp_projection_type_get_type (void) +{ + static GType type = 0; + static const GEnumValue types[] = { + {PROJECTION_PASSTHROUGH, "Passthrough", "passthrough"}, + {PROJECTION_EQUIRECT, "Equirectangular", "equirect"}, + {PROJECTION_PANORAMA, "Panorama", "panorama"}, + {PROJECTION_PERSPECTIVE, "Perspective", "perspective"}, + {0, nullptr, nullptr}, + }; + + GST_D3D12_CALL_ONCE_BEGIN { + type = g_enum_register_static ("GstD3D12FisheyeDewarpProjectionType", + types); + } GST_D3D12_CALL_ONCE_END; + + return type; +} + +enum RotationSpace +{ + ROTATION_SPACE_LOCAL, + ROTATION_SPACE_WORLD, +}; + +/** + * GstD3D12FisheyeDewarpRotationSpace: + * + * Since: 1.28 + */ +static GType +gst_d3d12_fisheye_dewarp_rotation_space_get_type (void) +{ + static GType type = 0; + static const GEnumValue types[] = { + {ROTATION_SPACE_LOCAL, "Local", "local"}, + {ROTATION_SPACE_WORLD, "World", "world"}, + {0, nullptr, nullptr}, + }; + + GST_D3D12_CALL_ONCE_BEGIN { + type = g_enum_register_static ("GstD3D12FisheyeDewarpRotationSpace", types); + } GST_D3D12_CALL_ONCE_END; + + return type; +} + +enum RotationOrder +{ + ROT_XYZ, + ROT_XZY, + ROT_YXZ, + ROT_YZX, + ROT_ZXY, + ROT_ZYX, +}; + +/** + * GstD3D12FisheyeDewarpRotationOrder: + * + * Since: 1.28 + */ +static GType +gst_d3d12_fisheye_rotation_order_get_type (void) +{ + static GType type = 0; + static const GEnumValue types[] = { + {ROT_XYZ, "XYZ", "xyz"}, + {ROT_XZY, "XZY", "xzy"}, + {ROT_YXZ,
"YXZ", "yxz"}, + {ROT_YZX, "YZX", "yzx"}, + {ROT_ZXY, "ZXY", "zxy"}, + {ROT_ZYX, "ZYX", "zyx"}, + {0, nullptr, nullptr}, + }; + + GST_D3D12_CALL_ONCE_BEGIN { + type = g_enum_register_static ("GstD3D12FisheyeDewarpRotationOrder", types); + } GST_D3D12_CALL_ONCE_END; + + return type; +} + +enum +{ + PROP_0, + PROP_PROJ_TYPE, + PROP_ROTATION_SPACE, + PROP_CENTER_X, + PROP_CENTER_Y, + PROP_RADIUS_X, + PROP_RADIUS_Y, + PROP_VIEWPORT_X, + PROP_VIEWPORT_Y, + PROP_VIEWPORT_WIDTH, + PROP_VIEWPORT_HEIGHT, + PROP_ROI_X, + PROP_ROI_Y, + PROP_ROI_WIDTH, + PROP_ROI_HEIGHT, + PROP_FISHEYE_FOV, + PROP_VERTICAL_FOV, + PROP_HORIZONTAL_FOV, + PROP_ROTATION_ORDER, + PROP_ROTATION_X, + PROP_ROTATION_Y, + PROP_ROTATION_Z, + PROP_INNER_RADIUS, +}; + +#define DEFAULT_PROJ_TYPE PROJECTION_EQUIRECT +#define DEFAULT_ROTATION_SPACE ROTATION_SPACE_LOCAL +#define DEFAULT_CENTER_X 0.5 +#define DEFAULT_CENTER_Y 0.5 +#define DEFAULT_RADIUS_X 0.5 +#define DEFAULT_RADIUS_Y 0.5 +#define DEFAULT_RECT_X 0.0 +#define DEFAULT_RECT_Y 0.0 +#define DEFAULT_RECT_WIDTH 1.0 +#define DEFAULT_RECT_HEIGHT 1.0 +#define DEFAULT_FISHEYE_FOV 180.0 +#define DEFAULT_VERTICAL_FOV 90.0 +#define DEFAULT_HORIZONTAL_FOV 90.0 +#define DEFAULT_ROTATION_ORDER ROT_ZXY +#define DEFAULT_ANGLE 0.0 +#define DEFAULT_INNER_RADIUS 0.3 + +/* *INDENT-OFF* */ +struct DewarpRect +{ + double x = DEFAULT_RECT_X; + double y = DEFAULT_RECT_Y; + double width = DEFAULT_RECT_WIDTH; + double height = DEFAULT_RECT_HEIGHT; +}; + +struct DewarpConstBuf +{ + XMFLOAT2 fisheyeCenter; + XMFLOAT2 fisheyeRadius; + + FLOAT maxAngle; + FLOAT horizontalFOV; + FLOAT verticalFOV; + FLOAT rollAngle; + + XMFLOAT2 roiOffset; + XMFLOAT2 roiScale; + + FLOAT innerRadius; + FLOAT invFocalLenX; + FLOAT invFocalLenY; + FLOAT padding; + + XMFLOAT4 RotationMatrixRow0; + XMFLOAT4 RotationMatrixRow1; + XMFLOAT4 RotationMatrixRow2; +}; + +struct DewarpContext +{ + ~DewarpContext() + { + if (fence_val) { + gst_d3d12_device_fence_wait (device, + 
D3D12_COMMAND_LIST_TYPE_DIRECT, fence_val);
+    }
+
+    gst_clear_object (&conv);
+    gst_clear_object (&ca_pool);
+    gst_clear_object (&desc_pool);
+    gst_clear_object (&device);
+  }
+
+  ComPtr<ID3D12RootSignature> rs;
+  ComPtr<ID3D12PipelineState> pso_equirect;
+  ComPtr<ID3D12PipelineState> pso_panorama;
+  ComPtr<ID3D12PipelineState> pso_perspective;
+  ComPtr<ID3D12GraphicsCommandList> cl;
+  ComPtr<ID3D12Resource> uv_remap;
+
+  guint dispatch_x;
+  guint dispatch_y;
+
+  ID3D12Fence *cq_fence;
+  GstD3D12CmdAllocPool *ca_pool = nullptr;
+  GstD3D12DescHeapPool *desc_pool = nullptr;
+  GstD3D12Device *device = nullptr;
+  GstD3D12CmdQueue *cq = nullptr;
+  guint64 fence_val = 0;
+  GstD3D12Converter *conv = nullptr;
+};
+
+struct GstD3D12FisheyeDewarpPrivate
+{
+  GstD3D12FisheyeDewarpPrivate ()
+  {
+    fence_data_pool = gst_d3d12_fence_data_pool_new ();
+  }
+
+  ~GstD3D12FisheyeDewarpPrivate ()
+  {
+    gst_clear_object (&fence_data_pool);
+  }
+
+  GstD3D12FenceDataPool *fence_data_pool;
+
+  std::shared_ptr<DewarpContext> ctx;
+
+  gboolean prop_updated = FALSE;
+  gboolean viewport_updated = FALSE;
+  DewarpConstBuf cbuf;
+  GstVideoRectangle original_viewport;
+
+  ProjectionType proj_type = DEFAULT_PROJ_TYPE;
+  RotationSpace rotation_space = DEFAULT_ROTATION_SPACE;
+  double center[2] = { DEFAULT_CENTER_X, DEFAULT_CENTER_Y };
+  double radius[2] = { DEFAULT_RADIUS_X, DEFAULT_RADIUS_Y };
+  DewarpRect viewport;
+  DewarpRect roi;
+  double fisheye_fov = DEFAULT_FISHEYE_FOV;
+  double vertical_fov = DEFAULT_VERTICAL_FOV;
+  double horizontal_fov = DEFAULT_HORIZONTAL_FOV;
+  RotationOrder rotation_order = DEFAULT_ROTATION_ORDER;
+  double rotation_x = DEFAULT_ANGLE;
+  double rotation_y = DEFAULT_ANGLE;
+  double rotation_z = DEFAULT_ANGLE;
+  double inner_radius = DEFAULT_INNER_RADIUS;
+
+  std::recursive_mutex lock;
+};
+/* *INDENT-ON* */
+
+struct _GstD3D12FisheyeDewarp
+{
+  GstD3D12BaseFilter parent;
+
+  GstD3D12FisheyeDewarpPrivate *priv;
+};
+
+static void gst_d3d12_fisheye_dewarp_finalize (GObject *
object); +static void gst_d3d12_fisheye_dewarp_set_property (GObject * object, + guint prop_id, const GValue * value, GParamSpec * pspec); +static void gst_d3d12_fisheye_dewarp_get_property (GObject * object, + guint prop_id, GValue * value, GParamSpec * pspec); +static gboolean gst_d3d12_fisheye_dewarp_stop (GstBaseTransform * trans); +static gboolean gst_d3d12_fisheye_dewarp_transform_meta (GstBaseTransform * + trans, GstBuffer * outbuf, GstMeta * meta, GstBuffer * inbuf); +static GstFlowReturn gst_d3d12_fisheye_dewarp_generate_output (GstBaseTransform + * trans, GstBuffer ** buffer); +static GstFlowReturn gst_d3d12_fisheye_dewarp_transform (GstBaseTransform * + trans, GstBuffer * inbuf, GstBuffer * outbuf); +static gboolean gst_d3d12_fisheye_dewarp_set_info (GstD3D12BaseFilter * filter, + GstD3D12Device * device, GstCaps * incaps, GstVideoInfo * in_info, + GstCaps * outcaps, GstVideoInfo * out_info); +static gboolean +gst_d3d12_fisheye_dewarp_propose_allocation (GstD3D12BaseFilter * filter, + GstD3D12Device * device, GstQuery * decide_query, GstQuery * query); + +#define gst_d3d12_fisheye_dewarp_parent_class parent_class +G_DEFINE_TYPE (GstD3D12FisheyeDewarp, gst_d3d12_fisheye_dewarp, + GST_TYPE_D3D12_BASE_FILTER); + +static void +gst_d3d12_fisheye_dewarp_class_init (GstD3D12FisheyeDewarpClass * klass) +{ + auto object_class = G_OBJECT_CLASS (klass); + auto element_class = GST_ELEMENT_CLASS (klass); + auto trans_class = GST_BASE_TRANSFORM_CLASS (klass); + auto filter_class = GST_D3D12_BASE_FILTER_CLASS (klass); + GParamFlags param_flags = + (GParamFlags) (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS); + + object_class->set_property = gst_d3d12_fisheye_dewarp_set_property; + object_class->get_property = gst_d3d12_fisheye_dewarp_get_property; + object_class->finalize = gst_d3d12_fisheye_dewarp_finalize; + + g_object_class_install_property (object_class, PROP_PROJ_TYPE, + g_param_spec_enum ("projection-type", "Projection Type", + "Projection type to use", + 
gst_d3d12_fisheye_dewarp_projection_type_get_type (), + DEFAULT_PROJ_TYPE, param_flags)); + + g_object_class_install_property (object_class, PROP_ROTATION_SPACE, + g_param_spec_enum ("rotation-space", "Rotation Space", + "Controls whether rotations are applied in local " + "(intrinsic, camera-relative) or world (extrinsic, fixed-axis) space", + gst_d3d12_fisheye_dewarp_rotation_space_get_type (), + DEFAULT_ROTATION_SPACE, param_flags)); + + g_object_class_install_property (object_class, PROP_CENTER_X, + g_param_spec_double ("center-x", "Center X", + "Normalized X position of fisheye circle", + 0, 1.0, DEFAULT_CENTER_X, param_flags)); + + g_object_class_install_property (object_class, PROP_CENTER_Y, + g_param_spec_double ("center-y", "Center Y", + "Normalized Y position of fisheye circle", + 0, 1.0, DEFAULT_CENTER_Y, param_flags)); + + g_object_class_install_property (object_class, PROP_RADIUS_X, + g_param_spec_double ("radius-x", "Radius X", + "Normalized horizontal radius of fisheye circle", + 0, 1.0, DEFAULT_RADIUS_X, param_flags)); + + g_object_class_install_property (object_class, PROP_RADIUS_Y, + g_param_spec_double ("radius-y", "Radius Y", + "Normalized vertical radius of fisheye circle", + 0, 1.0, DEFAULT_RADIUS_Y, param_flags)); + + g_object_class_install_property (object_class, PROP_VIEWPORT_X, + g_param_spec_double ("viewport-x", "Viewport X", + "Normalized top-left viewport X position", + 0, 1.0, DEFAULT_RECT_X, param_flags)); + + g_object_class_install_property (object_class, PROP_VIEWPORT_Y, + g_param_spec_double ("viewport-y", "Viewport Y", + "Normalized top-left viewport Y position", + 0, 1.0, DEFAULT_RECT_Y, param_flags)); + + g_object_class_install_property (object_class, PROP_VIEWPORT_WIDTH, + g_param_spec_double ("viewport-width", "Viewport Width", + "Normalized viewport width", + 0, 1.0, DEFAULT_RECT_WIDTH, param_flags)); + + g_object_class_install_property (object_class, PROP_VIEWPORT_HEIGHT, + g_param_spec_double ("viewport-height", "Viewport 
Height", + "Normalized viewport height", + 0, 1.0, DEFAULT_RECT_HEIGHT, param_flags)); + + g_object_class_install_property (object_class, PROP_ROI_X, + g_param_spec_double ("roi-x", "ROI X", + "Normalized horizontal ROI offset (top-left), in output image space", + 0, 1.0, DEFAULT_RECT_X, param_flags)); + + g_object_class_install_property (object_class, PROP_ROI_Y, + g_param_spec_double ("roi-y", "ROI Y", + "Normalized vertical ROI offset (top-left), in output image space", + 0, 1.0, DEFAULT_RECT_Y, param_flags)); + + g_object_class_install_property (object_class, PROP_ROI_WIDTH, + g_param_spec_double ("roi-width", "ROI Width", + "Normalized ROI width, in output image space", + 0, 1.0, DEFAULT_RECT_WIDTH, param_flags)); + + g_object_class_install_property (object_class, PROP_ROI_HEIGHT, + g_param_spec_double ("roi-height", "ROI Height", + "Normalized ROI height, in output image space", + 0, 1.0, DEFAULT_RECT_HEIGHT, param_flags)); + + g_object_class_install_property (object_class, PROP_FISHEYE_FOV, + g_param_spec_double ("fisheye-fov", "Fisheye FOV", + "Fisheye image field-of-view angle, in degrees", + -G_MAXDOUBLE, G_MAXDOUBLE, DEFAULT_FISHEYE_FOV, param_flags)); + + g_object_class_install_property (object_class, PROP_VERTICAL_FOV, + g_param_spec_double ("vertical-fov", "Vertical FOV", + "Vertical field-of-view angle of output, in degrees; " + "ignored in 'panorama' projection", + -G_MAXDOUBLE, G_MAXDOUBLE, DEFAULT_VERTICAL_FOV, param_flags)); + + g_object_class_install_property (object_class, PROP_HORIZONTAL_FOV, + g_param_spec_double ("horizontal-fov", "Horizontal FOV", + "Horizontal field-of-view angle of output, in degrees; " + "ignored in 'panorama' projection", + -G_MAXDOUBLE, G_MAXDOUBLE, DEFAULT_HORIZONTAL_FOV, param_flags)); + + g_object_class_install_property (object_class, PROP_ROTATION_ORDER, + g_param_spec_enum ("rotation-order", "Rotation Order", + "Rotation axis order to apply, ignored in 'panorama' projection", + 
gst_d3d12_fisheye_rotation_order_get_type (), + DEFAULT_ROTATION_ORDER, param_flags)); + + g_object_class_install_property (object_class, PROP_ROTATION_X, + g_param_spec_double ("rotation-x", "Rotation X", + "Pitch (X-axis rotation) angle, in degrees; " + "ignored in 'panorama' projection", + -G_MAXDOUBLE, G_MAXDOUBLE, DEFAULT_ANGLE, param_flags)); + + g_object_class_install_property (object_class, PROP_ROTATION_Y, + g_param_spec_double ("rotation-y", "Rotation Y", + "Yaw (Y-axis rotation) angle, in degrees; " + "ignored in 'panorama' projection", + -G_MAXDOUBLE, G_MAXDOUBLE, DEFAULT_ANGLE, param_flags)); + + g_object_class_install_property (object_class, PROP_ROTATION_Z, + g_param_spec_double ("rotation-z", "Rotation Z", + "Roll (Z-axis rotation) angle, in degrees", + -G_MAXDOUBLE, G_MAXDOUBLE, DEFAULT_ANGLE, param_flags)); + + g_object_class_install_property (object_class, PROP_INNER_RADIUS, + g_param_spec_double ("inner-radius", "Inner Radius", + "Normalized inner radius for cropping central area " + "(0.0 = center, 1.0 = full crop). 
Only used in 'panorama' projection", + 0.0, 1.0, DEFAULT_INNER_RADIUS, param_flags)); + + gst_element_class_add_static_pad_template (element_class, &sink_template); + gst_element_class_add_static_pad_template (element_class, &src_template); + + gst_element_class_set_static_metadata (element_class, + "Direct3D12 Fisheye Dewarp", "Filter/Converter/Video/Hardware", + "Dewarping fisheye image", "Seungha Yang <seungha@centricular.com>"); + + trans_class->passthrough_on_same_caps = FALSE; + + trans_class->stop = GST_DEBUG_FUNCPTR (gst_d3d12_fisheye_dewarp_stop); + trans_class->transform_meta = + GST_DEBUG_FUNCPTR (gst_d3d12_fisheye_dewarp_transform_meta); + trans_class->generate_output = + GST_DEBUG_FUNCPTR (gst_d3d12_fisheye_dewarp_generate_output); + trans_class->transform = + GST_DEBUG_FUNCPTR (gst_d3d12_fisheye_dewarp_transform); + + filter_class->set_info = + GST_DEBUG_FUNCPTR (gst_d3d12_fisheye_dewarp_set_info); + filter_class->propose_allocation = + GST_DEBUG_FUNCPTR (gst_d3d12_fisheye_dewarp_propose_allocation); + + gst_type_mark_as_plugin_api (gst_d3d12_fisheye_dewarp_projection_type_get_type + (), (GstPluginAPIFlags) 0); + gst_type_mark_as_plugin_api (gst_d3d12_fisheye_dewarp_rotation_space_get_type + (), (GstPluginAPIFlags) 0); + gst_type_mark_as_plugin_api (gst_d3d12_fisheye_rotation_order_get_type (), + (GstPluginAPIFlags) 0); + + GST_DEBUG_CATEGORY_INIT (gst_d3d12_fisheye_dewarp_debug, "d3d12fisheyedewarp", + 0, "d3d12fisheyedewarp"); +} + +static void +gst_d3d12_fisheye_dewarp_init (GstD3D12FisheyeDewarp * self) +{ + self->priv = new GstD3D12FisheyeDewarpPrivate (); +} + +static void +gst_d3d12_fisheye_dewarp_finalize (GObject * object) +{ + auto self = GST_D3D12_FISHEYE_DEWARP (object); + + delete self->priv; + + G_OBJECT_CLASS (parent_class)->finalize (object); +} + +static void +update_double_value (GstD3D12FisheyeDewarp * self, double *old_val, + const GValue * new_val) +{ + auto priv = self->priv; + auto tmp = g_value_get_double (new_val); + + if (tmp 
!= *old_val) {
+    priv->prop_updated = TRUE;
+    *old_val = tmp;
+  }
+}
+
+static void
+update_viewport_value (GstD3D12FisheyeDewarp * self, double *old_val,
+    const GValue * new_val)
+{
+  auto priv = self->priv;
+  auto tmp = g_value_get_double (new_val);
+
+  if (tmp != *old_val) {
+    priv->viewport_updated = TRUE;
+    *old_val = tmp;
+  }
+}
+
+static void
+gst_d3d12_fisheye_dewarp_set_property (GObject * object, guint prop_id,
+    const GValue * value, GParamSpec * pspec)
+{
+  auto self = GST_D3D12_FISHEYE_DEWARP (object);
+  auto priv = self->priv;
+
+  std::lock_guard < std::recursive_mutex > lk (priv->lock);
+
+  switch (prop_id) {
+    case PROP_PROJ_TYPE:
+    {
+      auto type = (ProjectionType) g_value_get_enum (value);
+      if (type != priv->proj_type) {
+        priv->proj_type = type;
+        priv->prop_updated = TRUE;
+      }
+      break;
+    }
+    case PROP_ROTATION_SPACE:
+    {
+      auto space = (RotationSpace) g_value_get_enum (value);
+      if (space != priv->rotation_space) {
+        priv->rotation_space = space;
+        priv->prop_updated = TRUE;
+      }
+      break;
+    }
+    case PROP_CENTER_X:
+      update_double_value (self, &priv->center[0], value);
+      break;
+    case PROP_CENTER_Y:
+      update_double_value (self, &priv->center[1], value);
+      break;
+    case PROP_RADIUS_X:
+      update_double_value (self, &priv->radius[0], value);
+      break;
+    case PROP_RADIUS_Y:
+      update_double_value (self, &priv->radius[1], value);
+      break;
+    case PROP_VIEWPORT_X:
+      update_viewport_value (self, &priv->viewport.x, value);
+      break;
+    case PROP_VIEWPORT_Y:
+      update_viewport_value (self, &priv->viewport.y, value);
+      break;
+    case PROP_VIEWPORT_WIDTH:
+      update_viewport_value (self, &priv->viewport.width, value);
+      break;
+    case PROP_VIEWPORT_HEIGHT:
+      update_viewport_value (self, &priv->viewport.height, value);
+      break;
+    case PROP_ROI_X:
+      update_double_value (self, &priv->roi.x, value);
+      break;
+    case PROP_ROI_Y:
+      update_double_value (self, &priv->roi.y, value);
+      break;
+    case PROP_ROI_WIDTH:
+      update_double_value (self, &priv->roi.width, value);
+      break;
+    case
PROP_ROI_HEIGHT:
+      update_double_value (self, &priv->roi.height, value);
+      break;
+    case PROP_FISHEYE_FOV:
+      update_double_value (self, &priv->fisheye_fov, value);
+      break;
+    case PROP_VERTICAL_FOV:
+      update_double_value (self, &priv->vertical_fov, value);
+      break;
+    case PROP_HORIZONTAL_FOV:
+      update_double_value (self, &priv->horizontal_fov, value);
+      break;
+    case PROP_ROTATION_ORDER:
+    {
+      auto order = (RotationOrder) g_value_get_enum (value);
+      if (order != priv->rotation_order) {
+        priv->rotation_order = order;
+        priv->prop_updated = TRUE;
+      }
+      break;
+    }
+    case PROP_ROTATION_X:
+      update_double_value (self, &priv->rotation_x, value);
+      break;
+    case PROP_ROTATION_Y:
+      update_double_value (self, &priv->rotation_y, value);
+      break;
+    case PROP_ROTATION_Z:
+      update_double_value (self, &priv->rotation_z, value);
+      break;
+    case PROP_INNER_RADIUS:
+      update_double_value (self, &priv->inner_radius, value);
+      break;
+    default:
+      G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec);
+      break;
+  }
+}
+
+static void
+gst_d3d12_fisheye_dewarp_get_property (GObject * object, guint prop_id,
+    GValue * value, GParamSpec * pspec)
+{
+  auto self = GST_D3D12_FISHEYE_DEWARP (object);
+  auto priv = self->priv;
+
+  std::lock_guard < std::recursive_mutex > lk (priv->lock);
+
+  switch (prop_id) {
+    case PROP_PROJ_TYPE:
+      g_value_set_enum (value, priv->proj_type);
+      break;
+    case PROP_ROTATION_SPACE:
+      g_value_set_enum (value, priv->rotation_space);
+      break;
+    case PROP_CENTER_X:
+      g_value_set_double (value, priv->center[0]);
+      break;
+    case PROP_CENTER_Y:
+      g_value_set_double (value, priv->center[1]);
+      break;
+    case PROP_RADIUS_X:
+      g_value_set_double (value, priv->radius[0]);
+      break;
+    case PROP_RADIUS_Y:
+      g_value_set_double (value, priv->radius[1]);
+      break;
+    case PROP_VIEWPORT_X:
+      g_value_set_double (value, priv->viewport.x);
+      break;
+    case PROP_VIEWPORT_Y:
+      g_value_set_double (value, priv->viewport.y);
+      break;
+    case PROP_VIEWPORT_WIDTH:
+      g_value_set_double (value,
priv->viewport.width); + break; + case PROP_VIEWPORT_HEIGHT: + g_value_set_double (value, priv->viewport.height); + break; + case PROP_ROI_X: + g_value_set_double (value, priv->roi.x); + break; + case PROP_ROI_Y: + g_value_set_double (value, priv->roi.y); + break; + case PROP_ROI_WIDTH: + g_value_set_double (value, priv->roi.width); + break; + case PROP_ROI_HEIGHT: + g_value_set_double (value, priv->roi.height); + break; + case PROP_FISHEYE_FOV: + g_value_set_double (value, priv->fisheye_fov); + break; + case PROP_VERTICAL_FOV: + g_value_set_double (value, priv->vertical_fov); + break; + case PROP_HORIZONTAL_FOV: + g_value_set_double (value, priv->horizontal_fov); + break; + case PROP_ROTATION_ORDER: + g_value_set_enum (value, priv->rotation_order); + break; + case PROP_ROTATION_X: + g_value_set_double (value, priv->rotation_x); + break; + case PROP_ROTATION_Y: + g_value_set_double (value, priv->rotation_y); + break; + case PROP_ROTATION_Z: + g_value_set_double (value, priv->rotation_z); + break; + case PROP_INNER_RADIUS: + g_value_set_double (value, priv->inner_radius); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static gboolean +gst_d3d12_fisheye_dewarp_stop (GstBaseTransform * trans) +{ + auto self = GST_D3D12_FISHEYE_DEWARP (trans); + auto priv = self->priv; + + priv->ctx = nullptr; + + return GST_BASE_TRANSFORM_CLASS (parent_class)->stop (trans); +} + +static gboolean +gst_d3d12_fisheye_dewarp_propose_allocation (GstD3D12BaseFilter * filter, + GstD3D12Device * device, GstQuery * decide_query, GstQuery * query) +{ + if (!GST_D3D12_BASE_FILTER_CLASS (parent_class)->propose_allocation (filter, + device, decide_query, query)) { + return FALSE; + } + + gst_query_add_allocation_meta (query, + GST_VIDEO_OVERLAY_COMPOSITION_META_API_TYPE, nullptr); + + return TRUE; +} + +static HRESULT +gst_d3d12_fisheye_dewarp_get_rs_blob (GstD3D12Device * device, ID3DBlob ** blob) +{ + static ID3DBlob *rs_blob = nullptr; + 
static HRESULT hr = S_OK;
+
+  GST_D3D12_CALL_ONCE_BEGIN {
+    D3D12_VERSIONED_ROOT_SIGNATURE_DESC desc = { };
+    CD3DX12_ROOT_PARAMETER root_params[2];
+    CD3DX12_DESCRIPTOR_RANGE range_uav;
+
+    root_params[0].InitAsConstants (sizeof (DewarpConstBuf) / 4, 0);
+
+    range_uav.Init (D3D12_DESCRIPTOR_RANGE_TYPE_UAV, 1, 0);
+    root_params[1].InitAsDescriptorTable (1, &range_uav);
+    CD3DX12_VERSIONED_ROOT_SIGNATURE_DESC::Init_1_0 (desc, 2, root_params,
+        0, nullptr,
+        D3D12_ROOT_SIGNATURE_FLAG_DENY_VERTEX_SHADER_ROOT_ACCESS |
+        D3D12_ROOT_SIGNATURE_FLAG_DENY_HULL_SHADER_ROOT_ACCESS |
+        D3D12_ROOT_SIGNATURE_FLAG_DENY_DOMAIN_SHADER_ROOT_ACCESS |
+        D3D12_ROOT_SIGNATURE_FLAG_DENY_GEOMETRY_SHADER_ROOT_ACCESS);
+
+    ComPtr < ID3DBlob > error_blob;
+    hr = D3DX12SerializeVersionedRootSignature (&desc,
+        D3D_ROOT_SIGNATURE_VERSION_1_0, &rs_blob, &error_blob);
+    if (!gst_d3d12_result (hr, device)) {
+      const gchar *error_msg = nullptr;
+      if (error_blob)
+        error_msg = (const gchar *) error_blob->GetBufferPointer ();
+
+      GST_ERROR_OBJECT (device,
+          "Couldn't serialize rs, hr: 0x%x, error detail: %s",
+          (guint) hr, GST_STR_NULL (error_msg));
+    }
+  } GST_D3D12_CALL_ONCE_END;
+
+  if (rs_blob) {
+    *blob = rs_blob;
+    rs_blob->AddRef ();
+  }
+
+  return hr;
+}
+
+static inline float
+fmod_angle (double angle)
+{
+  return (float) fmod (fmod (angle, 360.0f) + 360.0f, 360.0f);
+}
+
+static gboolean
+gst_d3d12_fisheye_dewarp_update_cbuf (GstD3D12FisheyeDewarp * self)
+{
+  auto priv = self->priv;
+
+  if (!priv->prop_updated)
+    return TRUE;
+
+  priv->cbuf.fisheyeCenter.x = (FLOAT) priv->center[0];
+  priv->cbuf.fisheyeCenter.y = (FLOAT) priv->center[1];
+  priv->cbuf.fisheyeRadius.x = (FLOAT) priv->radius[0];
+  priv->cbuf.fisheyeRadius.y = (FLOAT) priv->radius[1];
+
+  priv->cbuf.maxAngle =
+      XMConvertToRadians (fmod_angle (priv->fisheye_fov) * 0.5f);
+  priv->cbuf.horizontalFOV =
+      XMConvertToRadians (fmod_angle (priv->horizontal_fov));
+  priv->cbuf.verticalFOV = XMConvertToRadians (fmod_angle (priv->vertical_fov));
+
+
priv->cbuf.roiOffset.x = (FLOAT) priv->roi.x; + priv->cbuf.roiOffset.y = (FLOAT) priv->roi.y; + priv->cbuf.roiScale.x = (FLOAT) priv->roi.width; + priv->cbuf.roiScale.y = (FLOAT) priv->roi.height; + + priv->cbuf.innerRadius = priv->inner_radius; + priv->cbuf.invFocalLenX = tanf (priv->cbuf.horizontalFOV * 0.5f); + priv->cbuf.invFocalLenY = tanf (priv->cbuf.verticalFOV * 0.5f); + + auto pitch_angle = XMConvertToRadians (fmod_angle (priv->rotation_x)); + auto yaw_angle = XMConvertToRadians (fmod_angle (priv->rotation_y)); + auto roll_angle = XMConvertToRadians (fmod_angle (priv->rotation_z)); + + priv->cbuf.rollAngle = roll_angle; + + auto rx = XMMatrixRotationX (pitch_angle); + auto ry = XMMatrixRotationY (yaw_angle); + auto rz = XMMatrixRotationZ (roll_angle); + + XMMATRIX m = XMMatrixIdentity (); + if (priv->rotation_space == ROTATION_SPACE_WORLD) { + switch (priv->rotation_order) { + case ROT_XYZ: + m = rx * ry * rz; + break; + case ROT_XZY: + m = rx * rz * ry; + break; + case ROT_YXZ: + m = ry * rx * rz; + break; + case ROT_YZX: + m = ry * rz * rx; + break; + case ROT_ZXY: + m = rz * rx * ry; + break; + case ROT_ZYX: + m = rz * ry * rx; + break; + } + } else { + switch (priv->rotation_order) { + case ROT_XYZ: + m = rz * ry * rx; + break; + case ROT_XZY: + m = ry * rz * rx; + break; + case ROT_YXZ: + m = rz * rx * ry; + break; + case ROT_YZX: + m = rx * rz * ry; + break; + case ROT_ZXY: + m = ry * rx * rz; + break; + case ROT_ZYX: + m = rx * ry * rz; + break; + } + } + + XMFLOAT3X3 mat3x3; + XMStoreFloat3x3 (&mat3x3, m); + + priv->cbuf.RotationMatrixRow0 = + XMFLOAT4 (mat3x3._11, mat3x3._12, mat3x3._13, 0.0f); + priv->cbuf.RotationMatrixRow1 = + XMFLOAT4 (mat3x3._21, mat3x3._22, mat3x3._23, 0.0f); + priv->cbuf.RotationMatrixRow2 = + XMFLOAT4 (mat3x3._31, mat3x3._32, mat3x3._33, 0.0f); + + return TRUE; +} + +static void +get_viewport (GstD3D12FisheyeDewarp * self, GstVideoRectangle * viewport) +{ + auto priv = self->priv; + + if (priv->original_viewport.w > 0 && 
priv->original_viewport.h > 0) { + double x = priv->viewport.x; + double y = priv->viewport.y; + double w = priv->viewport.width; + double h = priv->viewport.height; + + /* Ensure normalized coordinate */ + x = CLAMP (x, 0.0, 1.0); + y = CLAMP (y, 0.0, 1.0); + w = CLAMP (w, 0.0, 1.0); + h = CLAMP (h, 0.0, 1.0); + + /* Scale to real viewport size */ + gint xi = (gint) round ((double) priv->original_viewport.w * x) + + priv->original_viewport.x; + gint yi = (gint) round ((double) priv->original_viewport.h * y) + + priv->original_viewport.y; + gint wi = (gint) round ((double) priv->original_viewport.w * w); + gint hi = (gint) round ((double) priv->original_viewport.h * h); + + viewport->x = xi; + viewport->y = yi; + viewport->w = wi; + viewport->h = hi; + } else { + viewport->x = 0; + viewport->y = 0; + viewport->w = 0; + viewport->h = 0; + } +} + +static gboolean +gst_d3d12_fisheye_dewarp_set_info (GstD3D12BaseFilter * filter, + GstD3D12Device * device, GstCaps * incaps, GstVideoInfo * in_info, + GstCaps * outcaps, GstVideoInfo * out_info) +{ + auto self = GST_D3D12_FISHEYE_DEWARP (filter); + auto priv = self->priv; + + std::lock_guard < std::recursive_mutex > lk (priv->lock); + + if (priv->ctx) { + if (!gst_d3d12_device_is_equal (priv->ctx->device, device)) { + priv->ctx = nullptr; + } else { + gst_d3d12_device_fence_wait (priv->ctx->device, + D3D12_COMMAND_LIST_TYPE_DIRECT, priv->ctx->fence_val); + gst_clear_object (&priv->ctx->conv); + } + } + + if (priv->ctx && priv->ctx->uv_remap) { + auto desc = GetDesc (priv->ctx->uv_remap); + if ((gint) desc.Width != in_info->width || + (gint) desc.Height != in_info->height) { + priv->ctx->uv_remap = nullptr; + } + } + + if (!priv->ctx) { + auto ctx = std::make_shared < DewarpContext > (); + ctx->device = (GstD3D12Device *) gst_object_ref (device); + auto device_handle = gst_d3d12_device_get_device_handle (device); + ctx->ca_pool = gst_d3d12_cmd_alloc_pool_new (device_handle, + D3D12_COMMAND_LIST_TYPE_DIRECT); + + 
D3D12_DESCRIPTOR_HEAP_DESC desc_heap_desc = { }; + desc_heap_desc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV; + desc_heap_desc.NumDescriptors = 1; + desc_heap_desc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE; + ctx->desc_pool = + gst_d3d12_desc_heap_pool_new (device_handle, &desc_heap_desc); + + ctx->cq = gst_d3d12_device_get_cmd_queue (ctx->device, + D3D12_COMMAND_LIST_TYPE_DIRECT); + ctx->cq_fence = gst_d3d12_cmd_queue_get_fence_handle (ctx->cq); + + ComPtr < ID3DBlob > rs_blob; + auto hr = gst_d3d12_fisheye_dewarp_get_rs_blob (device, &rs_blob); + if (!gst_d3d12_result (hr, device)) + return FALSE; + + hr = device_handle->CreateRootSignature (0, rs_blob->GetBufferPointer (), + rs_blob->GetBufferSize (), IID_PPV_ARGS (&ctx->rs)); + if (!gst_d3d12_result (hr, device)) { + GST_ERROR_OBJECT (self, "Couldn't create root signature"); + return FALSE; + } + + D3D12_COMPUTE_PIPELINE_STATE_DESC pso_desc = { }; + + GstD3DShaderByteCode cs_code; + if (!gst_d3d_plugin_shader_get_cs_blob (GST_D3D_PLUGIN_CS_FISHEYE_EQUIRECT, + GST_D3D_SM_5_0, &cs_code)) { + GST_ERROR_OBJECT (self, "Couldn't get compute shader bytecode"); + return FALSE; + } + + pso_desc.pRootSignature = ctx->rs.Get (); + pso_desc.CS.pShaderBytecode = cs_code.byte_code; + pso_desc.CS.BytecodeLength = cs_code.byte_code_len; + hr = device_handle->CreateComputePipelineState (&pso_desc, + IID_PPV_ARGS (&ctx->pso_equirect)); + if (!gst_d3d12_result (hr, device)) { + GST_ERROR_OBJECT (self, "Couldn't create PSO"); + return FALSE; + } + + if (!gst_d3d_plugin_shader_get_cs_blob (GST_D3D_PLUGIN_CS_FISHEYE_PANORAMA, + GST_D3D_SM_5_0, &cs_code)) { + GST_ERROR_OBJECT (self, "Couldn't get compute shader bytecode"); + return FALSE; + } + + pso_desc.CS.pShaderBytecode = cs_code.byte_code; + pso_desc.CS.BytecodeLength = cs_code.byte_code_len; + hr = device_handle->CreateComputePipelineState (&pso_desc, + IID_PPV_ARGS (&ctx->pso_panorama)); + if (!gst_d3d12_result (hr, device)) { + GST_ERROR_OBJECT (self, "Couldn't 
create PSO"); + return FALSE; + } + + if (!gst_d3d_plugin_shader_get_cs_blob + (GST_D3D_PLUGIN_CS_FISHEYE_PERSPECTIVE, GST_D3D_SM_5_0, &cs_code)) { + GST_ERROR_OBJECT (self, "Couldn't get compute shader bytecode"); + return FALSE; + } + + pso_desc.CS.pShaderBytecode = cs_code.byte_code; + pso_desc.CS.BytecodeLength = cs_code.byte_code_len; + hr = device_handle->CreateComputePipelineState (&pso_desc, + IID_PPV_ARGS (&ctx->pso_perspective)); + if (!gst_d3d12_result (hr, device)) { + GST_ERROR_OBJECT (self, "Couldn't create PSO"); + return FALSE; + } + + priv->ctx = std::move (ctx); + } + + auto & ctx = priv->ctx; + if (!ctx->uv_remap) { + D3D12_HEAP_PROPERTIES heap_prop = + CD3DX12_HEAP_PROPERTIES (D3D12_HEAP_TYPE_DEFAULT); + D3D12_RESOURCE_DESC desc = + CD3DX12_RESOURCE_DESC::Tex2D (DXGI_FORMAT_R16G16B16A16_UNORM, + in_info->width, in_info->height, 1, 1, 1, 0, + D3D12_RESOURCE_FLAG_ALLOW_UNORDERED_ACCESS | + D3D12_RESOURCE_FLAG_ALLOW_SIMULTANEOUS_ACCESS); + + auto device = gst_d3d12_device_get_device_handle (ctx->device); + auto hr = device->CreateCommittedResource (&heap_prop, + gst_d3d12_device_non_zeroed_supported (ctx->device) ? 
+ D3D12_HEAP_FLAG_CREATE_NOT_ZEROED : D3D12_HEAP_FLAG_NONE, + &desc, D3D12_RESOURCE_STATE_COMMON, nullptr, + IID_PPV_ARGS (&ctx->uv_remap)); + if (!gst_d3d12_result (hr, ctx->device)) { + GST_ERROR_OBJECT (self, "Couldn't create LUT texture"); + return FALSE; + } + } + + ctx->conv = gst_d3d12_converter_new (ctx->device, nullptr, + in_info, out_info, nullptr, nullptr, nullptr); + + priv->original_viewport.x = 0; + priv->original_viewport.y = 0; + priv->original_viewport.w = out_info->width; + priv->original_viewport.h = out_info->height; + + GstVideoRectangle viewport; + get_viewport (self, &viewport); + gst_d3d12_converter_update_viewport (ctx->conv, viewport.x, viewport.y, + viewport.w, viewport.h); + + ctx->dispatch_x = (in_info->width + 7) / 8; + ctx->dispatch_y = (in_info->height + 7) / 8; + + /* need to build LUT later */ + priv->prop_updated = TRUE; + priv->viewport_updated = FALSE; + + return TRUE; +} + +static gboolean +gst_d3d12_fisheye_dewarp_transform_meta (GstBaseTransform * trans, + GstBuffer * outbuf, GstMeta * meta, GstBuffer * inbuf) +{ + if (meta->info->api == GST_VIDEO_CROP_META_API_TYPE) + return FALSE; + + return GST_BASE_TRANSFORM_CLASS (parent_class)->transform_meta (trans, + outbuf, meta, inbuf); +} + +static GstFlowReturn +gst_d3d12_fisheye_dewarp_generate_output (GstBaseTransform * trans, + GstBuffer ** buffer) +{ + auto self = GST_D3D12_FISHEYE_DEWARP (trans); + auto priv = self->priv; + + std::lock_guard < std::recursive_mutex > lk (priv->lock); + if (!trans->queued_buf) + return GST_FLOW_OK; + + if (priv->proj_type != PROJECTION_PASSTHROUGH) { + return GST_BASE_TRANSFORM_CLASS (parent_class)->generate_output (trans, + buffer); + } + + *buffer = trans->queued_buf; + trans->queued_buf = nullptr; + + return GST_FLOW_OK; +} + +static GstFlowReturn +gst_d3d12_fisheye_dewarp_transform (GstBaseTransform * trans, GstBuffer * inbuf, + GstBuffer * outbuf) +{ + auto self = GST_D3D12_FISHEYE_DEWARP (trans); + auto priv = self->priv; + 
GstD3D12CmdAlloc *gst_ca; + GstD3D12FenceData *fence_data; + auto ctx = priv->ctx; + HRESULT hr; + + std::lock_guard < std::recursive_mutex > lk (priv->lock); + + g_assert (priv->proj_type != PROJECTION_PASSTHROUGH); + + if (!ctx) { + GST_ERROR_OBJECT (self, "Context is not configured"); + return GST_FLOW_ERROR; + } + + if (!gst_d3d12_fisheye_dewarp_update_cbuf (self)) { + GST_ERROR_OBJECT (self, "Couldn't update constant buffer"); + return GST_FLOW_ERROR; + } + + auto device = gst_d3d12_device_get_device_handle (ctx->device); + + gst_d3d12_fence_data_pool_acquire (priv->fence_data_pool, &fence_data); + + if (!gst_d3d12_cmd_alloc_pool_acquire (ctx->ca_pool, &gst_ca)) { + GST_ERROR_OBJECT (self, "Couldn't acquire command allocator"); + gst_d3d12_fence_data_unref (fence_data); + return GST_FLOW_ERROR; + } + + auto ca = gst_d3d12_cmd_alloc_get_handle (gst_ca); + gst_d3d12_fence_data_push (fence_data, FENCE_NOTIFY_MINI_OBJECT (gst_ca)); + + hr = ca->Reset (); + if (!gst_d3d12_result (hr, ctx->device)) { + GST_ERROR_OBJECT (self, "Couldn't reset command allocator"); + gst_d3d12_fence_data_unref (fence_data); + return GST_FLOW_ERROR; + } + + if (!ctx->cl) { + hr = device->CreateCommandList (0, D3D12_COMMAND_LIST_TYPE_DIRECT, + ca, nullptr, IID_PPV_ARGS (&priv->ctx->cl)); + } else { + hr = ctx->cl->Reset (ca, nullptr); + } + + if (!gst_d3d12_result (hr, ctx->device)) { + GST_ERROR_OBJECT (self, "Couldn't reset command list"); + gst_d3d12_fence_data_unref (fence_data); + return GST_FLOW_ERROR; + } + + if (priv->prop_updated) { + GstD3D12DescHeap *heap; + if (!gst_d3d12_desc_heap_pool_acquire (ctx->desc_pool, &heap)) { + GST_ERROR_OBJECT (self, "Couldn't acquire descriptor heap"); + gst_d3d12_fence_data_unref (fence_data); + return GST_FLOW_ERROR; + } + + auto heap_handle = gst_d3d12_desc_heap_get_handle (heap); + gst_d3d12_fence_data_push (fence_data, FENCE_NOTIFY_MINI_OBJECT (heap)); + + auto device = gst_d3d12_device_get_device_handle (ctx->device); + + auto cpu_handle = 
GetCPUDescriptorHandleForHeapStart (heap_handle);
+    D3D12_UNORDERED_ACCESS_VIEW_DESC uav_desc = { };
+    uav_desc.Format = DXGI_FORMAT_R16G16B16A16_UNORM;
+    uav_desc.ViewDimension = D3D12_UAV_DIMENSION_TEXTURE2D;
+    device->CreateUnorderedAccessView (ctx->uv_remap.Get (),
+        nullptr, &uav_desc, cpu_handle);
+
+    ID3D12DescriptorHeap *heaps[] = { heap_handle };
+
+    ctx->cl->SetComputeRootSignature (ctx->rs.Get ());
+    switch (priv->proj_type) {
+      case PROJECTION_EQUIRECT:
+        ctx->cl->SetPipelineState (ctx->pso_equirect.Get ());
+        break;
+      case PROJECTION_PANORAMA:
+        ctx->cl->SetPipelineState (ctx->pso_panorama.Get ());
+        break;
+      case PROJECTION_PERSPECTIVE:
+        ctx->cl->SetPipelineState (ctx->pso_perspective.Get ());
+        break;
+      default:
+        g_assert_not_reached ();
+        return GST_FLOW_ERROR;
+    }
+
+    ctx->cl->SetDescriptorHeaps (1, heaps);
+    ctx->cl->SetComputeRoot32BitConstants (0, sizeof (DewarpConstBuf) / 4,
+        &priv->cbuf, 0);
+    ctx->cl->SetComputeRootDescriptorTable (1,
+        GetGPUDescriptorHandleForHeapStart (heap_handle));
+    ctx->cl->Dispatch (ctx->dispatch_x, ctx->dispatch_y, 1);
+
+    D3D12_RESOURCE_BARRIER barrier =
+        CD3DX12_RESOURCE_BARRIER::Transition (ctx->uv_remap.Get (),
+        D3D12_RESOURCE_STATE_UNORDERED_ACCESS,
+        D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE);
+    ctx->cl->ResourceBarrier (1, &barrier);
+
+    priv->prop_updated = FALSE;
+  }
+
+  gst_d3d12_converter_set_remap (ctx->conv, ctx->uv_remap.Get ());
+
+  if (priv->viewport_updated) {
+    GstVideoRectangle viewport;
+    get_viewport (self, &viewport);
+    gst_d3d12_converter_update_viewport (ctx->conv, viewport.x, viewport.y,
+        viewport.w, viewport.h);
+    priv->viewport_updated = FALSE;
+  }
+
+  if (!gst_d3d12_converter_convert_buffer (ctx->conv, inbuf, outbuf, fence_data,
+          ctx->cl.Get (), TRUE)) {
+    GST_ERROR_OBJECT (self, "Couldn't convert buffer");
+    gst_d3d12_fence_data_unref (fence_data);
+    return GST_FLOW_ERROR;
+  }
+
+  hr = ctx->cl->Close ();
+  if (!gst_d3d12_result (hr, ctx->device)) {
+    gst_d3d12_fence_data_unref
(fence_data); + GST_ERROR_OBJECT (self, "Couldn't close command list"); + return GST_FLOW_ERROR; + } + + ID3D12CommandList *cl = { ctx->cl.Get () }; + gst_d3d12_cmd_queue_execute_command_lists (ctx->cq, 1, cl, &ctx->fence_val); + + gst_d3d12_cmd_queue_set_notify (ctx->cq, ctx->fence_val, + FENCE_NOTIFY_MINI_OBJECT (fence_data)); + + gst_d3d12_buffer_set_fence (outbuf, ctx->cq_fence, ctx->fence_val, FALSE); + + return GST_FLOW_OK; +}
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3d12/gstd3d12fisheyedewarp.h
Added
@@ -0,0 +1,32 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/gst.h> +#include "gstd3d12basefilter.h" + +G_BEGIN_DECLS + +#define GST_TYPE_D3D12_FISHEYE_DEWARP (gst_d3d12_fisheye_dewarp_get_type()) +G_DECLARE_FINAL_TYPE (GstD3D12FisheyeDewarp, gst_d3d12_fisheye_dewarp, + GST, D3D12_FISHEYE_DEWARP, GstD3D12BaseFilter) + +G_END_DECLS +
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3d12/gstd3d12interlace.cpp
Added
@@ -0,0 +1,1016 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +/** + * SECTION:element-d3d12interlace + * @title: d3d12interlace + * + * A Direct3D12 based interlacing element + * + * Since: 1.28 + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gstd3d12interlace.h" +#include "gstd3d12pluginutils.h" +#include "gstd3d12weaveinterlace.h" +#include <directx/d3dx12.h> +#include <mutex> +#include <memory> +#include <wrl.h> + +/* *INDENT-OFF* */ +using namespace Microsoft::WRL; +/* *INDENT-ON* */ + +GST_DEBUG_CATEGORY_STATIC (gst_d3d12_interlace_debug); +#define GST_CAT_DEFAULT gst_d3d12_interlace_debug + +static GstStaticPadTemplate sink_template = GST_STATIC_PAD_TEMPLATE ("sink", + GST_PAD_SINK, + GST_PAD_ALWAYS, + GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE_WITH_FEATURES + (GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY, GST_D3D12_ALL_FORMATS) + ", interlace-mode = progressive; " + GST_VIDEO_CAPS_MAKE_WITH_FEATURES + (GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY "," + GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, + GST_D3D12_ALL_FORMATS) + ", interlace-mode = progressive; ")); + +static GstStaticPadTemplate src_template = GST_STATIC_PAD_TEMPLATE ("src", + GST_PAD_SRC, + 
GST_PAD_ALWAYS, + GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE_WITH_FEATURES + (GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY, GST_D3D12_ALL_FORMATS) + ", interlace-mode = interleaved; " + GST_VIDEO_CAPS_MAKE_WITH_FEATURES + (GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY "," + GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, + GST_D3D12_ALL_FORMATS) + ", interlace-mode = interleaved; ")); + + +enum GstD3D12InterlacePattern +{ + GST_D3D12_INTERLACE_PATTERN_1_1, + GST_D3D12_INTERLACE_PATTERN_2_2, +}; + +/** + * GstD3D12InterlacePattern: + * + * Since: 1.28 + */ +#define GST_TYPE_D3D12_INTERLACE_PATTERN (gst_d3d12_interlace_pattern_get_type()) +static GType +gst_d3d12_interlace_pattern_get_type (void) +{ + static GType type = 0; + + GST_D3D12_CALL_ONCE_BEGIN { + static const GEnumValue types = { + /** + * GstD3D12InterlacePattern::1:1: + * + * Since: 1.28 + */ + {GST_D3D12_INTERLACE_PATTERN_1_1, "1:1 (e.g. 60p -> 60i)", "1:1"}, + + /** + * GstD3D12InterlaceFields::2:2: + * + * Since: 1.28 + */ + {GST_D3D12_INTERLACE_PATTERN_2_2, "2:2 (e.g. 
30p -> 60i)", "2:2"}, + + + {0, nullptr, nullptr}, + }; + + type = g_enum_register_static ("GstD3D12InterlacePattern", types); + } GST_D3D12_CALL_ONCE_END; + + return type; +} + +enum GstD3D12InterlaceEngine +{ + GST_D3D12_INTERLACE_ENGINE_AUTO, + GST_D3D12_INTERLACE_ENGINE_3D, + GST_D3D12_INTERLACE_ENGINE_COMPUTE, +}; + +/** + * GstD3D12InterlaceEngine: + * + * Since: 1.28 + */ +#define GST_TYPE_D3D12_INTERLACE_ENGINE (gst_d3d12_interlace_engine_get_type()) +static GType +gst_d3d12_interlace_engine_get_type (void) +{ + static GType type = 0; + + GST_D3D12_CALL_ONCE_BEGIN { + static const GEnumValue types = { + /** + * GstD3D12InterlaceEngine::auto: + * + * Since: 1.28 + */ + {GST_D3D12_INTERLACE_ENGINE_AUTO, + "iGPU uses 3D engine, dGPU uses compute engine", "auto"}, + + /** + * GstD3D12InterlaceEngine::3d: + * + * Since: 1.28 + */ + {GST_D3D12_INTERLACE_ENGINE_3D, "3D", "3d"}, + + /** + * GstD3D12InterlaceEngine::compute: + * + * Since: 1.28 + */ + {GST_D3D12_INTERLACE_ENGINE_COMPUTE, "Compute", "compute"}, + {0, nullptr, nullptr}, + }; + + type = g_enum_register_static ("GstD3D12InterlaceEngine", types); + } GST_D3D12_CALL_ONCE_END; + + return type; +} + +enum +{ + PROP_0, + PROP_TFF, + PROP_FIELD_PATTERN, + PROP_ENGINE, +}; + +#define DEFAULT_TFF FALSE +#define DEFAULT_FIELD_PATTERN GST_D3D12_INTERLACE_PATTERN_1_1 +#define DEFAULT_ENGINE GST_D3D12_INTERLACE_ENGINE_AUTO + + +/* *INDENT-OFF* */ +struct InterlaceConvCtx +{ + InterlaceConvCtx (GstD3D12Device * dev) + { + device = (GstD3D12Device *) gst_object_ref (dev); + auto device_handle = gst_d3d12_device_get_device_handle (device); + ca_pool = gst_d3d12_cmd_alloc_pool_new (device_handle, + D3D12_COMMAND_LIST_TYPE_DIRECT); + } + + ~InterlaceConvCtx () + { + gst_d3d12_device_fence_wait (device, D3D12_COMMAND_LIST_TYPE_DIRECT, + fence_val); + + if (pre_pool) + gst_buffer_pool_set_active (pre_pool, FALSE); + if (post_pool) + gst_buffer_pool_set_active (post_pool, FALSE); + cl = nullptr; + gst_clear_object 
(&pre_pool); + gst_clear_object (&post_pool); + gst_clear_object (&pre_conv); + gst_clear_object (&post_conv); + gst_clear_object (&ca_pool); + gst_clear_object (&device); + } + + GstD3D12Device *device = nullptr; + GstD3D12Converter *pre_conv = nullptr; + GstD3D12Converter *post_conv = nullptr; + GstBufferPool *pre_pool = nullptr; + GstBufferPool *post_pool = nullptr; + ComPtr<ID3D12GraphicsCommandList> cl; + GstD3D12CmdAllocPool *ca_pool = nullptr; + guint64 fence_val = 0; +}; + +struct GstD3D12InterlacePrivate +{ + GstD3D12InterlacePrivate () + { + fence_pool = gst_d3d12_fence_data_pool_new (); + } + + ~GstD3D12InterlacePrivate () + { + gst_clear_object (&weave); + gst_clear_object (&fence_pool); + } + + std::mutex lock; + GstD3D12WeaveInterlace *weave = nullptr; + GstD3D12FenceDataPool *fence_pool = nullptr; + std::shared_ptr<InterlaceConvCtx> conv_ctx; + GstVideoInfo in_info; + GstVideoInfo weave_info; + GstClockTime latency = 0; + gboolean use_compute = FALSE; + gboolean tff = DEFAULT_TFF; + GstD3D12InterlacePattern pattern = DEFAULT_FIELD_PATTERN; + GstD3D12InterlaceEngine engine = DEFAULT_ENGINE; +}; +/* *INDENT-ON* */ + +struct _GstD3D12Interlace +{ + GstD3D12BaseFilter parent; + + GstD3D12InterlacePrivate *priv; +}; + +static void gst_d3d12_interlace_set_property (GObject * object, guint prop_id, + const GValue * value, GParamSpec * pspec); +static void gst_d3d12_interlace_get_property (GObject * object, guint prop_id, + GValue * value, GParamSpec * pspec); +static void gst_d3d12_interlace_finalize (GObject * object); +static gboolean gst_d3d12_interlace_start (GstBaseTransform * trans); +static gboolean gst_d3d12_interlace_stop (GstBaseTransform * trans); +static GstCaps *gst_d3d12_interlace_transform_caps (GstBaseTransform * + trans, GstPadDirection direction, GstCaps * caps, GstCaps * filter); +static GstCaps *gst_d3d12_interlace_fixate_caps (GstBaseTransform * + base, GstPadDirection direction, GstCaps * caps, GstCaps * othercaps); +static gboolean 
gst_d3d12_interlace_sink_event (GstBaseTransform * trans, + GstEvent * event); +static gboolean gst_d3d12_interlace_query (GstBaseTransform * trans, + GstPadDirection direction, GstQuery * query); +static GstFlowReturn gst_d3d12_interlace_generate_output (GstBaseTransform * + trans, GstBuffer ** buffer); +static GstFlowReturn gst_d3d12_interlace_transform (GstBaseTransform * trans, + GstBuffer * inbuf, GstBuffer * outbuf); +static GstFlowReturn gst_d3d12_interlace_submit_input_buffer (GstBaseTransform + * trans, gboolean is_discont, GstBuffer * input); +static gboolean gst_d3d12_interlace_set_info (GstD3D12BaseFilter * filter, + GstD3D12Device * device, GstCaps * incaps, GstVideoInfo * in_info, + GstCaps * outcaps, GstVideoInfo * out_info); +static gboolean gst_d3d12_interlace_propose_allocation (GstD3D12BaseFilter * + filter, GstD3D12Device * device, GstQuery * decide_query, GstQuery * query); + +#define gst_d3d12_interlace_parent_class parent_class +G_DEFINE_TYPE (GstD3D12Interlace, gst_d3d12_interlace, + GST_TYPE_D3D12_BASE_FILTER); + +static void +gst_d3d12_interlace_class_init (GstD3D12InterlaceClass * klass) +{ + auto object_class = G_OBJECT_CLASS (klass); + auto element_class = GST_ELEMENT_CLASS (klass); + auto trans_class = GST_BASE_TRANSFORM_CLASS (klass); + auto filter_class = GST_D3D12_BASE_FILTER_CLASS (klass); + + object_class->set_property = gst_d3d12_interlace_set_property; + object_class->get_property = gst_d3d12_interlace_get_property; + object_class->finalize = gst_d3d12_interlace_finalize; + + g_object_class_install_property (object_class, PROP_TFF, + g_param_spec_boolean ("top-field-first", "top field first", + "Interlaced stream should be top field first", DEFAULT_TFF, + (GParamFlags) (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + g_object_class_install_property (object_class, PROP_FIELD_PATTERN, + g_param_spec_enum ("field-pattern", "Field pattern", + "The output field pattern", GST_TYPE_D3D12_INTERLACE_PATTERN, + DEFAULT_FIELD_PATTERN, + 
(GParamFlags) (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + g_object_class_install_property (object_class, PROP_ENGINE, + g_param_spec_enum ("engine", "Engine", "Engine to use", + GST_TYPE_D3D12_INTERLACE_ENGINE, DEFAULT_ENGINE, + (GParamFlags) (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + gst_element_class_add_static_pad_template (element_class, &sink_template); + gst_element_class_add_static_pad_template (element_class, &src_template); + + gst_element_class_set_static_metadata (element_class, + "Direct3D12 Interlacer", + "Filter/Interlace/Effect/Video/Hardware", + "A Direct3D12 interlacer element", + "Seungha Yang <seungha@centricular.com>"); + + trans_class->passthrough_on_same_caps = TRUE; + + trans_class->start = GST_DEBUG_FUNCPTR (gst_d3d12_interlace_start); + trans_class->stop = GST_DEBUG_FUNCPTR (gst_d3d12_interlace_stop); + trans_class->transform_caps = + GST_DEBUG_FUNCPTR (gst_d3d12_interlace_transform_caps); + trans_class->fixate_caps = + GST_DEBUG_FUNCPTR (gst_d3d12_interlace_fixate_caps); + trans_class->sink_event = GST_DEBUG_FUNCPTR (gst_d3d12_interlace_sink_event); + trans_class->query = GST_DEBUG_FUNCPTR (gst_d3d12_interlace_query); + trans_class->submit_input_buffer = + GST_DEBUG_FUNCPTR (gst_d3d12_interlace_submit_input_buffer); + trans_class->generate_output = + GST_DEBUG_FUNCPTR (gst_d3d12_interlace_generate_output); + trans_class->transform = GST_DEBUG_FUNCPTR (gst_d3d12_interlace_transform); + + filter_class->set_info = GST_DEBUG_FUNCPTR (gst_d3d12_interlace_set_info); + filter_class->propose_allocation = + GST_DEBUG_FUNCPTR (gst_d3d12_interlace_propose_allocation); + + gst_type_mark_as_plugin_api (GST_TYPE_D3D12_INTERLACE_PATTERN, + (GstPluginAPIFlags) 0); + gst_type_mark_as_plugin_api (GST_TYPE_D3D12_INTERLACE_ENGINE, + (GstPluginAPIFlags) 0); + + GST_DEBUG_CATEGORY_INIT (gst_d3d12_interlace_debug, "d3d12interlace", 0, + "d3d12interlace"); +} + +static void +gst_d3d12_interlace_init (GstD3D12Interlace * self) +{ + self->priv = 
new GstD3D12InterlacePrivate (); +} + +static void +gst_d3d12_interlace_finalize (GObject * object) +{ + auto self = GST_D3D12_INTERLACE (object); + + delete self->priv; + + G_OBJECT_CLASS (parent_class)->finalize (object); +} + +static gboolean +is_half_framerate (GstD3D12InterlacePattern pattern) +{ + if (pattern == GST_D3D12_INTERLACE_PATTERN_1_1) + return TRUE; + + return FALSE; +} + +static void +gst_d3d12_interlace_set_property (GObject * object, guint prop_id, + const GValue * value, GParamSpec * pspec) +{ + auto self = GST_D3D12_INTERLACE (object); + auto priv = self->priv; + + std::lock_guard < std::mutex > lk (priv->lock); + switch (prop_id) { + case PROP_TFF: + priv->tff = g_value_get_boolean (value); + break; + case PROP_FIELD_PATTERN: + priv->pattern = (GstD3D12InterlacePattern) g_value_get_enum (value); + break; + case PROP_ENGINE: + priv->engine = (GstD3D12InterlaceEngine) g_value_get_enum (value); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (self, prop_id, pspec); + break; + } +} + +static void +gst_d3d12_interlace_get_property (GObject * object, guint prop_id, + GValue * value, GParamSpec * pspec) +{ + auto self = GST_D3D12_INTERLACE (object); + auto priv = self->priv; + + std::lock_guard < std::mutex > lk (priv->lock); + switch (prop_id) { + case PROP_TFF: + g_value_set_boolean (value, priv->tff); + break; + case PROP_FIELD_PATTERN: + g_value_set_enum (value, priv->pattern); + break; + case PROP_ENGINE: + g_value_set_enum (value, priv->engine); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (self, prop_id, pspec); + break; + } +} + +static gboolean +gst_d3d12_interlace_start (GstBaseTransform * trans) +{ + auto self = GST_D3D12_INTERLACE (trans); + auto priv = self->priv; + + priv->latency = 0; + + return GST_BASE_TRANSFORM_CLASS (parent_class)->start (trans); +} + +static gboolean +gst_d3d12_interlace_stop (GstBaseTransform * trans) +{ + auto self = GST_D3D12_INTERLACE (trans); + auto priv = self->priv; + + { + 
std::lock_guard < std::mutex > lk (priv->lock); + gst_clear_object (&priv->weave); + priv->conv_ctx = nullptr; + } + + return GST_BASE_TRANSFORM_CLASS (parent_class)->stop (trans); +} + +static GstCaps * +gst_d3d12_interlace_remove_interlace_info (GstCaps * caps, + gboolean remove_framerate) +{ + auto res = gst_caps_new_empty (); + auto n = gst_caps_get_size (caps); + for (guint i = 0; i < n; i++) { + auto s = gst_caps_get_structure (caps, i); + auto f = gst_caps_get_features (caps, i); + + /* If this is already expressed by the existing caps + * skip this structure */ + if (i > 0 && gst_caps_is_subset_structure_full (res, s, f)) + continue; + + s = gst_structure_copy (s); + /* Only remove format info for the cases when we can actually convert */ + if (!gst_caps_features_is_any (f) + && gst_caps_features_contains (f, GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY)) { + if (remove_framerate) { + gst_structure_remove_fields (s, + "interlace-mode", "field-order", "framerate", nullptr); + } else { + gst_structure_remove_fields (s, + "interlace-mode", "field-order", nullptr); + } + } + + gst_caps_append_structure_full (res, s, gst_caps_features_copy (f)); + } + + return res; +} + +static GstCaps * +gst_d3d12_interlace_transform_caps (GstBaseTransform * trans, + GstPadDirection direction, GstCaps * caps, GstCaps * filter) +{ + auto self = GST_D3D12_INTERLACE (trans); + auto priv = self->priv; + + /* Get all possible caps that we can transform to */ + auto ret = gst_d3d12_interlace_remove_interlace_info (caps, + is_half_framerate (priv->pattern)); + + if (filter) { + auto tmp = gst_caps_intersect_full (filter, ret, GST_CAPS_INTERSECT_FIRST); + gst_caps_unref (ret); + ret = tmp; + } + + GST_DEBUG_OBJECT (trans, "transformed %" GST_PTR_FORMAT " into %" + GST_PTR_FORMAT, caps, ret); + + return ret; +} + +static GstCaps * +gst_d3d12_interlace_fixate_caps (GstBaseTransform * trans, + GstPadDirection direction, GstCaps * caps, GstCaps * othercaps) +{ + auto self = GST_D3D12_INTERLACE 
(trans); + auto priv = self->priv; + + std::lock_guard < std::mutex > lk (priv->lock); + + GST_DEBUG_OBJECT (self, + "trying to fixate othercaps %" GST_PTR_FORMAT " based on caps %" + GST_PTR_FORMAT, othercaps, caps); + + othercaps = gst_caps_truncate (othercaps); + othercaps = gst_caps_make_writable (othercaps); + + if (direction == GST_PAD_SRC) + return gst_caps_fixate (othercaps); + + auto tmp = gst_caps_copy (caps); + tmp = gst_caps_fixate (tmp); + + GstVideoInfo info; + if (!gst_video_info_from_caps (&info, tmp)) { + GST_WARNING_OBJECT (self, "Invalid caps %" GST_PTR_FORMAT, caps); + gst_caps_unref (tmp); + + return gst_caps_fixate (othercaps); + } + + auto s = gst_caps_get_structure (tmp, 0); + + gint fps_n, fps_d; + if (is_half_framerate (priv->pattern) && + gst_structure_get_fraction (s, "framerate", &fps_n, &fps_d) && + fps_n > 0 && fps_d > 0) { + fps_d *= 2; + + gst_caps_set_simple (othercaps, + "framerate", GST_TYPE_FRACTION, fps_n, fps_d, nullptr); + } + + gst_caps_set_simple (othercaps, + "interlace-mode", G_TYPE_STRING, "interleaved", + "field-order", G_TYPE_STRING, priv->tff ? 
"top-field-first" : + "bottom-field-first", nullptr); + gst_caps_unref (tmp); + + return gst_caps_fixate (othercaps); +} + +static gboolean +gst_d3d12_interlace_propose_allocation (GstD3D12BaseFilter * filter, + GstD3D12Device * device, GstQuery * decide_query, GstQuery * query) +{ + /* passthrough, we're done */ + if (!decide_query) + return TRUE; + + if (!GST_D3D12_BASE_FILTER_CLASS (parent_class)->propose_allocation (filter, + device, decide_query, query)) { + return FALSE; + } + + gst_query_add_allocation_meta (query, + GST_VIDEO_OVERLAY_COMPOSITION_META_API_TYPE, nullptr); + + return TRUE; +} + +static void +gst_d3d12_interlace_drain (GstD3D12Interlace * self) +{ + auto trans = GST_BASE_TRANSFORM (self); + auto priv = self->priv; + GstFlowReturn ret = GST_FLOW_OK; + GstBuffer *outbuf = nullptr; + + if (!priv->weave) + return; + + if (gst_base_transform_is_passthrough (trans)) { + gst_d3d12_weave_interlace_flush (priv->weave); + return; + } + + gst_d3d12_weave_interlace_drain (priv->weave); + do { + outbuf = nullptr; + ret = gst_d3d12_weave_interlace_pop (priv->weave, &outbuf); + if (ret == GST_D3D12_WEAVE_INTERLACE_FLOW_NEED_DATA) + ret = GST_FLOW_OK; + + if (outbuf) + ret = gst_pad_push (GST_BASE_TRANSFORM_SRC_PAD (trans), outbuf); + } while (ret == GST_FLOW_OK && outbuf); +} + +static gboolean +gst_d3d12_interlace_prepare_convert (GstD3D12Interlace * self, + GstD3D12Device * device, GstCaps * in_caps, const GstVideoInfo * in_info, + GstVideoInfo * weave_info) +{ + auto priv = self->priv; + + auto format = GST_VIDEO_INFO_FORMAT (in_info); + switch (format) { + case GST_VIDEO_FORMAT_RGB16: + case GST_VIDEO_FORMAT_BGR16: + case GST_VIDEO_FORMAT_RGB15: + case GST_VIDEO_FORMAT_BGR15: + break; + default: + *weave_info = *in_info; + return TRUE; + } + + gst_video_info_set_interlaced_format (weave_info, GST_VIDEO_FORMAT_RGBA, + in_info->interlace_mode, in_info->width, in_info->height); + GST_VIDEO_INFO_FIELD_ORDER (weave_info) = + GST_VIDEO_INFO_FIELD_ORDER 
(in_info); + + GstCaps *caps = gst_video_info_to_caps (weave_info); + + auto ctx = std::make_shared < InterlaceConvCtx > (device); + ctx->pre_pool = gst_d3d12_buffer_pool_new (device); + ctx->post_pool = gst_d3d12_buffer_pool_new (device); + + auto config = gst_buffer_pool_get_config (ctx->pre_pool); + gst_buffer_pool_config_set_params (config, caps, weave_info->size, 0, 0); + gst_caps_unref (caps); + if (!gst_buffer_pool_set_config (ctx->pre_pool, config)) { + GST_ERROR_OBJECT (self, "Couldn't set pool config"); + return FALSE; + } + + if (!gst_buffer_pool_set_active (ctx->pre_pool, TRUE)) { + GST_ERROR_OBJECT (self, "Pool active failed"); + return FALSE; + } + + config = gst_buffer_pool_get_config (ctx->post_pool); + gst_buffer_pool_config_set_params (config, in_caps, in_info->size, 0, 0); + + if (!gst_buffer_pool_set_config (ctx->post_pool, config)) { + GST_ERROR_OBJECT (self, "Couldn't set pool config"); + return FALSE; + } + + if (!gst_buffer_pool_set_active (ctx->post_pool, TRUE)) { + GST_ERROR_OBJECT (self, "Pool active failed"); + return FALSE; + } + + config = gst_structure_new ("convert-config", + GST_D3D12_CONVERTER_OPT_SAMPLER_FILTER, + GST_TYPE_D3D12_CONVERTER_SAMPLER_FILTER, + D3D12_FILTER_MIN_MAG_MIP_POINT, nullptr); + + ctx->pre_conv = gst_d3d12_converter_new (device, + nullptr, in_info, weave_info, nullptr, nullptr, + gst_structure_copy (config)); + if (!ctx->pre_conv) { + GST_ERROR_OBJECT (self, "Couldn't create pre converter"); + gst_structure_free (config); + return FALSE; + } + + ctx->post_conv = gst_d3d12_converter_new (device, + nullptr, weave_info, in_info, nullptr, nullptr, config); + if (!ctx->post_conv) { + GST_ERROR_OBJECT (self, "Couldn't create post converter"); + return FALSE; + } + + priv->conv_ctx = ctx; + + return TRUE; +} + +static gboolean +gst_d3d12_interlace_set_info (GstD3D12BaseFilter * filter, + GstD3D12Device * device, GstCaps * incaps, GstVideoInfo * in_info, + GstCaps * outcaps, GstVideoInfo * out_info) +{ + auto trans = 
GST_BASE_TRANSFORM (filter); + auto self = GST_D3D12_INTERLACE (filter); + auto priv = self->priv; + + gboolean post_msg = FALSE; + + { + std::lock_guard < std::mutex > lk (priv->lock); + + GstClockTime latency = 0; + if (priv->pattern == GST_D3D12_INTERLACE_PATTERN_1_1) { + auto fps_n = in_info->fps_n; + auto fps_d = in_info->fps_d; + if (fps_n <= 0 || fps_d <= 0) { + fps_n = 25; + fps_d = 1; + } + + /* We have one frame latency in 1:1 pattern */ + latency = gst_util_uint64_scale (GST_SECOND, fps_d, fps_n); + } + + if (latency != priv->latency) { + priv->latency = latency; + post_msg = TRUE; + } + + gst_clear_object (&priv->weave); + priv->conv_ctx = nullptr; + + priv->in_info = *in_info; + + if (!gst_d3d12_interlace_prepare_convert (self, device, incaps, + &priv->in_info, &priv->weave_info)) { + return FALSE; + } + + priv->use_compute = FALSE; + if (priv->engine == GST_D3D12_INTERLACE_ENGINE_COMPUTE) { + priv->use_compute = TRUE; + } else if (priv->engine == GST_D3D12_INTERLACE_ENGINE_AUTO && + !gst_d3d12_device_is_uma (device) && !priv->conv_ctx) { + /* Since weave shader is full compute shader, in case of dGPU, + * prefer compute queue so that task can be overlapped with other 3D tasks + */ + priv->use_compute = TRUE; + } + + GST_DEBUG_OBJECT (self, "Use compute engine: %d", priv->use_compute); + + priv->weave = gst_d3d12_weave_interlace_new (device, + &priv->weave_info, (GstD3D12WeaveInterlacPattern) priv->pattern, + !priv->tff, priv->use_compute); + + if (!priv->weave) { + GST_ERROR_OBJECT (self, "Couldn't create weave object"); + priv->conv_ctx = nullptr; + return FALSE; + } + + gst_d3d12_weave_interlace_set_direction (priv->weave, + trans->segment.rate >= 0); + } + + if (post_msg) { + gst_element_post_message (GST_ELEMENT_CAST (self), + gst_message_new_latency (GST_OBJECT_CAST (self))); + } + + return TRUE; +} + +static gboolean +gst_d3d12_interlace_sink_event (GstBaseTransform * trans, GstEvent * event) +{ + auto self = GST_D3D12_INTERLACE (trans); + auto 
priv = self->priv; + + switch (GST_EVENT_TYPE (event)) { + case GST_EVENT_EOS: + gst_d3d12_interlace_drain (self); + break; + case GST_EVENT_FLUSH_STOP: + if (priv->weave) + gst_d3d12_weave_interlace_flush (priv->weave); + break; + case GST_EVENT_SEGMENT: + if (priv->weave) { + const GstSegment *segment; + gst_event_parse_segment (event, &segment); + if (segment->format == GST_FORMAT_TIME) { + std::lock_guard < std::mutex > lk (priv->lock); + gst_d3d12_weave_interlace_set_direction (priv->weave, + segment->rate >= 0); + } + } + break; + default: + break; + } + + return GST_BASE_TRANSFORM_CLASS (parent_class)->sink_event (trans, event); +} + +static gboolean +gst_d3d12_interlace_query (GstBaseTransform * trans, + GstPadDirection direction, GstQuery * query) +{ + auto self = GST_D3D12_INTERLACE (trans); + auto priv = self->priv; + + switch (GST_QUERY_TYPE (query)) { + case GST_QUERY_LATENCY: + { + GstClockTime latency; + + { + std::lock_guard < std::mutex > lk (priv->lock); + latency = priv->latency; + } + + if (latency != 0 && GST_CLOCK_TIME_IS_VALID (latency) && + !gst_base_transform_is_passthrough (trans)) { + auto otherpad = (direction == GST_PAD_SRC) ? 
+ GST_BASE_TRANSFORM_SINK_PAD (trans) : + GST_BASE_TRANSFORM_SRC_PAD (trans); + + auto ret = gst_pad_peer_query (otherpad, query); + if (ret) { + gboolean live; + GstClockTime min_latency, max_latency; + gst_query_parse_latency (query, &live, &min_latency, &max_latency); + + GST_DEBUG_OBJECT (self, "peer latency: min %" + GST_TIME_FORMAT " max %" GST_TIME_FORMAT + ", our latency: %" GST_TIME_FORMAT, + GST_TIME_ARGS (min_latency), GST_TIME_ARGS (max_latency), + GST_TIME_ARGS (latency)); + + min_latency += latency; + if (GST_CLOCK_TIME_IS_VALID (max_latency)) + max_latency += latency; + + gst_query_set_latency (query, live, min_latency, max_latency); + } + + return ret; + } + break; + } + default: + break; + } + + return GST_BASE_TRANSFORM_CLASS (parent_class)->query (trans, direction, + query); +} + +static GstBuffer * +gst_d3d12_interlace_convert (GstD3D12Interlace * self, GstBuffer * buffer, + gboolean is_preproc) +{ + auto priv = self->priv; + if (!priv->conv_ctx) + return buffer; + + GstBuffer *outbuf = nullptr; + auto ctx = priv->conv_ctx; + GstBufferPool *pool; + GstD3D12Converter *conv; + if (is_preproc) { + pool = ctx->pre_pool; + conv = ctx->pre_conv; + } else { + pool = ctx->post_pool; + conv = ctx->post_conv; + } + + gst_buffer_pool_acquire_buffer (pool, &outbuf, nullptr); + if (!outbuf) { + GST_ERROR_OBJECT (self, "Couldn't acquire buffer"); + gst_buffer_unref (buffer); + return nullptr; + } + + gst_buffer_copy_into (outbuf, buffer, GST_BUFFER_COPY_METADATA, 0, -1); + + GstD3D12FenceData *fence_data; + gst_d3d12_fence_data_pool_acquire (priv->fence_pool, &fence_data); + gst_d3d12_fence_data_push (fence_data, FENCE_NOTIFY_MINI_OBJECT (buffer)); + + GstD3D12CmdAlloc *gst_ca; + if (!gst_d3d12_cmd_alloc_pool_acquire (ctx->ca_pool, &gst_ca)) { + GST_ERROR_OBJECT (self, "Couldn't acquire command allocator"); + gst_d3d12_fence_data_unref (fence_data); + gst_buffer_unref (outbuf); + return nullptr; + } + + auto ca = gst_d3d12_cmd_alloc_get_handle (gst_ca); + 
gst_d3d12_fence_data_push (fence_data, FENCE_NOTIFY_MINI_OBJECT (gst_ca)); + + auto hr = ca->Reset (); + if (!gst_d3d12_result (hr, ctx->device)) { + GST_ERROR_OBJECT (self, "Couldn't reset command allocator"); + gst_d3d12_fence_data_unref (fence_data); + gst_buffer_unref (outbuf); + return nullptr; + } + + auto device = gst_d3d12_device_get_device_handle (ctx->device); + if (!ctx->cl) { + hr = device->CreateCommandList (0, D3D12_COMMAND_LIST_TYPE_DIRECT, + ca, nullptr, IID_PPV_ARGS (&ctx->cl)); + } else { + hr = ctx->cl->Reset (ca, nullptr); + } + + if (!gst_d3d12_result (hr, ctx->device)) { + GST_ERROR_OBJECT (self, "Couldn't reset command list"); + gst_d3d12_fence_data_unref (fence_data); + gst_buffer_unref (outbuf); + return nullptr; + } + + if (!gst_d3d12_converter_convert_buffer (conv, buffer, outbuf, + fence_data, ctx->cl.Get (), is_preproc ? TRUE : priv->use_compute)) { + GST_ERROR_OBJECT (self, "Couldn't convert buffer"); + gst_d3d12_fence_data_unref (fence_data); + gst_buffer_unref (outbuf); + return nullptr; + } + + hr = ctx->cl->Close (); + if (!gst_d3d12_result (hr, ctx->device)) { + GST_ERROR_OBJECT (self, "Couldn't execute command list"); + gst_d3d12_fence_data_unref (fence_data); + gst_buffer_unref (outbuf); + return nullptr; + } + + ID3D12CommandList *cmd_list = { ctx->cl.Get () }; + hr = gst_d3d12_device_execute_command_lists (ctx->device, + D3D12_COMMAND_LIST_TYPE_DIRECT, 1, cmd_list, &ctx->fence_val); + + if (!gst_d3d12_result (hr, ctx->device)) { + GST_ERROR_OBJECT (self, "Couldn't execute command list"); + gst_d3d12_fence_data_unref (fence_data); + gst_buffer_unref (outbuf); + return nullptr; + } + + gst_d3d12_device_set_fence_notify (ctx->device, + D3D12_COMMAND_LIST_TYPE_DIRECT, ctx->fence_val, + FENCE_NOTIFY_MINI_OBJECT (fence_data)); + + auto fence = gst_d3d12_device_get_fence_handle (ctx->device, + D3D12_COMMAND_LIST_TYPE_DIRECT); + gst_d3d12_buffer_set_fence (outbuf, fence, ctx->fence_val, FALSE); + + return outbuf; +} + +static 
GstFlowReturn +gst_d3d12_interlace_submit_input_buffer (GstBaseTransform * trans, + gboolean is_discont, GstBuffer * input) +{ + auto self = GST_D3D12_INTERLACE (trans); + auto priv = self->priv; + + /* Let baseclass handle QoS first */ + auto ret = GST_BASE_TRANSFORM_CLASS (parent_class)->submit_input_buffer + (trans, is_discont, input); + if (ret != GST_FLOW_OK) + return ret; + + /* at this moment, baseclass must hold queued_buf */ + g_assert (trans->queued_buf != NULL); + auto buf = trans->queued_buf; + trans->queued_buf = nullptr; + + buf = gst_d3d12_interlace_convert (self, buf, TRUE); + if (!buf) + return GST_FLOW_ERROR; + + ret = gst_d3d12_weave_interlace_push (priv->weave, buf); + if (ret == GST_D3D12_WEAVE_INTERLACE_FLOW_NEED_DATA) + ret = GST_FLOW_OK; + + return ret; +} + +static GstFlowReturn +gst_d3d12_interlace_generate_output (GstBaseTransform * trans, + GstBuffer ** buffer) +{ + auto self = GST_D3D12_INTERLACE (trans); + auto priv = self->priv; + + if (gst_base_transform_is_passthrough (trans)) { + return GST_BASE_TRANSFORM_CLASS (parent_class)->generate_output (trans, + buffer); + } + + GstBuffer *outbuf = nullptr; + auto ret = gst_d3d12_weave_interlace_pop (priv->weave, &outbuf); + if (ret == GST_D3D12_WEAVE_INTERLACE_FLOW_NEED_DATA) + ret = GST_FLOW_OK; + + if (outbuf) { + outbuf = gst_d3d12_interlace_convert (self, outbuf, FALSE); + if (!outbuf) + ret = GST_FLOW_ERROR; + } + + *buffer = outbuf; + + return ret; +} + +static GstFlowReturn +gst_d3d12_interlace_transform (GstBaseTransform * trans, GstBuffer * inbuf, + GstBuffer * outbuf) +{ + /* generate_output() will do actual process */ + return GST_FLOW_OK; +}
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3d12/gstd3d12interlace.h
Added
@@ -0,0 +1,31 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/gst.h> +#include "gstd3d12basefilter.h" + +G_BEGIN_DECLS + +#define GST_TYPE_D3D12_INTERLACE (gst_d3d12_interlace_get_type()) +G_DECLARE_FINAL_TYPE (GstD3D12Interlace, gst_d3d12_interlace, + GST, D3D12_INTERLACE, GstD3D12BaseFilter) + +G_END_DECLS
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/d3d12/gstd3d12ipc.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3d12/gstd3d12ipc.cpp
Changed
@@ -336,27 +336,6 @@ memcpy (&buf0, &header, GST_D3D12_IPC_PKT_HEADER_SIZE); } -bool -gst_d3d12_ipc_clock_is_system (GstClock * clock) -{ - GstClockType clock_type = GST_CLOCK_TYPE_MONOTONIC; - GstClock *mclock; - - if (G_OBJECT_TYPE (clock) != GST_TYPE_SYSTEM_CLOCK) - return false; - - g_object_get (clock, "clock-type", &clock_type, nullptr); - if (clock_type != GST_CLOCK_TYPE_MONOTONIC) - return false; - - mclock = gst_clock_get_master (clock); - if (!mclock) - return true; - - gst_object_unref (mclock); - return false; -} - std::string gst_d3d12_ipc_wstring_to_string (const std::wstring & str) {
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/d3d12/gstd3d12ipc.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3d12/gstd3d12ipc.h
Changed
@@ -139,8 +139,6 @@ void gst_d3d12_ipc_pkt_build_fin (std::vector<guint8> & buf); -bool gst_d3d12_ipc_clock_is_system (GstClock * clock); - std::string gst_d3d12_ipc_wstring_to_string (const std::wstring & str); std::wstring gst_d3d12_ipc_string_to_wstring (const std::string & str);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/d3d12/gstd3d12ipcsink.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3d12/gstd3d12ipcsink.cpp
Changed
@@ -686,7 +686,7 @@ if (GST_CLOCK_TIME_IS_VALID (buffer_clock)) { GstClock *clock = gst_element_get_clock (GST_ELEMENT_CAST (sink)); - if (!gst_d3d12_ipc_clock_is_system (clock)) { + if (!gst_clock_is_system_monotonic (clock)) { GstClockTime now_gst = gst_clock_get_time (clock); GstClockTimeDiff converted = buffer_clock;
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/d3d12/gstd3d12ipcsrc.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3d12/gstd3d12ipcsrc.cpp
Changed
@@ -481,7 +481,7 @@ clock = gst_element_get_clock (GST_ELEMENT_CAST (self)); now_gst = gst_clock_get_time (clock); base_time = GST_ELEMENT_CAST (self)->base_time; - is_system_clock = gst_d3d12_ipc_clock_is_system (clock); + is_system_clock = gst_clock_is_system_monotonic (clock); gst_object_unref (clock); buffer = gst_sample_get_buffer (sample);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/d3d12/gstd3d12memorycopy.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3d12/gstd3d12memorycopy.cpp
Changed
@@ -21,17 +21,22 @@ #include <config.h> #endif +#include "gstd3d12plugin-config.h" + #include "gstd3d12memorycopy.h" #include <gst/d3d12/gstd3d12.h> #include <gst/d3d12/gstd3d12-private.h> +#ifdef HAVE_GST_D3D11 #include <gst/d3d11/gstd3d11.h> #include <gst/d3d11/gstd3d11-private.h> #include <gst/d3d11/gstd3d11device-private.h> +#endif #include <directx/d3dx12.h> #include <mutex> #include <condition_variable> #include <memory> #include <wrl.h> +#include <atomic> /* *INDENT-OFF* */ using namespace Microsoft::WRL; @@ -43,49 +48,85 @@ #define META_TAG_VIDEO meta_tag_video_quark static GQuark meta_tag_video_quark; +#ifdef HAVE_GST_D3D11 +#define SINK_STATIC_CAPS \ + GST_VIDEO_CAPS_MAKE_WITH_FEATURES \ + (GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY, GST_D3D12_ALL_FORMATS) "; " \ + GST_VIDEO_CAPS_MAKE_WITH_FEATURES \ + (GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY "," \ + GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, \ + GST_D3D12_ALL_FORMATS) "; " \ + GST_VIDEO_CAPS_MAKE_WITH_FEATURES (GST_CAPS_FEATURE_MEMORY_D3D11_MEMORY, \ + GST_D3D11_ALL_FORMATS) ";" \ + GST_VIDEO_CAPS_MAKE_WITH_FEATURES (GST_CAPS_FEATURE_MEMORY_D3D11_MEMORY \ + "," GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, \ + GST_D3D11_ALL_FORMATS) ";" \ + GST_VIDEO_CAPS_MAKE (GST_D3D12_ALL_FORMATS) "; " \ + GST_VIDEO_CAPS_MAKE_WITH_FEATURES \ + (GST_CAPS_FEATURE_MEMORY_SYSTEM_MEMORY "," \ + GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, \ + GST_D3D12_ALL_FORMATS) + +#define SRC_STATIC_CAPS \ + GST_VIDEO_CAPS_MAKE_WITH_FEATURES \ + (GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY, GST_D3D12_ALL_FORMATS) "; " \ + GST_VIDEO_CAPS_MAKE_WITH_FEATURES \ + (GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY "," \ + GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, \ + GST_D3D12_ALL_FORMATS) "; " \ + GST_VIDEO_CAPS_MAKE_WITH_FEATURES (GST_CAPS_FEATURE_MEMORY_D3D11_MEMORY, \ + GST_D3D11_ALL_FORMATS) ";" \ + GST_VIDEO_CAPS_MAKE_WITH_FEATURES (GST_CAPS_FEATURE_MEMORY_D3D11_MEMORY \ + "," 
GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, \ + GST_D3D11_ALL_FORMATS) ";" \ + GST_VIDEO_CAPS_MAKE (GST_D3D12_ALL_FORMATS) "; " \ + GST_VIDEO_CAPS_MAKE_WITH_FEATURES \ + (GST_CAPS_FEATURE_MEMORY_SYSTEM_MEMORY "," \ + GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, \ + GST_D3D12_ALL_FORMATS) +#else +#define SINK_STATIC_CAPS \ + GST_VIDEO_CAPS_MAKE_WITH_FEATURES \ + (GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY, GST_D3D12_ALL_FORMATS) "; " \ + GST_VIDEO_CAPS_MAKE_WITH_FEATURES \ + (GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY "," \ + GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, \ + GST_D3D12_ALL_FORMATS) "; " \ + GST_VIDEO_CAPS_MAKE (GST_D3D12_ALL_FORMATS) "; " \ + GST_VIDEO_CAPS_MAKE_WITH_FEATURES \ + (GST_CAPS_FEATURE_MEMORY_SYSTEM_MEMORY "," \ + GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, \ + GST_D3D12_ALL_FORMATS) + +#define SRC_STATIC_CAPS \ + GST_VIDEO_CAPS_MAKE_WITH_FEATURES \ + (GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY, GST_D3D12_ALL_FORMATS) "; " \ + GST_VIDEO_CAPS_MAKE_WITH_FEATURES \ + (GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY "," \ + GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, \ + GST_D3D12_ALL_FORMATS) "; " \ + GST_VIDEO_CAPS_MAKE (GST_D3D12_ALL_FORMATS) "; " \ + GST_VIDEO_CAPS_MAKE_WITH_FEATURES \ + (GST_CAPS_FEATURE_MEMORY_SYSTEM_MEMORY "," \ + GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, \ + GST_D3D12_ALL_FORMATS) +#endif + static GstStaticPadTemplate sink_template = - GST_STATIC_PAD_TEMPLATE ("sink", GST_PAD_SINK, GST_PAD_ALWAYS, - GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE_WITH_FEATURES - (GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY, GST_D3D12_ALL_FORMATS) "; " - GST_VIDEO_CAPS_MAKE_WITH_FEATURES - (GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY "," - GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, - GST_D3D12_ALL_FORMATS) "; " - GST_VIDEO_CAPS_MAKE_WITH_FEATURES (GST_CAPS_FEATURE_MEMORY_D3D11_MEMORY, - GST_D3D11_ALL_FORMATS) ";" - GST_VIDEO_CAPS_MAKE_WITH_FEATURES (GST_CAPS_FEATURE_MEMORY_D3D11_MEMORY - "," 
GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, - GST_D3D11_ALL_FORMATS) ";" - GST_VIDEO_CAPS_MAKE (GST_D3D12_ALL_FORMATS) "; " - GST_VIDEO_CAPS_MAKE_WITH_FEATURES - (GST_CAPS_FEATURE_MEMORY_SYSTEM_MEMORY "," - GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, - GST_D3D12_ALL_FORMATS))); +GST_STATIC_PAD_TEMPLATE ("sink", GST_PAD_SINK, GST_PAD_ALWAYS, + GST_STATIC_CAPS (SINK_STATIC_CAPS)); static GstStaticPadTemplate src_template = - GST_STATIC_PAD_TEMPLATE ("src", GST_PAD_SRC, GST_PAD_ALWAYS, - GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE_WITH_FEATURES - (GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY, GST_D3D12_ALL_FORMATS) "; " - GST_VIDEO_CAPS_MAKE_WITH_FEATURES - (GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY "," - GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, - GST_D3D12_ALL_FORMATS) "; " - GST_VIDEO_CAPS_MAKE_WITH_FEATURES (GST_CAPS_FEATURE_MEMORY_D3D11_MEMORY, - GST_D3D11_ALL_FORMATS) ";" - GST_VIDEO_CAPS_MAKE_WITH_FEATURES (GST_CAPS_FEATURE_MEMORY_D3D11_MEMORY - "," GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, - GST_D3D11_ALL_FORMATS) ";" - GST_VIDEO_CAPS_MAKE (GST_D3D12_ALL_FORMATS) "; " - GST_VIDEO_CAPS_MAKE_WITH_FEATURES - (GST_CAPS_FEATURE_MEMORY_SYSTEM_MEMORY "," - GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, - GST_D3D12_ALL_FORMATS))); +GST_STATIC_PAD_TEMPLATE ("src", GST_PAD_SRC, GST_PAD_ALWAYS, + GST_STATIC_CAPS (SRC_STATIC_CAPS)); enum class TransferType { SYSTEM, D3D11_TO_12, D3D12_TO_11, + D3D12_TO_SYSTEM, + SYSTEM_TO_D3D12, }; enum class MemoryType @@ -102,14 +143,52 @@ LUID, }; +enum GstD3D12MemcpyCmdQueueType +{ + GST_D3D12_MEMCPY_CMD_QUEUE_AUTO, + GST_D3D12_MEMCPY_CMD_QUEUE_3D, + GST_D3D12_MEMCPY_CMD_QUEUE_COMPUTE, + GST_D3D12_MEMCPY_CMD_QUEUE_COPY, +}; + +/** + * GstD3D12MemcpyCmdQueueType: + * + * Since: 1.28 + */ +#define GST_TYPE_D3D12_MEMCPY_CMD_QUEUE_TYPE (gst_d3d12_memcpy_cmd_queue_type_get_type()) +static GType +gst_d3d12_memcpy_cmd_queue_type_get_type (void) +{ + static GType type = 0; + static const GEnumValue queue_type = { + 
{GST_D3D12_MEMCPY_CMD_QUEUE_AUTO, "Auto", "auto"}, + {GST_D3D12_MEMCPY_CMD_QUEUE_3D, "3D", "3d"}, + {GST_D3D12_MEMCPY_CMD_QUEUE_COMPUTE, "Compute", "compute"}, + {GST_D3D12_MEMCPY_CMD_QUEUE_COPY, "Copy", "copy"}, + {0, nullptr, nullptr}, + }; + + GST_D3D12_CALL_ONCE_BEGIN { + type = g_enum_register_static ("GstD3D12MemcpyCmdQueueType", queue_type); + } GST_D3D12_CALL_ONCE_END; + + return type; +} + enum { PROP_0, PROP_ADAPTER, + PROP_QUEUE_TYPE, + PROP_USE_STAGING_MEMORY, }; #define DEFAULT_ADAPTER -1 +#define DEFAULT_QUEUE_TYPE GST_D3D12_MEMCPY_CMD_QUEUE_AUTO +#define DEFAULT_USE_STAGING_MEMORY TRUE +#ifdef HAVE_GST_D3D11 #define ASYNC_FENCE_WAIT_DEPTH 16 struct FenceWaitData @@ -214,6 +293,7 @@ return nullptr; } +#endif struct _GstD3D12MemoryCopyPrivate { @@ -228,16 +308,22 @@ gst_buffer_pool_set_active (fallback_pool12, FALSE); gst_clear_object (&fallback_pool12); - fence_waiter = nullptr; + if (staging_pool) + gst_buffer_pool_set_active (staging_pool, FALSE); + gst_clear_object (&staging_pool); fence12 = nullptr; - fence11 = nullptr; fence12_external = nullptr; - fence11_external = nullptr; fence12_on_11 = nullptr; + +#ifdef HAVE_GST_D3D11 + fence_waiter = nullptr; + fence11 = nullptr; + fence11_external = nullptr; fence11_on_11 = nullptr; context11_4 = nullptr; device11_5 = nullptr; +#endif in_type = MemoryType::SYSTEM; out_type = MemoryType::SYSTEM; @@ -248,28 +334,33 @@ if (full) { luid = 0; gst_clear_object (&device12); +#ifdef HAVE_GST_D3D11 gst_clear_object (&device11); +#endif gst_clear_caps (&incaps); gst_clear_caps (&outcaps); } } GstD3D12Device *device12 = nullptr; - GstD3D11Device *device11 = nullptr; ComPtr < ID3D12Fence > fence12; - ComPtr < ID3D11Fence > fence11; - ComPtr < ID3D12Fence > fence12_external; - ComPtr < ID3D11Fence > fence11_external; - ComPtr < ID3D12Fence > fence12_on_11; - ComPtr < ID3D11Fence > fence11_on_11; +#ifdef HAVE_GST_D3D11 + std::shared_ptr < FenceAsyncWaiter > fence_waiter; + + GstD3D11Device *device11 = nullptr; + 
ComPtr < ID3D11Fence > fence11; + ComPtr < ID3D11Fence > fence11_external; + ComPtr < ID3D11Fence > fence11_on_11; ComPtr < ID3D11Device5 > device11_5; ComPtr < ID3D11DeviceContext4 > context11_4; +#endif GstBufferPool *fallback_pool12 = nullptr; + GstBufferPool *staging_pool = nullptr; GstCaps *incaps = nullptr; GstCaps *outcaps = nullptr; @@ -284,9 +375,10 @@ MemoryType out_type = MemoryType::SYSTEM; UINT64 fence_val = 0; - std::shared_ptr < FenceAsyncWaiter > fence_waiter; - gint adapter = DEFAULT_ADAPTER; + GstD3D12MemcpyCmdQueueType queue_type = DEFAULT_QUEUE_TYPE; + D3D12_COMMAND_LIST_TYPE selected_queue_type = D3D12_COMMAND_LIST_TYPE_COPY; + std::atomic < gboolean > use_staging = { DEFAULT_USE_STAGING_MEMORY }; std::recursive_mutex lock; }; @@ -346,6 +438,34 @@ (GParamFlags) (G_PARAM_READWRITE | GST_PARAM_MUTABLE_READY | G_PARAM_STATIC_STRINGS))); + /** + * GstD3D12MemoryCopy:queue-type: + * + * Command queue type to use for copy operation + * + * Since: 1.28 + */ + g_object_class_install_property (object_class, PROP_QUEUE_TYPE, + g_param_spec_enum ("queue-type", "Queue Type", + "Command queue type to use for copy operation", + GST_TYPE_D3D12_MEMCPY_CMD_QUEUE_TYPE, DEFAULT_QUEUE_TYPE, + (GParamFlags) (G_PARAM_READWRITE | GST_PARAM_MUTABLE_READY | + G_PARAM_STATIC_STRINGS))); + + /** + * GstD3D12MemoryCopy:use-staging-memory: + * + * Use GPU-visible staging memory for upload/download operations + * instead of system memory + * + * Since: 1.28 + */ + g_object_class_install_property (object_class, PROP_USE_STAGING_MEMORY, + g_param_spec_boolean ("use-staging-memory", "Use Staging Memory", + "If FALSE, system memory pool will be used instead of GPU-visible " + "staging memory", DEFAULT_USE_STAGING_MEMORY, + (GParamFlags) (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + element_class->set_context = GST_DEBUG_FUNCPTR (gst_d3d12_memory_copy_set_context); @@ -374,6 +494,8 @@ gst_type_mark_as_plugin_api (GST_TYPE_D3D12_MEMORY_COPY, (GstPluginAPIFlags) 0); + 
gst_type_mark_as_plugin_api (GST_TYPE_D3D12_MEMCPY_CMD_QUEUE_TYPE, + (GstPluginAPIFlags) 0); GST_DEBUG_CATEGORY_INIT (gst_d3d12_memory_copy_debug, "d3d12memorycopy", 0, "d3d12memorycopy"); } @@ -406,6 +528,12 @@ case PROP_ADAPTER: priv->adapter = g_value_get_int (value); break; + case PROP_QUEUE_TYPE: + priv->queue_type = (GstD3D12MemcpyCmdQueueType) g_value_get_enum (value); + break; + case PROP_USE_STAGING_MEMORY: + priv->use_staging = g_value_get_boolean (value); + break; default: G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); break; @@ -424,6 +552,12 @@ case PROP_ADAPTER: g_value_set_int (value, priv->adapter); break; + case PROP_QUEUE_TYPE: + g_value_set_enum (value, priv->queue_type); + break; + case PROP_USE_STAGING_MEMORY: + g_value_set_boolean (value, priv->use_staging); + break; default: G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); break; @@ -441,19 +575,25 @@ switch (priv->search_type) { case DeviceSearchType::ANY: gst_d3d12_handle_set_context (element, context, -1, &priv->device12); +#ifdef HAVE_GST_D3D11 gst_d3d11_handle_set_context (element, context, -1, &priv->device11); +#endif break; case DeviceSearchType::PROPERTY: gst_d3d12_handle_set_context (element, context, priv->adapter, &priv->device12); +#ifdef HAVE_GST_D3D11 gst_d3d11_handle_set_context (element, context, priv->adapter, &priv->device11); +#endif break; case DeviceSearchType::LUID: gst_d3d12_handle_set_context_for_adapter_luid (element, context, priv->luid, &priv->device12); +#ifdef HAVE_GST_D3D11 gst_d3d11_handle_set_context_for_adapter_luid (element, context, priv->luid, &priv->device11); +#endif break; } } @@ -502,21 +642,22 @@ if (gst_d3d12_handle_context_query (elem, query, priv->device12)) return TRUE; +#ifdef HAVE_GST_D3D11 if (gst_d3d11_handle_context_query (elem, query, priv->device11)) return TRUE; +#endif } return GST_BASE_TRANSFORM_CLASS (parent_class)->query (trans, direction, query); } +#ifdef HAVE_GST_D3D11 static gboolean 
-gst_d3d12_memory_copy_setup_resource (GstD3D12MemoryCopy * self) +gst_d3d12_memory_copy_setup_interop_resource (GstD3D12MemoryCopy * self) { auto priv = self->priv; - priv->transfer_type = TransferType::SYSTEM; - if (priv->in_type == priv->out_type) return TRUE; @@ -706,6 +847,7 @@ return TRUE; } +#endif static gboolean gst_d3d12_memory_copy_set_caps (GstBaseTransform * trans, GstCaps * incaps, @@ -729,25 +871,88 @@ priv->Reset (false); + std::lock_guard < std::recursive_mutex > lk (priv->lock); + priv->transfer_type = TransferType::SYSTEM; + + switch (priv->queue_type) { + case GST_D3D12_MEMCPY_CMD_QUEUE_3D: + priv->selected_queue_type = D3D12_COMMAND_LIST_TYPE_DIRECT; + break; + case GST_D3D12_MEMCPY_CMD_QUEUE_COMPUTE: + priv->selected_queue_type = D3D12_COMMAND_LIST_TYPE_COMPUTE; + break; + case GST_D3D12_MEMCPY_CMD_QUEUE_COPY: + priv->selected_queue_type = D3D12_COMMAND_LIST_TYPE_COPY; + break; + default: + if (!gst_d3d12_device_is_uma (priv->device12)) { + /* dGPU, prefer COPY queue */ + priv->selected_queue_type = D3D12_COMMAND_LIST_TYPE_COPY; + } else { + /* iGPU may have weak COPY engine. 
Prefer direct queue + * in case of upload, otherwise use COPY queue so that + * copy task can overlap with 3D task */ + if (priv->is_uploader) + priv->selected_queue_type = D3D12_COMMAND_LIST_TYPE_DIRECT; + else + priv->selected_queue_type = D3D12_COMMAND_LIST_TYPE_COPY; + } + break; + } + + GST_DEBUG_OBJECT (self, + "Selected command queue type %d", priv->selected_queue_type); + auto features = gst_caps_get_features (incaps, 0); if (features && gst_caps_features_contains (features, GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY)) { priv->in_type = MemoryType::D3D12; - } else if (features && gst_caps_features_contains (features, + } +#ifdef HAVE_GST_D3D11 + else if (features && gst_caps_features_contains (features, GST_CAPS_FEATURE_MEMORY_D3D11_MEMORY)) { priv->in_type = MemoryType::D3D11; } +#endif features = gst_caps_get_features (outcaps, 0); if (features && gst_caps_features_contains (features, GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY)) { priv->out_type = MemoryType::D3D12; - } else if (features && gst_caps_features_contains (features, + } +#ifdef HAVE_GST_D3D11 + else if (features && gst_caps_features_contains (features, GST_CAPS_FEATURE_MEMORY_D3D11_MEMORY)) { priv->out_type = MemoryType::D3D11; } - return gst_d3d12_memory_copy_setup_resource (self); + if (priv->in_type == MemoryType::D3D11 || priv->out_type == MemoryType::D3D11) + return gst_d3d12_memory_copy_setup_interop_resource (self); +#endif + + if (priv->in_type == MemoryType::D3D12 && + priv->out_type == MemoryType::SYSTEM) { + priv->transfer_type = TransferType::D3D12_TO_SYSTEM; + } else if (priv->in_type == MemoryType::SYSTEM && + priv->out_type == MemoryType::D3D12) { + priv->transfer_type = TransferType::SYSTEM_TO_D3D12; + } + + if (priv->transfer_type == TransferType::SYSTEM_TO_D3D12 || + priv->transfer_type == TransferType::D3D12_TO_SYSTEM) { + priv->staging_pool = gst_d3d12_staging_buffer_pool_new (priv->device12); + auto config = gst_buffer_pool_get_config (priv->staging_pool); + 
gst_buffer_pool_config_set_params (config, incaps, priv->info.size, 0, 0); + if (!gst_buffer_pool_set_config (priv->staging_pool, config)) { + GST_ERROR_OBJECT (self, "Bufferpool config failed"); + gst_clear_object (&priv->staging_pool); + } else if (!gst_buffer_pool_set_active (priv->staging_pool, TRUE)) { + GST_ERROR_OBJECT (self, "Bufferpool set active failed"); + gst_clear_object (&priv->staging_pool); + } + } + + return TRUE; } static GstCaps * @@ -783,22 +988,29 @@ _set_caps_features (caps, GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY); tmp = gst_caps_merge (caps_12, gst_caps_ref (caps)); } else { - auto caps_11 = - _set_caps_features (caps, GST_CAPS_FEATURE_MEMORY_D3D11_MEMORY); auto caps_sys = _set_caps_features (caps, GST_CAPS_FEATURE_MEMORY_SYSTEM_MEMORY); +#ifdef HAVE_GST_D3D11 + auto caps_11 = + _set_caps_features (caps, GST_CAPS_FEATURE_MEMORY_D3D11_MEMORY); tmp = gst_caps_merge (caps_11, caps_sys); tmp = gst_caps_merge (gst_caps_ref (caps), tmp); +#else + tmp = gst_caps_merge (gst_caps_ref (caps), caps_sys); +#endif } } else { if (priv->is_uploader) { - auto caps_11 = - _set_caps_features (caps, GST_CAPS_FEATURE_MEMORY_D3D11_MEMORY); auto caps_sys = _set_caps_features (caps, GST_CAPS_FEATURE_MEMORY_SYSTEM_MEMORY); - +#ifdef HAVE_GST_D3D11 + auto caps_11 = + _set_caps_features (caps, GST_CAPS_FEATURE_MEMORY_D3D11_MEMORY); tmp = gst_caps_merge (caps_11, caps_sys); tmp = gst_caps_merge (tmp, gst_caps_ref (caps)); +#else + tmp = gst_caps_merge (caps_sys, gst_caps_ref (caps)); +#endif } else { auto caps_12 = _set_caps_features (caps, GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY); @@ -829,7 +1041,9 @@ GstCaps *caps; guint size; bool is_d3d12 = false; +#ifdef HAVE_GST_D3D11 bool is_d3d11 = false; +#endif if (!GST_BASE_TRANSFORM_CLASS (parent_class)->propose_allocation (trans, decide_query, query)) @@ -858,7 +1072,9 @@ GST_DEBUG_OBJECT (self, "upstream support d3d12 memory"); pool = gst_d3d12_buffer_pool_new (priv->device12); is_d3d12 = true; - } else if (features && 
gst_caps_features_contains (features, + } +#ifdef HAVE_GST_D3D11 + else if (features && gst_caps_features_contains (features, GST_CAPS_FEATURE_MEMORY_D3D11_MEMORY)) { if (!priv->device11) { GST_ERROR_OBJECT (self, "D3D11 device is not configured"); @@ -866,6 +1082,12 @@ } pool = gst_d3d11_buffer_pool_new (priv->device11); is_d3d11 = true; + } +#endif + else if (priv->transfer_type == TransferType::SYSTEM_TO_D3D12 && + priv->use_staging) { + pool = gst_d3d12_staging_buffer_pool_new (priv->device12); + GST_DEBUG_OBJECT (self, "Proposing staging pool"); } else { pool = gst_video_buffer_pool_new (); } @@ -894,16 +1116,14 @@ resource_flags |= D3D12_RESOURCE_FLAG_ALLOW_UNORDERED_ACCESS; } - D3D12_HEAP_FLAGS heap_flags = D3D12_HEAP_FLAG_NONE; - if (priv->transfer_type == TransferType::D3D12_TO_11) - heap_flags = D3D12_HEAP_FLAG_SHARED; - auto params = gst_d3d12_allocation_params_new (priv->device12, &info, GST_D3D12_ALLOCATION_FLAG_DEFAULT, resource_flags, D3D12_HEAP_FLAG_SHARED); gst_buffer_pool_config_set_d3d12_allocation_params (config, params); gst_d3d12_allocation_params_free (params); - } else if (is_d3d11) { + } +#ifdef HAVE_GST_D3D11 + else if (is_d3d11) { GstD3D11Format format11; gst_d3d11_device_get_format (priv->device11, GST_VIDEO_INFO_FORMAT (&info), &format11); @@ -932,7 +1152,9 @@ } gst_buffer_pool_config_set_d3d11_allocation_params (config, params); gst_d3d11_allocation_params_free (params); - } else { + } +#endif + else if (GST_IS_VIDEO_BUFFER_POOL (pool)) { gst_buffer_pool_config_add_option (config, GST_BUFFER_POOL_OPTION_VIDEO_ALIGNMENT); } @@ -974,7 +1196,9 @@ GstCaps *caps = nullptr; bool update_pool = false; bool is_d3d12 = false; +#ifdef HAVE_GST_D3D11 bool is_d3d11 = false; +#endif gst_query_parse_allocation (query, &caps, nullptr); @@ -1014,7 +1238,9 @@ pool = gst_d3d12_buffer_pool_new (priv->device12); is_d3d12 = true; - } else if (features && gst_caps_features_contains (features, + } +#ifdef HAVE_GST_D3D11 + else if (features && 
gst_caps_features_contains (features, GST_CAPS_FEATURE_MEMORY_D3D11_MEMORY)) { if (!priv->device11) { GST_ERROR_OBJECT (self, "D3D11 device is not configured"); @@ -1035,13 +1261,17 @@ pool = gst_d3d11_buffer_pool_new (priv->device11); is_d3d11 = true; - } else if (!pool) { - pool = gst_video_buffer_pool_new (); + } +#endif + else if (priv->transfer_type == TransferType::D3D12_TO_SYSTEM && + priv->use_staging) { + gst_clear_object (&pool); + pool = gst_d3d12_staging_buffer_pool_new (priv->device12); + GST_DEBUG_OBJECT (self, "Creating staging buffer pool"); } - if (!pool) { + if (!pool) pool = gst_video_buffer_pool_new (); - } auto config = gst_buffer_pool_get_config (pool); gst_buffer_pool_config_add_option (config, GST_BUFFER_POOL_OPTION_VIDEO_META); @@ -1074,7 +1304,9 @@ gst_buffer_pool_config_set_d3d12_allocation_params (config, params); gst_d3d12_allocation_params_free (params); - } else if (is_d3d11) { + } +#ifdef HAVE_GST_D3D11 + else if (is_d3d11) { GstD3D11Format format11; gst_d3d11_device_get_format (priv->device11, GST_VIDEO_INFO_FORMAT (&info), &format11); @@ -1104,6 +1336,7 @@ gst_buffer_pool_config_set_d3d11_allocation_params (config, params); gst_d3d11_allocation_params_free (params); } +#endif gst_buffer_pool_set_config (pool, config); @@ -1154,6 +1387,7 @@ return; auto mem = gst_buffer_peek_memory (buffer, 0); +#ifdef HAVE_GST_D3D11 if (priv->in_type == MemoryType::D3D11) { if (!gst_is_d3d11_memory (mem)) { GST_WARNING_OBJECT (self, "Input memory is not d3d11"); @@ -1192,7 +1426,9 @@ need_reconfigure = true; } } - } else if (priv->in_type == MemoryType::D3D12) { + } else +#endif + if (priv->in_type == MemoryType::D3D12) { if (!gst_is_d3d12_memory (mem)) { GST_WARNING_OBJECT (self, "Input memory is not d3d12"); priv->transfer_type = TransferType::SYSTEM; @@ -1208,6 +1444,8 @@ g_object_get (priv->device12, "adapter-luid", &priv->luid, nullptr); + need_reconfigure = true; +#ifdef HAVE_GST_D3D11 auto prev_device11 = priv->device11; priv->device11 = 
nullptr; priv->search_type = DeviceSearchType::LUID; @@ -1222,7 +1460,7 @@ priv->search_type = DeviceSearchType::PROPERTY; gst_clear_object (&prev_device11); - need_reconfigure = true; +#endif } } @@ -1233,36 +1471,7 @@ } } -static GstFlowReturn -gst_d3d12_memory_copy_system_copy (GstD3D12MemoryCopy * self, - GstBuffer * inbuf, GstBuffer * outbuf) -{ - auto priv = self->priv; - GstVideoFrame in_frame, out_frame; - GstFlowReturn ret = GST_FLOW_OK; - - if (!gst_video_frame_map (&in_frame, &priv->info, inbuf, GST_MAP_READ)) { - GST_ERROR_OBJECT (self, "Couldn't map input frame"); - return GST_FLOW_ERROR; - } - - if (!gst_video_frame_map (&out_frame, &priv->info, outbuf, GST_MAP_WRITE)) { - GST_ERROR_OBJECT (self, "Couldn't map output frame"); - gst_video_frame_unmap (&in_frame); - return GST_FLOW_ERROR; - } - - if (!gst_video_frame_copy (&out_frame, &in_frame)) { - GST_ERROR_OBJECT (self, "Copy failed"); - ret = GST_FLOW_ERROR; - } - - gst_video_frame_unmap (&out_frame); - gst_video_frame_unmap (&in_frame); - - return ret; -} - +#ifdef HAVE_GST_D3D11 static gboolean gst_d3d12_memory_copy_11_to_12 (GstD3D12MemoryCopy * self, GstBuffer * inbuf, GstBuffer * outbuf) @@ -1477,10 +1686,12 @@ return TRUE; } +#endif static GstBuffer * gst_d3d12_memory_copy_upload (GstD3D12MemoryCopy * self, GstBuffer * buffer) { +#ifdef HAVE_GST_D3D11 auto priv = self->priv; if (priv->transfer_type == TransferType::D3D12_TO_11) { @@ -1516,6 +1727,7 @@ return upload_buf; } } +#endif return gst_buffer_ref (buffer); } @@ -1527,19 +1739,20 @@ auto self = GST_D3D12_MEMORY_COPY (trans); auto priv = self->priv; - if (priv->transfer_type != TransferType::SYSTEM) { + if (priv->transfer_type == TransferType::D3D11_TO_12 || + priv->transfer_type == TransferType::D3D12_TO_11) { if (gst_buffer_n_memory (inbuf) != gst_buffer_n_memory (outbuf)) { GST_WARNING_OBJECT (self, "Different memory layout"); priv->transfer_type = TransferType::SYSTEM; } } - GstBuffer *upload_buf = gst_d3d12_memory_copy_upload (self, 
inbuf); + auto upload_buf = gst_d3d12_memory_copy_upload (self, inbuf); if (!upload_buf) { GST_ERROR_OBJECT (self, "Null upload buffer"); return GST_FLOW_ERROR; } - +#ifdef HAVE_GST_D3D11 if (priv->transfer_type == TransferType::D3D11_TO_12) { if (gst_d3d12_memory_copy_11_to_12 (self, upload_buf, outbuf)) { GST_LOG_OBJECT (self, "Copy 11-to-12 done"); @@ -1557,11 +1770,69 @@ priv->transfer_type = TransferType::SYSTEM; } +#endif + + if (priv->transfer_type == TransferType::SYSTEM_TO_D3D12 && + priv->staging_pool) { + auto mem = gst_buffer_peek_memory (upload_buf, 0); + if (!gst_is_d3d12_staging_memory (mem) && !gst_is_d3d12_memory (mem)) { + GstBuffer *staging = nullptr; + gst_buffer_pool_acquire_buffer (priv->staging_pool, &staging, nullptr); + if (staging) { + GstVideoFrame in_frame, out_frame; + gboolean copy_ret = FALSE; + if (gst_video_frame_map (&in_frame, &priv->info, upload_buf, + GST_MAP_READ)) { + if (gst_video_frame_map (&out_frame, &priv->info, staging, + GST_MAP_WRITE)) { + copy_ret = gst_video_frame_copy (&out_frame, &in_frame); + gst_video_frame_unmap (&out_frame); + } + + gst_video_frame_unmap (&in_frame); + } + + if (copy_ret) { + gst_buffer_unref (upload_buf); + upload_buf = staging; + GST_TRACE_OBJECT (self, + "Intermediate upload using staging buffer done"); + } else { + gst_buffer_unref (staging); + } + } + } + } else if (priv->transfer_type == TransferType::D3D12_TO_SYSTEM && + priv->staging_pool) { + auto in_mem = gst_buffer_peek_memory (upload_buf, 0); + auto out_mem = gst_buffer_peek_memory (outbuf, 0); + + if (gst_is_d3d12_memory (in_mem) && !gst_is_d3d12_memory (out_mem) && + !gst_is_d3d12_staging_memory (out_mem)) { + GstBuffer *staging = nullptr; + gst_buffer_pool_acquire_buffer (priv->staging_pool, &staging, nullptr); + if (staging) { + if (gst_d3d12_buffer_copy_into_full (staging, upload_buf, + &priv->info, priv->selected_queue_type)) { + gst_buffer_unref (upload_buf); + upload_buf = staging; + GST_TRACE_OBJECT (self, + "Intermediate 
download using staging buffer done"); + } else { + gst_buffer_unref (staging); + } + } + } + } - auto ret = gst_d3d12_memory_copy_system_copy (self, upload_buf, outbuf); + auto ret = gst_d3d12_buffer_copy_into_full (outbuf, upload_buf, &priv->info, + priv->selected_queue_type); gst_buffer_unref (upload_buf); - return ret; + if (ret) + return GST_FLOW_OK; + + return GST_FLOW_ERROR; } struct _GstD3D12Upload
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/d3d12/gstd3d12mipmapping.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3d12/gstd3d12mipmapping.cpp
Changed
@@ -153,17 +153,17 @@ trans, GstPadDirection direction, GstCaps * caps, GstCaps * filter); static GstCaps *gst_d3d12_mip_mapping_fixate_caps (GstBaseTransform * base, GstPadDirection direction, GstCaps * caps, GstCaps * othercaps); -static gboolean gst_d3d12_mip_mapping_propose_allocation (GstBaseTransform * - trans, GstQuery * decide_query, GstQuery * query); -static gboolean gst_d3d12_mip_mapping_decide_allocation (GstBaseTransform * - trans, GstQuery * query); static gboolean gst_d3d12_mip_mapping_transform_meta (GstBaseTransform * trans, GstBuffer * outbuf, GstMeta * meta, GstBuffer * inbuf); static GstFlowReturn gst_d3d12_mip_mapping_transform (GstBaseTransform * trans, GstBuffer * inbuf, GstBuffer * outbuf); static gboolean gst_d3d12_mip_mapping_set_info (GstD3D12BaseFilter * filter, - GstCaps * incaps, GstVideoInfo * in_info, GstCaps * outcaps, - GstVideoInfo * out_info); + GstD3D12Device * device, GstCaps * incaps, GstVideoInfo * in_info, + GstCaps * outcaps, GstVideoInfo * out_info); +static gboolean gst_d3d12_mip_mapping_propose_allocation (GstD3D12BaseFilter * + filter, GstD3D12Device * device, GstQuery * decide_query, GstQuery * query); +static gboolean gst_d3d12_mip_mapping_decide_allocation (GstD3D12BaseFilter * + filter, GstD3D12Device * device, GstQuery * query); #define gst_d3d12_mip_mapping_parent_class parent_class G_DEFINE_TYPE (GstD3D12MipMapping, gst_d3d12_mip_mapping, @@ -210,15 +210,15 @@ GST_DEBUG_FUNCPTR (gst_d3d12_mip_mapping_transform_caps); trans_class->fixate_caps = GST_DEBUG_FUNCPTR (gst_d3d12_mip_mapping_fixate_caps); - trans_class->propose_allocation = - GST_DEBUG_FUNCPTR (gst_d3d12_mip_mapping_propose_allocation); - trans_class->decide_allocation = - GST_DEBUG_FUNCPTR (gst_d3d12_mip_mapping_decide_allocation); trans_class->transform_meta = GST_DEBUG_FUNCPTR (gst_d3d12_mip_mapping_transform_meta); trans_class->transform = GST_DEBUG_FUNCPTR (gst_d3d12_mip_mapping_transform); filter_class->set_info = GST_DEBUG_FUNCPTR 
(gst_d3d12_mip_mapping_set_info); + filter_class->propose_allocation = + GST_DEBUG_FUNCPTR (gst_d3d12_mip_mapping_propose_allocation); + filter_class->decide_allocation = + GST_DEBUG_FUNCPTR (gst_d3d12_mip_mapping_decide_allocation); gst_type_mark_as_plugin_api (GST_TYPE_D3D12_SAMPLING_METHOD, (GstPluginAPIFlags) 0); @@ -692,101 +692,26 @@ } static gboolean -gst_d3d12_mip_mapping_propose_allocation (GstBaseTransform * trans, - GstQuery * decide_query, GstQuery * query) +gst_d3d12_mip_mapping_propose_allocation (GstD3D12BaseFilter * filter, + GstD3D12Device * device, GstQuery * decide_query, GstQuery * query) { - auto filter = GST_D3D12_BASE_FILTER (trans); - GstVideoInfo info; - GstBufferPool *pool = nullptr; - GstCaps *caps; - guint n_pools, i; - guint size; - - if (!GST_BASE_TRANSFORM_CLASS (parent_class)->propose_allocation (trans, - decide_query, query)) { - return FALSE; - } - - gst_query_parse_allocation (query, &caps, nullptr); - - if (!caps) - return FALSE; - - if (!gst_video_info_from_caps (&info, caps)) { - GST_ERROR_OBJECT (filter, "Invalid caps %" GST_PTR_FORMAT, caps); + if (!GST_D3D12_BASE_FILTER_CLASS (parent_class)->propose_allocation (filter, + device, decide_query, query)) { return FALSE; } - n_pools = gst_query_get_n_allocation_pools (query); - for (i = 0; i < n_pools; i++) { - gst_query_parse_nth_allocation_pool (query, i, &pool, nullptr, nullptr, - nullptr); - if (pool) { - if (!GST_IS_D3D12_BUFFER_POOL (pool)) { - gst_clear_object (&pool); - } else { - auto dpool = GST_D3D12_BUFFER_POOL (pool); - if (!gst_d3d12_device_is_equal (dpool->device, filter->device)) - gst_clear_object (&pool); - } - } - } - - if (!pool) - pool = gst_d3d12_buffer_pool_new (filter->device); - - auto config = gst_buffer_pool_get_config (pool); - gst_buffer_pool_config_add_option (config, GST_BUFFER_POOL_OPTION_VIDEO_META); - - auto d3d12_params = - gst_buffer_pool_config_get_d3d12_allocation_params (config); - if (!d3d12_params) { - d3d12_params = 
gst_d3d12_allocation_params_new (filter->device, &info, - GST_D3D12_ALLOCATION_FLAG_DEFAULT, - D3D12_RESOURCE_FLAG_ALLOW_SIMULTANEOUS_ACCESS, D3D12_HEAP_FLAG_NONE); - } else { - gst_d3d12_allocation_params_set_resource_flags (d3d12_params, - D3D12_RESOURCE_FLAG_ALLOW_SIMULTANEOUS_ACCESS); - gst_d3d12_allocation_params_unset_resource_flags (d3d12_params, - D3D12_RESOURCE_FLAG_DENY_SHADER_RESOURCE); - } - - gst_buffer_pool_config_set_d3d12_allocation_params (config, d3d12_params); - gst_d3d12_allocation_params_free (d3d12_params); - - /* size will be updated by d3d12 buffer pool */ - gst_buffer_pool_config_set_params (config, caps, 0, 0, 0); - - if (!gst_buffer_pool_set_config (pool, config)) { - GST_ERROR_OBJECT (filter, "failed to set config"); - gst_object_unref (pool); - return FALSE; - } - - gst_query_add_allocation_meta (query, GST_VIDEO_META_API_TYPE, nullptr); gst_query_add_allocation_meta (query, GST_VIDEO_OVERLAY_COMPOSITION_META_API_TYPE, nullptr); gst_query_add_allocation_meta (query, GST_VIDEO_CROP_META_API_TYPE, nullptr); - /* d3d12 buffer pool will update buffer size based on allocated texture, - * get size from config again */ - config = gst_buffer_pool_get_config (pool); - gst_buffer_pool_config_get_params (config, nullptr, &size, nullptr, nullptr); - gst_structure_free (config); - - gst_query_add_allocation_pool (query, pool, size, 0, 0); - - gst_object_unref (pool); - return TRUE; } static gboolean -gst_d3d12_mip_mapping_decide_allocation (GstBaseTransform * trans, - GstQuery * query) +gst_d3d12_mip_mapping_decide_allocation (GstD3D12BaseFilter * filter, + GstD3D12Device * device, GstQuery * query) { - auto filter = GST_D3D12_BASE_FILTER (trans); - auto self = GST_D3D12_MIP_MAPPING (trans); + auto self = GST_D3D12_MIP_MAPPING (filter); auto priv = self->priv; GstCaps *outcaps = nullptr; GstBufferPool *pool = nullptr; @@ -813,7 +738,7 @@ gst_clear_object (&pool); } else { auto dpool = GST_D3D12_BUFFER_POOL (pool); - if (!gst_d3d12_device_is_equal 
(dpool->device, filter->device)) + if (!gst_d3d12_device_is_equal (dpool->device, device)) gst_clear_object (&pool); } } @@ -822,7 +747,7 @@ } if (!pool) - pool = gst_d3d12_buffer_pool_new (filter->device); + pool = gst_d3d12_buffer_pool_new (device); config = gst_buffer_pool_get_config (pool); gst_buffer_pool_config_add_option (config, GST_BUFFER_POOL_OPTION_VIDEO_META); @@ -832,7 +757,7 @@ D3D12_RESOURCE_FLAG_ALLOW_UNORDERED_ACCESS | D3D12_RESOURCE_FLAG_ALLOW_RENDER_TARGET; - auto d3d12_params = gst_d3d12_allocation_params_new (filter->device, &info, + auto d3d12_params = gst_d3d12_allocation_params_new (device, &info, GST_D3D12_ALLOCATION_FLAG_DEFAULT, resource_flags, D3D12_HEAP_FLAG_SHARED); @@ -868,14 +793,13 @@ gst_object_unref (pool); - return GST_BASE_TRANSFORM_CLASS (parent_class)->decide_allocation (trans, - query); + return TRUE; } static gboolean gst_d3d12_mip_mapping_set_info (GstD3D12BaseFilter * filter, - GstCaps * incaps, GstVideoInfo * in_info, GstCaps * outcaps, - GstVideoInfo * out_info) + GstD3D12Device * device, GstCaps * incaps, GstVideoInfo * in_info, + GstCaps * outcaps, GstVideoInfo * out_info) { auto self = GST_D3D12_MIP_MAPPING (filter); auto priv = self->priv; @@ -892,9 +816,9 @@ return FALSE; } - auto ctx = std::make_unique < MipMappingContext > (filter->device); + auto ctx = std::make_unique < MipMappingContext > (device); - ctx->conv = gst_d3d12_converter_new (filter->device, nullptr, in_info, + ctx->conv = gst_d3d12_converter_new (device, nullptr, in_info, out_info, nullptr, nullptr, nullptr); if (!ctx->conv) { GST_ERROR_OBJECT (self, "Couldn't create converter"); @@ -913,7 +837,7 @@ cs_type = GST_D3D_PLUGIN_CS_MIP_GEN_VUYA; } - ctx->gen = gst_d3d12_mip_gen_new (filter->device, cs_type); + ctx->gen = gst_d3d12_mip_gen_new (device, cs_type); if (!ctx->gen) { GST_ERROR_OBJECT (self, "Couldn't create mip generator"); return FALSE;
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3d12/gstd3d12overlayblender.cpp
Added
@@ -0,0 +1,894 @@ +/* GStreamer + * Copyright (C) 2023 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02120-1301, USA. + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gstd3d12overlayblender.h" +#include "gstd3d12pluginutils.h" +#include <directx/d3dx12.h> +#include <wrl.h> +#include <memory> +#include <vector> +#include <unordered_map> +#include <algorithm> +#include <gst/d3dshader/gstd3dshader.h> + +GST_DEBUG_CATEGORY_STATIC (gst_d3d12_overlay_blender_debug); +#define GST_CAT_DEFAULT gst_d3d12_overlay_blender_debug + +/* *INDENT-OFF* */ +using namespace Microsoft::WRL; + +struct VertexData +{ + struct { + FLOAT x; + FLOAT y; + FLOAT z; + } position; + struct { + FLOAT u; + FLOAT v; + } texture; +}; + +struct GstD3D12OverlayRect : public GstMiniObject +{ + ~GstD3D12OverlayRect () + { + if (overlay_rect) + gst_video_overlay_rectangle_unref (overlay_rect); + + gst_clear_d3d12_desc_heap (&srv_heap); + } + + GstVideoOverlayRectangle *overlay_rect = nullptr; + ComPtr<ID3D12Resource> texture; + ComPtr<ID3D12Resource> staging; + ComPtr<ID3D12Resource> vertex_buf; + GstD3D12DescHeap *srv_heap = nullptr; + D3D12_VERTEX_BUFFER_VIEW vbv; + D3D12_PLACED_SUBRESOURCE_FOOTPRINT layout; + gboolean premul_alpha = FALSE; + gboolean 
need_upload = TRUE; +}; + +GST_DEFINE_MINI_OBJECT_TYPE (GstD3D12OverlayRect, gst_d3d12_overlay_rect); + +struct GstD3D12OverlayBlenderPrivate +{ + GstD3D12OverlayBlenderPrivate () + { + sample_desc.Count = 1; + sample_desc.Quality = 0; + } + + ~GstD3D12OverlayBlenderPrivate () + { + ClearOverlays (); + + gst_clear_object (&ca_pool); + gst_clear_object (&srv_heap_pool); + } + + void ClearOverlays () + { + overlays.clear (); + + for (auto &it : cache) + gst_mini_object_unref (it.second); + cache.clear (); + } + + GstVideoInfo info; + + D3D12_VIEWPORT viewport; + D3D12_RECT scissor_rect; + + D3D12_INPUT_ELEMENT_DESC input_desc[2]; + D3D12_GRAPHICS_PIPELINE_STATE_DESC pso_desc = { }; + D3D12_GRAPHICS_PIPELINE_STATE_DESC pso_premul_desc = { }; + DXGI_SAMPLE_DESC sample_desc; + + ComPtr<ID3D12RootSignature> rs; + ComPtr<ID3D12PipelineState> pso; + ComPtr<ID3D12PipelineState> pso_premul; + D3D12_INDEX_BUFFER_VIEW idv; + ComPtr<ID3D12Resource> index_buf; + ComPtr<ID3D12GraphicsCommandList> cl; + GstD3D12CmdAllocPool *ca_pool = nullptr; + GstD3D12DescHeapPool *srv_heap_pool = nullptr; + + /* Only cache will hold strong reference to GstD3D12OverlayRect */ + std::unordered_map<GstVideoOverlayRectangle*, GstD3D12OverlayRect*> cache; + std::vector<GstD3D12OverlayRect*> overlays; + std::vector<GstD3D12OverlayRect *> new_overlays; + std::vector<GstVideoOverlayRectangle *> rects_to_upload; +}; +/* *INDENT-ON* */ + +struct _GstD3D12OverlayBlender +{ + GstObject parent; + + GstD3D12Device *device; + + GstD3D12OverlayBlenderPrivate *priv; +}; + +static void gst_d3d12_overlay_blender_finalize (GObject * object); + +#define gst_d3d12_overlay_blender_parent_class parent_class +G_DEFINE_TYPE (GstD3D12OverlayBlender, + gst_d3d12_overlay_blender, GST_TYPE_OBJECT); + +static void +gst_d3d12_overlay_blender_class_init (GstD3D12OverlayBlenderClass * klass) +{ + GObjectClass *object_class = G_OBJECT_CLASS (klass); + + object_class->finalize = gst_d3d12_overlay_blender_finalize; + + 
GST_DEBUG_CATEGORY_INIT (gst_d3d12_overlay_blender_debug, + "d3d12overlayblender", 0, "d3d12overlayblender"); +} + +static void +gst_d3d12_overlay_blender_init (GstD3D12OverlayBlender * self) +{ + self->priv = new GstD3D12OverlayBlenderPrivate (); +} + +static void +gst_d3d12_overlay_blender_finalize (GObject * object) +{ + GstD3D12OverlayBlender *self = GST_D3D12_OVERLAY_BLENDER (object); + + delete self->priv; + + gst_clear_object (&self->device); + + G_OBJECT_CLASS (parent_class)->finalize (object); +} + +static void +gst_d3d12_overlay_rect_free (GstD3D12OverlayRect * rect) +{ + if (rect) + delete rect; +} + +static GstD3D12OverlayRect * +gst_d3d12_overlay_rect_new (GstD3D12OverlayBlender * self, + GstVideoOverlayRectangle * overlay_rect) +{ + auto priv = self->priv; + gint x, y; + guint width, height; + VertexData vertex_data[4]; + FLOAT x1, y1, x2, y2; + gdouble val; + GstVideoOverlayFormatFlags flags; + gboolean premul_alpha = FALSE; + + if (!gst_video_overlay_rectangle_get_render_rectangle (overlay_rect, &x, &y, + &width, &height)) { + GST_ERROR_OBJECT (self, "Failed to get render rectangle"); + return nullptr; + } + + flags = gst_video_overlay_rectangle_get_flags (overlay_rect); + if ((flags & GST_VIDEO_OVERLAY_FORMAT_FLAG_PREMULTIPLIED_ALPHA) != 0) { + premul_alpha = TRUE; + flags = GST_VIDEO_OVERLAY_FORMAT_FLAG_PREMULTIPLIED_ALPHA; + } else { + flags = GST_VIDEO_OVERLAY_FORMAT_FLAG_NONE; + } + + auto buf = gst_video_overlay_rectangle_get_pixels_unscaled_argb (overlay_rect, + flags); + if (!buf) { + GST_ERROR_OBJECT (self, "Failed to get overlay buffer"); + return nullptr; + } + + auto device = gst_d3d12_device_get_device_handle (self->device); + auto mem = gst_buffer_peek_memory (buf, 0); + bool is_d3d12 = false; + ComPtr < ID3D12Resource > texture; + if (gst_is_d3d12_memory (mem)) { + GST_LOG_OBJECT (self, "Overlay is d3d12 memory"); + auto dmem = GST_D3D12_MEMORY_CAST (mem); + if (gst_d3d12_device_is_equal (dmem->device, self->device) && + 
gst_d3d12_memory_get_shader_resource_view_heap (dmem)) { + texture = gst_d3d12_memory_get_resource_handle (dmem); + is_d3d12 = true; + } + } + + ComPtr < ID3D12Resource > staging; + D3D12_PLACED_SUBRESOURCE_FOOTPRINT layout; + D3D12_HEAP_FLAGS heap_flags = D3D12_HEAP_FLAG_NONE; + if (gst_d3d12_device_non_zeroed_supported (self->device)) + heap_flags = D3D12_HEAP_FLAG_CREATE_NOT_ZEROED; + + if (!is_d3d12) { + auto vmeta = gst_buffer_get_video_meta (buf); + + if (!vmeta) { + GST_ERROR_OBJECT (self, "Failed to get video meta"); + return nullptr; + } + + D3D12_HEAP_PROPERTIES heap_prop = + CD3DX12_HEAP_PROPERTIES (D3D12_HEAP_TYPE_DEFAULT); + D3D12_RESOURCE_DESC desc = + CD3DX12_RESOURCE_DESC::Tex2D (DXGI_FORMAT_B8G8R8A8_UNORM, vmeta->width, + vmeta->height, 1, 1); + + auto hr = device->CreateCommittedResource (&heap_prop, heap_flags, + &desc, D3D12_RESOURCE_STATE_COPY_DEST, nullptr, + IID_PPV_ARGS (&texture)); + if (!gst_d3d12_result (hr, self->device)) { + GST_ERROR_OBJECT (self, "Couldn't create texture"); + return nullptr; + } + + UINT64 size; + device->GetCopyableFootprints (&desc, 0, 1, 0, &layout, nullptr, nullptr, + &size); + + heap_prop = CD3DX12_HEAP_PROPERTIES (D3D12_HEAP_TYPE_UPLOAD); + desc = CD3DX12_RESOURCE_DESC::Buffer (size); + hr = device->CreateCommittedResource (&heap_prop, heap_flags, + &desc, D3D12_RESOURCE_STATE_GENERIC_READ, nullptr, + IID_PPV_ARGS (&staging)); + if (!gst_d3d12_result (hr, self->device)) { + GST_ERROR_OBJECT (self, "Couldn't create upload buffer"); + return nullptr; + } + + guint8 *map_data; + hr = staging->Map (0, nullptr, (void **) &map_data); + if (!gst_d3d12_result (hr, self->device)) { + GST_ERROR_OBJECT (self, "Couldn't map staging"); + return nullptr; + } + + guint8 *data; + gint stride; + GstMapInfo info; + if (!gst_video_meta_map (vmeta, + 0, &info, (gpointer *) & data, &stride, GST_MAP_READ)) { + GST_ERROR_OBJECT (self, "Failed to map"); + return nullptr; + } + + if (layout.Footprint.RowPitch == (UINT) stride) { + 
memcpy (map_data, data, stride * layout.Footprint.Height); + } else { + guint width_in_bytes = 4 * layout.Footprint.Width; + for (UINT i = 0; i < layout.Footprint.Height; i++) { + memcpy (map_data, data, width_in_bytes); + map_data += layout.Footprint.RowPitch; + data += stride; + } + } + + staging->Unmap (0, nullptr); + gst_video_meta_unmap (vmeta, 0, &info); + } + + /* bottom left */ + gst_util_fraction_to_double (x, GST_VIDEO_INFO_WIDTH (&priv->info), &val); + x1 = (val * 2.0f) - 1.0f; + + gst_util_fraction_to_double (y + height, + GST_VIDEO_INFO_HEIGHT (&priv->info), &val); + y1 = (val * -2.0f) + 1.0f; + + /* top right */ + gst_util_fraction_to_double (x + width, + GST_VIDEO_INFO_WIDTH (&priv->info), &val); + x2 = (val * 2.0f) - 1.0f; + + gst_util_fraction_to_double (y, GST_VIDEO_INFO_HEIGHT (&priv->info), &val); + y2 = (val * -2.0f) + 1.0f; + + /* bottom left */ + vertex_data[0].position.x = x1; + vertex_data[0].position.y = y1; + vertex_data[0].position.z = 0.0f; + vertex_data[0].texture.u = 0.0f; + vertex_data[0].texture.v = 1.0f; + + /* top left */ + vertex_data[1].position.x = x1; + vertex_data[1].position.y = y2; + vertex_data[1].position.z = 0.0f; + vertex_data[1].texture.u = 0.0f; + vertex_data[1].texture.v = 0.0f; + + /* top right */ + vertex_data[2].position.x = x2; + vertex_data[2].position.y = y2; + vertex_data[2].position.z = 0.0f; + vertex_data[2].texture.u = 1.0f; + vertex_data[2].texture.v = 0.0f; + + /* bottom right */ + vertex_data[3].position.x = x2; + vertex_data[3].position.y = y1; + vertex_data[3].position.z = 0.0f; + vertex_data[3].texture.u = 1.0f; + vertex_data[3].texture.v = 1.0f; + + ComPtr < ID3D12Resource > vertex_buf; + D3D12_HEAP_PROPERTIES heap_prop = + CD3DX12_HEAP_PROPERTIES (D3D12_HEAP_TYPE_UPLOAD); + D3D12_RESOURCE_DESC desc = + CD3DX12_RESOURCE_DESC::Buffer (sizeof (VertexData) * 4); + auto hr = device->CreateCommittedResource (&heap_prop, heap_flags, + &desc, D3D12_RESOURCE_STATE_GENERIC_READ, nullptr, + IID_PPV_ARGS (&vertex_buf)); + if (!gst_d3d12_result (hr, 
self->device)) { + GST_ERROR_OBJECT (self, "Couldn't create vertex buffer"); + return nullptr; + } + + guint8 *map_data; + hr = vertex_buf->Map (0, nullptr, (void **) &map_data); + if (!gst_d3d12_result (hr, self->device)) { + GST_ERROR_OBJECT (self, "Couldn't map vertex buffer"); + return nullptr; + } + + memcpy (map_data, vertex_data, sizeof (VertexData) * 4); + vertex_buf->Unmap (0, nullptr); + + GstD3D12DescHeap *srv_heap; + if (!gst_d3d12_desc_heap_pool_acquire (priv->srv_heap_pool, &srv_heap)) { + GST_ERROR_OBJECT (self, "Couldn't acquire command allocator"); + return nullptr; + } + + auto srv_heap_handle = gst_d3d12_desc_heap_get_handle (srv_heap); + D3D12_SHADER_RESOURCE_VIEW_DESC srv_desc = { }; + srv_desc.ViewDimension = D3D12_SRV_DIMENSION_TEXTURE2D; + srv_desc.Shader4ComponentMapping = D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING; + srv_desc.Texture2D.MipLevels = 1; + srv_desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM; + + device->CreateShaderResourceView (texture.Get (), &srv_desc, + GetCPUDescriptorHandleForHeapStart (srv_heap_handle)); + + auto rect = new GstD3D12OverlayRect (); + gst_mini_object_init (rect, 0, gst_d3d12_overlay_rect_get_type (), + nullptr, nullptr, + (GstMiniObjectFreeFunction) gst_d3d12_overlay_rect_free); + + rect->overlay_rect = gst_video_overlay_rectangle_ref (overlay_rect); + rect->texture = texture; + rect->staging = staging; + rect->vertex_buf = vertex_buf; + rect->vbv.BufferLocation = vertex_buf->GetGPUVirtualAddress (); + rect->vbv.SizeInBytes = sizeof (VertexData) * 4; + rect->vbv.StrideInBytes = sizeof (VertexData); + rect->layout = layout; + rect->srv_heap = srv_heap; + rect->premul_alpha = premul_alpha; + if (is_d3d12) + rect->need_upload = FALSE; + + return rect; +} + +static gboolean +gst_d3d12_overlay_blender_setup_shader (GstD3D12OverlayBlender * self) +{ + auto priv = self->priv; + GstVideoInfo *info = &priv->info; + const WORD indices[6] = { 0, 1, 2, 3, 0, 2 }; + const D3D12_ROOT_SIGNATURE_FLAGS rs_flags = + 
D3D12_ROOT_SIGNATURE_FLAG_ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT | + D3D12_ROOT_SIGNATURE_FLAG_DENY_HULL_SHADER_ROOT_ACCESS | + D3D12_ROOT_SIGNATURE_FLAG_DENY_DOMAIN_SHADER_ROOT_ACCESS | + D3D12_ROOT_SIGNATURE_FLAG_DENY_GEOMETRY_SHADER_ROOT_ACCESS; + const D3D12_STATIC_SAMPLER_DESC static_sampler_desc = { + D3D12_FILTER_MIN_MAG_LINEAR_MIP_POINT, + D3D12_TEXTURE_ADDRESS_MODE_CLAMP, + D3D12_TEXTURE_ADDRESS_MODE_CLAMP, + D3D12_TEXTURE_ADDRESS_MODE_CLAMP, + 0, + 1, + D3D12_COMPARISON_FUNC_ALWAYS, + D3D12_STATIC_BORDER_COLOR_OPAQUE_BLACK, + 0, + D3D12_FLOAT32_MAX, + 0, + 0, + D3D12_SHADER_VISIBILITY_PIXEL + }; + + CD3DX12_ROOT_PARAMETER param; + D3D12_DESCRIPTOR_RANGE range; + std::vector < D3D12_ROOT_PARAMETER > param_list; + + range = CD3DX12_DESCRIPTOR_RANGE (D3D12_DESCRIPTOR_RANGE_TYPE_SRV, 1, 0); + param.InitAsDescriptorTable (1, &range, D3D12_SHADER_VISIBILITY_PIXEL); + param_list.push_back (param); + + D3D12_VERSIONED_ROOT_SIGNATURE_DESC rs_desc = { }; + CD3DX12_VERSIONED_ROOT_SIGNATURE_DESC::Init_1_0 (rs_desc, + param_list.size (), param_list.data (), + 1, &static_sampler_desc, rs_flags); + + ComPtr < ID3DBlob > rs_blob; + ComPtr < ID3DBlob > error_blob; + auto hr = D3DX12SerializeVersionedRootSignature (&rs_desc, + D3D_ROOT_SIGNATURE_VERSION_1_1, &rs_blob, &error_blob); + if (!gst_d3d12_result (hr, self->device)) { + const gchar *error_msg = nullptr; + if (error_blob) + error_msg = (const gchar *) error_blob->GetBufferPointer (); + + GST_ERROR_OBJECT (self, "Couldn't serialize root signature, error: %s", + GST_STR_NULL (error_msg)); + return FALSE; + } + + GstD3D12Format device_format; + gst_d3d12_device_get_format (self->device, GST_VIDEO_INFO_FORMAT (info), + &device_format); + + GstD3DShaderByteCode vs_code; + GstD3DShaderByteCode ps_sample_code; + GstD3DShaderByteCode ps_sample_premul_code; + if (!gst_d3d_plugin_shader_get_vs_blob (GST_D3D_PLUGIN_VS_COORD, + GST_D3D_SM_5_0, &vs_code)) { + GST_ERROR_OBJECT (self, "Couldn't get vs bytecode"); + return FALSE; + } 
+ + GstD3DPluginPS ps_sample = GST_D3D_PLUGIN_PS_SAMPLE; + GstD3DPluginPS ps_sample_premul = GST_D3D_PLUGIN_PS_SAMPLE_PREMULT; + + if (GST_VIDEO_INFO_FORMAT (info) == GST_VIDEO_FORMAT_VUYA) { + if (info->colorimetry.range == GST_VIDEO_COLOR_RANGE_0_255) { + ps_sample = GST_D3D_PLUGIN_PS_SAMPLE_BGRA_TO_VUYA_FULL; + ps_sample_premul = GST_D3D_PLUGIN_PS_SAMPLE_BGRA_TO_VUYA_FULL_PREMUL; + } else { + ps_sample = GST_D3D_PLUGIN_PS_SAMPLE_BGRA_TO_VUYA_LIMITED; + ps_sample_premul = GST_D3D_PLUGIN_PS_SAMPLE_BGRA_TO_VUYA_LIMITED_PREMUL; + } + } + + if (!gst_d3d_plugin_shader_get_ps_blob (ps_sample, + GST_D3D_SM_5_0, &ps_sample_code)) { + GST_ERROR_OBJECT (self, "Couldn't get ps bytecode"); + return FALSE; + } + + if (!gst_d3d_plugin_shader_get_ps_blob (ps_sample_premul, + GST_D3D_SM_5_0, &ps_sample_premul_code)) { + GST_ERROR_OBJECT (self, "Couldn't get ps bytecode"); + return FALSE; + } + + auto device = gst_d3d12_device_get_device_handle (self->device); + ComPtr < ID3D12RootSignature > rs; + device->CreateRootSignature (0, rs_blob->GetBufferPointer (), + rs_blob->GetBufferSize (), IID_PPV_ARGS (&rs)); + + priv->input_desc[0].SemanticName = "POSITION"; + priv->input_desc[0].SemanticIndex = 0; + priv->input_desc[0].Format = DXGI_FORMAT_R32G32B32_FLOAT; + priv->input_desc[0].InputSlot = 0; + priv->input_desc[0].AlignedByteOffset = D3D12_APPEND_ALIGNED_ELEMENT; + priv->input_desc[0].InputSlotClass = + D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA; + priv->input_desc[0].InstanceDataStepRate = 0; + + priv->input_desc[1].SemanticName = "TEXCOORD"; + priv->input_desc[1].SemanticIndex = 0; + priv->input_desc[1].Format = DXGI_FORMAT_R32G32_FLOAT; + priv->input_desc[1].InputSlot = 0; + priv->input_desc[1].AlignedByteOffset = D3D12_APPEND_ALIGNED_ELEMENT; + priv->input_desc[1].InputSlotClass = + D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA; + priv->input_desc[1].InstanceDataStepRate = 0; + + auto & pso_desc = priv->pso_desc; + pso_desc.pRootSignature = rs.Get (); + pso_desc.VS.BytecodeLength = vs_code.byte_code_len; 
+ pso_desc.VS.pShaderBytecode = vs_code.byte_code; + pso_desc.PS.BytecodeLength = ps_sample_code.byte_code_len; + pso_desc.PS.pShaderBytecode = ps_sample_code.byte_code; + pso_desc.BlendState = CD3DX12_BLEND_DESC (D3D12_DEFAULT); + pso_desc.BlendState.RenderTarget0.BlendEnable = TRUE; + pso_desc.BlendState.RenderTarget0.LogicOpEnable = FALSE; + pso_desc.BlendState.RenderTarget0.SrcBlend = D3D12_BLEND_SRC_ALPHA; + pso_desc.BlendState.RenderTarget0.DestBlend = D3D12_BLEND_INV_SRC_ALPHA; + pso_desc.BlendState.RenderTarget0.BlendOp = D3D12_BLEND_OP_ADD; + pso_desc.BlendState.RenderTarget0.SrcBlendAlpha = D3D12_BLEND_ONE; + pso_desc.BlendState.RenderTarget0.DestBlendAlpha = + D3D12_BLEND_INV_SRC_ALPHA; + pso_desc.BlendState.RenderTarget0.BlendOpAlpha = D3D12_BLEND_OP_ADD; + pso_desc.BlendState.RenderTarget0.LogicOp = D3D12_LOGIC_OP_NOOP; + pso_desc.BlendState.RenderTarget0.RenderTargetWriteMask = + D3D12_COLOR_WRITE_ENABLE_ALL; + pso_desc.SampleMask = UINT_MAX; + pso_desc.RasterizerState = CD3DX12_RASTERIZER_DESC (D3D12_DEFAULT); + pso_desc.RasterizerState.CullMode = D3D12_CULL_MODE_NONE; + pso_desc.DepthStencilState.DepthEnable = FALSE; + pso_desc.DepthStencilState.StencilEnable = FALSE; + pso_desc.InputLayout.pInputElementDescs = priv->input_desc; + pso_desc.InputLayout.NumElements = 2; + pso_desc.PrimitiveTopologyType = D3D12_PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE; + pso_desc.NumRenderTargets = 1; + pso_desc.RTVFormats[0] = device_format.resource_format[0]; + pso_desc.SampleDesc.Count = 1; + + ComPtr < ID3D12PipelineState > pso; + hr = device->CreateGraphicsPipelineState (&pso_desc, IID_PPV_ARGS (&pso)); + if (!gst_d3d12_result (hr, self->device)) { + GST_ERROR_OBJECT (self, "Couldn't create pso"); + return FALSE; + } + + ComPtr < ID3D12PipelineState > pso_premul; + auto & pso_premul_desc = priv->pso_premul_desc; + pso_premul_desc = priv->pso_desc; + pso_premul_desc.PS.BytecodeLength = ps_sample_premul_code.byte_code_len; + pso_premul_desc.PS.pShaderBytecode = 
ps_sample_premul_code.byte_code; + hr = device->CreateGraphicsPipelineState (&pso_premul_desc, + IID_PPV_ARGS (&pso_premul)); + if (!gst_d3d12_result (hr, self->device)) { + GST_ERROR_OBJECT (self, "Couldn't create pso"); + return FALSE; + } + + D3D12_HEAP_PROPERTIES heap_prop = + CD3DX12_HEAP_PROPERTIES (D3D12_HEAP_TYPE_UPLOAD); + D3D12_RESOURCE_DESC buffer_desc = + CD3DX12_RESOURCE_DESC::Buffer (sizeof (indices)); + D3D12_HEAP_FLAGS heap_flags = D3D12_HEAP_FLAG_NONE; + if (gst_d3d12_device_non_zeroed_supported (self->device)) + heap_flags = D3D12_HEAP_FLAG_CREATE_NOT_ZEROED; + + ComPtr < ID3D12Resource > index_buf; + hr = device->CreateCommittedResource (&heap_prop, heap_flags, + &buffer_desc, D3D12_RESOURCE_STATE_GENERIC_READ, nullptr, + IID_PPV_ARGS (&index_buf)); + if (!gst_d3d12_result (hr, self->device)) { + GST_ERROR_OBJECT (self, "Couldn't create index buffer"); + return FALSE; + } + + void *data; + hr = index_buf->Map (0, nullptr, &data); + if (!gst_d3d12_result (hr, self->device)) { + GST_ERROR_OBJECT (self, "Couldn't map index buffer"); + return FALSE; + } + + memcpy (data, indices, sizeof (indices)); + index_buf->Unmap (0, nullptr); + + D3D12_DESCRIPTOR_HEAP_DESC heap_desc = { }; + heap_desc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV; + heap_desc.NumDescriptors = 1; + heap_desc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE; + + priv->rs = rs; + priv->pso = pso; + priv->pso_premul = pso_premul; + priv->idv.BufferLocation = index_buf->GetGPUVirtualAddress (); + priv->idv.SizeInBytes = sizeof (indices); + priv->idv.Format = DXGI_FORMAT_R16_UINT; + priv->index_buf = index_buf; + priv->srv_heap_pool = gst_d3d12_desc_heap_pool_new (device, &heap_desc); + priv->ca_pool = gst_d3d12_cmd_alloc_pool_new (device, + D3D12_COMMAND_LIST_TYPE_DIRECT); + + priv->viewport.TopLeftX = 0; + priv->viewport.TopLeftY = 0; + priv->viewport.Width = GST_VIDEO_INFO_WIDTH (info); + priv->viewport.Height = GST_VIDEO_INFO_HEIGHT (info); + priv->viewport.MinDepth = 0.0f; + 
priv->viewport.MaxDepth = 1.0f; + + priv->scissor_rect.left = 0; + priv->scissor_rect.top = 0; + priv->scissor_rect.right = GST_VIDEO_INFO_WIDTH (info); + priv->scissor_rect.bottom = GST_VIDEO_INFO_HEIGHT (info); + + return TRUE; +} + +GstD3D12OverlayBlender * +gst_d3d12_overlay_blender_new (GstD3D12Device * device, + const GstVideoInfo * info) +{ + GstD3D12OverlayBlender *self = nullptr; + GstD3D12OverlayBlenderPrivate *priv; + + g_return_val_if_fail (GST_IS_D3D12_DEVICE (device), nullptr); + g_return_val_if_fail (info != nullptr, nullptr); + + self = (GstD3D12OverlayBlender *) + g_object_new (GST_TYPE_D3D12_OVERLAY_BLENDER, nullptr); + gst_object_ref_sink (self); + priv = self->priv; + + self->device = (GstD3D12Device *) gst_object_ref (device); + priv->info = *info; + + if (!gst_d3d12_overlay_blender_setup_shader (self)) { + gst_object_unref (self); + return nullptr; + } + + return self; +} + +static gboolean +gst_d3d12_overlay_blender_foreach_meta (GstBuffer * buffer, GstMeta ** meta, + GstD3D12OverlayBlender * self) +{ + auto priv = self->priv; + + if ((*meta)->info->api != GST_VIDEO_OVERLAY_COMPOSITION_META_API_TYPE) + return TRUE; + + auto cmeta = (GstVideoOverlayCompositionMeta *) (*meta); + if (!cmeta->overlay) + return TRUE; + + auto num_rect = gst_video_overlay_composition_n_rectangles (cmeta->overlay); + for (guint i = 0; i < num_rect; i++) { + auto rect = gst_video_overlay_composition_get_rectangle (cmeta->overlay, i); + if (std::find (priv->rects_to_upload.begin (), + priv->rects_to_upload.end (), + rect) == priv->rects_to_upload.end ()) { + priv->rects_to_upload.push_back (rect); + } + } + + return TRUE; +} + +/* *INDENT-OFF* */ +gboolean +gst_d3d12_overlay_blender_upload (GstD3D12OverlayBlender * compositor, + GstBuffer * buf) +{ + g_return_val_if_fail (compositor != nullptr, FALSE); + g_return_val_if_fail (GST_IS_BUFFER (buf), FALSE); + + auto priv = compositor->priv; + priv->rects_to_upload.clear (); + + gst_buffer_foreach_meta (buf, + 
(GstBufferForeachMetaFunc) gst_d3d12_overlay_blender_foreach_meta, + compositor); + + if (priv->rects_to_upload.empty ()) { + priv->ClearOverlays (); + return TRUE; + } + + GST_LOG_OBJECT (compositor, "Found %" G_GSIZE_FORMAT + " overlay rectangles", priv->rects_to_upload.size ()); + + priv->new_overlays.clear (); + + for (auto it : priv->rects_to_upload) { + auto found = priv->cache.find (it); + GstD3D12OverlayRect *rect = nullptr; + + if (found != priv->cache.end ()) { + rect = found->second; + } else { + rect = gst_d3d12_overlay_rect_new (compositor, it); + if (!rect) + continue; + + priv->cache.emplace (it, rect); + } + + priv->new_overlays.push_back (rect); + } + + auto it = priv->cache.begin (); + while (it != priv->cache.end ()) { + if (std::find (priv->rects_to_upload.begin (), + priv->rects_to_upload.end (), it->first) == + priv->rects_to_upload.end ()) { + gst_mini_object_unref (it->second); + it = priv->cache.erase (it); + } else { + it++; + } + } + + priv->overlays.swap (priv->new_overlays); + + return TRUE; +} +/* *INDENT-ON* */ + +gboolean +gst_d3d12_overlay_blender_update_viewport (GstD3D12OverlayBlender * + compositor, GstVideoRectangle * viewport) +{ + g_return_val_if_fail (GST_IS_D3D12_OVERLAY_BLENDER (compositor), FALSE); + g_return_val_if_fail (viewport != nullptr, FALSE); + + auto priv = compositor->priv; + + priv->viewport.TopLeftX = viewport->x; + priv->viewport.TopLeftY = viewport->y; + priv->viewport.Width = viewport->w; + priv->viewport.Height = viewport->h; + + priv->scissor_rect.left = viewport->x; + priv->scissor_rect.top = viewport->y; + priv->scissor_rect.right = viewport->x + viewport->w; + priv->scissor_rect.bottom = viewport->y + viewport->h; + + return TRUE; +} + +/* *INDENT-OFF* */ +static gboolean +gst_d3d12_overlay_blender_execute (GstD3D12OverlayBlender * self, + GstBuffer * buf, GstD3D12FenceData * fence_data, + ID3D12GraphicsCommandList * cl) +{ + auto priv = self->priv; + + auto mem = (GstD3D12Memory *) 
gst_buffer_peek_memory (buf, 0); + auto rtv_heap = gst_d3d12_memory_get_render_target_view_heap (mem); + if (!rtv_heap) { + GST_ERROR_OBJECT (self, "Couldn't get rtv heap"); + return FALSE; + } + + ComPtr < ID3D12PipelineState > prev_pso; + for (auto rect : priv->overlays) { + if (rect->need_upload) { + D3D12_TEXTURE_COPY_LOCATION src = + CD3DX12_TEXTURE_COPY_LOCATION (rect->staging.Get (), rect->layout); + D3D12_TEXTURE_COPY_LOCATION dst = + CD3DX12_TEXTURE_COPY_LOCATION (rect->texture.Get ()); + GST_LOG_OBJECT (self, "First render, uploading texture"); + cl->CopyTextureRegion (&dst, 0, 0, 0, &src, nullptr); + D3D12_RESOURCE_BARRIER barrier = + CD3DX12_RESOURCE_BARRIER::Transition (rect->texture.Get (), + D3D12_RESOURCE_STATE_COPY_DEST, + D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE); + cl->ResourceBarrier (1, &barrier); + rect->need_upload = FALSE; + } + + cl->SetGraphicsRootSignature (priv->rs.Get ()); + + ComPtr < ID3D12PipelineState > pso; + if (rect->premul_alpha) + pso = priv->pso; + else + pso = priv->pso_premul; + + if (!prev_pso) { + cl->SetPipelineState (pso.Get ()); + cl->IASetIndexBuffer (&priv->idv); + cl->IASetPrimitiveTopology (D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST); + cl->RSSetViewports (1, &priv->viewport); + cl->RSSetScissorRects (1, &priv->scissor_rect); + D3D12_CPU_DESCRIPTOR_HANDLE rtv_heaps[] = { + GetCPUDescriptorHandleForHeapStart (rtv_heap) + }; + cl->OMSetRenderTargets (1, rtv_heaps, FALSE, nullptr); + } else if (pso != prev_pso) { + cl->SetPipelineState (pso.Get ()); + } + + auto srv_heap = gst_d3d12_desc_heap_get_handle (rect->srv_heap); + ID3D12DescriptorHeap *heaps[] = { srv_heap }; + cl->SetDescriptorHeaps (1, heaps); + cl->SetGraphicsRootDescriptorTable (0, + GetGPUDescriptorHandleForHeapStart (srv_heap)); + cl->IASetVertexBuffers (0, 1, &rect->vbv); + + cl->DrawIndexedInstanced (6, 1, 0, 0, 0); + + gst_d3d12_fence_data_push (fence_data, + FENCE_NOTIFY_MINI_OBJECT (gst_mini_object_ref (rect))); + + prev_pso = nullptr; + prev_pso = pso; + } 
+ + priv->pso->AddRef (); + gst_d3d12_fence_data_push (fence_data, FENCE_NOTIFY_COM (priv->pso.Get ())); + + priv->pso_premul->AddRef (); + gst_d3d12_fence_data_push (fence_data, + FENCE_NOTIFY_COM (priv->pso_premul.Get ())); + + return TRUE; +} +/* *INDENT-ON* */ + +gboolean +gst_d3d12_overlay_blender_draw (GstD3D12OverlayBlender * compositor, + GstBuffer * buf, GstD3D12FenceData * fence_data, + ID3D12GraphicsCommandList * command_list) +{ + g_return_val_if_fail (compositor != nullptr, FALSE); + g_return_val_if_fail (GST_IS_BUFFER (buf), FALSE); + g_return_val_if_fail (fence_data, FALSE); + g_return_val_if_fail (command_list, FALSE); + + auto priv = compositor->priv; + + if (priv->overlays.empty ()) + return TRUE; + + auto mem = (GstD3D12Memory *) gst_buffer_peek_memory (buf, 0); + auto resource = gst_d3d12_memory_get_resource_handle (mem); + auto desc = GetDesc (resource); + if (desc.SampleDesc.Count != priv->sample_desc.Count || + desc.SampleDesc.Quality != priv->sample_desc.Quality) { + auto device = gst_d3d12_device_get_device_handle (compositor->device); + + auto pso_desc = priv->pso_desc; + pso_desc.SampleDesc = desc.SampleDesc; + ComPtr < ID3D12PipelineState > pso; + auto hr = device->CreateGraphicsPipelineState (&pso_desc, + IID_PPV_ARGS (&pso)); + if (!gst_d3d12_result (hr, compositor->device)) { + GST_ERROR_OBJECT (compositor, "Couldn't create pso"); + return FALSE; + } + + ComPtr < ID3D12PipelineState > pso_premul; + auto pso_premul_desc = priv->pso_premul_desc; + pso_premul_desc.SampleDesc = desc.SampleDesc; + hr = device->CreateGraphicsPipelineState (&pso_premul_desc, + IID_PPV_ARGS (&pso_premul)); + if (!gst_d3d12_result (hr, compositor->device)) { + GST_ERROR_OBJECT (compositor, "Couldn't create pso"); + return FALSE; + } + + priv->pso = nullptr; + priv->pso_premul = nullptr; + + priv->pso = pso; + priv->pso_premul = pso_premul; + priv->sample_desc = desc.SampleDesc; + } + + return gst_d3d12_overlay_blender_execute (compositor, + buf, fence_data, 
command_list); +}
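The vertex setup in gst_d3d12_overlay_rect_new above maps the overlay's pixel rectangle into Direct3D clip space: x runs from -1 (left) to 1 (right), y from 1 (top) to -1 (bottom). A minimal standalone sketch of that mapping, with hypothetical frame and rectangle sizes (the function name and struct are illustrative, not part of the plugin):

```cpp
#include <cassert>

// Corners of an overlay quad in clip space, as used for the vertex buffer.
struct ClipRect { float x1, y1, x2, y2; };

// Mirrors the fraction-to-clip-space math in gst_d3d12_overlay_rect_new:
// x maps [0, frame_w] -> [-1, 1]; y maps [0, frame_h] -> [1, -1] (flipped).
static ClipRect
pixel_rect_to_clip (int x, int y, unsigned w, unsigned h,
    int frame_w, int frame_h)
{
  ClipRect r;
  r.x1 = ((double) x / frame_w) * 2.0 - 1.0;          /* left   */
  r.y1 = ((double) (y + h) / frame_h) * -2.0 + 1.0;   /* bottom */
  r.x2 = ((double) (x + w) / frame_w) * 2.0 - 1.0;    /* right  */
  r.y2 = ((double) y / frame_h) * -2.0 + 1.0;         /* top    */
  return r;
}
```

A full-frame rectangle maps to the whole clip-space square, which is why a subtitle overlay covering the entire video needs no scissoring beyond the viewport.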
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3d12/gstd3d12overlayblender.h
Added
@@ -0,0 +1,49 @@ +/* GStreamer + * Copyright (C) 2023 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02120-1301, USA. + */ + +#pragma once + +#include <gst/gst.h> +#include <gst/video/video.h> +#include <gst/d3d12/gstd3d12.h> + +G_BEGIN_DECLS + +#define GST_TYPE_D3D12_OVERLAY_BLENDER (gst_d3d12_overlay_blender_get_type()) +G_DECLARE_FINAL_TYPE (GstD3D12OverlayBlender, gst_d3d12_overlay_blender, + GST, D3D12_OVERLAY_BLENDER, GstObject) + +GType gst_d3d12_overlay_rect_get_type (void); + +GstD3D12OverlayBlender * gst_d3d12_overlay_blender_new (GstD3D12Device * device, + const GstVideoInfo * info); + +gboolean gst_d3d12_overlay_blender_upload (GstD3D12OverlayBlender * compositor, + GstBuffer * buf); + +gboolean gst_d3d12_overlay_blender_update_viewport (GstD3D12OverlayBlender * compositor, + GstVideoRectangle * viewport); + +gboolean gst_d3d12_overlay_blender_draw (GstD3D12OverlayBlender * compositor, + GstBuffer * buf, + GstD3D12FenceData * fence_data, + ID3D12GraphicsCommandList * command_list); + +G_END_DECLS +
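The staging upload in gst_d3d12_overlay_rect_new (in the .cpp above) copies overlay pixels into the upload buffer row by row whenever the GPU row pitch differs from the CPU stride, since D3D12 aligns `RowPitch` (normally to 256 bytes) independently of the source stride. The copy pattern in isolation, with hypothetical sizes and a helper name not taken from the plugin:

```cpp
#include <cassert>
#include <cstring>
#include <cstdint>
#include <vector>

// Mirrors the staging-buffer copy: fast path when pitch == stride,
// otherwise one memcpy per row, advancing dst by the padded pitch.
static void
copy_rows (uint8_t * dst, unsigned dst_pitch,
    const uint8_t * src, unsigned src_stride,
    unsigned width_in_bytes, unsigned height)
{
  if (dst_pitch == src_stride) {
    memcpy (dst, src, (size_t) src_stride * height);
  } else {
    for (unsigned i = 0; i < height; i++) {
      memcpy (dst, src, width_in_bytes);
      dst += dst_pitch;
      src += src_stride;
    }
  }
}
```

Only `width_in_bytes` per row is copied; the padding bytes between rows in the upload buffer are left untouched, matching what `GetCopyableFootprints` describes to `CopyTextureRegion`.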
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/d3d12/gstd3d12overlaycompositor.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3d12/gstd3d12overlaycompositor.cpp
Changed
@@ -1,5 +1,4 @@ -/* GStreamer - * Copyright (C) 2023 Seungha Yang <seungha@centricular.com> +/* GStreamer * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> * * This library is free software; you can redistribute it and/or * modify it under the terms of the GNU Library General Public @@ -14,7 +13,16 @@ * You should have received a copy of the GNU Library General Public * License along with this library; if not, write to the * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, - * Boston, MA 02120-1301, USA. + * Boston, MA 02110-1301, USA. + */ + +/** + * SECTION:element-d3d12overlaycompositor + * @title: d3d12overlaycompositor + * + * A Direct3D12-based overlay composing element + * + * Since: 1.28 */ #ifdef HAVE_CONFIG_H @@ -22,120 +30,170 @@ #endif #include "gstd3d12overlaycompositor.h" +#include "gstd3d12overlayblender.h" #include "gstd3d12pluginutils.h" -#include <directx/d3dx12.h> -#include <wrl.h> #include <memory> -#include <vector> -#include <algorithm> -#include <gst/d3dshader/gstd3dshader.h> - -GST_DEBUG_CATEGORY_STATIC (gst_d3d12_overlay_compositor_debug); -#define GST_CAT_DEFAULT gst_d3d12_overlay_compositor_debug +#include <wrl.h> +#include <directx/d3dx12.h> /* *INDENT-OFF* */ using namespace Microsoft::WRL; +/* *INDENT-ON* */ -struct VertexData +GST_DEBUG_CATEGORY_STATIC (gst_d3d12_overlay_compositor_debug); +#define GST_CAT_DEFAULT gst_d3d12_overlay_compositor_debug + +static GstStaticPadTemplate sink_template = GST_STATIC_PAD_TEMPLATE ("sink", + GST_PAD_SINK, + GST_PAD_ALWAYS, + GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE_WITH_FEATURES + (GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY, GST_D3D12_ALL_FORMATS) "; " + GST_VIDEO_CAPS_MAKE_WITH_FEATURES + (GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY "," + GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, + GST_D3D12_ALL_FORMATS))); + +static GstStaticPadTemplate src_template = GST_STATIC_PAD_TEMPLATE ("src", + GST_PAD_SRC, + GST_PAD_ALWAYS, + GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE_WITH_FEATURES + 
(GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY, GST_D3D12_ALL_FORMATS) "; " + GST_VIDEO_CAPS_MAKE_WITH_FEATURES + (GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY "," + GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, + GST_D3D12_ALL_FORMATS))); + +enum BlendMode { - struct { - FLOAT x; - FLOAT y; - FLOAT z; - } position; - struct { - FLOAT u; - FLOAT v; - } texture; + BLEND_MODE_PASSTHROUGH, + BLEND_MODE_BLEND, + BLEND_MODE_CONVERT_BLEND, }; -struct GstD3D12OverlayRect : public GstMiniObject +/* *INDENT-OFF* */ +struct OverlayBlendCtx { - ~GstD3D12OverlayRect () + OverlayBlendCtx (GstD3D12Device * dev) { - if (overlay_rect) - gst_video_overlay_rectangle_unref (overlay_rect); + device = (GstD3D12Device *) gst_object_ref (dev); + auto device_handle = gst_d3d12_device_get_device_handle (device); + ca_pool = gst_d3d12_cmd_alloc_pool_new (device_handle, + D3D12_COMMAND_LIST_TYPE_DIRECT); + } + + ~OverlayBlendCtx () + { + if (fence_val > 0) { + gst_d3d12_device_fence_wait (device, D3D12_COMMAND_LIST_TYPE_DIRECT, + fence_val); + } + + if (blend_pool) + gst_buffer_pool_set_active (blend_pool, FALSE); - gst_clear_d3d12_desc_heap (&srv_heap); + gst_clear_object (&blend_pool); + gst_clear_object (&ca_pool); + gst_clear_object (&pre_conv); + gst_clear_object (&post_conv); + gst_clear_object (&device); } - GstVideoOverlayRectangle *overlay_rect = nullptr; - ComPtr<ID3D12Resource> texture; - ComPtr<ID3D12Resource> staging; - ComPtr<ID3D12Resource> vertex_buf; - GstD3D12DescHeap *srv_heap = nullptr; - D3D12_VERTEX_BUFFER_VIEW vbv; - D3D12_PLACED_SUBRESOURCE_FOOTPRINT layout; - gboolean premul_alpha = FALSE; - gboolean need_upload = TRUE; + GstD3D12Device *device = nullptr; + ComPtr<ID3D12GraphicsCommandList> cl; + GstD3D12CmdAllocPool *ca_pool; + guint64 fence_val = 0; + + GstD3D12OverlayBlender *blender = nullptr; + GstBufferPool *blend_pool = nullptr; + GstVideoInfo origin_info; + GstVideoInfo blend_info; + GstD3D12Converter *pre_conv = nullptr; + GstD3D12Converter *post_conv = nullptr; }; 
-GST_DEFINE_MINI_OBJECT_TYPE (GstD3D12OverlayRect, gst_d3d12_overlay_rect); - struct GstD3D12OverlayCompositorPrivate { GstD3D12OverlayCompositorPrivate () { - sample_desc.Count = 1; - sample_desc.Quality = 0; + fence_data_pool = gst_d3d12_fence_data_pool_new (); } ~GstD3D12OverlayCompositorPrivate () { - if (overlays) - g_list_free_full (overlays, (GDestroyNotify) gst_mini_object_unref); - - gst_clear_object (&ca_pool); - gst_clear_object (&srv_heap_pool); + gst_object_unref (fence_data_pool); } - GstVideoInfo info; - - D3D12_VIEWPORT viewport; - D3D12_RECT scissor_rect; + GstD3D12FenceDataPool *fence_data_pool; - D3D12_INPUT_ELEMENT_DESC input_desc[2]; - D3D12_GRAPHICS_PIPELINE_STATE_DESC pso_desc = { }; - D3D12_GRAPHICS_PIPELINE_STATE_DESC pso_premul_desc = { }; - DXGI_SAMPLE_DESC sample_desc; - - ComPtr<ID3D12RootSignature> rs; - ComPtr<ID3D12PipelineState> pso; - ComPtr<ID3D12PipelineState> pso_premul; - D3D12_INDEX_BUFFER_VIEW idv; - ComPtr<ID3D12Resource> index_buf; - ComPtr<ID3D12GraphicsCommandList> cl; - GstD3D12CmdAllocPool *ca_pool = nullptr; - GstD3D12DescHeapPool *srv_heap_pool = nullptr; - - GList *overlays = nullptr; - - std::vector<GstVideoOverlayRectangle *> rects_to_upload; + std::shared_ptr<OverlayBlendCtx> ctx; + gboolean downstream_supports_meta = FALSE; + BlendMode blend_mode = BLEND_MODE_PASSTHROUGH; }; /* *INDENT-ON* */ struct _GstD3D12OverlayCompositor { - GstObject parent; - - GstD3D12Device *device; + GstD3D12BaseFilter parent; GstD3D12OverlayCompositorPrivate *priv; }; -static void gst_d3d12_overlay_compositor_finalize (GObject * object); - #define gst_d3d12_overlay_compositor_parent_class parent_class G_DEFINE_TYPE (GstD3D12OverlayCompositor, - gst_d3d12_overlay_compositor, GST_TYPE_OBJECT); + gst_d3d12_overlay_compositor, GST_TYPE_D3D12_BASE_FILTER); + +static void gst_d3d12_overlay_compositor_finalize (GObject * object); +static gboolean gst_d3d12_overlay_compositor_stop (GstBaseTransform * trans); +static GstCaps 
*gst_d3d12_overlay_compositor_transform_caps (GstBaseTransform * + trans, GstPadDirection direction, GstCaps * caps, GstCaps * filter); +static GstCaps *gst_d3d12_overlay_compositor_fixate_caps (GstBaseTransform * + trans, GstPadDirection direction, GstCaps * caps, GstCaps * othercaps); +static GstFlowReturn gst_d3d12_overlay_compositor_transform (GstBaseTransform * + trans, GstBuffer * inbuf, GstBuffer * outbuf); +static GstFlowReturn +gst_d3d12_overlay_compositor_generate_output (GstBaseTransform * trans, + GstBuffer ** buffer); +static gboolean gst_d3d12_overlay_compositor_set_info (GstD3D12BaseFilter * + filter, GstD3D12Device * device, GstCaps * incaps, GstVideoInfo * in_info, + GstCaps * outcaps, GstVideoInfo * out_info); +static gboolean +gst_d3d12_overlay_compositor_propose_allocation (GstD3D12BaseFilter * filter, + GstD3D12Device * device, GstQuery * decide_query, GstQuery * query); static void gst_d3d12_overlay_compositor_class_init (GstD3D12OverlayCompositorClass * klass) { - GObjectClass *object_class = G_OBJECT_CLASS (klass); + auto object_class = G_OBJECT_CLASS (klass); + auto element_class = GST_ELEMENT_CLASS (klass); + auto trans_class = GST_BASE_TRANSFORM_CLASS (klass); + auto filter_class = GST_D3D12_BASE_FILTER_CLASS (klass); object_class->finalize = gst_d3d12_overlay_compositor_finalize; + gst_element_class_set_static_metadata (element_class, + "Direct3D12 Overlay Compositor", "Filter/Effect/Video/Hardware", + "Blend overlay into stream", "Seungha Yang <seungha@centricular.com>"); + + gst_element_class_add_static_pad_template (element_class, &src_template); + gst_element_class_add_static_pad_template (element_class, &sink_template); + + trans_class->passthrough_on_same_caps = FALSE; + + trans_class->stop = GST_DEBUG_FUNCPTR (gst_d3d12_overlay_compositor_stop); + trans_class->transform_caps = + GST_DEBUG_FUNCPTR (gst_d3d12_overlay_compositor_transform_caps); + trans_class->fixate_caps = + GST_DEBUG_FUNCPTR 
(gst_d3d12_overlay_compositor_fixate_caps); + trans_class->transform = + GST_DEBUG_FUNCPTR (gst_d3d12_overlay_compositor_transform); + trans_class->generate_output = + GST_DEBUG_FUNCPTR (gst_d3d12_overlay_compositor_generate_output); + + filter_class->set_info = + GST_DEBUG_FUNCPTR (gst_d3d12_overlay_compositor_set_info); + filter_class->propose_allocation = + GST_DEBUG_FUNCPTR (gst_d3d12_overlay_compositor_propose_allocation); + GST_DEBUG_CATEGORY_INIT (gst_d3d12_overlay_compositor_debug, "d3d12overlaycompositor", 0, "d3d12overlaycompositor"); } @@ -149,721 +207,450 @@ static void gst_d3d12_overlay_compositor_finalize (GObject * object) { - GstD3D12OverlayCompositor *self = GST_D3D12_OVERLAY_COMPOSITOR (object); + auto self = GST_D3D12_OVERLAY_COMPOSITOR (object); delete self->priv; - gst_clear_object (&self->device); - G_OBJECT_CLASS (parent_class)->finalize (object); } -static void -gst_d3d12_overlay_rect_free (GstD3D12OverlayRect * rect) +static gboolean +gst_d3d12_overlay_compositor_stop (GstBaseTransform * trans) { - if (rect) - delete rect; + auto self = GST_D3D12_OVERLAY_COMPOSITOR (trans); + auto priv = self->priv; + + priv->ctx = nullptr; + + return GST_BASE_TRANSFORM_CLASS (parent_class)->stop (trans); } -static GstD3D12OverlayRect * -gst_d3d12_overlay_rect_new (GstD3D12OverlayCompositor * self, - GstVideoOverlayRectangle * overlay_rect) +static gboolean +gst_d3d12_overlay_compositor_propose_allocation (GstD3D12BaseFilter * filter, + GstD3D12Device * device, GstQuery * decide_query, GstQuery * query) { - auto priv = self->priv; - gint x, y; - guint width, height; - VertexData vertex_data[4]; - FLOAT x1, y1, x2, y2; - gdouble val; - GstVideoOverlayFormatFlags flags; - gboolean premul_alpha = FALSE; - - if (!gst_video_overlay_rectangle_get_render_rectangle (overlay_rect, &x, &y, - &width, &height)) { - GST_ERROR_OBJECT (self, "Failed to get render rectangle"); - return nullptr; + if (!GST_D3D12_BASE_FILTER_CLASS (parent_class)->propose_allocation (filter, + 
device, decide_query, query)) { + return FALSE; } - flags = gst_video_overlay_rectangle_get_flags (overlay_rect); - if ((flags & GST_VIDEO_OVERLAY_FORMAT_FLAG_PREMULTIPLIED_ALPHA) != 0) { - premul_alpha = TRUE; - flags = GST_VIDEO_OVERLAY_FORMAT_FLAG_PREMULTIPLIED_ALPHA; - } else { - flags = GST_VIDEO_OVERLAY_FORMAT_FLAG_NONE; - } + gst_query_add_allocation_meta (query, + GST_VIDEO_OVERLAY_COMPOSITION_META_API_TYPE, nullptr); - auto buf = gst_video_overlay_rectangle_get_pixels_unscaled_argb (overlay_rect, - flags); - if (!buf) { - GST_ERROR_OBJECT (self, "Failed to get overlay buffer"); - return nullptr; - } + return TRUE; +} - auto device = gst_d3d12_device_get_device_handle (self->device); - auto mem = gst_buffer_peek_memory (buf, 0); - bool is_d3d12 = false; - ComPtr < ID3D12Resource > texture; - if (gst_is_d3d12_memory (mem)) { - GST_LOG_OBJECT (self, "Overlay is d3d12 memory"); - auto dmem = GST_D3D12_MEMORY_CAST (mem); - if (gst_d3d12_device_is_equal (dmem->device, self->device) && - gst_d3d12_memory_get_shader_resource_view_heap (dmem)) { - texture = gst_d3d12_memory_get_resource_handle (dmem); - is_d3d12 = true; +static GstCaps * +add_feature (GstCaps * caps) +{ + auto new_caps = gst_caps_new_empty (); + auto caps_size = gst_caps_get_size (caps); + + for (guint i = 0; i < caps_size; i++) { + auto s = gst_caps_get_structure (caps, i); + auto f = gst_caps_features_copy (gst_caps_get_features (caps, i)); + auto c = gst_caps_new_full (gst_structure_copy (s), nullptr); + + if (!gst_caps_features_is_any (f) && + !gst_caps_features_contains (f, + GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION)) { + gst_caps_features_add (f, + GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION); } + + gst_caps_set_features (c, 0, f); + gst_caps_append (new_caps, c); } - ComPtr < ID3D12Resource > staging; - D3D12_PLACED_SUBRESOURCE_FOOTPRINT layout; - D3D12_HEAP_FLAGS heap_flags = D3D12_HEAP_FLAG_NONE; - if (gst_d3d12_device_non_zeroed_supported (self->device)) - heap_flags = 
D3D12_HEAP_FLAG_CREATE_NOT_ZEROED; + return new_caps; +} - if (!is_d3d12) { - auto vmeta = gst_buffer_get_video_meta (buf); +static GstCaps * +remove_feature (GstCaps * caps) +{ + auto new_caps = gst_caps_new_empty (); + auto caps_size = gst_caps_get_size (caps); + + for (guint i = 0; i < caps_size; i++) { + auto s = gst_caps_get_structure (caps, i); + auto f = gst_caps_features_copy (gst_caps_get_features (caps, i)); + auto c = gst_caps_new_full (gst_structure_copy (s), nullptr); + + gst_caps_features_remove (f, + GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION); + gst_caps_set_features (c, 0, f); + gst_caps_append (new_caps, c); + } - if (!vmeta) { - GST_ERROR_OBJECT (self, "Failed to get video meta"); - return nullptr; - } + return new_caps; +} - D3D12_HEAP_PROPERTIES heap_prop = - CD3DX12_HEAP_PROPERTIES (D3D12_HEAP_TYPE_DEFAULT); - D3D12_RESOURCE_DESC desc = - CD3DX12_RESOURCE_DESC::Tex2D (DXGI_FORMAT_B8G8R8A8_UNORM, vmeta->width, - vmeta->height, 1, 1); - - auto hr = device->CreateCommittedResource (&heap_prop, heap_flags, - &desc, D3D12_RESOURCE_STATE_COPY_DEST, nullptr, - IID_PPV_ARGS (&texture)); - if (!gst_d3d12_result (hr, self->device)) { - GST_ERROR_OBJECT (self, "Couldn't create texture"); - return nullptr; - } +static GstCaps * +gst_d3d12_overlay_compositor_transform_caps (GstBaseTransform * trans, + GstPadDirection direction, GstCaps * caps, GstCaps * filter) +{ + GstCaps *result, *tmp; - UINT64 size; - device->GetCopyableFootprints (&desc, 0, 1, 0, &layout, nullptr, nullptr, - &size); - - heap_prop = CD3DX12_HEAP_PROPERTIES (D3D12_HEAP_TYPE_UPLOAD); - desc = CD3DX12_RESOURCE_DESC::Buffer (size); - hr = device->CreateCommittedResource (&heap_prop, heap_flags, - &desc, D3D12_RESOURCE_STATE_GENERIC_READ, nullptr, - IID_PPV_ARGS (&staging)); - if (!gst_d3d12_result (hr, self->device)) { - GST_ERROR_OBJECT (self, "Couldn't create upload buffer"); - return nullptr; - } + GST_DEBUG_OBJECT (trans, + "Transforming caps %" GST_PTR_FORMAT " in direction 
%s", caps, + (direction == GST_PAD_SINK) ? "sink" : "src"); - guint8 *map_data; - hr = staging->Map (0, nullptr, (void **) &map_data); - if (!gst_d3d12_result (hr, self->device)) { - GST_ERROR_OBJECT (self, "Couldn't map staging"); - return nullptr; - } + if (direction == GST_PAD_SINK) { + tmp = remove_feature (caps); + tmp = gst_caps_merge (tmp, gst_caps_ref (caps)); + } else { + tmp = add_feature (caps); + tmp = gst_caps_merge (gst_caps_ref (caps), tmp); + } - guint8 *data; - gint stride; - GstMapInfo info; - if (!gst_video_meta_map (vmeta, - 0, &info, (gpointer *) & data, &stride, GST_MAP_READ)) { - GST_ERROR_OBJECT (self, "Failed to map"); - return nullptr; - } + if (filter) { + result = gst_caps_intersect_full (filter, tmp, GST_CAPS_INTERSECT_FIRST); + gst_caps_unref (tmp); + } else { + result = tmp; + } - if (layout.Footprint.RowPitch == (UINT) stride) { - memcpy (map_data, data, stride * layout.Footprint.Height); - } else { - guint width_in_bytes = 4 * layout.Footprint.Width; - for (UINT i = 0; i < layout.Footprint.Height; i++) { - memcpy (map_data, data, width_in_bytes); - map_data += layout.Footprint.RowPitch; - data += stride; - } - } + GST_DEBUG_OBJECT (trans, "returning caps: %" GST_PTR_FORMAT, result); - staging->Unmap (0, nullptr); - gst_video_meta_unmap (vmeta, 0, &info); - } + return result; +} - /* bottom left */ - gst_util_fraction_to_double (x, GST_VIDEO_INFO_WIDTH (&priv->info), &val); - x1 = (val * 2.0f) - 1.0f; - - gst_util_fraction_to_double (y + height, - GST_VIDEO_INFO_HEIGHT (&priv->info), &val); - y1 = (val * -2.0f) + 1.0f; - - /* top right */ - gst_util_fraction_to_double (x + width, - GST_VIDEO_INFO_WIDTH (&priv->info), &val); - x2 = (val * 2.0f) - 1.0f; - - gst_util_fraction_to_double (y, GST_VIDEO_INFO_HEIGHT (&priv->info), &val); - y2 = (val * -2.0f) + 1.0f; - - /* bottom left */ - vertex_data[0].position.x = x1; - vertex_data[0].position.y = y1; - vertex_data[0].position.z = 0.0f; - vertex_data[0].texture.u = 0.0f; - vertex_data[0].texture.v 
= 1.0f; - - /* top left */ - vertex_data[1].position.x = x1; - vertex_data[1].position.y = y2; - vertex_data[1].position.z = 0.0f; - vertex_data[1].texture.u = 0.0f; - vertex_data[1].texture.v = 0.0f; - - /* top right */ - vertex_data[2].position.x = x2; - vertex_data[2].position.y = y2; - vertex_data[2].position.z = 0.0f; - vertex_data[2].texture.u = 1.0f; - vertex_data[2].texture.v = 0.0f; - - /* bottom right */ - vertex_data[3].position.x = x2; - vertex_data[3].position.y = y1; - vertex_data[3].position.z = 0.0f; - vertex_data[3].texture.u = 1.0f; - vertex_data[3].texture.v = 1.0f; - - ComPtr < ID3D12Resource > vertex_buf; - D3D12_HEAP_PROPERTIES heap_prop = - CD3DX12_HEAP_PROPERTIES (D3D12_HEAP_TYPE_UPLOAD); - D3D12_RESOURCE_DESC desc = - CD3DX12_RESOURCE_DESC::Buffer (sizeof (VertexData) * 4); - auto hr = device->CreateCommittedResource (&heap_prop, heap_flags, - &desc, D3D12_RESOURCE_STATE_GENERIC_READ, nullptr, - IID_PPV_ARGS (&vertex_buf)); - if (!gst_d3d12_result (hr, self->device)) { - GST_ERROR_OBJECT (self, "Couldn't create vertex buffer"); - return nullptr; +static GstCaps * +gst_d3d12_overlay_compositor_fixate_caps (GstBaseTransform * trans, + GstPadDirection direction, GstCaps * caps, GstCaps * othercaps) +{ + GstCaps *overlay_caps = nullptr; + auto caps_size = gst_caps_get_size (othercaps); + GstCaps *ret; + + GST_DEBUG_OBJECT (trans, "Fixate caps in direction %s, caps %" + GST_PTR_FORMAT ", other caps %" GST_PTR_FORMAT, + (direction == GST_PAD_SINK) ? 
"sink" : "src", caps, othercaps); + + /* Prefer overlaycomposition caps */ + for (guint i = 0; i < caps_size; i++) { + auto f = gst_caps_get_features (othercaps, i); + + if (f && !gst_caps_features_is_any (f) && + gst_caps_features_contains (f, + GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION)) { + auto s = gst_caps_get_structure (othercaps, i); + overlay_caps = gst_caps_new_full (gst_structure_copy (s), nullptr); + gst_caps_set_features_simple (overlay_caps, gst_caps_features_copy (f)); + break; + } } - guint8 *map_data; - hr = vertex_buf->Map (0, nullptr, (void **) &map_data); - if (!gst_d3d12_result (hr, self->device)) { - GST_ERROR_OBJECT (self, "Couldn't map vertex buffer"); - return nullptr; + if (overlay_caps) { + gst_caps_unref (othercaps); + ret = gst_caps_fixate (overlay_caps); + } else { + ret = gst_caps_fixate (othercaps); } - memcpy (map_data, vertex_data, sizeof (VertexData) * 4); - vertex_buf->Unmap (0, nullptr); - - GstD3D12DescHeap *srv_heap; - if (!gst_d3d12_desc_heap_pool_acquire (priv->srv_heap_pool, &srv_heap)) { - GST_ERROR_OBJECT (self, "Couldn't acquire command allocator"); - return nullptr; - } + GST_DEBUG_OBJECT (trans, "Fixated caps %" GST_PTR_FORMAT, ret); - auto srv_heap_handle = gst_d3d12_desc_heap_get_handle (srv_heap); - D3D12_SHADER_RESOURCE_VIEW_DESC srv_desc = { }; - srv_desc.ViewDimension = D3D12_SRV_DIMENSION_TEXTURE2D; - srv_desc.Shader4ComponentMapping = D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING; - srv_desc.Texture2D.MipLevels = 1; - srv_desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM; - - device->CreateShaderResourceView (texture.Get (), &srv_desc, - GetCPUDescriptorHandleForHeapStart (srv_heap_handle)); - - auto rect = new GstD3D12OverlayRect (); - gst_mini_object_init (rect, 0, gst_d3d12_overlay_rect_get_type (), - nullptr, nullptr, - (GstMiniObjectFreeFunction) gst_d3d12_overlay_rect_free); - - rect->overlay_rect = gst_video_overlay_rectangle_ref (overlay_rect); - rect->texture = texture; - rect->staging = staging; - 
rect->vertex_buf = vertex_buf; - rect->vbv.BufferLocation = vertex_buf->GetGPUVirtualAddress (); - rect->vbv.SizeInBytes = sizeof (VertexData) * 4; - rect->vbv.StrideInBytes = sizeof (VertexData); - rect->layout = layout; - rect->srv_heap = srv_heap; - rect->premul_alpha = premul_alpha; - if (is_d3d12) - rect->need_upload = FALSE; - - return rect; + return ret; } static gboolean -gst_d3d12_overlay_compositor_setup_shader (GstD3D12OverlayCompositor * self) +gst_d3d12_overlay_compositor_set_info (GstD3D12BaseFilter * filter, + GstD3D12Device * device, GstCaps * incaps, GstVideoInfo * in_info, + GstCaps * outcaps, GstVideoInfo * out_info) { + auto self = GST_D3D12_OVERLAY_COMPOSITOR (filter); auto priv = self->priv; - GstVideoInfo *info = &priv->info; - const WORD indices[6] = { 0, 1, 2, 3, 0, 2 }; - const D3D12_ROOT_SIGNATURE_FLAGS rs_flags = - D3D12_ROOT_SIGNATURE_FLAG_ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT | - D3D12_ROOT_SIGNATURE_FLAG_DENY_HULL_SHADER_ROOT_ACCESS | - D3D12_ROOT_SIGNATURE_FLAG_DENY_DOMAIN_SHADER_ROOT_ACCESS | - D3D12_ROOT_SIGNATURE_FLAG_DENY_GEOMETRY_SHADER_ROOT_ACCESS; - const D3D12_STATIC_SAMPLER_DESC static_sampler_desc = { - D3D12_FILTER_MIN_MAG_LINEAR_MIP_POINT, - D3D12_TEXTURE_ADDRESS_MODE_CLAMP, - D3D12_TEXTURE_ADDRESS_MODE_CLAMP, - D3D12_TEXTURE_ADDRESS_MODE_CLAMP, - 0, - 1, - D3D12_COMPARISON_FUNC_ALWAYS, - D3D12_STATIC_BORDER_COLOR_OPAQUE_BLACK, - 0, - D3D12_FLOAT32_MAX, - 0, - 0, - D3D12_SHADER_VISIBILITY_PIXEL - }; - - CD3DX12_ROOT_PARAMETER param; - D3D12_DESCRIPTOR_RANGE range; - std::vector < D3D12_ROOT_PARAMETER > param_list; - - range = CD3DX12_DESCRIPTOR_RANGE (D3D12_DESCRIPTOR_RANGE_TYPE_SRV, 1, 0); - param.InitAsDescriptorTable (1, &range, D3D12_SHADER_VISIBILITY_PIXEL); - param_list.push_back (param); - - D3D12_VERSIONED_ROOT_SIGNATURE_DESC rs_desc = { }; - CD3DX12_VERSIONED_ROOT_SIGNATURE_DESC::Init_1_0 (rs_desc, - param_list.size (), param_list.data (), - 1, &static_sampler_desc, rs_flags); - - ComPtr < ID3DBlob > rs_blob; - ComPtr 
< ID3DBlob > error_blob; - auto hr = D3DX12SerializeVersionedRootSignature (&rs_desc, - D3D_ROOT_SIGNATURE_VERSION_1_1, &rs_blob, &error_blob); - if (!gst_d3d12_result (hr, self->device)) { - const gchar *error_msg = nullptr; - if (error_blob) - error_msg = (const gchar *) error_blob->GetBufferPointer (); - - GST_ERROR_OBJECT (self, "Couldn't serialize root signature, error: %s", - GST_STR_NULL (error_msg)); - return FALSE; - } - GstD3D12Format device_format; - gst_d3d12_device_get_format (self->device, GST_VIDEO_INFO_FORMAT (info), - &device_format); + priv->ctx = nullptr; + priv->blend_mode = BLEND_MODE_PASSTHROUGH; - GstD3DShaderByteCode vs_code; - GstD3DShaderByteCode ps_sample_code; - GstD3DShaderByteCode ps_sample_premul_code; - if (!gst_d3d_plugin_shader_get_vs_blob (GST_D3D_PLUGIN_VS_COORD, - GST_D3D_SM_5_0, &vs_code)) { - GST_ERROR_OBJECT (self, "Couldn't get vs bytecode"); - return FALSE; - } + auto features = gst_caps_get_features (outcaps, 0); + if (gst_caps_features_contains (features, + GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION)) { + /* Let downstream blend */ + priv->blend_mode = BLEND_MODE_PASSTHROUGH; + } else { + auto format = GST_VIDEO_INFO_FORMAT (in_info); + GstVideoFormat blend_format = GST_VIDEO_FORMAT_UNKNOWN; + GstVideoColorRange range = GST_VIDEO_COLOR_RANGE_0_255; + + switch (format) { + case GST_VIDEO_FORMAT_RGBA: + case GST_VIDEO_FORMAT_BGRA: + case GST_VIDEO_FORMAT_RGBA64_LE: + case GST_VIDEO_FORMAT_VUYA: + priv->blend_mode = BLEND_MODE_BLEND; + range = in_info->colorimetry.range; + blend_format = format; + break; + default: + priv->blend_mode = BLEND_MODE_CONVERT_BLEND; + if (GST_VIDEO_INFO_IS_YUV (in_info)) { + if (GST_VIDEO_INFO_COMP_DEPTH (in_info, 0) <= 8) + blend_format = GST_VIDEO_FORMAT_VUYA; + else + blend_format = GST_VIDEO_FORMAT_RGBA64_LE; + } else { + if (GST_VIDEO_INFO_COMP_DEPTH (in_info, 0) <= 8) + blend_format = GST_VIDEO_FORMAT_RGBA; + else + blend_format = GST_VIDEO_FORMAT_RGBA64_LE; + } + break; + } - if 
(!gst_d3d_plugin_shader_get_ps_blob (GST_D3D_PLUGIN_PS_SAMPLE, - GST_D3D_SM_5_0, &ps_sample_code)) { - GST_ERROR_OBJECT (self, "Couldn't get ps bytecode"); - return FALSE; - } + auto ctx = std::make_shared < OverlayBlendCtx > (device); + ctx->origin_info = *in_info; + + gst_video_info_set_format (&ctx->blend_info, blend_format, + in_info->width, in_info->height); + ctx->blend_info.colorimetry.range = range; + + ctx->blender = gst_d3d12_overlay_blender_new (device, &ctx->blend_info); + if (priv->blend_mode == BLEND_MODE_CONVERT_BLEND) { + ctx->pre_conv = gst_d3d12_converter_new (device, + nullptr, &ctx->origin_info, &ctx->blend_info, nullptr, nullptr, + nullptr); + ctx->post_conv = gst_d3d12_converter_new (device, + nullptr, &ctx->blend_info, &ctx->origin_info, nullptr, nullptr, + nullptr); + } - if (!gst_d3d_plugin_shader_get_ps_blob (GST_D3D_PLUGIN_PS_SAMPLE_PREMULT, - GST_D3D_SM_5_0, &ps_sample_premul_code)) { - GST_ERROR_OBJECT (self, "Couldn't get ps bytecode"); - return FALSE; - } + auto blend_caps = gst_video_info_to_caps (&ctx->blend_info); - auto device = gst_d3d12_device_get_device_handle (self->device); - ComPtr < ID3D12RootSignature > rs; - device->CreateRootSignature (0, rs_blob->GetBufferPointer (), - rs_blob->GetBufferSize (), IID_PPV_ARGS (&rs)); - - priv->input_desc[0].SemanticName = "POSITION"; - priv->input_desc[0].SemanticIndex = 0; - priv->input_desc[0].Format = DXGI_FORMAT_R32G32B32_FLOAT; - priv->input_desc[0].InputSlot = 0; - priv->input_desc[0].AlignedByteOffset = D3D12_APPEND_ALIGNED_ELEMENT; - priv->input_desc[0].InputSlotClass = - D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA; - priv->input_desc[0].InstanceDataStepRate = 0; - - priv->input_desc[1].SemanticName = "TEXCOORD"; - priv->input_desc[1].SemanticIndex = 0; - priv->input_desc[1].Format = DXGI_FORMAT_R32G32_FLOAT; - priv->input_desc[1].InputSlot = 0; - priv->input_desc[1].AlignedByteOffset = D3D12_APPEND_ALIGNED_ELEMENT; - priv->input_desc[1].InputSlotClass = - D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA; - 
priv->input_desc[1].InstanceDataStepRate = 0; - - auto & pso_desc = priv->pso_desc; - pso_desc.pRootSignature = rs.Get (); - pso_desc.VS.BytecodeLength = vs_code.byte_code_len; - pso_desc.VS.pShaderBytecode = vs_code.byte_code; - pso_desc.PS.BytecodeLength = ps_sample_code.byte_code_len; - pso_desc.PS.pShaderBytecode = ps_sample_code.byte_code; - pso_desc.BlendState = CD3DX12_BLEND_DESC (D3D12_DEFAULT); - pso_desc.BlendState.RenderTarget[0].BlendEnable = TRUE; - pso_desc.BlendState.RenderTarget[0].LogicOpEnable = FALSE; - pso_desc.BlendState.RenderTarget[0].SrcBlend = D3D12_BLEND_SRC_ALPHA; - pso_desc.BlendState.RenderTarget[0].DestBlend = D3D12_BLEND_INV_SRC_ALPHA; - pso_desc.BlendState.RenderTarget[0].BlendOp = D3D12_BLEND_OP_ADD; - pso_desc.BlendState.RenderTarget[0].SrcBlendAlpha = D3D12_BLEND_ONE; - pso_desc.BlendState.RenderTarget[0].DestBlendAlpha = - D3D12_BLEND_INV_SRC_ALPHA; - pso_desc.BlendState.RenderTarget[0].BlendOpAlpha = D3D12_BLEND_OP_ADD; - pso_desc.BlendState.RenderTarget[0].LogicOp = D3D12_LOGIC_OP_NOOP; - pso_desc.BlendState.RenderTarget[0].RenderTargetWriteMask = - D3D12_COLOR_WRITE_ENABLE_ALL; - pso_desc.SampleMask = UINT_MAX; - pso_desc.RasterizerState = CD3DX12_RASTERIZER_DESC (D3D12_DEFAULT); - pso_desc.RasterizerState.CullMode = D3D12_CULL_MODE_NONE; - pso_desc.DepthStencilState.DepthEnable = FALSE; - pso_desc.DepthStencilState.StencilEnable = FALSE; - pso_desc.InputLayout.pInputElementDescs = priv->input_desc; - pso_desc.InputLayout.NumElements = 2; - pso_desc.PrimitiveTopologyType = D3D12_PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE; - pso_desc.NumRenderTargets = 1; - pso_desc.RTVFormats[0] = device_format.resource_format[0]; - pso_desc.SampleDesc.Count = 1; - - ComPtr < ID3D12PipelineState > pso; - hr = device->CreateGraphicsPipelineState (&pso_desc, IID_PPV_ARGS (&pso)); - if (!gst_d3d12_result (hr, self->device)) { - GST_ERROR_OBJECT (self, "Couldn't create pso"); - return FALSE; - } + ctx->blend_pool = gst_d3d12_buffer_pool_new (device); + auto config = 
gst_buffer_pool_get_config (ctx->blend_pool); + gst_buffer_pool_config_set_params (config, blend_caps, 0, 0, 0); + gst_caps_unref (blend_caps); - ComPtr < ID3D12PipelineState > pso_premul; - auto & pso_premul_desc = priv->pso_premul_desc; - pso_premul_desc = priv->pso_desc; - pso_premul_desc.PS.BytecodeLength = ps_sample_premul_code.byte_code_len; - pso_premul_desc.PS.pShaderBytecode = ps_sample_premul_code.byte_code; - hr = device->CreateGraphicsPipelineState (&pso_premul_desc, - IID_PPV_ARGS (&pso_premul)); - if (!gst_d3d12_result (hr, self->device)) { - GST_ERROR_OBJECT (self, "Couldn't create pso"); - return FALSE; - } + if (!gst_buffer_pool_set_config (ctx->blend_pool, config)) { + GST_ERROR_OBJECT (self, "Couldn't set config"); + return FALSE; + } - D3D12_HEAP_PROPERTIES heap_prop = - CD3DX12_HEAP_PROPERTIES (D3D12_HEAP_TYPE_UPLOAD); - D3D12_RESOURCE_DESC buffer_desc = - CD3DX12_RESOURCE_DESC::Buffer (sizeof (indices)); - D3D12_HEAP_FLAGS heap_flags = D3D12_HEAP_FLAG_NONE; - if (gst_d3d12_device_non_zeroed_supported (self->device)) - heap_flags = D3D12_HEAP_FLAG_CREATE_NOT_ZEROED; - - ComPtr < ID3D12Resource > index_buf; - hr = device->CreateCommittedResource (&heap_prop, heap_flags, - &buffer_desc, D3D12_RESOURCE_STATE_GENERIC_READ, nullptr, - IID_PPV_ARGS (&index_buf)); - if (!gst_d3d12_result (hr, self->device)) { - GST_ERROR_OBJECT (self, "Couldn't create index buffer"); - return FALSE; - } + if (!gst_buffer_pool_set_active (ctx->blend_pool, TRUE)) { + GST_ERROR_OBJECT (self, "Couldn't set config"); + return FALSE; + } - void *data; - hr = index_buf->Map (0, nullptr, &data); - if (!gst_d3d12_result (hr, self->device)) { - GST_ERROR_OBJECT (self, "Couldn't map index buffer"); - return FALSE; + priv->ctx = ctx; } - memcpy (data, indices, sizeof (indices)); - index_buf->Unmap (0, nullptr); - - D3D12_DESCRIPTOR_HEAP_DESC heap_desc = { }; - heap_desc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV; - heap_desc.NumDescriptors = 1; - heap_desc.Flags = 
D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE; - - priv->rs = rs; - priv->pso = pso; - priv->pso_premul = pso_premul; - priv->idv.BufferLocation = index_buf->GetGPUVirtualAddress (); - priv->idv.SizeInBytes = sizeof (indices); - priv->idv.Format = DXGI_FORMAT_R16_UINT; - priv->index_buf = index_buf; - priv->srv_heap_pool = gst_d3d12_desc_heap_pool_new (device, &heap_desc); - priv->ca_pool = gst_d3d12_cmd_alloc_pool_new (device, - D3D12_COMMAND_LIST_TYPE_DIRECT); - - priv->viewport.TopLeftX = 0; - priv->viewport.TopLeftY = 0; - priv->viewport.Width = GST_VIDEO_INFO_WIDTH (info); - priv->viewport.Height = GST_VIDEO_INFO_HEIGHT (info); - priv->viewport.MinDepth = 0.0f; - priv->viewport.MaxDepth = 1.0f; - - priv->scissor_rect.left = 0; - priv->scissor_rect.top = 0; - priv->scissor_rect.right = GST_VIDEO_INFO_WIDTH (info); - priv->scissor_rect.bottom = GST_VIDEO_INFO_HEIGHT (info); + GST_DEBUG_OBJECT (self, "Selected blend mode: %d", priv->blend_mode); return TRUE; } -GstD3D12OverlayCompositor * -gst_d3d12_overlay_compositor_new (GstD3D12Device * device, - const GstVideoInfo * info) +static gboolean +foreach_meta (GstBuffer * buffer, GstMeta ** meta, gpointer user_data) { - GstD3D12OverlayCompositor *self = nullptr; - GstD3D12OverlayCompositorPrivate *priv; - - g_return_val_if_fail (GST_IS_D3D12_DEVICE (device), nullptr); - g_return_val_if_fail (info != nullptr, nullptr); - - self = (GstD3D12OverlayCompositor *) - g_object_new (GST_TYPE_D3D12_OVERLAY_COMPOSITOR, nullptr); - gst_object_ref_sink (self); - priv = self->priv; + if ((*meta)->info->api == GST_VIDEO_OVERLAY_COMPOSITION_META_API_TYPE) + *meta = nullptr; - self->device = (GstD3D12Device *) gst_object_ref (device); - priv->info = *info; - - if (!gst_d3d12_overlay_compositor_setup_shader (self)) { - gst_object_unref (self); - return nullptr; - } - - return self; + return TRUE; } static gboolean -gst_d3d12_overlay_compositor_foreach_meta (GstBuffer * buffer, GstMeta ** meta, - GstD3D12OverlayCompositor * self) 
+buffer_has_overlay_rect (GstBuffer * buf) { - auto priv = self->priv; - - if ((*meta)->info->api != GST_VIDEO_OVERLAY_COMPOSITION_META_API_TYPE) - return TRUE; - - auto cmeta = (GstVideoOverlayCompositionMeta *) (*meta); - if (!cmeta->overlay) - return TRUE; - - auto num_rect = gst_video_overlay_composition_n_rectangles (cmeta->overlay); - for (guint i = 0; i < num_rect; i++) { - auto rect = gst_video_overlay_composition_get_rectangle (cmeta->overlay, i); - priv->rects_to_upload.push_back (rect); + gboolean has_rect = FALSE; + gpointer state = nullptr; + GstMeta *meta; + while ((meta = gst_buffer_iterate_meta_filtered (buf, &state, + GST_VIDEO_OVERLAY_COMPOSITION_META_API_TYPE)) != nullptr) { + auto ometa = (GstVideoOverlayCompositionMeta *) meta; + if (gst_video_overlay_composition_n_rectangles (ometa->overlay) > 0) { + has_rect = TRUE; + break; + } } - return TRUE; + return has_rect; } -gboolean -gst_d3d12_overlay_compositor_upload (GstD3D12OverlayCompositor * compositor, - GstBuffer * buf) +static GstFlowReturn +gst_d3d12_overlay_compositor_generate_output (GstBaseTransform * trans, + GstBuffer ** buffer) { - g_return_val_if_fail (compositor != nullptr, FALSE); - g_return_val_if_fail (GST_IS_BUFFER (buf), FALSE); + auto self = GST_D3D12_OVERLAY_COMPOSITOR (trans); + auto priv = self->priv; - auto priv = compositor->priv; - priv->rects_to_upload.clear (); + if (!trans->queued_buf) + return GST_FLOW_OK; - gst_buffer_foreach_meta (buf, - (GstBufferForeachMetaFunc) gst_d3d12_overlay_compositor_foreach_meta, - compositor); + auto buf = trans->queued_buf; + trans->queued_buf = nullptr; - if (priv->rects_to_upload.empty ()) { - if (priv->overlays) - g_list_free_full (priv->overlays, (GDestroyNotify) gst_mini_object_unref); - priv->overlays = nullptr; - return TRUE; + auto has_rect = buffer_has_overlay_rect (buf); + if (priv->blend_mode == BLEND_MODE_PASSTHROUGH || !has_rect) { + *buffer = buf; + return GST_FLOW_OK; } - GST_LOG_OBJECT (compositor, "Found %" 
G_GSIZE_FORMAT - " overlay rectangles", priv->rects_to_upload.size ()); + auto & ctx = priv->ctx; + gst_d3d12_overlay_blender_upload (ctx->blender, buf); - for (size_t i = 0; i < priv->rects_to_upload.size (); i++) { - GList *iter; - bool found = false; - for (iter = priv->overlays; iter; iter = g_list_next (iter)) { - auto rect = (GstD3D12OverlayRect *) iter->data; - if (rect->overlay_rect == priv->rects_to_upload[i]) { - found = true; - break; - } - } + GstD3D12CmdAlloc *gst_ca; + if (!gst_d3d12_cmd_alloc_pool_acquire (ctx->ca_pool, &gst_ca)) { + GST_ERROR_OBJECT (self, "Couldn't acquire command allocator"); + gst_buffer_unref (buf); + return GST_FLOW_ERROR; + } - if (!found) { - auto new_rect = gst_d3d12_overlay_rect_new (compositor, - priv->rects_to_upload[i]); - if (new_rect) - priv->overlays = g_list_append (priv->overlays, new_rect); - } + auto ca = gst_d3d12_cmd_alloc_get_handle (gst_ca); + auto hr = ca->Reset (); + if (!gst_d3d12_result (hr, ctx->device)) { + GST_ERROR_OBJECT (self, "Couldn't reset command allocator"); + gst_d3d12_cmd_alloc_unref (gst_ca); + gst_buffer_unref (buf); + return GST_FLOW_ERROR; + } - /* Remove old overlay */ - GList *iter; - GList *next; - for (iter = priv->overlays; iter; iter = next) { - auto rect = (GstD3D12OverlayRect *) iter->data; - next = g_list_next (iter); - - if (std::find_if (priv->rects_to_upload.begin (), - priv->rects_to_upload.end (), [&] (const auto & overlay)->bool - { - return overlay == rect->overlay_rect;} - ) == priv->rects_to_upload.end ()) { - gst_mini_object_unref (rect); - priv->overlays = g_list_delete_link (priv->overlays, iter); + if (!ctx->cl) { + auto device = gst_d3d12_device_get_device_handle (ctx->device); + hr = device->CreateCommandList (0, D3D12_COMMAND_LIST_TYPE_DIRECT, + ca, nullptr, IID_PPV_ARGS (&ctx->cl)); + if (!gst_d3d12_result (hr, priv->ctx->device)) { + GST_ERROR_OBJECT (self, "Couldn't create command list"); + gst_d3d12_cmd_alloc_unref (gst_ca); + gst_buffer_unref (buf); + return 
GST_FLOW_ERROR; + } + } else { + hr = ctx->cl->Reset (ca, nullptr); + if (!gst_d3d12_result (hr, ctx->device)) { + GST_ERROR_OBJECT (self, "Couldn't reset command list"); + gst_d3d12_cmd_alloc_unref (gst_ca); + gst_buffer_unref (buf); + return GST_FLOW_ERROR; } } - return TRUE; -} + GstD3D12FenceData *fence_data; + gst_d3d12_fence_data_pool_acquire (priv->fence_data_pool, &fence_data); + gst_d3d12_fence_data_push (fence_data, FENCE_NOTIFY_MINI_OBJECT (gst_ca)); + + buf = gst_buffer_make_writable (buf); + if (priv->blend_mode == BLEND_MODE_BLEND) { + /* Ensure writable memory */ + GstD3D12Frame frame; + if (!gst_d3d12_frame_map (&frame, &priv->ctx->origin_info, buf, + GST_MAP_WRITE_D3D12, GST_D3D12_FRAME_MAP_FLAG_RTV)) { + GST_WARNING_OBJECT (self, "Couldn't map buffer"); + GstBuffer *fallback_buf = nullptr; + gst_buffer_pool_acquire_buffer (ctx->blend_pool, &fallback_buf, nullptr); + if (!fallback_buf) { + GST_ERROR_OBJECT (self, "Couldn't acquire fallback buffer"); + ctx->cl->Close (); + gst_d3d12_fence_data_unref (fence_data); + gst_buffer_unref (buf); + return GST_FLOW_ERROR; + } -gboolean -gst_d3d12_overlay_compositor_update_viewport (GstD3D12OverlayCompositor * - compositor, GstVideoRectangle * viewport) -{ - g_return_val_if_fail (GST_IS_D3D12_OVERLAY_COMPOSITOR (compositor), FALSE); - g_return_val_if_fail (viewport != nullptr, FALSE); + if (!gst_d3d12_buffer_copy_into (fallback_buf, buf, &ctx->origin_info)) { + GST_ERROR_OBJECT (self, "Couldn't copy to fallback buffer"); + ctx->cl->Close (); + gst_d3d12_fence_data_unref (fence_data); + gst_buffer_unref (buf); + gst_buffer_unref (fallback_buf); + return GST_FLOW_ERROR; + } - auto priv = compositor->priv; + gst_buffer_copy_into (fallback_buf, buf, GST_BUFFER_COPY_METADATA, 0, -1); + gst_buffer_unref (buf); + buf = fallback_buf; + } else { + gst_d3d12_frame_unmap (&frame); + } - priv->viewport.TopLeftX = viewport->x; - priv->viewport.TopLeftY = viewport->y; - priv->viewport.Width = viewport->w; - 
priv->viewport.Height = viewport->h; + gst_d3d12_overlay_blender_draw (ctx->blender, + buf, fence_data, ctx->cl.Get ()); + } else { + GstBuffer *blend_buf = nullptr; + GstBuffer *out_buf = nullptr; + + gst_buffer_pool_acquire_buffer (ctx->blend_pool, &blend_buf, nullptr); + if (!blend_buf) { + GST_ERROR_OBJECT (self, "Couldn't acquire blend buffer"); + ctx->cl->Close (); + gst_d3d12_fence_data_unref (fence_data); + gst_buffer_unref (buf); + return GST_FLOW_ERROR; + } - priv->scissor_rect.left = viewport->x; - priv->scissor_rect.top = viewport->y; - priv->scissor_rect.right = viewport->x + viewport->w; - priv->scissor_rect.bottom = viewport->y + viewport->h; + auto ret = + GST_BASE_TRANSFORM_CLASS (parent_class)->prepare_output_buffer (trans, + buf, &out_buf); + if (ret != GST_FLOW_OK) { + ctx->cl->Close (); + gst_d3d12_fence_data_unref (fence_data); + gst_buffer_unref (buf); + gst_buffer_unref (blend_buf); + return ret; + } - return TRUE; -} + gst_d3d12_converter_convert_buffer (ctx->pre_conv, buf, blend_buf, + fence_data, ctx->cl.Get (), TRUE); + gst_d3d12_overlay_blender_draw (ctx->blender, + blend_buf, fence_data, ctx->cl.Get ()); -static gboolean -gst_d3d12_overlay_compositor_execute (GstD3D12OverlayCompositor * self, - GstBuffer * buf, GstD3D12FenceData * fence_data, - ID3D12GraphicsCommandList * cl) -{ - auto priv = self->priv; + auto dmem = (GstD3D12Memory *) gst_buffer_peek_memory (blend_buf, 0); + auto resource = gst_d3d12_memory_get_resource_handle (dmem); - auto mem = (GstD3D12Memory *) gst_buffer_peek_memory (buf, 0); - auto rtv_heap = gst_d3d12_memory_get_render_target_view_heap (mem); - if (!rtv_heap) { - GST_ERROR_OBJECT (self, "Couldn't get rtv heap"); - return FALSE; - } + auto barrier = CD3DX12_RESOURCE_BARRIER::Transition (resource, + D3D12_RESOURCE_STATE_RENDER_TARGET, + D3D12_RESOURCE_STATE_NON_PIXEL_SHADER_RESOURCE | + D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE); + ctx->cl->ResourceBarrier (1, &barrier); - GList *iter; - ComPtr < 
ID3D12PipelineState > prev_pso; - for (iter = priv->overlays; iter; iter = g_list_next (iter)) { - auto rect = (GstD3D12OverlayRect *) iter->data; - if (rect->need_upload) { - D3D12_TEXTURE_COPY_LOCATION src = - CD3DX12_TEXTURE_COPY_LOCATION (rect->staging.Get (), rect->layout); - D3D12_TEXTURE_COPY_LOCATION dst = - CD3DX12_TEXTURE_COPY_LOCATION (rect->texture.Get ()); - GST_LOG_OBJECT (self, "First render, uploading texture"); - cl->CopyTextureRegion (&dst, 0, 0, 0, &src, nullptr); - D3D12_RESOURCE_BARRIER barrier = - CD3DX12_RESOURCE_BARRIER::Transition (rect->texture.Get (), - D3D12_RESOURCE_STATE_COPY_DEST, - D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE); - cl->ResourceBarrier (1, &barrier); - rect->need_upload = FALSE; - } + gst_d3d12_converter_convert_buffer (ctx->post_conv, blend_buf, out_buf, + fence_data, ctx->cl.Get (), FALSE); - cl->SetGraphicsRootSignature (priv->rs.Get ()); - - ComPtr < ID3D12PipelineState > pso; - if (rect->premul_alpha) - pso = priv->pso; - else - pso = priv->pso_premul; - - if (!prev_pso) { - cl->SetPipelineState (pso.Get ()); - cl->IASetIndexBuffer (&priv->idv); - cl->IASetPrimitiveTopology (D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST); - cl->RSSetViewports (1, &priv->viewport); - cl->RSSetScissorRects (1, &priv->scissor_rect); - D3D12_CPU_DESCRIPTOR_HANDLE rtv_heaps[] = { - GetCPUDescriptorHandleForHeapStart (rtv_heap) - }; - cl->OMSetRenderTargets (1, rtv_heaps, FALSE, nullptr); - } else if (pso != prev_pso) { - cl->SetPipelineState (pso.Get ()); - } + /* fence data will hold all source buffers */ + gst_buffer_unref (buf); + gst_buffer_unref (blend_buf); - auto srv_heap = gst_d3d12_desc_heap_get_handle (rect->srv_heap); - ID3D12DescriptorHeap *heaps[] = { srv_heap }; - cl->SetDescriptorHeaps (1, heaps); - cl->SetGraphicsRootDescriptorTable (0, - GetGPUDescriptorHandleForHeapStart (srv_heap)); - cl->IASetVertexBuffers (0, 1, &rect->vbv); + buf = out_buf; + } - cl->DrawIndexedInstanced (6, 1, 0, 0, 0); + hr = ctx->cl->Close (); + if 
(!gst_d3d12_result (hr, ctx->device)) { + GST_ERROR_OBJECT (self, "Couldn't close command list"); + gst_d3d12_fence_data_unref (fence_data); + gst_buffer_unref (buf); + return GST_FLOW_ERROR; + } - gst_d3d12_fence_data_push (fence_data, - FENCE_NOTIFY_MINI_OBJECT (gst_mini_object_ref (rect))); + ID3D12CommandList *cmd_list[] = { priv->ctx->cl.Get () }; - prev_pso = nullptr; - prev_pso = pso; + hr = gst_d3d12_device_execute_command_lists (ctx->device, + D3D12_COMMAND_LIST_TYPE_DIRECT, 1, cmd_list, &ctx->fence_val); + if (!gst_d3d12_result (hr, ctx->device)) { + GST_ERROR_OBJECT (self, "Couldn't execute command list"); + gst_d3d12_fence_data_unref (fence_data); + gst_buffer_unref (buf); + return GST_FLOW_ERROR; } - priv->pso->AddRef (); - gst_d3d12_fence_data_push (fence_data, FENCE_NOTIFY_COM (priv->pso.Get ())); + auto fence = gst_d3d12_device_get_fence_handle (ctx->device, + D3D12_COMMAND_LIST_TYPE_DIRECT); + gst_d3d12_buffer_set_fence (buf, fence, priv->ctx->fence_val, FALSE); + gst_d3d12_device_set_fence_notify (ctx->device, + D3D12_COMMAND_LIST_TYPE_DIRECT, priv->ctx->fence_val, + FENCE_NOTIFY_MINI_OBJECT (fence_data)); + + gst_buffer_foreach_meta (buf, foreach_meta, nullptr); - priv->pso_premul->AddRef (); - gst_d3d12_fence_data_push (fence_data, - FENCE_NOTIFY_COM (priv->pso_premul.Get ())); + *buffer = buf; - return TRUE; + return GST_FLOW_OK; } -gboolean -gst_d3d12_overlay_compositor_draw (GstD3D12OverlayCompositor * compositor, - GstBuffer * buf, GstD3D12FenceData * fence_data, - ID3D12GraphicsCommandList * command_list) +static GstFlowReturn +gst_d3d12_overlay_compositor_transform (GstBaseTransform * trans, + GstBuffer * inbuf, GstBuffer * outbuf) { - g_return_val_if_fail (compositor != nullptr, FALSE); - g_return_val_if_fail (GST_IS_BUFFER (buf), FALSE); - g_return_val_if_fail (fence_data, FALSE); - g_return_val_if_fail (command_list, FALSE); - - auto priv = compositor->priv; - - if (!priv->overlays) - return TRUE; - - auto mem = (GstD3D12Memory *) 
gst_buffer_peek_memory (buf, 0); - auto resource = gst_d3d12_memory_get_resource_handle (mem); - auto desc = GetDesc (resource); - if (desc.SampleDesc.Count != priv->sample_desc.Count || - desc.SampleDesc.Quality != priv->sample_desc.Quality) { - auto device = gst_d3d12_device_get_device_handle (compositor->device); - - auto pso_desc = priv->pso_desc; - pso_desc.SampleDesc = desc.SampleDesc; - ComPtr < ID3D12PipelineState > pso; - auto hr = device->CreateGraphicsPipelineState (&pso_desc, - IID_PPV_ARGS (&pso)); - if (!gst_d3d12_result (hr, compositor->device)) { - GST_ERROR_OBJECT (compositor, "Couldn't create pso"); - return FALSE; - } - - ComPtr < ID3D12PipelineState > pso_premul; - auto pso_premul_desc = priv->pso_premul_desc; - pso_premul_desc.SampleDesc = desc.SampleDesc; - hr = device->CreateGraphicsPipelineState (&pso_premul_desc, - IID_PPV_ARGS (&pso_premul)); - if (!gst_d3d12_result (hr, compositor->device)) { - GST_ERROR_OBJECT (compositor, "Couldn't create pso"); - return FALSE; - } - - priv->pso = nullptr; - priv->pso_premul = nullptr; - - priv->pso = pso; - priv->pso_premul = pso_premul; - priv->sample_desc = desc.SampleDesc; - } + g_assert_not_reached (); - return gst_d3d12_overlay_compositor_execute (compositor, - buf, fence_data, command_list); + return GST_FLOW_ERROR; }
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/d3d12/gstd3d12overlaycompositor.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3d12/gstd3d12overlaycompositor.h
Changed
@@ -1,5 +1,5 @@ /* GStreamer - * Copyright (C) 2023 Seungha Yang <seungha@centricular.com> + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> * * This library is free software; you can redistribute it and/or * modify it under the terms of the GNU Library General Public @@ -14,36 +14,19 @@ * You should have received a copy of the GNU Library General Public * License along with this library; if not, write to the * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, - * Boston, MA 02120-1301, USA. + * Boston, MA 02110-1301, USA. */ #pragma once #include <gst/gst.h> -#include <gst/video/video.h> -#include <gst/d3d12/gstd3d12.h> +#include "gstd3d12basefilter.h" G_BEGIN_DECLS #define GST_TYPE_D3D12_OVERLAY_COMPOSITOR (gst_d3d12_overlay_compositor_get_type()) G_DECLARE_FINAL_TYPE (GstD3D12OverlayCompositor, gst_d3d12_overlay_compositor, - GST, D3D12_OVERLAY_COMPOSITOR, GstObject) - -GType gst_d3d12_overlay_rect_get_type (void); - -GstD3D12OverlayCompositor * gst_d3d12_overlay_compositor_new (GstD3D12Device * device, - const GstVideoInfo * info); - -gboolean gst_d3d12_overlay_compositor_upload (GstD3D12OverlayCompositor * compositor, - GstBuffer * buf); - -gboolean gst_d3d12_overlay_compositor_update_viewport (GstD3D12OverlayCompositor * compositor, - GstVideoRectangle * viewport); - -gboolean gst_d3d12_overlay_compositor_draw (GstD3D12OverlayCompositor * compositor, - GstBuffer * buf, - GstD3D12FenceData * fence_data, - ID3D12GraphicsCommandList * command_list); + GST, D3D12_OVERLAY_COMPOSITOR, GstD3D12BaseFilter) G_END_DECLS
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3d12/gstd3d12remap.cpp
Added
@@ -0,0 +1,421 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +/** + * SECTION:element-d3d12remap + * @title: d3d12remap + * + * A Direct3D12-based UV coordinate remapping element + * + * Since: 1.28 + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gstd3d12remap.h" +#include "gstd3d12pluginutils.h" +#include <directx/d3dx12.h> +#include <mutex> +#include <memory> +#include <wrl.h> + +/* *INDENT-OFF* */ +using namespace Microsoft::WRL; +/* *INDENT-ON* */ + +GST_DEBUG_CATEGORY_STATIC (gst_d3d12_remap_debug); +#define GST_CAT_DEFAULT gst_d3d12_remap_debug + +static GstStaticPadTemplate sink_template = GST_STATIC_PAD_TEMPLATE ("sink", + GST_PAD_SINK, + GST_PAD_ALWAYS, + GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE_WITH_FEATURES + (GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY, GST_D3D12_ALL_FORMATS) "; " + GST_VIDEO_CAPS_MAKE_WITH_FEATURES + (GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY "," + GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, + GST_D3D12_ALL_FORMATS))); + +static GstStaticPadTemplate src_template = GST_STATIC_PAD_TEMPLATE ("src", + GST_PAD_SRC, + GST_PAD_ALWAYS, + GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE_WITH_FEATURES + (GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY, 
GST_D3D12_ALL_FORMATS) "; " + GST_VIDEO_CAPS_MAKE_WITH_FEATURES + (GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY "," + GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, + GST_D3D12_ALL_FORMATS))); + +enum +{ + PROP_0, + PROP_UV_REMAP, +}; + +/* *INDENT-OFF* */ +struct RemapContext +{ + ~RemapContext() + { + if (fence_val) { + gst_d3d12_device_fence_wait (device, + D3D12_COMMAND_LIST_TYPE_DIRECT, fence_val); + } + + gst_clear_object (&conv); + gst_clear_object (&ca_pool); + gst_clear_object (&device); + } + + ComPtr<ID3D12GraphicsCommandList> cl; + ID3D12Fence *cq_fence; + GstD3D12CmdAllocPool *ca_pool = nullptr; + GstD3D12Device *device = nullptr; + GstD3D12CmdQueue *cq = nullptr; + guint64 fence_val = 0; + GstD3D12Converter *conv = nullptr; +}; + +struct GstD3D12RemapPrivate +{ + GstD3D12RemapPrivate () + { + fence_data_pool = gst_d3d12_fence_data_pool_new (); + } + + ~GstD3D12RemapPrivate () + { + gst_clear_object (&fence_data_pool); + } + + GstD3D12FenceDataPool *fence_data_pool; + + std::shared_ptr<RemapContext> ctx; + ComPtr<ID3D12Resource> uv_remap; + + std::mutex lock; +}; +/* *INDENT-ON* */ + +struct _GstD3D12Remap +{ + GstD3D12BaseFilter parent; + + GstD3D12RemapPrivate *priv; +}; + +static void gst_d3d12_remap_finalize (GObject * object); +static void gst_d3d12_remap_set_property (GObject * object, guint prop_id, + const GValue * value, GParamSpec * pspec); +static void gst_d3d12_remap_get_property (GObject * object, guint prop_id, + GValue * value, GParamSpec * pspec); +static gboolean gst_d3d12_remap_stop (GstBaseTransform * trans); +static gboolean gst_d3d12_remap_transform_meta (GstBaseTransform * trans, + GstBuffer * outbuf, GstMeta * meta, GstBuffer * inbuf); +static GstFlowReturn gst_d3d12_remap_transform (GstBaseTransform * trans, + GstBuffer * inbuf, GstBuffer * outbuf); +static gboolean gst_d3d12_remap_set_info (GstD3D12BaseFilter * filter, + GstD3D12Device * device, GstCaps * incaps, GstVideoInfo * in_info, + GstCaps * outcaps, GstVideoInfo * 
out_info); +static gboolean gst_d3d12_remap_propose_allocation (GstD3D12BaseFilter * + filter, GstD3D12Device * device, GstQuery * decide_query, GstQuery * query); + +#define gst_d3d12_remap_parent_class parent_class +G_DEFINE_TYPE (GstD3D12Remap, gst_d3d12_remap, GST_TYPE_D3D12_BASE_FILTER); + +static void +gst_d3d12_remap_class_init (GstD3D12RemapClass * klass) +{ + auto object_class = G_OBJECT_CLASS (klass); + auto element_class = GST_ELEMENT_CLASS (klass); + auto trans_class = GST_BASE_TRANSFORM_CLASS (klass); + auto filter_class = GST_D3D12_BASE_FILTER_CLASS (klass); + + object_class->set_property = gst_d3d12_remap_set_property; + object_class->get_property = gst_d3d12_remap_get_property; + object_class->finalize = gst_d3d12_remap_finalize; + + g_object_class_install_property (object_class, PROP_UV_REMAP, + g_param_spec_pointer ("uv-remap", "UV Remap", + "ID3D12Resource for UV coordinates remapping. Valid formats are " + "R8G8B8A8_UNORM and R16G16B16A16_UNORM. R -> U, " + "G -> V, B -> unused, and A -> mask where A >= 0.5 " + "applies remapping, otherwise fill background color", + (GParamFlags) (GST_PARAM_MUTABLE_PLAYING | + G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + gst_element_class_add_static_pad_template (element_class, &sink_template); + gst_element_class_add_static_pad_template (element_class, &src_template); + + gst_element_class_set_static_metadata (element_class, + "Direct3D12 Remap", "Filter/Converter/Video/Hardware", + "Remap pixels", "Seungha Yang <seungha@centricular.com>"); + + trans_class->passthrough_on_same_caps = FALSE; + + trans_class->stop = GST_DEBUG_FUNCPTR (gst_d3d12_remap_stop); + trans_class->transform_meta = + GST_DEBUG_FUNCPTR (gst_d3d12_remap_transform_meta); + trans_class->transform = GST_DEBUG_FUNCPTR (gst_d3d12_remap_transform); + + filter_class->set_info = GST_DEBUG_FUNCPTR (gst_d3d12_remap_set_info); + filter_class->propose_allocation = + GST_DEBUG_FUNCPTR (gst_d3d12_remap_propose_allocation); + + 
gst_type_mark_as_plugin_api (GST_TYPE_D3D12_SAMPLING_METHOD, + (GstPluginAPIFlags) 0); + + GST_DEBUG_CATEGORY_INIT (gst_d3d12_remap_debug, "d3d12remap", 0, + "d3d12remap"); +} + +static void +gst_d3d12_remap_init (GstD3D12Remap * self) +{ + self->priv = new GstD3D12RemapPrivate (); +} + +static void +gst_d3d12_remap_finalize (GObject * object) +{ + auto self = GST_D3D12_REMAP (object); + + delete self->priv; + + G_OBJECT_CLASS (parent_class)->finalize (object); +} + +static void +gst_d3d12_remap_set_remap_resource (GstD3D12Remap * self) +{ + auto priv = self->priv; + + if (!priv->ctx) + return; + + if (priv->uv_remap) { + ComPtr < ID3D12Device > other_device; + priv->uv_remap->GetDevice (IID_PPV_ARGS (&other_device)); + auto other_device_luid = GetAdapterLuid (other_device); + + auto device = gst_d3d12_device_get_device_handle (priv->ctx->device); + auto device_luid = GetAdapterLuid (device); + + if (other_device_luid.HighPart != device_luid.HighPart || + other_device_luid.LowPart != device_luid.LowPart) { + GST_ERROR_OBJECT (self, "Remap resource belongs to other device"); + } else { + gst_d3d12_converter_set_remap (priv->ctx->conv, priv->uv_remap.Get ()); + } + } else { + gst_d3d12_converter_set_remap (priv->ctx->conv, nullptr); + } +} + +static void +gst_d3d12_remap_set_property (GObject * object, guint prop_id, + const GValue * value, GParamSpec * pspec) +{ + auto self = GST_D3D12_REMAP (object); + auto priv = self->priv; + + std::lock_guard < std::mutex > lk (priv->lock); + + switch (prop_id) { + case PROP_UV_REMAP: + priv->uv_remap = (ID3D12Resource *) g_value_get_pointer (value); + if (priv->uv_remap) { + auto desc = GetDesc (priv->uv_remap); + if (desc.Format != DXGI_FORMAT_R8G8B8A8_UNORM + && desc.Format != DXGI_FORMAT_R16G16B16A16_UNORM) { + GST_ERROR_OBJECT (self, + "Not supported format %d", (guint) desc.Format); + priv->uv_remap = nullptr; + } + } + + gst_d3d12_remap_set_remap_resource (self); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID 
(object, prop_id, pspec); + break; + } +} + +static void +gst_d3d12_remap_get_property (GObject * object, guint prop_id, + GValue * value, GParamSpec * pspec) +{ + auto self = GST_D3D12_REMAP (object); + auto priv = self->priv; + + std::lock_guard < std::mutex > lk (priv->lock); + + switch (prop_id) { + case PROP_UV_REMAP: + g_value_set_pointer (value, priv->uv_remap.Get ()); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static gboolean +gst_d3d12_remap_stop (GstBaseTransform * trans) +{ + auto self = GST_D3D12_REMAP (trans); + auto priv = self->priv; + + priv->ctx = nullptr; + + return GST_BASE_TRANSFORM_CLASS (parent_class)->stop (trans); +} + +static gboolean +gst_d3d12_remap_propose_allocation (GstD3D12BaseFilter * filter, + GstD3D12Device * device, GstQuery * decide_query, GstQuery * query) +{ + if (!GST_D3D12_BASE_FILTER_CLASS (parent_class)->propose_allocation (filter, + device, decide_query, query)) { + return FALSE; + } + + gst_query_add_allocation_meta (query, + GST_VIDEO_OVERLAY_COMPOSITION_META_API_TYPE, nullptr); + + return TRUE; +} + +static gboolean +gst_d3d12_remap_set_info (GstD3D12BaseFilter * filter, GstD3D12Device * device, + GstCaps * incaps, GstVideoInfo * in_info, GstCaps * outcaps, + GstVideoInfo * out_info) +{ + auto self = GST_D3D12_REMAP (filter); + auto priv = self->priv; + + std::lock_guard < std::mutex > lk (priv->lock); + + priv->ctx = nullptr; + + auto ctx = std::make_shared < RemapContext > (); + ctx->device = (GstD3D12Device *) gst_object_ref (device); + auto device_handle = gst_d3d12_device_get_device_handle (device); + ctx->ca_pool = gst_d3d12_cmd_alloc_pool_new (device_handle, + D3D12_COMMAND_LIST_TYPE_DIRECT); + + ctx->cq = gst_d3d12_device_get_cmd_queue (ctx->device, + D3D12_COMMAND_LIST_TYPE_DIRECT); + ctx->cq_fence = gst_d3d12_cmd_queue_get_fence_handle (ctx->cq); + ctx->conv = gst_d3d12_converter_new (ctx->device, nullptr, + in_info, out_info, nullptr, nullptr, 
nullptr); + + priv->ctx = ctx; + gst_d3d12_remap_set_remap_resource (self); + + return TRUE; +} + +static gboolean +gst_d3d12_remap_transform_meta (GstBaseTransform * trans, + GstBuffer * outbuf, GstMeta * meta, GstBuffer * inbuf) +{ + if (meta->info->api == GST_VIDEO_CROP_META_API_TYPE) + return FALSE; + + return GST_BASE_TRANSFORM_CLASS (parent_class)->transform_meta (trans, + outbuf, meta, inbuf); +} + +static GstFlowReturn +gst_d3d12_remap_transform (GstBaseTransform * trans, GstBuffer * inbuf, + GstBuffer * outbuf) +{ + auto self = GST_D3D12_REMAP (trans); + auto priv = self->priv; + GstD3D12CmdAlloc *gst_ca; + GstD3D12FenceData *fence_data; + auto ctx = priv->ctx; + HRESULT hr; + + if (!ctx) { + GST_ERROR_OBJECT (self, "Context is not configured"); + return GST_FLOW_ERROR; + } + + auto device = gst_d3d12_device_get_device_handle (ctx->device); + + gst_d3d12_fence_data_pool_acquire (priv->fence_data_pool, &fence_data); + + if (!gst_d3d12_cmd_alloc_pool_acquire (ctx->ca_pool, &gst_ca)) { + GST_ERROR_OBJECT (self, "Couldn't acquire command allocator"); + gst_d3d12_fence_data_unref (fence_data); + return GST_FLOW_ERROR; + } + + auto ca = gst_d3d12_cmd_alloc_get_handle (gst_ca); + gst_d3d12_fence_data_push (fence_data, FENCE_NOTIFY_MINI_OBJECT (gst_ca)); + + hr = ca->Reset (); + if (!gst_d3d12_result (hr, ctx->device)) { + GST_ERROR_OBJECT (self, "Couldn't reset command allocator"); + gst_d3d12_fence_data_unref (fence_data); + return GST_FLOW_ERROR; + } + + if (!ctx->cl) { + hr = device->CreateCommandList (0, D3D12_COMMAND_LIST_TYPE_DIRECT, + ca, nullptr, IID_PPV_ARGS (&priv->ctx->cl)); + } else { + hr = ctx->cl->Reset (ca, nullptr); + } + + if (!gst_d3d12_result (hr, ctx->device)) { + GST_ERROR_OBJECT (self, "Couldn't reset command list"); + gst_d3d12_fence_data_unref (fence_data); + return GST_FLOW_ERROR; + } + + if (!gst_d3d12_converter_convert_buffer (ctx->conv, inbuf, outbuf, fence_data, + ctx->cl.Get (), TRUE)) { + GST_ERROR_OBJECT (self, "Couldn't convert 
buffer"); + gst_d3d12_fence_data_unref (fence_data); + return GST_FLOW_ERROR; + } + + hr = ctx->cl->Close (); + if (!gst_d3d12_result (hr, ctx->device)) { + gst_d3d12_fence_data_unref (fence_data); + GST_ERROR_OBJECT (self, "Couldn't close command list"); + return GST_FLOW_ERROR; + } + + ID3D12CommandList *cl[] = { ctx->cl.Get () }; + gst_d3d12_cmd_queue_execute_command_lists (ctx->cq, 1, cl, &ctx->fence_val); + + gst_d3d12_cmd_queue_set_notify (ctx->cq, ctx->fence_val, + FENCE_NOTIFY_MINI_OBJECT (fence_data)); + + gst_d3d12_buffer_set_fence (outbuf, ctx->cq_fence, ctx->fence_val, FALSE); + + return GST_FLOW_OK; +}
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3d12/gstd3d12remap.h
Added
@@ -0,0 +1,32 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/gst.h> +#include "gstd3d12basefilter.h" + +G_BEGIN_DECLS + +#define GST_TYPE_D3D12_REMAP (gst_d3d12_remap_get_type()) +G_DECLARE_FINAL_TYPE (GstD3D12Remap, gst_d3d12_remap, + GST, D3D12_REMAP, GstD3D12BaseFilter) + +G_END_DECLS +
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/d3d12/gstd3d12swapchainsink.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3d12/gstd3d12swapchainsink.cpp
Changed
@@ -35,12 +35,13 @@ #include "gstd3d12swapchainsink.h" #include "gstd3d12pluginutils.h" -#include "gstd3d12overlaycompositor.h" +#include "gstd3d12overlayblender.h" #include <directx/d3dx12.h> #include <mutex> #include <wrl.h> #include <vector> #include <memory> +#include <math.h> /* *INDENT-OFF* */ using namespace Microsoft::WRL; @@ -83,6 +84,9 @@ enum { SIGNAL_RESIZE, + SIGNAL_UV_REMAP, + SIGNAL_REDRAW, + SIGNAL_LAST_RENDERED_SAMPLE, SIGNAL_LAST }; @@ -215,7 +219,7 @@ GstBuffer *msaa_buf = nullptr; GstCaps *caps = nullptr; GstD3D12Converter *conv = nullptr; - GstD3D12OverlayCompositor *comp = nullptr; + GstD3D12OverlayBlender *comp = nullptr; guint64 fence_val = 0; bool caps_updated = false; bool first_present = true; @@ -225,6 +229,16 @@ FLOAT border_color_val[4]; GstVideoRectangle viewport = { }; gboolean auto_resize = FALSE; + gboolean did_redraw = FALSE; + guint last_backbuf_idx = 0; + GstClockTime last_backbuf_pts = 0; + GstClockTime last_backbuf_dur = 0; + GstSegment segment; + + std::vector<ComPtr<ID3D12Resource>> uv_remap; + std::vector<D3D12_VIEWPORT> uv_remap_viewport_origin; + std::vector<GstVideoRectangle> uv_remap_viewport; + std::vector<guint64> uv_remap_bg_color; gint adapter = DEFAULT_ADAPTER; gint force_aspect_ratio = DEFAULT_FORCE_ASPECT_RATIO; @@ -266,15 +280,24 @@ GstQuery * query); static GstFlowReturn gst_d3d12_swapchain_sink_prepare (GstBaseSink * sink, GstBuffer * buf); +static gboolean gst_d3d12_swapchain_sink_event (GstBaseSink * sink, + GstEvent * event); static gboolean gst_d3d12_swapchain_sink_set_info (GstVideoSink * sink, GstCaps * caps, const GstVideoInfo * info); static GstFlowReturn gst_d3d12_swapchain_sink_show_frame (GstVideoSink * sink, GstBuffer * buf); static void gst_d3d12_swapchain_sink_resize (GstD3D12SwapChainSink * self, guint width, guint height); +static void gst_d3d12_swapchain_sink_uv_remap (GstD3D12SwapChainSink * self, + guint num_lut, ID3D12Resource ** lut, D3D12_VIEWPORT * viewport, + guint64 * bg_color); +static 
void gst_d3d12_swapchain_sink_redraw (GstD3D12SwapChainSink * self); static void gst_d3d12_swapchain_sink_resize_internal (GstD3D12SwapChainSink * self, guint width, guint height); +static GstSample + * gst_d3d12_swapchain_sink_last_back_buffer (GstD3D12SwapChainSink * self, + gboolean remove_borders); static void gst_d3d12_swapchain_sink_color_balance_init (GstColorBalanceInterface * iface); @@ -388,6 +411,65 @@ G_CALLBACK (gst_d3d12_swapchain_sink_resize), nullptr, nullptr, nullptr, G_TYPE_NONE, 2, G_TYPE_UINT, G_TYPE_UINT); + /** + * GstD3D12SwapChainSink::uv-remap + * @videosink: the #GstD3D12SwapChainSink + * @num_lut: LUT resource array length + * @lut: Array of ID3D12Resource used for UV remap operation + * @viewport: Array of D3D12_VIEWPORT + * @bg_color: Array of background color represented via ARGB64 value + * + * Sets list of ID3D12Resource for UV coordinates remapping. + * Valid formats are R8G8B8A8_UNORM and R16G16B16A16_UNORM. + * R -> U, G -> V, B -> unused, and A -> mask where A >= 0.5 + * applies remapping, otherwise fill background color. + * + * TopLeftX, TopLeftY, Width, and Height values are used to calculate + * final viewport size. The coordinates must be normalized values in the + * [0, 1] range instead of real viewport size. 
+ * + * Since: 1.28 + */ + d3d12_swapchain_sink_signals[SIGNAL_UV_REMAP] = + g_signal_new_class_handler ("uv-remap", G_TYPE_FROM_CLASS (klass), + (GSignalFlags) (G_SIGNAL_RUN_LAST | G_SIGNAL_ACTION), + G_CALLBACK (gst_d3d12_swapchain_sink_uv_remap), nullptr, nullptr, nullptr, + G_TYPE_NONE, 4, G_TYPE_UINT, G_TYPE_POINTER, G_TYPE_POINTER, + G_TYPE_POINTER); + + /** + * GstD3D12SwapChainSink::redraw + * @videosink: the #GstD3D12SwapChainSink + * + * Redraw last buffer and present it + * + * Since: 1.28 + */ + d3d12_swapchain_sink_signals[SIGNAL_REDRAW] = + g_signal_new_class_handler ("redraw", G_TYPE_FROM_CLASS (klass), + (GSignalFlags) (G_SIGNAL_RUN_LAST | G_SIGNAL_ACTION), + G_CALLBACK (gst_d3d12_swapchain_sink_redraw), nullptr, nullptr, nullptr, + G_TYPE_NONE, 0); + + /** + * GstD3D12SwapChainSink::last-rendered-sample: + * @videosink: the #GstD3D12SwapChainSink + * @remove_borders: Remove background borders + * + * Get last rendered swapchain backbuffer content + * + * Returns: a #GstSample of the last rendered swapchain backbuffer content + * or %NULL if swapchain is not configured yet + * + * Since: 1.28 + */ + d3d12_swapchain_sink_signals[SIGNAL_LAST_RENDERED_SAMPLE] = + g_signal_new_class_handler ("last-rendered-sample", + G_TYPE_FROM_CLASS (klass), + (GSignalFlags) (G_SIGNAL_RUN_LAST | G_SIGNAL_ACTION), + G_CALLBACK (gst_d3d12_swapchain_sink_last_back_buffer), + nullptr, nullptr, nullptr, GST_TYPE_SAMPLE, 1, G_TYPE_BOOLEAN); + element_class->set_context = GST_DEBUG_FUNCPTR (gst_d3d12_swapchain_sink_set_context); @@ -405,6 +487,7 @@ basesink_class->query = GST_DEBUG_FUNCPTR (gst_d3d12_swapchain_sink_query); basesink_class->prepare = GST_DEBUG_FUNCPTR (gst_d3d12_swapchain_sink_prepare); + basesink_class->event = GST_DEBUG_FUNCPTR (gst_d3d12_swapchain_sink_event); videosink_class->set_info = GST_DEBUG_FUNCPTR (gst_d3d12_swapchain_sink_set_info); @@ -479,8 +562,8 @@ auto val = g_value_get_boolean (value); if (val != priv->force_aspect_ratio) { priv->force_aspect_ratio 
= val; - gst_d3d12_swapchain_sink_resize_internal (self, - priv->width, priv->height); + priv->output_updated = true; + gst_d3d12_swapchain_sink_redraw (self); } break; } @@ -675,7 +758,7 @@ GstVideoInfo info; gst_video_info_set_format (&info, GST_VIDEO_FORMAT_RGBA, priv->width, priv->height); - priv->comp = gst_d3d12_overlay_compositor_new (self->device, &info); + priv->comp = gst_d3d12_overlay_blender_new (self->device, &info); return gst_d3d12_swapchain_sink_resize_unlocked (self, priv->width, priv->height); @@ -749,6 +832,20 @@ } static gboolean +gst_d3d12_swapchain_sink_event (GstBaseSink * sink, GstEvent * event) +{ + auto self = GST_D3D12_SWAPCHAIN_SINK (sink); + auto priv = self->priv; + + if (GST_EVENT_TYPE (event) == GST_EVENT_SEGMENT) { + std::lock_guard < std::recursive_mutex > lk (priv->lock); + gst_event_copy_segment (event, &priv->segment); + } + + return GST_BASE_SINK_CLASS (parent_class)->event (sink, event); +} + +static gboolean gst_d3d12_swapchain_sink_set_info (GstVideoSink * sink, GstCaps * caps, const GstVideoInfo * info) { @@ -855,6 +952,53 @@ return TRUE; } +static void +calculate_remap_viewport (GstD3D12SwapChainSink * self, + const D3D12_VIEWPORT * d3d12_viewport, GstVideoRectangle * viewport) +{ + auto priv = self->priv; + + if (priv->viewport.w > 0 && priv->viewport.h > 0) { + double x = d3d12_viewport->TopLeftX; + double y = d3d12_viewport->TopLeftY; + double w = d3d12_viewport->Width; + double h = d3d12_viewport->Height; + + /* Ensure normalized coordinate */ + x = CLAMP (x, 0.0, 1.0); + y = CLAMP (y, 0.0, 1.0); + w = CLAMP (w, 0.0, 1.0); + h = CLAMP (h, 0.0, 1.0); + + /* Scale to real viewport size */ + gint xi = (gint) round ((double) priv->viewport.w * x) + priv->viewport.x; + gint yi = (gint) round ((double) priv->viewport.h * y) + priv->viewport.y; + gint wi = (gint) round ((double) priv->viewport.w * w); + gint hi = (gint) round ((double) priv->viewport.h * h); + + /* clamp */ + auto r = xi + wi; + auto rr = priv->viewport.x + 
priv->viewport.w; + if (rr < r) + wi = rr - xi; + + auto b = yi + hi; + auto bb = priv->viewport.y + priv->viewport.h; + if (bb < b) + hi = bb - hi; + + viewport->x = xi; + viewport->y = yi; + viewport->w = wi; + viewport->h = hi; + } else { + viewport->x = 0; + viewport->y = 0; + viewport->w = 0; + viewport->h = 0; + } +} + static gboolean gst_d3d12_swapchain_sink_render (GstD3D12SwapChainSink * self) { @@ -879,7 +1023,7 @@ priv->prev_crop_rect = crop_rect; } - priv->lock.lock (); + std::lock_guard < std::recursive_mutex > lk (priv->lock); if (priv->first_present || priv->output_updated) { GstVideoRectangle dst_rect = { }; dst_rect.w = priv->width; @@ -895,20 +1039,27 @@ priv->viewport = dst_rect; } + priv->uv_remap_viewport.clear (); + for (size_t i = 0; i < priv->uv_remap_viewport_origin.size (); i++) { + GstVideoRectangle uv_viewport = { }; + calculate_remap_viewport (self, &priv->uv_remap_viewport_origini, + &uv_viewport); + priv->uv_remap_viewport.push_back (uv_viewport); + } + g_object_set (priv->conv, "dest-x", priv->viewport.x, "dest-y", priv->viewport.y, "dest-width", priv->viewport.w, "dest-height", priv->viewport.h, "hue", priv->hue, "saturation", priv->saturation, "brightness", priv->brightness, "contrast", priv->contrast, "max-mip-levels", priv->mip_levels, nullptr); - gst_d3d12_overlay_compositor_update_viewport (priv->comp, &priv->viewport); + gst_d3d12_overlay_blender_update_viewport (priv->comp, &priv->viewport); priv->first_present = false; priv->output_updated = false; } - priv->lock.unlock (); - gst_d3d12_overlay_compositor_upload (priv->comp, priv->cached_buf); + gst_d3d12_overlay_blender_upload (priv->comp, priv->cached_buf); GstD3D12CmdAlloc *gst_ca; if (!gst_d3d12_cmd_alloc_pool_acquire (priv->ca_pool, &gst_ca)) { @@ -946,8 +1097,10 @@ } } - auto cur_idx = priv->swapchain->GetCurrentBackBufferIndex (); - auto backbuf = priv->backbufcur_idx->backbuf; + priv->last_backbuf_idx = priv->swapchain->GetCurrentBackBufferIndex (); + 
priv->last_backbuf_pts = GST_BUFFER_PTS (priv->cached_buf); + priv->last_backbuf_dur = GST_BUFFER_DURATION (priv->cached_buf); + auto backbuf = priv->backbufpriv->last_backbuf_idx->backbuf; GstD3D12FenceData *fence_data; gst_d3d12_fence_data_pool_acquire (priv->fence_data_pool, &fence_data); @@ -974,20 +1127,40 @@ if (priv->viewport.x != 0 || priv->viewport.y != 0 || (guint) priv->viewport.w != priv->width || - (guint) priv->viewport.h != priv->height) { + (guint) priv->viewport.h != priv->height || !priv->uv_remap.empty ()) { auto rtv_heap = gst_d3d12_memory_get_render_target_view_heap (mem); auto cpu_handle = GetCPUDescriptorHandleForHeapStart (rtv_heap); cl->ClearRenderTargetView (cpu_handle, priv->border_color_val, 0, nullptr); } - if (!gst_d3d12_converter_convert_buffer (priv->conv, - priv->cached_buf, conv_outbuf, fence_data, cl.Get (), TRUE)) { - GST_ERROR_OBJECT (self, "Couldn't build convert command"); - gst_d3d12_fence_data_unref (fence_data); - return FALSE; + if (!priv->uv_remap.empty ()) { + std::vector < ID3D12Resource * >uv_remap; + + for (size_t i = 0; i < priv->uv_remap.size (); i++) + uv_remap.push_back (priv->uv_remapi.Get ()); + + if (!gst_d3d12_converter_convert_buffer_for_uv_remap (priv->conv, + priv->cached_buf, conv_outbuf, fence_data, cl.Get (), TRUE, + (guint) priv->uv_remap.size (), uv_remap.data (), + priv->uv_remap_viewport.data (), priv->uv_remap_bg_color.data ())) { + GST_ERROR_OBJECT (self, "Couldn't build convert command"); + gst_d3d12_fence_data_unref (fence_data); + return FALSE; + } + } else { + gst_d3d12_converter_update_viewport (priv->conv, priv->viewport.x, + priv->viewport.y, priv->viewport.w, priv->viewport.h); + gst_d3d12_converter_set_remap (priv->conv, nullptr); + + if (!gst_d3d12_converter_convert_buffer (priv->conv, + priv->cached_buf, conv_outbuf, fence_data, cl.Get (), TRUE)) { + GST_ERROR_OBJECT (self, "Couldn't build convert command"); + gst_d3d12_fence_data_unref (fence_data); + return FALSE; + } } - if 
(!gst_d3d12_overlay_compositor_draw (priv->comp, + if (!gst_d3d12_overlay_blender_draw (priv->comp, conv_outbuf, fence_data, cl.Get ())) { GST_ERROR_OBJECT (self, "Couldn't build overlay command"); gst_d3d12_fence_data_unref (fence_data); @@ -1074,7 +1247,12 @@ GST_VIDEO_SINK_WIDTH (self), GST_VIDEO_SINK_HEIGHT (self)); } } else { - need_render = false; + if (priv->did_redraw) { + need_render = true; + } else { + need_render = false; + } + update_converter = false; } } @@ -1135,6 +1313,7 @@ if (!need_render) return TRUE; + priv->did_redraw = FALSE; auto mem = gst_buffer_peek_memory (buffer, 0); if (!gst_is_d3d12_memory (mem)) { GstBuffer *upload = nullptr; @@ -1144,27 +1323,10 @@ return FALSE; } - GstVideoFrame in_frame, out_frame; - if (!gst_video_frame_map (&in_frame, &priv->info, buffer, GST_MAP_READ)) { - GST_ERROR_OBJECT (self, "Couldn't map input frame"); - gst_buffer_unref (upload); - return FALSE; - } - - if (!gst_video_frame_map (&out_frame, &priv->info, upload, GST_MAP_WRITE)) { - GST_ERROR_OBJECT (self, "Couldn't map upload frame"); - gst_video_frame_unmap (&in_frame); - gst_buffer_unref (upload); - return FALSE; - } - - auto copy_ret = gst_video_frame_copy (&out_frame, &in_frame); - gst_video_frame_unmap (&out_frame); - gst_video_frame_unmap (&in_frame); - if (!copy_ret) { - GST_ERROR_OBJECT (self, "Couldn't copy frame"); + if (!gst_d3d12_buffer_copy_into (upload, buffer, &priv->info)) { + GST_ERROR_OBJECT (self, "Couldn't upload buffer"); gst_buffer_unref (upload); - return FALSE; + return GST_FLOW_ERROR; } gst_buffer_foreach_meta (buffer, @@ -1201,10 +1363,187 @@ gst_d3d12_cmd_queue_execute_command_lists (priv->cq, 0, nullptr, &priv->fence_val); + + priv->did_redraw = TRUE; } } static void +gst_d3d12_swapchain_sink_redraw (GstD3D12SwapChainSink * self) +{ + auto priv = self->priv; + + std::lock_guard < std::recursive_mutex > lk (priv->lock); + + GST_DEBUG_OBJECT (self, "Redraw"); + + if (priv->swapchain && priv->cached_buf && + 
gst_d3d12_swapchain_sink_render (self)) { + GST_DEBUG_OBJECT (self, "Presenting redraw frame"); + auto hr = priv->swapchain->Present (0, 0); + if (!gst_d3d12_result (hr, self->device)) + GST_ERROR_OBJECT (self, "Present failed"); + + gst_d3d12_cmd_queue_execute_command_lists (priv->cq, + 0, nullptr, &priv->fence_val); + + priv->did_redraw = TRUE; + } +} + +static GstSample * +gst_d3d12_swapchain_sink_last_back_buffer (GstD3D12SwapChainSink * self, + gboolean remove_borders) +{ + auto priv = self->priv; + + std::lock_guard < std::recursive_mutex > lk (priv->lock); + if (!priv->swapchain || !priv->cl || !priv->ca_pool) + return nullptr; + + if (priv->viewport.w <= 0 || priv->viewport.h <= 0) + return nullptr; + + ComPtr < ID3D12Resource > backbuf; + ComPtr < ID3D12Resource > dst_resource; + auto hr = priv->swapchain->GetBuffer (priv->last_backbuf_idx, + IID_PPV_ARGS (&backbuf)); + if (!gst_d3d12_result (hr, self->device)) + return nullptr; + + auto device = gst_d3d12_device_get_device_handle (self->device); + auto src_desc = GetDesc (backbuf); + + UINT64 width; + UINT height; + if (remove_borders) { + width = priv->viewport.w; + height = priv->viewport.h; + } else { + width = src_desc.Width; + height = src_desc.Height; + } + + auto dst_desc = CD3DX12_RESOURCE_DESC::Tex2D (DXGI_FORMAT_R8G8B8A8_UNORM, + width, height, 1, 1, 1, 0, + D3D12_RESOURCE_FLAG_ALLOW_RENDER_TARGET | + D3D12_RESOURCE_FLAG_ALLOW_SIMULTANEOUS_ACCESS | + D3D12_RESOURCE_FLAG_ALLOW_UNORDERED_ACCESS); + auto heap_props = CD3DX12_HEAP_PROPERTIES (D3D12_HEAP_TYPE_DEFAULT); + hr = device->CreateCommittedResource (&heap_props, D3D12_HEAP_FLAG_SHARED, + &dst_desc, D3D12_RESOURCE_STATE_COMMON, nullptr, + IID_PPV_ARGS (&dst_resource)); + if (!gst_d3d12_result (hr, self->device)) + return nullptr; + + GstD3D12CmdAlloc *gst_ca; + if (!gst_d3d12_cmd_alloc_pool_acquire (priv->ca_pool, &gst_ca)) + return nullptr; + + auto ca = gst_d3d12_cmd_alloc_get_handle (gst_ca); + hr = ca->Reset (); + if (!gst_d3d12_result 
(hr, self->device)) { + gst_d3d12_cmd_alloc_unref (gst_ca); + return nullptr; + } + + hr = priv->cl->Reset (ca, nullptr); + if (!gst_d3d12_result (hr, self->device)) { + gst_d3d12_cmd_alloc_unref (gst_ca); + return nullptr; + } + + auto barrier = CD3DX12_RESOURCE_BARRIER::Transition (backbuf.Get (), + D3D12_RESOURCE_STATE_COMMON, D3D12_RESOURCE_STATE_COPY_SOURCE); + priv->cl->ResourceBarrier (1, &barrier); + + if (remove_borders) { + D3D12_BOX src_box = { }; + src_box.left = priv->viewport.x; + src_box.top = priv->viewport.y; + src_box.right = priv->viewport.x + priv->viewport.w; + src_box.bottom = priv->viewport.y + priv->viewport.h; + src_box.front = 0; + src_box.back = 1; + + auto src_location = CD3DX12_TEXTURE_COPY_LOCATION (backbuf.Get ()); + auto dst_location = CD3DX12_TEXTURE_COPY_LOCATION (dst_resource.Get ()); + + priv->cl->CopyTextureRegion (&dst_location, + 0, 0, 0, &src_location, &src_box); + } else { + priv->cl->CopyResource (dst_resource.Get (), backbuf.Get ()); + } + + barrier = CD3DX12_RESOURCE_BARRIER::Transition (backbuf.Get (), + D3D12_RESOURCE_STATE_COPY_SOURCE, D3D12_RESOURCE_STATE_COMMON); + priv->cl->ResourceBarrier (1, &barrier); + + hr = priv->cl->Close (); + if (!gst_d3d12_result (hr, self->device)) { + gst_d3d12_cmd_alloc_unref (gst_ca); + return nullptr; + } + + ID3D12CommandList *cmd_list = { priv->cl.Get () }; + + hr = gst_d3d12_cmd_queue_execute_command_lists (priv->cq, + 1, cmd_list, &priv->fence_val); + if (!gst_d3d12_result (hr, self->device)) { + gst_d3d12_cmd_alloc_unref (gst_ca); + return nullptr; + } + + gst_d3d12_cmd_queue_set_notify (priv->cq, priv->fence_val, + gst_ca, (GDestroyNotify) gst_d3d12_cmd_alloc_unref); + + auto mem = gst_d3d12_allocator_alloc_wrapped (nullptr, self->device, + dst_resource.Get (), 0, nullptr, nullptr); + if (!mem) + return nullptr; + + GstVideoInfo info; + gst_video_info_set_format (&info, GST_VIDEO_FORMAT_RGBA, (guint) width, + height); + info.fps_n = priv->info.fps_n; + info.fps_d = 
priv->info.fps_d; + + D3D12_PLACED_SUBRESOURCE_FOOTPRINT layout; + device->GetCopyableFootprints (&dst_desc, + 0, 1, 0, &layout, nullptr, nullptr, nullptr); + + gsize offset4; + gint stride4; + + offset0 = 0; + stride0 = layout.Footprint.RowPitch; + + auto buf = gst_buffer_new (); + gst_buffer_append_memory (buf, mem); + gst_buffer_add_video_meta_full (buf, GST_VIDEO_FRAME_FLAG_NONE, + GST_VIDEO_INFO_FORMAT (&info), GST_VIDEO_INFO_WIDTH (&info), + GST_VIDEO_INFO_HEIGHT (&info), GST_VIDEO_INFO_N_PLANES (&info), + offset, stride); + + GST_BUFFER_DTS (buf) = GST_CLOCK_TIME_NONE; + GST_BUFFER_PTS (buf) = priv->last_backbuf_pts; + GST_BUFFER_DURATION (buf) = priv->last_backbuf_dur; + + auto fence = gst_d3d12_cmd_queue_get_fence_handle (priv->cq); + gst_d3d12_buffer_set_fence (buf, fence, priv->fence_val, FALSE); + + auto caps = gst_video_info_to_caps (&info); + gst_caps_set_features_simple (caps, + gst_caps_features_from_string (GST_CAPS_FEATURE_MEMORY_D3D12_MEMORY)); + + auto sample = gst_sample_new (buf, caps, &priv->segment, nullptr); + gst_buffer_unref (buf); + gst_caps_unref (caps); + + return sample; +} + +static void gst_d3d12_swapchain_sink_resize (GstD3D12SwapChainSink * self, guint width, guint height) { @@ -1238,6 +1577,33 @@ gst_d3d12_swapchain_sink_resize_internal (self, width, height); } +static void +gst_d3d12_swapchain_sink_uv_remap (GstD3D12SwapChainSink * self, guint num_lut, + ID3D12Resource ** lut, D3D12_VIEWPORT * viewport, guint64 * bg_color) +{ + auto priv = self->priv; + + std::lock_guard < std::recursive_mutex > lk (priv->lock); + priv->uv_remap.clear (); + priv->uv_remap_viewport.clear (); + priv->uv_remap_viewport_origin.clear (); + priv->uv_remap_bg_color.clear (); + + for (guint i = 0; i < num_lut; i++) { + ComPtr < ID3D12Resource > remap = luti; + priv->uv_remap.push_back (remap); + priv->uv_remap_viewport_origin.push_back (viewporti); + priv->uv_remap_bg_color.push_back (bg_colori); + + GstVideoRectangle rect = { }; + 
calculate_remap_viewport (self, &viewporti, &rect); + GST_DEBUG_OBJECT (self, + "Calculated viewport %d (x, y, w, h): %d, %d, %d, %d", i, + rect.x, rect.y, rect.w, rect.h); + priv->uv_remap_viewport.push_back (rect); + } +} + static gboolean gst_d3d12_swapchain_sink_start (GstBaseSink * sink) { @@ -1375,6 +1741,8 @@ auto self = GST_D3D12_SWAPCHAIN_SINK (sink); auto priv = self->priv; + GST_TRACE_OBJECT (self, "Prepare"); + auto pts = GST_BUFFER_PTS (buffer); if (GST_CLOCK_TIME_IS_VALID (pts)) { auto stream_time = gst_segment_to_stream_time (&sink->segment, @@ -1398,6 +1766,8 @@ auto self = GST_D3D12_SWAPCHAIN_SINK (sink); auto priv = self->priv; + GST_TRACE_OBJECT (self, "Show frame"); + std::lock_guard < std::recursive_mutex > lk (priv->lock); if (!gst_d3d12_swapchain_sink_set_buffer (self, buf, FALSE)) { GST_ERROR_OBJECT (self, "Set buffer failed");
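For context, the arithmetic that `calculate_remap_viewport` performs in the diff above (mapping a normalized 0..1 D3D12 viewport onto the sink's pixel-space output viewport, then trimming at the right and bottom edges) can be sketched in isolation. This is a plain C++ sketch, not GStreamer API: `Rect` stands in for `GstVideoRectangle`, and `remap_viewport` is an illustrative name.

```cpp
#include <cmath>
#include <cassert>

// Stand-in for GstVideoRectangle: origin plus size, in pixels.
struct Rect { int x, y, w, h; };

static double clamp01 (double v)
{
  return v < 0.0 ? 0.0 : (v > 1.0 ? 1.0 : v);
}

// Map a normalized (0..1) sub-viewport onto the pixel-space output
// viewport `out`, clamping so the result never extends past the
// output's right or bottom edge.
Rect remap_viewport (Rect out, double x, double y, double w, double h)
{
  if (out.w <= 0 || out.h <= 0)
    return { 0, 0, 0, 0 };

  /* Ensure normalized coordinates */
  x = clamp01 (x);
  y = clamp01 (y);
  w = clamp01 (w);
  h = clamp01 (h);

  /* Scale to the real viewport size */
  int xi = (int) std::round (out.w * x) + out.x;
  int yi = (int) std::round (out.h * y) + out.y;
  int wi = (int) std::round (out.w * w);
  int hi = (int) std::round (out.h * h);

  /* Clamp the right and bottom edges to the output viewport */
  if (out.x + out.w < xi + wi)
    wi = out.x + out.w - xi;
  if (out.y + out.h < yi + hi)
    hi = out.y + out.h - yi;

  return { xi, yi, wi, hi };
}
```

For a 640x480 output viewport at (10, 20), the right half (x = 0.5, w = 0.5) maps to a 320x480 rectangle at (330, 20); a sub-viewport that would spill past an edge is trimmed rather than drawing outside the output.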
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/d3d12/gstd3d12videosink.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3d12/gstd3d12videosink.cpp
Changed
@@ -1323,25 +1323,8 @@
       return GST_FLOW_ERROR;
     }
 
-    GstVideoFrame in_frame, out_frame;
-    if (!gst_video_frame_map (&in_frame, &priv->info, buffer, GST_MAP_READ)) {
-      GST_ERROR_OBJECT (self, "Couldn't map input frame");
-      gst_buffer_unref (upload);
-      return GST_FLOW_ERROR;
-    }
-
-    if (!gst_video_frame_map (&out_frame, &priv->info, upload, GST_MAP_WRITE)) {
-      GST_ERROR_OBJECT (self, "Couldn't map upload frame");
-      gst_video_frame_unmap (&in_frame);
-      gst_buffer_unref (upload);
-      return GST_FLOW_ERROR;
-    }
-
-    auto copy_ret = gst_video_frame_copy (&out_frame, &in_frame);
-    gst_video_frame_unmap (&out_frame);
-    gst_video_frame_unmap (&in_frame);
-    if (!copy_ret) {
-      GST_ERROR_OBJECT (self, "Couldn't copy frame");
+    if (!gst_d3d12_buffer_copy_into (upload, buffer, &priv->info)) {
+      GST_ERROR_OBJECT (self, "Couldn't upload buffer");
       gst_buffer_unref (upload);
       return GST_FLOW_ERROR;
     }
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3d12/gstd3d12weaveinterlace.cpp
Added
@@ -0,0 +1,1774 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gstd3d12weaveinterlace.h" +#include "gstd3d12pluginutils.h" +#include <gst/d3dshader/gstd3dshader.h> +#include <directx/d3dx12.h> +#include <wrl.h> +#include <vector> +#include <math.h> +#include <memory> +#include <mutex> + +/* *INDENT-OFF* */ +using namespace Microsoft::WRL; +/* *INDENT-ON* */ + +#ifndef GST_DISABLE_GST_DEBUG +#define GST_CAT_DEFAULT ensure_debug_category() +static GstDebugCategory * +ensure_debug_category (void) +{ + static GstDebugCategory *cat = nullptr; + + GST_D3D12_CALL_ONCE_BEGIN { + cat = _gst_debug_category_new ("d3d12weaveinterlace", + 0, "d3d12weaveinterlace"); + } GST_D3D12_CALL_ONCE_END; + + return cat; +} +#endif /* GST_DISABLE_GST_DEBUG */ + +/* *INDENT-OFF* */ +struct WeaveCBData +{ + UINT Width; + UINT Height; + UINT Mode; + UINT FieldOrder; +}; + +struct WeaveContext +{ + ComPtr<ID3D12PipelineState> pso; + WeaveCBData cb_data = { }; + guint dispatch_x; + guint dispatch_y; +}; + +struct WeaveConvertContext +{ + ComPtr<ID3D12PipelineState> pso; + guint dispatch_x; + guint dispatch_y; +}; + +struct 
GstD3D12WeaveInterlacePrivate +{ + GstD3D12WeaveInterlacePrivate () + { + fence_pool = gst_d3d12_fence_data_pool_new (); + output_queue = gst_vec_deque_new (2); + gst_vec_deque_set_clear_func (output_queue, + (GDestroyNotify) gst_buffer_unref); + } + + ~GstD3D12WeaveInterlacePrivate () + { + if (device) { + gst_d3d12_device_fence_wait (device, queue_type, + fence_val); + } + + contexts.clear (); + pre_context = nullptr; + post_context = nullptr; + rs = nullptr; + cl = nullptr; + fence = nullptr; + Flush (); + gst_vec_deque_free (output_queue); + if (output_pool) + gst_buffer_pool_set_active (output_pool, FALSE); + gst_clear_object (&output_pool); + if (convert_pool) + gst_buffer_pool_set_active (convert_pool, FALSE); + gst_clear_object (&convert_pool); + gst_clear_object (&desc_pool); + gst_clear_object (&ca_pool); + gst_clear_object (&fence_pool); + gst_clear_object (&cq); + gst_clear_object (&device); + } + + void Flush () + { + gst_clear_buffer (&prev_buf); + gst_clear_buffer (&cur_buf); + gst_clear_buffer (&out_buf); + } + + std::vector<std::shared_ptr<WeaveContext>> contexts; + std::shared_ptr<WeaveConvertContext> pre_context; + std::shared_ptr<WeaveConvertContext> post_context; + GstVecDeque *output_queue = nullptr; + ComPtr<ID3D12GraphicsCommandList> cl; + ComPtr<ID3D12RootSignature> rs; + ComPtr<ID3D12RootSignature> convert_rs; + GstD3D12Device *device = nullptr; + GstD3D12CmdQueue *cq = nullptr; + ComPtr<ID3D12Fence> fence; + GstD3D12FenceDataPool *fence_pool = nullptr; + GstD3D12DescHeapPool *desc_pool = nullptr; + GstD3D12CmdAllocPool *ca_pool = nullptr; + GstBuffer *prev_buf = nullptr; + GstBuffer *cur_buf = nullptr; + GstBuffer *out_buf = nullptr; + GstBufferPool *output_pool = nullptr; + GstBufferPool *convert_pool = nullptr; + GstVideoInfo info; + GstVideoInfo origin_info; + guint64 fence_val = 0; + guint desc_inc_size; + GstD3D12WeaveInterlacPattern pattern = GST_D3D12_WEAVE_INTERLACE_PATTERN_1_1; + gboolean bff = FALSE; + gboolean is_forward = 
TRUE; + D3D12_COMMAND_LIST_TYPE queue_type = D3D12_COMMAND_LIST_TYPE_DIRECT; + std::mutex lock; +}; +/* *INDENT-ON* */ + +struct _GstD3D12WeaveInterlace +{ + GstObject parent; + + GstD3D12WeaveInterlacePrivate *priv; +}; + +static void gst_d3d12_weave_interlace_finalize (GObject * object); + +#define gst_d3d12_weave_interlace_parent_class parent_class +G_DEFINE_TYPE (GstD3D12WeaveInterlace, gst_d3d12_weave_interlace, + GST_TYPE_OBJECT); + +static void +gst_d3d12_weave_interlace_class_init (GstD3D12WeaveInterlaceClass * klass) +{ + auto object_class = G_OBJECT_CLASS (klass); + + object_class->finalize = gst_d3d12_weave_interlace_finalize; +} + +static void +gst_d3d12_weave_interlace_init (GstD3D12WeaveInterlace * self) +{ + self->priv = new GstD3D12WeaveInterlacePrivate (); +} + +static void +gst_d3d12_weave_interlace_finalize (GObject * object) +{ + auto self = GST_D3D12_WEAVE_INTERLACE (object); + + delete self->priv; + + G_OBJECT_CLASS (parent_class)->finalize (object); +} + +static gboolean +gst_d3d12_weave_interlace_get_rs_blob (GstD3D12Device * device, + ID3DBlob ** blob) +{ + static ID3DBlob *rs_blob_ = nullptr; + + GST_D3D12_CALL_ONCE_BEGIN { + std::vector < D3D12_DESCRIPTOR_RANGE > ranges; + std::vector < D3D12_ROOT_PARAMETER > params; + + for (guint i = 0; i < 2; i++) { + ranges.push_back (CD3DX12_DESCRIPTOR_RANGE + (D3D12_DESCRIPTOR_RANGE_TYPE_SRV, 1, i)); + } + + ranges.push_back (CD3DX12_DESCRIPTOR_RANGE (D3D12_DESCRIPTOR_RANGE_TYPE_UAV, + 1, 0)); + + CD3DX12_ROOT_PARAMETER param; + param.InitAsDescriptorTable (ranges.size (), ranges.data ()); + params.push_back (param); + + param.InitAsConstants (4, 0); + params.push_back (param); + + D3D12_VERSIONED_ROOT_SIGNATURE_DESC desc = { }; + CD3DX12_VERSIONED_ROOT_SIGNATURE_DESC::Init_1_0 (desc, params.size (), + params.data (), 0, nullptr, + D3D12_ROOT_SIGNATURE_FLAG_DENY_VERTEX_SHADER_ROOT_ACCESS | + D3D12_ROOT_SIGNATURE_FLAG_DENY_HULL_SHADER_ROOT_ACCESS | + 
D3D12_ROOT_SIGNATURE_FLAG_DENY_DOMAIN_SHADER_ROOT_ACCESS |
+        D3D12_ROOT_SIGNATURE_FLAG_DENY_GEOMETRY_SHADER_ROOT_ACCESS |
+        D3D12_ROOT_SIGNATURE_FLAG_DENY_PIXEL_SHADER_ROOT_ACCESS);
+
+    ComPtr < ID3DBlob > rs_blob;
+    ComPtr < ID3DBlob > error_blob;
+    auto hr = D3DX12SerializeVersionedRootSignature (&desc,
+        D3D_ROOT_SIGNATURE_VERSION_1_0, &rs_blob, &error_blob);
+    if (!gst_d3d12_result (hr, device)) {
+      const gchar *error_msg = nullptr;
+      if (error_blob)
+        error_msg = (const gchar *) error_blob->GetBufferPointer ();
+
+      GST_ERROR_OBJECT (device,
+          "Couldn't serialize rs, hr: 0x%x, error detail: %s",
+          (guint) hr, GST_STR_NULL (error_msg));
+    } else {
+      rs_blob_ = rs_blob.Detach ();
+    }
+  }
+  GST_D3D12_CALL_ONCE_END;
+
+  if (rs_blob_) {
+    *blob = rs_blob_;
+    rs_blob_->AddRef ();
+    return TRUE;
+  }
+
+  return FALSE;
+}
+
+static gboolean
+gst_d3d12_weave_interlace_get_convert_rs_blob (GstD3D12Device * device,
+    ID3DBlob ** blob)
+{
+  static ID3DBlob *rs_blob_ = nullptr;
+
+  GST_D3D12_CALL_ONCE_BEGIN {
+    CD3DX12_ROOT_PARAMETER param;
+    CD3DX12_DESCRIPTOR_RANGE range[2];
+    range[0].Init (D3D12_DESCRIPTOR_RANGE_TYPE_SRV, 1, 0, 0);
+    range[1].Init (D3D12_DESCRIPTOR_RANGE_TYPE_UAV, 1, 0, 0);
+
+    param.InitAsDescriptorTable (2, range);
+
+    D3D12_VERSIONED_ROOT_SIGNATURE_DESC desc = { };
+    CD3DX12_VERSIONED_ROOT_SIGNATURE_DESC::Init_1_0 (desc, 1, &param, 0,
+        nullptr,
+        D3D12_ROOT_SIGNATURE_FLAG_DENY_VERTEX_SHADER_ROOT_ACCESS |
+        D3D12_ROOT_SIGNATURE_FLAG_DENY_HULL_SHADER_ROOT_ACCESS |
+        D3D12_ROOT_SIGNATURE_FLAG_DENY_DOMAIN_SHADER_ROOT_ACCESS |
+        D3D12_ROOT_SIGNATURE_FLAG_DENY_GEOMETRY_SHADER_ROOT_ACCESS |
+        D3D12_ROOT_SIGNATURE_FLAG_DENY_PIXEL_SHADER_ROOT_ACCESS);
+
+    ComPtr < ID3DBlob > rs_blob;
+    ComPtr < ID3DBlob > error_blob;
+    auto hr = D3DX12SerializeVersionedRootSignature (&desc,
+        D3D_ROOT_SIGNATURE_VERSION_1_0, &rs_blob, &error_blob);
+    if (!gst_d3d12_result (hr, device)) {
+      const gchar *error_msg = nullptr;
+      if (error_blob)
+        error_msg = (const gchar *)
error_blob->GetBufferPointer (); + + GST_ERROR_OBJECT (device, + "Couldn't serialize rs, hr: 0x%x, error detail: %s", + (guint) hr, GST_STR_NULL (error_msg)); + } else { + rs_blob_ = rs_blob.Detach (); + } + } + GST_D3D12_CALL_ONCE_END; + + if (rs_blob_) { + *blob = rs_blob_; + rs_blob_->AddRef (); + return TRUE; + } + + return FALSE; +} + +static gboolean +gst_d3d12_weave_interlace_prepare_convert (GstD3D12WeaveInterlace * self) +{ + auto priv = self->priv; + + GstVideoFormat conv_format = GST_VIDEO_FORMAT_UNKNOWN; + auto format = GST_VIDEO_INFO_FORMAT (&priv->origin_info); + switch (format) { + case GST_VIDEO_FORMAT_YUY2: + case GST_VIDEO_FORMAT_UYVY: + case GST_VIDEO_FORMAT_VYUY: + case GST_VIDEO_FORMAT_YVYU: + case GST_VIDEO_FORMAT_v308: + case GST_VIDEO_FORMAT_IYU2: + conv_format = GST_VIDEO_FORMAT_AYUV; + break; + case GST_VIDEO_FORMAT_Y210: + case GST_VIDEO_FORMAT_Y212_LE: + case GST_VIDEO_FORMAT_Y216_LE: + case GST_VIDEO_FORMAT_v210: + case GST_VIDEO_FORMAT_v216: + conv_format = GST_VIDEO_FORMAT_AYUV64; + break; + case GST_VIDEO_FORMAT_RGB: + case GST_VIDEO_FORMAT_BGR: + conv_format = GST_VIDEO_FORMAT_RGBA; + break; + case GST_VIDEO_FORMAT_r210: + conv_format = GST_VIDEO_FORMAT_RGB10A2_LE; + break; + default: + return TRUE; + } + + GstD3DConverterCSByteCode pre_byte_code = { }; + GstD3DConverterCSByteCode post_byte_code = { }; + if (!gst_d3d_converter_shader_get_cs_blob (format, conv_format, + GST_D3D_SM_5_0, &pre_byte_code) || + !gst_d3d_converter_shader_get_cs_blob (conv_format, format, + GST_D3D_SM_5_0, &post_byte_code)) { + GST_ERROR_OBJECT (self, "Couldn't get convert shader blob"); + return FALSE; + } + + gst_video_info_set_format (&priv->info, conv_format, + priv->origin_info.width, priv->origin_info.height); + + ComPtr < ID3DBlob > rs_blob; + if (!gst_d3d12_weave_interlace_get_convert_rs_blob (priv->device, &rs_blob)) { + GST_ERROR_OBJECT (self, "Couldn't get rs blob"); + return FALSE; + } + + auto device = gst_d3d12_device_get_device_handle 
(priv->device); + auto hr = device->CreateRootSignature (0, rs_blob->GetBufferPointer (), + rs_blob->GetBufferSize (), IID_PPV_ARGS (&priv->convert_rs)); + if (!gst_d3d12_result (hr, priv->device)) { + GST_ERROR_OBJECT (self, "Couldn't create rs"); + return FALSE; + } + + D3D12_COMPUTE_PIPELINE_STATE_DESC pso_desc = { }; + pso_desc.pRootSignature = priv->convert_rs.Get (); + auto pre_context = std::make_shared < WeaveConvertContext > (); + pso_desc.CS.pShaderBytecode = pre_byte_code.byte_code.byte_code; + pso_desc.CS.BytecodeLength = pre_byte_code.byte_code.byte_code_len; + hr = device->CreateComputePipelineState (&pso_desc, + IID_PPV_ARGS (&pre_context->pso)); + if (!gst_d3d12_result (hr, priv->device)) { + GST_ERROR_OBJECT (self, "Couldn't create pso"); + return FALSE; + } + + pre_context->dispatch_x = (guint) ceil (priv->info.width / + (float) pre_byte_code.x_unit); + pre_context->dispatch_y = (guint) ceil (priv->info.height / + (float) pre_byte_code.y_unit); + + auto post_context = std::make_shared < WeaveConvertContext > (); + pso_desc.CS.pShaderBytecode = post_byte_code.byte_code.byte_code; + pso_desc.CS.BytecodeLength = post_byte_code.byte_code.byte_code_len; + hr = device->CreateComputePipelineState (&pso_desc, + IID_PPV_ARGS (&post_context->pso)); + if (!gst_d3d12_result (hr, priv->device)) { + GST_ERROR_OBJECT (self, "Couldn't create pso"); + return FALSE; + } + + post_context->dispatch_x = (guint) ceil (priv->info.width / + (float) post_byte_code.x_unit); + post_context->dispatch_y = (guint) ceil (priv->info.height / + (float) post_byte_code.y_unit); + + priv->pre_context = pre_context; + priv->post_context = post_context; + + priv->convert_pool = gst_d3d12_buffer_pool_new (priv->device); + auto config = gst_buffer_pool_get_config (priv->convert_pool); + gst_buffer_pool_config_add_option (config, GST_BUFFER_POOL_OPTION_VIDEO_META); + auto caps = gst_video_info_to_caps (&priv->origin_info); + gst_buffer_pool_config_set_params (config, + caps, 
priv->origin_info.size, 0, 0); + gst_caps_unref (caps); + + GstD3D12Format d3d12_format; + gst_d3d12_device_get_format (priv->device, format, &d3d12_format); + + D3D12_RESOURCE_FLAGS resource_flags = + D3D12_RESOURCE_FLAG_ALLOW_SIMULTANEOUS_ACCESS | + D3D12_RESOURCE_FLAG_ALLOW_UNORDERED_ACCESS; + if ((d3d12_format.support1 & D3D12_FORMAT_SUPPORT1_RENDER_TARGET) == + D3D12_FORMAT_SUPPORT1_RENDER_TARGET) { + resource_flags |= D3D12_RESOURCE_FLAG_ALLOW_RENDER_TARGET; + } + + auto params = gst_d3d12_allocation_params_new (priv->device, + &priv->origin_info, GST_D3D12_ALLOCATION_FLAG_DEFAULT, resource_flags, + D3D12_HEAP_FLAG_SHARED); + gst_buffer_pool_config_set_d3d12_allocation_params (config, params); + gst_d3d12_allocation_params_free (params); + + if (!gst_buffer_pool_set_config (priv->convert_pool, config)) { + GST_ERROR_OBJECT (self, "Couldn't set pool config"); + return FALSE; + } + + if (!gst_buffer_pool_set_active (priv->convert_pool, TRUE)) { + GST_ERROR_OBJECT (self, "Pool active failed"); + return FALSE; + } + + return TRUE; +} + +static gboolean +gst_d3d12_weave_interlace_prepare_context (GstD3D12WeaveInterlace * self, + const GstVideoInfo * info) +{ + auto priv = self->priv; + + ComPtr < ID3DBlob > rs_blob; + if (!gst_d3d12_weave_interlace_get_rs_blob (priv->device, &rs_blob)) { + GST_ERROR_OBJECT (self, "Couldn't get rs blob"); + return FALSE; + } + + auto device = gst_d3d12_device_get_device_handle (priv->device); + auto hr = device->CreateRootSignature (0, rs_blob->GetBufferPointer (), + rs_blob->GetBufferSize (), IID_PPV_ARGS (&priv->rs)); + if (!gst_d3d12_result (hr, priv->device)) { + GST_ERROR_OBJECT (self, "Couldn't create rs"); + return FALSE; + } + + auto format = GST_VIDEO_INFO_FORMAT (info); + switch (format) { + case GST_VIDEO_FORMAT_NV12: + case GST_VIDEO_FORMAT_NV21: + case GST_VIDEO_FORMAT_P010_10LE: + case GST_VIDEO_FORMAT_P012_LE: + case GST_VIDEO_FORMAT_P016_LE: + case GST_VIDEO_FORMAT_AV12: + case GST_VIDEO_FORMAT_NV16: + case 
GST_VIDEO_FORMAT_NV61: + case GST_VIDEO_FORMAT_NV24: + { + D3D12_COMPUTE_PIPELINE_STATE_DESC pso_desc = { }; + pso_desc.pRootSignature = priv->rs.Get (); + + GstD3DShaderByteCode bytecode_luma = { }; + GstD3DShaderByteCode bytecode_chroma = { }; + if (!gst_d3d_plugin_shader_get_cs_blob + (GST_D3D_PLUGIN_CS_WEAVE_INTERLACE_1, GST_D3D_SM_5_0, &bytecode_luma) + || + !gst_d3d_plugin_shader_get_cs_blob + (GST_D3D_PLUGIN_CS_WEAVE_INTERLACE_2, GST_D3D_SM_5_0, + &bytecode_chroma)) { + GST_ERROR_OBJECT (self, "Couldn't get cs blob"); + return FALSE; + } + + pso_desc.CS.pShaderBytecode = bytecode_luma.byte_code; + pso_desc.CS.BytecodeLength = bytecode_luma.byte_code_len; + + auto context = std::make_shared < WeaveContext > (); + + hr = device->CreateComputePipelineState (&pso_desc, + IID_PPV_ARGS (&context->pso)); + if (!gst_d3d12_result (hr, priv->device)) { + GST_ERROR_OBJECT (self, "Couldn't create pso"); + return FALSE; + } + + guint width = GST_ROUND_UP_2 (info->width); + guint height = GST_ROUND_UP_2 (info->height); + + context->cb_data.Width = width; + context->cb_data.Height = height; + context->cb_data.Mode = (UINT) priv->pattern; + context->cb_data.FieldOrder = priv->bff ? 1 : 0; + context->dispatch_x = (guint) ceil (width / 8.0); + context->dispatch_y = (guint) ceil (height / 8.0); + + priv->contexts.push_back (context); + + context = std::make_shared < WeaveContext > (); + + pso_desc.CS.pShaderBytecode = bytecode_chroma.byte_code; + pso_desc.CS.BytecodeLength = bytecode_chroma.byte_code_len; + hr = device->CreateComputePipelineState (&pso_desc, + IID_PPV_ARGS (&context->pso)); + if (!gst_d3d12_result (hr, priv->device)) { + GST_ERROR_OBJECT (self, "Couldn't create pso"); + return FALSE; + } + + switch (format) { + case GST_VIDEO_FORMAT_NV16: + case GST_VIDEO_FORMAT_NV61: + context->cb_data.Width = width / 2; + context->cb_data.Height = height; + context->cb_data.Mode = (UINT) priv->pattern; + context->cb_data.FieldOrder = priv->bff ? 
1 : 0;
+          context->dispatch_x = (guint) ceil (width / 16.0);
+          context->dispatch_y = (guint) ceil (height / 8.0);
+          break;
+        case GST_VIDEO_FORMAT_NV24:
+          context->cb_data.Width = width;
+          context->cb_data.Height = height;
+          context->cb_data.Mode = (UINT) priv->pattern;
+          context->cb_data.FieldOrder = priv->bff ? 1 : 0;
+          context->dispatch_x = (guint) ceil (width / 8.0);
+          context->dispatch_y = (guint) ceil (height / 8.0);
+          break;
+        default:
+          context->cb_data.Width = width / 2;
+          context->cb_data.Height = height / 2;
+          context->cb_data.Mode = (UINT) priv->pattern;
+          context->cb_data.FieldOrder = priv->bff ? 1 : 0;
+          context->dispatch_x = (guint) ceil (width / 16.0);
+          context->dispatch_y = (guint) ceil (height / 16.0);
+          break;
+      }
+
+      priv->contexts.push_back (context);
+
+      if (format == GST_VIDEO_FORMAT_AV12) {
+        context = std::make_shared < WeaveContext > ();
+        context->pso = priv->contexts[0]->pso;
+        context->cb_data.Width = width;
+        context->cb_data.Height = height;
+        context->cb_data.Mode = (UINT) priv->pattern;
+        context->cb_data.FieldOrder = priv->bff ?
1 : 0; + context->dispatch_x = (guint) ceil (width / 8.0); + context->dispatch_y = (guint) ceil (height / 8.0); + + priv->contexts.push_back (context); + } + break; + } + case GST_VIDEO_FORMAT_I420: + case GST_VIDEO_FORMAT_YV12: + case GST_VIDEO_FORMAT_I420_10LE: + case GST_VIDEO_FORMAT_I420_12LE: + { + D3D12_COMPUTE_PIPELINE_STATE_DESC pso_desc = { }; + pso_desc.pRootSignature = priv->rs.Get (); + + GstD3DShaderByteCode bytecode = { }; + + if (!gst_d3d_plugin_shader_get_cs_blob + (GST_D3D_PLUGIN_CS_WEAVE_INTERLACE_1, GST_D3D_SM_5_0, &bytecode)) { + GST_ERROR_OBJECT (self, "Couldn't get cs blob"); + return FALSE; + } + + pso_desc.CS.pShaderBytecode = bytecode.byte_code; + pso_desc.CS.BytecodeLength = bytecode.byte_code_len; + + auto context = std::make_shared < WeaveContext > (); + + hr = device->CreateComputePipelineState (&pso_desc, + IID_PPV_ARGS (&context->pso)); + if (!gst_d3d12_result (hr, priv->device)) { + GST_ERROR_OBJECT (self, "Couldn't create pso"); + return FALSE; + } + + guint width = GST_ROUND_UP_2 (info->width); + guint height = GST_ROUND_UP_2 (info->height); + + context->cb_data.Width = width; + context->cb_data.Height = height; + context->cb_data.Mode = (UINT) priv->pattern; + context->cb_data.FieldOrder = priv->bff ? 1 : 0; + context->dispatch_x = (guint) ceil (width / 8.0); + context->dispatch_y = (guint) ceil (height / 8.0); + + priv->contexts.push_back (context); + + for (guint i = 0; i < 2; i++) { + context = std::make_shared < WeaveContext > (); + context->cb_data.Width = width / 2; + context->cb_data.Height = height / 2; + context->cb_data.Mode = (UINT) priv->pattern; + context->cb_data.FieldOrder = priv->bff ? 
1 : 0; + context->dispatch_x = (guint) ceil (width / 16.0); + context->dispatch_y = (guint) ceil (height / 16.0); + + priv->contexts.push_back (context); + } + break; + } + case GST_VIDEO_FORMAT_Y41B: + { + D3D12_COMPUTE_PIPELINE_STATE_DESC pso_desc = { }; + pso_desc.pRootSignature = priv->rs.Get (); + + GstD3DShaderByteCode bytecode = { }; + + if (!gst_d3d_plugin_shader_get_cs_blob + (GST_D3D_PLUGIN_CS_WEAVE_INTERLACE_1, GST_D3D_SM_5_0, &bytecode)) { + GST_ERROR_OBJECT (self, "Couldn't get cs blob"); + return FALSE; + } + + pso_desc.CS.pShaderBytecode = bytecode.byte_code; + pso_desc.CS.BytecodeLength = bytecode.byte_code_len; + + auto context = std::make_shared < WeaveContext > (); + + hr = device->CreateComputePipelineState (&pso_desc, + IID_PPV_ARGS (&context->pso)); + if (!gst_d3d12_result (hr, priv->device)) { + GST_ERROR_OBJECT (self, "Couldn't create pso"); + return FALSE; + } + + guint width = GST_ROUND_UP_4 (info->width); + guint height = GST_ROUND_UP_4 (info->height); + + context->cb_data.Width = width; + context->cb_data.Height = height; + context->cb_data.Mode = (UINT) priv->pattern; + context->cb_data.FieldOrder = priv->bff ? 1 : 0; + context->dispatch_x = (guint) ceil (width / 8.0); + context->dispatch_y = (guint) ceil (height / 8.0); + + priv->contexts.push_back (context); + + for (guint i = 0; i < 2; i++) { + context = std::make_shared < WeaveContext > (); + context->cb_data.Width = width / 4; + context->cb_data.Height = height; + context->cb_data.Mode = (UINT) priv->pattern; + context->cb_data.FieldOrder = priv->bff ? 
1 : 0; + context->dispatch_x = (guint) ceil (width / 32.0); + context->dispatch_y = (guint) ceil (height / 8.0); + + priv->contexts.push_back (context); + } + break; + } + case GST_VIDEO_FORMAT_Y42B: + case GST_VIDEO_FORMAT_I422_10LE: + case GST_VIDEO_FORMAT_I422_12LE: + { + D3D12_COMPUTE_PIPELINE_STATE_DESC pso_desc = { }; + pso_desc.pRootSignature = priv->rs.Get (); + + GstD3DShaderByteCode bytecode = { }; + + if (!gst_d3d_plugin_shader_get_cs_blob + (GST_D3D_PLUGIN_CS_WEAVE_INTERLACE_1, GST_D3D_SM_5_0, &bytecode)) { + GST_ERROR_OBJECT (self, "Couldn't get cs blob"); + return FALSE; + } + + pso_desc.CS.pShaderBytecode = bytecode.byte_code; + pso_desc.CS.BytecodeLength = bytecode.byte_code_len; + + auto context = std::make_shared < WeaveContext > (); + + hr = device->CreateComputePipelineState (&pso_desc, + IID_PPV_ARGS (&context->pso)); + if (!gst_d3d12_result (hr, priv->device)) { + GST_ERROR_OBJECT (self, "Couldn't create pso"); + return FALSE; + } + + guint width = GST_ROUND_UP_2 (info->width); + guint height = GST_ROUND_UP_2 (info->height); + + context->cb_data.Width = width; + context->cb_data.Height = height; + context->cb_data.Mode = (UINT) priv->pattern; + context->cb_data.FieldOrder = priv->bff ? 1 : 0; + context->dispatch_x = (guint) ceil (width / 8.0); + context->dispatch_y = (guint) ceil (height / 8.0); + + priv->contexts.push_back (context); + + for (guint i = 0; i < 2; i++) { + context = std::make_shared < WeaveContext > (); + context->cb_data.Width = width / 2; + context->cb_data.Height = height; + context->cb_data.Mode = (UINT) priv->pattern; + context->cb_data.FieldOrder = priv->bff ? 
1 : 0; + context->dispatch_x = (guint) ceil (width / 16.0); + context->dispatch_y = (guint) ceil (height / 8.0); + + priv->contexts.push_back (context); + } + break; + } + case GST_VIDEO_FORMAT_YUV9: + case GST_VIDEO_FORMAT_YVU9: + { + D3D12_COMPUTE_PIPELINE_STATE_DESC pso_desc = { }; + pso_desc.pRootSignature = priv->rs.Get (); + + GstD3DShaderByteCode bytecode = { }; + if (!gst_d3d_plugin_shader_get_cs_blob + (GST_D3D_PLUGIN_CS_WEAVE_INTERLACE_1, GST_D3D_SM_5_0, &bytecode)) { + GST_ERROR_OBJECT (self, "Couldn't get cs blob"); + return FALSE; + } + + pso_desc.CS.pShaderBytecode = bytecode.byte_code; + pso_desc.CS.BytecodeLength = bytecode.byte_code_len; + + auto context = std::make_shared < WeaveContext > (); + + hr = device->CreateComputePipelineState (&pso_desc, + IID_PPV_ARGS (&context->pso)); + if (!gst_d3d12_result (hr, priv->device)) { + GST_ERROR_OBJECT (self, "Couldn't create pso"); + return FALSE; + } + + guint width = GST_ROUND_UP_4 (info->width); + guint height = GST_ROUND_UP_4 (info->height); + + context->cb_data.Width = width; + context->cb_data.Height = height; + context->cb_data.Mode = (UINT) priv->pattern; + context->cb_data.FieldOrder = priv->bff ? 1 : 0; + context->dispatch_x = (guint) ceil (width / 8.0); + context->dispatch_y = (guint) ceil (height / 8.0); + + priv->contexts.push_back (context); + + for (guint i = 0; i < 2; i++) { + context = std::make_shared < WeaveContext > (); + context->cb_data.Width = width / 4; + context->cb_data.Height = height / 4; + context->cb_data.Mode = (UINT) priv->pattern; + context->cb_data.FieldOrder = priv->bff ? 
1 : 0; + context->dispatch_x = (guint) ceil (width / 32.0); + context->dispatch_y = (guint) ceil (height / 32.0); + + priv->contexts.push_back (context); + } + break; + } + case GST_VIDEO_FORMAT_Y444: + case GST_VIDEO_FORMAT_Y444_10LE: + case GST_VIDEO_FORMAT_Y444_12LE: + case GST_VIDEO_FORMAT_Y444_16LE: + case GST_VIDEO_FORMAT_GBR: + case GST_VIDEO_FORMAT_GBR_10LE: + case GST_VIDEO_FORMAT_GBR_12LE: + case GST_VIDEO_FORMAT_GBR_16LE: + case GST_VIDEO_FORMAT_BGRP: + case GST_VIDEO_FORMAT_RGBP: + { + D3D12_COMPUTE_PIPELINE_STATE_DESC pso_desc = { }; + pso_desc.pRootSignature = priv->rs.Get (); + + GstD3DShaderByteCode bytecode = { }; + + if (!gst_d3d_plugin_shader_get_cs_blob + (GST_D3D_PLUGIN_CS_WEAVE_INTERLACE_1, GST_D3D_SM_5_0, &bytecode)) { + GST_ERROR_OBJECT (self, "Couldn't get cs blob"); + return FALSE; + } + + pso_desc.CS.pShaderBytecode = bytecode.byte_code; + pso_desc.CS.BytecodeLength = bytecode.byte_code_len; + + auto context = std::make_shared < WeaveContext > (); + hr = device->CreateComputePipelineState (&pso_desc, + IID_PPV_ARGS (&context->pso)); + if (!gst_d3d12_result (hr, priv->device)) { + GST_ERROR_OBJECT (self, "Couldn't create pso"); + return FALSE; + } + + guint width = GST_ROUND_UP_2 (info->width); + guint height = GST_ROUND_UP_2 (info->height); + + context->cb_data.Width = width; + context->cb_data.Height = height; + context->cb_data.Mode = (UINT) priv->pattern; + context->cb_data.FieldOrder = priv->bff ? 1 : 0; + context->dispatch_x = (guint) ceil (width / 8.0); + context->dispatch_y = (guint) ceil (height / 8.0); + + priv->contexts.push_back (context); + + for (guint i = 0; i < 2; i++) { + context = std::make_shared < WeaveContext > (); + context->cb_data.Width = width; + context->cb_data.Height = height; + context->cb_data.Mode = (UINT) priv->pattern; + context->cb_data.FieldOrder = priv->bff ? 
1 : 0; + context->dispatch_x = (guint) ceil (width / 8.0); + context->dispatch_y = (guint) ceil (height / 8.0); + + priv->contexts.push_back (context); + } + break; + } + case GST_VIDEO_FORMAT_RGBA64_LE: + case GST_VIDEO_FORMAT_BGRA64_LE: + case GST_VIDEO_FORMAT_Y412_LE: + case GST_VIDEO_FORMAT_Y416_LE: + case GST_VIDEO_FORMAT_RGB10A2_LE: + case GST_VIDEO_FORMAT_Y410: + case GST_VIDEO_FORMAT_BGR10A2_LE: + case GST_VIDEO_FORMAT_VUYA: + case GST_VIDEO_FORMAT_RGBA: + case GST_VIDEO_FORMAT_BGRA: + case GST_VIDEO_FORMAT_RGBx: + case GST_VIDEO_FORMAT_BGRx: + case GST_VIDEO_FORMAT_ARGB64_LE: + case GST_VIDEO_FORMAT_AYUV64: + case GST_VIDEO_FORMAT_AYUV: + case GST_VIDEO_FORMAT_ABGR: + case GST_VIDEO_FORMAT_ARGB: + case GST_VIDEO_FORMAT_xBGR: + case GST_VIDEO_FORMAT_xRGB: + case GST_VIDEO_FORMAT_GRAY16_LE: + case GST_VIDEO_FORMAT_GRAY8: + { + D3D12_COMPUTE_PIPELINE_STATE_DESC pso_desc = { }; + pso_desc.pRootSignature = priv->rs.Get (); + + GstD3DShaderByteCode bytecode = { }; + GstD3DPluginCS cs = GST_D3D_PLUGIN_CS_WEAVE_INTERLACE_4; + switch (format) { + case GST_VIDEO_FORMAT_GRAY16_LE: + case GST_VIDEO_FORMAT_GRAY8: + cs = GST_D3D_PLUGIN_CS_WEAVE_INTERLACE_1; + break; + default: + break; + } + + if (!gst_d3d_plugin_shader_get_cs_blob (cs, GST_D3D_SM_5_0, &bytecode)) { + GST_ERROR_OBJECT (self, "Couldn't get cs blob"); + return FALSE; + } + + pso_desc.CS.pShaderBytecode = bytecode.byte_code; + pso_desc.CS.BytecodeLength = bytecode.byte_code_len; + + auto context = std::make_shared < WeaveContext > (); + hr = device->CreateComputePipelineState (&pso_desc, + IID_PPV_ARGS (&context->pso)); + if (!gst_d3d12_result (hr, priv->device)) { + GST_ERROR_OBJECT (self, "Couldn't create pso"); + return FALSE; + } + + guint width = GST_ROUND_UP_2 (info->width); + guint height = GST_ROUND_UP_2 (info->height); + + context->cb_data.Width = width; + context->cb_data.Height = height; + context->cb_data.Mode = (UINT) priv->pattern; + context->cb_data.FieldOrder = priv->bff ? 
1 : 0; + context->dispatch_x = (guint) ceil (width / 8.0); + context->dispatch_y = (guint) ceil (height / 8.0); + + priv->contexts.push_back (context); + break; + } + case GST_VIDEO_FORMAT_A420: + case GST_VIDEO_FORMAT_A420_10LE: + case GST_VIDEO_FORMAT_A420_12LE: + case GST_VIDEO_FORMAT_A420_16LE: + { + D3D12_COMPUTE_PIPELINE_STATE_DESC pso_desc = { }; + pso_desc.pRootSignature = priv->rs.Get (); + + GstD3DShaderByteCode bytecode = { }; + + if (!gst_d3d_plugin_shader_get_cs_blob + (GST_D3D_PLUGIN_CS_WEAVE_INTERLACE_1, GST_D3D_SM_5_0, &bytecode)) { + GST_ERROR_OBJECT (self, "Couldn't get cs blob"); + return FALSE; + } + + pso_desc.CS.pShaderBytecode = bytecode.byte_code; + pso_desc.CS.BytecodeLength = bytecode.byte_code_len; + + auto context = std::make_shared < WeaveContext > (); + + hr = device->CreateComputePipelineState (&pso_desc, + IID_PPV_ARGS (&context->pso)); + if (!gst_d3d12_result (hr, priv->device)) { + GST_ERROR_OBJECT (self, "Couldn't create pso"); + return FALSE; + } + + guint width = GST_ROUND_UP_2 (info->width); + guint height = GST_ROUND_UP_2 (info->height); + + context->cb_data.Width = width; + context->cb_data.Height = height; + context->cb_data.Mode = (UINT) priv->pattern; + context->cb_data.FieldOrder = priv->bff ? 1 : 0; + context->dispatch_x = (guint) ceil (width / 8.0); + context->dispatch_y = (guint) ceil (height / 8.0); + + priv->contexts.push_back (context); + + for (guint i = 0; i < 2; i++) { + context = std::make_shared < WeaveContext > (); + context->cb_data.Width = width / 2; + context->cb_data.Height = height / 2; + context->cb_data.Mode = (UINT) priv->pattern; + context->cb_data.FieldOrder = priv->bff ? 
1 : 0; + context->dispatch_x = (guint) ceil (width / 16.0); + context->dispatch_y = (guint) ceil (height / 16.0); + + priv->contexts.push_back (context); + } + + context = std::make_shared < WeaveContext > (); + context->cb_data.Width = width; + context->cb_data.Height = height; + context->cb_data.Mode = (UINT) priv->pattern; + context->cb_data.FieldOrder = priv->bff ? 1 : 0; + context->dispatch_x = (guint) ceil (width / 8.0); + context->dispatch_y = (guint) ceil (height / 8.0); + + priv->contexts.push_back (context); + break; + } + case GST_VIDEO_FORMAT_A422: + case GST_VIDEO_FORMAT_A422_10LE: + case GST_VIDEO_FORMAT_A422_12LE: + case GST_VIDEO_FORMAT_A422_16LE: + { + D3D12_COMPUTE_PIPELINE_STATE_DESC pso_desc = { }; + pso_desc.pRootSignature = priv->rs.Get (); + + GstD3DShaderByteCode bytecode = { }; + + if (!gst_d3d_plugin_shader_get_cs_blob + (GST_D3D_PLUGIN_CS_WEAVE_INTERLACE_1, GST_D3D_SM_5_0, &bytecode)) { + GST_ERROR_OBJECT (self, "Couldn't get cs blob"); + return FALSE; + } + + pso_desc.CS.pShaderBytecode = bytecode.byte_code; + pso_desc.CS.BytecodeLength = bytecode.byte_code_len; + + auto context = std::make_shared < WeaveContext > (); + + hr = device->CreateComputePipelineState (&pso_desc, + IID_PPV_ARGS (&context->pso)); + if (!gst_d3d12_result (hr, priv->device)) { + GST_ERROR_OBJECT (self, "Couldn't create pso"); + return FALSE; + } + + guint width = GST_ROUND_UP_2 (info->width); + guint height = GST_ROUND_UP_2 (info->height); + + context->cb_data.Width = width; + context->cb_data.Height = height; + context->cb_data.Mode = (UINT) priv->pattern; + context->cb_data.FieldOrder = priv->bff ? 
1 : 0; + context->dispatch_x = (guint) ceil (width / 8.0); + context->dispatch_y = (guint) ceil (height / 8.0); + + priv->contexts.push_back (context); + + for (guint i = 0; i < 2; i++) { + context = std::make_shared < WeaveContext > (); + context->cb_data.Width = width / 2; + context->cb_data.Height = height; + context->cb_data.Mode = (UINT) priv->pattern; + context->cb_data.FieldOrder = priv->bff ? 1 : 0; + context->dispatch_x = (guint) ceil (width / 16.0); + context->dispatch_y = (guint) ceil (height / 8.0); + + priv->contexts.push_back (context); + } + + context = std::make_shared < WeaveContext > (); + context->cb_data.Width = width; + context->cb_data.Height = height; + context->cb_data.Mode = (UINT) priv->pattern; + context->cb_data.FieldOrder = priv->bff ? 1 : 0; + context->dispatch_x = (guint) ceil (width / 8.0); + context->dispatch_y = (guint) ceil (height / 8.0); + + priv->contexts.push_back (context); + break; + } + case GST_VIDEO_FORMAT_GBRA: + case GST_VIDEO_FORMAT_GBRA_10LE: + case GST_VIDEO_FORMAT_GBRA_12LE: + case GST_VIDEO_FORMAT_A444: + case GST_VIDEO_FORMAT_A444_10LE: + case GST_VIDEO_FORMAT_A444_12LE: + case GST_VIDEO_FORMAT_A444_16LE: + { + D3D12_COMPUTE_PIPELINE_STATE_DESC pso_desc = { }; + pso_desc.pRootSignature = priv->rs.Get (); + + GstD3DShaderByteCode bytecode = { }; + + if (!gst_d3d_plugin_shader_get_cs_blob + (GST_D3D_PLUGIN_CS_WEAVE_INTERLACE_1, GST_D3D_SM_5_0, &bytecode)) { + GST_ERROR_OBJECT (self, "Couldn't get cs blob"); + return FALSE; + } + + pso_desc.CS.pShaderBytecode = bytecode.byte_code; + pso_desc.CS.BytecodeLength = bytecode.byte_code_len; + + auto context = std::make_shared < WeaveContext > (); + + hr = device->CreateComputePipelineState (&pso_desc, + IID_PPV_ARGS (&context->pso)); + if (!gst_d3d12_result (hr, priv->device)) { + GST_ERROR_OBJECT (self, "Couldn't create pso"); + return FALSE; + } + + guint width = GST_ROUND_UP_2 (info->width); + guint height = GST_ROUND_UP_2 (info->height); + + context->cb_data.Width = 
width; + context->cb_data.Height = height; + context->cb_data.Mode = (UINT) priv->pattern; + context->cb_data.FieldOrder = priv->bff ? 1 : 0; + context->dispatch_x = (guint) ceil (width / 8.0); + context->dispatch_y = (guint) ceil (height / 8.0); + + priv->contexts.push_back (context); + + for (guint i = 0; i < 3; i++) { + context = std::make_shared < WeaveContext > (); + context->cb_data.Width = width; + context->cb_data.Height = height; + context->cb_data.Mode = (UINT) priv->pattern; + context->cb_data.FieldOrder = priv->bff ? 1 : 0; + context->dispatch_x = (guint) ceil (width / 8.0); + context->dispatch_y = (guint) ceil (height / 8.0); + + priv->contexts.push_back (context); + } + break; + } + default: + GST_ERROR_OBJECT (self, "Not supported format %s", + gst_video_format_to_string (GST_VIDEO_INFO_FORMAT (info))); + return FALSE; + } + + D3D12_DESCRIPTOR_HEAP_DESC heap_desc = { }; + heap_desc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV; + /* max 3 descriptors per Dispatch (2 SRV and 1 UAV) */ + heap_desc.NumDescriptors = 3 * GST_VIDEO_INFO_N_PLANES (info); + heap_desc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE; + priv->desc_pool = gst_d3d12_desc_heap_pool_new (device, &heap_desc); + + priv->desc_inc_size = device->GetDescriptorHandleIncrementSize + (D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV); + + priv->output_pool = gst_d3d12_buffer_pool_new (priv->device); + auto config = gst_buffer_pool_get_config (priv->output_pool); + gst_buffer_pool_config_add_option (config, GST_BUFFER_POOL_OPTION_VIDEO_META); + auto caps = gst_video_info_to_caps (info); + gst_buffer_pool_config_set_params (config, caps, info->size, 0, 0); + gst_caps_unref (caps); + + GstD3D12Format d3d12_format; + gst_d3d12_device_get_format (priv->device, GST_VIDEO_INFO_FORMAT (info), + &d3d12_format); + + D3D12_RESOURCE_FLAGS resource_flags = + D3D12_RESOURCE_FLAG_ALLOW_SIMULTANEOUS_ACCESS | + D3D12_RESOURCE_FLAG_ALLOW_UNORDERED_ACCESS; + if ((d3d12_format.support1 & 
D3D12_FORMAT_SUPPORT1_RENDER_TARGET) == + D3D12_FORMAT_SUPPORT1_RENDER_TARGET) { + resource_flags |= D3D12_RESOURCE_FLAG_ALLOW_RENDER_TARGET; + } + + auto params = gst_d3d12_allocation_params_new (priv->device, info, + GST_D3D12_ALLOCATION_FLAG_DEFAULT, resource_flags, + D3D12_HEAP_FLAG_SHARED); + gst_buffer_pool_config_set_d3d12_allocation_params (config, params); + gst_d3d12_allocation_params_free (params); + + if (!gst_buffer_pool_set_config (priv->output_pool, config)) { + GST_ERROR_OBJECT (self, "Couldn't set pool config"); + return FALSE; + } + + if (!gst_buffer_pool_set_active (priv->output_pool, TRUE)) { + GST_ERROR_OBJECT (self, "Pool active failed"); + return FALSE; + } + + return TRUE; +} + +GstD3D12WeaveInterlace * +gst_d3d12_weave_interlace_new (GstD3D12Device * device, + const GstVideoInfo * info, GstD3D12WeaveInterlacPattern pattern, + gboolean bff, gboolean use_compute) +{ + g_return_val_if_fail (GST_IS_D3D12_DEVICE (device), nullptr); + g_return_val_if_fail (info, nullptr); + + auto self = (GstD3D12WeaveInterlace *) + g_object_new (GST_TYPE_D3D12_WEAVE_INTERLACE, nullptr); + gst_object_ref_sink (self); + + auto priv = self->priv; + priv->info = *info; + priv->origin_info = *info; + priv->device = (GstD3D12Device *) gst_object_ref (device); + priv->queue_type = use_compute ? + D3D12_COMMAND_LIST_TYPE_COMPUTE : D3D12_COMMAND_LIST_TYPE_DIRECT; + priv->pattern = pattern; + priv->bff = bff; + + if (priv->pattern == GST_D3D12_WEAVE_INTERLACE_PATTERN_2_2) { + /* In case of 2:2, we just modify buffer flags without any other processing. 
+ * Do not allocate any GPU resources */ + return self; + } + + if (!gst_d3d12_weave_interlace_prepare_convert (self)) { + gst_object_unref (self); + return nullptr; + } + + if (!gst_d3d12_weave_interlace_prepare_context (self, &priv->info)) { + gst_object_unref (self); + return nullptr; + } + + auto device_handle = gst_d3d12_device_get_device_handle (device); + priv->ca_pool = gst_d3d12_cmd_alloc_pool_new (device_handle, + priv->queue_type); + priv->cq = gst_d3d12_device_get_cmd_queue (priv->device, priv->queue_type); + gst_object_ref (priv->cq); + priv->fence = gst_d3d12_cmd_queue_get_fence_handle (priv->cq); + + return self; +} + +struct GstD3D12WeaveInterlaceFrameCtx +{ + GstD3D12Frame prev; + GstD3D12Frame cur; + GstD3D12Frame out_frame; + GstD3D12Frame conv_frame; +}; + +static void +gst_d3d12_weave_interlace_unmap_frame_ctx (GstD3D12WeaveInterlaceFrameCtx * ctx) +{ + gst_d3d12_frame_unmap (&ctx->prev); + gst_d3d12_frame_unmap (&ctx->cur); + gst_d3d12_frame_unmap (&ctx->out_frame); + gst_d3d12_frame_unmap (&ctx->conv_frame); +} + +static inline void +clear_buffer_interlace_flags (GstBuffer * buffer) +{ + GST_BUFFER_FLAG_UNSET (buffer, GST_VIDEO_BUFFER_FLAG_TFF); + GST_BUFFER_FLAG_UNSET (buffer, GST_VIDEO_BUFFER_FLAG_RFF); + GST_BUFFER_FLAG_UNSET (buffer, GST_VIDEO_BUFFER_FLAG_ONEFIELD); + GST_BUFFER_FLAG_UNSET (buffer, GST_VIDEO_BUFFER_FLAG_TOP_FIELD); + GST_BUFFER_FLAG_UNSET (buffer, GST_VIDEO_BUFFER_FLAG_BOTTOM_FIELD); + GST_BUFFER_FLAG_UNSET (buffer, GST_VIDEO_BUFFER_FLAG_INTERLACED); +} + +static gboolean +gst_d3d12_weave_interlace_map_frames (GstD3D12WeaveInterlace * self, + GstD3D12WeaveInterlaceFrameCtx * ctx, GstD3D12FenceData * fence_data, + std::vector < ID3D12Fence * >&fences_to_wait, + std::vector < guint64 > &fence_values_to_wait) +{ + auto priv = self->priv; + GstBuffer *output_buf = nullptr; + GstBuffer *output_conv_buf = nullptr; + GstD3D12FrameMapFlags out_map_flags = GST_D3D12_FRAME_MAP_FLAG_UAV; + + if (priv->post_context) + out_map_flags 
|= GST_D3D12_FRAME_MAP_FLAG_SRV; + + memset (ctx, 0, sizeof (GstD3D12WeaveInterlaceFrameCtx)); + + if (!gst_d3d12_frame_map (&ctx->prev, &priv->info, priv->prev_buf, + GST_MAP_READ, GST_D3D12_FRAME_MAP_FLAG_SRV)) { + GST_ERROR_OBJECT (self, "Couldn't map prev frame"); + goto error; + } + + if (!gst_d3d12_frame_map (&ctx->cur, &priv->info, priv->cur_buf, + GST_MAP_READ, GST_D3D12_FRAME_MAP_FLAG_SRV)) { + GST_ERROR_OBJECT (self, "Couldn't map cur frame"); + goto error; + } + + gst_buffer_pool_acquire_buffer (priv->output_pool, &output_buf, nullptr); + if (!output_buf) { + GST_ERROR_OBJECT (self, "Couldn't acquire first field buffer"); + goto error; + } + + if (priv->post_context) { + gst_buffer_pool_acquire_buffer (priv->convert_pool, + &output_conv_buf, nullptr); + if (!output_conv_buf) { + GST_ERROR_OBJECT (self, "Couldn't acquire first field output buffer"); + gst_clear_buffer (&output_buf); + goto error; + } + + gst_d3d12_fence_data_push (fence_data, + FENCE_NOTIFY_MINI_OBJECT (output_buf)); + priv->out_buf = output_conv_buf; + } else { + priv->out_buf = output_buf; + } + + /* Copy buffer flags except for interlace related ones */ + gst_buffer_copy_into (priv->out_buf, priv->prev_buf, GST_BUFFER_COPY_METADATA, + 0, -1); + clear_buffer_interlace_flags (priv->out_buf); + GST_BUFFER_FLAG_SET (priv->out_buf, GST_VIDEO_BUFFER_FLAG_INTERLACED); + if (!priv->bff) + GST_BUFFER_FLAG_SET (priv->out_buf, GST_VIDEO_BUFFER_FLAG_TFF); + + { + auto start_pts = GST_BUFFER_PTS (priv->prev_buf); + if (GST_CLOCK_TIME_IS_VALID (start_pts)) { + auto end_pts = GST_BUFFER_PTS (priv->cur_buf); + if (GST_CLOCK_TIME_IS_VALID (end_pts)) { + if (GST_BUFFER_DURATION_IS_VALID (priv->cur_buf)) + end_pts += GST_BUFFER_DURATION (priv->cur_buf); + + if (end_pts > start_pts) + GST_BUFFER_DURATION (priv->out_buf) = end_pts - start_pts; + } + } + } + + if (!gst_d3d12_frame_map (&ctx->out_frame, &priv->info, output_buf, + GST_MAP_D3D12, out_map_flags)) { + GST_ERROR_OBJECT (self, "Couldn't map first 
field output"); + goto error; + } + + if (output_conv_buf && !gst_d3d12_frame_map (&ctx->conv_frame, + &priv->origin_info, output_conv_buf, + GST_MAP_D3D12, GST_D3D12_FRAME_MAP_FLAG_UAV)) { + GST_ERROR_OBJECT (self, "Couldn't map first field convert output"); + goto error; + } + + gst_d3d12_fence_data_push (fence_data, + FENCE_NOTIFY_MINI_OBJECT (gst_buffer_ref (priv->prev_buf))); + gst_d3d12_fence_data_push (fence_data, + FENCE_NOTIFY_MINI_OBJECT (gst_buffer_ref (priv->cur_buf))); + + for (guint i = 0; i < GST_VIDEO_INFO_N_PLANES (&priv->info); i++) { + if (ctx->prev.fencei.fence && + ctx->prev.fencei.fence != priv->fence.Get ()) { + fences_to_wait.push_back (ctx->prev.fencei.fence); + fence_values_to_wait.push_back (ctx->prev.fencei.fence_value); + } + + if (ctx->cur.fencei.fence && + ctx->cur.fencei.fence != priv->fence.Get ()) { + fences_to_wait.push_back (ctx->cur.fencei.fence); + fence_values_to_wait.push_back (ctx->cur.fencei.fence_value); + } + } + + return TRUE; + +error: + gst_d3d12_weave_interlace_unmap_frame_ctx (ctx); + gst_clear_buffer (&priv->out_buf); + + return FALSE; +} + +static GstFlowReturn +gst_d3d12_weave_interlace_process_frame (GstD3D12WeaveInterlace * self) +{ + auto priv = self->priv; + + auto device = gst_d3d12_device_get_device_handle (priv->device); + GstD3D12FenceData *fence_data; + gst_d3d12_fence_data_pool_acquire (priv->fence_pool, &fence_data); + + GstD3D12WeaveInterlaceFrameCtx frame_ctx; + std::vector < ID3D12Fence * >fences_to_wait; + std::vector < guint64 > fence_values_to_wait; + + if (!gst_d3d12_weave_interlace_map_frames (self, &frame_ctx, fence_data, + fences_to_wait, fence_values_to_wait)) { + GST_ERROR_OBJECT (self, "Couldn't map frame context"); + gst_d3d12_fence_data_unref (fence_data); + return GST_FLOW_ERROR; + } + + GstD3D12DescHeap *desc_heap; + if (!gst_d3d12_desc_heap_pool_acquire (priv->desc_pool, &desc_heap)) { + GST_ERROR_OBJECT (self, "Couldn't acquire descriptor heap"); + 
gst_d3d12_weave_interlace_unmap_frame_ctx (&frame_ctx); + gst_d3d12_fence_data_unref (fence_data); + return GST_FLOW_ERROR; + } + + gst_d3d12_fence_data_push (fence_data, FENCE_NOTIFY_MINI_OBJECT (desc_heap)); + + GstD3D12DescHeap *conv_desc_heap = nullptr; + ID3D12DescriptorHeap *conv_desc_handle = nullptr; + if (priv->post_context) { + if (!gst_d3d12_desc_heap_pool_acquire (priv->desc_pool, &conv_desc_heap)) { + GST_ERROR_OBJECT (self, "Couldn't acquire descriptor heap"); + gst_d3d12_weave_interlace_unmap_frame_ctx (&frame_ctx); + gst_d3d12_fence_data_unref (fence_data); + return GST_FLOW_ERROR; + } + + gst_d3d12_fence_data_push (fence_data, + FENCE_NOTIFY_MINI_OBJECT (conv_desc_heap)); + } + + auto desc_handle = gst_d3d12_desc_heap_get_handle (desc_heap); + auto cpu_handle = CD3DX12_CPU_DESCRIPTOR_HANDLE + (GetCPUDescriptorHandleForHeapStart (desc_handle)); + + for (guint i = 0; i < GST_VIDEO_INFO_N_PLANES (&priv->info); i++) { + device->CopyDescriptorsSimple (1, cpu_handle, + frame_ctx.prev.srv_desc_handlei, + D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV); + cpu_handle.Offset (priv->desc_inc_size); + + device->CopyDescriptorsSimple (1, cpu_handle, + frame_ctx.cur.srv_desc_handlei, + D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV); + cpu_handle.Offset (priv->desc_inc_size); + + device->CopyDescriptorsSimple (1, cpu_handle, + frame_ctx.out_frame.uav_desc_handlei, + D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV); + cpu_handle.Offset (priv->desc_inc_size); + } + + if (conv_desc_heap) { + conv_desc_handle = gst_d3d12_desc_heap_get_handle (conv_desc_heap); + auto conv_cpu_handle = CD3DX12_CPU_DESCRIPTOR_HANDLE + (GetCPUDescriptorHandleForHeapStart (conv_desc_handle)); + + device->CopyDescriptorsSimple (1, conv_cpu_handle, + frame_ctx.out_frame.srv_desc_handle0, + D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV); + conv_cpu_handle.Offset (priv->desc_inc_size); + + device->CopyDescriptorsSimple (1, conv_cpu_handle, + frame_ctx.conv_frame.uav_desc_handle0, + D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV); 
+ conv_cpu_handle.Offset (priv->desc_inc_size); + } + + GstD3D12CmdAlloc *gst_ca; + if (!gst_d3d12_cmd_alloc_pool_acquire (priv->ca_pool, &gst_ca)) { + GST_ERROR_OBJECT (self, "Couldn't acquire command allocator"); + gst_d3d12_weave_interlace_unmap_frame_ctx (&frame_ctx); + gst_d3d12_fence_data_unref (fence_data); + return GST_FLOW_ERROR; + } + + auto ca = gst_d3d12_cmd_alloc_get_handle (gst_ca); + gst_d3d12_fence_data_push (fence_data, FENCE_NOTIFY_MINI_OBJECT (gst_ca)); + + HRESULT hr = ca->Reset (); + if (!gst_d3d12_result (hr, priv->device)) { + GST_ERROR_OBJECT (self, "Couldn't reset command allocator"); + gst_d3d12_weave_interlace_unmap_frame_ctx (&frame_ctx); + gst_d3d12_fence_data_unref (fence_data); + return GST_FLOW_ERROR; + } + + if (!priv->cl) { + hr = device->CreateCommandList (0, priv->queue_type, + ca, nullptr, IID_PPV_ARGS (&priv->cl)); + } else { + hr = priv->cl->Reset (ca, nullptr); + } + + if (!gst_d3d12_result (hr, priv->device)) { + GST_ERROR_OBJECT (self, "Couldn't reset command list"); + gst_d3d12_weave_interlace_unmap_frame_ctx (&frame_ctx); + gst_d3d12_fence_data_unref (fence_data); + return GST_FLOW_ERROR; + } + + auto gpu_handle = CD3DX12_GPU_DESCRIPTOR_HANDLE + (GetGPUDescriptorHandleForHeapStart (desc_handle)); + + priv->cl->SetComputeRootSignature (priv->rs.Get ()); + ID3D12DescriptorHeap *heaps = { desc_handle }; + priv->cl->SetDescriptorHeaps (1, heaps); + + for (size_t i = 0; i < priv->contexts.size (); i++) { + auto & ctx = priv->contextsi; + if (ctx->pso) + priv->cl->SetPipelineState (ctx->pso.Get ()); + + priv->cl->SetComputeRootDescriptorTable (0, gpu_handle); + gpu_handle.Offset (priv->desc_inc_size * 3); + + priv->cl->SetComputeRoot32BitConstants (1, 4, &ctx->cb_data, 0); + priv->cl->Dispatch (ctx->dispatch_x, ctx->dispatch_y, 1); + + if (priv->post_context) { + auto barrier = + CD3DX12_RESOURCE_BARRIER::Transition (frame_ctx.out_frame.data0, + D3D12_RESOURCE_STATE_UNORDERED_ACCESS, + 
D3D12_RESOURCE_STATE_NON_PIXEL_SHADER_RESOURCE, + D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES, + D3D12_RESOURCE_BARRIER_FLAG_BEGIN_ONLY); + priv->cl->ResourceBarrier (1, &barrier); + } + } + + if (priv->post_context) { + auto conv_gpu_handle = CD3DX12_GPU_DESCRIPTOR_HANDLE + (GetGPUDescriptorHandleForHeapStart (conv_desc_handle)); + auto ctx = priv->post_context; + + priv->cl->SetComputeRootSignature (priv->convert_rs.Get ()); + ID3D12DescriptorHeap *conv_heaps = { conv_desc_handle }; + priv->cl->SetDescriptorHeaps (1, conv_heaps); + priv->cl->SetPipelineState (ctx->pso.Get ()); + + auto barrier = CD3DX12_RESOURCE_BARRIER::Transition + (frame_ctx.out_frame.data0, + D3D12_RESOURCE_STATE_UNORDERED_ACCESS, + D3D12_RESOURCE_STATE_NON_PIXEL_SHADER_RESOURCE, + D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES, + D3D12_RESOURCE_BARRIER_FLAG_END_ONLY); + priv->cl->ResourceBarrier (1, &barrier); + + priv->cl->SetComputeRootDescriptorTable (0, conv_gpu_handle); + conv_gpu_handle.Offset (priv->desc_inc_size * 2); + priv->cl->Dispatch (ctx->dispatch_x, ctx->dispatch_y, 1); + } + + hr = priv->cl->Close (); + + if (!gst_d3d12_result (hr, priv->device)) { + GST_ERROR_OBJECT (self, "Couldn't close command list"); + gst_d3d12_weave_interlace_unmap_frame_ctx (&frame_ctx); + gst_d3d12_fence_data_unref (fence_data); + gst_clear_buffer (&priv->out_buf); + return GST_FLOW_ERROR; + } + + ID3D12CommandList *cmd_list = { priv->cl.Get () }; + if (fences_to_wait.empty ()) { + hr = gst_d3d12_cmd_queue_execute_command_lists (priv->cq, + 1, cmd_list, &priv->fence_val); + } else { + hr = gst_d3d12_cmd_queue_execute_command_lists_full (priv->cq, + fences_to_wait.size (), fences_to_wait.data (), + fence_values_to_wait.data (), 1, cmd_list, &priv->fence_val); + } + + gst_d3d12_weave_interlace_unmap_frame_ctx (&frame_ctx); + + if (!gst_d3d12_result (hr, priv->device)) { + GST_ERROR_OBJECT (self, "Couldn't execute command list"); + gst_d3d12_fence_data_unref (fence_data); + gst_clear_buffer (&priv->out_buf); + 
return GST_FLOW_ERROR; + } + + gst_d3d12_cmd_queue_set_notify (priv->cq, priv->fence_val, + FENCE_NOTIFY_MINI_OBJECT (fence_data)); + + gst_d3d12_buffer_set_fence (priv->out_buf, priv->fence.Get (), + priv->fence_val, FALSE); + gst_vec_deque_push_tail (priv->output_queue, priv->out_buf); + priv->out_buf = nullptr; + + return GST_FLOW_OK; +} + +static GstFlowReturn +gst_d3d12_weave_interlace_push_unlocked (GstD3D12WeaveInterlace * self, + GstBuffer * buffer) +{ + auto priv = self->priv; + + if (!priv->prev_buf) { + priv->prev_buf = buffer; + return GST_D3D12_WEAVE_INTERLACE_FLOW_NEED_DATA; + } + + priv->cur_buf = buffer; + if (!priv->is_forward) { + auto tmp = priv->prev_buf; + priv->prev_buf = priv->cur_buf; + priv->cur_buf = tmp; + } + + auto ret = gst_d3d12_weave_interlace_process_frame (self); + priv->Flush (); + + return ret; +} + +static GstBuffer * +gst_d3d12_weave_interlace_preproc (GstD3D12WeaveInterlace * self, + GstBuffer * buffer) +{ + auto priv = self->priv; + + if (!priv->pre_context) + return buffer; + + GstD3D12FenceData *fence_data; + gst_d3d12_fence_data_pool_acquire (priv->fence_pool, &fence_data); + gst_d3d12_fence_data_push (fence_data, FENCE_NOTIFY_MINI_OBJECT (buffer)); + + GstD3D12CmdAlloc *gst_ca; + if (!gst_d3d12_cmd_alloc_pool_acquire (priv->ca_pool, &gst_ca)) { + GST_ERROR_OBJECT (self, "Couldn't acquire command allocator"); + gst_d3d12_fence_data_unref (fence_data); + return nullptr; + } + + auto ca = gst_d3d12_cmd_alloc_get_handle (gst_ca); + gst_d3d12_fence_data_push (fence_data, FENCE_NOTIFY_MINI_OBJECT (gst_ca)); + + auto hr = ca->Reset (); + if (!gst_d3d12_result (hr, priv->device)) { + GST_ERROR_OBJECT (self, "Couldn't reset command allocator"); + gst_d3d12_fence_data_unref (fence_data); + return nullptr; + } + + auto device = gst_d3d12_device_get_device_handle (priv->device); + if (!priv->cl) { + hr = device->CreateCommandList (0, priv->queue_type, + ca, nullptr, IID_PPV_ARGS (&priv->cl)); + } else { + hr = priv->cl->Reset (ca, 
nullptr); + } + + if (!gst_d3d12_result (hr, priv->device)) { + GST_ERROR_OBJECT (self, "Couldn't reset command list"); + gst_d3d12_fence_data_unref (fence_data); + return nullptr; + } + + GstD3D12DescHeap *desc_heap; + if (!gst_d3d12_desc_heap_pool_acquire (priv->desc_pool, &desc_heap)) { + GST_ERROR_OBJECT (self, "Couldn't acquire descriptor heap"); + gst_d3d12_fence_data_unref (fence_data); + return nullptr; + } + + gst_d3d12_fence_data_push (fence_data, FENCE_NOTIFY_MINI_OBJECT (desc_heap)); + + GstBuffer *outbuf = nullptr; + gst_buffer_pool_acquire_buffer (priv->output_pool, &outbuf, nullptr); + if (!outbuf) { + GST_ERROR_OBJECT (self, "Couldn't acquire output buffer"); + gst_d3d12_fence_data_unref (fence_data); + return nullptr; + } + + gst_buffer_copy_into (outbuf, buffer, GST_BUFFER_COPY_METADATA, 0, -1); + GstD3D12Frame in_frame, out_frame; + if (!gst_d3d12_frame_map (&in_frame, &priv->origin_info, buffer, + GST_MAP_READ, GST_D3D12_FRAME_MAP_FLAG_SRV)) { + GST_ERROR_OBJECT (self, "Couldn't map frame"); + gst_d3d12_fence_data_unref (fence_data); + return nullptr; + } + + if (!gst_d3d12_frame_map (&out_frame, &priv->info, outbuf, + GST_MAP_D3D12, GST_D3D12_FRAME_MAP_FLAG_UAV)) { + GST_ERROR_OBJECT (self, "Couldn't map frame"); + gst_d3d12_frame_unmap (&in_frame); + gst_d3d12_fence_data_unref (fence_data); + return nullptr; + } + + auto desc_handle = gst_d3d12_desc_heap_get_handle (desc_heap); + auto cpu_handle = CD3DX12_CPU_DESCRIPTOR_HANDLE + (GetCPUDescriptorHandleForHeapStart (desc_handle)); + + device->CopyDescriptorsSimple (1, cpu_handle, in_frame.srv_desc_handle0, + D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV); + cpu_handle.Offset (priv->desc_inc_size); + device->CopyDescriptorsSimple (1, cpu_handle, out_frame.uav_desc_handle0, + D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV); + + auto gpu_handle = CD3DX12_GPU_DESCRIPTOR_HANDLE + (GetGPUDescriptorHandleForHeapStart (desc_handle)); + priv->cl->SetComputeRootSignature (priv->rs.Get ()); + + ID3D12DescriptorHeap 
*heaps[] = { desc_handle }; + priv->cl->SetDescriptorHeaps (1, heaps); + + auto ctx = priv->pre_context; + priv->cl->SetPipelineState (ctx->pso.Get ()); + priv->cl->SetComputeRootDescriptorTable (0, gpu_handle); + priv->cl->Dispatch (ctx->dispatch_x, ctx->dispatch_y, 1); + hr = priv->cl->Close (); + + if (!gst_d3d12_result (hr, priv->device)) { + GST_ERROR_OBJECT (self, "Couldn't close command list"); + gst_d3d12_frame_unmap (&in_frame); + gst_d3d12_frame_unmap (&out_frame); + gst_d3d12_fence_data_unref (fence_data); + gst_buffer_unref (outbuf); + return nullptr; + } + + ID3D12CommandList *cmd_list[] = { priv->cl.Get () }; + if (in_frame.fence->fence) { + hr = gst_d3d12_cmd_queue_execute_command_lists_full (priv->cq, + 1, &in_frame.fence->fence, &in_frame.fence->fence_value, + 1, cmd_list, &priv->fence_val); + } else { + hr = gst_d3d12_cmd_queue_execute_command_lists (priv->cq, + 1, cmd_list, &priv->fence_val); + } + + gst_d3d12_frame_unmap (&in_frame); + gst_d3d12_frame_unmap (&out_frame); + + if (!gst_d3d12_result (hr, priv->device)) { + GST_ERROR_OBJECT (self, "Couldn't execute command list"); + gst_d3d12_fence_data_unref (fence_data); + gst_buffer_unref (outbuf); + return nullptr; + } + + gst_d3d12_cmd_queue_set_notify (priv->cq, priv->fence_val, + FENCE_NOTIFY_MINI_OBJECT (fence_data)); + gst_d3d12_buffer_set_fence (outbuf, priv->fence.Get (), + priv->fence_val, FALSE); + + return outbuf; +} + +void +gst_d3d12_weave_interlace_set_direction (GstD3D12WeaveInterlace * interlace, + gboolean is_forward) +{ + g_return_if_fail (GST_IS_D3D12_WEAVE_INTERLACE (interlace)); + + auto priv = interlace->priv; + priv->is_forward = is_forward; +} + +GstFlowReturn +gst_d3d12_weave_interlace_push (GstD3D12WeaveInterlace * interlace, + GstBuffer * buffer) +{ + g_return_val_if_fail (GST_IS_D3D12_WEAVE_INTERLACE (interlace), + GST_FLOW_ERROR); + g_return_val_if_fail (GST_IS_BUFFER (buffer), GST_FLOW_ERROR); + + auto priv = interlace->priv; + + std::lock_guard < std::mutex > lk 
(priv->lock); + if (priv->pattern == GST_D3D12_WEAVE_INTERLACE_PATTERN_2_2) { + buffer = gst_buffer_make_writable (buffer); + clear_buffer_interlace_flags (buffer); + + GST_BUFFER_FLAG_SET (buffer, GST_VIDEO_BUFFER_FLAG_INTERLACED); + if (!priv->bff) + GST_BUFFER_FLAG_SET (buffer, GST_VIDEO_BUFFER_FLAG_TFF); + + gst_vec_deque_push_tail (priv->output_queue, buffer); + + return GST_FLOW_OK; + } + + buffer = gst_d3d12_weave_interlace_preproc (interlace, buffer); + if (!buffer) + return GST_FLOW_ERROR; + + return gst_d3d12_weave_interlace_push_unlocked (interlace, buffer); +} + +GstFlowReturn +gst_d3d12_weave_interlace_pop (GstD3D12WeaveInterlace * interlace, + GstBuffer ** buffer) +{ + g_return_val_if_fail (GST_IS_D3D12_WEAVE_INTERLACE (interlace), + GST_FLOW_ERROR); + g_return_val_if_fail (buffer, GST_FLOW_ERROR); + + auto priv = interlace->priv; + + std::lock_guard < std::mutex > lk (priv->lock); + + *buffer = nullptr; + if (gst_vec_deque_is_empty (priv->output_queue)) + return GST_D3D12_WEAVE_INTERLACE_FLOW_NEED_DATA; + + *buffer = (GstBuffer *) gst_vec_deque_pop_head (priv->output_queue); + + return GST_FLOW_OK; +} + +GstFlowReturn +gst_d3d12_weave_interlace_drain (GstD3D12WeaveInterlace * interlace) +{ + g_return_val_if_fail (GST_IS_D3D12_WEAVE_INTERLACE (interlace), + GST_FLOW_ERROR); + + auto priv = interlace->priv; + + std::lock_guard < std::mutex > lk (priv->lock); + if (!priv->prev_buf) { + priv->Flush (); + return GST_D3D12_WEAVE_INTERLACE_FLOW_NEED_DATA; + } + + auto ret = gst_d3d12_weave_interlace_push_unlocked (interlace, + gst_buffer_copy (priv->prev_buf)); + priv->Flush (); + + return ret; +} + +void +gst_d3d12_weave_interlace_flush (GstD3D12WeaveInterlace * interlace) +{ + g_return_if_fail (GST_IS_D3D12_WEAVE_INTERLACE (interlace)); + + auto priv = interlace->priv; + + std::lock_guard < std::mutex > lk (priv->lock); + priv->Flush (); + gst_vec_deque_clear (priv->output_queue); +}
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3d12/gstd3d12weaveinterlace.h
Added
@@ -0,0 +1,59 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/gst.h> +#include <gst/video/video.h> +#include <gst/d3d12/gstd3d12.h> + +G_BEGIN_DECLS + +#define GST_TYPE_D3D12_WEAVE_INTERLACE (gst_d3d12_weave_interlace_get_type()) +G_DECLARE_FINAL_TYPE (GstD3D12WeaveInterlace, gst_d3d12_weave_interlace, + GST, D3D12_WEAVE_INTERLACE, GstObject) + +#define GST_D3D12_WEAVE_INTERLACE_FLOW_NEED_DATA GST_FLOW_CUSTOM_SUCCESS + +typedef enum +{ + GST_D3D12_WEAVE_INTERLACE_PATTERN_1_1, + GST_D3D12_WEAVE_INTERLACE_PATTERN_2_2, +} GstD3D12WeaveInterlacPattern; + +GstD3D12WeaveInterlace * gst_d3d12_weave_interlace_new (GstD3D12Device * device, + const GstVideoInfo * info, + GstD3D12WeaveInterlacPattern pattern, + gboolean bff, + gboolean use_compute); + +void gst_d3d12_weave_interlace_set_direction (GstD3D12WeaveInterlace * interlace, + gboolean is_forward); + +GstFlowReturn gst_d3d12_weave_interlace_push (GstD3D12WeaveInterlace * interlace, + GstBuffer * buffer); + +GstFlowReturn gst_d3d12_weave_interlace_pop (GstD3D12WeaveInterlace * interlace, + GstBuffer ** buffer); + +GstFlowReturn gst_d3d12_weave_interlace_drain (GstD3D12WeaveInterlace * interlace); + +void 
gst_d3d12_weave_interlace_flush (GstD3D12WeaveInterlace * interlace); + +G_END_DECLS
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/d3d12/gstd3d12window-swapchain-resource.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3d12/gstd3d12window-swapchain-resource.h
Changed
@@ -21,7 +21,7 @@ #include <gst/d3d12/gstd3d12.h> #include "gstd3d12pluginutils.h" -#include "gstd3d12overlaycompositor.h" +#include "gstd3d12overlayblender.h" #include <mutex> #include <vector> #include <queue> @@ -66,7 +66,7 @@ GstBuffer *msaa_buf = nullptr; GstBuffer *cached_buf = nullptr; GstD3D12Converter *conv = nullptr; - GstD3D12OverlayCompositor *comp = nullptr; + GstD3D12OverlayBlender *comp = nullptr; GstD3D12Device *device = nullptr; GstD3D12CmdAllocPool *ca_pool = nullptr; UINT64 fence_val = 0;
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/d3d12/gstd3d12window-swapchain.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3d12/gstd3d12window-swapchain.cpp
Changed
@@ -349,7 +349,7 @@ } if (!resource_->comp) { - resource_->comp = gst_d3d12_overlay_compositor_new (resource_->device, + resource_->comp = gst_d3d12_overlay_blender_new (resource_->device, out_info); if (!resource_->comp) { GST_ERROR ("Couldn't create overlay compositor");
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/d3d12/gstd3d12window.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3d12/gstd3d12window.cpp
Changed
@@ -543,14 +543,14 @@ "video-direction", priv->orientation, nullptr); } - gst_d3d12_overlay_compositor_update_viewport (resource->comp, + gst_d3d12_overlay_blender_update_viewport (resource->comp, &priv->output_rect); } priv->output_updated = FALSE; } - gst_d3d12_overlay_compositor_upload (resource->comp, buffer); + gst_d3d12_overlay_blender_upload (resource->comp, buffer); GstD3D12CmdAlloc *gst_ca; if (!gst_d3d12_cmd_alloc_pool_acquire (resource->ca_pool, &gst_ca)) { @@ -628,7 +628,7 @@ return GST_FLOW_ERROR; } - if (!gst_d3d12_overlay_compositor_draw (resource->comp, + if (!gst_d3d12_overlay_blender_draw (resource->comp, conv_outbuf, fence_data, cl.Get ())) { GST_ERROR_OBJECT (self, "Couldn't build overlay command"); gst_d3d12_fence_data_unref (fence_data);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/d3d12/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3d12/meson.build
Changed
@@ -21,8 +21,9 @@ 'gstd3d12ipcsrc.cpp', 'gstd3d12mpeg2dec.cpp', 'gstd3d12mipmapping.cpp', - 'gstd3d12overlaycompositor.cpp', + 'gstd3d12overlayblender.cpp', 'gstd3d12pluginutils.cpp', + 'gstd3d12remap.cpp', 'gstd3d12screencapture.cpp', 'gstd3d12screencapturedevice.cpp', 'gstd3d12screencapturesrc.cpp', @@ -34,6 +35,11 @@ 'gstd3d12window-swapchain.cpp', 'gstd3d12window-win32.cpp', 'gstd3d12window.cpp', + 'gstd3d12fisheyedewarp.cpp', + 'gstd3d12memorycopy.cpp', + 'gstd3d12weaveinterlace.cpp', + 'gstd3d12interlace.cpp', + 'gstd3d12overlaycompositor.cpp', 'plugin.cpp', @@ -60,9 +66,10 @@ 'gstd3d12window-swapchain.h', 'gstd3d12dpbstorage.h', 'gstd3d12pluginutils.h', + 'gstd3d12remap.h', 'gstd3d12h265dec.h', 'gstd3d12screencapturesrc.h', - 'gstd3d12overlaycompositor.h', + 'gstd3d12overlayblender.h', 'gstd3d12window.h', 'gstd3d12av1dec.h', 'gstd3d12videosink.h', @@ -79,23 +86,15 @@ 'gstd3d12basefilter.h', 'gstd3d12testsrc.h', 'gstd3d12vp8dec.h', + 'gstd3d12fisheyedewarp.h', graphicscapture_sources = 'gstd3d12graphicscapture.cpp', -memorycopy_sources = - 'gstd3d12memorycopy.cpp', - - -download_upload_sources = - 'gstd3d12download.cpp', - 'gstd3d12upload.cpp', - - doc_sources = -foreach s: d3d12_sources + graphicscapture_sources + memorycopy_sources + download_upload_sources + d3d12_headers +foreach s: d3d12_sources + graphicscapture_sources + d3d12_headers doc_sources += meson.current_source_dir() / s endforeach @@ -235,9 +234,6 @@ if gstd3d11_dep.found() d3d12_cdata.set('HAVE_GST_D3D11', true) extra_deps += gstd3d11_dep - d3d12_sources += memorycopy_sources -else - d3d12_sources += download_upload_sources endif configure_file(
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/d3d12/plugin.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3d12/plugin.cpp
Changed
@@ -51,19 +51,20 @@ #include "gstd3d12swapchainsink.h" #include "gstd3d12mipmapping.h" #include "gstd3d12deinterlace.h" +#include "gstd3d12remap.h" +#include "gstd3d12fisheyedewarp.h" +#include "gstd3d12memorycopy.h" +#include "gstd3d12interlace.h" +#include "gstd3d12overlaycompositor.h" #include <windows.h> #include <versionhelpers.h> #include <wrl.h> #include <glib/gi18n-lib.h> #ifdef HAVE_GST_D3D11 -#include "gstd3d12memorycopy.h" #include <gst/d3d11/gstd3d11.h> #include <gst/d3d11/gstd3d11device-private.h> #include <d3d11_4.h> -#else -#include "gstd3d12download.h" -#include "gstd3d12upload.h" #endif #ifdef HAVE_WGC @@ -199,6 +200,14 @@ "d3d12mipmapping", GST_RANK_NONE, GST_TYPE_D3D12_MIP_MAPPING); gst_element_register (plugin, "d3d12deinterlace", GST_RANK_NONE, GST_TYPE_D3D12_DEINTERLACE); + gst_element_register (plugin, + "d3d12remap", GST_RANK_NONE, GST_TYPE_D3D12_REMAP); + gst_element_register (plugin, + "d3d12fisheyedewarp", GST_RANK_NONE, GST_TYPE_D3D12_FISHEYE_DEWARP); + gst_element_register (plugin, + "d3d12interlace", GST_RANK_NONE, GST_TYPE_D3D12_INTERLACE); + gst_element_register (plugin, "d3d12overlaycompositor", + GST_RANK_NONE, GST_TYPE_D3D12_OVERLAY_COMPOSITOR); g_object_set_data_full (G_OBJECT (plugin), "plugin-d3d12-shutdown", (gpointer) "shutdown-data",
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/d3dvideosink/d3dhelpers.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3dvideosink/d3dhelpers.c
Changed
@@ -57,8 +57,7 @@ static void d3d_class_display_device_destroy (GstD3DVideoSinkClass * klass); static gboolean d3d_class_display_device_create (GstD3DVideoSinkClass * klass, UINT adapter); -static void d3d_class_hidden_window_message_queue (gpointer data, - gpointer user_data); +static void d3d_class_hidden_window_message_queue (gpointer data); static LRESULT APIENTRY d3d_wnd_proc_internal (HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam); @@ -76,6 +75,7 @@ typedef struct { + GstD3DVideoSinkClass *klass; gint window_message_id; guint create_count; } GstD3DVideoSinkEvent; @@ -2675,11 +2675,10 @@ GstD3DVideoSinkClass *klass = GST_D3DVIDEOSINK_GET_CLASS (sink); GstD3DVideoSinkEvent *evt = g_new0 (GstD3DVideoSinkEvent, 1); + evt->klass = klass; evt->window_message_id = IDT_DEVICE_RESET_TIMER; evt->create_count = klass->create_count; - gst_element_call_async (GST_ELEMENT (klass), - (GstElementCallAsyncFunc) d3d_class_hidden_window_message_queue, evt, - g_free); + gst_call_async (d3d_class_hidden_window_message_queue, evt); } static void @@ -2746,14 +2745,16 @@ /* Hidden Window Loop Thread */ static void -d3d_class_hidden_window_message_queue (gpointer data, gpointer user_data) +d3d_class_hidden_window_message_queue (gpointer data) { guint id = 0; - GstD3DVideoSinkClass *klass = (GstD3DVideoSinkClass *) data; - GstD3DVideoSinkEvent *evt = (GstD3DVideoSinkEvent *) user_data; + GstD3DVideoSinkEvent *evt = (GstD3DVideoSinkEvent *) data; + GstD3DVideoSinkClass *klass = evt->klass; - if (!klass || !evt) + if (!klass) { + g_free (data); return; + } switch (evt->window_message_id) { case IDT_DEVICE_RESET_TIMER: @@ -2773,6 +2774,8 @@ } break; } + + g_free (data); } static LRESULT APIENTRY @@ -2787,11 +2790,10 @@ switch (wParam) { case IDT_DEVICE_RESET_TIMER: evt = g_new0 (GstD3DVideoSinkEvent, 1); + evt->klass = klass; evt->window_message_id = IDT_DEVICE_RESET_TIMER; evt->create_count = klass->create_count; - gst_element_call_async (GST_ELEMENT (klass), - 
(GstElementCallAsyncFunc) d3d_class_hidden_window_message_queue, - evt, g_free); + gst_call_async (d3d_class_hidden_window_message_queue, evt); break; } return 0;
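The d3dhelpers.c change above replaces the two-argument `gst_element_call_async` callback (data plus user_data, with a separate `g_free` destroy notify) with a single-argument `gst_call_async` callback: the event struct now carries the class back-pointer itself, and the callback is responsible for freeing its payload on every exit path. A minimal sketch of that ownership pattern in plain C, with hypothetical names (`SinkEvent`, `call_async`, `dispatch_reset_timer`) standing in for the GStreamer types:

```c
#include <stdlib.h>

/* Hypothetical payload mirroring GstD3DVideoSinkEvent: the callback now
 * receives a single pointer and must free it itself. */
typedef struct {
  int window_message_id;
  unsigned create_count;
  int *handled;               /* stand-in for the class back-pointer */
} SinkEvent;

/* Single-argument callback: every exit path frees the payload. */
static void handle_event (void *data)
{
  SinkEvent *evt = data;
  if (!evt->handled) {        /* nothing to act on, but still must free */
    free (evt);
    return;
  }
  *evt->handled += 1;         /* "process" the message */
  free (evt);
}

/* Stand-in for gst_call_async(): hands the payload to the callback.
 * A real dispatcher would defer the call to a worker thread. */
static void call_async (void (*func) (void *), void *data)
{
  func (data);
}

static int dispatch_reset_timer (int *counter)
{
  SinkEvent *evt = calloc (1, sizeof (SinkEvent));
  evt->window_message_id = 264;   /* IDT_DEVICE_RESET_TIMER-like id */
  evt->create_count = 1;
  evt->handled = counter;
  call_async (handle_event, evt);
  return counter ? *counter : 0;
}
```

The design point is that once the dispatcher has only one pointer slot, the destroy notify can no longer be passed separately, so the free moves into the callback body, including the early-return branch.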
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/d3dvideosink/d3dhelpers.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3dvideosink/d3dhelpers.h
Changed
@@ -95,7 +95,7 @@ gboolean device_lost; /* list of GstD3DVideoSinkOverlay structs */ - GList * overlay; + GQueue overlays; gboolean overlay_needs_resize; } GstD3DData;
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/d3dvideosink/d3dvideosink.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3dvideosink/d3dvideosink.c
Changed
@@ -176,9 +176,8 @@ sink->create_internal_window = DEFAULT_CREATE_RENDER_WINDOW; sink->stream_stop_on_close = DEFAULT_STREAM_STOP_ON_CLOSE; sink->enable_navigation_events = DEFAULT_ENABLE_NAVIGATION_EVENTS; - sink->d3d.surface = NULL; - sink->d3d.overlay = NULL; sink->d3d.overlay_needs_resize = FALSE; + g_queue_init (&sink->d3d.overlays); g_rec_mutex_init (&sink->lock); }
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/d3dvideosink/gstd3d9overlay.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/d3dvideosink/gstd3d9overlay.c
Changed
@@ -69,12 +69,6 @@ /* Transformed vertex with 1 set of texture coordinates */ static DWORD tri_fvf = D3DFVF_XYZRHW | D3DFVF_TEX1; -static gboolean -_is_rectangle_in_overlays (GList * overlays, - GstVideoOverlayRectangle * rectangle); -static gboolean -_is_overlay_in_composition (GstVideoOverlayComposition * composition, - GstD3DVideoSinkOverlay * overlay); static HRESULT gst_d3d9_overlay_init_vb (GstD3DVideoSink * sink, GstD3DVideoSinkOverlay * overlay); @@ -149,82 +143,47 @@ } } -static gboolean -_is_rectangle_in_overlays (GList * overlays, - GstVideoOverlayRectangle * rectangle) -{ - GList *l; - - for (l = overlays; l != NULL; l = l->next) { - GstD3DVideoSinkOverlay *overlay = (GstD3DVideoSinkOverlay *) l->data; - if (overlay->rectangle == rectangle) - return TRUE; - } - return FALSE; -} - -static gboolean -_is_overlay_in_composition (GstVideoOverlayComposition * composition, - GstD3DVideoSinkOverlay * overlay) +static gint +_find_overlay_cmp (gconstpointer item, gconstpointer user_data) { - guint i; - - for (i = 0; i < gst_video_overlay_composition_n_rectangles (composition); i++) { - GstVideoOverlayRectangle *rectangle = - gst_video_overlay_composition_get_rectangle (composition, i); - if (overlay->rectangle == rectangle) - return TRUE; - } - return FALSE; + GstD3DVideoSinkOverlay *overlay = (GstD3DVideoSinkOverlay *) item; + GstVideoOverlayRectangle *rectangle = (GstVideoOverlayRectangle *) user_data; + return overlay->rectangle == rectangle ? 
0 : 1; } GstFlowReturn gst_d3d9_overlay_prepare (GstD3DVideoSink * sink, GstBuffer * buf) { GstD3DVideoSinkClass *klass = GST_D3DVIDEOSINK_GET_CLASS (sink); - GList *l = NULL; - GstVideoOverlayComposition *composition = NULL; - guint num_overlays, i; - GstVideoOverlayCompositionMeta *composition_meta = - gst_buffer_get_video_overlay_composition_meta (buf); - gboolean found_new_overlay_rectangle = FALSE; - - if (!composition_meta) { - gst_d3d9_overlay_free (sink); - return GST_FLOW_OK; - } - l = sink->d3d.overlay; - composition = composition_meta->overlay; - num_overlays = gst_video_overlay_composition_n_rectangles (composition); - GST_DEBUG_OBJECT (sink, "GstVideoOverlayCompositionMeta found."); + /* Steal previous list of overlays */ + GList *overlays = sink->d3d.overlays.head; + g_queue_init (&sink->d3d.overlays); - /* check for new overlays */ - for (i = 0; i < num_overlays; i++) { - GstVideoOverlayRectangle *rectangle = - gst_video_overlay_composition_get_rectangle (composition, i); + LOCK_CLASS (sink, klass); - if (!_is_rectangle_in_overlays (sink->d3d.overlay, rectangle)) { - found_new_overlay_rectangle = TRUE; - break; - } - } + gpointer state = NULL; + GstMeta *meta; + while ((meta = + gst_buffer_iterate_meta_filtered (buf, &state, + GST_VIDEO_OVERLAY_COMPOSITION_META_API_TYPE)) != NULL) { + GstVideoOverlayCompositionMeta *ometa = + (GstVideoOverlayCompositionMeta *) meta; + guint n = gst_video_overlay_composition_n_rectangles (ometa->overlay); - /* add new overlays to list */ - if (found_new_overlay_rectangle) { - GST_DEBUG_OBJECT (sink, "New overlay composition rectangles found."); - LOCK_CLASS (sink, klass); - if (!klass->d3d.refs) { - GST_ERROR_OBJECT (sink, "Direct3D object ref count = 0"); - gst_d3d9_overlay_free (sink); - UNLOCK_CLASS (sink, klass); - return GST_FLOW_ERROR; - } - for (i = 0; i < num_overlays; i++) { + for (int i = 0; i < n; i++) { GstVideoOverlayRectangle *rectangle = - gst_video_overlay_composition_get_rectangle (composition, i); + 
gst_video_overlay_composition_get_rectangle (ometa->overlay, i); - if (!_is_rectangle_in_overlays (sink->d3d.overlay, rectangle)) { + if (!klass->d3d.refs) { + GST_ERROR_OBJECT (sink, "Direct3D object ref count = 0"); + gst_d3d9_overlay_free (sink); + UNLOCK_CLASS (sink, klass); + return GST_FLOW_ERROR; + } + + GList *l = g_list_find_custom (overlays, rectangle, _find_overlay_cmp); + if (l == NULL) { GstVideoOverlayFormatFlags flags; gint x, y; guint width, height; @@ -283,21 +242,21 @@ continue; } } - sink->d3d.overlay = g_list_append (sink->d3d.overlay, overlay); + g_queue_push_tail (&sink->d3d.overlays, overlay); + } else { + overlays = g_list_remove_link (overlays, l); + g_queue_push_tail_link (&sink->d3d.overlays, l); } } - UNLOCK_CLASS (sink, klass); } - /* remove old overlays from list */ - while (l != NULL) { - GList *next = l->next; - GstD3DVideoSinkOverlay *overlay = (GstD3DVideoSinkOverlay *) l->data; - if (!_is_overlay_in_composition (composition, overlay)) { - gst_d3d9_overlay_free_overlay (sink, overlay); - sink->d3d.overlay = g_list_delete_link (sink->d3d.overlay, l); - } - l = next; + UNLOCK_CLASS (sink, klass); + + /* Free any previous overlays that are not in use anymore */ + while (overlays != NULL) { + GstD3DVideoSinkOverlay *overlay = (GstD3DVideoSinkOverlay *) overlays->data; + gst_d3d9_overlay_free_overlay (sink, overlay); + overlays = g_list_delete_link (overlays, overlays); } return GST_FLOW_OK; @@ -306,7 +265,7 @@ gboolean gst_d3d9_overlay_resize (GstD3DVideoSink * sink) { - GList *l = sink->d3d.overlay; + GList *l = sink->d3d.overlays.head; while (l != NULL) { GList *next = l->next; @@ -325,18 +284,15 @@ void gst_d3d9_overlay_free (GstD3DVideoSink * sink) { - GList *l = sink->d3d.overlay; + GList *overlays = sink->d3d.overlays.head; + g_queue_init (&sink->d3d.overlays); - while (l != NULL) { - GList *next = l->next; - GstD3DVideoSinkOverlay *overlay = (GstD3DVideoSinkOverlay *) l->data; + while (overlays != NULL) { + 
GstD3DVideoSinkOverlay *overlay = (GstD3DVideoSinkOverlay *) overlays->data; gst_d3d9_overlay_free_overlay (sink, overlay); - sink->d3d.overlay = g_list_delete_link (sink->d3d.overlay, l); - l = next; + overlays = g_list_delete_link (overlays, overlays); } - g_list_free (sink->d3d.overlay); - sink->d3d.overlay = NULL; } static HRESULT @@ -464,13 +420,13 @@ gboolean ret = FALSE; GstD3DVideoSinkClass *klass = GST_D3DVIDEOSINK_GET_CLASS (sink); - if (!sink->d3d.overlay) + if (g_queue_is_empty (&sink->d3d.overlays)) return TRUE; if (sink->d3d.overlay_needs_resize && !gst_d3d9_overlay_resize (sink)) return FALSE; sink->d3d.overlay_needs_resize = FALSE; - iter = sink->d3d.overlay; + iter = sink->d3d.overlays.head; while (iter != NULL) { GList *next = iter->next; GstD3DVideoSinkOverlay *overlay = (GstD3DVideoSinkOverlay *) iter->data;
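The `gst_d3d9_overlay_prepare` rework above steals the previous overlay list, moves each node whose rectangle is still referenced into the new queue with `g_queue_push_tail_link` (reusing the allocation), and frees whatever is left behind. A rough sketch of that recycle-or-free pattern in plain C, with a hypothetical `Node` list standing in for `GList`/`GQueue`:

```c
#include <stddef.h>
#include <stdlib.h>

/* Minimal singly-linked node standing in for a GList entry. */
typedef struct Node { int rect_id; struct Node *next; } Node;

static Node *push (Node *head, int id)
{
  Node *n = malloc (sizeof (Node));
  n->rect_id = id;
  n->next = head;
  return n;
}

/* Steal the old list; for each still-wanted rectangle either move the
 * existing node into the new list (reuse) or allocate a fresh one, then
 * free the leftovers — the same shape as the g_queue_push_tail_link()
 * rework in gst_d3d9_overlay_prepare(). */
static Node *recycle (Node *old, const int *wanted, size_t n_wanted,
    size_t *freed)
{
  Node *fresh = NULL;
  for (size_t i = 0; i < n_wanted; i++) {
    Node **pp = &old;
    while (*pp && (*pp)->rect_id != wanted[i])
      pp = &(*pp)->next;
    if (*pp) {                  /* unlink and reuse the old node */
      Node *hit = *pp;
      *pp = hit->next;
      hit->next = fresh;
      fresh = hit;
    } else {                    /* new rectangle: allocate */
      fresh = push (fresh, wanted[i]);
    }
  }
  *freed = 0;
  while (old) {                 /* overlays no longer referenced */
    Node *next = old->next;
    free (old);
    old = next;
    (*freed)++;
  }
  return fresh;
}

static size_t count (const Node *n)
{
  size_t c = 0;
  for (; n; n = n->next)
    c++;
  return c;
}
```

Reusing the link nodes avoids re-uploading overlay textures for rectangles that persist across buffers; only rectangles that disappeared from the composition are torn down.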
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/decklink/gstdecklinkvideosink.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/decklink/gstdecklinkvideosink.cpp
Changed
@@ -724,6 +724,14 @@ gint m_refcount; }; +struct VAncPacket +{ + guint line_number; + guint8 DID, SDID; + guint8 data_count; + guint8 data[256]; +}; + /** * GstDecklinkMappingFormat: * @GST_DECKLINK_MAPPING_FORMAT_DEFAULT: Don't change the mapping format @@ -746,7 +754,8 @@ PROP_CC_LINE, PROP_AFD_BAR_LINE, PROP_MAPPING_FORMAT, - PROP_PERSISTENT_ID + PROP_PERSISTENT_ID, + PROP_OUTPUT_VANC }; static void gst_decklink_video_sink_set_property (GObject * object, @@ -944,6 +953,23 @@ (GParamFlags) (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS | G_PARAM_CONSTRUCT))); + /** + * GstDecklinkVideoSink:output-vanc + * + * Output `GstAncillaryMeta` from input buffers as part of the ancillary data + * of the video frames. + * + * Note that currently the horizontal offset is not preserved. + * + * Since: 1.28 + */ + g_object_class_install_property (gobject_class, PROP_OUTPUT_VANC, + g_param_spec_boolean ("output-vanc", "Output VANC", + "Output ancillary data from input buffers", + FALSE, + (GParamFlags) (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS | + G_PARAM_CONSTRUCT))); + templ_caps = gst_decklink_mode_get_template_caps (FALSE); templ_caps = gst_caps_make_writable (templ_caps); /* For output we support any framerate and only really care about timestamps */ @@ -976,8 +1002,14 @@ self->timecode_format = bmdTimecodeRP188Any; self->caption_line = 0; self->afd_bar_line = 0; + self->output_vanc = FALSE; self->mapping_format = GST_DECKLINK_MAPPING_FORMAT_DEFAULT; - self->pending_frames = g_queue_new(); + self->pending_frames = gst_vec_deque_new (16); + gst_vec_deque_set_clear_func (self->pending_frames, + (GDestroyNotify) + [] (GstDecklinkVideoFrame * frame) { + frame->Release (); + }); + self->vanc_cache = g_array_new (FALSE, FALSE, sizeof (VAncPacket)); gst_base_sink_set_max_lateness (GST_BASE_SINK_CAST (self), 20 * GST_MSECOND); gst_base_sink_set_qos_enabled (GST_BASE_SINK_CAST (self), TRUE); @@ -1041,6 +1073,9 @@ case PROP_PERSISTENT_ID: self->persistent_id = g_value_get_int64 (value); 
break; + case PROP_OUTPUT_VANC: + self->output_vanc = g_value_get_boolean (value); + break; default: G_OBJECT_WARN_INVALID_PROPERTY_ID (object, property_id, pspec); break; @@ -1095,26 +1130,24 @@ case PROP_PERSISTENT_ID: g_value_set_int64 (value, self->persistent_id); break; + case PROP_OUTPUT_VANC: + g_value_set_boolean (value, self->output_vanc); + break; default: G_OBJECT_WARN_INVALID_PROPERTY_ID (object, property_id, pspec); break; } } -static void -unref_frame (GstDecklinkVideoFrame * frame) -{ - if (frame) - frame->Release(); -} - void gst_decklink_video_sink_finalize (GObject * object) { GstDecklinkVideoSink *self = GST_DECKLINK_VIDEO_SINK_CAST (object); - g_queue_free_full (self->pending_frames, (GDestroyNotify) unref_frame); + gst_vec_deque_free (self->pending_frames); self->pending_frames = NULL; + g_array_free (self->vanc_cache, TRUE); + self->vanc_cache = NULL; G_OBJECT_CLASS (parent_class)->finalize (object); } @@ -1225,7 +1258,7 @@ else flags = bmdVideoOutputRP188; - if (self->caption_line > 0 || self->afd_bar_line > 0) + if (self->caption_line > 0 || self->afd_bar_line > 0 || self->output_vanc) flags = (BMDVideoOutputFlags) (flags | bmdVideoOutputVANC); ret = self->output->output->EnableVideoOutput (mode->mode, flags); @@ -1541,114 +1574,107 @@ GstVideoTimeCodeMeta * tc_meta) { IDeckLinkVideoFrameAncillary *vanc_frame = NULL; - gpointer iter = NULL; - GstVideoCaptionMeta *cc_meta; - guint8 *vancdata; - gboolean got_captions = FALSE; - if (self->caption_line == 0 && self->afd_bar_line == 0) + if (self->caption_line == 0 && self->afd_bar_line == 0 && !self->output_vanc) return; - if (self->vbiencoder == NULL) { - self->vbiencoder = - gst_video_vbi_encoder_new (GST_VIDEO_FORMAT_v210, self->info.width); - self->anc_vformat = GST_VIDEO_FORMAT_v210; - } + // First collect all metas and transform them to generic VAncPackets + if (self->caption_line != 0) { + GstVideoCaptionMeta *cc_meta; + gpointer meta_iter = NULL; - /* Put any closed captions into the 
configured line */ - while ((cc_meta = - (GstVideoCaptionMeta *) gst_buffer_iterate_meta_filtered (buffer, - &iter, GST_VIDEO_CAPTION_META_API_TYPE))) { - switch (cc_meta->caption_type) { - case GST_VIDEO_CAPTION_TYPE_CEA608_RAW:{ - guint8 data[138]; - guint i, n; + /* FIXME: Add captions to the correct field? Captions for the second + * field should probably be inserted into the second field */ - n = cc_meta->size / 2; - if (cc_meta->size > 46) { - GST_WARNING_OBJECT (self, "Too big raw CEA608 buffer"); + while ((cc_meta = + (GstVideoCaptionMeta *) gst_buffer_iterate_meta_filtered (buffer, + &meta_iter, GST_VIDEO_CAPTION_META_API_TYPE))) { + switch (cc_meta->caption_type) { + case GST_VIDEO_CAPTION_TYPE_CEA608_RAW:{ + VAncPacket packet; + + guint n = cc_meta->size / 2; + if (cc_meta->size > 46) { + GST_WARNING_OBJECT (self, "Too big raw CEA608 buffer"); + break; + } + + packet.line_number = self->caption_line; + packet.DID = GST_VIDEO_ANCILLARY_DID16_S334_EIA_608 >> 8; + packet.SDID = GST_VIDEO_ANCILLARY_DID16_S334_EIA_608 & 0xff; + packet.data_count = 3 * n; + /* This is the offset from line 9 for 525-line fields and from line + * 5 for 625-line fields. + * + * The highest bit is set for field 1 but not for field 0, but we + * have no way of knowning the field here + */ + for (guint i = 0; i < n; i++) { + packet.data[3 * i] = 0x80 | (self->info.height == + 525 ? self->caption_line - 9 : self->caption_line - 5); + packet.data[3 * i + 1] = cc_meta->data[2 * i]; + packet.data[3 * i + 2] = cc_meta->data[2 * i + 1]; + } + + g_array_append_val (self->vanc_cache, packet); break; } + case GST_VIDEO_CAPTION_TYPE_CEA608_S334_1A:{ + VAncPacket packet; - /* This is the offset from line 9 for 525-line fields and from line - * 5 for 625-line fields. - * - * The highest bit is set for field 1 but not for field 0, but we - * have no way of knowning the field here - */ - for (i = 0; i < n; i++) { - data[3 * i] = 0x80 | (self->info.height == - 525 ? 
self->caption_line - 9 : self->caption_line - 5); - data[3 * i + 1] = cc_meta->data[2 * i]; - data[3 * i + 2] = cc_meta->data[2 * i + 1]; - } + packet.line_number = self->caption_line; + packet.DID = GST_VIDEO_ANCILLARY_DID16_S334_EIA_608 >> 8; + packet.SDID = GST_VIDEO_ANCILLARY_DID16_S334_EIA_608 & 0xff; + packet.data_count = cc_meta->size; + memcpy (packet.data, cc_meta->data, cc_meta->size); - if (!gst_video_vbi_encoder_add_ancillary (self->vbiencoder, - FALSE, - GST_VIDEO_ANCILLARY_DID16_S334_EIA_608 >> 8, - GST_VIDEO_ANCILLARY_DID16_S334_EIA_608 & 0xff, data, 3)) - GST_WARNING_OBJECT (self, "Couldn't add meta to ancillary data"); + g_array_append_val (self->vanc_cache, packet); + break; + } + case GST_VIDEO_CAPTION_TYPE_CEA708_RAW:{ + VAncPacket packet; - got_captions = TRUE; + guint n = cc_meta->size / 3; + if (cc_meta->size > 46) { + GST_WARNING_OBJECT (self, "Too big raw CEA708 buffer"); + break; + } - break; - } - case GST_VIDEO_CAPTION_TYPE_CEA608_S334_1A:{ - if (!gst_video_vbi_encoder_add_ancillary (self->vbiencoder, - FALSE, - GST_VIDEO_ANCILLARY_DID16_S334_EIA_608 >> 8, - GST_VIDEO_ANCILLARY_DID16_S334_EIA_608 & 0xff, cc_meta->data, - cc_meta->size)) - GST_WARNING_OBJECT (self, "Couldn't add meta to ancillary data"); + packet.line_number = self->caption_line; + packet.DID = GST_VIDEO_ANCILLARY_DID16_S334_EIA_708 >> 8; + packet.SDID = GST_VIDEO_ANCILLARY_DID16_S334_EIA_708 & 0xff; - got_captions = TRUE; + n = convert_cea708_cc_data_cea708_cdp_internal (self, cc_meta->data, + cc_meta->size, packet.data, sizeof (packet.data), tc_meta); - break; - } - case GST_VIDEO_CAPTION_TYPE_CEA708_RAW:{ - guint8 data[256]; - guint n; + packet.data_count = n; - n = cc_meta->size / 3; - if (cc_meta->size > 46) { - GST_WARNING_OBJECT (self, "Too big raw CEA708 buffer"); + g_array_append_val (self->vanc_cache, packet); break; } + case GST_VIDEO_CAPTION_TYPE_CEA708_CDP:{ + VAncPacket packet; - n = convert_cea708_cc_data_cea708_cdp_internal (self, cc_meta->data, - cc_meta->size, 
data, sizeof (data), tc_meta); - if (!gst_video_vbi_encoder_add_ancillary (self->vbiencoder, FALSE, - GST_VIDEO_ANCILLARY_DID16_S334_EIA_708 >> 8, - GST_VIDEO_ANCILLARY_DID16_S334_EIA_708 & 0xff, data, n)) - GST_WARNING_OBJECT (self, "Couldn't add meta to ancillary data"); - - got_captions = TRUE; - - break; - } - case GST_VIDEO_CAPTION_TYPE_CEA708_CDP:{ - if (!gst_video_vbi_encoder_add_ancillary (self->vbiencoder, - FALSE, - GST_VIDEO_ANCILLARY_DID16_S334_EIA_708 >> 8, - GST_VIDEO_ANCILLARY_DID16_S334_EIA_708 & 0xff, cc_meta->data, - cc_meta->size)) - GST_WARNING_OBJECT (self, "Couldn't add meta to ancillary data"); + packet.line_number = self->caption_line; + packet.DID = GST_VIDEO_ANCILLARY_DID16_S334_EIA_708 >> 8; + packet.SDID = GST_VIDEO_ANCILLARY_DID16_S334_EIA_708 & 0xff; + packet.data_count = cc_meta->size; + memcpy (packet.data, cc_meta->data, cc_meta->size); - got_captions = TRUE; - - break; - } - default:{ - GST_FIXME_OBJECT (self, "Caption type %d not supported", - cc_meta->caption_type); - break; + g_array_append_val (self->vanc_cache, packet); + break; + } + default:{ + GST_FIXME_OBJECT (self, "Caption type %d not supported", + cc_meta->caption_type); + break; + } } } } - if ((got_captions || self->afd_bar_line != 0) - && self->output->output->CreateAncillaryData (bmdFormat10BitYUV, - &vanc_frame) == S_OK) { + if (self->afd_bar_line != 0) { + VAncPacket packet; GstVideoAFDMeta *afd_meta = NULL, *afd_meta2 = NULL; GstVideoBarMeta *bar_meta = NULL, *bar_meta2 = NULL; GstMeta *meta; @@ -1722,48 +1748,13 @@ GST_WRITE_UINT16_BE (&afd_bar_data_ptr[6], bar2); } - /* AFD on the same line as the captions */ - if (self->caption_line == self->afd_bar_line) { - if (!gst_video_vbi_encoder_add_ancillary (self->vbiencoder, - FALSE, GST_VIDEO_ANCILLARY_DID16_S2016_3_AFD_BAR >> 8, - GST_VIDEO_ANCILLARY_DID16_S2016_3_AFD_BAR & 0xff, afd_bar_data, - sizeof (afd_bar_data))) - GST_WARNING_OBJECT (self, - "Couldn't add AFD/Bar data to ancillary data"); - } + 
packet.line_number = self->afd_bar_line; + packet.DID = GST_VIDEO_ANCILLARY_DID16_S2016_3_AFD_BAR >> 8; + packet.SDID = GST_VIDEO_ANCILLARY_DID16_S2016_3_AFD_BAR & 0xff; + packet.data_count = sizeof (afd_bar_data); + memcpy (packet.data, afd_bar_data, sizeof (afd_bar_data)); - /* FIXME: Add captions to the correct field? Captions for the second - * field should probably be inserted into the second field */ - - if (got_captions || self->caption_line == self->afd_bar_line) { - if (vanc_frame->GetBufferForVerticalBlankingLine (self->caption_line, - (void **) &vancdata) == S_OK) { - gst_video_vbi_encoder_write_line (self->vbiencoder, vancdata); - } else { - GST_WARNING_OBJECT (self, - "Failed to get buffer for line %d ancillary data", - self->caption_line); - } - } - - /* AFD on a different line than the captions */ - if (self->afd_bar_line != 0 && self->caption_line != self->afd_bar_line) { - if (!gst_video_vbi_encoder_add_ancillary (self->vbiencoder, - FALSE, GST_VIDEO_ANCILLARY_DID16_S2016_3_AFD_BAR >> 8, - GST_VIDEO_ANCILLARY_DID16_S2016_3_AFD_BAR & 0xff, afd_bar_data, - sizeof (afd_bar_data))) - GST_WARNING_OBJECT (self, - "Couldn't add AFD/Bar data to ancillary data"); - - if (vanc_frame->GetBufferForVerticalBlankingLine (self->afd_bar_line, - (void **) &vancdata) == S_OK) { - gst_video_vbi_encoder_write_line (self->vbiencoder, vancdata); - } else { - GST_WARNING_OBJECT (self, - "Failed to get buffer for line %d ancillary data", - self->afd_bar_line); - } - } + g_array_append_val (self->vanc_cache, packet); /* For interlaced video we need to also add AFD to the second field */ if (GST_VIDEO_INFO_IS_INTERLACED (&self->info) && self->afd_bar_line != 0) { @@ -1789,31 +1780,124 @@ g_assert_not_reached (); } - if (!gst_video_vbi_encoder_add_ancillary (self->vbiencoder, - FALSE, GST_VIDEO_ANCILLARY_DID16_S2016_3_AFD_BAR >> 8, - GST_VIDEO_ANCILLARY_DID16_S2016_3_AFD_BAR & 0xff, afd_bar_data2, - sizeof (afd_bar_data))) - GST_WARNING_OBJECT (self, - "Couldn't add AFD/Bar 
data to ancillary data"); + packet.line_number = self->afd_bar_line + field2_offset; + packet.DID = GST_VIDEO_ANCILLARY_DID16_S2016_3_AFD_BAR >> 8; + packet.SDID = GST_VIDEO_ANCILLARY_DID16_S2016_3_AFD_BAR & 0xff; + packet.data_count = sizeof (afd_bar_data2); + memcpy (packet.data, afd_bar_data2, sizeof (afd_bar_data2)); - if (vanc_frame->GetBufferForVerticalBlankingLine (self->afd_bar_line + - field2_offset, (void **) &vancdata) == S_OK) { - gst_video_vbi_encoder_write_line (self->vbiencoder, vancdata); - } else { + g_array_append_val (self->vanc_cache, packet); + } + } + + if (self->output_vanc) { + GstMeta *meta; + gpointer meta_iter = NULL; + + while ((meta = + gst_buffer_iterate_meta_filtered (buffer, &meta_iter, + GST_ANCILLARY_META_API_TYPE))) { + VAncPacket packet; + GstAncillaryMeta *anc_meta = (GstAncillaryMeta *) meta; + + // Skip unpositioned anc for now + if (anc_meta->line >= 0x7fe) + continue; + + packet.line_number = anc_meta->line; + packet.DID = anc_meta->DID & 0xff; + packet.SDID = anc_meta->SDID_block_number & 0xff; + packet.data_count = anc_meta->data_count & 0xff; + for (guint i = 0; i < packet.data_count; i++) + packet.data[i] = anc_meta->data[i] & 0xff; + + g_array_append_val (self->vanc_cache, packet); + } + } + + if (self->vanc_cache->len == 0) + return; + + // Sort by line number + g_array_sort (self->vanc_cache, (GCompareFunc) +[] (const VAncPacket * a, + const VAncPacket * b)->gint { + return (gint) a->line_number - (gint) b->line_number; + } + ); + + int res; + res = self->output->output->CreateAncillaryData (bmdFormat10BitYUV, + &vanc_frame); + if (res != S_OK) { + GST_WARNING_OBJECT (self, "Failed to allocate ancillary data: %d", res); + return; + } + + if (!self->vbiencoder) { + self->vbiencoder = + gst_video_vbi_encoder_new (GST_VIDEO_FORMAT_v210, self->info.width); + self->anc_vformat = GST_VIDEO_FORMAT_v210; + } + + guint previous_line = G_MAXUINT; + for (guint i = 0; i < self->vanc_cache->len; i++) { + const VAncPacket *packet =
&g_array_index (self->vanc_cache, VAncPacket, i); + + if (packet->line_number != previous_line && previous_line != G_MAXUINT) { + guint8 *vancdata; + + res = vanc_frame->GetBufferForVerticalBlankingLine (previous_line, + (void **) &vancdata); + if (res != S_OK) { GST_WARNING_OBJECT (self, - "Failed to get buffer for line %d ancillary data", - self->afd_bar_line); + "Failed to get buffer for line %u ancillary data: %d", + previous_line, res); + gst_video_vbi_encoder_free (self->vbiencoder); + self->vbiencoder = + gst_video_vbi_encoder_new (GST_VIDEO_FORMAT_v210, self->info.width); + } else { + gst_video_vbi_encoder_write_line (self->vbiencoder, vancdata); } } - if (frame->SetAncillaryData (vanc_frame) != S_OK) { - GST_WARNING_OBJECT (self, "Failed to set ancillary data"); + previous_line = packet->line_number; + + GST_TRACE_OBJECT (self, + "Writing ancillary data with DID %08x, SDID %08x, DC %u into line %u", + packet->DID, packet->SDID, packet->data_count, packet->line_number); + + if (!gst_video_vbi_encoder_add_ancillary (self->vbiencoder, + FALSE, + packet->DID, packet->SDID, packet->data, packet->data_count)) { + GST_WARNING_OBJECT (self, "Couldn't add ancillary data to line %u", + packet->line_number); + } + } + + // And write the last line + if (previous_line != G_MAXUINT) { + guint8 *vancdata; + + res = vanc_frame->GetBufferForVerticalBlankingLine (previous_line, + (void **) &vancdata); + if (res != S_OK) { + GST_WARNING_OBJECT (self, + "Failed to get buffer for line %u ancillary data: %d", + previous_line, res); + gst_video_vbi_encoder_free (self->vbiencoder); + self->vbiencoder = + gst_video_vbi_encoder_new (GST_VIDEO_FORMAT_v210, self->info.width); + } else { + gst_video_vbi_encoder_write_line (self->vbiencoder, vancdata); } + } - vanc_frame->Release (); - } else if (got_captions || self->afd_bar_line != 0) { - GST_WARNING_OBJECT (self, "Failed to allocate ancillary data frame"); + res = frame->SetAncillaryData (vanc_frame); + if (res != S_OK) { + 
GST_WARNING_OBJECT (self, "Failed to set ancillary data: %d", res); } + + vanc_frame->Release (); } static gboolean @@ -1968,14 +2052,14 @@ frame->SetMastringInfo (&self->mastering_info); write_vbi (self, buffer, format, frame, tc_meta); + g_array_set_size (self->vanc_cache, 0); frame->running_time = running_time; frame->running_time_duration = running_time_duration; frame->sync_buffer = gst_buffer_ref (buffer); - g_queue_push_tail (self->pending_frames, frame); + gst_vec_deque_push_tail (self->pending_frames, g_steal_pointer (&frame)); - frame = nullptr; flow_ret = GST_FLOW_OK; out: @@ -1999,7 +2083,7 @@ return flow_ret; frame = - (GstDecklinkVideoFrame *) g_queue_pop_head (self->pending_frames); + (GstDecklinkVideoFrame *) gst_vec_deque_pop_head (self->pending_frames); running_time = gst_clock_get_internal_time (self->output->clock); frame_duration = @@ -2039,20 +2123,23 @@ if ((flow_ret = gst_decklink_video_sink_prepare (bsink, buffer)) != GST_FLOW_OK) return flow_ret; - GST_TRACE_OBJECT (bsink, "render with %u pending frames", self->pending_frames->length); + GST_TRACE_OBJECT (bsink, "render with %" G_GSIZE_FORMAT " pending frames", + gst_vec_deque_get_length (self->pending_frames)); GST_OBJECT_LOCK (self); if (self->initial_sync) { /* this is effectively the preroll logic. 
We wait for at least 2 buffers */ GstDecklinkVideoFrame *frame; - if (self->pending_frames->length < 1) { + if (gst_vec_deque_is_empty (self->pending_frames)) { GST_OBJECT_UNLOCK (self); return GST_FLOW_OK; } GST_OBJECT_UNLOCK (self); - frame = (GstDecklinkVideoFrame *) g_queue_peek_head (self->pending_frames); + frame = + (GstDecklinkVideoFrame *) + gst_vec_deque_peek_head (self->pending_frames); GST_DEBUG_OBJECT (self, "attempting preroll"); flow_ret = gst_base_sink_do_preroll (bsink, @@ -2098,9 +2185,9 @@ } GST_OBJECT_UNLOCK (self); - while (self->pending_frames->length > 0) { + while (!gst_vec_deque_is_empty (self->pending_frames)) { GstDecklinkVideoFrame *frame = - (GstDecklinkVideoFrame *) g_queue_pop_head (self->pending_frames); + (GstDecklinkVideoFrame *) gst_vec_deque_pop_head (self->pending_frames); GstClockTime sync_time = frame->running_time; GstClockTime running_time = frame->running_time; GstClockTime running_time_duration = frame->running_time_duration; @@ -2149,13 +2236,17 @@ running_time = gst_util_uint64_scale (running_time, 1, frame_duration); running_time = gst_util_uint64_scale_ceil (running_time, frame_duration, 1); + if (clock_time >= running_time) { + GST_DEBUG_OBJECT (self, "Video frame %p is being scheduled late and may be dropped", frame); + } + GST_DEBUG_OBJECT (self, "Scheduling video frame %p at %" GST_TIME_FORMAT " with duration %" GST_TIME_FORMAT " sync time %" GST_TIME_FORMAT " clock time %" GST_TIME_FORMAT, frame, GST_TIME_ARGS (running_time), - GST_TIME_ARGS (running_time_duration), GST_TIME_ARGS (sync_time), GST_TIME_ARGS (clock_time)); + GST_TIME_ARGS (frame_duration), GST_TIME_ARGS (sync_time), GST_TIME_ARGS (clock_time)); ret = self->output->output->ScheduleVideoFrame (frame, - running_time, running_time_duration, GST_SECOND); + running_time, frame_duration, GST_SECOND); if (ret != S_OK) { GST_ELEMENT_ERROR (self, STREAM, FAILED, (NULL), ("Failed to schedule frame: 0x%08lx", (unsigned long) ret)); @@ -2251,6 +2342,9 @@ 
self->anc_vformat = GST_VIDEO_FORMAT_UNKNOWN; } + gst_vec_deque_clear (self->pending_frames); + g_array_set_size (self->vanc_cache, 0); + return TRUE; } @@ -2401,8 +2495,8 @@ GstStateChangeReturn ret = GST_STATE_CHANGE_SUCCESS; GST_DEBUG_OBJECT (self, "changing state: %s => %s", - gst_element_state_get_name (GST_STATE_TRANSITION_CURRENT (transition)), - gst_element_state_get_name (GST_STATE_TRANSITION_NEXT (transition))); + gst_state_get_name (GST_STATE_TRANSITION_CURRENT (transition)), + gst_state_get_name (GST_STATE_TRANSITION_NEXT (transition))); switch (transition) { case GST_STATE_CHANGE_READY_TO_PAUSED:
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/decklink/gstdecklinkvideosink.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/decklink/gstdecklinkvideosink.h
Changed
@@ -75,12 +75,15 @@ GstDecklinkMappingFormat mapping_format; gboolean initial_sync; - GQueue *pending_frames; + GstVecDeque *pending_frames; gboolean have_light_level; GstVideoContentLightLevel light_level; gboolean have_mastering_info; GstVideoMasteringDisplayInfo mastering_info; + + gboolean output_vanc; + GArray *vanc_cache; }; struct _GstDecklinkVideoSinkClass
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/decklink/gstdecklinkvideosrc.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/decklink/gstdecklinkvideosrc.cpp
Changed
@@ -150,6 +150,7 @@ #define DEFAULT_OUTPUT_CC (FALSE) #define DEFAULT_OUTPUT_AFD_BAR (FALSE) #define DEFAULT_PERSISTENT_ID (-1) +#define DEFAULT_OUTPUT_VANC (FALSE) #ifndef ABSDIFF #define ABSDIFF(x, y) ( (x) > (y) ? ((x) - (y)) : ((y) - (x)) ) @@ -175,6 +176,7 @@ PROP_PERSISTENT_ID, PROP_OUTPUT_CC, PROP_OUTPUT_AFD_BAR, + PROP_OUTPUT_VANC, }; typedef struct @@ -400,6 +402,22 @@ DEFAULT_OUTPUT_AFD_BAR, (GParamFlags) (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + /** + * GstDecklinkVideoSrc:output-vanc + * + * Extract VANC data from input frames and output it as `GstAncillaryMeta` on + * the video frames. + * + * Note that currently the horizontal offset is not preserved. + * + * Since: 1.28 + */ + g_object_class_install_property (gobject_class, PROP_OUTPUT_VANC, + g_param_spec_boolean ("output-vanc", "Output VANC data", + "Extract and output VANC as GstMeta (if present)", + DEFAULT_OUTPUT_AFD_BAR, + (GParamFlags) (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + templ_caps = gst_decklink_mode_get_template_caps (TRUE); gst_element_class_add_pad_template (element_class, gst_pad_template_new ("src", GST_PAD_SRC, GST_PAD_ALWAYS, templ_caps)); @@ -527,6 +545,9 @@ case PROP_OUTPUT_AFD_BAR: self->output_afd_bar = g_value_get_boolean (value); break; + case PROP_OUTPUT_VANC: + self->output_vanc = g_value_get_boolean (value); + break; default: G_OBJECT_WARN_INVALID_PROPERTY_ID (object, property_id, pspec); break; @@ -589,6 +610,9 @@ case PROP_OUTPUT_AFD_BAR: g_value_set_boolean (value, self->output_afd_bar); break; + case PROP_OUTPUT_VANC: + g_value_set_boolean (value, self->output_vanc); + break; default: G_OBJECT_WARN_INVALID_PROPERTY_ID (object, property_id, pspec); break; @@ -1187,123 +1211,134 @@ g_mutex_unlock (&self->lock); } +static inline guint16 +with_parity(const guint8 word) { + guint8 bit8, parity; + + parity = word ^ (word >> 4); + parity ^= (parity >> 2); + parity ^= (parity >> 1); + bit8 = parity & 1; + + return (word | (bit8 << 8) | ((!bit8) << 9)); +} + 
static void extract_vbi_line (GstDecklinkVideoSrc * self, GstBuffer ** buffer, - IDeckLinkVideoFrameAncillary * vanc_frame, guint field2_offset, guint line, - gboolean * found_cc_out, gboolean * found_afd_bar_out) + IDeckLinkVideoFrameAncillary * vanc_frame, bool hd, bool interlaced, bool f2, guint line) { GstVideoAncillary gstanc; const guint8 *vancdata; - gboolean found_cc = FALSE, found_afd_bar = FALSE; + int ret; - if (vanc_frame->GetBufferForVerticalBlankingLine (field2_offset + line, - (void **) &vancdata) != S_OK) + ret = + vanc_frame->GetBufferForVerticalBlankingLine (line, (void **) &vancdata); + if (ret != S_OK) { + GST_TRACE_OBJECT (self, "Failed getting VBI data line %u (field 2: %d): %d", + line, f2, ret); return; + } - GST_DEBUG_OBJECT (self, "Checking for VBI data on field line %u (field %u)", - field2_offset + line, field2_offset ? 2 : 1); + GST_TRACE_OBJECT (self, "Checking for VBI data on line %u (field 2: %d)", + line, f2); gst_video_vbi_parser_add_line (self->vbiparser, vancdata); - /* Check if CC or AFD/Bar is on this line if we didn't find any on a - * previous line. Remember the line where we found them */ - while (gst_video_vbi_parser_get_ancillary (self->vbiparser, &gstanc) == GST_VIDEO_VBI_PARSER_RESULT_OK) { + GST_DEBUG_OBJECT (self, + "Found DID %02x SDID/BN %02x DC %u on line %u (field 2: %d)", + gstanc.DID, gstanc.SDID_block_number, gstanc.data_count, line, f2); + + if (self->output_vanc) { + GST_DEBUG_OBJECT (self, "Adding ancillary meta to buffer"); + GstAncillaryMeta *meta = gst_buffer_add_ancillary_meta (*buffer); + + meta->c_not_y_channel = hd ? 
1 : 0; + meta->line = line; + meta->offset = 0xfff; + + meta->DID = with_parity (gstanc.DID); + meta->SDID_block_number = with_parity (gstanc.SDID_block_number); + meta->data_count = with_parity (gstanc.data_count); + + meta->data = g_new (guint16, gstanc.data_count); + guint16 checksum = meta->DID + meta->SDID_block_number + meta->data_count; + for (guint i = 0; i < gstanc.data_count; i++) { + meta->data[i] = with_parity (gstanc.data[i]); + checksum += meta->data[i]; + } + + meta->checksum = with_parity(checksum & 0x1FF); + } + switch (GST_VIDEO_ANCILLARY_DID16 (&gstanc)) { case GST_VIDEO_ANCILLARY_DID16_S334_EIA_708: - if (*found_cc_out || !self->output_cc) - continue; - - GST_DEBUG_OBJECT (self, - "Adding CEA-708 CDP meta to buffer for line %u", - field2_offset + line); - GST_MEMDUMP_OBJECT (self, "CDP", gstanc.data, gstanc.data_count); - gst_buffer_add_video_caption_meta (*buffer, - GST_VIDEO_CAPTION_TYPE_CEA708_CDP, gstanc.data, gstanc.data_count); - - found_cc = TRUE; - if (field2_offset) - self->last_cc_vbi_line_field2 = line; - else - self->last_cc_vbi_line = line; + if (self->output_cc) { + GST_DEBUG_OBJECT (self, "Adding CEA-708 CDP meta to buffer"); + GST_MEMDUMP_OBJECT (self, "CDP", gstanc.data, gstanc.data_count); + gst_buffer_add_video_caption_meta (*buffer, + GST_VIDEO_CAPTION_TYPE_CEA708_CDP, gstanc.data, + gstanc.data_count); + } break; case GST_VIDEO_ANCILLARY_DID16_S334_EIA_608: - if (*found_cc_out || !self->output_cc) - continue; - - GST_DEBUG_OBJECT (self, - "Adding CEA-608 meta to buffer for line %u", field2_offset + line); - GST_MEMDUMP_OBJECT (self, "CEA608", gstanc.data, gstanc.data_count); - gst_buffer_add_video_caption_meta (*buffer, - GST_VIDEO_CAPTION_TYPE_CEA608_S334_1A, gstanc.data, - gstanc.data_count); - - found_cc = TRUE; - if (field2_offset) - self->last_cc_vbi_line_field2 = line; - else - self->last_cc_vbi_line = line; + if (self->output_cc) { + GST_DEBUG_OBJECT (self, "Adding CEA-608 meta to buffer"); + GST_MEMDUMP_OBJECT (self,
"CEA608", gstanc.data, gstanc.data_count); + gst_buffer_add_video_caption_meta (*buffer, + GST_VIDEO_CAPTION_TYPE_CEA608_S334_1A, gstanc.data, + gstanc.data_count); + } break; case GST_VIDEO_ANCILLARY_DID16_S2016_3_AFD_BAR:{ - GstVideoAFDValue afd; - gboolean is_letterbox; - guint16 bar1, bar2; - - if (*found_afd_bar_out || !self->output_afd_bar) - continue; + if (self->output_afd_bar) { + GstVideoAFDValue afd; + gboolean is_letterbox; + guint16 bar1, bar2; - GST_DEBUG_OBJECT (self, - "Adding AFD/Bar meta to buffer for line %u", field2_offset + line); - GST_MEMDUMP_OBJECT (self, "AFD/Bar", gstanc.data, gstanc.data_count); - - if (gstanc.data_count < 8) { - GST_WARNING_OBJECT (self, "AFD/Bar data too small"); - continue; - } + GST_DEBUG_OBJECT (self, "Adding AFD/Bar meta to buffer"); + GST_MEMDUMP_OBJECT (self, "AFD/Bar", gstanc.data, gstanc.data_count); - self->aspect_ratio_flag = (gstanc.data[0] >> 2) & 0x1; + if (gstanc.data_count < 8) { + GST_WARNING_OBJECT (self, "AFD/Bar data too small"); + break; + } - afd = (GstVideoAFDValue) ((gstanc.data[0] >> 3) & 0xf); - is_letterbox = ((gstanc.data[3] >> 4) & 0x3) == 0; - bar1 = GST_READ_UINT16_BE (&gstanc.data[4]); - bar2 = GST_READ_UINT16_BE (&gstanc.data[6]); + self->aspect_ratio_flag = (gstanc.data[0] >> 2) & 0x1; - gst_buffer_add_video_afd_meta (*buffer, field2_offset ? 1 : 0, - GST_VIDEO_AFD_SPEC_SMPTE_ST2016_1, afd); - gst_buffer_add_video_bar_meta (*buffer, field2_offset ? 1 : 0, - is_letterbox, bar1, bar2); + afd = (GstVideoAFDValue) ((gstanc.data[0] >> 3) & 0xf); + is_letterbox = ((gstanc.data[3] >> 4) & 0x3) == 0; + bar1 = GST_READ_UINT16_BE (&gstanc.data[4]); + bar2 = GST_READ_UINT16_BE (&gstanc.data[6]); - found_afd_bar = TRUE; - if (field2_offset) - self->last_afd_bar_vbi_line_field2 = line; - else - self->last_afd_bar_vbi_line = line; + gst_buffer_add_video_afd_meta (*buffer, f2 ? 1 : 0, + GST_VIDEO_AFD_SPEC_SMPTE_ST2016_1, afd); + gst_buffer_add_video_bar_meta (*buffer, f2 ?
1 : 0, + is_letterbox, bar1, bar2); + } break; } default: - /* otherwise continue looking */ - continue; + break; } } - - if (found_cc) - *found_cc_out = TRUE; - if (found_afd_bar) - *found_afd_bar_out = TRUE; } static void extract_vbi (GstDecklinkVideoSrc * self, GstBuffer ** buffer, VideoFrame * vf) { IDeckLinkVideoFrameAncillary *vanc_frame = NULL; - gint line; + guint line_start = 0, line_end = 0, f2_line_start = 0, f2_line_end = 0; + int ret; GstVideoFormat videoformat; GstDecklinkModeEnum mode_enum; const GstDecklinkMode *mode; - gboolean found_cc = FALSE, found_afd_bar = FALSE; - if (vf->frame->GetAncillaryData (&vanc_frame) != S_OK) + ret = vf->frame->GetAncillaryData (&vanc_frame); + if (ret != S_OK) { + GST_TRACE_OBJECT (self, "Failed getting VBI data: %d", ret); return; + } videoformat = gst_decklink_video_format_from_type (vanc_frame->GetPixelFormat ()); @@ -1312,7 +1347,7 @@ mode = gst_decklink_get_mode (mode_enum); if (videoformat == GST_VIDEO_FORMAT_UNKNOWN) { - GST_DEBUG_OBJECT (self, "Unknown video format for Ancillary data"); + GST_DEBUG_OBJECT (self, "Unknown video format for ancillary data"); vanc_frame->Release (); return; } @@ -1329,100 +1364,164 @@ self->anc_width = mode->width; } - GST_DEBUG_OBJECT (self, "Checking for ancillary data in VBI"); + switch (mode_enum) { + case GST_DECKLINK_MODE_NTSC: + case GST_DECKLINK_MODE_NTSC2398: + line_start = 4; + line_end = 21; + f2_line_start = 264; + f2_line_end = 284; + break; + case GST_DECKLINK_MODE_NTSC_P: + line_start = 4; + line_end = 43; + break; + case GST_DECKLINK_MODE_PAL: + line_start = 1; + line_end = 22; + f2_line_start = 311; + f2_line_end = 335; + break; + case GST_DECKLINK_MODE_PAL_P: + line_start = 1; + line_end = 45; + break; + case GST_DECKLINK_MODE_1080p2398: + case GST_DECKLINK_MODE_1080p24: + case GST_DECKLINK_MODE_1080p25: + case GST_DECKLINK_MODE_1080p2997: + case GST_DECKLINK_MODE_1080p30: + line_start = 1; + line_end = 41; + break; + case GST_DECKLINK_MODE_1080p50: + case 
GST_DECKLINK_MODE_1080p5994: + case GST_DECKLINK_MODE_1080p60: + line_start = 1; + line_end = 19; + break; + case GST_DECKLINK_MODE_1080i50: + case GST_DECKLINK_MODE_1080i5994: + case GST_DECKLINK_MODE_1080i60: + line_start = 1; + line_end = 20; + f2_line_start = 561; + f2_line_end = 583; + break; + case GST_DECKLINK_MODE_720p50: + case GST_DECKLINK_MODE_720p5994: + case GST_DECKLINK_MODE_720p60: + line_start = 1; + line_end = 25; + break; + case GST_DECKLINK_MODE_2160p2398: + case GST_DECKLINK_MODE_2160p24: + case GST_DECKLINK_MODE_2160p25: + case GST_DECKLINK_MODE_2160p2997: + case GST_DECKLINK_MODE_2160p30: + case GST_DECKLINK_MODE_2160p50: + case GST_DECKLINK_MODE_2160p5994: + case GST_DECKLINK_MODE_2160p60: + line_start = 1; + line_end = 41; + break; + case GST_DECKLINK_MODE_2KDCI2398: + case GST_DECKLINK_MODE_2KDCI24: + case GST_DECKLINK_MODE_2KDCI25: + case GST_DECKLINK_MODE_2KDCI2997: + case GST_DECKLINK_MODE_2KDCI30: + case GST_DECKLINK_MODE_2KDCI50: + case GST_DECKLINK_MODE_2KDCI5994: + case GST_DECKLINK_MODE_2KDCI60: + line_start = 1; + // TODO: Correct end of 50+fps? + line_end = 41; + break; + case GST_DECKLINK_MODE_4Kp2398: + case GST_DECKLINK_MODE_4Kp24: + case GST_DECKLINK_MODE_4Kp25: + case GST_DECKLINK_MODE_4Kp2997: + case GST_DECKLINK_MODE_4Kp30: + case GST_DECKLINK_MODE_4Kp50: + case GST_DECKLINK_MODE_4Kp5994: + case GST_DECKLINK_MODE_4Kp60: + line_start = 1; + // TODO: Correct end of 50+fps? 
+ line_end = 41; + break; + case GST_DECKLINK_MODE_4320p2398: + case GST_DECKLINK_MODE_4320p24: + case GST_DECKLINK_MODE_4320p25: + case GST_DECKLINK_MODE_4320p2997: + case GST_DECKLINK_MODE_4320p30: + case GST_DECKLINK_MODE_4320p50: + case GST_DECKLINK_MODE_4320p5994: + case GST_DECKLINK_MODE_4320p60: + case GST_DECKLINK_MODE_8Kp2398: + case GST_DECKLINK_MODE_8Kp24: + case GST_DECKLINK_MODE_8Kp25: + case GST_DECKLINK_MODE_8Kp2997: + case GST_DECKLINK_MODE_8Kp30: + case GST_DECKLINK_MODE_8Kp50: + case GST_DECKLINK_MODE_8Kp5994: + case GST_DECKLINK_MODE_8Kp60: + // FIXME: Untested + line_start = 1; + // TODO: Correct end of 50+fps? + line_end = 41; + break; - /* First check last known lines, if any */ - if (self->last_cc_vbi_line > 0) { - extract_vbi_line (self, buffer, vanc_frame, 0, self->last_cc_vbi_line, - &found_cc, &found_afd_bar); - } - if (self->last_afd_bar_vbi_line > 0 - && self->last_cc_vbi_line != self->last_afd_bar_vbi_line) { - extract_vbi_line (self, buffer, vanc_frame, 0, self->last_afd_bar_vbi_line, - &found_cc, &found_afd_bar); + case GST_DECKLINK_MODE_1556p2398: + case GST_DECKLINK_MODE_1556p24: + case GST_DECKLINK_MODE_1556p25: + case GST_DECKLINK_MODE_640x480p60: + case GST_DECKLINK_MODE_800x600p60: + case GST_DECKLINK_MODE_1440x900p50: + case GST_DECKLINK_MODE_1440x900p60: + case GST_DECKLINK_MODE_1440x1080p50: + case GST_DECKLINK_MODE_1440x1080p60: + case GST_DECKLINK_MODE_1600x1200p50: + case GST_DECKLINK_MODE_1600x1200p60: + case GST_DECKLINK_MODE_1920x1200p50: + case GST_DECKLINK_MODE_1920x1200p60: + case GST_DECKLINK_MODE_1920x1440p50: + case GST_DECKLINK_MODE_1920x1440p60: + case GST_DECKLINK_MODE_2560x1440p50: + case GST_DECKLINK_MODE_2560x1440p60: + case GST_DECKLINK_MODE_2560x1600p50: + case GST_DECKLINK_MODE_2560x1600p60: + default: + // Unknown, unsupported + break; + case GST_DECKLINK_MODE_AUTO: + case GST_DECKLINK_MODE_NTSC_WIDESCREEN: + case GST_DECKLINK_MODE_NTSC2398_WIDESCREEN: + case GST_DECKLINK_MODE_PAL_WIDESCREEN: + case 
GST_DECKLINK_MODE_NTSC_P_WIDESCREEN: + case GST_DECKLINK_MODE_PAL_P_WIDESCREEN: + g_assert_not_reached (); + break; } - if (!found_cc) - self->last_cc_vbi_line = -1; - if (!found_afd_bar) - self->last_afd_bar_vbi_line = -1; - - if ((self->output_cc && !found_cc) || (self->output_afd_bar - && !found_afd_bar)) { - /* Otherwise loop through the first 21 lines and hope to find the data */ - /* FIXME: For the different formats the number of lines that can contain - * VANC are different */ - for (line = 1; line < 22; line++) { - extract_vbi_line (self, buffer, vanc_frame, 0, line, &found_cc, - &found_afd_bar); - - /* If we found everything we wanted to extract, stop here */ - if ((!self->output_cc || found_cc) && - (!self->output_afd_bar || found_afd_bar)) - break; - } + if (line_start == 0) { + GST_DEBUG_OBJECT (self, "Unsupported mode for extracting VBI"); + vanc_frame->Release (); + return; } - /* Do the same for field 2 in case of interlaced content */ - if (GST_VIDEO_INFO_IS_INTERLACED (&self->info)) { - gboolean found_cc_field2 = FALSE, found_afd_bar_field2 = FALSE; - guint field2_offset = 0; - - /* The VANC lines for the second field are at an offset, depending on - * the format in use - */ - switch (self->info.height) { - case 486: - /* NTSC: 525 / 2 + 1 */ - field2_offset = 263; - break; - case 576: - /* PAL: 625 / 2 + 1 */ - field2_offset = 313; - break; - case 1080: - /* 1080i: 1125 / 2 + 1 */ - field2_offset = 563; - break; - default: - g_assert_not_reached (); - } - - /* First try the same lines as for field 1 if we don't know yet */ - if (self->last_cc_vbi_line_field2 <= 0) - self->last_cc_vbi_line_field2 = self->last_cc_vbi_line; - if (self->last_afd_bar_vbi_line_field2 <= 0) - self->last_afd_bar_vbi_line_field2 = self->last_afd_bar_vbi_line; - - if (self->last_cc_vbi_line_field2 > 0) { - extract_vbi_line (self, buffer, vanc_frame, field2_offset, - self->last_cc_vbi_line_field2, &found_cc_field2, - &found_afd_bar_field2); - } - if 
(self->last_afd_bar_vbi_line_field2 > 0 - && self->last_cc_vbi_line_field2 != - self->last_afd_bar_vbi_line_field2) { - extract_vbi_line (self, buffer, vanc_frame, field2_offset, - self->last_afd_bar_vbi_line_field2, &found_cc_field2, - &found_afd_bar_field2); - } - - if (!found_cc_field2) - self->last_cc_vbi_line_field2 = -1; - if (!found_afd_bar_field2) - self->last_afd_bar_vbi_line_field2 = -1; + GST_DEBUG_OBJECT (self, + "Checking for ancillary data in VBI (lines %u-%u, %u-%u)", line_start, + line_end, f2_line_start, f2_line_end); - if (((self->output_cc && !found_cc_field2) || (self->output_afd_bar - && !found_afd_bar_field2))) { - for (line = 1; line < 22; line++) { - extract_vbi_line (self, buffer, vanc_frame, field2_offset, line, - &found_cc_field2, &found_afd_bar_field2); + /* Extract all progressive / field 1 VANC */ + for (guint line = line_start; line <= line_end; line++) { + extract_vbi_line (self, buffer, vanc_frame, self->anc_width < 1280, GST_VIDEO_INFO_IS_INTERLACED (&self->info), false, line); + } - /* If we found everything we wanted to extract, stop here */ - if ((!self->output_cc || found_cc_field2) && - (!self->output_afd_bar || found_afd_bar_field2)) - break; - } + /* Do the same for field 2 in case of interlaced content */ + if (f2_line_start > 0 && GST_VIDEO_INFO_IS_INTERLACED (&self->info)) { + for (guint line = f2_line_start; line <= f2_line_end; line++) { + extract_vbi_line (self, buffer, vanc_frame, self->anc_width < 1280, true, true, line); } } @@ -1516,9 +1615,9 @@ if (self->caps_mode != f.mode) { self->aspect_ratio_flag = -1; } - // If we have a format that supports VANC and we are asked to extract CC, + // If we have a format that supports VANC and we are asked to output it, // then do it here. 
- if ((self->output_cc || self->output_afd_bar) + if ((self->output_cc || self->output_afd_bar || self->output_vanc) && self->signal_state != SIGNAL_STATE_LOST) extract_vbi (self, buffer, vf); @@ -1657,10 +1756,6 @@ char *colorimetry; const GstDecklinkMode *gst_mode = gst_decklink_get_mode (f.mode); - self->last_cc_vbi_line = -1; - self->last_afd_bar_vbi_line = -1; - self->last_cc_vbi_line_field2 = -1; - self->last_afd_bar_vbi_line_field2 = -1; GST_LOG_OBJECT (self, "mode flags 0x%x", display_mode_flags (self, gst_mode, TRUE)); caps = gst_decklink_mode_get_caps (f.mode,
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/decklink/gstdecklinkvideosrc.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/decklink/gstdecklinkvideosrc.h
Changed
@@ -115,11 +115,8 @@ GstVideoFormat anc_vformat; gint anc_width; gboolean output_cc; - gint last_cc_vbi_line; - gint last_cc_vbi_line_field2; gboolean output_afd_bar; - gint last_afd_bar_vbi_line; - gint last_afd_bar_vbi_line_field2; + gboolean output_vanc; guint skipped_last; GstClockTime skip_from_timestamp;
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/directsound/gstdirectsoundplugin.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/directsound/gstdirectsoundplugin.c
Changed
@@ -35,12 +35,12 @@ static gboolean plugin_init (GstPlugin * plugin) { - if (!gst_element_register (plugin, "directsoundsrc", GST_RANK_SECONDARY, + if (!gst_element_register (plugin, "directsoundsrc", GST_RANK_MARGINAL, GST_TYPE_DIRECTSOUND_SRC)) return FALSE; if (!gst_device_provider_register (plugin, "directsoundsrcdeviceprovider", - GST_RANK_PRIMARY, GST_TYPE_DIRECTSOUND_DEVICE_PROVIDER)) + GST_RANK_NONE, GST_TYPE_DIRECTSOUND_DEVICE_PROVIDER)) return FALSE;
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/dwrite/gstdwriteoverlayobject.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/dwrite/gstdwriteoverlayobject.cpp
Changed
@@ -402,24 +402,10 @@ GstBuffer * buffer) { auto priv = self->priv; - GstVideoOverlayCompositionMeta *meta; - - meta = gst_buffer_get_video_overlay_composition_meta (buffer); - if (meta) { - if (meta->overlay) { - meta->overlay = - gst_video_overlay_composition_make_writable (meta->overlay); - gst_video_overlay_composition_add_rectangle (meta->overlay, - priv->overlay_rect); - } else { - meta->overlay = gst_video_overlay_composition_new (priv->overlay_rect); - } - } else { - GstVideoOverlayComposition *comp = - gst_video_overlay_composition_new (priv->overlay_rect); - meta = gst_buffer_add_video_overlay_composition_meta (buffer, comp); - gst_video_overlay_composition_unref (comp); - } + GstVideoOverlayComposition *comp = + gst_video_overlay_composition_new (priv->overlay_rect); + gst_buffer_add_video_overlay_composition_meta (buffer, comp); + gst_video_overlay_composition_unref (comp); return TRUE; }
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/hip
Added
+(directory)
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/hip/gsthipbasefilter.cpp
Added
@@ -0,0 +1,362 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gsthipbasefilter.h" +#include <mutex> + +GST_DEBUG_CATEGORY_STATIC (gst_hip_base_filter_debug); +#define GST_CAT_DEFAULT gst_hip_base_filter_debug + +/* cached quark to avoid contention on the global quark table lock */ +#define META_TAG_VIDEO meta_tag_video_quark +static GQuark meta_tag_video_quark; + +enum +{ + PROP_0, + PROP_DEVICE_ID, + PROP_VENDOR, +}; + +#define DEFAULT_DEVICE_ID -1 +#define DEFAULT_VENDOR GST_HIP_VENDOR_UNKNOWN + +struct _GstHipBaseFilterPrivate +{ + ~_GstHipBaseFilterPrivate () + { + gst_clear_caps (&in_caps); + gst_clear_caps (&out_caps); + } + + std::recursive_mutex lock; + GstCaps *in_caps = nullptr; + GstCaps *out_caps = nullptr; + + gint device_id = DEFAULT_DEVICE_ID; + GstHipVendor vendor = DEFAULT_VENDOR; +}; + +#define gst_hip_base_filter_parent_class parent_class +G_DEFINE_ABSTRACT_TYPE (GstHipBaseFilter, gst_hip_base_filter, + GST_TYPE_BASE_TRANSFORM); + +static void gst_hip_base_filter_set_property (GObject * object, + guint prop_id, const GValue * value, GParamSpec * pspec); +static void gst_hip_base_filter_get_property 
(GObject * object, + guint prop_id, GValue * value, GParamSpec * pspec); +static void gst_hip_base_filter_finalize (GObject * object); +static void gst_hip_base_filter_set_context (GstElement * element, + GstContext * context); +static gboolean gst_hip_base_filter_start (GstBaseTransform * trans); +static gboolean gst_hip_base_filter_stop (GstBaseTransform * trans); +static gboolean gst_hip_base_filter_set_caps (GstBaseTransform * trans, + GstCaps * incaps, GstCaps * outcaps); +static gboolean gst_hip_base_filter_get_unit_size (GstBaseTransform * trans, + GstCaps * caps, gsize * size); +static gboolean gst_hip_base_filter_query (GstBaseTransform * trans, + GstPadDirection direction, GstQuery * query); +static void gst_hip_base_filter_before_transform (GstBaseTransform * trans, + GstBuffer * buffer); +static gboolean +gst_hip_base_filter_transform_meta (GstBaseTransform * trans, + GstBuffer * outbuf, GstMeta * meta, GstBuffer * inbuf); + +static void +gst_hip_base_filter_class_init (GstHipBaseFilterClass * klass) +{ + auto object_class = G_OBJECT_CLASS (klass); + auto element_class = GST_ELEMENT_CLASS (klass); + auto trans_class = GST_BASE_TRANSFORM_CLASS (klass); + + object_class->finalize = gst_hip_base_filter_finalize; + object_class->set_property = gst_hip_base_filter_set_property; + object_class->get_property = gst_hip_base_filter_get_property; + + g_object_class_install_property (object_class, PROP_DEVICE_ID, + g_param_spec_int ("device-id", + "Device ID", "HIP device ID to use (-1 = auto)", + -1, G_MAXINT, DEFAULT_DEVICE_ID, + (GParamFlags) (G_PARAM_READWRITE | GST_PARAM_MUTABLE_READY | + G_PARAM_STATIC_STRINGS))); + g_object_class_install_property (object_class, PROP_VENDOR, + g_param_spec_enum ("vendor", "Vendor", "Vendor type", + GST_TYPE_HIP_VENDOR, GST_HIP_VENDOR_UNKNOWN, + (GParamFlags) (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + element_class->set_context = + GST_DEBUG_FUNCPTR (gst_hip_base_filter_set_context); + + 
trans_class->passthrough_on_same_caps = TRUE; + + trans_class->start = GST_DEBUG_FUNCPTR (gst_hip_base_filter_start); + trans_class->stop = GST_DEBUG_FUNCPTR (gst_hip_base_filter_stop); + trans_class->set_caps = GST_DEBUG_FUNCPTR (gst_hip_base_filter_set_caps); + trans_class->get_unit_size = + GST_DEBUG_FUNCPTR (gst_hip_base_filter_get_unit_size); + trans_class->query = GST_DEBUG_FUNCPTR (gst_hip_base_filter_query); + trans_class->before_transform = + GST_DEBUG_FUNCPTR (gst_hip_base_filter_before_transform); + trans_class->transform_meta = + GST_DEBUG_FUNCPTR (gst_hip_base_filter_transform_meta); + + GST_DEBUG_CATEGORY_INIT (gst_hip_base_filter_debug, + "hipbasefilter", 0, "hipbasefilter"); + + gst_type_mark_as_plugin_api (GST_TYPE_HIP_BASE_FILTER, (GstPluginAPIFlags) 0); + meta_tag_video_quark = g_quark_from_static_string (GST_META_TAG_VIDEO_STR); +} + +static void +gst_hip_base_filter_init (GstHipBaseFilter * self) +{ + self->priv = new GstHipBaseFilterPrivate (); +} + +static void +gst_hip_base_filter_finalize (GObject * object) +{ + auto self = GST_HIP_BASE_FILTER (object); + + gst_clear_object (&self->device); + delete self->priv; + + G_OBJECT_CLASS (parent_class)->finalize (object); +} + +static void +gst_hip_base_filter_set_property (GObject * object, guint prop_id, + const GValue * value, GParamSpec * pspec) +{ + auto self = GST_HIP_BASE_FILTER (object); + auto priv = self->priv; + + std::lock_guard < std::recursive_mutex > lk (priv->lock); + + switch (prop_id) { + case PROP_DEVICE_ID: + priv->device_id = g_value_get_int (value); + break; + case PROP_VENDOR: + priv->vendor = (GstHipVendor) g_value_get_enum (value); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static void +gst_hip_base_filter_get_property (GObject * object, guint prop_id, + GValue * value, GParamSpec * pspec) +{ + auto self = GST_HIP_BASE_FILTER (object); + auto priv = self->priv; + + std::lock_guard < std::recursive_mutex > lk 
(priv->lock); + + switch (prop_id) { + case PROP_DEVICE_ID: + g_value_set_int (value, priv->device_id); + break; + case PROP_VENDOR: + g_value_set_enum (value, priv->vendor); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static void +gst_hip_base_filter_set_context (GstElement * element, GstContext * context) +{ + auto self = GST_HIP_BASE_FILTER (element); + auto priv = self->priv; + + { + std::lock_guard < std::recursive_mutex > lk (priv->lock); + gst_hip_handle_set_context (element, context, priv->vendor, + priv->device_id, &self->device); + } + + GST_ELEMENT_CLASS (parent_class)->set_context (element, context); +} + +static gboolean +gst_hip_base_filter_start (GstBaseTransform * trans) +{ + auto self = GST_HIP_BASE_FILTER (trans); + auto priv = self->priv; + + { + std::lock_guard < std::recursive_mutex > lk (priv->lock); + if (!gst_hip_ensure_element_data (GST_ELEMENT (trans), + priv->vendor, priv->device_id, &self->device)) { + GST_ERROR_OBJECT (self, "Couldn't get HIP device"); + return FALSE; + } + } + + return TRUE; +} + +static gboolean +gst_hip_base_filter_stop (GstBaseTransform * trans) +{ + auto self = GST_HIP_BASE_FILTER (trans); + auto priv = self->priv; + + { + std::lock_guard < std::recursive_mutex > lk (priv->lock); + gst_clear_object (&self->device); + gst_clear_caps (&priv->in_caps); + gst_clear_caps (&priv->out_caps); + } + + return TRUE; +} + +static gboolean +gst_hip_base_filter_set_caps (GstBaseTransform * trans, GstCaps * incaps, + GstCaps * outcaps) +{ + auto self = GST_HIP_BASE_FILTER (trans); + auto priv = self->priv; + GstVideoInfo in_info, out_info; + + if (!self->device) { + GST_ERROR_OBJECT (self, "HIP device is not configured"); + return FALSE; + } + + /* input caps */ + if (!gst_video_info_from_caps (&in_info, incaps)) { + GST_ERROR_OBJECT (self, "invalid incaps %" GST_PTR_FORMAT, incaps); + return FALSE; + } + + /* output caps */ + if (!gst_video_info_from_caps (&out_info, 
outcaps)) { + GST_ERROR_OBJECT (self, "invalid outcaps %" GST_PTR_FORMAT, outcaps); + return FALSE; + } + + self->in_info = in_info; + self->out_info = out_info; + gst_caps_replace (&priv->in_caps, incaps); + gst_caps_replace (&priv->out_caps, outcaps); + + auto klass = GST_HIP_BASE_FILTER_GET_CLASS (self); + if (klass->set_info) + return klass->set_info (self, incaps, &in_info, outcaps, &out_info); + + return TRUE; +} + +static gboolean +gst_hip_base_filter_get_unit_size (GstBaseTransform * trans, GstCaps * caps, + gsize * size) +{ + GstVideoInfo info; + if (!gst_video_info_from_caps (&info, caps)) + return FALSE; + + *size = GST_VIDEO_INFO_SIZE (&info); + + return TRUE; +} + +static gboolean +gst_hip_base_filter_query (GstBaseTransform * trans, + GstPadDirection direction, GstQuery * query) +{ + auto self = GST_HIP_BASE_FILTER (trans); + auto priv = self->priv; + + switch (GST_QUERY_TYPE (query)) { + case GST_QUERY_CONTEXT: + { + std::lock_guard < std::recursive_mutex > lk (priv->lock); + if (gst_hip_handle_context_query (GST_ELEMENT (self), query, + self->device)) { + return TRUE; + } + break; + } + default: + break; + } + + return GST_BASE_TRANSFORM_CLASS (parent_class)->query (trans, direction, + query); +} + +static void +gst_hip_base_filter_before_transform (GstBaseTransform * trans, + GstBuffer * buffer) +{ + auto self = GST_HIP_BASE_FILTER (trans); + auto priv = self->priv; + + auto mem = gst_buffer_peek_memory (buffer, 0); + if (!gst_is_hip_memory (mem)) + return; + + auto hmem = GST_HIP_MEMORY_CAST (mem); + /* Same context, nothing to do */ + if (gst_hip_device_is_equal (self->device, hmem->device)) + return; + + GST_INFO_OBJECT (self, "Updating device %" GST_PTR_FORMAT " -> %" + GST_PTR_FORMAT, self->device, hmem->device); + + { + std::lock_guard < std::recursive_mutex > lk (priv->lock); + gst_clear_object (&self->device); + self->device = (GstHipDevice *) gst_object_ref (hmem->device); + } + + /* subclass will update internal object. 
+ * Note that gst_base_transform_reconfigure() might not trigger this + * unless caps was changed meanwhile */ + gst_hip_base_filter_set_caps (trans, priv->in_caps, priv->out_caps); + + /* Mark reconfigure so that we can update pool */ + gst_base_transform_reconfigure_src (trans); +} + +static gboolean +gst_hip_base_filter_transform_meta (GstBaseTransform * trans, + GstBuffer * outbuf, GstMeta * meta, GstBuffer * inbuf) +{ + auto info = meta->info; + auto tags = gst_meta_api_type_get_tags (info->api); + + if (!tags || (g_strv_length ((gchar **) tags) == 1 + && gst_meta_api_type_has_tag (info->api, META_TAG_VIDEO))) + return TRUE; + + return GST_BASE_TRANSFORM_CLASS (parent_class)->transform_meta (trans, outbuf, + meta, inbuf); +}
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/hip/gsthipbasefilter.h
Added
@@ -0,0 +1,68 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/gst.h> +#include <gst/base/gstbasetransform.h> +#include <gst/video/video.h> +#include <gst/hip/gsthip.h> + +G_BEGIN_DECLS + +#define GST_TYPE_HIP_BASE_FILTER (gst_hip_base_filter_get_type()) +#define GST_HIP_BASE_FILTER(obj) (G_TYPE_CHECK_INSTANCE_CAST((obj),GST_TYPE_HIP_BASE_FILTER,GstHipBaseFilter)) +#define GST_HIP_BASE_FILTER_CLASS(klass) (G_TYPE_CHECK_CLASS_CAST((klass), GST_TYPE_HIP_BASE_FILTER,GstHipBaseFilterClass)) +#define GST_HIP_BASE_FILTER_GET_CLASS(obj) (G_TYPE_INSTANCE_GET_CLASS((obj), GST_TYPE_HIP_BASE_FILTER,GstHipBaseFilterClass)) +#define GST_IS_HIP_BASE_FILTER(obj) (G_TYPE_CHECK_INSTANCE_TYPE((obj),GST_TYPE_HIP_BASE_FILTER)) +#define GST_IS_HIP_BASE_FILTER_CLASS(klass) (G_TYPE_CHECK_CLASS_TYPE((klass), GST_TYPE_HIP_BASE_FILTER)) + +typedef struct _GstHipBaseFilter GstHipBaseFilter; +typedef struct _GstHipBaseFilterClass GstHipBaseFilterClass; +typedef struct _GstHipBaseFilterPrivate GstHipBaseFilterPrivate; + +struct _GstHipBaseFilter +{ + GstBaseTransform parent; + + GstHipDevice *device; + + GstVideoInfo in_info; + GstVideoInfo out_info; + + GstHipBaseFilterPrivate 
*priv; +}; + +struct _GstHipBaseFilterClass +{ + GstBaseTransformClass parent_class; + + gboolean (*set_info) (GstHipBaseFilter *filter, + GstCaps *incaps, + GstVideoInfo *in_info, + GstCaps *outcaps, + GstVideoInfo *out_info); +}; + +GType gst_hip_base_filter_get_type (void); + +G_DEFINE_AUTOPTR_CLEANUP_FUNC(GstHipBaseFilter, gst_object_unref) + +G_END_DECLS +
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/hip/gsthipcompositor.cpp
Added
@@ -0,0 +1,1611 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include <gst/hip/gsthip.h> +#include "gsthipcompositor.h" +#include "gsthipconverter.h" +#include <mutex> + +GST_DEBUG_CATEGORY_STATIC (gst_hip_compositor_debug); +#define GST_CAT_DEFAULT gst_hip_compositor_debug + +enum GstHipCompositorOperator +{ + GST_HIP_COMPOSITOR_OPERATOR_SOURCE, + GST_HIP_COMPOSITOR_OPERATOR_OVER, +}; + +#define GST_TYPE_HIP_COMPOSITOR_OPERATOR (gst_hip_compositor_operator_get_type()) +static GType +gst_hip_compositor_operator_get_type (void) +{ + static GType compositor_operator_type = 0; + static const GEnumValue compositor_operator[] = { + {GST_HIP_COMPOSITOR_OPERATOR_SOURCE, "Source", "source"}, + {GST_HIP_COMPOSITOR_OPERATOR_OVER, "Over", "over"}, + {0, nullptr, nullptr}, + }; + static std::once_flag once; + + std::call_once (once, [&] { + compositor_operator_type = + g_enum_register_static ("GstHipCompositorOperator", + compositor_operator); + }); + + return compositor_operator_type; +} + +enum GstHipCompositorSizingPolicy +{ + GST_HIP_COMPOSITOR_SIZING_POLICY_NONE, + GST_HIP_COMPOSITOR_SIZING_POLICY_KEEP_ASPECT_RATIO, +}; + +#define 
GST_TYPE_HIP_COMPOSITOR_SIZING_POLICY (gst_hip_compositor_sizing_policy_get_type()) +static GType +gst_hip_compositor_sizing_policy_get_type (void) +{ + static GType sizing_policy_type = 0; + + static const GEnumValue sizing_polices[] = { + {GST_HIP_COMPOSITOR_SIZING_POLICY_NONE, + "None: Image is scaled to fill configured destination rectangle without " + "padding or keeping the aspect ratio", "none"}, + {GST_HIP_COMPOSITOR_SIZING_POLICY_KEEP_ASPECT_RATIO, + "Keep Aspect Ratio: Image is scaled to fit destination rectangle " + "specified by GstHipCompositorPad:{xpos, ypos, width, height} " + "with preserved aspect ratio. Resulting image will be centered in " + "the destination rectangle with padding if necessary", + "keep-aspect-ratio"}, + {0, nullptr, nullptr}, + }; + static std::once_flag once; + + std::call_once (once, [&] { + sizing_policy_type = + g_enum_register_static ("GstHipCompositorSizingPolicy", sizing_polices); + }); + + return sizing_policy_type; +} + +enum +{ + PROP_PAD_0, + PROP_PAD_XPOS, + PROP_PAD_YPOS, + PROP_PAD_WIDTH, + PROP_PAD_HEIGHT, + PROP_PAD_ALPHA, + PROP_PAD_OPERATOR, + PROP_PAD_SIZING_POLICY, +}; + +#define DEFAULT_PAD_XPOS 0 +#define DEFAULT_PAD_YPOS 0 +#define DEFAULT_PAD_WIDTH 0 +#define DEFAULT_PAD_HEIGHT 0 +#define DEFAULT_PAD_ALPHA 1.0 +#define DEFAULT_PAD_OPERATOR GST_HIP_COMPOSITOR_OPERATOR_OVER +#define DEFAULT_PAD_SIZING_POLICY GST_HIP_COMPOSITOR_SIZING_POLICY_NONE + +enum +{ + PROP_0, + PROP_DEVICE_ID, + PROP_VENDOR, + PROP_IGNORE_INACTIVE_PADS, +}; + +#define DEFAULT_DEVICE_ID -1 +#define DEFAULT_VENDOR GST_HIP_VENDOR_UNKNOWN + +/* *INDENT-OFF* */ +struct GstHipCompositorPadPrivate +{ + ~GstHipCompositorPadPrivate () + { + gst_clear_object (&conv); + gst_clear_buffer (&prepared_buf); + if (fallback_pool) { + gst_buffer_pool_set_active (fallback_pool, FALSE); + gst_object_unref (fallback_pool); + } + } + + GstHipConverter *conv = nullptr; + GstBufferPool *fallback_pool = nullptr; + GstBuffer *prepared_buf = nullptr; + + gboolean 
config_updated = FALSE; + + std::recursive_mutex lock; + + /* properties */ + gint xpos = DEFAULT_PAD_XPOS; + gint ypos = DEFAULT_PAD_YPOS; + gint width = DEFAULT_PAD_WIDTH; + gint height = DEFAULT_PAD_HEIGHT; + gdouble alpha = DEFAULT_PAD_ALPHA; + GstHipCompositorOperator op = DEFAULT_PAD_OPERATOR; + GstHipCompositorSizingPolicy sizing_policy = DEFAULT_PAD_SIZING_POLICY; +}; + +struct _GstHipCompositorPad +{ + GstVideoAggregatorConvertPad parent; + + GstHipCompositorPadPrivate *priv; +}; + +struct GstHipCompositorPrivate +{ + std::recursive_mutex lock; + + /* properties */ + gint device_id = DEFAULT_DEVICE_ID; + GstHipVendor vendor = DEFAULT_VENDOR; +}; +/* *INDENT-ON* */ + +struct _GstHipCompositor +{ + GstVideoAggregator parent; + + GstHipDevice *device; + + GstHipCompositorPrivate *priv; +}; + +static void gst_hip_compositor_pad_finalize (GObject * object); +static void gst_hip_compositor_pad_set_property (GObject * object, + guint prop_id, const GValue * value, GParamSpec * pspec); +static void gst_hip_compositor_pad_get_property (GObject * object, + guint prop_id, GValue * value, GParamSpec * pspec); +static gboolean +gst_hip_compositor_pad_prepare_frame (GstVideoAggregatorPad * pad, + GstVideoAggregator * vagg, GstBuffer * buffer, + GstVideoFrame * prepared_frame); +static void gst_hip_compositor_pad_clean_frame (GstVideoAggregatorPad * pad, + GstVideoAggregator * vagg, GstVideoFrame * prepared_frame); + +#define gst_hip_compositor_pad_parent_class parent_pad_class +G_DEFINE_TYPE (GstHipCompositorPad, gst_hip_compositor_pad, + GST_TYPE_VIDEO_AGGREGATOR_PAD); + +static void +gst_hip_compositor_pad_class_init (GstHipCompositorPadClass * klass) +{ + auto object_class = G_OBJECT_CLASS (klass); + auto vagg_pad_class = GST_VIDEO_AGGREGATOR_PAD_CLASS (klass); + auto param_flags = (GParamFlags) + (G_PARAM_READWRITE | GST_PARAM_CONTROLLABLE | G_PARAM_STATIC_STRINGS); + + object_class->finalize = gst_hip_compositor_pad_finalize; + object_class->set_property = 
gst_hip_compositor_pad_set_property; + object_class->get_property = gst_hip_compositor_pad_get_property; + + g_object_class_install_property (object_class, PROP_PAD_XPOS, + g_param_spec_int ("xpos", "X Position", "X position of the picture", + G_MININT, G_MAXINT, DEFAULT_PAD_XPOS, param_flags)); + g_object_class_install_property (object_class, PROP_PAD_YPOS, + g_param_spec_int ("ypos", "Y Position", "Y position of the picture", + G_MININT, G_MAXINT, DEFAULT_PAD_YPOS, param_flags)); + g_object_class_install_property (object_class, PROP_PAD_WIDTH, + g_param_spec_int ("width", "Width", "Width of the picture", + G_MININT, G_MAXINT, DEFAULT_PAD_WIDTH, param_flags)); + g_object_class_install_property (object_class, PROP_PAD_HEIGHT, + g_param_spec_int ("height", "Height", "Height of the picture", + G_MININT, G_MAXINT, DEFAULT_PAD_HEIGHT, param_flags)); + g_object_class_install_property (object_class, PROP_PAD_ALPHA, + g_param_spec_double ("alpha", "Alpha", "Alpha of the picture", 0.0, 1.0, + DEFAULT_PAD_ALPHA, param_flags)); + g_object_class_install_property (object_class, PROP_PAD_OPERATOR, + g_param_spec_enum ("operator", "Operator", + "Blending operator to use for blending this pad over the previous ones", + GST_TYPE_HIP_COMPOSITOR_OPERATOR, DEFAULT_PAD_OPERATOR, param_flags)); + g_object_class_install_property (object_class, PROP_PAD_SIZING_POLICY, + g_param_spec_enum ("sizing-policy", "Sizing policy", + "Sizing policy to use for image scaling", + GST_TYPE_HIP_COMPOSITOR_SIZING_POLICY, DEFAULT_PAD_SIZING_POLICY, + param_flags)); + + vagg_pad_class->prepare_frame = + GST_DEBUG_FUNCPTR (gst_hip_compositor_pad_prepare_frame); + vagg_pad_class->clean_frame = + GST_DEBUG_FUNCPTR (gst_hip_compositor_pad_clean_frame); + + gst_type_mark_as_plugin_api (GST_TYPE_HIP_COMPOSITOR_OPERATOR, + (GstPluginAPIFlags) 0); + gst_type_mark_as_plugin_api (GST_TYPE_HIP_COMPOSITOR_SIZING_POLICY, + (GstPluginAPIFlags) 0); +} + +static void +gst_hip_compositor_pad_init (GstHipCompositorPad * 
self) +{ + self->priv = new GstHipCompositorPadPrivate (); +} + +static void +gst_hip_compositor_pad_finalize (GObject * object) +{ + auto self = GST_HIP_COMPOSITOR_PAD (object); + + delete self->priv; + + G_OBJECT_CLASS (parent_pad_class)->finalize (object); +} + +static void +pad_update_position (GstHipCompositorPad * self, + gint * old, const GValue * value) +{ + auto priv = self->priv; + auto tmp = g_value_get_int (value); + + if (*old != tmp) { + *old = tmp; + priv->config_updated = TRUE; + } +} + +static void +gst_hip_compositor_pad_set_property (GObject * object, guint prop_id, + const GValue * value, GParamSpec * pspec) +{ + auto self = GST_HIP_COMPOSITOR_PAD (object); + auto priv = self->priv; + + std::lock_guard < std::recursive_mutex > lk (priv->lock); + switch (prop_id) { + case PROP_PAD_XPOS: + pad_update_position (self, &priv->xpos, value); + break; + case PROP_PAD_YPOS: + pad_update_position (self, &priv->ypos, value); + break; + case PROP_PAD_WIDTH: + pad_update_position (self, &priv->width, value); + break; + case PROP_PAD_HEIGHT: + pad_update_position (self, &priv->height, value); + break; + case PROP_PAD_ALPHA: + { + gdouble alpha = g_value_get_double (value); + if (priv->alpha != alpha) { + priv->config_updated = TRUE; + priv->alpha = alpha; + } + break; + } + case PROP_PAD_OPERATOR: + { + auto op = (GstHipCompositorOperator) g_value_get_enum (value); + if (op != priv->op) { + priv->op = op; + priv->config_updated = TRUE; + } + break; + } + case PROP_PAD_SIZING_POLICY: + { + auto policy = (GstHipCompositorSizingPolicy) g_value_get_enum (value); + if (priv->sizing_policy != policy) { + priv->sizing_policy = policy; + priv->config_updated = TRUE; + } + break; + } + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static void +gst_hip_compositor_pad_get_property (GObject * object, guint prop_id, + GValue * value, GParamSpec * pspec) +{ + auto self = GST_HIP_COMPOSITOR_PAD (object); + auto priv = self->priv; 
+ + std::lock_guard < std::recursive_mutex > lk (priv->lock); + switch (prop_id) { + case PROP_PAD_XPOS: + g_value_set_int (value, priv->xpos); + break; + case PROP_PAD_YPOS: + g_value_set_int (value, priv->ypos); + break; + case PROP_PAD_WIDTH: + g_value_set_int (value, priv->width); + break; + case PROP_PAD_HEIGHT: + g_value_set_int (value, priv->height); + break; + case PROP_PAD_ALPHA: + g_value_set_double (value, priv->alpha); + break; + case PROP_PAD_OPERATOR: + g_value_set_enum (value, priv->op); + break; + case PROP_PAD_SIZING_POLICY: + g_value_set_enum (value, priv->sizing_policy); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static void +gst_hip_compositor_pad_get_output_size (GstHipCompositorPad * self, + gint out_par_n, gint out_par_d, gint * width, gint * height, + gint * x_offset, gint * y_offset) +{ + auto vagg_pad = GST_VIDEO_AGGREGATOR_PAD (self); + auto priv = self->priv; + gint pad_width, pad_height; + guint dar_n, dar_d; + + *x_offset = 0; + *y_offset = 0; + *width = 0; + *height = 0; + + if (!vagg_pad->info.finfo + || vagg_pad->info.finfo->format == GST_VIDEO_FORMAT_UNKNOWN) { + GST_DEBUG_OBJECT (self, "Have no caps yet"); + return; + } + + pad_width = priv->width <= 0 ? + GST_VIDEO_INFO_WIDTH (&vagg_pad->info) : priv->width; + pad_height = priv->height <= 0 ? 
+ GST_VIDEO_INFO_HEIGHT (&vagg_pad->info) : priv->height; + + if (pad_width == 0 || pad_height == 0) + return; + + if (!gst_video_calculate_display_ratio (&dar_n, &dar_d, pad_width, pad_height, + GST_VIDEO_INFO_PAR_N (&vagg_pad->info), + GST_VIDEO_INFO_PAR_D (&vagg_pad->info), out_par_n, out_par_d)) { + GST_WARNING_OBJECT (self, "Cannot calculate display aspect ratio"); + return; + } + + GST_TRACE_OBJECT (self, "scaling %ux%u by %u/%u (%u/%u / %u/%u)", + pad_width, pad_height, dar_n, dar_d, + GST_VIDEO_INFO_PAR_N (&vagg_pad->info), + GST_VIDEO_INFO_PAR_D (&vagg_pad->info), out_par_n, out_par_d); + + switch (priv->sizing_policy) { + case GST_HIP_COMPOSITOR_SIZING_POLICY_NONE: + /* Pick either height or width, whichever is an integer multiple of the + * display aspect ratio. However, prefer preserving the height to account + * for interlaced video. */ + if (pad_height % dar_n == 0) { + pad_width = gst_util_uint64_scale_int (pad_height, dar_n, dar_d); + } else if (pad_width % dar_d == 0) { + pad_height = gst_util_uint64_scale_int (pad_width, dar_d, dar_n); + } else { + pad_width = gst_util_uint64_scale_int (pad_height, dar_n, dar_d); + } + break; + case GST_HIP_COMPOSITOR_SIZING_POLICY_KEEP_ASPECT_RATIO: + { + gint from_dar_n, from_dar_d, to_dar_n, to_dar_d, num, den; + + /* Calculate DAR again with actual video size */ + if (!gst_util_fraction_multiply (GST_VIDEO_INFO_WIDTH (&vagg_pad->info), + GST_VIDEO_INFO_HEIGHT (&vagg_pad->info), + GST_VIDEO_INFO_PAR_N (&vagg_pad->info), + GST_VIDEO_INFO_PAR_D (&vagg_pad->info), &from_dar_n, + &from_dar_d)) { + from_dar_n = from_dar_d = -1; + } + + if (!gst_util_fraction_multiply (pad_width, pad_height, + out_par_n, out_par_d, &to_dar_n, &to_dar_d)) { + to_dar_n = to_dar_d = -1; + } + + if (from_dar_n != to_dar_n || from_dar_d != to_dar_d) { + /* Calculate new output resolution */ + if (from_dar_n != -1 && from_dar_d != -1 + && gst_util_fraction_multiply (from_dar_n, from_dar_d, + out_par_d, out_par_n, &num, &den)) { + 
GstVideoRectangle src_rect, dst_rect, rst_rect; + + src_rect.h = gst_util_uint64_scale_int (pad_width, den, num); + if (src_rect.h == 0) { + pad_width = 0; + pad_height = 0; + break; + } + + src_rect.x = src_rect.y = 0; + src_rect.w = pad_width; + + dst_rect.x = dst_rect.y = 0; + dst_rect.w = pad_width; + dst_rect.h = pad_height; + + /* Scale rect to be centered in destination rect */ + gst_video_center_rect (&src_rect, &dst_rect, &rst_rect, TRUE); + + GST_LOG_OBJECT (self, + "Re-calculated size %dx%d -> %dx%d (x-offset %d, y-offset %d)", + pad_width, pad_height, rst_rect.w, rst_rect.h, rst_rect.x, + rst_rect.y); + + *x_offset = rst_rect.x; + *y_offset = rst_rect.y; + pad_width = rst_rect.w; + pad_height = rst_rect.h; + } else { + GST_WARNING_OBJECT (self, "Failed to calculate output size"); + + *x_offset = 0; + *y_offset = 0; + pad_width = 0; + pad_height = 0; + } + } + break; + } + } + + *width = pad_width; + *height = pad_height; +} + +static GstVideoRectangle +clamp_rectangle (gint x, gint y, gint w, gint h, gint outer_width, + gint outer_height) +{ + gint x2 = x + w; + gint y2 = y + h; + GstVideoRectangle clamped; + + /* Clamp the x/y coordinates of this frame to the output boundaries to cover + * the case where (say, with negative xpos/ypos or w/h greater than the output + * size) the non-obscured portion of the frame could be outside the bounds of + * the video itself and hence not visible at all */ + clamped.x = CLAMP (x, 0, outer_width); + clamped.y = CLAMP (y, 0, outer_height); + clamped.w = CLAMP (x2, 0, outer_width) - clamped.x; + clamped.h = CLAMP (y2, 0, outer_height) - clamped.y; + + return clamped; +} + +static gboolean +gst_hip_compositor_pad_check_frame_obscured (GstVideoAggregatorPad * pad, + GstVideoAggregator * vagg) +{ + auto self = GST_HIP_COMPOSITOR_PAD (pad); + auto priv = self->priv; + gint width, height; + GstVideoInfo *info = &vagg->info; + /* The rectangle representing this frame, clamped to the video's boundaries. 
+ * Due to the clamping, this is different from the frame width/height above. */ + GstVideoRectangle frame_rect; + gint x_offset, y_offset; + + /* There's three types of width/height here: + * 1. GST_VIDEO_FRAME_WIDTH/HEIGHT: + * The frame width/height (same as pad->info.height/width; + * see gst_video_frame_map()) + * 2. cpad->width/height: + * The optional pad property for scaling the frame (if zero, the video is + * left unscaled) + */ + + if (priv->alpha == 0) + return TRUE; + + gst_hip_compositor_pad_get_output_size (self, GST_VIDEO_INFO_PAR_N (info), + GST_VIDEO_INFO_PAR_D (info), &width, &height, &x_offset, &y_offset); + + frame_rect = clamp_rectangle (priv->xpos + x_offset, priv->ypos + y_offset, + width, height, GST_VIDEO_INFO_WIDTH (info), GST_VIDEO_INFO_HEIGHT (info)); + + if (frame_rect.w == 0 || frame_rect.h == 0) { + GST_DEBUG_OBJECT (pad, "Resulting frame is zero-width or zero-height " + "(w: %i, h: %i), skipping", frame_rect.w, frame_rect.h); + return TRUE; + } + + return FALSE; +} + +static GstBuffer * +gst_hip_compositor_upload_frame (GstHipCompositor * self, + GstVideoAggregatorPad * pad, GstBuffer * buffer) +{ + auto cpad = GST_HIP_COMPOSITOR_PAD (pad); + auto priv = cpad->priv; + GstVideoFrame src, dst; + + auto mem = gst_buffer_peek_memory (buffer, 0); + if (gst_is_hip_memory (mem)) { + auto hmem = GST_HIP_MEMORY_CAST (mem); + if (gst_hip_device_is_equal (hmem->device, self->device)) + return gst_buffer_ref (buffer); + } + + if (!priv->fallback_pool) { + priv->fallback_pool = gst_hip_buffer_pool_new (self->device); + auto config = gst_buffer_pool_get_config (priv->fallback_pool); + + auto caps = gst_video_info_to_caps (&pad->info); + gst_buffer_pool_config_set_params (config, caps, pad->info.size, 0, 0); + gst_caps_unref (caps); + if (!gst_buffer_pool_set_config (priv->fallback_pool, config)) { + GST_ERROR_OBJECT (pad, "Set config failed"); + gst_clear_object (&priv->fallback_pool); + return nullptr; + } + + if (!gst_buffer_pool_set_active 
(priv->fallback_pool, TRUE)) { + GST_ERROR_OBJECT (pad, "Set active failed"); + gst_clear_object (&priv->fallback_pool); + return nullptr; + } + } + + GstBuffer *outbuf = nullptr; + gst_buffer_pool_acquire_buffer (priv->fallback_pool, &outbuf, nullptr); + if (!outbuf) { + GST_ERROR_OBJECT (self, "Couldn't acquire buffer"); + return nullptr; + } + + if (!gst_video_frame_map (&src, &pad->info, buffer, GST_MAP_READ)) { + GST_ERROR_OBJECT (pad, "Couldn't map src frame"); + gst_buffer_unref (outbuf); + return nullptr; + } + + if (!gst_video_frame_map (&dst, &pad->info, outbuf, GST_MAP_WRITE)) { + GST_ERROR_OBJECT (pad, "Couldn't map dst frame"); + gst_video_frame_unmap (&src); + gst_buffer_unref (outbuf); + return nullptr; + } + + auto ret = gst_video_frame_copy (&dst, &src); + gst_video_frame_unmap (&dst); + gst_video_frame_unmap (&src); + + if (!ret) { + GST_ERROR_OBJECT (pad, "Couldn't copy frame"); + gst_buffer_unref (outbuf); + return nullptr; + } + + return outbuf; +} + +static gboolean +gst_hip_compositor_pad_prepare_frame (GstVideoAggregatorPad * pad, + GstVideoAggregator * vagg, GstBuffer * buffer, + GstVideoFrame * prepared_frame) +{ + auto self = GST_HIP_COMPOSITOR_PAD (pad); + auto priv = self->priv; + + std::lock_guard < std::recursive_mutex > lk (priv->lock); + if (gst_hip_compositor_pad_check_frame_obscured (pad, vagg)) + return TRUE; + + buffer = gst_hip_compositor_upload_frame (GST_HIP_COMPOSITOR (vagg), + pad, buffer); + if (!buffer) + return FALSE; + + if (!gst_video_frame_map (prepared_frame, + &pad->info, buffer, GST_MAP_READ_HIP)) { + GST_ERROR_OBJECT (self, "Couldn't map frame"); + gst_buffer_unref (buffer); + return FALSE; + } + + priv->prepared_buf = buffer; + + return TRUE; +} + +static void +gst_hip_compositor_pad_clean_frame (GstVideoAggregatorPad * pad, + GstVideoAggregator * vagg, GstVideoFrame * prepared_frame) +{ + auto self = GST_HIP_COMPOSITOR_PAD (pad); + auto priv = self->priv; + + if (prepared_frame->buffer) + gst_video_frame_unmap 
(prepared_frame); + + memset (prepared_frame, 0, sizeof (GstVideoFrame)); + gst_clear_buffer (&priv->prepared_buf); +} + +static gboolean +gst_hip_compositor_pad_setup_converter (GstVideoAggregatorPad * pad, + GstVideoAggregator * vagg) +{ + auto self = GST_HIP_COMPOSITOR (vagg); + auto cpad = GST_HIP_COMPOSITOR_PAD (pad); + auto priv = cpad->priv; + gint width, height; + GstVideoInfo *info = &vagg->info; + GstVideoRectangle frame_rect; + gint x_offset, y_offset; + + std::lock_guard < std::recursive_mutex > lk (priv->lock); + if (!priv->conv) { + priv->conv = gst_hip_converter_new (self->device, &pad->info, &vagg->info, + nullptr); + if (!priv->conv) { + GST_ERROR_OBJECT (self, "Couldn't create converter"); + return FALSE; + } + + priv->config_updated = TRUE; + } + + if (!priv->config_updated) + return TRUE; + + gst_hip_compositor_pad_get_output_size (cpad, GST_VIDEO_INFO_PAR_N (info), + GST_VIDEO_INFO_PAR_D (info), &width, &height, &x_offset, &y_offset); + + frame_rect = clamp_rectangle (priv->xpos + x_offset, priv->ypos + y_offset, + width, height, GST_VIDEO_INFO_WIDTH (info), GST_VIDEO_INFO_HEIGHT (info)); + +#ifndef GST_DISABLE_GST_DEBUG + guint zorder = 0; + g_object_get (pad, "zorder", &zorder, nullptr); + + GST_LOG_OBJECT (pad, "Update position, pad-xpos %d, pad-ypos %d, " + "pad-zorder %d, pad-width %d, pad-height %d, in-resolution %dx%d, " + "out-resolution %dx%d, dst-{x,y,width,height} %d-%d-%d-%d", + priv->xpos, priv->ypos, zorder, priv->width, priv->height, + GST_VIDEO_INFO_WIDTH (&pad->info), GST_VIDEO_INFO_HEIGHT (&pad->info), + GST_VIDEO_INFO_WIDTH (info), GST_VIDEO_INFO_HEIGHT (info), + frame_rect.x, frame_rect.y, frame_rect.w, frame_rect.h); +#endif + + g_object_set (priv->conv, "dest-x", frame_rect.x, + "dest-y", frame_rect.y, "dest-width", frame_rect.w, + "dest-height", frame_rect.h, "alpha", priv->alpha, + "blend", priv->op == GST_HIP_COMPOSITOR_OPERATOR_SOURCE ? 
FALSE : TRUE, + nullptr); + priv->config_updated = FALSE; + + return TRUE; +} + +#define GST_HIP_COMPOSITOR_FORMATS \ + "{ I420, YV12, NV12, NV21, P010_10LE, P012_LE, P016_LE, I420_10LE, I420_12LE, Y444, " \ + "Y444_10LE, Y444_12LE, Y444_16LE, BGRA, RGBA, RGBx, BGRx, ARGB, ABGR, RGB, " \ + "BGR, BGR10A2_LE, RGB10A2_LE, Y42B, I422_10LE, I422_12LE, RGBP, BGRP, GBR, " \ + "GBRA, GBR_10LE, GBR_12LE, GBR_16LE, VUYA }" + +static GstStaticPadTemplate sink_template = +GST_STATIC_PAD_TEMPLATE ("sink_%u", GST_PAD_SINK, GST_PAD_REQUEST, + GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE_WITH_FEATURES + (GST_CAPS_FEATURE_MEMORY_HIP_MEMORY, GST_HIP_COMPOSITOR_FORMATS))); + +static GstStaticPadTemplate src_template = +GST_STATIC_PAD_TEMPLATE ("src", GST_PAD_SRC, GST_PAD_ALWAYS, + GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE_WITH_FEATURES + (GST_CAPS_FEATURE_MEMORY_HIP_MEMORY, GST_HIP_COMPOSITOR_FORMATS))); + +static void gst_hip_compositor_child_proxy_init (gpointer g_iface, + gpointer iface_data); +static void gst_hip_compositor_finalize (GObject * object); +static void gst_hip_compositor_set_property (GObject * object, + guint prop_id, const GValue * value, GParamSpec * pspec); +static void gst_hip_compositor_get_property (GObject * object, + guint prop_id, GValue * value, GParamSpec * pspec); + +static GstPad *gst_hip_compositor_request_new_pad (GstElement * element, + GstPadTemplate * templ, const gchar * name, const GstCaps * caps); +static void gst_hip_compositor_release_pad (GstElement * element, GstPad * pad); +static void gst_hip_compositor_set_context (GstElement * element, + GstContext * context); + +static gboolean gst_hip_compositor_start (GstAggregator * agg); +static gboolean gst_hip_compositor_stop (GstAggregator * agg); +static gboolean gst_hip_compositor_sink_query (GstAggregator * agg, + GstAggregatorPad * pad, GstQuery * query); +static gboolean gst_hip_compositor_src_query (GstAggregator * agg, + GstQuery * query); +static GstCaps *gst_hip_compositor_fixate_src_caps 
(GstAggregator * agg, + GstCaps * caps); +static gboolean gst_hip_compositor_negotiated_src_caps (GstAggregator * agg, + GstCaps * caps); +static gboolean +gst_hip_compositor_propose_allocation (GstAggregator * agg, + GstAggregatorPad * pad, GstQuery * decide_query, GstQuery * query); +static gboolean gst_hip_compositor_decide_allocation (GstAggregator * agg, + GstQuery * query); +static GstFlowReturn +gst_hip_compositor_aggregate_frames (GstVideoAggregator * vagg, + GstBuffer * outbuf); + +#define gst_hip_compositor_parent_class parent_class +G_DEFINE_TYPE_WITH_CODE (GstHipCompositor, gst_hip_compositor, + GST_TYPE_VIDEO_AGGREGATOR, G_IMPLEMENT_INTERFACE (GST_TYPE_CHILD_PROXY, + gst_hip_compositor_child_proxy_init)); + +static void +gst_hip_compositor_class_init (GstHipCompositorClass * klass) +{ + auto object_class = G_OBJECT_CLASS (klass); + auto element_class = GST_ELEMENT_CLASS (klass); + auto agg_class = GST_AGGREGATOR_CLASS (klass); + auto vagg_class = GST_VIDEO_AGGREGATOR_CLASS (klass); + + object_class->finalize = gst_hip_compositor_finalize; + object_class->set_property = gst_hip_compositor_set_property; + object_class->get_property = gst_hip_compositor_get_property; + + g_object_class_install_property (object_class, PROP_DEVICE_ID, + g_param_spec_int ("device-id", + "Device ID", "HIP device ID to use (-1 = auto)", + -1, G_MAXINT, DEFAULT_DEVICE_ID, + (GParamFlags) (G_PARAM_READWRITE | GST_PARAM_MUTABLE_READY | + G_PARAM_STATIC_STRINGS))); + g_object_class_install_property (object_class, PROP_VENDOR, + g_param_spec_enum ("vendor", "Vendor", "Vendor type", + GST_TYPE_HIP_VENDOR, GST_HIP_VENDOR_UNKNOWN, + (GParamFlags) (G_PARAM_READWRITE | GST_PARAM_MUTABLE_READY | + G_PARAM_STATIC_STRINGS))); + g_object_class_install_property (object_class, + PROP_IGNORE_INACTIVE_PADS, g_param_spec_boolean ("ignore-inactive-pads", + "Ignore inactive pads", + "Avoid timing out waiting for inactive pads", FALSE, + (GParamFlags) (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); 
+ + element_class->request_new_pad = + GST_DEBUG_FUNCPTR (gst_hip_compositor_request_new_pad); + element_class->release_pad = + GST_DEBUG_FUNCPTR (gst_hip_compositor_release_pad); + element_class->set_context = + GST_DEBUG_FUNCPTR (gst_hip_compositor_set_context); + + agg_class->start = GST_DEBUG_FUNCPTR (gst_hip_compositor_start); + agg_class->stop = GST_DEBUG_FUNCPTR (gst_hip_compositor_stop); + agg_class->sink_query = GST_DEBUG_FUNCPTR (gst_hip_compositor_sink_query); + agg_class->src_query = GST_DEBUG_FUNCPTR (gst_hip_compositor_src_query); + agg_class->fixate_src_caps = + GST_DEBUG_FUNCPTR (gst_hip_compositor_fixate_src_caps); + agg_class->negotiated_src_caps = + GST_DEBUG_FUNCPTR (gst_hip_compositor_negotiated_src_caps); + agg_class->propose_allocation = + GST_DEBUG_FUNCPTR (gst_hip_compositor_propose_allocation); + agg_class->decide_allocation = + GST_DEBUG_FUNCPTR (gst_hip_compositor_decide_allocation); + + vagg_class->aggregate_frames = + GST_DEBUG_FUNCPTR (gst_hip_compositor_aggregate_frames); + + gst_element_class_add_static_pad_template_with_gtype (element_class, + &sink_template, GST_TYPE_HIP_COMPOSITOR_PAD); + gst_element_class_add_static_pad_template_with_gtype (element_class, + &src_template, GST_TYPE_AGGREGATOR_PAD); + + gst_element_class_set_static_metadata (element_class, "HIP Compositor", + "Filter/Editor/Video/Compositor/Hardware", "A HIP compositor", + "Seungha Yang <seungha@centricular.com>"); + + gst_type_mark_as_plugin_api (GST_TYPE_HIP_COMPOSITOR_PAD, + (GstPluginAPIFlags) 0); + + GST_DEBUG_CATEGORY_INIT (gst_hip_compositor_debug, + "hipcompositor", 0, "hipcompositor"); +} + +static void +gst_hip_compositor_init (GstHipCompositor * self) +{ + self->priv = new GstHipCompositorPrivate (); +} + +static void +gst_hip_compositor_finalize (GObject * object) +{ + auto self = GST_HIP_COMPOSITOR (object); + + delete self->priv; + + gst_clear_object (&self->device); + + G_OBJECT_CLASS (parent_class)->finalize (object); +} + +static void 
+gst_hip_compositor_set_property (GObject * object, + guint prop_id, const GValue * value, GParamSpec * pspec) +{ + auto self = GST_HIP_COMPOSITOR (object); + auto priv = self->priv; + + std::lock_guard < std::recursive_mutex > lk (priv->lock); + switch (prop_id) { + case PROP_DEVICE_ID: + priv->device_id = g_value_get_int (value); + break; + case PROP_VENDOR: + priv->vendor = (GstHipVendor) g_value_get_enum (value); + break; + case PROP_IGNORE_INACTIVE_PADS: + gst_aggregator_set_ignore_inactive_pads (GST_AGGREGATOR (object), + g_value_get_boolean (value)); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static void +gst_hip_compositor_get_property (GObject * object, + guint prop_id, GValue * value, GParamSpec * pspec) +{ + auto self = GST_HIP_COMPOSITOR (object); + auto priv = self->priv; + + std::lock_guard < std::recursive_mutex > lk (priv->lock); + switch (prop_id) { + case PROP_DEVICE_ID: + g_value_set_int (value, priv->device_id); + break; + case PROP_VENDOR: + g_value_set_enum (value, priv->vendor); + break; + case PROP_IGNORE_INACTIVE_PADS: + g_value_set_boolean (value, + gst_aggregator_get_ignore_inactive_pads (GST_AGGREGATOR (object))); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static GObject * +gst_hip_compositor_child_proxy_get_child_by_index (GstChildProxy * proxy, + guint index) +{ + auto self = GST_HIP_COMPOSITOR (proxy); + GObject *obj = nullptr; + + GST_OBJECT_LOCK (self); + obj = (GObject *) g_list_nth_data (GST_ELEMENT_CAST (self)->sinkpads, index); + if (obj) + gst_object_ref (obj); + GST_OBJECT_UNLOCK (self); + + return obj; +} + +static guint +gst_hip_compositor_child_proxy_get_children_count (GstChildProxy * proxy) +{ + auto self = GST_HIP_COMPOSITOR (proxy); + guint count = 0; + + GST_OBJECT_LOCK (self); + count = GST_ELEMENT_CAST (self)->numsinkpads; + GST_OBJECT_UNLOCK (self); + GST_INFO_OBJECT (self, "Children Count: %d", 
count); + + return count; +} + +static void +gst_hip_compositor_child_proxy_init (gpointer g_iface, gpointer iface_data) +{ + GstChildProxyInterface *iface = (GstChildProxyInterface *) g_iface; + + iface->get_child_by_index = gst_hip_compositor_child_proxy_get_child_by_index; + iface->get_children_count = gst_hip_compositor_child_proxy_get_children_count; +} + +static GstPad * +gst_hip_compositor_request_new_pad (GstElement * element, + GstPadTemplate * templ, const gchar * name, const GstCaps * caps) +{ + GstPad *pad; + + pad = GST_ELEMENT_CLASS (parent_class)->request_new_pad (element, + templ, name, caps); + + if (!pad) { + GST_DEBUG_OBJECT (element, "could not create/add pad"); + return nullptr; + } + + gst_child_proxy_child_added (GST_CHILD_PROXY (element), G_OBJECT (pad), + GST_OBJECT_NAME (pad)); + + GST_DEBUG_OBJECT (element, "Created new pad %s:%s", GST_DEBUG_PAD_NAME (pad)); + + return pad; +} + +static void +gst_hip_compositor_release_pad (GstElement * element, GstPad * pad) +{ + auto self = GST_HIP_COMPOSITOR (element); + + GST_DEBUG_OBJECT (self, "Releasing pad %s:%s", GST_DEBUG_PAD_NAME (pad)); + + gst_child_proxy_child_removed (GST_CHILD_PROXY (self), G_OBJECT (pad), + GST_OBJECT_NAME (pad)); + + GST_ELEMENT_CLASS (parent_class)->release_pad (element, pad); +} + +static void +gst_hip_compositor_set_context (GstElement * element, GstContext * context) +{ + auto self = GST_HIP_COMPOSITOR (element); + auto priv = self->priv; + + { + std::lock_guard < std::recursive_mutex > lk (priv->lock); + gst_hip_handle_set_context (element, context, priv->vendor, priv->device_id, + &self->device); + } + + GST_ELEMENT_CLASS (parent_class)->set_context (element, context); +} + +static gboolean +gst_hip_compositor_start (GstAggregator * agg) +{ + auto self = GST_HIP_COMPOSITOR (agg); + auto priv = self->priv; + + { + std::lock_guard < std::recursive_mutex > lk (priv->lock); + if (!gst_hip_ensure_element_data (GST_ELEMENT_CAST (self), + priv->vendor, priv->device_id, 
&self->device)) { + GST_ERROR_OBJECT (self, "Failed to get device"); + return FALSE; + } + } + + return GST_AGGREGATOR_CLASS (parent_class)->start (agg); +} + +static gboolean +gst_hip_compositor_stop (GstAggregator * agg) +{ + auto self = GST_HIP_COMPOSITOR (agg); + auto priv = self->priv; + + { + std::lock_guard < std::recursive_mutex > lk (priv->lock); + gst_clear_object (&self->device); + } + + return GST_AGGREGATOR_CLASS (parent_class)->stop (agg); +} + +static GstCaps * +gst_hip_compositor_sink_getcaps (GstPad * pad, GstCaps * filter) +{ + GstCaps *sinkcaps; + GstCaps *template_caps; + GstCaps *filtered_caps; + GstCaps *returned_caps; + + template_caps = gst_pad_get_pad_template_caps (pad); + + sinkcaps = gst_pad_get_current_caps (pad); + if (sinkcaps == nullptr) { + sinkcaps = gst_caps_ref (template_caps); + } else { + sinkcaps = gst_caps_merge (sinkcaps, gst_caps_ref (template_caps)); + } + + if (filter) { + filtered_caps = gst_caps_intersect (sinkcaps, filter); + gst_caps_unref (sinkcaps); + } else { + filtered_caps = sinkcaps; /* pass ownership */ + } + + returned_caps = gst_caps_intersect (filtered_caps, template_caps); + + gst_caps_unref (template_caps); + gst_caps_unref (filtered_caps); + + GST_DEBUG_OBJECT (pad, "returning %" GST_PTR_FORMAT, returned_caps); + + return returned_caps; +} + +static gboolean +gst_hip_compositor_sink_acceptcaps (GstPad * pad, GstCaps * caps) +{ + gboolean ret; + GstCaps *template_caps; + + GST_DEBUG_OBJECT (pad, "try accept caps of %" GST_PTR_FORMAT, caps); + + template_caps = gst_pad_get_pad_template_caps (pad); + template_caps = gst_caps_make_writable (template_caps); + + ret = gst_caps_can_intersect (caps, template_caps); + GST_DEBUG_OBJECT (pad, "%saccepted caps %" GST_PTR_FORMAT, + (ret ? 
"" : "not "), caps); + gst_caps_unref (template_caps); + + return ret; +} + +static gboolean +gst_hip_compositor_sink_query (GstAggregator * agg, + GstAggregatorPad * pad, GstQuery * query) +{ + auto self = GST_HIP_COMPOSITOR (agg); + auto priv = self->priv; + + switch (GST_QUERY_TYPE (query)) { + case GST_QUERY_CONTEXT: + { + std::lock_guard < std::recursive_mutex > lk (priv->lock); + if (gst_hip_handle_context_query (GST_ELEMENT (agg), query, self->device)) { + return TRUE; + } + break; + } + case GST_QUERY_CAPS: + { + GstCaps *filter, *caps; + + gst_query_parse_caps (query, &filter); + caps = gst_hip_compositor_sink_getcaps (GST_PAD (pad), filter); + gst_query_set_caps_result (query, caps); + gst_caps_unref (caps); + return TRUE; + } + case GST_QUERY_ACCEPT_CAPS: + { + GstCaps *caps; + gboolean ret; + + gst_query_parse_accept_caps (query, &caps); + ret = gst_hip_compositor_sink_acceptcaps (GST_PAD (pad), caps); + gst_query_set_accept_caps_result (query, ret); + return TRUE; + } + default: + break; + } + + return GST_AGGREGATOR_CLASS (parent_class)->sink_query (agg, pad, query); +} + +static gboolean +gst_hip_compositor_src_query (GstAggregator * agg, GstQuery * query) +{ + auto self = GST_HIP_COMPOSITOR (agg); + + switch (GST_QUERY_TYPE (query)) { + case GST_QUERY_CONTEXT: + if (gst_hip_handle_context_query (GST_ELEMENT (agg), query, self->device)) { + return TRUE; + } + break; + default: + break; + } + + return GST_AGGREGATOR_CLASS (parent_class)->src_query (agg, query); +} + +static GstCaps * +gst_hip_compositor_fixate_src_caps (GstAggregator * agg, GstCaps * caps) +{ + auto vagg = GST_VIDEO_AGGREGATOR (agg); + GList *l; + gint best_width = -1, best_height = -1; + gint best_fps_n = -1, best_fps_d = -1; + gint par_n, par_d; + gdouble best_fps = 0.; + GstCaps *ret = nullptr; + GstStructure *s; + + ret = gst_caps_make_writable (caps); + + /* we need this to calculate how large to make the output frame */ + s = gst_caps_get_structure (ret, 0); + if 
(gst_structure_has_field (s, "pixel-aspect-ratio")) { + gst_structure_fixate_field_nearest_fraction (s, "pixel-aspect-ratio", 1, 1); + gst_structure_get_fraction (s, "pixel-aspect-ratio", &par_n, &par_d); + } else { + par_n = par_d = 1; + } + + GST_OBJECT_LOCK (vagg); + for (l = GST_ELEMENT (vagg)->sinkpads; l; l = l->next) { + auto vaggpad = GST_VIDEO_AGGREGATOR_PAD (l->data); + auto cpad = GST_HIP_COMPOSITOR_PAD (vaggpad); + auto priv = cpad->priv; + gint this_width, this_height; + gint width, height; + gint fps_n, fps_d; + gdouble cur_fps; + gint x_offset; + gint y_offset; + + fps_n = GST_VIDEO_INFO_FPS_N (&vaggpad->info); + fps_d = GST_VIDEO_INFO_FPS_D (&vaggpad->info); + gst_hip_compositor_pad_get_output_size (cpad, + par_n, par_d, &width, &height, &x_offset, &y_offset); + + if (width == 0 || height == 0) + continue; + + /* {x,y}_offset represent padding size of each top and left area. + * To calculate total resolution, count bottom and right padding area + * as well here */ + this_width = width + MAX (priv->xpos + 2 * x_offset, 0); + this_height = height + MAX (priv->ypos + 2 * y_offset, 0); + + if (best_width < this_width) + best_width = this_width; + if (best_height < this_height) + best_height = this_height; + + if (fps_d == 0) + cur_fps = 0.0; + else + gst_util_fraction_to_double (fps_n, fps_d, &cur_fps); + + if (best_fps < cur_fps) { + best_fps = cur_fps; + best_fps_n = fps_n; + best_fps_d = fps_d; + } + } + GST_OBJECT_UNLOCK (vagg); + + if (best_fps_n <= 0 || best_fps_d <= 0 || best_fps == 0.0) { + best_fps_n = 25; + best_fps_d = 1; + best_fps = 25.0; + } + + if (best_width <= 0 || best_height <= 0) { + best_width = 320; + best_height = 240; + } + + gst_structure_fixate_field_nearest_int (s, "width", best_width); + gst_structure_fixate_field_nearest_int (s, "height", best_height); + gst_structure_fixate_field_nearest_fraction (s, "framerate", best_fps_n, + best_fps_d); + ret = gst_caps_fixate (ret); + + GST_LOG_OBJECT (agg, "Fixated caps %" 
GST_PTR_FORMAT, ret); + + return ret; +} + +static gboolean +gst_hip_compositor_clear_pad_context (GstHipCompositor * self, + GstHipCompositorPad * cpad, gpointer user_data) +{ + auto priv = cpad->priv; + + gst_clear_object (&priv->conv); + + return TRUE; +} + +static gboolean +gst_hip_compositor_negotiated_src_caps (GstAggregator * agg, GstCaps * caps) +{ + gst_element_foreach_sink_pad (GST_ELEMENT_CAST (agg), + (GstElementForeachPadFunc) gst_hip_compositor_clear_pad_context, nullptr); + + return GST_AGGREGATOR_CLASS (parent_class)->negotiated_src_caps (agg, caps); +} + +static gboolean +gst_hip_compositor_propose_allocation (GstAggregator * agg, + GstAggregatorPad * pad, GstQuery * decide_query, GstQuery * query) +{ + auto self = GST_HIP_COMPOSITOR (agg); + GstVideoInfo info; + GstCaps *caps; + + gst_query_parse_allocation (query, &caps, nullptr); + + if (!caps) + return FALSE; + + if (!gst_video_info_from_caps (&info, caps)) + return FALSE; + + if (gst_query_get_n_allocation_pools (query) == 0) { + auto pool = gst_hip_buffer_pool_new (self->device); + + if (!pool) { + GST_ERROR_OBJECT (self, "Failed to create buffer pool"); + return FALSE; + } + + auto config = gst_buffer_pool_get_config (pool); + gst_buffer_pool_config_add_option (config, + GST_BUFFER_POOL_OPTION_VIDEO_META); + + guint size = GST_VIDEO_INFO_SIZE (&info); + gst_buffer_pool_config_set_params (config, caps, size, 0, 0); + + if (!gst_buffer_pool_set_config (pool, config)) { + GST_ERROR_OBJECT (pool, "Couldn't set config"); + gst_object_unref (pool); + + return FALSE; + } + + config = gst_buffer_pool_get_config (pool); + gst_buffer_pool_config_get_params (config, + nullptr, &size, nullptr, nullptr); + gst_structure_free (config); + + gst_query_add_allocation_pool (query, pool, size, 0, 0); + gst_object_unref (pool); + } + + gst_query_add_allocation_meta (query, GST_VIDEO_META_API_TYPE, nullptr); + + return TRUE; +} + +static gboolean +gst_hip_compositor_decide_allocation (GstAggregator * agg, 
GstQuery * query) +{ + auto self = GST_HIP_COMPOSITOR (agg); + GstCaps *caps; + GstBufferPool *pool = nullptr; + guint n, size, min, max; + GstVideoInfo info; + + gst_query_parse_allocation (query, &caps, nullptr); + + if (!caps) { + GST_DEBUG_OBJECT (self, "No output caps"); + return FALSE; + } + + if (!gst_video_info_from_caps (&info, caps)) { + GST_ERROR_OBJECT (self, "Invalid caps"); + return FALSE; + } + + n = gst_query_get_n_allocation_pools (query); + if (n > 0) + gst_query_parse_nth_allocation_pool (query, 0, &pool, &size, &min, &max); + + /* create our own pool */ + if (pool) { + if (!GST_IS_HIP_BUFFER_POOL (pool)) { + GST_DEBUG_OBJECT (self, + "Downstream pool is not hip, will create new one"); + gst_clear_object (&pool); + } else { + auto hpool = GST_HIP_BUFFER_POOL (pool); + if (!gst_hip_device_is_equal (hpool->device, self->device)) { + GST_DEBUG_OBJECT (self, "Different device, will create new one"); + gst_clear_object (&pool); + } + } + } + + size = (guint) info.size; + + if (!pool) { + pool = gst_hip_buffer_pool_new (self->device); + min = 0; + max = 0; + } + + auto config = gst_buffer_pool_get_config (pool); + gst_buffer_pool_config_add_option (config, GST_BUFFER_POOL_OPTION_VIDEO_META); + gst_buffer_pool_config_set_params (config, caps, size, min, max); + + if (!gst_buffer_pool_set_config (pool, config)) { + GST_ERROR_OBJECT (self, "Set config failed"); + gst_object_unref (pool); + return FALSE; + } + + config = gst_buffer_pool_get_config (pool); + gst_buffer_pool_config_get_params (config, nullptr, &size, nullptr, nullptr); + gst_structure_free (config); + + if (n > 0) + gst_query_set_nth_allocation_pool (query, 0, pool, size, min, max); + else + gst_query_add_allocation_pool (query, pool, size, min, max); + + gst_object_unref (pool); + + return TRUE; +} + +static gboolean +gst_hip_compositor_draw_background (GstHipCompositor * self, + GstVideoFrame * frame, hipStream_t stream) +{ + hipError_t ret; + hipDeviceptr_t data; + guint height, stride; + 
guint16 uv_val; + auto vendor = gst_hip_device_get_vendor (self->device); + auto format = GST_VIDEO_FRAME_FORMAT (frame); + + switch (format) { + case GST_VIDEO_FORMAT_I420: + case GST_VIDEO_FORMAT_YV12: + case GST_VIDEO_FORMAT_Y42B: + case GST_VIDEO_FORMAT_Y444: + data = GST_VIDEO_FRAME_PLANE_DATA (frame, 0); + height = GST_VIDEO_FRAME_COMP_HEIGHT (frame, 0); + stride = GST_VIDEO_FRAME_PLANE_STRIDE (frame, 0); + + ret = HipMemsetD8Async (vendor, data, 0, stride * height, stream); + if (!gst_hip_result (ret, vendor)) + return FALSE; + + for (guint i = 1; i < GST_VIDEO_FRAME_N_PLANES (frame); i++) { + data = GST_VIDEO_FRAME_PLANE_DATA (frame, i); + height = GST_VIDEO_FRAME_COMP_HEIGHT (frame, i); + stride = GST_VIDEO_FRAME_PLANE_STRIDE (frame, i); + + ret = HipMemsetD8Async (vendor, data, 128, stride * height, stream); + if (!gst_hip_result (ret, vendor)) + return FALSE; + } + break; + case GST_VIDEO_FORMAT_NV12: + case GST_VIDEO_FORMAT_NV21: + data = GST_VIDEO_FRAME_PLANE_DATA (frame, 0); + height = GST_VIDEO_FRAME_COMP_HEIGHT (frame, 0); + stride = GST_VIDEO_FRAME_PLANE_STRIDE (frame, 0); + + ret = HipMemsetD8Async (vendor, data, 0, stride * height, stream); + if (!gst_hip_result (ret, vendor)) + return FALSE; + + data = GST_VIDEO_FRAME_PLANE_DATA (frame, 1); + stride = GST_VIDEO_FRAME_PLANE_STRIDE (frame, 1); + height = GST_VIDEO_FRAME_COMP_HEIGHT (frame, 1); + ret = HipMemsetD8Async (vendor, data, 128, stride * height, stream); + if (!gst_hip_result (ret, vendor)) + return FALSE; + break; + case GST_VIDEO_FORMAT_P010_10LE: + case GST_VIDEO_FORMAT_P012_LE: + case GST_VIDEO_FORMAT_P016_LE: + data = GST_VIDEO_FRAME_PLANE_DATA (frame, 0); + height = GST_VIDEO_FRAME_COMP_HEIGHT (frame, 0); + stride = GST_VIDEO_FRAME_PLANE_STRIDE (frame, 0); + + ret = HipMemsetD16Async (vendor, data, 0, stride * height / 2, stream); + if (!gst_hip_result (ret, vendor)) + return FALSE; + + data = GST_VIDEO_FRAME_PLANE_DATA (frame, 1); + stride = GST_VIDEO_FRAME_PLANE_STRIDE (frame, 1); 
+ height = GST_VIDEO_FRAME_COMP_HEIGHT (frame, 1); + ret = HipMemsetD16Async (vendor, + data, G_MAXUINT16 / 2, stride * height / 2, stream); + if (!gst_hip_result (ret, vendor)) + return FALSE; + break; + case GST_VIDEO_FORMAT_I420_10LE: + case GST_VIDEO_FORMAT_I420_12LE: + case GST_VIDEO_FORMAT_I422_10LE: + case GST_VIDEO_FORMAT_I422_12LE: + case GST_VIDEO_FORMAT_Y444_10LE: + case GST_VIDEO_FORMAT_Y444_12LE: + case GST_VIDEO_FORMAT_Y444_16LE: + data = GST_VIDEO_FRAME_PLANE_DATA (frame, 0); + height = GST_VIDEO_FRAME_COMP_HEIGHT (frame, 0); + stride = GST_VIDEO_FRAME_PLANE_STRIDE (frame, 0); + + ret = HipMemsetD16Async (vendor, data, 0, stride * height / 2, stream); + if (!gst_hip_result (ret, vendor)) + return FALSE; + + uv_val = (((guint) 1 << GST_VIDEO_FRAME_COMP_DEPTH (frame, 0)) / 2); + for (guint i = 1; i < GST_VIDEO_FRAME_N_PLANES (frame); i++) { + data = GST_VIDEO_FRAME_PLANE_DATA (frame, i); + height = GST_VIDEO_FRAME_COMP_HEIGHT (frame, i); + stride = GST_VIDEO_FRAME_PLANE_STRIDE (frame, i); + + ret = HipMemsetD16Async (vendor, + data, uv_val, stride * height / 2, stream); + if (!gst_hip_result (ret, vendor)) + return FALSE; + } + break; + case GST_VIDEO_FORMAT_RGBA: + case GST_VIDEO_FORMAT_BGRA: + case GST_VIDEO_FORMAT_RGBx: + case GST_VIDEO_FORMAT_BGRx: + case GST_VIDEO_FORMAT_ARGB: + case GST_VIDEO_FORMAT_ABGR: + case GST_VIDEO_FORMAT_RGB10A2_LE: + case GST_VIDEO_FORMAT_BGR10A2_LE: + case GST_VIDEO_FORMAT_VUYA: + { + guint32 packed = 0; + if (format == GST_VIDEO_FORMAT_ARGB || format == GST_VIDEO_FORMAT_ABGR) { + packed = 0xff; + } else if (format == GST_VIDEO_FORMAT_RGB10A2_LE || + format == GST_VIDEO_FORMAT_BGR10A2_LE) { + packed = ((guint32) 0x3) << 30; + } else if (format == GST_VIDEO_FORMAT_VUYA) { + packed = (((guint32) 0xff) << 24) | (((guint32) 0x80) << 8) | + ((guint32) 0x80); + } else { + packed = ((guint32) 0xff) << 24; + } + + data = GST_VIDEO_FRAME_PLANE_DATA (frame, 0); + height = GST_VIDEO_FRAME_HEIGHT (frame); + stride = 
GST_VIDEO_FRAME_PLANE_STRIDE (frame, 0); + + ret = HipMemsetD32Async (vendor, + data, packed, stride * height / 4, stream); + if (!gst_hip_result (ret, vendor)) + return FALSE; + break; + } + case GST_VIDEO_FORMAT_RGB: + case GST_VIDEO_FORMAT_BGR: + data = GST_VIDEO_FRAME_PLANE_DATA (frame, 0); + height = GST_VIDEO_FRAME_HEIGHT (frame); + stride = GST_VIDEO_FRAME_PLANE_STRIDE (frame, 0); + + ret = HipMemsetD8Async (vendor, data, 0, stride * height, stream); + if (!gst_hip_result (ret, vendor)) + return FALSE; + break; + case GST_VIDEO_FORMAT_RGBP: + case GST_VIDEO_FORMAT_BGRP: + case GST_VIDEO_FORMAT_GBR: + case GST_VIDEO_FORMAT_GBRA: + for (guint i = 0; i < GST_VIDEO_FRAME_N_PLANES (frame); i++) { + guint8 val = 0; + if (format == GST_VIDEO_FORMAT_GBRA && i == 3) + val = 255; + + data = GST_VIDEO_FRAME_PLANE_DATA (frame, i); + height = GST_VIDEO_FRAME_COMP_HEIGHT (frame, i); + stride = GST_VIDEO_FRAME_PLANE_STRIDE (frame, i); + + ret = HipMemsetD8Async (vendor, data, val, stride * height, stream); + if (!gst_hip_result (ret, vendor)) + return FALSE; + } + break; + case GST_VIDEO_FORMAT_GBR_10LE: + case GST_VIDEO_FORMAT_GBR_12LE: + case GST_VIDEO_FORMAT_GBR_16LE: + for (guint i = 0; i < GST_VIDEO_FRAME_N_PLANES (frame); i++) { + data = GST_VIDEO_FRAME_PLANE_DATA (frame, i); + height = GST_VIDEO_FRAME_COMP_HEIGHT (frame, i); + stride = GST_VIDEO_FRAME_PLANE_STRIDE (frame, i); + + ret = HipMemsetD16Async (vendor, data, 0, stride * height / 2, stream); + if (!gst_hip_result (ret, vendor)) + return FALSE; + } + break; + default: + g_assert_not_reached (); + return FALSE; + } + + return TRUE; +} + +static GstFlowReturn +gst_hip_compositor_aggregate_frames (GstVideoAggregator * vagg, + GstBuffer * outbuf) +{ + auto self = GST_HIP_COMPOSITOR (vagg); + GList *iter; + GstFlowReturn ret = GST_FLOW_OK; + GstVideoFrame frame; + hipStream_t stream = nullptr; + auto gst_stream = gst_hip_device_get_stream (self->device); + stream = gst_hip_stream_get_handle (gst_stream); + + 
GST_LOG_OBJECT (self, "aggregate"); + + if (!gst_hip_device_set_current (self->device)) { + GST_ERROR_OBJECT (self, "Couldn't set device"); + return GST_FLOW_ERROR; + } + + if (!gst_video_frame_map (&frame, &vagg->info, outbuf, GST_MAP_WRITE_HIP)) { + GST_ERROR_OBJECT (self, "Couldn't map output frame"); + return GST_FLOW_ERROR; + } + + if (!gst_hip_compositor_draw_background (self, &frame, stream)) { + GST_ERROR_OBJECT (self, "Couldn't draw background"); + gst_video_frame_unmap (&frame); + return GST_FLOW_ERROR; + } + + gst_video_frame_unmap (&frame); + + GST_OBJECT_LOCK (self); + for (iter = GST_ELEMENT (vagg)->sinkpads; iter; iter = g_list_next (iter)) { + auto pad = GST_VIDEO_AGGREGATOR_PAD (iter->data); + auto cpad = GST_HIP_COMPOSITOR_PAD (pad); + auto pad_priv = cpad->priv; + auto in_frame = gst_video_aggregator_pad_get_prepared_frame (pad); + + if (!in_frame) + continue; + + if (!gst_hip_compositor_pad_setup_converter (pad, vagg)) { + GST_ERROR_OBJECT (self, "Couldn't setup converter"); + ret = GST_FLOW_ERROR; + break; + } + + if (!gst_hip_converter_convert_frame (pad_priv->conv, in_frame->buffer, + outbuf)) { + GST_ERROR_OBJECT (pad, "Couldn't convert frame"); + ret = GST_FLOW_ERROR; + break; + } + } + GST_OBJECT_UNLOCK (self); + + if (ret == GST_FLOW_OK) + HipStreamSynchronize (gst_hip_device_get_vendor (self->device), stream); + + return ret; +}
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/hip/gsthipcompositor.h
Added
@@ -0,0 +1,37 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/gst.h> +#include <gst/video/video.h> +#include <gst/video/gstvideoaggregator.h> + +G_BEGIN_DECLS + +#define GST_TYPE_HIP_COMPOSITOR_PAD (gst_hip_compositor_pad_get_type()) +G_DECLARE_FINAL_TYPE (GstHipCompositorPad, gst_hip_compositor_pad, + GST, HIP_COMPOSITOR_PAD, GstVideoAggregatorPad) + +#define GST_TYPE_HIP_COMPOSITOR (gst_hip_compositor_get_type()) +G_DECLARE_FINAL_TYPE (GstHipCompositor, gst_hip_compositor, + GST, HIP_COMPOSITOR, GstVideoAggregator) + +G_END_DECLS +
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/hip/gsthipconverter.cpp
Added
@@ -0,0 +1,1917 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gsthip-config.h" + +#include "gsthipconverter.h" +#include <string.h> +#include <mutex> +#include <unordered_map> +#include <string> +#include <vector> +#include "kernel/converter.cu" +#include "kernel/converter-unpack.cu" + +/* *INDENT-OFF* */ +#ifdef HIP_AMD_PRECOMPILED +#include "kernel/converter_hsaco.h" +#else +static std::unordered_map<std::string, const unsigned char *> g_precompiled_hsaco_table; +#endif + +#ifdef HIP_NVIDIA_PRECOMPILED +#include "kernel/converter_ptx.h" +#else +static std::unordered_map<std::string, const char *> g_precompiled_ptx_table; +#endif + +static std::unordered_map<std::string, const char *> g_ptx_table; +static std::mutex g_kernel_table_lock; +/* *INDENT-ON* */ + +GST_DEBUG_CATEGORY_STATIC (gst_hip_converter_debug); +#define GST_CAT_DEFAULT gst_hip_converter_debug + +#define HIP_BLOCK_X 16 +#define HIP_BLOCK_Y 16 +#define DIV_UP(size,block) (((size) + ((block) - 1)) / (block)) + +/* from GstD3D11 */ +struct GstHipColorMatrix +{ + gdouble matrix33; + gdouble offset3; + gdouble min3; + gdouble max3; +}; + +static gchar * 
+gst_hip_dump_color_matrix (GstHipColorMatrix * matrix) +{ + /* *INDENT-OFF* */ + static const gchar format = + "MATRIX\n" + "|% .6f, % .6f, % .6f|\n" + "|% .6f, % .6f, % .6f|\n" + "|% .6f, % .6f, % .6f|\n" + "OFFSET\n" + "|% .6f, % .6f, % .6f|\n" + "MIN\n" + "|% .6f, % .6f, % .6f|\n" + "MAX\n" + "|% .6f, % .6f, % .6f|"; + /* *INDENT-ON* */ + + return g_strdup_printf (format, + matrix->matrix00, matrix->matrix01, matrix->matrix02, + matrix->matrix10, matrix->matrix11, matrix->matrix12, + matrix->matrix20, matrix->matrix21, matrix->matrix22, + matrix->offset0, matrix->offset1, matrix->offset2, + matrix->min0, matrix->min1, matrix->min2, + matrix->max0, matrix->max1, matrix->max2); +} + +static void +color_matrix_copy (GstHipColorMatrix * dst, const GstHipColorMatrix * src) +{ + for (guint i = 0; i < 3; i++) { + for (guint j = 0; j < 3; j++) { + dst->matrixij = src->matrixij; + } + } +} + +static void +color_matrix_multiply (GstHipColorMatrix * dst, GstHipColorMatrix * a, + GstHipColorMatrix * b) +{ + GstHipColorMatrix tmp; + + for (guint i = 0; i < 3; i++) { + for (guint j = 0; j < 3; j++) { + gdouble val = 0; + for (guint k = 0; k < 3; k++) { + val += a->matrixik * b->matrixkj; + } + + tmp.matrixij = val; + } + } + + color_matrix_copy (dst, &tmp); +} + +static void +color_matrix_identity (GstHipColorMatrix * m) +{ + for (guint i = 0; i < 3; i++) { + for (guint j = 0; j < 3; j++) { + if (i == j) + m->matrixij = 1.0; + else + m->matrixij = 0; + } + } +} + +static gboolean +gst_hip_color_range_adjust_matrix_unorm (const GstVideoInfo * in_info, + const GstVideoInfo * out_info, GstHipColorMatrix * matrix) +{ + gboolean in_rgb, out_rgb; + gint in_offsetGST_VIDEO_MAX_COMPONENTS; + gint in_scaleGST_VIDEO_MAX_COMPONENTS; + gint out_offsetGST_VIDEO_MAX_COMPONENTS; + gint out_scaleGST_VIDEO_MAX_COMPONENTS; + GstVideoColorRange in_range; + GstVideoColorRange out_range; + gdouble src_fullscale, dst_fullscale; + + memset (matrix, 0, sizeof (GstHipColorMatrix)); + for (guint i = 
0; i < 3; i++) { + matrix->matrixii = 1.0; + matrix->matrixii = 1.0; + matrix->matrixii = 1.0; + matrix->maxi = 1.0; + } + + in_rgb = GST_VIDEO_INFO_IS_RGB (in_info); + out_rgb = GST_VIDEO_INFO_IS_RGB (out_info); + + if (in_rgb != out_rgb) { + GST_WARNING ("Invalid format conversion"); + return FALSE; + } + + in_range = in_info->colorimetry.range; + out_range = out_info->colorimetry.range; + + if (in_range == GST_VIDEO_COLOR_RANGE_UNKNOWN) { + GST_WARNING ("Unknown input color range"); + if (in_rgb || GST_VIDEO_INFO_IS_GRAY (in_info)) + in_range = GST_VIDEO_COLOR_RANGE_0_255; + else + in_range = GST_VIDEO_COLOR_RANGE_16_235; + } + + if (out_range == GST_VIDEO_COLOR_RANGE_UNKNOWN) { + GST_WARNING ("Unknown output color range"); + if (out_rgb || GST_VIDEO_INFO_IS_GRAY (out_info)) + out_range = GST_VIDEO_COLOR_RANGE_0_255; + else + out_range = GST_VIDEO_COLOR_RANGE_16_235; + } + + src_fullscale = (gdouble) ((1 << in_info->finfo->depth0) - 1); + dst_fullscale = (gdouble) ((1 << out_info->finfo->depth0) - 1); + + gst_video_color_range_offsets (in_range, in_info->finfo, in_offset, in_scale); + gst_video_color_range_offsets (out_range, + out_info->finfo, out_offset, out_scale); + + matrix->min0 = matrix->min1 = matrix->min2 = + (gdouble) out_offset0 / dst_fullscale; + + matrix->max0 = (out_scale0 + out_offset0) / dst_fullscale; + matrix->max1 = matrix->max2 = + (out_scale1 + out_offset0) / dst_fullscale; + + if (in_info->colorimetry.range == out_info->colorimetry.range) { + GST_DEBUG ("Same color range"); + return TRUE; + } + + /* Formula + * + * 1) Scales and offset compensates input to 0..1 range + * SRC_NORMi = (srci * src_fullscale - in_offseti) / in_scalei + * = (srci * src_fullscale / in_scalei) - in_offseti / in_scalei + * + * 2) Reverse to output UNIT scale + * DST_UINTi = SRC_NORMi * out_scalei + out_offseti + * = srci * src_fullscale * out_scalei / in_scalei + * - in_offseti * out_scalei / in_scalei + * + out_offseti + * + * 3) Back to 0..1 scale + * dsti = 
+   *          DST_UINT[i] / dst_fullscale
+   *        = COEFF[i] * src[i] + OFF[i]
+   * where
+   *             src_fullscale * out_scale[i]
+   * COEFF[i] = ------------------------------
+   *             dst_fullscale * in_scale[i]
+   *
+   *            out_offset[i]     in_offset[i] * out_scale[i]
+   * OFF[i] =  --------------- - ------------------------------
+   *            dst_fullscale     dst_fullscale * in_scale[i]
+   */
+  for (guint i = 0; i < 3; i++) {
+    matrix->matrix[i][i] = (src_fullscale * out_scale[i]) /
+        (dst_fullscale * in_scale[i]);
+    matrix->offset[i] = (out_offset[i] / dst_fullscale) -
+        ((gdouble) in_offset[i] * out_scale[i] / (dst_fullscale * in_scale[i]));
+  }
+
+  return TRUE;
+}
+
+static gboolean
+gst_hip_yuv_to_rgb_matrix_unorm (const GstVideoInfo * in_yuv_info,
+    const GstVideoInfo * out_rgb_info, GstHipColorMatrix * matrix)
+{
+  gint offset[4], scale[4];
+  gdouble Kr, Kb, Kg;
+
+  /*
+   * <Formula>
+   *
+   * Input: Unsigned normalized Y'CbCr(unorm), [0.0..1.0] range
+   * Output: Unsigned normalized non-linear R'G'B'(unorm), [0.0..1.0] range
+   *
+   * 1) Y'CbCr(unorm) to scaled Y'CbCr
+   * | Y' |     | Y'(unorm) |
+   * | Cb | = S | Cb(unorm) |
+   * | Cr |     | Cr(unorm) |
+   * where S = (2 ^ bitdepth) - 1
+   *
+   * 2) Y'CbCr to YPbPr
+   * Y  = (Y' - offsetY)    / scaleY
+   * Pb = (Cb - offsetCbCr) / scaleCbCr
+   * Pr = (Cr - offsetCbCr) / scaleCbCr
+   * =>
+   * Y  = Y'(unorm) * Sy  + Oy
+   * Pb = Cb(unorm) * Suv + Ouv
+   * Pr = Cr(unorm) * Suv + Ouv
+   * where
+   * Sy  = S / scaleY
+   * Suv = S / scaleCbCr
+   * Oy  = -(offsetY / scaleY)
+   * Ouv = -(offsetCbCr / scaleCbCr)
+   *
+   * 3) YPbPr to R'G'B'
+   * | R' |     | Y  |
+   * | G' | = M | Pb |
+   * | B' |     | Pr |
+   * where
+   *     | vecR |
+   * M = | vecG |
+   *     | vecB |
+   * vecR = | 1, 0                   , 2(1 - Kr)            |
+   * vecG = | 1, -(Kb/Kg) * 2(1 - Kb), -(Kr/Kg) * 2(1 - Kr) |
+   * vecB = | 1, 2(1 - Kb)           , 0                    |
+   * =>
+   * R' = dot(vecR, (Syuv * Y'CbCr(unorm))) + dot(vecR, Offset)
+   * G' = dot(vecG, (Syuv * Y'CbCr(unorm))) + dot(vecG, Offset)
+   * B' = dot(vecB, (Syuv * Y'CbCr(unorm))) + dot(vecB, Offset)
+   * where
+   *        | Sy, 0,   0   |
+   * Syuv = | 0,  Suv, 0   |
+   *        | 0,  0,   Suv |
+   *
+   *          | Oy  |
+   * Offset = | Ouv |
+   *          | Ouv |
+   *
+   * 4) YUV -> RGB matrix
+   * | R' |            | Y'(unorm) |   | offsetA |
+   * | G' | = Matrix * | Cb(unorm) | + | offsetB |
+   * | B' |            | Cr(unorm) |   | offsetC |
+   *
+   * where
+   *          | vecR |
+   * Matrix = | vecG | * Syuv
+   *          | vecB |
+   *
+   * offsetA = dot(vecR, Offset)
+   * offsetB = dot(vecG, Offset)
+   * offsetC = dot(vecB, Offset)
+   *
+   * 5) Consider 16-235 scale RGB
+   * RGBfull(0..255) -> RGBstudio(16..235) matrix is represented by
+   * | Rs |      | Rf |   | Or |
+   * | Gs | = Ms | Gf | + | Og |
+   * | Bs |      | Bf |   | Ob |
+   *
+   * Combining all matrices into
+   * | Rs |                   | Y'(unorm) |   | offsetA |       | Or |
+   * | Gs | = Ms * ( Matrix * | Cb(unorm) | + | offsetB | )  +  | Og |
+   * | Bs |                   | Cr(unorm) |   | offsetC |       | Ob |
+   *
+   *                        | Y'(unorm) |      | offsetA |   | Or |
+   *        = Ms * Matrix * | Cb(unorm) | + Ms | offsetB | + | Og |
+   *                        | Cr(unorm) |      | offsetC |   | Ob |
+   */
+
+  memset (matrix, 0, sizeof (GstHipColorMatrix));
+  for (guint i = 0; i < 3; i++)
+    matrix->max[i] = 1.0;
+
+  gst_video_color_range_offsets (in_yuv_info->colorimetry.range,
+      in_yuv_info->finfo, offset, scale);
+
+  if (gst_video_color_matrix_get_Kr_Kb (in_yuv_info->colorimetry.matrix,
+          &Kr, &Kb)) {
+    guint S;
+    gdouble Sy, Suv;
+    gdouble Oy, Ouv;
+    gdouble vecR[3], vecG[3], vecB[3];
+
+    Kg = 1.0 - Kr - Kb;
+
+    vecR[0] = 1.0;
+    vecR[1] = 0;
+    vecR[2] = 2 * (1 - Kr);
+
+    vecG[0] = 1.0;
+    vecG[1] = -(Kb / Kg) * 2 * (1 - Kb);
+    vecG[2] = -(Kr / Kg) * 2 * (1 - Kr);
+
+    vecB[0] = 1.0;
+    vecB[1] = 2 * (1 - Kb);
+    vecB[2] = 0;
+
+    /* Assume all components have the same bitdepth */
+    S = (1 << in_yuv_info->finfo->depth[0]) - 1;
+    Sy = (gdouble) S / scale[0];
+    Suv = (gdouble) S / scale[1];
+    Oy = -((gdouble) offset[0] / scale[0]);
+    Ouv = -((gdouble) offset[1] / scale[1]);
+
+    matrix->matrix[0][0] = Sy * vecR[0];
+    matrix->matrix[1][0] = Sy * vecG[0];
+    matrix->matrix[2][0] = Sy * vecB[0];
+
+    matrix->matrix[0][1] = Suv * vecR[1];
+    matrix->matrix[1][1] = Suv * vecG[1];
+    matrix->matrix[2][1] = Suv * vecB[1];
+
+    matrix->matrix[0][2] = Suv * vecR[2];
+    matrix->matrix[1][2] = Suv * vecG[2];
+    matrix->matrix[2][2] = Suv * vecB[2];
+
+    matrix->offset[0] = vecR[0] * Oy + vecR[1] * Ouv + vecR[2] * Ouv;
+    matrix->offset[1] = vecG[0] * Oy + vecG[1] * Ouv + vecG[2] * Ouv;
+    matrix->offset[2] = vecB[0] * Oy + vecB[1] * Ouv + vecB[2] * Ouv;
+
+    /* Apply RGB range scale matrix */
+    if (out_rgb_info->colorimetry.range == GST_VIDEO_COLOR_RANGE_16_235) {
+      GstHipColorMatrix scale_matrix, rst;
+      GstVideoInfo full_rgb = *out_rgb_info;
+
+      full_rgb.colorimetry.range = GST_VIDEO_COLOR_RANGE_0_255;
+
+      if (gst_hip_color_range_adjust_matrix_unorm (&full_rgb,
+              out_rgb_info, &scale_matrix)) {
+        /* Ms * Matrix */
+        color_matrix_multiply (&rst, &scale_matrix, matrix);
+
+        /* Ms * transform offsets */
+        for (guint i = 0; i < 3; i++) {
+          gdouble val = 0;
+          for (guint j = 0; j < 3; j++) {
+            val += scale_matrix.matrix[i][j] * matrix->offset[j];
+          }
+          rst.offset[i] = val + scale_matrix.offset[i];
+        }
+
+        /* copy back to output matrix */
+        for (guint i = 0; i < 3; i++) {
+          for (guint j = 0; j < 3; j++) {
+            matrix->matrix[i][j] = rst.matrix[i][j];
+          }
+          matrix->offset[i] = rst.offset[i];
+          matrix->min[i] = scale_matrix.min[i];
+          matrix->max[i] = scale_matrix.max[i];
+        }
+      }
+    }
+  } else {
+    /* Unknown matrix */
+    matrix->matrix[0][0] = 1.0;
+    matrix->matrix[1][1] = 1.0;
+    matrix->matrix[2][2] = 1.0;
+  }
+
+  return TRUE;
+}
+
+static gboolean
+gst_hip_rgb_to_yuv_matrix_unorm (const GstVideoInfo * in_rgb_info,
+    const GstVideoInfo * out_yuv_info, GstHipColorMatrix * matrix)
+{
+  gint offset[4], scale[4];
+  gdouble Kr, Kb, Kg;
+
+  /*
+   * <Formula>
+   *
+   * Input: Unsigned normalized non-linear R'G'B'(unorm), [0.0..1.0] range
+   * Output: Unsigned normalized Y'CbCr(unorm), [0.0..1.0] range
+   *
+   * 1) R'G'B' to YPbPr
+   * | Y  |     | R' |
+   * | Pb | = M | G' |
+   * | Pr |     | B' |
+   * where
+   *     | vecY |
+   * M = | vecU |
+   *     | vecV |
+   * vecY = | Kr            , Kg            , Kb             |
+   * vecU = | -0.5*Kr/(1-Kb), -0.5*Kg/(1-Kb), 0.5            |
+   * vecV = | 0.5           , -0.5*Kg/(1-Kr), -0.5*Kb/(1-Kr) |
+   *
+   * 2) YPbPr to Y'CbCr(unorm)
+   * Y'(unorm) = (Y * scaleY + offsetY) / S
+   * Cb(unorm) = (Pb * scaleCbCr +
+   *             offsetCbCr) / S
+   * Cr(unorm) = (Pr * scaleCbCr + offsetCbCr) / S
+   * =>
+   * Y'(unorm) = (Y * scaleY / S) + (offsetY / S)
+   * Cb(unorm) = (Pb * scaleCbCr / S) + (offsetCbCr / S)
+   * Cr(unorm) = (Pr * scaleCbCr / S) + (offsetCbCr / S)
+   * where S = (2 ^ bitdepth) - 1
+   *
+   * 3) RGB -> YUV matrix
+   * | Y'(unorm) |            | R' |   | offsetA |
+   * | Cb(unorm) | = Matrix * | G' | + | offsetB |
+   * | Cr(unorm) |            | B' |   | offsetC |
+   *
+   * where
+   *          | (scaleY/S)    * vecY |
+   * Matrix = | (scaleCbCr/S) * vecU |
+   *          | (scaleCbCr/S) * vecV |
+   *
+   * offsetA = offsetY / S
+   * offsetB = offsetCbCr / S
+   * offsetC = offsetCbCr / S
+   *
+   * 4) Consider 16-235 scale RGB
+   * RGBstudio(16..235) -> RGBfull(0..255) matrix is represented by
+   * | Rf |      | Rs |   | Or |
+   * | Gf | = Ms | Gs | + | Og |
+   * | Bf |      | Bs |   | Ob |
+   *
+   * Combining all matrices into
+   * | Y'(unorm) |                  | Rs |   | Or |       | offsetA |
+   * | Cb(unorm) | = Matrix * ( Ms  | Gs | + | Og | )  +  | offsetB |
+   * | Cr(unorm) |                  | Bs |   | Ob |       | offsetC |
+   *
+   *                      | Rs |            | Or |   | offsetA |
+   *        = Matrix * Ms | Gs | + Matrix * | Og | + | offsetB |
+   *                      | Bs |            | Ob |   | offsetC |
+   */
+
+  memset (matrix, 0, sizeof (GstHipColorMatrix));
+  for (guint i = 0; i < 3; i++)
+    matrix->max[i] = 1.0;
+
+  gst_video_color_range_offsets (out_yuv_info->colorimetry.range,
+      out_yuv_info->finfo, offset, scale);
+
+  if (gst_video_color_matrix_get_Kr_Kb (out_yuv_info->colorimetry.matrix,
+          &Kr, &Kb)) {
+    guint S;
+    gdouble Sy, Suv;
+    gdouble Oy, Ouv;
+    gdouble vecY[3], vecU[3], vecV[3];
+
+    Kg = 1.0 - Kr - Kb;
+
+    vecY[0] = Kr;
+    vecY[1] = Kg;
+    vecY[2] = Kb;
+
+    vecU[0] = -0.5 * Kr / (1 - Kb);
+    vecU[1] = -0.5 * Kg / (1 - Kb);
+    vecU[2] = 0.5;
+
+    vecV[0] = 0.5;
+    vecV[1] = -0.5 * Kg / (1 - Kr);
+    vecV[2] = -0.5 * Kb / (1 - Kr);
+
+    /* Assume all components have the same bitdepth */
+    S = (1 << out_yuv_info->finfo->depth[0]) - 1;
+    Sy = (gdouble) scale[0] / S;
+    Suv = (gdouble) scale[1] / S;
+    Oy = (gdouble) offset[0] / S;
+    Ouv = (gdouble) offset[1] / S;
+
+    for (guint i = 0; i < 3; i++) {
+      matrix->matrix[0][i] = Sy * vecY[i];
+      matrix->matrix[1][i] = Suv * vecU[i];
+      matrix->matrix[2][i] = Suv * vecV[i];
+    }
+
+    matrix->offset[0] = Oy;
+    matrix->offset[1] = Ouv;
+    matrix->offset[2] = Ouv;
+
+    matrix->min[0] = Oy;
+    matrix->min[1] = Oy;
+    matrix->min[2] = Oy;
+
+    matrix->max[0] = ((gdouble) scale[0] + offset[0]) / S;
+    matrix->max[1] = ((gdouble) scale[1] + offset[0]) / S;
+    matrix->max[2] = ((gdouble) scale[1] + offset[0]) / S;
+
+    /* Apply RGB range scale matrix */
+    if (in_rgb_info->colorimetry.range == GST_VIDEO_COLOR_RANGE_16_235) {
+      GstHipColorMatrix scale_matrix, rst;
+      GstVideoInfo full_rgb = *in_rgb_info;
+
+      full_rgb.colorimetry.range = GST_VIDEO_COLOR_RANGE_0_255;
+
+      if (gst_hip_color_range_adjust_matrix_unorm (in_rgb_info,
+              &full_rgb, &scale_matrix)) {
+        /* Matrix * Ms */
+        color_matrix_multiply (&rst, matrix, &scale_matrix);
+
+        /* Matrix * scale offsets */
+        for (guint i = 0; i < 3; i++) {
+          gdouble val = 0;
+          for (guint j = 0; j < 3; j++) {
+            val += matrix->matrix[i][j] * scale_matrix.offset[j];
+          }
+          rst.offset[i] = val + matrix->offset[i];
+        }
+
+        /* copy back to output matrix */
+        for (guint i = 0; i < 3; i++) {
+          for (guint j = 0; j < 3; j++) {
+            matrix->matrix[i][j] = rst.matrix[i][j];
+          }
+          matrix->offset[i] = rst.offset[i];
+        }
+      }
+    }
+  } else {
+    /* Unknown matrix */
+    matrix->matrix[0][0] = 1.0;
+    matrix->matrix[1][1] = 1.0;
+    matrix->matrix[2][2] = 1.0;
+  }
+
+  return TRUE;
+}
+
+struct ColorMatrix
+{
+  float coeffX[3];
+  float coeffY[3];
+  float coeffZ[3];
+  float offset[3];
+  float min[3];
+  float max[3];
+};
+
+struct ConstBuffer
+{
+  ColorMatrix convert_matrix;
+  int width;
+  int height;
+  int left;
+  int top;
+  int right;
+  int bottom;
+  int view_width;
+  int view_height;
+  float border_x;
+  float border_y;
+  float border_z;
+  float border_w;
+  int fill_border;
+  int video_direction;
+  float alpha;
+  int do_blend;
+  int do_convert;
+};
+
+#define COLOR_SPACE_IDENTITY "color_space_identity"
+#define COLOR_SPACE_CONVERT "color_space_convert"
+
+#define SAMPLE_YUV_PLANAR "I420"
+#define SAMPLE_YV12 "YV12"
+#define SAMPLE_YUV_PLANAR_10BIS "I420_10"
+#define SAMPLE_YUV_PLANAR_12BIS "I420_12"
+#define SAMPLE_SEMI_PLANAR "NV12"
+#define SAMPLE_SEMI_PLANAR_SWAP "NV21"
+#define SAMPLE_RGBA "RGBA"
+#define SAMPLE_BGRA "BGRA"
+#define SAMPLE_RGBx "RGBx"
+#define SAMPLE_BGRx "BGRx"
+#define SAMPLE_ARGB "ARGB"
+/* same as ARGB */
+#define SAMPLE_ABGR "ABGR"
+#define SAMPLE_RGBP "RGBP"
+#define SAMPLE_BGRP "BGRP"
+#define SAMPLE_GBR "GBR"
+#define SAMPLE_GBR_10 "GBR_10"
+#define SAMPLE_GBR_12 "GBR_12"
+#define SAMPLE_GBRA "GBRA"
+#define SAMPLE_VUYA "VUYA"
+
+typedef struct _TextureFormat
+{
+  GstVideoFormat format;
+  hipArray_Format array_format[GST_VIDEO_MAX_COMPONENTS];
+  guint channels[GST_VIDEO_MAX_COMPONENTS];
+  const gchar *sample_func;
+} TextureFormat;
+
+#define HIP_AD_FORMAT_NONE ((hipArray_Format)0)
+#define MAKE_FORMAT_YUV_PLANAR(f,cf,sample_func) \
+  { GST_VIDEO_FORMAT_ ##f, { HIP_AD_FORMAT_ ##cf, HIP_AD_FORMAT_ ##cf, \
+      HIP_AD_FORMAT_ ##cf, HIP_AD_FORMAT_NONE }, {1, 1, 1, 0}, sample_func }
+#define MAKE_FORMAT_YUV_SEMI_PLANAR(f,cf,sample_func) \
+  { GST_VIDEO_FORMAT_ ##f, { HIP_AD_FORMAT_ ##cf, HIP_AD_FORMAT_ ##cf, \
+      HIP_AD_FORMAT_NONE, HIP_AD_FORMAT_NONE }, {1, 2, 0, 0}, sample_func }
+#define MAKE_FORMAT_RGB(f,cf,sample_func) \
+  { GST_VIDEO_FORMAT_ ##f, { HIP_AD_FORMAT_ ##cf, HIP_AD_FORMAT_NONE, \
+      HIP_AD_FORMAT_NONE, HIP_AD_FORMAT_NONE }, {4, 0, 0, 0}, sample_func }
+#define MAKE_FORMAT_RGBP(f,cf,sample_func) \
+  { GST_VIDEO_FORMAT_ ##f, { HIP_AD_FORMAT_ ##cf, HIP_AD_FORMAT_ ##cf, \
+      HIP_AD_FORMAT_ ##cf, HIP_AD_FORMAT_NONE }, {1, 1, 1, 0}, sample_func }
+#define MAKE_FORMAT_RGBAP(f,cf,sample_func) \
+  { GST_VIDEO_FORMAT_ ##f, { HIP_AD_FORMAT_ ##cf, HIP_AD_FORMAT_ ##cf, \
+      HIP_AD_FORMAT_ ##cf, HIP_AD_FORMAT_ ##cf }, {1, 1, 1, 1}, sample_func }
+
+static const TextureFormat format_map[] = {
+  MAKE_FORMAT_YUV_PLANAR (I420, UNSIGNED_INT8, SAMPLE_YUV_PLANAR),
+  MAKE_FORMAT_YUV_PLANAR (YV12, UNSIGNED_INT8, SAMPLE_YV12),
+  MAKE_FORMAT_YUV_SEMI_PLANAR (NV12,
+      UNSIGNED_INT8, SAMPLE_SEMI_PLANAR),
+  MAKE_FORMAT_YUV_SEMI_PLANAR (NV21, UNSIGNED_INT8, SAMPLE_SEMI_PLANAR_SWAP),
+  MAKE_FORMAT_YUV_SEMI_PLANAR (P010_10LE, UNSIGNED_INT16, SAMPLE_SEMI_PLANAR),
+  MAKE_FORMAT_YUV_SEMI_PLANAR (P012_LE, UNSIGNED_INT16, SAMPLE_SEMI_PLANAR),
+  MAKE_FORMAT_YUV_SEMI_PLANAR (P016_LE, UNSIGNED_INT16, SAMPLE_SEMI_PLANAR),
+  MAKE_FORMAT_YUV_PLANAR (I420_10LE, UNSIGNED_INT16, SAMPLE_YUV_PLANAR_10BIS),
+  MAKE_FORMAT_YUV_PLANAR (I420_12LE, UNSIGNED_INT16, SAMPLE_YUV_PLANAR_12BIS),
+  MAKE_FORMAT_YUV_PLANAR (Y444, UNSIGNED_INT8, SAMPLE_YUV_PLANAR),
+  MAKE_FORMAT_YUV_PLANAR (Y444_10LE, UNSIGNED_INT16, SAMPLE_YUV_PLANAR_10BIS),
+  MAKE_FORMAT_YUV_PLANAR (Y444_12LE, UNSIGNED_INT16, SAMPLE_YUV_PLANAR_12BIS),
+  MAKE_FORMAT_YUV_PLANAR (Y444_16LE, UNSIGNED_INT16, SAMPLE_YUV_PLANAR),
+  MAKE_FORMAT_RGB (RGBA, UNSIGNED_INT8, SAMPLE_RGBA),
+  MAKE_FORMAT_RGB (BGRA, UNSIGNED_INT8, SAMPLE_BGRA),
+  MAKE_FORMAT_RGB (RGBx, UNSIGNED_INT8, SAMPLE_RGBx),
+  MAKE_FORMAT_RGB (BGRx, UNSIGNED_INT8, SAMPLE_BGRx),
+  MAKE_FORMAT_RGB (ARGB, UNSIGNED_INT8, SAMPLE_ARGB),
+  MAKE_FORMAT_RGB (ARGB64, UNSIGNED_INT16, SAMPLE_ARGB),
+  MAKE_FORMAT_RGB (ABGR, UNSIGNED_INT8, SAMPLE_ABGR),
+  MAKE_FORMAT_YUV_PLANAR (Y42B, UNSIGNED_INT8, SAMPLE_YUV_PLANAR),
+  MAKE_FORMAT_YUV_PLANAR (I422_10LE, UNSIGNED_INT16, SAMPLE_YUV_PLANAR_10BIS),
+  MAKE_FORMAT_YUV_PLANAR (I422_12LE, UNSIGNED_INT16, SAMPLE_YUV_PLANAR_12BIS),
+  MAKE_FORMAT_RGBP (RGBP, UNSIGNED_INT8, SAMPLE_RGBP),
+  MAKE_FORMAT_RGBP (BGRP, UNSIGNED_INT8, SAMPLE_BGRP),
+  MAKE_FORMAT_RGBP (GBR, UNSIGNED_INT8, SAMPLE_GBR),
+  MAKE_FORMAT_RGBP (GBR_10LE, UNSIGNED_INT16, SAMPLE_GBR_10),
+  MAKE_FORMAT_RGBP (GBR_12LE, UNSIGNED_INT16, SAMPLE_GBR_12),
+  MAKE_FORMAT_RGBP (GBR_16LE, UNSIGNED_INT16, SAMPLE_GBR),
+  MAKE_FORMAT_RGBAP (GBRA, UNSIGNED_INT8, SAMPLE_GBRA),
+  MAKE_FORMAT_RGB (VUYA, UNSIGNED_INT8, SAMPLE_VUYA),
+};
+
+struct TextureBuffer
+{
+  gpointer ptr = nullptr;
+  gsize stride = 0;
+  hipTextureObject_t texture = nullptr;
+};
+
+enum
+{
+  PROP_0,
+  PROP_DEST_X,
+  PROP_DEST_Y,
+  PROP_DEST_WIDTH,
+  PROP_DEST_HEIGHT,
+  PROP_FILL_BORDER,
+  PROP_VIDEO_DIRECTION,
+  PROP_ALPHA,
+  PROP_BLEND,
+};
+
+struct _GstHipConverterPrivate
+{
+  _GstHipConverterPrivate ()
+  {
+    config = gst_structure_new_empty ("converter-config");
+    const_buf = g_new0 (ConstBuffer, 1);
+  }
+
+  ~_GstHipConverterPrivate ()
+  {
+    if (config)
+      gst_structure_free (config);
+    g_free (const_buf);
+  }
+
+  std::mutex lock;
+
+  GstHipVendor vendor;
+  GstVideoInfo in_info;
+  GstVideoInfo out_info;
+  hipStream_t stream = nullptr;
+
+  GstStructure *config = nullptr;
+
+  GstVideoInfo texture_info;
+  const TextureFormat *texture_fmt;
+  gint tex_align;
+
+  TextureBuffer fallback_buffer[GST_VIDEO_MAX_COMPONENTS];
+  TextureBuffer unpack_buffer;
+  ConstBuffer *const_buf = nullptr;
+
+  hipModule_t main_module = nullptr;
+  hipFunction_t main_func = nullptr;
+
+  hipModule_t unpack_module = nullptr;
+  hipFunction_t unpack_func = nullptr;
+
+  gboolean update_const_buf = TRUE;
+
+  /* properties */
+  gint dest_x = 0;
+  gint dest_y = 0;
+  gint dest_width = 0;
+  gint dest_height = 0;
+  GstVideoOrientationMethod video_direction = GST_VIDEO_ORIENTATION_IDENTITY;
+  gboolean fill_border = FALSE;
+  HIPfilter_mode filter_mode = HIP_TR_FILTER_MODE_LINEAR;
+  gdouble alpha = 1.0;
+  gboolean blend = FALSE;
+};
+
+static void gst_hip_converter_dispose (GObject * object);
+static void gst_hip_converter_finalize (GObject * object);
+static void gst_hip_converter_set_property (GObject * object, guint prop_id,
+    const GValue * value, GParamSpec * pspec);
+static void gst_hip_converter_get_property (GObject * object, guint prop_id,
+    GValue * value, GParamSpec * pspec);
+
+#define gst_hip_converter_parent_class parent_class
+G_DEFINE_TYPE (GstHipConverter, gst_hip_converter, GST_TYPE_OBJECT);
+
+static void
+gst_hip_converter_class_init (GstHipConverterClass * klass)
+{
+  auto object_class = G_OBJECT_CLASS (klass);
+  auto param_flags = (GParamFlags) (G_PARAM_READWRITE |
+      G_PARAM_STATIC_STRINGS);
+
+  object_class->dispose = gst_hip_converter_dispose;
+  object_class->finalize = gst_hip_converter_finalize;
+  object_class->set_property = gst_hip_converter_set_property;
+  object_class->get_property = gst_hip_converter_get_property;
+
+  g_object_class_install_property (object_class, PROP_DEST_X,
+      g_param_spec_int ("dest-x", "Dest-X",
+          "x position in the destination frame", G_MININT, G_MAXINT, 0,
+          param_flags));
+  g_object_class_install_property (object_class, PROP_DEST_Y,
+      g_param_spec_int ("dest-y", "Dest-Y",
+          "y position in the destination frame", G_MININT, G_MAXINT, 0,
+          param_flags));
+  g_object_class_install_property (object_class, PROP_DEST_WIDTH,
+      g_param_spec_int ("dest-width", "Dest-Width",
+          "Width in the destination frame", 0, G_MAXINT, 0, param_flags));
+  g_object_class_install_property (object_class, PROP_DEST_HEIGHT,
+      g_param_spec_int ("dest-height", "Dest-Height",
+          "Height in the destination frame", 0, G_MAXINT, 0, param_flags));
+  g_object_class_install_property (object_class, PROP_FILL_BORDER,
+      g_param_spec_boolean ("fill-border", "Fill border",
+          "Fill border", FALSE, param_flags));
+  g_object_class_install_property (object_class, PROP_VIDEO_DIRECTION,
+      g_param_spec_enum ("video-direction", "Video Direction",
+          "Video direction", GST_TYPE_VIDEO_ORIENTATION_METHOD,
+          GST_VIDEO_ORIENTATION_IDENTITY, param_flags));
+  g_object_class_install_property (object_class, PROP_ALPHA,
+      g_param_spec_double ("alpha", "Alpha",
+          "The alpha color value to use", 0, 1.0, 1.0, param_flags));
+  g_object_class_install_property (object_class, PROP_BLEND,
+      g_param_spec_boolean ("blend", "Blend",
+          "Enable alpha blending", FALSE, param_flags));
+
+  GST_DEBUG_CATEGORY_INIT (gst_hip_converter_debug,
+      "hipconverter", 0, "hipconverter");
+}
+
+static void
+gst_hip_converter_init (GstHipConverter * self)
+{
+  self->priv = new GstHipConverterPrivate ();
+}
+
+static void
+gst_hip_converter_dispose (GObject * object)
+{
+  auto self =
+      GST_HIP_CONVERTER (object);
+  auto priv = self->priv;
+
+  if (self->device && gst_hip_device_set_current (self->device)) {
+    if (priv->unpack_module) {
+      HipModuleUnload (priv->vendor, priv->unpack_module);
+      priv->unpack_module = nullptr;
+    }
+
+    if (priv->main_module) {
+      HipModuleUnload (priv->vendor, priv->main_module);
+      priv->main_module = nullptr;
+    }
+
+    for (guint i = 0; i < G_N_ELEMENTS (priv->fallback_buffer); i++) {
+      if (priv->fallback_buffer[i].ptr) {
+        if (priv->fallback_buffer[i].texture) {
+          HipTexObjectDestroy (priv->vendor, priv->fallback_buffer[i].texture);
+          priv->fallback_buffer[i].texture = nullptr;
+        }
+
+        HipFree (priv->vendor, priv->fallback_buffer[i].ptr);
+        priv->fallback_buffer[i].ptr = 0;
+      }
+    }
+
+    if (priv->unpack_buffer.ptr) {
+      if (priv->unpack_buffer.texture) {
+        HipTexObjectDestroy (priv->vendor, priv->unpack_buffer.texture);
+        priv->unpack_buffer.texture = 0;
+      }
+
+      HipFree (priv->vendor, priv->unpack_buffer.ptr);
+      priv->unpack_buffer.ptr = 0;
+    }
+  }
+
+  gst_clear_object (&self->device);
+
+  G_OBJECT_CLASS (parent_class)->dispose (object);
+}
+
+static void
+gst_hip_converter_finalize (GObject * object)
+{
+  auto self = GST_HIP_CONVERTER (object);
+
+  delete self->priv;
+
+  G_OBJECT_CLASS (parent_class)->finalize (object);
+}
+
+static void
+gst_hip_converter_set_property (GObject * object, guint prop_id,
+    const GValue * value, GParamSpec * pspec)
+{
+  auto self = GST_HIP_CONVERTER (object);
+  auto priv = self->priv;
+
+  std::lock_guard < std::mutex > lk (priv->lock);
+  switch (prop_id) {
+    case PROP_DEST_X:
+    {
+      auto dest_x = g_value_get_int (value);
+      if (priv->dest_x != dest_x) {
+        priv->update_const_buf = TRUE;
+        priv->dest_x = dest_x;
+        priv->const_buf->left = dest_x;
+        priv->const_buf->right = priv->dest_x + priv->dest_width;
+      }
+      break;
+    }
+    case PROP_DEST_Y:
+    {
+      auto dest_y = g_value_get_int (value);
+      if (priv->dest_y != dest_y) {
+        priv->update_const_buf = TRUE;
+        priv->dest_y = dest_y;
+        priv->const_buf->top = dest_y;
+        priv->const_buf->bottom = priv->dest_y + priv->dest_height;
+      }
+      break;
+    }
+    case PROP_DEST_WIDTH:
+    {
+      auto dest_width = g_value_get_int (value);
+      if (priv->dest_width != dest_width) {
+        priv->update_const_buf = TRUE;
+        priv->dest_width = dest_width;
+        priv->const_buf->right = priv->dest_x + dest_width;
+        priv->const_buf->view_width = dest_width;
+      }
+      break;
+    }
+    case PROP_DEST_HEIGHT:
+    {
+      auto dest_height = g_value_get_int (value);
+      if (priv->dest_height != dest_height) {
+        priv->update_const_buf = TRUE;
+        priv->dest_height = dest_height;
+        priv->const_buf->bottom = priv->dest_y + dest_height;
+        priv->const_buf->view_height = dest_height;
+      }
+      break;
+    }
+    case PROP_FILL_BORDER:
+    {
+      auto fill_border = g_value_get_boolean (value);
+      if (priv->fill_border != fill_border) {
+        priv->update_const_buf = TRUE;
+        priv->fill_border = fill_border;
+        priv->const_buf->fill_border = fill_border;
+      }
+      break;
+    }
+    case PROP_VIDEO_DIRECTION:
+    {
+      auto video_direction =
+          (GstVideoOrientationMethod) g_value_get_enum (value);
+      if (priv->video_direction != video_direction) {
+        priv->update_const_buf = TRUE;
+        priv->video_direction = video_direction;
+        priv->const_buf->video_direction = video_direction;
+      }
+      break;
+    }
+    case PROP_ALPHA:
+    {
+      auto alpha = g_value_get_double (value);
+      if (priv->alpha != alpha) {
+        priv->update_const_buf = TRUE;
+        priv->alpha = alpha;
+        priv->const_buf->alpha = (float) alpha;
+      }
+      break;
+    }
+    case PROP_BLEND:
+    {
+      auto blend = g_value_get_boolean (value);
+      if (priv->blend != blend) {
+        priv->update_const_buf = TRUE;
+        priv->blend = blend;
+        priv->const_buf->do_blend = blend;
+      }
+      break;
+    }
+    default:
+      G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec);
+      break;
+  }
+}
+
+static void
+gst_hip_converter_get_property (GObject * object, guint prop_id,
+    GValue * value, GParamSpec * pspec)
+{
+  auto self = GST_HIP_CONVERTER (object);
+  auto priv = self->priv;
+
+  std::lock_guard < std::mutex > lk (priv->lock);
+  switch (prop_id) {
+    case PROP_DEST_X:
+      g_value_set_int
+          (value, priv->dest_x);
+      break;
+    case PROP_DEST_Y:
+      g_value_set_int (value, priv->dest_y);
+      break;
+    case PROP_DEST_WIDTH:
+      g_value_set_int (value, priv->dest_width);
+      break;
+    case PROP_DEST_HEIGHT:
+      g_value_set_int (value, priv->dest_height);
+      break;
+    case PROP_FILL_BORDER:
+      g_value_set_boolean (value, priv->fill_border);
+      break;
+    case PROP_VIDEO_DIRECTION:
+      g_value_set_enum (value, priv->video_direction);
+      break;
+    case PROP_ALPHA:
+      g_value_set_double (value, priv->alpha);
+      break;
+    case PROP_BLEND:
+      g_value_set_boolean (value, priv->blend);
+      break;
+    default:
+      G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec);
+      break;
+  }
+}
+
+static const gchar *
+get_color_range_name (GstVideoColorRange range)
+{
+  switch (range) {
+    case GST_VIDEO_COLOR_RANGE_0_255:
+      return "FULL";
+    case GST_VIDEO_COLOR_RANGE_16_235:
+      return "STUDIO";
+    default:
+      break;
+  }
+
+  return "UNKNOWN";
+}
+
+static size_t
+do_align (size_t value, size_t align)
+{
+  if (align == 0)
+    return value;
+
+  return ((value + align - 1) / align) * align;
+}
+
+static gboolean
+gst_hip_converter_setup (GstHipConverter * self)
+{
+  auto priv = self->priv;
+  const GstVideoInfo *in_info;
+  const GstVideoInfo *out_info;
+  const GstVideoInfo *texture_info;
+  GstHipColorMatrix convert_matrix;
+  GstHipColorMatrix border_color_matrix;
+  gdouble border_color[4];
+  guint i, j;
+  const GstVideoColorimetry *in_color;
+  const GstVideoColorimetry *out_color;
+  gchar *str = nullptr;
+  hipError_t ret;
+  std::string output_name;
+  std::string unpack_name;
+
+  in_info = &priv->in_info;
+  out_info = &priv->out_info;
+  texture_info = &priv->texture_info;
+  in_color = &in_info->colorimetry;
+  out_color = &out_info->colorimetry;
+
+  memset (&convert_matrix, 0, sizeof (GstHipColorMatrix));
+  color_matrix_identity (&convert_matrix);
+
+  switch (GST_VIDEO_INFO_FORMAT (out_info)) {
+    case GST_VIDEO_FORMAT_I420:
+      output_name = "I420";
+      break;
+    case GST_VIDEO_FORMAT_YV12:
+      output_name = "YV12";
+      break;
+    case GST_VIDEO_FORMAT_NV12:
+      output_name = "NV12";
+      break;
+    case GST_VIDEO_FORMAT_NV21:
+      output_name = "NV21";
+      break;
+    case GST_VIDEO_FORMAT_P010_10LE:
+    case GST_VIDEO_FORMAT_P012_LE:
+    case GST_VIDEO_FORMAT_P016_LE:
+      output_name = "P010";
+      break;
+    case GST_VIDEO_FORMAT_I420_10LE:
+      output_name = "I420_10";
+      break;
+    case GST_VIDEO_FORMAT_I420_12LE:
+      output_name = "I420_12";
+      break;
+    case GST_VIDEO_FORMAT_Y444:
+      output_name = "Y444";
+      break;
+    case GST_VIDEO_FORMAT_Y444_10LE:
+      output_name = "Y444_10";
+      break;
+    case GST_VIDEO_FORMAT_Y444_12LE:
+      output_name = "Y444_12";
+      break;
+    case GST_VIDEO_FORMAT_Y444_16LE:
+      output_name = "Y444_16";
+      break;
+    case GST_VIDEO_FORMAT_RGBA:
+      output_name = "RGBA";
+      break;
+    case GST_VIDEO_FORMAT_RGBx:
+      output_name = "RGBx";
+      break;
+    case GST_VIDEO_FORMAT_BGRA:
+      output_name = "BGRA";
+      break;
+    case GST_VIDEO_FORMAT_BGRx:
+      output_name = "BGRx";
+      break;
+    case GST_VIDEO_FORMAT_ARGB:
+      output_name = "ARGB";
+      break;
+    case GST_VIDEO_FORMAT_ABGR:
+      output_name = "ABGR";
+      break;
+    case GST_VIDEO_FORMAT_RGB:
+      output_name = "RGB";
+      break;
+    case GST_VIDEO_FORMAT_BGR:
+      output_name = "BGR";
+      break;
+    case GST_VIDEO_FORMAT_RGB10A2_LE:
+      output_name = "RGB10A2";
+      break;
+    case GST_VIDEO_FORMAT_BGR10A2_LE:
+      output_name = "BGR10A2";
+      break;
+    case GST_VIDEO_FORMAT_Y42B:
+      output_name = "Y42B";
+      break;
+    case GST_VIDEO_FORMAT_I422_10LE:
+      output_name = "I422_10";
+      break;
+    case GST_VIDEO_FORMAT_I422_12LE:
+      output_name = "I422_12";
+      break;
+    case GST_VIDEO_FORMAT_RGBP:
+      output_name = "RGBP";
+      break;
+    case GST_VIDEO_FORMAT_BGRP:
+      output_name = "BGRP";
+      break;
+    case GST_VIDEO_FORMAT_GBR:
+      output_name = "GBR";
+      break;
+    case GST_VIDEO_FORMAT_GBR_10LE:
+      output_name = "GBR_10";
+      break;
+    case GST_VIDEO_FORMAT_GBR_12LE:
+      output_name = "GBR_12";
+      break;
+    case GST_VIDEO_FORMAT_GBR_16LE:
+      output_name = "GBR_16";
+      break;
+    case GST_VIDEO_FORMAT_GBRA:
+      output_name = "GBRA";
+      break;
+    case
+        GST_VIDEO_FORMAT_VUYA:
+      output_name = "VUYA";
+      break;
+    default:
+      break;
+  }
+
+  if (output_name.empty ()) {
+    GST_ERROR_OBJECT (self, "Unknown write function for format %s",
+        gst_video_format_to_string (GST_VIDEO_INFO_FORMAT (out_info)));
+    return FALSE;
+  }
+
+  /* Decide texture info to use; 3-channel RGB or 10-bit packed RGB
+   * needs to be converted to another format */
+  priv->texture_info = priv->in_info;
+  switch (GST_VIDEO_INFO_FORMAT (in_info)) {
+    case GST_VIDEO_FORMAT_RGB:
+      gst_video_info_set_format (&priv->texture_info,
+          GST_VIDEO_FORMAT_RGBx, GST_VIDEO_INFO_WIDTH (in_info),
+          GST_VIDEO_INFO_HEIGHT (in_info));
+      unpack_name = "GstHipConverterUnpack_RGB_RGBx";
+      break;
+    case GST_VIDEO_FORMAT_BGR:
+      gst_video_info_set_format (&priv->texture_info,
+          GST_VIDEO_FORMAT_BGRx, GST_VIDEO_INFO_WIDTH (in_info),
+          GST_VIDEO_INFO_HEIGHT (in_info));
+      unpack_name = "GstHipConverterUnpack_RGB_RGBx";
+      break;
+    case GST_VIDEO_FORMAT_RGB10A2_LE:
+      gst_video_info_set_format (&priv->texture_info,
+          GST_VIDEO_FORMAT_ARGB64, GST_VIDEO_INFO_WIDTH (in_info),
+          GST_VIDEO_INFO_HEIGHT (in_info));
+      unpack_name = "GstHipConverterUnpack_RGB10A2_ARGB64";
+      break;
+    case GST_VIDEO_FORMAT_BGR10A2_LE:
+      gst_video_info_set_format (&priv->texture_info,
+          GST_VIDEO_FORMAT_ARGB64, GST_VIDEO_INFO_WIDTH (in_info),
+          GST_VIDEO_INFO_HEIGHT (in_info));
+      unpack_name = "GstHipConverterUnpack_BGR10A2_ARGB64";
+      break;
+    default:
+      break;
+  }
+
+  for (i = 0; i < G_N_ELEMENTS (format_map); i++) {
+    if (format_map[i].format == GST_VIDEO_INFO_FORMAT (texture_info)) {
+      priv->texture_fmt = &format_map[i];
+      break;
+    }
+  }
+
+  if (!priv->texture_fmt) {
+    GST_ERROR_OBJECT (self, "Couldn't find texture format for %s (%s)",
+        gst_video_format_to_string (GST_VIDEO_INFO_FORMAT (in_info)),
+        gst_video_format_to_string (GST_VIDEO_INFO_FORMAT (texture_info)));
+    return FALSE;
+  }
+
+  /* calculate black color
+   * TODO: add support for border color */
+  if (GST_VIDEO_INFO_IS_RGB (out_info)) {
+    GstVideoInfo rgb_info =
+        *out_info;
+    rgb_info.colorimetry.range = GST_VIDEO_COLOR_RANGE_0_255;
+    gst_hip_color_range_adjust_matrix_unorm (&rgb_info, out_info,
+        &border_color_matrix);
+  } else {
+    GstVideoInfo rgb_info;
+
+    gst_video_info_set_format (&rgb_info, GST_VIDEO_FORMAT_RGBA64_LE,
+        out_info->width, out_info->height);
+
+    gst_hip_rgb_to_yuv_matrix_unorm (&rgb_info, out_info, &border_color_matrix);
+  }
+
+  for (i = 0; i < 3; i++) {
+    /* TODO: property */
+    gdouble border_rgba[4] = { 0, 0, 0 };
+
+    border_color[i] = 0;
+    for (j = 0; j < 3; j++)
+      border_color[i] += border_color_matrix.matrix[i][j] * border_rgba[j];
+    border_color[i] += border_color_matrix.offset[i];
+    border_color[i] = CLAMP (border_color[i],
+        border_color_matrix.min[i], border_color_matrix.max[i]);
+  }
+
+  /* FIXME: handle primaries and transfer functions */
+  priv->const_buf->do_convert = 0;
+  if (GST_VIDEO_INFO_IS_RGB (texture_info)) {
+    if (GST_VIDEO_INFO_IS_RGB (out_info)) {
+      /* RGB -> RGB */
+      if (in_color->range == out_color->range) {
+        GST_DEBUG_OBJECT (self, "RGB -> RGB conversion without matrix");
+      } else {
+        if (!gst_hip_color_range_adjust_matrix_unorm (in_info, out_info,
+                &convert_matrix)) {
+          GST_ERROR_OBJECT (self, "Failed to get RGB range adjust matrix");
+          return FALSE;
+        }
+
+        str = gst_hip_dump_color_matrix (&convert_matrix);
+        GST_DEBUG_OBJECT (self, "RGB range adjust %s -> %s\n%s",
+            get_color_range_name (in_color->range),
+            get_color_range_name (out_color->range), str);
+        g_free (str);
+
+        priv->const_buf->do_convert = 1;
+      }
+    } else {
+      /* RGB -> YUV */
+      if (!gst_hip_rgb_to_yuv_matrix_unorm (in_info, out_info, &convert_matrix)) {
+        GST_ERROR_OBJECT (self, "Failed to get RGB -> YUV transform matrix");
+        return FALSE;
+      }
+
+      str = gst_hip_dump_color_matrix (&convert_matrix);
+      GST_DEBUG_OBJECT (self, "RGB -> YUV matrix:\n%s", str);
+      g_free (str);
+
+      priv->const_buf->do_convert = 1;
+    }
+  } else {
+    if (GST_VIDEO_INFO_IS_RGB (out_info)) {
+      /* YUV -> RGB */
+      if (!gst_hip_yuv_to_rgb_matrix_unorm (in_info, out_info,
+              &convert_matrix)) {
+        GST_ERROR_OBJECT (self, "Failed to get YUV -> RGB transform matrix");
+        return FALSE;
+      }
+
+      str = gst_hip_dump_color_matrix (&convert_matrix);
+      GST_DEBUG_OBJECT (self, "YUV -> RGB matrix:\n%s", str);
+      g_free (str);
+
+      priv->const_buf->do_convert = 1;
+    } else {
+      /* YUV -> YUV */
+      if (in_color->range == out_color->range) {
+        GST_DEBUG_OBJECT (self, "YUV -> YUV conversion without matrix");
+      } else {
+        if (!gst_hip_color_range_adjust_matrix_unorm (in_info, out_info,
+                &convert_matrix)) {
+          GST_ERROR_OBJECT (self, "Failed to get YUV range adjust matrix");
+          return FALSE;
+        }
+
+        str = gst_hip_dump_color_matrix (&convert_matrix);
+        GST_DEBUG_OBJECT (self, "YUV range adjust matrix:\n%s", str);
+        g_free (str);
+
+        priv->const_buf->do_convert = 1;
+      }
+    }
+  }
+
+  for (i = 0; i < 3; i++) {
+    priv->const_buf->convert_matrix.coeffX[i] = convert_matrix.matrix[0][i];
+    priv->const_buf->convert_matrix.coeffY[i] = convert_matrix.matrix[1][i];
+    priv->const_buf->convert_matrix.coeffZ[i] = convert_matrix.matrix[2][i];
+    priv->const_buf->convert_matrix.offset[i] = convert_matrix.offset[i];
+    priv->const_buf->convert_matrix.min[i] = convert_matrix.min[i];
+    priv->const_buf->convert_matrix.max[i] = convert_matrix.max[i];
+  }
+
+  priv->const_buf->width = out_info->width;
+  priv->const_buf->height = out_info->height;
+  priv->const_buf->left = 0;
+  priv->const_buf->top = 0;
+  priv->const_buf->right = out_info->width;
+  priv->const_buf->bottom = out_info->height;
+  priv->const_buf->view_width = out_info->width;
+  priv->const_buf->view_height = out_info->height;
+  priv->const_buf->border_x = border_color[0];
+  priv->const_buf->border_y = border_color[1];
+  priv->const_buf->border_z = border_color[2];
+  priv->const_buf->border_w = border_color[3];
+  priv->const_buf->fill_border = 0;
+  priv->const_buf->video_direction = 0;
+  priv->const_buf->alpha = 1;
+  priv->const_buf->do_blend = 0;
+
+  if (!gst_hip_device_set_current (self->device)) {
+    GST_ERROR_OBJECT (self, "Couldn't set device");
+    return
+        FALSE;
+  }
+
+  auto device_id = gst_hip_device_get_device_id (self->device);
+  const gchar *program = nullptr;
+
+  std::string kernel_name_base = "GstHipConverterMain_" +
+      std::string (priv->texture_fmt->sample_func) + "_" + output_name;
+
+  if (priv->vendor == GST_HIP_VENDOR_AMD) {
+    auto kernel_name = kernel_name_base + "_amd";
+    auto precompiled = g_precompiled_hsaco_table.find (kernel_name);
+    if (precompiled != g_precompiled_hsaco_table.end ())
+      program = (const gchar *) precompiled->second;
+  } else {
+    auto kernel_name = kernel_name_base + "_nvidia";
+    auto precompiled = g_precompiled_ptx_table.find (kernel_name);
+    if (precompiled != g_precompiled_ptx_table.end ())
+      program = precompiled->second;
+  }
+
+  if (program) {
+    ret = HipModuleLoadData (priv->vendor, &priv->main_module, program);
+    if (ret != hipSuccess) {
+      GST_WARNING_OBJECT (self,
+          "Could not load module from precompiled, ret %d", ret);
+      program = nullptr;
+      priv->main_module = nullptr;
+    } else {
+      GST_DEBUG_OBJECT (self, "Loaded precompiled kernel");
+    }
+  }
+
+  if (!program) {
+    std::string kernel_name = kernel_name_base + "_" +
+        std::to_string (device_id);
+    if (priv->vendor == GST_HIP_VENDOR_AMD)
+      kernel_name += "_amd";
+    else
+      kernel_name += "_nvidia";
+
+    std::string sampler_define = std::string ("-DSAMPLER=Sample") +
+        std::string (priv->texture_fmt->sample_func);
+    std::string output_define = std::string ("-DOUTPUT=Output") + output_name;
+    std::string texture_define;
+    std::string arch_opt;
+    if (priv->vendor == GST_HIP_VENDOR_AMD) {
+      texture_define = std::string ("-DTextureObject_t=hipTextureObject_t");
+      hipDeviceProp_t prop;
+      ret = HipGetDeviceProperties (GST_HIP_VENDOR_AMD, &prop, device_id);
+      if (ret == hipSuccess)
+        arch_opt = std::string ("--gpu-architecture=") + prop.gcnArchName;
+    } else {
+      texture_define = std::string ("-DTextureObject_t=cudaTextureObject_t");
+    }
+
+    std::vector < const char *>opts;
+    opts.push_back (sampler_define.c_str ());
+    opts.push_back
(output_define.c_str ()); + opts.push_back (texture_define.c_str ()); + if (priv->vendor == GST_HIP_VENDOR_AMD) + opts.push_back (arch_opt.c_str ()); + + std::lock_guard < std::mutex > lk (g_kernel_table_lock); + + auto ptx = g_ptx_table.find (kernel_name); + if (ptx == g_ptx_table.end ()) { + GST_DEBUG_OBJECT (self, "Building PTX"); + program = gst_hip_rtc_compile (self->device, ConverterMain_str, + opts.data (), opts.size ()); + if (program) + g_ptx_table[kernel_name] = program; + } else { + GST_DEBUG_OBJECT (self, "Found cached PTX"); + program = ptx->second; + } + + if (program && !priv->main_module) { + GST_DEBUG_OBJECT (self, "Loading PTX module"); + gst_hip_device_set_current (self->device); + ret = HipModuleLoadData (priv->vendor, &priv->main_module, program); + if (ret != hipSuccess) { + GST_ERROR_OBJECT (self, "Could not load module from PTX, ret %d", ret); + program = nullptr; + priv->main_module = nullptr; + } + } + } + + if (!priv->main_module) { + GST_ERROR_OBJECT (self, "Couldn't load module"); + return FALSE; + } + + ret = HipModuleGetFunction (priv->vendor, &priv->main_func, + priv->main_module, "GstHipConverterMain"); + if (!gst_hip_result (ret, priv->vendor)) { + GST_ERROR_OBJECT (self, "Could not get main function"); + return FALSE; + } + + /* Allocates intermediate memory for texture */ + if (!unpack_name.empty ()) { + HIP_TEXTURE_DESC texture_desc = { }; + HIP_RESOURCE_DESC resource_desc = { }; + hipTextureObject_t texture = nullptr; + guint stride = GST_VIDEO_INFO_COMP_WIDTH (texture_info, 0) * + GST_VIDEO_INFO_COMP_PSTRIDE (texture_info, 0); + stride = do_align (stride, priv->tex_align); + priv->unpack_buffer.stride = stride; + + ret = HipMalloc (priv->vendor, &priv->unpack_buffer.ptr, stride * + GST_VIDEO_INFO_HEIGHT (texture_info)); + + if (!gst_hip_result (ret, priv->vendor)) { + GST_ERROR_OBJECT (self, "Couldn't allocate unpack buffer"); + return FALSE; + } + + resource_desc.resType = HIP_RESOURCE_TYPE_PITCH2D; + 
resource_desc.res.pitch2D.format = priv->texture_fmt->array_format[0]; + resource_desc.res.pitch2D.numChannels = 4; + resource_desc.res.pitch2D.width = in_info->width; + resource_desc.res.pitch2D.height = in_info->height; + resource_desc.res.pitch2D.pitchInBytes = priv->unpack_buffer.stride; + resource_desc.res.pitch2D.devPtr = priv->unpack_buffer.ptr; + + texture_desc.filterMode = priv->filter_mode; + texture_desc.flags = 0x2; + texture_desc.addressMode[0] = (HIPaddress_mode) 1; + texture_desc.addressMode[1] = (HIPaddress_mode) 1; + texture_desc.addressMode[2] = (HIPaddress_mode) 1; + + ret = + HipTexObjectCreate (priv->vendor, &texture, &resource_desc, + &texture_desc, nullptr); + if (!gst_hip_result (ret, priv->vendor)) { + GST_ERROR_OBJECT (self, "Couldn't create unpack texture"); + return FALSE; + } + + priv->unpack_buffer.texture = texture; + program = nullptr; + + std::string unpack_module_name_base = "GstHipConverterUnpack"; + + if (priv->vendor == GST_HIP_VENDOR_AMD) { + auto kernel_name = unpack_module_name_base + "_amd"; + auto precompiled = g_precompiled_hsaco_table.find (kernel_name); + if (precompiled != g_precompiled_hsaco_table.end ()) + program = (const gchar *) precompiled->second; + } else { + auto kernel_name = unpack_module_name_base + "_nvidia"; + auto precompiled = g_precompiled_ptx_table.find (kernel_name); + if (precompiled != g_precompiled_ptx_table.end ()) + program = precompiled->second; + } + + if (program) { + ret = HipModuleLoadData (priv->vendor, &priv->unpack_module, program); + if (ret != hipSuccess) { + GST_WARNING_OBJECT (self, + "Could not load module from precompiled, ret %d", ret); + program = nullptr; + priv->unpack_module = nullptr; + } else { + GST_DEBUG_OBJECT (self, "Loaded precompiled kernel"); + } + } + + if (!program) { + std::string unpack_module_name = unpack_module_name_base + "_" + + std::to_string (device_id); + if (priv->vendor == GST_HIP_VENDOR_AMD) + unpack_module_name += "_amd"; + else + unpack_module_name += 
"_nvidia"; + + std::string arch_opt; + if (priv->vendor == GST_HIP_VENDOR_AMD) { + hipDeviceProp_t prop; + ret = HipGetDeviceProperties (GST_HIP_VENDOR_AMD, &prop, device_id); + if (ret == hipSuccess) + arch_opt = std::string ("--gpu-architecture=") + prop.gcnArchName; + } + + std::vector < const char *>opts; + if (!arch_opt.empty ()) + opts.push_back (arch_opt.c_str ()); + + std::lock_guard < std::mutex > lk (g_kernel_table_lock); + auto ptx = g_ptx_table.find (unpack_module_name); + if (ptx == g_ptx_table.end ()) { + GST_DEBUG_OBJECT (self, "Building PTX"); + program = gst_hip_rtc_compile (self->device, ConverterUnpack_str, + opts.empty ()? nullptr : opts.data (), opts.size ()); + if (program) + g_ptx_table[unpack_module_name] = program; + } else { + GST_DEBUG_OBJECT (self, "Found cached PTX"); + program = ptx->second; + } + + if (program && !priv->unpack_module) { + GST_DEBUG_OBJECT (self, "PTX CUBIN module"); + ret = HipModuleLoadData (priv->vendor, &priv->unpack_module, program); + if (!gst_hip_result (ret, priv->vendor)) { + GST_ERROR_OBJECT (self, "Could not load module from PTX"); + program = nullptr; + priv->unpack_module = nullptr; + } + } + } + + if (!priv->unpack_module) { + GST_ERROR_OBJECT (self, "Couldn't load unpack module"); + return FALSE; + } + + ret = HipModuleGetFunction (priv->vendor, &priv->unpack_func, + priv->unpack_module, unpack_name.c_str ()); + if (!gst_hip_result (ret, priv->vendor)) { + GST_ERROR_OBJECT (self, "Could not get unpack function"); + return FALSE; + } + } + + return TRUE; +} + +static gboolean +copy_config (const GstIdStr * fieldname, const GValue * value, + gpointer user_data) +{ + GstHipConverter *self = (GstHipConverter *) user_data; + + gst_structure_id_str_set_value (self->priv->config, fieldname, value); + + return TRUE; +} + +static void +gst_hip_converter_set_config (GstHipConverter * self, GstStructure * config) +{ + gst_structure_foreach_id_str (config, copy_config, self); + gst_structure_free (config); +} + 
+GstHipConverter * +gst_hip_converter_new (GstHipDevice * device, const GstVideoInfo * in_info, + const GstVideoInfo * out_info, GstStructure * config) +{ + g_return_val_if_fail (in_info != nullptr, nullptr); + g_return_val_if_fail (out_info != nullptr, nullptr); + g_return_val_if_fail (GST_IS_HIP_DEVICE (device), nullptr); + + gboolean tex_support = FALSE; + g_object_get (device, "texture2d-support", &tex_support, nullptr); + if (!tex_support) { + GST_WARNING_OBJECT (device, "Texture not supported"); + return nullptr; + } + + gint tex_align = 0; + auto hip_ret = gst_hip_device_get_attribute (device, + hipDeviceAttributeTextureAlignment, &tex_align); + if (hip_ret != hipSuccess || tex_align <= 0) { + GST_WARNING_OBJECT (device, "Unknown texture alignment"); + return nullptr; + } + + auto self = + (GstHipConverter *) g_object_new (GST_TYPE_HIP_CONVERTER, nullptr); + gst_object_ref_sink (self); + + self->device = (GstHipDevice *) gst_object_ref (device); + + auto priv = self->priv; + priv->in_info = *in_info; + priv->out_info = *out_info; + priv->dest_width = out_info->width; + priv->dest_height = out_info->height; + priv->tex_align = tex_align; + priv->vendor = gst_hip_device_get_vendor (device); + priv->stream = gst_hip_stream_get_handle (gst_hip_device_get_stream (device)); + + if (config) + gst_hip_converter_set_config (self, config); + + if (!gst_hip_converter_setup (self)) { + gst_object_unref (self); + return nullptr; + } + + return self; +} + +static hipTextureObject_t +gst_hip_converter_create_texture_unchecked (GstHipConverter * self, + gpointer src, gint width, gint height, hipArray_Format format, + guint channels, gint stride, gint plane, HIPfilter_mode mode) +{ + auto priv = self->priv; + HIP_TEXTURE_DESC texture_desc = { }; + HIP_RESOURCE_DESC resource_desc = { }; + hipTextureObject_t texture = nullptr; + + resource_desc.resType = HIP_RESOURCE_TYPE_PITCH2D; + resource_desc.res.pitch2D.format = format; + resource_desc.res.pitch2D.numChannels = channels; 
+ resource_desc.res.pitch2D.width = width; + resource_desc.res.pitch2D.height = height; + resource_desc.res.pitch2D.pitchInBytes = stride; + resource_desc.res.pitch2D.devPtr = src; + + texture_desc.filterMode = mode; + /* Will read texture value as a normalized [0, 1] float value + * with [0, 1) coordinates */ + /* CU_TRSF_NORMALIZED_COORDINATES */ + texture_desc.flags = 0x2; + /* CU_TR_ADDRESS_MODE_CLAMP */ + texture_desc.addressMode[0] = (HIPaddress_mode) 1; + texture_desc.addressMode[1] = (HIPaddress_mode) 1; + texture_desc.addressMode[2] = (HIPaddress_mode) 1; + + auto hip_ret = HipTexObjectCreate (priv->vendor, &texture, + &resource_desc, &texture_desc, nullptr); + if (!gst_hip_result (hip_ret, priv->vendor)) { + GST_ERROR_OBJECT (self, "Could not create texture"); + return nullptr; + } + + return texture; +} + +static gboolean +ensure_fallback_buffer (GstHipConverter * self, gint width_in_bytes, + gint height, guint plane) +{ + auto priv = self->priv; + + if (priv->fallback_buffer[plane].ptr) + return TRUE; + + size_t pitch = do_align (width_in_bytes, priv->tex_align); + priv->fallback_buffer[plane].stride = pitch; + auto hip_ret = HipMalloc (priv->vendor, &priv->fallback_buffer[plane].ptr, + pitch * height); + + if (!gst_hip_result (hip_ret, priv->vendor)) { + GST_ERROR_OBJECT (self, "Couldn't allocate fallback buffer"); + return FALSE; + } + + return TRUE; +} + +static hipTextureObject_t +gst_hip_converter_create_texture (GstHipConverter * self, + gpointer src, gint width, gint height, gint stride, HIPfilter_mode mode, + hipArray_Format format, guint channels, gint plane) +{ + auto priv = self->priv; + hip_Memcpy2D params = { }; + + if (!ensure_fallback_buffer (self, stride, height, plane)) + return nullptr; + + params.srcMemoryType = hipMemoryTypeDevice; + params.srcPitch = stride; + params.srcDevice = src; + + params.dstMemoryType = hipMemoryTypeDevice; + params.dstPitch = priv->fallback_buffer[plane].stride; + params.dstDevice = priv->fallback_buffer[plane].ptr; + 
params.WidthInBytes = GST_VIDEO_INFO_COMP_WIDTH (&priv->in_info, plane) + * GST_VIDEO_INFO_COMP_PSTRIDE (&priv->in_info, plane), + params.Height = GST_VIDEO_INFO_COMP_HEIGHT (&priv->in_info, plane); + + auto hip_ret = HipMemcpyParam2DAsync (priv->vendor, &params, priv->stream); + if (!gst_hip_result (hip_ret, priv->vendor)) { + GST_ERROR_OBJECT (self, "Couldn't copy to fallback buffer"); + return nullptr; + } + + if (!priv->fallback_buffer[plane].texture) { + auto src_ptr = priv->fallback_buffer[plane].ptr; + stride = priv->fallback_buffer[plane].stride; + + priv->fallback_buffer[plane].texture = + gst_hip_converter_create_texture_unchecked (self, src_ptr, width, + height, format, channels, stride, plane, mode); + } + + return priv->fallback_buffer[plane].texture; +} + +static gboolean +gst_hip_converter_unpack_rgb (GstHipConverter * self, GstVideoFrame * src_frame) +{ + auto priv = self->priv; + gpointer src; + gint width, height, src_stride, dst_stride; + gpointer args[] = { &src, &priv->unpack_buffer.ptr, + &width, &height, &src_stride, &dst_stride + }; + + g_assert (priv->unpack_buffer.ptr); + g_assert (priv->unpack_buffer.stride > 0); + + src = GST_VIDEO_FRAME_PLANE_DATA (src_frame, 0); + width = GST_VIDEO_FRAME_WIDTH (src_frame); + height = GST_VIDEO_FRAME_HEIGHT (src_frame); + src_stride = GST_VIDEO_FRAME_PLANE_STRIDE (src_frame, 0); + dst_stride = (gint) priv->unpack_buffer.stride; + + auto hip_ret = HipModuleLaunchKernel (priv->vendor, priv->unpack_func, + DIV_UP (width, HIP_BLOCK_X), DIV_UP (height, HIP_BLOCK_Y), 1, + HIP_BLOCK_X, HIP_BLOCK_Y, 1, 0, priv->stream, args, nullptr); + if (!gst_hip_result (hip_ret, priv->vendor)) { + GST_ERROR_OBJECT (self, "Couldn't unpack source RGB"); + return FALSE; + } + + return TRUE; +} + +gboolean +gst_hip_converter_convert_frame (GstHipConverter * converter, + GstBuffer * in_buf, GstBuffer * out_buf) +{ + const TextureFormat *format; + hipTextureObject_t texture[GST_VIDEO_MAX_COMPONENTS] = { }; + guint8 *dst[GST_VIDEO_MAX_COMPONENTS] = { 
}; + gint stride[2] = { 0, }; + gint width, height; + gint off_x = 0; + gint off_y = 0; + GstVideoFrame in_frame, out_frame; + + g_return_val_if_fail (GST_IS_HIP_CONVERTER (converter), FALSE); + g_return_val_if_fail (GST_IS_BUFFER (in_buf), FALSE); + g_return_val_if_fail (GST_IS_BUFFER (out_buf), FALSE); + + auto priv = converter->priv; + + if (!gst_hip_device_set_current (converter->device)) { + GST_ERROR_OBJECT (converter, "Couldn't set device"); + return FALSE; + } + + if (!gst_video_frame_map (&in_frame, &priv->in_info, in_buf, + GST_MAP_READ_HIP)) { + GST_ERROR_OBJECT (converter, "Couldn't map input buffer"); + return FALSE; + } + + if (!gst_video_frame_map (&out_frame, &priv->out_info, out_buf, + GST_MAP_WRITE_HIP)) { + gst_video_frame_unmap (&in_frame); + GST_ERROR_OBJECT (converter, "Couldn't map output buffer"); + return FALSE; + } + + auto in_hmem = (GstHipMemory *) gst_buffer_peek_memory (in_buf, 0); + auto out_hmem = (GstHipMemory *) gst_buffer_peek_memory (out_buf, 0); + + auto in_stream = gst_hip_memory_get_stream (in_hmem); + auto out_stream = gst_hip_memory_get_stream (out_hmem); + + gboolean set_event = FALSE; + /* Avoid sync if in/out mem use the same hip stream */ + priv->stream = gst_hip_stream_get_handle (out_stream); + if (in_stream != out_stream) { + gst_hip_memory_sync (in_hmem); + } else { + set_event = TRUE; + } + + priv = converter->priv; + format = priv->texture_fmt; + + std::lock_guard < std::mutex > lk (priv->lock); + if (!priv->fill_border && (priv->dest_width <= 0 || priv->dest_height <= 0)) + return TRUE; + + gpointer args[] = { &texture[0], &texture[1], &texture[2], &texture[3], + &dst[0], &dst[1], &dst[2], &dst[3], &stride[0], &stride[1], + priv->const_buf, &off_x, &off_y + }; + + auto cmem = (GstHipMemory *) gst_buffer_peek_memory (in_buf, 0); + + if (priv->unpack_func) { + if (!gst_hip_converter_unpack_rgb (converter, &in_frame)) { + gst_video_frame_unmap (&out_frame); + gst_video_frame_unmap (&in_frame); + return FALSE; + } + + texture[0] = 
priv->unpack_buffer.texture; + if (!texture[0]) { + GST_ERROR_OBJECT (converter, "Unpack texture is unavailable"); + gst_video_frame_unmap (&out_frame); + gst_video_frame_unmap (&in_frame); + return FALSE; + } + } else { + for (guint i = 0; i < GST_VIDEO_FRAME_N_PLANES (&in_frame); i++) { + if (!gst_hip_memory_get_texture (cmem, + i, priv->filter_mode, HIP_TR_ADDRESS_MODE_CLAMP, &texture[i])) { + auto src = GST_VIDEO_FRAME_PLANE_DATA (&in_frame, i); + texture[i] = gst_hip_converter_create_texture (converter, + src, GST_VIDEO_FRAME_COMP_WIDTH (&in_frame, i), + GST_VIDEO_FRAME_COMP_HEIGHT (&in_frame, i), + GST_VIDEO_FRAME_PLANE_STRIDE (&in_frame, i), + priv->filter_mode, format->array_format[i], format->channels[i], i); + } + + if (!texture[i]) { + GST_ERROR_OBJECT (converter, "Couldn't create texture %d", i); + gst_video_frame_unmap (&out_frame); + gst_video_frame_unmap (&in_frame); + return FALSE; + } + } + } + + width = GST_VIDEO_FRAME_WIDTH (&out_frame); + height = GST_VIDEO_FRAME_HEIGHT (&out_frame); + + if (!priv->fill_border) { + if (priv->dest_width < width) { + off_x = priv->dest_x; + width = priv->dest_width; + } + + if (priv->dest_height < height) { + off_y = priv->dest_y; + height = priv->dest_height; + } + } + + for (guint i = 0; i < GST_VIDEO_FRAME_N_PLANES (&out_frame); i++) + dst[i] = (guint8 *) GST_VIDEO_FRAME_PLANE_DATA (&out_frame, i); + + stride[0] = stride[1] = GST_VIDEO_FRAME_PLANE_STRIDE (&out_frame, 0); + if (GST_VIDEO_FRAME_N_PLANES (&out_frame) > 1) + stride[1] = GST_VIDEO_FRAME_PLANE_STRIDE (&out_frame, 1); + + auto hip_ret = HipModuleLaunchKernel (priv->vendor, priv->main_func, + DIV_UP (width, HIP_BLOCK_X), DIV_UP (height, HIP_BLOCK_Y), 1, + HIP_BLOCK_X, HIP_BLOCK_Y, 1, + 0, priv->stream, args, nullptr); + + gst_video_frame_unmap (&out_frame); + gst_video_frame_unmap (&in_frame); + + if (!gst_hip_result (hip_ret, priv->vendor)) { + GST_ERROR_OBJECT (converter, "Couldn't convert frame"); + return FALSE; + } + + auto stream = gst_hip_device_get_stream 
(converter->device); + GstHipEvent *event; + if (set_event && gst_hip_stream_record_event (stream, &event)) { + auto hmem = (GstHipMemory *) gst_buffer_peek_memory (out_buf, 0); + gst_hip_memory_set_event (hmem, event); + gst_hip_event_unref (event); + } else { + HipStreamSynchronize (priv->vendor, priv->stream); + } + + return TRUE; +}
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/hip/gsthipconverter.h
Added
@@ -0,0 +1,68 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/hip/gsthip.h> + +G_BEGIN_DECLS + +#define GST_TYPE_HIP_CONVERTER (gst_hip_converter_get_type()) +#define GST_HIP_CONVERTER(obj) (G_TYPE_CHECK_INSTANCE_CAST((obj),GST_TYPE_HIP_CONVERTER,GstHipConverter)) +#define GST_HIP_CONVERTER_CLASS(klass) (G_TYPE_CHECK_CLASS_CAST((klass),GST_TYPE_HIP_CONVERTER,GstHipConverterClass)) +#define GST_HIP_CONVERTER_GET_CLASS(obj) (GST_HIP_CONVERTER_CLASS(G_OBJECT_GET_CLASS(obj))) +#define GST_IS_HIP_CONVERTER(obj) (G_TYPE_CHECK_INSTANCE_TYPE((obj),GST_TYPE_HIP_CONVERTER)) +#define GST_IS_HIP_CONVERTER_CLASS(klass) (G_TYPE_CHECK_CLASS_TYPE((klass),GST_TYPE_HIP_CONVERTER)) +#define GST_HIP_CONVERTER_CAST(obj) ((GstHipConverter*)(obj)) + +typedef struct _GstHipConverter GstHipConverter; +typedef struct _GstHipConverterClass GstHipConverterClass; +typedef struct _GstHipConverterPrivate GstHipConverterPrivate; + +struct _GstHipConverter +{ + GstObject parent; + + GstHipDevice *device; + + /*< private >*/ + GstHipConverterPrivate *priv; + gpointer _gst_reserved[GST_PADDING]; +}; + +struct _GstHipConverterClass +{ + GstObjectClass parent_class; + + /*< private >*/ + 
gpointer _gst_reserved[GST_PADDING]; +}; + +GType gst_hip_converter_get_type (void); + +GstHipConverter * gst_hip_converter_new (GstHipDevice * device, + const GstVideoInfo * in_info, + const GstVideoInfo * out_info, + GstStructure * config); + +gboolean gst_hip_converter_convert_frame (GstHipConverter * converter, + GstBuffer * in_buf, + GstBuffer * out_buf); + +G_END_DECLS
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/hip/gsthipconvertscale.cpp
Added
@@ -0,0 +1,1868 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gsthipconvertscale.h" +#include "gsthipconverter.h" +#include <mutex> + +GST_DEBUG_CATEGORY_STATIC (gst_hip_base_convert_debug); +#define GST_CAT_DEFAULT gst_hip_base_convert_debug + +#define GST_HIP_CONVET_FORMATS \ + "{ I420, YV12, NV12, NV21, P010_10LE, P012_LE, P016_LE, I420_10LE, I420_12LE, Y444, " \ + "Y444_10LE, Y444_12LE, Y444_16LE, BGRA, RGBA, RGBx, BGRx, ARGB, ABGR, RGB, " \ + "BGR, BGR10A2_LE, RGB10A2_LE, Y42B, I422_10LE, I422_12LE, RGBP, BGRP, GBR, " \ + "GBRA, GBR_10LE, GBR_12LE, GBR_16LE, VUYA }" + +static GstStaticPadTemplate sink_template = GST_STATIC_PAD_TEMPLATE ("sink", + GST_PAD_SINK, + GST_PAD_ALWAYS, + GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE_WITH_FEATURES + (GST_CAPS_FEATURE_MEMORY_HIP_MEMORY, GST_HIP_CONVET_FORMATS)) + ); + +static GstStaticPadTemplate src_template = GST_STATIC_PAD_TEMPLATE ("src", + GST_PAD_SRC, + GST_PAD_ALWAYS, + GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE_WITH_FEATURES + (GST_CAPS_FEATURE_MEMORY_HIP_MEMORY, GST_HIP_CONVET_FORMATS)) + ); + +#define DEFAULT_ADD_BORDERS TRUE + +struct _GstHipBaseConvertPrivate +{ + 
~_GstHipBaseConvertPrivate () + { + gst_clear_object (&conv); + } + + GstHipConverter *conv = nullptr; + + gint borders_h = 0; + gint borders_w = 0; + gboolean add_borders = DEFAULT_ADD_BORDERS; + + /* orientation */ + /* method configured via property */ + GstVideoOrientationMethod method = GST_VIDEO_ORIENTATION_IDENTITY; + /* method parsed from tag */ + GstVideoOrientationMethod tag_method = GST_VIDEO_ORIENTATION_IDENTITY; + /* method currently selected based on "method" and "tag_method" */ + GstVideoOrientationMethod selected_method = GST_VIDEO_ORIENTATION_IDENTITY; + /* method previously selected and used for negotiation */ + GstVideoOrientationMethod active_method = GST_VIDEO_ORIENTATION_IDENTITY; + + std::mutex lock; +}; + +static void gst_hip_base_convert_finalize (GObject * object); +static GstCaps *gst_hip_base_convert_transform_caps (GstBaseTransform * trans, + GstPadDirection direction, GstCaps * caps, GstCaps * filter); +static GstCaps *gst_hip_base_convert_fixate_caps (GstBaseTransform * trans, + GstPadDirection direction, GstCaps * caps, GstCaps * othercaps); +static gboolean +gst_hip_base_convert_propose_allocation (GstBaseTransform * trans, + GstQuery * decide_query, GstQuery * query); +static gboolean gst_hip_base_convert_decide_allocation (GstBaseTransform * + trans, GstQuery * query); +static gboolean gst_hip_base_convert_filter_meta (GstBaseTransform * trans, + GstQuery * query, GType api, const GstStructure * params); +static GstFlowReturn gst_hip_base_convert_transform (GstBaseTransform * trans, + GstBuffer * inbuf, GstBuffer * outbuf); +static gboolean gst_hip_base_convert_set_info (GstHipBaseFilter * filter, + GstCaps * incaps, GstVideoInfo * in_info, GstCaps * outcaps, + GstVideoInfo * out_info); + +#define gst_hip_base_convert_parent_class parent_class +G_DEFINE_ABSTRACT_TYPE_WITH_CODE (GstHipBaseConvert, + gst_hip_base_convert, GST_TYPE_HIP_BASE_FILTER, + GST_DEBUG_CATEGORY_INIT (gst_hip_base_convert_debug, + "hipconvertscale", 0, 
"hipconvertscale")); + +static void +gst_hip_base_convert_class_init (GstHipBaseConvertClass * klass) +{ + auto object_class = G_OBJECT_CLASS (klass); + auto element_class = GST_ELEMENT_CLASS (klass); + auto trans_class = GST_BASE_TRANSFORM_CLASS (klass); + auto filter_class = GST_HIP_BASE_FILTER_CLASS (klass); + + object_class->finalize = gst_hip_base_convert_finalize; + + gst_element_class_add_static_pad_template (element_class, &sink_template); + gst_element_class_add_static_pad_template (element_class, &src_template); + + trans_class->passthrough_on_same_caps = TRUE; + + trans_class->transform_caps = + GST_DEBUG_FUNCPTR (gst_hip_base_convert_transform_caps); + trans_class->fixate_caps = + GST_DEBUG_FUNCPTR (gst_hip_base_convert_fixate_caps); + trans_class->propose_allocation = + GST_DEBUG_FUNCPTR (gst_hip_base_convert_propose_allocation); + trans_class->decide_allocation = + GST_DEBUG_FUNCPTR (gst_hip_base_convert_decide_allocation); + trans_class->filter_meta = + GST_DEBUG_FUNCPTR (gst_hip_base_convert_filter_meta); + trans_class->transform = GST_DEBUG_FUNCPTR (gst_hip_base_convert_transform); + + filter_class->set_info = GST_DEBUG_FUNCPTR (gst_hip_base_convert_set_info); + + gst_type_mark_as_plugin_api (GST_TYPE_HIP_BASE_CONVERT, + (GstPluginAPIFlags) 0); +} + +static void +gst_hip_base_convert_init (GstHipBaseConvert * self) +{ + self->priv = new GstHipBaseConvertPrivate (); +} + +static void +gst_hip_base_convert_finalize (GObject * object) +{ + auto self = GST_HIP_BASE_CONVERT (object); + + delete self->priv; + + G_OBJECT_CLASS (parent_class)->finalize (object); +} + +static GstCaps * +gst_hip_base_convert_caps_remove_format_info (GstCaps * caps) +{ + GstStructure *st; + GstCapsFeatures *f; + gint i, n; + GstCaps *res; + GstCapsFeatures *feature = + gst_caps_features_new_single_static_str + (GST_CAPS_FEATURE_MEMORY_HIP_MEMORY); + + res = gst_caps_new_empty (); + + n = gst_caps_get_size (caps); + for (i = 0; i < n; i++) { + st = gst_caps_get_structure 
(caps, i); + f = gst_caps_get_features (caps, i); + + /* If this is already expressed by the existing caps + * skip this structure */ + if (i > 0 && gst_caps_is_subset_structure_full (res, st, f)) + continue; + + st = gst_structure_copy (st); + /* Only remove format info for the cases when we can actually convert */ + if (!gst_caps_features_is_any (f) + && gst_caps_features_is_equal (f, feature)) { + gst_structure_remove_fields (st, "format", "colorimetry", "chroma-site", + nullptr); + } + + gst_caps_append_structure_full (res, st, gst_caps_features_copy (f)); + } + gst_caps_features_free (feature); + + return res; +} + +static GstCaps * +gst_hip_base_convert_caps_rangify_size_info (GstCaps * caps) +{ + GstStructure *st; + GstCapsFeatures *f; + gint i, n; + GstCaps *res; + GstCapsFeatures *feature = + gst_caps_features_new_single_static_str + (GST_CAPS_FEATURE_MEMORY_HIP_MEMORY); + + res = gst_caps_new_empty (); + + n = gst_caps_get_size (caps); + for (i = 0; i < n; i++) { + st = gst_caps_get_structure (caps, i); + f = gst_caps_get_features (caps, i); + + /* If this is already expressed by the existing caps + * skip this structure */ + if (i > 0 && gst_caps_is_subset_structure_full (res, st, f)) + continue; + + st = gst_structure_copy (st); + /* Only remove format info for the cases when we can actually convert */ + if (!gst_caps_features_is_any (f) + && gst_caps_features_is_equal (f, feature)) { + gst_structure_set (st, "width", GST_TYPE_INT_RANGE, 1, G_MAXINT, + "height", GST_TYPE_INT_RANGE, 1, G_MAXINT, nullptr); + + /* if pixel aspect ratio, make a range of it */ + if (gst_structure_has_field (st, "pixel-aspect-ratio")) { + gst_structure_set (st, "pixel-aspect-ratio", + GST_TYPE_FRACTION_RANGE, 1, G_MAXINT, G_MAXINT, 1, nullptr); + } + } + + gst_caps_append_structure_full (res, st, gst_caps_features_copy (f)); + } + gst_caps_features_free (feature); + + return res; +} + +static GstCaps * +gst_hip_base_convert_caps_remove_format_and_rangify_size_info (GstCaps * 
caps) +{ + GstStructure *st; + GstCapsFeatures *f; + gint i, n; + GstCaps *res; + GstCapsFeatures *feature = + gst_caps_features_new_single_static_str + (GST_CAPS_FEATURE_MEMORY_HIP_MEMORY); + + res = gst_caps_new_empty (); + + n = gst_caps_get_size (caps); + for (i = 0; i < n; i++) { + st = gst_caps_get_structure (caps, i); + f = gst_caps_get_features (caps, i); + + /* If this is already expressed by the existing caps + * skip this structure */ + if (i > 0 && gst_caps_is_subset_structure_full (res, st, f)) + continue; + + st = gst_structure_copy (st); + /* Only remove format info for the cases when we can actually convert */ + if (!gst_caps_features_is_any (f) + && gst_caps_features_is_equal (f, feature)) { + gst_structure_set (st, "width", GST_TYPE_INT_RANGE, 1, G_MAXINT, + "height", GST_TYPE_INT_RANGE, 1, G_MAXINT, nullptr); + /* if pixel aspect ratio, make a range of it */ + if (gst_structure_has_field (st, "pixel-aspect-ratio")) { + gst_structure_set (st, "pixel-aspect-ratio", + GST_TYPE_FRACTION_RANGE, 1, G_MAXINT, G_MAXINT, 1, nullptr); + } + gst_structure_remove_fields (st, "format", "colorimetry", "chroma-site", + nullptr); + } + + gst_caps_append_structure_full (res, st, gst_caps_features_copy (f)); + } + gst_caps_features_free (feature); + + return res; +} + +static GstCaps * +gst_hip_base_convert_transform_caps (GstBaseTransform * + trans, GstPadDirection direction, GstCaps * caps, GstCaps * filter) +{ + GstCaps *tmp, *tmp2; + GstCaps *result; + + /* Get all possible caps that we can transform to */ + tmp = gst_hip_base_convert_caps_remove_format_and_rangify_size_info (caps); + + if (filter) { + tmp2 = gst_caps_intersect_full (filter, tmp, GST_CAPS_INTERSECT_FIRST); + gst_caps_unref (tmp); + tmp = tmp2; + } + + result = tmp; + + GST_DEBUG_OBJECT (trans, "transformed %" GST_PTR_FORMAT " into %" + GST_PTR_FORMAT, caps, result); + + return result; +} + +/* + * This is an incomplete matrix of in formats and a score for the prefered output + * format. 
+ * + * out: RGB24 RGB16 ARGB AYUV YUV444 YUV422 YUV420 YUV411 YUV410 PAL GRAY + * in + * RGB24 0 2 1 2 2 3 4 5 6 7 8 + * RGB16 1 0 1 2 2 3 4 5 6 7 8 + * ARGB 2 3 0 1 4 5 6 7 8 9 10 + * AYUV 3 4 1 0 2 5 6 7 8 9 10 + * YUV444 2 4 3 1 0 5 6 7 8 9 10 + * YUV422 3 5 4 2 1 0 6 7 8 9 10 + * YUV420 4 6 5 3 2 1 0 7 8 9 10 + * YUV411 4 6 5 3 2 1 7 0 8 9 10 + * YUV410 6 8 7 5 4 3 2 1 0 9 10 + * PAL 1 3 2 6 4 6 7 8 9 0 10 + * GRAY 1 4 3 2 1 5 6 7 8 9 0 + * + * PAL or GRAY are never prefered, if we can we would convert to PAL instead + * of GRAY, though + * less subsampling is prefered and if any, preferably horizontal + * We would like to keep the alpha, even if we would need to to colorspace conversion + * or lose depth. + */ +#define SCORE_FORMAT_CHANGE 1 +#define SCORE_DEPTH_CHANGE 1 +#define SCORE_ALPHA_CHANGE 1 +#define SCORE_CHROMA_W_CHANGE 1 +#define SCORE_CHROMA_H_CHANGE 1 +#define SCORE_PALETTE_CHANGE 1 + +#define SCORE_COLORSPACE_LOSS 2 /* RGB <-> YUV */ +#define SCORE_DEPTH_LOSS 4 /* change bit depth */ +#define SCORE_ALPHA_LOSS 8 /* lose the alpha channel */ +#define SCORE_CHROMA_W_LOSS 16 /* vertical subsample */ +#define SCORE_CHROMA_H_LOSS 32 /* horizontal subsample */ +#define SCORE_PALETTE_LOSS 64 /* convert to palette format */ +#define SCORE_COLOR_LOSS 128 /* convert to GRAY */ + +#define COLORSPACE_MASK (GST_VIDEO_FORMAT_FLAG_YUV | \ + GST_VIDEO_FORMAT_FLAG_RGB | GST_VIDEO_FORMAT_FLAG_GRAY) +#define ALPHA_MASK (GST_VIDEO_FORMAT_FLAG_ALPHA) +#define PALETTE_MASK (GST_VIDEO_FORMAT_FLAG_PALETTE) + +/* calculate how much loss a conversion would be */ +static void +score_value (GstBaseTransform * base, const GstVideoFormatInfo * in_info, + const GValue * val, gint * min_loss, const GstVideoFormatInfo ** out_info) +{ + const gchar *fname; + const GstVideoFormatInfo *t_info; + guint in_flags, t_flags; + gint loss; + + fname = g_value_get_string (val); + t_info = gst_video_format_get_info (gst_video_format_from_string (fname)); + if (!t_info || t_info->format == 
GST_VIDEO_FORMAT_UNKNOWN) + return; + + /* accept input format immediately without loss */ + if (in_info == t_info) { + *min_loss = 0; + *out_info = t_info; + return; + } + + loss = SCORE_FORMAT_CHANGE; + + in_flags = GST_VIDEO_FORMAT_INFO_FLAGS (in_info); + in_flags &= ~GST_VIDEO_FORMAT_FLAG_LE; + in_flags &= ~GST_VIDEO_FORMAT_FLAG_COMPLEX; + in_flags &= ~GST_VIDEO_FORMAT_FLAG_UNPACK; + + t_flags = GST_VIDEO_FORMAT_INFO_FLAGS (t_info); + t_flags &= ~GST_VIDEO_FORMAT_FLAG_LE; + t_flags &= ~GST_VIDEO_FORMAT_FLAG_COMPLEX; + t_flags &= ~GST_VIDEO_FORMAT_FLAG_UNPACK; + + if ((t_flags & PALETTE_MASK) != (in_flags & PALETTE_MASK)) { + loss += SCORE_PALETTE_CHANGE; + if (t_flags & PALETTE_MASK) + loss += SCORE_PALETTE_LOSS; + } + + if ((t_flags & COLORSPACE_MASK) != (in_flags & COLORSPACE_MASK)) { + loss += SCORE_COLORSPACE_LOSS; + if (t_flags & GST_VIDEO_FORMAT_FLAG_GRAY) + loss += SCORE_COLOR_LOSS; + } + + if ((t_flags & ALPHA_MASK) != (in_flags & ALPHA_MASK)) { + loss += SCORE_ALPHA_CHANGE; + if (in_flags & ALPHA_MASK) + loss += SCORE_ALPHA_LOSS; + } + + if ((in_info->h_sub[1]) != (t_info->h_sub[1])) { + loss += SCORE_CHROMA_H_CHANGE; + if ((in_info->h_sub[1]) < (t_info->h_sub[1])) + loss += SCORE_CHROMA_H_LOSS; + } + if ((in_info->w_sub[1]) != (t_info->w_sub[1])) { + loss += SCORE_CHROMA_W_CHANGE; + if ((in_info->w_sub[1]) < (t_info->w_sub[1])) + loss += SCORE_CHROMA_W_LOSS; + } + + if ((in_info->bits) != (t_info->bits)) { + loss += SCORE_DEPTH_CHANGE; + if ((in_info->bits) > (t_info->bits)) + loss += SCORE_DEPTH_LOSS + (in_info->bits - t_info->bits); + } + + GST_DEBUG_OBJECT (base, "score %s -> %s = %d", + GST_VIDEO_FORMAT_INFO_NAME (in_info), + GST_VIDEO_FORMAT_INFO_NAME (t_info), loss); + + if (loss < *min_loss) { + GST_DEBUG_OBJECT (base, "found new best %d", loss); + *out_info = t_info; + *min_loss = loss; + } +} + +static void +gst_hip_base_convert_fixate_format (GstBaseTransform * trans, + GstCaps * caps, GstCaps * result) +{ + GstStructure *ins, *outs; + const gchar 
*in_format; + const GstVideoFormatInfo *in_info, *out_info = nullptr; + gint min_loss = G_MAXINT; + guint i, capslen; + + ins = gst_caps_get_structure (caps, 0); + in_format = gst_structure_get_string (ins, "format"); + if (!in_format) { + return; + } + + GST_DEBUG_OBJECT (trans, "source format %s", in_format); + + in_info = + gst_video_format_get_info (gst_video_format_from_string (in_format)); + if (!in_info) + return; + + outs = gst_caps_get_structure (result, 0); + + capslen = gst_caps_get_size (result); + GST_DEBUG ("iterate %d structures", capslen); + for (i = 0; i < capslen; i++) { + GstStructure *tests; + const GValue *format; + + tests = gst_caps_get_structure (result, i); + format = gst_structure_get_value (tests, "format"); + + /* should not happen */ + if (format == nullptr) + continue; + + if (GST_VALUE_HOLDS_LIST (format)) { + gint j, len; + + len = gst_value_list_get_size (format); + GST_DEBUG_OBJECT (trans, "have %d formats", len); + for (j = 0; j < len; j++) { + const GValue *val; + + val = gst_value_list_get_value (format, j); + if (G_VALUE_HOLDS_STRING (val)) { + score_value (trans, in_info, val, &min_loss, &out_info); + if (min_loss == 0) + break; + } + } + } else if (G_VALUE_HOLDS_STRING (format)) { + score_value (trans, in_info, format, &min_loss, &out_info); + } + } + if (out_info) + gst_structure_set (outs, "format", G_TYPE_STRING, + GST_VIDEO_FORMAT_INFO_NAME (out_info), nullptr); +} + +static gboolean +subsampling_unchanged (GstVideoInfo * in_info, GstVideoInfo * out_info) +{ + guint i; + const GstVideoFormatInfo *in_format, *out_format; + + if (GST_VIDEO_INFO_N_COMPONENTS (in_info) != + GST_VIDEO_INFO_N_COMPONENTS (out_info)) + return FALSE; + + in_format = in_info->finfo; + out_format = out_info->finfo; + + for (i = 0; i < GST_VIDEO_INFO_N_COMPONENTS (in_info); i++) { + if (GST_VIDEO_FORMAT_INFO_W_SUB (in_format, + i) != GST_VIDEO_FORMAT_INFO_W_SUB (out_format, i)) + return FALSE; + if (GST_VIDEO_FORMAT_INFO_H_SUB (in_format, + i) != 
GST_VIDEO_FORMAT_INFO_H_SUB (out_format, i)) + return FALSE; + } + + return TRUE; +} + +static void +transfer_colorimetry_from_input (GstBaseTransform * trans, GstCaps * in_caps, + GstCaps * out_caps) +{ + GstStructure *out_caps_s = gst_caps_get_structure (out_caps, 0); + GstStructure *in_caps_s = gst_caps_get_structure (in_caps, 0); + gboolean have_colorimetry = + gst_structure_has_field (out_caps_s, "colorimetry"); + gboolean have_chroma_site = + gst_structure_has_field (out_caps_s, "chroma-site"); + + /* If the output already has colorimetry and chroma-site, stop, + * otherwise try and transfer what we can from the input caps */ + if (have_colorimetry && have_chroma_site) + return; + + { + GstVideoInfo in_info, out_info; + const GValue *in_colorimetry = + gst_structure_get_value (in_caps_s, "colorimetry"); + + if (!gst_video_info_from_caps (&in_info, in_caps)) { + GST_WARNING_OBJECT (trans, + "Failed to convert sink pad caps to video info"); + return; + } + if (!gst_video_info_from_caps (&out_info, out_caps)) { + GST_WARNING_OBJECT (trans, + "Failed to convert src pad caps to video info"); + return; + } + + if (!have_colorimetry && in_colorimetry != nullptr) { + if ((GST_VIDEO_INFO_IS_YUV (&out_info) + && GST_VIDEO_INFO_IS_YUV (&in_info)) + || (GST_VIDEO_INFO_IS_RGB (&out_info) + && GST_VIDEO_INFO_IS_RGB (&in_info)) + || (GST_VIDEO_INFO_IS_GRAY (&out_info) + && GST_VIDEO_INFO_IS_GRAY (&in_info))) { + /* Can transfer the colorimetry intact from the input if it has it */ + gst_structure_set_value (out_caps_s, "colorimetry", in_colorimetry); + } else { + gchar *colorimetry_str; + + /* Changing between YUV/RGB - forward primaries and transfer function, but use + * default range and matrix. + * the primaries are used for conversion between RGB and XYZ (CIE 1931 coordinates). 
+ * the transfer function could be another reference (e.g., HDR) + */ + out_info.colorimetry.primaries = in_info.colorimetry.primaries; + out_info.colorimetry.transfer = in_info.colorimetry.transfer; + + colorimetry_str = + gst_video_colorimetry_to_string (&out_info.colorimetry); + gst_caps_set_simple (out_caps, "colorimetry", G_TYPE_STRING, + colorimetry_str, nullptr); + g_free (colorimetry_str); + } + } + + /* Only YUV output needs chroma-site. If the input was also YUV and had the same chroma + * subsampling, transfer the siting. If the sub-sampling is changing, then the planes get + * scaled anyway so there's no real reason to prefer the input siting. */ + if (!have_chroma_site && GST_VIDEO_INFO_IS_YUV (&out_info)) { + if (GST_VIDEO_INFO_IS_YUV (&in_info)) { + const GValue *in_chroma_site = + gst_structure_get_value (in_caps_s, "chroma-site"); + if (in_chroma_site != nullptr + && subsampling_unchanged (&in_info, &out_info)) + gst_structure_set_value (out_caps_s, "chroma-site", in_chroma_site); + } + } + } +} + +static GstCaps * +gst_hip_base_convert_get_fixed_format (GstBaseTransform * trans, + GstPadDirection direction, GstCaps * caps, GstCaps * othercaps) +{ + GstCaps *result; + + result = gst_caps_intersect (othercaps, caps); + if (gst_caps_is_empty (result)) { + gst_caps_unref (result); + result = gst_caps_copy (othercaps); + } + + gst_hip_base_convert_fixate_format (trans, caps, result); + + /* fixate remaining fields */ + result = gst_caps_fixate (result); + + if (direction == GST_PAD_SINK) { + if (gst_caps_is_subset (caps, result)) { + gst_caps_replace (&result, caps); + } else { + /* Try and preserve input colorimetry / chroma information */ + transfer_colorimetry_from_input (trans, caps, result); + } + } + + return result; +} + +static GstCaps * +gst_hip_base_convert_fixate_size (GstBaseTransform * base, + GstPadDirection direction, GstCaps * caps, GstCaps * othercaps) +{ + auto self = GST_HIP_BASE_CONVERT (base); + auto priv = self->priv; + 
GstStructure *ins, *outs; + const GValue *from_par, *to_par; + GValue fpar = G_VALUE_INIT, tpar = G_VALUE_INIT; + gboolean rotate = FALSE; + + othercaps = gst_caps_truncate (othercaps); + othercaps = gst_caps_make_writable (othercaps); + ins = gst_caps_get_structure (caps, 0); + outs = gst_caps_get_structure (othercaps, 0); + + from_par = gst_structure_get_value (ins, "pixel-aspect-ratio"); + to_par = gst_structure_get_value (outs, "pixel-aspect-ratio"); + + std::lock_guard < std::mutex > lk (priv->lock); + switch (priv->selected_method) { + case GST_VIDEO_ORIENTATION_90R: + case GST_VIDEO_ORIENTATION_90L: + case GST_VIDEO_ORIENTATION_UL_LR: + case GST_VIDEO_ORIENTATION_UR_LL: + rotate = TRUE; + break; + default: + rotate = FALSE; + break; + } + + if (direction == GST_PAD_SINK) { + if (!from_par) { + g_value_init (&fpar, GST_TYPE_FRACTION); + gst_value_set_fraction (&fpar, 1, 1); + from_par = &fpar; + } + if (!to_par) { + g_value_init (&tpar, GST_TYPE_FRACTION_RANGE); + gst_value_set_fraction_range_full (&tpar, 1, G_MAXINT, G_MAXINT, 1); + to_par = &tpar; + } + } else { + gint from_par_n, from_par_d; + + if (!from_par) { + g_value_init (&fpar, GST_TYPE_FRACTION); + gst_value_set_fraction (&fpar, 1, 1); + from_par = &fpar; + + from_par_n = from_par_d = 1; + } else { + from_par_n = gst_value_get_fraction_numerator (from_par); + from_par_d = gst_value_get_fraction_denominator (from_par); + } + + if (!to_par) { + gint to_par_n, to_par_d; + + if (rotate) { + /* dimensions are transposed by the rotation, so transpose the PAR too */ + to_par_n = from_par_d; + to_par_d = from_par_n; + } else { + to_par_n = from_par_n; + to_par_d = from_par_d; + } + + g_value_init (&tpar, GST_TYPE_FRACTION); + gst_value_set_fraction (&tpar, to_par_n, to_par_d); + to_par = &tpar; + + gst_structure_set (outs, "pixel-aspect-ratio", GST_TYPE_FRACTION, + to_par_n, to_par_d, nullptr); + } + } + + /* we have both PAR but they might not be fixated */ + { + gint from_w, from_h, from_par_n, from_par_d, to_par_n, to_par_d; + gint w = 0, h = 0; + gint from_dar_n, from_dar_d; + 
gint num, den; + + /* from_par should be fixed */ + g_return_val_if_fail (gst_value_is_fixed (from_par), othercaps); + + from_par_n = gst_value_get_fraction_numerator (from_par); + from_par_d = gst_value_get_fraction_denominator (from_par); + + gst_structure_get_int (ins, "width", &from_w); + gst_structure_get_int (ins, "height", &from_h); + + gst_structure_get_int (outs, "width", &w); + gst_structure_get_int (outs, "height", &h); + + /* swap dimensions when it's rotated */ + if (rotate) { + gint _tmp = from_w; + from_w = from_h; + from_h = _tmp; + + _tmp = from_par_n; + from_par_n = from_par_d; + from_par_d = _tmp; + } + + /* if both width and height are already fixed, we can't do anything + * about it anymore */ + if (w && h) { + guint n, d; + + GST_DEBUG_OBJECT (base, "dimensions already set to %dx%d, not fixating", + w, h); + if (!gst_value_is_fixed (to_par)) { + if (gst_video_calculate_display_ratio (&n, &d, from_w, from_h, + from_par_n, from_par_d, w, h)) { + GST_DEBUG_OBJECT (base, "fixating to_par to %dx%d", n, d); + if (gst_structure_has_field (outs, "pixel-aspect-ratio")) + gst_structure_fixate_field_nearest_fraction (outs, + "pixel-aspect-ratio", n, d); + else if (n != d) + gst_structure_set (outs, "pixel-aspect-ratio", GST_TYPE_FRACTION, + n, d, nullptr); + } + } + goto done; + } + + /* Calculate input DAR */ + if (!gst_util_fraction_multiply (from_w, from_h, from_par_n, from_par_d, + &from_dar_n, &from_dar_d)) { + GST_ELEMENT_ERROR (base, CORE, NEGOTIATION, (nullptr), + ("Error calculating the output scaled size - integer overflow")); + goto done; + } + + GST_DEBUG_OBJECT (base, "Input DAR is %d/%d", from_dar_n, from_dar_d); + + /* If either width or height is fixed, there's not much we + * can do either except choosing a height or width and PAR + * that matches the DAR as well as possible + */ + if (h) { + GstStructure *tmp; + gint set_w, set_par_n, set_par_d; + + GST_DEBUG_OBJECT (base, "height is fixed (%d)", h); + + /* If the PAR is fixed too, 
there's not much to do + * except choosing the width that is nearest to the + * width with the same DAR */ + if (gst_value_is_fixed (to_par)) { + to_par_n = gst_value_get_fraction_numerator (to_par); + to_par_d = gst_value_get_fraction_denominator (to_par); + + GST_DEBUG_OBJECT (base, "PAR is fixed %d/%d", to_par_n, to_par_d); + + if (!gst_util_fraction_multiply (from_dar_n, from_dar_d, to_par_d, + to_par_n, &num, &den)) { + GST_ELEMENT_ERROR (base, CORE, NEGOTIATION, (nullptr), + ("Error calculating the output scaled size - integer overflow")); + goto done; + } + + w = (guint) gst_util_uint64_scale_int_round (h, num, den); + gst_structure_fixate_field_nearest_int (outs, "width", w); + + goto done; + } + + /* The PAR is not fixed and it's quite likely that we can set + * an arbitrary PAR. */ + + /* Check if we can keep the input width */ + tmp = gst_structure_copy (outs); + gst_structure_fixate_field_nearest_int (tmp, "width", from_w); + gst_structure_get_int (tmp, "width", &set_w); + + /* Might have failed but try to keep the DAR nonetheless by + * adjusting the PAR */ + if (!gst_util_fraction_multiply (from_dar_n, from_dar_d, h, set_w, + &to_par_n, &to_par_d)) { + GST_ELEMENT_ERROR (base, CORE, NEGOTIATION, (nullptr), + ("Error calculating the output scaled size - integer overflow")); + gst_structure_free (tmp); + goto done; + } + + if (!gst_structure_has_field (tmp, "pixel-aspect-ratio")) + gst_structure_set_value (tmp, "pixel-aspect-ratio", to_par); + gst_structure_fixate_field_nearest_fraction (tmp, "pixel-aspect-ratio", + to_par_n, to_par_d); + gst_structure_get_fraction (tmp, "pixel-aspect-ratio", &set_par_n, + &set_par_d); + gst_structure_free (tmp); + + /* Check if the adjusted PAR is accepted */ + if (set_par_n == to_par_n && set_par_d == to_par_d) { + if (gst_structure_has_field (outs, "pixel-aspect-ratio") || + set_par_n != set_par_d) + gst_structure_set (outs, "width", G_TYPE_INT, set_w, + "pixel-aspect-ratio", GST_TYPE_FRACTION, set_par_n, set_par_d, 
+ nullptr); + goto done; + } + + /* Otherwise scale the width to the new PAR and check if the + * adjusted width is accepted. If all that fails we can't keep + * the DAR */ + if (!gst_util_fraction_multiply (from_dar_n, from_dar_d, set_par_d, + set_par_n, &num, &den)) { + GST_ELEMENT_ERROR (base, CORE, NEGOTIATION, (nullptr), + ("Error calculating the output scaled size - integer overflow")); + goto done; + } + + w = (guint) gst_util_uint64_scale_int_round (h, num, den); + gst_structure_fixate_field_nearest_int (outs, "width", w); + if (gst_structure_has_field (outs, "pixel-aspect-ratio") || + set_par_n != set_par_d) + gst_structure_set (outs, "pixel-aspect-ratio", GST_TYPE_FRACTION, + set_par_n, set_par_d, nullptr); + + goto done; + } else if (w) { + GstStructure *tmp; + gint set_h, set_par_n, set_par_d; + + GST_DEBUG_OBJECT (base, "width is fixed (%d)", w); + + /* If the PAR is fixed too, there's not much to do + * except choosing the height that is nearest to the + * height with the same DAR */ + if (gst_value_is_fixed (to_par)) { + to_par_n = gst_value_get_fraction_numerator (to_par); + to_par_d = gst_value_get_fraction_denominator (to_par); + + GST_DEBUG_OBJECT (base, "PAR is fixed %d/%d", to_par_n, to_par_d); + + if (!gst_util_fraction_multiply (from_dar_n, from_dar_d, to_par_d, + to_par_n, &num, &den)) { + GST_ELEMENT_ERROR (base, CORE, NEGOTIATION, (nullptr), + ("Error calculating the output scaled size - integer overflow")); + goto done; + } + + h = (guint) gst_util_uint64_scale_int_round (w, den, num); + gst_structure_fixate_field_nearest_int (outs, "height", h); + + goto done; + } + + /* The PAR is not fixed and it's quite likely that we can set + * an arbitrary PAR. 
*/ + + /* Check if we can keep the input height */ + tmp = gst_structure_copy (outs); + gst_structure_fixate_field_nearest_int (tmp, "height", from_h); + gst_structure_get_int (tmp, "height", &set_h); + + /* Might have failed but try to keep the DAR nonetheless by + * adjusting the PAR */ + if (!gst_util_fraction_multiply (from_dar_n, from_dar_d, set_h, w, + &to_par_n, &to_par_d)) { + GST_ELEMENT_ERROR (base, CORE, NEGOTIATION, (nullptr), + ("Error calculating the output scaled size - integer overflow")); + gst_structure_free (tmp); + goto done; + } + if (!gst_structure_has_field (tmp, "pixel-aspect-ratio")) + gst_structure_set_value (tmp, "pixel-aspect-ratio", to_par); + gst_structure_fixate_field_nearest_fraction (tmp, "pixel-aspect-ratio", + to_par_n, to_par_d); + gst_structure_get_fraction (tmp, "pixel-aspect-ratio", &set_par_n, + &set_par_d); + gst_structure_free (tmp); + + /* Check if the adjusted PAR is accepted */ + if (set_par_n == to_par_n && set_par_d == to_par_d) { + if (gst_structure_has_field (outs, "pixel-aspect-ratio") || + set_par_n != set_par_d) + gst_structure_set (outs, "height", G_TYPE_INT, set_h, + "pixel-aspect-ratio", GST_TYPE_FRACTION, set_par_n, set_par_d, + nullptr); + goto done; + } + + /* Otherwise scale the height to the new PAR and check if the + * adjusted width is accepted. 
If all that fails we can't keep + * the DAR */ + if (!gst_util_fraction_multiply (from_dar_n, from_dar_d, set_par_d, + set_par_n, &num, &den)) { + GST_ELEMENT_ERROR (base, CORE, NEGOTIATION, (nullptr), + ("Error calculating the output scaled size - integer overflow")); + goto done; + } + + h = (guint) gst_util_uint64_scale_int_round (w, den, num); + gst_structure_fixate_field_nearest_int (outs, "height", h); + if (gst_structure_has_field (outs, "pixel-aspect-ratio") || + set_par_n != set_par_d) + gst_structure_set (outs, "pixel-aspect-ratio", GST_TYPE_FRACTION, + set_par_n, set_par_d, nullptr); + + goto done; + } else if (gst_value_is_fixed (to_par)) { + GstStructure *tmp; + gint set_h, set_w, f_h, f_w; + + to_par_n = gst_value_get_fraction_numerator (to_par); + to_par_d = gst_value_get_fraction_denominator (to_par); + + /* Calculate scale factor for the PAR change */ + if (!gst_util_fraction_multiply (from_dar_n, from_dar_d, to_par_n, + to_par_d, &num, &den)) { + GST_ELEMENT_ERROR (base, CORE, NEGOTIATION, (nullptr), + ("Error calculating the output scaled size - integer overflow")); + goto done; + } + + /* Try to keep the input height (because of interlacing) */ + tmp = gst_structure_copy (outs); + gst_structure_fixate_field_nearest_int (tmp, "height", from_h); + gst_structure_get_int (tmp, "height", &set_h); + + /* This might have failed but try to scale the width + * to keep the DAR nonetheless */ + w = (guint) gst_util_uint64_scale_int_round (set_h, num, den); + gst_structure_fixate_field_nearest_int (tmp, "width", w); + gst_structure_get_int (tmp, "width", &set_w); + gst_structure_free (tmp); + + /* We kept the DAR and the height is nearest to the original height */ + if (set_w == w) { + gst_structure_set (outs, "width", G_TYPE_INT, set_w, "height", + G_TYPE_INT, set_h, nullptr); + goto done; + } + + f_h = set_h; + f_w = set_w; + + /* If the former failed, try to keep the input width at least */ + tmp = gst_structure_copy (outs); + 
gst_structure_fixate_field_nearest_int (tmp, "width", from_w); + gst_structure_get_int (tmp, "width", &set_w); + + /* This might have failed but try to scale the height + * to keep the DAR nonetheless */ + h = (guint) gst_util_uint64_scale_int_round (set_w, den, num); + gst_structure_fixate_field_nearest_int (tmp, "height", h); + gst_structure_get_int (tmp, "height", &set_h); + gst_structure_free (tmp); + + /* We kept the DAR and the width is nearest to the original width */ + if (set_h == h) { + gst_structure_set (outs, "width", G_TYPE_INT, set_w, "height", + G_TYPE_INT, set_h, nullptr); + goto done; + } + + /* If all this failed, keep the dimensions with the DAR that was closest + * to the correct DAR. This changes the DAR but there's not much else to + * do here. + */ + if (set_w * ABS (set_h - h) < ABS (f_w - w) * f_h) { + f_h = set_h; + f_w = set_w; + } + gst_structure_set (outs, "width", G_TYPE_INT, f_w, "height", G_TYPE_INT, + f_h, nullptr); + goto done; + } else { + GstStructure *tmp; + gint set_h, set_w, set_par_n, set_par_d, tmp2; + + /* width, height and PAR are not fixed but passthrough is not possible */ + + /* First try to keep the height and width as close as possible + * and scale PAR */ + tmp = gst_structure_copy (outs); + gst_structure_fixate_field_nearest_int (tmp, "height", from_h); + gst_structure_get_int (tmp, "height", &set_h); + gst_structure_fixate_field_nearest_int (tmp, "width", from_w); + gst_structure_get_int (tmp, "width", &set_w); + + if (!gst_util_fraction_multiply (from_dar_n, from_dar_d, set_h, set_w, + &to_par_n, &to_par_d)) { + GST_ELEMENT_ERROR (base, CORE, NEGOTIATION, (nullptr), + ("Error calculating the output scaled size - integer overflow")); + gst_structure_free (tmp); + goto done; + } + + if (!gst_structure_has_field (tmp, "pixel-aspect-ratio")) + gst_structure_set_value (tmp, "pixel-aspect-ratio", to_par); + gst_structure_fixate_field_nearest_fraction (tmp, "pixel-aspect-ratio", + to_par_n, to_par_d); + 
gst_structure_get_fraction (tmp, "pixel-aspect-ratio", &set_par_n, + &set_par_d); + gst_structure_free (tmp); + + if (set_par_n == to_par_n && set_par_d == to_par_d) { + gst_structure_set (outs, "width", G_TYPE_INT, set_w, "height", + G_TYPE_INT, set_h, nullptr); + + if (gst_structure_has_field (outs, "pixel-aspect-ratio") || + set_par_n != set_par_d) + gst_structure_set (outs, "pixel-aspect-ratio", GST_TYPE_FRACTION, + set_par_n, set_par_d, nullptr); + goto done; + } + + /* Otherwise try to scale width to keep the DAR with the set + * PAR and height */ + if (!gst_util_fraction_multiply (from_dar_n, from_dar_d, set_par_d, + set_par_n, &num, &den)) { + GST_ELEMENT_ERROR (base, CORE, NEGOTIATION, (nullptr), + ("Error calculating the output scaled size - integer overflow")); + goto done; + } + + w = (guint) gst_util_uint64_scale_int_round (set_h, num, den); + tmp = gst_structure_copy (outs); + gst_structure_fixate_field_nearest_int (tmp, "width", w); + gst_structure_get_int (tmp, "width", &tmp2); + gst_structure_free (tmp); + + if (tmp2 == w) { + gst_structure_set (outs, "width", G_TYPE_INT, tmp2, "height", + G_TYPE_INT, set_h, nullptr); + if (gst_structure_has_field (outs, "pixel-aspect-ratio") || + set_par_n != set_par_d) + gst_structure_set (outs, "pixel-aspect-ratio", GST_TYPE_FRACTION, + set_par_n, set_par_d, nullptr); + goto done; + } + + /* ... 
or try the same with the height */ + h = (guint) gst_util_uint64_scale_int_round (set_w, den, num); + tmp = gst_structure_copy (outs); + gst_structure_fixate_field_nearest_int (tmp, "height", h); + gst_structure_get_int (tmp, "height", &tmp2); + gst_structure_free (tmp); + + if (tmp2 == h) { + gst_structure_set (outs, "width", G_TYPE_INT, set_w, "height", + G_TYPE_INT, tmp2, nullptr); + if (gst_structure_has_field (outs, "pixel-aspect-ratio") || + set_par_n != set_par_d) + gst_structure_set (outs, "pixel-aspect-ratio", GST_TYPE_FRACTION, + set_par_n, set_par_d, nullptr); + goto done; + } + + /* If all fails we can't keep the DAR and take the nearest values + * for everything from the first try */ + gst_structure_set (outs, "width", G_TYPE_INT, set_w, "height", + G_TYPE_INT, set_h, nullptr); + if (gst_structure_has_field (outs, "pixel-aspect-ratio") || + set_par_n != set_par_d) + gst_structure_set (outs, "pixel-aspect-ratio", GST_TYPE_FRACTION, + set_par_n, set_par_d, nullptr); + } + } + +done: + if (from_par == &fpar) + g_value_unset (&fpar); + if (to_par == &tpar) + g_value_unset (&tpar); + + return othercaps; +} + +static GstCaps * +gst_hip_base_convert_fixate_caps (GstBaseTransform * trans, + GstPadDirection direction, GstCaps * caps, GstCaps * othercaps) +{ + GST_DEBUG_OBJECT (trans, + "trying to fixate othercaps %" GST_PTR_FORMAT " based on caps %" + GST_PTR_FORMAT, othercaps, caps); + + auto format = gst_hip_base_convert_get_fixed_format (trans, direction, caps, + othercaps); + + if (gst_caps_is_empty (format)) { + GST_ERROR_OBJECT (trans, "Could not convert formats"); + return format; + } + + /* convert mode is "all" or "size" here */ + othercaps = + gst_hip_base_convert_fixate_size (trans, direction, caps, othercaps); + + if (gst_caps_get_size (othercaps) == 1) { + guint i; + const gchar *format_fields[] = { "format", "colorimetry", "chroma-site" }; + GstStructure *format_struct = gst_caps_get_structure (format, 0); + GstStructure *fixated_struct; + + 
othercaps = gst_caps_make_writable (othercaps); + fixated_struct = gst_caps_get_structure (othercaps, 0); + + for (i = 0; i < G_N_ELEMENTS (format_fields); i++) { + if (gst_structure_has_field (format_struct, format_fields[i])) { + gst_structure_set (fixated_struct, format_fields[i], G_TYPE_STRING, + gst_structure_get_string (format_struct, format_fields[i]), + nullptr); + } else { + gst_structure_remove_field (fixated_struct, format_fields[i]); + } + } + } + gst_caps_unref (format); + + GST_DEBUG_OBJECT (trans, "fixated othercaps to %" GST_PTR_FORMAT, othercaps); + + return othercaps; +} + +static gboolean +gst_hip_base_convert_propose_allocation (GstBaseTransform * trans, + GstQuery * decide_query, GstQuery * query) +{ + auto filter = GST_HIP_BASE_FILTER (trans); + auto self = GST_HIP_BASE_CONVERT (trans); + GstVideoInfo info; + GstBufferPool *pool; + GstCaps *caps; + guint size; + + if (!GST_BASE_TRANSFORM_CLASS (parent_class)->propose_allocation (trans, + decide_query, query)) + return FALSE; + + /* passthrough, we're done */ + if (decide_query == nullptr) + return TRUE; + + gst_query_parse_allocation (query, &caps, nullptr); + + if (caps == nullptr) + return FALSE; + + if (!gst_video_info_from_caps (&info, caps)) + return FALSE; + + if (gst_query_get_n_allocation_pools (query) == 0) { + GstStructure *config; + + pool = gst_hip_buffer_pool_new (filter->device); + config = gst_buffer_pool_get_config (pool); + + gst_buffer_pool_config_add_option (config, + GST_BUFFER_POOL_OPTION_VIDEO_META); + + size = GST_VIDEO_INFO_SIZE (&info); + gst_buffer_pool_config_set_params (config, caps, size, 0, 0); + + if (!gst_buffer_pool_set_config (pool, config)) { + GST_ERROR_OBJECT (self, "failed to set config"); + gst_object_unref (pool); + return FALSE; + } + + /* Get updated size by hip buffer pool */ + config = gst_buffer_pool_get_config (pool); + gst_buffer_pool_config_get_params (config, nullptr, &size, nullptr, + nullptr); + gst_structure_free (config); + + 
gst_query_add_allocation_pool (query, pool, size, 0, 0); + + gst_object_unref (pool); + } + + gst_query_add_allocation_meta (query, GST_VIDEO_META_API_TYPE, nullptr); + + return TRUE; +} + +static gboolean +gst_hip_base_convert_decide_allocation (GstBaseTransform * trans, + GstQuery * query) +{ + auto self = GST_HIP_BASE_CONVERT (trans); + auto filter = GST_HIP_BASE_FILTER (trans); + GstCaps *outcaps = nullptr; + GstBufferPool *pool = nullptr; + guint size, min, max; + GstStructure *config; + gboolean update_pool = FALSE; + + gst_query_parse_allocation (query, &outcaps, nullptr); + + if (!outcaps) + return FALSE; + + if (gst_query_get_n_allocation_pools (query) > 0) { + gst_query_parse_nth_allocation_pool (query, 0, &pool, &size, &min, &max); + if (pool) { + if (!GST_IS_HIP_BUFFER_POOL (pool)) { + gst_clear_object (&pool); + } else { + auto hpool = GST_HIP_BUFFER_POOL (pool); + if (!gst_hip_device_is_equal (filter->device, hpool->device)) + gst_clear_object (&pool); + } + } + + update_pool = TRUE; + } else { + GstVideoInfo vinfo; + gst_video_info_from_caps (&vinfo, outcaps); + size = GST_VIDEO_INFO_SIZE (&vinfo); + min = max = 0; + } + + if (!pool) { + GST_DEBUG_OBJECT (self, "create our pool"); + + pool = gst_hip_buffer_pool_new (filter->device); + } + + config = gst_buffer_pool_get_config (pool); + gst_buffer_pool_config_add_option (config, GST_BUFFER_POOL_OPTION_VIDEO_META); + gst_buffer_pool_config_set_params (config, outcaps, size, min, max); + gst_buffer_pool_set_config (pool, config); + + /* Get updated size by hip buffer pool */ + config = gst_buffer_pool_get_config (pool); + gst_buffer_pool_config_get_params (config, nullptr, &size, nullptr, nullptr); + gst_structure_free (config); + + if (update_pool) + gst_query_set_nth_allocation_pool (query, 0, pool, size, min, max); + else + gst_query_add_allocation_pool (query, pool, size, min, max); + + gst_object_unref (pool); + + return GST_BASE_TRANSFORM_CLASS (parent_class)->decide_allocation (trans, + query); 
+} + +static gboolean +needs_color_convert (const GstVideoInfo * in_info, + const GstVideoInfo * out_info) +{ + const GstVideoColorimetry *in_cinfo = &in_info->colorimetry; + const GstVideoColorimetry *out_cinfo = &out_info->colorimetry; + + if (in_cinfo->range != out_cinfo->range || + in_cinfo->matrix != out_cinfo->matrix) { + return TRUE; + } + + if (!gst_video_color_primaries_is_equivalent (in_cinfo->primaries, + out_cinfo->primaries)) { + return TRUE; + } + + if (!gst_video_transfer_function_is_equivalent (in_cinfo->transfer, + GST_VIDEO_INFO_COMP_DEPTH (in_info, 0), out_cinfo->transfer, + GST_VIDEO_INFO_COMP_DEPTH (out_info, 0))) { + return TRUE; + } + + return FALSE; +} + +static gboolean +gst_hip_base_convert_set_info (GstHipBaseFilter * filter, + GstCaps * incaps, GstVideoInfo * in_info, GstCaps * outcaps, + GstVideoInfo * out_info) +{ + auto self = GST_HIP_BASE_CONVERT (filter); + auto priv = self->priv; + gint from_dar_n, from_dar_d, to_dar_n, to_dar_d; + gboolean need_flip = FALSE; + gint in_width, in_height, in_par_n, in_par_d; + GstVideoOrientationMethod active_method; + + gst_clear_object (&priv->conv); + + std::lock_guard < std::mutex > lk (priv->lock); + active_method = priv->active_method = priv->selected_method; + + if (active_method != GST_VIDEO_ORIENTATION_IDENTITY) + need_flip = TRUE; + + switch (active_method) { + case GST_VIDEO_ORIENTATION_90R: + case GST_VIDEO_ORIENTATION_90L: + case GST_VIDEO_ORIENTATION_UL_LR: + case GST_VIDEO_ORIENTATION_UR_LL: + in_width = in_info->height; + in_height = in_info->width; + in_par_n = in_info->par_d; + in_par_d = in_info->par_n; + break; + default: + in_width = in_info->width; + in_height = in_info->height; + in_par_n = in_info->par_n; + in_par_d = in_info->par_d; + break; + } + + if (!gst_util_fraction_multiply (in_width, + in_height, in_par_n, in_par_d, &from_dar_n, &from_dar_d)) { + from_dar_n = from_dar_d = -1; + } + + if (!gst_util_fraction_multiply (out_info->width, + out_info->height, 
out_info->par_n, out_info->par_d, &to_dar_n, + &to_dar_d)) { + to_dar_n = to_dar_d = -1; + } + + priv->borders_w = priv->borders_h = 0; + if (to_dar_n != from_dar_n || to_dar_d != from_dar_d) { + if (priv->add_borders) { + gint n, d, to_h, to_w; + + if (from_dar_n != -1 && from_dar_d != -1 + && gst_util_fraction_multiply (from_dar_n, from_dar_d, + out_info->par_d, out_info->par_n, &n, &d)) { + to_h = gst_util_uint64_scale_int (out_info->width, d, n); + if (to_h <= out_info->height) { + priv->borders_h = out_info->height - to_h; + priv->borders_w = 0; + } else { + to_w = gst_util_uint64_scale_int (out_info->height, n, d); + g_assert (to_w <= out_info->width); + priv->borders_h = 0; + priv->borders_w = out_info->width - to_w; + } + } else { + GST_WARNING_OBJECT (self, "Can't calculate borders"); + } + } else { + GST_DEBUG_OBJECT (self, "Can't keep DAR!"); + } + } + + /* if present, these must match */ + if (in_info->interlace_mode != out_info->interlace_mode) { + GST_ERROR_OBJECT (self, "input and output formats do not match"); + return FALSE; + } + + if (in_width == out_info->width && in_height == out_info->height + && in_info->finfo == out_info->finfo && priv->borders_w == 0 && + priv->borders_h == 0 && !need_flip && + !needs_color_convert (in_info, out_info)) { + gst_base_transform_set_passthrough (GST_BASE_TRANSFORM (self), TRUE); + } else { + gst_base_transform_set_passthrough (GST_BASE_TRANSFORM (self), FALSE); + + priv->conv = gst_hip_converter_new (filter->device, in_info, + out_info, nullptr); + if (!priv->conv) { + GST_ERROR_OBJECT (self, "Couldn't create converter"); + return FALSE; + } + + g_object_set (priv->conv, "dest-x", priv->borders_w / 2, + "dest-y", priv->borders_h / 2, + "dest-width", out_info->width - priv->borders_w, + "dest-height", out_info->height - priv->borders_h, + "fill-border", TRUE, "video-direction", active_method, nullptr); + } + + GST_DEBUG_OBJECT (self, "%s from=%dx%d (par=%d/%d dar=%d/%d), size %" + G_GSIZE_FORMAT " -> %s to=%dx%d 
(par=%d/%d dar=%d/%d borders=%d:%d), " + "size %" G_GSIZE_FORMAT, + gst_video_format_to_string (GST_VIDEO_INFO_FORMAT (in_info)), + in_info->width, in_info->height, in_info->par_n, in_info->par_d, + from_dar_n, from_dar_d, in_info->size, + gst_video_format_to_string (GST_VIDEO_INFO_FORMAT (out_info)), + out_info->width, + out_info->height, out_info->par_n, out_info->par_d, to_dar_n, to_dar_d, + priv->borders_w, priv->borders_h, out_info->size); + + return TRUE; +} + +static gboolean +gst_hip_base_convert_filter_meta (GstBaseTransform * trans, GstQuery * query, + GType api, const GstStructure * params) +{ + /* This element cannot passthrough the crop meta, because it would convert the + * wrong sub-region of the image, and worst, our output image may not be large + * enough for the crop to be applied later */ + if (api == GST_VIDEO_CROP_META_API_TYPE) + return FALSE; + + /* propose all other metadata upstream */ + return TRUE; +} + +static GstFlowReturn +gst_hip_base_convert_transform (GstBaseTransform * trans, + GstBuffer * inbuf, GstBuffer * outbuf) +{ + auto self = GST_HIP_BASE_CONVERT (trans); + auto priv = self->priv; + + if (!gst_hip_converter_convert_frame (priv->conv, inbuf, outbuf)) { + GST_ERROR_OBJECT (self, "Failed to convert frame"); + return GST_FLOW_ERROR; + } + + return GST_FLOW_OK; +} + +static void +gst_hip_base_convert_set_add_border (GstHipBaseConvert * self, + gboolean add_border) +{ + auto priv = self->priv; + + std::lock_guard < std::mutex > lk (priv->lock); + gboolean prev = priv->add_borders; + + priv->add_borders = add_border; + if (prev != priv->add_borders) + gst_base_transform_reconfigure_src (GST_BASE_TRANSFORM_CAST (self)); +} + +static void +gst_hip_base_convert_set_orientation (GstHipBaseConvert * self, + GstVideoOrientationMethod method, gboolean from_tag) +{ + auto priv = self->priv; + + if (method == GST_VIDEO_ORIENTATION_CUSTOM) { + GST_WARNING_OBJECT (self, "Unsupported custom orientation"); + return; + } + + std::lock_guard < 
std::mutex > lk (priv->lock); + if (from_tag) + priv->tag_method = method; + else + priv->method = method; + + if (priv->method == GST_VIDEO_ORIENTATION_AUTO) { + priv->selected_method = priv->tag_method; + } else { + priv->selected_method = priv->method; + } + + if (priv->selected_method != priv->active_method) { + GST_DEBUG_OBJECT (self, "Rotation orientation %d -> %d", + priv->active_method, priv->selected_method); + + gst_base_transform_reconfigure_src (GST_BASE_TRANSFORM (self)); + } +} + +enum +{ + PROP_CONVERT_SCALE_0, + PROP_CONVERT_SCALE_ADD_BORDERS, + PROP_CONVERT_SCALE_VIDEO_DIRECTION, +}; + +struct _GstHipConvertScale +{ + GstHipBaseConvert parent; +}; + +static void +gst_hip_convert_scale_video_direction_interface_init (GstVideoDirectionInterface + * iface) +{ +} + +static void gst_hip_convert_scale_set_property (GObject * object, + guint prop_id, const GValue * value, GParamSpec * pspec); +static void gst_hip_convert_scale_get_property (GObject * object, + guint prop_id, GValue * value, GParamSpec * pspec); +static gboolean gst_hip_convert_scale_sink_event (GstBaseTransform * trans, + GstEvent * event); + +#define gst_hip_convert_scale_parent_class convert_scale_parent_class +G_DEFINE_TYPE_WITH_CODE (GstHipConvertScale, gst_hip_convert_scale, + GST_TYPE_HIP_BASE_CONVERT, + G_IMPLEMENT_INTERFACE (GST_TYPE_VIDEO_DIRECTION, + gst_hip_convert_scale_video_direction_interface_init)); + +static void gst_hip_convert_scale_before_transform (GstBaseTransform * trans, + GstBuffer * buffer); + +static void +gst_hip_convert_scale_class_init (GstHipConvertScaleClass * klass) +{ + auto object_class = G_OBJECT_CLASS (klass); + auto element_class = GST_ELEMENT_CLASS (klass); + auto trans_class = GST_BASE_TRANSFORM_CLASS (klass); + + object_class->set_property = gst_hip_convert_scale_set_property; + object_class->get_property = gst_hip_convert_scale_get_property; + + g_object_class_install_property (object_class, + PROP_CONVERT_SCALE_ADD_BORDERS, + g_param_spec_boolean 
("add-borders", "Add Borders", + "Add borders if necessary to keep the display aspect ratio", + DEFAULT_ADD_BORDERS, (GParamFlags) (GST_PARAM_MUTABLE_PLAYING | + G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + g_object_class_override_property (object_class, + PROP_CONVERT_SCALE_VIDEO_DIRECTION, "video-direction"); + + gst_element_class_set_static_metadata (element_class, + "HIP colorspace converter and scaler", + "Filter/Converter/Video/Scaler/Colorspace/Effect/Hardware", + "Resizes video and allows color conversion using HIP", + "Seungha Yang <seungha@centricular.com>"); + + trans_class->passthrough_on_same_caps = FALSE; + trans_class->before_transform = + GST_DEBUG_FUNCPTR (gst_hip_convert_scale_before_transform); + trans_class->sink_event = + GST_DEBUG_FUNCPTR (gst_hip_convert_scale_sink_event); +} + +static void +gst_hip_convert_scale_init (GstHipConvertScale * self) +{ +} + +static void +gst_hip_convert_scale_set_property (GObject * object, guint prop_id, + const GValue * value, GParamSpec * pspec) +{ + auto base = GST_HIP_BASE_CONVERT (object); + + switch (prop_id) { + case PROP_CONVERT_SCALE_ADD_BORDERS: + gst_hip_base_convert_set_add_border (base, g_value_get_boolean (value)); + break; + case PROP_CONVERT_SCALE_VIDEO_DIRECTION: + gst_hip_base_convert_set_orientation (base, + (GstVideoOrientationMethod) g_value_get_enum (value), FALSE); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static void +gst_hip_convert_scale_get_property (GObject * object, guint prop_id, + GValue * value, GParamSpec * pspec) +{ + auto base = GST_HIP_BASE_CONVERT (object); + auto priv = base->priv; + + switch (prop_id) { + case PROP_CONVERT_SCALE_ADD_BORDERS: + g_value_set_boolean (value, priv->add_borders); + break; + case PROP_CONVERT_SCALE_VIDEO_DIRECTION: + g_value_set_enum (value, priv->method); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static void 
+gst_hip_convert_scale_before_transform (GstBaseTransform * trans, + GstBuffer * buffer) +{ + auto base = GST_HIP_BASE_CONVERT (trans); + auto priv = base->priv; + GstCaps *in_caps; + GstCaps *out_caps; + GstBaseTransformClass *klass; + + GST_BASE_TRANSFORM_CLASS (convert_scale_parent_class)->before_transform + (trans, buffer); + + { + std::lock_guard < std::mutex > lk (priv->lock); + if (priv->selected_method == priv->active_method) + return; + } + + /* basetransform wouldn't call set_caps if in/out caps were not changed. + * Update it manually here */ + GST_DEBUG_OBJECT (base, "Updating caps for direction change"); + + in_caps = gst_pad_get_current_caps (GST_BASE_TRANSFORM_SINK_PAD (trans)); + if (!in_caps) { + GST_WARNING_OBJECT (trans, "sinkpad has no current caps"); + return; + } + + out_caps = gst_pad_get_current_caps (GST_BASE_TRANSFORM_SRC_PAD (trans)); + if (!out_caps) { + GST_WARNING_OBJECT (trans, "srcpad has no current caps"); + gst_caps_unref (in_caps); + return; + } + + klass = GST_BASE_TRANSFORM_GET_CLASS (trans); + klass->set_caps (trans, in_caps, out_caps); + gst_caps_unref (in_caps); + gst_caps_unref (out_caps); + + gst_base_transform_reconfigure_src (trans); +} + +static gboolean +gst_hip_convert_scale_sink_event (GstBaseTransform * trans, GstEvent * event) +{ + auto base = GST_HIP_BASE_CONVERT (trans); + + switch (GST_EVENT_TYPE (event)) { + case GST_EVENT_TAG:{ + GstTagList *taglist; + GstVideoOrientationMethod method = GST_VIDEO_ORIENTATION_IDENTITY; + + gst_event_parse_tag (event, &taglist); + if (gst_video_orientation_from_tag (taglist, &method)) + gst_hip_base_convert_set_orientation (base, method, TRUE); + break; + } + default: + break; + } + + return + GST_BASE_TRANSFORM_CLASS (convert_scale_parent_class)->sink_event + (trans, event); +} + +struct _GstHipConvert +{ + GstHipBaseConvert parent; +}; + +static GstCaps *gst_hip_convert_transform_caps (GstBaseTransform * + trans, GstPadDirection direction, GstCaps * caps, GstCaps * filter); 
+static GstCaps *gst_hip_convert_fixate_caps (GstBaseTransform * base, + GstPadDirection direction, GstCaps * caps, GstCaps * othercaps); + +G_DEFINE_TYPE (GstHipConvert, gst_hip_convert, GST_TYPE_HIP_BASE_CONVERT); + +static void +gst_hip_convert_class_init (GstHipConvertClass * klass) +{ + GstElementClass *element_class = GST_ELEMENT_CLASS (klass); + GstBaseTransformClass *trans_class = GST_BASE_TRANSFORM_CLASS (klass); + + gst_element_class_set_static_metadata (element_class, + "HIP colorspace converter", + "Filter/Converter/Video/Hardware", + "Converts video from one colorspace to another using HIP", + "Seungha Yang <seungha@centricular.com>"); + + trans_class->transform_caps = + GST_DEBUG_FUNCPTR (gst_hip_convert_transform_caps); + trans_class->fixate_caps = GST_DEBUG_FUNCPTR (gst_hip_convert_fixate_caps); +} + +static void +gst_hip_convert_init (GstHipConvert * self) +{ +} + +static GstCaps * +gst_hip_convert_transform_caps (GstBaseTransform * + trans, GstPadDirection direction, GstCaps * caps, GstCaps * filter) +{ + GstCaps *tmp, *tmp2; + GstCaps *result; + + /* Get all possible caps that we can transform to */ + tmp = gst_hip_base_convert_caps_remove_format_info (caps); + + if (filter) { + tmp2 = gst_caps_intersect_full (filter, tmp, GST_CAPS_INTERSECT_FIRST); + gst_caps_unref (tmp); + tmp = tmp2; + } + + result = tmp; + + GST_DEBUG_OBJECT (trans, "transformed %" GST_PTR_FORMAT " into %" + GST_PTR_FORMAT, caps, result); + + return result; +} + +static GstCaps * +gst_hip_convert_fixate_caps (GstBaseTransform * base, + GstPadDirection direction, GstCaps * caps, GstCaps * othercaps) +{ + GstCaps *format = nullptr; + + GST_DEBUG_OBJECT (base, + "trying to fixate othercaps %" GST_PTR_FORMAT " based on caps %" + GST_PTR_FORMAT, othercaps, caps); + + format = gst_hip_base_convert_get_fixed_format (base, direction, caps, + othercaps); + gst_caps_unref (othercaps); + + if (gst_caps_is_empty (format)) { + GST_ERROR_OBJECT (base, "Could not convert formats"); + } else 
{ + GST_DEBUG_OBJECT (base, "fixated othercaps to %" GST_PTR_FORMAT, format); + } + + return format; +} + +enum +{ + PROP_SCALE_0, + PROP_SCALE_ADD_BORDERS, +}; + +struct _GstHipScale +{ + GstHipBaseConvert parent; +}; + +static void gst_hip_scale_set_property (GObject * object, guint prop_id, + const GValue * value, GParamSpec * pspec); +static void gst_hip_scale_get_property (GObject * object, guint prop_id, + GValue * value, GParamSpec * pspec); +static GstCaps *gst_hip_scale_transform_caps (GstBaseTransform * + trans, GstPadDirection direction, GstCaps * caps, GstCaps * filter); +static GstCaps *gst_hip_scale_fixate_caps (GstBaseTransform * base, + GstPadDirection direction, GstCaps * caps, GstCaps * othercaps); + +G_DEFINE_TYPE (GstHipScale, gst_hip_scale, GST_TYPE_HIP_BASE_CONVERT); + +static void +gst_hip_scale_class_init (GstHipScaleClass * klass) +{ + auto object_class = G_OBJECT_CLASS (klass); + auto element_class = GST_ELEMENT_CLASS (klass); + auto trans_class = GST_BASE_TRANSFORM_CLASS (klass); + + object_class->set_property = gst_hip_scale_set_property; + object_class->get_property = gst_hip_scale_get_property; + + g_object_class_install_property (object_class, PROP_SCALE_ADD_BORDERS, + g_param_spec_boolean ("add-borders", "Add Borders", + "Add borders if necessary to keep the display aspect ratio", + DEFAULT_ADD_BORDERS, (GParamFlags) (GST_PARAM_MUTABLE_PLAYING | + G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + gst_element_class_set_static_metadata (element_class, + "HIP video scaler", + "Filter/Converter/Video/Scaler/Hardware", + "Resize video using HIP", "Seungha Yang <seungha@centricular.com>"); + + trans_class->transform_caps = + GST_DEBUG_FUNCPTR (gst_hip_scale_transform_caps); + trans_class->fixate_caps = GST_DEBUG_FUNCPTR (gst_hip_scale_fixate_caps); +} + +static void +gst_hip_scale_init (GstHipScale * self) +{ +} + +static void +gst_hip_scale_set_property (GObject * object, guint prop_id, + const GValue * value, GParamSpec * pspec) +{ + 
auto base = GST_HIP_BASE_CONVERT (object); + + switch (prop_id) { + case PROP_SCALE_ADD_BORDERS: + gst_hip_base_convert_set_add_border (base, g_value_get_boolean (value)); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static void +gst_hip_scale_get_property (GObject * object, guint prop_id, + GValue * value, GParamSpec * pspec) +{ + auto base = GST_HIP_BASE_CONVERT (object); + auto priv = base->priv; + + switch (prop_id) { + case PROP_SCALE_ADD_BORDERS: + g_value_set_boolean (value, priv->add_borders); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static GstCaps * +gst_hip_scale_transform_caps (GstBaseTransform * trans, + GstPadDirection direction, GstCaps * caps, GstCaps * filter) +{ + GstCaps *tmp, *tmp2; + GstCaps *result; + + /* Get all possible caps that we can transform to */ + tmp = gst_hip_base_convert_caps_rangify_size_info (caps); + + if (filter) { + tmp2 = gst_caps_intersect_full (filter, tmp, GST_CAPS_INTERSECT_FIRST); + gst_caps_unref (tmp); + tmp = tmp2; + } + + result = tmp; + + GST_DEBUG_OBJECT (trans, "transformed %" GST_PTR_FORMAT " into %" + GST_PTR_FORMAT, caps, result); + + return result; +} + +static GstCaps * +gst_hip_scale_fixate_caps (GstBaseTransform * base, + GstPadDirection direction, GstCaps * caps, GstCaps * othercaps) +{ + GST_DEBUG_OBJECT (base, + "trying to fixate othercaps %" GST_PTR_FORMAT " based on caps %" + GST_PTR_FORMAT, othercaps, caps); + + othercaps = + gst_hip_base_convert_fixate_size (base, direction, caps, othercaps); + + GST_DEBUG_OBJECT (base, "fixated othercaps to %" GST_PTR_FORMAT, othercaps); + + return othercaps; +}
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/hip/gsthipconvertscale.h
Added
@@ -0,0 +1,66 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/gst.h> +#include "gsthipbasefilter.h" + +G_BEGIN_DECLS + +#define GST_TYPE_HIP_BASE_CONVERT (gst_hip_base_convert_get_type()) +#define GST_HIP_BASE_CONVERT(obj) (G_TYPE_CHECK_INSTANCE_CAST((obj),GST_TYPE_HIP_BASE_CONVERT,GstHipBaseConvert)) +#define GST_HIP_BASE_CONVERT_CLASS(klass) (G_TYPE_CHECK_CLASS_CAST((klass), GST_TYPE_HIP_BASE_CONVERT,GstHipBaseConvertClass)) +#define GST_HIP_BASE_CONVERT_GET_CLASS(obj) (G_TYPE_INSTANCE_GET_CLASS((obj), GST_TYPE_HIP_BASE_CONVERT,GstHipBaseConvertClass)) +#define GST_IS_HIP_BASE_CONVERT(obj) (G_TYPE_CHECK_INSTANCE_TYPE((obj),GST_TYPE_HIP_BASE_CONVERT)) +#define GST_IS_HIP_BASE_CONVERT_CLASS(klass) (G_TYPE_CHECK_CLASS_TYPE((klass), GST_TYPE_HIP_BASE_CONVERT)) + +typedef struct _GstHipBaseConvert GstHipBaseConvert; +typedef struct _GstHipBaseConvertClass GstHipBaseConvertClass; +typedef struct _GstHipBaseConvertPrivate GstHipBaseConvertPrivate; + +struct _GstHipBaseConvert +{ + GstHipBaseFilter parent; + + GstHipBaseConvertPrivate *priv; +}; + +struct _GstHipBaseConvertClass +{ + GstHipBaseFilter parent_class; +}; + +GType gst_hip_base_convert_get_type 
(void); +G_DEFINE_AUTOPTR_CLEANUP_FUNC(GstHipBaseConvert, gst_object_unref) + +#define GST_TYPE_HIP_CONVERT_SCALE (gst_hip_convert_scale_get_type()) +G_DECLARE_FINAL_TYPE (GstHipConvertScale, gst_hip_convert_scale, + GST, HIP_CONVERT_SCALE, GstHipBaseConvert) + +#define GST_TYPE_HIP_CONVERT (gst_hip_convert_get_type()) +G_DECLARE_FINAL_TYPE (GstHipConvert, gst_hip_convert, + GST, HIP_CONVERT, GstHipBaseConvert) + +#define GST_TYPE_HIP_SCALE (gst_hip_scale_get_type()) +G_DECLARE_FINAL_TYPE (GstHipScale, gst_hip_scale, + GST, HIP_SCALE, GstHipBaseConvert) + +G_END_DECLS +
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/hip/gsthipmemorycopy.cpp
Added
@@ -0,0 +1,1425 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef HAVE_CONFIG_H +#include <config.h> +#endif + +#include "gsthip-config.h" +#include <gst/hip/gsthip.h> + +#ifdef HAVE_GST_CUDA +#include <gst/cuda/gstcuda.h> +#endif + +#ifdef HAVE_GST_HIP_GL +#include <gst/hip/gsthip-gl.h> +#endif + +#include "gsthipmemorycopy.h" +#include <mutex> + +GST_DEBUG_CATEGORY_STATIC (gst_hip_memory_copy_debug); +#define GST_CAT_DEFAULT gst_hip_memory_copy_debug + +#define GST_HIP_FORMATS \ + "{ I420, YV12, NV12, NV21, P010_10LE, P012_LE, P016_LE, I420_10LE, I420_12LE, Y444, " \ + "Y444_10LE, Y444_12LE, Y444_16LE, BGRA, RGBA, RGBx, BGRx, ARGB, ABGR, RGB, " \ + "BGR, BGR10A2_LE, RGB10A2_LE, Y42B, I422_10LE, I422_12LE, YUY2, UYVY, RGBP, " \ + "BGRP, GBR, GBR_10LE, GBR_12LE, GBR_16LE, GBRA, VUYA }" + +enum class TransferType +{ + SYSTEM, + CUDA_TO_HIP, + GL_TO_HIP, + HIP_TO_CUDA, + HIP_TO_GL, +}; + +enum class MemoryType +{ + SYSTEM, + HIP, + CUDA, + GL, +}; + +enum class DeviceSearchType +{ + ANY, + PROPERTY, + DEVICE_ID, +}; + +enum +{ + PROP_0, + PROP_DEVICE_ID, + PROP_VENDOR, +}; + +#define DEFAULT_DEVICE_ID -1 +#define DEFAULT_VENDOR GST_HIP_VENDOR_UNKNOWN + +/* *INDENT-OFF* */ +struct 
_GstHipMemoryCopyPrivate +{ + ~_GstHipMemoryCopyPrivate () + { + Reset (true); + } + + void Reset (bool full) + { + in_type = MemoryType::SYSTEM; + out_type = MemoryType::SYSTEM; + transfer_type = TransferType::SYSTEM; + search_type = DeviceSearchType::PROPERTY; + + if (full) { + target_id = -1; + target_vendor = GST_HIP_VENDOR_UNKNOWN; + gst_clear_caps (&in_caps); + gst_clear_caps (&out_caps); + gst_clear_object (&device); +#ifdef HAVE_GST_CUDA + gst_clear_object (&cuda_ctx); +#endif +#ifdef HAVE_GST_HIP_GL + gst_clear_object (&other_gl_ctx); + gst_clear_object (&gl_ctx); + gst_clear_object (&gl_dpy); +#endif + } + } + + gboolean is_uploader; + std::recursive_mutex lock; + + GstVideoInfo info; + + GstCaps *in_caps = nullptr; + GstCaps *out_caps = nullptr; + + GstHipDevice *device = nullptr; +#ifdef HAVE_GST_CUDA + GstCudaContext *cuda_ctx = nullptr; +#endif + +#ifdef HAVE_GST_HIP_GL + GstGLDisplay *gl_dpy = nullptr; + GstGLContext *gl_ctx = nullptr; + GstGLContext *other_gl_ctx = nullptr; +#endif + + DeviceSearchType search_type = DeviceSearchType::PROPERTY; + TransferType transfer_type = TransferType::SYSTEM; + MemoryType in_type = MemoryType::SYSTEM; + MemoryType out_type = MemoryType::SYSTEM; + + gint target_id = -1; + GstHipVendor target_vendor = GST_HIP_VENDOR_UNKNOWN; + + gint device_id = DEFAULT_DEVICE_ID; + GstHipVendor vendor = DEFAULT_VENDOR; +}; +/* *INDENT-ON* */ + +static void gst_hip_memory_copy_finalize (GObject * object); +static void gst_hip_memory_copy_set_property (GObject * object, + guint prop_id, const GValue * value, GParamSpec * pspec); +static void gst_hip_memory_copy_get_property (GObject * object, + guint prop_id, GValue * value, GParamSpec * pspec); + +static void gst_hip_memory_copy_set_context (GstElement * element, + GstContext * context); +static gboolean gst_hip_memory_copy_start (GstBaseTransform * trans); +static gboolean gst_hip_memory_copy_stop (GstBaseTransform * trans); +static gboolean gst_hip_memory_copy_set_caps 
(GstBaseTransform * trans, + GstCaps * incaps, GstCaps * outcaps); +static gboolean gst_hip_memory_copy_query (GstBaseTransform * trans, + GstPadDirection direction, GstQuery * query); +static void gst_hip_memory_copy_before_transform (GstBaseTransform * trans, + GstBuffer * buffer); +static GstCaps *gst_hip_memory_copy_transform_caps (GstBaseTransform * trans, + GstPadDirection direction, GstCaps * caps, GstCaps * filter); +static gboolean gst_hip_memory_copy_propose_allocation (GstBaseTransform * + trans, GstQuery * decide_query, GstQuery * query); +static gboolean gst_hip_memory_copy_decide_allocation (GstBaseTransform * + trans, GstQuery * query); +static GstFlowReturn gst_hip_memory_copy_transform (GstBaseTransform * trans, + GstBuffer * inbuf, GstBuffer * outbuf); + +#define gst_hip_memory_copy_parent_class parent_class +G_DEFINE_ABSTRACT_TYPE (GstHipMemoryCopy, gst_hip_memory_copy, + GST_TYPE_BASE_TRANSFORM); + +static void +gst_hip_memory_copy_class_init (GstHipMemoryCopyClass * klass) +{ + auto object_class = G_OBJECT_CLASS (klass); + auto element_class = GST_ELEMENT_CLASS (klass); + auto trans_class = GST_BASE_TRANSFORM_CLASS (klass); + + object_class->finalize = gst_hip_memory_copy_finalize; + object_class->set_property = gst_hip_memory_copy_set_property; + object_class->get_property = gst_hip_memory_copy_get_property; + + g_object_class_install_property (object_class, PROP_DEVICE_ID, + g_param_spec_int ("device-id", + "Device ID", "HIP device ID to use (-1 = auto)", + -1, G_MAXINT, DEFAULT_DEVICE_ID, + (GParamFlags) (G_PARAM_READWRITE | GST_PARAM_MUTABLE_READY | + G_PARAM_STATIC_STRINGS))); + g_object_class_install_property (object_class, PROP_VENDOR, + g_param_spec_enum ("vendor", "Vendor", "Vendor type", + GST_TYPE_HIP_VENDOR, GST_HIP_VENDOR_UNKNOWN, + (GParamFlags) (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + element_class->set_context = + GST_DEBUG_FUNCPTR (gst_hip_memory_copy_set_context); + + trans_class->passthrough_on_same_caps = TRUE; + 
+ trans_class->start = GST_DEBUG_FUNCPTR (gst_hip_memory_copy_start); + trans_class->stop = GST_DEBUG_FUNCPTR (gst_hip_memory_copy_stop); + trans_class->set_caps = GST_DEBUG_FUNCPTR (gst_hip_memory_copy_set_caps); + trans_class->transform_caps = + GST_DEBUG_FUNCPTR (gst_hip_memory_copy_transform_caps); + trans_class->propose_allocation = + GST_DEBUG_FUNCPTR (gst_hip_memory_copy_propose_allocation); + trans_class->decide_allocation = + GST_DEBUG_FUNCPTR (gst_hip_memory_copy_decide_allocation); + trans_class->query = GST_DEBUG_FUNCPTR (gst_hip_memory_copy_query); + trans_class->before_transform = + GST_DEBUG_FUNCPTR (gst_hip_memory_copy_before_transform); + trans_class->transform = GST_DEBUG_FUNCPTR (gst_hip_memory_copy_transform); + + gst_type_mark_as_plugin_api (GST_TYPE_HIP_MEMORY_COPY, (GstPluginAPIFlags) 0); + GST_DEBUG_CATEGORY_INIT (gst_hip_memory_copy_debug, + "hipmemorycopy", 0, "hipmemorycopy"); +} + +static void +gst_hip_memory_copy_init (GstHipMemoryCopy * self) +{ + self->priv = new GstHipMemoryCopyPrivate (); +} + +static void +gst_hip_memory_copy_finalize (GObject * object) +{ + auto self = GST_HIP_MEMORY_COPY (object); + + delete self->priv; + + G_OBJECT_CLASS (parent_class)->finalize (object); +} + +static void +gst_hip_memory_copy_set_property (GObject * object, guint prop_id, + const GValue * value, GParamSpec * pspec) +{ + auto self = GST_HIP_MEMORY_COPY (object); + auto priv = self->priv; + + std::lock_guard < std::recursive_mutex > lk (priv->lock); + + switch (prop_id) { + case PROP_DEVICE_ID: + priv->device_id = g_value_get_int (value); + break; + case PROP_VENDOR: + priv->vendor = (GstHipVendor) g_value_get_enum (value); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static void +gst_hip_memory_copy_get_property (GObject * object, guint prop_id, + GValue * value, GParamSpec * pspec) +{ + auto self = GST_HIP_MEMORY_COPY (object); + auto priv = self->priv; + + std::lock_guard < 
std::recursive_mutex > lk (priv->lock); + + switch (prop_id) { + case PROP_DEVICE_ID: + g_value_set_int (value, priv->device_id); + break; + case PROP_VENDOR: + g_value_set_enum (value, priv->vendor); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static void +gst_hip_memory_copy_set_context (GstElement * element, GstContext * context) +{ + auto self = GST_HIP_MEMORY_COPY (element); + auto priv = self->priv; + + { + std::lock_guard < std::recursive_mutex > lk (priv->lock); +#ifdef HAVE_GST_HIP_GL + gst_gl_handle_set_context (element, context, &priv->gl_dpy, + &priv->other_gl_ctx); +#endif + + switch (priv->search_type) { + case DeviceSearchType::ANY: + gst_hip_handle_set_context (element, context, GST_HIP_VENDOR_UNKNOWN, + -1, &priv->device); +#ifdef HAVE_GST_CUDA + gst_cuda_handle_set_context (element, context, -1, &priv->cuda_ctx); +#endif + break; + case DeviceSearchType::PROPERTY: + gst_hip_handle_set_context (element, context, priv->vendor, + priv->device_id, &priv->device); +#ifdef HAVE_GST_CUDA + if (priv->vendor != GST_HIP_VENDOR_AMD) { + gst_cuda_handle_set_context (element, context, priv->device_id, + &priv->cuda_ctx); + } +#endif + break; + case DeviceSearchType::DEVICE_ID: + gst_hip_handle_set_context (element, context, priv->target_vendor, + priv->target_id, &priv->device); +#ifdef HAVE_GST_CUDA + if (priv->vendor != GST_HIP_VENDOR_AMD) { + gst_cuda_handle_set_context (element, context, priv->device_id, + &priv->cuda_ctx); + } +#endif + break; + } + } + + GST_ELEMENT_CLASS (parent_class)->set_context (element, context); +} + +static gboolean +gst_hip_memory_copy_start (GstBaseTransform * trans) +{ + auto self = GST_HIP_MEMORY_COPY (trans); + auto priv = self->priv; + + { + std::lock_guard < std::recursive_mutex > lk (priv->lock); + if (!gst_hip_ensure_element_data (GST_ELEMENT (trans), + priv->vendor, priv->device_id, &priv->device)) { + GST_ERROR_OBJECT (self, "Couldn't get HIP device"); + return 
FALSE; + } + } + + return TRUE; +} + +static gboolean +gst_hip_memory_copy_stop (GstBaseTransform * trans) +{ + auto self = GST_HIP_MEMORY_COPY (trans); + auto priv = self->priv; + + std::lock_guard < std::recursive_mutex > lk (priv->lock); + priv->Reset (true); + + return TRUE; +} + +#ifdef HAVE_GST_CUDA +static gboolean +gst_hip_memory_copy_ensure_device (GstHipMemoryCopy * self) +{ + auto priv = self->priv; + + if (priv->in_type == priv->out_type) + return TRUE; + + if (priv->in_type != MemoryType::CUDA && priv->out_type != MemoryType::CUDA) + return TRUE; + + std::lock_guard < std::recursive_mutex > lk (priv->lock); + auto elem = GST_ELEMENT (self); + auto vendor = gst_hip_device_get_vendor (priv->device); + if (vendor != GST_HIP_VENDOR_NVIDIA) { + /* Create new device for NVIDIA */ + auto old_dev = priv->device; + priv->device = nullptr; + priv->target_id = -1; + priv->target_vendor = GST_HIP_VENDOR_NVIDIA; + priv->search_type = DeviceSearchType::DEVICE_ID; + auto ret = gst_hip_ensure_element_data (elem, + priv->target_vendor, priv->target_id, &priv->device); + priv->search_type = DeviceSearchType::PROPERTY; + if (!ret) { + GST_WARNING_OBJECT (self, "Couldn't create device for NVIDIA"); + priv->device = old_dev; + return TRUE; + } + + gst_object_unref (old_dev); + } + + auto device_id = gst_hip_device_get_device_id (priv->device); + if (priv->cuda_ctx) { + guint cuda_dev_id; + g_object_get (priv->cuda_ctx, "cuda-device-id", &cuda_dev_id, nullptr); + if (cuda_dev_id != device_id) + gst_clear_object (&priv->cuda_ctx); + } + + if (!priv->cuda_ctx) { + priv->search_type = DeviceSearchType::DEVICE_ID; + auto ret = gst_cuda_ensure_element_context (elem, + device_id, &priv->cuda_ctx); + priv->search_type = DeviceSearchType::PROPERTY; + if (!ret) { + GST_WARNING_OBJECT (self, "Couldn't create device for NVIDIA"); + return TRUE; + } + } + + if (priv->in_type == MemoryType::CUDA) + priv->transfer_type = TransferType::CUDA_TO_HIP; + else + priv->transfer_type = 
TransferType::HIP_TO_CUDA; + + return TRUE; +} +#else +static gboolean +gst_hip_memory_copy_ensure_device (GstHipMemoryCopy * self) +{ + return TRUE; +} +#endif + +#ifdef HAVE_GST_HIP_GL +static gboolean +gst_hip_memory_copy_ensure_gl_context (GstHipMemoryCopy * self) +{ + auto priv = self->priv; + + if (!gst_gl_ensure_element_data (GST_ELEMENT (self), + &priv->gl_dpy, &priv->other_gl_ctx)) { + GST_DEBUG_OBJECT (self, "No available OpenGL display"); + return FALSE; + } + + auto gl_dpy = priv->gl_dpy; + if (!gst_gl_query_local_gl_context (GST_ELEMENT (self), GST_PAD_SRC, + &priv->gl_ctx) && + !gst_gl_query_local_gl_context (GST_ELEMENT (self), GST_PAD_SINK, + &priv->gl_ctx)) { + GST_INFO_OBJECT (self, "failed to query local OpenGL context"); + + gst_clear_object (&priv->gl_ctx); + priv->gl_ctx = gst_gl_display_get_gl_context_for_thread (gl_dpy, nullptr); + if (!priv->gl_ctx + || !gst_gl_display_add_context (gl_dpy, + GST_GL_CONTEXT (priv->gl_ctx))) { + gst_clear_object (&priv->gl_ctx); + if (!gst_gl_display_create_context (gl_dpy, + priv->other_gl_ctx, &priv->gl_ctx, nullptr)) { + GST_WARNING_OBJECT (self, "failed to create OpenGL context"); + return FALSE; + } + + if (!gst_gl_display_add_context (gl_dpy, priv->gl_ctx)) { + GST_WARNING_OBJECT (self, + "failed to add the OpenGL context to the display"); + return FALSE; + } + } + } + + auto gl_ctx = priv->gl_ctx; + if (!gst_gl_context_check_gl_version (gl_ctx, + (GstGLAPI) (GST_GL_API_OPENGL | GST_GL_API_OPENGL3), 3, 0)) { + GST_WARNING_OBJECT (self, "OpenGL context could not support PBO"); + return FALSE; + } + + GST_DEBUG_OBJECT (self, "Found GL context"); + + return TRUE; +} +#else +static gboolean +gst_hip_memory_copy_ensure_gl_context (GstHipMemoryCopy * self) +{ + return TRUE; +} +#endif + +static gboolean +gst_hip_memory_copy_set_caps (GstBaseTransform * trans, GstCaps * incaps, + GstCaps * outcaps) +{ + auto self = GST_HIP_MEMORY_COPY (trans); + auto priv = self->priv; + + if (!priv->device) { + 
GST_ERROR_OBJECT (self, "No available HIP device"); + return FALSE; + } + + gst_caps_replace (&priv->in_caps, incaps); + gst_caps_replace (&priv->out_caps, outcaps); + + if (!gst_video_info_from_caps (&priv->info, incaps)) { + GST_ERROR_OBJECT (self, "Invalid input caps %" GST_PTR_FORMAT, incaps); + return FALSE; + } + + priv->Reset (false); + + auto features = gst_caps_get_features (incaps, 0); + if (gst_caps_features_contains (features, GST_CAPS_FEATURE_MEMORY_HIP_MEMORY)) { + priv->in_type = MemoryType::HIP; + } +#ifdef HAVE_GST_CUDA + else if (gst_caps_features_contains (features, + GST_CAPS_FEATURE_MEMORY_CUDA_MEMORY)) { + priv->in_type = MemoryType::CUDA; + } +#endif +#ifdef HAVE_GST_HIP_GL + else if (gst_caps_features_contains (features, + GST_CAPS_FEATURE_MEMORY_GL_MEMORY)) { + priv->in_type = MemoryType::GL; + } +#endif + + features = gst_caps_get_features (outcaps, 0); + if (features && gst_caps_features_contains (features, + GST_CAPS_FEATURE_MEMORY_HIP_MEMORY)) { + priv->out_type = MemoryType::HIP; + } +#ifdef HAVE_GST_CUDA + else if (gst_caps_features_contains (features, + GST_CAPS_FEATURE_MEMORY_CUDA_MEMORY)) { + priv->out_type = MemoryType::CUDA; + } +#endif +#ifdef HAVE_GST_HIP_GL + else if (gst_caps_features_contains (features, + GST_CAPS_FEATURE_MEMORY_GL_MEMORY)) { + priv->out_type = MemoryType::GL; + } +#endif + + priv->transfer_type = TransferType::SYSTEM; +#ifdef HAVE_GST_HIP_GL + if (priv->in_type == MemoryType::GL && priv->out_type == MemoryType::HIP && + gst_hip_memory_copy_ensure_gl_context (self)) { + priv->transfer_type = TransferType::GL_TO_HIP; + return TRUE; + } else if (priv->out_type == MemoryType::GL && + priv->in_type == MemoryType::HIP && + gst_hip_memory_copy_ensure_gl_context (self)) { + priv->transfer_type = TransferType::HIP_TO_GL; + return TRUE; + } +#endif + + return gst_hip_memory_copy_ensure_device (self); +} + +static gboolean +gst_hip_memory_copy_query (GstBaseTransform * trans, + GstPadDirection direction, GstQuery * 
query) +{ + auto self = GST_HIP_MEMORY_COPY (trans); + auto priv = self->priv; + + if (GST_QUERY_TYPE (query) == GST_QUERY_CONTEXT) { + std::lock_guard < std::recursive_mutex > lk (priv->lock); + auto elem = GST_ELEMENT (trans); + if (gst_hip_handle_context_query (elem, query, priv->device)) + return TRUE; + +#ifdef HAVE_GST_HIP_GL + if (gst_gl_handle_context_query (elem, query, + priv->gl_dpy, priv->gl_ctx, priv->other_gl_ctx)) { + return TRUE; + } +#endif + +#ifdef HAVE_GST_CUDA + if (gst_cuda_handle_context_query (elem, query, priv->cuda_ctx)) + return TRUE; +#endif + } + + return GST_BASE_TRANSFORM_CLASS (parent_class)->query (trans, direction, + query); +} + +static void +gst_hip_memory_copy_before_transform (GstBaseTransform * trans, + GstBuffer * buffer) +{ + auto self = GST_HIP_MEMORY_COPY (trans); + auto priv = self->priv; + + bool need_reconfigure = false; + if (priv->transfer_type == TransferType::SYSTEM) + return; + + auto mem = gst_buffer_peek_memory (buffer, 0); + if (priv->in_type == MemoryType::CUDA) { + if (!gst_is_cuda_memory (mem)) { + GST_WARNING_OBJECT (self, "Input memory is not cuda"); + priv->transfer_type = TransferType::SYSTEM; + return; + } + + auto cmem = GST_CUDA_MEMORY_CAST (mem); + guint device_id = gst_hip_device_get_device_id (priv->device); + guint cuda_dev_id; + g_object_get (cmem->context, "cuda-device-id", &cuda_dev_id, nullptr); + if (cuda_dev_id != device_id) { + GST_INFO_OBJECT (self, "cuda device is updated"); + std::lock_guard < std::recursive_mutex > lk (priv->lock); + gst_clear_object (&priv->cuda_ctx); + priv->cuda_ctx = (GstCudaContext *) gst_object_ref (cmem->context); + + auto old_dev = priv->device; + priv->device = nullptr; + priv->target_vendor = GST_HIP_VENDOR_NVIDIA; + priv->target_id = device_id; + priv->search_type = DeviceSearchType::DEVICE_ID; + auto ret = + gst_hip_ensure_element_data (GST_ELEMENT (self), priv->target_vendor, + priv->target_id, &priv->device); + priv->search_type = 
DeviceSearchType::PROPERTY; + if (!ret) { + GST_WARNING_OBJECT (self, "Couldn't get hip device"); + priv->device = old_dev; + priv->transfer_type = TransferType::SYSTEM; + return; + } + + gst_clear_object (&old_dev); + need_reconfigure = true; + } + } else if (priv->in_type == MemoryType::HIP) { + if (!gst_is_hip_memory (mem)) { + GST_WARNING_OBJECT (self, "Input memory is not hip"); + priv->transfer_type = TransferType::SYSTEM; + return; + } + + auto hmem = GST_HIP_MEMORY_CAST (mem); + if (!gst_hip_device_is_equal (hmem->device, priv->device)) { + GST_INFO_OBJECT (self, "hip device is updated"); + std::lock_guard < std::recursive_mutex > lk (priv->lock); + + auto other_vendor = gst_hip_device_get_vendor (hmem->device); + if (other_vendor != GST_HIP_VENDOR_NVIDIA) { + GST_INFO_OBJECT (self, "Input is not NVIDIA"); + priv->transfer_type = TransferType::SYSTEM; + return; + } + + + gst_clear_object (&priv->device); + priv->device = (GstHipDevice *) gst_object_ref (hmem->device); + + auto new_dev_id = gst_hip_device_get_device_id (priv->device); + gst_clear_object (&priv->cuda_ctx); + + priv->target_id = new_dev_id; + priv->target_vendor = GST_HIP_VENDOR_NVIDIA; + priv->search_type = DeviceSearchType::DEVICE_ID; + auto ret = gst_cuda_ensure_element_context (GST_ELEMENT (self), + priv->target_id, &priv->cuda_ctx); + priv->search_type = DeviceSearchType::PROPERTY; + if (!ret) { + GST_WARNING_OBJECT (self, "Couldn't get cuda context"); + priv->transfer_type = TransferType::SYSTEM; + } + + need_reconfigure = true; + } + } + + if (need_reconfigure) { + GST_DEBUG_OBJECT (self, "Reconfiguring for device update"); + gst_hip_memory_copy_set_caps (trans, priv->in_caps, priv->out_caps); + gst_base_transform_reconfigure_src (trans); + } +} + +static GstCaps * +_set_caps_features (const GstCaps * caps, const gchar * feature_name) +{ + GstCaps *tmp = gst_caps_copy (caps); + guint n = gst_caps_get_size (tmp); + guint i = 0; + + for (i = 0; i < n; i++) { + gst_caps_set_features (tmp, 
i, + gst_caps_features_new_single_static_str (feature_name)); + } + + return tmp; +} + +static void +_remove_field (GstCaps * caps, const gchar * field) +{ + guint n = gst_caps_get_size (caps); + guint i = 0; + + for (i = 0; i < n; i++) { + GstStructure *s = gst_caps_get_structure (caps, i); + gst_structure_remove_field (s, field); + } +} + +static GstCaps * +gst_hip_memory_copy_transform_caps (GstBaseTransform * trans, + GstPadDirection direction, GstCaps * caps, GstCaps * filter) +{ + auto self = GST_HIP_MEMORY_COPY (trans); + auto priv = self->priv; + GstCaps *result, *tmp; + + GST_DEBUG_OBJECT (self, + "Transforming caps %" GST_PTR_FORMAT " in direction %s", caps, + (direction == GST_PAD_SINK) ? "sink" : "src"); + + if (direction == GST_PAD_SINK) { + if (priv->is_uploader) { + auto caps_hip = + _set_caps_features (caps, GST_CAPS_FEATURE_MEMORY_HIP_MEMORY); + tmp = gst_caps_merge (caps_hip, gst_caps_ref (caps)); + } else { + auto ret = gst_caps_ref (caps); + +#ifdef HAVE_GST_CUDA + auto caps_cuda = + _set_caps_features (caps, GST_CAPS_FEATURE_MEMORY_CUDA_MEMORY); + ret = gst_caps_merge (ret, caps_cuda); +#endif +#ifdef HAVE_GST_HIP_GL + auto caps_gl = + _set_caps_features (caps, GST_CAPS_FEATURE_MEMORY_GL_MEMORY); + ret = gst_caps_merge (ret, caps_gl); +#endif + + auto caps_sys = + _set_caps_features (caps, GST_CAPS_FEATURE_MEMORY_SYSTEM_MEMORY); + tmp = gst_caps_merge (ret, caps_sys); + } + } else { + if (priv->is_uploader) { + auto ret = gst_caps_ref (caps); + +#ifdef HAVE_GST_CUDA + auto caps_cuda = + _set_caps_features (caps, GST_CAPS_FEATURE_MEMORY_CUDA_MEMORY); + ret = gst_caps_merge (ret, caps_cuda); +#endif +#ifdef HAVE_GST_HIP_GL + auto caps_gl = + _set_caps_features (caps, GST_CAPS_FEATURE_MEMORY_GL_MEMORY); + ret = gst_caps_merge (ret, caps_gl); +#endif + + auto caps_sys = + _set_caps_features (caps, GST_CAPS_FEATURE_MEMORY_SYSTEM_MEMORY); + tmp = gst_caps_merge (ret, caps_sys); + } else { + auto caps_hip = + _set_caps_features (caps, 
GST_CAPS_FEATURE_MEMORY_HIP_MEMORY); + tmp = gst_caps_merge (caps_hip, gst_caps_ref (caps)); + } + } + + tmp = gst_caps_make_writable (tmp); + _remove_field (tmp, "texture-target"); + + if (filter) { + result = gst_caps_intersect_full (filter, tmp, GST_CAPS_INTERSECT_FIRST); + gst_caps_unref (tmp); + } else { + result = tmp; + } + + GST_DEBUG_OBJECT (trans, "returning caps: %" GST_PTR_FORMAT, result); + + return result; +} + +static gboolean +gst_hip_memory_copy_propose_allocation (GstBaseTransform * trans, + GstQuery * decide_query, GstQuery * query) +{ + auto self = GST_HIP_MEMORY_COPY (trans); + auto priv = self->priv; + GstVideoInfo info; + GstBufferPool *pool = nullptr; + GstCaps *caps; + guint size; + bool is_system = true; + + if (!GST_BASE_TRANSFORM_CLASS (parent_class)->propose_allocation (trans, + decide_query, query)) + return FALSE; + + /* passthrough, we're done */ + if (!decide_query) + return TRUE; + + gst_query_parse_allocation (query, &caps, nullptr); + + if (!caps) { + GST_WARNING_OBJECT (self, "Allocation query without caps"); + return FALSE; + } + + if (!gst_video_info_from_caps (&info, caps)) { + GST_ERROR_OBJECT (self, "Invalid caps %" GST_PTR_FORMAT, caps); + return FALSE; + } + + if (gst_query_get_n_allocation_pools (query) == 0) { + auto features = gst_caps_get_features (caps, 0); + if (gst_caps_features_contains (features, + GST_CAPS_FEATURE_MEMORY_HIP_MEMORY)) { + GST_DEBUG_OBJECT (self, "upstream support hip memory"); + pool = gst_hip_buffer_pool_new (priv->device); + is_system = false; + } +#ifdef HAVE_GST_CUDA + else if (gst_caps_features_contains (features, + GST_CAPS_FEATURE_MEMORY_CUDA_MEMORY) && priv->cuda_ctx) { + GST_DEBUG_OBJECT (self, "upstream support cuda memory"); + pool = gst_cuda_buffer_pool_new (priv->cuda_ctx); + is_system = false; + } +#endif +#ifdef HAVE_GST_HIP_GL + else if (gst_caps_features_contains (features, + GST_CAPS_FEATURE_MEMORY_GL_MEMORY) && + gst_hip_memory_copy_ensure_gl_context (self)) { + 
GST_DEBUG_OBJECT (self, "upstream support gl memory"); + pool = gst_gl_buffer_pool_new (priv->gl_ctx); + is_system = false; + } +#endif + + if (!pool) + pool = gst_video_buffer_pool_new (); + + auto config = gst_buffer_pool_get_config (pool); + gst_buffer_pool_config_add_option (config, + GST_BUFFER_POOL_OPTION_VIDEO_META); + + if (is_system) { + gst_buffer_pool_config_add_option (config, + GST_BUFFER_POOL_OPTION_VIDEO_ALIGNMENT); + } + + size = GST_VIDEO_INFO_SIZE (&info); + gst_buffer_pool_config_set_params (config, caps, size, 0, 0); + + if (!gst_buffer_pool_set_config (pool, config)) { + GST_ERROR_OBJECT (self, "Bufferpool config failed"); + gst_object_unref (pool); + return FALSE; + } + + config = gst_buffer_pool_get_config (pool); + gst_buffer_pool_config_get_params (config, + nullptr, &size, nullptr, nullptr); + gst_structure_free (config); + + gst_query_add_allocation_pool (query, pool, size, 0, 0); + gst_object_unref (pool); + } + + gst_query_add_allocation_meta (query, GST_VIDEO_META_API_TYPE, nullptr); + + return TRUE; +} + +static gboolean +gst_hip_memory_copy_decide_allocation (GstBaseTransform * trans, + GstQuery * query) +{ + auto self = GST_HIP_MEMORY_COPY (trans); + auto priv = self->priv; + GstBufferPool *pool = nullptr; + GstVideoInfo info; + guint min, max, size; + GstCaps *caps = nullptr; + bool update_pool = false; + + gst_query_parse_allocation (query, &caps, nullptr); + + if (!caps) { + GST_WARNING_OBJECT (self, "Allocation query without caps"); + return FALSE; + } + + if (!gst_video_info_from_caps (&info, caps)) { + GST_ERROR_OBJECT (self, "Invalid caps %" GST_PTR_FORMAT, caps); + return FALSE; + } + + if (gst_query_get_n_allocation_pools (query) > 0) { + gst_query_parse_nth_allocation_pool (query, 0, &pool, &size, &min, &max); + update_pool = true; + } else { + size = info.size; + min = max = 0; + } + + auto features = gst_caps_get_features (caps, 0); + if (gst_caps_features_contains (features, GST_CAPS_FEATURE_MEMORY_HIP_MEMORY)) { + 
GST_DEBUG_OBJECT (self, "downstream support hip memory"); + if (pool) { + if (!GST_IS_HIP_BUFFER_POOL (pool)) { + gst_clear_object (&pool); + } else { + auto hpool = GST_HIP_BUFFER_POOL (pool); + if (!gst_hip_device_is_equal (hpool->device, priv->device)) + gst_clear_object (&pool); + } + } + + if (!pool) + pool = gst_hip_buffer_pool_new (priv->device); + } +#ifdef HAVE_GST_CUDA + else if (gst_caps_features_contains (features, + GST_CAPS_FEATURE_MEMORY_CUDA_MEMORY)) { + GST_DEBUG_OBJECT (self, "downstream support cuda memory"); + if (pool) { + if (!GST_IS_CUDA_BUFFER_POOL (pool)) { + gst_clear_object (&pool); + } else { + auto cpool = GST_CUDA_BUFFER_POOL (pool); + if (cpool->context != priv->cuda_ctx) + gst_clear_object (&pool); + } + } + + if (!pool) + pool = gst_cuda_buffer_pool_new (priv->cuda_ctx); + } +#endif +#ifdef HAVE_GST_HIP_GL + else if (gst_caps_features_contains (features, + GST_CAPS_FEATURE_MEMORY_GL_MEMORY) && + gst_hip_memory_copy_ensure_gl_context (self)) { + GST_DEBUG_OBJECT (self, "downstream support gl memory"); + if (pool && !GST_IS_GL_BUFFER_POOL (pool)) + gst_clear_object (&pool); + + if (!pool) + pool = gst_gl_buffer_pool_new (priv->gl_ctx); + } +#endif + + if (!pool) + pool = gst_video_buffer_pool_new (); + + auto config = gst_buffer_pool_get_config (pool); + gst_buffer_pool_config_add_option (config, GST_BUFFER_POOL_OPTION_VIDEO_META); + gst_buffer_pool_config_set_params (config, caps, size, min, max); + gst_buffer_pool_set_config (pool, config); + + config = gst_buffer_pool_get_config (pool); + gst_buffer_pool_config_get_params (config, nullptr, &size, nullptr, nullptr); + gst_structure_free (config); + + if (update_pool) + gst_query_set_nth_allocation_pool (query, 0, pool, size, min, max); + else + gst_query_add_allocation_pool (query, pool, size, min, max); + + gst_object_unref (pool); + + return GST_BASE_TRANSFORM_CLASS (parent_class)->decide_allocation (trans, + query); +} + +static GstFlowReturn +gst_hip_memory_copy_system_copy 
(GstHipMemoryCopy * self, + GstBuffer * inbuf, GstBuffer * outbuf) +{ + auto priv = self->priv; + GstVideoFrame in_frame, out_frame; + GstFlowReturn ret = GST_FLOW_OK; + + if (!gst_video_frame_map (&in_frame, &priv->info, inbuf, GST_MAP_READ)) { + GST_ERROR_OBJECT (self, "Couldn't map input frame"); + return GST_FLOW_ERROR; + } + + if (!gst_video_frame_map (&out_frame, &priv->info, outbuf, GST_MAP_WRITE)) { + GST_ERROR_OBJECT (self, "Couldn't map output frame"); + gst_video_frame_unmap (&in_frame); + return GST_FLOW_ERROR; + } + + if (!gst_video_frame_copy (&out_frame, &in_frame)) { + GST_ERROR_OBJECT (self, "Copy failed"); + ret = GST_FLOW_ERROR; + } + + gst_video_frame_unmap (&out_frame); + gst_video_frame_unmap (&in_frame); + + return ret; +} + +#ifdef HAVE_GST_CUDA +static gboolean +gst_hip_memory_copy_device_copy (GstHipMemoryCopy * self, GstBuffer * inbuf, + GstBuffer * outbuf) +{ + auto priv = self->priv; + CUstream stream = nullptr; + GstVideoFrame in_frame, out_frame; + gboolean ret = TRUE; + + if (!gst_hip_device_set_current (priv->device)) { + GST_ERROR_OBJECT (self, "Couldn't set device"); + return FALSE; + } + + if (priv->transfer_type == TransferType::CUDA_TO_HIP) { + auto cmem = (GstCudaMemory *) gst_buffer_peek_memory (inbuf, 0); + stream = gst_cuda_stream_get_handle (gst_cuda_memory_get_stream (cmem)); + } else { + auto hmem = (GstHipMemory *) gst_buffer_peek_memory (inbuf, 0); + stream = + (CUstream) gst_hip_stream_get_handle (gst_hip_memory_get_stream (hmem)); + } + + if (!gst_video_frame_map (&in_frame, &priv->info, inbuf, GST_MAP_READ_HIP)) { + GST_ERROR_OBJECT (self, "Couldn't map input frame"); + return FALSE; + } + + if (!gst_video_frame_map (&out_frame, &priv->info, outbuf, GST_MAP_WRITE_HIP)) { + GST_ERROR_OBJECT (self, "Couldn't map output frame"); + gst_video_frame_unmap (&in_frame); + return FALSE; + } + + for (guint i = 0; i < GST_VIDEO_FRAME_N_PLANES (&in_frame); i++) { + auto in_data = (CUdeviceptr) GST_VIDEO_FRAME_PLANE_DATA 
(&in_frame, i); + auto out_data = (CUdeviceptr) GST_VIDEO_FRAME_PLANE_DATA (&out_frame, i); + auto in_stride = GST_VIDEO_FRAME_PLANE_STRIDE (&in_frame, i); + auto out_stride = GST_VIDEO_FRAME_PLANE_STRIDE (&out_frame, i); + auto width_in_bytes = GST_VIDEO_FRAME_COMP_PSTRIDE (&in_frame, i) * + GST_VIDEO_FRAME_COMP_WIDTH (&in_frame, i); + auto height = GST_VIDEO_FRAME_COMP_HEIGHT (&in_frame, i); + + CUDA_MEMCPY2D param = { }; + + param.srcMemoryType = CU_MEMORYTYPE_DEVICE; + param.srcDevice = in_data; + param.srcPitch = in_stride; + + param.dstMemoryType = CU_MEMORYTYPE_DEVICE; + param.dstDevice = out_data; + param.dstPitch = out_stride; + param.WidthInBytes = width_in_bytes; + param.Height = height; + + ret = gst_cuda_result (CuMemcpy2DAsync (&param, stream)); + if (!ret) + break; + } + + if (ret) + ret = gst_cuda_result (CuStreamSynchronize (stream)); + + gst_video_frame_unmap (&out_frame); + gst_video_frame_unmap (&in_frame); + + return ret; +} +#endif + +#ifdef HAVE_GST_HIP_GL +struct GLCopyData +{ + GstHipMemoryCopy *self; + GstHipDevice *device; + GstBuffer *gl_buf; + GstBuffer *hip_buf; + gboolean gl_to_hip; + gboolean ret = FALSE; +}; + +static void +gl_copy_thread_func (GstGLContext * gl_ctx, GLCopyData * data) +{ + auto self = data->self; + auto priv = self->priv; + auto vendor = gst_hip_device_get_vendor (data->device); + guint device_count = 0; + int device_list[1] = { 0, }; + GstHipGraphicsResource *resources[4] = { }; + GstVideoFrame hip_frame; + hipStream_t stream = nullptr; + + data->ret = FALSE; + + auto hip_ret = HipGLGetDevices (vendor, + &device_count, device_list, 1, hipGLDeviceListAll); + if (!gst_hip_result (hip_ret, vendor) || device_count == 0) { + GST_WARNING_OBJECT (self, "GL context is not compatible with HIP device"); + return; + } + + if (!gst_hip_device_set_current (data->device)) { + GST_ERROR_OBJECT (self, "Couldn't set device"); + return; + } + + auto n_mem = gst_buffer_n_memory (data->gl_buf); + if (n_mem != GST_VIDEO_INFO_N_PLANES 
(&priv->info)) { + GST_ERROR_OBJECT (self, "Plane count mismatch"); + return; + } + + for (guint i = 0; i < n_mem; i++) { + auto mem = gst_buffer_peek_memory (data->gl_buf, i); + auto pbo_mem = (GstGLMemoryPBO *) mem; + + hip_ret = gst_hip_gl_get_graphics_resource_from_memory (data->device, + mem, &resources[i]); + if (!gst_hip_result (hip_ret, vendor)) { + GST_WARNING_OBJECT (self, "Couldn't get graphics resource"); + for (guint j = 0; j < i; j++) + gst_clear_hip_graphics_resource (&resources[j]); + + return; + } + + if (data->gl_to_hip) { + /* get the texture into the PBO */ + gst_gl_memory_pbo_upload_transfer (pbo_mem); + gst_gl_memory_pbo_download_transfer (pbo_mem); + } else { + /* Need PBO -> texture */ + GST_MINI_OBJECT_FLAG_SET (mem, GST_GL_BASE_MEMORY_TRANSFER_NEED_UPLOAD); + + /* PBO -> sysmem */ + GST_MINI_OBJECT_FLAG_SET (pbo_mem->pbo, + GST_GL_BASE_MEMORY_TRANSFER_NEED_DOWNLOAD); + } + } + + GstMapFlags map_flags; + if (data->gl_to_hip) + map_flags = GST_MAP_WRITE_HIP; + else + map_flags = GST_MAP_READ_HIP; + + if (!gst_video_frame_map (&hip_frame, &priv->info, data->hip_buf, map_flags)) { + GST_ERROR_OBJECT (self, "Couldn't map HIP frame"); + for (guint i = 0; i < n_mem; i++) + gst_clear_hip_graphics_resource (&resources[i]); + + return; + } + + auto hmem = (GstHipMemory *) gst_buffer_peek_memory (data->hip_buf, 0); + auto gst_stream = gst_hip_memory_get_stream (hmem); + if (!gst_stream) + gst_stream = gst_hip_device_get_stream (hmem->device); + + stream = gst_hip_stream_get_handle (gst_stream); + + gboolean copy_ret = TRUE; + for (guint i = 0; i < GST_VIDEO_FRAME_N_PLANES (&hip_frame); i++) { + hip_ret = gst_hip_graphics_resource_map (resources[i], stream); + copy_ret = gst_hip_result (hip_ret, vendor); + if (!copy_ret) { + GST_ERROR_OBJECT (self, "Couldn't map resource %d", i); + break; + } + + void *gl_dev_ptr; + size_t gl_size; + hip_ret = gst_hip_graphics_resource_get_mapped_pointer (resources[i], + &gl_dev_ptr, &gl_size); + copy_ret = gst_hip_result 
(hip_ret, vendor); + if (!copy_ret) { + GST_ERROR_OBJECT (self, "Couldn't get mapped pointer %d", i); + gst_hip_graphics_resource_unmap (resources[i], stream); + break; + } + + hip_Memcpy2D param = { }; + + param.srcMemoryType = hipMemoryTypeDevice; + param.dstMemoryType = hipMemoryTypeDevice; + param.Height = GST_VIDEO_FRAME_COMP_HEIGHT (&hip_frame, i); + + if (data->gl_to_hip) { + param.srcDevice = gl_dev_ptr; + param.srcPitch = GST_VIDEO_INFO_PLANE_STRIDE (&priv->info, i); + + param.dstDevice = GST_VIDEO_FRAME_PLANE_DATA (&hip_frame, i); + param.dstPitch = GST_VIDEO_FRAME_PLANE_STRIDE (&hip_frame, i); + + } else { + param.dstDevice = gl_dev_ptr; + param.dstPitch = GST_VIDEO_INFO_PLANE_STRIDE (&priv->info, i); + + param.srcDevice = GST_VIDEO_FRAME_PLANE_DATA (&hip_frame, i); + param.srcPitch = GST_VIDEO_FRAME_PLANE_STRIDE (&hip_frame, i); + } + + param.WidthInBytes = MIN (param.srcPitch, param.dstPitch); + + hip_ret = HipMemcpyParam2DAsync (vendor, &param, stream); + copy_ret = gst_hip_result (hip_ret, vendor); + gst_hip_graphics_resource_unmap (resources[i], stream); + + if (!copy_ret) { + GST_ERROR_OBJECT (self, "Couldn't copy plane %d", i); + break; + } + } + + if (copy_ret) + HipStreamSynchronize (vendor, stream); + + for (guint i = 0; i < n_mem; i++) + gst_clear_hip_graphics_resource (&resources[i]); + + gst_video_frame_unmap (&hip_frame); + + data->ret = copy_ret; +} + +static gboolean +gst_hip_memory_copy_gl_copy (GstHipMemoryCopy * self, GstBuffer * inbuf, + GstBuffer * outbuf) +{ + auto priv = self->priv; + GstGLContext *gl_ctx = nullptr; + GLCopyData data; + + data.self = self; + data.ret = FALSE; + + if (priv->transfer_type == TransferType::GL_TO_HIP) { + data.gl_buf = inbuf; + data.hip_buf = outbuf; + data.gl_to_hip = TRUE; + } else { + data.gl_buf = outbuf; + data.hip_buf = inbuf; + data.gl_to_hip = FALSE; + } + + auto mem = gst_buffer_peek_memory (data.gl_buf, 0); + if (!gst_is_gl_memory_pbo (mem)) { + GST_WARNING_OBJECT (self, "Not a GL PBO buffer"); + 
return FALSE; + } + gl_ctx = GST_GL_MEMORY_CAST (mem)->mem.context; + + mem = gst_buffer_peek_memory (data.hip_buf, 0); + if (!gst_is_hip_memory (mem)) { + GST_WARNING_OBJECT (self, "Not a HIP buffer"); + return FALSE; + } + + data.device = GST_HIP_MEMORY_CAST (mem)->device; + + gst_gl_context_thread_add (gl_ctx, + (GstGLContextThreadFunc) gl_copy_thread_func, &data); + + return data.ret; +} +#endif + +static GstFlowReturn +gst_hip_memory_copy_transform (GstBaseTransform * trans, GstBuffer * inbuf, + GstBuffer * outbuf) +{ + auto self = GST_HIP_MEMORY_COPY (trans); + auto priv = self->priv; + +#ifdef HAVE_GST_HIP_GL + if (priv->transfer_type == TransferType::GL_TO_HIP || + priv->transfer_type == TransferType::HIP_TO_GL) { + if (gst_hip_memory_copy_gl_copy (self, inbuf, outbuf)) { + GST_TRACE_OBJECT (self, "Done GL interop copy"); + return GST_FLOW_OK; + } + + GST_WARNING_OBJECT (self, + "GL interop copy failed, fallback to system copy"); + priv->transfer_type = TransferType::SYSTEM; + } +#endif + +#ifdef HAVE_GST_CUDA + if (priv->transfer_type == TransferType::HIP_TO_CUDA || + priv->transfer_type == TransferType::CUDA_TO_HIP) { + auto ret = gst_hip_memory_copy_device_copy (self, inbuf, outbuf); + if (ret) { + GST_TRACE_OBJECT (self, "Done using device copy"); + return GST_FLOW_OK; + } + + priv->transfer_type = TransferType::SYSTEM; + } +#endif + + return gst_hip_memory_copy_system_copy (self, inbuf, outbuf); +} + +struct _GstHipUpload +{ + GstHipMemoryCopy parent; +}; + +G_DEFINE_TYPE (GstHipUpload, gst_hip_upload, GST_TYPE_HIP_MEMORY_COPY); + +static void +gst_hip_upload_class_init (GstHipUploadClass * klass) +{ + auto element_class = GST_ELEMENT_CLASS (klass); + + gst_element_class_set_static_metadata (element_class, + "HIP Uploader", "Filter/Video", + "Uploads system memory into HIP device memory", + "Seungha Yang <seungha@centricular.com>"); + + auto sys_caps = gst_caps_from_string (GST_VIDEO_CAPS_MAKE (GST_HIP_FORMATS)); + auto hip_caps = gst_caps_from_string 
(GST_VIDEO_CAPS_MAKE_WITH_FEATURES + (GST_CAPS_FEATURE_MEMORY_HIP_MEMORY, GST_HIP_FORMATS)); + + auto src_caps = gst_caps_merge (gst_caps_ref (hip_caps), + gst_caps_ref (sys_caps)); + + auto sink_caps = sys_caps; +#ifdef HAVE_GST_HIP_GL + auto gl_caps = gst_caps_from_string (GST_VIDEO_CAPS_MAKE_WITH_FEATURES + (GST_CAPS_FEATURE_MEMORY_GL_MEMORY, GST_HIP_FORMATS)); + sink_caps = gst_caps_merge (sink_caps, gl_caps); +#endif + +#ifdef HAVE_GST_CUDA + auto cuda_caps = gst_caps_from_string (GST_VIDEO_CAPS_MAKE_WITH_FEATURES + (GST_CAPS_FEATURE_MEMORY_CUDA_MEMORY, GST_HIP_FORMATS)); + sink_caps = gst_caps_merge (sink_caps, cuda_caps); +#endif + + sink_caps = gst_caps_merge (sink_caps, hip_caps); + + GST_MINI_OBJECT_FLAG_SET (sink_caps, GST_MINI_OBJECT_FLAG_MAY_BE_LEAKED); + GST_MINI_OBJECT_FLAG_SET (src_caps, GST_MINI_OBJECT_FLAG_MAY_BE_LEAKED); + + gst_element_class_add_pad_template (element_class, + gst_pad_template_new ("sink", GST_PAD_SINK, GST_PAD_ALWAYS, sink_caps)); + gst_element_class_add_pad_template (element_class, + gst_pad_template_new ("src", GST_PAD_SRC, GST_PAD_ALWAYS, src_caps)); +} + +static void +gst_hip_upload_init (GstHipUpload * self) +{ + auto memcpy = GST_HIP_MEMORY_COPY (self); + memcpy->priv->is_uploader = true; +} + +struct _GstHipDownload +{ + GstHipMemoryCopy parent; +}; + +G_DEFINE_TYPE (GstHipDownload, gst_hip_download, GST_TYPE_HIP_MEMORY_COPY); + +static void +gst_hip_download_class_init (GstHipDownloadClass * klass) +{ + auto element_class = GST_ELEMENT_CLASS (klass); + + gst_element_class_set_static_metadata (element_class, + "HIP Downloader", "Filter/Video", + "Downloads HIP device memory into system memory", + "Seungha Yang <seungha@centricular.com>"); + + auto sys_caps = gst_caps_from_string (GST_VIDEO_CAPS_MAKE (GST_HIP_FORMATS)); + auto hip_caps = gst_caps_from_string (GST_VIDEO_CAPS_MAKE_WITH_FEATURES + (GST_CAPS_FEATURE_MEMORY_HIP_MEMORY, GST_HIP_FORMATS)); + + auto sink_caps = gst_caps_merge (gst_caps_ref (hip_caps), + 
gst_caps_ref (sys_caps)); + + auto src_caps = sys_caps; +#ifdef HAVE_GST_HIP_GL + auto gl_caps = gst_caps_from_string (GST_VIDEO_CAPS_MAKE_WITH_FEATURES + (GST_CAPS_FEATURE_MEMORY_GL_MEMORY, GST_HIP_FORMATS)); + src_caps = gst_caps_merge (src_caps, gl_caps); +#endif + +#ifdef HAVE_GST_CUDA + auto cuda_caps = gst_caps_from_string (GST_VIDEO_CAPS_MAKE_WITH_FEATURES + (GST_CAPS_FEATURE_MEMORY_CUDA_MEMORY, GST_HIP_FORMATS)); + src_caps = gst_caps_merge (src_caps, cuda_caps); +#endif + + src_caps = gst_caps_merge (src_caps, hip_caps); + + GST_MINI_OBJECT_FLAG_SET (sink_caps, GST_MINI_OBJECT_FLAG_MAY_BE_LEAKED); + GST_MINI_OBJECT_FLAG_SET (src_caps, GST_MINI_OBJECT_FLAG_MAY_BE_LEAKED); + + gst_element_class_add_pad_template (element_class, + gst_pad_template_new ("sink", GST_PAD_SINK, GST_PAD_ALWAYS, sink_caps)); + gst_element_class_add_pad_template (element_class, + gst_pad_template_new ("src", GST_PAD_SRC, GST_PAD_ALWAYS, src_caps)); +} + +static void +gst_hip_download_init (GstHipDownload * self) +{ + auto memcpy = GST_HIP_MEMORY_COPY (self); + memcpy->priv->is_uploader = false; +}
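The per-plane copies in `gst_hip_memory_copy_device_copy` above use 2D pitched transfers (`CUDA_MEMCPY2D` / `hip_Memcpy2D`): each plane is a `WidthInBytes` × `Height` region whose rows may be padded to different strides on the source and destination. A minimal CPU sketch of that access pattern, in Python (the helper name is illustrative, not from the element):

```python
def pitched_copy(src, src_pitch, dst, dst_pitch, width_in_bytes, height):
    # Row-by-row copy between buffers whose rows are padded to different
    # pitches (strides) -- the same addressing that CuMemcpy2DAsync /
    # HipMemcpyParam2DAsync performs on device memory.
    for row in range(height):
        s = row * src_pitch
        d = row * dst_pitch
        dst[d:d + width_in_bytes] = src[s:s + width_in_bytes]

# 2 rows of 2 payload bytes; source rows padded to 4 bytes, destination to 3.
src = bytearray(b"AB..CD..")
dst = bytearray(6)
pitched_copy(src, 4, dst, 3, 2, 2)
# dst is now b"AB\x00CD\x00"
```

This also shows why the GL path clamps `param.WidthInBytes = MIN (param.srcPitch, param.dstPitch)`: a row copy must never read or write past the shorter pitch.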
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/hip/gsthipmemorycopy.h
Added
@@ -0,0 +1,63 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/gst.h> +#include <gst/base/gstbasetransform.h> +#include <gst/hip/gsthip.h> + +G_BEGIN_DECLS + +#define GST_TYPE_HIP_MEMORY_COPY (gst_hip_memory_copy_get_type()) +#define GST_HIP_MEMORY_COPY(obj) (G_TYPE_CHECK_INSTANCE_CAST((obj),GST_TYPE_HIP_MEMORY_COPY,GstHipMemoryCopy)) +#define GST_HIP_MEMORY_COPY_CLASS(klass) (G_TYPE_CHECK_CLASS_CAST((klass),GST_TYPE_HIP_MEMORY_COPY,GstHipMemoryCopyClass)) +#define GST_HIP_MEMORY_COPY_GET_CLASS(obj) (GST_HIP_MEMORY_COPY_CLASS(G_OBJECT_GET_CLASS(obj))) +#define GST_IS_HIP_MEMORY_COPY(obj) (G_TYPE_CHECK_INSTANCE_TYPE((obj),GST_TYPE_HIP_MEMORY_COPY)) +#define GST_IS_HIP_MEMORY_COPY_CLASS(klass) (G_TYPE_CHECK_CLASS_TYPE((klass),GST_TYPE_HIP_MEMORY_COPY)) + +typedef struct _GstHipMemoryCopy GstHipMemoryCopy; +typedef struct _GstHipMemoryCopyClass GstHipMemoryCopyClass; +typedef struct _GstHipMemoryCopyPrivate GstHipMemoryCopyPrivate; + +struct _GstHipMemoryCopy +{ + GstBaseTransform parent; + + GstHipMemoryCopyPrivate *priv; +}; + +struct _GstHipMemoryCopyClass +{ + GstBaseTransformClass parent_class; +}; + +GType gst_hip_memory_copy_get_type (void); 
+G_DEFINE_AUTOPTR_CLEANUP_FUNC(GstHipMemoryCopy, gst_object_unref) + +#define GST_TYPE_HIP_UPLOAD (gst_hip_upload_get_type()) +G_DECLARE_FINAL_TYPE (GstHipUpload, + gst_hip_upload, GST, HIP_UPLOAD, GstHipMemoryCopy); + +#define GST_TYPE_HIP_DOWNLOAD (gst_hip_download_get_type()) +G_DECLARE_FINAL_TYPE (GstHipDownload, + gst_hip_download, GST, HIP_DOWNLOAD, GstHipMemoryCopy); + +G_END_DECLS +
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/hip/kernel
Added
+(directory)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/hip/kernel/collect_hsaco_headers.py
Added
@@ -0,0 +1,92 @@ +#!/usr/bin/env python3 +# GStreamer +# Copyright (C) 2025 Seungha Yang <seungha@centricular.com> +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Library General Public +# License as published by the Free Software Foundation; either +# version 2 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Library General Public License for more details. +# +# You should have received a copy of the GNU Library General Public +# License along with this library; if not, write to the +# Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, +# Boston, MA 02110-1301, USA. + +import sys +import os +import argparse + +start_header = """/* + * This file is autogenerated by collect_hsaco_headers.py + */ +#pragma once + +""" + +start_map = """ +#define MAKE_BYTECODE(name) { G_STRINGIFY (name), g_##name } +static std::unordered_map<std::string, const unsigned char *> +""" + +end_map = """}; +#undef MAKE_BYTECODE +""" + +def convert_hsaco_to_header(hsaco_file, header_file): + with open(hsaco_file, 'rb') as f: + hsaco_content = f.read() + + header_lines = [] + header_lines.append("// Generated by collect_hsaco_headers.py") + header_lines.append("#pragma once") + header_lines.append("/* Generated by bin2header.py */") + header_lines.append("static const unsigned char g_{}[] = {{".format(os.path.splitext(os.path.basename(hsaco_file))[0])) + + bytes_per_line = 12 + for i in range(0, len(hsaco_content), bytes_per_line): + chunk = hsaco_content[i:i+bytes_per_line] + line = " " + ", ".join("0x{:02x}".format(b) for b in chunk) + if i + bytes_per_line < len(hsaco_content): + line += "," + header_lines.append(line) + + header_lines.append("};") + header_lines.append("") + header_content = "\n".join(header_lines) + 
with open(header_file, "w", encoding='utf8') as f: + f.write(header_content) + +def main(args): + parser = argparse.ArgumentParser(description='Read HIP HSACO from directory and make single header') + parser.add_argument("--input", help="the precompiled HIP HSACO directory") + parser.add_argument("--output", help="output header file location") + parser.add_argument("--prefix", help="HIP HSACO header filename prefix") + parser.add_argument("--name", help="Hash map variable name") + + args = parser.parse_args(args) + + hsaco_files = [os.path.join(args.input, file) for file in os.listdir(args.input) if file.startswith(args.prefix) and file.endswith(".hsaco")] + + with open(args.output, 'w', newline='\n', encoding='utf8') as f: + f.write(start_header) + for hsaco_file in hsaco_files: + header_file = os.path.splitext(hsaco_file)[0] + '.h' + convert_hsaco_to_header(hsaco_file, header_file) + f.write("#include \"") + f.write(os.path.basename(header_file)) + f.write("\"\n") + f.write(start_map) + f.write(args.name) + f.write(" = {\n") + for hsaco_file in hsaco_files: + f.write(" MAKE_BYTECODE ({}),\n".format(os.path.splitext(os.path.basename(hsaco_file))[0])) + f.write(end_map) + +if __name__ == "__main__": + sys.exit(main(sys.argv[1:]))
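The byte-array emission performed by `convert_hsaco_to_header` above can be condensed into a small sketch; `bytes_to_c_array` is an illustrative name, not part of the script:

```python
def bytes_to_c_array(name, payload, bytes_per_line=12):
    # Emit a C unsigned-char array literal, up to 12 comma-separated hex
    # bytes per row -- the same layout the script writes into each header.
    lines = ["static const unsigned char g_{}[] = {{".format(name)]
    for i in range(0, len(payload), bytes_per_line):
        chunk = payload[i:i + bytes_per_line]
        line = "  " + ", ".join("0x{:02x}".format(b) for b in chunk)
        if i + bytes_per_line < len(payload):
            line += ","
        lines.append(line)
    lines.append("};")
    return "\n".join(lines)

header = bytes_to_c_array("kernel", b"\x7fELF")
# header:
#   static const unsigned char g_kernel[] = {
#     0x7f, 0x45, 0x4c, 0x46
#   };
```

Embedding the compiled HSACO as a byte array lets the plugin ship precompiled GPU code with no runtime file dependency; the `MAKE_BYTECODE` map then lets the converter look bytecode up by kernel name.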
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/hip/kernel/collect_ptx_headers.py
Added
@@ -0,0 +1,79 @@ +#!/usr/bin/env python3 +# GStreamer +# Copyright (C) 2025 Seungha Yang <seungha@centricular.com> +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Library General Public +# License as published by the Free Software Foundation; either +# version 2 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Library General Public License for more details. +# +# You should have received a copy of the GNU Library General Public +# License along with this library; if not, write to the +# Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, +# Boston, MA 02110-1301, USA. + +import sys +import os +import argparse + +start_header = """/* + * This file is autogenerated by collect_ptx_headers.py + */ +#pragma once + +""" + +start_map = """ +#define MAKE_BYTECODE(name) { G_STRINGIFY (name), g_##name } +static std::unordered_map<std::string, const char *> +""" + +end_map = """}; +#undef MAKE_BYTECODE +""" + +def convert_ptx_to_header(ptx_file, header_file): + with open(ptx_file, 'r', encoding='utf8') as ptx: + ptx_content = ptx.read() + + with open(header_file, 'w', newline='\n', encoding='utf8') as header: + header.write('#pragma once\n') + header.write('// This file is autogenerated by collect_ptx_headers.py\n') + header.write(f'static const char* g_{os.path.splitext(os.path.basename(ptx_file))[0]} = R"(\n') + header.write(ptx_content) + header.write(')";\n\n') + + +def main(args): + parser = argparse.ArgumentParser(description='Read CUDA PTX from directory and make single header') + parser.add_argument("--input", help="the precompiled CUDA PTX directory") + parser.add_argument("--output", help="output header file location") + parser.add_argument("--prefix", help="CUDA PTX header filename 
prefix") + parser.add_argument("--name", help="Hash map variable name") + + args = parser.parse_args(args) + + ptx_files = [os.path.join(args.input, file) for file in os.listdir(args.input) if file.startswith(args.prefix) and file.endswith(".ptx")] + + with open(args.output, 'w', newline='\n', encoding='utf8') as f: + f.write(start_header) + for ptx_file in ptx_files: + header_file = os.path.splitext(ptx_file)[0] + '.h' + convert_ptx_to_header(ptx_file, header_file) + f.write("#include \"") + f.write(os.path.basename(header_file)) + f.write("\"\n") + f.write(start_map) + f.write(args.name) + f.write(" = {\n") + for ptx_file in ptx_files: + f.write(" MAKE_BYTECODE ({}),\n".format(os.path.splitext(os.path.basename(ptx_file))[0])) + f.write(end_map) + +if __name__ == "__main__": + sys.exit(main(sys.argv[1:]))
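Unlike the binary HSACO path, `convert_ptx_to_header` above embeds PTX as text, wrapping it in a C++11 raw string literal so no per-character escaping is needed. A minimal sketch of that wrapping (the helper name is illustrative, not from the script):

```python
def ptx_to_raw_string(name, ptx_text):
    # Wrap PTX source in a C++11 raw string literal, mirroring the
    # f-string the script writes into each generated header.
    return 'static const char* g_{} = R"(\n{})";\n'.format(name, ptx_text)

snippet = ptx_to_raw_string("unpack", ".version 8.0\n")
```

Text PTX can be stored this way because it contains no arbitrary bytes; the binary HSACO objects go through the byte-array path in `collect_hsaco_headers.py` instead.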
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/hip/kernel/converter-unpack.cu
Added
@@ -0,0 +1,172 @@
+/* GStreamer
+ * Copyright (C) 2025 Seungha Yang <seungha@centricular.com>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Library General Public
+ * License as published by the Free Software Foundation; either
+ * version 2 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Library General Public License for more details.
+ *
+ * You should have received a copy of the GNU Library General Public
+ * License along with this library; if not, write to the
+ * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
+ * Boston, MA 02110-1301, USA.
+ */
+
+#if defined(__NVCC__) || defined(__HIPCC__)
+#ifdef __HIPCC__
+#include <hip/hip_runtime.h>
+#endif
+
+extern "C" {
+__global__ void
+GstHipConverterUnpack_RGB_RGBx
+(unsigned char *src, unsigned char *dst, int width, int height,
+ int src_stride, int dst_stride)
+{
+  int x_pos = blockIdx.x * blockDim.x + threadIdx.x;
+  int y_pos = blockIdx.y * blockDim.y + threadIdx.y;
+  if (x_pos < width && y_pos < height) {
+    int dst_pos = x_pos * 4 + y_pos * dst_stride;
+    int src_pos = x_pos * 3 + y_pos * src_stride;
+    dst[dst_pos] = src[src_pos];
+    dst[dst_pos + 1] = src[src_pos + 1];
+    dst[dst_pos + 2] = src[src_pos + 2];
+    dst[dst_pos + 3] = 0xff;
+  }
+}
+
+__global__ void
+GstHipConverterUnpack_RGB10A2_ARGB64
+(unsigned char *src, unsigned char *dst, int width, int height,
+ int src_stride, int dst_stride)
+{
+  int x_pos = blockIdx.x * blockDim.x + threadIdx.x;
+  int y_pos = blockIdx.y * blockDim.y + threadIdx.y;
+  if (x_pos < width && y_pos < height) {
+    unsigned short a, r, g, b;
+    unsigned int val;
+    int dst_pos = x_pos * 8 + y_pos * dst_stride;
+    val = *(unsigned int *)&src[x_pos * 4 + y_pos * src_stride];
+    a = (val >> 30) & 0x03;
+    a = (a << 14) | (a << 12) | (a << 10) | (a << 8) | (a << 6) | (a << 4) | (a << 2) | (a << 0);
+    r = (val & 0x3ff);
+    r = (r << 6) | (r >> 4);
+    g = ((val >> 10) & 0x3ff);
+    g = (g << 6) | (g >> 4);
+    b = ((val >> 20) & 0x3ff);
+    b = (b << 6) | (b >> 4);
+    *(unsigned short *) &dst[dst_pos] = a;
+    *(unsigned short *) &dst[dst_pos + 2] = r;
+    *(unsigned short *) &dst[dst_pos + 4] = g;
+    *(unsigned short *) &dst[dst_pos + 6] = b;
+  }
+}
+
+__global__ void
+GstHipConverterUnpack_BGR10A2_ARGB64
+(unsigned char *src, unsigned char *dst, int width, int height,
+ int src_stride, int dst_stride)
+{
+  int x_pos = blockIdx.x * blockDim.x + threadIdx.x;
+  int y_pos = blockIdx.y * blockDim.y + threadIdx.y;
+  if (x_pos < width && y_pos < height) {
+    unsigned short a, r, g, b;
+    unsigned int val;
+    int dst_pos = x_pos * 8 + y_pos * dst_stride;
+    val = *(unsigned int *)&src[x_pos * 4 + y_pos * src_stride];
+    a = (val >> 30) & 0x03;
+    a = (a << 14) | (a << 12) | (a << 10) | (a << 8) | (a << 6) | (a << 4) | (a << 2) | (a << 0);
+    b = (val & 0x3ff);
+    b = (b << 6) | (b >> 4);
+    g = ((val >> 10) & 0x3ff);
+    g = (g << 6) | (g >> 4);
+    r = ((val >> 20) & 0x3ff);
+    r = (r << 6) | (r >> 4);
+    *(unsigned short *) &dst[dst_pos] = a;
+    *(unsigned short *) &dst[dst_pos + 2] = r;
+    *(unsigned short *) &dst[dst_pos + 4] = g;
+    *(unsigned short *) &dst[dst_pos + 6] = b;
+  }
+}
+}
+#else
+static const char ConverterUnpack_str[] =
+"extern \"C\" {\n"
+"__global__ void\n"
+"GstHipConverterUnpack_RGB_RGBx\n"
+"(unsigned char *src, unsigned char *dst, int width, int height,\n"
+" int src_stride, int dst_stride)\n"
+"{\n"
+"  int x_pos = blockIdx.x * blockDim.x + threadIdx.x;\n"
+"  int y_pos = blockIdx.y * blockDim.y + threadIdx.y;\n"
+"  if (x_pos < width && y_pos < height) {\n"
+"    int dst_pos = x_pos * 4 + y_pos * dst_stride;\n"
+"    int src_pos = x_pos * 3 + y_pos * src_stride;\n"
+"    dst[dst_pos] = src[src_pos];\n"
+"    dst[dst_pos + 1] = src[src_pos + 1];\n"
+"    dst[dst_pos + 2] = src[src_pos + 2];\n"
+"    dst[dst_pos + 3] = 0xff;\n"
+"  }\n"
+"}\n"
+"\n"
+"__global__ void\n"
+"GstHipConverterUnpack_RGB10A2_ARGB64\n"
+"(unsigned char *src, unsigned char *dst, int width, int height,\n"
+" int src_stride, int dst_stride)\n"
+"{\n"
+"  int x_pos = blockIdx.x * blockDim.x + threadIdx.x;\n"
+"  int y_pos = blockIdx.y * blockDim.y + threadIdx.y;\n"
+"  if (x_pos < width && y_pos < height) {\n"
+"    unsigned short a, r, g, b;\n"
+"    unsigned int val;\n"
+"    int dst_pos = x_pos * 8 + y_pos * dst_stride;\n"
+"    val = *(unsigned int *)&src[x_pos * 4 + y_pos * src_stride];\n"
+"    a = (val >> 30) & 0x03;\n"
+"    a = (a << 14) | (a << 12) | (a << 10) | (a << 8) | (a << 6) | (a << 4) | (a << 2) | (a << 0);\n"
+"    r = (val & 0x3ff);\n"
+"    r = (r << 6) | (r >> 4);\n"
+"    g = ((val >> 10) & 0x3ff);\n"
+"    g = (g << 6) | (g >> 4);\n"
+"    b = ((val >> 20) & 0x3ff);\n"
+"    b = (b << 6) | (b >> 4);\n"
+"    *(unsigned short *) &dst[dst_pos] = a;\n"
+"    *(unsigned short *) &dst[dst_pos + 2] = r;\n"
+"    *(unsigned short *) &dst[dst_pos + 4] = g;\n"
+"    *(unsigned short *) &dst[dst_pos + 6] = b;\n"
+"  }\n"
+"}\n"
+"\n"
+"__global__ void\n"
+"GstHipConverterUnpack_BGR10A2_ARGB64\n"
+"(unsigned char *src, unsigned char *dst, int width, int height,\n"
+" int src_stride, int dst_stride)\n"
+"{\n"
+"  int x_pos = blockIdx.x * blockDim.x + threadIdx.x;\n"
+"  int y_pos = blockIdx.y * blockDim.y + threadIdx.y;\n"
+"  if (x_pos < width && y_pos < height) {\n"
+"    unsigned short a, r, g, b;\n"
+"    unsigned int val;\n"
+"    int dst_pos = x_pos * 8 + y_pos * dst_stride;\n"
+"    val = *(unsigned int *)&src[x_pos * 4 + y_pos * src_stride];\n"
+"    a = (val >> 30) & 0x03;\n"
+"    a = (a << 14) | (a << 12) | (a << 10) | (a << 8) | (a << 6) | (a << 4) | (a << 2) | (a << 0);\n"
+"    b = (val & 0x3ff);\n"
+"    b = (b << 6) | (b >> 4);\n"
+"    g = ((val >> 10) & 0x3ff);\n"
+"    g = (g << 6) | (g >> 4);\n"
+"    r = ((val >> 20) & 0x3ff);\n"
+"    r = (r << 6) | (r >> 4);\n"
+"    *(unsigned short *) &dst[dst_pos] = a;\n"
+"    *(unsigned short *) &dst[dst_pos + 2] = r;\n"
+"    *(unsigned short *) &dst[dst_pos + 4] = g;\n"
+"    *(unsigned short *) &dst[dst_pos + 6] = b;\n"
+"  }\n"
+"}\n"
+"}\n"
+"\n";
+#endif
\ No newline at end of file
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/hip/kernel/converter.cu
Added
@@ -0,0 +1,2931 @@
+/* GStreamer
+ * Copyright (C) 2025 Seungha Yang <seungha@centricular.com>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Library General Public
+ * License as published by the Free Software Foundation; either
+ * version 2 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Library General Public License for more details.
+ *
+ * You should have received a copy of the GNU Library General Public
+ * License along with this library; if not, write to the
+ * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
+ * Boston, MA 02110-1301, USA.
+ */
+
+#if defined(__NVCC__) || defined(__HIPCC__)
+#ifdef __HIPCC__
+#include <hip/hip_runtime.h>
+#define TextureObject_t hipTextureObject_t
+#else
+#define TextureObject_t cudaTextureObject_t
+#endif
+
+struct ColorMatrix
+{
+  float CoeffX[3];
+  float CoeffY[3];
+  float CoeffZ[3];
+  float Offset[3];
+  float Min[3];
+  float Max[3];
+};
+
+struct ConstBuffer
+{
+  ColorMatrix matrix;
+  int width;
+  int height;
+  int left;
+  int top;
+  int right;
+  int bottom;
+  int view_width;
+  int view_height;
+  float border_x;
+  float border_y;
+  float border_z;
+  float border_w;
+  int fill_border;
+  int video_direction;
+  float alpha;
+  int do_blend;
+  int do_convert;
+};
+
+__device__ inline float
+dot (const float coeff[3], float3 val)
+{
+  return coeff[0] * val.x + coeff[1] * val.y + coeff[2] * val.z;
+}
+
+__device__ inline float
+clamp (float val, float min_val, float max_val)
+{
+  return max (min_val, min (val, max_val));
+}
+
+__device__ inline float3
+clamp3 (float3 val, const float min_val[3], const float max_val[3])
+{
+  return make_float3 (clamp (val.x, min_val[0], max_val[0]),
+      clamp (val.y, min_val[1], max_val[1]),
+      clamp (val.z, min_val[2], max_val[2]));
+}
+
+__device__ inline unsigned char +scale_to_2bits (float val) +{ + return (unsigned short) __float2int_rz (val * 3.0); +} + +__device__ inline unsigned char +scale_to_uchar (float val) +{ + return (unsigned char) __float2int_rz (val * 255.0); +} + +__device__ inline unsigned short +scale_to_ushort (float val) +{ + return (unsigned short) __float2int_rz (val * 65535.0); +} + +__device__ inline unsigned short +scale_to_10bits (float val) +{ + return (unsigned short) __float2int_rz (val * 1023.0); +} + +__device__ inline unsigned short +scale_to_12bits (float val) +{ + return (unsigned short) __float2int_rz (val * 4095.0); +} + +__device__ inline unsigned char +blend_uchar (unsigned char dst, float src, float src_alpha) +{ + // DstColor' = SrcA * SrcColor + (1 - SrcA) DstColor + float src_val = src * src_alpha; + float dst_val = __int2float_rz (dst) / 255.0 * (1.0 - src_alpha); + return scale_to_uchar(clamp(src_val + dst_val, 0, 1.0)); +} + +__device__ inline unsigned short +blend_ushort (unsigned short dst, float src, float src_alpha) +{ + // DstColor' = SrcA * SrcColor + (1 - SrcA) DstColor + float src_val = src * src_alpha; + float dst_val = __int2float_rz (dst) / 65535.0 * (1.0 - src_alpha); + return scale_to_ushort(clamp(src_val + dst_val, 0, 1.0)); +} + +__device__ inline unsigned short +blend_10bits (unsigned short dst, float src, float src_alpha) +{ + // DstColor' = SrcA * SrcColor + (1 - SrcA) DstColor + float src_val = src * src_alpha; + float dst_val = __int2float_rz (dst) / 1023.0 * (1.0 - src_alpha); + return scale_to_10bits(clamp(src_val + dst_val, 0, 1.0)); +} + +__device__ inline unsigned short +blend_12bits (unsigned short dst, float src, float src_alpha) +{ + // DstColor' = SrcA * SrcColor + (1 - SrcA) DstColor + float src_val = src * src_alpha; + float dst_val = __int2float_rz (dst) / 4095.0 * (1.0 - src_alpha); + return scale_to_12bits(clamp(src_val + dst_val, 0, 1.0)); +} + +struct IConverter +{ + __device__ virtual float3 + Execute (float3 sample, 
const ColorMatrix * matrix) = 0; +}; + +struct ConvertSimple : public IConverter +{ + __device__ float3 + Execute (float3 sample, const ColorMatrix * matrix) + { + float3 out; + out.x = dot (matrix->CoeffX, sample); + out.y = dot (matrix->CoeffY, sample); + out.z = dot (matrix->CoeffZ, sample); + out.x += matrix->Offset0; + out.y += matrix->Offset1; + out.z += matrix->Offset2; + return clamp3 (out, matrix->Min, matrix->Max); + } +}; + +struct ISampler +{ + __device__ virtual float4 + Execute (TextureObject_t tex0, TextureObject_t tex1, + TextureObject_t tex2, TextureObject_t tex3, float x, float y) = 0; +}; + +struct SampleI420 : public ISampler +{ + __device__ float4 + Execute (TextureObject_t tex0, TextureObject_t tex1, + TextureObject_t tex2, TextureObject_t tex3, float x, float y) + { + float luma = tex2D<float>(tex0, x, y); + float u = tex2D<float>(tex1, x, y); + float v = tex2D<float>(tex2, x, y); + return make_float4 (luma, u, v, 1); + } +}; + +struct SampleYV12 : public ISampler +{ + __device__ float4 + Execute (TextureObject_t tex0, TextureObject_t tex1, + TextureObject_t tex2, TextureObject_t tex3, float x, float y) + { + float luma = tex2D<float>(tex0, x, y); + float u = tex2D<float>(tex2, x, y); + float v = tex2D<float>(tex1, x, y); + return make_float4 (luma, u, v, 1); + } +}; + +struct SampleI420_10 : public ISampler +{ + __device__ float4 + Execute (TextureObject_t tex0, TextureObject_t tex1, + TextureObject_t tex2, TextureObject_t tex3, float x, float y) + { + float luma = tex2D<float>(tex0, x, y); + float u = tex2D<float>(tex1, x, y); + float v = tex2D<float>(tex2, x, y); + /* (1 << 6) to scale 0, 1.0) range */ + return make_float4 (luma * 64, u * 64, v * 64, 1); + } +}; + +struct SampleI420_12 : public ISampler +{ + __device__ float4 + Execute (TextureObject_t tex0, TextureObject_t tex1, + TextureObject_t tex2, TextureObject_t tex3, float x, float y) + { + float luma = tex2D<float>(tex0, x, y); + float u = tex2D<float>(tex1, x, y); + float v = 
tex2D<float>(tex2, x, y); + /* (1 << 4) to scale 0, 1.0) range */ + return make_float4 (luma * 16, u * 16, v * 16, 1); + } +}; + +struct SampleNV12 : public ISampler +{ + __device__ float4 + Execute (TextureObject_t tex0, TextureObject_t tex1, + TextureObject_t tex2, TextureObject_t tex3, float x, float y) + { + float luma = tex2D<float>(tex0, x, y); + float2 uv = tex2D<float2>(tex1, x, y); + return make_float4 (luma, uv.x, uv.y, 1); + } +}; + +struct SampleNV21 : public ISampler +{ + __device__ float4 + Execute (TextureObject_t tex0, TextureObject_t tex1, + TextureObject_t tex2, TextureObject_t tex3, float x, float y) + { + float luma = tex2D<float>(tex0, x, y); + float2 vu = tex2D<float2>(tex1, x, y); + return make_float4 (luma, vu.y, vu.x, 1); + } +}; + +struct SampleRGBA : public ISampler +{ + __device__ float4 + Execute (TextureObject_t tex0, TextureObject_t tex1, + TextureObject_t tex2, TextureObject_t tex3, float x, float y) + { + return tex2D<float4>(tex0, x, y); + } +}; + +struct SampleBGRA : public ISampler +{ + __device__ float4 + Execute (TextureObject_t tex0, TextureObject_t tex1, + TextureObject_t tex2, TextureObject_t tex3, float x, float y) + { + float4 bgra = tex2D<float4>(tex0, x, y); + return make_float4 (bgra.z, bgra.y, bgra.x, bgra.w); + } +}; + +struct SampleRGBx : public ISampler +{ + __device__ float4 + Execute (TextureObject_t tex0, TextureObject_t tex1, + TextureObject_t tex2, TextureObject_t tex3, float x, float y) + { + float4 rgbx = tex2D<float4>(tex0, x, y); + rgbx.w = 1; + return rgbx; + } +}; + +struct SampleBGRx : public ISampler +{ + __device__ float4 + Execute (TextureObject_t tex0, TextureObject_t tex1, + TextureObject_t tex2, TextureObject_t tex3, float x, float y) + { + float4 bgrx = tex2D<float4>(tex0, x, y); + return make_float4 (bgrx.z, bgrx.y, bgrx.x, 1); + } +}; + +struct SampleARGB : public ISampler +{ + __device__ float4 + Execute (TextureObject_t tex0, TextureObject_t tex1, + TextureObject_t tex2, TextureObject_t tex3, 
float x, float y) + { + float4 argb = tex2D<float4>(tex0, x, y); + return make_float4 (argb.y, argb.z, argb.w, argb.x); + } +}; + +struct SampleABGR : public ISampler +{ + __device__ float4 + Execute (TextureObject_t tex0, TextureObject_t tex1, + TextureObject_t tex2, TextureObject_t tex3, float x, float y) + { + float4 abgr = tex2D<float4>(tex0, x, y); + return make_float4 (abgr.w, abgr.z, abgr.y, abgr.x); + } +}; + +struct SampleRGBP : public ISampler +{ + __device__ float4 + Execute (TextureObject_t tex0, TextureObject_t tex1, + TextureObject_t tex2, TextureObject_t tex3, float x, float y) + { + float r = tex2D<float>(tex0, x, y); + float g = tex2D<float>(tex1, x, y); + float b = tex2D<float>(tex2, x, y); + return make_float4 (r, g, b, 1); + } +}; + +struct SampleBGRP : public ISampler +{ + __device__ float4 + Execute (TextureObject_t tex0, TextureObject_t tex1, + TextureObject_t tex2, TextureObject_t tex3, float x, float y) + { + float b = tex2D<float>(tex0, x, y); + float g = tex2D<float>(tex1, x, y); + float r = tex2D<float>(tex2, x, y); + return make_float4 (r, g, b, 1); + } +}; + +struct SampleGBR : public ISampler +{ + __device__ float4 + Execute (TextureObject_t tex0, TextureObject_t tex1, + TextureObject_t tex2, TextureObject_t tex3, float x, float y) + { + float g = tex2D<float>(tex0, x, y); + float b = tex2D<float>(tex1, x, y); + float r = tex2D<float>(tex2, x, y); + return make_float4 (r, g, b, 1); + } +}; + +struct SampleGBR_10 : public ISampler +{ + __device__ float4 + Execute (TextureObject_t tex0, TextureObject_t tex1, + TextureObject_t tex2, TextureObject_t tex3, float x, float y) + { + float g = tex2D<float>(tex0, x, y); + float b = tex2D<float>(tex1, x, y); + float r = tex2D<float>(tex2, x, y); + /* (1 << 6) to scale 0, 1.0) range */ + return make_float4 (r * 64, g * 64, b * 64, 1); + } +}; + +struct SampleGBR_12 : public ISampler +{ + __device__ float4 + Execute (TextureObject_t tex0, TextureObject_t tex1, + TextureObject_t tex2, 
TextureObject_t tex3, float x, float y) + { + float g = tex2D<float>(tex0, x, y); + float b = tex2D<float>(tex1, x, y); + float r = tex2D<float>(tex2, x, y); + /* (1 << 4) to scale 0, 1.0) range */ + return make_float4 (r * 16, g * 16, b * 16, 1); + } +}; + +struct SampleGBRA : public ISampler +{ + __device__ float4 + Execute (TextureObject_t tex0, TextureObject_t tex1, + TextureObject_t tex2, TextureObject_t tex3, float x, float y) + { + float g = tex2D<float>(tex0, x, y); + float b = tex2D<float>(tex1, x, y); + float r = tex2D<float>(tex2, x, y); + float a = tex2D<float>(tex3, x, y); + return make_float4 (r, g, b, a); + } +}; + +struct SampleVUYA : public ISampler +{ + __device__ float4 + Execute (TextureObject_t tex0, TextureObject_t tex1, + TextureObject_t tex2, TextureObject_t tex3, float x, float y) + { + float4 vuya = tex2D<float4>(tex0, x, y); + return make_float4 (vuya.z, vuya.y, vuya.x, vuya.w); + } +}; + +struct IOutput +{ + __device__ virtual void + Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) = 0; + + __device__ virtual void + Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) = 0; +}; + +struct OutputI420 : public IOutput +{ + __device__ void + Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) + { + dst0x + y * stride0 = scale_to_uchar (sample.x); + if (x % 2 == 0 && y % 2 == 0) { + unsigned int pos = x / 2 + (y / 2) * stride1; + dst1pos = scale_to_uchar (sample.y); + dst2pos = scale_to_uchar (sample.z); + } + } + + __device__ void + Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) + { + unsigned int pos = x + y * stride0; + dst0pos = 
blend_uchar (dst0pos, sample.x, sample.w); + if (x % 2 == 0 && y % 2 == 0) { + pos = x / 2 + (y / 2) * stride1; + dst1pos = blend_uchar (dst1pos, sample.y, sample.w); + dst2pos = blend_uchar (dst2pos, sample.z, sample.w); + } + } +}; + +struct OutputYV12 : public IOutput +{ + __device__ void + Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) + { + dst0x + y * stride0 = scale_to_uchar (sample.x); + if (x % 2 == 0 && y % 2 == 0) { + unsigned int pos = x / 2 + (y / 2) * stride1; + dst1pos = scale_to_uchar (sample.z); + dst2pos = scale_to_uchar (sample.y); + } + } + + __device__ void + Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) + { + unsigned int pos = x + y * stride0; + dst0pos = blend_uchar (dst0pos, sample.x, sample.w); + if (x % 2 == 0 && y % 2 == 0) { + pos = x / 2 + (y / 2) * stride1; + dst1pos = blend_uchar (dst1pos, sample.z, sample.w); + dst2pos = blend_uchar (dst2pos, sample.y, sample.w); + } + } +}; + +struct OutputNV12 : public IOutput +{ + __device__ void + Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) + { + dst0x + y * stride0 = scale_to_uchar (sample.x); + if (x % 2 == 0 && y % 2 == 0) { + unsigned int pos = x + (y / 2) * stride1; + dst1pos = scale_to_uchar (sample.y); + dst1pos + 1 = scale_to_uchar (sample.z); + } + } + + __device__ void + Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) + { + unsigned int pos = x + y * stride0; + dst0pos = blend_uchar (dst0pos, sample.x, sample.w); + if (x % 2 == 0 && y % 2 == 0) { + pos = x + (y / 2) * stride1; + dst1pos = blend_uchar (dst1pos, sample.y, sample.w); + dst1pos + 1 = blend_uchar 
(dst1pos + 1, sample.z, sample.w); + } + } +}; + +struct OutputNV21 : public IOutput +{ + __device__ void + Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) + { + dst0x + y * stride0 = scale_to_uchar (sample.x); + if (x % 2 == 0 && y % 2 == 0) { + unsigned int pos = x + (y / 2) * stride1; + dst1pos = scale_to_uchar (sample.z); + dst1pos + 1 = scale_to_uchar (sample.y); + } + } + + __device__ void + Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) + { + unsigned int pos = x + y * stride0; + dst0pos = blend_uchar (dst0pos, sample.x, sample.w); + if (x % 2 == 0 && y % 2 == 0) { + pos = x + (y / 2) * stride1; + dst1pos = blend_uchar (dst1pos, sample.z, sample.w); + dst1pos + 1 = blend_uchar (dst1pos + 1, sample.y, sample.w); + } + } +}; + +struct OutputP010 : public IOutput +{ + __device__ void + Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) + { + *(unsigned short *) &dst0x * 2 + y * stride0 = scale_to_ushort (sample.x); + if (x % 2 == 0 && y % 2 == 0) { + unsigned int pos = x * 2 + (y / 2) * stride1; + *(unsigned short *) &dst1pos = scale_to_ushort (sample.y); + *(unsigned short *) &dst1pos + 2 = scale_to_ushort (sample.z); + } + } + + __device__ void + Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) + { + unsigned int pos = x * 2 + y * stride0; + unsigned short * target = (unsigned short *) &dst0pos; + *target = blend_ushort (*target, sample.x, sample.w); + if (x % 2 == 0 && y % 2 == 0) { + pos = x * 2 + (y / 2) * stride1; + target = (unsigned short *) &dst1pos; + *target = blend_ushort (*target, sample.y, sample.w); + target = (unsigned short *) 
&dst1pos + 2; + *target = blend_ushort (*target, sample.z, sample.w); + } + } +}; + +struct OutputI420_10 : public IOutput +{ + __device__ void + Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) + { + *(unsigned short *) &dst0x * 2 + y * stride0 = scale_to_10bits (sample.x); + if (x % 2 == 0 && y % 2 == 0) { + unsigned int pos = x + (y / 2) * stride1; + *(unsigned short *) &dst1pos = scale_to_10bits (sample.y); + *(unsigned short *) &dst2pos = scale_to_10bits (sample.z); + } + } + + __device__ void + Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) + { + unsigned int pos = x * 2 + y * stride0; + unsigned short * target = (unsigned short *) &dst0pos; + *target = blend_10bits (*target, sample.x, sample.w); + if (x % 2 == 0 && y % 2 == 0) { + pos = x * 2 + (y / 2) * stride1; + target = (unsigned short *) &dst1pos; + *target = blend_10bits (*target, sample.y, sample.w); + target = (unsigned short *) &dst2pos; + *target = blend_10bits (*target, sample.z, sample.w); + } + } +}; + +struct OutputI420_12 : public IOutput +{ + __device__ void + Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) + { + *(unsigned short *) &dst0x * 2 + y * stride0 = scale_to_12bits (sample.x); + if (x % 2 == 0 && y % 2 == 0) { + unsigned int pos = x + (y / 2) * stride1; + *(unsigned short *) &dst1pos = scale_to_12bits (sample.y); + *(unsigned short *) &dst2pos = scale_to_12bits (sample.z); + } + } + + __device__ void + Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) + { + unsigned int pos = x * 2 + y * stride0; + unsigned short * target = (unsigned short *) &dst0pos; + *target = 
blend_12bits (*target, sample.x, sample.w); + if (x % 2 == 0 && y % 2 == 0) { + pos = x * 2 + (y / 2) * stride1; + target = (unsigned short *) &dst1pos; + *target = blend_12bits (*target, sample.y, sample.w); + target = (unsigned short *) &dst2pos; + *target = blend_12bits (*target, sample.z, sample.w); + } + } +}; + +struct OutputY444 : public IOutput +{ + __device__ void + Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) + { + int pos = x + y * stride0; + dst0pos = scale_to_uchar (sample.x); + dst1pos = scale_to_uchar (sample.y); + dst2pos = scale_to_uchar (sample.z); + } + + __device__ void + Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) + { + int pos = x + y * stride0; + dst0pos = blend_uchar (dst0pos, sample.x, sample.w); + dst1pos = blend_uchar (dst1pos, sample.y, sample.w); + dst2pos = blend_uchar (dst2pos, sample.z, sample.w); + } +}; + +struct OutputY444_10 : public IOutput +{ + __device__ void + Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) + { + int pos = x * 2 + y * stride0; + *(unsigned short *) &dst0pos = scale_to_10bits (sample.x); + *(unsigned short *) &dst1pos = scale_to_10bits (sample.y); + *(unsigned short *) &dst2pos = scale_to_10bits (sample.z); + } + + __device__ void + Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) + { + int pos = x * 2 + y * stride0; + unsigned short * target = (unsigned short *) &dst0pos; + *target = blend_10bits (*target, sample.x, sample.w); + target = (unsigned short *) &dst1pos; + *target = blend_10bits (*target, sample.y, sample.w); + target = (unsigned short *) &dst2pos; + *target = blend_10bits 
(*target, sample.z, sample.w); + } +}; + +struct OutputY444_12 : public IOutput +{ + __device__ void + Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) + { + int pos = x * 2 + y * stride0; + *(unsigned short *) &dst0pos = scale_to_12bits (sample.x); + *(unsigned short *) &dst1pos = scale_to_12bits (sample.y); + *(unsigned short *) &dst2pos = scale_to_12bits (sample.z); + } + + __device__ void + Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) + { + int pos = x * 2 + y * stride0; + unsigned short * target = (unsigned short *) &dst0pos; + *target = blend_12bits (*target, sample.x, sample.w); + target = (unsigned short *) &dst1pos; + *target = blend_12bits (*target, sample.y, sample.w); + target = (unsigned short *) &dst2pos; + *target = blend_12bits (*target, sample.z, sample.w); + } +}; + +struct OutputY444_16 : public IOutput +{ + __device__ void + Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) + { + int pos = x * 2 + y * stride0; + *(unsigned short *) &dst0pos = scale_to_ushort (sample.x); + *(unsigned short *) &dst1pos = scale_to_ushort (sample.y); + *(unsigned short *) &dst2pos = scale_to_ushort (sample.z); + } + + __device__ void + Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) + { + int pos = x * 2 + y * stride0; + unsigned short * target = (unsigned short *) &dst0pos; + *target = blend_ushort (*target, sample.x, sample.w); + target = (unsigned short *) &dst1pos; + *target = blend_ushort (*target, sample.y, sample.w); + target = (unsigned short *) &dst2pos; + *target = blend_ushort (*target, sample.z, sample.w); + } +}; + +struct OutputRGBA : 
public IOutput +{ + __device__ void + Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) + { + int pos = x * 4 + y * stride0; + dst0pos = scale_to_uchar (sample.x); + dst0pos + 1 = scale_to_uchar (sample.y); + dst0pos + 2 = scale_to_uchar (sample.z); + dst0pos + 3 = scale_to_uchar (sample.w); + } + + __device__ void + Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) + { + int pos = x * 4 + y * stride0; + dst0pos = blend_uchar (dst0pos, sample.x, sample.w); + dst0pos + 1 = blend_uchar (dst0pos + 1, sample.y, sample.w); + dst0pos + 2 = blend_uchar (dst0pos + 2, sample.z, sample.w); + dst0pos + 3 = blend_uchar (dst0pos + 3, 1.0, sample.w); + } +}; + +struct OutputRGBx : public IOutput +{ + __device__ void + Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) + { + int pos = x * 4 + y * stride0; + dst0pos = scale_to_uchar (sample.x); + dst0pos + 1 = scale_to_uchar (sample.y); + dst0pos + 2 = scale_to_uchar (sample.z); + dst0pos + 3 = 255; + } + + __device__ void + Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) + { + int pos = x * 4 + y * stride0; + dst0pos = blend_uchar (dst0pos, sample.x, sample.w); + dst0pos + 1 = blend_uchar (dst0pos + 1, sample.y, sample.w); + dst0pos + 2 = blend_uchar (dst0pos + 2, sample.z, sample.w); + dst0pos + 3 = 255; + } +}; + +struct OutputBGRA : public IOutput +{ + __device__ void + Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) + { + int pos = x * 4 + y * stride0; + dst0pos = scale_to_uchar (sample.z); + dst0pos + 1 = 
scale_to_uchar (sample.y); + dst0pos + 2 = scale_to_uchar (sample.x); + dst0pos + 3 = scale_to_uchar (sample.w); + } + + __device__ void + Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) + { + int pos = x * 4 + y * stride0; + dst0pos = blend_uchar (dst0pos, sample.z, sample.w); + dst0pos + 1 = blend_uchar (dst0pos + 1, sample.y, sample.w); + dst0pos + 2 = blend_uchar (dst0pos + 2, sample.x, sample.w); + dst0pos + 3 = blend_uchar (dst0pos + 3, 1.0, sample.w); + } +}; + +struct OutputBGRx : public IOutput +{ + __device__ void + Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) + { + int pos = x * 4 + y * stride0; + dst0pos = scale_to_uchar (sample.z); + dst0pos + 1 = scale_to_uchar (sample.y); + dst0pos + 2 = scale_to_uchar (sample.x); + dst0pos + 3 = 255; + } + + __device__ void + Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) + { + int pos = x * 4 + y * stride0; + dst0pos = blend_uchar (dst0pos, sample.z, sample.w); + dst0pos + 1 = blend_uchar (dst0pos + 1, sample.y, sample.w); + dst0pos + 2 = blend_uchar (dst0pos + 2, sample.x, sample.w); + dst0pos + 3 = 255; + } +}; + +struct OutputARGB : public IOutput +{ + __device__ void + Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) + { + int pos = x * 4 + y * stride0; + dst0pos = scale_to_uchar (sample.w); + dst0pos + 1 = scale_to_uchar (sample.x); + dst0pos + 2 = scale_to_uchar (sample.y); + dst0pos + 3 = scale_to_uchar (sample.z); + } + + __device__ void + Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int 
stride1) + { + int pos = x * 4 + y * stride0; + dst0pos = blend_uchar (dst0pos, 1.0, sample.w); + dst0pos + 1 = blend_uchar (dst0pos + 1, sample.x, sample.w); + dst0pos + 2 = blend_uchar (dst0pos + 2, sample.y, sample.w); + dst0pos + 3 = blend_uchar (dst0pos + 3, sample.z, sample.w); + } +}; + +struct OutputABGR : public IOutput +{ + __device__ void + Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) + { + int pos = x * 4 + y * stride0; + dst0pos = scale_to_uchar (sample.w); + dst0pos + 1 = scale_to_uchar (sample.z); + dst0pos + 2 = scale_to_uchar (sample.y); + dst0pos + 3 = scale_to_uchar (sample.x); + } + + __device__ void + Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) + { + int pos = x * 4 + y * stride0; + dst0pos = blend_uchar (dst0pos, 1.0, sample.w); + dst0pos + 1 = blend_uchar (dst0pos + 1, sample.z, sample.w); + dst0pos + 2 = blend_uchar (dst0pos + 2, sample.y, sample.w); + dst0pos + 3 = blend_uchar (dst0pos + 3, sample.x, sample.w); + } +}; + +struct OutputRGB : public IOutput +{ + __device__ void + Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) + { + int pos = x * 3 + y * stride0; + dst0pos = scale_to_uchar (sample.x); + dst0pos + 1 = scale_to_uchar (sample.y); + dst0pos + 2 = scale_to_uchar (sample.z); + } + + __device__ void + Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2, + unsigned char * dst3, float4 sample, int x, int y, int stride0, + int stride1) + { + int pos = x * 3 + y * stride0; + dst0pos = blend_uchar (dst0pos, sample.x, sample.w); + dst0pos + 1 = blend_uchar (dst0pos + 1, sample.y, sample.w); + dst0pos + 2 = blend_uchar (dst0pos + 2, sample.z, sample.w); + } +}; + +struct OutputBGR : public IOutput +{ 
+  __device__ void
+  Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,
+      unsigned char * dst3, float4 sample, int x, int y, int stride0,
+      int stride1)
+  {
+    int pos = x * 3 + y * stride0;
+    dst0[pos] = scale_to_uchar (sample.z);
+    dst0[pos + 1] = scale_to_uchar (sample.y);
+    dst0[pos + 2] = scale_to_uchar (sample.x);
+  }
+
+  __device__ void
+  Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,
+      unsigned char * dst3, float4 sample, int x, int y, int stride0,
+      int stride1)
+  {
+    int pos = x * 3 + y * stride0;
+    dst0[pos] = blend_uchar (dst0[pos], sample.z, sample.w);
+    dst0[pos + 1] = blend_uchar (dst0[pos + 1], sample.y, sample.w);
+    dst0[pos + 2] = blend_uchar (dst0[pos + 2], sample.x, sample.w);
+  }
+};
+
+__device__ inline ushort3
+unpack_rgb10a2 (unsigned int val)
+{
+  unsigned short r, g, b;
+  r = (val & 0x3ff);
+  r = (r << 6) | (r >> 4);
+  g = ((val >> 10) & 0x3ff);
+  g = (g << 6) | (g >> 4);
+  b = ((val >> 20) & 0x3ff);
+  b = (b << 6) | (b >> 4);
+  return make_ushort3 (r, g, b);
+}
+
+struct OutputRGB10A2 : public IOutput
+{
+  __device__ void
+  Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,
+      unsigned char * dst3, float4 sample, int x, int y, int stride0,
+      int stride1)
+  {
+    unsigned int alpha = (unsigned int) scale_to_2bits (sample.w);
+    unsigned int packed_rgb = alpha << 30;
+    packed_rgb |= ((unsigned int) scale_to_10bits (sample.x));
+    packed_rgb |= ((unsigned int) scale_to_10bits (sample.y)) << 10;
+    packed_rgb |= ((unsigned int) scale_to_10bits (sample.z)) << 20;
+    *(unsigned int *) &dst0[x * 4 + y * stride0] = packed_rgb;
+  }
+
+  __device__ void
+  Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,
+      unsigned char * dst3, float4 sample, int x, int y, int stride0,
+      int stride1)
+  {
+    unsigned int * target = (unsigned int *) &dst0[x * 4 + y * stride0];
+    ushort3 val = unpack_rgb10a2 (*target);
+    unsigned int alpha = (unsigned int) scale_to_2bits (sample.w);
+    unsigned int packed_rgb = alpha << 30;
+    packed_rgb |= ((unsigned int) blend_10bits (val.x, sample.x, sample.w));
+    packed_rgb |= ((unsigned int) blend_10bits (val.y, sample.y, sample.w)) << 10;
+    packed_rgb |= ((unsigned int) blend_10bits (val.z, sample.z, sample.w)) << 20;
+    *target = packed_rgb;
+  }
+};
+
+__device__ inline ushort3
+unpack_bgr10a2 (unsigned int val)
+{
+  unsigned short r, g, b;
+  b = (val & 0x3ff);
+  b = (b << 6) | (b >> 4);
+  g = ((val >> 10) & 0x3ff);
+  g = (g << 6) | (g >> 4);
+  r = ((val >> 20) & 0x3ff);
+  r = (r << 6) | (r >> 4);
+  return make_ushort3 (r, g, b);
+}
+
+struct OutputBGR10A2 : public IOutput
+{
+  __device__ void
+  Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,
+      unsigned char * dst3, float4 sample, int x, int y, int stride0,
+      int stride1)
+  {
+    unsigned int alpha = (unsigned int) scale_to_2bits (sample.w);
+    unsigned int packed_rgb = alpha << 30;
+    packed_rgb |= ((unsigned int) scale_to_10bits (sample.x)) << 20;
+    packed_rgb |= ((unsigned int) scale_to_10bits (sample.y)) << 10;
+    packed_rgb |= ((unsigned int) scale_to_10bits (sample.z));
+    *(unsigned int *) &dst0[x * 4 + y * stride0] = packed_rgb;
+  }
+
+  __device__ void
+  Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,
+      unsigned char * dst3, float4 sample, int x, int y, int stride0,
+      int stride1)
+  {
+    unsigned int * target = (unsigned int *) &dst0[x * 4 + y * stride0];
+    ushort3 val = unpack_bgr10a2 (*target);
+    unsigned int alpha = (unsigned int) scale_to_2bits (sample.w);
+    unsigned int packed_rgb = alpha << 30;
+    packed_rgb |= ((unsigned int) blend_10bits (val.x, sample.x, sample.w)) << 20;
+    packed_rgb |= ((unsigned int) blend_10bits (val.y, sample.y, sample.w)) << 10;
+    packed_rgb |= ((unsigned int) blend_10bits (val.z, sample.z, sample.w));
+    *target = packed_rgb;
+  }
+};
+
+struct OutputY42B : public IOutput
+{
+  __device__ void
+  Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,
+      unsigned char * dst3, float4 sample, int x, int y, int stride0,
+      int stride1)
+  {
+    dst0[x + y * stride0] = scale_to_uchar (sample.x);
+    if (x % 2 == 0) {
+      unsigned int pos = x / 2 + y * stride1;
+      dst1[pos] = scale_to_uchar (sample.y);
+      dst2[pos] = scale_to_uchar (sample.z);
+    }
+  }
+
+  __device__ void
+  Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,
+      unsigned char * dst3, float4 sample, int x, int y, int stride0,
+      int stride1)
+  {
+    unsigned int pos = x + y * stride0;
+    dst0[pos] = blend_uchar (dst0[pos], sample.x, sample.w);
+    if (x % 2 == 0) {
+      pos = x / 2 + y * stride1;
+      dst1[pos] = blend_uchar (dst1[pos], sample.y, sample.w);
+      dst2[pos] = blend_uchar (dst2[pos], sample.z, sample.w);
+    }
+  }
+};
+
+struct OutputI422_10 : public IOutput
+{
+  __device__ void
+  Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,
+      unsigned char * dst3, float4 sample, int x, int y, int stride0,
+      int stride1)
+  {
+    *(unsigned short *) &dst0[x * 2 + y * stride0] = scale_to_10bits (sample.x);
+    if (x % 2 == 0) {
+      unsigned int pos = x + y * stride1;
+      *(unsigned short *) &dst1[pos] = scale_to_10bits (sample.y);
+      *(unsigned short *) &dst2[pos] = scale_to_10bits (sample.z);
+    }
+  }
+
+  __device__ void
+  Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,
+      unsigned char * dst3, float4 sample, int x, int y, int stride0,
+      int stride1)
+  {
+    unsigned int pos = x * 2 + y * stride0;
+    unsigned short * target = (unsigned short *) &dst0[pos];
+    *target = blend_10bits (*target, sample.x, sample.w);
+    if (x % 2 == 0) {
+      pos = x / 2 + y * stride1;
+      target = (unsigned short *) &dst1[pos];
+      *target = blend_10bits (*target, sample.y, sample.w);
+      target = (unsigned short *) &dst2[pos];
+      *target = blend_10bits (*target, sample.z, sample.w);
+    }
+  }
+};
+
+struct OutputI422_12 : public IOutput
+{
+  __device__ void
+  Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,
+      unsigned char * dst3, float4 sample, int x, int y, int stride0,
+      int stride1)
+  {
+    *(unsigned short *) &dst0[x * 2 + y * stride0] = scale_to_12bits (sample.x);
+    if (x % 2 == 0) {
+      unsigned int pos = x + y * stride1;
+      *(unsigned short *) &dst1[pos] = scale_to_12bits (sample.y);
+      *(unsigned short *) &dst2[pos] = scale_to_12bits (sample.z);
+    }
+  }
+
+  __device__ void
+  Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,
+      unsigned char * dst3, float4 sample, int x, int y, int stride0,
+      int stride1)
+  {
+    unsigned int pos = x * 2 + y * stride0;
+    unsigned short * target = (unsigned short *) &dst0[pos];
+    *target = blend_12bits (*target, sample.x, sample.w);
+    if (x % 2 == 0) {
+      pos = x / 2 + y * stride1;
+      target = (unsigned short *) &dst1[pos];
+      *target = blend_12bits (*target, sample.y, sample.w);
+      target = (unsigned short *) &dst2[pos];
+      *target = blend_12bits (*target, sample.z, sample.w);
+    }
+  }
+};
+
+struct OutputRGBP : public IOutput
+{
+  __device__ void
+  Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,
+      unsigned char * dst3, float4 sample, int x, int y, int stride0,
+      int stride1)
+  {
+    int pos = x + y * stride0;
+    dst0[pos] = scale_to_uchar (sample.x);
+    dst1[pos] = scale_to_uchar (sample.y);
+    dst2[pos] = scale_to_uchar (sample.z);
+  }
+
+  __device__ void
+  Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,
+      unsigned char * dst3, float4 sample, int x, int y, int stride0,
+      int stride1)
+  {
+    int pos = x + y * stride0;
+    dst0[pos] = blend_uchar (dst0[pos], sample.x, sample.w);
+    dst1[pos] = blend_uchar (dst1[pos], sample.y, sample.w);
+    dst2[pos] = blend_uchar (dst2[pos], sample.z, sample.w);
+  }
+};
+
+struct OutputBGRP : public IOutput
+{
+  __device__ void
+  Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,
+      unsigned char * dst3, float4 sample, int x, int y, int stride0,
+      int stride1)
+  {
+    int pos = x + y * stride0;
+    dst0[pos] = scale_to_uchar (sample.z);
+    dst1[pos] = scale_to_uchar (sample.y);
+    dst2[pos] = scale_to_uchar (sample.x);
+  }
+
+  __device__ void
+  Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,
+      unsigned char * dst3, float4 sample, int x, int y, int stride0,
+      int stride1)
+  {
+    int pos = x + y * stride0;
+    dst0[pos] = blend_uchar (dst0[pos], sample.z, sample.w);
+    dst1[pos] = blend_uchar (dst1[pos], sample.y, sample.w);
+    dst2[pos] = blend_uchar (dst2[pos], sample.x, sample.w);
+  }
+};
+
+struct OutputGBR : public IOutput
+{
+  __device__ void
+  Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,
+      unsigned char * dst3, float4 sample, int x, int y, int stride0,
+      int stride1)
+  {
+    int pos = x + y * stride0;
+    dst0[pos] = scale_to_uchar (sample.y);
+    dst1[pos] = scale_to_uchar (sample.z);
+    dst2[pos] = scale_to_uchar (sample.x);
+  }
+
+  __device__ void
+  Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,
+      unsigned char * dst3, float4 sample, int x, int y, int stride0,
+      int stride1)
+  {
+    int pos = x + y * stride0;
+    dst0[pos] = blend_uchar (dst0[pos], sample.y, sample.w);
+    dst1[pos] = blend_uchar (dst1[pos], sample.z, sample.w);
+    dst2[pos] = blend_uchar (dst2[pos], sample.x, sample.w);
+  }
+};
+
+struct OutputGBR_10 : public IOutput
+{
+  __device__ void
+  Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,
+      unsigned char * dst3, float4 sample, int x, int y, int stride0,
+      int stride1)
+  {
+    int pos = x * 2 + y * stride0;
+    *(unsigned short *) &dst0[pos] = scale_to_10bits (sample.y);
+    *(unsigned short *) &dst1[pos] = scale_to_10bits (sample.z);
+    *(unsigned short *) &dst2[pos] = scale_to_10bits (sample.x);
+  }
+
+  __device__ void
+  Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,
+      unsigned char * dst3, float4 sample, int x, int y, int stride0,
+      int stride1)
+  {
+    int pos = x * 2 + y * stride0;
+    unsigned short * target = (unsigned short *) &dst0[pos];
+    *target = blend_10bits (*target, sample.y, sample.w);
+    target = (unsigned short *) &dst1[pos];
+    *target = blend_10bits (*target, sample.z, sample.w);
+    target = (unsigned short *) &dst2[pos];
+    *target = blend_10bits (*target, sample.x, sample.w);
+  }
+};
+
+struct OutputGBR_12 : public IOutput
+{
+  __device__ void
+  Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,
+      unsigned char * dst3, float4 sample, int x, int y, int stride0,
+      int stride1)
+  {
+    int pos = x * 2 + y * stride0;
+    *(unsigned short *) &dst0[pos] = scale_to_12bits (sample.y);
+    *(unsigned short *) &dst1[pos] = scale_to_12bits (sample.z);
+    *(unsigned short *) &dst2[pos] = scale_to_12bits (sample.x);
+  }
+
+  __device__ void
+  Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,
+      unsigned char * dst3, float4 sample, int x, int y, int stride0,
+      int stride1)
+  {
+    int pos = x * 2 + y * stride0;
+    unsigned short * target = (unsigned short *) &dst0[pos];
+    *target = blend_12bits (*target, sample.y, sample.w);
+    target = (unsigned short *) &dst1[pos];
+    *target = blend_12bits (*target, sample.z, sample.w);
+    target = (unsigned short *) &dst2[pos];
+    *target = blend_12bits (*target, sample.x, sample.w);
+  }
+};
+
+struct OutputGBR_16 : public IOutput
+{
+  __device__ void
+  Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,
+      unsigned char * dst3, float4 sample, int x, int y, int stride0,
+      int stride1)
+  {
+    int pos = x * 2 + y * stride0;
+    *(unsigned short *) &dst0[pos] = scale_to_ushort (sample.y);
+    *(unsigned short *) &dst1[pos] = scale_to_ushort (sample.z);
+    *(unsigned short *) &dst2[pos] = scale_to_ushort (sample.x);
+  }
+
+  __device__ void
+  Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,
+      unsigned char * dst3, float4 sample, int x, int y, int stride0,
+      int stride1)
+  {
+    int pos = x * 2 + y * stride0;
+    unsigned short * target = (unsigned short *) &dst0[pos];
+    *target = blend_ushort (*target, sample.y, sample.w);
+    target = (unsigned short *) &dst1[pos];
+    *target = blend_ushort (*target, sample.z, sample.w);
+    target = (unsigned short *) &dst2[pos];
+    *target = blend_ushort (*target, sample.x, sample.w);
+  }
+};
+
+struct OutputGBRA : public IOutput
+{
+  __device__ void
+  Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,
+      unsigned char * dst3, float4 sample, int x, int y, int stride0,
+      int stride1)
+  {
+    int pos = x + y * stride0;
+    dst0[pos] = scale_to_uchar (sample.y);
+    dst1[pos] = scale_to_uchar (sample.z);
+    dst2[pos] = scale_to_uchar (sample.x);
+    dst3[pos] = scale_to_uchar (sample.w);
+  }
+
+  __device__ void
+  Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,
+      unsigned char * dst3, float4 sample, int x, int y, int stride0,
+      int stride1)
+  {
+    int pos = x + y * stride0;
+    dst0[pos] = blend_uchar (dst0[pos], sample.y, sample.w);
+    dst1[pos] = blend_uchar (dst1[pos], sample.z, sample.w);
+    dst2[pos] = blend_uchar (dst2[pos], sample.x, sample.w);
+    dst3[pos] = blend_uchar (dst3[pos], 1.0, sample.w);
+  }
+};
+
+struct OutputVUYA : public IOutput
+{
+  __device__ void
+  Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,
+      unsigned char * dst3, float4 sample, int x, int y, int stride0,
+      int stride1)
+  {
+    int pos = x * 4 + y * stride0;
+    dst0[pos] = scale_to_uchar (sample.z);
+    dst0[pos + 1] = scale_to_uchar (sample.y);
+    dst0[pos + 2] = scale_to_uchar (sample.x);
+    dst0[pos + 3] = scale_to_uchar (sample.w);
+  }
+
+  __device__ void
+  Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,
+      unsigned char * dst3, float4 sample, int x, int y, int stride0,
+      int stride1)
+  {
+    int pos = x * 4 + y * stride0;
+    dst0[pos] = blend_uchar (dst0[pos], sample.z, sample.w);
+    dst0[pos + 1] = blend_uchar (dst0[pos + 1], sample.y, sample.w);
+    dst0[pos + 2] = blend_uchar (dst0[pos + 2], sample.x, sample.w);
+    dst0[pos + 3] = blend_uchar (dst0[pos + 3], 1.0, sample.w);
+  }
+};
+
+__device__ inline float2
+rotate_identity (float x, float y)
+{
+  return make_float2(x, y);
+}
+
+__device__ inline float2
+rotate_90r (float x, float y)
+{
+  return make_float2(y, 1.0 - x);
+}
+
+__device__ inline float2
+rotate_180 (float x, float y)
+{
+  return make_float2(1.0 - x, 1.0 - y);
+}
+
+__device__ inline float2
+rotate_90l (float x, float y)
+{
+  return make_float2(1.0 - y, x);
+}
+
+__device__ inline float2
+rotate_horiz (float x, float y)
+{
+  return make_float2(1.0 - x, y);
+}
+
+__device__ inline float2
+rotate_vert (float x, float y)
+{
+  return make_float2(x, 1.0 - y);
+}
+
+__device__ inline float2
+rotate_ul_lr (float x, float y)
+{
+  return make_float2(y, x);
+}
+
+__device__ inline float2
+rotate_ur_ll (float x, float y)
+{
+  return make_float2(1.0 - y, 1.0 - x);
+}
+
+__device__ inline float2
+do_rotate (float x, float y, int direction)
+{
+  switch (direction) {
+    case 1:
+      return rotate_90r (x, y);
+    case 2:
+      return rotate_180 (x, y);
+    case 3:
+      return rotate_90l (x, y);
+    case 4:
+      return rotate_horiz (x, y);
+    case 5:
+      return rotate_vert (x, y);
+    case 6:
+      return rotate_ul_lr (x, y);
+    case 7:
+      return rotate_ur_ll (x, y);
+    default:
+      return rotate_identity (x, y);
+  }
+}
+
+extern "C" {
+__global__ void
+GstHipConverterMain (TextureObject_t tex0, TextureObject_t tex1,
+    TextureObject_t tex2, TextureObject_t tex3, unsigned char * dst0,
+    unsigned char * dst1, unsigned char * dst2, unsigned char * dst3,
+    int stride0, int stride1, ConstBuffer const_buf, int off_x, int off_y)
+{
+  ConvertSimple g_converter;
+  SAMPLER g_sampler;
+  OUTPUT g_output;
+  int x_pos = blockIdx.x * blockDim.x + threadIdx.x + off_x;
+  int y_pos = blockIdx.y * blockDim.y + threadIdx.y + off_y;
+  float4 sample;
+  if (x_pos >= const_buf.width || y_pos >= const_buf.height ||
+      const_buf.view_width <= 0 || const_buf.view_height <= 0)
+    return;
+  if (x_pos < const_buf.left || x_pos >= const_buf.right ||
+      y_pos < const_buf.top || y_pos >= const_buf.bottom) {
+    if (!const_buf.fill_border)
+      return;
+    sample = make_float4 (const_buf.border_x, const_buf.border_y,
+        const_buf.border_z, const_buf.border_w);
+  } else {
+    float x = (__int2float_rz (x_pos - const_buf.left) + 0.5) / const_buf.view_width;
+    if (x < 0.0 || x > 1.0)
+      return;
+    float y = (__int2float_rz (y_pos - const_buf.top) + 0.5) / const_buf.view_height;
+    if (y < 0.0 || y > 1.0)
+      return;
+    float2 rotated = do_rotate (x, y, const_buf.video_direction);
+    float4 s = g_sampler.Execute (tex0, tex1, tex2, tex3, rotated.x, rotated.y);
+    float3 rgb = make_float3 (s.x, s.y, s.z);
+    float3 yuv;
+    if (const_buf.do_convert)
+      yuv = g_converter.Execute (rgb, &const_buf.matrix);
+    else
+      yuv = rgb;
+    sample = make_float4 (yuv.x, yuv.y, yuv.z, s.w);
+  }
+  sample.w = sample.w * const_buf.alpha;
+  if (!const_buf.do_blend) {
+    g_output.Write (dst0, dst1, dst2, dst3, sample, x_pos, y_pos, stride0, stride1);
+  } else {
+    g_output.Blend (dst0, dst1, dst2, dst3, sample, x_pos, y_pos, stride0, stride1);
+  }
+}
+}
+#else
+static const char ConverterMain_str[] =
+"struct ColorMatrix\n"
+"{\n"
+"  float CoeffX[3];\n"
+"  float CoeffY[3];\n"
+"  float CoeffZ[3];\n"
+"  float Offset[3];\n"
+"  float Min[3];\n"
+"  float Max[3];\n"
+"};\n"
+"\n"
+"struct ConstBuffer\n"
+"{\n"
+"  ColorMatrix matrix;\n"
+"  int width;\n"
+"  int height;\n"
+"  int left;\n"
+"  int top;\n"
+"  int right;\n"
+"  int bottom;\n"
+"  int view_width;\n"
+"  int view_height;\n"
+"  float border_x;\n"
+"  float border_y;\n"
+"  float border_z;\n"
+"  float border_w;\n"
+"  int fill_border;\n"
+"  int video_direction;\n"
+"  float alpha;\n"
+"  int do_blend;\n"
+"  int do_convert;\n"
+"};\n"
+"\n"
+"__device__ inline float\n"
+"dot (const float coeff[3], float3 val)\n"
+"{\n"
+"  return coeff[0] * val.x + coeff[1] * val.y + coeff[2] * val.z;\n"
+"}\n"
+"\n"
+"__device__ inline float\n"
+"clamp (float val, float min_val, float max_val)\n"
+"{\n"
+"  return max (min_val, min (val, max_val));\n"
+"}\n"
+"\n"
+"__device__ inline float3\n"
+"clamp3 (float3 val, const float min_val[3], const float max_val[3])\n"
+"{\n"
+"  return make_float3 (clamp (val.x, min_val[0], max_val[0]),\n"
+"      clamp (val.y, min_val[1], max_val[1]),\n"
+"      clamp (val.z, min_val[2], max_val[2]));\n"
+"}\n"
+"\n"
+"__device__ inline unsigned char\n"
+"scale_to_2bits (float val)\n"
+"{\n"
+"  return (unsigned short) __float2int_rz (val * 3.0);\n"
+"}\n"
+"\n"
+"__device__ inline unsigned char\n"
+"scale_to_uchar (float val)\n"
+"{\n"
+"  return (unsigned char) __float2int_rz (val * 255.0);\n"
+"}\n"
+"\n"
+"__device__ inline unsigned short\n"
+"scale_to_ushort (float val)\n"
+"{\n"
+"  return (unsigned short) __float2int_rz (val * 65535.0);\n"
+"}\n"
+"\n"
+"__device__ inline unsigned short\n"
+"scale_to_10bits (float val)\n"
+"{\n"
+"  return (unsigned short) __float2int_rz (val * 1023.0);\n"
+"}\n"
+"\n"
+"__device__ inline unsigned short\n"
+"scale_to_12bits (float val)\n"
+"{\n"
+"  return (unsigned short) __float2int_rz (val * 4095.0);\n"
+"}\n"
+"\n"
+"__device__ inline unsigned char\n"
+"blend_uchar (unsigned char dst, float src, float src_alpha)\n"
+"{\n"
+"  // DstColor' = SrcA * SrcColor + (1 - SrcA) DstColor\n"
+"  float src_val = src * src_alpha;\n"
+"  float dst_val = __int2float_rz (dst) / 255.0 * (1.0 - src_alpha);\n"
+"  return scale_to_uchar(clamp(src_val + dst_val, 0, 1.0));\n"
+"}\n"
+"\n"
+"__device__ inline unsigned short\n"
+"blend_ushort (unsigned short dst, float src, float src_alpha)\n"
+"{\n"
+"  // DstColor' = SrcA * SrcColor + (1 - SrcA) DstColor\n"
+"  float src_val = src * src_alpha;\n"
+"  float dst_val = __int2float_rz (dst) / 65535.0 * (1.0 - src_alpha);\n"
+"  return scale_to_ushort(clamp(src_val + dst_val, 0, 1.0));\n"
+"}\n"
+"\n"
+"__device__ inline unsigned short\n"
+"blend_10bits (unsigned short dst, float src, float src_alpha)\n"
+"{\n"
+"  // DstColor' = SrcA * SrcColor + (1 - SrcA) DstColor\n"
+"  float src_val = src * src_alpha;\n"
+"  float dst_val = __int2float_rz (dst) / 1023.0 * (1.0 - src_alpha);\n"
+"  return scale_to_10bits(clamp(src_val + dst_val, 0, 1.0));\n"
+"}\n"
+"\n"
+"__device__ inline unsigned short\n"
+"blend_12bits (unsigned short dst, float src, float src_alpha)\n"
+"{\n"
+"  // DstColor' = SrcA * SrcColor + (1 - SrcA) DstColor\n"
+"  float src_val = src * src_alpha;\n"
+"  float dst_val = __int2float_rz (dst) / 4095.0 * (1.0 - src_alpha);\n"
+"  return scale_to_12bits(clamp(src_val + dst_val, 0, 1.0));\n"
+"}\n"
+"\n"
+"struct IConverter\n"
+"{\n"
+"  __device__ virtual float3\n"
+"  Execute (float3 sample, const ColorMatrix * matrix) = 0;\n"
+"};\n"
+"\n"
+"struct ConvertSimple : public IConverter\n"
+"{\n"
+"  __device__ float3\n"
+"  Execute (float3 sample, const ColorMatrix * matrix)\n"
+"  {\n"
+"    float3 out;\n"
+"    out.x = dot (matrix->CoeffX, sample);\n"
+"    out.y = dot (matrix->CoeffY, sample);\n"
+"    out.z = dot (matrix->CoeffZ, sample);\n"
+"    out.x += matrix->Offset[0];\n"
+"    out.y += matrix->Offset[1];\n"
+"    out.z += matrix->Offset[2];\n"
+"    return clamp3 (out, matrix->Min, matrix->Max);\n"
+"  }\n"
+"};\n"
+"\n"
+"struct ISampler\n"
+"{\n"
+"  __device__ virtual float4\n"
+"  Execute (TextureObject_t tex0, TextureObject_t tex1,\n"
+"      TextureObject_t tex2, TextureObject_t tex3, float x, float y) = 0;\n"
+"};\n"
+"\n"
+"struct SampleI420 : public ISampler\n"
+"{\n"
+"  __device__ float4\n"
+"  Execute (TextureObject_t tex0, TextureObject_t tex1,\n"
+"      TextureObject_t tex2, TextureObject_t tex3, float x, float y)\n"
+"  {\n"
+"    float luma = tex2D<float>(tex0, x, y);\n"
+"    float u = tex2D<float>(tex1, x, y);\n"
+"    float v = tex2D<float>(tex2, x, y);\n"
+"    return make_float4 (luma, u, v, 1);\n"
+"  }\n"
+"};\n"
+"\n"
+"struct SampleYV12 : public ISampler\n"
+"{\n"
+"  __device__ float4\n"
+"  Execute (TextureObject_t tex0, TextureObject_t tex1,\n"
+"      TextureObject_t tex2, TextureObject_t tex3, float x, float y)\n"
+"  {\n"
+"    float luma = tex2D<float>(tex0, x, y);\n"
+"    float u = tex2D<float>(tex2, x, y);\n"
+"    float v = tex2D<float>(tex1, x, y);\n"
+"    return make_float4 (luma, u, v, 1);\n"
+"  }\n"
+"};\n"
+"\n"
+"struct SampleI420_10 : public ISampler\n"
+"{\n"
+"  __device__ float4\n"
+"  Execute (TextureObject_t tex0, TextureObject_t tex1,\n"
+"      TextureObject_t tex2, TextureObject_t tex3, float x, float y)\n"
+"  {\n"
+"    float luma = tex2D<float>(tex0, x, y);\n"
+"    float u = tex2D<float>(tex1, x, y);\n"
+"    float v = tex2D<float>(tex2, x, y);\n"
+"    /* (1 << 6) to scale [0, 1.0) range */\n"
+"    return make_float4 (luma * 64, u * 64, v * 64, 1);\n"
+"  }\n"
+"};\n"
+"\n"
+"struct SampleI420_12 : public ISampler\n"
+"{\n"
+"  __device__ float4\n"
+"  Execute (TextureObject_t tex0, TextureObject_t tex1,\n"
+"      TextureObject_t tex2, TextureObject_t tex3, float x, float y)\n"
+"  {\n"
+"    float luma = tex2D<float>(tex0, x, y);\n"
+"    float u = tex2D<float>(tex1, x, y);\n"
+"    float v = tex2D<float>(tex2, x, y);\n"
+"    /* (1 << 4) to scale [0, 1.0) range */\n"
+"    return make_float4 (luma * 16, u * 16, v * 16, 1);\n"
+"  }\n"
+"};\n"
+"\n"
+"struct SampleNV12 : public ISampler\n"
+"{\n"
+"  __device__ float4\n"
+"  Execute (TextureObject_t tex0, TextureObject_t tex1,\n"
+"      TextureObject_t tex2, TextureObject_t tex3, float x, float y)\n"
+"  {\n"
+"    float luma = tex2D<float>(tex0, x, y);\n"
+"    float2 uv = tex2D<float2>(tex1, x, y);\n"
+"    return make_float4 (luma, uv.x, uv.y, 1);\n"
+"  }\n"
+"};\n"
+"\n"
+"struct SampleNV21 : public ISampler\n"
+"{\n"
+"  __device__ float4\n"
+"  Execute (TextureObject_t tex0, TextureObject_t tex1,\n"
+"      TextureObject_t tex2, TextureObject_t tex3, float x, float y)\n"
+"  {\n"
+"    float luma = tex2D<float>(tex0, x, y);\n"
+"    float2 vu = tex2D<float2>(tex1, x, y);\n"
+"    return make_float4 (luma, vu.y, vu.x, 1);\n"
+"  }\n"
+"};\n"
+"\n"
+"struct SampleRGBA : public ISampler\n"
+"{\n"
+"  __device__ float4\n"
+"  Execute (TextureObject_t tex0, TextureObject_t tex1,\n"
+"      TextureObject_t tex2, TextureObject_t tex3, float x, float y)\n"
+"  {\n"
+"    return tex2D<float4>(tex0, x, y);\n"
+"  }\n"
+"};\n"
+"\n"
+"struct SampleBGRA : public ISampler\n"
+"{\n"
+"  __device__ float4\n"
+"  Execute (TextureObject_t tex0, TextureObject_t tex1,\n"
+"      TextureObject_t tex2, TextureObject_t tex3, float x, float y)\n"
+"  {\n"
+"    float4 bgra = tex2D<float4>(tex0, x, y);\n"
+"    return make_float4 (bgra.z, bgra.y, bgra.x, bgra.w);\n"
+"  }\n"
+"};\n"
+"\n"
+"struct SampleRGBx : public ISampler\n"
+"{\n"
+"  __device__ float4\n"
+"  Execute (TextureObject_t tex0, TextureObject_t tex1,\n"
+"      TextureObject_t tex2, TextureObject_t tex3, float x, float y)\n"
+"  {\n"
+"    float4 rgbx = tex2D<float4>(tex0, x, y);\n"
+"    rgbx.w = 1;\n"
+"    return rgbx;\n"
+"  }\n"
+"};\n"
+"\n"
+"struct SampleBGRx : public ISampler\n"
+"{\n"
+"  __device__ float4\n"
+"  Execute (TextureObject_t tex0, TextureObject_t tex1,\n"
+"      TextureObject_t tex2, TextureObject_t tex3, float x, float y)\n"
+"  {\n"
+"    float4 bgrx = tex2D<float4>(tex0, x, y);\n"
+"    return make_float4 (bgrx.z, bgrx.y, bgrx.x, 1);\n"
+"  }\n"
+"};\n"
+"\n"
+"struct SampleARGB : public ISampler\n"
+"{\n"
+"  __device__ float4\n"
+"  Execute (TextureObject_t tex0, TextureObject_t tex1,\n"
+"      TextureObject_t tex2, TextureObject_t tex3, float x, float y)\n"
+"  {\n"
+"    float4 argb = tex2D<float4>(tex0, x, y);\n"
+"    return make_float4 (argb.y, argb.z, argb.w, argb.x);\n"
+"  }\n"
+"};\n"
+"\n"
+"struct SampleABGR : public ISampler\n"
+"{\n"
+"  __device__ float4\n"
+"  Execute (TextureObject_t tex0, TextureObject_t tex1,\n"
+"      TextureObject_t tex2, TextureObject_t tex3, float x, float y)\n"
+"  {\n"
+"    float4 abgr = tex2D<float4>(tex0, x, y);\n"
+"    return make_float4 (abgr.w, abgr.z, abgr.y, abgr.x);\n"
+"  }\n"
+"};\n"
+"\n"
+"struct SampleRGBP : public ISampler\n"
+"{\n"
+"  __device__ float4\n"
+"  Execute (TextureObject_t tex0, TextureObject_t tex1,\n"
+"      TextureObject_t tex2, TextureObject_t tex3, float x, float y)\n"
+"  {\n"
+"    float r = tex2D<float>(tex0, x, y);\n"
+"    float g = tex2D<float>(tex1, x, y);\n"
+"    float b = tex2D<float>(tex2, x, y);\n"
+"    return make_float4 (r, g, b, 1);\n"
+"  }\n"
+"};\n"
+"\n"
+"struct SampleBGRP : public ISampler\n"
+"{\n"
+"  __device__ float4\n"
+"  Execute (TextureObject_t tex0, TextureObject_t tex1,\n"
+"      TextureObject_t tex2, TextureObject_t tex3, float x, float y)\n"
+"  {\n"
+"    float b = tex2D<float>(tex0, x, y);\n"
+"    float g = tex2D<float>(tex1, x, y);\n"
+"    float r = tex2D<float>(tex2, x, y);\n"
+"    return make_float4 (r, g, b, 1);\n"
+"  }\n"
+"};\n"
+"\n"
+"struct SampleGBR : public ISampler\n"
+"{\n"
+"  __device__ float4\n"
+"  Execute (TextureObject_t tex0, TextureObject_t tex1,\n"
+"      TextureObject_t tex2, TextureObject_t tex3, float x, float y)\n"
+"  {\n"
+"    float g = tex2D<float>(tex0, x, y);\n"
+"    float b = tex2D<float>(tex1, x, y);\n"
+"    float r = tex2D<float>(tex2, x, y);\n"
+"    return make_float4 (r, g, b, 1);\n"
+"  }\n"
+"};\n"
+"\n"
+"struct SampleGBR_10 : public ISampler\n"
+"{\n"
+"  __device__ float4\n"
+"  Execute (TextureObject_t tex0, TextureObject_t tex1,\n"
+"      TextureObject_t tex2, TextureObject_t tex3, float x, float y)\n"
+"  {\n"
+"    float g = tex2D<float>(tex0, x, y);\n"
+"    float b = tex2D<float>(tex1, x, y);\n"
+"    float r = tex2D<float>(tex2, x, y);\n"
+"    /* (1 << 6) to scale [0, 1.0) range */\n"
+"    return make_float4 (r * 64, g * 64, b * 64, 1);\n"
+"  }\n"
+"};\n"
+"\n"
+"struct SampleGBR_12 : public ISampler\n"
+"{\n"
+"  __device__ float4\n"
+"  Execute (TextureObject_t tex0, TextureObject_t tex1,\n"
+"      TextureObject_t tex2, TextureObject_t tex3, float x, float y)\n"
+"  {\n"
+"    float g = tex2D<float>(tex0, x, y);\n"
+"    float b = tex2D<float>(tex1, x, y);\n"
+"    float r = tex2D<float>(tex2, x, y);\n"
+"    /* (1 << 4) to scale [0, 1.0) range */\n"
+"    return make_float4 (r * 16, g * 16, b * 16, 1);\n"
+"  }\n"
+"};\n"
+"\n"
+"struct SampleGBRA : public ISampler\n"
+"{\n"
+"  __device__ float4\n"
+"  Execute (TextureObject_t tex0, TextureObject_t tex1,\n"
+"      TextureObject_t tex2, TextureObject_t tex3, float x, float y)\n"
+"  {\n"
+"    float g = tex2D<float>(tex0, x, y);\n"
+"    float b = tex2D<float>(tex1, x, y);\n"
+"    float r = tex2D<float>(tex2, x, y);\n"
+"    float a = tex2D<float>(tex3, x, y);\n"
+"    return make_float4 (r, g, b, a);\n"
+"  }\n"
+"};\n"
+"\n"
+"struct SampleVUYA : public ISampler\n"
+"{\n"
+"  __device__ float4\n"
+"  Execute (TextureObject_t tex0, TextureObject_t tex1,\n"
+"      TextureObject_t tex2, TextureObject_t tex3, float x, float y)\n"
+"  {\n"
+"    float4 vuya = tex2D<float4>(tex0, x, y);\n"
+"    return make_float4 (vuya.z, vuya.y, vuya.x, vuya.w);\n"
+"  }\n"
+"};\n"
+"\n"
+"struct IOutput\n"
+"{\n"
+"  __device__ virtual void\n"
+"  Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n"
+"      unsigned char * dst3, float4 sample, int x, int y, int stride0,\n"
+"      int stride1) = 0;\n"
+"\n"
+"  __device__ virtual void\n"
+"  Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n"
+"      unsigned char * dst3, float4 sample, int x, int y, int stride0,\n"
+"      int stride1) = 0;\n"
+"};\n"
+"\n"
+"struct OutputI420 : public IOutput\n"
+"{\n"
+"  __device__ void\n"
+"  Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n"
+"      unsigned char * dst3, float4 sample, int x, int y, int stride0,\n"
+"      int stride1)\n"
+"  {\n"
+"    dst0[x + y * stride0] = scale_to_uchar (sample.x);\n"
+"    if (x % 2 == 0 && y % 2 == 0) {\n"
+"      unsigned int pos = x / 2 + (y / 2) * stride1;\n"
+"      dst1[pos] = scale_to_uchar (sample.y);\n"
+"      dst2[pos] = scale_to_uchar (sample.z);\n"
+"    }\n"
+"  }\n"
+"\n"
+"  __device__ void\n"
+"  Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n"
+"      unsigned char * dst3, float4 sample, int x, int y, int stride0,\n"
+"      int stride1)\n"
+"  {\n"
+"    unsigned int pos = x + y * stride0;\n"
+"    dst0[pos] = blend_uchar (dst0[pos], sample.x, sample.w);\n"
+"    if (x % 2 == 0 && y % 2 == 0) {\n"
+"      pos = x / 2 + (y / 2) * stride1;\n"
+"      dst1[pos] = blend_uchar (dst1[pos], sample.y, sample.w);\n"
+"      dst2[pos] = blend_uchar (dst2[pos], sample.z, sample.w);\n"
+"    }\n"
+"  }\n"
+"};\n"
+"\n"
+"struct OutputYV12 : public IOutput\n"
+"{\n"
+"  __device__ void\n"
+"  Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n"
+"      unsigned char * dst3, float4 sample, int x, int y, int stride0,\n"
+"      int stride1)\n"
+"  {\n"
+"    dst0[x + y * stride0] = scale_to_uchar (sample.x);\n"
+"    if (x % 2 == 0 && y % 2 == 0) {\n"
+"      unsigned int pos = x / 2 + (y / 2) * stride1;\n"
+"      dst1[pos] = scale_to_uchar (sample.z);\n"
+"      dst2[pos] = scale_to_uchar (sample.y);\n"
+"    }\n"
+"  }\n"
+"\n"
+"  __device__ void\n"
+"  Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n"
+"      unsigned char * dst3, float4 sample, int x, int y, int stride0,\n"
+"      int stride1)\n"
+"  {\n"
+"    unsigned int pos = x + y * stride0;\n"
+"    dst0[pos] = blend_uchar (dst0[pos], sample.x, sample.w);\n"
+"    if (x % 2 == 0 && y % 2 == 0) {\n"
+"      pos = x / 2 + (y / 2) * stride1;\n"
+"      dst1[pos] = blend_uchar (dst1[pos], sample.z, sample.w);\n"
+"      dst2[pos] = blend_uchar (dst2[pos], sample.y, sample.w);\n"
+"    }\n"
+"  }\n"
+"};\n"
+"\n"
+"struct OutputNV12 : public IOutput\n"
+"{\n"
+"  __device__ void\n"
+"  Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n"
+"      unsigned char * dst3, float4 sample, int x, int y, int stride0,\n"
+"      int stride1)\n"
+"  {\n"
+"    dst0[x + y * stride0] = scale_to_uchar (sample.x);\n"
+"    if (x % 2 == 0 && y % 2 == 0) {\n"
+"      unsigned int pos = x + (y / 2) * stride1;\n"
+"      dst1[pos] = scale_to_uchar (sample.y);\n"
+"      dst1[pos + 1] = scale_to_uchar (sample.z);\n"
+"    }\n"
+"  }\n"
+"\n"
+"  __device__ void\n"
+"  Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n"
+"      unsigned char * dst3, float4 sample, int x, int y, int stride0,\n"
+"      int stride1)\n"
+"  {\n"
+"    unsigned int pos = x + y * stride0;\n"
+"    dst0[pos] = blend_uchar (dst0[pos], sample.x, sample.w);\n"
+"    if (x % 2 == 0 && y % 2 == 0) {\n"
+"      pos = x + (y / 2) * stride1;\n"
+"      dst1[pos] = blend_uchar (dst1[pos], sample.y, sample.w);\n"
+"      dst1[pos + 1] = blend_uchar (dst1[pos + 1], sample.z, sample.w);\n"
+"    }\n"
+"  }\n"
+"};\n"
+"\n"
+"struct OutputNV21 : public IOutput\n"
+"{\n"
+"  __device__ void\n"
+"  Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n"
+"      unsigned char * dst3, float4 sample, int x, int y, int stride0,\n"
+"      int stride1)\n"
+"  {\n"
+"    dst0[x + y * stride0] = scale_to_uchar (sample.x);\n"
+"    if (x % 2 == 0 && y % 2 == 0) {\n"
+"      unsigned int pos = x + (y / 2) * stride1;\n"
+"      dst1[pos] = scale_to_uchar (sample.z);\n"
+"      dst1[pos + 1] = scale_to_uchar (sample.y);\n"
+"    }\n"
+"  }\n"
+"\n"
+"  __device__ void\n"
+"  Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n"
+"      unsigned char * dst3, float4 sample, int x, int y, int stride0,\n"
+"      int stride1)\n"
+"  {\n"
+"    unsigned int pos = x + y * stride0;\n"
+"    dst0[pos] = blend_uchar (dst0[pos], sample.x, sample.w);\n"
+"    if (x % 2 == 0 && y % 2 == 0) {\n"
+"      pos = x + (y / 2) * stride1;\n"
+"      dst1[pos] = blend_uchar (dst1[pos], sample.z, sample.w);\n"
+"      dst1[pos + 1] = blend_uchar (dst1[pos + 1], sample.y, sample.w);\n"
+"    }\n"
+"  }\n"
+"};\n"
+"\n"
+"struct OutputP010 : public IOutput\n"
+"{\n"
+"  __device__ void\n"
+"  Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n"
+"      unsigned char * dst3, float4 sample, int x, int y, int stride0,\n"
+"      int stride1)\n"
+"  {\n"
+"    *(unsigned short *) &dst0[x * 2 + y * stride0] = scale_to_ushort (sample.x);\n"
+"    if (x % 2 == 0 && y % 2 == 0) {\n"
+"      unsigned int pos = x * 2 + (y / 2) * stride1;\n"
+"      *(unsigned short *) &dst1[pos] = scale_to_ushort (sample.y);\n"
+"      *(unsigned short *) &dst1[pos + 2] = scale_to_ushort (sample.z);\n"
+"    }\n"
+"  }\n"
+"\n"
+"  __device__ void\n"
+"  Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n"
+"      unsigned char * dst3, float4 sample, int x, int y, int stride0,\n"
+"      int stride1)\n"
+"  {\n"
+"    unsigned int pos = x * 2 + y * stride0;\n"
+"    unsigned short * target = (unsigned short *) &dst0[pos];\n"
+"    *target = blend_ushort (*target, sample.x, sample.w);\n"
+"    if (x % 2 == 0 && y % 2 == 0) {\n"
+"      pos = x * 2 + (y / 2) * stride1;\n"
+"      target = (unsigned short *) &dst1[pos];\n"
+"      *target = blend_ushort (*target, sample.y, sample.w);\n"
+"      target = (unsigned short *) &dst1[pos + 2];\n"
+"      *target = blend_ushort (*target, sample.z, sample.w);\n"
+"    }\n"
+"  }\n"
+"};\n"
+"\n"
+"struct OutputI420_10 : public IOutput\n" +"{\n" +" __device__ void\n" +" Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" *(unsigned short *) &dst0[x * 2 + y * stride0] = scale_to_10bits (sample.x);\n" +" if (x % 2 == 0 && y % 2 == 0) {\n" +" unsigned int pos = x + (y / 2) * stride1;\n" +" *(unsigned short *) &dst1[pos] = scale_to_10bits (sample.y);\n" +" *(unsigned short *) &dst2[pos] = scale_to_10bits (sample.z);\n" +" }\n" +" }\n" +"\n" +" __device__ void\n" +" Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" unsigned int pos = x * 2 + y * stride0;\n" +" unsigned short * target = (unsigned short *) &dst0[pos];\n" +" *target = blend_10bits (*target, sample.x, sample.w);\n" +" if (x % 2 == 0 && y % 2 == 0) {\n" +" pos = x * 2 + (y / 2) * stride1;\n" +" target = (unsigned short *) &dst1[pos];\n" +" *target = blend_10bits (*target, sample.y, sample.w);\n" +" target = (unsigned short *) &dst2[pos];\n" +" *target = blend_10bits (*target, sample.z, sample.w);\n" +" }\n" +" }\n" +"};\n" +"\n" +"struct OutputI420_12 : public IOutput\n" +"{\n" +" __device__ void\n" +" Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" *(unsigned short *) &dst0[x * 2 + y * stride0] = scale_to_12bits (sample.x);\n" +" if (x % 2 == 0 && y % 2 == 0) {\n" +" unsigned int pos = x + (y / 2) * stride1;\n" +" *(unsigned short *) &dst1[pos] = scale_to_12bits (sample.y);\n" +" *(unsigned short *) &dst2[pos] = scale_to_12bits (sample.z);\n" +" }\n" +" }\n" +"\n" +" __device__ void\n" +" Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int 
stride1)\n" +" {\n" +" unsigned int pos = x * 2 + y * stride0;\n" +" unsigned short * target = (unsigned short *) &dst0[pos];\n" +" *target = blend_12bits (*target, sample.x, sample.w);\n" +" if (x % 2 == 0 && y % 2 == 0) {\n" +" pos = x * 2 + (y / 2) * stride1;\n" +" target = (unsigned short *) &dst1[pos];\n" +" *target = blend_12bits (*target, sample.y, sample.w);\n" +" target = (unsigned short *) &dst2[pos];\n" +" *target = blend_12bits (*target, sample.z, sample.w);\n" +" }\n" +" }\n" +"};\n" +"\n" +"struct OutputY444 : public IOutput\n" +"{\n" +" __device__ void\n" +" Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x + y * stride0;\n" +" dst0[pos] = scale_to_uchar (sample.x);\n" +" dst1[pos] = scale_to_uchar (sample.y);\n" +" dst2[pos] = scale_to_uchar (sample.z);\n" +" }\n" +"\n" +" __device__ void\n" +" Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x + y * stride0;\n" +" dst0[pos] = blend_uchar (dst0[pos], sample.x, sample.w);\n" +" dst1[pos] = blend_uchar (dst1[pos], sample.y, sample.w);\n" +" dst2[pos] = blend_uchar (dst2[pos], sample.z, sample.w);\n" +" }\n" +"};\n" +"\n" +"struct OutputY444_10 : public IOutput\n" +"{\n" +" __device__ void\n" +" Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x * 2 + y * stride0;\n" +" *(unsigned short *) &dst0[pos] = scale_to_10bits (sample.x);\n" +" *(unsigned short *) &dst1[pos] = scale_to_10bits (sample.y);\n" +" *(unsigned short *) &dst2[pos] = scale_to_10bits (sample.z);\n" +" }\n" +"\n" +" __device__ void\n" +" Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int 
y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x * 2 + y * stride0;\n" +" unsigned short * target = (unsigned short *) &dst0[pos];\n" +" *target = blend_10bits (*target, sample.x, sample.w);\n" +" target = (unsigned short *) &dst1[pos];\n" +" *target = blend_10bits (*target, sample.y, sample.w);\n" +" target = (unsigned short *) &dst2[pos];\n" +" *target = blend_10bits (*target, sample.z, sample.w);\n" +" }\n" +"};\n" +"\n" +"struct OutputY444_12 : public IOutput\n" +"{\n" +" __device__ void\n" +" Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x * 2 + y * stride0;\n" +" *(unsigned short *) &dst0[pos] = scale_to_12bits (sample.x);\n" +" *(unsigned short *) &dst1[pos] = scale_to_12bits (sample.y);\n" +" *(unsigned short *) &dst2[pos] = scale_to_12bits (sample.z);\n" +" }\n" +"\n" +" __device__ void\n" +" Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x * 2 + y * stride0;\n" +" unsigned short * target = (unsigned short *) &dst0[pos];\n" +" *target = blend_12bits (*target, sample.x, sample.w);\n" +" target = (unsigned short *) &dst1[pos];\n" +" *target = blend_12bits (*target, sample.y, sample.w);\n" +" target = (unsigned short *) &dst2[pos];\n" +" *target = blend_12bits (*target, sample.z, sample.w);\n" +" }\n" +"};\n" +"\n" +"struct OutputY444_16 : public IOutput\n" +"{\n" +" __device__ void\n" +" Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x * 2 + y * stride0;\n" +" *(unsigned short *) &dst0[pos] = scale_to_ushort (sample.x);\n" +" *(unsigned short *) &dst1[pos] = scale_to_ushort (sample.y);\n" +" *(unsigned short *) &dst2[pos] = scale_to_ushort (sample.z);\n" +" 
}\n" +"\n" +" __device__ void\n" +" Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x * 2 + y * stride0;\n" +" unsigned short * target = (unsigned short *) &dst0[pos];\n" +" *target = blend_ushort (*target, sample.x, sample.w);\n" +" target = (unsigned short *) &dst1[pos];\n" +" *target = blend_ushort (*target, sample.y, sample.w);\n" +" target = (unsigned short *) &dst2[pos];\n" +" *target = blend_ushort (*target, sample.z, sample.w);\n" +" }\n" +"};\n" +"\n" +"struct OutputRGBA : public IOutput\n" +"{\n" +" __device__ void\n" +" Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x * 4 + y * stride0;\n" +" dst0[pos] = scale_to_uchar (sample.x);\n" +" dst0[pos + 1] = scale_to_uchar (sample.y);\n" +" dst0[pos + 2] = scale_to_uchar (sample.z);\n" +" dst0[pos + 3] = scale_to_uchar (sample.w);\n" +" }\n" +"\n" +" __device__ void\n" +" Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x * 4 + y * stride0;\n" +" dst0[pos] = blend_uchar (dst0[pos], sample.x, sample.w);\n" +" dst0[pos + 1] = blend_uchar (dst0[pos + 1], sample.y, sample.w);\n" +" dst0[pos + 2] = blend_uchar (dst0[pos + 2], sample.z, sample.w);\n" +" dst0[pos + 3] = blend_uchar (dst0[pos + 3], 1.0, sample.w);\n" +" }\n" +"};\n" +"\n" +"struct OutputRGBx : public IOutput\n" +"{\n" +" __device__ void\n" +" Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x * 4 + y * stride0;\n" +" dst0[pos] = scale_to_uchar (sample.x);\n" +" dst0[pos + 1] = scale_to_uchar (sample.y);\n" +" dst0[pos + 2] = scale_to_uchar 
(sample.z);\n" +" dst0[pos + 3] = 255;\n" +" }\n" +"\n" +" __device__ void\n" +" Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x * 4 + y * stride0;\n" +" dst0[pos] = blend_uchar (dst0[pos], sample.x, sample.w);\n" +" dst0[pos + 1] = blend_uchar (dst0[pos + 1], sample.y, sample.w);\n" +" dst0[pos + 2] = blend_uchar (dst0[pos + 2], sample.z, sample.w);\n" +" dst0[pos + 3] = 255;\n" +" }\n" +"};\n" +"\n" +"struct OutputBGRA : public IOutput\n" +"{\n" +" __device__ void\n" +" Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x * 4 + y * stride0;\n" +" dst0[pos] = scale_to_uchar (sample.z);\n" +" dst0[pos + 1] = scale_to_uchar (sample.y);\n" +" dst0[pos + 2] = scale_to_uchar (sample.x);\n" +" dst0[pos + 3] = scale_to_uchar (sample.w);\n" +" }\n" +"\n" +" __device__ void\n" +" Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x * 4 + y * stride0;\n" +" dst0[pos] = blend_uchar (dst0[pos], sample.z, sample.w);\n" +" dst0[pos + 1] = blend_uchar (dst0[pos + 1], sample.y, sample.w);\n" +" dst0[pos + 2] = blend_uchar (dst0[pos + 2], sample.x, sample.w);\n" +" dst0[pos + 3] = blend_uchar (dst0[pos + 3], 1.0, sample.w);\n" +" }\n" +"};\n" +"\n" +"struct OutputBGRx : public IOutput\n" +"{\n" +" __device__ void\n" +" Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x * 4 + y * stride0;\n" +" dst0[pos] = scale_to_uchar (sample.z);\n" +" dst0[pos + 1] = scale_to_uchar (sample.y);\n" +" dst0[pos + 2] = scale_to_uchar (sample.x);\n" +" dst0[pos + 3] = 255;\n" +" }\n" +"\n" +" __device__ 
void\n" +" Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x * 4 + y * stride0;\n" +" dst0[pos] = blend_uchar (dst0[pos], sample.z, sample.w);\n" +" dst0[pos + 1] = blend_uchar (dst0[pos + 1], sample.y, sample.w);\n" +" dst0[pos + 2] = blend_uchar (dst0[pos + 2], sample.x, sample.w);\n" +" dst0[pos + 3] = 255;\n" +" }\n" +"};\n" +"\n" +"struct OutputARGB : public IOutput\n" +"{\n" +" __device__ void\n" +" Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x * 4 + y * stride0;\n" +" dst0[pos] = scale_to_uchar (sample.w);\n" +" dst0[pos + 1] = scale_to_uchar (sample.x);\n" +" dst0[pos + 2] = scale_to_uchar (sample.y);\n" +" dst0[pos + 3] = scale_to_uchar (sample.z);\n" +" }\n" +"\n" +" __device__ void\n" +" Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x * 4 + y * stride0;\n" +" dst0[pos] = blend_uchar (dst0[pos], 1.0, sample.w);\n" +" dst0[pos + 1] = blend_uchar (dst0[pos + 1], sample.x, sample.w);\n" +" dst0[pos + 2] = blend_uchar (dst0[pos + 2], sample.y, sample.w);\n" +" dst0[pos + 3] = blend_uchar (dst0[pos + 3], sample.z, sample.w);\n" +" }\n" +"};\n" +"\n" +"struct OutputABGR : public IOutput\n" +"{\n" +" __device__ void\n" +" Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x * 4 + y * stride0;\n" +" dst0[pos] = scale_to_uchar (sample.w);\n" +" dst0[pos + 1] = scale_to_uchar (sample.z);\n" +" dst0[pos + 2] = scale_to_uchar (sample.y);\n" +" dst0[pos + 3] = scale_to_uchar (sample.x);\n" +" }\n" +"\n" +" __device__ void\n" +" Blend (unsigned char * dst0, unsigned 
char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x * 4 + y * stride0;\n" +" dst0[pos] = blend_uchar (dst0[pos], 1.0, sample.w);\n" +" dst0[pos + 1] = blend_uchar (dst0[pos + 1], sample.z, sample.w);\n" +" dst0[pos + 2] = blend_uchar (dst0[pos + 2], sample.y, sample.w);\n" +" dst0[pos + 3] = blend_uchar (dst0[pos + 3], sample.x, sample.w);\n" +" }\n" +"};\n" +"\n" +"struct OutputRGB : public IOutput\n" +"{\n" +" __device__ void\n" +" Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x * 3 + y * stride0;\n" +" dst0[pos] = scale_to_uchar (sample.x);\n" +" dst0[pos + 1] = scale_to_uchar (sample.y);\n" +" dst0[pos + 2] = scale_to_uchar (sample.z);\n" +" }\n" +"\n" +" __device__ void\n" +" Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x * 3 + y * stride0;\n" +" dst0[pos] = blend_uchar (dst0[pos], sample.x, sample.w);\n" +" dst0[pos + 1] = blend_uchar (dst0[pos + 1], sample.y, sample.w);\n" +" dst0[pos + 2] = blend_uchar (dst0[pos + 2], sample.z, sample.w);\n" +" }\n" +"};\n" +"\n" +"struct OutputBGR : public IOutput\n" +"{\n" +" __device__ void\n" +" Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x * 3 + y * stride0;\n" +" dst0[pos] = scale_to_uchar (sample.z);\n" +" dst0[pos + 1] = scale_to_uchar (sample.y);\n" +" dst0[pos + 2] = scale_to_uchar (sample.x);\n" +" }\n" +"\n" +" __device__ void\n" +" Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x * 3 + y * stride0;\n" 
+" dst0[pos] = blend_uchar (dst0[pos], sample.z, sample.w);\n" +" dst0[pos + 1] = blend_uchar (dst0[pos + 1], sample.y, sample.w);\n" +" dst0[pos + 2] = blend_uchar (dst0[pos + 2], sample.x, sample.w);\n" +" }\n" +"};\n" +"\n" +"__device__ inline ushort3\n" +"unpack_rgb10a2 (unsigned int val)\n" +"{\n" +" unsigned short r, g, b;\n" +" r = (val & 0x3ff);\n" +" r = (r << 6) | (r >> 4);\n" +" g = ((val >> 10) & 0x3ff);\n" +" g = (g << 6) | (g >> 4);\n" +" b = ((val >> 20) & 0x3ff);\n" +" b = (b << 6) | (b >> 4);\n" +" return make_ushort3 (r, g, b);\n" +"}\n" +"\n" +"struct OutputRGB10A2 : public IOutput\n" +"{\n" +" __device__ void\n" +" Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" unsigned int alpha = (unsigned int) scale_to_2bits (sample.w);\n" +" unsigned int packed_rgb = alpha << 30;\n" +" packed_rgb |= ((unsigned int) scale_to_10bits (sample.x));\n" +" packed_rgb |= ((unsigned int) scale_to_10bits (sample.y)) << 10;\n" +" packed_rgb |= ((unsigned int) scale_to_10bits (sample.z)) << 20;\n" +" *(unsigned int *) &dst0[x * 4 + y * stride0] = packed_rgb;\n" +" }\n" +"\n" +" __device__ void\n" +" Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" unsigned int * target = (unsigned int *) &dst0[x * 4 + y * stride0];\n" +" ushort3 val = unpack_rgb10a2 (*target);\n" +" unsigned int alpha = (unsigned int) scale_to_2bits (sample.w);\n" +" unsigned int packed_rgb = alpha << 30;\n" +" packed_rgb |= ((unsigned int) blend_10bits (val.x, sample.x, sample.w));\n" +" packed_rgb |= ((unsigned int) blend_10bits (val.y, sample.y, sample.w)) << 10;\n" +" packed_rgb |= ((unsigned int) blend_10bits (val.z, sample.z, sample.w)) << 20;\n" +" *target = packed_rgb;\n" +" }\n" +"};\n" +"\n" +"__device__ inline ushort3\n" +"unpack_bgr10a2 (unsigned int 
val)\n" +"{\n" +" unsigned short r, g, b;\n" +" b = (val & 0x3ff);\n" +" b = (b << 6) | (b >> 4);\n" +" g = ((val >> 10) & 0x3ff);\n" +" g = (g << 6) | (g >> 4);\n" +" r = ((val >> 20) & 0x3ff);\n" +" r = (r << 6) | (r >> 4);\n" +" return make_ushort3 (r, g, b);\n" +"}\n" +"\n" +"struct OutputBGR10A2 : public IOutput\n" +"{\n" +" __device__ void\n" +" Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" unsigned int alpha = (unsigned int) scale_to_2bits (sample.x);\n" +" unsigned int packed_rgb = alpha << 30;\n" +" packed_rgb |= ((unsigned int) scale_to_10bits (sample.x)) << 20;\n" +" packed_rgb |= ((unsigned int) scale_to_10bits (sample.y)) << 10;\n" +" packed_rgb |= ((unsigned int) scale_to_10bits (sample.z));\n" +" *(unsigned int *) &dst0[x * 4 + y * stride0] = packed_rgb;\n" +" }\n" +"\n" +" __device__ void\n" +" Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" unsigned int * target = (unsigned int *) &dst0[x * 4 + y * stride0];\n" +" ushort3 val = unpack_bgr10a2 (*target);\n" +" unsigned int alpha = (unsigned int) scale_to_2bits (sample.w);\n" +" unsigned int packed_rgb = alpha << 30;\n" +" packed_rgb |= ((unsigned int) blend_10bits (val.x, sample.x, sample.w)) << 20;\n" +" packed_rgb |= ((unsigned int) blend_10bits (val.y, sample.y, sample.w)) << 10;\n" +" packed_rgb |= ((unsigned int) blend_10bits (val.z, sample.z, sample.w));\n" +" *target = packed_rgb;\n" +" }\n" +"};\n" +"\n" +"struct OutputY42B : public IOutput\n" +"{\n" +" __device__ void\n" +" Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" dst0[x + y * stride0] = scale_to_uchar (sample.x);\n" +" if (x % 2 == 0) {\n" +" unsigned int pos 
= x / 2 + y * stride1;\n" +" dst1[pos] = scale_to_uchar (sample.y);\n" +" dst2[pos] = scale_to_uchar (sample.z);\n" +" }\n" +" }\n" +"\n" +" __device__ void\n" +" Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" unsigned int pos = x + y * stride0;\n" +" dst0[pos] = blend_uchar (dst0[pos], sample.x, sample.w);\n" +" if (x % 2 == 0) {\n" +" pos = x / 2 + y * stride1;\n" +" dst1[pos] = blend_uchar (dst1[pos], sample.y, sample.w);\n" +" dst2[pos] = blend_uchar (dst2[pos], sample.z, sample.w);\n" +" }\n" +" }\n" +"};\n" +"\n" +"struct OutputI422_10 : public IOutput\n" +"{\n" +" __device__ void\n" +" Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" *(unsigned short *) &dst0[x * 2 + y * stride0] = scale_to_10bits (sample.x);\n" +" if (x % 2 == 0) {\n" +" unsigned int pos = x + y * stride1;\n" +" *(unsigned short *) &dst1[pos] = scale_to_10bits (sample.y);\n" +" *(unsigned short *) &dst2[pos] = scale_to_10bits (sample.z);\n" +" }\n" +" }\n" +"\n" +" __device__ void\n" +" Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" unsigned int pos = x * 2 + y * stride0;\n" +" unsigned short * target = (unsigned short *) &dst0[pos];\n" +" *target = blend_10bits (*target, sample.x, sample.w);\n" +" if (x % 2 == 0) {\n" +" pos = x / 2 + y * stride1;\n" +" target = (unsigned short *) &dst1[pos];\n" +" *target = blend_10bits (*target, sample.y, sample.w);\n" +" target = (unsigned short *) &dst2[pos];\n" +" *target = blend_10bits (*target, sample.z, sample.w);\n" +" }\n" +" }\n" +"};\n" +"\n" +"struct OutputI422_12 : public IOutput\n" +"{\n" +" __device__ void\n" +" Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" 
unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" *(unsigned short *) &dst0[x * 2 + y * stride0] = scale_to_12bits (sample.x);\n" +" if (x % 2 == 0) {\n" +" unsigned int pos = x + y * stride1;\n" +" *(unsigned short *) &dst1[pos] = scale_to_12bits (sample.y);\n" +" *(unsigned short *) &dst2[pos] = scale_to_12bits (sample.z);\n" +" }\n" +" }\n" +"\n" +" __device__ void\n" +" Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" unsigned int pos = x * 2 + y * stride0;\n" +" unsigned short * target = (unsigned short *) &dst0[pos];\n" +" *target = blend_12bits (*target, sample.x, sample.w);\n" +" if (x % 2 == 0) {\n" +" pos = x / 2 + y * stride1;\n" +" target = (unsigned short *) &dst1[pos];\n" +" *target = blend_12bits (*target, sample.y, sample.w);\n" +" target = (unsigned short *) &dst2[pos];\n" +" *target = blend_12bits (*target, sample.z, sample.w);\n" +" }\n" +" }\n" +"};\n" +"\n" +"struct OutputRGBP : public IOutput\n" +"{\n" +" __device__ void\n" +" Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x + y * stride0;\n" +" dst0[pos] = scale_to_uchar (sample.x);\n" +" dst1[pos] = scale_to_uchar (sample.y);\n" +" dst2[pos] = scale_to_uchar (sample.z);\n" +" }\n" +"\n" +" __device__ void\n" +" Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x + y * stride0;\n" +" dst0[pos] = blend_uchar (dst0[pos], sample.x, sample.w);\n" +" dst1[pos] = blend_uchar (dst1[pos], sample.y, sample.w);\n" +" dst2[pos] = blend_uchar (dst2[pos], sample.z, sample.w);\n" +" }\n" +"};\n" +"\n" +"struct OutputBGRP : public IOutput\n" +"{\n" +" __device__ void\n" +" Write (unsigned char * 
dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x + y * stride0;\n" +" dst0[pos] = scale_to_uchar (sample.z);\n" +" dst1[pos] = scale_to_uchar (sample.y);\n" +" dst2[pos] = scale_to_uchar (sample.x);\n" +" }\n" +"\n" +" __device__ void\n" +" Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x + y * stride0;\n" +" dst0[pos] = blend_uchar (dst0[pos], sample.z, sample.w);\n" +" dst1[pos] = blend_uchar (dst1[pos], sample.y, sample.w);\n" +" dst2[pos] = blend_uchar (dst2[pos], sample.x, sample.w);\n" +" }\n" +"};\n" +"\n" +"struct OutputGBR : public IOutput\n" +"{\n" +" __device__ void\n" +" Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x + y * stride0;\n" +" dst0[pos] = scale_to_uchar (sample.y);\n" +" dst1[pos] = scale_to_uchar (sample.z);\n" +" dst2[pos] = scale_to_uchar (sample.x);\n" +" }\n" +"\n" +" __device__ void\n" +" Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x + y * stride0;\n" +" dst0[pos] = blend_uchar (dst0[pos], sample.y, sample.w);\n" +" dst1[pos] = blend_uchar (dst1[pos], sample.z, sample.w);\n" +" dst2[pos] = blend_uchar (dst2[pos], sample.x, sample.w);\n" +" }\n" +"};\n" +"\n" +"struct OutputGBR_10 : public IOutput\n" +"{\n" +" __device__ void\n" +" Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x * 2 + y * stride0;\n" +" *(unsigned short *) &dst0[pos] = scale_to_10bits (sample.y);\n" +" *(unsigned short *) &dst1[pos] = 
scale_to_10bits (sample.z);\n" +" *(unsigned short *) &dst2[pos] = scale_to_10bits (sample.x);\n" +" }\n" +"\n" +" __device__ void\n" +" Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x * 2 + y * stride0;\n" +" unsigned short * target = (unsigned short *) &dst0[pos];\n" +" *target = blend_10bits (*target, sample.y, sample.w);\n" +" target = (unsigned short *) &dst1[pos];\n" +" *target = blend_10bits (*target, sample.z, sample.w);\n" +" target = (unsigned short *) &dst2[pos];\n" +" *target = blend_10bits (*target, sample.x, sample.w);\n" +" }\n" +"};\n" +"\n" +"struct OutputGBR_12 : public IOutput\n" +"{\n" +" __device__ void\n" +" Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x * 2 + y * stride0;\n" +" *(unsigned short *) &dst0[pos] = scale_to_12bits (sample.y);\n" +" *(unsigned short *) &dst1[pos] = scale_to_12bits (sample.z);\n" +" *(unsigned short *) &dst2[pos] = scale_to_12bits (sample.x);\n" +" }\n" +"\n" +" __device__ void\n" +" Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x * 2 + y * stride0;\n" +" unsigned short * target = (unsigned short *) &dst0[pos];\n" +" *target = blend_12bits (*target, sample.y, sample.w);\n" +" target = (unsigned short *) &dst1[pos];\n" +" *target = blend_12bits (*target, sample.z, sample.w);\n" +" target = (unsigned short *) &dst2[pos];\n" +" *target = blend_12bits (*target, sample.x, sample.w);\n" +" }\n" +"};\n" +"\n" +"struct OutputGBR_16 : public IOutput\n" +"{\n" +" __device__ void\n" +" Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" 
int stride1)\n" +" {\n" +" int pos = x * 2 + y * stride0;\n" +" *(unsigned short *) &dst0[pos] = scale_to_ushort (sample.y);\n" +" *(unsigned short *) &dst1[pos] = scale_to_ushort (sample.z);\n" +" *(unsigned short *) &dst2[pos] = scale_to_ushort (sample.x);\n" +" }\n" +"\n" +" __device__ void\n" +" Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x * 2 + y * stride0;\n" +" unsigned short * target = (unsigned short *) &dst0[pos];\n" +" *target = blend_ushort (*target, sample.y, sample.w);\n" +" target = (unsigned short *) &dst1[pos];\n" +" *target = blend_ushort (*target, sample.z, sample.w);\n" +" target = (unsigned short *) &dst2[pos];\n" +" *target = blend_ushort (*target, sample.x, sample.w);\n" +" }\n" +"};\n" +"\n" +"struct OutputGBRA : public IOutput\n" +"{\n" +" __device__ void\n" +" Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x + y * stride0;\n" +" dst0[pos] = scale_to_uchar (sample.y);\n" +" dst1[pos] = scale_to_uchar (sample.z);\n" +" dst2[pos] = scale_to_uchar (sample.x);\n" +" dst3[pos] = scale_to_uchar (sample.w);\n" +" }\n" +"\n" +" __device__ void\n" +" Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x + y * stride0;\n" +" dst0[pos] = blend_uchar (dst0[pos], sample.y, sample.w);\n" +" dst1[pos] = blend_uchar (dst1[pos], sample.z, sample.w);\n" +" dst2[pos] = blend_uchar (dst2[pos], sample.x, sample.w);\n" +" dst3[pos] = blend_uchar (dst3[pos], 1.0, sample.w);\n" +" }\n" +"};\n" +"\n" +"struct OutputVUYA : public IOutput\n" +"{\n" +" __device__ void\n" +" Write (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, 
int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x * 4 + y * stride0;\n" +" dst0[pos] = scale_to_uchar (sample.z);\n" +" dst0[pos + 1] = scale_to_uchar (sample.y);\n" +" dst0[pos + 2] = scale_to_uchar (sample.x);\n" +" dst0[pos + 3] = scale_to_uchar (sample.w);\n" +" }\n" +"\n" +" __device__ void\n" +" Blend (unsigned char * dst0, unsigned char * dst1, unsigned char * dst2,\n" +" unsigned char * dst3, float4 sample, int x, int y, int stride0,\n" +" int stride1)\n" +" {\n" +" int pos = x * 4 + y * stride0;\n" +" dst0[pos] = blend_uchar (dst0[pos], sample.z, sample.w);\n" +" dst0[pos + 1] = blend_uchar (dst0[pos + 1], sample.y, sample.w);\n" +" dst0[pos + 2] = blend_uchar (dst0[pos + 2], sample.x, sample.w);\n" +" dst0[pos + 3] = blend_uchar (dst0[pos + 3], 1.0, sample.w);\n" +" }\n" +"};\n" +"\n" +"__device__ inline float2\n" +"rotate_identity (float x, float y)\n" +"{\n" +" return make_float2(x, y);\n" +"}\n" +"\n" +"__device__ inline float2\n" +"rotate_90r (float x, float y)\n" +"{\n" +" return make_float2(y, 1.0 - x);\n" +"}\n" +"\n" +"__device__ inline float2\n" +"rotate_180 (float x, float y)\n" +"{\n" +" return make_float2(1.0 - x, 1.0 - y);\n" +"}\n" +"\n" +"__device__ inline float2\n" +"rotate_90l (float x, float y)\n" +"{\n" +" return make_float2(1.0 - y, x);\n" +"}\n" +"\n" +"__device__ inline float2\n" +"rotate_horiz (float x, float y)\n" +"{\n" +" return make_float2(1.0 - x, y);\n" +"}\n" +"\n" +"__device__ inline float2\n" +"rotate_vert (float x, float y)\n" +"{\n" +" return make_float2(x, 1.0 - y);\n" +"}\n" +"\n" +"__device__ inline float2\n" +"rotate_ul_lr (float x, float y)\n" +"{\n" +" return make_float2(y, x);\n" +"}\n" +"\n" +"__device__ inline float2\n" +"rotate_ur_ll (float x, float y)\n" +"{\n" +" return make_float2(1.0 - y, 1.0 - x);\n" +"}\n" +"__device__ inline float2\n" +"do_rotate (float x, float y, int direction)\n" +"{\n" +" switch (direction) {\n" +" case 1:\n" +" return rotate_90r (x, y);\n" +" case 2:\n" +" return rotate_180 (x, y);\n" +" case 
3:\n" +" return rotate_90l (x, y);\n" +" case 4:\n" +" return rotate_horiz (x, y);\n" +" case 5:\n" +" return rotate_vert (x, y);\n" +" case 6:\n" +" return rotate_ul_lr (x, y);\n" +" case 7:\n" +" return rotate_ur_ll (x, y);\n" +" default:\n" +" return rotate_identity (x, y);\n" +" }\n" +"}\n" +"\n" +"extern \"C\" {\n" +"__global__ void\n" +"GstHipConverterMain (TextureObject_t tex0, TextureObject_t tex1,\n" +" TextureObject_t tex2, TextureObject_t tex3, unsigned char * dst0,\n" +" unsigned char * dst1, unsigned char * dst2, unsigned char * dst3,\n" +" int stride0, int stride1, ConstBuffer const_buf, int off_x, int off_y)\n" +"{\n" +" ConvertSimple g_converter;\n" +" SAMPLER g_sampler;\n" +" OUTPUT g_output;\n" +" int x_pos = blockIdx.x * blockDim.x + threadIdx.x + off_x;\n" +" int y_pos = blockIdx.y * blockDim.y + threadIdx.y + off_y;\n" +" float4 sample;\n" +" if (x_pos >= const_buf.width || y_pos >= const_buf.height ||\n" +" const_buf.view_width <= 0 || const_buf.view_height <= 0)\n" +" return;\n" +" if (x_pos < const_buf.left || x_pos >= const_buf.right ||\n" +" y_pos < const_buf.top || y_pos >= const_buf.bottom) {\n" +" if (!const_buf.fill_border)\n" +" return;\n" +" sample = make_float4 (const_buf.border_x, const_buf.border_y,\n" +" const_buf.border_z, const_buf.border_w);\n" +" } else {\n" +" float x = (__int2float_rz (x_pos - const_buf.left) + 0.5) / const_buf.view_width;\n" +" if (x < 0.0 || x > 1.0)\n" +" return;\n" +" float y = (__int2float_rz (y_pos - const_buf.top) + 0.5) / const_buf.view_height;\n" +" if (y < 0.0 || y > 1.0)\n" +" return;\n" +" float2 rotated = do_rotate (x, y, const_buf.video_direction);\n" +" float4 s = g_sampler.Execute (tex0, tex1, tex2, tex3, rotated.x, rotated.y);\n" +" float3 rgb = make_float3 (s.x, s.y, s.z);\n" +" float3 yuv;\n" +" if (const_buf.do_convert)\n" +" yuv = g_converter.Execute (rgb, &const_buf.matrix);\n" +" else\n" +" yuv = rgb;\n" +" sample = make_float4 (yuv.x, yuv.y, yuv.z, s.w);\n" +" }\n" +" sample.w = 
sample.w * const_buf.alpha;\n" +" if (!const_buf.do_blend) {\n" +" g_output.Write (dst0, dst1, dst2, dst3, sample, x_pos, y_pos, stride0, stride1);\n" +" } else {\n" +" g_output.Blend (dst0, dst1, dst2, dst3, sample, x_pos, y_pos, stride0, stride1);\n" +" }\n" +"}\n" +"}\n" +"\n"; +#endif \ No newline at end of file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/hip/kernel/meson.build
Added
@@ -0,0 +1,148 @@
+conv_source = files('converter.cu')
+conv_comm_source = files('converter-unpack.cu')
+
+conv_input_formats = [
+  'I420',
+  'YV12',
+  'I420_10',
+  'I420_12',
+  'NV12',
+  'NV21',
+  'VUYA',
+  'RGBA',
+  'BGRA',
+  'RGBx',
+  'BGRx',
+  'ARGB',
+  'ABGR',
+  'RGBP',
+  'BGRP',
+  'GBR',
+  'GBR_10',
+  'GBR_12',
+  'GBRA',
+]
+
+conv_output_formats = [
+  'I420',
+  'YV12',
+  'NV12',
+  'NV21',
+  'P010',
+  'I420_10',
+  'I420_12',
+  'Y444',
+  'Y444_10',
+  'Y444_12',
+  'Y444_16',
+  'Y42B',
+  'I422_10',
+  'I422_12',
+  'VUYA',
+  'RGBA',
+  'RGBx',
+  'BGRA',
+  'BGRx',
+  'ARGB',
+  'ABGR',
+  'RGB',
+  'BGR',
+  'RGB10A2',
+  'BGR10A2',
+  'RGBP',
+  'GBR',
+  'GBR_10',
+  'GBR_12',
+  'GBR_16',
+  'GBRA',
+]
+
+if have_hipcc
+  amd_header_collector = find_program('collect_hsaco_headers.py')
+  amd_conv_precompiled = []
+  amd_opt_common = ['-w', '--genco', '-c', '@INPUT@', '-o', '@OUTPUT@']
+  amd_arch_opt = get_option('hip-hipcc-arch')
+  if amd_arch_opt != ''
+    amd_opt_common += '--offload-arch=' + amd_arch_opt
+  endif
+
+  foreach input_format : conv_input_formats
+    foreach output_format : conv_output_formats
+      hsaco_name = 'GstHipConverterMain_@0@_@1@_amd.hsaco'.format(input_format, output_format)
+      opts = amd_opt_common + ['-DSAMPLER=Sample@0@'.format(input_format),
+          '-DOUTPUT=Output@0@'.format(output_format)]
+      compiled_kernel = custom_target(hsaco_name,
+        input : conv_source,
+        output : hsaco_name,
+        command : [hipcc] + opts)
+      amd_conv_precompiled += compiled_kernel
+    endforeach
+  endforeach
+
+  hsaco_name = 'GstHipConverterUnpack_amd.hsaco'
+  compiled_kernel = custom_target(hsaco_name,
+    input : conv_comm_source,
+    output : hsaco_name,
+    command : [hipcc] + amd_opt_common)
+  amd_conv_precompiled += compiled_kernel
+
+  amd_conv_hsaco_collection = custom_target('hip_converter_hsaco',
+    input : amd_conv_precompiled,
+    output : 'converter_hsaco.h',
+    command : [amd_header_collector,
+      '--input', meson.current_build_dir(),
+      '--prefix', 'GstHipConverter',
+      '--name', 'g_precompiled_hsaco_table',
+      '--output', '@OUTPUT@'
+    ]
+  )
+
+  hip_amd_precompiled += [
+    amd_conv_precompiled,
+    amd_conv_hsaco_collection,
+  ]
+endif
+
+if have_nvcc
+  nvidia_header_collector = find_program('collect_ptx_headers.py')
+  nvidia_conv_precompiled = []
+  nvidia_opt_common = ['-ptx', '-w', '-o', '@OUTPUT@']
+  nvidia_arch_opt = get_option('hip-nvcc-arch')
+  if nvidia_arch_opt != ''
+    nvidia_opt_common += '-arch=' + nvidia_arch_opt
+  endif
+
+  foreach input_format : conv_input_formats
+    foreach output_format : conv_output_formats
+      ptx_name = 'GstHipConverterMain_@0@_@1@_nvidia.ptx'.format(input_format, output_format)
+      opts = nvidia_opt_common + ['-DSAMPLER=Sample@0@'.format(input_format),
+          '-DOUTPUT=Output@0@'.format(output_format), '@INPUT@']
+      compiled_kernel = custom_target(ptx_name,
+        input : conv_source,
+        output : ptx_name,
+        command : [nvcc] + opts)
+      nvidia_conv_precompiled += compiled_kernel
+    endforeach
+  endforeach
+
+  ptx_name = 'GstHipConverterUnpack_nvidia.ptx'
+  compiled_kernel = custom_target(ptx_name,
+    input : conv_comm_source,
+    output : ptx_name,
+    command : [nvcc] + nvidia_opt_common + ['@INPUT@'])
+  nvidia_conv_precompiled += compiled_kernel
+
+  nvidia_conv_ptx_collection = custom_target('hip_converter_ptx',
+    input : nvidia_conv_precompiled,
+    output : 'converter_ptx.h',
+    command : [nvidia_header_collector,
+      '--input', meson.current_build_dir(),
+      '--prefix', 'GstHipConverter',
+      '--name', 'g_precompiled_ptx_table',
+      '--output', '@OUTPUT@'
+    ]
+  )
+
+  hip_nvidia_precompiled += [
+    nvidia_conv_precompiled,
+    nvidia_conv_ptx_collection,
+  ]
+endif
\ No newline at end of file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/hip/meson.build
Added
@@ -0,0 +1,88 @@
+hip_sources = [
+  'gsthipbasefilter.cpp',
+  'gsthipcompositor.cpp',
+  'gsthipconverter.cpp',
+  'gsthipconvertscale.cpp',
+  'gsthipmemorycopy.cpp',
+  'plugin.cpp',
+]
+
+doc_sources = []
+foreach s: hip_sources
+  doc_sources += meson.current_source_dir() / s
+endforeach
+
+plugin_sources += {
+  'hip': pathsep.join(doc_sources)
+}
+
+extra_args = [
+  '-DGST_USE_UNSTABLE_API',
+]
+
+extra_deps = []
+hip_amd_precompiled = []
+hip_nvidia_precompiled = []
+
+if not gsthip_dep.found()
+  subdir_done()
+endif
+
+hip_precompile_amd_opt = get_option('hip-amd-precompile')
+hip_precompile_nvidia_opt = get_option('hip-nvidia-precompile')
+have_hipcc = false
+have_nvcc = false
+if not hip_precompile_amd_opt.disabled() and not meson.is_cross_build()
+  if host_system == 'windows'
+    hipcc = find_program('hipcc.bin', required: false)
+    if not hipcc.found()
+      hip_root = run_command(python3, '-c', 'import os; print(os.environ.get("HIP_PATH"))', check: false).stdout().strip()
+      if hip_root != '' and hip_root != 'None'
+        hip_bin_path = join_paths(hip_root, 'bin')
+        hipcc = find_program('hipcc.bin',
+          dirs: hip_bin_path,
+          required: hip_precompile_amd_opt)
+      endif
+    endif
+  else
+    hipcc = find_program('hipcc', required: hip_precompile_amd_opt)
+  endif
+  have_hipcc = hipcc.found()
+endif
+
+if not hip_precompile_nvidia_opt.disabled() and not meson.is_cross_build()
+  nvcc = find_program('nvcc', required: hip_precompile_nvidia_opt)
+  have_nvcc = nvcc.found()
+endif
+
+hip_cdata = configuration_data()
+if have_hipcc or have_nvcc
+  hip_cdata.set('HIP_AMD_PRECOMPILED', have_hipcc)
+  hip_cdata.set('HIP_NVIDIA_PRECOMPILED', have_nvcc)
+  subdir('kernel')
+endif
+
+if gstcuda_dep.found()
+  hip_cdata.set('HAVE_GST_CUDA', true)
+  extra_deps += gstcuda_dep
+endif
+
+if gsthip_gl_dep.found()
+  hip_cdata.set('HAVE_GST_HIP_GL', true)
+  extra_deps += gsthip_gl_dep
+endif
+
+configure_file(
+  output: 'gsthip-config.h',
+  configuration: hip_cdata,
+)
+
+gsthip = library('gsthip', hip_sources + hip_amd_precompiled + hip_nvidia_precompiled,
+  c_args : gst_plugins_bad_args + extra_args,
+  cpp_args: gst_plugins_bad_args + extra_args,
+  include_directories : [configinc, hipstub_incdir],
+  dependencies : [gstbase_dep, gstvideo_dep, gmodule_dep, gsthip_dep] + extra_deps,
+  install : true,
+  install_dir : plugins_install_dir,
+)
+plugins += [gsthip]
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/hip/plugin.cpp
Added
@@ -0,0 +1,82 @@
+/* GStreamer
+ * Copyright (C) 2025 Seungha Yang <seungha@centricular.com>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Library General Public
+ * License as published by the Free Software Foundation; either
+ * version 2 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Library General Public License for more details.
+ *
+ * You should have received a copy of the GNU Library General Public
+ * License along with this library; if not, write to the
+ * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
+ * Boston, MA 02110-1301, USA.
+ */
+
+/**
+ * plugin-hip:
+ *
+ * Since: 1.28
+ */
+
+#ifdef HAVE_CONFIG_H
+#include <config.h>
+#endif
+
+#include <gst/gst.h>
+#include <gst/hip/gsthip.h>
+#include "gsthipmemorycopy.h"
+#include "gsthipconvertscale.h"
+#include "gsthipcompositor.h"
+
+static gboolean
+plugin_init (GstPlugin * plugin)
+{
+  auto device = gst_hip_device_new (GST_HIP_VENDOR_UNKNOWN, 0);
+  if (!device)
+    return TRUE;
+
+  gst_element_register (plugin,
+      "hipupload", GST_RANK_NONE, GST_TYPE_HIP_UPLOAD);
+  gst_element_register (plugin,
+      "hipdownload", GST_RANK_NONE, GST_TYPE_HIP_DOWNLOAD);
+
+  gboolean texture_support = FALSE;
+  g_object_get (device, "texture2d-support", &texture_support, nullptr);
+  if (!texture_support) {
+    gst_plugin_add_status_info (plugin,
+        "Texture2D not supported by HIP device");
+  }
+
+  auto have_rtc = gst_hip_rtc_load_library (GST_HIP_VENDOR_UNKNOWN);
+  if (!have_rtc) {
+    gst_plugin_add_status_info (plugin,
+        "Couldn't find runtime kernel compiler library");
+  }
+
+  if (texture_support && have_rtc) {
+    gst_element_register (plugin,
+        "hipconvertscale", GST_RANK_NONE, GST_TYPE_HIP_CONVERT_SCALE);
+    gst_element_register (plugin,
+        "hipconvert", GST_RANK_NONE, GST_TYPE_HIP_CONVERT);
+    gst_element_register (plugin,
+        "hipscale", GST_RANK_NONE, GST_TYPE_HIP_SCALE);
+    gst_element_register (plugin,
+        "hipcompositor", GST_RANK_NONE, GST_TYPE_HIP_COMPOSITOR);
+  }
+
+  gst_clear_object (&device);
+  gst_type_mark_as_plugin_api (GST_TYPE_HIP_VENDOR, (GstPluginAPIFlags) 0);
+
+  return TRUE;
+}
+
+GST_PLUGIN_DEFINE (GST_VERSION_MAJOR,
+    GST_VERSION_MINOR,
+    hip,
+    "HIP plugin",
+    plugin_init, VERSION, "LGPL", GST_PACKAGE_NAME, GST_PACKAGE_ORIGIN)
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/ipcpipeline/gstipcpipelinecomm.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/ipcpipeline/gstipcpipelinecomm.c
Changed
@@ -146,7 +146,7 @@
     case COMM_REQUEST_TYPE_MESSAGE:
       return ret ? "TRUE" : "FALSE";
     case COMM_REQUEST_TYPE_STATE_CHANGE:
-      return gst_element_state_change_return_get_name (ret);
+      return gst_state_change_return_get_name (ret);
     default:
       g_assert_not_reached ();
   }
@@ -1211,8 +1211,8 @@
   GST_TRACE_OBJECT (comm->element, "Writing state change %u: %s -> %s",
       comm->send_id,
-      gst_element_state_get_name (GST_STATE_TRANSITION_CURRENT (transition)),
-      gst_element_state_get_name (GST_STATE_TRANSITION_NEXT (transition)));
+      gst_state_get_name (GST_STATE_TRANSITION_CURRENT (transition)),
+      gst_state_get_name (GST_STATE_TRANSITION_NEXT (transition)));

   gst_byte_writer_init (&bw);
   if (!gst_byte_writer_put_uint8 (&bw, payload_type))
@@ -2034,10 +2034,9 @@
   GST_TRACE_OBJECT (comm->element,
       "deserialized state change request: %s -> %s",
-      gst_element_state_get_name (GST_STATE_TRANSITION_CURRENT
+      gst_state_get_name (GST_STATE_TRANSITION_CURRENT
          (transition)),
-      gst_element_state_get_name (GST_STATE_TRANSITION_NEXT
-          (transition)));
+      gst_state_get_name (GST_STATE_TRANSITION_NEXT (transition)));

   if (comm->on_state_change)
     (*comm->on_state_change) (comm->id, transition, comm->user_data);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/ipcpipeline/gstipcpipelinesink.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/ipcpipeline/gstipcpipelinesink.c
Changed
@@ -526,6 +526,8 @@
     GST_OBJECT_UNLOCK (sink);
     GST_STATE_UNLOCK (sink);
   }
+
+  gst_message_unref (message);
 }

 static void
@@ -540,8 +542,8 @@
   GST_OBJECT_LOCK (sink);
   if (sink->pass_next_async_done) {
     GST_OBJECT_UNLOCK (sink);
-    gst_element_call_async (GST_ELEMENT (sink), do_async_done,
-        message, (GDestroyNotify) gst_message_unref);
+    gst_object_call_async (GST_OBJECT (sink),
+        (GstObjectCallAsyncFunc) do_async_done, message);
   } else {
     GST_OBJECT_UNLOCK (sink);
     gst_message_unref (message);
@@ -595,8 +597,8 @@
   gboolean down = FALSE;

   GST_DEBUG_OBJECT (sink, "Got state change request: %s -> %s",
-      gst_element_state_get_name (GST_STATE_TRANSITION_CURRENT (transition)),
-      gst_element_state_get_name (GST_STATE_TRANSITION_NEXT (transition)));
+      gst_state_get_name (GST_STATE_TRANSITION_CURRENT (transition)),
+      gst_state_get_name (GST_STATE_TRANSITION_NEXT (transition)));

   switch (transition) {
     case GST_STATE_CHANGE_NULL_TO_READY:
@@ -636,7 +638,7 @@
   if (async) {
     GST_DEBUG_OBJECT (sink,
        "Posting async-start for %s, will need state-change-done",
-        gst_element_state_get_name (GST_STATE_TRANSITION_NEXT (transition)));
+        gst_state_get_name (GST_STATE_TRANSITION_NEXT (transition)));

     gst_element_post_message (GST_ELEMENT (sink),
        gst_message_new_async_start (GST_OBJECT (sink)));
@@ -684,15 +686,15 @@
   }

   GST_DEBUG_OBJECT (sink, "For %s -> %s: Peer ret: %s, parent ret: %s",
-      gst_element_state_get_name (GST_STATE_TRANSITION_CURRENT (transition)),
-      gst_element_state_get_name (GST_STATE_TRANSITION_NEXT (transition)),
-      gst_element_state_change_return_get_name (peer_ret),
-      gst_element_state_change_return_get_name (ret));
+      gst_state_get_name (GST_STATE_TRANSITION_CURRENT (transition)),
+      gst_state_get_name (GST_STATE_TRANSITION_NEXT (transition)),
+      gst_state_change_return_get_name (peer_ret),
+      gst_state_change_return_get_name (ret));

   /* now interpret the return codes */
   if (async && peer_ret != GST_STATE_CHANGE_ASYNC) {
     GST_DEBUG_OBJECT (sink, "Posting async-done for %s; peer wasn't ASYNC",
-        gst_element_state_get_name (GST_STATE_TRANSITION_NEXT (transition)));
+        gst_state_get_name (GST_STATE_TRANSITION_NEXT (transition)));

     GST_OBJECT_LOCK (sink);
     sink->pass_next_async_done = FALSE;
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/ipcpipeline/gstipcpipelinesrc.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/ipcpipeline/gstipcpipelinesrc.c
Changed
@@ -680,6 +680,8 @@
     GST_DEBUG_OBJECT (src, "Event pushed, return %d", ret);
     gst_ipc_pipeline_comm_write_boolean_ack_to_fd (&src->comm, id, ret);
   }
+
+  gst_event_unref (event);
 }

 static void
@@ -732,8 +734,8 @@
     } else {
       GST_DEBUG_OBJECT (src,
          "This is not a serialized event, pushing in a thread");
-      gst_element_call_async (GST_ELEMENT (src), do_oob_event, event,
-          (GDestroyNotify) gst_event_unref);
+      gst_object_call_async (GST_OBJECT (src),
+          (GstObjectCallAsyncFunc) do_oob_event, event);
     }
   }
 }
@@ -769,6 +771,8 @@
     GST_DEBUG_OBJECT (src, "Query pushed, return %d", ret);
   }
   gst_ipc_pipeline_comm_write_query_result_to_fd (&src->comm, id, ret, query);
+
+  gst_query_unref (query);
 }

 static void
@@ -788,8 +792,8 @@
   } else {
     gst_mini_object_set_qdata (GST_MINI_OBJECT (query), QUARK_UPSTREAM,
        GINT_TO_POINTER (upstream), NULL);
-    gst_element_call_async (GST_ELEMENT (src), do_oob_query, query,
-        (GDestroyNotify) gst_query_unref);
+    gst_object_call_async (GST_OBJECT (src),
+        (GstObjectCallAsyncFunc) do_oob_query, query);
   }
 }

@@ -812,8 +816,8 @@
   gboolean down;

   GST_DEBUG_OBJECT (src, "Doing state change id %u, %s -> %s", id,
-      gst_element_state_get_name (GST_STATE_TRANSITION_CURRENT (transition)),
-      gst_element_state_get_name (GST_STATE_TRANSITION_NEXT (transition)));
+      gst_state_get_name (GST_STATE_TRANSITION_CURRENT (transition)),
+      gst_state_get_name (GST_STATE_TRANSITION_NEXT (transition)));

   if (!(pipeline = find_pipeline (element))) {
     GST_ERROR_OBJECT (src, "No pipeline found");
@@ -832,17 +836,16 @@
   effective = pending == GST_STATE_VOID_PENDING ? state : pending;

   GST_DEBUG_OBJECT (src, "Current element state: ret:%s state:%s pending:%s "
-      "effective:%s", gst_element_state_change_return_get_name (ret),
-      gst_element_state_get_name (state),
-      gst_element_state_get_name (pending),
-      gst_element_state_get_name (effective));
+      "effective:%s", gst_state_change_return_get_name (ret),
+      gst_state_get_name (state),
+      gst_state_get_name (pending), gst_state_get_name (effective));

   if ((GST_STATE_TRANSITION_NEXT (transition) <= effective && !down) ||
       (GST_STATE_TRANSITION_NEXT (transition) > effective && down)) {
     /* if the request was to transition to a state that we have already
      * transitioned to in the same direction, then we just silently return */
     GST_DEBUG_OBJECT (src, "State transition to %s is unnecessary",
-        gst_element_state_get_name (GST_STATE_TRANSITION_NEXT (transition)));
+        gst_state_get_name (GST_STATE_TRANSITION_NEXT (transition)));
     /* make sure we return SUCCESS if the transition is to NULL or READY,
      * even if our current ret is ASYNC for example; also, make sure not
      * to return FAILURE, since our state is already committed */
@@ -861,15 +864,17 @@
     ret = gst_element_set_state (pipeline,
        GST_STATE_TRANSITION_NEXT (transition));
     GST_DEBUG_OBJECT (src, "gst_element_set_state returned %s",
-        gst_element_state_change_return_get_name (ret));
+        gst_state_change_return_get_name (ret));
   }
   GST_STATE_UNLOCK (pipeline);

done_nolock:
   GST_DEBUG_OBJECT (src, "sending state change ack, ret = %s",
-      gst_element_state_change_return_get_name (ret));
+      gst_state_change_return_get_name (ret));
   gst_ipc_pipeline_comm_write_state_change_ack_to_fd (&src->comm, id, ret);
+
+  g_free (data);
 }

 static void
@@ -879,14 +884,15 @@
   GstElement *ipcpipelinesrc = GST_ELEMENT (user_data);

   GST_DEBUG_OBJECT (ipcpipelinesrc, "Got state change id %u, %s -> %s", id,
-      gst_element_state_get_name (GST_STATE_TRANSITION_CURRENT (transition)),
-      gst_element_state_get_name (GST_STATE_TRANSITION_NEXT (transition)));
+      gst_state_get_name (GST_STATE_TRANSITION_CURRENT (transition)),
+      gst_state_get_name (GST_STATE_TRANSITION_NEXT (transition)));

   d = g_new (struct StateChangeData, 1);
   d->id = id;
   d->transition = transition;

-  gst_element_call_async (ipcpipelinesrc, do_state_change, d, g_free);
+  gst_object_call_async (GST_OBJECT (ipcpipelinesrc),
+      (GstObjectCallAsyncFunc) do_state_change, d);
 }

 static void
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/mediafoundation/gstmfcapturedshow.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/mediafoundation/gstmfcapturedshow.cpp
Changed
@@ -798,7 +798,8 @@
     return nullptr;

   header = (VIDEOINFOHEADER *) type->pbFormat;
-  if (header->bmiHeader.biWidth <= 0 || header->bmiHeader.biHeight <= 0) {
+  // biHeight can be either positive or negative (top-down image), so check against zero
+  if (header->bmiHeader.biWidth <= 0 || header->bmiHeader.biHeight == 0) {
     return nullptr;
   }

@@ -809,11 +810,13 @@
   }

   if (top_down_image) {
+    // The documentation for BITMAPINFOHEADER states that: For uncompressed
+    // RGB bitmaps, if biHeight is positive, the bitmap is a bottom-up DIB
+    // with the origin at the lower left corner. If biHeight is negative, the
+    // bitmap is a top-down DIB with the origin at the upper left corner.
     const GstVideoFormatInfo *finfo = gst_video_format_get_info (format);
-    if (GST_VIDEO_FORMAT_INFO_IS_RGB (finfo) && header->bmiHeader.biHeight < 0) {
-      *top_down_image = FALSE;
-    } else {
-      *top_down_image = TRUE;
+    if (GST_VIDEO_FORMAT_INFO_IS_RGB (finfo)) {
+      *top_down_image = header->bmiHeader.biHeight < 0;
     }
   }

@@ -821,7 +824,7 @@
   gst_caps_set_simple (caps, "format", G_TYPE_STRING,
       gst_video_format_to_string (format),
       "width", G_TYPE_INT, (gint) header->bmiHeader.biWidth,
-      "height", G_TYPE_INT, (gint) header->bmiHeader.biHeight,
+      "height", G_TYPE_INT, (gint) ABS (header->bmiHeader.biHeight),
       "framerate", GST_TYPE_FRACTION, fps_n, fps_d, nullptr);

   return caps;
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/meson.build
Changed
@@ -13,6 +13,7 @@
 subdir('dvb')
 subdir('dwrite')
 subdir('fbdev')
+subdir('hip')
 subdir('ipcpipeline')
 subdir('kms')
 subdir('magicleap')
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/nvcodec/gstcudacompositor.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/nvcodec/gstcudacompositor.cpp
Changed
@@ -158,6 +158,7 @@
   GstCudaConverter *conv = nullptr;
   GstBufferPool *fallback_pool = nullptr;
   GstBuffer *prepared_buf = nullptr;
+  GstVideoInfo pool_info;

   gboolean config_updated = FALSE;
@@ -586,6 +587,24 @@
     return gst_buffer_ref (buffer);
   }

+  if (!gst_video_frame_map (&src, &pad->info, buffer, GST_MAP_READ)) {
+    GST_ERROR_OBJECT (pad, "Couldn't map src frame");
+    return nullptr;
+  }
+
+  auto frame_width = GST_VIDEO_FRAME_WIDTH (&src);
+  auto frame_height = GST_VIDEO_FRAME_HEIGHT (&src);
+
+  if (priv->fallback_pool &&
+      (priv->pool_info.width != frame_width ||
+          priv->pool_info.height != frame_height)) {
+    /* Size can be different if crop meta is in use */
+    GST_DEBUG_OBJECT (pad,
+        "Fallback pool size mismatch, releasing old fallback pool");
+    gst_buffer_pool_set_active (priv->fallback_pool, FALSE);
+    gst_clear_object (&priv->fallback_pool);
+  }
+
   if (!priv->fallback_pool) {
     priv->fallback_pool = gst_cuda_buffer_pool_new (self->context);
     auto config = gst_buffer_pool_get_config (priv->fallback_pool);
@@ -593,8 +612,12 @@
     if (self->stream)
       gst_buffer_pool_config_set_cuda_stream (config, self->stream);

-    auto caps = gst_video_info_to_caps (&pad->info);
-    gst_buffer_pool_config_set_params (config, caps, pad->info.size, 0, 0);
+    gst_video_info_set_format (&priv->pool_info,
+        GST_VIDEO_INFO_FORMAT (&pad->info), frame_width, frame_height);
+
+    auto caps = gst_video_info_to_caps (&priv->pool_info);
+    gst_buffer_pool_config_set_params (config,
+        caps, priv->pool_info.size, 0, 0);
     gst_caps_unref (caps);
     if (!gst_buffer_pool_set_config (priv->fallback_pool, config)) {
       GST_ERROR_OBJECT (pad, "Set config failed");
@@ -613,12 +636,7 @@
   gst_buffer_pool_acquire_buffer (priv->fallback_pool, &outbuf, nullptr);
   if (!outbuf) {
     GST_ERROR_OBJECT (self, "Couldn't acquire buffer");
-    return nullptr;
-  }
-
-  if (!gst_video_frame_map (&src, &pad->info, buffer, GST_MAP_READ)) {
-    GST_ERROR_OBJECT (pad, "Couldn't map src frame");
-    gst_buffer_unref (outbuf);
+    gst_video_frame_unmap (&src);
     return nullptr;
   }

@@ -639,6 +657,18 @@
     return nullptr;
   }

+  auto cmeta = gst_buffer_get_video_crop_meta (buffer);
+  if (cmeta) {
+    auto new_cmeta = gst_buffer_get_video_crop_meta (outbuf);
+    if (!new_cmeta)
+      new_cmeta = gst_buffer_add_video_crop_meta (outbuf);
+
+    new_cmeta->x = cmeta->x;
+    new_cmeta->y = cmeta->y;
+    new_cmeta->width = cmeta->width;
+    new_cmeta->height = cmeta->height;
+  }
+
   return outbuf;
 }

@@ -1330,6 +1360,7 @@
   }

   gst_query_add_allocation_meta (query, GST_VIDEO_META_API_TYPE, nullptr);
+  gst_query_add_allocation_meta (query, GST_VIDEO_CROP_META_API_TYPE, nullptr);

   return TRUE;
 }
@@ -1652,6 +1683,31 @@
   if (in_stream != stream)
     gst_cuda_memory_sync (in_cmem);

+  gint x, y, w, h;
+  gint x_offset = 0;
+  gint y_offset = 0;
+
+  if (pad_priv->xpos < 0)
+    x_offset = pad_priv->xpos;
+
+  if (pad_priv->ypos < 0)
+    y_offset = pad_priv->ypos;
+
+  auto crop_meta = gst_buffer_get_video_crop_meta (in_frame->buffer);
+  if (crop_meta) {
+    x = crop_meta->x;
+    y = crop_meta->y;
+    w = crop_meta->width;
+    h = crop_meta->height;
+  } else {
+    x = y = 0;
+    w = pad->info.width;
+    h = pad->info.height;
+  }
+
+  g_object_set (pad_priv->conv, "src-x", x - x_offset, "src-y", y - y_offset,
+      "src-width", w + x_offset, "src-height", h + y_offset, nullptr);
+
   if (!gst_cuda_converter_convert_frame (pad_priv->conv, in_frame, &frame,
          stream_handle, nullptr)) {
     GST_ERROR_OBJECT (pad, "Couldn't convert frame");
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/nvcodec/gstcudaconverter.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/nvcodec/gstcudaconverter.cpp
Changed
@@ -630,8 +630,8 @@
 struct ConstBuffer
 {
   ColorMatrix convert_matrix;
-  int width;
-  int height;
+  int out_width;
+  int out_height;
   int left;
   int top;
   int right;
@@ -643,10 +643,12 @@
   float border_z;
   float border_w;
   int fill_border;
-  int video_direction;
   float alpha;
   int do_blend;
   int do_convert;
+  float transform_u[2];
+  float transform_v[2];
+  float transform_offset[2];
 };

 #define COLOR_SPACE_IDENTITY "color_space_identity"
@@ -732,13 +734,6 @@
   MAKE_FORMAT_RGB (VUYA, UNSIGNED_INT8, SAMPLE_VUYA),
 };

-struct TextureBuffer
-{
-  CUdeviceptr ptr = 0;
-  gsize stride = 0;
-  CUtexObject texture = 0;
-};
-
 enum
 {
   PROP_0,
@@ -746,6 +741,10 @@
   PROP_DEST_Y,
   PROP_DEST_WIDTH,
   PROP_DEST_HEIGHT,
+  PROP_SRC_X,
+  PROP_SRC_Y,
+  PROP_SRC_WIDTH,
+  PROP_SRC_HEIGHT,
   PROP_FILL_BORDER,
   PROP_VIDEO_DIRECTION,
   PROP_ALPHA,
@@ -778,8 +777,7 @@
   const TextureFormat *texture_fmt;
   gint texture_align;

-  TextureBuffer fallback_buffer[GST_VIDEO_MAX_COMPONENTS];
-  TextureBuffer unpack_buffer;
+  GstCudaMemory *fallback_mem = nullptr;

   ConstBuffer *const_buf = nullptr;
   CUmodule main_module = nullptr;
@@ -788,7 +786,9 @@
   CUmodule unpack_module = nullptr;
   CUfunction unpack_func = nullptr;

-  gboolean update_const_buf = TRUE;
+  gboolean update_const_buf = FALSE;
+  gint prev_src_width = 0;
+  gint prev_src_height = 0;

   GstCudaStream *stream = nullptr;
@@ -797,6 +797,10 @@
   gint dest_y = 0;
   gint dest_width = 0;
   gint dest_height = 0;
+  gint src_x = 0;
+  gint src_y = 0;
+  gint src_width = 0;
+  gint src_height = 0;
   GstVideoOrientationMethod video_direction = GST_VIDEO_ORIENTATION_IDENTITY;
   gboolean fill_border = FALSE;
   CUfilter_mode filter_mode = CU_TR_FILTER_MODE_LINEAR;
@@ -826,19 +830,33 @@
   object_class->get_property = gst_cuda_converter_get_property;

   g_object_class_install_property (object_class, PROP_DEST_X,
-      g_param_spec_int ("dest-x", "Dest-X",
-          "x poisition in the destination frame", G_MININT, G_MAXINT, 0,
+      g_param_spec_int ("dest-x", "Dest X",
+          "x position in the destination frame", G_MININT, G_MAXINT, 0,
          param_flags));
   g_object_class_install_property (object_class, PROP_DEST_Y,
-      g_param_spec_int ("dest-y", "Dest-Y",
-          "y poisition in the destination frame", G_MININT, G_MAXINT, 0,
+      g_param_spec_int ("dest-y", "Dest Y",
+          "y position in the destination frame", G_MININT, G_MAXINT, 0,
          param_flags));
   g_object_class_install_property (object_class, PROP_DEST_WIDTH,
-      g_param_spec_int ("dest-width", "Dest-Width",
+      g_param_spec_int ("dest-width", "Dest Width",
          "Width in the destination frame", 0, G_MAXINT, 0, param_flags));
   g_object_class_install_property (object_class, PROP_DEST_HEIGHT,
-      g_param_spec_int ("dest-height", "Dest-Height",
+      g_param_spec_int ("dest-height", "Dest Height",
          "Height in the destination frame", 0, G_MAXINT, 0, param_flags));
+  g_object_class_install_property (object_class, PROP_SRC_X,
+      g_param_spec_int ("src-x", "Src X",
+          "x position in the source frame", G_MININT, G_MAXINT, 0,
+          param_flags));
+  g_object_class_install_property (object_class, PROP_SRC_Y,
+      g_param_spec_int ("src-y", "Src Y",
+          "y position in the source frame", G_MININT, G_MAXINT, 0,
+          param_flags));
+  g_object_class_install_property (object_class, PROP_SRC_WIDTH,
+      g_param_spec_int ("src-width", "Src Width",
+          "Width in the source frame", 0, G_MAXINT, 0, param_flags));
+  g_object_class_install_property (object_class, PROP_SRC_HEIGHT,
+      g_param_spec_int ("src-height", "Src Height",
+          "Height in the source frame", 0, G_MAXINT, 0, param_flags));
   g_object_class_install_property (object_class, PROP_FILL_BORDER,
       g_param_spec_boolean ("fill-border", "Fill border",
          "Fill border", FALSE, param_flags));
@@ -880,36 +898,11 @@
       CuModuleUnload (priv->main_module);
       priv->main_module = nullptr;
     }
+  }

-    for (guint i = 0; i < G_N_ELEMENTS (priv->fallback_buffer); i++) {
-      if (priv->fallback_buffer[i].ptr) {
-        if (priv->fallback_buffer[i].texture) {
-          CuTexObjectDestroy (priv->fallback_buffer[i].texture);
-          priv->fallback_buffer[i].texture = 0;
-        }
-
-        if (stream)
-          CuMemFreeAsync (priv->fallback_buffer[i].ptr, stream);
-        else
-          CuMemFree (priv->fallback_buffer[i].ptr);
-        priv->fallback_buffer[i].ptr = 0;
-      }
-    }
-
-    if (priv->unpack_buffer.ptr) {
-      if (priv->unpack_buffer.texture) {
-        CuTexObjectDestroy (priv->unpack_buffer.texture);
-        priv->unpack_buffer.texture = 0;
-      }
-
-      if (stream)
-        CuMemFreeAsync (priv->unpack_buffer.ptr, stream);
-      else
-        CuMemFree (priv->unpack_buffer.ptr);
-      priv->unpack_buffer.ptr = 0;
-    }
-
-    gst_cuda_context_pop (nullptr);
+  if (priv->fallback_mem) {
+    gst_memory_unref ((GstMemory *) priv->fallback_mem);
+    priv->fallback_mem = nullptr;
   }

   if (stream)
@@ -983,6 +976,42 @@
       }
       break;
     }
+    case PROP_SRC_X:
+    {
+      auto src_x = g_value_get_int (value);
+      if (priv->src_x != src_x) {
+        priv->src_x = src_x;
+        priv->update_const_buf = TRUE;
+      }
+      break;
+    }
+    case PROP_SRC_Y:
+    {
+      auto src_y = g_value_get_int (value);
+      if (priv->src_y != src_y) {
+        priv->src_y = src_y;
+        priv->update_const_buf = TRUE;
+      }
+      break;
+    }
+    case PROP_SRC_WIDTH:
+    {
+      auto src_width = g_value_get_int (value);
+      if (priv->src_width != src_width) {
+        priv->src_width = src_width;
+        priv->update_const_buf = TRUE;
+      }
+      break;
+    }
+    case PROP_SRC_HEIGHT:
+    {
+      auto src_height = g_value_get_int (value);
+      if (priv->src_height != src_height) {
+        priv->src_height = src_height;
+        priv->update_const_buf = TRUE;
+      }
+      break;
+    }
     case PROP_FILL_BORDER:
     {
       auto fill_border = g_value_get_boolean (value);
@@ -1000,7 +1029,6 @@
       if (priv->video_direction != video_direction) {
         priv->update_const_buf = TRUE;
         priv->video_direction = video_direction;
-        priv->const_buf->video_direction = video_direction;
       }
       break;
     }
@@ -1049,6 +1077,18 @@
     case PROP_DEST_HEIGHT:
       g_value_set_int (value, priv->dest_height);
       break;
+    case PROP_SRC_X:
+      g_value_set_int (value, priv->src_x);
+      break;
+    case PROP_SRC_Y:
+      g_value_set_int (value, priv->src_y);
+      break;
+    case PROP_SRC_WIDTH:
+      g_value_set_int (value, priv->src_width);
+      break;
+    case PROP_SRC_HEIGHT:
+      g_value_set_int (value, priv->src_height);
+      break;
     case PROP_FILL_BORDER:
       g_value_set_boolean (value, priv->fill_border);
       break;
@@ -1082,6 +1122,95 @@
   return "UNKNOWN";
 }

+static void
+gst_cuda_converter_update_transform (GstCudaConverter * self, float input_width,
+    float input_height)
+{
+  auto priv = self->priv;
+
+  float sx = (float) priv->src_width / input_width;
+  float sy = (float) priv->src_height / input_height;
+
+  float ox = (float) priv->src_x / input_width;
+  float oy = (float) priv->src_y / input_height;
+
+  switch (priv->video_direction) {
+    case GST_VIDEO_ORIENTATION_90R:
+      priv->const_buf->transform_u[0] = 0;
+      priv->const_buf->transform_u[1] = -sx;
+      priv->const_buf->transform_v[0] = sx;
+      priv->const_buf->transform_v[1] = 0;
+      priv->const_buf->transform_offset[0] = ox;
+      priv->const_buf->transform_offset[1] = oy + sy;
+      break;
+    case GST_VIDEO_ORIENTATION_180:
+      priv->const_buf->transform_u[0] = -sx;
+      priv->const_buf->transform_u[1] = 0;
+      priv->const_buf->transform_v[0] = 0;
+      priv->const_buf->transform_v[1] = -sy;
+      priv->const_buf->transform_offset[0] = ox + sx;
+      priv->const_buf->transform_offset[1] = oy + sy;
+      break;
+    case GST_VIDEO_ORIENTATION_90L:
+      priv->const_buf->transform_u[0] = 0;
+      priv->const_buf->transform_u[1] = sy;
+      priv->const_buf->transform_v[0] = -sx;
+      priv->const_buf->transform_v[1] = 0;
+      priv->const_buf->transform_offset[0] = ox + sx;
+      priv->const_buf->transform_offset[1] = oy;
+      break;
+    case GST_VIDEO_ORIENTATION_HORIZ:
+      priv->const_buf->transform_u[0] = -sx;
+      priv->const_buf->transform_u[1] = 0;
+      priv->const_buf->transform_v[0] = 0;
+      priv->const_buf->transform_v[1] = sy;
+      priv->const_buf->transform_offset[0] = ox + sx;
+      priv->const_buf->transform_offset[1] = oy;
+      break;
+    case GST_VIDEO_ORIENTATION_VERT:
+      priv->const_buf->transform_u[0] = sx;
+      priv->const_buf->transform_u[1] = 0;
+      priv->const_buf->transform_v[0] = 0;
+      priv->const_buf->transform_v[1] = -sy;
+      priv->const_buf->transform_offset[0] = ox;
+      priv->const_buf->transform_offset[1] = oy + sy;
+      break;
+    case GST_VIDEO_ORIENTATION_UL_LR:
+      priv->const_buf->transform_u[0] = 0;
+      priv->const_buf->transform_u[1] = sy;
+      priv->const_buf->transform_v[0] = sx;
+      priv->const_buf->transform_v[1] = 0;
+      priv->const_buf->transform_offset[0] = ox;
+      priv->const_buf->transform_offset[1] = oy;
+      break;
+    case GST_VIDEO_ORIENTATION_UR_LL:
+      priv->const_buf->transform_u[0] = 0;
+      priv->const_buf->transform_u[1] = -sy;
+      priv->const_buf->transform_v[0] = -sx;
+      priv->const_buf->transform_v[1] = 0;
+      priv->const_buf->transform_offset[0] = ox + sx;
+      priv->const_buf->transform_offset[1] = oy + sy;
+      break;
+    case GST_VIDEO_ORIENTATION_IDENTITY:
+    default:
+      priv->const_buf->transform_u[0] = sx;
+      priv->const_buf->transform_u[1] = 0;
+      priv->const_buf->transform_v[0] = 0;
+      priv->const_buf->transform_v[1] = sy;
+      priv->const_buf->transform_offset[0] = ox;
+      priv->const_buf->transform_offset[1] = oy;
+      break;
+  }
+
+  GST_DEBUG_OBJECT (self, "transform, sx: %lf, sy: %lf, ox: %lf, oy %lf, "
+      "matrix: {%lf, %lf, %lf, %lf}, offset: {%lf, %lf}",
+      sx, sy, ox, oy, priv->const_buf->transform_u[0],
+      priv->const_buf->transform_u[1],
+      priv->const_buf->transform_v[0], priv->const_buf->transform_v[1],
+      priv->const_buf->transform_offset[0],
+      priv->const_buf->transform_offset[1]);
+}
+
 static gboolean
 gst_cuda_converter_setup (GstCudaConverter * self)
 {
@@ -1372,8 +1501,8 @@
     priv->const_buf->convert_matrix.max[i] = convert_matrix.max[i];
   }

-  priv->const_buf->width = out_info->width;
-  priv->const_buf->height = out_info->height;
+  priv->const_buf->out_width = out_info->width;
+  priv->const_buf->out_height = out_info->height;
   priv->const_buf->left = 0;
   priv->const_buf->top = 0;
   priv->const_buf->right = out_info->width;
@@ -1385,10 +1514,12 @@
   priv->const_buf->border_z = border_color[2];
   priv->const_buf->border_w = border_color[3];
   priv->const_buf->fill_border = 0;
-  priv->const_buf->video_direction = 0;
   priv->const_buf->alpha = 1;
   priv->const_buf->do_blend = 0;

+  gst_cuda_converter_update_transform (self, (float) priv->src_width,
+      (float) priv->src_height);
+
   guint cuda_device;
   g_object_get (self->context, "cuda-device-id", &cuda_device, nullptr);
@@ -1487,66 +1618,6 @@
   /* Allocates intermediate memory for texture */
   if (!unpack_name.empty ()) {
-    CUDA_TEXTURE_DESC texture_desc;
-    CUDA_RESOURCE_DESC resource_desc;
-    CUtexObject texture = 0;
-
-    memset (&texture_desc, 0, sizeof (CUDA_TEXTURE_DESC));
-    memset (&resource_desc, 0, sizeof (CUDA_RESOURCE_DESC));
-
-    if (priv->stream) {
-      auto stream = gst_cuda_stream_get_handle (priv->stream);
-      gint texture_align =
-          gst_cuda_context_get_texture_alignment (self->context);
-      gint stride = GST_VIDEO_INFO_COMP_WIDTH (texture_info, 0) *
-          GST_VIDEO_INFO_COMP_PSTRIDE (texture_info, 0);
-
-      priv->unpack_buffer.stride =
-          ((stride + texture_align - 1) / texture_align) * texture_align;
-
-      ret = CuMemAllocAsync (&priv->unpack_buffer.ptr,
-          priv->unpack_buffer.stride * GST_VIDEO_INFO_HEIGHT (texture_info),
-          stream);
-
-      if (gst_cuda_result (ret))
-        ret = CuStreamSynchronize (stream);
-    } else {
-      ret = CuMemAllocPitch (&priv->unpack_buffer.ptr,
-          &priv->unpack_buffer.stride,
-          GST_VIDEO_INFO_COMP_WIDTH (texture_info, 0) *
-          GST_VIDEO_INFO_COMP_PSTRIDE (texture_info, 0),
-          GST_VIDEO_INFO_HEIGHT (texture_info), 16);
-    }
-
-    if (!gst_cuda_result (ret)) {
-      GST_ERROR_OBJECT (self, "Couldn't allocate unpack buffer");
-      gst_cuda_context_pop (nullptr);
-      return FALSE;
-    }
-
-    resource_desc.resType = CU_RESOURCE_TYPE_PITCH2D;
-    resource_desc.res.pitch2D.format = priv->texture_fmt->array_format[0];
-    resource_desc.res.pitch2D.numChannels = 4;
-    resource_desc.res.pitch2D.width = in_info->width;
-    resource_desc.res.pitch2D.height = in_info->height;
-    resource_desc.res.pitch2D.pitchInBytes = priv->unpack_buffer.stride;
-    resource_desc.res.pitch2D.devPtr = priv->unpack_buffer.ptr;
-
-    texture_desc.filterMode = priv->filter_mode;
-    texture_desc.flags = 0x2;
-    texture_desc.addressMode0 = (CUaddress_mode) 1;
-    texture_desc.addressMode1 = (CUaddress_mode) 1;
-    texture_desc.addressMode2 = (CUaddress_mode) 1;
-
-    ret = CuTexObjectCreate (&texture, &resource_desc, &texture_desc, nullptr);
-    if (!gst_cuda_result (ret)) {
-      GST_ERROR_OBJECT (self, "Couldn't create unpack texture");
-      gst_cuda_context_pop (nullptr);
-      return FALSE;
-    }
-
-    priv->unpack_buffer.texture = texture;
-
     program = nullptr;
     const std::string unpack_module_name = "GstCudaConverterUnpack";
     auto precompiled = g_precompiled_ptx_table.find (unpack_module_name);
@@ -1690,6 +1761,12 @@
   priv->out_info = *out_info;
   priv->dest_width = out_info->width;
   priv->dest_height = out_info->height;
+  priv->src_x = 0;
+  priv->src_y = 0;
+  priv->src_width = in_info->width;
+  priv->src_height = in_info->height;
+  priv->prev_src_width = in_info->width;
+  priv->prev_src_height = in_info->height;

   g_object_get (context, "prefer-stream-ordered-alloc", &use_stream_ordered,
      nullptr);
@@ -1715,147 +1792,39 @@
   return nullptr;
 }

-static CUtexObject
-gst_cuda_converter_create_texture_unchecked (GstCudaConverter * self,
-    CUdeviceptr src, gint width, gint height, CUarray_format format,
-    guint channels, gint stride, gint plane, CUfilter_mode mode)
-{
-  CUDA_TEXTURE_DESC texture_desc;
-  CUDA_RESOURCE_DESC resource_desc;
-  CUtexObject texture = 0;
-  CUresult cuda_ret;
-
-  memset (&texture_desc, 0, sizeof (CUDA_TEXTURE_DESC));
-  memset (&resource_desc, 0, sizeof (CUDA_RESOURCE_DESC));
-
-  resource_desc.resType = CU_RESOURCE_TYPE_PITCH2D;
-  resource_desc.res.pitch2D.format = format;
-  resource_desc.res.pitch2D.numChannels = channels;
-  resource_desc.res.pitch2D.width = width;
-  resource_desc.res.pitch2D.height = height;
-  resource_desc.res.pitch2D.pitchInBytes = stride;
-  resource_desc.res.pitch2D.devPtr = src;
-
-  texture_desc.filterMode = mode;
-  /* Will read texture value as a normalized [0, 1] float value
-   * with [0, 1) coordinates */
-  /* CU_TRSF_NORMALIZED_COORDINATES */
-  texture_desc.flags = 0x2;
-  /* CU_TR_ADDRESS_MODE_CLAMP */
-  texture_desc.addressMode0 = (CUaddress_mode) 1;
texture_desc.addressMode1 = (CUaddress_mode) 1; - texture_desc.addressMode2 = (CUaddress_mode) 1; - - cuda_ret = - CuTexObjectCreate (&texture, &resource_desc, &texture_desc, nullptr); - - if (!gst_cuda_result (cuda_ret)) { - GST_ERROR_OBJECT (self, "Could not create texture"); - return 0; - } - - return texture; -} - -static gboolean -ensure_fallback_buffer (GstCudaConverter * self, gint width_in_bytes, - gint height, guint plane) -{ - GstCudaConverterPrivate *priv = self->priv; - CUresult ret; - - if (priv->fallback_bufferplane.ptr) - return TRUE; - - if (priv->stream) { - auto stream = gst_cuda_stream_get_handle (priv->stream); - gint texture_align = gst_cuda_context_get_texture_alignment (self->context); - priv->fallback_bufferplane.stride = - ((width_in_bytes + texture_align - 1) / texture_align) * texture_align; - ret = CuMemAllocAsync (&priv->unpack_buffer.ptr, - priv->fallback_bufferplane.stride * height, stream); - if (gst_cuda_result (ret)) - ret = CuStreamSynchronize (stream); - } else { - ret = CuMemAllocPitch (&priv->fallback_bufferplane.ptr, - &priv->fallback_bufferplane.stride, width_in_bytes, height, 16); - } - - if (!gst_cuda_result (ret)) { - GST_ERROR_OBJECT (self, "Couldn't allocate fallback buffer"); - return FALSE; - } - - return TRUE; -} - -static CUtexObject -gst_cuda_converter_create_texture (GstCudaConverter * self, - CUdeviceptr src, gint width, gint height, gint stride, CUfilter_mode mode, - CUarray_format format, guint channles, gint plane, CUstream stream) -{ - GstCudaConverterPrivate *priv = self->priv; - CUresult ret; - CUdeviceptr src_ptr; - CUDA_MEMCPY2D params = { 0, }; - - if (!ensure_fallback_buffer (self, stride, height, plane)) - return 0; - - params.srcMemoryType = CU_MEMORYTYPE_DEVICE; - params.srcPitch = stride; - params.srcDevice = (CUdeviceptr) src; - - params.dstMemoryType = CU_MEMORYTYPE_DEVICE; - params.dstPitch = priv->fallback_bufferplane.stride; - params.dstDevice = priv->fallback_bufferplane.ptr; - 
params.WidthInBytes = GST_VIDEO_INFO_COMP_WIDTH (&priv->in_info, plane) - * GST_VIDEO_INFO_COMP_PSTRIDE (&priv->in_info, plane), - params.Height = GST_VIDEO_INFO_COMP_HEIGHT (&priv->in_info, plane); - - ret = CuMemcpy2DAsync (¶ms, stream); - if (!gst_cuda_result (ret)) { - GST_ERROR_OBJECT (self, "Couldn't copy to fallback buffer"); - return 0; - } - - if (!priv->fallback_bufferplane.texture) { - src_ptr = priv->fallback_bufferplane.ptr; - stride = priv->fallback_bufferplane.stride; - - priv->fallback_bufferplane.texture = - gst_cuda_converter_create_texture_unchecked (self, src_ptr, width, - height, format, channles, stride, plane, mode); - } - - return priv->fallback_bufferplane.texture; -} - static gboolean gst_cuda_converter_unpack_rgb (GstCudaConverter * self, GstVideoFrame * src_frame, CUstream stream) { GstCudaConverterPrivate *priv = self->priv; - CUdeviceptr src; + CUdeviceptr src, dst; gint width, height, src_stride, dst_stride; CUresult ret; - gpointer args = { &src, &priv->unpack_buffer.ptr, + gpointer args = { &src, &dst, &width, &height, &src_stride, &dst_stride }; - g_assert (priv->unpack_buffer.ptr); - g_assert (priv->unpack_buffer.stride > 0); + GstMapInfo map; + if (!gst_memory_map ((GstMemory *) priv->fallback_mem, &map, + GST_MAP_WRITE_CUDA)) { + GST_ERROR_OBJECT (self, "Couldn't map unpack buffer"); + return FALSE; + } + + dst = (CUdeviceptr) map.data; src = (CUdeviceptr) GST_VIDEO_FRAME_PLANE_DATA (src_frame, 0); width = GST_VIDEO_FRAME_WIDTH (src_frame); height = GST_VIDEO_FRAME_HEIGHT (src_frame); src_stride = GST_VIDEO_FRAME_PLANE_STRIDE (src_frame, 0); - dst_stride = (gint) priv->unpack_buffer.stride; + dst_stride = priv->fallback_mem->info.stride0; ret = CuLaunchKernel (priv->unpack_func, DIV_UP (width, CUDA_BLOCK_X), DIV_UP (height, CUDA_BLOCK_Y), 1, CUDA_BLOCK_X, CUDA_BLOCK_Y, 1, 0, stream, args, nullptr); + gst_memory_unmap ((GstMemory *) priv->fallback_mem, &map); + if (!gst_cuda_result (ret)) { GST_ERROR_OBJECT (self, "Couldn't 
unpack source RGB"); return FALSE; @@ -1864,6 +1833,54 @@ return TRUE; } +static gboolean +gst_cuda_converter_copy_to_fallback (GstCudaConverter * self, + GstVideoFrame * in_frame, CUstream stream, CUtexObject * texture) +{ + auto priv = self->priv; + gboolean ret = FALSE; + + GstMapInfo map; + if (!gst_memory_map ((GstMemory *) priv->fallback_mem, + &map, GST_MAP_WRITE_CUDA)) { + GST_ERROR_OBJECT (self, "Couldn't map fallback memory"); + return FALSE; + } + + CUDA_MEMCPY2D params = { 0, }; + params.srcMemoryType = CU_MEMORYTYPE_DEVICE; + params.dstMemoryType = CU_MEMORYTYPE_DEVICE; + params.dstPitch = priv->fallback_mem->info.stride[0]; + + for (guint i = 0; i < GST_VIDEO_FRAME_N_PLANES (in_frame); i++) { + params.srcPitch = GST_VIDEO_FRAME_PLANE_STRIDE (in_frame, i); + params.srcDevice = (CUdeviceptr) GST_VIDEO_FRAME_PLANE_DATA (in_frame, i); + params.dstDevice = (CUdeviceptr) + (((guint8 *) map.data) + priv->fallback_mem->info.offset[i]); + params.WidthInBytes = GST_VIDEO_FRAME_COMP_WIDTH (in_frame, i) + * GST_VIDEO_FRAME_COMP_PSTRIDE (in_frame, i), + params.Height = GST_VIDEO_FRAME_COMP_HEIGHT (in_frame, i); + + auto cuda_ret = CuMemcpy2DAsync (&params, stream); + if (!gst_cuda_result (cuda_ret)) { + GST_ERROR_OBJECT (self, "Couldn't copy to fallback buffer"); + goto out; + } + + if (!gst_cuda_memory_get_texture (priv->fallback_mem, 0, priv->filter_mode, + &texture[i])) { + GST_ERROR_OBJECT (self, "Couldn't get texture %d", i); + goto out; + } + } + + ret = TRUE; + +out: + gst_memory_unmap ((GstMemory *) priv->fallback_mem, &map); + return ret; +} + gboolean gst_cuda_converter_convert_frame (GstCudaConverter * converter, GstVideoFrame * src_frame, GstVideoFrame * dst_frame, CUstream stream, @@ -1909,32 +1926,98 @@ return FALSE; } + if (cmem->info.width != priv->prev_src_width || + cmem->info.height != priv->prev_src_height) { + GST_DEBUG_OBJECT (converter, "Input frame size updated %dx%d -> %dx%d", + priv->prev_src_width, priv->prev_src_height, + cmem->info.width, 
cmem->info.height); + + priv->prev_src_width = cmem->info.width; + priv->prev_src_height = cmem->info.height; + + if (priv->fallback_mem) { + if (priv->fallback_mem->info.width != cmem->info.width || + priv->fallback_mem->info.height != cmem->info.height) { + GST_DEBUG_OBJECT (converter, "Releasing previous fallback memory"); + gst_memory_unref ((GstMemory *) priv->fallback_mem); + priv->fallback_mem = nullptr; + } + } + + priv->update_const_buf = TRUE; + } + + if (priv->update_const_buf) { + gst_cuda_converter_update_transform (converter, (float) cmem->info.width, + cmem->info.height); + priv->update_const_buf = FALSE; + } + if (priv->unpack_func) { - if (!gst_cuda_converter_unpack_rgb (converter, src_frame, stream)) - goto out; + if (!priv->fallback_mem) { + gst_video_info_set_format (&priv->texture_info, + GST_VIDEO_INFO_FORMAT (&priv->texture_info), cmem->info.width, + cmem->info.height); + if (priv->stream) { + priv->fallback_mem = + (GstCudaMemory *) gst_cuda_allocator_alloc_stream_ordered (nullptr, + converter->context, priv->stream, &priv->texture_info); + } else { + priv->fallback_mem = + (GstCudaMemory *) gst_cuda_allocator_alloc (nullptr, + converter->context, nullptr, &priv->texture_info); + } + + if (!priv->fallback_mem) { + GST_ERROR_OBJECT (converter, "Couldn't create unpack memory"); + goto out; + } + } - texture[0] = priv->unpack_buffer.texture; - if (!texture[0]) { - GST_ERROR_OBJECT (converter, "Unpack texture is unavailable"); + if (!gst_cuda_memory_get_texture (priv->fallback_mem, 0, priv->filter_mode, + &texture[0])) { + GST_ERROR_OBJECT (converter, "Couldn't get unpack texture"); goto out; } + + if (!gst_cuda_converter_unpack_rgb (converter, src_frame, stream)) + goto out; } else { + gboolean need_fallback = FALSE; for (i = 0; i < GST_VIDEO_FRAME_N_PLANES (src_frame); i++) { if (!gst_cuda_memory_get_texture (cmem, i, priv->filter_mode, &texture[i])) { - CUdeviceptr src; - src = (CUdeviceptr) GST_VIDEO_FRAME_PLANE_DATA (src_frame, i); - texture[i] = 
gst_cuda_converter_create_texture (converter, - src, GST_VIDEO_FRAME_COMP_WIDTH (src_frame, i), - GST_VIDEO_FRAME_COMP_HEIGHT (src_frame, i), - GST_VIDEO_FRAME_PLANE_STRIDE (src_frame, i), - priv->filter_mode, format->array_format[i], format->channels[i], - i, stream); + need_fallback = TRUE; need_sync = TRUE; + break; + } + } + + if (need_fallback) { + if (!priv->fallback_mem) { + GstVideoInfo fallback_info; + gst_video_info_set_format (&fallback_info, + GST_VIDEO_INFO_FORMAT (&priv->in_info), cmem->info.width, + cmem->info.height); + + if (priv->stream) { + priv->fallback_mem = (GstCudaMemory *) + gst_cuda_allocator_alloc_stream_ordered (nullptr, + converter->context, priv->stream, &fallback_info); + } else { + priv->fallback_mem = + (GstCudaMemory *) gst_cuda_allocator_alloc (nullptr, + converter->context, nullptr, &fallback_info); + } + + if (!priv->fallback_mem) { + GST_ERROR_OBJECT (converter, "Couldn't create fallback memory"); + goto out; + } } - if (!texture[i]) { - GST_ERROR_OBJECT (converter, "Couldn't create texture %d", i); + if (!gst_cuda_converter_copy_to_fallback (converter, + src_frame, stream, texture)) { goto out; } }
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/nvcodec/gstcudaconvertscale.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/nvcodec/gstcudaconvertscale.c
Changed
@@ -61,6 +61,9 @@ gint borders_h; gint borders_w; gboolean add_borders; + gboolean downstream_supports_crop_meta; + gboolean same_caps; + GstVideoRectangle in_rect; /* orientation */ /* method configured via property */ @@ -86,10 +89,12 @@ GstQuery * decide_query, GstQuery * query); static gboolean gst_cuda_base_convert_decide_allocation (GstBaseTransform * trans, GstQuery * query); -static gboolean gst_cuda_base_convert_filter_meta (GstBaseTransform * trans, - GstQuery * query, GType api, const GstStructure * params); +static gboolean gst_cuda_base_convert_transform_meta (GstBaseTransform * trans, + GstBuffer * outbuf, GstMeta * meta, GstBuffer * inbuf); static GstFlowReturn gst_cuda_base_convert_transform (GstBaseTransform * trans, GstBuffer * inbuf, GstBuffer * outbuf); +static GstFlowReturn gst_cuda_base_convert_generate_output (GstBaseTransform * + trans, GstBuffer ** buffer); static gboolean gst_cuda_base_convert_set_info (GstCudaBaseTransform * btrans, GstCaps * incaps, GstVideoInfo * in_info, GstCaps * outcaps, GstVideoInfo * out_info); @@ -122,7 +127,7 @@ gst_element_class_add_static_pad_template (element_class, &sink_template); gst_element_class_add_static_pad_template (element_class, &src_template); - trans_class->passthrough_on_same_caps = TRUE; + trans_class->passthrough_on_same_caps = FALSE; trans_class->transform_caps = GST_DEBUG_FUNCPTR (gst_cuda_base_convert_transform_caps); @@ -132,9 +137,11 @@ GST_DEBUG_FUNCPTR (gst_cuda_base_convert_propose_allocation); trans_class->decide_allocation = GST_DEBUG_FUNCPTR (gst_cuda_base_convert_decide_allocation); - trans_class->filter_meta = - GST_DEBUG_FUNCPTR (gst_cuda_base_convert_filter_meta); + trans_class->transform_meta = + GST_DEBUG_FUNCPTR (gst_cuda_base_convert_transform_meta); trans_class->transform = GST_DEBUG_FUNCPTR (gst_cuda_base_convert_transform); + trans_class->generate_output = + GST_DEBUG_FUNCPTR (gst_cuda_base_convert_generate_output); btrans_class->set_info = GST_DEBUG_FUNCPTR 
(gst_cuda_base_convert_set_info); @@ -1173,9 +1180,11 @@ decide_query, query)) return FALSE; - /* passthrough, we're done */ - if (decide_query == NULL) + if (self->same_caps && gst_pad_peer_query (trans->srcpad, query)) { + gst_query_add_allocation_meta (query, GST_VIDEO_META_API_TYPE, NULL); + gst_query_add_allocation_meta (query, GST_VIDEO_CROP_META_API_TYPE, NULL); return TRUE; + } gst_query_parse_allocation (query, &caps, NULL); @@ -1223,6 +1232,7 @@ } gst_query_add_allocation_meta (query, GST_VIDEO_META_API_TYPE, NULL); + gst_query_add_allocation_meta (query, GST_VIDEO_CROP_META_API_TYPE, NULL); return TRUE; } @@ -1244,6 +1254,11 @@ if (!outcaps) return FALSE; + self->downstream_supports_crop_meta = gst_query_find_allocation_meta (query, + GST_VIDEO_CROP_META_API_TYPE, NULL); + GST_DEBUG_OBJECT (self, "Downstream crop meta support: %d", + self->downstream_supports_crop_meta); + if (gst_query_get_n_allocation_pools (query) > 0) { gst_query_parse_nth_allocation_pool (query, 0, &pool, &size, &min, &max); if (pool) { @@ -1348,6 +1363,12 @@ if (active_method != GST_VIDEO_ORIENTATION_IDENTITY) need_flip = TRUE; + if (!need_flip && gst_caps_is_equal (incaps, outcaps)) { + self->same_caps = TRUE; + } else { + self->same_caps = FALSE; + } + switch (active_method) { case GST_VIDEO_ORIENTATION_90R: case GST_VIDEO_ORIENTATION_90L: @@ -1413,24 +1434,22 @@ && in_info->finfo == out_info->finfo && self->borders_w == 0 && self->borders_h == 0 && !need_flip && !needs_color_convert (in_info, out_info)) { - gst_base_transform_set_passthrough (GST_BASE_TRANSFORM (self), TRUE); - } else { - gst_base_transform_set_passthrough (GST_BASE_TRANSFORM (self), FALSE); - - self->converter = gst_cuda_converter_new (in_info, - out_info, btrans->context, NULL); - if (!self->converter) { - GST_ERROR_OBJECT (self, "Couldn't create converter"); - return FALSE; - } + self->same_caps = TRUE; + } - g_object_set (self->converter, "dest-x", self->borders_w / 2, - "dest-y", self->borders_h / 2, - 
"dest-width", out_info->width - self->borders_w, - "dest-height", out_info->height - self->borders_h, - "fill-border", TRUE, "video-direction", active_method, NULL); + self->converter = gst_cuda_converter_new (in_info, + out_info, btrans->context, NULL); + if (!self->converter) { + GST_ERROR_OBJECT (self, "Couldn't create converter"); + return FALSE; } + g_object_set (self->converter, "dest-x", self->borders_w / 2, + "dest-y", self->borders_h / 2, + "dest-width", out_info->width - self->borders_w, + "dest-height", out_info->height - self->borders_h, + "fill-border", TRUE, "video-direction", active_method, NULL); + GST_DEBUG_OBJECT (self, "%s from=%dx%d (par=%d/%d dar=%d/%d), size %" G_GSIZE_FORMAT " -> %s to=%dx%d (par=%d/%d dar=%d/%d borders=%d:%d), " "size %" G_GSIZE_FORMAT, @@ -1442,21 +1461,23 @@ out_info->height, out_info->par_n, out_info->par_d, to_dar_n, to_dar_d, self->borders_w, self->borders_h, out_info->size); + self->in_rect.x = 0; + self->in_rect.y = 0; + self->in_rect.w = in_info->width; + self->in_rect.h = in_info->height; + return TRUE; } static gboolean -gst_cuda_base_convert_filter_meta (GstBaseTransform * trans, GstQuery * query, - GType api, const GstStructure * params) +gst_cuda_base_convert_transform_meta (GstBaseTransform * trans, + GstBuffer * outbuf, GstMeta * meta, GstBuffer * inbuf) { - /* This element cannot passthrough the crop meta, because it would convert the - * wrong sub-region of the image, and worst, our output image may not be large - * enough for the crop to be applied later */ - if (api == GST_VIDEO_CROP_META_API_TYPE) + if (meta->info->api == GST_VIDEO_CROP_META_API_TYPE) return FALSE; - /* propose all other metadata upstream */ - return TRUE; + return GST_BASE_TRANSFORM_CLASS (parent_class)->transform_meta (trans, + outbuf, meta, inbuf); } static GstFlowReturn @@ -1472,6 +1493,20 @@ GstCudaStream *in_stream, *out_stream; GstCudaStream *selected_stream = NULL; gboolean sync_done = FALSE; + GstVideoRectangle in_rect; + + 
GstVideoCropMeta *crop_meta = gst_buffer_get_video_crop_meta (inbuf); + if (crop_meta) { + in_rect.x = crop_meta->x; + in_rect.y = crop_meta->y; + in_rect.w = crop_meta->width; + in_rect.h = crop_meta->height; + } else { + in_rect = self->in_rect; + } + + g_object_set (self->converter, "src-x", in_rect.x, "src-y", in_rect.y, + "src-width", in_rect.w, "src-height", in_rect.h, NULL); if (gst_buffer_n_memory (inbuf) != 1) { GST_ERROR_OBJECT (self, "Invalid input buffer"); @@ -1556,6 +1591,35 @@ return ret; } +static GstFlowReturn +gst_cuda_base_convert_generate_output (GstBaseTransform * trans, + GstBuffer ** buffer) +{ + GstCudaBaseConvert *self = GST_CUDA_BASE_CONVERT (trans); + gboolean passthrough = self->same_caps; + + if (!trans->queued_buf) + return GST_FLOW_OK; + + if (passthrough && !self->downstream_supports_crop_meta) { + if (gst_buffer_get_video_crop_meta (trans->queued_buf)) { + GST_LOG_OBJECT (self, + "Buffer has crop meta but downstream does not support crop"); + passthrough = FALSE; + } + } + + if (!passthrough) { + return GST_BASE_TRANSFORM_CLASS (parent_class)->generate_output (trans, + buffer); + } + + *buffer = trans->queued_buf; + trans->queued_buf = NULL; + + return GST_FLOW_OK; +} + static void gst_cuda_base_convert_set_add_border (GstCudaBaseConvert * self, gboolean add_border)
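The `generate_output` override added in this hunk replaces the old `passthrough_on_same_caps` behaviour with an explicit per-buffer decision: with identical caps the element may forward the input buffer untouched, unless the buffer carries a `GstVideoCropMeta` that downstream cannot honour, in which case the converter must apply the crop itself. A sketch of just that rule (the function name is illustrative):

```c
#include <assert.h>
#include <stdbool.h>

/* Per-buffer passthrough decision modelled after the
 * gst_cuda_base_convert_generate_output hunk above. */
static bool
can_passthrough (bool same_caps, bool buffer_has_crop_meta,
    bool downstream_supports_crop_meta)
{
  if (!same_caps)
    return false;                 /* conversion always needed */
  if (buffer_has_crop_meta && !downstream_supports_crop_meta)
    return false;                 /* apply the crop here instead */
  return true;
}
```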
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/nvcodec/gstcudaipc.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/nvcodec/gstcudaipc.cpp
Changed
@@ -546,27 +546,6 @@ return dump; } -bool -gst_cuda_ipc_clock_is_system (GstClock * clock) -{ - GstClockType clock_type = GST_CLOCK_TYPE_MONOTONIC; - GstClock *mclock; - - if (G_OBJECT_TYPE (clock) != GST_TYPE_SYSTEM_CLOCK) - return false; - - g_object_get (clock, "clock-type", &clock_type, nullptr); - if (clock_type != GST_CLOCK_TYPE_MONOTONIC) - return false; - - mclock = gst_clock_get_master (clock); - if (!mclock) - return true; - - gst_object_unref (mclock); - return false; -} - #ifdef G_OS_WIN32 /* *INDENT-OFF* */ static inline void rtrim(std::string &s) {
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/nvcodec/gstcudaipc.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/nvcodec/gstcudaipc.h
Changed
@@ -180,8 +180,6 @@ std::string gst_cuda_ipc_mem_handle_to_string (const CUipcMemHandle & handle); -bool gst_cuda_ipc_clock_is_system (GstClock * clock); - std::string gst_cuda_ipc_win32_error_to_string (guint err); bool gst_cuda_ipc_handle_is_equal (const CUipcMemHandle & handle,
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/nvcodec/gstcudaipcsink.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/nvcodec/gstcudaipcsink.cpp
Changed
@@ -725,7 +725,7 @@ if (GST_CLOCK_TIME_IS_VALID (buffer_clock)) { GstClock *clock = gst_element_get_clock (GST_ELEMENT_CAST (sink)); - if (!gst_cuda_ipc_clock_is_system (clock)) { + if (!gst_clock_is_system_monotonic (clock)) { GstClockTime now_gst = gst_clock_get_time (clock); GstClockTimeDiff converted = buffer_clock;
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/nvcodec/gstcudaipcsrc.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/nvcodec/gstcudaipcsrc.cpp
Changed
@@ -529,7 +529,7 @@ clock = gst_element_get_clock (GST_ELEMENT_CAST (self)); now_gst = gst_clock_get_time (clock); base_time = GST_ELEMENT_CAST (self)->base_time; - is_system_clock = gst_cuda_ipc_clock_is_system (clock); + is_system_clock = gst_clock_is_system_monotonic (clock); gst_object_unref (clock); buffer = gst_sample_get_buffer (sample);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/nvcodec/gstcudamemorycopy.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/nvcodec/gstcudamemorycopy.c
Changed
@@ -567,6 +567,11 @@ if (!pool) { GST_DEBUG_OBJECT (self, "creating system buffer pool"); pool = gst_video_buffer_pool_new (); + { + gchar *name = g_strdup_printf ("cuda-memory-copy-upstream-pool"); + g_object_set (pool, "name", name, NULL); + g_free (name); + } } config = gst_buffer_pool_get_config (pool);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/nvcodec/gstnvav1dec.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/nvcodec/gstnvav1dec.cpp
Changed
@@ -673,7 +673,7 @@ const GstAV1QuantizationParams *qp = &frame_hdr->quantization_params; const GstAV1TileInfo *ti = &frame_hdr->tile_info; const GstAV1CDEFParams *cp = &frame_hdr->cdef_params; - const GstAV1SegmenationParams *sp = &frame_hdr->segmentation_params; + const GstAV1SegmentationParams *sp = &frame_hdr->segmentation_params; const GstAV1LoopFilterParams *lp = &frame_hdr->loop_filter_params; const GstAV1LoopRestorationParams *lrp = &frame_hdr->loop_restoration_params; const GstAV1FilmGrainParams *fgp = &frame_hdr->film_grain_params;
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/nvcodec/gstnvencoder.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/nvcodec/gstnvencoder.cpp
Changed
@@ -64,8 +64,17 @@ PROP_0, PROP_CC_INSERT, PROP_EXTERN_POOL, + PROP_EMIT_FRAME_STATS }; +enum +{ + SIGNAL_FRAME_STATS, + LAST_SIGNAL +}; + +static guint gst_nv_encoder_signals[LAST_SIGNAL] = { 0 }; + #define DEFAULT_CC_INSERT GST_NV_ENCODER_SEI_INSERT struct _GstNvEncoderPrivate @@ -74,11 +83,15 @@ { memset (&init_params, 0, sizeof (NV_ENC_INITIALIZE_PARAMS)); memset (&config, 0, sizeof (NV_ENC_CONFIG)); + emit_frame_stats = FALSE; + frame_stats = gst_structure_new ("application/x-nvenc-stats", + "frame-idx", G_TYPE_UINT, 0, "frame-avg-qp", G_TYPE_UINT, 0, NULL); } ~_GstNvEncoderPrivate () { gst_clear_object (&extern_pool); + gst_structure_free (frame_stats); } GstCudaContext *context = nullptr; @@ -132,6 +145,8 @@ /* properties */ GstNvEncoderSeiInsertMode cc_insert = DEFAULT_CC_INSERT; GstBufferPool *extern_pool = nullptr; + gboolean emit_frame_stats; + GstStructure *frame_stats; }; /** @@ -211,6 +226,31 @@ (GParamFlags) (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS | GST_PARAM_MUTABLE_READY))); + /** + * GstNvEncoder:emit-frame-stats-signal: + * + * Whether to emit the 'frame-stats' signal for each encoded frame. + * + * Since: 1.28 + */ + g_object_class_install_property (object_class, PROP_EMIT_FRAME_STATS, + g_param_spec_boolean ("emit-frame-stats", "Emit Frame stats Signal", + "Emit the 'frame-stats' signal for each encoded frame", + FALSE, (GParamFlags) (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + /** + * GstNvEncoder::frame-stats: + * + * Emitted for each encoded frame if the 'emit-frame-stats' property is TRUE. + * The signal provides a #GstStructure containing per-frame statistics such as + * "frame-idx" (frame index) and "frame-avg-qp" (average quantization parameter). 
+ * + * Since: 1.28 + */ + gst_nv_encoder_signals[SIGNAL_FRAME_STATS] = + g_signal_new ("frame-stats", G_TYPE_FROM_CLASS (klass), + G_SIGNAL_RUN_LAST, 0, nullptr, nullptr, nullptr, G_TYPE_NONE, 1, + GST_TYPE_STRUCTURE | G_SIGNAL_TYPE_STATIC_SCOPE); element_class->set_context = GST_DEBUG_FUNCPTR (gst_nv_encoder_set_context); @@ -300,6 +340,9 @@ } } break; + case PROP_EMIT_FRAME_STATS: + priv->emit_frame_stats = g_value_get_boolean (value); + break; default: G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); break; @@ -320,6 +363,9 @@ case PROP_EXTERN_POOL: g_value_set_object (value, priv->extern_pool); break; + case PROP_EMIT_FRAME_STATS: + g_value_set_boolean (value, priv->emit_frame_stats); + break; default: G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); break; @@ -859,43 +905,6 @@ return TRUE; } -static NV_ENC_PIC_STRUCT -gst_nv_encoder_get_pic_struct (GstNvEncoder * self, GstBuffer * buffer) -{ - GstNvEncoderPrivate *priv = self->priv; - GstVideoInfo *info = &priv->input_state->info; - - if (!GST_VIDEO_INFO_IS_INTERLACED (info)) - return NV_ENC_PIC_STRUCT_FRAME; - - if (GST_VIDEO_INFO_INTERLACE_MODE (info) == GST_VIDEO_INTERLACE_MODE_MIXED) { - if (!GST_BUFFER_FLAG_IS_SET (buffer, GST_VIDEO_BUFFER_FLAG_INTERLACED)) { - return NV_ENC_PIC_STRUCT_FRAME; - } - - if (GST_BUFFER_FLAG_IS_SET (buffer, GST_VIDEO_BUFFER_FLAG_TFF)) - return NV_ENC_PIC_STRUCT_FIELD_TOP_BOTTOM; - - return NV_ENC_PIC_STRUCT_FIELD_BOTTOM_TOP; - } - - switch (GST_VIDEO_INFO_FIELD_ORDER (info)) { - case GST_VIDEO_FIELD_ORDER_TOP_FIELD_FIRST: - return NV_ENC_PIC_STRUCT_FIELD_TOP_BOTTOM; - break; - case GST_VIDEO_FIELD_ORDER_BOTTOM_FIELD_FIRST: - return NV_ENC_PIC_STRUCT_FIELD_BOTTOM_TOP; - break; - default: - break; - } - - if (GST_BUFFER_FLAG_IS_SET (buffer, GST_VIDEO_BUFFER_FLAG_TFF)) - return NV_ENC_PIC_STRUCT_FIELD_TOP_BOTTOM; - - return NV_ENC_PIC_STRUCT_FIELD_BOTTOM_TOP; -} - static GstVideoCodecFrame * gst_nv_encoder_find_output_frame (GstVideoEncoder * self, 
GstNvEncTask * task) { @@ -998,6 +1007,14 @@ gst_nv_enc_task_unlock_bitstream (task); gst_nv_enc_task_unref (task); + if (priv->emit_frame_stats) { + gst_structure_set (priv->frame_stats, + "frame-idx", G_TYPE_UINT, bitstream.frameIdx, + "frame-avg-qp", G_TYPE_UINT, bitstream.frameAvgQP, NULL); + g_signal_emit (self, gst_nv_encoder_signals[SIGNAL_FRAME_STATS], 0, + priv->frame_stats); + } + priv->last_flow = gst_video_encoder_finish_frame (encoder, frame); if (priv->last_flow != GST_FLOW_OK) { GST_INFO_OBJECT (self, @@ -2242,8 +2259,12 @@ gst_nv_enc_task_get_sei_payload (task)); } - status = priv->object->Encode (frame, - gst_nv_encoder_get_pic_struct (self, in_buf), task); + auto pic_struct = NV_ENC_PIC_STRUCT_FRAME; + if (klass->get_pic_struct) { + pic_struct = klass->get_pic_struct (self, &priv->input_state->info, in_buf); + } + + status = priv->object->Encode (frame, pic_struct, task); if (status != NV_ENC_SUCCESS) { GST_ERROR_OBJECT (self, "Failed to encode frame"); gst_video_encoder_release_frame (encoder, frame);
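The new stats path above packs the frame index and average QP from each locked bitstream into a structure and emits it to subscribers when `emit-frame-stats` is enabled. In the real element this is a GObject signal carrying a `GstStructure`; the toy model below uses a plain callback in its place (all names are stand-ins):

```c
#include <assert.h>

typedef struct
{
  unsigned frame_idx;      /* models the "frame-idx" field */
  unsigned frame_avg_qp;   /* models the "frame-avg-qp" field */
} FrameStats;

typedef void (*StatsCb) (const FrameStats * stats, void *user_data);

/* Emit per-frame stats only when the property is enabled, mirroring
 * the guard around g_signal_emit() in the hunk above. */
static void
finish_frame (unsigned idx, unsigned avg_qp, int emit_frame_stats,
    StatsCb cb, void *user_data)
{
  if (emit_frame_stats && cb) {
    FrameStats s = { idx, avg_qp };
    cb (&s, user_data);
  }
}

/* Example subscriber: accumulate QP to compute an average later. */
static void
accumulate_qp (const FrameStats * stats, void *user_data)
{
  *(unsigned *) user_data += stats->frame_avg_qp;
}
```

An application would instead call `g_signal_connect (encoder, "frame-stats", ...)` and read the fields with `gst_structure_get_uint()`.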
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/nvcodec/gstnvencoder.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/nvcodec/gstnvencoder.h
Changed
@@ -275,6 +275,10 @@ GstNvEncoderDeviceData * data); guint (*calculate_min_buffers) (GstNvEncoder * encoder); + + NV_ENC_PIC_STRUCT (*get_pic_struct) (GstNvEncoder * encoder, + const GstVideoInfo * info, + GstBuffer * buffer); }; GType gst_nv_encoder_get_type (void);
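The `get_pic_struct` vfunc added to the class struct above lets each codec subclass decide the NVENC picture structure; the H.264 implementation only emits field structures when the device reports field-encoding support and the stream is interlaced. A simplified decision table (enum names are stand-ins for the `NV_ENC_PIC_STRUCT_*` values; the real vfunc additionally handles mixed interlace mode via per-buffer flags):

```c
#include <assert.h>
#include <stdbool.h>

typedef enum
{
  PIC_STRUCT_FRAME,
  PIC_STRUCT_FIELD_TOP_BOTTOM,
  PIC_STRUCT_FIELD_BOTTOM_TOP
} PicStruct;

/* Mirrors gst_nv_h264_encoder_get_pic_struct() for the non-mixed case:
 * progressive unless the device can do field encoding AND the stream
 * is interlaced, then the field order picks TFF vs BFF. */
static PicStruct
pick_pic_struct (bool device_field_encoding, bool stream_interlaced,
    bool top_field_first)
{
  if (!device_field_encoding || !stream_interlaced)
    return PIC_STRUCT_FRAME;
  return top_field_first ? PIC_STRUCT_FIELD_TOP_BOTTOM
      : PIC_STRUCT_FIELD_BOTTOM_TOP;
}
```

Defaulting to `NV_ENC_PIC_STRUCT_FRAME` in the base class when the vfunc is not implemented (as the base-class hunk does) keeps other subclasses progressive-only without extra code.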
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/nvcodec/gstnvh264encoder.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/nvcodec/gstnvh264encoder.cpp
Changed
@@ -50,8 +50,7 @@ #define DOC_SINK_CAPS_COMM \ "format = (string) { NV12, Y444, VUYA, RGBA, RGBx, BGRA, BGRx }, " \ - "width = (int) 160, 4096 , height = (int) 64, 4096 , " \ - "interlace-mode = (string) progressive" + "width = (int) 160, 4096 , height = (int) 64, 4096 " #define DOC_SINK_CAPS \ "video/x-raw(memory:CUDAMemory), " DOC_SINK_CAPS_COMM "; " \ @@ -130,6 +129,7 @@ PROP_AUD, PROP_CABAC, PROP_REPEAT_SEQUENCE_HEADER, + PROP_NUM_SLICES, }; #define DEFAULT_PRESET GST_NV_ENCODER_PRESET_DEFAULT @@ -155,6 +155,7 @@ #define DEFAULT_CONST_QUALITY 0 #define DEFAULT_AUD TRUE #define DEFAULT_REPEAT_SEQUENCE_HEADER FALSE +#define DEFAULT_NUM_SLICES 0 typedef struct _GstNvH264Encoder { @@ -214,6 +215,7 @@ gboolean aud; gboolean cabac; gboolean repeat_sequence_header; + guint num_slices; } GstNvH264Encoder; typedef struct _GstNvH264EncoderClass @@ -262,6 +264,9 @@ const GstVideoInfo * info, GstBuffer * buffer, GstNvEncoderDeviceData * data); static guint gst_nv_h264_encoder_calculate_min_buffers (GstNvEncoder * encoder); +static NV_ENC_PIC_STRUCT +gst_nv_h264_encoder_get_pic_struct (GstNvEncoder * encoder, + const GstVideoInfo * info, GstBuffer * buffer); static void gst_nv_h264_encoder_class_init (GstNvH264EncoderClass * klass, gpointer data) @@ -601,6 +606,12 @@ g_param_spec_boolean ("repeat-sequence-header", "Repeat Sequence Header", "Insert sequence headers (SPS/PPS) per IDR", DEFAULT_REPEAT_SEQUENCE_HEADER, param_flags)); + if (dev_caps->dynamic_slice_mode) { + g_object_class_install_property (object_class, PROP_NUM_SLICES, + g_param_spec_uint ("num-slices", "Number of Slices", + "Number of slices per frame (0 = default, 1-32 = specific count)", + 0, 32, DEFAULT_NUM_SLICES, conditional_param_flags)); + } GstPadTemplate *pad_templ = gst_pad_template_new ("sink", GST_PAD_SINK, GST_PAD_ALWAYS, cdata->sink_caps); @@ -660,6 +671,8 @@ GST_DEBUG_FUNCPTR (gst_nv_h264_encoder_select_device); nvenc_class->calculate_min_buffers = GST_DEBUG_FUNCPTR 
(gst_nv_h264_encoder_calculate_min_buffers); + nvenc_class->get_pic_struct = + GST_DEBUG_FUNCPTR (gst_nv_h264_encoder_get_pic_struct); klass->device_caps = cdata->device_caps; klass->cuda_device_id = cdata->cuda_device_id; @@ -721,6 +734,7 @@ if (klass->device_caps.cabac) self->cabac = TRUE; self->repeat_sequence_header = DEFAULT_REPEAT_SEQUENCE_HEADER; + self->num_slices = DEFAULT_NUM_SLICES; self->parser = gst_h264_nal_parser_new (); self->sei_array = g_array_new (FALSE, FALSE, sizeof (GstH264SEIMessage)); @@ -1010,6 +1024,9 @@ update_boolean (self, &self->repeat_sequence_header, value, UPDATE_INIT_PARAM); break; + case PROP_NUM_SLICES: + update_uint (self, &self->num_slices, value, UPDATE_INIT_PARAM); + break; default: G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); break; @@ -1136,6 +1153,9 @@ case PROP_REPEAT_SEQUENCE_HEADER: g_value_set_boolean (value, self->repeat_sequence_header); break; + case PROP_NUM_SLICES: + g_value_set_uint (value, self->num_slices); + break; default: G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); break; @@ -1203,14 +1223,12 @@ gst_nv_h264_encoder_getcaps (GstVideoEncoder * encoder, GstCaps * filter) { GstNvH264Encoder *self = GST_NV_H264_ENCODER (encoder); - GstNvH264EncoderClass *klass = GST_NV_H264_ENCODER_GET_CLASS (self); GstCaps *allowed_caps; GstCaps *template_caps; GstCaps *filtered_caps; GstCaps *supported_caps; std::set < std::string > downstream_profiles; std::set < std::string > allowed_formats; - gboolean profile_support_interlaced = FALSE; gst_nv_h264_encoder_get_downstream_profiles_and_format (self, downstream_profiles, nullptr); @@ -1223,11 +1241,7 @@ /* *INDENT-OFF* */ for (const auto &iter: downstream_profiles) { - if (iter == "high" || iter == "main") - profile_support_interlaced = TRUE; - if (iter == "high-4:4:4") { - profile_support_interlaced = TRUE; allowed_formats.insert("Y444"); } else { allowed_formats.insert("NV12"); @@ -1240,17 +1254,9 @@ } /* *INDENT-ON* */ - GST_DEBUG_OBJECT 
(self, "Downstream %s support interlaced format", - profile_support_interlaced ? "can" : "cannot"); - template_caps = gst_pad_get_pad_template_caps (encoder->sinkpad); allowed_caps = gst_caps_copy (template_caps); - if (klass->device_caps.field_encoding == 0 || !profile_support_interlaced) { - gst_caps_set_simple (allowed_caps, "interlace-mode", G_TYPE_STRING, - "progressive", nullptr); - } - GValue formats = G_VALUE_INIT; g_value_init (&formats, GST_TYPE_LIST); @@ -1330,19 +1336,6 @@ return FALSE; } - if (GST_VIDEO_INFO_IS_INTERLACED (info)) { - downstream_profiles.erase ("progressive-high"); - downstream_profiles.erase ("constrained-high"); - downstream_profiles.erase ("constrained-baseline"); - downstream_profiles.erase ("baseline"); - - if (downstream_profiles.empty ()) { - GST_ERROR_OBJECT (self, - "None of downstream profile supports interlaced encoding"); - return FALSE; - } - } - if (GST_VIDEO_INFO_FORMAT (info) == GST_VIDEO_FORMAT_Y444) { if (downstream_profiles.find ("high-4:4:4") == downstream_profiles.end ()) { GST_ERROR_OBJECT (self, "Downstream does not support 4:4:4 profile"); @@ -1656,6 +1649,11 @@ h264_config->entropyCodingMode = NV_ENC_H264_ENTROPY_CODING_MODE_AUTOSELECT; } + if (dev_caps->dynamic_slice_mode && self->num_slices > 0) { + h264_config->sliceMode = 3; + h264_config->sliceModeData = self->num_slices; + } + GstVideoColorimetry cinfo; switch (GST_VIDEO_INFO_FORMAT (info)) { case GST_VIDEO_FORMAT_NV12: @@ -2156,6 +2154,53 @@ return num_buffers; } +static NV_ENC_PIC_STRUCT +gst_nv_h264_encoder_get_pic_struct (GstNvEncoder * encoder, + const GstVideoInfo * info, GstBuffer * buffer) +{ + auto klass = GST_NV_H264_ENCODER_GET_CLASS (encoder); + + /* Only use interlaced picture structures if field encoding is supported + * and the input is actually interlaced */ + if (klass->device_caps.field_encoding == 0 + || !GST_VIDEO_INFO_IS_INTERLACED (info)) { + GST_TRACE_OBJECT (encoder, + "Using progressive frame structure (field_encoding=%d, 
interlaced=%d)", + klass->device_caps.field_encoding, GST_VIDEO_INFO_IS_INTERLACED (info)); + return NV_ENC_PIC_STRUCT_FRAME; + } + + if (GST_VIDEO_INFO_INTERLACE_MODE (info) == GST_VIDEO_INTERLACE_MODE_MIXED) { + if (!GST_BUFFER_FLAG_IS_SET (buffer, GST_VIDEO_BUFFER_FLAG_INTERLACED)) { + return NV_ENC_PIC_STRUCT_FRAME; + } + + if (GST_BUFFER_FLAG_IS_SET (buffer, GST_VIDEO_BUFFER_FLAG_TFF)) { + GST_TRACE_OBJECT (encoder, "Using interlaced TFF structure"); + return NV_ENC_PIC_STRUCT_FIELD_TOP_BOTTOM; + } + + GST_TRACE_OBJECT (encoder, "Using interlaced BFF structure"); + return NV_ENC_PIC_STRUCT_FIELD_BOTTOM_TOP; + } + + switch (GST_VIDEO_INFO_FIELD_ORDER (info)) { + case GST_VIDEO_FIELD_ORDER_TOP_FIELD_FIRST: + return NV_ENC_PIC_STRUCT_FIELD_TOP_BOTTOM; + break; + case GST_VIDEO_FIELD_ORDER_BOTTOM_FIELD_FIRST: + return NV_ENC_PIC_STRUCT_FIELD_BOTTOM_TOP; + break; + default: + break; + } + + if (GST_BUFFER_FLAG_IS_SET (buffer, GST_VIDEO_BUFFER_FLAG_TFF)) + return NV_ENC_PIC_STRUCT_FIELD_TOP_BOTTOM; + + return NV_ENC_PIC_STRUCT_FIELD_BOTTOM_TOP; +} + static GstNvEncoderClassData * gst_nv_h264_encoder_create_class_data (GstObject * device, gpointer session, GstNvEncoderDeviceMode device_mode) @@ -2306,13 +2351,6 @@ sink_caps_str = "video/x-raw, " + format_str + ", " + resolution_str; - if (dev_caps.field_encoding > 0) { - sink_caps_str += - ", interlace-mode = (string) { progressive, interleaved, mixed }"; - } else { - sink_caps_str += ", interlace-mode = (string) progressive"; - } - src_caps_str = "video/x-h264, " + resolution_str + ", " + profile_str + ", stream-format = (string) { byte-stream, avc }, alignment = (string) au"; @@ -2656,13 +2694,6 @@ sink_caps_str = "video/x-raw, " + format_str + ", " + resolution_str; - if (dev_caps.field_encoding > 0) { - sink_caps_str += - ", interlace-mode = (string) { progressive, interleaved, mixed }"; - } else { - sink_caps_str += ", interlace-mode = (string) progressive"; - } - src_caps_str = "video/x-h264, " + resolution_str 
+ ", " + profile_str + ", stream-format = (string) { byte-stream, avc }, alignment = (string) au";
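The hunks above drop the interlace handling from caps negotiation and replace it with a per-buffer `get_pic_struct` vfunc. A minimal standalone sketch of that decision logic, with simplified enums and plain booleans standing in for the GStreamer buffer flags and `GstVideoInfo` accessors (all names here are hypothetical, not the upstream API):

```c
#include <assert.h>
#include <stdbool.h>

typedef enum { PIC_FRAME, PIC_FIELD_TOP_BOTTOM, PIC_FIELD_BOTTOM_TOP } PicStruct;
typedef enum { MODE_PROGRESSIVE, MODE_INTERLEAVED, MODE_MIXED } InterlaceMode;
typedef enum { ORDER_UNKNOWN, ORDER_TFF, ORDER_BFF } FieldOrder;

/* Mirrors gst_nv_h264_encoder_get_pic_struct(): progressive unless the
 * device supports field encoding AND the stream is interlaced; in mixed
 * mode the per-buffer flags decide. */
static PicStruct
get_pic_struct (int field_encoding, InterlaceMode mode, FieldOrder order,
    bool buf_interlaced, bool buf_tff)
{
  if (field_encoding == 0 || mode == MODE_PROGRESSIVE)
    return PIC_FRAME;

  if (mode == MODE_MIXED) {
    /* only buffers actually flagged interlaced get field structures */
    if (!buf_interlaced)
      return PIC_FRAME;
    return buf_tff ? PIC_FIELD_TOP_BOTTOM : PIC_FIELD_BOTTOM_TOP;
  }

  switch (order) {
    case ORDER_TFF:
      return PIC_FIELD_TOP_BOTTOM;
    case ORDER_BFF:
      return PIC_FIELD_BOTTOM_TOP;
    default:
      /* no declared field order: fall back to the buffer's TFF flag */
      return buf_tff ? PIC_FIELD_TOP_BOTTOM : PIC_FIELD_BOTTOM_TOP;
  }
}
```

Deferring the choice to per-buffer time is what makes mixed-mode streams work, since there only individual buffers carry the interlaced/TFF flags.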
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/nvcodec/gstnvh265encoder.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/nvcodec/gstnvh265encoder.cpp
Changed
@@ -50,8 +50,7 @@ #define DOC_SINK_CAPS_COMM \ "format = (string) { NV12, P010_10LE, Y444, Y444_16LE, GBR, GBR_16LE, VUYA, RGBA, RGBx, BGRA, BGRx, RGB10A2_LE }, " \ - "width = (int) 144, 8192 , height = (int) 48, 8192 , " \ - "interlace-mode = (string) progressive" + "width = (int) 144, 8192 , height = (int) 48, 8192 " #define DOC_SINK_CAPS \ "video/x-raw(memory:CUDAMemory), " DOC_SINK_CAPS_COMM "; " \ @@ -129,6 +128,7 @@ /* h265 specific */ PROP_AUD, PROP_REPEAT_SEQUENCE_HEADER, + PROP_NUM_SLICES, }; #define DEFAULT_PRESET GST_NV_ENCODER_PRESET_DEFAULT @@ -154,6 +154,7 @@ #define DEFAULT_CONST_QUALITY 0 #define DEFAULT_AUD TRUE #define DEFAULT_REPEAT_SEQUENCE_HEADER FALSE +#define DEFAULT_NUM_SLICES 0 typedef enum { @@ -219,6 +220,7 @@ gboolean aud; gboolean repeat_sequence_header; + guint num_slices; } GstNvH265Encoder; typedef struct _GstNvH265EncoderClass @@ -602,6 +604,12 @@ "Insert sequence headers (SPS/PPS) per IDR, " "ignored if negotiated stream-format is \"hvc1\"", DEFAULT_REPEAT_SEQUENCE_HEADER, param_flags)); + if (dev_caps->dynamic_slice_mode) { + g_object_class_install_property (object_class, PROP_NUM_SLICES, + g_param_spec_uint ("num-slices", "Number of Slices", + "Number of slices per frame (0 = default, 1-32 = specific count)", + 0, 32, DEFAULT_NUM_SLICES, conditional_param_flags)); + } GstPadTemplate *pad_templ = gst_pad_template_new ("sink", GST_PAD_SINK, GST_PAD_ALWAYS, cdata->sink_caps); @@ -720,6 +728,7 @@ self->const_quality = DEFAULT_CONST_QUALITY; self->aud = DEFAULT_AUD; self->repeat_sequence_header = DEFAULT_REPEAT_SEQUENCE_HEADER; + self->num_slices = DEFAULT_NUM_SLICES; self->parser = gst_h265_parser_new (); self->sei_array = g_array_new (FALSE, FALSE, sizeof (GstH265SEIMessage)); @@ -1006,6 +1015,9 @@ update_boolean (self, &self->repeat_sequence_header, value, UPDATE_INIT_PARAM); break; + case PROP_NUM_SLICES: + update_uint (self, &self->num_slices, value, UPDATE_INIT_PARAM); + break; default: G_OBJECT_WARN_INVALID_PROPERTY_ID (object, 
prop_id, pspec); break; @@ -1129,6 +1141,9 @@ case PROP_REPEAT_SEQUENCE_HEADER: g_value_set_boolean (value, self->repeat_sequence_header); break; + case PROP_NUM_SLICES: + g_value_set_uint (value, self->num_slices); + break; default: G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); break; @@ -1608,6 +1623,11 @@ hevc_config->repeatSPSPPS = 0; } + if (dev_caps->dynamic_slice_mode && self->num_slices > 0) { + hevc_config->sliceMode = 3; + hevc_config->sliceModeData = self->num_slices; + } + GstVideoColorimetry cinfo; switch (GST_VIDEO_INFO_FORMAT (info)) { case GST_VIDEO_FORMAT_NV12: @@ -2367,8 +2387,7 @@ std::to_string (GST_ROUND_UP_16 (dev_caps.height_min)) + ", " + std::to_string (dev_caps.height_max) + " "; - sink_caps_str = "video/x-raw, " + format_str + ", " + resolution_str - + ", interlace-mode = (string) progressive"; + sink_caps_str = "video/x-raw, " + format_str + ", " + resolution_str; src_caps_str = "video/x-h265, " + resolution_str + ", " + profile_str + ", stream-format = (string) { byte-stream, hvc1, hev1 }" + @@ -2714,8 +2733,7 @@ std::to_string (GST_ROUND_UP_16 (dev_caps.height_min)) + ", " + std::to_string (dev_caps.height_max) + " "; - sink_caps_str = "video/x-raw, " + format_str + ", " + resolution_str - + ", interlace-mode = (string) progressive"; + sink_caps_str = "video/x-raw, " + format_str + ", " + resolution_str; src_caps_str = "video/x-h265, " + resolution_str + ", " + profile_str + ", stream-format = (string) { byte-stream, hvc1, hev1 }" +
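Both the H.264 and H.265 encoders wire the new `num-slices` property to NVENC's slice control the same way: per the hunks above, `sliceMode = 3` selects "number of slices per picture" and `sliceModeData` carries the count, while 0 keeps the driver default. A tiny sketch of that mapping (plain C, hypothetical names):

```c
#include <assert.h>

typedef struct {
  unsigned slice_mode;       /* NVENC sliceMode */
  unsigned slice_mode_data;  /* NVENC sliceModeData */
} SliceConfig;

/* Mirrors the num-slices handling: the property is only honored when the
 * device reports dynamic_slice_mode, and num_slices == 0 (the default)
 * leaves the driver's own slicing untouched. */
static SliceConfig
slice_config_for (int dynamic_slice_mode, unsigned num_slices)
{
  SliceConfig cfg = { 0, 0 };
  if (dynamic_slice_mode && num_slices > 0) {
    cfg.slice_mode = 3;            /* "N slices per picture" mode */
    cfg.slice_mode_data = num_slices;
  }
  return cfg;
}
```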
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/nvcodec/gstnvjpegenc.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/nvcodec/gstnvjpegenc.cpp
Changed
@@ -203,6 +203,7 @@ guint cuda_device_id; GstCaps *sink_caps; gboolean have_nvrtc; + gboolean autogpu; }; /* *INDENT-OFF* */ @@ -227,9 +228,11 @@ GstBufferPool *pool = nullptr; GstBuffer *fallback_buf = nullptr; - std::mutex lock; + std::recursive_mutex lock; guint quality = DEFAULT_JPEG_QUALITY; bool quality_updated = false; + gboolean use_stream_ordered = FALSE; + guint cuda_device_id = 0; }; /* *INDENT-ON* */ @@ -246,6 +249,7 @@ guint cuda_device_id; gboolean have_nvrtc; + gboolean autogpu; }; static void gst_nv_jpeg_enc_finalize (GObject * object); @@ -300,10 +304,18 @@ "Quality of encoding", 1, 100, DEFAULT_JPEG_QUALITY, (GParamFlags) (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); - gst_element_class_set_static_metadata (element_class, - "NVIDIA JPEG Encoder", "Codec/Encoder/Video/Hardware", - "Encode JPEG image using nvJPEG library", - "Seungha Yang <seungha@centricular.com>"); + if (cdata->autogpu) { + gst_element_class_set_static_metadata (element_class, + "NVIDIA JPEG Encoder Auto GPU Select Mode", + "Codec/Encoder/Video/Hardware", + "Encode JPEG image using nvJPEG library", + "Seungha Yang <seungha@centricular.com>"); + } else { + gst_element_class_set_static_metadata (element_class, + "NVIDIA JPEG Encoder", "Codec/Encoder/Video/Hardware", + "Encode JPEG image using nvJPEG library", + "Seungha Yang <seungha@centricular.com>"); + } auto sink_templ = gst_pad_template_new ("sink", GST_PAD_SINK, GST_PAD_ALWAYS, cdata->sink_caps); @@ -326,6 +338,7 @@ klass->cuda_device_id = cdata->cuda_device_id; klass->have_nvrtc = cdata->have_nvrtc; + klass->autogpu = cdata->autogpu; gst_caps_unref (cdata->sink_caps); g_free (cdata); } @@ -333,7 +346,10 @@ static void gst_nv_jpeg_enc_init (GstNvJpegEnc * self) { + auto klass = GST_NV_JPEG_ENC_GET_CLASS (self); + self->priv = new GstNvJpegEncPrivate (); + self->priv->cuda_device_id = klass->cuda_device_id; } static void @@ -353,7 +369,7 @@ auto self = GST_NV_JPEG_ENC (object); auto priv = self->priv; - std::lock_guard < 
std::mutex > lk (priv->lock); + std::lock_guard < std::recursive_mutex > lk (priv->lock); switch (prop_id) { case PROP_QUALITY: { @@ -376,12 +392,11 @@ { auto self = GST_NV_JPEG_ENC (object); auto priv = self->priv; - auto klass = GST_NV_JPEG_ENC_GET_CLASS (self); - std::lock_guard < std::mutex > lk (priv->lock); + std::lock_guard < std::recursive_mutex > lk (priv->lock); switch (prop_id) { case PROP_CUDA_DEVICE_ID: - g_value_set_uint (value, klass->cuda_device_id); + g_value_set_uint (value, priv->cuda_device_id); break; case PROP_QUALITY: g_value_set_uint (value, priv->quality); @@ -397,33 +412,35 @@ { auto self = GST_NV_JPEG_ENC (element); auto priv = self->priv; - auto klass = GST_NV_JPEG_ENC_GET_CLASS (self); - gst_cuda_handle_set_context (element, context, klass->cuda_device_id, - &priv->context); + { + std::lock_guard < std::recursive_mutex > lk (priv->lock); + gst_cuda_handle_set_context (element, context, priv->cuda_device_id, + &priv->context); + } GST_ELEMENT_CLASS (parent_class)->set_context (element, context); } static gboolean -gst_nv_jpeg_enc_open (GstVideoEncoder * encoder) +default_stream_ordered_alloc_enabled (void) { - auto self = GST_NV_JPEG_ENC (encoder); - auto priv = self->priv; - auto klass = GST_NV_JPEG_ENC_GET_CLASS (self); - - GST_DEBUG_OBJECT (self, "Open"); - - if (!gst_cuda_ensure_element_context (GST_ELEMENT_CAST (encoder), - klass->cuda_device_id, &priv->context)) { - GST_ERROR_OBJECT (self, "Couldn't create CUDA context"); - return FALSE; + static gboolean enabled = FALSE; + GST_CUDA_CALL_ONCE_BEGIN { + if (g_getenv ("GST_CUDA_ENABLE_STREAM_ORDERED_ALLOC")) + enabled = TRUE; } + GST_CUDA_CALL_ONCE_END; - if (!gst_cuda_context_push (priv->context)) { - GST_ERROR_OBJECT (self, "Couldn't push context"); - return FALSE; - } + return enabled; +} + +/* Caller should push context */ +static gboolean +gst_nv_jpeg_enc_load_module (GstNvJpegEnc * self) +{ + auto klass = GST_NV_JPEG_ENC_GET_CLASS (self); + auto priv = self->priv; if 
(!priv->module && klass->have_nvrtc) { const gchar *program = nullptr; @@ -445,13 +462,13 @@ if (!program) { std::lock_guard < std::mutex > lk (g_kernel_table_lock); std::string cubin_kernel_name = - "GstJpegEnc_device_" + std::to_string (klass->cuda_device_id); + "GstJpegEnc_device_" + std::to_string (priv->cuda_device_id); auto cubin = g_cubin_table.find (cubin_kernel_name); if (cubin == g_cubin_table.end ()) { GST_DEBUG_OBJECT (self, "Building CUBIN"); program = gst_cuda_nvrtc_compile_cubin (GstNvJpegEncConvertMain_str, - klass->cuda_device_id); + priv->cuda_device_id); if (program) g_cubin_tablecubin_kernel_name = program; } else { @@ -495,7 +512,6 @@ if (!priv->module) { GST_ERROR_OBJECT (self, "Couldn't load module"); - gst_cuda_context_pop (nullptr); return FALSE; } @@ -503,20 +519,34 @@ "GstNvJpegEncConvertMain"); if (!gst_cuda_result (ret)) { GST_ERROR_OBJECT (self, "Couldn't get kernel function"); - gst_cuda_context_pop (nullptr); return FALSE; } } - auto ret = g_vtable.NvjpegCreateSimple (&priv->handle); - gst_cuda_context_pop (nullptr); + return TRUE; +} - if (ret != NVJPEG_STATUS_SUCCESS) { - GST_ERROR_OBJECT (self, "Couldn't create encoder handle"); +static gboolean +gst_nv_jpeg_enc_open (GstVideoEncoder * encoder) +{ + auto self = GST_NV_JPEG_ENC (encoder); + auto priv = self->priv; + auto klass = GST_NV_JPEG_ENC_GET_CLASS (self); + + GST_DEBUG_OBJECT (self, "Open"); + + /* Will open GPU later */ + if (klass->autogpu) + return TRUE; + + if (!gst_cuda_ensure_element_context (GST_ELEMENT_CAST (encoder), + priv->cuda_device_id, &priv->context)) { + GST_ERROR_OBJECT (self, "Couldn't create CUDA context"); return FALSE; } - priv->stream = gst_cuda_stream_new (priv->context); + if (!priv->stream) + priv->stream = gst_cuda_stream_new (priv->context); return TRUE; } @@ -532,18 +562,32 @@ if (priv->params) g_vtable.NvjpegEncoderParamsDestroy (priv->params); + if (priv->handle) + g_vtable.NvjpegDestroy (priv->handle); + + gboolean need_sync = FALSE; + auto 
stream = gst_cuda_stream_get_handle (priv->stream); for (guint i = 0; i < G_N_ELEMENTS (priv->uv); i++) { if (priv->uvi) { - CuMemFree (priv->uvi); + if (priv->use_stream_ordered) { + CuMemFreeAsync (priv->uvi, stream); + need_sync = TRUE; + } else { + CuMemFree (priv->uvi); + } priv->uvi = 0; } } + if (need_sync) + CuStreamSynchronize (stream); + gst_cuda_context_pop (nullptr); } priv->state = nullptr; priv->params = nullptr; + priv->handle = nullptr; priv->launch_kernel = false; gst_clear_buffer (&priv->fallback_buf); @@ -558,8 +602,26 @@ gst_nv_jpeg_enc_stop (GstVideoEncoder * encoder) { auto self = GST_NV_JPEG_ENC (encoder); + auto priv = self->priv; + auto klass = GST_NV_JPEG_ENC_GET_CLASS (self); gst_nv_jpeg_enc_reset (self); + if (priv->context && priv->module) { + gst_cuda_context_push (priv->context); + if (priv->module) { + CuModuleUnload (priv->module); + priv->module = nullptr; + } + gst_cuda_context_pop (nullptr); + } + + priv->module = nullptr; + priv->handle = nullptr; + + if (klass->autogpu) { + gst_clear_cuda_stream (&priv->stream); + gst_clear_object (&priv->context); + } return TRUE; } @@ -572,19 +634,6 @@ GST_DEBUG_OBJECT (self, "Close"); - if (priv->context && gst_cuda_context_push (priv->context)) { - if (priv->handle) - g_vtable.NvjpegDestroy (priv->handle); - - if (priv->module) { - CuModuleUnload (priv->module); - priv->module = nullptr; - } - - gst_cuda_context_pop (nullptr); - } - - priv->handle = nullptr; gst_clear_cuda_stream (&priv->stream); gst_clear_object (&priv->context); @@ -598,8 +647,11 @@ switch (GST_QUERY_TYPE (query)) { case GST_QUERY_CONTEXT: + { + std::lock_guard < std::recursive_mutex > lk (priv->lock); return gst_cuda_handle_context_query (GST_ELEMENT (self), query, priv->context); + } default: break; } @@ -634,6 +686,7 @@ { auto self = GST_NV_JPEG_ENC (encoder); auto priv = self->priv; + auto klass = GST_NV_JPEG_ENC_GET_CLASS (self); GstVideoInfo info; GstBufferPool *pool = nullptr; GstCaps *caps; @@ -645,6 +698,13 @@ 
return FALSE; } + if (klass->autogpu) { + /* Use upstream pool in case of auto select mode. We don't know which + * GPU to use at this moment */ + gst_query_add_allocation_meta (query, GST_VIDEO_META_API_TYPE, nullptr); + return TRUE; + } + if (!gst_video_info_from_caps (&info, caps)) { GST_WARNING_OBJECT (self, "Failed to convert caps into info"); return FALSE; @@ -695,21 +755,115 @@ } static gboolean -gst_nv_jpeg_enc_set_format (GstVideoEncoder * encoder, - GstVideoCodecState * state) +gst_nv_jpeg_enc_prepare_kernel_resource (GstNvJpegEnc * self) { - auto self = GST_NV_JPEG_ENC (encoder); auto priv = self->priv; - priv->info = state->info; + if (!priv->launch_kernel) + return TRUE; - auto caps = gst_caps_new_empty_simple ("image/jpeg"); - auto output_state = gst_video_encoder_set_output_state (encoder, caps, - state); - gst_video_codec_state_unref (output_state); + if (!gst_nv_jpeg_enc_load_module (self)) + return FALSE; + + auto stream = gst_cuda_stream_get_handle (priv->stream); + auto width = (priv->info.width + 1) / 2; + auto height = (priv->info.height + 1) / 2; + size_t pitch; + CUresult ret = CUDA_SUCCESS; + + if (priv->use_stream_ordered) { + gint texture_align = gst_cuda_context_get_texture_alignment (priv->context); + pitch = ((width + texture_align - 1) / texture_align) * texture_align; + + ret = CuMemAllocAsync (&priv->uv0, pitch * height, stream); + if (!gst_cuda_result (ret)) { + GST_ERROR_OBJECT (self, "Couldn't allocate U plane memory"); + return FALSE; + } + + ret = CuMemAllocAsync (&priv->uv1, pitch * height, stream); + if (!gst_cuda_result (ret)) { + GST_ERROR_OBJECT (self, "Couldn't allocate V plane memory"); + return FALSE; + } + + if (!gst_cuda_result (CuStreamSynchronize (stream))) { + GST_ERROR_OBJECT (self, "Couldn't synchronize stream"); + return FALSE; + } + } else { + ret = CuMemAllocPitch (&priv->uv0, &pitch, width, height, 16); + if (!gst_cuda_result (ret)) { + GST_ERROR_OBJECT (self, "Couldn't allocate U plane memory"); + return 
FALSE; + } + + ret = CuMemAllocPitch (&priv->uv1, &pitch, width, height, 16); + if (!gst_cuda_result (ret)) { + GST_ERROR_OBJECT (self, "Couldn't allocate V plane memory"); + return FALSE; + } + } + + priv->pitch = pitch; + + return TRUE; +} + +static gboolean +gst_nv_jpeg_enc_init_session (GstNvJpegEnc * self, GstBuffer * in_buf) +{ + auto klass = GST_NV_JPEG_ENC_GET_CLASS (self); + auto priv = self->priv; gst_nv_jpeg_enc_reset (self); + if (klass->autogpu) { + if (!in_buf) { + GST_DEBUG_OBJECT (self, "Open session later for auto gpu mode"); + return TRUE; + } + + std::lock_guard < std::recursive_mutex > lk (priv->lock); + if (priv->module) { + gst_cuda_context_push (priv->context); + CuModuleUnload (priv->module); + priv->module = nullptr; + gst_cuda_context_pop (nullptr); + } + + gst_clear_cuda_stream (&priv->stream); + gst_clear_object (&priv->context); + + auto mem = gst_buffer_peek_memory (in_buf, 0); + if (gst_is_cuda_memory (mem)) { + auto cmem = GST_CUDA_MEMORY_CAST (mem); + priv->context = (GstCudaContext *) gst_object_ref (cmem->context); + guint device_id = 0; + g_object_get (priv->context, "cuda-device-id", &device_id, nullptr); + + GST_DEBUG_OBJECT (self, "Upstream is CUDA with device id %d", device_id); + + if (device_id != priv->cuda_device_id) { + priv->cuda_device_id = device_id; + g_object_notify (G_OBJECT (self), "cuda-device-id"); + } + + priv->stream = gst_cuda_memory_get_stream (cmem); + if (priv->stream) + gst_cuda_stream_ref (priv->stream); + } else { + GST_DEBUG_OBJECT (self, "Upstream is not CUDA"); + if (!gst_cuda_ensure_element_context (GST_ELEMENT_CAST (self), + priv->cuda_device_id, &priv->context)) { + GST_ERROR_OBJECT (self, "Couldn't create CUDA context"); + return FALSE; + } + + priv->stream = gst_cuda_stream_new (priv->context); + } + } + switch (GST_VIDEO_INFO_FORMAT (&priv->info)) { case GST_VIDEO_FORMAT_I420: priv->subsampling = NVJPEG_CSS_420; @@ -729,39 +883,38 @@ return FALSE; } - std::lock_guard < std::mutex > lk 
(priv->lock); + priv->use_stream_ordered = FALSE; + g_object_get (priv->context, "prefer-stream-ordered-alloc", + &priv->use_stream_ordered, nullptr); + if (!priv->use_stream_ordered) + priv->use_stream_ordered = default_stream_ordered_alloc_enabled (); + + + std::lock_guard < std::recursive_mutex > lk (priv->lock); priv->quality_updated = false; if (!gst_cuda_context_push (priv->context)) { GST_ERROR_OBJECT (self, "Couldn't push context"); + gst_nv_jpeg_enc_reset (self); return FALSE; } - /* Allocate memory */ - if (priv->launch_kernel) { - auto width = (priv->info.width + 1) / 2; - auto height = (priv->info.height + 1) / 2; - size_t pitch; - auto ret = CuMemAllocPitch (&priv->uv0, &pitch, width, height, 16); - if (!gst_cuda_result (ret)) { - GST_ERROR_OBJECT (self, "Couldn't allocate U plane memory"); - gst_cuda_context_pop (nullptr); - return FALSE; - } - - ret = CuMemAllocPitch (&priv->uv1, &pitch, width, height, 16); - if (!gst_cuda_result (ret)) { - GST_ERROR_OBJECT (self, "Couldn't allocate V plane memory"); - gst_cuda_context_pop (nullptr); - gst_nv_jpeg_enc_reset (self); - return FALSE; - } + if (!gst_nv_jpeg_enc_prepare_kernel_resource (self)) { + gst_cuda_context_pop (nullptr); + gst_nv_jpeg_enc_reset (self); + return FALSE; + } - priv->pitch = pitch; + auto ret = g_vtable.NvjpegCreateSimple (&priv->handle); + if (ret != NVJPEG_STATUS_SUCCESS) { + GST_ERROR_OBJECT (self, "Couldn't create encoder handle"); + gst_cuda_context_pop (nullptr); + gst_nv_jpeg_enc_reset (self); + return FALSE; } auto stream = gst_cuda_stream_get_handle (priv->stream); - auto ret = g_vtable.NvjpegEncoderParamsCreate (priv->handle, &priv->params, + ret = g_vtable.NvjpegEncoderParamsCreate (priv->handle, &priv->params, stream); if (ret != NVJPEG_STATUS_SUCCESS) { GST_ERROR_OBJECT (self, "Couldn't create param handle, ret %d", ret); @@ -790,7 +943,6 @@ ret = g_vtable.NvjpegEncoderStateCreate (priv->handle, &priv->state, stream); gst_cuda_context_pop (nullptr); - if (ret != 
NVJPEG_STATUS_SUCCESS) { GST_ERROR_OBJECT (self, "Couldn't create state handle, ret %d", ret); gst_nv_jpeg_enc_reset (self); @@ -799,9 +951,11 @@ priv->pool = gst_cuda_buffer_pool_new (priv->context); auto config = gst_buffer_pool_get_config (priv->pool); + auto caps = gst_video_info_to_caps (&priv->info); gst_buffer_pool_config_add_option (config, GST_BUFFER_POOL_OPTION_VIDEO_META); - gst_buffer_pool_config_set_params (config, - state->caps, priv->info.size, 0, 0); + gst_buffer_pool_config_set_params (config, caps, priv->info.size, 0, 0); + gst_caps_unref (caps); + if (priv->stream) gst_buffer_pool_config_set_cuda_stream (config, priv->stream); @@ -820,6 +974,23 @@ return TRUE; } +static gboolean +gst_nv_jpeg_enc_set_format (GstVideoEncoder * encoder, + GstVideoCodecState * state) +{ + auto self = GST_NV_JPEG_ENC (encoder); + auto priv = self->priv; + + priv->info = state->info; + + auto caps = gst_caps_new_empty_simple ("image/jpeg"); + auto output_state = gst_video_encoder_set_output_state (encoder, caps, + state); + gst_video_codec_state_unref (output_state); + + return gst_nv_jpeg_enc_init_session (self, nullptr); +} + static GstBuffer * gst_nv_jpeg_enc_upload_system (GstNvJpegEnc * self, GstBuffer * buffer) { @@ -944,6 +1115,12 @@ auto self = GST_NV_JPEG_ENC (encoder); auto priv = self->priv; + if (!priv->handle && !gst_nv_jpeg_enc_init_session (self, + frame->input_buffer)) { + gst_video_encoder_finish_frame (encoder, frame); + return GST_FLOW_ERROR; + } + if (!gst_cuda_context_push (priv->context)) { GST_ERROR_OBJECT (self, "Couldn't push context"); gst_video_encoder_finish_frame (encoder, frame); @@ -953,7 +1130,7 @@ auto stream = gst_cuda_stream_get_handle (priv->stream); { - std::lock_guard < std::mutex > lk (priv->lock); + std::lock_guard < std::recursive_mutex > lk (priv->lock); if (priv->quality_updated) { priv->quality_updated = false; auto ret = g_vtable.NvjpegEncoderParamsSetQuality (priv->params, @@ -1021,14 +1198,14 @@ return 
gst_video_encoder_finish_frame (encoder, frame); } -void +gboolean gst_nv_jpeg_enc_register (GstPlugin * plugin, GstCudaContext * context, guint rank, gboolean have_nvrtc) { GST_DEBUG_CATEGORY_INIT (gst_nv_jpeg_enc_debug, "nvjpegenc", 0, "nvjpegenc"); if (!gst_nv_jpeg_enc_load_library ()) - return; + return FALSE; GType type; guint index = 0; @@ -1044,8 +1221,12 @@ (GInstanceInitFunc) gst_nv_jpeg_enc_init, }; - guint cuda_device_id; - g_object_get (context, "cuda-device-id", &cuda_device_id, nullptr); + guint cuda_device_id = 0; + gboolean autogpu = FALSE; + if (!context) + autogpu = TRUE; + else + g_object_get (context, "cuda-device-id", &cuda_device_id, nullptr); std::string format_string; #ifdef NVCODEC_CUDA_PRECOMPILED @@ -1072,16 +1253,24 @@ cdata->cuda_device_id = cuda_device_id; cdata->sink_caps = sink_caps; cdata->have_nvrtc = have_nvrtc; + cdata->autogpu = autogpu; type_info.class_data = cdata; - auto type_name = g_strdup ("GstNvJpegEnc"); - auto feature_name = g_strdup ("nvjpegenc"); - while (g_type_from_name (type_name)) { - index++; - g_free (type_name); - g_free (feature_name); - type_name = g_strdup_printf ("GstNvJpegDevice%dEnc", index); - feature_name = g_strdup_printf ("nvjpegdevice%denc", index); + gchar *type_name = nullptr; + gchar *feature_name = nullptr; + if (autogpu) { + type_name = g_strdup ("GstNvAutoGpuJpegEnc"); + feature_name = g_strdup ("nvautogpujpegenc"); + } else { + type_name = g_strdup ("GstNvJpegEnc"); + feature_name = g_strdup ("nvjpegenc"); + while (g_type_from_name (type_name)) { + index++; + g_free (type_name); + g_free (feature_name); + type_name = g_strdup_printf ("GstNvJpegDevice%dEnc", index); + feature_name = g_strdup_printf ("nvjpegdevice%denc", index); + } } type = g_type_register_static (GST_TYPE_VIDEO_ENCODER, @@ -1098,4 +1287,6 @@ g_free (type_name); g_free (feature_name); + + return TRUE; }
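Among the nvjpegenc changes above is `default_stream_ordered_alloc_enabled()`, which probes `GST_CUDA_ENABLE_STREAM_ORDERED_ALLOC` exactly once under `GST_CUDA_CALL_ONCE`. A sketch of that probe-once pattern without the GStreamer helpers (a simple cached static replaces the once-guard; not the upstream implementation):

```c
#include <stdbool.h>
#include <stdlib.h>
#include <assert.h>

/* Mirrors default_stream_ordered_alloc_enabled(): the environment is
 * read on the first call and the answer cached, so flipping the
 * variable afterwards has no effect on the running process. */
static bool
stream_ordered_alloc_enabled (void)
{
  static int cached = -1;       /* -1: not yet probed */
  if (cached < 0)
    cached = getenv ("GST_CUDA_ENABLE_STREAM_ORDERED_ALLOC") ? 1 : 0;
  return cached == 1;
}
```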
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/nvcodec/gstnvjpegenc.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/nvcodec/gstnvjpegenc.h
Changed
@@ -23,9 +23,9 @@

 G_BEGIN_DECLS

-void gst_nv_jpeg_enc_register (GstPlugin * plugin,
-                               GstCudaContext * context,
-                               guint rank,
-                               gboolean have_nvrtc);
+gboolean gst_nv_jpeg_enc_register (GstPlugin * plugin,
+                                   GstCudaContext * context,
+                                   guint rank,
+                                   gboolean have_nvrtc);

 G_END_DECLS
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/nvcodec/gstnvvp8dec.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/nvcodec/gstnvvp8dec.cpp
Changed
@@ -229,7 +229,7 @@
   element_class->set_context = GST_DEBUG_FUNCPTR (gst_nv_vp8_dec_set_context);

   parent_class = (GTypeClass *) g_type_class_peek_parent (klass);
-  gst_element_class_set_metadata (element_class,
+  gst_element_class_set_static_metadata (element_class,
       "NVDEC VP8 Decoder", "Codec/Decoder/Video/Hardware",
       "NVIDIA VP8 video decoder", "Seungha Yang <seungha@centricular.com>");
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/nvcodec/gstnvvp9dec.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/nvcodec/gstnvvp9dec.cpp
Changed
@@ -234,7 +234,7 @@
   element_class->set_context = GST_DEBUG_FUNCPTR (gst_nv_vp9_dec_set_context);

   parent_class = (GTypeClass *) g_type_class_peek_parent (klass);
-  gst_element_class_set_metadata (element_class,
+  gst_element_class_set_static_metadata (element_class,
       "NVDEC VP9 Decoder", "Codec/Decoder/Video/Hardware",
       "NVIDIA VP9 video decoder", "Seungha Yang <seungha@centricular.com>");
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/nvcodec/kernel/gstcudaconverter.cu -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/nvcodec/kernel/gstcudaconverter.cu
Changed
@@ -44,10 +44,12 @@ float border_z; float border_w; int fill_border; - int video_direction; float alpha; int do_blend; int do_convert; + float transform_u2; + float transform_v2; + float transform_offset2; }; __device__ inline float @@ -1351,76 +1353,6 @@ } }; -__device__ inline float2 -rotate_identity (float x, float y) -{ - return make_float2(x, y); -} - -__device__ inline float2 -rotate_90r (float x, float y) -{ - return make_float2(y, 1.0 - x); -} - -__device__ inline float2 -rotate_180 (float x, float y) -{ - return make_float2(1.0 - x, 1.0 - y); -} - -__device__ inline float2 -rotate_90l (float x, float y) -{ - return make_float2(1.0 - y, x); -} - -__device__ inline float2 -rotate_horiz (float x, float y) -{ - return make_float2(1.0 - x, y); -} - -__device__ inline float2 -rotate_vert (float x, float y) -{ - return make_float2(x, 1.0 - y); -} - -__device__ inline float2 -rotate_ul_lr (float x, float y) -{ - return make_float2(y, x); -} - -__device__ inline float2 -rotate_ur_ll (float x, float y) -{ - return make_float2(1.0 - y, 1.0 - x); -} -__device__ inline float2 -do_rotate (float x, float y, int direction) -{ - switch (direction) { - case 1: - return rotate_90r (x, y); - case 2: - return rotate_180 (x, y); - case 3: - return rotate_90l (x, y); - case 4: - return rotate_horiz (x, y); - case 5: - return rotate_vert (x, y); - case 6: - return rotate_ul_lr (x, y); - case 7: - return rotate_ur_ll (x, y); - default: - return rotate_identity (x, y); - } -} - extern "C" { __global__ void GstCudaConverterMain (cudaTextureObject_t tex0, cudaTextureObject_t tex1, @@ -1450,7 +1382,11 @@ float y = (__int2float_rz (y_pos - const_buf.top) + 0.5) / const_buf.view_height; if (y < 0.0 || y > 1.0) return; - float2 rotated = do_rotate (x, y, const_buf.video_direction); + float2 rotated; + rotated.x = fmaf (x, const_buf.transform_u0, + fmaf (y, const_buf.transform_v0, const_buf.transform_offset0)); + rotated.y = fmaf (x, const_buf.transform_u1, + fmaf (y, 
const_buf.transform_v1, const_buf.transform_offset1)); float4 s = g_sampler.Execute (tex0, tex1, tex2, tex3, rotated.x, rotated.y); float3 rgb = make_float3 (s.x, s.y, s.z); float3 yuv; @@ -1496,10 +1432,12 @@ " float border_z;\n" " float border_w;\n" " int fill_border;\n" -" int video_direction;\n" " float alpha;\n" " int do_blend;\n" " int do_convert;\n" +" float transform_u2;\n" +" float transform_v2;\n" +" float transform_offset2;\n" "};\n" "\n" "__device__ inline float\n" @@ -2803,76 +2741,6 @@ " }\n" "};\n" "\n" -"__device__ inline float2\n" -"rotate_identity (float x, float y)\n" -"{\n" -" return make_float2(x, y);\n" -"}\n" -"\n" -"__device__ inline float2\n" -"rotate_90r (float x, float y)\n" -"{\n" -" return make_float2(y, 1.0 - x);\n" -"}\n" -"\n" -"__device__ inline float2\n" -"rotate_180 (float x, float y)\n" -"{\n" -" return make_float2(1.0 - x, 1.0 - y);\n" -"}\n" -"\n" -"__device__ inline float2\n" -"rotate_90l (float x, float y)\n" -"{\n" -" return make_float2(1.0 - y, x);\n" -"}\n" -"\n" -"__device__ inline float2\n" -"rotate_horiz (float x, float y)\n" -"{\n" -" return make_float2(1.0 - x, y);\n" -"}\n" -"\n" -"__device__ inline float2\n" -"rotate_vert (float x, float y)\n" -"{\n" -" return make_float2(x, 1.0 - y);\n" -"}\n" -"\n" -"__device__ inline float2\n" -"rotate_ul_lr (float x, float y)\n" -"{\n" -" return make_float2(y, x);\n" -"}\n" -"\n" -"__device__ inline float2\n" -"rotate_ur_ll (float x, float y)\n" -"{\n" -" return make_float2(1.0 - y, 1.0 - x);\n" -"}\n" -"__device__ inline float2\n" -"do_rotate (float x, float y, int direction)\n" -"{\n" -" switch (direction) {\n" -" case 1:\n" -" return rotate_90r (x, y);\n" -" case 2:\n" -" return rotate_180 (x, y);\n" -" case 3:\n" -" return rotate_90l (x, y);\n" -" case 4:\n" -" return rotate_horiz (x, y);\n" -" case 5:\n" -" return rotate_vert (x, y);\n" -" case 6:\n" -" return rotate_ul_lr (x, y);\n" -" case 7:\n" -" return rotate_ur_ll (x, y);\n" -" default:\n" -" return rotate_identity 
(x, y);\n" -" }\n" -"}\n" -"\n" "extern \"C\" {\n" "__global__ void\n" "GstCudaConverterMain (cudaTextureObject_t tex0, cudaTextureObject_t tex1,\n" @@ -2902,7 +2770,11 @@ " float y = (__int2float_rz (y_pos - const_buf.top) + 0.5) / const_buf.view_height;\n" " if (y < 0.0 || y > 1.0)\n" " return;\n" -" float2 rotated = do_rotate (x, y, const_buf.video_direction);\n" +" float2 rotated;\n" +" rotated.x = fmaf (x, const_buf.transform_u0,\n" +" fmaf (y, const_buf.transform_v0, const_buf.transform_offset0));\n" +" rotated.y = fmaf (x, const_buf.transform_u1,\n" +" fmaf (y, const_buf.transform_v1, const_buf.transform_offset1));\n" " float4 s = g_sampler.Execute (tex0, tex1, tex2, tex3, rotated.x, rotated.y);\n" " float3 rgb = make_float3 (s.x, s.y, s.z);\n" " float3 yuv;\n"
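The converter kernel above no longer switches on a video-direction enum; each orientation is pre-baked into six affine coefficients (`transform_u*`, `transform_v*`, `transform_offset*`) that the kernel applies with two `fmaf()` chains. A sketch of the 90°-clockwise case, whose old closed form was `rotate_90r(x, y) = (y, 1 - x)` (plain C with hypothetical names; plain multiply-add in place of `fmaf()` so the sketch carries no libm dependency, though the result is identical for these coefficients):

```c
#include <assert.h>

/* x' = u0*x + v0*y + off0 ; y' = u1*x + v1*y + off1 */
typedef struct { float u0, v0, off0, u1, v1, off1; } Transform2D;

/* rotate_90r(x, y) == (y, 1 - x) expressed as the affine coefficients
 * the rewritten kernel now receives for every orientation. */
static const Transform2D ROT_90R = { 0.f, 1.f, 0.f, -1.f, 0.f, 1.f };
/* rotate_180(x, y) == (1 - x, 1 - y) in the same form. */
static const Transform2D ROT_180 = { -1.f, 0.f, 1.f, 0.f, -1.f, 1.f };

static float
transform_x (Transform2D t, float x, float y)
{
  return x * t.u0 + y * t.v0 + t.off0;
}

static float
transform_y (Transform2D t, float x, float y)
{
  return x * t.u1 + y * t.v1 + t.off1;
}
```

Collapsing all eight orientations into one multiply-add per axis removes a divergent branch from the kernel and leaves room for arbitrary affine transforms later.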
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/nvcodec/plugin.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/nvcodec/plugin.c
Changed
@@ -129,6 +129,7 @@ GList *h264_enc_cdata = NULL; GList *h265_enc_cdata = NULL; GList *av1_enc_cdata = NULL; + gboolean have_nvjpegenc = FALSE; #endif gboolean have_nvrtc = FALSE; @@ -327,8 +328,8 @@ av1_enc_cdata = g_list_append (av1_enc_cdata, cdata); } - gst_nv_jpeg_enc_register (plugin, context, GST_RANK_NONE, have_nvrtc); - + if (gst_nv_jpeg_enc_register (plugin, context, GST_RANK_NONE, have_nvrtc)) + have_nvjpegenc = TRUE; #endif gst_object_unref (context); } @@ -348,6 +349,9 @@ gst_nv_av1_encoder_register_auto_select (plugin, av1_enc_cdata, GST_RANK_NONE); } + + if (have_nvjpegenc) + gst_nv_jpeg_enc_register (plugin, NULL, GST_RANK_NONE, have_nvrtc); #endif gst_cuda_memory_copy_register (plugin, GST_RANK_NONE);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/uvcgadget/gstuvcsink.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/uvcgadget/gstuvcsink.c
Changed
@@ -193,6 +193,9 @@ case V4L2_PIX_FMT_UYVY: format = GST_VIDEO_FORMAT_UYVY; break; + case V4L2_PIX_FMT_VYUY: + format = GST_VIDEO_FORMAT_VYUY; + break; case V4L2_PIX_FMT_YUV411P: format = GST_VIDEO_FORMAT_Y41B; break; @@ -442,7 +445,7 @@ element_class->change_state = gst_uvc_sink_change_state; - gst_element_class_set_metadata (element_class, + gst_element_class_set_static_metadata (element_class, "UVC Sink", "Sink/Video", "Streams Video via UVC Gadget", "Michael Grzeschik <mgr@pengutronix.de>"); @@ -977,8 +980,8 @@ int bret = GST_STATE_CHANGE_SUCCESS; GST_DEBUG_OBJECT (self, "%s -> %s", - gst_element_state_get_name (GST_STATE_TRANSITION_CURRENT (transition)), - gst_element_state_get_name (GST_STATE_TRANSITION_NEXT (transition))); + gst_state_get_name (GST_STATE_TRANSITION_CURRENT (transition)), + gst_state_get_name (GST_STATE_TRANSITION_NEXT (transition))); switch (transition) { case GST_STATE_CHANGE_NULL_TO_READY:
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/v4l2codecs/gstv4l2codecav1dec.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/v4l2codecs/gstv4l2codecav1dec.c
Changed
@@ -694,7 +694,7 @@ static void gst_v4l2_codec_av1_fill_segmentation (GstV4l2CodecAV1Dec * self, - const GstAV1SegmenationParams * seg) + const GstAV1SegmentationParams * seg) { struct v4l2_av1_segmentation *v4l2_seg = &self->v4l2_frame.segmentation; guint32 i; @@ -868,7 +868,7 @@ const GstAV1FrameHeaderOBU *f = &pic->frame_hdr; const GstAV1TileInfo *ti = &f->tile_info; const GstAV1QuantizationParams *q = &f->quantization_params; - const GstAV1SegmenationParams *seg = &f->segmentation_params; /* FIXME: send patch upstream to fix spelling on the parser s/segmenation/segmentation */ + const GstAV1SegmentationParams *seg = &f->segmentation_params; const GstAV1LoopFilterParams *lf = &f->loop_filter_params; const GstAV1LoopRestorationParams *lr = &f->loop_restoration_params; guint i;
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/v4l2codecs/gstv4l2decoder.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/v4l2codecs/gstv4l2decoder.c
Changed
@@ -1113,14 +1113,16 @@ video_device_path = device->video_device_path; } - g_object_class_install_property (gobject_class, PROP_MEDIA_DEVICE, - g_param_spec_string ("media-device", "Media Device Path", - "Path to the media device node", media_device_path, + g_object_class_install_property (gobject_class, + PROP_MEDIA_DEVICE + prop_offset, g_param_spec_string ("media-device", + "Media Device Path", "Path to the media device node", + media_device_path, G_PARAM_CONSTRUCT_ONLY | G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS)); - g_object_class_install_property (gobject_class, PROP_VIDEO_DEVICE, - g_param_spec_string ("video-device", "Video Device Path", - "Path to the video device node", video_device_path, + g_object_class_install_property (gobject_class, + PROP_VIDEO_DEVICE + prop_offset, g_param_spec_string ("video-device", + "Video Device Path", "Path to the video device node", + video_device_path, G_PARAM_CONSTRUCT_ONLY | G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS)); }
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/v4l2codecs/gstv4l2format.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/v4l2codecs/gstv4l2format.c
Changed
@@ -30,10 +30,6 @@ #define V4L2_PIX_FMT_NC12 v4l2_fourcc('N', 'C', '1', '2') /* Y/CbCr 4:2:0 (128b cols) */ #endif -#ifndef V4L2_PIX_FMT_NV15 -#define V4L2_PIX_FMT_NV15 v4l2_fourcc('N', 'V', '1', '5') /* 15 Y/CbCr 4:2:0 10-bit packed */ -#endif - typedef struct { guint32 v4l2_pix_fmt; @@ -46,6 +42,8 @@ /* *INDENT-OFF* */ /* Keep the same order as GST_V4L2_DEFAULT_VIDEO_FORMATS */ static const GstV4l2FormatDesc gst_v4l2_descriptions[] = { + {V4L2_PIX_FMT_NV20, GST_VIDEO_FORMAT_NV16_10LE40, DRM_FORMAT_INVALID, DRM_FORMAT_MOD_INVALID, 0}, + {V4L2_PIX_FMT_NV16, GST_VIDEO_FORMAT_NV16, DRM_FORMAT_INVALID, DRM_FORMAT_MOD_INVALID, 0}, {V4L2_PIX_FMT_MT2110R, GST_VIDEO_FORMAT_MT2110R, DRM_FORMAT_INVALID, DRM_FORMAT_MOD_INVALID, 0}, {V4L2_PIX_FMT_MT2110T, GST_VIDEO_FORMAT_MT2110T, DRM_FORMAT_INVALID, DRM_FORMAT_MOD_INVALID, 0}, {V4L2_PIX_FMT_NV15_4L4, GST_VIDEO_FORMAT_NV12_10LE40_4L4, DRM_FORMAT_INVALID, DRM_FORMAT_MOD_INVALID, 0},
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/v4l2codecs/gstv4l2format.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/v4l2codecs/gstv4l2format.h
Changed
@@ -23,11 +23,13 @@ #include <gst/video/video.h> #include "linux/videodev2.h" -/* +/* * Ordered similar to what libgstvideo does, but keeping tiled formats first, * and prefering bandwidth over alignment (NV12_10LE40 over P010_LE). */ #define GST_V4L2_DEFAULT_VIDEO_FORMATS "{ " \ + "NV16_10LE40, " \ + "NV16, " \ "MT2110R, " \ "MT2110T, " \ "NV12_10LE40_4L4, " \
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/v4l2codecs/linux/videodev2.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/v4l2codecs/linux/videodev2.h
Changed
@@ -611,8 +611,10 @@ /* two planes -- one Y, one Cr + Cb interleaved */ #define V4L2_PIX_FMT_NV12 v4l2_fourcc('N', 'V', '1', '2') /* 12 Y/CbCr 4:2:0 */ #define V4L2_PIX_FMT_NV21 v4l2_fourcc('N', 'V', '2', '1') /* 12 Y/CrCb 4:2:0 */ +#define V4L2_PIX_FMT_NV15 v4l2_fourcc('N', 'V', '1', '5') /* 15 Y/CbCr 4:2:0 10-bit packed */ #define V4L2_PIX_FMT_NV16 v4l2_fourcc('N', 'V', '1', '6') /* 16 Y/CbCr 4:2:2 */ #define V4L2_PIX_FMT_NV61 v4l2_fourcc('N', 'V', '6', '1') /* 16 Y/CrCb 4:2:2 */ +#define V4L2_PIX_FMT_NV20 v4l2_fourcc('N', 'V', '2', '0') /* 20 Y/CbCr 4:2:2 10-bit packed */ #define V4L2_PIX_FMT_NV24 v4l2_fourcc('N', 'V', '2', '4') /* 24 Y/CbCr 4:4:4 */ #define V4L2_PIX_FMT_NV42 v4l2_fourcc('N', 'V', '4', '2') /* 24 Y/CrCb 4:4:4 */ #define V4L2_PIX_FMT_P010 v4l2_fourcc('P', '0', '1', '0') /* 24 Y/CbCr 4:2:0 10-bit per component */
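The new NV15 and NV20 formats above are registered through the kernel's `v4l2_fourcc()` macro, which packs four ASCII characters into a 32-bit code, least-significant byte first. A Python equivalent of that macro (illustrative, mirrors the C definition):

```python
def v4l2_fourcc(a, b, c, d):
    """Pack four ASCII chars into a 32-bit fourcc, LSB first, like the C macro."""
    return ord(a) | (ord(b) << 8) | (ord(c) << 16) | (ord(d) << 24)

def fourcc_str(code):
    """Decode a fourcc code back into its four-character string."""
    return "".join(chr((code >> shift) & 0xFF) for shift in (0, 8, 16, 24))
```

For example, `v4l2_fourcc('N', 'V', '1', '5')` yields `0x3531564E`, the value the driver reports for the 10-bit packed 4:2:0 format.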
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/v4l2codecs/plugin.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/v4l2codecs/plugin.c
Changed
@@ -28,13 +28,13 @@ * capabilities. For this reason, this documentation may differ from output * of running `gst-inspect-1.0` on your target. * - * If you are having issues getting any elementis to be registered, you may want + * If you are having issues getting any elements to be registered, you may want * to verify that your user have adequate permissions to access media and video * devices. These Linux devices are usually found in `/dev/media*` and * `/dev/video*`. * - * This documentation as been generated with the use of the environment variable - * `GST_V4L2_CODEC_GEN_DOC=1`. Using tis environment outside of the documentation + * This documentation has been generated with the use of the environment variable + * `GST_V4L2_CODEC_GEN_DOC=1`. Using this environment outside of the documentation * generation will render your codecs unusable. * * Since: 1.18
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/va/gstvaav1enc.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/va/gstvaav1enc.c
Changed
@@ -52,10 +52,9 @@ #include "vacompat.h" #include "gstvabaseenc.h" +#include "gstvadisplay_priv.h" #include "gstvaencoder.h" -#include "gstvacaps.h" #include "gstvaprofile.h" -#include "gstvadisplay_priv.h" #include "gstvapluginutils.h" #include "gst/glib-compat-private.h" @@ -93,6 +92,8 @@ PROP_TILE_GROUPS, PROP_MBBRC, PROP_RATE_CONTROL, + PROP_PALETTE_MODE, + PROP_ALLOW_INTRABC, N_PROPERTIES }; @@ -227,6 +228,8 @@ guint32 num_tile_rows; guint32 tile_groups; guint32 mbbrc; + gboolean allow_intrabc; + gboolean enable_palette_mode; } prop; struct @@ -990,11 +993,14 @@ GstVideoCodecFrame * gst_frame) { GstVaAV1EncFrame *frame = _enc_frame (gst_frame); + +#ifndef G_DISABLE_CHECKS gint pushed_frame_num = gf_group->last_pushed_num < 0 ? 0 : gf_group->last_pushed_num - gf_group->start_frame_offset + 1; - /* No room for a new one. */ g_return_val_if_fail (pushed_frame_num < gf_group->group_frame_num, FALSE); +#endif + /* The frame num should just increase. */ g_return_val_if_fail (frame->frame_num == gf_group->last_pushed_num + 1, FALSE); @@ -1771,6 +1777,9 @@ self->partition.num_tile_cols = self->prop.num_tile_cols; self->partition.num_tile_rows = self->prop.num_tile_rows; self->partition.tile_groups = self->prop.tile_groups; + + self->features.allow_intrabc = self->prop.allow_intrabc; + self->features.enable_palette_mode = self->prop.enable_palette_mode; GST_OBJECT_UNLOCK (self); self->packed_headers = 0; @@ -1791,15 +1800,12 @@ self->features.enable_interintra_compound = FALSE; self->features.enable_masked_compound = FALSE; self->features.enable_warped_motion = FALSE; - self->features.enable_palette_mode = FALSE; self->features.enable_dual_filter = FALSE; self->features.enable_jnt_comp = FALSE; self->features.enable_ref_frame_mvs = FALSE; self->features.enable_superres = FALSE; self->features.enable_restoration = FALSE; - self->features.allow_intrabc = FALSE; self->features.enable_segmentation = FALSE; - self->features.enable_cdef = FALSE; 
self->features.interpolation_filter_support = 0; self->features.interpolation_type = 0; self->features.obu_size_bytes = 0; @@ -1963,6 +1969,7 @@ 2 12 Yes YUV 4:2:0,YUV 4:2:2,YUV 4:4:4 */ /* We only support 0 and 1 profile now */ + /* note that profile 2 doesn't support screen content coding (SCC) */ if (chrome == 0 || chrome == 1) { va_profile = VAProfileAV1Profile0; } else if (chrome == 3) { @@ -1980,7 +1987,7 @@ if (!gst_va_encoder_has_profile (base->encoder, p)) continue; - if ((rt_format & gst_va_encoder_get_rtformat (base->encoder, + if ((rt_format & gst_va_display_get_rtformat (base->display, p, GST_VA_BASE_ENC_ENTRYPOINT (base))) == 0) continue; @@ -2009,7 +2016,7 @@ self->packed_headers = 0; - if (!gst_va_encoder_get_packed_headers (base->encoder, base->profile, + if (!gst_va_display_get_packed_headers (base->display, base->profile, GST_VA_BASE_ENC_ENTRYPOINT (base), &packed_headers)) return FALSE; @@ -2042,7 +2049,7 @@ if (self->gop.gf_group_size >= self->gop.keyframe_interval) self->gop.gf_group_size = self->gop.keyframe_interval - 1; - if (!gst_va_encoder_get_max_num_reference (base->encoder, base->profile, + if (!gst_va_display_get_max_num_reference (base->display, base->profile, GST_VA_BASE_ENC_ENTRYPOINT (base), &list0, &list1)) { GST_INFO_OBJECT (self, "Failed to get the max num reference"); list0 = 1; @@ -2317,12 +2324,6 @@ features.value = attrib.value; - if (self->partition.use_128x128_superblock - && (features.bits.support_128x128_superblock == 0)) { - GST_INFO_OBJECT (self, "128x128 superblock is not supported."); - self->partition.use_128x128_superblock = FALSE; - } - self->features.enable_filter_intra = (features.bits.support_filter_intra != 0); self->features.enable_intra_edge_filter = @@ -2331,30 +2332,43 @@ (features.bits.support_interintra_compound != 0); self->features.enable_masked_compound = (features.bits.support_masked_compound != 0); - /* not enable it now. 
*/ + /* TODO: not implemented */ self->features.enable_warped_motion = FALSE; - // (features.bits.support_warped_motion != 0); - self->features.enable_palette_mode = FALSE; - // (features.bits.support_palette_mode != 0); + /* (features.bits.support_warped_motion != 0); */ self->features.enable_dual_filter = (features.bits.support_dual_filter != 0); self->features.enable_jnt_comp = (features.bits.support_jnt_comp != 0); self->features.enable_ref_frame_mvs = (features.bits.support_ref_frame_mvs != 0); - /* not enable it now. */ + /* TODO: not implemented */ self->features.enable_superres = FALSE; + /* (features.bits.support_superres != 0); */ + /* TODO: not implemented */ self->features.enable_restoration = FALSE; - // (features.bits.support_restoration != 0); - /* not enable it now. */ - self->features.allow_intrabc = FALSE; - self->features.enable_cdef = TRUE; + /* (features.bits.support_restoration != 0); */ self->features.cdef_channel_strength = (features.bits.support_cdef_channel_strength != 0); + + /* affected by the properties */ + self->partition.use_128x128_superblock &= + (features.bits.support_128x128_superblock != 0); + self->features.enable_palette_mode &= + (features.bits.support_palette_mode != 0); + self->features.allow_intrabc &= (features.bits.support_allow_intrabc != 0); + /* intra-block copy is incompatible with the constrained directional + * enhancement filter */ + self->features.enable_cdef = !self->features.allow_intrabc; } update_property_bool (base, &self->prop.use_128x128_superblock, self->partition.use_128x128_superblock, PROP_128X128_SUPERBLOCK); + update_property_bool (base, &self->prop.allow_intrabc, + self->features.allow_intrabc, PROP_ALLOW_INTRABC); + + update_property_bool (base, &self->prop.enable_palette_mode, + self->features.enable_palette_mode, PROP_PALETTE_MODE); + attrib.type = VAConfigAttribEncAV1Ext1; attrib.value = 0; status = vaGetConfigAttributes (gst_va_display_get_va_dpy (base->display), @@ -2596,7 +2610,7 @@ guint 
bitrate; guint32 rc_ctrl, rc_mode, quality_level; - quality_level = gst_va_encoder_get_quality_level (base->encoder, + quality_level = gst_va_display_get_quality_level (base->display, base->profile, GST_VA_BASE_ENC_ENTRYPOINT (base)); if (self->rc.target_usage > quality_level) { GST_INFO_OBJECT (self, "User setting target-usage: %d is not supported, " @@ -2612,7 +2626,7 @@ GST_OBJECT_UNLOCK (self); if (rc_ctrl != VA_RC_NONE) { - rc_mode = gst_va_encoder_get_rate_control_mode (base->encoder, + rc_mode = gst_va_display_get_rate_control_mode (base->display, base->profile, GST_VA_BASE_ENC_ENTRYPOINT (base)); if (!(rc_mode & rc_ctrl)) { guint32 defval = @@ -2764,32 +2778,20 @@ GstVaBaseEncClass *klass = GST_VA_BASE_ENC_GET_CLASS (base); GstVideoEncoder *venc = GST_VIDEO_ENCODER (base); GstVaAV1Enc *self = GST_VA_AV1_ENC (base); - GstCaps *out_caps, *reconf_caps = NULL; + GstCaps *out_caps; GstVideoCodecState *output_state; - GstVideoFormat format, reconf_format = GST_VIDEO_FORMAT_UNKNOWN; + GstVideoFormat format; VAProfile profile; - gboolean do_renegotiation = TRUE, do_reopen, need_negotiation, rc_same; - guint max_ref_frames, max_surfaces = 0, - rt_format, depth = 0, chrome = 0, codedbuf_size, latency_num; + gboolean do_renegotiation = TRUE; + guint max_ref_frames, rt_format, depth = 0, chrome = 0, latency_num; gint width, height; GstClockTime latency; width = GST_VIDEO_INFO_WIDTH (&base->in_info); height = GST_VIDEO_INFO_HEIGHT (&base->in_info); format = GST_VIDEO_INFO_FORMAT (&base->in_info); - codedbuf_size = base->codedbuf_size; latency_num = base->preferred_output_delay + self->gop.gf_group_size - 1; - need_negotiation = - !gst_va_encoder_get_reconstruct_pool_config (base->encoder, &reconf_caps, - &max_surfaces); - if (!need_negotiation && reconf_caps) { - GstVideoInfo vi; - if (!gst_video_info_from_caps (&vi, reconf_caps)) - return FALSE; - reconf_format = GST_VIDEO_INFO_FORMAT (&vi); - } - rt_format = _av1_get_rtformat (self, format, &depth, &chrome); if 
(!rt_format) { GST_ERROR_OBJECT (self, "unrecognized input format."); @@ -2800,19 +2802,6 @@ if (profile == VAProfileNone) return FALSE; - GST_OBJECT_LOCK (self); - rc_same = (self->prop.rc_ctrl == self->rc.rc_ctrl_mode); - GST_OBJECT_UNLOCK (self); - - /* first check */ - do_reopen = !(base->profile == profile && base->rt_format == rt_format - && format == reconf_format && width == base->width - && height == base->height && rc_same && depth == self->depth - && chrome == self->chrome); - - if (do_reopen && gst_va_encoder_is_open (base->encoder)) - gst_va_encoder_close (base->encoder); - gst_va_base_enc_reset_state (base); if (base->is_live) { @@ -2867,7 +2856,6 @@ /* Let the downstream know the new latency. */ if (latency_num != base->preferred_output_delay + self->gop.gf_group_size - 1) { - need_negotiation = TRUE; latency_num = base->preferred_output_delay + self->gop.gf_group_size - 1; } @@ -2882,14 +2870,7 @@ base->min_buffers = max_ref_frames; max_ref_frames += 3 /* scratch frames */ ; - /* second check after calculations */ - do_reopen |= - !(max_ref_frames == max_surfaces && codedbuf_size == base->codedbuf_size); - if (do_reopen && gst_va_encoder_is_open (base->encoder)) - gst_va_encoder_close (base->encoder); - - if (!gst_va_encoder_is_open (base->encoder) - && !gst_va_encoder_open (base->encoder, base->profile, + if (!gst_va_encoder_open (base->encoder, base->profile, GST_VIDEO_INFO_FORMAT (&base->in_info), base->rt_format, base->width, base->height, base->codedbuf_size, max_ref_frames, self->rc.rc_ctrl_mode, self->packed_headers)) { @@ -2912,17 +2893,15 @@ "height", G_TYPE_INT, base->height, "alignment", G_TYPE_STRING, "tu", "stream-format", G_TYPE_STRING, "obu-stream", NULL); - if (!need_negotiation) { - output_state = gst_video_encoder_get_output_state (venc); - do_renegotiation = TRUE; - if (output_state) { - do_renegotiation = !gst_caps_is_subset (output_state->caps, out_caps); - gst_video_codec_state_unref (output_state); - } - if (!do_renegotiation) 
{ - gst_caps_unref (out_caps); - return TRUE; - } + output_state = gst_video_encoder_get_output_state (venc); + do_renegotiation = TRUE; + if (output_state) { + do_renegotiation = !gst_caps_is_subset (output_state->caps, out_caps); + gst_video_codec_state_unref (output_state); + } + if (!do_renegotiation) { + gst_caps_unref (out_caps); + return TRUE; } GST_DEBUG_OBJECT (self, "output caps is %" GST_PTR_FORMAT, out_caps); @@ -3033,7 +3012,11 @@ .enable_order_hint = seq_param->seq_fields.bits.enable_order_hint, .enable_jnt_comp = seq_param->seq_fields.bits.enable_jnt_comp, .enable_ref_frame_mvs = seq_param->seq_fields.bits.enable_ref_frame_mvs, - .seq_choose_screen_content_tools = 0, + .seq_choose_screen_content_tools = + (self->features.allow_intrabc || self->features.enable_palette_mode), + .seq_force_screen_content_tools = + (self->features.allow_intrabc || self->features.enable_palette_mode) ? + GST_AV1_SELECT_SCREEN_CONTENT_TOOLS : 0, .order_hint_bits_minus_1 = seq_param->order_hint_bits_minus_1, .enable_superres = seq_param->seq_fields.bits.enable_superres, .enable_cdef = seq_param->seq_fields.bits.enable_cdef, @@ -3161,6 +3144,16 @@ guint cdef_damping; guint i; + if (!self->features.enable_cdef) { + pic_param->cdef_bits = 0; + pic_param->cdef_damping_minus_3 = 3; + for (i = 0; i < GST_AV1_CDEF_MAX; i++) { + pic_param->cdef_y_strengths[i] = 0; + pic_param->cdef_uv_strengths[i] = 0; + } + return; + } + /* Adjust the CDEF parameter for CQP mode. In bitrate control mode, the driver will update the CDEF value for each frame automatically.
*/ if (self->rc.rc_ctrl_mode == VA_RC_CQP) { @@ -3219,11 +3212,14 @@ g_assert (!(va_frame->type & FRAME_TYPE_REPEAT)); /* *INDENT-OFF* */ - if (self->rc.rc_ctrl_mode == VA_RC_CQP) { + if (self->rc.rc_ctrl_mode == VA_RC_CQP && !self->features.allow_intrabc) { loop_filter_level_y = _av1_calculate_filter_level (self->rc.base_qindex, FALSE); loop_filter_level_uv = _av1_calculate_filter_level (self->rc.base_qindex, TRUE); + } else if (self->features.allow_intrabc) { + loop_filter_level_y = 0; + loop_filter_level_uv = 0; } else { /* In bitrate control mode, the driver will set the loop filter level for each frame, we do not care here. */ @@ -3372,6 +3368,10 @@ .skip_frames_reduced_size = 0, }; /* *INDENT-ON* */ + if (self->features.allow_intrabc) { + pic_param->ref_deltas[4] = 0; + pic_param->ref_deltas[5] = -1; + } _av1_calculate_cdef_param (self, pic_param); @@ -3555,6 +3555,7 @@ .allow_screen_content_tools = 0, .frame_size_override_flag = 0, .frame_width = self->sequence_hdr.max_frame_width_minus_1 + 1, + .upscaled_width = self->sequence_hdr.max_frame_width_minus_1 + 1, .frame_height = self->sequence_hdr.max_frame_height_minus_1 + 1, .order_hint = pic_param->order_hint, .primary_ref_frame = pic_param->primary_ref_frame, @@ -3656,15 +3657,22 @@ }; /* *INDENT-ON* */ - for (i = 0; i < GST_AV1_CDEF_MAX; i++) { - frame_hdr->cdef_params.cdef_y_pri_strength[i] = - pic_param->cdef_y_strengths[i] / 4; - frame_hdr->cdef_params.cdef_y_sec_strength[i] = - pic_param->cdef_y_strengths[i] % 4; - frame_hdr->cdef_params.cdef_uv_pri_strength[i] = - pic_param->cdef_uv_strengths[i] / 4; - frame_hdr->cdef_params.cdef_uv_sec_strength[i] = - pic_param->cdef_uv_strengths[i] % 4; + if (frame_hdr->allow_intrabc == 0) { + for (i = 0; i < GST_AV1_CDEF_MAX; i++) { + frame_hdr->cdef_params.cdef_y_pri_strength[i] = + pic_param->cdef_y_strengths[i] / 4; + frame_hdr->cdef_params.cdef_y_sec_strength[i] = + pic_param->cdef_y_strengths[i] % 4; + frame_hdr->cdef_params.cdef_uv_pri_strength[i] = + pic_param->cdef_uv_strengths[i] / 4; +
frame_hdr->cdef_params.cdef_uv_sec_strength[i] = + pic_param->cdef_uv_strengths[i] % 4; + } + } + + if (frame_hdr->allow_intrabc + || pic_param->picture_flags.bits.palette_mode_enable) { + frame_hdr->allow_screen_content_tools = 1; } _av1_set_skip_mode_frame (self, va_frame, frame_hdr); @@ -4169,6 +4177,8 @@ self->prop.num_tile_rows = 1; self->prop.tile_groups = 1; self->prop.mbbrc = 0; + self->prop.enable_palette_mode = FALSE; + self->prop.allow_intrabc = FALSE; if (properties[PROP_RATE_CONTROL]) { self->prop.rc_ctrl = @@ -4274,6 +4284,12 @@ } break; } + case PROP_PALETTE_MODE: + self->prop.enable_palette_mode = g_value_get_boolean (value); + break; + case PROP_ALLOW_INTRABC: + self->prop.allow_intrabc = g_value_get_boolean (value); + break; default: G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); } @@ -4348,6 +4364,12 @@ case PROP_MBBRC: g_value_set_enum (value, self->prop.mbbrc); break; + case PROP_PALETTE_MODE: + g_value_set_boolean (value, self->prop.enable_palette_mode); + break; + case PROP_ALLOW_INTRABC: + g_value_set_boolean (value, self->prop.allow_intrabc); + break; default: G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); } @@ -4505,6 +4527,28 @@ "Enable the 128x128 superblock mode", FALSE, param_flags); /** + * GstVaAV1Enc:palette-mode: + * + * Enable palette mode, an intra-frame optimization for blocks with a limited + * number of distinct colors, such a UI elements, for example. + */ + properties[PROP_PALETTE_MODE] = + g_param_spec_boolean ("palette-mode", "Enable palette mode", + "Enable palette mode, intra-frame optimization with limited colors", + FALSE, param_flags); + + /** + * GstVaAV1Enc:allow_intrabc: + * + * Allow intra-block copy, a prediction mode for spatial redundancy within a + * frame. If it's enabled, it disables the usage of the constrained + * directional enhancement filter.
+ */ + properties[PROP_ALLOW_INTRABC] = + g_param_spec_boolean ("allow-intrabc", "Allow intra-block copy", + "Allow intra-block copy, a prediction mode for spatial redundancy within " + "a frame", FALSE, param_flags); + /** * GstVaAV1Enc:min-qp: * * The minimum quantizer value.
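In the frame-header hunk above, each VA CDEF strength value packs the AV1 primary and secondary filter strengths as `pri * 4 + sec`, which the header writer recovers with `/ 4` and `% 4`. A Python round-trip sketch of that encoding (helper names are mine, not from the patch):

```python
def pack_cdef_strength(pri, sec):
    """Pack AV1 CDEF strengths the VA way: primary 0..15, secondary 0..3."""
    assert 0 <= pri <= 15 and 0 <= sec <= 3
    return pri * 4 + sec

def unpack_cdef_strength(packed):
    """Mirror of the frame-header code: pri = packed / 4, sec = packed % 4."""
    return packed // 4, packed % 4
```

Because the secondary strength occupies the two low bits, the packed value fits the 6-bit field the AV1 bitstream expects, and the split is lossless for all valid inputs.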
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/va/gstvabasedec.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/va/gstvabasedec.c
Changed
@@ -227,7 +227,7 @@ { GstAllocator *allocator = NULL; - if (gst_caps_is_dmabuf (caps)) + if (gst_video_is_dma_drm_caps (caps)) allocator = gst_va_dmabuf_allocator_new (base->display); else { GArray *surface_formats =
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/va/gstvabaseenc.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/va/gstvabaseenc.c
Changed
@@ -26,7 +26,6 @@ #include "vacompat.h" #include "gstvabase.h" -#include "gstvacaps.h" #include "gstvapluginutils.h" #define GST_CAT_DEFAULT gst_va_base_enc_debug @@ -194,6 +193,7 @@ _get_sinkpad_pool (GstElement * element, gpointer data) { GstVaBaseEnc *base = GST_VA_BASE_ENC (element); + GstVaBaseEncClass *klass = GST_VA_BASE_ENC_GET_CLASS (element); GstAllocator *allocator; GstAllocationParams params = { 0, }; guint usage_hint; @@ -219,7 +219,7 @@ allocator = gst_va_allocator_new (base->display, surface_formats); usage_hint = va_get_surface_usage_hint (base->display, - VAEntrypointEncSlice, GST_PAD_SINK, FALSE); + klass->entrypoint, GST_PAD_SINK, FALSE); base->priv->raw_pool = gst_va_pool_new_with_config (caps, 1, 0, usage_hint, GST_VA_FEATURE_AUTO, allocator, ¶ms); @@ -384,7 +384,7 @@ { GstAllocator *allocator = NULL; - if (gst_caps_is_dmabuf (caps)) { + if (gst_video_is_dma_drm_caps (caps)) { allocator = gst_va_dmabuf_allocator_new (base->display); } else { GArray *surface_formats =
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/va/gstvabasetransform.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/va/gstvabasetransform.c
Changed
@@ -680,7 +680,7 @@ { GstAllocator *allocator = NULL; - if (gst_caps_is_dmabuf (caps)) { + if (gst_video_is_dma_drm_caps (caps)) { allocator = gst_va_dmabuf_allocator_new (self->display); } else { GArray *surface_formats = gst_va_filter_get_surface_formats (self->filter);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/va/gstvacaps.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/va/gstvacaps.c
Changed
@@ -28,7 +28,6 @@ #include <gst/va/vasurfaceimage.h> #include <va/va_drmcommon.h> -#include "gstvadisplay_priv.h" #include "gstvaprofile.h" GST_DEBUG_CATEGORY_EXTERN (gstva_debug); @@ -827,12 +826,6 @@ } gboolean -gst_caps_is_dmabuf (GstCaps * caps) -{ - return _caps_is (caps, GST_CAPS_FEATURE_MEMORY_DMABUF); -} - -gboolean gst_caps_is_vamemory (GstCaps * caps) { return _caps_is (caps, GST_CAPS_FEATURE_MEMORY_VA);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/va/gstvacaps.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/va/gstvacaps.h
Changed
@@ -57,9 +57,7 @@ gboolean gst_caps_set_format_array (GstCaps * caps, GArray * formats); -gboolean gst_caps_is_dmabuf (GstCaps * caps); gboolean gst_caps_is_vamemory (GstCaps * caps); gboolean gst_caps_is_raw (GstCaps * caps); G_END_DECLS -
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/va/gstvacompositor.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/va/gstvacompositor.c
Changed
@@ -55,7 +55,6 @@ #include "gstvabase.h" #include "gstvacaps.h" -#include "gstvadisplay_priv.h" #include "gstvafilter.h" #include "gstvapluginutils.h" @@ -571,7 +570,7 @@ { GstAllocator *allocator = NULL; - if (gst_caps_is_dmabuf (caps)) { + if (gst_video_is_dma_drm_caps (caps)) { allocator = gst_va_dmabuf_allocator_new (self->display); } else { GArray *surface_formats = gst_va_filter_get_surface_formats (self->filter); @@ -748,7 +747,7 @@ goto bail; } - if (gst_caps_is_dmabuf (caps) && GST_VIDEO_INFO_IS_RGB (&info)) { + if (gst_video_is_dma_drm_caps (caps) && GST_VIDEO_INFO_IS_RGB (&info)) { usage_hint = VA_SURFACE_ATTRIB_USAGE_HINT_GENERIC; } else { usage_hint = va_get_surface_usage_hint (self->display,
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/va/gstvadeinterlace.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/va/gstvadeinterlace.c
Changed
@@ -56,8 +56,6 @@ #include <va/va_drmcommon.h> #include "gstvabasetransform.h" -#include "gstvacaps.h" -#include "gstvadisplay_priv.h" #include "gstvafilter.h" #include "gstvapluginutils.h"
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/va/gstvadisplay_priv.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/va/gstvadisplay_priv.c
Changed
@@ -28,6 +28,9 @@ #include "gstvaprofile.h" +#define GST_CAT_DEFAULT gstva_debug +GST_DEBUG_CATEGORY_EXTERN (gstva_debug); + GArray * gst_va_display_get_profiles (GstVaDisplay * self, guint32 codec, VAEntrypoint entrypoint) @@ -161,3 +164,200 @@ g_free (entrypoints); return found; } + +#define _get_config_attrib(type) \ + __get_config_attrib(self, profile, entrypoint, &attrib, type, G_STRINGIFY (type)) + +static inline int +__get_config_attrib (GstVaDisplay * self, VAProfile profile, + VAEntrypoint entrypoint, VAConfigAttrib * attrib, + VAConfigAttribType type, const char *name) +{ + VAStatus status; + VADisplay dpy; + + g_return_val_if_fail (profile != VAProfileNone, 0); + + /* *INDENT-OFF* */ + *attrib = (VAConfigAttrib) { + .type = type, + }; + /* *INDENT-ON* */ + + dpy = gst_va_display_get_va_dpy (self); + status = vaGetConfigAttributes (dpy, profile, entrypoint, attrib, 1); + if (status != VA_STATUS_SUCCESS) { + GST_WARNING_OBJECT (self, "vaGetConfigAttributes (%s): %s", name, + vaErrorStr (status)); + return 0; + } + + if (attrib->value == VA_ATTRIB_NOT_SUPPORTED) { + GST_WARNING_OBJECT (self, "Driver does not support attribute %s", name); + return -1; + } + + return 1; +} + +gint32 +gst_va_display_get_max_slice_num (GstVaDisplay * self, + VAProfile profile, VAEntrypoint entrypoint) +{ + VAConfigAttrib attrib; + + g_return_val_if_fail (GST_IS_VA_DISPLAY (self), -1); + + if (_get_config_attrib (VAConfigAttribEncMaxSlices) < 1) + return -1; + + return attrib.value; +} + +guint32 +gst_va_display_get_slice_structure (GstVaDisplay * self, + VAProfile profile, VAEntrypoint entrypoint) +{ + VAConfigAttrib attrib; + + g_return_val_if_fail (GST_IS_VA_DISPLAY (self), 0); + + if (_get_config_attrib (VAConfigAttribEncSliceStructure) < 1) + return 0; + + return attrib.value; +} + +gboolean +gst_va_display_get_max_num_reference (GstVaDisplay * self, + VAProfile profile, VAEntrypoint entrypoint, + guint32 * list0, guint32 * list1) +{ + VAConfigAttrib attrib; + int ret; + + 
g_return_val_if_fail (GST_IS_VA_DISPLAY (self), FALSE); + + ret = _get_config_attrib (VAConfigAttribEncMaxRefFrames); + + if (ret == 0) + return FALSE; + + if (ret == -1) { + if (list0) + *list0 = 0; + if (list1) + *list1 = 0; + + return TRUE; + } + + if (list0) + *list0 = attrib.value & 0xffff; + if (list1) + *list1 = (attrib.value >> 16) & 0xffff; + + return TRUE; +} + +guint32 +gst_va_display_get_prediction_direction (GstVaDisplay * self, + VAProfile profile, VAEntrypoint entrypoint) +{ + VAConfigAttrib attrib; + + g_return_val_if_fail (GST_IS_VA_DISPLAY (self), 0); + + if (_get_config_attrib (VAConfigAttribPredictionDirection) < 1) + return 0; + + /* supported prediction directions */ + return attrib.value & (VA_PREDICTION_DIRECTION_PREVIOUS | + VA_PREDICTION_DIRECTION_FUTURE | VA_PREDICTION_DIRECTION_BI_NOT_EMPTY); +} + +guint32 +gst_va_display_get_rate_control_mode (GstVaDisplay * self, + VAProfile profile, VAEntrypoint entrypoint) +{ + VAConfigAttrib attrib = {.type = VAConfigAttribRateControl }; + + g_return_val_if_fail (GST_IS_VA_DISPLAY (self), 0); + + if (_get_config_attrib (VAConfigAttribRateControl) < 1) + return 0; + + return attrib.value; +} + +guint32 +gst_va_display_get_quality_level (GstVaDisplay * self, + VAProfile profile, VAEntrypoint entrypoint) +{ + VAConfigAttrib attrib; + + g_return_val_if_fail (GST_IS_VA_DISPLAY (self), 0); + + if (_get_config_attrib (VAConfigAttribEncQualityRange) < 1) + return 0; + + return attrib.value; +} + +gboolean +gst_va_display_has_trellis (GstVaDisplay * self, + VAProfile profile, VAEntrypoint entrypoint) +{ + VAConfigAttrib attrib; + + g_return_val_if_fail (GST_IS_VA_DISPLAY (self), FALSE); + + if (_get_config_attrib (VAConfigAttribEncQuantization) < 1) + return FALSE; + + return (gboolean) (attrib.value & VA_ENC_QUANTIZATION_TRELLIS_SUPPORTED); +} + +gboolean +gst_va_display_has_tile (GstVaDisplay * self, + VAProfile profile, VAEntrypoint entrypoint) +{ + VAConfigAttrib attrib; + + g_return_val_if_fail 
(GST_IS_VA_DISPLAY (self), FALSE); + + if (_get_config_attrib (VAConfigAttribEncTileSupport) < 1) + return FALSE; + + return (attrib.value > 0); +} + +guint32 +gst_va_display_get_rtformat (GstVaDisplay * self, + VAProfile profile, VAEntrypoint entrypoint) +{ + VAConfigAttrib attrib; + + g_return_val_if_fail (GST_IS_VA_DISPLAY (self), FALSE); + + if (_get_config_attrib (VAConfigAttribRTFormat) < 1) + return 0; + + return attrib.value; +} + +gboolean +gst_va_display_get_packed_headers (GstVaDisplay * self, VAProfile profile, + VAEntrypoint entrypoint, guint32 * packed_headers) +{ + VAConfigAttrib attrib; + + g_return_val_if_fail (GST_IS_VA_DISPLAY (self), FALSE); + + if (_get_config_attrib (VAConfigAttribEncPackedHeaders) < 1) + return FALSE; + + if (packed_headers) + *packed_headers = attrib.value; + return TRUE; +}
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/va/gstvadisplay_priv.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/va/gstvadisplay_priv.h
Changed
@@ -30,4 +30,42 @@ GArray * gst_va_display_get_image_formats (GstVaDisplay * self); gboolean gst_va_display_has_vpp (GstVaDisplay * self); +gint32 gst_va_display_get_max_slice_num (GstVaDisplay * self, + VAProfile profile, + VAEntrypoint entrypoint); +guint32 gst_va_display_get_slice_structure (GstVaDisplay * self, + VAProfile profile, + VAEntrypoint entrypoint); +gboolean gst_va_display_get_max_num_reference + (GstVaDisplay * self, + VAProfile profile, + VAEntrypoint entrypoint, + guint32 * list0, + guint32 * list1); +guint32 gst_va_display_get_prediction_direction + (GstVaDisplay * self, + VAProfile profile, + VAEntrypoint entrypoint); +guint32 gst_va_display_get_rate_control_mode + (GstVaDisplay * self, + VAProfile profile, + VAEntrypoint entrypoint); +guint32 gst_va_display_get_quality_level (GstVaDisplay * self, + VAProfile profile, + VAEntrypoint entrypoint); +gboolean gst_va_display_has_trellis (GstVaDisplay * self, + VAProfile profile, + VAEntrypoint entrypoint); +gboolean gst_va_display_has_tile (GstVaDisplay * self, + VAProfile profile, + VAEntrypoint entrypoint); +guint32 gst_va_display_get_rtformat (GstVaDisplay * self, + VAProfile profile, + VAEntrypoint entrypoint); +gboolean gst_va_display_get_packed_headers (GstVaDisplay * self, + VAProfile profile, + VAEntrypoint entrypoint, + guint32 * packed_headers); + + G_END_DECLS
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/va/gstvaencoder.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/va/gstvaencoder.c
Changed
@@ -28,10 +28,10 @@ #include <gst/va/gstvavideoformat.h> #include <gst/va/vasurfaceimage.h> +#include "vacompat.h" #include "gstvacaps.h" #include "gstvaprofile.h" #include "gstvadisplay_priv.h" -#include "vacompat.h" struct _GstVaEncoder { @@ -50,7 +50,12 @@ gint coded_height; gint codedbuf_size; - GstBufferPool *recon_pool; + struct + { + GstBufferPool *pool; + GstVideoFormat format; + gint max_surfaces; + } recon; }; GST_DEBUG_CATEGORY_STATIC (gst_va_encoder_debug); @@ -92,10 +97,11 @@ } static VABufferID -_create_buffer (GstVaEncoder * self, gint type, gpointer data, gsize size) +_create_buffer (GstVaEncoder * self, VABufferType type, gpointer data, + guint size) { VAStatus status; - VADisplay dpy = gst_va_display_get_va_dpy (self->display); + VADisplay dpy; VABufferID buffer; VAContextID context; @@ -183,26 +189,42 @@ { self->profile = VAProfileNone; self->config = VA_INVALID_ID; -} - -static void -gst_va_encoder_reset (GstVaEncoder * self) -{ - self->profile = VAProfileNone; - self->config = VA_INVALID_ID; self->context = VA_INVALID_ID; self->rt_format = 0; - self->coded_width = 0; - self->coded_height = 0; + self->coded_width = -1; + self->coded_height = -1; self->codedbuf_size = 0; + + self->recon.pool = NULL; + self->recon.max_surfaces = 0; + self->recon.format = GST_VIDEO_FORMAT_UNKNOWN; } static inline gboolean -_is_open_unlocked (GstVaEncoder * self) +_is_setup_unlocked (GstVaEncoder * self) { return (self->config != VA_INVALID_ID && self->profile != VAProfileNone); } +static inline gboolean +gst_va_encoder_is_setup (GstVaEncoder * self) +{ + gboolean ret; + + g_return_val_if_fail (GST_IS_VA_ENCODER (self), FALSE); + + GST_OBJECT_LOCK (self); + ret = _is_setup_unlocked (self); + GST_OBJECT_UNLOCK (self); + return ret; +} + +static inline gboolean +_is_open_unlocked (GstVaEncoder * self) +{ + return (_is_setup_unlocked (self) && self->context != VA_INVALID_ID); +} + gboolean gst_va_encoder_is_open (GstVaEncoder * self) { @@ -216,94 +238,108 @@ return 
ret; } +static inline void +_destroy_context (GstVaEncoder * self) +{ + VADisplay dpy; + VAStatus status; + VAContextID context; + GstBufferPool *pool; + + GST_OBJECT_LOCK (self); + context = self->context; + self->context = VA_INVALID_ID; + self->coded_width = -1; + self->coded_height = -1; + + if ((pool = self->recon.pool)) { + self->recon.pool = NULL; + self->recon.format = GST_VIDEO_FORMAT_UNKNOWN; + self->recon.max_surfaces = 0; + } + GST_OBJECT_UNLOCK (self); + + if (pool) { + gst_buffer_pool_set_active (pool, FALSE); + gst_object_unref (pool); + } + + if (context == VA_INVALID_ID) + return; + + dpy = gst_va_display_get_va_dpy (self->display); + status = vaDestroyContext (dpy, context); + if (status != VA_STATUS_SUCCESS) + GST_ERROR_OBJECT (self, "vaDestroyContext: %s", vaErrorStr (status)); +} + gboolean gst_va_encoder_close (GstVaEncoder * self) { VADisplay dpy; VAStatus status; - VAConfigID config = VA_INVALID_ID; - VAContextID context = VA_INVALID_ID; - GstBufferPool *recon_pool = NULL; + VAConfigID config; g_return_val_if_fail (GST_IS_VA_ENCODER (self), FALSE); - GST_OBJECT_LOCK (self); - if (!_is_open_unlocked (self)) { - GST_OBJECT_UNLOCK (self); - return TRUE; - } + _destroy_context (self); - config = self->config; - context = self->context; + gst_caps_replace (&self->srcpad_caps, NULL); + gst_caps_replace (&self->sinkpad_caps, NULL); - recon_pool = self->recon_pool; - self->recon_pool = NULL; + GST_OBJECT_LOCK (self); + config = self->config; - gst_va_encoder_reset (self); + gst_va_encoder_init (self); GST_OBJECT_UNLOCK (self); - gst_buffer_pool_set_active (recon_pool, FALSE); - g_clear_pointer (&recon_pool, gst_object_unref); + if (config == VA_INVALID_ID) + return TRUE; dpy = gst_va_display_get_va_dpy (self->display); - - if (context != VA_INVALID_ID) { - status = vaDestroyContext (dpy, context); - if (status != VA_STATUS_SUCCESS) - GST_ERROR_OBJECT (self, "vaDestroyContext: %s", vaErrorStr (status)); - } - status = vaDestroyConfig (dpy, config); 
if (status != VA_STATUS_SUCCESS) GST_ERROR_OBJECT (self, "vaDestroyConfig: %s", vaErrorStr (status)); - gst_caps_replace (&self->srcpad_caps, NULL); - gst_caps_replace (&self->sinkpad_caps, NULL); - return TRUE; } /* for querying the customized surface alignment */ guint -gst_va_encoder_get_surface_alignment (GstVaDisplay * display, - VAProfile profile, VAEntrypoint entrypoint) +gst_va_encoder_get_surface_alignment (GstVaEncoder * self) { guint alignment = 0; #if VA_CHECK_VERSION(1, 21, 0) - VAConfigAttrib *attrib = NULL; VASurfaceAttrib *attr_list; guint i, count; VAConfigID config; - VADisplay dpy; - VAStatus status; - dpy = gst_va_display_get_va_dpy (display); - status = vaCreateConfig (dpy, profile, entrypoint, attrib, 0, &config); - if (status != VA_STATUS_SUCCESS) { - GST_ERROR_OBJECT (display, "vaCreateConfig: %s", vaErrorStr (status)); - return alignment; + GST_OBJECT_LOCK (self); + config = self->config; + GST_OBJECT_UNLOCK (self); + + if (config == VA_INVALID_ID) { + GST_ERROR_OBJECT (self, + "Encoder has to be setup before getting surface alignment"); + return 0; } - attr_list = gst_va_get_surface_attribs (display, config, &count); + + attr_list = gst_va_get_surface_attribs (self->display, config, &count); if (!attr_list) goto bail; for (i = 0; i < count; i++) { - if (attr_list[i].type == VASurfaceAttribAlignmentSize) { - alignment = attr_list[i].value.value.i; - GST_INFO_OBJECT (display, - "Using customized surface alignment %dx%d\n", - 1 << (alignment & 0xf), 1 << ((alignment & 0xf0) >> 4)); - break; - } + if (attr_list[i].type != VASurfaceAttribAlignmentSize) + continue; + + alignment = attr_list[i].value.value.i; + GST_INFO_OBJECT (self, "Using customized surface alignment %dx%d", + 1 << (alignment & 0xf), 1 << ((alignment & 0xf0) >> 4)); + break; } g_free (attr_list); bail: - status = vaDestroyConfig (dpy, config); - if (status != VA_STATUS_SUCCESS) { - GST_ERROR_OBJECT (display, "vaDestroyConfig: %s", vaErrorStr (status)); - return alignment; - } #endif
return alignment; } @@ -346,42 +382,165 @@ return formats; } -static GstBufferPool * -_create_reconstruct_pool (GstVaDisplay * display, GArray * surface_formats, - GstVideoFormat format, gint coded_width, gint coded_height, - guint max_buffers) +static inline GstCaps * +_get_reconstructed_caps (GstVaEncoder * self) { - GstAllocator *allocator = NULL; - guint usage_hint; GstVideoInfo info; - GstAllocationParams params = { 0, }; - GstBufferPool *pool; - GstCaps *caps = NULL; - - gst_video_info_set_format (&info, format, coded_width, coded_height); + GstCaps *caps; + GstVideoFormat format; + gint width, height; - usage_hint = va_get_surface_usage_hint (display, - VAEntrypointEncSlice, GST_PAD_SINK, FALSE); + GST_OBJECT_LOCK (self); + format = self->recon.format; + width = self->coded_width; + height = self->coded_height; + GST_OBJECT_UNLOCK (self); + if (!gst_video_info_set_format (&info, format, width, height)) { + GST_WARNING_OBJECT (self, "Invalid video info"); + return NULL; + } caps = gst_video_info_to_caps (&info); + if (!caps) + return NULL; + gst_caps_set_features_simple (caps, gst_caps_features_new_single_static_str (GST_CAPS_FEATURE_MEMORY_VA)); + return caps; +} - allocator = gst_va_allocator_new (display, surface_formats); +static inline GstAllocator * +_get_reconstructed_allocator (GstVaEncoder * self) +{ + GArray *surface_formats; + VAConfigID config; + + GST_OBJECT_LOCK (self); + config = self->config; + GST_OBJECT_UNLOCK (self); + + g_assert (config != VA_INVALID_ID); + + surface_formats = _get_surface_formats (self->display, config); + if (!surface_formats) { + GST_ERROR_OBJECT (self, "Failed to get surface formats"); + return NULL; + } + + return gst_va_allocator_new (self->display, surface_formats); +} - pool = gst_va_pool_new_with_config (caps, 0, max_buffers, usage_hint, +static GstBufferPool * +_get_reconstructed_buffer_pool (GstVaEncoder * self) +{ + GstAllocator *allocator = NULL; + guint usage_hint; + GstAllocationParams params; + 
GstBufferPool *pool = NULL; + GstCaps *caps; + gint max_surfaces; + + GST_OBJECT_LOCK (self); + pool = self->recon.pool ? gst_object_ref (self->recon.pool) : NULL; + max_surfaces = self->recon.max_surfaces; + GST_OBJECT_UNLOCK (self); + + if (pool) + return pool; + + allocator = _get_reconstructed_allocator (self); + if (!allocator) { + GST_ERROR_OBJECT (self, "Failed to create reconstruct allocator"); + return NULL; + } + + caps = _get_reconstructed_caps (self); + if (!caps) { + GST_ERROR_OBJECT (self, "Failed to configure reconstruct caps"); + goto bail; + } + + usage_hint = va_get_surface_usage_hint (self->display, self->entrypoint, + GST_PAD_SINK, FALSE); + + gst_allocation_params_init (&params); + + /* create one reconstruct surface at least */ + pool = gst_va_pool_new_with_config (caps, 1, max_surfaces, usage_hint, GST_VA_FEATURE_AUTO, allocator, &params); + if (!pool) { + GST_ERROR_OBJECT (self, "Failed to create reconstruct pool"); + goto bail; + } + + if (!gst_buffer_pool_set_active (pool, TRUE)) { + GST_ERROR_OBJECT (self, "Failed to activate reconstruct pool"); + gst_clear_object (&pool); + } +bail: gst_clear_object (&allocator); gst_clear_caps (&caps); + gst_object_replace ((GstObject **) & self->recon.pool, + GST_OBJECT_CAST (pool)); return pool; } +static inline gboolean +_skip_setup (GstVaEncoder * self, VAProfile profile, guint rt_format, + guint rc_ctrl, guint32 packed_headers) +{ + VADisplay dpy; + VAStatus status; + /* *INDENT-OFF* */ + VAConfigAttrib attribs[] = { + { .type = VAConfigAttribRateControl, .value = 0, }, + { .type = VAConfigAttribEncPackedHeaders, .value = 0, }, + }; + /* *INDENT-ON* */ + gboolean same; + + /* encoder is closed */ + if (!gst_va_encoder_is_setup (self)) + return FALSE; + + GST_OBJECT_LOCK (self); + same = (profile == self->profile) && (rt_format == self->rt_format); + GST_OBJECT_UNLOCK (self); + if (!same) + goto close_and_bail; + + dpy = gst_va_display_get_va_dpy (self->display); + status = vaGetConfigAttributes (dpy, 
profile, self->entrypoint, attribs, + G_N_ELEMENTS (attribs)); + if (status != VA_STATUS_SUCCESS) { + GST_ERROR_OBJECT (self, "vaGetConfigAttributes: %s", vaErrorStr (status)); + goto close_and_bail; + } + + same = ((attribs[0].value == VA_ATTRIB_NOT_SUPPORTED) + && (rc_ctrl == VA_RC_NONE)) + || ((attribs[0].value & rc_ctrl) == rc_ctrl); + if (!same) + goto close_and_bail; + + same = ((attribs[1].value == VA_ATTRIB_NOT_SUPPORTED) + && (packed_headers == 0)) + || ((attribs[1].value & packed_headers) == packed_headers); + if (!same) + goto close_and_bail; + + /* the same setup can be reused */ + return TRUE; + +close_and_bail: + gst_va_encoder_close (self); + return FALSE; +} + gboolean -gst_va_encoder_open (GstVaEncoder * self, VAProfile profile, - GstVideoFormat video_format, guint rt_format, gint coded_width, - gint coded_height, gint codedbuf_size, guint max_reconstruct_surfaces, +gst_va_encoder_setup (GstVaEncoder * self, VAProfile profile, guint rt_format, guint rc_ctrl, guint32 packed_headers) { /* *INDENT-OFF* */ @@ -390,17 +549,16 @@ }; /* *INDENT-ON* */ VAConfigID config = VA_INVALID_ID; - VAContextID context = VA_INVALID_ID; VADisplay dpy; - GArray *surface_formats = NULL; VAStatus status; - GstBufferPool *recon_pool = NULL; guint attrib_idx = 1; g_return_val_if_fail (GST_IS_VA_ENCODER (self), FALSE); - g_return_val_if_fail (codedbuf_size > 0, FALSE); + g_return_val_if_fail (profile != VAProfileNone, FALSE); + g_return_val_if_fail (rc_ctrl > 0, FALSE); + g_return_val_if_fail (rt_format > 0, FALSE); - if (gst_va_encoder_is_open (self)) + if (_skip_setup (self, profile, rt_format, rc_ctrl, packed_headers)) return TRUE; if (!gst_va_encoder_has_profile (self, profile)) { @@ -422,69 +580,116 @@ } dpy = gst_va_display_get_va_dpy (self->display); - status = vaCreateConfig (dpy, profile, self->entrypoint, attribs, attrib_idx, &config); if (status != VA_STATUS_SUCCESS) { GST_ERROR_OBJECT (self, "vaCreateConfig: %s", vaErrorStr (status)); - goto error; + return FALSE; } -
surface_formats = _get_surface_formats (self->display, config); - if (!surface_formats) { - GST_ERROR_OBJECT (self, "Failed to get surface formats"); - goto error; - } + GST_OBJECT_LOCK (self); + self->config = config; + self->profile = profile; + self->rt_format = rt_format; + GST_OBJECT_UNLOCK (self); - recon_pool = _create_reconstruct_pool (self->display, surface_formats, - video_format, coded_width, coded_height, max_reconstruct_surfaces); - if (!recon_pool) { - GST_ERROR_OBJECT (self, "Failed to create reconstruct pool"); - goto error; - } + return TRUE; +} - if (!gst_buffer_pool_set_active (recon_pool, TRUE)) { - GST_ERROR_OBJECT (self, "Failed to activate reconstruct pool"); - goto error; +static inline gboolean +_skip_open (GstVaEncoder * self, gint coded_width, gint coded_height) +{ + gboolean same_size; + + if (!gst_va_encoder_is_open (self)) + return FALSE; + + GST_OBJECT_LOCK (self); + same_size = (self->coded_width == coded_width) + && (self->coded_height == coded_height); + GST_OBJECT_UNLOCK (self); + + if (same_size) + return TRUE; + + /* partial close: context & pool */ + _destroy_context (self); + + return FALSE; +} + +gboolean +gst_va_encoder_open_2 (GstVaEncoder * self, gint coded_width, gint coded_height) +{ + VAConfigID config = VA_INVALID_ID; + VAContextID context = VA_INVALID_ID; + VADisplay dpy; + VAStatus status; + + g_return_val_if_fail (GST_IS_VA_ENCODER (self), FALSE); + + if (!gst_va_encoder_is_setup (self)) { + /* clean up any misleading previous state */ + _destroy_context (self); + GST_ERROR_OBJECT (self, "call gst_va_encoder_setup() previous!"); + return FALSE; } + if (_skip_open (self, coded_width, coded_height)) + return TRUE; + + GST_OBJECT_LOCK (self); + config = self->config; + GST_OBJECT_UNLOCK (self); + + dpy = gst_va_display_get_va_dpy (self->display); status = vaCreateContext (dpy, config, coded_width, coded_height, VA_PROGRESSIVE, NULL, 0, &context); if (status != VA_STATUS_SUCCESS) { GST_ERROR_OBJECT (self, 
"vaCreateConfig: %s", vaErrorStr (status)); - goto error; + return FALSE; } GST_OBJECT_LOCK (self); - - self->config = config; self->context = context; - self->profile = profile; - self->rt_format = rt_format; self->coded_width = coded_width; self->coded_height = coded_height; - self->codedbuf_size = codedbuf_size; - gst_object_replace ((GstObject **) & self->recon_pool, - (GstObject *) recon_pool); - GST_OBJECT_UNLOCK (self); - g_clear_pointer (&recon_pool, gst_object_unref); - /* now we should return now only this profile's caps */ - gst_caps_replace (&self->srcpad_caps, NULL); - return TRUE; +} + +gboolean +gst_va_encoder_open (GstVaEncoder * self, VAProfile profile, + GstVideoFormat video_format, guint rt_format, gint coded_width, + gint coded_height, gint codedbuf_size, guint max_reconstruct_surfaces, + guint rc_ctrl, guint32 packed_headers) +{ + GstBufferPool *recon_pool; -error: - g_clear_pointer (&recon_pool, gst_object_unref); + g_return_val_if_fail (GST_IS_VA_ENCODER (self), FALSE); + g_return_val_if_fail (codedbuf_size > 0, FALSE); - if (config != VA_INVALID_ID) - vaDestroyConfig (dpy, config); + if (!gst_va_encoder_setup (self, profile, rt_format, rc_ctrl, packed_headers)) + return FALSE; - if (context != VA_INVALID_ID) - vaDestroyContext (dpy, context); + if (!gst_va_encoder_open_2 (self, coded_width, coded_height)) + return FALSE; - return FALSE; + if (!gst_va_encoder_set_reconstruct_pool_config (self, video_format, + max_reconstruct_surfaces)) + return FALSE; + recon_pool = _get_reconstructed_buffer_pool (self); + if (!recon_pool) + return FALSE; + gst_object_unref (recon_pool); + + gst_va_encoder_set_coded_buffer_size (self, codedbuf_size); + + /* XXX: now we should return now only this profile's caps */ + gst_caps_replace (&self->srcpad_caps, NULL); + + return TRUE; } static void @@ -585,347 +790,118 @@ return self; } -gboolean -gst_va_encoder_get_reconstruct_pool_config (GstVaEncoder * self, - GstCaps ** caps, guint * max_surfaces) +void 
+gst_va_encoder_set_coded_buffer_size (GstVaEncoder * self, + guint coded_buffer_size) { - GstStructure *config; - gboolean ret; - - g_return_val_if_fail (GST_IS_VA_ENCODER (self), FALSE); - - if (!gst_va_encoder_is_open (self)) - return FALSE; + g_return_if_fail (GST_IS_VA_ENCODER (self)); + g_return_if_fail (coded_buffer_size > 0); - if (!self->recon_pool) - return FALSE; - - config = gst_buffer_pool_get_config (self->recon_pool); - ret = gst_buffer_pool_config_get_params (config, caps, NULL, NULL, - max_surfaces); - gst_structure_free (config); - return ret; + GST_OBJECT_LOCK (self); + self->codedbuf_size = coded_buffer_size; + GST_OBJECT_UNLOCK (self); } gboolean -gst_va_encoder_has_profile (GstVaEncoder * self, VAProfile profile) +gst_va_encoder_set_reconstruct_pool_config (GstVaEncoder * self, + GstVideoFormat format, guint max_surfaces) { - VAProfile p; - gint i; + GstBufferPool *old_pool = NULL; + guint new_rt_format; g_return_val_if_fail (GST_IS_VA_ENCODER (self), FALSE); - for (i = 0; i < self->available_profiles->len; i++) { - p = g_array_index (self->available_profiles, VAProfile, i); - if (p == profile) - return TRUE; - } - - return FALSE; -} - -gint32 -gst_va_encoder_get_max_slice_num (GstVaEncoder * self, - VAProfile profile, VAEntrypoint entrypoint) -{ - VAStatus status; - VADisplay dpy; - VAConfigAttrib attrib = {.type = VAConfigAttribEncMaxSlices }; - - g_return_val_if_fail (GST_IS_VA_ENCODER (self), -1); - - if (profile == VAProfileNone) - return -1; - - dpy = gst_va_display_get_va_dpy (self->display); - status = vaGetConfigAttributes (dpy, profile, entrypoint, &attrib, 1); - if (status != VA_STATUS_SUCCESS) { - GST_WARNING_OBJECT (self, "Failed to query encoding slices: %s", - vaErrorStr (status)); - return -1; - } - - if (attrib.value == VA_ATTRIB_NOT_SUPPORTED) { - GST_WARNING_OBJECT (self, "Driver does not support encoding picture as " - "multiple slices"); - return -1; - } - - return attrib.value; -} - -gint32 
-gst_va_encoder_get_slice_structure (GstVaEncoder * self, - VAProfile profile, VAEntrypoint entrypoint) -{ - VAStatus status; - VADisplay dpy; - VAConfigAttrib attrib = {.type = VAConfigAttribEncSliceStructure }; - - g_return_val_if_fail (GST_IS_VA_ENCODER (self), 0); - - if (profile == VAProfileNone) - return -1; - - dpy = gst_va_display_get_va_dpy (self->display); - status = vaGetConfigAttributes (dpy, profile, entrypoint, &attrib, 1); - if (status != VA_STATUS_SUCCESS) { - GST_WARNING_OBJECT (self, "Failed to query encoding slice structure: %s", - vaErrorStr (status)); - return 0; - } + new_rt_format = gst_va_chroma_from_video_format (format); + g_return_val_if_fail (new_rt_format > 0, FALSE); - if (attrib.value == VA_ATTRIB_NOT_SUPPORTED) { - GST_WARNING_OBJECT (self, "Driver does not support slice structure"); - return 0; - } + GST_OBJECT_LOCK (self); - return attrib.value; -} + if (!_is_setup_unlocked (self)) + goto no_setup_error; -gboolean -gst_va_encoder_get_max_num_reference (GstVaEncoder * self, - VAProfile profile, VAEntrypoint entrypoint, - guint32 * list0, guint32 * list1) -{ - VAStatus status; - VADisplay dpy; - VAConfigAttrib attrib = {.type = VAConfigAttribEncMaxRefFrames }; + if (new_rt_format != self->rt_format) + goto bad_rt_format_error; - g_return_val_if_fail (GST_IS_VA_ENCODER (self), FALSE); + /* if it's the same configuration, carry on */ + if (self->recon.format == format && self->recon.max_surfaces == max_surfaces) + goto bail; - if (profile == VAProfileNone) - return FALSE; + /* if there's a previous reconstruct pool, destroy it */ + old_pool = self->recon.pool; + self->recon.pool = NULL; - dpy = gst_va_display_get_va_dpy (self->display); - status = vaGetConfigAttributes (dpy, profile, entrypoint, &attrib, 1); - if (status != VA_STATUS_SUCCESS) { - GST_WARNING_OBJECT (self, "Failed to query reference frames: %s", - vaErrorStr (status)); - return FALSE; - } + self->recon.max_surfaces = max_surfaces; + self->recon.format = format; - if 
(attrib.value == VA_ATTRIB_NOT_SUPPORTED) { - if (list0) - *list0 = 0; - if (list1) - *list1 = 0; +bail: + GST_OBJECT_UNLOCK (self); - return TRUE; + if (old_pool) { + GST_DEBUG_OBJECT (self, "De-allocating previous reconstruct pool"); + gst_object_unref (old_pool); } - if (list0) - *list0 = attrib.value & 0xffff; - if (list1) - *list1 = (attrib.value >> 16) & 0xffff; - return TRUE; -} -guint -gst_va_encoder_get_prediction_direction (GstVaEncoder * self, - VAProfile profile, VAEntrypoint entrypoint) -{ - VAStatus status; - VADisplay dpy; - VAConfigAttrib attrib = {.type = VAConfigAttribPredictionDirection }; - - g_return_val_if_fail (GST_IS_VA_ENCODER (self), 0); - - if (profile == VAProfileNone) - return 0; - - if (entrypoint != self->entrypoint) - return 0; - - dpy = gst_va_display_get_va_dpy (self->display); - status = vaGetConfigAttributes (dpy, profile, entrypoint, &attrib, 1); - if (status != VA_STATUS_SUCCESS) { - GST_WARNING_OBJECT (self, "Failed to query prediction direction: %s", - vaErrorStr (status)); - return 0; - } - - if (attrib.value == VA_ATTRIB_NOT_SUPPORTED) { - GST_WARNING_OBJECT (self, "Driver does not support query" - " prediction direction"); - return 0; - } - - return attrib.value & (VA_PREDICTION_DIRECTION_PREVIOUS | - VA_PREDICTION_DIRECTION_FUTURE | VA_PREDICTION_DIRECTION_BI_NOT_EMPTY); -} - -guint32 -gst_va_encoder_get_rate_control_mode (GstVaEncoder * self, - VAProfile profile, VAEntrypoint entrypoint) -{ - VAStatus status; - VADisplay dpy; - VAConfigAttrib attrib = {.type = VAConfigAttribRateControl }; - - g_return_val_if_fail (GST_IS_VA_ENCODER (self), 0); - - if (profile == VAProfileNone) - return 0; - - dpy = gst_va_display_get_va_dpy (self->display); - status = vaGetConfigAttributes (dpy, profile, entrypoint, &attrib, 1); - if (status != VA_STATUS_SUCCESS) { - GST_WARNING_OBJECT (self, "Failed to query rate control mode: %s", - vaErrorStr (status)); - return 0; - } - - if (attrib.value == VA_ATTRIB_NOT_SUPPORTED) { - 
GST_WARNING_OBJECT (self, "Driver does not support any rate control modes"); - return 0; - } - - return attrib.value; -} - -guint32 -gst_va_encoder_get_quality_level (GstVaEncoder * self, - VAProfile profile, VAEntrypoint entrypoint) -{ - VAStatus status; - VADisplay dpy; - VAConfigAttrib attrib = {.type = VAConfigAttribEncQualityRange }; - - g_return_val_if_fail (GST_IS_VA_ENCODER (self), 0); - - if (profile == VAProfileNone) - return 0; - - dpy = gst_va_display_get_va_dpy (self->display); - status = vaGetConfigAttributes (dpy, profile, entrypoint, &attrib, 1); - if (status != VA_STATUS_SUCCESS) { - GST_WARNING_OBJECT (self, "Failed to query the quality level: %s", - vaErrorStr (status)); - return 0; - } - - if (attrib.value == VA_ATTRIB_NOT_SUPPORTED) { - GST_WARNING_OBJECT (self, "Driver does not support quality attribute"); - return 0; - } - - return attrib.value; -} - -gboolean -gst_va_encoder_has_trellis (GstVaEncoder * self, - VAProfile profile, VAEntrypoint entrypoint) -{ - VAStatus status; - VADisplay dpy; - VAConfigAttrib attrib = {.type = VAConfigAttribEncQuantization }; - - g_return_val_if_fail (GST_IS_VA_ENCODER (self), FALSE); - - if (profile == VAProfileNone) - return FALSE; - - dpy = gst_va_display_get_va_dpy (self->display); - status = vaGetConfigAttributes (dpy, profile, entrypoint, &attrib, 1); - if (status != VA_STATUS_SUCCESS) { - GST_WARNING_OBJECT (self, "Failed to query the trellis: %s", - vaErrorStr (status)); + /* ERRORS */ +no_setup_error: + { + GST_OBJECT_UNLOCK (self); + GST_WARNING_OBJECT (self, "Can't configure reconstruct pool without setting" + " up the encoder previously"); return FALSE; } - - if (attrib.value == VA_ATTRIB_NOT_SUPPORTED) { - GST_WARNING_OBJECT (self, "Driver does not support trellis"); +bad_rt_format_error: + { + GST_OBJECT_UNLOCK (self); + GST_WARNING_OBJECT (self, "Reconstruct pool format (%s) doesn't have same" + " chroma as encoder setup", gst_video_format_to_string (format)); return FALSE; } - - return 
attrib.value & VA_ENC_QUANTIZATION_TRELLIS_SUPPORTED; } gboolean -gst_va_encoder_has_tile (GstVaEncoder * self, - VAProfile profile, VAEntrypoint entrypoint) +gst_va_encoder_get_reconstruct_pool_config (GstVaEncoder * self, + GstCaps ** caps, guint * max_surfaces) { - VAStatus status; - VADisplay dpy; - VAConfigAttrib attrib = {.type = VAConfigAttribEncTileSupport }; + GstBufferPool *pool; + GstStructure *config; + gboolean ret; g_return_val_if_fail (GST_IS_VA_ENCODER (self), FALSE); - if (profile == VAProfileNone) - return FALSE; - - dpy = gst_va_display_get_va_dpy (self->display); - status = vaGetConfigAttributes (dpy, profile, entrypoint, &attrib, 1); - if (status != VA_STATUS_SUCCESS) { - GST_WARNING_OBJECT (self, "Failed to query the tile: %s", - vaErrorStr (status)); - return FALSE; - } + GST_OBJECT_LOCK (self); + pool = self->recon.pool ? gst_object_ref (self->recon.pool) : NULL; + GST_OBJECT_UNLOCK (self); - if (attrib.value == VA_ATTRIB_NOT_SUPPORTED) { - GST_WARNING_OBJECT (self, "Driver does not support tile"); + if (!pool) return FALSE; - } - - return attrib.value > 0; -} -guint32 -gst_va_encoder_get_rtformat (GstVaEncoder * self, - VAProfile profile, VAEntrypoint entrypoint) -{ - VAStatus status; - VADisplay dpy; - VAConfigAttrib attrib = {.type = VAConfigAttribRTFormat }; - - if (profile == VAProfileNone) - return 0; - - dpy = gst_va_display_get_va_dpy (self->display); - status = vaGetConfigAttributes (dpy, profile, entrypoint, &attrib, 1); - if (status != VA_STATUS_SUCCESS) { - GST_ERROR_OBJECT (self, "Failed to query rt format: %s", - vaErrorStr (status)); - return 0; - } + config = gst_buffer_pool_get_config (pool); + ret = gst_buffer_pool_config_get_params (config, caps, NULL, NULL, + max_surfaces); + gst_structure_free (config); - if (attrib.value == VA_ATTRIB_NOT_SUPPORTED) { - GST_WARNING_OBJECT (self, "Driver does not support any rt format"); - return 0; - } + gst_object_unref (pool); - return attrib.value; + return ret; } gboolean 
-gst_va_encoder_get_packed_headers (GstVaEncoder * self, VAProfile profile, - VAEntrypoint entrypoint, guint * packed_headers) +gst_va_encoder_has_profile (GstVaEncoder * self, VAProfile profile) { - VAStatus status; - VADisplay dpy; - VAConfigAttrib attrib = {.type = VAConfigAttribEncPackedHeaders }; - - if (profile == VAProfileNone) - return FALSE; + VAProfile p; + gint i; - dpy = gst_va_display_get_va_dpy (self->display); - status = vaGetConfigAttributes (dpy, profile, entrypoint, &attrib, 1); - if (status != VA_STATUS_SUCCESS) { - GST_ERROR_OBJECT (self, "Failed to query packed headers: %s", - vaErrorStr (status)); - return FALSE; - } + g_return_val_if_fail (GST_IS_VA_ENCODER (self), FALSE); - if (attrib.value == VA_ATTRIB_NOT_SUPPORTED) { - GST_WARNING_OBJECT (self, "Driver does not support any packed headers"); - return FALSE; + for (i = 0; i < self->available_profiles->len; i++) { + p = g_array_index (self->available_profiles, VAProfile, i); + if (p == profile) + return TRUE; } - if (packed_headers) - *packed_headers = attrib.value; - return TRUE; + return FALSE; } /* Add packed header such as SPS, PPS, SEI, etc. 
If adding slice header, @@ -976,7 +952,6 @@ VABufferID buffer; g_return_val_if_fail (GST_IS_VA_ENCODER (self), FALSE); - g_return_val_if_fail (self->context != VA_INVALID_ID, FALSE); g_return_val_if_fail (pic && data && size > 0, FALSE); if (!gst_va_encoder_is_open (self)) { @@ -998,7 +973,7 @@ { g_return_val_if_fail (GST_IS_VA_ENCODER (self), NULL); - if (!gst_va_encoder_is_open (self)) + if (!gst_va_encoder_is_setup (self)) return NULL; return _get_surface_formats (self->display, self->config); @@ -1009,7 +984,7 @@ { GstCaps *sinkpad_caps = NULL, *srcpad_caps = NULL; - if (!gst_va_encoder_is_open (self) + if (!gst_va_encoder_is_setup (self) && GST_IS_VA_DISPLAY_WRAPPED (self->display)) { if (gst_va_caps_from_profiles (self->display, self->available_profiles, self->entrypoint, &srcpad_caps, &sinkpad_caps)) { @@ -1038,9 +1013,14 @@ if (_get_codec_caps (self)) return gst_caps_ref (self->sinkpad_caps); - if (gst_va_encoder_is_open (self)) { - sinkpad_caps = gst_va_create_raw_caps_from_config (self->display, - self->config); + if (gst_va_encoder_is_setup (self)) { + VAConfigID config; + + GST_OBJECT_LOCK (self); + config = self->config; + GST_OBJECT_UNLOCK (self); + + sinkpad_caps = gst_va_create_raw_caps_from_config (self->display, config); if (!sinkpad_caps) { GST_WARNING_OBJECT (self, "Invalid configuration caps"); return NULL; @@ -1065,17 +1045,16 @@ if (_get_codec_caps (self)) return gst_caps_ref (self->srcpad_caps); - if (gst_va_encoder_is_open (self)) { + if (gst_va_encoder_is_setup (self)) { VAProfile profile; - VAEntrypoint entrypoint; GstCaps *caps; GST_OBJECT_LOCK (self); profile = self->profile; - entrypoint = self->entrypoint; GST_OBJECT_UNLOCK (self); - caps = gst_va_create_coded_caps (self->display, profile, entrypoint, NULL); + caps = gst_va_create_coded_caps (self->display, profile, self->entrypoint, + NULL); if (caps) { gst_caps_replace (&self->srcpad_caps, caps); return gst_caps_ref (self->srcpad_caps); @@ -1205,8 +1184,8 @@ .flags = 
GST_BUFFER_POOL_ACQUIRE_FLAG_DONTWAIT, }; - g_return_val_if_fail (self && GST_IS_VA_ENCODER (self), NULL); - g_return_val_if_fail (raw_buffer && GST_IS_BUFFER (raw_buffer), NULL); + g_return_val_if_fail (GST_IS_VA_ENCODER (self), NULL); + g_return_val_if_fail (GST_IS_BUFFER (raw_buffer), NULL); GST_OBJECT_LOCK (self); @@ -1216,18 +1195,14 @@ return NULL; } - if (self->codedbuf_size <= 0) { - GST_ERROR_OBJECT (self, "codedbuf_size: %d, is invalid", - self->codedbuf_size); - GST_OBJECT_UNLOCK (self); - return NULL; - } codedbuf_size = self->codedbuf_size; - recon_pool = gst_object_ref (self->recon_pool); - GST_OBJECT_UNLOCK (self); + recon_pool = _get_reconstructed_buffer_pool (self); + if (!recon_pool) + return NULL; + ret = gst_buffer_pool_acquire_buffer (recon_pool, &reconstruct_buffer, &buffer_pool_params); gst_clear_object (&recon_pool); @@ -1238,6 +1213,9 @@ return NULL; } + /* this has to be assigned before */ + g_assert (codedbuf_size > 0); + dpy = gst_va_display_get_va_dpy (self->display); status = vaCreateBuffer (dpy, self->context, VAEncCodedBufferType, codedbuf_size, 1, NULL, &coded_buffer); @@ -1320,7 +1298,8 @@ for (i = 0; i < self->available_profiles->len; i++) { profile = g_array_index (self->available_profiles, VAProfile, i); - rc = gst_va_encoder_get_rate_control_mode (self, profile, self->entrypoint); + rc = gst_va_display_get_rate_control_mode (self->display, profile, + self->entrypoint); if (rc == 0) continue;
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/va/gstvaencoder.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/va/gstvaencoder.h
Changed
@@ -43,6 +43,14 @@
 };
 
 gboolean              gst_va_encoder_is_open              (GstVaEncoder * self);
+gboolean              gst_va_encoder_setup                (GstVaEncoder * self,
+                                                          VAProfile profile,
+                                                          guint rt_format,
+                                                          guint rc_ctrl,
+                                                          guint32 packed_headers);
+gboolean              gst_va_encoder_open_2               (GstVaEncoder * self,
+                                                          gint width,
+                                                          gint height);
 gboolean              gst_va_encoder_open                 (GstVaEncoder * self,
                                                           VAProfile profile,
                                                           GstVideoFormat video_format,
@@ -54,44 +62,21 @@
                                                           guint rc_ctrl,
                                                           guint32 packed_headers);
 gboolean              gst_va_encoder_close                (GstVaEncoder * self);
-gboolean              gst_va_encoder_get_reconstruct_pool_config (GstVaEncoder * self,
-                                                          GstCaps ** caps,
-                                                          guint * max_surfaces);
+void                  gst_va_encoder_set_coded_buffer_size
+                                                          (GstVaEncoder * self,
+                                                          guint coded_buffer_size);
+gboolean              gst_va_encoder_set_reconstruct_pool_config
+                                                          (GstVaEncoder * self,
+                                                          GstVideoFormat format,
+                                                          guint max_surfaces);
+gboolean              gst_va_encoder_get_reconstruct_pool_config
+                                                          (GstVaEncoder * self,
+                                                          GstCaps ** caps,
+                                                          guint * max_surfaces);
+guint                 gst_va_encoder_get_surface_alignment
+                                                          (GstVaEncoder * self);
 gboolean              gst_va_encoder_has_profile          (GstVaEncoder * self,
                                                           VAProfile profile);
-gint                  gst_va_encoder_get_max_slice_num    (GstVaEncoder * self,
-                                                          VAProfile profile,
-                                                          VAEntrypoint entrypoint);
-gint32                gst_va_encoder_get_slice_structure  (GstVaEncoder * self,
-                                                          VAProfile profile,
-                                                          VAEntrypoint entrypoint);
-gboolean              gst_va_encoder_get_max_num_reference (GstVaEncoder * self,
-                                                          VAProfile profile,
-                                                          VAEntrypoint entrypoint,
-                                                          guint32 * list0,
-                                                          guint32 * list1);
-guint                 gst_va_encoder_get_prediction_direction (GstVaEncoder * self,
-                                                          VAProfile profile,
-                                                          VAEntrypoint entrypoint);
-guint32               gst_va_encoder_get_rate_control_mode (GstVaEncoder * self,
-                                                          VAProfile profile,
-                                                          VAEntrypoint entrypoint);
-guint32               gst_va_encoder_get_quality_level    (GstVaEncoder * self,
-                                                          VAProfile profile,
-                                                          VAEntrypoint entrypoint);
-gboolean              gst_va_encoder_has_trellis          (GstVaEncoder * self,
-                                                          VAProfile profile,
-                                                          VAEntrypoint entrypoint);
-gboolean              gst_va_encoder_has_tile             (GstVaEncoder * self,
-                                                          VAProfile profile,
-                                                          VAEntrypoint entrypoint);
-guint32               gst_va_encoder_get_rtformat         (GstVaEncoder * self,
-                                                          VAProfile profile,
-                                                          VAEntrypoint entrypoint);
-gboolean              gst_va_encoder_get_packed_headers   (GstVaEncoder * self,
-                                                          VAProfile profile,
-                                                          VAEntrypoint entrypoint,
-                                                          guint32 * packed_headers);
 gboolean              gst_va_encoder_get_rate_control_enum (GstVaEncoder * self,
                                                           GEnumValue ratectl[16]);
 gboolean              gst_va_encoder_add_param            (GstVaEncoder * self,
@@ -119,7 +104,5 @@
 void                  gst_va_encode_picture_free          (GstVaEncodePicture * pic);
 VASurfaceID           gst_va_encode_picture_get_raw_surface (GstVaEncodePicture * pic);
 VASurfaceID           gst_va_encode_picture_get_reconstruct_surface (GstVaEncodePicture * pic);
-guint                 gst_va_encoder_get_surface_alignment (GstVaDisplay *display,
-                                                          VAProfile profile,
-                                                          VAEntrypoint entrypoint);
+
 G_END_DECLS
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/va/gstvafilter.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/va/gstvafilter.c
Changed
@@ -28,10 +28,10 @@
 #include <gst/va/vasurfaceimage.h>
 #include <gst/video/video.h>
 #include <va/va_drmcommon.h>
+#include <string.h>
 
 #include "gstvacaps.h"
 #include "gstvadisplay_priv.h"
-#include <string.h>
 
 struct _GstVaFilter
 {
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/va/gstvah264enc.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/va/gstvah264enc.c
Changed
@@ -79,12 +79,11 @@
 #include <gst/video/video.h>
 #include <va/va_drmcommon.h>
 
+#include "vacompat.h"
 #include "gstvabaseenc.h"
-#include "gstvacaps.h"
 #include "gstvadisplay_priv.h"
 #include "gstvaencoder.h"
 #include "gstvaprofile.h"
-#include "vacompat.h"
 #include "gstvapluginutils.h"
 #include "gst/glib-compat-private.h"
 
@@ -511,7 +510,7 @@
   guint bitrate;
   guint32 rc_ctrl, rc_mode, quality_level;
 
-  quality_level = gst_va_encoder_get_quality_level (base->encoder,
+  quality_level = gst_va_display_get_quality_level (base->display,
       base->profile, GST_VA_BASE_ENC_ENTRYPOINT (base));
   if (self->rc.target_usage > quality_level) {
     GST_INFO_OBJECT (self, "User setting target-usage: %d is not supported, "
@@ -527,7 +526,7 @@
   GST_OBJECT_UNLOCK (self);
 
   if (rc_ctrl != VA_RC_NONE) {
-    rc_mode = gst_va_encoder_get_rate_control_mode (base->encoder,
+    rc_mode = gst_va_display_get_rate_control_mode (base->display,
         base->profile, GST_VA_BASE_ENC_ENTRYPOINT (base));
     if (!(rc_mode & rc_ctrl)) {
       guint32 defval =
@@ -762,7 +761,7 @@
    * of the number of slices permitted by the stream and by the
    * hardware. */
   g_assert (self->num_slices >= 1);
-  max_slices = gst_va_encoder_get_max_slice_num (base->encoder,
+  max_slices = gst_va_display_get_max_slice_num (base->display,
       base->profile, GST_VA_BASE_ENC_ENTRYPOINT (base));
   if (self->num_slices > max_slices)
     self->num_slices = max_slices;
@@ -774,7 +773,7 @@
       self->num_slices, PROP_NUM_SLICES);
 
   /* Ensure trellis. */
-  self->support_trellis = gst_va_encoder_has_trellis (base->encoder,
+  self->support_trellis = gst_va_display_has_trellis (base->display,
       base->profile, GST_VA_BASE_ENC_ENTRYPOINT (base));
   if (self->use_trellis && !self->support_trellis) {
     GST_INFO_OBJECT (self, "The trellis is not supported");
@@ -1006,7 +1005,7 @@
     }
   }
 
-  if (!gst_va_encoder_get_max_num_reference (base->encoder, base->profile,
+  if (!gst_va_display_get_max_num_reference (base->display, base->profile,
           GST_VA_BASE_ENC_ENTRYPOINT (base), &list0, &list1)) {
     GST_INFO_OBJECT (self, "Failed to get the max num reference");
     list0 = 1;
@@ -1322,7 +1321,7 @@
 
   self->packed_headers = 0;
 
-  if (!gst_va_encoder_get_packed_headers (base->encoder, base->profile,
+  if (!gst_va_display_get_packed_headers (base->display, base->profile,
           GST_VA_BASE_ENC_ENTRYPOINT (base), &packed_headers))
     return FALSE;
 
@@ -1432,7 +1431,7 @@
     if (!gst_va_encoder_has_profile (base->encoder, profile))
       continue;
 
-    if ((rt_format & gst_va_encoder_get_rtformat (base->encoder,
+    if ((rt_format & gst_va_display_get_rtformat (base->display,
                 profile, GST_VA_BASE_ENC_ENTRYPOINT (base))) == 0)
       continue;
 
@@ -1454,7 +1453,7 @@
     if (!gst_va_encoder_has_profile (base->encoder, profile))
       continue;
 
-    if ((rt_format & gst_va_encoder_get_rtformat (base->encoder,
+    if ((rt_format & gst_va_display_get_rtformat (base->display,
                 profile, GST_VA_BASE_ENC_ENTRYPOINT (base))) == 0)
      continue;
 
@@ -1582,47 +1581,23 @@
   GstVaBaseEncClass *klass = GST_VA_BASE_ENC_GET_CLASS (base);
   GstVideoEncoder *venc = GST_VIDEO_ENCODER (base);
   GstVaH264Enc *self = GST_VA_H264_ENC (base);
-  GstCaps *out_caps, *reconf_caps = NULL;
+  GstCaps *out_caps;
   GstVideoCodecState *output_state = NULL;
-  GstVideoFormat format, reconf_format = GST_VIDEO_FORMAT_UNKNOWN;
+  GstVideoFormat format;
   VAProfile profile = VAProfileNone;
-  gboolean do_renegotiation = TRUE, do_reopen, need_negotiation, rc_same;
-  guint max_ref_frames, max_surfaces = 0, rt_format = 0,
-      codedbuf_size, latency_num;
+  gboolean do_renegotiation = TRUE;
+  guint max_ref_frames, rt_format = 0, latency_num;
   gint width, height;
   GstClockTime latency;
 
   width = GST_VIDEO_INFO_WIDTH (&base->in_info);
   height = GST_VIDEO_INFO_HEIGHT (&base->in_info);
   format = GST_VIDEO_INFO_FORMAT (&base->in_info);
-  codedbuf_size = base->codedbuf_size;
   latency_num = base->preferred_output_delay + self->gop.ip_period - 1;
 
-  need_negotiation =
-      !gst_va_encoder_get_reconstruct_pool_config (base->encoder, &reconf_caps,
-      &max_surfaces);
-  if (!need_negotiation && reconf_caps) {
-    GstVideoInfo vi;
-    if (!gst_video_info_from_caps (&vi, reconf_caps))
-      return FALSE;
-    reconf_format = GST_VIDEO_INFO_FORMAT (&vi);
-  }
-
   if (!_decide_profile (self, &profile, &rt_format))
     return FALSE;
 
-  GST_OBJECT_LOCK (self);
-  rc_same = (self->prop.rc_ctrl == self->rc.rc_ctrl_mode);
-  GST_OBJECT_UNLOCK (self);
-
-  /* first check */
-  do_reopen = !(base->profile == profile && base->rt_format == rt_format
-      && format == reconf_format && width == base->width
-      && height == base->height && rc_same);
-
-  if (do_reopen && gst_va_encoder_is_open (base->encoder))
-    gst_va_encoder_close (base->encoder);
-
   gst_va_base_enc_reset_state (base);
 
   if (base->is_live) {
@@ -1679,7 +1654,6 @@
 
   /* Let the downstream know the new latency. */
   if (latency_num != base->preferred_output_delay + self->gop.ip_period - 1) {
-    need_negotiation = TRUE;
     latency_num = base->preferred_output_delay + self->gop.ip_period - 1;
   }
 
@@ -1695,14 +1669,7 @@
   base->min_buffers = max_ref_frames;
   max_ref_frames += 3 /* scratch frames */ ;
 
-  /* second check after calculations */
-  do_reopen |=
-      !(max_ref_frames == max_surfaces && codedbuf_size == base->codedbuf_size);
-  if (do_reopen && gst_va_encoder_is_open (base->encoder))
-    gst_va_encoder_close (base->encoder);
-
-  if (!gst_va_encoder_is_open (base->encoder)
-      && !gst_va_encoder_open (base->encoder, base->profile,
+  if (!gst_va_encoder_open (base->encoder, base->profile,
           format, base->rt_format, base->width, base->height,
           base->codedbuf_size, max_ref_frames, self->rc.rc_ctrl_mode,
           self->packed_headers)) {
@@ -1725,17 +1692,15 @@
       "height", G_TYPE_INT, base->height, "alignment", G_TYPE_STRING, "au",
       "stream-format", G_TYPE_STRING, "byte-stream", NULL);
 
-  if (!need_negotiation) {
-    output_state = gst_video_encoder_get_output_state (venc);
-    do_renegotiation = TRUE;
-    if (output_state) {
-      do_renegotiation = !gst_caps_is_subset (output_state->caps, out_caps);
-      gst_video_codec_state_unref (output_state);
-    }
-    if (!do_renegotiation) {
-      gst_caps_unref (out_caps);
-      return TRUE;
-    }
+  output_state = gst_video_encoder_get_output_state (venc);
+  do_renegotiation = TRUE;
+  if (output_state) {
+    do_renegotiation = !gst_caps_is_subset (output_state->caps, out_caps);
+    gst_video_codec_state_unref (output_state);
+  }
+  if (!do_renegotiation) {
+    gst_caps_unref (out_caps);
+    return TRUE;
   }
 
   GST_DEBUG_OBJECT (self, "output caps is %" GST_PTR_FORMAT, out_caps);
@@ -1967,7 +1932,7 @@
   count.poc = b_vaframe->poc;
   g_queue_foreach (&base->ref_list, (GFunc) _count_backward_ref_num, &count);
   if (count.num >= self->gop.ref_num_list1) {
-    GstVideoCodecFrame *f;
+    GstVideoCodecFrame *f GST_UNUSED_ASSERT;
 
     /* it will unref at pop_frame */
     f = g_queue_pop_nth (&base->reorder_list, index);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/va/gstvah265enc.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/va/gstvah265enc.c
Changed
@@ -72,10 +72,9 @@
 
 #include "vacompat.h"
 #include "gstvabaseenc.h"
+#include "gstvadisplay_priv.h"
 #include "gstvaencoder.h"
-#include "gstvacaps.h"
 #include "gstvaprofile.h"
-#include "gstvadisplay_priv.h"
 #include "gstvapluginutils.h"
 #include "gst/glib-compat-private.h"
 
@@ -2237,7 +2236,7 @@
   count.poc = b_vaframe->poc;
   g_queue_foreach (&base->ref_list, (GFunc) _count_backward_ref_num, &count);
   if (count.num >= 1) {
-    GstVideoCodecFrame *f;
+    GstVideoCodecFrame *f GST_UNUSED_ASSERT;
 
     /* it will unref at pop_frame */
     f = g_queue_pop_nth (&base->reorder_list, index);
@@ -2792,7 +2791,7 @@
     if (!gst_va_encoder_has_profile (base->encoder, profile))
       continue;
 
-    if ((rt_format & gst_va_encoder_get_rtformat (base->encoder,
+    if ((rt_format & gst_va_display_get_rtformat (base->display,
                 profile, GST_VA_BASE_ENC_ENTRYPOINT (base))) == 0)
      continue;
 
@@ -3103,7 +3102,7 @@
    * of the number of slices permitted by the stream and by the
    * hardware. */
   g_assert (self->partition.num_slices >= 1);
-  max_slices = gst_va_encoder_get_max_slice_num (base->encoder,
+  max_slices = gst_va_display_get_max_slice_num (base->display,
       base->profile, GST_VA_BASE_ENC_ENTRYPOINT (base));
   if (self->partition.num_slices > max_slices)
     self->partition.num_slices = max_slices;
@@ -3113,14 +3112,14 @@
       ((self->ctu_width * self->ctu_height + 1) / 2))
     self->partition.num_slices =
         ((self->ctu_width * self->ctu_height + 1) / 2);
 
-  slice_structure = gst_va_encoder_get_slice_structure (base->encoder,
+  slice_structure = gst_va_display_get_slice_structure (base->display,
       base->profile, GST_VA_BASE_ENC_ENTRYPOINT (base));
 
   if (_is_tile_enabled (self)) {
     const GstVaH265LevelLimits *level_limits;
     guint i;
 
-    if (!gst_va_encoder_has_tile (base->encoder,
+    if (!gst_va_display_has_tile (base->display,
             base->profile, GST_VA_BASE_ENC_ENTRYPOINT (base))) {
       self->partition.num_tile_cols = 1;
       self->partition.num_tile_rows = 1;
@@ -3303,7 +3302,7 @@
   guint bitrate;
   guint32 rc_mode, quality_level, rc_ctrl;
 
-  quality_level = gst_va_encoder_get_quality_level (base->encoder,
+  quality_level = gst_va_display_get_quality_level (base->display,
       base->profile, GST_VA_BASE_ENC_ENTRYPOINT (base));
   if (self->rc.target_usage > quality_level) {
     GST_INFO_OBJECT (self, "User setting target-usage: %d is not supported, "
@@ -3318,7 +3317,7 @@
   GST_OBJECT_UNLOCK (self);
 
   if (rc_ctrl != VA_RC_NONE) {
-    rc_mode = gst_va_encoder_get_rate_control_mode (base->encoder,
+    rc_mode = gst_va_display_get_rate_control_mode (base->display,
         base->profile, GST_VA_BASE_ENC_ENTRYPOINT (base));
     if (!(rc_mode & rc_ctrl)) {
       guint32 defval =
@@ -3844,7 +3843,7 @@
     }
   }
 
-  if (!gst_va_encoder_get_max_num_reference (base->encoder, base->profile,
+  if (!gst_va_display_get_max_num_reference (base->display, base->profile,
          GST_VA_BASE_ENC_ENTRYPOINT (base), &list0, &list1)) {
    GST_INFO_OBJECT (self, "Failed to get the max num reference");
    list0 = 1;
@@ -3858,7 +3857,7 @@
   forward_num = list0;
   backward_num = list1;
 
-  prediction_direction = gst_va_encoder_get_prediction_direction (base->encoder,
+  prediction_direction = gst_va_display_get_prediction_direction (base->display,
       base->profile, GST_VA_BASE_ENC_ENTRYPOINT (base));
   if (prediction_direction) {
     if (!(prediction_direction & VA_PREDICTION_DIRECTION_PREVIOUS)) {
@@ -4098,7 +4097,7 @@
 
   self->packed_headers = 0;
 
-  if (!gst_va_encoder_get_packed_headers (base->encoder, base->profile,
+  if (!gst_va_display_get_packed_headers (base->display, base->profile,
          GST_VA_BASE_ENC_ENTRYPOINT (base), &packed_headers))
    return FALSE;
 
@@ -4346,7 +4345,7 @@
       self->features.transquant_bypass_enabled_flag);
 
   /* Ensure trellis. */
-  self->support_trellis = gst_va_encoder_has_trellis (base->encoder,
+  self->support_trellis = gst_va_display_has_trellis (base->display,
       base->profile, GST_VA_BASE_ENC_ENTRYPOINT (base));
   if (self->features.use_trellis && !self->support_trellis) {
     GST_INFO_OBJECT (self, "The trellis is not supported");
@@ -4502,13 +4501,12 @@
   GstVaBaseEncClass *klass = GST_VA_BASE_ENC_GET_CLASS (base);
   GstVideoEncoder *venc = GST_VIDEO_ENCODER (base);
   GstVaH265Enc *self = GST_VA_H265_ENC (base);
-  GstCaps *out_caps, *reconf_caps = NULL;;
+  GstCaps *out_caps;
   GstVideoCodecState *output_state = NULL;
-  GstVideoFormat format, reconf_format = GST_VIDEO_FORMAT_UNKNOWN;
+  GstVideoFormat format;
   VAProfile profile = VAProfileNone;
-  gboolean do_renegotiation = TRUE, do_reopen, need_negotiation, rc_same;
-  guint max_ref_frames, max_surfaces = 0, rt_format = 0,
-      codedbuf_size, latency_num;
+  gboolean do_renegotiation = TRUE;
+  guint max_ref_frames, rt_format = 0, latency_num;
   gint width, height;
   guint alignment;
   GstClockTime latency;
@@ -4516,34 +4514,11 @@
   width = GST_VIDEO_INFO_WIDTH (&base->in_info);
   height = GST_VIDEO_INFO_HEIGHT (&base->in_info);
   format = GST_VIDEO_INFO_FORMAT (&base->in_info);
-  codedbuf_size = base->codedbuf_size;
   latency_num = base->preferred_output_delay + self->gop.ip_period - 1;
 
-  need_negotiation =
-      !gst_va_encoder_get_reconstruct_pool_config (base->encoder, &reconf_caps,
-      &max_surfaces);
-  if (!need_negotiation && reconf_caps) {
-    GstVideoInfo vi;
-    if (!gst_video_info_from_caps (&vi, reconf_caps))
-      return FALSE;
-    reconf_format = GST_VIDEO_INFO_FORMAT (&vi);
-  }
-
   if (!_h265_decide_profile (self, &profile, &rt_format))
     return FALSE;
 
-  GST_OBJECT_LOCK (self);
-  rc_same = (self->prop.rc_ctrl == self->rc.rc_ctrl_mode);
-  GST_OBJECT_UNLOCK (self);
-
-  /* first check */
-  do_reopen = !(base->profile == profile && base->rt_format == rt_format
-      && format == reconf_format && width == base->width
-      && height == base->height && rc_same);
-
-  if (do_reopen && gst_va_encoder_is_open (base->encoder))
-    gst_va_encoder_close (base->encoder);
-
   gst_va_base_enc_reset_state (base);
 
   if (base->is_live) {
@@ -4558,8 +4533,14 @@
   base->width = width;
   base->height = height;
 
-  alignment = gst_va_encoder_get_surface_alignment (base->display,
-      profile, klass->entrypoint);
+  if (!_h265_init_packed_headers (self))
+    return FALSE;
+
+  if (!gst_va_encoder_setup (base->encoder, base->profile, base->rt_format,
+          self->rc.rc_ctrl_mode, self->packed_headers))
+    return FALSE;
+
+  alignment = gst_va_encoder_get_surface_alignment (base->encoder);
   if (alignment) {
     self->luma_width = GST_ROUND_UP_N (base->width, 1 << (alignment & 0xf));
     self->luma_height =
@@ -4642,15 +4623,11 @@
   if (!_h265_setup_slice_and_tile_partition (self))
     return FALSE;
 
-  if (!_h265_init_packed_headers (self))
-    return FALSE;
-
   self->aud = self->aud && self->packed_headers & VA_ENC_PACKED_HEADER_RAW_DATA;
   update_property_bool (base, &self->prop.aud, self->aud, PROP_AUD);
 
   /* Let the downstream know the new latency. */
   if (latency_num != base->preferred_output_delay + self->gop.ip_period - 1) {
-    need_negotiation = TRUE;
     latency_num = base->preferred_output_delay + self->gop.ip_period - 1;
   }
 
@@ -4660,26 +4637,24 @@
       GST_VIDEO_INFO_FPS_N (&base->in_info));
   gst_video_encoder_set_latency (venc, latency, latency);
 
+  if (!gst_va_encoder_open_2 (base->encoder, self->luma_width,
+          self->luma_height)) {
+    GST_ERROR_OBJECT (self, "Failed to open the VA encoder.");
+    return FALSE;
+  }
+
   max_ref_frames = self->gop.b_pyramid ?
       self->gop.highest_pyramid_level + 2 : self->gop.num_ref_frames;
   max_ref_frames += base->preferred_output_delay;
   base->min_buffers = max_ref_frames;
   max_ref_frames += 3 /* scratch frames */ ;
 
-  /* second check after calculations */
-  do_reopen |=
-      !(max_ref_frames == max_surfaces && codedbuf_size == base->codedbuf_size);
-  if (do_reopen && gst_va_encoder_is_open (base->encoder))
-    gst_va_encoder_close (base->encoder);
-
-  if (!gst_va_encoder_is_open (base->encoder)
-      && !gst_va_encoder_open (base->encoder, base->profile,
-          format, base->rt_format, self->luma_width, self->luma_height,
-          base->codedbuf_size, max_ref_frames, self->rc.rc_ctrl_mode,
-          self->packed_headers)) {
-    GST_ERROR_OBJECT (self, "Failed to open the VA encoder.");
+  if (!gst_va_encoder_set_reconstruct_pool_config (base->encoder, format,
+          max_ref_frames)) {
+    GST_ERROR_OBJECT (self, "Reconstruct pool configuration is invalid");
     return FALSE;
   }
+  gst_va_encoder_set_coded_buffer_size (base->encoder, base->codedbuf_size);
 
   /* Add some tags */
   gst_va_base_enc_add_codec_tag (base, "H265");
@@ -4696,19 +4671,15 @@
       "height", G_TYPE_INT, base->height, "alignment", G_TYPE_STRING, "au",
       "stream-format", G_TYPE_STRING, "byte-stream", NULL);
 
-  if (!need_negotiation) {
-    output_state = gst_video_encoder_get_output_state (venc);
-    do_renegotiation = TRUE;
-
-    if (output_state) {
-      do_renegotiation = !gst_caps_is_subset (output_state->caps, out_caps);
-      gst_video_codec_state_unref (output_state);
-    }
+  output_state = gst_video_encoder_get_output_state (venc);
+  if (output_state) {
+    do_renegotiation = !gst_caps_is_subset (output_state->caps, out_caps);
+    gst_video_codec_state_unref (output_state);
+  }
 
-    if (!do_renegotiation) {
-      gst_caps_unref (out_caps);
-      return TRUE;
-    }
+  if (!do_renegotiation) {
+    gst_caps_unref (out_caps);
+    return TRUE;
   }
 
   GST_DEBUG_OBJECT (self, "output caps is %" GST_PTR_FORMAT, out_caps);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/va/gstvajpegenc.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/va/gstvajpegenc.c
Changed
@@ -52,10 +52,9 @@
 
 #include "vacompat.h"
 #include "gstvabaseenc.h"
+#include "gstvadisplay_priv.h"
 #include "gstvaencoder.h"
-#include "gstvacaps.h"
 #include "gstvaprofile.h"
-#include "gstvadisplay_priv.h"
 #include "gstvapluginutils.h"
 
 GST_DEBUG_CATEGORY_STATIC (gst_va_jpegenc_debug);
@@ -220,7 +219,7 @@
 
   self->packed_headers = 0;
 
-  if (!gst_va_encoder_get_packed_headers (base->encoder, base->profile,
+  if (!gst_va_display_get_packed_headers (base->display, base->profile,
          GST_VA_BASE_ENC_ENTRYPOINT (base), &packed_headers))
    return FALSE;
 
@@ -306,31 +305,19 @@
   GstVaBaseEncClass *klass = GST_VA_BASE_ENC_GET_CLASS (base);
   GstVideoEncoder *venc = GST_VIDEO_ENCODER (base);
   GstVaJpegEnc *self = GST_VA_JPEG_ENC (base);
-  GstCaps *out_caps, *reconf_caps = NULL;
+  GstCaps *out_caps;
   GstVideoCodecState *output_state = NULL;
-  gboolean do_renegotiation = TRUE, do_reopen, need_negotiation;
+  gboolean do_renegotiation = TRUE;
   gint width, height;
-  GstVideoFormat format, reconf_format = GST_VIDEO_FORMAT_UNKNOWN;
-  guint rt_format = 0, codedbuf_size, latency_num,
-      max_surfaces = 0, max_cached_frames;
+  GstVideoFormat format;
+  guint rt_format = 0, latency_num, max_cached_frames;
   const char *colorspace, *sampling;
 
   width = GST_VIDEO_INFO_WIDTH (&base->in_info);
   height = GST_VIDEO_INFO_HEIGHT (&base->in_info);
   format = GST_VIDEO_INFO_FORMAT (&base->in_info);
-  codedbuf_size = base->codedbuf_size;
   latency_num = base->preferred_output_delay;
 
-  need_negotiation =
-      !gst_va_encoder_get_reconstruct_pool_config (base->encoder, &reconf_caps,
-      &max_surfaces);
-  if (!need_negotiation && reconf_caps) {
-    GstVideoInfo vi;
-    if (!gst_video_info_from_caps (&vi, reconf_caps))
-      return FALSE;
-    reconf_format = GST_VIDEO_INFO_FORMAT (&vi);
-  }
-
   rt_format = gst_va_chroma_from_video_format (format);
   if (!rt_format) {
     GST_ERROR_OBJECT (self, "unrecognized input format.");
@@ -340,15 +327,6 @@
   if (!_ensure_profile (self))
     return FALSE;
 
-  /* first check */
-  do_reopen = !(base->profile == VAProfileJPEGBaseline
-      && base->rt_format == rt_format
-      && format == reconf_format && width == base->width
-      && height == base->height);
-
-  if (do_reopen && gst_va_encoder_is_open (base->encoder))
-    gst_va_encoder_close (base->encoder);
-
   gst_va_base_enc_reset_state (base);
 
   if (base->is_live) {
@@ -373,7 +351,6 @@
 
   /* Let the downstream know the new latency. */
   if (latency_num != base->preferred_output_delay) {
-    need_negotiation = TRUE;
     latency_num = base->preferred_output_delay;
   }
 
@@ -403,20 +380,13 @@
   base->min_buffers = max_cached_frames;
   max_cached_frames += 3 /* scratch frames */ ;
 
-  /* second check after calculations */
-  do_reopen |= !(max_cached_frames == max_surfaces &&
-      codedbuf_size == base->codedbuf_size);
-  if (do_reopen && gst_va_encoder_is_open (base->encoder))
-    gst_va_encoder_close (base->encoder);
-
   /* Just use driver's capability attribute, we do not change them. */
   if (!_jpeg_get_capability_attribute (self)) {
     GST_ERROR_OBJECT (self, "Failed to satisfy the jpeg capability.");
     return FALSE;
   }
 
-  if (!gst_va_encoder_is_open (base->encoder)
-      && !gst_va_encoder_open (base->encoder, base->profile, format,
+  if (!gst_va_encoder_open (base->encoder, base->profile, format,
          base->rt_format, base->width, base->height, base->codedbuf_size,
          1, VA_RC_NONE, self->packed_headers)) {
    GST_ERROR_OBJECT (self, "Failed to open the VA encoder.");
@@ -486,17 +456,15 @@
   if (sampling)
     gst_caps_set_simple (out_caps, "sampling", G_TYPE_STRING, sampling, NULL);
 
-  if (!need_negotiation) {
-    output_state = gst_video_encoder_get_output_state (venc);
-    do_renegotiation = TRUE;
-    if (output_state) {
-      do_renegotiation = !gst_caps_is_subset (output_state->caps, out_caps);
-      gst_video_codec_state_unref (output_state);
-    }
-    if (!do_renegotiation) {
-      gst_caps_unref (out_caps);
-      return TRUE;
-    }
+  output_state = gst_video_encoder_get_output_state (venc);
+  do_renegotiation = TRUE;
+  if (output_state) {
+    do_renegotiation = !gst_caps_is_subset (output_state->caps, out_caps);
+    gst_video_codec_state_unref (output_state);
+  }
+  if (!do_renegotiation) {
+    gst_caps_unref (out_caps);
+    return TRUE;
  }
 
  GST_DEBUG_OBJECT (self, "output caps is %" GST_PTR_FORMAT, out_caps);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/va/gstvapluginutils.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/va/gstvapluginutils.c
Changed
@@ -90,6 +90,15 @@
   }
 
   pool = gst_video_buffer_pool_new ();
+  {
+    gchar *name;
+    if (allocator)
+      name = g_strdup_printf ("va-%s-pool", GST_OBJECT_NAME (allocator));
+    else
+      name = g_strdup ("va-video-pool");
+    g_object_set (pool, "name", name, NULL);
+    g_free (name);
+  }
 
   config = gst_buffer_pool_get_config (pool);
   gst_buffer_pool_config_set_params (config, caps, size, 0, 0);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/va/gstvavp8enc.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/va/gstvavp8enc.c
Changed
@@ -48,6 +48,7 @@
 
 #include "gstvabaseenc.h"
 #include "gstvapluginutils.h"
+#include "gstvadisplay_priv.h"
 
 GST_DEBUG_CATEGORY_STATIC (gst_va_vp8enc_debug);
 #define GST_CAT_DEFAULT gst_va_vp8enc_debug
@@ -426,7 +427,7 @@
   guint bitrate;
   guint32 rc_ctrl, rc_mode, quality_level;
 
-  quality_level = gst_va_encoder_get_quality_level (base->encoder,
+  quality_level = gst_va_display_get_quality_level (base->display,
       base->profile, GST_VA_BASE_ENC_ENTRYPOINT (base));
   if (self->rc.target_usage > quality_level) {
     GST_INFO_OBJECT (self, "User setting target-usage: %d is not supported, "
@@ -442,7 +443,7 @@
   GST_OBJECT_UNLOCK (self);
 
   if (rc_ctrl != VA_RC_NONE) {
-    rc_mode = gst_va_encoder_get_rate_control_mode (base->encoder,
+    rc_mode = gst_va_display_get_rate_control_mode (base->display,
         base->profile, GST_VA_BASE_ENC_ENTRYPOINT (base));
 
     if (!(rc_mode & rc_ctrl)) {
@@ -573,19 +574,18 @@
   GstVaBaseEncClass *klass = GST_VA_BASE_ENC_GET_CLASS (base);
   GstVideoEncoder *venc = GST_VIDEO_ENCODER (base);
   GstVaVp8Enc *self = GST_VA_VP8_ENC (base);
-  GstCaps *out_caps, *reconf_caps = NULL;
+  GstCaps *out_caps;
   GstVideoCodecState *output_state;
-  GstVideoFormat format, reconf_format = GST_VIDEO_FORMAT_UNKNOWN;
+  GstVideoFormat format;
   const GstVideoFormatInfo *format_info;
-  gboolean do_renegotiation = TRUE, do_reopen, need_negotiation, rc_same;
-  guint max_ref_frames, max_surfaces = 0, codedbuf_size, latency_num;
+  gboolean do_renegotiation = TRUE;
+  guint max_ref_frames, latency_num;
   gint width, height;
   GstClockTime latency;
 
   width = GST_VIDEO_INFO_WIDTH (&base->in_info);
   height = GST_VIDEO_INFO_HEIGHT (&base->in_info);
   format = GST_VIDEO_INFO_FORMAT (&base->in_info);
-  codedbuf_size = base->codedbuf_size;
   latency_num = base->preferred_output_delay;
 
   /* VP8 only support 4:2:0 formats so check that first */
@@ -594,28 +594,6 @@
       GST_VIDEO_FORMAT_INFO_H_SUB (format_info, 1) != 1)
     return FALSE;
 
-  need_negotiation =
-      !gst_va_encoder_get_reconstruct_pool_config (base->encoder, &reconf_caps,
-      &max_surfaces);
-
-  if (!need_negotiation && reconf_caps) {
-    GstVideoInfo vi;
-    if (!gst_video_info_from_caps (&vi, reconf_caps))
-      return FALSE;
-    reconf_format = GST_VIDEO_INFO_FORMAT (&vi);
-  }
-
-  GST_OBJECT_LOCK (self);
-  rc_same = (self->prop.rc_ctrl == self->rc.rc_ctrl_mode);
-  GST_OBJECT_UNLOCK (self);
-
-  /* First check */
-  do_reopen = !(format == reconf_format && width == base->width
-      && height == base->height && rc_same);
-
-  if (do_reopen && gst_va_encoder_is_open (base->encoder))
-    gst_va_encoder_close (base->encoder);
-
   gst_va_base_enc_reset_state (base);
 
   if (base->is_live) {
@@ -653,7 +631,6 @@
 
   /* Let the downstream know the new latency. */
   if (latency_num != base->preferred_output_delay + 1) {
-    need_negotiation = TRUE;
     latency_num = base->preferred_output_delay + 1;
   }
 
@@ -669,12 +646,7 @@
   max_ref_frames += 3;          /* scratch frames */
 
   /* Second check after calculations. */
-  do_reopen |= !(codedbuf_size == base->codedbuf_size);
-  if (do_reopen && gst_va_encoder_is_open (base->encoder))
-    gst_va_encoder_close (base->encoder);
-
-  if (!gst_va_encoder_is_open (base->encoder)
-      && !gst_va_encoder_open (base->encoder, base->profile,
+  if (!gst_va_encoder_open (base->encoder, base->profile,
          GST_VIDEO_INFO_FORMAT (&base->in_info), base->rt_format,
          base->width, base->height, base->codedbuf_size, max_ref_frames,
          self->rc.rc_ctrl_mode, 0)) {
@@ -692,17 +664,15 @@
   gst_caps_set_simple (out_caps, "width", G_TYPE_INT, base->width,
       "height", G_TYPE_INT, base->height, NULL);
 
-  if (!need_negotiation) {
-    output_state = gst_video_encoder_get_output_state (venc);
-    do_renegotiation = TRUE;
-    if (output_state) {
-      do_renegotiation = !gst_caps_is_subset (output_state->caps, out_caps);
-      gst_video_codec_state_unref (output_state);
-    }
-    if (!do_renegotiation) {
-      gst_caps_unref (out_caps);
-      return TRUE;
-    }
+  output_state = gst_video_encoder_get_output_state (venc);
+  do_renegotiation = TRUE;
+  if (output_state) {
+    do_renegotiation = !gst_caps_is_subset (output_state->caps, out_caps);
+    gst_video_codec_state_unref (output_state);
+  }
+  if (!do_renegotiation) {
+    gst_caps_unref (out_caps);
+    return TRUE;
  }
 
  GST_DEBUG_OBJECT (self, "output caps is %" GST_PTR_FORMAT, out_caps);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/va/gstvavp9enc.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/va/gstvavp9enc.c
Changed
@@ -53,10 +53,9 @@
 
 #include "vacompat.h"
 #include "gstvabaseenc.h"
+#include "gstvadisplay_priv.h"
 #include "gstvaencoder.h"
-#include "gstvacaps.h"
 #include "gstvaprofile.h"
-#include "gstvadisplay_priv.h"
 #include "gstvapluginutils.h"
 #include "gst/glib-compat-private.h"
 
@@ -747,11 +746,13 @@
     GstVideoCodecFrame * gst_frame)
 {
   GstVaVp9EncFrame *frame = _enc_frame (gst_frame);
+#ifndef G_DISABLE_CHECKS
   gint pushed_frame_num = gf_group->last_pushed_num < 0 ? 0 :
       gf_group->last_pushed_num - gf_group->start_frame_offset + 1;
 
-  /* No room for a new one. */
   g_return_val_if_fail (pushed_frame_num < gf_group->group_frame_num, FALSE);
+#endif
+
   /* The frame num should just increase. */
   g_return_val_if_fail (frame->frame_num == gf_group->last_pushed_num + 1,
       FALSE);
@@ -1609,7 +1610,7 @@
     if (!gst_va_encoder_has_profile (base->encoder, p))
       continue;
 
-    if ((rt_format & gst_va_encoder_get_rtformat (base->encoder,
+    if ((rt_format & gst_va_display_get_rtformat (base->display,
                 p, GST_VA_BASE_ENC_ENTRYPOINT (base))) == 0)
      continue;
 
@@ -1647,7 +1648,7 @@
   self->gop.gf_group_size = self->gop.keyframe_interval - 1;
 
   /* VP9 does not define reference list1 in spec. */
-  if (!gst_va_encoder_get_max_num_reference (base->encoder, base->profile,
+  if (!gst_va_display_get_max_num_reference (base->display, base->profile,
          GST_VA_BASE_ENC_ENTRYPOINT (base), &list0, NULL)) {
    GST_INFO_OBJECT (self, "Failed to get the max num reference");
    list0 = 1;
@@ -1883,7 +1884,7 @@
   guint bitrate;
   guint32 rc_ctrl, rc_mode, quality_level;
 
-  quality_level = gst_va_encoder_get_quality_level (base->encoder,
+  quality_level = gst_va_display_get_quality_level (base->display,
       base->profile, GST_VA_BASE_ENC_ENTRYPOINT (base));
   if (self->rc.target_usage > quality_level) {
     GST_INFO_OBJECT (self, "User setting target-usage: %d is not supported, "
@@ -1899,7 +1900,7 @@
   GST_OBJECT_UNLOCK (self);
 
   if (rc_ctrl != VA_RC_NONE) {
-    rc_mode = gst_va_encoder_get_rate_control_mode (base->encoder,
+    rc_mode = gst_va_display_get_rate_control_mode (base->display,
        base->profile, GST_VA_BASE_ENC_ENTRYPOINT (base));
    if (!(rc_mode & rc_ctrl)) {
      guint32 defval =
@@ -2068,7 +2069,7 @@
   GstVaBaseEnc *base = GST_VA_BASE_ENC (self);
   guint32 packed_headers;
 
-  if (!gst_va_encoder_get_packed_headers (base->encoder, base->profile,
+  if (!gst_va_display_get_packed_headers (base->display, base->profile,
          GST_VA_BASE_ENC_ENTRYPOINT (base), &packed_headers))
    return FALSE;
 
@@ -2093,32 +2094,20 @@
   GstVaBaseEncClass *klass = GST_VA_BASE_ENC_GET_CLASS (base);
   GstVideoEncoder *venc = GST_VIDEO_ENCODER (base);
   GstVaVp9Enc *self = GST_VA_VP9_ENC (base);
-  GstCaps *out_caps, *reconf_caps = NULL;
+  GstCaps *out_caps;
   GstVideoCodecState *output_state;
-  GstVideoFormat format, reconf_format = GST_VIDEO_FORMAT_UNKNOWN;
+  GstVideoFormat format;
   VAProfile profile;
-  gboolean do_renegotiation = TRUE, do_reopen, need_negotiation, rc_same;
-  guint max_ref_frames, max_surfaces = 0,
-      rt_format, depth = 0, chrome = 0, codedbuf_size, latency_num;
+  gboolean do_renegotiation = TRUE;
+  guint max_ref_frames, rt_format, depth = 0, chrome = 0, latency_num;
   gint width, height;
   GstClockTime latency;
 
   width = GST_VIDEO_INFO_WIDTH (&base->in_info);
   height = GST_VIDEO_INFO_HEIGHT (&base->in_info);
   format = GST_VIDEO_INFO_FORMAT (&base->in_info);
-  codedbuf_size = base->codedbuf_size;
   latency_num = base->preferred_output_delay + self->gop.gf_group_size - 1;
 
-  need_negotiation =
-      !gst_va_encoder_get_reconstruct_pool_config (base->encoder, &reconf_caps,
-      &max_surfaces);
-  if (!need_negotiation && reconf_caps) {
-    GstVideoInfo vi;
-    if (!gst_video_info_from_caps (&vi, reconf_caps))
-      return FALSE;
-    reconf_format = GST_VIDEO_INFO_FORMAT (&vi);
-  }
-
   rt_format = _vp9_get_rtformat (self, format, &depth, &chrome);
   if (!rt_format) {
     GST_ERROR_OBJECT (self, "unrecognized input format.");
@@ -2129,19 +2118,6 @@
   if (profile == VAProfileNone)
     return FALSE;
 
-  GST_OBJECT_LOCK (self);
-  rc_same = (self->prop.rc_ctrl == self->rc.rc_ctrl_mode);
-  GST_OBJECT_UNLOCK (self);
-
-  /* first check */
-  do_reopen = !(base->profile == profile && base->rt_format == rt_format
-      && format == reconf_format && width == base->width
-      && height == base->height && rc_same && depth == self->depth
-      && chrome == self->chrome);
-
-  if (do_reopen && gst_va_encoder_is_open (base->encoder))
-    gst_va_encoder_close (base->encoder);
-
   gst_va_base_enc_reset_state (base);
 
   if (base->is_live) {
@@ -2186,7 +2162,6 @@
   /* Let the downstream know the new latency. */
   if (latency_num != base->preferred_output_delay + self->gop.gf_group_size - 1) {
-    need_negotiation = TRUE;
     latency_num = base->preferred_output_delay + self->gop.gf_group_size - 1;
   }
 
@@ -2202,13 +2177,7 @@
   max_ref_frames += 3 /* scratch frames */ ;
 
   /* second check after calculations */
-  do_reopen |=
-      !(max_ref_frames == max_surfaces && codedbuf_size == base->codedbuf_size);
-  if (do_reopen && gst_va_encoder_is_open (base->encoder))
-    gst_va_encoder_close (base->encoder);
-
-  if (!gst_va_encoder_is_open (base->encoder)
-      && !gst_va_encoder_open (base->encoder, base->profile,
+  if (!gst_va_encoder_open (base->encoder, base->profile,
          GST_VIDEO_INFO_FORMAT (&base->in_info), base->rt_format,
         base->width, base->height, base->codedbuf_size, max_ref_frames,
         self->rc.rc_ctrl_mode, self->packed_headers)) {
@@ -2227,17 +2196,15 @@
      "height", G_TYPE_INT, base->height,
      "alignment", G_TYPE_STRING, "super-frame", NULL);
 
-  if (!need_negotiation) {
-    output_state = gst_video_encoder_get_output_state (venc);
-    do_renegotiation = TRUE;
-    if (output_state) {
-      do_renegotiation = !gst_caps_is_subset (output_state->caps, out_caps);
-      gst_video_codec_state_unref (output_state);
-    }
-    if (!do_renegotiation) {
-      gst_caps_unref (out_caps);
-      return TRUE;
-    }
+  output_state = gst_video_encoder_get_output_state (venc);
+  do_renegotiation = TRUE;
+  if (output_state) {
+    do_renegotiation = !gst_caps_is_subset (output_state->caps, out_caps);
+    gst_video_codec_state_unref (output_state);
+  }
+  if (!do_renegotiation) {
+    gst_caps_unref (out_caps);
+    return TRUE;
  }
 
  GST_DEBUG_OBJECT (self, "output caps is %" GST_PTR_FORMAT, out_caps);

_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/va/gstvavpp.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/va/gstvavpp.c
Changed
@@ -69,7 +69,6 @@ #include "gstvabasetransform.h" #include "gstvacaps.h" -#include "gstvadisplay_priv.h" #include "gstvafilter.h" #include "gstvapluginutils.h"
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/va/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/va/meson.build
Changed
@@ -34,18 +34,10 @@ 'gstjpegdecoder.h', 'gstvaav1dec.h', 'gstvaav1enc.h', - 'gstvabasedec.h', 'gstvabaseenc.h', - 'gstvabase.h', 'gstvabasetransform.h', - 'gstvacaps.h', 'gstvacompositor.h', - 'gstvadecoder.h', 'gstvadeinterlace.h', - 'gstvadevice.h', - 'gstvadisplay_priv.h', - 'gstvaencoder.h', - 'gstvafilter.h', 'gstvah264dec.h', 'gstvah264enc.h', 'gstvah265dec.h', @@ -53,14 +45,11 @@ 'gstvajpegdec.h', 'gstvajpegenc.h', 'gstvampeg2dec.h', - 'gstvapluginutils.h', - 'gstvaprofile.h', 'gstvavp8dec.h', 'gstvavp8enc.h', 'gstvavp9dec.h', 'gstvavp9enc.h', 'gstvavpp.h', - 'vacompat.h', 'gstvacodecalphadecodebin.h'
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/wasapi/gstwasapi.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/wasapi/gstwasapi.c
Changed
@@ -41,7 +41,7 @@ return FALSE; if (!gst_device_provider_register (plugin, "wasapideviceprovider", - GST_RANK_PRIMARY, GST_TYPE_WASAPI_DEVICE_PROVIDER)) + GST_RANK_NONE, GST_TYPE_WASAPI_DEVICE_PROVIDER)) return FALSE; GST_DEBUG_CATEGORY_INIT (gst_wasapi_debug, "wasapi",
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/wasapi2/gstwasapi2device.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/wasapi2/gstwasapi2device.cpp
Changed
@@ -26,8 +26,21 @@ #include "gstwasapi2util.h" #include "gstwasapi2enumerator.h" -GST_DEBUG_CATEGORY_EXTERN (gst_wasapi2_debug); -#define GST_CAT_DEFAULT gst_wasapi2_debug +#ifndef GST_DISABLE_GST_DEBUG +#define GST_CAT_DEFAULT ensure_debug_category() +static GstDebugCategory * +ensure_debug_category (void) +{ + static GstDebugCategory *cat = nullptr; + + GST_WASAPI2_CALL_ONCE_BEGIN { + cat = _gst_debug_category_new ("wasapi2deviceprovider", + 0, "wasapi2deviceprovider"); + } GST_WASAPI2_CALL_ONCE_END; + + return cat; +} +#endif enum { @@ -42,6 +55,7 @@ gchar *device_id; const gchar *factory_name; GstWasapi2EndpointClass device_class; + gboolean is_default; }; G_DEFINE_TYPE (GstWasapi2Device, gst_wasapi2_device, GST_TYPE_DEVICE); @@ -216,24 +230,62 @@ auto props = gst_structure_new ("wasapi2-proplist", "device.api", G_TYPE_STRING, "wasapi2", - "device.id", G_TYPE_STRING, entry->device_id, + "device.id", G_TYPE_STRING, entry->device_id.c_str (), "device.default", G_TYPE_BOOLEAN, entry->is_default, - "wasapi2.device.description", G_TYPE_STRING, entry->device_name, - nullptr); + "wasapi2.device.description", G_TYPE_STRING, + entry->device_name.c_str (), + "device.form-factor", G_TYPE_INT, + (gint) entry->device_props.form_factor, + "device.form-factor-name", G_TYPE_STRING, + gst_wasapi2_form_factor_to_string (entry->device_props.form_factor), + "device.enumerator-name", G_TYPE_STRING, + entry->device_props.enumerator_name.c_str (), nullptr); + + if (entry->is_default) { + if (!entry->actual_device_id.empty ()) { + gst_structure_set (props, "device.actual-id", G_TYPE_STRING, + entry->actual_device_id.c_str (), nullptr); + } + + if (!entry->actual_device_name.empty ()) { + gst_structure_set (props, "device.actual-name", G_TYPE_STRING, + entry->actual_device_name.c_str (), nullptr); + } + } else { + gst_structure_set (props, + "device.shared-mode-engine-default-period-us", G_TYPE_INT64, + entry->shared_mode_engine_default_period_us, + 
"device.shared-mode-engine-fundamental-period-us", G_TYPE_INT64, + entry->shared_mode_engine_fundamental_period_us, + "device.shared-mode-engine-min-period-us", G_TYPE_INT64, + entry->shared_mode_engine_min_period_us, + "device.shared-mode-engine-max-period-us", G_TYPE_INT64, + entry->shared_mode_engine_max_period_us, + "device.default-device-period-us", G_TYPE_INT64, + entry->default_device_period_us, + "device.min-device-period-us", G_TYPE_INT64, + entry->min_device_period_us, nullptr); + } if (entry->flow == eCapture) { gst_structure_set (props, "wasapi2.device.loopback", G_TYPE_BOOLEAN, FALSE, nullptr); + if (!entry->is_default && entry->exclusive_caps) { + gst_structure_set (props, "device.exclusive-caps", GST_TYPE_CAPS, + entry->exclusive_caps, nullptr); + } + auto device = (GstDevice *) g_object_new (GST_TYPE_WASAPI2_DEVICE, - "device", entry->device_id, - "display-name", entry->device_name, "caps", entry->caps, + "device", entry->device_id.c_str (), + "display-name", entry->device_name.c_str (), "caps", entry->caps, "device-class", "Audio/Source", "properties", props, nullptr); gst_structure_free (props); - GST_WASAPI2_DEVICE (device)->factory_name = "wasapi2src"; - GST_WASAPI2_DEVICE (device)->device_class = - GST_WASAPI2_ENDPOINT_CLASS_CAPTURE; + auto wasapi2_dev = GST_WASAPI2_DEVICE (device); + wasapi2_dev->factory_name = "wasapi2src"; + wasapi2_dev->device_class = GST_WASAPI2_ENDPOINT_CLASS_CAPTURE; + wasapi2_dev->is_default = entry->is_default; devices = g_list_append (devices, device); } else { @@ -241,27 +293,34 @@ gst_structure_set (prop_copy, "wasapi2.device.loopback", G_TYPE_BOOLEAN, TRUE, nullptr); + if (!entry->is_default && entry->exclusive_caps) { + gst_structure_set (props, "device.exclusive-caps", GST_TYPE_CAPS, + entry->exclusive_caps, nullptr); + } + auto device = (GstDevice *) g_object_new (GST_TYPE_WASAPI2_DEVICE, - "device", entry->device_id, - "display-name", entry->device_name, "caps", entry->caps, + "device", entry->device_id.c_str 
(), + "display-name", entry->device_name.c_str (), "caps", entry->caps, "device-class", "Audio/Sink", "properties", props, nullptr); gst_structure_free (props); - GST_WASAPI2_DEVICE (device)->factory_name = "wasapi2sink"; - GST_WASAPI2_DEVICE (device)->device_class = - GST_WASAPI2_ENDPOINT_CLASS_RENDER; + auto wasapi2_dev = GST_WASAPI2_DEVICE (device); + wasapi2_dev->factory_name = "wasapi2sink"; + wasapi2_dev->device_class = GST_WASAPI2_ENDPOINT_CLASS_RENDER; + wasapi2_dev->is_default = entry->is_default; devices = g_list_append (devices, device); device = (GstDevice *) g_object_new (GST_TYPE_WASAPI2_DEVICE, - "device", entry->device_id, - "display-name", entry->device_name, "caps", entry->caps, + "device", entry->device_id.c_str (), + "display-name", entry->device_name.c_str (), "caps", entry->caps, "device-class", "Audio/Source", "properties", prop_copy, nullptr); gst_structure_free (prop_copy); - GST_WASAPI2_DEVICE (device)->factory_name = "wasapi2src"; - GST_WASAPI2_DEVICE (device)->device_class = - GST_WASAPI2_ENDPOINT_CLASS_LOOPBACK_CAPTURE; + wasapi2_dev = GST_WASAPI2_DEVICE (device); + wasapi2_dev->factory_name = "wasapi2src"; + wasapi2_dev->device_class = GST_WASAPI2_ENDPOINT_CLASS_LOOPBACK_CAPTURE; + wasapi2_dev->is_default = entry->is_default; devices = g_list_append (devices, device); } @@ -334,6 +393,63 @@ return found; } +static gboolean +dump_structure_field (const GstIdStr * fieldname, const GValue * value, + gpointer user_data) +{ + auto str = (GString *) user_data; + gchar *val; + + if (G_VALUE_HOLDS_UINT (value)) { + val = g_strdup_printf ("%u (0x%08x)", g_value_get_uint (value), + g_value_get_uint (value)); + } else if (G_VALUE_HOLDS_STRING (value)) { + val = g_value_dup_string (value); + } else { + val = gst_value_serialize (value); + } + + if (val) { + g_string_append_printf (str, + "\t%s = %s\n", gst_id_str_as_str (fieldname), val); + } + + g_free (val); + + return TRUE; +} + +static gchar * +gst_wasapi2_dump_devices (GList * device_list) +{ 
+#ifndef GST_DISABLE_GST_DEBUG + if (gst_debug_category_get_threshold (GST_CAT_DEFAULT) < GST_LEVEL_LOG || + !device_list) { + return nullptr; + } + + auto str = g_string_new (nullptr); + GList *iter; + for (iter = device_list; iter; iter = g_list_next (iter)) { + auto device = GST_DEVICE (iter->data); + auto name = gst_device_get_display_name (device); + auto device_class = gst_device_get_device_class (device); + auto prop = gst_device_get_properties (device); + g_string_append_printf (str, "%s (%s)\n", name, device_class); + gst_structure_foreach_id_str (prop, dump_structure_field, str); + g_string_append_c (str, '\n'); + + g_free (name); + g_free (device_class); + gst_structure_free (prop); + } + + return g_string_free (str, FALSE); +#else + return nullptr; +#endif +} + static void gst_wasapi2_device_provider_update_devices (GstWasapi2DeviceProvider * self) { @@ -342,7 +458,7 @@ GList *new_devices = nullptr; GList *to_add = nullptr; GList *to_remove = nullptr; - GList *iter; + GList *iter, *walk; GST_OBJECT_LOCK (self); prev_devices = g_list_copy_deep (provider->devices, @@ -371,6 +487,70 @@ } } + iter = to_remove; + while (iter) { + auto prev_dev = GST_WASAPI2_DEVICE (iter->data); + + if (!prev_dev->is_default) { + iter = g_list_next (iter); + continue; + } + + walk = to_add; + bool found = false; + while (walk) { + auto new_dev = GST_WASAPI2_DEVICE (walk->data); + + if (!new_dev->is_default || + prev_dev->device_class != new_dev->device_class) { + walk = g_list_next (walk); + continue; + } + + gst_device_provider_device_changed (provider, GST_DEVICE (new_dev), + GST_DEVICE (prev_dev)); + gst_object_unref (new_dev); + to_add = g_list_delete_link (to_add, walk); + found = true; + break; + } + + if (found) { + gst_object_unref (prev_dev); + auto next = iter->next; + to_remove = g_list_delete_link (to_remove, iter); + iter = next; + } else { + iter = g_list_next (iter); + } + } + + if (to_add || to_remove) { + auto dump = gst_wasapi2_dump_devices (prev_devices); + 
if (dump) { + GST_LOG_OBJECT (self, "Previous devices:\n%s", dump); + g_free (dump); + } + + dump = gst_wasapi2_dump_devices (new_devices); + if (dump) { + GST_LOG_OBJECT (self, "Probed devices:\n%s", dump); + g_free (dump); + } + + dump = gst_wasapi2_dump_devices (to_add); + if (dump) { + GST_LOG_OBJECT (self, "New devices:\n%s", dump); + g_free (dump); + } + + dump = gst_wasapi2_dump_devices (to_remove); + if (dump) { + GST_LOG_OBJECT (self, "Removed devices:\n%s", dump); + g_free (dump); + } + } + for (iter = to_remove; iter; iter = g_list_next (iter)) gst_device_provider_device_remove (provider, GST_DEVICE (iter->data));
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/wasapi2/gstwasapi2enumerator.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/wasapi2/gstwasapi2enumerator.cpp
Changed
@@ -28,17 +28,64 @@ #include <wrl.h> #include <functiondiscoverykeys_devpkey.h> #include <string> +#include <atomic> /* *INDENT-OFF* */ using namespace Microsoft::WRL; -GST_DEBUG_CATEGORY_EXTERN (gst_wasapi2_debug); -#define GST_CAT_DEFAULT gst_wasapi2_debug +#ifndef GST_DISABLE_GST_DEBUG +#define GST_CAT_DEFAULT ensure_debug_category() +static GstDebugCategory * +ensure_debug_category (void) +{ + static GstDebugCategory *cat = nullptr; + + GST_WASAPI2_CALL_ONCE_BEGIN { + cat = _gst_debug_category_new ("wasapi2enumerator", 0, "wasapi2enumerator"); + } GST_WASAPI2_CALL_ONCE_END; -static GstStaticCaps template_caps = GST_STATIC_CAPS (GST_WASAPI2_STATIC_CAPS); + return cat; +} +#endif static void gst_wasapi2_on_device_updated (GstWasapi2Enumerator * object); +static std::string +device_state_to_string (DWORD state) +{ + std::string ret; + bool is_first = true; + if ((state & DEVICE_STATE_ACTIVE) == DEVICE_STATE_ACTIVE) { + if (!is_first) + ret += "|"; + ret += "ACTIVE"; + is_first = false; + } + + if ((state & DEVICE_STATE_DISABLED) == DEVICE_STATE_DISABLED) { + if (!is_first) + ret += "|"; + ret += "DISABLED"; + is_first = false; + } + + if ((state & DEVICE_STATE_NOTPRESENT) == DEVICE_STATE_NOTPRESENT) { + if (!is_first) + ret += "|"; + ret += "NOTPRESENT"; + is_first = false; + } + + if ((state & DEVICE_STATE_UNPLUGGED) == DEVICE_STATE_UNPLUGGED) { + if (!is_first) + ret += "|"; + ret += "UNPLUGGED"; + is_first = false; + } + + return ret; +} + /* IMMNotificationClient implementation */ class IWasapi2NotificationClient : public IMMNotificationClient { @@ -77,7 +124,6 @@ STDMETHODIMP_ (ULONG) AddRef (void) { - GST_TRACE ("%p, %d", this, (guint) ref_count_); return InterlockedIncrement (&ref_count_); } @@ -105,6 +151,12 @@ if (!object) return S_OK; + auto id = g_utf16_to_utf8 ((gunichar2 *) device_id, + -1, nullptr, nullptr, nullptr); + auto state = device_state_to_string (new_state); + GST_LOG ("%s, %s (0x%x)", id, state.c_str (), (guint) new_state); + g_free (id); + 
gst_wasapi2_on_device_updated (object); gst_object_unref (object); @@ -118,6 +170,11 @@ if (!object) return S_OK; + auto id = g_utf16_to_utf8 ((gunichar2 *) device_id, + -1, nullptr, nullptr, nullptr); + GST_LOG ("%s", id); + g_free (id); + gst_wasapi2_on_device_updated (object); gst_object_unref (object); @@ -131,6 +188,11 @@ if (!object) return S_OK; + auto id = g_utf16_to_utf8 ((gunichar2 *) device_id, + -1, nullptr, nullptr, nullptr); + GST_LOG ("%s", id); + g_free (id); + gst_wasapi2_on_device_updated (object); gst_object_unref (object); @@ -138,12 +200,19 @@ } STDMETHODIMP - OnDefaultDeviceChanged (EDataFlow flow, ERole role, LPCWSTR default_device_id) + OnDefaultDeviceChanged (EDataFlow flow, ERole role, LPCWSTR device_id) { auto object = (GstWasapi2Enumerator *) g_weak_ref_get (&obj_); if (!object) return S_OK; + auto id = g_utf16_to_utf8 ((gunichar2 *) device_id, + -1, nullptr, nullptr, nullptr); + GST_LOG ("%s, flow: %s, role: %s", id, + gst_wasapi2_data_flow_to_string (flow), + gst_wasapi2_role_to_string (role)); + g_free (id); + gst_wasapi2_on_device_updated (object); gst_object_unref (object); @@ -188,6 +257,20 @@ struct GstWasapi2EnumeratorPrivate { + GstWasapi2EnumeratorPrivate () + { + device_list = g_ptr_array_new_with_free_func ((GDestroyNotify) + gst_wasapi2_enumerator_entry_free); + endpoint_formats = g_ptr_array_new_with_free_func ((GDestroyNotify) + gst_wasapi2_free_wfx); + } + + ~GstWasapi2EnumeratorPrivate () + { + g_ptr_array_unref (device_list); + g_ptr_array_unref (endpoint_formats); + } + ComPtr<IMMDeviceEnumerator> handle; std::mutex lock; std::condition_variable cond; @@ -195,6 +278,9 @@ ComPtr<IMMNotificationClient> client; Wasapi2ActivationHandler *capture_activator = nullptr; Wasapi2ActivationHandler *render_activator = nullptr; + std::atomic<int> notify_count = { 0 }; + GPtrArray *device_list; + GPtrArray *endpoint_formats; void ClearCOM () { @@ -272,12 +358,26 @@ gst_wasapi2_on_device_updated (GstWasapi2Enumerator * object) { /* 
*INDENT-OFF* */ - g_main_context_invoke_full (object->context, G_PRIORITY_DEFAULT, + auto priv = object->priv; + + auto count = priv->notify_count.fetch_add (1); + GST_LOG ("notify count before scheduling %d", count); + + auto source = g_timeout_source_new (100); + g_source_set_callback (source, (gpointer obj) -> gboolean { - g_signal_emit (obj, wasapi2_device_signalsSIGNAL_UPDATED, 0); + auto self = GST_WASAPI2_ENUMERATOR (obj); + auto priv = self->priv; + auto count = priv->notify_count.fetch_sub (1); + GST_LOG ("scheduled notify count %d", count); + if (count == 1) + g_signal_emit (obj, wasapi2_device_signalsSIGNAL_UPDATED, 0); return G_SOURCE_REMOVE; }, gst_object_ref (object), (GDestroyNotify) gst_object_unref); + + g_source_attach (source, object->context); + g_source_unref (source); /* *INDENT-ON* */ } @@ -433,10 +533,7 @@ void gst_wasapi2_enumerator_entry_free (GstWasapi2EnumeratorEntry * entry) { - g_free (entry->device_id); - g_free (entry->device_name); - gst_clear_caps (&entry->caps); - g_free (entry); + delete entry; } /* *INDENT-OFF* */ @@ -458,68 +555,120 @@ }; /* *INDENT-ON* */ +static GstWasapi2EnumeratorEntry * +gst_wasapi2_enumerator_build_entry (GstWasapi2Enumerator * self, + GstCaps * caps, EDataFlow flow, gboolean is_default, + gchar * device_id, gchar * device_name, + gchar * actual_device_id, gchar * actual_device_name, + GstWasapi2DeviceProps * device_props) +{ + auto entry = new GstWasapi2EnumeratorEntry (); + + entry->device_id = device_id; + entry->device_name = device_name; + entry->caps = caps; + entry->flow = flow; + entry->is_default = is_default; + if (actual_device_id) + entry->actual_device_id = actual_device_id; + if (actual_device_name) + entry->actual_device_name = actual_device_name; + + if (device_props) { + entry->device_props.form_factor = device_props->form_factor; + entry->device_props.enumerator_name = device_props->enumerator_name; + } + + GST_LOG_OBJECT (self, "Adding entry %s (%s), flow %d, caps %" GST_PTR_FORMAT, + 
device_id, device_name, flow, caps); + g_free (device_id); + g_free (device_name); + g_free (actual_device_id); + g_free (actual_device_name); + + return entry; +} + static void -gst_wasapi2_enumerator_add_entry (GstWasapi2Enumerator * self, - IAudioClient * client, - GstCaps * static_caps, EDataFlow flow, gboolean is_default, - gchar * device_id, gchar * device_name, GPtrArray * device_list) +gst_wasapi2_enumerator_probe_props (IPropertyStore * store, + GstWasapi2DeviceProps * props) { - WAVEFORMATEX *mix_format = nullptr; - GstCaps *supported_caps = nullptr; + PROPVARIANT var; + PropVariantInit (&var); - client->GetMixFormat (&mix_format); - if (!mix_format) { - g_free (device_id); - g_free (device_name); - return; + auto hr = store->GetValue (PKEY_AudioEndpoint_FormFactor, &var); + if (SUCCEEDED (hr) && var.vt == VT_UI4) + props->form_factor = (EndpointFormFactor) var.ulVal; + + PropVariantClear (&var); + + hr = store->GetValue (PKEY_Device_EnumeratorName, &var); + if (SUCCEEDED (hr) && var.vt == VT_LPWSTR) { + auto name = g_utf16_to_utf8 ((gunichar2 *) var.pwszVal, + -1, nullptr, nullptr, nullptr); + props->enumerator_name = name; + g_free (name); } - gst_wasapi2_util_parse_waveformatex (mix_format, - static_caps, &supported_caps, nullptr); - CoTaskMemFree (mix_format); + PropVariantClear (&var); +} + +static void +get_default_device (GstWasapi2Enumerator * self, EDataFlow flow, + IMMDevice ** device, IPropertyStore ** prop, gchar ** actual_device_id, + gchar ** actual_device_name) +{ + auto priv = self->priv; + ComPtr < IMMDevice > rst_device; + ComPtr < IPropertyStore > rst_prop; + + *actual_device_id = nullptr; + *actual_device_name = nullptr; - if (!supported_caps) { - g_free (device_id); - g_free (device_name); + auto hr = priv->handle->GetDefaultAudioEndpoint (flow, + eConsole, &rst_device); + if (FAILED (hr)) return; - } - auto entry = g_new0 (GstWasapi2EnumeratorEntry, 1); + hr = rst_device->OpenPropertyStore (STGM_READ, &rst_prop); + if (FAILED (hr)) + 
return; - entry->device_id = device_id; - entry->device_name = device_name; - entry->caps = supported_caps; - entry->flow = flow; - entry->is_default = is_default; + LPWSTR wid = nullptr; + hr = rst_device->GetId (&wid); + if (!gst_wasapi2_result (hr)) + return; - GST_LOG_OBJECT (self, "Adding entry %s (%s), flow %d, caps %" GST_PTR_FORMAT, - device_id, device_name, flow, supported_caps); + *actual_device_id = g_utf16_to_utf8 ((gunichar2 *) wid, + -1, nullptr, nullptr, nullptr); + CoTaskMemFree (wid); - g_ptr_array_add (device_list, entry); + PROPVARIANT var; + PropVariantInit (&var); + hr = rst_prop->GetValue (PKEY_Device_FriendlyName, &var); + if (gst_wasapi2_result (hr)) { + *actual_device_name = g_utf16_to_utf8 ((gunichar2 *) var.pwszVal, + -1, nullptr, nullptr, nullptr); + PropVariantClear (&var); + } + + *device = rst_device.Detach (); + *prop = rst_prop.Detach (); + return; } static gboolean -gst_wasapi2_enumerator_enumerate_internal (EnumerateData * data) +gst_wasapi2_enumerator_execute (GstWasapi2Enumerator * self, + IMMDeviceCollection * collection, gboolean ignore_error) { - auto self = data->self; auto priv = self->priv; - ComPtr < IMMDeviceCollection > collection; - auto hr = priv->handle->EnumAudioEndpoints (eAll, DEVICE_STATE_ACTIVE, - &collection); - if (!gst_wasapi2_result (hr)) { - SetEvent (data->event); - return G_SOURCE_REMOVE; - } + GST_DEBUG_OBJECT (self, "Start enumerate"); UINT count = 0; - hr = collection->GetCount (&count); - if (!gst_wasapi2_result (hr) || count == 0) { - SetEvent (data->event); - return G_SOURCE_REMOVE; - } - - auto scaps = gst_static_caps_get (&template_caps); + auto hr = collection->GetCount (&count); + if (!gst_wasapi2_result (hr) || count == 0) + return TRUE; ComPtr < IAudioClient > default_capture_client; ComPtr < IAudioClient > default_render_client; @@ -528,18 +677,86 @@ if (priv->render_activator) priv->render_activator->GetClient (&default_render_client, 10000); + ComPtr < IMMDevice > default_capture_device; + 
ComPtr < IPropertyStore > default_capture_prop; + gchar *default_capture_device_id = nullptr; + gchar *default_capture_device_name = nullptr; + + ComPtr < IMMDevice > default_render_device; + ComPtr < IPropertyStore > default_render_prop; + gchar *default_render_device_id = nullptr; + gchar *default_render_device_name = nullptr; + + get_default_device (self, eCapture, &default_capture_device, + &default_capture_prop, + &default_capture_device_id, &default_capture_device_name); + get_default_device (self, eRender, &default_render_device, + &default_render_prop, + &default_render_device_id, &default_render_device_name); + + if (priv->capture_activator && !default_capture_client && + default_capture_device) { + default_capture_device->Activate (__uuidof (IAudioClient), CLSCTX_ALL, + nullptr, &default_capture_client); + } + + if (priv->render_activator && !default_render_client && default_render_device) { + default_render_device->Activate (__uuidof (IAudioClient), CLSCTX_ALL, + nullptr, &default_render_client); + } + if (default_capture_client) { - gst_wasapi2_enumerator_add_entry (self, default_capture_client.Get (), - scaps, eCapture, TRUE, - g_strdup (gst_wasapi2_get_default_device_id (eCapture)), - g_strdup ("Default Audio Capture Device"), data->device_list); + GstWasapi2DeviceProps props; + props.form_factor = UnknownFormFactor; + props.enumerator_name = "UNKNOWN"; + + if (default_capture_prop) + gst_wasapi2_enumerator_probe_props (default_capture_prop.Get (), &props); + + g_ptr_array_set_size (priv->endpoint_formats, 0); + gst_wasapi2_get_shared_mode_formats (default_capture_client.Get (), + priv->endpoint_formats); + auto caps = gst_wasapi2_wfx_list_to_caps (priv->endpoint_formats); + g_ptr_array_set_size (priv->endpoint_formats, 0); + + if (caps) { + auto entry = gst_wasapi2_enumerator_build_entry (self, + caps, eCapture, TRUE, + g_strdup (gst_wasapi2_get_default_device_id (eCapture)), + g_strdup ("Default Audio Capture Device"), + g_strdup 
(default_capture_device_id), + g_strdup (default_capture_device_name), &props); + + if (entry) + g_ptr_array_add (priv->device_list, entry); + } } if (default_render_client) { - gst_wasapi2_enumerator_add_entry (self, default_render_client.Get (), - scaps, eRender, TRUE, - g_strdup (gst_wasapi2_get_default_device_id (eRender)), - g_strdup ("Default Audio Render Device"), data->device_list); + GstWasapi2DeviceProps props; + props.form_factor = UnknownFormFactor; + props.enumerator_name = "UNKNOWN"; + + if (default_render_prop) + gst_wasapi2_enumerator_probe_props (default_render_prop.Get (), &props); + + g_ptr_array_set_size (priv->endpoint_formats, 0); + gst_wasapi2_get_shared_mode_formats (default_render_client.Get (), + priv->endpoint_formats); + auto caps = gst_wasapi2_wfx_list_to_caps (priv->endpoint_formats); + g_ptr_array_set_size (priv->endpoint_formats, 0); + + if (caps) { + auto entry = gst_wasapi2_enumerator_build_entry (self, + caps, eRender, TRUE, + g_strdup (gst_wasapi2_get_default_device_id (eRender)), + g_strdup ("Default Audio Render Device"), + g_strdup (default_render_device_id), + g_strdup (default_render_device_name), &props); + + if (entry) + g_ptr_array_add (priv->device_list, entry); + } } for (UINT i = 0; i < count; i++) { @@ -547,6 +764,10 @@ ComPtr < IMMEndpoint > endpoint; EDataFlow flow; + GstWasapi2DeviceProps props; + props.form_factor = UnknownFormFactor; + props.enumerator_name = "UNKNOWN"; + hr = collection->Item (i, &device); if (!gst_wasapi2_result (hr)) continue; @@ -588,17 +809,129 @@ ComPtr < IAudioClient > client; hr = device->Activate (__uuidof (IAudioClient), CLSCTX_ALL, nullptr, &client); + if (!gst_wasapi2_result (hr)) { + /* Requested active devices via DEVICE_STATE_ACTIVE but activate fail here. + * That means devices were changed while we were enumerating. 
+ * Need retry here */ + GST_DEBUG_OBJECT (self, "Couldn't activate device %s (%s)", + device_id, desc); g_free (device_id); g_free (desc); - continue; + + if (!ignore_error && hr == AUDCLNT_E_DEVICE_INVALIDATED) + return FALSE; } - gst_wasapi2_enumerator_add_entry (self, client.Get (), scaps, flow, FALSE, - device_id, desc, data->device_list); + gst_wasapi2_enumerator_probe_props (prop.Get (), &props); + + g_ptr_array_set_size (priv->endpoint_formats, 0); + gst_wasapi2_get_shared_mode_formats (client.Get (), priv->endpoint_formats); + auto caps = gst_wasapi2_wfx_list_to_caps (priv->endpoint_formats); + g_ptr_array_set_size (priv->endpoint_formats, 0); + + if (caps) { + auto entry = gst_wasapi2_enumerator_build_entry (self, caps, flow, + FALSE, device_id, desc, nullptr, nullptr, &props); + if (entry) { + g_ptr_array_set_size (priv->endpoint_formats, 0); + gst_wasapi2_get_exclusive_mode_formats (client.Get (), + prop.Get (), priv->endpoint_formats); + auto exclusive_caps = + gst_wasapi2_wfx_list_to_caps (priv->endpoint_formats); + g_ptr_array_set_size (priv->endpoint_formats, 0); + entry->exclusive_caps = exclusive_caps; + + REFERENCE_TIME default_period = 0; + REFERENCE_TIME min_period = 0; + WAVEFORMATEX *mix_format = nullptr; + + hr = client->GetDevicePeriod (&default_period, &min_period); + if (SUCCEEDED (hr)) { + entry->default_device_period_us = default_period / 10; + entry->min_device_period_us = min_period / 10; + } + + client->GetMixFormat (&mix_format); + if (mix_format) { + ComPtr < IAudioClient3 > client3; + hr = client.As (&client3); + if (SUCCEEDED (hr)) { + UINT32 default_period_frame = 0; + UINT32 fundamental_period_frame = 0; + UINT32 min_period_frame = 0; + UINT32 max_period_frame = 0; + + hr = client3->GetSharedModeEnginePeriod (mix_format, + &default_period_frame, &fundamental_period_frame, + &min_period_frame, &max_period_frame); + if (SUCCEEDED (hr)) { + entry->shared_mode_engine_default_period_us = + (default_period_frame * 1000000ULL) / + 
mix_format->nSamplesPerSec; + entry->shared_mode_engine_fundamental_period_us = + (fundamental_period_frame * 1000000ULL) / + mix_format->nSamplesPerSec; + entry->shared_mode_engine_min_period_us = + (min_period_frame * 1000000ULL) / mix_format->nSamplesPerSec; + entry->shared_mode_engine_max_period_us = + (max_period_frame * 1000000ULL) / mix_format->nSamplesPerSec; + } + } + + CoTaskMemFree (mix_format); + } + + g_ptr_array_add (priv->device_list, entry); + } + } + } + + g_free (default_capture_device_id); + g_free (default_capture_device_name); + g_free (default_render_device_id); + g_free (default_render_device_name); + + return TRUE; +} + +static gboolean +gst_wasapi2_enumerator_enumerate_internal (EnumerateData * data) +{ + auto self = data->self; + auto priv = self->priv; + /* Upto 3 times retry */ + const guint num_retry = 5; + + for (guint i = 0; i < num_retry; i++) { + ComPtr < IMMDeviceCollection > collection; + gboolean is_last = FALSE; + + if (i + 1 == num_retry) + is_last = TRUE; + + g_ptr_array_set_size (priv->device_list, 0); + + auto hr = priv->handle->EnumAudioEndpoints (eAll, DEVICE_STATE_ACTIVE, + &collection); + if (!gst_wasapi2_result (hr)) { + SetEvent (data->event); + return G_SOURCE_REMOVE; + } + + if (gst_wasapi2_enumerator_execute (self, collection.Get (), is_last)) + break; + + if (!is_last) { + GST_DEBUG_OBJECT (self, "Sleep for retrying"); + Sleep (50); + } } - gst_caps_unref (scaps); + while (priv->device_list->len > 0) { + g_ptr_array_add (data->device_list, + g_ptr_array_steal_index (priv->device_list, 0)); + } SetEvent (data->event); return G_SOURCE_REMOVE; @@ -618,3 +951,31 @@ WaitForSingleObject (data.event, INFINITE); } + +const gchar * +gst_wasapi2_form_factor_to_string (EndpointFormFactor form_factor) +{ + switch (form_factor) { + case RemoteNetworkDevice: + return "RemoteNetworkDevice"; + case Speakers: + return "Speakers"; + case LineLevel: + return "LineLevel"; + case Microphone: + return "Microphone"; + case Headset: + 
return "Headset"; + case Handset: + return "Handset"; + case UnknownDigitalPassthrough: + return "UnknownDigitalPassthrough"; + case SPDIF: + return "SPDIF"; + case DigitalAudioDisplayDevice: + return "DigitalAudioDisplayDevice"; + case UnknownFormFactor: + default: + return "UnknownFormFactor"; + } +}
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/wasapi2/gstwasapi2enumerator.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/wasapi2/gstwasapi2enumerator.h
Changed
@@ -21,6 +21,7 @@ #include <gst/gst.h> #include "gstwasapi2util.h" +#include <string> G_BEGIN_DECLS @@ -28,14 +29,40 @@ G_DECLARE_FINAL_TYPE (GstWasapi2Enumerator, gst_wasapi2_enumerator, GST, WASAPI2_ENUMERATOR, GstObject); -typedef struct _GstWasapi2EnumeratorEntry +G_END_DECLS + +struct GstWasapi2DeviceProps +{ + EndpointFormFactor form_factor; + std::string enumerator_name; +}; + +struct GstWasapi2EnumeratorEntry { - gchar *device_id; - gchar *device_name; - gboolean is_default; - GstCaps *caps; + ~GstWasapi2EnumeratorEntry() + { + gst_clear_caps (&caps); + gst_clear_caps (&exclusive_caps); + } + + std::string device_id; + std::string device_name; + std::string actual_device_id; + std::string actual_device_name; + gboolean is_default = FALSE; + GstCaps *caps = nullptr; + GstCaps *exclusive_caps = nullptr; EDataFlow flow; -} GstWasapi2EnumeratorEntry; + GstWasapi2DeviceProps device_props = { }; + + gint64 shared_mode_engine_default_period_us = 0; + gint64 shared_mode_engine_fundamental_period_us = 0; + gint64 shared_mode_engine_min_period_us = 0; + gint64 shared_mode_engine_max_period_us = 0; + + gint64 default_device_period_us = 0; + gint64 min_device_period_us = 0; +}; GstWasapi2Enumerator * gst_wasapi2_enumerator_new (void); @@ -47,5 +74,5 @@ void gst_wasapi2_enumerator_enumerate_devices (GstWasapi2Enumerator * object, GPtrArray * entry); -G_END_DECLS +const gchar * gst_wasapi2_form_factor_to_string (EndpointFormFactor form_factor);
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/wasapi2/gstwasapi2rbuf.cpp
Added
@@ -0,0 +1,3234 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +/* + * This module implements a GstAudioRingBuffer subclass using + * Windows Audio Session API (WASAPI). + * + * Major Components: + * + * - RbufCtx: Encapsulates WASAPI objects such as IAudioClient, + * IAudioRenderClient/IAudioCaptureClient, volume/mute interfaces, and events. + * + * - Wasapi2DeviceManager: Handles IMMDevice activation and RbufCtx creation + * in a dedicated COM thread. This avoids blocking the main I/O thread. + * + * - CommandData and command queue: All user-triggered operations (open, start, + * stop, volume changes, etc.) are serialized through a command queue. + * + * - gst_wasapi2_rbuf_loop_thread: The main loop that processes WASAPI I/O events + * and executes queued commands. + * + * Design Highlights: + * + * 1) The Wasapi2DeviceManager and GstWasapi2Rbuf classes are decoupled to manage + * device initialization efficiently. Creating and initializing an IAudioClient + * can take significant time due to format negotiation or endpoint activation. 
+ * + * - During a normal open/start sequence, the main I/O thread (gst_wasapi2_rbuf_loop_thread) + * synchronously waits for Wasapi2DeviceManager to finish device activation and + * RbufCtx creation before proceeding. + * + * - In contrast, when a device is already open and a dynamic device change + * is requested, device creation is delegated to Wasapi2DeviceManager + * asynchronously in the background. Once initialization succeeds, + * newly created RbufCtx is returned back to the I/O thread via the + * command queue and swapped in without interrupting ongoing I/O. + * + * This separation allows for seamless device transitions without blocking audio streaming. + * + * 2) All user-triggered events (such as open, close, start, stop, volume/mute changes) + * are serialized through a command queue and processed exclusively by the main I/O thread. + * This ensures thread-safe and ordered execution of state changes, avoiding race conditions. + */ + +#include "gstwasapi2rbuf.h" +#include "gstwasapi2activator.h" +#include <endpointvolume.h> +#include <memory> +#include <atomic> +#include <vector> +#include <mutex> +#include <condition_variable> +#include <wrl.h> +#include <string> +#include <string.h> +#include <queue> +#include <avrt.h> + +#if defined(__SSE2__) || (defined(_MSC_VER) && (defined(_M_X64) || (_M_IX86_FP >= 2))) +#include <emmintrin.h> +#define GST_WASAPI2_HAVE_SSE2 +#endif + +GST_DEBUG_CATEGORY_STATIC (gst_wasapi2_rbuf_debug); +#define GST_CAT_DEFAULT gst_wasapi2_rbuf_debug + +/* Defined for _WIN32_WINNT >= _NT_TARGET_VERSION_WIN10_RS4 */ +#ifndef CREATE_WAITABLE_TIMER_HIGH_RESOLUTION +#define CREATE_WAITABLE_TIMER_HIGH_RESOLUTION 0x00000002 +#endif + +/* *INDENT-OFF* */ +using namespace Microsoft::WRL; + +static gpointer device_manager_com_thread (gpointer manager); + +struct RbufCtx +{ + RbufCtx () = delete; + RbufCtx (const std::string & id) : device_id (id) + { + capture_event = CreateEvent (nullptr, FALSE, FALSE, nullptr); + render_event = 
CreateEvent (nullptr, FALSE, FALSE, nullptr); + formats = g_ptr_array_new_with_free_func ((GDestroyNotify) + gst_wasapi2_free_wfx); + } + + ~RbufCtx () + { + Stop (); + + if (volume_callback && endpoint_volume) + endpoint_volume->UnregisterControlChangeNotify (volume_callback.Get ()); + + if (mix_format) + CoTaskMemFree (mix_format); + gst_clear_caps (&caps); + gst_clear_caps (&supported_caps); + + if (conv) + gst_audio_converter_free (conv); + + CloseHandle (capture_event); + CloseHandle (render_event); + + g_ptr_array_unref (formats); + } + + HRESULT Start () + { + if (running) + return S_OK; + + auto hr = client->Start (); + if (!gst_wasapi2_result (hr)) + return hr; + + if (dummy_client) { + hr = dummy_client->Start (); + if (!gst_wasapi2_result (hr)) { + client->Stop (); + client->Reset (); + + return hr; + } + } + + running = true; + + return S_OK; + } + + HRESULT Stop () + { + HRESULT hr = S_OK; + if (client) { + hr = client->Stop (); + if (gst_wasapi2_result (hr)) + client->Reset (); + } + + if (dummy_client) { + auto dummy_hr = dummy_client->Stop (); + if (gst_wasapi2_result (dummy_hr)) + dummy_client->Reset (); + } + + running = false; + + return hr; + } + + HRESULT SetVolume (float vol) + { + if (!stream_volume) + return S_OK; + + UINT32 count = 0; + auto hr = stream_volume->GetChannelCount (&count); + if (!gst_wasapi2_result (hr) || count == 0) + return hr; + + volumes.resize (count); + + for (size_t i = 0; i < volumes.size (); i++) + volumesi = vol; + + return stream_volume->SetAllVolumes ((UINT32) volumes.size (), + volumes.data ()); + } + + BOOL IsEndpointMuted () + { + return endpoint_muted.load (std::memory_order_acquire); + } + + GstWasapi2EndpointClass endpoint_class; + ComPtr<IMMDevice> device; + ComPtr<IAudioClient> client; + ComPtr<IAudioClient> dummy_client; + ComPtr<IAudioCaptureClient> capture_client; + ComPtr<IAudioRenderClient> render_client; + ComPtr<IAudioStreamVolume> stream_volume; + ComPtr<IAudioEndpointVolume> endpoint_volume; + 
ComPtr<IAudioEndpointVolumeCallback> volume_callback; + std::string device_id; + std::vector<float> volumes; + std::atomic<bool> endpoint_muted = { false }; + HANDLE capture_event; + HANDLE render_event; + GstCaps *caps = nullptr; + GstCaps *supported_caps = nullptr; + WAVEFORMATEX *mix_format = nullptr; + std::vector<guint8> exclusive_staging; + size_t exclusive_staging_filled = 0; + size_t exclusive_period_bytes = 0; + GstAudioInfo device_info; + GstAudioInfo host_info; + std::vector<guint8> device_fifo; + std::vector<guint8> host_fifo; + size_t device_fifo_bytes = 0; + size_t host_fifo_bytes = 0; + GstAudioConverter *conv = nullptr; + GPtrArray *formats = nullptr; + + UINT32 period = 0; + UINT32 client_buf_size = 0; + UINT32 dummy_buf_size = 0; + bool is_default = false; + bool running = false; + bool error_posted = false; + bool is_exclusive = false; + bool is_s24in32 = false; + bool init_done = false; + bool low_latency = false; + gint64 latency_time = 0; + gint64 buffer_time = 0; +}; + +typedef std::shared_ptr<RbufCtx> RbufCtxPtr; + +enum class CommandType +{ + Shutdown, + SetDevice, + UpdateDevice, + Open, + Close, + Acquire, + Release, + Start, + Stop, + GetCaps, + UpdateVolume, +}; + +static inline const gchar * +command_type_to_string (CommandType type) +{ + switch (type) { + case CommandType::Shutdown: + return "Shutdown"; + case CommandType::SetDevice: + return "SetDevice"; + case CommandType::UpdateDevice: + return "UpdateDevice"; + case CommandType::Open: + return "Open"; + case CommandType::Close: + return "Close"; + case CommandType::Acquire: + return "Acquire"; + case CommandType::Release: + return "Release"; + case CommandType::Start: + return "Start"; + case CommandType::Stop: + return "Stop"; + case CommandType::GetCaps: + return "GetCaps"; + case CommandType::UpdateVolume: + return "UpdateVolume"; + default: + return "Unknown"; + } +} + +struct CommandData +{ + CommandData (const CommandData &) = delete; + CommandData& operator= (const 
CommandData &) = delete; + CommandData () = delete; + CommandData (CommandType ctype) : type (ctype) + { + event_handle = CreateEvent (nullptr, FALSE, FALSE, nullptr); + } + + virtual ~CommandData () + { + CloseHandle (event_handle); + } + + CommandType type; + + HRESULT hr = S_OK; + HANDLE event_handle; +}; + +struct CommandSetDevice : public CommandData +{ + CommandSetDevice () : CommandData (CommandType::SetDevice) {} + + std::string device_id; + GstWasapi2EndpointClass endpoint_class; + guint pid = 0; + gboolean low_latency = FALSE; + gboolean exclusive = FALSE; +}; + +struct CommandUpdateDevice : public CommandData +{ + CommandUpdateDevice (const std::string & id) + : CommandData (CommandType::UpdateDevice), device_id (id) {} + std::shared_ptr<RbufCtx> ctx; + std::string device_id; +}; + +struct CommandGetCaps : public CommandData +{ + CommandGetCaps () : CommandData (CommandType::GetCaps) { } + + GstCaps *caps = nullptr; +}; + +struct CommandAcquire : public CommandData +{ + CommandAcquire (GstAudioRingBufferSpec * s) : + CommandData (CommandType::Acquire), spec (s) {} + + GstAudioRingBufferSpec *spec = nullptr; +}; + +static void gst_wasapi2_rbuf_push_command (GstWasapi2Rbuf * self, + std::shared_ptr<CommandData> cmd); + + +DEFINE_GUID (IID_Wasapi2EndpointVolumeCallback, 0x21ba991f, 0x4d78, + 0x418c, 0xa1, 0xea, 0x8a, 0xc7, 0xdd, 0xa2, 0xdc, 0x39); +class Wasapi2EndpointVolumeCallback : public IAudioEndpointVolumeCallback +{ +public: + static void CreateInstance (IAudioEndpointVolumeCallback ** iface, + RbufCtxPtr & ctx) + { + auto self = new Wasapi2EndpointVolumeCallback (); + self->ctx_ = ctx; + *iface = static_cast<IAudioEndpointVolumeCallback *>( + static_cast<Wasapi2EndpointVolumeCallback*>(self)); + } + + STDMETHODIMP_ (ULONG) + AddRef (void) + { + return InterlockedIncrement (&refcount_); + } + + STDMETHODIMP_ (ULONG) + Release (void) + { + ULONG ref_count; + + ref_count = InterlockedDecrement (&refcount_); + + if (ref_count == 0) + delete this; + + 
return ref_count; + } + + STDMETHODIMP + QueryInterface (REFIID riid, void ** object) + { + if (riid == __uuidof(IUnknown) || riid == __uuidof(IAgileObject)) { + *object = static_cast<IUnknown *>( + static_cast<Wasapi2EndpointVolumeCallback*>(this)); + } else if (riid == __uuidof(IAudioEndpointVolumeCallback)) { + *object = static_cast<IAudioEndpointVolumeCallback *>( + static_cast<Wasapi2EndpointVolumeCallback*>(this)); + } else if (riid == IID_Wasapi2EndpointVolumeCallback) { + *object = static_cast<Wasapi2EndpointVolumeCallback *> (this); + } else { + *object = nullptr; + return E_NOINTERFACE; + } + + AddRef (); + + return S_OK; + } + + STDMETHODIMP + OnNotify (AUDIO_VOLUME_NOTIFICATION_DATA * notify) + { + auto ctx = ctx_.lock (); + if (!ctx) + return S_OK; + + ctx->endpoint_muted.store (notify->bMuted, std::memory_order_release); + + return S_OK; + } + +private: + Wasapi2EndpointVolumeCallback () {} + virtual ~Wasapi2EndpointVolumeCallback () {} + +private: + ULONG refcount_ = 1; + std::weak_ptr<RbufCtx> ctx_; +}; + +struct RbufCtxDesc +{ + RbufCtxDesc () + { + event_handle = CreateEvent (nullptr, FALSE, FALSE, nullptr); + } + + ~RbufCtxDesc () + { + CloseHandle (event_handle); + } + + GstWasapi2Rbuf *rbuf = nullptr; + GstWasapi2EndpointClass endpoint_class; + std::string device_id; + guint pid; + RbufCtxPtr ctx; + gint64 buffer_time; + gint64 latency_time; + WAVEFORMATEX *mix_format = nullptr; + gboolean low_latency = FALSE; + gboolean exclusive = FALSE; + HANDLE event_handle; +}; + +static gboolean +is_equal_device_id (const gchar * a, const gchar * b) +{ + auto len_a = strlen (a); + auto len_b = strlen (b); + + if (len_a != len_b) + return FALSE; + +#ifdef _MSC_VER + return _strnicmp (a, b, len_a) == 0; +#else + return strncasecmp (a, b, len_a) == 0; +#endif +} + +static HRESULT +initialize_audio_client_exclusive (IMMDevice * device, + ComPtr<IAudioClient> & client, WAVEFORMATEX * wfx, guint * period, + bool low_latency, gint64 latency_time) +{ + /* Format 
must be validated by caller */ + auto hr = client->IsFormatSupported (AUDCLNT_SHAREMODE_EXCLUSIVE, + wfx, nullptr); + if (hr != S_OK) + return E_FAIL; + + REFERENCE_TIME min_hns = 0; + REFERENCE_TIME max_hns = 0; + REFERENCE_TIME default_period = 0; + REFERENCE_TIME min_hns_period = 0; + + { + ComPtr<IAudioClient2> client2; + hr = client->QueryInterface (IID_PPV_ARGS (&client2)); + if (SUCCEEDED (hr)) { + hr = client2->GetBufferSizeLimits (wfx, TRUE, &min_hns, &max_hns); + if (FAILED (hr) || min_hns == 0 || max_hns == 0) { + min_hns = 0; + max_hns = 0; + } else { + auto min_gst = static_cast <GstClockTime> (min_hns) * 100; + auto max_gst = static_cast <GstClockTime> (max_hns) * 100; + GST_DEBUG ("GetBufferSizeLimits - min: %" GST_TIME_FORMAT ", max: %" + GST_TIME_FORMAT, GST_TIME_ARGS (min_gst), GST_TIME_ARGS (max_gst)); + } + } + } + + hr = client->GetDevicePeriod (&default_period, &min_hns_period); + if (!gst_wasapi2_result (hr)) + return hr; + + auto min_gst = static_cast <GstClockTime> (min_hns_period) * 100; + auto default_gst = static_cast <GstClockTime> (default_period) * 100; + GST_DEBUG ("GetDevicePeriod - default: %" GST_TIME_FORMAT ", min: %" + GST_TIME_FORMAT, GST_TIME_ARGS (default_gst), GST_TIME_ARGS (min_gst)); + + min_hns = MAX (min_hns, min_hns_period); + + if (max_hns == 0) + max_hns = default_period; + + REFERENCE_TIME target = min_hns; + if (!low_latency && latency_time > 0) + target = latency_time * 10; + + if (target < min_hns) + target = min_hns; + if (target > max_hns) + target = max_hns; + + DWORD flags = AUDCLNT_STREAMFLAGS_EVENTCALLBACK | + AUDCLNT_STREAMFLAGS_NOPERSIST ; + + hr = client->Initialize (AUDCLNT_SHAREMODE_EXCLUSIVE, flags, + target, target, wfx, nullptr); + if (hr == AUDCLNT_E_BUFFER_SIZE_NOT_ALIGNED) { + UINT32 buffer_size = 0; + + GST_DEBUG ("Buffer size not aligned, opening device again"); + + hr = client->GetBufferSize (&buffer_size); + if (!gst_wasapi2_result (hr) || buffer_size == 0) + return E_FAIL; + + client.Reset 
(); + hr = device->Activate (__uuidof (IAudioClient), CLSCTX_ALL, nullptr, + &client); + if (!gst_wasapi2_result (hr)) + return hr; + + target = (GST_SECOND / 100) * buffer_size / wfx->nSamplesPerSec; + hr = client->Initialize (AUDCLNT_SHAREMODE_EXCLUSIVE, + flags, target, target, wfx, nullptr); + } + + if (!gst_wasapi2_result (hr)) + return hr; + + UINT32 buffer_size = 0; + hr = client->GetBufferSize (&buffer_size); + if (!gst_wasapi2_result (hr) || buffer_size == 0) { + client.Reset (); + return E_FAIL; + } + + GST_DEBUG ("Configured exclusive mode period: %d frames", buffer_size); + + if (period) + *period = buffer_size; + + GST_DEBUG ("Opened in exclusive mode"); + + return S_OK; +} + +static HRESULT +initialize_audio_client (IAudioClient * client_handle, + WAVEFORMATEX * mix_format, guint * period, + DWORD extra_flags, GstWasapi2EndpointClass device_class, + bool low_latency, gint64 latency_time, gint64 buffer_time) +{ + REFERENCE_TIME default_period, min_period; + DWORD stream_flags = + AUDCLNT_STREAMFLAGS_EVENTCALLBACK | AUDCLNT_STREAMFLAGS_NOPERSIST; + HRESULT hr; + REFERENCE_TIME buf_dur = 0; + + stream_flags |= extra_flags; + + if (!gst_wasapi2_is_process_loopback_class (device_class)) { + hr = client_handle->GetDevicePeriod (&default_period, &min_period); + if (!gst_wasapi2_result (hr)) { + GST_WARNING ("Couldn't get device period info"); + return hr; + } + + GST_INFO ("wasapi2 default period: %" G_GINT64_FORMAT + ", min period: %" G_GINT64_FORMAT, default_period, min_period); + + /* https://learn.microsoft.com/en-us/windows/win32/api/audioclient/nf-audioclient-iaudioclient-initialize + * For a shared-mode stream that uses event-driven buffering, + * the caller must set both hnsPeriodicity and hnsBufferDuration to 0 + * + * The above MS documentation does not seem to be correct. By setting + * zero hnsBufferDuration, we can use audio engine determined buffer size + * but it seems to cause glitches depending on the device. 
Calculate buffer size + * like wasapi plugin does. Note that MS example code uses non-zero + * buffer duration for event-driven shared-mode case as well. + */ + if (low_latency && latency_time > 0 && buffer_time > 0) { + /* Ensure that the period (latency_time) used is an integral multiple of + * either the default period or the minimum period */ + guint64 factor = (latency_time * 10) / default_period; + REFERENCE_TIME period = default_period * MAX (factor, 1); + + buf_dur = buffer_time * 10; + if (buf_dur < 2 * period) + buf_dur = 2 * period; + } + + hr = client_handle->Initialize (AUDCLNT_SHAREMODE_SHARED, stream_flags, + buf_dur, + /* This must always be 0 in shared mode */ + 0, mix_format, nullptr); + } else { + /* XXX: virtual device will not report device period. + * Use hardcoded period 20ms, same as Microsoft sample code + * https://github.com/microsoft/windows-classic-samples/tree/main/Samples/ApplicationLoopback + */ + default_period = (20 * GST_MSECOND) / 100; + hr = client_handle->Initialize (AUDCLNT_SHAREMODE_SHARED, + AUDCLNT_STREAMFLAGS_LOOPBACK | AUDCLNT_STREAMFLAGS_EVENTCALLBACK | + AUDCLNT_STREAMFLAGS_AUTOCONVERTPCM, + default_period, 0, mix_format, nullptr); + } + + if (!gst_wasapi2_result (hr)) { + GST_WARNING ("Couldn't initialize audioclient"); + return hr; + } + + if (period) { + *period = gst_util_uint64_scale_round (default_period * 100, + mix_format->nSamplesPerSec, GST_SECOND); + } + + return S_OK; +} + +static gboolean +gst_wasapi2_rbuf_ctx_init (RbufCtxPtr & ctx, WAVEFORMATEX * selected_format) +{ + if (ctx->init_done) { + GST_DEBUG ("Already initialized"); + return TRUE; + } + + if (!selected_format) { + GST_ERROR ("No selected format"); + return FALSE; + } + + HRESULT hr; + if (ctx->is_exclusive) { + bool need_format_conv = false; + /* Try current format */ + hr = ctx->client->IsFormatSupported (AUDCLNT_SHAREMODE_EXCLUSIVE, + selected_format, nullptr); + if (hr == S_OK) { + ctx->mix_format = gst_wasapi2_copy_wfx (selected_format); + } 
else { + /* Use closest format */ + gst_wasapi2_sort_wfx (ctx->formats, selected_format);; + + auto format = (WAVEFORMATEX *) g_ptr_array_index (ctx->formats, 0); + + GstCaps *old_caps = nullptr; + GstCaps *new_caps = nullptr; + + gst_wasapi2_util_parse_waveformatex (selected_format, + &old_caps, nullptr); + gst_wasapi2_util_parse_waveformatex (format, + &new_caps, nullptr); + + if (!new_caps || !old_caps) { + GST_ERROR ("Couldn't get caps from format"); + gst_clear_caps (&new_caps); + gst_clear_caps (&old_caps); + return FALSE; + } + + if (!gst_caps_is_equal (new_caps, old_caps)) { + GST_INFO ("Closest caps is different, old: %" GST_PTR_FORMAT + ", new : %" GST_PTR_FORMAT, old_caps, new_caps); + need_format_conv = true; + gst_audio_info_from_caps (&ctx->host_info, old_caps); + } + + gst_caps_unref (new_caps); + gst_caps_unref (old_caps); + + ctx->mix_format = gst_wasapi2_copy_wfx (format); + } + + gst_wasapi2_util_parse_waveformatex (ctx->mix_format, &ctx->caps, nullptr); + gst_audio_info_from_caps (&ctx->device_info, ctx->caps); + if (!need_format_conv) + ctx->host_info = ctx->device_info; + + hr = initialize_audio_client_exclusive (ctx->device.Get (), ctx->client, + ctx->mix_format, &ctx->period, ctx->low_latency, ctx->latency_time); + if (FAILED (hr)) { + ctx->is_exclusive = false; + ctx->client = nullptr; + gst_wasapi2_clear_wfx (&ctx->mix_format); + gst_clear_caps (&ctx->caps); + + hr = ctx->device->Activate (__uuidof (IAudioClient), CLSCTX_ALL, + nullptr, &ctx->client); + if (!gst_wasapi2_result (hr)) { + GST_WARNING ("Couldn't get IAudioClient from IMMDevice"); + return FALSE; + } + } else if (need_format_conv) { + GstAudioInfo *in_info, *out_info; + if (ctx->endpoint_class == GST_WASAPI2_ENDPOINT_CLASS_CAPTURE) { + in_info = &ctx->device_info; + out_info = &ctx->host_info; + } else { + in_info = &ctx->host_info; + out_info = &ctx->device_info; + } + + auto config = gst_structure_new_static_str ("converter-config", + GST_AUDIO_CONVERTER_OPT_DITHER_METHOD, 
GST_TYPE_AUDIO_DITHER_METHOD, + GST_AUDIO_DITHER_TPDF, + GST_AUDIO_CONVERTER_OPT_RESAMPLER_METHOD, + GST_TYPE_AUDIO_RESAMPLER_METHOD, GST_AUDIO_RESAMPLER_METHOD_KAISER, + nullptr); + + gst_audio_resampler_options_set_quality (GST_AUDIO_RESAMPLER_METHOD_KAISER, + GST_AUDIO_RESAMPLER_QUALITY_DEFAULT, GST_AUDIO_INFO_RATE (in_info), + GST_AUDIO_INFO_RATE (out_info), config); + + ctx->conv = gst_audio_converter_new (GST_AUDIO_CONVERTER_FLAG_NONE, + in_info, out_info, config); + if (!ctx->conv) { + GST_ERROR ("Couldn't create converter"); + ctx->is_exclusive = false; + ctx->client = nullptr; + gst_wasapi2_clear_wfx (&ctx->mix_format); + gst_clear_caps (&ctx->caps); + + hr = ctx->device->Activate (__uuidof (IAudioClient), CLSCTX_ALL, + nullptr, &ctx->client); + if (!gst_wasapi2_result (hr)) { + GST_WARNING ("Couldn't get IAudioClient from IMMDevice"); + return FALSE; + } + } else { + GST_INFO ("converter configured"); + } + } + } + + if (!ctx->is_exclusive) { + DWORD stream_flags = 0; + /* Check format support */ + WAVEFORMATEX *closest = nullptr; + hr = ctx->client->IsFormatSupported (AUDCLNT_SHAREMODE_SHARED, + selected_format, &closest); + if (hr == S_OK) { + ctx->mix_format = gst_wasapi2_copy_wfx (selected_format); + /* format supported */ + } else if (hr == S_FALSE) { + if (!closest) { + GST_ERROR ("Couldn't get closest format"); + return FALSE; + } + + GstCaps *old_caps = nullptr; + GstCaps *new_caps = nullptr; + + gst_wasapi2_util_parse_waveformatex (selected_format, + &old_caps, nullptr); + gst_wasapi2_util_parse_waveformatex (closest, + &new_caps, nullptr); + + if (!new_caps || !old_caps) { + GST_ERROR ("Couldn't get caps from format"); + gst_clear_caps (&new_caps); + gst_clear_caps (&old_caps); + CoTaskMemFree (closest); + return FALSE; + } + + if (!gst_caps_is_equal (new_caps, old_caps)) { + GST_INFO ("Closest caps is different, old: %" GST_PTR_FORMAT + ", new : %" GST_PTR_FORMAT, old_caps, new_caps); + /* Hope OS mixer can convert the format */ + 
gst_caps_unref (new_caps); + gst_caps_unref (old_caps); + CoTaskMemFree (closest); + ctx->mix_format = gst_wasapi2_copy_wfx (selected_format); + stream_flags = AUDCLNT_STREAMFLAGS_AUTOCONVERTPCM | + AUDCLNT_STREAMFLAGS_SRC_DEFAULT_QUALITY; + } else { + gst_caps_unref (new_caps); + gst_caps_unref (old_caps); + + ctx->mix_format = closest; + } + } else { + ctx->mix_format = gst_wasapi2_copy_wfx (selected_format); + stream_flags = AUDCLNT_STREAMFLAGS_AUTOCONVERTPCM | + AUDCLNT_STREAMFLAGS_SRC_DEFAULT_QUALITY; + } + + gst_wasapi2_util_parse_waveformatex (ctx->mix_format, &ctx->caps, nullptr); + + bool client3_init_done = false; + if (!gst_wasapi2_is_loopback_class (ctx->endpoint_class) && + !gst_wasapi2_is_process_loopback_class (ctx->endpoint_class)) { + /* Use IAudioClient3::InitializeSharedAudioStream if + * - low-latency is requested + * - device actually supports shared-mode low-latency streaming + * (i.e., min-period < default-period) and user requested latency-time + * is smaller than default-period */ + ComPtr<IAudioClient3> client3; + hr = ctx->client.As (&client3); + if (SUCCEEDED (hr)) { + UINT32 default_period, fundamental_period, min_period, max_period; + hr = client3->GetSharedModeEnginePeriod (ctx->mix_format, + &default_period, &fundamental_period, &min_period, &max_period); + + if (SUCCEEDED (hr)) { + UINT32 target_period_frames = 0; + UINT32 latency_time_frames = + static_cast<UINT32>(ctx->latency_time * + ctx->mix_format->nSamplesPerSec / 1000000.0); + if (ctx->low_latency) { + target_period_frames = min_period; + } else if (min_period < default_period && + latency_time_frames < default_period) { + UINT32 cand = MAX (min_period, latency_time_frames); + if (fundamental_period > 0) { + /* period should be multiple of fundamental period */ + cand = ((cand + fundamental_period - 1) / fundamental_period) + * fundamental_period; + } + + cand = MAX (cand, min_period); + cand = MIN (cand, max_period); + + /* Use audioclient3 only if calculated target period 
is + * smaller than default period */ + if (cand < default_period) + target_period_frames = cand; + } + + if (target_period_frames > 0) { + DWORD flags = stream_flags | + AUDCLNT_STREAMFLAGS_EVENTCALLBACK; + hr = client3->InitializeSharedAudioStream (flags, + target_period_frames, ctx->mix_format, nullptr); + if (SUCCEEDED (hr)) { + GST_INFO ("Using IAudioClient3, default period %d frames, " + "fundamental period %d frames, minimum period %d frames, " + "maximum period %d frames, requested latency time %d frames, " + "target period %d frames", default_period, + fundamental_period, min_period, max_period, latency_time_frames, + target_period_frames); + client3_init_done = true; + ctx->period = target_period_frames; + } + } + } + } + } + + if (!client3_init_done) { + DWORD extra_flags = stream_flags; + if (gst_wasapi2_is_loopback_class (ctx->endpoint_class)) + extra_flags = AUDCLNT_STREAMFLAGS_LOOPBACK; + + hr = initialize_audio_client (ctx->client.Get (), ctx->mix_format, + &ctx->period, extra_flags, ctx->endpoint_class, ctx->low_latency, + ctx->latency_time, ctx->buffer_time); + } + + if (FAILED (hr)) + return FALSE; + } + + if (ctx->endpoint_class == GST_WASAPI2_ENDPOINT_CLASS_RENDER) { + hr = ctx->client->SetEventHandle (ctx->render_event); + if (!gst_wasapi2_result (hr)) { + GST_ERROR ("Couldn't set event handle"); + return FALSE; + } + + hr = ctx->client->GetService (IID_PPV_ARGS (&ctx->render_client)); + if (!gst_wasapi2_result (hr)) { + GST_ERROR ("Couldn't get render client handle"); + return FALSE; + } + } else { + hr = ctx->client->SetEventHandle (ctx->capture_event); + if (!gst_wasapi2_result (hr)) { + GST_ERROR ("Couldn't set event handle"); + return FALSE; + } + + hr = ctx->client->GetService (IID_PPV_ARGS (&ctx->capture_client)); + if (!gst_wasapi2_result (hr)) { + GST_ERROR ("Couldn't get capture client handle"); + return FALSE; + } + } + + if (!ctx->is_exclusive) { + hr = ctx->client->GetService (IID_PPV_ARGS (&ctx->stream_volume)); + if 
(!gst_wasapi2_result (hr)) + GST_WARNING ("Couldn't get ISimpleAudioVolume interface"); + } + + hr = ctx->client->GetBufferSize (&ctx->client_buf_size); + if (!gst_wasapi2_result (hr)) { + GST_ERROR ("Couldn't get buffer size"); + return FALSE; + } + + /* Activate silence feed client */ + if (ctx->dummy_client) { + WAVEFORMATEX *mix_format = nullptr; + hr = ctx->dummy_client->GetMixFormat (&mix_format); + if (!gst_wasapi2_result (hr)) { + GST_ERROR ("Couldn't get mix format"); + return FALSE; + } + + hr = initialize_audio_client (ctx->dummy_client.Get (), mix_format, nullptr, + 0, GST_WASAPI2_ENDPOINT_CLASS_RENDER, false, 0, 0); + CoTaskMemFree (mix_format); + + if (!gst_wasapi2_result (hr)) { + GST_ERROR ("Couldn't initialize dummy client"); + return FALSE; + } + + hr = ctx->dummy_client->SetEventHandle (ctx->render_event); + if (!gst_wasapi2_result (hr)) { + GST_ERROR ("Couldn't set event handle"); + return FALSE; + } + + hr = ctx->dummy_client->GetBufferSize (&ctx->dummy_buf_size); + if (!gst_wasapi2_result (hr)) { + GST_ERROR ("Couldn't get buffer size"); + return FALSE; + } + + hr = ctx->dummy_client->GetService (IID_PPV_ARGS (&ctx->render_client)); + if (!gst_wasapi2_result (hr)) { + GST_ERROR ("Couldn't get render client"); + return FALSE; + } + + if (ctx->device) { + hr = ctx->device->Activate (__uuidof (IAudioEndpointVolume), + CLSCTX_ALL, nullptr, &ctx->endpoint_volume); + if (gst_wasapi2_result (hr)) { + Wasapi2EndpointVolumeCallback::CreateInstance (&ctx->volume_callback, + ctx); + + hr = ctx->endpoint_volume->RegisterControlChangeNotify ( + ctx->volume_callback.Get ()); + if (!gst_wasapi2_result (hr)) { + ctx->volume_callback = nullptr; + } else { + BOOL muted = FALSE; + hr = ctx->endpoint_volume->GetMute (&muted); + if (gst_wasapi2_result (hr)) + ctx->endpoint_muted = muted; + } + } + } + } + + /* Preroll data with silent data */ + if (ctx->render_client && !ctx->dummy_client) { + if (ctx->is_exclusive) { + BYTE *data; + hr = 
ctx->render_client->GetBuffer (ctx->client_buf_size, &data); + if (SUCCEEDED (hr)) { + GST_DEBUG ("Prefill %u frames", ctx->client_buf_size); + ctx->render_client->ReleaseBuffer (ctx->client_buf_size, + AUDCLNT_BUFFERFLAGS_SILENT); + } + } else { + UINT32 padding = 0; + auto hr = ctx->client->GetCurrentPadding (&padding); + if (SUCCEEDED (hr) && padding < ctx->client_buf_size) { + auto can_write = ctx->client_buf_size - padding; + if (can_write > ctx->period) + can_write = ctx->period; + + BYTE *data; + hr = ctx->render_client->GetBuffer (can_write, &data); + if (SUCCEEDED (hr)) { + GST_DEBUG ("Prefill %u frames", can_write); + ctx->render_client->ReleaseBuffer (can_write, + AUDCLNT_BUFFERFLAGS_SILENT); + } + } + } + } + + /* Warm up device, first Start() call may take long if device is in idle state */ + if (ctx->capture_client && !ctx->dummy_client) { + ctx->client->Start (); + ctx->client->Stop (); + ctx->client->Reset (); + } + + GstAudioInfo info; + gst_audio_info_from_caps (&info, ctx->caps); + + /* Due to format mismatch between Windows and GStreamer, + * we need to convert format */ + if (GST_AUDIO_INFO_FORMAT (&info) == GST_AUDIO_FORMAT_S24_32LE) + ctx->is_s24in32 = true; + + /* Allocates staging buffer for exclusive mode, since we should fill + * endpoint buffer at once */ + if (ctx->is_exclusive && ctx->render_client) { + ctx->exclusive_period_bytes = ctx->period * GST_AUDIO_INFO_BPF (&info); + ctx->exclusive_staging.resize (ctx->exclusive_period_bytes); + ctx->exclusive_staging_filled = 0; + } + + ctx->init_done = true; + + return TRUE; +} + +static void +gst_wasapi2_device_manager_create_ctx (IMMDeviceEnumerator * enumerator, + RbufCtxDesc * desc) +{ + HRESULT hr = S_OK; + Wasapi2ActivationHandler *activator = nullptr; + Wasapi2ActivationHandler *dummy_activator = nullptr; + ComPtr<IMMDevice> device; + bool is_default = false; + + if (!enumerator) + return; + + auto endpoint_class = desc->endpoint_class; + + if ((endpoint_class == 
GST_WASAPI2_ENDPOINT_CLASS_LOOPBACK_CAPTURE || + gst_wasapi2_is_process_loopback_class (endpoint_class)) && + desc->exclusive) { + GST_WARNING ("Loopback + exclusive is not supported configuration"); + desc->exclusive = FALSE; + } + + switch (endpoint_class) { + case GST_WASAPI2_ENDPOINT_CLASS_CAPTURE: + if (desc->device_id.empty () || + is_equal_device_id (desc->device_id.c_str (), + gst_wasapi2_get_default_device_id (eCapture))) { + if (gst_wasapi2_can_automatic_stream_routing () && !desc->exclusive) { + Wasapi2ActivationHandler::CreateInstance (&activator, + gst_wasapi2_get_default_device_id_wide (eCapture), nullptr); + GST_LOG ("Creating default capture device"); + } + + GST_LOG ("Creating default capture MMdevice"); + hr = enumerator->GetDefaultAudioEndpoint (eCapture, + eConsole, &device); + } else { + auto wstr = g_utf8_to_utf16 (desc->device_id.c_str (), + -1, nullptr, nullptr, nullptr); + hr = enumerator->GetDevice ((LPCWSTR) wstr, &device); + g_free (wstr); + } + break; + case GST_WASAPI2_ENDPOINT_CLASS_RENDER: + case GST_WASAPI2_ENDPOINT_CLASS_LOOPBACK_CAPTURE: + if (desc->device_id.empty () || + is_equal_device_id (desc->device_id.c_str (), + gst_wasapi2_get_default_device_id (eRender))) { + if (gst_wasapi2_can_automatic_stream_routing () && !desc->exclusive) { + Wasapi2ActivationHandler::CreateInstance (&activator, + gst_wasapi2_get_default_device_id_wide (eRender), nullptr); + GST_LOG ("Creating default render device"); + + if (endpoint_class == GST_WASAPI2_ENDPOINT_CLASS_LOOPBACK_CAPTURE) { + /* Create another client to send dummy audio data to endpoint */ + Wasapi2ActivationHandler::CreateInstance (&dummy_activator, + gst_wasapi2_get_default_device_id_wide (eRender), nullptr); + } + } + + hr = enumerator->GetDefaultAudioEndpoint (eRender, + eConsole, &device); + } else { + auto wstr = g_utf8_to_utf16 (desc->device_id.c_str (), + -1, nullptr, nullptr, nullptr); + hr = enumerator->GetDevice ((LPCWSTR) wstr, &device); + g_free (wstr); + } + break; + 
case GST_WASAPI2_ENDPOINT_CLASS_INCLUDE_PROCESS_LOOPBACK_CAPTURE: + case GST_WASAPI2_ENDPOINT_CLASS_EXCLUDE_PROCESS_LOOPBACK_CAPTURE: + { + AUDIOCLIENT_ACTIVATION_PARAMS params = { }; + params.ActivationType = AUDIOCLIENT_ACTIVATION_TYPE_PROCESS_LOOPBACK; + params.ProcessLoopbackParams.TargetProcessId = desc->pid; + if (desc->endpoint_class == + GST_WASAPI2_ENDPOINT_CLASS_INCLUDE_PROCESS_LOOPBACK_CAPTURE) { + params.ProcessLoopbackParams.ProcessLoopbackMode = + PROCESS_LOOPBACK_MODE_INCLUDE_TARGET_PROCESS_TREE; + } else { + params.ProcessLoopbackParams.ProcessLoopbackMode = + PROCESS_LOOPBACK_MODE_EXCLUDE_TARGET_PROCESS_TREE; + } + + GST_LOG ("Creating process loopback capture device"); + + Wasapi2ActivationHandler::CreateInstance (&activator, + VIRTUAL_AUDIO_DEVICE_PROCESS_LOOPBACK, ¶ms); + break; + } + default: + g_assert_not_reached (); + return; + } + + /* For debug */ + gst_wasapi2_result (hr); + + auto ctx = std::make_shared<RbufCtx> (desc->device_id); + if (activator) { + is_default = true; + activator->ActivateAsync (); + activator->GetClient (&ctx->client, INFINITE); + activator->Release (); + if (dummy_activator) { + dummy_activator->ActivateAsync (); + dummy_activator->GetClient (&ctx->dummy_client, INFINITE); + dummy_activator->Release (); + + if (!ctx->dummy_client) { + GST_WARNING ("Couldn't get dummy audio client"); + ctx->client = nullptr; + } + } + } + + if (!ctx->client) { + if (!device) { + GST_WARNING ("Couldn't get IMMDevice"); + return; + } + + hr = device->Activate (__uuidof (IAudioClient), CLSCTX_ALL, + nullptr, &ctx->client); + if (!gst_wasapi2_result (hr)) { + GST_WARNING ("Couldn't get IAudioClient from IMMDevice"); + return; + } + + if (endpoint_class == GST_WASAPI2_ENDPOINT_CLASS_LOOPBACK_CAPTURE) { + hr = device->Activate (__uuidof (IAudioClient), CLSCTX_ALL, + nullptr, &ctx->dummy_client); + if (!gst_wasapi2_result (hr)) { + GST_WARNING ("Couldn't get IAudioClient from IMMDevice"); + return; + } + } + } + + if (desc->exclusive) { + if 
(!device) { + GST_WARNING ("IMMDevice is unavailable"); + return; + } + + ComPtr < IPropertyStore > prop; + hr = device->OpenPropertyStore (STGM_READ, &prop); + if (!gst_wasapi2_result (hr)) + return; + + g_ptr_array_set_size (ctx->formats, 0); + gst_wasapi2_get_exclusive_mode_formats (ctx->client.Get (), prop.Get (), + ctx->formats); + if (ctx->formats->len == 0) { + GST_WARNING ("Couldn't get exclusive mode formats"); + desc->exclusive = false; + } + } + + if (!desc->exclusive) { + gst_wasapi2_get_shared_mode_formats (ctx->client.Get (), ctx->formats); + if (ctx->formats->len == 0) { + if (gst_wasapi2_is_process_loopback_class (endpoint_class)) { + g_ptr_array_add (ctx->formats, gst_wasapi2_get_default_mix_format ()); + } else { + GST_ERROR ("Couldn't find supported formats"); + return; + } + } + } + + ctx->supported_caps = gst_wasapi2_wfx_list_to_caps (ctx->formats); + if (!ctx->supported_caps) { + GST_ERROR ("Couldn't build caps from format"); + return; + } + + ctx->is_default = is_default; + ctx->endpoint_class = endpoint_class; + ctx->is_exclusive = desc->exclusive; + ctx->device = device; + ctx->low_latency = desc->low_latency; + ctx->latency_time = desc->latency_time; + ctx->buffer_time = desc->buffer_time; + + if (!desc->mix_format) { + /* format not fixated, return ctx without init */ + desc->ctx = ctx; + return; + } + + if (gst_wasapi2_rbuf_ctx_init (ctx, desc->mix_format)) + desc->ctx = ctx; +} + +struct Wasapi2DeviceManager +{ + Wasapi2DeviceManager (const Wasapi2DeviceManager &) = delete; + Wasapi2DeviceManager& operator= (const Wasapi2DeviceManager &) = delete; + + static Wasapi2DeviceManager * GetInstance() + { + static Wasapi2DeviceManager *inst = nullptr; + GST_WASAPI2_CALL_ONCE_BEGIN { + inst = new Wasapi2DeviceManager (); + } GST_WASAPI2_CALL_ONCE_END; + + return inst; + } + + Wasapi2DeviceManager () + { + shutdown_handle = CreateEvent (nullptr, FALSE, FALSE, nullptr); + interrupt_handle = CreateEvent (nullptr, FALSE, FALSE, nullptr); + 
com_thread = g_thread_new ("Wasapi2DeviceManager", + (GThreadFunc) device_manager_com_thread, this); + } + + ~Wasapi2DeviceManager () + { + CloseHandle (shutdown_handle); + CloseHandle (interrupt_handle); + } + + RbufCtxPtr + CreateCtx (const std::string & device_id, + GstWasapi2EndpointClass endpoint_class, guint pid, gint64 buffer_time, + gint64 latency_time, gboolean low_latency, gboolean exclusive, + WAVEFORMATEX * mix_format) + { + auto desc = std::make_shared<RbufCtxDesc> (); + desc->device_id = device_id; + desc->endpoint_class = endpoint_class; + desc->pid = pid; + desc->buffer_time = buffer_time; + desc->latency_time = latency_time; + desc->low_latency = low_latency; + desc->exclusive = exclusive; + if (mix_format) + desc->mix_format = gst_wasapi2_copy_wfx (mix_format); + + { + std::lock_guard <std::mutex> lk (lock); + queue.push (desc); + } + SetEvent (interrupt_handle); + + WaitForSingleObject (desc->event_handle, INFINITE); + + return desc->ctx; + } + + void + CreateCtxAsync (GstWasapi2Rbuf * rbuf, const std::string & device_id, + GstWasapi2EndpointClass endpoint_class, guint pid, gint64 buffer_time, + gint64 latency_time, gboolean low_latency, gboolean exclusive, + WAVEFORMATEX * mix_format) + { + auto desc = std::make_shared<RbufCtxDesc> (); + desc->rbuf = (GstWasapi2Rbuf *) gst_object_ref (rbuf); + desc->device_id = device_id; + desc->endpoint_class = endpoint_class; + desc->pid = pid; + desc->buffer_time = buffer_time; + desc->latency_time = latency_time; + desc->low_latency = low_latency; + desc->exclusive = exclusive; + if (mix_format) + desc->mix_format = gst_wasapi2_copy_wfx (mix_format); + + { + std::lock_guard <std::mutex> lk (lock); + queue.push (desc); + } + SetEvent (interrupt_handle); + } + + std::mutex lock; + std::queue<std::shared_ptr<RbufCtxDesc>> queue; + HANDLE shutdown_handle; + HANDLE interrupt_handle; + GThread *com_thread; +}; + +static gpointer +device_manager_com_thread (gpointer manager) +{ + auto self = (Wasapi2DeviceManager 
*) manager; + CoInitializeEx (nullptr, COINIT_MULTITHREADED); + + ComPtr<IMMDeviceEnumerator> enumerator; + CoCreateInstance (__uuidof (MMDeviceEnumerator), + nullptr, CLSCTX_ALL, IID_PPV_ARGS (&enumerator)); + + HANDLE waitables[] = { self->shutdown_handle, self->interrupt_handle }; + bool running = true; + while (running) { + auto wait_ret = WaitForMultipleObjects (G_N_ELEMENTS (waitables), + waitables, FALSE, INFINITE); + + switch (wait_ret) { + case WAIT_OBJECT_0: + running = false; + break; + case WAIT_OBJECT_0 + 1: + { + std::unique_lock <std::mutex> lk (self->lock); + while (!self->queue.empty ()) { + auto desc = self->queue.front (); + self->queue.pop (); + lk.unlock (); + GST_LOG ("Creating new context"); + + gst_wasapi2_device_manager_create_ctx (enumerator.Get (), desc.get ()); + + if (desc->mix_format) + CoTaskMemFree (desc->mix_format); + + SetEvent (desc->event_handle); + + if (desc->rbuf) { + auto cmd = std::make_shared < CommandUpdateDevice > (desc->device_id); + cmd->ctx = std::move (desc->ctx); + + gst_wasapi2_rbuf_push_command (desc->rbuf, cmd); + WaitForSingleObject (cmd->event_handle, INFINITE); + + gst_object_unref (desc->rbuf); + } + + lk.lock (); + } + break; + } + default: + GST_ERROR ("Unexpected wait return 0x%x", (guint) wait_ret); + running = false; + break; + } + } + + enumerator = nullptr; + + CoUninitialize (); + + return nullptr; +} + +struct GstWasapi2RbufPrivate +{ + GstWasapi2RbufPrivate () + { + command_handle = CreateEvent (nullptr, FALSE, FALSE, nullptr); + g_weak_ref_init (&parent, nullptr); + + QueryPerformanceFrequency (&qpc_freq); + } + + ~GstWasapi2RbufPrivate () + { + CloseHandle (command_handle); + gst_clear_caps (&caps); + g_weak_ref_set (&parent, nullptr); + } + + std::string device_id; + GstWasapi2EndpointClass endpoint_class; + guint pid; + gboolean low_latency = FALSE; + gboolean exclusive = FALSE; + + std::shared_ptr<RbufCtx> ctx; + std::atomic<bool> monitor_device_mute = { false }; + GThread *thread = nullptr; + 
HANDLE command_handle; + GstCaps *caps = nullptr; + + std::mutex lock; + std::condition_variable cond; + WAVEFORMATEX *mix_format = nullptr; + std::queue<std::shared_ptr<CommandData>> cmd_queue; + bool opened = false; + bool running = false; + + std::atomic<float> volume = { 1.0 }; + std::atomic<bool> mute = { false }; + std::atomic<bool> allow_dummy = { false }; + + bool is_first = true; + gint segoffset = 0; + guint64 write_frame_offset = 0; + guint64 expected_position = 0; + + HANDLE fallback_timer = nullptr; + bool fallback_timer_armed = false; + UINT64 fallback_frames_processed = 0; + bool configured_allow_dummy = false; + + LARGE_INTEGER qpc_freq; + LARGE_INTEGER fallback_qpc_base; + + HANDLE monitor_timer = nullptr; + bool monitor_timer_armed = false; + + std::vector<guint8> temp_data; + + GWeakRef parent; + GstWasapi2RbufCallback invalidated_cb; +}; +/* *INDENT-ON* */ + +struct _GstWasapi2Rbuf +{ + GstAudioRingBuffer parent; + + GstWasapi2RbufPrivate *priv; +}; + +static void gst_wasapi2_rbuf_finalize (GObject * object); + +static gboolean gst_wasapi2_rbuf_open_device (GstAudioRingBuffer * buf); +static gboolean gst_wasapi2_rbuf_close_device (GstAudioRingBuffer * buf); +static gboolean gst_wasapi2_rbuf_acquire (GstAudioRingBuffer * buf, + GstAudioRingBufferSpec * spec); +static gboolean gst_wasapi2_rbuf_release (GstAudioRingBuffer * buf); +static gboolean gst_wasapi2_rbuf_start (GstAudioRingBuffer * buf); +static gboolean gst_wasapi2_rbuf_resume (GstAudioRingBuffer * buf); +static gboolean gst_wasapi2_rbuf_pause (GstAudioRingBuffer * buf); +static gboolean gst_wasapi2_rbuf_stop (GstAudioRingBuffer * buf); +static guint gst_wasapi2_rbuf_delay (GstAudioRingBuffer * buf); +static gpointer gst_wasapi2_rbuf_loop_thread (GstWasapi2Rbuf * self); + +#define gst_wasapi2_rbuf_parent_class parent_class +G_DEFINE_TYPE (GstWasapi2Rbuf, gst_wasapi2_rbuf, GST_TYPE_AUDIO_RING_BUFFER); + +static void +gst_wasapi2_rbuf_class_init (GstWasapi2RbufClass * klass) +{ + 
GObjectClass *gobject_class = G_OBJECT_CLASS (klass); + GstAudioRingBufferClass *ring_buffer_class = + GST_AUDIO_RING_BUFFER_CLASS (klass); + + gobject_class->finalize = gst_wasapi2_rbuf_finalize; + + ring_buffer_class->open_device = + GST_DEBUG_FUNCPTR (gst_wasapi2_rbuf_open_device); + ring_buffer_class->close_device = + GST_DEBUG_FUNCPTR (gst_wasapi2_rbuf_close_device); + ring_buffer_class->acquire = GST_DEBUG_FUNCPTR (gst_wasapi2_rbuf_acquire); + ring_buffer_class->release = GST_DEBUG_FUNCPTR (gst_wasapi2_rbuf_release); + ring_buffer_class->start = GST_DEBUG_FUNCPTR (gst_wasapi2_rbuf_start); + ring_buffer_class->resume = GST_DEBUG_FUNCPTR (gst_wasapi2_rbuf_resume); + ring_buffer_class->pause = GST_DEBUG_FUNCPTR (gst_wasapi2_rbuf_pause); + ring_buffer_class->stop = GST_DEBUG_FUNCPTR (gst_wasapi2_rbuf_stop); + ring_buffer_class->delay = GST_DEBUG_FUNCPTR (gst_wasapi2_rbuf_delay); + + GST_DEBUG_CATEGORY_INIT (gst_wasapi2_rbuf_debug, + "wasapi2ringbuffer", 0, "wasapi2ringbuffer"); + + /* Initialize background thread on class init */ + Wasapi2DeviceManager::GetInstance (); +} + +static void +gst_wasapi2_rbuf_init (GstWasapi2Rbuf * self) +{ + self->priv = new GstWasapi2RbufPrivate (); +} + +static void +gst_wasapi2_rbuf_push_command (GstWasapi2Rbuf * self, + std::shared_ptr < CommandData > cmd) +{ + auto priv = self->priv; + + { + std::lock_guard < std::mutex > lk (priv->lock); + priv->cmd_queue.push (cmd); + } + SetEvent (priv->command_handle); +} + +static void +gst_wasapi2_rbuf_finalize (GObject * object) +{ + auto self = GST_WASAPI2_RBUF (object); + auto priv = self->priv; + + GST_LOG_OBJECT (self, "Finalize"); + + auto cmd = std::make_shared < CommandData > (CommandType::Shutdown); + gst_wasapi2_rbuf_push_command (self, cmd); + + g_thread_join (priv->thread); + + delete priv; + + G_OBJECT_CLASS (parent_class)->finalize (object); +} + +static void +gst_wasapi2_rbuf_post_open_error (GstWasapi2Rbuf * self, + const gchar * device_id) +{ + auto priv = self->priv; + 
auto parent = g_weak_ref_get (&priv->parent); + + if (!parent) + return; + + priv->invalidated_cb (parent); + + if (priv->configured_allow_dummy) { + GST_ELEMENT_WARNING (parent, RESOURCE, OPEN_READ_WRITE, + (nullptr), ("Failed to open device %s", GST_STR_NULL (device_id))); + } else { + GST_ELEMENT_ERROR (parent, RESOURCE, OPEN_READ_WRITE, + (nullptr), ("Failed to open device %s", GST_STR_NULL (device_id))); + } + + g_object_unref (parent); +} + +static void +gst_wasapi2_rbuf_post_io_error (GstWasapi2Rbuf * self, HRESULT hr, + gboolean is_write) +{ + auto priv = self->priv; + auto parent = g_weak_ref_get (&priv->parent); + + auto error_msg = gst_wasapi2_util_get_error_message (hr); + GST_ERROR_OBJECT (self, "Posting I/O error %s (hr: 0x%x)", error_msg, + (guint) hr); + + priv->invalidated_cb (parent); + + if (is_write) { + if (priv->configured_allow_dummy) { + GST_ELEMENT_WARNING (parent, RESOURCE, WRITE, + ("Failed to write to device"), ("%s, hr: 0x%x", error_msg, + (guint) hr)); + } else { + GST_ELEMENT_ERROR (parent, RESOURCE, WRITE, + ("Failed to write to device"), ("%s, hr: 0x%x", error_msg, + (guint) hr)); + } + } else { + if (priv->configured_allow_dummy) { + GST_ELEMENT_WARNING (parent, RESOURCE, READ, + ("Failed to read from device"), ("%s hr: 0x%x", error_msg, + (guint) hr)); + } else { + GST_ELEMENT_ERROR (parent, RESOURCE, READ, + ("Failed to read from device"), ("%s hr: 0x%x", error_msg, + (guint) hr)); + } + } + + g_free (error_msg); + g_object_unref (parent); +} + +static RbufCtxPtr +gst_wasapi2_rbuf_create_ctx (GstWasapi2Rbuf * self) +{ + auto priv = self->priv; + auto parent = g_weak_ref_get (&priv->parent); + + if (!parent) { + GST_ERROR_OBJECT (self, "No parent"); + return nullptr; + } + + gint64 buffer_time = 0; + gint64 latency_time = 0; + g_object_get (parent, "buffer-time", &buffer_time, "latency-time", + &latency_time, nullptr); + g_object_unref (parent); + + auto inst = Wasapi2DeviceManager::GetInstance (); + + return inst->CreateCtx 
(priv->device_id, priv->endpoint_class, + priv->pid, buffer_time, latency_time, priv->low_latency, + priv->exclusive, priv->mix_format); +} + +static void +gst_wasapi2_rbuf_create_ctx_async (GstWasapi2Rbuf * self) +{ + auto priv = self->priv; + auto parent = g_weak_ref_get (&priv->parent); + + if (!parent) { + GST_ERROR_OBJECT (self, "No parent"); + return; + } + + gint64 buffer_time = 0; + gint64 latency_time = 0; + g_object_get (parent, "buffer-time", &buffer_time, "latency-time", + &latency_time, nullptr); + g_object_unref (parent); + + auto inst = Wasapi2DeviceManager::GetInstance (); + + inst->CreateCtxAsync (self, priv->device_id, priv->endpoint_class, + priv->pid, buffer_time, latency_time, priv->low_latency, + priv->exclusive, priv->mix_format); +} + +static gboolean +gst_wasapi2_rbuf_open_device (GstAudioRingBuffer * buf) +{ + auto self = GST_WASAPI2_RBUF (buf); + + GST_DEBUG_OBJECT (self, "Open"); + + auto cmd = std::make_shared < CommandData > (CommandType::Open); + gst_wasapi2_rbuf_push_command (self, cmd); + + WaitForSingleObject (cmd->event_handle, INFINITE); + + return gst_wasapi2_result (cmd->hr); +} + +static gboolean +gst_wasapi2_rbuf_close_device (GstAudioRingBuffer * buf) +{ + auto self = GST_WASAPI2_RBUF (buf); + + GST_DEBUG_OBJECT (self, "Close device"); + + auto cmd = std::make_shared < CommandData > (CommandType::Close); + + gst_wasapi2_rbuf_push_command (self, cmd); + + WaitForSingleObject (cmd->event_handle, INFINITE); + + return TRUE; +} + +static gboolean +gst_wasapi2_rbuf_acquire (GstAudioRingBuffer * buf, + GstAudioRingBufferSpec * spec) +{ + auto self = GST_WASAPI2_RBUF (buf); + + auto cmd = std::make_shared < CommandAcquire > (spec); + + gst_wasapi2_rbuf_push_command (self, cmd); + + WaitForSingleObject (cmd->event_handle, INFINITE); + + return gst_wasapi2_result (cmd->hr); +} + +static gboolean +gst_wasapi2_rbuf_release (GstAudioRingBuffer * buf) +{ + auto self = GST_WASAPI2_RBUF (buf); + + GST_DEBUG_OBJECT (self, "Release"); + + 
auto cmd = std::make_shared < CommandData > (CommandType::Release); + + gst_wasapi2_rbuf_push_command (self, cmd); + + WaitForSingleObject (cmd->event_handle, INFINITE); + + return TRUE; +} + +static gboolean +gst_wasapi2_rbuf_start_internal (GstWasapi2Rbuf * self) +{ + auto cmd = std::make_shared < CommandData > (CommandType::Start); + gst_wasapi2_rbuf_push_command (self, cmd); + + WaitForSingleObject (cmd->event_handle, INFINITE); + + return gst_wasapi2_result (cmd->hr); +} + +static gboolean +gst_wasapi2_rbuf_start (GstAudioRingBuffer * buf) +{ + auto self = GST_WASAPI2_RBUF (buf); + + GST_DEBUG_OBJECT (self, "Start"); + + return gst_wasapi2_rbuf_start_internal (self); +} + +static gboolean +gst_wasapi2_rbuf_resume (GstAudioRingBuffer * buf) +{ + auto self = GST_WASAPI2_RBUF (buf); + + GST_DEBUG_OBJECT (self, "Resume"); + + return gst_wasapi2_rbuf_start_internal (self); +} + +static gboolean +gst_wasapi2_rbuf_stop_internal (GstWasapi2Rbuf * self) +{ + auto cmd = std::make_shared < CommandData > (CommandType::Stop); + gst_wasapi2_rbuf_push_command (self, cmd); + + WaitForSingleObject (cmd->event_handle, INFINITE); + + return TRUE; +} + +static gboolean +gst_wasapi2_rbuf_stop (GstAudioRingBuffer * buf) +{ + auto self = GST_WASAPI2_RBUF (buf); + + GST_DEBUG_OBJECT (self, "Stop"); + + return gst_wasapi2_rbuf_stop_internal (self); +} + +static gboolean +gst_wasapi2_rbuf_pause (GstAudioRingBuffer * buf) +{ + auto self = GST_WASAPI2_RBUF (buf); + + GST_DEBUG_OBJECT (self, "Pause"); + + return gst_wasapi2_rbuf_stop_internal (self); +} + +static inline gint32 +rshift8_32 (gint32 x) +{ + guint32 s = ((guint32) x) >> 8; + guint32 signmask = (x < 0) ? 
0xff000000u : 0u; + + return (gint32) (s | signmask); +} + +static inline void +shift32_right8_copy (const gint32 * src, gint32 * dst, size_t n) +{ +#ifdef GST_WASAPI2_HAVE_SSE2 + size_t i = 0; + size_t step = 4; + for (; i + step <= n; i += step) { + __m128i v = _mm_loadu_si128 ((const __m128i *) (src + i)); + __m128i y = _mm_srai_epi32 (v, 8); + _mm_storeu_si128 ((__m128i *) (dst + i), y); + } + + for (; i < n; i++) + dst[i] = rshift8_32 (src[i]); +#else + for (size_t i = 0; i < n; i++) + dst[i] = rshift8_32 (src[i]); +#endif +} + +static inline void +shift32_left8_copy (const gint32 * src, gint32 * dst, size_t n) +{ +#ifdef GST_WASAPI2_HAVE_SSE2 + size_t i = 0; + size_t step = 4; + for (; i + step <= n; i += step) { + __m128i v = _mm_loadu_si128 ((const __m128i *) (src + i)); + __m128i y = _mm_slli_epi32 (v, 8); + _mm_storeu_si128 ((__m128i *) (dst + i), y); + } + + for (; i < n; i++) + dst[i] = (gint32) ((guint32) src[i] << 8); +#else + for (size_t i = 0; i < n; i++) + dst[i] = (gint32) ((guint32) src[i] << 8); +#endif +} + +static inline void +s24_msb_to_s24lsb (guint8 * dst, const guint8 * src, size_t bytes) +{ + if ((bytes & 3) == 0) + shift32_right8_copy ((const gint32 *) src, (gint32 *) dst, bytes >> 2); + else + memcpy (dst, src, bytes); +} + +static inline void +s24lsb_to_s24_msb (guint8 * dst, const guint8 * src, size_t bytes) +{ + if ((bytes & 3) == 0) + shift32_left8_copy ((const gint32 *) src, (gint32 *) dst, bytes >> 2); + else + memcpy (dst, src, bytes); +} + +static HRESULT +gst_wasapi2_rbuf_process_read (GstWasapi2Rbuf * self) +{ + auto rb = GST_AUDIO_RING_BUFFER_CAST (self); + auto priv = self->priv; + BYTE *data = nullptr; + UINT32 to_read_frames = 0; + DWORD flags = 0; + guint64 position = 0; + UINT64 qpc_pos = 0; + GstClockTime qpc_time; + + if (!priv->ctx || !priv->ctx->capture_client) { + GST_ERROR_OBJECT (self, "IAudioCaptureClient is not available"); + return E_FAIL; + } + + auto & ctx = priv->ctx; + auto client = priv->ctx->capture_client; + + auto hr = + 
client->GetBuffer (&data, &to_read_frames, &flags, &position, &qpc_pos); + /* 100 ns unit */ + qpc_time = qpc_pos * 100; + + GST_LOG_OBJECT (self, "Reading %d frames offset at %" G_GUINT64_FORMAT + ", expected position %" G_GUINT64_FORMAT ", qpc-time %" + GST_TIME_FORMAT "(%" G_GUINT64_FORMAT "), flags 0x%x", to_read_frames, + position, priv->expected_position, GST_TIME_ARGS (qpc_time), qpc_pos, + (guint) flags); + + if (hr == AUDCLNT_S_BUFFER_EMPTY || to_read_frames == 0) { + GST_LOG_OBJECT (self, "Empty buffer"); + return S_OK; + } + + if (!gst_wasapi2_result (hr)) + return hr; + + guint gap_dev_frames = 0; + if (!gst_wasapi2_is_process_loopback_class (priv->ctx->endpoint_class)) { + /* XXX: position might not be increased in case of process loopback */ + if (priv->is_first) { + priv->expected_position = position + to_read_frames; + priv->is_first = false; + } else { + if (position > priv->expected_position) { + gap_dev_frames = (guint) (position - priv->expected_position); + GST_WARNING_OBJECT (self, "Found %u frames gap", gap_dev_frames); + } + + priv->expected_position = position + to_read_frames; + } + } + + if (priv->mute) { + /* volume client might not be available in case of process loopback */ + flags |= AUDCLNT_BUFFERFLAGS_SILENT; + } + + gboolean device_muted = + priv->monitor_device_mute.load (std::memory_order_acquire) && + priv->ctx->IsEndpointMuted (); + gboolean force_silence = + ((flags & AUDCLNT_BUFFERFLAGS_SILENT) == AUDCLNT_BUFFERFLAGS_SILENT) || + device_muted; + + gsize host_bpf = (gsize) GST_AUDIO_INFO_BPF (&rb->spec.info); + gsize device_bpf = (ctx->conv) + ? 
(gsize) GST_AUDIO_INFO_BPF (&ctx->device_info) + : (gsize) GST_AUDIO_INFO_BPF (&rb->spec.info); + + /* Fill gap data if any */ + if (gap_dev_frames > 0) { + if (ctx->conv) { + auto gap_bytes = (gsize) gap_dev_frames * device_bpf; + auto old = ctx->device_fifo_bytes; + ctx->device_fifo.resize (old + gap_bytes); + gst_audio_format_info_fill_silence (ctx->device_info.finfo, + ctx->device_fifo.data () + old, (gint) gap_bytes); + ctx->device_fifo_bytes += gap_bytes; + } else { + auto gap_bytes = (gsize) gap_dev_frames * host_bpf; + while (gap_bytes > 0) { + gint segment; + guint8 *dstptr; + gint len; + + if (!gst_audio_ring_buffer_prepare_read (rb, &segment, &dstptr, &len)) + break; + + len -= priv->segoffset; + if (len <= 0) + break; + + gsize to_write = MIN ((gsize) len, gap_bytes); + gst_audio_format_info_fill_silence (rb->spec.info.finfo, + dstptr + priv->segoffset, (gint) to_write); + + priv->segoffset += (gint) to_write; + gap_bytes -= to_write; + + if (priv->segoffset == rb->spec.segsize) { + gst_audio_ring_buffer_advance (rb, 1); + priv->segoffset = 0; + } + } + } + } + + if (ctx->conv) { + /* push device data to device_fifo */ + const size_t in_bytes = (size_t) to_read_frames * device_bpf; + if (in_bytes > 0) { + const size_t old = ctx->device_fifo_bytes; + ctx->device_fifo.resize (old + in_bytes); + if (force_silence) { + gst_audio_format_info_fill_silence (ctx->device_info.finfo, + ctx->device_fifo.data () + old, (gint) in_bytes); + } else { + if (ctx->is_s24in32) { + s24_msb_to_s24lsb (ctx->device_fifo.data () + old, data, in_bytes); + } else { + memcpy (ctx->device_fifo.data () + old, data, in_bytes); + } + } + ctx->device_fifo_bytes += in_bytes; + } + + /* convert device_fifo -> host_fifo */ + while (ctx->device_fifo_bytes >= device_bpf) { + auto in_frames_avail = (gsize) (ctx->device_fifo_bytes / device_bpf); + auto out_frames = gst_audio_converter_get_out_frames (ctx->conv, + (gint) in_frames_avail); + if (out_frames == 0) + break; + + auto out_bytes = 
(size_t) (out_frames * host_bpf); + priv->temp_data.resize (out_bytes); + + gpointer in_planes[1] = { ctx->device_fifo.data () }; + gpointer out_planes[1] = { priv->temp_data.data () }; + + if (!gst_audio_converter_samples (ctx->conv, + GST_AUDIO_CONVERTER_FLAG_NONE, + in_planes, (gint) in_frames_avail, + out_planes, (gint) out_frames)) { + GST_ERROR_OBJECT (self, "Couldn't convert sample"); + client->ReleaseBuffer (to_read_frames); + return E_FAIL; + } + + auto consumed_in = (size_t) (in_frames_avail * device_bpf); + if (consumed_in < ctx->device_fifo_bytes) { + memmove (ctx->device_fifo.data (), + ctx->device_fifo.data () + consumed_in, + ctx->device_fifo_bytes - consumed_in); + } + ctx->device_fifo_bytes -= consumed_in; + ctx->device_fifo.resize (ctx->device_fifo_bytes); + + /* Push converted data to host_fifo */ + if (out_bytes > 0) { + auto hold = ctx->host_fifo_bytes; + ctx->host_fifo.resize (hold + out_bytes); + memcpy (ctx->host_fifo.data () + hold, priv->temp_data.data (), + out_bytes); + ctx->host_fifo_bytes += out_bytes; + } + + if (ctx->device_fifo_bytes < device_bpf) + break; + } + + /* host_fifo -> ringbuffer */ + while (ctx->host_fifo_bytes > 0) { + gint segment; + guint8 *dstptr; + gint len; + + if (!gst_audio_ring_buffer_prepare_read (rb, &segment, &dstptr, &len)) + break; + + len -= priv->segoffset; + if (len <= 0) + break; + + auto to_copy = MIN ((size_t) len, ctx->host_fifo_bytes); + memcpy (dstptr + priv->segoffset, ctx->host_fifo.data (), to_copy); + + priv->segoffset += (gint) to_copy; + + if (to_copy < ctx->host_fifo_bytes) { + memmove (ctx->host_fifo.data (), + ctx->host_fifo.data () + to_copy, ctx->host_fifo_bytes - to_copy); + } + ctx->host_fifo_bytes -= to_copy; + ctx->host_fifo.resize (ctx->host_fifo_bytes); + + if (priv->segoffset == rb->spec.segsize) { + gst_audio_ring_buffer_advance (rb, 1); + priv->segoffset = 0; + } + + if (to_copy == 0) + break; + } + } else { + gsize remain = (gsize) to_read_frames * device_bpf; + gsize offset = 0; + 
+ while (remain > 0) { + gint segment; + guint8 *dstptr; + gint len; + + if (!gst_audio_ring_buffer_prepare_read (rb, &segment, &dstptr, &len)) { + GST_INFO_OBJECT (self, "No segment available"); + break; + } + + len -= priv->segoffset; + if (len <= 0) + break; + + auto to_write = MIN ((gsize) len, remain); + if (force_silence) { + gst_audio_format_info_fill_silence (rb->spec.info.finfo, + dstptr + priv->segoffset, (gint) to_write); + } else { + if (ctx->is_s24in32) + s24_msb_to_s24lsb (dstptr + priv->segoffset, data + offset, to_write); + else + memcpy (dstptr + priv->segoffset, data + offset, to_write); + } + + priv->segoffset += (gint) to_write; + offset += to_write; + remain -= to_write; + + if (priv->segoffset == rb->spec.segsize) { + gst_audio_ring_buffer_advance (rb, 1); + priv->segoffset = 0; + } + } + } + + hr = client->ReleaseBuffer (to_read_frames); + gst_wasapi2_result (hr); + + return S_OK; +} + +static HRESULT +gst_wasapi2_rbuf_process_write (GstWasapi2Rbuf * self) +{ + auto rb = GST_AUDIO_RING_BUFFER_CAST (self); + auto priv = self->priv; + HRESULT hr; + guint32 padding_frames = 0; + guint32 can_write; + guint32 can_write_bytes; + gint segment; + guint8 *readptr; + gint len; + BYTE *data = nullptr; + + if (!priv->ctx || !priv->ctx->render_client) { + GST_ERROR_OBJECT (self, "IAudioRenderClient is not available"); + return E_FAIL; + } + + auto client = priv->ctx->client; + auto render_client = priv->ctx->render_client; + bool force_silence = priv->mute; + + hr = client->GetCurrentPadding (&padding_frames); + if (!gst_wasapi2_result (hr)) + return hr; + + if (padding_frames >= priv->ctx->client_buf_size) { + GST_INFO_OBJECT (self, + "Padding size %d is larger than or equal to buffer size %d", + padding_frames, priv->ctx->client_buf_size); + return S_OK; + } + + can_write = priv->ctx->client_buf_size - padding_frames; + can_write_bytes = can_write * GST_AUDIO_INFO_BPF (&rb->spec.info); + + GST_LOG_OBJECT (self, "Writing %d frames offset at %" 
G_GUINT64_FORMAT, + can_write, priv->write_frame_offset); + priv->write_frame_offset += can_write; + + while (can_write_bytes > 0) { + if (!gst_audio_ring_buffer_prepare_read (rb, &segment, &readptr, &len)) { + GST_INFO_OBJECT (self, "No segment available, fill silence"); + + /* This would be the case where we are in the middle of a PAUSED state + * change. Just fill a silent buffer to avoid an immediate I/O callback + * after we return here */ + hr = render_client->GetBuffer (can_write, &data); + if (!gst_wasapi2_result (hr)) + return hr; + + hr = render_client->ReleaseBuffer (can_write, AUDCLNT_BUFFERFLAGS_SILENT); + /* for debugging */ + gst_wasapi2_result (hr); + return hr; + } + + len -= priv->segoffset; + + if (len > (gint) can_write_bytes) + len = can_write_bytes; + + can_write = len / GST_AUDIO_INFO_BPF (&rb->spec.info); + if (can_write == 0) + break; + + hr = render_client->GetBuffer (can_write, &data); + if (!gst_wasapi2_result (hr)) + return hr; + + if (force_silence) { + hr = render_client->ReleaseBuffer (can_write, AUDCLNT_BUFFERFLAGS_SILENT); + } else { + if (priv->ctx->is_s24in32) + s24lsb_to_s24_msb (data, readptr + priv->segoffset, len); + else + memcpy (data, readptr + priv->segoffset, len); + + hr = render_client->ReleaseBuffer (can_write, 0); + } + + priv->segoffset += len; + can_write_bytes -= len; + + if (priv->segoffset == rb->spec.segsize) { + gst_audio_ring_buffer_clear (rb, segment); + gst_audio_ring_buffer_advance (rb, 1); + priv->segoffset = 0; + } + + if (!gst_wasapi2_result (hr)) { + GST_WARNING_OBJECT (self, "Failed to release buffer"); + break; + } + } + + return S_OK; +} + +static HRESULT +gst_wasapi2_rbuf_process_write_exclusive (GstWasapi2Rbuf * self) +{ + auto rb = GST_AUDIO_RING_BUFFER_CAST (self); + auto priv = self->priv; + HRESULT hr; + BYTE *data = nullptr; + + if (!priv->ctx || !priv->ctx->render_client) { + GST_ERROR_OBJECT (self, "IAudioRenderClient is not available"); + return E_FAIL; + } + + auto & ctx = priv->ctx; + auto client = 
priv->ctx->client; + auto render_client = priv->ctx->render_client; + + auto period_bytes = ctx->exclusive_period_bytes; + + if (ctx->conv) { + auto host_bpf = (gsize) GST_AUDIO_INFO_BPF (&ctx->host_info); + auto device_bpf = (gsize) GST_AUDIO_INFO_BPF (&ctx->device_info); + + while (ctx->exclusive_staging_filled < period_bytes) { + bool processed_any = false; + gint segment; + guint8 *readptr; + gint len; + + /* read data from ringbuffer */ + if (gst_audio_ring_buffer_prepare_read (rb, &segment, &readptr, &len)) { + len -= priv->segoffset; + if (len > 0) { + auto old = ctx->host_fifo_bytes; + ctx->host_fifo.resize (old + (size_t) len); + memcpy (ctx->host_fifo.data () + old, readptr + priv->segoffset, + (size_t) len); + ctx->host_fifo_bytes += (size_t) len; + processed_any = true; + + priv->segoffset += len; + if (priv->segoffset == rb->spec.segsize) { + gst_audio_ring_buffer_clear (rb, segment); + gst_audio_ring_buffer_advance (rb, 1); + priv->segoffset = 0; + } + } + } + + /* do conversion */ + { + auto host_frames_avail = (gsize) (ctx->host_fifo_bytes / host_bpf); + if (host_frames_avail > 0) { + auto out_frames = + gst_audio_converter_get_out_frames (ctx->conv, host_frames_avail); + if (out_frames > 0) { + auto out_bytes = (size_t) (out_frames * device_bpf); + priv->temp_data.resize (out_bytes); + + gpointer in_planes[1] = { ctx->host_fifo.data () }; + gpointer out_planes[1] = { priv->temp_data.data () }; + + if (!gst_audio_converter_samples (ctx->conv, + GST_AUDIO_CONVERTER_FLAG_NONE, + in_planes, host_frames_avail, out_planes, out_frames)) { + GST_ERROR_OBJECT (self, "gst_audio_converter_samples() failed"); + return E_FAIL; + } + + auto consumed_host = (size_t) (host_frames_avail * host_bpf); + if (consumed_host < ctx->host_fifo_bytes) { + memmove (ctx->host_fifo.data (), + ctx->host_fifo.data () + consumed_host, + ctx->host_fifo_bytes - consumed_host); + } + ctx->host_fifo_bytes -= consumed_host; + ctx->host_fifo.resize (ctx->host_fifo_bytes); + + auto old_dev 
= ctx->device_fifo_bytes; + ctx->device_fifo.resize (old_dev + out_bytes); + + if (ctx->is_s24in32) { + s24lsb_to_s24_msb (ctx->device_fifo.data () + + old_dev, priv->temp_data.data (), out_bytes); + } else { + memcpy (ctx->device_fifo.data () + old_dev, + priv->temp_data.data (), out_bytes); + } + + ctx->device_fifo_bytes += out_bytes; + + processed_any = true; + } + } + } + + /* move device fifo to staging */ + if (ctx->device_fifo_bytes > 0 && + ctx->exclusive_staging_filled < period_bytes) { + auto need = period_bytes - ctx->exclusive_staging_filled; + auto to_copy = MIN (need, ctx->device_fifo_bytes); + + memcpy (ctx->exclusive_staging.data () + ctx->exclusive_staging_filled, + ctx->device_fifo.data (), to_copy); + ctx->exclusive_staging_filled += to_copy; + + if (to_copy < ctx->device_fifo_bytes) { + memmove (ctx->device_fifo.data (), + ctx->device_fifo.data () + to_copy, + ctx->device_fifo_bytes - to_copy); + } + + ctx->device_fifo_bytes -= to_copy; + ctx->device_fifo.resize (ctx->device_fifo_bytes); + + if (to_copy > 0) + processed_any = true; + } + + if (!processed_any) + break; + + if (ctx->exclusive_staging_filled >= period_bytes) + break; + } + } else { + while (ctx->exclusive_staging_filled < period_bytes) { + gint segment; + guint8 *readptr; + gint len; + + if (!gst_audio_ring_buffer_prepare_read (rb, &segment, &readptr, &len)) + break; + + len -= priv->segoffset; + if (len <= 0) + break; + + auto remain = period_bytes - ctx->exclusive_staging_filled; + auto to_copy = (size_t) MIN ((gsize) len, (gsize) remain); + + if (ctx->is_s24in32) { + s24lsb_to_s24_msb (ctx->exclusive_staging.data () + + ctx->exclusive_staging_filled, readptr + priv->segoffset, to_copy); + } else { + memcpy (ctx->exclusive_staging.data () + ctx->exclusive_staging_filled, + readptr + priv->segoffset, to_copy); + } + + priv->segoffset += (gint) to_copy; + ctx->exclusive_staging_filled += to_copy; + + if (priv->segoffset == rb->spec.segsize) { + gst_audio_ring_buffer_clear (rb, 
segment); + gst_audio_ring_buffer_advance (rb, 1); + priv->segoffset = 0; + } + + if (ctx->exclusive_staging_filled >= period_bytes) + break; + } + } + + hr = render_client->GetBuffer (ctx->period, &data); + if (!gst_wasapi2_result (hr)) + return hr; + + GST_LOG_OBJECT (self, "Writing %d frames offset at %" G_GUINT64_FORMAT, + (guint) ctx->period, priv->write_frame_offset); + priv->write_frame_offset += ctx->period; + + if (ctx->exclusive_staging_filled < ctx->exclusive_period_bytes) { + GST_LOG_OBJECT (self, "Staging buffer not filled %d < %d", + (guint) ctx->exclusive_staging_filled, + (guint) ctx->exclusive_period_bytes); + hr = render_client->ReleaseBuffer (ctx->period, AUDCLNT_BUFFERFLAGS_SILENT); + gst_wasapi2_result (hr); + } else { + if (priv->mute) { + hr = ctx->render_client->ReleaseBuffer (ctx->period, + AUDCLNT_BUFFERFLAGS_SILENT); + } else { + memcpy (data, ctx->exclusive_staging.data (), + ctx->exclusive_period_bytes); + hr = ctx->render_client->ReleaseBuffer (ctx->period, 0); + } + + gst_wasapi2_result (hr); + ctx->exclusive_staging_filled = 0; + } + + return S_OK; +} + +static HRESULT +fill_loopback_silence (GstWasapi2Rbuf * self) +{ + auto priv = self->priv; + HRESULT hr; + guint32 padding_frames = 0; + guint32 can_write; + BYTE *data = nullptr; + + if (!priv->ctx || !priv->ctx->dummy_client || !priv->ctx->render_client) { + GST_ERROR_OBJECT (self, "IAudioRenderClient is not available"); + return E_FAIL; + } + + auto client = priv->ctx->dummy_client; + auto render_client = priv->ctx->render_client; + + hr = client->GetCurrentPadding (&padding_frames); + if (!gst_wasapi2_result (hr)) + return hr; + + if (padding_frames >= priv->ctx->dummy_buf_size) { + GST_INFO_OBJECT (self, + "Padding size %d is larger than or equal to buffer size %d", + padding_frames, priv->ctx->dummy_buf_size); + return S_OK; + } + + can_write = priv->ctx->dummy_buf_size - padding_frames; + + GST_TRACE_OBJECT (self, "Writing %d silent frames", can_write); + + hr = 
render_client->GetBuffer (can_write, &data); + if (!gst_wasapi2_result (hr)) + return hr; + + hr = render_client->ReleaseBuffer (can_write, AUDCLNT_BUFFERFLAGS_SILENT); + return gst_wasapi2_result (hr); +} + +static gboolean +gst_wasapi2_rbuf_process_acquire (GstWasapi2Rbuf * self, + GstAudioRingBufferSpec * spec) +{ + auto buf = GST_AUDIO_RING_BUFFER (self); + auto priv = self->priv; + + guint client_buf_size = 0; + gint period_frames = 480; + + auto rbuf_caps = gst_audio_info_to_caps (&spec->info); + if (!rbuf_caps) { + GST_ERROR_OBJECT (self, "Couldn't get caps from info"); + return FALSE; + } + + GST_DEBUG_OBJECT (self, "Acquire with caps %" GST_PTR_FORMAT, rbuf_caps); + + gst_wasapi2_clear_wfx (&priv->mix_format); + + if (priv->ctx) { + if (!priv->ctx->init_done) { + WAVEFORMATEX *matching = nullptr; + for (guint i = 0; i < priv->ctx->formats->len && !matching; i++) { + GstCaps *format_caps = nullptr; + auto format = + (WAVEFORMATEX *) g_ptr_array_index (priv->ctx->formats, i); + gst_wasapi2_util_parse_waveformatex (format, &format_caps, nullptr); + if (!format_caps) + continue; + + if (gst_caps_can_intersect (rbuf_caps, format_caps)) + matching = gst_wasapi2_copy_wfx (format); + + gst_caps_unref (format_caps); + } + + if (!matching) + matching = gst_wasapi2_audio_info_to_wfx (&spec->info); + + if (!matching) { + GST_ERROR_OBJECT (self, "Couldn't build wave format from caps %" + GST_PTR_FORMAT, rbuf_caps); + gst_clear_caps (&rbuf_caps); + return FALSE; + } + + auto ret = gst_wasapi2_rbuf_ctx_init (priv->ctx, matching); + gst_wasapi2_free_wfx (matching); + + if (!ret) { + GST_WARNING_OBJECT (self, "Couldn't initialize ctx"); + gst_wasapi2_rbuf_post_open_error (self, priv->device_id.c_str ()); + + if (!priv->configured_allow_dummy) + return FALSE; + + priv->ctx = nullptr; + } else { + client_buf_size = priv->ctx->client_buf_size; + period_frames = priv->ctx->period; + } + } else { + client_buf_size = priv->ctx->client_buf_size; + period_frames = 
priv->ctx->period; + } + } + + if (priv->ctx) + priv->mix_format = gst_wasapi2_copy_wfx (priv->ctx->mix_format); + else + priv->mix_format = gst_wasapi2_audio_info_to_wfx (&spec->info); + + gst_clear_caps (&rbuf_caps); + + gint bpf = GST_AUDIO_INFO_BPF (&buf->spec.info); + gint rate = GST_AUDIO_INFO_RATE (&buf->spec.info); + gint target_frames = rate / 2; /* 500ms duration */ + + gint segtotal = (target_frames + period_frames - 1) / period_frames; + spec->segsize = period_frames * bpf; + spec->segtotal = MAX (segtotal, 2); + + /* Since we allocate a large buffer (large segtotal) for device switching, + * update seglatency to a reasonable value */ + spec->seglatency = 2; + + GST_INFO_OBJECT (self, + "Buffer size: %d frames, period: %d frames, segsize: %d bytes, " + "segtotal: %d", client_buf_size, period_frames, + spec->segsize, spec->segtotal); + + GstAudioChannelPosition *position = nullptr; + gst_wasapi2_util_waveformatex_to_channel_mask (priv->mix_format, &position); + if (position) + gst_audio_ring_buffer_set_channel_positions (buf, position); + g_free (position); + + buf->size = spec->segtotal * spec->segsize; + buf->memory = (guint8 *) g_malloc (buf->size); + gst_audio_format_info_fill_silence (buf->spec.info.finfo, + buf->memory, buf->size); + + return TRUE; +} + +static HRESULT +gst_wasapi2_rbuf_process_release (GstWasapi2Rbuf * self) +{ + auto buf = GST_AUDIO_RING_BUFFER (self); + + g_clear_pointer (&buf->memory, g_free); + + return S_OK; +} + +static void +gst_wasapi2_rbuf_start_fallback_timer (GstWasapi2Rbuf * self) +{ + auto rb = GST_AUDIO_RING_BUFFER_CAST (self); + auto priv = self->priv; + + if (priv->fallback_timer_armed || !priv->configured_allow_dummy) + return; + + GST_DEBUG_OBJECT (self, "Start fallback timer"); + + auto period_frames = rb->spec.segsize / GST_AUDIO_INFO_BPF (&rb->spec.info); + UINT64 period_100ns = (10000000ULL * period_frames) / + GST_AUDIO_INFO_RATE (&rb->spec.info); + + LARGE_INTEGER due_time; + due_time.QuadPart = -static_cast < 
LONGLONG > (period_100ns); + + SetWaitableTimer (priv->fallback_timer, + &due_time, + static_cast < LONG > (period_100ns / 10000), nullptr, nullptr, FALSE); + + QueryPerformanceCounter (&priv->fallback_qpc_base); + priv->fallback_frames_processed = 0; + priv->fallback_timer_armed = true; +} + +static void +gst_wasapi2_rbuf_stop_fallback_timer (GstWasapi2Rbuf * self) +{ + auto priv = self->priv; + + if (!priv->fallback_timer_armed) + return; + + GST_DEBUG_OBJECT (self, "Stop fallback timer"); + + CancelWaitableTimer (priv->fallback_timer); + priv->fallback_timer_armed = false; +} + +static void +gst_wasapi2_rbuf_start_monitor_timer (GstWasapi2Rbuf * self) +{ + auto priv = self->priv; + + if (priv->monitor_timer_armed) + return; + + GST_DEBUG_OBJECT (self, "Start monitor timer"); + + /* Run 15ms timer to monitor device status */ + LARGE_INTEGER due_time; + due_time.QuadPart = -1500000LL; + + SetWaitableTimer (priv->monitor_timer, + &due_time, 15, nullptr, nullptr, FALSE); + + priv->monitor_timer_armed = true; +} + +static void +gst_wasapi2_rbuf_stop_monitor_timer (GstWasapi2Rbuf * self) +{ + auto priv = self->priv; + + if (!priv->monitor_timer_armed) + return; + + GST_DEBUG_OBJECT (self, "Stop monitor timer"); + + CancelWaitableTimer (priv->monitor_timer); + priv->monitor_timer_armed = false; +} + +static HRESULT +gst_wasapi2_rbuf_process_start (GstWasapi2Rbuf * self, gboolean reset_offset) +{ + auto priv = self->priv; + + if (!priv->ctx && !priv->configured_allow_dummy) { + GST_WARNING_OBJECT (self, "No context to start"); + return E_FAIL; + } + + if (priv->running) + return S_OK; + + priv->is_first = true; + if (reset_offset) + priv->segoffset = 0; + priv->write_frame_offset = 0; + priv->expected_position = 0; + + if (priv->ctx) { + priv->ctx->exclusive_staging_filled = 0; + priv->ctx->device_fifo_bytes = 0; + priv->ctx->host_fifo_bytes = 0; + priv->ctx->device_fifo.clear (); + priv->ctx->host_fifo.clear (); + + if (priv->ctx->conv) + gst_audio_converter_reset 
(priv->ctx->conv); + + auto hr = priv->ctx->Start (); + + if (!gst_wasapi2_result (hr)) { + GST_WARNING_OBJECT (self, "Couldn't start device"); + gst_wasapi2_rbuf_post_open_error (self, priv->ctx->device_id.c_str ()); + if (!priv->configured_allow_dummy) + return hr; + + gst_wasapi2_rbuf_start_fallback_timer (self); + } + } else { + gst_wasapi2_rbuf_start_fallback_timer (self); + } + + gst_wasapi2_rbuf_start_monitor_timer (self); + priv->running = true; + + return S_OK; +} + +static HRESULT +gst_wasapi2_rbuf_process_stop (GstWasapi2Rbuf * self) +{ + auto priv = self->priv; + HRESULT hr = S_OK; + + if (priv->ctx) + hr = priv->ctx->Stop (); + + priv->running = false; + priv->is_first = true; + priv->segoffset = 0; + priv->write_frame_offset = 0; + priv->expected_position = 0; + + gst_wasapi2_rbuf_stop_fallback_timer (self); + gst_wasapi2_rbuf_stop_monitor_timer (self); + + return hr; +} + +static void +gst_wasapi2_rbuf_discard_frames (GstWasapi2Rbuf * self, guint frames) +{ + auto rb = GST_AUDIO_RING_BUFFER_CAST (self); + auto priv = self->priv; + guint len = frames * GST_AUDIO_INFO_BPF (&rb->spec.info); + + while (len > 0) { + gint seg; + guint8 *ptr; + gint avail; + + if (!gst_audio_ring_buffer_prepare_read (rb, &seg, &ptr, &avail)) + return; + + avail -= priv->segoffset; + gint to_consume = MIN ((gint) len, avail); + + priv->segoffset += to_consume; + len -= to_consume; + + if (priv->segoffset == rb->spec.segsize) { + gst_audio_ring_buffer_clear (rb, seg); + gst_audio_ring_buffer_advance (rb, 1); + priv->segoffset = 0; + } + } +} + +static void +gst_wasapi2_rbuf_insert_silence_frames (GstWasapi2Rbuf * self, guint frames) +{ + auto rb = GST_AUDIO_RING_BUFFER_CAST (self); + auto priv = self->priv; + guint bpf = GST_AUDIO_INFO_BPF (&rb->spec.info); + guint len = frames * bpf; + + while (len > 0) { + gint segment; + guint8 *writeptr; + gint avail; + + if (!gst_audio_ring_buffer_prepare_read (rb, &segment, &writeptr, &avail)) + break; + + avail -= priv->segoffset; + 
gint to_write = MIN ((gint) len, avail); + + gst_audio_format_info_fill_silence (rb->spec.info.finfo, + writeptr + priv->segoffset, to_write); + + priv->segoffset += to_write; + len -= to_write; + + if (priv->segoffset == rb->spec.segsize) { + gst_audio_ring_buffer_advance (rb, 1); + priv->segoffset = 0; + } + } +} + +static gpointer +gst_wasapi2_rbuf_loop_thread (GstWasapi2Rbuf * self) +{ + auto priv = self->priv; + DWORD task_idx = 0; + auto task_handle = AvSetMmThreadCharacteristicsW (L"Pro Audio", &task_idx); + + CoInitializeEx (nullptr, COINIT_MULTITHREADED); + + bool loop_running = true; + + /* Dummy event handles, so that IO events can have higher priority than user commands */ + auto dummy_render = CreateEvent (nullptr, FALSE, FALSE, nullptr); + auto dummy_capture = CreateEvent (nullptr, FALSE, FALSE, nullptr); + + priv->fallback_timer = CreateWaitableTimerExW (nullptr, + nullptr, CREATE_WAITABLE_TIMER_HIGH_RESOLUTION, TIMER_ALL_ACCESS); + + if (!priv->fallback_timer) { + GST_WARNING_OBJECT (self, + "High-resolution timer not available, using default"); + priv->fallback_timer = CreateWaitableTimer (nullptr, FALSE, nullptr); + } + + /* Another timer to detect device-removed state, since I/O events + * would not be signalled on device-removed state */ + priv->monitor_timer = CreateWaitableTimer (nullptr, FALSE, nullptr); + + HANDLE waitables[] = { dummy_render, dummy_capture, + priv->fallback_timer, priv->monitor_timer, priv->command_handle + }; + + GST_DEBUG_OBJECT (self, "Entering loop"); + + auto default_format = gst_wasapi2_get_default_mix_format (); + GstCaps *default_caps; + gst_wasapi2_util_parse_waveformatex (default_format, &default_caps, nullptr); + + while (loop_running) { + auto wait_ret = WaitForMultipleObjects (G_N_ELEMENTS (waitables), + waitables, FALSE, INFINITE); + + switch (wait_ret) { + case WAIT_OBJECT_0: + if (priv->running) { + HRESULT hr = S_OK; + if (priv->ctx->endpoint_class == + GST_WASAPI2_ENDPOINT_CLASS_LOOPBACK_CAPTURE) { + hr = 
fill_loopback_silence (self); + if (SUCCEEDED (hr)) + hr = gst_wasapi2_rbuf_process_read (self); + } else { + if (priv->ctx->is_exclusive) + hr = gst_wasapi2_rbuf_process_write_exclusive (self); + else + hr = gst_wasapi2_rbuf_process_write (self); + } + + if (FAILED (hr)) { + gst_wasapi2_rbuf_post_io_error (self, hr, TRUE); + gst_wasapi2_rbuf_start_fallback_timer (self); + } + } + break; + case WAIT_OBJECT_0 + 1: + if (priv->running) { + auto hr = gst_wasapi2_rbuf_process_read (self); + if ((hr == AUDCLNT_E_ENDPOINT_CREATE_FAILED || + hr == AUDCLNT_E_DEVICE_INVALIDATED) && priv->ctx->is_default + && !gst_wasapi2_is_loopback_class (priv->ctx->endpoint_class)) { + GST_WARNING_OBJECT (self, + "Device was unplugged but client can support automatic routing"); + hr = S_OK; + } + + if (FAILED (hr)) { + gst_wasapi2_rbuf_post_io_error (self, hr, FALSE); + gst_wasapi2_rbuf_start_fallback_timer (self); + } + } + break; + case WAIT_OBJECT_0 + 2: + { + if (!priv->running || !priv->fallback_timer_armed) + break; + + LARGE_INTEGER qpc_now; + QueryPerformanceCounter (&qpc_now); + + LONGLONG elapsed = qpc_now.QuadPart - priv->fallback_qpc_base.QuadPart; + UINT64 elapsed_100ns = elapsed * 10000000ULL / priv->qpc_freq.QuadPart; + auto rb = GST_AUDIO_RING_BUFFER_CAST (self); + UINT32 rate = GST_AUDIO_INFO_RATE (&rb->spec.info); + UINT64 expected_frames = (elapsed_100ns * rate) / 10000000ULL; + UINT64 delta = expected_frames - priv->fallback_frames_processed; + + if (delta > 0) { + GST_TRACE_OBJECT (self, + "processing fallback %u frames", (guint) delta); + + if (priv->endpoint_class == GST_WASAPI2_ENDPOINT_CLASS_RENDER) + gst_wasapi2_rbuf_discard_frames (self, (guint) delta); + else + gst_wasapi2_rbuf_insert_silence_frames (self, (guint) delta); + + priv->fallback_frames_processed += delta; + } + + break; + } + case WAIT_OBJECT_0 + 3: + { + if (!priv->running || !priv->ctx || !priv->monitor_timer_armed) + break; + + UINT32 dummy; + auto hr = priv->ctx->client->GetCurrentPadding 
(&dummy); + if (hr == AUDCLNT_E_DEVICE_INVALIDATED && !priv->ctx->error_posted) { + priv->ctx->error_posted = true; + gst_wasapi2_rbuf_post_io_error (self, AUDCLNT_E_DEVICE_INVALIDATED, + priv->endpoint_class == GST_WASAPI2_ENDPOINT_CLASS_RENDER); + gst_wasapi2_rbuf_start_fallback_timer (self); + } + + break; + } + case WAIT_OBJECT_0 + 4: + /* Wakeup event for event processing */ + break; + default: + GST_WARNING_OBJECT (self, + "Unexpected wait return 0x%x", (guint) wait_ret); + loop_running = false; + break; + } + + /* Process events */ + { + std::unique_lock < std::mutex > lk (priv->lock); + while (!priv->cmd_queue.empty ()) { + auto cmd = priv->cmd_queue.front (); + priv->cmd_queue.pop (); + lk.unlock (); + + auto cmd_name = command_type_to_string (cmd->type); + GST_DEBUG_OBJECT (self, "Got command %s", cmd_name); + switch (cmd->type) { + case CommandType::Shutdown: + loop_running = false; + cmd->hr = S_OK; + SetEvent (cmd->event_handle); + break; + case CommandType::SetDevice: + { + auto scmd = std::dynamic_pointer_cast < CommandSetDevice > (cmd); + priv->device_id = scmd->device_id; + priv->endpoint_class = scmd->endpoint_class; + priv->pid = scmd->pid; + priv->low_latency = scmd->low_latency; + priv->exclusive = scmd->exclusive; + + if (priv->opened) { + GST_DEBUG_OBJECT (self, + "Have opened device, creating context asynchronously"); + gst_wasapi2_rbuf_create_ctx_async (self); + } + + cmd->hr = S_OK; + SetEvent (cmd->event_handle); + break; + } + case CommandType::UpdateDevice: + { + auto ucmd = std::dynamic_pointer_cast < CommandUpdateDevice > (cmd); + if (priv->opened) { + GST_DEBUG_OBJECT (self, "Updating device"); + + gst_wasapi2_rbuf_stop_fallback_timer (self); + + priv->ctx = ucmd->ctx; + + if (priv->ctx && !priv->ctx->init_done && priv->mix_format) { + if (!gst_wasapi2_rbuf_ctx_init (priv->ctx, priv->mix_format)) { + GST_WARNING_OBJECT (self, "Couldn't initialize context"); + priv->ctx = nullptr; + } + } + + if (priv->ctx) { + waitables[0] = 
priv->ctx->render_event; + waitables[1] = priv->ctx->capture_event; + + if (priv->mute) + priv->ctx->SetVolume (0); + else + priv->ctx->SetVolume (priv->volume); + } else { + waitables[0] = dummy_render; + waitables[1] = dummy_capture; + + gst_wasapi2_rbuf_post_open_error (self, + ucmd->device_id.c_str ()); + if (!priv->configured_allow_dummy) { + SetEvent (cmd->event_handle); + break; + } + } + + if (priv->running) { + priv->running = false; + gst_wasapi2_rbuf_process_start (self, FALSE); + } + } + SetEvent (cmd->event_handle); + break; + } + case CommandType::Open: + priv->configured_allow_dummy = priv->allow_dummy; + gst_wasapi2_clear_wfx (&priv->mix_format); + priv->ctx = gst_wasapi2_rbuf_create_ctx (self); + + if (priv->ctx) { + waitables[0] = priv->ctx->render_event; + waitables[1] = priv->ctx->capture_event; + gst_caps_replace (&priv->caps, priv->ctx->supported_caps); + + priv->opened = true; + cmd->hr = S_OK; + } else { + gst_clear_caps (&priv->caps); + waitables[0] = dummy_render; + waitables[1] = dummy_capture; + gst_wasapi2_rbuf_post_open_error (self, priv->device_id.c_str ()); + + if (priv->configured_allow_dummy) { + gst_caps_replace (&priv->caps, default_caps); + + priv->opened = true; + cmd->hr = S_OK; + } else { + cmd->hr = E_FAIL; + } + } + SetEvent (cmd->event_handle); + break; + case CommandType::Close: + waitables[0] = dummy_render; + waitables[1] = dummy_capture; + priv->ctx = nullptr; + gst_clear_caps (&priv->caps); + cmd->hr = S_OK; + SetEvent (cmd->event_handle); + priv->opened = false; + gst_wasapi2_clear_wfx (&priv->mix_format); + gst_wasapi2_rbuf_stop_fallback_timer (self); + break; + case CommandType::Acquire: + { + auto acquire_cmd = + std::dynamic_pointer_cast < CommandAcquire > (cmd); + + if (!priv->ctx) { + priv->ctx = gst_wasapi2_rbuf_create_ctx (self); + if (!priv->ctx) { + GST_WARNING_OBJECT (self, "No context configured"); + gst_wasapi2_rbuf_post_open_error (self, + priv->device_id.c_str ()); + if (!priv->configured_allow_dummy) { + cmd->hr = 
E_FAIL; + SetEvent (cmd->event_handle); + break; + } + } + } + + if (!gst_wasapi2_rbuf_process_acquire (self, acquire_cmd->spec)) { + cmd->hr = E_FAIL; + SetEvent (cmd->event_handle); + break; + } + + priv->opened = true; + + /* Since the format is selected now, use the fixated one */ + gst_clear_caps (&priv->caps); + gst_wasapi2_util_parse_waveformatex (priv->mix_format, + &priv->caps, nullptr); + + if (priv->ctx) { + waitables[0] = priv->ctx->render_event; + waitables[1] = priv->ctx->capture_event; + + if (priv->mute) + priv->ctx->SetVolume (0); + else + priv->ctx->SetVolume (priv->volume); + } else { + waitables[0] = dummy_render; + waitables[1] = dummy_capture; + } + + cmd->hr = S_OK; + SetEvent (cmd->event_handle); + break; + } + case CommandType::Release: + cmd->hr = gst_wasapi2_rbuf_process_release (self); + gst_wasapi2_rbuf_stop_fallback_timer (self); + SetEvent (cmd->event_handle); + break; + case CommandType::Start: + cmd->hr = gst_wasapi2_rbuf_process_start (self, TRUE); + SetEvent (cmd->event_handle); + break; + case CommandType::Stop: + cmd->hr = gst_wasapi2_rbuf_process_stop (self); + SetEvent (cmd->event_handle); + break; + case CommandType::GetCaps: + { + auto caps_cmd = std::dynamic_pointer_cast < CommandGetCaps > (cmd); + if (priv->caps) + caps_cmd->caps = gst_caps_ref (priv->caps); + + SetEvent (cmd->event_handle); + break; + } + case CommandType::UpdateVolume: + if (priv->ctx) { + if (priv->mute) + priv->ctx->SetVolume (0); + else + priv->ctx->SetVolume (priv->volume); + } + SetEvent (cmd->event_handle); + break; + default: + g_assert_not_reached (); + break; + } + GST_DEBUG_OBJECT (self, "command %s processed", cmd_name); + lk.lock (); + } + } + } + + gst_wasapi2_free_wfx (default_format); + gst_clear_caps (&default_caps); + priv->ctx = nullptr; + priv->cmd_queue = { }; + gst_wasapi2_clear_wfx (&priv->mix_format); + + CoUninitialize (); + + if (task_handle) + AvRevertMmThreadCharacteristics (task_handle); + + GST_DEBUG_OBJECT (self, "Exit loop"); + + CloseHandle 
(dummy_render); + CloseHandle (dummy_capture); + + CancelWaitableTimer (priv->monitor_timer); + CloseHandle (priv->monitor_timer); + + CancelWaitableTimer (priv->fallback_timer); + CloseHandle (priv->fallback_timer); + + return nullptr; +} + +static guint +gst_wasapi2_rbuf_delay (GstAudioRingBuffer * buf) +{ + /* NOTE: WASAPI supports the GetCurrentPadding() method for querying + * the currently unread buffer size, but it doesn't seem to be quite useful + * here because: + * + * In case of a capture client, GetCurrentPadding() will return the number of + * unread frames, which will be identical to the pNumFramesToRead value + * returned by IAudioCaptureClient::GetBuffer(). Since we are running in + * event-driven mode, WASAPI signals whenever data is available, + * so it's likely zero at this moment. And there is a chance of + * returning an incorrect value here because our IO callback happens on + * another thread. + * + * And the render client's padding will return the total size of the buffer, + * which is likely larger than twice our period, and therefore doesn't + * correctly represent the number of frames queued in the device + */ + return 0; +} + +GstWasapi2Rbuf * +gst_wasapi2_rbuf_new (gpointer parent, GstWasapi2RbufCallback callback) +{ + auto self = (GstWasapi2Rbuf *) g_object_new (GST_TYPE_WASAPI2_RBUF, nullptr); + gst_object_ref_sink (self); + + auto priv = self->priv; + priv->invalidated_cb = callback; + g_weak_ref_set (&priv->parent, parent); + priv->thread = g_thread_new ("GstWasapi2Rbuf", + (GThreadFunc) gst_wasapi2_rbuf_loop_thread, self); + + return self; +} + +void +gst_wasapi2_rbuf_set_device (GstWasapi2Rbuf * rbuf, const gchar * device_id, + GstWasapi2EndpointClass endpoint_class, guint pid, gboolean low_latency, + gboolean exclusive) +{ + auto cmd = std::make_shared < CommandSetDevice > (); + + if (device_id) + cmd->device_id = device_id; + cmd->endpoint_class = endpoint_class; + cmd->pid = pid; + cmd->low_latency = low_latency; + cmd->exclusive = exclusive; + + gst_wasapi2_rbuf_push_command (rbuf, cmd); + + WaitForSingleObject (cmd->event_handle, INFINITE); +} + +GstCaps * +gst_wasapi2_rbuf_get_caps (GstWasapi2Rbuf * rbuf) +{ + auto cmd = std::make_shared < CommandGetCaps > (); + + gst_wasapi2_rbuf_push_command (rbuf, cmd); + WaitForSingleObject (cmd->event_handle, INFINITE); + + return cmd->caps; +} + +void +gst_wasapi2_rbuf_set_mute (GstWasapi2Rbuf * rbuf, gboolean mute) +{ + auto priv = rbuf->priv; + + priv->mute = mute; + + auto cmd = std::make_shared < CommandData > (CommandType::UpdateVolume); + + gst_wasapi2_rbuf_push_command (rbuf, cmd); +} + +gboolean +gst_wasapi2_rbuf_get_mute (GstWasapi2Rbuf * rbuf) +{ + auto priv = rbuf->priv; + + return priv->mute.load (); +} + +void +gst_wasapi2_rbuf_set_volume (GstWasapi2Rbuf * rbuf, gdouble volume) +{ + auto priv = rbuf->priv; + + priv->volume = (float) volume; + + auto cmd = std::make_shared < CommandData > (CommandType::UpdateVolume); + + gst_wasapi2_rbuf_push_command (rbuf, cmd); +} + +gdouble +gst_wasapi2_rbuf_get_volume (GstWasapi2Rbuf * 
rbuf) +{ + auto priv = rbuf->priv; + + return (gdouble) priv->volume.load (); +} + +void +gst_wasapi2_rbuf_set_device_mute_monitoring (GstWasapi2Rbuf * rbuf, + gboolean value) +{ + auto priv = rbuf->priv; + + priv->monitor_device_mute.store (value, std::memory_order_release); +} + +void +gst_wasapi2_rbuf_set_continue_on_error (GstWasapi2Rbuf * rbuf, gboolean value) +{ + auto priv = rbuf->priv; + + priv->allow_dummy = value; +}
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/wasapi2/gstwasapi2rbuf.h
Added
@@ -0,0 +1,63 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/gst.h> +#include <gst/audio/audio.h> +#include "gstwasapi2util.h" + +G_BEGIN_DECLS + +#define GST_TYPE_WASAPI2_RBUF (gst_wasapi2_rbuf_get_type()) +G_DECLARE_FINAL_TYPE (GstWasapi2Rbuf, gst_wasapi2_rbuf, + GST, WASAPI2_RBUF, GstAudioRingBuffer); + +typedef void (*GstWasapi2RbufCallback) (gpointer elem); + +GstWasapi2Rbuf * gst_wasapi2_rbuf_new (gpointer parent, + GstWasapi2RbufCallback callback); + +void gst_wasapi2_rbuf_set_device (GstWasapi2Rbuf * rbuf, + const gchar * device_id, + GstWasapi2EndpointClass endpoint_class, + guint pid, + gboolean low_latency, + gboolean exclusive); + +GstCaps * gst_wasapi2_rbuf_get_caps (GstWasapi2Rbuf * rbuf); + +void gst_wasapi2_rbuf_set_mute (GstWasapi2Rbuf * rbuf, + gboolean mute); + +gboolean gst_wasapi2_rbuf_get_mute (GstWasapi2Rbuf * rbuf); + +void gst_wasapi2_rbuf_set_volume (GstWasapi2Rbuf * rbuf, + gdouble volume); + +gdouble gst_wasapi2_rbuf_get_volume (GstWasapi2Rbuf * rbuf); + +void gst_wasapi2_rbuf_set_device_mute_monitoring (GstWasapi2Rbuf * rbuf, + gboolean value); + +void gst_wasapi2_rbuf_set_continue_on_error (GstWasapi2Rbuf * rbuf, + gboolean 
value); + +G_END_DECLS +
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/wasapi2/gstwasapi2sink.cpp
Added
@@ -0,0 +1,402 @@ +/* + * Copyright (C) 2008 Ole André Vadla Ravnås <ole.andre.ravnas@tandberg.com> + * Copyright (C) 2013 Collabora Ltd. + * Author: Sebastian Dröge <sebastian.droege@collabora.co.uk> + * Copyright (C) 2018 Centricular Ltd. + * Author: Nirbheek Chauhan <nirbheek@centricular.com> + * Copyright (C) 2020 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +/** + * SECTION:element-wasapi2sink + * @title: wasapi2sink + * + * Provides audio playback using the Windows Audio Session API available with + * Windows 10. + * + * ## Example pipelines + * |[ + * gst-launch-1.0 -v audiotestsrc ! wasapi2sink + * ]| Generate audio test buffers and render to the default audio device. + * + * |[ + * gst-launch-1.0 -v audiotestsrc samplesperbuffer=160 ! 
wasapi2sink low-latency=true + * ]| Same as above, but with the minimum possible latency + * + */ +#ifdef HAVE_CONFIG_H +#include <config.h> +#endif + +#include "gstwasapi2sink.h" +#include "gstwasapi2util.h" +#include "gstwasapi2rbuf.h" +#include <mutex> +#include <atomic> + +GST_DEBUG_CATEGORY_STATIC (gst_wasapi2_sink_debug); +#define GST_CAT_DEFAULT gst_wasapi2_sink_debug + +static GstStaticPadTemplate sink_template = GST_STATIC_PAD_TEMPLATE ("sink", + GST_PAD_SINK, + GST_PAD_ALWAYS, + GST_STATIC_CAPS (GST_WASAPI2_STATIC_CAPS)); + +#define DEFAULT_LOW_LATENCY FALSE +#define DEFAULT_MUTE FALSE +#define DEFAULT_VOLUME 1.0 +#define DEFAULT_CONTINUE_ON_ERROR FALSE +#define DEFAULT_EXCLUSIVE FALSE + +enum +{ + PROP_0, + PROP_DEVICE, + PROP_LOW_LATENCY, + PROP_MUTE, + PROP_VOLUME, + PROP_DISPATCHER, + PROP_CONTINUE_ON_ERROR, + PROP_EXCLUSIVE, +}; + +/* *INDENT-OFF* */ +struct GstWasapi2SinkPrivate +{ + ~GstWasapi2SinkPrivate () + { + gst_object_unref (rbuf); + g_free (device_id); + } + + GstWasapi2Rbuf *rbuf = nullptr; + + std::mutex lock; + std::atomic<bool> device_invalidated = { false }; + + /* properties */ + gchar *device_id = nullptr; + gboolean low_latency = DEFAULT_LOW_LATENCY; + gboolean continue_on_error = DEFAULT_CONTINUE_ON_ERROR; + gboolean exclusive = DEFAULT_EXCLUSIVE; +}; +/* *INDENT-ON* */ + +struct _GstWasapi2Sink +{ + GstAudioBaseSink parent; + + GstWasapi2SinkPrivate *priv; +}; + +static void gst_wasapi2_sink_finalize (GObject * object); +static void gst_wasapi2_sink_set_property (GObject * object, guint prop_id, + const GValue * value, GParamSpec * pspec); +static void gst_wasapi2_sink_get_property (GObject * object, guint prop_id, + GValue * value, GParamSpec * pspec); + +static GstCaps *gst_wasapi2_sink_get_caps (GstBaseSink * bsink, + GstCaps * filter); +static GstAudioRingBuffer *gst_wasapi2_sink_create_ringbuffer (GstAudioBaseSink + * sink); + +#define gst_wasapi2_sink_parent_class parent_class +G_DEFINE_TYPE_WITH_CODE (GstWasapi2Sink, 
gst_wasapi2_sink, + GST_TYPE_AUDIO_BASE_SINK, + G_IMPLEMENT_INTERFACE (GST_TYPE_STREAM_VOLUME, nullptr)); + +static void +gst_wasapi2_sink_class_init (GstWasapi2SinkClass * klass) +{ + auto gobject_class = G_OBJECT_CLASS (klass); + auto element_class = GST_ELEMENT_CLASS (klass); + auto basesink_class = GST_BASE_SINK_CLASS (klass); + auto audiobasesink_class = GST_AUDIO_BASE_SINK_CLASS (klass); + + gobject_class->finalize = gst_wasapi2_sink_finalize; + gobject_class->set_property = gst_wasapi2_sink_set_property; + gobject_class->get_property = gst_wasapi2_sink_get_property; + + g_object_class_install_property (gobject_class, PROP_DEVICE, + g_param_spec_string ("device", "Device", + "WASAPI device endpoint ID as provided by IMMDevice::GetId", + nullptr, (GParamFlags) (GST_PARAM_MUTABLE_READY | G_PARAM_READWRITE | + G_PARAM_STATIC_STRINGS))); + + g_object_class_install_property (gobject_class, PROP_LOW_LATENCY, + g_param_spec_boolean ("low-latency", "Low latency", + "Optimize all settings for lowest latency. Always safe to enable.", + DEFAULT_LOW_LATENCY, (GParamFlags) (GST_PARAM_MUTABLE_READY | + G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + g_object_class_install_property (gobject_class, PROP_MUTE, + g_param_spec_boolean ("mute", "Mute", "Mute state of this stream", + DEFAULT_MUTE, (GParamFlags) (GST_PARAM_MUTABLE_PLAYING | + G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + g_object_class_install_property (gobject_class, PROP_VOLUME, + g_param_spec_double ("volume", "Volume", "Volume of this stream", + 0.0, 1.0, DEFAULT_VOLUME, + (GParamFlags) (GST_PARAM_MUTABLE_PLAYING | G_PARAM_READWRITE | + G_PARAM_STATIC_STRINGS))); + + /** + * GstWasapi2Sink:dispatcher: + * + * ICoreDispatcher COM object used for activating device from UI thread. + * + * Since: 1.18 + */ + g_object_class_install_property (gobject_class, PROP_DISPATCHER, + g_param_spec_pointer ("dispatcher", "Dispatcher", + "ICoreDispatcher COM object to use. 
In order for the application to ask " + "permission for the audio device, device activation should run " + "on the UI thread via ICoreDispatcher. This element will increase " + "the reference count of the given ICoreDispatcher and release it after " + "use. Therefore, the caller does not need to consider additional " + "reference count management", + (GParamFlags) (GST_PARAM_MUTABLE_READY | G_PARAM_WRITABLE | + G_PARAM_STATIC_STRINGS))); + + /** + * GstWasapi2Sink:continue-on-error: + * + * If enabled, wasapi2sink will post a warning message instead of an error + * when device failures occur, such as open failure, I/O error, + * or device removal. + * The element will continue to consume audio buffers and behave as if + * a render device were active, allowing the pipeline to keep running even + * when no audio endpoint is available. + * + * Since: 1.28 + */ + g_object_class_install_property (gobject_class, PROP_CONTINUE_ON_ERROR, + g_param_spec_boolean ("continue-on-error", "Continue On Error", + "Continue running and consume buffers on device failure", + DEFAULT_CONTINUE_ON_ERROR, (GParamFlags) (GST_PARAM_MUTABLE_READY | + G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + /** + * GstWasapi2Sink:exclusive: + * + * Since: 1.28 + */ + g_object_class_install_property (gobject_class, PROP_EXCLUSIVE, + g_param_spec_boolean ("exclusive", "Exclusive", + "Open the device in exclusive mode", + DEFAULT_EXCLUSIVE, + (GParamFlags) (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + gst_element_class_add_static_pad_template (element_class, &sink_template); + gst_element_class_set_static_metadata (element_class, "Wasapi2Sink", + "Sink/Audio/Hardware", + "Stream audio to an audio render device through WASAPI", + "Seungha Yang <seungha@centricular.com>"); + + basesink_class->get_caps = GST_DEBUG_FUNCPTR (gst_wasapi2_sink_get_caps); + + audiobasesink_class->create_ringbuffer = + GST_DEBUG_FUNCPTR (gst_wasapi2_sink_create_ringbuffer); + + GST_DEBUG_CATEGORY_INIT (gst_wasapi2_sink_debug, 
"wasapi2sink", + 0, "Windows audio session API sink"); +} + +static void +gst_wasapi2_sink_on_invalidated (gpointer elem) +{ + auto self = GST_WASAPI2_SINK (elem); + auto priv = self->priv; + + GST_WARNING_OBJECT (self, "Device invalidated"); + + priv->device_invalidated = true; +} + +static void +gst_wasapi2_sink_init (GstWasapi2Sink * self) +{ + auto priv = new GstWasapi2SinkPrivate (); + + priv->rbuf = gst_wasapi2_rbuf_new (self, gst_wasapi2_sink_on_invalidated); + gst_wasapi2_rbuf_set_device (priv->rbuf, nullptr, + GST_WASAPI2_ENDPOINT_CLASS_RENDER, 0, DEFAULT_LOW_LATENCY, + DEFAULT_EXCLUSIVE); + + self->priv = priv; +} + +static void +gst_wasapi2_sink_finalize (GObject * object) +{ + auto self = GST_WASAPI2_SINK (object); + + GST_LOG_OBJECT (self, "Finalize"); + + delete self->priv; + + G_OBJECT_CLASS (parent_class)->finalize (object); +} + +static void +gst_wasapi2_sink_set_device (GstWasapi2Sink * self, bool updated) +{ + auto priv = self->priv; + bool expected = true; + bool set_device = priv->device_invalidated.compare_exchange_strong (expected, + false); + + if (!set_device && !updated) + return; + + gst_wasapi2_rbuf_set_device (priv->rbuf, priv->device_id, + GST_WASAPI2_ENDPOINT_CLASS_RENDER, 0, priv->low_latency, priv->exclusive); +} + +static void +gst_wasapi2_sink_set_property (GObject * object, guint prop_id, + const GValue * value, GParamSpec * pspec) +{ + auto self = GST_WASAPI2_SINK (object); + auto priv = self->priv; + + std::lock_guard < std::mutex > lk (priv->lock); + + switch (prop_id) { + case PROP_DEVICE: + { + auto new_val = g_value_get_string (value); + bool updated = false; + if (g_strcmp0 (new_val, priv->device_id) != 0) { + g_free (priv->device_id); + priv->device_id = g_strdup (new_val); + updated = true; + } + + gst_wasapi2_sink_set_device (self, updated); + break; + } + case PROP_LOW_LATENCY: + { + auto new_val = g_value_get_boolean (value); + bool updated = false; + if (new_val != priv->low_latency) { + priv->low_latency = new_val; 
+ updated = true; + } + + gst_wasapi2_sink_set_device (self, updated); + break; + } + case PROP_MUTE: + gst_wasapi2_rbuf_set_mute (priv->rbuf, g_value_get_boolean (value)); + break; + case PROP_VOLUME: + gst_wasapi2_rbuf_set_volume (priv->rbuf, g_value_get_double (value)); + break; + case PROP_DISPATCHER: + /* Unused */ + break; + case PROP_CONTINUE_ON_ERROR: + priv->continue_on_error = g_value_get_boolean (value); + gst_wasapi2_rbuf_set_continue_on_error (priv->rbuf, + priv->continue_on_error); + break; + case PROP_EXCLUSIVE: + { + auto new_val = g_value_get_boolean (value); + bool updated = false; + if (new_val != priv->exclusive) { + priv->exclusive = new_val; + updated = true; + } + + gst_wasapi2_sink_set_device (self, updated); + break; + } + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static void +gst_wasapi2_sink_get_property (GObject * object, guint prop_id, + GValue * value, GParamSpec * pspec) +{ + auto self = GST_WASAPI2_SINK (object); + auto priv = self->priv; + + std::lock_guard < std::mutex > lk (priv->lock); + + switch (prop_id) { + case PROP_DEVICE: + g_value_set_string (value, priv->device_id); + break; + case PROP_LOW_LATENCY: + g_value_set_boolean (value, priv->low_latency); + break; + case PROP_MUTE: + g_value_set_boolean (value, gst_wasapi2_rbuf_get_mute (priv->rbuf)); + break; + case PROP_VOLUME: + g_value_set_double (value, gst_wasapi2_rbuf_get_volume (priv->rbuf)); + break; + case PROP_CONTINUE_ON_ERROR: + g_value_set_boolean (value, priv->continue_on_error); + break; + case PROP_EXCLUSIVE: + g_value_set_boolean (value, priv->exclusive); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static GstCaps * +gst_wasapi2_sink_get_caps (GstBaseSink * bsink, GstCaps * filter) +{ + auto self = GST_WASAPI2_SINK (bsink); + auto priv = self->priv; + auto caps = gst_wasapi2_rbuf_get_caps (priv->rbuf); + + if (!caps) + caps = gst_pad_get_pad_template_caps 
(bsink->sinkpad); + + if (filter) { + GstCaps *filtered = + gst_caps_intersect_full (filter, caps, GST_CAPS_INTERSECT_FIRST); + gst_caps_unref (caps); + caps = filtered; + } + + GST_DEBUG_OBJECT (self, "returning caps %" GST_PTR_FORMAT, caps); + + return caps; +} + +static GstAudioRingBuffer * +gst_wasapi2_sink_create_ringbuffer (GstAudioBaseSink * sink) +{ + auto self = GST_WASAPI2_SINK (sink); + auto priv = self->priv; + + return GST_AUDIO_RING_BUFFER (priv->rbuf); +}
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/wasapi2/gstwasapi2src.cpp
Added
@@ -0,0 +1,606 @@ +/* + * Copyright (C) 2008 Ole André Vadla Ravnås <ole.andre.ravnas@tandberg.com> + * Copyright (C) 2018 Centricular Ltd. + * Author: Nirbheek Chauhan <nirbheek@centricular.com> + * Copyright (C) 2020 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +/** + * SECTION:element-wasapi2src + * @title: wasapi2src + * + * Provides audio capture from the Windows Audio Session API available with + * Windows 10. + * + * ## Example pipelines + * |[ + * gst-launch-1.0 -v wasapi2src ! fakesink + * ]| Capture from the default audio device and render to fakesink. + * + * |[ + * gst-launch-1.0 -v wasapi2src low-latency=true ! fakesink + * ]| Capture from the default audio device with the minimum possible latency and render to fakesink.
+ * + */ +#ifdef HAVE_CONFIG_H +#include <config.h> +#endif + +#include "gstwasapi2src.h" +#include "gstwasapi2util.h" +#include "gstwasapi2rbuf.h" +#include <mutex> +#include <atomic> + +GST_DEBUG_CATEGORY_STATIC (gst_wasapi2_src_debug); +#define GST_CAT_DEFAULT gst_wasapi2_src_debug + +static GstStaticPadTemplate src_template = GST_STATIC_PAD_TEMPLATE ("src", + GST_PAD_SRC, + GST_PAD_ALWAYS, + GST_STATIC_CAPS (GST_WASAPI2_STATIC_CAPS)); + +/** + * GstWasapi2SrcLoopbackMode: + * + * Loopback capture mode + * + * Since: 1.22 + */ +typedef enum +{ + /** + * GstWasapi2SrcLoopbackMode::default: + * + * Default loopback mode + * + * Since: 1.22 + */ + GST_WASAPI2_SRC_LOOPBACK_DEFAULT, + + /** + * GstWasapi2SrcLoopbackMode::include-process-tree: + * + * Captures only specified process and its child process + * + * Since: 1.22 + */ + GST_WASAPI2_SRC_LOOPBACK_INCLUDE_PROCESS_TREE, + + /** + * GstWasapi2SrcLoopbackMode::exclude-process-tree: + * + * Excludes specified process and its child process + * + * Since: 1.22 + */ + GST_WASAPI2_SRC_LOOPBACK_EXCLUDE_PROCESS_TREE, +} GstWasapi2SrcLoopbackMode; + +#define GST_TYPE_WASAPI2_SRC_LOOPBACK_MODE (gst_wasapi2_src_loopback_mode_get_type ()) +static GType +gst_wasapi2_src_loopback_mode_get_type (void) +{ + static GType loopback_type = 0; + static const GEnumValue types[] = { + {GST_WASAPI2_SRC_LOOPBACK_DEFAULT, "Default", "default"}, + {GST_WASAPI2_SRC_LOOPBACK_INCLUDE_PROCESS_TREE, + "Include process and its child processes", + "include-process-tree"}, + {GST_WASAPI2_SRC_LOOPBACK_EXCLUDE_PROCESS_TREE, + "Exclude process and its child processes", + "exclude-process-tree"}, + {0, nullptr, nullptr} + }; + + GST_WASAPI2_CALL_ONCE_BEGIN { + loopback_type = g_enum_register_static ("GstWasapi2SrcLoopbackMode", types); + } GST_WASAPI2_CALL_ONCE_END; + + return loopback_type; +} + +#define DEFAULT_LOW_LATENCY FALSE +#define DEFAULT_MUTE FALSE +#define DEFAULT_VOLUME 1.0 +#define DEFAULT_LOOPBACK FALSE +#define DEFAULT_LOOPBACK_MODE
GST_WASAPI2_SRC_LOOPBACK_DEFAULT +#define DEFAULT_LOOPBACK_SILENCE_ON_DEVICE_MUTE FALSE +#define DEFAULT_CONTINUE_ON_ERROR FALSE +#define DEFAULT_EXCLUSIVE FALSE + +enum +{ + PROP_0, + PROP_DEVICE, + PROP_LOW_LATENCY, + PROP_MUTE, + PROP_VOLUME, + PROP_DISPATCHER, + PROP_LOOPBACK, + PROP_LOOPBACK_MODE, + PROP_LOOPBACK_TARGET_PID, + PROP_LOOPBACK_SILENCE_ON_DEVICE_MUTE, + PROP_CONTINUE_ON_ERROR, + PROP_EXCLUSIVE, +}; + +/* *INDENT-OFF* */ +struct GstWasapi2SrcPrivate +{ + ~GstWasapi2SrcPrivate () + { + gst_object_unref (rbuf); + g_free (device_id); + } + + GstWasapi2Rbuf *rbuf = nullptr; + + std::mutex lock; + std::atomic<bool> device_invalidated = { false }; + + /* properties */ + gchar *device_id = nullptr; + gboolean low_latency = DEFAULT_LOW_LATENCY; + gboolean loopback = DEFAULT_LOOPBACK; + GstWasapi2SrcLoopbackMode loopback_mode = DEFAULT_LOOPBACK_MODE; + guint loopback_pid = 0; + gboolean loopback_silence_on_device_mute = + DEFAULT_LOOPBACK_SILENCE_ON_DEVICE_MUTE; + gboolean continue_on_error = DEFAULT_CONTINUE_ON_ERROR; + gboolean exclusive = DEFAULT_EXCLUSIVE; +}; +/* *INDENT-ON* */ + +struct _GstWasapi2Src +{ + GstAudioBaseSrc parent; + + GstWasapi2SrcPrivate *priv; +}; + +static void gst_wasapi2_src_finalize (GObject * object); +static void gst_wasapi2_src_set_property (GObject * object, guint prop_id, + const GValue * value, GParamSpec * pspec); +static void gst_wasapi2_src_get_property (GObject * object, guint prop_id, + GValue * value, GParamSpec * pspec); + +static GstCaps *gst_wasapi2_src_get_caps (GstBaseSrc * bsrc, GstCaps * filter); +static GstAudioRingBuffer *gst_wasapi2_src_create_ringbuffer (GstAudioBaseSrc * + src); + +#define gst_wasapi2_src_parent_class parent_class +G_DEFINE_TYPE_WITH_CODE (GstWasapi2Src, gst_wasapi2_src, + GST_TYPE_AUDIO_BASE_SRC, + G_IMPLEMENT_INTERFACE (GST_TYPE_STREAM_VOLUME, nullptr)); + +static void +gst_wasapi2_src_class_init (GstWasapi2SrcClass * klass) +{ + auto gobject_class = G_OBJECT_CLASS (klass); + auto 
element_class = GST_ELEMENT_CLASS (klass); + auto basesrc_class = GST_BASE_SRC_CLASS (klass); + auto audiobasesrc_class = GST_AUDIO_BASE_SRC_CLASS (klass); + + gobject_class->finalize = gst_wasapi2_src_finalize; + gobject_class->set_property = gst_wasapi2_src_set_property; + gobject_class->get_property = gst_wasapi2_src_get_property; + + g_object_class_install_property (gobject_class, PROP_DEVICE, + g_param_spec_string ("device", "Device", + "Audio device ID as provided by " + "WASAPI device endpoint ID as provided by IMMDevice::GetId", + nullptr, (GParamFlags) (GST_PARAM_MUTABLE_PLAYING | + G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + g_object_class_install_property (gobject_class, PROP_LOW_LATENCY, + g_param_spec_boolean ("low-latency", "Low latency", + "Optimize all settings for lowest latency. Always safe to enable.", + DEFAULT_LOW_LATENCY, (GParamFlags) (GST_PARAM_MUTABLE_READY | + G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + g_object_class_install_property (gobject_class, PROP_MUTE, + g_param_spec_boolean ("mute", "Mute", "Mute state of this stream", + DEFAULT_MUTE, (GParamFlags) (GST_PARAM_MUTABLE_PLAYING | + G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + g_object_class_install_property (gobject_class, PROP_VOLUME, + g_param_spec_double ("volume", "Volume", "Volume of this stream", + 0.0, 1.0, DEFAULT_VOLUME, (GParamFlags) (GST_PARAM_MUTABLE_PLAYING | + G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + /** + * GstWasapi2Src:dispatcher: + * + * ICoreDispatcher COM object used for activating device from UI thread. + * + * Since: 1.18 + */ + g_object_class_install_property (gobject_class, PROP_DISPATCHER, + g_param_spec_pointer ("dispatcher", "Dispatcher", + "ICoreDispatcher COM object to use. In order for application to ask " + "permission of audio device, device activation should be running " + "on UI thread via ICoreDispatcher. This element will increase " + "the reference count of given ICoreDispatcher and release it after " + "use. 
Therefore, caller does not need to consider additional " + "reference count management", + (GParamFlags) (GST_PARAM_MUTABLE_READY | G_PARAM_WRITABLE | + G_PARAM_STATIC_STRINGS))); + + /** + * GstWasapi2Src:loopback: + * + * Open render device for loopback recording + * + * Since: 1.20 + */ + g_object_class_install_property (gobject_class, PROP_LOOPBACK, + g_param_spec_boolean ("loopback", "Loopback recording", + "Open render device for loopback recording", DEFAULT_LOOPBACK, + (GParamFlags) (GST_PARAM_MUTABLE_READY | G_PARAM_READWRITE | + G_PARAM_STATIC_STRINGS))); + + if (gst_wasapi2_can_process_loopback ()) { + /** + * GstWasapi2Src:loopback-mode: + * + * Loopback mode. "target-process-id" must be specified in case of + * process loopback modes. + * + * This feature requires "Windows 10 build 20348" + * + * Since: 1.22 + */ + g_object_class_install_property (gobject_class, PROP_LOOPBACK_MODE, + g_param_spec_enum ("loopback-mode", "Loopback Mode", + "Loopback mode to use", GST_TYPE_WASAPI2_SRC_LOOPBACK_MODE, + DEFAULT_LOOPBACK_MODE, + (GParamFlags) (GST_PARAM_CONDITIONALLY_AVAILABLE | + GST_PARAM_MUTABLE_READY | G_PARAM_READWRITE | + G_PARAM_STATIC_STRINGS))); + + /** + * GstWasapi2Src:loopback-target-pid: + * + * Target process id to be recorded or excluded depending on loopback mode + * + * This feature requires "Windows 10 build 20348" + * + * Since: 1.22 + */ + g_object_class_install_property (gobject_class, PROP_LOOPBACK_TARGET_PID, + g_param_spec_uint ("loopback-target-pid", "Loopback Target PID", + "Process ID to be recorded or excluded for process loopback mode", + 0, G_MAXUINT32, 0, + (GParamFlags) (GST_PARAM_CONDITIONALLY_AVAILABLE | + GST_PARAM_MUTABLE_READY | G_PARAM_READWRITE | + G_PARAM_STATIC_STRINGS))); + } + + /** + * GstWasapi2Src:loopback-silence-on-device-mute: + * + * When loopback recording, if the device is muted, inject silence in the pipeline + * + * Since: 1.24 + */ + g_object_class_install_property (gobject_class, + 
PROP_LOOPBACK_SILENCE_ON_DEVICE_MUTE, + g_param_spec_boolean ("loopback-silence-on-device-mute", + "Loopback Silence On Device Mute", + "When loopback recording, if the device is muted, inject silence in the pipeline", + DEFAULT_LOOPBACK_SILENCE_ON_DEVICE_MUTE, + (GParamFlags) (GST_PARAM_MUTABLE_PLAYING | G_PARAM_READWRITE | + G_PARAM_STATIC_STRINGS))); + + /** + * GstWasapi2Src:continue-on-error: + * + * If enabled, wasapi2src will post a warning message instead of an error, + * when device failures occur, such as open failure, I/O error, + * or device removal. + * The element will continue to produce audio buffers and behave as if + * a capture device were active, allowing pipeline to keep running even when + * no audio endpoint is available + * + * Since: 1.28 + */ + g_object_class_install_property (gobject_class, PROP_CONTINUE_ON_ERROR, + g_param_spec_boolean ("continue-on-error", "Continue On Error", + "Continue running and produce buffers on device failure", + DEFAULT_CONTINUE_ON_ERROR, (GParamFlags) (GST_PARAM_MUTABLE_READY | + G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + /** + * GstWasapi2Src:exclusive: + * + * Since: 1.28 + */ + g_object_class_install_property (gobject_class, PROP_EXCLUSIVE, + g_param_spec_boolean ("exclusive", "Exclusive", + "Open the device in exclusive mode", + DEFAULT_EXCLUSIVE, + (GParamFlags) (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + gst_element_class_add_static_pad_template (element_class, &src_template); + gst_element_class_set_static_metadata (element_class, "Wasapi2Src", + "Source/Audio/Hardware", + "Stream audio from an audio capture device through WASAPI", + "Seungha Yang <seungha@centricular.com>"); + + basesrc_class->get_caps = GST_DEBUG_FUNCPTR (gst_wasapi2_src_get_caps); + + audiobasesrc_class->create_ringbuffer = + GST_DEBUG_FUNCPTR (gst_wasapi2_src_create_ringbuffer); + + GST_DEBUG_CATEGORY_INIT (gst_wasapi2_src_debug, "wasapi2src", + 0, "Windows audio session API source"); + + if 
(gst_wasapi2_can_process_loopback ()) { + gst_type_mark_as_plugin_api (GST_TYPE_WASAPI2_SRC_LOOPBACK_MODE, + (GstPluginAPIFlags) 0); + } +} + +static void +gst_wasapi2_src_on_invalidated (gpointer elem) +{ + auto self = GST_WASAPI2_SRC (elem); + auto priv = self->priv; + + GST_WARNING_OBJECT (self, "Device invalidated"); + + priv->device_invalidated = true; +} + +static void +gst_wasapi2_src_init (GstWasapi2Src * self) +{ + auto priv = new GstWasapi2SrcPrivate (); + + priv->rbuf = gst_wasapi2_rbuf_new (self, gst_wasapi2_src_on_invalidated); + gst_wasapi2_rbuf_set_device (priv->rbuf, nullptr, + GST_WASAPI2_ENDPOINT_CLASS_CAPTURE, 0, DEFAULT_LOW_LATENCY, + DEFAULT_EXCLUSIVE); + + self->priv = priv; +} + +static void +gst_wasapi2_src_finalize (GObject * object) +{ + auto self = GST_WASAPI2_SRC (object); + + delete self->priv; + + G_OBJECT_CLASS (parent_class)->finalize (object); +} + +static void +gst_wasapi2_src_set_device (GstWasapi2Src * self, bool updated) +{ + auto priv = self->priv; + GstWasapi2EndpointClass device_class = GST_WASAPI2_ENDPOINT_CLASS_CAPTURE; + bool expected = true; + bool set_device = priv->device_invalidated.compare_exchange_strong (expected, + false); + + if (!set_device && !updated) + return; + + if (priv->loopback_pid) { + if (priv->loopback_mode == GST_WASAPI2_SRC_LOOPBACK_INCLUDE_PROCESS_TREE) { + device_class = + GST_WASAPI2_ENDPOINT_CLASS_INCLUDE_PROCESS_LOOPBACK_CAPTURE; + } else if (priv->loopback_mode == + GST_WASAPI2_SRC_LOOPBACK_EXCLUDE_PROCESS_TREE) { + device_class = + GST_WASAPI2_ENDPOINT_CLASS_EXCLUDE_PROCESS_LOOPBACK_CAPTURE; + } + } else if (priv->loopback) { + device_class = GST_WASAPI2_ENDPOINT_CLASS_LOOPBACK_CAPTURE; + } + + gst_wasapi2_rbuf_set_device (priv->rbuf, priv->device_id, device_class, + priv->loopback_pid, priv->low_latency, priv->exclusive); +} + +static void +gst_wasapi2_src_set_property (GObject * object, guint prop_id, + const GValue * value, GParamSpec * pspec) +{ + auto self = GST_WASAPI2_SRC (object); + 
auto priv = self->priv; + + std::lock_guard < std::mutex > lk (priv->lock); + + switch (prop_id) { + case PROP_DEVICE: + { + auto new_val = g_value_get_string (value); + bool updated = false; + if (g_strcmp0 (new_val, priv->device_id) != 0) { + g_free (priv->device_id); + priv->device_id = g_strdup (new_val); + updated = true; + } + + gst_wasapi2_src_set_device (self, updated); + break; + } + case PROP_LOW_LATENCY: + { + auto new_val = g_value_get_boolean (value); + bool updated = false; + if (new_val != priv->low_latency) { + priv->low_latency = new_val; + updated = true; + } + + gst_wasapi2_src_set_device (self, updated); + break; + } + case PROP_MUTE: + gst_wasapi2_rbuf_set_mute (priv->rbuf, g_value_get_boolean (value)); + break; + case PROP_VOLUME: + gst_wasapi2_rbuf_set_volume (priv->rbuf, g_value_get_double (value)); + break; + case PROP_DISPATCHER: + /* Unused */ + break; + case PROP_LOOPBACK: + { + auto new_val = g_value_get_boolean (value); + bool updated = false; + if (new_val != priv->loopback) { + priv->loopback = new_val; + updated = true; + } + + gst_wasapi2_src_set_device (self, updated); + break; + } + case PROP_LOOPBACK_MODE: + { + auto new_val = (GstWasapi2SrcLoopbackMode) g_value_get_enum (value); + bool updated = false; + if (new_val != priv->loopback_mode) { + priv->loopback_mode = new_val; + updated = true; + } + + gst_wasapi2_src_set_device (self, updated); + break; + } + case PROP_LOOPBACK_TARGET_PID: + { + auto new_val = g_value_get_uint (value); + bool updated = false; + if (new_val != priv->loopback_pid) { + priv->loopback_pid = new_val; + updated = true; + } + + gst_wasapi2_src_set_device (self, updated); + break; + } + case PROP_LOOPBACK_SILENCE_ON_DEVICE_MUTE: + priv->loopback_silence_on_device_mute = g_value_get_boolean (value); + gst_wasapi2_rbuf_set_device_mute_monitoring (priv->rbuf, + priv->loopback_silence_on_device_mute); + break; + case PROP_CONTINUE_ON_ERROR: + priv->continue_on_error = g_value_get_boolean (value); + 
gst_wasapi2_rbuf_set_continue_on_error (priv->rbuf, + priv->continue_on_error); + break; + case PROP_EXCLUSIVE: + { + auto new_val = g_value_get_boolean (value); + bool updated = false; + if (new_val != priv->exclusive) { + priv->exclusive = new_val; + updated = true; + } + + gst_wasapi2_src_set_device (self, updated); + break; + } + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static void +gst_wasapi2_src_get_property (GObject * object, guint prop_id, + GValue * value, GParamSpec * pspec) +{ + auto self = GST_WASAPI2_SRC (object); + auto priv = self->priv; + + std::lock_guard < std::mutex > lk (priv->lock); + + switch (prop_id) { + case PROP_DEVICE: + g_value_set_string (value, priv->device_id); + break; + case PROP_LOW_LATENCY: + g_value_set_boolean (value, priv->low_latency); + break; + case PROP_MUTE: + g_value_set_boolean (value, gst_wasapi2_rbuf_get_mute (priv->rbuf)); + break; + case PROP_VOLUME: + g_value_set_double (value, gst_wasapi2_rbuf_get_volume (priv->rbuf)); + break; + case PROP_LOOPBACK: + g_value_set_boolean (value, priv->loopback); + break; + case PROP_LOOPBACK_MODE: + g_value_set_enum (value, priv->loopback_mode); + break; + case PROP_LOOPBACK_TARGET_PID: + g_value_set_uint (value, priv->loopback_pid); + break; + case PROP_LOOPBACK_SILENCE_ON_DEVICE_MUTE: + g_value_set_boolean (value, priv->loopback_silence_on_device_mute); + break; + case PROP_CONTINUE_ON_ERROR: + g_value_set_boolean (value, priv->continue_on_error); + break; + case PROP_EXCLUSIVE: + g_value_set_boolean (value, priv->exclusive); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static GstCaps * +gst_wasapi2_src_get_caps (GstBaseSrc * bsrc, GstCaps * filter) +{ + auto self = GST_WASAPI2_SRC (bsrc); + auto priv = self->priv; + auto caps = gst_wasapi2_rbuf_get_caps (priv->rbuf); + + if (!caps) + caps = gst_pad_get_pad_template_caps (bsrc->srcpad); + + if (filter) { + GstCaps *filtered = 
+ gst_caps_intersect_full (filter, caps, GST_CAPS_INTERSECT_FIRST); + gst_caps_unref (caps); + caps = filtered; + } + + GST_DEBUG_OBJECT (self, "returning caps %" GST_PTR_FORMAT, caps); + + return caps; +} + +static GstAudioRingBuffer * +gst_wasapi2_src_create_ringbuffer (GstAudioBaseSrc * src) +{ + auto self = GST_WASAPI2_SRC (src); + auto priv = self->priv; + + return GST_AUDIO_RING_BUFFER (priv->rbuf); +}
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/wasapi2/gstwasapi2util.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/wasapi2/gstwasapi2util.cpp
Changed
@@ -29,10 +29,26 @@ #include <winternl.h> #include <mutex> #include <string.h> +#include <wrl.h> +#include <vector> +#include <math.h> GST_DEBUG_CATEGORY_EXTERN (gst_wasapi2_debug); #define GST_CAT_DEFAULT gst_wasapi2_debug +static GstStaticCaps template_caps = GST_STATIC_CAPS (GST_WASAPI2_STATIC_CAPS); + +/* *INDENT-OFF* */ +using namespace Microsoft::WRL; +/* *INDENT-ON* */ + +/* Define GUIDs instead of linking ksuser.lib */ +DEFINE_GUID (GST_KSDATAFORMAT_SUBTYPE_PCM, 0x00000001, 0x0000, 0x0010, + 0x80, 0x00, 0x00, 0xaa, 0x00, 0x38, 0x9b, 0x71); + +DEFINE_GUID (GST_KSDATAFORMAT_SUBTYPE_IEEE_FLOAT, 0x00000003, 0x0000, 0x0010, + 0x80, 0x00, 0x00, 0xaa, 0x00, 0x38, 0x9b, 0x71); + /* Desktop only defines */ #ifndef KSAUDIO_SPEAKER_MONO #define KSAUDIO_SPEAKER_MONO (SPEAKER_FRONT_CENTER) @@ -310,7 +326,7 @@ GST_WARNING ("Unknown channel mask value for %d channel stream", nChannels); if (nChannels >= G_N_ELEMENTS (default_ch_masks)) { - GST_ERROR ("To may channels %d", nChannels); + GST_ERROR ("Too many channels %d", nChannels); return 0; } @@ -322,7 +338,7 @@ /* Too many channels, have to assume that they are all non-positional */ if (nChannels > G_N_ELEMENTS (wasapi_to_gst_pos)) { - GST_LOG ("Got too many (%i) channels, assuming non-positional", nChannels); + GST_INFO ("Got too many (%i) channels, assuming non-positional", nChannels); goto out; } @@ -379,10 +395,11 @@ case WAVE_FORMAT_EXTENSIBLE: { WAVEFORMATEXTENSIBLE *ex = (WAVEFORMATEXTENSIBLE *) format; - if (IsEqualGUID (ex->SubFormat, KSDATAFORMAT_SUBTYPE_PCM)) { + if (IsEqualGUID (ex->SubFormat, GST_KSDATAFORMAT_SUBTYPE_PCM)) { fmt = gst_audio_format_build_integer (TRUE, G_LITTLE_ENDIAN, format->wBitsPerSample, ex->Samples.wValidBitsPerSample); - } else if (IsEqualGUID (ex->SubFormat, KSDATAFORMAT_SUBTYPE_IEEE_FLOAT)) { + } else if (IsEqualGUID (ex->SubFormat, + GST_KSDATAFORMAT_SUBTYPE_IEEE_FLOAT)) { if (format->wBitsPerSample == 32 && ex->Samples.wValidBitsPerSample == 32) fmt = GST_AUDIO_FORMAT_F32LE; @@ 
-404,8 +421,7 @@ gboolean gst_wasapi2_util_parse_waveformatex (WAVEFORMATEX * format, - GstCaps * template_caps, GstCaps ** out_caps, - GstAudioChannelPosition ** out_positions) + GstCaps ** out_caps, GstAudioChannelPosition ** out_positions) { const gchar *afmt; guint64 channel_mask; @@ -429,21 +445,24 @@ if (afmt == NULL) return FALSE; - *out_caps = gst_caps_copy (template_caps); + auto caps = gst_static_caps_get (&template_caps); + caps = gst_caps_make_writable (caps); channel_mask = gst_wasapi2_util_waveformatex_to_channel_mask (format, out_positions); - gst_caps_set_simple (*out_caps, + gst_caps_set_simple (caps, "format", G_TYPE_STRING, afmt, "channels", G_TYPE_INT, format->nChannels, "rate", G_TYPE_INT, format->nSamplesPerSec, NULL); if (channel_mask) { - gst_caps_set_simple (*out_caps, + gst_caps_set_simple (caps, "channel-mask", GST_TYPE_BITMASK, channel_mask, NULL); } + *out_caps = caps; + return TRUE; } @@ -542,10 +561,11 @@ format = (WAVEFORMATEX *) CoTaskMemAlloc (sizeof (WAVEFORMATEX)); format->wFormatTag = WAVE_FORMAT_PCM; format->nChannels = 2; - format->nSamplesPerSec = 44100; + format->nSamplesPerSec = 48000; format->wBitsPerSample = 16; format->nBlockAlign = format->nChannels * format->wBitsPerSample / 8; format->nAvgBytesPerSec = format->nSamplesPerSec * format->nBlockAlign; + format->cbSize = 0; return format; } @@ -592,3 +612,656 @@ return (const char *) render; } + +const gchar * +gst_wasapi2_data_flow_to_string (EDataFlow flow) +{ + switch (flow) { + case eRender: + return "eRender"; + case eCapture: + return "eCapture"; + case eAll: + return "eAll"; + default: + break; + } + + return "Unknown"; +} + +const gchar * +gst_wasapi2_role_to_string (ERole role) +{ + switch (role) { + case eConsole: + return "eConsole"; + case eMultimedia: + return "eMultimedia"; + case eCommunications: + return "eCommunications"; + default: + break; + } + + return "Unknown"; +} + +void +gst_wasapi2_free_wfx (WAVEFORMATEX * wfx) +{ + if (wfx) + CoTaskMemFree (wfx); 
+} + +void +gst_wasapi2_clear_wfx (WAVEFORMATEX ** wfx) +{ + if (*wfx) { + CoTaskMemFree (*wfx); + *wfx = nullptr; + } +} + +WAVEFORMATEX * +gst_wasapi2_copy_wfx (WAVEFORMATEX * src) +{ + guint total_size = sizeof (WAVEFORMATEX) + src->cbSize; + auto dst = (WAVEFORMATEX *) CoTaskMemAlloc (total_size); + memcpy (dst, src, total_size); + + return dst; +} + +static DWORD +make_channel_mask (WORD nChannels) +{ + switch (nChannels) { + case 1: + return KSAUDIO_SPEAKER_MONO; + case 2: + return KSAUDIO_SPEAKER_STEREO; + case 3: + return KSAUDIO_SPEAKER_3POINT0; + case 4: + return KSAUDIO_SPEAKER_QUAD; + case 5: + return KSAUDIO_SPEAKER_5POINT0; + case 6: + return KSAUDIO_SPEAKER_5POINT1; + case 7: + return KSAUDIO_SPEAKER_7POINT0; + case 8: + return KSAUDIO_SPEAKER_7POINT1; + default: + return 0; + } +} + +static WAVEFORMATEXTENSIBLE +make_wfx_ext (DWORD nSamplesPerSec, WORD nChannels, WORD wBitsPerSample, + WORD wValidBitsPerSample, bool is_float, DWORD dwChannelMask) +{ + WAVEFORMATEXTENSIBLE w = { }; + w.Format.wFormatTag = WAVE_FORMAT_EXTENSIBLE; + w.Format.nChannels = nChannels; + w.Format.nSamplesPerSec = nSamplesPerSec; + + w.Format.wBitsPerSample = wBitsPerSample; + w.Samples.wValidBitsPerSample = wValidBitsPerSample; + + w.dwChannelMask = dwChannelMask ? dwChannelMask : + make_channel_mask (nChannels); + w.SubFormat = is_float ? 
GST_KSDATAFORMAT_SUBTYPE_IEEE_FLOAT + : GST_KSDATAFORMAT_SUBTYPE_PCM; + + w.Format.nBlockAlign = (wBitsPerSample / 8) * nChannels; + w.Format.nAvgBytesPerSec = w.Format.nSamplesPerSec * w.Format.nBlockAlign; + w.Format.cbSize = sizeof (WAVEFORMATEXTENSIBLE) - sizeof (WAVEFORMATEX); + + return w; +} + +static inline gboolean +is_extensible_format (const WAVEFORMATEX * wfx) +{ + return wfx->wFormatTag == WAVE_FORMAT_EXTENSIBLE && + wfx->cbSize >= (sizeof (WAVEFORMATEXTENSIBLE) - sizeof (WAVEFORMATEX)); +} + +static inline DWORD +get_wfx_channel_mask (const WAVEFORMATEX * wfx) +{ + if (is_extensible_format (wfx)) + return ((const WAVEFORMATEXTENSIBLE *) wfx)->dwChannelMask; + + return 0; +} + +/* *INDENT-OFF* */ +gboolean +gst_wasapi2_get_exclusive_mode_formats (IAudioClient * client, + IPropertyStore * props, GPtrArray * list) +{ + PROPVARIANT var; + PropVariantInit (&var); + WAVEFORMATEX *device_format = nullptr; + WAVEFORMATEX *closest = nullptr; + WAVEFORMATEX *basis = nullptr; + DWORD basis_ch_mask = 0; + WORD basis_channels = 0; + + /* Prefer device format if supported */ + auto hr = props->GetValue (PKEY_AudioEngine_DeviceFormat, &var); + if (gst_wasapi2_result (hr)) { + if (var.vt == VT_BLOB && var.blob.cbSize >= sizeof (WAVEFORMATEX) + && var.blob.pBlobData) { + device_format = (WAVEFORMATEX *) CoTaskMemAlloc (var.blob.cbSize); + + memcpy (device_format, var.blob.pBlobData, var.blob.cbSize); + } + PropVariantClear (&var); + } + + if (device_format) { + hr = client->IsFormatSupported (AUDCLNT_SHAREMODE_EXCLUSIVE, device_format, + &closest); + + if (hr == S_OK) { + basis = gst_wasapi2_copy_wfx (device_format); + g_ptr_array_add (list, device_format); + device_format = nullptr; + } else if (hr == S_FALSE && closest) { + basis = gst_wasapi2_copy_wfx (closest); + g_ptr_array_add (list, closest); + closest = nullptr; + } + } + + gst_wasapi2_clear_wfx (&device_format); + + /* Checks using pre-defined format list */ + struct DepthPair + { + WORD wBitsPerSample; + 
WORD wValidBitsPerSample; + bool is_float; + }; + + const DepthPair depth_pairs[] = { + {32, 32, true}, /* 32-float */ + {32, 32, false}, /* 32-int */ + {16, 16, false}, /* 16-int */ + {24, 24, false}, /* 24-packed */ + {32, 24, false}, /* 24-in-32 */ + }; + + const DWORD rates[] = { 192000, 176400, 96000, 88200, 48000, 44100 }; + const WORD chs[] = { 8, 6, 2, 1 }; + + if (basis) { + basis_ch_mask = get_wfx_channel_mask (basis); + basis_channels = basis->nChannels; + } + + for (auto r : rates) { + for (auto c : chs) { + for (auto d : depth_pairs) { + DWORD dwChannelMask = 0; + if (basis_ch_mask && c == basis_channels) + dwChannelMask = basis_ch_mask; + + auto wfx = make_wfx_ext (r, c, d.wBitsPerSample, d.wValidBitsPerSample, + d.is_float, dwChannelMask); + hr = client->IsFormatSupported (AUDCLNT_SHAREMODE_EXCLUSIVE, + (WAVEFORMATEX *) &wfx, &closest); + if (hr == S_OK) { + g_ptr_array_add (list, gst_wasapi2_copy_wfx ((WAVEFORMATEX *) &wfx)); + } else if (hr == S_FALSE && closest) { + g_ptr_array_add (list, closest); + closest = nullptr; + } + } + } + } + + if (!basis) { + if (list && list->len > 0) { + auto first = (WAVEFORMATEX *) g_ptr_array_index (list, 0); + basis = gst_wasapi2_copy_wfx (first); + } else { + basis = gst_wasapi2_get_default_mix_format (); + } + } + + gst_wasapi2_sort_wfx (list, basis); + gst_wasapi2_free_wfx (basis); + + return TRUE; +} + +gboolean +gst_wasapi2_get_shared_mode_formats (IAudioClient * client, GPtrArray * list) +{ + PROPVARIANT var; + PropVariantInit (&var); + WAVEFORMATEX *mix_format = nullptr; + WAVEFORMATEX *closest = nullptr; + + auto hr = client->GetMixFormat (&mix_format); + if (!gst_wasapi2_result (hr)) + return FALSE; + + g_ptr_array_add (list, gst_wasapi2_copy_wfx (mix_format)); + + /* Checks using pre-defined format list */ + struct DepthPair + { + WORD wBitsPerSample; + WORD wValidBitsPerSample; + bool is_float; + }; + + const DepthPair depth_pairs[] = { + {32, 32, true}, /* 32-float */ + {32, 32, false}, /* 32-int */ + {16,
16, false}, /* 16-int */ + {24, 24, false}, /* 24-packed */ + }; + + const DWORD rates[] = { 192000, 176400, 96000, 88200, 48000, 44100 }; + DWORD dwChannelMask = get_wfx_channel_mask (mix_format); + + if (dwChannelMask == 0) + dwChannelMask = make_channel_mask (mix_format->nChannels); + + for (auto r : rates) { + for (auto d : depth_pairs) { + auto wfx = make_wfx_ext (r, mix_format->nChannels, d.wBitsPerSample, + d.wValidBitsPerSample, d.is_float, dwChannelMask); + hr = client->IsFormatSupported (AUDCLNT_SHAREMODE_SHARED, + (WAVEFORMATEX *) &wfx, &closest); + if (hr == S_OK) { + g_ptr_array_add (list, gst_wasapi2_copy_wfx ((WAVEFORMATEX *) &wfx)); + } else if (hr == S_FALSE && closest) { + g_ptr_array_add (list, closest); + closest = nullptr; + } + } + } + + gst_wasapi2_sort_wfx (list, mix_format); + gst_wasapi2_free_wfx (mix_format); + + return TRUE; +} + +GstCaps * +gst_wasapi2_wfx_list_to_caps (GPtrArray * list) +{ + if (!list || list->len == 0) + return nullptr; + + std::vector <GstCaps *> caps_list; + + for (guint i = 0; i < list->len; i++) { + auto wfx = (WAVEFORMATEX *) g_ptr_array_index (list, i); + GstCaps *tmp; + + if (gst_wasapi2_util_parse_waveformatex (wfx, &tmp, nullptr)) { + bool unique = true; + for (auto it : caps_list) { + if (gst_caps_is_equal (it, tmp)) { + unique = false; + break; + } + } + + if (unique) + caps_list.push_back (tmp); + else + gst_caps_unref (tmp); + } + } + + if (caps_list.empty ()) + return nullptr; + + auto caps = gst_caps_new_empty (); + for (auto it : caps_list) + gst_caps_append (caps, it); + + return caps; +} +/* *INDENT-ON* */ + +struct FormatView +{ + WORD channels; + DWORD sample_rate; + GUID subformat; + WORD bits_per_sample; + WORD valid_bits_per_sample; + WORD raw_valid_bits_per_sample; + DWORD channel_mask; + WORD format_tag; +}; + +static inline gboolean +is_float_subformat (const FormatView * v) +{ + return IsEqualGUID (v->subformat, GST_KSDATAFORMAT_SUBTYPE_IEEE_FLOAT); +} + +static inline gboolean
+is_pcm_subformat (const FormatView * v) +{ + return IsEqualGUID (v->subformat, GST_KSDATAFORMAT_SUBTYPE_PCM); +} + +static inline gint +effective_bits (const FormatView * v) +{ + if (is_float_subformat (v)) + return 32; + + return v->valid_bits_per_sample ? v-> + valid_bits_per_sample : v->bits_per_sample; +} + +static inline gboolean +is_s24_in_32 (const FormatView * v) +{ + return is_pcm_subformat (v) && + v->bits_per_sample == 32 && + (v->raw_valid_bits_per_sample == 24 || v->valid_bits_per_sample == 24); +} + +static FormatView +make_view (const WAVEFORMATEX * wfx) +{ + FormatView view = { }; + + view.channels = wfx->nChannels; + view.sample_rate = wfx->nSamplesPerSec; + view.bits_per_sample = wfx->wBitsPerSample; + view.format_tag = wfx->wFormatTag; + + if (is_extensible_format (wfx)) { + auto wfe = (const WAVEFORMATEXTENSIBLE *) wfx; + view.subformat = wfe->SubFormat; + view.raw_valid_bits_per_sample = wfe->Samples.wValidBitsPerSample; + view.valid_bits_per_sample = view.raw_valid_bits_per_sample ? + view.raw_valid_bits_per_sample : view.bits_per_sample; + view.channel_mask = wfe->dwChannelMask; + } else { + if (wfx->wFormatTag == WAVE_FORMAT_PCM) { + view.subformat = GST_KSDATAFORMAT_SUBTYPE_PCM; + } else if (wfx->wFormatTag == WAVE_FORMAT_IEEE_FLOAT) { + view.subformat = GST_KSDATAFORMAT_SUBTYPE_IEEE_FLOAT; + } + + view.raw_valid_bits_per_sample = view.bits_per_sample; + view.valid_bits_per_sample = view.bits_per_sample; + view.channel_mask = 0; + } + + return view; +} + +static gint +compare_format_similarity (const FormatView * a, const FormatView * b, + const FormatView * basis) +{ + gboolean a_sub_eq = IsEqualGUID (a->subformat, basis->subformat); + gboolean b_sub_eq = IsEqualGUID (b->subformat, basis->subformat); + + /* Check subformat (e.g., PCM vs FLOAT) */ + if (a_sub_eq != b_sub_eq) + return a_sub_eq ? 
-1 : 1; + + /* Bits-per-sample diff */ + gint da_bits = + abs ((gint) a->bits_per_sample - (gint) basis->bits_per_sample); + gint db_bits = + abs ((gint) b->bits_per_sample - (gint) basis->bits_per_sample); + if (da_bits != db_bits) + return (da_bits < db_bits) ? -1 : 1; + + gint a_valid = a->valid_bits_per_sample ? + a->valid_bits_per_sample : a->bits_per_sample; + gint b_valid = b->valid_bits_per_sample ? + b->valid_bits_per_sample : b->bits_per_sample; + gint basis_valid = basis->valid_bits_per_sample ? + basis->valid_bits_per_sample : basis->bits_per_sample; + + gint da_valid = abs (a_valid - basis_valid); + gint db_valid = abs (b_valid - basis_valid); + if (da_valid != db_valid) + return (da_valid < db_valid) ? -1 : 1; + + /* Check channel mask */ + gboolean a_mask_eq = (a->channel_mask != 0 && basis->channel_mask != 0 && + a->channel_mask == basis->channel_mask); + gboolean b_mask_eq = (b->channel_mask != 0 && basis->channel_mask != 0 && + b->channel_mask == basis->channel_mask); + if (a_mask_eq != b_mask_eq) + return a_mask_eq ? -1 : 1; + + /* Check format tag */ + gint dtag_a = abs ((gint) a->format_tag - (gint) basis->format_tag); + gint dtag_b = abs ((gint) b->format_tag - (gint) basis->format_tag); + if (dtag_a != dtag_b) + return (dtag_a < dtag_b) ? -1 : 1; + + return 0; +} + +static gint +compare_wfx_func (gconstpointer pa, gconstpointer pb, gpointer user_data) +{ + const WAVEFORMATEX *A = (const WAVEFORMATEX *) pa; + const WAVEFORMATEX *B = (const WAVEFORMATEX *) pb; + const WAVEFORMATEX *basis_wfx = (const WAVEFORMATEX *) user_data; + + FormatView a = make_view (A); + FormatView b = make_view (B); + FormatView basis = make_view (basis_wfx); + + /* S24_32LE sorts lowest */ + gboolean a_s2432 = is_s24_in_32 (&a); + gboolean b_s2432 = is_s24_in_32 (&b); + if (a_s2432 != b_s2432) + return a_s2432 ?
1 : -1; + + /* Prefer same channel count */ + gint dch_a = abs ((gint) a.channels - (gint) basis.channels); + gint dch_b = abs ((gint) b.channels - (gint) basis.channels); + if (dch_a != dch_b) + return (dch_a < dch_b) ? -1 : 1; + + /* Then sample rate */ + gint64 dra = (gint64) a.sample_rate - (gint64) basis.sample_rate; + gint64 drb = (gint64) b.sample_rate - (gint64) basis.sample_rate; + dra = dra >= 0 ? dra : -dra; + drb = drb >= 0 ? drb : -drb; + if (dra != drb) + return (dra < drb) ? -1 : 1; + + /* Prefer higher sample rate */ + if (a.sample_rate != b.sample_rate) + return (a.sample_rate > b.sample_rate) ? -1 : +1; + + /* High bit first */ + gint a_bits = effective_bits (&a); + gint b_bits = effective_bits (&b); + if (a_bits != b_bits) + return (a_bits > b_bits) ? -1 : +1; + + /* format compare */ + gint fcmp = compare_format_similarity (&a, &b, &basis); + if (fcmp != 0) + return fcmp; + + return 0; +} + +/* *INDENT-OFF* */ +static void +demote_s24_32le (GPtrArray *list) +{ + if (!list || list->len == 0) + return; + + std::vector<gpointer> head; + std::vector<gpointer> tail; + + head.reserve (list->len); + tail.reserve (list->len); + + for (guint i = 0; i < list->len; i++) { + auto wfx = (WAVEFORMATEX *) g_ptr_array_index (list, i); + FormatView v = make_view (wfx); + if (is_s24_in_32 (&v)) + tail.push_back ((gpointer) wfx); + else + head.push_back ((gpointer) wfx); + } + + guint idx = 0; + for (gpointer p : head) + list->pdata[idx++] = p; + + for (gpointer p : tail) + list->pdata[idx++] = p; +} +/* *INDENT-ON* */ + +void +gst_wasapi2_sort_wfx (GPtrArray * list, WAVEFORMATEX * wfx) +{ + if (!list || list->len == 0 || !wfx) + return; + + g_ptr_array_sort_with_data (list, compare_wfx_func, wfx); + demote_s24_32le (list); +} + +static DWORD +gst_wasapi2_mask_from_gst_positions (const GstAudioInfo * info) +{ + DWORD mask = 0; + + for (guint i = 0; i < (guint) GST_AUDIO_INFO_CHANNELS (info); i++) { + auto p = info->position[i]; + + if (p == GST_AUDIO_CHANNEL_POSITION_NONE ||
+ p == GST_AUDIO_CHANNEL_POSITION_INVALID) { + continue; + } + + for (guint k = 0; k < G_N_ELEMENTS (wasapi_to_gst_pos); k++) { + if (wasapi_to_gst_pos[k].gst_pos == p) { + mask |= (DWORD) wasapi_to_gst_pos[k].wasapi_pos; + break; + } + } + } + + if (mask == 0) { + guint ch = GST_AUDIO_INFO_CHANNELS (info); + if (ch < G_N_ELEMENTS (default_ch_masks)) + mask = default_ch_masks[ch]; + } + + return mask; +} + +WAVEFORMATEX * +gst_wasapi2_audio_info_to_wfx (const GstAudioInfo * info) +{ + if (!info) + return nullptr; + + auto channels = GST_AUDIO_INFO_CHANNELS (info); + auto rate = GST_AUDIO_INFO_RATE (info); + auto fmt = GST_AUDIO_INFO_FORMAT (info); + + bool is_float = false; + WORD bits = 0; + WORD valid_bits = 0; + + switch (fmt) { + case GST_AUDIO_FORMAT_S16LE: + bits = 16; + valid_bits = 16; + break; + case GST_AUDIO_FORMAT_S24LE: + bits = 24; + valid_bits = 24; + break; + case GST_AUDIO_FORMAT_S24_32LE: + bits = 32; + valid_bits = 24; + break; + case GST_AUDIO_FORMAT_S32LE: + bits = 32; + valid_bits = 32; + break; + case GST_AUDIO_FORMAT_F32LE: + is_float = true; + bits = 32; + valid_bits = 32; + break; + case GST_AUDIO_FORMAT_F64LE: + is_float = true; + bits = 64; + valid_bits = 64; + break; + default: + return nullptr; + } + + DWORD ch_mask = gst_wasapi2_mask_from_gst_positions (info); + bool need_ext = false; + if ((!is_float && bits > 16) || + (valid_bits != bits) || (channels > 2) || (is_float && channels > 2)) { + need_ext = true; + } + + if (need_ext) { + auto w = (WAVEFORMATEXTENSIBLE *) + CoTaskMemAlloc (sizeof (WAVEFORMATEXTENSIBLE)); + + memset (w, 0, sizeof (WAVEFORMATEXTENSIBLE)); + w->Format.wFormatTag = WAVE_FORMAT_EXTENSIBLE; + w->Format.nChannels = (WORD) channels; + w->Format.nSamplesPerSec = rate; + w->Format.wBitsPerSample = bits; + + w->Samples.wValidBitsPerSample = valid_bits; + w->dwChannelMask = ch_mask ? ch_mask : make_channel_mask ((WORD) channels); + w->SubFormat = is_float ?
GST_KSDATAFORMAT_SUBTYPE_IEEE_FLOAT + : GST_KSDATAFORMAT_SUBTYPE_PCM; + + w->Format.nBlockAlign = (WORD) ((bits / 8) * channels); + w->Format.nAvgBytesPerSec = + w->Format.nSamplesPerSec * w->Format.nBlockAlign; + w->Format.cbSize = sizeof (WAVEFORMATEXTENSIBLE) - sizeof (WAVEFORMATEX); + + return (WAVEFORMATEX *) w; + } + + auto w = (WAVEFORMATEX *) CoTaskMemAlloc (sizeof (WAVEFORMATEX)); + + memset (w, 0, sizeof (WAVEFORMATEX)); + w->wFormatTag = is_float ? WAVE_FORMAT_IEEE_FLOAT : WAVE_FORMAT_PCM; + w->nChannels = (WORD) channels; + w->nSamplesPerSec = rate; + w->wBitsPerSample = bits; + w->nBlockAlign = (WORD) ((bits / 8) * channels); + w->nAvgBytesPerSec = w->nSamplesPerSec * w->nBlockAlign; + w->cbSize = 0; + + return w; +}
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/wasapi2/gstwasapi2util.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/wasapi2/gstwasapi2util.h
Changed
@@ -101,7 +101,6 @@ const gchar * gst_wasapi2_util_waveformatex_to_audio_format (WAVEFORMATEX * format); gboolean gst_wasapi2_util_parse_waveformatex (WAVEFORMATEX * format, - GstCaps * template_caps, GstCaps ** out_caps, GstAudioChannelPosition ** out_positions); @@ -117,6 +116,30 @@ const char * gst_wasapi2_get_default_device_id (EDataFlow flow); +const gchar * gst_wasapi2_data_flow_to_string (EDataFlow flow); + +const gchar * gst_wasapi2_role_to_string (ERole role); + +void gst_wasapi2_free_wfx (WAVEFORMATEX * wfx); + +void gst_wasapi2_clear_wfx (WAVEFORMATEX ** wfx); + +WAVEFORMATEX * gst_wasapi2_copy_wfx (WAVEFORMATEX * format); + +gboolean gst_wasapi2_get_exclusive_mode_formats (IAudioClient * client, + IPropertyStore * props, + GPtrArray * list); + +gboolean gst_wasapi2_get_shared_mode_formats (IAudioClient * client, + GPtrArray * list); + +GstCaps * gst_wasapi2_wfx_list_to_caps (GPtrArray * list); + +void gst_wasapi2_sort_wfx (GPtrArray * list, + WAVEFORMATEX * wfx); + +WAVEFORMATEX * gst_wasapi2_audio_info_to_wfx (const GstAudioInfo * info); + G_END_DECLS #ifdef __cplusplus
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/wasapi2/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/wasapi2/meson.build
Changed
@@ -1,22 +1,20 @@ wasapi2_sources = [ - 'gstwasapi2src.c', - 'gstwasapi2sink.c', + 'gstwasapi2src.cpp', + 'gstwasapi2sink.cpp', 'gstwasapi2util.cpp', 'gstwasapi2device.cpp', - 'gstwasapi2ringbuffer.cpp', 'gstwasapi2activator.cpp', 'gstwasapi2enumerator.cpp', - 'gstwasapi2object.cpp', + 'gstwasapi2rbuf.cpp', 'plugin.cpp', ] wasapi2_headers = [ - 'gstwasapi2ringbuffer.h', 'gstwasapi2device.h', 'gstwasapi2util.h', 'gstwasapi2src.h', 'gstwasapi2sink.h', - 'gstwasapi2object.h', + 'gstwasapi2rbuf.h', ] mmdeviceapi_symbols = [ @@ -44,10 +42,9 @@ endif ole32_dep = cc.find_library('ole32', required : get_option('wasapi2')) -ksuser_dep = cc.find_library('ksuser', required : get_option('wasapi2')) mmdeviceapi_dep = cc.find_library('mmdevapi', required : get_option('wasapi2')) -mfplat_dep = cc.find_library('mfplat', required : get_option('wasapi2')) -wasapi2_dep = [ole32_dep, ksuser_dep, mmdeviceapi_dep, mfplat_dep] +avrt_dep = cc.find_library('avrt', required : get_option('wasapi2')) +wasapi2_dep = [ole32_dep, mmdeviceapi_dep, avrt_dep] extra_args = [] foreach dep: wasapi2_dep
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/wasapi2/plugin.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/wasapi2/plugin.cpp
Changed
@@ -21,13 +21,10 @@ #include <config.h> #endif -#include <winapifamily.h> - #include "gstwasapi2sink.h" #include "gstwasapi2src.h" #include "gstwasapi2device.h" #include "gstwasapi2util.h" -#include <mfapi.h> GST_DEBUG_CATEGORY (gst_wasapi2_debug); GST_DEBUG_CATEGORY (gst_wasapi2_client_debug); @@ -35,14 +32,12 @@ static void plugin_deinit (gpointer data) { - MFShutdown (); } static gboolean plugin_init (GstPlugin * plugin) { guint rank = GST_RANK_PRIMARY + 1; - HRESULT hr; /** * plugin-wasapi2: @@ -50,15 +45,7 @@ * Since: 1.18 */ - hr = MFStartup (MF_VERSION, MFSTARTUP_NOSOCKET); - if (!gst_wasapi2_result (hr)) { - GST_WARNING ("MFStartup failure, hr: 0x%x", (guint) hr); - return TRUE; - } - GST_DEBUG_CATEGORY_INIT (gst_wasapi2_debug, "wasapi2", 0, "wasapi2"); - GST_DEBUG_CATEGORY_INIT (gst_wasapi2_client_debug, "wasapi2client", - 0, "wasapi2client"); gst_element_register (plugin, "wasapi2sink", rank, GST_TYPE_WASAPI2_SINK); gst_element_register (plugin, "wasapi2src", rank, GST_TYPE_WASAPI2_SRC);
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/win32ipc/gstwin32ipc.cpp
Added
@@ -0,0 +1,49 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gstwin32ipc.h" +#include <mutex> + +/** + * GstWin32IpcLeakyType: + * + * Since: 1.28 + */ +GType +gst_win32_ipc_leaky_type_get_type (void) +{ + static GType type = 0; + static std::once_flag once; + static const GEnumValue leaky_types[] = { + {GST_WIN32_IPC_LEAKY_NONE, "None", "none"}, + {GST_WIN32_IPC_LEAKY_UPSTREAM, "Upstream", "upstream"}, + {GST_WIN32_IPC_LEAKY_DOWNSTREAM, "Downstream", "downstream"}, + {0, nullptr, nullptr}, + }; + + std::call_once (once, [&] { + type = g_enum_register_static ("GstWin32IpcLeakyType", leaky_types); + }); + + return type; +}
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/win32ipc/gstwin32ipc.h
Added
@@ -0,0 +1,36 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/gst.h> + +G_BEGIN_DECLS + +typedef enum +{ + GST_WIN32_IPC_LEAKY_NONE, + GST_WIN32_IPC_LEAKY_UPSTREAM, + GST_WIN32_IPC_LEAKY_DOWNSTREAM, +} GstWin32IpcLeakyType; + +#define GST_TYPE_WIN32_IPC_LEAKY_TYPE (gst_win32_ipc_leaky_type_get_type()) +GType gst_win32_ipc_leaky_type_get_type (void); + +G_END_DECLS
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/win32ipc/gstwin32ipcbasesink.cpp
Added
@@ -0,0 +1,540 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gstwin32ipcbasesink.h" +#include "gstwin32ipcserver.h" +#include "gstwin32ipc.h" +#include <string> +#include <string.h> +#include <mutex> +#include <condition_variable> + +GST_DEBUG_CATEGORY_STATIC (gst_win32_ipc_base_sink_debug); +#define GST_CAT_DEFAULT gst_win32_ipc_base_sink_debug + +enum +{ + PROP_0, + PROP_PIPE_NAME, + PROP_LEAKY_TYPE, + PROP_MAX_BUFFERS, + PROP_CURRENT_LEVEL_BUFFERS, + PROP_NUM_CLIENTS, + PROP_WAIT_FOR_CONNECTION, + PROP_LAST, +}; + +static GParamSpec *props[PROP_LAST]; + +#define DEFAULT_PIPE_NAME "\\\\.\\pipe\\gst.win32.ipc" +#define DEFAULT_MAX_BUFFERS 2 +#define DEFAULT_LEAKY_TYPE GST_WIN32_IPC_LEAKY_NONE +#define DEFAULT_WAIT_FOR_CONNECTION FALSE + +/* *INDENT-OFF* */ +struct _GstWin32IpcBaseSinkPrivate +{ + _GstWin32IpcBaseSinkPrivate () + { + meta = g_byte_array_new (); + pipe_name = g_strdup (DEFAULT_PIPE_NAME); + } + + ~_GstWin32IpcBaseSinkPrivate () + { + reset (); + + gst_clear_object (&server); + g_byte_array_unref (meta); + g_free (pipe_name); + } + + void reset () + { + gst_clear_caps (&caps); + num_clients = 0; + } + +
std::mutex lock; + std::condition_variable cond; + + GstWin32IpcServer *server = nullptr; + GstCaps *caps = nullptr; + GByteArray *meta = nullptr; + guint num_clients = 0; + bool flushing = false; + + /* properties */ + gchar *pipe_name; + guint64 max_buffers = DEFAULT_MAX_BUFFERS; + GstWin32IpcLeakyType leaky = DEFAULT_LEAKY_TYPE; + gboolean wait_for_connection = DEFAULT_WAIT_FOR_CONNECTION; +}; +/* *INDENT-ON* */ + +static void gst_win32_ipc_base_sink_finalize (GObject * object); +static void gst_win32_ipc_base_sink_set_property (GObject * object, + guint prop_id, const GValue * value, GParamSpec * pspec); +static void gst_win32_base_sink_get_property (GObject * object, guint prop_id, + GValue * value, GParamSpec * pspec); + +static GstClock *gst_win32_ipc_base_sink_provide_clock (GstElement * elem); + +static gboolean gst_win32_ipc_base_sink_start (GstBaseSink * sink); +static gboolean gst_win32_ipc_base_sink_stop (GstBaseSink * sink); +static gboolean gst_win32_ipc_base_sink_unlock (GstBaseSink * sink); +static gboolean gst_win32_ipc_base_sink_unlock_stop (GstBaseSink * sink); +static gboolean gst_win32_ipc_base_sink_set_caps (GstBaseSink * sink, + GstCaps * caps); +static GstFlowReturn gst_win32_ipc_base_sink_render (GstBaseSink * sink, + GstBuffer * buf); +static gboolean gst_win32_ipc_base_sink_event (GstBaseSink * sink, + GstEvent * event); + +/** + * GstWin32IpcBaseSink: + * + * Since: 1.28 + */ +#define gst_win32_ipc_base_sink_parent_class parent_class +G_DEFINE_ABSTRACT_TYPE (GstWin32IpcBaseSink, gst_win32_ipc_base_sink, + GST_TYPE_BASE_SINK); + +static void +gst_win32_ipc_base_sink_class_init (GstWin32IpcBaseSinkClass * klass) +{ + auto object_class = G_OBJECT_CLASS (klass); + auto element_class = GST_ELEMENT_CLASS (klass); + auto sink_class = GST_BASE_SINK_CLASS (klass); + + object_class->finalize = gst_win32_ipc_base_sink_finalize; + object_class->set_property = gst_win32_ipc_base_sink_set_property; + object_class->get_property = 
gst_win32_base_sink_get_property; + + props[PROP_PIPE_NAME] = + g_param_spec_string ("pipe-name", "Pipe Name", + "The name of Win32 named pipe to communicate with clients. " + "Validation of the pipe name is caller's responsibility", + DEFAULT_PIPE_NAME, (GParamFlags) (G_PARAM_READWRITE | + G_PARAM_STATIC_STRINGS | GST_PARAM_MUTABLE_READY)); + + props[PROP_LEAKY_TYPE] = + g_param_spec_enum ("leaky-type", "Leaky Type", + "Whether to drop buffers once the internal queue is full", + GST_TYPE_WIN32_IPC_LEAKY_TYPE, DEFAULT_LEAKY_TYPE, + (GParamFlags) (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS)); + + props[PROP_MAX_BUFFERS] = + g_param_spec_uint64 ("max-buffers", "Max Buffers", + "Maximum number of buffers in queue (0=unlimited)", + 0, G_MAXUINT64, DEFAULT_MAX_BUFFERS, + (GParamFlags) (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS)); + + props[PROP_CURRENT_LEVEL_BUFFERS] = + g_param_spec_uint64 ("current-level-buffers", "Current Level Buffers", + "The number of currently queued buffers", + 0, G_MAXUINT64, 0, + (GParamFlags) (G_PARAM_READABLE | G_PARAM_STATIC_STRINGS)); + + props[PROP_WAIT_FOR_CONNECTION] = + g_param_spec_boolean ("wait-for-connection", "Wait for Connection", + "Blocks the stream until at least one client is connected", + DEFAULT_WAIT_FOR_CONNECTION, + (GParamFlags) (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS)); + + props[PROP_NUM_CLIENTS] = + g_param_spec_uint ("num-clients", "Number of Clients", + "The number of connected clients", + 0, G_MAXUINT, 0, + (GParamFlags) (G_PARAM_READABLE | G_PARAM_STATIC_STRINGS)); + + g_object_class_install_properties (object_class, PROP_LAST, props); + + element_class->provide_clock = + GST_DEBUG_FUNCPTR (gst_win32_ipc_base_sink_provide_clock); + + sink_class->start = GST_DEBUG_FUNCPTR (gst_win32_ipc_base_sink_start); + sink_class->stop = GST_DEBUG_FUNCPTR (gst_win32_ipc_base_sink_stop); + sink_class->unlock = GST_DEBUG_FUNCPTR (gst_win32_ipc_base_sink_unlock); + sink_class->unlock_stop = + GST_DEBUG_FUNCPTR
(gst_win32_ipc_base_sink_unlock_stop); + sink_class->set_caps = GST_DEBUG_FUNCPTR (gst_win32_ipc_base_sink_set_caps); + sink_class->render = GST_DEBUG_FUNCPTR (gst_win32_ipc_base_sink_render); + sink_class->event = GST_DEBUG_FUNCPTR (gst_win32_ipc_base_sink_event); + + GST_DEBUG_CATEGORY_INIT (gst_win32_ipc_base_sink_debug, "win32ipcbasesink", + 0, "win32ipcbasesink"); + + gst_type_mark_as_plugin_api (GST_TYPE_WIN32_IPC_BASE_SINK, + (GstPluginAPIFlags) 0); + gst_type_mark_as_plugin_api (GST_TYPE_WIN32_IPC_LEAKY_TYPE, + (GstPluginAPIFlags) 0); +} + +static void +gst_win32_ipc_base_sink_init (GstWin32IpcBaseSink * self) +{ + self->priv = new GstWin32IpcBaseSinkPrivate (); + + GST_OBJECT_FLAG_SET (self, GST_ELEMENT_FLAG_PROVIDE_CLOCK); + GST_OBJECT_FLAG_SET (self, GST_ELEMENT_FLAG_REQUIRE_CLOCK); +} + +static void +gst_win32_ipc_base_sink_finalize (GObject * object) +{ + auto self = GST_WIN32_IPC_BASE_SINK (object); + + delete self->priv; + + G_OBJECT_CLASS (parent_class)->finalize (object); +} + +static void +gst_win32_ipc_base_sink_set_property (GObject * object, guint prop_id, + const GValue * value, GParamSpec * pspec) +{ + auto self = GST_WIN32_IPC_BASE_SINK (object); + auto priv = self->priv; + + std::lock_guard < std::mutex > lk (priv->lock); + + switch (prop_id) { + case PROP_PIPE_NAME: + g_free (priv->pipe_name); + priv->pipe_name = g_value_dup_string (value); + if (!priv->pipe_name) + priv->pipe_name = g_strdup (DEFAULT_PIPE_NAME); + break; + case PROP_LEAKY_TYPE: + priv->leaky = (GstWin32IpcLeakyType) g_value_get_enum (value); + if (priv->server) + gst_win32_ipc_server_set_leaky (priv->server, priv->leaky); + break; + case PROP_MAX_BUFFERS: + priv->max_buffers = g_value_get_uint64 (value); + if (priv->server) + gst_win32_ipc_server_set_max_buffers (priv->server, priv->max_buffers); + break; + case PROP_WAIT_FOR_CONNECTION: + { + auto wait = g_value_get_boolean (value); + if (priv->wait_for_connection != wait) { + priv->wait_for_connection = wait; + 
priv->cond.notify_all (); + } + break; + } + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static void +gst_win32_base_sink_get_property (GObject * object, guint prop_id, + GValue * value, GParamSpec * pspec) +{ + auto self = GST_WIN32_IPC_BASE_SINK (object); + auto priv = self->priv; + + std::lock_guard < std::mutex > lk (priv->lock); + + switch (prop_id) { + case PROP_PIPE_NAME: + g_value_set_string (value, priv->pipe_name); + break; + case PROP_LEAKY_TYPE: + g_value_set_enum (value, priv->leaky); + break; + case PROP_MAX_BUFFERS: + g_value_set_uint64 (value, priv->max_buffers); + break; + case PROP_CURRENT_LEVEL_BUFFERS: + if (priv->server) { + auto level = + gst_win32_ipc_server_get_current_level_buffers (priv->server); + g_value_set_uint64 (value, level); + } else { + g_value_set_uint64 (value, 0); + } + break; + case PROP_WAIT_FOR_CONNECTION: + g_value_set_boolean (value, priv->wait_for_connection); + break; + case PROP_NUM_CLIENTS: + g_value_set_uint (value, priv->num_clients); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static GstClock * +gst_win32_ipc_base_sink_provide_clock (GstElement * elem) +{ + return gst_system_clock_obtain (); +} + +static void +gst_win32_ipc_base_sink_on_num_clients (GObject * server, + GParamSpec * pspec, GstWin32IpcBaseSink * self) +{ + auto priv = self->priv; + + guint num_clients = 0; + g_object_get (server, "num-clients", &num_clients, nullptr); + + GST_DEBUG_OBJECT (self, "num-clients %u", num_clients); + + { + std::lock_guard < std::mutex > lk (priv->lock); + priv->num_clients = num_clients; + priv->cond.notify_all (); + } + + /* This is server's event loop thread. 
Use other thread to notify */ + gst_object_call_async (GST_OBJECT (self), + [] (GstObject * object, gpointer user_data)->void + { + g_object_notify_by_pspec (G_OBJECT (object), props[PROP_NUM_CLIENTS]); + }, nullptr); +} + +static gboolean +gst_win32_ipc_base_sink_start (GstBaseSink * sink) +{ + auto self = GST_WIN32_IPC_BASE_SINK (sink); + auto priv = self->priv; + + GST_DEBUG_OBJECT (self, "Start"); + + { + std::lock_guard < std::mutex > lk (priv->lock); + priv->server = gst_win32_ipc_server_new (priv->pipe_name, + priv->max_buffers, priv->leaky); + if (!priv->server) { + GST_ERROR_OBJECT (self, "Couldn't create pipe server"); + return FALSE; + } + } + + g_signal_connect (priv->server, "notify::num-clients", + G_CALLBACK (gst_win32_ipc_base_sink_on_num_clients), self); + + return TRUE; +} + +static gboolean +gst_win32_ipc_base_sink_stop (GstBaseSink * sink) +{ + auto self = GST_WIN32_IPC_BASE_SINK (sink); + auto priv = self->priv; + + GST_DEBUG_OBJECT (self, "Stop"); + + std::lock_guard < std::mutex > lk (priv->lock); + if (priv->server) { + g_signal_handlers_disconnect_by_data (priv->server, self); + gst_clear_object (&priv->server); + } + + priv->reset (); + + return TRUE; +} + +static gboolean +gst_win32_ipc_base_sink_unlock (GstBaseSink * sink) +{ + auto self = GST_WIN32_IPC_BASE_SINK (sink); + auto priv = self->priv; + + GST_DEBUG_OBJECT (self, "Unlock"); + + std::lock_guard < std::mutex > lk (priv->lock); + if (priv->server) + gst_win32_ipc_server_set_flushing (priv->server, TRUE); + priv->flushing = true; + priv->cond.notify_all (); + + return TRUE; +} + +static gboolean +gst_win32_ipc_base_sink_unlock_stop (GstBaseSink * sink) +{ + auto self = GST_WIN32_IPC_BASE_SINK (sink); + auto priv = self->priv; + + GST_DEBUG_OBJECT (self, "Unlock stop"); + + std::lock_guard < std::mutex > lk (priv->lock); + if (priv->server) + gst_win32_ipc_server_set_flushing (priv->server, FALSE); + priv->flushing = false; + priv->cond.notify_all (); + + return TRUE; +} + +static
gboolean +gst_win32_ipc_base_sink_set_caps (GstBaseSink * sink, GstCaps * caps) +{ + auto self = GST_WIN32_IPC_BASE_SINK (sink); + auto priv = self->priv; + + gst_caps_replace (&priv->caps, caps); + + return TRUE; +} + +static GstClockTime +gst_win32_ipc_base_sink_get_buffer_time (GstBaseSink * sink, + GstClockTime base_time, GstClockTime latency, gboolean clock_is_qpc, + GstClockTime now_qpc, GstClockTime now_gst, GstClockTime timestamp) +{ + if (!GST_CLOCK_TIME_IS_VALID (timestamp) || + !GST_CLOCK_TIME_IS_VALID (base_time)) { + return GST_CLOCK_TIME_NONE; + } + + GstClockTime running_time; + auto ret = gst_segment_to_running_time_full (&sink->segment, + GST_FORMAT_TIME, timestamp, &running_time); + if (!ret) + return GST_CLOCK_TIME_NONE; + + if (ret > 0) + running_time += base_time; + else if (base_time > timestamp) + running_time = base_time - timestamp; + else + running_time = 0; + + if (GST_CLOCK_TIME_IS_VALID (latency)) + running_time += latency; + + if (clock_is_qpc) + return running_time; + + if (running_time < now_gst) + return 0; + + running_time -= now_gst; + running_time += now_qpc; + + return running_time; +} + +static GstFlowReturn +gst_win32_ipc_base_sink_render (GstBaseSink * sink, GstBuffer * buf) +{ + auto self = GST_WIN32_IPC_BASE_SINK (sink); + auto priv = self->priv; + + if (!priv->server) { + GST_ERROR_OBJECT (self, "Pipe server was not configured"); + return GST_FLOW_ERROR; + } + + auto now_qpc = gst_util_get_timestamp (); + auto base_time = GST_ELEMENT_CAST (sink)->base_time; + auto latency = gst_base_sink_get_latency (sink); + GstClockTime now_gst = GST_CLOCK_TIME_NONE; + gboolean is_qpc = TRUE; + + auto clock = gst_element_get_clock (GST_ELEMENT_CAST (sink)); + if (clock) { + now_gst = gst_clock_get_time (clock); + is_qpc = gst_clock_is_system_monotonic (clock); + gst_object_unref (clock); + } + + auto pts = gst_win32_ipc_base_sink_get_buffer_time (sink, + base_time, latency, is_qpc, now_qpc, now_gst, GST_BUFFER_PTS (buf)); + auto dts = 
gst_win32_ipc_base_sink_get_buffer_time (sink, + base_time, latency, is_qpc, now_qpc, now_gst, GST_BUFFER_DTS (buf)); + + GstBuffer *prepared; + gsize size; + auto klass = GST_WIN32_IPC_BASE_SINK_GET_CLASS (self); + auto ret = klass->upload (self, buf, &prepared, &size); + if (ret != GST_FLOW_OK) + return ret; + + g_byte_array_set_size (priv->meta, 0); + gst_buffer_foreach_meta (prepared, [] (GstBuffer * prepared, GstMeta ** meta, + gpointer user_data)->gboolean { + auto self = GST_WIN32_IPC_BASE_SINK (user_data); + gst_meta_serialize_simple (*meta, self->priv->meta); + return TRUE; + } + , self); + + { + std::unique_lock < std::mutex > lk (priv->lock); + while (priv->wait_for_connection && priv->num_clients == 0 && + !priv->flushing) { + priv->cond.wait (lk); + } + + if (priv->flushing) { + GST_DEBUG_OBJECT (self, "We are flushing"); + gst_buffer_unref (prepared); + return GST_FLOW_FLUSHING; + } + } + + ret = gst_win32_ipc_server_send_data (priv->server, + prepared, priv->caps, priv->meta, pts, dts, size); + gst_buffer_unref (prepared); + + return ret; +} + +static gboolean +gst_win32_ipc_base_sink_event (GstBaseSink * sink, GstEvent * event) +{ + auto self = GST_WIN32_IPC_BASE_SINK (sink); + auto priv = self->priv; + + switch (GST_EVENT_TYPE (event)) { + case GST_EVENT_EOS: + { + std::lock_guard < std::mutex > lk (priv->lock); + if (priv->server) { + GST_DEBUG_OBJECT (self, "Sending null data on EOS"); + gst_win32_ipc_server_send_data (priv->server, + nullptr, nullptr, nullptr, GST_CLOCK_TIME_NONE, + GST_CLOCK_TIME_NONE, 0); + } + break; + } + default: + break; + } + + return GST_BASE_SINK_CLASS (parent_class)->event (sink, event); +}
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/win32ipc/gstwin32ipcbasesink.h
Added
@@ -0,0 +1,58 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/gst.h> +#include <gst/base/gstbasesink.h> + +G_BEGIN_DECLS + +#define GST_TYPE_WIN32_IPC_BASE_SINK (gst_win32_ipc_base_sink_get_type()) +#define GST_WIN32_IPC_BASE_SINK(obj) (G_TYPE_CHECK_INSTANCE_CAST((obj),GST_TYPE_WIN32_IPC_BASE_SINK,GstWin32IpcBaseSink)) +#define GST_WIN32_IPC_BASE_SINK_CLASS(klass) (G_TYPE_CHECK_CLASS_CAST((klass), GST_TYPE_WIN32_IPC_BASE_SINK,GstWin32IpcBaseSinkClass)) +#define GST_WIN32_IPC_BASE_SINK_GET_CLASS(obj) (G_TYPE_INSTANCE_GET_CLASS((obj), GST_TYPE_WIN32_IPC_BASE_SINK,GstWin32IpcBaseSinkClass)) +#define GST_IS_WIN32_IPC_BASE_SINK(obj) (G_TYPE_CHECK_INSTANCE_TYPE((obj),GST_TYPE_WIN32_IPC_BASE_SINK)) +#define GST_IS_WIN32_IPC_BASE_SINK_CLASS(klass) (G_TYPE_CHECK_CLASS_TYPE((klass), GST_TYPE_WIN32_IPC_BASE_SINK)) + +typedef struct _GstWin32IpcBaseSink GstWin32IpcBaseSink; +typedef struct _GstWin32IpcBaseSinkClass GstWin32IpcBaseSinkClass; +typedef struct _GstWin32IpcBaseSinkPrivate GstWin32IpcBaseSinkPrivate; + +struct _GstWin32IpcBaseSink +{ + GstBaseSink parent; + + GstWin32IpcBaseSinkPrivate *priv; +}; + +struct _GstWin32IpcBaseSinkClass +{ + GstBaseSinkClass 
parent_class; + + GstFlowReturn (*upload) (GstWin32IpcBaseSink * sink, + GstBuffer * buffer, + GstBuffer ** uploaded, + gsize * size); +}; + +GType gst_win32_ipc_base_sink_get_type (void); +G_DEFINE_AUTOPTR_CLEANUP_FUNC(GstWin32IpcBaseSink, gst_object_unref) + +G_END_DECLS
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/win32ipc/gstwin32ipcbasesrc.cpp
Added
@@ -0,0 +1,471 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gstwin32ipcbasesrc.h" +#include "gstwin32ipcclient.h" +#include "gstwin32ipc.h" +#include <string> +#include <mutex> + +GST_DEBUG_CATEGORY_STATIC (gst_win32_ipc_base_src_debug); +#define GST_CAT_DEFAULT gst_win32_ipc_base_src_debug + +enum +{ + PROP_0, + PROP_PIPE_NAME, + PROP_PROCESSING_DEADLINE, + PROP_LEAKY_TYPE, + PROP_MAX_BUFFERS, + PROP_CURRENT_LEVEL_BUFFERS, +}; + +#define DEFAULT_PIPE_NAME "\\\\.\\pipe\\gst.win32.ipc" +#define DEFAULT_PROCESSING_DEADLINE (20 * GST_MSECOND) +#define DEFAULT_MAX_BUFFERS 2 +#define DEFAULT_LEAKY_TYPE GST_WIN32_IPC_LEAKY_NONE + +/* *INDENT-OFF* */ +struct _GstWin32IpcBaseSrcPrivate +{ + _GstWin32IpcBaseSrcPrivate () + { + pipe_name = g_strdup (DEFAULT_PIPE_NAME); + } + + ~_GstWin32IpcBaseSrcPrivate () + { + g_free (pipe_name); + } + + GstWin32IpcClient *client = nullptr; + GstCaps *caps = nullptr; + std::mutex lock; + + /* properties */ + gchar *pipe_name; + GstClockTime processing_deadline = DEFAULT_PROCESSING_DEADLINE; + guint64 max_buffers = DEFAULT_MAX_BUFFERS; + GstWin32IpcLeakyType leaky = DEFAULT_LEAKY_TYPE; +}; 
+/* *INDENT-ON* */ + +static void gst_win32_ipc_base_src_finalize (GObject * object); +static void gst_win32_ipc_base_src_set_property (GObject * object, + guint prop_id, const GValue * value, GParamSpec * pspec); +static void gst_win32_base_src_get_property (GObject * object, guint prop_id, + GValue * value, GParamSpec * pspec); + +static GstClock *gst_win32_base_src_provide_clock (GstElement * elem); + +static gboolean gst_win32_ipc_base_src_start (GstBaseSrc * src); +static gboolean gst_win32_ipc_base_src_stop (GstBaseSrc * src); +static gboolean gst_win32_ipc_base_src_unlock (GstBaseSrc * src); +static gboolean gst_win32_ipc_base_src_unlock_stop (GstBaseSrc * src); +static gboolean gst_win32_ipc_base_src_query (GstBaseSrc * src, + GstQuery * query); +static GstCaps *gst_win32_ipc_base_src_get_caps (GstBaseSrc * src, + GstCaps * filter); +static GstFlowReturn gst_win32_ipc_base_src_create (GstBaseSrc * src, + guint64 offset, guint size, GstBuffer ** buf); + +/** + * GstWin32IpcBaseSrc: + * + * Since: 1.28 + */ +#define gst_win32_ipc_base_src_parent_class parent_class +G_DEFINE_ABSTRACT_TYPE (GstWin32IpcBaseSrc, + gst_win32_ipc_base_src, GST_TYPE_BASE_SRC); + +static void +gst_win32_ipc_base_src_class_init (GstWin32IpcBaseSrcClass * klass) +{ + GObjectClass *object_class = G_OBJECT_CLASS (klass); + GstElementClass *element_class = GST_ELEMENT_CLASS (klass); + GstBaseSrcClass *src_class = GST_BASE_SRC_CLASS (klass); + + object_class->finalize = gst_win32_ipc_base_src_finalize; + object_class->set_property = gst_win32_ipc_base_src_set_property; + object_class->get_property = gst_win32_base_src_get_property; + + g_object_class_install_property (object_class, PROP_PIPE_NAME, + g_param_spec_string ("pipe-name", "Pipe Name", + "The name of Win32 named pipe to communicate with server. 
" + "Validation of the client name is caller's responsibility", + DEFAULT_PIPE_NAME, (GParamFlags) (G_PARAM_READWRITE | + G_PARAM_STATIC_STRINGS | GST_PARAM_MUTABLE_READY))); + g_object_class_install_property (object_class, PROP_PROCESSING_DEADLINE, + g_param_spec_uint64 ("processing-deadline", "Processing deadline", + "Maximum processing time for a buffer in nanoseconds", 0, G_MAXUINT64, + DEFAULT_PROCESSING_DEADLINE, (GParamFlags) (G_PARAM_READWRITE | + G_PARAM_STATIC_STRINGS | GST_PARAM_MUTABLE_PLAYING))); + + g_object_class_install_property (object_class, PROP_LEAKY_TYPE, + g_param_spec_enum ("leaky-type", "Leaky Type", + "Whether to drop buffers once the internal queue is full", + GST_TYPE_WIN32_IPC_LEAKY_TYPE, DEFAULT_LEAKY_TYPE, + (GParamFlags) (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + g_object_class_install_property (object_class, PROP_MAX_BUFFERS, + g_param_spec_uint64 ("max-buffers", "Max Buffers", + "Maximum number of buffers in queue (0=unlimited)", + 0, G_MAXUINT64, DEFAULT_MAX_BUFFERS, + (GParamFlags) (G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS))); + + g_object_class_install_property (object_class, PROP_CURRENT_LEVEL_BUFFERS, + g_param_spec_uint64 ("current-level-buffers", "Current Level Buffers", + "The number of currently queued buffers", + 0, G_MAXUINT64, 0, + (GParamFlags) (G_PARAM_READABLE | G_PARAM_STATIC_STRINGS))); + + element_class->provide_clock = + GST_DEBUG_FUNCPTR (gst_win32_base_src_provide_clock); + + src_class->start = GST_DEBUG_FUNCPTR (gst_win32_ipc_base_src_start); + src_class->stop = GST_DEBUG_FUNCPTR (gst_win32_ipc_base_src_stop); + src_class->unlock = GST_DEBUG_FUNCPTR (gst_win32_ipc_base_src_unlock); + src_class->unlock_stop = + GST_DEBUG_FUNCPTR (gst_win32_ipc_base_src_unlock_stop); + src_class->query = GST_DEBUG_FUNCPTR (gst_win32_ipc_base_src_query); + src_class->get_caps = GST_DEBUG_FUNCPTR (gst_win32_ipc_base_src_get_caps); + src_class->create = GST_DEBUG_FUNCPTR (gst_win32_ipc_base_src_create); + + 
GST_DEBUG_CATEGORY_INIT (gst_win32_ipc_base_src_debug, "win32ipcbasesrc", + 0, "win32ipcbasesrc"); + + gst_type_mark_as_plugin_api (GST_TYPE_WIN32_IPC_BASE_SRC, + (GstPluginAPIFlags) 0); + gst_type_mark_as_plugin_api (GST_TYPE_WIN32_IPC_LEAKY_TYPE, + (GstPluginAPIFlags) 0); +} + +static void +gst_win32_ipc_base_src_init (GstWin32IpcBaseSrc * self) +{ + self->priv = new GstWin32IpcBaseSrcPrivate (); + + gst_base_src_set_format (GST_BASE_SRC (self), GST_FORMAT_TIME); + gst_base_src_set_live (GST_BASE_SRC (self), TRUE); + + GST_OBJECT_FLAG_SET (self, GST_ELEMENT_FLAG_PROVIDE_CLOCK); + GST_OBJECT_FLAG_SET (self, GST_ELEMENT_FLAG_REQUIRE_CLOCK); +} + +static void +gst_win32_ipc_base_src_finalize (GObject * object) +{ + auto self = GST_WIN32_IPC_BASE_SRC (object); + + delete self->priv; + + G_OBJECT_CLASS (parent_class)->finalize (object); +} + +static void +gst_win32_ipc_base_src_set_property (GObject * object, guint prop_id, + const GValue * value, GParamSpec * pspec) +{ + auto self = GST_WIN32_IPC_BASE_SRC (object); + auto priv = self->priv; + + std::unique_lock < std::mutex > lk (priv->lock); + switch (prop_id) { + case PROP_PIPE_NAME: + g_free (priv->pipe_name); + priv->pipe_name = g_value_dup_string (value); + if (!priv->pipe_name) + priv->pipe_name = g_strdup (DEFAULT_PIPE_NAME); + break; + case PROP_PROCESSING_DEADLINE: + { + GstClockTime prev_val, new_val; + prev_val = priv->processing_deadline; + new_val = g_value_get_uint64 (value); + priv->processing_deadline = new_val; + + if (prev_val != new_val) { + GST_DEBUG_OBJECT (self, "Posting latency message"); + lk.unlock (); + gst_element_post_message (GST_ELEMENT_CAST (self), + gst_message_new_latency (GST_OBJECT_CAST (self))); + } + break; + } + case PROP_LEAKY_TYPE: + priv->leaky = (GstWin32IpcLeakyType) g_value_get_enum (value); + if (priv->client) + gst_win32_ipc_client_set_leaky (priv->client, priv->leaky); + break; + case PROP_MAX_BUFFERS: + priv->max_buffers = g_value_get_uint64 (value); + if (priv->client) 
+ gst_win32_ipc_client_set_max_buffers (priv->client, priv->max_buffers); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static void +gst_win32_base_src_get_property (GObject * object, guint prop_id, + GValue * value, GParamSpec * pspec) +{ + auto self = GST_WIN32_IPC_BASE_SRC (object); + auto priv = self->priv; + + std::lock_guard < std::mutex > lk (priv->lock); + + switch (prop_id) { + case PROP_PIPE_NAME: + g_value_set_string (value, priv->pipe_name); + break; + case PROP_PROCESSING_DEADLINE: + g_value_set_uint64 (value, priv->processing_deadline); + break; + case PROP_LEAKY_TYPE: + g_value_set_enum (value, priv->leaky); + break; + case PROP_MAX_BUFFERS: + g_value_set_uint64 (value, priv->max_buffers); + break; + case PROP_CURRENT_LEVEL_BUFFERS: + if (priv->client) { + auto level = + gst_win32_ipc_client_get_current_level_buffers (priv->client); + g_value_set_uint64 (value, level); + } else { + g_value_set_uint64 (value, 0); + } + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static GstClock * +gst_win32_base_src_provide_clock (GstElement * elem) +{ + return gst_system_clock_obtain (); +} + +static gboolean +gst_win32_ipc_base_src_start (GstBaseSrc * src) +{ + auto self = GST_WIN32_IPC_BASE_SRC (src); + auto priv = self->priv; + + GST_DEBUG_OBJECT (self, "Start"); + + std::lock_guard < std::mutex > lk (priv->lock); + priv->client = gst_win32_ipc_client_new (priv->pipe_name, + 5, priv->max_buffers, priv->leaky); + + return TRUE; +} + +static gboolean +gst_win32_ipc_base_src_stop (GstBaseSrc * src) +{ + auto self = GST_WIN32_IPC_BASE_SRC (src); + auto priv = self->priv; + + GST_DEBUG_OBJECT (self, "Stop"); + + std::lock_guard < std::mutex > lk (priv->lock); + if (priv->client) + gst_win32_ipc_client_stop (priv->client); + + gst_clear_object (&priv->client); + gst_clear_caps (&priv->caps); + + return TRUE; +} + +static gboolean +gst_win32_ipc_base_src_unlock 
(GstBaseSrc * src)
+{
+ auto self = GST_WIN32_IPC_BASE_SRC (src);
+ auto priv = self->priv;
+
+ GST_DEBUG_OBJECT (self, "Unlock");
+
+ std::lock_guard < std::mutex > lk (priv->lock);
+ if (priv->client)
+ gst_win32_ipc_client_set_flushing (priv->client, true);
+
+ return TRUE;
+}
+
+static gboolean
+gst_win32_ipc_base_src_unlock_stop (GstBaseSrc * src)
+{
+ auto self = GST_WIN32_IPC_BASE_SRC (src);
+ auto priv = self->priv;
+
+ GST_DEBUG_OBJECT (self, "Unlock stop");
+
+ std::lock_guard < std::mutex > lk (priv->lock);
+ if (priv->client)
+ gst_win32_ipc_client_set_flushing (priv->client, false);
+
+ return TRUE;
+}
+
+static gboolean
+gst_win32_ipc_base_src_query (GstBaseSrc * src, GstQuery * query)
+{
+ auto self = GST_WIN32_IPC_BASE_SRC (src);
+ auto priv = self->priv;
+
+ switch (GST_QUERY_TYPE (query)) {
+ case GST_QUERY_LATENCY:
+ {
+ GST_OBJECT_LOCK (self);
+ if (GST_CLOCK_TIME_IS_VALID (priv->processing_deadline)) {
+ gst_query_set_latency (query, TRUE, priv->processing_deadline,
+ GST_CLOCK_TIME_NONE);
+ } else {
+ gst_query_set_latency (query, TRUE, 0, 0);
+ }
+ GST_OBJECT_UNLOCK (self);
+ return TRUE;
+ }
+ default:
+ break;
+ }
+
+ return GST_BASE_SRC_CLASS (parent_class)->query (src, query);
+}
+
+static GstCaps *
+gst_win32_ipc_base_src_get_caps (GstBaseSrc * src, GstCaps * filter)
+{
+ auto self = GST_WIN32_IPC_BASE_SRC (src);
+ auto priv = self->priv;
+ GstWin32IpcClient *client = nullptr;
+ GstCaps *caps = nullptr;
+
+ GST_DEBUG_OBJECT (self, "Get caps");
+
+ priv->lock.lock ();
+ if (priv->caps)
+ caps = gst_caps_ref (priv->caps);
+ else if (priv->client)
+ client = (GstWin32IpcClient *) gst_object_ref (priv->client);
+ priv->lock.unlock ();
+
+ if (!caps && client)
+ caps = gst_win32_ipc_client_get_caps (client);
+
+ if (!caps)
+ caps = gst_pad_get_pad_template_caps (GST_BASE_SRC_PAD (src));
+
+ if (filter) {
+ GstCaps *tmp = gst_caps_intersect_full (filter,
+ caps, GST_CAPS_INTERSECT_FIRST);
+ gst_caps_unref (caps);
+ caps = tmp;
+ }
+
+ 
gst_clear_object (&client); + GST_DEBUG_OBJECT (self, "Returning caps %" GST_PTR_FORMAT, caps); + + return caps; +} + +static GstClockTime +gst_win32_ipc_base_src_get_buffer_time (GstBaseSrc * src, + GstClockTime base_time, gboolean clock_is_qpc, + GstClockTime now_qpc, GstClockTime now_gst, GstClockTime timestamp) +{ + if (!GST_CLOCK_TIME_IS_VALID (timestamp) || + !GST_CLOCK_TIME_IS_VALID (base_time)) { + return GST_CLOCK_TIME_NONE; + } + + if (clock_is_qpc) { + if (timestamp >= base_time) + return timestamp - base_time; + + return 0; + } + + GstClockTimeDiff running_time = now_gst - base_time + timestamp - now_qpc; + if (running_time >= 0) + return running_time; + + return 0; +} + +static GstFlowReturn +gst_win32_ipc_base_src_create (GstBaseSrc * src, guint64 offset, guint size, + GstBuffer ** buf) +{ + auto self = GST_WIN32_IPC_BASE_SRC (src); + auto priv = self->priv; + GstFlowReturn ret; + GstSample *sample = nullptr; + + GST_TRACE_OBJECT (self, "Create"); + + ret = gst_win32_ipc_client_run (priv->client); + if (ret != GST_FLOW_OK) + return ret; + + ret = gst_win32_ipc_client_get_sample (priv->client, &sample); + if (ret != GST_FLOW_OK) + return ret; + + auto now_qpc = gst_util_get_timestamp (); + auto clock = gst_element_get_clock (GST_ELEMENT_CAST (self)); + auto now_gst = gst_clock_get_time (clock); + auto base_time = GST_ELEMENT_CAST (self)->base_time; + auto is_qpc = gst_clock_is_system_monotonic (clock); + gst_object_unref (clock); + + auto buffer = gst_sample_get_buffer (sample); + auto pts = gst_win32_ipc_base_src_get_buffer_time (src, base_time, + is_qpc, now_qpc, now_gst, GST_BUFFER_PTS (buffer)); + auto dts = gst_win32_ipc_base_src_get_buffer_time (src, base_time, + is_qpc, now_qpc, now_gst, GST_BUFFER_DTS (buffer)); + + GST_BUFFER_PTS (buffer) = pts; + GST_BUFFER_DTS (buffer) = dts; + + std::unique_lock < std::mutex > lk (priv->lock); + auto caps = gst_sample_get_caps (sample); + if (!priv->caps || !gst_caps_is_equal (priv->caps, caps)) { + 
gst_caps_replace (&priv->caps, caps); + lk.unlock (); + gst_base_src_set_caps (src, priv->caps); + } + + *buf = gst_buffer_ref (buffer); + gst_sample_unref (sample); + + return GST_FLOW_OK; +}
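The QPC-to-pipeline-clock mapping in `gst_win32_ipc_base_src_get_buffer_time` above can be exercised in isolation. A minimal sketch in plain C++ (no GStreamer dependency; `uint64_t` stands in for `GstClockTime`, `UINT64_MAX` for `GST_CLOCK_TIME_NONE`, and the function name is hypothetical):

```cpp
#include <cassert>
#include <cstdint>

// Stand-in for GST_CLOCK_TIME_NONE.
static const uint64_t CLOCK_TIME_NONE = UINT64_MAX;

// Mirror of the timestamp mapping: convert an IPC buffer timestamp into a
// running time relative to the element's base time. When the pipeline clock
// is the same QPC-based monotonic clock the server stamped buffers with, the
// mapping is a simple subtraction; otherwise the instantaneous offset between
// the two clocks (now_gst - now_qpc) is applied first. Negative results
// clamp to zero.
uint64_t buffer_running_time (uint64_t base_time, bool clock_is_qpc,
    uint64_t now_qpc, uint64_t now_gst, uint64_t timestamp)
{
  if (timestamp == CLOCK_TIME_NONE || base_time == CLOCK_TIME_NONE)
    return CLOCK_TIME_NONE;

  if (clock_is_qpc)
    return timestamp >= base_time ? timestamp - base_time : 0;

  int64_t running = (int64_t) (now_gst - base_time) +
      (int64_t) timestamp - (int64_t) now_qpc;
  return running >= 0 ? (uint64_t) running : 0;
}
```

This matches the element code above: buffers stamped before `base_time` map to a running time of zero rather than going negative, and an invalid timestamp stays invalid.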
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/win32ipc/gstwin32ipcbasesrc.h
Added
@@ -0,0 +1,53 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/gst.h> +#include <gst/base/gstbasesrc.h> + +G_BEGIN_DECLS + +#define GST_TYPE_WIN32_IPC_BASE_SRC (gst_win32_ipc_base_src_get_type()) +#define GST_WIN32_IPC_BASE_SRC(obj) (G_TYPE_CHECK_INSTANCE_CAST((obj),GST_TYPE_WIN32_IPC_BASE_SRC,GstWin32IpcBaseSrc)) +#define GST_WIN32_IPC_BASE_SRC_CLASS(klass) (G_TYPE_CHECK_CLASS_CAST((klass), GST_TYPE_WIN32_IPC_BASE_SRC,GstWin32IpcBaseSrcClass)) +#define GST_WIN32_IPC_BASE_SRC_GET_CLASS(obj) (G_TYPE_INSTANCE_GET_CLASS((obj), GST_TYPE_WIN32_IPC_BASE_SRC,GstWin32IpcBaseSrcClass)) +#define GST_IS_WIN32_IPC_BASE_SRC(obj) (G_TYPE_CHECK_INSTANCE_TYPE((obj),GST_TYPE_WIN32_IPC_BASE_SRC)) +#define GST_IS_WIN32_IPC_BASE_SRC_CLASS(klass) (G_TYPE_CHECK_CLASS_TYPE((klass), GST_TYPE_WIN32_IPC_BASE_SRC)) + +typedef struct _GstWin32IpcBaseSrc GstWin32IpcBaseSrc; +typedef struct _GstWin32IpcBaseSrcClass GstWin32IpcBaseSrcClass; +typedef struct _GstWin32IpcBaseSrcPrivate GstWin32IpcBaseSrcPrivate; + +struct _GstWin32IpcBaseSrc +{ + GstBaseSrc parent; + + GstWin32IpcBaseSrcPrivate *priv; +}; + +struct _GstWin32IpcBaseSrcClass +{ + GstBaseSrcClass parent_class; +}; + +GType 
gst_win32_ipc_base_src_get_type (void); +G_DEFINE_AUTOPTR_CLEANUP_FUNC(GstWin32IpcBaseSrc, gst_object_unref) + +G_END_DECLS \ No newline at end of file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/win32ipc/gstwin32ipcbufferpool.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/win32ipc/gstwin32ipcbufferpool.cpp
Changed
@@ -35,6 +35,7 @@ GstWin32IpcAllocator *alloc; GstVideoInfo info; gboolean add_videometa; + gboolean is_raw_video; }; #define gst_win32_ipc_buffer_pool_parent_class parent_class @@ -118,22 +119,28 @@ return FALSE; } - /* now parse the caps from the config */ - if (!gst_video_info_from_caps (&info, caps)) { - GST_WARNING_OBJECT (self, "Couldn't get video info from caps"); - return FALSE; + auto s = gst_caps_get_structure (caps, 0); + self->is_raw_video = gst_structure_has_name (s, "video/x-raw"); + if (self->is_raw_video) { + if (!gst_video_info_from_caps (&info, caps)) { + GST_WARNING_OBJECT (self, "Couldn't get video info from caps"); + return FALSE; + } + + if (size < info.size) { + GST_WARNING_OBJECT (self, "Size is smaller for the caps"); + return FALSE; + } + + info.size = MAX (size, info.size); + size = info.size; + self->info = info; + + GST_LOG_OBJECT (pool, "%dx%d, caps %" GST_PTR_FORMAT, + info.width, info.height, caps); } - if (size < info.size) { - GST_WARNING_OBJECT (self, "Size is smaller for the caps"); - return FALSE; - } - - info.size = MAX (size, info.size); - self->info = info; - - GST_LOG_OBJECT (pool, "%dx%d, caps %" GST_PTR_FORMAT, info.width, info.height, - caps); + /* now parse the caps from the config */ if (self->alloc) { gst_win32_ipc_allocator_set_active (self->alloc, FALSE); @@ -150,7 +157,7 @@ GST_BUFFER_POOL_OPTION_VIDEO_META); gst_buffer_pool_config_set_params (config, - caps, info.size, min_buffers, max_buffers); + caps, size, min_buffers, max_buffers); return GST_BUFFER_POOL_CLASS (parent_class)->set_config (pool, config) && ret; } @@ -174,7 +181,7 @@ buf = gst_buffer_new (); gst_buffer_append_memory (buf, mem); - if (self->add_videometa) { + if (self->is_raw_video && self->add_videometa) { gst_buffer_add_video_meta_full (buf, GST_VIDEO_FRAME_FLAG_NONE, GST_VIDEO_INFO_FORMAT (info), GST_VIDEO_INFO_WIDTH (info), GST_VIDEO_INFO_HEIGHT (info), GST_VIDEO_INFO_N_PLANES (info),
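The `leaky-type`/`max-buffers` queueing policy exposed by the base source in this revision can be modelled with a plain bounded queue. A sketch in standard C++ only (the enum and function names are hypothetical; the real client blocks on a condition variable in the non-leaky case rather than rejecting the sample):

```cpp
#include <cassert>
#include <cstddef>
#include <queue>

enum LeakyType { LEAKY_NONE, LEAKY_DOWNSTREAM, LEAKY_UPSTREAM };

// Push a sample into a bounded queue following the leaky policy:
//  - LEAKY_DOWNSTREAM drops the oldest queued samples to make room,
//  - LEAKY_UPSTREAM drops the incoming sample when the queue is full,
//  - LEAKY_NONE would block the producer (modelled here as "reject").
// A max_buffers of 0 means unlimited. Returns true when the new sample
// was queued.
bool push_leaky (std::queue<int> &q, std::size_t max_buffers,
    LeakyType leaky, int sample)
{
  if (max_buffers == 0 || q.size () < max_buffers) {
    q.push (sample);
    return true;
  }

  switch (leaky) {
    case LEAKY_DOWNSTREAM:
      while (q.size () >= max_buffers)
        q.pop ();               /* drop oldest queued samples */
      q.push (sample);
      return true;
    case LEAKY_UPSTREAM:
      return false;             /* drop the incoming sample */
    case LEAKY_NONE:
    default:
      return false;             /* real code waits for free space */
  }
}
```

Note this is only a model of the drop policy; it deliberately omits the condition-variable wait, flush/abort handling, and the deferred unref of dropped samples that the actual client code performs outside the lock.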
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/win32ipc/gstwin32ipcclient.cpp
Added
@@ -0,0 +1,1019 @@ +/* GStreamer + * Copyright (C) 2024 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gstwin32ipcclient.h" +#include "gstwin32ipcprotocol.h" +#include <mutex> +#include <condition_variable> +#include <queue> +#include <atomic> +#include <memory> + +GST_DEBUG_CATEGORY_STATIC (gst_win32_ipc_client_debug); +#define GST_CAT_DEFAULT gst_win32_ipc_client_debug + +#define CONN_BUFFER_SIZE 1024 + +/* *INDENT-OFF* */ +struct GstWin32IpcClientConn : public OVERLAPPED +{ + GstWin32IpcClientConn (GstWin32IpcClient * client, HANDLE pipe_handle) + : client (client), pipe (pipe_handle) + { + OVERLAPPED *parent = static_cast<OVERLAPPED *> (this); + parent->Internal = 0; + parent->InternalHigh = 0; + parent->Offset = 0; + parent->OffsetHigh = 0; + + client_msg.resize (CONN_BUFFER_SIZE); + server_msg.resize (CONN_BUFFER_SIZE); + } + + ~GstWin32IpcClientConn () + { + if (pipe != INVALID_HANDLE_VALUE) { + CancelIo (pipe); + CloseHandle (pipe); + } + } + + GstWin32IpcClient *client; + + HANDLE pipe = INVALID_HANDLE_VALUE; + + GstWin32IpcPktType type; + std::vector<guint8> client_msg; + std::vector<guint8> server_msg; +}; + +struct GstWin32IpcImportData +{ + 
~GstWin32IpcImportData () + { + GST_LOG_OBJECT (client, "Release handle \"%p\"", server_handle); + gst_object_unref (client); + if (mmf) + gst_win32_ipc_mmf_unref (mmf); + } + + GstWin32IpcClient *client; + HANDLE server_handle = nullptr; + GstWin32IpcMmf *mmf = nullptr; +}; + +struct GstWin32IpcReleaseData +{ + GstWin32IpcClient *self; + std::shared_ptr<GstWin32IpcImportData> imported; +}; + +struct GstWin32IpcClientPrivate +{ + GstWin32IpcClientPrivate () + { + wakeup_event = CreateEvent (nullptr, FALSE, FALSE, nullptr); + cancellable = CreateEvent (nullptr, TRUE, FALSE, nullptr); + + shutdown = false; + io_pending = true; + } + + ~GstWin32IpcClientPrivate () + { + gst_clear_caps (&caps); + CloseHandle (wakeup_event); + CloseHandle (cancellable); + if (server_process) + CloseHandle (server_process); + } + + std::string address; + GstClockTime timeout; + HANDLE wakeup_event; + HANDLE cancellable; + HANDLE server_process = nullptr; + std::mutex lock; + std::condition_variable cond; + GstCaps *caps = nullptr; + std::string caps_string; + bool server_eos = false; + bool flushing = false; + bool aborted = false; + bool sent_fin = false; + std::atomic<bool> shutdown = { false }; + std::atomic<bool> io_pending = { false }; + GThread *loop_thread = nullptr; + std::queue <GstSample *> samples; + std::shared_ptr<GstWin32IpcClientConn> conn; + std::queue<HANDLE> unused_data; + std::vector<std::weak_ptr<GstWin32IpcImportData>> imported; + + std::atomic<guint64> max_buffers = { 0 }; + std::atomic<GstWin32IpcLeakyType> leaky { GST_WIN32_IPC_LEAKY_DOWNSTREAM }; +}; +/* *INDENT-ON* */ + +struct _GstWin32IpcClient +{ + GstObject parent; + + GstWin32IpcClientPrivate *priv; +}; + +static void gst_win32_ipc_client_dispose (GObject * object); +static void gst_win32_ipc_client_finalize (GObject * object); +static void gst_win32_ipc_client_continue (GstWin32IpcClient * self); +static void gst_win32_ipc_client_send_msg (GstWin32IpcClient * self); + +#define 
gst_win32_ipc_client_parent_class parent_class +G_DEFINE_TYPE (GstWin32IpcClient, gst_win32_ipc_client, GST_TYPE_OBJECT); + +static void +gst_win32_ipc_client_class_init (GstWin32IpcClientClass * klass) +{ + GObjectClass *object_class = G_OBJECT_CLASS (klass); + + object_class->dispose = gst_win32_ipc_client_dispose; + object_class->finalize = gst_win32_ipc_client_finalize; + + GST_DEBUG_CATEGORY_INIT (gst_win32_ipc_client_debug, "win32ipcclient", + 0, "win32ipcclient"); +} + +static void +gst_win32_ipc_client_init (GstWin32IpcClient * self) +{ + self->priv = new GstWin32IpcClientPrivate (); +} + +static void +gst_win32_ipc_client_dispose (GObject * object) +{ + auto self = GST_WIN32_IPC_CLIENT (object); + auto priv = self->priv; + + GST_DEBUG_OBJECT (self, "dispose"); + + SetEvent (priv->cancellable); + + g_clear_pointer (&priv->loop_thread, g_thread_join); + + G_OBJECT_CLASS (parent_class)->dispose (object); +} + +static void +gst_win32_ipc_client_finalize (GObject * object) +{ + auto self = GST_WIN32_IPC_CLIENT (object); + + GST_DEBUG_OBJECT (self, "finalize"); + + delete self->priv; + + G_OBJECT_CLASS (parent_class)->finalize (object); +} + +static void +gst_win32_ipc_client_abort (GstWin32IpcClient * self) +{ + auto priv = self->priv; + std::lock_guard < std::mutex > lk (priv->lock); + priv->aborted = true; + priv->cond.notify_all (); +} + +static bool +gst_win32_ipc_client_config_data (GstWin32IpcClient * self) +{ + auto priv = self->priv; + auto conn = priv->conn; + std::string caps_string; + DWORD server_pid; + std::lock_guard < std::mutex > lk (priv->lock); + + if (!gst_win32_ipc_pkt_parse_config (conn->server_msg, + server_pid, caps_string)) { + GST_ERROR_OBJECT (self, "Couldn't parse CONFIG-DATA"); + return false; + } + + if (caps_string.empty ()) { + GST_ERROR_OBJECT (self, "Empty caps"); + return false; + } + + priv->caps_string = caps_string; + + gst_clear_caps (&priv->caps); + priv->caps = gst_caps_from_string (caps_string.c_str ()); + if 
(!priv->caps) { + GST_ERROR_OBJECT (self, "Invalid caps string \"%s\"", caps_string.c_str ()); + return false; + } + + if (priv->server_process) { + GST_WARNING_OBJECT (self, "Have server process handle already"); + CloseHandle (priv->server_process); + } + + priv->server_process = OpenProcess (PROCESS_DUP_HANDLE, FALSE, server_pid); + if (!priv->server_process) { + guint last_err = GetLastError (); + auto err = g_win32_error_message (last_err); + GST_ERROR_OBJECT (self, "Couldn't open server process, 0x%x (%s)", + last_err, err); + g_free (err); + return false; + } + + priv->cond.notify_all (); + + return true; +} + +static void +gst_win32_ipc_client_release_imported_data (GstWin32IpcReleaseData * data) +{ + auto self = data->self; + auto priv = self->priv; + HANDLE server_handle = data->imported->server_handle; + + GST_LOG_OBJECT (self, "Releasing data \"%p\"", server_handle); + + data->imported = nullptr; + + { + std::lock_guard < std::mutex > lk (priv->lock); + priv->unused_data.push (server_handle); + } + + SetEvent (priv->wakeup_event); + + gst_object_unref (data->self); + + delete data; +} + +static bool +gst_win32_ipc_client_have_data (GstWin32IpcClient * self) +{ + auto priv = self->priv; + GstBuffer *buffer = nullptr; + SIZE_T size; + std::string caps_string; + GstClockTime pts; + GstClockTime dts; + GstClockTime dur; + UINT buf_flags = 0; + std::shared_ptr < GstWin32IpcImportData > import_data; + HANDLE server_handle = nullptr; + HANDLE client_handle = nullptr; + std::vector < UINT8 > meta; + auto conn = priv->conn; + + std::unique_lock < std::mutex > lk (priv->lock); + + if (!gst_win32_ipc_pkt_parse_have_data (conn->server_msg, size, + pts, dts, dur, buf_flags, server_handle, caps_string, meta)) { + GST_ERROR_OBJECT (self, "Couldn't parse HAVE-DATA packet"); + return false; + } + + if (!caps_string.empty () && caps_string != priv->caps_string) { + auto new_caps = gst_caps_from_string (caps_string.c_str ()); + if (!new_caps) { + GST_ERROR_OBJECT (self, 
"Invalid caps string \"%s\"", + caps_string.c_str ()); + return false; + } + + gst_caps_unref (priv->caps); + priv->caps = new_caps; + } + + if (!DuplicateHandle (priv->server_process, server_handle, + GetCurrentProcess (), &client_handle, 0, FALSE, + DUPLICATE_SAME_ACCESS)) { + guint last_err = GetLastError (); + auto err = g_win32_error_message (last_err); + GST_ERROR_OBJECT (self, "Couldn't duplicate handle, 0x%x (%s)", + last_err, err); + g_free (err); + return false; + } + + GST_LOG_OBJECT (self, "Importing server handle %p", server_handle); + + auto mmf = gst_win32_ipc_mmf_open (size, client_handle); + if (!mmf) { + GST_ERROR_OBJECT (self, "Couldn't open resource"); + return false; + } + + import_data = std::make_shared < GstWin32IpcImportData > (); + import_data->client = (GstWin32IpcClient *) gst_object_ref (self); + import_data->server_handle = server_handle; + import_data->mmf = mmf; + + { + auto data = new GstWin32IpcReleaseData (); + data->self = (GstWin32IpcClient *) gst_object_ref (self); + data->imported = import_data; + + auto mem = gst_memory_new_wrapped (GST_MEMORY_FLAG_READONLY, + gst_win32_ipc_mmf_get_raw (mmf), size, 0, size, data, + (GDestroyNotify) gst_win32_ipc_client_release_imported_data); + + buffer = gst_buffer_new (); + gst_buffer_append_memory (buffer, mem); + + while (!meta.empty ()) { + guint32 consumed = 0; + if (!gst_meta_deserialize (buffer, meta.data (), meta.size (), + &consumed) || consumed == 0) { + break; + } + + meta.erase (meta.begin (), meta.begin () + consumed); + } + + priv->imported.push_back (import_data); + } + + GST_BUFFER_PTS (buffer) = pts; + GST_BUFFER_DTS (buffer) = dts; + GST_BUFFER_DURATION (buffer) = dur; + GST_BUFFER_FLAG_SET (buffer, buf_flags); + + auto sample = gst_sample_new (buffer, priv->caps, nullptr, nullptr); + gst_buffer_unref (buffer); + + std::queue < GstSample * >drop_queue; + bool drop_current = false; + + if (priv->max_buffers > 0) { + if (priv->leaky == GST_WIN32_IPC_LEAKY_NONE) { + if 
(priv->samples.size () >= priv->max_buffers) {
+ GST_DEBUG_OBJECT (self, "Waiting for free space");
+ priv->cond.wait (lk, [&] {
+ auto max = priv->max_buffers.load ();
+ return priv->aborted || priv->flushing || priv->shutdown ||
+ priv->leaky != GST_WIN32_IPC_LEAKY_NONE || max == 0 ||
+ priv->samples.size () < max;
+ }
+ );
+ }
+
+ if (priv->aborted) {
+ GST_DEBUG_OBJECT (self, "Aborted while waiting for free slot");
+ lk.unlock ();
+
+ gst_sample_unref (sample);
+ return false;
+ } else if (priv->flushing || priv->shutdown) {
+ GST_DEBUG_OBJECT (self, "Flushing while waiting for free slot");
+ lk.unlock ();
+
+ gst_sample_unref (sample);
+ return true;
+ }
+ } else if (priv->leaky == GST_WIN32_IPC_LEAKY_DOWNSTREAM) {
+ while (priv->samples.size () >= priv->max_buffers) {
+ drop_queue.push (priv->samples.front ());
+ priv->samples.pop ();
+ }
+ } else {
+ if (priv->samples.size () >= priv->max_buffers) {
+ GST_DEBUG_OBJECT (self, "Queue full, dropping current sample");
+ drop_current = true;
+ }
+ }
+ }
+
+ if (!drop_current) {
+ priv->samples.push (sample);
+ priv->cond.notify_all ();
+ }
+
+ lk.unlock ();
+
+ import_data = nullptr;
+ while (!drop_queue.empty ()) {
+ auto old = drop_queue.front ();
+ gst_sample_unref (old);
+ drop_queue.pop ();
+ }
+
+ if (drop_current)
+ gst_sample_unref (sample);
+
+ return true;
+}
+
+static void
+gst_win32_ipc_client_wait_msg_finish (GstWin32IpcClient * client)
+{
+ auto priv = client->priv;
+ auto conn = priv->conn;
+ GstWin32IpcPktHdr hdr;
+
+ if (!gst_win32_ipc_pkt_identify (conn->server_msg, hdr)) {
+ GST_ERROR_OBJECT (client, "Broken header");
+ gst_win32_ipc_client_abort (client);
+ return;
+ }
+
+ switch (hdr.type) {
+ case GstWin32IpcPktType::CONFIG:
+ GST_LOG_OBJECT (client, "Got CONFIG");
+ if (!gst_win32_ipc_client_config_data (client)) {
+ gst_win32_ipc_client_abort (client);
+ return;
+ }
+
+ gst_win32_ipc_client_continue (client);
+ break;
+ case GstWin32IpcPktType::HAVE_DATA:
+ GST_LOG_OBJECT (client, "Got 
HAVE-DATA"); + if (!gst_win32_ipc_client_have_data (client)) { + gst_win32_ipc_client_abort (client); + return; + } + + GST_LOG_OBJECT (client, "Sending READ-DONE"); + gst_win32_ipc_pkt_build_read_done (conn->client_msg); + conn->type = GstWin32IpcPktType::READ_DONE; + gst_win32_ipc_client_send_msg (client); + break; + case GstWin32IpcPktType::EOS: + GST_DEBUG_OBJECT (client, "Got EOS"); + priv->server_eos = true; + priv->lock.lock (); + priv->cond.notify_all (); + priv->lock.unlock (); + gst_win32_ipc_client_continue (client); + break; + default: + GST_WARNING_OBJECT (client, "Unexpected packet type"); + gst_win32_ipc_client_abort (client); + break; + } +} + +static void WINAPI +gst_win32_ipc_client_payload_finish (DWORD error_code, DWORD size, + OVERLAPPED * overlap) +{ + auto conn = static_cast < GstWin32IpcClientConn * >(overlap); + auto self = conn->client; + + if (error_code != ERROR_SUCCESS) { + auto err = g_win32_error_message (error_code); + GST_WARNING_OBJECT (self, "ReadFileEx callback failed with 0x%x (%s)", + (guint) error_code, err); + g_free (err); + gst_win32_ipc_client_abort (self); + } + + gst_win32_ipc_client_wait_msg_finish (self); +} + +static void WINAPI +gst_win32_ipc_client_win32_wait_header_finish (DWORD error_code, DWORD size, + OVERLAPPED * overlap) +{ + auto conn = static_cast < GstWin32IpcClientConn * >(overlap); + auto self = conn->client; + GstWin32IpcPktHdr hdr; + + if (error_code != ERROR_SUCCESS) { + auto err = g_win32_error_message (error_code); + GST_WARNING_OBJECT (self, "ReadFileEx callback failed with 0x%x (%s)", + (guint) error_code, err); + g_free (err); + gst_win32_ipc_client_abort (self); + return; + } + + if (!gst_win32_ipc_pkt_identify (conn->server_msg, hdr)) { + GST_ERROR_OBJECT (self, "Broken header"); + gst_win32_ipc_client_abort (self); + return; + } + + if (hdr.payload_size == 0) { + gst_win32_ipc_client_wait_msg_finish (self); + return; + } + + GST_LOG_OBJECT (self, "Reading payload"); + + if (!ReadFileEx 
(conn->pipe, conn->server_msg.data () + + sizeof (GstWin32IpcPktHdr), hdr.payload_size, conn, + gst_win32_ipc_client_payload_finish)) { + guint last_err = GetLastError (); + auto err = g_win32_error_message (last_err); + GST_WARNING_OBJECT (self, "ReadFileEx failed with 0x%x (%s)", + last_err, err); + g_free (err); + gst_win32_ipc_client_abort (self); + } +} + +static void +gst_win32_ipc_client_wait_msg (GstWin32IpcClient * self) +{ + auto priv = self->priv; + auto conn = priv->conn; + priv->io_pending = true; + + if (!ReadFileEx (conn->pipe, conn->server_msg.data (), + sizeof (GstWin32IpcPktHdr), conn.get (), + gst_win32_ipc_client_win32_wait_header_finish)) { + guint last_err = GetLastError (); + auto err = g_win32_error_message (last_err); + GST_WARNING_OBJECT (self, "ReadFileEx failed with 0x%x (%s)", + last_err, err); + g_free (err); + gst_win32_ipc_client_abort (self); + } +} + +static void WINAPI +gst_win32_ipc_client_send_msg_finish (DWORD error_code, DWORD size, + OVERLAPPED * overlap) +{ + auto conn = static_cast < GstWin32IpcClientConn * >(overlap); + auto self = conn->client; + + if (error_code != ERROR_SUCCESS) { + auto err = g_win32_error_message (error_code); + GST_WARNING_OBJECT (self, "WriteFileEx callback failed with 0x%x (%s)", + (guint) error_code, err); + g_free (err); + gst_win32_ipc_client_abort (self); + return; + } + + switch (conn->type) { + case GstWin32IpcPktType::NEED_DATA: + GST_LOG_OBJECT (self, "Sent NEED-DATA"); + gst_win32_ipc_client_wait_msg (self); + break; + case GstWin32IpcPktType::READ_DONE: + GST_LOG_OBJECT (self, "Sent READ-DONE"); + gst_win32_ipc_client_continue (self); + break; + case GstWin32IpcPktType::RELEASE_DATA: + GST_LOG_OBJECT (self, "Sent RELEASE-DATA"); + gst_win32_ipc_client_continue (self); + break; + case GstWin32IpcPktType::FIN: + GST_DEBUG_OBJECT (self, "Sent FIN"); + gst_win32_ipc_client_abort (self); + break; + default: + GST_ERROR_OBJECT (self, "Unexpected msg type"); + gst_win32_ipc_client_abort (self); 
+ break; + } +} + +static void +gst_win32_ipc_client_send_msg (GstWin32IpcClient * self) +{ + auto priv = self->priv; + auto conn = priv->conn; + + priv->io_pending = true; + + if (!WriteFileEx (conn->pipe, conn->client_msg.data (), + conn->client_msg.size (), conn.get (), + gst_win32_ipc_client_send_msg_finish)) { + guint last_err = GetLastError (); + auto err = g_win32_error_message (last_err); + GST_WARNING_OBJECT (self, "WriteFileEx failed with 0x%x (%s)", + last_err, err); + g_free (err); + gst_win32_ipc_client_abort (self); + } +} + +static void +gst_win32_ipc_client_run_gc (GstWin32IpcClient * self) +{ + auto priv = self->priv; + + for (auto it = priv->imported.begin (); it != priv->imported.end ();) { + auto data = it->lock (); + if (!data) { + it = priv->imported.erase (it); + } else { + it++; + } + } +} + +static void +gst_win32_ipc_client_continue (GstWin32IpcClient * self) +{ + auto priv = self->priv; + std::unique_lock < std::mutex > lk (priv->lock); + auto conn = priv->conn; + + if (!conn) { + GST_WARNING_OBJECT (self, "No connection was made"); + priv->aborted = true; + priv->cond.notify_all (); + return; + } + + if (priv->aborted) { + priv->cond.notify_all (); + GST_DEBUG_OBJECT (self, "Operation was aborted"); + return; + } + + if (!priv->unused_data.empty ()) { + HANDLE server_handle = priv->unused_data.front (); + priv->unused_data.pop (); + + GST_LOG_OBJECT (self, "Sending RELEASE-DATA %p", server_handle); + + gst_win32_ipc_pkt_build_release_data (conn->client_msg, server_handle); + conn->type = GstWin32IpcPktType::RELEASE_DATA; + lk.unlock (); + + gst_win32_ipc_client_send_msg (self); + return; + } + + if (priv->shutdown) { + auto drop_queue = priv->samples; + while (!priv->samples.empty ()) + priv->samples.pop (); + lk.unlock (); + + while (!drop_queue.empty ()) { + auto sample = drop_queue.front (); + gst_sample_unref (sample); + drop_queue.pop (); + } + lk.lock (); + } + + if (priv->server_eos || priv->shutdown) { + 
gst_win32_ipc_client_run_gc (self); + + GST_DEBUG_OBJECT (self, "Remaining imported memory %" G_GSIZE_FORMAT, + priv->imported.size ()); + + if (priv->imported.empty ()) { + GST_DEBUG_OBJECT (self, "Drained"); + if (priv->sent_fin) { + priv->aborted = true; + priv->cond.notify_all (); + } else { + lk.unlock (); + + priv->sent_fin = true; + gst_win32_ipc_pkt_build_fin (conn->client_msg); + conn->type = GstWin32IpcPktType::FIN; + + GST_DEBUG_OBJECT (self, "Sending FIN"); + gst_win32_ipc_client_send_msg (self); + return; + } + } else { + priv->io_pending = false; + } + return; + } + + lk.unlock (); + + gst_win32_ipc_pkt_build_need_data (conn->client_msg); + conn->type = GstWin32IpcPktType::NEED_DATA; + + GST_LOG_OBJECT (self, "Sending NEED-DATA"); + gst_win32_ipc_client_send_msg (self); +} + +static gpointer +gst_win32_ipc_client_loop_thread_func (GstWin32IpcClient * self) +{ + auto priv = self->priv; + DWORD mode = PIPE_READMODE_MESSAGE; + guint wait_ret; + HANDLE pipe = INVALID_HANDLE_VALUE; + auto start_time = gst_util_get_timestamp (); + HANDLE waitables[] = { priv->cancellable, priv->wakeup_event }; + auto address = (wchar_t *) g_utf8_to_utf16 (priv->address.c_str (), + -1, nullptr, nullptr, nullptr); + +#if (_WIN32_WINNT >= _WIN32_WINNT_WIN8) + CREATEFILE2_EXTENDED_PARAMETERS params; + memset (&params, 0, sizeof (CREATEFILE2_EXTENDED_PARAMETERS)); + params.dwSize = sizeof (CREATEFILE2_EXTENDED_PARAMETERS); + params.dwFileAttributes = 0; + params.dwFileFlags = FILE_FLAG_OVERLAPPED; + params.dwSecurityQosFlags = SECURITY_IMPERSONATION; +#endif + + GST_DEBUG_OBJECT (self, "Starting loop thread"); + + std::unique_lock < std::mutex > lk (priv->lock); + do { + GstClockTime diff; + + if (priv->flushing) { + GST_DEBUG_OBJECT (self, "We are flushing"); + priv->aborted = true; + priv->cond.notify_all (); + goto out; + } +#if (_WIN32_WINNT >= _WIN32_WINNT_WIN8) + pipe = CreateFile2 (address, GENERIC_READ | GENERIC_WRITE, 0, + OPEN_EXISTING, &params); +#else + pipe = CreateFileW
(address, + GENERIC_READ | GENERIC_WRITE, 0, nullptr, OPEN_EXISTING, + FILE_FLAG_OVERLAPPED, nullptr); +#endif + + if (pipe != INVALID_HANDLE_VALUE) + break; + + if (priv->timeout > 0) { + diff = gst_util_get_timestamp () - start_time; + if (diff > priv->timeout) { + GST_WARNING_OBJECT (self, "Timeout"); + priv->aborted = true; + priv->cond.notify_all (); + goto out; + } + } + + /* Retry per 100ms */ + GST_DEBUG_OBJECT (self, "Sleep for next retry"); + priv->cond.wait_for (lk, std::chrono::milliseconds (100)); + } while (true); + + if (!SetNamedPipeHandleState (pipe, &mode, nullptr, nullptr)) { + guint last_err = GetLastError (); + auto err = g_win32_error_message (last_err); + GST_WARNING_OBJECT (self, "SetNamedPipeHandleState failed with 0x%x (%s)", + last_err, err); + g_free (err); + + CloseHandle (pipe); + priv->aborted = true; + priv->cond.notify_all (); + goto out; + } + + priv->conn = std::make_shared < GstWin32IpcClientConn > (self, pipe); + priv->cond.notify_all (); + lk.unlock (); + + gst_win32_ipc_client_wait_msg (self); + + do { + /* Enters alertable thread state and wait for I/O completion event + * or cancellable event */ + wait_ret = WaitForMultipleObjectsEx (G_N_ELEMENTS (waitables), waitables, + FALSE, INFINITE, TRUE); + if (wait_ret == WAIT_OBJECT_0) { + GST_DEBUG ("Operation cancelled"); + goto out; + } + + switch (wait_ret) { + case WAIT_IO_COMPLETION: + break; + case WAIT_OBJECT_0 + 1: + if (!priv->io_pending) + gst_win32_ipc_client_continue (self); + break; + default: + GST_WARNING ("Unexpected wait return 0x%x", wait_ret); + gst_win32_ipc_client_abort (self); + goto out; + } + } while (true); + +out: + while (!priv->samples.empty ()) { + auto sample = priv->samples.front (); + gst_sample_unref (sample); + priv->samples.pop (); + } + + priv->conn = nullptr; + g_free (address); + + GST_DEBUG_OBJECT (self, "Exit loop thread"); + + return nullptr; +} + +GstFlowReturn +gst_win32_ipc_client_run (GstWin32IpcClient * client) +{ + g_return_val_if_fail 
(GST_IS_WIN32_IPC_CLIENT (client), GST_FLOW_ERROR); + + auto priv = client->priv; + std::unique_lock < std::mutex > lk (priv->lock); + if (!priv->loop_thread) { + priv->loop_thread = g_thread_new ("win32-ipc-client", + (GThreadFunc) gst_win32_ipc_client_loop_thread_func, client); + + while (!priv->caps && !priv->aborted && !priv->flushing) + priv->cond.wait (lk); + } + + if (priv->flushing) { + GST_DEBUG_OBJECT (client, "We are flushing"); + return GST_FLOW_FLUSHING; + } else if (priv->aborted || !priv->caps) { + GST_DEBUG_OBJECT (client, "Aborted"); + return GST_FLOW_ERROR; + } + + return GST_FLOW_OK; +} + +GstCaps * +gst_win32_ipc_client_get_caps (GstWin32IpcClient * client) +{ + GstCaps *caps = nullptr; + + g_return_val_if_fail (GST_IS_WIN32_IPC_CLIENT (client), nullptr); + + auto priv = client->priv; + + if (gst_win32_ipc_client_run (client) != GST_FLOW_OK) + return nullptr; + + std::lock_guard < std::mutex > lk (priv->lock); + if (priv->caps) + caps = gst_caps_ref (priv->caps); + + return caps; +} + +static void +gst_win32_ipc_client_stop_async (GstWin32IpcClient * client, gpointer user_data) +{ + auto priv = client->priv; + + GST_DEBUG_OBJECT (client, "Stopping"); + + SetEvent (priv->cancellable); + g_clear_pointer (&priv->loop_thread, g_thread_join); + + GST_DEBUG_OBJECT (client, "Stopped"); +} + +void +gst_win32_ipc_client_stop (GstWin32IpcClient * client) +{ + g_return_if_fail (GST_IS_WIN32_IPC_CLIENT (client)); + + auto priv = client->priv; + + GST_DEBUG_OBJECT (client, "Stopping"); + + { + std::lock_guard < std::mutex > lk (priv->lock); + priv->shutdown = true; + priv->cond.notify_all (); + } + + SetEvent (priv->wakeup_event); + + /* We don't know when imported memory gets released */ + gst_object_call_async (GST_OBJECT (client), + (GstObjectCallAsyncFunc) gst_win32_ipc_client_stop_async, nullptr); +} + +void +gst_win32_ipc_client_set_flushing (GstWin32IpcClient * client, bool flushing) +{ + g_return_if_fail (GST_IS_WIN32_IPC_CLIENT (client)); + + auto 
priv = client->priv; + + std::lock_guard < std::mutex > lk (priv->lock); + priv->flushing = flushing; + priv->cond.notify_all (); +} + +GstFlowReturn +gst_win32_ipc_client_get_sample (GstWin32IpcClient * client, + GstSample ** sample) +{ + g_return_val_if_fail (GST_IS_WIN32_IPC_CLIENT (client), GST_FLOW_ERROR); + g_return_val_if_fail (sample, GST_FLOW_ERROR); + + auto priv = client->priv; + + GST_LOG_OBJECT (client, "Waiting for sample"); + std::unique_lock < std::mutex > lk (priv->lock); + while (!priv->flushing && !priv->aborted && !priv->server_eos && + priv->samples.empty ()) { + priv->cond.wait (lk); + } + + if (!priv->samples.empty ()) { + *sample = priv->samples.front (); + priv->samples.pop (); + + priv->cond.notify_all (); + + GST_LOG_OBJECT (client, "Have sample"); + return GST_FLOW_OK; + } + + if (priv->flushing) { + GST_DEBUG_OBJECT (client, "Flushing"); + return GST_FLOW_FLUSHING; + } + + GST_DEBUG_OBJECT (client, "EOS"); + + return GST_FLOW_EOS; +} + +void +gst_win32_ipc_client_set_leaky (GstWin32IpcClient * client, + GstWin32IpcLeakyType leaky) +{ + auto priv = client->priv; + + std::lock_guard < std::mutex > lk (priv->lock); + if (priv->leaky != leaky) { + priv->leaky = leaky; + priv->cond.notify_all (); + } +} + +void +gst_win32_ipc_client_set_max_buffers (GstWin32IpcClient * client, + guint64 max_buffers) +{ + auto priv = client->priv; + + std::lock_guard < std::mutex > lk (priv->lock); + if (priv->max_buffers != max_buffers) { + priv->max_buffers = max_buffers; + priv->cond.notify_all (); + } +} + +guint64 +gst_win32_ipc_client_get_current_level_buffers (GstWin32IpcClient * client) +{ + auto priv = client->priv; + + std::lock_guard < std::mutex > lk (priv->lock); + return priv->samples.size (); +} + +GstWin32IpcClient * +gst_win32_ipc_client_new (const std::string & address, guint timeout, + guint64 max_buffers, GstWin32IpcLeakyType leaky) +{ + auto self = (GstWin32IpcClient *) + g_object_new (GST_TYPE_WIN32_IPC_CLIENT, nullptr); + 
gst_object_ref_sink (self); + + auto priv = self->priv; + priv->address = address; + priv->timeout = timeout * GST_SECOND; + priv->max_buffers = max_buffers; + priv->leaky = leaky; + + return self; +}
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/win32ipc/gstwin32ipcclient.h
Added
@@ -0,0 +1,59 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/gst.h> +#include "gstwin32ipcmmf.h" +#include "gstwin32ipc.h" +#include <string> +#include <vector> + +G_BEGIN_DECLS + +#define GST_TYPE_WIN32_IPC_CLIENT (gst_win32_ipc_client_get_type()) +G_DECLARE_FINAL_TYPE (GstWin32IpcClient, gst_win32_ipc_client, + GST, WIN32_IPC_CLIENT, GstObject); + +GstWin32IpcClient * gst_win32_ipc_client_new (const std::string & address, + guint timeout, + guint64 max_buffers, + GstWin32IpcLeakyType leaky); + +GstFlowReturn gst_win32_ipc_client_get_sample (GstWin32IpcClient * client, + GstSample ** sample); + +void gst_win32_ipc_client_set_flushing (GstWin32IpcClient * client, + bool flushing); + +GstCaps * gst_win32_ipc_client_get_caps (GstWin32IpcClient * client); + +GstFlowReturn gst_win32_ipc_client_run (GstWin32IpcClient * client); + +void gst_win32_ipc_client_stop (GstWin32IpcClient * client); + +void gst_win32_ipc_client_set_leaky (GstWin32IpcClient * client, + GstWin32IpcLeakyType leaky); + +void gst_win32_ipc_client_set_max_buffers (GstWin32IpcClient * client, + guint64 max_buffers); + +guint64 gst_win32_ipc_client_get_current_level_buffers 
(GstWin32IpcClient * client); + +G_END_DECLS
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/win32ipc/gstwin32ipcmemory.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/win32ipc/gstwin32ipcmemory.cpp
Changed
@@ -22,10 +22,12 @@ #endif #include "gstwin32ipcmemory.h" -#include "gstwin32ipcutils.h" #include <string> #include <mutex> +#include <condition_variable> #include <string.h> +#include <atomic> +#include <queue> GST_DEBUG_CATEGORY_STATIC (gst_win32_ipc_allocator_debug); #define GST_CAT_DEFAULT gst_win32_ipc_allocator_debug @@ -36,28 +38,32 @@ static GstWin32IpcAllocator *gc_allocator = nullptr; -struct _GstWin32IpcAllocator +/* *INDENT-OFF* */ +struct GstWin32IpcAllocatorPrivate { - GstAllocator parent; + gsize size; - guint size; + bool is_gc = false; - gboolean is_gc; + std::mutex lock; + std::condition_variable cond; - GstAtomicQueue *queue; - GstPoll *poll; - gchar *prefix; - LONG64 seq_num; + std::atomic<guint64> seqnum = { 0 }; + std::queue<GstMemory *>queue; - CRITICAL_SECTION lock; - gboolean started; - gboolean active; + bool started = false; + bool active = false; + std::atomic<guint> outstanding = { 0 }; + guint cur_mems = 0; + bool flushing = false; +}; +/* *INDENT-ON* */ - /* atomic */ - gint outstanding; - guint max_mems; - guint cur_mems; - gboolean flushing; +struct _GstWin32IpcAllocator +{ + GstAllocator parent; + + GstWin32IpcAllocatorPrivate *priv; }; static void gst_win32_ipc_allocator_finalize (GObject * object); @@ -72,8 +78,8 @@ static GstMemory *gst_win32_ipc_allocator_share (GstMemory * mem, gssize offset, gssize size); -static gboolean gst_win32_ipc_allocator_start (GstWin32IpcAllocator * self); -static gboolean gst_win32_ipc_allocator_stop (GstWin32IpcAllocator * self); +static void gst_win32_ipc_allocator_start (GstWin32IpcAllocator * self); +static void gst_win32_ipc_allocator_stop (GstWin32IpcAllocator * self); static gboolean gst_win32_ipc_memory_release (GstMiniObject * mini_object); #define gst_win32_ipc_allocator_parent_class parent_class @@ -83,8 +89,8 @@ static void gst_win32_ipc_allocator_class_init (GstWin32IpcAllocatorClass * klass) { - GObjectClass *object_class = G_OBJECT_CLASS (klass); - GstAllocatorClass *alloc_class = 
GST_ALLOCATOR_CLASS (klass); + auto object_class = G_OBJECT_CLASS (klass); + auto alloc_class = GST_ALLOCATOR_CLASS (klass); object_class->finalize = gst_win32_ipc_allocator_finalize; @@ -98,7 +104,9 @@ static void gst_win32_ipc_allocator_init (GstWin32IpcAllocator * self) { - GstAllocator *alloc = GST_ALLOCATOR (self); + self->priv = new GstWin32IpcAllocatorPrivate (); + + auto alloc = GST_ALLOCATOR (self); alloc->mem_type = GST_WIN32_IPC_MEMORY_NAME; alloc->mem_map = gst_win32_ipc_allocator_map; @@ -106,33 +114,17 @@ alloc->mem_share = gst_win32_ipc_allocator_share; GST_OBJECT_FLAG_SET (alloc, GST_ALLOCATOR_FLAG_CUSTOM_ALLOC); - - InitializeCriticalSection (&self->lock); - - self->poll = gst_poll_new_timer (); - self->queue = gst_atomic_queue_new (16); - self->flushing = 1; - self->active = FALSE; - self->started = FALSE; - - /* 1 control write for flushing - the flush token */ - gst_poll_write_control (self->poll); - /* 1 control write for marking that we are not waiting for poll - the wait token */ - gst_poll_write_control (self->poll); } static void gst_win32_ipc_allocator_finalize (GObject * object) { - GstWin32IpcAllocator *self = GST_WIN32_IPC_ALLOCATOR (object); + auto self = GST_WIN32_IPC_ALLOCATOR (object); GST_DEBUG_OBJECT (self, "Finalize"); gst_win32_ipc_allocator_stop (self); - gst_atomic_queue_unref (self->queue); - gst_poll_free (self->poll); - DeleteCriticalSection (&self->lock); - g_free (self->prefix); + delete self->priv; G_OBJECT_CLASS (parent_class)->finalize (object); } @@ -147,18 +139,18 @@ static void gst_win32_ipc_allocator_free (GstAllocator * alloc, GstMemory * mem) { - GstWin32IpcMemory *imem = (GstWin32IpcMemory *) mem; + auto imem = (GstWin32IpcMemory *) mem; - win32_ipc_mmf_unref (imem->mmf); + gst_win32_ipc_mmf_unref (imem->mmf); g_free (imem); } static gpointer gst_win32_ipc_allocator_map (GstMemory * mem, gsize maxsize, GstMapFlags flags) { - GstWin32IpcMemory *imem = (GstWin32IpcMemory *) mem; + auto imem = (GstWin32IpcMemory *) 
mem; - return win32_ipc_mmf_get_raw (imem->mmf); + return gst_win32_ipc_mmf_get_raw (imem->mmf); } static void @@ -174,163 +166,88 @@ return nullptr; } -static gboolean -gst_win32_ipc_allocator_start (GstWin32IpcAllocator * self) -{ - if (self->started) - return TRUE; - - self->started = TRUE; - - return TRUE; -} - static void -gst_win32_ipc_allocator_do_set_flushing (GstWin32IpcAllocator * self, - gboolean flushing) +gst_win32_ipc_allocator_start (GstWin32IpcAllocator * self) { - if (GST_WIN32_IPC_ALLOCATOR_IS_FLUSHING (self) == flushing) - return; - - if (flushing) { - g_atomic_int_set (&self->flushing, 1); - /* Write the flush token to wake up any waiters */ - gst_poll_write_control (self->poll); - } else { - while (!gst_poll_read_control (self->poll)) { - if (errno == EWOULDBLOCK) { - /* This should not really happen unless flushing and unflushing - * happens on different threads. Let's wait a bit to get back flush - * token from the thread that was setting it to flushing */ - g_thread_yield (); - continue; - } else { - /* Critical error but GstPoll already complained */ - break; - } - } - - g_atomic_int_set (&self->flushing, 0); - } + auto priv = self->priv; + priv->started = true; } gboolean gst_win32_ipc_allocator_set_active (GstWin32IpcAllocator * self, gboolean active) { - gboolean ret = TRUE; - g_return_val_if_fail (GST_IS_WIN32_IPC_ALLOCATOR (self), FALSE); - EnterCriticalSection (&self->lock); - if (self->active == active) - goto out; + auto priv = self->priv; + + std::unique_lock < std::mutex > lk (priv->lock); + if ((priv->active && active) || (!priv->active && !active)) + return TRUE; if (active) { gst_win32_ipc_allocator_start (self); - /* flush_stop may release memory objects, setting to active to avoid running - * do_stop while activating the pool */ - self->active = TRUE; - - gst_win32_ipc_allocator_do_set_flushing (self, FALSE); + priv->active = true; + priv->flushing = false; } else { - gint outstanding; + priv->flushing = true; + priv->active 
= false; - /* set to flushing first */ - gst_win32_ipc_allocator_do_set_flushing (self, TRUE); + priv->cond.notify_all (); /* when all memory objects are in the pool, free them. Else they will be * freed when they are released */ - outstanding = g_atomic_int_get (&self->outstanding); - GST_LOG_OBJECT (self, "outstanding memories %d, (in queue %d)", - outstanding, gst_atomic_queue_length (self->queue)); - if (outstanding == 0) { - if (!gst_win32_ipc_allocator_stop (self)) { - GST_ERROR_OBJECT (self, "stop failed"); - ret = FALSE; - goto out; - } - } - - self->active = FALSE; + GST_LOG_OBJECT (self, "outstanding memories %d, (in queue %u)", + priv->outstanding.load (), (guint) priv->queue.size ()); + if (priv->outstanding == 0) + gst_win32_ipc_allocator_stop (self); } -out: - LeaveCriticalSection (&self->lock); - - return ret; + return TRUE; } static void gst_win32_ipc_allocator_free_memory (GstWin32IpcAllocator * self, GstMemory * mem) { - g_atomic_int_add (&self->cur_mems, -1); - GST_LOG_OBJECT (self, "freeing memory %p (%u left)", mem, self->cur_mems); + auto priv = self->priv; + + priv->cur_mems--; + + GST_LOG_OBJECT (self, "freeing memory %p (%u left)", mem, priv->cur_mems); GST_MINI_OBJECT_CAST (mem)->dispose = nullptr; gst_memory_unref (mem); } /* must be called with the lock */ -static gboolean +static void gst_win32_ipc_allocator_clear_queue (GstWin32IpcAllocator * self) { - GstMemory *memory; + auto priv = self->priv; GST_LOG_OBJECT (self, "Clearing queue"); - /* clear the pool */ - while ((memory = (GstMemory *) gst_atomic_queue_pop (self->queue))) { - while (!gst_poll_read_control (self->poll)) { - if (errno == EWOULDBLOCK) { - /* We put the memory into the queue but did not finish writing control - * yet, let's wait a bit and retry */ - g_thread_yield (); - continue; - } else { - /* Critical error but GstPoll already complained */ - break; - } - } - gst_win32_ipc_allocator_free_memory (self, memory); + while (!priv->queue.empty ()) { + auto mem = 
priv->queue.front (); + priv->queue.pop (); + gst_win32_ipc_allocator_free_memory (self, mem); } GST_LOG_OBJECT (self, "Clear done"); - - return self->cur_mems == 0; } -static gboolean +static void gst_win32_ipc_allocator_stop (GstWin32IpcAllocator * self) { - GST_DEBUG_OBJECT (self, "Stop"); + auto priv = self->priv; - if (self->started) { - if (!gst_win32_ipc_allocator_clear_queue (self)) - return FALSE; + GST_DEBUG_OBJECT (self, "Stop"); - self->started = FALSE; - } - - return TRUE; -} - -static void -dec_outstanding (GstWin32IpcAllocator * self) -{ - if (g_atomic_int_dec_and_test (&self->outstanding)) { - /* all memory objects are returned to the pool, see if we need to free them */ - if (GST_WIN32_IPC_ALLOCATOR_IS_FLUSHING (self)) { - /* take the lock so that set_active is not run concurrently */ - EnterCriticalSection (&self->lock); - /* now that we have the lock, check if we have been de-activated with - * outstanding buffers */ - if (!self->active) - gst_win32_ipc_allocator_stop (self); - LeaveCriticalSection (&self->lock); - } + if (priv->started) { + gst_win32_ipc_allocator_clear_queue (self); + priv->started = false; } } @@ -338,13 +255,18 @@ gst_win32_ipc_allocator_release_memory (GstWin32IpcAllocator * self, GstMemory * mem) { + auto priv = self->priv; + GST_MINI_OBJECT_CAST (mem)->dispose = nullptr; mem->allocator = (GstAllocator *) gst_object_ref (gc_allocator); /* keep it around in our queue */ - gst_atomic_queue_push (self->queue, mem); - gst_poll_write_control (self->poll); - dec_outstanding (self); + priv->queue.push (mem); + priv->outstanding--; + if (priv->outstanding == 0 && priv->flushing) + gst_win32_ipc_allocator_stop (self); + priv->cond.notify_all (); + priv->lock.unlock (); gst_object_unref (self); } @@ -352,21 +274,18 @@ static gboolean gst_win32_ipc_memory_release (GstMiniObject * mini_object) { - GstMemory *mem = GST_MEMORY_CAST (mini_object); - GstWin32IpcAllocator *self; + auto mem = GST_MEMORY_CAST (mini_object); g_assert 
(mem->allocator != nullptr); - self = GST_WIN32_IPC_ALLOCATOR (mem->allocator); + auto self = GST_WIN32_IPC_ALLOCATOR (mem->allocator); + auto priv = self->priv; /* Memory belongs to garbage collector, free this */ - if (self->is_gc) - return TRUE; - - if (GST_WIN32_IPC_ALLOCATOR_IS_FLUSHING (self)) + if (priv->is_gc) return TRUE; - /* return the memory to the allocator */ + priv->lock.lock (); gst_memory_ref (mem); gst_win32_ipc_allocator_release_memory (self, mem); @@ -376,25 +295,21 @@ static GstFlowReturn gst_win32_ipc_allocator_alloc (GstWin32IpcAllocator * self, GstMemory ** mem) { - GstWin32IpcMemory *new_mem; - Win32IpcMmf *mmf; - std::string mmf_name; - - mmf_name = std::string (self->prefix) + - std::to_string (InterlockedIncrement64 (&self->seq_num)); + auto priv = self->priv; - mmf = win32_ipc_mmf_alloc (self->size, mmf_name.c_str ()); + auto mmf = gst_win32_ipc_mmf_alloc (priv->size); if (!mmf) { GST_ERROR_OBJECT (self, "Couldn't allocate memory"); return GST_FLOW_ERROR; } - memset (win32_ipc_mmf_get_raw (mmf), 0, win32_ipc_mmf_get_size (mmf)); + memset (gst_win32_ipc_mmf_get_raw (mmf), 0, gst_win32_ipc_mmf_get_size (mmf)); - g_atomic_int_add (&self->cur_mems, 1); - new_mem = g_new0 (GstWin32IpcMemory, 1); + priv->cur_mems++; + + auto new_mem = g_new0 (GstWin32IpcMemory, 1); gst_memory_init (GST_MEMORY_CAST (new_mem), (GstMemoryFlags) 0, - GST_ALLOCATOR_CAST (gc_allocator), nullptr, self->size, 0, 0, self->size); + GST_ALLOCATOR_CAST (gc_allocator), nullptr, priv->size, 0, 0, priv->size); new_mem->mmf = mmf; *mem = GST_MEMORY_CAST (new_mem); @@ -406,76 +321,31 @@ gst_win32_ipc_allocator_acquire_memory_internal (GstWin32IpcAllocator * self, GstMemory ** memory) { - GstFlowReturn result; + auto priv = self->priv; - while (TRUE) { - if (GST_WIN32_IPC_ALLOCATOR_IS_FLUSHING (self)) { - GST_DEBUG_OBJECT (self, "We are flushing"); + do { + if (priv->flushing) { + GST_DEBUG_OBJECT (self, "we are flushing"); return GST_FLOW_FLUSHING; } - /* try to get a memory 
from the queue */ - *memory = (GstMemory *) gst_atomic_queue_pop (self->queue); - if (*memory) { - while (!gst_poll_read_control (self->poll)) { - if (errno == EWOULDBLOCK) { - /* We put the memory into the queue but did not finish writing control - * yet, let's wait a bit and retry */ - g_thread_yield (); - continue; - } else { - /* Critical error but GstPoll already complained */ - break; - } - } - result = GST_FLOW_OK; + if (!priv->queue.empty ()) { + *memory = priv->queue.front (); + priv->queue.pop (); GST_LOG_OBJECT (self, "acquired memory %p", *memory); break; } /* no memory, try to allocate some more */ GST_LOG_OBJECT (self, "no memory, trying to allocate"); - result = gst_win32_ipc_allocator_alloc (self, memory); - if (result == GST_FLOW_OK) - /* we have a memory, return it */ - break; + auto ret = gst_win32_ipc_allocator_alloc (self, memory); + if (ret != GST_FLOW_OK) + return ret; - if (G_UNLIKELY (result != GST_FLOW_EOS)) - /* something went wrong, return error */ - break; - - /* now we release the control socket, we wait for a memory release or - * flushing */ - if (!gst_poll_read_control (self->poll)) { - if (errno == EWOULDBLOCK) { - /* This means that we have two threads trying to allocate memory - * already, and the other one already got the wait token. 
This - * means that we only have to wait for the poll now and not write the - * token afterwards: we will be woken up once the other thread is - * woken up and that one will write the wait token it removed */ - GST_LOG_OBJECT (self, "waiting for free memory or flushing"); - gst_poll_wait (self->poll, GST_CLOCK_TIME_NONE); - } else { - /* This is a critical error, GstPoll already gave a warning */ - result = GST_FLOW_ERROR; - break; - } - } else { - /* We're the first thread waiting, we got the wait token and have to - * write it again later - * OR - * We're a second thread and just consumed the flush token and block all - * other threads, in which case we must not wait and give it back - * immediately */ - if (!GST_WIN32_IPC_ALLOCATOR_IS_FLUSHING (self)) { - GST_LOG_OBJECT (self, "waiting for free memory or flushing"); - gst_poll_wait (self->poll, GST_CLOCK_TIME_NONE); - } - gst_poll_write_control (self->poll); - } - } + break; + } while (true); - return result; + return GST_FLOW_OK; } gboolean @@ -494,23 +364,21 @@ g_object_new (GST_TYPE_WIN32_IPC_ALLOCATOR, nullptr); gst_object_ref_sink (gc_allocator); GST_OBJECT_FLAG_SET (gc_allocator, GST_OBJECT_FLAG_MAY_BE_LEAKED); - gc_allocator->is_gc = TRUE; + gc_allocator->priv->is_gc = true; }); } GstWin32IpcAllocator * -gst_win32_ipc_allocator_new (guint size) +gst_win32_ipc_allocator_new (gsize size) { - GstWin32IpcAllocator *self; - g_return_val_if_fail (size != 0, nullptr); gst_win32_ipc_allocator_init_once (); - self = (GstWin32IpcAllocator *) + auto self = (GstWin32IpcAllocator *) g_object_new (GST_TYPE_WIN32_IPC_ALLOCATOR, nullptr); - self->size = size; - self->prefix = gst_win32_ipc_get_mmf_prefix (); + auto priv = self->priv; + priv->size = size; gst_object_ref_sink (self); @@ -528,7 +396,9 @@ *memory = nullptr; - g_atomic_int_inc (&alloc->outstanding); + auto priv = alloc->priv; + + std::unique_lock < std::mutex > lk (priv->lock); ret = gst_win32_ipc_allocator_acquire_memory_internal (alloc, memory); if (ret == 
GST_FLOW_OK) { @@ -537,8 +407,7 @@ gst_object_unref (mem->allocator); mem->allocator = (GstAllocator *) gst_object_ref (alloc); GST_MINI_OBJECT_CAST (mem)->dispose = gst_win32_ipc_memory_release; - } else { - dec_outstanding (alloc); + priv->outstanding++; } return ret;
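The rewritten allocator above replaces the GstAtomicQueue/GstPoll machinery with a single mutex-guarded free queue plus an outstanding counter, deferring teardown until the last outstanding memory is returned while flushing. A condensed, portable sketch of that flow (illustrative names; `std::vector<char>` stands in for the shared-memory blocks, `nullptr` for `GST_FLOW_FLUSHING`):

```cpp
#include <cstddef>
#include <mutex>
#include <queue>
#include <vector>

struct Pool {
  std::mutex lock;
  std::queue<std::vector<char> *> free_q;
  std::size_t size;
  unsigned outstanding = 0;
  unsigned cur_mems = 0;
  bool flushing = false;

  explicit Pool (std::size_t s) : size (s) {}

  std::vector<char> *acquire () {
    std::lock_guard<std::mutex> lk (lock);
    if (flushing)
      return nullptr;                      // maps to GST_FLOW_FLUSHING
    std::vector<char> *mem;
    if (!free_q.empty ()) {
      mem = free_q.front ();
      free_q.pop ();
    } else {
      mem = new std::vector<char> (size);  // no free memory: allocate more
      cur_mems++;
    }
    outstanding++;
    return mem;
  }

  void release (std::vector<char> *mem) {
    std::lock_guard<std::mutex> lk (lock);
    free_q.push (mem);
    outstanding--;
    if (outstanding == 0 && flushing)
      clear ();                            // last user returned: tear down
  }

  void set_active (bool active) {
    std::lock_guard<std::mutex> lk (lock);
    flushing = !active;
    if (!active && outstanding == 0)
      clear ();                            // nothing outstanding: stop now
  }

 private:
  void clear () {
    while (!free_q.empty ()) {
      delete free_q.front ();
      free_q.pop ();
      cur_mems--;
    }
  }
};
```

As in the patch, deactivating with buffers still outstanding only flips the flushing flag; the memory is actually freed by whichever `release` drops `outstanding` to zero.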
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/win32ipc/gstwin32ipcmemory.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/win32ipc/gstwin32ipcmemory.h
Changed
@@ -20,7 +20,7 @@ #pragma once #include <gst/gst.h> -#include "protocol/win32ipcmmf.h" +#include "gstwin32ipcmmf.h" G_BEGIN_DECLS @@ -34,12 +34,12 @@ { GstMemory mem; - Win32IpcMmf *mmf; + GstWin32IpcMmf *mmf; }; gboolean gst_is_win32_ipc_memory (GstMemory * mem); -GstWin32IpcAllocator * gst_win32_ipc_allocator_new (guint size); +GstWin32IpcAllocator * gst_win32_ipc_allocator_new (gsize size); gboolean gst_win32_ipc_allocator_set_active (GstWin32IpcAllocator * alloc, gboolean active);
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/win32ipc/gstwin32ipcmmf.cpp
Added
@@ -0,0 +1,197 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#include "gstwin32ipcmmf.h" +#include <string> + +GST_DEBUG_CATEGORY_EXTERN (gst_win32_ipc_debug); +#define GST_CAT_DEFAULT gst_win32_ipc_debug + +struct GstWin32IpcMmf +{ + explicit GstWin32IpcMmf (HANDLE f, void *b, SIZE_T s) + : file (f), buffer (b), size (s), ref_count (1) + { + } + + ~GstWin32IpcMmf () + { + GST_TRACE ("Freeing %p", this); + + if (buffer) + UnmapViewOfFile (buffer); + if (file) + CloseHandle (file); + } + + HANDLE file; + void *buffer; + SIZE_T size; + ULONG ref_count; +}; + +static GstWin32IpcMmf * +gst_win32_ipc_mmf_new (HANDLE file, SIZE_T size) +{ + auto buffer = MapViewOfFile (file, FILE_MAP_ALL_ACCESS, 0, 0, size); + if (!buffer) { + auto err_code = GetLastError (); + auto msg = g_win32_error_message (err_code); + GST_ERROR ("MapViewOfFile failed with 0x%x (%s)", (guint) err_code, msg); + g_free (msg); + CloseHandle (file); + return nullptr; + } + + return new GstWin32IpcMmf (file, buffer, size); +} + +/** + * gst_win32_ipc_mmf_alloc: + * @size: Size of memory to allocate + * + * Creates shared memory + * + * Returns: a new GstWin32IpcMmf object + */ +GstWin32IpcMmf * +gst_win32_ipc_mmf_alloc 
(SIZE_T size) +{ + if (!size) { + GST_ERROR ("Zero size is not allowed"); + return nullptr; + } + + ULARGE_INTEGER alloc_size; + alloc_size.QuadPart = size; + + auto file = CreateFileMappingW (INVALID_HANDLE_VALUE, nullptr, + PAGE_READWRITE | SEC_COMMIT, alloc_size.HighPart, alloc_size.LowPart, + nullptr); + if (!file) { + auto err_code = GetLastError (); + auto msg = g_win32_error_message (err_code); + GST_ERROR ("CreateFileMappingW failed with 0x%x (%s)", + (guint) err_code, msg); + g_free (msg); + return nullptr; + } + + return gst_win32_ipc_mmf_new (file, size); +} + +/** + * gst_win32_ipc_mmf_open: + * @size: Size of memory to allocate + * @file: (transfer full): File mapping handle + * + * Opens named shared memory + * + * Returns: a new GstWin32IpcMmf object + */ +GstWin32IpcMmf * +gst_win32_ipc_mmf_open (SIZE_T size, HANDLE file) +{ + if (!size) { + GST_ERROR ("Zero size is not allowed"); + + if (file) + CloseHandle (file); + + return nullptr; + } + + return gst_win32_ipc_mmf_new (file, size); +} + +/** + * gst_win32_ipc_mmf_get_size: + * @mmf: a GstWin32IpcMmf object + * + * Returns: the size of allocated memory + */ +SIZE_T +gst_win32_ipc_mmf_get_size (GstWin32IpcMmf * mmf) +{ + if (!mmf) + return 0; + + return mmf->size; +} + +/** + * gst_win32_ipc_mmf_get_raw: + * @mmf: a GstWin32IpcMmf object + * + * Returns: the address of allocated memory + */ +void * +gst_win32_ipc_mmf_get_raw (GstWin32IpcMmf * mmf) +{ + if (!mmf) + return nullptr; + + return mmf->buffer; +} + +HANDLE +gst_win32_ipc_mmf_get_handle (GstWin32IpcMmf * mmf) +{ + if (!mmf) + return nullptr; + + return mmf->file; +} + +/** + * gst_win32_ipc_mmf_ref: + * @mmf: a GstWin32IpcMmf object + * + * Increase ref count + */ +GstWin32IpcMmf * +gst_win32_ipc_mmf_ref (GstWin32IpcMmf * mmf) +{ + if (!mmf) + return nullptr; + + InterlockedIncrement (&mmf->ref_count); + + return mmf; +} + +/** + * gst_win32_ipc_mmf_unref: + * @mmf: a GstWin32IpcMmf object + * + * Decrease ref count + */ +void
+gst_win32_ipc_mmf_unref (GstWin32IpcMmf * mmf) +{ + ULONG count; + + if (!mmf) + return; + + count = InterlockedDecrement (&mmf->ref_count); + if (count == 0) + delete mmf; +}
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/win32ipc/gstwin32ipcmmf.h
Added
@@ -0,0 +1,44 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/gst.h> +#include <windows.h> + +G_BEGIN_DECLS + +struct GstWin32IpcMmf; + +GstWin32IpcMmf * gst_win32_ipc_mmf_alloc (SIZE_T size); + +GstWin32IpcMmf * gst_win32_ipc_mmf_open (SIZE_T size, + HANDLE file); + +SIZE_T gst_win32_ipc_mmf_get_size (GstWin32IpcMmf * mmf); + +void * gst_win32_ipc_mmf_get_raw (GstWin32IpcMmf * mmf); + +HANDLE gst_win32_ipc_mmf_get_handle (GstWin32IpcMmf * mmf); + +GstWin32IpcMmf * gst_win32_ipc_mmf_ref (GstWin32IpcMmf * mmf); + +void gst_win32_ipc_mmf_unref (GstWin32IpcMmf * mmf); + +G_END_DECLS
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/win32ipc/gstwin32ipcprotocol.cpp
Added
@@ -0,0 +1,460 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#include "gstwin32ipcprotocol.h" +#include <string.h> + +constexpr UINT32 WIPC_TAG = 0x43504957u; /* WIPC */ +#define WIN32_IPC_VERSION 0x01u + +#ifdef _WIN64 +#define WIPC_IS_64BIT 1u +#else +#define WIPC_IS_64BIT 0u +#endif + +#define WIN32_IPC_MAGIC64 \ + ( (UINT64(WIPC_TAG) << 32) | (UINT64(0) << 16) | (UINT64(WIPC_IS_64BIT) << 8) | UINT64(WIN32_IPC_VERSION) ) + +#define return_val_if_fail(expr, val) \ + do { \ + if (!(expr)) \ + return (val); \ + } while (0) + +#define WRITE_TO(dst,src,size) \ + do { \ + memcpy (dst, src, size); \ + dst += size; \ + } while (0) + +#define READ_FROM(dst,src,size) \ + do { \ + memcpy (dst, src, size); \ + src += size; \ + } while (0) + +const char * +gst_win32_ipc_pkt_type_to_string (GstWin32IpcPktType type) +{ + switch (type) { + case GstWin32IpcPktType::CONFIG: + return "CONFIG"; + case GstWin32IpcPktType::NEED_DATA: + return "NEED-DATA"; + case GstWin32IpcPktType::HAVE_DATA: + return "HAVE-DATA"; + case GstWin32IpcPktType::READ_DONE: + return "READ-DONE"; + case GstWin32IpcPktType::RELEASE_DATA: + return "RELEASE-DATA"; + case GstWin32IpcPktType::EOS: + return "EOS"; + case 
GstWin32IpcPktType::FIN: + return "FIN"; + default: + break; + } + + return "Unknown"; +} + +GstWin32IpcPktType +gst_win32_ipc_pkt_type_from_raw (UINT32 type) +{ + return (GstWin32IpcPktType) type; +} + +UINT32 +gst_win32_ipc_pkt_type_to_raw (GstWin32IpcPktType type) +{ + return (UINT32) type; + +} + +struct PtrPos +{ + PtrPos (std::vector < UINT8 > &buf) + { + data = buf.data (); + remaining = buf.size (); + }; + + PtrPos (const std::vector < UINT8 > &buf) + { + data = (UINT8 *) buf.data (); + remaining = buf.size (); + }; + + UINT8 *data; + SIZE_T remaining; +}; + +static inline bool +write_to (PtrPos & p, const void *src, SIZE_T size) +{ + if (p.remaining < size) + return false; + + if (size == 0) + return true; + + memcpy (p.data, src, size); + p.data += size; + p.remaining -= size; + + return true; +} + +static inline bool +read_from (PtrPos & p, void *dst, SIZE_T size) +{ + if (p.remaining < size) + return false; + + if (size == 0) + return true; + + memcpy (dst, p.data, size); + p.data += size; + p.remaining -= size; + + return true; +} + +static inline bool +assign_from (PtrPos & p, std::string & dst, SIZE_T size) +{ + if (p.remaining < size) + return false; + + if (size > 0) + dst.assign ((const char *) p.data, size); + else + dst.clear (); + + p.data += size; + p.remaining -= size; + + return true; +} + +#define WRITE_TO_T(p,s,type) \ + do { \ + if (!write_to (p, s, sizeof (type))) \ + return false; \ + } while (0) + +#define WRITE_TO_S(p,s,size) \ + do { \ + if (!write_to (p, s, size)) \ + return false; \ + } while (0) + +#define READ_FROM_T(p,d,type) \ + do { \ + if (!read_from (p, d, sizeof (type))) \ + return false; \ + } while (0) + +#define READ_FROM_S(p,d,size) \ + do { \ + if (!read_from (p, d, size)) \ + return false; \ + } while (0) + +#define ASSIGN_FROM(p,d,size) \ + do { \ + if (!assign_from (p, d, size)) \ + return false; \ + } while (0) + +bool +gst_win32_ipc_pkt_identify (std::vector < UINT8 > &buf, GstWin32IpcPktHdr & hdr) +{ + PtrPos ptr 
(buf); + + READ_FROM_T (ptr, &hdr, GstWin32IpcPktHdr); + + if (hdr.magic != WIN32_IPC_MAGIC64) + return false; + + const SIZE_T need = sizeof (GstWin32IpcPktHdr) + hdr.payload_size; + const SIZE_T MAX_PKT_SIZE = 1024 * 1024 * 64; + + if (need > MAX_PKT_SIZE) + return false; + + buf.resize (need); + + return true; +} + +bool +gst_win32_ipc_pkt_build_config (std::vector < UINT8 > &buf, DWORD pid, + const std::string & caps) +{ + GstWin32IpcPktHdr hdr = { }; + hdr.type = GstWin32IpcPktType::CONFIG; + hdr.magic = WIN32_IPC_MAGIC64; + hdr.payload_size = sizeof (DWORD) + sizeof (SIZE_T) + caps.size (); + + buf.resize (sizeof (GstWin32IpcPktHdr) + hdr.payload_size); + + PtrPos ptr (buf); + + WRITE_TO_T (ptr, &hdr, GstWin32IpcPktHdr); + WRITE_TO_T (ptr, &pid, DWORD); + + auto caps_len = caps.size (); + WRITE_TO_T (ptr, &caps_len, SIZE_T); + WRITE_TO_S (ptr, caps.c_str (), caps_len); + + return true; +} + +bool +gst_win32_ipc_pkt_parse_config (const std::vector < UINT8 > &buf, DWORD & pid, + std::string & caps) +{ + const SIZE_T min_payload_size = sizeof (DWORD) + sizeof (SIZE_T); + + return_val_if_fail (buf.size () >= + sizeof (GstWin32IpcPktHdr) + min_payload_size, false); + + PtrPos ptr (buf); + + GstWin32IpcPktHdr hdr = { }; + READ_FROM_T (ptr, &hdr, GstWin32IpcPktHdr); + + if (hdr.type != GstWin32IpcPktType::CONFIG || + hdr.magic != WIN32_IPC_MAGIC64 || hdr.payload_size < min_payload_size) { + return false; + } + + READ_FROM_T (ptr, &pid, DWORD); + + SIZE_T size; + READ_FROM_T (ptr, &size, SIZE_T); + ASSIGN_FROM (ptr, caps, size); + + return true; +} + +bool +gst_win32_ipc_pkt_build_need_data (std::vector < UINT8 > &buf) +{ + GstWin32IpcPktHdr hdr = { }; + hdr.type = GstWin32IpcPktType::NEED_DATA; + hdr.magic = WIN32_IPC_MAGIC64; + hdr.payload_size = 0; + + buf.resize (sizeof (GstWin32IpcPktHdr)); + + memcpy (buf.data (), &hdr, sizeof (GstWin32IpcPktHdr)); + + return true; +} + +bool +gst_win32_ipc_pkt_build_have_data (std::vector < UINT8 > &buf, SIZE_T mmf_size, + 
UINT64 pts, UINT64 dts, UINT64 dur, UINT buf_flags, const HANDLE handle, + const char *caps, const std::vector < UINT8 > &meta) +{ + GstWin32IpcPktHdr hdr = { }; + SIZE_T caps_len = 0; + hdr.type = GstWin32IpcPktType::HAVE_DATA; + hdr.magic = WIN32_IPC_MAGIC64; + hdr.payload_size = 0; + + /* mmf size */ + hdr.payload_size += sizeof (SIZE_T); + + /* pts/dts/dur */ + hdr.payload_size += (sizeof (UINT64) * 3); + + /* buffer flags */ + hdr.payload_size += sizeof (UINT); + + /* Server handle value */ + hdr.payload_size += sizeof (HANDLE); + + /* caps size */ + hdr.payload_size += sizeof (SIZE_T); + + /* caps data */ + if (caps) { + caps_len = strlen (caps); + hdr.payload_size += caps_len; + } + + /* metadata size */ + hdr.payload_size += sizeof (SIZE_T); + + /* metadata */ + hdr.payload_size += meta.size (); + + buf.resize (sizeof (GstWin32IpcPktHdr) + hdr.payload_size); + + PtrPos ptr (buf); + + WRITE_TO_T (ptr, &hdr, GstWin32IpcPktHdr); + WRITE_TO_T (ptr, &mmf_size, SIZE_T); + WRITE_TO_T (ptr, &pts, UINT64); + WRITE_TO_T (ptr, &dts, UINT64); + WRITE_TO_T (ptr, &dur, UINT64); + WRITE_TO_T (ptr, &buf_flags, UINT); + WRITE_TO_T (ptr, &handle, HANDLE); + + WRITE_TO_T (ptr, &caps_len, SIZE_T); + if (caps_len) + WRITE_TO_S (ptr, caps, caps_len); + + auto size = meta.size (); + WRITE_TO_T (ptr, &size, SIZE_T); + WRITE_TO_S (ptr, meta.data (), size); + + return true; +} + +bool +gst_win32_ipc_pkt_parse_have_data (const std::vector < UINT8 > &buf, + SIZE_T & mmf_size, UINT64 & pts, UINT64 & dts, UINT64 & dur, + UINT & buf_flags, HANDLE & handle, std::string & caps, + std::vector < UINT8 > &meta) +{ + const SIZE_T min_payload_size = sizeof (SIZE_T) + (sizeof (UINT64) * 3) + + sizeof (UINT) + sizeof (HANDLE) + sizeof (SIZE_T) + sizeof (SIZE_T); + + return_val_if_fail (buf.size () >= + sizeof (GstWin32IpcPktHdr) + min_payload_size, false); + + PtrPos ptr (buf); + + GstWin32IpcPktHdr hdr = { }; + READ_FROM_T (ptr, &hdr, GstWin32IpcPktHdr); + + if (hdr.type != 
GstWin32IpcPktType::HAVE_DATA || + hdr.magic != WIN32_IPC_MAGIC64 || hdr.payload_size < min_payload_size) { + return false; + } + + READ_FROM_T (ptr, &mmf_size, SIZE_T); + READ_FROM_T (ptr, &pts, UINT64); + READ_FROM_T (ptr, &dts, UINT64); + READ_FROM_T (ptr, &dur, UINT64); + READ_FROM_T (ptr, &buf_flags, UINT); + READ_FROM_T (ptr, &handle, HANDLE); + + SIZE_T size; + READ_FROM_T (ptr, &size, SIZE_T); + ASSIGN_FROM (ptr, caps, size); + + READ_FROM_T (ptr, &size, SIZE_T); + meta.resize (size); + + READ_FROM_S (ptr, meta.data (), size); + + return true; +} + +bool +gst_win32_ipc_pkt_build_read_done (std::vector < UINT8 > &buf) +{ + GstWin32IpcPktHdr hdr = { }; + hdr.type = GstWin32IpcPktType::READ_DONE; + hdr.magic = WIN32_IPC_MAGIC64; + hdr.payload_size = 0; + + buf.resize (sizeof (GstWin32IpcPktHdr)); + + memcpy (buf.data (), &hdr, sizeof (GstWin32IpcPktHdr)); + + return true; +} + +bool +gst_win32_ipc_pkt_build_release_data (std::vector < UINT8 > &buf, + const HANDLE handle) +{ + GstWin32IpcPktHdr hdr = { }; + hdr.type = GstWin32IpcPktType::RELEASE_DATA; + hdr.magic = WIN32_IPC_MAGIC64; + hdr.payload_size = sizeof (HANDLE); + + buf.resize (sizeof (GstWin32IpcPktHdr) + hdr.payload_size); + + PtrPos ptr (buf); + WRITE_TO_T (ptr, &hdr, GstWin32IpcPktHdr); + WRITE_TO_T (ptr, &handle, HANDLE); + + return true; +} + +bool +gst_win32_ipc_pkt_parse_release_data (const std::vector < UINT8 > &buf, + HANDLE & handle) +{ + return_val_if_fail (buf.size () >= + sizeof (GstWin32IpcPktHdr) + sizeof (HANDLE), false); + + PtrPos ptr (buf); + + GstWin32IpcPktHdr hdr = { }; + READ_FROM_T (ptr, &hdr, GstWin32IpcPktHdr); + + if (hdr.type != GstWin32IpcPktType::RELEASE_DATA || + hdr.magic != WIN32_IPC_MAGIC64 || hdr.payload_size != sizeof (HANDLE)) { + return false; + } + + READ_FROM_T (ptr, &handle, HANDLE); + + return true; +} + +bool +gst_win32_ipc_pkt_build_eos (std::vector < UINT8 > &buf) +{ + GstWin32IpcPktHdr hdr = { }; + hdr.type = GstWin32IpcPktType::EOS; + hdr.magic = 
WIN32_IPC_MAGIC64; + hdr.payload_size = 0; + + buf.resize (sizeof (GstWin32IpcPktHdr)); + + memcpy (buf.data (), &hdr, sizeof (GstWin32IpcPktHdr)); + + return true; +} + +bool +gst_win32_ipc_pkt_build_fin (std::vector < UINT8 > &buf) +{ + GstWin32IpcPktHdr hdr = { }; + hdr.type = GstWin32IpcPktType::FIN; + hdr.magic = WIN32_IPC_MAGIC64; + hdr.payload_size = 0; + + buf.resize (sizeof (GstWin32IpcPktHdr)); + + memcpy (buf.data (), &hdr, sizeof (GstWin32IpcPktHdr)); + + return true; +}
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/win32ipc/gstwin32ipcprotocol.h
Added
@@ -0,0 +1,96 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <windows.h> +#include <vector> +#include <string> + +enum class GstWin32IpcPktType : UINT32 +{ + UNKNOWN, + CONFIG, + NEED_DATA, + HAVE_DATA, + READ_DONE, + RELEASE_DATA, + EOS, + FIN, +}; + +#pragma pack(push, 1) +struct GstWin32IpcPktHdr +{ + UINT64 magic; + GstWin32IpcPktType type; + UINT32 payload_size; +}; +#pragma pack(pop) + +const char * gst_win32_ipc_pkt_type_to_string (GstWin32IpcPktType type); + +GstWin32IpcPktType gst_win32_ipc_pkt_type_from_raw (UINT32 type); + +UINT32 gst_win32_ipc_pkt_type_to_raw (GstWin32IpcPktType type); + +bool gst_win32_ipc_pkt_identify (std::vector<UINT8> & buf, + GstWin32IpcPktHdr & header); + +bool gst_win32_ipc_pkt_build_config (std::vector<UINT8> & buf, + DWORD pid, + const std::string & caps); + +bool gst_win32_ipc_pkt_parse_config (const std::vector<UINT8> & buf, + DWORD & pid, + std::string & caps); + +bool gst_win32_ipc_pkt_build_need_data (std::vector<UINT8> & buf); + +bool gst_win32_ipc_pkt_build_have_data (std::vector<UINT8> & buf, + SIZE_T mmf_size, + UINT64 pts, + UINT64 dts, + UINT64 dur, + UINT buf_flags, + const HANDLE handle, + const char * 
caps, + const std::vector<UINT8> & meta); + +bool gst_win32_ipc_pkt_parse_have_data (const std::vector<UINT8> & buf, + SIZE_T & mmf_size, + UINT64 & pts, + UINT64 & dts, + UINT64 & dur, + UINT & buf_flags, + HANDLE & handle, + std::string & caps, + std::vector<UINT8> & meta); + +bool gst_win32_ipc_pkt_build_read_done (std::vector<UINT8> & buf); + +bool gst_win32_ipc_pkt_build_release_data (std::vector<UINT8> & buf, + const HANDLE handle); + +bool gst_win32_ipc_pkt_parse_release_data (const std::vector<UINT8> & buf, + HANDLE & handle); + +bool gst_win32_ipc_pkt_build_eos (std::vector<UINT8> & buf); + +bool gst_win32_ipc_pkt_build_fin (std::vector<UINT8> & buf); \ No newline at end of file
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/win32ipc/gstwin32ipcserver.cpp
Added
@@ -0,0 +1,1119 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gstwin32ipcserver.h" +#include "gstwin32ipcprotocol.h" +#include "gstwin32ipcmemory.h" +#include <unordered_map> +#include <mutex> +#include <condition_variable> +#include <atomic> +#include <memory> +#include <deque> +#include <vector> + +GST_DEBUG_CATEGORY_STATIC (gst_win32_ipc_server_debug); +#define GST_CAT_DEFAULT gst_win32_ipc_server_debug + +enum +{ + PROP_0, + PROP_NUM_CLIENTS, + PROP_LAST, +}; + +static GParamSpec *props[PROP_LAST]; + +#define CONN_BUFFER_SIZE 1024 + +/* *INDENT-OFF* */ +struct GstWin32IpcServerData +{ + explicit GstWin32IpcServerData (GstBuffer * buf, GstCaps * c, + GByteArray * mtd, UINT64 seq) + { + if (buf) { + auto mem = (GstWin32IpcMemory *) gst_buffer_peek_memory (buf, 0); + mmf = mem->mmf; + gst_win32_ipc_mmf_ref (mmf); + handle = gst_win32_ipc_mmf_get_handle (mmf); + buffer = gst_buffer_ref (buf); + } + + if (c) + caps = gst_caps_ref (c); + + if (mtd && mtd->len) { + meta.resize (mtd->len); + memcpy (meta.data (), mtd->data, mtd->len); + } + seq_num = seq; + } + + ~GstWin32IpcServerData() + { + if (mmf) + 
gst_win32_ipc_mmf_unref (mmf); + + gst_clear_caps (&caps); + gst_clear_buffer (&buffer); + } + + GstWin32IpcMmf *mmf = nullptr; + HANDLE handle = nullptr; + GstCaps *caps = nullptr; + std::vector<UINT8> meta; + SIZE_T size = 0; + UINT64 seq_num; + GstClockTime pts = GST_CLOCK_TIME_NONE; + GstClockTime dts = GST_CLOCK_TIME_NONE; + GstClockTime dur = GST_CLOCK_TIME_NONE; + UINT buf_flags = 0; + GstBuffer *buffer = nullptr; +}; + +struct GstWin32IpcServerConn : public OVERLAPPED +{ + GstWin32IpcServerConn (HANDLE pipe_handle) : pipe (pipe_handle) + { + OVERLAPPED *parent = static_cast<OVERLAPPED *> (this); + parent->Internal = 0; + parent->InternalHigh = 0; + parent->Offset = 0; + parent->OffsetHigh = 0; + + client_msg.resize (CONN_BUFFER_SIZE); + server_msg.resize (CONN_BUFFER_SIZE); + } + + ~GstWin32IpcServerConn() + { + close (); + } + + void close() + { + if (pipe != INVALID_HANDLE_VALUE) { + CancelIoEx (pipe, nullptr); + DisconnectNamedPipe (pipe); + CloseHandle (pipe); + } + + pipe = INVALID_HANDLE_VALUE; + } + + GstWin32IpcServer *server; + + HANDLE pipe; + + GstWin32IpcPktType type; + std::vector<UINT8> client_msg; + std::vector<UINT8> server_msg; + std::shared_ptr<GstWin32IpcServerData> data; + std::vector<std::shared_ptr<GstWin32IpcServerData>> peer_handles; + GstCaps *caps = nullptr; + std::string caps_string; + + guint64 seq_num = 0; + guint id; + bool pending_have_data = false; + bool configured = false; + + std::atomic<bool> io_pending = { false }; +}; + +struct GstWin32IpcServerPrivate +{ + GstWin32IpcServerPrivate () + { + cancellable = CreateEvent (nullptr, TRUE, FALSE, nullptr); + wakeup_event = CreateEvent (nullptr, FALSE, FALSE, nullptr); + } + + ~GstWin32IpcServerPrivate () + { + CloseHandle (cancellable); + CloseHandle (wakeup_event); + } + + std::mutex lock; + std::condition_variable cond; + guint64 seq_num = 0; + guint next_conn_id = 0; + std::unordered_map<guint, std::shared_ptr<GstWin32IpcServerConn>> conn_map; + 
std::vector<std::shared_ptr<GstWin32IpcServerConn>> conn_gc; + std::vector<std::shared_ptr<GstWin32IpcServerConn>> conn_tmp; + GThread *loop_thread = nullptr; + std::atomic<bool> aborted = { false }; + std::deque<std::shared_ptr<GstWin32IpcServerData>> data_queue; + std::string address; + HANDLE cancellable; + HANDLE wakeup_event; + DWORD pid; + std::atomic<bool> flushing = { false }; + std::atomic<guint64> max_buffers = { 0 }; + std::atomic<GstWin32IpcLeakyType> leaky = { GST_WIN32_IPC_LEAKY_DOWNSTREAM }; +}; +/* *INDENT-ON* */ + +struct _GstWin32IpcServer +{ + GstObject parent; + + GstWin32IpcServerPrivate *priv; +}; + +static void gst_win32_ipc_server_dispose (GObject * object); +static void gst_win32_ipc_server_finalize (GObject * object); +static void gst_win32_ipc_server_get_property (GObject * object, guint prop_id, + GValue * value, GParamSpec * pspec); +static void gst_win32_ipc_server_on_idle (GstWin32IpcServer * self); +static void gst_win32_ipc_server_send_msg (GstWin32IpcServer * self, + GstWin32IpcServerConn * conn); +static void gst_win32_ipc_server_wait_msg (GstWin32IpcServer * self, + GstWin32IpcServerConn * conn); + +#define gst_win32_ipc_server_parent_class parent_class +G_DEFINE_TYPE (GstWin32IpcServer, gst_win32_ipc_server, GST_TYPE_OBJECT); + +static void +gst_win32_ipc_server_class_init (GstWin32IpcServerClass * klass) +{ + GObjectClass *object_class = G_OBJECT_CLASS (klass); + + object_class->dispose = gst_win32_ipc_server_dispose; + object_class->finalize = gst_win32_ipc_server_finalize; + object_class->get_property = gst_win32_ipc_server_get_property; + + props[PROP_NUM_CLIENTS] = + g_param_spec_uint ("num-clients", "Number of clients", + "The number of connected clients", 0, G_MAXUINT, 0, + (GParamFlags) (G_PARAM_READABLE | G_PARAM_STATIC_STRINGS)); + + g_object_class_install_properties (object_class, PROP_LAST, props); + + GST_DEBUG_CATEGORY_INIT (gst_win32_ipc_server_debug, "win32ipcserver", + 0, "win32ipcserver"); +} + +static void 
+gst_win32_ipc_server_init (GstWin32IpcServer * self) +{ + self->priv = new GstWin32IpcServerPrivate (); + self->priv->pid = GetCurrentProcessId (); +} + +static void +gst_win32_ipc_server_dispose (GObject * object) +{ + auto self = GST_WIN32_IPC_SERVER (object); + auto priv = self->priv; + + GST_DEBUG_OBJECT (self, "dispose"); + + SetEvent (priv->cancellable); + + g_clear_pointer (&priv->loop_thread, g_thread_join); + + G_OBJECT_CLASS (parent_class)->dispose (object); +} + +static void +gst_win32_ipc_server_finalize (GObject * object) +{ + auto self = GST_WIN32_IPC_SERVER (object); + + GST_DEBUG_OBJECT (self, "finalize"); + + delete self->priv; + + G_OBJECT_CLASS (parent_class)->finalize (object); +} + +static void +gst_win32_ipc_server_get_property (GObject * object, guint prop_id, + GValue * value, GParamSpec * pspec) +{ + auto self = GST_WIN32_IPC_SERVER (object); + auto priv = self->priv; + + std::lock_guard < std::mutex > lk (priv->lock); + switch (prop_id) { + case PROP_NUM_CLIENTS: + g_value_set_uint (value, (guint) priv->conn_map.size ()); + break; + default: + G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); + break; + } +} + +static HANDLE +gst_win32_ipc_server_create_pipe (GstWin32IpcServer * self, + OVERLAPPED * overlap, bool &io_pending) +{ + auto priv = self->priv; + HANDLE pipe = CreateNamedPipeA (priv->address.c_str (), + PIPE_ACCESS_DUPLEX | FILE_FLAG_OVERLAPPED, + PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT, + PIPE_UNLIMITED_INSTANCES, CONN_BUFFER_SIZE, CONN_BUFFER_SIZE, 5000, + nullptr); + + if (pipe == INVALID_HANDLE_VALUE) { + guint last_err = GetLastError (); + auto err = g_win32_error_message (last_err); + GST_ERROR_OBJECT (self, "CreateNamedPipeA failed with 0x%x (%s)", + last_err, err); + g_free (err); + return INVALID_HANDLE_VALUE; + } + + if (ConnectNamedPipe (pipe, overlap)) { + guint last_err = GetLastError (); + auto err = g_win32_error_message (last_err); + GST_ERROR_OBJECT (self, "ConnectNamedPipe failed with 
0x%x (%s)", + last_err, err); + g_free (err); + return INVALID_HANDLE_VALUE; + } + + io_pending = false; + guint last_err = GetLastError (); + + switch (last_err) { + case ERROR_IO_PENDING: + io_pending = true; + break; + case ERROR_PIPE_CONNECTED: + SetEvent (overlap->hEvent); + break; + default: + { + auto err = g_win32_error_message (last_err); + GST_ERROR_OBJECT (self, "ConnectNamedPipe failed with 0x%x (%s)", + last_err, err); + g_free (err); + CloseHandle (pipe); + return INVALID_HANDLE_VALUE; + } + } + + return pipe; +} + +/* *INDENT-OFF* */ +static void +gst_win32_ipc_server_close_connection (GstWin32IpcServer * self, + GstWin32IpcServerConn * conn) +{ + auto priv = self->priv; + bool wakeup = false; + + GST_DEBUG_OBJECT (self, "Closing conn-id %u", conn->id); + + conn->close (); + + { + std::lock_guard < std::mutex > lk (priv->lock); + auto it = priv->conn_map.find (conn->id); + if (it != priv->conn_map.end ()) { + auto keep = it->second; + priv->conn_map.erase (it); + + if (conn->io_pending) { + GST_DEBUG_OBJECT (self, "conn-id %u has pending I/O, moving to GC", + conn->id); + priv->conn_gc.push_back (keep); + } + } + + if (priv->conn_map.empty ()) + wakeup = true; + } + + if (wakeup) { + GST_DEBUG_OBJECT (self, "All connections were closed"); + /* Run idle func to flush buffer queue if needed */ + SetEvent (priv->wakeup_event); + } + + g_object_notify_by_pspec (G_OBJECT (self), props[PROP_NUM_CLIENTS]); +} +/* *INDENT-ON* */ + +static void +gst_win32_ipc_server_eos (GstWin32IpcServer * self, + GstWin32IpcServerConn * conn) +{ + gst_win32_ipc_pkt_build_eos (conn->server_msg); + conn->type = GstWin32IpcPktType::EOS; + + gst_win32_ipc_server_send_msg (self, conn); +} + +static void +gst_win32_ipc_server_have_data (GstWin32IpcServer * self, + GstWin32IpcServerConn * conn) +{ + if (!conn->data) { + GST_ERROR_OBJECT (self, "Have no data to send, conn-id: %u", conn->id); + gst_win32_ipc_server_close_connection (self, conn); + return; + } + + auto & data = 
conn->data; + + conn->pending_have_data = false; + conn->seq_num = data->seq_num + 1; + + if (!data->buffer) { + GST_DEBUG_OBJECT (self, "Empty data, sending EOS, conn-id: %u", conn->id); + gst_win32_ipc_server_eos (self, conn); + return; + } + + gchar *caps_str = nullptr; + if (!conn->caps || !gst_caps_is_equal (conn->caps, data->caps)) { + gst_caps_replace (&conn->caps, data->caps); + caps_str = gst_caps_to_string (data->caps); + conn->caps_string = caps_str; + } + + GST_LOG_OBJECT (self, "Sending HAVE-DATA with handle \"%p\", conn-id :%u", + conn->data->handle, conn->id); + + auto ret = gst_win32_ipc_pkt_build_have_data (conn->server_msg, data->size, + data->pts, data->dts, data->dur, data->buf_flags, data->handle, caps_str, + data->meta); + g_free (caps_str); + + if (!ret) { + GST_ERROR_OBJECT (self, "Couldn't build HAVE-DATA pkt, conn-id: %u", + conn->id); + gst_win32_ipc_server_close_connection (self, conn); + return; + } + + conn->type = GstWin32IpcPktType::HAVE_DATA; + gst_win32_ipc_server_send_msg (self, conn); +} + +static bool +gst_win32_ipc_server_on_release_data (GstWin32IpcServer * self, + GstWin32IpcServerConn * conn) +{ + bool found = false; + HANDLE handle = nullptr; + + if (!gst_win32_ipc_pkt_parse_release_data (conn->client_msg, handle)) { + GST_ERROR_OBJECT (self, "Couldn't parse RELEASE-DATA, conn-id: %u", + conn->id); + return false; + } + + GST_LOG_OBJECT (self, "RELEASE-DATA \"%p\", conn-id: %u", handle, conn->id); + + for (auto it = conn->peer_handles.begin (); it != conn->peer_handles.end (); + it++) { + auto other = (*it)->handle; + if (handle == other) { + found = true; + conn->peer_handles.erase (it); + break; + } + } + + if (!found) { + GST_WARNING_OBJECT (self, + "Unexpected name to remove, conn-id: %u", conn->id); + return false; + } + + GST_LOG_OBJECT (self, "Client is holding %" G_GSIZE_FORMAT " handles", + conn->peer_handles.size ()); + + return true; +} + +static void +gst_win32_ipc_server_wait_msg_finish (GstWin32IpcServer * 
server, + GstWin32IpcServerConn * conn) +{ + GstWin32IpcPktHdr header; + + if (!gst_win32_ipc_pkt_identify (conn->client_msg, header)) { + GST_ERROR_OBJECT (server, "Broken header, conn-id: %u", conn->id); + gst_win32_ipc_server_close_connection (server, conn); + return; + } + + switch (header.type) { + case GstWin32IpcPktType::NEED_DATA: + GST_LOG_OBJECT (server, "NEED-DATA, conn-id: %u", conn->id); + if (!conn->data) { + GST_LOG_OBJECT (server, "Wait for available data, conn-id: %u", + conn->id); + conn->pending_have_data = true; + gst_win32_ipc_server_on_idle (server); + return; + } + gst_win32_ipc_server_have_data (server, conn); + break; + case GstWin32IpcPktType::READ_DONE: + GST_LOG_OBJECT (server, "READ-DONE, conn-id: %u", conn->id); + + if (!conn->data) { + GST_ERROR_OBJECT (server, "Unexpected READ-DATA, conn-id: %u", + conn->id); + gst_win32_ipc_server_close_connection (server, conn); + return; + } + + conn->peer_handles.push_back (conn->data); + conn->data = nullptr; + gst_win32_ipc_server_wait_msg (server, conn); + break; + case GstWin32IpcPktType::RELEASE_DATA: + GST_LOG_OBJECT (server, "RELEASE-DATA, conn-id: %u", conn->id); + if (!gst_win32_ipc_server_on_release_data (server, conn)) + gst_win32_ipc_server_close_connection (server, conn); + else + gst_win32_ipc_server_wait_msg (server, conn); + break; + case GstWin32IpcPktType::FIN: + GST_DEBUG_OBJECT (server, "FIN, conn-id %u", conn->id); + gst_win32_ipc_server_close_connection (server, conn); + break; + default: + GST_ERROR_OBJECT (server, "Unexpected packet, conn-id: %u", conn->id); + gst_win32_ipc_server_close_connection (server, conn); + break; + } +} + +static void WINAPI +gst_win32_ipc_server_payload_finish (DWORD error_code, DWORD size, + OVERLAPPED * overlap) +{ + GstWin32IpcServerConn *conn = + static_cast < GstWin32IpcServerConn * >(overlap); + auto self = conn->server; + auto priv = self->priv; + + conn->io_pending = false; + + if (priv->aborted) + return; + + if (error_code != 
ERROR_SUCCESS) { + auto err = g_win32_error_message (error_code); + GST_WARNING_OBJECT (self, "ReadFileEx callback failed with 0x%x (%s)", + (guint) error_code, err); + g_free (err); + gst_win32_ipc_server_close_connection (self, conn); + return; + } + + gst_win32_ipc_server_wait_msg_finish (self, conn); +} + +static void WINAPI +gst_win32_ipc_server_wait_msg_header_finish (DWORD error_code, DWORD size, + OVERLAPPED * overlap) +{ + GstWin32IpcServerConn *conn = + static_cast < GstWin32IpcServerConn * >(overlap); + GstWin32IpcPktHdr hdr; + auto self = conn->server; + auto priv = self->priv; + + conn->io_pending = false; + + if (priv->aborted) + return; + + if (error_code != ERROR_SUCCESS) { + auto err = g_win32_error_message (error_code); + GST_WARNING_OBJECT (self, "ReadFileEx callback failed with 0x%x (%s)", + (guint) error_code, err); + g_free (err); + gst_win32_ipc_server_close_connection (self, conn); + return; + } + + if (!gst_win32_ipc_pkt_identify (conn->client_msg, hdr)) { + GST_ERROR_OBJECT (self, "Broken header"); + gst_win32_ipc_server_close_connection (self, conn); + return; + } + + if (hdr.payload_size == 0) { + gst_win32_ipc_server_wait_msg_finish (conn->server, conn); + return; + } + + GST_LOG_OBJECT (self, "Reading payload"); + + conn->io_pending = true; + if (!ReadFileEx (conn->pipe, conn->client_msg.data () + + sizeof (GstWin32IpcPktHdr), hdr.payload_size, conn, + gst_win32_ipc_server_payload_finish)) { + guint last_err = GetLastError (); + auto err = g_win32_error_message (last_err); + GST_WARNING_OBJECT (self, "ReadFileEx failed with 0x%x (%s)", + last_err, err); + g_free (err); + conn->io_pending = false; + gst_win32_ipc_server_close_connection (self, conn); + } +} + +static void +gst_win32_ipc_server_wait_msg (GstWin32IpcServer * self, + GstWin32IpcServerConn * conn) +{ + auto priv = self->priv; + + if (priv->aborted) + return; + + conn->io_pending = true; + if (!ReadFileEx (conn->pipe, conn->client_msg.data (), + sizeof (GstWin32IpcPktHdr), 
conn, + gst_win32_ipc_server_wait_msg_header_finish)) { + guint last_err = GetLastError (); + auto err = g_win32_error_message (last_err); + GST_WARNING_OBJECT (self, "ReadFileEx failed with 0x%x (%s)", + last_err, err); + g_free (err); + conn->io_pending = false; + gst_win32_ipc_server_close_connection (self, conn); + } +} + +static void +gst_win32_ipc_server_config_data (GstWin32IpcServer * self, + GstWin32IpcServerConn * conn) +{ + auto priv = self->priv; + + if (conn->data) { + auto & data = conn->data; + if (!conn->caps || !gst_caps_is_equal (conn->caps, data->caps)) { + gst_caps_replace (&conn->caps, data->caps); + auto caps_str = gst_caps_to_string (data->caps); + conn->caps_string = caps_str; + g_free (caps_str); + } + } + + gst_win32_ipc_pkt_build_config (conn->server_msg, + priv->pid, conn->caps_string); + conn->type = GstWin32IpcPktType::CONFIG; + + GST_LOG_OBJECT (self, "Sending CONFIG, conn-id %u", conn->id); + gst_win32_ipc_server_send_msg (self, conn); +} + +/* *INDENT-OFF* */ +static void +gst_win32_ipc_server_on_idle (GstWin32IpcServer * self) +{ + auto priv = self->priv; + + GST_LOG_OBJECT (self, "idle"); + + std::vector < std::shared_ptr < GstWin32IpcServerConn >> to_config_data; + std::vector < std::shared_ptr < GstWin32IpcServerConn >> to_send_have_data; + guint64 base_seq = 0; + + { + std::unique_lock < std::mutex > lk (priv->lock); + if (priv->data_queue.empty ()) + return; + + base_seq = priv->data_queue.front ()->seq_num; + + for (auto it : priv->conn_map) { + auto conn = it.second; + if (!conn->configured) { + conn->configured = true; + conn->data = priv->data_queue.front (); + to_config_data.push_back (conn); + } else if (conn->pending_have_data) { + auto next_seq = conn->seq_num; + + if (next_seq < base_seq) { + GST_WARNING_OBJECT (self, "conn-id: %u next_seq < base_seq, resync", + conn->id); + next_seq = base_seq; + } + + auto offset = (size_t) (next_seq - base_seq); + if (offset < priv->data_queue.size ()) { + conn->data = 
priv->data_queue[offset]; + to_send_have_data.push_back (conn); + } + } + } + } + + for (auto it: to_config_data) + gst_win32_ipc_server_config_data (self, it.get ()); + + for (auto it: to_send_have_data) + gst_win32_ipc_server_have_data (self, it.get ()); + + /* Drop fully consumed buffer from queue */ + { + std::unique_lock<std::mutex> lk (priv->lock); + + if (!priv->data_queue.empty ()) { + guint64 min_seq = G_MAXUINT64; + + for (auto it : priv->conn_map) { + auto conn = it.second; + if (conn->seq_num < min_seq) + min_seq = conn->seq_num; + } + + while (!priv->data_queue.empty () && + priv->data_queue.front ()->seq_num < min_seq) { + priv->data_queue.pop_front (); + } + + priv->cond.notify_all (); + } + } +} +/* *INDENT-ON* */ + +static void WINAPI +gst_win32_ipc_server_send_msg_finish (DWORD error_code, DWORD size, + OVERLAPPED * overlap) +{ + GstWin32IpcServerConn *conn = + static_cast < GstWin32IpcServerConn * >(overlap); + auto self = conn->server; + auto priv = self->priv; + + conn->io_pending = false; + + if (priv->aborted) + return; + + if (error_code != ERROR_SUCCESS) { + auto err = g_win32_error_message (error_code); + GST_WARNING_OBJECT (self, "WriteFileEx callback failed with 0x%x (%s)", + (guint) error_code, err); + g_free (err); + gst_win32_ipc_server_close_connection (self, conn); + return; + } + + GST_LOG_OBJECT (self, "Sent message"); + + switch (conn->type) { + case GstWin32IpcPktType::CONFIG: + GST_DEBUG_OBJECT (self, "Sent CONFIG-DATA, conn-id %u", conn->id); + gst_win32_ipc_server_wait_msg (self, conn); + break; + case GstWin32IpcPktType::HAVE_DATA: + GST_LOG_OBJECT (self, "Sent HAVE-DATA, conn-id %u", conn->id); + gst_win32_ipc_server_wait_msg (self, conn); + break; + case GstWin32IpcPktType::EOS: + GST_DEBUG_OBJECT (self, "Sent EOS, conn-id %u", conn->id); + gst_win32_ipc_server_wait_msg (self, conn); + break; + default: + GST_ERROR_OBJECT (self, "Unexpected msg type"); + gst_win32_ipc_server_close_connection (self, conn); + break; + } +} + 
+static void +gst_win32_ipc_server_send_msg (GstWin32IpcServer * self, + GstWin32IpcServerConn * conn) +{ + auto priv = self->priv; + + GST_LOG_OBJECT (self, "Sending message"); + + if (priv->aborted) + return; + + conn->io_pending = true; + + if (!WriteFileEx (conn->pipe, conn->server_msg.data (), + conn->server_msg.size (), conn, + gst_win32_ipc_server_send_msg_finish)) { + guint last_err = GetLastError (); + auto err = g_win32_error_message (last_err); + GST_WARNING_OBJECT (self, "WriteFileEx failed with 0x%x (%s)", + last_err, err); + g_free (err); + conn->io_pending = false; + gst_win32_ipc_server_close_connection (self, conn); + } +} + +static void +gst_win32_ipc_server_on_incoming_connection (GstWin32IpcServer * self, + std::shared_ptr < GstWin32IpcServerConn > conn) +{ + auto priv = self->priv; + + { + std::lock_guard < std::mutex > lk (priv->lock); + conn->server = self; + conn->id = priv->next_conn_id; + priv->next_conn_id++; + + conn->data = nullptr; + if (!priv->data_queue.empty ()) + conn->data = priv->data_queue.front (); + + GST_DEBUG_OBJECT (self, "New connection, conn-id: %u", conn->id); + + /* *INDENT-OFF* */ + priv->conn_map.insert ({conn->id, conn}); + /* *INDENT-ON* */ + } + + if (conn->data) { + conn->configured = true; + gst_win32_ipc_server_config_data (self, conn.get ()); + } else { + GST_DEBUG_OBJECT (self, "Have no config data yet, waiting for data"); + } + + g_object_notify_by_pspec (G_OBJECT (self), props[PROP_NUM_CLIENTS]); +} + +/* *INDENT-OFF* */ +static bool +gst_win32_ipc_server_run_gc (GstWin32IpcServer * self) +{ + auto priv = self->priv; + bool any_pending = false; + + if (priv->conn_gc.empty ()) + return false; + + std::vector<std::shared_ptr<GstWin32IpcServerConn>> keep; + for (auto &conn : priv->conn_gc) { + if (conn->io_pending.load ()) { + keep.push_back (conn); + any_pending = true; + } else { + GST_DEBUG_OBJECT (self, "GC connection conn-id %u", conn->id); + } + } + + priv->conn_gc.swap (keep); + + return any_pending; +} + 
+static gpointer +gst_win32_ipc_server_loop_thread_func (GstWin32IpcServer * self) +{ + auto priv = self->priv; + bool io_pending = false; + guint wait_ret; + HANDLE pipe; + OVERLAPPED overlap; + HANDLE waitables[3]; + + GST_DEBUG_OBJECT (self, "Entering loop"); + + memset (&overlap, 0, sizeof (OVERLAPPED)); + + overlap.hEvent = CreateEvent (nullptr, TRUE, TRUE, nullptr); + pipe = gst_win32_ipc_server_create_pipe (self, &overlap, io_pending); + if (pipe == INVALID_HANDLE_VALUE) { + CloseHandle (overlap.hEvent); + priv->aborted = true; + goto out; + } + + waitables[0] = overlap.hEvent; + waitables[1] = priv->wakeup_event; + waitables[2] = priv->cancellable; + + do { + wait_ret = WaitForMultipleObjectsEx (G_N_ELEMENTS (waitables), waitables, + FALSE, INFINITE, TRUE); + + if (wait_ret == WAIT_OBJECT_0 + 2) { + GST_DEBUG_OBJECT (self, "Operation cancelled"); + goto out; + } + + switch (wait_ret) { + case WAIT_OBJECT_0: + { + DWORD n_bytes; + + if (io_pending + && !GetOverlappedResult (pipe, &overlap, &n_bytes, FALSE)) { + guint last_err = GetLastError (); + auto err = g_win32_error_message (last_err); + GST_WARNING_OBJECT (self, "GetOverlappedResult failed with 0x%x (%s)", + last_err, err); + g_free (err); + CloseHandle (pipe); + pipe = INVALID_HANDLE_VALUE; + break; + } + + auto conn = std::make_shared < GstWin32IpcServerConn > (pipe); + conn->server = self; + pipe = INVALID_HANDLE_VALUE; + gst_win32_ipc_server_on_incoming_connection (self, conn); + + pipe = gst_win32_ipc_server_create_pipe (self, &overlap, io_pending); + break; + } + case WAIT_IO_COMPLETION: + break; + case WAIT_OBJECT_0 + 1: + gst_win32_ipc_server_on_idle (self); + break; + default: + { + guint last_err = GetLastError (); + auto err = g_win32_error_message (last_err); + GST_ERROR_OBJECT (self, + "WaitForMultipleObjectsEx return 0x%x, last error 0x%x (%s)", + wait_ret, last_err, err); + g_free (err); + priv->aborted = true; + goto out; + } + } + + gst_win32_ipc_server_run_gc (self); + } while (true); + +out: 
+ if (pipe != INVALID_HANDLE_VALUE) { + CancelIo (pipe); + DisconnectNamedPipe (pipe); + CloseHandle (pipe); + } + + CloseHandle (overlap.hEvent); + + { + std::lock_guard < std::mutex > lk (priv->lock); + for (auto & it : priv->conn_map) + priv->conn_gc.push_back (it.second); + + priv->conn_map.clear (); + } + + /* Wait for pending APC if any */ + for (guint i = 0; i < 100; i++) { + if (!gst_win32_ipc_server_run_gc (self)) + break; + + SleepEx (10, TRUE); + } + + GST_DEBUG_OBJECT (self, "Exit loop thread"); + + return nullptr; +} +/* *INDENT-ON* */ + +GstFlowReturn +gst_win32_ipc_server_send_data (GstWin32IpcServer * server, + GstBuffer * buffer, GstCaps * caps, GByteArray * meta, GstClockTime pts, + GstClockTime dts, gsize size) +{ + GstWin32IpcServerPrivate *priv; + + g_return_val_if_fail (GST_IS_WIN32_IPC_SERVER (server), GST_FLOW_ERROR); + + priv = server->priv; + + GST_LOG_OBJECT (server, "Sending data"); + + { + std::unique_lock < std::mutex > lk (priv->lock); + if (priv->aborted) { + GST_DEBUG_OBJECT (server, "Was aborted"); + return GST_FLOW_ERROR; + } + + if (priv->max_buffers > 0 && buffer) { + if (priv->leaky == GST_WIN32_IPC_LEAKY_NONE) { + if (priv->data_queue.size () >= priv->max_buffers) { + GST_DEBUG_OBJECT (server, "Waiting for free space"); + priv->cond.wait (lk, [&] { + auto max = priv->max_buffers.load (); + return priv->aborted || priv->flushing || max == 0 || + priv->data_queue.size () < priv->max_buffers; + } + ); + } + + if (priv->aborted) { + GST_DEBUG_OBJECT (server, "Aborted while waiting for free slot"); + return GST_FLOW_ERROR; + } else if (priv->flushing) { + GST_DEBUG_OBJECT (server, "We are flushing"); + return GST_FLOW_FLUSHING; + } + } else { + if (priv->data_queue.size () >= priv->max_buffers) { + if (priv->leaky == GST_WIN32_IPC_LEAKY_DOWNSTREAM) { + auto dropped = priv->data_queue.front (); + priv->data_queue.pop_front (); + GST_DEBUG_OBJECT (server, + "Queue full, dropping oldest seq=%" G_GUINT64_FORMAT, + dropped->seq_num); + } 
else { + GST_DEBUG_OBJECT (server, "Queue full, dropping current buffer"); + return GST_FLOW_OK; + } + } + } + } + + auto data = std::make_shared < GstWin32IpcServerData > (buffer, caps, + meta, priv->seq_num); + GST_DEBUG_OBJECT (server, "Enqueue data, seq-num %" G_GUINT64_FORMAT, + priv->seq_num); + if (buffer) { + data->pts = pts; + data->dts = dts; + data->dur = GST_BUFFER_DURATION (buffer); + data->size = size; + data->buf_flags = GST_BUFFER_FLAGS (buffer); + } + + priv->seq_num++; + priv->data_queue.push_back (data); + } + + SetEvent (priv->wakeup_event); + + if (!buffer) { + GST_DEBUG_OBJECT (server, "Waiting for draining"); + std::unique_lock < std::mutex > lk (priv->lock); + while (!priv->aborted && !priv->flushing && !priv->data_queue.empty ()) + priv->cond.wait (lk); + + /* Always clear queue even if we are unblocked by abort/flush */ + priv->data_queue.clear (); + } + + return GST_FLOW_OK; +} + +void +gst_win32_ipc_server_set_flushing (GstWin32IpcServer * server, + gboolean flushing) +{ + auto priv = server->priv; + + { + std::lock_guard < std::mutex > lk (priv->lock); + priv->flushing = flushing; + priv->cond.notify_all (); + } + + SetEvent (priv->wakeup_event); +} + +void +gst_win32_ipc_server_set_max_buffers (GstWin32IpcServer * server, + guint64 max_buffers) +{ + auto priv = server->priv; + bool updated = false; + + { + std::lock_guard < std::mutex > lk (priv->lock); + if (priv->max_buffers != max_buffers) { + updated = true; + priv->max_buffers = max_buffers; + priv->cond.notify_all (); + } + } + + if (updated) + SetEvent (priv->wakeup_event); +} + +void +gst_win32_ipc_server_set_leaky (GstWin32IpcServer * server, + GstWin32IpcLeakyType leaky) +{ + auto priv = server->priv; + bool updated = false; + + { + std::lock_guard < std::mutex > lk (priv->lock); + if (priv->leaky != leaky) { + updated = true; + priv->leaky = leaky; + priv->cond.notify_all (); + } + } + + if (updated) + SetEvent (priv->wakeup_event); +} + +guint64 
+gst_win32_ipc_server_get_current_level_buffers (GstWin32IpcServer * server) +{ + auto priv = server->priv; + + std::lock_guard < std::mutex > lk (priv->lock); + return priv->data_queue.size (); +} + +GstWin32IpcServer * +gst_win32_ipc_server_new (const std::string & address, + guint64 max_buffers, GstWin32IpcLeakyType leaky) +{ + auto self = (GstWin32IpcServer *) + g_object_new (GST_TYPE_WIN32_IPC_SERVER, nullptr); + gst_object_ref_sink (self); + + auto priv = self->priv; + priv->address = address; + priv->max_buffers = max_buffers; + priv->leaky = leaky; + + priv->loop_thread = g_thread_new ("win32-ipc-server", + (GThreadFunc) gst_win32_ipc_server_loop_thread_func, self); + + return self; +}
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/win32ipc/gstwin32ipcserver.h
Added
@@ -0,0 +1,58 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/gst.h> +#include "gstwin32ipcmmf.h" +#include "gstwin32ipc.h" +#include <string> +#include <vector> + +G_BEGIN_DECLS + +#define GST_TYPE_WIN32_IPC_SERVER (gst_win32_ipc_server_get_type()) +G_DECLARE_FINAL_TYPE (GstWin32IpcServer, gst_win32_ipc_server, + GST, WIN32_IPC_SERVER, GstObject); + +GstWin32IpcServer * gst_win32_ipc_server_new (const std::string & address, + guint64 max_buffers, + GstWin32IpcLeakyType leaky); + +GstFlowReturn gst_win32_ipc_server_send_data (GstWin32IpcServer * server, + GstBuffer * buffer, + GstCaps * caps, + GByteArray * meta, + GstClockTime pts, + GstClockTime dts, + gsize size); + +void gst_win32_ipc_server_set_flushing (GstWin32IpcServer * server, + gboolean flushing); + +void gst_win32_ipc_server_set_max_buffers (GstWin32IpcServer * server, + guint64 max_buffers); + +void gst_win32_ipc_server_set_leaky (GstWin32IpcServer * server, + GstWin32IpcLeakyType leaky); + +guint64 gst_win32_ipc_server_get_current_level_buffers (GstWin32IpcServer * server); + + +G_END_DECLS
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/win32ipc/gstwin32ipcsink.cpp
Added
@@ -0,0 +1,346 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +/** + * SECTION:element-win32ipcsink + * @title: win32ipcsink + * + * Send Windows memory mapped file backed buffers over Windows named pipe to + * win32ipcsrc + * + * ## Example launch line + * ``` + * gst-launch-1.0 videotestsrc ! queue ! 
win32ipcsink + * ``` + * + * Since: 1.28 + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gstwin32ipcsink.h" +#include "gstwin32ipcbufferpool.h" +#include "gstwin32ipcmemory.h" +#include "gstwin32ipc.h" +#include <string.h> + +GST_DEBUG_CATEGORY_STATIC (gst_win32_ipc_sink_debug); +#define GST_CAT_DEFAULT gst_win32_ipc_sink_debug + +static GstStaticPadTemplate sink_template = GST_STATIC_PAD_TEMPLATE ("sink", + GST_PAD_SINK, + GST_PAD_ALWAYS, + GST_STATIC_CAPS_ANY); + +struct _GstWin32IpcSink +{ + GstWin32IpcBaseSink parent; + + GstVideoInfo info; + GstCaps *caps; + GstBufferPool *fallback_pool; + gboolean is_raw_video; + gsize pool_size; +}; + +static gboolean gst_win32_ipc_sink_set_caps (GstBaseSink * sink, + GstCaps * caps); +static gboolean gst_win32_ipc_sink_stop (GstBaseSink * sink); +static void gst_win32_ipc_sink_get_times (GstBaseSink * sink, + GstBuffer * buf, GstClockTime * start, GstClockTime * end); +static gboolean gst_win32_ipc_sink_propose_allocation (GstBaseSink * sink, + GstQuery * query); +static GstFlowReturn +gst_win32_ipc_sink_upload (GstWin32IpcBaseSink * sink, GstBuffer * buffer, + GstBuffer ** uploaded, gsize * size); + +#define gst_win32_ipc_sink_parent_class parent_class +G_DEFINE_TYPE (GstWin32IpcSink, gst_win32_ipc_sink, + GST_TYPE_WIN32_IPC_BASE_SINK); + +static void +gst_win32_ipc_sink_class_init (GstWin32IpcSinkClass * klass) +{ + auto element_class = GST_ELEMENT_CLASS (klass); + auto sink_class = GST_BASE_SINK_CLASS (klass); + auto win32_class = GST_WIN32_IPC_BASE_SINK_CLASS (klass); + + gst_element_class_set_static_metadata (element_class, + "Win32 IPC Sink", "Sink/Generic", "Windows shared memory sink", + "Seungha Yang <seungha@centricular.com>"); + gst_element_class_add_static_pad_template (element_class, &sink_template); + + sink_class->stop = GST_DEBUG_FUNCPTR (gst_win32_ipc_sink_stop); + sink_class->get_times = GST_DEBUG_FUNCPTR (gst_win32_ipc_sink_get_times); + sink_class->set_caps = GST_DEBUG_FUNCPTR 
(gst_win32_ipc_sink_set_caps); + sink_class->propose_allocation = + GST_DEBUG_FUNCPTR (gst_win32_ipc_sink_propose_allocation); + win32_class->upload = GST_DEBUG_FUNCPTR (gst_win32_ipc_sink_upload); + + GST_DEBUG_CATEGORY_INIT (gst_win32_ipc_sink_debug, "win32ipcsink", + 0, "win32ipcsink"); +} + +static void +gst_win32_ipc_sink_init (GstWin32IpcSink * self) +{ +} + +static gboolean +gst_win32_ipc_sink_stop (GstBaseSink * sink) +{ + auto self = GST_WIN32_IPC_SINK (sink); + + GST_DEBUG_OBJECT (self, "Stop"); + if (self->fallback_pool) { + gst_clear_caps (&self->caps); + gst_buffer_pool_set_active (self->fallback_pool, FALSE); + gst_clear_object (&self->fallback_pool); + self->pool_size = 0; + } + + return GST_BASE_SINK_CLASS (parent_class)->stop (sink); +} + +static void +gst_win32_ipc_sink_get_times (GstBaseSink * sink, GstBuffer * buf, + GstClockTime * start, GstClockTime * end) +{ + auto self = GST_WIN32_IPC_SINK (sink); + if (!self->is_raw_video) { + GST_BASE_SINK_CLASS (parent_class)->get_times (sink, buf, start, end); + return; + } + + auto timestamp = GST_BUFFER_PTS (buf); + if (!GST_CLOCK_TIME_IS_VALID (timestamp)) + timestamp = GST_BUFFER_DTS (buf); + + if (GST_CLOCK_TIME_IS_VALID (timestamp)) { + *start = timestamp; + if (GST_BUFFER_DURATION_IS_VALID (buf)) { + *end = timestamp + GST_BUFFER_DURATION (buf); + } else if (self->info.fps_n > 0) { + *end = timestamp + + gst_util_uint64_scale_int (GST_SECOND, self->info.fps_d, + self->info.fps_n); + } else if (sink->segment.rate < 0) { + *end = timestamp; + } + } +} + +static gboolean +gst_win32_ipc_sink_set_caps (GstBaseSink * sink, GstCaps * caps) +{ + auto self = GST_WIN32_IPC_SINK (sink); + + gst_caps_replace (&self->caps, caps); + + auto s = gst_caps_get_structure (caps, 0); + self->is_raw_video = gst_structure_has_name (s, "video/x-raw"); + + if (!self->is_raw_video) + return GST_BASE_SINK_CLASS (parent_class)->set_caps (sink, caps); + + if (!gst_video_info_from_caps (&self->info, caps)) { + 
GST_WARNING_OBJECT (self, "Invalid caps %" GST_PTR_FORMAT, caps); + return FALSE; + } + + if (self->fallback_pool) { + gst_buffer_pool_set_active (self->fallback_pool, FALSE); + gst_object_unref (self->fallback_pool); + } + + self->fallback_pool = gst_win32_ipc_buffer_pool_new (); + auto config = gst_buffer_pool_get_config (self->fallback_pool); + gst_buffer_pool_config_add_option (config, GST_BUFFER_POOL_OPTION_VIDEO_META); + gst_buffer_pool_config_set_params (config, caps, (guint) self->info.size, + 0, 0); + gst_buffer_pool_set_config (self->fallback_pool, config); + gst_buffer_pool_set_active (self->fallback_pool, TRUE); + self->pool_size = self->info.size; + + return GST_BASE_SINK_CLASS (parent_class)->set_caps (sink, caps); +} + +static gboolean +gst_win32_ipc_sink_propose_allocation (GstBaseSink * sink, GstQuery * query) +{ + GstCaps *caps; + GstBufferPool *pool = nullptr; + GstVideoInfo info; + guint size; + gboolean need_pool; + + gst_query_parse_allocation (query, &caps, &need_pool); + if (!caps) { + GST_WARNING_OBJECT (sink, "No caps specified"); + return FALSE; + } + + auto s = gst_caps_get_structure (caps, 0); + if (!gst_structure_has_name (s, "video/x-raw")) + return FALSE; + + if (!gst_video_info_from_caps (&info, caps)) { + GST_WARNING_OBJECT (sink, "Invalid caps %" GST_PTR_FORMAT, caps); + return FALSE; + } + + /* the normal size of a frame */ + size = info.size; + if (need_pool) { + GstStructure *config; + + pool = gst_win32_ipc_buffer_pool_new (); + config = gst_buffer_pool_get_config (pool); + gst_buffer_pool_config_add_option (config, + GST_BUFFER_POOL_OPTION_VIDEO_META); + + size = GST_VIDEO_INFO_SIZE (&info); + + gst_buffer_pool_config_set_params (config, caps, (guint) size, 0, 0); + + if (!gst_buffer_pool_set_config (pool, config)) { + GST_ERROR_OBJECT (pool, "Couldn't set config"); + gst_object_unref (pool); + + return FALSE; + } + } + + gst_query_add_allocation_pool (query, pool, size, 0, 0); + gst_clear_object (&pool); + + 
gst_query_add_allocation_meta (query, GST_VIDEO_META_API_TYPE, NULL); + + return TRUE; +} + +static GstFlowReturn +gst_win32_ipc_sink_upload_raw_video (GstWin32IpcSink * self, + GstBuffer * buf, GstBuffer ** uploaded, gsize * size) +{ + GstBuffer *prepared = nullptr; + gst_buffer_pool_acquire_buffer (self->fallback_pool, &prepared, nullptr); + if (!prepared) { + GST_ERROR_OBJECT (self, "Couldn't acquire fallback buffer"); + return GST_FLOW_ERROR; + } + + GstVideoFrame src_frame, dst_frame; + if (!gst_video_frame_map (&src_frame, &self->info, buf, GST_MAP_READ)) { + GST_ERROR_OBJECT (self, "Couldn't map input buffer"); + gst_buffer_unref (prepared); + return GST_FLOW_ERROR; + } + + if (!gst_video_frame_map (&dst_frame, &self->info, prepared, GST_MAP_WRITE)) { + GST_ERROR_OBJECT (self, "Couldn't map fallback buffer"); + gst_video_frame_unmap (&src_frame); + gst_buffer_unref (prepared); + return GST_FLOW_ERROR; + } + + auto copy_ret = gst_video_frame_copy (&dst_frame, &src_frame); + gst_video_frame_unmap (&dst_frame); + gst_video_frame_unmap (&src_frame); + + if (!copy_ret) { + GST_ERROR_OBJECT (self, "Couldn't copy frame"); + gst_buffer_unref (prepared); + return GST_FLOW_ERROR; + } + + gst_buffer_copy_into (prepared, buf, GST_BUFFER_COPY_METADATA, 0, -1); + *uploaded = prepared; + *size = gst_buffer_get_size (prepared); + return GST_FLOW_OK; +} + +static GstFlowReturn +gst_win32_ipc_sink_upload (GstWin32IpcBaseSink * sink, GstBuffer * buf, + GstBuffer ** uploaded, gsize * size) +{ + auto self = GST_WIN32_IPC_SINK (sink); + + auto mem = gst_buffer_peek_memory (buf, 0); + if (gst_is_win32_ipc_memory (mem) && gst_buffer_n_memory (buf) == 1) { + GST_TRACE_OBJECT (self, "Upstream win32 memory"); + *uploaded = gst_buffer_ref (buf); + *size = gst_buffer_get_size (buf); + return GST_FLOW_OK; + } + + if (self->is_raw_video) + return gst_win32_ipc_sink_upload_raw_video (self, buf, uploaded, size); + + auto buf_size = gst_buffer_get_size (buf); + if (self->fallback_pool) { + 
if (self->pool_size < buf_size) { + gst_buffer_pool_set_active (self->fallback_pool, FALSE); + gst_clear_object (&self->fallback_pool); + } + } + + if (!self->fallback_pool) { + self->fallback_pool = gst_win32_ipc_buffer_pool_new (); + self->pool_size = buf_size + 1024; + auto config = gst_buffer_pool_get_config (self->fallback_pool); + gst_buffer_pool_config_set_params (config, self->caps, self->pool_size, + 0, 0); + gst_buffer_pool_set_config (self->fallback_pool, config); + gst_buffer_pool_set_active (self->fallback_pool, TRUE); + } + + GstBuffer *prepared = nullptr; + gst_buffer_pool_acquire_buffer (self->fallback_pool, &prepared, nullptr); + if (!prepared) { + GST_ERROR_OBJECT (self, "Couldn't acquire fallback buffer"); + return GST_FLOW_ERROR; + } + + GstMapInfo src_info, dst_info; + if (!gst_buffer_map (buf, &src_info, GST_MAP_READ)) { + GST_ERROR_OBJECT (self, "Couldn't map input buffer"); + gst_buffer_unref (prepared); + return GST_FLOW_ERROR; + } + + if (!gst_buffer_map (prepared, &dst_info, GST_MAP_WRITE)) { + GST_ERROR_OBJECT (self, "Couldn't map output buffer"); + gst_buffer_unmap (buf, &src_info); + gst_buffer_unref (prepared); + return GST_FLOW_ERROR; + } + + memcpy (dst_info.data, src_info.data, src_info.size); + gst_buffer_unmap (buf, &src_info); + gst_buffer_unmap (prepared, &dst_info); + + gst_buffer_copy_into (prepared, buf, GST_BUFFER_COPY_METADATA, 0, -1); + *uploaded = prepared; + *size = buf_size; + + return GST_FLOW_OK; +}
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/win32ipc/gstwin32ipcsink.h
Added
@@ -0,0 +1,32 @@ +/* GStreamer + * Copyright (C) 2022 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/gst.h> +#include <gst/video/video.h> +#include "gstwin32ipcbasesink.h" + +G_BEGIN_DECLS + +#define GST_TYPE_WIN32_IPC_SINK (gst_win32_ipc_sink_get_type()) +G_DECLARE_FINAL_TYPE (GstWin32IpcSink, gst_win32_ipc_sink, + GST, WIN32_IPC_SINK, GstWin32IpcBaseSink); + +G_END_DECLS
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/win32ipc/gstwin32ipcsrc.cpp
Added
@@ -0,0 +1,77 @@ +/* GStreamer + * Copyright (C) 2022 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +/** + * SECTION:element-win32ipcsrc + * @title: win32ipcsrc + * + * Receive Windows memory mapped file backed buffers over Windows named pipe from + * win32ipcsink + * + * ## Example launch line + * ``` + * gst-launch-1.0 win32ipcsrc ! queue ! videoconvert ! 
d3d12videosink + * ``` + * + * Since: 1.28 + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include "gstwin32ipcsrc.h" +#include "gstwin32ipc.h" +#include <string> +#include <mutex> + +GST_DEBUG_CATEGORY_STATIC (gst_win32_ipc_src_debug); +#define GST_CAT_DEFAULT gst_win32_ipc_src_debug + +static GstStaticPadTemplate src_template = GST_STATIC_PAD_TEMPLATE ("src", + GST_PAD_SRC, + GST_PAD_ALWAYS, + GST_STATIC_CAPS_ANY); + +struct _GstWin32IpcSrc +{ + GstWin32IpcBaseSrc parent; +}; + +#define gst_win32_ipc_src_parent_class parent_class +G_DEFINE_TYPE (GstWin32IpcSrc, gst_win32_ipc_src, GST_TYPE_WIN32_IPC_BASE_SRC); + +static void +gst_win32_ipc_src_class_init (GstWin32IpcSrcClass * klass) +{ + auto element_class = GST_ELEMENT_CLASS (klass); + + gst_element_class_set_static_metadata (element_class, + "Win32 IPC Source", "Source/Generic", "Windows shared memory source", + "Seungha Yang <seungha@centricular.com>"); + gst_element_class_add_static_pad_template (element_class, &src_template); + + GST_DEBUG_CATEGORY_INIT (gst_win32_ipc_src_debug, "win32ipcsrc", + 0, "win32ipcsrc"); +} + +static void +gst_win32_ipc_src_init (GstWin32IpcSrc * self) +{ +}
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/win32ipc/gstwin32ipcsrc.h
Added
@@ -0,0 +1,31 @@ +/* GStreamer + * Copyright (C) 2022 Seungha Yang <seungha@centricular.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#pragma once + +#include <gst/gst.h> +#include "gstwin32ipcbasesrc.h" + +G_BEGIN_DECLS + +#define GST_TYPE_WIN32_IPC_SRC (gst_win32_ipc_src_get_type()) +G_DECLARE_FINAL_TYPE (GstWin32IpcSrc, gst_win32_ipc_src, + GST, WIN32_IPC_SRC, GstWin32IpcBaseSrc); + +G_END_DECLS
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/win32ipc/gstwin32ipcvideosink.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/win32ipc/gstwin32ipcvideosink.cpp
Changed
@@ -38,11 +38,9 @@ #endif #include "gstwin32ipcvideosink.h" -#include "gstwin32ipcutils.h" #include "gstwin32ipcbufferpool.h" #include "gstwin32ipcmemory.h" -#include "protocol/win32ipcpipeserver.h" -#include <string> +#include "gstwin32ipc.h" #include <string.h> GST_DEBUG_CATEGORY_STATIC (gst_win32_ipc_video_sink_debug); @@ -53,73 +51,38 @@ GST_PAD_ALWAYS, GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE (GST_VIDEO_FORMATS_ALL))); -enum -{ - PROP_0, - PROP_PIPE_NAME, -}; - #define DEFAULT_PIPE_NAME "\\\\.\\pipe\\gst.win32.ipc.video" +#define DEFAULT_LEAKY_TYPE GST_WIN32_IPC_LEAKY_DOWNSTREAM struct _GstWin32IpcVideoSink { - GstBaseSink parent; + GstWin32IpcBaseSink parent; GstVideoInfo info; - Win32IpcPipeServer *pipe; - - Win32IpcVideoInfo minfo; - GstBufferPool *fallback_pool; - GstBuffer *prepared_buffer; - - /* properties */ - gchar *pipe_name; }; -static void gst_win32_ipc_video_sink_finalize (GObject * object); -static void gst_win32_ipc_video_sink_set_property (GObject * object, - guint prop_id, const GValue * value, GParamSpec * pspec); -static void gst_win32_video_sink_get_property (GObject * object, guint prop_id, - GValue * value, GParamSpec * pspec); - -static GstClock *gst_win32_ipc_video_sink_provide_clock (GstElement * elem); - -static gboolean gst_win32_ipc_video_sink_start (GstBaseSink * sink); -static gboolean gst_win32_ipc_video_sink_stop (GstBaseSink * sink); -static gboolean gst_win32_ipc_video_sink_unlock_stop (GstBaseSink * sink); static gboolean gst_win32_ipc_video_sink_set_caps (GstBaseSink * sink, GstCaps * caps); -static void gst_win32_ipc_video_sink_get_time (GstBaseSink * sink, +static gboolean gst_win32_ipc_video_sink_stop (GstBaseSink * sink); +static void gst_win32_ipc_video_sink_get_times (GstBaseSink * sink, GstBuffer * buf, GstClockTime * start, GstClockTime * end); static gboolean gst_win32_ipc_video_sink_propose_allocation (GstBaseSink * sink, GstQuery * query); -static GstFlowReturn gst_win32_ipc_video_sink_prepare (GstBaseSink * sink, - 
GstBuffer * buf); -static GstFlowReturn gst_win32_ipc_video_sink_render (GstBaseSink * sink, - GstBuffer * buf); +static GstFlowReturn +gst_win32_ipc_video_sink_upload (GstWin32IpcBaseSink * sink, GstBuffer * buffer, + GstBuffer ** uploaded, gsize * size); #define gst_win32_ipc_video_sink_parent_class parent_class G_DEFINE_TYPE (GstWin32IpcVideoSink, gst_win32_ipc_video_sink, - GST_TYPE_BASE_SINK); + GST_TYPE_WIN32_IPC_BASE_SINK); static void gst_win32_ipc_video_sink_class_init (GstWin32IpcVideoSinkClass * klass) { - GObjectClass *object_class = G_OBJECT_CLASS (klass); - GstElementClass *element_class = GST_ELEMENT_CLASS (klass); - GstBaseSinkClass *sink_class = GST_BASE_SINK_CLASS (klass); - - object_class->finalize = gst_win32_ipc_video_sink_finalize; - object_class->set_property = gst_win32_ipc_video_sink_set_property; - object_class->get_property = gst_win32_video_sink_get_property; - - g_object_class_install_property (object_class, PROP_PIPE_NAME, - g_param_spec_string ("pipe-name", "Pipe Name", - "The name of Win32 named pipe to communicate with clients. 
" - "Validation of the pipe name is caller's responsibility", - DEFAULT_PIPE_NAME, (GParamFlags) (G_PARAM_READWRITE | - G_PARAM_STATIC_STRINGS | GST_PARAM_MUTABLE_READY))); + auto element_class = GST_ELEMENT_CLASS (klass); + auto sink_class = GST_BASE_SINK_CLASS (klass); + auto win32_class = GST_WIN32_IPC_BASE_SINK_CLASS (klass); gst_element_class_set_static_metadata (element_class, "Win32 IPC Video Sink", "Sink/Video", @@ -127,19 +90,13 @@ "Seungha Yang <seungha@centricular.com>"); gst_element_class_add_static_pad_template (element_class, &sink_template); - element_class->provide_clock = - GST_DEBUG_FUNCPTR (gst_win32_ipc_video_sink_provide_clock); - - sink_class->start = GST_DEBUG_FUNCPTR (gst_win32_ipc_video_sink_start); sink_class->stop = GST_DEBUG_FUNCPTR (gst_win32_ipc_video_sink_stop); - sink_class->unlock_stop = - GST_DEBUG_FUNCPTR (gst_win32_ipc_video_sink_unlock_stop); + sink_class->get_times = + GST_DEBUG_FUNCPTR (gst_win32_ipc_video_sink_get_times); sink_class->set_caps = GST_DEBUG_FUNCPTR (gst_win32_ipc_video_sink_set_caps); sink_class->propose_allocation = GST_DEBUG_FUNCPTR (gst_win32_ipc_video_sink_propose_allocation); - sink_class->get_times = GST_DEBUG_FUNCPTR (gst_win32_ipc_video_sink_get_time); - sink_class->prepare = GST_DEBUG_FUNCPTR (gst_win32_ipc_video_sink_prepare); - sink_class->render = GST_DEBUG_FUNCPTR (gst_win32_ipc_video_sink_render); + win32_class->upload = GST_DEBUG_FUNCPTR (gst_win32_ipc_video_sink_upload); GST_DEBUG_CATEGORY_INIT (gst_win32_ipc_video_sink_debug, "win32ipcvideosink", 0, "win32ipcvideosink"); @@ -148,119 +105,31 @@ static void gst_win32_ipc_video_sink_init (GstWin32IpcVideoSink * self) { - self->pipe_name = g_strdup (DEFAULT_PIPE_NAME); - - GST_OBJECT_FLAG_SET (self, GST_ELEMENT_FLAG_PROVIDE_CLOCK); - GST_OBJECT_FLAG_SET (self, GST_ELEMENT_FLAG_REQUIRE_CLOCK); -} - -static void -gst_win32_ipc_video_sink_finalize (GObject * object) -{ - GstWin32IpcVideoSink *self = GST_WIN32_IPC_VIDEO_SINK (object); - - g_free 
(self->pipe_name); - - G_OBJECT_CLASS (parent_class)->finalize (object); -} - -static void -gst_win32_ipc_video_sink_set_property (GObject * object, guint prop_id, - const GValue * value, GParamSpec * pspec) -{ - GstWin32IpcVideoSink *self = GST_WIN32_IPC_VIDEO_SINK (object); - - switch (prop_id) { - case PROP_PIPE_NAME: - GST_OBJECT_LOCK (self); - g_free (self->pipe_name); - self->pipe_name = g_value_dup_string (value); - if (!self->pipe_name) - self->pipe_name = g_strdup (DEFAULT_PIPE_NAME); - GST_OBJECT_UNLOCK (self); - break; - default: - G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); - break; - } -} - -static void -gst_win32_video_sink_get_property (GObject * object, guint prop_id, - GValue * value, GParamSpec * pspec) -{ - GstWin32IpcVideoSink *self = GST_WIN32_IPC_VIDEO_SINK (object); - - switch (prop_id) { - case PROP_PIPE_NAME: - GST_OBJECT_LOCK (self); - g_value_set_string (value, self->pipe_name); - GST_OBJECT_UNLOCK (self); - break; - default: - G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); - break; - } -} - -static GstClock * -gst_win32_ipc_video_sink_provide_clock (GstElement * elem) -{ - return gst_system_clock_obtain (); -} - -static gboolean -gst_win32_ipc_video_sink_start (GstBaseSink * sink) -{ - GstWin32IpcVideoSink *self = GST_WIN32_IPC_VIDEO_SINK (sink); - - GST_DEBUG_OBJECT (self, "Start"); - - self->pipe = win32_ipc_pipe_server_new (self->pipe_name); - if (!self->pipe) { - GST_ERROR_OBJECT (self, "Couldn't create pipe server"); - return FALSE; - } - - return TRUE; + g_object_set (self, "pipe-name", DEFAULT_PIPE_NAME, "leaky-type", + DEFAULT_LEAKY_TYPE, nullptr); } static gboolean gst_win32_ipc_video_sink_stop (GstBaseSink * sink) { - GstWin32IpcVideoSink *self = GST_WIN32_IPC_VIDEO_SINK (sink); + auto self = GST_WIN32_IPC_VIDEO_SINK (sink); GST_DEBUG_OBJECT (self, "Stop"); - - g_clear_pointer (&self->pipe, win32_ipc_pipe_server_unref); - gst_clear_buffer (&self->prepared_buffer); - if (self->fallback_pool) { 
gst_buffer_pool_set_active (self->fallback_pool, FALSE); gst_clear_object (&self->fallback_pool); } - return TRUE; -} - -static gboolean -gst_win32_ipc_video_sink_unlock_stop (GstBaseSink * sink) -{ - GstWin32IpcVideoSink *self = GST_WIN32_IPC_VIDEO_SINK (sink); - - gst_clear_buffer (&self->prepared_buffer); - - return TRUE; + return GST_BASE_SINK_CLASS (parent_class)->stop (sink); } static void -gst_win32_ipc_video_sink_get_time (GstBaseSink * sink, GstBuffer * buf, +gst_win32_ipc_video_sink_get_times (GstBaseSink * sink, GstBuffer * buf, GstClockTime * start, GstClockTime * end) { - GstWin32IpcVideoSink *self = GST_WIN32_IPC_VIDEO_SINK (sink); - GstClockTime timestamp; + auto self = GST_WIN32_IPC_VIDEO_SINK (sink); - timestamp = GST_BUFFER_PTS (buf); + auto timestamp = GST_BUFFER_PTS (buf); if (!GST_CLOCK_TIME_IS_VALID (timestamp)) timestamp = GST_BUFFER_DTS (buf); @@ -281,38 +150,27 @@ static gboolean gst_win32_ipc_video_sink_set_caps (GstBaseSink * sink, GstCaps * caps) { - GstWin32IpcVideoSink *self = GST_WIN32_IPC_VIDEO_SINK (sink); - GstStructure *config; + auto self = GST_WIN32_IPC_VIDEO_SINK (sink); if (!gst_video_info_from_caps (&self->info, caps)) { - GST_WARNING_OBJECT (self, "Invalid caps"); + GST_WARNING_OBJECT (self, "Invalid caps %" GST_PTR_FORMAT, caps); return FALSE; } - memset (&self->minfo, 0, sizeof (Win32IpcVideoInfo)); - self->minfo.format = - (Win32IpcVideoFormat) GST_VIDEO_INFO_FORMAT (&self->info); - self->minfo.width = GST_VIDEO_INFO_WIDTH (&self->info); - self->minfo.height = GST_VIDEO_INFO_HEIGHT (&self->info); - self->minfo.fps_n = self->info.fps_n; - self->minfo.fps_d = self->info.fps_d; - self->minfo.par_n = self->info.par_n; - self->minfo.par_d = self->info.par_d; - if (self->fallback_pool) { gst_buffer_pool_set_active (self->fallback_pool, FALSE); gst_object_unref (self->fallback_pool); } self->fallback_pool = gst_win32_ipc_buffer_pool_new (); - config = gst_buffer_pool_get_config (self->fallback_pool); + auto config = 
gst_buffer_pool_get_config (self->fallback_pool); gst_buffer_pool_config_add_option (config, GST_BUFFER_POOL_OPTION_VIDEO_META); gst_buffer_pool_config_set_params (config, caps, (guint) self->info.size, 0, 0); gst_buffer_pool_set_config (self->fallback_pool, config); gst_buffer_pool_set_active (self->fallback_pool, TRUE); - return TRUE; + return GST_BASE_SINK_CLASS (parent_class)->set_caps (sink, caps); } static gboolean @@ -367,165 +225,53 @@ } static GstFlowReturn -gst_win32_ipc_video_sink_prepare (GstBaseSink * sink, GstBuffer * buf) +gst_win32_ipc_video_sink_upload (GstWin32IpcBaseSink * sink, GstBuffer * buf, + GstBuffer ** uploaded, gsize * size) { - GstWin32IpcVideoSink *self = GST_WIN32_IPC_VIDEO_SINK (sink); - GstVideoFrame frame, mmf_frame; - GstMemory *mem; - GstFlowReturn ret; - - gst_clear_buffer (&self->prepared_buffer); - - if (!gst_video_frame_map (&frame, &self->info, buf, GST_MAP_READ)) { - GST_ERROR_OBJECT (self, "Couldn't map frame"); - return GST_FLOW_ERROR; - } + auto self = GST_WIN32_IPC_VIDEO_SINK (sink); - mem = gst_buffer_peek_memory (buf, 0); + auto mem = gst_buffer_peek_memory (buf, 0); if (gst_is_win32_ipc_memory (mem) && gst_buffer_n_memory (buf) == 1) { - GST_LOG_OBJECT (self, "Upstream memory is mmf"); - - self->prepared_buffer = gst_buffer_ref (buf); - - self->minfo.size = GST_VIDEO_FRAME_SIZE (&frame); - for (guint i = 0; i < GST_VIDEO_FRAME_N_PLANES (&frame); i++) { - self->minfo.offseti = GST_VIDEO_FRAME_PLANE_OFFSET (&frame, i); - self->minfo.stridei = GST_VIDEO_FRAME_PLANE_STRIDE (&frame, i); - } - - gst_video_frame_unmap (&frame); - + GST_TRACE_OBJECT (self, "Upstream win32 memory"); + *uploaded = gst_buffer_ref (buf); + *size = gst_buffer_get_size (buf); return GST_FLOW_OK; } - GST_LOG_OBJECT (self, "Copying into mmf buffer"); - - ret = gst_buffer_pool_acquire_buffer (self->fallback_pool, - &self->prepared_buffer, nullptr); - if (ret != GST_FLOW_OK) { - GST_ERROR_OBJECT (self, "Couldn't acquire buffer"); - 
gst_video_frame_unmap (&frame); - return GST_FLOW_ERROR; - } - - if (!gst_video_frame_map (&mmf_frame, &self->info, self->prepared_buffer, - GST_MAP_WRITE)) { - GST_ERROR_OBJECT (self, "Couldn't map mmf frame"); - gst_video_frame_unmap (&frame); - gst_clear_buffer (&self->prepared_buffer); + GstBuffer *prepared = nullptr; + gst_buffer_pool_acquire_buffer (self->fallback_pool, &prepared, nullptr); + if (!prepared) { + GST_ERROR_OBJECT (self, "Couldn't acquire fallback buffer"); return GST_FLOW_ERROR; } - if (!gst_video_frame_copy (&mmf_frame, &frame)) { - GST_ERROR_OBJECT (self, "Couldn't copy buffer"); - gst_video_frame_unmap (&frame); - gst_video_frame_unmap (&mmf_frame); - gst_clear_buffer (&self->prepared_buffer); + GstVideoFrame src_frame, dst_frame; + if (!gst_video_frame_map (&src_frame, &self->info, buf, GST_MAP_READ)) { + GST_ERROR_OBJECT (self, "Couldn't map input buffer"); + gst_buffer_unref (prepared); return GST_FLOW_ERROR; } - gst_video_frame_unmap (&frame); - - self->minfo.size = GST_VIDEO_FRAME_SIZE (&mmf_frame); - for (guint i = 0; i < GST_VIDEO_FRAME_N_PLANES (&mmf_frame); i++) { - self->minfo.offseti = GST_VIDEO_FRAME_PLANE_OFFSET (&mmf_frame, i); - self->minfo.stridei = GST_VIDEO_FRAME_PLANE_STRIDE (&mmf_frame, i); - } - - gst_video_frame_unmap (&mmf_frame); - - return GST_FLOW_OK; -} - -static void -gst_win32_ipc_video_sink_mmf_free (void *user_data) -{ - GstBuffer *buffer = GST_BUFFER_CAST (user_data); - - GST_LOG ("Relese %" GST_PTR_FORMAT, buffer); - - gst_buffer_unref (buffer); -} - -static GstFlowReturn -gst_win32_ipc_video_sink_render (GstBaseSink * sink, GstBuffer * buf) -{ - GstWin32IpcVideoSink *self = GST_WIN32_IPC_VIDEO_SINK (sink); - GstClockTime pts; - GstClockTime now_qpc; - GstClockTime buf_pts; - GstClockTime buffer_clock = GST_CLOCK_TIME_NONE; - Win32IpcMmf *mmf; - GstWin32IpcMemory *mem; - - if (!self->prepared_buffer) { - GST_ERROR_OBJECT (self, "No prepared buffer"); + if (!gst_video_frame_map (&dst_frame, &self->info, 
prepared, GST_MAP_WRITE)) { + GST_ERROR_OBJECT (self, "Couldn't map fallback buffer"); + gst_video_frame_unmap (&src_frame); + gst_buffer_unref (prepared); return GST_FLOW_ERROR; } - mem = (GstWin32IpcMemory *) gst_buffer_peek_memory (self->prepared_buffer, 0); + auto copy_ret = gst_video_frame_copy (&dst_frame, &src_frame); + gst_video_frame_unmap (&dst_frame); + gst_video_frame_unmap (&src_frame); - g_assert (mem != nullptr); - g_assert (gst_is_win32_ipc_memory (GST_MEMORY_CAST (mem))); - - mmf = mem->mmf; - - pts = now_qpc = gst_util_get_timestamp (); - - buf_pts = GST_BUFFER_PTS (buf); - if (!GST_CLOCK_TIME_IS_VALID (buf_pts)) - buf_pts = GST_BUFFER_DTS (buf); - - if (GST_CLOCK_TIME_IS_VALID (buf_pts)) { - buffer_clock = gst_segment_to_running_time (&sink->segment, - GST_FORMAT_TIME, buf_pts) + - GST_ELEMENT_CAST (sink)->base_time + gst_base_sink_get_latency (sink); - } - - if (GST_CLOCK_TIME_IS_VALID (buffer_clock)) { - GstClock *clock = gst_element_get_clock (GST_ELEMENT_CAST (sink)); - gboolean is_qpc = TRUE; - - is_qpc = gst_win32_ipc_clock_is_qpc (clock); - if (!is_qpc) { - GstClockTime now_gst = gst_clock_get_time (clock); - GstClockTimeDiff converted = buffer_clock; - - GST_LOG_OBJECT (self, "Clock is not QPC"); - - converted -= now_gst; - converted += now_qpc; - - if (converted < 0) { - /* Shouldn't happen */ - GST_WARNING_OBJECT (self, "Negative buffer clock"); - pts = 0; - } else { - pts = converted; - } - } else { - GST_LOG_OBJECT (self, "Clock is QPC already"); - /* buffer clock is already QPC time */ - pts = buffer_clock; - } - gst_object_unref (clock); - } - - self->minfo.qpc = pts; - - if (!self->pipe) { - GST_ERROR_OBJECT (self, "Pipe server was not configured"); + if (!copy_ret) { + GST_ERROR_OBJECT (self, "Couldn't copy frame"); + gst_buffer_unref (prepared); return GST_FLOW_ERROR; } - /* win32_ipc_pipe_server_send_mmf() takes ownership of mmf */ - if (!win32_ipc_pipe_server_send_mmf (self->pipe, - win32_ipc_mmf_ref (mmf), &self->minfo, - 
g_steal_pointer (&self->prepared_buffer), - gst_win32_ipc_video_sink_mmf_free)) { - GST_ERROR_OBJECT (self, "Couldn't send buffer"); - return GST_FLOW_ERROR; - } + gst_buffer_copy_into (prepared, buf, GST_BUFFER_COPY_METADATA, 0, -1); + *uploaded = prepared; + *size = gst_buffer_get_size (prepared); return GST_FLOW_OK; }
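The refactored `upload` vfunc above either passes an upstream win32-ipc buffer through untouched or copies the frame into a fallback-pool buffer. That passthrough-or-copy decision can be sketched in plain C; `shm_buffer` and `upload_buffer` below are hypothetical illustration names, not GStreamer API:

```c
#include <stdlib.h>
#include <string.h>

/* Minimal sketch of the passthrough-or-copy upload path.
 * shm_buffer / upload_buffer are hypothetical names for illustration. */
typedef struct
{
  int is_shared;                /* already backed by shared memory? */
  size_t size;
  unsigned char *data;
} shm_buffer;

/* Returns the input unchanged when it already lives in shared memory,
 * otherwise copies it into a freshly allocated "shared" buffer. */
static shm_buffer *
upload_buffer (shm_buffer * in)
{
  shm_buffer *out;

  if (in->is_shared)
    return in;                  /* zero-copy passthrough */

  out = malloc (sizeof (shm_buffer));
  out->is_shared = 1;
  out->size = in->size;
  out->data = malloc (in->size);
  memcpy (out->data, in->data, in->size);
  return out;
}
```

The real element additionally tracks ownership with `gst_buffer_ref()` and copies timestamps with `gst_buffer_copy_into()`, but the branch structure is the same.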
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/win32ipc/gstwin32ipcvideosink.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/win32ipc/gstwin32ipcvideosink.h
Changed
@@ -20,13 +20,13 @@ #pragma once #include <gst/gst.h> -#include <gst/base/gstbasesink.h> #include <gst/video/video.h> +#include "gstwin32ipcbasesink.h" G_BEGIN_DECLS #define GST_TYPE_WIN32_IPC_VIDEO_SINK (gst_win32_ipc_video_sink_get_type()) G_DECLARE_FINAL_TYPE (GstWin32IpcVideoSink, gst_win32_ipc_video_sink, - GST, WIN32_IPC_VIDEO_SINK, GstBaseSink); + GST, WIN32_IPC_VIDEO_SINK, GstWin32IpcBaseSink); G_END_DECLS
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/win32ipc/gstwin32ipcvideosrc.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/win32ipc/gstwin32ipcvideosrc.cpp
Changed
@@ -38,9 +38,9 @@ #endif #include "gstwin32ipcvideosrc.h" -#include "gstwin32ipcutils.h" -#include "protocol/win32ipcpipeclient.h" +#include "gstwin32ipc.h" #include <string> +#include <mutex> GST_DEBUG_CATEGORY_STATIC (gst_win32_ipc_video_src_debug); #define GST_CAT_DEFAULT gst_win32_ipc_video_src_debug @@ -50,80 +50,26 @@ GST_PAD_ALWAYS, GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE (GST_VIDEO_FORMATS_ALL))); -enum -{ - PROP_0, - PROP_PIPE_NAME, - PROP_PROCESSING_DEADLINE, -}; - #define DEFAULT_PIPE_NAME "\\\\.\\pipe\\gst.win32.ipc.video" -#define DEFAULT_PROCESSING_DEADLINE (20 * GST_MSECOND) +#define DEFAULT_LEAKY_TYPE GST_WIN32_IPC_LEAKY_DOWNSTREAM struct _GstWin32IpcVideoSrc { - GstBaseSrc parent; - - GstVideoInfo info; - - Win32IpcPipeClient *pipe; - GstCaps *caps; - gboolean flushing; - SRWLOCK lock; - gboolean have_video_meta; - gsize offsetGST_VIDEO_MAX_PLANES; - gint strideGST_VIDEO_MAX_PLANES; - GstBufferPool *pool; - - /* properties */ - gchar *pipe_name; - GstClockTime processing_deadline; + GstWin32IpcBaseSrc parent; }; -static void gst_win32_ipc_video_src_finalize (GObject * object); -static void gst_win32_ipc_video_src_set_property (GObject * object, - guint prop_id, const GValue * value, GParamSpec * pspec); -static void gst_win32_video_src_get_property (GObject * object, guint prop_id, - GValue * value, GParamSpec * pspec); - -static GstClock *gst_win32_video_src_provide_clock (GstElement * elem); - -static gboolean gst_win32_ipc_video_src_start (GstBaseSrc * src); -static gboolean gst_win32_ipc_video_src_stop (GstBaseSrc * src); -static gboolean gst_win32_ipc_video_src_unlock (GstBaseSrc * src); -static gboolean gst_win32_ipc_video_src_unlock_stop (GstBaseSrc * src); -static gboolean gst_win32_ipc_video_src_query (GstBaseSrc * src, - GstQuery * query); -static gboolean gst_win32_ipc_video_src_decide_allocation (GstBaseSrc * src, - GstQuery * query); -static GstFlowReturn gst_win32_ipc_video_src_create (GstBaseSrc * src, - guint64 offset, guint size, 
GstBuffer ** buf); +static GstCaps *gst_win32_ipc_video_src_fixate (GstBaseSrc * src, + GstCaps * caps); #define gst_win32_ipc_video_src_parent_class parent_class -G_DEFINE_TYPE (GstWin32IpcVideoSrc, gst_win32_ipc_video_src, GST_TYPE_BASE_SRC); +G_DEFINE_TYPE (GstWin32IpcVideoSrc, + gst_win32_ipc_video_src, GST_TYPE_WIN32_IPC_BASE_SRC); static void gst_win32_ipc_video_src_class_init (GstWin32IpcVideoSrcClass * klass) { - GObjectClass *object_class = G_OBJECT_CLASS (klass); - GstElementClass *element_class = GST_ELEMENT_CLASS (klass); - GstBaseSrcClass *src_class = GST_BASE_SRC_CLASS (klass); - - object_class->finalize = gst_win32_ipc_video_src_finalize; - object_class->set_property = gst_win32_ipc_video_src_set_property; - object_class->get_property = gst_win32_video_src_get_property; - - g_object_class_install_property (object_class, PROP_PIPE_NAME, - g_param_spec_string ("pipe-name", "Pipe Name", - "The name of Win32 named pipe to communicate with server. " - "Validation of the pipe name is caller's responsibility", - DEFAULT_PIPE_NAME, (GParamFlags) (G_PARAM_READWRITE | - G_PARAM_STATIC_STRINGS | GST_PARAM_MUTABLE_READY))); - g_object_class_install_property (object_class, PROP_PROCESSING_DEADLINE, - g_param_spec_uint64 ("processing-deadline", "Processing deadline", - "Maximum processing time for a buffer in nanoseconds", 0, G_MAXUINT64, - DEFAULT_PROCESSING_DEADLINE, (GParamFlags) (G_PARAM_READWRITE | - G_PARAM_STATIC_STRINGS | GST_PARAM_MUTABLE_PLAYING))); + auto element_class = GST_ELEMENT_CLASS (klass); + auto src_class = GST_BASE_SRC_CLASS (klass); gst_element_class_set_static_metadata (element_class, "Win32 IPC Video Source", "Source/Video", @@ -131,18 +77,7 @@ "Seungha Yang <seungha@centricular.com>"); gst_element_class_add_static_pad_template (element_class, &src_template); - element_class->provide_clock = - GST_DEBUG_FUNCPTR (gst_win32_video_src_provide_clock); - - src_class->start = GST_DEBUG_FUNCPTR (gst_win32_ipc_video_src_start); - src_class->stop = 
GST_DEBUG_FUNCPTR (gst_win32_ipc_video_src_stop); - src_class->unlock = GST_DEBUG_FUNCPTR (gst_win32_ipc_video_src_unlock); - src_class->unlock_stop = - GST_DEBUG_FUNCPTR (gst_win32_ipc_video_src_unlock_stop); - src_class->query = GST_DEBUG_FUNCPTR (gst_win32_ipc_video_src_query); - src_class->decide_allocation = - GST_DEBUG_FUNCPTR (gst_win32_ipc_video_src_decide_allocation); - src_class->create = GST_DEBUG_FUNCPTR (gst_win32_ipc_video_src_create); + src_class->fixate = GST_DEBUG_FUNCPTR (gst_win32_ipc_video_src_fixate); GST_DEBUG_CATEGORY_INIT (gst_win32_ipc_video_src_debug, "win32ipcvideosrc", 0, "win32ipcvideosrc"); @@ -151,411 +86,24 @@ static void gst_win32_ipc_video_src_init (GstWin32IpcVideoSrc * self) { - gst_base_src_set_format (GST_BASE_SRC (self), GST_FORMAT_TIME); - gst_base_src_set_live (GST_BASE_SRC (self), TRUE); - self->pipe_name = g_strdup (DEFAULT_PIPE_NAME); - self->processing_deadline = DEFAULT_PROCESSING_DEADLINE; - - GST_OBJECT_FLAG_SET (self, GST_ELEMENT_FLAG_PROVIDE_CLOCK); - GST_OBJECT_FLAG_SET (self, GST_ELEMENT_FLAG_REQUIRE_CLOCK); -} - -static void -gst_win32_ipc_video_src_finalize (GObject * object) -{ - GstWin32IpcVideoSrc *self = GST_WIN32_IPC_VIDEO_SRC (object); - - g_free (self->pipe_name); - - G_OBJECT_CLASS (parent_class)->finalize (object); -} - -static void -gst_win32_ipc_video_src_set_property (GObject * object, guint prop_id, - const GValue * value, GParamSpec * pspec) -{ - GstWin32IpcVideoSrc *self = GST_WIN32_IPC_VIDEO_SRC (object); - - switch (prop_id) { - case PROP_PIPE_NAME: - GST_OBJECT_LOCK (self); - g_free (self->pipe_name); - self->pipe_name = g_value_dup_string (value); - if (!self->pipe_name) - self->pipe_name = g_strdup (DEFAULT_PIPE_NAME); - GST_OBJECT_UNLOCK (self); - break; - case PROP_PROCESSING_DEADLINE: - { - GstClockTime prev_val, new_val; - GST_OBJECT_LOCK (self); - prev_val = self->processing_deadline; - new_val = g_value_get_uint64 (value); - self->processing_deadline = new_val; - GST_OBJECT_UNLOCK 
(self); - - if (prev_val != new_val) { - GST_DEBUG_OBJECT (self, "Posting latency message"); - gst_element_post_message (GST_ELEMENT_CAST (self), - gst_message_new_latency (GST_OBJECT_CAST (self))); - } - break; - } - default: - G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); - break; - } -} - -static void -gst_win32_video_src_get_property (GObject * object, guint prop_id, - GValue * value, GParamSpec * pspec) -{ - GstWin32IpcVideoSrc *self = GST_WIN32_IPC_VIDEO_SRC (object); - - switch (prop_id) { - case PROP_PIPE_NAME: - GST_OBJECT_LOCK (self); - g_value_set_string (value, self->pipe_name); - GST_OBJECT_UNLOCK (self); - break; - case PROP_PROCESSING_DEADLINE: - GST_OBJECT_LOCK (self); - g_value_set_uint64 (value, self->processing_deadline); - GST_OBJECT_UNLOCK (self); - break; - default: - G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); - break; - } -} - -static GstClock * -gst_win32_video_src_provide_clock (GstElement * elem) -{ - return gst_system_clock_obtain (); -} - -static gboolean -gst_win32_ipc_video_src_start (GstBaseSrc * src) -{ - GstWin32IpcVideoSrc *self = GST_WIN32_IPC_VIDEO_SRC (src); - - GST_DEBUG_OBJECT (self, "Start"); - - gst_video_info_init (&self->info); - - return TRUE; -} - -static gboolean -gst_win32_ipc_video_src_stop (GstBaseSrc * src) -{ - GstWin32IpcVideoSrc *self = GST_WIN32_IPC_VIDEO_SRC (src); - - GST_DEBUG_OBJECT (self, "Stop"); - - if (self->pipe) { - win32_ipc_pipe_client_stop (self->pipe); - g_clear_pointer (&self->pipe, win32_ipc_pipe_client_unref); - } - - gst_clear_caps (&self->caps); - if (self->pool) { - gst_buffer_pool_set_active (self->pool, FALSE); - gst_clear_object (&self->pool); - } - - return TRUE; -} - -static gboolean -gst_win32_ipc_video_src_unlock (GstBaseSrc * src) -{ - GstWin32IpcVideoSrc *self = GST_WIN32_IPC_VIDEO_SRC (src); - - GST_DEBUG_OBJECT (self, "Unlock"); - - AcquireSRWLockExclusive (&self->lock); - self->flushing = TRUE; - if (self->pipe) - win32_ipc_pipe_client_set_flushing 
(self->pipe, TRUE); - ReleaseSRWLockExclusive (&self->lock); - - return TRUE; -} - -static gboolean -gst_win32_ipc_video_src_unlock_stop (GstBaseSrc * src) -{ - GstWin32IpcVideoSrc *self = GST_WIN32_IPC_VIDEO_SRC (src); - - GST_DEBUG_OBJECT (self, "Unlock stop"); - - AcquireSRWLockExclusive (&self->lock); - self->flushing = FALSE; - if (self->pipe) - win32_ipc_pipe_client_set_flushing (self->pipe, FALSE); - ReleaseSRWLockExclusive (&self->lock); - - return TRUE; -} - -static gboolean -gst_win32_ipc_video_src_query (GstBaseSrc * src, GstQuery * query) -{ - GstWin32IpcVideoSrc *self = GST_WIN32_IPC_VIDEO_SRC (src); - - switch (GST_QUERY_TYPE (query)) { - case GST_QUERY_LATENCY: - { - GST_OBJECT_LOCK (self); - if (GST_CLOCK_TIME_IS_VALID (self->processing_deadline)) { - gst_query_set_latency (query, TRUE, self->processing_deadline, - /* pipe server can hold up to 5 memory objects */ - 5 * self->processing_deadline); - } else { - gst_query_set_latency (query, TRUE, 0, 0); - } - GST_OBJECT_UNLOCK (self); - return TRUE; - } - default: - break; - } - - return GST_BASE_SRC_CLASS (parent_class)->query (src, query); -} - -static gboolean -gst_win32_ipc_video_src_decide_allocation (GstBaseSrc * src, GstQuery * query) -{ - GstWin32IpcVideoSrc *self = GST_WIN32_IPC_VIDEO_SRC (src); - gboolean ret; - - ret = GST_BASE_SRC_CLASS (parent_class)->decide_allocation (src, query); - if (!ret) - return ret; - - self->have_video_meta = gst_query_find_allocation_meta (query, - GST_VIDEO_META_API_TYPE, nullptr); - GST_DEBUG_OBJECT (self, "Downstream supports video meta: %d", - self->have_video_meta); - - return TRUE; + g_object_set (self, "pipe-name", DEFAULT_PIPE_NAME, + "leaky-type", DEFAULT_LEAKY_TYPE, nullptr); } static GstCaps * -gst_win32_ipc_video_src_update_info_and_get_caps (GstWin32IpcVideoSrc * self, - const Win32IpcVideoInfo * info) +gst_win32_ipc_video_src_fixate (GstBaseSrc * src, GstCaps * caps) { - GstVideoInfo vinfo; + /* We don't negotiate with server. 
In here, we do fixate resolution to + * 320 x 240 (same as default of videotestsrc) which makes a little more + * sense than 1x1 */ + caps = gst_caps_make_writable (caps); - gst_video_info_set_format (&vinfo, (GstVideoFormat) info->format, - info->width, info->height); - vinfo.fps_n = info->fps_n; - vinfo.fps_d = info->fps_d; - vinfo.par_n = info->par_n; - vinfo.par_d = info->par_d; + for (guint i = 0; i < gst_caps_get_size (caps); i++) { + GstStructure *s = gst_caps_get_structure (caps, i); - if (!self->caps || !gst_video_info_is_equal (&self->info, &vinfo)) { - self->info = vinfo; - return gst_video_info_to_caps (&vinfo); + gst_structure_fixate_field_nearest_int (s, "width", 320); + gst_structure_fixate_field_nearest_int (s, "height", 240); } - return nullptr; -} - -static gboolean -gst_win32_ipc_ensure_fallback_pool (GstWin32IpcVideoSrc * self) -{ - GstStructure *config; - - if (self->pool) - return TRUE; - - self->pool = gst_video_buffer_pool_new (); - config = gst_buffer_pool_get_config (self->pool); - gst_buffer_pool_config_set_params (config, self->caps, - GST_VIDEO_INFO_SIZE (&self->info), 0, 0); - if (!gst_buffer_pool_set_config (self->pool, config)) { - GST_ERROR_OBJECT (self, "Couldn't set config"); - goto error; - } - - if (!gst_buffer_pool_set_active (self->pool, TRUE)) { - GST_ERROR_OBJECT (self, "Couldn't set active"); - goto error; - } - - return TRUE; - -error: - gst_clear_object (&self->pool); - return FALSE; -} - -struct MmfReleaseData -{ - Win32IpcPipeClient *pipe; - Win32IpcMmf *mmf; -}; - -static void -gst_win32_ipc_video_src_release_mmf (MmfReleaseData * data) -{ - win32_ipc_pipe_client_release_mmf (data->pipe, data->mmf); - win32_ipc_pipe_client_unref (data->pipe); - delete data; -} - -static GstFlowReturn -gst_win32_ipc_video_src_create (GstBaseSrc * src, guint64 offset, guint size, - GstBuffer ** buf) -{ - GstWin32IpcVideoSrc *self = GST_WIN32_IPC_VIDEO_SRC (src); - GstCaps *caps; - Win32IpcMmf *mmf; - Win32IpcVideoInfo info; - 
GstFlowReturn ret = GST_FLOW_OK; - GstBuffer *buffer; - GstClock *clock; - GstClockTime pts; - GstClockTime base_time; - GstClockTime now_qpc; - GstClockTime now_gst; - gboolean is_qpc = TRUE; - gboolean need_video_meta = FALSE; - - AcquireSRWLockExclusive (&self->lock); - if (self->flushing) { - ReleaseSRWLockExclusive (&self->lock); - return GST_FLOW_FLUSHING; - } - - if (!self->pipe) { - self->pipe = win32_ipc_pipe_client_new (self->pipe_name); - if (!self->pipe) { - ReleaseSRWLockExclusive (&self->lock); - GST_ERROR_OBJECT (self, "Couldn't create pipe"); - return GST_FLOW_ERROR; - } - } - ReleaseSRWLockExclusive (&self->lock); - - if (!win32_ipc_pipe_client_get_mmf (self->pipe, &mmf, &info)) { - AcquireSRWLockExclusive (&self->lock); - if (self->flushing) { - ret = GST_FLOW_FLUSHING; - GST_DEBUG_OBJECT (self, "Flushing"); - } else { - ret = GST_FLOW_EOS; - GST_WARNING_OBJECT (self, "Couldn't get buffer from server"); - } - ReleaseSRWLockExclusive (&self->lock); - return ret; - } - - caps = gst_win32_ipc_video_src_update_info_and_get_caps (self, &info); - for (guint i = 0; i < GST_VIDEO_INFO_N_PLANES (&self->info); i++) { - self->offseti = (gsize) info.offseti; - self->stridei = (gint) info.stridei; - - if (self->offseti != self->info.offseti || - self->stridei != self->info.stridei) { - need_video_meta = TRUE; - } - } - - if (caps) { - if (self->pool) { - gst_buffer_pool_set_active (self->pool, FALSE); - gst_clear_object (&self->pool); - } - - gst_caps_replace (&self->caps, caps); - GST_DEBUG_OBJECT (self, "Setting caps %" GST_PTR_FORMAT, caps); - gst_pad_set_caps (GST_BASE_SRC_PAD (src), caps); - gst_caps_unref (caps); - } - - if (self->have_video_meta || !need_video_meta) { - MmfReleaseData *data = new MmfReleaseData (); - data->pipe = win32_ipc_pipe_client_ref (self->pipe); - data->mmf = mmf; - - buffer = gst_buffer_new_wrapped_full (GST_MEMORY_FLAG_READONLY, - win32_ipc_mmf_get_raw (mmf), win32_ipc_mmf_get_size (mmf), - 0, win32_ipc_mmf_get_size (mmf), 
data, - (GDestroyNotify) gst_win32_ipc_video_src_release_mmf); - - if (self->have_video_meta) { - gst_buffer_add_video_meta_full (buffer, - GST_VIDEO_FRAME_FLAG_NONE, GST_VIDEO_INFO_FORMAT (&self->info), - GST_VIDEO_INFO_WIDTH (&self->info), - GST_VIDEO_INFO_HEIGHT (&self->info), - GST_VIDEO_INFO_N_PLANES (&self->info), self->offset, self->stride); - } - } else { - GstVideoFrame mmf_frame, frame; - - if (!gst_win32_ipc_ensure_fallback_pool (self)) { - win32_ipc_mmf_unref (mmf); - return GST_FLOW_ERROR; - } - - ret = gst_buffer_pool_acquire_buffer (self->pool, &buffer, nullptr); - if (ret != GST_FLOW_OK) { - GST_ERROR_OBJECT (self, "Couldn't acquire buffer"); - win32_ipc_mmf_unref (mmf); - return GST_FLOW_ERROR; - } - - gst_video_frame_map (&frame, &self->info, buffer, GST_MAP_WRITE); - mmf_frame.info = self->info; - - for (guint i = 0; i < GST_VIDEO_FRAME_N_PLANES (&frame); i++) { - mmf_frame.info.offseti = self->offseti; - mmf_frame.info.stridei = self->stridei; - mmf_frame.datai = (guint8 *) win32_ipc_mmf_get_raw (mmf) + - self->offseti; - } - - gst_video_frame_copy (&frame, &mmf_frame); - gst_video_frame_unmap (&frame); - win32_ipc_mmf_unref (mmf); - } - - now_qpc = gst_util_get_timestamp (); - clock = gst_element_get_clock (GST_ELEMENT_CAST (self)); - now_gst = gst_clock_get_time (clock); - base_time = GST_ELEMENT_CAST (self)->base_time; - - is_qpc = gst_win32_ipc_clock_is_qpc (clock); - gst_object_unref (clock); - - if (!is_qpc) { - GstClockTimeDiff now_pts = now_gst - base_time + info.qpc - now_qpc; - - if (now_pts >= 0) - pts = now_pts; - else - pts = 0; - } else { - if (info.qpc >= base_time) { - /* Our base_time is also QPC */ - pts = info.qpc - base_time; - } else { - GST_WARNING_OBJECT (self, "Server QPC is smaller than our QPC base time"); - pts = 0; - } - } - - GST_BUFFER_PTS (buffer) = pts; - GST_BUFFER_DTS (buffer) = GST_CLOCK_TIME_NONE; - GST_BUFFER_DURATION (buffer) = GST_CLOCK_TIME_NONE; - - *buf = buffer; - - return GST_FLOW_OK; + return 
gst_caps_fixate (caps); }
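The new `fixate` vfunc relies on `gst_structure_fixate_field_nearest_int()` to pin an open width/height range to the value closest to 320x240. For a simple integer range, "fixate to nearest" reduces to a clamp; `fixate_nearest_int` below is a hypothetical helper sketching that behavior, not the GStreamer function (which also handles value lists and steps):

```c
/* Sketch of "fixate to nearest int": pick the value inside the allowed
 * [min, max] range that is closest to the preferred target. */
static int
fixate_nearest_int (int min, int max, int target)
{
  if (target < min)
    return min;
  if (target > max)
    return max;
  return target;
}
```

With the usual unfixed caps range of `[1, 2147483647]`, this yields 320 (or 240) directly; a narrower negotiated range yields its closest bound.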
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/win32ipc/gstwin32ipcvideosrc.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/win32ipc/gstwin32ipcvideosrc.h
Changed
@@ -20,13 +20,13 @@ #pragma once #include <gst/gst.h> -#include <gst/base/gstbasesrc.h> #include <gst/video/video.h> +#include "gstwin32ipcbasesrc.h" G_BEGIN_DECLS #define GST_TYPE_WIN32_IPC_VIDEO_SRC (gst_win32_ipc_video_src_get_type()) G_DECLARE_FINAL_TYPE (GstWin32IpcVideoSrc, gst_win32_ipc_video_src, - GST, WIN32_IPC_VIDEO_SRC, GstBaseSrc); + GST, WIN32_IPC_VIDEO_SRC, GstWin32IpcBaseSrc); G_END_DECLS
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/win32ipc/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/win32ipc/meson.build
Changed
@@ -1,23 +1,28 @@ win32ipc_sources = - 'protocol/win32ipcmmf.cpp', - 'protocol/win32ipcpipeclient.cpp', - 'protocol/win32ipcpipeserver.cpp', - 'protocol/win32ipcprotocol.cpp', - 'protocol/win32ipcutils.cpp', + 'gstwin32ipc.cpp', + 'gstwin32ipcmmf.cpp', + 'gstwin32ipcclient.cpp', + 'gstwin32ipcserver.cpp', + 'gstwin32ipcprotocol.cpp', 'gstwin32ipcbufferpool.cpp', 'gstwin32ipcmemory.cpp', - 'gstwin32ipcutils.cpp', 'gstwin32ipcvideosink.cpp', 'gstwin32ipcvideosrc.cpp', + 'gstwin32ipcbasesink.cpp', + 'gstwin32ipcbasesrc.cpp', + 'gstwin32ipcsink.cpp', + 'gstwin32ipcsrc.cpp', 'plugin.cpp', win32ipc_headers = + 'gstwin32ipc.h', 'gstwin32ipcvideosink.h', 'gstwin32ipcmemory.h', 'gstwin32ipcvideosrc.h', 'gstwin32ipcbufferpool.h', - 'gstwin32ipcutils.h', + 'gstwin32ipcsrc.h', + 'gstwin32ipcsink.h', doc_sources =
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/win32ipc/plugin.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/win32ipc/plugin.cpp
Changed
@@ -32,6 +32,8 @@ #include <gst/gst.h> #include "gstwin32ipcvideosink.h" #include "gstwin32ipcvideosrc.h" +#include "gstwin32ipcsink.h" +#include "gstwin32ipcsrc.h" GST_DEBUG_CATEGORY (gst_win32_ipc_debug); #define GST_CAT_DEFAULT gst_win32_ipc_debug @@ -45,6 +47,10 @@ "win32ipcvideosink", GST_RANK_NONE, GST_TYPE_WIN32_IPC_VIDEO_SINK); gst_element_register (plugin, "win32ipcvideosrc", GST_RANK_NONE, GST_TYPE_WIN32_IPC_VIDEO_SRC); + gst_element_register (plugin, + "win32ipcsink", GST_RANK_NONE, GST_TYPE_WIN32_IPC_SINK); + gst_element_register (plugin, + "win32ipcsrc", GST_RANK_NONE, GST_TYPE_WIN32_IPC_SRC); return TRUE; }
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/winks/gstksclock.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/winks/gstksclock.c
Changed
@@ -65,6 +65,8 @@ gobject_class->dispose = gst_ks_clock_dispose; gobject_class->finalize = gst_ks_clock_finalize; + + gst_ks_debug_init (); } static void
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/winks/gstksvideodevice.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/winks/gstksvideodevice.c
Changed
@@ -131,6 +131,8 @@ g_param_spec_string ("device-path", "Device Path", "The device path", DEFAULT_DEVICE_PATH, G_PARAM_READWRITE | G_PARAM_CONSTRUCT_ONLY | G_PARAM_STATIC_STRINGS)); + + gst_ks_debug_init (); } static void
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/winks/gstksvideosrc.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/winks/gstksvideosrc.c
Changed
@@ -221,6 +221,8 @@ g_param_spec_boolean ("enable-quirks", "Enable quirks", "Enable driver-specific quirks", DEFAULT_ENABLE_QUIRKS, G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS)); + + gst_ks_debug_init (); } static void @@ -1028,8 +1030,7 @@ static gboolean plugin_init (GstPlugin * plugin) { - GST_DEBUG_CATEGORY_INIT (gst_ks_debug, "ksvideosrc", - 0, "Kernel streaming video source"); + gst_ks_debug_init (); if (!gst_element_register (plugin, "ksvideosrc", GST_RANK_PRIMARY, GST_TYPE_KS_VIDEO_SRC))
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/winks/ksdeviceprovider.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/winks/ksdeviceprovider.c
Changed
@@ -66,6 +66,8 @@ "KernelStreaming Device Provider", "Sink/Source/Audio/Video", "List and provide KernelStreaming source and sink devices", "Руслан Ижбулатов <lrn1986@gmail.com>"); + + gst_ks_debug_init (); } static void @@ -658,6 +660,8 @@ g_param_spec_string ("path", "System device path", "The system path to the device", "", G_PARAM_STATIC_STRINGS | G_PARAM_READWRITE | G_PARAM_CONSTRUCT_ONLY)); + + gst_ks_debug_init (); } static void
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/winks/ksvideohelpers.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/winks/ksvideohelpers.c
Changed
@@ -748,3 +748,14 @@ return caps; } + +void +gst_ks_debug_init (void) +{ + static gsize res = 0; + if (g_once_init_enter (&res)) { + GST_DEBUG_CATEGORY_INIT (gst_ks_debug, "ksvideosrc", + 0, "Kernel streaming video source"); + g_once_init_leave (&res, 1); + } +}
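The `gst_ks_debug_init()` helper added above uses GLib's `g_once_init_enter()`/`g_once_init_leave()` pair so the debug category is registered exactly once, no matter how many of the `class_init` functions race to call it. The same once-only idiom can be sketched with POSIX `pthread_once` (a stand-in counter replaces `GST_DEBUG_CATEGORY_INIT`):

```c
#include <pthread.h>

/* Once-only init idiom, analogous to g_once_init_enter/leave:
 * however many callers race, init_debug runs exactly once. */
static int init_count = 0;
static pthread_once_t once_ctrl = PTHREAD_ONCE_INIT;

static void
init_debug (void)
{
  init_count++;                 /* stands in for GST_DEBUG_CATEGORY_INIT */
}

static void
ks_debug_init (void)
{
  pthread_once (&once_ctrl, init_debug);
}
```

This is why the diff can safely sprinkle `gst_ks_debug_init ()` into every `class_init` and into `plugin_init`: redundant calls are free after the first.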
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/sys/winks/ksvideohelpers.h -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/sys/winks/ksvideohelpers.h
Changed
@@ -78,6 +78,8 @@ GstCaps * ks_video_get_all_caps (void); +void gst_ks_debug_init (void); + G_END_DECLS #endif /* __KSVIDEOHELPERS_H__ */
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/elements/adaptive_demux_common.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/elements/adaptive_demux_common.c
Changed
@@ -423,8 +423,7 @@ gst_message_parse_state_changed (msg, &old_state, &new_state, NULL); GST_DEBUG ("Element %s changed state from %s to %s", GST_OBJECT_NAME (msg->src), - gst_element_state_get_name (old_state), - gst_element_state_get_name (new_state)); + gst_state_get_name (old_state), gst_state_get_name (new_state)); if (strstr (srcName, "srcbin") == srcName && old_state == GST_STATE_PLAYING && new_state == GST_STATE_PAUSED) {
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/elements/audiovisualizer.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/elements/audiovisualizer.c
Changed
@@ -74,6 +74,7 @@ gst_element_set_state (pipeline, GST_STATE_NULL); g_main_loop_unref (loop); + gst_bus_remove_signal_watch (bus); gst_object_unref (bus); gst_object_unref (pipeline); }
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/elements/avtpaafdepay.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/elements/avtpaafdepay.c
Changed
@@ -60,6 +60,8 @@ memcpy (pdu->avtp_payload, audio_data, sizeof (audio_data)); gst_buffer_unmap (buf, &info); + GST_BUFFER_DTS (buf) = GST_SECOND; + return buf; } @@ -207,6 +209,7 @@ const GstSegment *segment; h = setup_harness (); + buf = create_input_buffer (h); gst_harness_push (h, buf); @@ -231,9 +234,9 @@ fail_unless (GST_EVENT_TYPE (event) == GST_EVENT_SEGMENT); gst_event_parse_segment (event, &segment); fail_unless (segment->format == GST_FORMAT_TIME); - fail_unless (segment->base == 3000); - fail_unless (segment->start == 3000); - fail_unless (segment->stop == -1); + fail_unless_equals_uint64 (segment->base, 0); + fail_unless_equals_uint64 (segment->start, 0); + fail_unless_equals_uint64 (segment->stop, GST_CLOCK_TIME_NONE); gst_event_unref (event); gst_harness_teardown (h); @@ -257,6 +260,8 @@ fail_unless (memcmp (info.data, audio_data, info.size) == 0); gst_buffer_unmap (out, &info); + fail_unless_equals_uint64 (GST_BUFFER_PTS (out), 3000); + gst_buffer_unref (out); gst_harness_teardown (h); }
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/elements/avtpcrfcheck.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/elements/avtpcrfcheck.c
Changed
@@ -117,7 +117,7 @@ pdu = (struct avtp_stream_pdu *) info.data; r = avtp_pdu_get ((struct avtp_common_pdu *) pdu, AVTP_FIELD_SUBTYPE, &type); - g_assert (r == 0); + fail_unless_equals_int (r, 0); if (type == AVTP_SUBTYPE_AAF) avtp_aaf_pdu_set (pdu, AVTP_AAF_FIELD_TIMESTAMP, avtp_tstamp); else if (type == AVTP_SUBTYPE_CVF) {
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/elements/avtpcrfsync.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/elements/avtpcrfsync.c
Changed
@@ -126,7 +126,7 @@ GST_BUFFER_DTS (buf) = orig->buf_dts; r = avtp_pdu_get ((struct avtp_common_pdu *) pdu, AVTP_FIELD_SUBTYPE, &type); - g_assert (r == 0); + fail_unless_equals_int (r, 0); if (type == AVTP_SUBTYPE_AAF) avtp_aaf_pdu_set (pdu, AVTP_AAF_FIELD_TIMESTAMP, orig->avtp_ts); else if (type == AVTP_SUBTYPE_CVF) {
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/elements/avtpcvfdepay.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/elements/avtpcvfdepay.c
Changed
@@ -111,6 +111,7 @@ /* Create the input AVTPDU */ in = gst_harness_create_buffer (h, AVTP_CVF_H264_HEADER_SIZE + 10); + GST_BUFFER_DTS (in) = GST_SECOND; gst_buffer_map (in, &map, GST_MAP_READWRITE); pdu = (struct avtp_stream_pdu *) map.data; @@ -175,6 +176,7 @@ /* Create the input AVTPDU */ in = gst_harness_create_buffer (h, AVTP_CVF_H264_HEADER_SIZE + 10); + GST_BUFFER_DTS (in) = GST_SECOND; gst_buffer_map (in, &map, GST_MAP_READWRITE); pdu = (struct avtp_stream_pdu *) map.data; @@ -240,6 +242,7 @@ /* Create the input AVTPDU header */ in = gst_harness_create_buffer (h, AVTP_CVF_H264_HEADER_SIZE + 4); + GST_BUFFER_DTS (in) = GST_SECOND; gst_buffer_map (in, &map, GST_MAP_READWRITE); pdu = (struct avtp_stream_pdu *) map.data; @@ -290,6 +293,7 @@ /* Create the input AVTPDU */ in = gst_harness_create_buffer (h, AVTP_CVF_H264_HEADER_SIZE + 10); + GST_BUFFER_DTS (in) = GST_SECOND; gst_buffer_map (in, &map, GST_MAP_READWRITE); pdu = (struct avtp_stream_pdu *) map.data; @@ -346,6 +350,7 @@ /* Create the input AVTPDU header */ in = gst_harness_create_buffer (h, AVTP_CVF_H264_HEADER_SIZE + 4); + GST_BUFFER_DTS (in) = GST_SECOND; gst_buffer_map (in, &map, GST_MAP_READWRITE); pdu = (struct avtp_stream_pdu *) map.data; @@ -382,6 +387,7 @@ /* Create the input AVTPDU header */ in = gst_harness_create_buffer (h, AVTP_CVF_H264_HEADER_SIZE + 4); + GST_BUFFER_DTS (in) = GST_SECOND; gst_buffer_map (in, &map, GST_MAP_READWRITE); pdu = (struct avtp_stream_pdu *) map.data; @@ -526,11 +532,13 @@ /* Invalid buffer size (too small to fit an AVTP header) */ small = gst_harness_create_buffer (h, AVTP_CVF_H264_HEADER_SIZE / 2); + GST_BUFFER_DTS (in) = GST_SECOND; gst_harness_push (h, small); fail_unless_equals_uint64 (gst_harness_buffers_received (h), 0); /* Invalid buffer size (too small to fit a fragment header) */ small = gst_harness_create_buffer (h, AVTP_CVF_H264_HEADER_SIZE + 1); + GST_BUFFER_DTS (in) = GST_SECOND; gst_buffer_map (small, &map, GST_MAP_READWRITE); pdu = (struct avtp_stream_pdu *) map.data; avtp_cvf_pdu_init (pdu, AVTP_CVF_FORMAT_SUBTYPE_H264); @@ -571,6 +579,7 @@ /* Create the input AVTPDU */ in = gst_harness_create_buffer (h, AVTP_CVF_H264_HEADER_SIZE + 10); + GST_BUFFER_DTS (in) = GST_SECOND; gst_buffer_map (in, &map, GST_MAP_READWRITE); pdu = (struct avtp_stream_pdu *) map.data; @@ -703,6 +712,7 @@ /* Create the input AVTPDU header */ in = gst_harness_create_buffer (h, AVTP_CVF_H264_HEADER_SIZE + 4); + GST_BUFFER_DTS (in) = GST_SECOND; gst_buffer_map (in, &map, GST_MAP_READWRITE); pdu = (struct avtp_stream_pdu *) map.data; @@ -798,6 +808,7 @@ /* Create the input AVTPDU header */ in = gst_harness_create_buffer (h, AVTP_CVF_H264_HEADER_SIZE + 4); + GST_BUFFER_DTS (in) = GST_SECOND; gst_buffer_map (in, &map, GST_MAP_READWRITE); pdu = (struct avtp_stream_pdu *) map.data; @@ -871,6 +882,7 @@ /* Create the input AVTPDU header */ in = gst_harness_create_buffer (h, AVTP_CVF_H264_HEADER_SIZE + 4); + GST_BUFFER_DTS (in) = GST_SECOND; gst_buffer_map (in, &map, GST_MAP_READWRITE); pdu = (struct avtp_stream_pdu *) map.data; @@ -959,6 +971,7 @@ /* Create the input AVTPDU header */ in = gst_harness_create_buffer (h, AVTP_CVF_H264_HEADER_SIZE + 4); + GST_BUFFER_DTS (in) = GST_SECOND; gst_buffer_map (in, &map, GST_MAP_READWRITE); pdu = (struct avtp_stream_pdu *) map.data; @@ -1095,6 +1108,7 @@ /* Create the input AVTPDU header */ in = gst_harness_create_buffer (h, AVTP_CVF_H264_HEADER_SIZE + 4); + GST_BUFFER_DTS (in) = GST_SECOND; gst_buffer_map (in, &map, GST_MAP_READWRITE); pdu = (struct avtp_stream_pdu *) map.data; @@ -1182,6 +1196,7 @@ /* Create the input AVTPDU */ in = gst_harness_create_buffer (h, AVTP_CVF_H264_HEADER_SIZE + 10); + GST_BUFFER_DTS (in) = GST_SECOND; gst_buffer_map (in, &map, GST_MAP_READWRITE); pdu = (struct avtp_stream_pdu *) map.data; @@ -1265,6 +1280,7 @@ /* Create the input AVTPDU */ in = gst_harness_create_buffer (h, AVTP_CVF_H264_HEADER_SIZE + DATA_LEN); + GST_BUFFER_DTS (in) = GST_SECOND; gst_buffer_map (in, &map, GST_MAP_READWRITE); pdu = (struct avtp_stream_pdu *) map.data; @@ -1350,6 +1366,7 @@ /* Create the input AVTPDU header */ in = gst_harness_create_buffer (h, AVTP_CVF_H264_HEADER_SIZE + 4); + GST_BUFFER_DTS (in) = GST_SECOND; gst_buffer_map (in, &map, GST_MAP_READWRITE); pdu = (struct avtp_stream_pdu *) map.data; @@ -1438,6 +1455,7 @@ /* Create the input AVTPDU header */ in = gst_harness_create_buffer (h, AVTP_CVF_H264_HEADER_SIZE + 4); + GST_BUFFER_DTS (in) = GST_SECOND; gst_buffer_map (in, &map, GST_MAP_READWRITE); pdu = (struct avtp_stream_pdu *) map.data;
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/elements/camerabin.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/elements/camerabin.c
Changed
@@ -866,23 +866,23 @@ if (width != 0 && height != 0) { g_signal_emit_by_name (playbin, "get-video-pad", 0, &pad, NULL); - g_assert (pad != NULL); + g_assert_nonnull (pad); caps = gst_pad_get_current_caps (pad); - g_assert (gst_structure_get_int (gst_caps_get_structure (caps, 0), + g_assert_true (gst_structure_get_int (gst_caps_get_structure (caps, 0), "width", &caps_width)); - g_assert (gst_structure_get_int (gst_caps_get_structure (caps, 0), + g_assert_true (gst_structure_get_int (gst_caps_get_structure (caps, 0), "height", &caps_height)); - g_assert (width == caps_width); - g_assert (height == caps_height); + fail_unless_equals_int (width, caps_width); + fail_unless_equals_int (height, caps_height); gst_caps_unref (caps); gst_object_unref (pad); } if (has_audio) { g_signal_emit_by_name (playbin, "get-audio-pad", 0, &pad, NULL); - g_assert (pad != NULL); + g_assert_nonnull (pad); gst_object_unref (pad); } @@ -1470,6 +1470,10 @@ g_object_set (camera, "video-profile", profile, NULL); gst_encoding_profile_unref (profile); + + caps = gst_caps_from_string ("video/x-raw, format=(string)Y444"); + g_object_set (camera, "video-capture-caps", caps, NULL); + gst_caps_unref (caps); } if (gst_element_set_state (GST_ELEMENT (camera), GST_STATE_PLAYING) == @@ -1532,23 +1536,23 @@ gst_object_unref (camera); camera = NULL; } - g_assert (camera != NULL); + g_assert_nonnull (camera); expectedcaps = gst_caps_from_string (VIDEO_PAD_SUPPORTED_CAPS); g_object_get (G_OBJECT (camera), "video-capture-supported-caps", &padcaps, NULL); - g_assert (expectedcaps != NULL); - g_assert (padcaps != NULL); - g_assert (gst_caps_is_equal (padcaps, expectedcaps)); + g_assert_nonnull (expectedcaps); + g_assert_nonnull (padcaps); + g_assert_true (gst_caps_is_equal (padcaps, expectedcaps)); gst_caps_unref (expectedcaps); gst_caps_unref (padcaps); expectedcaps = gst_caps_from_string (IMAGE_PAD_SUPPORTED_CAPS); g_object_get (G_OBJECT (camera), "image-capture-supported-caps", &padcaps, NULL); - g_assert (expectedcaps != NULL); - g_assert (padcaps != NULL); - g_assert (gst_caps_is_equal (padcaps, expectedcaps)); + g_assert_nonnull (expectedcaps); + g_assert_nonnull (padcaps); + g_assert_true (gst_caps_is_equal (padcaps, expectedcaps)); gst_caps_unref (expectedcaps); gst_caps_unref (padcaps);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/elements/cccombiner.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/elements/cccombiner.c
Changed
@@ -155,7 +155,7 @@ const guint8 cc_data3 = { 0xfc, 0x20, 0x20 }; GstElement *element = gst_element_factory_make ("cccombiner", NULL); - g_assert (element != NULL); + g_assert_nonnull (element); /* these must be set before it changes the state */ g_object_set (element, "schedule", FALSE, "output-padding", FALSE, NULL);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/elements/fdkaac.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/elements/fdkaac.c
Changed
@@ -76,7 +76,7 @@ if (val != NULL) { buf = gst_value_get_buffer (val); gst_buffer_map (buf, &map, GST_MAP_READ); - g_assert (map.size <= 16); + g_assert_cmpuint (map.size, <=, 16); memcpy (aac_sample.codec_data, map.data, map.size); aac_sample.codec_data_len = map.size; gst_buffer_unmap (buf, &map); @@ -86,7 +86,7 @@ buf = gst_sample_get_buffer (sample); gst_buffer_map (buf, &map, GST_MAP_READ); - g_assert (map.size >= sizeof (aac_sample.buf_hdr)); + g_assert_cmpuint (map.size, >=, sizeof (aac_sample.buf_hdr)); memcpy (aac_sample.buf_hdr, map.data, sizeof (aac_sample.buf_hdr)); gst_buffer_unmap (buf, &map); }
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/elements/h264parse.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/elements/h264parse.c
Changed
@@ -1593,6 +1593,110 @@ GST_END_TEST; +GST_START_TEST (test_packetized_avc_drop_corrupt) +{ + GstBuffer *cdata; + GstCaps *in_caps, *out_caps; + GstHarness *h = gst_harness_new ("h264parse"); + GstBuffer *buf, *bufout; + GstMapInfo mapout; + + in_caps = gst_caps_from_string (stream_type_to_caps_str (PACKETIZED_AU)); + cdata = + gst_buffer_new_memdup (h264_avc_codec_data, sizeof (h264_avc_codec_data)); + gst_caps_set_simple (in_caps, "codec_data", GST_TYPE_BUFFER, cdata, + "stream-format", G_TYPE_STRING, "avc", NULL); + gst_buffer_unref (cdata); + out_caps = gst_caps_from_string (stream_type_to_caps_str (PACKETIZED_AU)); + + gst_harness_set_caps (h, in_caps, out_caps); + + /* avc idr frame nal */ + static guint8 *h264_idr_avc; + + /* make avc idr frame NAL */ + h264_idr_avc = g_malloc (sizeof (h264_idrframe)); + GST_WRITE_UINT32_BE (h264_idr_avc, sizeof (h264_idrframe) - 4); + memcpy (h264_idr_avc + 4, h264_idrframe + 4, sizeof (h264_idrframe) - 4); + + static guint8 h264_garbage_avc = { + 0x00, 0x00, 0x00, 0x00, 0x05 + }; + + /* Send all => drop garbage end but keep correct frame. */ + buf = composite_buffer (100, 0, 2, h264_idr_avc, sizeof (h264_idrframe), + h264_garbage_avc, sizeof (h264_garbage_avc)); + fail_unless_equals_int (gst_harness_push (h, buf), GST_FLOW_OK); + + /* Send 3 IDR frames => all should be kept. */ + buf = composite_buffer (200, 0, 3, h264_idr_avc, sizeof (h264_idrframe), + h264_idr_avc, sizeof (h264_idrframe), h264_idr_avc, + sizeof (h264_idrframe)); + fail_unless_equals_int (gst_harness_push (h, buf), GST_FLOW_OK); + + /* Send 2 IDR and one garbage => keep the first two and drop garabage. */ + buf = composite_buffer (300, 0, 3, h264_idr_avc, sizeof (h264_idrframe), + h264_idr_avc, sizeof (h264_idrframe), h264_garbage_avc, + sizeof (h264_garbage_avc)); + fail_unless_equals_int (gst_harness_push (h, buf), GST_FLOW_OK); + + /* Only send part of correct frame => drop everything */ + buf = wrap_buffer (h264_idr_avc, sizeof (h264_idrframe) - 10, 400, 0); + fail_unless_equals_int (gst_harness_push (h, buf), GST_FLOW_OK); + + /* Send garbage frame => drop everything */ + buf = wrap_buffer (h264_garbage_avc, sizeof (h264_garbage_avc), 500, 0); + fail_unless_equals_int (gst_harness_push (h, buf), GST_FLOW_OK); + + /* EOS for pending buffers to be drained if any */ + gst_harness_push_event (h, gst_event_new_eos ()); + + fail_unless_equals_int (gst_harness_buffers_received (h), 3); + + /* Verify IDR + garbage. */ + bufout = gst_harness_pull (h); + fail_unless (bufout != NULL); + + gsize sps_pps_sz = sizeof (h264_sps) + sizeof (h264_pps); + gst_buffer_map (bufout, &mapout, GST_MAP_READ); + fail_unless_equals_int (mapout.size, sps_pps_sz + sizeof (h264_idrframe)); + fail_unless (memcmp (mapout.data + sps_pps_sz, + h264_idr_avc, sizeof (h264_idrframe)) == 0); + gst_buffer_unmap (bufout, &mapout); + gst_buffer_unref (bufout); + + /* Verify 3 * IDR. */ + bufout = gst_harness_pull (h); + fail_unless (bufout != NULL); + + gst_buffer_map (bufout, &mapout, GST_MAP_READ); + fail_unless_equals_int (mapout.size, 3 * sizeof (h264_idrframe)); + fail_unless (memcmp (mapout.data, h264_idr_avc, sizeof (h264_idrframe)) == 0); + fail_unless (memcmp (mapout.data + sizeof (h264_idrframe), h264_idr_avc, + sizeof (h264_idrframe)) == 0); + fail_unless (memcmp (mapout.data + 2 * sizeof (h264_idrframe), h264_idr_avc, + sizeof (h264_idrframe)) == 0); + gst_buffer_unmap (bufout, &mapout); + gst_buffer_unref (bufout); + + /* Verify 2 * IDR + garbage. */ + bufout = gst_harness_pull (h); + fail_unless (bufout != NULL); + + gst_buffer_map (bufout, &mapout, GST_MAP_READ); + fail_unless_equals_int (mapout.size, 2 * sizeof (h264_idrframe)); + fail_unless (memcmp (mapout.data, h264_idr_avc, sizeof (h264_idrframe)) == 0); + fail_unless (memcmp (mapout.data + sizeof (h264_idrframe), h264_idr_avc, + sizeof (h264_idrframe)) == 0); + gst_buffer_unmap (bufout, &mapout); + gst_buffer_unref (bufout); + + gst_harness_teardown (h); +} + +GST_END_TEST; + + /* * TODO: * - Both push- and pull-modes need to be tested @@ -1707,6 +1811,7 @@ tcase_add_test (tc_chain, test_parse_aud_insert); tcase_add_test (tc_chain, test_parse_sei_userdefinedunregistered); tcase_add_test (tc_chain, test_parse_to_avc3_without_sps); + tcase_add_test (tc_chain, test_packetized_avc_drop_corrupt); nf += gst_check_run_suite (s, "h264parse", __FILE__); }
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/elements/h264timestamper.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/elements/h264timestamper.c
Changed
@@ -57,7 +57,7 @@ GstMapInfo map_info; gsize offset = 0; - g_assert (gst_buffer_map (buffer, &map_info, GST_MAP_WRITE)); + g_assert_true (gst_buffer_map (buffer, &map_info, GST_MAP_WRITE)); memcpy (&map_info.dataoffset, h264_sps, G_N_ELEMENTS (h264_sps)); offset += G_N_ELEMENTS (h264_sps); memcpy (&map_info.dataoffset, h264_pps, G_N_ELEMENTS (h264_pps));
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/elements/h265parse.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/elements/h265parse.c
Changed
@@ -51,6 +51,18 @@ * */ +static const guint8 h265_hvcc_codec_data = { + 0x01, 0x04, 0x08, 0x00, 0x00, 0x00, 0x98, 0x08, 0x00, 0x00, 0x00, 0x00, 0x3f, + 0xf0, 0x00, 0xfc, 0xff, 0xfc, 0xfc, 0x00, 0x00, 0x0f, 0x03, 0x20, 0x00, 0x01, + 0x00, 0x17, 0x40, 0x01, 0x0c, 0x01, 0xff, 0xff, 0x04, 0x08, 0x00, 0x00, 0x03, + 0x00, 0x98, 0x08, 0x00, 0x00, 0x03, 0x00, 0x00, 0x3f, 0x95, 0x98, 0x09, 0x21, + 0x00, 0x01, 0x00, 0x2f, 0x42, 0x01, 0x01, 0x04, 0x08, 0x00, 0x00, 0x03, 0x00, + 0x98, 0x08, 0x00, 0x00, 0x03, 0x00, 0x00, 0x3f, 0x90, 0x11, 0x08, 0x8a, 0x52, + 0xca, 0xcd, 0x57, 0x95, 0xff, 0xe0, 0x00, 0x20, 0x00, 0x2d, 0x41, 0x81, 0x81, + 0x81, 0x00, 0x00, 0x03, 0x00, 0x01, 0x00, 0x00, 0x03, 0x00, 0x1e, 0x08, 0x22, + 0x00, 0x01, 0x00, 0x06, 0x44, 0x01, 0xc1, 0x73, 0xd0, 0x89 +}; + static const guint8 h265_vps = { 0x00, 0x00, 0x00, 0x01, 0x40, 0x01, 0x0c, 0x01, 0xff, 0xff, 0x01, 0x60, 0x00, 0x00, 0x03, 0x00, 0x90, 0x00, 0x00, 0x03, 0x00, 0x00, 0x03, 0x00, 0x3f, 0x95, @@ -137,6 +149,10 @@ 0xb4, 0x22, 0x40 }; +static const guint8 h265_aud = { + 0x00, 0x00, 0x00, 0x01, 0x46, 0x01, 0x50 +}; + static const guint8 h265_128x128_slice_idr_n_lp = { 0x00, 0x00, 0x00, 0x01, 0x28, 0x01, 0xaf, 0x0e, 0xe0, 0x34, 0x82, 0x15, 0x84, 0xf4, 0x70, 0x4f, @@ -145,6 +161,15 @@ 0xef, 0x4f, 0xe1, 0xa3, 0xd4, 0x00, 0x02, 0xc2 }; +static const guint8 h265_128x128_slice_idr_n_lp_with_aud = { + 0x00, 0x00, 0x00, 0x01, 0x46, 0x01, 0x50, + 0x00, 0x00, 0x00, 0x01, 0x28, 0x01, 0xaf, 0x0e, + 0xe0, 0x34, 0x82, 0x15, 0x84, 0xf4, 0x70, 0x4f, + 0xff, 0xed, 0x41, 0x3f, 0xff, 0xe4, 0xcd, 0xc4, + 0x7c, 0x03, 0x0c, 0xc2, 0xbb, 0xb0, 0x74, 0xe5, + 0xef, 0x4f, 0xe1, 0xa3, 0xd4, 0x00, 0x02, 0xc2 +}; + /* multi-sliced data, generated on zynqultrascaleplus with: * gst-launch-1.0 videotestsrc num-buffers=1 pattern=green \ ! video/x-raw,width=128,height=128 \ @@ -207,9 +232,11 @@ guint8 *data = map.data; /* VPS, SPS, PPS */ - fail_unless (map.size == vdata->data_to_verify_size + + fail_unless (map.size == vdata->data_to_verify_size + sizeof (h265_aud) + ctx_headers0.size + ctx_headers1.size + ctx_headers2.size); + fail_unless (memcmp (data, h265_aud, sizeof (h265_aud)) == 0); + data += sizeof (h265_aud); fail_unless (memcmp (data, ctx_headers0.data, ctx_headers0.size) == 0); data += ctx_headers0.size; fail_unless (memcmp (data, ctx_headers1.data, ctx_headers1.size) == 0); @@ -222,9 +249,11 @@ vdata->data_to_verify_size) == 0); } else { /* IDR frame */ - fail_unless (map.size == vdata->data_to_verify_size); - - fail_unless (memcmp (map.data, vdata->data_to_verify, map.size) == 0); + guint aud_size = sizeof (h265_aud); + fail_unless (map.size == vdata->data_to_verify_size + aud_size); + fail_unless (memcmp (map.data, h265_aud, aud_size) == 0); + fail_unless (memcmp (map.data + aud_size, vdata->data_to_verify, + map.size - aud_size) == 0); } gst_buffer_unmap (buffer, &map); @@ -333,70 +362,6 @@ GST_END_TEST; -/* 8bits 4:4:4 encoded stream, and profile-level-tier is not spec compliant. - * extracted from the file reported at - * https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/1009 - */ -static const guint8 broken_profile_codec_data = { - 0x01, 0x24, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x99, 0xf0, 0x00, 0xfc, 0xff, 0xf8, 0xf8, 0x00, 0x00, 0x0f, 0x03, 0x20, - 0x00, 0x01, 0x00, 0x18, 0x40, 0x01, 0x0c, 0x01, 0xff, 0xff, 0x24, 0x08, - 0x00, 0x00, 0x03, 0x00, 0x00, 0x03, 0x00, 0x00, 0x03, 0x00, 0x00, 0x03, - 0x00, 0x99, 0xac, 0x09, 0x21, 0x00, 0x01, 0x00, 0x2c, 0x42, 0x01, 0x01, - 0x24, 0x08, 0x00, 0x00, 0x03, 0x00, 0x00, 0x03, 0x00, 0x00, 0x03, 0x00, - 0x00, 0x03, 0x00, 0x99, 0x90, 0x00, 0x3c, 0x04, 0x00, 0x44, 0x0f, 0x84, - 0x72, 0xd6, 0x94, 0x84, 0xb2, 0x5c, 0x40, 0x20, 0x00, 0x00, 0x03, 0x00, - 0x20, 0x00, 0x00, 0x07, 0x81, 0x22, 0x00, 0x01, 0x00, 0x08, 0x44, 0x01, - 0xc0, 0xf7, 0x18, 0x30, 0x0c, 0xc9 -}; - -GST_START_TEST (test_parse_fallback_profile) -{ - GstHarness *h = gst_harness_new ("h265parse"); - GstCaps *caps; - GstBuffer *codec_data; - GstEvent *event; - - codec_data = gst_buffer_new_memdup (broken_profile_codec_data, - sizeof (broken_profile_codec_data)); - - caps = gst_caps_from_string ("video/x-h265, stream-format=(string)hvc1, " - "alignment=(string)au"); - gst_caps_set_simple (caps, "codec_data", GST_TYPE_BUFFER, codec_data, NULL); - gst_buffer_unref (codec_data); - - gst_harness_set_src_caps (h, caps); - while ((event = gst_harness_pull_event (h)) != NULL) { - GstStructure *s; - const gchar *profile; - - if (GST_EVENT_TYPE (event) != GST_EVENT_CAPS) { - gst_event_unref (event); - continue; - } - - gst_event_parse_caps (event, &caps); - s = gst_caps_get_structure (caps, 0); - profile = gst_structure_get_string (s, "profile"); - - /* h265parse must provide profile */ - fail_unless (profile); - - /* must not be main profile at least. - * main-444 is expected but we might update the profile parsing - * logic later. At least it should not be main profile - */ - fail_if (g_strcmp0 (profile, "main") == 0); - - gst_event_unref (event); - break; - } - - gst_harness_teardown (h); -} - -GST_END_TEST; - static Suite * h265parse_suite (void) { @@ -409,7 +374,6 @@ tcase_add_test (tc_chain, test_parse_split); tcase_add_test (tc_chain, test_parse_detect_stream); tcase_add_test (tc_chain, test_parse_detect_stream_with_hdr_sei); - tcase_add_test (tc_chain, test_parse_fallback_profile); return s; } @@ -587,13 +551,13 @@ sizeof (h265_128x128_slice_idr_n_lp), 100, 0); fail_unless_equals_int (gst_harness_push (h, buf), GST_FLOW_OK); fail_unless_equals_int (gst_harness_buffers_in_queue (h), 1); - pull_and_check (h, h265_128x128_slice_idr_n_lp, 100, 0); + pull_and_check (h, h265_128x128_slice_idr_n_lp_with_aud, 100, 0); buf = wrap_buffer (h265_128x128_slice_idr_n_lp, sizeof (h265_128x128_slice_idr_n_lp), 200, 0); fail_unless_equals_int (gst_harness_push (h, buf), GST_FLOW_OK); fail_unless_equals_int (gst_harness_buffers_in_queue (h), 1); - pull_and_check (h, h265_128x128_slice_idr_n_lp, 200, 0); + pull_and_check (h, h265_128x128_slice_idr_n_lp_with_aud, 200, 0); } GST_START_TEST (test_flow_nal_nal) @@ -647,7 +611,9 @@ fail_unless_equals_int (gst_harness_push (h, buf), GST_FLOW_OK); fail_unless_equals_int (gst_harness_buffers_in_queue (h), 1); - pull_and_check (h, h265_128x128_slice_idr_n_lp, 100, 0); + + /* h265parse will insert AUD for byte-stream + AU output */ + pull_and_check (h, h265_128x128_slice_idr_n_lp_with_aud, 100, 0); gst_harness_teardown (h); } @@ -691,7 +657,8 @@ test_headers_outalign_au (GstHarness * h) { fail_unless_equals_int (gst_harness_buffers_in_queue (h), 1); - pull_and_check_composite (h, 10, 0, 4, + pull_and_check_composite (h, 10, 0, 5, + h265_aud, sizeof (h265_aud), h265_128x128_vps, sizeof (h265_128x128_vps), h265_128x128_sps, sizeof (h265_128x128_sps), h265_128x128_pps, sizeof (h265_128x128_pps), @@ -836,7 +803,7 @@ sizeof (h265_128x128_slice_idr_n_lp), 1000, GST_BUFFER_FLAG_DISCONT); fail_unless_equals_int (gst_harness_push (h, buf), GST_FLOW_OK); fail_unless_equals_int (gst_harness_buffers_in_queue (h), 1); - pull_and_check (h, h265_128x128_slice_idr_n_lp, 1000, + pull_and_check (h, h265_128x128_slice_idr_n_lp_with_aud, 1000, GST_BUFFER_FLAG_DISCONT); } @@ -964,7 +931,8 @@ /* now we can see the initial AU on the output */ fail_unless_equals_int (gst_harness_buffers_in_queue (h), 1); - pull_and_check_composite (h, 10, 0, 5, + pull_and_check_composite (h, 10, 0, 6, + h265_aud, sizeof (h265_aud), h265_128x128_sliced_vps, sizeof (h265_128x128_sliced_vps), h265_128x128_sliced_sps, sizeof (h265_128x128_sliced_sps), h265_128x128_sliced_pps, sizeof (h265_128x128_sliced_pps), @@ -981,7 +949,8 @@ fail_unless_equals_int (gst_harness_push (h, buf), GST_FLOW_OK); fail_unless_equals_int (gst_harness_buffers_in_queue (h), 1); - pull_and_check_composite (h, 100, 0, 2, + pull_and_check_composite (h, 100, 0, 3, + h265_aud, sizeof (h265_aud), h265_128x128_slice_1_idr_n_lp, sizeof (h265_128x128_slice_1_idr_n_lp), h265_128x128_slice_2_idr_n_lp, sizeof (h265_128x128_slice_2_idr_n_lp)); @@ -999,7 +968,8 @@ bytestream_push_first_au_inalign_au (h, TRUE); fail_unless_equals_int (gst_harness_buffers_in_queue (h), 1); - pull_and_check_composite (h, 10, 0, 5, + pull_and_check_composite (h, 10, 0, 6, + h265_aud, sizeof (h265_aud), h265_128x128_sliced_vps, sizeof (h265_128x128_sliced_vps), h265_128x128_sliced_sps, sizeof (h265_128x128_sliced_sps), h265_128x128_sliced_pps, sizeof (h265_128x128_sliced_pps), @@ -1012,7 +982,8 @@ h265_128x128_slice_2_idr_n_lp, sizeof (h265_128x128_slice_2_idr_n_lp)); fail_unless_equals_int (gst_harness_push (h, buf), GST_FLOW_OK); fail_unless_equals_int (gst_harness_buffers_in_queue (h), 1); - pull_and_check_composite (h, 100, 0, 2, + pull_and_check_composite (h, 100, 0, 3, + h265_aud, sizeof (h265_aud), h265_128x128_slice_1_idr_n_lp, sizeof (h265_128x128_slice_1_idr_n_lp), h265_128x128_slice_2_idr_n_lp, sizeof (h265_128x128_slice_2_idr_n_lp)); @@ -1175,6 +1146,9 @@ GstHarness *h; GstCaps *caps; GstBuffer *codec_data; + GstBuffer *buf, *bufout; + GstMapInfo mapout; + /* Consists of 4 arrays (VPS, SPS, PPS, SEI -> broken) and each array contains * single nalu * Captured from the log at @@ -1203,7 +1177,18 @@ h = gst_harness_new ("h265parse"); gst_harness_set_src_caps (h, caps); - gst_harness_push_event (h, gst_event_new_eos ()); + + /* hvcc idr frame nal */ + static guint8 *h265_idr_hvcc; + + /* make hvcc frame NAL */ + h265_idr_hvcc = g_malloc (sizeof (h265_idr)); + GST_WRITE_UINT32_BE (h265_idr_hvcc, sizeof (h265_idr) - 4); + memcpy (h265_idr_hvcc + 4, h265_idr + 4, sizeof (h265_idr) - 4); + + /* Send idr to trigger caps event */ + buf = composite_buffer (100, 0, 1, h265_idr_hvcc, sizeof (h265_idr)); + fail_unless_equals_int (gst_harness_push (h, buf), GST_FLOW_OK); while (TRUE) { GstEvent *event = gst_harness_pull_event (h); @@ -1228,6 +1213,206 @@ gst_event_unref (event); } + /* Verify IDR */ + bufout = gst_harness_pull (h); + fail_unless (bufout != NULL); + gst_buffer_map (bufout, &mapout, GST_MAP_READ); + fail_unless_equals_int (mapout.size, sizeof (h265_idr)); + fail_unless (memcmp (mapout.data, h265_idr_hvcc, sizeof (h265_idr)) == 0); + gst_buffer_unmap (bufout, &mapout); + gst_buffer_unref (bufout); + + gst_harness_teardown (h); +} + +GST_END_TEST; + +/* 8bits 4:4:4 encoded stream, and profile-level-tier is not spec compliant. + * extracted from the file reported at + * https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/1009 + */ +static const guint8 broken_profile_codec_data = { + 0x01, 0x24, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x99, 0xf0, 0x00, 0xfc, 0xff, 0xf8, 0xf8, 0x00, 0x00, 0x0f, 0x03, 0x20, + 0x00, 0x01, 0x00, 0x18, 0x40, 0x01, 0x0c, 0x01, 0xff, 0xff, 0x24, 0x08, + 0x00, 0x00, 0x03, 0x00, 0x00, 0x03, 0x00, 0x00, 0x03, 0x00, 0x00, 0x03, + 0x00, 0x99, 0xac, 0x09, 0x21, 0x00, 0x01, 0x00, 0x2c, 0x42, 0x01, 0x01, + 0x24, 0x08, 0x00, 0x00, 0x03, 0x00, 0x00, 0x03, 0x00, 0x00, 0x03, 0x00, + 0x00, 0x03, 0x00, 0x99, 0x90, 0x00, 0x3c, 0x04, 0x00, 0x44, 0x0f, 0x84, + 0x72, 0xd6, 0x94, 0x84, 0xb2, 0x5c, 0x40, 0x20, 0x00, 0x00, 0x03, 0x00, + 0x20, 0x00, 0x00, 0x07, 0x81, 0x22, 0x00, 0x01, 0x00, 0x08, 0x44, 0x01, + 0xc0, 0xf7, 0x18, 0x30, 0x0c, 0xc9 +}; + +GST_START_TEST (test_parse_fallback_profile) +{ + GstHarness *h = gst_harness_new ("h265parse"); + GstCaps *caps; + GstBuffer *codec_data; + GstEvent *event; + GstBuffer *buf, *bufout; + GstMapInfo mapout; + + codec_data = gst_buffer_new_memdup (broken_profile_codec_data, + sizeof (broken_profile_codec_data)); + + caps = gst_caps_from_string ("video/x-h265, stream-format=(string)hvc1, " + "alignment=(string)au"); + gst_caps_set_simple (caps, "codec_data", GST_TYPE_BUFFER, codec_data, NULL); + gst_buffer_unref (codec_data); + + gst_harness_set_src_caps (h, caps); + + /* hvcc idr frame nal */ + static guint8 *h265_idr_hvcc; + + /* make hvcc frame NAL */ + h265_idr_hvcc = g_malloc (sizeof (h265_idr)); + GST_WRITE_UINT32_BE (h265_idr_hvcc, sizeof (h265_idr) - 4); + memcpy (h265_idr_hvcc + 4, h265_idr + 4, sizeof (h265_idr) - 4); + + /* Send idr to trigger caps event */ + buf = composite_buffer (100, 0, 1, h265_idr_hvcc, sizeof (h265_idr)); + fail_unless_equals_int (gst_harness_push (h, buf), GST_FLOW_OK); + + while ((event = gst_harness_pull_event (h)) != NULL) { + GstStructure *s; + const gchar *profile; + + if (GST_EVENT_TYPE (event) != GST_EVENT_CAPS) { + gst_event_unref (event); + continue; + } + + gst_event_parse_caps (event, &caps); + s = gst_caps_get_structure (caps, 0); + profile = gst_structure_get_string (s, "profile"); + + /* h265parse must provide profile */ + fail_unless (profile); + + /* must not be main profile at least. + * main-444 is expected but we might update the profile parsing + * logic later. At least it should not be main profile + */ + fail_if (g_strcmp0 (profile, "main") == 0); + + gst_event_unref (event); + break; + } + + /* Verify IDR */ + bufout = gst_harness_pull (h); + fail_unless (bufout != NULL); + gst_buffer_map (bufout, &mapout, GST_MAP_READ); + fail_unless_equals_int (mapout.size, sizeof (h265_idr)); + fail_unless (memcmp (mapout.data, h265_idr_hvcc, sizeof (h265_idr)) == 0); + gst_buffer_unmap (bufout, &mapout); + gst_buffer_unref (bufout); + + gst_harness_teardown (h); +} + +GST_END_TEST; + +GST_START_TEST (test_packetized_hvcc_drop_corrupt) +{ + GstBuffer *cdata; + GstCaps *in_caps, *out_caps; + GstHarness *h = gst_harness_new ("h265parse"); + GstBuffer *buf, *bufout; + GstMapInfo mapout; + const gchar *in_caps_str = + "video/x-h265, stream-format=(string)hvc1, alignment=(string)au"; + const gchar *out_caps_str = + "video/x-h265, stream-format=(string)hvc1, alignment=(string)au"; + + in_caps = gst_caps_from_string (in_caps_str); + cdata = gst_buffer_new_memdup (h265_hvcc_codec_data, + sizeof (h265_hvcc_codec_data)); + gst_caps_set_simple (in_caps, "codec_data", GST_TYPE_BUFFER, cdata, NULL); + gst_buffer_unref (cdata); + out_caps = gst_caps_from_string (out_caps_str); + gst_harness_set_caps (h, in_caps, out_caps); + + /* hvcc idr frame nal */ + static guint8 *h265_idr_hvcc; + + /* make hvcc frame NAL */ + h265_idr_hvcc = g_malloc (sizeof (h265_idr)); + GST_WRITE_UINT32_BE (h265_idr_hvcc, sizeof (h265_idr) - 4); + memcpy (h265_idr_hvcc + 4, h265_idr + 4, sizeof (h265_idr) - 4); + + static guint8 h265_garbage_hvcc = { + 0x00, 0x00, 0x00, 0x00, 0x05 + }; + + /* Send all => drop garbage end but keep correct frame. */ + buf = composite_buffer (100, 0, 2, h265_idr_hvcc, sizeof (h265_idr), + h265_garbage_hvcc, sizeof (h265_garbage_hvcc)); + fail_unless_equals_int (gst_harness_push (h, buf), GST_FLOW_OK); + + /* Send 3 IDR frames => all should be kept. */ + buf = composite_buffer (200, 0, 3, h265_idr_hvcc, sizeof (h265_idr), + h265_idr_hvcc, sizeof (h265_idr), h265_idr_hvcc, sizeof (h265_idr)); + fail_unless_equals_int (gst_harness_push (h, buf), GST_FLOW_OK); + + /* Send 2 IDR and one garbage => keep the first two and drop garabage. */ + buf = composite_buffer (300, 0, 3, h265_idr_hvcc, sizeof (h265_idr), + h265_idr_hvcc, sizeof (h265_idr), h265_garbage_hvcc, + sizeof (h265_garbage_hvcc)); + fail_unless_equals_int (gst_harness_push (h, buf), GST_FLOW_OK); + + /* Only send part of correct frame => drop everything */ + buf = wrap_buffer (h265_idr_hvcc, sizeof (h265_idr) - 10, 400, 0); + fail_unless_equals_int (gst_harness_push (h, buf), GST_FLOW_OK); + + /* Send garbage frame => drop everything */ + buf = wrap_buffer (h265_garbage_hvcc, sizeof (h265_garbage_hvcc), 500, 0); + fail_unless_equals_int (gst_harness_push (h, buf), GST_FLOW_OK); + + /* EOS for pending buffers to be drained if any */ + gst_harness_push_event (h, gst_event_new_eos ()); + + fail_unless_equals_int (gst_harness_buffers_received (h), 3); + + bufout = gst_harness_pull (h); + fail_unless (bufout != NULL); + + /* Verify IDR + garbage. */ + gst_buffer_map (bufout, &mapout, GST_MAP_READ); + fail_unless_equals_int (mapout.size, sizeof (h265_idr)); + fail_unless (memcmp (mapout.data, h265_idr_hvcc, sizeof (h265_idr)) == 0); + gst_buffer_unmap (bufout, &mapout); + gst_buffer_unref (bufout); + + /* Verify 3 * IDR. */ + bufout = gst_harness_pull (h); + fail_unless (bufout != NULL); + + gst_buffer_map (bufout, &mapout, GST_MAP_READ); + fail_unless_equals_int (mapout.size, 3 * sizeof (h265_idr)); + fail_unless (memcmp (mapout.data, h265_idr_hvcc, sizeof (h265_idr)) == 0); + fail_unless (memcmp (mapout.data + sizeof (h265_idr), h265_idr_hvcc, + sizeof (h265_idr)) == 0); + fail_unless (memcmp (mapout.data + 2 * sizeof (h265_idr), h265_idr_hvcc, + sizeof (h265_idr)) == 0); + gst_buffer_unmap (bufout, &mapout); + gst_buffer_unref (bufout); + + /* Verify 2 * IDR + garbage. */ + bufout = gst_harness_pull (h); + fail_unless (bufout != NULL); + + gst_buffer_map (bufout, &mapout, GST_MAP_READ); + fail_unless_equals_int (mapout.size, 2 * sizeof (h265_idr)); + fail_unless (memcmp (mapout.data, h265_idr_hvcc, sizeof (h265_idr)) == 0); + fail_unless (memcmp (mapout.data + sizeof (h265_idr), h265_idr_hvcc, + sizeof (h265_idr)) == 0); + gst_buffer_unmap (bufout, &mapout); + gst_buffer_unref (bufout); + + g_free (h265_idr_hvcc); gst_harness_teardown (h); } @@ -1271,6 +1456,8 @@ tcase_add_test (tc_chain, test_parse_sei_userdefinedunregistered); tcase_add_test (tc_chain, test_invalid_sei_in_hvcc); + tcase_add_test (tc_chain, test_parse_fallback_profile); + tcase_add_test (tc_chain, test_packetized_hvcc_drop_corrupt); return s; }
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/elements/h266parse.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/elements/h266parse.c
Changed
@@ -397,7 +397,7 @@ annexb_to_length_prefixed (const guint8 * in_data, gsize size, guint8 nal_length_size, guint8 ** out_data, gsize * out_size) { - g_assert (size > 4); + g_assert_cmpuint (size, >, 4); *out_size = size - 4 + nal_length_size; *out_data = g_malloc (*out_size); guint32 length = GUINT32_TO_BE ((size - 4) << (32 - 8 * nal_length_size)); @@ -1230,6 +1230,112 @@ GST_END_TEST; +GST_START_TEST (test_packetized_vvc1_drop_corrupt) +{ + GstBuffer *cdata; + GstCaps *in_caps, *out_caps; + GstHarness *h = gst_harness_new ("h266parse"); + GstBuffer *buf, *bufout; + GstMapInfo mapout; + const gchar *in_caps_str = + "video/x-h266, parsed=(boolean)false, stream-format=vvc1, alignment=au"; + const gchar *out_caps_str = + "video/x-h266, parsed=(boolean)true, stream-format=vvc1, alignment=au"; + + in_caps = gst_caps_from_string (in_caps_str); + cdata = gst_buffer_new_memdup (h266_vvc1_codec_data, + sizeof (h266_vvc1_codec_data)); + gst_caps_set_simple (in_caps, "codec_data", GST_TYPE_BUFFER, cdata, + "stream-format", G_TYPE_STRING, "vvc1", NULL); + gst_buffer_unref (cdata); + out_caps = gst_caps_from_string (out_caps_str); + gst_harness_set_caps (h, in_caps, out_caps); + + /* vvc1 idr frame nal */ + static guint8 *h266_idr_vvc1; + + /* make vvc1 idr frame NAL */ + h266_idr_vvc1 = g_malloc (sizeof (h266_idr)); + GST_WRITE_UINT32_BE (h266_idr_vvc1, sizeof (h266_idr) - 4); + memcpy (h266_idr_vvc1 + 4, h266_idr + 4, sizeof (h266_idr) - 4); + + static guint8 h266_garbage_vvc1[] = { + 0x00, 0x00, 0x00, 0x00, 0x05 + }; + + /* Send all => drop garbage end but keep correct frame. */ + buf = composite_buffer (100, 0, 2, h266_idr_vvc1, sizeof (h266_idr), + h266_garbage_vvc1, sizeof (h266_garbage_vvc1)); + fail_unless_equals_int (gst_harness_push (h, buf), GST_FLOW_OK); + + /* Send 3 IDR frames => all should be kept. 
*/ + buf = composite_buffer (200, 0, 3, h266_idr_vvc1, sizeof (h266_idr), + h266_idr_vvc1, sizeof (h266_idr), h266_idr_vvc1, sizeof (h266_idr)); + fail_unless_equals_int (gst_harness_push (h, buf), GST_FLOW_OK); + + /* Send 2 IDR and one garbage => keep the first two and drop garbage. */ + buf = composite_buffer (300, 0, 3, h266_idr_vvc1, sizeof (h266_idr), + h266_idr_vvc1, sizeof (h266_idr), h266_garbage_vvc1, + sizeof (h266_garbage_vvc1)); + fail_unless_equals_int (gst_harness_push (h, buf), GST_FLOW_OK); + + /* Only send part of correct frame => drop everything */ + buf = wrap_buffer (h266_idr_vvc1, sizeof (h266_idr) - 10, 300, 0); + fail_unless_equals_int (gst_harness_push (h, buf), GST_FLOW_OK); + + /* Send garbage frame => drop everything */ + buf = wrap_buffer (h266_garbage_vvc1, sizeof (h266_garbage_vvc1), 400, 0); + fail_unless_equals_int (gst_harness_push (h, buf), GST_FLOW_OK); + + /* EOS for pending buffers to be drained if any */ + gst_harness_push_event (h, gst_event_new_eos ()); + + fail_unless_equals_int (gst_harness_buffers_received (h), 3); + + /* Verify IDR + garbage. */ + bufout = gst_harness_pull (h); + fail_unless (bufout != NULL); + + gsize vps_sps_pps_sz = sizeof (h266_vps) + sizeof (h266_sps) + + sizeof (h266_pps); + gst_buffer_map (bufout, &mapout, GST_MAP_READ); + fail_unless_equals_int (mapout.size, vps_sps_pps_sz + sizeof (h266_idr)); + fail_unless (memcmp (mapout.data + vps_sps_pps_sz, + h266_idr_vvc1, sizeof (h266_idr)) == 0); + gst_buffer_unmap (bufout, &mapout); + gst_buffer_unref (bufout); + + /* Verify 3 * IDR. 
*/ + bufout = gst_harness_pull (h); + fail_unless (bufout != NULL); + + gst_buffer_map (bufout, &mapout, GST_MAP_READ); + fail_unless_equals_int (mapout.size, 3 * sizeof (h266_idr)); + fail_unless (memcmp (mapout.data, h266_idr_vvc1, sizeof (h266_idr)) == 0); + fail_unless (memcmp (mapout.data + sizeof (h266_idr), h266_idr_vvc1, + sizeof (h266_idr)) == 0); + fail_unless (memcmp (mapout.data + 2 * sizeof (h266_idr), h266_idr_vvc1, + sizeof (h266_idr)) == 0); + gst_buffer_unmap (bufout, &mapout); + gst_buffer_unref (bufout); + + /* Verify 2 * IDR + garbage. */ + bufout = gst_harness_pull (h); + fail_unless (bufout != NULL); + + gst_buffer_map (bufout, &mapout, GST_MAP_READ); + fail_unless_equals_int (mapout.size, 2 * sizeof (h266_idr)); + fail_unless (memcmp (mapout.data, h266_idr_vvc1, sizeof (h266_idr)) == 0); + fail_unless (memcmp (mapout.data + sizeof (h266_idr), h266_idr_vvc1, + sizeof (h266_idr)) == 0); + gst_buffer_unmap (bufout, &mapout); + gst_buffer_unref (bufout); + + gst_harness_teardown (h); +} + +GST_END_TEST; + static Suite * h266parse_harnessed_suite (void) { @@ -1268,6 +1374,8 @@ tcase_add_test (tc_chain, test_drain); + tcase_add_test (tc_chain, test_packetized_vvc1_drop_corrupt); + return s; }
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/elements/id3mux.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/elements/id3mux.c
Changed
@@ -373,7 +373,7 @@ gst_buffer_map (buf, &map, GST_MAP_READ); - g_assert (map.size % MP3_FRAME_SIZE == 0); + fail_unless_equals_uint64 (map.size % MP3_FRAME_SIZE, 0); for (off = 0; off < map.size; off += MP3_FRAME_SIZE) { fail_unless (memcmp (map.data + off, mp3_dummyhdr, @@ -403,23 +403,23 @@ GstStateChangeReturn state_result; pipeline = gst_pipeline_new ("pipeline"); - g_assert (pipeline != NULL); + g_assert_nonnull (pipeline); fakesrc = gst_element_factory_make ("fakesrc", "fakesrc"); - g_assert (fakesrc != NULL); + g_assert_nonnull (fakesrc); id3mux = gst_element_factory_make ("id3mux", "id3mux"); - g_assert (id3mux != NULL); + g_assert_nonnull (id3mux); g_object_set (id3mux, "v2-version", v2version, NULL); identity = gst_element_factory_make ("identity", "identity"); - g_assert (identity != NULL); + g_assert_nonnull (identity); id3demux = gst_element_factory_make ("id3demux", "id3demux"); - g_assert (id3demux != NULL); + g_assert_nonnull (id3demux); fakesink = gst_element_factory_make ("fakesink", "fakesink"); - g_assert (fakesink != NULL); + g_assert_nonnull (fakesink); /* set up sink */ outbuf = NULL;
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/elements/ioutracker.c
Added
@@ -0,0 +1,466 @@ +/* + * GStreamer gstreamer-ioutracker + * Copyright (C) 2025 Collabora Ltd. + * author: Olivier Crête <olivier.crete@collabora.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#include <gst/check/gstcheck.h> +#include <gst/check/gstharness.h> + +#include <gst/analytics/analytics.h> + +static GstHarness * +setup_ioutracker (void) +{ + GstHarness *h = gst_harness_new ("ioutracker"); + + gst_harness_play (h); + gst_harness_set_src_caps (h, gst_caps_new_empty_simple ("video/x-raw")); + + return h; +} + +static GstBuffer * +create_buffer (GstClockTime ts, guint x, guint y, gint h, gint w, + GstAnalyticsODMtd * od_mtd) +{ + GstBuffer *b = gst_buffer_new (); + GstAnalyticsRelationMeta *rmeta = gst_buffer_add_analytics_relation_meta (b); + + GST_BUFFER_PTS (b) = GST_BUFFER_DTS (b) = ts; + + gst_analytics_relation_meta_add_od_mtd (rmeta, 0, x, y, w, h, 1.0, od_mtd); + + return b; +} + +GST_START_TEST (test_no_intersection) +{ + GstHarness *h = setup_ioutracker (); + GstBuffer *b; + GstAnalyticsRelationMeta *rmeta; + GstAnalyticsODMtd od_mtd; + GstAnalyticsTrackingMtd t_mtd, t_mtd2; + guint64 tracking_id; + guint64 tracking_id1; + GstClockTime tracking_first_seen, tracking_last_seen; + gboolean tracking_lost; + gpointer state = NULL; + gboolean 
has_1 = FALSE; + gboolean has_2 = FALSE; + + b = gst_harness_push_and_pull (h, create_buffer (0, 0, 0, 10, 10, &od_mtd)); + fail_unless (b); + + rmeta = gst_buffer_get_analytics_relation_meta (b); + fail_unless (rmeta == od_mtd.meta); + fail_unless_equals_int (gst_analytics_relation_get_length (rmeta), 2); + + fail_unless (gst_analytics_relation_meta_get_direct_related (rmeta, + od_mtd.id, GST_ANALYTICS_REL_TYPE_RELATE_TO, + gst_analytics_tracking_mtd_get_mtd_type (), NULL, &t_mtd)); + + + fail_unless (gst_analytics_tracking_mtd_get_info (&t_mtd, + &tracking_id, &tracking_first_seen, &tracking_last_seen, + &tracking_lost)); + fail_unless_equals_int64 (tracking_first_seen, 0); + fail_unless_equals_int64 (tracking_last_seen, 0); + fail_unless (tracking_lost == FALSE); + tracking_id1 = tracking_id; + + gst_buffer_unref (b); + + /* Now send a second buffer in a separate location */ + b = gst_harness_push_and_pull (h, create_buffer (10, 20, 20, 10, 10, + &od_mtd)); + fail_unless (b); + + /* Now one object and 2 tracks */ + fail_unless_equals_int (gst_analytics_relation_get_length (od_mtd.meta), 3); + + fail_unless (gst_analytics_relation_meta_get_direct_related (od_mtd.meta, + od_mtd.id, GST_ANALYTICS_REL_TYPE_RELATE_TO, + gst_analytics_tracking_mtd_get_mtd_type (), NULL, &t_mtd)); + + fail_unless (gst_analytics_tracking_mtd_get_info (&t_mtd, + &tracking_id, &tracking_first_seen, &tracking_last_seen, + &tracking_lost)); + fail_unless_equals_int64 (tracking_first_seen, 10); + fail_unless_equals_int64 (tracking_last_seen, 10); + fail_unless (tracking_id != tracking_id1); + fail_unless (tracking_lost == FALSE); + + while (gst_analytics_relation_meta_iterate (od_mtd.meta, + &state, gst_analytics_tracking_mtd_get_mtd_type (), &t_mtd2)) { + if (t_mtd.id == t_mtd2.id) { + fail_unless (has_1 == FALSE); + has_1 = TRUE; + continue; + } + + fail_unless (has_2 == FALSE); + has_2 = TRUE; + + fail_unless (gst_analytics_tracking_mtd_get_info (&t_mtd2, + &tracking_id, 
&tracking_first_seen, &tracking_last_seen, + &tracking_lost)); + fail_unless_equals_int64 (tracking_first_seen, 0); + fail_unless_equals_int64 (tracking_last_seen, 0); + fail_unless_equals_int64 (tracking_id, tracking_id1); + fail_unless (tracking_lost == FALSE); + } + fail_unless (has_1); + fail_unless (has_2); + + gst_buffer_unref (b); + + gst_harness_teardown (h); +} + +GST_END_TEST; + +GST_START_TEST (test_intersection) +{ + GstHarness *h = setup_ioutracker (); + GstBuffer *b; + GstAnalyticsODMtd od_mtd; + GstAnalyticsTrackingMtd t_mtd; + guint64 tracking_id; + guint64 tracking_id1; + GstClockTime tracking_first_seen, tracking_last_seen; + gboolean tracking_lost; + + g_object_set (h->element, "iou-score-threshold", 0.4, NULL); + + b = gst_harness_push_and_pull (h, create_buffer (0, 0, 0, 10, 10, &od_mtd)); + fail_unless (b); + + fail_unless_equals_int (gst_analytics_relation_get_length (od_mtd.meta), 2); + + fail_unless (gst_analytics_relation_meta_get_direct_related (od_mtd.meta, + od_mtd.id, GST_ANALYTICS_REL_TYPE_RELATE_TO, + gst_analytics_tracking_mtd_get_mtd_type (), NULL, &t_mtd)); + + fail_unless (gst_analytics_tracking_mtd_get_info (&t_mtd, + &tracking_id1, &tracking_first_seen, &tracking_last_seen, + &tracking_lost)); + fail_unless_equals_int64 (tracking_first_seen, 0); + fail_unless_equals_int64 (tracking_last_seen, 0); + fail_unless (tracking_lost == FALSE); + + gst_buffer_unref (b); + + /* Now send a second buffer with large intersection */ + b = gst_harness_push_and_pull (h, create_buffer (10, 0, 4, 10, 10, &od_mtd)); + fail_unless (b); + + /* Now 1 object and 1 track */ + fail_unless_equals_int (gst_analytics_relation_get_length (od_mtd.meta), 2); + + fail_unless (gst_analytics_relation_meta_get_direct_related (od_mtd.meta, + od_mtd.id, GST_ANALYTICS_REL_TYPE_RELATE_TO, + gst_analytics_tracking_mtd_get_mtd_type (), NULL, &t_mtd)); + + fail_unless (gst_analytics_tracking_mtd_get_info (&t_mtd, + &tracking_id, &tracking_first_seen, 
&tracking_last_seen, + &tracking_lost)); + fail_unless_equals_int64 (tracking_first_seen, 0); + fail_unless_equals_int64 (tracking_last_seen, 10); + fail_unless_equals_int64 (tracking_id, tracking_id1); + fail_unless (tracking_lost == FALSE); + + gst_buffer_unref (b); + + /* Now send a third buffer with large intersection */ + b = gst_harness_push_and_pull (h, create_buffer (20, 0, 8, 10, 10, &od_mtd)); + fail_unless (b); + + /* Now 1 object and 1 track */ + fail_unless_equals_int (gst_analytics_relation_get_length (od_mtd.meta), 2); + + fail_unless (gst_analytics_relation_meta_get_direct_related (od_mtd.meta, + od_mtd.id, GST_ANALYTICS_REL_TYPE_RELATE_TO, + gst_analytics_tracking_mtd_get_mtd_type (), NULL, &t_mtd)); + + fail_unless (gst_analytics_tracking_mtd_get_info (&t_mtd, + &tracking_id, &tracking_first_seen, &tracking_last_seen, + &tracking_lost)); + fail_unless_equals_int64 (tracking_first_seen, 0); + fail_unless_equals_int64 (tracking_last_seen, 20); + fail_unless_equals_int64 (tracking_id, tracking_id1); + fail_unless (tracking_lost == FALSE); + + gst_buffer_unref (b); + + /* Now send a fourth buffer with large intersection with the 3rd, + * but none with the original one. 
+ */ + b = gst_harness_push_and_pull (h, create_buffer (30, 0, 12, 10, 10, &od_mtd)); + fail_unless (b); + + /* Now 1 object and 1 track */ + fail_unless_equals_int (gst_analytics_relation_get_length (od_mtd.meta), 2); + + fail_unless (gst_analytics_relation_meta_get_direct_related (od_mtd.meta, + od_mtd.id, GST_ANALYTICS_REL_TYPE_RELATE_TO, + gst_analytics_tracking_mtd_get_mtd_type (), NULL, &t_mtd)); + + fail_unless (gst_analytics_tracking_mtd_get_info (&t_mtd, + &tracking_id, &tracking_first_seen, &tracking_last_seen, + &tracking_lost)); + fail_unless_equals_int64 (tracking_first_seen, 0); + fail_unless_equals_int64 (tracking_last_seen, 30); + fail_unless_equals_int64 (tracking_id, tracking_id1); + fail_unless (tracking_lost == FALSE); + + gst_buffer_unref (b); + + + gst_harness_teardown (h); +} + +GST_END_TEST; + +GST_START_TEST (test_lost) +{ + GstHarness *h = setup_ioutracker (); + GstBuffer *b; + GstAnalyticsRelationMeta *rmeta; + GstAnalyticsODMtd od_mtd; + GstAnalyticsTrackingMtd t_mtd; + guint64 tracking_id; + guint64 tracking_id1; + GstClockTime tracking_first_seen, tracking_last_seen; + gboolean tracking_lost; + gpointer state = NULL; + + g_object_set (h->element, "min-frame-count-for-lost-track", 2, NULL); + + b = gst_harness_push_and_pull (h, create_buffer (0, 0, 0, 10, 10, &od_mtd)); + fail_unless (b); + fail_unless_equals_int (gst_analytics_relation_get_length (od_mtd.meta), 2); + + fail_unless (gst_analytics_relation_meta_get_direct_related (od_mtd.meta, + od_mtd.id, GST_ANALYTICS_REL_TYPE_RELATE_TO, + gst_analytics_tracking_mtd_get_mtd_type (), NULL, &t_mtd)); + + fail_unless (gst_analytics_tracking_mtd_get_info (&t_mtd, + &tracking_id, &tracking_first_seen, &tracking_last_seen, + &tracking_lost)); + fail_unless_equals_int64 (tracking_first_seen, 0); + fail_unless_equals_int64 (tracking_last_seen, 0); + fail_unless (tracking_lost == FALSE); + tracking_id1 = tracking_id; + gst_buffer_unref (b); + + /* Now send a second buffer with no meta */ + b = 
gst_buffer_new (); + GST_BUFFER_PTS (b) = GST_BUFFER_DTS (b) = 10; + b = gst_harness_push_and_pull (h, b); + fail_unless (b); + rmeta = gst_buffer_get_analytics_relation_meta (b); + fail_unless (rmeta); + + /* Now one object and 2 tracks */ + fail_unless_equals_int (gst_analytics_relation_get_length (rmeta), 1); + + fail_unless (gst_analytics_relation_meta_iterate (rmeta, &state, + gst_analytics_tracking_mtd_get_mtd_type (), &t_mtd)); + + fail_unless (gst_analytics_tracking_mtd_get_info (&t_mtd, + &tracking_id, &tracking_first_seen, &tracking_last_seen, + &tracking_lost)); + fail_unless_equals_int64 (tracking_first_seen, 0); + fail_unless_equals_int64 (tracking_last_seen, 0); + fail_unless_equals_int64 (tracking_id, tracking_id1); + fail_unless (tracking_lost == FALSE); + gst_buffer_unref (b); + + /* Now send a third buffer with no meta */ + b = gst_buffer_new (); + GST_BUFFER_PTS (b) = GST_BUFFER_DTS (b) = 20; + b = gst_harness_push_and_pull (h, b); + fail_unless (b); + rmeta = gst_buffer_get_analytics_relation_meta (b); + fail_unless (rmeta); + + /* Now one object and 2 tracks */ + fail_unless_equals_int (gst_analytics_relation_get_length (rmeta), 1); + + state = NULL; + fail_unless (gst_analytics_relation_meta_iterate (rmeta, &state, + gst_analytics_tracking_mtd_get_mtd_type (), &t_mtd)); + + fail_unless (gst_analytics_tracking_mtd_get_info (&t_mtd, + &tracking_id, &tracking_first_seen, &tracking_last_seen, + &tracking_lost)); + fail_unless_equals_int64 (tracking_first_seen, 0); + fail_unless_equals_int64 (tracking_last_seen, 0); + fail_unless_equals_int64 (tracking_id, tracking_id1); + /* The track is lost */ + fail_unless (tracking_lost == TRUE); + gst_buffer_unref (b); + + /* Now send a fourth buffer with no meta */ + b = gst_buffer_new (); + GST_BUFFER_PTS (b) = GST_BUFFER_DTS (b) = 30; + b = gst_harness_push_and_pull (h, b); + fail_unless (b); + rmeta = gst_buffer_get_analytics_relation_meta (b); + fail_unless (rmeta == NULL); + gst_buffer_unref (b); + + 
gst_harness_teardown (h); +} + +GST_END_TEST; + +GST_START_TEST (test_catch_up) +{ + GstHarness *h = setup_ioutracker (); + GstBuffer *b; + GstAnalyticsODMtd od_mtd; + GstAnalyticsTrackingMtd t_mtd; + guint64 tracking_id; + guint64 tracking_id1; + GstClockTime tracking_first_seen, tracking_last_seen; + gboolean tracking_lost; + GstAnalyticsRelationMeta *rmeta; + gpointer state = NULL; + GstClockTime ts; + + g_object_set (h->element, "iou-score-threshold", 0.2, + "min-frame-count-for-lost-track", 10, NULL); + /* Send a first buffer */ + + b = gst_harness_push_and_pull (h, create_buffer (0, 0, 0, 10, 10, &od_mtd)); + fail_unless (b); + + fail_unless_equals_int (gst_analytics_relation_get_length (od_mtd.meta), 2); + + fail_unless (gst_analytics_relation_meta_get_direct_related (od_mtd.meta, + od_mtd.id, GST_ANALYTICS_REL_TYPE_RELATE_TO, + gst_analytics_tracking_mtd_get_mtd_type (), NULL, &t_mtd)); + + fail_unless (gst_analytics_tracking_mtd_get_info (&t_mtd, + &tracking_id1, &tracking_first_seen, &tracking_last_seen, + &tracking_lost)); + fail_unless_equals_int64 (tracking_first_seen, 0); + fail_unless_equals_int64 (tracking_last_seen, 0); + fail_unless (tracking_lost == FALSE); + + gst_buffer_unref (b); + + /* Now send a second buffer with an intersection */ + b = gst_harness_push_and_pull (h, create_buffer (10, 0, 6, 10, 10, &od_mtd)); + fail_unless (b); + + /* Now 1 object and 1 track */ + fail_unless_equals_int (gst_analytics_relation_get_length (od_mtd.meta), 2); + + fail_unless (gst_analytics_relation_meta_get_direct_related (od_mtd.meta, + od_mtd.id, GST_ANALYTICS_REL_TYPE_RELATE_TO, + gst_analytics_tracking_mtd_get_mtd_type (), NULL, &t_mtd)); + + fail_unless (gst_analytics_tracking_mtd_get_info (&t_mtd, + &tracking_id, &tracking_first_seen, &tracking_last_seen, + &tracking_lost)); + fail_unless_equals_int64 (tracking_first_seen, 0); + fail_unless_equals_int64 (tracking_last_seen, 10); + fail_unless_equals_int64 (tracking_id, tracking_id1); + fail_unless 
(tracking_lost == FALSE); + + gst_buffer_unref (b); + + /* Now send a few buffers with no meta */ + for (ts = 20; ts < 50; ts += 10) { + + b = gst_buffer_new (); + GST_BUFFER_PTS (b) = GST_BUFFER_DTS (b) = ts; + b = gst_harness_push_and_pull (h, b); + fail_unless (b); + rmeta = gst_buffer_get_analytics_relation_meta (b); + fail_unless (rmeta); + + /* Now has 1 track */ + fail_unless_equals_int (gst_analytics_relation_get_length (rmeta), 1); + + state = NULL; + fail_unless (gst_analytics_relation_meta_iterate (rmeta, &state, + gst_analytics_tracking_mtd_get_mtd_type (), &t_mtd)); + + fail_unless (gst_analytics_tracking_mtd_get_info (&t_mtd, + &tracking_id, &tracking_first_seen, &tracking_last_seen, + &tracking_lost)); + fail_unless_equals_int64 (tracking_first_seen, 0); + fail_unless_equals_int64 (tracking_last_seen, 10); + fail_unless_equals_int64 (tracking_id, tracking_id1); + fail_unless (tracking_lost == FALSE); + gst_buffer_unref (b); + } + + /* Now send a sixth buffer with no intersection, + * but we expect the prediction to catch up + */ + b = gst_harness_push_and_pull (h, create_buffer (50, 0, 16, 10, 10, &od_mtd)); + fail_unless (b); + + /* Now 1 object and 1 track */ + fail_unless_equals_int (gst_analytics_relation_get_length (od_mtd.meta), 2); + + fail_unless (gst_analytics_relation_meta_get_direct_related (od_mtd.meta, + od_mtd.id, GST_ANALYTICS_REL_TYPE_RELATE_TO, + gst_analytics_tracking_mtd_get_mtd_type (), NULL, &t_mtd)); + + fail_unless (gst_analytics_tracking_mtd_get_info (&t_mtd, + &tracking_id, &tracking_first_seen, &tracking_last_seen, + &tracking_lost)); + fail_unless_equals_int64 (tracking_first_seen, 0); + fail_unless_equals_int64 (tracking_last_seen, 50); + fail_unless_equals_int64 (tracking_id, tracking_id1); + fail_unless (tracking_lost == FALSE); + + gst_buffer_unref (b); + + + gst_harness_teardown (h); +} + +GST_END_TEST; + + +static Suite * +ioutracker_suite (void) +{ + Suite *s; + TCase *tc_chain; + + s = suite_create ("ioutracker"); + 
tc_chain = tcase_create ("general"); + + suite_add_tcase (s, tc_chain); + tcase_add_test (tc_chain, test_no_intersection); + tcase_add_test (tc_chain, test_intersection); + tcase_add_test (tc_chain, test_lost); + tcase_add_test (tc_chain, test_catch_up); + + return s; +} + +GST_CHECK_MAIN (ioutracker);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/elements/mpegtsmux.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/elements/mpegtsmux.c
Changed
@@ -134,6 +134,7 @@ teardown_src_pad (mux, sinkname); gst_check_teardown_sink_pad (mux); gst_check_teardown_element (mux); + gst_check_drop_buffers (); } static void @@ -507,7 +508,7 @@ GstState next_state = states[i % G_N_ELEMENTS (states)]; fail_unless (gst_element_set_state (mux, next_state) == GST_STATE_CHANGE_SUCCESS, - "could not set to %s", gst_element_state_get_name (next_state)); + "could not set to %s", gst_state_get_name (next_state)); /* push some buffers when playing - this triggers a lot of activity */ if (GST_STATE_PLAYING == next_state) {
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/elements/nvenc.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/elements/nvenc.c
Changed
@@ -322,7 +322,7 @@ /* change resolution */ caps = gst_caps_from_string ("video/x-raw,format=NV12"); gst_caps_set_simple (caps, "width", G_TYPE_INT, to_width, - "height", G_TYPE_INT, to_width, NULL); + "height", G_TYPE_INT, to_height, NULL); GST_DEBUG ("Set new resolution %dx%d", to_width, to_height); gst_harness_set_src_caps (h, caps); @@ -376,14 +376,14 @@ GST_START_TEST (test_resolution_change_to_larger) { - resolution_change_common (64, 64, 128, 128); + resolution_change_common (320, 320, 640, 640); } GST_END_TEST; GST_START_TEST (test_resolution_change_to_smaller) { - resolution_change_common (128, 128, 64, 64); + resolution_change_common (640, 640, 320, 320); } GST_END_TEST;
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/elements/pnm.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/elements/pnm.c
Changed
@@ -52,7 +52,7 @@ gst_parse_launch ("videotestsrc num-buffers=1 ! capsfilter name=incf ! pnmenc name=enc ! pnmdec ! capsfilter name=outcf ! appsink name=sink", NULL); - g_assert (pipeline != NULL); + g_assert_nonnull (pipeline); incf = gst_bin_get_by_name (GST_BIN (pipeline), "incf"); enc = gst_bin_get_by_name (GST_BIN (pipeline), "enc");
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/elements/rtponvifparse.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/elements/rtponvifparse.c
Changed
@@ -176,14 +176,14 @@ buf = buffers->data; if (clean_point) - g_assert (!GST_BUFFER_FLAG_IS_SET (buf, GST_BUFFER_FLAG_DELTA_UNIT)); + g_assert_false (GST_BUFFER_FLAG_IS_SET (buf, GST_BUFFER_FLAG_DELTA_UNIT)); else - g_assert (GST_BUFFER_FLAG_IS_SET (buf, GST_BUFFER_FLAG_DELTA_UNIT)); + g_assert_true (GST_BUFFER_FLAG_IS_SET (buf, GST_BUFFER_FLAG_DELTA_UNIT)); if (discont) - g_assert (GST_BUFFER_IS_DISCONT (buf)); + g_assert_true (GST_BUFFER_IS_DISCONT (buf)); else - g_assert (!GST_BUFFER_IS_DISCONT (buf)); + g_assert_false (GST_BUFFER_IS_DISCONT (buf)); g_list_foreach (buffers, (GFunc) gst_mini_object_unref, NULL); g_list_free (buffers);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/elements/test_http_src.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/elements/test_http_src.c
Changed
@@ -188,7 +188,7 @@ "Value of the User-Agent HTTP request header field", DEFAULT_USER_AGENT, G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS)); - gst_element_class_set_metadata (gstelement_class, + gst_element_class_set_static_metadata (gstelement_class, "Test HTTP source element for unit tests", "Source/Network", "Use in unit tests", "Alex Ashley <alex.ashley@youview.com>");
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/elements/unixfd.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/elements/unixfd.c
Changed
@@ -224,6 +224,76 @@ GST_END_TEST; +GST_START_TEST (test_unixfd_copy) +{ + GError *error = NULL; + + /* Ensure we don't have socket from previous failed test */ + gchar *tempdir = g_dir_make_tmp ("unixfd-test-XXXXXX", &error); + g_assert_no_error (error); + gchar *socket_path = g_strdup_printf ("%s/socket", tempdir); + + GstCaps *caps = gst_caps_new_empty_simple ("video/x-raw"); + + /* Setup service */ + gchar *pipeline_str = + g_strdup_printf + ("appsrc name=src format=time ! unixfdsink socket-path=%s sync=false async=false wait-for-connection=true", + socket_path); + GstElement *pipeline_service = gst_parse_launch (pipeline_str, &error); + g_assert_no_error (error); + fail_unless (gst_element_set_state (pipeline_service, + GST_STATE_PLAYING) == GST_STATE_CHANGE_SUCCESS); + GstElement *appsrc = gst_bin_get_by_name (GST_BIN (pipeline_service), "src"); + gst_object_unref (appsrc); + g_free (pipeline_str); + + /* Setup client */ + pipeline_str = + g_strdup_printf + ("unixfdsrc socket-path=%s ! 
appsink name=sink sync=false async=false", + socket_path); + GstElement *pipeline_client = gst_parse_launch (pipeline_str, &error); + g_assert_no_error (error); + fail_unless (gst_element_set_state (pipeline_client, + GST_STATE_PLAYING) == GST_STATE_CHANGE_SUCCESS); + GstElement *appsink = gst_bin_get_by_name (GST_BIN (pipeline_client), "sink"); + gst_object_unref (appsink); + g_free (pipeline_str); + + /* Send a buffer with system memory */ + GstSegment segment; + gst_segment_init (&segment, GST_FORMAT_TIME); + const char content[] = "Hello world!"; + GstBuffer *buf = gst_buffer_new_memdup (content, strlen (content)); + GstSample *sample = gst_sample_new (buf, caps, &segment, NULL); + gst_app_src_push_sample (GST_APP_SRC (appsrc), sample); + gst_sample_unref (sample); + gst_buffer_unref (buf); + + /* Wait for it */ + sample = gst_app_sink_pull_sample (GST_APP_SINK (appsink)); + buf = gst_sample_get_buffer (sample); + fail_unless (gst_buffer_memcmp (buf, 0, content, strlen (content)) == 0); + gst_sample_unref (sample); + + /* Teardown */ + fail_unless (gst_element_set_state (pipeline_client, + GST_STATE_NULL) == GST_STATE_CHANGE_SUCCESS); + fail_unless (gst_element_set_state (pipeline_service, + GST_STATE_NULL) == GST_STATE_CHANGE_SUCCESS); + + g_rmdir (tempdir); + g_free (tempdir); + + gst_object_unref (pipeline_service); + gst_object_unref (pipeline_client); + g_free (socket_path); + gst_caps_unref (caps); +} + +GST_END_TEST; + GST_START_TEST (test_unixfd_big_payload) { GError *error = NULL; @@ -269,7 +339,8 @@ GstSegment segment; gst_segment_init (&segment, GST_FORMAT_TIME); gst_meta_register_custom_simple ("test_unixfd_big_payload"); - GstBuffer *buf = gst_buffer_new (); + const char content[] = "Hello world!"; + GstBuffer *buf = gst_buffer_new_memdup (content, strlen (content)); GstCustomMeta *meta = gst_buffer_add_custom_meta (buf, "test_unixfd_big_payload"); gst_structure_set (meta->structure, "data", GST_TYPE_BUFFER, indata, NULL); @@ -281,6 +352,7 @@ /* Wait 
for it */ sample = gst_app_sink_pull_sample (GST_APP_SINK (appsink)); buf = gst_sample_get_buffer (sample); + fail_unless (gst_buffer_memcmp (buf, 0, content, strlen (content)) == 0); meta = gst_buffer_get_custom_meta (buf, "test_unixfd_big_payload"); fail_unless (meta != NULL); GstBuffer *outdata = NULL; @@ -325,6 +397,7 @@ suite_add_tcase (s, tc); tcase_add_test (tc, test_unixfd_videotestsrc); tcase_add_test (tc, test_unixfd_segment); + tcase_add_test (tc, test_unixfd_copy); tcase_add_test (tc, test_unixfd_big_payload); return s;
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/elements/vkcolorconvert.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/elements/vkcolorconvert.c
Changed
@@ -125,7 +125,6 @@ suite_add_tcase (s, tc_basic); - /* FIXME: CI doesn't have a software vulkan renderer (and none exists currently) */ instance = gst_vulkan_instance_new (); have_instance = gst_vulkan_instance_open (instance, NULL); gst_object_unref (instance);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/elements/vkdeviceprovider.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/elements/vkdeviceprovider.c
Changed
@@ -106,7 +106,6 @@ suite_add_tcase (s, tc_basic); - /* FIXME: CI doesn't have a software vulkan renderer (and none exists currently) */ instance = gst_vulkan_instance_new (); have_instance = gst_vulkan_instance_open (instance, NULL); gst_object_unref (instance);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/elements/vkupload.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/elements/vkupload.c
Changed
@@ -28,62 +28,163 @@ #include <gst/check/gstcheck.h> #include <gst/check/gstharness.h> #include <gst/vulkan/vulkan.h> +#include <stdio.h> -#define SOURCE "videotestsrc num-buffers=1 pattern=blue ! " -#define CAPS "format=NV12, width=320, height=240" +static const gchar *formats[] = { "NV12", "RGBA" }; -static void -check_output_buffer (GstBuffer * buf) +static const struct { - GstMapInfo mapinfo; - guint i; - - fail_unless (gst_buffer_map (buf, &mapinfo, GST_MAP_READ)); - - /* Check for a 320x240 blue square in NV12 format */ - /* Y */ - for (i = 0; i < 0x12c00; i++) - fail_unless (mapinfo.data[i] == 0x29); - /* UV */ - for (i = 0x12c00; i < 0x1c1f0; i++) - fail_unless (mapinfo.data[i] == 0xf0 && mapinfo.data[++i] == 0x6e); - gst_buffer_unmap (buf, &mapinfo); + guint width; + guint height; +} resolutions[] = { + {320, 240}, + {640, 480}, + {15, 10}, + {128, 96}, + {256, 144}, + {349, 287}, + {352, 289}, +}; + +static gboolean +cmp_buffers (GstBuffer * buf1, GstBuffer * buf2, const GstVideoInfo * info) +{ + GstVideoFrame frame1, frame2; + gint comp[GST_VIDEO_MAX_COMPONENTS], stride1, stride2; + guint32 width, height; + gboolean ret = FALSE; + + fail_unless (gst_video_frame_map (&frame1, info, buf1, GST_MAP_READ)); + fail_unless (gst_video_frame_map (&frame2, info, buf2, GST_MAP_READ)); + + for (int plane = 0; plane < GST_VIDEO_INFO_N_PLANES (info); plane++) { + guint8 *row1, *row2; + + gst_video_format_info_component (info->finfo, plane, comp); + + width = GST_VIDEO_INFO_COMP_WIDTH (info, comp[0]) + * GST_VIDEO_INFO_COMP_PSTRIDE (info, comp[0]); + /* some tiled formats might have 0 pixel stride */ + if (width == 0) { + width = MIN (GST_VIDEO_INFO_COMP_PSTRIDE (&frame1.info, plane), + GST_VIDEO_INFO_COMP_PSTRIDE (&frame2.info, plane)); + } + height = GST_VIDEO_INFO_COMP_HEIGHT (info, comp[0]); + + stride1 = GST_VIDEO_INFO_PLANE_STRIDE (&frame1.info, plane); + stride2 = GST_VIDEO_INFO_PLANE_STRIDE (&frame2.info, plane); + + row1 = frame1.data[plane]; + row2 = frame2.data[plane]; + + for 
(int i = 0; i < height; i++) { + GST_MEMDUMP ("input row:", row1, width); + GST_MEMDUMP ("output row:", row2, width); + + if (memcmp (row1, row2, width) != 0) + goto bail; + + row1 += stride1; + row2 += stride2; + } + } + + ret = TRUE; + +bail: + gst_video_frame_unmap (&frame1); + gst_video_frame_unmap (&frame2); + + return ret; } -GST_START_TEST (test_vulkan_upload_buffer) +static gboolean +run_test (const gchar * launchline, const gchar * format, guint width, + guint height, const gchar * sink_caps_str) { - GstHarness *h; - GstBuffer *buf; + GstHarness *h_src, *h_el = NULL; + GstBuffer *inbuf, *outbuf; + GstCaps *src_caps, *caps = NULL; + GstVideoInfo src_info; + gboolean ret = FALSE; + + src_caps = + gst_caps_new_simple ("video/x-raw", "format", G_TYPE_STRING, format, + "width", G_TYPE_INT, width, "height", G_TYPE_INT, height, NULL); + + if (!gst_video_info_from_caps (&src_info, src_caps)) + return FALSE; + + h_src = gst_harness_new_parse ("videotestsrc num-buffers=1 pattern=blue"); + gst_harness_set_sink_caps (h_src, src_caps); + + gst_harness_play (h_src); + while (TRUE) { + GstEvent *event = gst_harness_pull_event (h_src); + if (!event) + break; + if (GST_EVENT_TYPE (event) == GST_EVENT_CAPS) + gst_event_parse_caps (event, &caps); + if (caps) + caps = gst_caps_ref (caps); + gst_event_unref (event); + if (caps) + break; + } - h = gst_harness_new_parse (SOURCE "vulkanupload"); - gst_harness_set_sink_caps_str (h, "video/x-raw(memory:VulkanBuffer), " CAPS); - gst_harness_play (h); + if (!caps) + goto bail; - buf = gst_harness_pull (h); - ck_assert (buf); - check_output_buffer (buf); + inbuf = gst_harness_pull (h_src); + if (!inbuf) + goto bail; - gst_buffer_unref (buf); - gst_harness_teardown (h); + h_el = gst_harness_new_parse (launchline); + + gst_harness_set_src_caps (h_el, caps); + gst_harness_set_sink_caps_str (h_el, sink_caps_str); + + outbuf = gst_harness_push_and_pull (h_el, inbuf); + if (!outbuf) + goto bail; + + GST_INFO ("Testing format: %s %dx%d", 
format, width, height); + + ret = cmp_buffers (inbuf, outbuf, &src_info); + + gst_buffer_unref (outbuf); + +bail: + if (h_el) + gst_harness_teardown (h_el); + gst_harness_teardown (h_src); + + return ret; +} + +GST_START_TEST (test_vulkan_upload_buffer) +{ + for (int i = 0; i < G_N_ELEMENTS (formats); i++) { + for (int j = 0; j < G_N_ELEMENTS (resolutions); j++) { + fail_unless (run_test ("vulkanupload", formats[i], resolutions[j].width, + resolutions[j].height, "video/x-raw(memory:VulkanBuffer)")); + } + } } GST_END_TEST; GST_START_TEST (test_vulkan_upload_image) { - GstHarness *h; - GstBuffer *buf; - h = gst_harness_new_parse (SOURCE "vulkanupload ! vulkandownload"); - gst_harness_set_sink_caps_str (h, "video/x-raw, " CAPS); - gst_harness_play (h); - - buf = gst_harness_pull (h); - ck_assert (buf); - check_output_buffer (buf); - - gst_buffer_unref (buf); - gst_harness_teardown (h); + for (int i = 0; i < G_N_ELEMENTS (formats); i++) { + for (int j = 0; j < G_N_ELEMENTS (resolutions); j++) { + fail_unless (run_test + ("vulkanupload ! video/x-raw(memory:VulkanImage) ! vulkandownload", + formats[i], resolutions[j].width, resolutions[j].height, + "video/x-raw")); + } + } } GST_END_TEST; @@ -98,7 +199,6 @@ suite_add_tcase (s, tc_basic); - /* FIXME: CI doesn't have a software vulkan renderer (and none exists currently) */ instance = gst_vulkan_instance_new (); have_instance = gst_vulkan_instance_open (instance, NULL); gst_object_unref (instance);
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/elements/vmaf.c
Added
@@ -0,0 +1,180 @@ +/* GStreamer + * + * Copyright (C) 2025 Fluendo S.A. <contact@fluendo.com> + * Authors: Diego Nieto <dnieto@fluendo.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif +#include <gst/gst.h> +#include <gst/check/gstcheck.h> +#include <gst/video/video.h> +#include <string.h> + +typedef struct +{ + GMainLoop *loop; + gboolean eos; + gboolean score_received; + GstStructure *score_structure; + gdouble vmaf_score; +} TestData; + +static void +on_element_message (GstBus * bus, GstMessage * message, gpointer user_data) +{ + TestData *data = (TestData *) user_data; + const GstStructure *structure; + + if (GST_MESSAGE_TYPE (message) != GST_MESSAGE_ELEMENT) + return; + + structure = gst_message_get_structure (message); + if (gst_structure_has_name (structure, "VMAF")) { + data->score_received = TRUE; + if (data->score_structure) + gst_structure_free (data->score_structure); + data->score_structure = gst_structure_copy (structure); + + if (gst_structure_get_double (structure, "score", &data->vmaf_score)) { + GST_DEBUG ("Received VMAF score: %f", data->vmaf_score); + } + } +} + +static void +on_message_cb (GstBus * bus, GstMessage * message, gpointer user_data) +{ + TestData *data = (TestData *) 
user_data; + + switch (GST_MESSAGE_TYPE (message)) { + case GST_MESSAGE_ERROR: + case GST_MESSAGE_WARNING: + g_assert_not_reached (); + break; + case GST_MESSAGE_EOS: + g_main_loop_quit (data->loop); + data->eos = TRUE; + break; + case GST_MESSAGE_ELEMENT: + on_element_message (bus, message, user_data); + break; + default: + break; + } +} + +static void +run_vmaf_test (const gchar * pipeline_string, gboolean check_additional_metrics) +{ + GstElement *pipeline; + GstBus *bus; + GMainLoop *loop; + TestData data = { NULL, }; + GstStateChangeReturn ret; + + GST_DEBUG ("Testing VMAF pipeline"); + + pipeline = gst_parse_launch (pipeline_string, NULL); + fail_unless (pipeline != NULL); + g_object_set (G_OBJECT (pipeline), "async-handling", TRUE, NULL); + + loop = g_main_loop_new (NULL, FALSE); + + bus = gst_element_get_bus (pipeline); + fail_unless (bus != NULL); + gst_bus_add_signal_watch (bus); + + data.loop = loop; + data.eos = FALSE; + data.score_received = FALSE; + data.score_structure = NULL; + + g_signal_connect (bus, "message", (GCallback) on_message_cb, &data); + + ret = gst_element_set_state (pipeline, GST_STATE_PLAYING); + fail_unless (ret == GST_STATE_CHANGE_SUCCESS + || ret == GST_STATE_CHANGE_ASYNC); + + g_main_loop_run (loop); + + fail_unless (gst_element_set_state (pipeline, + GST_STATE_NULL) == GST_STATE_CHANGE_SUCCESS); + fail_unless (data.eos == TRUE); + + fail_unless (data.score_received, "Score message was not received"); + fail_unless (data.score_structure != NULL, "Score structure is NULL"); + + fail_unless (gst_structure_has_name (data.score_structure, "VMAF")); + fail_unless (gst_structure_has_field_typed (data.score_structure, "timestamp", + G_TYPE_UINT64)); + fail_unless (gst_structure_has_field_typed (data.score_structure, + "stream-time", G_TYPE_UINT64)); + fail_unless (gst_structure_has_field_typed (data.score_structure, + "running-time", G_TYPE_UINT64)); + fail_unless (gst_structure_has_field_typed (data.score_structure, "duration", + 
G_TYPE_UINT64)); + fail_unless (gst_structure_has_field_typed (data.score_structure, "score", + G_TYPE_DOUBLE)); + fail_unless (gst_structure_has_field_typed (data.score_structure, "type", + G_TYPE_STRING)); + + if (data.score_structure) + gst_structure_free (data.score_structure); + + gst_object_unref (pipeline); + g_main_loop_unref (loop); + gst_bus_remove_signal_watch (bus); + gst_object_unref (bus); +} + +GST_START_TEST (test_vmaf_identical_frames) +{ + gchar *pipeline; + + pipeline = + g_strdup_printf + ("videotestsrc num-buffers=5 pattern=solid-color foreground-color=0x00ff0000 ! " + "video/x-raw,format=I420,width=320,height=180,framerate=25/1 ! v.ref_sink " + "vmaf name=v frame-message=true threads=0 ! " "fakesink " + "videotestsrc num-buffers=5 pattern=solid-color foreground-color=0x00ff0000 ! " + "video/x-raw,format=I420,width=320,height=180,framerate=25/1 ! " + "v.dist_sink"); + + run_vmaf_test (pipeline, FALSE); + g_free (pipeline); +} + +GST_END_TEST; + +static Suite * +vmaf_suite (void) +{ + Suite *s = suite_create ("vmaf"); + TCase *tc = tcase_create ("general"); + + suite_add_tcase (s, tc); + tcase_set_timeout (tc, 60); + + tcase_add_test (tc, test_vmaf_identical_frames); + + return s; +} + +GST_CHECK_MAIN (vmaf);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/elements/webrtcbin.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/elements/webrtcbin.c
Changed
@@ -83,6 +83,7 @@ gulong error_signal_handler_id; gpointer user_data; GDestroyNotify data_notify; + GCond add_candidate_result_cond; /* *INDENT-OFF* */ void (*on_negotiation_needed) (struct test_webrtc * t, GstElement * element, @@ -233,7 +234,7 @@ g_mutex_lock (&t->lock); - g_assert (t->answer_desc == NULL); + g_assert_null (t->answer_desc); t->answer_desc = answer; if (t->on_answer_created) { @@ -316,7 +317,7 @@ g_mutex_lock (&t->lock); - g_assert (t->offer_desc == NULL); + g_assert_null (t->offer_desc); t->offer_desc = offer; if (t->on_offer_created) { @@ -375,8 +376,8 @@ { gchar *dump_name = g_strconcat (GST_OBJECT_NAME (msg->src), "-state_changed-", - gst_element_state_get_name (old), "_", - gst_element_state_get_name (new), NULL); + gst_state_get_name (old), "_", + gst_state_get_name (new), NULL); GST_DEBUG_BIN_TO_DOT_FILE_WITH_TS (GST_BIN (msg->src), GST_DEBUG_GRAPH_SHOW_ALL, dump_name); g_free (dump_name); @@ -651,6 +652,7 @@ g_mutex_init (&ret->lock); g_cond_init (&ret->cond); + g_cond_init (&ret->add_candidate_result_cond); ret->states = g_array_new (FALSE, TRUE, sizeof (TestState)); @@ -793,6 +795,7 @@ g_mutex_clear (&t->lock); g_cond_clear (&t->cond); + g_cond_clear (&t->add_candidate_result_cond); g_array_free (t->states, TRUE); t->states = NULL; @@ -1754,6 +1757,7 @@ guint port; guint64 priority; gchar *address, *candidateType, *protocol; + gchar *foundation, *username_fragment; fail_unless (gst_structure_get (s, "address", G_TYPE_STRING, &address, NULL)); fail_unless (gst_structure_get (s, "port", G_TYPE_UINT, &port, NULL)); @@ -1765,9 +1769,54 @@ fail_unless (strcmp (protocol, "udp") || strcmp (protocol, "tcp")); + fail_unless (gst_structure_get (s, "foundation", G_TYPE_STRING, &foundation, + NULL)); + fail_unless (gst_structure_get (s, "username-fragment", G_TYPE_STRING, + &username_fragment, NULL)); + + if (strcmp (candidateType, "host")) { + guint related_port; + gchar *related_address; + fail_unless (gst_structure_get (s, "related-address", 
G_TYPE_STRING, + &related_address, NULL)); + fail_unless (gst_structure_get (s, "related-port", G_TYPE_UINT, + &related_port, NULL)); + g_free (related_address); + } else { + fail_if (gst_structure_has_field (s, "related-address")); + fail_if (gst_structure_has_field (s, "related-port")); + } + + if (!strcmp (protocol, "tcp")) { + GstWebRTCICETcpCandidateType tcp_type; + fail_unless (gst_structure_get (s, "tcp-type", + GST_TYPE_WEBRTC_ICE_TCP_CANDIDATE_TYPE, &tcp_type, NULL)); + fail_if (tcp_type == GST_WEBRTC_ICE_TCP_CANDIDATE_TYPE_NONE); + } else { + fail_if (gst_structure_has_field (s, "tcp-type")); + } g_free (address); g_free (candidateType); g_free (protocol); + g_free (foundation); + g_free (username_fragment); +} + +static void +validate_transport_stats (const GstStructure * s, const GstStructure * stats) +{ + gchar *selected_candidate_pair_id; + GstWebRTCDTLSTransportState state; + GstWebRTCDTLSRole dtls_role; + + fail_unless (gst_structure_get (s, "selected-candidate-pair-id", + G_TYPE_STRING, &selected_candidate_pair_id, NULL)); + fail_unless (gst_structure_get (s, "dtls-state", + GST_TYPE_WEBRTC_DTLS_TRANSPORT_STATE, &state, NULL)); + fail_unless (gst_structure_get (s, "dtls-role", GST_TYPE_WEBRTC_DTLS_ROLE, + &dtls_role, NULL)); + + g_free (selected_candidate_pair_id); } static void @@ -1833,6 +1882,7 @@ } else if (type == GST_WEBRTC_STATS_DATA_CHANNEL) { } else if (type == GST_WEBRTC_STATS_STREAM) { } else if (type == GST_WEBRTC_STATS_TRANSPORT) { + validate_transport_stats (s, stats); } else if (type == GST_WEBRTC_STATS_CANDIDATE_PAIR) { } else if (type == GST_WEBRTC_STATS_LOCAL_CANDIDATE) { validate_candidate_stats (s, stats); @@ -2930,6 +2980,100 @@ GST_END_TEST; +typedef struct _stat_type_find_data +{ + GstWebRTCStatsType t; + GstStructure *res; +} stat_type_find_data; + +static gboolean +find_typed_stat (const GstIdStr * id, const GValue * value, gpointer user_data) +{ + stat_type_find_data *data = (stat_type_find_data *) user_data; + + if 
(!GST_VALUE_HOLDS_STRUCTURE (value)) + return TRUE; + + const GstStructure *structure = gst_value_get_structure (value); + GstWebRTCStatsType statsType; + if (!gst_structure_get (structure, "type", GST_TYPE_WEBRTC_STATS_TYPE, + &statsType, NULL)) + return TRUE; + + if (statsType == data->t) { + data->res = gst_structure_copy (structure); + return FALSE; + } + return TRUE; +} + +static GstStructure * +get_typed_stats (GstElement * webrtcbin, GstWebRTCStatsType t) +{ + GstPromise *p; + GstPromiseResult res; + stat_type_find_data data; + + data.t = t; + data.res = NULL; + + p = gst_promise_new (); + g_signal_emit_by_name (webrtcbin, "get-stats", NULL, p); + res = gst_promise_wait (p); + fail_unless_equals_int (res, GST_PROMISE_RESULT_REPLIED); + + gst_structure_foreach_id_str (gst_promise_get_reply (p), find_typed_stat, + &data); + gst_promise_unref (p); + return data.res; +} + +GST_START_TEST (test_data_channel_ice_stats) +{ + struct test_webrtc *t = test_webrtc_new (); + GObject *channel = NULL; + VAL_SDP_INIT (media_count, _count_num_sdp_media, GUINT_TO_POINTER (1), NULL); + VAL_SDP_INIT (offer, on_sdp_has_datachannel, NULL, &media_count); + GstStructure *local_ice_cand_stats, *remote_ice_cand_stats; + + t->on_negotiation_needed = NULL; + t->on_ice_candidate = NULL; + t->on_prepare_data_channel = have_prepare_data_channel; + t->on_data_channel = signal_data_channel; + + fail_if (gst_element_set_state (t->webrtc1, GST_STATE_READY) == + GST_STATE_CHANGE_FAILURE); + fail_if (gst_element_set_state (t->webrtc2, GST_STATE_READY) == + GST_STATE_CHANGE_FAILURE); + + g_signal_emit_by_name (t->webrtc1, "create-data-channel", "label", NULL, + &channel); + g_assert_nonnull (channel); + + fail_if (gst_element_set_state (t->webrtc1, GST_STATE_PLAYING) == + GST_STATE_CHANGE_FAILURE); + fail_if (gst_element_set_state (t->webrtc2, GST_STATE_PLAYING) == + GST_STATE_CHANGE_FAILURE); + + /* Wait SCTP transport creation */ + test_validate_sdp_full (t, &offer, &offer, 1 << STATE_CUSTOM, 
FALSE); + + local_ice_cand_stats = + get_typed_stats (t->webrtc1, GST_WEBRTC_STATS_LOCAL_CANDIDATE); + remote_ice_cand_stats = + get_typed_stats (t->webrtc1, GST_WEBRTC_STATS_REMOTE_CANDIDATE); + + g_assert_nonnull (local_ice_cand_stats); + g_assert_nonnull (remote_ice_cand_stats); + gst_structure_free (local_ice_cand_stats); + gst_structure_free (remote_ice_cand_stats); + + g_object_unref (channel); + test_webrtc_free (t); +} + +GST_END_TEST; + static void _count_non_rejected_media (struct test_webrtc *t, GstElement * element, GstWebRTCSessionDescription * sd, gpointer user_data) @@ -5050,6 +5194,241 @@ GST_END_TEST; +static GstWebRTCSessionDescription * +remove_sdp_attributes (const GstWebRTCSessionDescription * desc, guint n_attrs, + const char *const *attributes) +{ + GstSDPMessage *sdp; + guint total_medias = gst_sdp_message_medias_len (desc->sdp); + + gst_sdp_message_copy (desc->sdp, &sdp); + for (guint i = 0; i < total_medias; i++) { + gst_sdp_message_remove_media (sdp, i); + } + + for (guint i = 0; i < total_medias; i++) { + const GstSDPMedia *media = gst_sdp_message_get_media (desc->sdp, i); + guint total_attributes = gst_sdp_media_attributes_len (media); + GstSDPMedia *new_media; + + gst_sdp_media_copy (media, &new_media); + for (guint ii = 0; ii < total_attributes; ii++) { + const GstSDPAttribute *attribute = + gst_sdp_media_get_attribute (new_media, ii); + for (guint iii = 0; iii < n_attrs; iii++) { + if (!g_strcmp0 (attribute->key, attributes[iii])) { + gst_sdp_media_remove_attribute (new_media, ii); + ii--; + total_attributes--; + break; + } + } + } + gst_sdp_message_add_media (sdp, new_media); + gst_sdp_media_free (new_media); + } + + return gst_webrtc_session_description_new (desc->type, sdp); +} + +static void +do_missing_mid_test (gboolean is_offer) +{ + struct test_webrtc *t = test_webrtc_new (); + const gchar *attributes_to_remove[] = { "mid", "group" }; + GstWebRTCSessionDescription *modified_desc = NULL; + GstPromise *promise; + GstPromiseResult 
res; + const GstStructure *s; + GstWebRTCSessionDescription *desc; + GstHarness *h1; + + t->on_negotiation_needed = NULL; + t->on_ice_candidate = NULL; + t->on_pad_added = _pad_added_fakesink; + + h1 = gst_harness_new_with_element (t->webrtc1, "sink_0", NULL); + add_audio_test_src_harness (h1, 0xDEADBEEF); + t->harnesses = g_list_prepend (t->harnesses, h1); + + fail_if (gst_element_set_state (t->webrtc1, GST_STATE_READY) == + GST_STATE_CHANGE_FAILURE); + fail_if (gst_element_set_state (t->webrtc2, GST_STATE_READY) == + GST_STATE_CHANGE_FAILURE); + + promise = gst_promise_new (); + g_signal_emit_by_name (t->webrtc1, "create-offer", NULL, promise); + res = gst_promise_wait (promise); + fail_unless_equals_int (res, GST_PROMISE_RESULT_REPLIED); + s = gst_promise_get_reply (promise); + fail_unless (s != NULL); + fail_if (gst_structure_has_field (s, "error")); + gst_structure_get (s, "offer", GST_TYPE_WEBRTC_SESSION_DESCRIPTION, &desc, + NULL); + fail_unless (desc != NULL); + gst_promise_unref (promise); + + promise = gst_promise_new (); + g_signal_emit_by_name (t->webrtc1, "set-local-description", desc, promise); + res = gst_promise_wait (promise); + fail_unless_equals_int (res, GST_PROMISE_RESULT_REPLIED); + s = gst_promise_get_reply (promise); + fail_if (s && gst_structure_has_field (s, "error")); + gst_promise_unref (promise); + + if (is_offer) { + modified_desc = + remove_sdp_attributes (desc, G_N_ELEMENTS (attributes_to_remove), + attributes_to_remove); + } + promise = gst_promise_new (); + g_signal_emit_by_name (t->webrtc2, "set-remote-description", + modified_desc ? 
modified_desc : desc, promise); + res = gst_promise_wait (promise); + fail_unless_equals_int (res, GST_PROMISE_RESULT_REPLIED); + s = gst_promise_get_reply (promise); + fail_if (s && gst_structure_has_field (s, "error")); + gst_promise_unref (promise); + + g_clear_pointer (&modified_desc, gst_webrtc_session_description_free); + gst_webrtc_session_description_free (desc); + + promise = gst_promise_new (); + g_signal_emit_by_name (t->webrtc2, "create-answer", NULL, promise); + res = gst_promise_wait (promise); + fail_unless_equals_int (res, GST_PROMISE_RESULT_REPLIED); + s = gst_promise_get_reply (promise); + fail_unless (s != NULL); + s = gst_promise_get_reply (promise); + fail_unless (s != NULL); + fail_if (gst_structure_has_field (s, "error")); + gst_structure_get (s, "answer", GST_TYPE_WEBRTC_SESSION_DESCRIPTION, &desc, + NULL); + fail_unless (desc != NULL); + gst_promise_unref (promise); + + promise = gst_promise_new (); + g_signal_emit_by_name (t->webrtc2, "set-local-description", desc, promise); + res = gst_promise_wait (promise); + fail_unless_equals_int (res, GST_PROMISE_RESULT_REPLIED); + s = gst_promise_get_reply (promise); + fail_if (s && gst_structure_has_field (s, "error")); + gst_promise_unref (promise); + + if (!is_offer) { + modified_desc = + remove_sdp_attributes (desc, G_N_ELEMENTS (attributes_to_remove), + attributes_to_remove); + } + promise = gst_promise_new (); + g_signal_emit_by_name (t->webrtc1, "set-remote-description", + modified_desc ? 
modified_desc : desc, promise); + res = gst_promise_wait (promise); + fail_unless_equals_int (res, GST_PROMISE_RESULT_REPLIED); + s = gst_promise_get_reply (promise); + fail_if (s && gst_structure_has_field (s, "error")); + gst_promise_unref (promise); + + g_clear_pointer (&modified_desc, gst_webrtc_session_description_free); + gst_webrtc_session_description_free (desc); + + fail_if (gst_element_set_state (t->webrtc1, GST_STATE_PLAYING) == + GST_STATE_CHANGE_FAILURE); + fail_if (gst_element_set_state (t->webrtc2, GST_STATE_PLAYING) == + GST_STATE_CHANGE_FAILURE); + + test_webrtc_free (t); +} + +GST_START_TEST (test_missing_mid_in_offer) +{ + do_missing_mid_test (TRUE); +} + +GST_END_TEST; + +GST_START_TEST (test_missing_mid_in_answer) +{ + do_missing_mid_test (FALSE); +} + +GST_END_TEST; + +static void +assert_promise_raises_invalid_state_error (GstPromise * p, + const gchar * expected_message) +{ + GstPromiseResult res; + const GstStructure *reply; + GError *error = NULL; + + res = gst_promise_wait (p); + fail_unless (res == GST_PROMISE_RESULT_REPLIED); + reply = gst_promise_get_reply (p); + fail_unless (reply != NULL); + fail_unless (gst_structure_has_field_typed (reply, "error", G_TYPE_ERROR)); + gst_structure_get (reply, "error", G_TYPE_ERROR, &error, NULL); + fail_unless (g_error_matches (error, GST_WEBRTC_ERROR, + GST_WEBRTC_ERROR_INVALID_STATE)); + fail_unless_matches_string (error->message, expected_message); + g_clear_error (&error); + gst_promise_unref (p); +} + +GST_START_TEST (test_using_webrtcbin_once_closed) +{ + GstElement *webrtcbin = gst_element_factory_make ("webrtcbin", NULL); + GstPromise *p; + GstPromiseResult res; + GstWebRTCDataChannel *channel; + + /* Closing an already closed connection should fail. */ + p = gst_promise_new (); + g_signal_emit_by_name (webrtcbin, "close", p); + assert_promise_raises_invalid_state_error (p, "Connection is already closed"); + + /* Create an offer, that shouldn't fail. 
*/ + p = gst_promise_new (); + g_signal_emit_by_name (webrtcbin, "create-offer", NULL, p); + res = gst_promise_wait (p); + fail_unless (res == GST_PROMISE_RESULT_REPLIED); + gst_promise_unref (p); + + /* Close the connection shouldn't fail now. */ + p = gst_promise_new (); + g_signal_emit_by_name (webrtcbin, "close", p); + res = gst_promise_wait (p); + fail_unless (res == GST_PROMISE_RESULT_REPLIED); + gst_promise_unref (p); + + /* Creating an offer on a closed connection should fail. */ + p = gst_promise_new (); + g_signal_emit_by_name (webrtcbin, "create-offer", NULL, p); + assert_promise_raises_invalid_state_error (p, + "Could not create offer. webrtcbin is closed"); + + /* Creating an answer on a closed connection should fail. */ + p = gst_promise_new (); + g_signal_emit_by_name (webrtcbin, "create-answer", NULL, p); + assert_promise_raises_invalid_state_error (p, + "Could not create answer. webrtcbin is closed"); + + /* Creating a data-channel on a closed connection should fail. */ + g_signal_emit_by_name (webrtcbin, "create-data-channel", "label", NULL, + &channel); + fail_unless (channel == NULL); + + p = gst_promise_new (); + g_signal_emit_by_name (webrtcbin, "add-ice-candidate-full", 0, + "a=candidate:1 foo", p); + assert_promise_raises_invalid_state_error (p, + "Could not add ICE candidate. webrtcbin is closed"); + + gst_object_unref (webrtcbin); +} + +GST_END_TEST; + static void new_jitterbuffer_set_fast_start (GstElement * rtpbin, GstElement * rtpjitterbuffer, guint session_id, guint ssrc, @@ -5772,7 +6151,7 @@ /* take up to either space or nul-terminator */ while (p && *p && *p != ' ') p++; - g_assert (v != p); + g_assert_true (v != p); v = g_strndup (v, p - v); GST_INFO ("rid = %s", v); @@ -6551,6 +6930,264 @@ GST_END_TEST; +/* Using different ice-ufrag in bundled medias is allowed as long as they don't share the same ice-pwd. 
*/ +GST_START_TEST (test_bundle_with_different_ice_credentials) +{ + GstPromise *promise; + struct test_webrtc *t = test_webrtc_new (); + const gchar *sdp_str = "v=0\r\n\ +o=- 4962303333179871722 1 IN IP4 0.0.0.0\r\n\ +s=-\r\n\ +t=0 0\r\n\ +a=ice-options:trickle\r\n\ +a=group:BUNDLE a1 v1\r\n\ +m=audio 10100 UDP/TLS/RTP/SAVPF 96\r\n\ +c=IN IP4 0.0.0.0\r\n\ +a=mid:a1\r\n\ +a=sendrecv\r\n\ +a=rtpmap:96 opus/48000/2\r\n\ +a=extmap:1 urn:ietf:params:rtp-hdrext:sdes:mid\r\n\ +a=extmap:2 urn:ietf:params:rtp-hdrext:ssrc-audio-level\r\n\ +a=msid:47017fee-b6c1-4162-929c-a25110252400 f83006c5-a0ff-4e0a-9ed9-d3e6747be7d9\r\n\ +a=ice-ufrag:ETEn\r\n\ +a=ice-pwd:OtSK0WpNtpUjkY4+86js7ZQl\r\n\ +a=fingerprint:sha-256 19:E2:1C:3B:4B:9F:81:E6:B8:5C:F4:A5:A8:D8:73:04:BB:05:2F:70:9F:04:A9:0E:05:E9:26:33:E8:70:88:A2\r\n\ +a=setup:actpass\r\n\ +a=rtcp-mux\r\n\ +a=rtcp-rsize\r\n\ +m=video 10102 UDP/TLS/RTP/SAVPF 100\r\n\ +c=IN IP4 0.0.0.0\r\n\ +a=mid:v1\r\n\ +a=sendrecv\r\n\ +a=rtpmap:100 VP8/90000\r\n\ +a=extmap:1 urn:ietf:params:rtp-hdrext:sdes:mid\r\n\ +a=msid:47017fee-b6c1-4162-929c-a25110252400 f30bdb4a-5db8-49b5-bcdc-e0c9a23172e0\r\n\ +a=ice-ufrag:BGKk\r\n\ +a=ice-pwd:mqyWsAjvtKwTGnvhPztQ9mIf\r\n\ +a=fingerprint:sha-256 19:E2:1C:3B:4B:9F:81:E6:B8:5C:F4:A5:A8:D8:73:04:BB:05:2F:70:9F:04:A9:0E:05:E9:26:33:E8:70:88:A2\r\n\ +a=setup:actpass\r\n\ +a=rtcp-mux\r\n\ +a=rtcp-rsize\r\n"; + GstSDPMessage *sdp; + const GstStructure *reply; + + t->on_negotiation_needed = NULL; + t->on_offer_created = NULL; + t->on_answer_created = NULL; + + gst_sdp_message_new_from_text (sdp_str, &sdp); + GstWebRTCSessionDescription *desc = + gst_webrtc_session_description_new (GST_WEBRTC_SDP_TYPE_OFFER, + sdp); + gst_element_set_state (t->webrtc1, GST_STATE_READY); + + promise = gst_promise_new (); + g_signal_emit_by_name (t->webrtc1, "set-remote-description", desc, promise); + gst_promise_wait (promise); + gst_promise_unref (promise); + gst_webrtc_session_description_free (desc); + + promise = gst_promise_new 
(); + g_signal_emit_by_name (t->webrtc1, "create-answer", NULL, promise); + gst_promise_wait (promise); + reply = gst_promise_get_reply (promise); + fail_if (gst_structure_has_field (reply, "error")); + gst_promise_unref (promise); + + test_webrtc_free (t); +} + +GST_END_TEST; + +static void +validate_ice_attr (struct test_webrtc *t, GstElement * element, + const gchar * sdp_str, const gchar * expected_error_message) +{ + GstPromise *promise; + GstSDPMessage *sdp; + const GstStructure *reply; + GstWebRTCSessionDescription *desc; + GError *error = NULL; + + gst_sdp_message_new_from_text (sdp_str, &sdp); + desc = gst_webrtc_session_description_new (GST_WEBRTC_SDP_TYPE_OFFER, sdp); + promise = gst_promise_new (); + g_signal_emit_by_name (t->webrtc1, "set-remote-description", desc, promise); + gst_promise_wait (promise); + reply = gst_promise_get_reply (promise); + if (expected_error_message) { + fail_unless (gst_structure_get (reply, "error", G_TYPE_ERROR, &error, + NULL)); + fail_unless (g_error_matches (error, GST_WEBRTC_ERROR, + GST_WEBRTC_ERROR_SDP_SYNTAX_ERROR)); + fail_unless_equals_string (error->message, expected_error_message); + g_clear_error (&error); + } else { + fail_if (reply != NULL); + } + gst_promise_unref (promise); + gst_webrtc_session_description_free (desc); +} + +GST_START_TEST (test_invalid_ice_attrs) +{ + struct test_webrtc *t = test_webrtc_new (); + const gchar *sdp_preamble = "v=0\r\n\ +o=- 0 3 IN IP4 127.0.0.1\r\n\ +s=-\r\n\ +t=0 0\r\n\ +a=fingerprint:sha-256 A7:24:72:CA:6E:02:55:39:BA:66:DF:6E:CC:4C:D8:B0:1A:BF:1A:56:65:7D:F4:03:AD:7E:77:43:2A:29:EC:93\r\n\ +m=video 1 RTP/SAVPF 100\r\n\ +c=IN IP4 0.0.0.0\r\n\ +a=rtcp-mux\r\n\ +a=sendonly\r\n\ +a=mid:video\r\n\ +a=rtpmap:100 VP8\r\n\ +a=setup:actpass\r\n"; + const gchar *valid_ufrag = "a=ice-ufrag:ETEn\r\n"; + const gchar *valid_pwd = "a=ice-pwd:OtSK0WpNtpUjkY4+86js7Z/l\r\n"; + const gchar *invalid_ufrag = "a=ice-ufrag:ETEn$\r\n"; + const gchar *invalid_pwd = 
"a=ice-pwd:OtSK0WpNtpUjk$Y4+86js7Z/l\r\n"; + const gchar *too_short_ufrag = "a=ice-ufrag:foo\r\n"; + const gchar *too_short_pwd = "a=ice-pwd:thisistooshort\r\n"; + const gchar *invalid_ufrag_error_message = + "media 0 has an invalid \'ice-ufrag\' attribute"; + const gchar *invalid_pwd_error_message = + "media 0 has an invalid \'ice-pwd\' attribute"; + gchar *sdp_str; + + t->on_negotiation_needed = NULL; + t->on_offer_created = NULL; + t->on_answer_created = NULL; + gst_element_set_state (t->webrtc1, GST_STATE_READY); + + sdp_str = g_strconcat (sdp_preamble, invalid_ufrag, valid_pwd, NULL); + validate_ice_attr (t, t->webrtc1, sdp_str, invalid_ufrag_error_message); + g_free (sdp_str); + + sdp_str = g_strconcat (sdp_preamble, valid_ufrag, invalid_pwd, NULL); + validate_ice_attr (t, t->webrtc1, sdp_str, invalid_pwd_error_message); + g_free (sdp_str); + + sdp_str = g_strconcat (sdp_preamble, too_short_ufrag, valid_pwd, NULL); + validate_ice_attr (t, t->webrtc1, sdp_str, invalid_ufrag_error_message); + g_free (sdp_str); + + sdp_str = g_strconcat (sdp_preamble, valid_ufrag, too_short_pwd, NULL); + validate_ice_attr (t, t->webrtc1, sdp_str, invalid_pwd_error_message); + g_free (sdp_str); + + sdp_str = g_strconcat (sdp_preamble, valid_ufrag, valid_pwd, NULL); + validate_ice_attr (t, t->webrtc1, sdp_str, NULL); + g_free (sdp_str); + + test_webrtc_free (t); +} GST_END_TEST; + +static void +_add_ice_candidate_promise_changed (GstPromise * promise, gpointer user_data) +{ + struct test_webrtc *t = user_data; + const GstStructure *reply; + GError *error = NULL; + + reply = gst_promise_get_reply (promise); + fail_unless (gst_structure_get (reply, "error", G_TYPE_ERROR, &error, NULL)); + g_clear_error (&error); + + g_mutex_lock (&t->lock); + g_cond_broadcast (&t->add_candidate_result_cond); + gst_promise_unref (promise); + g_mutex_unlock (&t->lock); +} + +GST_START_TEST (test_mdns_resolve_error) +{ + struct test_webrtc *t = test_webrtc_new (); + GstPromise *promise; + 
GstPromiseResult res;
+  const GstStructure *s;
+  GstWebRTCSessionDescription *desc;
+  GstHarness *h1;
+
+  t->on_negotiation_needed = NULL;
+  t->on_ice_candidate = NULL;
+  t->on_pad_added = _pad_added_fakesink;
+
+  h1 = gst_harness_new_with_element (t->webrtc1, "sink_0", NULL);
+  add_audio_test_src_harness (h1, 0xDEADBEEF);
+  t->harnesses = g_list_prepend (t->harnesses, h1);
+
+  fail_if (gst_element_set_state (t->webrtc1, GST_STATE_READY) ==
+      GST_STATE_CHANGE_FAILURE);
+  fail_if (gst_element_set_state (t->webrtc2, GST_STATE_READY) ==
+      GST_STATE_CHANGE_FAILURE);
+
+  promise = gst_promise_new ();
+  g_signal_emit_by_name (t->webrtc1, "create-offer", NULL, promise);
+  res = gst_promise_wait (promise);
+  fail_unless (res == GST_PROMISE_RESULT_REPLIED);
+  s = gst_promise_get_reply (promise);
+  fail_unless (s != NULL);
+  fail_if (gst_structure_has_field (s, "error"));
+  gst_structure_get (s, "offer", GST_TYPE_WEBRTC_SESSION_DESCRIPTION, &desc,
+      NULL);
+  fail_unless (desc != NULL);
+  gst_promise_unref (promise);
+
+  promise = gst_promise_new ();
+  g_signal_emit_by_name (t->webrtc1, "set-local-description", desc, promise);
+  res = gst_promise_wait (promise);
+  fail_unless_equals_int (res, GST_PROMISE_RESULT_REPLIED);
+  s = gst_promise_get_reply (promise);
+  fail_if (s && gst_structure_has_field (s, "error"));
+  gst_promise_unref (promise);
+
+  promise = gst_promise_new ();
+  g_signal_emit_by_name (t->webrtc2, "set-remote-description", desc, promise);
+  res = gst_promise_wait (promise);
+  fail_unless_equals_int (res, GST_PROMISE_RESULT_REPLIED);
+  s = gst_promise_get_reply (promise);
+  fail_if (s && gst_structure_has_field (s, "error"));
+  gst_promise_unref (promise);
+
+  gst_webrtc_session_description_free (desc);
+
+  promise = gst_promise_new ();
+  g_signal_emit_by_name (t->webrtc2, "create-answer", NULL, promise);
+  res = gst_promise_wait (promise);
+  fail_unless_equals_int (res, GST_PROMISE_RESULT_REPLIED);
+  s = gst_promise_get_reply (promise);
+  fail_unless (s != NULL);
+  fail_if (gst_structure_has_field (s, "error"));
+  gst_structure_get (s, "answer", GST_TYPE_WEBRTC_SESSION_DESCRIPTION, &desc,
+      NULL);
+  fail_unless (desc != NULL);
+  gst_promise_unref (promise);
+
+  promise = gst_promise_new ();
+  g_signal_emit_by_name (t->webrtc2, "set-local-description", desc, promise);
+  res = gst_promise_wait (promise);
+  fail_unless_equals_int (res, GST_PROMISE_RESULT_REPLIED);
+  s = gst_promise_get_reply (promise);
+  fail_if (s && gst_structure_has_field (s, "error"));
+  gst_promise_unref (promise);
+  gst_webrtc_session_description_free (desc);
+
+  g_signal_emit_by_name (t->webrtc2, "add-ice-candidate-full", 0,
+      "a=candidate:0 1 UDP 2122252543 invalid.local 53970 typ host",
+      gst_promise_new_with_change_func (_add_ice_candidate_promise_changed, t,
+          NULL));
+
+  g_mutex_lock (&t->lock);
+  g_cond_wait (&t->add_candidate_result_cond, &t->lock);
+  g_mutex_unlock (&t->lock);
+
+  test_webrtc_free (t);
+}
+
+GST_END_TEST;
+
 static Suite *
 webrtcbin_suite (void)
 {
@@ -6575,6 +7212,7 @@
   tcase_add_test (tc, test_sdp_no_media);
   tcase_add_test (tc, test_session_stats);
   tcase_add_test (tc, test_stats_with_stream);
+  tcase_add_test (tc, test_data_channel_ice_stats);
   if (vp8enc) {
     tcase_add_test (tc, test_stats_with_two_streams);
   } else {
@@ -6630,6 +7268,9 @@
   tcase_add_test (tc, test_sdp_session_setup_attribute);
   tcase_add_test (tc, test_rtp_header_extension_sendonly_recvonly_pair);
   tcase_add_test (tc, test_invalid_bundle_in_pending_remote_description);
+  tcase_add_test (tc, test_missing_mid_in_offer);
+  tcase_add_test (tc, test_missing_mid_in_answer);
+  tcase_add_test (tc, test_using_webrtcbin_once_closed);
   if (sctpenc && sctpdec) {
     tcase_add_test (tc, test_data_channel_create);
     tcase_add_test (tc, test_data_channel_create_two_channels);
@@ -6653,6 +7294,9 @@
   }
   tcase_add_test (tc, test_offer_rollback);
   tcase_add_test (tc, test_video_rtx_no_duplicate_payloads);
+  tcase_add_test (tc, test_bundle_with_different_ice_credentials);
+  tcase_add_test (tc, test_invalid_ice_attrs);
+  tcase_add_test (tc, test_mdns_resolve_error);
   } else {
     GST_WARNING ("Some required elements were not found. "
         "All media tests are disabled. nicesrc %p, nicesink %p, "
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/gst-plugins-bad.supp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/gst-plugins-bad.supp
Changed
@@ -36,7 +36,22 @@
 }
 {
-   Issues in srt library
+   Issues in srt library, variant 1
+   Memcheck:Param
+   sendmsg(msg.msg_control)
+   fun:sendmsg
+   fun:_ZNK3srt8CChannel6sendtoERKNS_12sockaddr_anyERNS_7CPacketES3_
+}
+
+{
+   Issues in srt library, variant 2
+   Memcheck:Param
+   sendmsg(msg.msg_control)
+   fun:_ZNK3srt8CChannel6sendtoERKNS_12sockaddr_anyERNS_7CPacketES3_
+}
+
+{
+   Issues in srt library, variant 2
    Memcheck:Param
    sendmsg(msg.msg_control)
    fun:__libc_sendmsg
@@ -116,3 +131,56 @@
    fun:srtp_element_init
    fun:gst_element_register_srtpenc
 }
+{
+   <The OpenSSL version shipping in Fedora 40 leaks memory, the issue is fixed in F42>
+   Memcheck:Leak
+   match-leak-kinds: indirect
+   fun:malloc
+   fun:CRYPTO_malloc
+   ...
+   fun:tls1_change_cipher_state
+   fun:ossl_statem_client_post_work
+   fun:UnknownInlinedFun
+   fun:state_machine
+}
+{
+   <The OpenSSL version shipping in Fedora 40 leaks memory, the issue is fixed in F42>
+   Memcheck:Leak
+   match-leak-kinds: indirect
+   fun:malloc
+   fun:CRYPTO_malloc
+   ...
+   fun:ossl_ec_key_simple_generate_key
+   fun:ossl_ec_key_gen
+   fun:EC_KEY_generate_key
+}
+
+{
+   <Mesa llvmpipe 0 realloc error, see https://gitlab.freedesktop.org/mesa/mesa/-/issues/13539>
+   Memcheck:ReallocZero
+   fun:realloc
+   fun:llvmpipe_register_texture
+   fun:llvmpipe_create_image_handle
+   fun:lvp_CreateDevice
+   ...
+}
+
+{
+   <libvmaf svm uninitialised value>
+   Memcheck:Cond
+   ...
+   fun:svm_predict_values
+   fun:svm_predict
+   fun:vmaf_predict_score_at_index
+   ...
+}
+
+{
+   <libvmaf svm uninitialised value>
+   Memcheck:Value8
+   ...
+   fun:svm_predict_values
+   fun:svm_predict
+   fun:vmaf_predict_score_at_index
+   ...
+}
\ No newline at end of file
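For readers unfamiliar with the file being patched: each block in a Valgrind `.supp` file is one suppression. A minimal annotated sketch of the format (the names below are illustrative, not taken from this diff):

```
# Lines starting with '#' are comments.
{
   <any free-form suppression name>
   Memcheck:Leak
   match-leak-kinds: indirect
   fun:malloc
   ...
   fun:outermost_interesting_frame
}
```

The second line is `tool:error-kind` (`Leak`, `Cond`, `Value8`, `Param`, ...); `Param` suppressions name the offending syscall parameter on the following line, as in the srt entries above; and a bare `...` frame matches any number of intervening stack frames.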
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/libs/analyticsmeta.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/libs/analyticsmeta.c
Changed
@@ -516,6 +516,96 @@ GST_END_TEST; +GST_START_TEST (test_copy_metas) +{ + GstBuffer *buf, *buf2; + GstAnalyticsRelationMeta *rmeta, *rmeta2; + gboolean ret; + GstAnalyticsODMtd od_mtd1, od_mtd2; + gpointer state = NULL; + GstAnalyticsMtd mtd1, mtd2; + gint x, y, w, h; + gfloat conf; + + buf = gst_buffer_new (); + + rmeta = gst_buffer_add_analytics_relation_meta (buf); + + GQuark type = g_quark_from_string ("dog"); + ret = gst_analytics_relation_meta_add_od_mtd (rmeta, type, 10, 10, + 10, 10, 1.0, &od_mtd1); + fail_unless (ret == TRUE); + + ret = gst_analytics_relation_meta_add_od_mtd (rmeta, type, 20, 20, + 20, 20, 1.0, &od_mtd2); + fail_unless (ret == TRUE); + + fail_unless_equals_int (gst_analytics_relation_get_length (rmeta), 2); + + ret = gst_analytics_relation_meta_set_relation (rmeta, + GST_ANALYTICS_REL_TYPE_RELATE_TO, od_mtd1.id, od_mtd2.id); + fail_unless (ret == TRUE); + + + /* Lets copy it into a new buffer */ + buf2 = gst_buffer_new (); + ret = gst_buffer_copy_into (buf2, buf, GST_BUFFER_COPY_META, 0, -1); + fail_unless (ret == TRUE); + + rmeta2 = gst_buffer_get_analytics_relation_meta (buf2); + fail_unless (rmeta2 != NULL); + + fail_unless_equals_int (gst_analytics_relation_get_length (rmeta2), 2); + + /* First meta */ + ret = gst_analytics_relation_meta_iterate (rmeta2, &state, + GST_ANALYTICS_MTD_TYPE_ANY, &mtd1); + fail_unless (ret == TRUE); + fail_unless_equals_int (od_mtd1.id, mtd1.id); + + + ret = gst_analytics_od_mtd_get_confidence_lvl (&mtd1, &conf); + fail_unless (ret == TRUE); + fail_unless_equals_float (conf, 1.0); + + ret = gst_analytics_od_mtd_get_location (&mtd1, &x, &y, &w, &h, &conf); + fail_unless (ret == TRUE); + fail_unless_equals_int (x, 10); + fail_unless_equals_int (y, 10); + fail_unless_equals_int (w, 10); + fail_unless_equals_int (h, 10); + + /* Second meta */ + ret = gst_analytics_relation_meta_iterate (rmeta2, &state, + GST_ANALYTICS_MTD_TYPE_ANY, &mtd2); + fail_unless (ret == TRUE); + fail_unless_equals_int (od_mtd2.id, 
mtd2.id); + + ret = gst_analytics_od_mtd_get_confidence_lvl (&mtd2, &conf); + fail_unless (ret == TRUE); + fail_unless_equals_float (conf, 1.0); + + ret = gst_analytics_od_mtd_get_location (&mtd2, &x, &y, &w, &h, &conf); + fail_unless (ret == TRUE); + fail_unless_equals_int (x, 20); + fail_unless_equals_int (y, 20); + fail_unless_equals_int (w, 20); + fail_unless_equals_int (h, 20); + + fail_unless_equals_int (gst_analytics_relation_meta_get_relation (rmeta, + mtd1.id, mtd2.id), GST_ANALYTICS_REL_TYPE_RELATE_TO); + + /* No third meta */ + ret = gst_analytics_relation_meta_iterate (rmeta2, &state, + GST_ANALYTICS_MTD_TYPE_ANY, &mtd1); + fail_unless (ret == FALSE); + + gst_buffer_unref (buf); + gst_buffer_unref (buf2); +} + +GST_END_TEST; + GST_START_TEST (test_add_od_meta) { /* Verity we can add Object Detection relatable metadata to a relation @@ -1089,10 +1179,11 @@ GstBuffer *buf1, *buf2; GstAnalyticsRelationMetaInitParams init_params = { 5, 150 }; GstAnalyticsRelationMeta *rmeta; - GstAnalyticsTrackingMtd tracking_mtd; - guint tracking_id; - GstClockTime tracking_observation_time_1; - gboolean ret; + GstAnalyticsTrackingMtd tracking_mtd, tracking_mtd2; + guint64 tracking_id, ret_trk_id; + GstClockTime time_1, time_2, time_ret_f, time_ret_l; + gboolean ret, found = FALSE, lost; + gpointer state = NULL; /* Verify we can add multiple trackings to relation metadata */ @@ -1100,9 +1191,9 @@ buf1 = gst_buffer_new (); rmeta = gst_buffer_add_analytics_relation_meta_full (buf1, &init_params); tracking_id = 1; - tracking_observation_time_1 = GST_BUFFER_TIMESTAMP (buf1); + time_1 = GST_BUFFER_TIMESTAMP (buf1); ret = gst_analytics_relation_meta_add_tracking_mtd (rmeta, tracking_id, - tracking_observation_time_1, &tracking_mtd); + time_1, &tracking_mtd); fail_unless (ret == TRUE); gst_buffer_unref (buf1); @@ -1111,14 +1202,129 @@ rmeta = gst_buffer_add_analytics_relation_meta_full (buf2, &init_params); tracking_id = 1; ret = gst_analytics_relation_meta_add_tracking_mtd 
(rmeta, tracking_id, - tracking_observation_time_1, &tracking_mtd); + time_1, &tracking_mtd); fail_unless (ret == TRUE); + /* add itermadiate tracking point to very first and last are correct */ + time_2 = GST_BUFFER_TIMESTAMP (buf2) + 1; + ret = gst_analytics_tracking_mtd_update_last_seen (&tracking_mtd, time_2); + + /* add last tracking point */ + time_2 += 1; + ret = gst_analytics_tracking_mtd_update_last_seen (&tracking_mtd, time_2); + + /* Verify we can retrieve tracking mtd */ + found = gst_analytics_relation_meta_iterate (rmeta, &state, + GST_ANALYTICS_MTD_TYPE_ANY, &tracking_mtd2); + + /* Verify retrieved mtd is correct */ + fail_unless (found == TRUE); + fail_unless (tracking_mtd2.id == tracking_mtd.id); + fail_unless (tracking_mtd2.meta == tracking_mtd.meta); + + /* Verify specific tracking mtd data */ + gst_analytics_tracking_mtd_get_info (&tracking_mtd2, &ret_trk_id, &time_ret_f, + &time_ret_l, &lost); + fail_unless (tracking_id == ret_trk_id); + fail_unless (time_1 == time_ret_f); + fail_unless (time_2 == time_ret_l); + fail_unless (lost == FALSE); + + /* Set tracking lost */ + gst_analytics_tracking_mtd_set_lost (&tracking_mtd); + + /* Verify tracking lost was updated but other tracking data are still + * available */ + gst_analytics_tracking_mtd_get_info (&tracking_mtd2, &ret_trk_id, &time_ret_f, + &time_ret_l, &lost); + + fail_unless (tracking_id == ret_trk_id); + fail_unless (time_1 == time_ret_f); + fail_unless (time_2 == time_ret_l); + fail_unless (lost == TRUE); + gst_buffer_unref (buf2); } GST_END_TEST; +GST_START_TEST (test_od_trk_relation) +{ + /* Verify we retrive tracking from relation with OD */ + GstBuffer *buf1; + guint64 tracking_id; + GstAnalyticsRelationMetaInitParams init_params = { 5, 150 }; + GstAnalyticsRelationMeta *rmeta; + GstAnalyticsTrackingMtd tracking_mtd, tracking_mtd2; + GstClockTime tracking_observation_time_1; + gboolean ret, found = FALSE; + gpointer state = NULL; + GQuark type = g_quark_from_string ("dog"); + gint x = 
20; + gint y = 20; + gint w = 10; + gint h = 15; + gfloat loc_conf_lvl = 0.6f; + GstAnalyticsODMtd od_mtd, od_mtd2; + + + /* creating a buffer where we add a relation-meta */ + buf1 = gst_buffer_new (); + rmeta = gst_buffer_add_analytics_relation_meta_full (buf1, &init_params); + tracking_id = 1; + tracking_observation_time_1 = GST_BUFFER_TIMESTAMP (buf1); + ret = gst_analytics_relation_meta_add_tracking_mtd (rmeta, tracking_id, + tracking_observation_time_1, &tracking_mtd); + fail_unless (ret == TRUE); + + /* adding object-detection to rmeta */ + ret = gst_analytics_relation_meta_add_od_mtd (rmeta, type, x, y, + w, h, loc_conf_lvl, &od_mtd); + + /* set relation from object-detection to tracking */ + gst_analytics_relation_meta_set_relation (rmeta, + GST_ANALYTICS_REL_TYPE_RELATE_TO, od_mtd.id, tracking_mtd.id); + + /* query for related mtd of any type on od_mtd */ + found = gst_analytics_relation_meta_get_direct_related (rmeta, od_mtd.id, + GST_ANALYTICS_REL_TYPE_RELATE_TO, GST_ANALYTICS_MTD_TYPE_ANY, &state, + &tracking_mtd2); + + fail_unless (found == TRUE); + fail_unless (tracking_mtd2.id == tracking_mtd.id); + fail_unless (tracking_mtd2.meta == tracking_mtd.meta); + + state = NULL; + /* query for related mtd of any type on tracking. */ + found = gst_analytics_relation_meta_get_direct_related (rmeta, + tracking_mtd.id, GST_ANALYTICS_REL_TYPE_RELATE_TO, + GST_ANALYTICS_MTD_TYPE_ANY, &state, &od_mtd2); + + /* since relation are directed and we only set a relation from + * object-detection to tracking, we shouldn't find any relation */ + fail_unless (found == FALSE); + + /* set relation from tracking to object-detection */ + gst_analytics_relation_meta_set_relation (rmeta, + GST_ANALYTICS_REL_TYPE_RELATE_TO, tracking_mtd.id, od_mtd.id); + + state = NULL; + /* query for related mtd of any type on tracking. 
*/
+  found = gst_analytics_relation_meta_get_direct_related (rmeta,
+      tracking_mtd.id, GST_ANALYTICS_REL_TYPE_RELATE_TO,
+      GST_ANALYTICS_MTD_TYPE_ANY, &state, &od_mtd2);
+
+  /* now we should find as it was added */
+  fail_unless (found == TRUE);
+  fail_unless (od_mtd2.id == od_mtd.id);
+  fail_unless (od_mtd2.meta == od_mtd.meta);
+
+  gst_buffer_unref (buf1);
+}
+
+GST_END_TEST;
+
 GST_START_TEST (test_verify_mtd_clear)
 {
   /* This test use segmentation mtd but it's a general functionality of
@@ -1531,7 +1737,7 @@
   /* Verify segmentation analytics-meta and associated classification
    * match truth vectors */
   gsize idx;
-  GstBufferMapInfo mmap_info;   /* mask map info */
+  GstMapInfo mmap_info;         /* mask map info */
   gst_buffer_map (mbuf, &mmap_info, GST_MAP_READ);
   for (gsize r = 0; r < 24; r++) {
     gsize mr = r / 2;
@@ -1581,6 +1787,326 @@
 GST_END_TEST;
+
+GST_START_TEST (test_add_tensor_mtd)
+{
+  /* Verify we can add a tensor to analytics-meta and retrieve it */
+  GstBuffer *vbuf, *tbuf;
+  GstAnalyticsRelationMeta *rmeta;
+  GstTensor *tensor2;
+  GstTensor *tensor3;
+  GstAnalyticsTensorMtd tensor_mtd;
+  gboolean ret;
+  gsize dims[2] = { 2, 3 };
+
+  vbuf = gst_buffer_new ();
+
+  rmeta = gst_buffer_add_analytics_relation_meta (vbuf);
+  ret = gst_analytics_relation_meta_add_tensor_mtd (rmeta, 22, &tensor_mtd);
+  fail_unless (ret == TRUE);
+
+  tensor2 = gst_analytics_tensor_mtd_get_tensor (&tensor_mtd);
+  fail_unless (tensor2);
+
+  fail_unless_equals_int (tensor2->num_dims, 22);
+  fail_unless_equals_int (tensor2->id, 0);
+  fail_unless (tensor2->data == NULL);
+
+  tbuf = gst_buffer_new_allocate (NULL, sizeof (float) * dims[0] * dims[1],
+      NULL);
+
+  ret = gst_analytics_relation_meta_add_tensor_mtd_simple (rmeta,
+      g_quark_from_string ("test2"), GST_TENSOR_DATA_TYPE_FLOAT32,
+      tbuf, GST_TENSOR_DIM_ORDER_ROW_MAJOR, G_N_ELEMENTS (dims), dims,
+      &tensor_mtd);
+  fail_unless (ret == TRUE);
+
+  tensor3 = gst_analytics_tensor_mtd_get_tensor (&tensor_mtd);
+  fail_unless (tensor3);
+
+  fail_unless_equals_int (tensor3->num_dims, 2);
+  fail_unless_equals_int (tensor3->id, g_quark_from_string ("test2"));
+  fail_unless_equals_int (tensor3->dims_order, GST_TENSOR_DIM_ORDER_ROW_MAJOR);
+  fail_unless_equals_int (tensor3->dims[0], 2);
+  fail_unless_equals_int (tensor3->dims[1], 3);
+  fail_unless (tensor3->data == tbuf);
+
+  gst_buffer_unref (vbuf);
+}
+
+GST_END_TEST;
+
+GST_START_TEST (test_iou_int)
+{
+  gint bb1_x = 30, bb1_y = 30, bb1_w = 10, bb1_h = 10;
+  gint bb2_x = 35, bb2_y = 30, bb2_w = 10, bb2_h = 10;
+  gfloat iou;
+
+  iou = gst_analytics_image_util_iou_int (bb1_x, bb1_y, bb1_w, bb1_h, bb2_x,
+      bb2_y, bb2_w, bb2_h);
+  fail_unless_equals_float (iou, 50.0 / 150.0);
+
+  bb2_y = 35;
+  iou = gst_analytics_image_util_iou_int (bb1_x, bb1_y, bb1_w, bb1_h, bb2_x,
+      bb2_y, bb2_w, bb2_h);
+  fail_unless_equals_float (iou, 25.0 / 175.0);
+
+  bb2_x = 40;
+  iou = gst_analytics_image_util_iou_int (bb1_x, bb1_y, bb1_w, bb1_h, bb2_x,
+      bb2_y, bb2_w, bb2_h);
+  fail_unless_equals_float (iou, 0.0 / 200.0);
+
+  bb2_x = 30;
+  bb2_y = 35;
+  iou = gst_analytics_image_util_iou_int (bb1_x, bb1_y, bb1_w, bb1_h, bb2_x,
+      bb2_y, bb2_w, bb2_h);
+  fail_unless_equals_float (iou, 50.0 / 150.0);
+
+  bb2_x = 25;
+  bb2_y = 35;
+  iou = gst_analytics_image_util_iou_int (bb1_x, bb1_y, bb1_w, bb1_h, bb2_x,
+      bb2_y, bb2_w, bb2_h);
+  fail_unless_equals_float (iou, 25.0 / 175.0);
+
+  bb2_x = 25;
+  bb2_y = 30;
+  iou = gst_analytics_image_util_iou_int (bb1_x, bb1_y, bb1_w, bb1_h, bb2_x,
+      bb2_y, bb2_w, bb2_h);
+  fail_unless_equals_float (iou, 50.0 / 150.0);
+
+  bb2_x = 25;
+  bb2_y = 25;
+  iou = gst_analytics_image_util_iou_int (bb1_x, bb1_y, bb1_w, bb1_h, bb2_x,
+      bb2_y, bb2_w, bb2_h);
+  fail_unless_equals_float (iou, 25.0 / 175.0);
+
+  bb2_x = 30;
+  bb2_y = 25;
+  iou = gst_analytics_image_util_iou_int (bb1_x, bb1_y, bb1_w, bb1_h, bb2_x,
+      bb2_y, bb2_w, bb2_h);
+  fail_unless_equals_float (iou, 50.0 / 150.0);
+
+  bb2_x = 30;
+  bb2_y = 25;
+  iou = gst_analytics_image_util_iou_int (bb1_x, bb1_y, 
bb1_w, bb1_h, bb2_x, + bb2_y, bb2_w, bb2_h); + fail_unless_equals_float (iou, 50.0 / 150.0); + + bb2_x = 30; + bb2_y = 30; + iou = gst_analytics_image_util_iou_int (bb1_x, bb1_y, bb1_w, bb1_h, bb2_x, + bb2_y, bb2_w, bb2_h); + fail_unless_equals_float (iou, 100.0 / 100.0); + + bb1_x = 0; + bb1_y = 0; + bb1_w = 10; + bb1_h = 10; + + bb2_x = -5; + bb2_y = 0; + bb2_w = 10; + bb2_h = 10; + + iou = gst_analytics_image_util_iou_int (bb1_x, bb1_y, bb1_w, bb1_h, bb2_x, + bb2_y, bb2_w, bb2_h); + fail_unless_equals_float (iou, 50.0 / 100.0); + + bb2_y = -5; + + iou = gst_analytics_image_util_iou_int (bb1_x, bb1_y, bb1_w, bb1_h, bb2_x, + bb2_y, bb2_w, bb2_h); + fail_unless_equals_float (iou, 25.0 / 100.0); + + bb1_x = -5; + bb1_y = -5; + + iou = gst_analytics_image_util_iou_int (bb1_x, bb1_y, bb1_w, bb1_h, bb2_x, + bb2_y, bb2_w, bb2_h); + fail_unless_equals_float (iou, 25.0 / 25.0); + + bb1_x = -5; + bb1_y = 0; + + bb2_x = 0; + bb2_y = -5; + + iou = gst_analytics_image_util_iou_int (bb1_x, bb1_y, bb1_w, bb1_h, bb2_x, + bb2_y, bb2_w, bb2_h); + fail_unless_equals_float (iou, 25.0 / 75.0); + + bb2_y = -10; + + iou = gst_analytics_image_util_iou_int (bb1_x, bb1_y, bb1_w, bb1_h, bb2_x, + bb2_y, bb2_w, bb2_h); + fail_unless_equals_float (iou, 0.0 / 100.0); +} + +GST_END_TEST; + +GST_START_TEST (test_iou_float) +{ + gfloat bb1_x = 30.0, bb1_y = 30.0, bb1_w = 10.0, bb1_h = 10.0; + gfloat bb2_x = 35.0, bb2_y = 30.0, bb2_w = 10.0, bb2_h = 10.0; + gfloat iou; + + iou = gst_analytics_image_util_iou_float (bb1_x, bb1_y, bb1_w, bb1_h, bb2_x, + bb2_y, bb2_w, bb2_h); + fail_unless_equals_float (iou, 50.0 / 150.0); + + bb2_y = 35; + iou = gst_analytics_image_util_iou_float (bb1_x, bb1_y, bb1_w, bb1_h, bb2_x, + bb2_y, bb2_w, bb2_h); + fail_unless_equals_float (iou, 25.0 / 175.0); + + bb2_x = 40; + iou = gst_analytics_image_util_iou_float (bb1_x, bb1_y, bb1_w, bb1_h, bb2_x, + bb2_y, bb2_w, bb2_h); + fail_unless_equals_float (iou, 0.0 / 200.0); + + bb2_x = 30; + bb2_y = 35; + iou = 
gst_analytics_image_util_iou_float (bb1_x, bb1_y, bb1_w, bb1_h, bb2_x, + bb2_y, bb2_w, bb2_h); + fail_unless_equals_float (iou, 50.0 / 150.0); + + bb2_x = 25; + bb2_y = 35; + iou = gst_analytics_image_util_iou_float (bb1_x, bb1_y, bb1_w, bb1_h, bb2_x, + bb2_y, bb2_w, bb2_h); + fail_unless_equals_float (iou, 25.0 / 175.0); + + bb2_x = 25; + bb2_y = 30; + iou = gst_analytics_image_util_iou_float (bb1_x, bb1_y, bb1_w, bb1_h, bb2_x, + bb2_y, bb2_w, bb2_h); + fail_unless_equals_float (iou, 50.0 / 150.0); + + bb2_x = 25; + bb2_y = 25; + iou = gst_analytics_image_util_iou_float (bb1_x, bb1_y, bb1_w, bb1_h, bb2_x, + bb2_y, bb2_w, bb2_h); + fail_unless_equals_float (iou, 25.0 / 175.0); + + bb2_x = 30; + bb2_y = 25; + iou = gst_analytics_image_util_iou_float (bb1_x, bb1_y, bb1_w, bb1_h, bb2_x, + bb2_y, bb2_w, bb2_h); + fail_unless_equals_float (iou, 50.0 / 150.0); + + bb2_x = 30; + bb2_y = 25; + iou = gst_analytics_image_util_iou_float (bb1_x, bb1_y, bb1_w, bb1_h, bb2_x, + bb2_y, bb2_w, bb2_h); + fail_unless_equals_float (iou, 50.0 / 150.0); + + bb2_x = 30; + bb2_y = 30; + iou = gst_analytics_image_util_iou_float (bb1_x, bb1_y, bb1_w, bb1_h, bb2_x, + bb2_y, bb2_w, bb2_h); + fail_unless_equals_float (iou, 100.0 / 100.0); + + bb1_x = 0; + bb1_y = 0; + bb1_w = 10; + bb1_h = 10; + + bb2_x = -5; + bb2_y = 0; + bb2_w = 10; + bb2_h = 10; + + iou = gst_analytics_image_util_iou_float (bb1_x, bb1_y, bb1_w, bb1_h, bb2_x, + bb2_y, bb2_w, bb2_h); + fail_unless_equals_float (iou, 50.0 / 100.0); + + bb2_y = -5; + + iou = gst_analytics_image_util_iou_float (bb1_x, bb1_y, bb1_w, bb1_h, bb2_x, + bb2_y, bb2_w, bb2_h); + fail_unless_equals_float (iou, 25.0 / 100.0); + + bb1_x = -5; + bb1_y = -5; + + iou = gst_analytics_image_util_iou_float (bb1_x, bb1_y, bb1_w, bb1_h, bb2_x, + bb2_y, bb2_w, bb2_h); + fail_unless_equals_float (iou, 25.0 / 25.0); + + bb1_x = -5; + bb1_y = 0; + + bb2_x = 0; + bb2_y = -5; + + iou = gst_analytics_image_util_iou_float (bb1_x, bb1_y, bb1_w, bb1_h, bb2_x, + bb2_y, 
bb2_w, bb2_h); + fail_unless_equals_float (iou, 25.0 / 75.0); + + bb2_y = -10; + + iou = gst_analytics_image_util_iou_float (bb1_x, bb1_y, bb1_w, bb1_h, bb2_x, + bb2_y, bb2_w, bb2_h); + fail_unless_equals_float (iou, 0.0 / 100.0); + +} + +GST_END_TEST; + +GST_START_TEST (test_get_tensor) +{ + GstBuffer *buf, *tensor_data; + GstTensorMeta *tmeta; + GstTensor *tensor; + const GstTensor *tensor2; + GstTensor **tensors; + GQuark tensor_id = g_quark_from_string ("tensor-encoding-1"); + gsize dims = { 1 }; + gsize index; + + /* Verify we can add a tensor-meta to a buffer */ + + /* Create tensor data */ + guint8 *data = g_malloc0 (1); + *data = 28; + + /* Wrap tensor data into a GstBuffer */ + tensor_data = gst_buffer_new_wrapped_full (0, data, 1, 0, 1, data, g_free); + + /* Create a new buffer where we attach tensor-meta */ + buf = gst_buffer_new (); + tmeta = gst_buffer_add_tensor_meta (buf); + + /* Create a tensor */ + tensor = gst_tensor_new_simple (tensor_id, GST_TENSOR_DATA_TYPE_UINT8, + tensor_data, GST_TENSOR_DIM_ORDER_COL_MAJOR, 1, dims); + + /* Create an array of tensor to fullfil GstTensor API */ + tensors = g_new (GstTensor *, 1); + tensors0 = tensor; + + /* Set tensor-meta's tensors */ + gst_tensor_meta_set (tmeta, 1, tensors); + + /* Retieve tensor using index interface */ + index = gst_tensor_meta_get_index_from_id (tmeta, tensor_id); + + fail_unless (index == 0); + + tensor2 = gst_tensor_meta_get (tmeta, index); + + /* Verify tensor retrieved */ + fail_unless (tensor == tensor2); + + /* Retrieve tensor using tensor-id directly */ + tensor2 = gst_tensor_meta_get_by_id (tmeta, tensor_id); + + fail_unless (tensor == tensor2); + + gst_buffer_unref (buf); +} + +GST_END_TEST; + static Suite * analyticmeta_suite (void) { @@ -1592,6 +2118,9 @@ TCase *tc_chain_od_cls; TCase *tc_chain_tracking; TCase *tc_chain_segmentation; + TCase *tc_chain_util; + TCase *tc_chain_tensors; + TCase *tc_chain_tensor_mtd; s = suite_create ("Analytic Meta Library"); @@ -1611,6 +2140,7 
@@ tcase_add_test (tc_chain_relation, test_path_relation_meta); tcase_add_test (tc_chain_relation, test_cyclic_relation_meta); tcase_add_test (tc_chain_relation, test_verify_mtd_clear); + tcase_add_test (tc_chain_relation, test_copy_metas); tc_chain_od = tcase_create ("Object Detection Mtd"); suite_add_tcase (s, tc_chain_od); @@ -1631,12 +2161,25 @@ tc_chain_tracking = tcase_create ("Tracking Mtd"); suite_add_tcase (s, tc_chain_tracking); tcase_add_test (tc_chain_tracking, test_add_tracking_meta); + tcase_add_test (tc_chain_tracking, test_od_trk_relation); tc_chain_segmentation = tcase_create ("Segmentation Mtd"); suite_add_tcase (s, tc_chain_segmentation); tcase_add_test (tc_chain_segmentation, test_add_segmentation_meta); tcase_add_test (tc_chain_segmentation, test_associate_segmentation_meta); + tc_chain_util = tcase_create ("Utility"); + suite_add_tcase (s, tc_chain_util); + tcase_add_test (tc_chain_util, test_iou_int); + tcase_add_test (tc_chain_util, test_iou_float); + + tc_chain_tensors = tcase_create ("TensorMeta"); + suite_add_tcase (s, tc_chain_tensors); + tcase_add_test (tc_chain_tensors, test_get_tensor); + + tc_chain_tensor_mtd = tcase_create ("Tensor Mtd"); + suite_add_tcase (s, tc_chain_tensor_mtd); + tcase_add_test (tc_chain_tensor_mtd, test_add_tensor_mtd); return s; }
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/libs/insertbin.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/libs/insertbin.c
Changed
@@ -163,7 +163,8 @@
   srcpad = gst_check_setup_src_pad (insertbin, &srcpad_template);
   sinkpad = gst_check_setup_sink_pad (insertbin, &sinkpad_template);
 
-  g_assert (srcpad && sinkpad);
+  g_assert_nonnull (srcpad);
+  g_assert_nonnull (sinkpad);
 
   ASSERT_CRITICAL (gst_insert_bin_append (GST_INSERT_BIN (insertbin), NULL,
           NULL, NULL));
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/libs/mse.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/libs/mse.c
Changed
@@ -37,7 +37,7 @@
 #include <gst/mse/gstmediasourcetrackbuffer-private.h>
 #include <gst/mse/gstmediasourcesamplemap-private.h>
 
-static GstCheckLogFilter *
+GST_UNUSED_CHECKS static GstCheckLogFilter *
 add_log_filter (GLogLevelFlags level, const gchar * regex)
 {
   GRegex *gregex = g_regex_new (regex, 0, 0, NULL);
@@ -149,6 +149,7 @@
 
 GST_START_TEST (test_add_source_buffer_with_content_type_null)
 {
+#ifndef G_DISABLE_CHECKS
   add_log_filter (G_LOG_LEVEL_CRITICAL,
       "^.*_add_source_buffer: assertion 'type != NULL' failed");
@@ -157,6 +158,7 @@
   g_assert_null (gst_media_source_add_source_buffer (media_source, NULL,
           NULL));
   gst_object_unref (media_source);
+#endif
 }
 
 GST_END_TEST;
@@ -590,12 +592,14 @@
 
 GST_START_TEST (test_track_create_with_invalid_type)
 {
+#ifndef G_DISABLE_CHECKS
   add_log_filter (G_LOG_LEVEL_CRITICAL,
       "^.*track_new_full: assertion .*type .* failed");
 
   g_assert_null (gst_media_source_track_new (-1, ""));
   g_assert_null (gst_media_source_track_new (GST_MEDIA_SOURCE_TRACK_TYPE_OTHER
           + 1, ""));
+#endif
 }
 
 GST_END_TEST;
@@ -889,6 +893,7 @@
 
 GST_START_TEST (test_sample_map_add_invalid_sample)
 {
+#ifndef G_DISABLE_CHECKS
   add_log_filter (G_LOG_LEVEL_CRITICAL,
       "^.*_sample_map_add: assertion .* failed");
 
@@ -903,6 +908,7 @@
 
   gst_object_unref (map);
   gst_sample_unref (sample);
+#endif
 }
 
 GST_END_TEST;
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/libs/play.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/libs/play.c
Changed
@@ -351,7 +351,7 @@
       break;
     case 7:
       fail_unless_equals_int (change, STATE_CHANGE_POSITION_UPDATED);
-      g_assert (new_state->position <= old_state->duration);
+      g_assert_cmpuint (new_state->position, <=, old_state->duration);
       if (new_state->position == old_state->duration)
         new_state->test_data =
             GINT_TO_POINTER ((video ? 0x10 : 0x00) | (step + 1));
@@ -1635,7 +1635,7 @@
   guint port;
 
   uris = soup_server_get_uris (server);
-  g_assert (g_slist_length (uris) == 1);
+  fail_unless_equals_int (g_slist_length (uris), 1);
 
   port = g_uri_get_port (uris->data);
   g_slist_free_full (uris, (GDestroyNotify) g_uri_unref);
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/libs/vkcodecparams_vp9.c
Added
@@ -0,0 +1,35 @@
+/* GStreamer
+ *
+ * Copyright (C) 2025 Igalia, S.L.
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Library General Public
+ * License as published by the Free Software Foundation; either
+ * version 2 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Library General Public License for more details.
+ *
+ * You should have received a copy of the GNU Library General Public
+ * License along with this library; if not, write to the
+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ * Boston, MA 02111-1307, USA.
+ */
+
+/* 1 frame 320x240 blue box */
+
+
+
+static const uint8_t vp9_obu[] = {
+  0x82, 0x49, 0x83, 0x42, 0x20, 0x13, 0xf0, 0x0e, 0xf6, 0x00, 0x38, 0x24,
+  0x1c, 0x18, 0x42, 0x00, 0x00, 0x50, 0x61, 0xf6, 0x30, 0x00, 0x00, 0x67,
+  0x15, 0xe9, 0x6f, 0xff, 0xff, 0xff, 0xfd, 0x8a, 0x60, 0xff, 0xff, 0xff,
+  0xfc, 0x74, 0xea, 0x2a, 0x51, 0x9b, 0xaa, 0x23, 0x01, 0x04, 0xfd, 0x00
+};
+
+static const uint8_t vp9_obu_2[] = {
+  0x86, 0x00, 0x40, 0x92, 0x9c, 0x00, 0x49, 0x40, 0x00, 0x04, 0x26, 0xac,
+  0x00, 0x00, 0x5e, 0x91, 0xc5, 0xe0, 0x00
+};
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/libs/vkcommandpool.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/libs/vkcommandpool.c
Changed
@@ -112,7 +112,6 @@
   suite_add_tcase (s, tc_basic);
   tcase_add_checked_fixture (tc_basic, setup, teardown);
 
-  /* FIXME: CI doesn't have a software vulkan renderer (and none exists currently) */
   instance = gst_vulkan_instance_new ();
   have_instance = gst_vulkan_instance_open (instance, NULL);
   gst_object_unref (instance);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/libs/vkdevice.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/libs/vkdevice.c
Changed
@@ -87,7 +87,6 @@
   suite_add_tcase (s, tc_basic);
   tcase_add_checked_fixture (tc_basic, setup, teardown);
 
-  /* FIXME: CI doesn't have a software vulkan renderer (and none exists currently) */
   instance = gst_vulkan_instance_new ();
   have_instance = gst_vulkan_instance_open (instance, NULL);
   gst_object_unref (instance);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/libs/vkformat.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/libs/vkformat.c
Changed
@@ -101,7 +101,6 @@
   suite_add_tcase (s, tc_basic);
   tcase_add_checked_fixture (tc_basic, setup, teardown);
 
-  /* FIXME: CI doesn't have a software vulkan renderer (and none exists currently) */
   instance = gst_vulkan_instance_new ();
   have_instance = gst_vulkan_instance_open (instance, NULL);
   gst_object_unref (instance);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/libs/vkimage.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/libs/vkimage.c
Changed
@@ -232,7 +232,6 @@
   suite_add_tcase (s, tc_basic);
   tcase_add_checked_fixture (tc_basic, setup, teardown);
 
-  /* FIXME: CI doesn't have a software vulkan renderer (and none exists currently) */
   instance = gst_vulkan_instance_new ();
   have_instance = gst_vulkan_instance_open (instance, NULL);
   gst_object_unref (instance);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/libs/vkimagebufferpool.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/libs/vkimagebufferpool.c
Changed
@@ -26,6 +26,10 @@
 #include <gst/check/gstcheck.h>
 #include <gst/vulkan/vulkan.h>
 
+#if GST_VULKAN_HAVE_VIDEO_EXTENSIONS
+#include "gst/vulkan/gstvkvideoutils-private.h"
+#endif
+
 static GstVulkanInstance *instance;
 static GstVulkanDevice *device;
 static GstVulkanQueue *queue = NULL;
@@ -208,7 +212,7 @@
 }
 
 GST_END_TEST;
-#endif
+#endif /* GST_VULKAN_HAVE_VIDEO_EXTENSIONS */
 
 static Suite *
 vkimagebufferpool_suite (void)
@@ -220,7 +224,6 @@
   suite_add_tcase (s, tc_basic);
   tcase_add_checked_fixture (tc_basic, setup, teardown);
 
-  /* FIXME: CI doesn't have a software vulkan renderer (and none exists currently) */
   instance = gst_vulkan_instance_new ();
   have_instance = gst_vulkan_instance_open (instance, NULL);
   gst_object_unref (instance);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/libs/vkinstance.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/libs/vkinstance.c
Changed
@@ -157,7 +157,6 @@
   tcase_add_test (tc_basic, test_instance_new);
   tcase_add_test (tc_basic, test_instance_version_before_open);
 
-  /* FIXME: CI doesn't have a software vulkan renderer (and none exists currently) */
   instance = gst_vulkan_instance_new ();
   have_instance = gst_vulkan_instance_open (instance, NULL);
   gst_object_unref (instance);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/libs/vkmemory.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/libs/vkmemory.c
Changed
@@ -97,7 +97,6 @@
   suite_add_tcase (s, tc_basic);
   tcase_add_checked_fixture (tc_basic, setup, teardown);
 
-  /* FIXME: CI doesn't have a software vulkan renderer (and none exists currently) */
   instance = gst_vulkan_instance_new ();
   have_instance = gst_vulkan_instance_open (instance, NULL);
   gst_object_unref (instance);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/libs/vkvideodecode.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/libs/vkvideodecode.c
Changed
@@ -27,6 +27,7 @@ #include <gst/vulkan/vulkan.h> #include "gst/vulkan/gstvkdecoder-private.h" +#include "gst/vulkan/gstvkvideoutils-private.h" static GstVulkanInstance *instance; static GstVulkanDevice *device; @@ -394,6 +395,12 @@ .sliceCount = pic.slice_offs->len - 1, .pSliceOffsets = (guint32 *) pic.slice_offs->data, }; + VkVideoDecodeH264InlineSessionParametersInfoKHR inline_params = { + .sType = + VK_STRUCTURE_TYPE_VIDEO_DECODE_H264_INLINE_SESSION_PARAMETERS_INFO_KHR, + .pStdSPS = &h264_std_sps, + .pStdPPS = &h264_std_pps, + }; /* *INDENT-OFF* */ pic.pic_res = (VkVideoPictureResourceInfoKHR) { @@ -427,6 +434,11 @@ }; /* *INDENT-ON* */ + if (gst_vulkan_decoder_has_feature (dec, + GST_VULKAN_DECODER_FEATURE_INLINE_PARAMS)) { + vk_pic.pNext = &inline_params; + } + fail_unless (gst_vulkan_decoder_decode (dec, &pic, &err)); } @@ -561,6 +573,13 @@ .sliceSegmentCount = pic.slice_offs->len - 1, .pSliceSegmentOffsets = (guint32 *) pic.slice_offs->data, }; + VkVideoDecodeH265InlineSessionParametersInfoKHR inline_params = { + .sType = + VK_STRUCTURE_TYPE_VIDEO_DECODE_H265_INLINE_SESSION_PARAMETERS_INFO_KHR, + .pStdSPS = &h265_std_sps, + .pStdPPS = &h265_std_pps, + .pStdVPS = &h265_std_vps, + }; /* *INDENT-OFF* */ pic.pic_res = (VkVideoPictureResourceInfoKHR) { @@ -594,6 +613,11 @@ }; /* *INDENT-ON* */ + if (gst_vulkan_decoder_has_feature (dec, + GST_VULKAN_DECODER_FEATURE_INLINE_PARAMS)) { + vk_pic.pNext = &inline_params; + } + fail_unless (gst_vulkan_decoder_decode (dec, &pic, &err)); } @@ -609,6 +633,274 @@ GST_END_TEST; +#include "vkcodecparams_vp9.c" + +GST_START_TEST (test_vp9_decoder) +{ + GstVulkanDecoder *dec; + GError *err = NULL; + VkVideoFormatPropertiesKHR format_prop; + /* *INDENT-OFF* */ + GstVulkanVideoProfile profile = { + .profile = { + .sType = VK_STRUCTURE_TYPE_VIDEO_PROFILE_INFO_KHR, + .pNext = &profile.usage, + .videoCodecOperation = VK_VIDEO_CODEC_OPERATION_DECODE_VP9_BIT_KHR, + .chromaSubsampling = VK_VIDEO_CHROMA_SUBSAMPLING_420_BIT_KHR, + 
.chromaBitDepth = VK_VIDEO_COMPONENT_BIT_DEPTH_8_BIT_KHR, + .lumaBitDepth = VK_VIDEO_COMPONENT_BIT_DEPTH_8_BIT_KHR, + }, + .usage.decode = { + .sType = VK_STRUCTURE_TYPE_VIDEO_DECODE_USAGE_INFO_KHR, + .videoUsageHints = VK_VIDEO_DECODE_USAGE_DEFAULT_KHR, + .pNext = &profile.codec, + }, + .codec.vp9dec = { + .sType = VK_STRUCTURE_TYPE_VIDEO_DECODE_VP9_PROFILE_INFO_KHR, + .stdProfile = STD_VIDEO_VP9_PROFILE_0, + } + }; + + /* *INDENT-ON* */ + GstVulkanVideoCapabilities video_caps; + GstVulkanDecoderPicture pic1, pic2 = { NULL, }; + /* *INDENT-OFF* */ + StdVideoVP9ColorConfig colorConfig = { + .flags = { + .color_range = 0, + }, + .BitDepth = 0, + .color_space = STD_VIDEO_VP9_COLOR_SPACE_BT_601, + .subsampling_x = 1, + .subsampling_y = 1, + }; + StdVideoVP9LoopFilter loopFilter = { + .flags = { + .loop_filter_delta_enabled = 1, + .loop_filter_delta_update = 1, + }, + .loop_filter_level = 0, + .loop_filter_sharpness = 0, + .update_ref_delta = 13, + .loop_filter_ref_deltas = {1, 0, -1, -1}, + .update_mode_delta = 0, + .loop_filter_mode_deltas = {0, 0}, + }; + + /* *INDENT-ON* */ + + setup_queue (VK_QUEUE_VIDEO_DECODE_BIT_KHR, + VK_VIDEO_CODEC_OPERATION_DECODE_VP9_BIT_KHR); + if (!video_queue) { + GST_WARNING ("Unable to find decoding queue"); + return; + } + + dec = gst_vulkan_decoder_new_from_queue (video_queue, + VK_VIDEO_CODEC_OPERATION_DECODE_VP9_BIT_KHR); + if (!dec) { + GST_WARNING ("Unable to create a vulkan decoder"); + return; + } + + fail_unless (gst_vulkan_decoder_start (dec, &profile, &err)); + + fail_unless (gst_vulkan_decoder_update_ycbcr_sampler (dec, + VK_SAMPLER_YCBCR_RANGE_ITU_FULL, VK_CHROMA_LOCATION_COSITED_EVEN, + VK_CHROMA_LOCATION_MIDPOINT, &err)); + + + fail_unless (gst_vulkan_decoder_out_format (dec, &format_prop)); + fail_unless (gst_vulkan_decoder_caps (dec, &video_caps)); + + /* decode pic1 */ + { + /* setup the vulkan picture */ + /* *INDENT-OFF* */ + StdVideoDecodeVP9PictureInfo std_pic = { + .flags = { + .error_resilient_mode = 0, + 
.intra_only = 0, + .allow_high_precision_mv = 0, + .refresh_frame_context = 1, + .frame_parallel_decoding_mode = 1, + .segmentation_enabled = 0, + .show_frame = 1, + .UsePrevFrameMvs = 0, + }, + .profile = STD_VIDEO_VP9_PROFILE_0, + .frame_type = STD_VIDEO_VP9_FRAME_TYPE_KEY, + .frame_context_idx = 0, + .reset_frame_context = 0, + .refresh_frame_flags = 0xff, + .ref_frame_sign_bias_mask = 0, + .interpolation_filter = STD_VIDEO_VP9_INTERPOLATION_FILTER_EIGHTTAP, + .base_q_idx = 33, + .delta_q_y_dc = 0, + .delta_q_uv_dc = 0, + .delta_q_uv_ac = 0, + .tile_cols_log2 = 0, + .tile_rows_log2 = 0, + .pColorConfig = &colorConfig, + .pLoopFilter = &loopFilter, + .pSegmentation = NULL, + }; + + VkVideoDecodeVP9PictureInfoKHR vk_pic = { + .sType = VK_STRUCTURE_TYPE_VIDEO_DECODE_VP9_PICTURE_INFO_KHR, + .pStdPictureInfo = &std_pic, + .referenceNameSlotIndices = {-1, -1, -1}, + .uncompressedHeaderOffset = 0, + .compressedHeaderOffset = 18, + .tilesOffset = 23, + }; + /* *INDENT-ON* */ + + get_output_buffer (dec, format_prop.format, &pic1); + /* get input buffer */ + fail_unless (gst_vulkan_decoder_append_slice (dec, &pic1, vp9_obu, + sizeof (vp9_obu), FALSE)); + + /* *INDENT-OFF* */ + pic1.pic_res = (VkVideoPictureResourceInfoKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_PICTURE_RESOURCE_INFO_KHR, + .codedOffset = (VkOffset2D) {0, 0}, + .codedExtent = (VkExtent2D) {320, 240}, + .baseArrayLayer = 0, + .imageViewBinding = pic1.img_view_ref->view, + }; + pic1.slot = (VkVideoReferenceSlotInfoKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_REFERENCE_SLOT_INFO_KHR, + .pNext = NULL, + .slotIndex = 0, + .pPictureResource = &pic1.pic_res, + }; + pic1.decode_info = (VkVideoDecodeInfoKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_DECODE_INFO_KHR, + .pNext = &vk_pic, + .flags = 0, + .srcBufferOffset = 0, + .dstPictureResource = { + .sType = VK_STRUCTURE_TYPE_VIDEO_PICTURE_RESOURCE_INFO_KHR, + .codedOffset = (VkOffset2D) {0, 0}, + .codedExtent = (VkExtent2D) {320, 240}, + .baseArrayLayer = 0, + 
.imageViewBinding = pic1.img_view_out->view, + }, + .pSetupReferenceSlot = &pic1.slot, + .referenceSlotCount = 0, + .pReferenceSlots = pic1.slots, + }; + /* *INDENT-ON* */ + fail_unless (gst_vulkan_decoder_decode (dec, &pic1, &err)); + download_and_check_output_buffer (dec, format_prop.format, &pic1); + } + + + /* decode pic2 */ + { + /* setup the vulkan picture */ + /* *INDENT-OFF* */ + StdVideoDecodeVP9PictureInfo std_pic = { + /* *INDENT-OFF* */ + .flags = { + .error_resilient_mode = 0, + .intra_only = 0, + .allow_high_precision_mv = 0, + .refresh_frame_context = 1, + .frame_parallel_decoding_mode = 1, + .segmentation_enabled = 0, + .show_frame = 1, + .UsePrevFrameMvs = 0, + }, + .profile = STD_VIDEO_VP9_PROFILE_0, + .frame_type = STD_VIDEO_VP9_FRAME_TYPE_NON_KEY, + .frame_context_idx = 0, + .reset_frame_context = 0, + .refresh_frame_flags = 0xff, + .ref_frame_sign_bias_mask = 0, + .interpolation_filter = STD_VIDEO_VP9_INTERPOLATION_FILTER_EIGHTTAP, + .base_q_idx = 33, + .delta_q_y_dc = 0, + .delta_q_uv_dc = 0, + .delta_q_uv_ac = 0, + .tile_cols_log2 = 0, + .tile_rows_log2 = 0, + .pColorConfig = &colorConfig, + .pLoopFilter = &loopFilter, + .pSegmentation = NULL, + }; + + VkVideoDecodeVP9PictureInfoKHR vk_pic = { + .sType = VK_STRUCTURE_TYPE_VIDEO_DECODE_VP9_PICTURE_INFO_KHR, + .pStdPictureInfo = &std_pic, + .referenceNameSlotIndices = {0, 0, 0}, + .uncompressedHeaderOffset = 0, + .compressedHeaderOffset = 10, + .tilesOffset = 14, + }; + /* *INDENT-ON* */ + + get_output_buffer (dec, format_prop.format, &pic2); + /* get input buffer */ + fail_unless (gst_vulkan_decoder_append_slice (dec, &pic2, vp9_obu_2, + sizeof (vp9_obu_2), FALSE)); + + /* *INDENT-OFF* */ + pic2.pic_res = (VkVideoPictureResourceInfoKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_PICTURE_RESOURCE_INFO_KHR, + .codedOffset = {0, 0}, + .codedExtent = {320, 240}, + .baseArrayLayer = 0, + .imageViewBinding = pic2.img_view_ref->view, + }; + + pic2.slot = (VkVideoReferenceSlotInfoKHR) { + .sType = 
VK_STRUCTURE_TYPE_VIDEO_REFERENCE_SLOT_INFO_KHR,
+      .pNext = NULL,
+      .slotIndex = 1,
+      .pPictureResource = &pic2.pic_res,
+    };
+    /* *INDENT-ON* */
+
+    /* setup the reference for pic2 */
+    pic2.slots[0] = pic1.slot;
+    pic2.refs[0] = &pic1;
+
+    /* *INDENT-OFF* */
+    pic2.decode_info = (VkVideoDecodeInfoKHR) {
+      .sType = VK_STRUCTURE_TYPE_VIDEO_DECODE_INFO_KHR,
+      .pNext = &vk_pic,
+      .flags = 0,
+      .srcBufferOffset = 0,
+      .dstPictureResource = {
+        .sType = VK_STRUCTURE_TYPE_VIDEO_PICTURE_RESOURCE_INFO_KHR,
+        .codedOffset = {0, 0},
+        .codedExtent = {320, 240},
+        .baseArrayLayer = 0,
+        .imageViewBinding = pic2.img_view_out->view,
+      },
+      .pSetupReferenceSlot = &pic2.slot,
+      .referenceSlotCount = 1,
+      .pReferenceSlots = pic2.slots,
+    };
+    /* *INDENT-ON* */
+
+    fail_unless (gst_vulkan_decoder_decode (dec, &pic2, &err));
+    download_and_check_output_buffer (dec, format_prop.format, &pic2);
+  }
+
+  fail_unless (gst_vulkan_decoder_stop (dec));
+
+  gst_vulkan_decoder_picture_release (&pic1);
+  gst_vulkan_decoder_picture_release (&pic2);
+
+  gst_object_unref (dec);
+}
+
+GST_END_TEST;
+
+
 static Suite *
 vkvideo_suite (void)
 {
@@ -619,13 +911,14 @@
   suite_add_tcase (s, tc_basic);
   tcase_add_checked_fixture (tc_basic, setup, teardown);
 
-  /* FIXME: CI doesn't have a software vulkan renderer (and none exists currently) */
+  /* FIXME: CI doesn't have a software vulkan video decoder (and none exists currently) */
   instance = gst_vulkan_instance_new ();
   have_instance = gst_vulkan_instance_open (instance, NULL);
   gst_object_unref (instance);
   if (have_instance) {
     tcase_add_test (tc_basic, test_h264_decoder);
     tcase_add_test (tc_basic, test_h265_decoder);
+    tcase_add_test (tc_basic, test_vp9_decoder);
   }
 
   return s;
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/libs/vkvideoencodeav1.c
Added
@@ -0,0 +1,842 @@ +/* GStreamer + * + * Copyright (C) 2025 Igalia, S.L. + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 59 Temple Place - Suite 330, + * Boston, MA 02111-1307, USA. + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include <gst/codecparsers/gstav1parser.h> + +#include "vkvideoencodebase.c" + +GstAV1Parser *parser = NULL; + +#define MAX_ORDER_HINT 7 +#define FRAME_ID_BITS 15 +#define DELTA_FRAME_ID_BITS 14 + +typedef struct +{ + GstVulkanEncoderPicture picture; + + gboolean is_ref; + gint pic_num; + gint pic_order_cnt; + + VkVideoEncodeAV1PictureInfoKHR enc_pic_info; + + StdVideoEncodeAV1PictureInfo pic_info; + StdVideoEncodeAV1ReferenceInfo ref_info; + VkVideoEncodeAV1DpbSlotInfoKHR dpb_slot_info; + VkVideoEncodeAV1RateControlInfoKHR rc_info; + +} GstVulkanAV1EncoderFrame; + + +static GstAV1OBUType +check_av1_obu (guint8 * bitstream, gsize size, GstAV1OBU * obu) +{ + GstAV1ParserResult res = GST_AV1_PARSER_OK; + guint32 consumed; + guint32 offset = 0; + + if (!parser) { + parser = gst_av1_parser_new (); + } + + while (offset < size) { + + res = + gst_av1_parser_identify_one_obu (parser, bitstream + offset, size, obu, + &consumed); + assert_equals_int (res, GST_AV1_PARSER_OK); + + switch (obu->obu_type) { + case GST_AV1_OBU_TEMPORAL_DELIMITER: + { + res = gst_av1_parser_parse_temporal_delimiter_obu 
(parser, obu); + assert_equals_int (res, GST_AV1_PARSER_OK); + break; + } + case GST_AV1_OBU_SEQUENCE_HEADER: + { + GstAV1SequenceHeaderOBU seq_header; + res = + gst_av1_parser_parse_sequence_header_obu (parser, obu, &seq_header); + assert_equals_int (res, GST_AV1_PARSER_OK); + break; + } + case GST_AV1_OBU_FRAME_HEADER: + { + GstAV1FrameHeaderOBU frame_header; + res = + gst_av1_parser_parse_frame_header_obu (parser, obu, &frame_header); + assert_equals_int (res, GST_AV1_PARSER_OK); + break; + } + case GST_AV1_OBU_FRAME: + { + GstAV1FrameOBU frame; + res = gst_av1_parser_parse_frame_obu (parser, obu, &frame); + assert_equals_int (res, GST_AV1_PARSER_OK); + break; + } + case GST_AV1_OBU_TILE_GROUP: + { + GstAV1TileGroupOBU tile_group; + res = gst_av1_parser_parse_tile_group_obu (parser, obu, &tile_group); + assert_equals_int (res, GST_AV1_PARSER_OK); + fail_unless (tile_group.num_tiles > 0); + break; + } + + default: + GST_ERROR ("Unknown OBU type: %d", obu->obu_type); + fail_unless (0); + break; + } + offset += consumed; + } + + return obu->obu_type; +} + +static void +check_av1_obu_frame (GstAV1OBU * obu, GstAV1FrameType frame_type) +{ + GstAV1FrameOBU frame; + GstAV1ParserResult res = GST_AV1_PARSER_OK; + + res = gst_av1_parser_parse_frame_obu (parser, obu, &frame); + assert_equals_int (res, GST_AV1_PARSER_OK); + assert_equals_int (frame.frame_header.frame_type, frame_type); +} + +static gint +_av1_helper_msb (guint n) +{ + int log = 0; + guint value = n; + int i; + + g_assert_cmpuint (n, !=, 0); + + for (i = 4; i >= 0; --i) { + const gint shift = (1 << i); + const guint x = value >> shift; + if (x != 0) { + value = x; + log += shift; + } + } + + return log; +} + +static void +check_av1_session_params (GstVulkanEncoder * enc) +{ + GError *err = NULL; + guint8 *bitstream = NULL; + gsize bitstream_size = 0; + GstAV1OBU obu; + + fail_unless (gst_vulkan_encoder_video_session_parameters_overrides (enc, + NULL, NULL, &bitstream_size, (gpointer *) & bitstream, &err)); + 
+ assert_equals_int (check_av1_obu (bitstream, bitstream_size, &obu), + GST_AV1_OBU_SEQUENCE_HEADER); + + g_free (bitstream); +} + +static GstVulkanAV1EncoderFrame * +_av1_encode_frame_new (GstVulkanEncoder * enc, GstBuffer * img_buffer, + gsize size, gboolean is_ref) +{ + GstVulkanAV1EncoderFrame *frame; + + frame = g_new (GstVulkanAV1EncoderFrame, 1); + fail_unless (gst_vulkan_encoder_picture_init (&frame->picture, enc, + img_buffer, size)); + + frame->is_ref = is_ref; + + return frame; +} + +static void +_av1_encode_frame_free (GstVulkanEncoder * enc, gpointer pframe) +{ + GstVulkanAV1EncoderFrame *frame = (GstVulkanAV1EncoderFrame *) pframe; + gst_vulkan_encoder_picture_clear (&frame->picture, enc); + g_free (frame); +} + +static GstVulkanAV1EncoderFrame * +allocate_av1_frame (GstVulkanEncoder * enc, int width, int height, + gboolean is_ref) +{ + GstVulkanAV1EncoderFrame *frame; + GstBuffer *in_buffer, *img_buffer; + + in_buffer = generate_input_buffer (buffer_pool, width, height); + fail_unless (in_buffer); + + fail_unless (upload_buffer_to_image (img_pool, in_buffer, + &img_buffer) == GST_FLOW_OK); + + frame = _av1_encode_frame_new (enc, img_buffer, width * height * 3, is_ref); + fail_unless (frame); + + gst_buffer_unref (in_buffer); + gst_buffer_unref (img_buffer); + + return frame; +} + +static void +setup_codec_pic (GstVulkanEncoderPicture * pic, VkVideoEncodeInfoKHR * info, + gpointer data) +{ + GstVulkanAV1EncoderFrame *frame = (GstVulkanAV1EncoderFrame *) pic; + + info->pNext = &frame->enc_pic_info; + pic->dpb_slot.pNext = &frame->dpb_slot_info; + + + /* *INDENT-OFF* */ + frame->dpb_slot_info = (VkVideoEncodeAV1DpbSlotInfoKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_AV1_DPB_SLOT_INFO_KHR, + .pNext = NULL, + .pStdReferenceInfo = &frame->ref_info, + }; + /* *INDENT-ON* */ + + if (frame->pic_info.frame_type == STD_VIDEO_AV1_FRAME_TYPE_KEY) { + frame->pic_info.refresh_frame_flags = 0xff; + } else { + frame->pic_info.refresh_frame_flags = + 1 << 
frame->picture.dpb_slot.slotIndex; + } +} + +static void +setup_rc_codec (GstVulkanEncoderPicture * pic, + VkVideoEncodeRateControlInfoKHR * rc_info, + VkVideoEncodeRateControlLayerInfoKHR * rc_layer, gpointer data) +{ + GstVulkanAV1EncoderFrame *frame = (GstVulkanAV1EncoderFrame *) pic; + + /* *INDENT-OFF* */ + frame->rc_info = (VkVideoEncodeAV1RateControlInfoKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_AV1_RATE_CONTROL_INFO_KHR, + .pNext = NULL, + .flags = VK_VIDEO_ENCODE_AV1_RATE_CONTROL_REFERENCE_PATTERN_FLAT_BIT_KHR | + VK_VIDEO_ENCODE_AV1_RATE_CONTROL_REGULAR_GOP_BIT_KHR, + .gopFrameCount = 1, + .keyFramePeriod = 1, + .consecutiveBipredictiveFrameCount = 0, + .temporalLayerCount = 0, + }; + /* *INDENT-ON* */ + + rc_info->pNext = &frame->rc_info; +} + +static GstVulkanEncoder * +setup_av1_encoder (guint32 width, gint32 height, int gop_size) +{ + GstVulkanEncoder *enc = NULL; + GError *err = NULL; + GstVulkanVideoProfile profile; + GstVulkanEncoderParameters enc_params; + StdVideoAV1SequenceHeader av1_seq_header; + StdVideoAV1Profile av1_profile = STD_VIDEO_AV1_PROFILE_MAIN; + StdVideoAV1ColorConfig av1_color_config; + StdVideoEncodeAV1DecoderModelInfo av1_model_info; + StdVideoEncodeAV1OperatingPointInfo av1_operating_point_info; + GstVulkanEncoderQualityProperties quality_props; + + /* *INDENT-OFF* */ + profile = (GstVulkanVideoProfile) { + .profile = { + .sType = VK_STRUCTURE_TYPE_VIDEO_PROFILE_INFO_KHR, + .pNext = &profile.usage.encode, + .videoCodecOperation = VK_VIDEO_CODEC_OPERATION_ENCODE_AV1_BIT_KHR, + .chromaSubsampling = VK_VIDEO_CHROMA_SUBSAMPLING_420_BIT_KHR, + .lumaBitDepth = VK_VIDEO_COMPONENT_BIT_DEPTH_8_BIT_KHR, + .chromaBitDepth = VK_VIDEO_COMPONENT_BIT_DEPTH_8_BIT_KHR, + }, + .usage.encode = { + .sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_USAGE_INFO_KHR, + .pNext = &profile.codec, + .videoUsageHints = VK_VIDEO_ENCODE_USAGE_DEFAULT_KHR, + .videoContentHints = VK_VIDEO_ENCODE_CONTENT_DEFAULT_KHR, + .tuningMode = 
VK_VIDEO_ENCODE_TUNING_MODE_DEFAULT_KHR, + }, + .codec.av1enc = { + .sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_AV1_PROFILE_INFO_KHR, + .stdProfile = av1_profile, + } + }; + + quality_props = (GstVulkanEncoderQualityProperties) { + .quality_level = -1, + .codec.av1 = { + .sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_AV1_QUALITY_LEVEL_PROPERTIES_KHR, + }, + }; + /* *INDENT-ON* */ + + setup_queue (VK_QUEUE_VIDEO_ENCODE_BIT_KHR, + VK_VIDEO_CODEC_OPERATION_ENCODE_AV1_BIT_KHR); + + if (!video_queue) { + GST_WARNING ("Unable to find encoding queue"); + return NULL; + } + + if (!graphics_queue) { + GST_WARNING ("Unable to find graphics queue"); + return NULL; + } + + enc = gst_vulkan_encoder_create_from_queue (video_queue, + VK_VIDEO_CODEC_OPERATION_ENCODE_AV1_BIT_KHR); + + if (!enc) { + GST_WARNING ("Unable to create a vulkan encoder, queue=%p", video_queue); + return NULL; + } + + fail_unless (gst_vulkan_encoder_quality_level (enc) == -1); + + fail_unless (gst_vulkan_encoder_start (enc, &profile, &quality_props, &err)); + + fail_unless (gst_vulkan_encoder_quality_level (enc) > -1); + + fail_unless (gst_vulkan_encoder_is_started (enc)); + + /* *INDENT-OFF* */ + av1_color_config = (StdVideoAV1ColorConfig) { + .flags = (StdVideoAV1ColorConfigFlags) { + .mono_chrome = 0, + .color_range = 0, + .separate_uv_delta_q = 0, + .color_description_present_flag = 0, + }, + .BitDepth = 8, /* VK_FORMAT_G8_B8R8_2PLANE_420_UNORM */ + .subsampling_x = 1, + .subsampling_y = 1, + .color_primaries = STD_VIDEO_AV1_COLOR_PRIMARIES_BT_UNSPECIFIED, + .transfer_characteristics = STD_VIDEO_AV1_TRANSFER_CHARACTERISTICS_UNSPECIFIED, + .matrix_coefficients = STD_VIDEO_AV1_MATRIX_COEFFICIENTS_UNSPECIFIED, + .chroma_sample_position = STD_VIDEO_AV1_CHROMA_SAMPLE_POSITION_UNKNOWN, + }; + + av1_seq_header = (StdVideoAV1SequenceHeader) { + .flags = (StdVideoAV1SequenceHeaderFlags) { + .still_picture = 0, + .reduced_still_picture_header = 0, + .use_128x128_superblock = 0, + .enable_filter_intra = 0, + 
.enable_intra_edge_filter = 0, + .enable_interintra_compound = 0, + .enable_masked_compound = 0, + .enable_warped_motion = 0, + .enable_dual_filter = 0, + .enable_order_hint = 1, + .enable_jnt_comp = 0, + .enable_ref_frame_mvs = 0, + .frame_id_numbers_present_flag = 0, + .enable_superres = 0, + .enable_cdef = 0, + .enable_restoration = 0, + .film_grain_params_present = 0, + .timing_info_present_flag = 0, + .initial_display_delay_present_flag = 0, + }, + .seq_profile = av1_profile, + .frame_width_bits_minus_1 = _av1_helper_msb (width), + .frame_height_bits_minus_1 = _av1_helper_msb (height), + .max_frame_width_minus_1 = width - 1, + .max_frame_height_minus_1 = height - 1, + .delta_frame_id_length_minus_2 = DELTA_FRAME_ID_BITS - 2, /* Comes from vk_video_samples */ + .additional_frame_id_length_minus_1 = FRAME_ID_BITS - DELTA_FRAME_ID_BITS - 1, /* Comes from vk_video_samples */ + .order_hint_bits_minus_1 = MAX (_av1_helper_msb(gop_size), MAX_ORDER_HINT - 1), /* Should be ceil log2 of the gop size with MAX_ORDER_HINT as max value */ + .seq_force_integer_mv = 0, + .seq_force_screen_content_tools = 0, + .pColorConfig = &av1_color_config, + .pTimingInfo = NULL, + }; + + av1_model_info = (StdVideoEncodeAV1DecoderModelInfo) { + .buffer_delay_length_minus_1 = 0, + .buffer_removal_time_length_minus_1 = 0, + .frame_presentation_time_length_minus_1 = 0, + .num_units_in_decoding_tick = 0, + }; + + av1_operating_point_info = (StdVideoEncodeAV1OperatingPointInfo) { + .flags = (StdVideoEncodeAV1OperatingPointInfoFlags) { + .decoder_model_present_for_this_op = 0, + .low_delay_mode_flag = 0, + .initial_display_delay_present_for_this_op = 0, + }, + .operating_point_idc = 0, + .seq_level_idx = 0, + .seq_tier = 0, + .decoder_buffer_delay = 0, + .encoder_buffer_delay = 0, + .initial_display_delay_minus_1 = 0, + }; + + enc_params.av1 = (VkVideoEncodeAV1SessionParametersCreateInfoKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_AV1_SESSION_PARAMETERS_CREATE_INFO_KHR, + .pNext = NULL, + 
.pStdSequenceHeader = &av1_seq_header,
+    .pStdDecoderModelInfo = &av1_model_info,
+    .stdOperatingPointCount = 1,
+    .pStdOperatingPoints = &av1_operating_point_info,
+  };
+  /* *INDENT-ON* */
+
+  fail_unless (gst_vulkan_encoder_update_video_session_parameters (enc,
+      &enc_params, &err));
+
+  check_av1_session_params (enc);
+
+  return enc;
+}
+
+static void
+encode_frame (GstVulkanEncoder * enc, GstVulkanAV1EncoderFrame * frame,
+    StdVideoAV1FrameType frame_type, guint frame_num,
+    GstVulkanAV1EncoderFrame ** list0, gint list0_num,
+    GstVulkanAV1EncoderFrame ** list1, gint list1_num)
+{
+  GstVulkanVideoCapabilities enc_caps;
+  int i, ref_pics_num = 0;
+  GstVulkanEncoderPicture *ref_pics[16] = { NULL, };
+  GstVulkanEncoderPicture *picture = &frame->picture;
+  GstVulkanEncoderCallbacks cb = { setup_codec_pic, setup_rc_codec };
+
+  GST_DEBUG ("Encoding frame num:%d", frame_num);
+
+  fail_unless (gst_vulkan_encoder_caps (enc, &enc_caps));
+
+  gst_vulkan_encoder_set_callbacks (enc, &cb, &enc_caps, NULL);
+
+  /* *INDENT-OFF* */
+  frame->pic_info = (StdVideoEncodeAV1PictureInfo) {
+    .flags = (StdVideoEncodeAV1PictureInfoFlags) {
+      .error_resilient_mode = (frame_type == STD_VIDEO_AV1_FRAME_TYPE_KEY),
+      .disable_cdf_update = 0,
+      .use_superres = 0,
+      .render_and_frame_size_different = 0,
+      .allow_screen_content_tools = 0,
+      .is_filter_switchable = 0,
+      .force_integer_mv = 0,
+      .frame_size_override_flag = 0,
+      .buffer_removal_time_present_flag = 0,
+      .allow_intrabc = 0,
+      .frame_refs_short_signaling = 0,
+      .allow_high_precision_mv = 0,
+      .is_motion_mode_switchable = 0,
+      .use_ref_frame_mvs = 0,
+      .disable_frame_end_update_cdf = 0,
+      .allow_warped_motion = 0,
+      .reduced_tx_set = 0,
+      .skip_mode_present = 0,
+      .delta_q_present = 0,
+      .delta_lf_present = 0,
+      .delta_lf_multi = 0,
+      .segmentation_enabled = 0,
+      .segmentation_update_map = 0,
+      .segmentation_temporal_update = 0,
+      .segmentation_update_data = 0,
+      .UsesLr = 0,
+      .usesChromaLr = 0,
+      .show_frame = (frame->pic_order_cnt
<= frame->pic_num), + .showable_frame = (frame_type != STD_VIDEO_AV1_FRAME_TYPE_KEY), + }, + .frame_type = frame_type, + .frame_presentation_time = 0, + .current_frame_id = frame_num, + .order_hint = frame->pic_order_cnt % (1 << MAX_ORDER_HINT), + .primary_ref_frame = STD_VIDEO_AV1_PRIMARY_REF_NONE, + .refresh_frame_flags = 0xff, /* set during `setup_codec_pic` callback */ + .coded_denom = 0, + .render_width_minus_1 = GST_VIDEO_INFO_WIDTH (&out_info) - 1, + .render_height_minus_1 = GST_VIDEO_INFO_HEIGHT (&out_info) - 1, + .interpolation_filter = STD_VIDEO_AV1_INTERPOLATION_FILTER_EIGHTTAP, + .TxMode = STD_VIDEO_AV1_TX_MODE_ONLY_4X4, + .delta_q_res = 0, + .delta_lf_res = 0, + .pTileInfo = NULL, + .pQuantization = NULL, + .pSegmentation = NULL, + .pLoopFilter = NULL, + .pCDEF = NULL, + .pLoopRestoration = NULL, + .pGlobalMotion = NULL, + .pExtensionHeader = NULL, + .pBufferRemovalTimes = NULL, + }; + + frame->enc_pic_info = (VkVideoEncodeAV1PictureInfoKHR) { + .sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_AV1_PICTURE_INFO_KHR, + .pNext = NULL, + .predictionMode = VK_VIDEO_ENCODE_AV1_PREDICTION_MODE_INTRA_ONLY_KHR, + .rateControlGroup = VK_VIDEO_ENCODE_AV1_RATE_CONTROL_GROUP_INTRA_KHR, + .constantQIndex = 64, + .pStdPictureInfo = &frame->pic_info, + .primaryReferenceCdfOnly = VK_FALSE, + .generateObuExtensionHeader = VK_FALSE, + }; + /* *INDENT-ON* */ + + memset (frame->pic_info.ref_order_hint, 0, STD_VIDEO_AV1_NUM_REF_FRAMES); + memset (frame->pic_info.ref_frame_idx, 0, STD_VIDEO_AV1_REFS_PER_FRAME); + memset (frame->pic_info.delta_frame_id_minus_1, 0, + STD_VIDEO_AV1_REFS_PER_FRAME * sizeof (uint32_t)); + + if (frame_type != STD_VIDEO_AV1_FRAME_TYPE_KEY) { + if (list1_num) { /* Bi-directional frame */ + frame->enc_pic_info.predictionMode = + VK_VIDEO_ENCODE_AV1_PREDICTION_MODE_BIDIRECTIONAL_COMPOUND_KHR; + frame->enc_pic_info.rateControlGroup = + VK_VIDEO_ENCODE_AV1_RATE_CONTROL_GROUP_BIPREDICTIVE_KHR; + frame->pic_info.refresh_frame_flags = 0; + } else { + if 
(enc_caps.encoder.codec.av1.maxUnidirectionalCompoundReferenceCount
+          && list0_num > 1) {
+        frame->enc_pic_info.predictionMode =
+            VK_VIDEO_ENCODE_AV1_PREDICTION_MODE_UNIDIRECTIONAL_COMPOUND_KHR;
+      } else {
+        frame->enc_pic_info.predictionMode =
+            VK_VIDEO_ENCODE_AV1_PREDICTION_MODE_SINGLE_REFERENCE_KHR;
+      }
+      frame->enc_pic_info.rateControlGroup =
+          VK_VIDEO_ENCODE_AV1_RATE_CONTROL_GROUP_PREDICTIVE_KHR;
+    }
+  }
+
+  /* The NVIDIA driver crashes if the referenceNameSlotIndices entries are
+   * not all initialized to -1. */
+  memset (frame->enc_pic_info.referenceNameSlotIndices, -1,
+      STD_VIDEO_AV1_REFS_PER_FRAME * sizeof (int32_t));
+
+  /* *INDENT-OFF* */
+  frame->ref_info = (StdVideoEncodeAV1ReferenceInfo) {
+    .flags = (StdVideoEncodeAV1ReferenceInfoFlags) {
+      .disable_frame_end_update_cdf = 0,
+      .segmentation_enabled = 0,
+    },
+    .RefFrameId = 0, /* FIXME Vulkan Video Samples value is 0 too */
+    .frame_type = frame_type,
+    .OrderHint = frame->pic_order_cnt % (1 << MAX_ORDER_HINT),
+    .pExtensionHeader = NULL,
+  };
+  /* *INDENT-ON* */
+
+  for (i = 0; i < list0_num; i++) {
+    ref_pics[i] = &list0[i]->picture;
+    frame->enc_pic_info.referenceNameSlotIndices[i] =
+        list0[i]->picture.dpb_slot.slotIndex;
+    ref_pics_num++;
+  }
+
+  for (i = 0; i < list1_num; i++) {
+    ref_pics[i + list0_num] = &list1[i]->picture;
+    frame->enc_pic_info.referenceNameSlotIndices[STD_VIDEO_AV1_REFS_PER_FRAME -
+        1] = list1[i]->picture.dpb_slot.slotIndex;
+    ref_pics_num++;
+  }
+
+  fail_unless (gst_vulkan_encoder_encode (enc, &in_info, picture, ref_pics_num,
+      ref_pics));
+}
+
+static void
+tear_down_encoder (GstVulkanEncoder * enc)
+{
+  if (enc) {
+    fail_unless (gst_vulkan_encoder_stop (enc));
+    gst_object_unref (enc);
+  }
+  if (exec) {
+    if (!gst_vulkan_operation_wait (exec)) {
+      GST_WARNING
+          ("Failed to wait for all fences to complete before shutting down");
+    }
+    gst_object_unref (exec);
+    exec = NULL;
+  }
+  gst_clear_object (&video_queue);
+  gst_clear_object (&graphics_queue);
+  gst_av1_parser_free (parser);
+  parser = NULL;
+}
+
+static void
+check_encoded_frame (GstVulkanAV1EncoderFrame * frame,
+    GstAV1FrameType frame_type)
+{
+  GstMapInfo info;
+  GstAV1OBU obu;
+  GstAV1OBUType obu_type;
+  fail_unless (frame->picture.out_buffer != NULL);
+  gst_buffer_map (frame->picture.out_buffer, &info, GST_MAP_READ);
+  fail_unless (info.size);
+  GST_MEMDUMP ("out buffer", info.data, info.size);
+
+  obu_type = check_av1_obu (info.data, info.size, &obu);
+  if (obu_type == GST_AV1_OBU_FRAME) {
+    check_av1_obu_frame (&obu, frame_type);
}
+  gst_buffer_unmap (frame->picture.out_buffer, &info);
+}
+
+#define N_BUFFERS STD_VIDEO_AV1_NUM_REF_FRAMES + 1
+#define FRAME_WIDTH 720
+#define FRAME_HEIGHT 480
+
+GST_START_TEST (test_encoder_av1_key)
+{
+  GstVulkanEncoder *enc;
+  uint32_t width = FRAME_WIDTH;
+  uint32_t height = FRAME_HEIGHT;
+  GstVulkanAV1EncoderFrame *frame;
+  int frame_num = 0;
+  int i;
+  /* Create and setup an AV1 encoder with its initial session parameters */
+  enc = setup_av1_encoder (width, height, N_BUFFERS);
+  if (!enc) {
+    GST_WARNING ("Unable to initialize AV1 encoder");
+    return;
+  }
+
+  buffer_pool = allocate_buffer_pool (enc, width, height);
+  img_pool = allocate_image_buffer_pool (enc, width, height);
+
+  /* Encode N_BUFFERS of I-Frames */
+  for (i = 0; i < N_BUFFERS; i++) {
+    frame = allocate_av1_frame (enc, width, height, TRUE);
+    encode_frame (enc, frame, STD_VIDEO_AV1_FRAME_TYPE_KEY,
+        frame_num, NULL, 0, NULL, 0);
+    check_encoded_frame (frame, GST_AV1_KEY_FRAME);
+
+    frame_num++;
+    _av1_encode_frame_free (enc, frame);
+  }
+
+  fail_unless (gst_buffer_pool_set_active (buffer_pool, FALSE));
+  gst_object_unref (buffer_pool);
+  fail_unless (gst_buffer_pool_set_active (img_pool, FALSE));
+  gst_object_unref (img_pool);
+
+  tear_down_encoder (enc);
+}
+
+GST_END_TEST;
+
+GST_START_TEST (test_encoder_av1_inter)
+{
+  GstVulkanEncoder *enc;
+  uint32_t width = FRAME_WIDTH;
+  uint32_t height = FRAME_HEIGHT;
+  GstVulkanAV1EncoderFrame *frame;
+  GstVulkanAV1EncoderFrame *list0[2] = { NULL, };
+  int frame_num = 0;
+  int i;
+  /* Create and setup an AV1 encoder with its initial session parameters */
+  enc = setup_av1_encoder (width, height, N_BUFFERS);
+  if (!enc) {
+    GST_WARNING ("Unable to initialize AV1 encoder");
+    return;
+  }
+
+  buffer_pool = allocate_buffer_pool (enc, width, height);
+  img_pool = allocate_image_buffer_pool (enc, width, height);
+
+  frame = allocate_av1_frame (enc, width, height, TRUE);
+  encode_frame (enc, frame, STD_VIDEO_AV1_FRAME_TYPE_KEY,
+      frame_num, NULL, 0, NULL, 0);
+  check_encoded_frame (frame, GST_AV1_KEY_FRAME);
+  list0[0] = frame;
+  frame_num++;
+
+  /* Encode N_BUFFERS of Inter-Frames */
+  for (i = 1; i < N_BUFFERS; i++) {
+    frame = allocate_av1_frame (enc, width, height, TRUE);
+    frame->pic_num = frame_num;
+    frame->pic_order_cnt = frame_num;
+    encode_frame (enc, frame, STD_VIDEO_AV1_FRAME_TYPE_INTER,
+        frame_num, list0, 1, NULL, 0);
+    check_encoded_frame (frame, GST_AV1_INTER_FRAME);
+    _av1_encode_frame_free (enc, list0[0]);
+    list0[0] = frame;
+    frame_num++;
+  }
+
+  _av1_encode_frame_free (enc, frame);
+
+  fail_unless (gst_buffer_pool_set_active (buffer_pool, FALSE));
+  gst_object_unref (buffer_pool);
+  fail_unless (gst_buffer_pool_set_active (img_pool, FALSE));
+  gst_object_unref (img_pool);
+
+  tear_down_encoder (enc);
+}
+
+GST_END_TEST;
+
+GST_START_TEST (test_encoder_av1_inter_bi)
+{
+  GstVulkanEncoder *enc;
+  uint32_t width = FRAME_WIDTH;
+  uint32_t height = FRAME_HEIGHT;
+  GstVulkanAV1EncoderFrame *frame;
+  GstVulkanAV1EncoderFrame *list0[STD_VIDEO_AV1_NUM_REF_FRAMES] = { NULL, };
+  GstVulkanAV1EncoderFrame *list1[STD_VIDEO_AV1_NUM_REF_FRAMES] = { NULL, };
+  gint list0_num = 0;
+  gint list1_num = 0;
+  int frame_num = 0;
+  GstVulkanVideoCapabilities enc_caps;
+
+  /* Create and setup an AV1 encoder with its initial session parameters */
+  enc = setup_av1_encoder (width, height, 4);
+  if (!enc) {
+    GST_WARNING ("Unable to initialize AV1 encoder");
+    return;
+  }
+
+  fail_unless (gst_vulkan_encoder_caps (enc, &enc_caps));
+
+  if (!enc_caps.encoder.codec.av1.maxBidirectionalCompoundReferenceCount) {
+    GST_WARNING ("Driver does not support bi-directional frames");
+    goto beach;
+  }
+
+  buffer_pool = allocate_buffer_pool (enc, width, height);
+  img_pool = allocate_image_buffer_pool (enc, width, height);
+
+  /* Encode 1st picture as an IDR-Frame */
+  frame = allocate_av1_frame (enc, width, height, TRUE);
+  encode_frame (enc, frame, STD_VIDEO_AV1_FRAME_TYPE_KEY,
+      frame_num, NULL, 0, NULL, 0);
+  check_encoded_frame (frame, GST_AV1_KEY_FRAME);
+  list0[0] = frame;
+  list0_num++;
+  frame_num++;
+
+  /* Encode 4th picture as a P-Frame */
+  frame = allocate_av1_frame (enc, width, height, TRUE);
+  frame->pic_num = frame_num;   /* Encode order */
+  frame->pic_order_cnt = 3;     /* Display order */
+  encode_frame (enc, frame, STD_VIDEO_AV1_FRAME_TYPE_INTER,
+      frame_num, list0, list0_num, list1, list1_num);
+  check_encoded_frame (frame, GST_AV1_INTER_FRAME);
+  list1[0] = frame;
+  list1_num++;
+  frame_num++;
+
+  /* Encode 2nd picture as a B-Frame */
+  frame = allocate_av1_frame (enc, width, height, FALSE);
+  frame->pic_num = frame_num;
+  frame->pic_order_cnt = 1;
+  encode_frame (enc, frame, STD_VIDEO_AV1_FRAME_TYPE_INTER,
+      frame_num, list0, list0_num, list1, list1_num);
+  check_encoded_frame (frame, GST_AV1_INTER_FRAME);
+  frame_num++;
+  _av1_encode_frame_free (enc, frame);
+
+  /* Encode 3rd picture as a B-Frame */
+  frame = allocate_av1_frame (enc, width, height, FALSE);
+  frame->pic_num = frame_num;
+  frame->pic_order_cnt = 2;
+
+  encode_frame (enc, frame, STD_VIDEO_AV1_FRAME_TYPE_INTER,
+      frame_num, list0, list0_num, list1, list1_num);
+  check_encoded_frame (frame, GST_AV1_INTER_FRAME);
+  frame_num++;
+  _av1_encode_frame_free (enc, frame);
+
+  _av1_encode_frame_free (enc, list0[0]);
+  _av1_encode_frame_free (enc, list1[0]);
+
+  fail_unless (gst_buffer_pool_set_active (buffer_pool, FALSE));
+  gst_object_unref (buffer_pool);
+  fail_unless (gst_buffer_pool_set_active (img_pool, FALSE));
+  gst_object_unref (img_pool);
+
+beach:
+  tear_down_encoder (enc);
+}
+
+GST_END_TEST;
+
+
+static Suite *
+vkvideo_suite (void)
+{
+  Suite *s = suite_create ("vkvideo");
+  TCase *tc_basic = tcase_create ("general");
+  gboolean have_instance;
+
+  suite_add_tcase (s, tc_basic);
+  tcase_add_checked_fixture (tc_basic, setup, teardown);
+
+  /* FIXME: CI doesn't have a software vulkan renderer (and none exists currently) */
+  instance = gst_vulkan_instance_new ();
+  have_instance = gst_vulkan_instance_open (instance, NULL);
+  
gst_object_unref (instance); + if (have_instance) { + tcase_add_test (tc_basic, test_encoder_av1_key); + tcase_add_test (tc_basic, test_encoder_av1_inter); + tcase_add_test (tc_basic, test_encoder_av1_inter_bi); + } + + return s; +} + +GST_CHECK_MAIN (vkvideo);
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/libs/vkvideoencodebase.c
Added
@@ -0,0 +1,348 @@
+/* GStreamer
+ *
+ * Copyright (C) 2025 Igalia, S.L.
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Library General Public
+ * License as published by the Free Software Foundation; either
+ * version 2 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Library General Public License for more details.
+ *
+ * You should have received a copy of the GNU Library General Public
+ * License along with this library; if not, write to the
+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ * Boston, MA 02111-1307, USA.
+ */
+
+#ifdef HAVE_CONFIG_H
+#include "config.h"
+#endif
+
+#include <gst/gst.h>
+#include <gst/check/gstcheck.h>
+#include <gst/vulkan/vulkan.h>
+#include <gst/vulkan/gstvkencoder-private.h>
+
+static GstVulkanInstance *instance;
+static GstVulkanQueue *video_queue = NULL;
+static GstVulkanQueue *graphics_queue = NULL;
+static GstVulkanDevice *device;
+static GstBufferPool *img_pool;
+static GstBufferPool *buffer_pool;
+static GstVulkanOperation *exec = NULL;
+static GstVideoInfo in_info;
+static GstVideoInfo out_info;
+
+static void
+setup (void)
+{
+  instance = gst_vulkan_instance_new ();
+  fail_unless (gst_vulkan_instance_open (instance, NULL));
+}
+
+static void
+teardown (void)
+{
+  gst_clear_object (&video_queue);
+  gst_clear_object (&graphics_queue);
+  gst_clear_object (&device);
+  gst_object_unref (instance);
+}
+
+struct QueueProps
+{
+  guint expected_flags;
+  guint codec;
+};
+
+static gboolean
+_choose_queue (GstVulkanDevice * device, GstVulkanQueue * _queue, gpointer data)
+{
+  guint flags =
+      device->physical_device->queue_family_props[_queue->family].queueFlags;
+  guint32 codec =
+      device->physical_device->queue_family_ops[_queue->family].video;
+  struct QueueProps *qprops = data;
+
+  if ((flags & VK_QUEUE_TRANSFER_BIT) == VK_QUEUE_TRANSFER_BIT) {
+    gst_object_replace ((GstObject **) & graphics_queue,
+        GST_OBJECT_CAST (_queue));
+  }
+
+  if (((flags & qprops->expected_flags) == qprops->expected_flags)
+      && ((codec & qprops->codec) == qprops->codec))
+    gst_object_replace ((GstObject **) & video_queue, GST_OBJECT_CAST (_queue));
+
+  return !(graphics_queue && video_queue);
+}
+
+static void
+setup_queue (guint expected_flags, guint codec)
+{
+  int i;
+  struct QueueProps qprops = { expected_flags, codec };
+
+  for (i = 0; i < instance->n_physical_devices; i++) {
+    device = gst_vulkan_device_new_with_index (instance, i);
+    fail_unless (gst_vulkan_device_open (device, NULL));
+    gst_vulkan_device_foreach_queue (device, _choose_queue, &qprops);
+    if (video_queue && GST_IS_VULKAN_QUEUE (video_queue)
+        && graphics_queue && GST_IS_VULKAN_QUEUE (graphics_queue))
+      break;
+    gst_clear_object (&device);
+    gst_clear_object (&video_queue);
+    gst_clear_object (&graphics_queue);
+  }
+}
+
+/* initialize the vulkan image buffer pool */
+static GstBufferPool *
+allocate_image_buffer_pool (GstVulkanEncoder * enc, uint32_t width,
+    uint32_t height)
+{
+  GstVideoFormat format = GST_VIDEO_FORMAT_NV12;
+  GstCaps *profile_caps, *caps = gst_caps_new_simple ("video/x-raw", "format",
+      G_TYPE_STRING, gst_video_format_to_string (format), "width", G_TYPE_INT,
+      width, "height", G_TYPE_INT, height, NULL);
+  GstBufferPool *pool = gst_vulkan_image_buffer_pool_new (video_queue->device);
+  GstStructure *config = gst_buffer_pool_get_config (pool);
+  gsize frame_size = width * height * 2;
+
+  gst_caps_set_features_simple (caps,
+      gst_caps_features_new (GST_CAPS_FEATURE_MEMORY_VULKAN_IMAGE, NULL));
+  fail_unless (gst_vulkan_encoder_create_dpb_pool (enc, caps));
+
+  gst_video_info_from_caps (&out_info, caps);
+
+  gst_buffer_pool_config_set_params (config, caps, frame_size, 1, 0);
+  gst_vulkan_image_buffer_pool_config_set_allocation_params (config,
+      VK_IMAGE_USAGE_TRANSFER_DST_BIT |
+      VK_IMAGE_USAGE_VIDEO_ENCODE_SRC_BIT_KHR,
+      VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT,
+      VK_IMAGE_LAYOUT_VIDEO_ENCODE_SRC_KHR,
+      VK_ACCESS_TRANSFER_READ_BIT | VK_ACCESS_TRANSFER_WRITE_BIT);
+
+  profile_caps = gst_vulkan_encoder_profile_caps (enc);
+  gst_vulkan_image_buffer_pool_config_set_encode_caps (config, profile_caps);
+
+  gst_caps_unref (caps);
+  gst_caps_unref (profile_caps);
+
+  fail_unless (gst_buffer_pool_set_config (pool, config));
+  fail_unless (gst_buffer_pool_set_active (pool, TRUE));
+  return pool;
+}
+
+static GstBufferPool *
+allocate_buffer_pool (GstVulkanEncoder * enc, uint32_t width, uint32_t height)
+{
+  GstVideoFormat format = GST_VIDEO_FORMAT_NV12;
+  GstCaps *profile_caps, *caps = gst_caps_new_simple ("video/x-raw", "format",
+      G_TYPE_STRING, gst_video_format_to_string (format), "width", G_TYPE_INT,
+      width, "height", G_TYPE_INT, height, NULL);
+  gsize frame_size = width * height * 2;
+  GstBufferPool *pool = gst_vulkan_buffer_pool_new (video_queue->device);
+  GstStructure *config = gst_buffer_pool_get_config (pool);
+
+  gst_caps_set_features_simple (caps,
+      gst_caps_features_new (GST_CAPS_FEATURE_MEMORY_VULKAN_BUFFER, NULL));
+
+  gst_video_info_from_caps (&in_info, caps);
+
+  gst_buffer_pool_config_set_params (config, caps, frame_size, 1, 0);
+
+  profile_caps = gst_vulkan_encoder_profile_caps (enc);
+  gst_vulkan_image_buffer_pool_config_set_encode_caps (config, profile_caps);
+
+  gst_caps_unref (caps);
+  gst_caps_unref (profile_caps);
+
+  gst_vulkan_image_buffer_pool_config_set_allocation_params (config,
+      VK_IMAGE_USAGE_TRANSFER_SRC_BIT,
+      VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT,
+      VK_IMAGE_LAYOUT_VIDEO_ENCODE_SRC_KHR, VK_ACCESS_TRANSFER_WRITE_BIT);
+
+  fail_unless (gst_buffer_pool_set_config (pool, config));
+  fail_unless (gst_buffer_pool_set_active (pool, TRUE));
+
+  return pool;
+}
+
+static GstBuffer *
+generate_input_buffer (GstBufferPool * pool, int width, int height)
+{
+  int i;
+  GstBuffer *buffer;
+  GstMapInfo info;
+  GstMemory *mem;
+
+  if ((gst_buffer_pool_acquire_buffer (pool, &buffer, NULL))
+      != GST_FLOW_OK)
+    goto out;
+
+  // PLANE Y COLOR BLUE
+  mem = gst_buffer_peek_memory (buffer, 0);
+  gst_memory_map (mem, &info, GST_MAP_WRITE);
+  for (i = 0; i < width * height; i++)
+    info.data[i] = 0x29;
+  gst_memory_unmap (mem, &info);
+
+  // PLANE UV
+  mem = gst_buffer_peek_memory (buffer, 1);
+  gst_memory_map (mem, &info, GST_MAP_WRITE);
+  for (i = 0; i < width * height / 2; i++) {
+    info.data[i] = 0xf0;
+    info.data[i++] = 0x6e;
+  }
+
+  gst_memory_unmap (mem, &info);
+
+out:
+  return buffer;
+}
+
+/* upload the raw input buffer pool into a vulkan image buffer */
+static GstFlowReturn
+upload_buffer_to_image (GstBufferPool * pool, GstBuffer * inbuf,
+    GstBuffer ** outbuf)
+{
+  GstFlowReturn ret = GST_FLOW_ERROR;
+  GError *error = NULL;
+  GstVulkanCommandBuffer *cmd_buf;
+  guint i, n_mems, n_planes;
+  GArray *barriers = NULL;
+  VkImageLayout dst_layout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL;
+  VkDependencyInfoKHR dependency_info;
+
+  if ((ret = gst_buffer_pool_acquire_buffer (pool, outbuf, NULL))
+      != GST_FLOW_OK)
+    goto out;
+
+  if (!exec) {
+    GstVulkanCommandPool *cmd_pool =
+        gst_vulkan_queue_create_command_pool (graphics_queue, &error);
+    if (!cmd_pool)
+      goto error;
+
+    exec = gst_vulkan_operation_new (cmd_pool);
+    gst_object_unref (cmd_pool);
+  }
+
+  if (!gst_vulkan_operation_add_dependency_frame (exec, *outbuf,
+          VK_PIPELINE_STAGE_ALL_COMMANDS_BIT,
+          VK_PIPELINE_STAGE_2_ALL_COMMANDS_BIT))
+    goto error;
+
+  if (!gst_vulkan_operation_begin (exec, &error))
+    goto error;
+
+  cmd_buf = exec->cmd_buf;
+
+  if (!gst_vulkan_operation_add_frame_barrier (exec, *outbuf,
+          VK_PIPELINE_STAGE_ALL_COMMANDS_BIT, VK_PIPELINE_STAGE_TRANSFER_BIT,
+          VK_ACCESS_TRANSFER_WRITE_BIT, VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
+          NULL))
+    goto unlock_error;
+
+  barriers = gst_vulkan_operation_retrieve_image_barriers (exec);
+  if (barriers->len == 0) {
+    ret = GST_FLOW_ERROR;
+    goto unlock_error;
+  }
+
+  dependency_info = (VkDependencyInfoKHR) {
+    .sType = VK_STRUCTURE_TYPE_DEPENDENCY_INFO_KHR,.imageMemoryBarrierCount =
+        barriers->len,.pImageMemoryBarriers =
+        (VkImageMemoryBarrier2 *) barriers->data,
+  };
+
+  gst_vulkan_operation_pipeline_barrier2 (exec, &dependency_info);
+  dst_layout = g_array_index (barriers, VkImageMemoryBarrier2KHR, 0).newLayout;
+
+  g_clear_pointer (&barriers, g_array_unref);
+
+  n_mems = gst_buffer_n_memory (*outbuf);
+  n_planes = GST_VIDEO_INFO_N_PLANES (&out_info);
+
+  for (i = 0; i < n_planes; i++) {
+    VkBufferImageCopy region;
+    GstMemory *in_mem, *out_mem;
+    GstVulkanBufferMemory *buf_mem;
+    GstVulkanImageMemory *img_mem;
+    const VkImageAspectFlags aspects[] = { VK_IMAGE_ASPECT_PLANE_0_BIT,
+      VK_IMAGE_ASPECT_PLANE_1_BIT, VK_IMAGE_ASPECT_PLANE_2_BIT,
+    };
+    VkImageAspectFlags plane_aspect;
+    guint idx;
+
+    in_mem = gst_buffer_peek_memory (inbuf, i);
+
+    buf_mem = (GstVulkanBufferMemory *) in_mem;
+
+    if (n_planes == n_mems)
+      plane_aspect = VK_IMAGE_ASPECT_COLOR_BIT;
+    else
+      plane_aspect = aspects[i];
+
+    /* *INDENT-OFF* */
+    region = (VkBufferImageCopy) {
+        .bufferOffset = 0,
+        .bufferRowLength = GST_VIDEO_INFO_COMP_WIDTH (&in_info, i),
+        .bufferImageHeight = GST_VIDEO_INFO_COMP_HEIGHT (&in_info, i),
+        .imageSubresource = {
+          .aspectMask = plane_aspect,
+          .mipLevel = 0,
+          .baseArrayLayer = 0,
+          .layerCount = 1,
+        },
+        .imageOffset = { .x = 0, .y = 0, .z = 0, },
+        .imageExtent = {
+          .width = GST_VIDEO_INFO_COMP_WIDTH (&out_info, i),
+          .height = GST_VIDEO_INFO_COMP_HEIGHT (&out_info, i),
+          .depth = 1,
+        }
+    };
+    /* *INDENT-ON* */
+
+    idx = MIN (i, n_mems - 1);
+    out_mem = gst_buffer_peek_memory (*outbuf, idx);
+    if (!gst_is_vulkan_image_memory (out_mem)) {
+      GST_WARNING ("Output is not a GstVulkanImageMemory");
+      goto unlock_error;
+    }
+    img_mem = (GstVulkanImageMemory *) out_mem;
+
+    gst_vulkan_command_buffer_lock (cmd_buf);
+    vkCmdCopyBufferToImage (cmd_buf->cmd, buf_mem->buffer, img_mem->image,
+        dst_layout, 1, &region);
+    gst_vulkan_command_buffer_unlock (cmd_buf);
+  }
+
+  if (!gst_vulkan_operation_end (exec, &error))
+    goto error;
+
+  /* Hazard WRITE_AFTER_WRITE */
+  gst_vulkan_operation_wait (exec);
+
+  ret = GST_FLOW_OK;
+
+out:
+  return ret;
+
+unlock_error:
+  gst_vulkan_operation_reset (exec);
+
+error:
+  if (error) {
+    GST_WARNING ("Error: %s", error->message);
+    g_clear_error (&error);
+  }
+  gst_clear_buffer (outbuf);
+  ret = GST_FLOW_ERROR;
+  goto out;
+}
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/libs/vkvideoencodeh264.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/libs/vkvideoencodeh264.c
Changed
@@ -22,27 +22,13 @@ #include "config.h" #endif -#include <gst/gst.h> -#include <gst/check/gstcheck.h> -#include <gst/vulkan/vulkan.h> #include <gst/codecparsers/gsth264parser.h> -#include <gst/vulkan/gstvkencoder-private.h> + +#include "vkvideoencodebase.c" // Include h264 std session params #include "vkcodecparams_h264.c" -static GstVulkanInstance *instance; - -static GstVulkanQueue *encode_queue = NULL; -static GstVulkanQueue *gfx_queue = NULL; -static GstBufferPool *img_pool; -static GstBufferPool *buffer_pool; - -static GstVulkanOperation *exec = NULL; - -static GstVideoInfo in_info; -static GstVideoInfo out_info; - typedef struct { GstVulkanEncoderPicture picture; @@ -79,279 +65,18 @@ static void _h264_encode_frame_free (GstVulkanEncoder * enc, gpointer pframe) { - GstVulkanH264EncodeFrame *frame = pframe; + GstVulkanH264EncodeFrame *frame = (GstVulkanH264EncodeFrame *) pframe; gst_vulkan_encoder_picture_clear (&frame->picture, enc); g_free (frame); } -static void -setup (void) -{ - instance = gst_vulkan_instance_new (); - fail_unless (gst_vulkan_instance_open (instance, NULL)); -} - -static void -teardown (void) -{ - gst_clear_object (&encode_queue); - gst_clear_object (&gfx_queue); - gst_object_unref (instance); -} #define H264_MB_SIZE_ALIGNMENT 16 -/* initialize the vulkan image buffer pool */ -static GstBufferPool * -allocate_image_buffer_pool (GstVulkanEncoder * enc, uint32_t width, - uint32_t height) -{ - GstVideoFormat format = GST_VIDEO_FORMAT_NV12; - GstCaps *profile_caps, *caps = gst_caps_new_simple ("video/x-raw", "format", - G_TYPE_STRING, gst_video_format_to_string (format), "width", G_TYPE_INT, - width, "height", G_TYPE_INT, height, NULL); - GstBufferPool *pool = gst_vulkan_image_buffer_pool_new (encode_queue->device); - GstStructure *config = gst_buffer_pool_get_config (pool); - gsize frame_size = width * height * 2; //NV12 - - gst_caps_set_features_simple (caps, - gst_caps_features_new_static_str (GST_CAPS_FEATURE_MEMORY_VULKAN_IMAGE, - NULL)); 
- fail_unless (gst_vulkan_encoder_create_dpb_pool (enc, caps)); - - gst_video_info_from_caps (&out_info, caps); - - gst_buffer_pool_config_set_params (config, caps, frame_size, 1, 0); - gst_vulkan_image_buffer_pool_config_set_allocation_params (config, - VK_IMAGE_USAGE_TRANSFER_DST_BIT | - VK_IMAGE_USAGE_VIDEO_ENCODE_SRC_BIT_KHR, - VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT, - VK_IMAGE_LAYOUT_VIDEO_ENCODE_SRC_KHR, - VK_ACCESS_TRANSFER_READ_BIT | VK_ACCESS_TRANSFER_WRITE_BIT); - - profile_caps = gst_vulkan_encoder_profile_caps (enc); - gst_vulkan_image_buffer_pool_config_set_encode_caps (config, profile_caps); - - gst_caps_unref (caps); - gst_caps_unref (profile_caps); - - fail_unless (gst_buffer_pool_set_config (pool, config)); - fail_unless (gst_buffer_pool_set_active (pool, TRUE)); - return pool; -} - -static GstBufferPool * -allocate_buffer_pool (GstVulkanEncoder * enc, uint32_t width, uint32_t height) -{ - GstVideoFormat format = GST_VIDEO_FORMAT_NV12; - GstCaps *profile_caps, *caps = gst_caps_new_simple ("video/x-raw", "format", - G_TYPE_STRING, gst_video_format_to_string (format), "width", G_TYPE_INT, - width, "height", G_TYPE_INT, height, NULL); - gsize frame_size = width * height * 2; //NV12 - GstBufferPool *pool = gst_vulkan_buffer_pool_new (encode_queue->device); - GstStructure *config = gst_buffer_pool_get_config (pool); - - gst_caps_set_features_simple (caps, - gst_caps_features_new_static_str (GST_CAPS_FEATURE_MEMORY_VULKAN_BUFFER, - NULL)); - - gst_video_info_from_caps (&in_info, caps); - - gst_buffer_pool_config_set_params (config, caps, frame_size, 1, 0); - - - profile_caps = gst_vulkan_encoder_profile_caps (enc); - gst_vulkan_image_buffer_pool_config_set_encode_caps (config, profile_caps); - - gst_caps_unref (caps); - gst_caps_unref (profile_caps); - - gst_vulkan_image_buffer_pool_config_set_allocation_params (config, - VK_IMAGE_USAGE_TRANSFER_SRC_BIT, - VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT, - VK_IMAGE_LAYOUT_VIDEO_ENCODE_SRC_KHR, 
VK_ACCESS_TRANSFER_WRITE_BIT); - - fail_unless (gst_buffer_pool_set_config (pool, config)); - fail_unless (gst_buffer_pool_set_active (pool, TRUE)); - - return pool; -} - -static GstBuffer * -generate_input_buffer (GstBufferPool * pool, int width, int height) -{ - int i; - GstBuffer *buffer; - GstMapInfo info; - GstMemory *mem; - - if ((gst_buffer_pool_acquire_buffer (pool, &buffer, NULL)) - != GST_FLOW_OK) - goto out; - - // PLANE Y COLOR BLUE - mem = gst_buffer_peek_memory (buffer, 0); - gst_memory_map (mem, &info, GST_MAP_WRITE); - for (i = 0; i < width * height; i++) - info.datai = 0x29; - gst_memory_unmap (mem, &info); - - // PLANE UV - mem = gst_buffer_peek_memory (buffer, 1); - gst_memory_map (mem, &info, GST_MAP_WRITE); - for (i = 0; i < width * height / 2; i++) { - info.datai = 0xf0; - info.datai++ = 0x6e; - } - - gst_memory_unmap (mem, &info); - -out: - return buffer; -} - -/* upload the raw input buffer pool into a vulkan image buffer */ -static GstFlowReturn -upload_buffer_to_image (GstBufferPool * pool, GstBuffer * inbuf, - GstBuffer ** outbuf) -{ - GstFlowReturn ret = GST_FLOW_OK; - GError *error = NULL; - GstVulkanCommandBuffer *cmd_buf; - guint i, n_mems, n_planes; - GArray *barriers = NULL; - VkImageLayout dst_layout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL; - - if ((ret = gst_buffer_pool_acquire_buffer (pool, outbuf, NULL)) - != GST_FLOW_OK) - goto out; - - if (!exec) { - GstVulkanCommandPool *cmd_pool = - gst_vulkan_queue_create_command_pool (gfx_queue, &error); - if (!cmd_pool) - goto error; - - exec = gst_vulkan_operation_new (cmd_pool); - gst_object_unref (cmd_pool); - } - - if (!gst_vulkan_operation_add_dependency_frame (exec, *outbuf, - VK_PIPELINE_STAGE_ALL_COMMANDS_BIT, VK_PIPELINE_STAGE_TRANSFER_BIT)) - goto error; - - if (!gst_vulkan_operation_begin (exec, &error)) - goto error; - - cmd_buf = exec->cmd_buf; - - if (!gst_vulkan_operation_add_frame_barrier (exec, *outbuf, - VK_PIPELINE_STAGE_ALL_COMMANDS_BIT, VK_PIPELINE_STAGE_TRANSFER_BIT, 
- VK_ACCESS_TRANSFER_WRITE_BIT, VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, - NULL)) - goto unlock_error; - - barriers = gst_vulkan_operation_retrieve_image_barriers (exec); - if (barriers->len == 0) { - ret = GST_FLOW_ERROR; - goto unlock_error; - } - - VkDependencyInfoKHR dependency_info = { - .sType = VK_STRUCTURE_TYPE_DEPENDENCY_INFO_KHR, - .pImageMemoryBarriers = (gpointer) barriers->data, - .imageMemoryBarrierCount = barriers->len, - }; - - gst_vulkan_operation_pipeline_barrier2 (exec, &dependency_info); - dst_layout = g_array_index (barriers, VkImageMemoryBarrier2KHR, 0).newLayout; - - g_clear_pointer (&barriers, g_array_unref); - - n_mems = gst_buffer_n_memory (*outbuf); - n_planes = GST_VIDEO_INFO_N_PLANES (&out_info); - - for (i = 0; i < n_planes; i++) { - VkBufferImageCopy region; - GstMemory *in_mem, *out_mem; - GstVulkanBufferMemory *buf_mem; - GstVulkanImageMemory *img_mem; - const VkImageAspectFlags aspects = { VK_IMAGE_ASPECT_PLANE_0_BIT, - VK_IMAGE_ASPECT_PLANE_1_BIT, VK_IMAGE_ASPECT_PLANE_2_BIT, - }; - VkImageAspectFlags plane_aspect; - guint idx; - - in_mem = gst_buffer_peek_memory (inbuf, i); - - buf_mem = (GstVulkanBufferMemory *) in_mem; - - if (n_planes == n_mems) - plane_aspect = VK_IMAGE_ASPECT_COLOR_BIT; - else - plane_aspect = aspectsi; - - /* *INDENT-OFF* */ - region = (VkBufferImageCopy) { - .bufferOffset = 0, - .bufferRowLength = GST_VIDEO_INFO_COMP_WIDTH (&in_info, i), - .bufferImageHeight = GST_VIDEO_INFO_COMP_HEIGHT (&in_info, i), - .imageSubresource = { - .aspectMask = plane_aspect, - .mipLevel = 0, - .baseArrayLayer = 0, - .layerCount = 1, - }, - .imageOffset = { .x = 0, .y = 0, .z = 0, }, - .imageExtent = { - .width = GST_VIDEO_INFO_COMP_WIDTH (&out_info, i), - .height = GST_VIDEO_INFO_COMP_HEIGHT (&out_info, i), - .depth = 1, - } - }; - - idx = MIN (i, n_mems - 1); - out_mem = gst_buffer_peek_memory (*outbuf, idx); - if (!gst_is_vulkan_image_memory (out_mem)) { - GST_WARNING ("Output is not a GstVulkanImageMemory"); - goto 
unlock_error; - } - img_mem = (GstVulkanImageMemory *) out_mem; - - gst_vulkan_command_buffer_lock (cmd_buf); - vkCmdCopyBufferToImage (cmd_buf->cmd, buf_mem->buffer, img_mem->image, - dst_layout, 1, ®ion); - gst_vulkan_command_buffer_unlock (cmd_buf); - } - - if (!gst_vulkan_operation_end (exec, &error)) - goto error; - - /*Hazard WRITE_AFTER_WRITE*/ - gst_vulkan_operation_wait (exec); - - - ret = GST_FLOW_OK; - -out: - return ret; - -unlock_error: - gst_vulkan_operation_reset (exec); - -error: - if (error) { - GST_WARNING ("Error: %s", error->message); - g_clear_error (&error); - } - gst_clear_buffer (outbuf); - ret = GST_FLOW_ERROR; - goto out; -} static GstVulkanH264EncodeFrame * -allocate_frame (GstVulkanEncoder * enc, int width, +allocate_h264_frame (GstVulkanEncoder * enc, int width, int height, gboolean is_ref) { GstVulkanH264EncodeFrame *frame; @@ -359,7 +84,7 @@ in_buffer = generate_input_buffer (buffer_pool, width, height); - upload_buffer_to_image(img_pool, in_buffer, &img_buffer); + upload_buffer_to_image (img_pool, in_buffer, &img_buffer); frame = _h264_encode_frame_new (enc, img_buffer, width * height * 3, is_ref); fail_unless (frame); @@ -379,7 +104,7 @@ gpointer data) { GstVulkanH264EncodeFrame *frame = (GstVulkanH264EncodeFrame *) pic; - GstVulkanVideoCapabilities *enc_caps = data; + GstVulkanVideoCapabilities *enc_caps = (GstVulkanVideoCapabilities *) data; info->pNext = &frame->enc_pic_info; pic->dpb_slot.pNext = &frame->dpb_slot_info; @@ -415,9 +140,9 @@ /* *INDENT-OFF* */ frame->rc_info = (VkVideoEncodeH264RateControlInfoKHR) { .sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_H264_RATE_CONTROL_INFO_KHR, + .pNext = NULL, .flags = VK_VIDEO_ENCODE_H264_RATE_CONTROL_REFERENCE_PATTERN_FLAT_BIT_KHR | VK_VIDEO_ENCODE_H264_RATE_CONTROL_REGULAR_GOP_BIT_KHR, - .pNext = NULL, .gopFrameCount = 1, .idrPeriod = 1, .consecutiveBFrameCount = 0, @@ -454,10 +179,11 @@ }, .first_mb_in_slice = 0, .slice_type = slice_type, - .cabac_init_idc = 
STD_VIDEO_H264_CABAC_INIT_IDC_0, - .disable_deblocking_filter_idc = STD_VIDEO_H264_DISABLE_DEBLOCKING_FILTER_IDC_DISABLED, .slice_alpha_c0_offset_div2 = 0, .slice_beta_offset_div2 = 0, + .slice_qp_delta = 0, + .cabac_init_idc = STD_VIDEO_H264_CABAC_INIT_IDC_0, + .disable_deblocking_filter_idc = STD_VIDEO_H264_DISABLE_DEBLOCKING_FILTER_IDC_DISABLED, .pWeightTable = NULL, /* *INDENT-ON* */ }; @@ -513,8 +239,8 @@ frame->slice_info = (VkVideoEncodeH264NaluSliceInfoKHR) { .sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_H264_NALU_SLICE_INFO_KHR, .pNext = NULL, - .pStdSliceHeader = &frame->slice_hdr, .constantQp = 26, + .pStdSliceHeader = &frame->slice_hdr, }; fail_unless (frame->slice_info.constantQp >= enc_caps.encoder.codec.h264.minQp); @@ -631,7 +357,6 @@ setup_h264_encoder (guint32 width, gint32 height, gint sps_id, gint pps_id) { GstVulkanEncoder *enc = NULL; - int i; GError *err = NULL; uint32_t mbAlignedWidth, mbAlignedHeight; GstVulkanVideoProfile profile; @@ -647,12 +372,12 @@ .pNext = &profile.usage.encode, .videoCodecOperation = VK_VIDEO_CODEC_OPERATION_ENCODE_H264_BIT_KHR, .chromaSubsampling = VK_VIDEO_CHROMA_SUBSAMPLING_420_BIT_KHR, - .chromaBitDepth = VK_VIDEO_COMPONENT_BIT_DEPTH_8_BIT_KHR, .lumaBitDepth = VK_VIDEO_COMPONENT_BIT_DEPTH_8_BIT_KHR, + .chromaBitDepth = VK_VIDEO_COMPONENT_BIT_DEPTH_8_BIT_KHR, }, .usage.encode = { - .pNext = &profile.codec, .sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_USAGE_INFO_KHR, + .pNext = &profile.codec, .videoUsageHints = VK_VIDEO_ENCODE_USAGE_DEFAULT_KHR, .videoContentHints = VK_VIDEO_ENCODE_CONTENT_DEFAULT_KHR, .tuningMode = VK_VIDEO_ENCODE_TUNING_MODE_DEFAULT_KHR, @@ -670,32 +395,24 @@ }; /* *INDENT-ON* */ - for (i = 0; i < instance->n_physical_devices; i++) { - GstVulkanDevice *device = gst_vulkan_device_new_with_index (instance, i); - encode_queue = - gst_vulkan_device_select_queue (device, VK_QUEUE_VIDEO_ENCODE_BIT_KHR); - gfx_queue = gst_vulkan_device_select_queue (device, VK_QUEUE_GRAPHICS_BIT); - gst_object_unref (device); - - 
if (encode_queue && gfx_queue) - break; - } + setup_queue (VK_QUEUE_VIDEO_ENCODE_BIT_KHR, + VK_VIDEO_CODEC_OPERATION_ENCODE_H264_BIT_KHR); - if (!encode_queue) { + if (!video_queue) { GST_WARNING ("Unable to find encoding queue"); return NULL; } - if (!gfx_queue) { + if (!graphics_queue) { GST_WARNING ("Unable to find graphics queue"); return NULL; } - enc = gst_vulkan_encoder_create_from_queue (encode_queue, + enc = gst_vulkan_encoder_create_from_queue (video_queue, VK_VIDEO_CODEC_OPERATION_ENCODE_H264_BIT_KHR); if (!enc) { - GST_WARNING ("Unable to create a vulkan encoder, queue=%p", encode_queue); + GST_WARNING ("Unable to create a vulkan encoder, queue=%p", video_queue); return NULL; } @@ -722,10 +439,10 @@ /* *INDENT-OFF* */ params_add = (VkVideoEncodeH264SessionParametersAddInfoKHR) { .sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_H264_SESSION_PARAMETERS_ADD_INFO_KHR, - .pStdSPSs = &h264_std_sps, .stdSPSCount = 1, - .pStdPPSs = &h264_std_pps, + .pStdSPSs = &h264_std_sps, .stdPPSCount = 1, + .pStdPPSs = &h264_std_pps, }; enc_params.h264 = (VkVideoEncodeH264SessionParametersCreateInfoKHR) { .sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_H264_SESSION_PARAMETERS_CREATE_INFO_KHR, @@ -758,8 +475,8 @@ gst_object_unref (exec); exec = NULL; } - gst_clear_object (&encode_queue); - gst_clear_object (&gfx_queue); + gst_clear_object (&video_queue); + gst_clear_object (&graphics_queue); } static void @@ -794,7 +511,7 @@ enc = setup_h264_encoder (width, height, sps_id, pps_id); if (!enc) { GST_WARNING ("Unable to initialize H264 encoder"); - return; + goto beach; } buffer_pool = allocate_buffer_pool (enc, width, height); @@ -802,7 +519,7 @@ /* Encode N_BUFFERS of I-Frames */ for (i = 0; i < N_BUFFERS; i++) { - frame = allocate_frame (enc, width, height, TRUE); + frame = allocate_h264_frame (enc, width, height, TRUE); encode_frame (enc, frame, STD_VIDEO_H264_SLICE_TYPE_I, frame_num, NULL, 0, NULL, 0, sps_id, pps_id); check_encoded_frame (frame, GST_H264_NAL_SLICE_IDR); @@ -816,6 +533,7 @@ 
fail_unless (gst_buffer_pool_set_active (img_pool, FALSE)); gst_object_unref (img_pool); +beach: tear_down_encoder (enc); } @@ -837,14 +555,14 @@ enc = setup_h264_encoder (width, height, sps_id, pps_id); if (!enc) { GST_WARNING ("Unable to initialize H264 encoder"); - return; + goto beach; } buffer_pool = allocate_buffer_pool (enc, width, height); img_pool = allocate_image_buffer_pool (enc, width, height); /* Encode first picture as an IDR-Frame */ - frame = allocate_frame (enc, width, height, TRUE); + frame = allocate_h264_frame (enc, width, height, TRUE); encode_frame (enc, frame, STD_VIDEO_H264_SLICE_TYPE_I, frame_num, NULL, 0, NULL, 0, sps_id, pps_id); check_encoded_frame (frame, GST_H264_NAL_SLICE_IDR); @@ -853,7 +571,7 @@ /* Encode following pictures as P-Frames */ for (i = 1; i < N_BUFFERS; i++) { - frame = allocate_frame (enc, width, height, TRUE); + frame = allocate_h264_frame (enc, width, height, TRUE); frame->pic_num = frame_num; frame->pic_order_cnt = frame_num; @@ -872,6 +590,7 @@ fail_unless (gst_buffer_pool_set_active (img_pool, FALSE)); gst_object_unref (img_pool); +beach: tear_down_encoder (enc); } @@ -895,7 +614,7 @@ enc = setup_h264_encoder (width, height, sps_id, pps_id); if (!enc) { GST_WARNING ("Unable to initialize H264 encoder"); - return; + goto beach; } fail_unless (gst_vulkan_encoder_caps (enc, &enc_caps)); @@ -909,7 +628,7 @@ img_pool = allocate_image_buffer_pool (enc, width, height); /* Encode 1st picture as an IDR-Frame */ - frame = allocate_frame (enc, width, height, TRUE); + frame = allocate_h264_frame (enc, width, height, TRUE); encode_frame (enc, frame, STD_VIDEO_H264_SLICE_TYPE_I, frame_num, NULL, 0, NULL, 0, sps_id, pps_id); check_encoded_frame (frame, GST_H264_NAL_SLICE_IDR); @@ -918,7 +637,7 @@ frame_num++; /* Encode 4th picture as a P-Frame */ - frame = allocate_frame (enc, width, height, TRUE); + frame = allocate_h264_frame (enc, width, height, TRUE); frame->pic_num = 3; frame->pic_order_cnt = frame->pic_num * 2; encode_frame 
(enc, frame, STD_VIDEO_H264_SLICE_TYPE_P, @@ -929,7 +648,7 @@ frame_num++; /* Encode second picture as a B-Frame */ - frame = allocate_frame (enc, width, height, FALSE); + frame = allocate_h264_frame (enc, width, height, FALSE); frame->pic_num = 1; frame->pic_order_cnt = frame->pic_num * 2; encode_frame (enc, frame, STD_VIDEO_H264_SLICE_TYPE_B, @@ -939,7 +658,7 @@ _h264_encode_frame_free (enc, frame); /* Encode third picture as a B-Frame */ - frame = allocate_frame (enc, width, height, FALSE); + frame = allocate_h264_frame (enc, width, height, FALSE); frame->pic_num = 2; frame->pic_order_cnt = frame->pic_num * 2; @@ -973,7 +692,7 @@ suite_add_tcase (s, tc_basic); tcase_add_checked_fixture (tc_basic, setup, teardown); - /* FIXME: CI doesn't have a software vulkan renderer (and none exists currently) */ + /* FIXME: CI doesn't have a software vulkan video encoder (and none exists currently) */ instance = gst_vulkan_instance_new (); have_instance = gst_vulkan_instance_open (instance, NULL); gst_object_unref (instance);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/libs/vkvideoencodeh265.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/libs/vkvideoencodeh265.c
Changed
@@ -22,29 +22,14 @@ #include "config.h" #endif -#include <gst/gst.h> -#include <gst/check/gstcheck.h> #include <gst/codecparsers/gsth265parser.h> -#include <gst/vulkan/vulkan.h> -#include <gst/vulkan/gstvkencoder-private.h> - - #include <math.h> +#include "vkvideoencodebase.c" + // Include h265 std session params #include "vkcodecparams_h265.c" -static GstVulkanInstance *instance; - -static GstVulkanQueue *encode_queue = NULL; -static GstVulkanQueue *gfx_queue = NULL; -static GstBufferPool *img_pool; -static GstBufferPool *buffer_pool; - -static GstVulkanOperation *exec = NULL; - -static GstVideoInfo in_info; -static GstVideoInfo out_info; typedef struct { GstVulkanEncoderPicture picture; @@ -83,279 +68,15 @@ static void _h265_encode_frame_free (GstVulkanEncoder * enc, gpointer pframe) { - GstVulkanH265EncodeFrame *frame = pframe; + GstVulkanH265EncodeFrame *frame = (GstVulkanH265EncodeFrame *) pframe; gst_vulkan_encoder_picture_clear (&frame->picture, enc); g_free (frame); } -static void -setup (void) -{ - instance = gst_vulkan_instance_new (); - fail_unless (gst_vulkan_instance_open (instance, NULL)); -} - -static void -teardown (void) -{ - gst_clear_object (&encode_queue); - gst_clear_object (&gfx_queue); - gst_object_unref (instance); -} - -/* initialize the input vulkan image buffer pool */ -static GstBufferPool * -allocate_image_buffer_pool (GstVulkanEncoder * enc, uint32_t width, - uint32_t height) -{ - GstVideoFormat format = GST_VIDEO_FORMAT_NV12; - GstCaps *profile_caps, *caps = gst_caps_new_simple ("video/x-raw", "format", - G_TYPE_STRING, gst_video_format_to_string (format), "width", G_TYPE_INT, - width, "height", G_TYPE_INT, height, NULL); - GstBufferPool *pool = gst_vulkan_image_buffer_pool_new (encode_queue->device); - GstStructure *config = gst_buffer_pool_get_config (pool); - gsize frame_size = width * height * 2; //NV12 - - gst_caps_set_features_simple (caps, - gst_caps_features_new_static_str (GST_CAPS_FEATURE_MEMORY_VULKAN_IMAGE, - NULL)); - - 
fail_unless (gst_vulkan_encoder_create_dpb_pool (enc, caps)); - gst_video_info_from_caps (&out_info, caps); - - gst_buffer_pool_config_set_params (config, caps, frame_size, 1, 0); - gst_vulkan_image_buffer_pool_config_set_allocation_params (config, - VK_IMAGE_USAGE_TRANSFER_DST_BIT | - VK_IMAGE_USAGE_VIDEO_ENCODE_SRC_BIT_KHR, - VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT, - VK_IMAGE_LAYOUT_VIDEO_ENCODE_SRC_KHR, - VK_ACCESS_TRANSFER_READ_BIT | VK_ACCESS_TRANSFER_WRITE_BIT); - - profile_caps = gst_vulkan_encoder_profile_caps (enc); - gst_vulkan_image_buffer_pool_config_set_encode_caps (config, profile_caps); - - gst_caps_unref (caps); - gst_caps_unref (profile_caps); - - fail_unless (gst_buffer_pool_set_config (pool, config)); - fail_unless (gst_buffer_pool_set_active (pool, TRUE)); - return pool; -} - -/* initialize the raw input buffer pool */ -static GstBufferPool * -allocate_buffer_pool (GstVulkanEncoder * enc, uint32_t width, uint32_t height) -{ - GstVideoFormat format = GST_VIDEO_FORMAT_NV12; - GstCaps *profile_caps, *caps = gst_caps_new_simple ("video/x-raw", "format", - G_TYPE_STRING, gst_video_format_to_string (format), "width", G_TYPE_INT, - width, "height", G_TYPE_INT, height, NULL); - gsize frame_size = width * height * 2; //NV12 - GstBufferPool *pool = gst_vulkan_buffer_pool_new (encode_queue->device); - GstStructure *config = gst_buffer_pool_get_config (pool); - - gst_caps_set_features_simple (caps, - gst_caps_features_new_static_str (GST_CAPS_FEATURE_MEMORY_VULKAN_BUFFER, - NULL)); - - gst_video_info_from_caps (&in_info, caps); - - gst_buffer_pool_config_set_params (config, caps, frame_size, 1, 0); - - profile_caps = gst_vulkan_encoder_profile_caps (enc); - gst_vulkan_image_buffer_pool_config_set_encode_caps (config, profile_caps); - - gst_caps_unref (caps); - gst_caps_unref (profile_caps); - - gst_vulkan_image_buffer_pool_config_set_allocation_params (config, - VK_IMAGE_USAGE_TRANSFER_SRC_BIT, - VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT, - 
VK_IMAGE_LAYOUT_VIDEO_ENCODE_SRC_KHR, VK_ACCESS_TRANSFER_WRITE_BIT); - - fail_unless (gst_buffer_pool_set_config (pool, config)); - fail_unless (gst_buffer_pool_set_active (pool, TRUE)); - - return pool; -} - -/* generate a buffer representing a blue window in NV12 format */ -static GstBuffer * -generate_input_buffer (GstBufferPool * pool, int width, int height) -{ - int i; - GstBuffer *buffer; - GstMapInfo info; - GstMemory *mem; - - if ((gst_buffer_pool_acquire_buffer (pool, &buffer, NULL)) - != GST_FLOW_OK) - goto out; - - // PLANE Y COLOR BLUE - mem = gst_buffer_peek_memory (buffer, 0); - gst_memory_map (mem, &info, GST_MAP_WRITE); - for (i = 0; i < width * height; i++) - info.datai = 0x29; - gst_memory_unmap (mem, &info); - - // PLANE UV - mem = gst_buffer_peek_memory (buffer, 1); - gst_memory_map (mem, &info, GST_MAP_WRITE); - for (i = 0; i < width * height / 2; i++) { - info.datai = 0xf0; - info.datai++ = 0x6e; - } - - gst_memory_unmap (mem, &info); - -out: - return buffer; -} - -/* upload the raw input buffer pool into a vulkan image buffer */ -static GstFlowReturn -upload_buffer_to_image (GstBufferPool * pool, GstBuffer * inbuf, - GstBuffer ** outbuf) -{ - GstFlowReturn ret = GST_FLOW_OK; - GError *error = NULL; - GstVulkanCommandBuffer *cmd_buf; - guint i, n_mems, n_planes; - GArray *barriers = NULL; - VkImageLayout dst_layout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL; - - - if ((ret = gst_buffer_pool_acquire_buffer (pool, outbuf, NULL)) - != GST_FLOW_OK) - goto out; - - if (!exec) { - GstVulkanCommandPool *cmd_pool = - gst_vulkan_queue_create_command_pool (gfx_queue, &error); - if (!cmd_pool) - goto error; - - exec = gst_vulkan_operation_new (cmd_pool); - gst_object_unref (cmd_pool); - } - - if (!gst_vulkan_operation_add_dependency_frame (exec, *outbuf, - VK_PIPELINE_STAGE_ALL_COMMANDS_BIT, VK_PIPELINE_STAGE_TRANSFER_BIT)) - goto error; - - if (!gst_vulkan_operation_begin (exec, &error)) - goto error; - - cmd_buf = exec->cmd_buf; - - if 
(!gst_vulkan_operation_add_frame_barrier (exec, *outbuf,
-          VK_PIPELINE_STAGE_ALL_COMMANDS_BIT, VK_PIPELINE_STAGE_TRANSFER_BIT,
-          VK_ACCESS_TRANSFER_WRITE_BIT, VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
-          NULL))
-    goto unlock_error;
-
-  barriers = gst_vulkan_operation_retrieve_image_barriers (exec);
-  if (barriers->len == 0) {
-    ret = GST_FLOW_ERROR;
-    goto unlock_error;
-  }
-
-  VkDependencyInfoKHR dependency_info = {
-    .sType = VK_STRUCTURE_TYPE_DEPENDENCY_INFO_KHR,
-    .pImageMemoryBarriers = (gpointer) barriers->data,
-    .imageMemoryBarrierCount = barriers->len,
-  };
-
-  gst_vulkan_operation_pipeline_barrier2 (exec, &dependency_info);
-  dst_layout = g_array_index (barriers, VkImageMemoryBarrier2KHR, 0).newLayout;
-
-  g_clear_pointer (&barriers, g_array_unref);
-
-  n_mems = gst_buffer_n_memory (*outbuf);
-  n_planes = GST_VIDEO_INFO_N_PLANES (&out_info);
-
-  for (i = 0; i < n_planes; i++) {
-    VkBufferImageCopy region;
-    GstMemory *in_mem, *out_mem;
-    GstVulkanBufferMemory *buf_mem;
-    GstVulkanImageMemory *img_mem;
-    const VkImageAspectFlags aspects[] = { VK_IMAGE_ASPECT_PLANE_0_BIT,
-      VK_IMAGE_ASPECT_PLANE_1_BIT, VK_IMAGE_ASPECT_PLANE_2_BIT,
-    };
-    VkImageAspectFlags plane_aspect;
-    guint idx;
-
-    in_mem = gst_buffer_peek_memory (inbuf, i);
-
-    buf_mem = (GstVulkanBufferMemory *) in_mem;
-
-    if (n_planes == n_mems)
-      plane_aspect = VK_IMAGE_ASPECT_COLOR_BIT;
-    else
-      plane_aspect = aspects[i];
-
-    /* *INDENT-OFF* */
-    region = (VkBufferImageCopy) {
-      .bufferOffset = 0,
-      .bufferRowLength = GST_VIDEO_INFO_COMP_WIDTH (&in_info, i),
-      .bufferImageHeight = GST_VIDEO_INFO_COMP_HEIGHT (&in_info, i),
-      .imageSubresource = {
-        .aspectMask = plane_aspect,
-        .mipLevel = 0,
-        .baseArrayLayer = 0,
-        .layerCount = 1,
-      },
-      .imageOffset = { .x = 0, .y = 0, .z = 0, },
-      .imageExtent = {
-        .width = GST_VIDEO_INFO_COMP_WIDTH (&out_info, i),
-        .height = GST_VIDEO_INFO_COMP_HEIGHT (&out_info, i),
-        .depth = 1,
-      }
-    };
-
-    idx = MIN (i, n_mems - 1);
-    out_mem = gst_buffer_peek_memory (*outbuf, idx);
-    if (!gst_is_vulkan_image_memory (out_mem)) {
-      GST_WARNING ("Output is not a GstVulkanImageMemory");
-      goto unlock_error;
-    }
-    img_mem = (GstVulkanImageMemory *) out_mem;
-
-    gst_vulkan_command_buffer_lock (cmd_buf);
-    vkCmdCopyBufferToImage (cmd_buf->cmd, buf_mem->buffer, img_mem->image,
-        dst_layout, 1, &region);
-    gst_vulkan_command_buffer_unlock (cmd_buf);
-  }
-
-  if (!gst_vulkan_operation_end (exec, &error))
-    goto error;
-
-  /* Hazard WRITE_AFTER_WRITE */
-  gst_vulkan_operation_wait (exec);
-
-  ret = GST_FLOW_OK;
-
-out:
-  return ret;
-
-unlock_error:
-  gst_vulkan_operation_reset (exec);
-
-error:
-  if (error) {
-    GST_WARNING ("Error: %s", error->message);
-    g_clear_error (&error);
-  }
-  gst_clear_buffer (outbuf);
-  ret = GST_FLOW_ERROR;
-  goto out;
-}
-
 /* allocate a frame to be encoded from given buffer pools */
 static GstVulkanH265EncodeFrame *
-allocate_frame (GstVulkanEncoder * enc, int width,
+allocate_h265_frame (GstVulkanEncoder * enc, int width,
     int height, gboolean is_ref)
 {
   GstVulkanH265EncodeFrame *frame;
@@ -365,7 +86,7 @@
   in_buffer = generate_input_buffer (buffer_pool, width, height);
 
   /* get a Vulkan image buffer out of the input buffer */
-  upload_buffer_to_image(img_pool, in_buffer, &img_buffer);
+  upload_buffer_to_image (img_pool, in_buffer, &img_buffer);
 
   frame = _h265_encode_frame_new (enc, img_buffer, width * height * 3, is_ref);
   fail_unless (frame);
@@ -726,7 +447,6 @@
     gint sps_id, gint pps_id)
 {
   GstVulkanEncoder *enc;
-  int i;
   GError *err = NULL;
   uint32_t mbAlignedWidth, mbAlignedHeight;
   StdVideoH265ProfileIdc profile_idc = STD_VIDEO_H265_PROFILE_IDC_MAIN;
@@ -769,32 +489,24 @@
   };
   /* *INDENT-ON* */
 
-  for (i = 0; i < instance->n_physical_devices; i++) {
-    GstVulkanDevice *device = gst_vulkan_device_new_with_index (instance, i);
-    encode_queue =
-        gst_vulkan_device_select_queue (device, VK_QUEUE_VIDEO_ENCODE_BIT_KHR);
-    gfx_queue = gst_vulkan_device_select_queue (device, VK_QUEUE_GRAPHICS_BIT);
-    gst_object_unref (device);
-
-    if
(encode_queue && gfx_queue) - break; - } + setup_queue (VK_QUEUE_VIDEO_ENCODE_BIT_KHR, + VK_VIDEO_CODEC_OPERATION_ENCODE_H265_BIT_KHR); - if (!encode_queue) { + if (!video_queue) { GST_WARNING ("Unable to find encoding queue"); return NULL; } - if (!gfx_queue) { + if (!graphics_queue) { GST_WARNING ("Unable to find graphics queue"); return NULL; } - enc = gst_vulkan_encoder_create_from_queue (encode_queue, + enc = gst_vulkan_encoder_create_from_queue (video_queue, VK_VIDEO_CODEC_OPERATION_ENCODE_H265_BIT_KHR); if (!enc) { - GST_WARNING ("Unable to create a vulkan encoder, queue=%p", encode_queue); + GST_WARNING ("Unable to create a vulkan encoder, queue=%p", video_queue); return NULL; } @@ -965,7 +677,7 @@ enc = setup_h265_encoder (width, height, vps_id, sps_id, pps_id); if (!enc) { GST_WARNING ("Unable to initialize H265 encoder"); - return; + goto beach; } buffer_pool = allocate_buffer_pool (enc, width, height); @@ -973,7 +685,7 @@ /* Encode N_BUFFERS I-Frames */ for (i = 0; i < N_BUFFERS; i++) { - frame = allocate_frame (enc, width, height, TRUE); + frame = allocate_h265_frame (enc, width, height, TRUE); encode_frame (enc, frame, STD_VIDEO_H265_SLICE_TYPE_I, frame_num, NULL, 0, NULL, 0, vps_id, sps_id, pps_id); check_encoded_frame (frame, GST_H265_NAL_SLICE_IDR_W_RADL); @@ -987,6 +699,7 @@ fail_unless (gst_buffer_pool_set_active (img_pool, FALSE)); gst_object_unref (img_pool); +beach: tear_down_encoder (enc); } @@ -1009,13 +722,13 @@ enc = setup_h265_encoder (width, height, vps_id, sps_id, pps_id); if (!enc) { GST_WARNING ("Unable to initialize H265 encoder"); - return; + goto beach; } buffer_pool = allocate_buffer_pool (enc, width, height); img_pool = allocate_image_buffer_pool (enc, width, height); - frame = allocate_frame (enc, width, height, TRUE); + frame = allocate_h265_frame (enc, width, height, TRUE); frame->pic_num = frame_num; /* Encode first picture as an IDR-Frame */ encode_frame (enc, frame, STD_VIDEO_H265_SLICE_TYPE_I, @@ -1026,7 +739,7 @@ /* 
Encode following pictures as a P-Frames */ for (i = 1; i < N_BUFFERS; i++) { - frame = allocate_frame (enc, width, height, TRUE); + frame = allocate_h265_frame (enc, width, height, TRUE); frame->pic_num = frame_num; encode_frame (enc, frame, STD_VIDEO_H265_SLICE_TYPE_P, frame_num, list0, list0_num, NULL, 0, vps_id, sps_id, pps_id); @@ -1041,6 +754,7 @@ fail_unless (gst_buffer_pool_set_active (img_pool, FALSE)); gst_object_unref (img_pool); +beach: tear_down_encoder (enc); } @@ -1065,7 +779,7 @@ enc = setup_h265_encoder (width, height, vps_id, sps_id, pps_id); if (!enc) { GST_WARNING ("Unable to initialize H265 encoder"); - return; + goto beach; } fail_unless (gst_vulkan_encoder_caps (enc, &enc_caps)); @@ -1079,7 +793,7 @@ img_pool = allocate_image_buffer_pool (enc, width, height); /* Encode first picture as an IDR-Frame */ - frame = allocate_frame (enc, width, height, TRUE); + frame = allocate_h265_frame (enc, width, height, TRUE); frame->pic_num = frame_num; encode_frame (enc, frame, STD_VIDEO_H265_SLICE_TYPE_I, frame_num, NULL, 0, NULL, 0, vps_id, sps_id, pps_id); @@ -1088,7 +802,7 @@ frame_num++; /* Encode 4th picture as a P-Frame */ - frame = allocate_frame (enc, width, height, TRUE); + frame = allocate_h265_frame (enc, width, height, TRUE); frame->pic_num = frame_num + 2; encode_frame (enc, frame, STD_VIDEO_H265_SLICE_TYPE_P, frame_num, list0, list0_num, NULL, 0, vps_id, sps_id, pps_id); @@ -1097,7 +811,7 @@ frame_num++; /* Encode 2nd picture as a B-Frame */ - frame = allocate_frame (enc, width, height, FALSE); + frame = allocate_h265_frame (enc, width, height, FALSE); frame->pic_num = frame_num - 1; encode_frame (enc, frame, STD_VIDEO_H265_SLICE_TYPE_B, frame_num, list0, list0_num, list1, list1_num, vps_id, sps_id, pps_id); @@ -1106,7 +820,7 @@ _h265_encode_frame_free (enc, frame); /* Encode 3rd picture as a B-Frame */ - frame = allocate_frame (enc, width, height, FALSE); + frame = allocate_h265_frame (enc, width, height, FALSE); frame->pic_num = frame_num - 
1; encode_frame (enc, frame, STD_VIDEO_H265_SLICE_TYPE_B, frame_num, list0, list0_num, list1, list1_num, vps_id, sps_id, pps_id); @@ -1139,7 +853,7 @@ suite_add_tcase (s, tc_basic); tcase_add_checked_fixture (tc_basic, setup, teardown); - /* FIXME: CI doesn't have a software vulkan renderer (and none exists currently) */ + /* FIXME: CI doesn't have a software vulkan video encoder (and none exists currently) */ instance = gst_vulkan_instance_new (); have_instance = gst_vulkan_instance_open (instance, NULL); gst_object_unref (instance);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/libs/vkwindow.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/libs/vkwindow.c
Changed
@@ -68,7 +68,6 @@ suite_add_tcase (s, tc_basic); tcase_add_checked_fixture (tc_basic, setup, teardown); - /* FIXME: CI doesn't have a software vulkan renderer (and none exists currently) */ instance = gst_vulkan_instance_new (); have_instance = gst_vulkan_instance_open (instance, NULL); gst_object_unref (instance);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/check/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/check/meson.build
Changed
@@ -35,9 +35,9 @@
   [['elements/autovideoconvert.c'], get_option('autoconvert').disabled()],
   [['elements/avwait.c'], get_option('timecode').disabled()],
   [['elements/camerabin.c'], get_option('camerabin2').disabled()],
-  [['elements/ccconverter.c'], not closedcaption_dep.found(), [gstvideo_dep]],
-  [['elements/cccombiner.c'], not closedcaption_dep.found(), ],
-  [['elements/ccextractor.c'], not closedcaption_dep.found(), ],
+  [['elements/ccconverter.c'], get_option('closedcaption').disabled(), [gstvideo_dep]],
+  [['elements/cccombiner.c'], get_option('closedcaption').disabled()],
+  [['elements/ccextractor.c'], get_option('closedcaption').disabled()],
   [['elements/cudaconvert.c'], false, [gstgl_dep, gmodule_dep]],
   [['elements/cudafilter.c'], false, [gstgl_dep, gmodule_dep]],
   [['elements/dashsink.c'],
@@ -54,8 +54,9 @@
   [['elements/hlsdemux_m3u8.c'], not hls_dep.found(), [hls_dep]],
   [['elements/id3mux.c'], get_option('id3tag').disabled()],
   [['elements/interlace.c'], get_option('interlace').disabled()],
+  [['elements/ioutracker.c'], get_option('tensordecoders').disabled(), [gstanalytics_dep]],
   [['elements/jpeg2000parse.c'], false, [libparser_dep, gstcodecparsers_dep]],
-  [['elements/line21.c'], not closedcaption_dep.found(), ],
+  [['elements/line21.c'], get_option('closedcaption').disabled()],
   [['elements/mfvideosrc.c'], host_machine.system() != 'windows', ],
   [['elements/mpegtsdemux.c'], get_option('mpegtsdemux').disabled(), [gstmpegts_dep]],
   [['elements/mpegtsmux.c'], get_option('mpegtsmux').disabled(), [gstmpegts_dep]],
@@ -67,7 +68,7 @@
   [['elements/nvenc.c'], false, [gstgl_dep, gmodule_dep]],
   [['elements/nvdec.c'], not gstgl_dep.found(), [gstgl_dep, gmodule_dep]],
   [['elements/svthevcenc.c'], not svthevcenc_dep.found(), [svthevcenc_dep]],
-  [['elements/openjpeg.c'], not openjpeg_dep.found(),  [openjpeg_dep]],
+  [['elements/openjpeg.c'], not openjpeg_dep.found(), [openjpeg_dep]],
   [['elements/pcapparse.c'], false, [libparser_dep]],
   [['elements/pnm.c'], get_option('pnm').disabled()],
   [['elements/proxysink.c'], get_option('proxy').disabled()],
@@ -122,6 +123,7 @@
   [['libs/vkvideodecode.c'], not gstvulkan_dep.found() or vulkan_conf.get('GST_VULKAN_HAVE_VIDEO_EXTENSIONS') != 1, [gstvulkan_dep]],
   [['libs/vkvideoencodeh264.c'], not gstvulkan_dep.found() or vulkan_conf.get('GST_VULKAN_HAVE_VIDEO_EXTENSIONS') != 1, [gstvulkan_dep, gstcodecparsers_dep]],
   [['libs/vkvideoencodeh265.c'], not gstvulkan_dep.found() or vulkan_conf.get('GST_VULKAN_HAVE_VIDEO_EXTENSIONS') != 1, [gstvulkan_dep, gstcodecparsers_dep]],
+  [['libs/vkvideoencodeav1.c'], not gstvulkan_dep.found() or vulkan_conf.get('GST_VULKAN_HAVE_VIDEO_EXTENSIONS') != 1, [gstvulkan_dep, gstcodecparsers_dep]],
   [['libs/d3d11device.cpp'], not gstd3d11_dep.found(), [gstd3d11_dep]],
   [['libs/d3d11memory.c'], not gstd3d11_dep.found(), [gstd3d11_dep]],
   [['libs/cudamemory.c'], not gstcuda_dep.found(), [gstcuda_dep, gstcuda_stub_dep]],
@@ -170,6 +172,7 @@
   [['elements/netsim.c']],
   [['elements/shm.c'], not shm_enabled, shm_deps],
   [['elements/unixfd.c'], not gio_unix_dep.found()],
+  [['elements/vmaf.c'], get_option('vmaf').disabled() or not libvmaf_dep.found(), [libvmaf_dep]],
   [['elements/voaacenc.c'], not voaac_dep.found() or not cdata.has('HAVE_UNISTD_H'), [voaac_dep]],
   [['elements/webrtcbin.c'], not libnice_dep.found(), [gstwebrtc_dep]],
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/examples/camerabin2/gst-camerabin2-test.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/examples/camerabin2/gst-camerabin2-test.c
Changed
@@ -444,8 +444,7 @@ gst_message_parse_state_changed (message, &oldstate, &newstate, NULL); GST_DEBUG_OBJECT (GST_MESSAGE_SRC (message), "state-changed: %s -> %s", - gst_element_state_get_name (oldstate), - gst_element_state_get_name (newstate)); + gst_state_get_name (oldstate), gst_state_get_name (newstate)); } break; case GST_MESSAGE_EOS:
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/examples/d3d12/d3d12fisheyedewarp.cpp
Added
@@ -0,0 +1,258 @@
+/* GStreamer
+ * Copyright (C) 2025 Seungha Yang <seungha@centricular.com>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Library General Public
+ * License as published by the Free Software Foundation; either
+ * version 2 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Library General Public License for more details.
+ *
+ * You should have received a copy of the GNU Library General Public
+ * License along with this library; if not, write to the
+ * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
+ * Boston, MA 02110-1301, USA.
+ */
+
+#ifdef HAVE_CONFIG_H
+#include "config.h"
+#endif
+
+#include <gst/gst.h>
+#include <gst/video/video.h>
+#include <windows.h>
+#include <string.h>
+#include "../key-handler.h"
+
+static GMainLoop *loop = nullptr;
+
+static gboolean
+bus_msg (GstBus * bus, GstMessage * msg, gpointer user_data)
+{
+  switch (GST_MESSAGE_TYPE (msg)) {
+    case GST_MESSAGE_ERROR:
+    {
+      GError *err;
+      gchar *dbg;
+
+      gst_message_parse_error (msg, &err, &dbg);
+      gst_printerrln ("ERROR %s", err->message);
+      if (dbg != nullptr)
+        gst_printerrln ("ERROR debug information: %s", dbg);
+      g_clear_error (&err);
+      g_free (dbg);
+
+      g_main_loop_quit (loop);
+      break;
+    }
+    case GST_MESSAGE_EOS:
+    {
+      gst_println ("Got EOS");
+      g_main_loop_quit (loop);
+      break;
+    }
+    default:
+      break;
+  }
+
+  return TRUE;
+}
+
+static void
+print_keyboard_help (void)
+{
+  static struct
+  {
+    const gchar *key_desc;
+    const gchar *key_help;
+  } key_controls[] = {
+    {"left arrow", "Decrease Y angle"},
+    {"right arrow", "Increase Y angle"},
+    {"down arrow", "Decrease X angle"},
+    {"up arrow", "Increase X angle"},
+    {"-", "Decrease Z angle"},
+    {"+", "Increase Z angle"},
+    {"0 - 3",
        "Select projection type"},
+    {"t", "Toggle rotation space"},
+    {"space", "Reset angle"},
+    {"q", "Quit"},
+  };
+
+  guint i, chars_to_pad, desc_len, max_desc_len = 0;
+
+  gst_print ("\n%s\n", "Keyboard controls:");
+
+  for (i = 0; i < G_N_ELEMENTS (key_controls); ++i) {
+    desc_len = g_utf8_strlen (key_controls[i].key_desc, -1);
+    max_desc_len = MAX (max_desc_len, desc_len);
+  }
+  ++max_desc_len;
+
+  for (i = 0; i < G_N_ELEMENTS (key_controls); ++i) {
+    chars_to_pad = max_desc_len - g_utf8_strlen (key_controls[i].key_desc, -1);
+    gst_print ("\t%s", key_controls[i].key_desc);
+    gst_print ("%-*s: ", chars_to_pad, "");
+    gst_print ("%s\n", key_controls[i].key_help);
+  }
+  gst_print ("\n");
+}
+
+static void
+keyboard_cb (gchar input, gboolean is_ascii, GstElement * dewarp)
+{
+  static double x_angle = 0;
+  static double y_angle = 0;
+  static double z_angle = 0;
+  static int rotation_space = 0;
+
+  if (!is_ascii) {
+    switch (input) {
+      case KB_ARROW_UP:
+        x_angle += 1.0;
+        gst_println ("Increase X angle to %lf", x_angle);
+        g_object_set (dewarp, "rotation-x", x_angle, nullptr);
+        break;
+      case KB_ARROW_DOWN:
+        x_angle -= 1.0;
+        gst_println ("Decrease X angle to %lf", x_angle);
+        g_object_set (dewarp, "rotation-x", x_angle, nullptr);
+        break;
+      case KB_ARROW_LEFT:
+        y_angle -= 1.0;
+        gst_println ("Decrease Y angle to %lf", y_angle);
+        g_object_set (dewarp, "rotation-y", y_angle, nullptr);
+        break;
+      case KB_ARROW_RIGHT:
+        y_angle += 1.0;
+        gst_println ("Increase Y angle to %lf", y_angle);
+        g_object_set (dewarp, "rotation-y", y_angle, nullptr);
+        break;
+      default:
+        break;
+    }
+  } else {
+    switch (input) {
+      case '-':
+        z_angle -= 1.0;
+        gst_println ("Decrease Z angle to %lf", z_angle);
+        g_object_set (dewarp, "rotation-z", z_angle, nullptr);
+        break;
+      case '+':
+        z_angle += 1.0;
+        gst_println ("Increase Z angle to %lf", z_angle);
+        g_object_set (dewarp, "rotation-z", z_angle, nullptr);
+        break;
+      case '0':
+        gst_println ("Updated mode: passthrough");
+        g_object_set (dewarp,
            "projection-type", 0, nullptr);
+        break;
+      case '1':
+        gst_println ("Updated mode: equirect");
+        g_object_set (dewarp, "projection-type", 1, nullptr);
+        break;
+      case '2':
+        gst_println ("Updated mode: panorama");
+        g_object_set (dewarp, "projection-type", 2, nullptr);
+        break;
+      case '3':
+        gst_println ("Updated mode: perspective");
+        g_object_set (dewarp, "projection-type", 3, nullptr);
+        break;
+      case 't':
+      case 'T':
+        rotation_space++;
+        rotation_space %= 2;
+        gst_println ("Updated rotation space: %s",
+            rotation_space == 0 ? "local" : "world");
+        g_object_set (dewarp, "rotation-space", rotation_space, nullptr);
+        break;
+      case ' ':
+        x_angle = 0;
+        y_angle = 0;
+        z_angle = 0;
+        gst_println ("Reset angle");
+        g_object_set (dewarp, "rotation-x", x_angle,
+            "rotation-y", y_angle, "rotation-z", z_angle, nullptr);
+        break;
+      case 'q':
+        g_main_loop_quit (loop);
+        break;
+      default:
+        break;
+    }
+  }
+}
+
+gint
+main (gint argc, gchar ** argv)
+{
+  gchar *location = nullptr;
+  gdouble radius_x = 0.5;
+  gdouble radius_y = 0.5;
+  GOptionEntry options[] = {
+    {"location", 0, 0, G_OPTION_ARG_STRING, &location,
+        "Fisheye image file location"},
+    {"radius-x", 0, 0, G_OPTION_ARG_DOUBLE, &radius_x,
+        "Normalized horizontal radius of fisheye circle"},
+    {"radius-y", 0, 0, G_OPTION_ARG_DOUBLE, &radius_y,
+        "Normalized vertical radius of fisheye circle"},
+    {nullptr}
+  };
+
+  auto option_ctx =
+      g_option_context_new ("Fisheye dewarp example using d3d12fisheyedewarp");
+  g_option_context_add_main_entries (option_ctx, options, nullptr);
+  g_option_context_set_help_enabled (option_ctx, TRUE);
+  GError *err = nullptr;
+  if (!g_option_context_parse (option_ctx, &argc, &argv, &err)) {
+    gst_printerrln ("option parsing failed: %s\n", err->message);
+    g_clear_error (&err);
+    return 0;
+  }
+  g_option_context_free (option_ctx);
+
+  if (!location) {
+    gst_println ("Location must be specified");
+    return 0;
+  }
+
+  gst_init (nullptr, nullptr);
+  loop = g_main_loop_new (nullptr, FALSE);
+
+ auto pipeline_str = g_strdup_printf ("filesrc location=%s " + "! decodebin ! d3d12upload ! imagefreeze ! tee name=t ! queue " + "! d3d12fisheyedewarp name=dewarp ! d3d12videosink t. ! queue ! d3d12videosink", + location); + + auto pipeline = gst_parse_launch (pipeline_str, nullptr); + g_free (location); + g_free (pipeline_str); + if (!pipeline) { + gst_println ("Couldn't create pipeline"); + return 0; + } + + auto remap = gst_bin_get_by_name (GST_BIN (pipeline), "dewarp"); + + g_object_set (remap, "radius-x", radius_x, "radius-y", radius_y, nullptr); + + gst_bus_add_watch (GST_ELEMENT_BUS (pipeline), bus_msg, nullptr); + + print_keyboard_help (); + set_key_handler ((KeyInputCallback) keyboard_cb, remap); + + gst_element_set_state (pipeline, GST_STATE_PLAYING); + + g_main_loop_run (loop); + + gst_element_set_state (pipeline, GST_STATE_NULL); + gst_bus_remove_watch (GST_ELEMENT_BUS (pipeline)); + + gst_object_unref (remap); + gst_object_unref (pipeline); + + return 0; +}
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/examples/d3d12/d3d12remap-fisheye.cpp
Added
@@ -0,0 +1,647 @@ +/* GStreamer + * Copyright (C) 2025 Seungha Yang <seungha@centricular.com> + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + * DEALINGS IN THE SOFTWARE. + * + * Alternatively, the contents of this file may be used under the + * GNU Lesser General Public License Version 2.1 (the "LGPL"), in + * which case the following provisions apply instead of the ones + * mentioned above: + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. 
+ *
+ * You should have received a copy of the GNU Library General Public
+ * License along with this library; if not, write to the
+ * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
+ * Boston, MA 02110-1301, USA.
+ */
+
+#ifdef HAVE_CONFIG_H
+#include "config.h"
+#endif
+
+#include <gst/gst.h>
+#include <gst/video/video.h>
+#include <gst/d3d12/gstd3d12.h>
+#include <wrl.h>
+#include <directx/d3dx12.h>
+#include <memory>
+
+#include <windows.h>
+#include <string.h>
+#include <d3dcompiler.h>
+#include <DirectXMath.h>
+#include "../key-handler.h"
+
+/* *INDENT-OFF* */
+using namespace DirectX;
+using namespace Microsoft::WRL;
+
+static const gchar *shader_str = R"(
+RWTexture2D<float4> uvLUT : register(u0);
+
+cbuffer Parameters : register(b0)
+{
+  float4x4 RotationMatrix;
+  float2 lutResolution;
+  float perspectiveFOV;
+  float fisheyeFOV;
+  float2 fisheyeCircleCenter;
+  float2 fisheyeCircleRadius;
+}
+
+[numthreads(8, 8, 1)]
+void CSMain(uint3 DTid : SV_DispatchThreadID)
+{
+  if (DTid.x >= (uint)lutResolution.x || DTid.y >= (uint)lutResolution.y)
+    return;
+
+  float2 pixelPos = float2(DTid.x, DTid.y);
+  float2 uv_ndc = (pixelPos / lutResolution) * 2.0 - 1.0;
+
+  float hFOV_rad = radians(perspectiveFOV);
+  float halfWidth = tan(hFOV_rad * 0.5);
+  float aspect = lutResolution.y / lutResolution.x;
+  float x = uv_ndc.x * halfWidth;
+  float y = uv_ndc.y * halfWidth * aspect;
+
+  float3 rayDir = normalize(float3(x, y, 1.0));
+  float3x3 rotation3x3 = float3x3(
+    RotationMatrix._11, RotationMatrix._12, RotationMatrix._13,
+    RotationMatrix._21, RotationMatrix._22, RotationMatrix._23,
+    RotationMatrix._31, RotationMatrix._32, RotationMatrix._33
+  );
+  rayDir = mul(rotation3x3, rayDir);
+
+  float theta = acos(rayDir.z);
+  float maxAngle = radians(fisheyeFOV * 0.5);
+
+  float4 fishUV = float4(0.0, 0.0, 0.0, 1.0);
+  if (theta <= maxAngle) {
+    float r_fishX = (fisheyeCircleRadius.x / maxAngle) * theta;
+    float r_fishY = (fisheyeCircleRadius.y / maxAngle) * theta;
+
float phi = atan2(rayDir.y, rayDir.x);
+    fishUV.xy = fisheyeCircleCenter +
+        float2(r_fishX * cos(phi), r_fishY * sin(phi));
+  } else {
+    fishUV.w = 0.0;
+  }
+
+  uvLUT[int2(DTid.xy)] = fishUV;
+}
+)";
+/* *INDENT-ON* */
+
+static GMainLoop *loop = nullptr;
+
+#define REMAP_SIZE 1024
+
+struct ConstBuf
+{
+  XMFLOAT4X4 RotationMatrix;
+  FLOAT lutResolution[2];
+  FLOAT perspectiveFOV;
+  FLOAT fisheyeFOV;
+  FLOAT fisheyeCircleCenter[2];
+  FLOAT fisheyeCircleRadius[2];
+};
+
+struct RemapResource
+{
+  ~RemapResource()
+  {
+    if (fence_val > 0 && device) {
+      /* Make sure there's no pending GPU task */
+      gst_d3d12_device_fence_wait (device, D3D12_COMMAND_LIST_TYPE_DIRECT,
+          fence_val);
+    }
+
+    cl = nullptr;
+    uv_remap = nullptr;
+    gst_clear_object (&ca_pool);
+    gst_clear_object (&fence_data_pool);
+    gst_clear_object (&device);
+  }
+
+  void UpdateAngle (FLOAT tilt_angle, FLOAT pan_angle, FLOAT roll_angle)
+  {
+    float tilt_rad = XMConvertToRadians(tilt_angle);
+    float pan_rad = XMConvertToRadians(pan_angle);
+    float roll_rad = XMConvertToRadians(roll_angle);
+
+    XMMATRIX rot_x = XMMatrixRotationX(tilt_rad);
+    XMMATRIX rot_y = XMMatrixRotationY(pan_rad);
+    XMMATRIX rot_z = XMMatrixRotationZ(roll_rad);
+
+    XMMATRIX m = XMMatrixMultiply(rot_z, XMMatrixMultiply(rot_y, rot_x));
+    XMStoreFloat4x4 (&cbuf.RotationMatrix, m);
+  }
+
+  bool UpdateRemapResource ()
+  {
+    GstD3D12FenceData *fence_data;
+    gst_d3d12_fence_data_pool_acquire (fence_data_pool, &fence_data);
+
+    GstD3D12CmdAlloc *gst_ca;
+    if (!gst_d3d12_cmd_alloc_pool_acquire (ca_pool, &gst_ca)) {
+      gst_println ("Couldn't acquire cmd allocator");
+      gst_d3d12_fence_data_unref (fence_data);
+      return false;
+    }
+
+    gst_d3d12_fence_data_push (fence_data, gst_ca, (GDestroyNotify)
+        gst_mini_object_unref);
+
+    auto ca = gst_d3d12_cmd_alloc_get_handle (gst_ca);
+    auto hr = ca->Reset ();
+    if (!gst_d3d12_result (hr, device)) {
+      gst_print ("Couldn't reset cmd allocator");
+      gst_d3d12_fence_data_unref (fence_data);
+      return false;
+    }
+
+    if
(!cl) {
+      auto device_handle = gst_d3d12_device_get_device_handle (device);
+      hr = device_handle->CreateCommandList (0, D3D12_COMMAND_LIST_TYPE_DIRECT,
+          ca, nullptr, IID_PPV_ARGS (&cl));
+    } else {
+      hr = cl->Reset (ca, nullptr);
+    }
+
+    if (!gst_d3d12_result (hr, device)) {
+      gst_print ("Couldn't setup cmd list");
+      gst_d3d12_fence_data_unref (fence_data);
+      return false;
+    }
+
+    ID3D12DescriptorHeap *heaps[] = { desc_heap.Get () };
+
+    cl->SetComputeRootSignature (rs.Get ());
+    cl->SetPipelineState (pso.Get ());
+    cl->SetDescriptorHeaps (1, heaps);
+    cl->SetComputeRoot32BitConstants (0, sizeof (cbuf) / 4, &cbuf, 0);
+    cl->SetComputeRootDescriptorTable (1,
+        desc_heap->GetGPUDescriptorHandleForHeapStart ());
+    cl->Dispatch ((REMAP_SIZE + 7) / 8, (REMAP_SIZE + 7) / 8, 1);
+    hr = cl->Close ();
+
+    if (!gst_d3d12_result (hr, device)) {
+      gst_print ("Couldn't close cmd list");
+      gst_d3d12_fence_data_unref (fence_data);
+      return false;
+    }
+
+    ID3D12CommandList *cmd_list[] = { cl.Get () };
+    hr = gst_d3d12_device_execute_command_lists (device,
+        D3D12_COMMAND_LIST_TYPE_DIRECT, 1, cmd_list, &fence_val);
+    if (!gst_d3d12_result (hr, device)) {
+      gst_println ("Couldn't execute command list");
+      gst_d3d12_fence_data_unref (fence_data);
+      return false;
+    }
+
+    gst_d3d12_device_set_fence_notify (device, D3D12_COMMAND_LIST_TYPE_DIRECT,
+        fence_val, fence_data, (GDestroyNotify) gst_mini_object_unref);
+
+    return true;
+  }
+
+  GstD3D12Device *device = nullptr;
+  GstD3D12CmdAllocPool *ca_pool = nullptr;
+  GstD3D12FenceDataPool *fence_data_pool = nullptr;
+  ComPtr<ID3D12RootSignature> rs;
+  ComPtr<ID3D12PipelineState> pso;
+  ComPtr<ID3D12GraphicsCommandList> cl;
+  ComPtr<ID3D12Resource> uv_remap;
+  ComPtr<ID3D12DescriptorHeap> desc_heap;
+  ConstBuf cbuf;
+  UINT64 fence_val = 0;
+};
+
+static HRESULT
+creat_rs_blob (GstD3D12Device * device, ID3DBlob ** blob)
+{
+  D3D12_VERSIONED_ROOT_SIGNATURE_DESC desc = { };
+  CD3DX12_ROOT_PARAMETER root_params[2];
+  CD3DX12_DESCRIPTOR_RANGE
range_uav;
+
+  root_params[0].InitAsConstants (sizeof (ConstBuf) / 4, 0);
+
+  range_uav.Init (D3D12_DESCRIPTOR_RANGE_TYPE_UAV, 1, 0);
+  root_params[1].InitAsDescriptorTable (1, &range_uav);
+  CD3DX12_VERSIONED_ROOT_SIGNATURE_DESC::Init_1_0 (desc, 2, root_params,
+      0, nullptr,
+      D3D12_ROOT_SIGNATURE_FLAG_DENY_VERTEX_SHADER_ROOT_ACCESS |
+      D3D12_ROOT_SIGNATURE_FLAG_DENY_HULL_SHADER_ROOT_ACCESS |
+      D3D12_ROOT_SIGNATURE_FLAG_DENY_DOMAIN_SHADER_ROOT_ACCESS |
+      D3D12_ROOT_SIGNATURE_FLAG_DENY_GEOMETRY_SHADER_ROOT_ACCESS);
+
+  ComPtr < ID3DBlob > error_blob;
+  auto hr = D3DX12SerializeVersionedRootSignature (&desc,
+      D3D_ROOT_SIGNATURE_VERSION_1_0, blob, &error_blob);
+  if (!gst_d3d12_result (hr, device)) {
+    const gchar *error_msg = nullptr;
+    if (error_blob)
+      error_msg = (const gchar *) error_blob->GetBufferPointer ();
+
+    gst_println ("Couldn't serialize rs, hr: 0x%x, error detail: %s",
+        (guint) hr, GST_STR_NULL (error_msg));
+  }
+
+  return hr;
+}
+
+static HRESULT
+compile_shader (GstD3D12Device * device, ID3DBlob ** blob)
+{
+  ComPtr < ID3DBlob > error_blob;
+  auto hr = D3DCompile (shader_str, strlen (shader_str), nullptr, nullptr, nullptr,
+      "CSMain", "cs_5_0", 0, 0, blob, &error_blob);
+
+  if (!gst_d3d12_result (hr, device)) {
+    const gchar *error_msg = nullptr;
+    if (error_blob)
+      error_msg = (const gchar *) error_blob->GetBufferPointer ();
+
+    gst_println ("Couldn't compile shader, hr: 0x%x, error detail: %s",
+        (guint) hr, GST_STR_NULL (error_msg));
+  }
+
+  return hr;
+}
+
+static std::shared_ptr<RemapResource>
+create_remap_resource (void)
+{
+  auto ret = std::make_shared<RemapResource> ();
+
+  ret->device = gst_d3d12_device_new (0);
+  if (!ret->device) {
+    gst_println ("Couldn't create d3d12 device");
+    return nullptr;
+  }
+
+  ret->fence_data_pool = gst_d3d12_fence_data_pool_new ();
+  auto device = gst_d3d12_device_get_device_handle (ret->device);
+  ret->ca_pool = gst_d3d12_cmd_alloc_pool_new (device,
+      D3D12_COMMAND_LIST_TYPE_DIRECT);
+
+  /* Prepare compute
shader and resource. + * Compute shader will write UV remap data to RGBA texture + * (R -> U, G -> V, B -> unused, A -> mask where A < 0.5 will fill background + * color) + */ + ComPtr<ID3DBlob> shader_blob; + auto hr = compile_shader (ret->device, &shader_blob); + if (FAILED (hr)) + return nullptr; + + ComPtr<ID3DBlob> rs_blob; + hr = creat_rs_blob (ret->device, &rs_blob); + if (FAILED (hr)) + return nullptr; + + auto device_handle = gst_d3d12_device_get_device_handle (ret->device); + hr = device_handle->CreateRootSignature (0, rs_blob->GetBufferPointer (), + rs_blob->GetBufferSize (), IID_PPV_ARGS (&ret->rs)); + if (!gst_d3d12_result (hr, ret->device)) { + gst_println ("Couldn't create root signature"); + return nullptr; + } + + D3D12_COMPUTE_PIPELINE_STATE_DESC pso_desc = { }; + pso_desc.pRootSignature = ret->rs.Get (); + pso_desc.CS.pShaderBytecode = shader_blob->GetBufferPointer (); + pso_desc.CS.BytecodeLength = shader_blob->GetBufferSize (); + hr = device_handle->CreateComputePipelineState (&pso_desc, + IID_PPV_ARGS (&ret->pso)); + if (!gst_d3d12_result (hr, ret->device)) { + gst_println ("Couldn't create pso"); + return nullptr; + } + + D3D12_HEAP_PROPERTIES heap_prop = CD3DX12_HEAP_PROPERTIES (D3D12_HEAP_TYPE_DEFAULT); + D3D12_RESOURCE_DESC resource_desc = + CD3DX12_RESOURCE_DESC::Tex2D (DXGI_FORMAT_R16G16B16A16_UNORM, + REMAP_SIZE, REMAP_SIZE, 1, 1, 1, 0, + D3D12_RESOURCE_FLAG_ALLOW_UNORDERED_ACCESS | + D3D12_RESOURCE_FLAG_ALLOW_SIMULTANEOUS_ACCESS); + hr = device_handle->CreateCommittedResource (&heap_prop, D3D12_HEAP_FLAG_NONE, + &resource_desc, D3D12_RESOURCE_STATE_COMMON, nullptr, + IID_PPV_ARGS (&ret->uv_remap)); + if (!gst_d3d12_result (hr, ret->device)) { + gst_println ("Couldn't create texture"); + return nullptr; + } + + D3D12_DESCRIPTOR_HEAP_DESC desc_heap_desc = { }; + desc_heap_desc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV; + desc_heap_desc.NumDescriptors = 1; + desc_heap_desc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE; + hr = 
device_handle->CreateDescriptorHeap (&desc_heap_desc,
+      IID_PPV_ARGS (&ret->desc_heap));
+  if (!gst_d3d12_result (hr, ret->device)) {
+    gst_println ("Couldn't create descriptor heap");
+    return nullptr;
+  }
+
+  auto cpu_handle = ret->desc_heap->GetCPUDescriptorHandleForHeapStart ();
+  D3D12_UNORDERED_ACCESS_VIEW_DESC uav_desc = { };
+  uav_desc.Format = DXGI_FORMAT_R16G16B16A16_UNORM;
+  uav_desc.ViewDimension = D3D12_UAV_DIMENSION_TEXTURE2D;
+  device_handle->CreateUnorderedAccessView (ret->uv_remap.Get (),
+      nullptr, &uav_desc, cpu_handle);
+
+  ret->cbuf.lutResolution[0] = REMAP_SIZE;
+  ret->cbuf.lutResolution[1] = REMAP_SIZE;
+  ret->cbuf.perspectiveFOV = 120;
+  ret->cbuf.fisheyeFOV = 180;
+  ret->cbuf.fisheyeCircleCenter[0] = 0.5;
+  ret->cbuf.fisheyeCircleCenter[1] = 0.5;
+  ret->cbuf.fisheyeCircleRadius[0] = 0.5;
+  ret->cbuf.fisheyeCircleRadius[1] = 0.5;
+
+  ret->UpdateAngle (0, 0, 0);
+
+  if (!ret->UpdateRemapResource ())
+    return nullptr;
+
+  return ret;
+}
+
+static gboolean
+bus_msg (GstBus * bus, GstMessage * msg, gpointer user_data)
+{
+  switch (GST_MESSAGE_TYPE (msg)) {
+    case GST_MESSAGE_ERROR:
+    {
+      GError *err;
+      gchar *dbg;
+
+      gst_message_parse_error (msg, &err, &dbg);
+      gst_printerrln ("ERROR %s", err->message);
+      if (dbg != nullptr)
+        gst_printerrln ("ERROR debug information: %s", dbg);
+      g_clear_error (&err);
+      g_free (dbg);
+
+      g_main_loop_quit (loop);
+      break;
+    }
+    case GST_MESSAGE_EOS:
+    {
+      gst_println ("Got EOS");
+      g_main_loop_quit (loop);
+      break;
+    }
+    default:
+      break;
+  }
+
+  return TRUE;
+}
+
+static void
+print_keyboard_help (void)
+{
+  static struct
+  {
+    const gchar *key_desc;
+    const gchar *key_help;
+  } key_controls[] = {
+    {"left arrow", "Decrease pan angle"},
+    {"right arrow", "Increase pan angle"},
+    {"down arrow", "Decrease tilt angle"},
+    {"up arrow", "Increase tilt angle"},
+    {"-", "Decrease roll angle"},
+    {"+", "Increase roll angle"},
+    {"1", "Decrease perspective FOV"},
+    {"2", "Increase perspective FOV"},
+    {"space", "Reset angle"},
{"q", "Quit"}, + }; + + guint i, chars_to_pad, desc_len, max_desc_len = 0; + + gst_print ("\n%s\n", "Keyboard controls:"); + + for (i = 0; i < G_N_ELEMENTS (key_controls); ++i) { + desc_len = g_utf8_strlen (key_controls[i].key_desc, -1); + max_desc_len = MAX (max_desc_len, desc_len); + } + ++max_desc_len; + + for (i = 0; i < G_N_ELEMENTS (key_controls); ++i) { + chars_to_pad = max_desc_len - g_utf8_strlen (key_controls[i].key_desc, -1); + gst_print ("\t%s", key_controls[i].key_desc); + gst_print ("%-*s: ", chars_to_pad, ""); + gst_print ("%s\n", key_controls[i].key_help); + } + gst_print ("\n"); +} + +struct AppData +{ + RemapResource *resource; + GstElement *remap; +}; + +static void +keyboard_cb (gchar input, gboolean is_ascii, AppData * app_data) +{ + static FLOAT tilt = 0; + static FLOAT pan = 0; + static FLOAT roll = 0; + static FLOAT fov = 120; + bool update_angle = false; + bool update_fov = false; + + if (!is_ascii) { + switch (input) { + case KB_ARROW_UP: + tilt += 1.0; + if (tilt > 45.0) + tilt = 45.0; + gst_println ("Increase tilt angle to %lf", tilt); + update_angle = true; + break; + case KB_ARROW_DOWN: + tilt -= 1.0; + if (tilt < -45.0) + tilt = -45.0; + gst_println ("Decrease tilt angle to %lf", tilt); + update_angle = true; + break; + case KB_ARROW_LEFT: + pan -= 1.0; + if (pan < -45.0) + pan = -45.0; + gst_println ("Decrease pan angle to %lf", pan); + update_angle = true; + break; + case KB_ARROW_RIGHT: + pan += 1.0; + if (pan > 45.0) + pan = 45.0; + gst_println ("Increase pan angle to %lf", pan); + update_angle = true; + break; + default: + break; + } + } else { + switch (input) { + case '-': + roll -= 1.0; + if (roll < -45.0) + roll = -45.0; + gst_println ("Decrease roll angle to %lf", roll); + update_angle = true; + break; + case '+': + roll += 1.0; + if (roll > 45.0) + roll = 45.0; + gst_println ("Increase roll angle to %lf", roll); + update_angle = true; + break; + case '1': + fov -= 1.0; + if (fov < 10) + fov = 10; + gst_println ("Decrease fov to 
%lf", fov); + update_fov = true; + break; + case '2': + fov += 1.0; + if (fov > 120.0) + fov = 120.0; + gst_println ("Increase fov to %lf", fov); + update_fov = true; + break; + case ' ': + pan = 0; + tilt = 0; + roll = 0; + fov = 120; + gst_println ("Reset angle"); + update_angle = true; + update_fov = true; + break; + case 'q': + g_main_loop_quit (loop); + break; + default: + break; + } + } + + if (!update_angle && !update_fov) + return; + + if (update_angle) + app_data->resource->UpdateAngle (tilt, pan, roll); + + if (update_fov) + app_data->resource->cbuf.perspectiveFOV = fov; + + app_data->resource->UpdateRemapResource (); +} + +gint +main (gint argc, gchar ** argv) +{ + AppData data; + gchar *location = nullptr; + GOptionEntry options[] = { + {"location", 0, 0, G_OPTION_ARG_STRING, &location, + "Fisheye image file location"}, + {nullptr} + }; + + auto option_ctx = + g_option_context_new ("Fisheye to perspective projection using d3d12remap"); + g_option_context_add_main_entries (option_ctx, options, nullptr); + g_option_context_set_help_enabled (option_ctx, TRUE); + GError *err = nullptr; + if (!g_option_context_parse (option_ctx, &argc, &argv, &err)) { + gst_printerrln ("option parsing failed: %s\n", err->message); + g_clear_error (&err); + return 0; + } + g_option_context_free (option_ctx); + + if (!location) { + gst_println ("Location must be specified"); + return 0; + } + + gst_init (nullptr, nullptr); + loop = g_main_loop_new (nullptr, FALSE); + + auto resource = create_remap_resource (); + if (!resource) + return 0; + + auto pipeline_str = g_strdup_printf ("filesrc location=%s " + "! decodebin ! d3d12upload ! imagefreeze ! tee name=t ! queue " + "! d3d12remap name=remap ! d3d12videosink t. ! queue ! 
d3d12videosink", + location); + + auto pipeline = gst_parse_launch (pipeline_str, nullptr); + g_free (location); + g_free (pipeline_str); + if (!pipeline) { + gst_println ("Couldn't create pipeline"); + return 0; + } + + auto remap = gst_bin_get_by_name (GST_BIN (pipeline), "remap"); + g_object_set (remap, "uv-remap", resource->uv_remap.Get (), nullptr); + + gst_bus_add_watch (GST_ELEMENT_BUS (pipeline), bus_msg, nullptr); + + data.resource = resource.get(); + data.remap = gst_bin_get_by_name (GST_BIN (pipeline), "remap"); + + print_keyboard_help (); + set_key_handler ((KeyInputCallback) keyboard_cb, &data); + + gst_element_set_state (pipeline, GST_STATE_PLAYING); + + g_main_loop_run (loop); + + gst_element_set_state (pipeline, GST_STATE_NULL); + gst_bus_remove_watch (GST_ELEMENT_BUS (pipeline)); + + gst_object_unref (data.remap); + gst_object_unref (pipeline); + resource = nullptr; + + return 0; +}
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/examples/d3d12/d3d12swapchainsink-win32.cpp -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/examples/d3d12/d3d12swapchainsink-win32.cpp
Changed
@@ -23,6 +23,8 @@ #include <gst/gst.h> #include <gst/video/video.h> +#include <gst/d3d12/gstd3d12.h> +#include <directx/d3dx12.h> #include <windows.h> #include <dcomp.h> @@ -30,14 +32,51 @@ #include <dxgi.h> #include <wrl.h> #include <memory> +#include <d3dcompiler.h> +#include <string.h> +#include "../key-handler.h" using namespace Microsoft::WRL; static GMainLoop *loop_ = nullptr; static HWND hwnd_ = nullptr; +static gchar *snapshot_location = nullptr; +#define VIEW_WIDTH 640 +#define VIEW_HEIGHT 480 +#define REMAP_SIZE 1024 + +static const gchar *shader_str = R"( +RWTexture2D<float4> uvLUT : register(u0); + +[numthreads(8, 8, 1)] +void CSMain(uint3 DTid : SV_DispatchThreadID) +{ + uint width, height; + uvLUT.GetDimensions(width, height); + + if (DTid.x >= width || DTid.y >= height) + return; + + float4 remapUV = float4(0.0, 0.0, 0.0, 1.0); + remapUV.x = 1.0 - ((float) DTid.x / (float) width); + remapUV.y = 1.0 - ((float) DTid.y / (float) height); + + uvLUT[int2(DTid.xy)] = remapUV; +} +)"; struct GpuResource { + ~GpuResource () + { + if (fence_val > 0 && device) { + gst_d3d12_device_fence_wait (device, D3D12_COMMAND_LIST_TYPE_DIRECT, + fence_val); + } + + gst_clear_object (&device); + } + ComPtr<IDCompositionDesktopDevice> dcomp_device; ComPtr<IDCompositionTarget> target; ComPtr<IDCompositionVisual2> visual; @@ -45,11 +84,20 @@ ComPtr<IDCompositionVisual2> swapchain_visual; ComPtr<ID3D11Device> device11; ComPtr<ID3D11DeviceContext> context11; + GstD3D12Device *device = nullptr; + guint64 fence_val = 0; + ComPtr<ID3D12CommandAllocator> ca; + ComPtr<ID3D12GraphicsCommandList> cl; + ComPtr<ID3D12RootSignature> rs; + ComPtr<ID3D12PipelineState> pso; + ComPtr<ID3D12Resource> uv_remap; + ComPtr<ID3D12DescriptorHeap> desc_heap; }; struct AppData { GstElement *pipeline = nullptr; + GstElement *sink = nullptr; std::shared_ptr<GpuResource> resource; }; @@ -108,15 +156,15 @@ } if (SUCCEEDED (hr)) { - if (width > 320) { - FLOAT offset_x = ((FLOAT) (width - 320)) / 2.0; + if 
(width > VIEW_WIDTH) { + FLOAT offset_x = ((FLOAT) (width - VIEW_WIDTH)) / 2.0; resource->swapchain_visual->SetOffsetX (offset_x); } else { resource->swapchain_visual->SetOffsetX (0.0); } - if (height > 240) { - FLOAT offset_y = ((FLOAT) (height - 240)) / 2.0; + if (height > VIEW_HEIGHT) { + FLOAT offset_y = ((FLOAT) (height - VIEW_HEIGHT)) / 2.0; resource->swapchain_visual->SetOffsetY (offset_y); } else { resource->swapchain_visual->SetOffsetY (0.0); @@ -180,6 +228,303 @@ return G_SOURCE_CONTINUE; } +static void +keyboard_cb (gchar input, gboolean is_ascii, AppData * app_data) +{ + static gboolean set_remap = FALSE; + static GstState state = GST_STATE_PLAYING; + static gboolean force_aspect_ratio = TRUE; + + if (is_ascii) { + switch (input) { + case ' ': + if (state == GST_STATE_PAUSED) + state = GST_STATE_PLAYING; + else + state = GST_STATE_PAUSED; + gst_println ("Change state to %s", gst_state_get_name (state)); + + gst_element_set_state (app_data->pipeline, state); + break; + case 'f': + case 'F': + force_aspect_ratio = force_aspect_ratio ? FALSE : TRUE; + g_object_set (app_data->sink, + "force-aspect-ratio", force_aspect_ratio, nullptr); + gst_println ("Change force-aspect-ratio to %d", force_aspect_ratio); + break; + case 'm': + case 'M': + set_remap = set_remap ? 
FALSE : TRUE; + gst_println ("Set remap %d", set_remap); + if (set_remap) { + ID3D12Resource *remap[2]; + D3D12_VIEWPORT viewport[2]; + guint64 bg_colors[2] = { G_GUINT64_CONSTANT(0xffff000000000000), + G_GUINT64_CONSTANT(0xffff000000000000) }; + + /* top-left, draw original image */ + remap[0] = nullptr; + viewport[0].TopLeftX = 0; + viewport[0].TopLeftY = 0; + viewport[0].Width = 0.5; + viewport[0].Height = 0.5; + + /* bottom-right, perform uv remap */ + remap[1] = app_data->resource->uv_remap.Get (); + viewport[1].TopLeftX = 0.5; + viewport[1].TopLeftY = 0.5; + viewport[1].Width = 0.5; + viewport[1].Height = 0.5; + + g_signal_emit_by_name (app_data->sink, "uv-remap", 2, remap, viewport, + bg_colors); + } else { + /* Clear remap */ + g_signal_emit_by_name (app_data->sink, + "uv-remap", 0, nullptr, nullptr, nullptr); + } + + /* Redraw to update view */ + if (state == GST_STATE_PAUSED) + g_signal_emit_by_name (app_data->sink, "redraw"); + break; + case 'c': + case 'C': + if (snapshot_location) { + GstSample *sample = nullptr; + GstSample *out_sample = nullptr; + gboolean remove_borders = TRUE; + g_signal_emit_by_name (app_data->sink, "last-rendered-sample", + remove_borders, &sample); + if (sample) { + auto caps = gst_caps_new_simple ("image/jpeg", nullptr); + out_sample = gst_video_convert_sample (sample, caps, 10 * GST_SECOND, + nullptr); + gst_caps_unref (caps); + gst_sample_unref (sample); + } + + if (out_sample) { + auto buf = gst_sample_get_buffer (out_sample); + GstMapInfo map; + gst_buffer_map (buf, &map, GST_MAP_READ); + gst_println ("Writing snapshot to %s", snapshot_location); + g_file_set_contents (snapshot_location, (gchar *) map.data, + map.size, nullptr); + + gst_buffer_unmap (buf, &map); + gst_sample_unref (out_sample); + } + } + break; + case 'q': + g_main_loop_quit (loop_); + break; + default: + break; + } + } +} + +static HRESULT +creat_rs_blob (GstD3D12Device * device, ID3DBlob ** blob) +{ + D3D12_VERSIONED_ROOT_SIGNATURE_DESC desc = { }; + CD3DX12_ROOT_PARAMETER 
root_params; + CD3DX12_DESCRIPTOR_RANGE range_uav; + + range_uav.Init (D3D12_DESCRIPTOR_RANGE_TYPE_UAV, 1, 0); + root_params.InitAsDescriptorTable (1, &range_uav); + CD3DX12_VERSIONED_ROOT_SIGNATURE_DESC::Init_1_0 (desc, 1, &root_params, + 0, nullptr, + D3D12_ROOT_SIGNATURE_FLAG_DENY_VERTEX_SHADER_ROOT_ACCESS | + D3D12_ROOT_SIGNATURE_FLAG_DENY_HULL_SHADER_ROOT_ACCESS | + D3D12_ROOT_SIGNATURE_FLAG_DENY_DOMAIN_SHADER_ROOT_ACCESS | + D3D12_ROOT_SIGNATURE_FLAG_DENY_GEOMETRY_SHADER_ROOT_ACCESS); + + ComPtr < ID3DBlob > error_blob; + auto hr = D3DX12SerializeVersionedRootSignature (&desc, + D3D_ROOT_SIGNATURE_VERSION_1_0, blob, &error_blob); + if (!gst_d3d12_result (hr, device)) { + const gchar *error_msg = nullptr; + if (error_blob) + error_msg = (const gchar *) error_blob->GetBufferPointer (); + + gst_println ("Couldn't serialize rs, hr: 0x%x, error detail: %s", + (guint) hr, GST_STR_NULL (error_msg)); + } + + return hr; +} + +static HRESULT +compile_shader (GstD3D12Device * device, ID3DBlob ** blob) +{ + ComPtr < ID3DBlob > error_blob; + auto hr = D3DCompile (shader_str, strlen (shader_str), + nullptr, nullptr, nullptr, "CSMain", "cs_5_0", 0, 0, blob, &error_blob); + + if (!gst_d3d12_result (hr, device)) { + const gchar *error_msg = nullptr; + if (error_blob) + error_msg = (const gchar *) error_blob->GetBufferPointer (); + + gst_println ("Couldn't compile shader, hr: 0x%x, error detail: %s", + (guint) hr, GST_STR_NULL (error_msg)); + } + + return hr; +} + +static gboolean +create_remap_resource (GpuResource * resource) +{ + resource->device = gst_d3d12_device_new (0); + if (!resource->device) { + gst_println ("Couldn't create d3d12 device"); + return FALSE; + } + + /* Prepare compute shader and resource. 
+ * Compute shader will write UV remap data to RGBA texture + * (R -> U, G -> V, B -> unused, A -> mask where A < 0.5 will fill background + * color) + */ + ComPtr<ID3DBlob> shader_blob; + auto hr = compile_shader (resource->device, &shader_blob); + if (FAILED (hr)) + return FALSE; + + ComPtr<ID3DBlob> rs_blob; + hr = creat_rs_blob (resource->device, &rs_blob); + if (FAILED (hr)) + return FALSE; + + auto device_handle = gst_d3d12_device_get_device_handle (resource->device); + hr = device_handle->CreateRootSignature (0, rs_blob->GetBufferPointer (), + rs_blob->GetBufferSize (), IID_PPV_ARGS (&resource->rs)); + if (!gst_d3d12_result (hr, resource->device)) { + gst_println ("Couldn't create root signature"); + return FALSE; + } + + D3D12_COMPUTE_PIPELINE_STATE_DESC pso_desc = { }; + pso_desc.pRootSignature = resource->rs.Get (); + pso_desc.CS.pShaderBytecode = shader_blob->GetBufferPointer (); + pso_desc.CS.BytecodeLength = shader_blob->GetBufferSize (); + hr = device_handle->CreateComputePipelineState (&pso_desc, + IID_PPV_ARGS (&resource->pso)); + if (!gst_d3d12_result (hr, resource->device)) { + gst_println ("Couldn't create pso"); + return FALSE; + } + + D3D12_HEAP_PROPERTIES heap_prop = + CD3DX12_HEAP_PROPERTIES (D3D12_HEAP_TYPE_DEFAULT); + D3D12_RESOURCE_DESC resource_desc = + CD3DX12_RESOURCE_DESC::Tex2D (DXGI_FORMAT_R16G16B16A16_UNORM, + REMAP_SIZE, REMAP_SIZE, 1, 1, 1, 0, + D3D12_RESOURCE_FLAG_ALLOW_UNORDERED_ACCESS | + D3D12_RESOURCE_FLAG_ALLOW_SIMULTANEOUS_ACCESS); + hr = device_handle->CreateCommittedResource (&heap_prop, D3D12_HEAP_FLAG_NONE, + &resource_desc, D3D12_RESOURCE_STATE_COMMON, nullptr, + IID_PPV_ARGS (&resource->uv_remap)); + if (!gst_d3d12_result (hr, resource->device)) { + gst_println ("Couldn't create texture"); + return FALSE; + } + + D3D12_DESCRIPTOR_HEAP_DESC desc_heap_desc = { }; + desc_heap_desc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV; + desc_heap_desc.NumDescriptors = 1; + desc_heap_desc.Flags = 
D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE; + hr = device_handle->CreateDescriptorHeap (&desc_heap_desc, + IID_PPV_ARGS (&resource->desc_heap)); + if (!gst_d3d12_result (hr, resource->device)) { + gst_println ("Couldn't create descriptor heap"); + return FALSE; + } + + auto cpu_handle = resource->desc_heap->GetCPUDescriptorHandleForHeapStart (); + D3D12_UNORDERED_ACCESS_VIEW_DESC uav_desc = { }; + uav_desc.Format = DXGI_FORMAT_R16G16B16A16_UNORM; + uav_desc.ViewDimension = D3D12_UAV_DIMENSION_TEXTURE2D; + device_handle->CreateUnorderedAccessView (resource->uv_remap.Get (), + nullptr, &uav_desc, cpu_handle); + + hr = device_handle->CreateCommandAllocator (D3D12_COMMAND_LIST_TYPE_DIRECT, + IID_PPV_ARGS (&resource->ca)); + if (!gst_d3d12_result (hr, resource->device)) { + gst_println ("Couldn't create command allocator"); + return FALSE; + } + + hr = device_handle->CreateCommandList (0, D3D12_COMMAND_LIST_TYPE_DIRECT, + resource->ca.Get (), nullptr, IID_PPV_ARGS (&resource->cl)); + if (!gst_d3d12_result (hr, resource->device)) { + gst_println ("Couldn't create command list"); + return FALSE; + } + + ID3D12DescriptorHeap *heaps[] = { resource->desc_heap.Get () }; + resource->cl->SetComputeRootSignature (resource->rs.Get ()); + resource->cl->SetPipelineState (resource->pso.Get ()); + resource->cl->SetDescriptorHeaps (1, heaps); + resource->cl->SetComputeRootDescriptorTable (0, + resource->desc_heap->GetGPUDescriptorHandleForHeapStart ()); + resource->cl->Dispatch ((REMAP_SIZE + 7) / 8, (REMAP_SIZE + 7) / 8, 1); + hr = resource->cl->Close (); + + if (!gst_d3d12_result (hr, resource->device)) { + gst_println ("Couldn't close command list"); + return FALSE; + } + + ID3D12CommandList *cmd_list[] = { resource->cl.Get () }; + hr = gst_d3d12_device_execute_command_lists (resource->device, + D3D12_COMMAND_LIST_TYPE_DIRECT, 1, cmd_list, &resource->fence_val); + if (!gst_d3d12_result (hr, resource->device)) { + gst_println ("Couldn't execute command list"); + return FALSE; + } + + 
return TRUE; +} + +static void +print_keyboard_help (void) +{ + static struct + { + const gchar *key_desc; + const gchar *key_help; + } key_controls[] = { + {"m", "Toggle remap on/off"}, + {"space", "Toggle pause/play"}, + {"c", "Capture snapshot"}, + {"q", "Quit"}, + }; + + guint i, chars_to_pad, desc_len, max_desc_len = 0; + + gst_print ("\n%s\n", "Keyboard controls:"); + + for (i = 0; i < G_N_ELEMENTS (key_controls); ++i) { + desc_len = g_utf8_strlen (key_controls[i].key_desc, -1); + max_desc_len = MAX (max_desc_len, desc_len); + } + ++max_desc_len; + + for (i = 0; i < G_N_ELEMENTS (key_controls); ++i) { + chars_to_pad = max_desc_len - g_utf8_strlen (key_controls[i].key_desc, -1); + gst_print ("\t%s", key_controls[i].key_desc); + gst_print ("%-*s: ", chars_to_pad, ""); + gst_print ("%s\n", key_controls[i].key_help); + } + gst_print ("\n"); +} + int main (int argc, char ** argv) { @@ -189,6 +534,8 @@ gchar *uri = nullptr; GOptionEntry options[] = { {"uri", 0, 0, G_OPTION_ARG_STRING, &uri, "URI to play"}, + {"snapshot-location", 0, 0, G_OPTION_ARG_STRING, &snapshot_location, + "JPEG file path for saving the snapshot image"}, {nullptr} }; @@ -220,6 +567,8 @@ } g_object_set (app_data.pipeline, "video-sink", sink, "uri", uri, nullptr); + /* playbin will take floating refcount */ + gst_object_ref (sink); } else { app_data.pipeline = gst_parse_launch ("d3d12testsrc ! " "video/x-raw(memory:D3D12Memory),format=RGBA,width=240,height=240 ! " @@ -239,7 +588,7 @@ &app_data); /* Set swapchain resolution and border color */ - g_signal_emit_by_name (sink, "resize", 320, 240); + g_signal_emit_by_name (sink, "resize", VIEW_WIDTH, VIEW_HEIGHT); guint64 border_color = 0; /* alpha */ @@ -248,6 +597,8 @@ border_color |= ((guint64) (G_MAXUINT16 / 2)) << 32; g_object_set (sink, "border-color", border_color, nullptr); + app_data.sink = sink; + /* Gets swapchain handle. 
This swapchain will be bound to a dcomp visual node */ IUnknown *swapchain = nullptr; g_object_get (sink, "swapchain", &swapchain, nullptr); @@ -256,10 +607,6 @@ return 1; } - /* playbin will take floating refcount */ - if (!uri) - gst_object_unref (sink); - /* Creates d3d11 device to initialize dcomp device. * Note that d3d11 (or d2d) device will not be required if swapchain is * the only visual node (i.e., root node) which needs to be composed. @@ -269,6 +616,9 @@ ComPtr<IDXGIFactory1> factory; ComPtr<IDXGIAdapter> adapter; + if (!create_remap_resource (resource.get ())) + return 1; + hr = CreateDXGIFactory1 (IID_PPV_ARGS (&factory)); if (FAILED (hr)) { gst_printerrln ("CreateDXGIFactory1 failed"); @@ -294,7 +644,7 @@ /* Prepare main window */ WNDCLASSEXW wc = { }; - RECT wr = { 0, 0, 640, 480 }; + RECT wr = { 0, 0, VIEW_WIDTH * 2, VIEW_HEIGHT * 2 }; HINSTANCE hinstance = GetModuleHandle (nullptr); wc.cbSize = sizeof (WNDCLASSEXW); wc.lpfnWndProc = (WNDPROC) window_proc; @@ -346,7 +696,8 @@ } /* Create background visual, and clear color using d3d11 API */ - hr = resource->dcomp_device->CreateVirtualSurface (640, 480, + hr = resource->dcomp_device->CreateVirtualSurface (VIEW_WIDTH * 2, + VIEW_HEIGHT * 2, DXGI_FORMAT_R8G8B8A8_UNORM, DXGI_ALPHA_MODE_PREMULTIPLIED, &resource->bg_surface); if (FAILED (hr)) { @@ -400,13 +751,13 @@ return 1; } - hr = resource->swapchain_visual->SetOffsetX (160.0); + hr = resource->swapchain_visual->SetOffsetX (VIEW_WIDTH / 2); if (FAILED (hr)) { gst_printerrln ("SetOffsetX failed"); return 1; } - hr = resource->swapchain_visual->SetOffsetY (120.0); + hr = resource->swapchain_visual->SetOffsetY (VIEW_HEIGHT / 2); if (FAILED (hr)) { gst_printerrln ("SetOffsetY failed"); return 1; @@ -426,14 +777,19 @@ app_data.resource = std::move (resource); + set_key_handler ((KeyInputCallback) keyboard_cb, &app_data); + print_keyboard_help (); gst_element_set_state (app_data.pipeline, GST_STATE_PLAYING); g_main_loop_run (loop_); + unset_key_handler (); 
+ gst_element_set_state (app_data.pipeline, GST_STATE_NULL); gst_bus_remove_watch (GST_ELEMENT_BUS (app_data.pipeline)); app_data.resource = nullptr; gst_object_unref (app_data.pipeline); + gst_object_unref (app_data.sink); if (hwnd_) DestroyWindow (hwnd_);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/examples/d3d12/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/examples/d3d12/meson.build
Changed
@@ -8,12 +8,28 @@ have_dcomp_h = cc.has_header('dcomp.h') have_d3d11_h = cc.has_header('d3d11.h') have_dxgi_h = cc.has_header('dxgi.h') +have_d3dcompile_h = cc.has_header('d3dcompiler.h') dwrite_dep = cc.find_library('dwrite', required: false) dcomp_dep = cc.find_library('dcomp', required: false) d3d11_dep = cc.find_library('d3d11', required: false) dxgi_dep = cc.find_library('dxgi', required: false) runtimeobject_dep = cc.find_library('runtimeobject', required: false) coremessaging_lib = cc.find_library('coremessaging', required: false) +d3dcompile_lib = cc.find_library('d3dcompiler', required: false) + +have_dx_math = cxx.compiles(''' + #include <windows.h> + #include <DirectXMath.h> + using namespace DirectX; + int main(int argc, char ** argv) { + XMMATRIX matrix; + XMFLOAT4X4 dump; + matrix = XMMatrixIdentity (); + XMStoreFloat4x4 (&dump, matrix); + return 0; + } + ''', + name: 'DirectXMath support in Windows SDK') executable('d3d12enc-dynamic-reconfigure', 'd3d12enc-dynamic-reconfigure.c', '../key-handler.c', @@ -31,6 +47,14 @@ install: false ) +executable('d3d12fisheyedewarp', + 'd3d12fisheyedewarp.cpp', '../key-handler.c', + include_directories : configinc, + dependencies: [gst_dep, gstbase_dep, gstvideo_dep], + c_args : gst_plugins_bad_args, + install: false +) + if gstd3d12_dep.found() if have_d2d_h and have_dwrite_h and have_d3d12video_h and dwrite_dep.found() executable('d3d12videosink-overlay', 'd3d12videosink-overlay.cpp', @@ -41,17 +65,34 @@ install: false, ) endif -endif -if cc.get_id() == 'msvc' and have_dcomp_h and dcomp_dep.found() and \ - have_d3d11_h and d3d11_dep.found() and have_dxgi_h and dxgi_dep.found() - executable('d3d12swapchainsink-win32', 'd3d12swapchainsink-win32.cpp', - c_args : gst_plugins_bad_args + '-DGST_USE_UNSTABLE_API', - cpp_args : gst_plugins_bad_args + '-DGST_USE_UNSTABLE_API', - include_directories : [configinc, libsinc], - dependencies: [gst_dep, gstvideo_dep, dcomp_dep, d3d11_dep, dxgi_dep], - install: false, - ) + if 
have_d3dcompile_h and d3dcompile_lib.found() and have_dx_math and dx_headers_dep.found() + extra_args = ['-DGST_USE_UNSTABLE_API', '-DGST_D3D12_USE_DIRECTX_HEADERS'] + extra_args += cc.get_supported_arguments( + '/wd4062', # enumerator 'identifier' in switch of enum 'enumeration' is not handled + ) + executable('d3d12remap-fisheye', + 'd3d12remap-fisheye.cpp', '../key-handler.c', + c_args : gst_plugins_bad_args + extra_args, + cpp_args : gst_plugins_bad_args + extra_args, + include_directories : configinc, + dependencies: [gst_dep, gstd3d12_dep, gstvideo_dep, d3dcompile_lib, dx_headers_dep], + install: false + ) + + if cc.get_id() == 'msvc' and have_dcomp_h and dcomp_dep.found() and \ + have_d3d11_h and d3d11_dep.found() and have_dxgi_h and dxgi_dep.found() + executable('d3d12swapchainsink-win32', + 'd3d12swapchainsink-win32.cpp', '../key-handler.c', + c_args : gst_plugins_bad_args + extra_args, + cpp_args : gst_plugins_bad_args + extra_args, + include_directories : [configinc, libsinc], + dependencies: [gst_dep, gstvideo_dep, gstd3d12_dep, d3dcompile_lib, + dx_headers_dep, dcomp_dep, d3d11_dep, dxgi_dep], + install: false, + ) + endif + endif endif have_winrt_comp_headers = true
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/examples/inter
Added
+(directory)
View file
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/examples/inter/gstintertest.c
Added
@@ -0,0 +1,496 @@ +/* GstInterTest + * Copyright (C) 2011 David Schleef <ds@schleef.org> + * Copyright (C) 2010 Entropy Wave Inc + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * 2. Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in the + * documentation and/or other materials provided with the distribution. + * + * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR + * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE + * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, + * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR + * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE + * POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include <gst/gst.h> +#include <stdlib.h> + +typedef struct _GstInterTest GstInterTest; +struct _GstInterTest +{ + GstElement *pipeline; + GstBus *bus; + GMainLoop *main_loop; + + GstElement *source_element; + GstElement *sink_element; + + gboolean paused_for_buffering; + guint timer_id; +}; + +GstInterTest *gst_inter_test_new (void); +void gst_inter_test_free (GstInterTest * intertest); +void gst_inter_test_create_pipeline_server (GstInterTest * intertest); +void gst_inter_test_create_pipeline_test_sources (GstInterTest * intertest); +void gst_inter_test_create_pipeline_playbin (GstInterTest * intertest, + const char *uri); +void gst_inter_test_start (GstInterTest * intertest); +void gst_inter_test_stop (GstInterTest * intertest); + +static gboolean gst_inter_test_handle_message (GstBus * bus, + GstMessage * message, gpointer data); +static gboolean onesecond_timer (gpointer priv); + + +gboolean verbose; + +static const gchar **uri_arg = NULL; + +static GOptionEntry entries[] = { + {"verbose", 'v', 0, G_OPTION_ARG_NONE, &verbose, "Be verbose", NULL}, + {G_OPTION_REMAINING, 0, 0, G_OPTION_ARG_FILENAME_ARRAY, &uri_arg, 0, + "URL"}, + + {NULL} + +}; + +int +main (int argc, char *argv[]) +{ + GError *error = NULL; + GOptionContext *context; + GstInterTest *intertest1; + GstInterTest *intertest2; + GMainLoop *main_loop; + const gchar *uri = NULL; + + context = g_option_context_new ("- Internal src/sink test"); + g_option_context_add_main_entries (context, entries, GETTEXT_PACKAGE); + g_option_context_add_group (context, gst_init_get_option_group ()); + if (!g_option_context_parse (context, &argc, &argv, &error)) { + gst_println ("option parsing failed: %s", error->message); + g_option_context_free (context); + g_clear_error (&error); + exit (1); + } + g_option_context_free (context); + + if (uri_arg) + uri = uri_arg[0]; + + intertest1 = gst_inter_test_new (); + gst_inter_test_create_pipeline_server (intertest1); + 
gst_inter_test_start (intertest1); + + intertest2 = gst_inter_test_new (); + gst_inter_test_create_pipeline_playbin (intertest2, uri); + gst_inter_test_start (intertest2); + + main_loop = g_main_loop_new (NULL, TRUE); + intertest1->main_loop = main_loop; + intertest2->main_loop = main_loop; + + g_main_loop_run (main_loop); + g_main_loop_unref (main_loop); + + gst_inter_test_free (intertest1); + gst_inter_test_free (intertest2); + + gst_deinit (); + exit (0); +} + + +GstInterTest * +gst_inter_test_new (void) +{ + GstInterTest *intertest; + + intertest = g_new0 (GstInterTest, 1); + + return intertest; +} + +void +gst_inter_test_free (GstInterTest * intertest) +{ + if (!intertest) + return; + + gst_clear_object (&intertest->source_element); + gst_clear_object (&intertest->sink_element); + + if (intertest->bus) { + gst_bus_remove_watch (intertest->bus); + gst_bus_set_flushing (intertest->bus, TRUE); + gst_clear_object (&intertest->bus); + } + + if (intertest->pipeline) { + gst_element_set_state (intertest->pipeline, GST_STATE_NULL); + gst_clear_object (&intertest->pipeline); + } + g_free (intertest); +} + +void +gst_inter_test_create_pipeline_playbin (GstInterTest * intertest, + const char *uri) +{ + GstElement *pipeline; + GstElement *playbin; + GstElement *audio_sink; + GstElement *video_sink; + + if (uri == NULL) { + gst_inter_test_create_pipeline_test_sources (intertest); + return; + } + + pipeline = gst_pipeline_new (NULL); + playbin = gst_element_factory_make ("playbin3", "source"); + audio_sink = gst_element_factory_make ("interaudiosink", NULL); + video_sink = gst_element_factory_make ("intervideosink", NULL); + g_object_set (playbin, "audio-sink", audio_sink, "video-sink", video_sink, + NULL); + gst_bin_add (GST_BIN_CAST (pipeline), playbin); + + intertest->pipeline = pipeline; + + gst_pipeline_set_auto_flush_bus (GST_PIPELINE (pipeline), FALSE); + intertest->bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline)); + gst_bus_add_watch (intertest->bus, 
gst_inter_test_handle_message, intertest); + + intertest->source_element = + gst_bin_get_by_name (GST_BIN (pipeline), "source"); + gst_println ("source_element is %" GST_PTR_FORMAT, intertest->source_element); + + gst_println ("setting uri to %s", uri); + g_object_set (intertest->source_element, "uri", uri, NULL); +} + +void +gst_inter_test_create_pipeline_test_sources (GstInterTest * intertest) +{ + GString *pipe_desc; + GstElement *pipeline; + GError *error = NULL; + + pipe_desc = g_string_new (""); + + g_string_append (pipe_desc, "videotestsrc name=source num-buffers=100 ! "); + g_string_append (pipe_desc, + "video/x-raw,format=(string)I420,width=320,height=240 ! "); + g_string_append (pipe_desc, "timeoverlay ! "); + g_string_append (pipe_desc, "intervideosink name=sink sync=true "); + g_string_append (pipe_desc, + "audiotestsrc samplesperbuffer=1600 num-buffers=100 ! audio/x-raw,format=F32LE ! audioconvert ! "); + g_string_append (pipe_desc, "interaudiosink sync=true "); + + if (verbose) + gst_println ("pipeline: %s", pipe_desc->str); + + pipeline = gst_parse_launch (pipe_desc->str, &error); + g_string_free (pipe_desc, TRUE); + + if (error) { + gst_println ("pipeline parsing error: %s", error->message); + gst_object_unref (pipeline); + g_clear_error (&error); + return; + } + + intertest->pipeline = pipeline; + + gst_pipeline_set_auto_flush_bus (GST_PIPELINE (pipeline), FALSE); + intertest->bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline)); + gst_bus_add_watch (intertest->bus, gst_inter_test_handle_message, intertest); + + intertest->source_element = + gst_bin_get_by_name (GST_BIN (pipeline), "source"); + intertest->sink_element = gst_bin_get_by_name (GST_BIN (pipeline), "sink"); +} + +void +gst_inter_test_create_pipeline_server (GstInterTest * intertest) +{ + GString *pipe_desc; + GstElement *pipeline; + GError *error = NULL; + + pipe_desc = g_string_new (""); + + g_string_append (pipe_desc, "intervideosrc ! queue ! 
"); + g_string_append (pipe_desc, "autovideosink name=sink "); + g_string_append (pipe_desc, "interaudiosrc ! queue ! "); + g_string_append (pipe_desc, "autoaudiosink "); + + if (verbose) + gst_println ("pipeline: %s", pipe_desc->str); + + pipeline = (GstElement *) gst_parse_launch (pipe_desc->str, &error); + g_string_free (pipe_desc, TRUE); + + if (error) { + gst_println ("pipeline parsing error: %s", error->message); + gst_object_unref (pipeline); + g_clear_error (&error); + return; + } + + intertest->pipeline = pipeline; + + gst_pipeline_set_auto_flush_bus (GST_PIPELINE (pipeline), FALSE); + intertest->bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline)); + gst_bus_add_watch (intertest->bus, gst_inter_test_handle_message, intertest); + + intertest->source_element = + gst_bin_get_by_name (GST_BIN (pipeline), "source"); + intertest->sink_element = gst_bin_get_by_name (GST_BIN (pipeline), "sink"); +} + +void +gst_inter_test_start (GstInterTest * intertest) +{ + gst_element_set_state (intertest->pipeline, GST_STATE_READY); + + intertest->timer_id = g_timeout_add_seconds (1, onesecond_timer, intertest); +} + +void +gst_inter_test_stop (GstInterTest * intertest) +{ + gst_element_set_state (intertest->pipeline, GST_STATE_NULL); + + g_source_remove (intertest->timer_id); +} + +static void +gst_inter_test_handle_eos (GstInterTest * intertest) +{ + gst_inter_test_stop (intertest); +} + +static void +gst_inter_test_handle_error (GstInterTest * intertest, GError * error, + const char *debug) +{ + gst_printerrln ("error: %s", error->message); + gst_inter_test_stop (intertest); +} + +static void +gst_inter_test_handle_warning (GstInterTest * intertest, GError * error, + const char *debug) +{ + gst_printerrln ("warning: %s", error->message); +} + +static void +gst_inter_test_handle_info (GstInterTest * intertest, GError * error, + const char *debug) +{ + gst_println ("info: %s", error->message); +} + +static void +gst_inter_test_handle_null_to_ready (GstInterTest * intertest) 
+{ + gst_element_set_state (intertest->pipeline, GST_STATE_PAUSED); + +} + +static void +gst_inter_test_handle_ready_to_paused (GstInterTest * intertest) +{ + if (!intertest->paused_for_buffering) { + gst_element_set_state (intertest->pipeline, GST_STATE_PLAYING); + } +} + +static void +gst_inter_test_handle_paused_to_playing (GstInterTest * intertest) +{ + +} + +static void +gst_inter_test_handle_playing_to_paused (GstInterTest * intertest) +{ + +} + +static void +gst_inter_test_handle_paused_to_ready (GstInterTest * intertest) +{ + +} + +static void +gst_inter_test_handle_ready_to_null (GstInterTest * intertest) +{ + g_main_loop_quit (intertest->main_loop); +} + + +static gboolean +gst_inter_test_handle_message (GstBus * bus, GstMessage * message, + gpointer data) +{ + GstInterTest *intertest = (GstInterTest *) data; + + switch (GST_MESSAGE_TYPE (message)) { + case GST_MESSAGE_EOS: + gst_inter_test_handle_eos (intertest); + break; + case GST_MESSAGE_ERROR: + { + GError *error = NULL; + gchar *debug; + + gst_message_parse_error (message, &error, &debug); + gst_inter_test_handle_error (intertest, error, debug); + g_clear_error (&error); + g_free (debug); + } + break; + case GST_MESSAGE_WARNING: + { + GError *error = NULL; + gchar *debug; + + gst_message_parse_warning (message, &error, &debug); + gst_inter_test_handle_warning (intertest, error, debug); + g_clear_error (&error); + g_free (debug); + } + break; + case GST_MESSAGE_INFO: + { + GError *error = NULL; + gchar *debug; + + gst_message_parse_info (message, &error, &debug); + gst_inter_test_handle_info (intertest, error, debug); + g_clear_error (&error); + g_free (debug); + } + break; + case GST_MESSAGE_TAG: + { + GstTagList *tag_list; + + gst_message_parse_tag (message, &tag_list); + if (verbose) + gst_println ("tag: %" GST_PTR_FORMAT, tag_list); + gst_tag_list_unref (tag_list); + } + break; + case GST_MESSAGE_STATE_CHANGED: + { + GstState oldstate, newstate, pending; + + gst_message_parse_state_changed 
(message, &oldstate, &newstate, &pending); + if (GST_ELEMENT (message->src) == intertest->pipeline) { + if (verbose) + gst_println ("state change from %s to %s", + gst_state_get_name (oldstate), gst_state_get_name (newstate)); + switch (GST_STATE_TRANSITION (oldstate, newstate)) { + case GST_STATE_CHANGE_NULL_TO_READY: + gst_inter_test_handle_null_to_ready (intertest); + break; + case GST_STATE_CHANGE_READY_TO_PAUSED: + gst_inter_test_handle_ready_to_paused (intertest); + break; + case GST_STATE_CHANGE_PAUSED_TO_PLAYING: + gst_inter_test_handle_paused_to_playing (intertest); + break; + case GST_STATE_CHANGE_PLAYING_TO_PAUSED: + gst_inter_test_handle_playing_to_paused (intertest); + break; + case GST_STATE_CHANGE_PAUSED_TO_READY: + gst_inter_test_handle_paused_to_ready (intertest); + break; + case GST_STATE_CHANGE_READY_TO_NULL: + gst_inter_test_handle_ready_to_null (intertest); + break; + default: + if (verbose) + gst_println ("unknown state change from %s to %s", + gst_state_get_name (oldstate), gst_state_get_name (newstate)); + } + } + } + break; + case GST_MESSAGE_BUFFERING: + { + int percent; + gst_message_parse_buffering (message, &percent); + //gst_println("buffering %d", percent); + if (!intertest->paused_for_buffering && percent < 100) { + gst_println ("pausing for buffering"); + intertest->paused_for_buffering = TRUE; + gst_element_set_state (intertest->pipeline, GST_STATE_PAUSED); + } else if (intertest->paused_for_buffering && percent == 100) { + gst_println ("unpausing for buffering"); + intertest->paused_for_buffering = FALSE; + gst_element_set_state (intertest->pipeline, GST_STATE_PLAYING); + } + } + break; + case GST_MESSAGE_STATE_DIRTY: + case GST_MESSAGE_CLOCK_PROVIDE: + case GST_MESSAGE_CLOCK_LOST: + case GST_MESSAGE_NEW_CLOCK: + case GST_MESSAGE_STRUCTURE_CHANGE: + case GST_MESSAGE_STREAM_STATUS: + break; + case GST_MESSAGE_STEP_DONE: + case GST_MESSAGE_APPLICATION: + case GST_MESSAGE_ELEMENT: + case GST_MESSAGE_SEGMENT_START: + case
GST_MESSAGE_SEGMENT_DONE: + case GST_MESSAGE_LATENCY: + case GST_MESSAGE_ASYNC_START: + case GST_MESSAGE_ASYNC_DONE: + case GST_MESSAGE_REQUEST_STATE: + case GST_MESSAGE_STEP_START: + default: + if (verbose) { + gst_println ("message: %s", GST_MESSAGE_TYPE_NAME (message)); + } + break; + case GST_MESSAGE_QOS: + break; + } + + return TRUE; +} + +static gboolean +onesecond_timer (gpointer priv) +{ + //GstInterTest *intertest = (GstInterTest *)priv; + + gst_println ("."); + + return G_SOURCE_CONTINUE; +}
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/examples/inter/meson.build
Added
@@ -0,0 +1,9 @@ +if get_option('inter').disabled() + subdir_done() +endif + +executable('intertest', 'gstintertest.c', + include_directories: configinc, + dependencies: gst_dep, + c_args: gst_plugins_bad_args, + install: false)
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/examples/ipcpipeline/ipc-play.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/examples/ipcpipeline/ipc-play.c
Changed
@@ -411,7 +411,7 @@ if (buffering) { buffering = FALSE; gst_element_set_state (GST_ELEMENT (pipeline), desired_state); - g_print ("\n%s\n", gst_element_state_get_name (desired_state)); + g_print ("\n%s\n", gst_state_get_name (desired_state)); } } else { /* buffering... */ @@ -446,7 +446,7 @@ GST_DEBUG_GRAPH_SHOW_VERBOSE, "ipc.slave.reqstate"); g_print ("Setting state to %s as requested by %s...\n", - gst_element_state_get_name (state), name); + gst_state_get_name (state), name); gst_element_set_state (GST_ELEMENT (pipeline), state); g_free (name);
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/examples/mediafoundation/mfvideoenc-dynamic-reconfigure.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/examples/mediafoundation/mfvideoenc-dynamic-reconfigure.c
Changed
@@ -161,7 +161,7 @@ gst_message_parse_state_changed (msg, &old, &new, &pending); state_transition_name = g_strdup_printf ("%s_%s", - gst_element_state_get_name (old), gst_element_state_get_name (new)); + gst_state_get_name (old), gst_state_get_name (new)); /* dump graph for (some) pipeline state changes */ {
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/examples/meson.build -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/examples/meson.build
Changed
@@ -8,6 +8,7 @@ subdir('d3d12') subdir('directfb') subdir('gtk') +subdir('inter') subdir('ipcpipeline') subdir('mediafoundation') subdir('mpegts') @@ -19,6 +20,7 @@ subdir('qt6d3d11') subdir('uvch264') subdir('va') +subdir('vulkan') subdir('waylandsink') subdir('webrtc') subdir('wpe')
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/examples/nvcodec/nvcodec.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/examples/nvcodec/nvcodec.c
Changed
@@ -152,7 +152,7 @@ gst_message_parse_state_changed (msg, &old, &new, &pending); state_transition_name = g_strdup_printf ("%s_%s", - gst_element_state_get_name (old), gst_element_state_get_name (new)); + gst_state_get_name (old), gst_state_get_name (new)); /* dump graph for (some) pipeline state changes */ {
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/examples/vulkan
Added
+(directory)
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/examples/vulkan/meson.build
Added
@@ -0,0 +1,21 @@ +if not gstvulkan_dep.found() + subdir_done() +endif + +executable('vulkanenc', + 'vulkanenc.c', '../key-handler.c', + include_directories : configinc, + dependencies: [gst_dep, gstbase_dep, gstvideo_dep], + c_args : gst_plugins_bad_args + ['-DGST_USE_UNSTABLE_API'], + install: false) + +sdl3_dep = dependency('sdl3', version : '>=3.2.0', required : get_option('examples')) + +if sdl3_dep.found() + executable('sdl3_vulkandec', + 'sdl3_vulkandec.c', + include_directories : configinc, + dependencies: [gst_dep, gstapp_dep, gstvulkan_dep, sdl3_dep], + c_args : gst_plugins_bad_args + ['-DGST_USE_UNSTABLE_API'], + install: false) +endif
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/examples/vulkan/sdl3_vulkandec.c
Added
@@ -0,0 +1,586 @@ +/* + * GStreamer + * Copyright (C) 2025 anonymix007 <48598263+anonymix007@users.noreply.github.com> + * Victor Jaquez <vjaquez@igalia.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef HAVE_CONFIG_H +#include "config.h" +#endif + +#include <gst/gst.h> +#include <gst/app/app.h> +#include <gst/video/video.h> +#include <gst/vulkan/vulkan.h> + +#define SDL_MAIN_USE_CALLBACKS +#include <SDL3/SDL.h> +#include <SDL3/SDL_main.h> + +typedef struct +{ + GstElement *pipeline; + GstBus *bus; + + GstVulkanInstance *instance; + GstVulkanDevice *device; + + SDL_Window *window; + SDL_Renderer *renderer; + + SDL_Thread *loop_thread; + + SDL_Texture *texture; + + /* operation */ + GstSample *last_sample; + GMutex lock; + GCond cond; + gboolean rendered; + gboolean quit; +} AppData; + +static void +end_stream_cb (GstMessage * msg, AppData * appdata) +{ + switch (GST_MESSAGE_TYPE (msg)) { + case GST_MESSAGE_EOS: + SDL_LogInfo (SDL_LOG_CATEGORY_APPLICATION, "End of stream"); + break; + case GST_MESSAGE_ERROR:{ + gchar *debug = NULL; + GError *err = NULL; + + gst_message_parse_error (msg, &err, &debug); + + SDL_LogError (SDL_LOG_CATEGORY_APPLICATION, "Error: %s", err->message); + g_error_free (err); + + if (debug) { + SDL_LogError 
(SDL_LOG_CATEGORY_APPLICATION, "\tDebug details: %s", + debug); + g_free (debug); + } + break; + } + default: + break; + } + + appdata->quit = TRUE; +} + +static SDL_PixelFormat +sdl_format_from_vk (VkFormat format) +{ + switch (format) { + case VK_FORMAT_B8G8R8A8_UNORM: + return SDL_PIXELFORMAT_ARGB8888; + case VK_FORMAT_R8G8B8A8_UNORM: + return SDL_PIXELFORMAT_ABGR8888; + case VK_FORMAT_R8_UNORM: + // This value is probably a GStreamer bug: + // https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/4623 + // "Vulkan native YUV formats" are kinda broken + case VK_FORMAT_G8_B8R8_2PLANE_420_UNORM: + return SDL_PIXELFORMAT_NV12; + default: + g_assert_not_reached (); + } +} + +static gboolean +create_texture (AppData * appdata, GstBuffer * buffer) +{ + VkImage vkimage; + VkFormat vkformat; + guint width, height; + + SDL_PropertiesID props = SDL_CreateProperties (); + + { + GstMemory *memory; + GstVulkanImageMemory *vkmem; + + g_assert (gst_buffer_n_memory (buffer) == 1); + memory = gst_buffer_peek_memory (buffer, 0); + g_assert (gst_is_vulkan_image_memory (memory)); + vkmem = (GstVulkanImageMemory *) memory; + + g_assert (vkmem->device == appdata->device); + + vkimage = vkmem->image; + vkformat = vkmem->create_info.format; + width = vkmem->create_info.extent.width; + height = vkmem->create_info.extent.height; + } + + if (appdata->texture) + SDL_DestroyTexture (appdata->texture); + + SDL_SetNumberProperty (props, SDL_PROP_TEXTURE_CREATE_WIDTH_NUMBER, width); + SDL_SetNumberProperty (props, SDL_PROP_TEXTURE_CREATE_HEIGHT_NUMBER, height); + SDL_SetNumberProperty (props, SDL_PROP_TEXTURE_CREATE_FORMAT_NUMBER, + sdl_format_from_vk (vkformat)); + SDL_SetNumberProperty (props, SDL_PROP_TEXTURE_CREATE_VULKAN_TEXTURE_NUMBER, + vkimage); + + appdata->texture = SDL_CreateTextureWithProperties (appdata->renderer, props); + SDL_DestroyProperties (props); + + if (!appdata->texture) { + SDL_LogError (SDL_LOG_CATEGORY_APPLICATION, "Failed to create texture: %s", + SDL_GetError 
()); + return FALSE; + } + + return TRUE; +} + +static gboolean +draw (AppData * appdata) +{ + if (appdata->last_sample) { + GstBuffer *buffer = gst_sample_get_buffer (appdata->last_sample); + if (!create_texture (appdata, buffer)) + return FALSE; + } else if (!appdata->texture) { + SDL_LogInfo (SDL_LOG_CATEGORY_APPLICATION, + "Neither a sample nor a texture is available yet"); + } + + if (!appdata->texture) { + if (!SDL_SetRenderDrawColor (appdata->renderer, 0xFF, 0x18, 0x18, 0xFF)) { + SDL_LogError (SDL_LOG_CATEGORY_APPLICATION, "Failed to set color: %s", + SDL_GetError ()); + return FALSE; + } + if (!SDL_RenderClear (appdata->renderer)) { + SDL_LogError (SDL_LOG_CATEGORY_APPLICATION, + "Failed to clear with color: %s", SDL_GetError ()); + return FALSE; + } + } else { + if (!SDL_RenderTexture (appdata->renderer, appdata->texture, NULL, NULL)) { + SDL_LogError (SDL_LOG_CATEGORY_APPLICATION, + "Failed to render texture: %s", SDL_GetError ()); + return FALSE; + } + } + + if (!SDL_RenderPresent (appdata->renderer)) { + SDL_LogError (SDL_LOG_CATEGORY_APPLICATION, "Failed to present: %s", + SDL_GetError ()); + return FALSE; + } + + return TRUE; +} + +static void +sdl_send_expose (void *userdata) +{ + SDL_Event event; + SDL_zero (event); + event.type = SDL_EVENT_WINDOW_EXPOSED; + SDL_PushEvent (&event); +} + +static GstFlowReturn +get_sample (AppData * appdata, GstSample * sample) +{ + if (!sample) + return GST_FLOW_EOS; + + g_mutex_lock (&appdata->lock); + + gst_clear_sample (&appdata->last_sample); + appdata->last_sample = sample; + appdata->rendered = false; + + SDL_RunOnMainThread (sdl_send_expose, NULL, false); + + while (!appdata->rendered && !appdata->quit) + g_cond_wait (&appdata->cond, &appdata->lock); + + g_mutex_unlock (&appdata->lock); + + return appdata->rendered ? 
GST_FLOW_OK : GST_FLOW_ERROR; +} + +static GstFlowReturn +new_sample_cb (GstAppSink * sink, gpointer data) +{ + return get_sample (data, gst_app_sink_pull_sample (sink)); +} + +static GstFlowReturn +new_preroll_cb (GstAppSink * sink, gpointer data) +{ + return get_sample (data, gst_app_sink_pull_preroll (sink)); +} + +static GstPadProbeReturn +pad_query_cb (GstPad * pad, GstPadProbeInfo * info, gpointer data) +{ + AppData *appdata = data; + + if (GST_PAD_PROBE_INFO_TYPE (info) & GST_PAD_PROBE_TYPE_QUERY_DOWNSTREAM) { + GstQuery *query = GST_PAD_PROBE_INFO_QUERY (info); + + switch (GST_QUERY_TYPE (query)) { + case GST_QUERY_CONTEXT: + if (gst_vulkan_handle_context_query (GST_PAD_PARENT (pad), query, NULL, + appdata->instance, appdata->device)) { + return GST_PAD_PROBE_HANDLED; + } + default: + break; + } + } + + return GST_PAD_PROBE_OK; +} + +static gboolean +sdl_renderer_init (AppData * app) +{ + SDL_PropertiesID props = SDL_CreateProperties (); + + SDL_SetStringProperty (props, SDL_PROP_RENDERER_CREATE_NAME_STRING, "vulkan"); + SDL_SetPointerProperty (props, SDL_PROP_RENDERER_CREATE_WINDOW_POINTER, + app->window); + SDL_SetPointerProperty (props, + SDL_PROP_RENDERER_CREATE_VULKAN_INSTANCE_POINTER, + app->instance->instance); + SDL_SetPointerProperty (props, + SDL_PROP_RENDERER_CREATE_VULKAN_PHYSICAL_DEVICE_POINTER, + app->device->physical_device->device); + SDL_SetPointerProperty (props, SDL_PROP_RENDERER_CREATE_VULKAN_DEVICE_POINTER, + app->device->device); + + app->renderer = SDL_CreateRendererWithProperties (props); + + SDL_DestroyProperties (props); + + if (!app->renderer) { + SDL_LogError (SDL_LOG_CATEGORY_APPLICATION, "Failed to create renderer: %s", + SDL_GetError ()); + } else { + SDL_LogInfo (SDL_LOG_CATEGORY_APPLICATION, "Renderer name: %s", + SDL_GetRendererName (app->renderer)); + } + + return (app->renderer != NULL); +} + +static void +sdl_deinit (AppData * app) +{ + if (app->loop_thread) + SDL_WaitThread (app->loop_thread, NULL); + if (app->texture)
+ SDL_DestroyTexture (app->texture); + if (app->renderer) + SDL_DestroyRenderer (app->renderer); + if (app->window) + SDL_DestroyWindow (app->window); +} + +struct DevData +{ + gboolean graphics_queue; + gboolean video_queue; + VkVideoCodecOperationFlagsKHR codecs; +}; + +static gboolean +vulkan_pick_queues (GstVulkanDevice * device, GstVulkanQueue * queue, + gpointer data) +{ + struct DevData *dev = data; + guint flags = + device->physical_device->queue_family_props[queue->family].queueFlags; + guint32 codecs = + device->physical_device->queue_family_ops[queue->family].video; + + + dev->graphics_queue |= + ((flags & VK_QUEUE_TRANSFER_BIT) == VK_QUEUE_TRANSFER_BIT); + dev->video_queue |= + (((flags & VK_QUEUE_VIDEO_DECODE_BIT_KHR) == + VK_QUEUE_VIDEO_DECODE_BIT_KHR) + && ((codecs & dev->codecs) == dev->codecs)); + + return !(dev->graphics_queue && dev->video_queue); +} + +static gboolean +vulkan_init (AppData * app, VkVideoCodecOperationFlagsKHR codecs) +{ + struct DevData dev = { FALSE, FALSE, codecs }; + GError *error = NULL; + + app->instance = gst_vulkan_instance_new (); + if (!app->instance) + return FALSE; + + if (!gst_vulkan_instance_fill_info (app->instance, &error)) { + SDL_LogError (SDL_LOG_CATEGORY_APPLICATION, + "Failed to populate Vulkan instance: %s", error->message); + goto bail; + } + + /* SDL stupidity: if instance supports both xlib and xcb, SDL chooses xlib; + * while GStreamer only enables xcb */ + { + GstVulkanDisplayType display_type; + + display_type = gst_vulkan_display_choose_type (app->instance); + if (display_type == GST_VULKAN_DISPLAY_TYPE_XCB) + gst_vulkan_instance_enable_extension (app->instance, + "VK_KHR_xlib_surface"); + } + + + if (!gst_vulkan_instance_open (app->instance, &error)) { + SDL_LogError (SDL_LOG_CATEGORY_APPLICATION, + "Failed to open Vulkan instance: %s", error->message); + goto bail; + } + + for (int i = 0; i < app->instance->n_physical_devices; i++) { + app->device = gst_vulkan_device_new_with_index (app->instance, i); +
if (!app->device) + continue; + + if (!gst_vulkan_device_open (app->device, &error)) { + SDL_LogError (SDL_LOG_CATEGORY_APPLICATION, + "Failed to open Vulkan devices: %s", error->message); + g_clear_error (&error); + gst_clear_object (&app->device); + continue; + } + + gst_vulkan_device_foreach_queue (app->device, vulkan_pick_queues, &dev); + if (dev.graphics_queue && dev.video_queue) + break; + + dev.graphics_queue = dev.video_queue = FALSE; + + gst_clear_object (&app->device); + } + + /* TODO: check device can render too */ + + if (!app->device) { + SDL_LogError (SDL_LOG_CATEGORY_APPLICATION, + "No usable Vulkan device found"); + goto bail; + } else { + SDL_LogInfo (SDL_LOG_CATEGORY_APPLICATION, "Using device %s", + app->device->physical_device->properties.deviceName); + } + + return TRUE; + +bail: + g_clear_error (&error); + gst_clear_object (&app->instance); + return FALSE; +} + +static void +vulkan_deinit (AppData * app) +{ + gst_clear_object (&app->device); + gst_clear_object (&app->instance); +} + +static int +bus_thread (void *data) +{ + AppData *appdata = data; + GstMessage *msg; + + msg = gst_bus_timed_pop_filtered (appdata->bus, GST_CLOCK_TIME_NONE, + GST_MESSAGE_EOS | GST_MESSAGE_ERROR); + end_stream_cb (msg, appdata); + gst_message_unref (msg); + + return 0; +} + +SDL_AppResult +SDL_AppInit (void **data, int argc, char **argv) +{ + GError *error = NULL; + AppData *appdata = g_new0 (AppData, 1); + + if (!SDL_SetHint (SDL_HINT_MAIN_CALLBACK_RATE, "120")) { + SDL_LogError (SDL_LOG_CATEGORY_APPLICATION, "Failed to set FPS: %s", + SDL_GetError ()); + return SDL_APP_FAILURE; + } + + if (!SDL_Init (SDL_INIT_VIDEO)) { + SDL_LogError (SDL_LOG_CATEGORY_APPLICATION, "Failed to initialize SDL: %s", + SDL_GetError ()); + return SDL_APP_FAILURE; + } + + gst_init (&argc, &argv); + if (argc != 2) { + SDL_LogError (SDL_LOG_CATEGORY_APPLICATION, "Missing H.264 file to render"); + return SDL_APP_FAILURE; + } + + if (!vulkan_init (appdata, 
VK_VIDEO_CODEC_OPERATION_DECODE_H264_BIT_KHR)) + return SDL_APP_FAILURE; + + appdata->pipeline = gst_parse_launch ("filesrc name=src ! parsebin ! " + "vulkanh264dec ! appsink name=vksink", &error); + if (!appdata->pipeline) { + SDL_LogError (SDL_LOG_CATEGORY_APPLICATION, + "Failed to parse GStreamer pipeline: %s", error->message); + g_clear_error (&error); + return SDL_APP_FAILURE; + } else if (error) { + SDL_LogInfo (SDL_LOG_CATEGORY_APPLICATION, "Pipeline parsing warning: %s", + error->message); + g_clear_error (&error); + } + + appdata->window = SDL_CreateWindow ("SDL GStreamer Vulkan Demo", 1280, 800, + SDL_WINDOW_RESIZABLE | SDL_WINDOW_VULKAN); + if (!appdata->window) { + SDL_LogError (SDL_LOG_CATEGORY_APPLICATION, + "Failed to create SDL Vulkan window: %s", SDL_GetError ()); + vulkan_deinit (appdata); + gst_clear_object (&appdata->pipeline); + return SDL_APP_FAILURE; + } + + if (!sdl_renderer_init (appdata)) { + vulkan_deinit (appdata); + gst_clear_object (&appdata->pipeline); + sdl_deinit (appdata); + return SDL_APP_FAILURE; + } + + g_mutex_init (&appdata->lock); + + { + GstElement *vksink = + gst_bin_get_by_name (GST_BIN (appdata->pipeline), "vksink"); + GstPad *pad = gst_element_get_static_pad (vksink, "sink"); + GstCaps *caps = gst_caps_from_string ("video/x-raw(memory:VulkanImage)"); + GstAppSinkCallbacks callbacks = { + .new_sample = new_sample_cb, + .new_preroll = new_preroll_cb, + }; + + g_assert (pad != NULL); + + gst_app_sink_set_callbacks (GST_APP_SINK (vksink), &callbacks, appdata, + NULL); + g_object_set (vksink, "caps", caps, NULL); + gst_caps_unref (caps); + + gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_QUERY_DOWNSTREAM, pad_query_cb, + appdata, NULL); + gst_object_unref (pad); + gst_object_unref (vksink); + } + + { + GstElement *src = gst_bin_get_by_name (GST_BIN (appdata->pipeline), "src"); + g_object_set (src, "location", argv[1], NULL); + gst_object_unref (src); + } + + appdata->bus = gst_pipeline_get_bus (GST_PIPELINE (appdata->pipeline)); +
+ gst_element_set_state (appdata->pipeline, GST_STATE_PLAYING); + + appdata->loop_thread = SDL_CreateThread (bus_thread, "gst-bus-thread", + appdata); + + *data = appdata; + + return SDL_APP_CONTINUE; +} + +SDL_AppResult +SDL_AppEvent (void *data, SDL_Event * event) +{ + AppData *appdata = data; + + switch (event->type) { + case SDL_EVENT_KEY_DOWN: + if (event->key.key != SDLK_ESCAPE) + break; + /* fallthrough */ + case SDL_EVENT_QUIT: + SDL_LogInfo (SDL_LOG_CATEGORY_APPLICATION, "SDL_EVENT_QUIT"); + gst_element_send_event (appdata->pipeline, gst_event_new_eos ()); + break; + default: + break; + } + + return SDL_APP_CONTINUE; +} + +SDL_AppResult +SDL_AppIterate (void *data) +{ + AppData *appdata = data; + + g_mutex_lock (&appdata->lock); + if (!appdata->quit) + appdata->rendered = draw (appdata); + g_mutex_unlock (&appdata->lock); + + g_cond_signal (&appdata->cond); + + return appdata->quit ? SDL_APP_SUCCESS : SDL_APP_CONTINUE; +} + +void +SDL_AppQuit (void *data, SDL_AppResult result) +{ + AppData *appdata = data; + + if (appdata) { + gst_element_set_state (appdata->pipeline, GST_STATE_NULL); + gst_object_unref (appdata->pipeline); + gst_object_unref (appdata->bus); + + gst_clear_sample (&appdata->last_sample); + g_mutex_clear (&appdata->lock); + + sdl_deinit (appdata); + + vulkan_deinit (appdata); + } + + g_free (data); + + SDL_Quit (); +}
_service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/examples/vulkan/vulkanenc.c
Added
@@ -0,0 +1,536 @@ +/* GStreamer + * Copyright (C) 2022 Seungha Yang <seungha@centricular.com> + * 2025 Víctor Jáquez <vjaquez@igalia.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + * + * You should have received a copy of the GNU Library General Public + * License along with this library; if not, write to the + * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, + * Boston, MA 02110-1301, USA. + */ + +#ifdef HAVE_CONFIG_H +#include <config.h> +#endif + +#include <gst/gst.h> +#include <gst/video/video.h> +#include <stdlib.h> +#include "../key-handler.h" + +static GMainLoop *loop = NULL; +static gint width = 640; +static gint height = 480; +static guint rc_ctrl = 0; +static gboolean alive = FALSE; + +G_LOCK_DEFINE_STATIC (input_lock); + +typedef struct +{ + GstElement *pipeline; + GstElement *capsfilter; + GstElement *encoder; + gulong probe_id; + + gint prev_width; + gint prev_height; +} TestCallbackData; + +static gboolean +bus_msg (GstBus * bus, GstMessage * msg, gpointer user_data) +{ + switch (GST_MESSAGE_TYPE (msg)) { + case GST_MESSAGE_ERROR:{ + GError *err; + gchar *dbg; + + gst_message_parse_error (msg, &err, &dbg); + gst_printerrln ("ERROR %s", err->message); + if (dbg != NULL) + gst_printerrln ("ERROR debug information: %s", dbg); + g_clear_error (&err); + g_free (dbg); + + g_main_loop_quit (loop); + break; + } + case GST_MESSAGE_PROPERTY_NOTIFY:{ + const GValue *val; + const gchar *name; + GstObject *obj; + gchar *val_str = NULL; + gchar *obj_name; + + 
gst_message_parse_property_notify (msg, &obj, &name, &val); + + if (!GST_IS_VIDEO_ENCODER (obj)) + break; + + obj_name = gst_object_get_name (GST_OBJECT (obj)); + if (val) { + if (G_VALUE_HOLDS_STRING (val)) + val_str = g_value_dup_string (val); + else if (G_VALUE_TYPE (val) == GST_TYPE_CAPS) + val_str = gst_caps_to_string (g_value_get_boxed (val)); + else if (G_VALUE_HOLDS_BOOLEAN (val) || G_VALUE_HOLDS_INT (val) + || G_VALUE_HOLDS_UINT (val) || G_VALUE_HOLDS_ENUM (val)) + val_str = gst_value_serialize (val); + else + val_str = g_strdup ("(unknown type)"); + } else { + val_str = g_strdup ("(no value)"); + } + + gst_println ("%s: %s = %s", obj_name, name, val_str); + g_free (obj_name); + g_free (val_str); + break; + } + default: + break; + } + + return TRUE; +} + +static void +loop_rate_control (GstElement * encoder) +{ + GParamSpec *pspec = + g_object_class_find_property (G_OBJECT_GET_CLASS (encoder), + "rate-control"); + GEnumClass *enum_class; + gint i, default_value; + + if (!pspec) + return; + + enum_class = G_PARAM_SPEC_ENUM (pspec)->enum_class; + + if (rc_ctrl == 0) { + default_value = G_PARAM_SPEC_ENUM (pspec)->default_value; + for (i = 0; i < enum_class->n_values; i++) { + if (enum_class->values[i].value == default_value) { + rc_ctrl = i; + break; + } + } + } + + i = ++rc_ctrl % enum_class->n_values; + g_object_set (encoder, "rate-control", enum_class->values[i].value, NULL); +} + +static GstPadProbeReturn +resolution_change_probe (GstPad * pad, GstPadProbeInfo * info, + gpointer user_data) +{ + GstPadProbeReturn ret = GST_PAD_PROBE_OK; + TestCallbackData *data = (TestCallbackData *) user_data; + + G_LOCK (input_lock); + + if (GST_IS_BUFFER (GST_PAD_PROBE_INFO_DATA (info))) { + GstBuffer *buffer = GST_PAD_PROBE_INFO_BUFFER (info); + GstPad *peer = gst_pad_get_peer (pad); + GstFlowReturn flow_ret = GST_FLOW_OK; + + ret = GST_PAD_PROBE_HANDLED; + + if (peer) { + flow_ret = gst_pad_chain (peer, buffer); + + if (flow_ret != GST_FLOW_OK) { + gst_pad_remove_probe
(pad, data->probe_id); + data->probe_id = 0; + } else { + if (data->prev_width != width || data->prev_height != height) { + GstCaps *caps = NULL; + gint next_width, next_height; + + next_width = width; + next_height = height; + + g_object_get (data->capsfilter, "caps", &caps, NULL); + caps = gst_caps_make_writable (caps); + gst_caps_set_simple (caps, + "width", G_TYPE_INT, next_width, "height", G_TYPE_INT, + next_height, NULL); + g_object_set (data->capsfilter, "caps", caps, NULL); + gst_caps_unref (caps); + + data->prev_width = next_width; + data->prev_height = next_height; + } + } + } + } + + G_UNLOCK (input_lock); + + return ret; +} + +static void +print_keyboard_help (void) +{ + /* *INDENT-OFF* */ + static struct + { + const gchar *key_desc; + const gchar *key_help; + } key_controls[] = { + { "q", "Quit"}, + { "right arrow", "Increase Width"}, + { "left arrow", "Decrease Width"}, + { "up arrow", "Increase Height"}, + { "down arrow", "Decrease Height"}, + { "r", "Loop rate control"}, + { ">", "Increase bitrate by 100 kbps"}, + { "<", "Decrease bitrate by 100 kbps"}, + { "{", "Increase quality"}, + { "}", "Decrease quality"}, + { "I", "Increase CPQ (only in CPQ)"}, + { "i", "Decrease CPQ (only in CPQ)"}, + { "P", "Increase max QP (only in CBR/VBR)"}, + { "p", "Decrease max QP (only in CBR/VBR)"}, + { "B", "Increase min QP (only in CBR/VBR)"}, + { "b", "Decrease min QP (only in CBR/VBR)"}, + { "f", "Force to set a key frame"}, + { "k", "show keyboard shortcuts" } + }; + /* *INDENT-ON* */ + + guint i, chars_to_pad, desc_len, max_desc_len = 0; + + gst_print ("\n\n%s\n\n", "Keyboard controls:"); + + for (i = 0; i < G_N_ELEMENTS (key_controls); ++i) { + desc_len = g_utf8_strlen (key_controls[i].key_desc, -1); + max_desc_len = MAX (max_desc_len, desc_len); + } + ++max_desc_len; + + for (i = 0; i < G_N_ELEMENTS (key_controls); ++i) { + chars_to_pad = max_desc_len - g_utf8_strlen (key_controls[i].key_desc, -1); + gst_print ("\t%s", key_controls[i].key_desc); + gst_print ("%-*s:
", chars_to_pad, ""); + gst_print ("%s\n", key_controls[i].key_help); + } + gst_print ("\n"); +} + +static void +keyboard_cb (gchar input, gboolean is_ascii, gpointer user_data) +{ + TestCallbackData *data = (TestCallbackData *) user_data; + + G_LOCK (input_lock); + + if (!is_ascii) { + switch (input) { + case KB_ARROW_UP: + height += 2; + break; + case KB_ARROW_DOWN: + height -= 2; + height = MAX (height, 16); + break; + case KB_ARROW_LEFT: + width -= 2; + width = MAX (width, 16); + break; + case KB_ARROW_RIGHT: + width += 2; + break; + default: + break; + } + } else { + switch (input) { + case 'k': + case 'K': + print_keyboard_help (); + break; + case 'q': + case 'Q': + gst_element_send_event (data->pipeline, gst_event_new_eos ()); + g_main_loop_quit (loop); + break; + case 'r': + case 'R': + loop_rate_control (data->encoder); + break; + case '>':{ + guint bitrate; + + g_object_get (data->encoder, "bitrate", &bitrate, NULL); + bitrate += 100; + if (bitrate <= 2048000) + g_object_set (data->encoder, "bitrate", bitrate, NULL); + break; + } + case '<':{ + gint bitrate; + + g_object_get (data->encoder, "bitrate", &bitrate, NULL); + bitrate -= 100; + if (bitrate < 0) + bitrate = 0; + g_object_set (data->encoder, "bitrate", bitrate, NULL); + break; + } + case 'I':{ + guint qp_i, qp_p, qp_b; + + g_object_get (data->encoder, "qp_i", &qp_i, "qp_p", &qp_p, "qp_b", + &qp_b, NULL); + qp_i += 1; + qp_p += 1; + qp_b += 1; + g_object_set (data->encoder, "qp_i", qp_i, "qp_p", qp_p, "qp_b", qp_b, + NULL); + break; + } + case 'i':{ + guint qp_i, qp_p, qp_b; + + g_object_get (data->encoder, "qp_i", &qp_i, "qp_p", &qp_p, "qp_b", + &qp_b, NULL); + + if (qp_i > 0) { + qp_i -= 1; + qp_p -= 1; + qp_b -= 1; + } + + g_object_set (data->encoder, "qp_i", qp_i, "qp_p", qp_p, "qp_b", qp_b, + NULL); + break; + } + case 'P':{ + guint mqp; + + g_object_get (data->encoder, "max-qp", &mqp, NULL); + mqp += 1; + g_object_set (data->encoder, "max-qp", mqp, NULL); + break; + } + case 'p':{ + guint mqp;
+ + g_object_get (data->encoder, "max-qp", &mqp, NULL); + if (mqp > 0) + mqp -= 1; + g_object_set (data->encoder, "max-qp", mqp, NULL); + break; + } + case 'B':{ + guint mqp; + + g_object_get (data->encoder, "min-qp", &mqp, NULL); + mqp += 1; + g_object_set (data->encoder, "min-qp", mqp, NULL); + break; + } + case 'b':{ + guint mqp; + + g_object_get (data->encoder, "min-qp", &mqp, NULL); + if (mqp > 0) + mqp -= 1; + g_object_set (data->encoder, "min-qp", mqp, NULL); + break; + } + case '{':{ + guint quality; + + g_object_get (data->encoder, "quality", &quality, NULL); + quality += 1; + g_object_set (data->encoder, "quality", quality, NULL); + break; + } + case '}':{ + guint quality; + + g_object_get (data->encoder, "quality", &quality, NULL); + if (quality > 0) + quality -= 1; + g_object_set (data->encoder, "quality", quality, NULL); + break; + } + case 'f':{ + GstEvent *event = gst_video_event_new_upstream_force_key_unit + (GST_CLOCK_TIME_NONE, TRUE, 0); + gst_println ("Sending force keyunit event"); + gst_element_send_event (data->encoder, event); + break; + } + default: + break; + } + } + + G_UNLOCK (input_lock); +} + +gint +main (gint argc, gchar ** argv) +{ + GstElement *pipeline; + GstElement *src, *capsfilter, *convert, *enc, *dec, *parser, *vpp, *sink; + GstElement *vul; + GstStateChangeReturn sret; + GError *error = NULL; + GOptionContext *option_ctx; + GstCaps *caps; + GstPad *pad; + TestCallbackData data = { 0, }; + gchar *codec = NULL; + gulong deep_notify_id = 0; + guint idx; + gboolean res; + + /* *INDENT-OFF* */ + const GOptionEntry options[] = { + {"codec", 'c', 0, G_OPTION_ARG_STRING, &codec, + "Codec to test: *h264 "}, + {"alive", 'a', 0, G_OPTION_ARG_NONE, &alive, + "Set test source as a live stream"}, + {NULL} + }; + const struct { + const char *codec; + const char *encoder; + const char *parser; + const char *decoder; + } elements_map[] = { + { "h264", "vulkanh264enc", "h264parse", "avdec_h264" }, + }; + /* *INDENT-ON* */ + +#define
MAKE_ELEMENT_AND_ADD(elem, name) G_STMT_START { \ + GstElement *_elem = gst_element_factory_make (name, NULL); \ + if (!_elem) { \ + gst_printerrln ("%s is not available", name); \ + exit (1); \ + } \ + gst_println ("Adding element %s", name); \ + elem = _elem; \ + gst_bin_add (GST_BIN (pipeline), elem); \ +} G_STMT_END + + option_ctx = + g_option_context_new ("Vulkan video encoder dynamic reconfigure example"); + g_option_context_add_main_entries (option_ctx, options, NULL); + g_option_context_add_group (option_ctx, gst_init_get_option_group ()); + g_option_context_set_help_enabled (option_ctx, TRUE); + if (!g_option_context_parse (option_ctx, &argc, &argv, &error)) { + gst_printerrln ("option parsing failed: %s\n", error->message); + g_clear_error (&error); + exit (1); + } + + g_option_context_free (option_ctx); + gst_init (NULL, NULL); + + if (!codec) + codec = g_strdup ("h264"); + + for (idx = 0; idx < G_N_ELEMENTS (elements_map); idx++) { + if (g_strcmp0 (elements_mapidx.codec, codec) == 0) + break; + } + + g_free (codec); + + if (idx == G_N_ELEMENTS (elements_map)) { + gst_printerrln ("Unsupported codec: %s", codec); + exit (1); + } + + pipeline = gst_pipeline_new (NULL); + + MAKE_ELEMENT_AND_ADD (src, "videotestsrc"); + g_object_set (src, "pattern", 1, "is-live", alive, NULL); + + MAKE_ELEMENT_AND_ADD (capsfilter, "capsfilter"); + MAKE_ELEMENT_AND_ADD (convert, "videoconvert"); + MAKE_ELEMENT_AND_ADD (vul, "vulkanupload"); + MAKE_ELEMENT_AND_ADD (enc, elements_mapidx.encoder); + MAKE_ELEMENT_AND_ADD (dec, elements_mapidx.decoder); + MAKE_ELEMENT_AND_ADD (vpp, "videoconvert"); + MAKE_ELEMENT_AND_ADD (sink, "autovideosink"); + + if (elements_mapidx.parser) { + MAKE_ELEMENT_AND_ADD (parser, elements_mapidx.parser); + res = gst_element_link_many (src, capsfilter, convert, vul, enc, parser, + dec, vpp, sink, NULL); + } else { + res = gst_element_link_many (src, capsfilter, convert, vul, enc, dec, vpp, + sink, NULL); + } + + if (!res) { + gst_printerrln ("Failed 
to link element"); + exit (1); + } + + caps = gst_caps_new_simple ("video/x-raw", "width", G_TYPE_INT, + width, "height", G_TYPE_INT, height, + "format", G_TYPE_STRING, "I420", NULL); + g_object_set (capsfilter, "caps", caps, NULL); + gst_caps_unref (caps); + + g_object_set (convert, "chroma-mode", 3, NULL); + g_object_set (convert, "dither", 0, NULL); + + data.pipeline = pipeline; + data.capsfilter = capsfilter; + data.encoder = enc; + + pad = gst_element_get_static_pad (capsfilter, "src"); + data.probe_id = gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER, + resolution_change_probe, &data, NULL); + gst_object_unref (pad); + data.prev_width = width; + data.prev_height = height; + + loop = g_main_loop_new (NULL, FALSE); + + deep_notify_id = + gst_element_add_property_deep_notify_watch (pipeline, NULL, TRUE); + + gst_bus_add_watch (GST_ELEMENT_BUS (pipeline), bus_msg, &data); + + /* run the pipeline */ + sret = gst_element_set_state (pipeline, GST_STATE_PLAYING); + if (sret == GST_STATE_CHANGE_FAILURE) { + gst_printerrln ("Pipeline doesn't want to playing"); + } else { + set_key_handler ((KeyInputCallback) keyboard_cb, &data); + g_main_loop_run (loop); + unset_key_handler (); + } + + if (deep_notify_id != 0) + g_signal_handler_disconnect (pipeline, deep_notify_id); + + gst_element_set_state (pipeline, GST_STATE_NULL); + gst_bus_remove_watch (GST_ELEMENT_BUS (pipeline)); + + gst_object_unref (pipeline); + g_main_loop_unref (loop); + + return 0; +}
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/examples/webrtc/webrtc.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/examples/webrtc/webrtc.c
Changed
@@ -20,8 +20,8 @@ { gchar *dump_name = g_strconcat ("state_changed-", - gst_element_state_get_name (old), "_", - gst_element_state_get_name (new), NULL); + gst_state_get_name (old), "_", + gst_state_get_name (new), NULL); GST_DEBUG_BIN_TO_DOT_FILE_WITH_TS (GST_BIN (msg->src), GST_DEBUG_GRAPH_SHOW_ALL, dump_name); g_free (dump_name);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/examples/webrtc/webrtcbidirectional.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/examples/webrtc/webrtcbidirectional.c
Changed
@@ -20,8 +20,8 @@ { gchar *dump_name = g_strconcat ("state_changed-", - gst_element_state_get_name (old), "_", - gst_element_state_get_name (new), NULL); + gst_state_get_name (old), "_", + gst_state_get_name (new), NULL); GST_DEBUG_BIN_TO_DOT_FILE_WITH_TS (GST_BIN (msg->src), GST_DEBUG_GRAPH_SHOW_ALL, dump_name); g_free (dump_name);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/examples/webrtc/webrtcrenego.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/examples/webrtc/webrtcrenego.c
Changed
@@ -55,8 +55,8 @@ { gchar *dump_name = g_strconcat ("state_changed-", - gst_element_state_get_name (old), "_", - gst_element_state_get_name (new), NULL); + gst_state_get_name (old), "_", + gst_state_get_name (new), NULL); GST_DEBUG_BIN_TO_DOT_FILE_WITH_TS (GST_BIN (msg->src), GST_DEBUG_GRAPH_SHOW_ALL, dump_name); g_free (dump_name);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/examples/webrtc/webrtcswap.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/examples/webrtc/webrtcswap.c
Changed
@@ -20,8 +20,8 @@ { gchar *dump_name = g_strconcat ("state_changed-", - gst_element_state_get_name (old), "_", - gst_element_state_get_name (new), NULL); + gst_state_get_name (old), "_", + gst_state_get_name (new), NULL); GST_DEBUG_BIN_TO_DOT_FILE_WITH_TS (GST_BIN (msg->src), GST_DEBUG_GRAPH_SHOW_ALL, dump_name); g_free (dump_name);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/examples/webrtc/webrtctransceiver.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/examples/webrtc/webrtctransceiver.c
Changed
@@ -20,8 +20,8 @@ { gchar *dump_name = g_strconcat ("state_changed-", - gst_element_state_get_name (old), "_", - gst_element_state_get_name (new), NULL); + gst_state_get_name (old), "_", + gst_state_get_name (new), NULL); GST_DEBUG_BIN_TO_DOT_FILE_WITH_TS (GST_BIN (msg->src), GST_DEBUG_GRAPH_SHOW_ALL, dump_name); g_free (dump_name);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tests/examples/wpe/wpe.c -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tests/examples/wpe/wpe.c
Changed
@@ -34,8 +34,8 @@ { gchar *dump_name = g_strconcat ("state_changed-", - gst_element_state_get_name (old), "_", - gst_element_state_get_name (new), NULL); + gst_state_get_name (old), "_", + gst_state_get_name (new), NULL); GST_DEBUG_BIN_TO_DOT_FILE_WITH_TS (GST_BIN (msg->src), GST_DEBUG_GRAPH_SHOW_ALL, dump_name); g_free (dump_name);
View file
_service:download_files:gst-plugins-bad-1.26.10.tar.xz/tools/element-templates/sinkpad -> _service:download_files:gst-plugins-bad-1.28.0.tar.xz/tools/element-templates/sinkpad
Changed
@@ -76,13 +76,15 @@ gst_replace_sink_getcaps (GstPad *pad) { GstReplace *replace; - GstCaps *caps; + GstCaps *caps, *tcaps; replace = GST_REPLACE (gst_pad_get_parent (pad)); GST_DEBUG_OBJECT(replace, "getcaps"); - caps = gst_caps_copy (gst_pad_get_pad_template_caps (pad)); + tcaps = gst_pad_get_pad_template_caps (pad); + caps = gst_caps_copy (tcaps); + gst_caps_unref (tcaps); gst_object_unref (replace); return caps;