Prosecution Insights
Last updated: April 19, 2026
Application No. 19/061,900

ICC PROFILE METADATA FOR VIDEO STREAMS

Status: Non-Final Office Action (§103)
Filed: Feb 24, 2025
Examiner: NIRJHAR, NASIM NAZRUL
Art Unit: 2896
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: Tencent America LLC
OA Round: 1 (Non-Final)
Grant probability: 74% (Favorable)
Expected OA rounds: 1-2
Expected time to grant: 2y 6m
Grant probability with interview: 93%

Examiner Intelligence

Career allow rate: 74%, above average (379 granted / 512 resolved; +6.0% vs Tech Center average)
Interview lift: +18.7% allowance on resolved cases with an interview (strong)
Typical timeline: 2y 6m average prosecution; 37 applications currently pending
Career history: 549 total applications across all art units

Statute-Specific Performance

§101: 3.8% (-36.2% vs TC avg)
§102: 3.4% (-36.6% vs TC avg)
§103: 75.4% (+35.4% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 512 resolved cases.
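The headline figures above can be sanity-checked with simple arithmetic. A minimal sketch, using only numbers shown on this page; the rounding convention is an assumption:

```python
# Career allow rate from the granted/resolved counts shown above.
granted, resolved = 379, 512
allow_rate = 100 * granted / resolved
print(round(allow_rate, 1))  # -> 74.0

# Interview lift: difference between the 93% with-interview figure
# and the career allow rate, in percentage points.
with_interview = 93
print(round(with_interview - allow_rate, 1))  # -> 19.0
```

This reproduces the 74% career allow rate and the roughly +19% interview lift quoted above.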

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This communication is responsive to the correspondence filed on 2/24/25. Claims 1-20 are presented for examination.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-4, 8, 12-13, 16 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Sullivan (U.S. Pub. No. 20160366444 A1), in view of Martinsen (U.S. Pub. No. 20180069989 A1).

Regarding claims 1, 12 and 19: Examiner's Note: video encoding and decoding are performed using the same algorithm run in opposite directions.

1. Sullivan teaches a method of video decoding performed at a computing system (Sullivan Fig. 3 [0052] In the example receiver system, the channel decoder (355) is configured to process channel-coded data. For example, the channel decoder (355) de-packetizes and/or demultiplexes data that has been organized for transmission or storage as a media stream, in which case the channel decoder (355) can parse syntax elements added as part of the syntax of the media transmission stream.
Or, the channel decoder (355) separates coded video data that has been organized for storage as a file, in which case the channel decoder (355) can parse syntax elements added as part of the syntax of the media storage file) having memory and one or more processors, the method comprising: (Sullivan [0022] FIG. 1) receiving a video bitstream comprising a set of pictures, (Sullivan [0041] After one or more of the source pictures have been stored in picture buffers, a picture selector selects an individual source picture from the source picture storage area to encode as the current picture. The order in which pictures are selected by the picture selector for input to the video encoder (340) may differ from the order in which the pictures are produced by the video source (310), e.g., the encoding of some pictures may be delayed in order, so as to allow some later pictures to be encoded first and to thus facilitate temporally backward prediction. [0062] A buffer (not shown) is configured to stored output of the post-processor (380). An output sequencer identifies when the next picture to be produced in output order is available in a decoded picture storage area for the output video (381). When the next picture to be produced in output order is available in the decoded picture storage area, it is read by the output sequencer and output to the output destination (390) (e.g., display). In general, the order in which pictures are output from the decoded picture storage area by the output sequencer may differ from the order in which the pictures are decoded by the decoder (360).) wherein the video bitstream corresponds to a source device; (Sullivan [0039] With reference to FIG. 3, the example transmitter system is configured to receive a sequence of source video pictures as input video (311) from a video source (310) and produce encoded data (341) in a channel-coded elementary video bitstream (343) as output.) 
identifying color profile metadata for the source device based on a supplementary enhancement information (SEI) message for the video bitstream; (Sullivan [0080] parameters such as color primary chromaticity values, transfer characteristics, and matrix coefficients are signaled as part of media metadata in SEI message or VUI messages defined in a standard.) reconstructing the set of pictures using information from the video bitstream; and (Sullivan [0051] With reference to FIG. 3, the example receiver system is configured to receive encoded data (341) in a channel-coded elementary video bitstream (343) from the channel (350) and produce pictures of reconstructed video (361), which may be post-processed, as output for an output destination (390). The example receiver system includes a channel decoder (355), a buffer (358), a video decoder (360), a metadata parser (370), a post-processor (380), a condition detector (382), and an output destination (390)) causing the set of pictures to be presented at an output device (Sullivan [0061] The post-processor (380) and output destination (390) can be in the same device as other components of the example receiver system. Or, the post-processor (380) and output destination (390) can be in different devices. For example, the post-processor (380) and output destination (390) are part of a display device, and the remaining components of the example receiver system are part of a media player or set-top box. [0062] A buffer (not shown) is configured to stored output of the post-processor (380). An output sequencer identifies when the next picture to be produced in output order is available in a decoded picture storage area for the output video (381). When the next picture to be produced in output order is available in the decoded picture storage area, it is read by the output sequencer and output to the output destination (390) (e.g., display)) using color characteristics from the color profile metadata. 
(Sullivan [0057] Examples of metadata (331) that describes nominal lighting condition(s) are presented below. To the post-processor (380), the metadata parser (370) provides information indicating the characteristics of a reference viewing environment, including information (321) indicating nominal lighting condition(s) of the reference viewing environment. The information (321) indicating nominal lighting condition(s) can include a nominal level of ambient light and/or a nominal color characteristic (e.g., color temperature, chromaticity value or coordinates) [0059] The post-processor (380) is configured to perform post-processing of decoded pictures of the reconstructed video (361) after decoding, producing output video (381). The post-processing can include resampling processing (e.g., to restore the spatial resolution of chroma components or to adapt the video content for use on a display with a different spatial resolution) after decoding as well as color space conversion from primary and secondary color components. For example, after decoding, chroma sample values may be re-sampled to a higher chroma sampling rate (e.g., from a YUV 4:2:0 format or YUV 4:2:2 format), and video may be converted from a color space such as YUV to another color space such as RGB, GBR, or BGR. [0060] The post-processor (380) is also configured to adjust, in order to compensate for differences between the actual lighting condition(s) and the nominal lighting condition(s), a characteristic of at least some sample values of the video. In doing so, the post-processor (380) can in effect change the interpretation of the sample values of the reconstructed video (361) or output video (381)) The concept of "color profile metadata" is a broad term. Sullivan's parameters, such as color primary chromaticity values signaled as part of media metadata ([0080]), can broadly be considered "color profile metadata".
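For context on the kind of colour signalling Sullivan [0080] describes (colour primaries, transfer characteristics, and matrix coefficients carried in SEI/VUI), here is a minimal sketch of interpreting such code points. The numeric values follow the common H.273/CICP convention and are illustrative assumptions, not values taken from the cited references:

```python
# Hedged sketch: mapping VUI/SEI colour description code points of the kind
# Sullivan [0080] describes to human-readable names. The code-point values
# below follow the common H.273/CICP convention (an assumption for
# illustration, not part of the record).
COLOUR_PRIMARIES = {1: "BT.709", 9: "BT.2020", 12: "Display P3"}
TRANSFER_CHARACTERISTICS = {1: "BT.709", 16: "PQ (ST 2084)", 18: "HLG"}
MATRIX_COEFFICIENTS = {1: "BT.709", 9: "BT.2020 NCL"}

def describe_colour_signalling(primaries: int, transfer: int, matrix: int) -> str:
    """Render three colour-description code points as a readable summary."""
    return " / ".join([
        COLOUR_PRIMARIES.get(primaries, f"primaries({primaries})"),
        TRANSFER_CHARACTERISTICS.get(transfer, f"transfer({transfer})"),
        MATRIX_COEFFICIENTS.get(matrix, f"matrix({matrix})"),
    ])

print(describe_colour_signalling(9, 16, 9))  # -> BT.2020 / PQ (ST 2084) / BT.2020 NCL
```

For example, code points (9, 16, 9) would describe a BT.2020 wide-gamut stream with a PQ transfer function.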
However, Sullivan does not explicitly teach "color profile metadata" as per the details of the specification and dependent claim 3. However, Martinsen teaches color profile metadata (Martinsen [0012] An output metadata service maintains a data store with output metadata for various output device and substrate combinations. Generally, the output metadata for an output device and substrate combination describes various different aspects of how the output device is to output color on the substrate. This can include, for example, an International Color Consortium (ICC) color profile and substrate type selection. [0026] The data structure 200 also includes output metadata 208. The output metadata 208 describes, for the output device identified by the output device identifier 204 when outputting content to the substrate identified by the substrate identifier 206, various different aspects of how the output device is to output color on the substrate. In general, the output metadata can be used to determine how the output device is to output color on the substrate so that the result is visually appealing to a user (e.g., has a faithful or accurate representation of the colors). [0027] In the example of FIG. 2, the output metadata 208 is illustrated as including a substrate type 212 and a color profile 214. It should be noted, however, that these are examples of metadata, and that all of this metadata need not be maintained and/or that additional metadata can be included in the data structure 200 (e.g., a type of ink (such as whether the ink is photo ink or matte ink), a substrate size, and so forth). It should further be noted that all of this metadata need not be provided in response to a request for the output metadata.
For example, the output metadata 208 may be more metadata that the requester desires, and a subset of the output metadata 208 is provided to the requester)

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Sullivan, further incorporating Martinsen, in video/camera technology. One would be motivated to do so to incorporate color profile metadata. This functionality will improve efficiency with predictable results.

Regarding claims 3 and 13: 3. Sullivan teaches the method of claim 1. Sullivan does not explicitly teach wherein the color profile metadata comprises an International Color Consortium (ICC) profile. However, Martinsen teaches wherein the color profile metadata comprises an International Color Consortium (ICC) profile. (Martinsen [0012] An output metadata service maintains a data store with output metadata for various output device and substrate combinations. Generally, the output metadata for an output device and substrate combination describes various different aspects of how the output device is to output color on the substrate. This can include, for example, an International Color Consortium (ICC) color profile and substrate type selection. [0026] The data structure 200 also includes output metadata 208. The output metadata 208 describes, for the output device identified by the output device identifier 204 when outputting content to the substrate identified by the substrate identifier 206, various different aspects of how the output device is to output color on the substrate. In general, the output metadata can be used to determine how the output device is to output color on the substrate so that the result is visually appealing to a user (e.g., has a faithful or accurate representation of the colors). [0027] In the example of FIG. 2, the output metadata 208 is illustrated as including a substrate type 212 and a color profile 214.
It should be noted, however, that these are examples of metadata, and that all of this metadata need not be maintained and/or that additional metadata can be included in the data structure 200 (e.g., a type of ink (such as whether the ink is photo ink or matte ink), a substrate size, and so forth). It should further be noted that all of this metadata need not be provided in response to a request for the output metadata. For example, the output metadata 208 may be more metadata that the requester desires, and a subset of the output metadata 208 is provided to the requester)

Regarding claim 4: 4. Sullivan teaches the method of claim 3. Sullivan does not explicitly teach wherein the color profile metadata indicates an ICC major version number and an ICC minor version number. However, Martinsen teaches wherein the color profile metadata indicates an ICC major version number and an ICC minor version number. (Martinsen [0029] The color profile 214 indicates how the color output by the output device identified by the output device identifier 204 when outputting content to the substrate identified by the substrate identifier 206 differs from a standard or reference set of colors. This allows adjustments to be made by the output device when output the content on the substrate so that the colors of the content appear as intended or desired by a user of the output device. In one or more embodiments, the color profile 214 is an International Color Consortium (ICC) color profile in accordance with the ICC v2 specification (e.g., as described in the International Color Consortium Specification ICC.1:2001-04 (2001)). It should be noted, however, that the color profile 214 can additionally or alternatively be a color profile in accordance with different specifications (e.g., different ICC specifications). Please note the ICC specification teaches that the color profile metadata indicates an ICC major version number and an ICC minor version number.
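The major/minor version layout at issue here can be sketched in code. A minimal illustration; the 4-byte version field at byte offset 8 is an assumption drawn from the ICC v2 profile header layout, not a detail from the record:

```python
# Hedged sketch: reading the profile version from a raw ICC profile header.
# Per the ICC specification passage cited in this Office Action, the first
# 8 bits of the version field hold the major version and the next 8 bits
# the minor version. The byte offset (8) of the 4-byte version field is an
# assumption based on the ICC v2 header layout.

def icc_profile_version(header: bytes) -> tuple[int, int]:
    """Return (major, minor) from a raw ICC profile header."""
    version_field = header[8:12]  # assumed 4-byte profile version field
    major = version_field[0]      # first 8 bits: major version number
    minor = version_field[1]      # next 8 bits: minor version number
    return major, minor

# An ICC v2.0 profile encodes 02h followed by 00h, so:
sample_header = bytes(8) + bytes([0x02, 0x00, 0x00, 0x00]) + bytes(116)
print(icc_profile_version(sample_header))  # -> (2, 0)
```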
pp 13 "6.1.3 Profile Version Profile version number where the first 8 bits are the major version number and the next 8 bits are for the minor version number. The major and minor version numbers are set by the International Color Consortium and will match up with the profile format revisions. The current version number is 02h with a minor version number of 00h ... Major version change can only happen if there is an incompatible change. An example of a major version change may be the addition of new required tags. Minor version change can happen with compatible changes. An example of a minor version number change may be the addition of new optional tags.". The ICC specification is incorporated as part of Martinsen per [0029].)

Regarding claims 8 and 16: 8. Sullivan teaches the method of claim 1, wherein the video bitstream corresponds to a source video sequence comprising image data (Sullivan [0039] With reference to FIG. 3, the example transmitter system is configured to receive a sequence of source video pictures as input video (311) from a video source (310) and produce encoded data (341) in a channel-coded elementary video bitstream (343) as output. In FIG. 3, the example transmitter system includes a video source (310), a pre-processor (320), a metadata generator (330), a video encoder (340), a buffer (342), and a channel coder (345)). Sullivan does not explicitly teach the color profile metadata. However, Martinsen teaches the color profile metadata. (Martinsen [0012] An output metadata service maintains a data store with output metadata for various output device and substrate combinations. Generally, the output metadata for an output device and substrate combination describes various different aspects of how the output device is to output color on the substrate. This can include, for example, an International Color Consortium (ICC) color profile and substrate type selection. [0026] The data structure 200 also includes output metadata 208.
The output metadata 208 describes, for the output device identified by the output device identifier 204 when outputting content to the substrate identified by the substrate identifier 206, various different aspects of how the output device is to output color on the substrate. In general, the output metadata can be used to determine how the output device is to output color on the substrate so that the result is visually appealing to a user (e.g., has a faithful or accurate representation of the colors). [0027] In the example of FIG. 2, the output metadata 208 is illustrated as including a substrate type 212 and a color profile 214. It should be noted, however, that these are examples of metadata, and that all of this metadata need not be maintained and/or that additional metadata can be included in the data structure 200 (e.g., a type of ink (such as whether the ink is photo ink or matte ink), a substrate size, and so forth). It should further be noted that all of this metadata need not be provided in response to a request for the output metadata. For example, the output metadata 208 may be more metadata that the requester desires, and a subset of the output metadata 208 is provided to the requester)

Claims 2 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Sullivan (U.S. Pub. No. 20160366444 A1), in view of Martinsen (U.S. Pub. No. 20180069989 A1), further in view of Andrivon (U.S. Pub. No. 20210176471 A1).

Regarding claim 2: 2. Sullivan teaches the method of claim 1. Sullivan does not explicitly teach further comprising parsing an image format metadata (IFM) type identifier that indicates a type of metadata included in the SEI message, wherein the color profile metadata is identified when the IFM type identifier indicates that the SEI message contains the color profile metadata.
However, Martinsen teaches wherein the color profile metadata is identified (Martinsen [0012] An output metadata service maintains a data store with output metadata for various output device and substrate combinations. Generally, the output metadata for an output device and substrate combination describes various different aspects of how the output device is to output color on the substrate. This can include, for example, an International Color Consortium (ICC) color profile and substrate type selection. [0026] The data structure 200 also includes output metadata 208. The output metadata 208 describes, for the output device identified by the output device identifier 204 when outputting content to the substrate identified by the substrate identifier 206, various different aspects of how the output device is to output color on the substrate. In general, the output metadata can be used to determine how the output device is to output color on the substrate so that the result is visually appealing to a user (e.g., has a faithful or accurate representation of the colors). [0027] In the example of FIG. 2, the output metadata 208 is illustrated as including a substrate type 212 and a color profile 214. It should be noted, however, that these are examples of metadata, and that all of this metadata need not be maintained and/or that additional metadata can be included in the data structure 200 (e.g., a type of ink (such as whether the ink is photo ink or matte ink), a substrate size, and so forth). It should further be noted that all of this metadata need not be provided in response to a request for the output metadata. For example, the output metadata 208 may be more metadata that the requester desires, and a subset of the output metadata 208 is provided to the requester)

The motivation for combining Sullivan and Martinsen as set forth in claim 1 is equally applicable to claim 2.
However, Andrivon teaches further comprising parsing an image format metadata (IFM) type identifier that indicates a type of metadata included in the SEI message, (Andrivon [0052] Consequently, when metadata, transported with a specific formatting, are carried through the uncompressed interface with an associated and decoded image/video stream, the apparatus A3 cannot identify the formatting of those metadata. For example, the apparatus A3 can not determine if the format of metadata is carried on a AVC SEI message or HEVC SEI message. This can create interoperability issues as the apparatus A3 may assume a particular format to be parsed while the metadata are not formatted according to said particular format. Then, the parsed metadata may be totally corrupted and not usable or if used may beget a very altered image/video reconstructed from the received decoded image/video and those altered metadata. [0055] According to at least one embodiment, there is provided a device included in the apparatus A3 that is configured to compare a first set of bits of a payload of received formatted metadata with at least one given second set of bits identifying a particular formatting of said received formatted metadata, and to reconstruct an image/video from image data associated with said formatted metadata and parameters obtained by parsing said received formatted metadata according to a particular formatting identified from the result of said comparison. [0056] Such a device then determines/identifies the formatting of the metadata carried on uncompressed interface to be parsed by comparing sets of bits) when the IFM type identifier indicates that the SEI message (Andrivon [0052] Consequently, when metadata, transported with a specific formatting, are carried through the uncompressed interface with an associated and decoded image/video stream, the apparatus A3 cannot identify the formatting of those metadata.
For example, the apparatus A3 can not determine if the format of metadata is carried on a AVC SEI message or HEVC SEI message. This can create interoperability issues as the apparatus A3 may assume a particular format to be parsed while the metadata are not formatted according to said particular format. Then, the parsed metadata may be totally corrupted and not usable or if used may beget a very altered image/video reconstructed from the received decoded image/video and those altered metadata. [0055] According to at least one embodiment, there is provided a device included in the apparatus A3 that is configured to compare a first set of bits of a payload of received formatted metadata with at least one given second set of bits identifying a particular formatting of said received formatted metadata, and to reconstruct an image/video from image data associated with said formatted metadata and parameters obtained by parsing said received formatted metadata according to a particular formatting identified from the result of said comparison. [0056] Such a device then determines/identifies the formatting of the metadata carried on uncompressed interface to be parsed by comparing sets of bits) contains the color profile metadata. (Andrivon [0012] Static metadata are valid for the whole video content (scene, movie, clip . . . ) and may depend on the image content per se or the representation format of the image content. The static metadata may define, for example, image format, color space, or color gamut. For instance, SMPTE ST 2086:2014, “Mastering Display Color Volume Metadata Supporting High Luminance and Wide Color Gamut Images” defines static metadata that describes the mastering display used to grade the material in a production environment. 
The Mastering Display Colour Volume (MDCV) SEI (Supplemental Enhanced Information) message corresponds to ST 2086 for both H.264/AVC ("Advanced video coding for generic audiovisual Services", SERIES H: AUDIOVISUAL AND MULTIMEDIA SYSTEMS, Recommendation ITU-T H.264, Telecommunication Standardization Sector of ITU, April 2017) and HEVC video codecs. [0013] Dynamic metadata is content-dependent information, so that metadata could change with the image/video content, for example for each image or for each group of images. As an example, SMPTE ST 2094:2016, "Dynamic Metadata for Color Volume Transform" defines dynamic metadata typically generated in a production environment. SMPTE ST 2094-30 can be distributed in HEVC and AVC coded video streams using, for example, the Colour Remapping Information (CRI) SEI message)

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Sullivan, further incorporating Martinsen and Andrivon, in video/camera technology. One would be motivated to do so to incorporate parsing an image format metadata (IFM) type identifier that indicates a type of metadata included in the SEI message. This functionality will improve quality with predictable results.

Regarding claim 20: 20. Sullivan teaches the non-transitory computer-readable storage medium of claim 19, or indicates a storage location of the color profile metadata. (Part of an OR condition, so rejection is not required.) Sullivan does not explicitly teach wherein the SEI message contains the color profile metadata. However, Andrivon teaches wherein the SEI message contains the color profile metadata. (Andrivon [0012] Static metadata are valid for the whole video content (scene, movie, clip . . . ) and may depend on the image content per se or the representation format of the image content. The static metadata may define, for example, image format, color space, or color gamut.
For instance, SMPTE ST 2086:2014, "Mastering Display Color Volume Metadata Supporting High Luminance and Wide Color Gamut Images" defines static metadata that describes the mastering display used to grade the material in a production environment. The Mastering Display Colour Volume (MDCV) SEI (Supplemental Enhanced Information) message corresponds to ST 2086 for both H.264/AVC ("Advanced video coding for generic audiovisual Services", SERIES H: AUDIOVISUAL AND MULTIMEDIA SYSTEMS, Recommendation ITU-T H.264, Telecommunication Standardization Sector of ITU, April 2017) and HEVC video codecs. [0013] Dynamic metadata is content-dependent information, so that metadata could change with the image/video content, for example for each image or for each group of images. As an example, SMPTE ST 2094:2016, "Dynamic Metadata for Color Volume Transform" [color profile metadata] defines dynamic metadata typically generated in a production environment. SMPTE ST 2094-30 can be distributed in HEVC and AVC coded video streams using, for example, the Colour Remapping Information (CRI) SEI message)

Claims 5-6 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Sullivan (U.S. Pub. No. 20160366444 A1), in view of Martinsen (U.S. Pub. No. 20180069989 A1), further in view of Choi (U.S. Pub. No. 20220337857 A1).

Regarding claims 5 and 14: 5. Sullivan teaches the method of claim 1. Sullivan does not explicitly teach wherein the color profile metadata is obtained from a payload of the SEI message. However, Martinsen teaches the color profile metadata. (Martinsen [0012] An output metadata service maintains a data store with output metadata for various output device and substrate combinations. Generally, the output metadata for an output device and substrate combination describes various different aspects of how the output device is to output color on the substrate.
This can include, for example, an International Color Consortium (ICC) color profile and substrate type selection. [0026] The data structure 200 also includes output metadata 208. The output metadata 208 describes, for the output device identified by the output device identifier 204 when outputting content to the substrate identified by the substrate identifier 206, various different aspects of how the output device is to output color on the substrate. In general, the output metadata can be used to determine how the output device is to output color on the substrate so that the result is visually appealing to a user (e.g., has a faithful or accurate representation of the colors). [0027] In the example of FIG. 2, the output metadata 208 is illustrated as including a substrate type 212 and a color profile 214. It should be noted, however, that these are examples of metadata, and that all of this metadata need not be maintained and/or that additional metadata can be included in the data structure 200 (e.g., a type of ink (such as whether the ink is photo ink or matte ink), a substrate size, and so forth). It should further be noted that all of this metadata need not be provided in response to a request for the output metadata. For example, the output metadata 208 may be more metadata that the requester desires, and a subset of the output metadata 208 is provided to the requester)

The motivation for combining Sullivan and Martinsen as set forth in claim 1 is equally applicable to claim 5. However, Choi teaches that metadata is obtained from a payload of the SEI message. (Choi [0032] The design of the proposed syntax structure is aimed to be specified in SEI as a codec-agnostic approach, but potentially similar syntax elements can be specified in parameter sets targeting VVC/HEVC/ AV1&2/AVS-extensions, metadata track of the file format or any other payload format. [0117] Examples of SEI messages for carriage of NN information, according to embodiments, will now be described.
Although the examples assume the syntax elements and parameters are signaled in one or more SEI messages, any parameter set (e.g. SPS, PPS, APS), any metadata track of a file format, or any payload type can carry the same or slightly modified syntax elements and parameters.)

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Sullivan, further incorporating Martinsen and Choi, in video/camera technology. One would be motivated to do so to obtain metadata from a payload of the SEI message. This functionality will improve user experience with predictable results.

Regarding claims 6 and 15: 6. Sullivan teaches the method of claim 1. Sullivan does not explicitly teach wherein the SEI message includes a uniform resource identifier (URI) string for the color profile metadata. However, Martinsen teaches the color profile metadata. (Martinsen [0012] An output metadata service maintains a data store with output metadata for various output device and substrate combinations. Generally, the output metadata for an output device and substrate combination describes various different aspects of how the output device is to output color on the substrate. This can include, for example, an International Color Consortium (ICC) color profile and substrate type selection. [0026] The data structure 200 also includes output metadata 208. The output metadata 208 describes, for the output device identified by the output device identifier 204 when outputting content to the substrate identified by the substrate identifier 206, various different aspects of how the output device is to output color on the substrate. In general, the output metadata can be used to determine how the output device is to output color on the substrate so that the result is visually appealing to a user (e.g., has a faithful or accurate representation of the colors). [0027] In the example of FIG. 2, the output metadata 208 is illustrated as including a substrate type 212 and a color profile 214. It should be noted, however, that these are examples of metadata, and that all of this metadata need not be maintained and/or that additional metadata can be included in the data structure 200 (e.g., a type of ink (such as whether the ink is photo ink or matte ink), a substrate size, and so forth). It should further be noted that all of this metadata need not be provided in response to a request for the output metadata. For example, the output metadata 208 may be more metadata that the requester desires, and a subset of the output metadata 208 is provided to the requester)

However, Choi teaches wherein the SEI message includes a uniform resource identifier (URI) string. (Choi [0098] Ideally, any neural network model may be exported to NNEF and other formats, and network accelerator and libraries may consume data in the formats without compatibility issue with any network framework. As practical method, embodiments may directly reference outside files or bitstreams with URI information. However, it is also desired to have a lightweight syntax design to represent video coding specific networks for VVC or HEVC-extension, with novel neural-network based video coding tools, because a generic representation of a network model may be bulky to be used for the compressed video format. Since most network models used for video compression are based on a convolutional neural network (CNN), having a compact representation of the CNN in the SEI message is expected to be helpful in reducing the total bitrate as well as enabling easy access to the network model data according to exemplary embodiments)

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Sullivan (U.S. Pub. No. 20160366444 A1), in view of Martinsen (U.S. Pub. No. 20180069989 A1), further in view of Ramasubramonian (U.S. Pub. No. 20200204809 A1).

Regarding claim 7: 7.
Sullivan teaches the method of claim 1, but does not explicitly teach further comprising applying a filtering process to the set of pictures using the color profile metadata. However, Martinsen teaches the color profile metadata (Martinsen [0012], [0026]-[0027], reproduced in the rejection of claims 6 and 15 above). The motivation for combining Sullivan and Martinsen as set forth for claim 1 is equally applicable to claim 7.

However, Ramasubramonian teaches further comprising applying a filtering process to the set of pictures using the metadata. (Ramasubramonian [0124] A nested message defined in a regional nesting message of a picture can include one or more sets of data (e.g., metadata or other set of data) that can be applied to one or more regions of the picture. In some examples, a set of data in a nested message defines a function that is to be performed on the one or more regions by a decoder device, a player device, or other device. For example, a set of data can define any suitable function, such as the functions performed using the film grain characteristics SEI message, the tone mapping information SEI message, the post filter hint SEI message, the chroma resampling filter hint SEI message, the color remapping information SEI message, the knee function information SEI message, or any other suitable data used to perform a function on a region of a video picture.) It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Sullivan, further incorporating Martinsen and Ramasubramonian, in video/camera technology. One would be motivated to do so in order to apply a filtering process to the set of pictures using the metadata. This functionality would improve reliability with predictable results.

Claims 9 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Sullivan (U.S. Pub. No. 20160366444 A1), in view of Martinsen (U.S. Pub. No. 20180069989 A1), further in view of Hannuksela (U.S. Pub. No. 20220239949 A1).

Regarding claims 9 and 17:
Sullivan teaches the method of claim 1, but does not explicitly teach further comprising determining that the SEI message is present based on a syntax element in a network abstraction layer (NAL). However, Hannuksela teaches determining that the SEI message is present based on a syntax element in a network abstraction layer (NAL). (Hannuksela [0647] According to an embodiment, it is indicated if a NAL unit is included in CPB for HRD management. For example, a decoding control NAL unit and/or an SEI NAL unit syntax may include a syntax element that specifies whether the NAL unit is included in the CPB. In an embodiment, a player or alike creates a decoding control NAL unit and/or an SEI NAL unit into the bitstream, and sets the syntax element to indicate that the NAL unit is not included in the CPB.) The motivation for combining Sullivan and Martinsen as set forth for claim 1 is equally applicable to claim 9. It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Sullivan, further incorporating Martinsen and Hannuksela, in video/camera technology. One would be motivated to do so in order to determine that the SEI message is present based on a syntax element in a network abstraction layer (NAL). This functionality would improve accuracy with predictable results.

Claims 10-11 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Sullivan (U.S. Pub. No. 20160366444 A1), in view of Martinsen (U.S. Pub. No. 20180069989 A1), further in view of Chen (U.S. Pub. No. 20230171434 A1).

Regarding claims 10 and 18: Sullivan teaches the method of claim 1, but does not explicitly teach wherein the SEI message includes a first indicator indicating whether a previously-processed SEI message of a same type is not to be persisted.
However, Chen teaches wherein the SEI message includes a first indicator indicating whether a previously-processed SEI message of a same type is not to be persisted. (Chen [0032] source_colour_volume_cancel_flag equal to 1 indicates that the source color volume SEI message cancels the persistence of any previous source color volume SEI message in output order that applies to the current layer. source_colour_volume_cancel_flag equal to 0 indicates that source color volume follows.) The motivation for combining Sullivan and Martinsen as set forth for claim 1 is equally applicable to claim 10. It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Sullivan, further incorporating Martinsen and Chen, in video/camera technology. One would be motivated to do so in order to include in the SEI message a first indicator indicating whether a previously-processed SEI message of a same type is not to be persisted. This functionality would improve flexibility with predictable results.

Regarding claim 11: Sullivan teaches the method of claim 10. The limitation "or a URI according to a second indicator" is part of an OR condition, so that alternative does not require a separate rejection. Sullivan does not explicitly teach further comprising, when the first indicator indicates that the previously-processed SEI message is to be persisted, determining whether to obtain the color profile metadata from a payload of the SEI message. However, Chen teaches, when the first indicator indicates that the previously-processed SEI message is to be persisted (Chen [0032], quoted above), determining whether to obtain the color profile metadata from a payload of the SEI message. (Chen [0044] FIG. 4 depicts an example process for extracting color volume information for a video source using SEI messaging according to an embodiment. First (405), a decoder may detect whether a first SEI messaging variable indicating an identifying number (ID) of source color volume information (e.g., source_colour_volume_id) is present. Then, given the presence of such a variable, the decoder may check (step 407) whether its value is within a permissible range. If it is an illegal value, then the process terminates (step 409). If it is a legal value, then in step (410), as shown also in Table 1, the decoder can read additional flags related to the persistence of the first variable across the bit stream (e.g., see the syntax elements for source_colour_volume_cancel_flag and source_colour_volume_persistence_flag). In step (412), via a second SEI messaging parameter (e.g., source_colour_primaries), a decoder may check whether the metadata define explicitly the color volume that source data content truly occupies. If it is true (e.g., source_colour_primaries=2) then, in step (420), the (x, y) color chromaticity coordinates for each color primary (e.g., red, green, and blue) are read; otherwise, in step (425), the decoder extracts the minimum, maximum, and average luminance values. Optionally, SEI messaging may also define the (x, y) color chromaticity coordinates corresponding to the color primaries of the min, mid, and max luminance values defined earlier. In an embodiment, this may be indicated by a third parameter (e.g., luminance_colour_primaries_info_present_flag=1). If no such information is present (step 430), then the process terminates (409); otherwise, in step (435), the decoder extracts the (x, y) color chromaticity coordinates for the color primaries for each of the min, mid, and max luminance values.)
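The flag-driven persistence behavior Chen describes, and which claims 10-11 turn on, can be sketched in a few lines: a decoder keeps at most one active color-volume SEI message, and a cancel flag clears it while new payload data replaces it. This is an illustrative sketch only; the function and field names are hypothetical and are not taken from Chen, the claims, or any coding standard.

```python
def apply_color_volume_sei(persisted, cancel_flag, payload):
    """Decide what color-volume metadata persists after an SEI message.

    Mirrors the flag semantics quoted from Chen [0032]:
      cancel_flag == 1 -> cancel the persistence of any previous SEI
                          message of the same type; nothing is read.
      cancel_flag == 0 -> new color-volume data follows; obtain the
                          metadata from this message's payload.
    """
    if cancel_flag == 1:
        return None      # previously-processed SEI no longer applies
    return payload       # metadata obtained from the SEI payload


# A message persists until it is cancelled or replaced:
state = None
state = apply_color_volume_sei(state, 0, {"colour_primaries": "bt2020"})
state = apply_color_volume_sei(state, 1, None)  # cancel clears the state
```

A real decoder would also honor a separate persistence flag (Chen's source_colour_volume_persistence_flag) scoping the message to one picture or to the remainder of the layer in output order; that is omitted here for brevity.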
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NASIM N NIRJHAR, whose telephone number is (571) 272-3792. The examiner can normally be reached Monday - Friday, 8 am to 5 pm ET. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William F Kraig, can be reached at (571) 272-8660. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NASIM N NIRJHAR/
Primary Examiner, Art Unit 2896
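For readers less familiar with the SEI mechanics these §103 rejections turn on, the claims 6 and 15 limitation — an SEI message carrying a URI string that references external color-profile (e.g. ICC) data, in the spirit of Choi's reference-by-URI approach — might look like the following. The payload layout, field names, and URL are purely hypothetical and are not drawn from Choi, the application, or any standard.

```python
def read_profile_uri_sei(payload: bytes) -> str:
    """Parse a hypothetical SEI payload carrying a null-terminated
    UTF-8 URI for an external ICC color profile.

    Assumed layout (illustrative only): a 1-byte payload-type tag,
    then the URI bytes, then a 0x00 terminator.
    """
    end = payload.index(0, 1)             # locate the null terminator
    return payload[1:end].decode("utf-8")  # URI follows the tag byte


sei_payload = b"\x01" + b"https://example.com/display-p3.icc" + b"\x00"
profile_uri = read_profile_uri_sei(sei_payload)
# profile_uri == "https://example.com/display-p3.icc"
```

A decoder holding such a URI would fetch or look up the ICC profile out of band, rather than carrying the full profile in the bitstream — the size trade-off Choi [0098] discusses.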

Prosecution Timeline

Feb 24, 2025
Application Filed
Mar 08, 2026
Non-Final Rejection — §103
Mar 26, 2026
Interview Requested
Apr 08, 2026
Applicant Interview (Telephonic)
Apr 08, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598324
DEPTH DIFFERENCES IN PLACE OF MOTION VECTORS
2y 5m to grant Granted Apr 07, 2026
Patent 12593131
VELOCITY MATCHING IMAGING OF A TARGET ELEMENT
2y 5m to grant Granted Mar 31, 2026
Patent 12593074
SYSTEMS AND METHODS OF BUFFERING IMAGE DATA BETWEEN A PIXEL PROCESSOR AND AN ENTROPY CODER
2y 5m to grant Granted Mar 31, 2026
Patent 12587662
METHOD, APPARATUS AND STORAGE MEDIUM FOR IMAGE ENCODING/DECODING
2y 5m to grant Granted Mar 24, 2026
Patent 12587628
DISPLAY DEVICE AND METHOD OF DRIVING THE SAME
2y 5m to grant Granted Mar 24, 2026
Based on this examiner's 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
74%
Grant Probability
93%
With Interview (+18.7%)
2y 6m
Median Time to Grant
Low
PTA Risk
Based on 512 resolved cases by this examiner. Grant probability derived from career allow rate.
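The 93% "with interview" figure is consistent with treating the interview lift as additive, in percentage points, on top of the career allow rate. That additivity is an assumption about this tool's methodology, not something it documents:

```python
career_allow_rate = 74.0   # percent (379 granted / 512 resolved ~ 74%)
interview_lift = 18.7      # percentage points, from interviewed cases

with_interview = career_allow_rate + interview_lift
print(round(with_interview))   # 93
```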
