Prosecution Insights
Last updated: April 19, 2026
Application No. 18/007,219

INTEGRATING A DECODER FOR HIERARCHICAL VIDEO CODING

Status: Final Rejection (§103)
Filed: Jan 27, 2023
Examiner: LEE, JIMMY S
Art Unit: 2483
Tech Center: 2400 (Computer Networks)
Assignee: V-NOVA INTERNATIONAL LTD
OA Round: 2 (Final)
Grant Probability: 56% (Moderate)
OA Rounds: 3-4
To Grant: 3y 7m
With Interview: 84%

Examiner Intelligence

Career Allow Rate: 56% (grants 56% of resolved cases; 170 granted / 302 resolved; -1.7% vs TC avg)
Interview Lift: +28.1% (strong), comparing resolved cases with vs. without an interview
Typical Timeline: 3y 7m avg prosecution; 33 applications currently pending
Career History: 335 total applications across all art units
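For readers checking the dashboard's arithmetic, the headline rate follows directly from the reported counts; a minimal sketch (variable names are illustrative):

```python
# Reproduce the dashboard's career statistics from the reported counts.
granted, resolved = 170, 302
allow_rate = granted / resolved          # career allowance rate
tc_avg = allow_rate + 0.017              # dashboard reports -1.7% vs TC avg

print(f"Career allow rate: {allow_rate:.1%}")   # Career allow rate: 56.3%
print(f"Implied TC average: {tc_avg:.1%}")      # Implied TC average: 58.0%
```

The dashboard's 56% figure is the rounded value of 56.3%; the implied Tech Center average of about 58% is derived, not separately reported.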

Statute-Specific Performance

§101: 3.2% (-36.8% vs TC avg)
§103: 71.5% (+31.5% vs TC avg)
§102: 8.8% (-31.2% vs TC avg)
§112: 12.8% (-27.2% vs TC avg)
Tech Center average is an estimate • Based on career data from 302 resolved cases
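The per-statute deltas also let one back out the Tech Center baseline being compared against; a quick sketch (how the dashboard defines each rate is its own convention, not stated here):

```python
# Back out the implied Tech Center average for each statute from the
# examiner's rate and the reported delta (all values in percentage points).
examiner_rate = {"§101": 3.2, "§103": 71.5, "§102": 8.8, "§112": 12.8}
delta_vs_tc   = {"§101": -36.8, "§103": 31.5, "§102": -31.2, "§112": -27.2}

tc_average = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
print(tc_average)  # every statute implies the same 40.0 baseline
```

Notably, all four statutes imply the same 40.0% baseline, suggesting the comparison is against a single aggregate Tech Center figure rather than per-statute averages.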

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s response to the claim objections has been fully considered, and the objections are withdrawn. Applicant’s response to the rejection under 35 U.S.C. 112(b) has been fully considered, and the rejection is withdrawn. Applicant’s arguments with respect to claims 1 and 17-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3-9, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over FERRARA; Simone et al. (US 20190222851 A1) in view of Ugur; Kemal et al.
(US 20140092964 A1) in view of SU; Guan-Ming (US 20170214924 A1)

Regarding claim 1, Ferrara teaches, A video decoder (¶30 and Fig. 1, “decoder device 110 is configured to decode a data signal” depicted in Fig. 1) comprising: one or more decoder plug-ins, (¶107-110, decoder device 110 configured to “receive data associated with a field of view associated with a viewer”) each decoder plug-in providing a wrapper (¶107-110, data associated with a field of view used “to identify the target region”) for one or more respective base decoders (¶107-110, decoder device 110 “fetches data relating to a base layer” used to generate data of the further region at “lowest level of qualities as it is visible to the viewer”) to implement a base decoding layer (¶107-110, decoder device 110 “fetches data relating to a base layer” for tiles in focus) to decode an encoded video signal, (¶107-110 and 31, decoder device 110 decodes “received, encoded data signal” associated with a field of view associated with viewer used to “identify the target region”) each wrapper implementing an interface for data exchange (¶41,107-110, and 31, decoder device 110 “receiving data comprising an identification of the target region” received from encoder device 108) with the each corresponding base decoder; (¶41,107-110, and 31, decoder device 110 “fetches data relating to a base layer” based on “data signal” comprising an identification of the target region received from encoded data signal) an enhancement decoder (¶107-110, decoder device 110 fetches data relating to generating “the high quality layer for tiles in focus” associated with a field of view associated with a viewer and use the data associated with the field of view to identify the target region) to implement an enhancement decoding layer, (¶107-110, decoder device 110 “decodes the viewable section to a relatively high level of quality” of fetched data for high quality layer based on “data associated with a field of view associated with a viewer” used to “identify the target region”) the enhancement decoder (¶107-110, decoder 110 configured to fetch data relating to “the high quality layer”) being configured to: receive an encoded enhancement signal; (¶107-110, “decoder device 110 fetches data relating” to the “high quality layer” for tiles in focus “associated with a viewer and use the data associated with the field of view to identify the target region”) and a decoder integration layer to control operation (¶46 and Fig. 1, decoder layer 110, depicted in Fig. 1, has “control over which regions of the image” are decoded at “particular levels of quality”) of the one or more decoder plug-ins (¶46-48 and Fig. 1, decoder layer 110 controls the decoded “particular levels of quality”) and the enhancement decoder (¶44-48,107-110, and Fig. 1, decoder device 110 generates data representing region of “data signal at the higher level of quality”) to generate a decoded reconstruction of the original input video signal (¶44-45 and Fig. 1, decoder device 110 “use the fully encoded data signal to generate a version of the data signal” from the encoder device 108) using a decoded video signal from the base encoding layer (¶44-45 and Fig. 1, “decoder device 110 decodes part of the fully encoded data signal at a first level of quality”) and the one or more layers (¶44-45, “part at a second, higher level of quality”) from the enhancement encoding layer, (¶44-45 and Fig. 1, decoder device 110 decodes part of the fully encoded data signal “at a second, higher level of quality”) wherein the decoder integration layer (¶46,113-121, and Fig. 11, decoder device 110 that has control over regions of the image “decoded at particular levels of quality” as part of “apparatus 1100” depicted in Fig. 11) provides a control interface for the video decoder. (¶119 and Fig. 11, apparatus 1100, depicted in Fig. 11, comprising “one or more I/O devices 1106” that may enable a “user to provide input to the apparatus 1100”)

But Ferrara does not explicitly teach, each decoder plug-in providing a base control interface to a corresponding base decoder layer to call functions of the each corresponding base decoder, the each decoder plug-in providing an application program interface (API) to control operations, decode the encoded enhancement signal to obtain one or more layers of residual data, the one or more layers of residual data being generated based on a comparison of data derived from the decoded video signal and data derived from an original input video signal, and a decoder integration layer to control operation to generate a decoded reconstruction of the original input video signal using one or more layers of residual data.

However, Ugur teaches additionally, each decoder plug-in (¶251, “decoder and/or decoder process”) providing a base control interface (¶251, decoder or decoder process includes “first interface or interfaces”) to a corresponding base decoder layer (¶251, interface or interfaces to input “decoded picture of a base layer”) to call functions (¶251, interface(s) includes “mechanism to provide information characterizing or associated with the reconstructed/decoded picture”) of the each corresponding base decoder, (¶251, “decoded picture of a base layer”) the each decoder plug-in providing an application program interface (API) to control operations, (¶251-259, “decoder and/or decoder process” includes interface or interfaces such as “an application programming interface or API” to input a reconstructed or decoded picture of a base layer such as “layer_id value”, “temporal_id value”, “spatial extents (e.g. horizontal and vertical sample counts)”, etc.)

It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the decoder of Ferrara with the interface of Ugur which can provide information associated with a decoded picture of a base layer. This arrangement provides information that relates to inter-layer prediction that can use pictures in a base layer of a scalable video sequence.

But the combination does not explicitly teach, decode the encoded enhancement signal to obtain one or more layers of residual data, the one or more layers of residual data being generated based on a comparison of data derived from the decoded video signal and data derived from an original input video signal, and a decoder integration layer to control operation to generate a decoded reconstruction of the original input video signal using one or more layers of residual data.

However, Su teaches additionally, decode the encoded enhancement signal (¶166,170, and Fig. 4B, EL decoding operation (454), depicted in Fig. 4B, generating “residual values by decoding the EL image data (408)” placed in “one or more enhancement layer containers”) to obtain one or more layers of residual data, (¶166, “EL image data (408) comprises residual image data” placed in one or more enhancement layer containers) the one or more layers of residual data (¶166, EL image data (408) comprises “residual image data” of the (e.g., VDR, etc.) source video content (404) relative to predicted image data generated from the BL image data (406)”) being generated based on a comparison of data (¶166,152,37, and Fig. 4A, EL image data carrying “residual (or differential) image data” generated differences produced by “subtraction operation (424)” the mapped (BL) code words generated by “BL decoding operation (420)” and “source video content (404)” as presented in Fig. 4A) derived from the decoded video signal (¶166, 152, and Fig. 4A, subtraction of “image data generated from the BL image data (406)”) and data derived from an original input video signal, (¶166,152, and Fig. 4A, subtraction of image data from the “video source content (404)”) and a decoder integration layer (¶167 and Fig. 4B, “reconstructed BL+EL video content 466” depicted in Fig. 4B) to control operation to generate a decoded reconstruction (¶167 and Fig. 4B, “generate one or more wide dynamic range (e.g., VDR, etc.) images that represents a reconstructed version (e.g., reconstructed BL+EL video content 466, etc.)”) of the original input video signal (¶167 and Fig. 4B, “one or more wide dynamic range (e.g., VDR, etc.) images that represents a reconstructed version (e.g., reconstructed BL+EL video content 466, etc.) of source images in source video content” depicted in Fig. 4B) using a decoded video signal (¶167 and Fig. 4B, “decoding operations on the BL image data (406) and the EL image data (408) to generate one or more wide dynamic range (e.g., VDR, etc.) images” depicted in Fig. 4B) from the base encoding layer (¶167 and Fig. 4B, perform decoding operations on the “BL image data (406)” used to generate one or more wide dynamic range images) and the one or more layers of residual data from the enhancement layer (¶167 and Fig. 4B, perform decoding operations on the “EL image data (408)” used to generate one or more wide dynamic range images)

It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the decoder of Ferrara with the interface of Ugur with the self-adaptive prediction of Su which teaches the enhancement layer relates to a residual value. Using this type of residual data provides a base layer to enhanced layer prediction that reduces the amount of image data the enhancement layer needs for reconstructing dynamic range image data.
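The claim 1 architecture that the rejection maps onto Ferrara, Ugur, and Su (plug-in wrappers over base decoders, an enhancement decoder producing residuals, and a decoder integration layer combining them behind a control interface) can be sketched in a few lines. This is a hypothetical illustration of the claimed structure only; every name and the residual convention are invented, and it is not code from the application or any real decoder SDK:

```python
# Hypothetical sketch of the claimed decoder architecture. All class
# names, the byte-level "codec", and the residual offset are invented.
from abc import ABC, abstractmethod

class BaseDecoderPlugin(ABC):
    """Wrapper exposing a uniform interface over a concrete base decoder."""
    @abstractmethod
    def decode_frame(self, encoded: bytes) -> list[int]: ...

class AVCPlugin(BaseDecoderPlugin):
    def decode_frame(self, encoded: bytes) -> list[int]:
        # Stand-in for a call into a real base codec (e.g. via FFI).
        return list(encoded)

class EnhancementDecoder:
    def decode_residuals(self, encoded: bytes) -> list[int]:
        # Residuals were formed at the encoder as (original - base);
        # here each byte is treated as a residual biased by 128.
        return [b - 128 for b in encoded]

class DecoderIntegrationLayer:
    """Control interface: drives the plug-in and the enhancement decoder."""
    def __init__(self, plugin: BaseDecoderPlugin, enhancer: EnhancementDecoder):
        self.plugin, self.enhancer = plugin, enhancer

    def decode(self, base_stream: bytes, enh_stream: bytes) -> list[int]:
        base = self.plugin.decode_frame(base_stream)
        residuals = self.enhancer.decode_residuals(enh_stream)
        # Reconstruction = base-layer output + residual layer.
        return [b + r for b, r in zip(base, residuals)]
```

Here the integration layer is the single entry point, mirroring the claim's "control interface for the video decoder"; swapping `AVCPlugin` for another wrapper leaves the enhancement path untouched, which is the point of the plug-in abstraction.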
Regarding claim 3, Ferrara with Ugur with Su teach the limitations of claim 1. Su teaches additionally, apply the one or more layers (¶166-167, generate “wide dynamic range (e.g., VDR, etc.) images that represents a reconstructed version (e.g., reconstructed BL+EL video content 466, etc.) of source images in source video”) of residual data from the enhancement encoding layer (¶166-167, “EL image data (408) comprises residual image data of the (e.g., VDR, etc.) source video content (404)”) to the decoded video signal from the base encoding layer (¶166-167, “perform decoding operations on the BL image data (406)” used to generate “wide dynamic range (e.g., VDR, etc.) images”) to generate the decoded reconstruction of the original input video signal. (¶166-167, “perform decoding operations on the BL image data (406) and the EL image data (408) to generate one or more wide dynamic range (e.g., VDR, etc.) images that represents a reconstructed version (e.g., reconstructed BL+EL video content 466, etc.) of source images in source video content”)

It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the decoder of Ferrara with the interface of Ugur with the self-adaptive prediction of Su which teaches the enhancement layer relates to a residual value. Using this type of residual data provides a base layer to enhanced layer prediction that reduces the amount of image data the enhancement layer needs for reconstructing dynamic range image data.

Regarding claim 4, Ferrara with Ugur with Su teach the limitations of claim 1. Su teaches additionally, decoder integration layer (¶167 and Fig. 4B, decoding operation of multi-layer decoder 452 including “reconstructed BL+EL video content 466” receiving added together decoded “BL image data (406) and the EL image data (408)” as depicted in Fig. 4B) is configured to obtain: data from one or more input buffers (¶138, “base layer container” and “residual data containers”) comprising the encoded video signal (¶138, “BL image data (406) is placed in a base layer container”) and the encoded enhancement signal (¶138, “EL image data (408) is placed in one or more enhancement layer containers”) in an encoding order, (¶167,138,154, Fig. 4A and 4B, “reconstructed BL+EL video content 466” receiving added together decoded “BL image data (406) and the EL image data (408)” as depicted in Fig. 4B where “EL image container in the enhancement layer may be logically separate from the BL image container in the base layer, even though both image containers can be concurrently contained in a single digital video signal”) wherein the one or more input buffers (¶148 and 138, “BL stream 406” placed in a base layer container transmitted to “downstream decoder (e.g., 452)”) are also fed to the base decoders; (¶148,138,168, and Fig. 4B, “BL stream 406” placed in a base layer container transmitted to BL decoding operation (460) of “downstream decoder (e.g., 452)” as depicted in Fig. 4B) and one or more base decoded frames (¶199, “one or more relatively low dynamic range images are ordered”) of the decoded video signal (¶199, one or more relatively low dynamic range images are “decoded from a multi-layer video signal”) from the base encoding layer (¶168 and 199, BL decoding operation (460) decoding “BL image data (406)”) in presentation order. (¶199, one or more relatively low dynamic range images are ordered in “a displaying order in which the one or more relatively low dynamic range images are to be rendered”)

It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the decoder of Ferrara with the interface of Ugur with the self-adaptive prediction of Su which teaches the enhancement layer relates to a residual value. Using this type of residual data provides a base layer to enhanced layer prediction that reduces the amount of image data the enhancement layer needs for reconstructing dynamic range image data.

Regarding claim 5, Ferrara with Ugur with Su teach the limitations of claim 1. Su teaches additionally, output type configuration parameter, (¶165, “multi-layer decoder (452) is configured to receive metadata (430)”) wherein the decoder integration layer is configured to vary (¶165-167, “reconstructed BL+EL video content 466”, of multi-layer decoder (452), based on “EL layers” generated using “operational parameters” used to generate “EL image data (408)”) how the decoded reconstruction of the original input video signal is output (¶165-167, “reconstructed BL+EL video content 466” generate “one or more wide dynamic range (e.g., VDR, etc.) images that represents a reconstructed version (e.g., reconstructed BL+EL video content 466, etc.) of source”) based on a value of the output type configuration parameter. (¶165-167, reconstructed BL+EL video content 466 based on EL image data (408) using “operational parameters used in operations that generate” the EL image data (408))

It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the decoder of Ferrara with the interface of Ugur with the self-adaptive prediction of Su which teaches the enhancement layer relates to a residual value. Using this type of residual data provides a base layer to enhanced layer prediction that reduces the amount of image data the enhancement layer needs for reconstructing dynamic range image data.
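Claim 4's contrast between input buffers fed in encoding order and base-decoded frames emerging in presentation order is the familiar B-frame reordering issue; a toy sketch (the tuple shape and frame labels are invented for illustration):

```python
# Toy illustration of encoding order vs. presentation order.
# Frames arrive as (presentation_index, payload) in encoding order,
# since a B-frame is encoded after the frames it references.
def to_presentation_order(frames: list[tuple[int, bytes]]) -> list[bytes]:
    return [payload for _, payload in sorted(frames, key=lambda f: f[0])]

# I and P are sent first, then the B-frame that references both.
encoded_order = [(0, b"I"), (2, b"P"), (1, b"B")]
assert to_presentation_order(encoded_order) == [b"I", b"B", b"P"]
```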
Regarding claim 6, Ferrara with Ugur with Su teach the limitations of claim 1. Su teaches additionally, decoder integration layer (¶167 and 173, reconstructed BL+EL video content 466 generates “reconstructed version of one or more wide dynamic range images”) is configured to output the decoded reconstruction of the original input video signal (¶173, “reconstructed version of the one or more wide dynamic range images can be outputted”) as one or more buffers. (¶173, image data from the reconstructed version of the one or more wide dynamic range images “stored in the EL collected information (472)”)

It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the decoder of Ferrara with the interface of Ugur with the self-adaptive prediction of Su which teaches the enhancement layer relates to a residual value. Using this type of residual data provides a base layer to enhanced layer prediction that reduces the amount of image data the enhancement layer needs for reconstructing dynamic range image data.

Regarding claim 7, Ferrara with Ugur with Su teach the limitations of claim 1. Ferrara teaches additionally, output the decoded reconstruction (¶107, decoder device 110 generate data to align the target region of the “image to be displayed to the viewer where they are looking”) of the original input video signal (¶107, decoder device 110 may generate data for representing the “at least part of the further region” at a level of quality intermediate the highest and lowest level of qualities as it is “visible to the viewer”) as one or more on-screen surfaces. (¶107, “decoder device 110 may generate data” that align the “target region with where the viewer is looking”)

Regarding claim 8, Ferrara with Ugur with Su teach the limitations of claim 1. Ferrara teaches additionally, output the decoded reconstruction (¶107, decoder device 110 generate data to align the target region of the “image to be displayed to the viewer where they are looking”) of the original input video signal (¶107, decoder device 110 may generate data for representing the “at least part of the further region” at a level of quality intermediate the highest and lowest level of qualities as it is “visible to the viewer”) as one or more off-screen textures. (¶107 and 109, “decoder device 110 may generate data” that align the “target region” associated with the predicted gaze towards regions that “will be added to the target region in a subsequent image” where the user is “likely to be looking”)

Regarding claim 9, Ferrara with Ugur with Su teach the limitations of claim 8. Ferrara teaches additionally, a render instruction (¶107-110, “data associated with one or more gaze positions”) and, when the decoder integration layer (¶107-110, decoder device 110 “receive data associated with one or more gaze positions”) receives the render instruction, (¶107-110, “data associated with one or more gaze positions”) the decoder integration layer is configured to render the one or more off-screen textures. (¶107-110, decoder device 110 may “predict which regions and corresponding tiles will be added to the target region in a subsequent image” to generate data for “regions at a desired level of quality once such a prediction has been made” based on the “one or more gaze positions associated with a user” that identifies the target region)

Regarding claim 19, it recites the decoding system that includes the video decoder of claim 1. Su teaches additionally, A video decoding system, (¶167 and Fig. 4B, “multi-layer decoder 452” depicted in Fig. 4B) comprising: one or more base decoders; (¶167 and Fig. 4B, multi-layer decoder 452 configured to perform decoding operations with “BL decoding operation (460)” configured to decode “BL image data (406)”) and a client (¶219,221-234, and Fig. 6, “bus 602” connecting input device 614 to processor 604 to perform the process steps described, such as “decoding” method, as “command selections”) which provides one or more calls (¶221-234 and Fig. 6, bus 602 communicating “command selections” and “information” executed by processor 604) to the video decoder via the control interface to instruct (¶219,221-234, and Fig. 6, “input device 614” depicted in Fig. 6 communicating “command selections to processor 604” to perform a “decoder” method) generation of a decoded reconstruction of an original input video signal using the video decoder. (¶219,167, Fig. 4B and 6, “perform decoding operations” of the decoding method of multi-layer decoder (452), depicted in Fig. 4B, “to generate one or more wide dynamic range (e.g., VDR, etc.) images that represents a reconstructed version (e.g., reconstructed BL+EL video content 466, etc.) of source images”)

It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the decoder of Ferrara with the interface of Ugur with the self-adaptive prediction of Su which teaches the enhancement layer relates to a residual value. Using this type of residual data provides a base layer to enhanced layer prediction that reduces the amount of image data the enhancement layer needs for reconstructing dynamic range image data. See the rejection of claim 1 for the video decoder of claim 1 as recited in claim 19.

Regarding claim 20, it recites the decoder integration layer of the video decoder of claim 1. Refer to the rejection of claim 1 for the limitations of claim 20.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over FERRARA; Simone et al.
(US 20190222851 A1) in view of Ugur; Kemal et al. (US 20140092964 A1) in view of SU; Guan-Ming (US 20170214924 A1) in view of CHO; In Won et al. (US 20200220907 A1)

Regarding claim 2, Ferrara with Ugur with Su teach the limitations of claim 1, but do not explicitly teach the additional limitations of claim 2. However, Cho teaches additionally, one or more decoder plug-ins are configured to instruct (¶100-103, first client device 210A may forward data encoded to the “counterpart of the video call”) the corresponding base decoder (¶100-103, first client device 210A may forward data encoded to “restore the base layer”) through a library function call or operating system function call. (¶100-103, first client device 210A may forward to “the counterpart of the video call” data encoded for each layer so that the device may “restore the base layer” and the upper layer)

It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the decoder of Ferrara with the interface of Ugur with the self-adaptive prediction of Su with the video call system of Cho which has video call coding that can use different encoding methods for each layer. This allows for a video call capability that can enhance the experience, depending on terminal capability, by enhancing the picture quality through merging high quality positions with a base layer.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over FERRARA; Simone et al. (US 20190222851 A1) in view of Ugur; Kemal et al. (US 20140092964 A1) in view of SU; Guan-Ming (US 20170214924 A1) in view of TSUKAGOSHI; Ikuo (US 20190007707 A1)

Regarding claim 10, Ferrara with Ugur with Su teach the limitations of claim 1, but do not explicitly teach the additional limitations of claim 10. However, Tsukagoshi teaches additionally, a pipeline mode parameter, (¶74, video encoder 106 adds “information indicating decoding order”) wherein the decoder integration layer (¶117, “access units that are extracted by and transmitted from the system decoder 203”) is configured to control stages of the enhancement layer to be performed (¶74, “information indicating decoding order (encoding order) to each access unit of the enhanced streams (enhanced layer streams)”) on a CPU or GPU (¶113, control unit 201 using “a central processing unit (CPU)” to control receiving apparatus that includes “system decoder 203”) based on a value of the pipeline mode parameter. (¶113,117, and 74, “system decoder 203” extracting access units “indicating decoding order” added to access units of the enhanced streams (enhanced layer streams))

It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the decoder of Ferrara with the interface of Ugur with the self-adaptive prediction of Su with the access units of Tsukagoshi which indicate decoding order for enhanced layer streams. This information allows for consistent, correct decoding-order processing.

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over FERRARA; Simone et al. (US 20190222851 A1) in view of Ugur; Kemal et al. (US 20140092964 A1) in view of SU; Guan-Ming (US 20170214924 A1) in view of Xia; Zhi Jin et al.
(US 20100034273 A1)

Regarding claim 11, Ferrara with Ugur with Su teach the limitations of claim 1, but do not explicitly teach the additional limitations of claim 11. However, Xia teaches additionally, fall back (¶48, when “impairment of an enhanced layer block with base_mode_flag equal to 1, a lower layer decoding circuit comes in action”) to passing an output of the base decoding layer (¶48, “a lower layer stream BLS is received and entropy decoded ED”) as the decoded reconstruction of the original input video signal (¶48, entropy decoded output of a “lower layer stream BLS” processed as the up-scaling result used for the “enhanced layer stream ELS”) where no encoded enhancement signal is received. (¶48, up-scaling result used for the “enhanced layer stream ELS” instead of the “missing or damaged enhanced layer block”)

It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the decoder of Ferrara with the interface of Ugur with the self-adaptive prediction of Su with the base layer up-scaling of Xia which occurs when an impairment to the enhanced layer block is identified. This helps improve the prediction of lost or damaged enhanced spatial layer blocks.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over FERRARA; Simone et al. (US 20190222851 A1) in view of Ugur; Kemal et al. (US 20140092964 A1) in view of SU; Guan-Ming (US 20170214924 A1) in view of Takahashi; Maki et al. (US 20110164683 A1)

Regarding claim 12, Ferrara with Ugur with Su teach the limitations of claim 1, but do not explicitly teach the additional limitations of claim 12. However, Takahashi teaches additionally, skip frame instruction (¶152, when the temperature of the apparatus increases, “decoding of the enhanced layer stream 351 is skipped”) and wherein the decoder integration layer (¶152, “input control section 314”) is configured to control the operation to not decode a frame (¶152, “decoding of the enhanced layer stream 351 is skipped”) of the encoded enhancement signal (¶152, “enhanced layer stream 351”) and/or not decode a frame of the encoded video signal in response to receiving the skip frame instruction.

It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the decoder of Ferrara with the interface of Ugur with the self-adaptive prediction of Su with the temperature control of Takahashi which obtains the temperature information of a scalable video stream decoding apparatus. This allows for restraining heat generation and reducing power consumption.

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over FERRARA; Simone et al. (US 20190222851 A1) in view of Ugur; Kemal et al. (US 20140092964 A1) in view of SU; Guan-Ming (US 20170214924 A1) in view of RASSOOL; Reza (US 20200120360 A1)

Regarding claim 13, Ferrara with Ugur with Su teach the limitations of claim 1, but do not explicitly teach the additional limitations of claim 13. However, Rassool teaches additionally, decoder plug-ins provide a base control interface (¶172-175 and Fig. 19, video decoding/rendering device 600 implementing “layered viewport frame assembly sub-routine 1900”) to the base decoder layer (¶175, “base layer data is obtained from the decoded video buffer”) to call functions of the corresponding base decoder.
(¶175, “layered viewport frame assembly sub-routine 1900 may call a base layer frame assembly sub-routine 2000”) It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the decoder of Ferrara with the interface of Ugur with the self-adaptive prediction of Su with the sub-routine calling of Rassool which can call a base layer sub-routine. Having this routine can enable a form of video postprocessing. Claim(s) 14 rejected under 35 U.S.C. 103 as being unpatentable over FERRARA; Simone et al. (US 20190222851 A1) in view of Ugur; Kemal et al. (US 20140092964 A1) in view of SU; Guan-Ming (US 20170214924 A1) in view of Jeong; Yowon (US 20180220140 A1) Regarding claim 14, Ferrara with Ugur with Su teach the limitations of claim 1, But does not explicitly teach the additional limitations of claim 14, However, Jeong teaches additionally, a set of predetermined decoding options, (¶102, “predetermined video coding algorithm”) wherein the decoder integration layer (¶102, “decoding unit 113A”) is configured to retrieve a configuration data structure (¶102, “video decoding unit 113A may decode” one of a plurality of “layers included in the encoded image IMG_E” based on scalable video coding algorithm) comprising a set of decoding settings (¶102, video decoding unit 113A may perform decoding operation “based on the scalable video coding algorithm”) corresponding to the set of predetermined decoding options. 
(¶102, decoding the encoded image IMG_E based on a “scalable video coding algorithm” being a “predetermined video coding algorithm” used to generate a preliminary decoded image IMG_PD) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the decoder of Ferrara with the interface of Ugur, the self-adaptive prediction of Su, and the decoding unit of Jeong, which can decode encoded images using predetermined video coding algorithms. This allows the video coding algorithms of the various video coding standards to be used.

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over FERRARA; Simone et al. (US 20190222851 A1) in view of Ugur; Kemal et al. (US 20140092964 A1), in view of SU; Guan-Ming (US 20170214924 A1), and in view of KASAI; Hiroyuki et al. (US 20100046633 A1).

Regarding claim 15, Ferrara with Ugur and Su teaches the limitations of claim 1. Ferrara additionally teaches that the decoder integration layer should control operation (¶46 and Fig. 1, decoder layer 110 has “control over which regions of the image” are decoded at “particular levels of quality”) of the one or more decoder plug-ins (¶46-48 and Fig. 1, decoder layer 110 controls the decoded “particular levels of quality”) and the enhancement decoder (¶44-48, 107-110, and Fig. 1, decoder device 110 generates data representing a region of the “data signal at the higher level of quality”), but does not explicitly teach the additional limitations of claim 15.

However, Kasai additionally teaches an indication of a mode (¶67, “acquire the asynchronous or synchronous stream” when a session for receiving the streams is initiated) in which the decoder integration layer (¶67, “control unit 240” performs a control operation) should control operation of the one or more decoder plug-ins (¶67, “control unit 240” performs a control operation to acquire the “asynchronous stream Dw” and the “synchronous stream”), wherein, in a synchronous mode (¶75, “acquisition start command of the synchronous stream is received from the control unit 240”), the decoder integration layer is configured to block a call to a decode function until decoding is complete (¶75-77, the “synchronous stream reception unit 214 accumulates the received synchronous stream in a synchronous stream buffer 229” until the synchronous stream read command is input from the synchronous stream acquisition unit 230); and, in an asynchronous mode (¶69, “acquisition start command of the asynchronous stream Dw is input from the control unit 240”), the decoder integration layer is configured to return upon a call to a decode function (¶72, “asynchronous stream reception unit 213 accumulates the received asynchronous stream Dw in the asynchronous stream accumulation unit 222”) and call back when decoding completes.
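The synchronous and asynchronous modes recited in claim 15 follow a standard API pattern: a blocking decode call versus a call that returns immediately and fires a completion callback. A rough Python sketch of that pattern (the class and method names here are illustrative only, not drawn from the claims or the cited references):

```python
import threading


class DecodeSession:
    """Illustrative sketch of synchronous vs. asynchronous decode modes."""

    def decode_sync(self, data: bytes) -> bytes:
        # Synchronous mode: block the caller until decoding is complete,
        # then return the decoded result directly.
        return self._decode(data)

    def decode_async(self, data: bytes, callback) -> None:
        # Asynchronous mode: return immediately and invoke the callback
        # from a worker thread once decoding completes.
        worker = threading.Thread(target=lambda: callback(self._decode(data)))
        worker.start()

    def _decode(self, data: bytes) -> bytes:
        # Stand-in for the real base + enhancement decoding work.
        return data[::-1]
```

The synchronous form suits batch pipelines; the asynchronous form suits playback, where the caller cannot afford to stall on each frame.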
(¶72, the “asynchronous stream reception unit 213 accumulates the received asynchronous stream Dw in the asynchronous stream accumulation unit 222” until an asynchronous stream read command is input from an asynchronous stream acquisition unit 223) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the decoder of Ferrara with the interface of Ugur, the self-adaptive prediction of Su, and the input control of Kasai, which can receive input from asynchronous and synchronous accumulation. This allows for stable content viewing by a user.

Claims 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over FERRARA; Simone et al. (US 20190222851 A1) in view of Ugur; Kemal et al. (US 20140092964 A1), in view of SU; Guan-Ming (US 20170214924 A1), in view of Righetto; Augusto Cesar et al. (US 9846682 B1), and in view of CHO; In Won et al. (US 20200220907 A1).

Regarding claim 16, Ferrara with Ugur and Su teaches the limitations of claim 1. Ferrara additionally teaches wherein the control interface comprises a set of functions (¶119 and Fig. 11, apparatus 1100 comprising “one or more I/O devices 1106” may communicate information to enable a “user to provide input to the apparatus 1100”) to instruct respective phases of operation of the decoder integration layer (¶119 and Fig. 11, “one or more I/O devices 1106 may enable information to be provided to a user”), the set of functions comprising one or more of: a decode function (¶107 and ¶113, decoder device 110, as part of “a decoder device,” configured to generate data representing the region where the user is looking), in response to which the decoder integration layer controls operation (¶46 and Fig. 1, decoder layer 110 has “control over which regions of the image” are decoded at “particular levels of quality”) of the one or more decoder plug-ins (¶46-48 and Fig. 1, decoder layer 110 controls the decoded “particular levels of quality”) and the enhancement decoder (¶44-48, 107-110, and Fig. 1, decoder device 110 generates data representing a region of the “data signal at the higher level of quality”) to generate a decoded reconstruction of the original input video signal (¶44-45 and Fig. 1, decoder device 110 may “use the fully encoded data signal to generate a version of the data signal” from the encoder device 108) using the decoded video signal from the base encoding layer (¶44-45 and Fig. 1, “decoder device 110 decodes part of the fully encoded data signal at a first level of quality”) and the one or more layers (¶44-45, “part at a second, higher level of quality”) of the enhancement encoding layer (¶44-45 and Fig. 1, decoder device 110 decodes part of the fully encoded data signal “at a second, higher level of quality”).

Su additionally teaches the decoder integration layer (¶167 and Fig. 4B, “reconstructed BL+EL video content 466”) to control operation (¶167 and Fig. 4B, “generate one or more wide dynamic range (e.g., VDR, etc.) images that represents a reconstructed version (e.g., reconstructed BL+EL video content 466, etc.)”) to generate a decoded reconstruction of the original input video signal (¶167 and Fig. 4B, “one or more wide dynamic range (e.g., VDR, etc.) images that represents a reconstructed version (e.g., reconstructed BL+EL video content 466, etc.) of source images in source video content”) using (¶167 and Fig. 4B, perform decoding operations on the “BL image data (406)” used to generate one or more wide dynamic range images) the one or more layers of residual data from the enhancement encoding layer (¶167 and Fig. 4B, perform decoding operations on the “EL image data (408)” used to generate one or more wide dynamic range images).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the decoder of Ferrara with the interface of Ugur and the self-adaptive prediction of Su, which teaches that the enhancement layer relates to a residual value. Using this type of residual data provides base-layer-to-enhancement-layer prediction that reduces the amount of image data the enhancement layer needs for reconstructing dynamic range image data.

The combination does not explicitly teach: a create function, in response to which an instance of the decoder integration layer is created; a destruct function, in response to which the instance of the decoder integration layer is destroyed; a feed input function which passes an input buffer comprising the encoded video signal and the encoded enhancement signal to the video decoder; and a call back function, in response to which the decoder integration layer will call back when the decoded reconstruction of the original input video signal is generated.
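The four claimed control-interface functions (create, destruct, feed input, call back) describe a conventional plug-in lifecycle in which the decode step combines base-layer output with enhancement residuals. A hedged sketch of that lifecycle, assuming invented names throughout (`create_dil`, `destroy_dil`, `feed_input`, `set_callback`, and the byte-wise residual addition are illustrative stand-ins, not the applicant's or any reference's actual implementation):

```python
class DecoderIntegrationLayer:
    """Illustrative plug-in lifecycle for the claimed control interface."""

    def __init__(self):
        self._input_buffer = []   # (base, enhancement) pairs awaiting decode
        self._callback = None

    def feed_input(self, base: bytes, enhancement: bytes) -> None:
        # Feed input function: pass an input buffer holding both the
        # encoded video signal and the encoded enhancement signal.
        self._input_buffer.append((base, enhancement))

    def set_callback(self, fn) -> None:
        # Call back function: register a callable to be invoked when the
        # decoded reconstruction is generated.
        self._callback = fn

    def decode(self) -> None:
        # Decode function: combine the base-layer output with the
        # enhancement residuals (byte-wise addition as a stand-in).
        for base, enhancement in self._input_buffer:
            reconstruction = bytes(
                (b + e) % 256 for b, e in zip(base, enhancement)
            )
            if self._callback is not None:
                self._callback(reconstruction)
        self._input_buffer.clear()


def create_dil() -> DecoderIntegrationLayer:
    # Create function: instantiate the decoder integration layer.
    return DecoderIntegrationLayer()


def destroy_dil(dil: DecoderIntegrationLayer) -> None:
    # Destruct function: release any buffered input held by the instance.
    dil._input_buffer.clear()
```

Separating creation, feeding, and destruction in this way is what lets an integration layer wrap interchangeable decoder plug-ins behind one stable interface.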
However, Righetto additionally teaches a create function (25:43-67, the “renderer plug-in 314” is able to create a plug-in layer using a “create plug-in layer” interface), in response to which an instance of the decoder integration layer is created (25:43-67, the “renderer plug-in 314” may create additional plug-in layers “as dictated”); a destruct function (25:43-67, the “renderer plug-in 314” may delete the plug-in layer), in response to which the instance of the decoder integration layer is destroyed (25:43-67, the “renderer plug-in 314” may delete the plug-in layer “when an instance of a plug-in layer is no longer needed”); and a feed input function (25:43-67 and 26:1-34, the “renderer plug-in 314” may update the plug-in layer) which passes an input buffer (25:43-67 and 26:1-34, the renderer plug-in 314 may be able to move the corresponding elements in real time using an “update plug-in” interface).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the decoder of Ferrara with the interface of Ugur, the self-adaptive prediction of Su, and the plug-in rendering of Righetto, which can create, destroy, and feed layers of information. This allows for a technique that can target content updating or refreshing without needing to update or refresh an entire view.

The combination does not explicitly teach a feed input function which passes an input buffer comprising the encoded video signal and the encoded enhancement signal to the video decoder, or a call back function, in response to which the decoder integration layer will call back when the decoded reconstruction of the original input video signal is generated. However, Cho teaches a feed input function (¶100-103 and Fig. 8, “operation 850”) which passes an input buffer (¶100-103, ¶109, and Fig. 8, the processing device accessing the “data” of the “number of layers to be forwarded”) comprising the encoded video signal (¶100-103 and Fig. 8, in operation 850, the first client device 210A may encode each of “the base layer” and the upper layer) and the encoded enhancement signal to the video decoder (¶100-103 and Fig. 8, in operation 850, the first client device 210A may encode each of the base layer and “the upper layer”); and a call back function (¶100-103 and Fig. 8, “operation 860” may restore a scene of forwarded data “encoded for each layer and position information of the upper layer in the scene to the counterpart of the video call”), in response to which the decoder integration layer will call back (¶100-103 and Fig. 8, “may restore the base layer and the upper layer by decoding the data encoded for each layer”) when the decoded reconstruction of the original input video signal is generated (¶100-103 and Fig. 8, “operation 860” restores the “base layer and the upper layer by decoding the data encoded for each layer”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the decoder of Ferrara with the interface of Ugur, the self-adaptive prediction of Su, and the plug-in rendering of Righetto with the video call system of Cho, which has video call coding that can use different encoding methods for each layer. This allows for a video call capability that can enhance the experience, depending on terminal capability, by enhancing picture quality through merging high-quality portions with a base layer.

Regarding claim 17, it is the method claim corresponding to decoder claim 16 and depends on claim 1. Refer to the rejection of claim 16 for the limitations of claim 17.

Regarding claim 18, Ferrara with Ugur, Su, Righetto, and Cho teaches the limitations of claim 17. Ferrara additionally teaches a non-transitory computer readable medium (¶116 and Fig. 11, “computer-useable volatile memory 1103”) comprising instructions (¶116 and Fig. 11, computer-useable volatile memory 1103 “configured to store information and/or instructions for the one or more processors 1101”) which, when executed by a processor (¶116 and Fig. 11, “information and/or instructions for the one or more processors 1101” configured to “process information and/or instructions”), cause the processor to carry out the method of claim 17 (¶116 and Fig. 11, “the one or more processors 1101” configured to “process information and/or instructions”).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JIMMY S LEE, whose telephone number is (571) 270-7322. The examiner can normally be reached Monday through Friday, 10 AM-8 PM EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Joseph G. Ustaris, can be reached at (571) 272-7383. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JOSEPH G USTARIS/
Supervisory Patent Examiner, Art Unit 2483
/JIMMY S LEE/
Examiner, Art Unit 2483

Prosecution Timeline

Jan 27, 2023: Application Filed
Aug 08, 2023: Response after Non-Final Action
Apr 02, 2025: Non-Final Rejection (§103)
Jul 07, 2025: Response Filed
Jul 14, 2025: Examiner Interview Summary
Jul 14, 2025: Applicant Interview (Telephonic)
Oct 03, 2025: Final Rejection (§103)
Apr 07, 2026: Request for Continued Examination
Apr 14, 2026: Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604034: METHOD FOR PARTITIONING BLOCK AND DECODING DEVICE (2y 5m to grant; granted Apr 14, 2026)
Patent 12596190: MILLIMETER WAVE DISPLAY ARRANGEMENT (2y 5m to grant; granted Apr 07, 2026)
Patent 12581086: MERGE WITH MVD BASED ON GEOMETRY PARTITION (2y 5m to grant; granted Mar 17, 2026)
Patent 12563112: SPATIALLY UNEQUAL STREAMING (2y 5m to grant; granted Feb 24, 2026)
Patent 12554017: EBS/TOF/RGB CAMERA FOR SMART SURVEILLANCE AND INTRUDER DETECTION (2y 5m to grant; granted Feb 17, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 56%
With Interview: 84% (+28.1%)
Median Time to Grant: 3y 7m
PTA Risk: Moderate
Based on 302 resolved cases by this examiner. Grant probability derived from career allow rate.
