Prosecution Insights
Last updated: April 19, 2026
Application No. 19/011,001

TRANSMISSION OF RECONSTRUCTION DATA IN A TIERED SIGNAL QUALITY HIERARCHY

Non-Final OA • §103 • §DP
Filed: Jan 06, 2025
Examiner: PICON-FELICIANO, ANA J
Art Unit: 2482
Tech Center: 2400 — Computer Networks
Assignee: V-NOVA INTERNATIONAL LTD
OA Round: 1 (Non-Final)
Grant Probability: 69% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 11m
Grant Probability With Interview: 90%

Examiner Intelligence

Career Allow Rate: 69% — above average (294 granted / 428 resolved; +10.7% vs TC avg)
Interview Lift: +21.8% across resolved cases with interview — strong
Typical Timeline: 2y 11m avg prosecution; 31 applications currently pending
Career History: 459 total applications across all art units
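The headline figures above follow from simple arithmetic on the examiner's career counts. A quick sketch (variable names are ours; the +10.7% TC-average delta is quoted from the dashboard, not recomputed):

```python
# Sketch of how the dashboard's headline figures derive from the raw
# counts shown above (294 granted of 428 resolved). Variable names are
# illustrative; the +10.7% TC-average delta is quoted, not recomputed.

granted, resolved = 294, 428
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")   # 68.7%, shown rounded to 69%

tc_delta = 0.107  # dashboard-reported gap vs the TC 2400 average
print(f"Implied TC 2400 average: {allow_rate - tc_delta:.1%}")
```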

Statute-Specific Performance

§101: 4.3% (-35.7% vs TC avg)
§103: 60.1% (+20.1% vs TC avg)
§102: 12.7% (-27.3% vs TC avg)
§112: 11.2% (-28.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 428 resolved cases

Office Action

§103 §DP
Notice of Pre-AIA or AIA Status

1. The present application is being examined under the pre-AIA first to invent provisions.

2. This Office Action is sent in response to Applicant’s Communication received on January 6, 2025 for application number 19/011,001. This Office hereby acknowledges receipt of the following, which have been placed of record in the file: Specification, Drawings, Oath/Declaration, Abstract and Claims.

3. Claims 1-20 are presented for examination.

Information Disclosure Statement

4. The information disclosure statements (IDS) submitted on January 6, 2025 and November 10, 2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Remarks

5. Examiner notes that this application discloses only subject matter disclosed in first prior application No. 13/188,237, filed on July 21, 2011, and second prior application No. 17/122,434, filed on December 15, 2020, and names the inventor or at least one joint inventor named in the prior applications. Accordingly, this application may constitute a continuation or division. Said first prior application was granted a patent, U.S. Patent No. 10,873,772 B2, and said second prior application was granted a patent, U.S. Patent No. 11,695,973 B2. Examiner reviewed claims 1-37 of U.S. Patent No. 10,873,772 B2 and claims 1-19 of U.S. Patent No. 11,695,973 B2 but could not find any grounds of rejection of the double patenting type.

6. Further, this application also discloses only subject matter disclosed in prior application No. 18/345,616, filed on June 30, 2023, and names the inventor or at least one joint inventor named in the prior application. Accordingly, this application may constitute a continuation or division. Said prior application was abandoned.

Claim Rejections - 35 USC § 103

7. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

8. The following is a quotation of pre-AIA 35 U.S.C. 103(a) which forms the basis for all obviousness rejections set forth in this Office action:

(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under pre-AIA 35 U.S.C. 103(a) are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

9. Claims 1-3, 10-13, 16 and 19 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Lee et al. (US 2006/0159359 A1) (hereinafter Lee) in view of Choi et al. (US 2012/0063517 A1) (hereinafter Choi).

Regarding claims 1, 16 and 19, Lee discloses a method of processing a signal in a hierarchy including multiple levels of quality [See Lee: at least Figs. 1-6 regarding Fine Granularity Scalability (FGS)-based video encoding and decoding method for processing base and enhancement layers], a computer system[See Lee: at least Figs. 1-6 regarding Fine Granularity Scalability (FGS)-based video encoding and decoding apparatus for processing base and enhancement layers], and a non-transitory computer-readable storage medium having instructions stored thereon, the instructions, when carried out by a processing device / processor[See Lee: at least par. 5, 28 regarding The terms "unit" and "module", which are used in the exemplary embodiments of the present invention, denote software components, or hardware components, such as a Field-Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC). Each module executes certain functions. A module can be implemented to reside in an addressable storage medium, or to run on one or more processors. Therefore, as an example, a module includes various components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, sub-routines, segments of program code, drivers, firmware, microcode, circuits, data, databases, data structures, tables, arrays and variables… Moreover, components and modules can be implemented to drive one or more central processing units (CPUs) in a device or security multimedia card.], the method comprising / comprising / causing the processing device to perform operations of: a processor[See Lee: at least par. 28 regarding The terms "unit" and "module", which are used in the exemplary embodiments of the present invention, denote software components, or hardware components, such as a Field-Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC). Each module executes certain functions. A module can be implemented to reside in an addressable storage medium, or to run on one or more processors. Moreover, components and modules can be implemented to drive one or more central processing units (CPUs) in a device.]; a non-transitory computer-readable storage medium that stores instructions that when executed by the processor[See Lee: at least par. 5, 28 regarding A module can be implemented to reside in an addressable storage medium, or to run on one or more processors. Therefore, as an example, a module includes various components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, sub-routines, segments of program code, drivers, firmware, microcode, circuits, data, databases, data structures, tables, arrays and variables], cause the computer system to: generating / generate residual data at a first level of quality in the hierarchy [See Lee: at least Figs. 1-6 and par. 30-32 regarding Since an enhancement layer denotes data to be added to the base layer, the difference between the original frame and the base layer frame is obtained. Residual data obtained by the difference is used later in such a way that a decoder obtains original video data by adding corresponding residual data to the original frame…The difference between the reconstructed base layer frame 102 calculated by the inverse quantization & inverse transform unit 301 and the original frame 101 is obtained by a subtracter 11. Data obtained using the subtracter 11 is transformed and quantized by a transform & quantization unit 202 in order to generate a first enhancement layer frame 502. (Residual data at a first level of quality is generated for a first enhancement layer)]; generating / generate residual data at a second level of quality in the hierarchy [See Lee: at least Figs. 1-6 and par. 30-32, 41, 47, 52-56 regarding Since an enhancement layer denotes data to be added to the base layer, the difference between the original frame and the base layer frame is obtained.
Residual data obtained by the difference is used later in such a way that a decoder obtains original video data by adding corresponding residual data to the original frame…The first enhancement layer frame is added to the reconstructed base layer frame 102 in order to generate a second enhancement layer frame. For this operation, the first enhancement layer frame is reconstructed using an inverse quantization & inverse transform unit 302 so that a first reconstructed enhancement layer frame 103 is generated. The frames 103 and 102 are added to each other by an adder 12 to generate a new frame 104. The difference between the frame 104 and the original frame 101 is obtained by a subtracter 11. Residual data, obtained by the difference, is transformed and quantized by a transform & quantization unit 203 to generate a second enhancement layer frame 503 (Residual data at a second level of quality is generated for a second enhancement layer)]; generating / generate residual data at a third level of quality in the hierarchy [See Lee: at least Figs. 1-6 and par. 30-32, 41, 47, 52-56 regarding Since an enhancement layer denotes data to be added to the base layer, the difference between the original frame and the base layer frame is obtained. Residual data obtained by the difference is used later in such a way that a decoder obtains original video data by adding corresponding residual data to the original frame…The first enhancement layer frame is added to the reconstructed base layer frame 102 in order to generate a second enhancement layer frame. For this operation, the first enhancement layer frame is reconstructed using an inverse quantization & inverse transform unit 302 so that a first reconstructed enhancement layer frame 103 is generated. The frames 103 and 102 are added to each other by an adder 12 to generate a new frame 104. The difference between the frame 104 and the original frame 101 is obtained by a subtracter 11. 
Residual data, obtained by the difference, is transformed and quantized by a transform & quantization unit 203 to generate a second enhancement layer frame 503. The above process is repeated so that a third enhancement layer frame, a fourth enhancement layer frame, and others can be successively generated. (Residual data at a third level of quality is generated for a third enhancement layer)], the second level of quality being higher than the first level of quality and the third level of quality being higher than the second level of quality[See Lee: at least Figs. 1-6 and par. 30-32, 41, 47, 52-56 regarding Since an enhancement layer denotes data to be added to the base layer, the difference between the original frame and the base layer frame is obtained… The difference between the frame 104 and the original frame 101 is obtained by a subtracter 11. Residual data, obtained by the difference, is transformed and quantized by a transform & quantization unit 203 to generate a second enhancement layer frame 503. The above process is repeated so that a third enhancement layer frame, a fourth enhancement layer frame, and others can be successively generated. (Accordingly, a second enhancement layer adds extra data such as higher resolution, quality or frame rate to improve the first enhancement layer. A third enhancement layer adds extra data such as higher resolution, quality or frame rate to improve the second enhancement layer, and so on…)]; obtaining / obtain the residual data at the first level of quality; obtaining / obtain a rendition of the signal at the first level of quality; combining / combine the residual data at the first level of quality with data derived from the rendition of the signal at the first level of quality to generate a rendition of the signal at the second level of quality [See Lee: at least Figs. 1-6 and par. 30-32, 41, 47, 52-56 regarding Since an enhancement layer denotes data to be added to the base layer, the difference between the original frame and the base layer frame is obtained. Residual data obtained by the difference is used later in such a way that a decoder obtains original video data by adding corresponding residual data to the original frame…The first enhancement layer frame is added to the reconstructed base layer frame 102 in order to generate a second enhancement layer frame. For this operation, the first enhancement layer frame is reconstructed using an inverse quantization & inverse transform unit 302 so that a first reconstructed enhancement layer frame 103 is generated. The frames 103 and 102 are added to each other by an adder 12 to generate a new frame 104. The difference between the frame 104 and the original frame 101 is obtained by a subtracter 11. Residual data, obtained by the difference, is transformed and quantized by a transform & quantization unit 203 to generate a second enhancement layer frame 503 (Residual data at a first level of quality is obtained and combined with the first enhancement layer to generate a second enhancement layer)]; obtaining / obtain the residual data at the second level of quality; combining / combine the residual data at the second level of quality with data derived from the rendition of the signal at the second level of quality to generate a rendition of the signal at the third level of quality[See Lee: at least Figs. 1-6 and par. 30-32, 41, 47, 52-56 regarding The first enhancement layer frame is added to the reconstructed base layer frame 102 in order to generate a second enhancement layer frame. For this operation, the first enhancement layer frame is reconstructed using an inverse quantization & inverse transform unit 302 so that a first reconstructed enhancement layer frame 103 is generated. The frames 103 and 102 are added to each other by an adder 12 to generate a new frame 104.
The difference between the frame 104 and the original frame 101 is obtained by a subtracter 11. Residual data, obtained by the difference, is transformed and quantized by a transform & quantization unit 203 to generate a second enhancement layer frame 503. The above process is repeated so that a third enhancement layer frame, a fourth enhancement layer frame, and others can be successively generated. (Residual data at a second level of quality is obtained and combined with the second enhancement layer to generate a third enhancement layer)], and wherein the residual data at the second level of quality and the rendition of the signal at the second level of quality each comprise multiple data elements, wherein each of the residual data elements indicates an adjustment to be made to a corresponding data element of the rendition of the signal at the second level of quality[See Lee: at least Figs. 1-6 and par. 30-32, 41, 47, 52-56 regarding Since an enhancement layer denotes data to be added to the base layer, the difference between the original frame and the base layer frame is obtained. Residual data obtained by the difference is used later in such a way that a decoder obtains original video data by adding corresponding residual data to the original frame…The first enhancement layer frame is added to the reconstructed base layer frame 102 in order to generate a second enhancement layer frame. For this operation, the first enhancement layer frame is reconstructed using an inverse quantization & inverse transform unit 302 so that a first reconstructed enhancement layer frame 103 is generated. The frames 103 and 102 are added to each other by an adder 12 to generate a new frame 104. The difference between the frame 104 and the original frame 101 is obtained by a subtracter 11. 
Residual data, obtained by the difference, is transformed and quantized by a transform & quantization unit 203 to generate a second enhancement layer frame 503 (Residual data at a second level of quality is generated for a second enhancement layer frame. The residual data is obtained by the difference, thus the residual data comprises multiple data elements that indicate an adjustment to render the second enhancement layer frame)]; Lee does not explicitly disclose wherein the residual data at the second level of quality and the rendition of the signal at the second level of quality are of the same size. However, Choi teaches wherein the residual data at the second level of quality and the rendition of the signal at the second level of quality are of the same size[See Choi: at least Figs. 1-10 and par. 6-11, 32-46 regarding The format up-converter 105 is configured to perform an up-conversion in terms of, for example, the size (or the frame rate) or a view point of an input picture, and may be considered to perform a process that is a reverse of the process of the format down-converter 101. The format up-converter 105 up-converts the reconstructed basement layer picture into a picture having the same format as that of the enhancement layer. The input picture that is input to the format down-converter 101 is also input to a subtractor 107. The subtractor 107 outputs residual data obtained by subtracting the up-converted picture, output by format up-converter 105, from the input picture. A residual mapping/scaling unit 109 converts the residual data into a residual picture. The residual picture is input to residual encoder 111, which outputs an enhancement layer bitstream by performing residual encoding on the input residual picture… Referring to FIG. 3, the residual encoder includes three quality layer encoders 301, 303 and 305, the number of which corresponds to the number of quality layers.. 
The second quality layer encoder 303 encodes a picture, corresponding to a difference between the residual picture and the first residual differential picture, into a second bitstream and a second residual differential picture…(Thus, the second residual differential picture and the second bitstream have the same size)].

Therefore, it would have been obvious to one of ordinary skill in the art at the time the invention was made to modify Lee with Choi's teachings by including “wherein the residual data at the second level of quality and the rendition of the signal at the second level of quality are of the same size” because this combination has the benefit of providing residual operations for reducing complexity and refining picture quality in picture encoding and picture decoding [See Choi: at least par. 6-11].

Further, when their teachings are combined, Lee and Choi teach or suggest obtaining / obtain the residual data at the third level of quality; and combining / combine the residual data at the third level of quality with data derived from the rendition of the signal at the third level of quality to generate a rendition of the signal at a fourth level of quality[See Lee: at least Figs. 1-6 and par. 30-32, 41, 47, 52-56 regarding The first enhancement layer frame is added to the reconstructed base layer frame 102 in order to generate a second enhancement layer frame. For this operation, the first enhancement layer frame is reconstructed using an inverse quantization & inverse transform unit 302 so that a first reconstructed enhancement layer frame 103 is generated. The frames 103 and 102 are added to each other by an adder 12 to generate a new frame 104. The difference between the frame 104 and the original frame 101 is obtained by a subtracter 11. Residual data, obtained by the difference, is transformed and quantized by a transform & quantization unit 203 to generate a second enhancement layer frame 503.
The above process is repeated so that a third enhancement layer frame, a fourth enhancement layer frame, and others can be successively generated. (Residual data at a third level of quality is obtained and combined with the third enhancement layer to generate a fourth enhancement layer. Further, other layers or quality levels can be successively generated in the same manner) See Choi: at least Figs. 1-10 and par. 6-11, 32-46 regarding Referring to FIG. 3, the residual encoder includes three quality layer encoders 301, 303 and 305, the number of which corresponds to the number of quality layers.. The third quality layer encoder 305 encodes a picture, corresponding to a difference between the residual picture and the second residual differential picture, into a third bitstream and a third residual differential picture. The first to third residual differential pictures are inputs of the selective motion compensator 307…Although the residual encoder in FIG. 3 has three quality layers, this is merely for the sake of providing a teaching example; the number of quality layers is subject to change.(Thus, a fourth quality layer can also be configured in a similar manner)], wherein the residual data at the third level of quality and the rendition of the signal at the third level of quality each comprise multiple data elements, wherein each of the residual data elements indicates an adjustment to be made to a corresponding data element of the rendition of the signal at the third level of quality[See Lee: at least Figs. 1-6 and par. 30-32, 41, 47, 52-56 regarding The first enhancement layer frame is added to the reconstructed base layer frame 102 in order to generate a second enhancement layer frame. For this operation, the first enhancement layer frame is reconstructed using an inverse quantization & inverse transform unit 302 so that a first reconstructed enhancement layer frame 103 is generated. 
The frames 103 and 102 are added to each other by an adder 12 to generate a new frame 104. The difference between the frame 104 and the original frame 101 is obtained by a subtracter 11. Residual data, obtained by the difference, is transformed and quantized by a transform & quantization unit 203 to generate a second enhancement layer frame 503. The above process is repeated so that a third enhancement layer frame, a fourth enhancement layer frame, and others can be successively generated. (Residual data at a third level of quality is generated for a third enhancement layer. The residual data is obtained by the difference, thus the residual data comprises multiple data elements that indicate an adjustment to render the third enhancement layer frame. Further, residual data for other layers or quality levels can be successively generated in the same manner)], wherein the residual data at the third level of quality and the rendition of the signal at the third level of quality are of the same size[See Choi: at least Figs. 1-10 and par. 6-11, 32-46 regarding The format up-converter 105 is configured to perform an up-conversion in terms of, for example, the size (or the frame rate) or a view point of an input picture, and may be considered to perform a process that is a reverse of the process of the format down-converter 101. The format up-converter 105 up-converts the reconstructed basement layer picture into a picture having the same format as that of the enhancement layer. The input picture that is input to the format down-converter 101 is also input to a subtractor 107. The subtractor 107 outputs residual data obtained by subtracting the up-converted picture, output by format up-converter 105, from the input picture. A residual mapping/scaling unit 109 converts the residual data into a residual picture. 
The residual picture is input to residual encoder 111, which outputs an enhancement layer bitstream by performing residual encoding on the input residual picture… Referring to FIG. 3, the residual encoder includes three quality layer encoders 301, 303 and 305, the number of which corresponds to the number of quality layers.. The third quality layer encoder 305 encodes a picture, corresponding to a difference between the residual picture and the second residual differential picture, into a third bitstream and a third residual differential picture. The first to third residual differential pictures are inputs of the selective motion compensator 307…Although the residual encoder in FIG. 3 has three quality layers, this is merely for the sake of providing a teaching example; the number of quality layers is subject to change…(Thus, the third residual differential picture and the third bitstream have the same size)].

Regarding claim 2, Lee and Choi teach all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Further on, Lee and Choi teach or suggest wherein lower levels of quality are associated with coarser attributes of the signal and higher levels of quality are associated with finer attributes of the signal[See Lee: at least par. 7, 54 regarding Fine Granularity Scalability (FGS) encodes the base layer and the enhancement layer.. Next, residual data, obtained by the difference between the base layer, generated in step S103, and the original data generated in step S101, is extracted, so the enhancement layer is generated in step S105. In order to generate the enhancement layer, various fine-granular schemes can be used. (Accordingly, a base layer is associated with coarser attributes when compared to an enhancement layer that is associated with finer attributes) See Choi: at least par. 2, 24-30, 46, 59 regarding a hierarchical picture encoding/decoding method and apparatus for refining picture quality using residual pictures in video compression codec processing videos… Quality refinement: a process for refining the quality of residual samples reconstructed using refined data. Quality layers: one or more layers used in the quality refinement process. Quality basement layer: a layer representing the lowest-quality picture among reconstructed pictures, among multiple quality layers in one picture. Quality enhancement layer: a layer representing a high-quality picture among reconstructed pictures, among multiple quality layers in one picture.].

Regarding claim 3, Lee and Choi teach all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Further on, Lee and Choi teach or suggest wherein at least a portion of the method is performed on a decoder [See Lee: at least par. 7-17 regarding Scalability is a technique using a base layer and an enhancement layer, and allowing a decoder to observe the processing status, network status, and others, and to perform selective decoding with respect to time, space, or the Signal-to-Noise Ratio (SNR). Of scalabilities, Fine Granularity Scalability (FGS) encodes the base layer and the enhancement layer. After the enhancement layer has been encoded, the encoded enhancement layer may not be transmitted or decoded according to the transmission efficiency of a network or the status of a decoder. Through FGS, data can be suitably transmitted according to a bit rate…(Thus, at least a portion of the method is performed by the decoder) See Choi: at least Figs. 2, 7-8, 10 regarding residual decoding process and residual decoder].

Regarding claim 10, Lee and Choi teach all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim.
Further on, Lee and Choi teach or suggest further comprising, at each level of quality: applying one or more operations to the rendition of the signal that is derived from a lower level of quality prior to said combining [See Lee: at least Figs. 1-6 and par. 30-32, 41, 47, 52-56 regarding The first enhancement layer frame is added to the reconstructed base layer frame 102 in order to generate a second enhancement layer frame. For this operation, the first enhancement layer frame is reconstructed using an inverse quantization & inverse transform unit 302 so that a first reconstructed enhancement layer frame 103 is generated. The frames 103 and 102 are added to each other by an adder 12 to generate a new frame 104. The difference between the frame 104 and the original frame 101 is obtained by a subtracter 11. Residual data, obtained by the difference, is transformed and quantized by a transform & quantization unit 203 to generate a second enhancement layer frame 503. The above process is repeated so that a third enhancement layer frame, a fourth enhancement layer frame, and others can be successively generated. (Residual data at each level of quality is obtained prior to being combined with an enhancement layer. Further, other layers or quality levels can be successively generated in the same manner) See Choi: at least Figs. 1-10 and par. 6-11, 32-46 regarding Referring to FIG. 3, the residual encoder includes three quality layer encoders 301, 303 and 305, the number of which corresponds to the number of quality layers.. The third quality layer encoder 305 encodes a picture, corresponding to a difference between the residual picture and the second residual differential picture, into a third bitstream and a third residual differential picture. The first to third residual differential pictures are inputs of the selective motion compensator 307…Although the residual encoder in FIG. 3 has three quality layers, this is merely for the sake of providing a teaching example; the number of quality layers is subject to change. (Thus, each quality layer can also be configured in a similar manner where the residual picture is obtained prior to the combining step)].

Regarding claim 11, Lee and Choi teach all of the limitations of claim 10, and are analyzed as previously discussed with respect to that claim. Further on, Lee and Choi teach or suggest wherein the operations are filtering operations [See Lee: at least Figs. 1-6, par. 8-10, 52, 67 regarding Accordingly, at the time of decoding video, visible boundaries between blocks may appear. The operation of smoothing the boundaries between blocks is called deblocking, and a component for smoothing the boundaries is called a deblocking filter…See Choi: at least Figs. 1-8, par. 33-59 regarding The format up-converter 105 is configured to perform an up-conversion in terms of, for example, the size (or the frame rate) or a view point of an input picture, and may be considered to perform a process that is a reverse of the process of the format down-converter 101. The format up-converter 105 up-converts the reconstructed basement layer picture into a picture having the same format as that of the enhancement layer. The input picture that is input to the format down-converter 101 is also input to a subtractor 107. The subtractor 107 outputs residual data obtained by subtracting the up-converted picture, output by format up-converter 105, from the input picture. A residual mapping/scaling unit 109 converts the residual data into a residual picture. The residual picture is input to residual encoder 111, which outputs an enhancement layer bitstream by performing residual encoding on the input residual picture…].

Regarding claim 12, Lee and Choi teach all of the limitations of claim 10, and are analyzed as previously discussed with respect to that claim.
Further on, Choi teaches or suggests wherein the operations are non-linear functions[See Choi: at least Figs. 1-8, par. 33-59 regaridng The format up-converter 105 is configured to perform an up-conversion in terms of, for example, the size (or the frame rate) or a view point of an input picture, and may be considered to perform a process that is a reverse of the process of the format down-converter 101. The format up-converter 105 up-converts the reconstructed basement layer picture into a picture having the same format as that of the enhancement layer. The input picture that is input to the format down-converter 101 is also input to a subtractor 107. The subtractor 107 outputs residual data obtained by subtracting the up-converted picture, output by format up-converter 105, from the input picture. A residual mapping/scaling unit 109 converts the residual data into a residual picture. The residual picture is input to residual encoder 111, which outputs an enhancement layer bitstream by performing residual encoding on the input residual picture…]. Regarding claim 13, Lee and Choi teach all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Further on, Lee and Choi teach or suggest wherein combining comprises adding residual data elements to corresponding signal elements [See Lee: at least Figs. 1-6 and par. 30-32, 41, 47, 52-56 regarding Since an enhancement layer denotes data to be added to the base layer, the difference between the original frame and the base layer frame is obtained. Residual data obtained by the difference is used later in such a way that a decoder obtains original video data by adding corresponding residual data to the original frame…The first enhancement layer frame is added to the reconstructed base layer frame 102 in order to generate a second enhancement layer frame. 
For this operation, the first enhancement layer frame is reconstructed using an inverse quantization & inverse transform unit 302 so that a first reconstructed enhancement layer frame 103 is generated. The frames 103 and 102 are added to each other by an adder 12 to generate a new frame 104. The difference between the frame 104 and the original frame 101 is obtained by a subtracter 11. Residual data, obtained by the difference, is transformed and quantized by a transform & quantization unit 203 to generate a second enhancement layer frame 503. The above process is repeated so that a third enhancement layer frame, a fourth enhancement layer frame, and others can be successively generated….See Choi: at least Figs. 1-10 and par. 6-11, 32-46, 53-63 regarding The residual data is next added to the format up-converted picture by an adder 209, and the result is the generation of a reconstructed enhancement layer picture. Referring to FIG. 3, the residual encoder includes three quality layer encoders 301, 303 and 305, the number of which corresponds to the number of quality layers.. The third quality layer encoder 305 encodes a picture, corresponding to a difference between the residual picture and the second residual differential picture, into a third bitstream and a third residual differential picture. The first to third residual differential pictures are inputs of the selective motion compensator 307…Although the residual encoder in FIG. 3 has three quality layers, this is merely for the sake of providing a teaching example; the number of quality layers is subject to change…]. 10. Claims 4, 5 and 6 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Lee et al.(US 2006/0159359 A1)(hereinafter Lee) in further view of CHOI et al.(US 2012/0063517 A1)(hereinafter Choi) in further view of Lee et al.(US 2005/0152611 A1)(hereinafter Lee2). 
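The residual-generation and combining steps quoted from Lee and Choi above (subtractor 107 / adder 12 and 209) can be sketched in a few lines. This is only an illustrative sketch: the 4×4 picture, the nearest-neighbour 2× up-conversion, and the variable names are assumptions and are not taken from either reference.

```python
def upsample2x(base):
    """Nearest-neighbour 2x up-conversion (illustrative stand-in for
    Choi's format up-converter 105)."""
    out = []
    for row in base:
        wide = [v for v in row for _ in range(2)]  # duplicate columns
        out.append(wide)
        out.append(list(wide))                     # duplicate rows
    return out

# Hypothetical 4x4 "original" picture and its 2x2 base-layer rendition.
original = [[r * 4 + c for c in range(4)] for r in range(4)]
base = [row[::2] for row in original[::2]]         # crude down-conversion

# Encoder side: residual = input picture minus up-converted base picture
# (cf. Choi's subtractor 107).
up = upsample2x(base)
residual = [[o - u for o, u in zip(orow, urow)]
            for orow, urow in zip(original, up)]

# Decoder side: combining adds residual elements to the corresponding
# signal elements (cf. Lee's adder 12 / Choi's adder 209).
reconstructed = [[u + r for u, r in zip(urow, rrow)]
                 for urow, rrow in zip(up, residual)]

assert reconstructed == original
```

The same pattern repeats per quality layer in the cited hierarchies: each layer's residual is computed against the rendition derived from the layer below, and the decoder re-adds it.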
Regarding claim 4, Lee and Choi teach all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Lee and Choi do not explicitly disclose wherein processing at a given level of quality comprises: dividing a data element of the residual data at the given level of quality into a plurality of data elements. However, dividing a data element of the residual data at the given level of quality into a plurality of data elements was well known in the art at the time of the invention was made as evident from the teaching of Lee2[See Lee2: Figs. 2-12 and par. 18-24, 62-68, 90-109 regarding FIG. 3 is a flowchart of a wavelet-based scalable video encoding method in which a motion compensated residual is compressed using the tiling method shown in FIG. 2. Motion estimation is performed with respect to an input video 10 in step S110. Temporal filtering is performed using a motion vector obtained from the motion estimation in step S120. A spatial domain, i.e., a motion compensated residual frame resulting from the temporal filtering, is divided into a plurality of tiles or blocks T0, T1, . . . , Tn-1, Tn in step S130…In Fig. 11, the bitstream 20 received from the encoder 300 is decomposed into bitstreams for respective wavelet blocks in step S310. The decomposed bitstreams, i.e., wavelet blocks WB are allocated bit rates, respectively, in steps S320 through S323. For allocation of bit rates, a target bit rate is determined, and higher bit rates are allocated to portions determined as being more important than other portions such that the sum of allocated bit rates becomes the target bit rate… In Fig. 12, the bitstream 25 received from the pre-decoder 350 is decomposed into bitstreams for respective wavelet blocks in step S410. Inverse embedded quantization is individually performed on the decomposed bitstreams, thereby obtaining wavelet coefficients arranged in the wavelet blocks WB in steps S420 through S423…]. 
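Lee2's tiling of a motion-compensated residual frame into blocks T0…Tn (step S130), with each block then handled independently, can be sketched as follows. The tile size, the per-tile operation, and the use of a thread pool to stand in for "one or more processors" are all illustrative assumptions, not details from Lee2.

```python
from concurrent.futures import ThreadPoolExecutor

def split_tiles(frame, tile):
    """Divide a frame into tile x tile blocks (cf. Lee2 step S130)."""
    h, w = len(frame), len(frame[0])
    return [[row[c:c + tile] for row in frame[r:r + tile]]
            for r in range(0, h, tile)
            for c in range(0, w, tile)]

def encode_tile(block):
    # Stand-in per-tile operation; a real encoder would transform,
    # quantize, and entropy-code each tile independently.
    return sum(sum(row) for row in block)

frame = [[1] * 8 for _ in range(8)]      # hypothetical 8x8 residual frame
tiles = split_tiles(frame, 4)            # four 4x4 tiles

with ThreadPoolExecutor() as pool:       # tiles processed in parallel
    results = list(pool.map(encode_tile, tiles))

assert len(tiles) == 4
assert results == [16, 16, 16, 16]
```

Because each tile is self-contained, the same structure supports the per-tile bit-rate allocation and inverse quantization steps (S320-S323, S420-S423) that Lee2 describes.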
Therefore, it would have been obvious to one of ordinary skill in the art at the time of the invention was made to modify Lee and Choi with Lee2 teachings by including “wherein processing at a given level of quality comprises: dividing a data element of the residual data at the given level of quality into a plurality of data elements” because this combination has the benefit of providing an improved tiling method for scalable video coding [See Lee2: at least par. 3-30]. Regarding claim 5, Lee and Choi teach all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Lee and Choi do not explicitly disclose wherein the signal comprises image or video data and the method comprises: partitioning an image or frame of video data into a plurality of tiles, wherein the method is performed with residual data and renditions of the signal for each of the plurality of tiles. However, partitioning an image or frame of video data into a plurality of tiles, wherein the method is performed with residual data and renditions of the signal for each of the plurality of tiles was well known in the art at the time of the invention was made as evident from the teaching of Lee2[See Lee2: Figs. 2-12 and par. 18-24, 62-68, 90-109 regarding FIG. 3 is a flowchart of a wavelet-based scalable video encoding method in which a motion compensated residual is compressed using the tiling method shown in FIG. 2. Motion estimation is performed with respect to an input video 10 in step S110. Temporal filtering is performed using a motion vector obtained from the motion estimation in step S120. A spatial domain, i.e., a motion compensated residual frame resulting from the temporal filtering, is divided into a plurality of tiles or blocks T0, T1, . . . , Tn-1, Tn in step S130…In Fig. 11, the bitstream 20 received from the encoder 300 is decomposed into bitstreams for respective wavelet blocks in step S310. 
The decomposed bitstreams, i.e., wavelet blocks WB are allocated bit rates, respectively, in steps S320 through S323. For allocation of bit rates, a target bit rate is determined, and higher bit rates are allocated to portions determined as being more important than other portions such that the sum of allocated bit rates becomes the target bit rate… In Fig. 12, the bitstream 25 received from the pre-decoder 350 is decomposed into bitstreams for respective wavelet blocks in step S410. Inverse embedded quantization is individually performed on the decomposed bitstreams, thereby obtaining wavelet coefficients arranged in the wavelet blocks WB in steps S420 through S423…]. Therefore, it would have been obvious to one of ordinary skill in the art at the time of the invention was made to modify Lee and Choi with Lee2 teachings by including “wherein the signal comprises image or video data and the method comprises: partitioning an image or frame of video data into a plurality of tiles, wherein the method is performed with residual data and renditions of the signal for each of the plurality of tiles” because this combination has the benefit of providing an improved tiling method for scalable video coding [See Lee2: at least par. 3-30]. Regarding claim 6, Lee, Choi and Lee2 teach all of the limitations of claim 5, and are analyzed as previously discussed with respect to that claim. Further on, Lee2 teaches or suggests wherein data relating to the plurality of tiles is processed in parallel using one or more processors [See Lee2: Figs. 2-12 and par. 18-24, 62-68, 90-109 regarding FIG. 3 is a flowchart of a wavelet-based scalable video encoding method in which a motion compensated residual is compressed using the tiling method shown in FIG. 2. Motion estimation is performed with respect to an input video 10 in step S110. Temporal filtering is performed using a motion vector obtained from the motion estimation in step S120. 
A spatial domain, i.e., a motion compensated residual frame resulting from the temporal filtering, is divided into a plurality of tiles or blocks T0, T1, . . . , Tn-1, Tn in step S130…In Fig. 11, the bitstream 20 received from the encoder 300 is decomposed into bitstreams for respective wavelet blocks in step S310. The decomposed bitstreams, i.e., wavelet blocks WB are allocated bit rates, respectively, in steps S320 through S323. For allocation of bit rates, a target bit rate is determined, and higher bit rates are allocated to portions determined as being more important than other portions such that the sum of allocated bit rates becomes the target bit rate… In Fig. 12, the bitstream 25 received from the pre-decoder 350 is decomposed into bitstreams for respective wavelet blocks in step S410. Inverse embedded quantization is individually performed on the decomposed bitstreams, thereby obtaining wavelet coefficients arranged in the wavelet blocks WB in steps S420 through S423…(As shown in the figures, the data related to the tiles/wavelet blocks is processed in parallel)]. 11. Claims 7, 8, 17 and 20 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Lee et al.(US 2006/0159359 A1)(hereinafter Lee) in further view of CHOI et al.(US 2012/0063517 A1)(hereinafter Choi) in further view of Su et al.(US 2014/0050271 A1)(hereinafter Su). Regarding claim 7, Lee and Choi teach all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Lee and Choi do not explicitly disclose wherein generating residual data at one or more of the first to third levels of quality comprises: applying a differentiable function to the residual data. However, applying a differentiable function to residual data at one or more levels of quality in video coding was well known in the art at the time of the invention was made as evident from the teaching of Su[See Su: at least Figs. 1-4 and par. 
34-51 regarding For example, in a VDR-SDR system, the base layer 337 may represent the SDR representation of the coded signal and the metadata 335 may include information related to the prediction (250) and quantization (210) steps used in the encoder. Residual 332 is decoded (340), de-quantized (350), and added to the output 395 of the predictor 390 to generate the output VDR signal 370. In an example embodiment of this invention, novel, non-linear de-quantizers based on the characteristics of sigmoid transfer functions, such as the µ-law (µ-law) transfer function, are described. As shown in Figure 4, the sigmoid function is a differentiable function.]. Therefore, it would have been obvious to one of ordinary skill in the art at the time of the invention was made to modify Lee and Choi with Su teachings by including “wherein generating residual data at one or more of the first to third levels of quality comprises: applying a differentiable function to the residual data” because this combination has the benefit of providing more efficient residual operations in layered / scalable coding [See Su: at least par. 2-16]. Regarding claim 8, Lee, Choi and Su teach all of the limitations of claim 7, and are analyzed as previously discussed with respect to that claim. Further on, Su teaches or suggests wherein the differentiable function simulates a step function [See Su: at least Figs. 1-4 and par. 34-51 regarding For example, in a VDR-SDR system, the base layer 337 may represent the SDR representation of the coded signal and the metadata 335 may include information related to the prediction (250) and quantization (210) steps used in the encoder. Residual 332 is decoded (340), de-quantized (350), and added to the output 395 of the predictor 390 to generate the output VDR signal 370. In an example embodiment of this invention, novel, non-linear de-quantizers based on the characteristics of sigmoid transfer functions, such as the µ-law (µ-law) transfer function, are described. 
From FIG. 4, one may note that equation (5) resembles a sigmoid function where mu controls the slope of the function for its midrange input values. For large values of mu, c(x) is almost linear in the midrange. (As shown in Figure 4, the sigmoid function is a differentiable function that for large values of mu simulates a step function)]. Regarding claims 17 and 20, Lee and Choi teach all of the limitations of claims 16 and 19, and are analyzed as previously discussed with respect to those claims. Further on, Lee and Choi teach or suggest wherein: lower levels of quality are associated with coarser attributes of the signal and higher levels of quality are associated with finer attributes of the signal[See Lee: at least par. 7, 54 regarding Fine Granularity Scalability (FGS) encodes the base layer and the enhancement layer.. Next, residual data, obtained by the difference between the base layer, generated in step S103, and the original data generated in step S101, is extracted, so the enhancement layer is generated in step S105. In order to generate the enhancement layer, various fine-granular schemes can be used.(Accordingly, a base layer is associated with coarser attributes when compared to an enhancement layer that is associated with finer attributes) See Choi: at least par. 2, 24-30, 46, 59 regarding a hierarchical picture encoding/decoding method and apparatus for refining picture quality using residual pictures in video compression codec processing videos… Quality refinement: a process for refining the quality of residual samples reconstructed using refined data. Quality layers: one or more layers used in the quality refinement process. Quality basement layer: a layer representing the lowest-quality picture among reconstructed pictures, among multiple quality layers in one picture. 
Quality enhancement layer: a layer representing a high-quality picture among reconstructed pictures, among multiple quality layers in one picture.]; at least a portion of the method is performed on a decoder[See Lee: at least par. 7-17 regarding Scalability is a technique using a base layer and an enhancement layer, and allowing a decoder to observe the processing status, network status, and others, and to perform selective decoding with respect to time, space, or the Signal-to-Noise Ratio (SNR). Of scalabilities, Fine Granularity Scalability (FGS) encodes the base layer and the enhancement layer. After the enhancement layer has been encoded, the encoded enhancement layer may not be transmitted or decoded according to the transmission efficiency of a network or the status of a decoder. Through FGS, data can be suitably transmitted according to a bit rate…(Thus, at least a portion of the method is performed by the decoder) See Choi: at least Figs. 2, 7-8, 10 regarding residual decoding process and residual decoder]; and combining comprises adding residual data elements to corresponding signal elements[See Lee: at least Figs. 1-6 and par. 30-32, 41, 47, 52-56 regarding Since an enhancement layer denotes data to be added to the base layer, the difference between the original frame and the base layer frame is obtained. Residual data obtained by the difference is used later in such a way that a decoder obtains original video data by adding corresponding residual data to the original frame…The first enhancement layer frame is added to the reconstructed base layer frame 102 in order to generate a second enhancement layer frame. For this operation, the first enhancement layer frame is reconstructed using an inverse quantization & inverse transform unit 302 so that a first reconstructed enhancement layer frame 103 is generated. The frames 103 and 102 are added to each other by an adder 12 to generate a new frame 104. 
The difference between the frame 104 and the original frame 101 is obtained by a subtracter 11. Residual data, obtained by the difference, is transformed and quantized by a transform & quantization unit 203 to generate a second enhancement layer frame 503. The above process is repeated so that a third enhancement layer frame, a fourth enhancement layer frame, and others can be successively generated….See Choi: at least Figs. 1-10 and par. 6-11, 32-46, 53-63 regarding The residual data is next added to the format up-converted picture by an adder 209, and the result is the generation of a reconstructed enhancement layer picture. Referring to FIG. 3, the residual encoder includes three quality layer encoders 301, 303 and 305, the number of which corresponds to the number of quality layers. The third quality layer encoder 305 encodes a picture, corresponding to a difference between the residual picture and the second residual differential picture, into a third bitstream and a third residual differential picture. The first to third residual differential pictures are inputs of the selective motion compensator 307…Although the residual encoder in FIG. 3 has three quality layers, this is merely for the sake of providing a teaching example; the number of quality layers is subject to change…]. Lee and Choi do not explicitly disclose that generating residual data at one or more of the first to third levels of quality comprises applying a differentiable function to the residual data. However, applying a differentiable function to residual data at one or more levels of quality in video coding was well known in the art at the time of the invention was made as evident from the teaching of Su [See Su: at least Figs. 1-4 and par. 34-51 regarding For example, in a VDR-SDR system, the base layer 337 may represent the SDR representation of the coded signal and the metadata 335 may include information related to the prediction (250) and quantization (210) steps used in the encoder. 
Residual 332 is decoded (340), de-quantized (350), and added to the output 395 of the predictor 390 to generate the output VDR signal 370. In an example embodiment of this invention, novel, non-linear de-quantizers based on the characteristics of sigmoid transfer functions, such as the µ-law (µ-law) transfer function, are described. As shown in Figure 4, the sigmoid function is a differentiable function.]. Therefore, it would have been obvious to one of ordinary skill in the art at the time of the invention was made to modify Lee and Choi with Su teachings by including “generating residual data at one or more of the first to third levels of quality comprises applying a differentiable function to the residual data” because this combination has the benefit of providing more efficient residual operations in layered / scalable coding [See Su: at least par. 2-16]. 12. Claim 9 is rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Lee et al.(US 2006/0159359 A1)(hereinafter Lee) in further view of CHOI et al.(US 20120063517 A1)(hereinafter Choi) in further view of Karczewicz(US 2008/0089425 A1)(hereinafter Karczewicz). Regarding claim 9, Lee and Choi teach all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Lee and Choi do not explicitly disclose wherein the method is performed using one or more graphical processing units. However, the use of graphical processing units for scalable or hierarchical video coding systems was well known in the art at the time of the invention was made as evident from the teaching of Karczewicz[See Karczewicz: at least Fig. 14 and par. 75-81 regarding The digital section 420 includes various processing, interface and memory units such as, for example, a modem processor 422, a video processor 424, a controller/processor 426, a display processor 428, an ARM/DSP 432, a graphics processing unit (GPU) 434, an internal memory 436, and an external bus interface (EBI) 438.]. 
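The sigmoid/µ-law behaviour that Su is cited for in claims 7, 8, 17 and 20 can be illustrated numerically. The Office Action does not reproduce Su's equation (5), so the standard µ-law compander below is an assumption used only to show the quoted point: the function is smooth (differentiable) everywhere, yet for large µ it approximates a step function.

```python
import math

def mu_law(x, mu):
    """Standard mu-law compander (assumed stand-in for Su's equation (5)).
    Smooth and sign-symmetric; the slope near zero grows with mu."""
    return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)

# Moderate mu: nearly linear in the midrange, as the quoted passage notes.
assert abs(mu_law(0.5, mu=1.0) - 0.5) < 0.1

# Large mu: a tiny input is already mapped close to +/-1 -- step-like.
assert mu_law(0.01, mu=1e6) > 0.6
assert mu_law(-0.01, mu=1e6) < -0.6

# Still passes smoothly through zero rather than jumping.
assert mu_law(0.0, mu=255.0) == 0.0
```

This is the sense in which a differentiable function can "simulate a step function": as µ grows, the transfer curve steepens around zero while remaining differentiable at every point.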
Therefore, it would have been obvious to one of ordinary skill in the art at the time of the invention was made to modify Lee and Choi with Karczewicz teachings by including “wherein the method is performed using one or more graphical processing units” because this combination has the benefit of incorporating graphical processing units to assist in the scalable/hierarchical video coding and processing. 13. Claim 14 is rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Lee et al.(US 2006/0159359 A1)(hereinafter Lee) in further view of CHOI et al.(US 20120063517 A1)(hereinafter Choi) in further view of Borer(US 2007/0223582 A1)(hereinafter Borer). Regarding claim 14, Lee and Choi teach all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Lee and Choi do not explicitly disclose comprising: generating probability distribution information for use in decoding at least one symbol derived from the residual data. However, generating probability distribution information for use in decoding at least one symbol derived from the residual data was well known in the art at the time of the invention was made as evident from the teaching of Borer[See Borer: at least Figs. 1-33, par. 153-160 , 177-213 regarding In general, the noise introduced by the quantisation and inverse quantisation process is proportional to the quantisation factor. The constant of proportionality varies with the type of quantiser used and with the probability density function (pdf) of the value that are input to the quantiser. The constant of proportionality may also vary with the quantised value in a non-uniform quantiser… Conceptually, an arithmetic coder can be thought of a progressive way of producing variable-length codes for entire sequences of symbols based on the probabilities of their constituent symbols. 
For example, if we know the probability of 0 and 1 in a binary sequence, we also know the probability of the sequence itself occurring... The present system computes these estimates for each context simply by counting their occurrences. In order for the decoder to be in the same state as the encoder, these statistics cannot be updated until after a binary symbol has been encoded. This means that the contexts must be initialised with a count for both 0 and 1, which is used for encoding the first symbol in that context…]. Therefore, it would have been obvious to one of ordinary skill in the art at the time of the invention was made to modify Lee and Choi with Borer teachings by including “comprising: generating probability distribution information for use in decoding at least one symbol derived from the residual data” because this combination has the benefit of incorporating probability distribution information to derive symbol from the hierarchical or scalable decoding of residual data. 14. Claims 15 and 18 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Lee et al.(US 2006/0159359 A1)(hereinafter Lee) in further view of CHOI et al.(US 20120063517 A1)(hereinafter Choi) in further view of Chen et al.(US 2009/0262801 A1)(hereinafter Chen). Regarding claim 15, Lee and Choi teach all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Lee and Choi do not explicitly disclose wherein generating residual data comprises: applying a dead zone to the residual data to generate adjusted residual data. However, applying dead zones in hierarchical or scalable coding was well known in the art at the time of the invention was made as evident from the teaching of Chen[See Chen: at least par. 26-28, 38 regarding The residual information is typically transformed from a pixel domain to a transform domain, e.g., using discrete cosine transformation (DCT). The residual coefficients are then typically quantized. 
The quantization process is often used to provide rate control in the video coding scheme…The dead zone refers to a region of magnitude for coefficients below which any coefficient will be quantized to zero. That is to say, if a coefficient magnitude of a given coefficient is in the dead zone, quantization of that given coefficient will result in a value of zero. As described in greater detail below, the dead zone may be defined by both the QP defined for the video coding and also a so-called dead zone parameter. The techniques of this disclosure may allow for finer control over the coding rate than can be achieved solely through adjustment of a QP. To do so, this disclosure provides the selection of so-called "dead zone parameters" for video blocks of residual coefficients. The dead zone parameter (f) is a parameter that, together with the QP, defines the dead zone. The dead zone refers to a region of magnitude for the coefficients below which any coefficient will be quantized to zero…]. Therefore, it would have been obvious to one of ordinary skill in the art at the time of the invention was made to modify Lee and Choi with Chen teachings by including “wherein generating residual data comprises: applying a dead zone to the residual data to generate adjusted residual data” because this combination has the benefit of providing operations to apply dead zones to residual data for fine control over the coding rate [See Chen: at least par. 2-14]. Regarding claim 18, Lee and Choi teach all of the limitations of claim 17, and are analyzed as previously discussed with respect to that claim. Further on, Lee and Choi teach or suggest wherein generating residual data at one or more of the first to third levels of quality comprises applying one or more operations to the rendition of the signal that is derived from a lower level of quality prior to said combining [See Lee: at least Figs. 1-6 and par. 
30-32, 41, 47, 52-56 regarding The first enhancement layer frame is added to the reconstructed base layer frame 102 in order to generate a second enhancement layer frame. For this operation, the first enhancement layer frame is reconstructed using an inverse quantization & inverse transform unit 302 so that a first reconstructed enhancement layer frame 103 is generated. The frames 103 and 102 are added to each other by an adder 12 to generate a new frame 104. The difference between the frame 104 and the original frame 101 is obtained by a subtracter 11. Residual data, obtained by the difference, is transformed and quantized by a transform & quantization unit 203 to generate a second enhancement layer frame 503. The above process is repeated so that a third enhancement layer frame, a fourth enhancement layer frame, and others can be successively generated. (Residual data at each level of quality is obtained prior to combined with an enhancement layer. Further, other layers or quality levels can be successively generated in the same manner) See Choi: at least Figs. 1-10 and par. 6-11, 32-46 regarding Referring to FIG. 3, the residual encoder includes three quality layer encoders 301, 303 and 305, the number of which corresponds to the number of quality layers.. The third quality layer encoder 305 encodes a picture, corresponding to a difference between the residual picture and the second residual differential picture, into a third bitstream and a third residual differential picture. The first to third residual differential pictures are inputs of the selective motion compensator 307…Although the residual encoder in FIG. 3 has three quality layers, this is merely for the sake of providing a teaching example; the number of quality layers is subject to change.(Thus, each quality layer can also be configured in a similar manner where the residual picture is obtained prior to the combining step)], said operations being non-linear functions[See Choi: at least Figs. 
1-8, par. 33-59 regarding The format up-converter 105 is configured to perform an up-conversion in terms of, for example, the size (or the frame rate) or a view point of an input picture, and may be considered to perform a process that is a reverse of the process of the format down-converter 101. The format up-converter 105 up-converts the reconstructed basement layer picture into a picture having the same format as that of the enhancement layer. The input picture that is input to the format down-converter 101 is also input to a subtractor 107. The subtractor 107 outputs residual data obtained by subtracting the up-converted picture, output by format up-converter 105, from the input picture. A residual mapping/scaling unit 109 converts the residual data into a residual picture. The residual picture is input to residual encoder 111, which outputs an enhancement layer bitstream by performing residual encoding on the input residual picture…]. Lee and Choi do not explicitly disclose applying a dead zone to the residual data to generate adjusted residual data. However, applying dead zones in hierarchical or scalable coding was well known in the art at the time of the invention was made as evident from the teaching of Chen [See Chen: at least par. 26-28, 38 regarding The residual information is typically transformed from a pixel domain to a transform domain, e.g., using discrete cosine transformation (DCT). The residual coefficients are then typically quantized. The quantization process is often used to provide rate control in the video coding scheme…The dead zone refers to a region of magnitude for coefficients below which any coefficient will be quantized to zero. That is to say, if a coefficient magnitude of a given coefficient is in the dead zone, quantization of that given coefficient will result in a value of zero. As described in greater detail below, the dead zone may be defined by both the QP defined for the video coding and also a so-called dead zone parameter. 
The techniques of this disclosure may allow for finer control over the coding rate than can be achieved solely through adjustment of a QP. To do so, this disclosure provides the selection of so-called "dead zone parameters" for video blocks of residual coefficients. The dead zone parameter (f) is a parameter that, together with the QP, defines the dead zone. The dead zone refers to a region of magnitude for the coefficients below which any coefficient will be quantized to zero…]. Therefore, it would have been obvious to one of ordinary skill in the art at the time of the invention was made to modify Lee and Choi with Chen teachings by including “applying a dead zone to the residual data to generate adjusted residual data” because this combination has the benefit of providing operations to apply dead zones to residual data for fine control over the coding rate[See Chen: at least par. 2-14]. Conclusion 15. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANA J PICON-FELICIANO whose telephone number is (571)272-5252. The examiner can normally be reached Monday-Friday 9:00-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christopher Kelley can be reached at 571 272 7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Ana Picon-Feliciano/Examiner, Art Unit 2482 /CHRISTOPHER S KELLEY/Supervisory Patent Examiner, Art Unit 2482

Prosecution Timeline

Jan 06, 2025
Application Filed
Feb 21, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598287
DISPLAY DEVICE, METHOD, COMPUTER PROGRAM CODE, AND APPARATUS FOR PROVIDING A CORRECTION MAP FOR A DISPLAY DEVICE, METHOD AND COMPUTER PROGRAM CODE FOR OPERATING A DISPLAY DEVICE
2y 5m to grant Granted Apr 07, 2026
Patent 12593021
ELECTRONIC APPARATUS AND METHOD FOR CONTROLLING THEREOF
2y 5m to grant Granted Mar 31, 2026
Patent 12567163
IMAGING SYSTEM AND OBJECT DEPTH ESTIMATION METHOD
2y 5m to grant Granted Mar 03, 2026
Patent 12561788
FLUORESCENCE MICROSCOPY METROLOGY SYSTEM AND METHOD OF OPERATING FLUORESCENCE MICROSCOPY METROLOGY SYSTEM
2y 5m to grant Granted Feb 24, 2026
Patent 12554122
TECHNIQUES FOR PRODUCING IMAGERY IN A VISUAL EFFECTS SYSTEM
2y 5m to grant Granted Feb 17, 2026


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 69%
With Interview (+21.8%): 90%
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 428 resolved cases by this examiner. Grant probability derived from career allow rate.
