Prosecution Insights
Last updated: April 19, 2026
Application No. 18/852,371

LOW COMPLEXITY ENHANCEMENT VIDEO CODING WITH SIGNAL ELEMENT MODIFICATION

Non-Final OA under §103
Filed: Sep 27, 2024
Examiner: LEE, JIMMY S
Art Unit: 2483
Tech Center: 2400 — Computer Networks
Assignee: V-NOVA INTERNATIONAL LTD
OA Round: 1 (Non-Final)
Grant Probability: 56% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 7m
With Interview: 84%

Examiner Intelligence

Career Allow Rate: 56% (grants 56% of resolved cases; 170 granted / 302 resolved; -1.7% vs TC avg)
Interview Lift: +28.1% (strong), comparing resolved cases with vs. without an interview
Typical Timeline: 3y 7m average prosecution; 33 applications currently pending
Career History: 335 total applications across all art units

Statute-Specific Performance

§101: 3.2% (-36.8% vs TC avg)
§103: 71.5% (+31.5% vs TC avg)
§102: 8.8% (-31.2% vs TC avg)
§112: 12.8% (-27.2% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 302 resolved cases
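The deltas in the table can be checked against the displayed per-statute rates. A minimal sketch, assuming each delta is in percentage points relative to the same Tech Center average estimate (the black line):

```python
# Implied Tech Center average = examiner rate - delta vs TC avg.
# Rates and deltas (percentage points) are taken from the table above.
stats = {
    "101": (3.2, -36.8),
    "103": (71.5, +31.5),
    "102": (8.8, -31.2),
    "112": (12.8, -27.2),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta
    print(f"\u00a7{statute}: examiner {rate}% vs implied TC avg {tc_avg:.1f}%")
```

Every statute implies the same ~40.0% baseline, consistent with a single Tech Center average estimate being used across the chart.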

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 28, 46, and 47 are rejected under 35 U.S.C. 103 as being unpatentable over RACAPE; Fabien et al. (US 20240155148 A1) in view of Lee; Bae Keun et al. (US 20140307784 A1).

Regarding claim 28, Racape teaches a method (title, "motion flow coding") comprising: modifying a to-be-modified signal (¶129 and fig. 14, "original image block") using a modification signal to produce a modified signal (¶129 and fig. 14, "predicted block" based on performing "intra prediction (160)" or "motion estimation (175) and compensation (170)", indicated by an "intra/inter decision" via a "prediction mode flag"), wherein the modified signal (¶129, "predicted block") has an element (¶129, "prediction residuals") corresponding to an element of the to-be-modified signal (¶129, prediction residuals calculated using the predicted block from the original image block of partitioned "units of" CUs), and wherein the element of the modified signal (¶129, "prediction residuals" calculated based on the predicted block) has a modified value with respect to a value of the corresponding element of the to-be-modified signal (¶129, "Prediction residuals are calculated" by "subtracting (110) the predicted block from the original image block"); sending the modified signal (¶129, calculated "prediction residuals" based on the predicted block subtracted from the original image block), or a down-sampled modified signal derived based on down-sampling the modified signal, to be encoded (¶129-130 and fig. 14, transformed and quantized "prediction residuals" are "entropy encoded (145) to output a bitstream" depicted in fig. 14).

Another embodiment of Racape teaches additionally: receiving a decoded modified signal (¶129-133, "Combining (255) the decoded prediction residuals and the predicted block" based on received quantized "prediction residuals" output as a video bitstream that "is decoded by the decoder elements"), the decoded modified signal being a decoded version of the modified signal as encoded (¶129-133 and figs. 14-15, "prediction residuals" calculated using the predicted block, entropy coded and output as a "video bitstream" that is "first entropy decoded (230)") or of the down-sampled modified signal as encoded; and using the decoded modified signal to generate a processed signal (¶133, "image block is reconstructed" by "Combining (255)" the "decoded prediction residuals" and "the predicted block").

The first embodiment discloses encoding a picture by intra predicting or motion compensating image blocks and calculating prediction residuals for encoding; those residuals can be received by the second embodiment, which decodes the prediction residuals of the predicted block of an original image block for reconstruction. The decoding of the second embodiment is generally reciprocal to the encoding of the first. While not expressly taught together in one embodiment, the two embodiments plainly relate to and interact with one another in the same way as the claimed limitations: a prediction residual related to a predicted block of an original image is encoded and then decoded to reconstruct the image. The overall format optimizes video compression by minimizing the transmitted bitrate while keeping the highest quality possible.

Racape does not, however, explicitly teach generating residual data based at least on: a value of an element of a target signal; and a value of a corresponding element of the processed signal. However, Lee teaches additionally using the decoded modified signal (¶62 and fig. 1, generating a reconstructed residual block using dequantized "transform coefficients quantized by the quantization module 135" and inverse transforming the "dequantized transform coefficients") to generate a processed signal (¶62 and fig. 1, "generate a reconstructed residual block"); and generating residual data (¶57 and fig. 1, "residual block may be generated based on a differential value between prediction target block (original block) and the generated prediction block") based at least on: a value of an element of a target signal (¶57, "prediction target block"); and a value of a corresponding element of the processed signal (¶56-57, "generated prediction block" generated based on information on a pixel in the current picture by intra prediction module 125).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the coding for deep learning of Racape with the image encoding of Lee, which has a prediction target block and a generated prediction block. Using the techniques disclosed by Lee reduces the size of the bit string for symbols to be encoded.

Regarding claim 46, it is a method similar to claim 28. In particular, claim 46 appears to rearrange the limitations of claim 28 and adds combining the processed signal and residual data to generate a reconstructed signal. Racape teaches additionally combining the processed signal (¶131 and fig. 14, "predicted block") and residual data (¶131 and fig. 14, "prediction residuals") to generate a reconstructed signal (¶131 and fig. 14, "image block is reconstructed" by combining the "prediction residuals and the predicted block").

Regarding claim 47, it is the apparatus claim of method claim 28. Racape teaches additionally an apparatus (title, ¶157 and ¶137-139, "motion flow coding for deep learning" implemented in "an apparatus") comprising: a processor (¶157, ¶137-139, and fig. 16, apparatus implemented in "hardware" such as "a processor", e.g. processor 1010 in system 1000 depicted in fig. 16); a non-transitory computer readable storage device (¶137-139 and fig. 16, "memory inside of processor 1010") having stored thereon computer executable instructions (¶137-139 and fig. 16, memory "used to store instructions and to provide working memory for processing that is needed during encoding or decoding") that, when executed by the processor (¶137-139 and fig. 16, program code loaded onto processor 1010 or encoder/decoder 1030, loaded onto memory "for execution by processor 1010"), cause the apparatus to perform the following (¶157, "methods can be implemented in" a processor). Refer to the rejection of claim 28 for the remaining limitations of claim 47.

Claims 29-32, 34-36, 39, and 45 are rejected under 35 U.S.C. 103 as being unpatentable over RACAPE; Fabien et al. (US 20240155148 A1) in view of Lee; Bae Keun et al. (US 20140307784 A1) and in view of HANDFORD; David (US 20190313109 A1).

Regarding claim 29, Racape with Lee teaches the limitations of claim 28 but does not explicitly teach the additional limitations of claim 29. However, Handford teaches additionally: down-sampling the modified signal (¶37-38, "derives data 212 based on the input data 206" by performing a downsampling operation on the input data 206) to produce the down-sampled modified signal (¶37-38, data 212 is derived by performing a downsampling operation on the input data 206 and is referred to as "downsampled data"); and sending the down-sampled modified signal to be encoded (¶38-39, "downsampled data 212 is processed to generate processed data 213", which involves "encoding the downsampled data 212"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the coding for deep learning of Racape with the image encoding of Lee and the downsampling of Handford, which can perform downsampling of input data. This reduces the amount of data transmitted via data communication networks.
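The residual loop the rejection maps onto claim 28 — residuals computed by subtracting the predicted block from the original (Racape ¶129), then recombined at the decoder (¶129-133) — reduces to simple per-element arithmetic. A minimal sketch, treating the codec as lossless and ignoring the transform/quantization/entropy stages of fig. 14 (function names are illustrative, not from the references):

```python
def encode_residual(original, predicted):
    # "Prediction residuals are calculated" by "subtracting (110) the
    # predicted block from the original image block" (Racape ¶129).
    return [o - p for o, p in zip(original, predicted)]

def reconstruct(residual, predicted):
    # "Combining (255) the decoded prediction residuals and the
    # predicted block" rebuilds the image block (Racape ¶129-133).
    return [r + p for r, p in zip(residual, predicted)]

original = [52, 55, 61, 59]   # one row of an original image block
predicted = [50, 50, 60, 60]  # intra/inter predicted block

residual = encode_residual(original, predicted)   # [2, 5, 1, -1]
assert reconstruct(residual, predicted) == original
```

With the transform and quantization steps included, reconstruction would be approximate rather than exact; the round trip above shows only the residual arithmetic.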
Regarding claim 30, Racape with Lee and Handford teaches the limitations of claim 29. Lee teaches additionally that the target signal comprises the to-be-modified signal (¶57, "prediction target block (original block)"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the coding for deep learning of Racape with the image encoding of Lee and the downsampling of Handford, where Lee has a prediction target block and a generated prediction block. Using the techniques disclosed by Lee reduces the size of the bit string for symbols to be encoded.

Regarding claim 31, Racape with Lee and Handford teaches the limitations of claim 30. Lee teaches additionally sending the residual data (¶57, "a residual block may be generated based on a differential value between prediction target block (original block) and the generated prediction block"), or data derived based on the residual data, to be encoded (¶57-61 and fig. 1, "entropy encoding module 165 may entropy-encode the values obtained", such as the quantized transform coefficients of the "residual block" received by rearrangement module 160 depicted in fig. 1).

Regarding claim 32, Racape with Lee and Handford teaches the limitations of claim 29. Handford teaches additionally generating correction data (¶41 and fig. 2A, "generating the processed data 213" and obtaining "correction data") based on: a value of an element of the down-sampled modified signal (¶41, obtaining correction data based on a comparison including "downsampled data 212"); a value of a corresponding element of the decoded modified signal (¶40-41, obtaining correction data based on a comparison including the "decoded signal obtained by the first apparatus 202" by decoding the "encoded signal"); and a value of a corresponding element of the modification signal (¶41, correction data based on the "difference between the downsampled data 212 and the decoded signal") or a value of a corresponding element of a signal based on the modification signal. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the coding for deep learning of Racape with the image encoding of Lee and the downsampling of Handford. This reduces the amount of data transmitted via data communication networks.

Regarding claim 34, Racape with Lee and Handford teaches the limitations of claim 32. Handford teaches additionally sending the correction data (¶41, first apparatus 202 outputs the "correction data" as well as the encoded signal), or data derived based on the correction data, to be encoded (¶41, first apparatus 202 outputs "the correction data" as well as "the encoded signal", allowing correction of errors introduced in encoding and decoding). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the coding for deep learning of Racape with the image encoding of Lee and the correction data of Handford, which is output with the encoded signal. This reduces the amount of data transmitted via data communication networks.
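The correction-data limitation of claim 32, as mapped to Handford ¶40-41 (the "difference between the downsampled data 212 and the decoded signal"), amounts to comparing the pre-encode signal with its decoded counterpart. A hedged sketch, with coarse quantization standing in for a real lossy encode/decode pair (names are illustrative):

```python
def lossy_roundtrip(samples, step=8):
    # Stand-in for the encode -> decode pair: quantization loses detail.
    return [(s // step) * step for s in samples]

downsampled = [17, 42, 99, 130]
decoded = lossy_roundtrip(downsampled)            # [16, 40, 96, 128]

# Correction data: difference between the downsampled data and the
# decoded signal (Handford ¶41), letting a receiver undo coding error.
correction = [d - r for d, r in zip(downsampled, decoded)]
assert [r + c for r, c in zip(decoded, correction)] == downsampled
```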
Regarding claim 35, Racape with Lee and Handford teaches the limitations of claim 29. Handford teaches additionally that the target signal (¶46, residual data obtained using "upsampled data 214") comprises the modified signal (¶44-46, "upsampled data 214" used to obtain residual data based on the "data at the relatively low level of quality", processed data 213). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the coding for deep learning of Racape with the image encoding of Lee and the processed data of Handford, which is sampled and used in obtaining residual data. This reduces the amount of data transmitted via data communication networks.

Regarding claim 36, Racape with Lee and Handford teaches the limitations of claim 35. Handford teaches additionally generating further residual data (¶46, ¶89-92, figs. 2A-2B and 5A-5B, "residual data 216 may be in the form of a set of residual elements" depicted in figs. 2A-2B, and obtaining "a set of Δt residual elements 516" depicted in figs. 5A-5B) based on: a value of an element of the residual data (¶46 and figs. 2A-2B, residual data arranged as "an array of residual elements"); and a value of a corresponding element of the modification signal or a value of a corresponding element of a signal based on the modification signal (¶89-92 and figs. 5A-5B, a set of Δt residual elements 516 associated with the differential rendition "Δt input data 510" and its samples). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the coding for deep learning of Racape with the image encoding of Lee and the various residual data of Handford, associated with an array of residual elements and differential renditions. This reduces the amount of data transmitted via data communication networks.

Regarding claim 39, Racape with Lee teaches the limitations of claim 28 but does not explicitly teach the additional limitations of claim 39. However, Handford teaches additionally that the target signal (¶36-38, "input data 206" relating to part of an image) comprises a to-be-down-sampled signal (¶36-38, input data 206 "arranged as an array comprising first and second rows of signal elements"), wherein the to-be-down-sampled signal is down-sampled to derive the to-be-modified signal (¶36-38, "data 212 is derived" by performing a downsampling operation on the input data 206, referred to as "downsampled data"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the coding for deep learning of Racape with the image encoding of Lee and the downsampling of Handford. This reduces the amount of data transmitted via data communication networks.

Regarding claim 45, Racape with Lee teaches the limitations of claim 28 but does not explicitly teach the additional limitations of claim 45. However, Handford teaches additionally using the decoded modified signal (¶73 and fig. 3B, generating processed data 322 comprises "decoding an encoded signal to produce a decoded signal") to generate the processed signal by performing an up-sampling operation (¶73 and fig. 3B, "upsampled data 314 may be derived by performing an upsampling operation" on the "processed data 322" at the relatively low level of quality). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the coding for deep learning of Racape with the image encoding of Lee and the upsampling of Handford. This reduces the amount of data transmitted via data communication networks.
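Claims 39 and 45, as mapped above, together describe a familiar two-layer shape: a full-resolution target is down-sampled, coded, decoded, up-sampled back, and the residual is taken at full resolution against the up-sampled result. A minimal sketch of that flow, using nearest-neighbour sampling and treating the base layer as lossless (illustrative only, not the references' exact filters):

```python
def downsample(img):
    # Keep every other sample in each dimension (claim 39 mapping).
    return [row[::2] for row in img[::2]]

def upsample(img):
    # Nearest-neighbour up-sampling back to full size (claim 45 mapping).
    out = []
    for row in img:
        wide = [v for v in row for _ in (0, 1)]
        out.extend([wide, list(wide)])
    return out

target = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
base = downsample(target)      # low-resolution signal sent to be encoded
processed = upsample(base)     # decoded-and-up-sampled processed signal

# Full-resolution residual data against the up-sampled processed signal.
residual = [[t - p for t, p in zip(tr, pr)] for tr, pr in zip(target, processed)]
recon = [[p + r for p, r in zip(pr, rr)] for pr, rr in zip(processed, residual)]
assert recon == target
```

With a real codec, `processed` would also carry quantization error, which is exactly what the full-resolution residual layer corrects.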
Claims 33 and 37 are rejected under 35 U.S.C. 103 as being unpatentable over RACAPE; Fabien et al. (US 20240155148 A1) in view of Lee; Bae Keun et al. (US 20140307784 A1), in view of HANDFORD; David (US 20190313109 A1), and in view of METOEVI; Isabelle et al. (US 20120300834 A1).

Regarding claim 33, Racape with Lee and Handford teaches the limitations of claim 32 but does not explicitly teach the additional limitations of claim 33. However, Metoevi teaches additionally deriving said signal (¶250, "residual information") based on the modification signal by processing the modification signal (¶250, "residual information has been determined" based on calculations "adjusted or modified as required"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the coding for deep learning of Racape with the image encoding of Lee, the downsampling of Handford, and the method of Metoevi, in which the residual information determination is adjusted or modified. This allows for improvements in quality.

Regarding claim 37, Racape with Lee and Handford teaches the limitations of claim 36 but does not explicitly teach the additional limitations of claim 37. However, Metoevi teaches additionally deriving said signal (¶250, "residual information") based on the modification signal by processing the modification signal (¶250, "residual information has been determined" based on calculations "adjusted or modified as required"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the coding for deep learning of Racape with the image encoding of Lee, the downsampling of Handford, and the method of Metoevi, in which the residual information determination is adjusted or modified. This allows for improvements in quality.

Claim 38 is rejected under 35 U.S.C. 103 as being unpatentable over RACAPE; Fabien et al. (US 20240155148 A1) in view of Lee; Bae Keun et al. (US 20140307784 A1), in view of HANDFORD; David (US 20190313109 A1), and in view of Nilsson; Mattias (US 20150043655 A1).

Regarding claim 38, Racape with Lee and Handford teaches the limitations of claim 36 but does not explicitly teach the additional limitations of claim 38. However, Nilsson teaches additionally sending the further residual data (¶35, "residual samples are output from the prediction coder 18"), or data derived based on the further residual data, to be encoded (¶35, residual samples are output from the prediction coder 18 "to the input of the entropy encoder 20" to encode frequently occurring sample values). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the coding for deep learning of Racape with the image encoding of Lee, the downsampling of Handford, and the encoding of Nilsson, which encodes a set of residual samples. Adding this teaching helps reduce computational complexity.

Claims 40-42 are rejected under 35 U.S.C. 103 as being unpatentable over RACAPE; Fabien et al. (US 20240155148 A1) in view of Lee; Bae Keun et al. (US 20140307784 A1) and in view of Good; Charles F. et al. (US 20140281014 A1).

Regarding claim 40, Racape with Lee teaches the limitations of claim 28 but does not explicitly teach the additional limitations of claim 40. However, Good teaches additionally that the modification signal comprises one of an overlay to be applied (¶30-31, "first overlay module 222 may dynamically generate overlay images" by inserting the retrieved image into one or more frames) to the to-be-modified signal or a watermark (¶31, dynamic overlays may be used to "insert hidden watermarks"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the coding for deep learning of Racape with the image encoding of Lee and the graphic overlay of Good, which overlays a watermark into frames of a stream. This allows for tracking the origin or use of a stream.

Regarding claim 41, Racape with Lee teaches the limitations of claim 28 but does not explicitly teach the additional limitations of claim 41. However, Good teaches additionally that the modified signal has another element (¶30-31, "image 241 from a storage device 240" inserted into frames of the "decoded stream 231", including "a watermark, a logo, and a static advertisement") corresponding to another element of the to-be-modified signal (¶30-31, "image 241 corresponds to a static overlay" inserted into frames of the "decoded stream 231"), and wherein the other element of the modified signal has the same value as the value of the corresponding other element of the to-be-modified signal (¶30-31, dynamic overlays used to insert "hidden watermarks" and to "add locale or usage specific overlays such as news tickers with local news based on a viewer's location, local weather alerts, emergency broadcast information, etc." into frames of the "decoded stream 231"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the coding for deep learning of Racape with the image encoding of Lee and the graphic overlay of Good, which dynamically overlays images into frames of a stream. This allows for tracking the origin or use of a stream as well as providing better performance.
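Good's overlay mechanism, as applied to claims 40-41, changes only the frame elements under the overlay and leaves every other element with its original value — which is the point of claim 41's "same value" limitation. A toy region-replacement sketch (names and values are illustrative, not from Good):

```python
def apply_overlay(frame, overlay, top, left):
    # Insert the overlay image (e.g. a watermark or logo) into the frame;
    # elements outside the overlay region keep their original values.
    out = [row[:] for row in frame]
    for dy, orow in enumerate(overlay):
        for dx, value in enumerate(orow):
            out[top + dy][left + dx] = value
    return out

frame = [[0] * 4 for _ in range(4)]
mark = [[255, 255], [255, 255]]
modified = apply_overlay(frame, mark, 0, 0)

assert modified[0][0] == 255           # element changed by the overlay
assert modified[3][3] == frame[3][3]   # element outside it: same value
```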
Regarding claim 42, Racape with Lee teaches the limitations of claim 28 but does not explicitly teach the additional limitations of claim 42. However, Good teaches additionally that modifying the to-be-modified signal (¶30-31, "first overlay module 222 may dynamically generate overlay images" by inserting the retrieved image into frames of the decoded stream 231) comprises combining the to-be-modified signal with the modification signal (¶30-31, the "first overlay module 222 may dynamically generate overlay images" by inserting the retrieved image into frames of the decoded stream 231). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the coding for deep learning of Racape with the image encoding of Lee and the graphic overlay of Good, which overlays a watermark into frames of a stream. This allows for tracking the origin or use of a stream.

Claim 43 is rejected under 35 U.S.C. 103 as being unpatentable over RACAPE; Fabien et al. (US 20240155148 A1) in view of Lee; Bae Keun et al. (US 20140307784 A1) and in view of Alshina; Elena et al. (US 20150237376 A1).

Regarding claim 43, Racape with Lee teaches the limitations of claim 28 but does not explicitly teach the additional limitations of claim 43. However, Alshina teaches additionally that modifying the to-be-modified signal (¶128, "inter-layer offset determiner 16" corrects the sample value) comprises shifting a position of one or more values of the to-be-modified signal (¶128, "correct the sample value") within the to-be-modified signal (¶128, "sample value of the current pixel") based on the modification signal (¶128, "correct the sample value of the current pixel by a first offset according to the first category"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the coding for deep learning of Racape with the image encoding of Lee and the offset determiner of Alshina, which corrects sample values. This allows for reconstruction that can increase resolution.

Claim 44 is rejected under 35 U.S.C. 103 as being unpatentable over RACAPE; Fabien et al. (US 20240155148 A1) in view of Lee; Bae Keun et al. (US 20140307784 A1) and in view of YAMORI; Akihiro et al. (US 20090296809 A1).

Regarding claim 44, Racape with Lee teaches the limitations of claim 28 but does not explicitly teach the additional limitations of claim 44. However, Yamori teaches additionally that the to-be-modified signal (¶66, "input signal") comprises a luminance component and chrominance components (¶66, "luminance/chrominance components of an input signal"), wherein the luminance component of the to-be-modified signal is modified based on the modification signal (¶66, "a function to reduce/convert a luminance" at the pre-stage of the decoding), and wherein the chrominance components of the to-be-modified signal are not modified based on the modification signal (¶66, a function to reduce/convert a luminance "instead of chrominance" at the pre-stage of the decoding that maintains "the resolution of a chrominance"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the coding for deep learning of Racape with the image encoding of Lee and the coding of Yamori, which applies a function to luminance instead of chrominance. This allows for improvements to coding process performance based on complexity.
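Yamori's luma-only processing, as mapped to claim 44, modifies the luminance plane while passing the chrominance planes through untouched. A minimal sketch over a planar YCbCr representation, with halving standing in for the "reduce/convert a luminance" function (illustrative only):

```python
def modify_luma_only(y, cb, cr):
    # Apply the modification to the luminance component only (Yamori ¶66);
    # the chrominance components pass through unmodified.
    y_mod = [[v // 2 for v in row] for row in y]  # stand-in "reduce" function
    return y_mod, cb, cr

y = [[200, 200], [200, 200]]
cb = [[128, 128], [128, 128]]
cr = [[128, 128], [128, 128]]

y2, cb2, cr2 = modify_luma_only(y, cb, cr)
assert y2[0][0] == 100
assert cb2 == cb and cr2 == cr
```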
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JIMMY S LEE, whose telephone number is (571) 270-7322. The examiner can normally be reached Monday through Friday, 10AM-8PM EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Joseph G. Ustaris, can be reached at (571) 272-7383. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/JIMMY S LEE/
Examiner, Art Unit 2483

/REBECCA A VOLENTINE/
Primary Examiner, Art Unit 2483

March 2, 2026

Prosecution Timeline

Sep 27, 2024: Application Filed
Feb 28, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604034: METHOD FOR PARTITIONING BLOCK AND DECODING DEVICE (granted Apr 14, 2026; 2y 5m to grant)
Patent 12596190: MILLIMETER WAVE DISPLAY ARRANGEMENT (granted Apr 07, 2026; 2y 5m to grant)
Patent 12581086: MERGE WITH MVD BASED ON GEOMETRY PARTITION (granted Mar 17, 2026; 2y 5m to grant)
Patent 12563112: SPATIALLY UNEQUAL STREAMING (granted Feb 24, 2026; 2y 5m to grant)
Patent 12554017: EBS/TOF/RGB CAMERA FOR SMART SURVEILLANCE AND INTRUDER DETECTION (granted Feb 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 56%
With Interview: 84% (+28.1%)
Median Time to Grant: 3y 7m
PTA Risk: Low
Based on 302 resolved cases by this examiner. Grant probability derived from career allow rate.
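The projection numbers follow directly from the examiner's career data above. A sketch, assuming the grant probability is the career allow rate (170/302) and the interview lift is additive in percentage points:

```python
granted, resolved = 170, 302
base = 100 * granted / resolved        # career allow rate, ~56.3%
with_interview = base + 28.1           # additive interview lift (assumption)

print(f"Grant probability: {base:.0f}%")           # 56%
print(f"With interview: {with_interview:.0f}%")    # 84%
```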
