Prosecution Insights
Last updated: April 19, 2026
Application No. 18/254,821

VIDEO DECODING USING POST-PROCESSING CONTROL

Non-Final OA §102
Filed: May 26, 2023
Examiner: NOH, JAE NAM
Art Unit: 2481
Tech Center: 2400 (Computer Networks)
Assignee: V-NOVA INTERNATIONAL LTD
OA Round: 2 (Non-Final)
Grant Probability: 86% (Favorable)
Expected OA Rounds: 2-3
Time to Grant: 2y 2m
With Interview: 76%

Examiner Intelligence

Career Allow Rate: 86%, above average (382 granted / 445 resolved; +27.8% vs TC avg)
Interview Lift: -10.0% for resolved cases with interview (minimal)
Avg Prosecution: 2y 2m (fast prosecutor); 26 applications currently pending
Career History: 471 total applications across all art units

Statute-Specific Performance

§101: 14.2% (-25.8% vs TC avg)
§103: 37.5% (-2.5% vs TC avg)
§102: 31.5% (-8.5% vs TC avg)
§112: 7.8% (-32.2% vs TC avg)
Deltas are measured against the estimated Tech Center average. Based on career data from 445 resolved cases.
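The per-statute figures above all reduce to a single baseline. The sketch below is a hypothetical reconstruction using only the numbers shown on this page, recovering the implied Tech Center average for each statute:

```python
# Examiner allowance rate after each rejection type, paired with the
# reported delta vs. the Tech Center average (all figures in percent,
# taken from the statute table above).
statute_stats = {
    "101": (14.2, -25.8),
    "103": (37.5, -2.5),
    "102": (31.5, -8.5),
    "112": (7.8, -32.2),
}

# The implied TC average is the examiner's rate minus the reported delta.
tc_average = {
    statute: round(rate - delta, 1)
    for statute, (rate, delta) in statute_stats.items()
}
print(tc_average)
```

Every statute resolves to the same 40.0% baseline, suggesting the dashboard may apply one Tech-Center-wide estimate rather than per-statute averages.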

Office Action

§102
DETAILED ACTION

This action is in response to the amendment filed on 9/2/2025. Claims 25-44 are pending. Acknowledgment is made of a claim for foreign priority. All of the certified copies of the priority documents have been received.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The references listed on the Information Disclosure Statement submitted on 8/8/2023 have been considered by the examiner (see attached PTO-1449).

Claim Rejections - 35 USC §102 & §103

Applicant's amendment filed 9/2/2025 has been fully considered, but the arguments are not persuasive. Applicant states:

1. Page 9: “In rejecting the first element of claim 25, the Office Action primarily relies on paragraphs 1, 3, and 13 of Bordes. Applicant concedes that these paragraphs mention displaying an HDR image on an LDR screen. However, displaying an HDR image on an LDR screen does not by itself teach or suggest the required claim element of "receiving an indication of at least one desired video output property from a rendering platform." The claimed invention specifies that the desired video output property is received from a rendering platform, which requires interaction with the rendering platform. In contrast, the cited portions of Bordes do not disclose any interaction with the LDR screen. It could be that the method in Bordes always outputs an LDR image for display, and the Office Action provided no argument as to why it would be implied that the device performing the method of Bordes receives an indication from a rendering platform that the rendering platform desires an output with particular properties. Indeed, the Office Action merely states that "receiving input indicating a rendering parameter is considered to be inherent." (Office Action page 3).”, any emphasis not shown.
Examiner’s response: It is inherent that the disclosed display in the reference would have controls for “at least one desired video output property” of a video, which may include any and all menu items related to display properties such as brightness, contrast, etc.

2. Page 10: “Applicant notes that Bordes teaches a video decoder and an apparatus that may be implemented in the video decoder. For example, paragraph 90 of the reference teaches that an apparatus may be implemented in a decoder. However, Bordes does not provide any detail of the interaction between the apparatus 1100 and the video decoder. The statement of Bordes that the apparatus 1100 may be implemented in an appropriate video decoder does not necessarily mean that the described apparatus 1100 is used in particular for video decoding. For example, the apparatus 1100 may be implemented in a video decoder separately from a decoding operation. It is possible that the apparatus 1100 may be used for other operations related to video processing, but not necessarily for video decoding. Therefore, Applicant submits that the decoding step of claim 25 in the context of the claimed subject-matter is not disclosed or implied in Bordes.”, any emphasis not shown.

Examiner’s response: The claim language requires “decoding one or more received video streams into a reconstructed video output stream.” It is understood by one of ordinary skill in the art that the cited portions of the reference, such as the HEVC decoding, perform that very step.

Claim Mapping Notation

In this office action, the following notations are used to refer to paragraph numbers or to column numbers and lines of portions of the cited reference:
“[0027]…” (Paragraph number [0027])
[4:3-15] “…” (Column 4, Lines 3-15)
Furthermore, unless necessary to distinguish from other references in this action, “et al.” will be omitted when referring to the reference.
Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 25-44 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Bordes (EP2958101).

25. (New) A method for video decoding, the method comprising: receiving an indication of at least one desired video output property from a rendering platform; and decoding one or more received video streams into a reconstructed video output stream,

“[0001] The present disclosure relates to the technical field of image processing and display, and in particular to methods and apparatus for displaying a High Dynamic Range (HDR) (or Extended Dynamic Range (EDR), which is exchangeable with respect to HDR) image on a Low Dynamic Range (LDR) screen, especially a large LDR screen having a size larger than that of the HDR image.”

Receiving input indicating a rendering parameter is considered to be inherent.
“[003] The standard High Efficiency Video Coding (HEVC) has adopted a video profile supporting 10-bits as input video. This format tends to be adopted by many other standardization groups such as Digital Video Broadcasting-Ultra High Definition Television (DVB-UHDTV) and applications. However, most of the existing displays have anticipated the market trend of large images (e.g., 4K) but not the increase of video bit-depth beyond the traditional 8-bits format.”

wherein post-processing is applied prior to output of the reconstructed video stream;

“[0013]…dithering the LDR image based on the HDR image having a size of N×M pixels and a bit-depth of n bits; and displaying the dithered LDR image on the LDR screen.”

wherein the method further comprises applying a sample conversion to the reconstructed video output stream prior to the post-processing to provide the desired video output property when the desired video output property differs from the reconstructed video output stream property.

“[0013] According to a first aspect of the present invention disclosure, there is provided a method for displaying a HDR image on a LDR screen. The HDR image has a size of N×M pixels and a bit-depth of n bits, and the LDR screen has a size of P×Q pixels and a bit-depth of m bits, wherein all of N, M, P, Q, n, and m are positive integers, and P>N, Q≥M, and n>m. The method comprises: up-sampling the HDR image to obtain an up-sampled HDR image having a size of P×Q pixels and a bit-depth of n bits; performing bit conversion on the up-sampled HDR image to obtain a LDR image having a size of P×Q pixels and a bit-depth of m bits…”.

26. (New) The method of claim 25, wherein the post-processing is applied dynamically and content-adaptively.

“[0043] In this example, the original image is considered in the dithering.
For example, an additional term may be added in the above Expression (2): See Expression (5).”

“[0044] In the expression (5), ↓ is a down-sampling operator, and (α, λ) are Lagrangian multipliers.”

“[0045] In an implementation, the dithering may only consider the original image. For example, α and/or λ can be set to zero. Setting α to zero allows to speed-up calculation in the dithering while maintaining the original bit-depth accuracy distribution.”

27. (New) The method of claim 25, wherein the desired video output property is a desired video output resolution, and

“[0013] According to a first aspect of the present invention disclosure, there is provided a method for displaying a HDR image on a LDR screen. The HDR image has a size of N×M pixels and a bit-depth of n bits, and the LDR screen has a size of P×Q pixels and a bit-depth of m bits, wherein all of N, M, P, Q, n, and m are positive integers, and P>N, Q≥M, and n>m. The method comprises: up-sampling the HDR image to obtain an up-sampled HDR image having a size of P×Q pixels…”

the sample conversion comprises converting from a resolution of the reconstructed video output stream to the desired video output resolution when the desired video output resolution differs from the resolution of the reconstructed video output stream.

“[0013] According to a first aspect of the present invention disclosure, there is provided a method for displaying a HDR image on a LDR screen. The HDR image has a size of N×M pixels and a bit-depth of n bits, and the LDR screen has a size of P×Q pixels and a bit-depth of m bits, wherein all of N, M, P, Q, n, and m are positive integers, and P>N, Q≥M, and n>m. The method comprises: up-sampling the HDR image to obtain an up-sampled HDR image having a size of P×Q pixels…”

28. (New) The method of claim 27, wherein the upsampling comprises one of non-linear upsampling, neural network upsampling or fractional upsampling.
“[0033] The up-sampling in step S310 may be performed by an up-sampling filter. The up-sampling filter includes a bilateral filter and various existing filters capable of achieving up-sampling.”

29. (New) The method of claim 25, wherein the desired video output property is a desired bit-depth, and

“[0013] According to a first aspect of the present invention disclosure, there is provided a method for displaying a HDR image on a LDR screen. The HDR image has a size of N×M pixels and a bit-depth of n bits, and the LDR screen has a size of P×Q pixels and a bit-depth of m bits, wherein all of N, M, P, Q, n, and m are positive integers, and P>N, Q≥M, and n>m. The method comprises: up-sampling the HDR image to obtain an up-sampled HDR image having a size of P×Q pixels and a bit-depth of n bits; performing bit conversion on the up-sampled HDR image to obtain a LDR image having a size of P×Q pixels and a bit-depth of m bits…”.

the sample conversion comprises converting from a bit-depth of the reconstructed video output stream to the desired video output bit-depth when the desired video output bit-depth differs from the bit-depth of the reconstructed video output stream.

“[0013] According to a first aspect of the present invention disclosure, there is provided a method for displaying a HDR image on a LDR screen. The HDR image has a size of N×M pixels and a bit-depth of n bits, and the LDR screen has a size of P×Q pixels and a bit-depth of m bits, wherein all of N, M, P, Q, n, and m are positive integers, and P>N, Q≥M, and n>m. The method comprises: up-sampling the HDR image to obtain an up-sampled HDR image having a size of P×Q pixels and a bit-depth of n bits; performing bit conversion on the up-sampled HDR image to obtain a LDR image having a size of P×Q pixels and a bit-depth of m bits…”.

30. (New) The method of claim 25, wherein the sample conversion comprises upsampling the reconstructed video output stream resolution to a desired output resolution.
“[0013] According to a first aspect of the present invention disclosure, there is provided a method for displaying a HDR image on a LDR screen. The HDR image has a size of N×M pixels and a bit-depth of n bits, and the LDR screen has a size of P×Q pixels and a bit-depth of m bits, wherein all of N, M, P, Q, n, and m are positive integers, and P>N, Q≥M, and n>m. The method comprises: up-sampling the HDR image to obtain an up-sampled HDR image having a size of P×Q pixels…”

31. (New) The method of claim 25, wherein the post-processing comprises dithering.

“[0013] According to a first aspect of the present invention disclosure, there is provided a method for displaying a HDR image on a LDR screen. The HDR image has a size of N×M pixels and a bit-depth of n bits, and the LDR screen has a size of P×Q pixels and a bit-depth of m bits, wherein all of N, M, P, Q, n, and m are positive integers, and P>N, Q≥M, and n>m. The method comprises: up-sampling the HDR image to obtain an up-sampled HDR image having a size of P×Q pixels and a bit-depth of n bits; performing bit conversion on the up-sampled HDR image to obtain a LDR image having a size of P×Q pixels and a bit-depth of m bits; dithering the LDR image based on the HDR image having a size of N×M pixels and a bit-depth of n bits; and displaying the dithered LDR image on the LDR screen.”

32. (New) The method of claim 31, further comprising receiving one or more of a dithering type and a dithering strength and/or

“[0039] The dithering here considers the original image, i.e., the HDR image having a size of N×M pixels and a bit-depth of n bits. The dithering may employ various existing dithering algorithms such as Thresholding dithering, Ordered dithering, Floyd-Steinberg dithering, etc.”

One of ordinary skill in the art understands that a certain type of dithering is used and that a strength is an inherent feature of a dithering process. The limitations below are recited in the alternative.
wherein at least one of the following applies:
a) wherein the dithering strength is set based on at least one of a determination of contrast or a determination of frame content;
b) receiving a parameter that indicates a base quantisation parameter - QP - value to start applying the dither;
c) receiving a parameter that indicates a base quantisation parameter - QP - value at which to saturate the dither; and
d) receiving an input to enable or disable the dithering.

Regarding claim 33, it recites elements that are at least included in claim 25 above, but in a different form. Therefore, the rationale for the rejection of claim 25 applies equally to claim 33. Regarding the processor and the computer storage medium in the claim, see claims 11 and 12 of the reference.

34. (New) The system of claim 33, wherein the decoding is achieved using one or more decoders comprising one or more of AV1, VVC, AVC and LCEVC.

“[003] The standard High Efficiency Video Coding (HEVC) has adopted a video profile supporting 10-bits as input video. This format tends to be adopted by many other standardization groups such as Digital Video Broadcasting-Ultra High Definition Television (DVB-UHDTV) and applications. However, most of the existing displays have anticipated the market trend of large images (e.g., 4K) but not the increase of video bit-depth beyond the traditional 8-bits format.”

35. (New) The system of claim 34, wherein the one or more decoders are implemented using native or operating system functions.

“[003] The standard High Efficiency Video Coding (HEVC) has adopted a video profile supporting 10-bits as input video. This format tends to be adopted by many other standardization groups such as Digital Video Broadcasting-Ultra High Definition Television (DVB-UHDTV) and applications.
However, most of the existing displays have anticipated the market trend of large images (e.g., 4K) but not the increase of video bit-depth beyond the traditional 8-bits format.”

36. (New) The system of claim 33, comprising a decoder integration layer and one or more decoder plug-ins, wherein a control interface forms part of the decoder integration layer; and the one or more decoder plug-ins provide an interface to the one or more decoders.

“[003] The standard High Efficiency Video Coding (HEVC) has adopted a video profile supporting 10-bits as input video. This format tends to be adopted by many other standardization groups such as Digital Video Broadcasting-Ultra High Definition Television (DVB-UHDTV) and applications. However, most of the existing displays have anticipated the market trend of large images (e.g., 4K) but not the increase of video bit-depth beyond the traditional 8-bits format.”

The elements of the claim are known to be inherent items in HEVC by one of ordinary skill in the art.

37. (New) The system of claim 37, wherein the post processing is achieved using a post-processing module and the sample conversion is achieved using a sample conversion module, wherein at least one of the post-processing module or the sample conversion module form part of one or more of the decoder integration layer and the one or more decoder plug-ins.

“[003] The standard High Efficiency Video Coding (HEVC) has adopted a video profile supporting 10-bits as input video. This format tends to be adopted by many other standardization groups such as Digital Video Broadcasting-Ultra High Definition Television (DVB-UHDTV) and applications. However, most of the existing displays have anticipated the market trend of large images (e.g., 4K) but not the increase of video bit-depth beyond the traditional 8-bits format.”

The elements of the claim are known to be inherent items in HEVC by one of ordinary skill in the art.

38.
(New) The system of claim 37, wherein the one or more decoders comprise a decoder to implement a base decode layer to decode a video stream and an enhancement decoder to implement an enhancement decode layer.

“[003] The standard High Efficiency Video Coding (HEVC) has adopted a video profile supporting 10-bits as input video. This format tends to be adopted by many other standardization groups such as Digital Video Broadcasting-Ultra High Definition Television (DVB-UHDTV) and applications. However, most of the existing displays have anticipated the market trend of large images (e.g., 4K) but not the increase of video bit-depth beyond the traditional 8-bits format.”

The scalable decoding is known to be a feature of HEVC by one of ordinary skill in the art.

39. (New) The system of claim 38, wherein the enhancement decoder is configured to: receive an encoded enhancement stream, and decode the encoded enhancement stream to obtain one or more layers of residual data, the one or more layers of residual data being generated based on a comparison of data derived from a decoded video stream and data derived from an original input video stream.

“[003] The standard High Efficiency Video Coding (HEVC) has adopted a video profile supporting 10-bits as input video. This format tends to be adopted by many other standardization groups such as Digital Video Broadcasting-Ultra High Definition Television (DVB-UHDTV) and applications. However, most of the existing displays have anticipated the market trend of large images (e.g., 4K) but not the increase of video bit-depth beyond the traditional 8-bits format.”

The scalable decoding is known to be a feature of HEVC by one of ordinary skill in the art.

40.
(New) The system of claim 39, wherein the decoder integration layer controls operation of the one or more decoder plug-ins and the enhancement decoder to generate the reconstructed video output stream using a decoded video stream from the base decode layer and the one or more layers of residual data from the enhancement decode layer.

“[003] The standard High Efficiency Video Coding (HEVC) has adopted a video profile supporting 10-bits as input video. This format tends to be adopted by many other standardization groups such as Digital Video Broadcasting-Ultra High Definition Television (DVB-UHDTV) and applications. However, most of the existing displays have anticipated the market trend of large images (e.g., 4K) but not the increase of video bit-depth beyond the traditional 8-bits format.”

The scalable decoding is known to be a feature of HEVC by one of ordinary skill in the art.

41. (New) The system of claim 36, wherein the rendering platform is a client application on a client computing device and the control interface is an application programming interface - API - accessible to the client application.

“[003] The standard High Efficiency Video Coding (HEVC) has adopted a video profile supporting 10-bits as input video. This format tends to be adopted by many other standardization groups such as Digital Video Broadcasting-Ultra High Definition Television (DVB-UHDTV) and applications. However, most of the existing displays have anticipated the market trend of large images (e.g., 4K) but not the increase of video bit-depth beyond the traditional 8-bits format.”

The elements of the claim are known to be inherent items in HEVC by one of ordinary skill in the art.

42. (New) The system of claim 41, wherein the post-processing is enabled or disabled via the control interface by the rendering platform.

“[0039] The dithering here considers the original image, i.e., the HDR image having a size of N×M pixels and a bit-depth of n bits.
The dithering may employ various existing dithering algorithms such as Thresholding dithering, Ordered dithering, Floyd-Steinberg dithering, etc.”

Control of dithering is considered inherent by one of ordinary skill in the art.

43. (New) The system of claim 41, wherein the desired video output property is communicated from the rendering platform via the control interface.

“[0001] The present disclosure relates to the technical field of image processing and display, and in particular to methods and apparatus for displaying a High Dynamic Range (HDR) (or Extended Dynamic Range (EDR), which is exchangeable with respect to HDR) image on a Low Dynamic Range (LDR) screen, especially a large LDR screen having a size larger than that of the HDR image.”

Receiving input indicating a rendering parameter is considered to be inherent.

“[003] The standard High Efficiency Video Coding (HEVC) has adopted a video profile supporting 10-bits as input video. This format tends to be adopted by many other standardization groups such as Digital Video Broadcasting-Ultra High Definition Television (DVB-UHDTV) and applications. However, most of the existing displays have anticipated the market trend of large images (e.g., 4K) but not the increase of video bit-depth beyond the traditional 8-bits format.”

Regarding claim 44, it recites elements that are at least included in claim 25 above, but in a different form. Therefore, the rationale for the rejection of claim 25 applies equally to claim 44.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Tourapis et al. (US 20160269739 A1) and Chan et al. (US 20110116654 A1) disclose relevant art related to the subject matter of the present invention.

A shortened statutory period for reply to this action is set to expire THREE MONTHS from the mailing date of this action. An extension of time may be obtained under 37 CFR 1.136(a).
However, in no event will the statutory period for reply expire later than SIX MONTHS from the mailing date of this action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAE N NOH, whose telephone number is (571) 270-0686. The examiner can normally be reached Mon-Fri 8:30AM-5PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Vaughn, can be reached at (571) 272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JAE N NOH/
Primary Examiner, Art Unit 2481
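The Bordes pipeline that the examiner maps against claims 25-31 (up-sample to the panel size, convert bit depth, dither, display) is easy to picture in code. The sketch below is a generic Floyd-Steinberg error-diffusion dither standing in for the bit-conversion and dithering steps; it is illustrative only, not the algorithm of EP2958101 or of the application's claims, and the function name and parameters are hypothetical.

```python
import numpy as np

def bit_convert_with_dither(img: np.ndarray, n: int = 10, m: int = 8) -> np.ndarray:
    """Reduce an n-bit greyscale image to m bits with Floyd-Steinberg dithering.

    Illustrative sketch only: a generic error-diffusion dither, not the
    method of the cited reference.
    """
    max_in = (1 << n) - 1
    max_out = (1 << m) - 1
    work = img.astype(np.float64) / max_in   # normalise to [0, 1]
    h, w = work.shape
    out = np.zeros((h, w), dtype=np.uint16)
    for y in range(h):
        for x in range(w):
            # Quantise to the nearest m-bit level (clamp: diffusion can overshoot).
            level = int(round(work[y, x] * max_out))
            level = min(max(level, 0), max_out)
            out[y, x] = level
            err = work[y, x] - level / max_out
            # Push the quantisation error onto unvisited neighbours
            # using the classic 7/16, 3/16, 5/16, 1/16 weights.
            if x + 1 < w:
                work[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    work[y + 1, x - 1] += err * 3 / 16
                work[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    work[y + 1, x + 1] += err * 1 / 16
    return out
```

In the claim-mapped order, up-sampling the N×M image to the P×Q panel resolution would precede this bit-conversion step.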

Prosecution Timeline

May 26, 2023
Application Filed
Apr 25, 2025
Non-Final Rejection — §102
Sep 02, 2025
Response Filed
Dec 14, 2025
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604025: METHOD FOR VERIFYING IMAGE DATA ENCODED IN AN ENCODER UNIT
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12593071: ENCODER, DECODER, ENCODING METHOD, AND DECODING METHOD
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12587679: LOW-LATENCY MACHINE LEARNING-BASED STEREO STREAMING
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12574571: FRAME SELECTION FOR STREAMING APPLICATIONS
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12574529: IMAGE ENCODING AND DECODING METHOD AND APPARATUS
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 2-3
Grant Probability: 86%
With Interview: 76% (-10.0%)
Median Time to Grant: 2y 2m
PTA Risk: Moderate
Based on 445 resolved cases by this examiner. Grant probability derived from career allow rate.
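The projection figures are straightforward arithmetic on the examiner's career record. A short sketch using only the numbers reported on this page:

```python
# Figures from this page: examiner career record and reported interview lift.
granted, resolved = 382, 445
interview_lift = -0.10

allow_rate = granted / resolved                               # career allow rate
grant_probability = round(allow_rate * 100)                   # displayed percentage
with_interview = round((allow_rate + interview_lift) * 100)   # displayed percentage

print(grant_probability, with_interview)  # 86 76
```

The displayed 86% grant probability is simply the rounded career allow rate, and the 76% figure applies the reported -10.0% interview lift to it.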
