Prosecution Insights
Last updated: April 19, 2026
Application No. 19/027,659

FLEXIBLE REFERENCE PICTURE MANAGEMENT FOR VIDEO ENCODING AND DECODING

Status: Non-Final Office Action, nonstatutory double patenting (§DP)
Filed: Jan 17, 2025
Examiner: WONG, ALLEN C
Art Unit: 2488
Tech Center: 2400 (Computer Networks)
Assignee: Microsoft Technology Licensing, LLC
OA Round: 1 (Non-Final)

Outlook: Favorable
Grant probability: 83% (95% with an examiner interview)
Expected OA rounds: 1-2
Expected time to grant: 2y 11m
Examiner Intelligence

Career allow rate: 83% (669 granted / 805 resolved), +25.1% vs. Tech Center average (above average)
Interview lift: +11.8% (moderate, roughly +12%), comparing resolved cases with vs. without an interview
Typical timeline: 2y 11m average prosecution; 27 applications currently pending
Career history: 832 total applications across all art units
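The headline figures above are simple ratios over the examiner's resolved cases. A quick sanity check, using the counts shown on this page (the rounding convention and the reading of "+25.1% vs TC avg" as a percentage-point delta are assumptions):

```python
# Sanity-check the examiner statistics shown above.
granted = 669
resolved = 805

allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")   # 83.1%, displayed as 83%

# "+25.1% vs TC avg" read as a percentage-point delta (assumption).
tc_average = allow_rate - 25.1
print(f"Implied TC average allow rate: {tc_average:.1f}%")
```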

Statute-Specific Performance

§101: 12.4% (-27.6% vs. TC avg)
§103: 41.6% (+1.6% vs. TC avg)
§102: 16.5% (-23.5% vs. TC avg)
§112: 9.8% (-30.2% vs. TC avg)
Tech Center averages are estimates. Based on career data from 805 resolved cases.
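The per-statute rates above are reported as deltas against a Tech Center average estimate. Deriving the implied baselines (again assuming the deltas are percentage points) shows they are all consistent with a single TC-wide estimate:

```python
# Derive the implied Tech Center averages from the per-statute rates
# and the "vs TC avg" deltas shown above (percentage-point deltas assumed).
stats = {               # statute: (examiner rate %, delta vs TC avg %)
    "101": (12.4, -27.6),
    "103": (41.6, +1.6),
    "102": (16.5, -23.5),
    "112": (9.8, -30.2),
}
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta
    print(f"§{statute}: examiner {rate:.1f}% vs implied TC avg {tc_avg:.1f}%")
# Every implied average works out to 40.0%.
```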

Office Action

Rejection ground: Nonstatutory double patenting (§DP)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 2-21 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 11,277,631. Although the claims at issue are not identical, they are not patentably distinct from each other: each claim of present Application ’659 listed below is similar to, and is thus anticipated by, the corresponding claim(s) of Patent ’631. See the claim chart below.

Application ’659 claim → Patent ’631 claim(s): 2 → 1; 3 → 2; 4 → 3; 5 → 4; 6 → 5; 7 → 6; 8 → 7; 9 → 8; 10 → 9; 11 → 18; 12 → 19; 13 → 20; 14 → 10; 15 → 11; 16 → 12; 17 → 13; 18 → 14; 19 → 15; 20 → combination of 10 and 16; 21 → 17.

Claims 2-10 and 14-21 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-9 and 13-20 of U.S. Patent No. 11,831,899. Although the claims at issue are not identical, they are not patentably distinct from each other: each claim of present Application ’659 listed below is similar to, and is thus anticipated by, the corresponding claim(s) of Patent ’899. See the claim chart below.

Application ’659 claim → Patent ’899 claim(s): 2 → 1; 3 → 2; 4 → 3; 5 → 4; 6 → 5; 7 → 6; 8 → 7; 9 → 8; 10 → 9; 14 → 13; 15 → 14; 16 → 15; 17 → 16; 18 → 17; 19 → 18; 20 → combination of 13 and 19; 21 → 20.
Claims 11-13 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 13, 15, and 16 of U.S. Patent No. 11,831,899 in view of Puri (US 2017/0013279).

Regarding claim 11, claim 13 of Patent ’899 discloses most of the limitations of claim 11 of present Application ’659; see the claim chart below. Claim 13 of Patent ’899 does not disclose transmitting the encoded data to a client for playback, wherein the encoded data is organized to facilitate decoding to reconstruct the multiple pictures, with a computer-implemented video decoder at the client. However, Puri teaches transmitting the encoded data to a client for playback (paragraph [134]: Puri discloses transmission of data to a decoder associated with a client system that receives video data to be played back on a desktop computer, laptop computer, tablet, mobile phone, or the like), wherein the encoded data is organized to facilitate decoding to reconstruct the multiple pictures (paragraph [114]: Puri discloses reconstruction of pictures or frames at a decoder, wherein the received encoded video data is organized into picture groups, tiles, pixels, and partitions for reconstructing the video frames), with a computer-implemented video decoder at the client (paragraph [134]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of claim 13 of Patent ’899 and Puri as a whole, to effectively decode video data over a network so as to view high-quality images at the client terminal.

Claim 12 of present Application ’659 is similar to, and is thus anticipated by, claim 15 of Patent ’899.
Claim 13 of present Application ’659 is similar to, and is thus anticipated by, claim 16 of Patent ’899. See the claim chart below.

Claim Chart: Present Application 19/027,659 vs. US Patent No. 11,277,631 and US Patent No. 11,831,899

Present Application 19/027,659, Claim 2: A computer system comprising one or more processing units and memory, wherein the computer system implements a video encoder configured to perform operations comprising: encoding multiple pictures of a video sequence, thereby producing encoded data, wherein the encoding includes: reconstructing a given picture of the multiple pictures, thereby producing a reconstructed version of the given picture; scaling the reconstructed version of the given picture; determining, based at least in part on an input version of the given picture and the scaled reconstructed version of the given picture, filter parameters that specify a filter adapted to remove noise for the given picture; entropy coding the filter parameters; filtering the scaled reconstructed version of the given picture, using the filter specified by the filter parameters, thereby producing a denoised reconstructed version of the given picture; storing the denoised reconstructed version of the given picture in a buffer for use as a reference picture; and using the reference picture in motion compensation operations for another picture of the multiple pictures; and outputting, as part of a bitstream, the encoded data, wherein the encoded data includes the entropy-coded filter parameters.

US Patent No. 11,277,631, Claim 1:
One or more computer-readable media having stored thereon computer-executable instructions for causing a computer system, when programmed thereby, to perform operations comprising: encoding multiple pictures of a video sequence, thereby producing encoded data, wherein the encoding includes: reconstructing a given picture of the multiple pictures, thereby producing a reconstructed version of the given picture; scaling the reconstructed version of the given picture; determining, based at least in part on an input version of the given picture and the scaled reconstructed version of the given picture, filter parameters that specify a filter adapted to remove noise for the given picture; entropy coding the filter parameters; filtering the scaled reconstructed version of the given picture, using the filter specified by the filter parameters, thereby producing a denoised reconstructed version of the given picture; storing the denoised reconstructed version of the given picture in a buffer for use as a reference picture; and using the reference picture in motion compensation operations for another picture of the multiple pictures; and outputting, as part of a bitstream, the encoded data, wherein the encoded data includes the entropy-coded filter parameters.

US Patent No. 11,831,899, Claim 1:
In a computer system that implements a video encoder, a method comprising: encoding multiple pictures of a video sequence, thereby producing encoded data, wherein the encoding includes: reconstructing a given picture of the multiple pictures, thereby producing a reconstructed version of the given picture; scaling the reconstructed version of the given picture; determining, based at least in part on an input version of the given picture and the scaled reconstructed version of the given picture, filter parameters that specify a filter adapted to remove noise for the given picture; entropy coding the filter parameters; filtering the scaled reconstructed version of the given picture, using the filter specified by the filter parameters, thereby producing a denoised reconstructed version of the given picture; storing the denoised reconstructed version of the given picture in a buffer for use as a reference picture; and using the reference picture in motion compensation operations for another picture of the multiple pictures; and outputting, as part of a bitstream, the encoded data, wherein the encoded data includes the entropy-coded filter parameters.

Present Application 19/027,659, Claim 3: The computer system of claim 2, wherein the encoding further includes: filtering the input version of the given picture, thereby producing a denoised input version of the given picture, wherein the filter parameters are determined using the denoised input version of the given picture as an optimization target.

US Patent No. 11,277,631, Claim 2: The one or more computer-readable media of claim 1, wherein the encoding further includes: filtering the input version of the given picture, thereby producing a denoised input version of the given picture, wherein the filter parameters are determined using the denoised input version of the given picture as an optimization target.

US Patent No. 11,831,899, Claim 2:
The method of claim 1, wherein the encoding further includes: filtering the input version of the given picture, thereby producing a denoised input version of the given picture, wherein the filter parameters are determined using the denoised input version of the given picture as an optimization target.

Present Application 19/027,659, Claim 4: The computer system of claim 2, wherein the scaling increases spatial resolution of the reconstructed version of the given picture from a first spatial resolution to a second spatial resolution larger than the first spatial resolution.

US Patent No. 11,277,631, Claim 3: The one or more computer-readable media of claim 1, wherein the scaling increases spatial resolution of the reconstructed version of the given picture from a first spatial resolution to a second spatial resolution larger than the first spatial resolution.

US Patent No. 11,831,899, Claim 3: The method of claim 1, wherein the scaling increases spatial resolution of the reconstructed version of the given picture from a first spatial resolution to a second spatial resolution larger than the first spatial resolution.

Present Application 19/027,659, Claim 5: The computer system of claim 2, wherein the scaling changes spatial resolution of the reconstructed version of the given picture according to a scaling factor, and wherein the encoded data further includes the scaling factor.

US Patent No. 11,277,631, Claim 4: The one or more computer-readable media of claim 1, wherein the scaling changes spatial resolution of the reconstructed version of the given picture according to a scaling factor, and wherein the encoded data further includes the scaling factor.

US Patent No. 11,831,899, Claim 4: The method of claim 1, wherein the scaling changes spatial resolution of the reconstructed version of the given picture according to a scaling factor, and wherein the encoded data further includes the scaling factor.

Present Application 19/027,659, Claim 6: The computer system of claim 2, wherein the encoding further includes, before the scaling, filtering the reconstructed version of the given picture using a de-ringing filter.

US Patent No. 11,277,631, Claim 5:
The one or more computer-readable media of claim 1, wherein the encoding further includes, before the scaling, filtering the reconstructed version of the given picture using a de-ringing filter.

US Patent No. 11,831,899, Claim 5: The method of claim 1, wherein the encoding further includes, before the scaling, filtering the reconstructed version of the given picture using a de-ringing filter.

Present Application 19/027,659, Claim 7: The computer system of claim 2, wherein the filter is selected from the group consisting of: a variation of frequency-domain lowpass filter; a variation of spatial/temporal-domain lowpass filter; a variation of spatial/temporal-domain median filter; and a filter that uses block-matching and three-dimensional filtering.

US Patent No. 11,277,631, Claim 6: The one or more computer-readable media of claim 1, wherein the filter is selected from the group consisting of: a variation of frequency-domain lowpass filter; a variation of spatial/temporal-domain lowpass filter; a variation of spatial/temporal-domain median filter; and a filter that uses block-matching and three-dimensional filtering.

US Patent No. 11,831,899, Claim 6: The method of claim 1, wherein the filter is selected from the group consisting of: a variation of frequency-domain lowpass filter; a variation of spatial/temporal-domain lowpass filter; a variation of spatial/temporal-domain median filter; and a filter that uses block-matching and three-dimensional filtering.

Present Application 19/027,659, Claim 8: The computer system of claim 2, wherein the filter is a Wiener filter.

US Patent No. 11,277,631, Claim 7: The one or more computer-readable media of claim 1, wherein the filter is a Wiener filter.

US Patent No. 11,831,899, Claim 7: The method of claim 1, wherein the filter is a Wiener filter.

Present Application 19/027,659, Claim 9: The computer system of claim 2, wherein the buffer is a decoded picture buffer.

US Patent No. 11,277,631, Claim 8: The one or more computer-readable media of claim 1, wherein the buffer is a decoded picture buffer.

US Patent No. 11,831,899, Claim 8: The method of claim 1, wherein the buffer is a decoded picture buffer.

Present Application 19/027,659, Claim 10:
The computer system of claim 2, wherein the encoding further includes assigning a reference picture index to the denoised reconstructed version of the given picture according to one or more rules.

US Patent No. 11,277,631, Claim 9: The one or more computer-readable media of claim 1, wherein the encoding further comprises assigning a reference picture index to the denoised reconstructed version of the given picture according to one or more rules.

US Patent No. 11,831,899, Claim 9: The method of claim 1, wherein the encoding further comprises assigning a reference picture index to the denoised reconstructed version of the given picture according to one or more rules.

Present Application 19/027,659, Claim 11: In a computer system, a method comprising: receiving, as part of a bitstream, encoded data for multiple pictures of a video sequence, wherein the encoded data includes entropy-coded filter parameters that specify a filter adapted to remove noise for a given picture of the multiple pictures; and transmitting the encoded data to a client for playback, wherein the encoded data is organized to facilitate decoding to reconstruct the multiple pictures, with a computer-implemented video decoder at the client, by operations comprising: entropy decoding the filter parameters for the filter; reconstructing the given picture, thereby producing a reconstructed version of the given picture; scaling the reconstructed version of the given picture; filtering the scaled reconstructed version of the given picture, using the filter specified by the filter parameters, thereby producing a denoised reconstructed version of the given picture; storing the denoised reconstructed version of the given picture for use as a reference picture; and using the reference picture in motion compensation operations for another picture of the multiple pictures.

US Patent No. 11,277,631, Claim 18:
One or more computer-readable media having stored thereon encoded data, as part of a bitstream, for multiple pictures of a video sequence, wherein the encoded data includes entropy-coded filter parameters that specify a filter adapted to remove noise for a given picture of the multiple pictures, the encoded data being organized to facilitate decoding to reconstruct the multiple pictures, with a computer system that implements a video decoder, by operations comprising: entropy decoding the filter parameters for the filter; reconstructing the given picture, thereby producing a reconstructed version of the given picture; scaling the reconstructed version of the given picture; filtering the scaled reconstructed version of the given picture, using the filter specified by the filter parameters, thereby producing a denoised reconstructed version of the given picture; storing the denoised reconstructed version of the given picture in a buffer for use as a reference picture; and using the reference picture in motion compensation operations for another picture of the multiple pictures.

US Patent No. 11,831,899, Claim 13:
A computer system comprising: a first buffer, implemented using memory, configured to receive, as part of a bitstream, encoded data for multiple pictures of a video sequence, wherein the encoded data includes entropy-coded filter parameters that specify a filter adapted to remove noise for a given picture of the multiple pictures; and a video decoder configured to perform operations to decode the encoded data to reconstruct the multiple pictures, the operations including: entropy decoding the filter parameters for the filter; reconstructing the given picture, thereby producing a reconstructed version of the given picture; scaling the reconstructed version of the given picture; filtering the scaled reconstructed version of the given picture, using the filter specified by the filter parameters, thereby producing a denoised reconstructed version of the given picture; storing the denoised reconstructed version of the given picture in a second buffer for use as a reference picture; and using the reference picture in motion compensation operations for another picture of the multiple pictures.

Present Application 19/027,659, Claim 12: The method of claim 11, wherein the encoded data further includes a scaling factor, and wherein the scaling changes spatial resolution of the reconstructed version of the given picture according to the scaling factor.

US Patent No. 11,277,631, Claim 19: The one or more computer-readable media of claim 18, wherein the encoded data further includes a scaling factor, and wherein the scaling changes spatial resolution of the reconstructed version of the given picture according to the scaling factor.

US Patent No. 11,831,899, Claim 15: The computer system of claim 13, wherein the encoded data further includes a scaling factor, and wherein the scaling changes spatial resolution of the reconstructed version of the given picture according to the scaling factor.

Present Application 19/027,659, Claim 13:
The method of claim 11, wherein the operations further include, before the scaling, filtering the reconstructed version of the given picture using a de-ringing filter.

US Patent No. 11,277,631, Claim 20: The one or more computer-readable media of claim 18, wherein the operations further include, before the scaling, filtering the reconstructed version of the given picture using a de-ringing filter.

US Patent No. 11,831,899, Claim 16: The computer system of claim 13, wherein the operations further include, before the scaling, filtering the reconstructed version of the given picture using a de-ringing filter.

Present Application 19/027,659, Claim 14: One or more non-transitory computer-readable media having stored thereon computer-executable instructions for causing a computer system, when programmed thereby, to perform operations comprising: receiving, as part of a bitstream, encoded data for multiple pictures of a video sequence, wherein the encoded data includes entropy-coded filter parameters that specify a filter adapted to remove noise for a given picture of the multiple pictures; and decoding the encoded data to reconstruct the multiple pictures, including: entropy decoding the filter parameters for the filter; reconstructing the given picture, thereby producing a reconstructed version of the given picture; scaling the reconstructed version of the given picture; filtering the scaled reconstructed version of the given picture, using the filter specified by the filter parameters, thereby producing a denoised reconstructed version of the given picture; storing the denoised reconstructed version of the given picture for use as a reference picture; and using the reference picture in motion compensation operations for another picture of the multiple pictures.

US Patent No. 11,277,631, Claim 10:
In a computer system that implements a video decoder, a method comprising: receiving, as part of a bitstream, encoded data for multiple pictures of a video sequence, wherein the encoded data includes entropy-coded filter parameters that specify a filter adapted to remove noise for a given picture of the multiple pictures; and decoding the encoded data to reconstruct the multiple pictures, including: entropy decoding the filter parameters for the filter; reconstructing the given picture, thereby producing a reconstructed version of the given picture; scaling the reconstructed version of the given picture; filtering the scaled reconstructed version of the given picture, using the filter specified by the filter parameters, thereby producing a denoised reconstructed version of the given picture; storing the denoised reconstructed version of the given picture in a buffer for use as a reference picture; and using the reference picture in motion compensation operations for another picture of the multiple pictures.

US Patent No. 11,831,899, Claim 13:
A computer system comprising: a first buffer, implemented using memory, configured to receive, as part of a bitstream, encoded data for multiple pictures of a video sequence, wherein the encoded data includes entropy-coded filter parameters that specify a filter adapted to remove noise for a given picture of the multiple pictures; and a video decoder configured to perform operations to decode the encoded data to reconstruct the multiple pictures, the operations including: entropy decoding the filter parameters for the filter; reconstructing the given picture, thereby producing a reconstructed version of the given picture; scaling the reconstructed version of the given picture; filtering the scaled reconstructed version of the given picture, using the filter specified by the filter parameters, thereby producing a denoised reconstructed version of the given picture; storing the denoised reconstructed version of the given picture in a second buffer for use as a reference picture; and using the reference picture in motion compensation operations for another picture of the multiple pictures.

Present Application 19/027,659, Claim 15: The one or more non-transitory computer-readable media of claim 14, wherein the scaling increases spatial resolution of the reconstructed version of the given picture from a first spatial resolution to a second spatial resolution larger than the first spatial resolution.

US Patent No. 11,277,631, Claim 11: The method of claim 10, wherein the scaling increases spatial resolution of the reconstructed version of the given picture from a first spatial resolution to a second spatial resolution larger than the first spatial resolution.

US Patent No. 11,831,899, Claim 14: The computer system of claim 13, wherein the scaling increases spatial resolution of the reconstructed version of the given picture from a first spatial resolution to a second spatial resolution larger than the first spatial resolution.

Present Application 19/027,659, Claim 16:
The one or more non-transitory computer-readable media of claim 14, wherein the encoded data further includes a scaling factor, and wherein the scaling changes spatial resolution of the reconstructed version of the given picture according to the scaling factor.

US Patent No. 11,277,631, Claim 12: The method of claim 10, wherein the encoded data further includes a scaling factor, and wherein the scaling changes spatial resolution of the reconstructed version of the given picture according to the scaling factor.

US Patent No. 11,831,899, Claim 15: The computer system of claim 13, wherein the encoded data further includes a scaling factor, and wherein the scaling changes spatial resolution of the reconstructed version of the given picture according to the scaling factor.

Present Application 19/027,659, Claim 17: The one or more non-transitory computer-readable media of claim 14, wherein the decoding further includes, before the scaling, filtering the reconstructed version of the given picture using a de-ringing filter.

US Patent No. 11,277,631, Claim 13: The method of claim 10, wherein the decoding further includes, before the scaling, filtering the reconstructed version of the given picture using a de-ringing filter.

US Patent No. 11,831,899, Claim 16: The computer system of claim 13, wherein the operations further include, before the scaling, filtering the reconstructed version of the given picture using a de-ringing filter.

Present Application 19/027,659, Claim 18: The one or more non-transitory computer-readable media of claim 14, wherein the filter is selected from the group consisting of: a variation of frequency-domain lowpass filter; a variation of spatial/temporal-domain lowpass filter; a variation of spatial/temporal-domain median filter; and a filter that uses block-matching and three-dimensional filtering.

US Patent No. 11,277,631, Claim 14:
The method of claim 10, wherein the filter is selected from the group consisting of: a variation of frequency-domain lowpass filter; a variation of spatial/temporal-domain lowpass filter; a variation of spatial/temporal-domain median filter; and a filter that uses block-matching and three-dimensional filtering. Claim 17. The computer system of claim 13, wherein the filter is selected from the group consisting of: a variation of frequency-domain lowpass filter; a variation of spatial/temporal-domain lowpass filter; a variation of spatial/temporal-domain median filter; and a filter that uses block-matching and three-dimensional filtering. Claim 19. The one or more non-transitory computer-readable media of claim 14, wherein the filter is a Wiener filter. Claim 15. The method of claim 10, wherein the filter is a Wiener filter. Claim 18. The computer system of claim 13, wherein the filter is a Wiener filter. Claim 20. The one or more non-transitory computer-readable media of claim 14, wherein the denoised reconstructed version of the given picture is stored in a decoded picture buffer. Claim 10. 
In a computer system that implements a video decoder, a method comprising: receiving, as part of a bitstream, encoded data for multiple pictures of a video sequence, wherein the encoded data includes entropy-coded filter parameters that specify a filter adapted to remove noise for a given picture of the multiple pictures; and decoding the encoded data to reconstruct the multiple pictures, including: entropy decoding the filter parameters for the filter; reconstructing the given picture, thereby producing a reconstructed version of the given picture; scaling the reconstructed version of the given picture; filtering the scaled reconstructed version of the given picture, using the filter specified by the filter parameters, thereby producing a denoised reconstructed version of the given picture; storing the denoised reconstructed version of the given picture in a buffer for use as a reference picture; and using the reference picture in motion compensation operations for another picture of the multiple pictures. Claim 16. The method of claim 10, wherein the buffer is a decoded picture buffer. Claim 13. 
A computer system comprising: a first buffer, implemented using memory, configured to receive, as part of a bitstream, encoded data for multiple pictures of a video sequence, wherein the encoded data includes entropy-coded filter parameters that specify a filter adapted to remove noise for a given picture of the multiple pictures; and a video decoder configured to perform operations to decode the encoded data to reconstruct the multiple pictures, the operations including: entropy decoding the filter parameters for the filter; reconstructing the given picture, thereby producing a reconstructed version of the given picture; scaling the reconstructed version of the given picture; filtering the scaled reconstructed version of the given picture, using the filter specified by the filter parameters, thereby producing a denoised reconstructed version of the given picture; storing the denoised reconstructed version of the given picture in a second buffer for use as a reference picture; and using the reference picture in motion compensation operations for another picture of the multiple pictures. Claim 19. The computer system of claim 13, wherein the second buffer is a decoded picture buffer. Claim 21. The one or more non-transitory computer-readable media of claim 14, wherein the decoding further includes assigning a reference picture index to the denoised reconstructed version of the given picture according to one or more rules. Claim 17. The method of claim 10, wherein the decoding further comprises assigning a reference picture index to the denoised reconstructed version of the given picture according to one or more rules. Claim 20. The computer system of claim 13, wherein the operations further comprise assigning a reference picture index to the denoised reconstructed version of the given picture according to one or more rules. 
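The decode path the claims recite (entropy-decode the signaled filter parameters, reconstruct the picture, scale it, denoise the scaled reconstruction with the signaled filter, store the result in a buffer for later motion compensation) can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the application's actual implementation: the scalar adaptive Wiener filter, nearest-neighbor upsampling, and the `read_filter_params`/`reconstruct_picture` stubs on `entropy_decoder` are all hypothetical stand-ins for real bitstream parsing and reconstruction.

```python
import numpy as np

def wiener_denoise(block, noise_var):
    """Scalar adaptive Wiener filter (illustrative): attenuate each
    deviation from the block mean by signal_var / (signal_var + noise_var).
    A flat block (signal_var == 0) collapses to its mean; a high-variance
    block passes through nearly unchanged."""
    mean = block.mean()
    signal_var = max(block.var() - noise_var, 0.0)
    total = signal_var + noise_var
    gain = signal_var / total if total > 0 else 0.0
    return mean + gain * (block - mean)

def decode_picture(entropy_decoder, dpb, scale_factor):
    """One pass of the claimed pipeline; entropy_decoder is a hypothetical
    object exposing the two stub methods named in the lead-in."""
    # 1. Entropy-decode the filter parameters signaled in the bitstream
    #    (here reduced to a single noise-variance estimate).
    noise_var = entropy_decoder.read_filter_params()
    # 2. Reconstruct the given picture.
    picture = entropy_decoder.reconstruct_picture()
    # 3. Scale the reconstructed version (nearest-neighbor upsampling
    #    stands in for the claimed resolution change).
    scaled = picture.repeat(scale_factor, axis=0).repeat(scale_factor, axis=1)
    # 4. Filter the scaled reconstruction with the signaled filter.
    denoised = wiener_denoise(scaled, noise_var)
    # 5. Store the denoised reconstruction in the decoded picture buffer
    #    for use as a reference picture in later motion compensation.
    dpb.append(denoised)
    return denoised
```

Note the ordering that distinguishes these claims: scaling happens before the noise-removal filtering, so the filter operates at the reference (second) resolution rather than the coded resolution.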
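The reference-index claims deliberately leave the rule open ("according to one or more rules"). One rule common in practice, assumed here purely for illustration, orders references by picture order count (POC) distance so the references nearest the current picture receive the smallest, cheapest-to-code indices; `assign_reference_indices` is a hypothetical helper, not language from the application.

```python
def assign_reference_indices(dpb_entries, current_poc):
    """Map each buffered reference picture to a reference index by
    closeness in POC to the current picture: smaller POC distance
    gets a lower index. Each entry is a (poc, picture) pair; returns
    a dict of poc -> reference index."""
    ordered = sorted(dpb_entries, key=lambda entry: abs(entry[0] - current_poc))
    return {poc: idx for idx, (poc, _picture) in enumerate(ordered)}
```

For a current picture at POC 5 with references at POCs 0, 4, and 8, the nearest reference (POC 4) gets index 0 and the farthest (POC 0) gets index 2.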
Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALLEN C WONG, whose telephone number is (571) 272-7341. The examiner can normally be reached on Flex Monday-Thursday, 9:30am-7:30pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sath V Perungavoor, can be reached at 571-272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ALLEN C WONG/
Primary Examiner, Art Unit 2488

Prosecution Timeline

Jan 17, 2025
Application Filed
Feb 18, 2025
Response after Non-Final Action
Feb 12, 2026
Non-Final Rejection — §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604009
IMAGE ENCODING/DECODING METHOD AND APPARATUS
2y 5m to grant Granted Apr 14, 2026
Patent 12598321
ENCODER, DECODER, ENCODING METHOD, AND DECODING METHOD
2y 5m to grant Granted Apr 07, 2026
Patent 12587671
VIDEO ENCODING APPARATUS AND A VIDEO DECODING APPARATUS
2y 5m to grant Granted Mar 24, 2026
Patent 12581134
FEATURE ENCODING/DECODING METHOD AND DEVICE, AND RECORDING MEDIUM STORING BITSTREAM
2y 5m to grant Granted Mar 17, 2026
Patent 12581091
METHODS AND APPARATUS OF ENCODING/DECODING VIDEO PICTURE DATA
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
83%
Grant Probability
95%
With Interview (+11.8%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 805 resolved cases by this examiner. Grant probability derived from career allow rate.
