Prosecution Insights
Last updated: April 19, 2026
Application No. 19/089,130

SYSTEMS AND METHODS FOR ADAPTIVE DECODER SIDE PADDING IN VIDEO REGION PACKING

Non-Final OA (§101, §103)
Filed: Mar 25, 2025
Examiner: NAWAZ, TALHA M
Art Unit: 2483
Tech Center: 2400 — Computer Networks
Assignee: Op Solutions LLC
OA Round: 1 (Non-Final)
Grant Probability: 89% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 3m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 89%, above average (538 granted / 604 resolved; +31.1% vs TC avg)
Interview Lift: -0.8%, minimal (based on resolved cases with interview)
Avg Prosecution: 2y 3m (typical timeline; 29 currently pending)
Total Applications: 633 (career history, across all art units)

Statute-Specific Performance

§101: 7.2% (-32.8% vs TC avg)
§103: 48.1% (+8.1% vs TC avg)
§102: 24.9% (-15.1% vs TC avg)
§112: 11.0% (-29.0% vs TC avg)
Deltas are relative to the Tech Center average estimate • Based on career data from 604 resolved cases
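The per-statute figures above can be cross-checked against their "vs TC avg" deltas: subtracting each delta from its statute rate should recover the Tech Center baseline. A minimal sketch of that consistency check (the dictionary layout and the assumption that a single baseline underlies all four deltas are illustrative, not from the dashboard):

```python
# Recover the implied Tech Center baseline from each statute's
# allow rate and its "vs TC avg" delta: baseline = rate - delta.
stats = {
    "101": (7.2, -32.8),
    "103": (48.1, 8.1),
    "102": (24.9, -15.1),
    "112": (11.0, -29.0),
}
baselines = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(baselines)  # every statute implies the same ~40.0% baseline
```

All four statutes back out the same 40.0% figure, consistent with a single Tech Center average line on the original chart.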

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

This application discloses and claims only subject matter disclosed in the prior application, and names the inventor or at least one joint inventor named in the prior application. Accordingly, this application may constitute a continuation or divisional. Should applicant desire to claim the benefit of the filing date of the prior application, attention is directed to 35 U.S.C. 120, 37 CFR 1.78, and MPEP § 211 et seq. The presentation of a benefit claim may result in an additional fee under 37 CFR 1.17(w)(1) or (2) being required, if the earliest filing date for which benefit is claimed under 35 U.S.C. 120, 121, 365(c), or 386(c) and 1.78(d) in the application is more than six years before the actual filing date of the application.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art.
The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and
(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means" but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation uses a generic placeholder coupled with functional language, without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: "a region unpacking module / region padding module" in claim 1. Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid such interpretation (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid interpretation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 17 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim is drawn to a "computer-readable medium". The specification is silent regarding the meaning of this term. Thus, applying the broadest reasonable interpretation in light of the specification, and taking into account the meaning of the words in their ordinary usage as they would be understood by one of ordinary skill in the art (MPEP § 2111), the claim as a whole covers both transitory and non-transitory media. A transitory medium does not fall into any of the four statutory categories of invention (process, machine, manufacture, or composition of matter). Applicant is respectfully suggested to amend the claim to read "A non-transitory computer readable medium …" to overcome the 35 U.S.C. § 101 rejection.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-17 are rejected under 35 U.S.C. 103 as being unpatentable over Joshi et al. (US20240137548) (hereinafter Joshi) in view of Boyce et al. (US2023006741) (hereinafter Boyce).

Regarding claim 1, Joshi discloses a decoder for decoding a bitstream encoded with a packed frame having at least one region of interest defined therein and encoded region parameters associated therewith, the decoder comprising (Figs. 7-9, 16-21, 0066; coding process performed on received data in bitstream): a region unpacking module, the region unpacking module receiving the packed frame and region parameters and reconstructing an unpacked frame with said at least one region of interest [Figs. 16-21, 0086-0097, 0137-0139, 0169-0182; unpacking (extracting) frame data and reconstructing based on frame parameters for a specified portion of the video frame];
a region padding module, the region padding module receiving the unpacked frame and region parameters and applying at least one padding parameter to at least one dimension of a region of interest in the unpacked frame [Figs. 16-21, 0086-0097, 0137-0139, 0170-0172, 0181-0184; including padding parameters for an area of the video frame].

Joshi discloses the limitations of the claim. However, Joshi does not explicitly disclose a video decoder receiving the bitstream and extracting the packed frame and region parameters therefrom. Boyce more explicitly discloses a video decoder receiving the bitstream and extracting the packed frame and region parameters therefrom [Figs. 1-6, 0030-0033, 0040-0043, 0050-0056; performing patch-based coding on a region of interest of a frame received in a bitstream]. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the teachings of Joshi with those of Boyce as stated above. Such a combination achieves advantageous flexibility to changes in optimal visual features, avoiding obsolescence (see Boyce 0028).

Regarding claim 2, Joshi discloses wherein the region padding module further receives adaptive padding parameters and wherein said applied padding parameters are determined at least in part on said adaptive padding parameters [0137-0139; utilizing padded rows based on determined frame requirements].

Regarding claim 3, Joshi discloses wherein the regions of interest are defined by rectangular bounding boxes and wherein the applied padding parameters are pixels of a predetermined color added to at least one boundary of the region bounding box [0070, 0086, 0167-0168; coding in the respective color space and padding parameters applied as needed to a region of the displacement frame].
Regarding claim 4, Joshi discloses wherein the regions of interest are defined by rectangular bounding boxes and wherein the applied padding parameters are pixels of an average color determined by the pixels within the region, the pixels being added to at least one boundary of the bounding box [0030-0033, 0070, 0086, 0167-0168; creating a mesh using edges and vertices associated with a texture attribute].

Regarding claim 5, Joshi discloses wherein the applied padding parameters are a fixed number of pixels [0107-0115; interleaving done at the pixel level in a coded block].

Regarding claim 6, Joshi discloses wherein the region padding module further receives adaptive padding parameters and wherein said applied padding parameters comprise a variable number of pixels determined at least in part by the adaptive padding parameters [Figs. 16-21, 0086-0097, 0137-0139, 0170-0172, 0181-0184; including padding parameters for an area of the video frame].

Regarding claim 7, Joshi discloses wherein the padding parameters comprise repeating pixels at the edge of a region of interest [0107-0115; interleaving done at the pixel level in a coded block].

Regarding claim 8, Joshi discloses wherein a padding value is signaled in the bitstream and the padding parameter is determined at least in part on the padding value [Figs. 16-21, 0086-0097, 0137-0139, 0170-0172, 0181-0184; including padding parameters for an area of the video frame].

Regarding claim 9, Joshi discloses a method of decoding a bitstream having a packed frame with at least one region of interest defined therein and encoded region parameters associated therewith, the method comprising (Figs. 16-21, 0066; coding process performed on received data in bitstream): receiving the bitstream (Figs. 16-21, 0066); reconstructing an unpacked frame with said at least one region of interest from the packed frame and region parameters [Figs. 16-21, 0066-0072, 0176-0183; unpacking frame data and reconstructing based on frame parameters for a specified portion of the video frame]; and applying at least one padding parameter to at least one dimension of a region of interest in the unpacked frame [Figs. 16-21, 0066-0072, 0086-0097, 0176-0183; adding padding parameters to an area of the video frame].

Joshi discloses the limitations of the claim. However, Joshi does not explicitly disclose extracting the packed frame and region parameters from the bitstream. Boyce more explicitly discloses extracting the packed frame and region parameters from the bitstream [Figs. 1-6, 0030-0033, 0040-0043, 0050-0056; performing patch-based coding on a region of interest of a frame received in a bitstream]. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the teachings of Joshi with those of Boyce as stated above. Such a combination achieves advantageous flexibility to changes in optimal visual features, avoiding obsolescence (see Boyce 0028).

Regarding claim 10, Joshi discloses further receiving adaptive padding parameters, wherein said applied padding parameters are determined at least in part from said adaptive padding parameters [0137-0139; utilizing padded rows based on determined frame requirements].

Regarding claim 11, Joshi discloses wherein the regions of interest are defined by rectangular bounding boxes and wherein the applied padding parameters are pixels of a predetermined color added to at least one boundary of the region bounding box [0070, 0086, 0167-0168; coding in the respective color space and padding parameters applied as needed to a region of the displacement frame].
Regarding claim 12, Joshi discloses wherein the regions of interest are defined by rectangular bounding boxes and wherein the applied padding parameters are pixels of an average color determined by the pixels within the region, the pixels being added to at least one boundary of the bounding box [0070, 0086, 0167-0168; coding in the respective color space and padding parameters applied as needed to a region of the displacement frame].

Regarding claim 13, Joshi discloses wherein the applied padding parameters are a fixed number of pixels [0107-0115; interleaving done at the pixel level in a coded block].

Regarding claim 14, Joshi discloses further receiving adaptive padding parameters, wherein said applied padding parameters comprise a variable number of pixels determined at least in part by the adaptive padding parameters [Figs. 16-21, 0086-0097, 0137-0139, 0170-0172, 0181-0184; including padding parameters for an area of the video frame].

Regarding claim 15, Joshi discloses wherein the padding parameters comprise repeating pixels at the edge of a region of interest [0107-0115; interleaving done at the pixel level in a coded block].

Regarding claim 16, Joshi discloses wherein a padding value is signaled in the bitstream and the padding parameter is determined at least in part on the padding value [Figs. 16-21, 0086-0097, 0137-0139, 0170-0172, 0181-0184; including padding parameters for an area of the video frame].

Regarding claim 17, Joshi discloses a computer-readable medium having stored thereon instructions that, when executed by a processor, cause the processor to perform steps comprising (Figs. 1-6, 0066; CRM): decoding a compressed bitstream using an adaptive video decoder to provide a first decoded output [0066; decoding received data from the bitstream]; unpacking and de-transforming the first decoded output using an unpacker and de-transformer to provide an output video with adaptive padding transformation in the region packing [Figs. 16-21, 0086-0097, 0137-0139, 0169-0182; adding padding parameters to an area of the video frame]; wherein the unpacker and de-transformer receives unpacked reconstructed video and provides selectively padded regions for machine task evaluation [Figs. 4, 16-21, 0086-0097, 0137-0139, 0169-0182; unpacking frame data and reconstructing based on frame parameters for a specified portion of the video frame].

Joshi discloses the limitations of the claim. However, Joshi does not explicitly disclose that the de-transformer receives unpacked reconstructed video and provides selectively padded regions for machine task evaluation. Boyce more explicitly discloses a de-transformer that receives unpacked reconstructed video and provides selectively padded regions for machine task evaluation [Figs. 1-6, 0030-0033, 0040-0043, 0050-0056; performing patch-based coding on a region of interest of a frame received in a bitstream]. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the teachings of Joshi with those of Boyce as stated above. Such a combination achieves advantageous flexibility to changes in optimal visual features, avoiding obsolescence (see Boyce 0028).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TALHA M NAWAZ, whose telephone number is (571) 270-5439. The examiner can normally be reached Flex, M-R 6:30am-3:30pm; F 8:30am-12:30pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Joe G Ustaris, can be reached at 571-272-7383.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/TALHA M NAWAZ/
Primary Examiner, Art Unit 2483
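Several of the rejected claims (7 and 15) recite padding that repeats pixels at the edge of a region of interest. As a rough, hypothetical illustration of that kind of decoder-side edge-repeat padding (the function name, list-of-lists single-channel image representation, and fixed pad width are all assumptions for illustration, not taken from the application):

```python
# Hypothetical sketch of edge-repeat padding around a rectangular
# region of interest, in the spirit of claims 7 and 15.
from typing import List

Region = List[List[int]]  # rows of single-channel pixel values


def pad_region_edge_repeat(region: Region, pad: int) -> Region:
    """Pad every side of a rectangular region by `pad` pixels,
    replicating the nearest edge pixel."""
    # Horizontal: repeat each row's first and last pixel `pad` times.
    rows = [[r[0]] * pad + r + [r[-1]] * pad for r in region]
    # Vertical: repeat copies of the (already widened) top/bottom rows.
    top = [list(rows[0]) for _ in range(pad)]
    bottom = [list(rows[-1]) for _ in range(pad)]
    return top + rows + bottom


roi = [[1, 2],
       [3, 4]]
padded = pad_region_edge_repeat(roi, 1)
# padded == [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

A fixed-pixel-count variant (claims 5 and 13) would simply hold `pad` constant, while the adaptive variants (claims 2, 6, 10, 14) would derive `pad` from signaled parameters.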

Prosecution Timeline

Mar 25, 2025
Application Filed
Feb 27, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12593023
Electronic Device with Reliable Passthrough Video Fallback Capability and Hierarchical Failure Detection Scheme
2y 5m to grant Granted Mar 31, 2026
Patent 12587631
Motion Dependent Display
2y 5m to grant Granted Mar 24, 2026
Patent 12587673
METHOD FOR DECODER-SIDE MOTION VECTOR DERIVATION USING SPATIAL CORRELATION
2y 5m to grant Granted Mar 24, 2026
Patent 12581024
IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD
2y 5m to grant Granted Mar 17, 2026
Patent 12573203
MEDICAL OBSERVATION SYSTEM, INFORMATION PROCESSING APPARATUS, AND INFORMATION PROCESSING METHOD
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner (based on the 5 most recent grants).


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 89%
With Interview: 88% (-0.8%)
Median Time to Grant: 2y 3m
PTA Risk: Low
Based on 604 resolved cases by this examiner. Grant probability derived from career allow rate.
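The headline figures can be reproduced from the stated counts. A minimal sketch, assuming the 89% is simply granted/resolved rounded to a whole percent and that the -0.8% interview lift is applied additively (both are assumptions about how the dashboard derives its numbers):

```python
# Sanity-check the dashboard's grant probability from its own counts:
# 538 granted of 604 resolved cases for this examiner.
granted, resolved = 538, 604
allow_rate = granted / resolved          # ~0.8907
display = round(allow_rate * 100)        # figure shown on the card
with_interview = round(allow_rate * 100 - 0.8)  # stated -0.8% lift
print(display, with_interview)           # 89 88
```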
