Prosecution Insights
Last updated: April 19, 2026
Application No. 18/927,730

METHOD, APPARATUS, AND MEDIUM FOR VIDEO PROCESSING

Non-Final OA (§102, §103)
Filed: Oct 25, 2024
Examiner: CHANG, DANIEL
Art Unit: 2487
Tech Center: 2400 — Computer Networks
Assignee: Bytedance Inc.
OA Round: 1 (Non-Final)
Grant Probability: 64% (Moderate)
OA Rounds: 1-2
To Grant: 2y 10m
With Interview: 76%

Examiner Intelligence

Career Allow Rate: 64% (233 granted / 367 resolved; +5.5% vs TC avg)
Interview Lift: +13.0% (moderate lift for resolved cases with interview)
Avg Prosecution: 2y 10m (typical timeline; 45 currently pending)
Total Applications: 412 (career history; across all art units)
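The headline allow rate is simple arithmetic on the career counts above; a quick check (figures taken from this panel):

```python
# Career allow rate from the counts shown above: 233 granted of 367 resolved
granted, resolved = 233, 367
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # 63.5%, which the dashboard displays rounded to 64%
```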

Statute-Specific Performance

§101: 5.8% (-34.2% vs TC avg)
§103: 51.4% (+11.4% vs TC avg)
§102: 11.4% (-28.6% vs TC avg)
§112: 17.8% (-22.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 367 resolved cases.
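Read literally, the four deltas are consistent with a single Tech Center baseline: in every row, rate minus delta comes out to 40.0%. A quick check (a sketch; it assumes each delta is a simple difference from the TC average, which the panel does not state explicitly):

```python
# Examiner rate and delta vs the Tech Center average per statute
# (figures taken from the table above); assumes delta = rate - TC average
stats = {"101": (5.8, -34.2), "103": (51.4, 11.4),
         "102": (11.4, -28.6), "112": (17.8, -22.2)}
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied_tc_avg)  # every statute implies the same 40.0% baseline
```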

Office Action

§102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The abstract of the disclosure is objected to because the language, "[e]mbodiments of the present disclosure provide […]," recites legal phraseology and requires the reader to consult the specification. Correction is required. See MPEP § 608.01(b).

Applicant is reminded of the proper language and format for an abstract of the disclosure. The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details. The language should be clear and concise and should not repeat information given in the title. It should avoid phrases which can be implied, such as "The disclosure concerns," "The disclosure defined by this invention," and "The disclosure describes." In addition, the form and legal phraseology often used in patent claims, such as "means" and "said," should be avoided.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim 20 is rejected under 35 U.S.C. 102(a)(1) as being anticipated by Guo et al. (US 2018/0103273 A1) (hereinafter Guo).

Regarding claim 20, "non-transitory computer-readable recording medium storing a bitstream of a video which is generated by a method performed by an apparatus for video processing, wherein the method comprises […]," is a product-by-process claim limitation where the product is the bitstream/image data and the process is the method steps to generate the bitstream. MPEP § 2113 recites, "Product-by-process claims are not limited to the manipulations of the recited steps, only the structure implied by the steps." Thus, the scope of the claim is the non-transitory computer-readable recording medium storing the bitstream. The structure includes the information and samples manipulated by the steps. "To be given patentable weight, the printed matter and associated product must be in a functional relationship. A functional relationship can be found where the printed matter performs some function with respect to the product to which it is associated." MPEP § 2111.05(I)(A). When a claimed "non-transitory computer-readable recording medium" merely serves as a support for information or data, no functional relationship exists. MPEP § 2111.05(III).
The non-transitory computer-readable recording/storage medium storing the claimed bitstream/image data in claims 18-19 merely serves as a support for the storage of the bitstream/image data and provides no functional relationship between the stored bitstream/image data and the recording/storage medium. Therefore, the bitstream, whose scope is implied by the method steps, is non-functional descriptive material and given no patentable weight. MPEP § 2111.05(III). Thus, the claim scope is just a storage medium storing data and is anticipated by Guo, which recites in Paragraphs [0049] & [0055] that encoding the video data in this way may be necessary to ensure that the video data may be stored on a given type of computer-readable media, such as a DVD or CD-ROM.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6, 10-11 & 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Guo et al. (US 2018/0103273 A1) (hereinafter Guo) in view of Gao et al., "Non-EE2: Adaptive Blending for GPM," Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29, 26th Meeting, 20-29 April 2022, JVET-Z0137 (hereinafter Gao).

Regarding claim 1, Guo discloses a method for video processing, comprising: obtaining, for a conversion between a current video block of a video and a bitstream of the video [Paragraphs [0059]-[0061], encoding unit receiving data for video encoding (conversion)], a value for a metric of a blending region [Paragraphs [0117]-[0119] & [0133]-[0135], Figs. 11 & 13, size of PU below threshold, as value for metric for transition zone 1200]; and performing the conversion based on the value for the metric [Paragraphs [0133]-[0139], Fig. 13, size of PU below threshold, as value for metric for transition zone, used within operation 1300 as part of a method for coding video data].

However, Guo does not explicitly disclose the value for a metric of a blending region in a direction, the blending region being comprised in a target region associated with the current video block, the value being determined from a plurality of predetermined values for the metric, values for samples of the blending region being determined based on values for samples of a first part of the target region and values for samples of a second part of the target region.

Gao teaches the value for a metric of a blending region in a direction, the blending region being comprised in a target region associated with the current video block, the value being determined from a plurality of predetermined values for the metric, values for samples of the blending region being determined based on values for samples of a first part of the target region and values for samples of a second part of the target region [Sections 1-2, Fig. 1: the width of the blending area surrounding the GPM partition boundary, wherein the width is in a direction with theta; within a block having GPM, the width of the blending area (i.e., θ) is allowed to be selected from a set of pre-defined values {0, 1, 2, 4, 8}; the optimal blending area width is determined for each GPM CU at the encoder and signaled to the decoder based on one syntax element merge_gpm_blending_width_idx; and weighting values are determined using samples (Xc, Yc) on both sides of the boundary line, separating the first and second parts of the target region].

It would have been obvious to the person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Guo to integrate the blending techniques in Gao as above, to improve the prediction efficiency of geometric partition mode (Gao, Abstract).

Regarding claim 2, Guo and Gao disclose the method of claim 1, and are analyzed as previously discussed with respect to the claim. Furthermore, Gao teaches wherein the metric is a width between two sides of the blending region, and/or the first part or the second part comprises one of the following: a template of the current video block, a partition of the current video block, a subpartition of the current video block, or a subblock of the current video block [Sections 1-2, Fig. 1: width of the blending area surrounding the GPM partition boundary; weighting values are determined using samples (Xc, Yc) on both sides of the boundary line, separating the first and second parts of the target region]. It would have been obvious to the person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Guo to integrate the blending techniques in Gao as above, to improve the prediction efficiency of geometric partition mode (Gao, Abstract).

Regarding claim 3, Guo and Gao disclose the method of claim 1, and are analyzed as previously discussed with respect to the claim.
Furthermore, Gao teaches wherein a value for a sample of the blending region is determined as a weighted sum of a value for a sample of the first part and a value for a sample of the second part, or wherein a value for a sample of the blending region is equal to a value for a sample of the first part or a value for a sample of the second part, or wherein a cost is determined based on the target region [Sections 1-2, Fig. 1, blending-weight equation (reproduced as an image in the original; not shown here)]. It would have been obvious to the person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Guo to integrate the blending techniques in Gao as above, to improve the prediction efficiency of geometric partition mode (Gao, Abstract).

Regarding claim 4, Guo and Gao disclose the method of claim 3, and are analyzed as previously discussed with respect to the claim. Furthermore, Guo discloses wherein a motion vector (MV) is determined from a plurality of MVs for the current video block based on the cost, or wherein a reference picture is determined from a plurality of reference pictures for the current video block based on the cost, or wherein a partition mode is determined from a plurality of partition modes for the current video block based on the cost, or wherein a geometric partitioning mode (GPM) blending scheme is determined from a plurality of GPM blending schemes for the current video block based on the cost [Paragraphs [0143]-[0145], cost function determining geometric partitioning mode; claim 83].

Regarding claim 5, Guo and Gao disclose the method of claim 1, and are analyzed as previously discussed with respect to the claim.
Furthermore, Guo discloses wherein the blending region is determined based on the value for the metric, and/or wherein the target region comprises one of the following: a coding unit (CU), a prediction unit (PU), a transform unit (TU), a template, or a part of a template, and/or wherein the current video block comprises more than one partition, and/or wherein the current video block is coded with a GPM-based mode or a multiple hypothesis prediction, and/or wherein the current video block is a reference video block of a further video block of the video, the further video block being different from the current video block and coded with a GPM-based mode, and/or wherein the current video block is coded with a template-based coding tool, or the current video block is not coded with a template-based coding tool [Paragraphs [0133]-[0139], Fig. 13, size of PU below threshold, as value for metric for transition zone, used within operation 1300 as part of a method for coding video data within a CU/PU].

Regarding claim 6, Guo and Gao disclose the method of claim 1, and are analyzed as previously discussed with respect to the claim. Furthermore, Gao teaches wherein the first part is a first template of the current video block, the second part is a second template of the current video block, the blending region is around a partition line between the first template and the second template, and a weighted blending process is applied on the first template and the second template based on the width of the blending region [Sections 1-2, Fig. 1: width of the blending area surrounding the GPM partition boundary; weighting values are determined using samples (Xc, Yc) on both sides of the boundary line, separating the first and second parts of the target region as first and second templates].
It would have been obvious to the person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Guo to integrate the blending techniques in Gao as above, to improve the prediction efficiency of geometric partition mode (Gao, Abstract).

Regarding claim 10, Guo and Gao disclose the method of claim 1, and are analyzed as previously discussed with respect to the claim. Furthermore, Gao teaches wherein the plurality of predetermined values are stored in a look-up table, or wherein the plurality of predetermined values are comprised in a first set of predetermined values [Sections 1-2, Fig. 1: the width of the blending area surrounding the GPM partition boundary, wherein the width is in a direction with theta; within a block having GPM, the width of the blending area (i.e., θ) is allowed to be selected from a set of pre-defined values {0, 1, 2, 4, 8}; the optimal blending area width is determined for each GPM CU at the encoder and signaled to the decoder based on one syntax element merge_gpm_blending_width_idx; and weighting values are determined using samples (Xc, Yc) on both sides of the boundary line, separating the first and second parts of the target region]. It would have been obvious to the person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Guo to integrate the blending techniques in Gao as above, to improve the prediction efficiency of geometric partition mode (Gao, Abstract).

Regarding claim 11, Guo and Gao disclose the method of claim 10, and are analyzed as previously discussed with respect to the claim.
Furthermore, Gao teaches wherein obtaining the value for the metric comprises: determining the first set of predetermined values from a plurality of sets of predetermined values based on one of the following: a size of the current video block, a width of the current video block, a height of the current video block, a coding tree unit (CTU) size of the video, a resolution of the video, or a first syntax element indicated in the bitstream [Sections 1-2, Fig. 1: the width of the blending area surrounding the GPM partition boundary, wherein the width is in a direction with theta; within a block having GPM, the width of the blending area (i.e., θ) is allowed to be selected from a set of pre-defined values {0, 1, 2, 4, 8}; the optimal blending area width is determined for each GPM CU at the encoder and signaled to the decoder based on one syntax element merge_gpm_blending_width_idx; and weighting values are determined using samples (Xc, Yc) on both sides of the boundary line, separating the first and second parts of the target region]. It would have been obvious to the person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Guo to integrate the blending techniques in Gao as above, to improve the prediction efficiency of geometric partition mode (Gao, Abstract).

Regarding claim 16, Guo and Gao disclose the method of claim 1, and are analyzed as previously discussed with respect to the claim. Furthermore, Guo discloses wherein the conversion includes encoding the current video block into the bitstream [Paragraphs [0049]-[0054], encoding].

Regarding claim 17, Guo and Gao disclose the method of claim 1, and are analyzed as previously discussed with respect to the claim. Furthermore, Guo discloses wherein the conversion includes decoding the current video block from the bitstream [Paragraphs [0055]-[0058], decoding].
Regarding claim 18, apparatus claim 18 is drawn to the apparatus using/performing the same method as claimed in claim 1. Therefore, apparatus claim 18 corresponds to method claim 1 and is rejected for the same reasons of obviousness as used above.

Regarding claims 19-20, the non-transitory computer-readable storage medium claims recite similar features as recited in method claim 1. Thus, claims 19-20 correspond to method claim 1 and are rejected for the same reasons of obviousness as listed above.

Allowable Subject Matter

Claims 7-9 & 12-15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: the various claimed limitations are not taught or suggested by the prior art taken either singly or in combination, with emphasis that it is each claim, taken as a whole, including the interrelationships and interconnections between the various claimed elements, that makes it allowable over the prior art of record.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL CHANG, whose telephone number is (571) 272-5707. The examiner can normally be reached M-Sa, 12 PM - 10 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, David Czekaj, can be reached at 571-272-7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DANIEL CHANG/
Primary Examiner, Art Unit 2487
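The adaptive blending scheme cited from Gao throughout the §103 rejections (a blending width chosen per GPM CU from the pre-defined set {0, 1, 2, 4, 8}, with per-sample weights derived from each sample's position relative to the partition boundary) can be sketched roughly as follows. This is an illustrative sketch only: the function name, the linear-ramp weight, and the centre/offset conventions are assumptions for clarity, not the actual JVET-Z0137 or VVC weight derivation.

```python
import numpy as np

# Pre-defined blending-area widths, signaled per GPM CU in Gao's proposal
BLEND_WIDTHS = (0, 1, 2, 4, 8)

def gpm_blend(pred0, pred1, angle_deg, offset, width):
    """Blend two GPM partition predictions across a straight boundary.

    The boundary passes near the block centre at the given angle/offset;
    samples within `width` of it take a weighted sum of both predictions
    (hypothetical linear ramp), samples farther away take one side only.
    """
    assert width in BLEND_WIDTHS
    h, w = pred0.shape
    ys, xs = np.mgrid[0:h, 0:w]
    theta = np.deg2rad(angle_deg)
    # Signed distance of each sample (xc, yc) from the partition line
    d = (xs - w / 2) * np.cos(theta) + (ys - h / 2) * np.sin(theta) - offset
    if width == 0:
        w0 = (d < 0).astype(pred0.dtype)              # hard split, no blending
    else:
        w0 = np.clip(0.5 - d / (2 * width), 0.0, 1.0)  # ramp over blending region
    return w0 * pred0 + (1.0 - w0) * pred1
```

With `width = 0` the split is hard; larger widths widen the transition zone, which is the trade-off the encoder resolves per CU and signals via `merge_gpm_blending_width_idx`.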

Prosecution Timeline

Oct 25, 2024
Application Filed
Dec 20, 2025
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12593069
LOW MEMORY DESIGN FOR MULTIPLE REFERENCE LINE SELECTION SCHEME
2y 5m to grant · Granted Mar 31, 2026
Patent 12587672
DECOUPLED MODE INFERENCE AND PREDICTION
2y 5m to grant · Granted Mar 24, 2026
Patent 12574541
IMAGE PROCESSING METHOD AND ASSOCIATED IMAGE PROCESSING CIRCUIT
2y 5m to grant · Granted Mar 10, 2026
Patent 12570145
AUTOSTEREOSCOPIC CAMPFIRE DISPLAY
2y 5m to grant · Granted Mar 10, 2026
Patent 12574513
METHOD AND DEVICE FOR ENCODING/DECODING VIDEO SIGNAL BY USING OPTIMIZED CONVERSION BASED ON MULTIPLE GRAPH-BASED MODEL
2y 5m to grant · Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 64%
With Interview: 76% (+13.0%)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 367 resolved cases by this examiner. Grant probability derived from career allow rate.
