Prosecution Insights
Last updated: April 19, 2026
Application No. 18/390,238

LOCAL GLOBAL PREDICTION MODES WITH PROJECTED MOTION FIELDS

Non-Final OA: §102, §103, §112
Filed: Dec 20, 2023
Examiner: UHL, LINDSAY JANE KILE
Art Unit: 2481
Tech Center: 2400 (Computer Networks)
Assignee: Google LLC
OA Round: 1 (Non-Final)

Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 4m
With Interview: 89%

Examiner Intelligence

Career Allow Rate: 80% (above average); 324 granted / 404 resolved; +22.2% vs TC avg
Interview Lift: +8.7% (moderate), measured on resolved cases with interview
Typical Timeline: 2y 4m average prosecution; 38 applications currently pending
Career History: 442 total applications across all art units

Statute-Specific Performance

§101:  3.7%  (-36.3% vs TC avg)
§102:  8.7%  (-31.3% vs TC avg)
§103: 65.4%  (+25.4% vs TC avg)
§112: 10.3%  (-29.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 404 resolved cases.

Office Action

Rejections: §102, §103, §112
DETAILED ACTION

This Office Action is in response to the application filed on December 20, 2023. Claims 1-20 are pending and are examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 13-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Specifically, claim 13 recites a computer-readable medium having stored thereon a bitstream and describes that the bitstream is configured for decoding operations, but does not indicate that there are any coding instructions on the computer-readable medium for accomplishing such decoding of the bitstream. A computer-readable storage medium itself cannot decode data without instructions for such decoding. Accordingly, Applicant has failed to particularly point out and distinctly claim the subject matter which the inventor regards as the invention. Claims 14-20 are rejected for the same reasons as being dependent upon base claim 13.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 
102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. Claims 13-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by U.S. Patent Publication No. 2013/0016789 (“Lou”). With respect to claim 13, patentable weight is given to data stored on a computer-readable medium when there exists a functional relationship between the data and its associated substrate. MPEP 2111.05 III. For example, if a claim is drawn to a computer-readable medium containing programming, a functional relationship exists if the programming “performs some function with respect to the computer with which it is associated.” Id. However, if the claim recites that the computer-readable medium merely serves as a support for information or data, no functional relationship exists and the information or data is not given patentable weight. Id. Claim 13 is directed to a non-transitory computer-readable medium having stored thereon an encoded bitstream which is configured for decoding, wherein the decoding operations comprise several steps. 
These elements or steps are not performed by an intended computer, and the bitstream is not a form of programming that causes functions to be performed by an intended computer. This shows that the computer-readable medium merely serves as support for the bitstream and provides no functional relationship between the steps/elements that describe the generation of the bitstream and the intended computer system. Therefore, those claim elements are not given patentable weight. Thus the claim scope is just a storage medium storing data and is anticipated by Lou, which recites a storage medium storing a bitstream (see ¶155). Dependent claims 14-20 merely recite further limitations regarding the elements or steps of the decoding operations for the bitstream. These are also not performed by an intended computer, and the bitstream is not a form of programming that causes functions to be performed by an intended computer. Accordingly, for similar reasons, these claim elements are not given patentable weight and claims 14-20 are also rejected as just a storage medium storing data, which is anticipated by Lou, which recites a storage medium storing a bitstream (see ¶155).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Publication No. 
2024/0098300 (“Gao”), which corresponds to a priority application filed September 2022, in view of the level of skill in the art. With respect to claim 1, Gao discloses the invention substantially as claimed, including: A method comprising: obtaining an encoded bitstream (see Fig. 8, items 871, “coded video sequence”, Abstract, describing obtaining a coded/encoded bitstream of video); generating reconstructed frame data (see Fig. 8, items 874, “reconstructed pictures”, Abstract, describing generating reconstructed pictures/frame data), wherein generating the reconstructed frame data includes: identifying a current frame (see Fig. 18, item 1802, Abstract, describing that the system may identify a current frame); identifying a current reference frame (see Fig. 18, items 1804, 1806, Abstract, describing that the system identifies a reference frame for a current block of the current frame, i.e., current reference frame); identifying a current superblock from the current frame (see ¶¶99, 167, 209-210, 214-216, describing that the block/CU may be as large as 64x64 or 128x128, and that motion information may be determined and signaled at the tile or superblock level for a current picture/frame); obtaining a projected motion field, for the current superblock, using motion data from the current reference frame (see ¶¶230-233, 235, 237-238, describing obtaining nearby motion vectors, i.e., a projected motion field, using motion data from a reference frame for the current block (which, as detailed above, may be a superblock)); obtaining reference warp motion parameters, for the current superblock, by fitting the projected motion field to a warp motion model (see Fig. 
21, ¶¶230-233, 235, 238-241, 247, describing obtaining reference warp motion parameters for the coding block (which as detailed above may be a superblock) by fitting the projected nearby motion/per-pixel motion, i.e., motion field, to a warp motion model); obtaining, from the encoded bitstream, differential motion parameters for a current block from the current superblock (see ¶¶238, 247, describing that in WARP_DELTA mode, the system may obtain a delta from a predicted motion vector, i.e., differential motion parameter, for a current block (which as detailed above may be a superblock) from the encoded bitstream); obtaining motion parameters for the current block by adding the reference warp motion parameters and the differential motion parameters (see citations with respect to element above, describing that in WARP_DELTA, the delta for a predicted warp model is coded “similarly to how motion vectors are coded as a delta from a predicted motion vector”; see also ¶¶70, 169, describing that when motion vectors are coded as a delta/MVD/residual from a predicted motion vector, the motion vector for the current block is obtained by adding the prediction from the reference block and the delta/MVD/residual/differential motion information – one of ordinary skill in the art at the time of filing would have understood that when Gao describes that the current block’s “warp model is coded as a delta from a predicted warp model, similarly to how motion vectors are coded as a delta from a predicted motion vector”, this would indicate that such a delta is then added to the prediction/reference warp motion parameters to obtain the final motion parameters for the current block); obtaining a predicted block for the current block in accordance with the motion parameters (see citations and arguments with respect to elements above, describing the use of the warped motion parameters to obtain motion vectors for the current block and Fig. 
8, item 874, ¶¶111, 113-114, describing that the coding system obtains “prediction results”, i.e., prediction block, for the current block in accordance with the prediction information (motion vectors as described in ¶103) which are combined with a residual to form a reconstructed block); obtaining a reconstructed block by adding the predicted block and a reconstructed residual block obtained by decoding residual data for the current block from the encoded bitstream (see citations with respect to element above, describing that the reconstructed block is formed by adding the predicted block and a reconstructed residual block obtained by decoding residual data for the current block from the encoded bitstream); and including the reconstructed block in the reconstructed frame data (see ¶114, describing that the reconstructed block is part of the reconstructed picture, i.e., reconstructed frame data that is output as part of the reconstructed video); including the reconstructed frame data in an output video stream (see citations with respect to element above and Figs. 5, 8, input to item 512, item “reconstructed pictures”, describing that the reconstructed picture/frame is output in a video stream); and outputting the output video stream (see citations with respect to elements above describing outputting an output video stream).

As detailed above, Gao does not explicitly state that the delta/differential motion parameter is added to the reference warp motion parameter. However, Gao does state that when warp delta is used, the warp model is coded as a delta from a predicted warp model “similarly to how motion vectors are coded as a delta from a predicted motion vector” (see ¶247). As identified in other portions of Gao, one of ordinary skill in the art would have understood this to mean that a prediction index and a delta are coded and that the decoder then may determine the appropriate prediction and add the delta to it (see, e.g., ¶¶70, 169). 
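Both operations the Office Action walks through here, recovering final warp parameters by adding a signaled delta to a predicted warp model, and reconstructing a block by adding a predicted block to a decoded residual, are elementwise additions. A minimal Python sketch (illustrative only, not code from Gao or any AV1/AVM reference software; all names are hypothetical):

```python
# Illustrative sketch only -- not Gao's or the AV1/AVM implementation.
# Function and variable names are hypothetical.

def decode_warp_delta(predicted_params, deltas):
    """WARP_DELTA-style recovery: add each signaled delta to the
    corresponding predicted (reference) warp parameter."""
    return [p + d for p, d in zip(predicted_params, deltas)]

def reconstruct_block(predicted, residual, bit_depth=8):
    """Reconstruction as claim 1 recites: predicted block plus decoded
    residual block, clipped to the valid sample range."""
    hi = (1 << bit_depth) - 1
    return [[max(0, min(hi, p + r)) for p, r in zip(prow, rrow)]
            for prow, rrow in zip(predicted, residual)]

# Hypothetical six-parameter warp model [a, b, c, d, tx, ty]:
reference_warp = [1.0, 0.0, 0.0, 1.0, 4.0, -2.0]   # fit from the motion field
signaled_delta = [0.0, 0.0, 0.0, 0.0, 0.5, 0.25]   # parsed from the bitstream
final_warp = decode_warp_delta(reference_warp, signaled_delta)
# final_warp == [1.0, 0.0, 0.0, 1.0, 4.5, -1.75]

recon = reconstruct_block([[100, 200], [50, 250]], [[5, -10], [-60, 20]])
# recon == [[105, 190], [0, 255]]
```

The clipping in `reconstruct_block` mirrors the usual decoder behavior of keeping reconstructed samples inside the representable range; the claim language itself only recites the addition.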
Accordingly, to such a person, applying this concept to the warp model to code a delta for the warp model would have indicated that the final model/set of parameters would be obtained by signaling a predicted reference warp model and adding a delta/differential to it. Accordingly, it would have been obvious to modify Gao to specifically recite this and, in view of the level of skill in the art, Gao discloses each and every element of independent claim 1. With respect to claim 2, Gao discloses the invention substantially as claimed. As described above, Gao in view of the level of skill in the art discloses all the elements of independent claim 1. Gao additionally discloses: wherein: obtaining the projected motion field includes using motion data from a second reference frame, wherein the motion data from the second reference frame includes a motion vector that intersects the current reference frame (see citations and arguments with respect to claim 1 above and Gao Figs. 18-19, ¶¶97, 101, 110, describing that Gao’s motion information can include motion data from 2 reference frames with motion vectors that intersect the current reference frame). The reasons for combining the cited prior art with respect to claim 1 also apply to claim 2. With respect to claim 3, Gao discloses the invention substantially as claimed. As described above, Gao in view of the level of skill in the art discloses all the elements of independent claim 1. Gao additionally discloses: wherein: the current superblock is a 64x64-pixel superblock, a 128×128-pixel superblock, or a 256×256-pixel superblock (see citations and arguments with respect to claim 1 above, describing that the superblock may be 64x64 or 128x128). The reasons for combining the cited prior art with respect to claim 1 also apply to claim 3. With respect to claim 4, Gao discloses the invention substantially as claimed. 
As described above, Gao in view of the level of skill in the art discloses all the elements of independent claim 1. Gao additionally discloses: wherein: identifying the current superblock includes identifying a current group of superblocks that includes the current superblock; obtaining the projected motion field includes obtaining the projected motion field for the current group of superblocks; and obtaining the reference warp motion parameters includes obtaining the reference warp motion parameters for the current group of superblocks (see citations and arguments with respect to claim 1 above and ¶¶178, 232, 238, 247, describing that the superblock’s motion may be obtained as global motion (i.e., motion at a frame level – frames would be understood to be groups of superblocks) and the warp parameters may be obtained as global reference warp motion parameters). The reasons for combining the cited prior art with respect to claim 1 also apply to claim 4. With respect to claim 5, Gao discloses the invention substantially as claimed. As described above, Gao in view of the level of skill in the art discloses all the elements of independent claim 1. Gao additionally discloses: wherein: obtaining the projected motion field includes obtaining, for a respective 8x8 block of the current superblock, zero or more projected motion vectors between the respective 8x8 block and a reference block in the current reference frame (see citations and arguments with respect to claim 1 above, and Fig. 21, ¶¶93, 146, Table 1, 238, 240, describing that obtaining the motion may include obtaining zero or more projected nearby motion vectors and/or vectors for 8x8 blocks). The reasons for combining the cited prior art with respect to claim 1 also apply to claim 5. With respect to claim 6, Gao discloses the invention substantially as claimed. As described above, Gao in view of the level of skill in the art discloses all the elements of dependent claim 5. 
Gao additionally discloses: wherein: obtaining a respective projected motion vector includes obtaining, as the respective projected motion vector, a result of multiplying a motion vector from the reference block in the current reference frame by a result of dividing a temporal distance between the current reference frame and the current frame by a temporal distance between the current reference frame and a second reference frame, wherein the current frame is temporally between the current reference frame and the second reference frame (see citations and arguments with respect to claim 1 above and ¶¶172, 212, describing that motion vectors may be scaled based on the POC differences in each direction, i.e., based on the temporal distances between the reference frames in each direction). The reasons for combining the cited prior art with respect to claim 1 also apply to claim 6. With respect to claim 7, Gao discloses the invention substantially as claimed. As described above, Gao in view of the level of skill in the art discloses all the elements of independent claim 1. Gao additionally discloses: wherein: the warp motion model is a four-parameter warp motion model, a six-parameter warp motion model, or an eight-parameter warp motion model (see citations and arguments with respect to claim 1 above and ¶¶230, 235, table 7, describing that the warp motion model may be four or six parameter). The reasons for combining the cited prior art with respect to claim 1 also apply to claim 7. With respect to claim 8, Gao discloses the invention substantially as claimed. As described above, Gao in view of the level of skill in the art discloses all the elements of independent claim 1. 
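The temporal-distance scaling the Office Action cites for claim 6 (multiplying a motion vector by the ratio of the two temporal distances) can be sketched as follows. This is an illustrative simplification with hypothetical names, not the cited implementation:

```python
# Illustrative sketch (hypothetical names) of the claim-6 projection:
# scale a motion vector by the ratio of temporal (e.g., POC) distances.

def project_motion_vector(mv, dist_cur, dist_pair):
    """Linearly project mv by
    (distance: current reference frame -> current frame) /
    (distance: current reference frame -> second reference frame)."""
    scale = dist_cur / dist_pair
    return (mv[0] * scale, mv[1] * scale)

# Current frame temporally between the two reference frames, 2 of 4
# units from the current reference frame:
projected = project_motion_vector((8.0, -4.0), dist_cur=2, dist_pair=4)
# projected == (4.0, -2.0)
```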
Gao additionally discloses: wherein: fitting the projected motion field includes least-squares regression with respect to the projected motion field (see citations and arguments with respect to claim 1 above and ¶238, describing that fitting the warp parameters to the projected motion field includes least squares regression with respect to the motion field). The reasons for combining the cited prior art with respect to claim 1 also apply to claim 8. With respect to claim 9, Gao discloses the invention substantially as claimed. As described above, Gao in view of the level of skill in the art discloses all the elements of independent claim 1. Gao additionally discloses: wherein: obtaining the differential motion parameters is omitted; and obtaining the motion parameters for the current block includes using the reference warp motion parameters as the motion parameters for the current block (see citations and arguments with respect to claim 1 above and ¶246, describing that the current block’s motion parameters may be obtained by copying them from the neighbor’s/reference warp motion parameters and that WARP_DELTA is optional, i.e., obtaining the differential parameters may be omitted). The reasons for combining the cited prior art with respect to claim 1 also apply to claim 9. With respect to claim 10, Gao discloses the invention substantially as claimed. As described above, Gao in view of the level of skill in the art discloses all the elements of independent claim 1. Gao additionally discloses: wherein: the reference warp motion parameters indicate warped motion between the current reference frame and the current frame (see citations and arguments with respect to claim 1 above and Fig. 21, ¶¶230-232, 238, 247, describing that the reference warp motion parameters indicate warped motion between the reference frame and target/current frame). The reasons for combining the cited prior art with respect to claim 1 also apply to claim 10. 
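The least-squares operation mapped to claim 8 (fitting reference warp parameters to a projected motion field) can be sketched with NumPy. This is an illustrative toy for a six-parameter affine model, with hypothetical names, not the implementation cited in Gao:

```python
# Illustrative sketch (hypothetical names): fit a six-parameter affine
# warp model to a projected motion field by least squares. Each sample
# maps (x, y) -> (x + mvx, y + mvy).
import numpy as np

def fit_affine_warp(points, motion_vectors):
    """Solve for [a, b, tx, c, d, ty] minimizing the squared error of
    x' = a*x + b*y + tx and y' = c*x + d*y + ty over all samples."""
    pts = np.asarray(points, dtype=float)
    dst = pts + np.asarray(motion_vectors, dtype=float)
    n = len(pts)
    A = np.zeros((2 * n, 6))
    b = np.zeros(2 * n)
    A[0::2, 0:2] = pts   # x-equations use a, b
    A[0::2, 2] = 1.0     # ... and tx
    b[0::2] = dst[:, 0]
    A[1::2, 3:5] = pts   # y-equations use c, d
    A[1::2, 5] = 1.0     # ... and ty
    b[1::2] = dst[:, 1]
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params

# A pure translation by (+2, -1) should recover a = d = 1, b = c = 0,
# (tx, ty) = (2, -1):
pts = [(0, 0), (8, 0), (0, 8), (8, 8)]
mvs = [(2, -1)] * 4
params = fit_affine_warp(pts, mvs)
```

In a real codec the fit would typically run over the per-8x8-block projected motion vectors of the superblock, with fixed-point arithmetic rather than floating point.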
With respect to claim 11, Gao discloses the invention substantially as claimed. As described above, Gao in view of the level of skill in the art discloses all the elements of independent claim 1. Gao additionally discloses: wherein: the current reference frame is from a plurality of reference frames available for decoding the current frame; obtaining the projected motion field includes obtaining a plurality of projected motion fields that includes the projected motion field, wherein obtaining the plurality of projected motion fields includes obtaining respective projected motion fields on a per-reference frame basis with respect to the plurality of reference frames; and obtaining the reference warp motion parameters includes obtaining a plurality of reference warp motion parameter sets on a per-projected motion field basis with respect to the plurality of projected motion fields, where a reference warp motion parameter set from the plurality of reference warp motion parameter sets includes the reference warp motion parameters (see citations and arguments with respect to claim 1 above ¶¶232, 235, 237-238, 245, describing that multiple reference frames may be obtained and motion information for each reference frame may be obtained and warped based on a parameter sets for each motion information, i.e., based on a plurality of reference warp motion parameter sets; Examiner also notes that it is clear that since there are multiple blocks/superblocks in each frame, in the described system, in order to obtain the reconstructed current frame, this concept would be repeated for each, i.e., collectively using multiple reference frames, multiple reference fields, and multiple reference warp motion parameter sets with respect to the projected motion fields). The reasons for combining the cited prior art with respect to claim 1 also apply to claim 11. With respect to claim 12, Gao discloses the invention substantially as claimed. 
As described above, Gao in view of the level of skill in the art discloses all the elements of independent claim 1. Gao additionally discloses: An apparatus for decoding using local global prediction modes with projected motion fields (see Figs. 5, 8, showing such an apparatus), the apparatus comprising: a memory including computer executable instructions for decoding an encoded video stream (see ¶¶259, 286-287, 299-301, describing a memory for storing instructions for decoding an encoded video stream and a processor for executing such instructions – Examiner interprets this memory to be a non-transitory memory in accordance with Applicant’s specification at ¶295); and a processor that executes the instructions (see citations with respect to element above) to: obtain an encoded bitstream (see citations with respect to corresponding element of claim 1 above); generate reconstructed frame data (see citations with respect to corresponding element of claim 1 above), wherein to generate the reconstructed frame data the processor executes the instructions to: identify a current frame (see citations with respect to corresponding element of claim 1 above); identify a current reference frame (see citations with respect to corresponding element of claim 1 above); identify a current superblock from the current frame (see citations with respect to corresponding element of claim 1 above); obtain a projected motion field, for the current superblock, using motion data from the current reference frame (see citations with respect to corresponding element of claim 1 above); obtain reference warp motion parameters for the current superblock, wherein, to obtain the reference warp motion parameters, the processor executes the instructions to fit the projected motion field to a warp motion model (see citations with respect to corresponding element of claim 1 above); obtain differential motion parameters, for a current block from the current superblock, from the encoded bitstream (see 
citations with respect to corresponding element of claim 1 above); obtain motion parameters, for the current block, wherein, to obtain the motion parameters, the processor executes the instructions to add the reference warp motion parameters and the differential motion parameters (see citations with respect to corresponding element of claim 1 above); obtain a predicted block for the current block in accordance with the motion parameters (see citations with respect to corresponding element of claim 1 above); obtain a reconstructed block, wherein, to obtain the reconstructed block, the processor executes the instructions to add the predicted block and a reconstructed residual block obtained by decoding residual data for the current block from the encoded bitstream (see citations with respect to corresponding element of claim 1 above); and include the reconstructed block in the reconstructed frame data (see citations with respect to corresponding element of claim 1 above); include the reconstructed frame data in an output video stream (see citations with respect to corresponding element of claim 1 above); and output the output video stream (see citations with respect to corresponding element of claim 1 above). The reasons for combining the cited prior art with respect to claim 1 also apply to claim 12. With respect to claim 13, claim 13 recites the elements of claim 1 in non-transitory computer-readable storage medium storing a bitstream form rather than method form. 
Gao discloses that its system may be embodied by a non-transitory computer-readable storage medium storing a bitstream, the bitstream configured for decoding, including the decoding operations of claim 1 (see citations and arguments with respect to claim 1 above and Gao ¶¶60, 259, 286-287, 299-301 – although the exact language of paragraph 60 describing the storage of the encoded bitstream does not appear in Gao’s provisional, Gao’s provisional is directed to the AV1 standard of transmitting encoded data from an encoder to a decoder. Examiner takes Official Notice that one of ordinary skill in the art at the time of filing would have understood such transmission to require storage (even if very briefly) of such a bitstream after encoding and before transmission and/or after transmission and before decoding in a storage medium). Accordingly, the disclosure cited with respect to claim 1 also applies to claim 13. With respect to claim 14, claim 14 recites the elements of claim 2 in non-transitory computer-readable storage medium form storing a bitstream rather than method form. Gao discloses that its system may be embodied by a non-transitory computer-readable storage medium storing a bitstream, the bitstream configured for decoding, including the decoding operations of claim 2 (see citations and arguments with respect to claims 1 and 2 above Official Notice described in claim 13 above). Accordingly, the disclosure cited with respect to claim 2 also applies to claim 14. With respect to claim 15, claim 15 recites the elements of claim 4 in non-transitory computer-readable storage medium form storing a bitstream rather than method form. Gao discloses that its system may be embodied by a non-transitory computer-readable storage medium storing a bitstream, the bitstream configured for decoding, including the decoding operations of claim 4 (see citations and arguments with respect to claims 1 and 4 above Official Notice described in claim 13 above). 
Accordingly, the disclosure cited with respect to claim 4 also applies to claim 15. With respect to claim 16, claim 16 recites the elements of claim 5 in non-transitory computer-readable storage medium form storing a bitstream rather than method form. Gao discloses that its system may be embodied by a non-transitory computer-readable storage medium storing a bitstream, the bitstream configured for decoding, including the decoding operations of claim 5 (see citations and arguments with respect to claims 1 and 5 above Official Notice described in claim 13 above). Accordingly, the disclosure cited with respect to claim 5 also applies to claim 16. With respect to claim 17, claim 17 recites the elements of claim 6 in non-transitory computer-readable storage medium form storing a bitstream rather than method form. Gao discloses that its system may be embodied by a non-transitory computer-readable storage medium storing a bitstream, the bitstream configured for decoding, including the decoding operations of claim 6 (see citations and arguments with respect to claims 1 and 6 above Official Notice described in claim 13 above). Accordingly, the disclosure cited with respect to claim 6 also applies to claim 17. With respect to claim 18, claim 18 recites the elements of claim 8 in non-transitory computer-readable storage medium form storing a bitstream rather than method form. Gao discloses that its system may be embodied by a non-transitory computer-readable storage medium storing a bitstream, the bitstream configured for decoding, including the decoding operations of claim 8 (see citations and arguments with respect to claims 1 and 8 above Official Notice described in claim 13 above). Accordingly, the disclosure cited with respect to claim 8 also applies to claim 18. With respect to claim 19, claim 19 recites the elements of claim 9 in non-transitory computer-readable storage medium form storing a bitstream rather than method form. 
Gao discloses that its system may be embodied by a non-transitory computer-readable storage medium storing a bitstream, the bitstream configured for decoding, including the decoding operations of claim 9 (see citations and arguments with respect to claims 1 and 9 above and the Official Notice described in claim 13 above). Accordingly, the disclosure cited with respect to claim 9 also applies to claim 19. With respect to claim 20, claim 20 recites the elements of claim 10 in non-transitory computer-readable storage medium form storing a bitstream rather than method form. Gao discloses that its system may be embodied by a non-transitory computer-readable storage medium storing a bitstream, the bitstream configured for decoding, including the decoding operations of claim 10 (see citations and arguments with respect to claims 1 and 10 above and the Official Notice described in claim 13 above). Accordingly, the disclosure cited with respect to claim 10 also applies to claim 20.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LINDSAY JANE KILE UHL whose telephone number is (571)270-0337. The examiner can normally be reached 8:30 AM-5:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Vaughn, can be reached on (571)272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. 
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. LINDSAY J UHL Primary Examiner Art Unit 2481 /LINDSAY J UHL/Primary Examiner, Art Unit 2481

Prosecution Timeline

Dec 20, 2023: Application Filed
May 17, 2025: Response after Non-Final Action
Jan 29, 2026: Non-Final Rejection under §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604000: SYSTEMS AND METHODS FOR PARTITION-BASED PREDICTION MODE REORDERING (granted Apr 14, 2026; 2y 5m to grant)
Patent 12604030: METHOD AND APPARATUS FOR PROCESSING VIDEO SIGNAL (granted Apr 14, 2026; 2y 5m to grant)
Patent 12598329: SYNTAX DESIGN METHOD AND APPARATUS FOR PERFORMING CODING BY USING SYNTAX (granted Apr 07, 2026; 2y 5m to grant)
Patent 12593032: METHOD AND DEVICE FOR PROCESSING VIDEO SIGNAL BY USING INTER PREDICTION (granted Mar 31, 2026; 2y 5m to grant)
Patent 12587636: GEOMETRIC PARTITION MODE WITH MOTION VECTOR REFINEMENT (granted Mar 24, 2026; 2y 5m to grant)
Based on this examiner's 5 most recent grants with similar technology.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80%
With Interview: 89% (+8.7%)
Median Time to Grant: 2y 4m
PTA Risk: Low
Based on 404 resolved cases by this examiner. Grant probability derived from career allow rate.
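The "With Interview" figure appears to be simple additive arithmetic on the numbers above: the 80% baseline plus the examiner's +8.7-point interview lift, rounded. A quick illustrative check, assuming the lift is in percentage points (this tool's exact formula is not stated):

```python
# Illustrative check of the dashboard arithmetic; the tool's actual
# formula is an assumption here.
def with_interview(base_pct, lift_pct):
    """Add the interview lift (percentage points) to the baseline,
    capping at 100%."""
    return min(base_pct + lift_pct, 100.0)

adjusted = round(with_interview(80.0, 8.7))
# adjusted == 89
```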
