Prosecution Insights
Last updated: April 19, 2026
Application No. 18/891,073

INTRA BLOCK COPY WITH TRIANGULAR PARTITIONS

Non-Final OA: §102, §103, §DP
Filed
Sep 20, 2024
Examiner
BILLAH, MASUM
Art Unit
2486
Tech Center
2400 — Computer Networks
Assignee
Bytedance Inc.
OA Round
1 (Non-Final)
Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 80% (above average; 335 granted / 419 resolved; +22.0% vs TC avg)
Interview Lift: +21.4% (resolved cases with interview)
Typical timeline: 2y 6m avg prosecution (31 currently pending)
Career history: 450 total applications across all art units

Statute-Specific Performance

§101: 3.9% (-36.1% vs TC avg)
§103: 60.5% (+20.5% vs TC avg)
§102: 14.2% (-25.8% vs TC avg)
§112: 11.2% (-28.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 419 resolved cases

Office Action

§102 §103 §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This Office Action is in response to application 18/891,073, filed on 09/20/2024. Claims 1-20 have been examined and are pending in this application.

Information Disclosure Statement

The information disclosure statements (IDS) were submitted on 06/24/2025, 02/25/2025, and 09/20/2024. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Specification

The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant's cooperation is requested in correcting any errors of which applicant may become aware in the specification.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

Claims 1, 2, and 17-20 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 2, and 16-19 of U.S. Patent No. 12,160,573 B2. Although the claims at issue are not identical, they are not patentably distinct from each other, as shown in the following claim mapping.

Current Application 18/891,073, claim 1:
A method of processing video data, comprising: determining, for a conversion between a block of a video and a bitstream of the block, that at least one of an intra block copy (IBC) mode, an intra mode, an inter mode and a palette mode is applied to the block, wherein the block is split into two or multiple triangular or wedgelet sub-regions; and performing the conversion based on the determining.

U.S. Patent No. 12,160,573 B2, claim 1:
A method of processing video data, comprising: making a determination, for a conversion between a block of a video and a bitstream of the block, that an intra block copy (IBC) mode is applied to multiple sub-regions of the block, wherein the block is split into two or multiple triangular or wedgelet sub-regions; and performing the conversion based on the determination, wherein, in the IBC mode, prediction samples are derived from blocks of sample values of a same decoded slice as determined by block vectors; wherein, when one block is split into the multiple sub-regions, intermediate prediction blocks are generated using information of each sub-region, and wherein a final prediction block is obtained based on a weighted average of the intermediate prediction blocks.

The remaining claims map as follows: application claim 2 to patent claim 2; claim 17 to claim 16; claim 18 to claim 17; claim 19 to claim 18; and claim 20 to claim 19.

Nonetheless, claim 1 of the present application is a broader version of claim 1 of U.S. Patent No. 12,160,573 B2. Therefore, since omission of an element and its function in a combination is an obvious expedient if the remaining elements perform the same functions as before (In re Karlson (CCPA) 136 USPQ 184 (1963)), claims 1, 2, and 17-20 are not patentably distinct from claims 1, 2, and 16-19 of U.S. Patent No. 12,160,573 B2.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless - (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 2, 14, and 17-20 are rejected under 35 U.S.C. 102(a)(1) or 102(a)(2) as being anticipated by Wang et al. (US 2020/0296389 A1).
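The IBC mechanism recited in the patented claim quoted above (prediction samples fetched from already-decoded sample values of the same picture, as located by a block vector, with the final prediction formed as a weighted average of intermediate predictions) can be sketched as a toy model. The function names and the 3-bit fixed-point weighting below are illustrative assumptions, not language from either application:

```python
def ibc_predict(recon, x0, y0, bv, w, h):
    """Intra block copy: fetch a w x h prediction block from the
    already-reconstructed area of the same picture, displaced from
    block position (x0, y0) by the block vector bv = (bvx, bvy)."""
    bvx, bvy = bv
    return [[recon[y0 + bvy + y][x0 + bvx + x] for x in range(w)]
            for y in range(h)]

def blend(p0, p1, w0=4, shift=3):
    """Weighted average of two intermediate prediction blocks using
    3-bit fixed-point weights: (w0*p0 + (8-w0)*p1 + 4) >> 3."""
    return [[(w0 * a + (8 - w0) * b + (1 << (shift - 1))) >> shift
             for a, b in zip(r0, r1)]
            for r0, r1 in zip(p0, p1)]
```

For a sub-region coded in IBC mode, the block vector must point into the already-decoded region, so in this sketch x0 + bvx and y0 + bvy index samples reconstructed earlier in the same slice.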
Regarding claim 1, Wang discloses: "a method of processing video data, comprising: determining, for a conversion between a block of a video and a bitstream of the block, that at least one of an intra block copy (IBC) mode, an intra mode, an inter mode and a palette mode is applied to the block

[see para. 0086: Mode selection unit 202 includes a motion estimation unit 222, motion compensation unit 224, and an intra-prediction unit 226. Mode selection unit 202 may include additional functional units to perform video prediction in accordance with other prediction modes. As examples, mode selection unit 202 may include a palette unit, an intra-block copy unit (which may be part of motion estimation unit 222 and/or motion compensation unit 224), an affine unit, a linear model (LM) unit, or the like.

And see para. 0099: For other video coding techniques such as an intra-block copy mode coding, an affine-mode coding, and linear model (LM) mode coding, as few examples, mode selection unit 202, via respective units associated with the coding techniques, generates a prediction block for the current block being encoded. In some examples, such as palette mode coding, mode selection unit 202 may not generate a prediction block, and instead generate syntax elements that indicate the manner in which to reconstruct the block based on a selected palette. In such modes, mode selection unit 202 may provide these syntax elements to entropy encoding unit 220 to be encoded],

wherein the block is split into two or multiple triangular or wedgelet sub-regions [see para. 0019: FIG. 5A is a conceptual diagram illustrating a first example of splitting a coding unit into a first triangle-shaped partition and a second triangle-shaped partition based on inter prediction, in accordance with the techniques of the disclosure];

and performing the conversion based on the determining [see para. 0049: In general, video encoder 200 and video decoder 300 may perform block-based coding of pictures. The term "block" generally refers to a structure including data to be processed (e.g., encoded, decoded, or otherwise used in the encoding and/or decoding process). For example, a block may include a two-dimensional matrix of samples of luminance and/or chrominance data. In general, video encoder 200 and video decoder 300 may code video data represented in a YUV (e.g., Y, Cb, Cr) format. That is, rather than coding red, green, and blue (RGB) data for samples of a picture, video encoder 200 and video decoder 300 may code luminance and chrominance components, where the chrominance components may include both red hue and blue hue chrominance components. In some examples, video encoder 200 converts received RGB formatted data to a YUV representation prior to encoding, and video decoder 300 converts the YUV representation to the RGB format. Alternatively, pre-processing units and post-processing units (not shown) may perform these conversions.

And see paras. 0175-0186: Output of this process is the (nCbW)×(nCbH) array pbSamples of prediction sample values. The variable nCbR is derived as follows:

nCbR = (nCbW > nCbH) ? (nCbW / nCbH) : (nCbH / nCbW)   (8-841)

The variable bitDepth is derived as follows: if cIdx is equal to 0, bitDepth is set equal to BitDepthY; otherwise, bitDepth is set equal to BitDepthC. Variables shift1 and offset1 are derived as follows: the variable shift1 is set equal to Max(5, 17 − bitDepth), and the variable offset1 is set equal to 1 << (shift1 − 1). Depending on the values of triangleDir, wS and cIdx, the prediction samples pbSamples[x][y] with x = 0..nCbW−1 and y = 0..nCbH−1 are derived as follows. The variable wIdx is derived as follows: if cIdx is equal to 0 and triangleDir is equal to 0, the following applies:

wIdx = (nCbW > nCbH) ? Clip3(0, 8, (x / nCbR − y) + 4) : Clip3(0, 8, (x − y / nCbR) + 4)   (8-842)].
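The weight-index derivation quoted from Wang's paragraphs 0175-0186 can be restated as runnable pseudocode. Only the branch actually quoted is implemented (cIdx equal to 0 and triangleDir equal to 0, the diagonal split); this is a sketch of the cited text, not a complete VVC implementation:

```python
def clip3(lo, hi, v):
    # Clip3(x, y, z) as used in the spec excerpt: clamp v into [lo, hi]
    return max(lo, min(hi, v))

def triangle_weight_map(nCbW, nCbH):
    """Weight index wIdx for each sample of an nCbW x nCbH block under
    the diagonal split (triangleDir == 0), per (8-841) and (8-842)."""
    nCbR = nCbW // nCbH if nCbW > nCbH else nCbH // nCbW  # (8-841)
    w = [[0] * nCbW for _ in range(nCbH)]
    for y in range(nCbH):
        for x in range(nCbW):
            if nCbW > nCbH:
                w[y][x] = clip3(0, 8, (x // nCbR - y) + 4)  # (8-842)
            else:
                w[y][x] = clip3(0, 8, (x - y // nCbR) + 4)
    return w
```

For a square block the index is 4 along the diagonal (an equal blend) and saturates to 0 or 8 away from it, which is the weighted-average blending of the two triangular predictions that the claim language describes.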
Regarding claim 2, Wang discloses: "wherein the block is split into two triangular sub-regions by applying either a diagonal split or an anti-diagonal split to the block [see FIGS. 5A-B].

Regarding claim 14, Wang discloses: "wherein certain coding methods are disabled for a block coded with the method, and wherein the certain coding methods include one or more of sub-block transform, affine motion prediction, multiple reference line intra prediction, matrix-based intra prediction, symmetric motion vector difference (MVD) coding, merge with MVD decoder side motion derivation/refinement, bi-directional optimal flow, reduced secondary transform, and multiple transform set [see para. 0101: In some examples, transform processing unit 206 may perform multiple transforms to a residual block, e.g., a primary transform and a secondary transform, such as a rotational transform. In some examples, transform processing unit 206 does not apply transforms to a residual block. And see para. 0062: For uni-directional or bi-directional inter-prediction, for example, video encoder 200 may encode motion vectors using advanced motion vector prediction (AMVP) or merge mode. Video encoder 200 may use similar modes to encode motion vectors for affine motion compensation mode].

Regarding claim 17, Wang discloses: "wherein the conversion includes encoding the block of the video into the bitstream [see para. 0068: In this manner, video encoder 200 may generate a bitstream including encoded video data, e.g., syntax elements describing partitioning of a picture into blocks (e.g., CUs) and prediction and/or residual information for the blocks. Ultimately, video decoder 300 may receive the bitstream and decode the encoded video data].
Regarding claim 18, Wang discloses: "wherein the conversion includes decoding the block from the bitstream [see para. 0068: In this manner, video encoder 200 may generate a bitstream including encoded video data, e.g., syntax elements describing partitioning of a picture into blocks (e.g., CUs) and prediction and/or residual information for the blocks. Ultimately, video decoder 300 may receive the bitstream and decode the encoded video data].

Regarding claims 19 and 20, these claims are rejected under the same art and evidentiary limitations as determined for the method of claim 1, but directed to an apparatus and a CRM.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 7-11 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US 2020/0296389 A1) in view of Sun et al. (US 2020/0213591 A1).

Regarding claim 7, Wang discloses all the limitations of claim 1, which are analyzed as previously discussed with respect to that claim.
Wang does not explicitly disclose: "wherein when the block is split into multiple sub-regions, at least one of the multiple sub-regions is coded with the IBC mode, and another one of the multiple sub-regions is coded with a non-IBC mode, and wherein the another one of the multiple sub-regions is coded with one mode of following modes: the intra mode, the inter mode, the palette mode, a pulse coded modulation (PCM) mode, or a residual differential pulse coded modulation (RDPCM) mode".

However, Sun, from the same or similar field of endeavor, teaches: "wherein when the block is split into multiple sub-regions, at least one of the multiple sub-regions is coded with the IBC mode, and another one of the multiple sub-regions is coded with a non-IBC mode [see para. 0041: In the IBC mode, a prediction unit (PU) may be predicted from a previously reconstructed block within the same picture. Similar to a PU in motion compensation, a displacement vector, called a block vector or a BV, may be used to signal the relative displacement from the position of the current PU to that of the reference block. The prediction errors after the IBC compensation may then be coded using transformation, quantization and entropy coding. Because the IBC mode and the HEVC inter mode share many similarities, the block level IBC operations and the HEVC inter mode in the HEVC SCC are unified; specifically, the current (partially decoded) picture may be treated as a reference picture for decoding the current slice (602 and 604)],

and wherein the another one of the multiple sub-regions is coded with one mode of following modes: the intra mode, the inter mode, the palette mode, a pulse coded modulation (PCM) mode, or a residual differential pulse coded modulation (RDPCM) mode [see para. 0059: In a CABAC engine, if the system were able to identify that the statistic of a bin to be encoded is different in a different condition, the system would be able to design different context models and adaptively select the corresponding context models according to the condition. In the palette mode signaling, a palette mode flag, such as cu_palette_flag, is signaled in each CU to indicate whether the CU is coded by palette mode, and in the CPR signaling, a prediction mode flag, such as pred_mode_flag, is signaled in each CU to indicate whether the CU is coded by inter/intra mode. However, respective context models for the cu_palette_flag and the pred_mode_flag do not depend on the size of the CU or the type, luma or chroma, of the CU. In other words, a same, or shared, context model is used regardless of the size and the type of the CU for the cu_palette_flag, and another shared context model is used regardless of the size and the type of the CU for the pred_mode_flag].
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the video coding in triangular prediction unit mode using different chroma formats disclosed by Wang with the teachings of Sun as above, in order to provide a means for improving encoding and decoding technique and efficiency: some sub-blocks are coded with the IBC mode while other blocks use a non-IBC mode or other modes available in the system, and these various modes offer different types of flexibility and efficiency for encoding or decoding [Sun, paras. 0041, 0059].

Regarding claim 8, Wang discloses all the limitations of claim 1, which are analyzed as previously discussed with respect to that claim.

Wang does not explicitly disclose: "wherein when the block is split into multiple sub-regions, at least one of the multiple sub-regions is coded with the intra mode, and another one of the multiple sub-regions is coded with a non-intra mode, wherein the another one of the multiple sub-regions is coded with the inter mode, and wherein a motion vector of a sub-region coded with the inter mode is obtained using a way for a conventional inter-coded block".

However, Sun, from the same or similar field of endeavor, teaches: "wherein when the block is split into multiple sub-regions, at least one of the multiple sub-regions is coded with the intra mode, and another one of the multiple sub-regions is coded with a non-intra mode, wherein the another one of the multiple sub-regions is coded with the inter mode, and wherein a motion vector of a sub-region coded with the inter mode is obtained using a way for a conventional inter-coded block [see para. 0042: The bitstream syntax in this approach may follow the same syntax structure for the inter coding while the decoding process may be unified with the inter coding.
The difference may be that the block vector (which is the motion vector pointing to the current picture) always uses an integer-pel resolution. For example, in an encoder search for this mode, both block width and height are smaller than or equal to 16; a chroma interpolation is enabled when a luma block vector is an odd integer number; and an adaptive motion vector resolution (AMVR) for the CPR mode is enabled when the sequence parameter set (SPS) flag is on. In this case, when the AMVR is used, a block vector may be switched between 1-pel integer and 4-pel integer resolution at a block level.

And see para. 0043: The decision whether to code a picture area using an inter-picture (temporal) or an intra-picture (spatial) prediction may be made at a leaf CU level. Each leaf CU may be further split into one, two or four PUs according to the PU splitting type. Inside one PU, the same prediction process may be applied, and relevant information may be transmitted to the decoder on a PU basis].

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the video coding in triangular prediction unit mode using different chroma formats disclosed by Wang with the teachings of Sun as above, in order to provide a means for improving encoding and decoding technique and efficiency: based on the available modes, the divided sub-regions are coded with a non-intra mode, such as the inter mode used for a conventional inter-coded block [Sun, para. 0042].

Regarding claim 9, Wang discloses all the limitations of claim 1, which are analyzed as previously discussed with respect to that claim.

Wang does not explicitly disclose: "wherein when the block is split into multiple sub-regions, all the multiple sub-regions are coded with the palette mode, and wherein at least two of the multiple sub-regions are coded with different palettes".
However, Sun, from the same or similar field of endeavor, teaches: "wherein when the block is split into multiple sub-regions, all the multiple sub-regions are coded with the palette mode, and wherein at least two of the multiple sub-regions are coded with different palettes [see para. 0069: At block 1302, a prediction mode of a CU used may be determined based on a prediction mode flag associated with the CU. Prior to determining the characteristic of the CU for the intra-prediction mode, whether a coding mode of the CU is a palette mode may be determined based on a signaling palette flag associated with the CU at block 1304. The coding mode of the CU may be determined to be 1) the palette mode if the signaling flag is a palette mode flag having a value of 1, and 2) a non-palette mode if the signaling flag is the palette mode flag having a value of 0. Each mode may then proceed to the process described in FIG. 11. For the non-palette mode, the blocks corresponding to those in FIG. 11 are re-numbered as 1306, 1308, 1310, and 1312. For the inter-prediction mode, such as the CPR, the process may proceed as described in FIG. 11. The blocks for the inter-prediction corresponding to those in FIG. 11 are renumbered as 1314, 1316, 1318, and 1320].

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the video coding in triangular prediction unit mode using different chroma formats disclosed by Wang with the teachings of Sun as above, in order to provide a means for improving encoding and decoding technique and efficiency: based on the available modes, some blocks are coded with the palette mode and the rest of the blocks use a non-palette or other type of mode [Sun, para. 0069].

Regarding claim 10, Wang discloses all the limitations of claim 1, which are analyzed as previously discussed with respect to that claim.
Wang does not explicitly disclose: "wherein when the block is split into multiple sub-regions, at least one of the multiple sub-regions is coded with the palette mode, and another one of the multiple sub-regions is coded with a non-palette mode".

However, Sun, from the same or similar field of endeavor, teaches: "wherein when the block is split into multiple sub-regions, at least one of the multiple sub-regions is coded with the palette mode, and another one of the multiple sub-regions is coded with a non-palette mode [see para. 0033: FIG. 3 illustrates an example diagram 300 of the palette mode applied to CU 302. For simplicity, a pixel or a palette index is shown to correspond to only one value. However, in HEVC SCC, a pixel or a palette index may represent three color component values, such as YCbCr or GBR. And see para. 0034: In the HEVC SCC palette mode, a flag is transmitted for each CU to indicate whether the palette mode is used for that CU, such as a CU 302. If the palette mode is used for the CU 302, the pixels, having pixel values close to palette colors, such as color A 304, color B 306, and color C 308, are represented by the palette color values 310 as shown in a color histogram 312].

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the video coding in triangular prediction unit mode using different chroma formats disclosed by Wang with the teachings of Sun as above, in order to provide a means for improving encoding and decoding technique and efficiency: based on the available modes, some blocks are coded with the palette mode and the rest of the blocks use a non-palette mode [Sun, para. 0033].

Regarding claim 11, Wang discloses all the limitations of claim 1, which are analyzed as previously discussed with respect to that claim.
Wang does not explicitly disclose: "wherein the method is applied to all components, wherein the all components comprise a luma component, and wherein a chroma block of the block of the video is split following a same splitting pattern as the luma component, or a chroma block of the block of the video is split with a different splitting pattern to the luma component".

However, Sun, from the same or similar field of endeavor, teaches: "wherein the method is applied to all components, wherein the all components comprise a luma component [see para. 0058: Thus, a CU in an I slice may consist of a coding block of the luma component or coding blocks of two chroma components, and a CU in a P or B slice may always consist of coding blocks of all three color components unless the video is monochrome], and wherein a chroma block of the block of the video is split following a same splitting pattern as the luma component, or a chroma block of the block of the video is split with a different splitting pattern to the luma component [see para. 0057: FIG. 10 illustrates example non-TT splits 1002, 1004, 1006, and 1008 for a 128×128 luma block 1010. To allow a 64×64 luma block and a 32×32 chroma pipelining design in the VVC hardware decoders, the TT split may be forbidden when either the width or the height of a luma coding block is larger than 64, as illustrated in FIG. 10. The TT split may also be forbidden when either the width or the height of a chroma coding block is larger than 32. And see para. 0058: In the VVC, the coding tree scheme may support the ability for the luma and chroma to have a separate block tree structure. For P and B slices, the luma and chroma CTBs in one CTU may have to share the same coding tree structure. However, for I slices, the luma and chroma may have separate block tree structures. When the separate block tree mode is applied, the luma CTB may be partitioned into CUs by one coding tree structure, and the chroma CTBs may be partitioned into chroma CUs by another coding tree structure. Thus, a CU in an I slice may consist of a coding block of the luma component or coding blocks of two chroma components, and a CU in a P or B slice may always consist of coding blocks of all three color components unless the video is monochrome].

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the video coding in triangular prediction unit mode using different chroma formats disclosed by Wang with the teachings of Sun as above, in order to split the block of the video or bitstream such that the coding algorithm follows the same or a different splitting pattern for the luma and chroma components [Sun, paras. 0057, 0058].

Allowable Subject Matter

Claims 3-6, 12, 13, 15 and 16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Li et al. (US 10,516,885 B1).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Masum Billah, whose telephone number is (571) 270-0701. The examiner can normally be reached Mon - Friday, 9 - 5 PM ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jamie J. Atala, can be reached at (571) 272-7384.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MASUM BILLAH/
Primary Patent Examiner, Art Unit 2486

Prosecution Timeline

Sep 20, 2024
Application Filed
Feb 21, 2026
Non-Final Rejection — §102, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603983
APPARATUS AND METHOD FOR GENERATING OBJECT-BASED STEREOSCOPIC IMAGES
2y 5m to grant Granted Apr 14, 2026
Patent 12597123
RAIL FEATURE IDENTIFICATION SYSTEM
2y 5m to grant Granted Apr 07, 2026
Patent 12597258
ALERT DIRECTIVES AND FOCUSED ALERT DIRECTIVES IN A BEHAVIORAL RECOGNITION SYSTEM
2y 5m to grant Granted Apr 07, 2026
Patent 12591954
DEPTH INFORMATION DETECTOR, TIME-OF-FLIGHT CAMERA, AND DEPTH IMAGE ACQUISITION METHOD
2y 5m to grant Granted Mar 31, 2026
Patent 12581101
TEMPLATE MATCHING REFINEMENT FOR AFFINE MOTION
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
80%
Grant Probability
99%
With Interview (+21.4%)
2y 6m
Median Time to Grant
Low
PTA Risk
Based on 419 resolved cases by this examiner. Grant probability derived from career allow rate.
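The projection figures above trace back to simple arithmetic on the examiner's career record (335 granted of 419 resolved). A quick sanity check follows; note that the exact formula behind the interview-adjusted 99% figure is not stated, so only the directly derivable numbers are verified:

```python
granted, resolved = 335, 419

allow_rate = granted / resolved            # 0.7995..., shown as 80%
implied_tc_avg = 100 * allow_rate - 22.0   # from the "+22.0% vs TC avg" figure

assert round(100 * allow_rate) == 80
assert 57.5 < implied_tc_avg < 58.5        # Tech Center average near 58%
```

The implied Tech Center average of roughly 58% is a derived value, not one the dashboard reports directly.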
