Prosecution Insights
Last updated: April 19, 2026
Application No. 18/932,451

PRECISION REFINEMENT FOR MOTION COMPENSATION WITH OPTICAL FLOW

Final Rejection — §103, §DP
Filed: Oct 30, 2024
Examiner: ZEWEDE, ASTEWAYE GETTU
Art Unit: 2481
Tech Center: 2400 — Computer Networks
Assignee: InterDigital VC Holdings, Inc.
OA Round: 2 (Final)
Grant Probability: 80% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 7m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 80% (36 granted / 45 resolved; +22.0% vs TC avg, above average)
Interview Lift: +37.5% across resolved cases with interview
Typical Timeline: 2y 7m average prosecution; 18 currently pending
Career History: 63 total applications across all art units

Statute-Specific Performance

§101: 0.7% (-39.3% vs TC avg)
§103: 67.0% (+27.0% vs TC avg)
§102: 10.4% (-29.6% vs TC avg)
§112: 10.4% (-29.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 45 resolved cases

Office Action

§103, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This Office Action is in response to the amendment filed on 01/26/2026. Claims 1, 4, 6, 9, 11, 13, 14, 16, 18, and 19 are currently amended. Claims 1-20 are pending in this application.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 02/04/2025, 02/04/2025, 05/28/2025, 09/19/2025, and 11/12/2025 were filed in accordance with the provisions of 37 CFR 1.97. Accordingly, they are being considered by the examiner.

Response to Amendments/Arguments

Applicant's Amendment filed on January 26, 2026 has been entered and made of record.

Response to Arguments

Applicant argues that amended Claim 1 is patentably distinct from the claims of the reference patent because the present claim allegedly requires motion vector refinement decoded at the sample level, whereas the prior patent signals motion vector refinement at the block level. This argument has been fully considered but is not persuasive.

As amended, claim 1 recites "determining a respective motion vector refinement associated with at least the first each of the sample positions, wherein the motion vector refinements are decoded from a bitstream as sample-level indices." However, the claim does not require that the motion vector refinement values themselves differ on a per-sample basis, nor does it exclude a block-level refinement that is uniformly applied to all sample positions within a block. Indeed, claim 1 remains broad enough to encompass embodiments in which a single motion vector refinement is signaled for a block and that refinement is applied to each sample position in the block, even if the signaling is represented or indexed in a sample-level manner.

Thus, the recitation of "sample-level indices" pertains to the manner in which refinement information is represented or decoded in the bitstream, rather than imposing a substantive limitation on how the refinement itself is computed or applied. The underlying refinement technique and its application across sample positions within a block therefore remain substantially the same. Accordingly, amended Claim 1 does not define a patentably distinct invention from the claims of the reference patent, and the rejection for non-statutory double patenting is maintained. This rejection may be overcome upon submission of an appropriate terminal disclaimer.

Applicant argues that Zhang fails to disclose that "the motion vector refinement is decoded from a bitstream as sample-level indices," asserting that Zhang only signals a CU-level index. This argument has been fully considered but is not persuasive. Zhang clearly teaches signaling motion-related parameters in a bitstream using an index that selects an entry from a predefined candidate list. Specifically, Zhang discloses that an index indicating the position of the control point motion vector predictor (CPMVP) in a candidate list is signaled in the bitstream (Col. 13, lines 34-36). The decoder parses the transmitted index and reconstructs the corresponding motion information based on the selected candidate. Zhang further discloses signaling the difference between the control point motion vector (CPMV) and the selected predictor in the bitstream (Col. 13, lines 39-41). Accordingly, Zhang teaches representing motion-related coding information using index-based signaling for efficient bitstream representation. Although Zhang does not use the exact phrase "sample-level indices," the claim does not require any particular syntax beyond decoding refinement information from a bitstream as an index.
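The index-based candidate-list signaling described above can be sketched as follows. This is an illustrative reconstruction of the general practice, not code from Zhang or the application; the candidate values and function names are hypothetical.

```python
# Hedged sketch of index-based candidate-list signaling: encoder and
# decoder build the same ordered candidate list from already-decoded
# data, so only the index of the chosen entry travels in the bitstream.
# Candidate values here are hypothetical placeholders.

def build_candidate_list():
    # Both sides derive an identical, ordered list of candidate
    # motion parameters (e.g., motion vector refinements).
    return [(0, -1), (1, 0), (0, 1), (-1, 0)]

def encode_selection(chosen, candidates):
    # Encoder writes only the candidate's position to the bitstream.
    return candidates.index(chosen)

def decode_selection(index, candidates):
    # Decoder parses the index and reconstructs the full motion
    # information from the selected candidate.
    return candidates[index]

candidates = build_candidate_list()
idx = encode_selection((0, 1), candidates)   # a small integer is all that is signaled
assert decode_selection(idx, candidates) == (0, 1)
```

Because both sides reconstruct the same list, transmitting the index rather than the parameter values themselves is what yields the reduction in signaling overhead the rejection relies on.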
Once motion-related parameters are represented as selectable values from a defined set, encoding or decoding that selection as an index is a conventional and well-established video coding practice, as Zhang demonstrates. Motion vector refinement information constitutes the same category of motion-related coding data within the same compression framework. Extending Zhang's index-based signaling mechanism to motion vector refinement information associated with sample positions therefore represents the straightforward application of a known signaling technique to similar motion-related parameters for the same purpose, namely efficient bitstream representation. The resulting reduction in signaling overhead would have been expected by a person of ordinary skill in the art and reflects routine optimization within a well-understood video coding architecture. Accordingly, the claimed limitation is at least rendered obvious by Zhang.

Furthermore, Applicant's attention is directed to the fact that the field of video compression and coding recognizes sample-level motion refinement, as demonstrated in the reference: Lei Zhao et al., "Mode-Dependent Pixel-Wise Motion Refinement for HEVC," Proceedings of the International Conference on Image Processing (ICIP), IEEE, 2016. Accordingly, a claim cannot be rendered patentable merely by adding a limitation that recites sample-level motion refinement, since such refinement techniques were already known in the art.

Double Patenting

The obviousness-type double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees.

A non-statutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on non-statutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a non-statutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c).
A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 are rejected on the ground of non-statutory double patenting as being unpatentable over claims 1-17 of U.S. Patent No. 12,160,582. Although the claims of the instant application are not identical to those of the referenced patent, they are not patentably distinct therefrom. The differences between the claim sets are merely in terminology, as illustrated in the claim comparison below, and do not result in any patentable distinction. Accordingly, the instant claims are considered to be an obvious variation of the claims of the cited patent.

To overcome this rejection, a terminal disclaimer must be filed. The terminal disclaimer must disclaim any term of the instant application that would extend beyond the term of the referenced patent and must include the required common ownership and enforceability provisions under 37 C.F.R. § 1.321.

The following comparison between representative claims shows that the differences are merely in wording and do not constitute a patentable distinction.

18/932,451 (Instant Application), exemplary Claim 1:

1. A video decoding method comprising: obtaining an initial predicted sample value, each sample position in a current block of samples; determining a respective motion vector refinement associated with each of the sample positions, wherein the motion vector refinement [[is]] are decoded from a bitstream as sample-level indices; determining, at each of the sample positions, a spatial gradient of sample values; at each of the sample positions, determining a sample difference value based on a scalar product of the spatial gradient and the motion vector refinement; and modifying the initial predicted sample value based on the respective sample difference values.

17/619,192 (U.S. Patent No. 12,160,582 B2), Claim 1:

1. A video coding method comprising: obtaining an initial predicted sample value, based on motion-compensated prediction, for at least a first sample position in a current block of samples; determining a motion vector refinement associated with at least the first sample position, wherein the motion vector refinement is signaled at the block level, and the motion vector refinement is the same for all sample positions in the current block; determining, at the first sample position, a spatial gradient of sample values; determining a sample difference value based on a scalar product of the spatial gradient and the motion vector refinement; and modifying the initial predicted sample value based on the sample difference value.

Claims 2-20 recite the same elements as claims 2-17. Therefore, the supporting rationale of the rejection applies equally to those claims.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

6. Claims 1-2, 6-7, and 11-20 are rejected under 35 U.S.C. 103 as being unpatentable over Luo et al., "CE2-related: Prediction refinement with optical flow for affine mode," Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting: Geneva, Switzerland, 19-27 March 2019, Document JVET-N0236-r5, hereinafter "Luo", in view of Zhang et al. (US-11356697-B2), hereinafter "Zhang".

Regarding Claim 1 (Currently Amended), Luo discloses:

1. A video decoding method (Luo, Page 1, Abstract: "...method to refine the sub-block based affine motion compensated prediction with optical flow...") comprising:

obtaining an initial predicted sample value, each sample position in a current block of samples; (Luo, Page 1, section 2: "Step 1) The sub-block-based affine motion compensation is performed to generate sub-block prediction")

determining a respective motion vector refinement associated with each of the sample positions, ... (Luo, Page 2, section 2: motion vector refinement is performed using affine motion models)

determining, at each of the sample positions, a spatial gradient of sample values; (Luo, Page 1, section 2: "Step 2) The spatial gradients gx(i, j) and gy(i, j) of the sub-block prediction are calculated at each sample location")

... at each of the sample positions, determining a sample difference value based on a scalar product of the spatial gradient and the motion vector refinement; (Luo, Page 2, section 2: "Step 3) The luma prediction refinement is calculated by the optical flow equation"; see the formula above FIG. 1 and the values of c, d, e, and f from the 4-parameter and 6-parameter affine models)

and modifying the initial predicted sample value based on the respective sample difference values. (Luo, Page 3, section 2: "Step 4) Finally, the luma prediction refinement is added to the sub-block prediction.")

Luo does not explicitly disclose: wherein the motion vector refinement [[is]] are decoded from a bitstream as sample-level indices. However, in the same field of endeavor, Zhang discloses more explicitly the following: wherein the motion vector refinement is decoded from a bitstream as sample-level indices (Zhang, Col. 5, lines 15-16: "..., index of best merge candidate is encoded..."; Col. 8, lines 38-40: "...multiple sets of motion information (including motion vectors and reference indices)..."; Col. 8, lines 48-50: "...obtain the motion vectors as well as the reference indices of each sub-CU from the block corresponding to each sub-CU...").

Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the application to modify the teachings of Luo with those of Zhang so that the motion vector refinement is decoded from a bitstream as signals contained in the bitstream, as taught by Zhang. A person having ordinary skill in the art would have been motivated to incorporate Zhang's teaching of signaling motion vector refinement information in the bitstream using an index in order to improve encoder-decoder synchronization and bitstream efficiency, thereby achieving "higher coding efficiency and support for higher resolutions." (Zhang, Col. 4, lines 23-24)

Note: The motivation utilized in the rejection of claim 1 applies equally to claims 2, 6-7, and 11-20.

Regarding Claim 2, Luo-Zhang discloses: The method of claim 1, further comprising decoding refinement precision information from the bitstream, wherein determining the sample difference value comprises scaling the scalar product by an amount indicated by the precision information. (Zhang, Col. 5, lines 50-67 and Col. 6, lines 1-3: "only one candidate is added to the list. Particularly, in the derivation of this temporal merge candidate, a scaled motion vector is derived based on co-located PU belonging to the picture which has the smallest POC difference with current picture within the given reference picture list. The reference picture list to be used for derivation of the co-located PU is explicitly signaled in the slice header. FIG.
5 shows an example of the derivation of the scaled motion vector for a temporal merge candidate (as the dotted line), which is scaled from the motion vector of the co-located PU using the POC distances, tb and td, where tb is defined to be the POC difference between the reference picture of the current picture and the current picture and td is defined to be the POC difference between the reference picture of the co-located picture and the co-located picture. The reference picture index of temporal merge candidate is set equal to zero. For a B-slice, two motion vectors, one is for reference picture list 0 and the other is for reference picture list 1, are obtained and combined to make the bi-predictive merge candidate.")

Claim Rejections - 35 USC § 103

7. Claims 3 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Luo-Zhang in view of Taquet et al. (US-20220159249-A1), hereinafter "Taquet".

Regarding Claim 3, Luo-Zhang discloses: 3. The method of claim 2. Luo-Zhang does not explicitly disclose wherein scaling the scalar product comprises bit-shifting the scalar product by an amount indicated by the precision information. However, in the same field of endeavor, Taquet discloses more explicitly the following: wherein scaling the scalar product comprises bit-shifting the scalar product by an amount indicated by the precision information. (Taquet, [0170]: "...a<<N means that a bit-shift to the left of N bits is applied to the integer value of a. It is equivalent to performing an integer multiplication by two to the power of N. a>>N means that a bit-shift to the right of N bits is applied to the integer value of a. ... is equivalent to performing an integer division by two to the power of N. ... N=7 provides the decimal precision fixed in VVC for ALF computation but other values could be used in other embodiments. The effect of adding (1<<(N−1)) before performing the right shift >>N is a rounding of the fixed point result of the scalar product.")

Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the application to modify the teachings of Luo-Zhang with Taquet, as outlined above, to scale the scalar product by bit-shifting it by an amount indicated by the precision information, as suggested by Taquet. One of ordinary skill in the art would have been motivated to incorporate Taquet's precision-based bit-shifting technique into Luo-Zhang's scalar product computation in order to enhance "coding efficiency improvement" (Taquet, [0263]).

Note: The motivation utilized in the rejection of claim 3 applies equally to claim 8.

Regarding Claims 6-8: The claims are drawn to apparatuses that perform a series of steps commensurate in scope with the steps of claims 1-3, respectively. Accordingly, claims 6-8 are rejected for the same reasons of obviousness, with the same motivation, as noted in the above rejection of claims 1-3, respectively. Furthermore, for the apparatus, Zhang, Col. 25, lines 33-36 discloses "an apparatus, a computer readable storage medium having stored thereon instructions for encoding or decoding video data according to any of the methods described,..", which corresponds to the claimed apparatus features.

Regarding Claim 11: Independent claim 11 recites limitations that are substantially similar to those of independent claim 1, except that claim 11 is directed to an encoder rather than a decoder. It is well established in the art that video compression systems comprise complementary components, namely an encoder (compressor) and a decoder (decompressor), which perform reciprocal operations.
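The precision-controlled scaling that Taquet's paragraph [0170] describes can be sketched in a few lines. This is a minimal illustration of the general fixed-point technique, not code from Taquet or the VVC reference software; the function name is hypothetical.

```python
def scale_scalar_product(sp: int, n: int) -> int:
    # Add half of the shift step (1 << (n-1)) before the right shift
    # so the result is rounded rather than truncated; >> n is then an
    # integer division by 2**n, as Taquet [0170] describes.
    return (sp + (1 << (n - 1))) >> n

# With n = 7 (the precision Taquet cites for VVC ALF computation),
# the shift divides the scalar product by 128 with rounding:
assert scale_scalar_product(640, 7) == 5   # 640 / 128 = 5.0
assert scale_scalar_product(703, 7) == 5   # 703 / 128 ≈ 5.49, rounds down
assert scale_scalar_product(704, 7) == 6   # 704 / 128 = 5.5, rounds up
```

Decoding n from the bitstream as "refinement precision information" is what lets the decoder apply the same scaling the encoder used.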
The encoder compresses source data to reduce the bit rate for transmission or storage, while the decoder reconstructs the data from the compressed bitstream by performing a corresponding inverse process.

Regarding Claim 12, Luo-Zhang discloses: 12. The method of claim 11, wherein determining a motion vector refinement comprises selecting the motion vector refinement to substantially minimize a prediction error with respect to an input video block. (Luo, Page 2: the method refines motion compensation by adding a difference derived from the optical flow equation, where ΔI(i,j) is calculated according to ΔI(i,j) = gx(i,j) * Δvx(i,j) + gy(i,j) * Δvy(i,j), and Δv(i,j) is the difference between the pixel motion vector computed for sample location (i,j) and the sub-block motion vector of the corresponding sub-block; see Fig. 1. The refinement is calculated so that the updated predicted samples reduce the prediction error between the reconstructed and reference samples. In addition, Zhang further discloses in Col. 12, lines 55-65 and Col. 13, lines 1-2: "MvPre is the motion vector fraction accuracy (e.g., 1/16 in JEM). (v.sub.2x, v.sub.2y) is motion vector of the bottom-left control point, calculated according to Eq. (1). M and N can be adjusted downward if necessary to make it a divisor of w and h, respectively. FIG. 15 shows an example of affine MVF per sub-block for a block 1500. To derive motion vector of each M×N sub-block, the motion vector of the center sample of each sub-block can be calculated according to Eq. (1), and rounded to the motion vector fraction accuracy (e.g., 1/16 in JEM). Then the motion compensation interpolation filters can be applied to generate the prediction of each sub-block with derived motion vector. After the MCP, the high accuracy motion vector of each sub-block is rounded and saved as the same accuracy as the normal motion vector.")

Regarding Claim 13 (Currently Amended), Luo-Zhang discloses: 13. The method of claim 11, wherein [[the]] each index identifies one of a plurality of motion vector refinements from the group consisting of (0, -1), (1, 0), (0, 1), and (-1, 0). (Luo, Page 2, section 2: teaches the prediction refinement ΔI(i,j) calculated according to ΔI(i,j) = gx(i,j) * Δvx(i,j) + gy(i,j) * Δvy(i,j), where Δv(i,j) represents the difference between the pixel motion vector computed for a given sample location (i,j) and the sub-block MV of the corresponding sub-block. This equation shows that refinement may be applied independently in the x and y directions, thereby defining a set of directional motion vector refinements such as (0, -1), (1, 0), (0, 1), and (-1, 0).)

Regarding Claim 14 (Currently Amended), Luo-Zhang discloses: 14. The method of claim 11, wherein [[the]] each index identifies one of a plurality of motion vector refinements from the group consisting of (0, -1), (1, 0), (0, 1), (-1, 0), (-1, -1), (1, -1), (1, 1), and (-1, 1). (Luo, Page 2, section 2: teaches that an index identifies one of a plurality of motion vector refinements from the group consisting of (0, -1), (1, 0), (0, 1), and (-1, 0).)

Regarding Claim 15: Claim 15 recites limitations that are substantially similar to those of dependent claim 2, except that claim 15 is directed to encoding rather than decoding. Therefore, the reasoning and rejection of claim 2 apply equally to claim 15.

Regarding Claims 16-20: The claims are drawn to apparatuses that perform a series of steps commensurate in scope with the steps of claims 11-15, respectively.
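The per-sample refinement recited in claims 13-14 (an index selecting from a small directional set, applied through the scalar-product equation quoted from Luo) can be sketched as follows. This is an illustrative reconstruction under the claim's own terms, not code from any cited reference; the names are hypothetical.

```python
# Hedged sketch: a decoded sample-level index picks one refinement
# (Δvx, Δvy) from the four-direction group recited in claim 13, and the
# sample difference is the scalar product of the spatial gradient and
# that refinement, per ΔI(i,j) = gx*Δvx + gy*Δvy.

REFINEMENTS = [(0, -1), (1, 0), (0, 1), (-1, 0)]  # claim 13's group

def refine_sample(pred: int, gx: int, gy: int, index: int) -> int:
    dvx, dvy = REFINEMENTS[index]        # sample-level index lookup
    delta_i = gx * dvx + gy * dvy        # optical-flow sample difference
    return pred + delta_i                # modify the initial prediction

# A block-level refinement can still be expressed this way by decoding
# the same index at every sample position, which is the claim-breadth
# point the double-patenting discussion relies on.
assert refine_sample(100, 3, 5, 1) == 103   # (1, 0): add gx
assert refine_sample(100, 3, 5, 0) == 95    # (0, -1): subtract gy
```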
Accordingly, claims 16-20 are rejected for the same reasons of obviousness, with the same motivation, as noted in the above rejection of claims 11-15, respectively. Furthermore, for the apparatus, Zhang, Col. 25, lines 33-36 discloses "an apparatus, a computer readable storage medium having stored thereon instructions for encoding or decoding video data according to any of the methods described,..", which corresponds to the claimed apparatus features.

Allowable Subject Matter

Claims 4-5 and 9-10 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Pertinent Prior Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: A. Lei Zhao et al., "Mode-Dependent Pixel-Wise Motion Refinement for HEVC," Proceedings of the International Conference on Image Processing (ICIP), IEEE, 2016.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ASTEWAYE GETTU ZEWEDE, whose telephone number is (703) 756-1441. The examiner can normally be reached Mo-Fr 8:30 am to 5:30 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William Vaughn, can be reached at (571) 272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ASTEWAYE GETTU ZEWEDE/
Examiner, Art Unit 2481

/WILLIAM C VAUGHN JR/
Supervisory Patent Examiner, Art Unit 2481

Prosecution Timeline

Oct 30, 2024
Application Filed
Oct 29, 2025
Non-Final Rejection — §103, §DP
Jan 26, 2026
Response Filed
Feb 21, 2026
Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598390: CONTROL APPARATUS, IMAGING APPARATUS, AND LENS APPARATUS
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12587663: SLIDING-WINDOW RATE-DISTORTION OPTIMIZATION IN NEURAL NETWORK-BASED VIDEO CODING
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12537980: Attention Based Context Modelling for Image and Video Compression
Granted Jan 27, 2026 (2y 5m to grant)
Patent 12470842: MULTIFOCAL CAMERA BY REFRACTIVE INSERTION AND REMOVAL MECHANISM
Granted Nov 11, 2025 (2y 5m to grant)
Patent 12470679: INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, PROGRAM, AND DISPLAY SYSTEM
Granted Nov 11, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 80%
Grant Probability With Interview: 99% (+37.5%)
Median Time to Grant: 2y 7m
PTA Risk: Moderate
Based on 45 resolved cases by this examiner. Grant probability derived from career allow rate.
