Prosecution Insights
Last updated: April 19, 2026
Application No. 18/457,217

IMAGE PROCESSING APPARATUS, METHOD FOR CONTROLLING THE SAME, IMAGING APPARATUS, AND STORAGE MEDIUM

Final Rejection — §101, §102, §103
Filed
Aug 28, 2023
Examiner
O'MALLEY, CONOR AIDAN
Art Unit
2675
Tech Center
2600 — Communications
Assignee
Canon Kabushiki Kaisha
OA Round
2 (Final)
Grant Probability: 67% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 0m
With Interview: 72%

Examiner Intelligence

Career Allow Rate: 67% — above average (16 granted / 24 resolved; +4.7% vs TC avg)
Interview Lift: +5.7% among resolved cases with interview (moderate, roughly +6%)
Avg Prosecution: 3y 0m (typical timeline)
Total Applications: 50 across all art units (26 currently pending)

Statute-Specific Performance

§101: 24.2% (-15.8% vs TC avg)
§102: 16.7% (-23.3% vs TC avg)
§103: 35.9% (-4.1% vs TC avg)
§112: 22.0% (-18.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 24 resolved cases.

Office Action

Grounds: §101, §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The following title is suggested: "IMAGE PROCESSING APPARATUS CAPABLE OF ACCURATE OBJECT TRACKING AMIDST MULTIPLE OBJECT TYPES, METHOD FOR CONTROLLING THE SAME, AND STORAGE MEDIUM".

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claims 1-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite a mental process performed using generic computer elements. This judicial exception is not integrated into a practical application because the generically recited computer elements do not amount to significantly more. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the claims recite only generic computer elements: a generically claimed "memory device", which could be any kind of memory device; a "processor", which is a generic claiming of computer processors; an "image sensor", which is another generic computer element capable of acquiring images; and a "display", which is another generic computer element.
In regards to claim 1, an image processing apparatus comprising: a memory device that stores a set of instructions; and at least one processor that executes the set of instructions (A person of ordinary skill in the art can store a set of instructions and execute a set of instructions) to acquire an image output from an image sensor (A person of ordinary skill can receive an image from a sensor; this is mere data-gathering); detect objects of different types (A person of ordinary skill can detect and identify objects of different types); determine an object as a main object based on a result of the detection (A person of ordinary skill in the art can determine which objects are the main objects and which objects are not, as this is a form of judgment or determination); and track the object determined as the main object wherein, while the object being tracked is continuously detected (A person of ordinary skill in the art can track an object continuously and keep it continuously detected), when a current frame includes an object of a type different from a type of the object being tracked, determining whether or not to change a type corresponding to the main object in accordance with a predetermined criterion, changes the type of the main object if the predetermined criterion is satisfied, and keeps the type of the main object unchanged if not (A person of ordinary skill in the art can reevaluate which object should be the main object based on additional information, predetermined or otherwise, or on the objects being tracked).
In regards to claim 2, wherein the at least one processor that executes the set of instructions to further control a display configured to display the image (A person of ordinary skill can display an image by showing the image to other people), wherein the display is controlled to display information indicating the object being tracked while superimposing the information on the image (A person of ordinary skill in the art can control the display of an image by choosing whether or not to display it, and a person can further display information on the object being tracked and superimpose that information on or over the image).

In regards to claim 3, wherein, in the state where the object being tracked is continuously detected, in the case where the object of the type different from the type of the object being tracked is detected in addition to the object being tracked, an evaluation value of each of the object being tracked and the object of the type different from the type of the object being tracked is calculated (A person of ordinary skill can deal with multiple objects of multiple types being included within an image, along with assigning specific values to each type of object), and wherein, in a case where the evaluation value of the object of the type different from the type of the object being tracked is larger than the evaluation value of the object being tracked by a predetermined value, the type corresponding to the main object is changed (A person of ordinary skill in the art can compare two numbers, see which one is bigger, and then use that information to potentially change their determination of what the main object is).
In regards to claim 4, wherein the evaluation value is calculated based on parameters including at least one of a detection size, a detection position, detection reliability, detection frequency, and a number of detected parts (A person of ordinary skill can have a value based on at least one of these values, and if only one value needs to be used, the singular value could be equivalent to the evaluation value).

In regards to claim 5, wherein the evaluation value increases with a decreasing distance between the detection position and a center of the image or a focus adjustment region set by a user (A person of ordinary skill in the art can increase a value as the distance decreases between two positions).

In regards to claim 6, wherein the evaluation value is calculated based on the detection size that is subjected to weighting in consideration of a relative size difference between the object being tracked and the object of the type different from the type of the object being tracked (A person of ordinary skill in the art can determine a value based on the relative sizes of various objects and their respective types).

In regards to claim 7, wherein the evaluation value increases as the detection reliability increases (A person of ordinary skill can increase a value as the reliability increases).

In regards to claim 8, wherein the evaluation value increases as the detection frequency increases (A person of ordinary skill in the art can increase the value as the frequency increases).

In regards to claim 9, wherein the evaluation value increases as the number of detected parts increases (A person of ordinary skill in the art can increase the value as the number of parts increases).

In regards to claim 10, wherein at least two of a person, an animal, and a vehicle are detected as the objects of the different types (A person of ordinary skill in the art can detect multiple people, animals, and vehicles and use those classifications as the different types).
In regards to claim 11, it is similar to claim 1, and it is similarly rejected, as the image sensor is virtually the same as the image sensor claimed in claim 1.

In regards to claim 12, wherein, in the state where the object being tracked is continuously detected, in the case where the object of the type different from the type of the object being tracked is detected in addition to the object being tracked, an evaluation value of each of the object being tracked and the object of the type different from the type of the object being tracked is calculated (A person of ordinary skill can deal with multiple objects of multiple types being included within an image, along with assigning specific values to each type of object), wherein, in a case where the evaluation value of the object of the type different from the type of the object being tracked is larger than the evaluation value of the object being tracked by a predetermined value, the type corresponding to the main object is changed, and wherein, in a state where an imaging preparation operation of a user is received (A person of ordinary skill in the art can compare two numbers, see which one is bigger, and then use that information to potentially change their determination of what the main object is), the evaluation value is calculated based on parameters including at least one of detection reliability, detection frequency, and a number of detected parts (A person of ordinary skill can have a value based on at least one of these values, and if only one value needs to be used, the singular value could be equivalent to the evaluation value).

In regards to claims 13 and 14, they are similar to claim 1, and they are similarly rejected.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claims 1-2, 10-11, and 13-14 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ito et al. (US 20210258495 A1). Claims 1-2, 10-11, and 13-14 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Ito et al. (US 20210258495 A1), hereinafter referred to as Ito. The applied reference has a common applicant with the instant application. Based upon the earlier effectively filed date of the reference, it constitutes prior art under 35 U.S.C. 102(a)(2). This rejection under 35 U.S.C. 102(a)(2) might be overcome by: (1) a showing under 37 CFR 1.130(a) that the subject matter disclosed in the reference was obtained directly or indirectly from the inventor or a joint inventor of this application and is thus not prior art in accordance with 35 U.S.C.
102(b)(2)(A); (2) a showing under 37 CFR 1.130(b) of a prior public disclosure under 35 U.S.C. 102(b)(2)(B) if the same invention is not being claimed; or (3) a statement pursuant to 35 U.S.C. 102(b)(2)(C) establishing that, not later than the effective filing date of the claimed invention, the subject matter disclosed in the reference and the claimed invention were either owned by the same person or subject to an obligation of assignment to the same person or subject to a joint research agreement.

In regards to claim 1, Ito discloses an image processing apparatus comprising: a memory device that stores a set of instructions; and at least one processor that executes the set of instructions (Paragraphs 34, 24, and 26-27: the usage of a processor is described in paragraph 34, and the usage of a memory control circuit and memory is disclosed in paragraphs 24 and 26-27) to acquire an image output from an image sensor (Paragraph 20: describes the inclusion of a camera, a type of image sensor, that can acquire images); detect objects of different types (Paragraphs 28-30: the description of a detection unit is given in paragraph 28, with the following paragraphs going into more detail on how it works); determine an object as a main object based on a result of the detection (Paragraphs 48 and 50-51: describe the determination of a main subject); and track the object determined as the main object wherein, while the object being tracked is continuously detected (Paragraphs 61, 46, and 1, and the Title: paragraph 61 discloses the continuous tracking, paragraph 1 and the title disclose tracking subjects, and paragraph 46 discloses the tracking of a specific subject, which would be within the BRI), when a current frame includes an object of a type different from a type of the object being tracked, determining whether or not to change a type corresponding to the main object in accordance with a predetermined criterion, changes the type of the main object if the
predetermined criterion is satisfied, and keeps the type of the main object unchanged if not (Paragraphs 51 and 129-137: paragraph 51 gives a broad overview of switching the subjects, while paragraphs 129-137 cover the specifics of switching the subjects based on type).

In regards to claim 2, Ito discloses wherein the at least one processor that executes the set of instructions to further control a display configured to display the image (Paragraphs 57-59: the display unit is included and displays an image or images as output), wherein the display is controlled to display information indicating the object being tracked while superimposing the information on the image (Paragraphs 57-59: paragraph 58 discloses that the display can include captured image data, while paragraph 59 discloses a control system that can turn the display on and off, which would read upon the control unit).

In regards to claim 10, Ito discloses wherein at least two of a person, an animal, and a vehicle are detected as the objects of the different types (Paragraphs 37 and 124: paragraph 37 discloses the ability to detect multiple animal types and multiple types of vehicles, while paragraph 124 further includes people and allows for multiples by the phrase "at least one", which would include any value over 1).

In regards to claim 11, Ito discloses an image sensor configured to output an image; and the image processing apparatus according to claim 1 (Paragraphs 20, 23, and 57-59: the display unit is included and displays an image or images as output, and, since the image sensor of claim 1 has been further defined in paragraphs 20 and 23, the CMOS sensor and camera would cover that embodiment of an image sensor).

In regards to claims 13 and 14, they are similar to claim 1, and they are similarly rejected.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C.
102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 3-4, 6-7, 9, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Ito et al. (US 20210258495 A1), hereinafter referred to as Ito, in view of Yamazaki et al. (US 20230215034 A1), hereinafter referred to as Yamazaki.
In regards to claim 3, Ito does not explicitly disclose wherein, in the state where the object being tracked is continuously detected, in the case where the object of the type different from the type of the object being tracked is detected in addition to the object being tracked, an evaluation value of each of the object being tracked and the object of the type different from the type of the object being tracked is calculated, and wherein, in a case where the evaluation value of the object of the type different from the type of the object being tracked is larger than the evaluation value of the object being tracked by a predetermined value, the type corresponding to the main object is changed.

However, Yamazaki does disclose wherein, in the state where the object being tracked is continuously detected, in the case where the object of the type different from the type of the object being tracked is detected in addition to the object being tracked, an evaluation value of each of the object being tracked and the object of the type different from the type of the object being tracked is calculated (Paragraphs 6 and 42-43: paragraph 6 discloses that a plurality of types of subjects are tracked and paragraphs 42-43 disclose the usage of evaluation values), and wherein, in a case where the evaluation value of the object of the type different from the type of the object being tracked is larger than the evaluation value of the object being tracked by a predetermined value, the type corresponding to the main object is changed (Paragraphs 78, 43, and 49: paragraphs 43 and 49 cite the usage of evaluation values, and paragraph 78 discloses the usage of a value used for evaluating the type of the object having to overcome a threshold to determine whether the type was accurate or not).
It would have been prima facie obvious to combine the teachings of the two disclosures, as the inclusion of an evaluation value used for the determination would lead to a predictable increase in the ability to redetermine values. As the mechanism comes with preassigned thresholds, it allows a system to course correct in real time, which would lead to a predictable increase in accuracy. As such, it would have been prima facie obvious to combine the teachings of these disclosures.

In regards to claim 4, Ito does not explicitly disclose wherein the evaluation value is calculated based on parameters including at least one of a detection size, a detection position, detection reliability, detection frequency, and a number of detected parts. However, Yamazaki discloses wherein the evaluation value is calculated based on parameters including at least one of a detection size, a detection position, detection reliability, detection frequency, and a number of detected parts (Paragraphs 78 and 57: paragraph 78 discloses the use of the detection size or subject size, and paragraph 57 further discloses that the position, reliability, and the number of detected areas are included with the detection results).

In regards to claim 6, Yamazaki does disclose wherein the evaluation value is calculated based on the detection size that is subjected to weighting in consideration of a relative size difference between the object being tracked and the object of the type different from the type of the object being tracked (Paragraphs 78 and 94: paragraph 78 compares subjects by their relative size against a threshold, and paragraph 94 discloses weighting with a value, which could reasonably be interchanged for a reliable result).
In regards to claim 7, Yamazaki does disclose wherein the evaluation value increases as the detection reliability increases (Paragraphs 57 and 59: the determination is made via the detection reliability; as such, there is reason to expect that as the reliability of the detection increases, the determination's value would also increase).

In regards to claim 9, Yamazaki does disclose wherein the evaluation value increases as the number of detected parts increases (Paragraphs 57 and 59: the determination is made via the number of objects; as such, there is reason to expect that as the number of objects increases, the determination's value would also increase).

In regards to claim 12, Ito does not explicitly disclose any of the limitations of this claim. However, Yamazaki does disclose wherein, in the state where the object being tracked is continuously detected, in the case where the object of the type different from the type of the object being tracked is detected in addition to the object being tracked, an evaluation value of each of the object being tracked and the object of the type different from the type of the object being tracked is calculated (Paragraphs 6 and 42-43: paragraph 6 discloses that a plurality of types of subjects are tracked and paragraphs 42-43 disclose the usage of evaluation values), wherein, in a case where the evaluation value of the object of the type different from the type of the object being tracked is larger than the evaluation value of the object being tracked by a predetermined value, the type corresponding to the main object is changed, and wherein, in a state where an imaging preparation operation of a user is received, the evaluation value is calculated based on parameters including at least one of detection reliability, detection frequency, and a number of detected parts (Paragraphs 78, 43, and 49: paragraphs 43 and 49 cite the usage of evaluation values, and paragraph 78 discloses the usage
of a value used for evaluating the type of the object having to overcome a threshold to determine whether the type was accurate or not and discloses the use of the detection size or subject size, and paragraph 57 further discloses that the position, reliability, and the number of detected areas are included with the detection results).

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Ito et al. (US 20210258495 A1), hereinafter referred to as Ito, in view of Yamazaki et al. (US 20230215034 A1), hereinafter referred to as Yamazaki, as applied to claims 3-4, 7, 9, and 12 above, and further in view of Odagiri et al. (US 20210065365 A1), hereinafter referred to as Odagiri.

In regards to claim 5, neither Yamazaki nor Ito explicitly discloses wherein the evaluation value increases with a decreasing distance between the detection position and a center of the image or a focus adjustment region set by a user. However, Odagiri discloses wherein the evaluation value increases with a decreasing distance between the detection position and a center of the image or a focus adjustment region set by a user (Paragraph 95: Odagiri provides that the evaluation value of a region increases as the distance decreases between a probe and the regions, which means that a simple substitution of the probe for the center of the image as a region, or for the region specified by the user, could similarly accomplish this effect).

It would have been prima facie obvious to combine the teachings of these disclosures, as factoring in the decreasing distance would have led to a predictable increase in accuracy. The closer something is, the more identifiable it usually is, particularly in the context of animals, people, or vehicles. As such, having the distance be a positive factor for the evaluation value would have been prima facie obvious.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Ito et al.
(US 20210258495 A1), hereinafter referred to as Ito, in view of Yamazaki et al. (US 20230215034 A1), hereinafter referred to as Yamazaki, as applied to claims 3-4, 7, 9, and 12 above, and further in view of Ruhr et al. (CN 117203403 A), hereinafter referred to as Ruhr.

In regards to claim 8, neither Ito nor Yamazaki discloses wherein the evaluation value increases as the detection frequency increases. However, Ruhr does disclose wherein the evaluation value increases as the detection frequency increases (6th new paragraph of page 13: Ruhr describes that the change in detection frequency and the object's detection rate are aggregated and used to calculate the result, where an increase in detection frequency would imply an increase in the aggregation).

It would have been prima facie obvious to combine these disclosures, as including the detection frequency in the aggregation would lead to a predictable increase in accuracy. The more times an object has been detected, the more opportunities there are to identify it accurately, as it has been exposed to the system more frequently. As such, it would have been prima facie obvious to combine these two disclosures.

Response to Amendment

The amendment, entered 12/19/2025, has been fully considered. The amendment overcomes the 112(f) interpretations and the specification objections that were about typographical errors. The title is still not sufficiently descriptive, and to help advance prosecution in any subsequent filing, the examiner has suggested a new title to help overcome this objection.
Response to Arguments

Applicant's arguments filed 12/19/2025 have been fully considered but they are not persuasive.

In regards to the 101 rejection arguments, the argument claims that the inclusion of an image sensor is enough to overcome the rejection. An image sensor is just an additional generic computer element that performs mere data gathering. As such, this argument was not persuasive.

In regards to the 102 arguments, they are also not persuasive. The argument claims that Ito et al. is related to "how long the template matching tracking using luminance, color, etc. should be continued after a temporary discontinuity in object detection (recognition)". This is quoted in the argument, but it does not appear to be included in Ito et al. at all. Further, this quotation uses terminology that Ito et al. does not use. So, the quotation appears to be the applicant's preferred interpretation of Ito et al., which is contrary to what the document discloses. As such, this argument is not persuasive.

In regards to the 103 arguments, the argument claims that Odagiri is difficult to apply if the object is a person or a vehicle, but Odagiri is being used to disclose increasing a value in accordance with where an object is in an image. As such, this technique is clearly possible, and not necessarily difficult, to apply to non-medical images. Moreover, the BRI of the term "object" is broader than just vehicles or people. As such, this argument is not persuasive. The argument also alleges that there is a lack of motivation to combine Ruhr with the art, but it does not describe why the prima facie obviousness case fails, nor does it make any other arguments on this point. As such, this argument is not persuasive.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CONOR AIDAN O'MALLEY, whose telephone number is (571) 272-0226. The examiner can normally be reached Monday - Friday, 9:00 am - 5:00 pm EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Moyer, can be reached at 572-272-9523. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

CONOR AIDAN O'MALLEY
Examiner, Art Unit 2675

/CONOR A O'MALLEY/
Examiner, Art Unit 2675

/ANDREW M MOYER/
Supervisory Patent Examiner, Art Unit 2675

Prosecution Timeline

Aug 28, 2023 — Application Filed
Sep 16, 2025 — Non-Final Rejection — §101, §102, §103
Dec 19, 2025 — Response Filed
Feb 18, 2026 — Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573234 — BLINK DETECTION IN CABIN USING DYNAMIC VISION SENSOR — Granted Mar 10, 2026 (2y 5m to grant)
Patent 12555254 — MEDICAL IMAGE PROCESSING APPARATUS, MEDICAL IMAGE PROCESSING APPARATUS METHOD, AND NON-TRANSITORY, COMPUTER-READABLE MEDIUM — Granted Feb 17, 2026 (2y 5m to grant)
Patent 12541866 — MEDICAL IMAGE PROCESSING APPARATUS, METHOD, AND COMPUTER READABLE MEDIUM THAT ANALYZE A FLUORESCENCE IMAGE FROM PHOSPHOR IN BIOLOGICAL TISSUE — Granted Feb 03, 2026 (2y 5m to grant)
Patent 12536776 — TEACHING METHOD AND TRANSFER SYSTEM FOR SUBSTRATE USING THREE-DIMENSIONAL IMAGE DATA — Granted Jan 27, 2026 (2y 5m to grant)
Patent 12488417 — PARAMETRIC COMPOSITE IMAGE HARMONIZATION — Granted Dec 02, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 67%
With Interview: 72% (+5.7%)
Median Time to Grant: 3y 0m
PTA Risk: Moderate
Based on 24 resolved cases by this examiner. Grant probability derived from career allow rate.
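The headline figures in this panel reduce to simple arithmetic over the examiner's career counts. A minimal sketch of how the numbers line up (variable names are ours, and the tool's actual model may weight cases by art unit or recency):

```python
# Hypothetical reconstruction of the panel's headline figures from
# the career counts shown above (16 granted of 24 resolved, +5.7%
# interview lift). Rounding to whole percentages is an assumption.

granted = 16           # career grants among resolved cases
resolved = 24          # total resolved cases
interview_lift = 5.7   # reported percentage-point lift from interviews

grant_probability = 100 * granted / resolved         # ~66.7, shown as 67%
with_interview = grant_probability + interview_lift  # ~72.4, shown as 72%

print(round(grant_probability), round(with_interview))  # 67 72
```

This matches the 67% and 72% figures shown above; the reported +4.7% vs TC average would then put the tech center baseline near 62%.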
