Prosecution Insights
Last updated: April 19, 2026
Application No. 18/562,285

IMAGE PROCESSING ALGORITHM EVALUATING APPARATUS

Final Rejection — §103, §112
Filed: Nov 17, 2023
Examiner: VANCHY JR, MICHAEL J
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: The University of Tokyo
OA Round: 2 (Final)

Grant Probability: 67% (Favorable)
OA Rounds: 3-4
To Grant: 3y 4m
With Interview: 87%

Examiner Intelligence

Career Allow Rate: 67% (404 granted / 606 resolved; +4.7% vs TC avg, above average)
Interview Lift: +20.1% on resolved cases with interview (a strong lift)
Avg Prosecution: 3y 4m typical timeline (16 currently pending)
Career History: 622 total applications across all art units
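
The headline figures reduce to simple arithmetic on the career counts above. The snippet below is a hypothetical reconstruction, not the product's actual method; the variable names and the subtraction used to back out the TC baseline are our assumptions.

```python
# Hypothetical reconstruction of the dashboard's headline arithmetic.
granted, resolved = 404, 606
allow_rate = granted / resolved                     # 0.6667, displayed as 67%
print(f"Career allow rate: {allow_rate:.1%}")       # -> 66.7%

# The "+4.7% vs TC avg" badge then implies a Tech Center average near 62%.
implied_tc_avg = allow_rate - 0.047
print(f"Implied TC average: {implied_tc_avg:.1%}")  # -> ~62.0%
```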

Statute-Specific Performance

§101: 11.7% (-28.3% vs TC avg)
§103: 60.8% (+20.8% vs TC avg)
§102: 8.4% (-31.6% vs TC avg)
§112: 10.4% (-29.6% vs TC avg)

Tech Center averages are estimates • Based on career data from 606 resolved cases
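
The deltas are stated against an estimated Tech Center average. Under the assumed reading that each delta is the examiner's rate minus the TC average (in percentage points), the implied baselines can be recovered with a quick check:

```python
# Assumed reading: delta = examiner_rate - tc_average, in percentage points.
rates = {
    "§101": (11.7, -28.3),
    "§103": (60.8, +20.8),
    "§102": (8.4, -31.6),
    "§112": (10.4, -29.6),
}
for statute, (examiner_rate, delta) in rates.items():
    implied_tc_avg = examiner_rate - delta
    print(f"{statute}: examiner {examiner_rate}%, implied TC avg {implied_tc_avg:.1f}%")
# Every implied TC average comes out to 40.0%, which suggests the chart compares
# against a single estimated baseline rather than true per-statute averages.
```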

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 12/22/2025 have been fully considered but they are not persuasive. Applicant's arguments will now be addressed:

Applicant argues (Remarks; p. 6, 1st paragraph) that prior art Tremblay "fails to teach or suggest how the created image is used" (i.e., how the augmented rain image is used). The Examiner respectfully disagrees. Tremblay teaches that the augmented rain images are used within image processing algorithms (such as object detection) to improve their performance (p. 342, right column, 1st paragraph and p. 355, Fig. 17).

Applicant also argues (Remarks; p. 6, 1st paragraph) that "claim 1 is amended to recite evaluating the algorithm based on whether a difference occurs between an image without disturbance and an image with disturbance. Tremblay fails to reasonably teach or suggest the recited claim language". The Examiner respectfully disagrees. Tremblay teaches evaluating how accurate the algorithms are when the weather is clear and when using the rain images (p. 355, Fig. 17); specifically, evaluating the performance of the image processing algorithms based on how well the image processing algorithms do without rain (i.e., clear) and with the rain, wherein the algorithms can be finetuned in order to improve their performance in real-world rainy conditions (p. 341, Abstract; p. 342, right column, 1st paragraph; p. 351, Fig. 11; p. 353, Sections 7, 7.1, and 7.2; p. 354, Figs. 15 and 16; and p. 355, Fig. 17).

Claims 1-5 are pending; claims 1-5 have been amended.

Claim Interpretation: 112(f)

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:

As to claims 1-5, the "storage unit" is considered to read on a computer with a memory (Specification as filed: [0011]; PGPUB: [0021]).

As to claims 3-5, the "generating unit" is considered to read on a computer with a processor for operating the generation process (Specification as filed: [0011]; PGPUB: [0021]).

As to claims 3-5, the "learning unit" is considered to read on a computer with a processor for operating the learning process (Specification as filed: [0011]; PGPUB: [0021]).
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-5 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 (and thus claims 2-5) recites the limitation "the image storage unit". There is insufficient antecedent basis for this limitation in the claim. The Examiner believes, since the image storage unit has been removed earlier in the claim, that "the image storage unit" was also supposed to be removed and potentially replaced with "the memory". Appropriate correction is required.

Claim 3 recites the limitations "the image generating unit" and "the image learning unit". There is insufficient antecedent basis for these limitations in the claim. The Examiner believes, since these units have been removed earlier in the claim(s), that "the image generating unit" and "the image learning unit" were also supposed to be removed and potentially replaced with "the processor is configured to…". Appropriate correction is required.

Though not rejected under 35 USC 112(b), claims 4 and 5 also still recite "the generating unit" (claim 4) and "the learning unit" (claims 4 and 5). The Examiner believes these should also be changed, as described above, to something like "the processor is configured to…" to avoid any future 112(b) rejections if/when claim 3 is amended.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1-5 are rejected under 35 U.S.C. 103 as being unpatentable over Tremblay et al., "Rain Rendering for Evaluating and Improving Robustness to Bad Weather" (Tremblay), and further in view of Jaipuria et al., US 2021/0004608 A1 (Jaipuria).

Regarding claim 1, Tremblay teaches an image processing algorithm evaluating apparatus (improving the performance of algorithms for object detection, semantic segmentation, and depth estimation on rainy images) (p. 341, Title and p. 342, right column, 1st paragraph) comprising: actual images captured (clear images that are taken and stored) (p. 344, Fig. 2) (existing image databases) (p. 342, left column, 1st paragraph); when receiving disturbance information representing a disturbance, acquire a target image (acquiring a clear image) (p. 344, Fig. 2) from among the actual images stored in the image storage unit (determining images from the same area (target), including that the images are rainy images (disturbance)) (p. 344, Fig. 2), interpret the target image (interpreting the target image using depth and illumination estimation) (p. 344, Fig. 2), and generate a composite image by manipulating the target image in such a manner that the disturbance is reflected in the target image, based on the interpretation (generating a composite image of the clear image with the rain (disturbance) based on rendering) (p. 344, Fig. 2 and Section 3.1); and evaluate performance of an image processing algorithm performing image processing based on the generated composite image and determining a situation around the vehicle (evaluating the performance of algorithms for object detection, semantic segmentation, and depth estimation on the generated rainy images, for objects around the vehicle) (p. 342, right column, 1st paragraph; p. 353, Section 7); apply image processing (applying image processing algorithms, such as object detection, semantic segmentation, depth estimation, etc.) (p. 342, right column, 1st paragraph) to the composite image using the image processing algorithm (applying the image processing algorithms to the rain-augmented images) (p. 342, right column, 1st paragraph; p. 351, Fig. 11; p. 353, Section 7; p. 354, Figs. 15 and 16; and p. 355, Fig. 17) and calculate determination information for determining the situation around the vehicle (determining, such as in object detection, objects around the vehicle and the percentage of being accurate) (p. 341, Abstract; p. 342, right column, 1st paragraph; p. 351, Fig. 11; p. 353, Section 7; p. 354, Figs. 15 and 16; and p. 355, Fig. 17), and evaluate the performance of the image processing algorithm based on whether there is a difference between the determination information resultant of processing an image without disturbance and the determination information resultant of processing the image with a disturbance (evaluating the performance of the image processing algorithms based on how well the image processing algorithms do without rain (i.e., clear) and with the rain, wherein the algorithms can be finetuned in order to improve their performance in real-world rainy conditions) (p. 341, Abstract; p. 342, right column, 1st paragraph; p. 351, Fig. 11; p. 353, Sections 7, 7.1, and 7.2; p. 354, Figs. 15 and 16; and p. 355, Fig. 17). Thus, Tremblay teaches evaluating the performance of the image processing algorithm based on seeing how well the algorithms did without weather and with weather (i.e., a difference) and then finetuning the algorithms to improve their performance (p. 355, Fig. 17).

Tremblay teaches using images and databases (existing image databases) (p. 342, left column, 1st paragraph) as well as that the computer vision tasks can be used in autonomous driving (p. 348, Section 5, 1st paragraph); however, Tremblay does not explicitly teach "a memory and a processor" or "captured from a vehicle".

Jaipuria teaches a computer, including a processor and a memory, the memory including instructions to be executed by the processor to generate a synthetic image and corresponding ground truth (Abstract) and generate a plurality of domain-adapted synthetic images by processing the synthetic image with a variational autoencoder-generative adversarial network (VAE-GAN) (Abstract), wherein the VAE-GAN is trained to adapt the synthetic image from a first domain to a second domain (Abstract); wherein a memory (memory on a computer) (Abstract) stores actual images captured from a vehicle (wherein the memory can store images captured by the vehicle) (Abstract, [0001], and [0009-0010]); and a processor (a computer, including a processor and a memory, the memory including instructions to be executed by the processor to generate a synthetic image and corresponding ground truth) (Abstract).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Tremblay to include obtaining images from a vehicle and storing them, since it allows for generating more authentic synthetic/rendered images based on the environment's domain (winter, rain, night, etc.) (Jaipuria; [0030-0032]).

Regarding claim 2, Tremblay teaches estimating a distance from a position at which the target image is captured to an object included in the target image (estimating a distance to objects captured in the image, such as obtaining a depth estimation) (p. 344, Fig. 2 and Section 3.1), calculating an intensity of disturbance based on the estimated distance (using the depth information to determine the rainfall rate) (p. 344, Sections 3.1 and 3.1.1), and generating the composite image based on the calculation result (rendering the rainy image based on the calculation result, including the rainfall rate) (p. 344, Section 3.1.1 and p. 347, Fig. 6).

Tremblay teaches using images and databases (existing image databases) (p. 342, left column, 1st paragraph) as well as that the computer vision tasks can be used in autonomous driving (p. 348, Section 5, 1st paragraph); however, Tremblay does not explicitly teach "a memory and a processor" or "captured from a vehicle".

Jaipuria teaches a computer, including a processor and a memory, the memory including instructions to be executed by the processor to generate a synthetic image and corresponding ground truth (Abstract) and generate a plurality of domain-adapted synthetic images by processing the synthetic image with a variational autoencoder-generative adversarial network (VAE-GAN) (Abstract), wherein the VAE-GAN is trained to adapt the synthetic image from a first domain to a second domain (Abstract); wherein a memory (memory on a computer) (Abstract) stores actual images captured from a vehicle (wherein the memory can store images captured by the vehicle) (Abstract, [0001], and [0009-0010]); and a processor (a computer, including a processor and a memory, the memory including instructions to be executed by the processor to generate a synthetic image and corresponding ground truth) (Abstract).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Tremblay to include obtaining images from a vehicle and storing them, since it allows for generating more authentic synthetic/rendered images based on the environment's domain (winter, rain, night, etc.) (Jaipuria; [0030-0032]).

Regarding claim 3, Tremblay teaches further comprising: training images, a reference image that is the actual image not including the disturbance, and a disturbance image that is the actual image including the disturbance (including images that are clear images and images that include rain translations) (p. 346, Fig. 5 and Section 3.2); and to generate a training composite image based on the reference image and the disturbance information (image training based on the clear images and rain translations) (p. 346, Fig. 5 and Section 3.2), using a same process as a process by which the image generating unit generates the composite image based on the target image and the disturbance information (using the process for creating the rendered rainy images; physics-based rain augmentation) (p. 344, Fig. 2 and Section 3.1; p. 346, Fig. 5 and Sections 3.2 and 3.3), and that carries out training using a generative adversarial network or a cycle generative adversarial network so as to improve a determination accuracy of authenticity of the generated training composite image with respect to the disturbance image (using a GAN wherein the clear images are first translated into rain with CycleGAN) (p. 346, Fig. 5 and Section 3.2), wherein to interpret the target image based on the training result of the image learning unit and generate the composite image (using the training result to generate the final rendered image of a rainy image) (p. 346, Fig. 5 and Sections 3.2 and 3.3).

Tremblay teaches using images and databases (existing image databases) (p. 342, left column, 1st paragraph) as well as that the computer vision tasks can be used in autonomous driving (p. 348, Section 5, 1st paragraph); however, Tremblay does not explicitly teach "a memory and a processor" or "captured from a vehicle".
Jaipuria teaches a computer, including a processor and a memory, the memory including instructions to be executed by the processor to generate a synthetic image and corresponding ground truth (Abstract) and generate a plurality of domain-adapted synthetic images by processing the synthetic image with a variational autoencoder-generative adversarial network (VAE-GAN) (Abstract), wherein the VAE-GAN is trained to adapt the synthetic image from a first domain to a second domain (Abstract); wherein a memory (memory on a computer) (Abstract) stores actual images captured from a vehicle (wherein the memory can store images captured by the vehicle) (Abstract, [0001], and [0009-0010]); and a processor (a computer, including a processor and a memory, the memory including instructions to be executed by the processor to generate a synthetic image and corresponding ground truth) (Abstract).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Tremblay to include obtaining images from a vehicle and storing them, since it allows for generating more authentic synthetic/rendered images based on the environment's domain (winter, rain, night, etc.) (Jaipuria; [0030-0032]).

Regarding claim 4, Tremblay teaches storing a plurality of the disturbance images including the disturbance of a same type with different degrees (training based on generating images with rain; wherein controlling the amount of rain in order to generate arbitrary amounts ranging from very light rain to very heavy storms) (p. 342, left column, 1st paragraph to right column), carrying out training based on the plurality of disturbance images including the disturbance with different degrees (based on the different amounts of rain using curriculum learning) (p. 342, left column, 1st paragraph to right column), and, when receiving an input of the disturbance information including a degree of the disturbance, the image generating unit interprets the target image based on the training result of the image learning unit (interpreting the amount of rain for rendering the rainy images based on curriculum learning) (p. 342, left column, 1st paragraph to right column), and generates the composite image in such a manner that the disturbance is reflected in the target image by a degree corresponding to the disturbance information (by controlling the amount of rain, it allows the system to produce weather-augmented datasets where the rainfall rate is known and calibrated) (p. 342, left column, 1st paragraph to right column).

Tremblay teaches using images and databases (existing image databases) (p. 342, left column, 1st paragraph) as well as that the computer vision tasks can be used in autonomous driving (p. 348, Section 5, 1st paragraph); however, Tremblay does not explicitly teach "a memory and a processor" or "captured from a vehicle".
Jaipuria teaches a computer, including a processor and a memory, the memory including instructions to be executed by the processor to generate a synthetic image and corresponding ground truth (Abstract) and generate a plurality of domain-adapted synthetic images by processing the synthetic image with a variational autoencoder-generative adversarial network (VAE-GAN) (Abstract), wherein the VAE-GAN is trained to adapt the synthetic image from a first domain to a second domain (Abstract); wherein a memory (memory on a computer) (Abstract) stores actual images captured from a vehicle (wherein the memory can store images captured by the vehicle) (Abstract, [0001], and [0009-0010]); and a processor (a computer, including a processor and a memory, the memory including instructions to be executed by the processor to generate a synthetic image and corresponding ground truth) (Abstract).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Tremblay to include obtaining images from a vehicle and storing them, since it allows for generating more authentic synthetic/rendered images based on the environment's domain (winter, rain, night, etc.) (Jaipuria; [0030-0032]).

Regarding claim 5, Tremblay teaches storing a plurality of the actual images including different attributes (wherein the images are also taken in different areas and at different rainy characteristics) (p. 346, Fig. 4 and Section 3.2) in a manner associated with label information indicating the attributes (wherein the training images include actual images with annotations indicating clear or rainy or amount of rain) (p. 348, Section 5, 2nd paragraph) (used for training the model) (p. 342, left column, 1st paragraph to right column), carrying out training using the cycle generative adversarial network in such a manner that a determination accuracy related to the attributes (wherein the images are also taken in different areas and at different rainy characteristics) (p. 346, Fig. 4 and Section 3.2) between the generated training composite image and the disturbance image is improved (using a GAN wherein the clear images are first translated into rain with CycleGAN) (p. 346, Fig. 5 and Section 3.2) (the rain rendering pipeline for improving robustness to rain through extensive evaluations on synthetic and real rain databases) (p. 353, Section 7), and, when receiving the target image and the disturbance information, extracting label information indicating the attribute of the target image (receiving the images that produce weather-augmented datasets, where the rainfall rate is known and calibrated) (p. 342, left column, 1st paragraph to right column), interpreting the target image based on the training result of the image learning unit, and generating the composite image reflecting the attribute (using the training result to generate the final rendered image of a rainy image, which reflects the amount of rain and characteristics) (p. 346, Figs. 4 and 5 and Sections 3.2 and 3.3) (wherein the images are also taken in different areas and at different rainy characteristics).

Tremblay teaches using images and databases (existing image databases) (p. 342, left column, 1st paragraph) as well as that the computer vision tasks can be used in autonomous driving (p. 348, Section 5, 1st paragraph); however, Tremblay does not explicitly teach "a memory and a processor" or "captured from a vehicle".
Jaipuria teaches a computer, including a processor and a memory, the memory including instructions to be executed by the processor to generate a synthetic image and corresponding ground truth (Abstract) and generate a plurality of domain-adapted synthetic images by processing the synthetic image with a variational autoencoder-generative adversarial network (VAE-GAN) (Abstract), wherein the VAE-GAN is trained to adapt the synthetic image from a first domain to a second domain (Abstract); wherein a memory (memory on a computer) (Abstract) stores actual images captured from a vehicle (wherein the memory can store images captured by the vehicle) (Abstract, [0001], and [0009-0010]); and a processor (a computer, including a processor and a memory, the memory including instructions to be executed by the processor to generate a synthetic image and corresponding ground truth) (Abstract).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Tremblay to include obtaining images from a vehicle and storing them, since it allows for generating more authentic synthetic/rendered images based on the environment's domain (winter, rain, night, etc.) (Jaipuria; [0030-0032]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Fukuhara et al., US 2019/0220029 A1, teaches: In order to solve the problem as described above, the present invention is related to the improvement of the recognition rate of target objects such as other vehicles peripheral to own vehicle, obstacles on the road, and walkers, and it is an object of the present invention to improve reality of the driving test of a vehicle and sample collection by artificially generating images which are very similar to actually photographed images taken under conditions, such as severe weather conditions, which are difficult to reproduce, and provide a simulation system, a simulation program and a simulation method which can perform synchronization control with CG images generated by a CG technique by building a plurality of different types of sensors in a virtual environment ([0011]). Furthermore, while 1c to 1f are not limited to PC terminals, for example, when a test is conducted with actually moving vehicles, 1c to 1f can be considered to refer to car navigation systems mounted on the test vehicles. In this case, rather than recognizing the 3D graphics composite image as the simulation image D61 generated by the image generation unit 203 of FIG. 4B, the learning unit 204 receives a live-action video in place of the simulation image D61 so that the system can be used for evaluating the performance of the image recognition unit 204. This is because, while a human being can immediately and accurately recognize a walker and a vehicle in a live-action video, it is possible to verify whether or not the image recognition unit 204 can output the same result of extraction and recognition (Figs. 4B and 14; [0146]).

Evans et al., US 2022/0189145 A1, teaches: validation is performed as a simulation to test the accuracy of the trained AI model ([0076]). For example, the electronic device creates, or generates, a scenario or environment ([0076]). The environment can be a known environment, such as an airport, or a computer- or human-generated simulated environment ([0076]).
The generated environment in some examples includes realistic-looking features including shadows, lighting, weather conditions, time of day, and one or more objects to be detected ([0076]). Then, the electronic device executes the model to determine the ability of the model to recognize and properly identify the one or more objects to be detected ([0076]). The electronic device determines an accuracy of the identification of the one or more objects ([0076]). In some implementations, the model is executed in more than one generated environment to increase the accuracy of the model ([0076]).

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Contact

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL J VANCHY JR, whose telephone number is (571) 270-1193. The examiner can normally be reached Monday-Friday, 9am-5pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL J VANCHY JR/
Primary Examiner, Art Unit 2666
Michael.Vanchy@uspto.gov
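
The technical crux of the §103 rejection is the clear-versus-disturbance comparison: run the same image processing algorithm on an image without the disturbance and on a counterpart with the disturbance rendered in, then evaluate based on whether the determinations differ. A minimal sketch of that comparison follows; the detector interface, function names, and difference metric are hypothetical illustrations, not Tremblay's code and not the claimed apparatus.

```python
# Minimal sketch of clear-vs-disturbance evaluation (hypothetical names
# throughout; not Tremblay's code and not the claimed apparatus).

def evaluate_disturbance_impact(detector, clear_image, disturbed_image):
    """Run one detector on a clear image and on its disturbance-augmented
    counterpart, then report whether the determinations differ."""
    clear_dets = detector(clear_image)          # e.g. [(label, box, score), ...]
    disturbed_dets = detector(disturbed_image)  # same detector, rainy input

    # Crude difference signal: detections lost (or gained) under disturbance.
    delta = len(clear_dets) - len(disturbed_dets)
    return {
        "clear_count": len(clear_dets),
        "disturbed_count": len(disturbed_dets),
        "difference_occurred": delta != 0,
    }

# Toy usage with a stub detector that "loses" objects in rain.
stub = lambda img: [("car", None, 0.9)] if img == "clear" else []
print(evaluate_disturbance_impact(stub, "clear", "rainy"))
# {'clear_count': 1, 'disturbed_count': 0, 'difference_occurred': True}
```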

Prosecution Timeline

Nov 17, 2023
Application Filed
Sep 18, 2025
Non-Final Rejection — §103, §112
Dec 22, 2025
Response Filed
Mar 20, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602906
IMAGE RECOGNITION APPARATUS
2y 5m to grant • Granted Apr 14, 2026
Patent 12579596
MANAGING ARTIFICIAL-INTELLIGENCE DERIVED IMAGE ATTRIBUTES
2y 5m to grant • Granted Mar 17, 2026
Patent 12579634
REAL-TIME PROCESS DEFECT DETECTION AUTOMATION SYSTEM AND METHOD USING MACHINE LEARNING MODEL
2y 5m to grant • Granted Mar 17, 2026
Patent 12573225
METHODS AND SYSTEMS OF FIELD DETECTION IN A DOCUMENT
2y 5m to grant • Granted Mar 10, 2026
Patent 12551101
SYSTEM AND METHOD FOR DIGITAL MEASUREMENTS OF SUBJECTS
2y 5m to grant • Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 67%
With Interview: 87% (+20.1%)
Median Time to Grant: 3y 4m
PTA Risk: Moderate

Based on 606 resolved cases by this examiner. Grant probability derived from career allow rate.
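
The "With Interview" projection is consistent with simply adding the examiner's interview lift to the base grant probability. A hypothetical helper (ours, not the product's) makes that combination explicit, clamped to the valid probability range:

```python
# Hypothetical helper: combine a base grant probability with the examiner's
# interview lift, clamping the result to a valid probability in [0, 1].
def project_with_interview(base: float, lift: float) -> float:
    return min(max(base + lift, 0.0), 1.0)

print(f"{project_with_interview(0.67, 0.201):.0%}")  # -> 87%
```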
