Prosecution Insights
Last updated: April 19, 2026
Application No. 18/405,932

NEURAL NETWORK FOR IMAGE REGISTRATION AND IMAGE SEGMENTATION TRAINED USING A REGISTRATION SIMULATOR

Non-Final OA: §101, §102
Filed
Jan 05, 2024
Examiner
OSINSKI, MICHAEL S
Art Unit
2674
Tech Center
2600 — Communications
Assignee
Nvidia Corporation
OA Round
1 (Non-Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 75% (466 granted / 619 resolved), +13.3% vs TC avg (above average)
Interview Lift: +23.2% among resolved cases with an interview (strong)
Typical Timeline: 2y 7m average prosecution; 12 applications currently pending
Career History: 631 total applications across all art units
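As a quick sanity check, the headline figures above are internally consistent. The snippet below reproduces them from the raw counts; the implied Tech Center average is an assumption derived from the +13.3% delta, not a value stated in this report.

```python
# Reproduce the examiner's career allow rate from the raw counts above.
granted = 466
resolved = 619

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # ~75.3%, displayed as 75%

# The +13.3% delta implies a Tech Center baseline (assumed, not reported here).
implied_tc_avg = allow_rate - 0.133
print(f"Implied TC average: {implied_tc_avg:.1%}")
```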

Statute-Specific Performance

§101: 9.5% (-30.5% vs TC avg)
§103: 42.5% (+2.5% vs TC avg)
§102: 22.3% (-17.7% vs TC avg)
§112: 17.7% (-22.3% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 619 resolved cases
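The per-statute deltas are all measured against the black-line Tech Center baseline. A small check (rates and deltas copied from the table above) shows every row implies the same 40.0% baseline, consistent with a single TC average estimate:

```python
# Each statute's rate minus its vs-TC delta recovers the TC baseline.
# Rates and deltas (in %) are the figures from the table above.
rows = {
    "§101": (9.5, -30.5),
    "§103": (42.5, +2.5),
    "§102": (22.3, -17.7),
    "§112": (17.7, -22.3),
}
baselines = {s: round(rate - delta, 1) for s, (rate, delta) in rows.items()}
print(baselines)  # every statute implies the same 40.0 baseline
```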

Office Action

Rejections: §101, §102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

1. Election was made without traverse of Species IV in the reply filed on 12/24/2025.

Information Disclosure Statement

2. The information disclosure statements (IDS) submitted on 1/23/2024, 6/23/2025, and 10/29/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

3. Claims 1, 10-11, 18, and 24 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Independent claims 1, 11, 18, and 24 are directed towards a method, a system, a non-transitory CRM, and a processor, which are recognized statutory categories of invention.

Step 2A, Prong One: The above-mentioned independent claims recite the abstract idea of mental processes, which are concepts performed in the human mind (including an observation, evaluation, judgment, and opinion). For example, "comparing a first transformed segmentation mask with a second transformed segmentation mask" encompasses observing an image/data and performing an evaluation/making a determination regarding the contents of the image in comparison with other image/data, which may be practically performed in the human mind using observation, evaluation, judgment, and opinion, and thus falls within the "mental process" grouping of abstract ideas.
Step 2A, Prong Two: The abstract ideas, as claimed, are not integrated into a practical application and thus do not provide an inventive concept. The above-mentioned independent claims recite additional elements of "a processor", "one or more circuits", "one or more memories", and "one or more processors", which are recited at a high level of generality and amount to no more than components that apply/execute the abstract ideas without limiting how they function; they can thus be performed by any generic computer capable of applying the abstract ideas and are at best the equivalent of merely adding the words "apply it" to the judicial exception. Additionally, "training one or more neural networks to generate a first transformed segmentation mask" is mere data gathering and input/output activity recited at a high level of generality, and thus is insignificant extra-solution activity.

Step 2B: As explained in Step 2A, Prong Two above, the independent claims recite additional elements of "a processor", "one or more circuits", "one or more memories", and "one or more processors" at a high level of generality, such that they amount to no more than generic components to implement the abstract idea on a conventional computer, and "training one or more neural networks to generate a first transformed segmentation mask" is mere data gathering and input/output activity recited at a high level of generality, and thus insignificant extra-solution activity. Even when considered in combination, the additional elements represent mere instructions to apply the judicial exceptions and insignificant extra-solution activities, which cannot provide an inventive concept. The claims do not point to a specific improvement in computers in their communication role or provide a specific improvement in the way computers operate (see MPEP 2106.05(g), MPEP 2106.05(d), and the Berkheimer Memo).
Therefore, based on the above analysis in conjunction with the 2019 Revised Patent Subject Matter Eligibility Guidance, it is determined that the independent claims are directed towards ineligible subject matter of an abstract idea without significantly more. Dependent claim 10 is also rejected as being directed towards the abstract idea of mental processes, as well as insignificant pre-solution data gathering and post-solution data input/output activities, without adding significantly more than the judicial exceptions present within the independent claims.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless - (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

4. Claims 1, 10-11, 18, and 24 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Asendorf (US PGPub 2021/0327041) [hereafter Asendorf].

5. As to claim 1, Asendorf discloses a method (as shown in Figure 1), comprising: using a processor (processor 912 as shown in Figure 8) comprising one or more circuits to train one or more neural networks (CNN 720 as shown in Figure 7) to generate a first transformed segmentation mask (mask corresponding to computer-vision features of reconstructed image 740), the training based, at least in part, on comparing the first transformed segmentation mask with a second transformed segmentation mask (mask corresponding to computer-vision features of validation images 760) (Paragraphs 0020-0025, 0060-0074: a system 900 comprises a computation component 926 that includes one or more processors 912 executing instructions stored on a memory component 914, causing the computation component to perform an image detection method in which a CNN first extracts computer-vision features representing a material property of an imaged material sample of a test image, including textural and/or structural features, which are then processed with a trained transformation function to obtain a transformed image; this is compared with computer-vision features of a validation image, likewise passed through the trained transformation function, to generate a reconstruction error 750 based on the comparison of the transformed features of the test and validation images).

6. As to claim 10, Asendorf discloses that the training is performed in a supervised manner (Paragraphs 0060-0062, 0065-0066: the CNN is trained using training images in conjunction with gradient descent and backpropagation techniques).

7. As to claim 11, Asendorf discloses a system (as shown in Figure 8), comprising: a processor (processor 912) comprising one or more circuits; and one or more memories (memory 914) to store executable instructions that, if executed by the one or more circuits of the processor, cause the system to train one or more neural networks (CNN 720 as shown in Figure 7) to generate a first transformed segmentation mask (mask corresponding to computer-vision features of reconstructed image 740), the training based, at least in part, on comparing the first transformed segmentation mask with a second transformed segmentation mask (mask corresponding to computer-vision features of validation images 760) (Paragraphs 0020-0025, 0060-0074, as mapped for claim 1 above).

8. As to claim 18, Asendorf discloses a non-transitory computer-readable storage medium (memory 914 as shown in the system of Figure 8) storing executable instructions that, if executed by one or more processors (processor 912) of a computer system, cause the computer system to train one or more neural networks (CNN 720 as shown in Figure 7) to generate a first transformed segmentation mask (mask corresponding to computer-vision features of reconstructed image 740), the training based, at least in part, on comparing the first transformed segmentation mask with a second transformed segmentation mask (mask corresponding to computer-vision features of validation images 760) (Paragraphs 0020-0025, 0060-0074, as mapped for claim 1 above).

9. As to claim 24, Asendorf discloses a processor (processor 912), comprising: one or more circuits to train one or more neural networks (CNN 720 as shown in Figure 7) to generate a first transformed segmentation mask (mask corresponding to computer-vision features of reconstructed image 740), the training based, at least in part, on comparing the first transformed segmentation mask with a second transformed segmentation mask (mask corresponding to computer-vision features of validation images 760) (Paragraphs 0020-0025, 0060-0074, as mapped for claim 1 above).

10. Claims 2-9, 12-17, 19-23, and 25-29 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

11. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL S OSINSKI, whose telephone number is (571) 270-3949. The examiner can normally be reached on Monday - Friday, 10:00am - 6:00pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Oneal Mistry, can be reached on (313) 446-4912. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL S OSINSKI/
Primary Examiner, Art Unit 2674
1/20/2026

Prosecution Timeline

Jan 05, 2024
Application Filed
Jan 20, 2026
Non-Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596951
MULTISCALE CONTIGUOUS BLOCK PIXEL ENTANGLER FOR IMAGE RECOGNITION ON HYBRID QUANTUM-CLASSICAL COMPUTING SYSTEM
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12586351
STORAGE MEDIUM, SPECIFYING METHOD, AND INFORMATION PROCESSING DEVICE
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579657
IMAGING DEVICE AND METHOD
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12573028
NEURAL NETWORK FOR IMAGE REGISTRATION AND IMAGE SEGMENTATION TRAINED USING A REGISTRATION SIMULATOR
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12554796
OPTIMIZING PARAMETER ESTIMATION FOR TRAINING NEURAL NETWORKS
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 75%
With Interview: 98% (+23.2%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 619 resolved cases by this examiner. Grant probability derived from career allow rate.
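The "with interview" figure appears to treat the interview lift as additive percentage points on top of the base grant probability; a minimal sketch of that assumed additive model, using the numbers reported above:

```python
# Assumed additive model: base probability plus interview lift in points.
base_grant = 0.75       # career allow rate shown above
interview_lift = 0.232  # +23.2% lift reported for this examiner

with_interview = base_grant + interview_lift
print(f"With interview: {with_interview:.0%}")  # displays as 98%
```

This is only a consistency check on the report's own figures, not a claim about how the underlying model actually combines the two quantities.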
