Prosecution Insights
Last updated: April 19, 2026
Application No. 18/240,461

TRAINING OF INSTANCE SEGMENTATION ALGORITHMS WITH PARTIALLY ANNOTATED IMAGES

Non-Final OA: §101, §103
Filed: Aug 31, 2023
Examiner: LU, ZHIYU
Art Unit: 2665
Tech Center: 2600 — Communications
Assignee: Carl Zeiss Microscopy GmbH
OA Round: 1 (Non-Final)
Grant Probability: 49% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 8m
With Interview: 63%

Examiner Intelligence

Career Allow Rate: 49% (374 granted / 759 resolved; -12.7% vs TC avg)
Interview Lift: +13.9% (moderate lift, based on resolved cases with interview)
Typical Timeline: 3y 8m average prosecution; 57 applications currently pending
Career History: 816 total applications across all art units

Statute-Specific Performance

§101: 2.9% (-37.1% vs TC avg)
§103: 66.6% (+26.6% vs TC avg)
§102: 11.8% (-28.2% vs TC avg)
§112: 17.0% (-23.0% vs TC avg)
Tech Center average shown for comparison • Based on career data from 759 resolved cases

Office Action

Rejections: §101, §103

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Election/Restrictions

Applicant's election without traverse of claims 1-6 and 10-16 in the reply filed on 10/01/2025 is acknowledged.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 10, 12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim(s) does/do not fall within at least one of the four categories of patent eligible subject matter because software such as "computer-implemented machine learning model" or "computer program" is not patent eligible subject matter.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-6, 10-16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Fuchs et al. (US2021/0133966) in view of Kim et al. (US2023/0031919) and Dwivedi et al. (US2022/0215201).

To claim 1, Fuchs teach a method for training a machine learning model for the instance segmentation of objects in microscope images (abstract), comprising the following work steps:

a. inputting a partially annotated image with a first annotated area, whereby regions of objects in the first annotated area of the partially annotated image are assigned to an object class and regions without objects are assigned to a background class (Fig. 2; Fig. 3(b); paragraphs 0004, 0022-0023; paragraphs 0058-0059, 0092-0093, segmentation of objects; paragraph 0083, background regions may be identified; obviously region without object would be identified as background class);

b. labeling the image, particularly in its entirety, via the machine learning model, whereby regions of objects predicted by the machine learning model are assigned to the object class (paragraphs 0007, 0009, 0014, 0018-0019, 0123-0128, divided tiles of sample biomedical image would be analyzed for annotation labeling regions of interests);

c. calculating a value of a loss function of the machine learning model by matching annotations related to the first annotated area to corresponding labels (paragraphs 0080, 0082, 0109-0110); and

d. adapting the machine learning model so as to minimize the loss function (paragraphs 0165-0167, the model trainer may determine whether the segmentation model has converged based on a comparison between the current determination of the loss metric and the previous determination of the loss metric. If the difference between the two loss metrics is greater than a threshold, the model trainer may continue to train the segmentation model. Otherwise, if the difference is less than or equal to the threshold, the model trainer may halt training).

In furthering said obviousness, Kim teach segmenting an image with identification of portions/regions into object classes and portion without an object class as a background class (paragraphs 0050-0053), wherein the segmentation model is trained iteratively to reduce loss until being less than a threshold or converges (paragraphs 0086, 0097). Dwivedi teach annotating object in image via machine learning model, whereby regions of objects predicted by the machine learning model are assigned to the object class (abstract, paragraphs 0029-0030, 0034-0035, identify pixels of region of interest); calculating a value of a loss function of the machine learning model by matching annotations related to the first annotated area to corresponding labels (paragraphs 0036, 0054-0055, comparing identified objects to object-annotation metadata of object-annotated ROI image using a ROI-specific loss function); and adapting the machine learning model so as to minimize the loss function (paragraphs 0026, 0033, 0038, 0054-0055, 0077). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate teachings of Kim and Dwivedi into the method of Fuchs, in order to implement training.

To claim 10, Fuchs, Kim and Dwivedi teach a computer-implemented machine learning model, in an artificial neural network, for the instance segmentation of objects in microscope images, wherein the machine learning model is configured to realize the work steps of a method according to claim 1 for each of a plurality of training inputs (as explained in response to claim 1 above).

To claim 11, Fuchs, Kim and Dwivedi teach a computer-implemented method for the instance segmentation of objects in microscope images, comprising the following work steps: inputting an image; labeling the image, particularly in its entirety, via a machine learning model according to claim 10; and outputting the labeled image (as explained in response to claim 1 above).

To claim 12, Fuchs, Kim and Dwivedi teach a computer program or computer program product, wherein the computer program or computer program product contains commands stored on a computer-readable and/or non-volatile storage medium which, when run on a computer, prompts the computer to execute the steps of the method according to claim 1 (as explained in response to claim 1 above).

To claim 13, Fuchs, Kim and Dwivedi teach a system for training a machine learning model for the instance segmentation of objects in microscope images (as explained in response to claim 1 above).

To claim 14, Fuchs, Kim and Dwivedi teach a system for the instance segmentation of objects in microscope images, comprising: a third interface for inputting an image; means configured to label the image in its entirety, via the machine learning model according to claim 13; and a fourth interface configured to output the labeled image (as explained in response to claim 13 above, wherein a third interface and a fourth interface can be interpreted respectively as camera/data input connection/etc. and display/printer/data output connection/etc.).

To claim 15, Fuchs, Kim and Dwivedi teach a microscope having a system according to claim 13 (as explained in response to claim 13 above).

To claim 16, Fuchs, Kim and Dwivedi teach a microscope (Fuchs, paragraph 0122) having a system according to claim 14 (as explained in response to claim 15 above).

To claim 2, Fuchs, Kim and Dwivedi teach claim 1. Fuchs, Kim and Dwivedi teach further comprising the following work step: e. checking whether a predetermined abort condition has been met; wherein work steps b. to d. are repeated until the predetermined abort condition has been met, in particular until a predefined number of repetitions has been reached and/or until the loss function value falls below a predefined value and/or until a change of the loss function value falls below a predefined threshold and/or an accuracy of the machine learning model falls below a predetermined quality in non-annotated areas of the image or areas only annotated for test purposes (as explained in response to claim 1 above, Fuchs, Kim and Dwivedi, respectively, teach various stop conditions of repetitive neural network training).

To claim 3, Fuchs, Kim and Dwivedi teach claim 1. Fuchs, Kim and Dwivedi teach further comprising the following work steps: f. renewed inputting of the partially annotated image with a second annotated area, whereby regions of objects in the second area of the partially annotated image are assigned to an object class and regions without objects are assigned to the background class; g. renewed labeling of the image by the adapted machine learning model, whereby regions of objects predicted by the adapted machine learning model are assigned to the object class; h. renewed calculating of a value of the loss function of the adapted machine learning model by matching annotations to labels in the first annotated area and in the second annotated area; and i. renewed adapting of the adapted machine learning model so as to minimize the loss function (Fuchs, paragraph 0125, obvious for repeating procedure for second or subsequent region of interest; Dwivedi, paragraph 0060, identification for plurality of regions of interest within the input image).

To claim 4, Fuchs, Kim and Dwivedi teach claim 3. Fuchs, Kim and Dwivedi teach further comprising the following work step: j. checking whether a predetermined abort condition has been met; wherein work steps g. to i. are repeated until the predetermined abort condition has been met, in particular until a predefined number of repetitions has been reached and/or until the loss function value falls below a predefined value and/or until a change of the loss function value falls below a predefined threshold and/or an accuracy of the machine learning model falls below a predetermined quality in non-annotated areas of the image or areas only annotated for test purposes (as explained in response to claim 1 above, Fuchs, Kim and Dwivedi, respectively, teach various stop conditions of repetitive neural network training).

To claim 5, Fuchs, Kim and Dwivedi teach claim 1. Fuchs, Kim and Dwivedi teach wherein the value of the loss function depends on the geometric arrangement of the regions of objects predicted by the machine learning model with respect to the first annotated area and/or with respect to the annotated regions, in particular regions of objects, in the first annotated area and/or with respect to the second annotated area and/or with respect to the annotated regions, in particular regions of objects, in the second annotated area (Dwivedi, paragraphs 0031, 0036, 0054, loss function depend on geometric relations).

To claim 6, Fuchs, Kim and Dwivedi teach claim 1. Fuchs, Kim and Dwivedi teach wherein regions of objects predicted by the machine learning model which are assignable to a region of an object in the first annotated area and/or the second annotated area are always included in the calculation of the loss function value (Fuchs, paragraphs 0165-0166, loss metric over all segmented image and annotations).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZHIYU LU whose telephone number is (571) 272-2837. The examiner can normally be reached weekdays, 8:30 AM - 5:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Stephen R Koziol, can be reached at (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ZHIYU LU/
Primary Examiner, Art Unit 2665
January 10, 2026

Prosecution Timeline

Aug 31, 2023
Application Filed
Jan 10, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601695: METHOD FOR MEASURING THE DETECTION SENSITIVITY OF AN X-RAY DEVICE (2y 5m to grant; granted Apr 14, 2026)
Patent 12597268: METHOD AND DEVICE FOR DETERMINING LANE OF TRAVELING VEHICLE BY USING ARTIFICIAL NEURAL NETWORK, AND NAVIGATION DEVICE INCLUDING SAME (2y 5m to grant; granted Apr 07, 2026)
Patent 12596187: METHOD, APPARATUS, AND SYSTEM FOR WIRELESS SENSING MEASUREMENT AND REPORTING (2y 5m to grant; granted Apr 07, 2026)
Patent 12592052: INFORMATION PROCESSING DEVICE, AND INFORMATION PROCESSING METHOD (2y 5m to grant; granted Mar 31, 2026)
Patent 12581142: APPROACHES FOR COMPRESSING AND DISTRIBUTING IMAGE DATA (2y 5m to grant; granted Mar 17, 2026)
Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 49% (63% with interview, +13.9%)
Median Time to Grant: 3y 8m
PTA Risk: Low
Based on 759 resolved cases by this examiner. Grant probability derived from career allow rate.
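A minimal sketch of how these headline numbers appear to fit together, assuming the interview lift is simply added to the career allow rate (the variable names and the additive-lift assumption are illustrative, not taken from the report):

```python
# Hypothetical reconstruction of the projection figures above.
granted, resolved = 374, 759    # examiner's career totals from the report
interview_lift = 0.139          # "+13.9% Interview Lift"

base_rate = granted / resolved              # career allow rate, ~49%
with_interview = base_rate + interview_lift # assumes a simple additive lift

print(f"Grant probability: {base_rate:.0%}")
print(f"With interview:    {with_interview:.0%}")
```

Under this reading, 374/759 rounds to the reported 49%, and adding the 13.9-point lift yields the 63% with-interview figure.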
