DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
Applicant’s election without traverse of claims 1-6 and 10-16 in the reply filed on 10/01/2025 is acknowledged.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 10 and 12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent-eligible subject matter because software per se, such as a “computer-implemented machine learning model” or “computer program,” is not patent-eligible subject matter.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6 and 10-16 are rejected under 35 U.S.C. 103 as being unpatentable over Fuchs et al. (US 2021/0133966) in view of Kim et al. (US 2023/0031919) and Dwivedi et al. (US 2022/0215201).
To claim 1, Fuchs teaches a method for training a machine learning model for the instance segmentation of objects in microscope images (abstract), comprising the following work steps:
a. inputting a partially annotated image with a first annotated area, whereby regions of objects in the first annotated area of the partially annotated image are assigned to an object class and regions without objects are assigned to a background class (Fig. 2; Fig. 3(b); paragraphs 0004, 0022-0023; paragraphs 0058-0059, 0092-0093, segmentation of objects; paragraph 0083, background regions may be identified; obviously, a region without objects would be identified as belonging to the background class);
b. labeling the image, particularly in its entirety, via the machine learning model, whereby regions of objects predicted by the machine learning model are assigned to the object class (paragraphs 0007, 0009, 0014, 0018-0019, 0123-0128, divided tiles of the sample biomedical image would be analyzed for annotation, labeling regions of interest);
c. calculating a value of a loss function of the machine learning model by matching annotations related to the first annotated area to corresponding labels (paragraphs 0080, 0082, 0109-0110); and
d. adapting the machine learning model so as to minimize the loss function (paragraphs 0165-0167, the model trainer may determine whether the segmentation model has converged based on a comparison between the current determination of the loss metric and the previous determination of the loss metric. If the difference between the two loss metrics is greater than a threshold, the model trainer may continue to train the segmentation model. Otherwise, if the difference is less than or equal to the threshold, the model trainer may halt training).
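For clarity of record, the training procedure of work steps a. through d. (input a partially annotated image, label it, compute a loss over the annotated area, adapt the model to minimize that loss) can be sketched as follows. This is a minimal, purely illustrative sketch: all function names, variable names, and the toy threshold "model" are hypothetical and are not drawn from Fuchs, Kim, or Dwivedi.

```python
# Illustrative sketch of work steps a.-d.: train a toy "model"
# (a single threshold parameter) on a partially annotated image,
# computing the loss only over the annotated area.

OBJECT, BACKGROUND = 1, 0

def predict(image, threshold):
    """Step b.: label every pixel; values >= threshold -> object class."""
    return [[OBJECT if px >= threshold else BACKGROUND for px in row]
            for row in image]

def masked_loss(labels, annotations, mask):
    """Step c.: count label/annotation mismatches inside the annotated area."""
    loss = 0
    for i, row in enumerate(mask):
        for j, annotated in enumerate(row):
            if annotated and labels[i][j] != annotations[i][j]:
                loss += 1
    return loss

def train(image, annotations, mask, thresholds):
    """Step d.: adapt the model by picking the loss-minimizing parameter."""
    return min(thresholds,
               key=lambda t: masked_loss(predict(image, t), annotations, mask))

# Step a.: a partially annotated image; only the first two rows are
# annotated (mask is True there), the last row is unannotated.
image = [[0.9, 0.8, 0.1],
         [0.2, 0.7, 0.1],
         [0.1, 0.1, 0.1]]
annotations = [[1, 1, 0],
               [0, 1, 0],
               [0, 0, 0]]
mask = [[True, True, True],
        [True, True, True],
        [False, False, False]]

threshold = train(image, annotations, mask, [0.05, 0.35, 0.65, 0.95])
```

In this toy setting, the selected threshold separates object pixels from background pixels so that the masked loss over the annotated rows is zero, mirroring the loss-minimization described in Fuchs at paragraphs 0165-0167.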
In furthering said obviousness, Kim teaches segmenting an image with identification of portions/regions belonging to object classes and portions without objects as a background class (paragraphs 0050-0053), wherein the segmentation model is trained iteratively to reduce the loss until it is less than a threshold or converges (paragraphs 0086, 0097).
Dwivedi teaches annotating objects in an image via a machine learning model, whereby regions of objects predicted by the machine learning model are assigned to the object class (abstract, paragraphs 0029-0030, 0034-0035, identify pixels of region of interest); calculating a value of a loss function of the machine learning model by matching annotations related to the first annotated area to corresponding labels (paragraphs 0036, 0054-0055, comparing identified objects to object-annotation metadata of an object-annotated ROI image using a ROI-specific loss function); and adapting the machine learning model so as to minimize the loss function (paragraphs 0026, 0033, 0038, 0054-0055, 0077).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Kim and Dwivedi into the method of Fuchs, in order to implement iterative training that minimizes the loss of the segmentation model until convergence.
To claim 10, Fuchs, Kim and Dwivedi teach a computer-implemented machine learning model, in an artificial neural network, for the instance segmentation of objects in microscope images, wherein the machine learning model is configured to realize the work steps of a method according to claim 1 for each of a plurality of training inputs (as explained in response to claim 1 above).
To claim 11, Fuchs, Kim and Dwivedi teach a computer-implemented method for the instance segmentation of objects in microscope images, comprising the following work steps:
inputting an image; labeling the image, particularly in its entirety, via a machine learning model according to claim 10; and outputting the labeled image (as explained in response to claim 1 above).
To claim 12, Fuchs, Kim and Dwivedi teach a computer program or computer program product, wherein the computer program or computer program product contains commands stored on a computer-readable and/or non-volatile storage medium which, when run on a computer, prompt the computer to execute the steps of the method according to claim 1 (as explained in response to claim 1 above).
To claim 13, Fuchs, Kim and Dwivedi teach a system for training a machine learning model for the instance segmentation of objects in microscope images (as explained in response to claim 1 above).
To claim 14, Fuchs, Kim and Dwivedi teach a system for the instance segmentation of objects in microscope images, comprising: a third interface for inputting an image; means configured to label the image in its entirety, via the machine learning model according to claim 13; and a fourth interface configured to output the labeled image (as explained in response to claim 13 above, wherein a third interface and a fourth interface can be interpreted respectively as camera/data input connection/etc. and display/printer/data output connection/etc.).
To claim 15, Fuchs, Kim and Dwivedi teach a microscope having a system according to claim 13 (as explained in response to claim 13 above).
To claim 16, Fuchs, Kim and Dwivedi teach a microscope (Fuchs, paragraph 0122) having a system according to claim 14 (as explained in response to claim 15 above).
To claim 2, Fuchs, Kim and Dwivedi teach claim 1.
Fuchs, Kim and Dwivedi teach further comprising the following work step: e. checking whether a predetermined abort condition has been met; wherein work steps b. to d. are repeated until the predetermined abort condition has been met, in particular until a predefined number of repetitions has been reached and/or until the loss function value falls below a predefined value and/or until a change of the loss function value falls below a predefined threshold and/or an accuracy of the machine learning model falls below a predetermined quality in non-annotated areas of the image or areas only annotated for test purposes (as explained in response to claim 1 above; Fuchs, Kim and Dwivedi, respectively, teach various stop conditions for repetitive neural network training).
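The abort-condition check of work step e. (repeat steps b. to d. until a repetition limit is reached, the loss falls below a value, or the change in loss falls below a threshold) can be sketched as the following illustrative loop. All names and the specific threshold values are hypothetical and serve only to illustrate the recited stop conditions.

```python
# Illustrative sketch of work step e.: repeat training steps until a
# predetermined abort condition is met.

MAX_REPETITIONS = 100    # predefined number of repetitions
LOSS_TARGET = 0.01       # predefined loss value
MIN_IMPROVEMENT = 1e-4   # predefined threshold on the change of the loss

def train_until_abort(step, initial_loss):
    """`step` performs work steps b.-d. and returns the new loss value."""
    previous = initial_loss
    loss = initial_loss
    for repetition in range(1, MAX_REPETITIONS + 1):
        loss = step()
        if loss < LOSS_TARGET:                      # loss below predefined value
            return repetition, loss
        if abs(previous - loss) < MIN_IMPROVEMENT:  # change below threshold
            return repetition, loss
        previous = loss
    return MAX_REPETITIONS, loss                    # repetition limit reached

# Usage: simulate a training step whose loss plateaus, triggering
# the change-below-threshold abort condition on the fourth repetition.
losses = iter([0.5, 0.25, 0.12, 0.12])
repetitions, final_loss = train_until_abort(lambda: next(losses), 1.0)
```

Any one of the three conditions suffices to halt training, matching the "and/or" phrasing of the claim.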
To claim 3, Fuchs, Kim and Dwivedi teach claim 1.
Fuchs, Kim and Dwivedi teach further comprising the following work steps: f. renewed inputting of the partially annotated image with a second annotated area, whereby regions of objects in the second area of the partially annotated image are assigned to an object class and regions without objects are assigned to the background class; g. renewed labeling of the image by the adapted machine learning model, whereby regions of objects predicted by the adapted machine learning model are assigned to the object class; h. renewed calculating of a value of the loss function of the adapted machine learning model by matching annotations to labels in the first annotated area and in the second annotated area; and i. renewed adapting of the adapted machine learning model so as to minimize the loss function (Fuchs, paragraph 0125, it would be obvious to repeat the procedure for a second or subsequent region of interest; Dwivedi, paragraph 0060, identification of a plurality of regions of interest within the input image).
To claim 4, Fuchs, Kim and Dwivedi teach claim 3.
Fuchs, Kim and Dwivedi teach further comprising the following work step: j. checking whether a predetermined abort condition has been met; wherein work steps g. to i. are repeated until the predetermined abort condition has been met, in particular until a predefined number of repetitions has been reached and/or until the loss function value falls below a predefined value and/or until a change of the loss function value falls below a predefined threshold and/or an accuracy of the machine learning model falls below a predetermined quality in non-annotated areas of the image or areas only annotated for test purposes (as explained in response to claim 1 above; Fuchs, Kim and Dwivedi, respectively, teach various stop conditions for repetitive neural network training).
To claim 5, Fuchs, Kim and Dwivedi teach claim 1.
Fuchs, Kim and Dwivedi teach wherein the value of the loss function depends on the geometric arrangement of the regions of objects predicted by the machine learning model with respect to the first annotated area and/or with respect to the annotated regions, in particular regions of objects, in the first annotated area and/or with respect to the second annotated area and/or with respect to the annotated regions, in particular regions of objects, in the second annotated area (Dwivedi, paragraphs 0031, 0036, 0054, the loss function depends on geometric relations).
To claim 6, Fuchs, Kim and Dwivedi teach claim 1.
Fuchs, Kim and Dwivedi teach wherein regions of objects predicted by the machine learning model which are assignable to a region of an object in the first annotated area and/or the second annotated area are always included in the calculation of the loss function value (Fuchs, paragraphs 0165-0166, loss metric computed over all segmented images and annotations).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZHIYU LU whose telephone number is (571) 272-2837. The examiner can normally be reached on weekdays, 8:30 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen R Koziol can be reached at (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
ZHIYU LU
Primary Examiner
Art Unit 2669
/ZHIYU LU/Primary Examiner, Art Unit 2665 January 10, 2026