Prosecution Insights
Last updated: April 19, 2026
Application No. 18/536,174

HIERARCHICAL WORKFLOW FOR GENERATING ANNOTATED TRAINING DATA FOR MACHINE LEARNING ENABLED IMAGE SEGMENTATION

Non-Final OA §103
Filed: Dec 11, 2023
Examiner: TSAI, TSUNG YIN
Art Unit: 2656
Tech Center: 2600 — Communications
Assignee: Genentech Inc.
OA Round: 1 (Non-Final)

Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 82% (804 granted / 984 resolved; +19.7% vs TC avg; above average)
Interview Lift: +10.9% (moderate lift; resolved cases with interview)
Typical Timeline: 2y 11m avg prosecution; 31 currently pending
Career History: 1015 total applications across all art units
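The headline figures above are simple arithmetic on the examiner's career record. A quick sanity check (reading the interview lift as additive percentage points is an assumption about how the page computes its "with interview" number):

```python
# Sanity check on the examiner statistics shown above.
granted, resolved = 804, 984

allow_rate_pct = granted / resolved * 100   # career allow rate
print(round(allow_rate_pct))                # 82, the headline rate

# Reading the +10.9% interview lift as additive percentage points (assumption):
print(round(allow_rate_pct + 10.9))         # 93, the "with interview" figure
```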

Statute-Specific Performance

§101: 3.6% (-36.4% vs TC avg)
§103: 58.5% (+18.5% vs TC avg)
§102: 22.8% (-17.2% vs TC avg)
§112: 4.3% (-35.7% vs TC avg)
Deltas are measured against a Tech Center average estimate • Based on career data from 984 resolved cases
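The per-statute deltas above are consistent with a single Tech Center baseline. Backing it out (assuming the deltas are additive percentage points) shows every statute compared against the same ~40% average:

```python
# Back out the implied Tech Center average from each statute's rate and delta.
examiner_rate = {"101": 3.6, "103": 58.5, "102": 22.8, "112": 4.3}
vs_tc_avg     = {"101": -36.4, "103": 18.5, "102": -17.2, "112": -35.7}

tc_avg = {s: round(examiner_rate[s] - vs_tc_avg[s], 1) for s in examiner_rate}
print(tc_avg)  # each statute works out to the same 40.0 baseline
```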

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Status of claims: claims 1-20 are examined; claims 21-62 are canceled.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 1/7/2025 was filed and considered. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Debuc et al (US 2012/0150029) in view of Abou Shousha et al (US 10,468,142).
Claim 1: Debuc et al (US 2012/0150029) teaches the following subject matter: A method, comprising: receiving an image of a sample having a feature (the abstract teaches OCT imaging to provide prognostic and diagnostic details regarding diseased tissue, assessing the optical-property and structural-morphology differences between normal healthy subjects and patients with ocular diseases and disorders; 0030 details features such as retinal features and pathologies for stages of optical muscular dystrophy); prompting presentation of the labeled image to an image correction interface (0080 details correction through a user-friendly graphical interface (GUI) assessing features such as corrected thickness of various cellular layers of the retina and the whole macula); receiving, from the image correction interface, label correction data related to the annotation generated (0080-0081 detail a correction tool with a user-friendly graphical interface in the OCT system (OCTRIMA) for image enhancement and error correction using direct visual evaluation, where 0094-0098 detail a user correction interface that allows the user to accept or reject generated results, load and save the segmentation data file, and delete an analysis option from a menu); and updating the labeled image using the label correction data to generate an annotated image comprising an update to the labeled image (0094-0098, specifically 0098, teach that user input/correction provides an additional visual tool to evaluate the quality of the corrections during the manual correction mode; this is viewed as updating).
Debuc et al teaches the subject matter above using OCT imaging and images, and the following is taught by Abou Shousha et al (US 10,468,142): generating, using a neural network, an annotation representing the feature (column 10 line 58 to column 11 line 21 teach use of an AI-based system with OCT imaging and images to analyze features such as thickness, curvature, and topography in the evaluation of the human cornea for diagnosis; column 20 lines 20-35 detail using AI (a neural network) for annotation; figure 19A and column 42 lines 12-54 detail using AI model 12 to label and annotate a disease or condition for category prediction to generate an analysis/health report); generating, using the neural network, a labeled image comprising the annotation (figure 19A and column 42 lines 12-54 detail using AI model 12 to label and annotate a disease or condition); by the first neural network (the above teaches use of an AI model (neural network)). Debuc et al and Abou Shousha et al are both in the field of OCT image analysis, especially the use of OCT imaging for diagnosis of age-related macular degeneration (Abou Shousha et al, column 4 lines 19-32), such that the combined outcome is predictable. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date to modify Debuc et al per Abou Shousha et al, using an AI model coupled with the OCT imaging system for diagnosis, to provide the best course of action for treatment as disclosed by Abou Shousha et al in figure 9 and column 31 lines 36-58, especially lines 57-58. Claim 2: Abou Shousha et al teaches: The method of claim 1, wherein the update includes a confidence value for the annotation representing the feature (figure 1B and column 22 line 62 to column 23 line 7 teach the analysis system transforming output into confidence scores/values in a user-friendly format).
Claim 3: Debuc et al teach: The method of claim 2, further comprising: validating the annotation in the annotated image based on a comparison of the confidence value to a pre-set confidence threshold (0044 teaches comparing a confidence level to a preset cut-off threshold). Claim 4: Debuc et al teach: The method of claim 1, wherein the feature includes a biomarker that is indicative of age-related macular degeneration (AMD) (figure 6A and 0016 detail the function for patients with neovascular AMD, where 0254 and 0259 detail AMD to be age-related macular degeneration). Claim 5: Debuc et al teach: The method of claim 1, wherein the label correction data includes indications of affirmation, rejection, or modification of the label received at the image correction interface (0080-0081 detail a correction tool with a user-friendly graphical interface in the OCT system (OCTRIMA) for image enhancement and error correction using direct visual evaluation, where 0094-0098 detail a user correction interface that allows the user to accept or reject generated results, load and save the segmentation data file, and delete an analysis option from a menu). Claim 6: Debuc et al teach: The method of claim 5, wherein the indications are input into the image correction interface by one or more trained users (the above teaches a user interface; 0126 details analysis with different operators (users) as well). Claim 7: Debuc et al teach: The method of claim 1, wherein the sample is a tissue sample or a blood sample (the abstract teaches imaging, processing, and evaluation of tissues). Claim 8: Abou Shousha et al teaches: The method of claim 1, further comprising training the neural network with the annotated image (column 10 line 58 to column 11 line 21 teach use of an AI-based system with OCT imaging and images to analyze features such as thickness, curvature, and topography in the evaluation of the human cornea for diagnosis; column 20 lines 20-35 detail using AI (a neural network) for annotation).
Claim 9: Debuc et al (US 2012/0150029) teaches the following subject matter: A system, comprising: a non-transitory memory (0064-0065); and a hardware processor coupled with the non-transitory memory and configured to read instructions from the non-transitory memory to cause the system to perform operations comprising (0064-0065): receiving an image of a sample having a feature (the abstract teaches OCT imaging to provide prognostic and diagnostic details regarding diseased tissue, assessing the optical-property and structural-morphology differences between normal healthy subjects and patients with ocular diseases and disorders; 0030 details features such as retinal features and pathologies for stages of optical muscular dystrophy); prompting presentation of the labeled image to an image correction interface (0080 details correction through a user-friendly graphical interface (GUI) assessing features such as corrected thickness of various cellular layers of the retina and the whole macula); receiving, from the image correction interface, label correction data related to the annotation generated (0080-0081 detail a correction tool with a user-friendly graphical interface in the OCT system (OCTRIMA) for image enhancement and error correction using direct visual evaluation, where 0094-0098 detail a user correction interface that allows the user to accept or reject generated results, load and save the segmentation data file, and delete an analysis option from a menu); and updating the labeled image using the label correction data to generate an annotated image comprising an update to the labeled image (0094-0098, specifically 0098, teach that user input/correction provides an additional visual tool to evaluate the quality of the corrections during the manual correction mode; this is viewed as updating).
Debuc et al teaches the subject matter above using OCT imaging and images, and the following is taught by Abou Shousha et al (US 10,468,142): generating, using a neural network, an annotation representing the feature (column 10 line 58 to column 11 line 21 teach use of an AI-based system with OCT imaging and images to analyze features such as thickness, curvature, and topography in the evaluation of the human cornea for diagnosis; column 20 lines 20-35 detail using AI (a neural network) for annotation; figure 19A and column 42 lines 12-54 detail using AI model 12 to label and annotate a disease or condition for category prediction to generate an analysis/health report); generating, using the neural network, a labeled image comprising the annotation (figure 19A and column 42 lines 12-54 detail using AI model 12 to label and annotate a disease or condition); by the first neural network (the above teaches use of an AI model (neural network)). Debuc et al and Abou Shousha et al are both in the field of OCT image analysis, especially the use of OCT imaging for diagnosis of age-related macular degeneration (Abou Shousha et al, column 4 lines 19-32), such that the combined outcome is predictable. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date to modify Debuc et al per Abou Shousha et al, using an AI model coupled with the OCT imaging system for diagnosis, to provide the best course of action for treatment as disclosed by Abou Shousha et al in figure 9 and column 31 lines 36-58, especially lines 57-58. Claim 10: Abou Shousha et al teaches: The system of claim 9, wherein the update includes a confidence value for the annotation representing the feature (figure 1B and column 22 line 62 to column 23 line 7 teach the analysis system transforming output into confidence scores/values in a user-friendly format).
Claim 11: Debuc et al teach: The system of claim 9, wherein the feature includes a biomarker that is indicative of age-related macular degeneration (AMD) (figure 6A and 0016 detail the function for patients with neovascular AMD, where 0254 and 0259 detail AMD to be age-related macular degeneration). Claim 12: Debuc et al teach: The system of claim 9, wherein the label correction data includes indications of affirmation, rejection, or modification of the label received at the image correction interface (0081 details a correction tool with a user-friendly graphical interface in the OCT system (OCTRIMA) for image enhancement and error correction using direct visual evaluation, where 0094-0098 detail a user correction interface that allows the user to accept or reject generated results, load and save the segmentation data file, and delete an analysis option from a menu). Claim 13: Debuc et al teach: The system of claim 12, wherein the indications are input into the image correction interface by one or more trained users (the above teaches a user interface; 0126 details analysis with different operators (users) as well). Claim 14: Abou Shousha et al teaches: The system of claim 9, further comprising training the neural network with the annotated image (column 10 line 58 to column 11 line 21 teach use of an AI-based system with OCT imaging and images to analyze features such as thickness, curvature, and topography in the evaluation of the human cornea for diagnosis; column 20 lines 20-35 detail using AI (a neural network) for annotation).
Claim 15: Debuc et al (US 2012/0150029) teaches the following subject matter: A non-transitory computer-readable medium (CRM) having stored thereon computer-readable instructions executable to cause a computer system to perform operations comprising (0064-0065): receiving an image of a sample having a feature (the abstract teaches OCT imaging to provide prognostic and diagnostic details regarding diseased tissue, assessing the optical-property and structural-morphology differences between normal healthy subjects and patients with ocular diseases and disorders; 0030 details features such as retinal features and pathologies for stages of optical muscular dystrophy); prompting presentation of the labeled image to an image correction interface (0080 details correction through a user-friendly graphical interface (GUI) assessing features such as corrected thickness of various cellular layers of the retina and the whole macula); receiving, from the image correction interface, label correction data related to the annotation generated (0080-0081 detail a correction tool with a user-friendly graphical interface in the OCT system (OCTRIMA) for image enhancement and error correction using direct visual evaluation, where 0094-0098 detail a user correction interface that allows the user to accept or reject generated results, load and save the segmentation data file, and delete an analysis option from a menu); and updating the labeled image using the label correction data to generate an annotated image comprising an update to the labeled image (0094-0098, specifically 0098, teach that user input/correction provides an additional visual tool to evaluate the quality of the corrections during the manual correction mode; this is viewed as updating).
Debuc et al teaches the subject matter above using OCT imaging and images, and the following is taught by Abou Shousha et al (US 10,468,142): generating, using a neural network, an annotation representing the feature (column 10 line 58 to column 11 line 21 teach use of an AI-based system with OCT imaging and images to analyze features such as thickness, curvature, and topography in the evaluation of the human cornea for diagnosis; column 20 lines 20-35 detail using AI (a neural network) for annotation; figure 19A and column 42 lines 12-54 detail using AI model 12 to label and annotate a disease or condition for category prediction to generate an analysis/health report); generating, using the neural network, a labeled image comprising the annotation (figure 19A and column 42 lines 12-54 detail using AI model 12 to label and annotate a disease or condition); by the first neural network (the above teaches use of an AI model (neural network)). Debuc et al and Abou Shousha et al are both in the field of OCT image analysis, especially the use of OCT imaging for diagnosis of age-related macular degeneration (Abou Shousha et al, column 4 lines 19-32), such that the combined outcome is predictable. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date to modify Debuc et al per Abou Shousha et al, using an AI model coupled with the OCT imaging system for diagnosis, to provide the best course of action for treatment as disclosed by Abou Shousha et al in figure 9 and column 31 lines 36-58, especially lines 57-58. Claim 16: Abou Shousha et al teaches: The non-transitory CRM of claim 15, wherein the update includes a confidence value for the annotation representing the feature (figure 1B and column 22 line 62 to column 23 line 7 teach the analysis system transforming output into confidence scores/values in a user-friendly format).
Claim 17: Debuc et al teach: The non-transitory CRM of any one of claims 15, comprising: validating the annotation in the annotated image based on a comparison of the confidence value to a pre-set confidence threshold (0044 teaches comparing a confidence level to a preset cut-off threshold). Claim 18: Debuc et al teach: The non-transitory CRM of any one of claims 15, wherein the feature includes a biomarker that is indicative of age-related macular degeneration (AMD) (figure 6A and 0016 detail the function for patients with neovascular AMD, where 0254 and 0259 detail AMD to be age-related macular degeneration). Claim 19: Debuc et al teach: The non-transitory CRM of any one of claims 15, wherein the label correction data includes indications of affirmation, rejection, or modification of the label received at the image correction interface (0081 details a correction tool with a user-friendly graphical interface in the OCT system (OCTRIMA) for image enhancement and error correction using direct visual evaluation, where 0094-0098 detail a user correction interface that allows the user to accept or reject generated results, load and save the segmentation data file, and delete an analysis option from a menu). Claim 20: Debuc et al teach: The non-transitory CRM of any one of claims 15, wherein the sample is a tissue sample or a blood sample (the abstract teaches imaging, processing, and evaluation of tissues).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Iwase et al (US 2021/0158525) teaches a Medical Image Processing Apparatus, Medical Image Processing Method, Computer-readable Medium, and Learned Model: column 109 teaches that the display controlling unit 25 and the outputting unit 2807 in the embodiments and modifications described above may cause various kinds of diagnosis results, such as results relating to glaucoma or age-related macular degeneration; column 28 lines 40-55 teach that a label is set for each of these regions and is learned by the machine learning model in advance, where for training data a front image of the optic nerve head is input data and a label image to which, for example, the label for the periphery of the optic nerve head, the Disc label, and the Cup label are given is ground truth, with a predetermined threshold comparison in column 26 lines 35-50.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TSUNG-YIN TSAI, whose telephone number is (571) 270-1671. The examiner can normally be reached 7am-4pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bhavesh Mehta, can be reached at (571) 272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/TSUNG YIN TSAI/
Primary Examiner, Art Unit 2656
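For orientation, the claim-1 workflow the rejection maps above (a neural network proposes an annotation, a human affirms, rejects, or modifies it in a correction interface, and the corrections update the annotated image) is a standard human-in-the-loop labeling loop. A minimal sketch of that loop follows; all names, labels, and confidence values are hypothetical and illustrate the claim language only:

```python
from dataclasses import dataclass, field

@dataclass
class LabeledImage:
    image_id: str
    annotations: dict = field(default_factory=dict)  # label -> confidence

def annotate(image_id: str) -> LabeledImage:
    """Stand-in for the claim-1 neural network; a real system runs inference here."""
    return LabeledImage(image_id, {"drusen": 0.71})

def apply_corrections(labeled: LabeledImage, corrections: dict) -> LabeledImage:
    """Apply affirm/reject/modify indications from the image correction interface
    (the claim-5 correction data), producing the updated annotated image."""
    for label, action in corrections.items():
        if action == "reject":
            labeled.annotations.pop(label, None)
        elif action == "affirm":
            labeled.annotations[label] = 1.0  # claim 2: update carries a confidence value
        else:  # a "modify" indication carries a replacement confidence
            labeled.annotations[label] = float(action)
    return labeled

# One pass through the loop: generate, review, update.
annotated = apply_corrections(annotate("oct_scan_001"), {"drusen": "affirm"})
print(annotated.annotations)  # {'drusen': 1.0}
```

The corrected `LabeledImage` could then feed back into training, as in claims 8 and 14.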

Prosecution Timeline

Dec 11, 2023
Application Filed
Nov 06, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597118: IMAGE INSPECTION APPARATUS, IMAGE INSPECTION METHOD, AND IMAGE INSPECTION PROGRAM (2y 5m to grant; granted Apr 07, 2026)
Patent 12597237: INFERENCE LEARNING DEVICE AND INFERENCE LEARNING METHOD (2y 5m to grant; granted Apr 07, 2026)
Patent 12579797: VIDEO PROCESSING METHOD AND APPARATUS (2y 5m to grant; granted Mar 17, 2026)
Patent 12573029: IMAGE ANNOTATION USING ONE OR MORE NEURAL NETWORKS (2y 5m to grant; granted Mar 10, 2026)
Patent 12567235: Visual Explanation of Classification (2y 5m to grant; granted Mar 03, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview: 93% (+10.9%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 984 resolved cases by this examiner. Grant probability derived from career allow rate.
