Prosecution Insights
Last updated: April 19, 2026
Application No. 18/513,805

Image-Based Severity Detection Method and System

Status: Non-Final OA (§103)
Filed: Nov 20, 2023
Examiner: ROBERTS, RACHEL L
Art Unit: 2674
Tech Center: 2600 — Communications
Assignee: Georgia Tech Research Corporation
OA Round: 1 (Non-Final)

Grant Probability: 90% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 10m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 90%, above average (17 granted / 19 resolved; +27.5% vs TC avg)
Interview Lift: +14.3%, a moderate lift (allowance among resolved cases with vs. without an interview)
Typical Timeline: 2y 10m average prosecution
Career History: 54 total applications across all art units (35 currently pending)
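
The headline figures above are simple ratios over the examiner's resolved cases. As a rough illustration only (this is not the dashboard's documented methodology, and the record fields and sample data below are invented), the following Python sketch shows the kind of calculation involved:

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool        # True if the application issued as a patent
    had_interview: bool  # True if an examiner interview was held

def allow_rate(cases: list[ResolvedCase]) -> float:
    """Share of resolved cases that were granted."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases: list[ResolvedCase]) -> float:
    """Allow rate with an interview minus allow rate without one."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

# Hypothetical docket: 19 resolved cases, 17 granted (counts match the card above;
# the interview split is invented, so the lift printed here is illustrative only).
cases = [ResolvedCase(granted=True, had_interview=i < 8) for i in range(17)]
cases += [ResolvedCase(granted=False, had_interview=False) for _ in range(2)]
print(f"Career allow rate: {allow_rate(cases):.0%}")
print(f"Interview lift:    {interview_lift(cases):+.1%}")
```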

Statute-Specific Performance

§101: 12.1% (-27.9% vs TC avg)
§103: 65.1% (+25.1% vs TC avg)
§102: 7.9% (-32.1% vs TC avg)
§112: 12.1% (-27.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 19 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority
Acknowledgment is made that this application claims priority to parent provisional application No. 63/426,489, filed on 11/18/2022.

Information Disclosure Statement
The IDS dated 07/09/2024 has been considered and placed in the application file.

Claim Interpretation
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification. Under MPEP 2143.03, "All words in a claim must be considered in judging the patentability of that claim against the prior art." In re Wilson, 424 F.2d 1382, 1385, 165 USPQ 494, 496 (CCPA 1970). As a general matter, the grammar and ordinary meaning of terms as understood by one having ordinary skill in the art used in a claim will dictate whether, and to what extent, the language limits the claim scope. Language that suggests or makes a feature or step optional but does not require that feature or step does not limit the scope of a claim under the broadest reasonable claim interpretation. In addition, when a claim requires selection of an element from a list of alternatives, the prior art teaches the element if one of the alternatives is taught by the prior art. See, e.g., Fresenius USA, Inc. v. Baxter Int'l, Inc., 582 F.3d 1288, 1298, 92 USPQ2d 1163, 1171 (Fed. Cir. 2009).

Claim 1 recites "or" in the limitation "wherein at least one of the first severity score label and the second severity score label is used (i) for diagnosis or (ii) as labels for the second data set as a training data set for a second ML model or the baseline ML model." Since "or" is disjunctive, any one of the elements found in the prior art is sufficient to reject the claim. While citations have been provided for completeness and rapid prosecution, only one element is required. Because, on balance, the disjunctive interpretation appears to enjoy the most specification support, the disjunctive interpretation (one of A, B, or C) is being adopted for the purposes of this Office Action. Applicant's comments and/or amendments relating to this issue are invited to clarify the claim language and the prosecution history.

Claims 3 and 12 recite "or" in the limitation "training the second ML model or the baseline ML model via the selected portion of the second data set." Since "or" is disjunctive, any one of the elements found in the prior art is sufficient to reject the claim. While citations have been provided for completeness and rapid prosecution, only one element is required. Because, on balance, the disjunctive interpretation appears to enjoy the most specification support, the disjunctive interpretation (one of A, B, or C) is being adopted for the purposes of this Office Action. Applicant's comments and/or amendments relating to this issue are invited to clarify the claim language and the prosecution history.

Claims 10 and 16 recite "or" in the limitation "the determined presence or severity value." Since "or" is disjunctive, any one of the elements found in the prior art is sufficient to reject the claim. While citations have been provided for completeness and rapid prosecution, only one element is required. Because, on balance, the disjunctive interpretation appears to enjoy the most specification support, the disjunctive interpretation (one of A, B, or C) is being adopted for the purposes of this Office Action. Applicant's comments and/or amendments relating to this issue are invited to clarify the claim language and the prosecution history.

Claim 9 recites "at least one of" in the limitation "Intraretinal Fluid (IRF), Diabetic Macular Edema (DME), and Intra-Retinal Hyper-Reflective Foci (IRHRF)." Since "at least one of" is disjunctive, any one of the elements found in the prior art is sufficient to reject the claim. While citations have been provided for completeness and rapid prosecution, only one element is required. Because, on balance, the disjunctive interpretation appears to enjoy the most specification support, the disjunctive interpretation (one of A, B, or C) is being adopted for the purposes of this Office Action. Applicant's comments and/or amendments relating to this issue are invited to clarify the claim language and the prosecution history.

Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-7, 9-14, and 16-19 are rejected under 35 U.S.C. 103 as unpatentable over Lad et al. (US Patent Publication US 2022/0351373 A1, hereafter referred to as Lad) in view of Mulyukov et al. (WO Patent Publication WO 2021/220138 A1, hereafter referred to as Mulyukov).
Regarding Claim 1, Lad teaches a method of training a machine learning model (Lad Fig 1 discloses a method for training a machine learning model), the method comprising: in a contrastive learning operation (Lad Fig 9, ¶0101 discloses a contrastive learning manner), training a baseline ML model via a first data set (Lad ¶0155, ¶0179, ¶0183 discloses using a CNN trained on 3D OCT inputs as a baseline), the first data set consisting of data for a non-anomalous, normal, or healthy set (Lad ¶0116 discloses using autofluorescence images of healthy patients with GA to train a multilayer CNN); in the contrastive learning operation (Lad Fig 9, ¶0101 discloses a contrastive learning manner), generating a gradient (Lad Fig 18 discloses the output of the image with the gradient label) severity score vector (Lad ¶0114 discloses classifying the severity according to a grading criteria) from the baseline ML model (Lad ¶0155, ¶0179, ¶0183 discloses using a CNN trained on 3D OCT inputs as a baseline) for a second data set (Lad ¶0102 discloses using a second OCT dataset), the second data set comprising data for an anomalous or unhealthy set (Lad ¶0102 discloses a second OCT dataset that consisted of images that were classified as having CNV, DME, and drusen), wherein the second data set is unlabeled with respect to severity (Lad ¶0102 discloses the secondary dataset being labeled for presence of disease, not severity); and in the contrastive learning operation (Lad Fig 9, ¶0101 discloses a contrastive learning manner), tiering the severity score vector (Lad ¶0114 discloses classifying the MD severity according to a grading criteria) into a plurality of severity classes (Lad ¶0042, ¶0069 disclose determining the level of severity of the degeneration using percentage labels), for diagnosis (Lad ¶0044, ¶0107, Table 2 disclose using the severity to determine the diagnosis) or as labels for the second data set as a training data set (Lad ¶0102 discloses using a second OCT dataset that has been labeled with classifications of the presence of disease) or the baseline ML model (Lad ¶0155, ¶0179, ¶0183 discloses using a CNN trained on 3D OCT inputs as a baseline).

Lad does not explicitly teach including a first severity class associated with a first severity score label and a second severity class associated with a second severity score label, wherein at least one of the first severity score label and the second severity score label is used for a second ML model. Mulyukov is in the same field of medical eye disease detection image processing. Further, Mulyukov teaches including a first severity class associated with a first severity score label (Mulyukov Fig 10, ¶00189-¶00191 discloses the first severity score being less than a threshold and being labeled low) and a second severity class associated with a second severity score label (Mulyukov Fig 10, ¶00189-¶00191 discloses the first severity score being above a threshold and being labeled high), wherein at least one of the first severity score label and the second severity score label is used (Mulyukov Fig 10, ¶00189-¶00191 discloses the first severity score being above a threshold and being labeled high) for a second ML model (Mulyukov ¶00044, ¶00045, ¶00047 discloses applying a second algorithm to the variable identified).

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Lad by incorporating the scoring of the severity of disease to be used as an input for the second machine learning model, as taught by Mulyukov, to make an invention that can more accurately identify and classify the disease present in the patient image. One of ordinary skill in the art would be motivated to combine the references since an object of the present invention is to address the need for a method that reliably and accurately assesses disease activity of w-AMD and/or of other retinopathies and that provides patient-specific anti-VEGF treatment regimen models, such as customized dosing frequency models (Mulyukov, ¶00017). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

Regarding Claim 2, Lad in view of Mulyukov teaches the method of claim 1, wherein the step of tiering the severity score vector (Lad ¶0114 discloses classifying the MD severity according to a grading criteria) into a plurality of severity classes (Lad ¶0042, ¶0069 disclose determining the level of severity of the degeneration using percentage labels) comprises: ordering the severity score vector by rank (Mulyukov Fig 10 and Fig 11, ¶00023 disclose the severity of the disease activity being ranked as high, medium, or low) to generate a ranked list of vector elements of the severity score vector (Mulyukov Fig 4 and ¶00134 disclose analyzing feature values to rank how they affect the disease activity in patients); and arranging the ranked list of vector elements of the severity score vector (Mulyukov Fig 4 and ¶00134 disclose analyzing feature values to rank how they affect the disease activity in patients) into a plurality of bins (Mulyukov Fig 9, ¶00182 discloses sorting the scoring of the patient disease activity into high and low bins), wherein a first bin corresponds to the first severity class (Mulyukov Fig 10, ¶00189-¶00191 discloses the first severity score being less than a threshold and being labeled low), and wherein the second bin corresponds to the second severity class (Mulyukov Fig 10, ¶00189-¶00191 discloses the first severity score being above a threshold and being labeled high). See Claim 1, its parent claim, for the rationale.

Regarding Claim 3, Lad in view of Mulyukov teaches the method of claim 1, further comprising: selecting a portion (Mulyukov ¶00075 discloses splitting the dataset and selecting one of the branches) of the second data set (Lad ¶0102 discloses using a second OCT dataset) based on the gradient labels (Lad Fig 18 discloses the output of the image with the gradient label); and training the second ML model (Mulyukov ¶00044, ¶00045, ¶00047 discloses applying a second algorithm to the variable identified) or the baseline ML model (Lad ¶0155, ¶0179, ¶0183 discloses using a CNN trained on 3D OCT inputs as a baseline) via the selected portion (Mulyukov ¶00075 discloses splitting the dataset and selecting one of the branches) of the second data set (Lad ¶0102 discloses using a second OCT dataset). See Claim 1, its parent claim, for the rationale.
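Editorial note: to make the rank-and-bin "tiering" language mapped in claims 1-3 above easier to follow, here is a minimal, hypothetical Python sketch of ordering a severity score vector by rank and arranging the ranked elements into bins that serve as severity classes. It is an illustration only; the function name, bin count, and scores are invented and are not taken from the application, Lad, or Mulyukov.

```python
import numpy as np

def tier_severity_scores(scores: np.ndarray, n_bins: int = 2) -> np.ndarray:
    """Order a severity score vector by rank and arrange the ranks into bins.

    Illustrative reading of the claimed tiering step: elements are ranked by
    score, then contiguous rank bins become severity classes
    (bin 0 = lowest-severity class, last bin = highest-severity class).
    """
    order = np.argsort(scores)  # indices of elements, lowest score first
    classes = np.empty_like(order)
    for tier, chunk in enumerate(np.array_split(order, n_bins)):
        classes[chunk] = tier   # every element in this rank bin gets this class
    return classes

# Example: unlabeled "anomalous" samples scored by a hypothetical baseline model.
severity_scores = np.array([0.91, 0.12, 0.55, 0.40, 0.78])
print(tier_severity_scores(severity_scores, n_bins=2))  # -> [1 0 0 0 1]
```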
Regarding Claim 4, Lad in view of Mulyukov teaches the method of claim 1, wherein the second data set (Lad ¶0102 discloses using a second OCT dataset) comprises candidate biomarker data (Lad ¶0087, ¶0092, ¶0117 disclose biomarkers being used to determine AMD progression) for an anomalous or unhealthy set (Lad ¶0102 discloses a second OCT dataset that consisted of images that were classified as having CNV, DME, and drusen), and wherein the method further comprises: training the second ML model (Mulyukov ¶00044, ¶00045, ¶00047 discloses applying a second algorithm to the variable identified) or the baseline ML model (Lad ¶0155, ¶0179, ¶0183 discloses using a CNN trained on 3D OCT inputs as a baseline) via the second data set (Lad ¶0102 discloses using a second OCT dataset), wherein the gradient labels (Lad Fig 18 discloses the output of the image with the gradient label) are used as ground truth for a set of biomarkers (Lad ¶0087, ¶0092, ¶0117 disclose biomarkers being used to determine AMD progression) identified in the second data set (Lad Figs 12, 13, and 19 disclose how the ground truth lesion masks are used to determine the success rate of the gradient labeling). See Claim 1, its parent claim, for the rationale.

Regarding Claim 5, Lad in view of Mulyukov teaches the method of claim 1, further comprising: outputting, via a report or display (Lad ¶0065 discloses outputting the result; ¶0056 discloses displays), a respective gradient label (Lad Fig 18 discloses the output of the image with the gradient label) and classifier output of the baseline ML model (Lad ¶0116, ¶0134 disclose the output of the model being the classifier outputting the presence of GA), wherein the respective gradient label (Lad Fig 18 discloses the output of the image with the gradient label) and classifier output (Lad ¶0116, ¶0134 disclose the output of the model being the classifier outputting the presence of GA) is used for diagnosis of a disease or a medical condition (Lad ¶0003 discloses the output being from the GA detection algorithm, which uses both the gradient and classifier algorithms to output the detection of GA and whether it is likely to occur). See Claim 1, its parent claim, for the rationale.

Regarding Claim 6, Lad in view of Mulyukov teaches the method of claim 1, wherein the first data set comprises image data from a medical scan (Lad ¶0155, ¶0179, ¶0183 discloses using a CNN trained on 3D OCT inputs as a baseline). See Claim 1, its parent claim, for the rationale.

Regarding Claim 7, Lad in view of Mulyukov teaches the method of claim 1, wherein the first data set (Lad ¶0155, ¶0179, ¶0183 discloses using a CNN trained on 3D OCT inputs as a baseline) comprises image data from a sensor (Mulyukov ¶00049 discloses sensors collecting patient data and the patient data consisting of retinal images). See Claim 1, its parent claim, for the rationale.

Regarding Claim 9, Lad in view of Mulyukov teaches the method of claim 4, wherein the candidate biomarker data (Lad ¶0087, ¶0092, ¶0117 disclose biomarkers being used to determine AMD progression) includes at least one of: Intraretinal Fluid (IRF) (Mulyukov ¶00118 discloses measuring a number of key biomarkers including IRF), Diabetic Macular Edema (DME) (Lad ¶0102 discloses the images labeled for diabetic macular edema), and Intra-Retinal Hyper-Reflective Foci (IRHRF) (Lad Figs 12, 14, and 15 show the small bright spots seen in the OCT images, which are IRHRF). See Claim 1, its parent claim, for the rationale.

Regarding Claim 10, Lad teaches a method comprising: receiving a data set (Lad ¶0003 discloses receiving a set of OCT volume scan images as an input); determining, via a trained machine learning model (Lad ¶0155, ¶0179, ¶0183 discloses using a CNN trained on 3D OCT inputs as a baseline), a presence or severity value associated with a disease or medical condition (Lad ¶0003 discloses a machine learning algorithm determining the presence of geographic atrophy) using the data set (Lad ¶0003 discloses receiving a set of OCT volume scan images as an input); and outputting, via a report or graphical user interface (Lad ¶0065 discloses outputting the result; ¶0056 discloses displays and interfaces), the determined presence or severity value (Lad ¶0003 discloses a machine learning algorithm determining the presence of geographic atrophy), wherein the trained machine learning model was trained in a contrastive learning operation (Lad Fig 9, ¶0101 discloses a contrastive learning manner), the contrastive learning operation comprising: training a baseline ML model via a first data set (Lad ¶0155, ¶0179, ¶0183 discloses using a CNN trained on 3D OCT inputs as a baseline), the first data set consisting of data for a non-anomalous, normal, or healthy set (Lad ¶0116 discloses using autofluorescence images of healthy patients with GA to train a multilayer CNN); generating a gradient (Lad Fig 18 discloses the output of the image with the gradient label) severity score vector (Lad ¶0114 discloses classifying the severity according to a grading criteria) from the baseline ML model (Lad ¶0155, ¶0179, ¶0183 discloses using a CNN trained on 3D OCT inputs as a baseline) for a second data set (Lad ¶0102 discloses using a second OCT dataset), the second data set comprising data for an anomalous or unhealthy set (Lad ¶0102 discloses a second OCT dataset that consisted of images that were classified as having CNV, DME, and drusen), wherein the second data set is unlabeled with respect to severity (Lad ¶0102 discloses the secondary dataset being labeled for presence of disease, not severity); and tiering the severity score vector (Lad ¶0114 discloses classifying the MD severity according to a grading criteria) into a plurality of severity classes (Lad ¶0042, ¶0069 disclose determining the level of severity of the degeneration using percentage labels).

Lad does not explicitly teach including a first severity class associated with a first severity score label and a second severity class associated with a second severity score label, and generating the trained machine learning model using the first severity score label and the second severity class. Mulyukov is in the same field of medical eye disease detection image processing.
Further, Mulyukov teaches including a first severity class associated with a first severity score label (Mulyukov Fig 10, ¶00189-¶00191 discloses the first severity score being less than a threshold and being labeled low) and a second severity class associated with a second severity score label (Mulyukov Fig 10, ¶00189-¶00191 discloses the first severity score being above a threshold and being labeled high); and generating the trained machine learning model (Mulyukov ¶00082 and Fig disclose generating a machine learning model based on the level of disease activity) using the first severity score label (Mulyukov Fig 10, ¶00189-¶00191 discloses the first severity score being less than a threshold and being labeled low) and the second severity class (Mulyukov Fig 10, ¶00189-¶00191 discloses the first severity score being above a threshold and being labeled high).

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Lad by incorporating the scoring of the severity of disease to be used as an input for the second machine learning model, as taught by Mulyukov, to make an invention that can more accurately identify and classify the disease present in the patient image. One of ordinary skill in the art would be motivated to combine the references since an object of the present invention is to address the need for a method that reliably and accurately assesses disease activity of w-AMD and/or of other retinopathies and that provides patient-specific anti-VEGF treatment regimen models, such as customized dosing frequency models (Mulyukov, ¶00017). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

Regarding Claim 11, Lad in view of Mulyukov teaches the method of claim 10, wherein the step of tiering the severity score vector (Lad ¶0114 discloses classifying the MD severity according to a grading criteria) into a plurality of severity classes (Lad ¶0042, ¶0069 disclose determining the level of severity of the degeneration using percentage labels) comprises: ordering the severity score vector by rank (Mulyukov Fig 10 and Fig 11, ¶00023 disclose the severity of the disease activity being ranked as high, medium, or low) to generate a ranked list of vector elements of the severity score vector (Mulyukov Fig 4 and ¶00134 disclose analyzing feature values to rank how they affect the disease activity in patients); and arranging the ranked list of vector elements of the severity score vector (Mulyukov Fig 4 and ¶00134 disclose analyzing feature values to rank how they affect the disease activity in patients) into a plurality of bins (Mulyukov Fig 9, ¶00182 discloses sorting the scoring of the patient disease activity into high and low bins), wherein a first bin corresponds to the first severity class (Mulyukov Fig 10, ¶00189-¶00191 discloses the first severity score being less than a threshold and being labeled low), and wherein the second bin corresponds to the second severity class (Mulyukov Fig 10, ¶00189-¶00191 discloses the first severity score being above a threshold and being labeled high). See Claim 10, its parent claim, for the rationale.

Regarding Claim 12, Lad in view of Mulyukov teaches the method of claim 10, wherein the second data set (Lad ¶0102 discloses using a second OCT dataset) comprises candidate biomarker data (Lad ¶0087, ¶0092, ¶0117 disclose biomarkers being used to determine AMD progression) for an anomalous or unhealthy set (Lad ¶0102 discloses a second OCT dataset that consisted of images that were classified as having CNV, DME, and drusen), and wherein the method further comprises: training the second ML model (Mulyukov ¶00044, ¶00045, ¶00047 discloses applying a second algorithm to the variable identified) or the baseline ML model (Lad ¶0155, ¶0179, ¶0183 discloses using a CNN trained on 3D OCT inputs as a baseline) via the second data set (Lad ¶0102 discloses using a second OCT dataset), wherein the gradient labels (Lad Fig 18 discloses the output of the image with the gradient label) are used as ground truth for a set of biomarkers (Lad ¶0087, ¶0092, ¶0117 disclose biomarkers being used to determine AMD progression) identified in the second data set (Lad Figs 12, 13, and 19 disclose how the ground truth lesion masks are used to determine the success rate of the gradient labeling). See Claim 10, its parent claim, for the rationale.

Regarding Claim 13, Lad in view of Mulyukov teaches the method of claim 10, wherein the first data set comprises image data from a medical scan (Lad ¶0155, ¶0179, ¶0183 discloses using a CNN trained on 3D OCT inputs as a baseline). See Claim 10, its parent claim, for the rationale.

Regarding Claim 14, Lad in view of Mulyukov teaches the method of claim 10, wherein the first data set (Lad ¶0155, ¶0179, ¶0183 discloses using a CNN trained on 3D OCT inputs as a baseline) comprises image data from a sensor (Mulyukov ¶00049 discloses sensors collecting patient data and the patient data consisting of retinal images). See Claim 10, its parent claim, for the rationale.
Regarding Claim 16, Lad teaches a system (Lad ¶0056-¶0058 disclose the working of a system) comprising: a processor (Lad ¶0056-¶0057 discloses a processor); and a memory (Lad ¶0057 discloses a memory) having instructions stored thereon, wherein execution of the instructions by the processor causes the processor (Lad ¶0056-¶0057 discloses a processor in communication with memory that performs instructions) to: receive a data set (Lad ¶0003 discloses receiving a set of OCT volume scan images as an input); determine, via a trained machine learning model (Lad ¶0155, ¶0179, ¶0183 discloses using a CNN trained on 3D OCT inputs as a baseline), a presence or severity value associated with a disease or medical condition (Lad ¶0003 discloses a machine learning algorithm determining the presence of geographic atrophy) using the data set (Lad ¶0003 discloses receiving a set of OCT volume scan images as an input); and output, via a report or graphical user interface (Lad ¶0065 discloses outputting the result; ¶0056 discloses displays and interfaces), the determined presence or severity value (Lad ¶0003 discloses a machine learning algorithm determining the presence of geographic atrophy), wherein the trained machine learning model was trained in a contrastive learning operation (Lad Fig 9, ¶0101 discloses a contrastive learning manner), the contrastive learning operation comprising: training a baseline ML model via a first data set (Lad ¶0155, ¶0179, ¶0183 discloses using a CNN trained on 3D OCT inputs as a baseline), the first data set consisting of data for a non-anomalous, normal, or healthy set (Lad ¶0116 discloses using autofluorescence images of healthy patients with GA to train a multilayer CNN); generating a gradient (Lad Fig 18 discloses the output of the image with the gradient label) severity score vector (Lad ¶0114 discloses classifying the severity according to a grading criteria) from the baseline ML model (Lad ¶0155, ¶0179, ¶0183 discloses using a CNN trained on 3D OCT inputs as a baseline) for a second data set (Lad ¶0102 discloses using a second OCT dataset), the second data set comprising data for an anomalous or unhealthy set (Lad ¶0102 discloses a second OCT dataset that consisted of images that were classified as having CNV, DME, and drusen), wherein the second data set is unlabeled with respect to severity (Lad ¶0102 discloses the secondary dataset being labeled for presence of disease, not severity); and tiering the severity score vector (Lad ¶0114 discloses classifying the MD severity according to a grading criteria) into a plurality of severity classes (Lad ¶0042, ¶0069 disclose determining the level of severity of the degeneration using percentage labels).

Lad does not explicitly teach including a first severity class associated with a first severity score label and a second severity class associated with a second severity score label, and generating the trained machine learning model using the first severity score label and the second severity class. Mulyukov is in the same field of medical eye disease detection image processing.

Further, Mulyukov teaches including a first severity class associated with a first severity score label (Mulyukov Fig 10, ¶00189-¶00191 discloses the first severity score being less than a threshold and being labeled low) and a second severity class associated with a second severity score label (Mulyukov Fig 10, ¶00189-¶00191 discloses the first severity score being above a threshold and being labeled high); and generating the trained machine learning model (Mulyukov ¶00082 and Fig disclose generating a machine learning model based on the level of disease activity) using the first severity score label (Mulyukov Fig 10, ¶00189-¶00191 discloses the first severity score being less than a threshold and being labeled low) and the second severity class (Mulyukov Fig 10, ¶00189-¶00191 discloses the first severity score being above a threshold and being labeled high).

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Lad by incorporating the scoring of the severity of disease to be used as an input for the second machine learning model, as taught by Mulyukov, to make an invention that can more accurately identify and classify the disease present in the patient image. One of ordinary skill in the art would be motivated to combine the references since an object of the present invention is to address the need for a method that reliably and accurately assesses disease activity of w-AMD and/or of other retinopathies and that provides patient-specific anti-VEGF treatment regimen models, such as customized dosing frequency models (Mulyukov, ¶00017). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

Regarding Claim 17, Lad in view of Mulyukov teaches the system of claim 16, wherein the instructions (Lad ¶0056-¶0057 discloses a processor in communication with memory that performs instructions) to tier the severity score vector into the plurality of severity classes comprise: instructions (Lad ¶0056-¶0057 discloses a processor in communication with memory that performs instructions) to order the severity score vector by rank (Mulyukov Fig 10 and Fig 11, ¶00023 disclose the severity of the disease activity being ranked as high, medium, or low) to generate a ranked list of vector elements of the severity score vector (Mulyukov Fig 4 and ¶00134 disclose analyzing feature values to rank how they affect the disease activity in patients); and instructions (Lad ¶0056-¶0057 discloses a processor in communication with memory that performs instructions) to arrange the ranked list of vector elements of the severity score vector (Mulyukov Fig 4 and ¶00134 disclose analyzing feature values to rank how they affect the disease activity in patients) into a plurality of bins (Mulyukov Fig 9, ¶00182 discloses sorting the scoring of the patient disease activity into high and low bins), wherein a first bin corresponds to the first severity class (Mulyukov Fig 10, ¶00189-¶00191 discloses the first severity score being less than a threshold and being labeled low), and wherein the second bin corresponds to the second severity class (Mulyukov Fig 10, ¶00189-¶00191 discloses the first severity score being above a threshold and being labeled high). See Claim 10 for the rationale.
Regarding Claim 18, Lad in view of Mulyukov teaches the system of claim 16, wherein the first training data set comprises image data from a medical scan (Lad ¶0155, ¶0179, ¶0183 discloses using a CNN trained on 3D OCT inputs as a baseline). See Claim 10 for the rationale.

Regarding Claim 19, Lad in view of Mulyukov teaches the system of claim 16, wherein the first training data (Lad ¶0155, ¶0179, ¶0183 discloses using a CNN trained on 3D OCT inputs as a baseline) comprises image data from a sensor (Mulyukov ¶00049 discloses sensors collecting patient data and the patient data consisting of retinal images). See Claim 10 for the rationale.

Claims 8, 15, and 20 are rejected under 35 U.S.C. 103 as unpatentable over Lad in view of Mulyukov, in further view of Paschalakis et al. (US Patent No. US 10719936 B2, hereafter referred to as Paschalakis).

Regarding Claim 8, Lad in view of Mulyukov teaches the method of claim 1, including the baseline ML model (Lad ¶0155, ¶0179, ¶0183 discloses using a CNN trained on 3D OCT inputs as a baseline). Lad in view of Mulyukov does not explicitly teach that the baseline ML model comprises an auto-encoder. Paschalakis is in the same field of medical eye image processing. Further, Paschalakis teaches a model that comprises an auto-encoder (Paschalakis Col 5, Lines 5-6 disclose the layers of the model being trained as autoencoders). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Lad in view of Mulyukov by incorporating an autoencoder as part of the machine learning model, as taught by Paschalakis, to make an invention that is more efficient in identifying and classifying the disease present in the patient image. One of ordinary skill in the art would be motivated to combine the references since an object of the present invention is to reduce the necessity of the time-consuming hand-crafting of features that would otherwise be required to pre-process the images with application-specific filters or by calculating computable features (Paschalakis, Col 2, Lines 55-60). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

Regarding Claim 15, Lad in view of Mulyukov teaches the method of claim 10, including the baseline ML model (Lad ¶0155, ¶0179, ¶0183 discloses using a CNN trained on 3D OCT inputs as a baseline). Lad in view of Mulyukov does not explicitly teach that the baseline ML model comprises an auto-encoder. Paschalakis is in the same field of medical eye image processing. Further, Paschalakis teaches a model that comprises an auto-encoder (Paschalakis Col 5, Lines 5-6 disclose the layers of the model being trained as autoencoders). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Lad in view of Mulyukov by incorporating an autoencoder as part of the machine learning model, as taught by Paschalakis, to make an invention that is more efficient in identifying and classifying the disease present in the patient image. One of ordinary skill in the art would be motivated to combine the references since an object of the present invention is to reduce the necessity of the time-consuming hand-crafting of features that would otherwise be required to pre-process the images with application-specific filters or by calculating computable features (Paschalakis, Col 2, Lines 55-60). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

Regarding Claim 20, Lad in view of Mulyukov teaches the system of claim 16, including the baseline ML model (Lad ¶0155, ¶0179, ¶0183 discloses using a CNN trained on 3D OCT inputs as a baseline). Lad in view of Mulyukov does not explicitly teach that the baseline ML model comprises an auto-encoder. Paschalakis is in the same field of medical eye image processing. Further, Paschalakis teaches a model that comprises an auto-encoder (Paschalakis Col 5, Lines 5-6 disclose the layers of the model being trained as autoencoders). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Lad in view of Mulyukov by incorporating an autoencoder as part of the machine learning model, as taught by Paschalakis, to make an invention that is more efficient in identifying and classifying the disease present in the patient image. One of ordinary skill in the art would be motivated to combine the references since an object of the present invention is to reduce the necessity of the time-consuming hand-crafting of features that would otherwise be required to pre-process the images with application-specific filters or by calculating computable features (Paschalakis, Col 2, Lines 55-60). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

References Cited
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US Patent Pub US-20220328189-A1 to Zhou et al. discloses a system for implementing annotation-efficient deep learning in computer aided diagnosis in medical images.

Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RACHEL LYNN ROBERTS, whose telephone number is (571) 272-6413. The examiner can normally be reached Monday-Friday, 7:30am-5:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, ONEAL R MISTRY, can be reached at (313) 446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RACHEL L ROBERTS/
Examiner, Art Unit 2674

/ONEAL R MISTRY/
Supervisory Patent Examiner, Art Unit 2674
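Editorial note: claims 8, 15, and 20 add that the baseline ML model comprises an auto-encoder. The sketch below shows one conventional way an auto-encoder trained only on "healthy" samples can serve as a baseline whose reconstruction error acts as a severity-like score for unlabeled data. It is an assumption-laden illustration (invented dimensions, random data, and a toy training loop), not the application's implementation and not code from Lad, Mulyukov, or Paschalakis.

```python
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    """Toy auto-encoder baseline: reconstruction error acts as a severity-like score."""
    def __init__(self, n_features: int = 64, n_latent: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, n_latent), nn.ReLU())
        self.decoder = nn.Linear(n_latent, n_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

def severity_scores(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Per-sample reconstruction error; higher error = further from the 'healthy' baseline."""
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)

# Train on "healthy" samples only (random stand-ins here), then score unlabeled samples.
healthy = torch.rand(256, 64)
model = TinyAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(healthy), healthy)
    loss.backward()
    opt.step()

unlabeled = torch.rand(16, 64)
print(severity_scores(model, unlabeled))  # one score per unlabeled sample
```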

Prosecution Timeline

Nov 20, 2023: Application Filed
Nov 10, 2025: Non-Final Rejection under §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12581132: LARGE-SCALE POINT CLOUD-ORIENTED TWO-DIMENSIONAL REGULARIZED PLANAR PROJECTION AND ENCODING AND DECODING METHOD. Granted Mar 17, 2026 (2y 5m to grant).
Patent 12569208: PET APPARATUS, IMAGE PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM. Granted Mar 10, 2026 (2y 5m to grant).
Patent 12564324: IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING SYSTEM FOR ABNORMALITY DETECTION. Granted Mar 03, 2026 (2y 5m to grant).
Patent 12561773: METHOD AND APPARATUS FOR PROCESSING IMAGE, ELECTRONIC DEVICE, CHIP AND MEDIUM. Granted Feb 24, 2026 (2y 5m to grant).
Patent 12525028: CONTACT OBJECT DETECTION APPARATUS AND NON-TRANSITORY RECORDING MEDIUM. Granted Jan 13, 2026 (2y 5m to grant).
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 90% (99% with an interview, a +14.3% lift)
Median Time to Grant: 2y 10m
PTA Risk: Low

Based on 19 resolved cases by this examiner. Grant probability derived from career allow rate.
