DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
Claims 15-18 are withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to a nonelected sub-combination, there being no allowable generic or linking claim. Election was made without traverse in the reply filed on 11/21/2025.
Information Disclosure Statement
The information disclosure statements (IDSs) submitted on 12/30/2024 and 08/26/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-14 and 19-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to the abstract idea of predicting a treatment level based on images, without significantly more.
The claim recites: “A method for managing a treatment for a subject diagnosed with neovascular age-related macular degeneration (nAMD), the method comprising:
receiving spectral domain optical coherence tomography (SD-OCT) imaging data of a retina of the subject;
extracting retinal feature data for a plurality of retinal features using the SD-OCT imaging data, the plurality of retinal features being associated with at least one of a set of retinal fluids or a set of retinal layers;
sending input data formed using the retinal feature data for the plurality of retinal features into a first machine learning model; and
predicting, via the first machine learning model, a treatment level for an anti-vascular endothelial growth factor (anti-VEGF) treatment to be administered to the subject based on the input data.”
The limitations, as drafted, are processes that, under their broadest reasonable interpretation, cover performance of the limitations in the human mind. A person can mentally extract feature data from eye images, where the feature data relates to fluids or layers. A person can further mentally predict a treatment level based on the extracted retinal feature data. The obtaining of SD-OCT images amounts to insignificant extra-solution activity (data collection).
This judicial exception is not integrated into a practical application. In particular, the claim recites the additional element of a machine learning model, which is recited at a level of generality such that it amounts to no more than a generic machine learning model. Accordingly, the additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration into a practical application, the additional element is recited at a high level of generality and does not impose meaningful limits on practicing the abstract idea. The claim is therefore not patent eligible.
Claims 2-5 and 13 are rejected under 35 U.S.C. 101 because the claimed invention merely specifies the retinal fluid and retinal layer feature data without adding any element that could not be performed in the human mind. For example, a person can observe fluid volume and layer thickness, where the fluid is selected from a specific group of fluids and the layers are selected from a specific group of layers. Regarding claim 13, a person can use features associated with SRF and PED in determining the treatment level. The claims are not patent eligible.
Claim 6 is rejected under 35 U.S.C. 101 because the claimed invention is directed to using clinical data as a predictive feature, which can be done in the human mind. For example, a person may consider a clinical feature along with the fluid and layer features in predicting a treatment regimen. The claim is not patent eligible.
Claims 7-9 are rejected under 35 U.S.C. 101 because the claimed invention specifies the treatment level as low (five or fewer injections) or high (sixteen or more injections), a classification that could be mentally determined by a person as the predicted treatment level. The claims are not patent eligible.
Claims 10-11 are rejected under 35 U.S.C. 101 because the claimed invention describes a second machine learning model (deep learning model as per claim 11), for extracting the retinal features. As described in the rejection of claim 1, a person can extract the retinal features. The second machine learning model, further specified as deep learning, is recited at a level of generality such that it describes a generic learning model. The claims are not patent eligible.
Claim 12 is rejected under 35 U.S.C. 101 because the claimed invention is directed to specifying the machine learning model of claim 1 as an XGBoost algorithm, which is recited at a level of generality such that it amounts to a generic XGBoost model. The claim is not patent eligible.
Claim 14 is rejected under 35 U.S.C. 101 because the claimed invention is directed to specifying that the SD-OCT images are captured in a single visit. This merely further describes the insignificant extra-solution data-collection activity identified in the rejection of claim 1. The claim is not patent eligible.
Claims 19-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a system analogous to the abstract idea of the method of claims 1 and 10. These claims further recite a memory with a machine readable medium comprising code, and a processor. These additional elements are recited generically such that they amount to a generic memory, code, and processor. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea and are not patent eligible.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-11, 13-14, and 19-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Bogunovic (Prediction of Anti-VEGF Treatment Requirements in Neovascular AMD Using a Machine Learning Approach).
Regarding claim 1, Bogunovic teaches “A method for managing a treatment for a subject diagnosed with neovascular age-related macular degeneration (nAMD),” (Bogunovic, Introduction Paragraph 3, “The aim of this pilot study is to predict, on an individual patient level, low and high anti-VEGF injection requirements during a PRN treatment regimen of patients with neovascular AMD. Our hypothesis suggests that these requirement categories can be predicted by observing retinal morphology and treatment response as early as during the standardized initiation phase of the treatment course. Using automated computational analysis of OCT, a set of spatiotemporal features was extracted from imaging series, characterizing the retina and its anatomic response to the initial anti-VEGF treatment. Machine learning methods were then applied to build a predictive model of the future therapeutic requirements during the PRN regimen. The model was trained and validated on 2-year data from a large-scale prospective randomized controlled trial in treatment-naive AMD patients.”)
“the method comprising: receiving spectral domain optical coherence tomography (SD-OCT) imaging data of a retina of the subject;” (Bogunovic, “OCT Image Processing and Analysis” Paragraph 1, “The proposed methodology is based on a fully automated image processing and analysis pipeline available at the Vienna Reading Center (VRC), Vienna, Austria. No manual corrections have been performed in this study. All images were acquired with Cirrus HD-OCT III (Carl Zeiss Meditec, Inc., Dublin, CA, USA) presenting 512 X 128 X 1024 voxels, with a size of 11.7 X 47.2 X 2.0 µm³, covering a volume of 6 X 6 X 2 mm³.” Note that the Cirrus HD-OCT III is an SD-OCT imager. Additionally, Figures 2 and 3 show the obtained imaging data of a retina of the subject.)
“extracting retinal feature data for a plurality of retinal features using the SD-OCT imaging data, the plurality of retinal features being associated with at least one of a set of retinal fluids or a set of retinal layers;” (Bogunovic, “Retinal Layer Segmentation” Section, “Intraretinal and Subretinal Fluid Segmentation” Section, and Figures 2-3, “Automated retinal layer segmentation is performed with a graph-theoretic method, part of the Iowa Reference Algorithms.10,11 The method transforms the problem into a multiscale 3D graph search to optimally and efficiently segment a set of surfaces according to image-based cost function and satisfying a priori hard constraints on surface smoothness and intersurface distances. As the a priori constraints are valid for healthy retinas, only a subset of layer interfaces is well segmented in neovascular AMD population. Thus, the following four principle layer thickness maps were extracted, which were empirically found to be robustly segmented: inner retina (IR), outer nuclear layer (ONL), photoreceptor outer segments with retinal pigment epithelium (OR), and total retinal thickness (TRT). An example of segmented surfaces denoting those layers is shown in Figure 2.”; “Segmentation of intraretinal cystoid fluid (IRF) and subretinal fluid (SRF) was performed per B-scan using a validated segmentation algorithm based on deep learning.12 First, based on the top and the bottom retinal layer, a mask is computed denoting the retina extending from the inner limiting membrane (ILM) to the RPE. Then, every voxel within the mask is classified with a multiscale convolutional neural network (CNN) as belonging to one of the three classes: Normal retina, IRF, or SRF (Fig. 3). The CNN had been trained in a supervised manner using a training set of 157 OCT volumes with ≈ 20,000 manually annotated B-scans, acquired with the same OCT device model (Cirrus; Zeiss) and having the same pathology (neovascular AMD), which were disjoint from the set of images in the HARBOR trial.” Note that while the claim only requires either retinal fluid or retinal layer extraction, the above reference discloses both.)
“sending input data formed using the retinal feature data for the plurality of retinal features into a first machine learning model; and predicting, via the first machine learning model, a treatment level for an anti-vascular endothelial growth factor (anti-VEGF) treatment to be administered to the subject based on the input data.” (Bogunovic, Figure 4(a-b) and “Predictive Model of Treatment Requirements” Section Paragraphs 1 and 3, “For each eye, from its longitudinal series of three OCT volumes (baseline, month 1, and month 2) and the derived segmentations, we extracted a set of quantitative features characterizing the underlying retinal pathomorphology. For the imaging features to correspond across subjects, before the feature extraction, all scans of left eyes were mirrored to conform to scans of a right eye. From the image segmentations 2D maps were computed corresponding to the thickness maps of the four layers, as well as volume and en face area maps of both IRF and SRF, resulting in eight 2D maps in total, with examples shown in Figure 4a. Analyzing data in high-dimensional OCT volumes is affected by the so-called ‘‘Curse of Dimensionality,’’ where learning is very difficult and prone to overfitting. To limit the dimensionality of the feature vector and facilitate the machine learning, we summarized the A-scan properties spatially across the regions defined by the Early Treatment Diabetic Retinopathy Study (ETDRS) grid as depicted in Figure 4b. The ETDRS grid was placed at the center of the scan, and the mean feature values per ETDRS subregions were computed. In addition to the nine ETDRS grid cells, we additionally included the central 3 mm, central 6 mm, and the rings corresponding to the parafoveal and perifoveal bands, resulting in 13 spatial regions in total. Such ETDRS-related features have the additional advantage of being easier to interpret than A-scan related ones, due to widespread use of ETDRS grid in ophthalmology. To this set of imaging features, we added the measured BCVA. To measure the rate of change of the longitudinal features, the differences between the corresponding features of the consecutive time points (month 1 - month 0 and month 2 - month 1) were further included. This resulted in the number of local spatio-temporal features being 525, computed as follows: (8 feature maps X 13 spatial regions + 1 BCVA) X 5 temporal elements. Last, demographic features were added: sex, race, age, and smoking status together with the fluorescein angiogram pattern type, for a total of 530 features.”; “Finally, a machine learning approach based on the random forest classifier13 was used to obtain a predictive model of the low and high treatment requirements from the set of the above features. Random forest was grown with 1000 trees for which the out of bag mean squared error was observed to have converged. The number of features to randomly sample as candidates at each split of a tree was chosen to be the square root of the number of features (√530), which is the default setting for a classification task13.”)
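For illustration of the mapped limitation, the following is a minimal Python sketch of a Bogunovic-style prediction step. Only the dimensions and classifier settings (8 feature maps, 13 ETDRS-derived regions, 1 BCVA value, 5 temporal elements, 5 demographic features, 1000 trees, √530 candidate features per split) are taken from the quoted passage; the arrays, labels, and cohort size are hypothetical placeholders, not the reference's actual data or code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

n_maps, n_regions = 8, 13                   # 4 layer-thickness maps + IRF/SRF volume and en face area maps; 13 ETDRS-derived regions
per_time_point = n_maps * n_regions + 1     # + 1 BCVA measurement -> 105 features per temporal element
n_temporal = 5                              # baseline, month 1, month 2, plus the two month-to-month differences
n_other = 5                                 # sex, race, age, smoking status, fluorescein angiogram pattern type
n_features = per_time_point * n_temporal + n_other   # 525 + 5 = 530 features

rng = np.random.default_rng(0)
X = rng.random((400, n_features))           # hypothetical cohort of 400 eyes
y = rng.integers(0, 2, size=400)            # hypothetical "low requirement" vs. rest labels

# Random forest as described in the quoted passage: 1000 trees, sqrt(n_features) candidates per split
clf = RandomForestClassifier(n_estimators=1000, max_features="sqrt", oob_score=True, random_state=0)
clf.fit(X, y)
print(n_features, round(clf.oob_score_, 3))
```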
Regarding claim 2, Bogunovic teaches “The method of claim 1,”
“wherein the retinal feature data includes a value associated with a corresponding retinal fluid of the set of retinal fluids, the value selected from a group consisting of a volume, a height, and a width of the corresponding retinal fluid.” (Bogunovic, Figure 4(a) and “Predictive Model of Treatment Requirements” Section Paragraph 1, “For each eye, from its longitudinal series of three OCT volumes (baseline, month 1, and month 2) and the derived segmentations, we extracted a set of quantitative features characterizing the underlying retinal pathomorphology. For the imaging features to correspond across subjects, before the feature extraction, all scans of left eyes were mirrored to conform to scans of a right eye. From the image segmentations 2D maps were computed corresponding to the thickness maps of the four layers, as well as volume and en face area maps of both IRF and SRF, resulting in eight 2D maps in total, with examples shown in Figure 4a. Analyzing data in high-dimensional OCT volumes is affected by the so-called ‘‘Curse of Dimensionality,’’ where learning is very difficult and prone to overfitting. To limit the dimensionality of the feature vector and facilitate the machine learning, we summarized the A-scan properties spatially across the regions defined by the Early Treatment Diabetic Retinopathy Study (ETDRS) grid as depicted in Figure 4b. The ETDRS grid was placed at the center of the scan, and the mean feature values per ETDRS subregions were computed. In addition to the nine ETDRS grid cells, we additionally included the central 3 mm, central 6 mm, and the rings corresponding to the parafoveal and perifoveal bands, resulting in 13 spatial regions in total. Such ETDRS-related features have the additional advantage of being easier to interpret than A-scan related ones, due to widespread use of ETDRS grid in ophthalmology. To this set of imaging features, we added the measured BCVA. To measure the rate of change of the longitudinal features, the differences between the corresponding features of the consecutive time points (month 1 - month 0 and month 2 - month 1) were further included. This resulted in the number of local spatio-temporal features being 525, computed as follows: (8 feature maps X 13 spatial regions + 1 BCVA) X 5 temporal elements. Last, demographic features were added: sex, race, age, and smoking status together with the fluorescein angiogram pattern type, for a total of 530 features.”)
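A minimal sketch of the regional averaging step described in the quoted passage, assuming a hypothetical en face fluid map and hypothetical boolean region masks; the actual ETDRS grid geometry is not reproduced here.

```python
import numpy as np

def regional_means(feature_map, region_masks):
    """Mean of a 2D en face feature map (e.g., an SRF volume map) over each spatial region."""
    return np.array([feature_map[mask].mean() for mask in region_masks])

# Hypothetical 128 x 512 en face map and 13 boolean masks standing in for the nine ETDRS
# cells, the central 3 mm, the central 6 mm, and the parafoveal and perifoveal rings.
rng = np.random.default_rng(1)
srf_volume_map = rng.random((128, 512))
region_masks = [rng.random((128, 512)) > 0.5 for _ in range(13)]
print(regional_means(srf_volume_map, region_masks))   # 13 per-region mean values
```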
Regarding claim 3, Bogunovic teaches “The method of claim 1,”
“wherein the retinal feature data includes a value for a corresponding retinal layer of the set of retinal layers, the value selected from a group consisting of a minimum thickness, a maximum thickness, and an average thickness of the corresponding retinal layer.” (Bogunovic, Figure 2 and “Retinal Layer Segmentation” Section, “Automated retinal layer segmentation is performed with a graph-theoretic method, part of the Iowa Reference Algorithms.10,11 The method transforms the problem into a multiscale 3D graph search to optimally and efficiently segment a set of surfaces according to image-based cost function and satisfying a priori hard constraints on surface smoothness and intersurface distances. As the a priori constraints are valid for healthy retinas, only a subset of layer interfaces is well segmented in neovascular AMD population. Thus, the following four principle layer thickness maps were extracted, which were empirically found to be robustly segmented: inner retina (IR), outer nuclear layer (ONL), photoreceptor outer segments with retinal pigment epithelium (OR), and total retinal thickness (TRT). An example of segmented surfaces denoting those layers is shown in Figure 2.”)
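For illustration only, a hedged sketch of deriving per-layer thickness values from two segmented surfaces. The surface arrays are hypothetical; Bogunovic reports thickness maps summarized as regional means, so the minimum and maximum statistics below are included solely to track the claim language of claim 3.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical segmented surfaces (depth index per A-scan position) bounding one retinal layer
top_surface = rng.integers(100, 200, size=(128, 512))
bottom_surface = top_surface + rng.integers(20, 80, size=(128, 512))

thickness_map = (bottom_surface - top_surface).astype(float)   # per-A-scan layer thickness (in pixels)
print(thickness_map.min(), thickness_map.max(), thickness_map.mean())
```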
Regarding claim 4, Bogunovic teaches “The method of claim 1,”
“wherein a retinal fluid of the set of retinal fluids is selected from a group consisting of an intraretinal fluid (IRF), a subretinal fluid (SRF), a fluid associated with pigment epithelial detachment (PED), or a subretinal hyperreflective material (SHRM).” (Bogunovic, “Intraretinal and Subretinal Fluid Segmentation” Section, and Figure 3, “Segmentation of intraretinal cystoid fluid (IRF) and subretinal fluid (SRF) was performed per B-scan using a validated segmentation algorithm based on deep learning.12 First, based on the top and the bottom retinal layer, a mask is computed denoting the retina extending from the inner limiting membrane (ILM) to the RPE. Then, every voxel within the mask is classified with a multiscale convolutional neural network (CNN) as belonging to one of the three classes: Normal retina, IRF, or SRF (Fig. 3). The CNN had been trained in a supervised manner using a training set of 157 OCT volumes with ≈ 20,000 manually annotated B-scans, acquired with the same OCT device model (Cirrus; Zeiss) and having the same pathology (neovascular AMD), which were disjoint from the set of images in the HARBOR trial.”)
Regarding claim 5, Bogunovic teaches “The method of claim 1,”
“wherein a retinal layer of the set of retinal layers is selected from a group consisting of an internal limiting membrane (ILM) layer, an outer plexiform layer-Henle fiber layer (OPL-HAL), an inner boundary-retinal pigment epithelial detachment (IB-RPE), an outer boundary-retinal pigment epithelial detachment (OB-RPE), or a Bruch's membrane (BM).” (Bogunovic, “Intraretinal and Subretinal Fluid Segmentation” Section, “Segmentation of intraretinal cystoid fluid (IRF) and subretinal fluid (SRF) was performed per B-scan using a validated segmentation algorithm based on deep learning.12 First, based on the top and the bottom retinal layer, a mask is computed denoting the retina extending from the inner limiting membrane (ILM) to the RPE. Then, every voxel within the mask is classified with a multiscale convolutional neural network (CNN) as belonging to one of the three classes: Normal retina, IRF, or SRF (Fig. 3). The CNN had been trained in a supervised manner using a training set of 157 OCT volumes with ≈ 20,000 manually annotated B-scans, acquired with the same OCT device model (Cirrus; Zeiss) and having the same pathology (neovascular AMD), which were disjoint from the set of images in the HARBOR trial.”)
Regarding claim 6, Bogunovic teaches “The method of claim 1,”
“further comprising: forming the input data using the retinal feature data for the plurality of retinal features and clinical data for a set of clinical features, the set of clinical features including at least one of a best corrected visual acuity, a pulse, a diastolic blood pressure, or a systolic blood pressure.” (Bogunovic, Figure 4(a-b) and “Predictive Model of Treatment Requirements” Section Paragraph 1, “For each eye, from its longitudinal series of three OCT volumes (baseline, month 1, and month 2) and the derived segmentations, we extracted a set of quantitative features characterizing the underlying retinal pathomorphology. For the imaging features to correspond across subjects, before the feature extraction, all scans of left eyes were mirrored to conform to scans of a right eye. From the image segmentations 2D maps were computed corresponding to the thickness maps of the four layers, as well as volume and en face area maps of both IRF and SRF, resulting in eight 2D maps in total, with examples shown in Figure 4a. Analyzing data in high-dimensional OCT volumes is affected by the so-called ‘‘Curse of Dimensionality,’’ where learning is very difficult and prone to overfitting. To limit the dimensionality of the feature vector and facilitate the machine learning, we summarized the A-scan properties spatially across the regions defined by the Early Treatment Diabetic Retinopathy Study (ETDRS) grid as depicted in Figure 4b. The ETDRS grid was placed at the center of the scan, and the mean feature values per ETDRS subregions were computed. In addition to the nine ETDRS grid cells, we additionally included the central 3 mm, central 6 mm, and the rings corresponding to the parafoveal and perifoveal bands, resulting in 13 spatial regions in total. Such ETDRS-related features have the additional advantage of being easier to interpret than Ascan related ones, due to widespread use of ETDRS grid in ophthalmology. To this set of imaging features, we added the measured BCVA. To measure the rate of change of the longitudinal features, the differences between the corresponding features of the consecutive time points (month 1 - month 0 and month 2 - month 1) were further included. This resulted in the number of local spatio-temporal features being 525, computed as follows: (8 feature maps X 13 spatial regions + 1 BCVA) X 5 temporal elements. Last, demographic features were added: sex, race, age, and smoking status together with the fluorescein angiogram pattern type, for a total of 530 features.”)
Regarding claim 7, Bogunovic teaches “The method of claim 1,”
“wherein predicting the treatment level comprises predicting a classification for the treatment level as either a high or a low treatment level.” (Bogunovic, “Predictive Model of Treatment Requirements” Section Paragraph 3, “Finally, a machine learning approach based on the random forest classifier13 was used to obtain a predictive model of the low and high treatment requirements from the set of the above features. Random forest was grown with 1000 trees for which the out of bag mean squared error was observed to have converged. The number of features to randomly sample as candidates at each split of a tree was chosen to be the square root of the number of features (√530), which is the default setting for a classification task13.”)
Regarding claim 8, Bogunovic teaches “The method of claim 7,”
“wherein the high treatment level indicates sixteen or more injections of the anti-VEGF treatment during a selected time period after an initial phase of treatment.” (Bogunovic, “Predictive Model of Treatment Requirements” Section Paragraph 2, “The maximum number of injections during the 2-year PRN regimen is 21 (months 3 to 23). We defined the category of “low” requirements to consist of patients in lower quartile of the number of injections, which corresponded to receiving no more than five injections. Analogously, the category of “high” requirements was defined to consist of patients in upper quartile, which corresponded to receiving ≥16 injections. The remaining eyes in the interquartile range were assigned to the “medium” requirements category. We aim to discriminate the patients in the low requirement group from the medium and high requirement groups, and analogously, the ones in the high requirement group from the medium and low requirement groups. Thus, we pose the problem as a multiclass one-versus-all classification.”)
Regarding claim 9, Bogunovic teaches “The method of claim 7,”
“wherein the low treatment level indicates five or fewer injections of the anti-VEGF treatment during a selected time period after an initial phase of treatment.” (Bogunovic, “Predictive Model of Treatment Requirements” Section Paragraph 2, “The maximum number of injections during the 2-year PRN regimen is 21 (months 3 to 23). We defined the category of “low” requirements to consist of patients in lower quartile of the number of injections, which corresponded to receiving no more than five injections. Analogously, the category of “high” requirements was defined to consist of patients in upper quartile, which corresponded to receiving ≥16 injections. The remaining eyes in the interquartile range were assigned to the “medium” requirements category. We aim to discriminate the patients in the low requirement group from the medium and high requirement groups, and analogously, the ones in the high requirement group from the medium and low requirement groups. Thus, we pose the problem as a multiclass one-versus-all classification.”)
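As a brief illustration of the label derivation described in the passage quoted for claims 8 and 9, the sketch below applies the ≤5 / ≥16 injection cut points (the lower and upper quartiles reported by Bogunovic) and forms the one-versus-all targets; the injection counts are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
injections = rng.integers(0, 22, size=400)   # hypothetical PRN injection counts over months 3-23 (max 21)

category = np.where(injections <= 5, "low",
                    np.where(injections >= 16, "high", "medium"))

# One-versus-all targets: "low" vs. rest and "high" vs. rest
y_low = (category == "low").astype(int)
y_high = (category == "high").astype(int)
print(np.bincount(y_low), np.bincount(y_high))
```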
Regarding claim 10, Bogunovic teaches “The method of claim 1,”
“wherein the extracting comprises: extracting the retinal feature data for the plurality of retinal features from segmented images generated using a second machine learning model that automatically segments the SD-OCT imaging data, wherein the plurality of retinal features is associated with at least one of a set of retinal fluid segments or a set of retinal layer segments identified in the segmented images.” (Bogunovic, “Intraretinal and Subretinal Fluid Segmentation” Section, “Segmentation of intraretinal cystoid fluid (IRF) and subretinal fluid (SRF) was performed per B-scan using a validated segmentation algorithm based on deep learning.12 First, based on the top and the bottom retinal layer, a mask is computed denoting the retina extending from the inner limiting membrane (ILM) to the RPE. Then, every voxel within the mask is classified with a multiscale convolutional neural network (CNN) as belonging to one of the three classes: Normal retina, IRF, or SRF (Fig. 3). The CNN had been trained in a supervised manner using a training set of 157 OCT volumes with ≈ 20,000 manually annotated B-scans, acquired with the same OCT device model (Cirrus; Zeiss) and having the same pathology (neovascular AMD), which were disjoint from the set of images in the HARBOR trial.”)
Regarding claim 11, Bogunovic teaches “The method of claim 10,”
“wherein the second machine learning model comprises a deep learning model.” (Bogunovic, “Intraretinal and Subretinal Fluid Segmentation” Section, “Segmentation of intraretinal cystoid fluid (IRF) and subretinal fluid (SRF) was performed per B-scan using a validated segmentation algorithm based on deep learning.12 First, based on the top and the bottom retinal layer, a mask is computed denoting the retina extending from the inner limiting membrane (ILM) to the RPE. Then, every voxel within the mask is classified with a multiscale convolutional neural network (CNN) as belonging to one of the three classes: Normal retina, IRF, or SRF (Fig. 3). The CNN had been trained in a supervised manner using a training set of 157 OCT volumes with ≈ 20,000 manually annotated B-scans, acquired with the same OCT device model (Cirrus; Zeiss) and having the same pathology (neovascular AMD), which were disjoint from the set of images in the HARBOR trial.”)
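The segmentation step quoted for claims 10 and 11 classifies every voxel inside an ILM-to-RPE mask as normal retina, IRF, or SRF. The toy PyTorch sketch below illustrates only that per-pixel, three-class structure; it is not the multiscale CNN of the reference, and the B-scan tensor and retina mask are hypothetical.

```python
import torch
import torch.nn as nn

class ToyFluidSegmenter(nn.Module):
    """Per-pixel 3-class classifier (0 = normal retina, 1 = IRF, 2 = SRF); illustrative only."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, kernel_size=1),            # one logit map per class
        )

    def forward(self, b_scan):
        return self.net(b_scan)

b_scan = torch.randn(1, 1, 256, 512)                     # hypothetical single B-scan (batch, channel, depth, width)
retina_mask = torch.ones(1, 256, 512, dtype=torch.bool)  # hypothetical ILM-to-RPE mask
labels = ToyFluidSegmenter()(b_scan).argmax(dim=1)       # per-pixel class index
labels[~retina_mask] = 0                                 # pixels outside the retina mask treated as normal
```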
Regarding claim 13, Bogunovic teaches “The method of claim 1,”
“wherein the plurality of retinal features includes at least one feature associated with subretinal fluid (SRF) and at least one feature associated with pigment epithelial detachment (PED).” (Bogunovic, “Intraretinal and Subretinal Fluid Segmentation” Section, “Segmentation of intraretinal cystoid fluid (IRF) and subretinal fluid (SRF) was performed per B-scan using a validated segmentation algorithm based on deep learning.12 First, based on the top and the bottom retinal layer, a mask is computed denoting the retina extending from the inner limiting membrane (ILM) to the RPE. Then, every voxel within the mask is classified with a multiscale convolutional neural network (CNN) as belonging to one of the three classes: Normal retina, IRF, or SRF (Fig. 3). The CNN had been trained in a supervised manner using a training set of 157 OCT volumes with ≈ 20,000 manually annotated B-scans, acquired with the same OCT device model (Cirrus; Zeiss) and having the same pathology (neovascular AMD), which were disjoint from the set of images in the HARBOR trial.” Note that the RPE layer is associated with potential PED.)
Regarding claim 14, Bogunovic teaches “The method of claim 1,”
“wherein the SD-OCT imaging data comprises an SD-OCT image captured during a single clinical visit.” (Bogunovic, “Predictive Model of Treatment Requirements” Section Paragraph 1, “For each eye, from its longitudinal series of three OCT volumes (baseline, month 1, and month 2) and the derived segmentations, we extracted a set of quantitative features characterizing the underlying retinal pathomorphology. For the imaging features to correspond across subjects, before the feature extraction, all scans of left eyes were mirrored to conform to scans of a right eye. From the image segmentations 2D maps were computed corresponding to the thickness maps of the four layers, as well as volume and en face area maps of both IRF and SRF, resulting in eight 2D maps in total, with examples shown in Figure 4a. Analyzing data in high-dimensional OCT volumes is affected by the so-called ‘‘Curse of Dimensionality,’’ where learning is very difficult and prone to overfitting. To limit the dimensionality of the feature vector and facilitate the machine learning, we summarized the A-scan properties spatially across the regions defined by the Early Treatment Diabetic Retinopathy Study (ETDRS) grid as depicted in Figure 4b. The ETDRS grid was placed at the center of the scan, and the mean feature values per ETDRS subregions were computed. In addition to the nine ETDRS grid cells, we additionally included the central 3 mm, central 6 mm, and the rings corresponding to the parafoveal and perifoveal bands, resulting in 13 spatial regions in total. Such ETDRS-related features have the additional advantage of being easier to interpret than Ascan related ones, due to widespread use of ETDRS grid in ophthalmology. To this set of imaging features, we added the measured BCVA. To measure the rate of change of the longitudinal features, the differences between the corresponding features of the consecutive time points (month 1 - month 0 and month 2 - month 1) were further included. This resulted in the number of local spatio-temporal features being 525, computed as follows: (8 feature maps X 13 spatial regions + 1 BCVA) X 5 temporal elements. Last, demographic features were added: sex, race, age, and smoking status together with the fluorescein angiogram pattern type, for a total of 530 features.” Note that Bogunovic teaches collection of data from three single visits. The claim as written does not require that all of the image data be acquired from only one single visit.)
Regarding claims 19 and 20, these claims recite a system with a memory, machine executable code, and a processor, with elements corresponding to the steps recited in claims 1 and 10. Therefore, the recited elements of these claims are mapped to the analogous steps in the corresponding method claims. Additionally, Bogunovic describes a deep learning approach using algorithms for image processing and diagnostic prediction, which necessarily requires the claimed system features. Further, these features are generic and well known in the art and are therefore not considered novel.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Bogunovic in view of Kumari (Automated Diabetic Retinopathy Screening With Montage Fundus Images).
Regarding claim 12, Bogunovic teaches “The method of claim 1,”
While Bogunovic discloses the first machine learning model comprising a random forest classifier (see claim 1 rejection), Bogunovic does not expressly disclose that the model comprises an Extreme Gradient Boosting (XGBoost) algorithm.
Kumari discloses a predictive model comprising an Extreme Gradient Boosting (XGBoost) algorithm (Kumari, Section 2A, Paragraphs 5-6 and Figure 4, “Classification model was built using available popular ML classification models such as K Nearest Neighbors, Naive Bayes, XGBoost, Random Forest Classifier and Support Vector Machine Classifier [4]. As shown in Fig. 4, XGBoost and Random Forest Classifier were the best models. After doing a cross validation for the selected models, XGBoost got selected as the best model. It showed a higher mean and lower standard deviation than the Random Forest Classifier.”)
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to replace the random forest classifier of Bogunovic with an XGBoost algorithm, as taught by Kumari.
The motivation for doing so would have been to improve classification performance, as Kumari reports that XGBoost outperformed the random forest classifier in cross-validation. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Bogunovic in view of Kumari to fully disclose, “wherein the first machine learning model comprises an Extreme Gradient Boosting (XGBoost) algorithm.”
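For context on the proposed substitution, the following is a hedged sketch of the model-selection step Kumari describes: cross-validating XGBoost against a random forest and keeping the model with the higher mean and lower standard deviation. The data are synthetic placeholders, and the xgboost package (with its scikit-learn wrapper) is assumed to be installed.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier   # assumes the xgboost package is available

rng = np.random.default_rng(4)
X = rng.random((400, 530))                       # synthetic stand-in for the 530-feature vectors
y = rng.integers(0, 2, size=400)

candidates = {
    "random_forest": RandomForestClassifier(n_estimators=1000, max_features="sqrt", random_state=0),
    "xgboost": XGBClassifier(eval_metric="logloss", random_state=0),
}
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(name, scores.mean(), scores.std())     # keep the higher mean / lower std model, as Kumari describes
```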
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Romo-Bucheli (End-to-End Deep Learning Model for Predicting Treatment Requirements in Neovascular AMD From Longitudinal Retinal OCT Imaging) teaches a deep learning model based on OCT images for the prediction of anti-VEGF treatment requirements in nAMD patients. Zhang (US 20190110753 A1) teaches deep learning algorithms for performing medical diagnosis of ophthalmic disease based on OCT imaging. Ehlers (US 20200077883 A1) teaches segmentation and feature extraction of OCT images of the eye for determination and display of patient clinical parameters. Debuc (US 20110275931 A1) teaches analysis of OCT images for quantitative feature extraction, wherein prognostic and diagnostic details are obtained.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AARON JOSEPH SORRIN whose telephone number is (703)756-1565. The examiner can normally be reached Monday - Friday 9am - 5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz, can be reached at (571) 272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AARON JOSEPH SORRIN/Examiner, Art Unit 2672
/SUMATI LEFKOWITZ/Supervisory Patent Examiner, Art Unit 2672