DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
The present application claims foreign priority to Korean application KR10-2022-0130653. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 01/19/2024 is in compliance with the provisions of 37 CFR 1.97 and has been considered by the examiner.
Claim Objections
Claim 5 is objected to because of the following informalities: in claim 5 lines 9-10, “the extraction unit” should read “the extraction”. Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 1 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (NPL "Machine learning predicting myopic regression after corneal refractive surgery using preoperative data and fundus photography") in view of Lee et al. (NPL “Age-specific influences of refractive error and illuminance on pupil diameter”), in further view of Lim et al. (NPL “Factors Affecting Long-term Myopic Regression after Laser In Situ Keratomileusis and Laser-assisted Subepithelial Keratectomy for Moderate Myopia”), and in further view of Zhou et al. (NPL “Survival analysis of myopic regression after small incision lenticule extraction and femtosecond laser-assisted laser in situ keratomileusis for low to moderate myopia”).
Regarding claim 1, Kim discloses a medical apparatus for predicting myopic regression which is configured to predict at least one of a probability of myopic regression and whether myopic regression will occur (Kim page 3703, left-hand column [LHC], first full paragraph: “predict long-term myopic regression after corneal refractive surgery”) from numerical data (Kim page 3704, LHC, first full paragraph: “factors, including preoperative clinical measurements”) including a refractive power (Kim Fig. 1: Refractive power), a corneal curvature (it is known in the art that corneal curvature is a predictor for myopic regression, as evidenced by supporting NPL document “An Interval-Censored Model for Predicting Myopic Regression after Laser In Situ Keratomileusis” cited by Kim - see Conclusion), an eye axial length (it is known in the art that axial length is a predictor for myopic regression, as evidenced by supporting NPL document “Is the axial length a risk factor for post-LASIK myopic regression?” cited by Kim - see Conclusion), a corneal thickness (Kim Fig. 1: Corneal thickness), an intraocular pressure (Kim Fig. 1: Intraocular pressure), a sex (Kim Fig. 1: Sex), and an age (Kim Fig. 1: Age), using a machine learning processor (Kim page 3704, right-hand column [RHC], first paragraph: “the developed deep learning model”). However, Kim fails to disclose predicting myopic regression from numerical data including a photopic pupil size, a mesopic pupil size, a corneal diameter, a corneal epithelial thickness, a high-order aberration, and a visual acuity.
In the related art of myopia, Lee discloses predicting myopic regression from numerical data (Lee pages 1-2, RHC, last paragraph: it is known there is a relationship between pupil size and myopia) including a photopic pupil size (Lee Table 2: “Mesopic high”) and a mesopic pupil size (Lee Table 2: “Mesopic low”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kim to incorporate the teachings of Lee to determine if a more strict and cautious preoperative evaluation of refractive state is necessary to decrease postoperative vision complaints (Lee page 4, LHC, first paragraph under 4. Discussion). However, Kim, modified by Lee, still fails to disclose predicting myopic regression from numerical data including a corneal diameter, a corneal epithelial thickness, a high-order aberration, and a visual acuity.
In the related art of myopic regression, Lim discloses predicting myopic regression from numerical data including a corneal epithelial thickness (Lim page 98, RHC, first full paragraph: “an increase in central epithelial thickness after refractive surgery is related to myopic regression”) and a visual acuity (Lim Table 3: “UCVA = uncorrected visual acuity” and “BSCVA = best spectacle corrected visual acuity”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified Kim to incorporate the teachings of Lim to provide safer and more accurate vision correction surgery for patients with myopia (Kim page 3709, LHC, last paragraph teaches motivation to integrate factors that affect the occurrence of myopic regression). However, Kim, modified by Lee and Lim, still fails to disclose predicting myopic regression from numerical data including a corneal diameter and a high-order aberration.
In the related art of myopic regression, Zhou discloses predicting myopic regression from numerical data (Zhou Abstract: “Predictors of myopic regression included”) including a corneal diameter (Zhou Abstract: “corneal diameter”) and a high-order aberration (Zhou Abstract: “preoperative higher-order aberration root mean square with 3 mm pupil diameter”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified Kim to incorporate the teachings of Zhou to provide safer and more accurate vision correction surgery for patients with myopia (Kim page 3709, LHC, last paragraph teaches motivation to integrate factors that affect the occurrence of myopic regression).
Regarding claim 6, it is the corresponding method executed by the apparatus claimed in claim 1. Therefore, Kim, modified by Lee, Lim and Zhou, discloses the limitations of claim 6 as it does the limitations of claim 1.
Claim(s) 2-4 and 7-9 are rejected under 35 U.S.C. 103 as being unpatentable over Kim, Lee, Lim and Zhou in view of Liu et al. (NPL “Biometric Measurement of Anterior Segment: A Review”).
Regarding claim 2, Kim, modified by Lee, Lim and Zhou, discloses the medical apparatus of claim 1, wherein, to predict at least one of the probability of myopic regression and whether myopic regression will occur, a deep learning processor (Kim page 3704, LHC, first full paragraph: “Several deep learning models (convolutional neural networks) for image analysis have been developed to predict myopic regression based on fundus photography”) and image data including a captured fundus image (Kim Fig. 1: Preoperative fundus photography) and an optical coherence tomography image (it is known in the art that a multimodal deep learning algorithm can combine OCT and fundus images, as evidenced by supporting NPL document “The possibility of the combination of OCT and fundus images for improving the diagnostic accuracy of deep learning for age-related macular degeneration: a preliminary experiment” cited by Kim - see Conclusion) are additionally used. However, Kim fails to disclose using image data including a corneal endothelial cell image, a corneal shape and aberration analyzer image, an optical path difference (OPD)-scan III image, and a computed corneal tomography machine image. In the related art of imaging the anterior segment, Liu discloses using image data including a corneal endothelial cell image (Liu page 17, second and third paragraphs: “FF-OCT enables an entire field of view imaging, approximately covering 1~2 cm2, through depths of hundreds of microns at the cellular level”), a corneal shape and aberration analyzer image (Liu page 3, second full paragraph: “anterior segment tomography not only obtains quantitative information from both the anterior and posterior corneal surfaces, but also has the capability of imaging the anterior segment tissues and reconstructing the 3D shapes of the tissues digitally”), an optical path difference (OPD)-scan III image (Liu page 15, under 2.6.2 Fourier Domain OCT: “the optical path length difference between sample and reference is encoded by the frequency of the interferometric fringes as a function of spectrum with all spectral components captured simultaneously”), and a computed corneal tomography machine image (Liu pages 11-12, last paragraph: “Pentacam HR and AXL”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified Kim to incorporate the teachings of Liu to accurately measure corneal topography, which is crucial to determining the quality of vision, detection and diagnosis of pathology, prescription of noninvasive and invasive treatments, and evaluation of therapy (Liu page 3, first full paragraph).
Regarding claim 3, Kim, modified by Lee, Lim, Zhou and Liu, discloses the medical apparatus of claim 2, wherein the machine learning processor is configured to extract a first feature from the numerical data (Kim Fig. 2; page 3704, LHC, first full paragraph: “preoperative clinical measurements…Calculated SHAP values were used to create feature importance plots”), and the deep learning processor is configured to extract a second feature from the image data (Kim page 3704, LHC, first full paragraph: “the soft-max output of ResNet50 based on fundus photography”), the medical apparatus further comprising a fusion processor configured to predict at least one of the probability of myopic regression and whether myopic regression will occur from the first feature and the second feature (Kim page 3704, LHC, first full paragraph: “the XGBoost algorithm, which is derived from extreme gradient boosting, was used to integrate all factors”).
Regarding claim 4, Kim, modified by Lee, Lim, Zhou and Liu, discloses the medical apparatus of claim 3, wherein the deep learning processor comprises: a first sub-model processor configured to extract a first sub-feature from the captured fundus image; a second sub-model processor configured to extract a second sub-feature from the corneal endothelial cell image; a third sub-model processor configured to extract a third sub-feature from the corneal shape and aberration analyzer image; a fourth sub-model processor configured to extract a fourth sub-feature from the optical coherence tomography image; a fifth sub-model processor configured to extract a fifth sub-feature from the OPD-scan III image; a sixth sub-model processor configured to extract a sixth sub-feature from the computed corneal tomography machine image (supporting NPL document “The possibility of the combination of OCT and fundus images for improving the diagnostic accuracy of deep learning for age-related macular degeneration: a preliminary experiment” cited by Kim discloses a multimodal deep learning algorithm using sub-model processors “Pre-trained VGG19” to extract features from each image modality - see Yoo Fig. 3); and a sub-fusion processor configured to extract the second feature from the first to sixth sub-features (Yoo page 679, LHC, first full paragraph: “we combined the VGG-19 feature vectors from each OCT and fundus image using RF, RBM, and DBN”).
Regarding claim 7, it is the corresponding method executed by the apparatus claimed in claim 2. Therefore, Kim, modified by Lee, Lim, Zhou and Liu, discloses the limitations of claim 7 as it does the limitations of claim 2.
Regarding claim 8, it is the corresponding method executed by the apparatus claimed in claim 3. Therefore, Kim, modified by Lee, Lim, Zhou and Liu, discloses the limitations of claim 8 as it does the limitations of claim 3.
Regarding claim 9, it is the corresponding method executed by the apparatus claimed in claim 4. Therefore, Kim, modified by Lee, Lim, Zhou and Liu, discloses the limitations of claim 9 as it does the limitations of claim 4.
Claim(s) 5 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Kim, Lee, Lim and Zhou in view of Kim '214 et al. (KR20210084214A).
Regarding claim 5, Kim, modified by Lee, Lim and Zhou, discloses the medical apparatus of claim 1, wherein the machine learning processor comprises: a preprocessing processor configured to preprocess the numerical data (Kim page 3703, under Dataset: the data was preprocessed, e.g., “the data were fully deidentified to protect patient confidentiality” and “Of these, 11,821 patients were excluded”); a learning processor configured to extract a feature from the preprocessed numerical data (Kim Fig. 2; page 3704, LHC, first full paragraph: “preoperative clinical measurements…Calculated SHAP values were used to create feature importance plots”); and a postprocessing processor configured to predict at least one of the probability of myopic regression and whether myopic regression will occur from the extraction (Kim page 3704, LHC, first full paragraph: “the XGBoost algorithm, which is derived from extreme gradient boosting, was used to integrate all factors”). However, Kim fails to disclose a missing value processing processor configured to process a missing value of the numerical data. In the related art of deep learning prediction models, Kim '214 discloses a missing value processing processor configured to process a missing value of the numerical data (Kim '214 paragraph 0017: “the preprocessing unit is characterized by applying a weighted moving average method to the PSA values of the periods before and after the period when PSA values are missing within the period, thereby estimating a representative value of the period”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified Kim to incorporate the teachings of Kim '214 to train the recurrent neural network more effectively and obtain results more accurately (Kim '214 paragraph 0020).
Regarding claim 10, Kim, modified by Lee, Lim, Zhou and Kim '214, discloses a method of training a myopic regression prediction medical apparatus configured to predict at least one of a probability of myopic regression and whether myopic regression will occur from numerical data including a refractive power, a corneal curvature, an eye axial length, a photopic pupil size, a mesopic pupil size, a corneal diameter, a corneal thickness, a corneal epithelial thickness, a high-order aberration, a visual acuity, an intraocular pressure, a sex, and an age, the method comprising: generating, by at least one of a missing value processing method, a preprocessing method, and a postprocessing method (as claimed in claim 5), a plurality of different machine learning processors (supporting NPL document “The possibility of the combination of OCT and fundus images for improving the diagnostic accuracy of deep learning for age-related macular degeneration: a preliminary experiment” cited by Kim discloses comparing “five different deep learning models” - see Yoo page 679, LHC, first full paragraph and Fig. 3); training the plurality of machine learning processors (Yoo page 679, RHC, first paragraph: “There was an unsupervised layer-wise pre-training followed by supervised fine tuning using the gradient descent method in the training process”); evaluating the plurality of machine learning processors (Yoo page 679, RHC, last paragraph: “The measurement of classification problems was based on area under the curve (AUC), accuracy, and relative classifier information (RCI)”); and selecting an optimal machine learning processor among the plurality of machine learning processors on the basis of evaluation results (Yoo page 685, LHC, first full paragraph: “RF with multimodal setting worked better than RBM and DBN”).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Chen et al. (NPL “An Interval-Censored Model for Predicting Myopic Regression after Laser In Situ Keratomileusis”) discloses that significant predictors of myopic regression after LASIK included mean preoperative central corneal curvature (Chen Abstract).
Gab-Alla (NPL “Is the axial length a risk factor for post-LASIK myopic regression?”) discloses that a high preoperative axial length increases the risk of myopic regression after LASIK (Gab-Alla Abstract).
Yoo et al. (NPL “The possibility of the combination of OCT and fundus images for improving the diagnostic accuracy of deep learning for age-related macular degeneration: a preliminary experiment”) discloses that a multimodal deep learning algorithm based on the combination of OCT and fundus images raised the diagnostic accuracy compared to either data type alone (Yoo Abstract).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTINE ZHAO whose telephone number is (703)756-5986. The examiner can normally be reached Monday - Friday 9:00am - 5:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee can be reached at (571)270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/C.Z./Examiner, Art Unit 2677
/Jonathan S Lee/Primary Examiner, Art Unit 2677