Prosecution Insights
Last updated: April 19, 2026
Application No. 18/685,847

Systems and Methods for Predicting Corneal Improvement From Scheimpflug Imaging Using Machine Learning

Non-Final OA: §103, §112
Filed
Feb 22, 2024
Examiner
BILODEAU, DUSTIN E
Art Unit
2664
Tech Center
2600 — Communications
Assignee
Mayo Foundation for Medical Education and Research
OA Round
1 (Non-Final)
88%
Grant Probability
Favorable
1-2
OA Rounds
3y 3m
To Grant
93%
With Interview

Examiner Intelligence

Grants 88% — above average
88%
Career Allow Rate
71 granted / 81 resolved
+25.7% vs TC avg
+5.2%
Interview Lift
Moderate lift across resolved cases with vs. without interview
Typical timeline
3y 3m
Avg Prosecution
30 currently pending
Career history
111
Total Applications
across all art units

Statute-Specific Performance

§101
8.9%
-31.1% vs TC avg
§103
75.7%
+35.7% vs TC avg
§102
9.9%
-30.1% vs TC avg
§112
2.8%
-37.2% vs TC avg
Black line = Tech Center average estimate • Based on career data from 81 resolved cases

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 04/07/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered and attached by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(d):

(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:

Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

Claim 23 is rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends. Both claims 16 and 23 state the limitation of “Scheimpflug imaging parameters that are independent of corneal thickness”. Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.
Applicant is advised that should claim 16 be found allowable, claim 23 will be objected to under 37 CFR 1.75 as being a substantial duplicate thereof. When two claims in an application are duplicates or else are so close in content that they both cover the same thing, despite a slight difference in wording, it is proper after allowing one claim to object to the other as being a substantial duplicate of the allowed claim. See MPEP § 608.01(m).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-5, 8-13, 16-20, and 23-24 are rejected under 35 U.S.C. 103 as being unpatentable over Zander D, Grewing V, Glatz A, et al., Predicting Edema Resolution After Descemet Membrane Endothelial Keratoplasty for Fuchs Dystrophy Using Scheimpflug Tomography, JAMA Ophthalmol. 2021;139(4):423-430, doi:10.1001/jamaophthalmol.2020.6994, hereinafter referred to as Zander, in view of Abou Shousha (U.S. Patent No. 10468142).

Regarding Claim 1, Zander teaches a method for predicting corneal improvement using Scheimpflug imaging, the method comprising (Key points: The results of this study suggest that preoperative Scheimpflug imaging can help predict corneal edema resolution after Descemet membrane endothelial keratoplasty):

(a) accessing Scheimpflug imaging data with a computer system, wherein the Scheimpflug imaging data have been acquired from a subject using a Scheimpflug imaging system (Data Acquisition and Image Scoring: Scheimpflug imaging was performed according to manufacturer instructions (Pentacam HR; Oculus). If needed, up to 3 attempts were made to obtain a high-quality image without data acquisition errors);

(b) accessing a predictive model with the computer system, wherein the predictive model has been constructed to predict corneal improvement following a therapy based on preoperative Scheimpflug imaging data (Page 424: we developed a model to predict edema resolution after DMEK based on preoperative features and validated the model in a separate cohort; Development of Predictive Model in the Derivation Cohort: the model without interaction and transformations was selected as the final model (Table 2).)
(c) applying the Scheimpflug imaging data to the predictive model, generating output as corneal improvement feature data that indicate a predicted corneal improvement following the therapy; and (Assessment and Validation of the Predictive Model: To externally validate the prediction model, the final model was applied to predict edema resolution in a separate cohort not included in the model development process (validation cohort). In the validation cohort, the overall performance of the model was high (R2 = 0.49, 95% CI, 0.37-0.62) (Figure 2). The mean difference between predicted and observed edema was 3.3 μm (95% CI, −41.4 to 48.0 μm), indicating good calibration without clear overestimation or underestimation (Figure 2).)

(d) presenting the corneal improvement feature data to a user (Conclusion shows results “may allow for more precise and personalized counseling on outcomes and may help set realistic expectations for clinicians, patients, and their relatives after DMEK, which is an elective surgery”, which implies the data is presented to a user.)

Zander implies but does not explicitly disclose (d) presenting the corneal improvement feature data to a user. Abou Shousha is in the same field of art of image analysis. Further, Abou Shousha teaches (d) presenting the corneal improvement feature data to a user (Col 15 Lines 36-39: the system 10 described with respect to FIG. 1F further includes an analysis subsystem 20 configured to generate a health report 24 and provide the health report 24 to a system user.)

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Zander by presenting the data to a user as taught by Abou Shousha; thus, one of ordinary skill in the art would be motivated to combine the references to provide an improved system of monitoring corneal conditions (Abou Shousha, Background).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

Regarding Claim 2, Zander in view of Abou Shousha discloses the method of claim 1, wherein the corneal improvement feature data comprise at least one predicted value of change in central corneal thickness (Zander, Discussion: Herein, we showed that these tomographic features were truly specific for corneal edema by demonstrating their normalization after restoring endothelial function and corneal thickness by DMEK; Table 3: Outcome of interest: Corneal edema resolution defined as the difference in central corneal thickness before and after endothelial keratoplasty).

Regarding Claim 3, Zander in view of Abou Shousha discloses the method of claim 1, wherein the corneal improvement feature data comprise a corneal thickness map that depicts predicted spatial distribution of change in corneal thickness over a region of a cornea of the subject (Abou Shousha, Col 9 Lines 5-6: FIG. 2B is a thickness map showing total cornea thickness; Zander, Fig. 1).

Regarding Claim 4, Zander in view of Abou Shousha discloses the method of claim 1, wherein the corneal improvement feature data comprise at least one of a classification, quantitative score, probability of improvement, or other parameter indicating predicted corneal improvement following a therapy (Zander, Assessment and Validation of the Predictive Model: To externally validate the prediction model, the final model was applied to predict edema resolution in a separate cohort not included in the model development process (validation cohort). In the validation cohort, the overall performance of the model was high (R2 = 0.49, 95% CI, 0.37-0.62) (Figure 2). The mean difference between predicted and observed edema was 3.3 μm.)
Regarding Claim 5, Zander in view of Abou Shousha discloses the method of claim 1, wherein the predictive model has been constructed using ensemble learning (Abou Shousha, Col 19 Lines 5-7: In any of the above or other embodiment, the system 10 may include an AI model 12 comprising an ensemble of AI submodels for some model output.)

Regarding Claim 8, Zander in view of Abou Shousha discloses the method of claim 1, wherein the Scheimpflug imaging data comprises at least one of Scheimpflug images, Scheimpflug tomography maps, quantitative parameters computed from Scheimpflug images, and quantitative parameters computed from Scheimpflug tomography maps (Zander, Discussion: In this study, we developed and validated a model to help predict corneal edema resolution after DMEK based on a single Scheimpflug tomographic imaging examination in eyes with Fuchs dystrophy. To predict the presence of edema in these eyes, 5 factors are required. Two features were visible on tomography maps and 3 were indicators of corneal profile and structure (Table 3).)

Regarding Claim 9, Zander in view of Abou Shousha discloses the method of claim 8, wherein the Scheimpflug imaging data comprises Scheimpflug tomography maps comprising at least one of posterior elevation maps and pachymetry maps (Zander, Data Acquisition and Image Scoring: To allow for standardized and masked grading of Scheimpflug patterns of corneal edema, the corneal pachymetry map and posterior elevation map were exported from the 4-map refractive display.)

Regarding Claim 10, Zander in view of Abou Shousha discloses the method of claim 8, wherein the Scheimpflug imaging data comprises quantitative parameters computed from Scheimpflug tomography maps comprising at least one of isopach regularity and asphericity computed from pachymetry maps (Zander, Table 2. Model to predict Corneal Edema Resolution After DMEK: Parallel isopachs, x1 … Presence of nonparallel lines of equal corneal thickness on pachymetry map (yes = 1; no = 0); Table 2 shows inputs for the model derived from Scheimpflug data.)

Regarding Claim 11, Zander in view of Abou Shousha discloses the method of claim 8, wherein the Scheimpflug imaging data comprises quantitative parameters computed from Scheimpflug tomography maps comprising radius of a posterior corneal surface computed from a posterior elevation map (Zander, Table 2. Model to predict Corneal Edema Resolution After DMEK: Posterior depression per 10 μm … Maximum of focal posterior depression on the posterior elevation map; Table 2 shows inputs for the model derived from Scheimpflug data.)

Regarding Claim 12, Zander in view of Abou Shousha discloses the method of claim 1, wherein the Scheimpflug imaging data comprises quantitative parameters computed from at least one of Scheimpflug imaging and Scheimpflug tomography maps, the quantitative parameters comprising at least one of irregular isopachs, displacement of a thinnest point of the subject's cornea, and volume of posterior depression (Zander, Table 2. Model to predict Corneal Edema Resolution After DMEK: Parallel isopachs, x1 … Presence of nonparallel lines of equal corneal thickness on pachymetry map (yes = 1; no = 0); Table 2 shows inputs for the model derived from Scheimpflug data; Development of Predictive Model in the Derivation Cohort: Adding first-order interaction terms and spline transformation of potential predictors selected combinations of previously identified predictors with the addition of displacement of the thinnest point (mean square error, 439). Interpretation of such a model would have been significantly more complicated; thus, the model without interaction and transformations was selected as the final model (Table 2).)
Regarding Claim 13, Zander in view of Abou Shousha discloses the method of claim 1, wherein the Scheimpflug imaging data comprises a plurality of parameters of isopach regularity computed from pachymetry maps (Zander, Table 2. Model to predict Corneal Edema Resolution After DMEK: Parallel isopachs, x1 … Presence of nonparallel lines of equal corneal thickness on pachymetry map (yes = 1; no = 0); Table 2 shows inputs for the model derived from Scheimpflug data) and a radius of a posterior corneal surface of the subject (Zander, Table 2. Model to predict Corneal Edema Resolution After DMEK: Posterior depression per 10 μm … Maximum of focal posterior depression on the posterior elevation map; Table 2 shows inputs for the model derived from Scheimpflug data), and wherein the corneal improvement feature data comprise at least one predicted value of change in central corneal thickness (Zander, Tables 2 and 3.)

Regarding Claim 16, Zander teaches a method for predicting corneal improvement using Scheimpflug imaging, the method comprising (Key points: The results of this study suggest that preoperative Scheimpflug imaging can help predict corneal edema resolution after Descemet membrane endothelial keratoplasty):

(a) accessing Scheimpflug imaging data with a computer system, wherein the Scheimpflug imaging data have been acquired from a subject using a Scheimpflug imaging system (Data Acquisition and Image Scoring: Scheimpflug imaging was performed according to manufacturer instructions (Pentacam HR; Oculus). If needed, up to 3 attempts were made to obtain a high-quality image without data acquisition errors) and comprise Scheimpflug imaging parameters that are independent of corneal thickness;

(b) accessing a trained machine learning model with the computer system, wherein the trained machine learning model has been trained to predict corneal improvement following a therapy based on preoperative Scheimpflug imaging data (Page 424: we developed a model to predict edema resolution after DMEK based on preoperative features and validated the model in a separate cohort; Development of Predictive Model in the Derivation Cohort: the model without interaction and transformations was selected as the final model (Table 2).)

(c) applying the Scheimpflug imaging data to the trained machine learning model, generating output as corneal improvement feature data that indicate a predicted corneal improvement following the therapy; and (Assessment and Validation of the Predictive Model: To externally validate the prediction model, the final model was applied to predict edema resolution in a separate cohort not included in the model development process (validation cohort). In the validation cohort, the overall performance of the model was high (R2 = 0.49, 95% CI, 0.37-0.62) (Figure 2). The mean difference between predicted and observed edema was 3.3 μm (95% CI, −41.4 to 48.0 μm), indicating good calibration without clear overestimation or underestimation (Figure 2).)

(d) presenting the corneal improvement feature data to a user (Conclusion shows results “may allow for more precise and personalized counseling on outcomes and may help set realistic expectations for clinicians, patients, and their relatives after DMEK, which is an elective surgery”, which implies the data is presented to a user.)
Zander implies but does not explicitly disclose (a) accessing Scheimpflug imaging data with a computer system, wherein the Scheimpflug imaging data have been acquired from a subject using a Scheimpflug imaging system and comprise Scheimpflug imaging parameters that are independent of corneal thickness; (b) accessing a trained machine learning model with the computer system, wherein the trained machine learning model has been trained to predict corneal improvement following a therapy based on preoperative Scheimpflug imaging data; (c) applying the Scheimpflug imaging data to the trained machine learning model, generating output as corneal improvement feature data that indicate a predicted corneal improvement following the therapy; and (d) presenting the corneal improvement feature data to a user.

Abou Shousha is in the same field of art of image analysis. Further, Abou Shousha teaches (a) accessing Scheimpflug imaging data with a computer system, wherein the Scheimpflug imaging data have been acquired from a subject using a Scheimpflug imaging system (Col 12 Lines 11-12: the input data may include images or maps; Col 12 Lines 32-34: Some maps may be produced by devices such as corneal Placido disc topographers or Pentacam® rotating Scheimpflug camera) and comprise Scheimpflug imaging parameters that are independent of corneal thickness (Col 33 Lines 34-44: The AI model 12 is trained to receive the input image and process it to generate a model output. In operation, the AI model 12 may process input data comprising one or more B-scans. In a further embodiment, the AI model 12 may optionally receive and input into the network input data comprising patient data as described above and elsewhere herein (see, e.g., FIGS. 1H, 1J, & 11A-11C). In some embodiments, input data may include other input data such as thickness maps, heat maps, bullseye maps, structure maps, and/or other input data described herein in addition to or instead of images such as B-scan images.)
(b) accessing a trained machine learning model with the computer system, wherein the trained machine learning model has been trained to predict corneal improvement following a therapy based on preoperative Scheimpflug imaging data (Fig. 10; Col 33 Lines 27-33: FIG. 10 schematically illustrates an embodiment of the system 10 according to various embodiments. The system 10 includes an AI model 12 trained to output an action, such as a prediction of treatment to be taken with respect to the patient's cornea or anterior segment. The AI model 12 is configured to output a discrete value or category corresponding to the predicted treatment);

(c) applying the Scheimpflug imaging data to the trained machine learning model, generating output as corneal improvement feature data that indicate a predicted corneal improvement following the therapy; and (d) presenting the corneal improvement feature data to a user (Col 34 Lines 28-37: In a further embodiment, the system 10 described with respect to FIG. 10 may also include an analysis subsystem as described above with respect to FIG. 1B configured to generate health analysis data using the set of scores and present the data to the user in a health report. The system 10 may also include a database for storing output data. The database may also store historical outputs from which the analysis subsystem may utilize in preparation of the health report, e.g., for comparison purposes or to identify an improved or worsening condition.)

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Zander by using a machine learning neural network to predict corneal condition and display results to a user as taught by Abou Shousha; thus, one of ordinary skill in the art would be motivated to combine the references to provide an improved system of monitoring corneal conditions (Abou Shousha, Background).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

Regarding Claim 17, Zander in view of Abou Shousha discloses the method of claim 16, wherein the trained machine learning model comprises a neural network (Abou Shousha, Col 16 Lines 22-31: The number of image data the system 10 may input to generate a prediction may depends on the implementation of the one or more AI models 12 of the system 10. For example, in some embodiments, the input image data includes a single image that captures a current state of a patient's cornea or anterior segment. In one such embodiment, the system 10 includes an AI model 12 comprising a feedforward neural network model that has been trained on labeled training data to process the input image and/or the other patient data to generate a model output.)

Regarding Claim 18, Zander in view of Abou Shousha discloses the method of claim 17, wherein the Scheimpflug imaging data comprise at least one Scheimpflug tomography map (Zander, Discussion: In this study, we developed and validated a model to help predict corneal edema resolution after DMEK based on a single Scheimpflug tomographic imaging examination in eyes with Fuchs dystrophy. To predict the presence of edema in these eyes, 5 factors are required. Two features were visible on tomography maps and 3 were indicators of corneal profile and structure (Table 3).) (Abou Shousha, Col 10-11 Lines 65-4: The systems and methods described herein may be implemented as standalone or integrated applications for processing images or maps such as Optical Coherence Tomography (OCT) images or B-scans, thickness maps, bullseye maps, curvature maps, topography maps, tomography maps, or elevation maps of the human cornea and/or the anterior segment of the eye using Artificial Intelligence models associated with image data.)
Regarding Claim 19, Zander in view of Abou Shousha discloses the method of claim 18, wherein the corneal improvement feature data comprise at least one of a classification, quantitative score, probability of improvement, or other parameter indicating predicted corneal improvement following a therapy (Zander, Assessment and Validation of the Predictive Model: To externally validate the prediction model, the final model was applied to predict edema resolution in a separate cohort not included in the model development process (validation cohort). In the validation cohort, the overall performance of the model was high (R2 = 0.49, 95% CI, 0.37-0.62) (Figure 2). The mean difference between predicted and observed edema was 3.3 μm.); (Abou Shousha, Col 34 Lines 28-37: In a further embodiment, the system 10 described with respect to FIG. 10 may also include an analysis subsystem as described above with respect to FIG. 1B configured to generate health analysis data using the set of scores and present the data to the user in a health report. The system 10 may also include a database for storing output data. The database may also store historical outputs from which the analysis subsystem may utilize in preparation of the health report, e.g., for comparison purposes or to identify an improved or worsening condition.)

Regarding Claim 20, Zander in view of Abou Shousha discloses the method of claim 16, wherein the machine learning model comprises a predictive model that has been trained using ensemble learning (Abou Shousha, Col 19 Lines 5-7: In any of the above or other embodiment, the system 10 may include an AI model 12 comprising an ensemble of AI submodels for some model output.)
Regarding Claim 23, Zander in view of Abou Shousha discloses the method of claim 16, wherein the Scheimpflug imaging data consist of Scheimpflug imaging parameters that are independent of corneal thickness (Abou Shousha, Col 33 Lines 34-44: The AI model 12 is trained to receive the input image and process it to generate a model output. In operation, the AI model 12 may process input data comprising one or more B-scans. In a further embodiment, the AI model 12 may optionally receive and input into the network input data comprising patient data as described above and elsewhere herein (see, e.g., FIGS. 1H, 1J, & 11A-11C). In some embodiments, input data may include other input data such as thickness maps, heat maps, bullseye maps, structure maps, and/or other input data described herein in addition to or instead of images such as B-scan images.)

Regarding Claim 24, Zander in view of Abou Shousha discloses the method of claim 16, wherein the machine learning model excludes preoperative central corneal thickness as an input parameter (Abou Shousha, Col 33 Lines 34-44: The AI model 12 is trained to receive the input image and process it to generate a model output. In operation, the AI model 12 may process input data comprising one or more B-scans. In a further embodiment, the AI model 12 may optionally receive and input into the network input data comprising patient data as described above and elsewhere herein (see, e.g., FIGS. 1H, 1J, & 11A-11C). In some embodiments, input data may include other input data such as thickness maps, heat maps, bullseye maps, structure maps, and/or other input data described herein in addition to or instead of images such as B-scan images.)

Claims 6-7, 14-15, and 21-22 are rejected under 35 U.S.C. 103 as being unpatentable over Zander in view of Abou Shousha (U.S. Patent No. 10468142) in view of Lee (U.S. Patent Pub. No. 2022/0400943).

Regarding Claim 6, Zander in view of Abou Shousha teaches the method of claim 5.
Zander in view of Abou Shousha does not explicitly disclose wherein the predictive model is constructed using ensemble learning comprising a boosting algorithm. Lee is in the same field of art of image analysis. Further, Lee teaches wherein the predictive model is constructed using ensemble learning comprising a boosting algorithm (¶20: Machine learning model 15 may be based on one or more of linear regression, logistic regression, decision tree, support vector machine, naive Bayes, k-nearest neighbors, k-means, random forest, dimensionality reduction, gradient boosting, and neural network; ¶93: Issues of overfitting may be mitigated by creating an ensemble of neural networks; Abou Shousha also teaches an ensemble network.)

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Zander in view of Abou Shousha by using ensemble learning with a boosting algorithm as taught by Lee; thus, one of ordinary skill in the art would be motivated to combine the references to provide a system and method for improved predictions of a patient's expected threshold for individual test points (Lee, ¶9).

Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

Regarding Claim 7, Zander in view of Abou Shousha in view of Lee discloses the method of claim 6, wherein the boosting algorithm is a gradient boosting algorithm (Lee, ¶20: Machine learning model 15 may be based on one or more of linear regression, logistic regression, decision tree, support vector machine, naive Bayes, k-nearest neighbors, k-means, random forest, dimensionality reduction, gradient boosting, and neural network; ¶93: Issues of overfitting may be mitigated by creating an ensemble of neural networks; Abou Shousha also teaches an ensemble network.)
Claim 14 recites limitations similar to claim 6 and is rejected under the same rationale and reasoning. Claim 15 recites limitations similar to claim 7 and is rejected under the same rationale and reasoning. Claim 21 recites limitations similar to claim 6 and is rejected under the same rationale and reasoning. Claim 22 recites limitations similar to claim 7 and is rejected under the same rationale and reasoning.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DUSTIN BILODEAU whose telephone number is (571) 272-1032. The examiner can normally be reached 9am-5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Mehmood, can be reached at (571) 272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DUSTIN BILODEAU/
Examiner, Art Unit 2664

/JENNIFER MEHMOOD/
Supervisory Patent Examiner, Art Unit 2664
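Claims 6-7 (and their counterparts 14-15 and 21-22) turn on "ensemble learning comprising a boosting algorithm," specifically gradient boosting. For readers unfamiliar with the technique, here is a minimal, self-contained sketch of gradient boosting with decision stumps on a single feature. It is purely illustrative: it is not the applicant's claimed model nor the implementation of any cited reference.

```python
# Illustrative gradient boosting with decision stumps (squared-error loss).
# Each round fits a one-split "stump" to the residuals of the current
# ensemble, then adds it with a small learning rate.

def fit_stump(xs, residuals):
    """Pick the single-feature threshold split that best fits the residuals."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def predict(x, base, stumps, lr=0.1):
    """Ensemble prediction: base value plus the scaled sum of all stumps."""
    return base + lr * sum(stump(x) for stump in stumps)

def gradient_boost(xs, ys, rounds=50, lr=0.1):
    """Boost stumps against the residuals; return (base, stumps)."""
    base = sum(ys) / len(ys)  # initial prediction: the mean target
    stumps = []
    for _ in range(rounds):
        residuals = [y - predict(x, base, stumps, lr) for x, y in zip(xs, ys)]
        stumps.append(fit_stump(xs, residuals))
    return base, stumps

# Toy usage: a step function the ensemble learns round by round.
xs = list(range(1, 9))           # feature values 1..8
ys = [2.0] * 4 + [10.0] * 4      # target steps from 2 to 10 at x = 4
base, stumps = gradient_boost(xs, ys)
# predictions converge toward the two levels (about 2 and 10)
```

The learning rate trades off convergence speed against overfitting; each added stump corrects a fraction of the remaining error, which is the sense in which boosting is an "ensemble" method.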

Prosecution Timeline

Feb 22, 2024
Application Filed
Jan 27, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602802
ELECTRONIC DEVICE FOR GENERATING DEPTH MAP AND OPERATING METHOD THEREOF
2y 5m to grant Granted Apr 14, 2026
Patent 12597293
System and Method for Authoring Human-Involved Context-Aware Applications
2y 5m to grant Granted Apr 07, 2026
Patent 12592084
APPARATUS, METHOD, AND COMPUTER PROGRAM FOR IDENTIFYING STATE OF LIGHTING
2y 5m to grant Granted Mar 31, 2026
Patent 12591959
METHOD, APPARATUS, AND DEVICE FOR PROCESSING IMAGE, AND STORAGE MEDIUM
2y 5m to grant Granted Mar 31, 2026
Patent 12581041
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, PROGRAM, AND INFORMATION COLLECTION SYSTEM
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
88%
Grant Probability
93%
With Interview (+5.2%)
3y 3m
Median Time to Grant
Low
PTA Risk
Based on 81 resolved cases by this examiner. Grant probability derived from career allow rate.
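The headline projections are consistent with simple arithmetic on the examiner's record, assuming the interview lift is applied additively in percentage points (the page does not state this explicitly):

```python
# Reproducing the dashboard's headline figures from the stated inputs.
# Assumption (not stated on the page): the +5.2% interview lift is
# additive, in percentage points, on top of the career allow rate.
granted, resolved = 71, 81
interview_lift = 5.2  # percentage points

allow_rate_pct = granted / resolved * 100        # career allow rate, ~87.65%
grant_prob = round(allow_rate_pct)               # -> 88
with_interview = round(allow_rate_pct + interview_lift)  # -> 93
```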
