Prosecution Insights
Last updated: April 19, 2026
Application No. 18/757,298

DIRECT MEDICAL TREATMENT PREDICTIONS USING ARTIFICIAL INTELLIGENCE

Non-Final OA §DP
Filed: Jun 27, 2024
Examiner: NGUYEN, KHAI MINH
Art Unit: 2641
Tech Center: 2600 — Communications
Assignee: Digital Diagnostics Inc.
OA Round: 1 (Non-Final)
Grant Probability: 87% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 87%, above average (1107 granted / 1271 resolved; +25.1% vs TC avg)
Interview Lift: +4.2% (minimal; based on resolved cases with interview)
Avg Prosecution: 2y 6m (typical timeline)
Career History: 1301 total applications across all art units; 30 currently pending

Statute-Specific Performance

§101:  8.4% (-31.6% vs TC avg)
§103: 46.2% (+6.2% vs TC avg)
§102: 20.9% (-19.1% vs TC avg)
§112:  8.8% (-31.2% vs TC avg)
Tech Center averages are estimates • Based on career data from 1271 resolved cases
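As a sanity check on the per-statute figures above, subtracting each listed delta from the examiner's rate should recover the tech-center baseline (this assumes the deltas are computed as examiner rate minus TC average, which the labels suggest but do not state outright). A quick sketch:

```python
# Per-statute rates and deltas as shown above (examiner %, delta vs TC avg in points).
stats = {
    "§101": (8.4, -31.6),
    "§103": (46.2, +6.2),
    "§102": (20.9, -19.1),
    "§112": (8.8, -31.2),
}

# Recover the implied tech-center average for each statute: rate - delta.
tc_avgs = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(tc_avgs)  # every statute implies the same 40.0% TC baseline
```

That all four statutes imply an identical baseline suggests the TC averages are a single estimated figure rather than per-statute measurements, consistent with the "estimate" caveat in the footnote.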

Office Action

§DP
DETAILED ACTION

Double Patenting

1. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).

2. Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims of U.S. Pat. No. 12051490 B2. The subject matter claimed in the instant application is fully disclosed in the U.S. Pat. No. 12051490 B2 as follows:

Instant application:

1.
A method for autonomously predicting an efficacy of a treatment for a patient, the method comprising: receiving image data of an anatomy of a patient; obtaining scores for each of a plurality of anatomical features reflected in the image data by applying a feature extraction model that is a first machine learning model to the image data; applying the scores as input to a treatment model, the treatment model comprising a second machine learning model configured to output a prediction of a measure of efficacy of a particular treatment based on features of the patient's anatomy; and receiving, as output from the treatment model, data representative of the predicted measure of efficacy of the particular treatment.

2. The method of claim 1, further comprising: determining, based on the image data, a body part to which the image data corresponds; and selecting the feature extraction model from a plurality of candidate feature models based on a concordance between each candidate feature model and a given body part.

3. The method of claim 1, wherein applying the scores as input to the treatment model comprises generating a feature vector that stores, for each anatomical feature of the plurality, its respective identification, and applying the feature vector as input to the treatment model.

4. The method of claim 1, further comprising: determining, for the particular treatment, whether its predicted measure of efficacy exceeds a threshold; and outputting to a user a recommendation for the particular treatment responsive to determining that its predicted measure of efficacy exceeds the threshold.

5. The method of claim 1, wherein receiving, as output from the treatment model, data representative of the predicted measure of efficacy of the particular treatment comprises receiving, as output from the treatment model, data representative of respective measures of efficacy for each of a plurality of candidate treatments.

6.
A non-transitory computer-readable medium comprising instructions encoded thereon for autonomously determining a treatment for a patient, the instructions when executed causing one or more processors to perform operations, the instructions comprising instructions to: receive image data of an anatomy of a patient; obtain scores for each of a plurality of anatomical features reflected in the image data by applying a feature extraction model that is a first machine learning model to the image data; apply the scores as input to a treatment model, the treatment model comprising a second machine learning model configured to output a prediction of a measure of efficacy of a particular treatment based on features of the patient's anatomy; and receive, as output from the treatment model, data representative of the predicted measure of efficacy of the particular treatment.

7. The non-transitory computer-readable medium of claim 6, the instructions further comprising instructions to: determine, based on the image data, a body part to which the image data corresponds; and select the feature extraction model from a plurality of candidate feature models based on a concordance between each candidate feature model and a given body part.

8. The non-transitory computer-readable medium of claim 6, wherein applying the scores as input to the treatment model comprises generating a feature vector that stores, for each anatomical feature of the plurality, its respective score, and applying the feature vector as input to the treatment model.

9. The non-transitory computer-readable medium of claim 6, the instructions further comprising instructions to: determine, for the particular treatment, whether its predicted measure of efficacy exceeds a threshold; and output to a user a recommendation for the particular treatment responsive to determining that its predicted measure of efficacy exceeds the threshold.

10.
The non-transitory computer-readable medium of claim 6, wherein receiving, as output from the treatment model, data representative of the predicted measure of efficacy of the particular treatment comprises receiving, as output from the treatment model, data representative of respective measures of efficacy for each of a plurality of candidate treatments.

11. A method for autonomously determining a treatment for a patient, the method comprising: receiving sensor data from an electronic device that monitors a patient; accessing a machine learning model, the machine learning model configured to output a likelihood that a particular treatment would yield a positive result, where the machine learning model is trained using training data that pairs previously obtained sensor data for a plurality of patients to labels describing whether the particular treatment yielded a positive result for each of the plurality of patients; applying the received sensor data to the machine learning model; and receiving, as output from the machine learning model, data representative of a likelihood of whether the patient would benefit from the particular treatment.

12. The method of claim 11, wherein the data representative of the one or more treatments comprises probabilities that each of the one or more treatments would bring a benefit to the patient, and wherein the method further comprises: determining, for each of the one or more treatments, whether its corresponding probability exceeds a threshold; and outputting to a user a recommendation for each of the one or more treatments that has a corresponding probability that exceeds the threshold.

13. The method of claim 11, wherein the machine learning model is a convolutional neural network.

14.
The method of claim 11, wherein the machine learning model is a multi-task model comprising a shared layer and branches, the shared layer trained to determine one or more candidate diagnoses based on the sensor data, each branch corresponding to a different treatment, each branch trained to output a likelihood that the different treatment to which the branch corresponds will be effective.

15. The method of claim 14, wherein an amount of training data for a branch corresponding to a given treatment is below a threshold, and wherein the multi-task model enriches the amount of training data by back-propagating the training data with information from a different branch.

16. A non-transitory computer-readable medium comprising instructions encoded thereon for autonomously determining a treatment for a patient, the instructions when executed causing one or more processors to perform operations, the instructions comprising instructions to: receive sensor data from an electronic device that monitors a patient; access a machine learning model, the machine learning model configured to output a likelihood that a particular treatment would yield a positive result, where the machine learning model is trained using training data that pairs previously obtained sensor data for a plurality of patients to labels describing whether the particular treatment yielded a positive result for each of the plurality of patients; apply the received sensor data to the machine learning model; and receive, as output from the machine learning model, data representative of a likelihood of whether the patient would benefit from the particular treatment.

17.
The non-transitory computer-readable medium of claim 16, wherein the data representative of the one or more treatments comprises probabilities that each of the one or more treatments would bring a benefit to the patient, and wherein the instructions further comprise instructions to: determine, for each of the one or more treatments, whether its corresponding probability exceeds a threshold; and output to a user a recommendation for each of the one or more treatments that has a corresponding probability that exceeds the threshold.

18. The non-transitory computer-readable medium of claim 16, wherein the machine learning model is a convolutional neural network.

19. The non-transitory computer-readable medium of claim 16, wherein the machine learning model is a multi-task model comprising a shared layer and branches, the shared layer trained to determine one or more candidate diagnoses based on the sensor data, each branch corresponding to a different treatment, each branch trained to output a likelihood that the different treatment to which the branch corresponds will be effective.

20. The non-transitory computer-readable medium of claim 19, wherein an amount of training data for a branch corresponding to a given treatment is below a threshold, and wherein the multi-task model enriches the amount of training data by back-propagating the training data with information from a different branch.

Pat. No. 12051490:

1.
A method for autonomously predicting an efficacy of a treatment for a patient, the method comprising: receiving image data of an anatomy of a patient; applying the image data to a feature extraction model, the feature extraction model comprising a first machine learning model trained using training data that pairs anatomical images to an anatomical feature label; receiving, as output from the feature extraction model, scores for each of a plurality of anatomical features corresponding to anatomy of the patient; applying the scores as input to a treatment model, the treatment model comprising a second machine learning model different from the first machine learning model, the treatment model trained to output a prediction of a measure of efficacy of a particular treatment based on features of the patient's anatomy; and receiving, as output from the treatment model, data representative of the predicted measure of efficacy of the particular treatment.

2. The method of claim 1, further comprising: determining, based on the image data, a body part to which the image data corresponds; and selecting the feature extraction model from a plurality of candidate feature models based on a concordance between each candidate feature model and a given body part.

3. The method of claim 1, wherein applying the scores as input to the treatment model comprises generating a feature vector that stores, for each anatomical feature of the plurality, its respective identification, and applying the feature vector as input to the treatment model.

4. The method of claim 1, further comprising: determining, for the particular treatment, whether its predicted measure of efficacy exceeds a threshold; and outputting to a user a recommendation for the particular treatment responsive to determining that its predicted measure of efficacy exceeds the threshold.

5.
The method of claim 1, wherein receiving, as output from the treatment model, data representative of the predicted measure of efficacy of the particular treatment comprises receiving, as output from the treatment model, data representative of respective measures of efficacy for each of a plurality of candidate treatments.

6. A non-transitory computer-readable medium comprising instructions encoded thereon for autonomously determining a treatment for a patient, the instructions when executed causing one or more processors to perform operations, the instructions comprising instructions to: receive image data of an anatomy of a patient; apply the image data to a feature extraction model, the feature extraction model comprising a first machine learning model trained using training data that pairs anatomical images to an anatomical feature label; receive, as output from the feature extraction model, scores for each of a plurality of anatomical features corresponding to anatomy of the patient; apply the scores as input to a treatment model, the treatment model comprising a second machine learning model different from the first machine learning model, the treatment model trained to output a prediction of a measure of efficacy of a particular treatment based on features of the patient's anatomy; and receive, as output from the treatment model, data representative of the predicted measure of efficacy of the particular treatment.

7. The non-transitory computer-readable medium of claim 6, the instructions further comprising instructions to: determine, based on the image data, a body part to which the image data corresponds; and select the feature extraction model from a plurality of candidate feature models based on a concordance between each candidate feature model and a given body part.

8.
The non-transitory computer-readable medium of claim 6, wherein applying the scores as input to the treatment model comprises generating a feature vector that stores, for each anatomical feature of the plurality, its respective score, and applying the feature vector as input to the treatment model.

9. The non-transitory computer-readable medium of claim 6, the instructions further comprising instructions to: determine, for the particular treatment, whether its predicted measure of efficacy exceeds a threshold; and output to a user a recommendation for the particular treatment responsive to determining that its predicted measure of efficacy exceeds the threshold.

10. The non-transitory computer-readable medium of claim 6, wherein receiving, as output from the treatment model, data representative of the predicted measure of efficacy of the particular treatment comprises receiving, as output from the treatment model, data representative of respective measures of efficacy for each of a plurality of candidate treatments.

11.
A system comprising: memory with instructions encoded thereon for autonomously predicting an efficacy of a treatment for a patient; and one or more processors that, when executing the instructions, are caused to perform operations comprising: receiving image data of an anatomy of a patient; applying the image data to a feature extraction model, the feature extraction model comprising a first machine learning model trained using training data that pairs anatomical images to an anatomical feature label; receiving, as output from the feature extraction model, scores for each of a plurality of anatomical features corresponding to anatomy of the patient; applying the scores as input to a treatment model, the treatment model comprising a second machine learning model different from the first machine learning model, the treatment model trained to output a prediction of a measure of efficacy of a particular treatment based on features of the patient's anatomy; and receiving, as output from the treatment model, data representative of the predicted measure of efficacy of the particular treatment.

12. The system of claim 11, the operations further comprising: determining, based on the image data, a body part to which the image data corresponds; and selecting the feature extraction model from a plurality of candidate feature models based on a concordance between each candidate feature model and a given body part.

13. The system of claim 11, wherein applying the scores as input to the treatment model comprises generating a feature vector that stores, for each anatomical feature of the plurality, its respective identification, and applying the feature vector as input to the treatment model.

14.
The system of claim 11, the operations further comprising: determining, for the particular treatment, whether its predicted measure of efficacy exceeds a threshold; and outputting to a user a recommendation for the particular treatment responsive to determining that its predicted measure of efficacy exceeds the threshold.

15. The system of claim 11, wherein receiving, as output from the treatment model, data representative of the predicted measure of efficacy of the particular treatment comprises receiving, as output from the treatment model, data representative of respective measures of efficacy for each of a plurality of candidate treatments.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KHAI MINH NGUYEN whose telephone number is (571)272-7923. The examiner can normally be reached on 6-3. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Charles Appiah, can be reached on 571-272-7904. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KHAI M NGUYEN/
Primary Examiner, Art Unit 2641
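The two-stage architecture recited in independent claim 1 (a feature-extraction model whose per-feature scores feed a separate treatment-efficacy model, with the feature-vector packing of claim 3 and the threshold-based recommendation of claim 4) can be sketched roughly as below. This is an illustrative mock-up, not the applicant's implementation: the class names, feature names, weights, and score derivation are all invented for the example, and the stand-in "models" are deterministic placeholders rather than trained machine learning models.

```python
import numpy as np

class FeatureExtractionModel:
    """Stand-in for the first ML model: maps image data to per-feature scores."""
    def __init__(self, feature_names):
        self.feature_names = feature_names

    def predict_scores(self, image):
        # A real implementation would be a trained image model (e.g. a CNN).
        # Here we derive deterministic pseudo-scores in [0, 1] from image statistics.
        stats = [image.mean(), image.std(), image.max(), image.min()]
        return {name: float(abs(np.tanh(s)))
                for name, s in zip(self.feature_names, stats)}

class TreatmentModel:
    """Stand-in for the second ML model: maps feature scores to predicted efficacy."""
    def __init__(self, weights):
        self.weights = weights  # one weight per anatomical feature

    def predict_efficacy(self, scores):
        # Per claim 3/8: pack the per-feature scores into a feature vector first.
        names = sorted(scores)
        x = np.array([scores[n] for n in names])
        w = np.array([self.weights[n] for n in names])
        return float(1.0 / (1.0 + np.exp(-(x @ w))))  # sigmoid -> efficacy in (0, 1)

def recommend(image, extractor, treatment_model, threshold=0.5):
    """Per claim 4: recommend the treatment only if efficacy exceeds a threshold."""
    scores = extractor.predict_scores(image)
    efficacy = treatment_model.predict_efficacy(scores)
    return efficacy, efficacy > threshold

# Hypothetical feature names and weights, invented for the example.
features = ["lesion_area", "tissue_density", "vessel_count", "thickness"]
extractor = FeatureExtractionModel(features)
model = TreatmentModel({f: 0.5 for f in features})
efficacy, recommended = recommend(np.ones((8, 8)), extractor, model)
```

The double-patenting question turns on the difference between the two claim sets' framing of this same pipeline: the instant claims recite "obtaining scores ... by applying a feature extraction model," while the patented claims separately recite applying the image data and receiving the scores as output, plus the training-data limitations.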

Prosecution Timeline

Jun 27, 2024
Application Filed
Nov 10, 2025
Non-Final Rejection — §DP
Apr 07, 2026
Response Filed

Precedent Cases

Applications granted by this examiner involving similar technology

Patent 12604176
EMBEDDED SUBSCRIBER IDENTIFICATION MODULE WITH SECURE PROFILES
Granted Apr 14, 2026 · 2y 5m to grant
Patent 12598503
RADIO INTERFACE MEASUREMENTS
Granted Apr 07, 2026 · 2y 5m to grant
Patent 12588863
Systems and methods for preventing and treating wrinkles
Granted Mar 31, 2026 · 2y 5m to grant
Patent 12587850
SYSTEMS AND METHODS OF ENFORCING POLICY COMPLIANCE OF VEHICLES
Granted Mar 24, 2026 · 2y 5m to grant
Patent 12581402
METHOD AND APPARATUS FOR CELL SELECTION, TERMINAL, NETWORK DEVICE AND STORAGE MEDIUM
Granted Mar 17, 2026 · 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 87%
With Interview: 91% (+4.2%)
Median Time to Grant: 2y 6m
PTA Risk: Low
Based on 1271 resolved cases by this examiner. Grant probability derived from career allow rate.
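These projections line up with the examiner's career numbers. A quick check (treating the +4.2-point interview lift as simply additive to the base rate, which is how the dashboard appears to apply it):

```python
granted, resolved = 1107, 1271
base = granted / resolved          # career allow rate, used as base grant probability
lift = 0.042                       # +4.2 points with an examiner interview
with_interview = base + lift

print(f"base: {base:.0%}, with interview: {with_interview:.0%}")
# prints "base: 87%, with interview: 91%" -- matching the figures shown
```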
