Prosecution Insights
Last updated: April 19, 2026
Application No. 18/769,596

SYSTEM AND METHOD TO AID CLINICIANS IN ACCESSING THE OUTCOME OF LUNG CANCER INTERVENTIONS

Non-Final OA: §101, §102, §103, §112
Filed: Jul 11, 2024
Examiner: ABDULLAH, AAISHA
Art Unit: 3681
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Optellum Limited
OA Round: 1 (Non-Final)
Grant Probability: 25% (At Risk)
Predicted OA Rounds: 1-2
Estimated Time to Grant: 4y 5m
Grant Probability with Interview: 67%

Examiner Intelligence

Career Allow Rate: 25% (grants only 25% of cases; 11 granted / 44 resolved; -27.0% vs TC avg)
Interview Lift: +41.9% (allowance rate with vs. without an interview, among resolved cases with an interview)
Avg Prosecution: 4y 5m (typical timeline; 18 applications currently pending)
Total Applications: 62 (career history, across all art units)

Statute-Specific Performance

§101: 38.8% (-1.2% vs TC avg)
§103: 43.6% (+3.6% vs TC avg)
§102: 2.4% (-37.6% vs TC avg)
§112: 11.3% (-28.7% vs TC avg)

Comparisons are against an estimated Tech Center average. Based on career data from 44 resolved cases.

Office Action

Rejections: §101, §102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Application Status

This is the first non-final action on the merits. Claims 1-20 as originally filed on July 11, 2024 are currently pending and considered below.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on November 28, 2024 is being considered by the examiner. The submission is in compliance with the provisions of 37 CFR 1.97.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 18 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 18 recites “the fusion model”. There is insufficient antecedent basis for this limitation in the claim. Claim 18 depends on claim 17 and claim 17 depends on claim 1. Claims 1 and 17 do not recite a fusion model. Therefore, it is unclear whether Applicant has introduced “the fusion model” as a new claim element, or if the Applicant intended “the fusion model” to refer to the fusion model recited in claim 15.
For the purposes of compact prosecution, the claim will be interpreted in a manner as best understood by the Examiner, wherein claim 18 depends on claim 15 and the recitation of “the fusion model” in claim 18 is treated as the same fusion model of claim 15.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e. an abstract idea) without significantly more. Claims 1-19 recite a system for predicting an outcome score for a patient, which is within the statutory category of a machine. Claim 20 recites a computer implemented method for predicting an outcome score for a patient, which is within the statutory category of a process.

Step 2A - Prong One

Regarding Prong One of Step 2A, the claim limitations are to be analyzed to determine whether, under their broadest reasonable interpretation, they "recite" a judicial exception or, in other words, whether a judicial exception is "set forth" or "described" in the claims. An "abstract idea" judicial exception is subject matter that falls within at least one of the following groupings: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes. Representative independent claim 1 includes limitations that recite at least one abstract idea.
Specifically, independent claim 1 recites:

A Computer Aided Diagnosis, CADx, system for predicting an outcome score for a patient, comprising: an input circuit configured to receive input data comprising at least one input medical image for a patient; an outcome prediction circuit operably coupled to the input circuit configured to receive input data from the input circuit for outcome prediction analysis; wherein the outcome prediction circuit is further configured to receive details of a suggested future intervention for the patient; and the input data to the outcome prediction circuit is analysed to generate the outcome score for the patient accounting for the details of the future intervention.

The underlined limitations constitute concepts performed in the human mind and mathematical concepts. That is, other than reciting steps as performed by the generic computer components, nothing in the claim elements precludes the steps from practically being performed in the mind. The claim encompasses a mental process of receiving input data and receiving details of the suggested future intervention. The identified abstract idea, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind except for the recitation of generic computer components. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind except for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. The abstract idea for Claim 20 is identical to the abstract idea for Claim 1, because the only difference between Claims 1 and 20 is that Claim 1 recites a system, whereas Claim 20 recites a method. Any limitations not identified above as part of the limitation in the mind, are deemed “additional elements” and will be discussed further in detail below.
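For orientation, the data flow that claim 1 recites (an input circuit feeding a medical image to an outcome prediction circuit that also receives a suggested future intervention and emits an outcome score) can be sketched as follows. This is a minimal illustration with invented names and an invented scoring rule, not the applicant's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Intervention:
    kind: str              # e.g. "surgery", "radiotherapy", or "none" (hypothetical)
    characteristic: float  # e.g. planned extent, normalised to 0..1 (hypothetical)

def input_circuit(raw_image):
    """Stand-in for image ingestion: normalise pixel intensities."""
    peak = max(raw_image) or 1.0
    return [v / peak for v in raw_image]

def outcome_prediction_circuit(image, future):
    """Toy outcome score: mean intensity attenuated by the planned
    intervention; a real CADx system would use a trained model here."""
    base = sum(image) / len(image)
    benefit = {"none": 0.0, "radiotherapy": 0.3, "surgery": 0.5}.get(future.kind, 0.1)
    return max(0.0, base - benefit * future.characteristic)

score = outcome_prediction_circuit(
    input_circuit([10.0, 20.0, 40.0]),
    Intervention(kind="surgery", characteristic=0.8),
)
print(round(score, 4))  # 0.1833
```

The sketch only mirrors the claim's recited roles and takes no position on the eligibility analysis.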
Accordingly, independent claims 1 and 20 recite at least one abstract idea. Similarly, dependent claims 2-8 and 10-19 further narrow the abstract idea described in the independent claims. Claims 2-4, 8, 10-16 and 19 describe the models, generating the outcome score, the intervention records and mapping of the raw intervention information. Claims 5-7 describe the outcome score. Claims 17 and 18 describe the input data. Claims 10-12 partially narrow the abstract idea as described above, and also introduce an additional element(s) which will be discussed in Step 2A Prong Two and Step 2B. These limitations only serve to further limit the abstract idea and hence, are directed toward fundamentally the same abstract ideas as independent claims 1 and 20, even when considered individually and as an ordered combination.

Step 2A - Prong Two

Regarding Prong Two of Step 2A, it must be determined whether the claim as a whole integrates the abstract idea into a practical application. It must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a "practical application." In the present case, claims 1-20 as a whole do not integrate the abstract idea into a practical application because they do not impose meaningful limits on practicing the abstract idea. The additional elements or combination of additional elements, beyond the above-noted at least one abstract idea, will be described as follows (where the bolded portions are the “additional limitations” while the underlined portions continue to represent the “abstract idea(s)”).
Specifically, independent claim 1 recites:

A Computer Aided Diagnosis, CADx, system for predicting an outcome score for a patient, comprising: an input circuit configured to receive input data comprising at least one input medical image for a patient; an outcome prediction circuit operably coupled to the input circuit configured to receive input data from the input circuit for outcome prediction analysis; wherein the outcome prediction circuit is further configured to receive details of a suggested future intervention for the patient; and the input data to the outcome prediction circuit is analysed to generate the outcome score for the patient accounting for the details of the future intervention.

The claim recites the additional elements of a system, input circuit, outcome circuit and outcome prediction circuit that implement the identified abstract idea. The system, input circuit, outcome circuit and outcome prediction circuit are not described by the applicant and are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component (i.e., merely invoking the computer structure as a tool used to execute the limitations, MPEP 2106.05(f)). The dependent claims 9-12 and 16 recite additional element(s) beyond those already recited in the independent claims that implement the identified abstract idea. Claim 9 recites a CT scan, X-ray scan, ultrasound scan, MRI scan, SPECT scan and PET scan. Claims 10-12 recite an intervention encoder. Claim 16 recites machine learning.
However, these functions do not integrate the abstract idea into a practical application because: the intervention encoder and machine learning represent mere instructions to apply the abstract idea on a computer (i.e., merely invoking the computer structure as a tool used to execute the limitations); and the CT scan, X-ray scan, ultrasound scan, MRI scan, SPECT scan and PET scan generally link the use of a judicial exception to a particular technological environment or field of use. Accordingly, the claims as a whole do not integrate the abstract idea into a practical application as they do not impose any meaningful limits on practicing the abstract idea.

Step 2B

Regarding Step 2B, representative independent claim 1 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. When viewed as a whole, claims 1-20 do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the claims recite processes that are routine and well-known in the art, and simply implementing the process on a computer is not enough to qualify as "significantly more." As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using a system, input circuit, outcome circuit and outcome prediction circuit to perform the noted steps amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept (“significantly more”).
The dependent claims 9-12 and 16 recite additional element(s) beyond those already recited in the independent claims that implement the identified abstract idea. Claim 9 recites a CT scan, X-ray scan, ultrasound scan, MRI scan, SPECT scan and PET scan. Claims 10-12 recite an intervention encoder. Claim 16 recites machine learning. However, these functions are not deemed significantly more than the abstract idea because: the intervention encoder and machine learning represent mere instructions to apply the abstract idea on a computer (i.e., merely invoking the computer structure as a tool used to execute the limitations); and the CT scan, X-ray scan, ultrasound scan, MRI scan, SPECT scan and PET scan generally link the use of a judicial exception to a particular technological environment or field of use.

Therefore, claims 1-20 are rejected under 35 USC §101 as being directed to non-statutory subject matter.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraph of 35 U.S.C. 102 that forms the basis for the rejections under this section set forth in this Office action:

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 5-7, 9 and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Raffy (US 2015/0238270 A1).

Regarding claim 1, Raffy teaches: A Computer Aided Diagnosis, CADx, system for predicting an outcome score for a patient ([0007], claim 1, [0040], [0011]), comprising: an input circuit configured to receive input data comprising at least one input medical image for a patient; (a “processor” configured to perform the steps of “receiving patient data including volumetric images of the patient”, e.g.
see [0007], claim 1; the images are “produced by CT scans, MRI scans, and/or PET scans”, e.g. see [0041]) an outcome prediction circuit operably coupled to the input circuit configured to receive input data from the input circuit for outcome prediction analysis; (the “processor” is configured for “analyzing the volumetric images to identify one or more features correlated to treatment outcome prediction” and “predicting an outcome” (The processor is operably coupled to the input (it receives the patient data) and performs the prediction analysis.), e.g. see [0007], claim 1) wherein the outcome prediction circuit is further configured to receive details of a suggested future intervention for the patient; and (“displaying the predicted outcome…further includes receiving a selected treatment modality from the user”, e.g. see [0008], claim 3; the user selects a specific intervention type, such as “valve placement 20, coil placement 22, and energy delivery” and the user also selects “treatment location”, e.g. see [0056], [0008]) the input data to the outcome prediction circuit is analysed to generate the outcome score for the patient accounting for the details of the future intervention. (“predicting an outcome for a treatment modality includes predicting an outcome for the treatment modality selected by the user”, e.g. see [0008], claim 3; the system analyzes the input data, features like “fissure integrity” or “wall thickness”, to generate a predicted outcome, e.g. see [0012], [0041]; “the predicted outcome may be a numerical value representing a probability of success” (i.e. outcome score), e.g. see [0011]; using “logistic regression analysis” to determine predictors for specific valve types (i.e. details of the future intervention), e.g. see [0088])

Regarding claim 5, Raffy teaches the system of claim 1 as described above.
Raffy further teaches: wherein the outcome score comprises at least one of score of disease recurrence and treatment response (predicting a “positive response to the lung volume reduction procedure”, e.g. see [0091]; “prediction of lung cancer risk”, e.g. see [0006], [0108])

Regarding claim 6, Raffy teaches the system of claim 5 as described above. Raffy further teaches: wherein the outcome score further comprises one or more different characterizations of the outcome (“the predicted outcome includes…a probability of lung volume reduction…and…a probability of pneumothorax”, e.g. see [0011])

Regarding claim 7, Raffy teaches the system of claim 6 as described above. Raffy further teaches: wherein the one or more different characterisations of the outcome comprise one or more of: a malignancy score, a disease recurrence score, a disease recurrence location, a disease recurrence time, and an adverse effects score (“the predicted outcome may be…a probability of success” (i.e. treatment response); “the predicted outcome includes…a probability of pneumothorax” (i.e. adverse effect), e.g. see [0011]; “predicting a likelihood of lung cancer”, e.g. see [0018])

Regarding claim 9, Raffy teaches the system of claim 1 as described above. Raffy further teaches: wherein the input medical image is at least one of a CT scan, X-ray scan, ultrasound scan, MRI scan, SPECT scan, PET scan, in which all or part of a patient's lungs are visible (“imaging data produced by CT scans, MRI scans, and/or PET scans”, e.g. see [0041])

Claim 20 recites substantially similar limitations as those already addressed in claim 1, and, as such, is rejected for similar reasons as given above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2-4, 8 and 10-19 are rejected under 35 U.S.C. 103 as being unpatentable over Raffy in further view of Polsterl (“A Wide and Deep Neural Network for Survival Analysis from Anatomical Shape and Tabular Clinical Data”, ArXiv, 2019).

Regarding claim 2, Raffy teaches the system of claim 1 as described above. Raffy teaches the output prediction circuit, input data and outcome score as described above. Raffy does not teach: a disease characterisation model and an outcome prediction model; and receive input data from the input circuit for analysis by the disease characterisation model; wherein output from the disease characterisation model is provided to the outcome prediction model, to generate the score for the patient. However, Polsterl in the analogous art of computer aided diagnosis (e.g. see abstract) teaches: a disease characterisation model and an outcome prediction model; and receive input data from the input circuit for analysis by the disease characterisation model; wherein output from the disease characterisation model is provided to the outcome prediction model, to generate the score for the patient (a “PointNet” (deep component) that processes the 3D anatomical shape (i.e. image) to extract a latent feature vector (i.e. disease characterisation model), e.g.
see Section 3.2, “Wide and Deep Neural Network”; “predict a risk score” output layer (Cox’s proportional hazards model) that receives the processed features to generate the survival score (i.e. outcome prediction model), e.g. see Section 3.3 “Survival Analysis”)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Raffy to include a disease characterisation model whose output is provided to an outcome prediction model, as taught by Polsterl, for the purposes of “learn[ing] a complex latent representation of anatomical shape” (Section 3.2, “Wide and Deep Neural Network”).

Regarding claim 3, Raffy and Polsterl teach the system of claim 2 as described above. Raffy further teaches: […] one or more historical intervention records for the patient (using a database that includes “information relating to patient treatment” including “type of treatment such as type of lung volume reduction procedure, location of treatment within the lung, the results of treatment”, e.g. see [0042]). Raffy does not teach: wherein the outcome prediction model is further configured to receive. However, Polsterl in the analogous art teaches: wherein the outcome prediction model is further configured to receive (the “wide” component is configured to receive “tabular clinical data” (includes historical records), e.g. see Section 3 “Methods”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Raffy to include the outcome prediction model is further configured to receive as taught by Polsterl, for the purposes of “improved prediction performance” (Section 6, “Conclusion”).

Regarding claim 4, Raffy and Polsterl teach the system of claim 3 as described above.
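For reference, the wide-and-deep arrangement that the claim 2 and claim 3 mappings attribute to Polsterl (a deep branch reducing image-derived input to a latent feature vector, a wide branch passing tabular clinical data through, and a linear output layer over their concatenation producing a risk score) can be sketched as below. The shapes and weights are invented for illustration; Polsterl's actual network uses a PointNet encoder and a Cox-style survival loss, both omitted here:

```python
import numpy as np

rng = np.random.default_rng(0)

def deep_branch(image_features, w):
    """Toy stand-in for the learned encoder (disease characterisation
    model): a linear map followed by ReLU, yielding a latent vector."""
    return np.maximum(w @ image_features, 0.0)

def wide_and_deep_score(image_features, tabular, w_deep, w_out):
    """Fuse the deep latent vector with raw tabular data by
    concatenation, then apply a linear output layer (outcome
    prediction model) to obtain a scalar risk score."""
    latent = deep_branch(image_features, w_deep)
    fused = np.concatenate([latent, tabular])  # the "CONCAT" fusion step
    return float(w_out @ fused)

image_features = rng.normal(size=16)   # e.g. features derived from a scan (invented)
tabular = np.array([1.0, 0.0, 0.7])    # e.g. one-hot intervention type + a covariate (invented)
w_deep = rng.normal(size=(4, 16))      # 16 image features -> 4 latent dims
w_out = rng.normal(size=4 + 3)         # weights over latent + tabular

risk = wide_and_deep_score(image_features, tabular, w_deep, w_out)
```

Because the tabular vector enters the output layer unchanged, the wide branch behaves like the identity mapping discussed for claim 13 below.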
Raffy further teaches: wherein at least one of the historical intervention records and the future intervention are set as a null value, and are considered as no-interventions towards calculations of the outcome score (a “heterogeneity score…was defined as the difference between percentage between the treated lobe and the non-treated ipsilateral lobe” (the non-treated lobe is the functional equivalent of the “no-intervention” status), e.g. see [0103]; “Valve type and lobe” were not found to be predictors for the model, e.g. see [0088]).

Regarding claim 8, Raffy and Polsterl teach the system of claim 3 as described above. Raffy further teaches: wherein the outcome score is provided for a predefined time sequence between 1 month and 10 years (follow-up scans 3 months after the procedure to assess “valve placement problems”, e.g. see [0086]; “the five-year survival rate for lung cancer” is low, e.g. see [0004]).

Regarding claim 10, Raffy and Polsterl teach the system of claim 3 as described above. Raffy teaches the output prediction circuit and details of the future intervention as described above. Raffy further teaches: […] the intervention record […] (“information relating to patient treatment” including “type of treatment such as type of lung volume reduction procedure, location of treatment within the lung, the results of treatment”, e.g. see [0042]). Raffy does not teach: an intervention encoder, and the at least one intervention record are provided to the intervention encoder, which encodes the intervention record before providing it to the outcome prediction model. However, Polsterl in the analogous art teaches: an intervention encoder, and the at least one intervention record are provided to the intervention encoder, (“The wide part of the network takes demographics and clinical biomarkers and their interactions.” (i.e.
encoder); the “wide” component is specifically designed to encode “tabular clinical data” (which includes categorical variables like intervention type), e.g. see Section 3 “Methods”; “the linear component models known clinical variables…associated with Alzheimer’s disease”, e.g. see Section 3.2 “Wide and Deep Neural Network”) which encodes the intervention record before providing it to the outcome prediction model (the wide component processes (i.e. encodes) the input variables into a mathematical representation; applying transformations such as a “cross-product transformation” to the inputs; the encoded output of the wide component is fused with the deep component’s output to generate the final prediction, “the final patient-level latent representation”, e.g. see Section 3.2 “Wide and Deep Neural Network”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Raffy to include an intervention encoder, and the at least one intervention record are provided to the intervention encoder, which encodes the intervention record before providing it to the outcome prediction model as taught by Polsterl, for the purposes of “improved prediction performance” (Section 6, “Conclusion”).

Regarding claim 11, Raffy and Polsterl teach the system of claim 10 as described above. Raffy does not teach: wherein the intervention encoder is trained to map raw intervention information into a vector to enhance semantics of the raw intervention information. However, Polsterl in the analogous art teaches: wherein the intervention encoder is trained to map raw intervention information into a vector to enhance semantics of the raw intervention information (the system maps input data to “embedding vectors”, e.g. see Section 3.1 “Learning from Anatomical Shape”; the network learns “high-level descriptors” and the “wide” component uses a “cross-product transformation” to model interactions (i.e.
enhancing semantics), e.g. see Section 3.2 “Wide and Deep Neural Network”)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Raffy to include the intervention encoder is trained to map raw intervention information into a vector to enhance semantics of the raw intervention information as taught by Polsterl, for the purposes of “incorporate[ing] gene-gene (epistasis) and gene–environment interactions”, which motivates the use of vector mappings to capture semantic relationships (Section 3.2 “Wide and Deep Neural Network”).

Regarding claim 12, Raffy and Polsterl teach the system of claim 11 as described above. Raffy teaches the intervention record as described above. Raffy further teaches: wherein the intervention record is…in the form of an intervention type […] and an intervention characteristic […] (“valve type” (i.e. intervention type) and “number of valves” (i.e. intervention characteristic), e.g. see [0096]). Raffy does not teach: provided to the intervention encoder in the form of a vector and [another type of] vector. However, Polsterl in the analogous art teaches: provided to the intervention encoder in the form of a vector and [another type of] vector (the system processes clinical feature vectors that are either: “sparse (e.g. one-hot encoded genetic alterations)” or “dense (e.g. biomarker concentrations)”, e.g. see Section 3.2 “Wide and Deep Neural Network”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Raffy to include provided to the intervention encoder in the form of a vector and [another type of] vector as taught by Polsterl, for the purposes of “leverage[ing] routine clinical patient information” effectively (Section 3.2 “Wide and Deep Neural Network”).

Regarding claim 13, Raffy and Polsterl teach the system of claim 12 as described above.
Raffy does not teach: wherein the mapping of the raw intervention information into a vector is an identity mapping that does not change the vector or the [other type] vector. However, Polsterl in the analogous art teaches: wherein the mapping of the raw intervention information into a vector is an identity mapping that does not change the vector or the [other type] vector (the “linear component models known clinical variables” directly; the “CONCAT” operation concatenates the raw input vector directly into the fusion layer (The system performs an identity mapping of the intervention/clinical information.), e.g. see Section 3.2 “Wide and Deep Neural Network”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Raffy to include the mapping of the raw intervention information into a vector is an identity mapping that does not change the vector or the other type vector as taught by Polsterl, for the purposes of “leverage[ing] routine clinical patient information” effectively (Polsterl, Section 3.2 “Wide and Deep Neural Network”).

Regarding claim 14, Raffy and Polsterl teach the system of claim 2 as described above. Raffy does not teach: wherein the disease characterization model comprises at least one of: an image characterization model for analysing visual features of an input image; and a structured data characterization model to encode structured data into a mathematical representation. However, Polsterl in the analogous art teaches: wherein the disease characterization model comprises at least one of: an image characterization model for analysing visual features of an input image; and a structured data characterization model to encode structured data into a mathematical representation (a “PointNet” (deep component) that processes the 3D anatomical shape (i.e. image) to extract a latent feature vector (i.e. disease characterisation model), e.g.
see Section 3.2, “Wide and Deep Neural Network”; “The wide part of the network takes demographics and clinical biomarkers and their interactions.” (i.e. encoder), e.g. see Section 3 “Methods”)

Regarding claim 15, Raffy and Polsterl teach the system of claim 14 as described above. Raffy does not teach: wherein the disease characterization model further comprises a fusion model to aggregate an output from the image characterization model and the structured data characterization model. However, Polsterl in the analogous art teaches: wherein the disease characterization model further comprises a fusion model to aggregate an output from the image characterization model and the structured data characterization model (“Information from anatomical shape and tabular clinical data…are fused in a single neural network.”, e.g. see abstract; the fusion is performed by “vector concatenation”, e.g. see Section 3.2 “Wide and Deep Neural Network”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Raffy to include the disease characterization model further comprises a fusion model to aggregate an output from the image characterization model and the structured data characterization model as taught by Polsterl, for the purposes of allowing for augmenting of “clinical variables” (Section 6 “Conclusion”).

Regarding claim 16, Raffy and Polsterl teach the system of claim 2 as described above. Raffy does not teach: where one or more of the disease characterisation models and the outcome prediction model are trained using machine learning. However, Polsterl in the analogous art teaches: where one or more of the disease characterisation models and the outcome prediction model are trained using machine learning (“Our network is trained end-to-end” using optimization and “a loss commonly used in survival analysis”, e.g.
see abstract)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Raffy to include the one or more of the disease characterisation models and the outcome prediction model are trained using machine learning as taught by Polsterl, for the purposes of “improved prediction performance” (Section 6, “Conclusion”).

Regarding claim 17, Raffy teaches the system of claim 1 as described above. Raffy further teaches: wherein the input data is provided as a longitudinal data sequence with each input in the sequence having an associated timestamp (“a baseline scan…and a follow up scan at 3 months after the procedure” (baseline and follow-up form a sequence of data points over time), e.g. see [0095])

Regarding claim 18, Raffy teaches the system of claim 17 as described above. Raffy further teaches: […] provide an output for each input in the longitudinal data sequence […] (“a baseline scan…and a follow up scan at 3 months after the procedure” and determining the differences in lung volume, e.g. see [0095]). Raffy does not teach: wherein a disease characterization model will provide an output for each input, and the fusion module will provide a single aggregated output. However, Polsterl in the analogous art teaches: wherein a disease characterization model will provide an output for each input, and (“The network takes a point cloud representation P…and then aggregates point features…The global feature vector is processed by a global MLP outputting a 100-dimensional latent representation” (for any given input, the disease characterization model provides a distinct output), e.g. see Fig. 1) the fusion module will provide a single aggregated output (the system uses fusion to combine the distinct output into a final score, “the final patient-level latent representation” (The risk score is the single aggregated output generated by the fusion of the separate model outputs.), e.g.
see Section 3.2 “Wide and Deep Neural Network”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Raffy to include a disease characterization model will provide an output for each input, and the fusion module will provide a single aggregated output as taught by Polsterl, for the purposes of “improved prediction performance” (Section 6, “Conclusion”). Regarding claim 19, Raffy and Polsterl teach the system of claim 3 as described above. Raffy further teaches: wherein the intervention record is a record of all interventions performed for the patient, along with a timestamp for each intervention; where the interventions in the intervention record and the future intervention comprise at least one of: surgery, radiotherapy, chemotherapy, immunotherapy, ablation, laser therapy, cryotherapy, diathermy, and photodynamic therapy, and no intervention (“a baseline scan…and a follow up scan at 3 months after the procedure” (baseline and follow-up form a sequence of data points over time), e.g. see [0095]; the database includes “information relating to patient treatment” including “type of treatment such as type of lung volume reduction procedure, location of treatment within the lung, the results of treatment” (A database storing results of treatment which are defined by time, e.g. 3 months, must store the events by time.), e.g. see [0042]; the intervention comprise “valve placement 20, coil placement 22, and energy delivery”, e.g. see [0056]; “lung volume reduction procedures”, e.g. see [0006]). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Reference Lou (US 2023/0097895 A1) discloses multimodal analysis of imaging and clinical data for personalized therapy. Reference Basu (US 2024/0274290 A1) discloses predicting changes in risk based on interventions. 
Reference Lee (US 2021/0390700 A1) discloses referring image segmentation.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Aaisha Abdullah, whose telephone number is (571) 272-5668. The examiner can normally be reached Monday through Friday, 8:00 am - 5:00 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Peter Choi, can be reached on (469) 295-9171. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/A.A./
Examiner, Art Unit 3681

/PETER H CHOI/
Supervisory Patent Examiner, Art Unit 3681
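As technical context for the fusion limitations of claims 15 and 18: the “vector concatenation” fusion the examiner cites from Polsterl's Section 3.2 amounts to joining an image-derived feature vector and a structured-data feature vector into one vector before a single scoring layer. The sketch below is a minimal illustration of that general technique only; all function names, shapes, and values are hypothetical and are not taken from the cited references or the application.

```python
import numpy as np

rng = np.random.default_rng(0)

def image_branch(image_features: np.ndarray) -> np.ndarray:
    """Stand-in for an image characterization model: reduce to a latent vector."""
    w = rng.standard_normal((image_features.size, 4))
    return np.tanh(image_features @ w)

def tabular_branch(clinical: np.ndarray) -> np.ndarray:
    """Stand-in for a structured-data model (the 'wide' part passes features through)."""
    return clinical

def fuse(image_latent: np.ndarray, tabular: np.ndarray) -> float:
    """Fusion by vector concatenation, then one linear scoring layer."""
    fused = np.concatenate([image_latent, tabular])   # the 'fusion model' step
    w_out = rng.standard_normal(fused.size)
    return float(1.0 / (1.0 + np.exp(-(fused @ w_out))))  # single aggregated score

img = rng.standard_normal(8)       # hypothetical features from an image encoder
clin = np.array([0.6, 1.2, 0.0])   # hypothetical scaled clinical biomarkers
score = fuse(image_branch(img), tabular_branch(clin))
print(score)  # one patient-level score in (0, 1)
```

Per-timestamp inputs in a longitudinal sequence (claim 18) would each pass through the branches to yield a distinct output, with the concatenation step producing the single aggregated result.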

Prosecution Timeline

Jul 11, 2024
Application Filed
Jan 11, 2026
Non-Final Rejection — §101, §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12451247
USER INTERFACE FOR MANAGING A MULTIPLE DIAGNOSTIC ENGINE ENVIRONMENT
2y 5m to grant Granted Oct 21, 2025
Patent 12406768
SYSTEM AND METHOD FOR COLLECTION AND MANAGEMENT OF DATA FROM MANAGED AND UNMANAGED DEVICES
2y 5m to grant Granted Sep 02, 2025
Patent 12394511
Methods And Systems For Remote Analysis Of Medical Image Records
2y 5m to grant Granted Aug 19, 2025
Patent 12249425
INSULIN TITRATION ALGORITHM BASED ON PATIENT PROFILE
2y 5m to grant Granted Mar 11, 2025
Patent 12211624
METHODS AND SYSTEMS OF PREDICTING PPE NEEDS
2y 5m to grant Granted Jan 28, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
25%
Grant Probability
67%
With Interview (+41.9%)
4y 5m
Median Time to Grant
Low
PTA Risk
Based on 44 resolved cases by this examiner. Grant probability derived from career allow rate.
