Prosecution Insights
Last updated: April 19, 2026
Application No. 19/030,488

EXTERNAL BEAM RADIATION THERAPY DOSE PREDICTION SYSTEM THROUGH LATENT DIFFUSION MODEL

Non-Final OA (§101, §103)

Filed: Jan 17, 2025
Examiner: LEE, ANDREW ELDRIDGE
Art Unit: 3684
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Ever Fortune AI Co. Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 18% (At Risk)
Expected OA Rounds: 1-2
Time to Grant: 4y 7m
Grant Probability With Interview: 51%

Examiner Intelligence

Career Allow Rate: 18% (23 granted / 130 resolved; -34.3% vs TC avg)
Interview Lift: +33.5% for resolved cases with interview
Avg Prosecution: 4y 7m (41 currently pending)
Total Applications: 171 (across all art units)
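The headline figures in this panel follow from the raw counts shown above; a quick arithmetic check (illustrative only, and the tool's own rounding may differ):

```python
# Re-derive the panel's numbers from its raw counts.
granted, resolved = 23, 130
allow_rate = 100 * granted / resolved     # 17.69... -> displayed as "18%"
tc_average = allow_rate - (-34.3)         # implied by the "-34.3% vs TC avg" delta
print(round(allow_rate), round(tc_average, 1))
```

So the "-34.3% vs TC avg" delta implies a Tech Center average allowance rate of roughly 52%.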

Statute-Specific Performance

§101: 38.9% (-1.1% vs TC avg)
§103: 40.8% (+0.8% vs TC avg)
§102: 4.7% (-35.3% vs TC avg)
§112: 12.7% (-27.3% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 130 resolved cases
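As a sanity check on the chart above, the per-statute deltas can be inverted to recover the Tech Center baseline (the chart's black line). A quick sketch; the variable names are mine and the tool may compute its baseline differently:

```python
# (examiner overcome/success rate, delta vs Tech Center average), per statute
examiner = {"101": (38.9, -1.1), "103": (40.8, 0.8), "102": (4.7, -35.3), "112": (12.7, -27.3)}
tc_baseline = {s: round(rate - delta, 1) for s, (rate, delta) in examiner.items()}
print(tc_baseline)   # every statute backs out the same 40.0% baseline estimate
```

That every statute backs out the same 40.0% suggests the tool uses a single Tech Center-wide baseline estimate rather than per-statute averages.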

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
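The three-prong test above is mechanical enough to caricature in code. The sketch below is a toy heuristic for illustration only (the nonce-term list, linking phrases, and the caller-supplied structure flag are my assumptions, not how the Office actually analyzes claims); the OA applies the real test to limitations such as “a storage unit”:

```python
# Toy encoding of the MPEP § 2181 three-prong test for § 112(f) invocation.
NONCE_TERMS = {"means", "step", "unit", "module", "mechanism", "element"}  # assumed list
LINKING_PHRASES = ("for ", "configured to", "so that", "adapted to")

def invokes_112f(limitation: str, has_sufficient_structure: bool) -> bool:
    text = limitation.lower()
    words = text.replace(",", " ").split()
    prong_a = any(w in NONCE_TERMS for w in words)            # generic placeholder?
    prong_b = any(p in text for p in LINKING_PHRASES)         # modified by functional language?
    prong_c = not has_sufficient_structure                    # no structure recited?
    return prong_a and prong_b and prong_c

# The OA flags "a storage unit" (claim 1) but not the processor limitation:
print(invokes_112f("a storage unit adapted to store a training data", False))        # True
print(invokes_112f("a processor adapted to execute the following steps", False))     # False
```

On this caricature, “storage unit” trips prong (A) as a nonce term while “processor” is treated as structural, which matches which limitations the OA actually construed under 112(f).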
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:

- a storage unit in claim 1, which is being read as memory from Applicant’s specification paragraph [0010];
- a prompt converter and a denoise converter in claim 8, which are being read as software executed by the processor from Applicant’s specification paragraph [0014].

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid such interpretation (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-8 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim recites a system for prediction of a therapy plan using clinical data. The limitations of:

[… save …] a training data and the clinical data, wherein the training data comprises a plurality of medical images, a plurality of dose distribution data, and a plurality of training prompts; […] execute the following steps: imputing the training data to a […] model to execute […] model [… creation …] and generate an external beam radiation therapy plan model; inputting the clinical data and at least one prompt to the external beam radiation therapy plan model to generate the external beam radiation therapy plan,

under their broadest reasonable interpretation, cover a method of organizing human activity (i.e., managing personal behavior including following rules or instructions) via human interaction with generic computer components. That is, but for a human user interacting with a processor and storage unit, the claimed invention amounts to managing personal behavior or interaction between people; the Examiner notes, as stated in MPEP 2106.04(a)(2), that “certain activity between a person and a computer… may fall within the ‘certain methods of organizing human activity’ grouping.” For example, but for a processor and storage unit, the claim encompasses the collection and organization of training data to construct a model, and use of the constructed model to provide a treatment plan for a human user. If a claim limitation, under its broadest reasonable interpretation, covers managing personal behavior or interactions between people but for the recitation of generic computer components, then it falls within the “methods of organizing human activity” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

This judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of a processor and a storage unit, which implement the abstract idea. The processor and storage unit are recited at a high level of generality (i.e., as general-purpose computer components implementing generic computer functions; see Applicant’s Specification paragraph [0010]), such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

The claim also recites the additional elements of “store…” and “a latent diffusion training model… execute a latent diffusion model training.” The “store…” element is recited at a high level of generality (i.e., as a general means of storing data) and amounts to the mere storage of data, which is a form of extra-solution activity. The “latent diffusion training model… execute a latent diffusion model training” element is recited at a high level of generality (i.e., training a generic off-the-shelf machine learning algorithm to make predictions) and amounts to merely linking the abstract idea to a particular technological environment. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application. The claim is directed to an abstract idea.

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration into a practical application, the additional elements of a processor and storage unit performing the noted steps amount to no more than mere instructions to apply the exception using generic hardware components, which cannot provide an inventive concept (“significantly more”). Also as discussed above, the additional elements of “store…” and “a latent diffusion training model… execute a latent diffusion model training” were considered extra-solution activity and/or general linking of the abstract idea to a particular technological environment. The “store…” element, re-evaluated under the “significantly more” analysis, amounts to well-understood, routine, and conventional elements/functions: as described in MPEP 2106.05(d)(II)(iv), “Storing and retrieving information in memory” is well-understood, routine, and conventional. The “latent diffusion training model… execute a latent diffusion model training” element, re-evaluated under the “significantly more” analysis, likewise amounts to well-understood, routine, and conventional elements/functions: as described in Hatamizadeh (20240185396), see below but at least paragraphs [0067]-[0069], [0080], [0092]; Hooge (20250292447), paragraphs [0014], [0166]; and Fouts (20230149092), paragraphs [0047], [0150]; the use of machine learning to train a model to make predictions is well-understood, routine, and conventional. Well-understood, routine, and conventional elements/functions cannot provide “significantly more.” As such, the claim is not patent eligible.

Claims 2-8 are similarly rejected because they either further define the abstract idea and/or do not further limit the claim to a practical application or provide an inventive concept such that the claims would be subject matter eligible.

Claim 2 recites the additional element of “executing an encoding…”; however, these steps are recited at a high level of generality (i.e., as a general means of formatting data) and amount to the mere formatting of data to some standard, which is a form of extra-solution activity. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application. As discussed above with respect to integration into a practical application, this was considered extra-solution activity; re-evaluated under the “significantly more” analysis, it amounts to well-understood, routine, and conventional elements/functions. As described in Kuusela (20250073494), see below but at least paragraph [0078]; Hatamizadeh (20240185396), see below but at least paragraphs [0071], [0084]; Hooge (20250292447), paragraph [0015]; and Fouts (20230149092), paragraph [0148]; the encoding of data is well-understood, routine, and conventional. Well-understood, routine, and conventional elements/functions cannot provide “significantly more.” As such, the claim is not patent eligible.

Claim 3 describes preprocessing of data but does not recite any additional elements; therefore, the claim cannot provide significantly more and/or a practical application. Claims 4 and 5 describe a clinical target and condition (i.e., the data used) but do not recite any additional elements; therefore, the claims cannot provide significantly more and/or a practical application. Claims 6 and 7 further describe the images used but do not recite any additional elements; therefore, the claims cannot provide significantly more and/or a practical application. Claim 8 recites the use of various converters (i.e., software) but does not recite any additional elements; therefore, the claim cannot provide significantly more and/or a practical application.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors.
In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim(s) 1-6 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Pub. No. 20250073494 (hereafter “Kuusela”) in view of U.S. Patent Pub. No. 20240185396 (hereafter “Hatamizadeh”).

Regarding claim 1, Kuusela teaches an external beam radiation therapy dose prediction system through a [… machine learning …] model adapted to predict an external beam radiation therapy plan according to a clinical data of a patient (Kuusela: paragraphs [0005]-[0006], “a computer model to generate treatment attributes of a radiotherapy treatment plan using methods and systems… systems and methods described herein can automatically generate an optimized radiotherapy treatment plan using a machine learning language processing model”, paragraph [0068], “specific keywords that correspond to radiotherapy treatment (e.g., radiotherapy, radiation, radiation therapy, oncology, tumor, radiation oncology, linear accelerator, radiation dose, external beam radiation)”) and comprising:

a storage unit adapted to store a training data and the clinical data, wherein the training data comprises a plurality of medical images, a plurality of dose distribution data, and a plurality of training prompts (Kuusela: paragraph [0008], “stored patient data regarding the patient from memory based on the input indicating the patient”, paragraph [0017], “storing, by the processor, a training dataset… and training, by the processor, the machine learning language processing model”, paragraph [0043], “stored in the system database 110b”, paragraphs [0050]-[0053], “The machine learning language processing model can be a large language model that has been trained to generate text, image, and/or video responses to text, image, and/or video input… Examples of patient attributes the analytics server may retrieve include computed tomography (CT) scans of the patient or a tumor of the patient, images of the patient or a tumor of the patient… The user can input text, images, and/or video into the interaction interface 302 and receive responses”, paragraphs [0068]-[0070], “using a labeled training data set. The labeled training data set may include different sentences or paragraphs of text that correspond to the radiotherapy treatment of patients… radiation dose… the labeled training data set can include… case studies of radiotherapy treatment, medical journals and research papers on radiation therapy treatment, structured radiotherapy treatment plans, etc. … train the machine learning language processing model using backpropagation techniques. For example, the analytics server can insert different entries (e.g., prompts, such as sentences or text) in the training data set into the machine learning language processing model and execute the machine learning language processing model for each entry… training dataset (e.g., for each entry of text, images, and/or videos)”, paragraph [0083], “stored locally in memory of the analytics server 402”, paragraph [0096], “machine learning model that has been trained to produce free-form descriptions for a given image”. The Examiner interprets the training dataset to comprise images, distribution data, and prompts, and thus to teach what is required under the broadest reasonable interpretation);

a processor signally connected to the storage unit and adapted to execute (Kuusela: paragraph [0019], “a system can include a processor coupled to memory and configured to present a user interface providing an interaction interface between a user and a machine learning language processing model”) the following steps:

imputing the training data to a [… machine learning …] training model to execute a [… machine learning …] model training and generate an external beam radiation therapy plan model (Kuusela: paragraph [0017], “storing, by the processor, a training dataset… and training, by the processor, the machine learning language processing model”, paragraphs [0068]-[0070], “The analytics server can generate or train the machine learning language processing model. The analytics server can generate or train the machine learning language processing model using supervised learning, unsupervised learning, or semi-supervised learning techniques. For example, the analytics server can train the machine learning language processing model using a labeled training data set… The analytics server can train the machine learning language processing model until the machine learning language processing model is accurate to an accuracy threshold, at which point the analytics server can deploy the machine learning language processing model for use to collect patient attributes through the interaction interface”, paragraph [0080], “The radiotherapy plan optimizer can output one or more treatment attributes of a radiotherapy treatment plan for the patient based on the patient's attributes”);

inputting the clinical data and at least one prompt to the external beam radiation therapy plan model to generate the external beam radiation therapy plan (Kuusela: paragraphs [0006]-[0007], “a processor can use a machine learning language processing model… to automatically generate a radiotherapy treatment plan. The processor can provide an interactive interface (e.g., an interaction interface) to a computing device accessed by a user (e.g., an oncologist or other clinician) in which the user can provide inputs (e.g., prompts) that indicate different patient attributes of a patient that can be used to generate a radiotherapy treatment plan… The processor can receive input patient attributes and execute the machine learning language processing model using the patient attributes as input to generate a response… receiving, by the processor from the interaction interface, a first input comprising a first patient attribute of a patient… receiving, by the processor from the interaction interface, a second input comprising the second patient attribute of the patient; and responsive to determining the first patient attribute and the second patient attribute satisfy a plan condition, transmitting, by the processor, the first patient attribute of the patient and the second patient attribute of the patient to a radiotherapy plan optimizer, wherein the radiotherapy plan optimizer is configured to generate a radiotherapy treatment plan for the patient based on the first patient attribute and the second patient attribute”, paragraph [0043], “receive patient attributes for a patient as input and automatically generate a response (e.g., a question or command) requesting patient attributes based on the received patient attributes”. Also see paragraph [0068]. The Examiner interprets a first input (i.e., clinical data) and a second input (i.e., at least one prompt) as used to generate the radiotherapy plan).

Kuusela may not explicitly teach (underlined below for clarity): an external beam radiation therapy dose prediction system through a latent diffusion model adapted to predict an external beam radiation therapy plan according to a clinical data of a patient and comprising: imputing the training data to a latent diffusion training model to execute a latent diffusion model training and generate an external beam radiation therapy plan model.

Hatamizadeh teaches an external beam radiation therapy dose prediction system through a latent diffusion model adapted to predict an external beam radiation therapy plan according to a clinical data of a patient and comprising: imputing the training data to a latent diffusion training model to execute a latent diffusion model training and generate an external beam radiation therapy plan model (Hatamizadeh: Fig. 3, paragraph [0080], “machine learning model(s) 110 may include one or more diffusion models that attempt to learn a latent structure of a dataset by modeling the way in which data points diffuse through the latent space (e.g., as shown in FIG. 3)”, paragraph [0092], “a latent space diffusion model 300… latent diffusion model 300 may include one or more diffusion models that attempt to learn a latent structure of a dataset associated with input image 302 by modeling the way in which data points diffuse through a latent space”, paragraph [0574], “applications available for deployment pipelines 3810 may include any application that may be used for performing processing tasks on imaging data or other data from devices. In at least one embodiment, different applications may be responsible for… treatment planning, dosimetry, beam planning (or other radiation treatment procedures), and/or other analysis, image processing, or inferencing tasks”).

One of ordinary skill in the art before the effective filing date would have found it obvious to include the use of latent diffusion models as taught by Hatamizadeh within the use of machine learning and prompts for external beam radiation treatment planning as taught by Kuusela, with the motivation to “improve the performance to machine learning model(s)” (Hatamizadeh: paragraph [0071]).

Regarding claim 2, Kuusela and Hatamizadeh teach the limitations of claim 1, and further teach wherein the latent diffusion model training comprises executing an encoding training according to the plurality of medical images and executing a dose generation training according to the plurality of training prompts (Kuusela: paragraph [0017], “storing, by the processor, a training dataset… and training, by the processor, the machine learning language processing model”, paragraphs [0068]-[0070], “using a labeled training data set. The labeled training data set may include different sentences or paragraphs of text that correspond to the radiotherapy treatment of patients… radiation dose… the labeled training data set can include… case studies of radiotherapy treatment, medical journals and research papers on radiation therapy treatment, structured radiotherapy treatment plans, etc. … train the machine learning language processing model using backpropagation techniques. For example, the analytics server can insert different entries (e.g., prompts, such as sentences or text) in the training data set into the machine learning language processing model and execute the machine learning language processing model for each entry… training dataset (e.g., for each entry of text, images, and/or videos)”, paragraph [0080], “The radiotherapy plan optimizer can output one or more treatment attributes of a radiotherapy treatment plan for the patient based on the patient's attributes”; Hatamizadeh: paragraph [0084], “machine learning model(s) 110 comprise 4 encoding and/or decoding stages each having 32 transformer blocks. In at least one embodiment, machine learning model(s) 110 comprise one or more stages having distinct number of transformer blocks”, paragraph [0093], “Encoder 304 may include one or more components configured map input image 302 to a different dimensional representation.”). The motivation to combine is the same as in claim 1, incorporated herein.

Regarding claim 3, Kuusela and Hatamizadeh teach the limitations of claim 1, and further teach wherein the processor executes a preprocessing involving a plurality of medical images of the clinical data, so that the medical images of the clinical data could satisfy an input format of the external beam radiation therapy plan model (Kuusela: paragraph [0100], “The plan optimizer 416 can operate in a pre-processing step in which a list of statements generated is converted”; Hatamizadeh: paragraph [0100], “transformer 400 includes one or more layer normalization operations, such as normalization layer 402 and/or normalization layer 410”, paragraph [0253], “pre-processing”. The Examiner notes that “so that the medical images of the clinical data could satisfy an input format of the external beam radiation therapy plan model” is an intended use of the pre-processing that is not required to occur.
This feature has been fully considered by the Examiner; however, the limitation does not provide a patentable distinction over the cited prior art because it is an intended use or result of the pre-processing). The motivation to combine is the same as in claim 1, incorporated herein.

Regarding claim 4, Kuusela and Hatamizadeh teach the limitations of claim 1, and further teach wherein the plurality of training prompts comprise a clinical target and a clinical condition (Kuusela: paragraph [0006], “Examples of such patient attributes can include but are not limited to, potential methods of treating the patient, radiotherapy machine attributes, or treatment attributes (e.g., machine movement and/or dosage attributes of the treatment, frequency of the treatment, time periods between treatments, etc. … Responsive to determining the patient attributes of a patient satisfy a condition of a template (e.g., determining there is enough data to generate a radiotherapy treatment plan for the patient), the processor can input the patient attributes into a radiotherapy plan optimizer to generate a radiotherapy treatment plan for the patient”, paragraph [0094], “The patient descriptor function can be configured to receive patient information such as a CT image (or other imaging modality) and structure set information (e.g., a delineation of organs-at-risk (OARs) and target structures)”). The motivation to combine is the same as in claim 1, incorporated herein.

Regarding claim 5, Kuusela and Hatamizadeh teach the limitations of claim 4, and further teach wherein the clinical condition comprises a therapy instrument, a radiation source, a therapy technique, a radiation dose, a dose constraint of the clinical target, or a combination thereof (Kuusela: paragraph [0006], “Examples of such patient attributes can include but are not limited to, potential methods of treating the patient, radiotherapy machine attributes, or treatment attributes (e.g., machine movement and/or dosage attributes of the treatment, frequency of the treatment, time periods between treatments, etc.)”). The motivation to combine is the same as in claim 1, incorporated herein.

Regarding claim 6, Kuusela and Hatamizadeh teach the limitations of claim 1, and further teach wherein the plurality of medical images comprise a magnetic resonance imaging, a computed tomography scan, a positron emission tomography, a single photon emission computed tomography, or a combination thereof (Kuusela: paragraph [0052], “patient attributes the analytics server may retrieve include computed tomography (CT) scans of the patient or a tumor of the patient, images of the patient or a tumor of the patient”, paragraph [0092], “patient data (e.g., a CT image or a structure set)”; Hatamizadeh: paragraph [0549], “(e.g., MRI, CT Scan, X-Ray, Ultrasound, etc.)”). The motivation to combine is the same as in claim 1, incorporated herein.

Regarding claim 8, Kuusela and Hatamizadeh teach the limitations of claim 1, and further teach wherein the latent diffusion training model comprises a prompt converter and a denoise converter; the prompt converter is adapted to receive the plurality of dose distribution data and a denoise data of the denoise converter to output a prompt data; the denoise converter is adapted to receive the plurality of medical images, the plurality of dose distribution data, and the prompt data to output the denoise data (Kuusela: paragraph [0026], “convert the vector of tokens representing the first patient attribute and the second patient attribute into a structured dataset”, paragraph [0079], “the analytics server can convert the vector of tokens to a structured data set”, paragraph [0099], “The plan optimizer 416 can be configured to receive a structured cost function or field geometry definitions from the plan optimization input converter 412 and generate treatment attributes of a radiotherapy treatment plan based on the structured cost function or field geometry definitions”; Hatamizadeh: Figs. 3-4, paragraphs [0065]-[0067], “generate one or more denoised images (e.g., denoised representations of one or more input images) using one or more input images… synthesize samples using a denoising process. For example, for a data distribution”, paragraph [0080], “a denoising network that takes a noisy image and predicts a denoising direction towards a final output image”). The motivation to combine is the same as in claim 1, incorporated herein.

Claim(s) 7 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Pub. No. 20250073494 (hereafter “Kuusela”) and U.S. Patent Pub. No. 20240185396 (hereafter “Hatamizadeh”) as applied to claim 1 above, and further in view of U.S. Patent Pub. No. 20240374923 (hereafter “Sengupta”).
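The latent-diffusion training pattern the rejection maps onto Kuusela and Hatamizadeh across claims 1, 2, and 8 (an encoder over medical images plus a prompt-conditioned denoiser trained to predict injected noise) can be sketched as a toy NumPy example. Everything here is an assumption for illustration (the shapes, the linear stand-in networks, the linear noise schedule), not the applicant's actual model:

```python
# Toy sketch of latent-diffusion training: encode an image to a latent,
# add noise at a random timestep, and score a denoiser -- conditioned on a
# prompt embedding -- on how well it predicts the injected noise.
import numpy as np

rng = np.random.default_rng(0)
LATENT, PROMPT, T = 8, 4, 100

W_enc = rng.normal(size=(16, LATENT)) * 0.1                  # stand-in image encoder
W_den = rng.normal(size=(LATENT + PROMPT + 1, LATENT)) * 0.1  # stand-in denoiser

def encode(image):
    """Map an image vector into the latent space ("encoding training" input)."""
    return image @ W_enc

def denoise_step(z_noisy, prompt_emb, t):
    """Predict the noise, conditioned on the prompt embedding and timestep."""
    x = np.concatenate([z_noisy, prompt_emb, [t / T]])
    return x @ W_den

def training_loss(image, dose_prompt_emb):
    z = encode(image)
    t = rng.integers(1, T)
    eps = rng.normal(size=LATENT)                # injected noise
    alpha = 1.0 - t / T                          # simple linear noise schedule
    z_noisy = np.sqrt(alpha) * z + np.sqrt(1.0 - alpha) * eps
    eps_hat = denoise_step(z_noisy, dose_prompt_emb, t)
    return float(np.mean((eps_hat - eps) ** 2))  # denoising (noise-prediction) MSE

loss = training_loss(rng.normal(size=16), rng.normal(size=PROMPT))
print(loss)
```

A real system would replace the linear maps with a trained autoencoder and a U-Net-style denoiser and would minimize this loss by gradient descent; the sketch only shows the data flow the claims recite (images into an encoder, prompts conditioning the denoiser, denoiser output scored against the noise).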
Regarding claim 7, Kuusela and Hatamizadeh teach the limitations of claim 1, and further teach wherein the plurality of medical images comprise a plurality of clinical target contours and a plurality of adjacent organ-at-risk contours (Kuusela: paragraph [0094], "The patient descriptor function can be configured to receive patient information such as a CT image (or other imaging modality) and structure set information (e.g., a delineation of organs-at-risk (OARs) and target structures). The patient descriptor function can output a list of statements describing the geometry of the patient based on the input"). Kuusela and Hatamizadeh may not explicitly teach (underlined below for clarity): wherein the plurality of medical images comprise a plurality of clinical target contours and a plurality of adjacent organ-at-risk contours.

Sengupta teaches wherein the plurality of medical images comprise a plurality of clinical target contours and a plurality of adjacent organ-at-risk contours (Sengupta: paragraph [0029], "applying an automated contouring process to generate target structures on a reference image, the target structures representing contours to be irradiated and contours of the targets to be avoided, the reference image being the CT image used to obtain the dosimetric SPECT/CT or PET/CT image", paragraph [0058], "The images can illustrate the patient's body tissues, organs, bone, soft tissues, blood vessels, etc."). One of ordinary skill in the art before the effective filing date would have found it obvious to include using contours as taught by Sengupta with the use of machine learning and prompts for external beam radiation treatment planning as taught by Kuusela and Hatamizadeh with the motivation of "improves radiation treatment" (Sengupta: paragraph [0006]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. U.S. Patent Pub. No. 20230149092 (hereafter "Fouts") teaches generation of images using a machine learning model. U.S. Patent Pub. No. 20250292447 (hereafter "Hooge") teaches training and use of a machine learning model for image analysis/generation.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Andrew E Lee whose telephone number is (571)272-8323. The examiner can normally be reached M-Th 9-5:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Shahid Merchant, can be reached on 571-270-1360. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/A.E.L./Examiner, Art Unit 3684
/Shahid Merchant/Supervisory Patent Examiner, Art Unit 3684

Prosecution Timeline

Jan 17, 2025
Application Filed
Mar 17, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12542210
WEARABLE DEVICE AND COMPUTER ENABLED FEEDBACK FOR USER TASK ASSISTANCE
2y 5m to grant Granted Feb 03, 2026
Patent 12154077
USER INTERFACE FOR DISPLAYING PATIENT HISTORICAL DATA
2y 5m to grant Granted Nov 26, 2024
Patent 12040070
RADIOTHERAPY SYSTEM, DATA PROCESSING METHOD AND STORAGE MEDIUM
2y 5m to grant Granted Jul 16, 2024
Patent 12027251
SYSTEMS AND METHODS FOR MANAGING LARGE MEDICAL IMAGE DATA
2y 5m to grant Granted Jul 02, 2024
Patent 11942189
Drug Efficacy Prediction for Treatment of Genetic Disease
2y 5m to grant Granted Mar 26, 2024
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
18%
Grant Probability
51%
With Interview (+33.5%)
4y 7m
Median Time to Grant
Low
PTA Risk
Based on 130 resolved cases by this examiner. Grant probability derived from career allow rate.
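The projection figures above follow directly from the examiner's career counts shown earlier (23 granted of 130 resolved, +33.5-point interview lift). A minimal sketch of the arithmetic, assuming the "with interview" figure is simply the base allow rate plus the reported lift in percentage points (rounding explains the displayed 18% and 51%):

```python
# Career allow rate: granted cases over all resolved cases.
granted = 23
resolved = 130
allow_rate = granted / resolved          # ~0.177, displayed as 18%

# Reported interview lift of +33.5 percentage points, assumed additive.
interview_lift = 0.335
with_interview = allow_rate + interview_lift  # ~0.512, displayed as 51%

print(f"Base grant probability: {allow_rate:.1%}")
print(f"With interview:         {with_interview:.1%}")
```

Note the additive-lift assumption is inferred from the displayed numbers (18% + 33.5 ≈ 51%); the tool may compute the interview-conditioned rate directly from its case data instead.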
