Prosecution Insights
Last updated: April 19, 2026
Application No. 18/586,533

IMAGE DATA PROCESSING AND TARGET STRUCTURE TRACKING FOR RADIATION THERAPY

Non-Final OA (§103, §112)
Filed: Feb 25, 2024
Examiner: WELLS, HEATH E
Art Unit: 2664
Tech Center: 2600 (Communications)
Assignee: Siemens Healthineers International AG
OA Round: 1 (Non-Final)

Grant Probability: 75% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 3y 5m
Grant Probability with Interview: 93%

Examiner Intelligence

Career Allow Rate: 75% (above average): 58 granted / 77 resolved, +13.3% vs TC avg
Interview Lift: +18.1% (strong), measured across resolved cases with interview
Typical Timeline: 3y 5m average prosecution; 46 applications currently pending
Career History: 123 total applications across all art units
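The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic follows; the 58/77 counts come from this report, but the with/without-interview split is hypothetical, since the report publishes only the resulting lift:

```python
# Allowance-rate metrics from resolved-case counts.
# 58 granted / 77 resolved is from the report above; the interview
# split below is hypothetical, for illustration only.

def allow_rate(granted: int, resolved: int) -> float:
    """Fraction of resolved applications that ended in a grant."""
    return granted / resolved

career = allow_rate(58, 77)
print(f"Career allow rate: {career:.1%}")  # prints "Career allow rate: 75.3%"

# Interview lift: allow rate with an interview minus allow rate without.
lift = allow_rate(18, 20) - allow_rate(40, 57)  # hypothetical split
print(f"Interview lift: {lift:+.1%}")
```

The same two-line computation underlies the 93%-with-interview headline: it is the with-interview allow rate taken on its own.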

Statute-Specific Performance

§101: 17.8% (-22.2% vs TC avg)
§103: 62.8% (+22.8% vs TC avg)
§102: 2.4% (-37.6% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)

Tech Center averages are estimates. Based on career data from 77 resolved cases.
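Each delta above is the examiner's rate minus a Tech Center baseline. Reversing that arithmetic (rates and deltas taken from the table; the baseline is back-derived here, not independently sourced) shows the implied TC average is a flat 40% for all four statutes:

```python
# Back out the implied Tech Center average per statute:
# TC average = examiner rate - reported delta vs TC average.
examiner_rate = {"101": 0.178, "102": 0.024, "103": 0.628, "112": 0.138}
delta_vs_tc   = {"101": -0.222, "102": -0.376, "103": 0.228, "112": -0.262}

tc_average = {s: examiner_rate[s] - delta_vs_tc[s] for s in examiner_rate}
for s in sorted(tc_average):
    print(f"§{s}: implied TC average {tc_average[s]:.1%}")  # 40.0% for each
```

A single shared baseline suggests the deltas were computed against one overall Tech Center estimate rather than per-statute averages.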

Office Action

Grounds of rejection: §103 and §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The IDSs dated 25 February 2024 and 16 September 2025 have been considered and placed in the application file.

Specification

The specification is objected to because, on the first page, the reference to related applications is blank. Under 37 CFR 1.71 and MPEP §§ 608.01, 2161, and 2162, the specification must be written with such particularity as to enable any person skilled in the pertinent art or science to make and use the invention without extensive experimentation, and must clearly convey enough information about the invention to show that applicant invented the subject matter that is claimed. A substitute specification in proper idiomatic English and in compliance with 37 CFR 1.52(a) and (b) is required. The substitute specification must be accompanied by a statement that it contains no new matter.

Claim Interpretation

Under MPEP 2143.03, "All words in a claim must be considered in judging the patentability of that claim against the prior art." In re Wilson, 424 F.2d 1382, 1385, 165 USPQ 494, 496 (CCPA 1970). As a general matter, the grammar and the ordinary meaning of claim terms, as understood by one having ordinary skill in the art, dictate whether, and to what extent, the language limits the claim scope. Language that suggests or makes a feature or step optional, but does not require it, does not limit the scope of a claim under the broadest reasonable claim interpretation. In addition, when a claim requires selection of an element from a list of alternatives, the prior art teaches the element if one of the alternatives is taught by the prior art. See, e.g., Fresenius USA, Inc. v. Baxter Int'l, Inc., 582 F.3d 1288, 1298, 92 USPQ2d 1163, 1171 (Fed. Cir. 2009).
Claims 2, 6, 8, 14, 16 and 20 recite "or." Since "or" is disjunctive, any one of the elements found in the prior art is sufficient to reject the claim. While citations have been provided for completeness and rapid prosecution, only one element is required. On balance, the disjunctive interpretation (one of A, B, or C) appears to enjoy the most specification support, and for that reason it is adopted for the purposes of this Office Action. Applicant's comments and/or amendments on this issue are invited to clarify the claim language and the prosecution history.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. § 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 1 and 15 are rejected under 35 U.S.C. § 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention. Claims 1 and 15 recite "the material property data is matchable against a template." It is unclear whether the material property data is required to be matched, could be matched, or is merely required to be of a data type that could be matched. For the purpose of prior art analysis, the Examiner assumes the material property data is matched against a template. Appropriate correction is required.

1st Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering the patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention, in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 4, 6-8, 11-12, 15, 18 and 20-21 are rejected under 35 U.S.C. 103 as obvious over US Patent Publication 2023/0036916 A1 (Stringer et al.) in view of US Patent Publication 2022/0309294 A1 (Paysan et al.).

[Stringer et al., Fig. 3: using a physical template to create a treatment template with material property data.]

Claim 1

Regarding Claim 1, Stringer et al. teach a method for a computer system to perform image data processing for target structure tracking ("a method of mitigating off-target exposure to radiotherapy in a subject in need thereof is provided," paragraph [0186]), wherein the method comprises: obtaining treatment image data associated with a target structure of a patient requiring radiation therapy, wherein the treatment image data is acquired using an imaging system during a treatment phase of the radiation therapy ("In either type of treatment, a computerized tomographic (CT) x-ray image of the body is obtained in order to locate the tumor and determine the type of tissue through which the x-ray or proton beam must penetrate to reach the cancerous tumor," paragraph [0003], where the treatment phase includes locating current tumors); and generating material property data representing a particular material property associated with the target structure ("These methods require discretization, digitization, segmentation, and radiometric characterization (cross sectional, electron density, and stopping power) of patient imaging data," paragraph [0088]), wherein the material property data is matchable against a template that also represents the particular material property for tracking the target structure based on the particular material property during the treatment phase ("On the other hand, energy transfer for charged particles is directly proportional to particle mass, energy, and material properties. The linear energy transfer (LET) of the charged particle is characterized by the relationship between energy transferred from the particle to the material per unit path length," paragraph [0090], and "In another embodiment, a method of improving certainty of a radiotherapeutic dose delivered to a human subject is provided, the method comprising calibrating a radiation-generating therapeutic device using the calibration phantom according to any of the embodiments disclosed herein, prior to administering the radiotherapeutic dose to the human subject," paragraph [0187], where the radiation-generating therapeutic device is a template).

[Paysan et al., Fig. 3: using an AI engine to match a template.]

Stringer et al. do not explicitly teach all of artificial intelligence engines. However, Paysan et al. teach processing the treatment image data using an artificial intelligence (AI) engine ("intelligent end-to-end target structure tracking system may employ machine learning models (such as neural networks), filters, algorithms, and various combinations of input modalities (e.g., forward projection models and back projection models) to determine the location of PTVs and/or OARs using projection data," paragraph [0023], where neural networks are artificial intelligence engines).

Therefore, taking the teachings of Stringer et al. and Paysan et al. as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify "Calibration Phantom for Radiotherapy" as taught by Stringer et al. to use domain tracking of target structures as taught by Paysan et al. The suggestion/motivation for doing so would have been that "Manual template extraction may include pre-processing steps such as segmentation, or the generation of structure data (such as PTV data) from the simulation images.
The pre-processed template image may be ingested by a system that performs PTV tracking (e.g., determining the position of the PTV in space). However, these conventional methods are undesirable because human error and/or bias may result in large uncertainty margins associated with PTV and/or OAR tracking. In addition, conventional methods of tracking systems are limited in that they locate a PTV and/or OAR in two-dimensional space," as noted by the Paysan et al. disclosure in paragraph [0004], which also motivates the combination because the combination would predictably have a higher productivity, as there is a reasonable expectation that an electronic solution will be more efficient than a manual solution; and/or because doing so merely combines prior art elements according to known methods to yield predictable results.

The rejection of method claim 1 above applies mutatis mutandis to the corresponding limitations of method claim 8 and apparatus claim 15, while noting that the rejection above cites to both device and method disclosures. Claims 8 and 15 are mapped below for clarity of the record and to specify any new limitations not included in claim 1.

Claim 4

Regarding claim 4, Stringer et al. teach the method of claim 1, as noted above. Stringer et al. do not explicitly teach all of AI engines. However, Paysan et al. teach wherein processing the treatment image data comprises: obtaining prior knowledge data that is generated based on planning image data associated with the target structure, wherein the planning image data is acquired prior to the treatment phase of the radiation therapy ("The software solutions may analyze current imaging information (e.g., real-time projection data), leverage temporal information about patient motion, and analyze radiation therapy treatment planning data such as historic simulation (or planning) data, to predict a location of the PTV and/or OAR structures throughout the radiation therapy treatment planning process," paragraph [0022]); and processing the treatment image data and the prior knowledge data using the AI engine to generate the material property data representing the particular material property ("The software solutions may analyze current imaging information (e.g., real-time projection data), leverage temporal information about patient motion, and analyze radiation therapy treatment planning data such as historic simulation (or planning) data, to predict a location of the PTV and/or OAR structures throughout the radiation therapy treatment planning process," paragraph [0022]). Stringer et al. and Paysan et al. are combined as per claim 1.

Claim 6

Regarding claim 6, Stringer et al. teach the method of claim 4, wherein the method further comprises: generating the prior knowledge data that includes prior material property data representing the particular material property based on the planning image data or transformed image data that is generated based on the planning image data ("the method comprising calibrating a radiation-generating therapeutic device using the calibration phantom according to any of the embodiments disclosed herein, prior to administering the radiotherapeutic dose to the human subject," paragraph [0187], where calibrating is prior knowledge data).

Claim 7

Regarding claim 7, Stringer et al. teach the method of claim 4, as noted above. Stringer et al. do not explicitly teach all of AI engines. However, Paysan et al. teach wherein processing the treatment image data comprises: processing the treatment image data and the prior knowledge data using the AI engine that includes an encoder and a decoder to generate the material property data representing the particular material property ("The analytics server may apply a correlation filter to determine the correlation between the template image and the 3D image data. The analytics server may perform feature-based comparisons of the template image and the 3D image (e.g., using neural networks)," paragraph [0058]). Stringer et al. and Paysan et al. are combined as per claim 1.

Claim 8

Regarding claim 8, Stringer et al. teach a method for a computer system to perform target structure tracking for radiation therapy ("a method of mitigating off-target exposure to radiotherapy in a subject in need thereof is provided," paragraph [0186]), wherein the method comprises: obtaining material property data that represents a particular material property associated with a target structure of a patient requiring radiation therapy, wherein the material property data is generated using an artificial intelligence (AI) engine based on treatment image data acquired during a treatment phase of the radiation therapy ("In either type of treatment, a computerized tomographic (CT) x-ray image of the body is obtained in order to locate the tumor and determine the type of tissue through which the x-ray or proton beam must penetrate to reach the cancerous tumor," paragraph [0003], where the treatment phase includes locating current tumors); and based on the material property data and the template, performing template matching during the treatment phase to track the target structure based on the particular material property ("On the other hand, energy transfer for charged particles is directly proportional to particle mass, energy, and material properties. The linear energy transfer (LET) of the charged particle is characterized by the relationship between energy transferred from the particle to the material per unit path length," paragraph [0090], and "In another embodiment, a method of improving certainty of a radiotherapeutic dose delivered to a human subject is provided, the method comprising calibrating a radiation-generating therapeutic device using the calibration phantom according to any of the embodiments disclosed herein, prior to administering the radiotherapeutic dose to the human subject," paragraph [0187], where the radiation-generating therapeutic device is a template).

Stringer et al. do not explicitly teach all of transformed image data. However, Paysan et al. teach obtaining a template that also represents the particular material property associated with the target structure, wherein the template is generated based on (a) planning image data that is acquired prior to the treatment phase or (b) transformed image data that is generated based on the planning image data ("In step 208, the analytics server may extract a template post processed 3D feature map from a diagnostic image, treatment simulation image, treatment planning image, or patient setup image," paragraph [0055]). Stringer et al. and Paysan et al. are combined as per claim 1.

Claim 11

Regarding claim 11, Stringer et al. teach the method of claim 1, as noted above. Stringer et al. do not explicitly teach all of AI engines. However, Paysan et al.
teach wherein the method further comprises: processing the treatment image data using the AI engine to generate the material property data ("The software solutions may analyze current imaging information (e.g., real-time projection data), leverage temporal information about patient motion, and analyze radiation therapy treatment planning data such as historic simulation (or planning) data, to predict a location of the PTV and/or OAR structures throughout the radiation therapy treatment planning process," paragraph [0022]). Stringer et al. and Paysan et al. are combined as per claim 1.

Claim 12

Regarding claim 12, Stringer et al. teach the method of claim 11, wherein processing the treatment image data comprises: obtaining prior knowledge data that is generated based on the planning image data associated with the target structure ("In either type of treatment, a computerized tomographic (CT) x-ray image of the body is obtained in order to locate the tumor and determine the type of tissue through which the x-ray or proton beam must penetrate to reach the cancerous tumor," paragraph [0003], where the treatment phase includes locating current tumors); and generating the material property data representing the particular material property ("On the other hand, energy transfer for charged particles is directly proportional to particle mass, energy, and material properties. The linear energy transfer (LET) of the charged particle is characterized by the relationship between energy transferred from the particle to the material per unit path length," paragraph [0090], and "In another embodiment, a method of improving certainty of a radiotherapeutic dose delivered to a human subject is provided, the method comprising calibrating a radiation-generating therapeutic device using the calibration phantom according to any of the embodiments disclosed herein, prior to administering the radiotherapeutic dose to the human subject," paragraph [0187], where the calibration phantom is a template), as noted above. Stringer et al. do not explicitly teach all of AI engines. However, Paysan et al. teach processing the treatment image data and the prior knowledge data using the AI engine ("The analytics server may apply a correlation filter to determine the correlation between the template image and the 3D image data. The analytics server may perform feature-based comparisons of the template image and the 3D image (e.g., using neural networks)," paragraph [0058]). Stringer et al. and Paysan et al. are combined as per claim 1.

Claim 15

Regarding claim 15, Stringer et al. teach a computer system ("a method of mitigating off-target exposure to radiotherapy in a subject in need thereof is provided," paragraph [0186]) operable to: obtain treatment image data associated with a target structure of a patient requiring radiation therapy, wherein the treatment image data is acquired using an imaging system during a treatment phase of the radiation therapy ("In either type of treatment, a computerized tomographic (CT) x-ray image of the body is obtained in order to locate the tumor and determine the type of tissue through which the x-ray or proton beam must penetrate to reach the cancerous tumor," paragraph [0003], where the treatment phase includes locating current tumors); and generate material property data representing a particular material property associated with the target structure ("These methods require discretization, digitization, segmentation, and radiometric characterization (cross sectional, electron density, and stopping power) of patient imaging data," paragraph [0088]), wherein the material property data is matchable against a template that also represents the particular material property for tracking the target structure based on the particular material property during the treatment phase ("On the other hand, energy transfer for charged particles is directly proportional to particle mass, energy, and material properties. The linear energy transfer (LET) of the charged particle is characterized by the relationship between energy transferred from the particle to the material per unit path length," paragraph [0090], and "In another embodiment, a method of improving certainty of a radiotherapeutic dose delivered to a human subject is provided, the method comprising calibrating a radiation-generating therapeutic device using the calibration phantom according to any of the embodiments disclosed herein, prior to administering the radiotherapeutic dose to the human subject," paragraph [0187], where the radiation-generating therapeutic device is a template).

Stringer et al. do not explicitly teach all of AI engines. However, Paysan et al. teach a processor ("The systems and methods described herein enable a server or a processor associated with (e.g., located in) a clinic to determine a location of PTV(s) and/or OAR(s) in a patient's body," paragraph [0116]); and a non-transitory computer-readable medium having stored thereon instructions that, when executed by the processor ("When implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer-readable or processor-readable storage medium," paragraph [0121]), cause the processor to perform the following: process the treatment image data using an artificial intelligence (AI) engine ("intelligent end-to-end target structure tracking system may employ machine learning models (such as neural networks), filters, algorithms, and various combinations of input modalities (e.g., forward projection models and back projection models) to determine the location of PTVs and/or OARs using projection data," paragraph [0023], where neural networks are artificial intelligence engines). Stringer et al. and Paysan et al. are combined as per claim 1.

Claim 18

Regarding claim 18, Stringer et al. teach the computer system of claim 15, as noted above. Stringer et al.
do not explicitly teach all of AI engines. However, Paysan et al. teach wherein the instructions for processing the treatment image data cause the processor to: obtain prior knowledge data that is generated based on planning image data associated with the target structure, wherein the planning image data is acquired prior to the treatment phase of the radiation therapy ("The software solutions may analyze current imaging information (e.g., real-time projection data), leverage temporal information about patient motion, and analyze radiation therapy treatment planning data such as historic simulation (or planning) data, to predict a location of the PTV and/or OAR structures throughout the radiation therapy treatment planning process," paragraph [0022]); and process the treatment image data and the prior knowledge data using the AI engine to generate the material property data representing the particular material property (same disclosure, paragraph [0022]). Stringer et al. and Paysan et al. are combined as per claim 1.

Claim 20

Regarding claim 20, Stringer et al. teach the computer system of claim 18, wherein the instructions further cause the processor to: generate the prior knowledge data that includes prior material property data representing the particular material property based on the planning image data or transformed image data that is generated based on the planning image data ("the method comprising calibrating a radiation-generating therapeutic device using the calibration phantom according to any of the embodiments disclosed herein, prior to administering the radiotherapeutic dose to the human subject," paragraph [0187], where calibrating is prior knowledge data).

Claim 21

Regarding claim 21, Stringer et al. teach the computer system of claim 18, as noted above. Stringer et al. do not explicitly teach all of AI engines. However, Paysan et al. teach wherein the instructions for processing the treatment image data cause the processor to: process the treatment image data and the prior knowledge data using the AI engine that includes an encoder and a decoder to generate the material property data representing the particular material property ("The analytics server may apply a correlation filter to determine the correlation between the template image and the 3D image data. The analytics server may perform feature-based comparisons of the template image and the 3D image (e.g., using neural networks)," paragraph [0058]). Stringer et al. and Paysan et al. are combined as per claim 1.

2nd Claim Rejections - 35 USC § 103

Claims 2-3, 5, 9-10, 13-14, 16-17 and 19 are rejected under 35 U.S.C. 103 as obvious over US Patent Publication 2023/0036916 A1 (Stringer et al.) and US Patent Publication 2022/0309294 A1 (Paysan et al.) in further view of US Patent Publication 2018/0035965 A1 (Schmidt et al.).

Claim 2

Regarding Claim 2, Stringer et al. and Paysan et al. teach the method of claim 1, as noted above. Stringer et al. and Paysan et al. do not explicitly teach all of material density.

[Schmidt et al., Fig. 1: using an AI network for object composition determination.]

However, Schmidt et al. teach wherein processing the treatment image data comprises: processing the treatment image data using the AI engine to generate the material property data representing the particular material property in the form of material density ("In a security imaging application, the present system and method may be used to determine maps of effective atomic numbers and effective density for all pixels in a security scan image," paragraph [0035]) or material thickness ("The multi-spectral x-ray projection is then processed with an artificial neural network to determine composition information about the object in terms of equivalent thickness of at least one basis material," paragraph [0007]).

Therefore, taking the teachings of Stringer et al., Paysan et al. and Schmidt et al. as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify "Calibration Phantom for Radiotherapy" as taught by Stringer et al. and domain tracking of target structures as taught by Paysan et al. to use "Material Decomposition of Multi-Spectral X-Ray Projections Using Neural Networks" as taught by Schmidt et al. The suggestion/motivation for doing so would have been that "Existing technologies for material decomposition from multi-spectral x-ray data include compensatory algorithms for non-ideal detector responses based on the laws governing physical processes. Models of the detector's energy response and pulse pileup must be used. These models are parameterized based on the specific brand and type of the detector and may require radioactive isotopes for determining the parameters," as noted by the Schmidt et al.
disclosure in paragraph [0004], which also motivates combination because the combination would predictably have a higher efficiency as there is a reasonable expectation that a neural network approach to determining parameters will be more efficient and customized; and/or because doing so merely combines prior art elements according to known methods to yield predictable results. Claim 3 Regarding claim 3, Stringer et al. teach the method of claim 1, as noted above. Stringer et al. and Paysan et al. do not explicitly teach all of effective atomic numbers. However, Schmidt et al. teach wherein processing the treatment image data comprises: processing at least the treatment image data using the AI engine to generate the material property data representing the particular material property in the form of effective atomic number associated with the target structure ("In a security imaging application, the present system and method may be used to determine maps of effective atomic numbers and effective density for all pixels in a security scan image," paragraph [0035]). Stringer et al., Paysan et al. and Schmidt et al. are combined as per claim 2. Claim 5 Regarding claim 5, Stringer et al. teach the method of claim 4, as noted above. Stringer et al. and Paysan et al. do not explicitly teach all of polychromatic simulation. However, Schmidt et al. teach wherein the method further comprises: generating the prior knowledge data that includes simulated projection image data by performing polychromatic simulation based on transformed image data that is generated based on the planning image data ("a method of processing x-ray images comprises training an artificial neural network to process multi-spectral x-ray projections to determine composition information in terms of equivalent thickness of at least one basis material," paragraph [0006] where processing multi-spectral x-ray projections is polychromatic simulation). Stringer et al., Paysan et al. and Schmidt et al. 
are combined as per claim 2. Claim 9 Regarding claim 9, Stringer et al. teach the method of claim 1, as noted above. Stringer does not explicitly teach all of template matching. However, Paysan et al. teach wherein performing the template matching comprises: performing the template matching based on the material property data ("In some configurations, the analytics server uses any suitable means for template matching," paragraph [0058]). Stringer et al. and Paysan et al. do not explicitly teach all of material thickness. However, Schmidt et al. teach the template that both represent the particular material property in the form of material thickness associated with the target structure ("The multi-spectral x-ray projection is then processed with an artificial neural network to determine composition information about the object in terms of equivalent thickness of at least one basis material," paragraph [0007]). Stringer et al., Paysan et al. and Schmidt et al. are combined as per claim 2. Claim 10 Regarding claim 10, Stringer et al. teach the method of claim 1, as noted above. Stringer does not explicitly teach all of template matching. However, Paysan et al. teach wherein performing the template matching comprises: performing the template matching based on the material property data ("In some configurations, the analytics server uses any suitable means for template matching," paragraph [0058]). Stringer et al. and Paysan et al. do not explicitly teach all of effective atomic numbers. However, Schmidt et al. teach the template that both represent the particular material property in the form of effective atomic number associated with the target structure ("In a security imaging application, the present system and method may be used to determine maps of effective atomic numbers and effective density for all pixels in a security scan image," paragraph [0035]). Stringer et al., Paysan et al. and Schmidt et al. are combined as per claim 2. 
Claim 13

Regarding claim 13, Stringer et al. teach the method of claim 12, as noted above. Stringer et al. and Paysan et al. do not explicitly teach polychromatic simulation. However, Schmidt et al. teach wherein processing the treatment image data comprises: generating the prior knowledge data that includes simulated projection image data by performing polychromatic simulation based on the transformed image data ("a method of processing x-ray images comprises training an artificial neural network to process multi-spectral x-ray projections to determine composition information in terms of equivalent thickness of at least one basis material," paragraph [0006], where processing multi-spectral x-ray projections is polychromatic simulation). Stringer et al., Paysan et al. and Schmidt et al. are combined as per claim 2.

Claim 14

Regarding claim 14, Stringer et al. teach the method of claim 12, as noted above, and further teach wherein processing the treatment image data comprises: generating the prior knowledge data that includes prior material property data representing the particular material property based on the planning image data or the transformed image data ("the method comprising calibrating a radiation-generating therapeutic device using the calibration phantom according to any of the embodiments disclosed herein, prior to administering the radiotherapeutic dose to the human subject," paragraph [0187], where calibrating corresponds to the prior knowledge data).

Claim 16

Regarding claim 16, Stringer et al. teach the computer system of claim 15, as noted above. Stringer et al. and Paysan et al. do not explicitly teach material density. However, Schmidt et al.
teach wherein the instructions for processing the treatment image data cause the processor to: process the treatment image data using the AI engine to generate the material property data representing the particular material property in the form of material density ("In a security imaging application, the present system and method may be used to determine maps of effective atomic numbers and effective density for all pixels in a security scan image," paragraph [0035]) or material thickness ("The multi-spectral x-ray projection is then processed with an artificial neural network to determine composition information about the object in terms of equivalent thickness of at least one basis material," paragraph [0007]). Stringer et al., Paysan et al. and Schmidt et al. are combined as per claim 2.

Claim 17

Regarding claim 17, Stringer et al. teach the computer system of claim 15, as noted above. Stringer et al. and Paysan et al. do not explicitly teach effective atomic numbers. However, Schmidt et al. teach wherein the instructions for processing the treatment image data cause the processor to: process at least the treatment image data using the AI engine to generate the material property data representing the particular material property in the form of effective atomic number associated with the target structure ("In a security imaging application, the present system and method may be used to determine maps of effective atomic numbers and effective density for all pixels in a security scan image," paragraph [0035]). Stringer et al., Paysan et al. and Schmidt et al. are combined as per claim 2.

Claim 19

Regarding claim 19, Stringer et al. teach the computer system of claim 18, as noted above. Stringer et al. and Paysan et al. do not explicitly teach polychromatic simulation. However, Schmidt et al.
teach wherein the instructions further cause the processor to: generate the prior knowledge data that includes simulated projection image data by performing polychromatic simulation based on transformed image data that is generated based on the planning image data ("a method of processing x-ray images comprises training an artificial neural network to process multi-spectral x-ray projections to determine composition information in terms of equivalent thickness of at least one basis material," paragraph [0006], where processing multi-spectral x-ray projections is polychromatic simulation). Stringer et al., Paysan et al. and Schmidt et al. are combined as per claim 2.

References Cited

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

US Patent Publication 2024/0061908 A1 to Paysan et al. discloses a dual-domain target structure tracking end-to-end system that receives projection data in one dimension or two dimensions and a three-dimensional simulation image. The end-to-end system extracts a template feature map from the simulation image using segmentation. The end-to-end system extracts features from the projection data, transforms the features of the projection data into three-dimensional space, and sequences the three-dimensional space to generate a three-dimensional feature map.

US Patent Publication 2020/0030634 A1 to Van Heteren et al. discloses multiple imaging X-ray sources and X-ray imaging devices that enable the acquisition of volumetric image data for the target volume over a relatively short rotational arc, for example 30 degrees or less. Therefore, intra-fraction motion can be detected in near-real time, for example within about one second or less. The radiation therapy system can perform image-guided radiation therapy (IGRT) that monitors intra-fraction motion using X-ray imaging.
Detected anatomical variations can then either be compensated for, via patient repositioning and/or treatment modification, or the current treatment can be aborted.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HEATH E WELLS, whose telephone number is (703) 756-4696. The examiner can normally be reached Monday-Friday, 8:00-4:00.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ms. Jennifer Mehmood, can be reached at 571-272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Heath E. Wells/
Examiner, Art Unit 2664
Date: 13 January 2026

Prosecution Timeline

Feb 25, 2024
Application Filed
Jan 13, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602755
DEEP LEARNING-BASED HIGH RESOLUTION IMAGE INPAINTING
2y 5m to grant Granted Apr 14, 2026
Patent 12597226
METHOD AND SYSTEM FOR AUTOMATED PLANT IMAGE LABELING
2y 5m to grant Granted Apr 07, 2026
Patent 12591979
IMAGE GENERATION METHOD AND DEVICE
2y 5m to grant Granted Mar 31, 2026
Patent 12588876
TARGET AREA DETERMINATION METHOD AND MEDICAL IMAGING SYSTEM
2y 5m to grant Granted Mar 31, 2026
Patent 12586363
GENERATION OF PLURAL IMAGES HAVING M-BIT DEPTH PER PIXEL BY CLIPPING M-BIT SEGMENTS FROM MUTUALLY DIFFERENT POSITIONS IN IMAGE HAVING N-BIT DEPTH PER PIXEL
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
75%
Grant Probability
93%
With Interview (+18.1%)
3y 5m
Median Time to Grant
Low
PTA Risk
Based on 77 resolved cases by this examiner. Grant probability derived from career allow rate.
