Prosecution Insights
Last updated: April 19, 2026
Application No. 18/357,991

METHOD OF GENERATING TRAINED MODEL, MACHINE LEARNING SYSTEM, PROGRAM, AND MEDICAL IMAGE PROCESSING APPARATUS

Status: Final Rejection (§103)
Filed: Jul 24, 2023
Examiner: YANG, WEI WEN
Art Unit: 2662
Tech Center: 2600 (Communications)
Assignee: Fujifilm Corporation
OA Round: 2 (Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 3-4
Expected Time to Grant: 2y 8m
Grant Probability with Interview: 93%

Examiner Intelligence

Career Allow Rate: 82% (above average; 539 granted / 657 resolved; +20.0% vs Tech Center average)
Interview Lift: +10.9% (moderate; allowance rate across resolved cases with vs. without an interview)
Typical Timeline: 2y 8m average prosecution; 34 applications currently pending
Career History: 691 total applications across all art units
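The headline figures above are simple ratios over the examiner's career counts; for instance, the 82% allow rate is 539 grants out of 657 resolved cases. A minimal sketch of that arithmetic (the counts come from the section above; the helper function is illustrative, not part of any real analytics API):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

# Counts quoted in the Examiner Intelligence section above.
granted, resolved = 539, 657
print(f"Career allow rate: {allow_rate(granted, resolved):.1f}%")  # 82.0%

# The 93% with-interview probability above implies roughly an
# 11-point lift over the 82% baseline, consistent with the +10.9%
# interview-lift figure.
```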

Statute-Specific Performance

§101: 8.1% (-31.9% vs TC avg)
§103: 72.5% (+32.5% vs TC avg)
§102: 11.1% (-28.9% vs TC avg)
§112: 7.5% (-32.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 657 resolved cases.

Office Action

§103
DETAILED ACTION

Response to Arguments

The amendments and arguments filed 12/16/2025 have been entered and made of record. They have been considered but are unpersuasive.

Re Claim 1, Applicant contends (at page 8 of the Arguments of 12/16/2025) that the cited references, Xu as modified by HIBBARD, do not disclose “a first discriminator … receive an input data including (1) first image data, and (2) coordination information of a human body coordination system”; in particular, that Xu discloses only “a first discriminator … receive an input data including (1) first image data”, but not the claimed (2) coordination information.

However, the Examiner disagrees, because Xu clearly discloses a first discriminator that receives input data including (1) first image data and (2) coordination information (see XU: e.g., --a method, system, and transitory or non-transitory computer readable medium are provided for training a model to generate a synthetic computed tomography (sCT) image from a cone-beam computed tomography (CBCT) image, comprising: receiving a CBCT image of a subject as an input of a generative model; and training the generative model, via first and second paths, in a generative adversarial network (GAN) to process the CBCT image to provide first and second synthetic computed tomography (sCT) images corresponding to the CBCT image as outputs of the generative model, the first path comprising a first set of one or more deformable offset layers and a first set of one or more convolution layers, the second path comprising the first set of the one or more convolution layers without the first set of the one or more deformable offset layers.
[0017] In some implementations, the GAN is trained using a cycle generative adversarial network (CycleGAN) comprising the generative model and a discriminative model, wherein the generative model is a first generative model and the discriminative model is a first discriminative model, further comprising: training a second generative model to process produced first and second sCT images as inputs and provide first and second cycle-CBCT images as outputs via third and fourth paths, respectively, the third path comprising a second set of the one or more deformable offset layers and a second set of the one or more convolution layers, the fourth path comprising the second set of the one or more convolution layers without the second set of the one or more deformable offset layers; and training a second discriminative model to classify the first cycle-CBCT image as a synthetic or a real CBCT image. [0018] In some implementations, the CycleGAN comprises first and second portions to train the first generative model, further comprising: obtaining a training CBCT image that is paired with a real CT image; transmitting the training CBCT image to the input of the first generative model via the first and second paths to output the first and second synthetic CT images; receiving the first synthetic CT image at the input of the first discriminative model;--, in [0016]-[0018], and see Fig. 4, and, --training and use of a generative adversarial network adapted for generating a sCT image from a received CBCT image--, in [0024]; and, --the single generator is trained to convert the CBCT image appearance to CT image in a way that removes artefacts in original CBCT images and converts to the correct CT numbers while, at the same time, being trained based on some level of structure deformation. 
When the shape distribution or other feature distribution in CT images domain have large amount of differences compared to the original CBCT images domain--, [0036], and, --Radiotherapy system 100 may use a GAN to generate sCT images from a received CBCT image. The sCT image may represent an improved CBCT image with sharp-edge looking features that are akin to real CT images. Radiotherapy system 100 may thus produce sCT type of images for medical analysis in real time using lower quality CBCT images that are captured of a region of a subject.--, in [0038]-[0040], and, the radiotherapy processing computing system 110 may obtain image data 152 from the image data source 150 (e.g., CBCT images)….computing system 110 may instruct a CBCT device to obtain an image of a target region of a subject (e.g., a brain region). Computing system 110 may store the image data in storage device 116 with an associated indication of a time and target region captured by the CBCT image.--, in [0046]-[0047]; --0052] In an example, the image data 152 may include one or more MRI image (e.g., 2D MRI, 3D MRI, 2D streaming MRI, 4D MRI, 4D volumetric Mill, 4D cine MRI, etc.), functional MRI images (e.g., fMRI, DCE-MRI, diffusion MRI),…etc.,--, in [0052]-[0053], and, -- a true CBCT image 602 is received and provided to multiple deformable offset layers 660A in a first path. The CBCT image 602 passes through the deformable offset layers 660A in an interleaved manner with convolution blocks in the convolution blocks 661A…..first generation result 612 is an sCT image produced with offset layers and second generation result 614 is an sCT image produced without offset layers. The result 612 that includes the sCT image produced with the offset layers is provided to the first discriminator model 630 for the CT domain while result 614 is not provided to the first discriminator model 630. [0119] Referring back to FIG. 
6A, first generation results 612 (e.g., sCT image) may also be concurrently provided to the second generator model 608 together with the second generation results 614 via third and fourth paths, respectively.--, in [0118]-[0119]).

Xu's disclosure of “being trained based on some level of structure deformation. When the shape distribution or other feature distribution in CT images domain have large amount of differences compared to the original CBCT images domain” ([0036]) reads on the claimed input of (2) coordination information, because Xu discloses: --[0015] In some implementations, the one or more deformable offset layers are trained based on the adversarial training to change a sampling amount, introduce coordinate offsets, and resample images using interpolation in order to store or absorb deformed structure information between the paired CBCT and CT images.--, in [0015]. Also see: --The first sCT image is provided to train a first discriminator to discriminate whether the first sCT image is a real CT image or a synthetic CT image…. the image data 152 may include one or more MRI image (e.g., 2D MRI, 3D MRI, 2D streaming MRI, 4D MRI, 4D volumetric MRI, 4D cine MRI, etc.), functional MRI images (e.g., fMRI, DCE-MRI, diffusion MRI), Computed Tomography (CT) images (e.g., 2D CT, 2D Cone beam CT, 3D CT, 3D CBCT, 4D CT, 4DCBCT), ultrasound images (e.g., 2D ultrasound, 3D ultrasound, 4D ultrasound), Positron Emission Tomography (PET) images, X-ray images, fluoroscopic images, radiotherapy portal images, Single-Photon Emission Computed Tomography (SPECT) images, computer generated synthetic images (e.g., pseudo-CT images) and the like. Further, the image data 152 may also include or be associated with medical image processing data, for instance, training images, ground truth images, contoured images, and dose images.
In other examples, an equivalent representation of an anatomical area may be represented in non-image formats (e.g., coordinates, mappings, etc.).--, in [0051]-[0052]; --[0065] The coordinate system (including axes A, T, and L) shown in FIG. 2A can have an origin located at an isocenter 210. The isocenter 210 can be defined as a location where the central axis of the radiation therapy beam 208 intersects the origin of a coordinate axis, such as to deliver a prescribed radiation dose to a location on or within a patient. Alternatively, the isocenter 210 can be defined as a location where the central axis of the radiation therapy beam 208 intersects the patient for various rotational positions of the radiation therapy output 204 as positioned by the gantry 206 around the axis A.--, in Xu's [0065].

Furthermore, Xu's disclosures of “coordinate offsets”, “representation of an anatomical area may be represented in non-image formats (e.g., coordinates, mappings, etc.)”, and “The coordinate system (including axes A, T, and L) shown in FIG. 2A can have an origin located at an isocenter 210.” are consistent with HIBBARD's disclosures of coordination information of a human body coordination system. Therefore, claims 1-13 are still not patentably distinguishable over the prior art reference(s). Further discussions are addressed in the prior art rejection section below.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made. Claims 1-13 are rejected under 35 U.S.C. 103 as being unpatentable over XU (US 20220318956 A1, Date Filed: 2019-06-27), and in view of HIBBARD (US 20210244971 A1, Date Filed: 2020-02-07). Re Claim 1, XU discloses a method of generating a trained model that converts a domain of a medical image which is input, and outputs a generated image of a different domain (see XU: e.g., --a method, system, and transitory or non-transitory computer readable medium are provided for training a model to generate a synthetic computed tomography (sCT) image from a cone-beam computed tomography (CBCT) image, comprising: receiving a CBCT image of a subject as an input of a generative model; and training the generative model, via first and second paths, in a generative adversarial network (GAN) to process the CBCT image to provide first and second synthetic computed tomography (sCT) images corresponding to the CBCT image as outputs of the generative model, the first path comprising a first set of one or more deformable offset layers and a first set of one or more convolution layers, the second path comprising the first set of the one or more convolution layers without the first set of the one or more deformable offset layers. 
[0017] In some implementations, the GAN is trained using a cycle generative adversarial network (CycleGAN) comprising the generative model and a discriminative model, wherein the generative model is a first generative model and the discriminative model is a first discriminative model, further comprising: training a second generative model to process produced first and second sCT images as inputs and provide first and second cycle-CBCT images as outputs via third and fourth paths, respectively, the third path comprising a second set of the one or more deformable offset layers and a second set of the one or more convolution layers, the fourth path comprising the second set of the one or more convolution layers without the second set of the one or more deformable offset layers; and training a second discriminative model to classify the first cycle-CBCT image as a synthetic or a real CBCT image. [0018] In some implementations, the CycleGAN comprises first and second portions to train the first generative model, further comprising: obtaining a training CBCT image that is paired with a real CT image; transmitting the training CBCT image to the input of the first generative model via the first and second paths to output the first and second synthetic CT images; receiving the first synthetic CT image at the input of the first discriminative model;--, in [0016]-[0018], and see Fig. 4, and, --training and use of a generative adversarial network adapted for generating a sCT image from a received CBCT image--, in [0024]; and, --the single generator is trained to convert the CBCT image appearance to CT image in a way that removes artefacts in original CBCT images and converts to the correct CT numbers while, at the same time, being trained based on some level of structure deformation. 
When the shape distribution or other feature distribution in CT images domain have large amount of differences compared to the original CBCT images domain--, [0036], and, --Radiotherapy system 100 may use a GAN to generate sCT images from a received CBCT image. The sCT image may represent an improved CBCT image with sharp-edge looking features that are akin to real CT images. Radiotherapy system 100 may thus produce sCT type of images for medical analysis in real time using lower quality CBCT images that are captured of a region of a subject.--, in [0038]-[0040], and, the radiotherapy processing computing system 110 may obtain image data 152 from the image data source 150 (e.g., CBCT images)….computing system 110 may instruct a CBCT device to obtain an image of a target region of a subject (e.g., a brain region). Computing system 110 may store the image data in storage device 116 with an associated indication of a time and target region captured by the CBCT image.--, in [0046]-[0047]; --0052] In an example, the image data 152 may include one or more MRI image (e.g., 2D MRI, 3D MRI, 2D streaming MRI, 4D MRI, 4D volumetric Mill, 4D cine MRI, etc.), functional MRI images (e.g., fMRI, DCE-MRI, diffusion MRI),…etc.,--, in [0052]-[0053], and, -- a true CBCT image 602 is received and provided to multiple deformable offset layers 660A in a first path. The CBCT image 602 passes through the deformable offset layers 660A in an interleaved manner with convolution blocks in the convolution blocks 661A…..first generation result 612 is an sCT image produced with offset layers and second generation result 614 is an sCT image produced without offset layers. The result 612 that includes the sCT image produced with the offset layers is provided to the first discriminator model 630 for the CT domain while result 614 is not provided to the first discriminator model 630. [0119] Referring back to FIG. 
6A, first generation results 612 (e.g., sCT image) may also be concurrently provided to the second generator model 608 together with the second generation results 614 via third and fourth paths, respectively.--, in [0118]-[0119]); wherein a learning model is used, which has a structure of a generative adversarial network including: a first generator configured using a first convolutional neural network that receives an input of a medical image of a first domain and that outputs a first generated image of a second domain different from the first domain (see XU: e.g., --[0017] In some implementations, the GAN is trained using a cycle generative adversarial network (CycleGAN) comprising the generative model and a discriminative model, wherein the generative model is a first generative model and the discriminative model is a first discriminative model,--, in [0017], and, --the single generator is trained to convert the CBCT image appearance to CT image in a way that removes artefacts in original CBCT images and converts to the correct CT numbers while, at the same time, being trained based on some level of structure deformation.
When the shape distribution or other feature distribution in CT images domain have large amount of differences compared to the original CBCT images domain--, [0036]); a first discriminator configured using a second convolutional neural network that receives an input of data including first image data, and coordinate information of a coordinate system corresponding to each position of a plurality of unit elements configuring the first image data, and that discriminates authenticity of the input image, wherein the first image data is the first generated image generated by the first generator or a medical image of the second domain included in a training dataset (see XU: e.g., --the generative adversarial network is configured to train the generative model using a discriminative model; values applied by the generative model and the discriminative model are established using adversarial training between the discriminative model and the generative model; and the generative model and the discriminative model comprise respective convolutional neural networks….the adversarial training comprises: training the generative model to generate a first sCT image from a given CBCT image by applying a first set of the one or more deformable offset layers to the given CBCT image; training the generative model to generate a second sCT image from the given CBCT image without applying the first set of the one or more deformable offset layers to the given CBCT image; and training the discriminative model to classify the first sCT image as a synthetic or a real computed tomography (CT) image, and the output of the generative model is used for training the discriminative model and an output of the discriminative model is used for training the generative model.--, in [0007]-[0008], and, --[0017] In some implementations, the GAN is trained using a cycle generative adversarial network (CycleGAN) comprising the generative model and a discriminative model, wherein the generative model is a 
first generative model and the discriminative model is a first discriminative model,--, in [0017], and, and, --the single generator is trained to convert the CBCT image appearance to CT image in a way that removes artefacts in original CBCT images and converts to the correct CT numbers while, at the same time, being trained based on some level of structure deformation. When the shape distribution or other feature distribution in CT images domain have large amount of differences compared to the original CBCT images domain--, [0036]; and, --[0103] Thus, in this example, data preparation for the GAN model training 430 requires CT images that are paired with CBCT images (these may be referred to as training CBCT/CT images). In an example, the original data includes pairs of CBCT image sets and corresponding CT images that may be registered and resampled to a common coordinate frame to produce pairs of anatomy-derived images. [0104] In detail, in a GAN model, the generator (e.g., generator model 432) learns a distribution over the data x, p.sub.G(x), starting with noise input with distribution p.sub.z(z) as the generator learns a mapping G (z; θ.sub.G):p.sub.z(z).fwdarw.p.sub.G(x) where G is a differentiable function representing a neural network with layer weight and bias parameters θ.sub.G. The discriminator, D(x; θ.sub.D) (e.g., discriminator model 440), maps the generator output to a binary scalar {true, false}, deciding true if the generator output is from actual data distribution p.sub.data(x) and false if from the generator distribution p.sub.G(x). That is, D (x) is the probability that x came from p.sub.data(x) rather than from p.sub.G(x). [0105] FIG. 5 illustrates training in a GAN for generating a synthetic CT image model, according to the example techniques discussed herein. FIG. 5 specifically shows the operation flow 550 of a GAN generator model G 560, designed to produce a simulated (e.g., estimated, artificial, etc.) 
output sCT image 580 as a result of an input CBCT image 540. FIG. 5 also shows the operation flow 500 of a GAN discriminator model D 520, designed to produce a determination value 530 (e.g., real or fake, true or false) based on an input (e.g., a real CT image 510 or the generated sCT image 580). In particular, discriminator model D 520 is trained to produce an output that indicates whether discriminator model D 520 determines the generated sCT image 580 is real or fake.--, in [0103]-[0106]; also see: Fig. 2A, and, --[0052] In an example, the image data 152 may include one or more MRI image (e.g., 2D MRI, 3D MRI, 2D streaming MRI, 4D MRI, 4D volumetric Mill, 4D cine MRI, etc.), functional MRI images (e.g., fMRI, DCE-MRI, diffusion MRI), Computed Tomography (CT) images (e.g., 2D CT, 2D Cone beam CT, 3D CT, 3D CBCT, 4D CT, 4DCBCT), ultrasound images (e.g., 2D ultrasound, 3D ultrasound, 4D ultrasound), Positron Emission Tomography (PET) images, X-ray images, fluoroscopic images, radiotherapy portal images, Single-Photo Emission Computed Tomography (SPECT) images, computer generated synthetic images (e.g., pseudo-CT images) and the like. Further, the image data 152 may also include or be associated with medical image processing data, for instance, training images, ground truth images, contoured images, and dose images. In other examples, an equivalent representation of an anatomical area may be represented in non-image formats (e.g., coordinates, mappings, etc.).--, in [0052]; it is apparently above Xu’s “being trained based on some level of structure deformation. 
When the shape distribution or other feature distribution in CT images domain have large amount of differences compared to the original CBCT images domain--, [0036]” read on the claimed input of (2) coordination information; because Xu discloses: -- [0015] In some implementations, the one or more deformable offset layers are trained based on the adversarial training to change a sampling amount, introduce coordinate offsets, and resample images using interpolation in order store or absorb deformed structure information between the paired CBCT and CT images.--, in [0015]; Also see: --[0065] The coordinate system (including axes A, T, and L) shown in FIG. 2A can have an origin located at an isocenter 210. The isocenter 210 can be defined as a location where the central axis of the radiation therapy beam 208 intersects the origin of a coordinate axis, such as to deliver a prescribed radiation dose to a location on or within a patient. Alternatively, the isocenter 210 can be defined as a location where the central axis of the radiation therapy beam 208 intersects the patient for various rotational positions of the radiation therapy output 204 as positioned by the gantry 206 around the axis A.--, in [0065]), XU however does not explicitly disclose a coordinate information of a human body coordinate system, HIBBARD discloses a coordinate information of a human body coordinate system {in the similar generative adversarial network (GAN) and discriminator networks and learning models for medical images processing and analysis} (see HIBBARD: e.g., -- [0086] The coordinate system (including axes A, T, and L) shown in FIG. 2A can have an origin located at an isocenter 210. The isocenter can be defined as a location where the central axis of the radiation beam 208 intersects the origin of a coordinate axis, such as to deliver a prescribed radiation dose to a location on or within a patient. 
Alternatively, the isocenter 210 can be defined as a location where the central axis of the radiation beam 208 intersects the patient for various rotational positions of the radiation therapy output 204 as positioned by the gantry 206 around the axis A. As discussed herein, the gantry angle corresponds to the position of gantry 206 relative to axis A, although any other axis or combination of axes can be referenced and used to determine the gantry angle.--, in [0086] {Fig. 2A and descriptions are consistent with Xu’s Fig. 2A and in Xu’s [0065] quoted above}; also see: -- As shown in FIG. 6, in a radiotherapy treatment session, a patient 602 may wear a coordinate frame 620 to keep stable the patient's body part (e.g., the head) undergoing surgery or radiotherapy. Coordinate frame 620 and a patient positioning system 622 may establish a spatial coordinate system, which may be used while imaging a patient or during radiation surgery.--, in [0101], [0107], and [0124]); HIBBARD and XU are combinable as they are in the same field of endeavor: generating a synthetic computed tomography (sCT) image from a cone-beam computed tomography (CBCT) image. 
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify XU’s method using HIBBARD’s teachings by including a coordinate information of a human body coordinate system to XU’s a coordinate information in order to establish a spatial coordinate system to be used while imaging a patient or during radiation surgery (see HIBBARD: e.g., in [0101], [0107], and [0124]); Xu as modified by HIBBARD further disclose: and the method comprises: by a computer, acquiring a plurality of pieces of training data including the medical image of the first domain and the medical image of the second domain (see XU: e.g., --[0103] Thus, in this example, data preparation for the GAN model training 430 requires CT images that are paired with CBCT images (these may be referred to as training CBCT/CT images). In an example, the original data includes pairs of CBCT image sets and corresponding CT images that may be registered and resampled to a common coordinate frame to produce pairs of anatomy-derived images. [0104] In detail, in a GAN model, the generator (e.g., generator model 432) learns a distribution over the data x, p.sub.G(x), starting with noise input with distribution p.sub.z(z) as the generator learns a mapping G (z; θ.sub.G):p.sub.z(z).fwdarw.p.sub.G(x) where G is a differentiable function representing a neural network with layer weight and bias parameters θ.sub.G. The discriminator, D(x; θ.sub.D) (e.g., discriminator model 440), maps the generator output to a binary scalar {true, false}, deciding true if the generator output is from actual data distribution p.sub.data(x) and false if from the generator distribution p.sub.G(x). That is, D (x) is the probability that x came from p.sub.data(x) rather than from p.sub.G(x). [0105] FIG. 5 illustrates training in a GAN for generating a synthetic CT image model, according to the example techniques discussed herein. FIG. 
5 specifically shows the operation flow 550 of a GAN generator model G 560, designed to produce a simulated (e.g., estimated, artificial, etc.) output sCT image 580 as a result of an input CBCT image 540. FIG. 5 also shows the operation flow 500 of a GAN discriminator model D 520, designed to produce a determination value 530 (e.g., real or fake, true or false) based on an input (e.g., a real CT image 510 or the generated sCT image 580). In particular, discriminator model D 520 is trained to produce an output that indicates whether discriminator model D 520 determines the generated sCT image 580 is real or fake.--, in [0103]-[0106]; also see: Fig. 2A, and, --[0052] In an example, the image data 152 may include one or more MRI image (e.g., 2D MRI, 3D MRI, 2D streaming MRI, 4D MRI, 4D volumetric Mill, 4D cine MRI, etc.), functional MRI images (e.g., fMRI, DCE-MRI, diffusion MRI), Computed Tomography (CT) images (e.g., 2D CT, 2D Cone beam CT, 3D CT, 3D CBCT, 4D CT, 4DCBCT), ultrasound images (e.g., 2D ultrasound, 3D ultrasound, 4D ultrasound), Positron Emission Tomography (PET) images, X-ray images, fluoroscopic images, radiotherapy portal images, Single-Photo Emission Computed Tomography (SPECT) images, computer generated synthetic images (e.g., pseudo-CT images) and the like. Further, the image data 152 may also include or be associated with medical image processing data, for instance, training images, ground truth images, contoured images, and dose images. 
In other examples, an equivalent representation of an anatomical area may be represented in non-image formats (e.g., coordinates, mappings, etc.).--, in [0052]); and performing training processing of training the first generator and the first discriminator in an adversarial manner based on the plurality of pieces of training data (see XU: e.g., --[0103] Thus, in this example, data preparation for the GAN model training 430 requires CT images that are paired with CBCT images (these may be referred to as training CBCT/CT images). In an example, the original data includes pairs of CBCT image sets and corresponding CT images that may be registered and resampled to a common coordinate frame to produce pairs of anatomy-derived images. [0104] In detail, in a GAN model, the generator (e.g., generator model 432) learns a distribution over the data x, p.sub.G(x), starting with noise input with distribution p.sub.z(z) as the generator learns a mapping G (z; θ.sub.G):p.sub.z(z).fwdarw.p.sub.G(x) where G is a differentiable function representing a neural network with layer weight and bias parameters θ.sub.G. The discriminator, D(x; θ.sub.D) (e.g., discriminator model 440), maps the generator output to a binary scalar {true, false}, deciding true if the generator output is from actual data distribution p.sub.data(x) and false if from the generator distribution p.sub.G(x). That is, D (x) is the probability that x came from p.sub.data(x) rather than from p.sub.G(x). [0105] FIG. 5 illustrates training in a GAN for generating a synthetic CT image model, according to the example techniques discussed herein. FIG. 5 specifically shows the operation flow 550 of a GAN generator model G 560, designed to produce a simulated (e.g., estimated, artificial, etc.) output sCT image 580 as a result of an input CBCT image 540. FIG. 
5 also shows the operation flow 500 of a GAN discriminator model D 520, designed to produce a determination value 530 (e.g., real or fake, true or false) based on an input (e.g., a real CT image 510 or the generated sCT image 580). In particular, discriminator model D 520 is trained to produce an output that indicates whether discriminator model D 520 determines the generated sCT image 580 is real or fake.--, in [0103]-[0106]; also see: Fig. 2A, and, --[0052] In an example, the image data 152 may include one or more MRI image (e.g., 2D MRI, 3D MRI, 2D streaming MRI, 4D MRI, 4D volumetric Mill, 4D cine MRI, etc.), functional MRI images (e.g., fMRI, DCE-MRI, diffusion MRI), Computed Tomography (CT) images (e.g., 2D CT, 2D Cone beam CT, 3D CT, 3D CBCT, 4D CT, 4DCBCT), ultrasound images (e.g., 2D ultrasound, 3D ultrasound, 4D ultrasound), Positron Emission Tomography (PET) images, X-ray images, fluoroscopic images, radiotherapy portal images, Single-Photo Emission Computed Tomography (SPECT) images, computer generated synthetic images (e.g., pseudo-CT images) and the like. Further, the image data 152 may also include or be associated with medical image processing data, for instance, training images, ground truth images, contoured images, and dose images. In other examples, an equivalent representation of an anatomical area may be represented in non-image formats (e.g., coordinates, mappings, etc.).--, in [0052]). 
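The GAN mechanics quoted above from Xu's [0104]-[0105] (a generator G(z; θ_G) mapping its input toward the target distribution, and a discriminator D(x; θ_D) scoring its input as real or fake), together with the claimed discriminator input of (1) image data and (2) per-voxel coordinate information, can be illustrated with a short sketch. This is a hypothetical, CoordConv-style illustration under stated assumptions: the function names and the channel-concatenation design are the editor's own, not the claimed method or Xu's implementation.

```python
import numpy as np

def coordinate_channels(shape):
    """Normalized x/y/z coordinate grids, one channel per axis, giving the
    position of each voxel (unit element) of a 3-D volume of `shape` (X, Y, Z)."""
    grids = np.meshgrid(*[np.linspace(0.0, 1.0, n) for n in shape], indexing="ij")
    return np.stack(grids, axis=0)  # shape (3, X, Y, Z)

def discriminator_input(volume):
    """Concatenate the image volume with its coordinate channels, analogous to
    a discriminator input of (1) first image data and (2) coordinate
    information of a coordinate system for each voxel."""
    coords = coordinate_channels(volume.shape)
    return np.concatenate([volume[None], coords], axis=0)  # (1 + 3, X, Y, Z)

# Toy example: a 4x4x4 stand-in for a medical image volume.
vol = np.random.rand(4, 4, 4)
x = discriminator_input(vol)
print(x.shape)  # (4, 4, 4, 4): one intensity channel plus x/y/z channels
```

A real discriminator (per the quoted [0104]) would then map such an input to a binary real/fake score; only the input construction is sketched here.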
Re Claim 2, XU as modified by HIBBARD further disclose wherein the coordinate information corresponding to the first generated image in a case where the first generated image is input to the first discriminator is coordinate information determined for the medical image of the first domain which is a conversion source image input to the first generator in a case of generating the first generated image (see XU: e.g., --[0103] Thus, in this example, data preparation for the GAN model training 430 requires CT images that are paired with CBCT images (these may be referred to as training CBCT/CT images). In an example, the original data includes pairs of CBCT image sets and corresponding CT images that may be registered and resampled to a common coordinate frame to produce pairs of anatomy-derived images. [0104] In detail, in a GAN model, the generator (e.g., generator model 432) learns a distribution over the data x, p.sub.G(x), starting with noise input with distribution p.sub.z(z) as the generator learns a mapping G (z; θ.sub.G):p.sub.z(z).fwdarw.p.sub.G(x) where G is a differentiable function representing a neural network with layer weight and bias parameters θ.sub.G. The discriminator, D(x; θ.sub.D) (e.g., discriminator model 440), maps the generator output to a binary scalar {true, false}, deciding true if the generator output is from actual data distribution p.sub.data(x) and false if from the generator distribution p.sub.G(x). That is, D (x) is the probability that x came from p.sub.data(x) rather than from p.sub.G(x). [0105] FIG. 5 illustrates training in a GAN for generating a synthetic CT image model, according to the example techniques discussed herein. FIG. 5 specifically shows the operation flow 550 of a GAN generator model G 560, designed to produce a simulated (e.g., estimated, artificial, etc.) output sCT image 580 as a result of an input CBCT image 540. FIG. 
5 also shows the operation flow 500 of a GAN discriminator model D 520, designed to produce a determination value 530 (e.g., real or fake, true or false) based on an input (e.g., a real CT image 510 or the generated sCT image 580). In particular, discriminator model D 520 is trained to produce an output that indicates whether discriminator model D 520 determines the generated sCT image 580 is real or fake.--, in [0103]-[0106]; also see: Fig. 2A, and, --[0052] In an example, the image data 152 may include one or more MRI image (e.g., 2D MRI, 3D MRI, 2D streaming MRI, 4D MRI, 4D volumetric MRI, 4D cine MRI, etc.), functional MRI images (e.g., fMRI, DCE-MRI, diffusion MRI), Computed Tomography (CT) images (e.g., 2D CT, 2D Cone beam CT, 3D CT, 3D CBCT, 4D CT, 4DCBCT), ultrasound images (e.g., 2D ultrasound, 3D ultrasound, 4D ultrasound), Positron Emission Tomography (PET) images, X-ray images, fluoroscopic images, radiotherapy portal images, Single-Photon Emission Computed Tomography (SPECT) images, computer generated synthetic images (e.g., pseudo-CT images) and the like. Further, the image data 152 may also include or be associated with medical image processing data, for instance, training images, ground truth images, contoured images, and dose images. In other examples, an equivalent representation of an anatomical area may be represented in non-image formats (e.g., coordinates, mappings, etc.).--, in [0052]; and, --only apply “adversarial” losses on generators G.sub.offset.sup.cbct2ct and G.sub.offset.sup.ct2cbct (e.g., the generators in the first, third, fifth and seventh paths), and apply “cycle-consistence” losses on all generators (e.g., G.sub.offset.sup.cbct2ct, G.sub.offset.sup.ct2cbct, and G.sup.cbct2ct, G.sup.ct2cbct). 
The effect of minimizing the “cycle-consistence” loss terms is to preserve original structures and to avoid unnecessary structure deformation, and the effect of minimizing the “adversarial” loss terms is to learn a mapping or distribution conversion from one domain to its opponent's domain. Unlike prior single-generator approaches, two different generators are provided in each direction (e.g., two generators, one for each of the first and second paths and two generators, one for each of the third and fourth paths). One generator is provided with offset layers (such as G.sub.offset.sup.cbct2ct in the first path), and one without offset layers (such as G.sup.cbct2ct in the second flow). In addition, the generators share weights and other modules with all other layers (except those offset layers). By combining these two loss terms on separate generators, the loss terms are decoupled and will not compete with each other.--, in [0143]). Re Claim 3, XU as modified by HIBBARD further discloses wherein the first image data is three-dimensional data, the coordinate information includes x coordinate information, y coordinate information, and z coordinate information that specify a position of each voxel as the unit element in a three-dimensional space (see XU: e.g., --[0103] Thus, in this example, data preparation for the GAN model training 430 requires CT images that are paired with CBCT images (these may be referred to as training CBCT/CT images). In an example, the original data includes pairs of CBCT image sets and corresponding CT images that may be registered and resampled to a common coordinate frame to produce pairs of anatomy-derived images. 
[0104] In detail, in a GAN model, the generator (e.g., generator model 432) learns a distribution over the data x, p.sub.G(x), starting with noise input with distribution p.sub.z(z) as the generator learns a mapping G (z; θ.sub.G):p.sub.z(z).fwdarw.p.sub.G(x) where G is a differentiable function representing a neural network with layer weight and bias parameters θ.sub.G. The discriminator, D(x; θ.sub.D) (e.g., discriminator model 440), maps the generator output to a binary scalar {true, false}, deciding true if the generator output is from actual data distribution p.sub.data(x) and false if from the generator distribution p.sub.G(x). That is, D (x) is the probability that x came from p.sub.data(x) rather than from p.sub.G(x). [0105] FIG. 5 illustrates training in a GAN for generating a synthetic CT image model, according to the example techniques discussed herein. FIG. 5 specifically shows the operation flow 550 of a GAN generator model G 560, designed to produce a simulated (e.g., estimated, artificial, etc.) output sCT image 580 as a result of an input CBCT image 540. FIG. 5 also shows the operation flow 500 of a GAN discriminator model D 520, designed to produce a determination value 530 (e.g., real or fake, true or false) based on an input (e.g., a real CT image 510 or the generated sCT image 580). In particular, discriminator model D 520 is trained to produce an output that indicates whether discriminator model D 520 determines the generated sCT image 580 is real or fake.--, in [0103]-[0106]; also see: Fig. 
2A, and, --[0052] In an example, the image data 152 may include one or more MRI image (e.g., 2D MRI, 3D MRI, 2D streaming MRI, 4D MRI, 4D volumetric MRI, 4D cine MRI, etc.), functional MRI images (e.g., fMRI, DCE-MRI, diffusion MRI), Computed Tomography (CT) images (e.g., 2D CT, 2D Cone beam CT, 3D CT, 3D CBCT, 4D CT, 4DCBCT), ultrasound images (e.g., 2D ultrasound, 3D ultrasound, 4D ultrasound), Positron Emission Tomography (PET) images, X-ray images, fluoroscopic images, radiotherapy portal images, Single-Photon Emission Computed Tomography (SPECT) images, computer generated synthetic images (e.g., pseudo-CT images) and the like. Further, the image data 152 may also include or be associated with medical image processing data, for instance, training images, ground truth images, contoured images, and dose images. In other examples, an equivalent representation of an anatomical area may be represented in non-image formats (e.g., coordinates, mappings, etc.).--, in [0052]), and the x coordinate information, the y coordinate information, and the z coordinate information are used as channels and are combined with a channel of the first image data or a feature map of the first image data to be given to the first discriminator (see XU: e.g., --[0103] Thus, in this example, data preparation for the GAN model training 430 requires CT images that are paired with CBCT images (these may be referred to as training CBCT/CT images). In an example, the original data includes pairs of CBCT image sets and corresponding CT images that may be registered and resampled to a common coordinate frame to produce pairs of anatomy-derived images. 
[0104] In detail, in a GAN model, the generator (e.g., generator model 432) learns a distribution over the data x, p.sub.G(x), starting with noise input with distribution p.sub.z(z) as the generator learns a mapping G (z; θ.sub.G):p.sub.z(z).fwdarw.p.sub.G(x) where G is a differentiable function representing a neural network with layer weight and bias parameters θ.sub.G. The discriminator, D(x; θ.sub.D) (e.g., discriminator model 440), maps the generator output to a binary scalar {true, false}, deciding true if the generator output is from actual data distribution p.sub.data(x) and false if from the generator distribution p.sub.G(x). That is, D (x) is the probability that x came from p.sub.data(x) rather than from p.sub.G(x). [0105] FIG. 5 illustrates training in a GAN for generating a synthetic CT image model, according to the example techniques discussed herein. FIG. 5 specifically shows the operation flow 550 of a GAN generator model G 560, designed to produce a simulated (e.g., estimated, artificial, etc.) output sCT image 580 as a result of an input CBCT image 540. FIG. 5 also shows the operation flow 500 of a GAN discriminator model D 520, designed to produce a determination value 530 (e.g., real or fake, true or false) based on an input (e.g., a real CT image 510 or the generated sCT image 580). In particular, discriminator model D 520 is trained to produce an output that indicates whether discriminator model D 520 determines the generated sCT image 580 is real or fake.--, in [0103]-[0106]; also see: Fig. 
2A, and, --[0052] In an example, the image data 152 may include one or more MRI image (e.g., 2D MRI, 3D MRI, 2D streaming MRI, 4D MRI, 4D volumetric MRI, 4D cine MRI, etc.), functional MRI images (e.g., fMRI, DCE-MRI, diffusion MRI), Computed Tomography (CT) images (e.g., 2D CT, 2D Cone beam CT, 3D CT, 3D CBCT, 4D CT, 4DCBCT), ultrasound images (e.g., 2D ultrasound, 3D ultrasound, 4D ultrasound), Positron Emission Tomography (PET) images, X-ray images, fluoroscopic images, radiotherapy portal images, Single-Photon Emission Computed Tomography (SPECT) images, computer generated synthetic images (e.g., pseudo-CT images) and the like. Further, the image data 152 may also include or be associated with medical image processing data, for instance, training images, ground truth images, contoured images, and dose images. In other examples, an equivalent representation of an anatomical area may be represented in non-image formats (e.g., coordinates, mappings, etc.).--, in [0052]; and see HIBBARD: e.g., --[0115] In deep CNN training, the learned model is the values of layer node parameters θ (node weights and layer biases) determined during training. Training employs maximum likelihood or the cross entropy between the training data and the model distribution. A cost function expressing this relationship is J(θ)=−E.sub.x,y˜p.sub.data log p.sub.model(y|x;θ); [0116] The exact form of the cost function for a specific problem depends on the nature of the model used. A Gaussian model p.sub.model(y|x)=N(y:f(x;θ)) implies a cost function such as: J(θ)=½E.sub.x,y˜p.sub.data∥y−f(x;θ)∥.sub.2.sup.2+const [0117] Which includes a constant term that does not depend on θ. Thus, minimizing J(θ) generates the mapping f(x;θ) that approximates the training data distribution. [0118] A useful extension of the GAN is the conditional GAN. Conditional adversarial networks learn a mapping from observed image x and random noise z as G:{x,z}.fwdarw.y. 
Both adversarial networks consist of two networks: a discriminator (D) and a generator (G). The generator G is trained to produce outputs that cannot be distinguished from “real” or actual training images by an adversarially trained discriminator D that is trained to be maximally accurate at detecting “fakes” or outputs of G. The conditional GAN differs from the unconditional GAN in that both discriminator and generator inferences are conditioned on an example image of the type X.--, in [0115]-[0118]). Re Claim 4, XU as modified by HIBBARD further discloses wherein the coordinate information of the human body coordinate system is an absolute coordinate defined with reference to an anatomical position of a portion of a human body (see HIBBARD: e.g., -- [0086] The coordinate system (including axes A, T, and L) shown in FIG. 2A can have an origin located at an isocenter 210. The isocenter can be defined as a location where the central axis of the radiation beam 208 intersects the origin of a coordinate axis, such as to deliver a prescribed radiation dose to a location on or within a patient. Alternatively, the isocenter 210 can be defined as a location where the central axis of the radiation beam 208 intersects the patient for various rotational positions of the radiation therapy output 204 as positioned by the gantry 206 around the axis A. As discussed herein, the gantry angle corresponds to the position of gantry 206 relative to axis A, although any other axis or combination of axes can be referenced and used to determine the gantry angle.--, in [0086] {Fig. 2A and descriptions are consistent with Xu’s Fig. 2A and in Xu’s [0065] quoted above}; also see: -- As shown in FIG. 6, in a radiotherapy treatment session, a patient 602 may wear a coordinate frame 620 to keep stable the patient's body part (e.g., the head) undergoing surgery or radiotherapy. 
Coordinate frame 620 and a patient positioning system 622 may establish a spatial coordinate system, which may be used while imaging a patient or during radiation surgery.--, in [0101], [0107], and [0124]), and for each medical image used as the training data, the coordinate information corresponding to each unit element in the image is associated (see HIBBARD: e.g., -- [0086] The coordinate system (including axes A, T, and L) shown in FIG. 2A can have an origin located at an isocenter 210. The isocenter can be defined as a location where the central axis of the radiation beam 208 intersects the origin of a coordinate axis, such as to deliver a prescribed radiation dose to a location on or within a patient. Alternatively, the isocenter 210 can be defined as a location where the central axis of the radiation beam 208 intersects the patient for various rotational positions of the radiation therapy output 204 as positioned by the gantry 206 around the axis A. As discussed herein, the gantry angle corresponds to the position of gantry 206 relative to axis A, although any other axis or combination of axes can be referenced and used to determine the gantry angle.--, in [0086] {Fig. 2A and descriptions are consistent with Xu’s Fig. 2A and in Xu’s [0065] quoted above}; also see: -- As shown in FIG. 6, in a radiotherapy treatment session, a patient 602 may wear a coordinate frame 620 to keep stable the patient's body part (e.g., the head) undergoing surgery or radiotherapy. Coordinate frame 620 and a patient positioning system 622 may establish a spatial coordinate system, which may be used while imaging a patient or during radiation surgery.--, in [0101], [0107], and [0124]). 
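For illustration only, per-unit-element coordinate information referenced to an anatomical position (such as the isocenter discussed in Hibbard's [0086]) can be sketched as signed physical offsets computed for every voxel. This NumPy sketch is the editor's, not either reference's: the grid shape, voxel spacing, and the choice of a voxel index as the reference point are all assumptions made for the example.

```python
import numpy as np

def voxel_coordinates(shape, spacing_mm, origin_index):
    """Per-voxel physical coordinates (mm) relative to a reference point.

    shape: (D, H, W) voxel grid; spacing_mm: (dz, dy, dx) voxel size in mm;
    origin_index: voxel index of the anatomical reference (e.g., an isocenter).
    Returns an array of shape (3, D, H, W) of signed offsets in millimetres.
    """
    grids = np.meshgrid(*[np.arange(n) for n in shape], indexing="ij")
    # Shift each index grid so the reference voxel maps to 0, then scale to mm.
    coords = [(g - o) * s for g, o, s in zip(grids, origin_index, spacing_mm)]
    return np.stack(coords).astype(float)

# Toy 4x4x4 grid, 2 mm slices, reference at voxel (2, 2, 2).
coords = voxel_coordinates((4, 4, 4), (2.0, 1.0, 1.0), (2, 2, 2))
print(coords[0, 2, 2, 2])   # 0.0 -- the reference voxel sits at the origin
```

Associating such an array with each training image gives every unit element an absolute coordinate in the body frame, which is the association the claim language describes.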
Re Claim 5, XU as modified by HIBBARD further discloses by the computer, generating, for each medical image used as the training data, the coordinate information corresponding to each unit element in the image (see XU: e.g., --[0103] Thus, in this example, data preparation for the GAN model training 430 requires CT images that are paired with CBCT images (these may be referred to as training CBCT/CT images). In an example, the original data includes pairs of CBCT image sets and corresponding CT images that may be registered and resampled to a common coordinate frame to produce pairs of anatomy-derived images. [0104] In detail, in a GAN model, the generator (e.g., generator model 432) learns a distribution over the data x, p.sub.G(x), starting with noise input with distribution p.sub.z(z) as the generator learns a mapping G (z; θ.sub.G):p.sub.z(z).fwdarw.p.sub.G(x) where G is a differentiable function representing a neural network with layer weight and bias parameters θ.sub.G. The discriminator, D(x; θ.sub.D) (e.g., discriminator model 440), maps the generator output to a binary scalar {true, false}, deciding true if the generator output is from actual data distribution p.sub.data(x) and false if from the generator distribution p.sub.G(x). That is, D (x) is the probability that x came from p.sub.data(x) rather than from p.sub.G(x). [0105] FIG. 5 illustrates training in a GAN for generating a synthetic CT image model, according to the example techniques discussed herein. FIG. 5 specifically shows the operation flow 550 of a GAN generator model G 560, designed to produce a simulated (e.g., estimated, artificial, etc.) output sCT image 580 as a result of an input CBCT image 540. FIG. 5 also shows the operation flow 500 of a GAN discriminator model D 520, designed to produce a determination value 530 (e.g., real or fake, true or false) based on an input (e.g., a real CT image 510 or the generated sCT image 580). 
In particular, discriminator model D 520 is trained to produce an output that indicates whether discriminator model D 520 determines the generated sCT image 580 is real or fake.--, in [0103]-[0106]; also see HIBBARD: e.g., -- [0086] The coordinate system (including axes A, T, and L) shown in FIG. 2A can have an origin located at an isocenter 210. The isocenter can be defined as a location where the central axis of the radiation beam 208 intersects the origin of a coordinate axis, such as to deliver a prescribed radiation dose to a location on or within a patient. Alternatively, the isocenter 210 can be defined as a location where the central axis of the radiation beam 208 intersects the patient for various rotational positions of the radiation therapy output 204 as positioned by the gantry 206 around the axis A. As discussed herein, the gantry angle corresponds to the position of gantry 206 relative to axis A, although any other axis or combination of axes can be referenced and used to determine the gantry angle.--, in [0086] {Fig. 2A and descriptions are consistent with Xu’s Fig. 2A and in Xu’s [0065] quoted above}; also see: -- As shown in FIG. 6, in a radiotherapy treatment session, a patient 602 may wear a coordinate frame 620 to keep stable the patient's body part (e.g., the head) undergoing surgery or radiotherapy. Coordinate frame 620 and a patient positioning system 622 may establish a spatial coordinate system, which may be used while imaging a patient or during radiation surgery.--, in [0101], [0107], and [0124]). Re Claim 6, XU as modified by HIBBARD further discloses wherein the coordinate information is input in an interlayer of the second convolutional neural network (see XU: e.g., --The first sCT image representation 436 is generated by the generator model 432 by applying one or more deformable offset layers and one or more convolutional layers to the input training image at a first input interface of the generator model 432. 
The second sCT image representation 436 is generated by the generator model 432 by applying the one or more convolutional layers and without applying the deformable offset layers at a second input interface of the generator model 432. All of the rest components of the generator model 432 (e.g., the components used to process information and generate the sCT image representations past the first and second input interfaces) are shared by the multiple paths, meaning that the generator is trained based on the outputs of both paths. The discriminator model 440 decides whether a simulated representation 436 is from the training data (e.g., the true CT image) or from the generator (e.g., the sCT, as communicated between the generator model 432 and the discriminator model 440 with the generation results 434 and the detection results 444). The discriminator model 440 only operates and is trained based on the first sCT image representation 436 (e.g., the one generated using the deformable offset layers). In this way, the generator model 432 is trained utilizing the discriminator on the generated images through the first path including the deformable offset layers and is further trained based on cycle-consistency loss information that is generated based on both the generated images through the first path including the deformable offset layers and the second path without deformable offset layers. This training process results in back-propagation of weight adjustments 438, 442 to improve the generator model 432 and the discriminator model 440….. the original data includes pairs of CBCT image sets and corresponding CT images that may be registered and resampled to a common coordinate frame to produce pairs of anatomy-derived images. 
[0104] In detail, in a GAN model, the generator (e.g., generator model 432) learns a distribution over the data x, p.sub.G(x), starting with noise input with distribution p.sub.z(z) as the generator learns a mapping G (z; θ.sub.G):p.sub.z(z).fwdarw.p.sub.G(x) where G is a differentiable function representing a neural network with layer weight and bias parameters θ.sub.G. The discriminator, D(x; θ.sub.D) (e.g., discriminator model 440), maps the generator output to a binary scalar {true, false}, deciding true if the generator output is from actual data distribution p.sub.data(x) and false if from the generator distribution p.sub.G(x). That is, D (x) is the probability that x came from p.sub.data(x) rather than from p.sub.G(x). [0105] FIG. 5 illustrates training in a GAN for generating a synthetic CT image model, according to the example techniques discussed herein. FIG. 5 specifically shows the operation flow 550 of a GAN generator model G 560, designed to produce a simulated (e.g., estimated, artificial, etc.) output sCT image 580 as a result of an input CBCT image 540. FIG. 5 also shows the operation flow 500 of a GAN discriminator model D 520, designed to produce a determination value 530 (e.g., real or fake, true or false) based on an input (e.g., a real CT image 510 or the generated sCT image 580). In particular, discriminator model D 520 is trained to produce an output that indicates whether discriminator model D 520 determines the generated sCT image 580 is real or fake.--, in [0103]-[0105]; also see: --[0094] The representation of the model 300 of FIG. 3A thus illustrates the training and prediction of a generative model, which is adapted to perform regression rather than classification. FIG. 3B illustrates an exemplary CNN model adapted for discriminating a synthetic CT image (sCT) according to the present disclosure. The discriminator network shown in FIG. 
3B may include several levels of blocks configured with stride-2 convolutional layers, batch normalization layers and ReLu layers, and separated pooling layers. At the end of the network, there will be one or a few fully connection layers to form a 2D patch for discrimination purposes. The discriminator shown in FIG. 3B may be a patch-based discriminator configured to receive an input sCT image (e.g., generated from the first path from the generator shown in FIG. 3A that includes the deformable offset layers), classify the image as real or fake, and provide the classification as output 350. [0095] Consistent with embodiments of the present disclosure, the treatment modeling methods, systems, devices, and/or processes based on such models include two stages: training of the generative model, with use of a discriminator/generator pair in a GAN; and prediction with the generative model, with use of a GAN-trained generator. Various examples involving a GAN and a CycleGAN for sCT image generation are discussed in detail in the following examples. It will be understood that other variations and combinations of the type of deep learning model and other neural-network processing approaches may also be implemented with the present techniques. Further, although the following examples are discussed with reference to images and image data, it will be understood that the following networks and GAN may operate with use of other non-image data representations and formats. Also, while two paths are discussed as being used to generate first and second sCT images during training, only one path (the second path that does not include the deformable offset layers) is used in practice after the generator is trained to generate an sCT image given a CBCT image.--, in [0094]-[0095] in view of Fig. 3A). 
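The patch-based discriminator quoted above stacks stride-2 convolutional layers before forming a 2D patch for discrimination. For illustration only, the size of that patch grid can be worked out with the standard convolution output-size formula; the kernel size, padding, and layer count below are assumptions of this note, not parameters taken from Xu or Hibbard.

```python
def conv_out(n, kernel=4, stride=2, pad=1):
    """Standard convolution output size: floor((n + 2*pad - kernel)/stride) + 1."""
    return (n + 2 * pad - kernel) // stride + 1

def patch_grid(size, layers=3):
    """Spatial size of a discriminator's output patch grid after
    `layers` stride-2 convolutions (each roughly halves the input)."""
    for _ in range(layers):
        size = conv_out(size)
    return size

print(patch_grid(256))   # 32 -- three stride-2 layers reduce 256 to 32
```

Each cell of the resulting grid scores one receptive-field patch of the input as real or fake, which is what makes the discriminator "patch-based" rather than a single whole-image classifier.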
Re Claim 7, XU as modified by HIBBARD further discloses wherein the learning model further includes a second generator configured using a third convolutional neural network that receives an input of the medical image of the second domain and that outputs a second generated image of the first domain (see XU: e.g., --a method, system, and transitory or non-transitory computer readable medium are provided for training a model to generate a synthetic computed tomography (sCT) image from a cone-beam computed tomography (CBCT) image, comprising: receiving a CBCT image of a subject as an input of a generative model; and training the generative model, via first and second paths, in a generative adversarial network (GAN) to process the CBCT image to provide first and second synthetic computed tomography (sCT) images corresponding to the CBCT image as outputs of the generative model, the first path comprising a first set of one or more deformable offset layers and a first set of one or more convolution layers, the second path comprising the first set of the one or more convolution layers without the first set of the one or more deformable offset layers. 
[0017] In some implementations, the GAN is trained using a cycle generative adversarial network (CycleGAN) comprising the generative model and a discriminative model, wherein the generative model is a first generative model and the discriminative model is a first discriminative model, further comprising: training a second generative model to process produced first and second sCT images as inputs and provide first and second cycle-CBCT images as outputs via third and fourth paths, respectively, the third path comprising a second set of the one or more deformable offset layers and a second set of the one or more convolution layers, the fourth path comprising the second set of the one or more convolution layers without the second set of the one or more deformable offset layers; and training a second discriminative model to classify the first cycle-CBCT image as a synthetic or a real CBCT image. [0018] In some implementations, the CycleGAN comprises first and second portions to train the first generative model, further comprising: obtaining a training CBCT image that is paired with a real CT image; transmitting the training CBCT image to the input of the first generative model via the first and second paths to output the first and second synthetic CT images; receiving the first synthetic CT image at the input of the first discriminative model;--, in [0016]-[0018], and see Fig. 4, and, --training and use of a generative adversarial network adapted for generating a sCT image from a received CBCT image--, in [0024]; and, --the single generator is trained to convert the CBCT image appearance to CT image in a way that removes artefacts in original CBCT images and converts to the correct CT numbers while, at the same time, being trained based on some level of structure deformation. 
When the shape distribution or other feature distribution in CT images domain have large amount of differences compared to the original CBCT images domain--, [0036], and, --Radiotherapy system 100 may use a GAN to generate sCT images from a received CBCT image. The sCT image may represent an improved CBCT image with sharp-edge looking features that are akin to real CT images. Radiotherapy system 100 may thus produce sCT type of images for medical analysis in real time using lower quality CBCT images that are captured of a region of a subject.--, in [0038]-[0040], and, --the radiotherapy processing computing system 110 may obtain image data 152 from the image data source 150 (e.g., CBCT images)….computing system 110 may instruct a CBCT device to obtain an image of a target region of a subject (e.g., a brain region). Computing system 110 may store the image data in storage device 116 with an associated indication of a time and target region captured by the CBCT image.--, in [0046]-[0047]; --[0052] In an example, the image data 152 may include one or more MRI image (e.g., 2D MRI, 3D MRI, 2D streaming MRI, 4D MRI, 4D volumetric MRI, 4D cine MRI, etc.), functional MRI images (e.g., fMRI, DCE-MRI, diffusion MRI),…etc., Further, the image data 152 may also include or be associated with medical image processing data, for instance, training images, ground truth images, contoured images, and dose images. In other examples, an equivalent representation of an anatomical area may be represented in non-image formats (e.g., coordinates, mappings, etc.).--, in [0052]-[0053], and, -- a true CBCT image 602 is received and provided to multiple deformable offset layers 660A in a first path. 
The CBCT image 602 passes through the deformable offset layers 660A in an interleaved manner with convolution blocks in the convolution blocks 661A…..first generation result 612 is an sCT image produced with offset layers and second generation result 614 is an sCT image produced without offset layers. The result 612 that includes the sCT image produced with the offset layers is provided to the first discriminator model 630 for the CT domain while result 614 is not provided to the first discriminator model 630. [0119] Referring back to FIG. 6A, first generation results 612 (e.g., sCT image) may also be concurrently provided to the second generator model 608 together with the second generation results 614 via third and fourth paths, respectively.--, in [0118]-[0119]), and a second discriminator configured using a fourth convolutional neural network that receives an input of data including second image data, and coordinate information of the human body coordinate system corresponding to each position of a plurality of unit elements configuring the second image data, and that discriminates the authenticity of the input image, wherein the first image data is the first generated image generated by the first generator or a medical image of the second domain included in a training dataset (see XU: e.g., --a method, system, and transitory or non-transitory computer readable medium are provided for training a model to generate a synthetic computed tomography (sCT) image from a cone-beam computed tomography (CBCT) image, comprising: receiving a CBCT image of a subject as an input of a generative model; and training the generative model, via first and second paths, in a generative adversarial network (GAN) to process the CBCT image to provide first and second synthetic computed tomography (sCT) images corresponding to the CBCT image as outputs of the generative model, the first path comprising a first set of one or more deformable offset layers and a first set of one or more 
convolution layers, the second path comprising the first set of the one or more convolution layers without the first set of the one or more deformable offset layers. [0017] In some implementations, the GAN is trained using a cycle generative adversarial network (CycleGAN) comprising the generative model and a discriminative model, wherein the generative model is a first generative model and the discriminative model is a first discriminative model, further comprising: training a second generative model to process produced first and second sCT images as inputs and provide first and second cycle-CBCT images as outputs via third and fourth paths, respectively, the third path comprising a second set of the one or more deformable offset layers and a second set of the one or more convolution layers, the fourth path comprising the second set of the one or more convolution layers without the second set of the one or more deformable offset layers; and training a second discriminative model to classify the first cycle-CBCT image as a synthetic or a real CBCT image. [0018] In some implementations, the CycleGAN comprises first and second portions to train the first generative model, further comprising: obtaining a training CBCT image that is paired with a real CT image; transmitting the training CBCT image to the input of the first generative model via the first and second paths to output the first and second synthetic CT images; receiving the first synthetic CT image at the input of the first discriminative model;--, in [0016]-[0018], and see Fig. 4, and, --training and use of a generative adversarial network adapted for generating a sCT image from a received CBCT image--, in [0024]; and, --the single generator is trained to convert the CBCT image appearance to CT image in a way that removes artefacts in original CBCT images and converts to the correct CT numbers while, at the same time, being trained based on some level of structure deformation. 
When the shape distribution or other feature distribution in CT images domain have large amount of differences compared to the original CBCT images domain--, [0036], and, --Radiotherapy system 100 may use a GAN to generate sCT images from a received CBCT image. The sCT image may represent an improved CBCT image with sharp-edge looking features that are akin to real CT images. Radiotherapy system 100 may thus produce sCT type of images for medical analysis in real time using lower quality CBCT images that are captured of a region of a subject.--, in [0038]-[0040], and, --the radiotherapy processing computing system 110 may obtain image data 152 from the image data source 150 (e.g., CBCT images)….computing system 110 may instruct a CBCT device to obtain an image of a target region of a subject (e.g., a brain region). Computing system 110 may store the image data in storage device 116 with an associated indication of a time and target region captured by the CBCT image.--, in [0046]-[0047]; --[0052] In an example, the image data 152 may include one or more MRI image (e.g., 2D MRI, 3D MRI, 2D streaming MRI, 4D MRI, 4D volumetric MRI, 4D cine MRI, etc.), functional MRI images (e.g., fMRI, DCE-MRI, diffusion MRI),…etc.,--, in [0052]-[0053], and, -- a true CBCT image 602 is received and provided to multiple deformable offset layers 660A in a first path. The CBCT image 602 passes through the deformable offset layers 660A in an interleaved manner with convolution blocks in the convolution blocks 661A…..first generation result 612 is an sCT image produced with offset layers and second generation result 614 is an sCT image produced without offset layers. The result 612 that includes the sCT image produced with the offset layers is provided to the first discriminator model 630 for the CT domain while result 614 is not provided to the first discriminator model 630. [0119] Referring back to FIG. 
6A, first generation results 612 (e.g., sCT image) may also be concurrently provided to the second generator model 608 together with the second generation results 614 via third and fourth paths, respectively.--, in [0118]-[0119]), and the training processing includes processing of training the second generator and the second discriminator in an adversarial manner (see XU: e.g., --a method, system, and transitory or non-transitory computer readable medium are provided for training a model to generate a synthetic computed tomography (sCT) image from a cone-beam computed tomography (CBCT) image, comprising: receiving a CBCT image of a subject as an input of a generative model; and training the generative model, via first and second paths, in a generative adversarial network (GAN) to process the CBCT image to provide first and second synthetic computed tomography (sCT) images corresponding to the CBCT image as outputs of the generative model, the first path comprising a first set of one or more deformable offset layers and a first set of one or more convolution layers, the second path comprising the first set of the one or more convolution layers without the first set of the one or more deformable offset layers.
[0017] In some implementations, the GAN is trained using a cycle generative adversarial network (CycleGAN) comprising the generative model and a discriminative model, wherein the generative model is a first generative model and the discriminative model is a first discriminative model, further comprising: training a second generative model to process produced first and second sCT images as inputs and provide first and second cycle-CBCT images as outputs via third and fourth paths, respectively, the third path comprising a second set of the one or more deformable offset layers and a second set of the one or more convolution layers, the fourth path comprising the second set of the one or more convolution layers without the second set of the one or more deformable offset layers; and training a second discriminative model to classify the first cycle-CBCT image as a synthetic or a real CBCT image. [0018] In some implementations, the CycleGAN comprises first and second portions to train the first generative model, further comprising: obtaining a training CBCT image that is paired with a real CT image; transmitting the training CBCT image to the input of the first generative model via the first and second paths to output the first and second synthetic CT images; receiving the first synthetic CT image at the input of the first discriminative model;--, in [0016]-[0018], and see Fig. 4, and, --training and use of a generative adversarial network adapted for generating a sCT image from a received CBCT image--, in [0024]; and, --the single generator is trained to convert the CBCT image appearance to CT image in a way that removes artefacts in original CBCT images and converts to the correct CT numbers while, at the same time, being trained based on some level of structure deformation. 
When the shape distribution or other feature distribution in CT images domain have large amount of differences compared to the original CBCT images domain--, [0036], and, --Radiotherapy system 100 may use a GAN to generate sCT images from a received CBCT image. The sCT image may represent an improved CBCT image with sharp-edge looking features that are akin to real CT images. Radiotherapy system 100 may thus produce sCT type of images for medical analysis in real time using lower quality CBCT images that are captured of a region of a subject.--, in [0038]-[0040], and, --the radiotherapy processing computing system 110 may obtain image data 152 from the image data source 150 (e.g., CBCT images)….computing system 110 may instruct a CBCT device to obtain an image of a target region of a subject (e.g., a brain region). Computing system 110 may store the image data in storage device 116 with an associated indication of a time and target region captured by the CBCT image.--, in [0046]-[0047]; --[0052] In an example, the image data 152 may include one or more MRI image (e.g., 2D MRI, 3D MRI, 2D streaming MRI, 4D MRI, 4D volumetric MRI, 4D cine MRI, etc.), functional MRI images (e.g., fMRI, DCE-MRI, diffusion MRI),…etc.,--, in [0052]-[0053], and, -- a true CBCT image 602 is received and provided to multiple deformable offset layers 660A in a first path. The CBCT image 602 passes through the deformable offset layers 660A in an interleaved manner with convolution blocks in the convolution blocks 661A…..first generation result 612 is an sCT image produced with offset layers and second generation result 614 is an sCT image produced without offset layers. The result 612 that includes the sCT image produced with the offset layers is provided to the first discriminator model 630 for the CT domain while result 614 is not provided to the first discriminator model 630. [0119] Referring back to FIG.
6A, first generation results 612 (e.g., sCT image) may also be concurrently provided to the second generator model 608 together with the second generation results 614 via third and fourth paths, respectively.--, in [0118]-[0119]). Re Claim 8, XU as modified by HIBBARD further disclose wherein the coordinate information corresponding to the second generated image in a case where the second generated image is input to the second discriminator is coordinate information determined for the medical image of the second domain which is a conversion source image input to the second generator in a case of generating the second generated image (see XU: e.g., --[0103] Thus, in this example, data preparation for the GAN model training 430 requires CT images that are paired with CBCT images (these may be referred to as training CBCT/CT images). In an example, the original data includes pairs of CBCT image sets and corresponding CT images that may be registered and resampled to a common coordinate frame to produce pairs of anatomy-derived images. [0104] In detail, in a GAN model, the generator (e.g., generator model 432) learns a distribution over the data x, p.sub.G(x), starting with noise input with distribution p.sub.z(z) as the generator learns a mapping G (z; θ.sub.G):p.sub.z(z).fwdarw.p.sub.G(x) where G is a differentiable function representing a neural network with layer weight and bias parameters θ.sub.G. The discriminator, D(x; θ.sub.D) (e.g., discriminator model 440), maps the generator output to a binary scalar {true, false}, deciding true if the generator output is from actual data distribution p.sub.data(x) and false if from the generator distribution p.sub.G(x). That is, D (x) is the probability that x came from p.sub.data(x) rather than from p.sub.G(x). [0105] FIG. 5 illustrates training in a GAN for generating a synthetic CT image model, according to the example techniques discussed herein. FIG.
5 specifically shows the operation flow 550 of a GAN generator model G 560, designed to produce a simulated (e.g., estimated, artificial, etc.) output sCT image 580 as a result of an input CBCT image 540. FIG. 5 also shows the operation flow 500 of a GAN discriminator model D 520, designed to produce a determination value 530 (e.g., real or fake, true or false) based on an input (e.g., a real CT image 510 or the generated sCT image 580). In particular, discriminator model D 520 is trained to produce an output that indicates whether discriminator model D 520 determines the generated sCT image 580 is real or fake.--, in [0103]-[0106]; and, --only apply “adversarial” losses on generators G.sub.offset.sup.cbct2ct and G.sub.offset.sup.ct2cbct (e.g., the generators in the first, third, fifth and seventh paths), and apply “cycle-consistence” losses on all generators (e.g., G.sub.offset.sup.cbct2ct, G.sub.offset.sup.ct2cbct, and G.sup.cbct2ct, G.sup.ct2cbct). The effect of minimizing the “cycle-consistence” loss terms is to preserve original structures and to avoid unnecessary structure deformation, and the effect of minimizing the “adversarial” loss terms is to learn a mapping or distribution conversion from one domain to its opponent's domain. Unlike prior single-generator approaches, two different generators are provided in each direction (e.g., two generators, one for each of the first and second paths and two generators, one for each of the third and fourth paths). One generator is provided with offset layers (such as G.sub.offset.sup.cbct2ct in the first path), and one without offset layers (such as G.sup.cbct2ct in the second flow). In addition, the generators share weights and other modules with all other layers (except those offset layers). 
By combining these two loss terms on separate generators, the loss terms are decoupled and will not compete with each other.--, in [0143]; also see HIBBARD: e.g., -- [0086] The coordinate system (including axes A, T, and L) shown in FIG. 2A can have an origin located at an isocenter 210. The isocenter can be defined as a location where the central axis of the radiation beam 208 intersects the origin of a coordinate axis, such as to deliver a prescribed radiation dose to a location on or within a patient. Alternatively, the isocenter 210 can be defined as a location where the central axis of the radiation beam 208 intersects the patient for various rotational positions of the radiation therapy output 204 as positioned by the gantry 206 around the axis A. As discussed herein, the gantry angle corresponds to the position of gantry 206 relative to axis A, although any other axis or combination of axes can be referenced and used to determine the gantry angle.--, in [0086] {Fig. 2A and descriptions are consistent with Xu’s Fig. 2A and in Xu’s [0065] quoted above}; also see: -- As shown in FIG. 6, in a radiotherapy treatment session, a patient 602 may wear a coordinate frame 620 to keep stable the patient's body part (e.g., the head) undergoing surgery or radiotherapy. Coordinate frame 620 and a patient positioning system 622 may establish a spatial coordinate system, which may be used while imaging a patient or during radiation surgery.--, in [0101], [0107], and [0124]). 
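For technical orientation only (an editorial illustration, not part of the cited record): the loss structure quoted from Xu [0143] — adversarial terms that learn the domain mapping, plus cycle-consistency terms that preserve anatomical structure — can be written out numerically as follows. The function names, the NumPy formulation, and the weighting factor `lam` are all assumptions made for illustration; Xu's actual generators and discriminators are convolutional networks.

```python
import numpy as np

def adversarial_loss(d_fake: np.ndarray) -> float:
    """Generator-side adversarial term. Per Xu [0104], D(x) is the
    probability that x came from the real data distribution, so the
    generator improves as D scores its synthetic images near 1."""
    eps = 1e-12  # numerical guard against log(0)
    return float(-np.mean(np.log(d_fake + eps)))

def cycle_consistency_loss(x: np.ndarray, x_cycled: np.ndarray) -> float:
    """L1 cycle term: CBCT -> sCT -> cycle-CBCT should reproduce the
    input, discouraging unnecessary structure deformation ([0143])."""
    return float(np.mean(np.abs(x - x_cycled)))

def total_generator_loss(d_fake, x, x_cycled, lam=10.0):
    """Weighted combination; lam is a hypothetical choice of weight."""
    return adversarial_loss(d_fake) + lam * cycle_consistency_loss(x, x_cycled)
```

A perfect round trip (`x_cycled` equal to `x`) zeroes the cycle term, and discriminator scores near 1 on the synthetic image drive the adversarial term toward zero — consistent with the quoted point that placing the two terms on separate generators keeps them from competing.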
Re Claim 9, XU as modified by HIBBARD further disclose by the computer, performing processing of calculating a first reconstruction loss of conversion processing using the first generator and the second generator in this order based on a first reconstructed generated image output from the second generator by inputting the first generated image of the second domain output from the first generator to the second generator (see XU: e.g., --[0046] In an example, the radiotherapy processing computing system 110 may obtain image data 152 from the image data source 150 (e.g., CBCT images), for hosting on the storage device 116 and the memory 114. An exemplary image data source 150 is described in detail in connection with FIG. 2B. In an example, the software programs operating on the radiotherapy processing computing system 110 may convert medical images of one format (e.g., MRI) to another format (e.g., CT), such as by producing synthetic images, such as a pseudo-CT image or an sCT image. In another example, the software programs may register or associate a patient medical image (e.g., a CT image or an MR image) with that patient's CBCT image subsequently created or captured (e.g., also represented as an image) so that corresponding images are appropriately paired and associated. In yet another example, the software programs may substitute functions of the patient images such as signed distance functions or processed versions of the images that emphasize some aspect of the image information. [0047] In an example, the radiotherapy processing computing system 110 may obtain or communicate CBCT imaging data 152 from or to image data source 150. Such imaging data may be provided to computing system 110 to enhance or improve the imaging data using GAN or CycleGAN modeling to produce an sCT image. The sCT image may be used by treatment data source 160 or device 180 to treat a human subject. 
In further examples, the treatment data source 160 receives or updates the planning data as a result of an sCT image generated by the image generation workflow 130; the image data source 150 may also provide or host the imaging data 152 for use in the image generation training workflow 140. [0048] In an example, computing system 110 may generate pairs of CBCT and real CT images using image data source 150. For example, computing system 110 may instruct a CBCT device to obtain an image of a target region of a subject (e.g., a brain region). Computing system 110 may store the image data in storage device 116 with an associated indication of a time and target region captured by the CBCT image. Computing system 110 may also instruct a CT imaging device to obtain an image of the same target region (e.g., the same cross section of the brain region) as a real CT image. Computing system 110 may associate the real CT image with the previously obtained CBCT image of the same region, thus forming a pair of real CT and CBCT images for storage in device 116 as a training pair. Computing system 110 may continue generating such pairs of training images until a threshold number of pairs are obtained. In some implementations, computing system 110 may be guided by a human operator as to which target region to obtain and which CBCT images are paired with the real CT images.--, in [0046]-[0048]; and, --[0103] Thus, in this example, data preparation for the GAN model training 430 requires CT images that are paired with CBCT images (these may be referred to as training CBCT/CT images). In an example, the original data includes pairs of CBCT image sets and corresponding CT images that may be registered and resampled to a common coordinate frame to produce pairs of anatomy-derived images.
[0104] In detail, in a GAN model, the generator (e.g., generator model 432) learns a distribution over the data x, p.sub.G(x), starting with noise input with distribution p.sub.z(z) as the generator learns a mapping G (z; θ.sub.G):p.sub.z(z).fwdarw.p.sub.G(x) where G is a differentiable function representing a neural network with layer weight and bias parameters θ.sub.G. The discriminator, D(x; θ.sub.D) (e.g., discriminator model 440), maps the generator output to a binary scalar {true, false}, deciding true if the generator output is from actual data distribution p.sub.data(x) and false if from the generator distribution p.sub.G(x). That is, D (x) is the probability that x came from p.sub.data(x) rather than from p.sub.G(x). [0105] FIG. 5 illustrates training in a GAN for generating a synthetic CT image model, according to the example techniques discussed herein. FIG. 5 specifically shows the operation flow 550 of a GAN generator model G 560, designed to produce a simulated (e.g., estimated, artificial, etc.) output sCT image 580 as a result of an input CBCT image 540. FIG. 5 also shows the operation flow 500 of a GAN discriminator model D 520, designed to produce a determination value 530 (e.g., real or fake, true or false) based on an input (e.g., a real CT image 510 or the generated sCT image 580). In particular, discriminator model D 520 is trained to produce an output that indicates whether discriminator model D 520 determines the generated sCT image 580 is real or fake.--, in [0103]-[0106]; and, --only apply “adversarial” losses on generators G.sub.offset.sup.cbct2ct and G.sub.offset.sup.ct2cbct (e.g., the generators in the first, third, fifth and seventh paths), and apply “cycle-consistence” losses on all generators (e.g., G.sub.offset.sup.cbct2ct, G.sub.offset.sup.ct2cbct, and G.sup.cbct2ct, G.sup.ct2cbct). 
The effect of minimizing the “cycle-consistence” loss terms is to preserve original structures and to avoid unnecessary structure deformation, and the effect of minimizing the “adversarial” loss terms is to learn a mapping or distribution conversion from one domain to its opponent's domain. Unlike prior single-generator approaches, two different generators are provided in each direction (e.g., two generators, one for each of the first and second paths and two generators, one for each of the third and fourth paths). One generator is provided with offset layers (such as G.sub.offset.sup.cbct2ct in the first path), and one without offset layers (such as G.sup.cbct2ct in the second flow). In addition, the generators share weights and other modules with all other layers (except those offset layers). By combining these two loss terms on separate generators, the loss terms are decoupled and will not compete with each other.--, in [0143]; also see HIBBARD: e.g., -- [0086] The coordinate system (including axes A, T, and L) shown in FIG. 2A can have an origin located at an isocenter 210. The isocenter can be defined as a location where the central axis of the radiation beam 208 intersects the origin of a coordinate axis, such as to deliver a prescribed radiation dose to a location on or within a patient. Alternatively, the isocenter 210 can be defined as a location where the central axis of the radiation beam 208 intersects the patient for various rotational positions of the radiation therapy output 204 as positioned by the gantry 206 around the axis A. As discussed herein, the gantry angle corresponds to the position of gantry 206 relative to axis A, although any other axis or combination of axes can be referenced and used to determine the gantry angle.--, in [0086] {Fig. 2A and descriptions are consistent with Xu’s Fig. 2A and in Xu’s [0065] quoted above}; also see: -- As shown in FIG. 
6, in a radiotherapy treatment session, a patient 602 may wear a coordinate frame 620 to keep stable the patient's body part (e.g., the head) undergoing surgery or radiotherapy. Coordinate frame 620 and a patient positioning system 622 may establish a spatial coordinate system, which may be used while imaging a patient or during radiation surgery.--, in [0101], [0107], and [0124]); and processing of calculating a second reconstruction loss of conversion processing using the second generator and the first generator in this order based on a second reconstructed generated image output from the first generator by inputting the second generated image of the first domain output from the second generator to the first generator (see XU: e.g., --[0046] In an example, the radiotherapy processing computing system 110 may obtain image data 152 from the image data source 150 (e.g., CBCT images), for hosting on the storage device 116 and the memory 114. An exemplary image data source 150 is described in detail in connection with FIG. 2B. In an example, the software programs operating on the radiotherapy processing computing system 110 may convert medical images of one format (e.g., MRI) to another format (e.g., CT), such as by producing synthetic images, such as a pseudo-CT image or an sCT image. In another example, the software programs may register or associate a patient medical image (e.g., a CT image or an MR image) with that patient's CBCT image subsequently created or captured (e.g., also represented as an image) so that corresponding images are appropriately paired and associated. In yet another example, the software programs may substitute functions of the patient images such as signed distance functions or processed versions of the images that emphasize some aspect of the image information. [0047] In an example, the radiotherapy processing computing system 110 may obtain or communicate CBCT imaging data 152 from or to image data source 150. 
Such imaging data may be provided to computing system 110 to enhance or improve the imaging data using GAN or CycleGAN modeling to produce an sCT image. The sCT image may be used by treatment data source 160 or device 180 to treat a human subject. In further examples, the treatment data source 160 receives or updates the planning data as a result of an sCT image generated by the image generation workflow 130; the image data source 150 may also provide or host the imaging data 152 for use in the image generation training workflow 140. [0048] In an example, computing system 110 may generate pairs of CBCT and real CT images using image data source 150. For example, computing system 110 may instruct a CBCT device to obtain an image of a target region of a subject (e.g., a brain region). Computing system 110 may store the image data in storage device 116 with an associated indication of a time and target region captured by the CBCT image. Computing system 110 may also instruct a CT imaging device to obtain an image of the same target region (e.g., the same cross section of the brain region) as a real CT image. Computing system 110 may associate the real CT image with the previously obtained CBCT image of the same region, thus forming a pair of real CT and CBCT images for storage in device 116 as a training pair. Computing system 110 may continue generating such pairs of training images until a threshold number of pairs are obtained. In some implementations, computing system 110 may be guided by a human operator as to which target region to obtain and which CBCT images are paired with the real CT images.--, in [0046]-[0048], and, --[0103] Thus, in this example, data preparation for the GAN model training 430 requires CT images that are paired with CBCT images (these may be referred to as training CBCT/CT images). 
In an example, the original data includes pairs of CBCT image sets and corresponding CT images that may be registered and resampled to a common coordinate frame to produce pairs of anatomy-derived images. [0104] In detail, in a GAN model, the generator (e.g., generator model 432) learns a distribution over the data x, p.sub.G(x), starting with noise input with distribution p.sub.z(z) as the generator learns a mapping G (z; θ.sub.G):p.sub.z(z).fwdarw.p.sub.G(x) where G is a differentiable function representing a neural network with layer weight and bias parameters θ.sub.G. The discriminator, D(x; θ.sub.D) (e.g., discriminator model 440), maps the generator output to a binary scalar {true, false}, deciding true if the generator output is from actual data distribution p.sub.data(x) and false if from the generator distribution p.sub.G(x). That is, D (x) is the probability that x came from p.sub.data(x) rather than from p.sub.G(x). [0105] FIG. 5 illustrates training in a GAN for generating a synthetic CT image model, according to the example techniques discussed herein. FIG. 5 specifically shows the operation flow 550 of a GAN generator model G 560, designed to produce a simulated (e.g., estimated, artificial, etc.) output sCT image 580 as a result of an input CBCT image 540. FIG. 5 also shows the operation flow 500 of a GAN discriminator model D 520, designed to produce a determination value 530 (e.g., real or fake, true or false) based on an input (e.g., a real CT image 510 or the generated sCT image 580). 
In particular, discriminator model D 520 is trained to produce an output that indicates whether discriminator model D 520 determines the generated sCT image 580 is real or fake.--, in [0103]-[0106]; and, --only apply “adversarial” losses on generators G.sub.offset.sup.cbct2ct and G.sub.offset.sup.ct2cbct (e.g., the generators in the first, third, fifth and seventh paths), and apply “cycle-consistence” losses on all generators (e.g., G.sub.offset.sup.cbct2ct, G.sub.offset.sup.ct2cbct, and G.sup.cbct2ct, G.sup.ct2cbct). The effect of minimizing the “cycle-consistence” loss terms is to preserve original structures and to avoid unnecessary structure deformation, and the effect of minimizing the “adversarial” loss terms is to learn a mapping or distribution conversion from one domain to its opponent's domain. Unlike prior single-generator approaches, two different generators are provided in each direction (e.g., two generators, one for each of the first and second paths and two generators, one for each of the third and fourth paths). One generator is provided with offset layers (such as G.sub.offset.sup.cbct2ct in the first path), and one without offset layers (such as G.sup.cbct2ct in the second flow). In addition, the generators share weights and other modules with all other layers (except those offset layers). By combining these two loss terms on separate generators, the loss terms are decoupled and will not compete with each other.--, in [0143]; also see HIBBARD: e.g., -- [0086] The coordinate system (including axes A, T, and L) shown in FIG. 2A can have an origin located at an isocenter 210. The isocenter can be defined as a location where the central axis of the radiation beam 208 intersects the origin of a coordinate axis, such as to deliver a prescribed radiation dose to a location on or within a patient. 
Alternatively, the isocenter 210 can be defined as a location where the central axis of the radiation beam 208 intersects the patient for various rotational positions of the radiation therapy output 204 as positioned by the gantry 206 around the axis A. As discussed herein, the gantry angle corresponds to the position of gantry 206 relative to axis A, although any other axis or combination of axes can be referenced and used to determine the gantry angle.--, in [0086] {Fig. 2A and descriptions are consistent with Xu’s Fig. 2A and in Xu’s [0065] quoted above}; also see: -- As shown in FIG. 6, in a radiotherapy treatment session, a patient 602 may wear a coordinate frame 620 to keep stable the patient's body part (e.g., the head) undergoing surgery or radiotherapy. Coordinate frame 620 and a patient positioning system 622 may establish a spatial coordinate system, which may be used while imaging a patient or during radiation surgery.--, in [0101], [0107], and [0124]). Re Claim 10, XU as modified by HIBBARD further disclose wherein the medical image of the first domain is a first modality image captured using a first modality which is a medical apparatus (see Xu: e.g., --[0046] In an example, the radiotherapy processing computing system 110 may obtain image data 152 from the image data source 150 (e.g., CBCT images), for hosting on the storage device 116 and the memory 114. An exemplary image data source 150 is described in detail in connection with FIG. 2B. In an example, the software programs operating on the radiotherapy processing computing system 110 may convert medical images of one format (e.g., MRI) to another format (e.g., CT), such as by producing synthetic images, such as a pseudo-CT image or an sCT image. 
In another example, the software programs may register or associate a patient medical image (e.g., a CT image or an MR image) with that patient's CBCT image subsequently created or captured (e.g., also represented as an image) so that corresponding images are appropriately paired and associated. In yet another example, the software programs may substitute functions of the patient images such as signed distance functions or processed versions of the images that emphasize some aspect of the image information. [0047] In an example, the radiotherapy processing computing system 110 may obtain or communicate CBCT imaging data 152 from or to image data source 150. Such imaging data may be provided to computing system 110 to enhance or improve the imaging data using GAN or CycleGAN modeling to produce an sCT image. The sCT image may be used by treatment data source 160 or device 180 to treat a human subject. In further examples, the treatment data source 160 receives or updates the planning data as a result of an sCT image generated by the image generation workflow 130; the image data source 150 may also provide or host the imaging data 152 for use in the image generation training workflow 140. [0048] In an example, computing system 110 may generate pairs of CBCT and real CT images using image data source 150. For example, computing system 110 may instruct a CBCT device to obtain an image of a target region of a subject (e.g., a brain region). Computing system 110 may store the image data in storage device 116 with an associated indication of a time and target region captured by the CBCT image. Computing system 110 may also instruct a CT imaging device to obtain an image of the same target region (e.g., the same cross section of the brain region) as a real CT image. Computing system 110 may associate the real CT image with the previously obtained CBCT image of the same region, thus forming a pair of real CT and CBCT images for storage in device 116 as a training pair. 
Computing system 110 may continue generating such pairs of training images until a threshold number of pairs are obtained. In some implementations, computing system 110 may be guided by a human operator as to which target region to obtain and which CBCT images are paired with the real CT images.--, in [0046]-[0048]; and, --[0052] In an example, the image data 152 may include one or more MRI image (e.g., 2D MRI, 3D MRI, 2D streaming MRI, 4D MRI, 4D volumetric MRI, 4D cine MRI, etc.), functional MRI images (e.g., fMRI, DCE-MRI, diffusion MRI),…etc.,--, in [0052]-[0053]), the medical image of the second domain is a second modality image captured using a second modality which is a medical apparatus of a different type from the first modality (see Xu: e.g., --[0046] In an example, the radiotherapy processing computing system 110 may obtain image data 152 from the image data source 150 (e.g., CBCT images), for hosting on the storage device 116 and the memory 114. An exemplary image data source 150 is described in detail in connection with FIG. 2B. In an example, the software programs operating on the radiotherapy processing computing system 110 may convert medical images of one format (e.g., MRI) to another format (e.g., CT), such as by producing synthetic images, such as a pseudo-CT image or an sCT image. In another example, the software programs may register or associate a patient medical image (e.g., a CT image or an MR image) with that patient's CBCT image subsequently created or captured (e.g., also represented as an image) so that corresponding images are appropriately paired and associated. In yet another example, the software programs may substitute functions of the patient images such as signed distance functions or processed versions of the images that emphasize some aspect of the image information. [0047] In an example, the radiotherapy processing computing system 110 may obtain or communicate CBCT imaging data 152 from or to image data source 150.
Such imaging data may be provided to computing system 110 to enhance or improve the imaging data using GAN or CycleGAN modeling to produce an sCT image. The sCT image may be used by treatment data source 160 or device 180 to treat a human subject. In further examples, the treatment data source 160 receives or updates the planning data as a result of an sCT image generated by the image generation workflow 130; the image data source 150 may also provide or host the imaging data 152 for use in the image generation training workflow 140. [0048] In an example, computing system 110 may generate pairs of CBCT and real CT images using image data source 150. For example, computing system 110 may instruct a CBCT device to obtain an image of a target region of a subject (e.g., a brain region). Computing system 110 may store the image data in storage device 116 with an associated indication of a time and target region captured by the CBCT image. Computing system 110 may also instruct a CT imaging device to obtain an image of the same target region (e.g., the same cross section of the brain region) as a real CT image. Computing system 110 may associate the real CT image with the previously obtained CBCT image of the same region, thus forming a pair of real CT and CBCT images for storage in device 116 as a training pair. Computing system 110 may continue generating such pairs of training images until a threshold number of pairs are obtained. 
In some implementations, computing system 110 may be guided by a human operator as to which target region to obtain and which CBCT images are paired with the real CT images.--, in [0046]-[0048]); and the learning model receives an input of the first modality image and is trained to generate a pseudo second modality generated image having a feature of the image captured using the second modality (see Xu: e.g., --[0046] In an example, the radiotherapy processing computing system 110 may obtain image data 152 from the image data source 150 (e.g., CBCT images), for hosting on the storage device 116 and the memory 114. An exemplary image data source 150 is described in detail in connection with FIG. 2B. In an example, the software programs operating on the radiotherapy processing computing system 110 may convert medical images of one format (e.g., MRI) to another format (e.g., CT), such as by producing synthetic images, such as a pseudo-CT image or an sCT image. In another example, the software programs may register or associate a patient medical image (e.g., a CT image or an MR image) with that patient's CBCT image subsequently created or captured (e.g., also represented as an image) so that corresponding images are appropriately paired and associated. In yet another example, the software programs may substitute functions of the patient images such as signed distance functions or processed versions of the images that emphasize some aspect of the image information. [0047] In an example, the radiotherapy processing computing system 110 may obtain or communicate CBCT imaging data 152 from or to image data source 150. Such imaging data may be provided to computing system 110 to enhance or improve the imaging data using GAN or CycleGAN modeling to produce an sCT image. The sCT image may be used by treatment data source 160 or device 180 to treat a human subject. 
In further examples, the treatment data source 160 receives or updates the planning data as a result of an sCT image generated by the image generation workflow 130; the image data source 150 may also provide or host the imaging data 152 for use in the image generation training workflow 140. [0048] In an example, computing system 110 may generate pairs of CBCT and real CT images using image data source 150. For example, computing system 110 may instruct a CBCT device to obtain an image of a target region of a subject (e.g., a brain region). Computing system 110 may store the image data in storage device 116 with an associated indication of a time and target region captured by the CBCT image. Computing system 110 may also instruct a CT imaging device to obtain an image of the same target region (e.g., the same cross section of the brain region) as a real CT image. Computing system 110 may associate the real CT image with the previously obtained CBCT image of the same region, thus forming a pair of real CT and CBCT images for storage in device 116 as a training pair. Computing system 110 may continue generating such pairs of training images until a threshold number of pairs are obtained. In some implementations, computing system 110 may be guided by a human operator as to which target region to obtain and which CBCT images are paired with the real CT images.--, in [0046]-[0048]). Re Claim 11, claim 11 is the system claim corresponding to claim 1. Thus, claim 11 is rejected for reasons similar to those given for claim 1. 
Furthermore, XU as modified by HIBBARD further discloses a machine learning system for training a learning model that converts a domain of a medical image which is input, and generates a generated image of a different domain (see XU: e.g., --a method, system, and transitory or non-transitory computer readable medium are provided for training a model to generate a synthetic computed tomography (sCT) image from a cone-beam computed tomography (CBCT) image, comprising: receiving a CBCT image of a subject as an input of a generative model; and training the generative model, via first and second paths, in a generative adversarial network (GAN) to process the CBCT image to provide first and second synthetic computed tomography (sCT) images corresponding to the CBCT image as outputs of the generative model, the first path comprising a first set of one or more deformable offset layers and a first set of one or more convolution layers, the second path comprising the first set of the one or more convolution layers without the first set of the one or more deformable offset layers. [0017] In some implementations, the GAN is trained using a cycle generative adversarial network (CycleGAN) comprising the generative model and a discriminative model, wherein the generative model is a first generative model and the discriminative model is a first discriminative model, further comprising: training a second generative model to process produced first and second sCT images as inputs and provide first and second cycle-CBCT images as outputs via third and fourth paths, respectively, the third path comprising a second set of the one or more deformable offset layers and a second set of the one or more convolution layers, the fourth path comprising the second set of the one or more convolution layers without the second set of the one or more deformable offset layers; and training a second discriminative model to classify the first cycle-CBCT image as a synthetic or a real CBCT image. 
[0018] In some implementations, the CycleGAN comprises first and second portions to train the first generative model, further comprising: obtaining a training CBCT image that is paired with a real CT image; transmitting the training CBCT image to the input of the first generative model via the first and second paths to output the first and second synthetic CT images; receiving the first synthetic CT image at the input of the first discriminative model;--, in [0016]-[0018], and see Fig. 4, and, --training and use of a generative adversarial network adapted for generating a sCT image from a received CBCT image--, in [0024]; and, --the single generator is trained to convert the CBCT image appearance to CT image in a way that removes artefacts in original CBCT images and converts to the correct CT numbers while, at the same time, being trained based on some level of structure deformation. When the shape distribution or other feature distribution in CT images domain have large amount of differences compared to the original CBCT images domain--, [0036], and, --Radiotherapy system 100 may use a GAN to generate sCT images from a received CBCT image. The sCT image may represent an improved CBCT image with sharp-edge looking features that are akin to real CT images. Radiotherapy system 100 may thus produce sCT type of images for medical analysis in real time using lower quality CBCT images that are captured of a region of a subject.--, in [0038]-[0040], and, the radiotherapy processing computing system 110 may obtain image data 152 from the image data source 150 (e.g., CBCT images)….computing system 110 may instruct a CBCT device to obtain an image of a target region of a subject (e.g., a brain region). 
Computing system 110 may store the image data in storage device 116 with an associated indication of a time and target region captured by the CBCT image.--, in [0046]-[0047]; --[0052] In an example, the image data 152 may include one or more MRI image (e.g., 2D MRI, 3D MRI, 2D streaming MRI, 4D MRI, 4D volumetric MRI, 4D cine MRI, etc.), functional MRI images (e.g., fMRI, DCE-MRI, diffusion MRI),…etc.,--, in [0052]-[0053], and, -- a true CBCT image 602 is received and provided to multiple deformable offset layers 660A in a first path. The CBCT image 602 passes through the deformable offset layers 660A in an interleaved manner with convolution blocks in the convolution blocks 661A…..first generation result 612 is an sCT image produced with offset layers and second generation result 614 is an sCT image produced without offset layers. The result 612 that includes the sCT image produced with the offset layers is provided to the first discriminator model 630 for the CT domain while result 614 is not provided to the first discriminator model 630. [0119] Referring back to FIG. 6A, first generation results 612 (e.g., sCT image) may also be concurrently provided to the second generator model 608 together with the second generation results 614 via third and fourth paths, respectively.--, in [0118]-[0119]). Re Claim 12, claim 12 is the medium claim corresponding to claim 1. Thus, claim 12 is rejected for reasons similar to those given for claim 1. 
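The cycle-consistency objective at the heart of the CycleGAN arrangement the Examiner quotes (a first generator CBCT-to-sCT, a second generator producing cycle-CBCT images, and a loss driving the reconstruction back toward the input) can be illustrated with a deliberately minimal sketch. Everything below is a hypothetical toy: the affine "generators", the 1-D "images", and the finite-difference optimizer are stand-ins for illustration only, not Xu's implementation or the claimed method.

```python
# Toy sketch of a cycle-consistency objective, assuming two simple
# stand-in "generators": G (CBCT -> sCT) and F (sCT -> cycle-CBCT).
# Real CycleGAN generators are deep networks; these are scalar affine maps.

def make_affine(w, b):
    """A stand-in 'generator': an affine map applied pixel-wise."""
    return lambda img: [w * px + b for px in img]

def cycle_loss(params, images):
    """Mean absolute error between each image and its cycle reconstruction."""
    g = make_affine(params[0], params[1])   # G: CBCT -> sCT
    f = make_affine(params[2], params[3])   # F: sCT -> cycle-CBCT
    total, count = 0.0, 0
    for img in images:
        recon = f(g(img))                   # cycle-CBCT reconstruction
        total += sum(abs(r - x) for r, x in zip(recon, img))
        count += len(img)
    return total / count

def train(images, steps=200, lr=0.05, eps=1e-4):
    """Minimise the cycle loss by crude finite-difference gradient descent."""
    params = [0.5, 0.2, 0.5, -0.2]          # arbitrary starting point
    for _ in range(steps):
        base = cycle_loss(params, images)
        grads = []
        for i in range(len(params)):
            bumped = list(params)
            bumped[i] += eps
            grads.append((cycle_loss(bumped, images) - base) / eps)
        params = [p - lr * g for p, g in zip(params, grads)]
    return params

images = [[0.1, 0.4, 0.9], [0.3, 0.7, 0.2]]   # toy "CBCT" data
before = cycle_loss([0.5, 0.2, 0.5, -0.2], images)
after = cycle_loss(train(images), images)
print(f"cycle loss: {before:.3f} -> {after:.3f}")
```

The point of the sketch is only the shape of the objective: the loss compares F(G(x)) against x, so minimising it forces the backward generator to undo the forward one, which is the mechanism the quoted [0017]-[0018] passages describe for training the second generative model.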
Furthermore, XU as modified by HIBBARD further discloses a non-transitory, computer-readable tangible recording medium on which a program for causing, when read by a computer, the computer to execute the method of generating a trained model according to claim 1 (see XU: e.g., --a method, system, and transitory or non-transitory computer readable medium are provided for training a model to generate a synthetic computed tomography (sCT) image from a cone-beam computed tomography (CBCT) image, comprising: receiving a CBCT image of a subject as an input of a generative model; and training the generative model, via first and second paths, in a generative adversarial network (GAN) to process the CBCT image to provide first and second synthetic computed tomography (sCT) images corresponding to the CBCT image as outputs of the generative model, the first path comprising a first set of one or more deformable offset layers and a first set of one or more convolution layers, the second path comprising the first set of the one or more convolution layers without the first set of the one or more deformable offset layers. 
[0017] In some implementations, the GAN is trained using a cycle generative adversarial network (CycleGAN) comprising the generative model and a discriminative model, wherein the generative model is a first generative model and the discriminative model is a first discriminative model, further comprising: training a second generative model to process produced first and second sCT images as inputs and provide first and second cycle-CBCT images as outputs via third and fourth paths, respectively, the third path comprising a second set of the one or more deformable offset layers and a second set of the one or more convolution layers, the fourth path comprising the second set of the one or more convolution layers without the second set of the one or more deformable offset layers; and training a second discriminative model to classify the first cycle-CBCT image as a synthetic or a real CBCT image. [0018] In some implementations, the CycleGAN comprises first and second portions to train the first generative model, further comprising: obtaining a training CBCT image that is paired with a real CT image; transmitting the training CBCT image to the input of the first generative model via the first and second paths to output the first and second synthetic CT images; receiving the first synthetic CT image at the input of the first discriminative model;--, in [0016]-[0018], and see Fig. 4, and, --training and use of a generative adversarial network adapted for generating a sCT image from a received CBCT image--, in [0024]; and, --the single generator is trained to convert the CBCT image appearance to CT image in a way that removes artefacts in original CBCT images and converts to the correct CT numbers while, at the same time, being trained based on some level of structure deformation. 
When the shape distribution or other feature distribution in CT images domain have large amount of differences compared to the original CBCT images domain--, [0036], and, --Radiotherapy system 100 may use a GAN to generate sCT images from a received CBCT image. The sCT image may represent an improved CBCT image with sharp-edge looking features that are akin to real CT images. Radiotherapy system 100 may thus produce sCT type of images for medical analysis in real time using lower quality CBCT images that are captured of a region of a subject.--, in [0038]-[0040], and, the radiotherapy processing computing system 110 may obtain image data 152 from the image data source 150 (e.g., CBCT images)….computing system 110 may instruct a CBCT device to obtain an image of a target region of a subject (e.g., a brain region). Computing system 110 may store the image data in storage device 116 with an associated indication of a time and target region captured by the CBCT image.--, in [0046]-[0047]; --[0052] In an example, the image data 152 may include one or more MRI image (e.g., 2D MRI, 3D MRI, 2D streaming MRI, 4D MRI, 4D volumetric MRI, 4D cine MRI, etc.), functional MRI images (e.g., fMRI, DCE-MRI, diffusion MRI),…etc.,--, in [0052]-[0053], and, -- a true CBCT image 602 is received and provided to multiple deformable offset layers 660A in a first path. The CBCT image 602 passes through the deformable offset layers 660A in an interleaved manner with convolution blocks in the convolution blocks 661A…..first generation result 612 is an sCT image produced with offset layers and second generation result 614 is an sCT image produced without offset layers. The result 612 that includes the sCT image produced with the offset layers is provided to the first discriminator model 630 for the CT domain while result 614 is not provided to the first discriminator model 630. [0119] Referring back to FIG. 
6A, first generation results 612 (e.g., sCT image) may also be concurrently provided to the second generator model 608 together with the second generation results 614 via third and fourth paths, respectively.--, in [0118]-[0119]). Re Claim 13, claim 13 is the apparatus claim corresponding to claim 1. Thus, claim 13 is rejected for reasons similar to those given for claim 1. Furthermore, XU as modified by HIBBARD further discloses a medical image processing apparatus comprising: a second storage device that stores a first trained model which is the trained first generator trained by implementing the method of generating a trained model according to claim 1 (see XU: e.g., --a method, system, and transitory or non-transitory computer readable medium are provided for training a model to generate a synthetic computed tomography (sCT) image from a cone-beam computed tomography (CBCT) image, comprising: receiving a CBCT image of a subject as an input of a generative model; and training the generative model, via first and second paths, in a generative adversarial network (GAN) to process the CBCT image to provide first and second synthetic computed tomography (sCT) images corresponding to the CBCT image as outputs of the generative model, the first path comprising a first set of one or more deformable offset layers and a first set of one or more convolution layers, the second path comprising the first set of the one or more convolution layers without the first set of the one or more deformable offset layers. 
[0017] In some implementations, the GAN is trained using a cycle generative adversarial network (CycleGAN) comprising the generative model and a discriminative model, wherein the generative model is a first generative model and the discriminative model is a first discriminative model, further comprising: training a second generative model to process produced first and second sCT images as inputs and provide first and second cycle-CBCT images as outputs via third and fourth paths, respectively, the third path comprising a second set of the one or more deformable offset layers and a second set of the one or more convolution layers, the fourth path comprising the second set of the one or more convolution layers without the second set of the one or more deformable offset layers; and training a second discriminative model to classify the first cycle-CBCT image as a synthetic or a real CBCT image. [0018] In some implementations, the CycleGAN comprises first and second portions to train the first generative model, further comprising: obtaining a training CBCT image that is paired with a real CT image; transmitting the training CBCT image to the input of the first generative model via the first and second paths to output the first and second synthetic CT images; receiving the first synthetic CT image at the input of the first discriminative model;--, in [0016]-[0018], and see Fig. 4, and, --training and use of a generative adversarial network adapted for generating a sCT image from a received CBCT image--, in [0024]; and, --the single generator is trained to convert the CBCT image appearance to CT image in a way that removes artefacts in original CBCT images and converts to the correct CT numbers while, at the same time, being trained based on some level of structure deformation. 
When the shape distribution or other feature distribution in CT images domain have large amount of differences compared to the original CBCT images domain--, [0036], and, --Radiotherapy system 100 may use a GAN to generate sCT images from a received CBCT image. The sCT image may represent an improved CBCT image with sharp-edge looking features that are akin to real CT images. Radiotherapy system 100 may thus produce sCT type of images for medical analysis in real time using lower quality CBCT images that are captured of a region of a subject.--, in [0038]-[0040], and, the radiotherapy processing computing system 110 may obtain image data 152 from the image data source 150 (e.g., CBCT images)….computing system 110 may instruct a CBCT device to obtain an image of a target region of a subject (e.g., a brain region). Computing system 110 may store the image data in storage device 116 with an associated indication of a time and target region captured by the CBCT image.--, in [0046]-[0047]; --[0052] In an example, the image data 152 may include one or more MRI image (e.g., 2D MRI, 3D MRI, 2D streaming MRI, 4D MRI, 4D volumetric MRI, 4D cine MRI, etc.), functional MRI images (e.g., fMRI, DCE-MRI, diffusion MRI),…etc.,--, in [0052]-[0053], and, -- a true CBCT image 602 is received and provided to multiple deformable offset layers 660A in a first path. The CBCT image 602 passes through the deformable offset layers 660A in an interleaved manner with convolution blocks in the convolution blocks 661A…..first generation result 612 is an sCT image produced with offset layers and second generation result 614 is an sCT image produced without offset layers. The result 612 that includes the sCT image produced with the offset layers is provided to the first discriminator model 630 for the CT domain while result 614 is not provided to the first discriminator model 630. [0119] Referring back to FIG. 
6A, first generation results 612 (e.g., sCT image) may also be concurrently provided to the second generator model 608 together with the second generation results 614 via third and fourth paths, respectively.--, in [0118]-[0119]). Conclusion Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to WEI WEN YANG whose telephone number is (571)270-5670. The examiner can normally be reached from 8:00 am to 5:00 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini, can be reached at 571-272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. 
Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /WEI WEN YANG/Primary Examiner, Art Unit 2662
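The two-path generator arrangement the Examiner repeatedly cites from Xu (a shared stack of convolution blocks producing one output through interleaved deformable offset layers and a second output through the convolution blocks alone) can be sketched in miniature. The stages below are hypothetical scalar stand-ins chosen for illustration; real deformable offset layers learn per-pixel sampling displacements, which this toy only gestures at with an integer shift.

```python
# Toy sketch, assuming stand-in stages: one shared set of "convolution"
# stages yields two outputs, the first path interleaving hypothetical
# "deformable offset" stages, the second path skipping them (as in the
# quoted first/second-path description). Not Xu's actual implementation.

def conv_stage(img, k):
    """Stand-in for a convolution block: scale every pixel by k."""
    return [k * px for px in img]

def offset_stage(img, shift):
    """Stand-in for a deformable offset layer: resample at shifted,
    clamped positions instead of each pixel's own position."""
    n = len(img)
    return [img[min(max(i + shift, 0), n - 1)] for i in range(n)]

def two_path_generator(img, conv_ks=(1.1, 0.9), shifts=(1, -1)):
    """Return (with_offsets, without_offsets) outputs that share the
    same convolution stages, mirroring the two paths in the quote."""
    with_off = list(img)
    without_off = list(img)
    for k, s in zip(conv_ks, shifts):
        with_off = conv_stage(offset_stage(with_off, s), k)  # interleaved path
        without_off = conv_stage(without_off, k)             # conv-only path
    return with_off, without_off

a, b = two_path_generator([0.0, 1.0, 2.0, 3.0])
print(a, b)
```

Under this sketch, the two outputs differ only where the offset stages resampled the signal, which is the property the quoted [0118] passage relies on when only the offset-path result 612 is fed to the CT-domain discriminator.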

Prosecution Timeline

Jul 24, 2023
Application Filed
Sep 16, 2025
Non-Final Rejection — §103
Dec 16, 2025
Response Filed
Feb 24, 2026
Final Rejection — §103
Mar 27, 2026
Examiner Interview Summary
Mar 27, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602789
ENDOSCOPIC IMAGE SEGMENTATION METHOD BASED ON SINGLE IMAGE AND DEEP LEARNING NETWORK
2y 5m to grant Granted Apr 14, 2026
Patent 12586413
METHOD FOR RECOGNIZING ACTIVITIES USING SEPARATE SPATIAL AND TEMPORAL ATTENTION WEIGHTS
2y 5m to grant Granted Mar 24, 2026
Patent 12582359
IMAGE DISPLAY METHOD, STORAGE MEDIUM, AND IMAGE DISPLAY DEVICE
2y 5m to grant Granted Mar 24, 2026
Patent 12573034
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD AND PROGRAM, AND IMAGE PROCESSING SYSTEM
2y 5m to grant Granted Mar 10, 2026
Patent 12567168
DATA PROCESSING METHOD AND APPARATUS, DEVICE, AND READABLE STORAGE MEDIUM
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
82%
Grant Probability
93%
With Interview (+10.9%)
2y 8m
Median Time to Grant
Moderate
PTA Risk
Based on 657 resolved cases by this examiner. Grant probability derived from career allow rate.
