DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Claims
Claims 1-20 are currently pending in the present application, with claims 1, 12, and 19 being independent.
Response to Amendments / Arguments
Applicant’s arguments with respect to claim(s) 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Applicant's arguments filed 02/05/2026 have been fully considered but they are not persuasive.
Applicant argues: Liang et al. (US 20220084173) does not teach the claimed simulation pipeline because Liang’s “synthetic” images are pre-existing dataset images (e.g., BRATS 2013), not images generated by a medical imaging simulator from a selected digital phantom. Applicant further argues that Liang does not teach “evaluating the realistic version of the simulated image,” but instead evaluates downstream application performance.
Examiner replies that the rejection relies on the combination of Abadi and Liang. Abadi explicitly teaches generating simulated medical images from digital phantoms, including “selecting a digital phantom” (Section 2, Fig. 2, and Fig. 3; computational, anthropomorphic phantoms…procedurally generated phantoms…patient-based phantoms. Fig. 2, 3, 6, and 8; XCAT phantoms) and “inputting the digital phantom into a medical imaging simulator configured to generate a simulated image of the digital phantom” (Section 3; simulators of the imaging system to "virtually image" the virtual subjects…Section 3.1; to generate the simulated images, the acquisition geometry, system components, and computational phantoms are input to an x-ray interaction simulation framework…Section 3.2 and Fig. 10; realistic simulated PET and SPECT images generated from different human phantoms…Section 3.3; Realistic 3-D MRI simulations of computational phantoms…Section 3.4 and Fig. 11; simulation creates a realistic scattering field from the complex numerical breast phantom). Liang teaches inputting an image into an unpaired image-to-image translation network to generate a realistic version of the image (Par. 0011, 0039. Par. 0155-0156; generator…learns to change input pixels…to match the distribution of pixels of the target domain. Par. 0104; disease-to-healthy…healthy-to-healthy translations…). Under broadest reasonable interpretation, the “simulated image” generated by Abadi constitutes an input image to Liang’s network. The rejection does not require Liang alone to disclose the entire pipeline, and Liang’s network is not limited to a particular source of images; therefore, it would have been obvious to input Abadi’s simulated images into Liang’s translation network.
Additionally, the claim recites “evaluating the realistic version of the simulated image” without limiting how the evaluation must be performed. Liang discloses evaluating the output of the GAN model, including use of generated images for diagnostic purposes (Par. 0165; trained GAN model provides disease diagnosis from a diseased medical scan of a patient) and comparison against ground truth labels for determining outcomes (Par. 0103; generate a set of PE candidates, and then by comparing against the ground truth, the PE candidates are labeled as PE or non-PE). Under broadest reasonable interpretation, evaluating the generated image through diagnostic performance or comparison with ground truth constitutes evaluating the translated (i.e., “realistic”) version of the image. The claim does not require a specific realism metric; therefore, Liang’s evaluation of generated images in a clinical context reasonably satisfies the limitation of “evaluating the realistic version of the simulated image”.
Regarding the remaining arguments: Applicant’s remaining arguments are directed to the amended claim language, which is fully addressed in the prior art rejections set forth below.
Claim Objections
Claim 12 is objected to because of the following informalities: The recited “HU” in claim 12 should be amended to recite “Hounsfield Unit (HU)”. Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-6, and 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Liang et al. (US 20220084173), hereinafter referred to as “Liang”, in view of Abadi, Ehsan, et al. "Virtual clinical trials in medical imaging: a review." Journal of Medical Imaging 7.4 (2020): 042805-042805, hereinafter referred to as “Abadi”, in further view of Qiu et al. "Deep learning-based thoracic CBCT correction with histogram matching." Biomedical physics & engineering express 7, no. 6 (2021): 065040, hereinafter referred to as “Qiu”.
Regarding claim 1, Liang discloses a method for training a model for unpaired Image-to-Image translation of medical images (Par. 0011; Fixed-Point GAN…image-to-image translation…field of medical image processing. Par. 0039; a modified GAN… (1) handling unpaired images; (2) translating any image to a target domain requiring no source domain; (3) performing an identity transformation during same-domain translation; (4) performing minimal image transformation for cross-domain translation; and (5) scaling efficiently to multi-domain translation. See more in Par. 0065-0069), the method comprising:
acquiring a plurality of real patient images (Par. 0100-0101; BRATS 2013 dataset consists of…real brain MR images…),
acquiring a plurality of synthetic images (Par. 0100-0101; BRATS 2013 dataset consists of synthetic…brain MR images…),
and training a model to transform the plurality of synthetic images to resemble the plurality of real patient images (Par. 0057; Because the Fixed-Point GAN is trained using unpaired images, cycle-consistency is further utilized as set forth by Equation 5, so as to ensure that the generated images are close to the input images in both cross-domain (FIG. 3B, operation “E”) and same-domain (FIG. 3C operation “H”) translation learning. Par. 0104; …(1) disease-to-healthy and (2) healthy-to-healthy translations at the testing stage…Fixed-Point GAN's ability to perform both local and identical transformations), wherein the model is trained using at least one specialized loss function (FIG. 3A-3D. Par. 0063; domain classification loss, cycle consistency loss, and conditional identity loss).
Liang does not appear to explicitly disclose digital phantoms.
In the same art of medical imaging systems, Abadi discloses digital phantoms (Pg. 042805-2 - 042805-5, Section 2, Fig. 2, and Fig. 3; computational, anthropomorphic phantoms…procedurally generated phantoms…patient-based phantoms).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to use synthetic images generated from digital phantoms, as taught by Abadi, as input data for the unpaired image-to-image translation method disclosed by Liang. Using these simulated images alongside Liang’s existing synthetic datasets (e.g., BRATS) would have yielded predictable results in the diversity and reliability of training data for medical imaging.
Liang in view of Abadi does not disclose wherein the model is trained using at least one specialized loss function that ensures the transformed images retain original Hounsfield Unit (HU) values.
In the same art of deep learning-based image-to-image translation for medical imaging, Qiu discloses wherein the model is trained using at least one specialized loss function that ensures the transformed images retain original Hounsfield Unit (HU) values (Section 1; Global histogram matching was performed via an informative maximizing (MaxInfo) Loss calculated between planning CT and sCT derived by feeding CBCT’s into the HM-Cycle-GAN…HU. Fig. 1 and Section 2.5; perceptual loss was integrated into the loss function to prioritize generating accurate tissue boundaries in the transformed 3D images…Cycle-GAN is designed to predict the sCT to reach similar level of both intensity accuracy and histogram distribution…MAE loss used to force the sCT’s voxel-wise intensity accuracy, GDE used to force the sCT's structure to be similar to planning CT and thus reduce the scatter artifact, and MaxInfo loss (3)…)
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the HU-preserving specialized loss function taught by Qiu into the unpaired image-to-image translation framework of Liang as applied to the simulator-generated digital phantom images of Abadi. Doing so preserves the quantitative voxel values of the phantom source image while still transforming the image toward a realistic patient-image appearance, yielding predictable results in radiological fidelity and more reliable translated images in medical imaging applications (Qiu Section 1; Histogram matching is important in the thorax, which has high tissue heterogeneity. The accurate HU around the boundary is important to dose calculation, particularly for SBRT and proton therapy. Our method produced high quality thoracic sCT images that can be used for dose calculation and organ segmentations).
Regarding claim 2, Liang in view of Abadi in further view of Qiu discloses the method of claim 1, and further discloses wherein the plurality of real patient images are provided by scanning a patient using a CT medical imaging system (Liang Par. 0100-0101; BRATS 2013 dataset consists of…real brain MR images. Par. 0103; 121 computed tomography pulmonary angiography (CTPA) scans. Par. 0165; diseased medical scan of a patient).
Liang, Abadi, and Qiu are combined for the reason set forth above with respect to claim 1.
Regarding claim 3, Liang in view of Abadi in further view of Qiu discloses the method of claim 1, and further discloses wherein the plurality of synthetic images are provided by scanning the digital phantoms (Liang Par. 0100-0101; BRATS 2013 dataset consists of synthetic…brain MR images…MR imaging sequence (FLAIR) for all patients in both HG and LG categories, resulting in a total of 9,050 synthetic MR slices…).
Liang does not disclose using a medical imaging system simulator.
In the same art of medical imaging systems, Abadi discloses using a medical imaging system simulator (Pg. 042805-2, Section 1; Virtual clinical trials (VCTs)…simulation experiments could be in the context of human models being imaged with imaging devices. Pg. 042805-3, Section 2.1; Phantoms are first constructed by defining objects to represent the necessary organs and structures of a given subject…for input into corresponding imaging simulation).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the medical imaging system simulator taught by Abadi into Liang’s Fixed-Point GAN training. Incorporating a simulation system enables controlled generation and virtual scanning of synthetic images from digital phantoms, allowing tailored variations in imaging parameters. The motivation lies in the advantage of improving the quality and realism of training data for unpaired image-to-image translation in medical imaging.
Regarding claim 4, Liang in view of Abadi in further view of Qiu discloses the method of claim 1, and further discloses wherein the model comprises a generator network trained using an adversarial machine learning process (Liang Par. 0011-0012; Fixed-Point GAN…implementing fixed-point image-to-image translation using improved Generative Adversarial Networks (GANs))
Liang, Abadi, and Qiu are combined for the reason set forth above with respect to claim 1.
Regarding claim 5, Liang in view of Abadi in further view of Qiu discloses the method of claim 4, and further discloses wherein the generator network is trained using a CycleGAN architecture (Liang Par. 0132; use of CycleGAN provides for improvements in unpaired image-to-image translations via cycle consistency. Par. 0063; cycle consistency loss. Fig. 5A and Par. 0087; The experiment compared the Fixed-Point GAN with CycleGAN…CycleGAN can be utilized only to translate images containing two domains, comparing it as a baseline provides more insight into the performance of the Fixed-Point GAN as a scalable alternative).
Liang, Abadi, and Qiu are combined for the reason set forth above with respect to claim 1.
Regarding claim 6, Liang in view of Abadi in further view of Qiu discloses the method of claim 4, and further discloses wherein the generator network is trained using a STARGAN architecture (Liang Par. 0076; StarGAN was used as the baseline to provide multi-domain image-to-image translation. Fig. 4A-4B, 5A and Par. 0087; The experiment compared the Fixed-Point GAN with…StarGAN…allowed the study of the effect of fixed-point translation. Par. 0109; comparison with StarGAN reveals the effect of the proposed fixed-point translation learning. Par. 0123; the same generator and discriminator architectures as the public implementation of StarGAN were used. All models were trained using the Adam optimizer with a learning rate of 1e.sup.-4 for both the generator and discriminator across all experiments).
Liang, Abadi, and Qiu are combined for the reason set forth above with respect to claim 1.
Regarding claim 11, Liang in view of Abadi in further view of Qiu discloses the method of claim 1, and further discloses wherein the specialized loss function comprises a comparison between the output image and an annotated image (Liang Par. 0052; 1) distinguish between real images and fake (e.g., translated or manipulated) images…training begins with providing the discriminator network a batch of random real images from the dataset as input. Par. 0055; if the input image has eyeglasses, then c.sub.x=with eyeglasses and c.sub.y=without eyeglasses…generator is trained to generate images in the correct domain. Par. 0057; ensure that the generated images are close to the input images in both cross-domain (FIG. 3B, operation “E”) and same-domain (FIG. 3C operation “H”) translation learning). Examiner’s note: domain-labeled images functionally act as ground-truth annotations (e.g., with eyeglasses vs. without eyeglasses).
Liang, Abadi, and Qiu are combined for the reason set forth above with respect to claim 1.
Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Liang et al. (US 20220084173), hereinafter referred to as “Liang”, in view of Abadi, Ehsan, et al. "Virtual clinical trials in medical imaging: a review." Journal of Medical Imaging 7.4 (2020): 042805-042805, hereinafter referred to as “Abadi”, in further view of Qiu et al. "Deep learning-based thoracic CBCT correction with histogram matching." Biomedical physics & engineering express 7, no. 6 (2021): 065040, hereinafter referred to as “Qiu”, and in further view of Sakboonyarat et al. Discriminative Image Enhancement for Robust Cascaded Segmentation of CT Images. ECTI Transactions on Computer and Information Technology (ECTI-CIT) (2021), hereinafter referred to as “Sakboonyarat”.
Regarding claim 7, Liang in view of Abadi in further view of Qiu discloses the method of claim 1, and further discloses a comparison between original and generated images (Liang Par. 0100; synthetic and real brain MR images. Par. 0103; generate a set of PE candidates, and then by comparing against the ground truth, the PE candidates are labeled as PE or non-PE).
Liang in view of Abadi in further view of Qiu does not disclose wherein the specialized loss function comprises a loss value based on a comparison of HU value histograms.
In the same art of medical imaging systems, Sakboonyarat discloses wherein the specialized loss function comprises a loss value based on a comparison of HU value histograms (Pg. 154-155, Section 3.3; HU values from all voxels inside these 2D bounding boxes are accumulated to create the HU histogram…Fig. 5; Procedure for creating an HU histogram…Fig. 6; HU histograms of liver ground truth (red), accumulated 2D bounding boxes detected by RetinaNet (blue dotted line), and original CT slices containing the liver region (gray)…Pg. 160, Section 4.4; directly related to the loss function utilized in model training).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to include loss values based on HU value histograms, as taught by Sakboonyarat, in the medical/CT imaging system of Liang, Abadi, and Qiu. An HU value histogram is a known tool in CT imaging that helps clinicians quantify tissue characteristics and diagnose conditions; therefore, comparing the HU values of input and generated images yields the predictable result of ensuring that the quality of the output image does not deviate from the true norm used in diagnosis.
Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Liang et al. (US 20220084173), hereinafter referred to as “Liang”, in view of Abadi, Ehsan, et al. "Virtual clinical trials in medical imaging: a review." Journal of Medical Imaging 7.4 (2020): 042805-042805, hereinafter referred to as “Abadi”, in further view of Qiu et al. "Deep learning-based thoracic CBCT correction with histogram matching." Biomedical physics & engineering express 7, no. 6 (2021): 065040, hereinafter referred to as “Qiu”, and in further view of Tang et al. (US 20250166247), hereinafter referred to as “Tang”.
Regarding claim 8, Liang in view of Abadi in further view of Qiu discloses the method of claim 1, but does not disclose wherein the specialized loss function comprises calculating a region of interest loss.
In the same art of medical imaging, Tang discloses wherein the specialized loss function comprises calculating a region of interest loss (Par. 0054-0055; loss function of the vessel ROI…the loss function can be formulated based on the gradient to describe edge information within the vessel ROI).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the region of interest loss function as taught by Tang into the unpaired image-to-image translation model of Liang and Abadi. The motivation lies in the advantage of localized fidelity in synthetic medical images, especially in diagnosing specific or significant areas.
Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Liang et al. (US 20220084173), hereinafter referred to as “Liang”, in view of Abadi, Ehsan, et al. "Virtual clinical trials in medical imaging: a review." Journal of Medical Imaging 7.4 (2020): 042805-042805, hereinafter referred to as “Abadi”, in further view of Qiu et al. "Deep learning-based thoracic CBCT correction with histogram matching." Biomedical physics & engineering express 7, no. 6 (2021): 065040, hereinafter referred to as “Qiu”, and in further view of Zheng et al. (WO 2022120758), hereinafter referred to as “Zheng”.
Regarding claim 9, Liang in view of Abadi in further view of Qiu discloses the method of claim 1, but does not disclose wherein the specialized loss function comprises a feature matching loss.
In the same art of medical imaging, Zheng discloses wherein the specialized loss function comprises a feature matching loss (Pg. 8, Formula (4); the loss function of the conjugate generative adversarial network further includes a feature matching loss function).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the feature matching loss function as taught by Zheng into the unpaired image-to-image translation model of Liang, Abadi, and Qiu. The motivation lies in the advantage of preserving semantic features and enhancing structural/diagnostic consistency in medical images.
Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Liang et al. (US 20220084173), hereinafter referred to as “Liang”, in view of Abadi, Ehsan, et al. "Virtual clinical trials in medical imaging: a review." Journal of Medical Imaging 7.4 (2020): 042805-042805, hereinafter referred to as “Abadi”, in further view of Qiu et al. "Deep learning-based thoracic CBCT correction with histogram matching." Biomedical physics & engineering express 7, no. 6 (2021): 065040, hereinafter referred to as “Qiu”, and in further view of Guendel et al. (CN 113971657), hereinafter referred to as “Guendel”.
Regarding claim 10, Liang in view of Abadi in further view of Qiu discloses the method of claim 1, but does not disclose wherein the specialized loss function comprises a loss that enforces regularization or a physical simulation consistency.
In the same art of medical imaging, Guendel discloses wherein the specialized loss function comprises a loss that enforces regularization (Par. 0004; Machine training uses a loss function that includes regularization. Regularization is noise regularization and/or correlation regularization).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the regularization loss function as taught by Guendel into the unpaired image-to-image translation model of Liang, Abadi, and Qiu. The motivation lies in the advantage of enhancing image quality by reducing noise in order to obtain accurate and realistic medical images.
Claim(s) 12-17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Abadi, Ehsan, et al. "Virtual clinical trials in medical imaging: a review." Journal of Medical Imaging 7.4 (2020): 042805-042805, hereinafter referred to as “Abadi”, in view of Liang et al. (US 20220084173), hereinafter referred to as “Liang”, in further view of Qiu et al. "Deep learning-based thoracic CBCT correction with histogram matching." Biomedical physics & engineering express 7, no. 6 (2021): 065040, hereinafter referred to as “Qiu”.
Regarding claim 12, Abadi discloses a system for performing a virtual clinical trial (virtual clinical trials (VCTs) in medical imaging), the system comprising:
a plurality of virtual digital phantoms and/or physical phantoms (Pg. 042805-2 - 042805-5, Section 2, Fig. 2, and Fig. 3; computational, anthropomorphic phantoms…procedurally generated phantoms…patient-based phantoms) wherein each of the virtual digital phantoms and/or physical phantoms includes one or more ground truth values (Pg. 042805-2, Section 2; The advantage of computational phantoms is that, unlike actual patients, their exact anatomy is known, providing a "gold standard" or "ground truth" from which to quantitatively evaluate and improve imaging devices and techniques…Section 4; known ground truths),
a virtual imaging simulator configured to generate a virtual image from each of the plurality of virtual digital phantoms and/or physical phantoms (Pg. 042805-9 - 042805-15, Section 3; simulators of the imaging system to "virtually image" the virtual subjects…Section 3.1; to generate the simulated images, the acquisition geometry, system components, and computational phantoms are input to an x-ray interaction simulation framework…Section 3.2 and Fig. 10; realistic simulated PET and SPECT images generated from different human phantoms…Section 3.3; Realistic 3-D MRI simulations of computational phantoms…Section 3.4 and Fig. 11; simulation creates a realistic scattering field from the complex numerical breast phantom).
Abadi does not disclose a model configured for unpaired Image-to-Image translation, the model configured to transform the virtual images to resemble real patient images while maintaining the one or more ground truth values.
In the same art of medical imaging, Liang discloses a model configured for unpaired Image-to-Image translation (Par. 0011; Fixed-Point GAN…image-to-image translation…field of medical image processing. Par. 0039; a modified GAN…(1) handling unpaired images; (2) translating any image to a target domain requiring no source domain; (3) performing an identity transformation during same-domain translation; (4) performing minimal image transformation for cross-domain translation; and (5) scaling efficiently to multi-domain translation. See more in Par. 0065-0069), the model configured to transform the virtual images to resemble real patient images (Par. 0057; Because the Fixed-Point GAN is trained using unpaired images, cycle-consistency is further utilized as set forth by Equation 5, so as to ensure that the generated images are close to the input images in both cross-domain (FIG. 3B, operation “E”) and same-domain (FIG. 3C operation “H”) translation learning. Par. 0104; …(1) disease-to-healthy and (2) healthy-to-healthy translations at the testing stage…Fixed-Point GAN's ability to perform both local and identical transformations) while maintaining the one or more ground truth values (Par. 0052; The Fixed-Point GAN described herein uses only one generator network and one discriminator network. The generator network produces the translated images given the input images and the desired target domains, while the discriminator network aims to 1) distinguish between real images and fake (e.g., translated or manipulated) images…2) predict the domains of the input images. The training begins with providing the discriminator network a batch of random real images from the dataset as input. Par. 0103; generate a set of PE candidates, and then by comparing against the ground truth, the PE candidates are labeled as PE or non-PE).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the unpaired image-to-image translation model taught by Liang into the virtual clinical trials of Abadi. Integrating Liang’s Fixed-Point GAN into a medical simulation system would enable the transformation of simulated images to resemble real patient images, allowing enhanced realism and increased utility of VCTs. The combination provides predictable results in improving image fidelity and domain alignment of medical imaging.
Abadi in view of Liang does not disclose one or more ground truth HU values.
In the same art of deep learning-based image-to-image translation for medical imaging, Qiu discloses one or more ground truth HU values (Fig. 2-4; Intensity (HU) and Section 2.5; perceptual loss was integrated into the loss function to prioritize generating accurate tissue boundaries in the transformed 3D images…Cycle-GAN is designed to predict the sCT to reach similar level of both intensity accuracy and histogram distribution…MAE loss used to force the sCT’s voxel-wise intensity accuracy, GDE used to force the sCT's structure to be similar to planning CT and thus reduce the scatter artifact, and MaxInfo loss (3)…).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the HU-preserving image translation techniques taught by Qiu into the unpaired image-to-image translation model of Liang. Abadi teaches a system using digital phantoms and imaging simulators to generate virtual medical images with known ground truth characteristics, while Liang teaches using an unpaired image-to-image translation model to transform medical images to resemble real patient images. Therefore, incorporating Qiu’s intensity and distribution preserving loss techniques into the combined system would have been a predictable improvement to maintain the quantitative HU information of phantom-derived virtual images while enhancing realism, thereby improving reliability of translated images in a virtual clinical trial system.
Regarding claim 13, Abadi in view of Liang in further view of Qiu discloses the system of claim 12, and further discloses wherein the plurality of virtual digital phantoms are XCAT models (Abadi Fig. 2, 3, 6, and 8; XCAT phantoms).
Abadi, Liang, and Qiu are combined for the reason set forth above with respect to claim 12.
Regarding claim 14, Abadi in view of Liang in further view of Qiu discloses the system of claim 12, and further discloses wherein the virtual imaging simulator is configured to simulate a CT scan of the plurality of virtual digital phantoms and/or physical phantoms (Abadi Pg. 042805-2; VCT, the human subject is replaced with a virtual digital phantom, the imaging system with a virtual simulated scanner…imaging data of a computer phantom can be generated using a computerized scanner model…Section 3.1; Fig. 9 shows examples of scanner-specific simulated images of mammography, tomosynthesis, and CT…Section 3.2; SPECT scanner simulations…Section 5.2; CT imaging).
Abadi, Liang, and Qiu are combined for the reason set forth above with respect to claim 12.
Regarding claim 15, Abadi in view of Liang in further view of Qiu discloses the system of claim 12, and further discloses wherein the model comprises a GAN based architecture (Liang; Fixed-Point GAN).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to adopt the GAN-based architecture of Liang in Abadi’s virtual clinical trial system. Liang’s GAN-based architecture is well-suited for transforming unpaired image domains and is widely used in medical imaging systems.
Regarding claim 16, Abadi in view of Liang in further view of Qiu discloses the system of claim 12, and further discloses wherein the model comprises a Generative AI based architecture (Liang Par. 0004; field of medical imaging and analysis using machine learning models, and more particularly, to systems, methods, and apparatuses for implementing fixed-point image-to-image translation using improved Generative Adversarial Networks (GANs)).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to implement Liang’s Generative AI techniques in Abadi’s virtual clinical trial system. A machine-learning-based generative method trained on unpaired datasets is well suited to enhancing synthetic image realism, yielding predictable results in transforming virtual phantom outputs into clinically realistic images for post-process use.
Regarding claim 17, Abadi in view of Liang in further view of Qiu discloses the system of claim 12, and further discloses wherein the model is trained using an additional loss function that maintains the one or more ground truth values (Qiu Section 1; Global histogram matching was performed via an informative maximizing (MaxInfo) Loss calculated between planning CT and sCT derived by feeding CBCT’s into the HM-Cycle-GAN…HU. Fig. 1 and Section 2.5; perceptual loss was integrated into the loss function to prioritize generating accurate tissue boundaries in the transformed 3D images…Cycle-GAN is designed to predict the sCT to reach similar level of both intensity accuracy and histogram distribution…MAE loss used to force the sCT’s voxel-wise intensity accuracy, GDE used to force the sCT's structure to be similar to planning CT and thus reduce the scatter artifact, and MaxInfo loss (3)…).
Abadi, Liang, and Qiu are combined for the reasons set forth above with respect to claim 12.
Claim(s) 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Abadi, Ehsan, et al. "Virtual clinical trials in medical imaging: a review." Journal of Medical Imaging 7.4 (2020): 042805-042805, hereinafter referred to as “Abadi”, in view of Liang et al. (US 20220084173), hereinafter referred to as “Liang”.
Regarding claim 19, Abadi discloses A method for generating a synthetic medical image, the method comprising:
selecting a digital phantom (Pg. 042805-2 - 042805-5, Section 2, Fig. 2, and Fig. 3; computational, anthropomorphic phantoms…procedurally generated phantoms…patient-based phantoms. Fig. 2, 3, 6, and 8; XCAT phantoms),
inputting the digital phantom into a medical imaging simulator configured to generate a simulated image of the digital phantom (Pg. 042805-9 - 042805-15, Section 3; simulators of the imaging system to "virtually image" the virtual subjects…Section 3.1; to generate the simulated images, the acquisition geometry, system components, and computational phantoms are input to an x-ray interaction simulation framework…Section 3.2 and Fig. 10; realistic simulated PET and SPECT images generated from different human phantoms…Section 3.3; Realistic 3-D MRI simulations of computational phantoms…Section 3.4 and Fig. 11; simulation creates a realistic scattering field from the complex numerical breast phantom).
Abadi does not disclose inputting the simulated image into an unpaired image to image translation network configured to generate a realistic version of the simulated image, and evaluating the realistic version of the simulated image.
In the same art of medical imaging, Liang discloses inputting the simulated image into an unpaired image to image translation network configured to generate a realistic version of the simulated image (Par. 0011; Fixed-Point GAN…image-to-image translation…field of medical image processing. Par. 0039; a modified GAN…(1) handling unpaired images. Par. 0104; …(1) disease-to-healthy and (2) healthy-to-healthy translations at the testing stage…Fixed-Point GAN's ability to perform both local and identical transformations. Par. 0155-0156; cross-domain translation learning operation, wherein the generator of the GAN learns to change input pixels of an input image to match the distribution of pixels of the target domain represented by un-paired training images…same-domain translation learning operation, wherein the generator of the GAN learns to keep the pixels of the input image fixed and without modification when the input image and the target domain represented by the un-paired training images are the same),
and evaluating the realistic version of the simulated image (FIG. 6A-6C and Par. 0165; trained GAN model provides disease diagnosis from a diseased medical scan of a patient without any paired healthy medical scan of the same patient).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to input the simulated image as taught by Abadi into Liang’s unpaired image-to-image translation Fixed-Point GAN. The combination yields predictable results in enhancing the realism and diagnostic utility of simulated images, allowing synthetic output to resemble real-world scans in the field of medical imaging and AI-driven virtual clinical trials.
Claim(s) 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Abadi, Ehsan, et al. "Virtual clinical trials in medical imaging: a review." Journal of Medical Imaging 7.4 (2020): 042805-042805, hereinafter referred to as “Abadi”, in view of Liang et al. (US 20220084173), hereinafter referred to as “Liang”, in further view of Qiu et al. "Deep learning-based thoracic CBCT correction with histogram matching." Biomedical physics & engineering express 7, no. 6 (2021): 065040, hereinafter referred to as “Qiu”, and in further view of Sakboonyarat et al. Discriminative Image Enhancement for Robust Cascaded Segmentation of CT Images. ECTI Transactions on Computer and Information Technology (ECTI-CIT) (2021), hereinafter referred to as “Sakboonyarat”.
Regarding claim 18, Abadi in view of Liang in further view of Qiu discloses the method of claim 1, and further discloses a comparison between original and generated images (Liang Par. 0100; synthetic and real brain MR images. Par. 0103; generate a set of PE candidates, and then by comparing against the ground truth, the PE candidates are labeled as PE or non-PE).
Abadi, Liang, and Qiu are combined for the reasons set forth above with respect to claim 12.
Abadi in view of Liang does not disclose wherein the specialized loss function comprises a loss value based on a comparison of HU value histograms.
In the same art of medical imaging systems, Sakboonyarat discloses wherein the specialized loss function comprises a loss value based on a comparison of HU value histograms (Pg. 154-155, Section 3.3; HU values from all voxels inside these 2D bounding boxes are accumulated to create the HU histogram…Fig. 5; Procedure for creating an HU histogram…Fig. 6; HU histograms of liver ground truth (red), accumulated 2D bounding boxes detected by RetinaNet (blue dotted line), and original CT slices containing the liver region (gray)…Pg. 160, Section 4.4; directly related to the loss function utilized in model training).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to include loss values based on HU value histograms, as taught by Sakboonyarat, in the medical/CT imaging system of Abadi, Liang, and Qiu. HU value histograms are a known tool in CT imaging that helps clinicians quantify tissue characteristics and diagnose conditions; therefore, comparing the histogram values of input and generated images yields the predictable result of ensuring that the quality of the output image does not deviate from the true norm relied upon in diagnosis.
Claim(s) 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Abadi, Ehsan, et al. "Virtual clinical trials in medical imaging: a review." Journal of Medical Imaging 7.4 (2020): 042805-042805, hereinafter referred to as “Abadi”, in view of Liang et al. (US 20220084173), hereinafter referred to as “Liang”, in further view of Wang, Yue, et al. "TWIN-GPT: digital twins for clinical trials via large language model." ACM Transactions on Multimedia Computing, Communications and Applications (2024), hereinafter referred to as “Wang”.
Regarding claim 20, Abadi in view of Liang discloses the method of claim 19, and further discloses the digital phantom (Abadi Pg. 042805-2 - 042805-5, Section 2, Fig. 2, and Fig. 3; computational, anthropomorphic phantoms…procedurally generated phantoms…patient-based phantoms), medical imaging simulator (Abadi Pg. 042805-9 - 042805-15, Section 3; simulators of the imaging system), and scan parameters (Abadi Pg. 042805-2, Section 2; Imaging data of a computer phantom can be generated using a computerized scanner model under various scanning parameters or protocols, and the effects quantified in comparison with the known phantom).
Abadi in view of Liang does not disclose that the digital phantom, medical imaging simulator, and scan parameters are selected or provided by a chatbot.
In the same art of AI models and clinical trials, Wang discloses inputs selected or provided by a chatbot (Pg. 2, Section 1; TWIN-GPT is fine-tuned on a pre-trained LLM (ChatGPT [52]) on clinical trial datasets, so as to generate personalized digital twins for different patients).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate Wang’s chatbot into the medical imaging system of Abadi and Liang. The motivation lies in the advantage of automating the medical imaging system by allowing a chatbot to assist in selecting or providing simulation inputs in order to streamline simulation configuration, personalize virtual patients, and increase scalability in clinical research.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JENNY NGAN TRAN whose telephone number is (571)272-6888. The examiner can normally be reached Mon-Thurs 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alicia Harrington can be reached at (571) 272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JENNY N TRAN/Examiner, Art Unit 2615
/ALICIA M HARRINGTON/Supervisory Patent Examiner, Art Unit 2615