Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 03/09/2026 has been entered.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Response to Amendment
Applicant’s arguments and claim amendments, see P. 9 - P. 10, filed 03/09/2026, with respect to claims 21-22 have been fully considered and are persuasive. The 35 U.S.C. 112 rejection of 12/16/2025 has been withdrawn.
Applicant’s amendments to claims 1 and 13 have been considered but are moot in view of the new ground(s) of rejection in view of Sun et al. (An Adversarial Learning Approach to Medical Image Synthesis for Lesion Detection).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3-4, 6-7, 9-15, 18, and 20-22 are rejected under 35 U.S.C. 103 as being unpatentable over Bogoni et al. (US 2011/0200227 A1, hereinafter Bogoni) in view of Bernardis et al. (Extracting Evolving Pathologies via Spectral Clustering, hereinafter Bernardis), Sun et al. (An Adversarial Learning Approach to Medical Image Synthesis for Lesion Detection, hereinafter Sun), and TSUJIMOTO (US 2023/0215003 A1, hereinafter Tsujimoto).
Regarding claims 1 and 13, Bogoni discloses
Claim 1: A computer-implemented method, the method comprising:
Claim 13: A system comprising: an interface unit…and a computing unit configured to
receiving a first medical image of an anatomical region of a patient, the first medical image being acquired at a first instance of time and depicting at least one abnormality in the anatomical region; receiving a second medical image of the anatomical region of the patient, the second medical image being acquired at a second instance of time; (Para [0030]: “tagging of regions-of-interest (e.g., lesions, tumors, masses, etc.)”, Claim 1: “(i) receiving first and second images acquired at respective first and second different time-points; (ii) receiving first and second findings associated with the first and second images respectively, wherein the first and second findings are associated with at least one region of interest”);
comparing the first abnormality image and the second abnormality image (Para [0036]: “A longitudinal analysis generally refers to a correlational study that involves repeated observations of a medical condition over a period of time”, Para [0039]: “In one implementation, the data analysis unit 107 computes, based on the prior findings, a longitudinal analysis result associated with a characteristic of the ROI”); and
determining a change of the at least one abnormality based on the comparing (Para [0030]: “tagging of regions-of-interest (e.g., lesions, tumors, masses, etc.)”, Claim 5: “the longitudinal analysis result is associated with a change in a characteristic of the region of interest”).
However, Bogoni does not disclose
providing a decomposition function configured to extract, from a medical image of an anatomical region with one or more abnormalities, an abnormality image only depicting image regions of the medical image of the one or more abnormalities
wherein,
the decomposition function applies an inpainting function configured to infer, based on an unmodified given medical image, the unmodified given medical image being of an anatomical region of a patient depicting at least one abnormality in the anatomical region, abnormalities present within the unmodified given medical image and inpaint abnormalities within the unmodified given medical image to generate a normal image of the unmodified given medical image, and
the decomposition function is further configured to extract the abnormality image from the unmodified given medical image by subtracting the generated normal image from the unmodified given medical image or vice versa;
generating a first abnormality image of the first medical image by applying the decomposition function to the first medical image;
generating a second abnormality image of the second medical image by applying the decomposition function to the second medical image.
Bernardis teaches
providing a decomposition function configured to extract, from a medical image of an anatomical region with one or more abnormalities, an abnormality image only depicting image regions of the medical image of the one or more abnormalities (Fig. 2, Fig. 3, P. 682 Section 2: “We segment lesions from 3D+t MR via a spectral graph theoretic framework. We assume the scans of each time are pose-corrected, so that across time-points healthy tissue remains constant and only the evolving pathology is changing”, P. 682 Para. 1: “Our spectral graph method extracts lesions from longitudinal MR scans by defining grouping cues that distinguish lesions from healthy tissue. Specifically, we first construct pairwise affinities from each 3D image to characterize brighter regions of varying shapes and sizes. We then reduce the complexity of the 3D+t segmentation task by transforming it into a 3D one”, P. 682-683 Section 2.1: “We start by constructing the graph Gt(N, E, Wt) at each time point t = 1, …, T. Let It be the 3D scan associated with time-point t. Voxels of It are represented as nodes N in the graph and the N x N weight matrix Wt captures the affinity of the edges E”);
generating a first abnormality image of the first medical image by applying the decomposition function to the first medical image; generating a second abnormality image of the second medical image by applying the decomposition function to the second medical image (Fig. 3: The images are stacked on top of each other, with one taken at time 1 and one taken at time t. It can also be understood that this is an image of a lesion based on Section 2.2; P. 685 Section 2.2: “To combine the affinities computed at each time-point, we need to track the changes across the graphs G1,…, GT. This is achieved in two steps. First, we compute the nodes’ correspondences Ct→t-1 from each time-point to the previous one. Then, we iteratively find the correspondences at each node to a reference time-point. We choose the first time-point as our reference for convenience since lesions only expand in time”. This shows that multiple images are processed with the same technique. Even without such a statement, it can still be understood that a technique applied to one image of a given type can be applied to another image of the same type).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Bogoni with the extraction of individual lesions from a medical image as taught by Bernardis to effectively increase the efficiency of identifying abnormalities in medical images.
However, Bogoni in view of Bernardis does not explicitly teach
the decomposition function applies an inpainting function configured to infer, based on an unmodified given medical image, the unmodified given medical image being of an anatomical region of a patient depicting at least one abnormality in the anatomical region, abnormalities present within the unmodified given medical image and inpaint abnormalities within the unmodified given medical image to generate a normal image of the unmodified given medical image, and
the decomposition function is further configured to extract the abnormality image from the unmodified given medical image by subtracting the generated normal image from the unmodified given medical image or vice versa.
Sun teaches
the decomposition function applies an inpainting function configured to infer, based on an unmodified given medical image, the unmodified given medical image being of an anatomical region of a patient depicting at least one abnormality in the anatomical region, abnormalities present within the unmodified given medical image and inpaint abnormalities within the unmodified given medical image to generate a normal image of the unmodified given medical image (P. 3, 1) Anomaly Mask: “During training, we assume that each abnormal image has a corresponding binary mask provided with it that indicates where the abnormal locations are within the image. Let this mask be Mx, which is the same size as the image being considered in training. We emphasize here that this mask Mx is not available and not needed during testing. To generate a normal-looking image, the real abnormal image is simply fed into well-trained abnormal-to-normal generator GA2N.”).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Bogoni in view of Bernardis with inpainting the unmodified original medical image with abnormalities to generate a normal image using the specialized GAN of Sun to effectively reduce the human labor needed when training a machine learning model for lesion detection in medical images.
However, Bogoni in view of Bernardis and Sun does not explicitly teach
the decomposition function is further configured to extract the abnormality image from the unmodified given medical image by subtracting the generated normal image from the unmodified given medical image or vice versa.
Tsujimoto teaches
the decomposition function is further configured to extract the abnormality image from the unmodified given medical image by subtracting the generated normal image from the unmodified given medical image or vice versa (Fig. 3, Para [0101]: “A second training data generation unit 540 that generates second training data derives difference data 550 that is the difference between the lesion image 520 input to the first learning model 500 illustrated in FIG. 1 and the pseudo normal mucous membrane image 526 that is an output from the first learning model 500”, Para [0103]: “The difference data 550 that is the difference between the lesion image 520 and the pseudo normal mucous membrane image 526 is relatively small when the lesion in the lesion image 520 is similar to a normal mucous membrane”).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Bogoni in view of Bernardis and Sun with the subtraction of medical images to isolate the abnormalities in images of Tsujimoto, since Tsujimoto also introduces using a GAN to generate a normal medical image similar to Sun, with the difference being that Tsujimoto’s GAN is trained using normal images with random masks while Sun’s GAN is trained with abnormal images with masks. As such, Sun’s GAN does learn where and what a lesion is.
Regarding claim 3, dependent upon claim 1, Bogoni in view of Bernardis, Sun, and Tsujimoto teaches all the elements regarding claim 1.
Bernardis further teaches
determining at least one image registration between an image space of the first abnormality image and an image space of the second abnormality image; and the determining the change determines the change based on the at least one image registration (Fig. 3: Tracking temporal changes. This shows that the images are associated with each other and overlapped to compare and compute the change. P. 685 Section 2.2: “To combine the affinities computed at each time-point, we need to track the changes across the graphs G1,…, GT. This is achieved in two steps. First, we compute the nodes’ correspondences Ct→t-1 from each time-point to the previous one. Then, we iteratively find the correspondences at each node to a reference time-point. We choose the first time-point as our reference for convenience since lesions only expand in time”).
Regarding claim 4, dependent upon claim 3, Bogoni in view of Bernardis, Sun, and Tsujimoto teaches all the elements regarding claim 3.
Bernardis further teaches
the determining the at least one image registration determines the at least one image registration by registering the first medical image with the second medical image (Fig. 3: Tracking temporal changes. This shows that the images are associated with each other and overlapped to compare and compute the change. P. 685 Section 2.2: “To combine the affinities computed at each time-point, we need to track the changes across the graphs G1,…, GT. This is achieved in two steps. First, we compute the nodes’ correspondences Ct→t-1 from each time-point to the previous one. Then, we iteratively find the correspondences at each node to a reference time-point. We choose the first time-point as our reference for convenience since lesions only expand in time”).
Regarding claim 6, dependent upon claim 4, Bogoni in view of Bernardis, Sun, and Tsujimoto teaches all the elements regarding claim 4.
Bernardis further teaches
calculating a deformation field based on the at least one image registration, the deformation field mapping an image region of the at least one abnormality in the first abnormality image to a corresponding image region of the at least one abnormality in the second abnormality image, wherein the determining the change determines the change based on the deformation field (Fig. 3: Tracking temporal changes. This shows that the images are associated with each other and overlapped to compare and compute the change. The deformation field corresponds to the evolution vectors. P. 682 Para. 2: “For each node, we estimate a possible evolution direction of the pathology.”; P. 685 Section 2.2: “To combine the affinities computed at each time-point, we need to track the changes across the graphs G1,…, GT. This is achieved in two steps. First, we compute the nodes’ correspondences Ct→t-1 from each time-point to the previous one. Then, we iteratively find the correspondences at each node to a reference time-point. We choose the first time-point as our reference for convenience since lesions only expand in time”).
Regarding claims 7 and 18, dependent upon claims 1 and 6 respectively, Bogoni in view of Bernardis, Sun, and Tsujimoto teaches all the elements regarding claims 1 and 6.
Bogoni further discloses
calculating a score measuring a size change of the at least one abnormality from the first instance of time to the second instance of time (Para [0042]: “For example, the size of a tumor may be monitored over time by automatically correlating the current image with the prior image datasets to determine any decrease in size that may be attributed to the treatment. The user may choose to present the progression of characteristics in a graphical representation, such as a pictorial graph or a bar chart. Additionally, clinical guidelines may be presented to indicate the likelihood of malignancy given the progression of the selected characteristics”. A size measurement can serve as a kind of score).
Regarding claim 9, dependent upon claim 1, Bogoni in view of Bernardis, Sun, and Tsujimoto teaches all the elements regarding claim 1.
Sun further teaches
the decomposition function includes a trained function (P. 3, 2) Term 2: GAN: “The main term in our objective is the GAN. Instead of building an unidirectional transform in the abnormal to normal direction, we adopt a bidirectional transform model with two generators GA2N and GN2A trained simultaneously.”).
Regarding claim 10, dependent upon claim 1, Bogoni in view of Bernardis, Sun, and Tsujimoto teaches all the elements regarding claim 1.
Bogoni further discloses
providing the determined change to a user via a user interface (Para [0040]: “For instance, the data analysis unit 107 may present, at the display device 108, a gallery that shows the same ROI across multiple time points. Additional information may be provided at each time point”, Para [0041]: “In one implementation, the longitudinal analysis result comprises a summary of the progression of the medical condition (or disease) associated with an ROI. The persistence of the findings across multiple time-points may also be characterized. The user may select the ROI (e.g., sentinel nodules), or the data analysis unit 107 may automatically summarize the medical condition related to all the ROIs so as to present a trend of the disease. The user may select an anatomical area as an ROI, such as specific lobes or quadrants of the lung, or a segment of the liver. This allows the user to monitor either the stability or progression of the disease, or the response to treatment or therapy”, Para [0042]: “For example, the size of a tumor may be monitored over time by automatically correlating the current image with the prior image datasets to determine any decrease in size that may be attributed to the treatment. The user may choose to present the progression of characteristics in a graphical representation, such as a pictorial graph or a bar chart. Additionally, clinical guidelines may be presented to indicate the likelihood of malignancy given the progression of the selected characteristics”. For a user to interact with the data, a display is needed through which the user can observe and interact with the data).
Regarding claim 11, dependent upon claim 1, Bogoni in view of Bernardis, Sun, and Tsujimoto teaches all the elements regarding claim 1.
Bogoni further discloses
the anatomical region includes a lung of the patient (Fig. 4, Para [0037]: “In one implementation, the data analysis unit 107 analyzes or identifies a certain medical condition or disease (i.e. pathology-specific). Alternatively, the data analysis unit 107 includes multiple pathology-specific CAD tools (e.g., lung CAD, pulmonary embolism CAD, etc.) to identify a multiplicity of medical conditions”), and
the at least one abnormality includes a lung lesion in the lung of the patient (Fig. 4, Para [0030]: “tagging of regions-of-interest (e.g., lesions, tumors, masses, etc.)”).
Regarding claim 12, dependent upon claim 1, Bogoni in view of Bernardis, Sun, and Tsujimoto teaches all the elements regarding claim 1.
Bogoni further discloses
the first medical image and the second medical image are X-ray images of a chest of the patient (Para [0028]: “The images may be acquired by, for example, magnetic resonance (MR) imaging, computed tomography (CT), helical CT, x-ray, positron emission tomography (PET), fluoroscopic, ultrasound, single photon emission computed tomography (SPECT), or a combination thereof”, Para [0037]: “In one implementation, the data analysis unit 107 analyzes or identifies a certain medical condition or disease (i.e. pathology-specific). Alternatively, the data analysis unit 107 includes multiple pathology-specific CAD tools (e.g., lung CAD, pulmonary embolism CAD, etc.) to identify a multiplicity of medical conditions”).
Regarding claim 14, dependent upon claim 1, Bogoni in view of Bernardis, Sun, and Tsujimoto teaches all the elements regarding claim 1.
Bogoni further discloses
A non-transitory computer program product comprising program elements which, when executed by a computing unit of a system, cause the system to perform the method of claim 1 (Para [0022]: “The computer system 101 may be a desktop personal computer, a portable laptop computer, another portable device, a mini-computer, a mainframe computer, a server, a storage system, a dedicated digital appliance, or another device having a storage sub-system configured to store a collection of digital data items. In one implementation, the computer system 101 comprises a processor or central processing unit (CPU) 104 coupled to one or more non-transitory computer-readable media 106 (e.g., computer storage or memory), display device 108 (e.g., monitor) and various input devices 110 (e.g., mouse or keyboard) via an input\ output interface 121. The computer system 101 may further include support circuits such as a cache, power supply, clock circuits and a communications bus”).
Regarding claim 15, dependent upon claim 1, Bogoni in view of Bernardis, Sun, and Tsujimoto teaches all the elements regarding claim 1.
Bogoni further discloses
A non-transitory computer-readable medium having program elements which, when executed by a computing unit of a system, cause the system to perform the method of claim 1 (Para [0023]: “In one implementation, the techniques described herein may be implemented as computer-readable program code tangibly embodied in the non-transitory computer-readable media 106.”).
Regarding claim 20, dependent upon claim 18, Bogoni in view of Bernardis, Sun, and Tsujimoto teaches all the elements regarding claim 18.
Bogoni further discloses
the anatomical region includes a lung of the patient (Fig. 4, Para [0037]: “In one implementation, the data analysis unit 107 analyzes or identifies a certain medical condition or disease (i.e. pathology-specific). Alternatively, the data analysis unit 107 includes multiple pathology-specific CAD tools (e.g., lung CAD, pulmonary embolism CAD, etc.) to identify a multiplicity of medical conditions”), and
the at least one abnormality includes a lung lesion in the lung of the patient (Fig. 4, Para [0030]: “tagging of regions-of-interest (e.g., lesions, tumors, masses, etc.)”).
Regarding claim 21, dependent upon claim 13, Bogoni in view of Bernardis, Sun, and Tsujimoto teaches all the elements regarding claim 13.
Bernardis further teaches
determine at least one image registration between an image space of the first abnormality image and an image space of the second abnormality image by registering a first normal image with a second normal image; and determine the change based on the at least one image registration (Fig. 3: Tracking temporal changes. This shows that the images are associated with each other and overlapped to compare and compute the change. P. 685 Section 2.2: “To combine the affinities computed at each time-point, we need to track the changes across the graphs G1,…, GT. This is achieved in two steps. First, we compute the nodes’ correspondences Ct→t-1 from each time-point to the previous one. Then, we iteratively find the correspondences at each node to a reference time-point. We choose the first time-point as our reference for convenience since lesions only expand in time”).
Regarding claim 22, dependent upon claim 21, Bogoni in view of Bernardis, Sun, and Tsujimoto teaches all the elements regarding claim 21.
Bernardis further teaches
determine the at least one image registration by registering the first medical image with the second medical image (Fig. 1 Contours, Fig. 3: Tracking temporal changes. This shows that the images are associated with each other and overlapped to compare and compute the change. The deformation field corresponds to the evolution vectors. P. 682 Para. 2: “For each node, we estimate a possible evolution direction of the pathology.”; P. 685 Section 2.2: “To combine the affinities computed at each time-point, we need to track the changes across the graphs G1,…, GT. This is achieved in two steps. First, we compute the nodes’ correspondences Ct→t-1 from each time-point to the previous one. Then, we iteratively find the correspondences at each node to a reference time-point. We choose the first time-point as our reference for convenience since lesions only expand in time”).
Claims 2, 5, and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Bogoni et al. (US 2011/0200227 A1, hereinafter Bogoni) in view of Bernardis et al. (Extracting Evolving Pathologies via Spectral Clustering, hereinafter Bernardis), Sun et al. (An Adversarial Learning Approach to Medical Image Synthesis for Lesion Detection, hereinafter Sun), TSUJIMOTO (US 2023/0215003 A1, hereinafter Tsujimoto), and GIRARDOT et al. (US 2024/0257339 A1, hereinafter Girardot).
Regarding claim 2, dependent upon claim 1, Bogoni in view of Bernardis, Sun, and Tsujimoto teaches all the elements regarding claim 1.
Bogoni further discloses the use of multiple images during analysis (Claim 1: “(i) receiving first and second images acquired at respective first and second different time-points”);
However, Bogoni in view of Bernardis, Sun, and Tsujimoto does not explicitly teach
the decomposition function is configured to extract a normal image of the anatomical region not depicting the one or more abnormalities, the method further comprising:
generating a first normal image of the first medical image by applying the decomposition function to the first medical image; and
generating a second normal image of the second medical image by applying the decomposition function to the second medical image.
Girardot teaches
the decomposition function is configured to extract a normal image of the anatomical region not depicting the one or more abnormalities, the method further comprising: generating a first normal image of the first medical image by applying the decomposition function to the first medical image; and generating a second normal image of the second medical image by applying the decomposition function to the second medical image (Fig. 10, Para [0129]: “FIG. 10 is an illustration of a real medical image 11, 12, its associated segmentation mask 21, 22, and a synthetic medical image 13 generated by the neural network 31 from the segmentation mask 21, 22, for a majority case (without an anomaly) and a minority case (with an anomaly)”, It can be understood that one image modification technique for one type of image can be applied to another image of the same type).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Bogoni in view of Bernardis, Sun, and Tsujimoto with generating medical images without their anomalies as taught by Girardot to effectively increase the efficiency of image analysis.
Regarding claim 5, dependent upon claim 2, Bogoni in view of Bernardis, Sun, Tsujimoto, and Girardot teaches all the elements regarding claim 2.
Bernardis further teaches
determining at least one image registration between an image space of the first abnormality image and an image space of the second abnormality image by registering the first normal image with the second normal image; and the determining determines the change based on the at least one image registration (Fig. 3: Tracking temporal changes. This shows that the images are associated with each other and overlapped to compare and compute the change. P. 685 Section 2.2: “To combine the affinities computed at each time-point, we need to track the changes across the graphs G1,…, GT. This is achieved in two steps. First, we compute the nodes’ correspondences Ct→t-1 from each time-point to the previous one. Then, we iteratively find the correspondences at each node to a reference time-point. We choose the first time-point as our reference for convenience since lesions only expand in time”).
Regarding claim 16, dependent upon claim 2, Bogoni in view of Bernardis, Sun, Tsujimoto, and Girardot teaches all the elements regarding claim 2.
Bernardis further teaches
determining at least one image registration between an image space of the first abnormality image and an image space of the second abnormality image; and the determining the change determines the change based on the at least one image registration (Fig. 3: Tracking temporal changes. This shows that the images are associated with each other and overlapped to compare and compute the change. P. 685 Section 2.2: “To combine the affinities computed at each time-point, we need to track the changes across the graphs G1,…, GT. This is achieved in two steps. First, we compute the nodes’ correspondences Ct→t-1 from each time-point to the previous one. Then, we iteratively find the correspondences at each node to a reference time-point. We choose the first time-point as our reference for convenience since lesions only expand in time”).
Regarding claim 17, dependent upon claim 16, Bogoni in view of Bernardis, Sun, Tsujimoto, and Girardot teaches all the elements regarding claim 16.
Bernardis further teaches
the determining the at least one image registration determines the at least one image registration by registering the first medical image with the second medical image (Fig. 3: Tracking temporal changes. This shows that the images are associated with each other and overlapped to compare and compute the change. P. 685 Section 2.2: “To combine the affinities computed at each time-point, we need to track the changes across the graphs G1,…, GT. This is achieved in two steps. First, we compute the nodes’ correspondences Ct→t-1 from each time-point to the previous one. Then, we iteratively find the correspondences at each node to a reference time-point. We choose the first time-point as our reference for convenience since lesions only expand in time”).
Relevant Prior Art Directed to State of Art
Gulsun et al. (US 2010/0254584 A1, hereinafter Gulsun) is prior art not applied in the rejection(s) above. Gulsun discloses a method for assessing a tumor's response to therapy that includes providing images of a first study of a patient and images of a second study of the patient, the second study occurring after the first study and after the patient undergoes therapy to treat a tumor, each study comprising first and second types of functional magnetic resonance (fMR) images; performing a first registration in which the images within each study are registered; performing a second registration in which reference images from both studies are coregistered; segmenting the tumor in an image of each of the second registered studies; and determining that first and second fMR measure differences exist between the segmented tumors of the first and second studies, the first fMR measure difference being obtained from the first type of fMR images, the second fMR measure difference being obtained from the second type of fMR images.
Lure et al. (US 2005/0084178 A1, hereinafter Lure) is prior art not applied in the rejection(s) above. Lure discloses a method of processing radiological images for diagnostic purposes that involves the automated registration and comparison of images obtained at different times. A variation on the method may also use computer-aided detection (CAD) in conjunction with image parameters obtained during the process of registration to register CAD results.
Davatzikos et al. (US 9,984,283 B2, hereinafter Davatzikos) is prior art not applied in the rejection(s) above. Davatzikos discloses methods, systems, and computer readable media for automated detection of abnormalities in medical images. According to a method for automated abnormality detection, the method includes receiving a target image. The method also includes deformably registering to the target image or to a common template a subset of normative images from a plurality of normative images, wherein the subset of normative images is associated with a normal variation of an anatomical feature. The method further includes defining a dictionary using the subset of normative images. The method also includes decomposing, using sparse decomposition and the dictionary, the target image. The method further includes classifying one or more voxels of the target image as normal or abnormal based on results of the sparse decomposition.
Kareem et al. (WO 2019/238804 A1, hereinafter Kareem) is prior art not applied in the rejection(s) above. Kareem discloses systems and methods for classifying an abnormality in a medical image. An input medical image depicting a lesion is received. The lesion is localized in the input medical image using a trained localization network to generate a localization map. The lesion is classified based on the input medical image and the localization map using a trained classification network. The classification of the lesion is output. The trained localization network and the trained classification network are jointly trained.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSHUA CHEN whose telephone number is (703)756-5394. The examiner can normally be reached M-Th: 9:30 am - 4:30 pm ET; F: 9:30 am - 2:30 pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, STEPHEN R KOZIOL can be reached at (408)918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J. C./ Examiner, Art Unit 2665
/Stephen R Koziol/ Supervisory Patent Examiner, Art Unit 2665