DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
The drawings are objected to because one or more of the drawings does not use the abbreviation “FIG.”. See 37 C.F.R. 1.84(u)(1); see also MPEP 1825.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Objections
Claim 10 is objected to because of the following informalities: claim 10 should be amended to recite “where the first diffusion model machine-learned network and/or second diffusion model machine-learned network include diffusion model machine-learned networks trained using an image set generated using a ground truth image with increasing levels of noise added” for clarity. Appropriate correction is required.
Claim 15 is objected to because of the following informalities: claim 15 should be amended to recite “where denoising the second noise input to obtain the medical image includes obtaining” for clarity. Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claim 16 is rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
In the method of claim 15, “denoising the second noise input to obtain the medical image” includes “obtaining an image to supplement a training set of medical images with deficient occupancy for medical images with a medical abnormality with at least a selected characteristic present in the synthesized medical abnormality” (emphasis added). Claim 16 recites, in part, “where the deficient occupancy of the training set includes: a deviation from a medically established relative probability for occurrences of the medical abnormality with at least the selected characteristic; an absence of medical images with the medical abnormality with at least the selected characteristic; and a below threshold amount of total images within the training set” (emphasis added).
There is no explicit method step of determining or otherwise confirming that the “training set” meets the specific criteria of the “deficient occupancy” listed in claim 16. As written, the “second denoising stage” of claim 14 includes “a second noise input” that is used to generate/obtain the synthesized medical image. The limitations of claim 16 describe characteristics that the training set happens to possess without sufficiently describing why those characteristics matter or how the method steps differ when they are considered. The synthesized medical image could be added to any number or variety of training sets for any number of reasons by a user/designer of the method; yet the method would remain unchanged whether or not the training set has those characteristics, because adding an image to a set will occur regardless of whatever high-level characteristics may exist to describe the training set. Furthermore, the three clauses of claim 16 are listed in reference to the “deficient occupancy” and not the “selected characteristic present in the synthesized medical abnormality,” which adds to the confusion in attempting to understand what step is being added or modified in relation to the method of claim 15.
Furthermore, it is unclear how the “deviation” in claim 16 would be determined and what the “probability” exactly corresponds to. Is it a probability of occurrences of brain tumors typically found in machine learning training data sets (e.g., 50% of a training set includes tumors and 50% are healthy brain images), or a probability of actual occurrences among real-world patients (e.g., 10% of patients receiving MRI brain scans are diagnosed with brain tumors)? It is unclear what “a medically established relative probability” means: how is a probability “medically established”? It is unclear whether “an absence of medical images with the medical abnormality” in claim 16 means an absence amongst the images of the training set, amongst medical images in general, amongst medical images of the specific patient with a specific tumor, and so on. It is also unclear what “to supplement a training set of medical images with [a below threshold amount of total images within the training set] for medical images with a medical abnormality with at least a selected characteristic present in the synthesized medical abnormality” in claim 15 means (including when claim 16 is read as a whole), because this is always true of any pre-existing training set before a first synthesized medical image is added (i.e., a synthesized medical image is output to supplement a training set of medical images with no (zero is less than the total) images that have a tumor with a selected characteristic present in the synthesized medical image, where the characteristic is having a synthesized tumor generated by the method of claim 14).
Even if the additional features of claim 16 were interpreted to mean that the various reasons for the “deficiency” were actively confirmed/determined by the method, such deficiencies would merely amount to mental considerations of a user of the “training interface” for the two-stage denoising method of claim 14. The method of claim 16 would not fundamentally change in terms of how the obtained synthesized medical image is generated: a user could be generating new synthetic medical images to supplement a training set for any number of reasons without the training set itself being necessarily any different than it would be without the additional features of claim 16.
Under the broadest reasonable interpretation, claim 15 essentially means that the synthesized image of claim 14 is used to supplement a training set that does not yet include (i.e., is deficient of) images with the synthesized abnormality (e.g., a user of the method of claim 14 uses the method to generate new synthetic tumor images to supplement an existing training image set that does not yet include synthetic tumor images). Under the broadest reasonable interpretation, claim 16 essentially adds to claim 15 that the deficient training set is deficient in the three listed ways, without sufficiently integrating those ways into the method such that the method actually requires accounting for them.
For purposes of applying prior art, claim 16 is interpreted to not further limit claim 15. Accordingly, claim 16 is rejected under 35 U.S.C. 112(d) below.
The following is a quotation of 35 U.S.C. 112(d):
(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
Claim 16 is rejected under 35 U.S.C. 112(d) as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends.
As interpreted in the corresponding rejection under 35 U.S.C. 112(b) above, claim 16 is interpreted to merely describe attributes of the “training set” without adding any new method step or further limiting an existing method step. Therefore, claim 16 does not further limit the method of claim 15.
Applicant may cancel claim 16, amend the claim to place the claim in proper dependent form, or present a sufficient showing that the dependent claim 16 complies with the statutory requirements.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed inventions absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-4 and 6-13 are rejected under 35 U.S.C. 103 as being unpatentable over Synthesis of Brain Tumor MR Images for Learning Data Augmentation (published 22 March 2021) to Kim et al. (hereinafter “Kim”) in view of Mask-conditioned latent diffusion for generating gastrointestinal polyp images (published 19 July 2023) to Machacek et al. (hereinafter “Machacek”).
Regarding claim 1, Kim teaches a system for synthesizing a medical image of a synthesized medical abnormality (Kim, Fig. 1, “Synthesized MR images of Brain Tumor”), the system including:
synthesis circuitry configured to (see Kim at section 3: the conducted experiments require a computer or general-purpose processor to produce the results presented.):
obtain a descriptor input for the synthesized medical abnormality, the descriptor input detailing a selected characteristic (tumor representation) for the synthesized medical abnormality (Kim, Abstract, “Because tumors have complex characteristics, the proposed method simplifies them into concentric circles that are easily controllable”; section I, “Real tumor masks usually have complex features, such as grade, appearance, size, and location. Thus, these features of tumor masks are condensed and simplified to concentric circles.”);
[…] (brain image, its brain mask, and the concentric circles feature descriptors) within a defined multidimensional space (two-dimensional space); and
obtain a pre-abnormality image mapped into the defined multidimensional space (Kim, Fig. 1, “MR Images of Normal Brain”);
[…]
machine learning control circuitry (see Kim at section 3: the conducted experiments require a computer or general-purpose processor to produce the results presented.) configured to provide the abnormality spatial mask and the medical image for a medical machine learning system (Kim, Abstract, “In terms of data augmentation, the proposed method can successfully synthesize brain tumor images that can be used to train tumor segmentation networks or other deep neural networks.”), but does not teach that which is explicitly taught by Machacek.
Machacek teaches denoise, using a first diffusion model machine-learned network (Machacek, Figure 1, “Improved Diffusion”), a first noise input to generate an abnormality spatial mask (Machacek, section 3.1, “The improved diffusion model general synthetic mask images by first adding noise to a randomly selected mask image from the training set. This noise would then be gradually reversed through multiple steps until a synthetic mask image is generated.”) within a defined multidimensional space (see Machacek at Figure 1: the synthetic masks are two-dimensional); and
obtain a […]; and
denoise, using a second diffusion model machine-learned network (Machacek, Figure 1, “Pre-Trained Latent Diffusion”, “The green box represents the conditional latent diffusion model which is used to generate synthetic polyp conditioned on input masks.”), a second noise input to generate the medical image with the synthesized medical abnormality positioned in accord with the abnormality spatial mask (see Machacek at Figure 1: the latent diffusion model has a forward and reverse noise process like the improved diffusion model does.).
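For clarity of the record, the two-stage diffusion pipeline described in the Machacek mapping above (a first diffusion model denoising a first noise input into a spatial mask, then a second diffusion model denoising a second noise input into an image positioned per that mask) can be sketched as follows. This is an illustrative Python sketch only; the model functions are hypothetical stand-ins and this is not code from Kim, Machacek, or the claimed invention.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise_step(x, model, t):
    """One simplified reverse-diffusion step: the model predicts a noise
    component at timestep t, which is subtracted out."""
    predicted_noise = model(x, t)
    return x - predicted_noise / (t + 1)

def run_diffusion(noise_input, model, steps):
    """Iteratively denoise a noise input across multiple timesteps."""
    x = noise_input
    for t in reversed(range(steps)):
        x = denoise_step(x, model, t)
    return x

# Hypothetical stand-ins for the two trained diffusion networks.
def mask_model(x, t):    # first network: synthesizes a spatial mask
    return 0.1 * x

def image_model(x, t):   # second network: synthesizes the image content
    return 0.1 * x

# Stage 1: denoise a first noise input into an abnormality spatial mask.
first_noise = rng.standard_normal((64, 64))
spatial_mask = run_diffusion(first_noise, mask_model, steps=50) > 0.5

# Stage 2: denoise a second noise input, conditioned on the mask, so the
# synthesized abnormality is positioned in accord with the spatial mask.
second_noise = rng.standard_normal((64, 64))
synthetic_image = run_diffusion(second_noise, image_model, steps=50)
synthetic_image = np.where(spatial_mask, synthetic_image, second_noise)
```

The mask-conditioning in the final line is a deliberate simplification of Machacek's mask-conditioned latent diffusion: the second stage only modifies content where the first stage's mask permits.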
Kim discloses a medical image synthesis method that synthesizes brain tumor images from real healthy brain images using a generative model for generating a larger and more diverse training image set for machine learning. Thus, Kim shows that it was known in the art before the effective filing date of the claimed invention to use generative models to create more training examples, which is analogous to the claimed invention in that it is pertinent to the problem being solved by the claimed invention, expanding a limited set of training examples. Machacek discloses a medical image synthesis method that synthesizes gastrointestinal polyp images conditioned on synthetic masks using a generative model, specifically two diffusion models, for generating a larger and more diverse training image set for machine learning. Thus, Machacek shows that it was known in the art before the effective filing date of the claimed invention to use diffusion models to create more training examples, which is analogous to the claimed invention in that it is pertinent to the problem being solved by the claimed invention, expanding a limited set of training examples.
A person of ordinary skill in the art would have been motivated to replace the GAN-based image synthesis disclosed by Kim with diffusion-based image synthesis as disclosed by Machacek, to thereby generate synthetic brain tumor masks within an enclosed area surrounded by real healthy brain images using a feature descriptor of concentric circles for representing tumors. Based on the foregoing, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have made such modification according to known methods to yield the predictable results to “overcome the issue of limited annotated data and train machine learning models more effectively” (Machacek, section 3.1).
Regarding claim 2, Kim in view of Machacek teaches the system of claim 1, where the descriptor input includes a descriptor in a predefined format associated with the first diffusion model machine-learned network (Kim, Abstract, “Because tumors have complex characteristics, the proposed method simplifies them into concentric circles that are easily controllable”; see Kim at Fig. 2: The concentric circles are in a two-dimensional format, same as the real input images and same as the multi-dimensional space where the inpainting occurs.).
Regarding claim 3, Kim in view of Machacek teaches the system of claim 2, where the predefined format includes:
one or more concentric […]
Kim further teaches one or more concentric spheres positioned within the defined multidimensional space (Kim, pg. 2197, “Although the proposed method was developed for two-dimensional images in this study, the method has the potential for expansion to three-dimensional cases. If the concentric circles are expanded to concentric spheres, the tumor characteristics can be controlled in three dimensions, and brain tumors synthesized by the proposed method can have continuous and coherent shapes along slice direction. Then, the synthesized dataset would be applicable to three-dimensional tumor segmentation algorithms”).
Kim in view of Machacek is analogous to the claimed invention for the reasons provided above.
A person of ordinary skill in the art would have been motivated to modify the two-dimensional space of the image data and diffusion models disclosed by Kim in view of Machacek to process three-dimensional medical images and generate synthetic three-dimensional images of a real three-dimensional medical image with an inpainted three-dimensional synthetic tumor as suggested by Kim, to thereby generate synthetic brain tumor masks within real healthy brain images using a feature descriptor of concentric spheres for representing tumors. Based on the foregoing, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have made such modification according to known methods to yield the predictable results to have the benefit of more accurately representing real tumors by generating synthetic tumors having shapes in three dimensions instead of just two dimensions.
Regarding claim 4, Kim in view of Machacek teaches the system of claim 1, where the descriptor input includes one or more spheres positioned within the defined multidimensional space to indicate one or more selected volume characteristics and/or a center-of-mass of the synthesized medical abnormality (Kim, pg. 2197, “concentric spheres”).
The rationale for obviousness is the same as provided for claim 3.
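For clarity of the record, a concentric-circle feature descriptor of the kind Kim uses to simplify tumor characteristics (grade, appearance, size, and location condensed into concentric circles within a two-dimensional space) can be sketched as follows. This is an illustrative Python sketch only; the function and its parameters are hypothetical and not taken from Kim.

```python
import numpy as np

def concentric_circle_descriptor(size, center, radii):
    """Build a 2-D descriptor in which each concentric ring carries an
    integer label: labels decrease toward the center, with 0 outside the
    largest circle. This condenses a tumor's regions into easily
    controllable concentric circles, as Kim describes."""
    yy, xx = np.mgrid[0:size, 0:size]
    dist = np.sqrt((yy - center[0]) ** 2 + (xx - center[1]) ** 2)
    descriptor = np.zeros((size, size), dtype=int)
    # Paint largest circle first, then overwrite with smaller circles.
    for i, r in enumerate(sorted(radii, reverse=True)):
        descriptor[dist <= r] = len(radii) - i
    return descriptor

# A 3-ring descriptor centered in a 32x32 two-dimensional space.
desc = concentric_circle_descriptor(32, center=(16, 16), radii=[4, 8, 12])
```

Extending this sketch to three dimensions (concentric spheres, as Kim suggests at pg. 2197) would only require adding a third coordinate to the distance computation.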
Regarding claim 6, Kim in view of Machacek teaches the system of claim 1, where the first diffusion model machine-learned network is further configured to denoise the first noise input based on an anatomical mask positioned within the defined multidimensional space (see Kim at Fig. 1: the normal brain images produce a brain mask that is used to condition/constrain the inpainted area), the anatomical mask generated based on the pre-abnormality image (see Kim at Fig. 1).
Regarding claim 7, Kim in view of Machacek teaches the system of claim 6, where:
the anatomical mask includes a brain mask (see Kim at Fig. 1: the normal brain images produce a brain mask that is used to condition/constrain the inpainted area); and
the synthesized medical abnormality includes a brain tumor (Kim, Fig. 1, “Synthesized MR images of Brain Tumor”).
Regarding claim 8, Kim in view of Machacek teaches the system of claim 6, where:
the anatomical mask includes one or more anatomical boundaries (see Kim at Fig. 1: the normal brain images produce a brain mask that is used to condition/constrain the inpainted area); and
the first diffusion model machine-learned network is further configured to denoise the first noise input based on an anatomical mask by positioning and/or shaping the abnormality spatial mask to disallow boundary straddling (see Kim at Fig. 1: the normal brain images produce a brain mask that is used to condition/constrain the inpainted area).
Regarding claim 9, Kim in view of Machacek teaches the system of claim 1, where:
the first diffusion model machine-learned network is further configured to denoise the first noise input iteratively using multiple denoising iterations (Machacek, section 3.1, “The improved diffusion model general synthetic mask images by first adding noise to a randomly selected mask image from the training set. This noise would then be gradually reversed through multiple steps until a synthetic mask image is generated.”); and
the second diffusion model machine-learned network is further configured to denoise the second noise input iteratively using multiple denoising iterations (see Machacek at Figure 1: both diffusion models implement diffusion processes, meaning each model performs iterative denoising across multiple timesteps.).
The rationale for obviousness is the same as provided for claim 1.
Regarding claim 10, Kim in view of Machacek teaches the system of claim 1, where the first diffusion model machine-learned network and/or second diffusion model machine-learned network include diffusion model machine-learned networks trained using an image set generated using a ground truth image (Machacek, section 3.1, “the first step is to obtain a training set for real mask images”) with increasing levels of noise added (Machacek, section 3.1, “This noise would then be gradually reversed through multiple steps until a synthetic mask image is generated.”).
The rationale for obviousness is the same as provided for claim 1.
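For clarity of the record, the training-set construction recited in claim 10 (an image set generated from a ground truth image with increasing levels of noise added, as in the forward process of diffusion model training) can be sketched as follows. This is an illustrative Python sketch only; the function name and noise schedule are hypothetical and not taken from Machacek.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_noisy_training_set(ground_truth, num_levels):
    """Build training pairs from one ground-truth image by adding
    increasing levels of noise: level t pairs the progressively
    noisier image with the noise that was just added, which is what
    a diffusion network learns to predict and reverse."""
    pairs = []
    x = ground_truth
    for t in range(num_levels):
        noise = rng.standard_normal(ground_truth.shape)
        x = x + (t + 1) * 0.01 * noise   # noise scale grows with t
        pairs.append((x.copy(), noise))  # (noisy image, added noise)
    return pairs

ground_truth = np.zeros((8, 8))
training_set = make_noisy_training_set(ground_truth, num_levels=10)
```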
Regarding claim 11, Kim in view of Machacek teaches the system of claim 1, where the synthesized medical abnormality includes a tumor (Kim, Fig. 1, “Synthesized MR images of Brain Tumor”), […]
Regarding claim 12, Kim in view of Machacek teaches the system of claim 1, where the pre-abnormality image includes a magnetic resonance imaging (MRI) image (Kim, Abstract, “Our method can synthesize a huge number of brain tumor multicontrast MR images from numerous healthy brain multicontrast MR images and various concentric circles.”) […]
Regarding claim 13, Kim in view of Machacek teaches the system of claim 1, where the defined multidimensional space includes a two-dimensional space (see Kim at Fig. 1) or a three-dimensional space (see Kim at pg. 2197, “three dimensions”).
The rationale for obviousness is the same as provided for claim 3.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Kim in view of Machacek, in further view of U.S. Pat. Appl. Pub. No. 20230377226 (filed 19 May 2023) to Saharia et al. (hereinafter “Saharia”), and in further view of LLM Itself Can Read and Generate CXR Images (published 24 May 2023) to Lee et al. (hereinafter “Lee”).
Regarding claim 5, Kim in view of Machacek teaches the system of claim 1, but does not teach that which is explicitly taught by Saharia.
Saharia teaches where:
the descriptor input includes a vector descriptor indicating […] (Saharia, “…large language models (LLMs) with a sequence of generative neural networks (e.g., diffusion-based models) to deliver text-to-image generation with a high degree of photorealism, fidelity, and deep language understanding.”); and
obtaining the descriptor input includes applying a large language model (Saharia, par. 52, “large language models (LLMs)”) to […]
Kim in view of Machacek is analogous to the claimed invention for the reasons provided above. Saharia discloses diffusion-based text-to-image synthesis for generating synthetic images with an LLM as a pre-trained natural language text encoder (see Saharia at par. 64). Thus, Saharia shows that it was known in the art before the effective filing date of the claimed invention to use LLMs to encode text prompts into vector descriptors specifying what the user wants the synthesized image to include, which is analogous to the claimed invention in that it is pertinent to the problem being solved by the claimed invention, expanding a limited set of training examples.
A person of ordinary skill in the art would have been motivated to add an LLM as disclosed by Saharia to the image synthesis method disclosed by Kim in view of Machacek, to thereby generate text encodings from a prompt provided by a user to specify what is included in the synthesized medical image. Based on the foregoing, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have made such modification according to known methods to yield the predictable results to have the benefit of increasing user-friendliness of the training interface for training interaction.
Kim in view of Machacek and in further view of Saharia does not teach that which is explicitly taught by Lee.
Lee teaches where the descriptor input includes a vector descriptor indicating one or more medical classifications of the synthesized medical abnormality (Lee, Figure 1, “Generate a CXR image for the following diagnosis: left lung pneumonia”); and
obtaining the descriptor input includes applying a large language model to a clinical description of a model medical abnormality to generate the vector descriptor (see Lee at Figure 1: the LLM can generate text or image output based on text and/or image input tokens).
Kim in view of Machacek and in further view of Saharia is analogous to the claimed invention for the reasons provided above. Lee discloses medical image synthesis that uses an LLM to generate new medical images that (ideally) correspond to model (ground truth) clinical descriptions of particular medical classifications. Thus, Lee shows that it was known in the art before the effective filing date of the claimed invention to use LLMs to receive text prompts of clinical descriptions that medically classify the expected appearance of the synthesized output image, which is analogous to the claimed invention in that it is pertinent to the problem being solved by the claimed invention, expanding a limited set of training examples.
A person of ordinary skill in the art would have been motivated to finetune the LLM disclosed by Kim in view of Machacek and in further view of Saharia according to the framework disclosed by Lee, to thereby generate synthetic brain tumor images responsive to user-provided prompts that include the same type of textual description used by clinicians when describing certain types and appearances of specific brain tumors. Based on the foregoing, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have made such modification according to known methods to yield the predictable results to have the benefit of giving a user greater control over the specificity of the synthesized output.
Claims 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kim in view of Machacek and in further view of U.S. Pat. Appl. Pub. No. 20240169500 (filed 22 November 2022) to Zheng et al. (hereinafter “Zheng”).
Regarding claim 14, Kim teaches a multiple-stage denoising method […]
[…]
after obtaining the abnormality spatial mask, […] (the “Synthesized MR images of Brain Tumor”), but does not teach that which is explicitly taught by Machacek.
Machacek teaches denoising, using a first diffusion model machine-learned network at a first denoising stage (Machacek, Figure 1, “Improved Diffusion”), a first noise input to obtain an abnormality spatial mask (Machacek, section 3.1, “The improved diffusion model general synthetic mask images by first adding noise to a randomly selected mask image from the training set. This noise would then be gradually reversed through multiple steps until a synthetic mask image is generated.”) within a defined multidimensional space (see Machacek at Figure 1: the synthetic masks are two-dimensional);
after obtaining the abnormality spatial mask, denoising, using a […]
The rationale for obviousness is the same as provided for claim 1.
Kim in view of Machacek does not teach that which is explicitly taught by Zheng.
Zheng teaches providing the medical image to a training interface for training interaction (Zheng, par. 51, “user interface 220 receives an image including a first region that includes content and a second region to be inpainted. In some examples, user interface 220 provides the image as an input to diffusion model 225, where the intermediate output image is conditioned based on the first region of the image. In some examples, user interface 220 receives a user input indicating the second region to be inpainted.”; see also Zheng at pars. 121-22).
Kim in view of Machacek is analogous to the claimed invention for the reasons provided above. Zheng discloses a user interface for training diffusion models for inpainting. Thus, Zheng shows that it was known in the art before the effective filing date of the claimed invention to use user interfaces for training diffusion models to generate synthetic images, which is analogous to the claimed invention in that it is pertinent to the problem being solved by the claimed invention, expanding a limited set of training examples.
A person of ordinary skill in the art would have been motivated to add a user interface as disclosed by Zheng to the device performing the method of Kim in view of Machacek, to thereby enable a user to visually inspect pre-abnormality images to select a subset of images for training a machine learning model to synthesize new types of synthetic tumor images or fine-tune the existing model. Based on the foregoing, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have made such modification according to known methods to yield the predictable results to have the benefit of providing a simple way to use and re-train the model.
Regarding claim 15, Kim in view of Machacek and in further view of Zheng teaches the multiple-stage denoising method of claim 14, where denoising the second noise input to obtain the medical image includes obtaining an image to supplement a training set of medical images with deficient occupancy for medical images with a medical abnormality with at least a selected characteristic (having the synthetic image included therein) present in the synthesized medical abnormality (see Machacek at section 2: “real-world datasets … have some limitations” including “diversity”, meaning a “narrow range of images” will not be representative of the many ways medical abnormalities are presented in real life and may incorrectly influence the model to misclassify an image object, for example mistaking a healthy fold in a GI tract as being a polyp (i.e., a deviation from a medically established relative probability for occurrences of the synthetic object produced by the model); “Annotation quality”, meaning datasets include mislabeled images and lack the synthetic images that can be assumed to always include the correctly-labeled object (i.e., an absence of medical images with the correct label of the synthetic image); and image “Size”, meaning there are not enough images (i.e., below a threshold) in the dataset.).
The rationale for obviousness is the same as provided for claim 14.
Regarding claim 16, Kim in view of Machacek and in further view of Zheng teaches the multiple-stage denoising method of claim 15, where the deficient occupancy of the training set includes:
a deviation from a medically established relative probability for occurrences of the medical abnormality with at least the selected characteristic (see Machacek at section 2: “real-world datasets … have some limitations” including “diversity”, meaning a “narrow range of images” will not be representative of the many ways medical abnormalities are presented in real life and may incorrectly influence the model to misclassify an image object, for example mistaking a healthy fold in a GI tract as being a polyp (i.e., a deviation from a medically established relative probability for occurrences of the synthetic object produced by the model));
an absence of medical images with the medical abnormality with at least the selected characteristic (see Machacek at section 2: “Annotation quality”, meaning datasets include mislabeled images and lack the synthetic images that can be assumed to always include the correctly-labeled object (i.e., an absence of medical images with the correct label of the synthetic image)); and
a below threshold amount of total images within the training set (see Machacek at section 2: “Annotation quality”, meaning datasets include mislabeled images and lack the synthetic images that can be assumed to always include the correctly-labeled object (i.e., an absence of medical images with the correct label of the synthetic image), and image “Size”, meaning a deficiency exists when there are not enough images (i.e., a below threshold amount) in the dataset).
The rationale for obviousness is the same as provided for claim 14.
Regarding claim 17, Kim in view of Machacek and in further view of Zheng teaches the multiple-stage denoising method of claim 14, where denoising the first noise input to obtain the abnormality spatial mask includes denoising the first noise input to obtain the abnormality spatial mask within an anatomical mask positioned within the defined multidimensional space (see Kim at Fig. 1: the brain tumor is synthesized within the two-dimensional space of the tumor mask within the overall brain mask).
Regarding claim 18, Kim in view of Machacek and in further view of Zheng teaches the multiple-stage denoising method of claim 17, where:
the anatomical mask includes one or more anatomical boundaries (see Kim at Fig. 1: the tumor mask has an outer boundary.); and
at a time that the abnormality has a center-of-mass near the one or more anatomical boundaries (see Kim at Fig. 1: the tumor mask restricts where the synthesized data is added. The center of mass of the tumor in the masked area is always somewhere near the boundary. Furthermore, the tumor is represented as concentric circles. The center of the concentric circles is an approximation of a center of mass of a tumor.):
denoising the first noise input includes shaping the abnormality spatial mask to disallow boundary straddling (see Kim at Fig. 1: the normal brain images produce a brain mask that is used to condition/constrain the inpainted area. The determined shape of a tumor mask disallows a shared boundary with the surrounding healthy brain image.).
Regarding claim 19, Kim in view of Machacek and in further view of Zheng teaches the multiple-stage denoising method of claim 17, where:
the anatomical mask includes a brain mask (see Kim at Fig. 1: brain mask output from the normal brain images); and
the abnormality includes a brain tumor (see Kim at Fig. 1: tumor mask).
Claim 20 substantially corresponds to claim 14, mainly differing by including claim limitations substantially similar to limitations in claims 1 and 2 (taught by Kim), and further specifying that the abnormality spatial mask spatially defines the synthesized medical abnormality with the selected characteristic (see Kim at Fig. 1: the concentric circles represent tumor characteristics within the spatially defined tumor mask area.). Therefore, claim 20 is rejected for the same reasons of obviousness as provided for claim 14.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RYAN P POTTS whose telephone number is (571)272-6351. The examiner can normally be reached M-F, 9am-5pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz, can be reached at 571-272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RYAN P POTTS/Examiner, Art Unit 2672
/SUMATI LEFKOWITZ/Supervisory Patent Examiner, Art Unit 2672