Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The listing of references in the specification is not a proper information disclosure statement. 37 CFR 1.98(b) requires a list of all patents, publications, or other information submitted for consideration by the Office, and MPEP § 609.04(a) states, "the list may not be incorporated into the specification but must be submitted in a separate paper." Therefore, unless the references have been cited by the examiner on form PTO-892, they have not been considered.
Translated copies of Office Actions from counterpart applications are expected to be relevant for the present application.
The examiner notes that the IDS has the first inventor’s first name listed as “Kyle,” but the Application Data Sheet filed March 15, 2024 (i.e., two weeks prior), changed the name to “Kaier.”
Claim Objections
Claim 9 is objected to because of the following informalities:
Claim 9 recites a “second path,” but there is no first path in claim 9 because claim 9 does not depend from claim 8.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-13 (all claims) are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
The specification’s “field of the invention” states “The present invention … relates, in particular, to means to translate a for-processing image to a for-presentation image that is manufacturer and modality agnostic.” The background section underscores the importance of being manufacturer and modality agnostic by repeating the admission that this is what the present invention is. Because none of claims 1-13 is limited to what the specification identifies as a defining feature, the specification does not demonstrate possession of the claimed invention. MPEP 2172 addresses a different situation, in which the legal scope of protection merely varies from the plain-language description of the invention (as happened in In re Zahn, 617 F.2d 261, 204 USPQ 988 (CCPA 1980)). Here, the claim language is at odds with, not merely different from, what is described as the invention. To overcome this rejection, Applicant may wish to consider the last paragraph of the background section:
The present invention overcomes such problems. It provides manufacture agnostic means to learn a translation mapping between paired for-processing and for-presentation images using GAN. The trained GAN can convert a for-processing image to a vendor-neutral for-presentation image. The present invention further serves as a standardization framework to alleviate differences as well ensuring comparable review across different radiography equipment, acquisition settings and representations.
Claims that are limited to these features are expected to overcome this rejection.
Claim 1 recites “comprising a first neural network as a generator and a second neural network as a discriminator,” but this is unlimited functional claiming due to the wide variety of neural network architectures that can be used in a GAN. MPEP 2173.05(g). Limiting the neural networks to “convolutional neural networks” is expected to overcome this rejection.
Dependent claims are likewise rejected.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 2-13 (all claims but 1) are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 2 recites “wherein to train the discriminator,” but it is unclear if this is intended to mean that the following claim elements are to be interpreted as method steps, or if this is instead reciting an intended use.
Claim 2 recites “a first set of paired for-processing images and real for-presentation images,” but the antecedent basis is unclear because both claims 1 and 2 recite sets that are for training, but claim 2 further specifies that the “for-presentation” images are real. Because claims 1 and 2 are both for the same training, there is an implication that the requirement for real for-presentation images would also be included in the set of claim 1, but this is not specified. MPEP 2173.05(e).
Claim 2 also recites “a second set of paired for-processing images and pseudo for-presentation images,” but this raises the same antecedent basis issue as above.
Claim 13 recites corresponding language and is likewise rejected.
Claim 3 recites “wherein to train the generator,” but this raises the same issue as “wherein to train the discriminator.”
Claim 3 recites “image feature-level distance,” but this is new terminology. MPEP 2173.05(a). The usage in the specification (e.g., bottom of p. 17) is not specific enough to provide guidance.
Claim 13 recites corresponding language and is likewise rejected.
Claim 5 recites “then normalise,” but it is unclear if this has antecedent basis in claim 4’s “normalise,” or if this is intended to be a different instance of normalizing.
Claim 6 recites “determined by a ratio” but this is subjective because the ratio is unspecified. MPEP 2173.05(b)(IV). One option to overcome this rejection is to specify an objective standard, such as what the ratio is or how it is determined.
Claim 6 recites “preselected,” but this raises the same issue as “determined by a ratio.”
Claim 7 recites “preselected,” but this raises the same issue as “determined by a ratio.”
Claims 8 and 9 recite network layers “from concatenation of the sets of paired images,” but this lacks a clear plain meaning because concatenated images are an input to a neural network, not a layer connection.
Claim 10 recites “the first” path, but this lacks sufficient antecedent basis (note that the first path is recited in claim 8, but claim 8 is not a parent of claim 10). MPEP 2173.05(e).
Claim 11 recites “the second path,” but this lacks sufficient antecedent basis (note that the second path is recited in claim 9, but claim 9 is not a parent of claim 11). MPEP 2173.05(e).
Claim 11 recites “for each,” but this is subjective. MPEP 2173.05(b)(IV). It appears that the intent may have been to claim “from” each, rather than “for” each.
Claim 11 recites “and/or,” but it is unclear whether this is met by either or if both are required. This is analogous to MPEP 2173.05(d).
Claim 12 recites “the first score and the second score,” but both scores lack sufficient antecedent basis. MPEP 2173.05(e). It may be that the intent was for claim 12 to depend from claim 2.
Claim 12 recites “indicates” but this is subjective. MPEP 2173.05(b)(IV). One option to overcome this rejection is to specify an objective standard.
Claim 13 recites corresponding language and is likewise rejected.
Dependent claims are likewise rejected.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-13 (all claims) are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter because they are directed to “a generative adversarial network,” without specifying whether this is a method or an apparatus.
Claims 1-13 (all claims) are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea (mental process) without significantly more.
Step 1: As per above, all of the claims fail step 1.
Step 2A, prong one: All of the elements of claims 1-13 are a mental process. “The courts do not distinguish between mental processes that are performed entirely in the human mind and mental processes that require a human to use a physical aid (e.g., pen and paper or a slide rule) to perform the claim limitation.” MPEP 2106.04(a)(2)(III). To date, the examiner has not identified guidance (e.g., from the USPTO or the courts) specifying that an amount of computation alone is sufficient to overcome a process otherwise being mental. Further, the various models are also mental processes; see example 47, claim 2, element (d) (from the July 2024 AI subject matter eligibility examples). MPEP 2106.04(a)(2)(III)(C) explains that use of a generic computer or a computer environment is still a mental process. In particular, this section begins by citing Gottschalk v. Benson, 409 U.S. 63 (1972): “The Supreme Court recognized this in Benson, determining that a mathematical algorithm for converting binary coded decimal to pure binary within a computer’s shift register was an abstract idea.” In Benson, the Supreme Court did not separately analyze the computer hardware at issue; the specifics of the claimed hardware are included only in an appendix to the decision.
Because there are no additional elements, no further analysis is required for Step 2A, prong two or Step 2B.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim 1 is rejected under 35 U.S.C. 102 as being anticipated by Applicant Admitted Prior Art. MPEP 2129(I) states “Where the admitted prior art anticipates the claim but does not qualify as prior art under any of the paragraphs of 35 U.S.C. 102, the claim may be rejected as being anticipated by the admitted prior art without citing to 35 U.S.C. 102.”
1. (Original): A Generative Adversarial Network (GAN) comprising a first neural network as a generator and a second neural network as a discriminator configured to train one another to learn a translation mapping between sets of paired for-processing and for-presentation images. (Applicant Admitted Prior Art, “Background”
1) “In radiologic applications, GANs are used to synthesize images conditioned on other images. The discriminator determines for pairs of images whether they form a realistic combination. Thus it is possible to use GANs for image-to-image translation problems such as correction of motion artefacts, image denoising, and modality translation (e.g. PET to CT).”
2) “For example, a generator CNN can be trained to transform an image of one modality (the source domain) into an image of another modality (the target domain).”
3) “This GAN comparison study shows that supervised paired image-to-image translation yields higher image quality in the target domain than the semi-supervised unpaired image-to-image translation.” These are three separate teachings, each of which anticipates this claim.)
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3 and 8-13 are rejected under 35 U.S.C. 103 as being unpatentable over Shin Y, Yang J, Lee YH. Deep generative adversarial networks: applications in musculoskeletal imaging. Radiology: Artificial Intelligence. 2021 Mar 3;3(3):e200157 (“Shin”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of the various embodiments of Shin together so that one can choose the desired features. Shin motivates this combination by being a review paper. Shin, abstract.
Based on the above, this is an example of “combining prior art elements according to known methods to yield predictable results.” MPEP 2143.
1. (Original): A Generative Adversarial Network (GAN) comprising a first neural network as a generator and a second neural network as a discriminator configured to train one another to learn a translation mapping between sets of paired for-processing and for-presentation images. (Shin, abstract, “Generative adversarial networks (GANs), which are deep neural networks that can generate or transform images, have the potential to aid in faster imaging by generating images with a high level of realism across multiple contrast and modalities from existing imaging protocols”)
2. (Currently amended): A GAN according to claim 1 wherein to train the discriminator:
the generator is configured to yield a pseudo for-presentation image A' from a for-processing image A,
the discriminator is configured to yield a first score measuring the discriminator performance in identifying a real for-processing image from a first set of paired for-processing images and real for-presentation images,
the discriminator is configured to yield a second score measuring the discriminator performance in identifying the pseudo for-processing image from a second set of paired for-processing images and pseudo for-presentation images,
the discriminator is configured to backpropagate the first score and the second score to update weights of the discriminator. (Shin, “Overview of GANs.” The claim is directed to the basics of how a GAN works, and Shin describes the same technology (though written at a high level; see, e.g., how the section on CycleGAN discusses minimizing cycle consistency loss without specifying “backpropagation,” because one of ordinary skill would understand that this is taught). See also MPEP 2144.06 because, at a minimum, backpropagation is a known substitute.)
3. (Currently amended): A GAN according to claim 1 wherein to train the generator:
the discriminator is configured to yield a third score measuring general image quality difference from the first set of paired for-processing images and real for-presentation images, (Shin, Image Quality Evaluation, “as a validation tool for quality assessment of the generated images”)
the discriminator is configured to yield a fourth score measuring image feature-level distance from the first set of paired for-processing images and real for-presentation images and the second set of paired for-processing images and pseudo for-presentation images, (Shin, Cross-modality synthesis, “mean p distance” (the p distance measures the distance between synthetic and original images) and p. 6, left side, “defined as the distance of features extracted by pretrained VGG network [31] layers to learn the high-frequency pixel distributions of images”)
the generator is configured to backpropagate the third score and the fourth score to update weights of the generator. (Shin, “Overview of GANs” See the mapping of claim 2.)
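For context only, the general training scheme recited in claims 2 and 3 (a discriminator scored on real and pseudo image pairs, and a generator scored on a paired image distance, with the scores backpropagated to update each network's weights) can be sketched in toy form. The linear "networks," data, learning rate, and loss choices below are invented for illustration and come from neither the application nor Shin:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy flat vectors stand in for paired for-processing images A and
# real for-presentation images B; the mapping to learn is B = 2*A + 1.
A = rng.normal(size=(64, 4))
B = 2.0 * A + 1.0

W_g = rng.normal(scale=0.1, size=(4, 4))   # generator weights
b_g = np.zeros(4)
W_d = rng.normal(scale=0.1, size=8)        # discriminator weights
b_d = 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.05
for _ in range(500):
    A_prime = A @ W_g + b_g                            # pseudo for-presentation images

    real_pairs = np.concatenate([A, B], axis=1)        # first set (real pairs)
    fake_pairs = np.concatenate([A, A_prime], axis=1)  # second set (pseudo pairs)

    p_real = sigmoid(real_pairs @ W_d + b_d)  # "first score":  want -> 1
    p_fake = sigmoid(fake_pairs @ W_d + b_d)  # "second score": want -> 0

    # Backpropagate both scores to update the discriminator weights
    # (gradient of the binary cross-entropy loss).
    W_d -= lr * (real_pairs.T @ (p_real - 1.0) + fake_pairs.T @ p_fake) / len(A)
    b_d -= lr * ((p_real - 1.0).mean() + p_fake.mean())

    # Backpropagate a paired image-distance loss to update the generator
    # (mean squared error between pseudo and real for-presentation images).
    W_g -= lr * A.T @ (2.0 * (A_prime - B)) / len(A)
    b_g -= lr * (2.0 * (A_prime - B)).mean(axis=0)

err = np.abs(A @ W_g + b_g - B).mean()
print(f"mean |A' - B| after training: {err:.4f}")
```

In this sketch the generator converges to the paired mapping while the discriminator is trained on the two sets of pairs; it is offered only to illustrate the mechanism the claims recite, not as a characterization of the record.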
8. (Currently amended): A GAN according to claim 1 wherein the discriminator comprises a first path of network layers direct from concatenation of the sets of paired images. (Shin, Fig. 4)
9. (Currently amended): A GAN according to claim 1 wherein the discriminator comprises a second path of network layers from down-sampled resolution from concatenation of the sets of paired images. (Shin, Deep Convolutional GAN, “In addition, in DCGANs, the generator and the discriminator learn their own spatial downsampling … .”)
10. (Currently amended): A GAN according to claim 9, wherein the first and second paths share the same network layers. (Shin, Fig. 4)
11. (Currently amended): A GAN according to claim 8, wherein the discriminator is configured to extract first multiscale features for each of the network layers in the first path and/or to extract second multiscale features for each of the network layers in the second path. (Shin, Fig. 7, caption, “MS-SSIM = multiscale structural similarity”)
12. (Currently amended): A GAN according to claim 11 where the discriminator is configured to utilize the extracted features to compute the first score and the second score in a sum which indicates a capability of the discriminator to distinguish the real for-presentation images from the pseudo for-presentation images. (Shin, Image Quality Evaluation, “Most CNN models incorporate similarity metrics for both loss functions in training and image distortion measurement in the test phase (eg, … structural similarity index).” Shin’s incorporation of multiple similarity metrics teaches the claimed “sum” (alternatively, MPEP 2144.06), and the structural similarity index teaches the claimed extracted features (i.e., the multiscale structural similarity is the structural similarity index).)
Claim 13 is rejected as per claims 2, 3, and 12.
Claims 4-7 are rejected under 35 U.S.C. 103 as being unpatentable over Shin as applied to claim 2 above, and further in view of Yang Q, Wu Y, Cao D, Luo M, Wei T. A lowlight image enhancement method learning from both paired and unpaired data by adversarial training. Neurocomputing. 2021 Apr 14;433:83-95. (“Yang”)
4. (Currently amended): Shin teaches A GAN according to claim 2, but is not relied on for the below claim language:
However, Yang teaches comprising a preprocessor configured to receive and normalise a source image to yield the for-processing image A. (Yang, abstract, “when applied as pre-processing module” and “Finally, we improve the enhancer by introduce attention mechanism and global feature to original U-net, make it more suitable for lowlight image enhancement task.” Yang’s attention mechanism teaches the claimed normalising; see the text under equation (6), where Yang conducts linear scaling.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Yang to the teachings of Shin as combined for claim 2 such that Yang’s preprocessing is used before Shin’s encoding for the purpose of improving the result of the later process (i.e., Shin’s processes). Yang, abstract, “when applied as pre-processing module, our method can improve the classification accuracy on lowlight dataset by 1.7%”
Based on the above, this is an example of “combining prior art elements according to known methods to yield predictable results.” MPEP 2143.
5. (Original): A GAN according to claim 4 wherein the preprocessor is configured to perform gamma correction on the source image and then normalise. (Yang, abstract, “a novel gamma-correction-based self-supervised content loss on abundant unpaired real data, for training the enhancer to perform better on real lowlight images.”)
6. (Currently amended): A GAN according to claim 5 wherein the preprocessor is configured to apply a level of gamma correction determined by a ratio of breast projected area in the source image to a preselected value. (Yang, “The gamma correction is performed on the brightness channel of HSV color space.” The claimed correction is non-functional descriptive material and thus not entitled to patentable weight. In the interest of compact prosecution, the breast projected area is brighter than the background, or alternatively, the preselected value can be chosen such that the ratio is one.)
7. (Original): A GAN according to claim 6 wherein above a preselected value of the ratio the level of gamma correction is lower than below the preselected ratio. (Yang, “The gamma correction is performed on the brightness channel of HSV color space.” The claimed correction is non-functional descriptive material and thus not entitled to patentable weight. In the interest of compact prosecution, the breast projected area is brighter than the background, or alternatively, the preselected value can be chosen such that the ratio is one.)
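To illustrate one way the ratio rule of claims 6 and 7 could be stated objectively, the following hypothetical sketch applies a level of gamma correction chosen from the ratio of the projected (foreground) area to a preselected value. The threshold, gamma levels, and foreground rule are assumptions for illustration and come from neither the application nor Yang:

```python
import numpy as np

def gamma_correct(source, area_fraction=0.25, mild_gamma=0.8, strong_gamma=0.5):
    """Normalise a source image, then gamma-correct it at a level chosen
    from the ratio of the projected (foreground) area to a preselected
    value (hypothetical rule for illustration)."""
    img = source.astype(float)
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)  # normalise to [0, 1]
    # Projected area as a fraction of pixels brighter than the background,
    # divided by the preselected value to form the claimed ratio.
    ratio = (img > 0.1).mean() / area_fraction
    # Per claim 7's rule: above the preselected ratio, apply a lower level
    # of correction (a gamma closer to 1); below it, a stronger correction.
    gamma = mild_gamma if ratio > 1.0 else strong_gamma
    return img ** gamma, gamma

# A frame whose bright region covers 60% of pixels vs. one covering 10%.
wide = np.zeros((10, 10)); wide[:, :6] = 200.0
narrow = np.zeros((10, 10)); narrow[:, :1] = 200.0
_, g_wide = gamma_correct(wide)      # ratio 2.4 -> milder correction
_, g_narrow = gamma_correct(narrow)  # ratio 0.4 -> stronger correction
```

A recitation along these lines (a specified ratio and how it is determined) is the kind of objective standard suggested above for overcoming the 35 U.S.C. 112(b) rejection of claims 6 and 7.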
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 11398026 B2 – abstract “synthesize a predicted medical image of the patient that depicts the patient as if they were administered with a second imaging agent”
US 10482600 B2 – title “Cross-domain image analysis and cross-domain image synthesis using deep image-to-image networks and adversarial networks”
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID ORANGE whose telephone number is (571)270-1799. The examiner can normally be reached Mon-Fri, 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse can be reached at 571-272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID ORANGE/Primary Examiner, Art Unit 2663