DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) was submitted on 1/30/2024. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Status
Claim(s) 1-3, 6-7, 9, 11, 13, 17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Xia (US 20250078290 A1) in view of Han (US 11699281 B2).
Claim(s) 4-5, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Xia (US 20250078290 A1) in view of Han (US 11699281 B2) and in further view of Nielsen (US 12437391 B2).
Claim(s) 8 is rejected under 35 U.S.C. 103 as being unpatentable over Xia (US 20250078290 A1) in view of Han (US 11699281 B2) and in further view of Chen (US 11861501 B2).
Claim(s) 10, 14 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Xia (US 20250078290 A1) in view of Han (US 11699281 B2) and in further view of Fuchs (US 20210074036 A1).
Claim(s) 15 is rejected under 35 U.S.C. 103 as being unpatentable over Xia (US 20250078290 A1) in view of Han (US 11699281 B2) and in further view of Wang (US 20210225027 A1).
Claims 16 and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-3, 6-7, 9, 11, 13, 17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Xia (US 20250078290 A1) in view of Han (US 11699281 B2).
Regarding claims 1 and 20, Xia discloses [Claim 1: A method for synthesizing MRA and MIP of MRA images using acquired single contrast MR image (T1-w MR image) (Xia: ¶6 “enable subject-specific synthesis of magnetic resonance angiography (MRA) images, given other multi-contrast MR images (e.g. T1-weighted, T2-weighted, PD-weighted MR images)”) by a system having at least a processor and a memory (Xia: ¶67 “hardware resources 1105 including one or more processors (or processor cores) 1110, one or more memory/storage devices 1120,”) therein to execute instructions of an artificial intelligence engine configured to a UNet model stored within the memory of the system; wherein the UNet model comprises: (Xia: ¶8 “Image-to-image translation may be achieved using statistical or machine learning approaches,”)]
[Claim 20: A system for deep learning-based translation of T1-weighted image to vasculature image of a brain comprising: (Xia: ¶6 “enable subject-specific synthesis of magnetic resonance angiography (MRA) images, given other multi-contrast MR images (e.g. T1-weighted, T2-weighted, PD-weighted MR images)”) a memory to store instructions and a processor to execute instructions stored within the memory; (Xia: ¶67 “hardware resources 1105 including one or more processors (or processor cores) 1110, one or more memory/storage devices 1120,”) the processor to execute an artificial intelligence engine configured to a UNet model stored within the memory of the system; wherein the UNet model comprising: (Xia: ¶8 “Image-to-image translation may be achieved using statistical or machine learning approaches,”)]
an encoder having a plurality of layer blocks, each of the layer blocks of the encoder comprising one or more convolutional layers, (Xia: ¶35 “Each generator convolution block in the encoder sub-network is made up of a 3D strided convolution layer”) each of the convolution layers associating with an activation layer, (Xia: ¶35 “followed by a leaky rectified linear unit (LeakyReLU) activation layer”) and a down sampling layer; (¶36 “Strided convolution layers 212,222,232,242 are responsible for learning features relevant to the learning task and down-sampling the inputs”)
a decoder having a plurality of layer blocks, each of the layer blocks of the decoder comprising one up-sampling layer, one or more convolutional layers, (Xia: ¶38 “The decoder 300 sub-network comprises a sequence of four up-sampling residual convolution blocks 310-340”) and each of the convolution layers associating with an activation layer; (Xia: ¶40 “FIG. 4 shows an example residual block (ResBlock) 400 architecture for use in the decoder of FIG. 3…a leaky rectified linear unit activation (LeakyReLU) layer”)
wherein the encoder is adapted to extract features from the T1-w MR image for the decoder to combine outputs from the encoder and extracted image features in multiscale resolution levels (Xia: ¶36 “The down-sampling factor is controlled by stride size, and, according to one example, a stride of 2×2×2 is used in all convolution blocks in the encoder 200 sub-network of the disclosed system.” ¶38 “The decoder 300 sub-network comprises a sequence of four up-sampling residual convolution blocks 310-340”) to generate the MRA and MIP of MRA images. (Xia: ¶26 “The disclosed system provides an end to end generative adversarial network that can synthesise anatomically plausible, high resolution 3D MRA images using the most commonly acquired multi-contrast images (i.e. T1/T2/PD-weighted MR images)”)
Xia fails to specifically disclose through the skip connection
a skip connection for associating with one of the layer blocks of the encoder with one of the layer blocks of the decoder at a corresponding multiscale resolution level;
In related art, Han discloses through the skip connection (Han: Col 17 lines 61-63 “an architecture as shown in FIG. 6, such as including short-range or long-range skip connections between convolutional layers.”)
a skip connection for associating with one of the layer blocks of the encoder with one of the layer blocks of the decoder (Han: Col 17 lines 61-63 disclose a skip connection between convolutional layers. Col lines 38-41 disclose an encoder and decoder as part of the Convolutional Neural Network) at a corresponding multiscale resolution level; (Han: Col 20 lines 64-65 “The CNN 600A of FIG. 6 may include five different resolution layers,”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the skip connection disclosed by Han into the method for synthesizing magnetic resonance angiography (MRA) images from other magnetic resonance images disclosed by Xia to pass information through the layers of the neural network such that it can be used to generate the MRA images.
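As an illustrative aside (not part of the prosecution record), the skip connection Han describes can be sketched in a few lines of Python: encoder features are copied and joined channel-wise with the decoder's features at the matching resolution level. All names and values below are hypothetical stand-ins for feature maps.

```python
# Minimal sketch of a UNet-style skip connection, assuming feature maps are
# represented as lists of channels. The encoder output at one resolution level
# is concatenated with the decoder's upsampled channels at the same level.

def skip_concat(encoder_features, decoder_features):
    """Channel-wise concatenation of encoder and decoder feature maps."""
    return encoder_features + decoder_features

enc = [[1.0, 2.0], [3.0, 4.0]]   # two hypothetical encoder channels
dec = [[5.0, 6.0]]               # one hypothetical upsampled decoder channel
merged = skip_concat(enc, dec)
print(len(merged))  # 3 channels flow into the next decoder convolution
```

The concatenated tensor then feeds the next decoder convolution block, which is how encoder detail reaches the decoder at each multiscale resolution level.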
Regarding claim 2, Xia, as modified by Han, disclose wherein the decoder comprises an output layer to generate an image with a same resolution as the input image. (Han: Col 21 lines 13-15 “A final or output layer 632A may provide a synthesized image having a resolution similar to or matching a resolution of the input 2D imaging data”)
Regarding claim 3, Xia, as modified by Han, disclose wherein the output layer comprises a single output convolutional layer followed by an output activation layer. (Xia: ¶39 “Final 3D convolution layer 350 is the last 3D convolutional layer added to the decoder, followed by a Tanh Activation layer 360.” Table 2 discloses output layers with size (1 x 1))
Regarding claim 6, Xia, as modified by Han, disclose wherein the encoder and the decoder are adapted to perform cross-sequence from a T1-w image to MRA or MIP image translation (Xia: ¶6 “enable subject-specific synthesis of magnetic resonance angiography (MRA) images, given other multi-contrast MR images (e.g. T1-weighted, T2-weighted, PD-weighted MR images)”) consisting of 19 convolutional layers. (Xia: ¶26 “The decoder sub-network in the generator module takes as input the latent vector output by the encoder sub-network and maps/transforms this low-dimensional representation of the multi-contrast images inputted to the encoder sub-network, to their corresponding, patient-/subject-specific MRA image.” )
Regarding claim 7, Xia, as modified by Han, disclose wherein the encoder is adapted to receive images comprising three dimensions and one or more color channels. (Xia: ¶10 “from patient-/subject-specific 3D brain multi-contrast MR images (such as T1-weighted, T2-weighted and PD-weighted MR images)”)
Regarding claim 9, Xia, as modified by Han, disclose wherein a layer block of the encoder that immediately precedes the decoder comprises a single convolution layer. (Xia: ¶36 discloses convolution block 240 containing strided convolution layer 242)
Regarding claim 11, Xia, as modified by Han, disclose wherein the activation layer is adapted to conduct a linear rectification function by one or more rectified linear units (ReLU). (Xia: ¶35 “followed by a leaky rectified linear unit (LeakyReLU) activation layer” )
Regarding claim 13, Xia, as modified by Han, disclose wherein each of the convolutional layers is adapted to process input data with a number of convolutional filters. (Xia: ¶40 “All 3D convolution blocks within the conditional batch normalisation layers (500) and the residual blocks (400) are identical convolution operations with the same kernel”)
Regarding claim 17, Xia, as modified by Han, disclose wherein the skip connection is adapted to copy and concatenate features generated from one of the layer blocks (Han: Col 19 lines 46-49 “In certain examples, information from a non-adjacent layer may “skip” intervening layers and may be aggregated together with the output of a later convolutional layer”) of the encoder to one of the layer blocks of the decoder (Han: Col 20 lines 38-54 discloses a skip connection as part of the CNN architecture along with the encoder and decoder) at a corresponding multiscale resolution level. (Han: Col 20 lines 64-65 “The CNN 600A of FIG. 6 may include five different resolution layers,”)
Claim(s) 4-5, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Xia (US 20250078290 A1) in view of Han (US 11699281 B2) and in further view of Nielsen (US 12437391 B2).
Regarding claim 4, Xia, as modified by Han, disclose wherein the single output convolutional layer is a 1 x 1 convolutional layer. (Xia: Table 2 discloses output layers with size (1 x 1))
Xia, as modified by Han, fail to specifically disclose with a stride of 1.
In related art, Nielsen discloses with a stride of 1. (Nielsen: Col 10 lines 1-3 “Any suitable number and size of convolutions may be utilised, having any suitable kernel size and stride size. For example, a stride of 1 may be used.”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate a stride of 1 disclosed by Nielsen into the method for synthesizing magnetic resonance angiography (MRA) images from other magnetic resonance images disclosed by Xia, as modified by Han, to select the hyperparameters that determine how the convolutional neural network will function.
Regarding claim 5, Xia, as modified by Han and Nielsen, disclose wherein the output activation layer is adapted to conduct hyperbolic tangent (tanh) operations. (Xia: ¶39 “Final 3D convolution layer 350 is the last 3D convolutional layer added to the decoder, followed by a Tanh Activation layer 360,”)
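As an illustrative aside (not part of the prosecution record), the Tanh output activation Xia cites at ¶39 squashes each output value into the range (-1, 1), which is how a final activation layer normalizes a synthesized image. The sketch below uses only the standard library; the function name is hypothetical.

```python
import math

def tanh_activation(values):
    """Elementwise hyperbolic tangent: maps each value into (-1, 1)."""
    return [math.tanh(v) for v in values]

out = tanh_activation([-2.0, 0.0, 2.0])
print(out[1])  # 0.0; the extreme inputs approach -1 and 1 respectively
```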
Regarding claim 12, Xia, as modified by Han, disclose the claimed invention except for wherein the down sampling comprises a 2 x 2 x 2 max-pooling operation with a stride of 2 voxels.
In related art, Nielsen discloses the down sampling comprises a 2 x 2 x 2 max-pooling operation with a stride of 2 voxels. (Nielsen: Col 9 lines 55-56 “a 2×2 max pooling operation with stride 2 for down-sampling.”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the max pooling operation disclosed by Nielsen into the method for synthesizing magnetic resonance angiography (MRA) images from other magnetic resonance images disclosed by Xia, as modified by Han, to down sample the input data as part of the MRA generation process.
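As an illustrative aside (not part of the prosecution record), the max-pooling operation Nielsen cites can be sketched in plain Python. The 2D case below takes the maximum of each non-overlapping 2 x 2 window with stride 2; the 2 x 2 x 2 voxel case of claim 12 adds one more nested loop over depth. The input values are hypothetical.

```python
def max_pool_2x2(image):
    """2x2 max pooling with stride 2 on a 2D map (lists of rows).
    The 3D 2x2x2 variant applies the same window over an added depth axis."""
    h, w = len(image), len(image[0])
    return [[max(image[i][j], image[i][j + 1],
                 image[i + 1][j], image[i + 1][j + 1])
             for j in range(0, w, 2)]
            for i in range(0, h, 2)]

img = [[1, 2, 5, 6],
       [3, 4, 7, 8],
       [9, 1, 2, 3],
       [4, 5, 6, 7]]
print(max_pool_2x2(img))  # [[4, 8], [9, 7]]
```

Each output cell keeps only the strongest response in its window, halving the spatial resolution at every down-sampling step.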
Claim(s) 8 is rejected under 35 U.S.C. 103 as being unpatentable over Xia (US 20250078290 A1) in view of Han (US 11699281 B2) and in further view of Chen (US 11861501 B2).
Regarding claim 8, Xia, as modified by Han, discloses the claimed invention except for wherein one or more layer blocks of the encoder comprises a repeated implementation of two 3 x 3 convolution layers with 2 voxels stride over five-layer blocks.
In related art, Chen discloses one or more layer blocks of the encoder comprises a repeated implementation of two 3 x 3 convolution layers with 2 voxels stride (Chen: Col 11-12 lines 60-1 “The first convolutional sublayer includes a 3×3 convolution kernel, each time of convolution has a stride of 2… a second convolutional sublayer 902 includes 64 3×3 convolution kernels”) over five-layer blocks. (Chen: Col 11 lines 29-30 “The deep network encoding unit 802 includes five convolutional layers,”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the encoder structure disclosed by Chen into the method for synthesizing magnetic resonance angiography (MRA) images from other magnetic resonance images disclosed by Xia, as modified by Han, to define the structure of a deep learning model that is going to perform a task.
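As an illustrative aside (not part of the prosecution record), the effect of stacking stride-2 3 x 3 convolutions over five encoder blocks can be shown with the standard output-size arithmetic. The kernel, stride, padding, and starting size below are hypothetical values chosen only to illustrate the halving behavior.

```python
def conv_out_size(size, kernel=3, stride=2, pad=1):
    """Spatial size after one convolution: floor((size + 2*pad - kernel) / stride) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

size = 64
for block in range(5):          # five encoder blocks, each halving resolution
    size = conv_out_size(size)
print(size)  # 64 -> 32 -> 16 -> 8 -> 4 -> 2
```

With one stride-2 convolution per block and padding 1, each block halves the spatial extent, so five blocks reduce a 64-voxel axis to 2.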
Claim(s) 10, 14 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Xia (US 20250078290 A1) in view of Han (US 11699281 B2) and in further view of Fuchs (US 20210074036 A1).
Regarding claim 10, Xia, as modified by Han, disclose the claimed invention except for wherein a zero padding technique is implemented before each convolution layer.
In related art, Fuchs discloses a zero padding technique is implemented before each convolution layer. (Fuchs: ¶97 “With the determination of the predefined number, the upsampling layer 246A-N may perform zero-padding” ¶97 and Fig. 2C disclose utilizing zero padding at every convolution stack)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the zero padding technique disclosed by Fuchs into the method for synthesizing magnetic resonance angiography (MRA) images from other magnetic resonance images disclosed by Xia, as modified by Han, to manage the dimensions of the data being manipulated by the model.
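As an illustrative aside (not part of the prosecution record), zero padding simply surrounds a feature map with a border of zeros, so that a subsequent 3 x 3 convolution produces an output of the same spatial size. The 2D sketch below uses hypothetical values; the 3D voxel case pads each axis the same way.

```python
def zero_pad(image, pad=1):
    """Surround a 2D map with `pad` rows/columns of zeros on every side."""
    w = len(image[0]) + 2 * pad
    out = [[0] * w for _ in range(pad)]          # top border
    for row in image:
        out.append([0] * pad + list(row) + [0] * pad)
    out.extend([[0] * w for _ in range(pad)])    # bottom border
    return out

padded = zero_pad([[1, 2], [3, 4]])
print(padded)  # [[0, 0, 0, 0], [0, 1, 2, 0], [0, 3, 4, 0], [0, 0, 0, 0]]
```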
Regarding claim 14, Xia, as modified by Han, disclose the claimed invention except for wherein the number of convolutional filters is doubled from a first layer block to a last layer block within the encoder.
In related art, Fuchs discloses the number of convolutional filters is doubled from a first layer block to a last layer block within the encoder. (Fuchs: ¶6 “The number of filters across the convolutional layers may increase at each successive CNN, thereby increasing the number of feature maps… At each CNN of the encoder, the number of feature maps may double”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate doubling the number of filters across convolutional layers disclosed by Fuchs into the method for synthesizing magnetic resonance angiography (MRA) images from other magnetic resonance images disclosed by Xia, as modified by Han, to enhance feature learning from the input magnetic resonance image.
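As an illustrative aside (not part of the prosecution record), the filter-doubling scheme Fuchs describes at ¶6 yields a simple geometric schedule of filter counts across encoder blocks. The base filter count and block count below are hypothetical illustration values, not figures taken from any cited reference.

```python
def filter_schedule(base=64, blocks=4):
    """Filter count per encoder block when each block doubles the previous one."""
    return [base * 2 ** i for i in range(blocks)]

print(filter_schedule())  # [64, 128, 256, 512]
```

Doubling the filters while the spatial resolution halves keeps the representational capacity roughly balanced from the first layer block to the last.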
Regarding claim 18, Xia, as modified by Han, disclose the claimed invention except for wherein UNet model is trained with a batch size of 4.
In related art, Fuchs discloses UNet model is trained with a batch size of 4. (Fuchs: ¶110 “The hyperparameters used for training were:…batch size 30” ¶180 ”Training hyperparameters were:… batch size 70” Fuchs discloses a customizable hyperparameter determining the batch size that could be set to 4)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate a specified batch training size disclosed by Fuchs into the method for synthesizing magnetic resonance angiography (MRA) images from other magnetic resonance images disclosed by Xia, as modified by Han, to observe how changeable parameters such as batch size affect the training and performance of a machine learning model.
Claim(s) 15 is rejected under 35 U.S.C. 103 as being unpatentable over Xia (US 20250078290 A1) in view of Han (US 11699281 B2) and in further view of Wang (US 20210225027 A1).
Regarding claim 15, Xia, as modified by Han, disclose the claimed invention except for wherein the up-sampling layer of the decoder is adapted to perform nearest-neighbor interpolation to increase image size through each layer block within the decoder.
In related art, Wang discloses the up-sampling layer of the decoder is adapted to perform nearest-neighbor interpolation to increase image size through each layer block within the decoder. (Wang: ¶279 “There are a plurality of sampling methods, such as nearest neighbor interpolation…All of the methods can be used in the above operations of up-sampling and down-sampling.”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate nearest neighbor interpolation as a method of up-sampling data disclosed by Wang into the method for synthesizing magnetic resonance angiography (MRA) images from other magnetic resonance images disclosed by Xia, as modified by Han, to restore the dimensionality of the feature maps manipulated by the model.
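As an illustrative aside (not part of the prosecution record), nearest-neighbor interpolation as cited from Wang ¶279 amounts to repeating each pixel along every axis. The 2D sketch below doubles a feature map's size; the function name and input values are hypothetical.

```python
def nearest_neighbor_upsample(image, factor=2):
    """Repeat each pixel `factor` times along both axes of a 2D map."""
    out = []
    for row in image:
        wide = [v for v in row for _ in range(factor)]  # repeat across columns
        out.extend([wide[:] for _ in range(factor)])    # repeat across rows
    return out

print(nearest_neighbor_upsample([[1, 2], [3, 4]]))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

Each up-sampling layer block in the decoder applies such an operation to step the feature maps back up toward the input image's resolution.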
Allowable Subject Matter
Claims 16 and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Laub (US 20200405175 A1) discloses a model-based MRI image reconstruction technique. The model-based reconstruction technique increases the performance of Time-of-Flight MRA. In a learning phase, a model is calculated from a sufficiently large set of data acquired at both low and high magnetic fields, using deep learning strategies. In a clinical phase, the model is applied to measured data, generating high MR image quality.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL KIM MAIDEN whose telephone number is (703)756-1264. The examiner can normally be reached Monday - Friday 7:30 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen Koziol, can be reached at 408-918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL KIM MAIDEN/Examiner, Art Unit 2665
/Stephen R Koziol/Supervisory Patent Examiner, Art Unit 2665