DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Specification
The disclosure is objected to because of the following informalities: the phrase “the generated images 51 generated and” in paragraph [0052] is grammatically incorrect. Appropriate correction is required.
35 USC § 112(f) Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claims 3 and 11 use the words “means for”.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “a generative model building part”, “a process condition prediction model building part”, and “an image generation/prediction part” in claims 1-15; “a property prediction model building part” in claims 6-15; “an aimed image acquisition part” in claims 7-9; and “a visualization part” in claim 9.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 17 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent eligible subject matter because a storage medium may encompass a carrier wave or a signal per se, which are non-statutory (MPEP 2106, subsection I). Claim 17 recites “a storage medium,” which may encompass transitory media such as carrier waves.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 6-11, and 14-17 are rejected under 35 U.S.C. 103 as being unpatentable over Souly (US 20230051237 A1) in view of Sardeshmukh (US 20220343116 A1).
Regarding claim 1, Souly discloses a prediction apparatus (Fig. 1; [0017]: the system architecture 100 includes network 105, a material analysis system 110, computing resources 120, and storage resources 130; [0018]: computing devices; CPUs; processors; [0021-0022]: hard disk drives (HDDs), solid state drives (SSD), hybrid drives, storage area networks, storage arrays, etc.; Fig. 2; [0034]: the material analysis system 110 determines one or more material properties 240, e.g., one or more properties of a material such as specific energy, specific power, etc., based on the images 205; Fig. 6; [0068-0069]: determine properties of a material; the process 600 receives the set/sequence of images or may retrieve/access the set/sequence of images from a data storage device) comprising:
a generative model building part that uses microstructure images, which are acquired from a material, as learning data to build a generative model for generating a microstructure image of the material ([0024]: a generative adversarial network, i.e., GAN, generates the images of the structure/microstructure of a material based on existing images of the structure/microstructure of an existing material; the images of the volume of material are captured by imaging devices such as microscopes, electron microscopes, etc; [0028]: the material analysis system 110 determines the set of features based on the set or sequence of images using multiple machine learning models, e.g., multiple neural networks, multiple CNNs, etc.; [0032]: train and build the machine learning models, transformer networks, and crossmodal transformer networks simultaneously using the same training data.);
a process condition prediction model building part that builds a process condition prediction model by learning data of process conditions paired with the microstructure images of the material ([0028]: the material analysis system 110 determines the set of features based on the set or sequence of images using multiple machine learning models, e.g., multiple neural networks, multiple CNNs, etc.; Fig. 2; [0034]: the material analysis system 110 determines one or more material properties 240, e.g., one or more properties of a material such as specific energy, specific power, etc., based on the images 205;
[0035]: the neural network is a computing model that is used to determine a feature in input data through various computations; Fig. 2; [0036]: a deep neural network; Fig. 2; [0045]: the fully connected layers 230 regresses the transformed features generated by the transformer network 220 to determine, generate, obtain, etc., the material properties 240, e.g., one or more properties of a material, such as specific power, specific energy, etc.; Fig. 3; [0046]: the material analysis system 110 determines one or more material properties 340, e.g., one or more properties of a material such as specific energy, specific power, etc., based on the images 305A and 305B.), the process condition prediction model being a regression model for predicting process conditions for any microstructure images (Fig. 2; [0038]: recurrent neural networks (RNNs) or long short-term memory (LSTM) networks process sequential data; [0039]: process a sequence of images and pay attention to different portions of the sequential images and learn from the context of those portions of the sequential images; [0040]: transformer network; Fig. 2; [0045]: the fully connected layers 230 regresses the transformed features generated by the transformer network 220 to determine, generate, obtain, etc., the material properties 240, e.g., one or more properties of a material, such as specific power, specific energy, etc.; Fig. 3; [0058]: the fully connected layers 330 regresses the concatenated crossmodal features to determine, generate, obtain, etc., the material properties 340); and
an image generation/prediction part that generates a microstructure image of a material by inputting sampled variables into the generative model built by the generative model building part ([0024]: a generative adversarial network, i.e., GAN, generates the images of the structure/microstructure of a material based on existing images of the structure/microstructure of an existing material; the images of the volume of material are captured by imaging devices such as microscopes, electron microscopes, etc; [0028]: the material analysis system 110 determines the set of features based on the set or sequence of images using multiple machine learning models, e.g., multiple neural networks, multiple CNNs, etc.; [0045]: each of the edges are assigned and/or associated with a weight; Fig. 3; [0049]: the features 315A that are obtained, determined, etc., by the machine learning model 310A are provided to the transformer network 320A as an input.), and enters the generated microstructure image of the material into the process condition prediction model built by the process condition prediction model building part ([0028]: the material analysis system 110 determines the set of features based on the set or sequence of images using multiple machine learning models, e.g., multiple neural networks, multiple CNNs, etc.; Fig. 2; [0036]: a deep neural network;
Fig. 2; [0045]: the fully connected layers 230 regresses the transformed features generated by the transformer network 220 to determine, generate, obtain, etc., the material properties 240, e.g., one or more properties of a material, such as specific power, specific energy, etc.; Fig. 3; [0046]: the material analysis system 110 determines one or more material properties 340, e.g., one or more properties of a material such as specific energy, specific power, etc., based on the images 305A and 305B; Fig. 3; [0058]: the fully connected layers 330 regresses the concatenated crossmodal features to determine, generate, obtain, etc., the material properties 340) so as to generate a microstructure image of the material and, at the same time, predict process conditions for the microstructure image ([0015]: the material analysis system uses images of the microstructure of the material to automatically determine or predict the properties of the material; [0024]: a generative adversarial network, i.e., GAN, generates the images of the structure/microstructure of a material based on existing images of the structure/microstructure of an existing material; [0028]: the material analysis system 110 determines the set of features based on the set or sequence of images using multiple machine learning models, e.g., multiple neural networks, multiple CNNs, etc.; [0041]: the decoder layer may hide future outputs to ensure that a prediction made at a time X only depends on what is known prior to time X; Fig. 3; [0046]: the material analysis system 110 determines one or more material properties 340, e.g., one or more properties of a material such as specific energy, specific power, etc., based on the images 305A and 305B).
Souly fails to explicitly disclose that the variables are latent variables.
In the same field of endeavor, Sardeshmukh teaches:
the variables are latent variables (Fig. 3; [0041-0042]: generate a reconstructed microstructure output image for the input microstructure image; [0047]: generate the latent vector in the latent space; [0048]: the latent vector of each train cropped microstructure image is passed to the decoder unit 302b of the variational autoencoder 302).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Souly such that the variables are latent variables, as taught by Sardeshmukh. The motivation for doing so would have been to generate a reconstructed microstructure output image for the input microstructure image; to generate the latent vector in the latent space; to pass the latent vector of each train cropped microstructure image to the decoder unit 302b of the variational autoencoder 302; to effectively generate the synthetic microstructure images with the desired features; and to enable the trained variational autoencoder to generate the synthetic microstructure images more accurately, as taught by Sardeshmukh in paragraphs [0041-0042], [0047-0048], and [0083].
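Examiner’s note (illustration only): the following is a minimal, hypothetical sketch of the combined teaching, i.e., sampling latent variables, decoding them into a microstructure image with a generative model, and regressing process conditions from the generated image. The PyTorch-style decoder, regressor, layer sizes, and number of process conditions are assumptions, not the cited references’ code.

    # Minimal sketch (assumed PyTorch-style stand-ins, not the cited references' code):
    # sample latent variables, decode a microstructure image, predict process conditions.
    import torch
    import torch.nn as nn

    latent_dim = 64

    decoder = nn.Sequential(                 # stands in for a trained VAE/GAN decoder
        nn.Linear(latent_dim, 128), nn.ReLU(),
        nn.Linear(128, 64 * 64), nn.Sigmoid(),
    )
    regressor = nn.Sequential(               # stands in for the process condition prediction model
        nn.Linear(64 * 64, 128), nn.ReLU(),
        nn.Linear(128, 3),                   # e.g., three process conditions
    )

    z = torch.randn(1, latent_dim)           # sampled latent variables
    image = decoder(z)                       # generated microstructure image (flattened)
    conditions = regressor(image)            # predicted process conditions for that image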
Regarding claim 2, Souly in view of Sardeshmukh discloses the prediction apparatus according to claim 1, wherein
epochs for the generative model building part and the process condition prediction model building part are decided based on an accuracy of reconstruction of microstructure images generated by the generative model and distribution of process conditions predicted by the process condition prediction model (Sardeshmukh; Fig. 6; [0052]: calculate a style loss between the train cropped microstructure image and the corresponding reconstructed cropped microstructure image; [0081]: if the average texture similarity score is less than the predefined threshold, then the trained variational autoencoder is retrained until the average texture similarity score is greater than or equal to the predefined threshold; the re-training takes additional time).
The same motivation as for claim 1 applies here.
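Examiner’s note (illustration only): a minimal, hypothetical sketch of deciding the number of epochs from reconstruction quality, in the spirit of the threshold-based retraining described in Sardeshmukh’s paragraph [0081]; train_step and reconstruction_score are assumed callables supplied by the caller.

    # Minimal sketch: train until the reconstruction quality (e.g., an average texture
    # similarity score) meets a predefined threshold; the epoch at which it first does
    # is taken as the decided number of epochs. Hypothetical callables.
    def decide_epochs(train_step, reconstruction_score, max_epochs=200, threshold=0.9):
        for epoch in range(1, max_epochs + 1):
            train_step()                             # one training pass over the data
            if reconstruction_score() >= threshold:  # reconstructions are good enough
                return epoch                         # decided epoch count
        return max_epochs                            # stop at the epoch budget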
Regarding claim 3, Souly in view of Sardeshmukh discloses the prediction apparatus according to claim 2, the prediction apparatus further comprising:
adjusting means for adjusting the epochs for the generative model building part and the process condition prediction model building part (Sardeshmukh; [0081]: if the average texture similarity score is less than the predefined threshold, then the trained variational autoencoder is retrained until the average texture similarity score is greater than or equal to the predefined threshold; the re-training takes additional time; the predefined threshold is adjustable).
The same motivation as for claim 1 applies here.
Regarding claim 6, Souly in view of Sardeshmukh discloses the prediction apparatus according to claim 1, the prediction apparatus further comprising:
a property prediction model building part that builds a property prediction model, which is a regression model that predicts material properties of any microstructure images by learning data of material properties paired with the microstructure images of the material (Souly; Fig. 2; [0045]: the fully connected layers 230 regresses the transformed features, generated by the transformer network 220, to determine, generate, obtain, etc., the material properties 240; Fig. 3; [0058]: the fully connected layers 330 regresses the concatenated crossmodal features to determine, generate, obtain, etc., the material properties 340), wherein
the image generation/prediction part predicts material properties of the microstructure image of the material generated by using the property prediction model (Souly; Fig. 2; [0045]: the fully connected layers 230 regresses the transformed features, generated by the transformer network 220, to determine, generate, obtain, etc., the material properties 240; Fig. 3; [0046]: the material analysis system 110 determines one or more material properties 340, e.g., one or more properties of a material such as specific energy, specific power, etc., based on the images 305A and 305B; [0048]: obtain, generate, and determine features, e.g., visual features, of the images 205; [0051]: the crossmodal attention data allows the transformer network 350A to identify, determine, and/or focus on relevant information from both the images 305A and 305B based on what the transformer network 350A is currently processing.).
Regarding claim 7, Souly in view of Sardeshmukh discloses the prediction apparatus according to claim 6, the prediction apparatus further comprising:
an aimed image acquisition part that acquires, from microstructure images of the material generated by the image generation/prediction part, a microstructure image of the material having aimed material properties (Sardeshmukh; Fig. 2C; [0072]: generate synthetic microstructure images with the desired feature, i.e., an aimed feature; the desired feature is one among the plurality of predefined features that are classified during the interpretation of the trained variational autoencoder; [0074]: generate the synthetic microstructure images with the desired feature, i.e., an aimed feature).
The same motivation as for claim 1 applies here.
Regarding claim 8, Souly in view of Sardeshmukh discloses the prediction apparatus according to claim 7, wherein
the aimed image acquisition part decides whether the material properties, which are predicted by the image generation/prediction part, of the microstructure image generated by the generative model satisfy predetermined aimed conditions or not (Sardeshmukh; [0079]: generate a reconstructed validation microstructure image for each validation cropped microstructure image of the plurality of validation cropped microstructure images using the trained variational autoencoder; determine a texture similarity score for each validation cropped microstructure image of the plurality of validation cropped microstructure images; [0081]: if the average texture similarity score is less than the predefined threshold, then the trained variational autoencoder is retrained until the average texture similarity score is greater than or equal to the predefined threshold; the re-training takes additional time); and
if the material properties satisfy the predetermined aimed conditions, the generated microstructure image is taken as the microstructure image of the material having the aimed material properties (Sardeshmukh; Fig. 2E; [0074]: generate the synthetic microstructure images with the desired feature, for the test microstructure image of the material; [0081]: if the average texture similarity score is less than the predefined threshold, then the trained variational autoencoder is retrained until the average texture similarity score is greater than or equal to the predefined threshold; if it is greater than or equal to the predefined threshold, the properties satisfy the aimed conditions).
The same motivation as for claim 1 applies here.
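Examiner’s note (illustration only): a minimal, hypothetical sketch of the accept/reject loop described for claim 8, i.e., generating candidate images until the predicted properties satisfy the predetermined aimed conditions; generate_image, predict_properties, and meets_aim are assumed callables.

    # Minimal sketch: keep generating until the predicted material properties
    # satisfy the predetermined aimed conditions. Hypothetical callables.
    def acquire_aimed_image(generate_image, predict_properties, meets_aim, max_tries=1000):
        for _ in range(max_tries):
            image = generate_image()            # decode a freshly sampled latent vector
            props = predict_properties(image)   # property prediction model output
            if meets_aim(props):                # predetermined aimed conditions met
                return image, props             # take this as the aimed image
        return None, None                       # no satisfactory image within budget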
Regarding claim 9, Souly in view of Sardeshmukh discloses the prediction apparatus according to claim 8, the prediction apparatus further comprising:
a visualization part that visualizes prediction results of the process conditions and the material properties of the microstructure image of the material having the aimed material properties (Sardeshmukh; Fig. 2E; [0074]: generate the synthetic microstructure images with the desired feature, for the test microstructure image of the material; [0079]: generate a reconstructed validation microstructure image for each validation cropped microstructure image of the plurality of validation cropped microstructure images using the trained variational autoencoder.).
The same motivation as for claim 1 applies here.
Regarding claim 10, Souly in view of Sardeshmukh discloses the prediction apparatus according to claim 6, wherein
epochs for the generative model building part and the property prediction model building part are decided based on an accuracy of reconstruction of microstructure images generated by the generative model and distribution of material properties predicted by the property prediction model (Sardeshmukh; Fig. 6; [0052]: calculate a style loss between the train cropped microstructure image and the corresponding reconstructed cropped microstructure image; [0081]: if the average texture similarity score is less than the predefined threshold, then the trained variational autoencoder is retrained until the average texture similarity score is greater than or equal to the predefined threshold; the re-training takes additional time).
The same motivation as for claim 1 applies here.
Regarding claim 11, Souly in view of Sardeshmukh discloses the prediction apparatus according to claim 10, the prediction apparatus further comprising:
adjusting means for adjusting the epochs for the generative model building part and the property prediction model building part (Sardeshmukh; [0081]: if the average texture similarity score is less than the predefined threshold, then the trained variational autoencoder is retrained until the average texture similarity score is greater than or equal to the predefined threshold; the re-training takes additional time; the predefined threshold is adjustable).
Regarding claim 14, Souly in view of Sardeshmukh discloses the prediction apparatus according to claim 6, wherein
the image generation/prediction part generates more microstructure images that includes microstructures strongly related to the aimed material properties, or less microstructure images that includes microstructures weakly related to the aimed material properties, than in a case in which microstructure images are randomly generated (Sardeshmukh; [0047]: allow random sampling from the Gaussian probability distribution to generate the latent vector in the latent space; [0050]: microstructures are the type texture images; they contain randomly repeated patterns such as spheres, lines, and so on; [0059]: the graphite appears in the form of flakes, like black lines growing in random directions; Fig. 2C; [0072]: generate synthetic microstructure images with the desired feature; the desired feature is one among the plurality of predefined features that are classified during the interpretation of the trained variational autoencoder; [0074]: generate the synthetic microstructure images with the desired feature).
The same motivation as for claim 1 applies here.
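Examiner’s note (illustration only): a minimal, hypothetical sketch of skewing generation toward microstructures related to the aimed properties by sampling near latent vectors of images already known to score well, rather than sampling the latent space uniformly at random; the names and the Gaussian-perturbation scheme are assumptions.

    # Minimal sketch: with probability p_biased, sample near a latent vector whose
    # decoded image related strongly to the aimed properties; otherwise sample randomly.
    import random

    def biased_latent_sample(good_latents, dim=64, spread=0.1, p_biased=0.8):
        if good_latents and random.random() < p_biased:
            center = random.choice(good_latents)              # a latent that scored well
            return [c + random.gauss(0.0, spread) for c in center]
        return [random.gauss(0.0, 1.0) for _ in range(dim)]   # occasional random draw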
Regarding claim 15, Souly in view of Sardeshmukh discloses the prediction apparatus according to claim 6, wherein
distribution of the material properties of microstructure images generated by the image generation/prediction part has deviation (Sardeshmukh; [0079]: generate a reconstructed validation microstructure image for each validation cropped microstructure image of the plurality of validation cropped microstructure images using the trained variational autoencoder; [0081]: average of the texture similarity scores of the plurality of validation cropped microstructure images).
Regarding claim 16, Souly discloses a prediction method run on a computer, the method comprising steps of (Fig. 1; [0017]: the system architecture 100 includes network 105, a material analysis system 110, computing resources 120, and storage resources 130; [0018]: computing devices; CPUs; processors; [0021-0022]: hard disk drives (HDDs), solid state drives (SSD), hybrid drives, storage area networks, storage arrays, etc.; Fig. 2; [0034]: the material analysis system 110 determines one or more material properties 240, e.g., one or more properties of a material such as specific energy, specific power, etc., based on the images 205; Fig. 6; [0068-0069]: determine properties of a material; the process 600 receives the set/sequence of images or may retrieve/access the set/sequence of images from a data storage device):
The remaining claim limitations are similar to those recited in claim 1. Therefore, the same rationale used to reject claim 1 is also used to reject claim 16.
Regarding claim 17, Souly discloses a storage medium for storing a program that causes a computer to function as (Fig. 1; [0017]: the system architecture 100 includes network 105, a material analysis system 110, computing resources 120, and storage resources 130; [0018]: computing devices; CPUs; processors; [0021-0022]: hard disk drives (HDDs), solid state drives (SSD), hybrid drives, storage area networks, storage arrays, etc.; Fig. 2; [0034]: the material analysis system 110 determines one or more material properties 240, e.g., one or more properties of a material such as specific energy, specific power, etc., based on the images 205; Fig. 6; [0068-0069]: determine properties of a material; the process 600 receives the set/sequence of images or may retrieve/access the set/sequence of images from a data storage device):
The remaining claim limitations are similar to those recited in claim 1. Therefore, the same rationale used to reject claim 1 is also used to reject claim 17.
Claims 4-5 and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Souly (US 20230051237 A1) in view of Sardeshmukh (US 20220343116 A1), and further in view of Nene (US 20240378694 A1).
Regarding claim 4, Souly in view of Sardeshmukh discloses the prediction apparatus according to claim 1, wherein
the image generation/prediction part enters an image obtained by applying process on the microstructure image of the material generated by the generative model into the process condition prediction model to predict process conditions (Souly; [0015]: the material analysis system uses images of the microstructure of the material to automatically determine or predict the properties of the material; [0024]: a generative adversarial network, i.e., GAN, generates the images of the structure/microstructure of a material based on existing images of the structure/microstructure of an existing material; the images of the volume of material are captured by imaging devices such as microscopes, electron microscopes, etc; Fig. 2; [0034]: the material analysis system 110 determines one or more material properties 240, e.g., one or more properties of a material such as specific energy, specific power, etc., based on the images 205;
Fig. 2; [0036]: a deep neural network; Fig. 3; [0046]).
Souly in view of Sardeshmukh fails to explicitly disclose that the process is a super-resolution process.
In the same field of endeavor, Nene teaches a super-resolution process ([0031]: the super-resolution model is trained by applying the super-resolution model to the training data; [0038]: the super-resolution processing is performed by a super-resolution machine learning model; Fig. 2; [0039-0040]: a super-resolution processing flow; [0062]: perform super-resolution on low-resolution images to generate high-resolution images).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Souly in view of Sardeshmukh to include a super-resolution process as taught by Nene. The motivation for doing so would have been to perform the upscaling with a super-resolution model; to generate a corresponding high-resolution image that omits aliasing and jitter artifacts; and to perform super-resolution on low-resolution images to generate high-resolution images, as taught by Nene in paragraphs [0040-0041] and [0062].
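Examiner’s note (illustration only): a minimal, hypothetical sketch of the combination’s flow for claim 4, with a super-resolution step applied to the generated image before it enters the prediction model; upscale and predict are assumed stand-ins for trained models.

    # Minimal sketch: apply super-resolution to a generated microstructure image,
    # then enter the upscaled image into the prediction model. Hypothetical callables.
    def predict_from_generated(image, upscale, predict, scale=4):
        hi_res = upscale(image, scale)   # super-resolution on the generated image
        return predict(hi_res)           # prediction from the higher-resolution image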
Regarding claim 5, Souly in view of Sardeshmukh discloses the prediction apparatus according to claim 1, wherein
the process condition prediction model building part uses, as the microstructure image of the material to be learned, an image obtained by applying process on the image rebuilt from the microstructure image of the material by the generative model (Souly; [0024]: a generative adversarial network, i.e., GAN, generates the images of the structure/microstructure of a material based on existing images of the structure/microstructure of an existing material; the images of the volume of material are captured by imaging devices such as microscopes, electron microscopes, etc; [0028]: the material analysis system 110 determines the set of features based on the set or sequence of images using multiple machine learning models, e.g., multiple neural networks, multiple CNNs, etc.; Fig. 2; [0034]: the material analysis system 110 determines one or more material properties 240, e.g., one or more properties of a material such as specific energy, specific power, etc., based on the images 205;
Fig. 2; [0036]: a deep neural network; Fig. 3; [0046]).
Souly in view of Sardeshmukh fails to explicitly disclose that the process is a super-resolution process.
In the same field of endeavor, Nene teaches a super-resolution process ([0031]: the super-resolution model is trained by applying the super-resolution model to the training data; [0038]: the super-resolution processing is performed by a super-resolution machine learning model; Fig. 2; [0039-0040]: a super-resolution processing flow; [0062]: perform super-resolution on low-resolution images to generate high-resolution images).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Souly in view of Sardeshmukh to include a super-resolution process as taught by Nene. The motivation for doing so would have been to perform the upscaling with a super-resolution model; to generate a corresponding high-resolution image that omits aliasing and jitter artifacts; and to perform super-resolution on low-resolution images to generate high-resolution images, as taught by Nene in paragraphs [0040-0041] and [0062].
Regarding claim 12, Souly in view of Sardeshmukh discloses the prediction apparatus according to claim 6, wherein
the image generation/prediction part enters an image obtained by applying process on the microstructure image of the material generated by the generative model into the property prediction model to predict material properties (Souly; [0024]: a generative adversarial network, i.e., GAN, generates the images of the structure/microstructure of a material based on existing images of the structure/microstructure of an existing material; the images of the volume of material are captured by imaging devices such as microscopes, electron microscopes, etc; [0028]: the material analysis system 110 determines the set of features based on the set or sequence of images using multiple machine learning models, e.g., multiple neural networks, multiple CNNs, etc.; Fig. 2; [0034]: the material analysis system 110 determines one or more material properties 240, e.g., one or more properties of a material such as specific energy, specific power, etc., based on the images 205;
Fig. 2; [0036]: a deep neural network).
Souly in view of Sardeshmukh fails to explicitly disclose that the process is a super-resolution process.
In the same field of endeavor, Nene teaches a super-resolution process ([0031]: the super-resolution model is trained by applying the super-resolution model to the training data; [0038]: the super-resolution processing is performed by a super-resolution machine learning model; Fig. 2; [0039-0040]: a super-resolution processing flow; [0062]: perform super-resolution on low-resolution images to generate high-resolution images).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Souly in view of Sardeshmukh to include a super-resolution process as taught by Nene. The motivation for doing so would have been to perform the upscaling with a super-resolution model; to generate a corresponding high-resolution image that omits aliasing and jitter artifacts; and to perform super-resolution on low-resolution images to generate high-resolution images, as taught by Nene in paragraphs [0040-0041] and [0062].
Regarding claim 13, Souly in view of Sardeshmukh discloses the prediction apparatus according to claim 6, wherein
the property prediction model building part uses, as the microstructure image of the material to be learned, an image obtained by applying process on the image rebuilt from the microstructure image of the material by the generative model (Souly; [0024]: a generative adversarial network, i.e., GAN, generates the images of the structure/microstructure of a material based on existing images of the structure/microstructure of an existing material; the images of the volume of material are captured by imaging devices such as microscopes, electron microscopes, etc.; [0028]: the material analysis system 110 determines the set of features based on the set or sequence of images using multiple machine learning models, e.g., multiple neural networks, multiple CNNs, etc.; Fig. 2; [0034]: the material analysis system 110 determines one or more material properties 240, e.g., one or more properties of a material such as specific energy, specific power, etc., based on the images 205; Fig. 2; [0036]: a deep neural network; Fig. 3; [0046]).
Souly in view of Sardeshmukh fails to explicitly disclose that the process is a super-resolution process.
In the same field of endeavor, Nene teaches a super-resolution process ([0031]: the super-resolution model is trained by applying the super-resolution model to the training data; [0038]: the super-resolution processing is performed by a super-resolution machine learning model; Fig. 2; [0039-0040]: a super-resolution processing flow; [0062]: perform super-resolution on low-resolution images to generate high-resolution images).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Souly in view of Sardeshmukh to include a super-resolution process as taught by Nene. The motivation for doing so would have been to perform the upscaling with a super-resolution model; to generate a corresponding high-resolution image that omits aliasing and jitter artifacts; and to perform super-resolution on low-resolution images to generate high-resolution images, as taught by Nene in paragraphs [0040-0041] and [0062].
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Hai Tao Sun whose telephone number is (571) 272-5630. The examiner can normally be reached 9:00 AM - 6:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Hajnik, can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HAI TAO SUN/Primary Examiner, Art Unit 2616