DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 05/19/2023 and 04/07/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Office Action Summary
Claim(s) 15 and 16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Claim(s) 1-2, 5, and 10 are interpreted under 35 U.S.C. 112(f).
Claim(s) 1-16 are rejected under 35 U.S.C. 103 as being unpatentable over Lu et al (US 2019/0287230 A1) in view of Zhang et al (US 2017/0345140 A1).
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 15 and 16 are directed to non-statutory subject matter. The broadest reasonable interpretation of each claim in light of the specification concludes that the claim as a whole covers a transitory signal, since the definition of “detection program and learning program” leaves open the possibility that the medium could be transitory. Paragraph [0073] of the specification discloses “[…] an abnormality detection program and a learning program may be provided with, for example, a computer-readable recording medium such as a USB memory or a digital versatile disc (DVD)-ROM [...]”. This leaves open the possibility that the storage medium of claims 15 and 16 could be transitory. The Examiner suggests amending the claims to recite a non-transitory computer-readable storage medium. Appropriate correction is required.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “input unit” in claim(s) 1-2, 5, and 10 and “learning unit” in claim(s) 10.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-16 are rejected under 35 U.S.C. 103 as being unpatentable over Lu et al (US 2019/0287230 A1) in view of Zhang et al (US 2017/0345140 A1).
Regarding claim(s) 1, Lu teaches an abnormality detection system for detecting a visual defect of an object, the abnormality detection system comprising:
an input unit that acquires inspection images of a target object (Figure 2; and Paragraph [0011]: “The wafer inspection tool is configured to generate images of a wafer, and includes an electron beam source and a detector. The processor operates a model configured to find one or more anomalies in the image”),
a feature extractor that is previously learned to extract a feature map from training images including a non-defective image of the target object (Figure 3: encoder and decoder; Paragraph [0062]: “Some machine learning feature vectors are extracted from the defect-free training images”; and Paragraph [0066]: “Only nominal patterns may be used to train the model to detect anomalies […] only clean SEM images may be needed”);
an image generator (read as “g(z)”) that is previously learned to restore the training images (read as “reconstructed data x̂”) from the feature map (read as “z”) extracted by the feature extractor (Figure 5; Paragraph [0052]: “For encoding, f(x) stands for an encoder mapping from x to z, […] For decoding, g(z) represents the complex decoding process that results in the reconstructed data x̂, which is modeled in the structure of a neural network similar as encoder”; Paragraph [0066]-[0068]; and Paragraph [0071]: “FIG. 5 illustrates input and reconstructed SEM patches with an autoencoder”); and
a detector that detects an abnormality of the target object, based on a similarity calculated by comparing inspection image of the target object which is an inspection target, the inspection image being input to the input unit (Paragraph [0048]: “At 104, a presence of one or more anomalies in an image is determined using the model [...] a difference between reconstructed and original SEM images may be calculated at 104 to locate the anomaly patterns (e.g., defects)”; Paragraph [0052]: “For encoding, f(x) stands for an encoder mapping from x to z, […] For decoding, g(z) represents the complex decoding process that results in the reconstructed data x̂, which is modeled in the structure of a neural network similar as encoder”; Paragraph [0066]-[0068]; Paragraph [0072]: “FIG. 6 a graph of reconstruction errors. A threshold is used to distinguish anomaly from nominal.”; and Paragraph [0074]: “The reconstruction error can be defined as the difference between the original input vector x and the reconstruction x̂ as in Eq. 4”).
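For illustration only, the thresholded reconstruction-error comparison cited from Lu's Paragraphs [0048], [0072], and [0074] can be sketched as follows; the vectors, threshold value, and squared-error metric below are assumptions for the sketch and are not drawn from Lu's disclosure:

```python
# Sketch of reconstruction-error anomaly detection: the error is the
# difference between the input x and the autoencoder reconstruction x_hat,
# and a threshold distinguishes anomaly from nominal. Values illustrative.

def reconstruction_error(x, x_hat):
    """Squared L2 difference between input and reconstruction."""
    return sum((xi - xh) ** 2 for xi, xh in zip(x, x_hat))

def is_anomaly(x, x_hat, threshold):
    """Flag the input as anomalous when the error exceeds the threshold."""
    return reconstruction_error(x, x_hat) > threshold

# A nominal patch reconstructs closely; a defective one does not.
nominal_hat = [0.21, 0.49, 0.88]   # close reconstruction of [0.2, 0.5, 0.9]
defect_hat = [0.7, 0.1, 0.3]       # poor reconstruction of the same input

print(is_anomaly([0.2, 0.5, 0.9], nominal_hat, threshold=0.01))  # False
print(is_anomaly([0.2, 0.5, 0.9], defect_hat, threshold=0.01))   # True
```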
Lu fails to teach an input unit that acquires inspection images of a target object, the inspection images having different image sizes each of which is equal to or more than a predetermined size. However, Zhang teaches an input unit that acquires inspection images of a target object, the inspection images having different image sizes each of which is equal to or more than a predetermined size (Paragraph [0080] – [0081]: “once the input image is input to the methods and systems described herein it is not cropped. In addition, the input image used in the embodiments described herein is not cropped out of a larger image […] the neural network may be configured as fully convolutional neural network, which refers to the configuration in which each layer type has no assumption on specific input size and the entire network can operate on arbitrarily sized input for both training and inference”; and Paragraph [0104]: “given a set of arbitrarily sized training images (each of them may be no less than the designed minimum size, i.e., 64 pixels by 64 pixels in the example shown in FIG. 3a), at the training step, a random batch of images are selected”).
It would have been obvious to one of ordinary skill in the art to modify the abnormality-detection autoencoder system of Lu, which teaches acquiring inspection images of a target object, extracting a feature representation using a convolutional encoder trained only on non-defective images, reconstructing the inspection image using a decoder, and detecting an abnormality based on the difference (i.e., inverse similarity) between the inspection image and the restored image, in view of Zhang, which teaches a fully-convolutional encoder–decoder network comprising stacked convolution layers without any fully-connected or GAP layers, capable of processing arbitrarily sized input images while preserving spatial information and producing proportionally scaled feature maps that downsample at a predictable ratio (1/2 per convolution layer). Zhang's architecture would have been an obvious design choice to apply to Lu's system in order to eliminate input-size constraints, permit processing of different-sized inspection images, maintain spatial structure in the feature maps, and achieve scalable encoder/decoder behavior, advantages that would predictably improve the flexibility and performance of Lu's reconstruction-based abnormality detector. Thus, combining Zhang's size-adaptive fully-convolutional network with Lu's anomaly detection framework would have been a routine substitution of known CNN architectural components yielding predictable benefits, rendering the claimed invention obvious.
Regarding claim(s) 2, Lu as modified by Zhang teaches the abnormality detection system according to claim 1, where Lu teaches wherein the detector is set to detect the abnormality of the target object at a degree of accuracy equal to or more than a certain level (Paragraph [0048]: “At 104, a presence of one or more anomalies in an image is determined using the model. Threshold reconstruction errors or probabilities can be used to find an anomaly patch or region in the image. For example, a difference between reconstructed and original SEM images may be calculated at 104 to locate the anomaly patterns (e.g., defects)”), where Zhang teaches regardless of the image sizes of the inspection image input to the input unit (Paragraph [0015]: “The neural network does not include a fully connected layer thereby eliminating constraints on size of the image input to the two or more encoder layers”; and Paragraph [0080] – [0081]: “once the input image is input to the methods and systems described herein it is not cropped. In addition, the input image used in the embodiments described herein is not cropped out of a larger image […] the neural network may be configured as fully convolutional neural network, which refers to the configuration in which each layer type has no assumption on specific input size and the entire network can operate on arbitrarily sized input for both training and inference”).
Regarding claim(s) 3 and 11, Lu as modified by Zhang teaches the abnormality detection system according to claim 1, where Zhang teaches wherein the feature map extracted by the feature extractor has a size equal to or more than a size of 8 by 8 pixels (Figure 3a-3b; Paragraph [0015]: “The one or more components include a neural network that includes two or more encoder layers configured for determining features of an image. The neural network does not include a fully connected layer thereby eliminating constraints on size of the image input to the two or more encoder layers”; Paragraph [0083]: “FIGS. 3a and 3b illustrate how a currently used encoder-decoder network with fixed input size (i.e., 64 pixels by 64 pixels) may be converted to a fully convolutional network that enables arbitrary input size [...] In FIGS. 3a and 3b, the input and output dimensions are shown […] the format of (C, H, W)”; and Paragraph [0084]: “Input dimensions 300d of the input image are (1, 64, 64) [...] The input image is input to set 304 of convolutional and pooling layers, which generates output 304o (c1, 32, 32) that is input to set 306 of convolutional and pooling layers. Output 306o (c2, 16, 16) of set 306 is input to reshaping layer 310, which generates output 310o, which is input to fully connected layer 312, which generates representation 314 (512), that is input to the decoder portion of the currently used network”).
Regarding claim(s) 4 and 12, Lu as modified by Zhang teaches the abnormality detection system according to claim 3, where Zhang teaches wherein on condition that the sizes of the inspection image are indicated by M and the size of the feature map is indicated by N, the feature map extracted by the feature extractor (Figure 3a-3b; Paragraph [0015]; Paragraph [0083]; Paragraph [0084]; and Paragraph [0086]: “Input image 328 has input dimensions 328d of (1, 1024, 1024) […] Input image 328 is input to set 332 of convolutional and pooling layers that produces output 332o (c1, 512, 512), which is input to set 334 of convolutional and pooling layers that produces output 334o (c2, 256, 256). Output 334o is input, to set 336 of convolutional layers, which produces representation 338 (512, 241, 241)”) satisfies the following formula (1): N ≥ M × (1/2)^a … Formula (1), where M and N each represent a number of vertical or horizontal pixels, and a represents a number of convolution layers in the feature extractor (Figure 3a-3b; Paragraph [0084]: “Input dimensions 300d of the input image are (1, 64, 64) […] The input image is input to set 304 of convolutional and pooling layers, which generates output 304o (c1, 32, 32) that is input to set 306 of convolutional and pooling layers. Output 306o (c2, 16, 16) of set 306 is input to reshaping layer 310, […]”; and Paragraph [0086]: “Input image 328 has input dimensions 328d of (1, 1024, 1024). Encoder portion 330 includes sets 332 and 334 of convolutional and pooling layers and set 336 of convolutional layers. Input image 328 is input to set 332 of convolutional and pooling layers that produces output 332o (c1, 512, 512), which is input to set 334 of convolutional and pooling layers that produces output 334o (c2, 256, 256). Output 334o is input, to set 336 of convolutional layers, which produces representation 338 (512, 241, 241)”).
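For illustration only, formula (1) can be checked numerically against the dimensions Zhang recites in the cited Paragraphs [0084] and [0086] (a 64×64 input reduced to a 16×16 map, and a 1024×1024 input reduced to a 256×256 map, each through two halving convolution/pooling sets); the helper function name below is an assumption for the sketch:

```python
# Numeric check of formula (1): N >= M * (1/2)**a, where M is the input
# image size, N the feature-map size (vertical or horizontal pixels), and
# a the number of convolution layers that each halve the spatial size.

def satisfies_formula_1(M, N, a):
    return N >= M * (1 / 2) ** a

# Zhang's cited examples, each with a = 2 halving stages:
print(satisfies_formula_1(M=64, N=16, a=2))     # True: 16 >= 64 * 1/4
print(satisfies_formula_1(M=1024, N=256, a=2))  # True: 256 >= 1024 * 1/4
# A smaller feature map would violate the formula:
print(satisfies_formula_1(M=64, N=8, a=2))      # False: 8 < 16
```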
Regarding claim(s) 5, Lu as modified by Zhang teaches the abnormality detection system according to claim 3, where Zhang teaches wherein the size of the feature map extracted by the feature extractor is proportional to the sizes of the inspection image input to the input unit (Paragraph [0015]: “The one or more components include a neural network that includes two or more encoder layers configured for determining features of an image. The neural network does not include a fully connected layer thereby eliminating constraints on size of the image input to the two or more encoder layers”; Paragraph [0080] – [0081]: “once the input image is input to the methods and systems described herein it is not cropped. In addition, the input image used in the embodiments described herein is not cropped out of a larger image […] the neural network may be configured as fully convolutional neural network, which refers to the configuration in which each layer type has no assumption on specific input size and the entire network can operate on arbitrarily sized input for both training and inference”; Paragraph [0084]; and Paragraph [0086]).
Regarding claim(s) 6 and 13, Lu as modified by Zhang teaches the abnormality detection system according to claim 1, where Zhang teaches wherein the feature extractor extracts the feature map from which spatial information on an image is not lost (Paragraph [0076] – [0077]: “Often, a reshaping layer is performed before the fully connected layer to transform a 4-dimensional input (N, C, H, W) to 2-dimensional input (N, D) satisfying D=C*H*W […] the reshaping layer and fully connected layer can be replaced with a convolutional layer with VALID padding with kernel size of (C, O, H, W), and the resulting network can perform exact math on the arbitrary sized input.”; Paragraph [0083]: “FIGS. 3a and 3b illustrate how a currently used encoder-decoder network with fixed input size (i.e., 64 pixels by 64 pixels) may be converted to a fully convolutional network that enables arbitrary input size [...] In FIGS. 3a and 3b, the input and output dimensions are shown […] the format of (C, H, W)”; and Paragraph [0084]: “Input dimensions 300d of the input image are (1, 64, 64) [...] The input image is input to set 304 of convolutional and pooling layers, which generates output 304o (c1, 32, 32) that is input to set 306 of convolutional and pooling layers. Output 306o (c2, 16, 16) of set 306 is input to reshaping layer 310, which generates output 310o, which is input to fully connected layer 312, which generates representation 314 (512), that is input to the decoder portion of the currently used network”).
Regarding claim(s) 7 and 14, Lu as modified by Zhang teaches the abnormality detection system according to claim 6, where Zhang teaches wherein the feature extractor does not include a fully connected layer or a global average pooling (GAP) layer (Paragraph [0015]: “The one or more components include a neural network that includes two or more encoder layers configured for determining features of an image. The neural network does not include a fully connected layer thereby eliminating constraints on size of the image input to the two or more encoder layers”).
Regarding claim(s) 8, Lu as modified by Zhang teaches the abnormality detection system according to claim 1, where Zhang teaches wherein the feature extractor and the image generator each have a structure to be changed in accordance with the sizes of the input inspection image (Figure 3a-3b; Paragraph [0076] – [0077]: “[…] the neural network becomes independent of input image size, meaning that the neural network does not have an upper limit on the input image size like currently used neural networks […]”; Paragraph [0083]: “FIGS. 3a and 3b illustrate how a currently used encoder-decoder network with fixed input size (i.e., 64 pixels by 64 pixels) may be converted to a fully convolutional network that enables arbitrary input size [...] In FIGS. 3a and 3b, the input and output dimensions are shown […] the format of (C, H, W)”; and Paragraph [0084]: “Input dimensions 300d of the input image are (1, 64, 64) [...] The input image is input to set 304 of convolutional and pooling layers, which generates output 304o (c1, 32, 32) that is input to set 306 of convolutional and pooling layers. Output 306o (c2, 16, 16) of set 306 is input to reshaping layer 310, which generates output 310o, which is input to fully connected layer 312, which generates representation 314 (512), that is input to the decoder portion of the currently used network”).
Regarding claim(s) 9, Lu as modified by Zhang teaches the abnormality detection system according to claim 1, where Lu teaches wherein the inspection image is an image of an electronic circuit (Paragraph [0002]: “This disclosure relates to anomaly detection in images and, more particularly, to anomaly detection in scanning electron microscope images of semiconductor wafers”; and Paragraph [0005]: “Inspection has always been an important part of fabricating semiconductor devices such as integrated circuits (ICs)”).
Regarding claim(s) 10, Lu teaches a learning apparatus for learning a learning model that carries out abnormality detection of detecting a visual defect of an object, the learning model including a feature extractor and an image generator, the learning apparatus comprising:
an input unit that acquires training images including a non-defective image of a target object (Figure 2; Paragraph [0011]: “The wafer inspection tool is configured to generate images of a wafer, and includes an electron beam source and a detector. The processor operates a model configured to find one or more anomalies in the image”; and Paragraph [0066]: “Only nominal patterns may be used to train the model to detect anomalies […] only clean SEM images may be needed”);
the feature extractor that extracts a feature map, based on the training images input to the input unit (Figure 3: encoder and decoder; Paragraph [0062]: “Some machine learning feature vectors are extracted from the defect-free training images”; and Paragraph [0066]: “Only nominal patterns may be used to train the model to detect anomalies […] only clean SEM images may be needed”);
the image generator (read as “g(z)”) that generates restored image by restoring the training images (read as “reconstructed data x̂”) from the feature map (read as “z”) extracted by the feature extractor (Figure 5; Paragraph [0052]: “For encoding, f(x) stands for an encoder mapping from x to z, […] For decoding, g(z) represents the complex decoding process that results in the reconstructed data x̂, which is modeled in the structure of a neural network similar as encoder”; Paragraph [0066]-[0068]; and Paragraph [0071]: “FIG. 5 illustrates input and reconstructed SEM patches with an autoencoder”); and
a learning unit that updates parameters of the feature extractor and image generator, based on the training images and the restored images (Paragraph [0048]: “At 104, a presence of one or more anomalies in an image is determined using the model [...] a difference between reconstructed and original SEM images may be calculated at 104 to locate the anomaly patterns (e.g., defects)”; Paragraph [0052]: “For encoding, f(x) stands for an encoder mapping from x to z, […] For decoding, g(z) represents the complex decoding process that results in the reconstructed data x̂, which is modeled in the structure of a neural network similar as encoder”; Paragraph [0066]-[0068]; Paragraph [0072]: “FIG. 6 a graph of reconstruction errors. A threshold is used to distinguish anomaly from nominal.”; and Paragraph [0074]: “The reconstruction error can be defined as the difference between the original input vector x and the reconstruction x̂ as in Eq. 4”).
Lu fails to teach the training images input to the input unit have different image sizes each of which is equal to or more than a predetermined size.
However, Zhang teaches the training images input to the input unit having different image sizes each of which is equal to or more than a predetermined size (Paragraph [0080] – [0081]: “once the input image is input to the methods and systems described herein it is not cropped. In addition, the input image used in the embodiments described herein is not cropped out of a larger image […] the neural network may be configured as fully convolutional neural network, which refers to the configuration in which each layer type has no assumption on specific input size and the entire network can operate on arbitrarily sized input for both training and inference”; and Paragraph [0104]: “given a set of arbitrarily sized training images (each of them may be no less than the designed minimum size, i.e., 64 pixels by 64 pixels in the example shown in FIG. 3a), at the training step, a random batch of images are selected”).
It would have been obvious to one of ordinary skill in the art to modify the abnormality-detection autoencoder system of Lu, which teaches acquiring inspection images of a target object, extracting a feature representation using a convolutional encoder trained only on non-defective images, reconstructing the inspection image using a decoder, and detecting an abnormality based on the difference (i.e., inverse similarity) between the inspection image and the restored image, in view of Zhang, which teaches a fully-convolutional encoder–decoder network comprising stacked convolution layers without any fully-connected or GAP layers, capable of processing arbitrarily sized input images while preserving spatial information and producing proportionally scaled feature maps that downsample at a predictable ratio (1/2 per convolution layer). Zhang's architecture would have been an obvious design choice to apply to Lu's system in order to eliminate input-size constraints, permit processing of different-sized inspection images, maintain spatial structure in the feature maps, and achieve scalable encoder/decoder behavior, advantages that would predictably improve the flexibility and performance of Lu's reconstruction-based abnormality detector. Thus, combining Zhang's size-adaptive fully-convolutional network with Lu's anomaly detection framework would have been a routine substitution of known CNN architectural components yielding predictable benefits, rendering the claimed invention obvious.
Regarding claim(s) 15, Lu as modified by Zhang teaches the abnormality detection system according to claim 1 as set forth above, where Lu further teaches an abnormality detection program for causing a computer to function as the abnormality detection system according to claim 1 (Figure 10: “Processor 208 and an electronic data storage medium 209”; and Paragraph [0022]: “The non-transitory computer-readable storage medium comprises one or more programs for executing a model […] configured to receive an image of a wafer and determine presence of one or more anomalies in the image”).
Regarding claim(s) 16, Lu as modified by Zhang teaches the learning apparatus according to claim 10 as set forth above, where Lu further teaches a learning program for causing a computer to function as the learning apparatus according to claim 10 (Figure 10; Paragraph [0022]; and Paragraph [0040]-[0042]: “use semi-supervised machine learning for anomaly detection. By semi-supervised […] operator only needs to select clean SEM images for the training data set, which can be easier than annotating defective images […] the training set includes images of semiconductor structures, dies, or parts of a semiconductor wafer surface. Only clean (e.g., defect-free) images may be present in the training set”).
Relevant Prior Art Directed to State of Art
Okanohara et al (US 2022/0237060 A1) are relevant prior art not applied in the rejection(s) above. Okanohara discloses a system comprising: one or more memories; and one or more processors configured to output information relating to anomaly of target data inputted into the system using at least a part of a deep generative model configured to model a probability distribution corresponding to normal data. Furthermore, the information relating to anomaly of the target data includes data indicative of an anomalous part of the target data.
Zhou et al (US 2019/0108904 A1) are relevant prior art not applied in the rejection(s) above. Zhou discloses a medical image processing apparatus comprising: a memory configured to store a plurality of neural networks corresponding to a plurality of imaging target sites, respectively, the neural networks each including an input layer, an output layer, and an intermediate layer between the input layer and the output layer, and each generated through learning processing with multiple data sets acquired for the corresponding imaging target site; and processing circuitry configured to process first data into second data using, among the neural networks, the neural network corresponding to the imaging target site for the first data, wherein the first data is input to the input layer and the second data is output from the output layer.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONGBONG NAH whose telephone number is (571) 272-1361. The examiner can normally be reached M - F: 9:00 AM - 5:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ONEAL MISTRY can be reached on 313-446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JONGBONG NAH/Examiner, Art Unit 2674
/ONEAL R MISTRY/Supervisory Patent Examiner, Art Unit 2674