DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-3, 7-9, 14-18 and 20-21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sriraman et al. (WO 2019199697 A1), hereinafter Sriraman, in view of Alawieh et al. (2020 IEEE/ACM International Conference On Computer Aided Design), hereinafter Alawieh.
-Regarding claim 1, Sriraman discloses a non-transitory computer-readable medium (Figure 6, memory 606) having instructions therein that, when executed by a computer system (Figure 6; Page 43, lines 15-19), cause the computer system to at least (Abstract; Figures 1-6): obtain a set of paired after-development (AD) images (Abstract; Page 6, lines 19-21, “(a) receiving after development inspection metrology results …”; Figure 1, operations 107, 109) and after-etch (AE) images associated with a substrate (Abstract; Page 6, lines 22-24, “(b) receiving after etch inspection metrology result …”; Figure 1, operations 113, 115), and train, based on the AD images and AE images, a machine learning model (Figure 1; Page 35, line 8; Page 36, lines 9-10, “The profiles are then be used as inputs to train, optimize, and improve the computerized etch profile models”; Page 16, lines 20-22, “a neural network such as a convolutional neural network”; page 15, lines 2-5) to generate a predicted AE image (Abstract; Figure 1; Page 6, lines 24-26, “generating the transfer function using the set of design layout segments together with corresponding after development inspection metrology results and corresponding after etch inspection metrology results”) for an input AD image to the machine learning model (Figure 1, modellings 119, 125; Page 14, lines 7-13, “All blocks in the upper portion of Figure I- above the dashed box- are used to generate data (including optionally images) …”), wherein the predicted AE image (Figure 1; Page 6, 1st paragraph) corresponds to a location from which an input AD image of the AD images is obtained (Figure 1; Page 42, lines 3-5, “The models used herein may be configured to execute on a single machine at a single location”). Note: obtaining and training an etch (AE) model to predict an after-etch image from a given after-development (AD) image is well known in the semiconductor field.
Sriraman does not disclose a set of unpaired images for training the machine learning model, wherein each image of a pair of unpaired images is obtained from a different location.
In the same field of endeavor, Alawieh teaches a method for re-examining VLSI manufacturing and yield through deep learning (Alawieh: Abstract; Figs. 1-11). Alawieh further teaches a learning scheme that uses cycle translation to learn the mapping using unpaired images (Alawieh: Fig. 9; Page 5, 1st Col., last paragraph). Note: it is known that each image of a pair of unpaired images for Cycle GAN is obtained from different locations (See Zhu et al. (Proc. IEEE Int. Conf. Comput. Vis., Venice, pp. 2223–2232, 2017): Figure 2).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Sriraman with the teaching of Alawieh by using unpaired after-development (AD) images and after-etch (AE) images associated with a substrate and cycle-consistent adversarial networks in order to solve one of the major challenges facing the training of after-etch models in practice: data is available but is not necessarily paired.
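For illustration only (this sketch is hypothetical and does not appear in Sriraman, Alawieh, or Zhu et al.), the unpaired-data setup underlying the above rationale can be shown as follows: the AD and AE image pools come from different locations, need not be equal in size, and each training step draws from them independently, as in CycleGAN-style training.

```python
import numpy as np

# Hypothetical sketch of unpaired AD/AE data handling: the two pools are
# collected at different locations, may differ in size, and no index
# alignment between them is assumed.
rng = np.random.default_rng(1)
ad_pool = [rng.random((8, 8)) for _ in range(12)]  # after-development images
ae_pool = [rng.random((8, 8)) for _ in range(7)]   # after-etch images

def sample_unpaired_batch(batch_size=4):
    """Draw AD and AE minibatches with independently chosen indices."""
    ad_idx = rng.integers(0, len(ad_pool), batch_size)
    ae_idx = rng.integers(0, len(ae_pool), batch_size)
    return [ad_pool[i] for i in ad_idx], [ae_pool[j] for j in ae_idx]

ad_batch, ae_batch = sample_unpaired_batch()
```

Because the indices are drawn independently, no AD image in a batch is paired with the AE image at the same position, which is precisely the data regime the claimed training scheme addresses.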
-Regarding claim 15, Sriraman discloses an apparatus for generating a first image from a second image using a machine learning model, the apparatus comprising (Abstract; Figures 1-6): a memory storing a set of instructions (Figure 6, memory 606); and at least one processor configured to execute the set of instructions to cause the apparatus to at least (Figure 6; Page 43, lines 15-19): obtain a given after-etch (AE) image associated with a given substrate (Abstract; Page 6, lines 22-24, “(b) receiving after etch inspection metrology result …”; Figure 1, operations 113, 115), wherein the given AE image corresponds to a given location on the given substrate (Figure 1; Page 42, lines 3-5, “The models used herein may be configured to execute on a single machine at a single location”); and generate, via a machine learning model (Figure 1; Page 16, lines 20-22, “a neural network such as a convolutional neural network”), a given predicted after-development (AD) image using the given AE image (Abstract; Page 6, lines 24-28, “generating the transfer function using the set of design layout segments together with corresponding after development inspection metrology results and corresponding after etch inspection metrology results … In certain implementations, … applying an inverse of the transfer function to determine a design layout for a lithography mask”), wherein the given predicted AD image corresponds to the given location (Figure 1; Page 42, lines 3-5, “The models used herein may be configured to execute on a single machine at a single location”), wherein the machine learning model is trained to generate a predicted AD image using a set of paired AD images and AE images associated with a substrate (Figure 1; Page 35, line 8; Page 36, lines 9-10, “The profiles are then be used as inputs to train, optimize, and improve the computerized etch profile models”; Page 16, lines 20-22, “a neural network such as a convolutional neural network”).
Note: obtaining and training an etch (AE) model to predict an after-etch image from a given after-development (AD) image, and vice versa, is well known in the semiconductor field.
Sriraman does not disclose a set of unpaired images for training the machine learning model.
In the same field of endeavor, Alawieh teaches a method for re-examining VLSI manufacturing and yield through deep learning (Alawieh: Abstract; Figs. 1-11). Alawieh further teaches a learning scheme that uses cycle translation to learn the mapping using unpaired images (Alawieh: Fig. 9; Page 5, 1st Col., last paragraph). Note: it is known that each image of a pair of unpaired images for Cycle GAN is obtained from different locations (See Zhu et al. (Proc. IEEE Int. Conf. Comput. Vis., Venice, pp. 2223–2232, 2017): Figure 2).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Sriraman with the teaching of Alawieh by using unpaired after-development (AD) images and after-etch (AE) images associated with a substrate and cycle-consistent adversarial networks in order to solve one of the major challenges facing the training of after-etch models in practice: data is available but is not necessarily paired.
-Regarding claims 2 and 17, Sriraman in view of Alawieh teaches the non-transitory computer-readable medium of claim 1 and the apparatus of claim 15.
Sriraman does not disclose to determine, via a discriminator model of the machine learning model, whether the predicted image is classified as a real or fake image.
In the same field of endeavor, Alawieh teaches a method for re-examining VLSI manufacturing and yield through deep learning (Alawieh: Abstract; Figs. 1-11). Alawieh further teaches to determine, via a discriminator model of the machine learning model, whether the predicted image is classified as a real or fake image (Alawieh: Fig. 9, top branch, green path, discriminator (left side)).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Sriraman with the teaching of Alawieh by using unpaired after-development (AD) images and after-etch (AE) images associated with a substrate and a cycle-consistent adversarial network with one generator model serving as an AE generator model and a discriminator model, in order to solve one of the major challenges facing the training of machine learning models in practice (data is available but is not necessarily paired) and to equip the model with an integrated reject option, which can be leveraged to reduce the misclassification risk for the model and to support new defect detection, data change detection, and resource allocation (Alawieh: Page 2, 1st Col., 2nd paragraph; Page 5, 1st Col., last paragraph).
-Regarding claims 3 and 18, Sriraman in view of Alawieh teaches the non-transitory computer-readable medium of claim 2 and the apparatus of claim 17.
Sriraman does not disclose to compute a first cost function that is indicative of predicted images being classified as fake and the images being classified as real, wherein the first cost function is further computed based on a set of process-related parameters; adjust one or more parameters of the discriminator model to maximize the first cost function; and adjust one or more parameters of the generator model to minimize the first cost function. A person of ordinary skill in the art would understand that this is a known routine for the training of any Generative Adversarial Network (GAN) when using AE images as input of the GAN (reciting well-understood, routine, conventional activities previously known to the industry cannot provide an inventive concept).
In the same field of endeavor, Alawieh teaches a method for re-examining VLSI manufacturing and yield through deep learning (Alawieh: Abstract; Figs. 1-11). Alawieh further teaches to determine, via a discriminator model of the machine learning model, whether the predicted image is classified as a real or fake image (Alawieh: Fig. 9, top branch, discriminator (left side)), and further teaches to compute a first cost function that is indicative of predicted images being classified as fake and the images being classified as real, wherein the first cost function is further computed based on a set of process-related parameters; adjust one or more parameters of the discriminator model to maximize the first cost function; and adjust one or more parameters of the generator model to minimize the first cost function (Alawieh: Fig. 9 (training of top branch); Page 2, equation (1)).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Sriraman with the teaching of Alawieh by using unpaired after-development (AD) images and after-etch (AE) images associated with a substrate and a cycle-consistent adversarial network with generator models and discriminator models, in order to solve one of the major challenges facing the training of machine learning models in practice (data is available but is not necessarily paired) and to equip the model with an integrated reject option, which can be leveraged to reduce the misclassification risk for the model and to support new defect detection, data change detection, and resource allocation (Alawieh: Page 2, 1st Col., 2nd paragraph; Page 5, 1st Col., last paragraph).
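For illustration only (this toy sketch is hypothetical and not taken from any cited reference), the adversarial cost function discussed above works as follows: the discriminator's parameters are adjusted to maximize the cost, and the generator's parameters are adjusted to minimize the same cost. Images are reduced to scalar summaries and each model to a single parameter purely so the opposing directions of the two updates are visible.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_cost(d_w, real, fake):
    # log D(real) + log(1 - D(fake)): large when D separates real from fake.
    return (np.mean(np.log(sigmoid(d_w * real)))
            + np.mean(np.log(1.0 - sigmoid(d_w * fake))))

rng = np.random.default_rng(0)
real_ae = rng.normal(2.0, 0.1, 100)   # stand-in statistics of real AE images
ad_in = rng.normal(0.0, 0.1, 100)     # stand-in statistics of input AD images

d_w, g_b, lr, eps = 0.5, 1.0, 0.05, 1e-5
fake_ae = ad_in + g_b                 # generator output: shift AD toward AE

# One discriminator ascent step (finite-difference gradient).
c0 = adversarial_cost(d_w, real_ae, fake_ae)
d_grad = (adversarial_cost(d_w + eps, real_ae, fake_ae)
          - adversarial_cost(d_w - eps, real_ae, fake_ae)) / (2 * eps)
d_w += lr * d_grad                    # maximize the first cost function
c_after_d = adversarial_cost(d_w, real_ae, fake_ae)

# One generator descent step on the same cost.
g_grad = (adversarial_cost(d_w, real_ae, ad_in + g_b + eps)
          - adversarial_cost(d_w, real_ae, ad_in + g_b - eps)) / (2 * eps)
g_b -= lr * g_grad                    # minimize the first cost function
c_after_g = adversarial_cost(d_w, real_ae, ad_in + g_b)
```

After one step of each kind the cost rises under the discriminator update and falls under the generator update, which is the "maximize the first cost function" / "minimize the first cost function" behavior recited above.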
-Regarding claim 7, Sriraman in view of Alawieh teaches the non-transitory computer-readable medium of claim 3. The combination further teaches wherein the set of process- related parameters includes parameters associated with one or more processes for forming a pattern on the substrate (Sriraman: Figure 1; Page 6, lines 19-26; Page 7, lines 18-23; Page 13, lines 22-23, “profiles of etched features produced by etching patterns produced … an etch model that predicts etch profiles …”; Page 14, line 18, “transfer a given pattern”).
-Regarding claim 8, Sriraman in view of Alawieh teaches the non-transitory computer-readable medium of claim 2. The combination further teaches to generate, via an AD generator model of the machine learning model, a predicted AD image using a reference AE image of the AE images; and determine, via an AD discriminator model of the machine learning model, whether the predicted AD image is classified as a real or fake image (Alawieh: Fig. 9, bottom branch, red path, discriminator (right side); using bottom generator as an AD generator model and top generator as an AE generator model, red input as input AD image and green input as input AE image).
-Regarding claim 9, Sriraman in view of Alawieh teaches the non-transitory computer-readable medium of claim 8.
Sriraman does not disclose to compute a third cost function that is indicative of predicted images being classified as fake and the images being classified as real, wherein the third cost function is further computed based on a set of process-related parameters; adjust one or more parameters of the discriminator model to maximize the third cost function; and adjust one or more parameters of the generator model to minimize the third cost function. A person of ordinary skill in the art would understand that this is a known routine for the training of any Generative Adversarial Network (GAN) when using AE images as input of the GAN (reciting well-understood, routine, conventional activities previously known to the industry cannot provide an inventive concept).
In the same field of endeavor, Alawieh teaches a method for re-examining VLSI manufacturing and yield through deep learning (Alawieh: Abstract; Figs. 1-11). Alawieh further teaches to determine, via a discriminator model of the machine learning model, whether the predicted image is classified as a real or fake image (Alawieh: Fig. 9, top branch, discriminator (left side)), and further teaches to compute a third cost function that is indicative of predicted images being classified as fake and the images being classified as real, wherein the third cost function is further computed based on a set of process-related parameters; adjust one or more parameters of the discriminator model to maximize the third cost function; and adjust one or more parameters of the generator model to minimize the third cost function (Alawieh: Fig. 9 (training of top branch); Page 2, equation (1)).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Sriraman with the teaching of Alawieh by using unpaired after-development (AD) images and after-etch (AE) images associated with a substrate and a cycle-consistent adversarial network with generator models and discriminator models, in order to solve one of the major challenges facing the training of machine learning models in practice (data is available but is not necessarily paired) and to equip the model with an integrated reject option, which can be leveraged to reduce the misclassification risk for the model and to support new defect detection, data change detection, and resource allocation (Alawieh: Page 2, 1st Col., 2nd paragraph; Page 5, 1st Col., last paragraph).
-Regarding claim 14, Sriraman in view of Alawieh teaches the non-transitory computer-readable medium of claim 1. The combination further teaches wherein the substrate includes a plurality of regions, and wherein the set of unpaired AD and AE images are obtained from a same region of the regions (Sriraman: Figs. 1-2, 5A-5B; Page 8, lines 16-19; Page 19, lines 5-12). Note: Alawieh places no limitation on the region from which unpaired images are obtained.
-Regarding claim 16, Sriraman in view of Alawieh teaches the apparatus of claim 15. The combination further teaches wherein each AD image in the set of unpaired AD images and AE images is obtained at a location on the substrate that is different from all locations at which the AE images are obtained (Sriraman: Fig. 1; Alawieh: Fig. 9; Page 5, 1st Col., last paragraph, “scheme featuring an unpaired image-to-image translation … Cycle Generative Adversarial architecture (CyGAN) [17] learns simultaneously a two-way image translation using unpaired data.”). It is known that each image of a pair of unpaired images for Cycle GAN is obtained from different locations (See Zhu et al. (Proc. IEEE Int. Conf. Comput. Vis., Venice, pp. 2223–2232, 2017): Figure 2).
-Regarding claim 20, Sriraman discloses a non-transitory computer-readable medium (Figure 6, memory 606) having instructions therein that, when executed by a computer system (Figure 6; Page 43, lines 15-19), cause the computer system to at least (Abstract; Figures 1-6): obtain a set of paired after-development (AD) images (Abstract; Page 6, lines 19-21, “(a) receiving after development inspection metrology results …”; Figure 1, operations 107, 109) and after-etch (AE) images associated with a substrate (Abstract; Page 6, lines 22-24, “(b) receiving after etch inspection metrology result …”; Figure 1, operations 113, 115), and train an AE generator model (Figure 1; Page 35, line 8; Page 36, lines 9-10, “The profiles are then be used as inputs to train, optimize, and improve the computerized etch profile models”; Page 16, lines 20-22, “a neural network such as a convolutional neural network”) to generate a predicted AE image from an input AD image of the AD images such that a first cost function determined based on the input AD image (Abstract; Page 6, lines 24-26, “generating the transfer function using the set of design layout segments together with corresponding after development inspection metrology results and corresponding after etch inspection metrology results”) and the predicted AE image is reduced (Page 16, 1st paragraph, “data reduction and cost function optimization procedures may be employed”; Page 29, lines 10-23; Page 30, 1st paragraph); and train an AD generator model (Figure 1; Page 35, line 8; Page 36, lines 9-10, “The profiles are then be used as inputs to train, optimize, and improve the computerized etch profile models”; Page 16, lines 20-22, “a neural network such as a convolutional neural network”) to generate a predicted AD image from a reference AE image of the AE images such that a second cost function determined based on the reference AE image (Abstract; Page 6, lines 24-28, “generating the transfer function using the set of design layout segments together with corresponding after development inspection metrology results and corresponding after etch inspection metrology results … In certain implementations, … applying an inverse of the transfer function to determine a design layout for a lithography mask”) and the predicted AD image is reduced (Page 16, 1st paragraph, “data reduction and cost function optimization procedures may be employed”; Page 29, lines 10-23; Page 30, 1st paragraph). Note: obtaining and training an etch (AE) model to predict an after-etch image from a given after-development (AD) image, and vice versa, is well known in the semiconductor field.
Sriraman does not disclose a set of unpaired images for training the AE generator model and the AD generator model, wherein each image of a pair of unpaired images is obtained from a different location. Sriraman also does not disclose that the AE generator model and the AD generator model belong to a machine learning model.
In the same field of endeavor, Alawieh teaches a method for re-examining VLSI manufacturing and yield through deep learning (Alawieh: Abstract; Figs. 1-11). Alawieh further teaches a learning scheme that uses cycle translation to learn the mapping using unpaired images (Alawieh: Fig. 9; Page 5, 1st Col., last paragraph). Note: it is known that each image of a pair of unpaired images for Cycle GAN is obtained from different locations, and that a cycle-consistent adversarial network uses two generator models (See Zhu et al., “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in Proc. IEEE Int. Conf. Comput. Vis., Venice, pp. 2223–2232, 2017).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Sriraman with the teaching of Alawieh by using unpaired after-development (AD) images and after-etch (AE) images associated with a substrate and a cycle-consistent adversarial network with one generator model serving as the AE generator model and the other as the AD generator model, in order to solve one of the major challenges facing the training of machine learning models in practice (data is available but is not necessarily paired) and to equip the model with an integrated reject option, which can be leveraged to reduce the misclassification risk for the model and to support new defect detection, data change detection, and resource allocation (Alawieh: Page 2, 1st Col., 2nd paragraph; Page 5, 1st Col., last paragraph).
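For reference, the full objective of a cycle-consistent adversarial network of the kind contemplated by this combination, as formulated in Zhu et al. (cited above), combines the two adversarial terms with a weighted cycle-consistency term. Notation follows Zhu et al.; in the mapping to the claims, $G$ would play the role of the AE generator model and $F$ the AD generator model:

```latex
\mathcal{L}(G, F, D_X, D_Y) =
  \mathcal{L}_{\text{GAN}}(G, D_Y, X, Y)
  + \mathcal{L}_{\text{GAN}}(F, D_X, Y, X)
  + \lambda\, \mathcal{L}_{\text{cyc}}(G, F),
\qquad
G^{*}, F^{*} = \arg\min_{G,F}\,\max_{D_X, D_Y} \mathcal{L}(G, F, D_X, D_Y)
```

The discriminators $D_X, D_Y$ maximize this objective while the generators $G, F$ minimize it, matching the cost-function behavior mapped to the claims above.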
-Regarding claim 21, Sriraman in view of Alawieh teaches the non-transitory computer-readable medium of claim 20. The combination further teaches wherein the machine learning model is configured to generate a predicted AE image (Sriraman: Fig. 1; Alawieh, Fig. 9).
Claim(s) 4-6, 10-12 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sriraman et al. (WO 2019199697 A1), hereinafter Sriraman, in view of Alawieh et al. (2020 IEEE/ACM International Conference On Computer Aided Design), hereinafter Alawieh, and further in view of Alawieh et al. (IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 2020), hereinafter Alawieh1.
-Regarding claims 4 and 19, Sriraman in view of Alawieh teaches the non-transitory computer-readable medium of claim 3 and the apparatus of claim 18. The combination further teaches generating, via a first generator model of the machine learning model, a cyclic image using the predicted image (Alawieh: Fig. 9, bottom branch, red path, bottom generator).
Sriraman in view of Alawieh does not teach computing a second cost function that is indicative of a difference between the cyclic image and an input image (used as the AD image (red input in Alawieh’s Fig. 9)); and adjusting one or more parameters of the first generator model (used as the AD generator, bottom generator in Alawieh’s Fig. 9) or a second generator model (used as the AE generator) to minimize the second cost function.
However, Alawieh1 is an analogous art pertinent to the problem to be solved in this application and teaches a method using generative adversarial networks (GANs) to generate subresolution assist features (SRAFs) directly for any given layout. Alawieh1 further teaches computing a second cost function that is indicative of a difference between the cyclic image and an input image (used as the AD image); and adjusting one or more parameters of the first generator model (used as the AD generator) or a second generator model (used as the AE generator) to minimize the second cost function (Alawieh1: Fig. 6; Page 377, 2nd component in equation (4), i.e., the E_y term).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Sriraman in view of Alawieh with the teaching of Alawieh1 by using a cost function related to the difference between the cyclic image and an input image in order to train the cycle-consistent Generative Adversarial Network.
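For illustration only (this toy sketch is hypothetical and not Alawieh1's code), the cycle-consistency cost function mapped above works as follows: an input AD image is mapped to a predicted AE image by one generator, mapped back to a cyclic AD image by the other, and the cost penalizes the mean absolute difference between the cyclic image and the input. Linear stand-ins are used for the two generators so the value is easy to check by hand.

```python
import numpy as np

def g_ae(ad):
    return 2.0 * ad + 1.0          # stand-in AD -> AE generator

def g_ad(ae):
    return 0.5 * (ae - 0.9)        # stand-in AE -> AD generator (imperfect inverse)

def cycle_cost(ad_batch):
    cyclic = g_ad(g_ae(ad_batch))              # F(G(x)), the cyclic image
    return np.mean(np.abs(cyclic - ad_batch))  # L1 cycle-consistency term

ad_batch = np.linspace(0.0, 1.0, 5)
cost = cycle_cost(ad_batch)        # each cyclic pixel is off by exactly 0.05 here
```

Adjusting the parameters of either generator to drive this cost toward zero is what makes the two mappings approximate inverses of each other, which is the role of the second (and, in claim 10, fourth) cost function.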
-Regarding claim 5, Sriraman in view of Alawieh, and further in view of Alawieh1 teaches the non-transitory computer-readable medium of claim 4. The modification further teaches to train the machine learning model with a different AD image and AE image in each iteration of training until the AE discriminator model determines whether the predicted AE image is classified as a real image (Alawieh1: Fig. 6; Page 377, Sec. C).
-Regarding claim 6, Sriraman in view of Alawieh, and further in view of Alawieh1 teaches the non-transitory computer-readable medium of claim 5. The modification further teaches wherein the AE discriminator model determines whether the predicted AE image is classified as a real image when the first cost function or the second cost function is minimized (Alawieh1: Fig. 6; Page 377, Sec. C).
-Regarding claim 10, Sriraman in view of Alawieh teaches the non-transitory computer-readable medium of claim 9. The combination further teaches generating, via a first generator model of the machine learning model, a cyclic image using the predicted image (Alawieh: Fig. 9, top branch, green path, generator (top)).
Sriraman in view of Alawieh does not teach computing a fourth cost function that is indicative of a difference between the cyclic image and an input image (used as the AE image (green input in Alawieh’s Fig. 9)); and adjusting one or more parameters of the first generator model (used as the AD generator, bottom generator in Alawieh’s Fig. 9) or a second generator model (used as the AE generator) to minimize the fourth cost function.
However, Alawieh1 is an analogous art pertinent to the problem to be solved in this application and teaches a method using generative adversarial networks (GANs) to generate subresolution assist features (SRAFs) directly for any given layout. Alawieh1 further teaches computing a fourth cost function that is indicative of a difference between the cyclic image and an input image (used as the AE image); and adjusting one or more parameters of the first generator model (used as the AD generator) or a second generator model (used as the AE generator) to minimize the fourth cost function (Alawieh1: Fig. 6; Page 377, 1st component in equation (4), i.e., the E_x term).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Sriraman in view of Alawieh with the teaching of Alawieh1 by using a cost function related to the difference between the cyclic image and an input image in order to train the cycle-consistent Generative Adversarial Network.
-Regarding claim 11, Sriraman in view of Alawieh, and further in view of Alawieh1 teaches the non-transitory computer-readable medium of claim 10. The modification further teaches to train the machine learning model with a different AD image and AE image in each iteration of training until the AD discriminator model determines whether the predicted AD image is classified as a real image (Alawieh1: Fig. 6; Page 377, Sec. C).
-Regarding claim 12, Sriraman in view of Alawieh, and further in view of Alawieh1 teaches the non-transitory computer-readable medium of claim 11. The modification further teaches wherein the AD discriminator model determines whether the predicted AD image is classified as a real image when the third cost function or the fourth cost function is minimized (Alawieh1: Fig. 6; Page 377, Sec. C).
Response to Arguments
Applicant's arguments filed 12/04/2026 have been fully considered but they are not persuasive. Applicant argues that Sriraman and Alawieh do not disclose or teach the non-transitory computer-readable medium of claim 1, the apparatus of claim 15, or the non-transitory computer-readable medium of claim 20 because "segments on lithography photomasks" is not at all an after-development (AD) image (nor an after-etch (AE) image) (Remarks: page 9, 2nd paragraph); that, accordingly, Sriraman fails to disclose or teach a machine learning model that generates a predicted AE image for an input AD image (Remarks: page 9, 2nd paragraph; page 12, 1st paragraph); that Sriraman fails to disclose or teach generation, via a machine learning model, of a given predicted after-development (AD) image using the given AE image (Remarks: page 11, 1st paragraph); that Alawieh fails to overcome the deficiencies of the cited portions of Sriraman (Remarks: page 9, 3rd paragraph; page 11, 2nd paragraph; page 12, 2nd paragraph); and that Alawieh fails to disclose or teach obtaining a set of unpaired after-development (AD) images and after-etch (AE) images associated with a substrate (Remarks: page 10, 2nd paragraph; page 13, 2nd paragraph). The examiner respectfully disagrees with the above arguments.
In response to applicant's argument that "segments on lithography photomasks" is not at all an AD image (nor an AE image), Sriraman discloses “the design layout segments are clips or gauges provided in a GDS format” (Sriraman: Page 4, 2nd paragraph; note: images can be generated from GDS files using tools like InkScape or GIMP, allowing for the creation of bitmap images that can be used for lithography purposes) and “the design clip library is defined at an operation 103. Clips or gauges are geometric features or segments that may represent small portions of a design layout … Figure 3A presents an example of families of gauges and the left panel of Figure 3B presents an example of a gauge” (Sriraman: page 9, lines 18-25; Figures 1, 3A-3B). Sriraman further discloses “All blocks in the upper portion of Figure I- above the dashed box- are used to generate data (including optionally images) …” (Page 14, lines 7-13), “the after development inspection metrology results and/or the after etch inspection metrology results are provided as x-y contours of CD-SEM-generated images … the after development inspection metrology results and/or the after etch inspection metrology results are provided as x-z profiles of TEM or CD-SAXS-generated images” (Sriraman: page 7, lines 11-13; page 11, lines 26-28; page 13, lines 14-16; Figure 1, outputs of operations 109, 115).
In response to applicant's argument that accordingly, Sriraman fails to disclose or teach a machine learning model that generates a predicted AE image for an input AD image and fails to disclose or teach generation, via a machine learning model, of a given predicted after-development (AD) image using the given AE image , Sriraman discloses obtain a set of paired after-development (AD) images (Abstract; Page 6, lines 19-21, “(a) receiving after development inspection metrology results …”; Figure 1, operations 107, 109) and after-etch (AE) images associated with a substrate (Abstract; Page 6, lines 22-24, “(b) receiving after etch inspection metrology result …”; Figure 1, operations 113, 115), and train, based on the AD images and AE images, a machine learning model (Figure 1; Page 35, line 8; Page 36, lines 9-10, “The profiles are then be used as inputs to train, optimize, and improve the computerized etch profile models”; Page 16, lines 20-22, “a neural network such as a convolutional neural network”; page 15, lines 2-5) to generate a predicted AE image (Abstract; Figure 1; Page 6, lines 24-26, “generating the transfer function using the set of design layout segments together with corresponding after development inspection metrology results and corresponding after etch inspection metrology results”) for an input AD image to the machine learning model (Figure 1, modellings 119, 125), wherein the predicted AE image (Figure 1; Page 6, 1st paragraph) corresponds to a location from which an input AD image of the AD images is obtained (Figure 1; Page 42, lines 3-5, “The models used herein may be configured to execute on a single machine at a single location”). 
Sriraman further discloses to generate, via a machine learning model (Figure 1; Page 16, lines 20-22, “a neural network such as a convolutional neural network”), a given predicted after-development (AD) image using the given AE image (Abstract; Page 6, lines 24-28, “generating the transfer function using the set of design layout segments together with corresponding after development inspection metrology results and corresponding after etch inspection metrology results … In certain implementations, … applying an inverse of the transfer function to determine a design layout for a lithography mask”; Page 20, lines 21-26).
In response to applicant's argument that Alawieh fails to overcome the deficiencies of the cited portions of Sriraman, the examiner notes that Sriraman does not disclose a set of unpaired images for the training of the machine learning model. In the same field of endeavor, Alawieh teaches a method for re-examining VLSI manufacturing and yield through deep learning (Alawieh: Abstract; Figs. 1-11). Alawieh further teaches a learning scheme that uses a cycle translation to learn the mapping using unpaired images (Alawieh: Fig. 9; Page 5, 1st Col., last paragraph). Note: it is known that each image of a pair of unpaired images for CycleGAN is obtained from a different location (see Zhu et al (Proc. IEEE Int. Conf. Comput. Vis., Venice, pp. 2223–2232, 2017): Figure 2). Alawieh teaches using any type of unpaired images, including unpaired AD and AE images.
In response to applicant's argument that Alawieh fails to disclose or teach obtaining a set of unpaired after-development (AD) images and after-etch (AE) images associated with a substrate, please note that claims 1, 15 and 20 are rejected under 35 U.S.C. 103. A single prior art reference need not teach every limitation; the rejection is based on the combination of references. In this case, Sriraman discloses almost all of the claim limitations except the use of unpaired AD and AE images, while Alawieh teaches using unpaired images, which can be any type of unpaired images, including AD and AE images. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Sriraman with the teaching of Alawieh by using unpaired after-development (AD) images and after-etch (AE) images associated with a substrate, together with cycle-consistent adversarial networks, in order to solve one of the major challenges facing the training of after-etch models in practice: data is available but is not necessarily paired.
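Note (illustrative only, not part of the claim mapping): the cycle-consistency idea underlying the unpaired training scheme cited from Alawieh can be sketched as follows, using fixed toy linear maps in place of the adversarially trained generators of a CycleGAN; all names are hypothetical.

```python
import numpy as np

# Toy stand-ins for the two generators of a cycle-consistent scheme:
# G_ad2ae maps AD images toward AE images, G_ae2ad maps back. A real
# CycleGAN learns both with adversarial losses; here they are fixed
# exact inverses, so the cycle loss is ~0 by construction.
rng = np.random.default_rng(0)
G_ad2ae = np.eye(16) + 0.1 * rng.normal(size=(16, 16))
G_ae2ad = np.linalg.inv(G_ad2ae)

def cycle_loss(x_ad, x_ae):
    """L1 cycle-consistency: AD -> AE -> AD and AE -> AD -> AE."""
    ad_rec = G_ae2ad @ (G_ad2ae @ x_ad)
    ae_rec = G_ad2ae @ (G_ae2ad @ x_ae)
    return np.abs(ad_rec - x_ad).mean() + np.abs(ae_rec - x_ae).mean()

# Unpaired samples: drawn independently, i.e. the AD and AE images
# need not come from the same wafer location.
x_ad = rng.normal(size=16)
x_ae = rng.normal(size=16)
loss = cycle_loss(x_ad, x_ae)   # ~0 for exact inverse maps
```

The point of the cycle terms is that no per-sample pairing between x_ad and x_ae is required; only the two reconstruction paths must close, which is what permits training from unpaired data.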
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAO LIU whose telephone number is (571)272-4539. The examiner can normally be reached Monday-Thursday and Alternate Fridays 8:30-4:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Mehmood, can be reached at (571) 272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/XIAO LIU/Primary Examiner, Art Unit 2664