DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 07/28/2025 have been fully considered but they are not persuasive.
Regarding the rejections of the independent claims and their respective dependent claims, applicant argued that Cao fails to teach “select, based on the metric, a sub-set of the plurality of patterns from the one or more portions of the image having values of the metric within a specific range and provide the sub-set of patterns as training data for training a model associated with a patterning process,” as recited in the independent claims.
However, the examiner respectfully disagrees. First, as indicated in the previous Office action, Cao teaches a cost function describing the difference between a predicted resist image and an experimentally measured resist image (SEM image), wherein the cost function can be based on image pixel intensity difference, contour-to-contour difference, or CD difference, etc. (paragraph 0104). The cost function is therefore interpreted as “the metric.” Further, because the cost function in Cao is a difference between the predicted mask rule check metric and a truth mask rule check metric (paragraphs 0016-0017), the cost function is interpreted as defining values within a specified range (e.g., the difference).
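For illustration only (this sketch is not part of the record; the function name and the mean-squared form of the difference are assumptions), the pixel-intensity-difference cost function described in Cao's paragraph 0104 can be sketched as:

```python
import numpy as np

def intensity_cost(predicted_resist, measured_sem):
    """One possible form of the cost function of Cao, paragraph 0104:
    the mean-squared pixel-intensity difference between a predicted
    resist image and an experimentally measured SEM image."""
    diff = predicted_resist.astype(float) - measured_sem.astype(float)
    return float(np.mean(diff ** 2))
```

A smaller value of this metric indicates closer agreement between the predicted and measured images.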
Second, Cao teaches apparatus and methods for a patterning process and for determining patterns of a patterning device corresponding to a design layout (paragraph 0002), wherein said design layout may be a continuous transmission mask (CTM) image (paragraphs 0054, 0065-0066). Cao further teaches that one or more patterns of a portion of an obtained design layout (e.g., a CTM image) are applied to neural network processing (Figs. 4-6, paragraphs 0081-0083), which means only “a sub-set of the plurality of patterns from the one or more portions of the image” is processed through the machine learning model (e.g., a neural network).
Third, Cao teaches different methods of training machine learning models related to a patterning process (abstract), wherein at least one machine learning model outputs (interpreted as selecting or selected) training data to train another machine learning model (Figs. 10A-C, paragraph 0097). Cao teaches that the training data may include a printed pattern (e.g., a printed substrate); a mask image (e.g., a curvilinear mask, a CTM image obtained from the CTM1 model 1020 or CTM1 model 1030) corresponding to a target pattern; a simulated process image (e.g., a resist image, an aerial image, an etch image, etc.) corresponding to the mask images; benchmark images (or ground truth images) generated, for example, by executing SMO/iOPC to pre-generate CTM truth images; and a target pattern (paragraphs 0111, 0133-0134, 0153-0154, 0176-0177). The training data for one machine learning model in Cao is the output of another machine learning model, such as predicted mask patterns (e.g., from CTM and/or OPC) based on a trained process model (i.e., said another machine learning model) and the cost function (paragraphs 0117-0118, 0124, 0129-0131, 0142). Under the broadest reasonable interpretation, Cao does teach the argued limitations.
Thus, the rejections are proper and maintained.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-3, 8-9, and 11-18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Cao et al. (WO 2019/162346).
To claim 1, Cao teaches a non-transitory computer-readable medium comprising instructions stored therein that, when executed by one or more processors, cause the one or more processors to at least:
obtain an image having a plurality of patterns (paragraph 0104, SEM image);
determine, based on pixel intensities within the image (paragraph 0104, image pixel intensity), a metric indicative of a level of informativeness contained in one or more portions of the image (paragraph 0104, a cost function that describes the difference between a predicted resist image and an experimentally measured resist image (SEM image); the cost function can be based on image pixel intensity difference, contour-to-contour difference, or CD difference, etc.);
select, based on the metric, a sub-set of the plurality of patterns from the one or more portions of the image having values of the metric within a specified range (Figs. 4-6, 10A-C, paragraphs 0081-0083, 0097, further training and/or fine-tuning the trained process models based on a first training data set, e.g., printed patterns, and a first cost function, e.g., a difference between printed patterns and predicted patterns; which means a pattern mask image output by one machine learning model based on the cost function is input as training data to another machine learning model, wherein said output, or pattern mask image, is only a portion of a design layout); and
provide the sub-set of patterns as training data for training a model associated with a patterning process (paragraph 0104, the training process may involve reducing, in an embodiment, minimize, a cost function; paragraph 0109, training a process model of a patterning process to predict a pattern on a substrate).
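For illustration only (the function name and list-based form are assumptions, not part of Cao or the claims), the claimed selection step of keeping only patterns whose metric value falls within a specified range can be sketched as:

```python
def select_training_subset(patterns, metric_values, low, high):
    """Keep only the patterns whose metric value lies within
    [low, high]; the retained sub-set serves as training data
    (the selection step recited in claim 1)."""
    return [p for p, m in zip(patterns, metric_values) if low <= m <= high]
```

For example, with metric values of 0.1, 0.5, and 0.9 and a specified range of [0.2, 0.8], only the second pattern would be retained.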
To claim 16, Cao teaches a method for generating training data for training a model (as explained in the response to claim 1 above).
To claims 2 and 17, Cao teaches claims 1 and 16.
Cao teaches wherein the level of informativeness corresponds to non-homogeneity of each of the plurality of patterns, an uncertainty associated with a model prediction, or an error associated with a model prediction (paragraph 0104).
To claims 3 and 18, Cao teaches claims 1 and 16.
Cao teaches wherein the instructions configured to determine the metric are further configured to cause the one or more processors to generate information content data by applying the metric to one or more pixels of the image (paragraph 0104).
To claim 8, Cao teaches claim 1.
Cao teaches wherein the instructions configured to determine the metric are further configured to cause the one or more processors to determine the metric without simulation of one or more of the plurality of patterns using a process model associated with a patterning process, or without application, using one or more of the plurality of patterns, of a machine learning model associated with the patterning process (paragraph 0104).
To claim 9, Cao teaches claim 1.
Cao teaches wherein the instructions configured to select the sub-set of patterns are further configured to cause the one or more processors to: compare values of the metric across the image; identify portions of the image corresponding to values of the metric within the specified range; and select the sub-set of patterns within the identified portions (paragraphs 0097, 0104, wherein the processes of comparing, identifying, and selecting are embedded).
To claim 11, Cao teaches claim 1.
Cao teaches wherein the sub-set of patterns comprises at least a portion of a pattern of the sub-set of patterns (paragraph 0097).
To claim 12, Cao teaches claim 1.
Cao teaches wherein the image is at least one selected from: a design layout comprising patterns to be printed on a substrate; or a SEM image of a patterned substrate acquired via a scanning electron microscope (SEM) (paragraph 0104).
To claim 13, Cao teaches claim 1.
Cao teaches wherein the image is at least one selected from: a binary image; a grey scale image; or an n-channel image, wherein n refers to the number of colors used in the image (paragraph 0081).
To claim 14, Cao teaches claim 1.
Cao teaches wherein the instructions are further configured to cause the one or more processors to train, using the sub-set of patterns as training data, a model associated with the patterning process (paragraphs 0097, 0124, CTM-CNN training).
To claim 15, Cao teaches claim 14.
Cao teaches wherein the instructions configured to train the model are further configured to cause the one or more processors to train a model configured to generate optical proximity correction structures associated with the plurality of patterns of a design layout, wherein the optical proximity correction structures comprise one or more selected from: main features corresponding to the plurality of patterns of the design layout; or assist features surrounding the plurality of patterns of the design layout (paragraph 0097, using the trained process models to train another machine learning model, e.g., 8002, configured to predict a mask pattern, e.g., including OPC).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 4 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Cao et al. (WO 2019/162346) in view of Aparna et al. (“Application of Image Intensity Local Variance Measure for Analysis of Distorted Images”).
To claims 4 and 19, Cao teaches claims 3 and 18.
However, Cao does not expressly disclose wherein the instructions configured to generate the information content data are further configured to cause the one or more processors to: slide a window of specified shape and/or size through the image; and compute, for each sliding position, a value of the metric applied within the window.
Aparna teaches instructions configured to generate the information content data that are further configured to cause the one or more processors to: slide a window of specified shape and/or size through the image; and compute, for each sliding position, a value of the metric applied within the window (page 383, section III). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate this into the apparatus of Cao, in order to provide further detail in information generation.
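For illustration only (the window size and the plain nested-loop form are assumptions), the sliding-window local-variance measure of Aparna (page 383, section III) can be sketched as:

```python
import numpy as np

def sliding_local_variance(image, win=3):
    """Slide a win x win window across the image and compute the
    variance of pixel intensities at each window position (a local
    variance measure of the kind described by Aparna)."""
    h, w = image.shape
    out = np.zeros((h - win + 1, w - win + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = image[i:i + win, j:j + win].var()
    return out
```

Regions of low local variance are homogeneous; regions of high local variance contain edges or pattern structure.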
Claims 5-7, 10, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Cao et al. (WO 2019/162346) in view of Yin et al. (“Unsupervised Hierarchical Image Segmentation Through Fuzzy Entropy Maximization”).
To claims 5 and 20, Cao teaches claims 1 and 16.
However, Cao does not expressly disclose wherein the metric is at least one selected from: an information entropy, Renyi entropy, or differential entropy.
Yin teaches the metric is at least one selected from: an information entropy, Renyi entropy, or differential entropy (pages 245-246). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate this into the apparatus of Cao, in order to provide further detail in information generation.
To claim 6, Cao and Yin teach claim 5.
Cao and Yin teach wherein the metric comprises an information entropy and the information entropy comprises a sum of products of a probability of an outcome of a plurality of possible outcomes associated with the image and a logarithmic function of the probability of the outcome (Yin, page 248, section 4).
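For illustration only (the function name and the base-2 logarithm are assumptions), the information entropy recited in claim 6 is the Shannon entropy H = -Σ p(x) · log p(x), a sum over the products of each outcome's probability and a logarithmic function of that probability; it can be sketched as:

```python
import math
from collections import Counter

def image_entropy(pixels):
    """Shannon information entropy of a sequence of pixel values:
    the negated sum of products of each value's probability and the
    logarithm of that probability (the form recited in claim 6)."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

For a binary image (claim 7), the outcomes are the two pixel values indicating presence or absence of a pattern, and the entropy peaks at 1 bit when the two values are equally likely.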
To claim 7, Cao and Yin teach claim 6.
Cao and Yin teach wherein the possible outcomes comprise at least one selected from: a binary value assigned to a pixel of the image, a first value being indicative of presence of a pattern within the image and a second value being indicative of absence of a pattern within the image; or a grey scale value assigned to a pixel of the image (Yin, page 249, section 5.1).
To claim 10, Cao and Yin teach claim 5.
Cao and Yin teach wherein the instructions configured to select the sub-set of patterns are further configured to cause the one or more processors to: identify portions of the image corresponding to relatively low entropy values compared to other portions; and select the sub-set of patterns within the identified portions (low entropy means an image with high predictability and low complexity; obvious in view of paragraphs 0090, 0097, 0154 of Cao).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZHIYU LU whose telephone number is (571)272-2837. The examiner can normally be reached Weekdays: 8:30AM - 5:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen R Koziol can be reached at (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
ZHIYU LU
Primary Examiner
Art Unit 2669
/ZHIYU LU/Primary Examiner, Art Unit 2665 November 1, 2025