Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This communication is in response to the action filed on 11/10/2025.
Claim 16 is currently amended. Claims 1-16 are pending.
Response to Arguments
Applicant’s arguments filed on 11/10/2025 on page 6, under REMARKS, with respect to the 35 U.S.C. 101 rejection of claim 16 have been fully considered and are persuasive. The rejection of the claim has been withdrawn.
Applicant’s arguments filed on 11/10/2025 on pages 6-8, under REMARKS, with respect to the 35 U.S.C. 102 and 103 rejections of claims 1-16 have been fully considered and are persuasive. The rejections of the claims have been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of US 2025/0003899 A1.
The examiner notes that the previous action raised a priority claim issue to which applicant did not respond; the issue is reiterated in this action. It is additionally noted, however, that the priority issue has no effect on the prior art currently cited, as the applied prior art meets the claimed priority date.
Priority
Acknowledgment is made of applicant's claim for foreign priority based on an application filed in the United Kingdom of Great Britain and Northern Ireland on 9/15/2022. It is noted, however, that applicant has not filed a certified copy of the GB 2213576.8 application as required by 37 CFR 1.55.
Claim Objections
Claim 16 is objected to because of the following informalities: claim 16 appears to be missing a phrase after “a non-transitory computer-readable medium that…” before continuing with “when read by a computer”. It appears to the examiner that applicant intended “a non-transitory computer-readable medium that stores instructions which, when read by a computer…”, and the claim will be interpreted as such. Appropriate correction is required.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-8 and 10-16 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by US 2025/0003899 A1 to HOUBEN et al. (hereinafter “HOUBEN”).
As per claim 1, HOUBEN discloses a method of training a neural network for use in surface metrology (a system, and a method of using the system, to train a neural network using surface metrology data, which neural network is implemented in a first surface estimation model 1420 and is a convolutional neural network (CNN) to be used in wafer surface estimation; abstract; figs 1-3; paragraphs [0013], [0051-0053], [0167], [0181], [0186-0187]), comprising: providing height image data comprising a series of height measurements of a sample (providing simulated and non-simulated real data to the system comprising a plurality of height profiles of the image regions of interest comprising the features of interest; abstract; figs 1, 3; paragraphs [0048], [0059], [0075]), the height image data comprising a plurality of features (each region comprising the modified averaged height values includes features such as edges, corners, and other surface features of the sample; paragraphs [0075-0077]); obtaining, from the height image data, a plurality of height image patches (obtaining, from the SEM images which contain the height image data, a plurality of image regions acting as image patches, which include a plurality of height data points collected from the regions of interest containing features of interest of the wafer, where the observed data 1405 comprises a height profile of the structure(s) captured by an atomic force microscope tool; paragraphs [0136], [0173-0175], [0186]), each height image patch containing at least a portion of a feature (each image region acts as an image patch, includes a height profile, and includes features of interest of the sample wafer being imaged; paragraphs [0180], [0184-0187]); applying one or more effects to each of the height image patches to obtain a corresponding modified height image patch for each height image patch (the second generator of the cycle-consistent GAN may obtain input non-simulation image 712 including image data x and convert it to image 718 in the source domain by applying the mapping F, where mapping F acts as the effect which converts/changes image 712 into image 718, and the differences in the image are the applied effect on the image; paragraphs [0085], [0110-0114]); inputting one or more of the modified height image patches into the neural network (observed data 1405 is used to determine predicted surface map 1421, which uses an average height value of the observed structures; the average height comprises a modified height value and is input into the model, which runs via a neural network model; fig 14; paragraphs [0187-0192]); using the neural network to identify a feature in each of the one or more modified height image patches (the neural network based model is used to identify features, wherein each region of the captured image of the semiconductor wafer sample comprises a feature of said wafer 203, which includes a predicted surface map having an average height applied, which is a modified height value; paragraphs [0048], [0069], [0075], [0087], [0176], [0181], [0187-0192]); and training the neural network based on the identification (the neural network is further adapted to be trained to improve accuracy in the identification process; the training may be categorized into three types: supervised training, unsupervised training, and reinforcement training; in supervised training, a set of target output data, also referred to as "labels" or "ground truth," may be generated based on a set of input data, and the data may be simulated or non-simulated training data; paragraphs [0085-0086], [0207]).
As per claim 2, HOUBEN discloses the method of claim 1, wherein the height image data comprises real data obtained from a real sample (the data used comprises simulated and non-simulated real data; paragraphs [0085], [0189], [0207]).
As per claim 3, HOUBEN discloses the method of claim 1, wherein the height image data comprises simulated data (the data used comprises simulated and non-simulated real data; paragraphs [0085], [0189], [0207]).
As per claim 4, HOUBEN discloses the method of claim 1, wherein the one or more effects comprise at least one of: a rotation; a reflection; applying noise; raising or lowering brightness; raising or lowering contrast; zooming in; and zooming out (identity mapping loss methods are used to evaluate preservation (which would include raising or lowering the values of) a plurality of global image features including at least one of color composition, gray level, brightness (as claimed), contrast, saturation, or tint, etc.; further, conventional image processing techniques may only be capable of processing a limited number of image features, which may prevent further reducing the critical dimension measurement delta to a lower level, whereas HOUBEN allows for multiple image features to be processed at once; paragraphs [0051-0054], [0071-0072]).
As per claim 5, HOUBEN discloses the method of claim 1, wherein the plurality of height image patches comprises 10000 or more height image patches (the training process may terminate when the number of iterations reaches a predetermined number of simulated and non-simulated images having regions of interest comprising a feature of height-adjusted wafer images (adjusted to an average height using a predicted surface map 1421), and the number of training iterations run may be set above 10000 or to any value desired by users; paragraph [0181]).
As per claim 6, HOUBEN discloses the method of claim 5, wherein the plurality of height image patches comprises 15000 or more height image patches (the training process may terminate when the number of iterations reaches a predetermined number of simulated and non-simulated images having regions of interest comprising a feature of height-adjusted wafer images (adjusted to an average height using a predicted surface map 1421), and the number of training iterations run may be set above 15000 or to any value desired by users; paragraph [0181]).
As per claim 7, HOUBEN discloses the method of claim 1, wherein the neural network is trained based on each of the modified height image patches (the neural network model is calibrated (trained) using one-dimensional height data that includes the height profile of the structures along a cut line; the observed data 1405 can include two-dimensional height data of the structure traced from input non-simulation image 1404, the two-dimensional height data comprising height data of the structures along a first direction and a second direction; observed data 1405 includes shape parameters obtained from the optical metrology tool used to measure structures of the patterned substrate; further, first surface estimation model 1420 can be trained to generate predicted surface estimation data based on input data, which includes the modified average-height predicted surface map image regions comprising wafer features used as the input data; paragraphs [0179], [0187-0190]).
As per claim 8, HOUBEN discloses the method of claim 1, wherein each height image patch comprises at least a corner, edge or central portion of a feature (the wafer/sample features include corners and edges of the wafer substrate, among other feature types; fig 3; paragraph [0075]).
As per claim 10, HOUBEN discloses the method of claim 1, wherein each height image patch comprises height measurements of an area of a surface of the sample (the height measurements used to find the average height value are height measurements of wafer features contained fully in the image region or partly in the image region; paragraphs [0075], [0120], [0124], [0173-0175], [0181-0187]).
As per claim 11, HOUBEN discloses the method of claim 1, wherein a plurality of the features have at least a portion of the feature contained in at least one of the height image patches (the height measurements used to find the average height value are height measurements of wafer features contained fully in the image region or partly in the image region; paragraphs [0075], [0120], [0124], [0173-0175], [0181-0187]).
As per claim 12, HOUBEN discloses the method of claim 1, wherein some of the height image patches contain an entire feature and some of the height image patches contain a portion of a feature (the regions of interest of the SEM images that have been adjusted for an average height include a plurality of feature types and each region contains at least one wafer feature and some include a plurality of features; paragraphs [0075], [0120], [0124], [0173-0175], [0181-0187]).
As per claim 13, HOUBEN discloses the method of claim 1, wherein the height image data comprises probe microscope data (the image data including the height image data is provided via a microscope imaging device; fig 3; paragraph [0071]).
As per claim 14, HOUBEN discloses a method of performing surface metrology of a sample (a system, and a method of using the system, to train a neural network using surface metrology data, which neural network is implemented in a first surface estimation model 1420 and is a convolutional neural network (CNN) to be used in wafer surface estimation of a sample; abstract; figs 1-3; paragraphs [0013], [0051-0053], [0167], [0181], [0186-0187]), the method comprising: training a neural network by a method according to claim 1 (training the neural network (CNN) of the surface estimation model to improve accuracy in the identification process; the training may be categorized into three types: supervised training, unsupervised training, and reinforcement training; in supervised training, a set of target output data, also referred to as "labels" or "ground truth," may be generated based on a set of input data, and the data may be simulated or non-simulated training data; paragraphs [0085-0086], [0207]), thereby generating a trained neural network (generating a trained CNN of the surface estimation model 1420; abstract; figs 1-3; paragraphs [0013], [0051-0053], [0167], [0181], [0186-0187]); scanning a new sample to obtain a series of height measurements of the new sample (observed data 1405 is paired with input non-simulation image 1404 and is measured data of structures of a sample, which is measured by input non-simulation image 1404 of the sample and includes height profiles of the structures along the cut line of the wafer; paragraphs [0181-0188]); and operating the trained neural network to identify feature data in the series of height measurements of the new sample (using pipelined process 1441, trained first surface estimation model 1420 is configured to receive domain adapted image 1411, which is predicted by domain adaptation technique 1410, and to generate predicted surface estimation data 1421; as shown in FIG. 14B, predicted surface estimation data 1421 can represent predicted 3D geometric features corresponding to domain adapted image 1411, which has had its feature height adjusted to the average height; paragraphs [0181-0190]).
As per claim 15, HOUBEN discloses apparatus for training a neural network for use in surface metrology, comprising a processor configured to perform the method of claim 1 (the computing system used to perform the image processing method includes a computing processor; fig 1; paragraph [0213]).
As per claim 16, HOUBEN discloses a non-transitory computer-readable medium that, when read by a computer, causes the computer to perform the method of claim 1 (the computing system used to perform the image processing method includes a storage memory component which stores programs and instructions to execute the image processing method; fig 1; paragraph [0213]).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
Claim 9 is rejected under 35 U.S.C. 103 as being obvious over HOUBEN in view of US 2018/0082106 A1 to INABA et al. (hereinafter “INABA”).
As per claim 9, HOUBEN discloses the method of claim 1. HOUBEN fails to disclose further comprising: obtaining, from the height image data, a plurality of additional height image patches that do not contain a feature or a portion of a feature; applying one or more effects to each of the additional height image patches to obtain a corresponding modified additional height image patch for each additional height image patch; inputting one or more of the modified additional height image patches into the neural network; using the neural network to determine that there are no features in each of the one or more modified additional height image patches; and training the neural network based on the determination.
INABA discloses further comprising: obtaining, from the height image data, a plurality of additional height image patches that do not contain a feature or a portion of a feature (a CNN model for recognizing a feature/pattern of an image of an object/sample, including a case where no feature/pattern is detected in an image comprising an image region (patch); paragraphs [0060], [0073-0074]); applying one or more effects to each of the additional height image patches to obtain a corresponding modified additional height image patch for each additional height image patch (the user may apply parameter/feature variation to each of the image regions including patterns, and information indicating that images of sufficient variation for recognition have been obtained is displayed on the display apparatus 3 to the user; paragraphs [0073-0074]); inputting one or more of the modified additional height image patches into the neural network (the feature variation images may vary a plurality of features including height values; paragraphs [0073-0074]); using the neural network to determine that there are no features in each of the one or more modified additional height image patches (the network is adapted to determine that there is no isolated pattern of appearances of the features of interest; paragraphs [0073-0074]); and training the neural network based on the determination (the CNN model learns (is taught) a feature or a pattern from a generated reproduced image; paragraphs [0060], [0073-0074]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify HOUBEN to include a plurality of additional height image patches that do not contain a feature or a portion of a feature, as taught by INABA. The suggestion/motivation for doing so would have been to provide a trained CNN model for image sample feature identification purposes, as suggested by paragraph [0060] of INABA. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine INABA with HOUBEN to obtain the invention as specified in claim 9.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DEVIN JACOB DHOOGE whose telephone number is (571) 270-0999. The examiner can normally be reached 7:30-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee, can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Devin Dhooge/
USPTO Patent Examiner
Art Unit 2677
/ANDREW W BEE/Supervisory Patent Examiner, Art Unit 2677