Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Notice to Applicants
This communication is in response to the application filed on 03/12/2024.
Claims 1-20 are pending.
Information Disclosure Statement
The information disclosure statements (IDSs) filed on 01/19/2025 and 07/20/2025 have been considered.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-11, 13-17, and 19-20 are rejected under 35 U.S.C. § 102(a)(1) as being anticipated by US 2022/0043251 A1 to MOORE et al. (hereinafter “MOORE”).
As per claim 1, MOORE discloses a system configured for generating information for use in evaluating a deep learning model (a system and method for generating multi-layer model updates/improvements (based on an evaluation step of pass or fail) that evolve and improve based on training using training data in order to train deep learning neural network models adapted to perform image stacking of images of a sample, aligning them over a defect/feature of interest, and determining sample adequacy based on image feature analysis; abstract; figs 1 and 12; paragraphs [0070-0072], [0092], [0132-0134]), comprising: a computer subsystem configured for: acquiring results generated by a deep learning model configured for assigning an attribute to images generated for a specimen responsive to a likelihood that the images are images of interest (the system comprises a computer which comprises computing components such as a memory and processor to execute instructions to perform the methods; the system is adapted to acquire image results from the multi-layer deep learning model, which is adapted to assign an attribute of adequacy (the likelihood that the sample is adequate, i.e., likely to give a successful sample analysis result) for processing when presented an image of a sample based on user-defined and selected image features from a plurality of selectable image features; paragraphs [0070-0072], [0092], [0126-0129], [0138], [0149]); separating the images into two or more groups based on the attribute such that each of the two or more groups corresponds to different values of the attribute (the images are separated into classes of adequacy based on Boolean logic, meaning the images are sorted into “pass” or “fail” groups and each group would have a value such as a one or a zero; paragraphs [0070-0072], [0092], [0126-0129]); aligning the images in each of the two or more groups to each other (using a geometric transformation function, the images from both groups are stacked and aligned together; 
abstract; figs 1 and 12; paragraphs [0070-0072], [0092]); stacking the aligned images within each of the two or more groups, thereby highlighting in the stacked images one or more features of the images to which the attribute is responsive (the stacked and aligned images are geometrically transformed such that both groups of images are aligned together based on the selected feature of interest, which may then be highlighted as an area of interest by the operator, who may also use a predefined shape to overlay onto the sample image as the area of interest, allowing multiple ways for the user to highlight desired features of interest used for alignment; abstract; figs 1 and 12; paragraphs [0070-0072], [0092], [0126-0129], [0139-0140]); and outputting the stacked images for use in evaluating the deep learning model (the system is adapted to output stacked images to a display such that the user may observe and interact with the images via the system's GUI and apply the aforementioned visual highlight annotations as desired; abstract; figs 1 and 12; paragraphs [0070-0072], [0092], [0139-0140]).
As per claim 2, MOORE discloses the system of claim 1, wherein stacking the aligned images deemphasizes one or more other features of the images to which the attribute is less responsive than the one or more features (the user is able to manually select the features of interest and would be able to leave a feature unselected in order to deemphasize it, or to select the opposite features to be trained for identification, which would in turn deemphasize the feature for alignment purposes during the model improvement process; paragraphs [0070-0072], [0092], [0126-0129], [0132-0134]).
As per claim 3, MOORE discloses the system of claim 1, wherein stacking the aligned images deemphasizes noise in the images (using a transfer function at block S1080 results in the removal (deemphasis) of noise; paragraphs [0070-0072], [0092], [0097], [0132-0134]).
As per claim 4, MOORE discloses the system of claim 1, wherein said outputting comprises displaying the stacked images to a user, thereby conveying to the user the one or more features of the images used by the deep learning model for assigning the attribute (the outputting step comprises displaying the stacked images, with the features of interest highlighted, via the GUI available to the user, so the user may observe the display screen and see the features used/highlighted as important to the adequacy decision; abstract; figs 1 and 12; paragraphs [0070-0072], [0092], [0126-0129], [0139-0140]).
As per claim 5, MOORE discloses the system of claim 1, wherein said outputting comprises displaying the stacked images for each of the two or more groups to a user, thereby conveying to the user the one or more features of the images to which the attribute is responsive (the system is adapted to output the stacked image result of each group based on a geometric transformation, in which the system applies an image coordinate system for the purposes of aligning images that were taken under different conditions, correcting images for lens distortion, correcting effects of camera orientation, and/or image morphing or other special effects; in the spatial transformation, each point (x,y) of image A is mapped to a point (u,v) in a new coordinate system, which allows for stacking of the images based on identified feature/attribute locations now that the coordinate system is standard; abstract; figs 1 and 12; paragraphs [0070-0072], [0092], [0126-0129], [0139-0140]).
As per claim 6, MOORE discloses the system of claim 1, wherein the one or more features highlighted in the stacked images to which the attribute is responsive are visually perceptible by a user in fewer than all of the images in any one of the two or more groups (images in which user-selected features of interest are identified are highlighted using the GUI to apply visual annotations, such as a highlight to a feature/attribute of interest if the feature exists within the image; images not containing features/attributes of interest would not be highlighted; abstract; figs 1 and 12; paragraphs [0070-0072], [0092], [0126-0129], [0139-0140]).
As per claim 7, MOORE discloses the system of claim 1, wherein the deep learning model is further configured for assigning values of the attribute within a range from 0 to 1 (the images are separated into classes of adequacy based on Boolean logic meaning the images are sorted into “pass” or “fail” groups and each group would have a value such as a one or a zero; paragraphs [0070-0072], [0092], [0126-0129], [0132-0134]), wherein the assigned values of 1 indicate that the images are the images of interest (if the image receives a “1” or a “pass” the image is deemed adequate for processing and is of interest; paragraphs [0070-0072], [0092], [0126-0129], [0132-0134]), and wherein the assigned values of 0 indicate that the images are not the images of interest (if the image receives a “0” or a “fail” the image is deemed not adequate for processing and is not of interest; paragraphs [0070-0072], [0092], [0126-0129], [0132-0134]).
As per claim 8, MOORE discloses the system of claim 1, wherein said aligning comprises aligning the images based on a location of a defect detected in the images (a spatial transformation step uses the geometric transformation function to convert the images to a standard coordinate system/space and aligns the images in the standard coordinate space based on coordinates of the features of interest in a (u,v) coordinate space; paragraphs [0070-0072], [0092]).
As per claim 9, MOORE discloses the system of claim 1, wherein evaluating the deep learning model comprises comparing the highlighted one or more features in the stacked images for each of the two or more groups and identifying the one or more features used by the deep learning model to assign the attribute to the images based on said comparing (as described in para [0070], the user selects the features of interest on which to train the multi-layer model and is then able to highlight those features using the image annotation feature provided over the user-operable GUI, such that new samples can be matched via the model by individually comparing each feature from the new image to the reference samples, a database of previous samples or images already highlighted for features of interest, which can be used to identify candidate matching features of the input image via comparison to determine the class in which it is to be placed; abstract; figs 1 and 12; paragraphs [0070-0072], [0092], [0106], [0126-0129], [0132-0134], [0139-0140]).
As per claim 10, MOORE discloses the system of claim 1, wherein evaluating the deep learning model comprises determining if the deep learning model is overfitting to non-meaningful features in the stacked images in any one of the two or more groups (the system is adapted to provide the user the ability to apply rules to the deep learning model; these rules comprise, according to para [0088], the ability to create an algorithm to teach a deep learning algorithm, or a manual rule set passed to the model, which would create a model having rules that dictate which one or more features in an image are emphasized and to what degree; this would allow the user to modify the trained/set rule for a feature that is experiencing overfitting, which would be identified in the model assessment steps described in paragraph [0134] for model improvement, and, in order to improve the model, modify the rule to prevent overfitting; paragraphs [0070-0072], [0088], [0132-0134]).
As per claim 11, MOORE discloses the system of claim 1, wherein evaluating the deep learning model comprises determining if the deep learning model is suitable for use in assigning the attribute to images generated for other specimens (the deep learning model generates stacked image results and uses the results to improve and retrain the model continuously, such that the one or more trained deep learning models are evaluated using test data and images not used in the model development to inform a user of model viability (during the assessment step, the models are determined to be suitable for use or in need of further improvements); after model assessment, the trained models are applied to images/data at block S1340, resulting in a computer-assisted assessment at block S1350; using the assessment, the models may continue to evolve to improve accuracy through additional data and user feedback; paragraphs [0132-0134]).
As per claim 13, MOORE discloses the system of claim 1, wherein the images generated for the specimen are training images in a training data set (the system is adapted to input a whole slide image acquired using ptychography, which may be fed into a machine learning or deep learning model for training to detect adequacy; fig 1; paragraph [0132]), and wherein the results are generated by the deep learning model during training of the deep learning model (the deep learning model generates stacked image results and uses the results to improve and retrain the model continuously, such that the one or more trained deep learning models are evaluated using test data and images not used in the model development to inform a user of model viability; after model assessment, the trained models are applied to images/data at block S1340, resulting in a computer-assisted assessment at block S1350; using the assessment, the models may continue to evolve to improve accuracy through additional data and user feedback; paragraphs [0132-0134]).
As per claim 14, MOORE discloses the system of claim 1, wherein the deep learning model is trained prior to generating the results acquired by the computer subsystem (the system's deep learning model is initially pre-trained and then improved; paragraphs [0132-0134], [0136]).
As per claim 15, MOORE discloses the system of claim 1, wherein the computer subsystem is further configured for transforming the aligned images into a different domain (a spatial transformation step uses the geometric transformation function to convert the images to a standard coordinate system/space and aligns the images in the standard coordinate space based on coordinates of the features of interest in a (u,v) coordinate space; the features are selected so that they are well localized in both the spatial and frequency domains, thus reducing the likelihood of disruptive effects like occlusion, clutter, or noise; paragraphs [0070-0072], [0092], [0126-0129]), and wherein stacking the aligned images comprises stacking the transformed aligned images within each of the two or more groups (in the spatial transformation, each point (x,y) of image A is mapped to a point (u,v) in a new coordinate system, which allows for stacking of the images based on identified feature/attribute locations now that the coordinate system is standard; abstract; figs 1 and 12; paragraphs [0070-0072], [0092], [0126-0129], [0139-0140]).
As per claim 16, MOORE discloses the system of claim 1, further comprising one or more components executed by the computer subsystem, wherein the one or more components comprise the deep learning model (the deep learning model is stored on the system computer inside the system memory component, which is a computer-readable medium connected to a processor to run the models, perform improvements to the models, and perform the methods described; fig 34; paragraph [0149]), and wherein said acquiring comprises generating the results by inputting the images generated for the specimen into the deep learning model (illuminating the sample and capturing one or more images of the sample S144, loading the one or more images S146, receiving one or more input parameters S148, iteratively reconstructing the one or more images into a high-resolution image S150, post-processing S152 (generated images), and assessing the sample using feature extraction and/or one or more machine and/or deep learning models at S154; fig 1; paragraph [0054]).
As per claim 17, MOORE discloses the system of claim 1, further comprising an inspection subsystem configured for generating the images for the specimen (the computing system includes a self-contained computational microscope comprising an embedded user interface and is used to image the samples at a microscopic level; figs 1, 8-11, and 32-34; paragraphs [0033], [0056]).
As per claim 19, MOORE discloses a non-transitory computer-readable medium (a deep learning model is stored on the system computer inside the system memory component, which is a computer-readable medium connected to a processor to run the models, perform improvements to the models, and perform the methods described; fig 34; paragraph [0149]), storing program instructions executable on a computer system for performing a computer-implemented method for generating information for use in evaluating a deep learning model (a computer-based system and method for generating multi-layer model updates/improvements (based on an evaluation step of pass or fail) that evolve and improve based on training using training data in order to train deep learning neural network models adapted to perform image stacking of images of a sample, aligning them over a defect/feature of interest, and determining sample adequacy based on image feature analysis; abstract; figs 1 and 12; paragraphs [0070-0072], [0092], [0132-0134]), wherein the computer-implemented method comprises: acquiring results generated by a deep learning model configured for assigning an attribute to images generated for a specimen responsive to a likelihood that the images are images of interest (the system comprises a computer which comprises computing components such as a memory and processor to execute instructions to perform the methods; the system is adapted to acquire image results from the multi-layer deep learning model, which is adapted to assign an attribute of adequacy (the likelihood that the sample is adequate, i.e., likely to give a successful sample analysis result, expressed as a pass/fail score) for processing when presented an image of a sample based on user-defined and selected image features from a plurality of selectable image features; paragraphs [0070-0072], [0092], [0126-0129], [0138], [0149]); separating the images into two or more groups based on the attribute such that each of the two or more groups corresponds to 
different values of the attribute (the images are separated into classes of adequacy based on Boolean logic, meaning the images are sorted into “pass” or “fail” groups and each group would have a value such as a one or a zero; paragraphs [0070-0072], [0092], [0126-0129]); aligning the images in each of the two or more groups to each other (using a geometric transformation function, the images from both groups are stacked and aligned together; abstract; figs 1 and 12; paragraphs [0070-0072], [0092]); stacking the aligned images within each of the two or more groups, thereby highlighting in the stacked images one or more features of the images to which the attribute is responsive (the stacked and aligned images are geometrically transformed such that both groups of images are aligned together based on the selected feature of interest, which may then be highlighted as an area of interest by the operator, who may also use a predefined shape to overlay onto the sample image as the area of interest, allowing multiple ways for the user to highlight desired features of interest used for alignment; abstract; figs 1 and 12; paragraphs [0070-0072], [0092], [0126-0129], [0139-0140]); and outputting the stacked images for use in evaluating the deep learning model (the system is adapted to output stacked images to a display such that the user may observe and interact with the images via the system's GUI and apply the aforementioned visual highlight annotations as desired; abstract; figs 1 and 12; paragraphs [0070-0072], [0092], [0139-0140]).
As per claim 20, MOORE discloses a computer-implemented method for generating information for use in evaluating a deep learning model (a computer-based system and method for generating multi-layer model updates/improvements (based on an evaluation step of pass or fail at the assessment step) that evolve and improve based on training using training data in order to train deep learning neural network models adapted to perform image stacking of images of a sample, aligning them over a defect/feature of interest, and determining sample adequacy based on image feature analysis; abstract; figs 1 and 12; paragraphs [0070-0072], [0092], [0132-0134]), comprising: acquiring results generated by a deep learning model configured for assigning an attribute to images generated for a specimen responsive to a likelihood that the images are images of interest (the system comprises a computer which comprises computing components such as a memory and processor to execute instructions to perform the methods; the system is adapted to acquire image results from the multi-layer deep learning model, which is adapted to assign an attribute of adequacy (the likelihood that the sample is adequate, i.e., likely to give a successful sample analysis result, expressed as a pass/fail score) for processing when presented an image of a sample based on user-defined and selected image features from a plurality of selectable image features; paragraphs [0070-0072], [0092], [0126-0129], [0138], [0149]); separating the images into two or more groups based on the attribute such that each of the two or more groups corresponds to different values of the attribute (the images are separated into classes of adequacy based on Boolean logic, meaning the images are sorted into “pass” or “fail” groups and each group would have a value such as a one or a zero; paragraphs [0070-0072], [0092], [0126-0129]); aligning the images in each of the two or more groups to each other (using a geometric transformation function to standardize the image 
coordinates, the images from both groups are stacked and aligned together; abstract; figs 1 and 12; paragraphs [0070-0072], [0092]); stacking the aligned images within each of the two or more groups, thereby highlighting in the stacked images one or more features of the images to which the attribute is responsive (the stacked and aligned images are geometrically transformed such that both groups of images are aligned together based on the selected feature of interest, which may then be highlighted as an area of interest by the operator, who may also use a predefined shape to overlay onto the sample image as the area of interest, allowing multiple ways for the user to highlight desired features of interest used for alignment; abstract; figs 1 and 12; paragraphs [0070-0072], [0092], [0126-0129], [0139-0140]); and outputting the stacked images for use in evaluating the deep learning model (the system is adapted to output stacked images to a display such that the user may observe and interact with the images via the system's GUI and apply the aforementioned visual highlight annotations as desired; abstract; figs 1 and 12; paragraphs [0070-0072], [0092], [0139-0140]), wherein said acquiring, separating, aligning, stacking, and outputting are performed by a computer subsystem (the method is performed via a computer system comprising a memory storing instructions related to the methods described and a processor to execute those methods; fig 34; paragraphs [0070-0072], [0092], [0126-0129], [0132-0134], [0139-0140], [0149]).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
Claims 12 and 18 are rejected under 35 U.S.C. § 103 as being unpatentable over US 2022/0043251 A1 to MOORE et al. (hereinafter “MOORE”) in view of US 2024/0289940 A1 to DUBOVSKI et al. (hereinafter “DUBOVSKI”).
As per claim 12, MOORE discloses the system of claim 1. MOORE fails to disclose wherein the computer subsystem is further configured for training the deep learning model with a training data set comprising images of defects of interest designated as the images of interest.
DUBOVSKI discloses wherein the computer subsystem is further configured for training the deep learning model with a training data set comprising images of defects of interest designated as the images of interest (using a training set of images, some having known defects, the system is trained to sort and classify the SEM semiconductor images by defect type as having defects or not having defects based on a quality threshold, using simulated images that have been generated and used as a training set to train the deep learning model on defect recognition; fig 1; paragraphs [0043], [0046], [0051-0053], [0056], [0077], [0103-0105], [0121]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify MOORE to have a training data set comprising images of defects of interest designated as the images of interest, as taught by DUBOVSKI. The suggestion/motivation for doing so would have been to provide the system the ability to simulate images to be used in the training set in order to train the model on any type of defect and provide a vast number of training images in comparison to using real captured images, therefore improving model accuracy, as suggested by DUBOVSKI at paragraph [0077]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine DUBOVSKI with MOORE to obtain the invention as specified in claim 12.
As per claim 18, MOORE discloses the system of claim 1. MOORE fails to disclose wherein the images generated for the specimen are optical wafer images generated by an inspection subsystem.
DUBOVSKI discloses wherein the images generated for the specimen are optical wafer images generated by an inspection subsystem (the stacked overlay images are semiconductor wafer images and are generated via a microscopic inspection system similar to the microscopic inspection system of MOORE; abstract; figs 1, 2A-2B; paragraphs [0016], [0042], [0047], [0053-0055]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify MOORE such that the images generated for the specimen are optical wafer images, as taught by DUBOVSKI. The suggestion/motivation for doing so would have been to provide the ability to use similar microscopic imaging methods and the same quality determination methods of MOORE to inspect and determine adequacy based on trained feature identification and extraction performed by a deep learning model in order to generate stacked overlay images of specimens that are semiconductor wafers, as suggested by paragraphs [0042] and [0047] of DUBOVSKI. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine DUBOVSKI with MOORE to obtain the invention as specified in claim 18.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. These references include the following:
US 2023/0351568 A1
US 2022/0028052 A1
US 2023/0349838 A1
US 2021/0334989 A1
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DEVIN JACOB DHOOGE whose telephone number is (571) 270-0999. The examiner can normally be reached 7:30-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee, can be reached on (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Devin Dhooge/
USPTO Patent Examiner
Art Unit 2677
/ANDREW W BEE/Supervisory Patent Examiner, Art Unit 2677