Prosecution Insights
Last updated: April 19, 2026
Application No. 18/594,453

CONTOUR PROBABILITY PREDICTION METHOD

Status: Non-Final OA (§103)
Filed: Mar 04, 2024
Examiner: CODRINGTON, SHANE WRENSFORD
Art Unit: 2667
Tech Center: 2600 — Communications
Assignee: Samsung Electronics Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 100% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
With Interview: 0%

Examiner Intelligence

Career Allow Rate: 100% (1 granted / 1 resolved), above average, +38.0% vs TC avg
Interview Lift: -100.0% (minimal lift; based on resolved cases with interview)
Avg Prosecution: 2y 9m (typical timeline)
Career History: 15 total applications across all art units; 14 currently pending

Statute-Specific Performance

§101: 5.3% (-34.7% vs TC avg)
§103: 60.5% (+20.5% vs TC avg)
§102: 23.7% (-16.3% vs TC avg)
§112: 7.9% (-32.1% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 1 resolved case.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 03/04/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5 and 14 are rejected under 35 U.S.C.
103 as being unpatentable over Zhang et al. (hereinafter "Zhang"; WO 2021083608 A1) in view of Chen et al. (hereinafter "Chen"; "CNN-Based Quality Assurance for Automatic Segmentation of Breast Cancer in Radiotherapy").

As per claim 1: Zhang teaches measuring a wafer with an imaging device to obtain an image of the wafer (Paragraph [0352]: "a training measured image that is obtained from an image capture device and aligned with the training target image" and "obtaining (a) a measured image from an image capture device, and (b) a predicted measured image from a machine learning model trained to generate the predicted measured image from a target image associated with a design pattern to be printed on a substrate, the measured image corresponding to the design pattern printed on the substrate") on which a process has been performed according to a design image (Zhang ties the measured image to a design pattern and its associated target image: Paragraph [0035]: "obtaining an input target image associated with a reference design pattern, and a reference measured image associated with a specified design pattern printed on a substrate" and Paragraph [0210]: "each image pair includes a) the training target image 1210 associated with the training design pattern 905, and b) the training measured image 1215 that is aligned with the training target image 1210." Here, Zhang's design image is "the target image associated with a design pattern"), and acquiring a plurality of contour images for the image of the wafer (Paragraph [0352]: "obtaining a plurality of image pairs, wherein each image pair includes a) a training target image associated with a training design pattern, and b) a training measured image" and Figure 12, label 1215).

Zhang does not explicitly teach representing the contour information in the measured images as an explicit mean/dispersion probability distribution image and using that distribution representation as an additional model input
during training.

Chen teaches a contour probability prediction training method for probabilistically predicting a contour (Chen shows probability/uncertainty maps for contours and their use as inputs to a CNN; Discussion: "maps of segmentation probability and uncertainty were introduced to predict the contours quality"), determining a contour average ("Each pixel (i, j) denotes the probability that the pixel (i, j) belongs to the region to be segmented. The 'probability map' represents the predicted contour to some extent." Note that a probability value at each pixel is the expected membership of that pixel in the contour region; an image whose pixels are expected memberships is an average (expected) contour representation. Chen ties the probability map to the contour by stating it "represents the predicted contour"), and a contour standard deviation (Inputs of the QA Network section: "where u(i, j) denotes the uncertainty of the pixel (i, j). The pixels with higher uncertainty correspond to the ones that lie close to the decision boundary" and Discussion section: "the uncertainty map represents the confidence of the model." Claim 1's "standard deviation" requires a dispersion or variability representation derived from plural contour evidence. Chen's uncertainty map reads on the contour standard deviation because a pixel-wise spread/dispersion map for contour location and assignment can be represented as an uncertainty map; an uncertainty map displays the standard deviation of predicted values across a dataset, acting as a pixel-wise measure of confidence) for a plurality of images (CT images from different cases; see the Materials and Methods/Patient Data section), and generating a probability distribution image representing a probability distribution based on the contour average and the contour standard deviation (Chen creates two maps that together constitute a distribution-style representation.
A probability map: "Each pixel…denotes the probability…the probability map represents the predicted contour," and an uncertainty map: "where u(i, j) denotes the uncertainty of the pixel (i, j). The pixels with higher uncertainty correspond to the ones that lie close to the decision boundary." Chen states that both maps are needed and are intended as direct network inputs: "Although the uncertainty map was calculated from the probability map…we believe that both are necessary…we intended to directly input these two parameters into the network." The probability map functions as the mean/expected contour image; the uncertainty map functions as the dispersion/variability image (a contour standard deviation in substance). Together they form the probability distribution image, and they can be seen as combined information in Figure 1's flow: mean and dispersion information in image form.) and deep-learning training a probability prediction model using the design image and the probability distribution image as inputs for the probability prediction model (Chen supplies the model input structure, with the probability/uncertainty maps as direct inputs to a CNN: "The inputs of the network included…the…image, the generated probability map and the uncertainty map." The inputs and outputs can be seen in Figure 1. Chen shows a probability distribution image wherein the probability map represents the expected mean contour and the uncertainty map represents the dispersion/standard deviation of that contour prediction, such that the two maps together encode a full probability distribution image. This is supported by the arrow between the two maps in Figure 1 and Chen's explicit statement that both maps are necessary and directly input to the network. In essence, the probability map is the central estimate and the uncertainty map is the variability of that estimate, together creating a probability distribution.
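The mean/dispersion construction discussed above, a per-pixel average (probability map) and per-pixel standard deviation (uncertainty/dispersion map) taken over a stack of contour images of the same pattern, can be sketched as follows. This is an illustrative toy, not code from any cited reference; the function name and two-channel packing are assumptions for demonstration only.

```python
import numpy as np

def probability_distribution_image(contour_stack: np.ndarray) -> np.ndarray:
    """Hypothetical sketch: contour_stack is an (N, H, W) array of N binary
    contour images of the same pattern. Returns a two-channel (2, H, W)
    'probability distribution image': channel 0 is the per-pixel mean
    (expected contour membership) and channel 1 is the per-pixel standard
    deviation (dispersion of the contour across the N observations)."""
    prob_map = contour_stack.mean(axis=0)  # expected/mean contour image
    std_map = contour_stack.std(axis=0)    # dispersion/uncertainty image
    return np.stack([prob_map, std_map], axis=0)

# Three toy 2x2 contour images of the same feature; the pixel at (1, 0)
# flickers across observations, so it should show nonzero dispersion.
stack = np.array([
    [[1, 0], [1, 0]],
    [[1, 0], [0, 0]],
    [[1, 0], [1, 0]],
], dtype=float)

dist = probability_distribution_image(stack)
print(dist[0])  # probability map
print(dist[1])  # uncertainty map
```

In this sketch the stable pixel at (0, 0) gets probability 1.0 with zero dispersion, while the flickering pixel at (1, 0) gets probability 2/3 with a nonzero standard deviation, mirroring the "close to the decision boundary" behavior the quoted passages describe.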
In regard to a design image input, Chen uses the CT image as geometric information and contrast for the extraction of useful features such as contours; this input in Chen's CNN is analogous to the design image.)

Accordingly, a person of ordinary skill in the art would have been motivated to modify Zhang's workflow with Chen's deep learning pipeline by deriving, from each measured image, a probability map representing the mean/expected contour and an uncertainty map representing the dispersion/variability (standard deviation) of the contour, and by providing these maps as a "probability distribution image" input alongside the design target image during training. A person of ordinary skill in the art would do so because Chen teaches that probability and uncertainty maps supply complementary mean/dispersion information that improves robustness and learning quality when directly input to a CNN along with a design image. Zhang already teaches a contour prediction method in which the plurality of wafer measurement images is collected from an imaging device. This modification yields a probability prediction model that learns not only a nominal mean contour of a wafer but also where the contour is inherently variable and uncertain, with the design image used as a guide to contrast against. This improves prediction reliability under process variation compared to training with design target images and raw measured images alone.

As per claim 2: Zhang and Chen teach all claim limitations rejected in claim 1's 103 rejection; see claim 1's 103 rejection. Chen teaches generating a probability prediction image (Chen states in the Workflow of QA for Segmentation section that "the value of each pixel represents the probability that the pixel belongs to the contour to be segmented"; this maps to a probability prediction image because the output is an image and the pixel values are probabilities.
Furthermore, the cyclical nature of the design image being input to the segmentation model and a probability map being output supports this.) for the design image by inputting at least a part of the design image to a probability prediction model (Figure 1 shows the CT image, which plays the same functional role as the design image.) after the deep learning training (Chen describes using the trained deep learning network to generate prediction outputs from input images, stating that "2D CT images were the inputs, while the corresponding 2D segmentation probability maps were the outputs," which is an inference step performed after the network has been trained. Any explicit disclosure of inputs/outputs of a deep network presupposes the network being in a trained state, and therefore the action occurring after training.)

Accordingly, a person of ordinary skill in the art at the time this invention was effectively filed would have been motivated to perform claim 2's post-training generation step using Chen's trained deep learning probability prediction model, because Chen explicitly discloses the trained network's inference operation, i.e., after training, providing an input image to the network and obtaining a probability map image as output ("2D…images were the inputs while the corresponding…segmentation probability maps were the outputs"), which is the same post-training flow required by claim 2 (input design-associated image data and generate a probability prediction image). Further, using the trained model after training predictably enables producing the probability prediction image for a given design image (or a portion thereof) without retraining, yielding a practical and repeatable post-training prediction result consistent with Chen's disclosed system.
Zhang's disclosure, as previously stated, bridges Chen's general deep learning probability/uncertainty prediction framework to a wafer-level, design-driven application, thereby supplying the design image semantics and the wafer deployment context. Zhang provides a predictable and well-understood application environment in which Chen's probability prediction model is applied to design images associated with wafer patterns. The modified workflow thus yields the expected result of generating probability prediction images for semiconductor manufacturing analysis.

As per claim 3: Zhang and Chen teach all claim limitations rejected in claim 2's 103 rejection; see claim 2's 103 rejection. Chen teaches using the entire input image to generate the probability map, not just a section of the input design image (Figure 1's CT image/design image; Chen also states "the segmentation probability map has the same resolution as the CT image," which shows that the entire input image domain is used to generate the probability prediction image). Accordingly, a person of ordinary skill in the art at the time this invention was effectively filed would have been motivated to apply Chen's concept of using the full domain region of the design image in the modified Zhang/Chen workflow because doing so allows the trained probability prediction model to generate probability prediction images that correspond to the entire wafer design, thereby providing comprehensive, spatially consistent probability information across all target patterns on the wafer, which is a predictable and desirable result in semiconductor manufacturing analysis where wafer-level assessment is routinely performed.

As per claim 4: Zhang and Chen teach all claim limitations rejected in claim 1's 103 rejection; see claim 1's 103 rejection.
Zhang teaches that the plurality of contour images correspond to a portion of the wafer (Zhang describes a lithographic tool operating on a target portion of the substrate: Paragraph [0236]: "an entire pattern imparted to the radiation beam is projected onto a target portion C at one time (i.e. a single static exposure). The substrate table WT is then shifted in the X and/or Y direction so that a different target portion C can be exposed.", Paragraph [0229]: "Projection system (e.g. a reflective projection system) PS can be configured to project a pattern imparted to the radiation beam B by patterning device MA onto a target portion C (e.g. comprising one or more dies) of the substrate W." and Paragraph [0234]: "projection system PS, which focuses the beam onto a target portion C of the substrate W." A "target portion" or "die" is a portion of a wafer. We know this is from the plurality of images because Zhang states "Typically many measurements will be made on targets at different locations across the substrate W" in Paragraph [0262]; each "measurement" in this system is an imaging operation from the SEM image capture device.) Accordingly, a person of ordinary skill in the art at the time this invention was effectively filed would have been motivated to generate and use a plurality of contour-representing images (for example, probability and uncertainty maps tied to contour/boundary decisions, as taught by Chen) on a portion-by-portion basis for the substrate, as taught by Zhang. This is because Zhang explicitly teaches that patterning and metrology are performed at target portions and that "many measurements" are made at different locations, making it a routine implementation to produce the plurality of contour images corresponding to a portion of the wafer rather than requiring full-wafer contour imaging for each training instance.
This provides the advantages of reduced data and computation, as well as better alignment with how lithography and metrology tools operate in practice, while still providing multiple contour-representing images for training as Chen teaches.

As per claim 5: Zhang teaches obtaining a plurality of training image pairs (Paragraph [0210]: "obtaining a plurality of image pairs") without specifying any minimum above two or any requirement that the plurality must exceed 10. Accordingly, a person of ordinary skill in the art at the time this invention was effectively filed would have found it obvious to implement Zhang's concept of a plurality of images as a predictable design choice for controlling data acquisition burden. One of ordinary skill in the art would have been motivated to select such a small plurality because Chen's deep learning workflow context underscores the practical need to manage input data volume and computational efficiency when using probability/uncertainty-type map inputs in model-driven contour image pipelines.

As per claim 14: Zhang and Chen cover all claim limitations previously rejected in claim 1's 103 rejection; please see claim 1's 103 rejection. Zhang discloses the use of a GAN, which can be incorporated in the Zhang/Chen prediction model (Paragraph [0281]: "In some embodiments, the image generator model 2800 is an ML model, such as a Cycle-consistent Generative Adversarial Network (Cycle GAN), which is a variation of Generative Adversarial Network (GAN)." and Paragraph [0283]: "a cycle GAN has two GANs and each GAN has its own generator and discriminator pair.
In cycle GAN, a first generator will map an input image (e.g., "input.sub.A") from domain D.sub.A to some image in target domain "D.sub.B." A second generator maps back this output image generated by the first generator back to the original input.") Accordingly, a person of ordinary skill in the art at the time this invention was effectively filed would have found it obvious to implement the probability prediction model of the Zhang/Chen combination using a GAN, as taught by Zhang. Zhang teaches GAN-based image generation for lithography metrology/inspection, including SEM images, and explains that GAN generator/discriminator training is used to produce predicted images consistent with real measured images. This provides the predictable benefit of improved accuracy in image-based prediction for downstream contour determinations. A person of ordinary skill in the art is aware that a GAN allows generation of refined contour predictions with precise detection of object boundaries by learning the underlying data distribution, knows this can outperform traditional pixel-wise methods, and would have been motivated to pursue this implementation.

Claims 6-8 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (hereinafter "Zhang"; WO 2021083608 A1) in view of Chen et al. (hereinafter "Chen"; "CNN-Based Quality Assurance for Automatic Segmentation of Breast Cancer in Radiotherapy") in further view of Kooiman et al. (hereinafter "Kooiman"; US 20220342316 A1).

As per claim 6: Zhang and Chen cover all claim limitations (such as measuring a wafer with an imaging device) previously rejected in claim 1's 103 rejection; please see claim 1's 103 rejection. Neither Zhang nor Chen teaches that measuring a wafer with an imaging device comprises the wafer in an after-development inspection (ADI) state or the wafer in an after-clean inspection (ACI) state.
Kooiman teaches that measuring a wafer with an imaging device comprises the wafer in an after-development inspection (ADI) state or in an after-clean inspection (ACI) state (Paragraph [0383]: "Procedure P2352 includes obtaining an ADI of a substrate. For example, ADI can be obtained via a metrology tool such as SEM as discussed herein."). Accordingly, a person of ordinary skill in the art at the time this invention was effectively filed would have found it obvious to further modify the previously modified Zhang/Chen workflow to include Kooiman's concept of obtaining an ADI of a substrate (wafer) via SEM, and to arrive at performing the "measuring" step (image acquisition) specifically in an ADI state. One of ordinary skill in the art would have been motivated to make this modification because Kooiman expressly bases its downstream prediction modeling on ADI images, which provides a concrete, inspection-stage input image source for the measurement step already used by Zhang. This gives stage-specific measurement consistency: it ties the measuring step to an explicitly defined inspection stage (ADI), which gives a predictable, standardized image source for the pipeline, supporting more reliable downstream model use conditioned on ADI inputs.

As per claim 7: Zhang, Chen, and Kooiman cover all claim limitations previously rejected in claim 6's 103 rejection; please see claim 6's 103 rejection. Kooiman teaches that the ADI and ACI are inspection processes for checking at least one of defects, particles, or critical dimensions (CD) of the wafer (FIGS. 7A and 7B are examples of ADI and AEI showing defective and non-defective contact holes, according to an embodiment). Accordingly, a person of ordinary skill in the art at the time this invention was effectively filed would have found it obvious to further modify the previously modified Zhang/Chen workflow to include Kooiman's concept of using ADI as an inspection process for checking defects of the wafer.
A person of ordinary skill in the art would have been motivated to make this modification because Kooiman establishes ADI as a foundational inspection stage producing an image used to assess defectiveness. This allows the practitioner to gain stage-specific inspection consistency, linking defect checks to standardized ADI-derived images and improving the repeatability of defectiveness determination and dimensional monitoring across wafers.

As per claim 8: Zhang, Chen, and Kooiman cover all claim limitations previously rejected in claim 6's 103 rejection; please see claim 6's 103 rejection. Kooiman teaches that the process performed on the wafer includes an extreme ultraviolet (EUV) photolithography process (Paragraph [0073]: "FIG. 1 illustrates an exemplary lithographic projection apparatus 10A. Major components are a radiation source 12A, which may be a deep-ultraviolet excimer laser source or other type of source including an extreme ultra violet (EUV) source"). Accordingly, it would have been obvious at the time this invention was effectively filed to further modify the Zhang/Chen/Kooiman workflow to include Kooiman's concept of performing extreme ultraviolet photolithography on a wafer. By incorporating the EUV process, the same contour prediction workflow can be used on an EUV-patterned wafer, improving the ability to assess and manage contour variability and defect risk in EUV manufacturing.

Claims 15-18 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (hereinafter "Zhang"; WO 2021083608 A1) in view of Chen et al. (hereinafter "Chen"; "CNN-Based Quality Assurance for Automatic Segmentation of Breast Cancer in Radiotherapy") in further view of Slachter et al. (hereinafter "Slachter"; US 11079687 B2).

As per claim 15: Claim 15 recites the same training steps previously addressed in claims 1-4, which are covered by Zhang and Chen.
Neither Zhang nor Chen teaches the new limitations of outputting a corresponding target pattern as a hotspot when a probability value, corresponding to a probability of a defect pattern forming, of each of the plurality of target patterns included in the generated wafer probability image is equal to or greater than a threshold, or wherein the probability value corresponding to the defect pattern is a value representing a probability that each of the plurality of target patterns deviates from a valid standard.

Slachter teaches output as a hotspot when a probability value is equal to or greater than a threshold value (Column 24, line 24: "the term failure rate and failure probability of a feature may be used interchangeably."; Column 53, line 64: "the failure rate distribution 1910 enables determination of a probability of failure of a particular feature"; and Column 55, line 2: "dose/focus values for which failure probability is below a failure rate threshold value (e.g., 10.sup.−8) are selected."). This disclosure yields the concept that computing a per-feature probability and applying a probability threshold will trigger an output decision that can be reported as a "hot spot" ("Various patterns on or provided by a patterning device may have different process windows. i.e., a space of processing variables under which a pattern will be produced within specification…These patterns can be referred to as 'hot spots'…When the hot spots are not defective, it is most likely that all the patterns are not defective." The statement that "the hot spots are not defective" shows a decision being made, making the hot spot an output.) Slachter also teaches that the probability value corresponding to the defect pattern is a value that represents a probability that each target pattern deviates from a valid standard (Column 23, line 6: "the present disclosure describes a method to obtain a process window based on a desired yield and/or defect criteria for one or more features.
For example, the process window can be a set of doses and/or focus values (also referred as dose/focus settings) that are sensitive to failures of individual features and/or a desired yield of the patterning process." Column 28, line 2: "a CD of a contact hole may be too small (e.g., less than a threshold such as less than 10 nm) which causes footing (i.e., a hole is not transferred to the substrate)"). Slachter here shows deviation from a valid standard by disclosing failure criteria and threshold excursions that define failed outcomes; in essence, the probability of failure corresponds to the probability of being outside an acceptable valid standard. Chen supplies the notion of a probability value map tied to contouring outputs; Slachter supplies the missing semantic hook: "valid standard" equates to "within specification," with probability used to decide defect/failure relative to that specification.

Accordingly, a person of ordinary skill in the art at the time this invention was effectively filed would have been motivated to combine Zhang and Chen with Slachter because Zhang and Chen teach the probabilistic contour prediction training workflow of claims 1-4, while Slachter teaches applying probability thresholds to per-feature failure probabilities to identify patterns produced outside specification and report them as hotspots. A person of ordinary skill in the art would have had a reasonable expectation of success in applying Slachter's known hotspot/within-specification screening framework to Chen's contour probability outputs in Zhang's patterned-substrate context, yielding the predictable result of actionable hotspot outputs for defect and process window control. The system then has the advantage of automatically identifying and reporting target patterns that are likely to fail specification, enabling direct wafer-level defect screening and process window control.
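The thresholding logic attributed to Slachter above, where each target pattern carries a probability of deviating from specification and is reported as a hotspot when that probability meets or exceeds a threshold, can be sketched as follows. All names and numeric values here are hypothetical illustrations, not taken from Slachter or the claims.

```python
def flag_hotspots(failure_probs: dict, threshold: float) -> list:
    """Hypothetical sketch: return the names of target patterns whose
    probability of deviating from a valid standard (failure probability)
    is equal to or greater than the threshold."""
    return [name for name, p in failure_probs.items() if p >= threshold]

# Illustrative per-pattern failure probabilities (made-up values).
probs = {"contact_A": 1e-9, "contact_B": 5e-7, "line_C": 2e-8}

hotspots = flag_hotspots(probs, threshold=1e-8)
print(hotspots)  # patterns at or above the threshold are reported as hotspots
```

The design choice mirrors the quoted passages: the per-feature failure probability is the decision variable, and the threshold comparison is what converts a probability image into an actionable hotspot output.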
As per claim 16: Zhang, Chen, and Slachter cover all claim limitations previously rejected in claim 15's 103 rejection; please see claim 15's 103 rejection. Slachter expressly defines EUV radiation and places it in the lithography exposure context (Column 13, line 17: "radiation, including ultraviolet (UV) radiation (e.g. having a wavelength of 365, 248, 193, 157 or 126 nm) and extreme ultra-violet (EUV) radiation") and ties defects/failures to exposure conditions (dose), stating in Column 27, line 58 that features "may be exposed at different doses that eventually affects the failure probability of the feature," which is the concept of a "probability of defects caused by exposure to EUV." Accordingly, it would have been obvious to a person of ordinary skill in the art at the time the invention was effectively filed to further enhance the Zhang/Chen/Slachter probability prediction pipeline (directed to probabilistically predicting pattern/defect outcomes) with Slachter's concept of exposure-driven defect and failure probability in an EUV lithography environment, and to arrive at configuring the probability prediction model to predict a probability of defects caused by EUV exposure. This is because Slachter explicitly teaches EUV as an exposure radiation regime and teaches that exposure dose drives probabilistic failure behavior, providing a predictable and industry-standard basis for expressing the model's defect probability outputs as EUV-exposure-caused defect probabilities.

As per claim 17: Zhang, Chen, and Slachter cover all claim limitations previously rejected in claim 15's 103 rejection; please see claim 15's 103 rejection.
Zhang teaches that the plurality of images are scanning electron microscope images. Slachter teaches that the images are a "design layout" (Column 21, line 12: "the intended design is generally defined as a pre-OPC design layout…provided in…file format as GDSII or Oasis…"), an aerial image (Column 20, line 26: "An aerial image 1230 can be simulated from the source model 1200, the projection optics model 1210 and the patterning device—design layout model 1220."), and a resist image (Column 20, line 58: "The radiation intensity distribution (aerial image intensity) is turned into a latent 'resist image'"). Accordingly, it would have been obvious to a person of ordinary skill in the art at the time this invention was effectively filed to implement Slachter's concept of using relevant lithography design representations, such as a simulated aerial image or resist image, in the modified Zhang/Chen/Slachter workflow, and to arrive at claim 17's implementation of using SEM imaging as the source of the plurality of contour images while treating the "design image" content as a design layout, an aerial image, or a resist image. Slachter expressly teaches that semiconductor process measurement may use SEM and that lithography design representations include standardized design layouts and simulated aerial and resist images, which are used for predicting edge placement and comparing to the intended design. This supplies the model with the best available data to learn real lithography behavior, improving contour prediction accuracy and defect detection.

As per claim 18: Zhang, Chen, and Slachter cover all claim limitations previously rejected in claim 15's 103 rejection; please see claim 15's 103 rejection. Claim 18 recites the same limitation as claim 5 and holds a claim dependency to claim 15 parallel to that of claim 5 to claim 1. Therefore, claim 18 is rejected on the same grounds as claim 5.
Accordingly, for the same reasons provided in the rejection of claim 5, it would have been obvious to a person of ordinary skill in the art at the time this invention was filed to employ a limited plurality size as a routine design selection to reduce acquisition and processing burden while still providing multiple contour image instances for modeling. Therefore, claim 18 is unpatentable for the same reasons as claim 5, and the rejection and rationale for claim 5 are incorporated herein by reference.

Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Slachter et al. (hereinafter "Slachter"; US 11079687 B2) in view of Chen et al. (hereinafter "Chen"; "CNN-Based Quality Assurance for Automatic Segmentation of Breast Cancer in Radiotherapy").

Slachter teaches measuring a wafer with an imaging device to obtain an image of the wafer (Column 14, line 50: "…a substrate…may be subjected to various types of measurement…Examples of measurement include optical imaging…and/or non-optical imaging (e.g., scanning electron microscopy (SEM))") on which an extreme ultraviolet (EUV) photolithography process has been performed according to a design image (Column 13, line 16: "The terms 'radiation' and 'beam' used herein encompass all types of electromagnetic radiation, including ultraviolet (UV) radiation (e.g. having a wavelength of 365, 248, 193, 157 or 126 nm) and extreme ultra-violet (EUV) radiation (e.g.
having a wavelength in the range of 5-20 nm), as well as particle beams, such as ion beams or electron beams." and Column 66, line 58: "Upon reflection of the beam of radiation 21 at the patterning device MA, held by the support structure MT, a patterned beam 26 is formed and the patterned beam 26 is imaged by the projection system PS via reflective elements 28, 30 onto a substrate W held by the substrate table WT." Column 20, line 22: "The device design is generally defined as the pre-OPC patterning device layout"), acquiring a plurality of contour images for the image of a portion of the wafer (Column 16, line 37: "Typically measurements will be made on targets at different locations across the substrate W" and Column 43, line 55: "Simulation of the patterning process can, for example, predict contours, CDs, edge placement (e.g., edge placement error), etc. in the resist and/or etched image. Thus, the objective of the simulation is to accurately predict, for example, edge placement, and/or aerial image intensity slope, and/or CD, etc.
of the printed pattern.” Each “measurement” is a new image that’s being down across the substrate and with the goal of predicting contours therefore contour data must reside in this plurality of images) in an after-development inspection (ADI) state or in an after clean inspection (ACI) state ( “measurements may be performed in an after development inspection step (Column 35 line 38 “measurements may be performed in an after development inspection (ADI) step”) , output as a hotspot when a probability value is equal to or greater than a threshold value (Column 24 line 24 “ the term failure rate and failure probability of a feature may be used interchangeably.” , column 53 line 64: “the failure rate distribution 1910 enables determination of a probability of failure of a particular feature” and column 55 line 2“ dose/focus values for which failure probability is below a failure rate threshold value (e.g., 10.sup.−8) are selected.” ) This disclosure yields the concept that computing a per feature probability and applying a probability threshold will trigger an output decision that can be reported as a “hot spot”( “Various patterns on or provided by a patterning device may have different process windows. i.e., a space of processing variables under which a pattern will be produced within specification…These patterns can be referred to as “hot spots” …When the hot spots are not defective, it is most likely that all the patterns are not defective.” The concept that “the hot spots are not defective” shows a decision being made making the hot spot an output. Slachter also teaches probability value corresponding to the defect pattern is a value that represents a probability that each target pattern deviates from a valid standard (Column 23 line 6 “the present disclosure describes a method to obtain a process window based on a desired yield and/or defect criteria for one or more features. 
For example, the process window can be a set of doses and/or focus values (also referred as dose/focus settings) that are sensitive to failures of individual features and/or a desired yield of the patterning process."; column 28, line 2: "a CD of a contact hole may be too small (e.g., less than a threshold such as less than 10 nm) which causes footing (i.e., a hole is not transferred to the substrate)"). Slachter here shows deviation from a valid standard by disclosing failure criteria and threshold excursions that define failed outcomes; in essence, the probability of failure corresponds to the probability of being outside an acceptable valid standard.

Slachter does not teach a deep learning training framework that explicitly encodes the previously claimed inputs, or the constituent metrology that creates those inputs.

Chen teaches a contour probability prediction training method for probabilistically predicting a contour (Chen shows probability/uncertainty maps for contours and their use as inputs to a CNN; Discussion: "maps of segmentation probability and uncertainty were introduced to predict the contours quality"), determining a contour average ("Each pixel (i, j) denotes the probability that the pixel (i, j) belongs to the region to be segmented. The 'probability map' represents the predicted contour to some extent." Note that a probability value at each pixel is the expected membership of that pixel in the contour region; an image whose pixels are expected memberships is an average (expected) contour representation, and Chen ties the probability map to the contour by stating it "represents the predicted contour"), and a contour standard deviation (Inputs of the QA Network section: "where u(i, j) denotes the uncertainty of the pixel (i, j). The pixels with higher uncertainty correspond to the ones that lie close to the decision boundary"; Discussion section: "the uncertainty map represents the confidence of the model". Claim 1's "standard deviation" requires a dispersion and/or variability representation derived from plural contour evidence. Chen's uncertainty map reads on the contour standard deviation because a pixel-wise spread/dispersion map for contour location and assignment can be represented as an uncertainty map; an uncertainty map displays the standard deviation of predicted values across a dataset, acting as a pixel-wise measure of confidence) for a plurality of images (CT images from different cases; see the Materials and Methods / Patient Data section).

Chen further teaches generating a probability distribution image representing a probability distribution based on the contour average and the contour standard deviation (Chen creates two maps that together constitute a distribution-style representation: a probability map, "Each pixel…denotes the probability…the probability map represents the predicted contour," and an uncertainty map, "where u(i, j) denotes the uncertainty of the pixel (i, j). The pixels with higher uncertainty correspond to the ones that lie close to the decision boundary." Chen states that both maps are needed and are intended as direct network inputs: "Although the uncertainty map was calculated from the probability map…we believe that both are necessary…we intended to directly input these two parameters into the network". The probability map functions as the mean/expected contour image; the uncertainty map functions as the dispersion/variability image (the contour standard deviation in substance). Together they form the probability distribution image, as shown by the combined information in Figure 1's flow; together they are mean and dispersion information in image form.), and deep learning training of a probability prediction model using at least the portion of the design image and at least a portion of the probability distribution image as inputs for the probability prediction model (Chen supplies the model input structure, with the probability/uncertainty maps as direct inputs to a CNN: "The inputs of the network included…the…image, the generated probability map and the uncertainty map". The inputs and outputs are shown in Figure 1. Chen shows a probability distribution image wherein the probability map represents the expected mean contour and the uncertainty map represents the dispersion/standard deviation of that contour prediction, such that the two maps together encode a full probability distribution image. This is supported by the arrow between the two maps in Figure 1 and by Chen's explicit statement that both maps are necessary and are directly input to the network. In essence, the probability map is the central estimate and the uncertainty map is the variability of that estimate, together creating a probability distribution. As for a design image input, Chen uses the CT image as geometric information and contrast for the extraction of useful features such as contours; this input to Chen's CNN is analogous to the design image.), generating a probability prediction image (Chen states, in the Workflow of QA for Segmentation section, that "the value of each pixel represents the probability that the pixel belongs to the contour to be segmented"; this maps to the probability prediction image because the output is an image whose pixel values are probabilities. The cyclical nature of the design image being fed into the segmentation model and outputting a probability map further supports this.) for the design image by inputting at least a part of the design image to the probability prediction model (Figure 1 shows the CT image serving the same functional role as the design image.) after the deep learning training (Chen describes using the trained deep learning network to generate prediction outputs from input images, stating that "2D CT images were the inputs, while the corresponding 2D segmentation probability maps were the outputs," which is an inference step performed after the network has been trained; any explicit disclosure of inputs/outputs from a deep network presupposes a trained network, and the action therefore occurs after training.).

Accordingly, a person of ordinary skill in the art at the time this invention was effectively filed would have been motivated to modify Slachter's workflow with Chen, because Slachter explicitly teaches ADI-based wafer measurement in an EUV lithography context, determination of feature failure probability, and threshold-based identification. Chen fills in the deep learning contour prediction training framework that represents contours using probability maps (contour averages/expected contours) together with an uncertainty map (dispersion information/standard deviation), which together encode a probability distribution image and are directly used as training inputs to a neural network. A person of ordinary skill in the art would have done this in order to improve prediction quality and accuracy for EUV-induced stochastic pattern variations, particularly where the pipeline recognizes that feature failure probability depends on variability around printed contours, and where threshold-based decisions are driven by probabilistic excursion from valid standards. This gives the expected improvement of more reliable probabilistic hotspot identification.

Allowable Subject Matter

Claims 9-13 and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
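For clarity, the technical mapping in the §103 rejection above amounts to three operations: a pixel-wise contour average (probability map), a pixel-wise contour standard deviation (uncertainty map), and a hotspot decision by probability threshold. The following is a minimal illustrative sketch only, not code from Slachter or Chen; the array names, shapes, and the threshold value are hypothetical.

```python
import numpy as np

def contour_statistics(contour_stack):
    """Pixel-wise mean and standard deviation over a plurality of
    contour images (stack shape: [num_images, H, W], values in [0, 1])."""
    contour_average = contour_stack.mean(axis=0)  # expected contour ("probability map")
    contour_std = contour_stack.std(axis=0)       # dispersion ("uncertainty map")
    return contour_average, contour_std

def build_model_input(design_image, contour_average, contour_std):
    """Stack the design image with the two distribution maps as input
    channels, mirroring Chen's use of image + probability map +
    uncertainty map as direct CNN inputs."""
    return np.stack([design_image, contour_average, contour_std], axis=0)

def hotspots(probability_image, threshold=0.9):
    """Output a hotspot wherever the predicted probability is equal to
    or greater than the threshold (the threshold value is illustrative)."""
    return probability_image >= threshold

# Toy usage: five 4x4 binary "contour images" of a hypothetical wafer portion.
rng = np.random.default_rng(0)
stack = (rng.random((5, 4, 4)) > 0.5).astype(float)
avg, std = contour_statistics(stack)
model_in = build_model_input(np.zeros((4, 4)), avg, std)  # 3-channel input
mask = hotspots(avg, threshold=0.8)                       # boolean hotspot map
```

The sketch uses simple array statistics in place of a trained network; in the cited combination, the probability prediction image would come from the CNN rather than directly from the stack average.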
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHANE WRENSFORD CODRINGTON, whose telephone number is (571) 272-8130. The examiner can normally be reached 8:00 am-5:00 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Bella, can be reached at (571) 272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHANE WRENSFORD CODRINGTON/
Examiner, Art Unit 2667

/TOM Y LU/
Primary Examiner, Art Unit 2667

Prosecution Timeline

Mar 04, 2024: Application Filed
Feb 19, 2026: Non-Final Rejection (§103)
Mar 13, 2026: Interview Requested
Mar 25, 2026: Examiner Interview Summary
Mar 25, 2026: Applicant Interview (Telephonic)


Prosecution Projections

1-2
Expected OA Rounds
100%
Grant Probability
0%
With Interview (-100.0%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 1 resolved case by this examiner. Grant probability derived from career allow rate.
