Prosecution Insights
Last updated: April 19, 2026

Application No. 17/886,353
ON WAFER DIMENSIONALITY REDUCTION
Status: Non-Final OA (§103)

Filed: Aug 11, 2022
Examiner: HICKS, AUSTIN JAMES
Art Unit: 2142
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Applied Materials, Inc.
OA Round: 3 (Non-Final)

Grant Probability: 76% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 3y 4m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (above average; 308 granted / 403 resolved; +21.4% vs TC avg)
Interview Lift: +25.1% (strong; measured across resolved cases with interview)
Typical Timeline: 3y 4m average prosecution; 54 applications currently pending
Career History: 457 total applications across all art units
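The headline figures above can be reproduced from the raw counts. A minimal sketch in Python: `allow_rate` follows directly from the 308/403 counts shown, while `with_interview` assumes the dashboard adds the observed interview lift to the base probability and caps the result at 99% — that cap is an assumption, not a documented formula.

```python
# Sketch: reproducing the examiner stats from raw counts.
# Assumption: the "with interview" projection is base probability plus the
# observed interview lift, capped at 99% -- the cap is a guess, not the
# tool's documented method.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate, as a percentage."""
    return 100.0 * granted / resolved

def with_interview(base_pct: float, lift_pct: float, cap: float = 99.0) -> float:
    """Projected grant probability if an interview is held (capped)."""
    return min(base_pct + lift_pct, cap)

base = allow_rate(308, 403)  # 308 granted of 403 resolved cases
print(f"Career allow rate: {base:.1f}%")                        # ~76.4%
print(f"With interview:    {with_interview(base, 25.1):.1f}%")  # capped at 99.0%
```

Note that 76.4% + 25.1% exceeds 100%, which is why a cap (or some other saturation) must be involved in producing the displayed 99%.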

Statute-Specific Performance

§101: 13.9% (-26.1% vs TC avg)
§102: 17.3% (-22.7% vs TC avg)
§103: 46.3% (+6.3% vs TC avg)
§112: 19.2% (-20.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 403 resolved cases.
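Each displayed delta is consistent with a single Tech Center baseline of 40.0%, which suggests how the chart was computed. A small sketch; the 40.0% baseline is inferred from the displayed numbers, not stated by the source:

```python
# Sketch: reproducing the statute-specific deltas shown above.
# TC_AVG is inferred -- every displayed delta matches a 40.0% baseline.
TC_AVG = 40.0

examiner_rates = {"§101": 13.9, "§102": 17.3, "§103": 46.3, "§112": 19.2}

def delta_vs_tc(rate: float, tc_avg: float = TC_AVG) -> float:
    """Signed percentage-point difference from the Tech Center average."""
    return round(rate - tc_avg, 1)

for statute, rate in examiner_rates.items():
    print(f"{statute}: {rate:.1f}% ({delta_vs_tc(rate):+.1f} vs TC avg)")
```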

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/28/2026 has been entered.

Response to Arguments

Applicant's arguments filed 1/28/2026 have been fully considered but they are not persuasive.

Applicant argues, "Middlebrooks is silent regarding 'reducing dimensionality of second metrology data obtained by performing thickness or in-plane displacement measurements on a second plurality of substrates,' as recited in amended claim 1." Remarks 8.

The first trained machine learning model operates on second metrology data in Arabshahi paragraph 55: "Once the model is trained, the internal weights/connections may be configured to receive a new batch of sensor outputs in a time series and determine…" The dimensionality reduction for second metrology data is taught by Middlebrooks paragraph 116: "the present model formulates a low dimensional encoding (e.g., latent space) that encapsulates information in an input (e.g., a complex electric field image and/or other input associated with a pattern or other features of a semiconductor manufacturing process) to the model." The thickness measurement is also taught by Middlebrooks paragraph 35: "the one or more determined metrology metrics comprise one or more of overlay, a critical dimension, a reconstruction of a three dimensional profile of features of a substrate, or a dose or focus of a lithography apparatus at a moment when the features of the substrate were printed with the lithography apparatus." Overlay is thickness data, e.g., one- or two-layer overlay. A three-dimensional profile of features of a substrate is thickness data and in-plane displacement data. Therefore, Middlebrooks and Arabshahi render the claim element obvious.

Applicant argues, "Talukder is silent regarding 'reducing dimensionality of second metrology data obtained by performing thickness or in-plane displacement measurements on a second plurality of substrates,' as recited in amended claim 1." Remarks 8. This is taught by Middlebrooks and Arabshahi; see above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over US20210116896A1 to Arabshahi et al., WO2021104718A1 to Middlebrooks et al., and WO2022140097A1 to Talukder et al.

Arabshahi teaches claims 1, 9, and 16. (Currently amended) A method comprising:

receiving first metrology data (Arabshahi abs: "executing a wafer recipe a semiconductor processing system to process a semiconductor wafer; monitoring sensor outputs from a sensors that monitor conditions associated with the semiconductor processing system…");

training a first machine learning model with data input comprising the first metrology data to generate a first trained machine learning model (Arabshahi fig. 6 and para 55: "training a model using the sensor data (614)…"), the first trained machine learning model being capable of using dimensionally reduced metrology data associated with the second plurality of substrates produced by second manufacturing equipment to perform one or more corrective actions associated with the second manufacturing equipment (Arabshahi para 55: "Once the model is trained, the internal weights/connections may be configured to receive a new batch of sensor outputs in a time series and determine whether they match the trained 'fingerprint' of previous fault conditions." Arabshahi para 44: "fault output may also include writing a fault indication in a process log, sending a record of the sensor measurements to a data store for analysis and/remodel training, stopping the process being executed by the wafer recipe, sounding an alarm, and/or any other method of alerting users and/or other systems of the possible fault condition."). Arabshahi doesn't teach dimensionality reduction.

obtaining first manufacturing parameters associated with the second plurality of substrates; and (Arabshahi para 56: "each time the wafer recipe is executed, the resulting data set from the sensor outputs can be labeled and used to continuously train and retrain the model.")

training a second machine learning model to predict (Arabshahi para 56: "each time the wafer recipe is executed, the resulting data set from the sensor outputs can be labeled and used to continuously train and retrain the model." Retraining creates a second model. Each time the wafer recipe is executed, that is another set of metrology data. The sensor outputs are the "training input" and the labels are the "target outputs".). Arabshahi doesn't teach dimensionality reduction.

However, Middlebrooks teaches that the metrology data is obtained by performing second thickness or in-plane displacement measurements on a second plurality of substrates (Middlebrooks para 35: "the one or more determined metrology metrics comprise one or more of overlay, a critical dimension, a reconstruction of a three dimensional profile of features of a substrate, or a dose or focus of a lithography apparatus at a moment when the features of the substrate were printed with the lithography apparatus." A 3D profile is thickness and displacement measurements. Overlay is thickness, e.g., one- or two-layer thickness/overlay.), and reducing dimensionality of second metrology data (Middlebrooks para 116: "variational encoder-decoder architecture. In the middle (e.g., middle layers) of the model (e.g., a neural network), the present model formulates a low dimensional encoding (e.g., latent space) that encapsulates information in an input (e.g., a complex electric field image and/or other input associated with a pattern or other features of a semiconductor manufacturing process) to the model.").

Arabshahi, Middlebrooks, and the claims all use machine learning on wafer production data. It would have been obvious to a person having ordinary skill in the art, at the time of filing, to use dimensionality reduction in Arabshahi to "leverage the low dimensionality and compactness of the latent space to make determinations directly in the latent space." Middlebrooks para 116.

Arabshahi and Middlebrooks don't teach manufacturing parameters as input to an ML model. However, Talukder teaches a second machine learning model to predict (Talukder para 2: "may use in situ measurements for process control during fabrication of a wafer. For example, in situ measurements may be used to accurately control an etch depth, a deposition depth, etc. during wafer fabrication." Talukder abs: "generating a second machine learning model using the ex situ data and the in situ measurements." In situ measurements are manufacturing parameters. Applicant's specification para 15 states: "There are many manufacturing parameters (e.g., hardware parameters, process parameters, etc.) that cause the resulting properties of substrates." The resulting property in Talukder is "etch depth", and the parameter is the in situ measurement "for process control during fabrication…" Talukder para 2. Etch depth is a thickness measurement.).

Arabshahi, Middlebrooks, Talukder, and the claims all apply machine learning to manufacturing. It would have been obvious to a person having ordinary skill in the art, at the time of filing, to incorporate manufacturing parameters into Arabshahi's input data when training the second model because "such a model may become out of specification… due to drift of the process chamber…" Talukder para 1.

Middlebrooks teaches claims 2, 6, and 19. The method of claim 1, wherein the training of the first machine learning model comprises: reducing dimensionality of the first metrology data to form first compressed data; and generating, based on the first compressed data, first reconstructed data that is based on the first metrology data. (Middlebrooks para 131: "the dimensional data in the latent space is encoded by the encoder of the encoder-decoder architecture. In some embodiments, predictions, and/or other output from the parameterized model are generated by the decoder of the encoder-decoder architecture.")

Arabshahi teaches claims 3, 11, and 18. The method of claim 1, wherein: the first trained machine learning model is capable of (Arabshahi clm 1: "providing the plurality of sensor outputs to a plurality of models, wherein the plurality of models are trained to identify when the conditions associated with the semiconductor processing system indicate a fault in the semiconductor wafer;") the second machine learning model is to be further trained based on second data input comprising current data associated with production of the second plurality of substrates and second target output comprising the second compressed data to perform the one or more corrective actions (Arabshahi para 56: "each time the wafer recipe is executed, the resulting data set from the sensor outputs can be labeled and used to continuously train and retrain the model." Arabshahi para 44: "fault output may also include writing a fault indication in a process log, sending a record of the sensor measurements to a data store for analysis and/remodel training, stopping the process being executed by the wafer recipe, sounding an alarm, and/or any other method of alerting users and/or other systems of the possible fault condition."). Arabshahi doesn't teach dimensionality reduction. However, Middlebrooks teaches reducing dimensionality of second metrology data. (Middlebrooks para 116: "variational encoder-decoder architecture. In the middle (e.g., middle layers) of the model (e.g., a neural network), the present model formulates a low dimensional encoding (e.g., latent space) that encapsulates information in an input (e.g., a complex electric field image and/or other input associated with a pattern or other features of a semiconductor manufacturing process) to the model.")

Arabshahi teaches claims 4 and 12. The method of claim 3, wherein the current data comprises one or more of sensor data or manufacturing parameters. (Arabshahi para 56: "each time the wafer recipe is executed, the resulting data set from the sensor outputs can be labeled and used to continuously train and retrain the model.")

Arabshahi teaches claims 5 and 13. The method of claim 3, wherein the one or more corrective actions comprise one or more of: providing an alert to a user; updating process parameters of the second manufacturing equipment; updating hardware parameters of the second manufacturing equipment; correcting sensor drift of sensors associated with the second manufacturing equipment; correcting chamber drift associated with the second manufacturing equipment; or updating a process recipe to produce subsequent substrates. (Arabshahi para 44: "fault output may also include writing a fault indication in a process log, sending a record of the sensor measurements to a data store for analysis and/remodel training, stopping the process being executed by the wafer recipe, sounding an alarm, and/or any other method of alerting users and/or other systems of the possible fault condition." Arabshahi para 55: "Once the model is trained, the internal weights/connections may be configured to receive a new batch of sensor outputs in a time series and determine whether they match the trained 'fingerprint' of previous fault conditions.")

Arabshahi teaches claims 6 and 19. The method of claim 2, wherein the reducing of the dimensionality of the first metrology data is via non-linear fit. (Arabshahi para 102: "the structure or profile of the target giving rise to the detected spectrum may be reconstructed, e.g. by Rigorous Coupled Wave Analysis and non-linear regression or by comparison with a library of simulated spectra.")

Middlebrooks teaches claims 7 and 14. The method of claim 1, wherein the first metrology data comprises spatial maps across surfaces of the one or more substrates of one or more of thickness data or in-plane displacement data. (Middlebrooks para 35: "the one or more determined metrology metrics comprise one or more of overlay, a critical dimension, a reconstruction of a three dimensional profile of features of a substrate, or a dose or focus of a lithography apparatus at a moment when the features of the substrate were printed with the lithography apparatus." A three-dimensional profile of features of a substrate is a spatial map of thickness data and displacement data.)

Middlebrooks teaches claims 8, 15, and 20. The method of claim 1, wherein the first machine learning model is a convolutional neural network model. (Middlebrooks para 114: "In some embodiments, the intermediate layers of the one or more neural networks include one or more convolutional layers, one or more recurrent layers, and/or other layers.")

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Austin Hicks, whose telephone number is (571) 270-3377. The examiner can normally be reached Monday-Thursday, 8-4 PST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Miranda Huang, can be reached at (571) 270-7092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AUSTIN HICKS/
Primary Examiner, Art Unit 2124
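The rejection above turns on mapping high-dimensional metrology data into a compact latent space (the encoder-decoder described in Middlebrooks para 116). As a purely illustrative sketch, not taken from the record: a toy linear encoder that compresses a hypothetical 6-point wafer thickness map into a 2-dimensional code. All names and weight values here are invented for illustration.

```python
# Illustrative only: a toy linear "encoder" showing the kind of
# dimensionality reduction the rejection relies on -- projecting a
# high-dimensional metrology vector (a thickness map) into a small
# latent code. All values are hypothetical.

def encode(x: list[float], weights: list[list[float]]) -> list[float]:
    """Project an n-dim measurement vector to a k-dim latent code (one row per latent dim)."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

# Hypothetical 6-point thickness map (nm) across a wafer.
thickness_map = [102.0, 101.5, 99.8, 100.2, 98.9, 100.6]

# 2x6 encoder: latent dim 0 captures mean thickness, dim 1 a left/right tilt.
W_enc = [
    [1 / 6] * 6,
    [1, 1, 1, -1, -1, -1],
]

latent = encode(thickness_map, W_enc)
print(f"{len(thickness_map)}-dim input -> {len(latent)}-dim latent code")
```

A variational encoder-decoder of the kind the cited paragraph describes learns such a projection (nonlinearly) rather than fixing it by hand; the point is only that downstream determinations can then be made in the smaller latent space.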

Prosecution Timeline

Aug 11, 2022: Application Filed
Sep 30, 2022: Response after Non-Final Action
Jun 11, 2025: Non-Final Rejection (§103)
Sep 04, 2025: Examiner Interview Summary
Sep 04, 2025: Applicant Interview (Telephonic)
Sep 11, 2025: Response Filed
Sep 26, 2025: Final Rejection (§103)
Jan 28, 2026: Request for Continued Examination
Feb 06, 2026: Response after Non-Final Action
Feb 23, 2026: Non-Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591767: NEURAL NETWORK ACCELERATION CIRCUIT AND METHOD (granted Mar 31, 2026; 2y 5m to grant)
Patent 12554795: REDUCING CLASS IMBALANCE IN MACHINE-LEARNING TRAINING DATASET (granted Feb 17, 2026; 2y 5m to grant)
Patent 12530630: Hierarchical Gradient Averaging For Enforcing Subject Level Privacy (granted Jan 20, 2026; 2y 5m to grant)
Patent 12524694: OPTIMIZING ROUTE MODIFICATION USING QUANTUM GENERATED ROUTE REPOSITORY (granted Jan 13, 2026; 2y 5m to grant)
Patent 12524646: VARIABLE CURVATURE BENDING ARC CONTROL METHOD FOR ROLL BENDING MACHINE (granted Jan 13, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 76%
With Interview: 99% (+25.1%)
Median Time to Grant: 3y 4m
PTA Risk: High

Based on 403 resolved cases by this examiner. Grant probability is derived from the career allow rate.
