Prosecution Insights
Last updated: April 19, 2026
Application No. 18/555,896

DIGITAL LITHOGRAPHY APPARATUS WITH AUTOFOCUS POSITION CONTROL AND METHODS OF USE THEREOF

Non-Final OA — §103, §112
Filed: Oct 18, 2023
Examiner: WHITESELL, STEVEN H
Art Unit: 1759
Tech Center: 1700 — Chemical & Materials Engineering
Assignee: Applied Materials, Inc.
OA Round: 3 (Non-Final)
Grant Probability: 82% (Favorable)
OA Rounds: 3-4
To Grant: 2y 9m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 82%, above average (781 granted / 954 resolved; +16.9% vs TC avg)
Interview Lift: +13.2% on resolved cases with interview (moderate lift)
Avg Prosecution: 2y 9m typical timeline (47 applications currently pending)
Total Applications: 1,001 across all art units

Statute-Specific Performance

§101: 2.7% (-37.3% vs TC avg)
§102: 30.5% (-9.5% vs TC avg)
§103: 47.7% (+7.7% vs TC avg)
§112: 12.8% (-27.2% vs TC avg)
Comparison baseline is the Tech Center average estimate. Based on career data from 954 resolved cases.

Office Action

Rejections: §103, §112
DETAILED ACTION

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on February 2, 2026 has been entered.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 12, 14, 15, 18-21, and 23 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claims 12, 14, and 18 each recite the limitation "the at least one motor"; there is insufficient antecedent basis for this limitation in the claims. Claims 15, 19-21, and 23 depend therefrom.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6, 12, 14, 15, and 18-23 are rejected under 35 U.S.C. 103 as being unpatentable over Valverde-Paniagua et al. [WO 2020/005361] in view of Hori et al. [US 2012/0287424] and Queens [US 2020/0081355].

For claims 1, 12, and 14, Valverde-Paniagua teaches a digital lithography system (see Figs. 1-3 and 5-6D), and an associated method for its practice, comprising: at least one light source (a plurality of lasers, see [0048]) configured to emit a light beam onto a substrate (reflection off the surface, see [0023]) via a lens (projection optics 210); at least one image sensor (sensors such as the focus sensor 204 in Fig. 2 receive images from each pixel across the substrate, see [0048]) configured to detect a reflected light beam from the substrate via the lens, wherein each light source pairs with each image sensor (laser beams (focus beams) reflect onto specific locations on the CMOS sensor(s) associated with each beam, see [0051]); and a controller (190), including a processing device operatively coupled to a memory (memory 194 is connected to the CPU 192, see [0035]), in communication with the at least one light source and the at least one image sensor (the controller 190 facilitates the control and automation of the processing techniques, including autofocus, see [0034]), wherein the controller is configured to: receive, from the at least one image sensor, at least one signal indicative of a position of the light beam on the substrate (magnitude and direction of the focus deviation using a calibrated ratio of a z-height change to a sensor pixel shift, see [0051]-[0057]); determine a target position of an element of the apparatus to focus the light beam onto the substrate in response to receiving the at least one signal from the at least one image sensor (an exposure source (e.g., solid-state emitters such as the projection optics of Fig. 2A or the solid-state emitter device 212 of Fig. 2B) can be moved in various directions to autofocus the exposure source on the substrate, see [0057]) by using, as a feedback signal, autofocus signal centroids (based upon the determination of the centroids at operation 512, a focus deviation is determined, an exposure position is determined, and either the stage on which the substrate is positioned or the exposure apparatus is moved (adjusted) to the exposure position, see [0051]-[0056]) from one or more autofocus channels of a plurality of autofocus channels associated with the at least one image sensor and the at least one light source (each channel comprising a laser beam and a sensor such as a CMOS sensor, see [0026]), wherein, to determine the target position of the lens, the controller is configured to use one of: a proportional-integral-derivative control method (PID filtering, see [0053] and [0063]) with dynamic channel selection (filtering a percentage of noise from captured image data, where each image represents a channel at each measurement pixel on the substrate, see [0048]-[0050] and [0061]; this is considered dynamic because the number of images used changes) that selects, from the plurality of autofocus channels, a number of autofocus channels used to generate the feedback signal based on an amount of signal noise (a percentage of signal noise is identified as a threshold 610, data below the threshold 610 is removed from the image pixel data, and the centroid is determined from that filtered data, see [0061] and Fig. 6D); and actuate the at least one motor to move the element in accordance with the target position of the element (see [0046], [0056], and [0057]).
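The control scheme mapped onto claims 1, 12, and 14 (drop channel data below a noise threshold, compute a centroid from the surviving channels, and feed the resulting focus deviation to a PID loop that drives the motor) can be sketched roughly as follows. All names, the amplitude-threshold noise model, and the gains are illustrative assumptions, not details taken from the cited references:

```python
# Illustrative sketch: PID autofocus with dynamic channel selection.
# Channels whose signal amplitude falls below a noise threshold are
# dropped before the centroid (the feedback signal) is computed.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def select_channels(channels, noise_threshold):
    """Dynamic channel selection: keep channels whose amplitude clears the threshold."""
    return [c for c in channels if c["amplitude"] >= noise_threshold]

def centroid(channels):
    """Amplitude-weighted centroid of the selected channels' spot positions (pixels)."""
    total = sum(c["amplitude"] for c in channels)
    return sum(c["position"] * c["amplitude"] for c in channels) / total

def focus_step(channels, target_px, pid, noise_threshold=0.2, dt=0.001):
    selected = select_channels(channels, noise_threshold)
    error = target_px - centroid(selected)  # focus deviation in sensor pixels
    return pid.update(error, dt)            # command for the lens/stage motor

channels = [
    {"position": 10.2, "amplitude": 0.9},
    {"position": 10.4, "amplitude": 0.8},
    {"position": 55.0, "amplitude": 0.05},  # noisy channel, dropped by the threshold
]
pid = PID(kp=0.5, ki=0.0, kd=0.0)
command = focus_step(channels, target_px=10.0, pid=pid)  # ≈ -0.147
```

The number of channels contributing to the feedback signal varies with the measured noise from frame to frame, which is the sense in which the selection is "dynamic" in the Examiner's reading.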
While Valverde-Paniagua teaches moving the projection optics or exposure source to adjust focus (see [0046] and [0055]-[0057]), Valverde-Paniagua fails to explicitly describe: at least one motor configured to move the lens to focus the light beam onto the substrate; and the controller being configured to actuate the at least one motor to move the lens in accordance with the target position of the lens in response to one or more signals from the at least one image sensor.

Hori teaches at least one motor (9, see Fig. 1) configured to move the lens (8) to focus the light beam onto the substrate (4), and a controller (25) in communication with the light source (21), the image sensor (23), and the at least one motor (9), wherein the controller is configured to actuate the at least one motor to move the lens in accordance with the target position of the lens in response to one or more signals from the at least one image sensor (the position of the objective along its optical axis is controlled so as to follow the target value based on the focus error signal from sensor 23 and the position signal from sensor 10, see Fig. 4 and [0047]). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to provide the controlled motor for lens movement as taught by Hori in the autofocus control taught by Valverde-Paniagua in order to actuate the projection lens to adjust the position of the lens relative to the substrate, keeping the substrate surface within the depth of focus of the lens and within the desired tolerance for accurate pattern exposure.

Valverde-Paniagua recognizes that changes in the surface material of the substrate contribute to the measurement noise and eliminates that noise by removing measurements that would likely be associated with those materials (see [0048]-[0050], [0055], and [0061]-[0062]), but fails to teach focus control based on an empirical reference position method that combines the feedback signal with an empirical reference map of the substrate.

Queens teaches focus control based on an empirical reference position method that combines the feedback signal with an empirical reference map of the substrate (different layers and field patterns have different sensitivity to focus error, so measurement data in those areas can be ignored; focus control is based on a height map generated by a height sensor, see [0083]-[0084], [0096]-[0097], and [0103]-[0105]). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to provide the control method of identifying ignorable locations that would yield measurement error, as taught by Queens, in the control method of eliminating portions of error associated with surface characteristics, as taught by Valverde-Paniagua, in order to remove specific types of error and increase accuracy in the measurement processing.

For claim 2, Valverde-Paniagua teaches the at least one light source comprises at least one of a laser (laser, see [0048]), a continuous-wave (CW) laser, a quality (Q)-switched laser, or a mode-locked laser.

For claims 3, 19, and 21, Valverde-Paniagua teaches the substrate comprises at least one material of glass (glass, see [0023]), a reflective material, a metal, chrome, a polymer, a crystal, or an oxide.

For claims 4 and 21, Valverde-Paniagua teaches the lens comprises at least one of an optical lens (projection optics, see [0041]), a spherical lens, or an aspherical lens.
For claims 5, 20, and 21, Valverde-Paniagua teaches the at least one image sensor comprises at least one of a linear image sensor, a complementary metal-oxide-semiconductor (CMOS) (see [0023]) or active-pixel image sensor, a charge-coupled device (CCD) image sensor, or a solid-state device.

For claims 6 and 18, in the combination of Valverde-Paniagua and Hori, Hori teaches the at least one motor comprises a linear motor comprising at least one of a piezoelectric motor (see [0031]), an ultrasonic motor, an ultrasonic resonant motor, a piezo stepper motor, a piezo-walk motor, a piezo stick-slip motor, a flexure-type motor, or an inertial motor.

For claims 15, 22, and 23, Valverde-Paniagua teaches calibrating, by the controller, the at least one image sensor to correlate a change in length (ΔL) of the light beam along a length of the at least one image sensor with a change in height (ΔZ) of a surface of the substrate (using a calibrated ratio of a z-height change to a sensor pixel shift, see [0051]).

Response to Arguments

Applicant's arguments filed on February 2, 2026 have been fully considered but are not persuasive.

Applicant argues on pages 9-11, regarding claims 1, 12, and 14, that Valverde-Paniagua fails to teach the amended subject matter of the claims, including "a proportional-integral-derivative control method with dynamic channel selection that selects, from the plurality of autofocus channels, a number of autofocus channels used to generate the feedback signal based on an amount of signal noise." The Examiner respectfully disagrees. Valverde-Paniagua teaches in paragraph [0046] imaging using three channels at a plurality of locations (pixels) across the substrate and then selecting the pixel images associated with the channels for use in centroid measurement by removing images that include noise. Fig. 6D shows a histogram where pixel height is determined from the images associated with each channel. The channel data associated with pixels below the threshold 610 is removed from consideration as noise. The centroid is then determined using the filtered data in window 608, representing the selected channel data, using PID filtering (see [0053]), which is then used to determine an exposure position (see [0054]). Valverde-Paniagua teaches dynamic channel selection because the channel data used for the centroid calculation is selected, and thereby changed, based on a noise-associated percentage or other identified error.

Applicant's arguments with respect to claims 1, 12, and 14 regarding the limitation "an empirical reference position method that combines the feedback signal with an empirical reference map of the substrate" have been considered but are moot because the new ground of rejection does not rely on any reference applied in the previous prior-art rejection for any teaching or matter specifically challenged in the argument. Queens is relied upon to teach the salient features of the claim.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Steven H. Whitesell, whose telephone number is (571) 270-3942. The examiner can normally be reached Mon-Fri, 9:00 AM-5:30 PM (MST).

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Duane Smith, can be reached at 571-272-1166. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/Steven H Whitesell/
Primary Examiner, Art Unit 1759
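The ΔL-to-ΔZ calibration mapped onto claims 15, 22, and 23 reduces to fitting a ratio between spot shift on the sensor and substrate height change. A minimal sketch, under the assumption of a linear relationship (the function names and the numbers are illustrative, not taken from the references):

```python
# Calibrate the ratio between spot shift on the sensor (delta_L, pixels)
# and substrate height change (delta_Z, micrometers), then use it to
# convert a measured pixel shift into a focus deviation.

def calibrate_ratio(heights_um, positions_px):
    """Least-squares slope delta_Z / delta_L from a stage sweep at known heights."""
    n = len(heights_um)
    mean_z = sum(heights_um) / n
    mean_l = sum(positions_px) / n
    cov = sum((l - mean_l) * (z - mean_z) for z, l in zip(heights_um, positions_px))
    var = sum((l - mean_l) ** 2 for l in positions_px)
    return cov / var  # micrometers of height per pixel of spot shift

# Synthetic sweep: the spot moves 2 px for every 1 um of stage height.
heights = [0.0, 1.0, 2.0, 3.0]
spots = [100.0, 102.0, 104.0, 106.0]
um_per_px = calibrate_ratio(heights, spots)          # -> 0.5

measured_shift_px = 3.0
focus_deviation_um = um_per_px * measured_shift_px   # -> 1.5
```

Once the ratio is known, any observed spot shift converts directly into a height error that the focus loop can act on, which is how the cited "calibrated ratio of a z-height change to a sensor pixel shift" would function in practice.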

Prosecution Timeline

Oct 18, 2023: Application Filed
Jul 03, 2025: Non-Final Rejection — §103, §112
Jul 10, 2025: Interview Requested
Jul 16, 2025: Applicant Interview (Telephonic)
Jul 16, 2025: Examiner Interview Summary
Jul 22, 2025: Response Filed
Aug 29, 2025: Final Rejection — §103, §112
Sep 04, 2025: Interview Requested
Sep 11, 2025: Examiner Interview Summary
Sep 11, 2025: Examiner Interview (Telephonic)
Feb 02, 2026: Request for Continued Examination
Feb 05, 2026: Response after Non-Final Action
Mar 20, 2026: Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601978
METHOD OF SETTING UP A PROJECTION EXPOSURE SYSTEM, A PROJECTION EXPOSURE METHOD AND A PROJECTION EXPOSURE SYSTEM FOR MICROLITHOGRAPHY
2y 5m to grant; granted Apr 14, 2026
Patent 12585197
MONITORING UNIT AND SUBSTRATE TREATING APPARATUS INCLUDING THE SAME
2y 5m to grant; granted Mar 24, 2026
Patent 12581585
TILT STAGE, EXTREME ULTRAVIOLET LIGHT GENERATION APPARATUS, AND ELECTRONIC DEVICE MANUFACTURING METHOD
2y 5m to grant; granted Mar 17, 2026
Patent 12571680
WAVELENGTH MEASUREMENT APPARATUS, NARROWED-LINE LASER APPARATUS, AND METHOD FOR MANUFACTURING ELECTRONIC DEVICES
2y 5m to grant; granted Mar 10, 2026
Patent 12547086
PROJECTION EXPOSURE APPARATUS FOR SEMICONDUCTOR LITHOGRAPHY
2y 5m to grant; granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 82%
With Interview: 95% (+13.2%)
Median Time to Grant: 2y 9m
PTA Risk: High
Based on 954 resolved cases by this examiner. Grant probability derived from career allow rate.
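The headline projections appear to be simple arithmetic on the examiner's career counts shown above; a quick check of that arithmetic (variable names are illustrative):

```python
# Reproduce the dashboard's headline figures from the examiner's career counts.
granted, resolved = 781, 954

allow_rate = granted / resolved * 100  # career allow rate, percent
interview_lift = 13.2                  # percentage-point lift with interview

print(f"Career allow rate: {allow_rate:.1f}%")                  # ~81.9%, shown as 82%
print(f"With interview: {allow_rate + interview_lift:.1f}%")    # ~95.1%, shown as 95%
print(f"Implied TC average: {allow_rate - 16.9:.1f}%")          # from the +16.9% delta
```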
