DETAILED ACTION
This Final Office action is responsive to the claims filed on May 17, 2022.
Summary
Claims 1-20 are under examination.
Claims 1 and 15 are objected to.
Claims 1-20 are rejected under 35 USC 101.
Claims 1-2, 6-9, 13-16, and 18 are rejected under 35 USC 102 over Bean.
Additionally or alternatively, claims 1-2, 6-9, 13-16, and 18 are rejected under 35 USC 103 over Bean in view of Cheng.
Claims 3-5, 10-11, 17, and 19 are rejected under 35 USC 103 over Bean in view of Cheng and Schneider.
Claims 12 and 20 are rejected under 35 USC 103 over Bean in view of Cheng and Kang.
Response To Arguments/Amendments
Claim Objections: The Applicant’s arguments and amendments are persuasive. The prior claim objections have been withdrawn.
35 USC 112(b): The Applicant’s arguments and amendments have been considered and are persuasive. The 35 USC 112(b) rejections have been withdrawn.
35 USC 101: The Applicant’s arguments and amendments have been considered but are not persuasive. The Applicant’s arguments will be addressed in order.
The Claim In Diamond v. Diehr Is Allegedly Less Eligible Than The Applicant’s Claims: The Applicant asserts that the claim as recited is more eligible than the claim in Diehr, which was found eligible. This argument is not convincing. The Applicant asserts that the claimed subject matter “recites at least a practical application of running the computer model based on the input to determine a setpoint for manipulation of the optical element and/or electronically providing the input, or data based on the input, to a system for running the computer model to determine a setpoint for manipulation of the optical element.” The court in Diehr stated, “[t]hat respondents' claims involve the transformation of an article, in this case raw, uncured synthetic rubber, into a different state or thing cannot be disputed. The respondents' claims describe in detail a step-by-step method for accomplishing such, beginning with the loading of a mold with raw, uncured rubber and ending with the eventual opening of the press at the conclusion of the cure.” For example, the claim in Diehr recites, “A method of operating a rubber-molding press for precision molded compounds with the aid of a digital computer, comprising […] constantly determining the temperature (Z) of the mold at a location closely adjacent to the mold cavity in the press during molding, […] opening the press automatically when a said comparison indicates equivalence.” The recitations of the Diehr claim specifically integrate into the molding process to transform an article. The Applicant’s claim transforms nothing. The Applicant’s claim fails to reduce an article to a different state or thing. The Applicant’s claim takes in data and outputs data in a manner that does not represent any technological improvement. The data is either used as an input to another model to generate more data or merely transmitted for use by another computer.
The Applicant’s claim is not analogous in any way and is considerably less eligible than the claim in Diehr. The Applicant’s claim is similar to the claims in Electric Power Group, where the claims take in data, process the data, and output the data without any demonstrated technological improvement. The Applicant’s claims in no way represent an improvement to computer functionality and, except potentially entirely within the abstract ideas themselves, in no way represent an improvement to any technology. The Applicant’s argument is not persuasive.
Running the computer model based on input to determine a setpoint for manipulation of an optical element and/or electronically providing the input, or data based on the input, to a system for running the computer model to determine a setpoint for manipulation of the optical element are allegedly certainly a structure or process which… is allegedly performing a function which the patent laws were designed to protect: This is clearly not true, as the claim is not actually integrated into the modification of anything, let alone a setting of a lithographic apparatus. No such modification is recited in the claim. Contrary to the Applicant’s assertion, this claim preempts just about every use of supervised learning for machine learning models as they pertain to optical elements of a lithographic machine. The independent claims are recited at such a high level that they preempt the use of machine learning for a large portion of an entire field without providing any field-specific or computer-specific technological advances. Accordingly, this argument is not persuasive.
Conclusion: For at least these reasons, the Applicant’s claims are ineligible. The rejections are maintained.
35 USC 102/103: The Applicant’s arguments and amendments have been considered but are not persuasive. Not only is the 35 USC 103 rejection maintained, but, based on the new scope of the claims after amendment and the manner in which they qualify an optical element, a further 35 USC 102 rejection, as anticipated solely by Bean, has been introduced for the claims that were originally rejected solely under 35 USC 103 over Bean and Cheng. Specifically, the amendment reveals that the claim terms “optical element of a lithographic apparatus” and “determine a setpoint for manipulation of the optical element” are taught by the breadth of the modification of the imprinting parameters of Bean, which include the spatio-temporal illumination pattern of actinic radiation, considered together with Bean’s disclosure of optical elements and the model’s receipt of input representing, and modification of, the imprinting parameters, such that the elements of the independent claims are taught without resorting to Cheng as a secondary reference. The rejection in view of Cheng is also maintained, with all of the elements added by the amendment being taught by Bean. Accordingly, the original rejections over Bean and Cheng under 35 USC 103 are maintained, and the amendment has revealed a new rejection to be made over Bean alone under 35 USC 102(a)(1) and (a)(2). The Applicant’s prior art rejection arguments will be addressed in the order presented.
The cited portions of Bean allegedly fail to teach receiving parameter data, receiving model data relating to a computer model to determine a setpoint for manipulation of the optical element when addressing the at least one field, and determining, based on the parameter data and on the model data, an input to the model: Whether or not this is the case, based on the amendment, the claim has been remapped to illustrate that these elements are described by a combination of Bean paragraphs [0046] (optical elements of lithographic system), [0053] (fields), [0056] (automated control of the imprinting process), [0059] (as applied to the fields), [0066] (using image classification to determine imprinting parameters), [0096]-[0100] (model training based on input parameters), and [0104]-[0108] (inference of the parameters using the trained model). Specifically, paragraph [0104] recites, “An embodiment may be a method of generating imprinting process parameters 700 that uses the imprinting process 300 and the image classification method 500 together to determine a set of imprinting parameters that meet a quality goal. The method of generating imprinting process parameters 700 may start with the imprinting process 300 which is used to produce a plurality of imprinted films 424 on one or more substrates using an initial set of imprinting parameters. In a step S702A a set of images 748 are generated which are representative of those imprinted films 424.” These paragraphs in Bean clearly teach the features the Applicant asserts the cited art does not, so the Applicant’s argument is not persuasive.
There allegedly is no disclosure in Bean of a model to determine a setpoint for manipulation of the optical element: As previously demonstrated, Bean (e.g., in [0104]) teaches a model that analyzes images to determine imprinting parameters, including a spatio-temporal distribution of thermal radiation, something which is modified by modifying an optical element of the lithographic system. The Applicant’s argument is not persuasive.
There allegedly is no disclosure in Bean of using the claimed parameter data and the claimed model data regarding such a model to determine an input for that model: As previously demonstrated, the parameter data and model data are used to train the model, and the trained model then determines the parameter values used to modify a parameter such as the spatio-temporal distribution of thermal radiation. Again, see Bean [0046] (optical elements of lithographic system), [0053] (fields), [0056] (automated control of the imprinting process), [0059] (as applied to the fields), [0066] (using image classification to determine imprinting parameters), [0096]-[0100] (model training based on input parameters), and [0104]-[0108] (inference of the parameters using the trained model).
Cheng allegedly does nothing to contribute to the rejection: As demonstrated in greater detail below, the rejection based on Cheng is merely presented in the alternative, should it be found that the Bean reference does not explicitly teach the optical elements that are modified by the model. As demonstrated below, these are, if not explicitly, at the very least implicitly taught by Bean in accordance with MPEP 2144.01. However, should it be found that these implicit teachings are insufficient, Cheng is a valuable reference for the explicit teaching of any optical-element-related features of the claim that Bean is potentially found not to teach.
The Applicant alleges that none of the other cited references teach the amended features of the claims: This argument is moot in light of the fact that the amended features are rejected alternatively, solely over Bean, or over Bean in view of Cheng. The other references are not used for the rejection of the amended features.
Conclusion: For at least these reasons, the Applicant’s arguments and amendments fail to demonstrate that the Applicant’s claims are novel and non-obvious over the art. Accordingly, the rejections are maintained.
Claim Objections
Claims 1 and 15 are objected to because of the following informalities:
Claims 1 and 15 recite, “and/or,” but this appears to be intended as “or” in an open-ended claim.
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Independent Claims
Step 1
Claim 1 is a process. Claim 15 is an article of manufacture.
Step 2A, Prong 1
Claims 1 and 15 recite a mental process and mathematical concept, abstract ideas.
Specifically, claim 15 recites (Claim limitations are in bolded italic; paragraph references are to the Applicant’s specification.):
[…]
determine, based on the parameter data and on the lens model data, an input to the model. (Evaluation, Mathematical Concept – [000115] “In step 404 the method determines the request based on the received parameter data and lens model data. Once determined, the request may be provided as input to a lens model.” [000109] “The request may provide different setpoints for different fields to be exposed, so that different corrections may be applied for different fields. A lens model may receive as input a request relating to error corrections for a plurality of fields to be exposed sequentially.” [000111] “In order to provide a request for an edge field of a substrate to a lens model, the metrology data may only be available for a portion of the expected field shape. The portion of an edge field of a substrate that is taken up by the substrate may be referred to as a partial field. An extrapolation of parameter data available for the portion of the expected field shape not filled by the substrate may be used to create a request fitting a field of the expected shape, e.g. a rectangular shape. The extrapolation may for example be a polynomial extrapolation, wherein the extrapolation is based on the metrology data as input.” – In its broadest reasonable sense, the claim uses the received parameter data and lens model data to determine, by evaluation, extrapolated data. This is an evaluation practically performable in the mind or with the aid of pen, paper, and/or a calculator. The evaluation is a mental process, which is an abstract idea under MPEP 2106.04(a)(2)(III). Further, under the broadest reasonable interpretation, the determining step includes extrapolation, which is a mathematical calculation. Mathematical calculations are mathematical concepts and, under MPEP 2106.04(a)(2)(I), are abstract ideas.
[…]
run the computer model based on the input to determine a setpoint for manipulation of the optical element and/or […] (Mental Process – NOTE: The “and/or” is being interpreted as merely an “or” under the broadest reasonable interpretation. Consequently, the examination need only be conducted on one branch of the claimed alternatives. Determining a setpoint for a machine based on information is how tuning of optical elements was done prior to the advent of computers. Consider optometry, microscopy, and astronomy: all of these fields existed before computers, and such determinations were conducted in the mind or with the aid of pen and paper. Therefore, the running step is practically performable in the mind or with the aid of pen and paper, which is an evaluation, a mental process, an abstract idea.)
Claim 15 recites an abstract idea.
Claim 1 recites the process executable by claim 15, so it also recites the same abstract ideas.
Claim 1 recites an abstract idea.
Step 2A, Prong 2
Claims 1 and 15 do not recite any elements that integrate the abstract idea into a practical application, and, therefore, are directed to the abstract idea.
Specifically, claim 15 recites the following additional limitations:
A computer program product comprising a non-transitory computer-readable medium having computer readable instructions therein, the instructions, when executed by one or more processors, configured to cause the one or more processors to at least:
[…] computer model […]
[…] model data […]
[…] electronically […]
These elements are generic computer implementations, recitations of general-purpose computer elements with no specific configurations to execute the claimed method. As such, the computer implementations implement the recited abstract idea on a generic computer, and, under MPEP 2106.05(f), do not integrate the abstract idea into a practical application at Step 2A, Prong Two.
NOTE: The model and the use thereof are not recited in the claims, but should it be found that the model and/or use thereof is an element of the claims, the model is recited at a high level and is a generic computing element, which fails to integrate the abstract idea into a practical application at Step 2A, Prong 2 under MPEP 2106.05(f).
receive parameter data for at least one field of a plurality of fields of a substrate, the parameter data relating to one or more parameters of the substrate within the at least one field, the one or more parameters being at least partially sensitive to manipulation of an optical element of a lithographic apparatus as part of an exposure performed by the lithographic apparatus;
receive model data relating to a computer model to determine a setpoint for manipulation of the optical element when addressing the at least one field; and
The receive/receiving step(s) merely gather(s) existing information (parameter data and model data) for evaluation. Mere data gathering is insignificant extra-solution activity under MPEP 2106.05(g), which provides an analogous example under Mere Data Gathering: “iv. Obtaining information about transactions using the Internet to verify credit card transactions, CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011).” Under MPEP 2106.05(g), receiving data for evaluation does not meaningfully limit the invention, and the receiving of the data is necessary to the evaluations and/or mathematical operations of the claim. The receiving steps add nothing more than insignificant extra-solution activity, so they do not integrate the abstract idea into a practical application at Step 2A, Prong Two.
providing the input, or data based on the input, to a system for running the computer model to determine a setpoint for manipulation of the optical element. (The providing step(s) merely provide(s) information for evaluation. Providing data resulting from a determination is insignificant extra-solution activity under MPEP 2106.05(g), which provides the following examples: “a printer that is used to output a report of fraudulent transactions, which is recited in a claim to a computer programmed to analyze and manipulate information about credit card transactions in order to detect whether the transactions were fraudulent”; “iii. Selecting information, based on types of information and availability of information in a power-grid environment, for collection, analysis and display”; “ii. Printing or downloading generated menus.” Under MPEP 2106.05(g), providing data resulting from an evaluation does not meaningfully limit the invention, and the providing of the data is merely the output of the evaluations and/or mathematical operations of the claim.)
Further, the nature of the parameter data, the model data, and the context provided in the preamble merely limit the claim to the field of use of lithography, which fails to integrate the abstract idea into a practical application under MPEP 2106.05(h).
Because the additional limitations do not integrate the abstract idea into a practical application, claim 15 is directed to the abstract idea.
Claim 1 recites the process executable by claim 15, so it also recites the same additional limitations that do not integrate the abstract idea into a practical application.
Claim 1 is directed to the abstract idea.
Step 2B
The additional limitations of claims 1 and 15 do not combine with the other elements of the claims to provide significantly more than the abstract idea that would confer an inventive concept.
Specifically, claims 1 and 15 recite the following additional limitations:
A computer program product comprising a non-transitory computer-readable medium having computer readable instructions therein, the instructions, when executed by one or more processors, configured to cause the one or more processors to at least:
[…] computer model […]
[…] model data […]
[…] electronically […]
These elements represent generic computer implementations, recitations of general-purpose computer elements with no specific configurations to execute the claimed method. As such, the computer implementations implement the recited abstract idea on a generic computer, and, under MPEP 2106.05(f), do not combine with other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept at Step 2B.
NOTE: The model and the use thereof are not necessarily positively recited in the claims (see the note above about the “and/or” clause), but should it be found that the model and/or use thereof is an element of the claims, the model is recited at a high level and is a generic computing element, which fails to combine with other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept at Step 2B under MPEP 2106.05(f).
receive parameter data for at least one field of a plurality of fields of a substrate, the parameter data relating to one or more parameters of the substrate within the at least one field, the one or more parameters being at least partially sensitive to manipulation of an optical element of a lithographic apparatus as part of an exposure performed by the lithographic apparatus;
receive model data relating to a computer model to determine a setpoint for manipulation of the optical element when addressing the at least one field; and
providing the input, or data based on the input, to a system for running the computer model to determine a setpoint for manipulation of the optical element.
The receive/receiving step(s) and provide/providing step(s) amount to storing and retrieving information from memory and to sending or receiving data, so the step(s) is/are analogous to the examples cited in MPEP 2106.05(d) representing well-understood, routine, and conventional functions, including: “i. Receiving or transmitting data over a network, e.g., using the Internet to gather data”; “iii. Electronic recordkeeping”; “iv. Storing and retrieving information in memory”; “vi. Arranging a hierarchy of groups, sorting information, eliminating less restrictive pricing information and determining the price.”
Because the additional limitation of the receive/receiving step(s) and provide/providing step(s) is/are insignificant extra-solution activity (as illustrated under Step 2A Prong 2) and well-understood, routine, and conventional, the step(s) fail(s) to combine with the other elements of the claim to provide significantly more than the abstract idea that would render the combination an inventive concept, under MPEP 2106.05(d) and MPEP 2106.05(g), at Step 2B.
Further, the nature of the parameter data, the model data, and the context provided in the preamble merely limit the claim to the field of use of lithography, which fails to combine with other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept under MPEP 2106.05(h) at Step 2B.
Because the additional limitations do not combine with other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept, claim 15 is ineligible.
Claim 1 recites the process executable by claim 15, so it also recites the same additional limitations that do not combine with other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept.
Claim 1 is ineligible.
Dependent Claims
The dependent claims fail to provide any additional limitations to confer eligibility. (NOTE: The model and use thereof do not appear to be positively recited in the claims, but should it be found that the model and/or use thereof is an element of the claims, the model is recited at a high level and is a generic computing element, which fails to integrate the abstract idea into a practical application and fails to combine with other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept at Step 2B under MPEP 2106.05(f).)
Claims 2 and 16
wherein the at least one field comprises a partial field, and the parameter data comprises parameter data relating to one or more locations inside the partial field;
These are parameters of the determine/determining step of the respective independent claim and, therefore, are elements of the abstract idea. Therefore, they do not provide additional limitations that would confer eligibility at Step 2A, Prong 2 or Step 2B.
Should it be found otherwise, these data are elements of the receive/receiving steps of their respective independent claims and fail to confer eligibility for at least the same reasons as the receive/receiving steps.
Should it be found otherwise, these data merely limit the abstract idea to the particular field of lithography and do not confer eligibility under MPEP 2106.05(h), as previously discussed.
and wherein the determining the input comprises optimizing the input to apply a correction to the one or more parameters, wherein the correction is identified by the parameter data relating to the one or more locations within the partial field.
This merely qualifies the evaluation performed in the determine/determining step and is, therefore, an element of the abstract idea for at least the same reasons as the determine/determining step. Further, optimizing is an evaluation and a mathematical calculation (see [000116] – evaluation by extrapolation). Therefore, this does not provide any additional limitations to confer eligibility at Step 2A, Prong 2 or Step 2B.
Claims 2 and 16 are ineligible.
Claims 3 and 17
wherein the optimizing the input comprises: determining an initial setpoint inside of the partial field based on a first lens model, wherein the first lens model is based on the lens model data; and evaluating the initial setpoint to determine a setpoint across a portion of a full field outside of the partial field to determine a target setpoint.
This merely qualifies the evaluation performed in the determine/determining step and is, therefore, an element of the abstract idea for at least the same reasons as the determine/determining step. Further, optimizing is an evaluation and a mathematical calculation (see [000116] – evaluation by extrapolation). Therefore, this does not provide any additional limitations to confer eligibility at Step 2A, Prong 2 or Step 2B.
Also, the nature and context of the data and model merely limit the abstract idea to the particular field of lithography and do not confer eligibility under MPEP 2106.05(h), as previously discussed.
Claims 3 and 17 are ineligible.
Claim 4
wherein the first model is further configured to determine an input corresponding to the target setpoint.
The model is not positively recited as an element of the claim, so these recitations qualify parameters of the determine/determining step of the respective independent claim and, therefore, are elements of the abstract idea. Therefore, they do not provide additional limitations that would confer eligibility at Step 2A, Prong 2 or Step 2B.
Should it be found otherwise, these qualifiers are elements of the receive/receiving steps of their respective independent claims and fail to confer eligibility for at least the same reasons as the receive/receiving steps.
Should it be found otherwise, these qualifiers merely limit the abstract idea to the particular field of lithography and do not confer eligibility under MPEP 2106.05(h), as previously discussed.
Claim 4 is ineligible.
Claim 5
wherein the first model is a partial field aware lens model configured not to optimize the input for locations outside of the partial field.
The model is not positively recited as an element of the claim, so these recitations qualify parameters of the determine/determining step of the respective independent claim and, therefore, are elements of the abstract idea. Therefore, they do not provide additional limitations that would confer eligibility at Step 2A, Prong 2 or Step 2B.
Should it be found otherwise, these qualifiers are elements of the receive/receiving steps of their respective independent claims and fail to confer eligibility for at least the same reasons as the receive/receiving steps.
Should it be found otherwise, these qualifiers merely limit the abstract idea to the particular field of lithography and do not confer eligibility under MPEP 2106.05(h), as previously discussed.
Claim 5 is ineligible.
Claims 6 and 18
wherein the optimizing the input comprises: determining a plurality of provisional inputs based on the parameter data; and selecting one of the plurality of provisional inputs based on the model data.
The determine/determining and select/selecting steps in these claims merely qualify the evaluation performed in the determine/determining step and are, therefore, elements of the abstract idea for at least the same reasons as the determine/determining step of the respective independent claims. Further, the determine/determining step is an evaluation and a mathematical calculation (see [000116] and [000121] – evaluation by extrapolation). Also, the select/selecting step is an evaluation ([000121] – evaluation using a model). Therefore, this does not provide any additional limitations to confer eligibility at Step 2A, Prong 2 or Step 2B.
Also, the nature and context of the data and model merely limit the abstract idea to the particular field of lithography and do not confer eligibility under MPEP 2106.05(h), as previously discussed.
Claims 6 and 18 are ineligible.
Claim 7
wherein determining one or more of the plurality of provisional inputs comprises extrapolating parameter data outside of the partial field based on the parameter data inside of the partial field.
The extrapolate/extrapolating step qualifies the evaluation performed in the determine/determining step and is, therefore, an element of the abstract idea for at least the same reasons as the determine/determining step. Further, the extrapolate/extrapolating step is an evaluation and a mathematical calculation (see [000116] and [000121] – evaluation by extrapolation). Therefore, this does not provide any additional limitations to confer eligibility at Step 2A, Prong 2 or Step 2B.
Also, the nature and context of the data and model merely limit the abstract idea to the particular field of lithography and do not confer eligibility under MPEP 2106.05(h), as previously discussed.
Claim 7 is ineligible.
Claim 8
wherein the selecting one of the plurality of provisional inputs comprises selecting a provisional input that applies a correction to the parameter that is closest to the correction identified from the parameter data.
The select/selecting step qualifies the evaluation performed in the determine/determining step and is, therefore, an element of the abstract idea for at least the same reasons as the determine/determining step. Further, the select/selecting step is an evaluation (see [000121] – evaluation by a model). Therefore, this does not provide any additional limitations to confer eligibility at Step 2A, Prong 2 or Step 2B.
Claim 8 is ineligible.
Claim 9
wherein the correction is of an error identified in the parameter data.
These qualify input/output parameters of the determine/determining step of the respective independent claims and, therefore, are elements of the abstract idea. Therefore, they do not provide additional limitations that would confer eligibility at Step 2A, Prong 2 or Step 2B.
Should it be found otherwise, these are elements of the receive/receiving steps of their respective independent claims and fail to confer eligibility for at least the same reasons as the receive/receiving steps.
Should it be found otherwise, these data merely limit the abstract idea to the particular field of lithography and do not confer eligibility under MPEP 2106.05(h), as previously discussed.
Claim 9 is ineligible.
Claims 10 and 19
wherein the determining the input comprises determining an input for a first field based on an input for a second field, wherein the first field is a partial field and the second field is a full field.
The determine/determining step of these claims qualifies the evaluation performed in the determine/determining step of the respective independent claims and is, therefore, an element of the abstract idea for at least the same reasons as the determine/determining step of the respective independent claims. Further, the determine/determining step of these claims is an evaluation and a mathematical operation (see [000116], [000121]-[000123] – evaluation by extrapolation). Therefore, this does not provide any additional limitations to confer eligibility at Step 2A, Prong 2 or Step 2B.
Also, the nature and context of the data and model merely limit the abstract idea to the particular field of lithography and do not confer eligibility under MPEP 2106.05(h), as previously discussed.
Claims 10 and 19 are ineligible.
Claim 11
wherein determining the input comprises optimizing the input to apply a correction to one or more parameters in the full field.
The optimize/optimizing step of these claims qualifies the evaluation performed in the determine/determining step of the respective independent claims and is, therefore, an element of the abstract idea for at least the same reasons as the determine/determining step in the respective independent claims. Further, the optimize/optimizing step is an evaluation (see [000116], [000118], [000121]-[000123] – evaluation by extrapolation). Therefore, this does not provide any additional limitations to confer eligibility at Step 2A, Prong 2 or Step 2B.
Also, the nature and context of the data and model merely limit the abstract idea to the particular field of lithography and do not confer eligibility under MPEP 2106.05(h), as previously discussed.
Claim 11 is ineligible.
Claims 12 and 20
wherein the one or more parameters comprise one or more selected from: overlay data, critical dimension data, levelling data, alignment data, or edge placement error data.
These are parameters of the determine/determining step of the respective independent claim and, therefore, are elements of the abstract idea. Therefore, they do not provide additional limitations that would confer eligibility at Step 2A, Prong 2 or Step 2B.
Should it be found otherwise, these data are elements of the receive/receiving steps of their respective independent claims and fail to confer eligibility for at least the same reasons as the receive/receiving steps.
Should it be found otherwise, these data merely limit the abstract idea to the particular field of lithography and do not confer eligibility under MPEP 2106.05(h), as previously discussed.
Claims 12 and 20 are ineligible.
Claim 13
An apparatus for configuring an input to a model for determining one or more settings of an optical element of a lithographic apparatus, the apparatus comprising one or more processors configured to perform the method according to claim 1.
This merely recites a generic processor/computing apparatus for conducting the process of claim 1. The process does not confer eligibility for at least the same reasons as in claim 1. The generic computing elements are recited at a high level and, under MPEP 2106.05(f), fail to integrate the abstract idea into a practical application at Step 2A, Prong 2 and fail to combine with other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept at Step 2B.
Claim 13 is ineligible.
Claim 14
A lithographic apparatus comprising the apparatus according to claim 13.
This merely limits the features of the claim from which it depends and the abstract idea to a particular technological environment, which, under MPEP 2106.05(h), fails to integrate the abstract idea into a practical application at Step 2A, Prong 2 and fails to combine with other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept at Step 2B.
Claim 14 is ineligible.
Claim Rejections - 35 USC § 102/103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 6-9, 13-16, and 18: Bean
Claims 1-2, 6-9, 13-16, and 18 are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by US 2021/0042906 A1 to Bean et al. (Bean).
Alternatively:
Claims 1-2, 6-9, 13-16, and 18: Bean in view of Cheng
Claims 1-2, 6-9, 13-16, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over US 2021/0042906 A1 to Bean et al. (Bean) in view of NPL: “Programmable uniformity correction by using plug-in finger arrays in advanced lithography system” by Cheng et al. (Cheng).
Claims 1 and 15
Regarding claim 1, Bean teaches:
A method, comprising: (Bean [0104]-[0105] “An embodiment may be a method of generating imprinting process parameters 700 that uses the imprinting process 300 and the image classification method 500 together to determine a set of imprinting parameters that meet a quality goal. The method of generating imprinting process parameters 700 may start with the imprinting process 300 which is used to produce a plurality of imprinted films 424 one on or more substrates using an initial set of imprinting parameters. In a step S702A a set of images 748 are generated which are representative of those imprinted films 424. The method of generating imprinting process parameters 700 may then use the image classification method 500 to generate a set of image classification data about each of the set of images 748.” – The method of generating corrected imprinting process parameters includes a method for determining parameters to input into a model to determine the corrected imprinting process parameters.)
receiving parameter data for at least one field of a plurality of fields of a substrate, the parameter data relating to one or more parameters of the substrate within the at least one field, the one or more parameters being at least partially sensitive to manipulation of an optical element of a lithographic apparatus as part of an exposure performed by the lithographic apparatus; (Bean [0104] “An embodiment may be a method of generating imprinting process parameters 700 that uses the imprinting process 300 and the image classification method 500 together to determine a set of imprinting parameters that meet a quality goal. The method of generating imprinting process parameters 700 may start with the imprinting process 300 which is used to produce a plurality of imprinted films 424 one on or more substrates using an initial set of imprinting parameters. In a step S702A a set of images 748 are generated which are representative of those imprinted films 424.” [0106] “The method of generating imprinting process parameters 700 may include a testing step S704 in which the set of image classification data may then be tested to determine if the imprinted films 424 meet the quality goal. The quality goal may be based on a single criteria or on multiple criteria. A non-limiting list of exemplary criteria are: number of non-fill defects are below a non-fill defect threshold; number of extrusion defects are below an extrusion defect threshold; percent area of imprinted field that includes a defect is below a defect threshold” – Parameter data for the at least one field to which the imprinting process is applied are received. [0022] “In a second embodiment, the image is of a portion of an imprinted film. The system may further comprise a nanoimprint lithography system configured to form the imprinted film on a substrate.” – The parameter data is for a lithography system. 
[0046] “The nanoimprint lithography system 100 may further comprise a field camera 136 that is positioned to view the spread of formable material 124 after the template 108 has made contact with the formable material 124. FIG. 1 illustrates an optical axis of the field camera's imaging field as a dashed line. As illustrated in FIG. 1 the nanoimprint lithography system 100 may include one or more optical components (dichroic mirrors, beam combiners, prisms, lenses, mirrors, etc.) which combine the actinic radiation with light to be detected by the field camera. The field camera 136 may be configured to detect the spread of formable material under the template 108.” – The lithography system includes tunable optical elements. [0053] “The imprinting process may be done repeatedly in a plurality of imprint fields (also known as just fields or shots) that are spread across the substrate surface 130. Each of the imprint fields may be the same size as the mesa 110 or just the pattern area of the mesa 110. The pattern area of the mesa 110 is a region of the patterning surface 112 which is used to imprint patterns on a substrate 102 which are features of the device or are then used in subsequent processes to form features of the device. The pattern area of the mesa 110 may or may not include mass velocity variation features (fluid control features) which are used to prevent extrusions from forming on imprint field edges. In an alternative embodiment, the substrate 102 has only one imprint field which is the same size as the substrate 102 or the area of the substrate 102 which is to be patterned with the mesa 110. In an alternative embodiment, the imprint fields overlap. Some of the imprint fields may be partial imprint fields which intersect with a boundary of the substrate 102.” [0056] “FIG. 
3 is a flowchart of a method of manufacturing an article (device) that includes an imprinting process 300 by the nanoimprint lithography system 100 that can be used to form patterns in formable material 124 on one or more imprint fields (also referred to as: pattern areas or shot areas). The imprinting process 300 may be performed repeatedly on a plurality of substrates 102 by the nanoimprint lithography system 100. The processor 140 may be used to control the imprinting process 300.” - The methods are applied on a field-by-field basis. [0066] “A non-limiting list of exemplary imprinting parameters are: drop dispense pattern; template shape; spatio-temporal illumination pattern of actinic radiation; droplet volumes; substrate coating; template coating; template priming; template trajectory; formable material formulation; time from the template initially contacting the formable material to the formable material reaching a particular imprint field edge; time from the template initially contacting the formable material to the formable material reaching a particular imprint field corner; gas purge conditions (flow, time, and/or mixture); imprint sequence; back pressure; final imprint force: drop edge exclusion zone; spread time; gas flows and curing parameters; etc. Determining what these imprinting parameters are and making sure that the imprinting process 300 stays on track requires a method of characterizing the quality of the imprinted film. The applicant has also determined that obtaining a first estimate of these imprinting parameters may include imprinting a plurality of fields with a variety of imprinting parameters but only inspecting key regions of the imprinted film for key defects.” – The imprinting parameters include spatio-temporal illumination pattern of actinic radiation, which is modified by and sensitive to manipulation of optical elements of the lithography system. 
This parameter is used as a training input and is output as an element of a determination of how to modify the operation of an optical element of the lithography system.)
receiving model data relating to a computer model to determine a setpoint for manipulation of the optical element when addressing the at least one field; and (Bean [0103] “In an embodiment, the set of input images 548 is divided into subsets of images based on the feature information IF. For example, the input images may be divided into the image subsets (548 edge; 548 mark; . . . 548 N) one for one or more of each type of feature identified by the feature information IF. A plurality of models (modeledge; modelmark; . . . modelN) are then generated in the training step S538 for each subset of input images 548. The classification step S510 then uses the appropriate model among the plurality of models based on the feature information IF. In an alternative embodiment, a model is generated for each type of feature.” [0105] “The method of generating imprinting process parameters 700 may then use the image classification method 500 to generate a set of image classification data about each of the set of images 748.” – The particular model is selected based on the parameters that are used to set the parameters sensitive to the manipulation of the optical element. [0104] “An embodiment may be a method of generating imprinting process parameters 700 that uses the imprinting process 300 and the image classification method 500 together to determine a set of imprinting parameters that meet a quality goal. The method of generating imprinting process parameters 700 may start with the imprinting process 300 which is used to produce a plurality of imprinted films 424 one on or more substrates using an initial set of imprinting parameters. In a step S702A a set of images 748 are generated which are representative of those imprinted films 424.” – These are used to train models and also for making inferences based on image analysis using the models.)
determining, based on the parameter data and on the model data, an input to the model; and (Bean [0106] “The method of generating imprinting process parameters 700 may include a testing step S704 in which the set of image classification data may then be tested to determine if the imprinted films 424 meet the quality goal. The quality goal may be based on a single criteria or on multiple criteria. A non-limiting list of exemplary criteria are: number of non-fill defects are below a non-fill defect threshold; number of extrusion defects are below an extrusion defect threshold; percent area of imprinted field that includes a defect is below a defect threshold; percent area of substrate that includes a defect is below a defect threshold; percent area of images that includes a defect is below a defect threshold; percent area of region of substrate that will become a device that includes a defect is below a defect threshold; etc.” [0046] “As illustrated in FIG. 1 the nanoimprint lithography system 100 may include one or more optical components (dichroic mirrors, beam combiners, prisms, lenses, mirrors, etc.) which combine the actinic radiation with light to be detected by the field camera.” [0019] “(e) in a first case wherein the set of classifications of the imprinted films do not meet the quality goal, adjusting the imprinting parameters based on the set image classifications and repeating processes” [0066] “The quality of the imprinted film 424 is dependent upon a plurality imprinting parameters. 
A non-limiting list of exemplary imprinting parameters are: drop dispense pattern; template shape; spatio-temporal illumination pattern of actinic radiation; droplet volumes; substrate coating; template coating; template priming; template trajectory; formable material formulation; time from the template initially contacting the formable material to the formable material reaching a particular imprint field edge; time from the template initially contacting the formable material to the formable material reaching a particular imprint field corner; gas purge conditions (flow, time, and/or mixture); imprint sequence; back pressure; final imprint force: drop edge exclusion zone; spread time; gas flows and curing parameters; etc.” – These data are collected and used for input into a classification model that determines corrections to the system, including the lens system).
running the computer model based on the input to determine a setpoint for manipulation of the optical element and/or electronically providing the input, or data based on the input, to a system for running the computer model to determine a setpoint for manipulation of the optical element. (Bean [0104] “An embodiment may be a method of generating imprinting process parameters 700 that uses the imprinting process 300 and the image classification method 500 together to determine a set of imprinting parameters that meet a quality goal. The method of generating imprinting process parameters 700 may start with the imprinting process 300 which is used to produce a plurality of imprinted films 424 one on or more substrates using an initial set of imprinting parameters. In a step S702A a set of images 748 are generated which are representative of those imprinted films 424.” – The computer model is run based on the input of model data and input parameters. This outputs parameters, such as the illumination pattern of the actinic radiation and template trajectory, which are affected by the manipulation of the lithographic system optical elements, such as the dichroic mirrors, beam combiners, prisms, lenses, mirrors, etc.)
Bean teaches that the lithographic system includes optical components and that a non-exhaustive list of parameters, including the spatio-temporal illumination pattern of the actinic radiation and the template trajectory, could be adjusted based on the corrections determined by the model (Bean [0046] “As illustrated in FIG. 1 the nanoimprint lithography system 100 may include one or more optical components (dichroic mirrors, beam combiners, prisms, lenses, mirrors, etc.) which combine the actinic radiation with light to be detected by the field camera.” [0066] “The quality of the imprinted film 424 is dependent upon a plurality imprinting parameters. A non-limiting list of exemplary imprinting parameters are: […] spatio-temporal illumination pattern of actinic radiation; […] template trajectory”). It is likely that Bean on its face at least implicitly teaches all of the features of the claim by teaching that the imprinting parameters are inputs of and modified by the model and that those imprinting parameters relate to modification of optical elements, such as the dichroic mirrors, beam combiners, prisms, lenses, mirrors, etc., even if it is not explicit. To this end, an alternative 35 USC 102 rejection for the independent claims is made based on this assertion. However, for the avoidance of doubt, Bean in view of Cheng explicitly teaches:
A method for determining an input to a model to determine a setpoint for manipulation of an optical element of a lithographic apparatus when addressing at least one of a plurality of fields of a substrate, the method comprising:
receiving parameter data for the at least one field, the parameter data relating to one or more parameters of the substrate within the at least one field, the one or more parameters being at least partially sensitive to manipulation of the optical element as part of an exposure performed by the lithographic apparatus;
receiving model data relating to the optical element; and
(Cheng Page 77, Right Column “Therefore, a programmable uniformity correction method with higher flexibility and better correction capability is proposed in this paper. In Section 2, the model of the proposed method is established. In Section 3, the mathematics of the method is described in detail. In Section 4, an adjustment strategy is introduced. Finally, an optical illumination system with programmable uniformity correction unit for advanced lithography is designed and simulated to verify the higher flexibility and better correction capability of the proposed method in Section 5.” Page 78 “The major difference between the proposed method and the previous ones is that the static gray filters or solid pieces of glass have been replaced by variable attenuation correction element arrays, which are called fingers. The fingers are in the form of rectangular sheets, which are independently inserted into the edge of an illumination field along the scanning direction to correct the IINU through programming. In order to elaborate the programmable uniformity correction method clearly, a model shown in Fig. 3 is established.”)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claims to modify the parameter adjustments of Bean with the optical element adjustments of Cheng because a person of ordinary skill in the art would have been motivated, based on the mention in Bean of modifying parameters including the spatio-temporal illumination pattern of the actinic radiation and the template trajectory and the disclosure of an optical system, to look to the model, optical system, and optical system parameters of Cheng, which provide high flexibility and correction capability to ensure lithographic critical dimension uniformity. (Bean [0046] “As illustrated in FIG. 1 the nanoimprint lithography system 100 may include one or more optical components (dichroic mirrors, beam combiners, prisms, lenses, mirrors, etc.) which combine the actinic radiation with light to be detected by the field camera.” [0066] “The quality of the imprinted film 424 is dependent upon a plurality imprinting parameters. A non-limiting list of exemplary imprinting parameters are: […] spatio-temporal illumination pattern of actinic radiation; […] template trajectory”; Cheng Abstract “Illumination integrated non-uniformity (IINU) is one of the key factors to determine the resolution and Critical Dimension Uniformity (CDU) which are important performance parameters in advanced lithography system. To further reduce the IINU, uniformity correction technology has been adopted. In this paper, an approach of programmable uniformity correction with higher flexibility and better correction capability is proposed. The method is composed of variable attenuation correction element arrays which are inserted into the edge of an illumination field to shield the energy through programming. Based on the proposed method, a programmable uniformity correction unit is applied to an illumination optical system.
The simulation results show that the value of the corrected IINU reaches less than 0.25%, which satisfies the requirements of IINU in advanced lithography system, and the energy loss is less than 1.1%. It verifies the higher flexibility and better correction capability of the proposed method.”)
Regarding claim 15, Bean in view of Cheng teaches:
A computer program product comprising a non-transitory computer-readable medium having computer readable instructions therein, the instructions, when executed by one or more processors, configured to cause the one or more processors to at least: (Bean [0051] “The nanoimprint lithography system 100 may be regulated, controlled, and/or directed by one or more processors 140 (controller) in communication with one or more components and/or subsystems such as the substrate chuck 104, the substrate positioning stage 106, the template chuck 118, the imprint head 120, the fluid dispenser 122, the radiation source 126, the thermal radiation source 134, the field camera 136, imprint field atmosphere control system, and/or the droplet inspection system 138. The processor 140 may operate based on instructions in a computer readable program stored in a non-transitory computer readable memory 142.”)
[Conduct the method of claim 1] (See the art rejections of claim 1.)
Claims 2 and 16
Regarding claims 2 and 16, Bean in view of Cheng teaches the features of claims 1 and 15 and further teaches:
wherein the at least one field comprises a partial field, and the parameter data comprises parameter data relating to one or more locations inside the partial field; (Bean [0053] “Some of the imprint fields may be partial imprint fields which intersect with a boundary of the substrate 102.” [0066] “The applicant has also determined that obtaining a first estimate of these imprinting parameters may include imprinting a plurality of fields with a variety of imprinting parameters but only inspecting key regions of the imprinted film for key defects.” [0104] “The method of generating imprinting process parameters 700 may start with the imprinting process 300 which is used to produce a plurality of imprinted films 424 one on or more substrates using an initial set of imprinting parameters.” [0106] “The quality goal may be based on a single criteria or on multiple criteria. A non-limiting list of exemplary criteria are: number of non-fill defects are below a non-fill defect threshold; number of extrusion defects are below an extrusion defect threshold; percent area of imprinted field that includes a defect is below a defect threshold” – Parameters of locations inside partial fields are used as elements of input.)
and wherein the determining the input comprises optimizing the input to apply a correction to the one or more parameters, wherein the correction is identified by the parameter data relating to the one or more locations within the partial field. (Bean [0053] “Some of the imprint fields may be partial imprint fields which intersect with a boundary of the substrate 102.” [0019] “(a) imprinting a plurality of films on one or more substrates with a set of imprinting parameters; (b) obtaining a set of images of the plurality of films; (c) generating a set of classifications of the imprinted films by analyzing the set of images in accordance with claim 1; (d) determining if the set of classifications of the imprinted films meet the quality goal; (e) in a first case wherein the set of classifications of the imprinted films do not meet the quality goal, determining new imprinting parameters based on the set image classifications and repeating processes (a)-(e) until the imprinting films meet the quality goal; (e) in a first case wherein the set of classifications of the imprinted films do not meet the quality goal, adjusting the imprinting parameters based on the set image classifications and repeating processes (a)-(e) until the imprinting films meet the quality goal; and (f) in a second case wherein the set of classifications of the imprinted films do meet the quality goal, outputting the imprinting parameters in which the set of classifications of the imprinted films do meet the quality goal as production imprinting parameters.” – The input is optimized to correct the parameters. That is, the determined adjustments are optimized to meet the quality goal. This is specific to the location in the field.)
Claims 6 and 18
Regarding claims 6 and 18, Bean in view of Cheng teaches the features of claims 2 and 16 and further teaches:
wherein the optimizing the input comprises: determining a plurality of provisional inputs based on the parameter data; and selecting one of the plurality of provisional inputs based on the model data. (Bean [0084]-[0085] “The segmenting step S506 may include a forming substep S530 of forming a set of locations L including the initial location L0 locations Li that are adjacent to and nearby location L0. The set of locations L may include 3-30 individual adjacent locations. The segmenting step S506 may include a statistical value calculating step S532 in which a statistical value Mi that is representative of the pixels in the image 448 that are associated with the feature described by the feature information IF that is at location Li forming a set of statistical values M associated with the set of locations L. In an embodiment, the statistical value is a median. The segmenting step S506 may include a difference calculating step S534 in which a difference value ΔMi between for each statistical value in the set of statistical values M. In an embodiment, the difference value ΔMi may be defined as the difference in statistical values between neighboring locations Li, for example, (ΔMi = Mi−Mi−1) although other methods of measuring variation may be used. The segmenting step S506 may include a testing step S536 in which the difference value ΔMi for each location Li is tested against a criteria and if the difference value ΔMi meets the criteria. Pixels in the image 448 associated with the feature (for example, an imprint field edge, a set parallel edges, a mark, etc.) described by the feature information IF at the location Li in which ΔMi meets the criteria are considered to be in the ROI T.” – Each image of the field is segmented based on feature information to determine a set of locations. From the set of locations, a subset is used to output a single ROI.)
Claim 7
Regarding claim 7, Bean in view of Cheng teaches the features of claim 6 and further teaches:
wherein determining one or more of the plurality of provisional inputs comprises extrapolating parameter data outside of the partial field based on the parameter data inside of the partial field. (Bean [0053] “Some of the imprint fields may be partial imprint fields which intersect with a boundary of the substrate 102.” [0091] “In an embodiment, the feature information IF indicates that the feature is an edge, for horizontal (or vertical) detected edges identified by step S528, traverse several pixels up and down (or left and right in case of vertical edge). For each of this row (or column) calculate median (statistical value Mi) of pixel intensity for this row (or column) as in statistical value calculating step S532. If the calculated median of pixel intensity profile stays the same as determined in the testing step S536, these rows (columns) corresponds to stable regions (ROI B); otherwise, these rows (columns) corresponds to transition regions (ROI T).” [0095] “In an embodiment, the statistical values described above are calculated for one type of feature such as: straight edges; curved edges; sets of parallel edges; etc.” See statistical calculations in paragraphs [0091] – [0095] that extrapolate assumptions for edges that would make up the partial fields.)
Claim 8
Regarding claim 8, Bean in view of Cheng teaches the features of claim 7 and further teaches:
wherein the selecting one of the plurality of provisional inputs comprises selecting a provisional input that applies a correction to the parameter that is closest to the correction identified from the parameter data. (Bean [0089]-[0091] “In an embodiment, after each statistical value Mi is calculated in step S532 a difference value ΔMi is also calculated in a difference calculating step S534. In an embodiment, a set of difference values is calculated for a set of statistical values. FIG. 6D is an illustration of the difference value ΔMi for the entire image 648 a instead of just the region around the initial location L0 and is done for informative purposes only. In an embodiment, after each difference value ΔMi is calculated each difference value is tested against a criteria for example if the difference value ΔMi above a threshold such as the threshold illustrated by the grey dashed edge in FIG. 6D. In an embodiment, criteria may be more complicated such as the absolute value of the difference value ΔMi is greater than a threshold. In an embodiment, once the difference value ΔMi does not meet the criteria, then neighboring statistical values and/or neighboring difference values are not calculated and/or tested. In an embodiment, an output of the testing step S536 is a ROI T, an example of which is illustrated in FIG. 6A, in which the portion of the image 648 a that overlaps with identified features in which the difference values meet the criteria as determined by the testing step S536. In an embodiment, the ROI T is a continuous portion of the image 648 a and only one continuous region is identified for each identified feature. FIG. 6E is an illustration of just ROI T identified by the testing step S536.” – The location associated with the expected correction within a threshold (closest to the correction because the determinations stop when the threshold is met) is selected.)
Claim 9
Regarding claim 9, Bean in view of Cheng teaches the features of claim 8 and further teaches:
wherein the correction is of an error identified in the parameter data. (Bean [0068] “Optical images 450 b of fields imprinted under the same process conditions, can have different intensities, regardless of the existence of a defect. This is a challenge for automatic defect detection. In some cases, the applicant has found that given a set of images imprinted under the similar process conditions and the similar imaging conditions, it can be difficult to impossible to obtain a reference image which may be used by as a standard for defect identification. The applicant has found that it is useful to be able to detect defects and identify the type of imprint defect in images, in which the imprint condition vary without having the need to compare every image to a standard reference image.” [0070] [0081] “Defect detection may then be performed within the ROI T of the image 448. In an embodiment, the substrate 102 has a plurality of imprinted films 424, each of the imprinted films 424 is surrounded by an imprint field edge 454, and streets on the substrate 102 separate the imprinted films 424 from each other.” [0106]-[0107] “The method of generating imprinting process parameters 700 may include a testing step S704 in which the set of image classification data may then be tested to determine if the imprinted films 424 meet the quality goal. The quality goal may be based on a single criteria or on multiple criteria. A non-limiting list of exemplary criteria are: number of non-fill defects are below a non-fill defect threshold; number of extrusion defects are below an extrusion defect threshold; percent area of imprinted field that includes a defect is below a defect threshold; percent area of substrate that includes a defect is below a defect threshold; percent area of images that includes a defect is below a defect threshold; percent area of region of substrate that will become a device that includes a defect is below a defect threshold; etc. 
If the answer to the testing step S704, is no, then the method of generating imprinting process parameters 700 may include a setting step S706. In the setting step S706, new imprinting parameters are set based on the set of image classifications. The locations of defects and the type of defect will determine how the imprinting parameters are adjusted.” – The correction is the provision of new, adjusted parameters, which is equivalent to identifying an error (a difference) in the parameters.)
Claim 13
Regarding claim 13, Bean in view of Cheng teaches the features of claim 1 and further teaches:
An apparatus for configuring an input to a model for determining one or more settings of an optical element of a lithographic apparatus, the apparatus comprising one or more processors configured to perform the method according to claim 1. (Bean [0051] “The nanoimprint lithography system 100 may be regulated, controlled, and/or directed by one or more processors 140 (controller) in communication with one or more components and/or subsystems such as the substrate chuck 104, the substrate positioning stage 106, the template chuck 118, the imprint head 120, the fluid dispenser 122, the radiation source 126, the thermal radiation source 134, the field camera 136, imprint field atmosphere control system, and/or the droplet inspection system 138. The processor 140 may operate based on instructions in a computer readable program stored in a non-transitory computer readable memory 142. The processor 140 may be or include one or more of a CPU, MPU, GPU, ASIC, FPGA, DSP, and a general purpose computer. The processor 140 may be a purpose built controller or may be a general purpose computing device that is adapted to be a controller.” Also, see claims 15-17. – This is an apparatus that configures the input to determine settings of an optical element.)
Claim 14
Regarding claim 14, Bean in view of Cheng teaches the features of claim 13 and further teaches:
A lithographic apparatus comprising the apparatus according to claim 13. (Bean [0051] “The nanoimprint lithography system 100 may be regulated, controlled, and/or directed by one or more processors 140 (controller) in communication with one or more components and/or subsystems such as the substrate chuck 104, the substrate positioning stage 106, the template chuck 118, the imprint head 120, the fluid dispenser 122, the radiation source 126, the thermal radiation source 134, the field camera 136, imprint field atmosphere control system, and/or the droplet inspection system 138. The processor 140 may operate based on instructions in a computer readable program stored in a non-transitory computer readable memory 142. The processor 140 may be or include one or more of a CPU, MPU, GPU, ASIC, FPGA, DSP, and a general purpose computer. The processor 140 may be a purpose built controller or may be a general purpose computing device that is adapted to be a controller.” Also, see claims 15-17. – This is a lithographic apparatus that configures the input to determine settings of an optical element.)
Claims 3-5, 10-11, 17, and 19: Bean in view of Cheng and Schneider
Claim(s) 3-5, 10-11, 17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over US 2021/0042906 A1 to Bean et al. (Bean) in view of NPL: “Programmable uniformity correction by using plug-in finger arrays in advanced lithography system” by Cheng et al. (Cheng) and US 2014/0239192 A1 to Schneider et al. (Schneider).
Claims 3 and 17
Regarding claims 3 and 17, Bean in view of Cheng teaches the features of claims 2 and 16 and further teaches:
Wherein the optimizing the input comprises: determining an initial setpoint inside of the partial field based on a first lens model, wherein the first lens model is based on the lens model data; and (Bean [0103] “In an embodiment, the set of input images 548 is divided into subsets of images based on the feature information IF. For example, the input images may be divided into the image subsets (548 edge; 548 mark; . . . 548 N) one for one or more of each type of feature identified by the feature information IF. A plurality of models (modeledge; modelmark; . . . modelN) are then generated in the training step S538 for each subset of input images 548. The classification step S510 then uses the appropriate model among the plurality of models based on the feature information IF. In an alternative embodiment, a model is generated for each type of feature.” [0105] “The method of generating imprinting process parameters 700 may then use the image classification method 500 to generate a set of image classification data about each of the set of images 748.” – The particular model is selected based on the feature information of the field being partial.)
Bean in view of Cheng does not appear to explicitly teach, but Bean in view of Cheng and Schneider teaches:
evaluating the initial setpoint to determine a setpoint across a portion of a full field outside of the partial field to determine a target setpoint. (Schneider [0060] “The measuring device 25 a can carry out a wavefront measurement over the complete image field 20 or over a part of the image field 20 and then interpolate the measured values thus obtained to the complete image field 20. Examples of such measuring devices 25 a are given in U.S. Pat. No. 7,333,216 B2 and U.S. Pat. No. 6,650,399 B2. Partial field or full field measurements of the wavefronts can be performed at different, for example cyclically recurring instants. In the case of a partial field measurement over the image field 20, it is possible to use, for example, (x,y) measurement point arrays in the form of a (13, 3) array, a (13, 5) array or a (13, 7) array. It is likewise possible just to measure for example one field point (preferably the field center), three field points (preferably the field center and the field centers of the respective left and right field halves), five field points or else a different number of field points. Afterward, an extrapolation to one of the field point arrays, for example a (13, 3), (13, 5) or (13, 7) array, can then be effected. The number of measured field points can also vary from measurement instant to measurement instant. In the case of fields which, for example, are crescent-shaped or have a boundary that is different from rectangular, instead of field points in a field center or in a field half for the wavefront measurement it is also possible to use other characteristic field points, chosen depending on the field shape.” – Schneider teaches extrapolating boundary elements of fields/regions of interest for different fields including full and partial fields.)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claims to modify the setting of points in a field/region of interest of partial and full fields taught in Bean by the extrapolation of boundary points of a field/region of interest from different fields including partial and full fields in Schneider because a person of ordinary skill in the art would be motivated by the teaching of establishing points and boundaries of a field/region of interest in Bean to look to Schneider to optimize the setting of the boundaries of the field/region of interest by extrapolating between full and/or partial fields so that field-dependent imaging aberrations which are present during the projection exposure do not undesirably affect a projection result. (Bean [0087]-[0088] “For example, consider an image 648 a shown in FIG. 6A, the segmenting step S506 may filter the image 648 a in the filtering substep S528 a forming the binary image 648 b also illustrated in FIG. 6B. In the case of image 648 a, the fitting step S528 b may receive feature information IF indicating that the image 648 a is expected to contain a straight horizontal edge. The feature information IF may also include information that identifies the expected position and/or size of the horizontal edge. The fitting step S528 b may output an initial location L0 such as a y position of the horizontal edge indicated by the grey edge superimposed on top of binary image 648 b in FIG. 6B. In an embodiment, the fitting step S528 b may output multiple parameters such as: x position; y position; angle; width; height; the parameters of the feature information IF that define a curved edge; etc. In an alternative embodiment, the fitting step S528 b may receive feature information IF indicating that the image 648 a is expected to contain a straight edge that has an expected angle relative to the horizontal or vertical. 
Once the initial location L0 is identified, the statistical value calculating step S532 may calculate a statistical value in a limited region around the initial location based on the feature information. For example, for image 648 b, since a horizontal edge has been identified as the expected feature (i.e. feature of interest), a statistical value of pixel intensities along horizontal edges in the image 648 a are calculated in the region around the initial location L0. FIG. 6C is an illustration of the statistical value (median) of horizontal edges for the entire image 648 a instead of just the region around the initial location L0 and is done for informative purposes only.” [0092] “In the present disclosure, maxfeature at location i(ROI T) refers to the maximum pixel value in the image 448 among the pixels that correspond to the feature specified by feature information IF at location Li in the ROI T. In the present disclosure, minfeature at location i(ROI T) refers to the minimum pixel value in the image 448 among the pixels that correspond to the feature specified by feature information IF at location Li in the ROI T. In the present disclosure, medianfeature at location i(ROI T) refers to the median pixel value in the image 448 among the pixels that correspond to the feature specified by feature information IF at location Li in the ROI T. In the present disclosure, maxacross i( )refers to the maximum value across the index i. 
In the context of calculating the set of statistical values that is only the index i that corresponds to the ROI T identified in the segmenting step S506.”; Schneider [0004] “It is an object of the present invention to develop such an illumination and displacement device in such a way that field-dependent imaging aberrations which are present during the projection exposure do not undesirably affect a projection result.” [0060] “The measuring device 25 a can carry out a wavefront measurement over the complete image field 20 or over a part of the image field 20 and then interpolate the measured values thus obtained to the complete image field 20. Examples of such measuring devices 25 a are given in U.S. Pat. No. 7,333,216 B2 and U.S. Pat. No. 6,650,399 B2. Partial field or full field measurements of the wavefronts can be performed at different, for example cyclically recurring instants. In the case of a partial field measurement over the image field 20, it is possible to use, for example, (x,y) measurement point arrays in the form of a (13, 3) array, a (13, 5) array or a (13, 7) array. It is likewise possible just to measure for example one field point (preferably the field center), three field points (preferably the field center and the field centers of the respective left and right field halves), five field points or else a different number of field points. Afterward, an extrapolation to one of the field point arrays, for example a (13, 3), (13, 5) or (13, 7) array, can then be effected. The number of measured field points can also vary from measurement instant to measurement instant. In the case of fields which, for example, are crescent-shaped or have a boundary that is different from rectangular, instead of field points in a field center or in a field half for the wavefront measurement it is also possible to use other characteristic field points, chosen depending on the field shape. 
Instead of a measurement of the wavefront over the image field 20, a simulation calculation with respect to the wavefront can also be effected, proceeding from the optical design data. In this case, the heating of the optical components carrying the illumination or imaging light 3, for example mirror or lens element heating, can be taken into account. As an alternative to a simulation calculated overall, it is possible to correct a simulation calculation model via customary standard methods, for example via a Levenberg-Marquardt algorithm, for optimizing the wavefront determination. In the context of the simulation calculation, it is also possible to use a feedforward model, for example a linear extrapolation. The simulation calculations and full field or partial field wavefront measurements over the image field 20 can be used complementarily to one another.”)
Claim 4
Regarding claim 4, Bean in view of Cheng and Schneider teaches the features of claim 3 and further teaches:
wherein the first model is further configured to determine an input corresponding to the target setpoint. (Bean [0087]-[0088] “For example, consider an image 648 a shown in FIG. 6A, the segmenting step S506 may filter the image 648 a in the filtering substep S528 a forming the binary image 648 b also illustrated in FIG. 6B. In the case of image 648 a, the fitting step S528 b may receive feature information IF indicating that the image 648 a is expected to contain a straight horizontal edge. The feature information IF may also include information that identifies the expected position and/or size of the horizontal edge. The fitting step S528 b may output an initial location L0 such as a y position of the horizontal edge indicated by the grey edge superimposed on top of binary image 648 b in FIG. 6B. In an embodiment, the fitting step S528 b may output multiple parameters such as: x position; y position; angle; width; height; the parameters of the feature information IF that define a curved edge; etc. In an alternative embodiment, the fitting step S528 b may receive feature information IF indicating that the image 648 a is expected to contain a straight edge that has an expected angle relative to the horizontal or vertical. Once the initial location L0 is identified, the statistical value calculating step S532 may calculate a statistical value in a limited region around the initial location based on the feature information. For example, for image 648 b, since a horizontal edge has been identified as the expected feature (i.e. feature of interest), a statistical value of pixel intensities along horizontal edges in the image 648 a are calculated in the region around the initial location L0. FIG. 6C is an illustration of the statistical value (median) of horizontal edges for the entire image 648 a instead of just the region around the initial location L0 and is done for informative purposes only.” [0092] “In the present disclosure, maxfeature at location i(ROI T) refers to the maximum pixel value in the image 448 among the pixels that correspond to the feature specified by feature information IF at location Li in the ROI T. In the present disclosure, minfeature at location i(ROI T) refers to the minimum pixel value in the image 448 among the pixels that correspond to the feature specified by feature information IF at location Li in the ROI T. In the present disclosure, medianfeature at location i(ROI T) refers to the median pixel value in the image 448 among the pixels that correspond to the feature specified by feature information IF at location Li in the ROI T. In the present disclosure, maxacross i( )refers to the maximum value across the index i. In the context of calculating the set of statistical values that is only the index i that corresponds to the ROI T identified in the segmenting step S506.” – The model in Bean determines a region of the field corresponding to the setpoint.)
Claim 5
Regarding claim 5, Bean in view of Cheng and Schneider teaches the features of claim 3 and further teaches:
wherein the first model is a partial field aware lens model configured not to optimize the input for locations outside of the partial field. (Bean [0092] “In the present disclosure, maxfeature at location i(ROI T) refers to the maximum pixel value in the image 448 among the pixels that correspond to the feature specified by feature information IF at location Li in the ROI T. In the present disclosure, minfeature at location i(ROI T) refers to the minimum pixel value in the image 448 among the pixels that correspond to the feature specified by feature information IF at location Li in the ROI T. In the present disclosure, medianfeature at location i(ROI T) refers to the median pixel value in the image 448 among the pixels that correspond to the feature specified by feature information IF at location Li in the ROI T. In the present disclosure, maxacross i( )refers to the maximum value across the index i. In the context of calculating the set of statistical values that is only the index i that corresponds to the ROI T identified in the segmenting step S506.” [0103] “In an embodiment, the set of input images 548 is divided into subsets of images based on the feature information IF. For example, the input images may be divided into the image subsets (548 edge; 548 mark; . . . 548 N) one for one or more of each type of feature identified by the feature information IF. A plurality of models (modeledge; modelmark; . . . modelN) are then generated in the training step S538 for each subset of input images 548. The classification step S510 then uses the appropriate model among the plurality of models based on the feature information IF. In an alternative embodiment, a model is generated for each type of feature.” – The model is specific to the feature and operates on only one field at a time, and thus does not operate on locations outside the field being processed.)
Claims 10 and 19
Regarding claims 10 and 19, Bean in view of Cheng teaches the features of claim 1. Bean in view of Cheng does not appear to explicitly teach, but Bean in view of Cheng and Schneider teaches:
wherein the determining the input comprises determining an input for a first field based on an input for a second field, wherein the first field is a partial field and the second field is a full field. (Schneider [0060] “The measuring device 25 a can carry out a wavefront measurement over the complete image field 20 or over a part of the image field 20 and then interpolate the measured values thus obtained to the complete image field 20. Examples of such measuring devices 25 a are given in U.S. Pat. No. 7,333,216 B2 and U.S. Pat. No. 6,650,399 B2. Partial field or full field measurements of the wavefronts can be performed at different, for example cyclically recurring instants. In the case of a partial field measurement over the image field 20, it is possible to use, for example, (x,y) measurement point arrays in the form of a (13, 3) array, a (13, 5) array or a (13, 7) array. It is likewise possible just to measure for example one field point (preferably the field center), three field points (preferably the field center and the field centers of the respective left and right field halves), five field points or else a different number of field points. Afterward, an extrapolation to one of the field point arrays, for example a (13, 3), (13, 5) or (13, 7) array, can then be effected. The number of measured field points can also vary from measurement instant to measurement instant. In the case of fields which, for example, are crescent-shaped or have a boundary that is different from rectangular, instead of field points in a field center or in a field half for the wavefront measurement it is also possible to use other characteristic field points, chosen depending on the field shape. Instead of a measurement of the wavefront over the image field 20, a simulation calculation with respect to the wavefront can also be effected, proceeding from the optical design data. 
In this case, the heating of the optical components carrying the illumination or imaging light 3, for example mirror or lens element heating, can be taken into account. As an alternative to a simulation calculated overall, it is possible to correct a simulation calculation model via customary standard methods, for example via a Levenberg-Marquardt algorithm, for optimizing the wavefront determination. In the context of the simulation calculation, it is also possible to use a feedforward model, for example a linear extrapolation. The simulation calculations and full field or partial field wavefront measurements over the image field 20 can be used complementarily to one another.” – The Schneider reference extrapolates boundary points between partial and full fields. These boundary points are contemplated by Bean as parameters (see Bean [0083]-[0092] – segmentation and selection)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claims to modify the setting of points in a field/region of interest of partial and full fields taught in Bean by the extrapolation of boundary points of a field/region of interest from different fields including partial and full fields in Schneider because a person of ordinary skill in the art would be motivated by the teaching of establishing points and boundaries of a field/region of interest in Bean to look to Schneider to optimize the setting of the boundaries of the field/region of interest by extrapolating between full and/or partial fields so that field-dependent imaging aberrations which are present during the projection exposure do not undesirably affect a projection result. (Bean [0087]-[0088] “For example, consider an image 648 a shown in FIG. 6A, the segmenting step S506 may filter the image 648 a in the filtering substep S528 a forming the binary image 648 b also illustrated in FIG. 6B. In the case of image 648 a, the fitting step S528 b may receive feature information IF indicating that the image 648 a is expected to contain a straight horizontal edge. The feature information IF may also include information that identifies the expected position and/or size of the horizontal edge. The fitting step S528 b may output an initial location L0 such as a y position of the horizontal edge indicated by the grey edge superimposed on top of binary image 648 b in FIG. 6B. In an embodiment, the fitting step S528 b may output multiple parameters such as: x position; y position; angle; width; height; the parameters of the feature information IF that define a curved edge; etc. In an alternative embodiment, the fitting step S528 b may receive feature information IF indicating that the image 648 a is expected to contain a straight edge that has an expected angle relative to the horizontal or vertical. 
Once the initial location L0 is identified, the statistical value calculating step S532 may calculate a statistical value in a limited region around the initial location based on the feature information. For example, for image 648 b, since a horizontal edge has been identified as the expected feature (i.e. feature of interest), a statistical value of pixel intensities along horizontal edges in the image 648 a are calculated in the region around the initial location L0. FIG. 6C is an illustration of the statistical value (median) of horizontal edges for the entire image 648 a instead of just the region around the initial location L0 and is done for informative purposes only.” [0092] “In the present disclosure, maxfeature at location i(ROI T) refers to the maximum pixel value in the image 448 among the pixels that correspond to the feature specified by feature information IF at location Li in the ROI T. In the present disclosure, minfeature at location i(ROI T) refers to the minimum pixel value in the image 448 among the pixels that correspond to the feature specified by feature information IF at location Li in the ROI T. In the present disclosure, medianfeature at location i(ROI T) refers to the median pixel value in the image 448 among the pixels that correspond to the feature specified by feature information IF at location Li in the ROI T. In the present disclosure, maxacross i( )refers to the maximum value across the index i. 
In the context of calculating the set of statistical values that is only the index i that corresponds to the ROI T identified in the segmenting step S506.”; Schneider [0004] “It is an object of the present invention to develop such an illumination and displacement device in such a way that field-dependent imaging aberrations which are present during the projection exposure do not undesirably affect a projection result.” [0060] “The measuring device 25 a can carry out a wavefront measurement over the complete image field 20 or over a part of the image field 20 and then interpolate the measured values thus obtained to the complete image field 20. Examples of such measuring devices 25 a are given in U.S. Pat. No. 7,333,216 B2 and U.S. Pat. No. 6,650,399 B2. Partial field or full field measurements of the wavefronts can be performed at different, for example cyclically recurring instants. In the case of a partial field measurement over the image field 20, it is possible to use, for example, (x,y) measurement point arrays in the form of a (13, 3) array, a (13, 5) array or a (13, 7) array. It is likewise possible just to measure for example one field point (preferably the field center), three field points (preferably the field center and the field centers of the respective left and right field halves), five field points or else a different number of field points. Afterward, an extrapolation to one of the field point arrays, for example a (13, 3), (13, 5) or (13, 7) array, can then be effected. The number of measured field points can also vary from measurement instant to measurement instant. In the case of fields which, for example, are crescent-shaped or have a boundary that is different from rectangular, instead of field points in a field center or in a field half for the wavefront measurement it is also possible to use other characteristic field points, chosen depending on the field shape. 
Instead of a measurement of the wavefront over the image field 20, a simulation calculation with respect to the wavefront can also be effected, proceeding from the optical design data. In this case, the heating of the optical components carrying the illumination or imaging light 3, for example mirror or lens element heating, can be taken into account. As an alternative to a simulation calculated overall, it is possible to correct a simulation calculation model via customary standard methods, for example via a Levenberg-Marquardt algorithm, for optimizing the wavefront determination. In the context of the simulation calculation, it is also possible to use a feedforward model, for example a linear extrapolation. The simulation calculations and full field or partial field wavefront measurements over the image field 20 can be used complementarily to one another.”)
Claim 11
Regarding claim 11, Bean in view of Cheng and Schneider teaches the features of claim 10 and further teaches:
wherein determining the input comprises optimizing the input to apply a correction to one or more parameters in the full field. (Bean [0108] “After the new imprinting parameters are set, new films are imprinted using the imprinting process 300 and the method 700 returns to step S702 repeating the series of steps until the imprinted films meet the quality goal has determined by step S704. Once imprinting parameters meet the quality criteria, the imprinting parameters that meet the quality criteria are output as production imprinting parameters. The production imprinting parameters are then used to imprint a plurality of production substrates using process 300 which are then processed in processing step S312 so as to manufacture a plurality of articles on each substrate.” – The input and parameters are iteratively updated until optimized for all fields (including full fields), and wafers produced are improved.)
Claims 12 and 20: Bean in view of Cheng and Kang
Claim(s) 12 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over US 2021/0042906 A1 to Bean et al. (Bean) in view of NPL: “Programmable uniformity correction by using plug-in finger arrays in advanced lithography system” by Cheng et al. (Cheng) and NPL: “Critical dimension control in photolithography based on the yield by a simulation program” by Kang et al. (Kang).
Claims 12 and 20
Regarding claims 12 and 20, Bean in view of Cheng teaches the features of claims 1 and 15. Cheng teaches critical dimension uniformity (Cheng Abstract “Illumination integrated non-uniformity (IINU) is one of the key factors to determine the resolution and Critical Dimension Uniformity (CDU) which are important performance parameters in advanced lithography system.”). Bean in view of Cheng fails to teach, but Bean in view of Cheng and Kang teaches:
wherein the one or more parameters comprises one or more selected from: overlay data, critical dimension data, levelling data, alignment data, or edge placement error data. (Kang Abstract “The critical dimension (CD) of wafers in photolithography is the most important parameter that determines the final performance of devices. The sampling of CD’s, as a result, is essential and must be taken with caution.” – Critical dimension is taught as a parameter for lithography.)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claims to modify the non-exhaustive list of parameters for lithography correction of Bean by including critical dimension as a parameter as taught in Kang because a person of ordinary skill in the art would be motivated to add the critical dimension of Kang to the non-exhaustive list of lithographic correction parameters of Bean, as the critical dimension of Kang is the most important parameter that determines the final performance of wafer-related devices. (Bean [0066] “A non-limiting list of exemplary imprinting parameters are: drop dispense pattern; template shape; spatio-temporal illumination pattern of actinic radiation; droplet volumes; substrate coating; template coating; template priming; template trajectory; formable material formulation; time from the template initially contacting the formable material to the formable material reaching a particular imprint field edge; time from the template initially contacting the formable material to the formable material reaching a particular imprint field corner; gas purge conditions (flow, time, and/or mixture); imprint sequence; back pressure; final imprint force: drop edge exclusion zone; spread time; gas flows and curing parameters; etc. Determining what these imprinting parameters are and making sure that the imprinting process 300 stays on track requires a method of characterizing the quality of the imprinted film.”; Kang Abstract “The critical dimension (CD) of wafers in photolithography is the most important parameter that determines the final performance of devices. The sampling of CD’s, as a result, is essential and must be taken with caution.”).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
(From Prior Office Action)
US 2005/0273753 A1 to Sezginer (Teaches models to correct for optical aberrations in lithography, including partial fields)
US 2010/0302525 A1 to Zimmerman et al. (Teaches uniformity correction and uniformity drift compensation including by using fingers)
US 2012/0262865 A1 to Zimmerman (Teaches uniformity correction including by using fingers)
US 2020/0004158 A1 to Koga et al. (Teaches a model for correcting lithographic errors)
US 2019/0384182 A1 to Takarada et al. (Teaches generating synchronization accuracy data for wafer fabrication)
US 2019/0187575 A1 to Lu et al. (Teaches alignment control in lithography in real time)
US 2018/0341173 A1 to Li et al. (Teaches alignment control including partial fields at edges)
US 2018/0210332 A1 to Slonaker et al. (Teaches alignment and overlay measurement and using and setting alignment marks)
US 2016/0342094 A1 to Endres et al. (Teaches illuminating partial fields and alignment)
US 2008/0286667 A1 to Okita (Teaches overlay management and a model for reducing overlay error)
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAY MICHAEL WHITE whose telephone number is (571) 272-7073. The examiner can normally be reached Mon-Fri 11:00-7:00 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ryan Pitaro can be reached at (571) 272-4071. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.M.W./Examiner, Art Unit 2188
/RYAN F PITARO/Supervisory Patent Examiner, Art Unit 2188