Prosecution Insights
Last updated: April 19, 2026
Application No. 18/614,145

PATTERN MODELING SYSTEM AND PATTERN MODELING METHOD

Non-Final OA §103
Filed: Mar 22, 2024
Examiner: WELLS, HEATH E
Art Unit: 2664
Tech Center: 2600 — Communications
Assignee: Samsung Electronics Co., Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 75% (Favorable)
OA Rounds: 1-2
To Grant: 3y 5m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 75% (58 granted / 77 resolved; +13.3% vs TC avg), above average
Interview Lift: +18.1% across resolved cases with interview, a strong lift
Avg Prosecution: 3y 5m typical timeline; 46 applications currently pending
Total Applications: 123 across all art units (career history)
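
These headline figures follow from simple arithmetic on the resolved-case counts. A minimal sketch of that arithmetic is below; the function names, the with/without-interview split, and the additive treatment of the interview lift are illustrative assumptions, not the tool's documented methodology.

```python
# Illustrative sketch: reproduce the examiner stats from resolved-case counts.
# The with/without-interview rates and the additive lift model are assumptions.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate = granted / resolved."""
    return granted / resolved

def interview_lift(rate_with_interview: float, rate_without: float) -> float:
    """Allow-rate difference between interviewed and non-interviewed cases."""
    return rate_with_interview - rate_without

career = allow_rate(58, 77)            # ~0.753 -> reported as 75%
lift = interview_lift(0.90, 0.719)     # hypothetical split -> ~+18.1 points
projected = min(career + lift, 1.0)    # ~0.934 -> reported as 93%

print(f"Career allow rate: {career:.1%}")
print(f"Interview lift: {lift:+.1%}")
print(f"Projected with interview: {projected:.1%}")
```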

Statute-Specific Performance

§101: 17.8% (-22.2% vs TC avg)
§103: 62.8% (+22.8% vs TC avg)
§102: 2.4% (-37.6% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)

Tech Center averages are estimates • Based on career data from 77 resolved cases
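
The deltas above are simply the examiner's per-statute rate minus the Tech Center average. A minimal sketch of that comparison follows; the flat 40% Tech Center estimate is back-calculated from the deltas shown and is an assumption, not a published figure.

```python
# Illustrative sketch: per-statute rates vs an assumed Tech Center average.
# The flat 0.40 TC estimate is inferred from the deltas shown above.

examiner_rates = {"§101": 0.178, "§103": 0.628, "§102": 0.024, "§112": 0.138}
tc_average = {statute: 0.40 for statute in examiner_rates}

for statute, rate in examiner_rates.items():
    delta = rate - tc_average[statute]
    print(f"{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```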

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged that the application claims priority to foreign application number KR 10-2023-0039293 dated 24 March 2023. No priority documents have been received.

Information Disclosure Statement

The IDS dated 22 March 2024 has been considered and placed in the application file.

Specification - Drawings

Acknowledgement is made of the color drawings submitted 22 March 2024 in this application. Applicants are reminded that, absent a successful petition, the black and white drawings submitted on 22 March 2024, along with the replacement sheets submitted on 17 June 2024, will be used. No petition is currently on file.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-20 are rejected under 35 U.S.C. 103 as obvious over US Patent Publication 2020/0380362 A1 (Cao et al.).

Claim 1

[Figure: Cao et al. Fig. 6, showing predicting a mask pattern and adjusting the mask pattern based on filters.]

Regarding Claim 1, Cao et al.
teach a pattern modeling method of predicting image data ("a target pattern, and training, by a hardware computer system, the machine learning model configured to predict a mask pattern based on the process model and a cost function that determines a difference between the predicted pattern and the target pattern," paragraph [0009]), the pattern modeling method comprising: generating first image data based on a sample pattern that is learned by a deep neural network (DNN) ("the machine learning model configured to predict a mask pattern," paragraph [0009], where a machine learning model is a deep neural network); generating second image data by measuring the first image data ("a cost function that describes the difference between a predicted resist image and an experimentally measured resist image (SEM image)," paragraph [0105], where the second image is the SEM image); determining an area of the second image data to which a weight filter is to be applied ("In an embodiment, the parameter may be the weight and/or bias of the machine learning model (e.g., CNN)," paragraph [0126], where the machine learning model is a weight filter); training the DNN by applying the weight filter to the determined area of the second image data ("The ctm_parameter are optimized parameters determined during the CTMCNN training using gradient based method. In an embodiment, the parameters may be weights and bias of the CNN," paragraph [0126]); and predicting at least one pattern image based on a result of the training the DNN ("FIG. 14C is flow chart of another method for predicting OPC (or CTM/CTM+ images) based on the LMC model 1310," paragraph [0172]).

It is recognized that the citations and evidence provided above are derived from potentially different embodiments of a single reference. Nevertheless, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to employ combinations and sub-combinations of these complementary embodiments, because Cao et al. explicitly motivates doing so at least in paragraphs [0050], [0056] and [0355], including “The descriptions above are intended to be illustrative, not limiting. Thus, it will be apparent to one skilled in the art that modifications may be made as described without departing from the scope of the claims set out below” and otherwise motivating experimentation and optimization.

The rejection of method claim 1 above applies mutatis mutandis to the corresponding limitations of system claim 9 and system claim 18, while noting that the rejection above cites to both device and method disclosures. Claims 9 and 18 are mapped below for clarity of the record and to specify any new limitations not included in claim 1.

Claim 2

Regarding claim 2, Cao et al. teach the pattern modeling method of claim 1, wherein the second image data comprises image data generated by applying a process condition to the first image data ("In an embodiment, the thin-mask approximation, also called the Kirchhoff boundary condition, is widely used to simplify the determination of the interaction of the radiation and the patterning device," paragraph [0068]).

Claim 3

Regarding claim 3, Cao et al.
teach the pattern modeling method of claim 1, wherein the determining the area of the second image data to which the weight filter is to be applied comprises determining an area corresponding to a critical dimension of the second image data ("the feature vector may include one or more characteristics (e.g., shape, arrangement, size, etc.) of the design layout comprised or formed by the patterning device, one or more characteristics (e.g., one or more physical properties such as a dimension, a refractive index, material composition, etc.) of the patterning device," paragraph [0076]).

Claim 4

Regarding claim 4, Cao et al. teach the pattern modeling method of claim 1, wherein the determining the area of the second image data to which the weight filter is to be applied comprises determining an area corresponding to a distribution of the second image data ("In an embodiment, an optics model may be used that represents optical characteristics (including changes to the radiation intensity distribution and/or the phase distribution caused by the projection optics) of projection optics of a lithographic apparatus," paragraph [0087]).

Claim 5

Regarding claim 5, Cao et al. teach the pattern modeling method of claim 1, wherein the determining the area of the second image data to which the weight filter is to be applied comprises determining an area corresponding to a pattern shape of the second image data ("The projection optics model can represent the optical characteristics of the projection optics, including aberration, distortion, one or more refractive indexes, one or more physical sizes, one or more physical dimensions, etc," paragraph [0087], where physical size and dimension is a pattern shape).

Claim 6

Regarding claim 6, Cao et al. teach the pattern modeling method of claim 1, wherein the DNN is trained based on: third image data obtained by convolving the weight filter with the second image data ("the mask image or near field is convolved with a series of kernels, then squared and summed, to obtain the optical or aerial image. The convolution kernels may be carried over directly to other CNN models," paragraph [0104]); and fourth image data obtained by convolving the weight filter with reference image data ("the mask image or near field is convolved with a series of kernels, then squared and summed, to obtain the optical or aerial image. The convolution kernels may be carried over directly to other CNN models," paragraph [0104]).

Claim 7

Regarding claim 7, Cao et al. teach the pattern modeling method of claim 6, wherein the training of the DNN comprises applying a difference between training data to which the weight filter corresponding to the area is applied and a loss function ("the training may be based on another training data set and a cost function (e.g., EPE or RMS). The training data may include a mask image (e.g., a CTM image obtained from the CTM1 model 1020 or CTM1 model 1030) corresponding to a target pattern, a simulated process image (e.g., a resist image, an aerial image, an etch image, etc.) corresponding to the mask images," paragraph [0135]).

Claim 8

Regarding claim 8, Cao et al. teach the pattern modeling method of claim 7, wherein the training the DNN further comprises updating weight data of the DNN based on a calculation result of the loss function ("In an embodiment, the parameter may be the weight and/or bias of the machine learning model (e.g., CNN)," paragraph [0126]).

Claim 9

Regarding claim 9, Cao et al.
teach a pattern modeling system for predicting image data ("a target pattern, and training, by a hardware computer system, the machine learning model configured to predict a mask pattern based on the process model and a cost function that determines a difference between the predicted pattern and the target pattern," paragraph [0009]), the pattern modeling system comprising: a memory storing instructions ("Computer system 100 also includes a main memory 106, such as a random access memory (RAM) or other dynamic storage device, coupled to bus 102 for storing information and instructions to be executed by processor 104," paragraph [0206]); and at least one processor ("Computer system 100 includes a bus 102 or other communication mechanism for communicating information, and a processor 104 (or multiple processors 104 and 105) coupled with bus 102 for processing information," paragraph [0206]) configured to execute the instructions to: generate first image data based on a sample pattern that is learned by a deep neural network (DNN), the DNN comprising a plurality of layers ("the machine learning model configured to predict a mask pattern," paragraph [0009], where a machine learning model is a deep neural network); generate second image data by measuring the first image data ("a cost function that describes the difference between a predicted resist image and an experimentally measured resist image (SEM image)," paragraph [0105], where the second image is the SEM image); determine an area of the second image data to which a weight filter is to be applied ("In an embodiment, the parameter may be the weight and/or bias of the machine learning model (e.g., CNN)," paragraph [0126], where the machine learning model is a weight filter); train the DNN by applying the weight filter to the determined area of the second image data ("The ctm_parameter are optimized parameters determined during the CTMCNN training using gradient based method. In an embodiment, the parameters may be weights and bias of the CNN," paragraph [0126]); and predict at least one pattern image based on a result of the training the DNN ("FIG. 14C is flow chart of another method for predicting OPC (or CTM/CTM+ images) based on the LMC model 1310," paragraph [0172]).

Claim 10

Regarding claim 10, Cao et al. teach the pattern modeling system of claim 9, wherein the at least one processor comprises at least one data preprocessor, and wherein the at least one data preprocessor is configured to execute the instructions to preprocess the image data based on the sample pattern ("portions of one or more methods described herein may be performed by computer system 100 in response to processor 104 executing one or more sequences of one or more instructions contained in main memory 106," paragraph [0208], where sequences teach preprocessing).

Claim 11

Regarding claim 11, Cao et al. teach the pattern modeling system of claim 10, wherein the first image data comprises image data obtained at least in part based on the sample pattern, and wherein the second image data comprises image data obtained by applying a process condition to the first image data ("In an embodiment, the thin-mask approximation, also called the Kirchhoff boundary condition, is widely used to simplify the determination of the interaction of the radiation and the patterning device," paragraph [0068]).

Claim 12

Regarding claim 12, Cao et al.
teach the pattern modeling system of claim 11, wherein the area to which the weight filter is to be applied comprises an area corresponding to a critical dimension (CD) of the second image data ("the feature vector may include one or more characteristics (e.g., shape, arrangement, size, etc.) of the design layout comprised or formed by the patterning device, one or more characteristics (e.g., one or more physical properties such as a dimension, a refractive index, material composition, etc.) of the patterning device," paragraph [0076]).

Claim 13

Regarding claim 13, Cao et al. teach the pattern modeling system of claim 11, wherein the area to which the weight filter is to be applied comprises an area corresponding to a distribution of the second image data ("In an embodiment, an optics model may be used that represents optical characteristics (including changes to the radiation intensity distribution and/or the phase distribution caused by the projection optics) of projection optics of a lithographic apparatus," paragraph [0087]).

Claim 14

Regarding claim 14, Cao et al. teach the pattern modeling system of claim 11, wherein the area to which the weight filter is to be applied comprises an area corresponding to a pattern shape of the second image data ("The projection optics model can represent the optical characteristics of the projection optics, including aberration, distortion, one or more refractive indexes, one or more physical sizes, one or more physical dimensions, etc," paragraph [0087], where physical size and dimension is a pattern shape).

Claim 15

Regarding claim 15, Cao et al. teach the pattern modeling system of claim 11, wherein the at least one data preprocessor is further configured to execute the instructions to: apply the weight filter corresponding to the area to the second image data ("In an embodiment, the parameter may be the weight and/or bias of the machine learning model (e.g., CNN)," paragraph [0126], where the machine learning model is a weight filter); and transmit an image to which the weight filter is applied to a loss function module ("the training process may involve reducing (in an embodiment, minimize), a cost function that describes the difference between a predicted resist image and an experimentally measured resist image (SEM image)," paragraph [0105]).

Claim 16

Regarding claim 16, Cao et al. teach the pattern modeling system of claim 15, wherein the loss function module is configured to: determine a difference between third image data generated based on applying the weight filter to the second image data and fourth image data generated based on applying the weight filter to reference image data ("the training may be based on another training data set and a cost function (e.g., EPE or RMS). The training data may include a mask image (e.g., a CTM image obtained from the CTM1 model 1020 or CTM1 model 1030) corresponding to a target pattern, a simulated process image (e.g., a resist image, an aerial image, an etch image, etc.) corresponding to the mask images," paragraph [0135]), and minimize the difference ("the training process may involve reducing (in an embodiment, minimize), a cost function that describes the difference between a predicted resist image and an experimentally measured resist image (SEM image)," paragraph [0105]).

Claim 17

Regarding claim 17, Cao et al.
teach the pattern modeling system of claim 16, wherein the reference image data comprises output image data of the DNN ("providing a first output of the first trained model as a second input to the second trained model," paragraph [0276]).

Claim 18

Regarding claim 18, Cao et al. teach a non-transitory computer-readable storage medium storing instructions that, when executed by at least one processor, cause the at least one processor to: generate first image data based on a sample pattern that is learned by a deep neural network (DNN), the DNN comprising a plurality of layers ("the machine learning model configured to predict a mask pattern," paragraph [0009], where a machine learning model is a deep neural network); generate second image data by measuring the first image data ("a cost function that describes the difference between a predicted resist image and an experimentally measured resist image (SEM image)," paragraph [0105], where the second image is the SEM image); determine an area of the second image data to which a weight filter is to be applied ("In an embodiment, the parameter may be the weight and/or bias of the machine learning model (e.g., CNN)," paragraph [0126], where the machine learning model is a weight filter); train the DNN by applying the weight filter to the determined area of the second image data ("The ctm_parameter are optimized parameters determined during the CTMCNN training using gradient based method. In an embodiment, the parameters may be weights and bias of the CNN," paragraph [0126]); and predict at least one pattern image based on a result of the training the DNN ("FIG. 14C is flow chart of another method for predicting OPC (or CTM/CTM+ images) based on the LMC model 1310," paragraph [0172]), wherein the weight filter is applied to an area corresponding to a feature portion of the second image data ("the feature vector may include one or more characteristics (e.g., shape, arrangement, size, etc.) of the design layout comprised or formed by the patterning device, one or more characteristics (e.g., one or more physical properties such as a dimension, a refractive index, material composition, etc.) of the patterning device," paragraph [0076]).

Claim 19

Regarding claim 19, Cao et al. teach the non-transitory computer-readable storage medium of claim 18, wherein the feature portion comprises an area corresponding to a critical dimension of the second image data ("the feature vector may include one or more characteristics (e.g., shape, arrangement, size, etc.) of the design layout comprised or formed by the patterning device, one or more characteristics (e.g., one or more physical properties such as a dimension, a refractive index, material composition, etc.) of the patterning device," paragraph [0076]).

Claim 20

Regarding claim 20, Cao et al. teach the non-transitory computer-readable storage medium of claim 18, wherein the feature portion comprises an area corresponding to a distribution of the second image data ("In an embodiment, an optics model may be used that represents optical characteristics (including changes to the radiation intensity distribution and/or the phase distribution caused by the projection optics) of projection optics of a lithographic apparatus," paragraph [0087]).

Reference Cited

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.

US Patent Publication 2021/0405521 A1 to Kim et al.
discloses a proximity correction method for a semiconductor manufacturing process that includes: generating a plurality of pieces of original image data from a plurality of sample regions, with the sample regions selected from layout data used in the semiconductor manufacturing process; removing some pieces of original image data that overlap with each other from the plurality of pieces of original image data, resulting in a plurality of pieces of input image data; inputting the plurality of pieces of input image data to a machine learning model; obtaining a prediction value of critical dimensions of target patterns included in the plurality of pieces of input image data from the machine learning model; measuring a result value for critical dimensions of actual patterns corresponding to the target patterns on a semiconductor substrate on which the semiconductor manufacturing process is performed; and performing learning of the machine learning model using the prediction value and the result value.

US Patent Publication 2022/0035236 A1 to Kim et al. discloses accurately and quickly restoring an image on the mask to the shape on the mask, and a mask manufacturing method using the method of forming the mask. The method of forming a mask includes obtaining first images by performing rasterization and image correction on shapes on the mask corresponding to first patterns on a wafer, obtaining second images by applying a transformation to the shapes on the mask, performing deep learning based on a transformation relationship between ones of the first images and ones of the second images corresponding to the first images, and forming a target shape on the mask corresponding to a target pattern on the wafer, based on the deep learning.

US Patent Publication 2022/0284344 A1 to Ma et al. discloses training a machine learning model configured to predict values of a physical characteristic associated with a substrate and for use in adjusting a patterning process. The method involves obtaining a reference image; determining a first set of model parameter values of the machine learning model such that a first cost function is reduced from an initial value of the cost function obtained using an initial set of model parameter values, where the first cost function is a difference between the reference image and an image generated via the machine learning model.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HEATH E WELLS whose telephone number is (703)756-4696. The examiner can normally be reached Monday-Friday 8:00-4:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ms. Jennifer Mehmood, can be reached on 571-272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Heath E. Wells/
Examiner, Art Unit 2664
Date: 17 January 2026
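
The claim elements the rejection maps (claims 1, 6-8, and 17: convolving a fixed weight filter with the measured second image data and with reference image data over a determined area, taking a loss on the difference, and updating the DNN's weights) can be read as a region-weighted training step. The sketch below is an editor's illustration of that reading for orientation only; it is not the applicant's implementation or Cao's disclosed method, and every tensor shape, the mask region, and the choice of MSE loss are invented.

```python
# Illustrative reading of claims 1, 6-8, and 17: convolve a fixed weight
# filter with the (measured) second image data and with reference image data
# (here the DNN output, per claim 17) over a determined area, compute a loss
# on the difference, and update the DNN weights. All shapes, the mask region,
# and the MSE loss are invented for illustration; this is not the applicant's
# or Cao's actual implementation.
import torch
import torch.nn.functional as F

dnn = torch.nn.Sequential(                       # stand-in "DNN"
    torch.nn.Conv2d(1, 8, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(8, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(dnn.parameters(), lr=1e-3)

second_image = torch.rand(1, 1, 64, 64)          # "second image data" (measured)
weight_filter = torch.ones(1, 1, 5, 5) / 25.0    # fixed weight filter (averaging kernel)
area = torch.zeros(1, 1, 64, 64)                 # "determined area" (e.g., near a CD)
area[..., 20:44, 20:44] = 1.0

reference = dnn(second_image)                    # reference image data = DNN output
third = F.conv2d(second_image * area, weight_filter, padding=2)   # filter * second image
fourth = F.conv2d(reference * area, weight_filter, padding=2)     # filter * reference
loss = F.mse_loss(third, fourth)                 # loss on the difference

optimizer.zero_grad()
loss.backward()
optimizer.step()                                 # update DNN weight data
```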

Prosecution Timeline

Mar 22, 2024
Application Filed
Jan 17, 2026
Non-Final Rejection — §103
Feb 20, 2026
Interview Requested
Feb 27, 2026
Applicant Interview (Telephonic)
Feb 27, 2026
Examiner Interview Summary

Precedent Cases

Applications with similar technology granted by this examiner

Patent 12602755
DEEP LEARNING-BASED HIGH RESOLUTION IMAGE INPAINTING
2y 5m to grant • Granted Apr 14, 2026
Patent 12597226
METHOD AND SYSTEM FOR AUTOMATED PLANT IMAGE LABELING
2y 5m to grant • Granted Apr 07, 2026
Patent 12591979
IMAGE GENERATION METHOD AND DEVICE
2y 5m to grant • Granted Mar 31, 2026
Patent 12588876
TARGET AREA DETERMINATION METHOD AND MEDICAL IMAGING SYSTEM
2y 5m to grant • Granted Mar 31, 2026
Patent 12586363
GENERATION OF PLURAL IMAGES HAVING M-BIT DEPTH PER PIXEL BY CLIPPING M-BIT SEGMENTS FROM MUTUALLY DIFFERENT POSITIONS IN IMAGE HAVING N-BIT DEPTH PER PIXEL
2y 5m to grant • Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 75%
With Interview (+18.1%): 93%
Median Time to Grant: 3y 5m
PTA Risk: Low
Based on 77 resolved cases by this examiner. Grant probability derived from career allow rate.
