Prosecution Insights
Last updated: April 19, 2026
Application No. 18/462,172

Reinforcement Learning (RL) Based Federated Automated Defect Classification and Detection

Status: Non-Final OA (§103)
Filed: Sep 06, 2023
Examiner: LU, HUA
Art Unit: 2118
Tech Center: 2100 — Computer Architecture & Software
Assignee: Katholieke Universiteit Leuven
OA Round: 1 (Non-Final)

Grant Probability: 69% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 2m
Grant Probability with Interview: 96%

Examiner Intelligence

Career Allow Rate: 69% (391 granted / 568 resolved; +13.8% vs TC avg, above average)
Interview Lift: +27.7% (based on resolved cases with interview)
Typical Timeline: 3y 2m average prosecution; 45 applications currently pending
Career History: 613 total applications across all art units

Statute-Specific Performance

§101: 7.1% (-32.9% vs TC avg)
§103: 65.9% (+25.9% vs TC avg)
§102: 10.7% (-29.3% vs TC avg)
§112: 6.2% (-33.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 568 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

2. This action is responsive to the Application filed on 9/6/2023. A filing date of 9/6/2023 is acknowledged. Claims 1-20 are pending in this application. Claims 1, 15, and 19 are independent claims.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

3. Claims 1-8, 11, 12, 15-17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Maialen Larranaga et al. (US Publication 20230076185 A1, hereinafter Larranaga) in view of Rajkumar Theagarajan et al. (US Publication 20230136110 A1, hereinafter Theagarajan).

As for independent claim 1, Larranaga discloses:

A federated machine learning method (Larranaga: [0104], an algorithm may be a machine learning algorithm. In some embodiments, the machine learning algorithm may be and/or include mathematical equations, other algorithms, plots, charts, networks (e.g., neural networks), and/or other tools and machine learning components. For example, the machine learning algorithm may be and/or include one or more neural networks having an input layer, an output layer, and one or more intermediate or hidden layers. In some embodiments, the one or more neural networks may be and/or include deep neural networks (e.g., neural networks that have one or more intermediate or hidden layers between the input and output layers)) comprising:

providing, from a central model server (Larranaga: [0180], In the Internet example, a server 130 might transmit a requested code for an application program through Internet 128, ISP 126, local network 122 and communication interface 118), an initial trained machine learning (ML) model to a plurality of clients as a respective local ML model, wherein the initial trained ML model (Larranaga: [0138], Training the initial algorithm may comprise providing the training data to the initial algorithm as input to the initial algorithm) is configured to identify defect features from scanning electron microscopy (SEM) images (Larranaga: [0097], Different types of metrology tools MT for making such measurements are known, including scanning electron microscopes or various forms of scatterometer metrology tools MT);

receiving, from at least one client by the central model server, information indicative of a respective updated local ML model (Larranaga: [0138], Learning to better predict performance may comprise iteratively updating one or more of the algorithm parameters (e.g., before or after serving), and determining whether the update resulted in a better or a worse prediction of the known performance data); and

determining, based on the information indicative of the respective updated local ML models, an updated global ML model (Larranaga: [0141], During serving phase 702, agent 500 performs 814 the optimal policy (e.g., 810 and/or 812) for a semiconductor processing process 815 and updates 811 training data 800 (e.g., to improve the policy) based on the information (e.g., processing information such as overlay measurements 820 and corresponding yield enhancements 830) generated during serving phase 702).

Larranaga discloses using machine learning to identify defects on the substrate, including an initial model and an updated model, but does not clearly disclose that the system includes a central server and a plurality of clients. In an analogous art of using machine learning to detect defects with SEM images, Theagarajan discloses:

an initial trained machine learning (ML) model to a plurality of clients as a respective local ML model, wherein the initial trained ML model (Theagarajan: Fig. 2 and [0054], the one or more components may include initial DL model 1 (202), initial DL model 2 (204), initial DL model 3 (206), . . . , and initial DL model N (208), as shown in FIG. 2) is configured to identify defect features from scanning electron microscopy (SEM) images (Theagarajan: [0005], an inspection process and generating additional information about the defects at a higher resolution using either a high magnification optical system or a scanning electron microscope (SEM)).

Larranaga and Theagarajan are analogous arts because they are in the same field of endeavor: using machine learning to detect and classify defects in semiconductors. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of Larranaga using the teachings of Theagarajan to include using a plurality of initial DL models and generating a final model. It would provide Larranaga's method with enhanced capabilities of determining the defect information efficiently and accurately.
As for claim 2, Larranaga-Theagarajan discloses: further comprising training an ML model based on initial training data to form the initial trained ML model, wherein the training data comprises a plurality of SEM images, wherein at least a portion of the SEM images each comprise one or more semiconductor defects (Theagarajan: [0005], Defect review typically involves re-detecting defects detected as such by an inspection process and generating additional information about the defects at a higher resolution using either a high magnification optical system or a scanning electron microscope (SEM)).

As for claim 3, Larranaga-Theagarajan discloses: wherein receiving the respective updated local ML model by the central model server does not include receiving local training data from the at least one client (Theagarajan: [0077], The one or more components further include a final knowledge distilled DL model configured for determining information for the specimen or an additional specimen based on output generated for the specimen or the additional specimen with one or more runtime modes of the imaging subsystem. For example, the one or more components may include final KD DL model 224, as shown in FIG. 2, that uses input one or more runtime mode image(s) 226 to generate output that includes determined information 228 … the final KD DL model may have an architecture that is different than all of the initial DL models or an architecture that is the same as one or more of the initial DL models, possibly with one or more different parameters; please note the final model is different than all of the initial models).

As for claim 4, Larranaga-Theagarajan discloses: further comprising further training, at each client, the respective local ML model based on local training data to provide the respective updated local ML model (Larranaga: [0028], determining the process metric, and initiating the adjustment are performed as at least part of a model free reinforcement learning (MFRL) framework).

As for claim 5, Larranaga-Theagarajan discloses: wherein training the respective local ML model comprises applying a Markov Decision process to the local training data so as to accurately classify, detect, and/or localize semiconductor defects, wherein the local training data comprises a plurality of local SEM images, wherein at least a portion of the local SEM images each comprise one or more semiconductor defects (Larranaga: [0129], In some embodiments, this arrangement comprises a Markov Decision Process).

As for claim 6, Larranaga-Theagarajan discloses: wherein the training of the respective local ML model comprises adjusting at least one parameter weight of the respective local ML model (Larranaga: [0102], the cost function can be weighted root mean square (RMS) of deviations of certain characteristics (evaluation points) of the system with respect to the intended values (e.g., ideal values) of these characteristics).

As for claim 7, Larranaga-Theagarajan discloses: wherein determining the updated global ML model comprises incorporating the at least one adjusted parameter weight into the updated global ML model by way of a federated averaging technique.

As for claim 8, Larranaga-Theagarajan discloses: wherein determining the updated global ML model comprises incorporating a plurality of adjusted parameter weights from a respective plurality of local ML models based on a consensus/voting process (Theagarajan: [0074], there will be a consensus among the multiple DL models that the specimen location is a defect location).
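Claim 7 recites incorporating adjusted parameter weights into the global model "by way of a federated averaging technique." For readers unfamiliar with the term, here is a minimal sketch of what sample-weighted federated averaging (in the FedAvg sense) typically looks like; the function name, layer name, and client sample counts are illustrative assumptions, not taken from the cited references:

```python
def federated_average(client_updates):
    """Combine local model weights into global weights, FedAvg-style.

    client_updates: list of (num_samples, weights) pairs, where weights is a
    dict mapping parameter name -> list of floats. Each client's contribution
    is weighted by its local sample count.
    """
    total = sum(n for n, _ in client_updates)
    names = client_updates[0][1].keys()
    return {
        name: [
            sum(n * w[name][i] for n, w in client_updates) / total
            for i in range(len(client_updates[0][1][name]))
        ]
        for name in names
    }

# Three hypothetical clients reporting updated weights for one layer.
updates = [
    (10, {"conv1": [1.0, 2.0]}),
    (30, {"conv1": [2.0, 4.0]}),
    (60, {"conv1": [3.0, 6.0]}),
]
global_weights = federated_average(updates)
# Weighted mean per position: (10*1 + 30*2 + 60*3) / 100 = 2.5, and
# (10*2 + 30*4 + 60*6) / 100 = 5.0.
```

Note that only weight vectors cross the network in this scheme, which is also the distinction claims 3 and 17 draw around the training data staying local.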
As for claim 11, Larranaga-Theagarajan discloses: wherein the SEM images comprise semiconductor features, wherein the semiconductor features comprise at least one of: line-space features, contact hole features, pillar features, logic circuit features, static random access memory (SRAM) features, or dynamic random access memory (DRAM) features (Larranaga: [0086], design rules define the space tolerance between devices (such as gates, capacitors, etc.) or interconnect lines, to ensure that the devices or lines do not interact with one another in an undesirable way. One or more of the design rule limitations may be referred to as a "critical dimension" (CD). A critical dimension of a device can be defined as the smallest width of a line or hole, or the smallest space between two lines or two holes).

As for claim 12, Larranaga-Theagarajan discloses: wherein the initial trained ML model is configured to localize the defect features within a given SEM image frame (Theagarajan: [0063], predicted defect locations for inspection applications, predicted structure characteristics for metrology applications, and so on).

As for independent claim 15, Larranaga discloses:

A federated machine learning method (Larranaga: [0104], an algorithm may be a machine learning algorithm. In some embodiments, the machine learning algorithm may be and/or include mathematical equations, other algorithms, plots, charts, networks (e.g., neural networks), and/or other tools and machine learning components. For example, the machine learning algorithm may be and/or include one or more neural networks having an input layer, an output layer, and one or more intermediate or hidden layers. In some embodiments, the one or more neural networks may be and/or include deep neural networks (e.g., neural networks that have one or more intermediate or hidden layers between the input and output layers)) comprising:

training, based on an initial training dataset, a machine learning (ML) model to form an initial trained ML model (Larranaga: [0138], Training the initial algorithm may comprise providing the training data to the initial algorithm as input to the initial algorithm), wherein the initial training dataset comprises a plurality of scanning electron microscopy (SEM) images, wherein the SEM images each comprise semiconductor features (Larranaga: [0097], Different types of metrology tools MT for making such measurements are known, including scanning electron microscopes or various forms of scatterometer metrology tools MT), wherein the initial trained ML model is configured to identify, classify, and localize defect features from among the semiconductor features in the SEM images (Larranaga: [0093], The inspection apparatus may alternatively be constructed to identify defects on the substrate and may, for example, be part of the lithocell LC, or may be integrated into the lithographic apparatus LA, or may even be a stand-alone device);

providing, from a central model server (Larranaga: [0180], In the Internet example, a server 130 might transmit a requested code for an application program through Internet 128, ISP 126, local network 122 and communication interface 118), the initial trained machine learning model to a plurality of clients as a respective local ML model (Larranaga: [0138], Training the initial algorithm may comprise providing the training data to the initial algorithm as input to the initial algorithm);

training, at each of the plurality of clients, the respective local ML model based on a respective client training dataset to form a respective updated local ML model, wherein the respective client training dataset corresponds to a plurality of SEM images of semiconductor defect features specific to that client (Larranaga: [0138], Learning to better predict performance may comprise iteratively updating one or more of the algorithm parameters (e.g., before or after serving), and determining whether the update resulted in a better or a worse prediction of the known performance data), wherein the respective updated local ML model comprises a respective set of updated weight parameters (Larranaga: [0102], the cost function can be weighted root mean square (RMS) of deviations of certain characteristics (evaluation points) of the system with respect to the intended values (e.g., ideal values) of these characteristics);

providing, to the central model server from one or more clients, the respective set of updated weight parameters (Larranaga: [0102], the cost function can be weighted root mean square (RMS) of deviations of certain characteristics (evaluation points) of the system with respect to the intended values (e.g., ideal values) of these characteristics); and

determining, based on the respective set of updated weight parameters, an updated global ML model (Larranaga: [0141], During serving phase 702, agent 500 performs 814 the optimal policy (e.g., 810 and/or 812) for a semiconductor processing process 815 and updates 811 training data 800 (e.g., to improve the policy) based on the information (e.g., processing information such as overlay measurements 820 and corresponding yield enhancements 830) generated during serving phase 702).
Larranaga discloses using machine learning to identify defects on the substrate, including an initial model and an updated model, but does not clearly disclose that the system includes a central server and a plurality of clients. In an analogous art of using machine learning to detect defects with SEM images, Theagarajan discloses:

the initial trained machine learning model to a plurality of clients as a respective local ML model; training, at each of the plurality of clients, the respective local ML model based on a respective client training dataset to form a respective updated local ML model (Theagarajan: Fig. 2 and [0054], the one or more components may include initial DL model 1 (202), initial DL model 2 (204), initial DL model 3 (206), . . . , and initial DL model N (208), as shown in FIG. 2), wherein the respective client training dataset corresponds to a plurality of SEM images of semiconductor defect features specific to that client (Theagarajan: [0005], an inspection process and generating additional information about the defects at a higher resolution using either a high magnification optical system or a scanning electron microscope (SEM)).

Larranaga and Theagarajan are analogous arts because they are in the same field of endeavor: using machine learning to detect and classify defects in semiconductors. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of Larranaga using the teachings of Theagarajan to include using a plurality of initial DL models and generating a final model. It would provide Larranaga's method with enhanced capabilities of determining the defect information efficiently and accurately.
As for claim 16, Larranaga-Theagarajan discloses: providing the updated global ML model to the plurality of clients as a respective new local ML model (Larranaga: [0028], determining the process metric, and initiating the adjustment are performed as at least part of a model free reinforcement learning (MFRL) framework).

As for claim 17, Larranaga-Theagarajan discloses: wherein providing the respective set of updated weight parameters to the central model server comprises not providing the respective client training dataset (Theagarajan: [0077], The one or more components further include a final knowledge distilled DL model configured for determining information for the specimen or an additional specimen based on output generated for the specimen or the additional specimen with one or more runtime modes of the imaging subsystem. For example, the one or more components may include final KD DL model 224, as shown in FIG. 2, that uses input one or more runtime mode image(s) 226 to generate output that includes determined information 228 … the final KD DL model may have an architecture that is different than all of the initial DL models or an architecture that is the same as one or more of the initial DL models, possibly with one or more different parameters; please note the final model is different than all of the initial models).
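Claims 15-17 describe each client training its own copy of the model and sending back only updated weight parameters, never the underlying SEM training data. A toy sketch of that client-side step, in which only a parameter (not the raw (x, y) samples) leaves the client; the single-pass SGD update, the linear model, and all names are illustrative assumptions, not taken from the cited references:

```python
def local_update(weight, local_data, lr=0.1):
    """Train a toy linear model y = w*x on a client's private data.

    weight: current global parameter (float); local_data: list of (x, y)
    pairs held only by this client. Returns just the updated parameter, so
    the raw samples never leave the client.
    """
    w = weight
    for x, y in local_data:
        grad = 2 * (w * x - y) * x  # d/dw of the squared error (w*x - y)**2
        w -= lr * grad              # one SGD step per sample
    return w  # only this number is transmitted to the central server

# Hypothetical client whose private samples are consistent with y = 2x.
client_data = [(1.0, 2.0), (2.0, 4.0)]
updated_w = local_update(1.0, client_data)  # moves from 1.0 toward 2.0
```

The central server would then aggregate such returned parameters across clients; the point of the sketch is only the data-stays-local boundary at the `return` line.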
As for independent claim 19, Larranaga discloses:

A method comprising: receiving a scanning electron microscope (SEM) image of a plurality of semiconductor features (Larranaga: [0097], Different types of metrology tools MT for making such measurements are known, including scanning electron microscopes or various forms of scatterometer metrology tools MT); and applying a trained global machine learning (ML) model to determine whether a defect feature exists within the SEM image, wherein the trained global ML model was trained based on incorporating a plurality of adjusted parameter weights (Larranaga: [0102], the cost function can be weighted root mean square (RMS) of deviations of certain characteristics (evaluation points) of the system with respect to the intended values (e.g., ideal values) of these characteristics; [0141], During serving phase 702, agent 500 performs 814 the optimal policy (e.g., 810 and/or 812) for a semiconductor processing process 815 and updates 811 training data 800 (e.g., to improve the policy) based on the information (e.g., processing information such as overlay measurements 820 and corresponding yield enhancements 830) generated during serving phase 702).

Larranaga discloses using machine learning to identify defects on the substrate, including an initial model and an updated model, but does not clearly disclose that the system includes a central server, a plurality of clients, and a consensus process. In an analogous art of using machine learning to detect defects with SEM images, Theagarajan discloses: from a respective plurality of local ML models operating on respective client devices (Theagarajan: Fig. 2 and [0054], the one or more components may include initial DL model 1 (202), initial DL model 2 (204), initial DL model 3 (206), . . . , and initial DL model N (208), as shown in FIG. 2) based on a consensus/voting process and without receiving local training data from the client devices (Theagarajan: [0074], there will be a consensus among the multiple DL models that the specimen location is a defect location).

Larranaga and Theagarajan are analogous arts because they are in the same field of endeavor: using machine learning to detect and classify defects in semiconductors. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of Larranaga using the teachings of Theagarajan to include using a plurality of initial DL models, generating a final model, and including a consensus process. It would provide Larranaga's method with enhanced capabilities of determining the defect information efficiently and accurately.

4. Claims 9, 13, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Larranaga and Theagarajan as applied to claim 1, and further in view of Robert Clark et al. (US Publication 20200083080 A1, hereinafter Clark).

As for claim 9, Larranaga-Theagarajan does not clearly disclose using encryption. In an analogous art of using machine learning to detect defects in semiconductors, Clark discloses: wherein receiving the information indicative of a respective updated local ML model by the central model server comprises receiving a local encrypted version of the respective updated local ML model (Clark: [0406], an encryption component 3255 to ensure information integrity during asset transmission and asset recovery at the intended recipient; [0421], for data assets or computer program assets, such assets can be encrypted while being packaged in order to retain integrity of the information conveyed by the asset).

Larranaga, Theagarajan, and Clark are analogous arts because they are in the same field of endeavor: using machine learning to detect and classify defects in semiconductors. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of Larranaga using the teachings of Clark to include encrypted data assets or computer program assets. It would provide Larranaga's method with enhanced capabilities of optimizing the asset distribution system.

As for claim 13, Larranaga-Theagarajan-Clark discloses: wherein the initial trained ML model comprises an encrypted ML model (Clark: [0406], an encryption component 3255 to ensure information integrity during asset transmission and asset recovery at the intended recipient; [0421], for data assets or computer program assets, such assets can be encrypted while being packaged in order to retain integrity of the information conveyed by the asset).

As for claim 14, Larranaga-Theagarajan-Clark discloses: providing the updated global ML model to the plurality of clients as a respective new encrypted local ML model (Clark: [0406], an encryption component 3255 to ensure information integrity during asset transmission and asset recovery at the intended recipient; [0421], for data assets or computer program assets, such assets can be encrypted while being packaged in order to retain integrity of the information conveyed by the asset).

5. Claims 10, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Larranaga and Theagarajan as applied to claims 1, 15, and 19, and further in view of Thomas Korb et al. (US Publication 20240411296 A1, hereinafter Korb).
As for claim 10, Larranaga-Theagarajan does not clearly disclose bridge defects. In an analogous art of using machine learning to detect defects in semiconductors, Korb discloses: wherein the initial trained ML model is configured to classify defect features from among a plurality of defect categories, wherein the defect categories comprise at least one of: bridge defects, line-collapse defects, gaps/breaks, or micro-bridges (Korb: [0256], detected bridge defects indicate insufficient etching, so the amount of etching is increased, detected line breaks indicate excessive etching, so the amount of etching is decreased, consistently occurring defects indicate a defective mask, so the mask is checked, and detected missing structures hint at non-ideal material deposition, so the material deposition is modified).

Larranaga, Theagarajan, and Korb are analogous arts because they are in the same field of endeavor: using machine learning to detect and classify defects in semiconductors. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of Larranaga using the teachings of Korb to expressly include bridge defects.

As for claim 18, Larranaga-Theagarajan-Korb discloses: wherein the semiconductor features comprise at least one of: line-space features, contact hole features, pillar features, logic circuit features, static random access memory (SRAM) features, or dynamic random access memory (DRAM) features (Larranaga: [0086], design rules define the space tolerance between devices (such as gates, capacitors, etc.) or interconnect lines, to ensure that the devices or lines do not interact with one another in an undesirable way. One or more of the design rule limitations may be referred to as a "critical dimension" (CD). A critical dimension of a device can be defined as the smallest width of a line or hole, or the smallest space between two lines or two holes), and wherein the defect features comprise at least one of: bridge defects, line-collapse defects, gaps/breaks, or micro-bridges (Korb: [0256], detected bridge defects indicate insufficient etching, so the amount of etching is increased, detected line breaks indicate excessive etching, so the amount of etching is decreased, consistently occurring defects indicate a defective mask, so the mask is checked, and detected missing structures hint at non-ideal material deposition, so the material deposition is modified).

As for claim 20, Larranaga-Theagarajan-Korb discloses: classifying the defect feature into a defect category from a plurality of defect categories (Theagarajan: [0005], Defects can generally be more accurately classified into defect types based on information determined by defect review compared to inspection), wherein the plurality of defect categories comprise at least one of: bridge defects, line-collapse defects, gaps/breaks, or micro-bridges (Korb: [0256], detected bridge defects indicate insufficient etching, so the amount of etching is increased, detected line breaks indicate excessive etching, so the amount of etching is decreased, consistently occurring defects indicate a defective mask, so the mask is checked, and detected missing structures hint at non-ideal material deposition, so the material deposition is modified).

Examiner's Note

Examiner has cited particular columns/paragraphs and line numbers in the references applied to the claims above for the convenience of the applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claim, other passages and figures may apply as well.
It is respectfully requested that the applicant, in preparing responses, fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. In the case of amending the claimed invention, Applicant is respectfully requested to indicate the portion(s) of the specification which dictate(s) the structure relied on for proper interpretation, and also to verify and ascertain the metes and bounds of the claimed invention. This will assist in expediting compact prosecution.

MPEP 714.02 recites: "Applicant should also specifically point out the support for any amendments made to the disclosure. See MPEP § 2163.06. An amendment which does not comply with the provisions of 37 CFR 1.121(b), (c), (d), and (h) may be held not fully responsive. See MPEP § 714." Amendments not pointing to specific support in the disclosure may be deemed as not complying with the provisions of 37 CFR 1.121(b), (c), (d), and (h) and therefore held not fully responsive. Generic statements such as "Applicants believe no new matter has been introduced" may be deemed insufficient.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Applicants are required under 37 C.F.R. § 1.111(c) to consider these references fully when responding to this action.

Sastry et al. (US Publication 20200027021), REINFORCEMENT LEARNING FOR MULTI-DOMAIN PROBLEMS.

It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 U.S.P.Q. 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 U.S.P.Q. 275, 277 (C.C.P.A. 1968)).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Hua Lu, whose telephone number is 571-270-1410 and fax number is 571-270-2410. The examiner can normally be reached Mon-Fri, 9:00 am to 6:00 pm EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Scott Baderman, can be reached at 571-272-3644. The fax phone number for the organization where this application or proceeding is assigned is 703-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Hua Lu/
Primary Examiner, Art Unit 2118
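One technical aside on the Office Action above: claims 9, 13, and 14 turn on transmitting model updates in encrypted form, and the Clark passages the examiner cites emphasize integrity of assets in transit. As a stdlib-only sketch of that transit-protection idea, here is an HMAC-SHA256 tag attached to a serialized weight payload; a real deployment would use authenticated encryption (e.g. AES-GCM) rather than a bare MAC, and the shared key and function names here are assumptions for illustration:

```python
import hashlib
import hmac
import json

SHARED_KEY = b"hypothetical-preshared-key"  # illustrative only, not a real scheme

def package_update(weights):
    """Serialize local weights and attach an HMAC-SHA256 tag for transit."""
    payload = json.dumps(weights, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return payload, tag

def verify_update(payload, tag):
    """Server-side check that the weight payload was not altered in transit."""
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

payload, tag = package_update({"conv1": [2.5, 5.0]})
assert verify_update(payload, tag)             # intact payload verifies
assert not verify_update(payload + b" ", tag)  # tampering is detected
```

`hmac.compare_digest` is used instead of `==` to avoid timing side channels when comparing tags.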

Prosecution Timeline

Sep 06, 2023: Application Filed
Oct 29, 2025: Non-Final Rejection (§103)
Jan 06, 2026: Applicant Interview (Telephonic)
Jan 06, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this examiner in similar technology areas

Patent 12593819: METHOD, APPARATUS AND SYSTEM FOR DETECTING CARBON EMISSION-INVOLVED GAS FROM RUMINANT (Granted Apr 07, 2026; 2y 5m to grant)
Patent 12585245: NUMERICAL VALUE CONTROLLER (Granted Mar 24, 2026; 2y 5m to grant)
Patent 12578706: CONTROL SYSTEM, INDUSTRIAL DEVICE, CONTROL METHOD, AND PROGRAM (Granted Mar 17, 2026; 2y 5m to grant)
Patent 12572265: METHODS, SYSTEMS, AND USER INTERFACE FOR DISPLAYING OF PRESENTATIONS (Granted Mar 10, 2026; 2y 5m to grant)
Patent 12560914: AUTOMATIC INSPECTION SYSTEM AND WIRELESS SLAVE DEVICE (Granted Feb 24, 2026; 2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 69%
Grant Probability with Interview: 96% (+27.7%)
Median Time to Grant: 3y 2m
PTA Risk: Low

Based on 568 resolved cases by this examiner. Grant probability is derived from the career allow rate.
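The caption above states that the grant probability is derived from the career allow rate. The headline figures are mutually consistent under simple arithmetic, as this sketch shows; the implied Tech Center average is back-computed from the page's own +13.8% delta, and the rounding conventions are assumptions:

```python
# Career figures from the examiner card: 391 granted out of 568 resolved.
granted, resolved = 391, 568
allow_rate = granted / resolved          # ~0.688, displayed as 69%

vs_tc_avg = 0.138                        # +13.8 percentage points vs TC average
implied_tc_avg = allow_rate - vs_tc_avg  # ~55.0%, the implied TC-average allow rate

print(f"allow rate: {allow_rate:.1%}, implied TC avg: {implied_tc_avg:.1%}")
```

The page does not state how the 96% with-interview figure is combined with the +27.7% lift, so that step is deliberately not reproduced here.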
