Prosecution Insights
Last updated: April 19, 2026
Application No. 18/273,269

Automatic Optical Inspection Using Hybrid Imaging System

Non-Final OA: §102, §103
Filed: Jul 19, 2023
Examiner: ROGERS, SCOTT A
Art Unit: 2683
Tech Center: 2600 (Communications)
Assignee: Orbotech Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 92% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 1m
Grant Probability With Interview: 93%

Examiner Intelligence

Career Allow Rate: 92% (574 granted / 625 resolved), +29.8% vs TC avg. This examiner grants above average.
Interview Lift: +0.9%, a minimal (about +1%) lift measured across resolved cases with an interview.
Avg Prosecution: 2y 1m (a fast prosecutor), with 18 applications currently pending.
Career history: 643 total applications across all art units. The headline rate figures are reproduced in the sketch below.
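
As a quick sanity check, the headline percentages follow directly from the raw counts. A minimal sketch, assuming only that the displayed figures are rounded to the nearest whole percent:

```python
# Reproducing the headline figures from the examiner's career data.
granted, resolved = 574, 625
allow_rate = granted / resolved         # 0.9184 -> displayed as 92%
with_interview = allow_rate + 0.009     # +0.9% interview lift -> displayed as 93%
print(f"career allow rate: {allow_rate:.1%}")      # 91.8%
print(f"with interview:    {with_interview:.1%}")  # 92.7%
```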

Statute-Specific Performance

§101: 10.5% (-29.5% vs TC avg)
§102: 25.6% (-14.4% vs TC avg)
§103: 37.7% (-2.3% vs TC avg)
§112: 12.9% (-27.1% vs TC avg)
Tech Center averages shown are estimates. Based on career data from 625 resolved cases. The implied TC average is recovered in the sketch below.
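
A small sketch makes the comparison concrete. The assumption here is that each "vs TC avg" delta is the examiner's rate minus the Tech Center average estimate, so the implied estimate is rate minus delta:

```python
# Recovering the implied Tech Center average for each statute.
rates  = {"§101": 10.5, "§102": 25.6, "§103": 37.7, "§112": 12.9}
deltas = {"§101": -29.5, "§102": -14.4, "§103": -2.3, "§112": -27.1}
for statute, rate in rates.items():
    tc_avg = rate - deltas[statute]
    print(f"{statute}: examiner {rate}% vs implied TC avg {tc_avg}%")
# Every implied average works out to 40.0%, which suggests a single flat
# Tech Center estimate rather than a per-statute figure.
```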

Office Action

Grounds of rejection: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-4, 9-10, and 17-22 are rejected under 35 U.S.C. 102(a)(1) / 102(a)(2) as being anticipated by Zhou et al. (US 20200211178 A1).

Referring to claim 1: Zhou et al. disclose a method (abstract) comprising: obtaining a prediction model, wherein the prediction model is configured to predict enhanced-quality images of products based on low-quality images of the products (Fig. 1, par. 36: SEM sampling system 102 uses the machine learning model to generate enhanced high quality images 109 based on the low quality images 108), wherein the prediction model is generated based on pairs of images obtained by a dual-scanning system comprising a low-quality scanning system and a high-quality scanning system (Fig. 1, par. 5, 29, 36: generating the machine learning model based on data analyzed from a plurality of low quality image and high quality image pairs); utilizing the low-quality scanning system to capture a low-quality image of a product (Fig. 1, par. 36: low quality images collected during operation in a high throughput mode); predicting, based on the low-quality image of the product and using the prediction model, an enhanced-quality image of the product, wherein the enhanced-quality image has a higher quality than a quality of the low-quality image (par. 36: quality enhancement system 107 uses the machine learning model to modify an image to approximate a result obtained with high quality); and performing defects detection on the enhanced-quality image, whereby detecting defects without utilizing the high-quality scanning system (Fig. 1, par. 36: using the collected low quality images 108 to generate higher quality images 109 with high throughput for inspection; par. 26: inspecting very small defects; par. 29/31: increasing the accuracy of the inspection).

Referring to claim 17: This claim describes a product or article of manufacture (i.e., a computer program product comprising a non-transitory computer readable storage medium) retaining program instructions that, when read by a processor, cause the processor to perform the method as set forth in claim 1. Zhou et al. disclose such a product for this purpose (par. 7, 72). Therefore, this claim is rejected for the same reasons as indicated above with respect to claim 1.

Referring to claims 2 and 18: Zhou et al. further describe the low-quality scanning system as faster than the high-quality scanning system, whereby detecting defects in a shorter time in comparison to defect detection that is based on high-quality images obtained using the high-quality scanning system (par. 44: a low quality image is obtained by a scan with high throughput, and a high quality image is obtained by a scan with low throughput).

Referring to claims 3 and 19: Zhou et al. further describe that said utilizing, said predicting and said performing defects detection are performed by a student module, wherein the student module comprises the low-quality scanning system and is devoid of the high-quality scanning system (Figs. 1, 6A, par. 36, 59: quality enhancement module/system 617/107, a component of the processor, receives low quality images 108 and generates enhanced images 109).

Referring to claims 4 and 20: Zhou et al. further describe said utilizing, said predicting and said performing the defects detection being performed by a teacher module, wherein the teacher module comprises the dual-scanning system comprising the low-quality scanning system and the high-quality scanning system (Figs. 1, 6A, par. 34, 35: automated SEM sampling system 102 / 602, a component of the processor, receives training images 105 / 610 of different qualities).

Referring to claims 9 and 21: Zhou et al. further describe that obtaining the prediction model comprises obtaining a set of pairs of low-quality and high-quality images of products, obtained using the dual-scanning system, wherein said obtaining the set of pairs is performed at a customer site, and training the prediction model using the set of pairs of low-quality and high-quality images of products, whereby generating the prediction model, wherein said utilizing the low-quality scanning system to capture the low-quality image of the product is performed at the customer site (par. 19-26: implied by the method and system as indicated above for automatically obtaining training images and using said images to train a machine learning model in an IC component manufacturing process on a semiconductor manufacturing line).

Referring to claims 10 and 22: Zhou et al. further describe the enhanced-quality image as having a lower quality than a quality of images obtained by the high-quality scanning system (par. 36: implied by enhancing low quality images 108 collected during operation in a high throughput mode to generate enhanced high quality images 109 . . . the machine learning model being used to modify an image to approximate a result obtained with an increased number of scans in generating higher quality images with high throughput).

Claims 1-4, 9-10, and 17-22 are rejected under 35 U.S.C. 102(a)(1) / 102(a)(2) as being anticipated by Fang et al. (US 20200018944 A1).

Referring to claim 1: Fang et al. disclose a method (abstract) comprising: obtaining a prediction model, wherein the prediction model is configured to predict, i.e., generate, enhanced-quality images of products based on low-quality images of the products (Figs. 3-4, par. 48-52: image enhancement system 300 includes machine learning network or model 320; par. 75: image enhancement system 300 can use high-resolution image 310 to train machine learning network 320 before performing high-throughput inspection; par. 109-110: image enhancement system 300 may analyze high-resolution image 310 to develop deconvolution or benchmarking strategies to enhance image quality of low-resolution inspection image(s) 330), wherein the prediction model is generated based on pairs of images obtained by a dual-scanning system comprising a low-quality scanning system and a high-quality scanning system (par. 29: SEM inspection tool may acquire a first SEM image at a higher resolution / quality and acquire a second SEM image at a lower resolution / quality; par. 120: image enhancement system 300 may utilize machine learning network 320 to analyze the high-resolution data and the low-resolution data acquired); utilizing the low-quality scanning system to capture a low-quality image of a product (par. 22: SEM inspection tool may be used to acquire a low-resolution inspection image (such as a low-resolution / quality image 330 shown in FIG. 4); par. 53: image enhancement system 300 may acquire inspection image 330 as a low-resolution / quality image of a sample [and] may be acquired using image acquirer 260 of EBI system (Figs. 1-2) or any such inspection system capable of acquiring low resolution images); predicting, based on the low-quality image of the product and using the prediction model, an enhanced-quality image of the product, wherein the enhanced-quality image has a higher quality than a quality of the low-quality image (Figs. 4-5, par. 22: SEM inspection tool may be used to acquire a low-resolution / quality inspection image (e.g., low-resolution image 330) and, using features of the low-resolution inspection image, the inspection tool can identify one or more stored high-resolution / quality inspection images (e.g., high-resolution image 310) having similar features to enhance the acquired low-resolution image; using the pattern information from the high-resolution inspection image, the SEM inspection tool can improve the low-resolution inspection image (e.g., enhanced image 420); par. 84-96: enhanced image 420 is created based on low-resolution inspection image 330 acquired in real-time and trained features derived from different types of training images; par. 106-107: using machine learning network 320, enhanced image(s) are generated (e.g., enhanced image 420) from low resolution inspection image(s) 330); and performing defects detection on the enhanced-quality image, whereby detecting defects without utilizing the high-quality scanning system (par. 108: enhanced images may be used for inspection, defect identification and analysis, process verification, quality control, yield improvement analysis).

Referring to claim 17: This claim describes a product or article of manufacture (i.e., a computer program product comprising a non-transitory computer readable storage medium) retaining program instructions that, when read by a processor, cause the processor to perform the method as set forth in claim 1. Fang et al. disclose such a product for this purpose (see par. 7, 37, 317). Therefore, this claim is rejected for the same reasons as indicated above with respect to claim 1.

Referring to claims 2 and 18: Fang et al. further describe the low-quality scanning system as faster than the high-quality scanning system, whereby detecting defects in a shorter time in comparison to defect detection that is based on high-quality images obtained using the high-quality scanning system (par. 22, 84-89: low-resolution / quality inspection images may be enhanced, while maintaining the high throughput of the SEM inspection tool).

Referring to claims 3 and 19: Fang et al. further describe that said utilizing, said predicting and said performing defects detection are performed by a student module, wherein the student module comprises the low-quality scanning system and is devoid of the high-quality scanning system (Figs. 2-4, par. 54: image enhancement system 300 may acquire inspection image 330 as a low-resolution image . . . using image acquirer 260 of EBI system 100 or any such inspection system capable of acquiring low resolution images; par. 74: image enhancement system 300 may be part of image processing system 250 of FIG. 2, or may comprise image processing system 250 including controller 109, image acquirer 260, storage 270, and the like; par. 82-87: enhanced image 420 is created based on the low-resolution inspection image 330 acquired in real-time and trained features using automated machine learning network 320; par. 89: enhanced image 420 may be used for inspection, defect identification and analysis, process verification, quality control, yield improvement analysis, etc.).

Referring to claims 4 and 20: Fang et al. further describe said utilizing, said predicting and said performing the defects detection being performed by a teacher module (comprising the student module indicated above), wherein the teacher module additionally comprises the dual-scanning system comprising the low-quality scanning system and the high-quality scanning system (par. 29: SEM inspection tool may acquire a first SEM image at a higher resolution / quality and acquire a second SEM image at a lower resolution / quality; par. 120: image enhancement system 300 may utilize machine learning network 320 to analyze the high-resolution data and the low-resolution data acquired).

Referring to claims 9 and 21: Fang et al. further describe that obtaining the prediction model comprises obtaining a set of pairs of low-quality and high-quality images of products, obtained using the dual-scanning system, wherein said obtaining the set of pairs is performed at a customer site, and training the prediction model using the set of pairs of low-quality and high-quality images of products, whereby generating the prediction model, wherein said utilizing the low-quality scanning system to capture the low-quality image of the product is performed at the customer site (par. 19-21: implied by the method and system as indicated above for automatically obtaining training images and using said images to train a machine learning model in an IC component manufacturing process and manufacturing facility).

Referring to claims 10 and 22: Fang et al. further describe the enhanced-quality image as having a lower quality than a quality of images obtained by the high-quality scanning system (par. 22: implied by a SEM inspection tool acquiring a low-resolution inspection image (e.g., 330 shown in FIG. 4) while in high throughput mode, and identifying one or more stored high-resolution inspection images (e.g., 310 of FIG. 4) using features of the low-resolution inspection image having similar features to enhance the acquired low-resolution image, which can improve (but not duplicate) the low-resolution inspection image, such as enhanced image 420 of FIG. 4).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 11-16 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. or Fang et al. as applied to claims 1-2 above, and further in view of Aslan et al. (US 20170132528 A1).

Referring to claims 11-12: These claims describe a system comprising a "teacher module", which corresponds to the dual-scanning system used to obtain low and high quality images, and a "student module", which corresponds to the low-quality scanning system used to capture a low quality image, a model generator configured to generate a prediction model to perform the predicting, and a defects detector configured to perform the defects detection, all as set forth in method claim 1. Zhou et al. disclose such a system (see par. 2, 33-36 and passages indicated above with respect to claim 1). Fang et al. disclose such a system (see abstract, par. 5-6, and passages indicated above with respect to claim 1). Neither Zhou et al. nor Fang et al. further describe a "teacher module" smaller in number than "a plurality of student modules". Aslan et al. teach techniques and systems for applying and jointly training multiple machine learning models (teacher and student modules). Aslan et al. describe a technique for joint training of multiple machine learning models, wherein a teacher model 300 can be trained in parallel with M student models 302 (see Fig. 3 / par. 51), allowing information to be passed (or knowledge transferred) between each student model 302 and the teacher model 300. Each of the student models 302 can influence the training of the teacher model 300, and vice versa, during joint training. The joint model training techniques described by Aslan et al. provide greater flexibility as compared to conventional model training methods due to the ability of at least one model to influence the training of at least one other model during the joint training process. Machine learning models that are trained using the techniques described can perform better (in terms of the accuracy of model output) than conventionally-trained machine learning models in some scenarios and can be deployed or implemented in a more versatile fashion. Therefore, for the advantages described in Aslan et al., it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Zhou et al. or Fang et al. to provide a configuration of a "teacher module" smaller in number than "a plurality of student modules".

Referring to claim 13: Zhou et al. as modified above further describe said teacher module and said plurality of student modules as deployed at a customer site (par. 19-26: implied by the method and system as indicated above for automatically obtaining training images and using said images to train a machine learning model in an IC component manufacturing process on a semiconductor manufacturing line). Fang et al. as modified above further describe said teacher module and said plurality of student modules as deployed at a customer site (par. 19-21: implied by the method and system as indicated above for automatically obtaining training images and using said images to train a machine learning model in an IC component manufacturing process and manufacturing facility).
Referring to claim 14: Zhou et al. as modified above further describe the low-quality scanning system as faster than the high-quality scanning system, whereby detecting defects in a shorter time in comparison to defect detection that is based on high-quality images obtained using the high-quality scanning system (par. 44: a low quality image is obtained by a scan with high throughput, and a high quality image is obtained by a scan with low throughput). Fang et al. as modified above further describe the low-quality scanning system as faster than the high-quality scanning system, whereby detecting defects in a shorter time in comparison to defect detection that is based on high-quality images obtained using the high-quality scanning system (par. 22, 84-89: low-resolution / quality inspection images may be enhanced, while maintaining the high throughput of the SEM inspection tool).

Referring to claim 15: Zhou et al. as modified above further describe said teacher module as configured to be utilized for gathering a training dataset to be used by said model generator (Figs. 5A-5C, 6A, 7, par. 5-6, 26-28, 33, 36, 49, 53-55, 57, 60: obtaining training images to train a machine learning model; par. 5, 7-8, 28, 61, 71: analyzing a plurality of patterns of data . . . to use in relation to training the machine learning model), wherein said plurality of student modules are configured to be utilized for performing the automated optical inspection using images obtained by the low-quality scanning system (see claim 3 above). Fang et al. as modified above further describe said teacher module as configured to be utilized for gathering a training dataset to be used by said model generator (par. 47-48: information file 315 contains reference feature information, e.g., a large amount of GDS or OASIS format images making up a large dataset of comparison features), wherein said plurality of student modules are configured to be utilized for performing the automated optical inspection using images obtained by the low-quality scanning system (see claim 3 above).

Referring to claim 16: Zhou et al. as modified above further describe said teacher module as configured to be utilized for performing the automated optical inspection using images obtained by the low-quality scanning system and without utilizing the high-quality scanning system (see claim 4 above). Fang et al. as modified above further describe said teacher module as configured to be utilized for performing the automated optical inspection using images obtained by the low-quality scanning system and without utilizing the high-quality scanning system (see claim 4 above).

Allowable Subject Matter

Claims 5-8 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Referring to these claims, the prior art searched and of record neither anticipates nor suggests, in the claimed combinations, comparing results between performing defects detection on the high-quality image and performing defects detection on the enhanced-quality image. In Fang et al., comparator 370 is configured to compare extracted relevant information from machine learning network 320 and extracted pattern information from pattern extractor 340 (par. 63). The comparator is not comparing the defects extracted from two images having different qualities, but rather is comparing patterns from a high-quality image and a low-quality inspection image to identify trained features. Image enhancer 380 may be configured to receive the output from comparator 370 and inspection image 330 to generate an enhanced image 420, which is used for inspection, defect identification and analysis, process verification, quality control, yield improvement analysis, etc. (par. 86, 89).

Information Disclosure Statements

The information disclosure statements submitted on 19 July 2023, 18 July 2025, and 15 October 2025 were filed in compliance with the provisions of 37 CFR 1.97 and 1.98. Accordingly, the statements have been considered by the examiner as indicated below. The relevance of the cited documents in the first and third filed statements, in addition to any as applied above, can be found in the International Search Report and/or Written Opinion from the ISA dated 29 May 2022 for PCT/IL2022/050201 (of record), the Office Action from the Intellectual Property Office (IPO) of the ROC (Taiwan) dated 08 February 2025 for Patent Application No. 111101007 (of record), the Notice of Reasons for Refusal from the Japanese Patent Office (JPO) dated 01 July 2025 for Patent Application No. 2023-546061 (of record), and the First Office Action from the State Intellectual Property Office of the PRC (People's Republic of China) dated 25 July 2025 for Application No. 2022800084156 (of record). Applicant has not provided an explanation of relevance of the cited documents in the second filed statement, which are summarized below.

Leem et al. (US 20200090304 A1) disclose monitoring a semiconductor fabrication process using an image conversion model having an artificial neural network. The image conversion model, when executed, causes the processor to receive a first image and a second image of a semiconductor wafer. The artificial neural network is trained by inputting a dataset representing the first image and the second image, generating a conversion image of the semiconductor wafer, and calibrating weights and biases of the artificial neural network to match the conversion image to the second image. A third image of the semiconductor wafer is generated based on the calibrated weights and biases of the artificial neural network. The image conversion model with the trained artificial neural network may be transmitted to another device for image conversion of low resolution images.

Tripodi et al. (US 20190378012 A1) disclose a method of determining a characteristic of interest relating to a structure on a substrate formed by a lithographic process, the method comprising: obtaining an input image of the structure; and using a trained neural network to determine the characteristic of interest from said input image. Also disclosed is a reticle comprising a target forming feature comprising more than two sub-features each having different sensitivities to a characteristic of interest when imaged onto a substrate to form a corresponding target structure on said substrate. Related methods and apparatuses are also described.

Zhang et al. (US 20170193680 A1 / TW 201734895 A) disclose methods and systems for generating a high resolution image for a specimen from one or more low resolution images of the specimen. One system includes one or more computer subsystems configured for acquiring one or more low resolution images of a specimen. The system also includes one or more components executed by the one or more computer subsystems. The one or more components include a model that includes one or more first layers configured for generating a representation of the one or more low resolution images. The model also includes one or more second layers configured for generating a high resolution image of the specimen from the representation of the one or more low resolution images.

Cited Art

The prior art and other references made of record and not relied upon are considered pertinent to applicant's disclosure.

Yang (US 11436702 B2) discloses a method for super-resolution image reconstruction that includes obtaining an original image that has first resolution and includes a target object, generating a first target image by increasing the first resolution of the original image, determining first feature points relating to the target object based on the first target image, determining first priori information relating to the target object based on the first feature points relating to the target object, and generating a second target image having second resolution higher than the first resolution based on the first priori information relating to the target object and the first target image.

Ge et al. (US 12211191 B2) disclose an inspection method that includes receiving a plurality of training images and an image of a target object obtained from inspection of the target object. The method further includes generating, by one or more training codes, a plurality of inference codes. The one or more training codes are configured to receive the plurality of training images as input and output the plurality of inference codes. The one or more training codes and the plurality of inference codes include computer executable instructions. The method further includes selecting one or more inference codes from the plurality of inference codes based on a user input and/or one or more characteristics of at least a portion of the received plurality of training images. The method also includes inspecting the received image using the one or more inference codes of the plurality of inference codes.

Bae et al. (US 20200097850 A1) disclose that the student model may be configured in the same number as the multiple features, and the useful information of the teacher model that has finished pre-learning may be forwarded to a number of student models for the multiple features so as to be learned (par. 12 / Fig. 6A).

Yang et al. (US 20230049405 A1) disclose a method that includes patterning a hard mask over a target layer, capturing a low resolution image of the hard mask, and enhancing the low resolution image of the hard mask with a first machine learning model to produce an enhanced image of the hard mask. The method further includes analyzing the enhanced image of the hard mask with a second machine learning model to determine whether the target layer has defects.

Olsen et al. (US 20240354371 A1) disclose systems and methods for generating predicted high-resolution images from low-resolution images. To generate the predicted high-resolution images, the technology may utilize machine learning models and super resolution models in a series of processes. For instance, the low-resolution images may undergo a sensor transformation based on processing by a machine learning model. The low-resolution images may also be combined with land structure features and/or prior high-resolution images to form an augmented input that is processed by a super resolution model to generate an initial predicted high-resolution image. The initial predicted high-resolution image may be combined or stacked with other predicted high-resolution images to form a stacked image. That stacked image may then be processed by another super resolution model to generate a final predicted high-resolution image.

Ouchi (US 20250245784 A1) discloses implementing image synthesis without loss of surface information of secondary electron (SE) images and shadow information of backscattered electron (BSE) images. The proposed image processing techniques include applying data of a first quality image (low quality image) to a trained model, estimating a structural feature and a material feature of a second quality image (high quality image) corresponding to the first quality image, calculating at least one shadow datum based on the structural feature and a synthesis parameter and calculating at least one gradation datum based on the material feature and a synthesis parameter, generating a synthesized image from the at least one shadow datum and the at least one gradation datum, and outputting the synthesized image as a prediction result of the second quality image (see FIG. 8).

Tao et al. (WO 2024088665 A1) disclose a method for training a prediction model to generate a high-resolution image representing defects on a substrate from a low-resolution image of the substrate. The method includes inputting a first image and a reference image of defects on a substrate, which are representative of images captured using different image capture conditions, to a neural network. The neural network is executed to generate a predicted image in response to the first image. A loss function that is indicative of a difference between a defect distribution in the predicted image and a defect distribution in the reference image is calculated, and the neural network is modified based on the loss function. The neural network may be trained until the loss function is minimized.

Kong et al. (CN 115170483 B) relate to defect detection, and particularly to a super-resolution reconstruction and feature extraction method for defect detection. The method comprises the basic steps of determining a low-resolution image block training sample set and a high-resolution image block training sample set, generating a reconstruction model to obtain a predicted high-resolution image block, smoothing edge pixels of the high-resolution image block, and extracting the defect edge region. The super-resolution image reconstructed by the method has higher contrast, complete edge detail information, and a clear outline of the defect area; the reconstructed defect detection image has higher overall quality, does not exhibit obvious distortion or blurring, and has important significance for extracting and analyzing key defect information in the surface defect detection process of additive manufacturing workpieces.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Scott Rogers, whose telephone number is 571-272-7467. The examiner can normally be reached 8 am to 7 pm flex. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Akwasi Sarpong, can be reached at 571-270-3438. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Scott A Rogers/
Primary Examiner, Art Unit 2681
18 October 2025
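
The method the examiner maps in the §102 rejections follows a concrete four-step pipeline: train a prediction model on low/high-quality image pairs from a dual-scanning system, capture a low-quality image, predict an enhanced-quality image from it, and run defect detection on the prediction without the high-quality scanner. The sketch below is a minimal illustration of that pipeline shape only; the synthetic image pairs, the per-pixel least-squares "model", and the threshold detector are hypothetical stand-ins, not the SEM systems or neural networks disclosed in Zhou or Fang.

```python
import numpy as np

rng = np.random.default_rng(0)

def dual_scan_pairs(n, h=16, w=16):
    """Hypothetical dual-scanning system: returns (low-quality, high-quality) pairs."""
    high = rng.random((n, h, w))                  # slow, high-quality scan
    low = high + rng.normal(0.0, 0.2, (n, h, w))  # fast, noisy scan
    return low, high

# 1. "Obtaining a prediction model ... generated based on pairs of images"
low_train, high_train = dual_scan_pairs(200)
X = low_train.reshape(len(low_train), -1)
Y = high_train.reshape(len(high_train), -1)
X1 = np.hstack([X, np.ones((len(X), 1))])         # add a bias column
W, *_ = np.linalg.lstsq(X1, Y, rcond=None)        # toy least-squares "model"

# 2. "Utilizing the low-quality scanning system to capture a low-quality image"
low_img, high_img = dual_scan_pairs(1)

# 3. "Predicting ... an enhanced-quality image" from the low-quality image alone
x1 = np.hstack([low_img.reshape(1, -1), np.ones((1, 1))])
enhanced = (x1 @ W).reshape(high_img.shape)

# 4. "Performing defects detection on the enhanced-quality image" without the
# high-quality scanner; thresholding against a golden reference stands in for
# the defect detector here.
golden = np.full_like(enhanced, 0.5)
defects = np.abs(enhanced - golden) > 0.4

print(f"enhanced-image MSE vs. actual high-quality scan: "
      f"{np.mean((enhanced - high_img) ** 2):.4f}")
print(f"defect pixels flagged: {int(defects.sum())}")
```

The point of the sketch is structural: once the model is trained on paired scans, only the fast scanner appears on the inspection path, which is the feature the examiner reads onto both Zhou and Fang.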

Prosecution Timeline

Jul 19, 2023: Application Filed
Oct 18, 2025: Non-Final Rejection under §102 and §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597125: APPARATUS AND METHOD FOR CORRECTING A CONTOUR OF AN OBJECT IN A MEDICAL IMAGE. Granted Apr 07, 2026 (2y 5m to grant).
Patent 12597120: PRINTED IMAGE DEFECT DISCRIMINATION DEVICE AND METHOD DISPLAYING DETECTED DEFECTS IN LIST BY TYPE IN DISPLAY MODE ACCORDING TO STATE OF DEFECT. Granted Apr 07, 2026 (2y 5m to grant).
Patent 12597138: SYSTEMS AND METHODS FOR ANNOTATING TARGET IMAGES BASED ON FEATURES THEREIN AND SELECTED CANDIDATE SAMPLE IMAGES WITH ANNOTATIONS. Granted Apr 07, 2026 (2y 5m to grant).
Patent 12586391: Systems and Methods for Deconvolving Cell Types in Histology Slide Images, Using Super-Resolution Spatial Transcriptomics Data. Granted Mar 24, 2026 (2y 5m to grant).
Patent 12578488: IMPROVED ATTENUATION MAP GENERATED BY LSO BACKGROUND. Granted Mar 17, 2026 (2y 5m to grant).
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 92%
With Interview: 93% (+0.9%)
Median Time to Grant: 2y 1m
PTA Risk: Low
Based on 625 resolved cases by this examiner. Grant probability is derived from the career allow rate.
