Prosecution Insights
Last updated: April 19, 2026
Application No. 18/552,591

SYSTEM, METHOD, AND COMPUTER DEVICE FOR AUTOMATED VISUAL INSPECTION USING ADAPTIVE REGION OF INTEREST SEGMENTATION

Non-Final OA (§102, §103)
Filed
Sep 26, 2023
Examiner
ESQUINO, CALEB LOGAN
Art Unit
2677
Tech Center
2600 — Communications
Assignee
Musashi AI North America Inc.
OA Round
1 (Non-Final)
Grant Probability: 69% (Favorable)
Expected OA Rounds: 1-2
To Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 69% (11 granted / 16 resolved), above average, +6.8% vs TC avg
Interview Lift: +41.7% on resolved cases with interview
Typical Timeline: 3y 0m average prosecution
Currently Pending: 27
Career History: 43 total applications across all art units

Statute-Specific Performance

§101: 6.1% (-33.9% vs TC avg)
§103: 55.8% (+15.8% vs TC avg)
§102: 17.2% (-22.8% vs TC avg)
§112: 18.6% (-21.4% vs TC avg)
TC averages are estimates. Based on career data from 16 resolved cases.

Office Action

Rejections: §102, §103
DETAILED ACTION

This action is in response to the application filed on September 26th, 2023. Claims 1-20 are pending and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on September 26th, 2023, December 17th, 2024, and May 22nd, 2025 are being considered by the examiner.

Drawings

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(4) because reference characters "806" and "832" in figure 14 have both been used to designate “Masked input image data”. For the purposes of examination, part 832 will be regarded as reading “object detection output data”. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Objections

Claim 12 is objected to because of the following informalities: Claim 12, “The method of claim 11 further comparing object detection output data”, contains a minor spelling mistake. This should read “The method of claim 11, further comprising…” or “The method of claim 11, wherein…”. Appropriate correction is required.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-4, 7-11, 13-16, and 18-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by “A Review and Analysis of Automatic Optical Inspection and Quality Monitoring Methods in Electronics Industry” (hereinafter referred to by its primary author, Ebayyeh).

In regards to claim 1, Ebayyeh teaches a system for visual inspection of a target article using adaptive region of interest ("ROI") segmentation, the system comprising: a camera for acquiring an inspection image of the target article; an AI visual inspection computing device for detecting defects or anomalies in the target article, the AI visual inspection computing device comprising: a communication interface for receiving the inspection image acquired by the camera (Ebayyeh Abstract “Automatic optical inspection (AOI) is one of the non-destructive techniques used in quality inspection of various products. This technique is considered robust and can replace human inspectors who are subjected to dull and fatigue in performing inspection tasks. A fully automated optical inspection system consists of hardware and software setups…Hardware setups used in acquiring images are then discussed in terms of the camera and lighting source selection and configuration”); an adaptive ROI segmentation module for processing the inspection image using an ROI segmentation model to generate a masked inspection image in which regions not of interest ("nROIs") are masked (Ebayyeh Figure 34; Page 183214 “In both studies it was found that the dark background of the image can affect the BDCT and can be confused with the defect features. Therefore, after specifying the ROI, an image mask is used to delineate the ROI.” Examiner note: In figure 34, regions of interest are highlighted in white, while non-regions of interest are masked and comprise black pixels. Furthermore, the cited paragraph shows that the areas outside of the ROI are masked; these areas are analogous to the nROIs of the current disclosure as they are areas not contained within the ROI.); an image analysis module for receiving the masked inspection image and analyzing the masked inspection image using an image analysis model (Ebayyeh Page 183221 “In AOI applications, template matching algorithm works by first identifying a reference template which usually represent the non-defected case (also known as golden template) that can be used for comparison. The selected template can be compared to the target samples using various kind of correlation functions.”) to generate output data indicating presence of the defects or anomalies detected by the image analysis model (Ebayyeh Page 183222 “The output of the image subtraction operation can be one of the following three cases: positive (potential defect), negative (potential defect), or zero (non-defective).”), wherein analysis of the masked inspection image is limited to non-masked ROIs (Examiner note: The masking portion of this reference is referred to as “Preprocessing” (see Page 183212, Section V.A), so it is performed before the comparison analysis.); and an output interface for displaying the output data (Ebayyeh Page 183249 “These algorithms were integrated with a user interface system that allows users to execute the following system operations including: loading the analysed dataset, adding or retracting the decision knowledge, controlling the parameters in WBM patterns clustering system, and monitoring the clustering results.”).

In regards to claim 2, Ebayyeh teaches the system of claim 1, wherein the image analysis model comprises an object detection model trained to detect at least one defect class in the masked inspection image (Ebayyeh Page 183235 “In terms of the nature of the output, classification can be subdivided into binary classification and multi-class classification. In binary classification, the outputs are categorized into two groups (e.g. pass/fail, defect/non-defect). In multi-class classification, the outputs are categorized into more than two groups”; Page 183237 “Wu et al. in [171] used Bayes classifier as binary first-stage classification approach to filter the defected and non-defected PCB samples before sending the non-defected results to an SVM multi-class classifier to specify the type of defect” Examiner note: The first excerpt shows that classifiers are split into 2 groups, binary and multi-class. The second excerpt shows that multi-class classifiers can identify types of defects, which are analogous to defect classes).

In regards to claim 3, Ebayyeh teaches the system of claim 1, wherein the image analysis model comprises a golden sample analysis module configured to compare the masked inspection image to a golden sample image of the target article (Ebayyeh Page 183221 “In AOI applications, template matching algorithm works by first identifying a reference template which usually represent the non-defected case (also known as golden template) that can be used for comparison”).

In regards to claim 4, Ebayyeh teaches the system of claim 1, wherein the image analysis model comprises an object detection model (Ebayyeh Page 183235 “In terms of the nature of the output, classification can be subdivided into binary classification and multi-class classification. In binary classification, the outputs are categorized into two groups (e.g. pass/fail, defect/non-defect). In multi-class classification, the outputs are categorized into more than two groups”) and a golden sample analysis module (Ebayyeh Page 183221 “In AOI applications, template matching algorithm works by first identifying a reference template which usually represent the non-defected case (also known as golden template) that can be used for comparison”).

In regards to claim 7, Ebayyeh teaches the system of claim 2, wherein the output data includes a defect type and a defect location for each defect detected by the object detection model (Ebayyeh Page 183216 “The proposed approach determines the defect type through image analysis using various features, such as the geometric characteristics and the shape descriptor with intensity distribution. Various rule-based algorithms were used to classify the defects according to features extracted such as minimum bounding rectangle, actual defect region and region-based descriptor” Examiner note: This section teaches that defect types and a minimum bounding rectangle can be identified. The minimum bounding rectangle is analogous to the defect location, as they both describe where the defect is located within the image).

In regards to claim 8, Ebayyeh teaches the system of claim 1, wherein the adaptive ROI segmentation model is trained to identify and mask a non-uniform area of the inspection image (Ebayyeh Figure 34; Page 183214 “In both studies it was found that the dark background of the image can affect the BDCT and can be confused with the defect features. Therefore, after specifying the ROI, an image mask is used to delineate the ROI.” Examiner note: These images are masked in such a way that the white region can be any shape).

In regards to claim 9, Ebayyeh teaches the system of claim 8, wherein the non-uniform area comprises any one or more of an improperly illuminated area in the inspection image, a user-defined non-uniform area, a component of the target article that varies across different target articles of the same class, and an irregularly textured area of the target article (Ebayyeh Figure 34 “The brighter regions in b1–b6 are obtained by applying Otsu’s auto-thresholding [37]” Examiner note: This reference teaches identifying the non-uniform area based on the brightness of the region; this is analogous to an improperly illuminated area, as both find a non-uniform area based on a difference in brightness).

In regards to claim 10, Ebayyeh teaches the system of claim 2, wherein the output data classifies the target article as either defective or non-defective (Ebayyeh Page 183235 “In this stage the inspection algorithm uses the extracted features as an input in order to produce a an output of categorized classes. In terms of the nature of the output, classification can be subdivided into binary classification and multi-class classification. In binary classification, the outputs are categorized into two groups (e.g. pass/fail, defect/nondefect).”).

In regards to claim 11, Ebayyeh anticipates the claim language as in the consideration of claim 1. In regards to claim 13, Ebayyeh anticipates the claim language as in the consideration of claim 7. In regards to claim 14, Ebayyeh anticipates the claim language as in the consideration of claim 8. In regards to claim 15, Ebayyeh anticipates the claim language as in the consideration of claim 9. In regards to claim 16, Ebayyeh anticipates the claim language as in the consideration of claim 1. In regards to claim 18, Ebayyeh anticipates the claim language as in the consideration of claim 7. In regards to claim 19, Ebayyeh anticipates the claim language as in the consideration of claim 8. In regards to claim 20, Ebayyeh anticipates the claim language as in the consideration of claim 9.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 5, 12, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Ebayyeh in view of US20210209418 (hereinafter referred to by its primary author, Badanes).

In regards to claim 5, Ebayyeh teaches the system of claim 4 further comprising object detection output data generated by the object detection model (Ebayyeh Page 183235 “In terms of the nature of the output, classification can be subdivided into binary classification and multi-class classification. In binary classification, the outputs are categorized into two groups (e.g. pass/fail, defect/non-defect). In multi-class classification, the outputs are categorized into more than two groups”) and golden sample output data generated by the golden sample analysis module (Ebayyeh Page 183221 “In AOI applications, template matching algorithm works by first identifying a reference template which usually represent the non-defected case (also known as golden template) that can be used for comparison”). Ebayyeh fails to teach a comparison module for comparing object detection output data generated by the object detection model with golden sample output data generated by the golden sample analysis module. However, Badanes teaches a comparison module for combining output data generated by two object detection models (Badanes Abstract “There is provided a method of defect detection on a specimen and a system thereof. The method includes: obtaining a runtime image representative of at least a portion of the specimen; processing the runtime image using a supervised model to obtain a first output indicative of the estimated presence of first defects on the runtime image; processing the runtime image using an unsupervised model component to obtain a second output indicative of the estimated presence of second defects on the runtime image; and combining the first output and the second output using one or more optimized parameters to obtain a defect detection result of the specimen.” Examiner note: This reference teaches combining two defect detection results to yield a final defect detection. This method would then be used on the output data generated by the object detection model and the golden sample analysis module to combine their defect findings.). Badanes is considered to be analogous to the claimed invention because they are both in the same field of defect detection. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system of Ebayyeh to include the teachings of Badanes, to provide the advantage of improved defect detection results (Badanes Paragraph [0097] “Thus, as illustrated above, the proposed system, comprising two or more supervised and unsupervised models, as well as the combination and optimization thereof, is capable of detecting defects, which may or may not have been seen during training (the unsupervised model can be used as a safety net for detection of unseen anomalies), thereby providing improved defect detection results.”).

In regards to claim 12, Ebayyeh in view of Badanes renders obvious the claim limitations as in the consideration of claim 5. In regards to claim 17, Ebayyeh in view of Badanes renders obvious the claim limitations as in the consideration of claim 5.

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Ebayyeh in view of US20170148226 (hereinafter referred to by its primary author, Zhang).

In regards to claim 6, Ebayyeh teaches the system of claim 3, but fails to teach wherein the golden sample module includes a generative model for generating the golden sample image from the inspection image. However, Zhang teaches wherein the golden sample module includes a generative model for generating the golden sample image from the inspection image (Zhang Paragraph [0104] “In one embodiment, the imaging system is configured to acquire the one or more actual images and the one or more simulated images and to detect defects on the specimen by comparing the one or more actual images to the one or more simulated images. For example, the embodiments described herein can be used to generate a “golden” or “standard” reference to improve die-to-database defect detection algorithms for mask and wafer inspection and metrology.”). Zhang is considered to be analogous to the claimed invention because they are both in the same field of defect detection. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system of Ebayyeh to include the teachings of Zhang, to provide the advantage of a computationally efficient method of creating a reference image (Zhang Paragraph [0028] “Some embodiments described herein include a deep generative model for realistic rendering of computer aided design (CAD) for applications such as semiconductor inspection and metrology. In addition, the embodiments described herein can provide a computationally efficient methodology for generating a realistic-looking image from associated CAD for tools such as electron beam and optical tools.”).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: “Detection and Segmentation of Manufacturing Defects with Convolutional Neural Networks and Transfer Learning” teaches a method of detecting defects using a neural network, which outputs a defect with a bounding box showing its location. “Automatic Detection and Identification of Defects by Deep Learning Algorithms from Pulsed Thermography Data” teaches a method of defect detection, where a region of interest is identified and the remaining regions (nROIs) are masked.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CALEB LOGAN ESQUINO whose telephone number is (703) 756-1462. The examiner can normally be reached M-Fr 8:00AM-4:00PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee, can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CALEB L ESQUINO/Examiner, Art Unit 2677
/ANDREW W BEE/Supervisory Patent Examiner, Art Unit 2677
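For context on the anticipation theory: the examiner maps the claims onto a concrete pipeline from Ebayyeh, in which non-regions of interest are masked to black pixels, the masked image is subtracted from a defect-free "golden" template, and any nonzero residual (positive or negative) flags a potential defect. The following pure-Python sketch is illustrative only and is not part of the application or the cited reference; the function names are invented for this example.

```python
# Illustrative sketch of the mask-then-subtract pipeline the Office Action
# cites from Ebayyeh. Images are 2D lists of grayscale intensities; the ROI
# mask is a 2D list of booleans (True = region of interest, False = nROI).

def mask_nrois(image, roi_mask):
    """Zero out pixels outside the ROI, so nROIs become black (masked)."""
    return [[px if keep else 0 for px, keep in zip(row, mask_row)]
            for row, mask_row in zip(image, roi_mask)]

def golden_sample_compare(inspection, golden):
    """Pixel-wise subtraction against a defect-free golden template.
    Per the cited three-case output: positive (potential defect),
    negative (potential defect), or zero (non-defective)."""
    return [[ipx - gpx for ipx, gpx in zip(irow, grow)]
            for irow, grow in zip(inspection, golden)]

def classify(diffs):
    """Binary pass/fail decision over the residual image."""
    defective = any(d != 0 for row in diffs for d in row)
    return "defective" if defective else "non-defective"

# Both images are masked with the same ROI so that masked-out (nROI)
# pixels cancel in the subtraction and never register as defects.
roi    = [[True, True], [True, False]]
golden = mask_nrois([[10, 10], [10, 10]], roi)
sample = mask_nrois([[10, 12], [10, 10]], roi)
result = classify(golden_sample_compare(sample, golden))  # "defective"
```

Note that analysis is confined to the non-masked ROI by construction: because the same mask is applied to the inspection image and the golden template before subtraction, differences inside nROIs cannot contribute to the defect decision, which is the behavior the rejection attributes to the claimed ROI limitation.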

Prosecution Timeline

Sep 26, 2023
Application Filed
Nov 13, 2025
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602924
Method for Semantic Localization of an Unmanned Aerial Vehicle
2y 5m to grant · Granted Apr 14, 2026
Patent 12602813
DEEP APERTURE
2y 5m to grant · Granted Apr 14, 2026
Patent 12541857
SYNTHESIZING IMAGES FROM THE PERSPECTIVE OF THE DOMINANT EYE
2y 5m to grant · Granted Feb 03, 2026
Patent 12530787
TECHNIQUES FOR DIGITAL IMAGE REGISTRATION
2y 5m to grant · Granted Jan 20, 2026
Patent 12518425
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND COMPUTER READABLE MEDIUM
2y 5m to grant · Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 69%
With Interview: 99% (+41.7%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 16 resolved cases by this examiner. Grant probability derived from career allow rate.
