Prosecution Insights
Last updated: April 19, 2026
Application No. 18/282,589

METHOD FOR TRAINING DEFECTIVE-SPOT DETECTION MODEL, METHOD FOR DETECTING DEFECTIVE-SPOT, AND METHOD FOR RESTORING DEFECTIVE-SPOT

Status: Non-Final OA (§103)
Filed: Sep 18, 2023
Examiner: RHIM, WOO CHUL
Art Unit: 2676
Tech Center: 2600 — Communications
Assignee: BOE TECHNOLOGY GROUP CO., LTD.
OA Round: 1 (Non-Final)
Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 80%, above average (112 granted / 140 resolved; +18.0% vs TC avg)
Interview Lift: +21.4% for resolved cases with interview (strong)
Avg Prosecution: 2y 11m typical timeline; 28 applications currently pending
Total Applications: 168 across all art units
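The headline examiner metrics above follow from simple ratios. As a sanity check, the career allow rate and the implied Tech Center baseline can be reproduced from the card's raw counts; note that the TC-average figure below is backed out from the "+18.0%" delta, not taken from the tool's actual data.

```python
# Back-of-the-envelope check of the examiner metrics above.
# The counts (112 granted / 140 resolved) come from the card; the
# TC average is a hypothetical value implied by the "+18.0%" delta.
granted = 112
resolved = 140

allow_rate = granted / resolved          # career allow rate
delta_vs_tc = 0.18                       # "+18.0% vs TC avg" from the card
tc_avg_estimate = allow_rate - delta_vs_tc

print(f"allow rate: {allow_rate:.0%}")               # 80%
print(f"implied TC average: {tc_avg_estimate:.0%}")  # 62%
```

The 80% figure is exact (112/140); the 62% baseline is only as reliable as the rounded delta it is derived from.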

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§103: 47.1% (+7.1% vs TC avg)
§102: 23.2% (-16.8% vs TC avg)
§112: 19.0% (-21.0% vs TC avg)
Deltas are measured against a Tech Center average estimate • Based on career data from 140 resolved cases
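The per-statute deltas above can be inverted to recover the Tech Center baseline each is measured against; this reconstruction is an assumption about how the tool computes the deltas, not its actual code. Interestingly, all four deltas are consistent with a single ~40% baseline.

```python
# Sketch relating each statute's rate to its Tech Center average.
# Rates and deltas are the figures shown above; recovering the TC
# average as (rate - delta) is an assumed, illustrative reading.
rate  = {"101": 0.074, "103": 0.471, "102": 0.232, "112": 0.190}
delta = {"101": -0.326, "103": 0.071, "102": -0.168, "112": -0.210}

tc_avg = {s: rate[s] - delta[s] for s in rate}
for s in ("101", "102", "103", "112"):
    print(f"§{s}: TC avg ≈ {tc_avg[s]:.1%}")   # each ≈ 40.0%
```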

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Election/Restrictions

Applicant's election with traverse of Group I, corresponding to claims 1-8, 10 and 17-19, in the reply filed on 12/16/2025 is acknowledged. The traversal is on the ground(s) that “a complete and thorough search for the subject matter of Group I would require searching the areas appropriate to the subject matter of Group II” and “searching for the claims of Group II would be minimally burdensome” (see pages 2-3 of the reply). This is not found persuasive because:

First, the examiner points out that determining whether serious burden exists is not applicable to the current application, because such an inquiry is part of the “independent and distinct” restriction analysis applicable to an application filed under 35 U.S.C. 111(a). Here, the current application was submitted under 35 U.S.C. 371, to which the “unity of invention” restriction analysis applies (see MPEP 1896 for reference). And as discussed in the requirement for election dated 11/04/2025, under the “unity of invention” analysis, Groups I-II lack unity of invention because the groups do not share the same or corresponding technical feature. While Group I's special technical feature appears to be a particular way of training a model, Group II's special technical feature appears to be a particular way of using an already trained model. Moreover, even if a mere concept of using a defect detection model is considered to be shared between the groups, such a concept is not a special technical feature, as it is a well-known concept that does not make a contribution over the prior art in view of US patent application publication no. 2024/0095903 to Sherman et al. (see, e.g., par. 7 thereof).
Second, even if the applicant's argument were somewhat applicable: after having searched for the subject matter of Group I, the examiner finds that the areas searched for the subject matter of Group I are inappropriate to the subject matter of Group II. The examiner finds that most of the subject matter in Group II has not yet been searched and would require another significant amount of search effort to search thoroughly. For example, just within Claim 11 of Group II, most of its limitations (e.g., obtaining a target detection result using two adjacent frames, determining a mask after determining the target detection result, filtering the two adjacent frames used for obtaining the target detection result, obtaining an initial restored image based on the filtered image, the mask and the first frame, and obtaining a target image with the restored defective spot using the two adjacent frames, the mask and the initial restored image) have not been searched.

The traversal is also on the ground(s) that the fees the applicant has paid and would be required to pay for a possible divisional application would not be fair (see page 3 of the reply). This is not found persuasive because such an argument is irrelevant here, since the fairness of the fees is not one of the considerations under the “unity of invention” analysis. For these reasons, the requirement is still deemed proper and is therefore made FINAL.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 03/05/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 7, 8, 17, and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over US patent application publication no. 2024/0095903 to Sherman et al. (hereinafter Sherman) in view of US patent application publication no. 2023/0082268 to Delaney.

For claim 1, Sherman as applied teaches a method for training a defective-spot detection model, comprising: obtaining a first training data set and a second training data set (see, e.g., pars. 8, 77-79 and 86 and FIGS. 2 and 6, which teach obtaining a target image and an original image); wherein the first training data set comprises a plurality of frames of sample detection images (see, e.g., pars. 8 and 86, which teach that the target image is a defective-feature-free image), and the second training data set comprises a plurality of frames of sample defective-spot images (see, e.g., pars. 8, 77-79 and 107-110 and FIG. 7, which teach that the original image contains a defective feature image); for each frame of sample detection image, processing the frame of sample detection image by using at least one of the plurality of frames of sample defective-spot images, to generate a frame of sample training image (see, e.g., pars. 8, 46, 54, 83-85, and 98-106 and FIGS.
2, 3 and 6, which teach for each target image, generating an augmented target/synthetic defective image using the defective feature from the original image); and training the defective-spot detection model by using the plurality of frames of sample training images until a loss value is converged, so as to obtain a trained defective-spot detection model (see, e.g., pars. 8, 46, 54, 59-62, 73, 111-114, and FIG. 5, which teach training a defective-feature-detecting machine learning model using a training set including the augmented target images, wherein the model is optimized by minimizing the loss function); wherein for each frame of sample detection image, processing the frame of sample detection image by using at least one of the plurality of frames of sample defective-spot images to generate a frame of sample training image, comprises: generating the sample training image with a defective-spot based on the frame of transparent mask and the sample detection image (see, e.g., pars. 8, 46, 54, 83-85, and 98-106 and FIGS. 2, 3 and 6, which teach generating the augmented target image by pasting the defective feature of the original image to the target image).

Sherman as applied, however, does not explicitly teach that generating the defective feature includes “generating a transparent layer based on a resolution of the sample detection image” and “replacing an image in a certain region of the transparent layer based on at least one of the plurality of frames of sample defective-spot images, to generate a frame of transparent mask.” Delaney in the analogous art teaches generating a transparent layer based on a resolution of the sample detection image (see, e.g., pars. 89-90 and 95-96 and FIGS.
7E and 8E of Delaney, which teach obtaining a transparent layer overlaid on the original image) and replacing an image in a certain region of the transparent layer based on at least one of the plurality of frames of sample defective-spot images, to generate a frame of transparent mask (see, e.g., pars. 89-90 and 95-96 and FIGS. 7E and 8E of Delaney, which teach marking pixels of the transparent layer to match corresponding defect pixels in the original image). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sherman to generate its defective feature image as taught by Delaney, because Sherman suggests generating a synthetic image by implanting/masking a defect in design data/a defect-free image (see, e.g., par. 78 of Sherman), and because doing so would yield a predictable result of allowing the defective feature image to be matched and applied to the target image more simply and accurately, since the mask-based feature image would be of the same size/resolution with the aligned defect pixels (see MPEP 2143(I)(D)).

For claim 7, while Sherman as applied does not explicitly teach it, Delaney in the analogous art teaches that replacing the image in the certain region of the transparent layer based on the at least one of the plurality of frames of sample defective-spot images to generate the frame of transparent mask, comprises: determining the certain region of the transparent layer based on a resolution of the at least one of the plurality of frames of sample defective-spot images (see, e.g., pars. 89-90 and 95-96 and FIGS. 7E and 8E of Delaney, which teach determining/marking pixels of the transparent layer that correspond to the defect pixels in the original image); and replacing the image in the certain region of the transparent layer by using the at least one of the plurality of frames of sample defective-spot images to generate the frame of transparent mask (see, e.g., pars.
89-90 and 95-96 and FIGS. 7E and 8E of Delaney, which teach setting values of pixels corresponding to the defect to predetermined values). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sherman to generate its defective feature image as taught by Delaney, because Sherman suggests generating a synthetic image by implanting/masking a defect in design data/a defect-free image (see, e.g., par. 78 of Sherman), and because doing so would yield a predictable result of allowing the defective feature image to be matched and applied to the target image more simply and accurately, since the mask-based feature image would be of the same size/resolution with the aligned defect pixels (see MPEP 2143(I)(D)).

For claim 8, Sherman as applied teaches that after generating the transparent layer based on the resolution of the sample detection image, the method further comprises: generating a piece of label data based on the at least one of the plurality of frames of sample defective-spot images and the transparent layer (see, e.g., pars. 59 and 112-114 and FIG. 5 of Sherman, which teach labelling the augmented target images, which include the defective feature images); and training the defective-spot detection model by using the plurality of frames of sample training images until the loss value is converged, so as to obtain the trained defective-spot detection model, comprises: training the defective-spot detection model by using the plurality of frames of sample training images and the plurality of pieces of label data until the loss value is converged, so as to obtain the trained defective-spot detection model (see, e.g., pars. 59 and 112-114 and FIG. 5 of Sherman, which teach optimizing the model by minimizing the loss between the predicted label data and the ground truth label data).
While Sherman does not explicitly teach this, Delaney in the analogous art teaches labelling/marking the defect pixels in the transparent layer (see, e.g., pars. 89-90 and 95-96 and FIGS. 7E and 8E of Delaney). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sherman to label the defect as taught by Delaney, because doing so would yield a predictable result of allowing the labeled data to become the ground truth data for comparison with the predicted label (see pars. 112-114 of Sherman and MPEP 2143(I)(D)).

For claim 17, Sherman in view of Delaney teaches a computer device, comprising: a processor, a memory, and a bus (see, e.g., pars. 55-56 and FIG. 1 of Sherman, which teach a processor and memory circuit connected to a hardware-based I/O interface), wherein the memory stores machine-readable instructions executable by the processor (see, e.g., pars. 23, 35 and 55-56 and FIG. 1 of Sherman), the processor communicates with the memory over the bus when the computer device runs (see, e.g., pars. 55, 61, 74, and 77 and FIG. 1 of Sherman), and the machine-readable instructions, when executed by the processor, perform the method of claim 1 (see the rejection of claim 1).

For claim 18, Sherman in view of Delaney teaches a non-transitory computer-readable storage medium having a computer program stored thereon (see, e.g., pars. 23 and 35 of Sherman), which, when executed by a processor, performs the method of claim 1 (see the rejection of claim 1).

Claim(s) 2 and 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sherman in view of Delaney, and further in view of US patent application publication no. 2024/0311994 to Ikeda and US patent application publication no. 2016/0088188 to Cho et al. (hereinafter Cho).
For claim 2, Sherman in view of Delaney teaches that obtaining the second training data set comprising the plurality of frames of sample defective-spot images, comprises: generating defective-spot image data in a target region of a preset image by using a grid dyeing method, to obtain a first defective-spot image sample (see, e.g., pars. 77-79, 84 and FIGS. 2 and 6-7 of Sherman, which teach determining a first region of the original image); performing an image expansion process on the first defective-spot image sample to obtain a second defective-spot image sample (see, e.g., pars. 80-82 and 85 and FIGS. 2 and 6-7 of Sherman, which teach determining a second region that expands from the first region); performing a median filtering process on the second defective-spot image sample to obtain a third defective-spot image sample (see, e.g., par. 109 of Sherman, which teaches performing noise filtration before subtracting out the defective feature image), and determining edge position information of the third defective-spot image sample (see, e.g., pars. 85-87 and FIG. 6 of Sherman, which teach determining a contextual region based on the edge position information of the second region); and extracting the defective-spot image data based on the edge position information of the third defective-spot image sample to obtain the sample defective-spot image (see, e.g., pars. 104-110 and FIGS. 3 and 7 of Sherman, which teach extracting the defective feature images based on the contextual region, which is determined based on the edge position information of the second region).

Sherman in view of Delaney does not explicitly teach using a grid dyeing method to obtain a defective-spot image. Ikeda in the analogous art teaches extracting regions of images and determining which of the regions include defective pixels, wherein the defective pixels are determined using pixel grid information (see, e.g., pars. 90-96 and FIGS.
5 and 8A-C of Ikeda; the examiner notes that, in the absence of an explicit definition in the specification and a well-known definition in the art, the term “grid dyeing method” has been interpreted under the broadest reasonable interpretation as an extraction method involving a grid). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sherman in view of Delaney to extract image regions as taught by Ikeda, because doing so would facilitate obtaining more defect data and improve the imbalance between the defect and non-defect data and the learning accuracy (see pars. 114, 120 and 129 of Ikeda).

While Sherman in view of Delaney and Ikeda teaches performing a noise filtration, it does not explicitly teach performing a median filtering process to obtain a defective-spot image. In the analogous art, Cho teaches applying a median filtering to remove falsely detected scratches (see, e.g., pars. 46, 66-69 and 77 of Cho). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sherman in view of Delaney and Ikeda to perform a median filtering as taught by Cho, because doing so would prevent erroneous scratch detection (see pars. 46 and 69 of Cho).

For claim 6, Sherman in view of Delaney, Ikeda and Cho teaches that extracting the defective-spot image data based on the edge position information of the third defective-spot image sample to obtain the sample defective-spot image, comprises: extracting the defective-spot image data based on the edge position information of the third defective-spot image sample, so as to obtain a fourth defective-spot image sample (see, e.g., pars. 88-94 and 103-110 and FIGS.
3 and 7, which teach extracting the contextual region information based on the edge position of the second region to obtain the image patch including the defective feature); and performing data processing on the fourth defective-spot image sample to obtain a plurality of sample defective-spot images of different types (see, e.g., par. 102 of Sherman, which teaches manipulating the defective feature image before pasting); wherein the plurality of sample defective-spot images of different types comprises at least one of the following (in view of pars. 98-99 of the specification, the listed types have been interpreted disjunctively): the fourth defective-spot image sample (see, e.g., pars. 98-101 and FIG. 6 of Sherman); an image obtained by rotating the fourth defective-spot image sample by a preset angle (see, e.g., par. 102 of Sherman, which teaches rotating the defective feature image before pasting); the fourth defective-spot image samples with different gray colors (see, e.g., par. 102 of Sherman, which teaches modifying the gray level); and the fourth defective-spot image sample scaled by a preset size proportion (see, e.g., par. 102 of Sherman, which teaches scaling the defective feature image before pasting).

Claim(s) 5 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sherman in view of Delaney, Ikeda and Cho, and further in view of US patent application publication no. 2021/0248734 to Han et al. (hereinafter Han).

For claim 5, while Sherman in view of Delaney, Ikeda and Cho does not explicitly teach it, Han in the analogous art teaches that determining the edge position information of the third defective-spot image sample, comprises: throughout all rows of pixels of the third defective-spot image sample, sequentially determining a target pixel with a preset gray-scale value in each row of pixels (see, e.g., pars. 43-47 and 55-59 and FIG.
2a of Han, which teach, line by line, determining a pixel point with a gray-scale value that is between the thresholds); and determining the edge position information of the third defective-spot image sample based on the position information of the target pixel (see, e.g., pars. 43-47 and 60-61 and FIG. 2a of Han, which teach determining a contour of the pixel defect area based on the coordinates of the pixel points). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sherman in view of Delaney and Ikeda to determine edge positions as taught by Han, because doing so would allow determining the contour of the defective feature for simpler extraction (see pars. 60-61 of Han).

Claim(s) 10 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sherman in view of Delaney, and further in view of US patent no. 11393077 to Mironica et al. (hereinafter Mironica).

For claim 10, Sherman in view of Delaney teaches that the defective-spot detection method of claim 1 (see the rejection of claim 1) further comprises: obtaining a video stream (see, e.g., pars. 77-78 of Sherman); and performing defective-spot detection on each video frame in the video stream by using the defective-spot detection model to obtain a target detection result of each video frame (see, e.g., pars. 60, 73 and 116 of Sherman, which teach deploying the trained model at run time for defect examination). The examiner believes that the above teaching of Sherman in view of Delaney is sufficient to suggest that the taught method may be applied to frames of a video stream. However, in the interest of compact prosecution, the examiner relies on Mironica in the analogous art, which teaches applying the defect detection method to a digitized image sequence of a video (see, e.g., lines 42-54 in col. 6, lines 38-51 in col. 12, lines 56-66 in col. 13, lines 59-67 in col. 24, lines 1-3 and 62-67 in col. 25, and lines 1-11 in col.
26 of Mironica). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sherman in view of Delaney to apply the defect detection method to a video stream as taught by Mironica, because doing so would yield a predictable result of detecting and correcting scratch artifacts in a video stream (see lines 55-66 in col. 13 of Mironica and MPEP 2143(I)(D)).

For claim 19, Sherman in view of Delaney and Mironica teaches a non-transitory computer-readable storage medium having a computer program stored thereon which, when executed by a processor (see, e.g., pars. 23 and 35 of Sherman), performs the method of claim 10 (see the rejections of claims 1 and 10).

Allowable Subject Matter

Claims 3-4 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

In regard to claim 3, when considered as a whole, the prior art of record fails to disclose or render obvious, alone or in combination: “generating the defective-spot image data in the target region of the preset image by using the grid dyeing method to obtain the first defective-spot image sample, comprises: determining any two positions of each of multiple rows of pixels in the target region to generate a line segment with a preset width; and sequentially processing each row of the multiple rows of pixels to obtain multiple line segments, so as to obtain the first defective-spot image sample.”

In regard to claim 4, when considered as a whole, the prior art of record fails to disclose or render obvious, alone or in combination: “performing the median filtering process on the second defective-spot image sample to obtain the third defective-spot image sample, comprises: obtaining a median filtering kernel; and for each pixel in the second defective-spot image sample, determining a target gray-scale value of a middle pixel
corresponding to the median filter kernel, based on the gray-scale values of the pixels corresponding to the median filter kernel, so as to obtain the third defective-spot image sample.”

Additional Citations

The following table lists several references that are relevant to the subject matter claimed and disclosed in this Application. The references are not relied on by the Examiner, but are provided to assist the Applicant in responding to this Office action.

Xiao et al. (US pat. app. pub. 2022/0036533): Describes an image defect detection method and apparatus, an electronic device, a storage medium and a product. The method includes acquiring a to-be-detected image; obtaining a restored image corresponding to the to-be-detected image based on the to-be-detected image, at least one mask image group and a plurality of defect-free positive sample images, where each mask image group includes at least two binary images having a complementary relationship, and different mask image groups have different image sizes; and locating a defect of the to-be-detected image based on the to-be-detected image and each restored image. The solution solves the problem in which a related defect detection method requires numerous manual operations and has a low detection accuracy due to subjective factors of a worker.

Amirghodsi et al. (US pat. app. pub. 2024/0037717): Describes methods, systems, and non-transitory computer-readable storage media for generating neural-network-based perceptual artifact segmentations in synthetic digital image content. The disclosed system utilizes neural networks to detect perceptual artifacts in digital images in connection with generating or modifying digital images. The disclosed system determines a digital image including one or more synthetically modified portions. The disclosed system utilizes an artifact segmentation machine-learning model to detect perceptual artifacts in the synthetically modified portion(s).
The artifact segmentation machine-learning model is trained to detect perceptual artifacts based on labeled artifact regions of synthetic training digital images. Additionally, the disclosed system utilizes the artifact segmentation machine-learning model in an iterative inpainting process. The disclosed system utilizes one or more digital image inpainting models to inpaint a digital image, and utilizes the artifact segmentation machine-learning model to detect perceptual artifacts in the inpainted portions for additional inpainting iterations.

Zhao et al. (US pat. app. pub. 2022/0138899): Describes methods and apparatuses for processing an image, training an image recognition network and recognizing an image. The method of processing an image includes: obtaining a plurality of original images from an original image set, where at least one of the plurality of original images includes an annotation area; obtaining at least one first image by splicing the plurality of original images; for each of the at least one first image, adjusting a shape and/or size of the first image based on the plurality of original images to form a second image; and obtaining respective positions of the at least one annotation area in the second image by converting respective positions of the at least one annotation area in the plurality of original images.

Table 1

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See Table 1 and Form 892. Any inquiry concerning this communication or earlier communications from the examiner should be directed to WOO RHIM, whose telephone number is (571) 272-6560. The examiner can normally be reached Mon-Fri, 9:30 am - 6:00 pm ET. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Henok Shiferaw, can be reached at 571-272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WOO C RHIM/
Examiner, Art Unit 2676
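For readers less familiar with the underlying technology, the transparent-mask augmentation the rejection maps claims 1 and 7-8 onto (generate a transparent layer at the clean image's resolution, replace a region with a defect patch to form a mask, then composite the mask onto the clean image) can be sketched roughly as below. This is an illustrative reconstruction under assumed shapes and an assumed alpha-compositing rule; it is not code from the application, Sherman, or Delaney.

```python
import numpy as np

def make_transparent_mask(image_hw, defect_rgba, top_left):
    """Build a fully transparent RGBA layer at the clean image's
    resolution, then replace a region with the defect patch."""
    h, w = image_hw
    mask = np.zeros((h, w, 4), dtype=np.float32)   # alpha = 0 everywhere
    dh, dw = defect_rgba.shape[:2]
    y, x = top_left
    mask[y:y + dh, x:x + dw] = defect_rgba         # pasted defect region
    return mask

def composite(clean_rgb, mask_rgba):
    """Alpha-composite the transparent mask over the clean frame."""
    alpha = mask_rgba[..., 3:4]
    return (1.0 - alpha) * clean_rgb + alpha * mask_rgba[..., :3]

# Toy example: an 8x8 white defect-free frame and a 2x2 opaque dark spot.
clean = np.ones((8, 8, 3), dtype=np.float32)
defect = np.zeros((2, 2, 4), dtype=np.float32)
defect[..., 3] = 1.0                               # fully opaque spot
sample = composite(clean, make_transparent_mask((8, 8), defect, (3, 3)))
# Pixels outside the pasted region are unchanged; the spot region is dark.
```

The transparent layer doubles as pixel-accurate label data: its nonzero-alpha pixels mark exactly where the synthetic defect was implanted, which matches the labeling role the rejection assigns it under claim 8.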

Prosecution Timeline

Sep 18, 2023
Application Filed
Jan 15, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601667: AUTOMATED TURF TESTING APPARATUS AND SYSTEM FOR USING SAME
2y 5m to grant; granted Apr 14, 2026
Patent 12596134: DEVICE, MOVEMENT SPEED ESTIMATION SYSTEM, FEEDING CONTROL SYSTEM, MOVEMENT SPEED ESTIMATION METHOD, AND RECORDING MEDIUM IN WHICH MOVEMENT SPEED ESTIMATION PROGRAM IS STORED
2y 5m to grant; granted Apr 07, 2026
Patent 12591997: ARRANGEMENT DEVICE AND METHOD
2y 5m to grant; granted Mar 31, 2026
Patent 12586169: Mass Image Processing Apparatus and Method
2y 5m to grant; granted Mar 24, 2026
Patent 12579607: DEMOSAICING METHOD AND APPARATUS FOR MOIRE REDUCTION
2y 5m to grant; granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80%
With Interview (+21.4%): 99%
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 140 resolved cases by this examiner. Grant probability derived from career allow rate.
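One plausible reading reconciles the projection figures: the "+21.4%" interview lift appears to be the gap between allow rates with and without an interview, not an increment on the 80% career rate (80% + 21.4% would exceed the 99% shown). Under that assumed semantics:

```python
# Assumed reconciliation of the projection numbers; the "without
# interview" rate is inferred, not reported by the tool.
with_interview = 0.99
lift = 0.214                                # "+21.4%" interview lift
without_interview = with_interview - lift   # inferred ~77.6%
print(f"without interview ≈ {without_interview:.1%}")
```

The blended 80% career rate then falls between the two subpopulation rates, as it should for a weighted average.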
