Prosecution Insights
Last updated: April 19, 2026
Application No. 18/661,683

AI-BASED SYSTEM AND METHOD OF DETECTING DEFECT OF MATERIAL CONSIDERING KIND AND DISTRIBUTION OF REAL DEFECT

Non-Final OA — §101, §103
Filed: May 12, 2024
Examiner: WU, YANNA
Art Unit: 2615
Tech Center: 2600 — Communications
Assignee: Lightvision Inc.
OA Round: 1 (Non-Final)
Grant Probability: 81% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 2y 4m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 81% — above average (354 granted / 438 resolved; +18.8% vs TC avg)
Interview Lift: +35.3% — strong (allow rate with vs. without an interview, among resolved cases with an interview)
Typical Timeline: 2y 4m average prosecution; 20 applications currently pending
Career History: 458 total applications across all art units
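The two headline numbers in this card are simple ratios over the examiner's resolved cases. A minimal sketch of how such metrics are typically computed, using hypothetical data and field names (the report's exact methodology is not published here):

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool
    had_interview: bool

def allow_rate(cases):
    """Fraction of resolved cases that ended in a grant."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases):
    """Percentage-point gap in allow rate: cases with an interview vs. without."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return (allow_rate(with_iv) - allow_rate(without_iv)) * 100

# Toy data shaped like the card above: 354 grants out of 438 resolved cases.
cases = [ResolvedCase(True, False)] * 354 + [ResolvedCase(False, False)] * 84
print(round(allow_rate(cases) * 100, 1))  # 80.8
```

With a grant-rate split like 90% with interviews vs. 60% without, `interview_lift` would report a 30-point lift, which is the kind of gap the "+35.3%" figure above summarizes.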

Statute-Specific Performance

§101: 8.2% (-31.8% vs TC avg)
§103: 65.1% (+25.1% vs TC avg)
§102: 6.3% (-33.7% vs TC avg)
§112: 11.3% (-28.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 438 resolved cases.
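Each per-statute delta is just the examiner's rejection rate minus the Tech Center average. A short sketch using the figures shown above (the TC averages are back-derived from the reported deltas, so they are illustrative, not independently sourced):

```python
# Examiner's rejection rates per statute (%), as shown above, and the
# reported deltas vs the Tech Center average.
examiner = {"101": 8.2, "103": 65.1, "102": 6.3, "112": 11.3}
tc_delta = {"101": -31.8, "103": 25.1, "102": -33.7, "112": -28.7}

def tc_average(statute):
    """Back out the implied TC average: examiner rate minus reported delta."""
    return round(examiner[statute] - tc_delta[statute], 1)

for s in examiner:
    print(f"§{s}: examiner {examiner[s]}% vs TC avg {tc_average(s)}%")
```

Notably, all four deltas back out to the same ≈40% baseline, which may indicate the card compares each statute against a single overall TC figure rather than true per-statute averages.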

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-13 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claimed invention is directed to non-statutory subject matter because the claim(s) as a whole, considering all claim elements both individually and in combination, do not amount to significantly more than an abstract idea. As summarized in the 2019 Revised Patent Subject Matter Eligibility Guidance, examiners must perform a Two-Part Analysis for Judicial Exceptions.

Step 1

In Step 1, it must be determined whether the claimed invention is directed to a process, machine, manufacture or composition of matter. The instant invention encompasses three sets of claims: a system in claims 1-7, a system in claims 8-11 (i.e., a manufacture) and a method in claims 12-13 (i.e., a process). All claims are directed to one of the four statutory categories and meet the requirements of Step 1.

Step 2A, Prong One

The claimed invention is directed to an abstract idea without significantly more. The instant invention is broadly directed to detecting a defect in an image.
Claim 1 recites the following (with emphasis added):

A system for detecting a defect comprising: a training data generating unit configured to generate multiple synthetic defect-free images or synthetic defect images using a defect image; a training unit configured to learn a model for detection of a defect by using the generated synthetic defect-free images or the generated synthetic defect images; and a defect detecting unit configured to detect a defect of an input image using the learned model, wherein the training data generating unit detects a shape of the defect or a shape of a background by analyzing a kind or a distribution of the defect image, and generates a different kind, number or resolution of the synthetic defect-free images or the synthetic defect images depending on the detected shape of the defect or the detected shape of the background.

Claim 1 encompasses the abstract idea, which is also encompassed by dependent claims 2-7. Claim 1 recites steps for generating and detecting image features (the emphasized part). This can be performed as a mental process using pen and paper, which is an abstract idea.

Prong Two

This judicial exception is not integrated into a practical application because mere instructions to implement on a computer (i.e., the units in claim 1) or a computer model (the learned model in claim 1), merely using a computer or computer model as a tool to perform the abstract idea, adding insignificant extra-solution activity, and/or generally linking the use of the abstract idea to a technological environment or field of use is not considered integration into a practical application. Claim 1 recites using training data to train a neural network model. Using training data to train a neural network model is a generic feature of neural networks, which does not represent a technological improvement.
The use of the computer and the neural network model does not add an improvement to the functioning of a computer or to any other technology field, and so fails to integrate the abstract idea into a practical application. Dependent claims 2-7 recite limitations about mere image generating and feature detecting without a specific process, which can be performed as a mental process using pen and paper, which is an abstract idea. The claims do not include additional elements that are sufficient to integrate the abstract idea into a practical application.

Step 2B

Step 2B of the analysis requires us to determine whether the claims do significantly more than simply describe that abstract method. Mayo, 132 S. Ct. at 1297. We must examine the limitations of the claims to determine whether the claims contain an "inventive concept" to "transform" the claimed abstract idea into patent-eligible subject matter. Alice, 134 S. Ct. at 2357 (quoting Mayo, 132 S. Ct. at 1294, 1298). The transformation of an abstract idea into patent-eligible subject matter "requires 'more than simply stat[ing] the [abstract idea] while adding the words 'apply it.'" Id. (quoting Mayo, 132 S. Ct. at 1294) (alterations in original). "A claim that recites an abstract idea must include 'additional features' to ensure 'that the [claim] is more than a drafting effort designed to monopolize the [abstract idea].'" Id. (quoting Mayo, 132 S. Ct. at 1297) (alterations in original). Those "additional features" must be more than "well-understood, routine, conventional activity." Mayo, 132 S. Ct. at 1298.

The present claims include additional elements other than the abstract idea, namely a computer (i.e., the units in claim 1) and a computer model (the learned model in claim 1). These additional elements are merely a conventional computer and computer model.
Any potentially technical aspects of the claims are well-known generic computer components performing conventional functions (e.g., a processor performing a mental process). The present claims have been analyzed both individually and in combination, and the instant claims do not provide any improvement to the functioning of the computer, to computer technology, or to any other technical field. There do not appear to be any meaningful limitations other than those that are well-understood, routine and conventional in the field. Thus, the present claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. Thus, claims 1-7 are not patent eligible. Claims 8-11 and method claims 12-13 recite similar limitations to claims 1-7, and thus are directed to an abstract idea and are not patent eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 3-4, 6-9 and 11-12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Milne et al. (US 2023/0196096 A1).

Regarding claim 1, Milne teaches:

A system for detecting a defect comprising: a training data generating unit configured to generate multiple synthetic defect-free images or synthetic defect images using a defect image ([0083], “If the defect class with the highest resolution requirement is used to dictate the resolution of all training images in image library 140 (and all images used to perform classification with the trained AVI neural network(s)), the processing/memory constraints noted above can result in unacceptably slow performance. Thus, in some embodiments, system 100 instead implements a phased approach that is at least partially based on the relative dimensions/sizes of the various defect classes (e.g., different defect classes associated with different AVI neural networks). In this phased approach, training images for some defect classes (i.e., the images used to train AVI neural network(s) corresponding to those defect classes) are reduced in size by lowering the resolution of the original container image (down-sampling), while training images for other defect classes are reduced in size by cropping to a smaller portion of the original container image. In some embodiments, training images for some defect classes are reduced in size by both cropping and down-sampling the original container image.” [0085], FIG. 1);

a training unit configured to learn a model for detection of a defect by using the generated synthetic defect-free images or the generated synthetic defect images ([0046], “AVI neural network module 116 comprises software that uses images stored in an image library 140 to train one or more AVI neural networks…. the AVI neural network(s) trained and/or run by module 116 may classify entire images (e.g., defect vs. no defect, or presence or absence of a particular type of defect, etc.), detect objects in images (e.g., detect the position of foreign objects that are not bubbles within container images), or some combination thereof (e.g., one neural network classifying images, and another performing object detection).”);

and a defect detecting unit configured to detect a defect of an input image using the learned model ([0039], “the AVI neural network(s) may be used in production to detect defects associated with containers and/or contents of those containers. In a pharmaceutical context, for example, the AVI neural network(s) may be used to detect defects associated with syringes, cartridges, vials or other container types (e.g., cracks, scratches, stains, missing components, etc., of the containers), and/or to detect defects associated with liquid or lyophilized drug products within the containers (e.g., the presence of fibers and/or other foreign particles, variations in color of the product, etc.).”);

wherein the training data generating unit, based on a shape of the defect or a shape of a background by analyzing a kind or a distribution of the defect image, generates a different kind, number or resolution of the synthetic defect-free images or the synthetic defect images depending on the detected shape of the defect or the detected shape of the background ([0083], “If the defect class with the highest resolution requirement is used to dictate the resolution of all training images in image library 140 (and all images used to perform classification with the trained AVI neural network(s)), the processing/memory constraints noted above can result in unacceptably slow performance. Thus, in some embodiments, system 100 instead implements a phased approach that is at least partially based on the relative dimensions/sizes of the various defect classes (e.g., different defect classes associated with different AVI neural networks). In this phased approach, training images for some defect classes (i.e., the images used to train AVI neural network(s) corresponding to those defect classes) are reduced in size by lowering the resolution of the original container image (down-sampling), while training images for other defect classes are reduced in size by cropping to a smaller portion of the original container image. In some embodiments, training images for some defect classes are reduced in size by both cropping and down-sampling the original container image.” [0085]).

However, in the above citation, Milne does not explicitly teach: detects a shape of the defect or a shape of a background by analyzing a kind or a distribution of the defect image. On the other hand, Milne teaches: based on a shape of the defect or a shape of a background by analyzing a kind or a distribution of the defect image (as explained above using [0083], “Thus, in some embodiments, system 100 instead implements a phased approach that is at least partially based on the relative dimensions/sizes of the various defect classes (e.g., different defect classes associated with different AVI neural networks).”).

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have included the step of “detects a shape of the defect or a shape of a background by analyzing a kind or a distribution of the defect image” in order to have data about the shape of the defect based on the kind of the defect image. The benefit of including this step is to explicitly acquire the defect shape data and make it possible to continue with the following step of generating different defect images to train the learned model.
Regarding claim 3, Milne teaches:

The system of claim 1, wherein the training data generating unit includes: an image kind/distribution analyzing unit configured to detect the shape of the defect or the shape of the background by analyzing the kind and the distribution of the defect image (Milne teaches: based on a shape of the defect or a shape of a background by analyzing a kind or a distribution of the defect image; [0083], “Thus, in some embodiments, system 100 instead implements a phased approach that is at least partially based on the relative dimensions/sizes of the various defect classes (e.g., different defect classes associated with different AVI neural networks).” It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have included the step of “detects a shape of the defect or a shape of a background by analyzing a kind or a distribution of the defect image” in order to have data about the shape of the defect based on the kind of the defect image. The benefit of including this step is to explicitly acquire the defect shape data and make it possible to continue with the following step of generating different defect images to train the learned model.);

a defect-free image generating unit configured to generate the synthetic defect-free images according to the detected shape of the defect or the detected shape of the background; and a defect image generating unit configured to generate the synthetic defect images depending on the detected shape of the defect or the detected shape of the background ([0046], “AVI neural network module 116 comprises software that uses images stored in an image library 140 to train one or more AVI neural networks. Image library 140 may be stored in memory unit 114, or in another local or remote memory (e.g., a memory coupled to a remote library server, etc.). In addition to training, module 116 may implement/run the trained AVI neural network(s), e.g., by applying images newly acquired by visual inspection system 102 (or another visual inspection system) to the neural network(s), possibly after certain pre-processing is performed on the images as discussed below. In various embodiments, the AVI neural network(s) trained and/or run by module 116 may classify entire images (e.g., defect vs. no defect, or presence or absence of a particular type of defect, etc.),” [0083], “Thus, in some embodiments, system 100 instead implements a phased approach that is at least partially based on the relative dimensions/sizes of the various defect classes (e.g., different defect classes associated with different AVI neural networks).”).

Regarding claim 4, Milne teaches:

The system of claim 3, wherein the defect-free image generating unit or the defect image generating unit generates a set kind, number or resolution of the synthetic defect-free images or the synthetic defect images irrespective of the kind, number or resolution determined through an artificial intelligence by the image kind/distribution analyzing unit when a user sets a kind, a number or a resolution of the synthetic defect-free images or the synthetic defect images ([0065], “The GUI (or another GUI generated by another program) may also display each captured frame/image to the user, and include user interactive controls for manipulating the image (e.g., zoom, pan, etc.) and for manually labeling the image (e.g., “defect observed” or “no defect” for image classification, or drawing boundaries within, or pixel-wise labeling, portions of images for object detection).” [0056], “The GUI (or another GUI generated by another program) may also display each captured frame/image to the user, and include user interactive controls for manipulating the image (e.g., zoom, pan, etc.) and for manually labeling the image (e.g., “defect observed” or “no defect” for image classification, or drawing boundaries within, or pixel-wise labeling, portions of images for object detection).”).

Regarding claim 6, Milne teaches:

The system of claim 3, wherein the defect image generating unit generates multiple synthetic defect images by modifying a defect area in the defect image or generates the multiple synthetic defect images by converting a resolution of the defect image into a reference resolution, and wherein at least one of a location, a size or a shape of a defect of the synthetic defect images differs from a location, a size or a shape of a defect of the defect image ([0083], “If the defect class with the highest resolution requirement is used to dictate the resolution of all training images in image library 140 (and all images used to perform classification with the trained AVI neural network(s)), the processing/memory constraints noted above can result in unacceptably slow performance. Thus, in some embodiments, system 100 instead implements a phased approach that is at least partially based on the relative dimensions/sizes of the various defect classes (e.g., different defect classes associated with different AVI neural networks). In this phased approach, training images for some defect classes (i.e., the images used to train AVI neural network(s) corresponding to those defect classes) are reduced in size by lowering the resolution of the original container image (down-sampling), while training images for other defect classes are reduced in size by cropping to a smaller portion of the original container image.
In some embodiments, training images for some defect classes are reduced in size by both cropping and down-sampling the original container image.” [0085]).

Regarding claim 7, Milne teaches:

The system of claim 1, wherein the defect detecting unit generates a comparison image which is an image formed by removing a defect from the input image by using the model, and detects the defect by comparing the generated comparison image with the input image (FIG. 18B, [0142], “Turning next to FIG. 18B, a process 1820 compares heatmaps for “defect” images to heatmaps for “good” (non-defect) images, rather than comparing to a map of container zones. At stage 1822, a heatmap of a “good” (non-defect) image (also referred to as a “good heatmap”) is generated. AVI neural network module 116 may generate the good heatmap by running a container image that is known to exhibit no defects through a neural network that is trained to detect defects of a specific category/class. This good heatmap can then act as a reference heatmap for numerous iterations of the process 1820.”).

Regarding claim 8, Milne teaches:

A system for detecting a defect comprising: a training data generating unit configured to generate multiple synthetic defect-free images or synthetic defect images using a defect image ([0083], “If the defect class with the highest resolution requirement is used to dictate the resolution of all training images in image library 140 (and all images used to perform classification with the trained AVI neural network(s)), the processing/memory constraints noted above can result in unacceptably slow performance. Thus, in some embodiments, system 100 instead implements a phased approach that is at least partially based on the relative dimensions/sizes of the various defect classes (e.g., different defect classes associated with different AVI neural networks). In this phased approach, training images for some defect classes (i.e., the images used to train AVI neural network(s) corresponding to those defect classes) are reduced in size by lowering the resolution of the original container image (down-sampling), while training images for other defect classes are reduced in size by cropping to a smaller portion of the original container image. In some embodiments, training images for some defect classes are reduced in size by both cropping and down-sampling the original container image.” [0085]);

a training unit configured to learn a model for detection of a defect by using the generated synthetic defect-free images or the generated synthetic defect images ([0046], “AVI neural network module 116 comprises software that uses images stored in an image library 140 to train one or more AVI neural networks…. the AVI neural network(s) trained and/or run by module 116 may classify entire images (e.g., defect vs. no defect, or presence or absence of a particular type of defect, etc.), detect objects in images (e.g., detect the position of foreign objects that are not bubbles within container images), or some combination thereof (e.g., one neural network classifying images, and another performing object detection).”);

and a defect detecting unit configured to detect a defect of an input image using the learned model ([0039], “the AVI neural network(s) may be used in production to detect defects associated with containers and/or contents of those containers. In a pharmaceutical context, for example, the AVI neural network(s) may be used to detect defects associated with syringes, cartridges, vials or other container types (e.g., cracks, scratches, stains, missing components, etc., of the containers), and/or to detect defects associated with liquid or lyophilized drug products within the containers (e.g., the presence of fibers and/or other foreign particles, variations in color of the product, etc.).”);

wherein the training data generating unit determines a kind, a number or a resolution of a synthetic defect-free image or a synthetic defect image to be generated by analyzing a kind or a distribution of the defect image through an artificial intelligence ([0085], “Regardless of whether module 132 “pre-crops” image 802 down to image portion 810, module 132 reduces image sizes by cropping image 802 (or 810) down to various smaller image portions 812, 814, 816 that are associated with specific defect classes. These include an image portion 812 for detecting a missing needle shield, an image portion 814 for detecting syringe barrel defects, and an image portion 816 for detecting plunger defects. In some embodiments, defect classes may overlap to some extent. For instance, both image portion 812 and image portion 814 may also be associated with foreign particles within the container. In some embodiments, because a missing needle shield is an easily observed (coarse) defect, image pre-processing module 132 also down-samples the cropped image portion 812 (or, alternatively, down-samples image 802 or 810 before cropping to generate image portion 812).” [0112] teaches that the training data can be generated using a GAN: “Referring first to FIG. 12, an example technique 1200 generates synthetic container images using a generative adversarial network (GAN).” [0115], “The generated artificial/synthetic images may vary in one or more respects, such as any of various kinds of defects (e.g., stains, cracks, particles, etc.), and/or any non-defect variations (e.g., different positions for any or all of features 902 through 908 and/or any of the features in set 1120, and/or the presence of bubbles, etc.). In some embodiments, library expansion module 134 seeds particle locations (e.g., randomly or specifically chosen locations) and then uses a GAN to generate realistic particle images.”).

In a different embodiment, Milne teaches: but determines differently a kind, a number or a resolution of the synthetic defect-free image or the synthetic defect image to be generated according to a user’s request ([0066], “Computer system 104 may store and execute custom, user-facing software that facilitates the capture of training images (for image library 140), for the manual labeling of those images (to support supervised learning) prior to training the AVI neural network(s). For example, in addition to controlling the lights, agitation motor and camera(s) using VIS control module 120, memory unit 114 may store software that, when executed by processing unit 110, generates a graphic user interface (GUI) that enables a user to initiate various functions and/or enter controlling parameters. For example, the GUI may include interactive controls that enable the user to specify the number of frames/images that visual inspection system 102 is to capture, the rotation angle between frames/images (if different perspectives are desired), and so on. The GUI (or another GUI generated by another program) may also display each captured frame/image to the user, and include user interactive controls for manipulating the image (e.g., zoom, pan, etc.) and for manually labeling the image (e.g., “defect observed” or “no defect” for image classification, or drawing boundaries within, or pixel-wise labeling, portions of images for object detection).”).

The above limitations are taught in different embodiments by Milne. However, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have combined the different embodiments taught by Milne to allow users’ involvement in the image defect detection, so as to enable the system to provide more flexible and accurate defect detection results.

Regarding claim 9, Milne teaches:

The system of claim 8, wherein the training data generating unit detects a shape of a defect or a shape of a background by analyzing the kind or the distribution of the defect image and generates a different kind, number or resolution of the synthetic defect-free image or the synthetic defect image depending on the detected shape of the defect or the detected shape of the background ([0083], “If the defect class with the highest resolution requirement is used to dictate the resolution of all training images in image library 140 (and all images used to perform classification with the trained AVI neural network(s)), the processing/memory constraints noted above can result in unacceptably slow performance. Thus, in some embodiments, system 100 instead implements a phased approach that is at least partially based on the relative dimensions/sizes of the various defect classes (e.g., different defect classes associated with different AVI neural networks). In this phased approach, training images for some defect classes (i.e., the images used to train AVI neural network(s) corresponding to those defect classes) are reduced in size by lowering the resolution of the original container image (down-sampling), while training images for other defect classes are reduced in size by cropping to a smaller portion of the original container image. In some embodiments, training images for some defect classes are reduced in size by both cropping and down-sampling the original container image.” [0085]).

However, in the above citation, Milne does not explicitly teach: detects a shape of the defect or a shape of a background by analyzing a kind or a distribution of the defect image. On the other hand, Milne teaches: based on a shape of the defect or a shape of a background by analyzing a kind or a distribution of the defect image (as explained above using [0083], “Thus, in some embodiments, system 100 instead implements a phased approach that is at least partially based on the relative dimensions/sizes of the various defect classes (e.g., different defect classes associated with different AVI neural networks).”). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have included the step of “detects a shape of the defect or a shape of a background by analyzing a kind or a distribution of the defect image” in order to have data about the shape of the defect based on the kind of the defect image. The benefit of including this step is to explicitly acquire the defect shape data and make it possible to continue with the following step of generating different defect images to train the learned model.

Claim 11 recites similar limitations to claim 6, and is thus rejected accordingly. Claim 12 recites similar limitations to claim 1, and is thus rejected accordingly.

Claim(s) 2 and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Milne in view of Zheng et al. (US 2008/0101676 A1).
Regarding claim 2, Milne teaches: The system of claim 1, wherein the training data generating unit detect a shape(see claim 1) However, Milne does not, but Zheng teaches: extracts a Haar-like feature or a hand-crafted feature from the defect image ([0028], “Another advantage to using MSL for shape detection is that different features or learning methods can be used at each step. For example, in the translation estimation step rotation is treated as an intra-class variation so 3D Haar features can be used for detection.”) and detects the shape of the defect or the shape of the background using clustering based on the extracted feature.([0028]-[0030], “Another advantage to using MSL for shape detection is that different features or learning methods can be used at each step. For example, in the translation estimation step rotation is treated as an intra-class variation so 3D Haar features can be used for detection. In the translation-orientation and similarity transformation estimation steps, steerable features are used which will be described in further detail hereinafter. Steerable features have a very flexible framework in which a few points are sampled from the volume under a special pattern. A few local features are extracted for each sampling point, such as voxel intensity and gradient. To evaluate the steerable features under a specified orientation, only the sampling pattern needs to be steered and no volume rotation is involved. After the similarity transformation estimation, an initial estimate of the non-rigid shape is obtained. Learning based 3D boundary detection is used to guide the shape deformation in the active shape model framework. Again, steerable features are used to train local detectors and find the boundary under any orientation, therefore avoiding time consuming volume rotation. In many instances, the posterior distribution or the object to be detected, e.g., heart, is clustered in a small region in the high dimensional parameter space. 
It is not necessary to search the whole space uniformly and exhaustively. In accordance with an embodiment of the present invention, an efficient parameter search method, Marginal Space Learning (MSL), is used to search such clustered space. In MSL, the dimensionality of the search space is gradually increased. For purposes of explanation, Q is the space where the solution to the given problem exists and P.sub..OMEGA. is the true probability that needs to be learned.”) Milne teaches the training data generating unit generates training data based on detected shape information. Zheng teaches a specific method of detecting shape information. It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have combined the teachings of Milne with the specific teachings of Zheng to more accurately detecting shape information of a defect in an image. Claims 13 recite similar limitations of claim 2, thus are rejected accordingly. Claim(s) 5, 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Milne in view of Li et al. (US 2021/0004945 A1). Regarding claim 5, Milne teaches: The system of claim 3, wherein the defect-free image generating unit However, Milne does not explicitly, but Li teaches: generates the synthetic defect-free images by removing a defect area from the defect area and then replacing the removed defect area with proper background. ([0068], “Besides the GAN based model, other image recovering techniques for removing the defect can also be used, such as matching and copying background patches to the masked region, or matching the masked region from a database with image indexing, for example, by indexing a corresponding normal image in the database and copying the corresponding region in the indexed image to the masked region.”) Milne teaches generate no defect images. Li teaches a specific method of generating a no defect images. 
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have combined the teachings of Milne with the specific teachings of Li to easily generate defect-free images.

Claim 10 recites similar limitations to claim 5 and is rejected accordingly.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to YANNA WU, whose telephone number is (571) 270-0725. The examiner can normally be reached Monday-Thursday, 8:00-5:30 ET.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alicia Harrington, can be reached at (571) 272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/YANNA WU/
Primary Examiner, Art Unit 2615
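To make the shape-detection step cited from Zheng concrete, here is a minimal sketch of Haar-like feature extraction followed by clustering. This is an illustrative assumption, not code from either reference: the function names, the simple two-rectangle features (Zheng's passage concerns 3D Haar features for volumes), and the choice of k-means are all stand-ins for the general "extract a Haar-like feature, then cluster" pattern.

```python
import numpy as np
from sklearn.cluster import KMeans

def haar_like_features(patch):
    """Two-rectangle Haar-like responses for a square grayscale patch.

    Each feature is the difference between pixel sums of two adjacent
    rectangles (left vs. right half, top vs. bottom half) -- a crude 2D
    stand-in for the 3D Haar features Zheng uses during detection.
    """
    h, w = patch.shape
    left, right = patch[:, : w // 2].sum(), patch[:, w // 2 :].sum()
    top, bottom = patch[: h // 2, :].sum(), patch[h // 2 :, :].sum()
    return np.array([left - right, top - bottom], dtype=float)

def cluster_defect_shapes(patches, n_clusters=2, seed=0):
    """Extract a Haar-like feature from each defect patch, then group the
    patches into shape clusters with k-means ("clustering based on the
    extracted feature")."""
    feats = np.stack([haar_like_features(p) for p in patches])
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    return km.fit_predict(feats)

# Toy data: defects with a vertical edge vs. defects with a horizontal edge.
rng = np.random.default_rng(0)
vertical = [np.hstack([np.ones((8, 4)), np.zeros((8, 4))])
            + 0.01 * rng.standard_normal((8, 8)) for _ in range(5)]
horizontal = [np.vstack([np.ones((4, 8)), np.zeros((4, 8))])
              + 0.01 * rng.standard_normal((8, 8)) for _ in range(5)]
labels = cluster_defect_shapes(vertical + horizontal)
```

On this toy data the two defect shapes produce well-separated feature vectors, so the clustering cleanly recovers the two groups.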
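Likewise, the "matching and copying background patches to the masked region" alternative that Li's [0068] mentions can be sketched as follows. This toy version, under assumed simplifications (a single rectangular defect mask, a one-pixel matching border, sum-of-squares cost), searches the same image for the defect-free window whose surrounding ring best matches the ring around the defect and copies that window in; all names are illustrative.

```python
import numpy as np

def remove_defect(image, mask, pad=1):
    """Fill the masked defect region by copying the best-matching
    defect-free patch from elsewhere in the same image."""
    ys, xs = np.nonzero(mask)
    y0, y1 = ys.min(), ys.max() + 1          # defect bounding box
    x0, x1 = xs.min(), xs.max() + 1
    h, w = y1 - y0, x1 - x0

    def ring(img, y, x):
        # An h x w window plus a pad-wide border; the interior is NaN-ed
        # out so only the border contributes to the matching cost.
        block = img[y - pad : y + h + pad, x - pad : x + w + pad].astype(float)
        inner = block[pad:-pad, pad:-pad].copy()
        block[pad:-pad, pad:-pad] = np.nan
        return block, inner

    target_ring, _ = ring(image, y0, x0)
    best_fill, best_cost = None, np.inf
    H, W = image.shape
    for y in range(pad, H - h - pad + 1):
        for x in range(pad, W - w - pad + 1):
            if mask[y : y + h, x : x + w].any():   # skip the defect itself
                continue
            cand_ring, cand_inner = ring(image, y, x)
            cost = np.nansum((cand_ring - target_ring) ** 2)
            if cost < best_cost:
                best_fill, best_cost = cand_inner, cost
    out = image.astype(float).copy()
    out[y0:y1, x0:x1] = best_fill            # paste the matched background
    return out

# Demo: a smooth gradient background with a bright synthetic defect.
img = np.tile(np.arange(12.0), (12, 1))
defect = img.copy()
defect[4:7, 4:7] = 99.0
msk = np.zeros(defect.shape, dtype=bool)
msk[4:7, 4:7] = True
clean = remove_defect(defect, msk)
```

Because the background here is translation-invariant along one axis, the ring match finds an exact background replacement and the output reproduces the original defect-free image.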

Prosecution Timeline

May 12, 2024
Application Filed
Jan 13, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602850
GENERATIVE AI VIRTUAL CLOTHING TRY-ON
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12579664
EYE TRACKING METHOD, APPARATUS AND SENSOR FOR DETERMINING SENSING COVERAGE BASED ON EYE MODEL
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12573106
INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD FOR PROCESSING OVERLAY IMAGES
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12573108
HEAD-POSE AND GAZE REDIRECTION
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12555187
CLIENT-SERVER MEDICAL IMAGE STACK RETRIEVAL AND DISPLAY
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 81%
With Interview (+35.3%): 99%
Median Time to Grant: 2y 4m
PTA Risk: Low
Based on 438 resolved cases by this examiner. Grant probability derived from career allow rate.
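As a sanity check on the headline figure, the grant probability shown above follows directly from the counts reported on this page (354 granted of 438 resolved); the with-interview figure is taken from the page as reported, not derived here.

```python
# Counts reported on this page for this examiner's resolved cases.
granted, resolved = 354, 438
allow_rate = granted / resolved
print(f"career allow rate: {allow_rate:.1%}")  # 80.8%, displayed rounded as 81%
```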
