Prosecution Insights
Last updated: April 19, 2026
Application No. 18/363,088

IMAGE GENERATING METHOD, IMAGE GENERATING DEVICE, AND STORAGE MEDIUM

Status: Final Rejection (§103)
Filed: Aug 01, 2023
Examiner: YAO, JULIA ZHI-YI
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: Kabushiki Kaisha Yaskawa Denki
OA Round: 2 (Final)
Grant Probability: 68% (Favorable)
OA Rounds: 3-4
To Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 68% (47 granted / 69 resolved; +6.1% vs TC avg; above average)
Interview Lift: +35.7% (strong; allowance rate of resolved cases with vs. without an interview)
Avg Prosecution: 3y 4m (typical timeline; 29 currently pending)
Total Applications: 98 (across all art units)

Statute-Specific Performance

§101: 8.9% (-31.1% vs TC avg)
§103: 52.6% (+12.6% vs TC avg)
§102: 11.2% (-28.8% vs TC avg)
§112: 26.1% (-13.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 69 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status
Claims 1-17 were pending for examination in Application No. 18/363,088, filed August 1st, 2023. In the remarks and amendments received on January 26th, 2026, claims 1, 3, 9, 11, and 17 were amended, and claims 2, 4, 10, and 12 were canceled. Accordingly, claims 1, 3, 5-9, 11, and 13-17 are currently pending for examination in the application.

Response to Amendment
Applicant's amendments to the Specification filed January 26th, 2026, have overcome each and every objection previously set forth in the Non-Final Office Action mailed September 25th, 2025. Accordingly, the objection(s) are withdrawn. Examiner thanks Applicant for considering the suggested amendments to the disclosure.

Response to Arguments
Applicant's arguments filed January 26th, 2026, regarding the rejection(s) of the independent claim(s) have been fully considered but are not persuasive.

The examiner respectfully disagrees with Applicant's assertion that "an image generated from within the model", such as depicted in the prior art of Shaham, differs from Applicant's "input image[s]" as disclosed by Applicant (see pg. 10 of Applicant's Remarks). As disclosed in Fig. 3 of Applicant's drawings, "an intermediate layer among the plurality of layers" as recited in claim 1 refers to layers including "a generator and a discriminator in each of a plurality of layers", wherein an "input image" (an image input to a SinGAN model, as disclosed in lines 24-26 of Applicant's Specification) is first input to a first layer comprising a first generator and discriminator, such that the output of the first generator is then input into a subsequent generator in a subsequent layer of the model, as depicted in Fig. 3 and lines 6-21 of pg. 13 of Applicant's disclosure. Since the SinGAN model in Fig. 4 of Shaham discloses the same architecture as Fig. 3 of Applicant's SinGAN model, it is not unreasonable to interpret the claim 1 limitation "inputting the input image to the generator in an intermediate layer among the plurality of layers" as reading on the "input image" in a SinGAN model as taught by Shaham. The examiner respectfully notes that Applicant has not pointed out support in Applicant's Specification for the claim 1 limitation "inputting the input image to the generator in an intermediate layer among the plurality of layers" such that the "input image" precludes the interpretation of this limitation as taught by the SinGAN model of Shaham.

Further, the examiner respectfully disagrees that Shaham does not teach or suggest the claim 1 limitation "the generator in the intermediate layer is determined based on a layout of the portion of interest shown in the input image" (see pg. 11 of Applicant's Remarks). The examiner respectfully points out that the claim merely recites determining the "generator in the intermediate layer based on a layout of the portion of interest shown in the input image" and does not specify and/or require how this determination is made based on the "layout of the portion of interest shown in the input image", nor what constitutes and/or comprises a "layout of a portion of interest". Therefore, the claim, given its broadest reasonable interpretation, does not preclude the interpretation of inputting an image comprising a portion of interest (e.g., a "defect") as disclosed by Brauer into a SinGAN model comprising a generator in an intermediate layer determined based on a layout (e.g., a "distribution") of an input image as taught by Shaham as the claimed "generator in the intermediate layer [determined] based on a layout of the portion of interest shown in the input image".
Priority
(Previously Presented) Acknowledgment is made of Applicant's status as a continuation (CON) of International Application No. PCT/JP2022/005630, filed on February 14th, 2022, which claims priority to provisional U.S. Patent Application No. 63/272,173, filed on October 27th, 2021, and foreign Patent Application No. JP 2021-022117, filed on February 15th, 2021.

Acknowledgment is made of Applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed as foreign Patent Application No. JP 2021-022117, filed on February 15th, 2021. Acknowledgment is made of Applicant's claim for benefit of a prior-filed provisional application under 35 U.S.C. 119(e). The certified copy has been filed as provisional U.S. Patent Application No. 63/272,173, filed on October 27th, 2021.

Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 5-9, and 13-17 are rejected under 35 U.S.C. 103 as being unpatentable over Brauer (US 2022/0036539 A1) in view of Lin et al. (Lin; US 2021/0390682 A1), further in view of Shaham et al. (Shaham; "SinGAN: Learning a Generative Model From a Single Natural Image," 2019, provided by Applicant's IDS filed on August 1st, 2023), and furthermore in view of Liu et al. (Liu; "Multistage GAN for Fabric Defect Detection," 2019).

Regarding claim 1, Brauer discloses an image generating method, comprising: creating a (para(s). [0069] and [0092-0093] recite:

[0069] "In general, GANs consist of two adversarial models, a generative model, G, capturing the data distribution, and a discriminative model, D, estimating the probability that a given sample comes from the training data rather than G. ..."

[0092] "FIG. 4 illustrates one embodiment of steps that may be performed for artificial image generation using a GAN. In particular, as shown in FIG. 4, the one or more computer subsystems may input design image 400 (conditional image) with added defect 402 into trained generator network 404. The added or synthetic defect may be created in the design data portion shown in design image 400 as described further herein by the one or more computer subsystems. The trained generator network may output generated patch image 406 showing defect 408. In this manner, the synthetic defect is added defect 402 in the portion of design data shown in design image 400, which is input to the GAN by the one or more computer subsystems to thereby generate simulated image 406."

[0093] "As shown in FIG. 4, therefore, once the GAN is trained, the GAN can be used to generate simulated optical or other images. In this embodiment, the trained generator network is used to create a patch, real-looking image of an artificially introduced defect in a design clip. The generated patch images can then be used as described further herein."

where the "trained generator network" is a GAN model (e.g., "generative adversarial networks") including a first image (e.g., "design image 400") having a portion of interest shown partially on a target object (e.g., "defect 402"));

generating an input image by compositing a target object image and a portion-of-interest (para(s). [0092]—see citation above—where the "design image 400" is an input image generated by compositing (e.g., "synthetic defect is added") a target object image (e.g., "design image 400") and a portion-of-interest image (e.g., "added defect 402")); and

generating, based on the (para(s). [0092]—see citation above—where paras. [0089-0090] further recite:

[0089] "But in some cases, a first specimen may have similar enough characteristics (e.g., patterned features, materials, etc.) to a second specimen, that a GAN trained on the first specimen can be used to generate simulated images for the second specimen even if the first and second specimens do not have the same designs. ..."

[0090] "In this manner, a trained GAN may be repurposed for generating simulated images for specimens it was not necessarily trained for. In one such example, if two different specimens with two different designs have at least some patterned features in common in a portion of the design (e.g., similar memory array areas) formed of similar materials and having the same or similar dimensions, a GAN trained for one of the specimens may be capable of producing simulated images in that portion of the design for another of the specimens. ..."

where the "generated simulated image 406" is a second image exhibiting a portion of interest (e.g., "defect 408") different in mode (e.g., "different design[s]") from the portion of interest (e.g., "defect 402") of the first image),

wherein the generating of the second image includes inputting the input image to the generator (para(s). [0069] and [0092-0093]—see citations in the first limitation of the claim above—where the "design image" being "input to the GAN ... to thereby generate simulated image" is inputting the input image (e.g., "design image") to the generator (i.e., the "GAN" comprises a "generative model G")), and the generator (para(s). [0069] and [0092-0093]—see citations in the first limitation of the claim above—where the input image (e.g., "design image") comprising a portion of interest (e.g., "defect 402") shown partially on a target object is input into the "GAN" model comprising a generator (e.g., "generative model G"), which is the generator determined based on a layout (e.g., "data distribution") of the portion of interest shown in the input image).

Where Brauer does not specifically disclose creating a SinGAN model including a generator and a discriminator, and generating, based on the SinGAN model and the input image, a second image..., Lin teaches, in the same field of endeavor of generating defect images, creating a SinGAN model including a generator and a discriminator (para(s). [0042] and [0067] recite:

[0042] "... The generative adversarial networks model includes two parts, the generative network model and the discrimination network model. ..."

[0067] "In embodiments of the present disclosure, images of various surfaces of the defective article including various defects can be synthesized by acquiring the image of the surface of the good article and the image of the defect using an industrial camera in a fixed optical environment. Specifically, an unconditional generation model that can learn from a single natural image may be used to combine to generate a defect article surface image. The unconditional generation model may include sinGAN, DCGAN, or CGAN, etc. ... According to the present embodiment, image samples of different article surfaces containing different defects are acquired as a training set by attaching different defects to the surfaces of different good articles, so that the trained defect detection model has strong practicability."

where the unconditional generation model "sinGAN" is a SinGAN model based on a first image (e.g., "image of the defect") having a portion of interest (e.g., "defect") shown partially on a target); and generating, based on the SinGAN model and the input image, a second image... (para(s). [0067]—see citation above—where the "generat[ed] defect article surface image" is the second image).

Since Brauer and Lin each disclose a GAN model including a generator and a discriminator based on a first image having a portion of interest (e.g., a defect) shown partially on a target object, a person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the GAN model of Brauer could have been substituted for the SinGAN model of Lin, because both the GAN model and the SinGAN model serve the purpose of generating a second image exhibiting a portion of interest different in mode from the portion of interest in the first image.
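The generator/discriminator pairing that Brauer's paragraph [0069] describes (G capturing the data distribution, D scoring whether a sample looks real) can be sketched minimally. This is an illustrative NumPy toy, not anything from the record: the names (latent_dim, W_g, w_d) and the linear/logistic forms are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim, data_dim = 4, 8

# "Generative model G": here a linear map from noise to the data space.
W_g = rng.normal(size=(latent_dim, data_dim))
def generator(z):
    return z @ W_g

# "Discriminative model D": a logistic score that a given sample is real.
w_d = rng.normal(size=data_dim)
def discriminator(x):
    return 1.0 / (1.0 + np.exp(-(x @ w_d)))

z = rng.normal(size=(5, latent_dim))   # a batch of noise vectors
fake = generator(z)                    # simulated samples from G
scores = discriminator(fake)           # D's estimate each sample is "real"
print(fake.shape, scores.shape)        # (5, 8) (5,)
```

In adversarial training these two models would be updated against each other; the sketch only shows the data flow that the quoted paragraph names.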
Where Brauer in view of Lin does not specifically disclose creating a SinGAN model including a generator and a discriminator in each of a plurality of layers...; ...wherein the generating of the second image includes inputting the input image to the generator in an intermediate layer among the plurality of layers, and the generator in the intermediate layer is determined based on a layout of the portion of interest shown in the input image; Shaham teaches, in the same field of endeavor of a SinGAN model, creating a SinGAN model including a generator and a discriminator in each of a plurality of layers... (Fig. 4 on pg. 4571 [figure of Shaham reproduced in the original action], where each level of the "pyramid of GANs" is a layer comprising a generator (G_n) and a discriminator (D_n));

...wherein the generating of the second image includes inputting the input image to the generator in an intermediate layer among the plurality of layers (Fig. 4 on pg. 4571—see citation in claim 1 above—where the figure depicts that the input image (i.e., an image inputted to the SinGAN model, such as x̂_N in Fig. 4) is inputted to the generator in an intermediate layer (e.g., G_{N-1})), and the generator in the intermediate layer is determined based on a layout of the portion of interest shown in the input image (Fig. 4 on pg. 4571—see citation in claim 1 above—where section 2.1 on pg. 4571 further recites:

[2.1. Multi-scale architecture] "Our model consists of a pyramid of generators, {G_0, ..., G_N}, trained against an image pyramid of x: {x_0, ..., x_N}, where x_n is a downsampled version of x by a factor r^n, for some r > 1. Each generator G_n is responsible of producing realistic image samples w.r.t. the patch distribution in the corresponding image x_n. This is achieved through adversarial training, where G_n learns to fool an associated discriminator D_n, which attempts to distinguish patches in the generated samples from patches in x_n."

where a person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that, since the input image comprises the portion of interest (e.g., "defect") and an image comprising the portion of interest is output, as disclosed by Brauer in claim 1 above—see citations in the claim 1 limitation "generating an input image..." above—the generator in the intermediate layer, which takes in the input image (e.g., x̂_N in Fig. 4) and outputs the same patch distribution as the corresponding input image, is determined based on at least a layout (e.g., "distribution") of the portion of interest shown in the input image).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention for the SinGAN model of Brauer in view of Lin to include a generator and a discriminator in each of a plurality of layers, with the input image being input to the generator in an intermediate layer among the plurality of layers, wherein the generator in the intermediate layer is determined based on a layout of the portion of interest shown in the input image, because a SinGAN model inherently comprises a generator and a discriminator in each of a plurality of layers, with generators in intermediate layers determined based on a layout shown in the input image, as disclosed by Shaham above.

Where Brauer, as modified by Lin and Shaham, does not specifically disclose generating an input image by compositing a target object image and a portion-of-interest image, Liu teaches, in the same field of endeavor of generating defect images, generating an input image by compositing a target object image and a portion-of-interest image (subheading "Stage 2" on pg. 3393 recites:

[Stage 2] "After the defects are generated, we fuse the defects and textures using a defect-fusing network as depicted in stage 2 of Fig. 4. First, the defective zones from the training patches are cropped out, leaving only the defect-free fabric patches with blank windows. Then, the generated defects are resized and pasted onto the blank windows, thus obtaining the imperfect inputs x for the defect-fusing network. To be well defined, we name the generated patches T(x) and the training patches as real images y. Note that the defect-fusing network is trained to fuse different generated defects into their corresponding backgrounds; i.e., all training defective samples (with different textures) are utilized as real data during adversarial training"

where the "imperfect inputs" are input images generated by compositing (e.g., "past[ing]") a target object image (e.g., "defect-free fabric patches") and a portion-of-interest image (e.g., "generated defects")).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Brauer, as modified by Lin and Shaham, to incorporate generating an input image by compositing a target object image and a portion-of-interest image, in order to generate diverse training data comprising different defects and textures, as taught by Liu (subsection III(C) on pg. 3392 recites:

[C. Synthesizing Novel Defective Samples Using a Multistage GAN] "In real-world applications, since fabric textures are multifarious and continuously updated, we must consider the weak transferability issues of existing fabric defect detection methods. To maintain our method's adaptability, we propose a multistage GAN-based module for synthesizing defective fabric samples with novel textures. As one type of data augmentation solution, synthesized defective samples are utilized to further fine-tune our defect detection network. Thus, we no longer need to collect and label defective fabric samples with novel textures, instead simply synthesizing new defective fabric samples based on continuously updated defect-free samples. ..." ).

Regarding claim 5, Brauer, as modified by Lin, Shaham, and Liu, discloses the image generating method according to claim 1, wherein Brauer further discloses that the generating of the input image includes acquiring region information on the portion of interest (para(s). [0092]—see citation in the claim 1 limitation "creating a SinGAN model..." above—where the "added defect 402" is the acquired region information on the portion of interest (e.g., image information on the defective portion of the input image)), and wherein the generating of the second image includes: inputting the input image to the SinGAN model to generate an output image exhibiting the portion of interest different in mode from the portion of interest of the first image (para(s). [0092] and [0089-0090]—see citations in the claim 1 limitation "generating, based on the SinGAN model and the input image, a second image..." above—where the "generated simulated image 406" is an output image exhibiting the portion of interest (e.g., "defect") different in mode (e.g., "different design[s]") from the portion of interest (e.g., "defect 402") of the first image; and Lin teaches the GAN model as a SinGAN model—see the teaching of Lin in claim 1 above); and generating, based on the region information, the second image including the portion of interest included in the output image (para(s). [0092] and [0089-0090]—see citations in the claim 1 limitation "generating, based on the SinGAN model and the input image, a second image..." above—where the "generated simulated image 406" is also a second image included in the output image).
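The compositing step that Liu's "Stage 2" passage describes (crop a blank window out of a defect-free patch, then paste a generated defect into it to obtain the "imperfect input") can be sketched in a few lines. The arrays, sizes, and window position below are made up for illustration only.

```python
import numpy as np

fabric = np.full((16, 16), 0.9)   # defect-free fabric patch (target object image)
defect = np.full((4, 4), 0.1)     # generated defect (portion-of-interest image)
top, left = 6, 6                  # hypothetical window position

# Paste the defect into the blank window cut from the defect-free patch,
# yielding the composited "imperfect input" for the defect-fusing network.
composite = fabric.copy()
composite[top:top + 4, left:left + 4] = defect
```

In Liu's pipeline a fusing network would then blend the pasted defect into its background; the hard paste above is only the input to that step.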
Regarding claim 6, Brauer, as modified by Lin, Shaham, and Liu, discloses the image generating method according to claim 1, wherein Lin further teaches that the generating of the second image includes outputting an output image from the SinGAN model (para(s). [0067]—see citation in claim 1 above—where the "generat[ed] defect article surface image" is an output image from the SinGAN model), and wherein Shaham further teaches that the outputting of the output image includes: inputting a random noise to the generator in at least a lowest layer (Fig. 4 on pg. 4571—see citation in claim 1 above—where z_N in Fig. 4 is random noise inputted to the generator in at least a lowest layer (e.g., generator G_N in Fig. 4)); and outputting the output image including the portion-of-interest image from the generator in a highest layer (Fig. 4 on pg. 4571—see citation in claim 1 above—where the output image x_0 in Fig. 4 is an output image outputted from the generator in the highest layer (e.g., generator G_0 in Fig. 4); and, since the input image comprises the portion of interest (e.g., "defect") and an image comprising the portion of interest is output, as disclosed by Brauer in claim 1 above—see citations in the claim 1 limitation "generating an input image..." above—and Shaham teaches in section 2.1 on pg. 4571—see citation in claim 1 above—that the generators in a SinGAN model output the same patch distribution as the corresponding input image, a person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that an image outputted by the generator at the highest layer of the SinGAN model, which outputs the same patch distribution as the corresponding input image, would be an output image comprising the portion of interest (e.g., defect) shown in the input image).
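The coarse-to-fine data flow cited from Shaham's Fig. 4 (noise z_N into the coarsest generator G_N, each finer layer refining an upsampled version of the previous output, the finest layer emitting x_0) can be shown schematically. The generators here are identity placeholders and the pyramid depth is arbitrary; this sketches only the layer-to-layer flow, not a trained SinGAN.

```python
import numpy as np

def downsample(img, factor):
    # crude subsampling stand-in for downsampling by a factor r^n
    return img[::factor, ::factor]

def upsample(img, factor):
    # nearest-neighbour upsampling via a Kronecker product
    return np.kron(img, np.ones((factor, factor)))

rng = np.random.default_rng(0)
x = rng.random((16, 16))                            # training image x
pyramid = [x, downsample(x, 2), downsample(x, 4)]   # x_0 (finest) .. x_N (coarsest)

# Coarsest (lowest) layer: generator G_N sees only random noise z_N.
out = rng.random(pyramid[-1].shape)
# Finer layers: each G_n refines an upsampled version of the previous
# output (per-scale noise z_n omitted); G_n itself is a placeholder here.
for x_n in reversed(pyramid[:-1]):
    out = upsample(out, 2)

print(out.shape)   # (16, 16): the finest-scale output from the highest layer
```

The final `out` plays the role of x_0, the output image emitted by the generator in the highest layer.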
Regarding claim 7, Brauer, as modified by Lin, Shaham, and Liu, discloses the image generating method according to claim 1, wherein Shaham further teaches that the generating of the second image includes: inputting a random noise to the generator in at least a lowest layer (Fig. 4 on pg. 4571—see citation in claim 1 above—where z_N in Fig. 4 is random noise inputted to the generator in at least a lowest layer (e.g., generator G_N in Fig. 4)); and outputting the second image from the generator in a highest layer (Fig. 4 on pg. 4571—see citation in claim 1 above—where x_0 in Fig. 4 is the second image outputted from the generator in a highest layer).

Regarding claim 8, Brauer, as modified by Lin, Shaham, and Liu, discloses the image generating method according to claim 1, wherein Brauer further discloses that the portion of interest comprises a defective portion shown partially on the target object (para(s). [0092]—see citation in the claim 1 limitation "creating a SinGAN model..." above—where Fig. 4 [figure of Brauer reproduced in the original action] further depicts the "defect 402" shown partially in "design image 400").

Regarding claim 9, the claim differs from claim 1 in that it is in the form of an image generating device, comprising: at least one processor; and at least one memory device configured to store a plurality of instructions to be executed by the at least one processor, wherein the at least one memory device is configured to store the SinGAN model of claim 1, and wherein the plurality of instructions cause the at least one processor to execute the method of claim 1. Brauer discloses said processor and memory device (para(s). [0042] recites:

[0042] "... In general, the term 'computer system' may be broadly defined to encompass any device having one or more processors, which executes instructions from a memory medium. The computer subsystem(s) or system(s) may also include any suitable processor known in the art such as a parallel processor. In addition, the computer subsystem(s) or system(s) may include a computer platform with high speed processing and software, either as a standalone or a networked tool."

where the "memory medium" is a memory device). Therefore, claim 9 recites similar limitations to claim 1 and is rejected for similar rationale and reasoning (see the analysis for claim 1 above).

Regarding claim 13, the claim recites similar limitations to claim 5 and is rejected for similar rationale and reasoning (see the analysis for claim 5 above).

Regarding claim 14, the claim recites similar limitations to claim 6 and is rejected for similar rationale and reasoning (see the analysis for claim 6 above).

Regarding claim 15, the claim recites similar limitations to claim 7 and is rejected for similar rationale and reasoning (see the analysis for claim 7 above).

Regarding claim 16, the claim recites similar limitations to claim 8 and is rejected for similar rationale and reasoning (see the analysis for claim 8 above).

Regarding claim 17, the claim differs from claim 1 in that it is in the form of a non-transitory computer-readable information storage medium having stored thereon a program executed by a computer, the program causing the computer to operate as an image generating device configured to execute the method of claim 1. Brauer discloses said non-transitory computer-readable information storage medium (para(s). [0145] recites:

[0145] "Program instructions 702 implementing methods such as those described herein may be stored on computer-readable medium 700. The computer-readable medium may be a storage medium such as a magnetic or optical disk, a magnetic tape, or any other suitable non-transitory computer-readable medium known in the art." ).
Therefore, claim 17 recites similar limitations to claim 1 and is rejected for similar rationale and reasoning (see the analysis for claim 1 above).

Claims 3 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Brauer, as modified by Lin, Shaham, and Liu, as applied to claims 1 and 9 above, and furthermore in view of Ki et al. (Ki; US 2020/0380373 A1).

Regarding claim 3, Brauer, as modified by Lin, Shaham, and Liu, discloses the image generating method according to claim 1, wherein Ki teaches, in the same field of endeavor of defect image generation, that the generating of the input image includes generating the input image by cutting out a region of the portion of interest and a periphery of the portion of interest from the composited target object image and portion-of-interest image (para(s). [0041], [0055], [0067], and [0080] recite:

[0041] "The reconstruction algorithm 200 according to the exemplary embodiment of the present disclosure may be an image reconstruction algorithm. Herein, the image reconstruction algorithm may include, for example, a Variational Autoencoder (VAE) and a Generative Model, and particularly, include generative adversarial networks, conditional generative adversarial networks, and the like. The image reconstruction algorithm is merely an example, and the scope of the present disclosure is not limited thereto."

[0055] "The first, second, and third training of the present disclosure may be performed by using patches obtained by dividing an entire image in a predetermined size, as input, and may be performed by using a patch extracted from a portion corresponding to a defect in the image as input."

[0067] "The computing device 100 according to the exemplary embodiment of the present disclosure may input defect data 211 of the source domain, to which the first mask is applied, to the generating network 510 (S310). In the case where the defect data is image data, the computing device 100 according to the exemplary embodiment of the present disclosure may progress a training process by inputting a patch extracted from the entire image to the generating network 510 and may also progress a training process by inputting a patch extracted from a defect part of the image to the generating network 510. ..."

[0080] "... The image patch 710 may have a size of N pixels × N pixels, and may have a size of 1 pixel. The image patch of the present disclosure may be extracted from the entirety or a part of the data. For example, the image patch of the present disclosure may be extracted mainly based on the defect part in the defect data. The identification network may output final distinguishment information by aggregating the response values for each image patch obtained by comparing the defect data 213 of the source domain, to which the first mask is not applied, and the generated defect data 515 of the source domain, to which the first mask is reconstructed for each image patch, and the scope of the present disclosure is not limited thereto."

where the input to an image "generating network" can be an "image patch" extracted from "part of the [entirety of image] data" mainly "based on the defect part in the defect data", which is generating the input image by cutting out (i.e., "extract[ing]") a region of the portion of interest (e.g., the "defect") and a periphery of the portion of interest from an image comprising a target object image and a portion-of-interest image (i.e., an "image patch" comprises both the defect—i.e., the portion of interest—and the area around the defect—i.e., the periphery of the portion of interest)).
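The defect-centred patch extraction that the Ki citations describe (an N × N patch cut mainly around the defect part, so the input contains the defect plus its periphery) reduces to simple array slicing. The image, defect location, and patch size N below are hypothetical values chosen for illustration.

```python
import numpy as np

image = np.zeros((32, 32))
image[14:18, 14:18] = 1.0      # a 4x4 "defect" in an otherwise clean image
cy, cx, n = 16, 16, 8          # defect centre and patch size N (made up)

# Cut out an N x N patch around the defect: it contains the defect region
# itself plus the surrounding periphery.
patch = image[cy - n // 2 : cy + n // 2, cx - n // 2 : cx + n // 2]
print(patch.shape)             # (8, 8)
```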
It would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Brauer, as modified by Lin, Shaham, and Liu, to incorporate cutting out a region of the portion of interest and a periphery of the portion of interest from the composited target object image and portion-of-interest image, in order to improve input image generation by focusing the image generation network on generating an output image comprising the portion of interest (e.g., the defect), as taught by Ki above.

Regarding claim 11, the claim recites similar limitations to claim 3 and is rejected for similar rationale and reasoning (see the analysis for claim 3 above).

Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JULIA Z YAO, whose telephone number is (571) 272-2870. The examiner can normally be reached Monday - Friday, 8:30 AM - 5 PM.
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.Z.Y./
Examiner, Art Unit 2666

/EMILY C TERRELL/
Supervisory Patent Examiner, Art Unit 2666

Prosecution Timeline

Aug 01, 2023
Application Filed
Sep 19, 2025
Non-Final Rejection — §103
Jan 05, 2026
Interview Requested
Jan 15, 2026
Applicant Interview (Telephonic)
Jan 15, 2026
Examiner Interview Summary
Jan 26, 2026
Response Filed
Feb 18, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597169
ACTIVITY PREDICTION USING PORTABLE MULTISPECTRAL LASER SPECKLE IMAGER
2y 5m to grant • Granted Apr 07, 2026
Patent 12586219
Fast Kinematic Construct Method for Characterizing Anthropogenic Space Objects
2y 5m to grant • Granted Mar 24, 2026
Patent 12579638
IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM FOR PERFORMING DETERMINATION REGARDING DIAGNOSIS OF LESION ON BASIS OF SYNTHESIZED TWO-DIMENSIONAL IMAGE AND PRIORITY TARGET REGION
2y 5m to grant • Granted Mar 17, 2026
Patent 12562063
METHOD FOR DETECTING ROAD USERS
2y 5m to grant • Granted Feb 24, 2026
Patent 12561805
METHODS AND SYSTEMS FOR GENERATING DUAL-ENERGY IMAGES FROM A SINGLE-ENERGY IMAGING SYSTEM BASED ON ANATOMICAL SEGMENTATION
2y 5m to grant • Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
68%
Grant Probability
99%
With Interview (+35.7%)
3y 4m
Median Time to Grant
Moderate
PTA Risk
Based on 69 resolved cases by this examiner. Grant probability derived from career allow rate.
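One plausible reading of the interview figures above, stated as an assumption about this page's methodology rather than a documented formula: the +35.7-point lift is the simple difference between the examiner's allow rate in resolved cases with an interview (99%) and without one.

```python
# Assumed reconstruction of the interview-lift figure shown above:
# lift = with-interview allow rate minus without-interview allow rate.
with_interview = 0.99        # grant probability with interview, as displayed
lift = 0.357                 # reported interview lift, as displayed
without_interview = with_interview - lift
print(f"without interview: {without_interview:.1%}")  # without interview: 63.3%
```

Under this reading, the 68% headline figure would be the blended career allow rate across both groups.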
