Prosecution Insights
Last updated: April 19, 2026
Application No. 16/925,085

ATTRIBUTE-AWARE IMAGE GENERATION USING NEURAL NETWORKS

Final Rejection §103

Filed: Jul 09, 2020
Examiner: SITIRICHE, LUIS A
Art Unit: 2126
Tech Center: 2100 — Computer Architecture & Software
Assignee: Nvidia Corporation
OA Round: 4 (Final)

Grant Probability: 78% (Favorable)
OA Rounds: 5-6
To Grant: 3y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 78% (363 granted / 468 resolved), above average; +22.6% vs TC avg
Interview Lift: +22.1% for resolved cases with interview (a strong lift)
Typical Timeline: 3y 7m avg prosecution; 24 applications currently pending
Career History: 492 total applications across all art units

Statute-Specific Performance

Statute   Rate     vs TC Avg
§101      24.2%    -15.8%
§103      39.1%    -0.9%
§102      12.4%    -27.6%
§112      13.5%    -26.5%

Based on career data from 468 resolved cases; Tech Center averages are estimates.
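The headline figures above are internally consistent; a quick plain-Python check (the variable names are illustrative, and the Tech Center average is back-calculated from the displayed +22.6% delta, not an official figure):

    granted, resolved = 363, 468
    career_allow_rate = granted / resolved        # 0.7756 -> shown as 78%
    implied_tc_avg = career_allow_rate - 0.226    # ~55%, per the "+22.6% vs TC avg" card
    print(f"{career_allow_rate:.1%}, {implied_tc_avg:.1%}")  # 77.6%, 55.0%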

Office Action

§103
DETAILED ACTION

This Office Action is in response to the remarks entered on 08/05/2025. Claims 1-8, 11, 14-19, 21-24, 26-27 and 29-31 are amended. Claims 1-31 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 21-23 and 28-29 are rejected under 35 U.S.C. 103 as being unpatentable over Xiao (Xiao et al., DNA-GAN: Learning Disentangled Representations From Multi-Attribute Images, Department of Information Science, Beijing, China, Workshop track - ICLR 2018, 1-14; hereinafter "Xiao") in view of Badr (Badr, Auto-Encoder: What Is It? And What Is It Used For? (Part 1), Apr. 22, 2019; hereinafter "Badr").
Regarding Claim 21: Xiao teaches: "A method comprising: updating one or more parameters of one or more neural networks to generate one or more images based, at least in part, on loss values computed based, at least in part, on one or more differences between encodings of attributes in images generated by the one or more neural networks and encodings of attributes indicated to be included in the images generated by the one or more neural networks [Xiao discloses reconstruction loss, which is used to measure the difference between the original input (which corresponds to the encodings of attributes indicated to be included) and the reconstructed output (which corresponds to the encodings of attributes in images generated). This reconstruction loss is used to iteratively train a model (which corresponds to the updating of parameters) to generate images as close as possible to the original one. This can be seen at p. 2, first paragraph: "With the help of the adversarial discriminator loss and the reconstruction loss, DNA-GAN can reconstruct the input images and generate new images with new attributes"; also at p. 3, Figure 1; at p. 4, Section 3.2 [reproduced equation image omitted]; and at p. 5, Section 3.4 [reproduced equation image omitted]. This is also evidenced by Badr at p. 1: "4- Reconstruction Loss: This is the method that measures how well the decoder is performing and how close the output is to the original input. The training then involves using back propagation in order to minimize the network's reconstruction loss"]." It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Xiao with the above teachings of Badr by generating images using a trained neural network by updating its parameters iteratively, as taught by Xiao, based on a reconstruction loss to measure how close the output image is to the original image, as taught by Badr. The modification would have been obvious because one of ordinary skill in the art would be motivated to reconstruct the data back from the reduced encoded representation to a representation that is as close to the original input as possible (see Badr at [Background]: "Autoencoder is an unsupervised artificial neural network that learns how to efficiently compress and encode data then learns how to reconstruct the data back from the reduced encoded representation to a representation that is as close to the original input as possible").

Regarding Claim 22: The combination of Xiao and Badr teaches "the method of claim 21" as seen above. Xiao further teaches: "generating a style based, at least in part, on a factor code indicating each of one or more attributes of the attributes to be generated in the one or more images and a latent code representing one or more input images [Xiao discloses an encoder that encodes an image to only have attribute-relevant parts (i.e. factor code) [pg. 1, Section 1]. The encoder also maps real-world images into latent disentangled representations (i.e. latent code) [pg. 3, Section 3.1], which then generates attribute subspaces by linear combination of disentangled encodings (i.e. style) [pg. 8]], and generating the one or more images based, at least in part, on the style and the one or more input images [Xiao discloses images generated based on swapping attributes from an input image, as stated on pg. 7, Figure 3a]."

Regarding Claim 23: The combination of Xiao and Badr teaches "the method of claim 22" as seen above. Xiao further teaches: "wherein the factor code comprises one or more data values and each of the one or more data values indicates an individual one of the one or more attributes to be generated in the one or more images [Xiao discloses an encoder that encodes an image into attribute-relevant and attribute-irrelevant parts, meaning attributes in an image are converted into some data values to be used by the neural network, as stated on pg. 1, Section 1: "In DNA-GAN, an encoder is used to encode an image to the attribute-relevant part and the attribute-irrelevant part, where different pieces in the attribute-relevant part encode information of different attributes, and the attribute-irrelevant part encodes other information. For example, given a facial image, we are trying to obtain a latent representation that each individual part controls different attributes, such as hairstyles, genders, expressions and so on"]."

Regarding Claim 28: The combination of Xiao and Badr teaches "the method of claim 21" as seen above. Xiao further teaches: "training the one or more neural networks using a training framework comprising a generative adversarial network [Xiao discloses a GAN architecture which uses a decoder that generates images (i.e. generator network) and a discriminator on pg. 3, Figure 1], the generative adversarial network comprising a discriminator to indicate whether the one or more images are generated by the one or more neural networks and the discriminator generating an approximated factor code based, at least in part, on the one or more images [Xiao discloses that the GAN includes a discriminator that indicates how realistic a generated image from a generator is, as stated on pg. 4, Section 3.2: "The discriminator takes the generated image and the i-th element of its label as inputs, and outputs a number which indicates how realistic the input image is. The larger the number is, the more realistic the image is"]."

Regarding Claim 29: The combination of Xiao and Badr teaches "the method of claim 21" as seen above. Xiao further teaches: "wherein the one or more images are generated by the one or more neural networks to contain the attributes [Xiao discloses generated images with different illumination levels swapped from disentangled representations, as stated on pg. 7, Figure 2: "Manipulating illumination factors on the Multi-PIE dataset. From left to right, the six images in a row are: original images A with the light illumination and B with the dark illumination, newly generated images A2 and B2 by swapping the illumination-relevant piece in disentangled representations, and reconstructed images A1 and B1"], where the attributes are applied to one or more objects identified in one or more input images, the one or more input images used to generate the one or more images [Xiao discloses the illumination factors were manipulated using disentangled representations to change the illumination levels of newly generated images from original input images, as stated on pg. 7, Figure 2: "Manipulating illumination factors on the Multi-PIE dataset. From left to right, the six images in a row are: original images A with the light illumination and B with the dark illumination [one or more attributes applied to one or more objects in one or more input images], newly generated images A2 and B2 by swapping the illumination-relevant piece in disentangled representations, and reconstructed images A1 and B1"]."
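To make the cited DNA-GAN mechanism concrete, here is a minimal PyTorch sketch of the "swap one attribute-relevant piece between two latents" operation the rejection maps onto claims 22 and 29. The shapes, names, and slicing layout are illustrative assumptions, not Xiao's actual implementation:

    import torch

    def swap_attribute(z_a, z_b, piece, piece_size):
        """Swap the `piece`-th attribute-relevant slice between two latents."""
        lo, hi = piece * piece_size, (piece + 1) * piece_size
        z_a2, z_b2 = z_a.clone(), z_b.clone()
        z_a2[:, lo:hi], z_b2[:, lo:hi] = z_b[:, lo:hi], z_a[:, lo:hi]
        return z_a2, z_b2

    # latent = [attribute-relevant pieces | attribute-irrelevant part]
    z_a = torch.randn(1, 128)   # e.g. image A, light illumination
    z_b = torch.randn(1, 128)   # e.g. image B, dark illumination
    z_a2, z_b2 = swap_attribute(z_a, z_b, piece=0, piece_size=16)
    # a decoder applied to z_a2 would render A with B's illumination, and vice versa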
Claims 1-9, 11-20 and 24-27 are rejected under 35 U.S.C. 103 as being unpatentable over Xiao (Xiao et al., DNA-GAN: Learning Disentangled Representations From Multi-Attribute Images, Department of Information Science, Beijing, China, Workshop track - ICLR 2018, 1-14; hereinafter "Xiao") in view of Badr (Badr, Auto-Encoder: What Is It? And What Is It Used For? (Part 1), Apr. 22, 2019; hereinafter "Badr"), and further in view of Karras (Karras et al., A Style-Based Generator Architecture for Generative Adversarial Networks, NVIDIA, 2019, 1-12; hereinafter "Karras").

Regarding Claim 1: Xiao teaches: "One or more processors comprising: [Xiao discloses using a processor for attribute-aware image generation on pg. 6: "The following results are obtained using the official code and pre-trained celebA model provided by the author." Xiao also provides a GitHub link to the code for the model, which can be inferred as using some computer system to run the code] circuitry to update one or more parameters of one or more neural networks to generate one or more images based, at least in part, on loss values computed based, at least in part, on one or more differences between encodings of attributes in images generated by the one or more neural networks and encodings of attributes indicated to be included in the images generated by the one or more neural networks [Xiao discloses reconstruction loss, which is used to measure the difference between the original input (which corresponds to the encodings of attributes indicated to be included) and the reconstructed output (which corresponds to the encodings of attributes in images generated). This reconstruction loss is used to iteratively train a model (which corresponds to the updating of parameters) to generate images as close as possible to the original one. This can be seen at p. 2, first paragraph: "With the help of the adversarial discriminator loss and the reconstruction loss, DNA-GAN can reconstruct the input images and generate new images with new attributes"; also at p. 3, Figure 1; at p. 4, Section 3.2 [reproduced equation image omitted]; and at p. 5, Section 3.4 [reproduced equation image omitted]. This is also evidenced by Badr at p. 1: "4- Reconstruction Loss: This is the method that measures how well the decoder is performing and how close the output is to the original input. The training then involves using back propagation in order to minimize the network's reconstruction loss"]." It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Xiao with the above teachings of Badr by generating images using a trained neural network by updating its parameters iteratively, as taught by Xiao, based on a reconstruction loss to measure how close the output image is to the original image, as taught by Badr. The modification would have been obvious because one of ordinary skill in the art would be motivated to reconstruct the data back from the reduced encoded representation to a representation that is as close to the original input as possible (see Badr at [Background]: "Autoencoder is an unsupervised artificial neural network that learns how to efficiently compress and encode data then learns how to reconstruct the data back from the reduced encoded representation to a representation that is as close to the original input as possible").
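For reference, a minimal PyTorch sketch of the reconstruction-loss training step described in this mapping: the loss measures the difference between the network's output and the input it was directed to reproduce, and backpropagation updates the parameters. The toy encoder/decoder and shapes are illustrative, not code from Xiao or Badr:

    import torch
    import torch.nn as nn

    encoder = nn.Linear(784, 128)      # stand-ins for the encoder/decoder pair
    decoder = nn.Linear(128, 784)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

    x = torch.rand(32, 784)                    # a batch of (flattened) input images
    recon = decoder(encoder(x))                # reconstructed output
    loss = nn.functional.mse_loss(recon, x)    # reconstruction loss: output vs. original

    opt.zero_grad()
    loss.backward()    # back propagation, per the Badr quote
    opt.step()         # "update one or more parameters"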
In addition, even though Xiao implicitly teaches one or more processors comprising circuitry, Karras teaches it [Karras discloses using an NVIDIA DGX-1 with 8 Tesla V100 GPUs on pg. 9, Section C: "Our training time is approximately one week on an NVIDIA DGX-1 with 8 Tesla V100 GPUs"]. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combination of Xiao and Badr with the teachings of Karras by using one or more neural networks to generate one or more images based on a reconstruction loss to measure how close the output image is to the original image, as taught by Xiao and Badr, wherein the neural networks are used by one or more processors, as taught by Karras. One would be motivated to do so to improve the quality of the generated images, as disclosed by Karras (Karras, pg. 2, Section 2.1, Quality of generated images: "Before studying the properties of our generator, we demonstrate experimentally that the redesign does not compromise image quality but, in fact, improves it considerably").

Regarding Claim 2: Xiao, Badr and Karras teach "the one or more processors of claim 1" as seen above. Xiao also teaches: "wherein: the attributes are indicated by a factor code [From the specification of the instant application, the broadest reasonable interpretation of factor code is a set of attributes, as stated in paragraph 64. Xiao discloses a factor code on pg. 1 through the use of an encoder and attribute-relevant parts: "In DNA-GAN, an encoder is used to encode an image to the attribute-relevant part and the attribute-irrelevant part, where different pieces in the attribute-relevant part encode information of different attributes, and the attribute-irrelevant part encodes other information"]; one or more input images are encoded into a latent code [Xiao discloses a latent code that encodes input images using an encoder on pg. 3, Section 3.2: "The encoder maps the real-world images A and B into two latent disentangled representations"]; a style is generated based, at least in part, on the factor code and the latent code [Xiao discloses attribute subspaces (i.e. styles) based on linear combinations of disentangled representations (i.e. factor code and latent code) on pg. 8: "Since different attributes are encoded in different DNA pieces in our latent representations, we are able to interpolate the attribute subspaces by linear combination of disentangled encodings"]; and the one or more images are generated based on the one or more input images and the style [Xiao discloses experimental results of generated images based on changing attributes in original images on pg. 7, Figure 3a: "The experimental results of TD-GAN and IcGAN on CelebA dataset… For each model, the four images in a row are: two original images, and two newly generated images by swapping the attributes"]."
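A compact sketch of the claim-2 data flow as the rejection reads it onto the references: a factor code (one value per requested attribute) is combined with a latent code derived from an input image to produce a "style" that would condition generation. All module names and sizes here are illustrative assumptions:

    import torch
    import torch.nn as nn

    factor_code = torch.tensor([[1., 0., 1.]])   # 3 attribute flags, e.g. smiling / glasses / dark hair
    image_encoder = nn.Linear(784, 16)           # latent code from an input image
    style_mlp = nn.Sequential(nn.Linear(16 + 3, 32), nn.ReLU(), nn.Linear(32, 32))

    x = torch.rand(1, 784)                                       # one (flattened) input image
    latent = image_encoder(x)                                    # latent code
    style = style_mlp(torch.cat([latent, factor_code], dim=1))   # style from factor code + latent code
    # a synthesis/decoder network would then render the output image from (x, style)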
Regarding Claim 3: Xiao, Badr and Karras teach "The one or more processors of claim 2" as seen above. Xiao further teaches: "wherein the factor code is a set of data values and each data value indicates an individual attribute to be generated in the one or more images [Xiao discloses an encoder that encodes an image to attribute-relevant parts, which act as data values indicating individual attributes, as stated on pg. 1, Section 1: "In DNA-GAN, an encoder is used to encode an image to the attribute-relevant part and the attribute-irrelevant part, where different pieces in the attribute-relevant part encode information of different attributes, and the attribute-irrelevant part encodes other information. For example, given a facial image, we are trying to obtain a latent representation that each individual part controls different attributes, such as hairstyles, genders, expressions and so on"]."

Regarding Claim 4: Xiao, Badr and Karras teach "the one or more processors of claim 2" as seen above. Karras further teaches: "wherein the style comprises information about the one or more attributes to be generated in the one or more images [Karras discloses styles that affect certain aspects of an image through a localized network on pg. 3, Section 3: "The effects of each style are localized in the network, i.e., modifying a specific subset of the styles can be expected to affect only certain aspects of the image"]; and the style is generated by one or more fully connected layers of the one or more neural networks" [Karras discloses a mapping network with 8 fully connected layers that calculates a vector w, from which a spatially invariant style y is then computed, on pg. 2, Figure 1b]. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combination of Xiao and Badr to include the teachings of Karras for at least the same reasons as discussed above in claim 1.

Regarding Claim 5: Xiao, Badr and Karras teach "the one or more processors of claim 2" as seen above. Xiao further teaches: "wherein the one or more images comprise the one or more input images modified to contain the attributes [Xiao discloses generated images from original input images with modified illumination factors, some containing darker illumination and some lighter illumination, on pg. 7, Figure 2: "Manipulating illumination factors on the Multi-PIE dataset. From left to right, the six images in a row are: original images A with the light illumination and B with the dark illumination, newly generated images A2 and B2 by swapping the illumination-relevant piece in disentangled representations, and reconstructed images A1 and B1"]."

Regarding Claim 6: Xiao, Badr and Karras teach "The one or more processors of claim 1" as seen above. Xiao further teaches: "wherein the one or more neural networks are trained using a generative adversarial network [Xiao discloses other neural network models being trained using generative adversarial networks on pg. 2: "As the generative adversarial network (GAN) (Goodfellow et al., 2014) was established, many implicit models have been developed"]."

Regarding Claim 7: Xiao teaches: "A system comprising: one or more processors [Xiao discloses using a processor for attribute-aware image generation on pg. 6: "The following results are obtained using the official code and pre-trained celebA model provided by the author." Xiao also provides a GitHub link to the code for the model, which can be inferred as using some computer system to run the code] to update one or more parameters of one or more neural networks to generate one or more images based, at least in part, on loss values computed based, at least in part, on one or more differences between encodings of attributes in images generated by the one or more neural networks and encodings of attributes indicated to be included in the images generated by the one or more neural networks [Xiao discloses reconstruction loss, which is used to measure the difference between the original input (which corresponds to the encodings of attributes indicated to be included) and the reconstructed output (which corresponds to the encodings of attributes in images generated). This reconstruction loss is used to iteratively train a model (which corresponds to the updating of parameters) to generate images as close as possible to the original one. This can be seen at p. 2, first paragraph: "With the help of the adversarial discriminator loss and the reconstruction loss, DNA-GAN can reconstruct the input images and generate new images with new attributes"; also at p. 3, Figure 1; at p. 4, Section 3.2 [reproduced equation image omitted]; and at p. 5, Section 3.4 [reproduced equation image omitted]. This is also evidenced by Badr at p. 1: "4- Reconstruction Loss: This is the method that measures how well the decoder is performing and how close the output is to the original input. The training then involves using back propagation in order to minimize the network's reconstruction loss"]." It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Xiao with the above teachings of Badr by generating images using a trained neural network by updating its parameters iteratively, as taught by Xiao, based on a reconstruction loss to measure how close the output image is to the original image, as taught by Badr. The modification would have been obvious because one of ordinary skill in the art would be motivated to reconstruct the data back from the reduced encoded representation to a representation that is as close to the original input as possible (see Badr at [Background]: "Autoencoder is an unsupervised artificial neural network that learns how to efficiently compress and encode data then learns how to reconstruct the data back from the reduced encoded representation to a representation that is as close to the original input as possible"). In addition, even though Xiao implicitly teaches one or more processors, Karras teaches it [Karras discloses using an NVIDIA DGX-1 with 8 Tesla V100 GPUs on pg. 9, Section C: "Our training time is approximately one week on an NVIDIA DGX-1 with 8 Tesla V100 GPUs"]. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combination of Xiao and Badr with the teachings of Karras by using one or more neural networks to generate one or more images based on a reconstruction loss to measure how close the output image is to the original image, as taught by Xiao and Badr, wherein the neural networks are used by one or more processors, as taught by Karras.
One would be motivated to do so to improve the quality of the generated images, as disclosed by Karras (Karras, pg. 2, Section 2.1, Quality of generated images: "Before studying the properties of our generator, we demonstrate experimentally that the redesign does not compromise image quality but, in fact, improves it considerably").

Regarding Claim 8: The combination of Xiao, Badr and Karras teaches "the system of claim 7" as seen above. Xiao further teaches: "wherein: the attributes are indicated in a factor code [Xiao discloses an encoder to encode an image into attribute-relevant and attribute-irrelevant parts (i.e. attributes indicated by a factor code) on pg. 1, Section 1: "In DNA-GAN, an encoder is used to encode an image to the attribute-relevant part and the attribute-irrelevant part, where different pieces in the attribute-relevant part encode information of different attributes [one or more attributes are indicated by a factor code], and the attribute-irrelevant part encodes other information"]." However, the combination of Xiao and Badr does not appear to teach: "the one or more images are generated by a generator neural network, the generator neural network comprises a mapping network and a synthesis network, the mapping network identifies a style based on the factor code and an encoding of one or more input images, and the synthesis network generates the one or more images based, at least in part, on the style." Karras, however, teaches: "the one or more images are generated by a generator neural network [Karras discloses a set of images produced by a style-based generator on pg. 3, Figure 2: "Uncurated set of images produced by our style-based generator (config F) with the FFHQ dataset"], the generator neural network comprises a mapping network and a synthesis network [Karras discloses both a mapping and a synthesis neural network on pg. 2, Figure 1], the mapping network identifies a style based on the factor code and an encoding of one or more input images [Karras discloses a style y generated from a vector w (i.e. factor code) and a latent code z from an input latent space Z on pg. 1, Section 2: "Given a latent code z in the input latent space, a non-linear mapping network f: Z -> W first produces w ∈ W… Comparing our approach to style transfer, we compute the spatially invariant style y from vector w instead of an example image"], and the synthesis network generates the one or more images based, at least in part, on the style [Karras discloses a synthesis network that generates novel images based on a collection of styles on pg. 5-6, Section 3.2: "We can view the mapping network and affine transformations as a way to draw samples for each style from a learned distribution, and the synthesis network as a way to generate a novel image based on a collection of styles"]." The system of Xiao and Badr, the teachings of Karras, and the instant application are analogous art because they are all directed to generating images with location attributes from an input image using a generative adversarial network. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combination of Xiao and Badr to include the teachings of Karras for at least the same reasons as discussed above in claim 7.
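The Karras mapping/synthesis split cited for claim 8 can be sketched in a few lines of PyTorch. The layer count follows Karras's description (8 fully connected layers; styles as per-layer scale and bias for AdaIN), but the code itself is an illustrative reconstruction with assumed sizes, not the StyleGAN source:

    import torch
    import torch.nn as nn

    # mapping network: 8 fully connected layers, z -> w (cf. Karras Fig. 1b)
    mapping = nn.Sequential(*[layer
                              for _ in range(8)
                              for layer in (nn.Linear(512, 512), nn.ReLU())])

    z = torch.randn(1, 512)    # latent code
    w = mapping(z)             # intermediate latent w

    # learned affine transform specializes w into a style y = (ys, yb)
    # that would control AdaIN at one synthesis-network layer
    to_style = nn.Linear(512, 2 * 256)
    ys, yb = to_style(w).chunk(2, dim=1)   # scale and bias, each 256-d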
Regarding Claim 9: Xiao, Badr and Karras teach the system of claim 8, wherein the factor code comprises one or more data values indicating each of one or more attributes to be generated in the one or more images [Xiao discloses an encoder that encodes an image to attribute-relevant parts, which act as data values indicating individual attributes, as stated on pg. 1, Section 1: "In DNA-GAN, an encoder is used to encode an image to the attribute-relevant part and the attribute-irrelevant part, where different pieces in the attribute-relevant part encode information of different attributes, and the attribute-irrelevant part encodes other information. For example, given a facial image, we are trying to obtain a latent representation that each individual part controls different attributes, such as hairstyles, genders, expressions and so on"].

Regarding Claim 11: Xiao, Badr and Karras teach "the system of claim 8" as seen above. Karras further teaches: "wherein the synthesis network comprises one or more upscaling layers to generate the one or more images [The instant specification discloses Figure 3 to include the upscaling layers. Karras discloses an identical figure to the specification, from which it can safely be assumed that Karras teaches a synthesis network that includes one or more upscaling layers, as disclosed on pg. 2, Figure 1], the upscaling layers having an input size that is less than an output size [The instant specification discloses Figure 3 to include the upscaling layers. Karras discloses an identical figure to the specification. Before the upscaling layer, the synthesis network starts with an input of size 4x4, then ends with an output of size 8x8, meaning the input size is smaller than the output size, as disclosed on pg. 2, Figure 1] and a number of upscaling layers determined based, at least in part, on dimensions of the one or more images [Karras discloses a synthesis network consisting of 18 layers, two layers for each resolution of the image, meaning the number of upscaling layers used depends on at least the resolution (i.e. dimensions) of the image, as stated in pg. 2, Figure 1]." The system of Xiao and Badr, the teachings of Karras, and the instant application are analogous art because they are all directed to generating images with location attributes from an input image using a generative adversarial network. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combination of Xiao and Badr to include the teachings of Karras for at least the same reasons as discussed above in claim 7.
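The "number of upscaling layers follows the output dimensions" reading applied in claims 11 and 12 can be sketched as follows: starting from a learned 4x4 constant input block, each block doubles the resolution, so a 1024x1024 output needs log2(1024/4) = 8 doublings (Karras's 18 layers correspond to two layers at each resolution from 4x4 to 1024x1024). Channel counts and module choices here are illustrative assumptions:

    import math
    import torch
    import torch.nn as nn

    def build_synthesis(target_res, base_res=4, ch=64):
        n_up = int(math.log2(target_res // base_res))   # block count set by image dimensions
        blocks = []
        for _ in range(n_up):
            blocks += [nn.Upsample(scale_factor=2),      # input size < output size
                       nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU()]
        return nn.Sequential(*blocks), n_up

    const_input = torch.randn(1, 64, 4, 4)   # learned constant input block (cf. Karras's 4x4x512)
    net, n_up = build_synthesis(64)          # for a 1024x1024 output this would give 8 blocks
    print(n_up, net(const_input).shape)      # 4 blocks -> torch.Size([1, 64, 64, 64])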
Regarding Claim 12: Xiao, Badr and Karras teach "the system of claim 11" as seen above. Karras further teaches: "wherein the synthesis network comprises an input block to replace a subset of the one or more upscaling layers [The specification discloses an input block 336 in paragraph 85. Figure 3 of the Drawings shows a Const 4x4x512 block. Karras discloses a synthesis network with a 4x4x512 constant tensor on pg. 2, Figure 1, and Section 2.1, Quality of generated images: "We therefore simplify the architecture by removing the traditional input layer and starting the image synthesis from a learned 4x4x512 constant tensor (D)"]." The system of Xiao and Badr, the teachings of Karras, and the instant application are analogous art because they are all directed to generating images with location attributes from an input image using a generative adversarial network. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combination of Xiao and Badr to include the teachings of Karras for at least the same reasons as discussed above in claim 7.

Regarding Claim 13: The combination of Xiao, Badr and Karras teaches "the system of claim 7" as seen above. Xiao further teaches: "wherein the one or more neural networks are trained based, at least in part, on training values obtained in conjunction with a discriminator neural network [Xiao discloses that the neural network uses a discriminator that outputs a number based on how realistic a generated image is, which is then used by the generator to create more realistic images, as stated on pg. 4, Section 3.2: "The discriminator takes the generated image and the i-th element of its label as inputs, and outputs a number which indicates how realistic the input image is. The larger the number is, the more realistic the image is"]; the discriminator neural network indicating whether the one or more images are generated by the one or more neural networks [Xiao discloses a discriminator that indicates how realistic a generated image from a generator is, as stated on pg. 4, Section 3.2: "The discriminator takes the generated image and the i-th element of its label as inputs, and outputs a number which indicates how realistic the input image is. The larger the number is, the more realistic the image is"]."

Regarding Claim 14: Xiao teaches: "A non-transitory machine-readable medium having stored thereon a set of instructions, which if performed by one or more processors, cause the one or more processors to at least [Xiao discloses using a processor for attribute-aware image generation on pg. 6: "The following results are obtained using the official code and pre-trained celebA model provided by the author." Xiao also provides a GitHub link to the code for the model, which can be inferred as using some computer system to run the code]: update one or more parameters of one or more neural networks to generate one or more images based, at least in part, on loss values computed based, at least in part, on one or more differences between encodings of attributes in images generated by the one or more neural networks and encodings of attributes indicated to be included in the images generated by the one or more neural networks [Xiao discloses reconstruction loss, which is used to measure the difference between the original input (which corresponds to the encodings of attributes indicated to be included) and the reconstructed output (which corresponds to the encodings of attributes in images generated). This reconstruction loss is used to iteratively train a model (which corresponds to the updating of parameters) to generate images as close as possible to the original one. This can be seen at p. 2, first paragraph: "With the help of the adversarial discriminator loss and the reconstruction loss, DNA-GAN can reconstruct the input images and generate new images with new attributes"; also at p. 3, Figure 1; at p. 4, Section 3.2 [reproduced equation image omitted]; and at p. 5, Section 3.4 [reproduced equation image omitted]. This is also evidenced by Badr at p. 1: "4- Reconstruction Loss: This is the method that measures how well the decoder is performing and how close the output is to the original input. The training then involves using back propagation in order to minimize the network's reconstruction loss"]." It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Xiao with the above teachings of Badr by generating images using a trained neural network by updating its parameters iteratively, as taught by Xiao, based on a reconstruction loss to measure how close the output image is to the original image, as taught by Badr. The modification would have been obvious because one of ordinary skill in the art would be motivated to reconstruct the data back from the reduced encoded representation to a representation that is as close to the original input as possible (see Badr at [Background]: "Autoencoder is an unsupervised artificial neural network that learns how to efficiently compress and encode data then learns how to reconstruct the data back from the reduced encoded representation to a representation that is as close to the original input as possible"). In addition, even though Xiao implicitly teaches one or more processors, Karras teaches it [Karras discloses using an NVIDIA DGX-1 with 8 Tesla V100 GPUs on pg. 9, Section C: "Our training time is approximately one week on an NVIDIA DGX-1 with 8 Tesla V100 GPUs"]. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combination of Xiao and Badr with the teachings of Karras by using one or more neural networks to generate one or more images based on a reconstruction loss to measure how close the output image is to the original image, as taught by Xiao and Badr, wherein the neural networks are used by one or more processors, as taught by Karras. One would be motivated to do so to improve the quality of the generated images, as disclosed by Karras (Karras, pg. 2, Section 2.1, Quality of generated images: "Before studying the properties of our generator, we demonstrate experimentally that the redesign does not compromise image quality but, in fact, improves it considerably").

Regarding Claim 15: Xiao, Badr and Karras teach "the non-transitory machine-readable medium of claim 14" as shown above. Xiao further teaches: "a latent code is generated based, at least in part, on one or more input images [Xiao discloses an encoder that maps images into latent disentangled representations on pg. 3, Section 3.1: "The encoder maps the real-world images A and B into two latent disentangled representations"]; a factor code indicates each of the attributes to be generated in the one or more images [Xiao discloses an encoder that encodes an image into attribute-relevant and attribute-irrelevant parts, where the attribute-relevant parts encode information on different attributes, on pg. 1, Section 1: "In DNA-GAN, an encoder is used to encode an image to the attribute-relevant part and the attribute-irrelevant part, where different pieces in the attribute-relevant part encode information of different attributes, and the attribute-irrelevant part encodes other information"] to generate a style based, at least in part, on the factor code combined with the latent code [Xiao discloses interpolating attribute subspaces based on disentangled encodings of images and different attributes obtained from the images on pg. 8: "Since different attributes are encoded in different DNA pieces in our latent representations, we are able to interpolate the attribute subspaces by linear combination of disentangled encodings"]." However, the combination of Xiao and Badr does not appear to teach a mapping network and the following limitation: "and the one or more neural networks comprise a synthesis network to generate each of the one or more images based, at least in part, on the latent code and the style." Karras teaches the following: "the one or more neural networks comprise a mapping network to generate a style based, at least in part, on the factor code combined with the latent code [Karras discloses a mapping network that creates a style y from a latent code z]; and the one or more neural networks comprise a synthesis network to generate each of the one or more images based, at least in part, on the latent code and the style [Karras discloses a synthesis network to generate novel images based on a collection of styles on pg. 5-6, Section 3.2: "We can view the mapping network and affine transformations as a way to draw samples for each style from a learned distribution, and the synthesis network as a way to generate a novel image based on a collection of styles"]." The system of Xiao and Badr, the teachings of Karras, and the instant application are analogous art because they are all directed to generating images with location attributes from an input image using a generative adversarial network. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combination of Xiao and Badr to include the teachings of Karras for at least the same reasons as discussed above in claim 14.

Regarding Claim 16: Xiao, Badr and Karras teach "the non-transitory machine-readable medium of claim 15" as seen above. Xiao further teaches: "wherein the one or more images comprises the one or more input images combined with each of the attributes specified in the factor code [Xiao discloses generated images that include an original image with manipulated illumination factors on pg. 7, Figure 2: "Manipulating illumination factors on the Multi-PIE dataset. From left to right, the six images in a row are: original images A with the light illumination and B with the dark illumination, newly generated images A2 and B2 by swapping the illumination-relevant piece in disentangled representations, and reconstructed images A1 and B1"], the attributes specified in the factor code changing a plurality of features associated with the one or more input images [Xiao discloses generating lighter and darker images by manipulating an illumination-relevant piece of the images on pg. 7, Figure 2, cited above]."

Regarding Claim 17: Xiao, Badr and Karras teach "the non-transitory machine-readable medium of claim 15" as seen above. Xiao further teaches: "wherein the factor code comprises a set of binary data values and each of the set of binary data values indicates an individual attribute of the attributes [Xiao provides the model through a GitHub webpage, meaning the model was run on a computer, which stores binary values by default. Xiao also discloses an encoder that encodes an image to attribute-relevant and attribute-irrelevant parts, the attribute-relevant part holding information on different attributes, on pg. 1, Section 1: "In DNA-GAN, an encoder is used to encode an image to the attribute-relevant part and the attribute-irrelevant part, where different pieces in the attribute-relevant part encode information of different attributes, and the attribute-irrelevant part encodes other information"]."

Regarding Claim 18: Xiao, Badr and Karras teach "the non-transitory machine-readable medium of claim 15" as seen above. Karras further teaches: "wherein the style comprises information about the attributes to be added to the one or more input images [Karras discloses styles that affect certain aspects of an image on pg. 3, Section 3: "The effects of each style are localized in the network, i.e., modifying a specific subset of the styles can be expected to affect only certain aspects of the image"]." The system of Xiao and Badr, the teachings of Karras, and the instant application are analogous art because they are all directed to generating images with location attributes from an input image using a generative adversarial network. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combination of Xiao and Badr to include the teachings of Karras for at least the same reasons as discussed above in claim 15.

Regarding Claim 19: Xiao, Badr and Karras teach "the non-transitory machine-readable medium of claim 15" as seen above. Karras further teaches: "wherein the synthesis network comprises a set of upscaling layers to generate the one or more images [The instant specification discloses Figure 3 to include the upscaling layers. Karras discloses an identical figure to the specification; therefore Karras teaches a synthesis network that includes one or more upscaling layers, as disclosed on pg. 2, Figure 1], a number of upscaling layers based, at least in part, on dimensions of the one or more images [Karras discloses a synthesis network consisting of 18 layers, two layers for each resolution of the image, meaning the number of upscaling layers used depends on at least the resolution (i.e. dimensions) of the image, as stated in pg. 2, Figure 1]." The system of Xiao and Badr, the teachings of Karras, and the instant application are analogous art because they are all directed to generating images with location attributes from an input image using a generative adversarial network. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combination of Xiao and Badr to include the teachings of Karras for at least the same reasons as discussed above in claim 15.

Regarding Claim 20: The combination of Xiao, Badr and Karras teaches "the non-transitory machine-readable medium of claim 14" as seen above. Xiao further teaches: "wherein a training framework used to train the one or more neural networks comprises a generator neural network and a discriminator neural network [Xiao discloses a GAN architecture which uses a decoder that generates images (i.e. generator network) and a discriminator on pg. 3, Figure 1]."
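A minimal sketch of the generator/discriminator training framework mapped onto claims 13, 20 and 28: the discriminator is trained to indicate whether images are real or generated, and the generator is trained against it. Toy fully connected networks stand in for the architectures in the references:

    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(16, 784), nn.Sigmoid())   # generator (decoder)
    D = nn.Sequential(nn.Linear(784, 1))                  # discriminator: realism score
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    real = torch.rand(32, 784)
    fake = G(torch.randn(32, 16))

    # discriminator step: indicate whether images are real or generated
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # generator step: update the generator to fool the discriminator
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()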
Regarding Claim 24: The combination of Xiao and Badr teaches "the method of claim 22" as seen above. Xiao further teaches: "identifying one or more objects in each of the one or more input images [Xiao discloses an encoder that encodes an image into attribute-relevant parts, which include information on different attributes in the image, on pg. 1, Section 1: "In DNA-GAN, an encoder is used to encode an image to the attribute-relevant part and the attribute-irrelevant part, where different pieces in the attribute-relevant part encode information of different attributes, and the attribute-irrelevant part encodes other information"]." The combination does not appear to teach: "wherein the one or more neural networks comprise a mapping network, and the mapping network generates the style by combining the factor code and the latent code." Karras, however, teaches: "wherein the one or more neural networks comprise a mapping network [Karras discloses the model including a mapping and a synthesis network on pg. 2, Figure 1] and the mapping network generates the style by combining the factor code and the latent code [Karras discloses a mapping network generating a style given a latent code and a vector w, in which learned affine transformations specialize w to a style, on pg. 2, Section 2: "Given a latent code z in the input latent space Z, a non-linear mapping network f: Z -> W first produces w ∈ W… Learned affine transformations then specialize w to styles y = (ys; yb) that control adaptive instance normalization (AdaIN) operations after each convolution layer of the synthesis network g… Comparing our approach to style transfer, we compute the spatially invariant style y from vector w instead of an example image"]." The combination of Xiao, Badr and Karras, and the instant application are analogous art because they are all directed to generating images with attributes from an input image using a generative adversarial network. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combination of Xiao and Badr to include the teachings of Karras for at least the same reasons as discussed above in claim 21.

Regarding Claim 25: Xiao, Badr and Karras teach "the method of claim 24" as seen above. Karras further teaches: "wherein the mapping network comprises one or more fully connected layers [Karras discloses a mapping network that contains eight fully connected layers on pg. 2, Figure 1b]." The combination of Xiao, Badr and Karras, and the instant application are analogous art because they are all directed to generating images with attributes from an input image using a generative adversarial network. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combination of Xiao and Badr to include the teachings of Karras for at least the same reasons as discussed above in claim 21.

Regarding Claim 26: Xiao and Badr teach "the method of claim 22" as seen above. However, they do not teach: "wherein the one or more neural networks comprise a synthesis network and the synthesis network generates the one or more images such that the one or more images comprise the one or more attributes applied to one or more objects indicated by the factor code." Karras, however, teaches: "wherein the one or more neural networks comprise a synthesis network [Karras discloses a generative adversarial network that comprises a synthesis network on pg. 2, Figure 1b] and the synthesis network generates the one or more images such that the one or more images comprise the one or more attributes applied to one or more objects indicated by the factor code [Karras discloses a synthesis network that generates images based on a collection of styles drawn from learned distributions via affine transformations, on pg. 3, Section 3: "We can view the mapping network and affine transformations as a way to draw samples for each style from a learned distribution, and the synthesis network as a way to generate a novel image based on a collection of styles. The effects of each style are localized in the network, i.e., modifying a specific subset of the styles can be expected to affect only certain aspects of the image"]." The system of Xiao and Badr, the teachings of Karras, and the instant application are analogous art because they are all directed to generating images with attributes from an input image using a generative adversarial network. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combination of Xiao and Badr to include the teachings of Karras for at least the same reasons as discussed above in claim 21.

Regarding Claim 27: Xiao, Badr and Karras teach "the method of claim 26" as seen above. Karras further teaches: "wherein the synthesis network comprises an input block and one or more upscaling blocks [The instant specification discloses Figure 3 to include the upscaling layers. Karras discloses an identical figure to the specification; therefore Karras teaches a synthesis network that includes one or more upscaling layers, as disclosed on pg. 2, Figure 1], a number of upscaling blocks based, at least in part, on dimensions of the one or more images [Karras discloses a synthesis network consisting of 18 layers, two layers for each resolution of the image, meaning the number of upscaling layers used depends on at least the resolution (i.e. dimensions) of the image, as stated in pg. 2, Figure 1]." The combination of Xiao, Badr and Karras, and the instant application are analogous art because they are all directed to generating images with attributes from an input image using a generative adversarial network. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combination of Xiao and Badr to include the teachings of Karras for at least the same reasons as discussed above in claim 21.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Xiao in view of Badr and Karras, and further in view of Svoboda (Svoboda et al., Two-Stage Peer-Regularized Feature Recombination for Arbitrary Image Style Transfer, NNAISENSE, Switzerland, arXiv:1906.02913v3, 2020, 1-15; hereinafter "Svoboda").

Regarding Claim 10: Xiao, Badr and Karras teach "the system of claim 8" as seen above. Karras further teaches: "wherein the mapping network comprises one or more fully connected layers [Karras discloses a mapping network that consists of 8 fully connected layers on pg. 2, Figure 1b]." However, Xiao, Badr and Karras do not appear to teach: "and the factor code and the encoding of the one or more input images are combined as input to the one or more fully connected layers." Svoboda, however, teaches: "and the factor code and the encoding of the one or more input images are combined as input to the one or more fully connected layers [Svoboda discloses an encoder that produces latent representations of input images, which hold a content part and a style part, and an auxiliary decoder that uses the output of the encoder as input into its convolutional layers to reconstruct the original image, on pg. 4, Col. 1-2: "The encoder used to produce latent representation of all input images [encoding of one or more input images] is composed of several strided convolutional layers for downsampling followed by multiple ResNet blocks. The latent code z [factor code] is composed by two parts: the content part, (z)C, which holds information about the image content (e.g. objects, position, scale, etc.), and the style part, (z)S, which encodes the style that the content is presented in (e.g. level of detail, shapes, etc.)… The Auxiliary decoder reconstructs an image from its latent representation [factor code and one or more input images are combined as input] and is used only during training to train the encoder module. It is composed of several ResNet blocks followed by fractionally-strided convolutional layers [input to one or more layers] to reconstruct the original image."]." The combined system of Xiao, Badr and Karras, the teachings of Svoboda, and the instant application are analogous art because they are all directed to generating images with location attributes from an input image using a generative adversarial network. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined system of Xiao, Badr and Karras with the teachings of Svoboda. One would be motivated to do so to "directly enforce separation among different styles, which has been experimentally shown to greatly reduce the amount of style dependent information retained in the decoder", as disclosed on pg. 3, Section 3.
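The Svoboda passage quoted for claim 10 describes a latent representation split into a content part and a style part that is fed back through decoder layers; a schematic PyTorch rendering of that idea (the shapes and the 24/8 split are arbitrary assumptions, not Svoboda's actual network):

    import torch
    import torch.nn as nn

    encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 32))
    decoder_fc = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 784))

    z = encoder(torch.rand(1, 1, 28, 28))         # latent representation of an input image
    z_content, z_style = z[:, :24], z[:, 24:]     # (z)C content part, (z)S style part
    recon = decoder_fc(torch.cat([z_content, z_style], dim=1))  # combined as input to the layers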
Hui further teaches: wherein training data for the one or more neural networks comprises image, and wherein one or more labels of the image comprises one or more locations of the attributes of one or more objects in the image (Hui discloses at [0002]: “The invention relates to an image processing method in machine learning, and in particular to an image synthesis method based on attribute migration”. Also at [0022]: “When synthesizing an image with a certain attribute, this method uses sample pairs to learn the feature information of an image without the attribute feature (abbreviated as a normal image) and an attribute image with the attribute, and combines the attribute features of the attribute image with the normal image to generate a new attribute image” and “when locating the attribute region, this method can control the proportion of the migration attribute region size by adjusting the hyperparameter λ (0<=λ<=1) in the network model to achieve the migration of multi-scale attribute region features”. Further [0028]: “this method uses a dual-line generator structure and combines it with the guided learning of the GAN generative adversarial network to effectively separate and reorganize the attribute features and non-attribute features in the image, thereby guiding the attention network to learn the location of the attribute area”]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Xiao, Badr and Karras with the above teachings of Hui by generating images using a trained neural network, as taught by Xiao, Badr and Karras, based on location of the attributes, as taught by Hui. The modification would have been obvious because one of ordinary skill in the art would be motivated to improve the quality of attribute image generation (see Hui at [0022]: “effectively locates the position of the attribute and reduces the possibility of failure in attribute area positioning and migration failure” and “this method uses the AE-GAN (AutoEncoder GAN) model structure when generating attribute images to further improve the quality of attribute image generation”). Regarding Claim 31: Xiao, Badr and Karras teach “the one or more processors of claim 1” as seen above. However, they fail to teach wherein one or more locations are to be used to cause the one or more neural networks to be trained to generate images comprising one or more depictions of the attributes. Hui further teaches: wherein one or more locations are to be used to cause the one or more neural networks to be trained to generate images comprising one or more depictions of the attributes (Hui discloses at [0002]: “The invention relates to an image processing method in machine learning, and in particular to an image synthesis method based on attribute migration”. Also at [0022]: “When synthesizing an image with a certain attribute, this method uses sample pairs to learn the feature information of an image without the attribute feature (abbreviated as a normal image) and an attribute image with the attribute, and combines the attribute features of the attribute image with the normal image to generate a new attribute image” and “when locating the attribute region, this method can control the proportion of the migration attribute region size by adjusting the hyperparameter λ (0<=λ<=1) in the network model to achieve the migration of multi-scale attribute region features”. 
Regarding Claim 31: Xiao, Badr and Karras teach “the one or more processors of claim 1” as seen above. However, they fail to teach: wherein one or more locations are to be used to cause the one or more neural networks to be trained to generate images comprising one or more depictions of the attributes.

Hui further teaches: wherein one or more locations are to be used to cause the one or more neural networks to be trained to generate images comprising one or more depictions of the attributes [Hui discloses at [0002]: “The invention relates to an image processing method in machine learning, and in particular to an image synthesis method based on attribute migration”. Also at [0022]: “When synthesizing an image with a certain attribute, this method uses sample pairs to learn the feature information of an image without the attribute feature (abbreviated as a normal image) and an attribute image with the attribute, and combines the attribute features of the attribute image with the normal image to generate a new attribute image” and “when locating the attribute region, this method can control the proportion of the migration attribute region size by adjusting the hyperparameter λ (0<=λ<=1) in the network model to achieve the migration of multi-scale attribute region features”. Further at [0028]: “this method uses a dual-line generator structure and combines it with the guided learning of the GAN generative adversarial network to effectively separate and reorganize the attribute features and non-attribute features in the image, thereby guiding the attention network to learn the location of the attribute area”].

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Xiao, Badr and Karras with the above teachings of Hui by generating images using a trained neural network, as taught by Xiao, Badr and Karras, based on the location of the attributes, as taught by Hui. The modification would have been obvious because one of ordinary skill in the art would be motivated to improve the quality of attribute image generation (see Hui at [0022]: “effectively locates the position of the attribute and reduces the possibility of failure in attribute area positioning and migration failure” and “this method uses the AE-GAN (AutoEncoder GAN) model structure when generating attribute images to further improve the quality of attribute image generation”).

Response to Arguments

The Applicant’s arguments regarding the rejection of the above-mentioned claims have been fully considered. In reference to Applicant’s arguments about the 35 USC 103 rejections of the independent and dependent claims, Examiner responds as follows.

Applicant’s arguments that Xiao fails to teach the amended limitations have been considered, but they are not persuasive. Under the Broadest Reasonable Interpretation of the amended claim limitation, Examiner understands that Xiao still teaches this limitation. The amended limitation is interpreted as follows: “update one or more parameters of one or more neural networks (interpreted as re-training or iterative training of a neural network model) to generate one or more images based, at least in part, on loss values (interpreted as a common loss function in neural networks, i.e., a method to quantify the difference between the output of a neural network and the ground truth) computed based, at least in part, on one or more differences between encodings of attributes in images generated by the one or more neural networks (interpreted as the output of the model) and encodings of attributes indicated to be included in the images generated by the one or more neural networks (interpreted as the ground truth, i.e., what the output is supposed or indicated to be; in this scenario, the attributes in the images)”. In view of this interpretation, Examiner concludes that Xiao teaches this limitation, as can be seen, for example, at p. 2, first paragraph; p. 3, Figure 1; p. 4, Section 3.2; and p. 5, Section 3.4 (see the updated rejection above for more details of the mapping). Analogous independent claims 1, 14 and 21 are also rejected on this rationale. In addition, Examiner brought in a secondary reference (Badr) to provide evidence for the interpretation taken and to reject the claims.
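For illustration, the limitation as interpreted above might be sketched as a training step of the following form. The attribute encoder, generator, and use of mean-squared error are assumptions made for this sketch, not a mapping of Xiao's actual training objective.

import torch
import torch.nn.functional as F

def attribute_encoding_loss(attr_encoder, generated_images, reference_images):
    # "Encodings of attributes in images generated by the neural networks"
    # (the model's output).
    enc_generated = attr_encoder(generated_images)
    # "Encodings of attributes indicated to be included" (the ground truth).
    enc_indicated = attr_encoder(reference_images)
    # Loss value quantifying the difference between output and ground truth.
    return F.mse_loss(enc_generated, enc_indicated)

# Updating one or more parameters based, at least in part, on the loss values
# (attr_encoder E, generator G, latent z, and reference x_ref are assumed):
#   loss = attribute_encoding_loss(E, G(z), x_ref)
#   loss.backward(); optimizer.step()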
In regard to the arguments about the dependent claims, these arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. In regard to the arguments concerning the Svoboda reference for dependent claim 10, Examiner notes that this additional prior art is brought in only to cure the specific deficiencies of that dependent claim.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LUIS A SITIRICHE, whose telephone number is (571) 270-1316. The examiner can normally be reached M-F 9am-6pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Yi, can be reached at (571) 270-7519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LUIS A SITIRICHE/
Primary Examiner, Art Unit 2126

Prosecution Timeline

Jul 09, 2020 - Application Filed
Dec 19, 2022 - Non-Final Rejection (§103)
May 25, 2023 - Interview Requested
Jun 29, 2023 - Response Filed
Jul 12, 2023 - Final Rejection (§103)
Jan 17, 2024 - Notice of Allowance
Aug 19, 2024 - Request for Continued Examination
Aug 22, 2024 - Response after Non-Final Action
Feb 28, 2025 - Non-Final Rejection (§103)
Mar 03, 2025 - Response after Non-Final Action
Aug 05, 2025 - Response Filed
Oct 24, 2025 - Final Rejection (§103)
Nov 23, 2025 - Interview Requested
Dec 08, 2025 - Examiner Interview Summary
Dec 08, 2025 - Applicant Interview (Telephonic)

Precedent Cases

Applications granted by the same examiner on similar technology

Patent 12585947 - MODIFYING COMPUTATIONAL GRAPHS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579476 - ADAPTIVE LEARNING FOR IMAGE CLASSIFICATION (granted Mar 17, 2026; 2y 5m to grant)
Patent 12579445 - MODELS FOR PREDICTING RESISTANCE TRENDS (granted Mar 17, 2026; 2y 5m to grant)
Patent 12572791 - METHOD, DEVICE AND COMPUTER PROGRAM FOR PREDICTING A SUITABLE CONFIGURATION OF A MACHINE LEARNING SYSTEM FOR A TRAINING DATA SET (granted Mar 10, 2026; 2y 5m to grant)
Patent 12572857 - Adaptive Probabilistic Latent Semantic Analysis System For Automated Document Coding And Review In Electronic Discovery (granted Mar 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 78%
Grant Probability With Interview: 99% (+22.1%)
Median Time to Grant: 3y 7m
PTA Risk: High
Based on 468 resolved cases by this examiner. Grant probability derived from career allow rate.
