Prosecution Insights
Last updated: April 19, 2026
Application No. 18/306,687

AI GAN ENABLED MEDIA COMPRESSION FOR OPTIMIZED RESOURCE UTILIZATION

Non-Final OA: §101, §103
Filed: Apr 25, 2023
Examiner: SPRATT, BEAU D
Art Unit: 2143
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 1 (Non-Final)

Grant Probability: 79% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 79% (above average; 342 granted / 432 resolved; +24.2% vs TC avg)
Interview Lift: +26.6% (strong) across resolved cases with an interview
Typical Timeline: 3y 1m average prosecution
Career History: 469 total applications across all art units; 37 currently pending

Statute-Specific Performance

§101: 12.2% (-27.8% vs TC avg)
§103: 63.7% (+23.7% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§112: 5.4% (-34.6% vs TC avg)

Tech Center averages are estimates • Based on career data from 432 resolved cases
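As a sanity check, the headline allow rate above can be recomputed from the underlying counts. A minimal sketch (the assumption that the +24.2% delta is expressed in percentage points is mine, not the dashboard's documented convention):

```python
def allow_rate(granted, resolved):
    """Career allow rate as a percentage of resolved cases."""
    if resolved <= 0:
        raise ValueError("resolved case count must be positive")
    return 100.0 * granted / resolved

rate = allow_rate(342, 432)   # counts reported for this examiner
print(round(rate))            # 79, matching the dashboard's 79%
print(round(rate - 24.2, 1))  # 55.0: implied TC average, assuming the
                              # delta is in percentage points
```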

Office Action

Rejections: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are presented in the case.

Information Disclosure Statement

The information disclosure statement submitted on 04/25/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Objections

Claims 3, 10 and 17 are objected to because of the following informalities: Claim 3, line 2 recites the phrase "adding the updated one or media assets", which should be "adding the updated one or more media assets". Appropriate correction is required for the informalities above and wherever else they may occur.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis of the claims follows the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50 ("2019 PEG"). Claims 1, 8 and 15 are analyzed as follows.

Step 1: The claims are directed to a method, a system and a computer-readable medium, respectively, and thus fall within the statutory categories.

Step 2A, Prong 1: The claims recite the abstract idea limitations of "deriving a relevance score for each identified object based on the historical data and the identified usage context; and creating a training data set for a generative adversarial network (GAN) generator including one or more images of a first set of one or more objects that exceed a relevance score threshold". These limitations are mental processes, i.e., concepts performed in the human mind (including an observation, evaluation, judgment or opinion); see MPEP § 2106.04(a)(2), subsection III. Comparing collected information to a predefined threshold is an act of evaluating information that can be practically performed in the human mind, and this step therefore falls in the "mental process" grouping. The specification likewise provides example operations such as scoring and thresholds (USPGPUB ¶45). Other portions of the claims, such as receiving media assets, applying a GAN, determining with a discriminator and generating updated media assets, are too generic or high-level to be listed as judicial exceptions given the available descriptions and MPEP comparisons.

Step 2A, Prong 2: The judicial exceptions recited in these claims are not integrated into a practical application. Merely invoking "a GAN", "a processor" or "memory" does not yield eligibility; claims 1, 8 and 15 remain directed to mental concepts and are not specific to a practical application. The additional elements are processors and instructions, which do not include specialized hardware. See MPEP § 2106.05(f). Claims 1, 8 and 15 do not recite a particular field of use, and even doing so may not be sufficient to overcome the abstract idea rejection: merely applying a model to a field or to data, without an advancement in that field or new hardware, is ineligible. See MPEP § 2106.05(h).

Step 2B: The claims do not contain significantly more than their judicial exceptions. The processors, memory and other hardware are in their standard forms in the field; these additional elements are well-understood, routine and conventional activity, see MPEP § 2106.05(d)(II). The claims lack any particular "how", i.e., an algorithm, that solves a problem in a field in a novel way.
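To make the character of the threshold limitation concrete, here is a minimal sketch of comparing relevance scores to a predefined threshold to build a training set, the compare-to-threshold evaluation the Office Action characterizes as practically performable in the human mind (object names and scores are hypothetical illustrations, not the application's disclosure):

```python
def build_training_set(scored_objects, threshold):
    """Select objects whose relevance score exceeds the threshold.

    Mirrors the claimed step of creating a training data set from
    objects "that exceed a relevance score threshold".
    """
    return [obj for obj, score in scored_objects if score > threshold]

# Hypothetical scored objects (illustrative only).
scored = [("face", 0.92), ("background", 0.18), ("logo", 0.75)]
print(build_training_set(scored, threshold=0.5))  # ['face', 'logo']
```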
The claims would require more specificity, reciting processes that are incapable of simple mathematics or mental processes, or that use more substantial structure than conventional devices, such as non-textbook implementations.

Regarding claims 2-7, 9-14 and 16-20: these claims merely narrow the previously recited abstract idea limitations with more abstract concepts and/or routine fundamental processes. For the reasons described above with respect to claims 1, 8 and 15, the judicial exception is not meaningfully integrated into a practical application, nor is it significantly more than the abstract idea. The Step 1, Step 2A Prong 1 and Step 2A Prong 2 analyses remain the same as for the independent claims above. The specification describes further practical application concepts, but none appear in claims 2-7, 9-14 and 16-20. With respect to Step 2B, these claims disclose limitations similar to those described for the independent claims above and do not provide anything significantly more than mathematical or mental concepts.

Claims 2-7, 9-14 and 16-20 recite the additional elements of:

"in response to determining the GAN discriminator is not able to identify each object in the first set modified by the GAN generator as real, iterating, until the GAN discriminator is able to identify each object in the first set as real: applying, by the GAN generator, one or more additional modifications to each object in the first set not identified as real based on the relevance score of each object";

"adding the updated one or media assets to the knowledge corpus";

"training the GAN generator by feeding the created training data set into the GAN generator, wherein at least one object that does not exceed the relevance score threshold is removed from the created training data set";

"wherein applying the one or more modifications further comprises: executing one or more compression techniques on each object in the first set";

"wherein a degree of compression applied to each object in the first set is inversely proportional to the relevance score of each object in the first set, wherein an object having a lower relevance score is more compressed than an object having a higher relevance score"; and

"wherein at least one compression technique includes adapting a pixel density of at least one object in the first set consistent with the relevance score of the at least one object".

These elements are more abstract concepts, generic applications to a field of use, or well-understood, routine, conventional activity (see MPEP § 2106.05(d)), and cannot simply be appended to qualify as significantly more or as a practical application. What type of application, or what structure of components beyond generic machine learning, is claimed remains unspecified. Therefore claims 2-7, 9-14 and 16-20 also recite abstract ideas that are not integrated into a practical application and do not amount to significantly more than the judicial exception, and are rejected under 35 U.S.C. § 101.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4-8, 11-15 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over BV et al. (US 20230326088 A1, hereinafter BV) in view of Mas Montserrat et al. (US 20210124995 A1, hereinafter Mas Montserrat).

As to independent claim 1, BV teaches a computer-based method of compressing media utilizing a generative adversarial network (GAN), the method comprising: [GAN-based image compression, ¶22]

receiving one or more media assets and historical data from a knowledge corpus in accordance with an identified usage context; [receives images (media) and importance data (historical), ¶29, ¶33: "input 102 that includes an image to be compressed, user-provided importance data, and a target bitrate"]

applying, by the GAN generator, one or more modifications to each object in the first set based on the relevance score of each object; [values (scores) in regions are used to control compression and allocate bits (modifications), ¶29: "each region has been annotated with an importance value that represents the relative importance of each region to the user. During compression, the importance values are used to allocate more bits to more important regions and fewer bits to less important regions."]

determining whether a discriminator of the GAN is able to identify each object in the first set modified by the GAN generator; and [discriminator, ¶49]

in response to determining the GAN discriminator is able to identify each object in the first set modified by the GAN generator as real: [discriminator predicts real, ¶49: "The discriminator then predicts which of the images is real or fake. This prediction is used by one or more loss functions 506 to generate an error that is propagated back to the networks for training (as indicated by dotted lines in FIG. 5)."]

generating, by the GAN generator, one or more updated media assets including a second set of one or more objects that are identified by the GAN discriminator as real. [a "real" prediction causes the loss/error to be propagated back for compression or bit allocation (modifications for updated media), ¶49-50: "loss function ensures bits are allocated optimally in the importance map while staying within the limits of user-provided bit budget (e.g., to achieve the target bitrate)."]

BV does not specifically teach identifying, by a convolutional neural network (CNN), one or more objects in the one or more media assets; deriving a relevance score for each identified object based on the historical data and the identified usage context; and creating a training data set for a GAN generator including one or more images of a first set of one or more objects that exceed a relevance score threshold.

However, Mas Montserrat teaches identifying, by a convolutional neural network (CNN), one or more objects in the one or more media assets; [a CNN identifies symbols (objects) in an image, ¶16-¶18: "CNN 110 receives as input a plurality of images and produces as output a plurality of bounding boxes, where each bounding box is assigned a class that is associated with a symbol believed to be present in the portion of an image that is enclosed by the bounding box"]

deriving a relevance score for each identified object based on the historical data and the identified usage context; [generates a confidence score for each symbol (object) based on training data (past images) and context (logos within, ¶14), ¶24, ¶18: "CNN 110 also produces for each bounding box a confidence score which indicates a likelihood that the class assigned to the bounding box is correct (i.e., that the symbol associated with the class is depicted in the bounding box)"]

creating a training data set for a GAN generator including one or more images of a first set of one or more objects that exceed a relevance score threshold; [creates training data based on an object (symbol) score threshold, ¶25: "the unlabeled image is selected as a training image for training a system to recognize the symbol, when the confidence score is above a predefined threshold."]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the image analysis disclosed by BV by incorporating the identifying, by a convolutional neural network (CNN), one or more objects in the one or more media assets; deriving a relevance score for each identified object based on the historical data and the identified usage context; and creating a training data set for a GAN generator including one or more images of a first set of one or more objects that exceed a relevance score threshold disclosed by Mas Montserrat, because both techniques address the same field of machine learning, and incorporating Mas Montserrat into BV saves time in image recognition and reduces time spent training on unnecessary training data [Mas Montserrat ¶14-15].

As to dependent claim 4, the rejection of claim 1 is incorporated. BV and Mas Montserrat further teach training the GAN generator by feeding the created training data set into the GAN generator, wherein at least one object that does not exceed the relevance score threshold is removed from the created training data set. [Mas Montserrat discards when below the threshold, ¶25: "confidence score falls below the predefined threshold is discarded"]

As to dependent claim 5, the rejection of claim 1 is incorporated. BV and Mas Montserrat further teach executing one or more compression techniques on each object in the first set. [BV: areas in the image are assigned different bitrates (compression), ¶57]

As to dependent claim 6, the rejection of claim 5 is incorporated. BV and Mas Montserrat further teach wherein a degree of compression applied to each object in the first set is inversely proportional to the relevance score of each object in the first set, wherein an object having a lower relevance score is more compressed than an object having a higher relevance score. [BV ensures that parts of the image deemed more important retain higher quality during compression, aligning with the objective to allocate bits optimally as guided by user-provided importance values while staying within a target bitrate, ¶44, ¶29: "where each region has been annotated with an importance value that represents the relative importance of each region to the user. During compression, the importance values are used to allocate more bits to more important regions and fewer bits to less important regions", "the pixels of the region corresponding to the eyes have more details and therefore can be given a higher importance value than the relatively smooth pixels representing the person's cheeks."]

As to dependent claim 7, the rejection of claim 1 is incorporated. BV and Mas Montserrat further teach wherein at least one compression technique includes adapting a pixel density of at least one object in the first set consistent with the relevance score of the at least one object.
[BV: higher bitrates yield higher quality or pixel density, based on the importance values, ¶44, ¶29]

As to independent claim 8, BV teaches a computer system, the computer system comprising: [image processing system, ¶103]

one or more processors, one or more computer-readable memories, one or more computer-readable tangible storage medium, and program instructions stored on at least one of the one or more computer-readable tangible storage medium for execution by at least one of the one or more processors via at least one of the one or more computer-readable memories, wherein the computer system is capable of performing a method comprising: [memory with processors and instructions, ¶104-105]

receiving one or more media assets and historical data from a knowledge corpus in accordance with an identified usage context; [receives images (media) and importance data (historical), ¶29, ¶33: "input 102 that includes an image to be compressed, user-provided importance data, and a target bitrate"]

applying, by the GAN generator, one or more modifications to each object in the first set based on the relevance score of each object; [values (scores) in regions are used to control compression and allocate bits (modifications), ¶29: "each region has been annotated with an importance value that represents the relative importance of each region to the user. During compression, the importance values are used to allocate more bits to more important regions and fewer bits to less important regions."]

determining whether a discriminator of the GAN is able to identify each object in the first set modified by the GAN generator; and [discriminator, ¶49]

in response to determining the GAN discriminator is able to identify each object in the first set modified by the GAN generator as real: [discriminator predicts real, ¶49: "The discriminator then predicts which of the images is real or fake. This prediction is used by one or more loss functions 506 to generate an error that is propagated back to the networks for training (as indicated by dotted lines in FIG. 5)."]

generating, by the GAN generator, one or more updated media assets including a second set of one or more objects that are identified by the GAN discriminator as real. [a "real" prediction causes the loss/error to be propagated back for compression or bit allocation (modifications for updated media), ¶49-50: "loss function ensures bits are allocated optimally in the importance map while staying within the limits of user-provided bit budget (e.g., to achieve the target bitrate)."]

BV does not specifically teach identifying, by a convolutional neural network (CNN), one or more objects in the one or more media assets; deriving a relevance score for each identified object based on the historical data and the identified usage context; and creating a training data set for a GAN generator including one or more images of a first set of one or more objects that exceed a relevance score threshold.

However, Mas Montserrat teaches identifying, by a convolutional neural network (CNN), one or more objects in the one or more media assets; [a CNN identifies symbols (objects) in an image, ¶16-¶18: "CNN 110 receives as input a plurality of images and produces as output a plurality of bounding boxes, where each bounding box is assigned a class that is associated with a symbol believed to be present in the portion of an image that is enclosed by the bounding box"]

deriving a relevance score for each identified object based on the historical data and the identified usage context; [generates a confidence score for each symbol (object) based on training data (past images) and context (logos within, ¶14), ¶24, ¶18: "CNN 110 also produces for each bounding box a confidence score which indicates a likelihood that the class assigned to the bounding box is correct (i.e., that the symbol associated with the class is depicted in the bounding box)"]

creating a training data set for a GAN generator including one or more images of a first set of one or more objects that exceed a relevance score threshold; [creates training data based on an object (symbol) score threshold, ¶25: "the unlabeled image is selected as a training image for training a system to recognize the symbol, when the confidence score is above a predefined threshold."]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the image analysis disclosed by BV by incorporating the identifying, by a convolutional neural network (CNN), one or more objects in the one or more media assets; deriving a relevance score for each identified object based on the historical data and the identified usage context; and creating a training data set for a GAN generator including one or more images of a first set of one or more objects that exceed a relevance score threshold disclosed by Mas Montserrat, because both techniques address the same field of machine learning, and incorporating Mas Montserrat into BV saves time in image recognition and reduces time spent training on unnecessary training data [Mas Montserrat ¶14-15].

As to dependent claim 11, the rejection of claim 8 is incorporated. BV and Mas Montserrat further teach training the GAN generator by feeding the created training data set into the GAN generator, wherein at least one object that does not exceed the relevance score threshold is removed from the created training data set. [Mas Montserrat discards when below the threshold, ¶25: "confidence score falls below the predefined threshold is discarded"]

As to dependent claim 12, the rejection of claim 8 is incorporated. BV and Mas Montserrat further teach executing one or more compression techniques on each object in the first set. [BV: areas in the image are assigned different bitrates (compression), ¶57]

As to dependent claim 13, the rejection of claim 12 is incorporated. BV and Mas Montserrat further teach wherein a degree of compression applied to each object in the first set is inversely proportional to the relevance score of each object in the first set, wherein an object having a lower relevance score is more compressed than an object having a higher relevance score. [BV ensures that parts of the image deemed more important retain higher quality during compression, aligning with the objective to allocate bits optimally as guided by user-provided importance values while staying within a target bitrate, ¶44, ¶29: "where each region has been annotated with an importance value that represents the relative importance of each region to the user. During compression, the importance values are used to allocate more bits to more important regions and fewer bits to less important regions", "the pixels of the region corresponding to the eyes have more details and therefore can be given a higher importance value than the relatively smooth pixels representing the person's cheeks."]

As to dependent claim 14, the rejection of claim 12 is incorporated. BV and Mas Montserrat further teach wherein at least one compression technique includes adapting a pixel density of at least one object in the first set consistent with the relevance score of the at least one object.
[BV: higher bitrates yield higher quality or pixel density, based on the importance values, ¶44, ¶29]

As to independent claim 15, BV teaches a computer program product, the computer program product comprising: [memory, ¶104]

one or more computer-readable tangible storage medium and program instructions stored on at least one of the one or more computer-readable tangible storage medium, the program instructions executable by a processor capable of performing a method, the method comprising: [memory with processors and instructions, ¶104-105]

receiving one or more media assets and historical data from a knowledge corpus in accordance with an identified usage context; [receives images (media) and importance data (historical), ¶29, ¶33: "input 102 that includes an image to be compressed, user-provided importance data, and a target bitrate"]

applying, by the GAN generator, one or more modifications to each object in the first set based on the relevance score of each object; [values (scores) in regions are used to control compression and allocate bits (modifications), ¶29: "each region has been annotated with an importance value that represents the relative importance of each region to the user. During compression, the importance values are used to allocate more bits to more important regions and fewer bits to less important regions."]

determining whether a discriminator of the GAN is able to identify each object in the first set modified by the GAN generator; and [discriminator, ¶49]

in response to determining the GAN discriminator is able to identify each object in the first set modified by the GAN generator as real: [discriminator predicts real, ¶49: "The discriminator then predicts which of the images is real or fake. This prediction is used by one or more loss functions 506 to generate an error that is propagated back to the networks for training (as indicated by dotted lines in FIG. 5)."]

generating, by the GAN generator, one or more updated media assets including a second set of one or more objects that are identified by the GAN discriminator as real. [a "real" prediction causes the loss/error to be propagated back for compression or bit allocation (modifications for updated media), ¶49-50: "loss function ensures bits are allocated optimally in the importance map while staying within the limits of user-provided bit budget (e.g., to achieve the target bitrate)."]

BV does not specifically teach identifying, by a convolutional neural network (CNN), one or more objects in the one or more media assets; deriving a relevance score for each identified object based on the historical data and the identified usage context; and creating a training data set for a GAN generator including one or more images of a first set of one or more objects that exceed a relevance score threshold.

However, Mas Montserrat teaches identifying, by a convolutional neural network (CNN), one or more objects in the one or more media assets; [a CNN identifies symbols (objects) in an image, ¶16-¶18: "CNN 110 receives as input a plurality of images and produces as output a plurality of bounding boxes, where each bounding box is assigned a class that is associated with a symbol believed to be present in the portion of an image that is enclosed by the bounding box"]

deriving a relevance score for each identified object based on the historical data and the identified usage context; [generates a confidence score for each symbol (object) based on training data (past images) and context (logos within, ¶14), ¶24, ¶18: "CNN 110 also produces for each bounding box a confidence score which indicates a likelihood that the class assigned to the bounding box is correct (i.e., that the symbol associated with the class is depicted in the bounding box)"]

creating a training data set for a GAN generator including one or more images of a first set of one or more objects that exceed a relevance score threshold; [creates training data based on an object (symbol) score threshold, ¶25: "the unlabeled image is selected as a training image for training a system to recognize the symbol, when the confidence score is above a predefined threshold."]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the image analysis disclosed by BV by incorporating the identifying, by a convolutional neural network (CNN), one or more objects in the one or more media assets; deriving a relevance score for each identified object based on the historical data and the identified usage context; and creating a training data set for a GAN generator including one or more images of a first set of one or more objects that exceed a relevance score threshold disclosed by Mas Montserrat, because both techniques address the same field of machine learning, and incorporating Mas Montserrat into BV saves time in image recognition and reduces time spent training on unnecessary training data [Mas Montserrat ¶14-15].

As to dependent claim 18, the rejection of claim 15 is incorporated. BV and Mas Montserrat further teach training the GAN generator by feeding the created training data set into the GAN generator, wherein at least one object that does not exceed the relevance score threshold is removed from the created training data set. [Mas Montserrat discards when below the threshold, ¶25: "confidence score falls below the predefined threshold is discarded"]

As to dependent claim 19, the rejection of claim 15 is incorporated. BV and Mas Montserrat further teach executing one or more compression techniques on each object in the first set.
[BV areas in image get assigned different bitrates (compression) ¶57] As to dependent claim 20, the rejection of claim 19 is incorporated, BV and Mas Montserrat further teach wherein a degree of compression applied to each object in the first set is inversely proportional to the relevance score of each object in the first set, wherein an object having a lower relevance score is more compressed than an object having a higher relevance score. [BV ensures that parts of the image deemed more important retain higher quality during compression, aligning with the objective to allocate bits optimally as guided by user-provided importance values while staying within a target bitrate ¶44, 29 " where each region has been annotated with an importance value that represents the relative importance of each region to the user. During compression, the importance values are used to allocate more bits to more important regions and fewer bits to less important regions", "the pixels of the region corresponding to the eyes have more details and therefore can be given a higher importance value than the relatively smooth pixels representing the person's cheeks."] Claims 2-3, 9-10 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over BV and Mas Montserrat, as applied in the rejection of claim 1 8 and 15 above, and further in view of Rippel et al. (US 20180174052 A1 hereinafter Rippel) As to dependent claim 2, BV and Mas Montserrat each the rejection of claim 1 that is incorporated. BV further teaches applying, by the GAN generator, one or more additional modifications to each object in the first set not identified as real based on the relevance score of each object. 
[BV discriminator decides real or fake which drive where to allocate bits (additional modifications)¶49, ¶5 " a discriminator, to train the reconstruction network to generate photorealistic reconstructed images, and various loss functions to ensure bits are allocated optimally in the importance map while staying within the limits of user-provided bit budget."] BV and Mas Montserrat do not specifically teach in response to determining the GAN discriminator is not able to identify each object in the first set modified by the GAN generator as real, iterating, until the GAN discriminator is able to identify each object in the first set as real: However, Rippel teaches in response to determining the GAN discriminator is not able to identify each object in the first set modified by the GAN generator as real, iterating, until the GAN discriminator is able to identify each object in the first set as real: [when discriminator does not satisfy criteria (not able to identify), do iterations of training ¶50 "The compression system 130 repeatedly alternates between training the autoencoder 302 and the discriminator 304. Specifically, for one or more iterations, a forward pass step and a backpropagation step to update the parameters of the discriminator 304 based on the discriminator loss are repeatedly alternated, while the parameters of the autoencoder 350 are fixed. For one or more subsequent iterations, a forward pass step and a backpropagation step to update the parameters of the autoencoder 302 based on the autoencoder loss function are repeatedly alternated, while the parameters of the discriminator 304 are fixed. 
The training process is completed when the autoencoder loss function and the discriminator loss satisfies a predetermined criteria."] Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filling date of the claimed invention to modify the image analysis disclosed by BV and Mas Montserrat by incorporating the in response to determining the GAN discriminator is not able to identify each object in the first set modified by the GAN generator as real, iterating, until the GAN discriminator is able to identify each object in the first set as real disclosed by Rippel because all techniques address the same field of machine learning systems and by incorporating Rippel into BV and Mas Montserrat reduces compression artifacts for improved image quality [Rippel ¶4, ¶6] As to dependent claim 3, the rejection of claim 2 is incorporated, BV, Mas Montserrat and Rippel further teach adding the updated one or media assets to the knowledge corpus. [Mas Montserrat add composite images (updated) to a training set (corpus) ¶35 "composite images can be produced. The composite images may then be used to train a symbol recognition system"] As to dependent claim 9, BV and Mas Montserrat each the rejection of claim 8 that is incorporated. BV further teaches applying, by the GAN generator, one or more additional modifications to each object in the first set not identified as real based on the relevance score of each object. 
[BV: the discriminator decides real or fake, which drives where to allocate bits (the additional modifications). ¶49, ¶5: "a discriminator, to train the reconstruction network to generate photorealistic reconstructed images, and various loss functions to ensure bits are allocated optimally in the importance map while staying within the limits of user-provided bit budget."]

BV and Mas Montserrat do not specifically teach: in response to determining the GAN discriminator is not able to identify each object in the first set modified by the GAN generator as real, iterating, until the GAN discriminator is able to identify each object in the first set as real. However, Rippel teaches this limitation. [When the discriminator does not satisfy a criterion (is not able to identify), iterations of training are performed. ¶50: "The compression system 130 repeatedly alternates between training the autoencoder 302 and the discriminator 304. Specifically, for one or more iterations, a forward pass step and a backpropagation step to update the parameters of the discriminator 304 based on the discriminator loss are repeatedly alternated, while the parameters of the autoencoder 350 are fixed. For one or more subsequent iterations, a forward pass step and a backpropagation step to update the parameters of the autoencoder 302 based on the autoencoder loss function are repeatedly alternated, while the parameters of the discriminator 304 are fixed. The training process is completed when the autoencoder loss function and the discriminator loss satisfies a predetermined criteria."]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the image analysis disclosed by BV and Mas Montserrat by incorporating the limitation, disclosed by Rippel, of iterating until the GAN discriminator is able to identify each object in the first set as real, because all techniques address the same field of machine learning systems, and incorporating Rippel into BV and Mas Montserrat reduces compression artifacts for improved image quality. [Rippel ¶4, ¶6]

As to dependent claim 10, the rejection of claim 9 is incorporated. BV, Mas Montserrat and Rippel further teach adding the updated one or media assets to the knowledge corpus. [Mas Montserrat: adding composite images (updated) to a training set (corpus). ¶35: "composite images can be produced. The composite images may then be used to train a symbol recognition system."]

As to dependent claim 16, the rejection of claim 15 is incorporated. BV further teaches applying, by the GAN generator, one or more additional modifications to each object in the first set not identified as real based on the relevance score of each object.
[BV: the discriminator decides real or fake, which drives where to allocate bits (the additional modifications). ¶49, ¶5: "a discriminator, to train the reconstruction network to generate photorealistic reconstructed images, and various loss functions to ensure bits are allocated optimally in the importance map while staying within the limits of user-provided bit budget."]

BV and Mas Montserrat do not specifically teach: in response to determining the GAN discriminator is not able to identify each object in the first set modified by the GAN generator as real, iterating, until the GAN discriminator is able to identify each object in the first set as real. However, Rippel teaches this limitation. [When the discriminator does not satisfy a criterion (is not able to identify), iterations of training are performed. ¶50: "The compression system 130 repeatedly alternates between training the autoencoder 302 and the discriminator 304. Specifically, for one or more iterations, a forward pass step and a backpropagation step to update the parameters of the discriminator 304 based on the discriminator loss are repeatedly alternated, while the parameters of the autoencoder 350 are fixed. For one or more subsequent iterations, a forward pass step and a backpropagation step to update the parameters of the autoencoder 302 based on the autoencoder loss function are repeatedly alternated, while the parameters of the discriminator 304 are fixed. The training process is completed when the autoencoder loss function and the discriminator loss satisfies a predetermined criteria."]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the image analysis disclosed by BV and Mas Montserrat by incorporating the limitation, disclosed by Rippel, of iterating until the GAN discriminator is able to identify each object in the first set as real, because all techniques address the same field of machine learning systems, and incorporating Rippel into BV and Mas Montserrat reduces compression artifacts for improved image quality. [Rippel ¶4, ¶6]

As to dependent claim 17, the rejection of claim 16 is incorporated. BV, Mas Montserrat and Rippel further teach adding the updated one or media assets to the knowledge corpus. [Mas Montserrat: adding composite images (updated) to a training set (corpus). ¶35: "composite images can be produced. The composite images may then be used to train a symbol recognition system."]

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Applicant is required under 37 C.F.R. § 1.111(c) to consider these references fully when responding to this action. Tawari et al. (US 20230004805 A1) teaches a convolution network that identifies objects in images and assigns importance scores to the objects (see ¶3).

It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 U.S.P.Q. 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 U.S.P.Q. 275, 277 (C.C.P.A. 1968)).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Beau Spratt, whose telephone number is 571-272-9919. The examiner can normally be reached 8:30am to 5:00pm (EST).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Welch, can be reached at 571-272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-483-7388.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BEAU D SPRATT/
Primary Examiner, Art Unit 2143
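The alternating scheme quoted from Rippel ¶50 (fix one network's parameters while updating the other, then swap, and stop once a predetermined criterion is met) can be sketched as a toy loop. Everything below is an illustrative assumption for readability only: the function name, the scalar stand-ins for the two networks, and the convergence test are invented here and are not the reference's actual implementation.

```python
# Toy 1-D sketch of alternating GAN-style training: in each round, the
# "discriminator" parameter is updated while the "generator" parameter
# is held fixed, then vice versa, until a predetermined criterion holds.
# Scalars stand in for the networks purely for illustration.

def train_alternating(real_mean=5.0, lr=0.1, max_rounds=200):
    gen = 0.0    # generator "parameter": the value it emits as a fake sample
    disc = 2.5   # discriminator "parameter": a decision threshold

    for _ in range(max_rounds):
        # Phase 1: update the discriminator while the generator is fixed,
        # nudging the threshold toward the midpoint of real and fake samples.
        disc += lr * ((real_mean + gen) / 2.0 - disc)

        # Phase 2: update the generator while the discriminator is fixed,
        # nudging its output toward the real data.
        gen += lr * (real_mean - gen)

        # Predetermined criterion: generated samples are no longer
        # distinguishable from real ones.
        if abs(gen - real_mean) < 1e-3:
            break

    return gen, disc
```

Real GAN training replaces these scalar nudges with forward passes and backpropagation over the discriminator and autoencoder/generator losses, but the control flow (alternate the two updates, then test a stopping criterion) is the part the Office Action maps to the claimed "iterating until the GAN discriminator is able to identify each object in the first set as real" limitation.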

Prosecution Timeline

Apr 25, 2023
Application Filed
Jan 29, 2026
Non-Final Rejection — §101, §103
Mar 16, 2026
Interview Requested
Mar 31, 2026
Examiner Interview Summary
Mar 31, 2026
Applicant Interview (Telephonic)
Apr 03, 2026
Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12595715
Cementing Lab Data Validation based On Machine Learning
2y 5m to grant • Granted Apr 07, 2026
Patent 12596955
REWARD FEEDBACK FOR LEARNING CONTROL POLICIES USING NATURAL LANGUAGE AND VISION DATA
2y 5m to grant • Granted Apr 07, 2026
Patent 12596956
INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD FOR PRESENTING REACTION-ADAPTIVE EXPLANATION OF AUTOMATIC OPERATIONS
2y 5m to grant • Granted Apr 07, 2026
Patent 12561464
CATALYST 4 CONNECTIONS
2y 5m to grant • Granted Feb 24, 2026
Patent 12561606
TECHNIQUES FOR POLL INTENTION DETECTION AND POLL CREATION
2y 5m to grant • Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
79%
Grant Probability
99%
With Interview (+26.6%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 432 resolved cases by this examiner. Grant probability derived from career allow rate.
