Prosecution Insights
Last updated: April 19, 2026
Application No. 18/421,016

IMAGE OPTIMIZATION

Non-Final OA: §102, §103
Filed: Jan 24, 2024
Examiner: YAO, JULIA ZHI-YI
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: Tencent Technology (Shenzhen) Company Limited
OA Round: 1 (Non-Final)
Grant Probability: 68% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 4m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 68% (47 granted / 69 resolved); +6.1% vs Tech Center average (above average)
Interview Lift: +35.7% (strong); allow rate among resolved cases with an interview vs. without
Typical Timeline: 3y 4m average prosecution; 29 applications currently pending
Career History: 98 total applications across all art units

Statute-Specific Performance

§101: 8.9% (-31.1% vs TC avg)
§103: 52.6% (+12.6% vs TC avg)
§102: 11.2% (-28.8% vs TC avg)
§112: 26.1% (-13.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 69 resolved cases.
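The headline metrics on this page are simple ratios over the examiner's resolved cases. Below is a small sketch of the arithmetic; the with/without-interview split and the Tech Center baseline are hypothetical values chosen only to reproduce the displayed +35.7% and +6.1% figures, not data from the page.

```python
# Career allow rate: grants over resolved cases (47 / 69, from this page).
granted, resolved = 47, 69
allow_rate = granted / resolved               # ~0.681 -> shown as "68%"

# Interview lift: allow rate with an interview minus allow rate without.
# This split is a hypothetical illustration consistent with +35.7%.
rate_with_interview = 0.99
rate_without_interview = 0.633
interview_lift = rate_with_interview - rate_without_interview   # ~0.357

# Delta vs. the Tech Center average (the TC average is an estimate).
tc_avg = 0.62                                 # hypothetical baseline
delta_vs_tc = allow_rate - tc_avg             # ~+0.061 -> shown as "+6.1%"

print(f"allow rate {allow_rate:.1%}, "
      f"interview lift {interview_lift:+.1%}, "
      f"vs TC avg {delta_vs_tc:+.1%}")
```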

Office Action

Rejections: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

Claims 1-20 are pending for examination in Application No. 18/421,016, filed January 24th, 2024.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed as foreign Patent Application No. CN202211252059.0, filed on October 13th, 2022. Acknowledgment is made of the present application's status as a continuation (CON) of International Application No. PCT/CN2023/120931, filed on September 25th, 2023, which claims priority to foreign Patent Application No. CN202211252059.0, filed on October 13th, 2022.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on December 5th, 2024, is in compliance with the provisions of 37 CFR 1.97. Accordingly, the IDS is being considered and attached by the examiner.

Specification

The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant's cooperation is requested in correcting any errors of which applicant may become aware in the specification.

Claim Objections

Claims 6 and 17 are objected to because of the following informalities, which fail to comply with 37 CFR 1.71(a) ("full, clear, concise, and exact terms"; see MPEP § 608.01(m)): in lines 6-7 of each of claims 6 and 17, "to obtain the offset parameter constraint item" should be "to obtain the offset parameter constraint [[item]]" to maintain consistency in terminology within the claims. Appropriate correction is required.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 12, and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Shi et al. (Shi; US 2025/0054271 A1; effective filing date given to foreign priority date December 24th, 2021).

Regarding claim 1, Shi discloses an image optimization method, comprising:

obtaining an image generation network, a to-be-optimized image, and a plurality of preset image features (paras. [0070-0071] and [0065] recite: [0070] "S203: performing image reconstruction by means of an image generation model based on the first image feature, the second image feature, and the plurality of intermediate image features, so as to obtain a target video, in which the target video is used for presenting a process of a gradual change from the first image to the second image." [0071] "The image generation model may be a neural network for image generation or image reconstruction, with its input data being a coded image feature and its output data being a reconstructed image. A trained image generation model disclosed on the network may be used, or the neural network may be trained by using training data (including a plurality of training images) to obtain the image generation model, and there is no limitation on the training process of the model." [0065] "In an example, a plurality of images, and image features obtained by encoding the plurality of images may be pre-stored. The second image feature is obtained from the stored image features of the plurality of images. In one way, a second image may be specified by the user among the pre-stored plurality of images, and an image feature of the second image, i.e., the second image feature, may be obtained from the image features of the plurality of images. In another way, the second image feature may be obtained among the image features of the plurality of images in a preset order (e.g., an order in which the images are stored) or at random."; where the "first image" is a to-be-optimized image and the "image features obtained by encoding the plurality of images may be pre-stored" are a plurality of preset image features);

selecting a target feature from the plurality of preset image features based on (i) the target feature and the to-be-optimized image and (ii) a preset similarity condition (paras. [0111], [0113], [0116], and [0118] recite: [0111] "S501: training a neural network according to a plurality of training images and an image generation model, in which the neural network is used for learning a deviation of image feature adjustment performed based on a feature space of the image generation model." [0113] "S5011: generating a target image feature according to an image feature of a first training image and an image feature of a second training image." [0116] "In another example, the image feature of the first training image and the image feature of the second training image are weighted and summed to obtain the target image feature. Weights corresponding to the image feature of the first training image and the image feature of the second training image, respectively, may be set in advance." [0118] "In the present embodiment, the average image feature in the feature space may be determined based on a probability distribution that the feature space conforms to. The target image feature is subjected to initial adjustment by using the average image feature, making the target image feature close to the average image feature, and improving the quality of the target image feature."; where the "target image feature" is a target feature selected from the plurality of preset image features (e.g., "an image feature of a second training image" or 'second image feature') based on at least (i) the target image feature and the to-be-optimized image (e.g., the "image feature of a first training image" or 'first image feature') and (ii) a preset similarity condition (e.g., "the target image feature close to the average image feature"));

inputting the target feature and an initial offset parameter to the image generation network (para. [0118], see the citation in the preceding limitation immediately above, where paras. [0119] and [0121] further recite: [0119] "In one embodiment, the step of performing initial adjustment on the target image feature according to the average image feature includes: determining a mean value of the target image feature and the average image feature, and determining the target image feature that is initially adjusted to be the mean value. Thus, feature cropping (i.e., the initial adjustment) of the target image feature is realized by way of solving for the mean value of the target image feature and the average image feature." [0121] "In the present embodiment, the image feature of the first training image and the image feature of the second training image are input into the neural network to obtain output data of the neural network, i.e., the target deviation, corresponding to the initial adjustment, obtained by learning. Based on the target deviation, corresponding to the initial adjustment, obtained by learning of the neural network, the target image feature that is initially adjusted is readjusted, so that the target image feature is close to the image feature of the first training image and the image feature of the second training image, that is, the similarity between the target image feature and the image feature of the first training image as well as the image feature of the second training image is improved."; where the target image feature (e.g., "target image feature" prior to an "initial adjustment") and an initial offset parameter (e.g., "average image feature" used to perform the "initial adjustment" of the "target image feature") are inputs to the image generation network (i.e., "neural network") to "obtain output data of the neural network, i.e., the target deviation, corresponding to the initial adjustment, obtained by learning");

adjusting the initial offset parameter according to a difference between an output of the image generation network and the to-be-optimized image, to obtain a target offset parameter (paras. [0119] and [0121], see the citation in the preceding limitation immediately above, where "readjust[ing]" the "target image feature" includes adjusting the initial offset parameter according to a difference to obtain a target offset parameter, as further recited in paras. [0123-0124] and [0132] below: [0123] "S5014: adjusting model parameters of the neural network according to the target deviation, a target image feature that is readjusted, the first training image and the second training image." [0124] "In the present embodiment, a training error of the neural network may be determined based on the target deviation, the target image feature that is readjusted, the first training image and the second training image, and the model parameters of the neural network are adjusted based on the training error. For example, the training error is determined based on a difference between the target image feature that is readjusted and the image feature of the first training image, and/or a difference between the target image feature that is readjusted and the image feature of the second training image." [0132] "Illustratively, referring to FIG. 6. FIG. 6 is a schematic diagram of a training framework of a neural network according to an embodiment of the present disclosure. As shown in FIG. 6, a training process includes: first, determining an average value of a latent code 1 (an image feature obtained by encoding an input image 1) and a latent code 2 (an image feature obtained by encoding an input image 2); based on the feature space of the image generation model, performing feature cropping (i.e., performing the initial adjustment) on the average value to obtain a cropped average value; inputting the latent code 1 and latent code 2 into the neural network, and according to a feature deviation output by the neural network, determining the part of the training error subjected to the regularized constraint; then adding the feature deviation output by the neural network to the cropped average value, and then inputting this average value into the image generation model to obtain a reconstructed image; and finally, determining a feature difference between the reconstructed image and the input image 1, and a feature difference between the reconstructed image and the input image 2 via a feature network, and based on the two feature differences, determining the part of the training error subjected to the similarity constraint. In this way, the model parameters of the neural network are adjusted based on the part of the training error subjected to the regularized constraint and the part of the training error subjected to the similarity constraint."; where performing "feature cropping (i.e., performing the initial adjustment) on the average value to obtain a cropped average value" is adjusting the initial offset parameter (e.g., "cropped average value" or "feature deviation output by the neural network" at the "initial adjustment") according to at least a difference (e.g., a "training error") between an output image of the generation network (e.g., "the target image feature") and the to-be-optimized image (e.g., "the image feature of the first training image")); and

inputting the target feature and the target offset parameter to the image generation network, to generate an optimized image (para. [0132], see the citation in the limitation immediately above, where para. [0128] further recites: [0128] "Specifically, the image features mentioned in the above-mentioned embodiments are all coded image features. In order to improve the accuracy of model training, after the target image feature that is readjusted is obtained, the target image feature may be input into the image generation model to obtain an intermediate reconstructed image (i.e., a reconstructed image corresponding to the target image feature); and then, the first training image, the second training image and the intermediate reconstructed image may be subjected [to] feature extraction via a feature extraction network, to obtain the image feature of the first training image, the image feature of the second training image, and the image feature of the intermediate reconstructed image, respectively. For example, when the first training image, the second training image, and the intermediate reconstruction image are all facial images, these images may be subjected to feature extraction by using a facial feature extraction network. Next, a difference between the image feature of the intermediate reconstructed image and the image feature of the first training image (the features extracted by the feature extraction network), and a difference between the image feature of the intermediate reconstructed image and the image feature of the second training image (the features extracted by the feature extraction network) are determined, and the training error is determined according to the two differences and the output data of the neural network."; where the "target image feature may be input into the image generation model to obtain an intermediate reconstructed image (i.e., a reconstructed image corresponding to the target image feature)", which includes "inputting this average value [i.e., adding the feature deviation output by the neural network to the cropped average value] into the image generation model to obtain a reconstructed image", is inputting the target feature (e.g., "target image feature") and the target offset parameter (e.g., an adjusted initial "target deviation", i.e., "adding the feature deviation output by the neural network to the cropped average value") to the image generation network to generate an optimized image (i.e., a "reconstructed image corresponding to the target image feature")).

Regarding claim 12, the claim differs from claim 1 in that the claim is in the form of an apparatus comprising processing circuitry configured to perform the method of claim 1. Therefore, claim 12 recites similar limitations to claim 1 and is rejected for similar rationale and reasoning (see the analysis for claim 1 above).

Regarding claim 20, the claim differs from claim 1 in that the claim is in the form of a non-transitory computer-readable storage medium storing instructions which, when executed by a processor, cause the processor to perform the method of claim 1. Therefore, claim 20 recites similar limitations to claim 1 and is rejected for similar rationale and reasoning (see the analysis for claim 1 above).
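For orientation, the method of claim 1 as characterized in the rejection reads like a latent-space optimization loop around a frozen generator: select the stored feature most similar to the input image, then tune only an additive offset until the generator's output matches the input. The sketch below is a hypothetical PyTorch-style illustration of that reading, not code from the application or from Shi; G, encode, and every parameter are assumed stand-ins.

```python
import torch

def optimize_image(G, encode, image, preset_feats, steps=200, lr=0.05):
    """Illustrative sketch only. G: frozen image generation network;
    encode: maps an image into G's feature space; image: the
    to-be-optimized image; preset_feats: (N, D) bank of preset image
    features. All names are hypothetical."""
    target = encode(image)  # feature of the to-be-optimized image

    # Select the preset feature most similar to the input; a cosine
    # threshold is one possible "preset similarity condition".
    sims = torch.nn.functional.cosine_similarity(
        preset_feats, target.unsqueeze(0))
    feat = preset_feats[sims.argmax()].detach()

    # Initial offset parameter, tuned while the generator stays frozen.
    offset = torch.zeros_like(feat, requires_grad=True)
    opt = torch.optim.Adam([offset], lr=lr)
    for _ in range(steps):
        out = G(feat + offset)                 # generator output
        loss = (out - image).pow(2).mean()     # difference to the input
        opt.zero_grad()
        loss.backward()
        opt.step()                             # adjust only the offset

    # Target feature plus target offset parameter -> optimized image.
    return G(feat + offset.detach())
```

The cosine similarity and mean-squared error here are illustrative choices; the claim language only requires some preset similarity condition and some difference measure.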
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 2, 10, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Shi as applied to claims 1 or 12 above, and further in view of Tao et al. (Tao; US 2024/0257423 A1; effective filing date given to foreign priority date April 16th, 2021).

Regarding claim 2, Shi discloses the image optimization method according to claim 1, and Tao teaches, in the same field of endeavor of determining target image features from a plurality of preset image features for image generation models, that the selecting comprises:

performing clustering processing on the plurality of preset image features, to obtain a plurality of feature clusters (para. [0051] recites: "The source domain images can all belong to one style, and the target domain images can belong to one or more styles. Due to a lack of the labeling information in the target domain images, a clustering algorithm can be used to obtain one or more representations of one or more clustering centers of the target domain images, which can be used as one or more target domain style representations to represent different styles. Any existing algorithm can be adopted as the clustering algorithm, such as K-means, mean shift clustering, or density based clustering algorithm, etc. By clustering, each of the target domain images can be labeled with a pseudo domain label, that is, each of the target domain images may be labeled with a style."; where the "clustering algorithm" is a clustering process on the plurality of preset image features (e.g., features of "target domain images")); and

selecting the target feature from center features of the plurality of feature clusters (para. [0050] recites: "In some embodiments, as shown in FIG. 2, the style encoder comprises a style representation extraction network and a clustering module. The target domain images can be input to the style representation extraction network to obtain basic style representations of the target domain images; the basic style representations of the target domain images can be input to the clustering module for clustering to obtain representation vectors of clustering centers as the target domain style representations."; where obtaining "representation vectors of clustering centers as the target domain style representations" is selecting the target feature (e.g., "target domain style") from centers of a plurality of feature clusters).

Since Shi discloses that the image generation model is a Style-Based Architecture for GANs (StyleGAN) model or a StyleGAN2 model (para. [0077] recites: "In some embodiments, the image generation model is a Style-Based Architecture for GANs (StyleGAN) model or a StyleGAN2 model. Accordingly, by utilizing the advantages of the StyleGAN model or the StyleGAN2 model in terms of image generation, the image reconstruction quality of the image generation model and the quality of the image frames of the target video are improved."), it would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Shi to incorporate selecting the target feature (i.e., the "target image feature" of Shi) from center features of a plurality of feature clusters obtained by performing clustering processing on the plurality of preset image features (i.e., the pre-stored image features of Shi), to include the representation of different styles of each image feature in the plurality of preset image features when selecting the target feature for generating an optimized image from the image generation network, as taught by Tao above (see para. [0051] of Tao above).

Regarding claim 10, Shi discloses the image optimization method according to claim 1, and further discloses the method comprising: obtaining, according to a distribution feature (para. [0065], see the citation in the claim 1 limitation "obtaining…" above, where para. [0087] further recites: "The feature space of the image generation model may be understood as an input space of the image generation model, and feature samples in this input space conform to a certain probability distribution."; where obtaining the "image features of the plurality of images" "at random" as disclosed in para. [0065], with the "feature samples… conform[ing] to a certain probability distribution", is obtaining a plurality of original features through sampling according to a distribution feature of random variables); and mapping the plurality of original features to a preset feature space (paras. [0065] and [0087], see the preceding citation immediately above, where para. [0084] further recites: "In an example, an average value of the first image feature and the second image feature is determined, and the average value is the third image feature. Specifically, feature values at corresponding positions of the first image feature and the second image feature may be added and averaged to obtain the average value of the first image feature and the second image feature."; where determining the "positions" of the "first" and "second" image features in a feature space is mapping the plurality of original features to a preset feature space).

Where Shi does not specifically disclose obtaining, according to a distribution feature type of random variables, a plurality of original features through sampling, and mapping the plurality of original features to a preset feature space to obtain the plurality of preset image features, Tao teaches, in the same field of endeavor of determining target image features from a plurality of preset image features for image generation models, obtaining, according to a distribution feature type of random variables, a plurality of original features through sampling (paras. [0050] and [0018] recite: [0050] "In some embodiments, as shown in FIG. 2, the style encoder comprises a style representation extraction network and a clustering module. The target domain images can be input to the style representation extraction network to obtain basic style representations of the target domain images; the basic style representations of the target domain images can be input to the clustering module for clustering to obtain representation vectors of clustering centers as the target domain style representations." [0018] "…wherein a value of each dimension in the randomly generated preset number of the new style representations is randomly sampled from a standard normal distribution."; where the "preset number of the new style representations" are a plurality of original features obtained through sampling (e.g., "randomly sampl[ing]") according to a distribution feature type of random variables (e.g., "randomly sampled from a standard normal distribution")); and mapping the plurality of original features to a preset feature space, to obtain the plurality of preset image features (para. [0051], quoted in the rejection of claim 2 above, where "clustering to obtain representation vectors" is mapping the plurality of original features to a preset feature space to obtain the plurality of preset image features).

Since Shi discloses that the image generation model is a StyleGAN model or a StyleGAN2 model (para. [0077], quoted in the rejection of claim 2 above), it would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Shi to incorporate obtaining the plurality of original features through sampling according to a distribution feature type of random variables, and mapping the plurality of original features to a preset feature space, to improve the representation of each image feature in the plurality of preset image features for generating an optimized image from an image generation network, as taught by Tao above (see para. [0051] of Tao above).

Regarding claim 13, the claim recites similar limitations to claim 2 and is rejected for similar rationale and reasoning (see the analysis for claim 2 above).
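Claims 2/13 and 10, as mapped above, add two preprocessing steps: building the preset feature bank by sampling random variables and mapping them into a feature space (claim 10), and clustering that bank so the target feature is drawn from cluster centers (claims 2/13). The sketch below is a hypothetical illustration under those readings; K-means is one of the algorithms Tao names and the standard-normal sampling follows Tao's para. [0018], but the mapping network, dimensions, and cluster count are invented stand-ins.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Claim 10 (as mapped to Tao): draw original features from a standard
# normal distribution, then map them into a preset feature space. The
# mapping here is a hypothetical random-projection stand-in.
z = rng.standard_normal((10_000, 512))          # sampled original features
W = rng.standard_normal((512, 512)) / 512**0.5  # hypothetical mapping
preset_feats = np.tanh(z @ W)                   # preset image features

# Claims 2/13: cluster the preset features and keep the cluster centers
# as candidate target features.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(preset_feats)
centers = kmeans.cluster_centers_               # center features

# A target feature would then be selected from `centers` against the
# to-be-optimized image under a preset similarity condition.
```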
Claims 5-9 and 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Shi as applied to claims 1 or 12 above, and further in view of Karras et al. (Karras; "Analyzing and Improving the Image Quality of StyleGAN," 2020).

Regarding claim 5, Shi discloses the image optimization method according to claim 1, wherein the inputting the target feature and the initial offset parameter includes inputting the target feature and the initial offset parameter to the image generation network, to generate a first image (paras. [0118], [0119], and [0121], see the citation in the claim 1 limitation "inputting the target feature and an initial offset parameter…" above, where the "output data" of the image generation network from inputting the target feature and the initial offset parameter includes generating a first image (e.g., a "reconstructed image") as disclosed in para. [0132], see the citation in the claim 1 limitation "adjusting the initial offset parameter according to…" above);

the method further comprises: calculating the to-be-optimized image and the (first) image based on a constraint condition for the initial offset parameter, to obtain an offset loss value (paras. [0119], [0121], [0123-0124], and [0132], see the citations in the claim 1 limitation "adjusting the initial offset parameter according to…" above, where determining the "difference between the reconstructed image and the input image" is calculating the to-be-optimized image (e.g., "input image") and the first image (e.g., "reconstructed image") based on at least a constraint condition for the initial offset parameter (e.g., "regularized constraint") to obtain an offset loss value (e.g., "training error" or "feature deviation")); and

the adjusting the initial offset parameter includes adjusting the initial offset parameter according to the offset loss value, to obtain the target offset parameter (paras. [0119], [0121], [0123-0124], and [0132], see the citations in the claim 1 limitation "adjusting the initial offset parameter according to…" above, where adding the "feature deviation output by the neural network to the cropped average value" is obtaining a target offset parameter by adjusting the initial offset parameter (e.g., "cropped average value" or initial target deviation) according to the offset loss value (e.g., a "training error" or "feature deviation")).

Where Shi does not specifically disclose performing image deterioration processing on the first image, to obtain a second image, Karras teaches, in the same field of endeavor of StyleGAN or StyleGAN2 models, performing image deterioration processing on the first image, to obtain a second image (section 4.1 in col. 2 of pg. 8112 recites: [4.1. Alternative network architectures] "…In Figure 7b we simplify this design by upsampling and summing the contributions of RGB outputs corresponding to different resolutions. In the discriminator, we similarly provide the downsampled image to each resolution block of the discriminator. We use bilinear filtering in all up and downsampling operations. In Figure 7c we further modify the design to use residual connections.3 This design is similar to LAPGAN [6] without the per-resolution discriminators…"; where the image generation network of StyleGAN2 including "downsampling operations" in the discriminator is performing image deterioration on a first image generated by a generator to obtain a downsampled second image in the discriminator).

Since Shi discloses that the image generation model is a StyleGAN model or a StyleGAN2 model (para. [0077], quoted in the rejection of claim 2 above), it would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Shi to incorporate obtaining a second image by performing image deterioration processing on the first image, and obtaining an offset loss value by calculating the to-be-optimized image and the second image based on a constraint condition for the initial offset parameter, as using an image generation network like StyleGAN2 includes downsampling generated images from a generator (e.g., a first image), as taught by Karras above.

Regarding claim 6, Shi in view of Karras discloses the image optimization method according to claim 5, wherein Shi further discloses the constraint condition for the initial offset parameter comprises an offset parameter constraint (paras. [0129-0130] recite: [0129] "In an example, the target optimization function of the neural network may be expressed as: min L = ∥Φ(G(f(w1, w2) + w3)) − Φ(x1)∥² + ∥Φ(G(f(w1, w2) + w3)) − Φ(x2)∥² + λ∥f(w1, w2)∥, in which x1 and x2 represent the first training image and the second training image, respectively, w1 represents the image feature obtained by encoding the first training image, w2 represents the image feature obtained by encoding the second training image, w3 represents the target image feature, f represents the neural network, G represents the image generation model, Φ represents the feature extraction network, and λ is a preset parameter." [0130] "∥Φ(G(f(w1,w2)+w3))−Φ(x1)∥² + ∥Φ(G(f(w1,w2)+w3))−Φ(x2)∥² is the similarity constraint and λ∥f(w1,w2)∥ is the regularized constraint."; where the "regularized constraint" is an offset parameter constraint); and

the calculating the to-be-optimized image and the second image comprises: calculating the to-be-optimized image and the second image, to obtain a first loss item (para. [0127] recites: "Specifically, the target optimization function of the neural network may be determined in advance according to the regularized constraint and the similarity constraint. During a training process of the neural network, a function value of the target optimization function, i.e., the training error of the neural network, is determined based on the target deviation, the first training image and the second training image. The model parameters of the neural network are optimized based on the training error. The optimization algorithm is, for example, a gradient descent algorithm."; where the first expression in the "target optimization function" (i.e., ∥Φ(G(f(w1,w2)+w3))−Φ(x1)∥²) is at least a first loss item obtained by calculating the deviation between the to-be-optimized image (e.g., the "first training image" feature, or w1 in the "target optimization function" depicted in para. [0129]) and the second image (e.g., a downsampled "second training image" feature, or w2 in the "target optimization function" depicted in para. [0129])); performing regularization processing on the initial offset parameter, to obtain the offset parameter constraint item (para. [0130], see the citation in the current claim above, where the "regularized constraint" is regularization processing on the initial offset parameter to obtain an offset parameter constraint item (e.g., the λ∥f(w1,w2)∥ expression depicted in the "target optimization function" in para. [0129])); and constraining the first loss item based on the offset parameter constraint, to obtain the offset loss value (paras. [0129-0130], see the citation in the current claim above, where adding the "regularized constraint" to at least the first expression (e.g., the first loss item) in the "target optimization function" in para. [0129] is constraining the first loss item based on the offset parameter constraint to obtain the offset loss value).
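The target optimization function quoted from Shi's paras. [0129]-[0130] decomposes into the similarity constraint (the two Φ terms) and the regularized constraint (the λ term) that the rejection maps to the claimed loss items and offset parameter constraint. The following is a hypothetical, direct transcription of that formula; G, f, Phi, and the tensors are assumed stand-ins for Shi's modules, not an implementation from either document.

```python
import torch

def shi_training_loss(G, f, Phi, w1, w2, w3, x1, x2, lam=0.1):
    """min L = ||Phi(G(f(w1,w2)+w3)) - Phi(x1)||^2
             + ||Phi(G(f(w1,w2)+w3)) - Phi(x2)||^2
             + lam * ||f(w1,w2)||        (Shi, paras. [0129]-[0130]).
    lam is the preset parameter; all callables are hypothetical."""
    deviation = f(w1, w2)            # feature deviation (offset) output
    recon = G(deviation + w3)        # reconstructed image
    similarity = ((Phi(recon) - Phi(x1)).pow(2).sum()
                  + (Phi(recon) - Phi(x2)).pow(2).sum())  # similarity constraint
    regularized = lam * deviation.norm()                  # regularized constraint
    return similarity + regularized
```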
Regarding claim 7, Shi in view of Karras discloses the image optimization method according to claim 5, wherein Shi further discloses the method further comprising: inputting the target feature and the target offset parameter to the image generation network, to generate a third image (paras. [0132] and [0128], see the citation in the claim 1 limitation "inputting the target feature and the target offset parameter…" above, where the "target image feature may be input into the image generation model to obtain an intermediate reconstructed image (i.e., a reconstructed image corresponding to the target image feature)", which includes "inputting this average value [i.e., adding the feature deviation output by the neural network to the cropped average value] into the image generation model to obtain a reconstructed image", is inputting the target feature (e.g., "target image feature") and the target offset parameter (e.g., an adjusted initial "target deviation", i.e., "adding the feature deviation output by the neural network to the cropped average value") to the image generation network to generate a third image (i.e., a "reconstructed image corresponding to the target image feature"));

calculating the to-be-optimized image and the (third) image based on a constraint condition for the image generation network, to obtain a network loss value (paras. [0132] and [0128], see the citation in the claim 1 limitation "inputting the target feature and the target offset parameter…" above, where the "training error" is a network loss value obtained by at least calculating the difference between the to-be-optimized image (e.g., "first image") and the third image (e.g., "intermediate reconstructed image") based on a constraint condition (e.g., a "similarity constraint") for the image generation network); and

adjusting a network parameter of the image generation network according to the network loss value, to obtain an adjusted image generation network, the adjusted image generation network being configured to generate the optimized image (paras. [0132] and [0128], see the citation in the claim 1 limitation "inputting the target feature and the target offset parameter…" above, where "the model parameters of the neural network are adjusted based on the part of the training error subjected to the regularized constraint and the part of the training error subjected to the similarity constraint" recited in para. [0132] is adjusting network parameters of the image generation network according to the network loss value (e.g., "training error") to obtain an adjusted image generation network configured to generate an optimized (e.g., "reconstructed") image).

Where Shi does not specifically disclose performing image deterioration processing on the third image, to obtain a fourth image, Karras teaches, in the same field of endeavor of StyleGAN or StyleGAN2 models, performing image deterioration processing on the third image, to obtain a fourth image (section 4.1 in col. 2 of pg. 8112, quoted in the rejection of claim 5 above, where the image generation network of StyleGAN2 including "downsampling operations" in the discriminator is performing image deterioration on a third image generated by a generator to obtain at least a downsampled fourth image in the discriminator).

Since Shi discloses that the image generation model is a StyleGAN model or a StyleGAN2 model (para. [0077], quoted in the rejection of claim 2 above), it would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Shi to incorporate obtaining a fourth image by performing image deterioration processing on the third image, and obtaining a network loss value by calculating the to-be-optimized image and the fourth image based on a constraint condition for the image generation network, as using an image generation network like StyleGAN2 includes downsampling generated images from a generator (e.g., a third image), as taught by Karras above.

Regarding claim 8, Shi in view of Karras discloses the image optimization method according to claim 7, wherein Shi further discloses the constraint condition for the image generation network comprises a network constraint (paras. [0132] and [0128], see the citation in the claim 1 limitation "inputting the target feature and the target offset parameter…" above, where the "training error" including a "similarity constraint" and/or a "regularization constraint" is a network constraint); and

the calculating the to-be-optimized image and the fourth image comprises: calculating the to-be-optimized image and the fourth image, to obtain a second loss item (paras. [0129-0130] and [0127], see the citations in claim 6 above, where the second expression in the "target optimization function" (i.e., ∥Φ(G(f(w1,w2)+w3))−Φ(x2)∥²) is at least a second loss item obtained by calculating the deviation between the to-be-optimized image (e.g., the "first training image" feature, or w1 in the "target optimization function" depicted in para. [0129]) and the fourth image (e.g., a downsampled "second training image" feature, or w2 in the "target optimization function" depicted in para. [0129])); calculating an output result of an initial iteration of the image generation network and an output result of a current iteration of the image generation network, to obtain the network constraint (para. [0132], see the citation in the claim 1 limitation "adjusting the initial offset parameter according to…" above, where the "training error" including the "regularized constraint" and "similarity constraint" is a network constraint obtained from the output result of an initial iteration of the image generation model (e.g., output which determines "the part of the training error subjected to the regularized constraint") and an output result of a current iteration of the image generation network (e.g., output which determines "the part of the training error subjected to the similarity constraint")); and constraining the second loss item based on the network constraint, to obtain the network loss value (paras. [0129-0130], see the citations in claim 6 above, where adding the "similarity constraint" to at least the second expression (e.g., the second loss item) in the "target optimization function" in para. [0129] is constraining the second loss item based on the network constraint to obtain the network loss value).

Regarding claim 9, Shi in view of Karras discloses the image optimization method according to claim 8, wherein Shi further discloses the calculating the output result of the initial iteration of the image generation network and the output result of the current iteration of the image generation network comprises: inputting the target feature and the target offset parameter to the initial iteration of the image generation network, to generate an initial image (paras. [0118], [0119], and [0121], see the citation in the claim 1 limitation "inputting the target feature and an initial offset parameter to…" above, where the image part of the "output data" is an initial image generated by inputting the target image feature (e.g., "target image feature" prior to an "initial adjustment") and an initial offset parameter (e.g., "average image feature" used to perform the "initial adjustment" of the "target image feature") at an initial iteration of the image generation network (e.g., "initial adjustment")); inputting the target feature and the target offset parameter to the current iteration of the image generation network, to generate a current image (paras. [0128] and [0132], see the citation in the claim 1 limitation "inputting the target feature and the target offset parameter to…" above, where the "intermediate reconstructed image (i.e., a reconstructed image corresponding to the target image feature)" is a current image generated by inputting the target feature (e.g., "target image feature") and the target offset parameter (e.g., an adjusted initial "target deviation", i.e., "adding the feature deviation output by the neural network to the cropped average value") at a current iteration of the image generation network (e.g., "readjust[ment]")); and calculating the initial image and the current image, to obtain the network constraint (paras. [0128] and [0132], see the citation in the claim 1 limitation "inputting the target feature and the target offset parameter to…" above, where generating the initial image and the current image as needed to obtain the "training error" is calculating the initial image and the current image to obtain the network constraint).

Regarding claim 16, the claim recites similar limitations to claim 5 and is rejected for similar rationale and reasoning (see the analysis for claim 5 above). Regarding claim 17, the claim recites similar limitations to claim 6 and is rejected for similar rationale and reasoning (see the analysis for claim 6 above). Regarding claim 18, the claim recites similar limitations to claim 7 and is rejected for similar rationale and reasoning (see the analysis for claim 7 above). Regarding claim 19, the claim recites similar limitations to claim 8 and is rejected for similar rationale and reasoning (see the analysis for claim 8 above).

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Shi as applied to claim 1 above, and further in view of Sankaranarayanan et al. (Sankaranarayanan; "Semantic uncertainty intervals for disentangled latent spaces," July 20th, 2022).

Regarding claim 11, Shi discloses the image optimization method according to claim 1, and Sankaranarayanan teaches, in the same field of endeavor of image generation using at least a StyleGAN image generation model, the method further comprising: performing image deterioration processing on an original image, to obtain the to-be-optimized image (subheadings "Model architectures" and "Model training" under section 3.2 on pg. 6 recite: [Model architectures] "In all our experiments, we use the StyleGAN2 [21] framework for the generator architecture G. …" [Model training] "We start by pretraining the generative model or acquiring an off-the-shelf pretrained generative model for the task at hand. In generative models such as StyleGAN, the style space that offers fine grained control over image attributes, is very high dimensional. … For the image super-resolution training, we augment the input dataset by using different levels of downsampled inputs, i.e., we take the raw input and apply a random downsampling factor from {1,4,8,16,32} and resize it to the original dimensions. …"; where "downsampl[ing]" input images is deterioration processing on an original image (e.g., the "raw input") to obtain the to-be-optimized image (e.g., the downsampled input used for "super-resolution" training)).

Since Shi discloses that the image generation model is a StyleGAN model or a StyleGAN2 model for generating images of improved quality (paras. [0076-0077] recite: [0076] "In some embodiments, the image generation model is a Generative Adversarial Network (GAN), and accordingly, by utilizing the advantages of the GAN in terms of image generation, the image reconstruction quality of the image generation model and the quality of image frames of the target video are improved." [0077] is quoted in the rejection of claim 2 above), it would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Shi to incorporate image deterioration processing on an original image to obtain the to-be-optimized image, to train the image generation model for the task of super-resolution image generation, as taught by Sankaranarayanan above.
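The "image deterioration processing" that these rejections read onto Karras's discriminator downsampling and Sankaranarayanan's super-resolution augmentation can be pictured as down-and-up resampling of an image before a loss is computed. The sketch below is a minimal, hypothetical illustration; the bilinear mode and fixed factor are assumptions (Sankaranarayanan draws the factor at random from {1,4,8,16,32}).

```python
import torch
import torch.nn.functional as F

def deteriorate(img, factor=4):
    """Illustrative image deterioration: bilinear downsample, then
    resize back to the original resolution. img: (N, C, H, W) tensor.
    Hypothetical stand-in, not code from any cited reference."""
    h, w = img.shape[-2:]
    small = F.interpolate(img, scale_factor=1 / factor,
                          mode="bilinear", align_corners=False)
    return F.interpolate(small, size=(h, w),
                         mode="bilinear", align_corners=False)

# E.g., a "second image" could be deteriorate(first_image) before the
# offset loss against the to-be-optimized image is computed.
```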
Allowable Subject Matter

Claims 3-4 and 14-15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Regarding claims 3 and 14: Using claim 3 as an example, the examiner found no cited prior art that, in its entirety, teaches the following combination in the context of the claim as a whole, nor any motivation to combine the cited prior art to teach it:

"The image optimization method according to claim 2, wherein the selecting the target feature from the center features comprises: inputting the center features to the image generation network, to generate center images; determining a target image from the center images, the target image being one of the center images that meets the preset similarity condition with the to-be-optimized image; and determining the center feature corresponding to the target image as the target feature."

Thus, claims 4 and 15 would also be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, in view of their dependency on these claims.
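For reference, the combination indicated allowable in claims 3/14 is itself a simple selection loop: render each center feature, score the rendered center images against the to-be-optimized image, and keep the feature whose image satisfies the preset similarity condition. The sketch below is a hypothetical illustration; G, similarity, and centers are assumed stand-ins, and the argmax is just one way to instantiate "meets the preset similarity condition".

```python
def select_target_feature(G, centers, image, similarity):
    """Sketch of the indicated-allowable claims 3/14. G: image
    generation network; centers: list of cluster-center features;
    image: the to-be-optimized image; similarity: callable returning
    a float score. All names are hypothetical."""
    center_images = [G(c) for c in centers]        # render each center feature
    scores = [similarity(ci, image) for ci in center_images]
    best = max(range(len(scores)), key=scores.__getitem__)
    return centers[best]          # center feature of the target image
```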
As a non-limiting example, a close prior art, Yoo et al. (Yoo; "RZSR: Reference-based Zero-Shot Super-Resolution with Depth Guided Self-Exemplars," 2022), discloses in the abstract, the 1st and 3rd paras. of section III(B) on pgs. 3 and 4, and Fig. 2:

[abstract] "Recent methods for single image super-resolution (SISR) have demonstrated outstanding performance in generating high-resolution (HR) images from low-resolution (LR) images. However, most of these methods show their superiority using synthetically generated LR images, and their generalizability to real-world images is often not satisfactory. In this paper, we pay attention to two well-known strategies developed for robust super-resolution (SR), i.e., reference-based SR (RefSR) and zero-shot SR (ZSSR), and propose an integrated solution, called reference-based zero-shot SR (RZSR). Following the principle of ZSSR, we train an image-specific SR network at test time using training samples extracted only from the input image itself. To advance ZSSR, we obtain reference image patches with rich textures and high-frequency details which are also extracted only from the input image using cross-scale matching. To this end, we construct an internal reference dataset and retrieve reference image patches from the dataset using depth information. Using LR patches and their corresponding HR reference patches, we train a RefSR network that is embodied with a non-local attention module. Experimental results demonstrate the superiority of the proposed RZSR compared to the previous ZSSR methods and robustness to unseen images compared to other fully supervised SISR methods."

[1st para. of section "B. Reference Patch Retrieval"] "Similar to the previous ZSSR methods [15], [22], we can obtain LR-HR patch pairs from the original LR image and its downsampled version. We call the corresponding LR-HR patches LR son and HR father, respectively. Toward RZSR, we seek an HR cousin that can serve as a reference image for RefSR of LR son."

[The 3rd para. of section "B. Reference Patch Retrieval" and Fig. 2 are reproduced as images in the original office action and are not rendered here.]

Although Yoo discloses generating center images (e.g., "centroid patches") by inputting center features (e.g., "VGG features") generated by clustering centers (e.g., "patch clusters") into an image generation network (e.g., "VGG network"), Yoo does not disclose and/or reasonably teach the allowable claims as a whole, particularly all the features recited in claim 1. Furthermore, it would not have been reasonable to combine Yoo to resolve the deficiencies of Shi in view of Tao as applied in the rejection of claim 2 above, as Shi discloses a different image generation model architecture and loss in comparison to Yoo (e.g., para. [0077] of Shi discloses a Style-Based Architecture for GANs, while Fig. 4 of Yoo discloses an SR network not of GAN architecture). Therefore, claims 3 and 14 would be allowable over Yoo if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JULIA Z YAO whose telephone number is (571)272-2870. The examiner can normally be reached Monday - Friday (8:30AM - 5PM).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Emily Terrell, can be reached at (571)270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.Z.Y./
Examiner, Art Unit 2666

/EMILY C TERRELL/
Supervisory Patent Examiner, Art Unit 2666

Prosecution Timeline

Jan 24, 2024: Application Filed
Jan 24, 2026: Non-Final Rejection (§102, §103)
Feb 27, 2026: Applicant Interview (Telephonic)
Feb 27, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597169
ACTIVITY PREDICTION USING PORTABLE MULTISPECTRAL LASER SPECKLE IMAGER
2y 5m to grant; granted Apr 07, 2026
Patent 12586219
Fast Kinematic Construct Method for Characterizing Anthropogenic Space Objects
2y 5m to grant; granted Mar 24, 2026
Patent 12579638
IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM FOR PERFORMING DETERMINATION REGARDING DIAGNOSIS OF LESION ON BASIS OF SYNTHESIZED TWO-DIMENSIONAL IMAGE AND PRIORITY TARGET REGION
2y 5m to grant; granted Mar 17, 2026
Patent 12562063
METHOD FOR DETECTING ROAD USERS
2y 5m to grant; granted Feb 24, 2026
Patent 12561805
METHODS AND SYSTEMS FOR GENERATING DUAL-ENERGY IMAGES FROM A SINGLE-ENERGY IMAGING SYSTEM BASED ON ANATOMICAL SEGMENTATION
2y 5m to grant; granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 68%
With Interview: 99% (+35.7%)
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 69 resolved cases by this examiner. Grant probability derived from career allow rate.
