Prosecution Insights
Last updated: April 19, 2026
Application No. 18/823,613

PRIOR FOR HIGH-RESOLUTION IMAGE SYNTHESIS

Non-Final OA (§102, §103, §112)
Filed
Sep 03, 2024
Examiner
GHEBRETINSAE, TEMESGHEN
Art Unit
2626
Tech Center
2600 — Communications
Assignee
Google LLC
OA Round
1 (Non-Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 4y 3m
With Interview: 78%

Examiner Intelligence

Career Allow Rate: 75% (118 granted / 158 resolved), +12.7% vs TC avg; grants above average
Interview Lift: +3.8% (minimal lift), based on resolved cases with interview
Typical Timeline: 4y 3m avg prosecution; 7 currently pending
Career History: 165 total applications across all art units

Statute-Specific Performance

§101: 6.1% (-33.9% vs TC avg)
§103: 40.2% (+0.2% vs TC avg)
§102: 25.3% (-14.7% vs TC avg)
§112: 18.0% (-22.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 158 resolved cases.
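Each delta shown above is simply the examiner's career rejection rate for a statute minus the Tech Center average for that statute. A minimal sketch of the arithmetic (figures taken from the chart above; variable names are illustrative):

```python
# Examiner career rejection rates per statute (percent), from the chart above.
examiner_rates = {"101": 6.1, "103": 40.2, "102": 25.3, "112": 18.0}

# Deltas vs Tech Center average, also from the chart above.
deltas = {"101": -33.9, "103": 0.2, "102": -14.7, "112": -22.0}

# Recover the implied TC average for each statute: tc_avg = rate - delta.
tc_averages = {s: round(examiner_rates[s] - deltas[s], 1) for s in examiner_rates}

for statute in examiner_rates:
    delta = examiner_rates[statute] - tc_averages[statute]
    print(f"§{statute}: {examiner_rates[statute]:.1f}% ({delta:+.1f}% vs TC avg)")
```

Notably, the implied TC average works out to 40.0% for every statute, which may suggest the dashboard compares against a single TC-wide baseline rather than per-statute averages.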

Office Action

§102 §103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Claim Rejections - 35 USC § 112

Claim 9 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. In claim 9, "the model" lacks a clear antecedent basis.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless - (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 4, 8-12, 15, and 19-20 are rejected under 35 U.S.C.
102(a)(1) as being anticipated by OZKAN et al. (US 2024/0303883 A1).

Regarding claim 1, OZKAN teaches: "A method comprising: determining a viewpoint; generating a first image using an image generator, the first image including an object in a first orientation based on the viewpoint" - see claim 1 and paragraph [0034] (obtaining an image depicting at least one human face and using the trained ML model to: determine, for the obtained image, visual features of one human face in the image; generate, using the determined visual features, at least one representation in vector space which encodes a specific attribute of the human face in the image); "modifying the image generator based on a second orientation of the object" (modify, in vector space, one or more of the at least one generated representation); "and generating a second image based on the first image using the modified image generator" (and generate, using the or each modified generated representation, a modified image).

Claim 4. The method of claim 1, wherein the modifying of the image generator includes modifying a weight of the image generator that corresponds to the second orientation of the object. See paragraphs [0049] and [0132]-[0133] (As shown in FIG. 9, the experiments were conducted by modifying illumination (illum), pose, expression (expr) and all three attributes simultaneously (all). The present techniques yield significantly better results in terms of identity scores compared to baselines. The results show that the present techniques only manipulate the targeted part(s) without changing identities.)

Claim 8. The method of claim 1, wherein the image generator includes a neural network, and modifying of the image generator includes changing a weight associated with the neural network that corresponds to a viewing direction of the object. See paragraph [0049] (The AI model may consist of a plurality of neural network layers. Each layer has a plurality of weight values, and performs a layer operation through calculation of a previous layer and an operation of a plurality of weights. Examples of neural networks include, but are not limited to, convolutional neural network (CNN), deep neural network (DNN), recurrent neural network (RNN), restricted Boltzmann machine (RBM), deep belief network (DBN), bidirectional recurrent deep neural network (BRDNN), generative adversarial networks (GAN), and deep Q-networks.)

Claim 9. The method of claim 1, wherein the model is trained to generate a 3D image using two-dimensional images. See paragraph [0110] (Here, 3D morphable face models are used as a relaxed data generation pipeline for the generative model, since multiple face renderings can be generated for the same person by preserving or discarding some of the facial attributes.)

Claim 10. The method of claim 1, further comprising: receiving a third image including a human head, wherein the first image is generated based on the third image and the object includes a portion of the human head. ([0010] The present techniques allow individual aspects of an image of a face to be edited independently. For example, head pose can be edited without impacting facial expression.)

Claim 11. The method of claim 1, wherein determining a viewpoint includes at least one of: determining an association between a light ray and a pixel, determining a focal length, determining a pixel size, determining an image origin, or determining a pose. ([0010] The present techniques allow individual aspects of an image of a face to be edited independently. For example, head pose can be edited without impacting facial expression. The present techniques involve projecting an image into vector space, where the vector space (also referred to herein as representation space) encodes contents or aspects of the image of the face, such as pose, expression, illumination, and likeness. In the present techniques, the representation space is used for editing images of faces and generating new versions of the original image of a face, while enabling human-understandable/human-controllable parameterisation. As noted above, at least one representation may be generated, where each representation encodes a specific attribute. In cases where multiple representations are generated, each representing a different specific attribute, the ML model may be used to modify one or more than one of the representations. That is, it is not necessary to modify all representations that are generated.)

Claims 12 and 20 are rejected under the same rationale as claim 1, since both recite the same claimed limitations. Claim 15 is rejected the same as claim 4, and claim 19 the same as claim 10.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 12, and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Theobald et al. (US 2012/0097730 A1).

Regarding claims 1, 12, and 20, Theobald et al. teaches: "A method comprising: determining a viewpoint; generating a first image using an image generator, the first image including an object in a first orientation based on the viewpoint" ([0073] Operation 651 includes obtaining an input image that depicts a face of a subject. The face of the subject has an initial facial expression and an initial pose. The input image in operation 651 may be the input image 102 as previously described. The initial facial expression and the initial pose are the facial expression and pose that are observable in the input image.); "modifying the image generator based on a second orientation of the object" ([0074] Operation 652 includes determining a reference shape description based on the input image. The reference shape description may be a statistical representation of face shape determined using a trained machine learning model as described with respect to the reference shape description 228. [0075] Operation 653 includes determining a target shape description based on the reference shape description, a facial expression difference (e.g., the expression difference 226), and a pose difference (e.g., the pose difference 227). [0076] Operation 654 includes generating a rendered target shape image using the target shape description. The rendered target shape image represents face shape, expression, and pose.); "and generating a second image based on the first image using the modified image generator" ([0077] Operation 655 includes generating an output image based on the input image (e.g., the input image 102) and the rendered target shape image (e.g., the rendered target shape 333) using an image generator, such as the image generator 112. The output image in operation 655 may be consistent with the description of the generated image 541 and is a simulated image of the subject of the input image that has a final expression that is based on the initial facial expression and the facial expression difference, and a final pose that is based on the initial pose and the pose difference. The image generator in operation 655 may be a machine learning model that is trained to constrain generation of the output image based on the input image such that the output image appears to depict the subject of the input image. The image generator may be a trained generator from a generative adversarial network that is trained using discriminators that determine whether a person depicted in the output image is the subject of the input image as described with respect to the image generator training system 440.)

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 2-3 and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over OZKAN et al. (US 2024/0303883 A1) in view of GAUTAM et al. (WO 2024091969). Ozkan teaches claim 1 as described above; see the claim 1 rejection.
However, Ozkan does not teach "wherein the image generator is configured to predict a relationship between a pixel channel and a density in a three-dimensional (3D) plane" as claimed in claim 2, or "wherein the density indicates a 3D spatial smoothness" as claimed in claim 3.

However, GAUTAM et al. (WO 2024091969), in the same analogous art (i.e., image processing), teaches:

Neural Radiance Field (NeRF) [0018]: Under the neural-field framework, field quantities are produced by sampling coordinates and feeding the sampled coordinates into a neural network. For example, Neural Radiance Field (NeRF) is an implicit 3D scene representation that takes the spatial location (x, y, z) and the viewing direction (θ, ϕ) as inputs and generates the corresponding predicted color texture and volume density as outputs. The corresponding neural network can be trained, e.g., using a set of 2D images with known camera poses and pertinent intrinsic information. After having been trained, the neural network can be used to render arbitrary views of the 3D scene by (i) querying the corresponding 3D positions and viewing directions for the various pixels in the views and (ii) performing volume rendering to construct a projected 2D image.

Ozkan and Gautam both relate to image-generating devices; thus, one of ordinary skill in the art, before the effective filing date of the claimed invention, would have recognized the obviousness of modifying the image generator of Ozkan with Gautam's teaching of predicting a relationship between a pixel channel and a density in a three-dimensional (3D) plane, since it would have provided additional functionality and improved the generated image. Claims 13-14 are rejected under the same rationale as claims 2 and 3, respectively.

Allowable Subject Matter

Claims 5-7 and 16-18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TEMESGHEN GHEBRETINSAE, whose telephone number is (571) 272-3017. The examiner can normally be reached Monday-Friday, 7:30 a.m.-4:00 p.m.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, TEMESGHEN GHEBRETINSAE, can be reached at 571-272-3017. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TEMESGHEN GHEBRETINSAE/
Supervisory Patent Examiner, Art Unit 2626
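The Gautam reference relied on for claims 2-3 and 13-14 describes a NeRF-style field: a network that takes a spatial location (x, y, z) and viewing direction (θ, ϕ) as inputs and outputs a predicted color and volume density. A minimal sketch of such a query, with untrained random weights (purely illustrative; not the reference's actual model):

```python
import numpy as np

# Tiny two-layer network standing in for a NeRF field function:
# (x, y, z, theta, phi) -> (r, g, b, sigma). Weights are random,
# so the outputs are meaningless; only the input/output shape and
# range constraints of a NeRF query are illustrated.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(5, 64))   # input layer: 5-D coordinate + direction
W2 = rng.normal(size=(64, 4))   # output layer: color (3) + density (1)

def query_field(x, y, z, theta, phi):
    h = np.tanh(np.array([x, y, z, theta, phi]) @ W1)  # hidden features
    out = h @ W2
    rgb = 1.0 / (1.0 + np.exp(-out[:3]))  # sigmoid: colors in [0, 1]
    sigma = np.log1p(np.exp(out[3]))      # softplus: non-negative density
    return rgb, sigma

rgb, sigma = query_field(0.1, -0.2, 0.3, 0.0, 1.57)
```

A trained model of this shape would be queried per pixel along camera rays, with the resulting colors and densities combined by volume rendering to form the projected 2D image the reference describes.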

Prosecution Timeline

Sep 03, 2024
Application Filed
Nov 12, 2024
Response after Non-Final Action
Mar 03, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579731
SUB-PIXEL CURVE RENDERING IN CONTENT GENERATION SYSTEMS AND APPLICATIONS
2y 5m to grant Granted Mar 17, 2026
Patent 12488727
GATE DRIVING DEVICE AND OPERATING METHOD FOR GATE DRIVING DEVICE
2y 5m to grant Granted Dec 02, 2025
Patent 12481471
ELECTRONIC DEVICE FOR SHARING SCREEN WITH EXTERNAL DEVICE AND METHOD FOR CONTROLLING THE SAME
2y 5m to grant Granted Nov 25, 2025
Patent 12423903
Cloud Image Rendering for Concurrent Processes
2y 5m to grant Granted Sep 23, 2025
Patent 8031823
SYSTEM AND METHOD FOR ADAPTIVELY DESKEWING PARALLEL DATA SIGNALS RELATIVE TO A CLOCK
2y 5m to grant Granted Oct 04, 2011
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 75%
With Interview: 78% (+3.8%)
Median Time to Grant: 4y 3m
PTA Risk: Low
Based on 158 resolved cases by this examiner. Grant probability derived from career allow rate.
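The headline projections follow directly from the career counts shown above; a sketch of the derivation (assuming the interview lift is added as simple percentage points, which matches the displayed figures):

```python
granted, resolved = 118, 158            # examiner's career counts, from above
allow_rate = 100 * granted / resolved   # 74.68...%, displayed as 75%
interview_lift = 3.8                    # percentage-point lift with interview

print(f"Grant probability: {allow_rate:.0f}%")               # prints 75%
print(f"With interview: {allow_rate + interview_lift:.0f}%") # prints 78%
```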
