Prosecution Insights
Last updated: April 19, 2026
Application No. 18/666,049

GENERATING 3D ANIMATED IMAGES FROM 2D STATIC IMAGES

Non-Final OA (§103)

Filed: May 16, 2024
Examiner: YANG, YI
Art Unit: 2616
Tech Center: 2600 — Communications
Assignee: Google LLC
OA Round: 1 (Non-Final)

Grant Probability: 71% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
Grant Probability With Interview: 88%

Examiner Intelligence

Career Allow Rate: 71%, above average (295 granted / 415 resolved; +9.1% vs TC avg)
Interview Lift: +17.2%, a strong lift among resolved cases with interview
Typical Timeline: 2y 9m average prosecution; 39 applications currently pending
Career History: 454 total applications across all art units
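
These panel figures are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic; the with/without-interview allow rates below are placeholders chosen to reproduce the reported lift, since the page does not publish the underlying split:

```python
# Career allow rate: granted / resolved (the panel's 71%).
granted, resolved = 295, 415
allow_rate = granted / resolved                 # ~0.711 -> 71%

# Interview lift: allow rate among resolved cases that had an interview
# minus the rate among those that did not. The two split rates below are
# placeholders (the page reports only the +17.2-point difference).
rate_with_interview = 0.883                     # placeholder assumption
rate_without_interview = 0.711                  # placeholder assumption
lift = rate_with_interview - rate_without_interview

print(f"allow rate: {allow_rate:.1%}, interview lift: {lift:+.1%}")
```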

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§103: 76.0% (+36.0% vs TC avg)
§102: 2.7% (-37.3% vs TC avg)
§112: 3.3% (-36.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 415 resolved cases.

Office Action (§103)
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 10-11, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Moll (U.S. Patent Application 20160019709) in view of Short (U.S. Patent Application 20190026931), and further in view of Karafin (U.S. Patent Application 20120235988).

Regarding claim 11, Moll discloses a computing device configured to convert two-dimensional (2D) static images to three-dimensional (3D) animated images, the computing device comprising one or more processors and a computer-readable medium storing instructions that, when executed, cause the one or more processors (paragraph [0068]: FIG. 13... the computing device 200 includes a bus 202 that directly or indirectly couples the following devices: memory 204, one or more processors 206) to:

- receive one or more 2D static images, each depicting a respective environment (paragraph [0092]: In step 291, a user colors a two-dimensional template image including a two-dimensional character; paragraph [0093]: In step 296, the colored two-dimensional template image is captured; paragraph [0060]: in FIG. 6, an exemplary template page 128 includes a template sheet 130 having colorable background 132 with coloring content 134 for generating an animated segment);
- generate a 3D object based on a 2D static image (paragraph [0093]: In step 297, the application generates a three-dimensional character based on the two-dimensional character such that the three-dimensional character is colored to correspond to the two-dimensional template image);
- determine a visual perspective trajectory along the 3D object, the trajectory indicative of simulated movement within a 3D animated image associated with depth in the respective environment (paragraph [0093]: The three-dimensional character is animated in step 298; paragraph [0047]: if the character comes toward or away from the camera it may provide a similar effect by duplicating the current environment and placing it either in front of or behind the current scene, creating depth to the current scene in a seamless backdrop while incorporating animated features of a selected background and/or user-colored features of a template environment); and
- generate the 3D animated image based on the 3D object and the visual perspective trajectory such that the 3D animated image replicates the simulated movement (paragraph [0093]: The three-dimensional character is animated in step 298. Finally, in step 299 the application superimposes the three-dimensional character over images captured through the camera of the mobile device such that the three-dimensional character is included in a user's environment).

Moll discloses all the features of claim 11 as outlined above, but fails to disclose generating a 3D mesh, or a visual perspective trajectory indicative of simulated movement at least partially along an axis in the respective environment depicted by the 2D static image.

Short discloses determining a visual perspective trajectory along the 3D object, the trajectory indicative of simulated movement within a 3D animated image at least partially along an axis in the respective environment depicted by the 2D static image (paragraph [0043]: FIG. 4B depicts an example of a 3D representation of a plan 410 corresponding to the 2D representation of the plan 400... the symbolic elements 414a-414i, 416a-416i can be represented by realistically proportionate 3D representations of the symbolic elements (e.g., athletic players). In some implementations, the action elements 418a-418d can be represented by multiple different types of vectors (e.g., arrows or limited segments) to indicate open trajectories or limited trajectories). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Moll to determine a trajectory as taught by Short, in order to use a 2D illustration for dynamically generating 3D animations.

Moll as modified by Short still fails to disclose generating a 3D mesh explicitly. Karafin discloses generating a 3D mesh (paragraph [0029]: The three-dimensional images are created using a process comprising one or more of: generating a three-dimensional mesh form, repositioning a pixel or group of pixels through depth parameter adjustment). Therefore, it would have been obvious before the effective filing date of the claimed invention to further modify Moll and Short to generate a 3D mesh as taught by Karafin, in order to facilitate conversion of two-dimensional images into three-dimensional images.

Regarding claim 20, Moll as modified by Short and Karafin discloses the computing device of claim 11, wherein determining the visual perspective trajectory is based on one or more salient objects in the respective environment of the 2D static image (Short's paragraph [0043], quoted above). The same combination rationale applies.

Claims 1 and 10 recite the functions of the apparatuses of claims 11 and 20, respectively, as method steps; the prior art mappings above apply equally to those method steps.
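
Before the combination arguments continue, it helps to see the shape of independent claim 11. The sketch below is a minimal runnable skeleton of the four claimed steps; every function body is a stand-in invented for illustration and does not come from the application or the cited references:

```python
import numpy as np

def generate_3d_mesh(image_2d: np.ndarray) -> np.ndarray:
    """Stand-in mesh: a height field whose z values come from intensity."""
    h, w = image_2d.shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.stack([xs, ys, image_2d.astype(float)], axis=-1)

def determine_perspective_trajectory(mesh: np.ndarray, n_frames: int = 8) -> np.ndarray:
    """Stand-in trajectory: dolly the virtual camera along the depth axis,
    i.e. simulated movement at least partially along an axis of the scene."""
    z_far = mesh[..., 2].max()
    return np.linspace(0.0, 0.5 * z_far, n_frames)

def render(mesh: np.ndarray, cam_z: float) -> np.ndarray:
    """Stand-in renderer: shift apparent depth as the camera advances."""
    return np.clip(mesh[..., 2] - cam_z, 0.0, None)

def convert_2d_to_3d_animation(image_2d: np.ndarray) -> list:
    mesh = generate_3d_mesh(image_2d)                    # claimed step 2
    trajectory = determine_perspective_trajectory(mesh)  # claimed step 3
    return [render(mesh, z) for z in trajectory]         # claimed step 4

frames = convert_2d_to_3d_animation(np.random.rand(64, 64))  # step 1: receive image
print(len(frames), frames[0].shape)  # 8 frames, each 64x64
```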
Claims 2 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Moll in view of Short and Karafin, and further in view of Shastri (U.S. Patent 8982185).

Regarding claim 12, Moll as modified by Short and Karafin discloses that generating the 3D mesh is responsive to determining to convert the 2D static image into the 3D animated image (Short's paragraph [0050]: The 3D animation is automatically generated based on the symbolic elements, the action elements, and the rules (508); Karafin's paragraph [0029]: The three-dimensional images are created using a process comprising one or more of: generating a three-dimensional mesh form). However, the combination fails to disclose analyzing the one or more 2D static images to determine, based on one or more respective characteristics of those images, whether to convert the 2D static image into the 3D image.

Shastri discloses this analysis (col. 4, lines 1-6: "Suitable" in this context relates to objective criteria for measuring whether or not a result of converting the 2D image data to 3D image data is likely to be of a quality such that the resulting 3D image data is viewable, where the "viewable" aspect can also be based on objective criteria or metrics). Therefore, it would have been obvious before the effective filing date of the claimed invention to further modify the combination to determine whether to convert a 2D image into a 3D image as taught by Shastri, in order to convert two-dimensional content in media to three-dimensional content.

Claim 2 recites the functions of the apparatus of claim 12 as method steps; the same mapping applies.
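
Shastri's "suitability" analysis is easiest to picture as a gate run before the pipeline above: convert only when objective criteria predict viewable 3D output. A minimal sketch; the metrics and thresholds are illustrative assumptions, not from the record:

```python
import numpy as np

def should_convert(image_2d: np.ndarray,
                   min_side: int = 64,
                   min_contrast: float = 0.05) -> bool:
    """Gate on objective criteria predicting whether 2D-to-3D conversion
    would yield viewable output (illustrative metrics and thresholds)."""
    h, w = image_2d.shape[:2]
    if min(h, w) < min_side:             # too small to yield a useful mesh
        return False
    contrast = float(np.std(image_2d) / (np.ptp(image_2d) + 1e-8))
    return contrast >= min_contrast      # near-flat images give poor depth cues

print(should_convert(np.random.rand(128, 128)))  # True for this input
```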
Claims 7-8 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Moll in view of Short and Karafin, and further in view of Asawaroengchai (U.S. Patent Application 20210158554).

Regarding claim 17, Moll as modified by Short and Karafin discloses all the features of claim 11 as outlined above, but fails to disclose estimating, by the one or more processors and using a trained depth estimation model, a depth map representative of perceived depths associated with the 2D static image, wherein generating the 3D mesh is based on the depth map.

Asawaroengchai discloses estimating such a depth map with a trained model (paragraph [0038]: The machine learning model may be trained on and optimized for depth estimation of landscapes and different types of recognized objects) and generating the 3D mesh from it (paragraph [0017]: The input images and their respective depth maps may then be used in a 3D mesh generation process to create 3D model representations of the input images). Therefore, it would have been obvious before the effective filing date of the claimed invention to further modify the combination to estimate a depth map as taught by Asawaroengchai, in order to automatically generate depth map images from two-dimensional (2D) images.

Regarding claim 18, Moll as modified by Short, Karafin, and Asawaroengchai discloses the computing device of claim 11, wherein the computer-readable medium further stores instructions that, when executed, cause the one or more processors to generate, using a trained generative neural network, one or more predicted realistic extensions of the 2D static image (Asawaroengchai's paragraph [0069]: Segmentation map 625 may be used as input into additional neural networks or machine learning models, such as models trained to estimate depth of human faces or foreground objects. Additionally, individual segmented regions 610, 615, and 620 (realistic extensions) generated from the first neural network may also be used as inputs to additional neural networks or machine learning models), wherein generating the 3D mesh is further based on the one or more predicted realistic extensions (Asawaroengchai's paragraph [0017], quoted above). The same combination rationale applies.

Claims 7 and 8 recite the functions of the apparatuses of claims 17 and 18, respectively, as method steps; the same mappings apply.
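
Claim 17's path (trained depth model, then depth map, then mesh) follows the standard depth-map-to-mesh construction: back-project each pixel to a vertex and triangulate neighboring pixels. A sketch with the model call stubbed out, since the record does not identify a particular network:

```python
import numpy as np

def estimate_depth(image_2d: np.ndarray) -> np.ndarray:
    """Stub for the trained depth estimation model; a real system would
    run a monocular depth network here."""
    return image_2d.astype(float)  # pretend intensity approximates depth

def mesh_from_depth(depth: np.ndarray):
    """Standard construction: one vertex per pixel, two triangles per
    quad of neighboring pixels."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    vertices = np.stack([xs, ys, depth], axis=-1).reshape(-1, 3)
    idx = np.arange(h * w).reshape(h, w)
    a, b = idx[:-1, :-1].ravel(), idx[:-1, 1:].ravel()
    c, d = idx[1:, :-1].ravel(), idx[1:, 1:].ravel()
    faces = np.concatenate([np.stack([a, b, c], 1), np.stack([b, d, c], 1)])
    return vertices, faces

v, f = mesh_from_depth(estimate_depth(np.random.rand(32, 32)))
print(v.shape, f.shape)  # (1024, 3) vertices, (1922, 3) triangle faces
```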
Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Moll in view of Short and Karafin, and further in view of Lin (U.S. Patent Application 20210342984).

Regarding claim 19, Moll as modified by Short and Karafin discloses overlaying the 2D static image and the 3D mesh to generate a 3D depth overlay (Moll's paragraph [0093]: in step 299 the application superimposes the three-dimensional character over images captured through the camera of the mobile device such that the three-dimensional character is included in a user's environment; Karafin's paragraph [0180]: FIG. 48B is an illustration of an image after depth mastering). However, the combination fails to disclose reconstructing, using a trained inpainting model, one or more missing or stretched regions in the 3D depth overlay.

Lin discloses this reconstruction (paragraph [0090]: For missing regions reconstructed using these partially valid feature patches, they can be used as designated portions (e.g., holes) and run using a pre-trained inpainting model). Therefore, it would have been obvious before the effective filing date of the claimed invention to further modify the combination to reconstruct missing regions as taught by Lin, in order to generate better quality images.

Claim 9 recites the functions of the apparatus of claim 19 as method steps; the same mapping applies.
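
Claim 19's inpainting step fills the disocclusion holes (missing or stretched regions) that appear when the 2D image is draped over the mesh and re-projected. The sketch below uses OpenCV's classical cv2.inpaint purely as a stand-in for the trained inpainting model Lin describes; it only marks where that step sits in the pipeline:

```python
import cv2
import numpy as np

# A 3D depth overlay with a hole where the warp exposed unseen pixels.
overlay = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
hole_mask = np.zeros((64, 64), dtype=np.uint8)
hole_mask[20:30, 20:40] = 255  # nonzero pixels mark the region to fill

# Classical diffusion-based fill; Lin's cited approach would instead run
# a pre-trained (learned) inpainting model on the masked regions.
repaired = cv2.inpaint(overlay, hole_mask, 3, cv2.INPAINT_TELEA)
print(repaired.shape)  # (64, 64, 3)
```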
Allowable Subject Matter

Claims 3-6 and 13-16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter. For each pair below, Moll (20160019709), Short (20190026931), Karafin (20120235988), and Shastri (8982185), alone or in combination, do not teach the recited features; these limitations, read in light of the remaining limitations of the claim and the claims from which it depends, render the claim allowable.

- Claims 3 and 13: the one or more respective characteristics include one or more respective quality metrics, and the analyzing includes determining, by the one or more processors, a respective quality metric for each 2D static image using a trained image quality model, and filtering the images to discard those with quality metrics below a predetermined quality threshold.
- Claims 4 and 14: the characteristics include respective text quantity metrics, and the analyzing includes determining a respective text quantity metric for each image using an optical character recognition (OCR) model, and filtering the images to discard those with text quantity metrics below a predetermined text quantity threshold.
- Claims 5 and 15: the characteristics include respective logo indicators, and the analyzing includes determining a logo indicator for each image using a logo detection model, and filtering the images to discard those determined to include a logo.
- Claims 6 and 16: the characteristics include respective depth metrics, and the analyzing includes determining a respective depth metric for each image using a depth recognition model, and filtering the images to discard those with depth metrics below a predetermined depth threshold.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Yi Yang, whose telephone number is (571) 272-9589. The examiner can normally be reached Monday-Friday, 9:00 AM-6:00 PM EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel Hajnik, can be reached at (571) 272-7642. The fax number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR; status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/YI YANG/ Primary Examiner, Art Unit 2616
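
The four allowable claim families above all share one shape: score each candidate 2D image with a trained model, then discard images that fail a threshold. A combined sketch with stub metric functions; the thresholds and their directions are illustrative assumptions, not claim language:

```python
from typing import List
import numpy as np

# Stubs standing in for the claimed trained models.
def quality_score(img): return float(np.std(img))   # image quality model
def text_quantity(img): return 0.0                  # OCR-based text metric
def has_logo(img): return False                     # logo detection model
def depth_score(img): return float(np.ptp(img))     # depth recognition model

def filter_images(images: List[np.ndarray]) -> List[np.ndarray]:
    kept = []
    for img in images:
        if quality_score(img) < 0.05:   # claims 3/13: quality threshold
            continue
        if text_quantity(img) > 0.2:    # claims 4/14: text-heavy images
            continue
        if has_logo(img):               # claims 5/15: logo indicator
            continue
        if depth_score(img) < 0.10:     # claims 6/16: depth threshold
            continue
        kept.append(img)
    return kept

print(len(filter_images([np.random.rand(64, 64) for _ in range(3)])))  # 3
```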

Prosecution Timeline

May 16, 2024: Application Filed
Dec 30, 2025: Non-Final Rejection (§103)
Mar 24, 2026: Interview Requested
Apr 01, 2026: Applicant Interview (Telephonic)
Apr 01, 2026: Examiner Interview Summary
Apr 02, 2026: Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology:

Patent 12586304: PROGRAM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING SYSTEM (granted Mar 24, 2026; 2y 5m to grant)
Patent 12567129: Image Processing Method and Electronic Device (granted Mar 03, 2026; 2y 5m to grant)
Patent 12561276: SYSTEMS AND METHODS FOR UPDATING MEMORY SIDE CACHES IN A MULTI-GPU CONFIGURATION (granted Feb 24, 2026; 2y 5m to grant)
Patent 12541902: SIGN LANGUAGE GENERATION AND DISPLAY (granted Feb 03, 2026; 2y 5m to grant)
Patent 12541896: COMPUTER-BASED CONTENT PERSONALIZATION OF A VISUAL DISPLAY (granted Feb 03, 2026; 2y 5m to grant)

Study what changed to get past this examiner, based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 71%
With Interview: 88% (+17.2%)
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 415 resolved cases by this examiner. Grant probability is derived from the career allow rate.
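
The 88% figure appears to be the baseline grant probability plus the interview lift (a sketch of the apparent arithmetic; the tool's exact model is not published):

```python
base_probability = 0.71   # career allow rate used as the baseline
interview_lift = 0.172    # examiner's observed interview lift
print(f"with interview: {base_probability + interview_lift:.0%}")  # 88%
```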
