Prosecution Insights
Last updated: April 19, 2026
Application No. 18/773,916

SYSTEM AND METHOD FOR DYNAMIC IMAGES VIRTUALISATION

Non-Final OA — §103, §DP
Filed: Jul 16, 2024
Examiner: WANG, YUEHAN
Art Unit: 2617
Tech Center: 2600 — Communications
Assignee: Snap Inc.
OA Round: 1 (Non-Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 83% (above average) — 404 granted / 485 resolved, +21.3% vs TC avg
Interview Lift: +12.9% among resolved cases with interview (moderate lift)
Typical Timeline: 2y 7m average prosecution; 47 applications currently pending
Career History: 532 total applications across all art units

Statute-Specific Performance

§101: 4.3% (-35.7% vs TC avg)
§103: 69.6% (+29.6% vs TC avg)
§102: 8.3% (-31.7% vs TC avg)
§112: 6.6% (-33.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 485 resolved cases.
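Each per-statute delta above is simply the examiner's rate minus the Tech Center baseline, so the baselines can be recovered directly from the table; a minimal sketch, using only the figures quoted above:

```python
# Recover the implied Tech Center average per statute:
# TC average = examiner rate - reported delta (all values in percent).
rates  = {"101": 4.3,   "103": 69.6, "102": 8.3,   "112": 6.6}    # examiner
deltas = {"101": -35.7, "103": 29.6, "102": -31.7, "112": -33.4}  # vs TC avg

tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(tc_avg)  # every statute resolves to the same 40.0% baseline
```

Notably, all four deltas resolve to a single 40.0% figure, consistent with the caption's note that the Tech Center average is an estimate rather than a separately measured per-statute baseline.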

Office Action

§103, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines which form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. US 12079924 B2 (reference patent). Although the claims at issue are not identical, they are not patentably distinct from each other because the claims recite essentially the same structure and perform essentially the same function, and are therefore unpatentable under obviousness-type double patenting.

The following table illustrates the conflicting claim pairs:

Instant Appl.  | Reference Patent
1, 8, 15       | 1, 2
2, 9, 16       | 1
3, 10, 17      | 1
4, 11, 18      | 1
5, 12, 19      | 1
6, 13, 20      | 4
7, 14          | 5

Claims of the instant application are compared to claims of the reference patent below.

Instant application claim 1: A method of data compression, the method comprising: accessing at least one input image generated offline by static 2D computer generated imagery (CGI); subdividing each input image among the at least one input image into image tiles; and using an AI model trained to perform a data fetching prediction process that extracts an image tile from a memory before detection of any user viewing interest, the AI model creating at least one extrapolated output image that contains more visual data than a corresponding input image by inclusion of the image tile extracted from the memory before detection of any user viewing interest.

Reference patent claim 1: A dynamic images virtualization system, comprising: (i) a controller configured to perform digital image processing by using an AI model trained to perform data fetching prediction upon at least one input image generated by a static 2D computer generated imagery (CGI) and to create extrapolated output dynamic images that contain more visual data than the at least one input image generated by the static 2D CGI, the data fetching prediction by the AI model extracting a tile from a cache memory before detection of any indication of user viewing interest; and (ii) at least one display means configured to present the extrapolated output dynamic images to at least one user, the extrapolated output dynamic images containing more visual data than the at least one input image by including the tile extracted by the AI model from the cache memory before detection of any indication of user viewing interest, wherein the at least one input image is generated offline by the static 2D CGI prior to the data fetching prediction and the extrapolated output dynamic images are free viewpoint 3D images and wherein the data fetching prediction has a reduced-latency and results in production of the extrapolated output dynamic images that comprise novel images as well as novel multi-directional and image scenery parameters in comparison with the at least one input image generated offline by the static 2D CGI prior to the data fetching prediction.

Reference patent claim 2: The system of claim 1, wherein the at least one input image is subdivided into multiple image tiles.

Instant application claim 2: The method of claim 1, further comprising: presenting the at least one extrapolated output image that contains more visual data than the corresponding input image by inclusion of the image tile extracted from the memory before detection of any user viewing interest. (Mapped to reference patent claim 1, reproduced above.)

Instant application claim 3: The method of claim 1, wherein: the at least one input image is generated offline by the static 2D CGI prior to the AI model performing the data fetching prediction process that extracts the image tile from the memory before detection of any user viewing interest. (Mapped to reference patent claim 1, reproduced above.)

Instant application claim 4: The method of claim 1, wherein: the at least one input image is generated offline by the static 2D CGI prior to the AI model creating the at least one extrapolated output image that contains more visual data than the corresponding input image by inclusion of the image tile extracted from the memory before detection of any user viewing interest. (Mapped to reference patent claim 1, reproduced above.)

Instant application claim 5: The method of claim 1, wherein: the data fetching prediction process extracts the image tile from a local cache memory provided by a content delivery network before detection of any user viewing interest. (Mapped to reference patent claim 1, reproduced above.)

Instant application claim 6: The method of claim 1, wherein: the AI model creates the at least one extrapolated output image that contains more visual data than a corresponding input image by generating at least one future tile based on the at least one input image before detection of any user viewing interest.

Reference patent claim 4: The system of claim 1, wherein the reduced latency data fetching prediction produces the extrapolated output dynamic images by calculating and generating subsequent future tiles that are based on the at least one input image.
Instant application claim 7: The method of claim 1, wherein: the AI model creates the at least one extrapolated output image that contains more visual data than a corresponding input image by including at least one generated future tile in the at least one extrapolated output image before detection of any user viewing interest.

Reference patent claim 5: The system of claim 4, wherein each tile of the subsequent future tiles includes an array of visual data.

Claims 8-20 are rejected on the ground of nonstatutory double patenting for the same reason as claims 1-7.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Park et al. (US 20210158561 A1), referred to herein as Park, in view of LE CLERC et al. (US 20170178306 A1), referred to herein as CLERC, and Tuomi et al. (US 20210225060 A1), referred to herein as Tuomi.
Regarding Claim 1, Park in view of CLERC teaches a method of data compression, the method comprising (Park Abstract: Apparatuses, systems, and techniques estimate a pose of an object based on images generated from a combined image volume; [0353] ROP 2126 includes compression logic to compress depth or color data that is written to memory): accessing at least one input image (Park [0060] set of images is obtained based on a 3-D image volume…The plurality of images may comprise two-dimensional (2-D) images of an object).

Park does not, but CLERC teaches, generated offline by static 2D computer generated imagery (CGI) (CLERC [0031] generating, e.g. synthesizing, a first face in a first image, by determining a first occluded part of the first face that is occluded by an occluding object, for example a Head-Mounted Display (HMD); [0034] The first image 11 is for example a still image acquired with a digital still camera; [0515] virtual instruments may include software-defined applications for performing one or more processing operations with respect to imaging data generated by imaging devices).

Park does not, but Tuomi teaches, subdividing each input image among the at least one input image into image tiles (Tuomi [0045] As shown in block 404, the method 400 includes dividing the image into a plurality of tiles); and using an AI model trained to perform a data fetching prediction process that extracts an image tile from a memory before detection of any user viewing interest, the AI model creating at least one extrapolated output image that contains more visual data than a corresponding input image by inclusion of the image tile extracted from the memory before detection of any user viewing interest (Park [0064] the network design may build a 3-D voxel representation of an object by computing 2-D latent features and projecting them to a canonical 3-D voxel using a deprojection unit. This operation may be interpreted as space carving in latent space. The network may render a novel view by rotating the latent voxel representation to the new view and projecting it into the 2-D image space. Using the projected latent features, a decoder may generate a new view image by predicting the depth map of the object at the query view and assigning color for each pixel by combining corresponding pixel values at different reference views; [0065] to reconstruct and render unseen objects, the network may be trained on random 3-D meshes from a dataset; [0066] In at least one embodiment, a neural network is provided that reconstructs a latent representation of a target object given a limited set of reference views and renders the object from arbitrary viewpoints without additional training; [0516] use machine learning models or other AI to perform one or more processing steps).

CLERC discloses techniques estimating a pose of an object based on images generated from a combined image volume, which is analogous to the present patent application. It would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Park to incorporate the teachings of CLERC, and apply the imaging data generated by imaging devices and HMD synthesizing method to techniques for estimating a pose of an object based on images generated from a combined image volume. Doing so would provide a framework for immersive experiences in gaming, virtual reality, movie watching, or video conferences.

Tuomi discloses a method of tiled rendering of an image for display, which is analogous to the present patent application. It would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Park to incorporate the teachings of Tuomi, and apply the tile-based image rendering to techniques for estimating a pose of an object based on images generated from a combined image volume. Doing so would reduce the network overhead.

Regarding Claim 2, Park in view of CLERC and Tuomi teaches the method of claim 1, and further teaches: presenting the at least one extrapolated output image that contains more visual data than the corresponding input image by inclusion of the image tile extracted from the memory before detection of any user viewing interest (Park [0068] In accordance with the foregoing, in at least one embodiment, the described techniques provide an end-to-end system or process for novel view reconstructions and pose estimation for a target object, where the target object may be selected for engagement by a robotic system. In at least one embodiment, a reconstruction pipeline of the system or process may obtain a collection of reference images as input and generate a flexible representation which can be rendered from novel viewpoints; [0318] a tiling unit 1858 to accelerate tiling operations for tile-based rendering, in which rendering operations for a scene are subdivided in image space, for example to exploit local spatial coherence within a scene or to optimize use of internal caches).

Regarding Claim 3, Park in view of CLERC and Tuomi teaches the method of claim 1, and further teaches wherein: the at least one input image is generated offline by the static 2D CGI prior to the AI model performing the data fetching prediction process that extracts the image tile from the memory before detection of any user viewing interest (Park [0065] to reconstruct and render unseen objects, the network may be trained on random 3-D meshes from a dataset, such as a ShapeNet dataset, that may be textured using images from a dataset, such as a MS-COCO dataset, under different lighting conditions).
Regarding Claim 4, Park in view of CLERC and Tuomi teaches the method of claim 1, and further teaches wherein: the at least one input image is generated offline by the static 2D CGI prior to the AI model creating the at least one extrapolated output image that contains more visual data than the corresponding input image by inclusion of the image tile extracted from the memory before detection of any user viewing interest (Park [0098] In at least one embodiment, at block 606, the feature volumes generated at block 604 are fused or combined to generate a combined feature volume. In at least one embodiment, the combined feature volume is a canonical feature volume. In at least one embodiment, the combined feature volume is a 3-D feature volume. In at least one embodiment, the combined feature volume is generated by the modeling system 300. For example, in at least one embodiment, the block 606 generates the combined feature volume 336).

Regarding Claim 5, Park in view of CLERC and Tuomi teaches the method of claim 1, and further teaches wherein: the data fetching prediction process extracts the image tile from a local cache memory provided by a content delivery network before detection of any user viewing interest (Tuomi [0021] a network connection (e.g., a wireless local area network card for transmission and/or reception of wireless IEEE 802 signals); claim 7: The method of claim 6, wherein executing the coarse level tiling and the fine level tiling, via the same fixed function hardware, further comprises: executing the coarse level tiling to determine in which tile each of a plurality of primitives is located; and executing the fine level tiling by utilizing local cache memory to accumulate a batch of primitives and render primitives one fine tile at a time).

Regarding Claim 6, Park in view of CLERC and Tuomi teaches the method of claim 1, and further teaches wherein: the AI model creates the at least one extrapolated output image that contains more visual data than a corresponding input image by generating at least one future tile based on the at least one input image before detection of any user viewing interest (Park [0073] using the plurality of images 302-306 with associated object poses and object segmentation binary masks 308-312, the system 300 may generate a representation of the object which can be rendered with arbitrary camera parameters. The object may be represented as a latent 3-D voxel grid. In at least one embodiment, the representation can be directly manipulated using standard 3-D transformations and enable novel view rendering).

Regarding Claim 7, Park in view of CLERC and Tuomi teaches the method of claim 1, and further teaches wherein: the AI model creates the at least one extrapolated output image that contains more visual data than a corresponding input image by including at least one generated future tile in the at least one extrapolated output image before detection of any user viewing interest (Park [0068] process for novel view reconstructions and pose estimation for a target object, where the target object may be selected for engagement by a robotic system. In at least one embodiment, a reconstruction pipeline of the system or process may obtain a collection of reference images as input and generate a flexible representation which can be rendered from novel viewpoints. In at least one embodiment, multi-view consistency may be utilized to construct a latent representation, and the system or process may not require the use of category specific shape priors).

Regarding Claims 8-14, Park in view of CLERC and Tuomi teaches a system (Park Abstract: Apparatuses, systems, and techniques estimate a pose of an object based on images generated from a combined image volume; [0353] ROP 2126 includes compression logic to compress depth or color data that is written to memory). The metes and bounds of the claims substantially correspond to the limitations set forth in claims 1-7; thus they are rejected on similar grounds and rationale as their corresponding limitations.

Regarding Claims 15-20, Park in view of CLERC and Tuomi teaches a non-transitory storage medium comprising instructions that, when executed by one or more microprocessors of a computer, cause the computer to perform operations (Park Abstract: Apparatuses, systems, and techniques estimate a pose of an object based on images generated from a combined image volume; [0353] ROP 2126 includes compression logic to compress depth or color data that is written to memory; [0620] Clause 21. A machine-readable medium having stored thereon a set of instructions, which if performed by one or more processors, cause the one or more processors to). The metes and bounds of the claims substantially correspond to the limitations set forth in claims 1-6; thus they are rejected on similar grounds and rationale as their corresponding limitations.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Samantha (Yuehan) Wang, whose telephone number is (571) 270-5011. The examiner can normally be reached Monday-Friday, 8am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, King Poon, can be reached at (571) 272-7440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Samantha (YUEHAN) WANG/
Primary Examiner, Art Unit 2617

Prosecution Timeline

Jul 16, 2024
Application Filed
Jan 19, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597178 — VECTOR OBJECT PATH SEGMENT EDITING
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12597506 — ENDOSCOPIC EXAMINATION SUPPORT APPARATUS, ENDOSCOPIC EXAMINATION SUPPORT METHOD, AND RECORDING MEDIUM
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12586286 — DIFFERENTIABLE REAL-TIME RADIANCE FIELD RENDERING FOR LARGE SCALE VIEW SYNTHESIS
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12586261 — IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12567182 — USING AUGMENTED REALITY TO VISUALIZE OPTIMAL WATER SENSOR PLACEMENT
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview: 96% (+12.9%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 485 resolved cases by this examiner. Grant probability derived from career allow rate.
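The headline figures in this panel reduce to simple arithmetic over the examiner's career record quoted above; a minimal sketch of how they appear to be derived (treating the interview lift as additive percentage points is an assumption consistent with the displayed numbers, not a documented formula):

```python
# Career allow rate from the quoted record: granted / resolved.
granted, resolved = 404, 485
allow_rate = granted / resolved                      # ~0.833
print(f"Grant probability: {allow_rate:.0%}")        # Grant probability: 83%

# With-interview figure: baseline plus the reported +12.9% interview lift,
# capped at 100% (additive lift is an assumption, as noted above).
with_interview = min(allow_rate + 0.129, 1.0)
print(f"With interview: {with_interview:.0%}")       # With interview: 96%
```

This reproduces both the 83% grant probability and the 96% with-interview figure shown in the panel.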
